Wednesday, November 09, 2005

Performance & program evaluation time

This is a big and often confusing area in sport.

Performance is definitely the central issue. So the question becomes: how often do you evaluate (in this case, blood lactate) to ensure you are on track for peak performance?

In sport science there is a saying that you should never test for
the sake of testing. You must test for the sake of improving your
performance criteria. All too often coaches end up testing for the sake
of testing and never using the test data to improve performance.

As for which tests you choose, these need to reflect directly the parameters you are training in order to achieve a peak performance (race):
    Crew boats will differ from solo boats,
    Sprint will differ from distance,
    V1 will differ from OC1,
    Turn regatta races will differ from non-turn regattas,
    Rough water will differ from flatwater,
    and the list goes on...
There is no magic time interval for evaluating a training program. It depends on a number of factors such as:
    How dedicated the training is (2 x week or 10x week),
    What the goal of the training is (aerobic, anaerobic, skills, etc.),
    What resources are needed for evaluation (facilities, equipment, etc.),
    When those resources are available,
    And most importantly, how long you are willing to gamble that your program is working without feedback.
I prefer anywhere from 2-6 weeks, depending on what I am using in my evaluation protocols. Sometimes evaluation will be simply a time trial that reflects race duration (i.e. 500 m sprint or 5-10 km distance) scheduled after the athlete's / crew's recovery period.

Technique evaluations can occur at any time; you can back them up with video analysis or a "report card" if you want.

As for using the same evaluation tools, I would say you are safer doing that than trying out different tools each time. One criterion that is important in athlete evaluation is that the testing methods and conditions are reproducible.

In an ideal world, at least. In real life, we can settle for recording sufficient information to interpret the data:
    e.g. first 5 km time trial in OC1: 30:00 minutes, no wind, slack tide, overcast. Second 5 km time trial a month later in OC1: 29:55 minutes, 10 kn headwind, peak tidal flow, sunny.
In this case, if we look at the time alone there is negligible improvement. However, when we take the wind and current into account, we are looking at a different result. Unfortunately, we can't measure the effects of those conditions on the performance.
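If you keep your training log on a computer, a simple record like the sketch below (Python, with dates and field names of my own choosing for illustration) lets the conditions travel with the times, so the two trials above can be read side by side later:

    from dataclasses import dataclass

    @dataclass
    class TimeTrial:
        """One logged time trial, with the conditions needed to interpret it later."""
        date: str
        boat: str
        distance_km: float
        time_s: int        # elapsed time in seconds
        wind: str          # e.g. "none", "10 kn headwind"
        tide: str          # e.g. "slack", "peak flow"
        weather: str

        def pace_s_per_km(self) -> float:
            """Average pace in seconds per kilometre."""
            return self.time_s / self.distance_km

    # The two 5 km OC1 trials from the example above
    log = [
        TimeTrial("2005-10-09", "OC1", 5.0, 30 * 60,      "none",           "slack",     "overcast"),
        TimeTrial("2005-11-09", "OC1", 5.0, 29 * 60 + 55, "10 kn headwind", "peak flow", "sunny"),
    ]

    for trial in log:
        print(f"{trial.date}: {trial.pace_s_per_km():.0f} s/km "
              f"({trial.wind}, {trial.tide}, {trial.weather})")

Nothing fancy, but a month or a year later you are comparing documented trials rather than guessing what the water was doing that day.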

This opens up a whole new aspect of evaluation: fitness testing vs performance testing. Fitness testing uses non-sport-specific tools (e.g. chin-ups, swimming and running) to estimate your ability to do the sport in question. Performance testing uses the sport in question in its natural environment.

Paddle ergometers are an excellent example of a fitness tool that is almost a performance tool. The big thing missing from most paddle ergs is the interaction between paddler, hull, paddle and water. As such, the erg is a fitness evaluation tool. A very specific fitness evaluation tool, for certain, but it will rarely illustrate the finer aspects of performance.

How you end up fine-tuning your training is a very in-depth area, and very specific to the evaluation you did. Was it technique? Was it aerobic? Was it anaerobic? Was it nutrition? Was it tactical?...

And many of these answers are trade secrets. Once you've been told, I'd have to kill you.


Alan Carlsson
Engineered Athlete Services
