In my book, I had an entire section on individualization and how to classify runners. Most coaches rely on simply splitting runners by their event. Instead, I tried to expound a model using a Fast-Twitch vs. Slow-Twitch fiber continuum for each event, where we expand the classification: instead of classifying someone as a 5k runner, for example, we consider him a fast, slow, or specialist 5k runner.
To figure out how to classify someone, I suggested, and have always used, a combination of factors including PR comparison, lactate levels, stride mechanics, and so forth, which has given good results. But I wanted to see if there was a more quantifiable way, using PR comparisons in particular.
Usually when we compare our PRs, we are looking at how strong they are relative to each other, so we can see whether our 800, 1,500, or 5k is comparatively stronger than the others. We can use calculators that predict our races and see which ones we are closest to and furthest from to get an idea, or we can use tables like the IAAF’s to compare the relative “strength” of each PR.
These are all good methods, but again, there’s a bit of guesswork about which event is your strongest and how much stronger or weaker it is. So what I wanted was something quantifiable and objective.
Speed preservation: the percentage of speed retained when moving up in distance. So if we run a 1,500 at 60sec per-400m pace and a 3k at 65sec per-400m pace, we preserve 92.3% (60/65) of that speed for the 3k.
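As a minimal sketch of that arithmetic (the function name and the per-400m framing are mine, not from the original):

```python
def speed_preservation(pace_shorter: float, pace_longer: float) -> float:
    """Percentage of speed retained when stepping up in distance.

    Paces can be in any consistent unit (e.g. seconds per 400m), since
    preservation is just the ratio of speeds, i.e. the inverse ratio of paces.
    """
    return pace_shorter / pace_longer * 100

# 1,500 at 60s/400m pace, 3k at 65s/400m pace
print(round(speed_preservation(60, 65), 2))  # 92.31, i.e. ~92.3%
```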
If you are a running training history buff, you might know Frank Horwill’s 4-second rule: in an ideal world, every time you double the distance you should be able to run the next race 4sec slower per 400m. The problem is that it’s a rule of thumb. I wondered whether it actually held up, and whether we’d see variation in how much speed is preserved by each runner at each distance. If there was variation, then we could flip the script and use it to predict what type of runner each person is.
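Horwill’s rule is simple enough to write down directly; a minimal sketch, with an illustrative starting pace:

```python
def horwill_next_pace(pace_per_400m: float) -> float:
    """Frank Horwill's rule of thumb: at each step up the classic event
    ladder (roughly a doubling of distance), pace slows by ~4s per 400m."""
    return pace_per_400m + 4.0

# Starting from an 800m at 52s/400m pace, walk up the ladder:
pace = 52.0
for event in ("1500", "3000", "5000"):
    pace = horwill_next_pace(pace)
    print(event, pace)  # 1500 56.0, then 3000 60.0, then 5000 64.0
```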
I’m not a math or statistics guy, but what I did was take a database of 1,500 professional, collegiate, and British runners, using data collected through the years on other projects, to see what the average speed preservation numbers were. (A note on why these three groups: between Tilastopaja, TFRRS, and Power of 10, they have the most complete records of performances.) This gave me an average for the entire group at each speed preservation step.
For example, the average 3000/5000 speed preservation was 96.61% with a standard deviation of 1.08%.
So, for example, if we have an 8:00 3k guy and he held onto the average speed, he would be a 13:47 5k runner. But we all know that not every 8-flat guy can run 13:47; instead, there is variation around that average amount of speed preservation. Which made me ask whether a 5k/10k runner would have better speed preservation when moving up in distance than a 1500/5k runner. We assume this is true, and experience tells us it is, but can we map it out and quantify it?
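To make that projection concrete, here is a sketch of the calculation; plugging in the rounded 96.61% lands within a second or so of the 13:47 quoted above, with the exact figure depending on rounding:

```python
def project_time(time_s: float, dist_m: float, target_dist_m: float,
                 preservation: float) -> float:
    """Project a time for a longer race, assuming the runner retains
    `preservation` (e.g. 0.9661) of the shorter race's average speed."""
    speed = dist_m / time_s              # m/s in the shorter race
    target_speed = speed * preservation  # speed retained for the longer race
    return target_dist_m / target_speed  # projected time, in seconds

t5k = project_time(480.0, 3000, 5000, 0.9661)  # 8:00 3k + average preservation
print(f"{int(t5k // 60)}:{t5k % 60:04.1f}")    # ~13:48
```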
So what I did was separate out the runners based on their primary event and map out the differences between them, to see whether a pattern of speed preservation emerged for each event. And it turns out that one does.
So with a few calculations and a bit of messing around, what we’re left with is the average speed preservation for each group:
(Where 1 = 15/5k%, 2 = 800/1500%, 3 = 1500/3000%, and 4 = 3000/5000%)
As you can see, there are distinct variations in speed preservation for each group, from the 800 through the ST 5k (the 10k is a projection because of insufficient 10k/marathon specialist data).
The interesting thing is that as we go up in endurance type, we see higher and higher speed preservation numbers, regardless of distance. What changes within the groups is the degree of drop-off from one event to the next. For example, the 800 and ST 800 groups both have pretty big drop-offs when going from the 1,500 to the 3k. We can do some pretty cool stuff with the variance between the groups, but that’s for another time.
The cool thing is we can flip things on their head and use this standardized data for each “group” to classify runners. We can make it quantifiable.
Let me give a quick example.
Let’s say we have a runner whose PRs are the following:
Then we know their speed preservation is:
We can take these percentages and see where they fall along the line from pure Fast-Twitch 800m to pure Slow-Twitch 10k. Essentially, what we’re trying to do is find the best fit for this runner.
In this example our runner gets a classification score of: 4.10
What the heck does that mean? It’s based on a classification scale from 1 (pure FT 800) to 7 (pure ST 10k).
That puts this runner right in the ST 1500 / FT 5k zone.
What that means is that, right now, that athlete’s PRs best fit the line of a FT 5k runner. So we know how to train him at the moment, and that if we are training for a 10k, for example, we might need to shift the work to try and slightly move him more toward the speed preservation of a 5k specialist.
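Here is one way such a best-fit score could be computed. To be clear: the group profiles below are hypothetical placeholder numbers made up for illustration, not the article’s actual averages, and nearest-profile interpolation is just one plausible way to land on a 1-to-7 score like the 4.10 above:

```python
import math

# HYPOTHETICAL average speed preservation profiles (%), for illustration only,
# covering the steps 800/1500, 1500/3000, and 3000/5000 for each class from
# 1 (pure FT 800) to 7 (pure ST 10k). Real values would come from the database.
PROFILES = {
    1: [85.0, 92.0, 94.5],
    2: [86.5, 93.0, 95.2],
    3: [88.0, 94.0, 95.8],
    4: [89.5, 95.0, 96.4],
    5: [91.0, 95.8, 97.0],
    6: [92.0, 96.5, 97.5],
    7: [93.0, 97.0, 98.0],
}

def classification_score(runner_profile):
    """Best-fit position on the 1 (FT 800) .. 7 (ST 10k) continuum:
    find the nearest group profile, then interpolate toward the
    next-best neighboring group by relative distance."""
    dists = {g: math.dist(runner_profile, p) for g, p in PROFILES.items()}
    best = min(dists, key=dists.get)
    neighbor = min((g for g in (best - 1, best + 1) if g in dists),
                   key=dists.get)
    d_best, d_nbr = dists[best], dists[neighbor]
    if d_best + d_nbr == 0:
        return float(best)
    return best + (neighbor - best) * d_best / (d_best + d_nbr)

# A runner whose preservation sits between the group-4 and group-5 profiles
# gets a score in the low-to-mid 4s, analogous to the 4.10 in the example.
print(round(classification_score([90.0, 95.3, 96.6]), 2))
```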
The next step is to add to the database to clean up the data, and then to compare levels of performance to see whether higher-level runners actually have better speed preservation.