A few weeks ago, we walked through the idea of regression to the mean and how it applies to hockey. If you didn't read that one before, you might want to read it now -- it helps set the table for this one.
In that article, we found that on-ice shooting percentage (the team's shooting percentage with a given player on the ice) isn't a terribly repeatable skill. In fact, when we dug into it in more detail, we found that a three-year sample tells us just a little bit about what a forward's future on-ice shooting percentage will be and virtually nothing about a defenseman's future projections.
What we didn't address is the flip side of the coin: how repeatable is on-ice save percentage? Let's take a look at that now.
There were 127 forwards who were on the ice for at least 1000 5-on-5 shots against in both the three-year period from '07-10 and the three-year period from '10-13. Their on-ice save percentage over the first three years was essentially useless as a predictor of how they'd do in the next three years:
Over a three-year span, it doesn't matter whether a forward sees his team stop 94% or 90% of the shots with him on the ice at 5-on-5; either way, the best guess for how he'll do in the next three years is league average. If there are differences between players in their ability to influence the opponents' shooting percentage, those differences are much less than whatever other random factors come into play.
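If you want to kick the tires on this kind of repeatability test yourself, here's a rough sketch in Python. The data it uses are simulated stand-ins -- the talent spread and the noise level are assumptions made up for illustration -- so only the method carries over to the real shot records.

```python
# Minimal sketch of the repeatability test: take each player's on-ice save
# percentage in '07-10 and in '10-13, and see how well the first period
# predicts the second. The inputs here are simulated, not real shot data.
import numpy as np

rng = np.random.default_rng(0)
n_players = 127                      # forwards with 1000+ shots in both periods

# Simulate a tiny spread in "true talent" swamped by shot-sample noise.
talent = rng.normal(0.920, 0.001, n_players)        # assumed talent spread
noise = 0.006                                        # assumed per-period noise
sv_0710 = talent + rng.normal(0, noise, n_players)   # observed on-ice sv%, '07-10
sv_1013 = talent + rng.normal(0, noise, n_players)   # observed on-ice sv%, '10-13

r = np.corrcoef(sv_0710, sv_1013)[0, 1]
print(f"period-to-period correlation: r = {r:.2f}")
print(f"regress observations {100 * (1 - r):.0f}% of the way to the mean")
```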
Making judgments about forwards' defense based on how many goals are scored against their team with them on the ice is a mistake. It credits or blames the player for a save percentage that has zero predictive value, and it makes the resulting evaluations significantly worse than they would be if we used shot totals alone.
Let's turn to analyzing defensemen, whom we might expect to be more responsible for a team's save percentage. Here's the analogous plot for the 97 defensemen who were on the ice for at least 1000 shots in each three-year period:
Hm. That's still pretty unimpressive. If you just close your eyes and put 97 random dots on a piece of paper, there's a 15% chance of seeing a correlation that strong just by pure chance.
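That "97 random dots" figure is easy to sanity-check by simulation. The sketch below assumes the defensemen's correlation was roughly r = 0.15 -- a figure implied by the 85% regression amount in the next paragraph, not read off the chart -- and counts how often pure noise does at least that well.

```python
# Scatter 97 points with no real relationship at all, many times over, and
# see how often the correlation comes out at least as strong as the (assumed)
# r = 0.15 from the defensemen's plot.
import numpy as np

rng = np.random.default_rng(1)
observed_r = 0.15
trials = 20_000

count = 0
for _ in range(trials):
    x = rng.normal(size=97)
    y = rng.normal(size=97)          # independent of x by construction
    if abs(np.corrcoef(x, y)[0, 1]) >= observed_r:
        count += 1

print(f"chance of |r| >= {observed_r} from pure noise: {count / trials:.0%}")
```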
Even if we assume that this correlation is real, our best estimate of future on-ice save percentage would be regressed 85% of the way back to the mean -- which means the projections for those guys way out at 93.7% over a three-year span would be just two tenths of a point above the mean. And the portion of that attributable to the player's individual talent would be smaller still; playing in certain rinks or with certain goalies or in certain systems could easily have an impact that large.
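Spelled out, the projection math looks like this. The league-average figure plugged in here is an assumption chosen for illustration; the article only gives the regression amount.

```python
# Regression-to-the-mean projection:
#   projection = league_mean + r * (observed - league_mean)
# The league-average on-ice sv% (~92.3) is an assumed number, not one quoted above.
r = 0.15                 # keep only ~15% of the deviation (85% regression)
league_mean = 92.3       # assumed 5-on-5 league-average on-ice sv%
observed = 93.7          # the high end of the defensemen's three-year range

projection = league_mean + r * (observed - league_mean)
print(f"projected on-ice sv%: {projection:.2f}")                 # ~92.5
print(f"above the mean by:    {projection - league_mean:.2f}")   # ~0.2
```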
So while there may be a sliver of repeatable talent for defensemen preventing the opponents from getting high-percentage shots, after three years of data we aren't even close to being able to reliably tell who's good at it.
And as with forwards, making judgments about defensemen based on the number of goals scored against them is a mistake that ends up crediting or blaming the player for factors largely out of their control.
This issue isn't confined to stats-based analysis by any means. Our memories are tuned to remember the occasional high-impact play much more strongly than all of the low-impact stuff that happens in between. We'll remember a defenseman's turnover or missed coverage that leads to a goal but probably forget it if the shot hits the goalie in the chest, so the team's save percentage with him on the ice affects how many of his mistakes we remember and how we judge him.
So whether we rely primarily on stats or memory, we need to be aware of on-ice save percentage and its tendency to regress. Failing to do that leads to proclaiming Andrej Meszaros (93.3 sv%) to be the best defenseman on a team with Pronger, Timonen, Carle, and Coburn; it leads to trading for Kent Huskins (95.3%) and Douglas Murray (93.5%). It leads to Braydon Coburn (88.8%) being widely discussed as a possible salary dump; it leads to dumping Zbynek Michalek (89.5%) and Justin Falk (88.8%); it leads to buying out Tom Gilbert (87.7%).
Variance is inevitable. The key to making good evaluations is understanding where it comes from, what it means, and how to account for it.