May 11, 2014

I often feel that I am the sole defender of goal-based hockey analytics in a world dominated by shot-attempt (corsi) based analytics. In recent weeks I have often heard the pro-corsi crowd cite example after example of where corsi-based analytics “got it right” or “predicted something fairly well”. While it is always good to be able to cite examples where you got things right, a fair and honest evaluation looks at the complete picture, not just the good outcomes. Otherwise it is analytics by anecdote, which is an oxymoron if there ever was one.

For example, Kent Wilson recently wrote about the “Dawning of the Age of Fancy Stats”, in which he cited several instances of where hockey analytics got it right or did well in predicting outcomes.

The big test case which seems to have moved the needle in favour of the nerds is, of course, the Toronto Maple Leafs. Toronto came into the season with inflated expectations after an outburst of percentages during the lock-out shortened year saw them break into the post-season. Their awful underlying numbers caused the stats oriented amongst us to be far more circumspect about their chances, of course.

Toronto is the recent example that the hockey analytics crowd likes to bring up in support of their case but it is just one example. We don’t hear much about how many predicted the Ottawa Senators to be in the playoffs and some even had them challenging for the top spot in the eastern conference. We don’t hear much about how the New Jersey Devils missed the playoffs yet again despite having the 5th best 5v5close Fenwick% in the league, the year after missing the playoffs with the 3rd best 5v5close Fenwick% in the league. If we are truly interested in hockey analytics we need a complete and unbiased assessment of all outcomes, not just the ones that support our underlying belief.

In the same article Kent Wilson quoted a tweet from Dimitri Filipovic about the success of Corsi in predicting outcomes of playoff series.

Relevant #fact: since ’08 playoffs, teams that were 5+ % better than their opponent in 5v5 fenwick close during the regular season are 25-7.

While interesting, it really doesn’t tell us a whole lot more than “when one team is significantly better at outshooting its opponents, it more often than not wins”. Well, that really isn’t saying a whole lot. It is more or less saying that when a dominant team plays a mediocre team, the dominant team usually wins. Not really that interesting when you think of it that way.

Here is another fact that puts that into perspective. Since the 2008 playoffs, the team with the better 5v5close Fenwick% has a 53-35-2 record in playoff series (there were 2 cases where the teams had identical Fenwick% to 1 decimal place). That actually makes it sound like 5v5close Fenwick% is predictive overall, not just in cases where one team is significantly better than another. Of course, if we look at goals, we find that the team with the better 5v5close Goal% has a 54-35-1 record. In other words, 5v5close possession stats did no better at predicting playoff outcomes than 5v5close goal stats. It is easy to throw out stats that support a point of view, but it is far more important to look at the complete picture. That is what analytics is about.
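Tallies like these are straightforward to reproduce if you have the series-level data. A minimal sketch of the counting logic (the series data below is made up for illustration; it is not the actual 2008–2013 results):

```python
# Hypothetical illustration of the tally described above. Each playoff
# series is a (winner_stat, loser_stat) pair: the regular-season 5v5close
# Fenwick% (or Goal%) of the series winner and the series loser.
# These numbers are invented, not the actual 2008-2013 data.

def tally(series):
    """Return (wins, losses, ties) for the team with the better stat."""
    wins = losses = ties = 0
    for winner_stat, loser_stat in series:
        if round(winner_stat, 1) == round(loser_stat, 1):
            ties += 1    # identical to one decimal place
        elif winner_stat > loser_stat:
            wins += 1    # the statistically better team won the series
        else:
            losses += 1
    return wins, losses, ties

example = [(53.2, 48.9), (50.1, 50.1), (47.5, 52.0)]
print(tally(example))  # -> (1, 1, 1)
```

Running the same tally once with Fenwick% pairs and once with Goal% pairs gives the two records being compared above.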

A similar statistic was promoted by Michael Parkatti in a recent talk on hockey analytics at the University of Alberta. In that talk Parkatti stated that of the last 15 Stanley Cup winners, all but 3 had a “ShotShare” (all situations) of at least 53%. The exceptions were Pittsburgh in 2009, Boston in 2011 and Carolina in 2006. I will note that all three of these teams appear to be below 51%, and the 2009 Penguins were below 50%. That seems sort of impressive, but I did some digging myself and found that every Stanley Cup winner since 1980 had a “GoalShare” (all situations) greater than 52%. Every single one. No exceptions. I didn’t look at any cup winners pre-1980, but the trend may very well go back a lot further. As impressive as 12 of 15 is, 34 of 34 is far more impressive.

Here is the thing. We know that goal percentage correlates with winning far better than corsi percentage. This is an indisputable fact. It is actually quite a bit better. The sole reason we use corsi is that goals are infrequent events and thus not necessarily indicative of true talent due to small sample size issues. This is a fair argument and one that I accept. In situations where you have small sample sizes, definitely use corsi as your predictive metric (but understand its limitations). The question that needs to be answered is what constitutes a small sample size and, more importantly, what sample size we need before goals become as good a predictor of future events as corsi, or better. I have pegged this crossing point at about one season’s worth of data, maybe a bit more if looking at individual players who may not be getting 20 minutes of ice time a game (my guess is that somewhere above 750 minutes of ice time is where I’d start to get more comfortable using goal data over corsi data). I am certain not everyone agrees, but I haven’t seen a lot of analyses attempting to find this “crossing point”.
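One way to hunt for that crossing point is a split-half test: use each team’s first-half Corsi% and first-half Goal% to predict its second-half Goal%, and see which correlates better at a given sample size. The sketch below uses purely synthetic numbers (the talent spread and noise levels are my assumptions, not estimates from real NHL data), so it only illustrates the mechanics of the test:

```python
# Toy split-half test of the "crossing point" idea. All numbers are
# synthetic: each team gets a true talent level; Corsi% observes it with
# little noise, Goal% with much more noise (goals are rare events). A real
# test would use actual NHL game logs at varying sample sizes.
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
talent = [random.gauss(50, 2) for _ in range(30)]        # 30 teams
corsi_h1 = [t + random.gauss(0, 1) for t in talent]      # first-half Corsi%
goal_h1 = [t + random.gauss(0, 3) for t in talent]       # first-half Goal%
goal_h2 = [t + random.gauss(0, 3) for t in talent]       # second-half Goal%

print("Corsi h1 vs Goal h2:", round(pearson(corsi_h1, goal_h2), 2))
print("Goal  h1 vs Goal h2:", round(pearson(goal_h1, goal_h2), 2))
```

Repeating this at growing sample sizes, and watching for where the Goal% correlation catches up to the Corsi% correlation, is the kind of analysis the “crossing point” question calls for.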

Let’s take another look at how well 5v5close Fenwick% and Goal% predicted playoff outcomes, but this time broken down by season rather than overall.

Season   FF% record   GF% record
2008     7-7-1        6-9
2009     9-6          11-4
2010     9-6          11-4
2011     10-5         11-4
2012     7-7-1        7-7-1
2013     11-4         8-7
Total    53-35-2      54-35-1

In full seasons not affected by lockouts we find that GF% was generally the better predictor (only in 2008 did GF% underperform FF%), but in last year’s lockout-shortened season FF% significantly outperformed GF%. Was this a coincidence, or is it evidence that 48 games is not a large enough sample size to rely on GF% more than FF%, but 82 games probably is?

I have seen numerous other examples in recent weeks where “analytics” supporters have used what amounts to not much more than anecdotal evidence to support their claims. This is not analytics. Analytics is a fair, unbiased and complete fact-based assessment of reality. Showing that a technique is a good predictor some of the time is not enough. You need to show that it is a better predictor overall, or at least define when it is and when it isn’t.

I recently wrote an article on whether last year’s statistics predicted this year’s playoff teams and found that GF% seemed to do at least as well as CF%, despite last season being a lockout-shortened year.

With all that said, you will frequently find me using “possession” statistics, so I certainly don’t think they are useless. It is just my opinion that puck possession is only one aspect of the game, and that puck possession analytics has largely been oversold as a predictor. Conversely, goal-based analytics has largely been given a bad rap, which I find a little unfortunate.

(Another article worth reading is Matt Rudnitsky’s MONEYPUCK: Why Most People Need To Shut Up About ‘Advanced Stats’ In The NHL.)


  6 Responses to “Being honest about “possession” stats as a predictive tool”


    Not saying you are wrong, but you need better data to back up your opinions. The comparison of how well FF% and GF% translate to same-season playoff success has issues. Teams with high GF% face weaker teams in the playoffs. With two hypothetical teams that are identical in true talent, but variance leads one to having a 55% GF% and the other a 50% GF%, the former will tend to face much weaker competition. You need to control for these variables. The study you mentioned about using last year’s data to predict this year’s playoff teams was very poorly done and didn’t show anything. For one thing, it’s only using a single season. And that single season had a massive shake-up of divisional alignment. It would really not be hard to do a good study on this. But I do agree that too much emphasis is put on Corsi/Fenwick. It may be the best we have, but it is extremely flawed.


      Teams with high GF% face weaker teams in the playoffs. With two hypothetical teams that are identical in true talent, but variance leads one to having a 55% GF% and the other a 50% GF%, the former will tend to face much weaker competition.

      Do they? Based on what? How can we be certain without knowing how to rank teams? The basis for using corsi over win%, which is driven largely by variance-driven GF%, is that corsi isn’t driven by variance and thus is a better estimate of true talent. If this is the case, then the standings, which drive matchups, can’t be used as a metric for determining which team is “better” than another. If corsi is better at determining which team is better, then I think your point is moot. Remember, under your scenario the 55% team may very well play the 50% team, which as you say are equals. They could also play a truly better team that had “bad luck” in the variance equation.

      You may be right though. I just don’t know how to test it without knowing the best method for identifying true talent, which is what I am trying to test. Chicken-and-egg problem.

      The study you mentioned about using last year’s data to predict this year’s playoff teams was very poorly done and didn’t show anything. For one thing, it’s only using a single season. And that single season had a massive shake up of divisional alignment.

      Oh, for sure. It wasn’t meant as a thorough study, just an observation. There are better ways to conduct this study too, but it is further evidence of the limitations of Corsi, which are not very often discussed, either casually or in a formal, thorough study. It is important to know that even when Corsi is the best option, it still isn’t that good.


    I’m somewhat with you on this (probably not surprising, since I’m the only hockey-analytics writer not raked over the coals in the Moneypuck piece from Sportsgrid). I definitely agree that analysts need to acknowledge their misses as well as their hits to further their own understanding. It helps, of course, when you make lots of specific predictions as I do. Had I done the event-rate research I did in the fall before the season started, I don’t think I would’ve picked the Senators and Devils as playoff teams: teams that play a particularly “firewagon” game or a very low-event game both rely on spectacular goaltending, as the first type of squad gives up tons of shots and the second type doesn’t score very often because they don’t shoot a lot.

    For my playoff series predictions, I developed a metric with help from Derek from Fear the Fin that might interest you. Basically, as a way of characterizing teams’ overall quality, I came up with a modified GF% that uses 5v5 SF and SA, and Sh% and Sv% regressed to team-specific three-year averages. It’s sort of a middle ground between a purely Corsi-based picture of effective play and one that relies too heavily on noisy percentages.
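    In rough terms the calculation looks like the sketch below. The 50/50 regression weight and the sample numbers are purely for illustration; the actual metric’s weights would come from the regression evidence, and they aren’t specified here.

```python
# Rough sketch of a "regressed" GF%: keep the observed 5v5 shot totals,
# but pull the observed Sh% and Sv% part-way back toward the team's own
# three-year averages. The 50/50 weight below is an assumption for
# illustration, not the metric's actual weighting.

def regress(observed, prior, w=0.5):
    """Blend an observed percentage with a prior; weight w on the observed."""
    return w * observed + (1 - w) * prior

def modified_gf_pct(sf, sa, sh_pct, sv_pct, sh_prior, sv_prior, w=0.5):
    """Expected GF% from shots for/against and regressed percentages."""
    exp_gf = sf * regress(sh_pct, sh_prior, w)        # expected goals for
    exp_ga = sa * (1 - regress(sv_pct, sv_prior, w))  # expected goals against
    return 100 * exp_gf / (exp_gf + exp_ga)

# Illustrative numbers only (not any particular team):
print(round(modified_gf_pct(sf=1800, sa=1700, sh_pct=0.085, sv_pct=0.925,
                            sh_prior=0.080, sv_prior=0.920), 1))  # -> 53.0
```

    The effect is exactly the middle ground described: shot volume stays as observed, while hot or cold percentages get pulled back toward what the team has historically sustained.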


      Hi Nick. I haven’t read all of your stuff, but I have enjoyed and respect what I have read from you. It seems well balanced, focused on understanding what is happening, and not agenda-driven.

      I think GF% is generally better than CF% or FF% once you get to a full year’s worth of data, but there are certainly things you could do to adjust it (such as regressing Sh% and Sv%) if there is historical evidence to suggest that those are not sustainable. I don’t advocate blindly regressing to the mean, but if there is evidence to support some sort of adjustment to the percentages, I think it could be beneficial.


        Much appreciated, David. I enjoy your willingness to revisit and critique analytical assumptions as well. And obviously, few of us could do what we do without your stats site.


      I 100% agree that the analytics crowd have been lousy at acknowledging their misses as they trumpet their hits. This probably stems from being the new kid on the block and trying too hard to illustrate the power of their numbers. I only take offense when they disparage the intelligence of anyone who does not like fancy stats.

      Speaking to the article in general, I think it raises some important questions and I enjoyed reading it.
