Kings, Flames, Avalanche and Possession Analytics

The other day I posted the following Twitter comment after the Flames defeated the Kings to clinch a playoff position while simultaneously eliminating the reigning Stanley Cup champion Kings from the playoffs.

I posted this comment for two reasons. First, I think that if you are being honest about evaluating possession analytics you have to weigh the failures on equal ground with the successes. I am certain that if the Kings had defeated the Flames and ultimately made the playoffs over them, some people would have used it as evidence that possession analytics is good at predicting future results. That would be a fair thing to do, but you have to consider the failures too, and possession analytics failed twice here: first with the Flames making the playoffs and second with the Kings missing. So, I made this comment because analytically it is the correct thing to do and I felt it needed to be said.

The other reason I made this comment was to see how people would react: with the fairness described above, or defensively, dismissing the Flames/Kings outcome as largely luck. For the most part the reaction was more subdued than I had expected, but some did jump to the defense of possession analytics, including the following tweet from @67sound.

If you are relying on the LOS ANGELES KINGS to minimize the importance of possession metrics I don’t even know where to begin.

This is an overreaction because I didn’t actually try to minimize the importance of possession; I was just pointing out where it failed. If you follow me, you know I use possession metrics all the time. I just think there is too much attention paid to the times possession metrics succeed in predicting outcomes and too little paid to the times they fail and other metrics succeed. I have talked about this before on a few occasions when people wanted to point out how good possession metrics are at predicting outcomes without actually comparing those success rates against other predictive methodologies. In many instances possession statistics do a great job of predicting outcomes, but often goal-based metrics actually do slightly better.

The follow-up discussion to my tweet soon turned to rationalizing why possession stats failed to predict the Los Angeles Kings missing the playoffs.

Scott Cullen of TSN.ca wrote the following in his Statistically Speaking column about the Kings.

For starters, the Kings were 2-8 in shootouts and 1-7 in overtime games. Given the randomness involved in shootout results, that’s basically coming out on the wrong end of coin flips. 3-15 in overtime and shootout games, after going 12-8 the year before, is enough in tightly-contested standings, to come up short. Records in one-goal games tend to be unsustainable, but there’s enough of them in hockey that they make a huge difference in the standings.

Most of these are fair comments. The shootout record is almost completely random and not actually representative of how good the Kings are at playing hockey (though I disagree that overtime records aren’t useful in evaluating them). With a bit better fortune the Kings likely would have made the playoffs, and probably should have. The thing is, though, we all need to be careful not to use “luck” as a tool for confirmation bias, because luck can be used to explain everything. The Flames made the playoffs? Write it off as good luck and move on without blinking an eye; they will regress next year, just watch. The Kings missed the playoffs? Write it off as bad luck and move on without blinking an eye; they will be better next year, just watch. A thorough review needs to be conducted, not a quick dismissal of anything that runs counter to our beliefs/predictions as luck.

The Kings missed the playoffs this year with 95 points. In the previous four seasons they had 100, 101 (prorated over 82 games), 95, and 98 points. So, on average, the LA Kings have been a ~98 point team over the past five seasons. If they had gone 5-5 instead of 2-8 in shootouts this year, that is exactly where they would have finished. For the most part this Kings team is what it has mostly been and what we probably should have expected: a good, but not elite, regular season team. Over these past five seasons they have finished 18th, 10th, 7th, 13th and 12th overall. That actually compares somewhat poorly to the cross-town Anaheim Ducks, who have finished 3rd, 2nd, 3rd, 25th, and 9th over the same span. The Kings’ score-adjusted Fenwick% over that time is 55.3% compared to the Ducks’ 50.3%, and yet in four of the five seasons the Ducks finished ahead of the Kings in the regular season. The reason is that the Ducks have a 9.19% 5v5close shooting percentage over the past four seasons compared to the Kings’ 6.69%. That difference is not luck; it is a persistent, repeatable skill that possession analytics doesn’t capture. Barring major off-season roster moves, no one should be predicting the Kings to finish the regular season ahead of the Ducks next season. I suspect some will, though, just as was done for this season when possession analytics were used to predict regular season point totals (the Kings were predicted to get 107 points, the Ducks 91).
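To make the arithmetic above explicit, here is a minimal Python sketch using only the point totals quoted in this post (the one point per extra shootout win reflects that a shootout loss is worth one point and a shootout win two):

    # Kings point totals over the past five seasons, most recent season last
    # (one season's total is prorated to 82 games, as noted above)
    kings_points = [100, 101, 95, 98, 95]
    print(sum(kings_points) / len(kings_points))  # 97.8, i.e. a ~98 point team

    # Each extra shootout win converts a 1-point loss into a 2-point win,
    # so going 5-5 instead of 2-8 adds exactly 3 points to the 95 they earned
    print(95 + (5 - 2))  # 98 points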

So the Kings have been a pretty good but not dominant regular season team. They have won the Stanley Cup twice during this period and have been a dominant possession team, which has given us the perception that they are an elite team. Is it possible that we have generally overrated them because of their possession numbers and post-season success? Maybe. Are they really a great team, or just a good one that got hot when it mattered a couple of times? It is a question worth asking, I think, but if you just chalk up missing the playoffs this season to luck it is probably one you won’t be asking.

While we are on the subject of teams that were predicted to regress this season, one such team is the Colorado Avalanche. A lot of people are tossing them out as an example of where possession statistics successfully predicted a team’s failure. A major reason for predicting this regression was expected regression in their shooting and save percentages, as Travis Yost of TSN.ca wrote prior to the season.

Using that regression for forecasting purposes, expect Colorado to shoot around 7.89 per cent for next year at evens and stop around 92.47 per cent of the shots.

Those are 5v5 shooting and save percentages Yost is talking about. In actual fact, Colorado’s shooting hasn’t regressed this year; it is more or less identical to last season’s 5v5 shooting percentage (8.75% this season vs 8.80% last season). Save percentage has regressed to almost exactly what Yost predicted (92.52%), so he was right there (though the role luck played in that is unknown), but a major (and maybe the primary) reason for the Avalanche’s failures this season is that they are playing a substantially worse possession game than last season. Colorado’s 5v5close CF% dropped from 47.4% last season to 42.9% this season, a massive drop and likely the main driver of their struggles. That drop can largely be attributed to letting two of their best CF% players, Paul Stastny and PA Parenteau, leave in the off-season and replacing them with poorer possession players in Iginla and especially Briere. Coaching may be a factor too. So some of the Avalanche’s failures this season can be attributed to a regression in save percentage, but a significant part is due to poor off-season roster decisions.
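For readers unfamiliar with how forecasts like Yost’s are typically built, here is a generic sketch of regressing an observed percentage toward the league mean. To be clear, the weighting constant and the numbers below are illustrative assumptions of mine, not Yost’s actual method or figures:

    def regress_to_mean(observed_pct, league_avg_pct, n_shots, k=1000):
        # Shrink an observed percentage toward the league average.
        # k is the number of "phantom" league-average shots mixed in;
        # a larger k means heavier regression. 1000 is purely illustrative.
        weight = n_shots / (n_shots + k)
        return weight * observed_pct + (1 - weight) * league_avg_pct

    # Illustrative only: a team that shot 8.8% on 1,500 5v5 shots in a league
    # averaging 7.5% would be forecast to shoot roughly 8.3% going forward.
    print(regress_to_mean(8.8, 7.5, 1500))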

Once again, we need to be careful about saying “I told you they would regress” and leaving it at that when the majority of the regression is due to factors that weren’t predicted (to be fair, Yost did mention that the Avalanche’s possession might drop a bit due to roster changes, but it wasn’t the crux of his argument). It is quite possible, if not highly likely, that the Avalanche are in fact a well above average shooting percentage team and that we shouldn’t expect their shooting percentage to regress next season, just as we shouldn’t expect the Ducks’ to.

I need to reiterate here that it isn’t that I don’t believe possession is an important aspect of the game. It is. It is why the Kings are good despite terrible shooting talent. It is why the Leafs are bad despite good shooting talent. What I really want, and the reason I always point out where possession failed, is to ensure that everyone evaluates possession fairly in the context of the complete game. I often hear things like “no one ever said possession was everything,” and yet I frequently hear claims made without any mention of factors other than possession metrics. The Kings are a perfect example. Everyone assumed they were a great team that, barring massive bad luck, would make the playoffs, and when they didn’t, out came all the evidence of that bad luck. The truth is it was perfectly reasonable to predict that with even a little bit of bad luck the Kings could miss the playoffs, though I don’t recall anyone really suggesting that (correct me if I am wrong). It is also fair to suggest that if Colorado had made smarter off-season roster moves they could have been a playoff team again and not regressed nearly to the extent they did, but the discussion about the Avalanche revolved around bad possession, a high PDO, how lucky they had been, and how much they would regress. I want to see a better balance in hockey analytics, as I think too much of it is dominated by possession analytics. That is why I write tweets like the one about the Kings and Flames. There needs to be more balance.

So, my final word of advice is this: if you don’t believe that possession is everything (and apparently none of you do), you ought to be doing more than just possession analytics. If you can honestly say you are doing that, I congratulate you. If you can’t, well, what you do next is up to you.

 

This article has 7 Comments

  1. Count me amongst those who read your tweet the other day and were inspired to pontificate upon it.

    My problem with most of the outspoken possession arguments is that, when doing their evaluations, they count anybody above 50% as a strong possession team and anybody below as a poor possession team. But many of those should actually be just average possession teams, as “average” should encompass roughly 2/3 of the league. If you treat 50% as the dividing line, you ignore the wide range of values that encompasses the natural ebb and flow of the season.

    So when I look at it I prefer to think of strong possession teams as those that are at least 1 standard deviation above the mean, and poor possession teams as those that are at least 1 standard deviation below the mean (there are slight variations from year to year, but in general the average range encompasses those that fall between 47% and 53%).
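    In code form, the cut I am describing is nothing more than a mean-and-standard-deviation threshold; the CF% numbers below are made up purely for illustration:

        import statistics

        # Hypothetical 5v5 CF% values, one per team (illustrative only)
        cf_pct = {"Team A": 55.3, "Team B": 52.1, "Team C": 50.0,
                  "Team D": 47.8, "Team E": 44.9}

        mean = statistics.mean(cf_pct.values())
        sd = statistics.stdev(cf_pct.values())

        def tier(value):
            # Strong = at least 1 SD above the mean, poor = at least 1 SD below,
            # everything in between is just average
            if value >= mean + sd:
                return "strong"
            if value <= mean - sd:
                return "poor"
            return "average"

        for team, value in cf_pct.items():
            print(team, tier(value))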

    When I looked at the past 7 seasons of data that way I found that the possession crowd does have one very glaring point in their corner: 5 of the past 7 Stanley Cup champions have been elite possession teams (and considering Pittsburgh’s mid-season coaching change, which saw them end the year as an elite possession team, we can really make that 6 of 7). Only the Bruins managed to win the Cup while being an average possession team.

    But beyond that the clear-cut differences break down. There is a higher likelihood of a strong possession team making the playoffs (and having a deep run) and a lesser chance for a poor possession team. However, there are plenty of examples of strong possession teams that either didn’t qualify or lost in the early rounds, and a number of examples of poor possession teams that made the playoffs (most notable being the 2008 Penguins, who made it all the way to the Cup Final). So there is definitely more work that needs to be done in improving evaluation metrics, since the ones we have aren’t as bulletproof as their vocal supporters seem to think.

    I was wondering if perhaps we have been seeing signs of the league normalizing as more teams focus on possession metrics (you had an article discussing something like that a year or two ago). This season we had the “summer of analytics” with most of the league at least paying lip service to the idea of building rosters and making decisions based on possession. So when it gets that predominant throughout the league there is more parity and balance, and we start to see more of the true talent playing out.

    1. I was wondering if perhaps we have been seeing signs of the league normalizing as more teams focus on possession metrics

      This is something I have been wondering about and am planning on investigating, and hopefully writing an article or two on, over the next few weeks. I think there is some evidence to support this, so it will be interesting to see what a more thorough analysis comes up with. Improving possession at the expense of shot quality is something I have written about in the past and I think there is more evidence to support that in this year’s data.

  2. A thought-provoking post. We know possession stats are better (than goal-based ones) at predicting shorter samples; however, goal-based stats do as well or better over longer samples (you have shown this before). Wouldn’t it make sense, then, that possession stats ought to do a better job of predicting the shorter sample of the playoffs? (Of course the Kings got lucky, but it could also have been their skill.)
    In addition, an argument can be made that playoff hockey is substantially different from regular season hockey.

    1. Coaches spend their whole focus on matching and shutting down the opponent.
    2. The best goalies always play.
    3. Both teams are equally rested.

    Perhaps the shooting % skill can be more easily scouted and defended against, and the other two factors could also work to neutralize shooting percentages.

    This could mean that possession stats are more important as a skill that can’t be easily defended against (at least until now, when possession is being studied as an actual skill).

  3. Good post. As perhaps an extension, the flood of claims about how “unlucky” the Kings were this year has me wondering if sensibilities were trumping sense with respect to randomness. Randomness is just that: random. How it affects any given situation is random. Suggesting that an outcome which fits one’s hypothesis reflects the strength of the hypothesis, while a non-supporting outcome is strictly “luck”, is almost certainly misusing randomness to fit one’s needs. Given how the Kings have flirted with the playoff bubble, it makes sense that one year they were not going to make it; and it is also quite possible that randomness contributed to them making the playoffs on occasion as well.

    The misuse of the term “luck” also annoys me. Randomness is the full set of variables/factors that are not being accounted for. In most hockey analyses that set contains many items. Luck is the subset of those that are, for practical purposes, random. The rest are factors that are not random; it is just that, for varying reasons, they are not being accounted for. And the more random factors there are, the more likely it is that we don’t have a clue what role they played in individual situations. This is one of the dangers in applying aggregate-derived trends to individual events.

    (P.S. This is even more of a potential factor when taking aggregate-derived trends at the team level and then trying to apply them to individual players. One is introducing a number of variables/factors that were not present in the original analysis. I thought your analysis of the effect of coaching changes on Corsi% demonstrated this well.)

  4. I appreciate the post and don’t think you’ll find anyone who would disagree with you. I don’t doubt that there are many “stats friendly” folks who may oversimplify their analysis and not look at the context you suggest. I also think there are many folks, on every side of the issue, who can come across as oversimplifying things because the medium (largely Twitter) forces that to happen. It’s not feasible to write a blog post every time a point can be made, and it’s not feasible to add qualifiers like “but we have to keep that in the context of a range of factors that could have impacted it” when making comments that come across as definitive and self-contained.

  5. I very much appreciate the point about trying to understand analytics-prediction “failures”, because, frankly, there are enough of them over the past 10 seasons that it’s hard to dismiss them as weird outliers. I’d agree with you that possession isn’t an end-all that explains everything in the NHL. I’ve made this argument before, but a good possession game will tend to push the math of GF% in a team’s favor over the long haul, and good possession teams should finish higher in the standings, on average, than weaker possession teams. But this isn’t necessarily true. And if you see strong possession teams perennially missing the playoffs (e.g., the Devils in 2013 and 2013-14) or perennially on the bubble (e.g., LA), there’s reason to wonder if those are fundamentally flawed teams. Ultimately, the goal of analytics is to understand why and how teams win. And teams that underperform (or overperform) their metrics consistently should be examined in detail rather than dismissed as outliers.

    The 2014-15 Kings are more of a unique case insofar as you expect a team with strong possession numbers and decent PDO to be a safe bet for at least making the postseason. Other people have focused on 1-goal games and weird SO/OT results to explain their situation, but I actually have a simpler explanation. LA is a very low-event team: they had the lowest rate of 5v5 Corsi attempts against (score-adjusted) in the league, and ALSO the 4th-lowest rate of attempts for. Combining a middling Sh% and a low rate of shots, along with a mediocre power play, gives you a team that ranked 18th in goals scored this season. They were excellent defensively, but they probably just didn’t score enough, because they didn’t shoot enough.
