What baffles me most in hockey analytics

I am always astonished by how much I get critiqued about shot quality and whether it exists or not and whether it is important or not. I get even more criticism if I ever bring up the idea that defenders can have an ability to boost save percentage. To me this is the most baffling thing I have come across in hockey analytics. Let me explain why.

I would hazard a guess that everyone who criticizes me about my belief in shot quality and the ability of players to drive shooting or save percentages absolutely believes in score effects. This is why we have score-adjusted stats. To summarize score effects: when a team is protecting a lead, it generates fewer shots for but posts a higher shooting percentage, while giving up more shots against and posting a higher save percentage. Matt Cane wrote a good article about score effects which I recommend you read if you are not familiar with the concept. Score effects are a well-established concept within the hockey analytics community and no one really disputes them.

[Figure: Leading vs Trailing Save Percentage]

What baffles me is that the same people who believe that teams can essentially boost their goalie's save percentage when they are defending a lead will rebel against the very idea that a player whose primary role is to play shut-down defensive hockey (exactly what teams often do when defending a lead) might actually have an ability to boost his goalie's save percentage. A guy like Brandon Sutter, for example. Or maybe Dion Phaneuf over the past several years with the Leafs. It absolutely boggles my mind that otherwise smart people (including some currently working for NHL teams) can believe in score effects but will absolutely resist the idea that individual players can systematically boost save percentage during 5v5 close/tied situations.

If you are evaluating players, especially players with a well-defined role that is heavily offence-oriented or heavily defence-oriented, then you will absolutely be missing important information if you only look at shot totals.

Here are 5v5close Sv%RelTM stats since 2010 for a few players that seem to be able to boost their goalie's save percentage.

Season Kopitar Hossa Winnik Backes Couture Stepan Zajac Turris
2010-11 2.7 1.6 3.3 3.8 2.8 -0.7 0.2 -1.3
2011-12 1.4 1.2 -0.6 1.4 0.9 1.2 1.8 1.3
2012-13 1.1 5.8 0.1 3.8 3.2 -0.3 -0.2 3.1
2013-14 2.2 1.9 1.3 2.2 1.5 1.8 0.8 2.4
2014-15 -0.4 0.1 5.6 -0.2 2.9 3.5 2.9 0.6
2015-16 2.9 4 0.5 3.9 4.6 2.4 1.3

Season Pietrangelo Weber Brodin Phaneuf Brodie Niskanen
2010-11 -0.5 -1.5 1.3 3.4
2011-12 0.7 1.9 1.8 0.7 1
2012-13 1.8 1.7 1.8 1.6 1.7 3.7
2013-14 3.6 2.8 0.5 2.1 0.7 2.5
2014-15 0.1 0.2 2.7 1 3.2 -0.5
2015-16 1 1.9 2.3 0.2 -0.2 2.1


This article has 7 Comments

  1. I agree that quantifying defensive ability is an important goal for hockey analytics. However, I would like to see a more in-depth statistical check of Sv%RelTM. Specifically, how does a given player’s value correlate season to season? Also, it would be interesting to see a same-season correlation, say Sv%RelTM in even vs odd games.

    1. I’ll be honest, the correlations are not great when you are looking at the league as a whole. However, if you use usage statistics to identify offensive or defensive players, you will find that players who play predominantly offensive or predominantly defensive roles will show some consistency. This is why I want to start focusing on how players perform in their roles. Of all the players in the NHL, only a small portion could be called pure defensive players.

      In fact, simply identifying the top offensive players tells you something about their Sv%RelTM. As an example, I took all players with 3500 5v5close minutes from 2007-08 to 2014-15 and calculated an ‘Offensive Player Metric’, which is simply the % of the team’s GF he was on the ice for divided by the % of his team’s ice time he was on the ice for (all 5v5close situations). These are the players that had the highest goals for relative to their ice time on their team. I then sorted the list and here are the Sv%RelTM values that I get for the top 10, 20, 30 and bottom 10, 20, 30.

      Top 10 -0.38
      Top 20 -0.27
      Top 30 -0.24

      Bottom 10 0.36
      Bottom 20 0.41
      Bottom 30 0.25

      Players who generate a lot of offense relative to their ice time have poor Sv%RelTM and players who do not generate much offense (presumably more defensive players) have a high Sv%RelTM.
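The ‘Offensive Player Metric’ described in this reply can be sketched in a few lines. This is a toy illustration with made-up player rows (the real calculation would use 5v5close on-ice data); the function just divides a player’s share of team goals-for by his share of team ice time.

```python
# Toy sketch of the 'Offensive Player Metric' (OPM) described above.
# Player rows are invented for illustration; real inputs would be
# 5v5close on-ice data.

def offensive_player_metric(pct_team_gf, pct_team_toi):
    """Share of team goals-for the player was on the ice for, divided
    by his share of team ice time. >1.0 = goals follow him around."""
    return pct_team_gf / pct_team_toi

# (name, % of team GF on ice for, % of team TOI, Sv%RelTM) -- hypothetical
players = [
    ("OffenceFirst", 0.45, 0.30, -0.4),
    ("TwoWay",       0.33, 0.30,  0.1),
    ("ShutDown",     0.22, 0.30,  0.5),
]

# Sort from most to least offensive by OPM, mirroring the top/bottom
# comparison in the comment above.
ranked = sorted(players, key=lambda p: offensive_player_metric(p[1], p[2]),
                reverse=True)
for name, gf, toi, sv_rel in ranked:
    print(f"{name}: OPM={offensive_player_metric(gf, toi):.2f}, Sv%RelTM={sv_rel}")
```

In these made-up rows the pattern matches the numbers above: the most offensive player carries the worst Sv%RelTM and the most defensive player the best.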

      In the end there is a fair amount of uncertainty and there is still a lot to learn about who can impact Sv%RelTM and how but I believe it is difficult to deny that such a relationship exists.

  2. So here are my (likely poorly articulated) thoughts on why I struggle with this idea.

    1) I think, in general, that players can have an impact on their goalie’s save percentage.

    2) I think that it’s very difficult to measure this impact. In particular, I don’t think Sv% RelTM (particularly at 5v5 Close) captures this impact because the sample sizes are small, and the variance in Sv% is high. If you look at Marian Hossa’s data over the range in your chart, the average width of a 95% confidence interval on his Save Percentage is 5.7%, which means that only once is his TMSv% outside of the 95% confidence interval of his observed save percentage.

    This is going to be a real problem for a lot of players, we just can’t be certain whether the save percentage that we observe is a true reflection of a player’s underlying ability to drive Sv% given the sample that we have in a single season.
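The sample-size point in this comment can be put into numbers with a quick sketch of the interval width, using the normal approximation to the binomial; the 400-shot season below is a hypothetical figure chosen for illustration, not any player’s actual total.

```python
import math

def sv_pct_ci_width(sv_pct, shots_against, z=1.96):
    """Width of an approximate 95% confidence interval (normal
    approximation to the binomial) for an observed save percentage
    over `shots_against` shots."""
    se = math.sqrt(sv_pct * (1 - sv_pct) / shots_against)
    return 2 * z * se

# A skater on the ice for ~400 5v5close shots against in a season
# (hypothetical sample size):
width = sv_pct_ci_width(0.920, 400)
print(round(width * 100, 1))  # prints 5.3 (percentage points)
```

At single-season on-ice sample sizes the interval is several percentage points wide, which is the core of the objection: observed Sv%RelTM differences of 1-3 points sit well inside the noise.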

    3) Related to the variance, you were able to identify 14 players here that you felt boosted 5v5 Sv% in close games, but you’re only able to do that retrospectively. If you take the 12 players who played in 2010-11, 4 of them posted a negative 5v5close Sv%RelTM in that year. That’s nearly 1/3 of the “best” players at this skill who would have been judged to be bad after that year. If we can truly trust a statistic to measure a skill, it doesn’t seem reasonable to me that 1/3rd of the very best players at that skill would be measured poorly by that statistic in a given year.

    4) The correlations are not an insignificant problem, because the correlations answer the question “how well are we able to predict a player’s results in the future using what we’ve observed in the past?” This is the key question for a GM – given that we saw a player do well in year 1, should we feel confident in the prediction that he’ll do well in year 2? We can’t say that we should with Sv%RelTM.

    As I said, I do think that players have some ability to impact their team’s defensive results, but that impact is entangled in too many factors for Sv%RelTM to be a reliable metric. In order for a player to impact a goalie’s ability to stop the puck, not only do they need to be in the correct position, but they also need 4 other players (who they have no control over) to be in the correct position as well. That’s part of the reason we see so much variance in individual numbers – one player alone can’t make his teammates do their jobs. When you throw in the rest of the inherent luck/randomness/variance/will of the hockey gods into the picture (bad bounces, weird screens, deflections off the stanchion, a goalie leaving a half inch between his pads and the post for a split second, etc.) it makes it really difficult to say that the result (goal or no goal) can be heavily influenced by one of five defenders on the ice.

    (Side note: I didn’t check all your data, but Kopitar’s 5v5Close Sv%RelTM seems to be off vs. Puckalytics)

  3. Thanks for the feedback Matt. I agree with everything you have said. This is a really hard problem, and I understand that there are great uncertainties involved in the analysis and in having confidence in the results. The issue I have is when people minimize the importance of something because it can’t be identified as a talent with a high degree of confidence. Lack of confidence is not lack of existence. What I am trying to point out is that these talents exist and can have an important impact on outcomes, even if they are incredibly hard to identify.

    Why does this matter? Knowing what we don’t know or can’t be sure of is as important as knowing what we can say with confidence. It is also more intellectually honest when we say what we have confidence in and what we don’t have confidence in and their relative importance. I personally believe winning the goal battle at the “on-ice” level is probably 50% driven by winning the shot quantity battle and 50% driven by winning the percentages battle over the long term (+/- 10%). By that I mean, if the NHL played an infinite number of games the spread in shot generation talent would be roughly equal to the spread in on-ice shooting % when calculated based on the impact of goals scored.
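The “roughly equal spread” claim above can be illustrated with a toy simulation: long-run goal rate is shot rate times on-ice shooting percentage, and on a log scale the two sources of talent variance add. The spreads below are assumptions chosen for illustration, not estimates from real NHL data.

```python
import math
import random
import statistics

random.seed(42)

# Toy model: goal rate = shot rate x on-ice shooting%.
# Talent spreads are assumptions picked so the two components have
# similar relative spread -- illustrating, not measuring, the 50/50 claim.
N = 10_000
shot_rates = [random.gauss(30.0, 2.0) for _ in range(N)]      # shots/60
shoot_pcts = [random.gauss(0.080, 0.0053) for _ in range(N)]  # on-ice Sh%

# On a log scale the variances add (approximately):
# var(log goals) ~= var(log shots) + var(log Sh%)
v_shots = statistics.variance(math.log(s) for s in shot_rates)
v_pcts = statistics.variance(math.log(p) for p in shoot_pcts)
share_from_shots = v_shots / (v_shots + v_pcts)
print(round(share_from_shots, 2))  # ~0.5 under these assumed spreads
```

The point of the sketch is only that the split depends entirely on the two talent spreads: whatever those spreads actually are in the NHL determines how much of the goal battle is quantity versus percentages.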

    Think about how that impacts the discussion. If I were advising a general manager about a player there would be a huge difference between the following two statements.

    -The player is a dominant possession player, and is the best possession player on the team. His shooting percentage is low but shooting percentages are highly random so don’t worry about that.

    -The player is a dominant possession player, and is the best possession player on the team. His shooting percentage is low but shooting percentages over 82 games are highly variable so I cannot say whether this is typical of how he will perform in the future. Shooting percentage is 50% of goal scoring talent though so from a quantitative point of view there is still significant uncertainty in his overall value. I have looked at shot location data and I have not seen anything in there to explain the low shooting percentage. What do the scouts think about his passing/shooting ability? If they like his skills then it is certainly possible that his shooting percentage should improve.

    The difference when you know what you don’t know is that you don’t close off the discussion. You leave the door open to more exploration and understanding. Most importantly though it allows you to determine the confidence of your player evaluation. You may have 100% confidence in your Corsi evaluation of a player but if Corsi is only 50% of overall player value then your player evaluation is still highly uncertain.

    1. I get what you’re saying, and the transparency around it is important. But we do have some ideas what a player’s shooting percentage will be from year to year. You could say something like:

      The player is a dominant possession player, and is the best possession player on the team. His shooting percentage is low but shooting percentages are highly random so don’t worry about that. Given that he shot about 2% below league average last year, we’d expect him to be around 0.4% below league average this year, which would make him a (positive/negative) player. There’s a wide range of outcomes though, so it’s possible his shooting percentage will land far away from that.

      You can augment the discussion based on how much past sample you have, but you can still give an estimate based on past correlations. In the case of Sv%RelTM, we have to be careful, because we can’t say what we’d expect to happen in the future based on past data. You can say that qualitatively he appears to defend the net well, but as soon as you start to bring that kind of stat into the discussion you’re implying certainty around defensive talent that doesn’t exist.
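The 2%-below to 0.4%-below projection in this comment is a regression-to-the-mean shrinkage. A minimal sketch, assuming a year-to-year correlation of 0.2 (the value implied by that example, not a fitted estimate):

```python
def project_rel_sh_pct(observed_rel, year_to_year_r=0.2):
    """Shrink an observed relative shooting% (percentage points above or
    below league average) toward the league mean of 0 by the assumed
    year-to-year correlation. r=0.2 matches the 2.0% -> 0.4% example
    in the comment above; it is an assumption, not a fitted value."""
    return year_to_year_r * observed_rel

print(project_rel_sh_pct(-2.0))  # -0.4
```

The same shrinkage logic is why Sv%RelTM projections are so weak: when the year-to-year correlation is near zero, the best forward estimate is pulled almost all the way back to league average regardless of what was observed.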

        1. But by shifting from -2% to -0.4% you are just shifting the uncertainty. It might be a more likely outcome, but it is still highly uncertain. What we really need to do is either find a method to seriously tighten the bounds of the estimate or simply say “as best we can tell, he is most likely a below-average shooting percentage player, but we can’t say for sure to what degree.”

        That is semantics though. The greater problem I have is the minimization of the importance of the percentages, which is flat-out misleading in my opinion.

  4. David, I like that you are at least questioning the new analytics. I for one think defensemen are highly underrated in impact and goalies are quite overrated as a result.

    The game of hockey is about the small stuff. All shots are not the same, and if your large data samples wash out the differences in shot quality, then that should tell you your data is not a good measurement of player impact.

    We all know there are players who, when given time to set up their shots, are more likely to finish than others. Why do the Capitals cycle the puck to Ovechkin on the powerplay so often? Or Kane in Chicago? Or Jagr on every team he’s been on? Or what makes a guy like Justin Williams so valuable? So a shot coming off of Ovechkin’s stick is more valuable than one coming off of David Clarkson’s, right?

    This is just comparing the quality of a different player shooting. But there are so many factors, as David has mentioned – set-up and finish and situation – that impact whether the puck goes in or not.

    On the flip side, the defensemen and the system also impact this shot quality directly. Does this defenseman get to rebounds consistently? Does he rush shooters? Does he get his stick on shots? Again, without a doubt a guy like Duncan Keith impacts the shot quality and thus the saves Corey Crawford has to make. And again, if the data washes out such impacts then the analysis falls short of portraying hockey.

    I get where the idea comes from – if you measure enough shots, then you capture enough of these events and thus the impact is measured indirectly within the numbers of shots.

    The assumption that shots capture what is important to hockey might be wrong.

    I always look back to teams like the Colorado Avalanche of the ’90s and early 2000s, and the Devils as well. They got quick-strike counter-attack goals a lot, and at key times. I’d much rather have a team that moves the puck down the ice fast and scores on one shot than a team that generates ten shots and doesn’t score. But that second team will often have the stronger analytics.
