Apr 01, 2014
 

Last week Tyler Dellow had a post titled "Two Graphs and 480 Words That Will Convince You On Corsi%" by which, let's just say, I was less than convinced (read the comments). This post is my rebuttal, an attempt to convince you of the importance of Sh% in player evaluation.

The problem with shooting percentage is that it suffers from small sample size issues. Over small samples it is often dominated by randomness (I prefer the term randomness to luck), but the question I have always had is: if we remove randomness from the equation, how important a skill is shooting percentage? To attempt to answer this I will look at the variance in on-ice shooting percentages among forwards as we increase the sample size from a single season (minimum 500 minutes ice time) to 6 seasons (minimum 3000 minutes ice time). As the sample size increases we would expect the variance due to randomness to decrease. This means that when the observed variance stops decreasing (or the rate of decrease slows significantly) as sample size increases, we know we are approaching the point where the remaining variance is variance in true talent, not small sample size randomness. So, without further ado, I present my first chart of on-ice shooting percentages for forwards in 5v5 situations.

 

[Chart: on-ice shooting percentage variance by sample size]

The decline in variance pretty much stops by the time you reach 5 years (2500+ minutes) of data, and after 3 years (1500+ minutes) the rate of decline falls off significantly. It is also worth noting that some of the drop off over longer periods of time is due to age progression/regression and not due to a reduction in randomness.
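For readers who want to see how numbers like these could be reproduced, here is a minimal sketch in Python/pandas. The file name and column names are assumptions for illustration, not the actual data layout on stats.hockeyanalysis.com.

```python
import pandas as pd

# Hypothetical input: one row per forward per aggregation window, with columns
# 'window_years' (1..6), 'toi_minutes' and 'on_ice_sh_pct' (5v5 on-ice Sh%).
forwards = pd.read_csv("forward_on_ice_shpct.csv")  # assumed file and layout

# Minimum TOI per window, roughly matching the post (500 min per season of data).
min_toi = {1: 500, 2: 1000, 3: 1500, 4: 2000, 5: 2500, 6: 3000}

for years, toi in min_toi.items():
    sample = forwards.loc[(forwards["window_years"] == years) &
                          (forwards["toi_minutes"] >= toi), "on_ice_sh_pct"]
    print(f"{years} yr (>= {toi} min): n={len(sample)}, "
          f"variance={sample.var():.4f}")
```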

What is the significance of all of this? Well, at 5 years a 90th percentile player would have 45% more goals than a 10th percentile player given an equal number of shots. A player one standard deviation above average would have 33% more goals for than a player one standard deviation below average, again given an equal number of shots.

Now, let's compare this to the same chart for CF20 (corsi for per 20 minutes) to get an idea of how shot generation varies across players.

[Chart: CF20 variance by sample size]

It's a little interesting that the top players show no regression over time but the bottom players do. This may be because terrible shot generating players don't stick around long enough. More important, though, is the magnitude of the difference between the top and bottom players. A 90th percentile CF20 player produces about 25% more shot attempts than a 10th percentile player, and a one standard deviation above average CF20 player produces about 18.5% more than a one standard deviation below average player (over 5 years). Both of these are well below (almost half of) the 45% and 33% we saw for shooting percentage.

I hear a lot of 'I told you so' from the pro-Corsi crowd in regards to the Leafs and their losing streak, and yes, their percentages have regressed this season, but I think it is worth noting that the Leafs are still an example of a team where CF% is not a good indicator of performance. The Leafs' 5v5 close CF% is 42.5% but their 5v5 close GF% is 47.6%. The idea that CF% and GF% are "tightly intertwined", as Tyler Dellow wrote, is not supported by the Maple Leafs this season, despite the Maple Leafs being the latest favourite "I told you so" team of the pro-Corsi crowd.

There is also some evidence that the Leafs have been "unlucky" this year. Their 5v5 close shooting percentages over the past 3 seasons were 8.82 (2nd), 8.59 (4th), and 10.54 (1st), while this year it has dropped to 8.17 (8th). Now, the question is how much of that is luck and how much is the loss of Grabovski and MacArthur and the addition of Clarkson (who is generally a poor on-ice Sh% player), but the Leafs' Sh% is well below the past few seasons and some of that may be bad luck (and notably, not "regression" from years of "good luck").

In summary, generating shots matters, but capitalizing on them matters as much or more.

 

Feb 09, 2014
 

There is a recently posted article on BroadStreetHockey.com discussing overused and overrated statistics. The first statistic on that list is plus/minus. Plus/minus has its flaws and gets wildly misused at times, but that doesn't mean it is a useless statistic if used correctly, so I want to defend it a little and also put it in the same context as Corsi.

The rationale given in the BroadStreetHockey.com article for plus/minus being a bad statistic is that the top of the plus/minus listing is dominated by a few teams. They list the top 10 players in +/- this season and conclude:

Now there are some good players on the list for sure, but look a little bit closer at the names on the list. The top-ten players come from a total of five teams. The top eight all come from three teams. Could it perhaps be more likely that plus/minus is more of a reflection of a team’s success than specific individuals?

Now that is a fair comment, but let me present the following table of CF% leaders as of a few days ago.

Player Name Team CF%
MUZZIN, JAKE Los_Angeles 0.614
WILLIAMS, JUSTIN Los_Angeles 0.611
KOPITAR, ANZE Los_Angeles 0.611
ERIKSSON, LOUI Boston 0.606
BERGERON, PATRICE Boston 0.605
TOFFOLI, TYLER Los_Angeles 0.595
TOEWS, JONATHAN Chicago 0.592
THORNTON, JOE San_Jose 0.591
MARCHAND, BRAD Boston 0.591
ROZSIVAL, MICHAL Chicago 0.590
TARASENKO, VLADIMIR St.Louis 0.589
KING, DWIGHT Los_Angeles 0.589
BROWN, DUSTIN Los_Angeles 0.586
DOUGHTY, DREW Los_Angeles 0.584
BURNS, BRENT San_Jose 0.583
BICKELL, BRYAN Chicago 0.582
HOSSA, MARIAN Chicago 0.581
KOIVU, MIKKO Minnesota 0.580
SAAD, BRANDON Chicago 0.579
SHARP, PATRICK Chicago 0.578
SHAW, ANDREW Chicago 0.578
SEABROOK, BRENT Chicago 0.576

Of the top 22 players, 8 are from Chicago and 7 are from Los Angeles. Do the Blackhawks and Kings have 68% of the top 22 players in the NHL? If we are tossing +/- aside because it is “more of a reflection of a team’s success than specific individuals” then we should be tossing aside Corsi as well, shouldn’t we?

The problem is not that the top of the +/- list is dominated by a few teams; it is that people misinterpret what it means and don't consider the context surrounding a player's +/-. No matter what statistic we use, we must consider context such as quality of team, ice time, etc. Plus/minus is no different in that regard.

There are legitimate criticisms of +/- that are unique to +/-, but in general I think a lot of the criticisms and subsequent dismissals of +/- as having any value whatsoever are largely unfounded. It isn't that plus/minus is overrated or overused; it is that it is often misused and misinterpreted, and to be honest I see this happen just as much with Corsi and the majority of other "advanced" statistics as well. It isn't the statistic that is the problem, it is the user of the statistic. That, unfortunately, will never change, but that shouldn't stop those of us who know how to use these statistics properly from using them to advance our knowledge of hockey. So please, can we stop dismissing plus/minus (and other stats) as valueless statistics just because a bunch of people frequently misuse them.

The truth is there are zero (yes, zero) statistics in hockey that can't be and aren't regularly misused and used without context. That goes for everything from goals and point totals to Corsi to whatever zone start or quality of competition metric you like. They are all prone to being misused and misinterpreted, and more often than not they are. It is not because the statistics themselves are inherently flawed or useless; it's because hockey analytics is hard and we are a long, long way from fully understanding all the dynamics at play. Some people are just more willing to dig deeper than others. That will never change.

 

(Note: This isn’t intended to be a critique of the Broad Street Hockey article because the gist of the article is true. The premise of the article is really about statistics needing context and I agree with this 100%. I just wish it wasn’t limited to stats like plus/minus, turnovers, blocked shots, etc. because advanced statistics are just as likely to be misused.)

 

Apr 11, 2013
 

Every now and again someone asks me how I calculate the HARO, HARD and HART ratings that you can find on stats.hockeyanalysis.com, and it is at that point I realize that I don't have an up to date description of how they are calculated, so today I endeavor to write one.

First, let me define HARO, HARD and HART.

HARO – Hockey Analysis Rating Offense
HARD – Hockey Analysis Rating Defense
HART – Hockey Analysis Rating Total

So my goal when creating them was to create an offensive, a defensive, and an overall rating for each and every player. Now, here is a step by step guide to how they are calculated.

Calculate WOWY’s and AYNAY’s

The first step is to calculate WOWY's (With Or Without You) and AYNAY's (Against You or Not Against You). You can find goal and corsi WOWY's and AYNAY's on stats.hockeyanalysis.com for every player for 5v5, 5v5 ZS adjusted and 5v5 close zone start adjusted situations, but I calculate them for every situation you see on stats.hockeyanalysis.com, and for shots and fenwick as well; those just don't get posted because it amounts to a massive amount of data.

(Distraction: 800 players playing against 800 other players means 640,000 data points for each of TOI, GF20, GA20, SF20, SA20, FF20, FA20, CF20 and CA20, both when players are playing against each other and when they are apart, or about 17.28 million data points for AYNAY's for a single season per situation. Now consider that when I do my 5 year ratings there are more like 1600 players, generating more than 60 million data points.)

Calculate TMGF20, TMGA20, OppGF20, OppGA20

What we need the WOWY's for is to calculate TMGF20 (a 'TOI with'-weighted average of the GF20 of the player's teammates during the time those teammates are not playing with him), TMGA20 (the same but for GA20), OppGF20 (a 'TOI against'-weighted average of the GF20 of the player's opponents during the time those opponents are not playing against him) and OppGA20 (the same but for GA20).

So, let's take a look at Alexander Steen's 5v5 WOWY's for 2011-12 to see how TMGF20 is calculated. The columns we are interested in are the teammate-when-apart TOI and GF20 columns, which I will call TWA_TOI and TWA_GF20. TMGF20 is simply the TWA_TOI (teammate while apart time on ice) weighted average of TWA_GF20. This gives us a good indication of how Steen's teammates perform offensively when they are not playing with Steen.

TMGA20 is calculated the same way but using TWA_GA20 instead of TWA_GF20. OppGF20 is calculated in a similar manner except using OWA_GF20 (Opponent while apart GF20) and OWA_TOI while OppGA20 uses OWA_GA20.

The reason I use 'while not playing with/against' data is that I don't want the talent level of the player we are evaluating to influence his own QoT and QoC metrics (which is essentially what TMGF20, TMGA20, OppGF20 and OppGA20 are).
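As a concrete illustration, here is a minimal sketch of the TMGF20 calculation using a small made-up WOWY table. The column names mirror the TWA_TOI/TWA_GF20 labels above, but the numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical WOWY table for one player: one row per teammate, with that
# teammate's TOI and GF20 while apart from the player ("TWA" columns).
wowy = pd.DataFrame({
    "teammate": ["Teammate A", "Teammate B", "Teammate C"],
    "TWA_TOI":  [900.0, 450.0, 120.0],   # minutes the teammate played apart
    "TWA_GF20": [0.85, 0.72, 0.60],      # teammate's GF20 while apart
})

# TMGF20 is the TWA_TOI-weighted average of TWA_GF20.
tmgf20 = (wowy["TWA_TOI"] * wowy["TWA_GF20"]).sum() / wowy["TWA_TOI"].sum()
print(f"TMGF20 = {tmgf20:.3f}")

# TMGA20, OppGF20 and OppGA20 follow the same pattern using TWA_GA20,
# OWA_TOI/OWA_GF20 and OWA_GA20 respectively.
```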

Calculate first iteration of HARO and HARD

The first iteration of HARO and HARD is simple. I first calculate an expected GF20 and an expected GA20 based on the player's teammates and opposition.

ExpGF20 = (TMGF20 + OppGA20)/2
ExpGA20 = (TMGA20 + OppGF20)/2

Then I calculate HARO and HARD as a percentage improvement:

HARO(1st iteration) = 100*(GF20-ExpGF20) / ExpGF20
HARD(1st iteration) = 100*(ExpGA20 – GA20) / ExpGA20

So, a HARO of 20 would mean that when the player is on the ice, his team's goal scoring rate is 20% higher than one would expect based on how his teammates and opponents performed during the time the player was not on the ice with/against them. Similarly, a HARD of 20 would mean his team's goals against rate is 20% better (lower) than expected.

(Note: The opponent stats used come from the complementary situation. For 5v5 the opposition situation is also 5v5, but when calculating a rating for 5v5 leading, the opposition situation is 5v5 trailing, so OppGF20 would be the OppGF20 calculated from 5v5 trailing data.)

Now for a second iteration

The first iteration used raw GF20 and GA20 stats, which is a good start, but after the first iteration we have teammate- and opponent-corrected evaluations of every player, which means we have better data about the quality of teammates and opponents the player has. This is where things get a little more complicated, because I need to calculate QoT and QoC metrics based on the first iteration HARO and HARD values and then convert them into GF20 and GA20 equivalent numbers that the player's own GF20 and GA20 can be compared against.

To do this I calculate a TMHARO rating, which is a TWA_TOI weighted average of first iteration HARO. TMHARD, OppHARO and OppHARD are calculated in a similar manner. Now I need to convert these to GF20 and GA20 based stats, so I multiply by league average GF20 (LAGF20) and league average GA20 (LAGA20), and from there I can calculate expected GF20 and expected GA20.

ExpGF20(2nd iteration) = (TMHARO*LAGF20 + OppHARD*LAGA20)/2
ExpGA20(2nd iteration) = (TMHARD*LAGA20 + OppHARO*LAGF20)/2

From there we can get a second iteration of HARO and HARD.

HARO(2nd iteration) = 100*(GF20-ExpGF20) / ExpGF20
HARD(2nd iteration) = 100*(ExpGA20 – GA20) / ExpGA20

Now we iterate again and again…

Now we repeat the above step over and over again, using the previous iteration's HARO and HARD values at every step.

Now calculate HART

Once we have done enough iterations we can calculate HART from the final iteration's HARO and HARD values.

HART = (HARO + HARD) /2
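Putting the steps above together, here is a minimal sketch of the whole iterative calculation. It is a re-implementation from the description above, not the production code; in particular, converting a percentage rating back to a GF20/GA20 scale via (1 + rating/100) times the league average is my assumption about how the "multiply by league average" conversion step works, and the teammate/opponent weighting is represented by simple TOI-apart weight dictionaries.

```python
def weighted_avg(weights, values):
    """TOI-weighted average of a rating over the players in `weights`."""
    total = sum(weights.values())
    return sum(w * values[pid] for pid, w in weights.items()) / total


def hockey_analysis_ratings(players, la_gf20, la_ga20, n_iter=10):
    """Illustrative sketch of the iterative HARO/HARD/HART calculation.

    `players` maps player id -> dict with keys GF20, GA20, TMGF20, TMGA20,
    OppGF20, OppGA20, plus `tm_weights` and `opp_weights` (TOI-apart weights
    keyed by the other player's id).  All names here are hypothetical.
    """
    haro, hard = {}, {}
    # First iteration: expected rates straight from teammate/opponent GF20, GA20.
    for pid, p in players.items():
        exp_gf = (p["TMGF20"] + p["OppGA20"]) / 2
        exp_ga = (p["TMGA20"] + p["OppGF20"]) / 2
        haro[pid] = 100 * (p["GF20"] - exp_gf) / exp_gf
        hard[pid] = 100 * (exp_ga - p["GA20"]) / exp_ga

    # Later iterations: rebuild QoT/QoC from the previous iteration's ratings.
    for _ in range(n_iter - 1):
        new_haro, new_hard = {}, {}
        for pid, p in players.items():
            tm_haro = weighted_avg(p["tm_weights"], haro)
            tm_hard = weighted_avg(p["tm_weights"], hard)
            opp_haro = weighted_avg(p["opp_weights"], haro)
            opp_hard = weighted_avg(p["opp_weights"], hard)
            # Assumed conversion back to a GF20/GA20 scale via league averages.
            exp_gf = ((1 + tm_haro / 100) * la_gf20 + (1 + opp_hard / 100) * la_ga20) / 2
            exp_ga = ((1 + tm_hard / 100) * la_ga20 + (1 + opp_haro / 100) * la_gf20) / 2
            new_haro[pid] = 100 * (p["GF20"] - exp_gf) / exp_gf
            new_hard[pid] = 100 * (exp_ga - p["GA20"]) / exp_ga
        haro, hard = new_haro, new_hard

    hart = {pid: (haro[pid] + hard[pid]) / 2 for pid in players}
    return haro, hard, hart
```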

Now do the same for Shot, Fenwick and Corsi data

The above is for goal ratings but I have Shot, Fenwick and Corsi ratings as well and these can be calculated in the exact same way except using SF20, SA20, FF20, FA20, CF20 and CA20.

What about goalies?

Goalies are a little unique in that they only really play the defensive side of the game. For this reason I do not include goalies when calculating TMGF20 and OppGF20. For shot, fenwick and corsi data I do not include goalies on the defensive side either, as I assume a goalie does not influence shots against (this may not be entirely true, since some goalies may be better at controlling rebounds and thus secondary shots, but I'll assume the effect is minimal if it exists). The result is that goalies have a goal-based HARD rating but no HARO rating, and no shot/fenwick/corsi based HARD or HARO ratings.

I hope this helps explain how my hockey analysis ratings are calculated but if you have any followup questions feel free to ask them in the comments.

 

Apr 05, 2013
 

I often get asked questions about hockey analytics, hockey fancy stats, how to use them, what they mean, etc. and there are plenty of good places to find definitions of various hockey stats but sometimes what is more important than a definition is some guidelines on how to use them. So, with that said, here are several tips that I have for people using advanced hockey stats.

Don’t over value Quality of Competition

I don't know how often I'll point out one player's poor stats or another player's good stats and immediately get the response "Yeah, but he always plays against the opponents' best players" or "Yeah, but he doesn't play against the opposition's best players", but most people who say that kind of thing have no real idea how much quality of competition actually affects a player's statistics. The truth is it is not nearly as much as you might think. Despite some coaches desperately trying to employ line matching techniques, the variation in quality of competition is dwarfed by the variation in quality of teammates, individual talent, and on-ice results. An analysis of Pavel Datsyuk and Valtteri Filppula showed that if Filppula had Datsyuk's quality of competition his CorsiFor% would drop from 51.05% to 50.90% and his GoalsFor% would drop from 55.65% to 55.02%. In the grand scheme of things, these are relatively minor effects.

Don’t over value Zone Stats either

Like quality of competition, many people will use zone starts to justify a player's good/poor statistics. The truth is zone starts are not a significant factor either. I have found that the effect of a zone start is largely eliminated about 10 seconds after the face off, and others have found the same. I account for zone starts by eliminating the 10 seconds after an offensive or defensive zone face off, and doing this has relatively little effect on a player's stats. Henrik Sedin is maybe the most extreme case of a player getting primarily offensive zone starts, and all those zone starts took him from a 55.2% fenwick player to a 53.8% fenwick player when zone starts are factored out. So in the most extreme case there is only about a 1.5 percentage point impact on a player's fenwick%, and the majority of players are nowhere near the zone start bias of Henrik Sedin; for most players you are probably talking something under 0.5%. As for individual stats, over the last 3 seasons H. Sedin had 34 goals and 172 points in 5v5 situations, and just 2 goals and 14 points came within 10 seconds of a zone face off, or about 5 points a year. If instead of 70% offensive zone face off deployment he had 50%, then instead of 14 points during those 10 second windows he might have had 10. That's a 4 point differential over 3 years for a guy who scored 172 points. In simple terms, about 2.3% of H. Sedin's 5v5 points can be attributed to his offensive zone start bias.

A corollary of this is that if zone starts don't matter much, a player's face off winning percentage probably doesn't matter much either, which is consistent with other studies. It's a nice skill to have, but not worth a lot.

Do not ignore Quality of Teammates

I have just told you to pretty much ignore quality of competition and zone starts, so what about quality of teammates? Well, to put it simply, do not ignore them. Quality of teammates matters, and matters a lot. Sticking with the Vancouver Canucks, let's use Alex Burrows as an example. Burrows mostly plays with the Sedin twins but has played on Kesler's line a bit too. Over the past 3 seasons he has played about 77.9% of his ice time with H. Sedin, about 12.3% with Ryan Kesler, and the remainder with Malhotra and others. Burrows' offensive production is significantly better when playing with H. Sedin: 88.7% of his goals and 87.2% of his points came during the 77.9% of his ice time he played with H. Sedin. If Burrows had played 100% of his ice time with H. Sedin and produced at the same rate, he would have scored 6 (9.7%) more goals and 13 (11%) more 5v5 points over the past 3 seasons (a rough sketch of this calculation follows below). This is far more significant than the 2.3% boost H. Sedin saw from all his offensive zone starts, and I am not certain my Burrows example is the most extreme in the NHL. How many more points would an average 3rd liner get if he played mostly with H. Sedin instead of other 3rd liners? Who you play with matters a lot. You can't look at Tyler Bozak's decent point totals and conclude he is a decent player without considering that he plays a lot with Kessel and Lupul, two very good offensive players.
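For the curious, here is a rough sketch of the Burrows counterfactual using the rounded shares quoted above; the small gap between the ~12% it produces and the ~11% in the text comes from the rounding of those inputs.

```python
# Rough counterfactual from the Burrows example: what boost would his 5v5
# scoring get if his rate with H. Sedin held over 100% of his ice time?
share_toi_with = 0.779   # fraction of TOI played with H. Sedin (from the text)
share_pts_with = 0.872   # fraction of points scored during that TOI (from the text)

relative_rate_with = share_pts_with / share_toi_with   # scoring rate with Sedin vs overall
boost = relative_rate_with - 1.0                       # extra production at 100% TOI together
print(f"Projected boost if always with H. Sedin: {boost:.1%}")  # ~11.9%; the post quotes ~11%
```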

Opportunity is not talent

Kind of along the same lines as the quality of teammates discussion, we must be careful not to confuse opportunity with results. Over the past 2 seasons Corey Perry has the second most goals of any forward in the NHL, trailing only Steven Stamkos. That might seem impressive, but it is a little less so when you consider Perry also had the 4th most 5v5 minutes during that time and the 11th most 5v4 minutes. Perry is a good goal scorer, but a lot of his goals come from opportunity (ice time) as much as individual talent. Among forwards with at least 1500 minutes of 5v5 ice time over the past 2 seasons, Perry ranks just 30th in goals per 60 minutes of ice time. That's still good, but far less impressive than "second only to Steven Stamkos", and he is actually well behind teammate Bobby Ryan (6th) in this metric. Perry is a very good player, but he benefits more than others from getting a lot of ice time, including PP ice time. Perry's goal production is in large part talent, but it is also somewhat opportunity driven, and we need to keep this in perspective.

Don’t ignore the percentages (shooting and save)

The percentages matter, particularly shooting percentages. I have shown that players can sustain elevated on-ice shooting percentages, I have shown that players can have an impact on their linemates' shooting percentages, and Tom Awad has shown that a significant portion of the difference between good players and bad players is finishing ability (shooting percentage). There is even evidence that goal based metrics (which incorporate the percentages) are a better predictor of post season success than fenwick based metrics. What corsi/fenwick metrics have going for them is more reliability over small sample sizes, but once you approach a full season's worth of data that benefit is largely gone and you get more benefit from having the percentages factored into the equation. If you want to get a better understanding of what considering the percentages can do for you, try doing a Malkin vs Gomez comparison or a Crosby vs Tyler Kennedy comparison over the past several years. Gomez and Kennedy actually look like relatively decent comparables if you just consider shot based metrics, but both are terrible percentage players while Malkin and Crosby are excellent percentage players, and it is the percentages that make Malkin and Crosby so special. This is an extreme example, but the percentages should not be ignored if you want a true representation of a player's abilities.

More is definitely better

One of the reasons many people have jumped on the shot attempt/corsi/fenwick band wagon is that they are more frequent events than goals and thus give you more reliable metrics. This is true over small sample sizes, but as explained above, the percentages matter too and should not be ignored. Luckily, for most players we have ample data to get past the sample size issues. There is no reason to evaluate a player based on half a season's data if that player has been in the league for several years. Look at 2, 3, 4 years of data. Look for trends. Is the player consistently a high corsi player? Is the player consistently a high shooting percentage player? Is the player improving? Declining? I have shown on numerous occasions that goals are a better predictor of future goal rates than corsi/fenwick starting at about one year of data, and multiple years are definitely better. Any conclusion about a player's talent level using a single season of data or less (regardless of whether it is corsi or goal based) is subject to a significant level of uncertainty. We have multiple years of data for the majority of players, so use it. I even aggregate multiple years into one data set for you on stats.hockeyanalysis.com, so it isn't even time consuming. The data is there, use it. More is definitely better.

WOWY’s are where it is at

In my mind WOWY's are the best tool for advanced player evaluation. WOWY stands for "with or without you" and looks at how a player performs while on the ice with a teammate and while on the ice without that teammate. What WOWY's can tell you is whether a particular player is a core player driving team success or a player along for the ride. Players that consistently make their teammates' statistics better when on the ice with them are the players you want on your team. Anze Kopitar is an example of a player who consistently makes his teammates better. Jack Johnson is an example of a player who does not, particularly when looking at goal based metrics. Then there are a large number of good players that neither drive your team's success nor hold it back, or as I like to say, complementary players. Ideally you build your team around a core of players like Kopitar who will drive success, fill it in with a group of complementary players, and quickly rid yourself of players like Jack Johnson who act as drags on the team.

 

Mar 20, 2013
 

I generally think that the majority of people give too much importance to quality of competition (QoC) and its impact on a player's statistics, but if we are going to use QoC metrics, let's at least try to use the best ones available. In this post I will take a look at some QoC metrics that are available on stats.hockeyanalysis.com and explain why they might be better than those typically in use.

OppGF20, OppGA20, OppGF%

These three stats are the TOI-against weighted averages of the GF20 (on-ice goals for per 20 minutes), GA20 (on-ice goals against per 20 minutes) and GF% (on-ice GF / [on-ice GF + on-ice GA]) of all the opposition players that a player lined up against. In fact, these stats go a bit further in that they exclude the ice time those opponents played against the player in question, so that a player won't influence his own QoC (not nearly as important as for QoT, but still a good thing to do). So, essentially these three stats are the goal scoring ability of the opposition players, the goal defending ability of the opposition players, and the overall value of the opposition players. Note that opposition goalies are not included in the calculation of OppGF20, as it is assumed goalies have no influence on scoring goals.

The benefit of using these stats is that they are easy to understand and are in units (goals per 20 minutes of ice time) that are easily understood. OppGF20 is essentially how many goals we expect the player's opponents to score, on average, per 20 minutes of ice time. The drawback is that if good players play against good players and bad players play against bad players, the opponents of a good player and of a bad player may post similar raw numbers even though the good player's opponents achieved theirs against tougher competition. There is no consideration for the context of the opponents' statistics, and that may matter.

Let’s take a look at the top 10 forwards in OppGF20 last season.

Player Team OppGF20
Patrick Dwyer Carolina 0.811
Brandon Sutter Carolina 0.811
Travis Moen Montreal 0.811
Carl Hagelin NY Rangers 0.806
Marcel Goc Florida 0.804
Tomas Plekanec Montreal 0.804
Brooks Laich Washington 0.800
Ryan Callahan NY Rangers 0.799
Patrik Elias New Jersey 0.798
Alexei Ponikarovsky New Jersey 0.795

You will notice that every single player is from the eastern conference. The reason for this is that the eastern conference is a more offensive conference. Taking a look at the top 10 players in OppGA20 will show the opposite.

Player Team OppGA20
Marcus Kruger Chicago 0.719
Jamal Mayers Chicago 0.720
Mark Letestu Columbus 0.721
Andrew Brunette Chicago 0.723
Andrew Cogliano Anaheim 0.723
Viktor Stalberg Chicago 0.724
Matt Halischuk Nashville 0.724
Kyle Chipchura Phoenix 0.724
Matt Beleskey Anaheim 0.724
Cory Emmerton Detroit 0.724

Now, what happens when we look at OppGF%?

Player Team OppGF%
Mike Fisher Nashville 51.6%
Martin Havlat San Jose 51.4%
Vaclav Prospal Columbus 51.3%
Mike Cammalleri Calgary 51.3%
Martin Erat Nashville 51.3%
Sergei Kostitsyn Nashville 51.3%
Dave Bolland Chicago 51.2%
Rick Nash Columbus 51.2%
Travis Moen Montreal 51.0%
Patrick Marleau San Jose 51.0%

These are predominantly western conference players with a couple of eastern conference players mixed in. The reason for this western conference bias is that the western conference was the better conference, and thus it makes sense that the QoC would be tougher for western conference players.

OppFF20, OppFA20, OppFF%

These are exactly the same stats as the goal based stats above, but instead of using goals for/against/percentage they use fenwick for/against/percentage (fenwick is shots plus shots that missed the net). I won't go into details, but you can find the top players in OppFF20 here, in OppFA20 here, and in OppFF% here. You will find a lot of similarities to the OppGF20, OppGA20 and OppGF% lists, but if you ask me which I think is a better QoC metric, I'd lean towards the goal based ones. The reason is that the small sample size issues we see with goal statistics are not nearly as significant in QoC metrics, because across all of a player's opponents luck will average out (for every unlucky opponent you are likely to have a lucky one to cancel out the effects). That said, if you are doing a fenwick based analysis it probably makes more sense to use a fenwick based QoC metric.

HARO QoC, HARD QoC, HART QoC

As stated above, one of the flaws of the above QoC metrics is that there is no consideration for the context of the opponents' statistics. One way around this is to use the HockeyAnalysis.com HARO (offense), HARD (defense) and HART (total/overall) ratings in calculating QoC. These are player ratings that take into account both quality of teammates and quality of competition (here is a brief explanation of what these ratings are). The HARO QoC, HARD QoC and HART QoC metrics are simply the average HARO, HARD and HART ratings of a player's opponents.

Here are the top 10 forwards in HARO QoC last year:

Player Team HARO QoC
Patrick Dwyer Carolina 6.0
Brandon Sutter Carolina 5.9
Travis Moen Montreal 5.8
Tomas Plekanec Montreal 5.8
Marcel Goc Florida 5.6
Carl Hagelin NY Rangers 5.5
Ryan Callahan NY Rangers 5.3
Brooks Laich Washington 5.3
Michael Grabner NY Islanders 5.2
Patrik Elias New Jersey 5.2

There are a lot of similarities to the OppGF20 list, with the eastern conference dominating. There are a few changes, but not too many, which really is not that big of a surprise to me: there is very little evidence that QoC has a significant impact on a player's statistics, so adjusting the opponents' stats for their own QoC will not change them much, and thus will not change a player's QoC much either. That said, I believe these should produce slightly better QoC ratings. Also note that a 6.0 HARO QoC indicates that the opponent players are expected to produce a 6.0% boost on the league average GF20.

Here are the top 10 forwards in HARD QoC last year:

Player Team HARD QoC
Jamal Mayers Chicago 6.0
Marcus Kruger Chicago 5.9
Mark Letestu Columbus 5.8
Tim Jackman Calgary 5.3
Colin Fraser Los Angeles 5.2
Cory Emmerton Detroit 5.2
Matt Beleskey Anaheim 5.2
Kyle Chipchura Phoenix 5.1
Andrew Brunette Chicago 5.1
Colton Gillies Columbus 5.0

And now the top 10 forwards in HART QoC last year:

Player Team HART QoC
Dave Bolland Chicago 3.2
Martin Havlat San Jose 3.0
Mark Letestu Columbus 2.5
Jeff Carter Los Angeles 2.5
Derick Brassard Columbus 2.5
Rick Nash Columbus 2.4
Mike Fisher Nashville 2.4
Vaclav Prospal Columbus 2.2
Ryan Getzlaf Anaheim 2.2
Viktor Stalberg Chicago 2.1

Shots and Corsi based QoC

You can also find similar QoC stats using shots as the base stat or using corsi (shots + shots that missed the net + shots that were blocked) on stats.hockeyanalysis.com but they are all the same as above so I’ll not go into them in any detail.

CorsiRel QoC

The most commonly used QoC metric currently seems to be CorsiRel QoC (found on behindthenet.ca), but in my opinion this is not so much a QoC metric as a 'usage' metric. CorsiRel is a statistic that compares the team's corsi differential when the player is on the ice to the team's corsi differential when the player is not on the ice. CorsiRel QoC is the average CorsiRel of all the player's opponents.
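For clarity, here is a minimal sketch of how CorsiRel and CorsiRel QoC are typically computed. The per-60 normalization and the TOI-against weighting are my assumptions about the details; the behindthenet.ca implementation may differ slightly.

```python
def corsi_rel(cf_on, ca_on, toi_on, cf_off, ca_off, toi_off):
    """CorsiRel sketch: the team's corsi differential rate with the player on
    the ice minus its rate with him off the ice (per-60 normalization is an
    assumption; the published definition may differ in the details)."""
    on_rate = (cf_on - ca_on) / toi_on * 60
    off_rate = (cf_off - ca_off) / toi_off * 60
    return on_rate - off_rate


def corsi_rel_qoc(opponents):
    """CorsiRel QoC sketch: TOI-against weighted average of the opponents'
    CorsiRel values; `opponents` is a list of (toi_against, corsi_rel) pairs."""
    total_toi = sum(toi for toi, _ in opponents)
    return sum(toi * cr for toi, cr in opponents) / total_toi
```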

The problem with CorsiRel is that good players on a bad team with little depth can put up really high CorsiRel numbers compared to similarly good players on a good team with good depth, because essentially it is comparing a player relative to his teammates. The more good teammates you have, the more difficult it is to put up a good CorsiRel. So, on any given team the players with a good CorsiRel are the best players on that team, but you can't compare the CorsiRel of players on different teams because the quality of the teams could be different.

CorsiRel QoC is essentially the average CorsiRel of all the player's opponents, but because CorsiRel is flawed, CorsiRel QoC ends up being flawed too. For players on the same team, the player with the highest CorsiRel QoC plays against the toughest competition, so in this sense it tells us who is getting the toughest minutes on the team, but again CorsiRel QoC is not really that useful when comparing players across teams. For these reasons I consider CorsiRel QoC more of a tool for seeing the usage of a player compared to his teammates; it is not, in my opinion, a true QoC metric.

I may be biased, but in my opinion there is no reason to use CorsiRel QoC anymore. Whether you use OppGF20, OppGA20, OppGF%, HARO QoC, HARD QoC, and HART QoC, or any of their shot/fenwick/corsi variants, they should all produce better QoC measures that are comparable across teams (which is the major drawback of CorsiRel QoC).

 

Feb 27, 2013
 

The last several days I have been playing around a fair bit with team data, analyzing various metrics for their usefulness in predicting future outcomes, and I have come across some interesting observations. Specifically, with more years of data, fenwick becomes significantly less important/valuable while goals and the percentages become more important/valuable. Let me explain.

Let’s first look at the year over year correlations in the various stats themselves.

Y1 vs Y2 Y12 vs Y34 Y123 vs Y45
FF% 0.3334 0.2447 0.1937
FF60 0.2414 0.1635 0.0976
FA60 0.3714 0.2743 0.3224
GF% 0.1891 0.2494 0.3514
GF60 0.0409 0.1468 0.1854
GA60 0.1953 0.3669 0.4476
Sh% 0.0002 0.0117 0.0047
Sv% 0.1278 0.2954 0.3350
PDO 0.0551 0.0564 0.1127
RegPts 0.2664 0.3890 0.3744

The above table shows the r^2 between past events and future events. The Y1 vs Y2 column is the r^2 between subsequent years (i.e. 0708 vs 0809, 0809 vs 0910, 0910 vs 1011, 1011 vs 1112). The Y12 vs Y34 column is a 2 year vs 2 year r^2 (i.e. 07-09 vs 09-11 and 08-10 vs 10-12), and the Y123 vs Y45 column is the 3 year vs 2 year comparison (i.e. 07-10 vs 10-12). RegPts is points earned during regulation play (using a win-loss-tie point system).
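Here is a minimal sketch of how one of these split-sample r^2 values could be computed with pandas. The file and column names are assumptions, and averaging per-season values is used as a stand-in for re-aggregating the raw counts.

```python
import pandas as pd

# Hypothetical team-season table: one row per team per season, with a column
# per stat (FF%, GF%, Sh%, Sv%, RegPts, ...).
teams = pd.read_csv("team_5v5_seasons.csv")

def split_r2(df, stat, past_seasons, future_seasons):
    """r^2 between a stat aggregated over past seasons and over future seasons."""
    past = df[df["season"].isin(past_seasons)].groupby("team")[stat].mean()
    future = df[df["season"].isin(future_seasons)].groupby("team")[stat].mean()
    return past.corr(future) ** 2

# e.g. the "Y123 vs Y45" column: 2007-10 predicting 2010-12
print(split_r2(teams, "FF%", ["0708", "0809", "0910"], ["1011", "1112"]))
```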

As you can see, with increased sample size the fenwick stats' ability to predict future fenwick stats diminishes, particularly for fenwick for and fenwick %. All the other stats generally get better with increased sample size, except for shooting percentage, which has essentially no predictive power over future shooting percentage.

The increased predictive nature of the goal and percentage stats with increased sample size makes perfect sense as the increased sample size will decrease the random variability of these stats but I have no definitive explanation as to why the fenwick stats can’t maintain their predictive ability with increased sample sizes.

Let’s take a look at how well each statistic correlates with regulation points using various sample sizes.

1 year 2 year 3 year 4 year 5 year
FF% 0.3030 0.4360 0.5383 0.5541 0.5461
GF% 0.7022 0.7919 0.8354 0.8525 0.8685
Sh% 0.0672 0.0662 0.0477 0.0435 0.0529
Sv% 0.2179 0.2482 0.2515 0.2958 0.3221
PDO 0.2956 0.2913 0.2948 0.3393 0.3937
GF60 0.2505 0.3411 0.3404 0.3302 0.3226
GA60 0.4575 0.5831 0.6418 0.6721 0.6794
FF60 0.1954 0.3058 0.3655 0.4026 0.3951
FA60 0.1788 0.2638 0.3531 0.3480 0.3357

Again, the values are r^2 with regulation points. Nothing too surprising there, except maybe that team shooting percentage is so poorly correlated with winning, because at the individual level it is clear that shooting percentages are highly correlated with goal scoring. It is apparent from the table above that team save percentage is a significant factor in winning (or, as my fellow Leaf fans can attest, that a lack of save percentage is a significant factor in losing).

The final table I want to look at is how well a few of the stats predict future regulation time point totals.

Y1 vs Y2 Y12 vs Y34 Y123 vs Y45
FF% 0.2500 0.2257 0.1622
GF% 0.2214 0.3187 0.3429
PDO 0.0256 0.0534 0.1212
RegPts 0.2664 0.3890 0.3744

The values are r^2 with future regulation point totals. Regardless of the time frame used, past regulation time point totals are the best predictor of future regulation time point totals. Single season FF% is slightly better than GF% at predicting the following season's regulation point totals, but with 2 or more years of data GF% becomes a significantly better predictor, as the predictive ability of GF% improves while that of FF% declines. This makes sense, since we observed earlier that increasing the sample size improves GF%'s ability to predict future GF% while FF% gets worse, and that GF% is more highly correlated with regulation point totals than FF%.

One thing that is clear from the above tables is that defense has been far more important to winning than offense. Regardless of whether we look at GF60, FF60, or Sh%, their level of importance trails that of their defensive counterparts (GA60, FA60 and Sv%), usually significantly. The defensive stats correlate more highly with winning and are more consistent from year to year. Defense and goaltending win in the NHL.

What is interesting though is that this largely differs from what we see at the individual level. At the individual level there is much more variation in the offensive stats, indicating individual players have more control over the offensive side of the game. This might suggest that team philosophies drive the defensive side of the game (i.e. how defensive minded the team is, the playing style, etc.) while the offensive side of the game is dominated more by the offensive skill level of the individual players. At the very least it is something worthy of further investigation.

The last takeaway from this analysis is the declining predictive value of fenwick/corsi with increased sample size. I am not quite sure what to make of this, and if anyone has any theories I'd be interested in hearing them. One theory I have is that fenwick rates are not a part of the average GM's player personnel decisions, and thus over time, as players come and go, fenwick rates will begin to vary. If this is the case, then this may represent an area of value that a GM could exploit.

 

Feb 21, 2013
 

Over the past few years I have had a few discussions with other Leaf fans about the relative merits of Francois Beauchemin. Many Leaf fans argue that he was a good two-way defenseman who can play tough minutes and is the kind of defenseman the Leafs are still in need of. I, on the other hand, have never had quite as optimistic a view of Beauchemin, and I don't think he would make this team any better.

On some level I think part of the difference in opinion is that many look at his corsi numbers, which aren't too bad, whereas I prefer to look at his goal numbers, which have generally not been so good. So, let's take a look at Beauchemin's WOWY numbers and see if there is in fact a divergence between his corsi WOWYs and his goal WOWYs, starting with his 2009-11 5v5 CF% WOWY.

[Chart: Beauchemin 2009-11 CF% WOWY]

I have included a diagonal line, which is a 'neutral' line where players perform equally well with and without Beauchemin. Anything to the right of/below the line indicates the player played better with Beauchemin than without, and anything to the left of/above the line indicates they played worse with Beauchemin. As you can see, the majority of players had a better CF% with Beauchemin than without (a rough sketch of how this kind of chart can be drawn follows below). Now, let's take a look at GF% WOWY.
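Here is a minimal matplotlib sketch of this kind of WOWY chart with the neutral diagonal. The file and column names are assumptions for illustration.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical WOWY table: one row per teammate with his CF% (or GF%) while
# with and while apart from the player being evaluated.
wowy = pd.read_csv("beauchemin_2009_11_wowy.csv")  # assumed columns below

fig, ax = plt.subplots()
ax.scatter(wowy["CFpct_with"], wowy["CFpct_without"])
lo = min(wowy["CFpct_with"].min(), wowy["CFpct_without"].min())
hi = max(wowy["CFpct_with"].max(), wowy["CFpct_without"].max())
ax.plot([lo, hi], [lo, hi], linestyle="--")  # neutral line: equal with and without
ax.set_xlabel("Teammate CF% with player")
ax.set_ylabel("Teammate CF% without player")
ax.set_title("WOWY: points below the line did better with the player")
plt.show()
```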

[Chart: Beauchemin 2009-11 GF% WOWY]

While a handful of players had a better GF% with Beauchemin, the majority were a little worse off. There is a clear difference between Beauchemin's CF% WOWY and his GF% WOWY. What is interesting is that this difference can be observed in 2007-08, 2009-10, 2010-11, and 2011-12 (he was injured for much of 2008-09, so his WOWY data for that season is not reliable due to the smaller sample size). Looking at his 5-year WOWY charts, you get a clear picture that Beauchemin seemingly has a skill for 'driving play' but not for 'driving goals'. Let's dig a little further to see if we can determine what his 'problem' is by looking at his 2009-11 two year CF20, GF20, CA20 and GA20 WOWYs.

CF20:

[Chart: Beauchemin 2009-11 CF20 WOWY]

GF20:

[Chart: Beauchemin 2009-11 GF20 WOWY]

As you can clearly see, Beauchemin appears to be much better at generating shots and shot attempts than at generating goals. The majority of players have a higher corsi for rate when with Beauchemin than when not with him, but the majority also have a lower goals for rate. What about the 'against' rates?

CA20:

[Chart: Beauchemin 2009-11 CA20 WOWY]

GA20:

[Chart: Beauchemin 2009-11 GA20 WOWY]

For CA20 and GA20 it is better to be above/left of the diagonal line because, unlike GF%/CF%/GF20/CF20, a smaller number is better. There doesn't seem to be quite as much of a difference between CA20 and GA20 as there is between CF20 and GF20, so the difference between Beauchemin's CF% and GF% is driven by an inability to convert shots and shot attempts into goals rather than by the defensive side of the game. That said, there is no clear evidence that Beauchemin makes his teammates any better defensively.

There are two points I wanted to make with this post.

  1. Leaf fans probably shouldn’t be missing Beauchemin.
  2. For a lot of players a corsi evaluation will give you a reasonable assessment, but there are also many players for whom corsi will not tell the complete story. Some players consistently see a divergence between their goal stats and their corsi stats, and it is important to take that into consideration.

 

Nov 08, 2012
 

Eric T. over at NHL Numbers had a post last week summarizing the current state of our statistical knowledge with respect to accounting for zone start differences. If you haven't read it, definitely go read it, not only because it is a good read but because it concludes that the way the majority of people have been making the adjustment is wrong.

Overall, no two estimates are in direct agreement, but the analyses that are known to derive from looking directly at the outcomes immediately following a faceoff converge in the range of 0.25 to 0.4 Corsi shots per faceoff — one-third to one-half of the figure in widespread use. It is very likely that we have been overestimating the importance of faceoffs; they still represent a significant correction on shot differential, but perhaps not as large as has been previously assumed.

In the article Eric refers to my observation that eliminating the 10 seconds after a zone start effectively removes any effect that the zone start had on the game. From there he combined my zone start adjusted data found at stats.hockeyanalysis.com with zone start data from behindthenet.ca and came up with an estimate that a zone start is worth 0.35 corsi. He did this by subtracting the 10 second zone start adjusted corsi from standard 5v5 corsi and then running a regression against the extra offensive zone starts each player had. In the comments I discussed some further analysis I did on this using my own data (i.e. not the stuff on behindthenet.ca) and came up with similar, though slightly different, numbers. In any event, I figured the content of that comment was worthy of its own post here.

So, when I ran the correlation between extra offensive zone starts and the difference between 5v5 corsi and 5v5 10-second zone start adjusted corsi, I got the following (using all players with >1000 minutes of ice time over the last 5 seasons):

My calculation comes up with a slope of 0.3043, which is a little below Eric's, but since I don't know the exact methodology he used that might explain the difference (i.e. I am not sure whether Eric used the complete 5 years of data or individual seasons). A rough sketch of this kind of regression is included below.
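Here is a rough sketch of that regression with numpy/pandas. The file and column names are assumptions, and exactly which corsi measure (for, against, or the differential) gets differenced is left as a placeholder.

```python
import numpy as np
import pandas as pd

# Hypothetical player table: standard 5v5 corsi, 10-second zone start adjusted
# corsi, extra offensive zone starts (OZ minus DZ faceoffs) and TOI.
df = pd.read_csv("players_5yr.csv")           # assumed file and columns
df = df[df["toi_minutes"] > 1000]

x = df["extra_oz_starts"].to_numpy()
y = (df["corsi_5v5"] - df["corsi_5v5_zs_adj"]).to_numpy()  # corsi attributable to zone starts

slope, intercept = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"slope = {slope:.4f} corsi per extra OZ start, r^2 = {r2:.2f}")
```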

What is interesting is that when I explored things further, I noticed that the results varied across positions but varied very little across talent levels. Here are some more correlations for different positions and ice time restrictions.

Position Slope r^2
All Players >1000 min. 0.30 0.55
Skaters >1000 min. 0.28 0.52
Forwards >1000 min. 0.26 0.50
Defensemen >1000 min. 0.33 0.57
Goalies >1000 min. 0.44 0.73
Forwards >500 min. 0.26 0.50
Forwards >2500 min. 0.26 0.52
Forwards 500-2500 min. 0.26 0.39

Two observations:

1.  The slope for forwards is less than the slope for defensemen which is (quite a bit) less than the slope for goalies.

2.  There is no variation in slope no matter what restrictions we put on a forward's ice time.

There isn't really much to say regarding the second observation except that it is nice to see consistency, but the first observation is quite interesting. Goalies, who have no impact on corsi, see the greatest zone start influence on corsi of any position. It is a little odd, but I think it addresses one of the concerns that Eric pointed out in his article:

The next step would be to remove the last vestige of sampling bias from our analysis. The approaches that focus on the period immediately after the faceoff reduce the impact of teams’ tendency to use their best forwards in the offensive zone, but certainly do not remove it altogether.

I think that is exactly what we are witnessing here: teams put out their best defensive players and, maybe more importantly, their best face off men for defensive zone face offs. If David Steckel, who is an excellent face off man, is taking all the defensive zone face offs, that is naturally going to suppress the corsi events immediately after the defensive zone face off because he is going to win the draw more often than not. There is probably more line matching done for zone face offs than during regular play, so that line matching suppresses some of the zone start impact. It is more difficult for the opposition to line match when changing lines on the fly, so during regular play a good coach can more easily get his offensive players favourable matchups. The result is that during normal 5v5 play offensive players might see a boost to their corsi (because they can exploit good matchups), while during offensive zone face offs their corsi is suppressed because they will almost always be facing good defensive players and top face off men. Thus, the boost to corsi from a zone start is not as extreme as it should be for offensive players. The opposite is true for defensive players.

Defensemen are less often line matched, so we see their corsi boost from an offensive zone face off come in a little higher than that of forwards, but it isn't nearly as high as the goalies' because there are defensemen that are primarily used in offensive situations and others that are primarily used in defensive situations.

Goalies though, tell us the real effect because they are always on the ice and they are not subject to any line matching.  In the table above you will notice that goalies have a significantly higher slope and an impressively high r^2.  I feel I have to post the chart of the correlation because it really is a nice chart to look at.

I have looked at a lot of correlations and charts in hockey stats but very few of them are as nice with as high a correlation as the chart above.

I believe this is telling us that an offensive zone start is worth 0.44 corsi, but only when a player is facing opponents as defensively capable as those he would face during regular 5v5 play, which I speculate above is not necessarily (or likely) the case. The 0.44 adjustment really only applies to an idealized situation that doesn't normally occur for any players other than goalies. So where does that leave us? Should we use a zone start adjustment of 0.44 corsi for all players, or should we use something like 0.33 for defensemen and 0.26 for forwards? The answer isn't so simple. One could argue that we should apply 0.44 to all players and then make some sort of QoC adjustment, and that would make some sense. But if we are not intending to apply a QoC adjustment, does that mean we should use 0.33 and 0.26? Maybe, but that is a little inconsistent, because it would mean applying a QoC adjustment only to the zone start portion of a player's stats and not to the rest. The answer for me is what I have been doing for the past little while: don't attempt to adjust a player's stats based on zone start differences at all, and instead simply ignore the portion of play that is subject to being influenced by zone starts, the 10 seconds after a zone start face off. To me it seems like the simplest and easiest thing to do (a rough sketch of that filtering step follows).
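Here is a minimal sketch of that filtering step, assuming a play-by-play table with event times in game seconds and a faceoff table with times and zones (hypothetical columns; handling multiple games and adjusting TOI the same way is omitted for brevity).

```python
import pandas as pd

def zone_start_adjusted(events: pd.DataFrame, faceoffs: pd.DataFrame) -> pd.DataFrame:
    """Drop all events occurring within 10 seconds after an offensive or
    defensive zone faceoff; what remains is the zone start adjusted sample.
    Assumes 'time' is in game seconds for a single game."""
    zone_fo = faceoffs[faceoffs["zone"].isin(["offensive", "defensive"])]
    keep = pd.Series(True, index=events.index)
    for t in zone_fo["time"]:
        keep &= ~events["time"].between(t, t + 10)
    return events[keep]
```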

 

Jun 26, 2012
 

I have had a lot of battles with the pro-Corsi crowd with regards to the merits of using Corsi as a player evaluation tool. I still get people dismissing my goal based analysis (which seems really strange, since goals are what matter in hockey), so I figured I should summarize my position in one easy to understand post. So, with that, here are several significant reasons why I don't like to use a Corsi based player analysis.

1.  Look at the list of players with the top on-ice shooting percentages over the past 5 seasons and compare it to the list of players with the top corsi for per 20 minutes of ice time, and you'll find that the shooting percentage list is far more representative of top offensive players than the corsi for list.

2.  Shooting percentage is a talent and is sustainable. Three year shooting percentage is as good a predictor of the following 2 seasons' goal scoring rates as 3 year fenwick rates are, and 3 year goal rates are a far better predictor:

r^2
2007-10 FF20 vs 2010-12 GF20 0.253
2007-10 SH% vs 2010-12 GF20 0.244
2007-10 GF20 vs 2010-12 GF20 0.363

3.  I have even shown that one year of GF20 is, on average, as good a predictor of the following season's GF20 as FF20 is of the following season's FF20, so with even just one full season of data goal rates are as good a metric of offensive talent as fenwick rates. Only when the sample size is less than one season (and for almost all NHL regulars we have at least a season's worth of data) is fenwick rate a better metric for evaluating offensive talent.

4.  Although difficult to identify, I believe I have shown players can suppress opposition shooting percentage.

5.  Zone starts affect shot/corsi/fenwick stats significantly more than they affect goal stats, thus non-adjusted shot/corsi/fenwick data are less useful than non-adjusted goal data.

6.  Although not specifically a beef with Corsi, much of the corsi analysis currently being done does not split out offensive corsi and defensive corsi but rather looks at them as a percentage or as a +/- differential. I believe this is a poor way of doing analysis because it really is useful to know whether a player is good because he produces a lot of offense or because he is great defensively. Plus, when evaluating a player offensively we need to consider the offensive capability of his teammates and the defensive capability of his opposition, not the overall ability of those players.

7.  I have a really hard time believing that 8 of the top 9 corsi% players over the past 5 seasons are Red Wings purely because they are all really talented, and that it has nothing to do with the system they play or some other non-individual talent factor.

8.  Try doing a Malkin vs Gomez fenwick/corsi comparison and then do the same with goals. Gomez actually has a very good and very comparable fenwick rating to Malkin, but Malkin is a far better player at producing goals thanks to his far superior on-ice shooting percentage (FSh% = fenwick shooting percentage = goals / fenwick for). Gomez has a much poorer on-ice shooting percentage than Malkin every single season, and this is why Malkin is the far better player. Fenwick/Corsi doesn't account for this.

Season(s)   Malkin FF20   Gomez FF20   Malkin GF20   Gomez GF20   Malkin FSh%   Gomez FSh%
2011-12     16.5          14.0         1.301         0.660        7.9%          4.7%
2010-11     16.1          16.4         0.949         0.534        5.9%          3.3%
2009-10     15.3          14.2         1.112         0.837        7.3%          5.9%
2008-09     12.4          16.8         1.163         0.757        9.4%          4.5%
2007-08     14.1          15.9         1.206         0.792        8.5%          5.0%
2007-11     14.7          14.7         1.171         0.745        8.0%          5.1%

 

So there you have it. Those are some of the main reasons why I don't use Corsi in player analysis. This isn't to say Corsi isn't a useful metric; it is useful for identifying which players are better at controlling play. Unfortunately, controlling play is only part of the game, so if you want to conduct a complete, thorough evaluation of a player, goal based stats are required.

 

Apr 19, 2012
 

Prior to the season, Gabe Desjardins and I had a conversation over at MC79hockey.com in which I predicted several players would combine for a 5v5 on-ice shooting percentage above 10.0%, while league average is just shy of 8.0%. I documented this in a post prior to the season. In short, I predicted the following:

  • Crosby, Gaborik, Ryan, St. Louis, H. Sedin, Toews, Heatley, Tanguay, Datsyuk, and Nathan Horton will have a combined on-ice shooting percentage above 10.0%
  • Only two of those 10 players will have an on-ice shooting percentage below 9.5%

So, how did my prediction fare? The following table tells all.

Player GF SF SH%
SIDNEY CROSBY 31 198 15.66%
MARTIN ST._LOUIS 74 601 12.31%
ALEX TANGUAY 43 371 11.59%
MARIAN GABORIK 57 582 9.79%
JONATHAN TOEWS 51 525 9.71%
NATHAN HORTON 34 359 9.47%
HENRIK SEDIN 62 655 9.47%
BOBBY RYAN 52 552 9.42%
PAVEL DATSYUK 50 573 8.73%
DANY HEATLEY 42 611 6.87%
Totals 496 5027 9.87%

Well, technically neither of my predictions came true. Only 5 players had on-ice shooting percentages above 9.5%, and as a group they did not maintain a shooting percentage above 10.0%. That said, my prediction wasn't all that far off. 8 of the 10 players had an on-ice shooting percentage above 9.42%, and as a group they had an on-ice shooting percentage of 9.87%. If Crosby had been healthy for most of the season, or if the Minnesota Wild hadn't been so bad, the group would have reached the 10.0% mark. So, when all is said and done, while my predictions didn't technically come true, the intent of the prediction did. Shooting percentage is a talent, is maintainable, and can be used as a predictor of future performance.

I now have 5 years of on-ice data on stats.hockeyanalysis.com, so I thought I would take a look at how sustainable shooting percentage is using that data. To do this I took all forwards with 350 minutes of 5v5 zone start adjusted ice time in each of the past 5 years and used the first 3 years of data (2007-08 through 2009-10) to predict the final 2 years (2010-11 and 2011-12). This means using at least 1050 minutes of data over 3 seasons to predict at least 700 minutes of data over 2 seasons. The following chart shows the results for on-ice shooting percentage.

Clearly there is some persistence in on-ice shooting percentage. How does this compare to something like fenwick for rates (using FF20 – fenwick for per 20 minutes)?

Ok, so FF20 seems to be more persistent, but that doesn't take away from the fact that shooting percentage is persistent and a reasonable predictor of future shooting percentage. (FYI, the guy out on his own in the upper left is Kyle Wellwood.)

The real question is, are either of them any good at predicting future goal scoring rates (GF20 – goals for per 20 minutes) because really, goals are ultimately what matters in hockey.

Ok, so both on-ice shooting percentage and on-ice fenwick for rates are somewhat reasonable predictors of future on-ice goals for rates, with a slight advantage to on-ice shooting percentage (sorry, just had to point that out). This is not inconsistent with what I found a year ago when I used 4 years of data to calculate 2 year vs 2 year correlations.

Of course, I would never suggest we use shooting percentage alone as a player evaluation tool, just as I don't suggest we use fenwick as a player evaluation tool. Both are sustainable, both can be used as predictors of future success, and both are true player skills, but the best predictor of future goal scoring is past goal scoring, as evidenced by the following chart.

That is pretty clear evidence that goal rates are the best predictor of future goal rates and thus, in my opinion anyway, the best player evaluation tool. Yes, there are still sample size issues with using goal rates over less than a full season's worth of data, but for all those players for whom we have multiple seasons' worth of data (or at least one full season with >~750 minutes of ice time), using anything other than goals as your player evaluation tool will potentially lead to less reliable and less accurate player evaluations.

As for the defensive side of the game, I have not found a single reasonably good predictor of future goals against rates, regardless of whether I look at corsi, fenwick, goals, shooting percentage or anything else.  This isn’t to suggest that players can’t influence defense, because I believe they can, but rather that there are too many other factors that I haven’t figured out how to isolate and remove from the equation.  Most important is the goalie and I feel the most difficult question to answer in hockey statistics is how to separate the goalie from the defenders. Plus, I believe there are far fewer players that truly focus on defense and thus goals against is largely driven by the opposition.

Note:  I won’t make any promises but my intention is to make this my last post on the subject of sustainability of on-ice shooting percentage and the benefit of using a goal based player analysis over a corsi/fenwick based analysis.  For all those who still fail to realize goals matter more than shots or shot attempts there is nothing more I can say.  All the evidence is above or in numerous other posts here at hockeyanalysis.com.  On-ice shooting percentage is a true player talent that is both sustainable and a viable predictor of future performance at least on par with fenwick rates.  If you choose to ignore reality from this point forward, it is at your own peril.