Sep 14, 2013
 

A while back I came up with a stat which at the time I called LT Index, short for Leading-Trailing Index. It is essentially the percentage of his team’s 5v5 ice time when leading that a player is on the ice for, divided by the percentage of his team’s 5v5 ice time when trailing that he is on the ice for (counting only games in which the player played). I have decided to rename this statistic Usage Ratio since it gives us an indication of whether a player is used more in defensive situations (i.e. protecting a lead, giving a Usage Ratio above 1.00) or in offensive situations (i.e. trailing and in need of a goal, giving a Usage Ratio below 1.00). I think it does a pretty good job of identifying how a player is used.
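To make the definition concrete, here is a minimal sketch of the calculation in Python. The per-game field names are hypothetical placeholders for whatever your ice time source provides, not the actual code behind my numbers.

```python
# Minimal sketch of the Usage Ratio calculation described above.
# Field names ("team_leading_toi", etc.) are hypothetical.

def usage_ratio(games):
    """games: records (one per game the player appeared in) with team and
    player 5v5 ice time, in minutes, split by game state."""
    team_lead = sum(g["team_leading_toi"] for g in games)
    team_trail = sum(g["team_trailing_toi"] for g in games)
    player_lead = sum(g["player_leading_toi"] for g in games)
    player_trail = sum(g["player_trailing_toi"] for g in games)
    pct_leading = player_lead / team_lead      # share of team's leading TOI
    pct_trailing = player_trail / team_trail   # share of team's trailing TOI
    return pct_leading / pct_trailing          # >1.00 = defensive usage
```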

I then compared players’ Usage Ratios to their 5v5 tied statistics, on the theory that a player used in a defensive role when leading or trailing is more likely to be used in a defensive role when the game is tied. This is also an out-of-sample comparison (which is always a nice thing to be able to do) since we are using leading/trailing situations to identify offensive vs defensive players and then comparing to 5v5 tied situations, which in no way overlap the leading or trailing data.

Let’s start by looking at forwards, using data from the last 3 seasons and including all forwards with >500 minutes of 5v5 tied ice time. The following charts compare Usage Ratio with 5v5 Tied CF%, CF60 and CA60.

[Chart: Usage Ratio vs 5v5 Tied CF%]

[Chart: Usage Ratio vs 5v5 Tied CF60]

[Chart: Usage Ratio vs 5v5 Tied CA60]

Usage Ratio is on the horizontal axis with more defensive players to the right and offensive players to the left.

Usage Ratio has some correlation with CF%, but that correlation is due solely to its connection with generating shot attempts for, not with restricting shot attempts against. Players we identify as offensive players via the Usage Ratio statistic do in fact generate more shots, but players we identify as defensive players do not suppress opposition shots at all. In fact, Usage Ratio and 5v5 tied CA60 are about as uncorrelated as you can possibly get. One might argue this is because those defensive players are playing against offensive players (i.e. tough QoC), but if that were the case then those offensive players would be playing against defensive players (i.e. tough defensive QoC) and thus should see their shot attempts suppressed as well. We don’t observe that. It just seems that players used as defensive players are no better at suppressing shot attempts against than offensive players but are, as expected, worse at generating shot attempts for.

Before we move on to defensemen let’s take a look at how Usage Ratio compares with shooting percentage and GF60.

[Chart: Usage Ratio vs 5v5 Tied Shooting Percentage]

[Chart: Usage Ratio vs 5v5 Tied GF60]

As seen with CF60, Usage Ratio is correlated with both shooting percentage and GF60, and the correlation with GF60 is stronger than with CF60. Note that the sample size for 3 seasons (or 2 1/2, actually) of 5v5 tied data is about the same as the sample size for one season of 5v5 data (players in this study have between 500 and 1300 5v5 tied minutes, which is roughly equivalent to how many 5v5 minutes forwards play over the course of one full season).

FYI, the dot up at the top with the GF60 above 5 is Sidney Crosby (yeah, he is in a league of his own offensively) and the dot to the far right (heavy defensive usage) is Adam Hall.

Now let’s take a look at defensemen.

[Chart: Usage Ratio vs CF%, defensemen]

[Chart: Usage Ratio vs CF60, defensemen]

[Chart: Usage Ratio vs CA60, defensemen]

There really isn’t much going on here; how a defenseman is used really doesn’t tell us much at all about his 5v5 stats (only a marginal correlation with CF60). As with forwards, defensemen we identify as being used in a defensive manner are not any better at reducing shots against than defensemen we identify as being used in an offensive manner.

To summarize the above: players who get more minutes when playing catch-up are in fact better offensive players, particularly among forwards, but players who get more minutes when protecting a lead are not necessarily any better defensively. We do know that better defensive players exist (the range of CA60 among forwards is similar to the range of CF60, so if there is offensive talent there is likely defensive talent too), and yet coaches aren’t playing these defensive players when protecting a lead. Coaches in general just don’t know who their good defensive players are.

Still not sold on this? Well, let’s compare 5v5 defensive zone start percentage (percentage of face offs taken in the defensive zone) to CF60 and CA60 (for forwards) in 5v5 tied situations.

[Chart: Defensive Zone FO% vs CF60]

Percentage of face offs in the defensive zone is on the horizontal axis and CF60 is on the vertical axis. This chart tells us that the fewer defensive zone face offs a forward gets (and thus, likely, the more offensive zone face offs), the more shot attempts for he produces. In short, players who get offensive zone starts generate more shot attempts.

[Chart: Defensive Zone FO% vs CA60]

The opposite is not true though. Players who get more defensive zone face offs don’t give up any more or fewer shots than their low defensive zone face off counterparts. This tells me that if there is any connection between zone starts and CF%, it is solely because players who get offensive zone starts are better offensive players, not because players who get defensive zone starts are better defensive players.

You might again be saying to yourself, ‘The players getting the defensive zone starts are playing against better offensive players, so doesn’t it make sense that their CA60 is inflated above their talent level (which presumably is better than average defensively)?’ This might be true, but if zone starts significantly impacted performance (as they would if that argument held), either directly or indirectly because zone starts are linked to QoC, then there should be more symmetry between the charts. There isn’t though. Let’s look at what these two charts tell us:

  1. The first chart tells us that players who get offensive zone starts generate more shot attempts.
  2. The second chart tells us that players who get defensive zone starts don’t give up more shot attempts against.

If zone starts were a major factor in results, those two statements don’t jibe. How can one side of the ledger show an advantage while the other side is neutral? The only way those statements work in conjunction with each other is if zone starts don’t significantly impact results, which is what I believe (and have observed before).

But, if zone starts do not significantly impact results, then the results we see in the two charts above are driven by the players’ talent levels. Knowing that, we can once again observe that coaches are doing a decent job of identifying offensive players to start in the offensive zone but a poor job of identifying defensive players to deploy in the defensive zone.

All of this is to say that NHL coaches generally do a poor job of identifying their best defensive players, so if you think the guy getting all those defensive zone starts (aka ‘tough minutes’) is likely a defensive wizard, think again. He may not be.

 

Jun 18, 2013
 

If you have been following the discussion between Eric T and me, you will know that there has been a rigorous discussion/debate over where hockey analytics is at, where it is going, and the benefits of applying “regression to the mean” to shooting percentages when evaluating players. For those who haven’t and want to read the whole debate, you can start here, then read this, followed by this and then this.

The original reason for my first post on the subject is that I rejected Eric T’s notion that we should “steer” people researching hockey analytics towards “modern hockey thought”, in essence because I don’t think we should ever be closed-minded, especially when hockey analytics is pretty new and there is still a lot to learn. This then spread into a discussion of the benefits of regressing shooting percentages to the mean, which Eric T supported wholeheartedly, while I suggested that further research into isolating individual talent (even goal talent) through adjusting for QoT, QoC, usage, score effects, coaching styles, etc. can be equally beneficial, and that focus need not be on regressing to the mean.

In Eric T’s last post on the subject he finally got around to actually implementing a regression methodology (though he didn’t post any player specifics, so we can’t see where it still fails badly) in which he utilized time on ice to choose a mean to which a player’s shooting percentage should regress. This is certainly better than regressing to the league-wide mean, which he initially proposed, but the benefits are still somewhat modest. For players who played 1000 minutes in the 3 years of 2007-10 and 1000 minutes in the 3 years from 2010-13, the predictive power of his regressed GF20 for future GF20 was 0.66, which is 0.05 higher than the 0.61 predictive power of raw GF20. So essentially his regression algorithm improved predictive power by 0.05, while 0.34 remains unexplained. The question I attempt to answer today is: for a player who has played 1000 minutes of ice time, how much of his observed stats is true randomness and how much is simply unaccounted-for skill/situational variance?

When we look at 2007-10 GF20 and compare it to 2010-13 GF20 there are a lot of factors that can explain the differences: a change in quality of competition, a change in quality of teammates, a change in coaching style, the natural career progression of the player, zone start usage, and possibly any number of other factors we do not currently know about, as well as true randomness. To overcome all of the non-random factors that we do not yet know how to fully adjust for, and get a true measure of the random component of a player’s stats, we need two sets of data whose attributes (QoT, QoC, usage, etc.) are as similar to each other as possible. The way I did this was to take each of the 6870 games played over the past 6 seasons, split them into even and odd games, and calculate each player’s GF20 over each of those segments. This should, more or less, split a player’s 6 years evenly in half such that all those other factors are roughly equivalent across halves. The following table shows how well the even half predicts the odd half based on how many total minutes (across both halves) the player has played.

Total Minutes   Even GF20 vs Odd GF20 (r)
>500            0.79
>1000           0.85
>1500           0.88
>2000           0.89
>2500           0.88
>3000           0.88
>4000           0.89
>5000           0.89

For the group of players with more than 500 minutes of ice time (~250 minutes or more in each odd/even half), the predictive power of GF20 is 0.79, putting the upper bound on true randomness at 0.21. With greater than 1000 minutes, randomness drops to 0.15, and from 1500 minutes up it sits around 0.11-0.12. It’s interesting that setting the minimum above 1500 minutes (~750 in each even/odd half) doesn’t further reduce the true randomness in GF20, which seems a little counterintuitive.
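For concreteness, here is a rough sketch of the even/odd split described above. The record layout and helper names are illustrative, not my actual code, but the logic is the same: pool each player’s on-ice goals and minutes into two bins by game number and correlate the resulting GF20s.

```python
# Sketch of the even/odd game split used to estimate true randomness.
# records: (game_number, player, on_ice_gf, toi_minutes) - names hypothetical.
from collections import defaultdict
from statistics import correlation  # Python 3.10+

def even_odd_gf20_correlation(records, min_total_toi=500):
    halves = defaultdict(lambda: [[0.0, 0.0], [0.0, 0.0]])  # player -> [gf, toi] per half
    for game_no, player, gf, toi in records:
        half = halves[player][game_no % 2]  # even games in one bin, odd in the other
        half[0] += gf
        half[1] += toi
    even, odd = [], []
    for (gf_e, toi_e), (gf_o, toi_o) in halves.values():
        if toi_e + toi_o >= min_total_toi and toi_e > 0 and toi_o > 0:
            even.append(20 * gf_e / toi_e)  # GF20 in even games
            odd.append(20 * gf_o / toi_o)   # GF20 in odd games
    return correlation(even, odd)           # the table's "GF20 vs GF20" number
```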

Let’s take a look at how well fenwick shooting percentage in even games predicts fenwick shooting percentage in odd games.

Total Minutes   Even FSh% vs Odd FSh% (r)
>500            0.54
>1000           0.64
>1500           0.71
>2000           0.73
>2500           0.72
>3000           0.73
>4000           0.72
>5000           0.72

Like GF20, the true randomness of fenwick shooting percentage seems to bottom out at around 1500 minutes of ice time, and there appears to be no benefit to increasing the minimum minutes played beyond that.

To summarize what we have learned, here are the numbers for forwards with >1000 minutes in each of 2007-10 and 2010-13.

GF20 predictive power, 3yr vs 3yr    0.61
True randomness estimate             0.11
Unaccounted-for factors estimate     0.28
Eric T’s regression benefit          0.05

There is no denying that a regression algorithm can provide modest improvements, but this only addresses about 30% of what GF20 is failing to predict, and it is highly doubtful that further refining the regression algorithm will yield more than marginal benefits. The real benefit will come from researching the other 70% we don’t know about. That is a much more difficult question to answer, but the payoff could be far more significant than any regression technique.

Addendum: After doing the above I thought, why not take this all the way and, instead of even and odd games, use even and odd seconds, so that what happens in one second goes in one bin and what happens in the following second goes in the other bin. This should absolutely eliminate any differences in QoC, QoT, zone starts, score effects, etc. As you might expect, not a lot changes, but the predictive power of GF20 increases marginally, particularly at the lower minute cutoffs.

Total Minutes   Even vs Odd GF20 (r)   Even vs Odd FSh% (r)
>500            0.81                   0.58
>1000           0.86                   0.68
>1500           0.88                   0.71
>2000           0.89                   0.73
>2500           0.89                   0.73
>3000           0.90                   0.75
>4000           0.90                   0.73
>5000           0.89                   0.71

 

Apr 17, 2013
 

Even though I am a proponent of shot quality and the idea that the percentages matter (shooting and save percentage), puck control and possession are still an important part of the game, and the Maple Leafs are dreadful at it. One of the better easily available metrics for measuring possession is fenwick percentage (FF%), which is the percentage of shot attempts (shots plus shots that missed the net) that your team took. So an FF% of 52% would mean your team took 52% of the shot attempts while the opposing team took 48%. During 5v5 situations this season the Maple Leafs have an FF% of 44.4%, dead last in the NHL. So, who are the biggest culprits in dragging down the Maple Leafs’ possession game? Let’s take a look.
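For anyone new to the stat, the FF% arithmetic is trivial; a quick sketch:

```python
# Fenwick percentage as described above: unblocked shot attempts for
# as a share of all unblocked shot attempts.
def fenwick_pct(shots_for, missed_for, shots_against, missed_against):
    ff = shots_for + missed_for            # fenwick events for
    fa = shots_against + missed_against    # fenwick events against
    return 100.0 * ff / (ff + fa)          # e.g. 44.4 for this year's Leafs
```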

Forwards

Player Name            FF%    TMFF%  OppFF%  FF%-TMFF%  FF%-TMFF%+OppFF%-0.5
MACARTHUR, CLARKE      0.485  0.440  0.507    0.045      0.052
KESSEL, PHIL           0.448  0.404  0.507    0.044      0.051
KOMAROV, LEO           0.475  0.439  0.508    0.036      0.044
KADRI, NAZEM           0.478  0.444  0.507    0.034      0.041
GRABOVSKI, MIKHAIL     0.450  0.424  0.508    0.026      0.034
VAN_RIEMSDYK, JAMES    0.456  0.433  0.508    0.023      0.031
FRATTIN, MATT          0.475  0.448  0.504    0.027      0.031
LUPUL, JOFFREY         0.465  0.445  0.502    0.020      0.022
BOZAK, TYLER           0.437  0.453  0.508   -0.016     -0.008
KULEMIN, NIKOLAI       0.421  0.454  0.510   -0.033     -0.023
ORR, COLTON            0.401  0.454  0.500   -0.053     -0.053
MCLAREN, FRAZER        0.388  0.443  0.501   -0.055     -0.054
MCCLEMENT, JAY         0.368  0.459  0.506   -0.091     -0.085

FF% is the player’s FF% when he is on the ice, expressed in decimal form. TMFF% is an average of his teammates’ FF% when they are not playing with the player in question (i.e. what his teammates do when they are separated from him: a quality of teammates metric). OppFF% is an average of his opponents’ FF% (i.e. a quality of competition metric). From those base stats I took FF% - TMFF%, which tells us which players perform better than their teammates do when apart from them (the higher the better). Finally I factored in OppFF% by adding in how far above 50% their opposition is on average. This gives us an all-encompassing stat to indicate who the drags on the Leafs’ possession game are.
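Spelled out in code, the final column is just two adjustments added together; a minimal sketch using the definitions above:

```python
# The table's composite column: performance relative to teammates,
# credited for quality of competition. All inputs in decimal form.
def possession_impact(ff_pct, tm_ff_pct, opp_ff_pct):
    return (ff_pct - tm_ff_pct) + (opp_ff_pct - 0.5)

# e.g. MacArthur: possession_impact(0.485, 0.440, 0.507) -> ~0.052
```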

Jay McClement is the Leafs’ greatest drag on possession. A few weeks ago I posted an article visually showing how much of a drag on possession McClement has been this year and in previous years. McClement’s 5v5 FF% over the past 6 seasons is 46.2%, 46.8%, 45.3%, 47.5%, 46.2% and 36.8% this season.

Next up are the goons, Orr and McLaren, which is probably no surprise. They are more interested in looking for the next hit/fight than the puck. In general they are low-minute players so their negative impact is somewhat mitigated, but they are definite drags on possession.

Kulemin is the next biggest drag on possession, which might come as a bit of a surprise considering he has generally been fairly decent in the past. Looking at the second WOWY chart here you can see that nearly every player has a worse CF% (same as FF% but including shots that were blocked) with Kulemin than without, except for McClement and, to a much smaller extent, Liles. This is dramatically different from previous seasons (see the second chart again), when the majority of players did equally well or better with Kulemin, save for Grabovski. Is Kulemin having an off year? It may seem so.

Next up is my favourite whipping boy, Tyler Bozak. Bozak is and has always been a drag on possession. Bozak ranks 293rd of 312 forwards in FF% this season (McClement is dead last!) and in the previous 2 seasons he ranked 296th of 323 players.

Among forwards, McClement, McLaren, Orr, Kulemin and Bozak appear to be the biggest drags on the Maple Leafs possession game this season.

Defense

Player Name            FF%    TMFF%  OppFF%  FF%-TMFF%  FF%-TMFF%+OppFF%-0.5
FRANSON, CODY          0.469  0.437  0.506    0.032      0.038
GARDINER, JAKE         0.463  0.440  0.506    0.023      0.029
KOSTKA, MICHAEL        0.459  0.435  0.504    0.024      0.028
GUNNARSSON, CARL       0.455  0.437  0.506    0.018      0.024
FRASER, MARK           0.461  0.445  0.506    0.016      0.022
LILES, JOHN-MICHAEL    0.445  0.443  0.503    0.002      0.005
PHANEUF, DION          0.422  0.455  0.509   -0.033     -0.024
HOLZER, KORBINIAN      0.399  0.452  0.504   -0.053     -0.049
O_BYRNE, RYAN          0.432  0.505  0.499   -0.073     -0.074

O’Byrne is a recent addition to the Leafs’ defense, so you can’t blame the Leafs’ possession woes on him, but he was a dreadful possession player in Colorado, so he won’t be the answer to those woes either.

Korbinian Holzer was dreadful in a Leaf uniform this year, and we all know that, so no surprise there. Next up, though, is Dion Phaneuf, the Leafs’ top-paid and presumably best defenseman. In FF%-TMFF%+OppFF%-0.5 Phaneuf ranked a little better in the previous 2 seasons (0.023 and 0.003), so it is possible he is having an off year or had his stats dragged down a bit by Holzer, but regardless, he isn’t having a great season possession-wise.

 

 

Apr 11, 2013
 

Every now and again someone asks me how I calculate the HARO, HARD and HART ratings that you can find on stats.hockeyanalysis.com, and it is at that point I realize that I don’t have an up-to-date description of how they are calculated, so today I endeavor to write one.

First, let me define HARO, HARD and HART.

HARO – Hockey Analysis Rating Offense
HARD – Hockey Analysis Rating Defense
HART – Hockey Analysis Rating Total

So my goal when creating them was to have an offensive, a defensive, and an overall rating for each and every player. Now, here is a step by step guide to how they are calculated.

Calculate WOWY’s and AYNAY’s

The first step is to calculate WOWY’s (With Or Without You) and AYNAY’s (Against You or Not Against You). You can find goal and corsi WOWY’s and AYNAY’s on stats.hockeyanalysis.com for every player for 5v5, 5v5 ZS adjusted and 5v5 close zone start adjusted situations, but I calculate them for every situation you see on stats.hockeyanalysis.com, and for shots and fenwick as well; those don’t get posted because they amount to a massive amount of data.

(Distraction: 800 players playing against 800 other players means 640,000 data points for each of TOI, GF20, GA20, SF20, SA20, FF20, FA20, CF20 and CA20, both when players are playing against each other and when playing apart, or about 17.28 million data points for AYNAY’s for a single season per situation. Now consider that when I do my 5 year ratings there are more like 1600 players, generating more than 60 million data points.)

Calculate TMGF20, TMGA20, OppGF20, OppGA20

What we need the WOWY’s for is to calculate TMGF20 (a TOI-weighted average GF20 of the player’s teammates when they are not playing with him), TMGA20 (the same but for GA20), OppGF20 (a TOI-against-weighted average GF20 of the player’s opponents when they are not playing against him) and OppGA20 (the same but for GA20).

So, let’s take a look at Alexander Steen’s 5v5 WOWY’s for 2011-12 to see how TMGF20 is calculated. The columns we are interested in are the teammate-when-apart TOI and GF20 columns, which I will call TWA_TOI and TWA_GF20. TMGF20 is simply a TWA_TOI-weighted average of TWA_GF20. This gives us a good indication of how Steen’s teammates perform offensively when they are not playing with Steen.

TMGA20 is calculated the same way but using TWA_GA20 instead of TWA_GF20. OppGF20 is calculated in a similar manner except using OWA_GF20 (Opponent while apart GF20) and OWA_TOI while OppGA20 uses OWA_GA20.

The reason why I use while-not-playing-with/against data is that I don’t want the talent level of the player we are evaluating to influence his own QoT and QoC metrics (which is essentially what TMGF20, TMGA20, OppGF20 and OppGA20 are).
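In code form the weighted average is trivial; a sketch assuming you already have each teammate’s while-apart TOI and GF20:

```python
# TMGF20 as described: a TOI-weighted average of teammates' GF20 when
# apart from the player. Rows are (TWA_TOI, TWA_GF20) per teammate.
def tm_gf20(teammates):
    total_toi = sum(toi for toi, _ in teammates)
    return sum(toi * gf20 for toi, gf20 in teammates) / total_toi
```

TMGA20, OppGF20 and OppGA20 follow the same pattern with the corresponding while-apart columns.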

Calculate first iteration of HARO and HARD

The first iteration of HARO and HARD is simple. I first calculate an expected GF20 and an expected GA20 based on the player’s teammates and opposition.

ExpGF20 = (TMGF20 + OppGA20)/2
ExpGA20 = (TMGA20 + OppGF20)/2

Then I calculate HARO and HARD as a percentage improvement:

HARO(1st iteration) = 100*(GF20-ExpGF20) / ExpGF20
HARD(1st iteration) = 100*(ExpGA20 – GA20) / ExpGA20

So, a HARO of 20 would mean that when the player is on the ice, his team’s goal rate is 20% higher than one would expect based on how his teammates and opponents performed when the player was not on the ice with/against them. Similarly, a HARD of 20 would mean the goals against rate of his team is 20% better (lower) than expected.

(Note: The opposition stats come from the complementary situation. For 5v5 the opposition situation is also 5v5, but when calculating a rating for 5v5 leading, the opposition situation is 5v5 trailing, so OppGF20 and OppGA20 would be calculated from 5v5 trailing data.)

Now for a second iteration

The first iteration used raw GF20 and GA20 stats, which is a good start, but after it we have teammate- and opponent-corrected evaluations of every player, which means we have better data about the quality of teammates and opponents the player faces. This is where things get a little more complicated, because I need to calculate QoT and QoC metrics based on the first-iteration HARO and HARD values and then convert them into GF20 and GA20 equivalent numbers that I can compare the player’s GF20 and GA20 against.

To do this I calculate a TMHARO rating, which is a TWA_TOI-weighted average of first-iteration HARO. TMHARD, OppHARO and OppHARD are calculated in a similar manner. Now I need to convert these to GF20 and GA20 based stats, which I do by multiplying by league average GF20 (LAGF20) and league average GA20 (LAGA20), and from there I can calculate expected GF20 and expected GA20.

ExpGF20(2nd iteration) = (TMHARO*LAGF20 + OppHARD*LAGA20)/2
ExpGA20(2nd iteration) = (TMHARD*LAGA20 + OppHARO*LAGF20)/2

From there we can get a second iteration of HARO and HARD.

HARO(2nd iteration) = 100*(GF20-ExpGF20) / ExpGF20
HARD(2nd iteration) = 100*(ExpGA20 – GA20) / ExpGA20

Now we iterate again and again…

Now we repeat the above step over and over, using the previous iteration’s HARO and HARD values at every step.
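Here is a schematic of that loop. Two caveats: the tm_avg/opp_avg helpers (TOI-weighted averages of the previous iteration’s ratings over teammates/opponents when apart) are assumed, and the conversion from percentage ratings to expected rates is my reading of “multiplying by league average” as a (1 ± rating/100) adjustment; my actual site code may differ in detail.

```python
# Schematic of the iterative ratings, following the formulas above.
def iterate_ratings(players, haro, hard, tm_avg, opp_avg, la_gf20, la_ga20,
                    n_iter=50):
    """players[p] = {"gf20": ..., "ga20": ...}; haro/hard are rating dicts
    seeded from the first iteration; tm_avg(r, p) / opp_avg(r, p) return the
    TOI-weighted average of rating dict r over p's teammates/opponents."""
    for _ in range(n_iter):
        new_haro, new_hard = {}, {}
        for p, s in players.items():
            # Good teammate offense raises expected GF; good opponent
            # defense lowers it (and symmetrically for expected GA).
            exp_gf = ((1 + tm_avg(haro, p) / 100) * la_gf20 +
                      (1 - opp_avg(hard, p) / 100) * la_ga20) / 2
            exp_ga = ((1 - tm_avg(hard, p) / 100) * la_ga20 +
                      (1 + opp_avg(haro, p) / 100) * la_gf20) / 2
            new_haro[p] = 100 * (s["gf20"] - exp_gf) / exp_gf
            new_hard[p] = 100 * (exp_ga - s["ga20"]) / exp_ga
        haro, hard = new_haro, new_hard
    return haro, hard
```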

Now calculate HART

Once we have done enough iterations we can calculate HART from the final iteration’s HARO and HARD values.

HART = (HARO + HARD) /2

Now do the same for Shot, Fenwick and Corsi data

The above describes the goal ratings, but I have shot, fenwick and corsi ratings as well, and these are calculated in exactly the same way, just using SF20, SA20, FF20, FA20, CF20 and CA20.

What about goalies?

Goalies are a little unique in that they really only play the defensive side of the game. For this reason I do not include goalies when calculating TMGF20 and OppGF20. For shot, fenwick and corsi I do not include goalies on the defensive side of things either, as I assume a goalie does not influence shots against (this may not be entirely true, since some goalies may be better at controlling rebounds and thus secondary shots, but I’ll assume the effect is minimal if it exists). The result is that goalies have a HARD rating but no HARO rating, and no shot/fenwick/corsi based HARD or HARO ratings.

I hope this helps explain how my hockey analysis ratings are calculated but if you have any followup questions feel free to ask them in the comments.

 

Apr 5, 2013
 

I often get asked questions about hockey analytics and fancy stats: how to use them, what they mean, etc. There are plenty of good places to find definitions of the various hockey stats, but sometimes what is more important than a definition is some guidance on how to use them. So, with that said, here are several tips I have for people using advanced hockey stats.

Don’t overvalue Quality of Competition

I can’t count how often I’ll point out one player’s poor stats or another player’s good stats and immediately get the response “Yeah, but he always plays against the opponents’ best players” or “Yeah, but he doesn’t play against the opposition’s best players”, but most people who say that kind of thing have no real idea how much quality of opponent actually affects a player’s statistics. The truth is it is not nearly as much as you might think. Despite some coaches desperately trying to employ line matching techniques, the variation in quality of competition metrics is dwarfed by variation in quality of teammates, individual talent, and on-ice results. An analysis of Pavel Datsyuk and Valtteri Filppula showed that if Filppula had Datsyuk’s quality of competition his CorsiFor% would drop from 51.05% to 50.90% and his GoalsFor% would drop from 55.65% to 55.02%. In the grand scheme of things, these are relatively minor effects.

Don’t overvalue Zone Starts either

Like quality of competition, many people will use zone starts to justify a player’s good/poor statistics. The truth is zone starts are not a significant factor either. I have found that the effect of a zone start is largely eliminated about 10 seconds after the face off, and this has been found true by others as well. I account for zone starts by eliminating the 10 seconds after an offensive or defensive zone face off, and doing this has relatively little effect on a player’s stats. Henrik Sedin is maybe the most extreme case of a player getting primarily offensive zone starts, and all those zone starts took him from a 55.2% fenwick% player to a 53.8% fenwick% player when zone starts are factored out. So in the most extreme case there is only about a 1.5% impact on a player’s fenwick%, and the majority of players are nowhere close to the zone start bias of Henrik Sedin; for most you are probably talking something under a 0.5% impact. As for individual stats: over the last 3 seasons H. Sedin had 34 goals and 172 points in 5v5 situations, and just 2 goals and 14 points came within 10 seconds of a zone face off, or about 5 points a year. If instead of 70% offensive zone face off deployment he had 50%, then instead of 14 points during that 10-second window he might have had 10. That’s a 4 point differential over 3 years for a guy who scored 172 points. In simple terms, about 2.3% of H. Sedin’s 5v5 points can be attributed to his offensive zone start bias.
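For those curious, the zone start adjustment is just an event filter; a sketch with hypothetical event fields:

```python
# Sketch of the zone-start adjustment described above: drop on-ice events
# occurring within 10 seconds of an offensive/defensive zone faceoff.
# Event field names ("type", "zone", "seconds") are hypothetical.
def zone_start_adjusted(events, window=10):
    adjusted, last_zone_fo = [], None
    for ev in events:  # events sorted by cumulative game seconds
        if ev["type"] == "faceoff" and ev["zone"] in ("offensive", "defensive"):
            last_zone_fo = ev["seconds"]
        elif last_zone_fo is None or ev["seconds"] - last_zone_fo > window:
            adjusted.append(ev)  # keep events outside the faceoff window
    return adjusted
```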

A corollary of this is that if zone starts don’t matter much, a player’s face off winning percentage probably doesn’t matter much either, which is consistent with other studies. It’s a nice skill to have, but not worth a lot.

Do not ignore Quality of Teammates

I have just told you to pretty much ignore quality of competition and zone starts, so what about quality of teammates? Well, to put it simply, do not ignore them. Quality of teammates matters, and it matters a lot. Sticking with the Vancouver Canucks, let’s use Alex Burrows as an example. Burrows mostly plays with the Sedin twins but has played on Kesler’s line a bit too. Over the past 3 seasons he has played about 77.9% of his ice time with H. Sedin, about 12.3% with Ryan Kesler, and the remainder with Malhotra and others. Burrows’ offensive production is significantly better when playing with H. Sedin: 88.7% of his goals and 87.2% of his points came during the 77.9% of his ice time he played with H. Sedin. If Burrows played 100% of his ice time with H. Sedin and produced at the same rate, he would have scored 6 (9.7%) more goals and 13 (11%) more 5v5 points over the past 3 seasons. This is far more significant than the 2.3% boost H. Sedin saw from all his offensive zone starts, and I am not certain my Burrows example is the most extreme in the NHL. How many more points would an average 3rd liner get if he played mostly with H. Sedin? Who you play with matters a lot. You can’t look at Tyler Bozak’s decent point totals and conclude he is a decent player without considering that he plays a lot with Kessel and Lupul, two very good offensive players.

Opportunity is not talent

Along the same lines as the quality of teammates discussion, we must be careful not to confuse opportunity with results. Over the past 2 seasons Corey Perry has the second most goals of any forward in the NHL, trailing only Steven Stamkos. That might seem impressive, but it is a little less so when you consider Perry also had the 4th most 5v5 minutes during that time and the 11th most 5v4 minutes. Perry is a good goal scorer, but a lot of his goals come from opportunity (ice time) as much as individual talent. Among forwards with at least 1500 minutes of 5v5 ice time over the past 2 seasons, Perry ranks just 30th in goals per 60 minutes of ice time. That’s still good, but far less impressive than “second only to Steven Stamkos”, and he is actually well behind teammate Bobby Ryan (6th) in this metric. Perry is a very good player, but he benefits more than others from getting a lot of ice time, including PP ice time. Perry’s goal production is in large part talent, but also somewhat opportunity driven, and we need to keep this in perspective.

Don’t ignore the percentages (shooting and save)

The percentages matter, particularly shooting percentages. I have shown that players can sustain elevated on-ice shooting percentages and that players can have an impact on their linemates’ shooting percentages, and Tom Awad has shown that a significant portion of the difference between good players and bad players is finishing ability (shooting percentage). There is even evidence that goal based metrics (which incorporate the percentages) are a better predictor of post-season success than fenwick based metrics. What corsi/fenwick metrics have going for them is more reliability over small sample sizes, but once you approach a full season’s worth of data that benefit is largely gone and you get more benefit from having the percentages factored into the equation. If you want a better understanding of what considering the percentages can do for you, try a Malkin vs Gomez comparison or a Crosby vs Tyler Kennedy comparison over the past several years. Gomez and Kennedy actually look like relatively decent comparables if you just consider shot based metrics, but both are terrible percentage players while Malkin and Crosby are excellent percentage players, and it is the percentages that make Malkin and Crosby so special. This is an extreme example, but the percentages should not be ignored if you want a true representation of a player’s abilities.

More is definitely better

One of the reasons many people have jumped on the shot attempt/corsi/fenwick bandwagon is that they are more frequent events than goals and thus give you more reliable metrics. This is true over small sample sizes, but as explained above, the percentages matter too and should not be ignored. Luckily, for most players we have ample data to get past the sample size issues. There is no reason to evaluate a player on half a season’s data if that player has been in the league for several years. Look at 2, 3, 4 years of data. Look for trends. Is the player consistently a high corsi player? Is the player consistently a high shooting percentage player? Is the player improving? Declining? I have shown on numerous occasions that goals become a better predictor of future goal rates than corsi/fenwick starting at about one year of data, but multiple years are definitely better. Any conclusion about a player’s talent level drawn from a single season of data or less (corsi or goal based) is subject to a significant level of uncertainty. We have multiple years of data for the majority of players, so use it. I even aggregate multiple years into one data set for you on stats.hockeyanalysis.com, so it isn’t even time consuming. The data is there; use it. More is definitely better.

WOWY’s are where it is at

In my mind WOWY’s are the best tool for advanced player evaluation. WOWY stands for “with or without you” and looks at how a player performs while on the ice with a teammate and while on the ice without that teammate. What WOWY’s can tell you is whether a particular player is a core player driving team success or a player along for the ride. Players who consistently make their teammates’ statistics better when on the ice with them are the players you want on your team. Anze Kopitar is an example of a player who consistently makes his teammates better. Jack Johnson is an example of a player who does not, particularly by goal based metrics. Then there is a large group of good players who neither drive your team’s success nor hold it back, or as I like to say, complementary players. Ideally you build your team around a core of players like Kopitar who will drive success, fill it in with a group of complementary players, and quickly rid yourself of players like Jack Johnson who act as drags on the team.

 

Mar 20, 2013
 

I generally think that the majority of people give too much importance to quality of competition (QoC) and its impact on a player’s statistics, but if we are going to use QoC metrics, let’s at least try to use the best ones available. In this post I will take a look at some QoC metrics that are available on stats.hockeyanalysis.com and explain why they might be better than those typically in use.

OppGF20, OppGA20, OppGF%

These three stats are the average GF20 (on-ice goals for per 20 minutes), GA20 (on-ice goals against per 20 minutes) and GF% (on-ice GF / [on-ice GF + on-ice GA]) of all the opposition players a player lined up against, weighted by ice time against. In fact, these stats go a bit further in that they remove the ice time the opponents played against the player, so that a player won’t influence his own QoC (not nearly as important as with QoT, but still a good thing to do). So essentially these three stats are the goal scoring ability of the opposition, the goal defending ability of the opposition, and the overall value of the opposition. Note that opposition goalies are not included in the calculation of OppGF20, as it is assumed goalies have no influence on scoring goals.
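A minimal sketch of the OppGF20 construction, assuming you have already computed each opponent’s while-apart totals (the input layout is illustrative):

```python
# OppGF20: a TOI-against-weighted average of each opponent's GF20, with
# head-to-head minutes removed so the player doesn't colour his own QoC.
def opp_gf20(opponents):
    """opponents: iterable of (toi_vs_player, gf_when_apart, toi_when_apart)."""
    total_w = sum(w for w, _, _ in opponents)
    return sum(w * (20 * gf / toi) for w, gf, toi in opponents) / total_w
```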

The benefit of these stats is that they are easy to understand and are in a unit (goals per 20 minutes of ice time) that is easily understood: OppGF20 is essentially how many goals the player’s opponents would be expected to score, on average, per 20 minutes of ice time. The drawback is that if good players play against good players and bad players play against bad players, a good player and a bad player may end up with similar statistics even though the good player did it against better quality opponents. There is no consideration for the context of the opponents’ statistics, and that may matter.

Let’s take a look at the top 10 forwards in OppGF20 last season.

Player                 Team         OppGF20
Patrick Dwyer          Carolina     0.811
Brandon Sutter         Carolina     0.811
Travis Moen            Montreal     0.811
Carl Hagelin           NY Rangers   0.806
Marcel Goc             Florida      0.804
Tomas Plekanec         Montreal     0.804
Brooks Laich           Washington   0.800
Ryan Callahan          NY Rangers   0.799
Patrik Elias           New Jersey   0.798
Alexei Ponikarovsky    New Jersey   0.795

You will notice that every single player is from the eastern conference. The reason for this is that the eastern conference is a more offensive conference. Taking a look at the top 10 players in OppGA20 will show the opposite.

Player             Team        OppGA20
Marcus Kruger      Chicago     0.719
Jamal Mayers       Chicago     0.720
Mark Letestu       Columbus    0.721
Andrew Brunette    Chicago     0.723
Andrew Cogliano    Anaheim     0.723
Viktor Stalberg    Chicago     0.724
Matt Halischuk     Nashville   0.724
Kyle Chipchura     Phoenix     0.724
Matt Belesky       Anaheim     0.724
Cory Emmerton      Detroit     0.724

Now, what happens when we look at OppGF%?

Player              Team        OppGF%
Mike Fisher         Nashville   51.6%
Martin Havlat       San Jose    51.4%
Vaclav Prospal      Columbus    51.3%
Mike Cammalleri     Calgary     51.3%
Martin Erat         Nashville   51.3%
Sergei Kostitsyn    Nashville   51.3%
Dave Bolland        Chicago     51.2%
Rick Nash           Columbus    51.2%
Travis Moen         Montreal    51.0%
Patrick Marleau     San Jose    51.0%

These are predominantly western conference players, with a couple of eastern conference players mixed in. The reason for this western conference bias is that the western conference was the better conference, and thus it makes sense that the QoC would be tougher for western conference players.

OppFF20, OppFA20, OppFF%

These are exactly the same stats as the goal based ones above, but using fenwick for/against/percentage instead (fenwick is shots plus shots that missed the net). I won’t go into details, but you can find the top players in OppFF20 here, in OppFA20 here, and OppFF% here. You will find a lot of similarities to the OppGF20, OppGA20 and OppGF% lists, but if you ask me which is the better QoC metric, I’d lean towards the goal based ones. The reason is that the small sample size issues we see with goal statistics are not nearly as significant in QoC metrics, because luck averages out across all of a player’s opponents (for every unlucky opponent you are likely to have a lucky one to cancel out the effect). That said, if you are doing a fenwick based analysis, it probably makes more sense to use a fenwick based QoC metric.

HARO QoC, HARD QoC, HART QoC

As stated above, one of the flaws of the above QoC metrics is that there is no consideration for the context of the opponents’ statistics. One way around this is to use the HockeyAnalysis.com HARO (offense), HARD (defense) and HART (total/overall) ratings in calculating QoC. These are player ratings that take into account both quality of teammates and quality of competition (here is a brief explanation of what these ratings are). The HARO QoC, HARD QoC and HART QoC metrics are simply the average HARO, HARD and HART ratings of a player’s opponents.

Here are the top 10 forwards in HARO QoC last year:

Player             Team           HARO QoC
Patrick Dwyer      Carolina       6.0
Brandon Sutter     Carolina       5.9
Travis Moen        Montreal       5.8
Tomas Plekanec     Montreal       5.8
Marcel Goc         Florida        5.6
Carl Hagelin       NY Rangers     5.5
Ryan Callahan      NY Rangers     5.3
Brooks Laich       Washington     5.3
Michael Grabner    NY Islanders   5.2
Patrik Elias       New Jersey     5.2

There are a lot of similarities to the OppGF20 list, with the eastern conference dominating. There are a few changes, but not many, which really is not a big surprise to me: there is very little evidence that QoC has a significant impact on a player’s statistics, so considering the opponents’ own QoC will not significantly change the opponents’ stats, and thus not significantly change a player’s QoC. That said, I believe these should produce slightly better QoC ratings. Also note that a 6.0 HARO QoC indicates that the opposing players would be expected to produce a 6.0% boost over the league average GF20.

Here are the top 10 forwards in HARD QoC last year:

Player            Team          HARD QoC
Jamal Mayers      Chicago       6.0
Marcus Kruger     Chicago       5.9
Mark Letestu      Columbus      5.8
Tim Jackman       Calgary       5.3
Colin Fraser      Los Angeles   5.2
Cory Emmerton     Detroit       5.2
Matt Belesky      Anaheim       5.2
Kyle Chipchura    Phoenix       5.1
Andrew Brunette   Chicago       5.1
Colton Gilles     Columbus      5.0

And now the top 10 forwards in HART QoC last year:

Player             Team          HART QoC
Dave Bolland       Chicago       3.2
Martin Havlat      San Jose      3.0
Mark Letestu       Columbus      2.5
Jeff Carter        Los Angeles   2.5
Derick Brassard    Columbus      2.5
Rick Nash          Columbus      2.4
Mike Fisher        Nashville     2.4
Vaclav Prospal     Columbus      2.2
Ryan Getzlaf       Anaheim       2.2
Viktor Stalberg    Chicago       2.1

Shots and Corsi based QoC

You can also find similar QoC stats on stats.hockeyanalysis.com using shots as the base stat, or corsi (shots + shots that missed the net + shots that were blocked), but they are constructed the same way as above so I won’t go into them in any detail.

CorsiRel QoC

The most commonly used QoC metric currently seems to be CorsiRel QoC (found on behindthenet.ca), but in my opinion this is not so much a QoC metric as a ‘usage’ metric. CorsiRel is a statistic that compares the team’s corsi differential when the player is on the ice to the team’s corsi differential when the player is not on the ice. CorsiRel QoC is the average CorsiRel of all the player’s opponents.
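For reference, the CorsiRel construction looks roughly like this (a per-60 sketch with hypothetical inputs; behindthenet’s exact implementation may differ):

```python
# CorsiRel: on-ice corsi differential minus off-ice corsi differential.
def corsi_rel(on_cf, on_ca, on_toi, off_cf, off_ca, off_toi):
    on_diff60 = 60 * (on_cf - on_ca) / on_toi      # team corsi diff, player on
    off_diff60 = 60 * (off_cf - off_ca) / off_toi  # team corsi diff, player off
    return on_diff60 - off_diff60
```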

The problem with CorsiRel is that good players on a bad team with little depth can put up really high CorsiRel numbers compared to similarly good players on a good team with good depth, because it essentially compares a player to his teammates. The more good teammates you have, the more difficult it is to put up a good CorsiRel. So, on any given team the players with a good CorsiRel are the best players on the team, but you can’t compare the CorsiRel of players on different teams because the quality of the teams may differ.

CorsiRel QoC is essentially the average CorsiRel of all the player’s opponents, and because CorsiRel is flawed, CorsiRel QoC ends up being flawed too. For players on the same team, the player with the highest CorsiRel QoC plays against the toughest competition, so in this sense it tells us who is getting the toughest minutes on the team, but again it is not really that useful when comparing players across teams. For these reasons I consider CorsiRel QoC more a tool for seeing a player’s usage relative to his teammates than a true QoC metric.

I may be biased, but in my opinion there is no reason to use CorsiRel QoC anymore. Whether you use OppGF20, OppGA20, OppGF%, HARO QoC, HARD QoC and HART QoC, or any of their shot/fenwick/corsi variants, they should all produce better QoC measures that are comparable across teams (the major drawback of CorsiRel QoC).

 

Jun 4, 2012
 

I have a ton of information on my stats website stats.hockeyanalysis.com, but one of the things I have always wanted to do is make it more visual, and I’d like to announce the first step in that process. Thanks to google and their cool google chart api, I have now added bubble charts to any stats search that returns no more than 30 players (more than 30 players makes the bubble charts too cluttered). For example, if you do a search of all Maple Leaf skaters with 500 minutes of 5v5 zone start adjusted ice time this past season, you will see a nice bubble chart at the bottom plotting each player’s defensive rating (i.e. HARD+) along the horizontal axis and his offensive rating (i.e. HARO+) along the vertical axis. Or you can see the same thing using corsi ratings (i.e. CorHARD+ vs CorHARO+) if you are one of those people who prefer corsi based ratings. Or, if you prefer, you can even look at multi-year goal ratings such as 3 year 5v5 zone start adjusted goal ratings for the Toronto Maple Leafs (though still not perfect, I believe 3 year goal ratings are the best indicator of a player’s value).

In the charts, forwards and defensemen are differentiated by color, and the size of the bubble indicates the amount of ice time the player had (the largest bubbles for the players with the most ice time and the smallest for those with the least). As always with my ratings, any value over 1.00 is above average and any rating below 1.00 is below average, and these ratings take into account quality of teammates, quality of opposition and the player’s on-ice statistics. This means players with bubbles on the right side of the chart are stronger defensive players and players with bubbles towards the top of the chart are stronger offensive players. The best players are good at both and thus have their bubbles in the upper right quadrant, while players with bubbles in the lower left quadrant are the worst performers. The nice thing about these charts is that they give a very easy to read visual representation of every player on a team.
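If you want to reproduce something similar offline, a rough matplotlib equivalent (not the google chart code the site actually uses; the input layout is hypothetical) would look like:

```python
# Rough offline equivalent of the site's bubble charts: HARD+ on x,
# HARO+ on y, bubble area scaled by TOI, colour by position.
import matplotlib.pyplot as plt

def bubble_chart(players):
    """players: list of dicts with name, hard_plus, haro_plus, toi, pos."""
    for p in players:
        plt.scatter(p["hard_plus"], p["haro_plus"], s=p["toi"] / 5,
                    c="tab:blue" if p["pos"] == "D" else "tab:red", alpha=0.6)
        plt.annotate(p["name"], (p["hard_plus"], p["haro_plus"]), fontsize=8)
    plt.axhline(1.0, ls="--", lw=0.5)  # league-average offense
    plt.axvline(1.0, ls="--", lw=0.5)  # league-average defense
    plt.xlabel("HARD+ (defense, right = better)")
    plt.ylabel("HARO+ (offense, up = better)")
    plt.show()
```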

I am hoping that this is just the start, with more charting enhancements (and other features as well) to be implemented in the future. As always, if you have any suggestions, submit a comment below or drop me a message.

Feb 21, 2012
 

There was a twitter conversation between Gabe Desjardins and David Staples last night in which Gabe suggested that Daniel Sedin’s heavy offensive zone start bias resulted in an additional 7-9 points that he would not have gotten had his zone starts been split more evenly between the offensive and defensive zones. When I saw this I immediately thought that seemed like a really high number, so I decided to look through the play by play sheets and see how many of Daniel Sedin’s even strength points came after a faceoff in the offensive zone. Of all of Daniel Sedin’s points so far, here are the only ones that could at all be attributed to an offensive zone faceoff.

Date      Opponent     Type     Time After Faceoff
Oct. 15   Edmonton     Assist   8 seconds
Oct. 20   Nashville    Goal     11 seconds
Oct. 29   Washington   Assist   19 seconds
Nov. 29   Columbus     Goal     8 seconds
Dec. 6    Colorado     Goal     24 seconds
Jan. 31   Chicago      Goal     29 seconds
Feb. 18   Toronto      Assist   40 seconds

Every other point that Daniel Sedin got came either on the PP, after a faceoff in another zone, after a line change during the play, or after the opponent had possession of the puck. Even for the points above we don’t know whether the opposition had control of the puck between the faceoff and the goal, especially for the plays 19 seconds or longer after the faceoff (a lot can happen in 19 seconds), and the goal vs Colorado came during 4 on 4 play as well. But for the sake of argument, let’s say we can directly tie all 7 of those points to offensive zone face offs. Also, for the sake of easy math, let’s assume his OZone% is 70% (it’s actually closer to 80%). So, on 70% offensive zone starts he picked up 7 points. If we reduce his OZone% to 50%, you’d naturally expect to lose a proportional share of those points, so he’d end up with 5 points instead of 7. Net result: Daniel Sedin’s offensive zone start bias has accounted for just 2 additional points so far this season.

What about previous seasons? Well, over the previous 3 seasons Daniel Sedin was on the ice for 197 5v5 goals for. If we ignore the 30 seconds following an offensive or defensive zone start (and 30 seconds is more than ample to account for zone starts), he was on the ice for 151 goals for. That means we can fairly safely assume that offensive zone starts resulted in at most 46 goals for.

Now, over the past 3 seasons Daniel Sedin was on the ice for 1164 offensive zone face offs and 656 defensive zone face offs, for an OZone% of about 64%. Those 1164 offensive zone faceoffs accounted for at most 46 goals, meaning approximately every 25 offensive zone starts produced a goal. If Sedin had had a 50% OZone% over those 3 seasons instead of 64%, he’d have been on the ice for about 910 offensive zone faceoffs, or about 254 fewer than he actually had. Since every 25 offensive zone starts produce a goal, those 254 extra offensive zone face offs resulted in approximately 10 extra goals. So, on average, Daniel Sedin was on the ice for 3-4 extra goals per season because of his offensive zone faceoff bias, and that is being generous with the math. That result is not far off this season’s observations above.
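The arithmetic above, spelled out (numbers taken straight from the text):

```python
# Daniel Sedin zone-start arithmetic, 3 seasons of 5v5 data.
oz_fo, dz_fo = 1164, 656
goals_near_oz_fo = 197 - 151                 # 46 goals within 30s of an O-zone start
starts_per_goal = oz_fo / goals_near_oz_fo   # ~25 offensive zone starts per goal
oz_fo_at_50pct = (oz_fo + dz_fo) / 2         # ~910 starts under neutral deployment
extra_goals = (oz_fo - oz_fo_at_50pct) / starts_per_goal
print(round(extra_goals))                    # ~10 extra goals over 3 seasons
```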

So, considering one of the best offensive players in the game, with one of the most significant offensive zone biases in the game, is on the ice for at most an additional 4 goals a season as a result of that bias, I think we can chalk up the zone start effect as mostly insignificant. The majority of players aren’t nearly as talented as D. Sedin and his linemates, and the majority of players end up with between 45% and 55% zone starts. As a result, most players probably see a zone bias affect their stats by at most one or two goals a season. It’s pretty much not worth consideration.

Of course, a corsi based analysis would show a more significant difference because zone starts affect corsi more than goals.

 

Dec 15, 2010
 

I have been pretty quiet here recently, not because of a lack of things I want to write about but because I needed to get my stats site up and running first so I can reference it in my writings. Plus, getting my stats site up has been on my todo list for a real long time. There will be a lot more stats to come, including my with/against on-ice pairing stats (which I had up a season or two ago and many of you found interesting) as well as team stats, but for now let me explain what is there.

What you will find there now is my player rating system which produces the following ratings:

HARD – Hockey Analysis Rating – Defense

HARO – Hockey Analysis Rating – Offense

HART – Hockey Analysis Rating – Total

HARD+ – Hockey Analysis Rating – Defense (enhanced)

HARO+ – Hockey Analysis Rating – Offense (enhanced)

HART+ – Hockey Analysis Rating – Total (enhanced)

HARD is the defensive rating and is calculated by taking expected goals against while on the ice and dividing it by actual goals against while on the ice. The expected goals against is calculated by averaging the player’s teammates’ goals against per 20 minutes (TMGA20) with the player’s opposition’s goals for per 20 minutes (OppGF20). Similarly, HARO is calculated by taking the player’s actual goals for while on the ice and dividing it by the expected goals for while on the ice (TMGF20 averaged with OppGA20). For both, a rating above 1.00 means that the player helped the team perform better than expected when he was on the ice, whereas a rating below 1.00 means the player hurt the team’s performance when he was on the ice. HART is just an average of HARD and HARO.
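In code form, a minimal sketch using the definitions above:

```python
# The original (pre-"+") ratings: actual vs expected rates, expressed as
# a ratio where >1.00 means better than expected.
def hard_rating(ga20, tm_ga20, opp_gf20):
    exp_ga20 = (tm_ga20 + opp_gf20) / 2
    return exp_ga20 / ga20    # fewer goals against than expected -> >1.00

def haro_rating(gf20, tm_gf20, opp_ga20):
    exp_gf20 = (tm_gf20 + opp_ga20) / 2
    return gf20 / exp_gf20    # more goals for than expected -> >1.00
```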

HARD+, HARO+ and HART+ are enhanced ratings that result from an iterative process which repeatedly feeds the HARD and HARO ratings back into the algorithm to refine them. For the most part this iterative process settles into a nice stable state, but sometimes the algorithm goes haywire and things fail (i.e. for a particular season or seasons). For this reason I am calling the + ratings experimental, but if you don’t see anything wacky (i.e. large differences in every player’s ratings) they should be considered reliable and probably better than the straight HARD, HARO and HART ratings. Anything above 1.00 should be considered better than the average player and anything less than 1.00 below average.
