Apr 12, 2013
 

Now that I have added home and road stats to stats.hockeyanalysis.com I can take a look at how quality of competition differs when the team is at home vs. on the road. In theory, because the home team has last change, they should be able to dictate the matchups better and thus should be able to drive QoC a bit more. Let’s take a look at the top 10 defensemen in HARO QoC last season at home and on the road (defensemen with at least 400 5v5 home and road minutes were considered).

Player Name (Home) | Home HARO QoC | Player Name (Road) | Road HARO QoC
GIRARDI, DAN | 8.81 | MCDONAGH, RYAN | 6.73
MCDONAGH, RYAN | 8.49 | GORGES, JOSH | 6.48
PHANEUF, DION | 8.46 | GIRARDI, DAN | 6.03
GARRISON, JASON | 8.27 | SUBBAN, P.K. | 5.95
GORGES, JOSH | 8.25 | PHANEUF, DION | 5.94
GLEASON, TIM | 8.21 | GUNNARSSON, CARL | 5.48
SUBBAN, P.K. | 8.19 | ALZNER, KARL | 5.35
WEAVER, MIKE | 7.92 | STAIOS, STEVE | 5.15
ALZNER, KARL | 7.74 | TIMONEN, KIMMO | 4.95
REGEHR, ROBYN | 7.72 | WEAVER, MIKE | 4.67

There are definitely a lot of common names on each list, but we do notice that HARO QoC is greater at home than on the road for these defensemen. Next I took a look at the standard deviation across all the defensemen with at least 400 5v5 home/road minutes last season, which should give us an indication of how much QoC varies from player to player.

StdDev
Home 3.29
Road 2.45

The standard deviation is 34% higher at home than on the road, which again confirms that variation in QoC is greater at home than on the road. All of this makes perfect sense, but it is nice to see it backed up in actual numbers.
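The home/road spread comparison above can be sketched in a few lines. The player pool and QoC values below are hypothetical stand-ins, not the real data set; only the calculation (population standard deviation of each split, then the ratio) follows the post.

```python
from statistics import pstdev

# Hypothetical HARO QoC values (home, road) for a pool of defensemen;
# real values would come from stats.hockeyanalysis.com.
qoc = {
    "Defenseman A": (8.8, 6.0),
    "Defenseman B": (8.5, 6.7),
    "Defenseman C": (5.1, 4.0),
    "Defenseman D": (2.3, 2.9),
}

home = [h for h, _ in qoc.values()]
road = [r for _, r in qoc.values()]

sd_home = pstdev(home)
sd_road = pstdev(road)

# A ratio above 1 means QoC varies more from player to player at home.
print(f"home stddev {sd_home:.2f}, road stddev {sd_road:.2f}, "
      f"ratio {sd_home / sd_road:.2f}")
```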

 

 

Apr 01, 2013
 

I have been on a bit of a mission recently to push the idea that quality of competition (and zone starts) is not a huge factor in one’s statistics and that most people in general overvalue its importance. I don’t know how often I hear arguments like “but he plays all the tough minutes” as an excuse for why a player has poor statistics, and pretty much every time I do I cringe, because almost certainly the person making the argument has no clue how much those tough minutes actually impact a player’s statistics.

While thinking of how to do this study, and which players to look at, I was listening to a podcast and the name Pavel Datsyuk came up, so I decided I would take a look at him: in addition to being mentioned in a podcast, he is a really good two-way player who plays against pretty tough quality of competition. For this study I looked at 2010-12 two-year data, and Datsyuk has the 10th highest HART QoC during that time in 5v5 zone start adjusted situations.

The next step was to look at how Datsyuk performed against various types of opposition. To do this I took all of the opponent forwards that Datsyuk played at least 10 minutes of 5v5 ZS-adjusted ice time against (you can find these players here), grouped them according to their HARO, HARD, CorHARO and CorHARD ratings, and looked at how Datsyuk’s on-ice stats looked against each group.

OppHARO TOI% GA20
>1.1 46.84% 0.918
0.9-1.1 34.37% 0.626
<0.9 18.79% 0.391

Let’s go through a quick explanation of the above table. I have grouped Datsyuk’s opponents by their HARO ratings into three groups: those with a HARO above 1.1, those with a HARO between 0.9 and 1.1, and those with a HARO below 0.9. These groups represent strong offensive players, average offensive players and weak offensive players. Datsyuk played 46.84% of his ice time against the strong offensive player group, 34.37% against the average offensive player group and 18.79% against the weak offensive player group. The GA20 column is Datsyuk’s goals against rate, or essentially the goals for rate of Datsyuk’s opponents when playing against him. As you can see, the strong offensive players score on Datsyuk at a significantly higher rate than the average offensive players, who in turn score at a significantly higher rate than the weak offensive players.
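A minimal sketch of this grouping step, assuming a table of head-to-head matchup data. The opponent ratings, ice times and goal counts below are invented for illustration; only the bucketing and the GA20 arithmetic follow the method described above.

```python
# Each row: (opponent HARO, head-to-head TOI in minutes, goals against in that TOI)
matchups = [
    (1.25, 120, 5),
    (1.15, 90, 4),
    (1.00, 150, 5),
    (0.95, 60, 2),
    (0.80, 80, 1),
]

def bucket(haro):
    if haro > 1.1:
        return ">1.1"
    if haro >= 0.9:
        return "0.9-1.1"
    return "<0.9"

# Accumulate TOI and goals against per opponent-quality group.
totals = {}
for haro, toi, ga in matchups:
    b = bucket(haro)
    t, g = totals.get(b, (0.0, 0.0))
    totals[b] = (t + toi, g + ga)

all_toi = sum(t for t, _ in totals.values())
for b, (toi, ga) in totals.items():
    # GA20 = goals against per 20 minutes of head-to-head ice time
    print(f"{b:8s} TOI% {toi / all_toi:6.2%}  GA20 {20 * ga / toi:.3f}")
```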

Now, let’s look at how Datsyuk does offensively based on the defensive ability of his opponents.

OppHARD TOI% GF20
>1.1 35.39% 1.171
0.9-1.1 35.36% 0.994
<0.9 29.25% 1.004

Interestingly, the defensive quality of Datsyuk’s opponents did not have a significant impact on Datsyuk’s ability to generate offense, which is kind of an odd result.

Here are the same tables but for corsi stats.

OppCorHARO TOI% CA20
>1.1 15.59% 15.44
0.9-1.1 77.79% 13.78
<0.9 6.63% 10.84

 

OppCorHARD TOI% CF20
>1.1 18.39% 15.89
0.9-1.1 68.81% 18.49
<0.9 12.80% 22.69

I realize that I should have tightened up the rating splits to get a more even distribution in TOI%, but I think we see the effect of QoC fine regardless. When looking at corsi we do see that CF20 varies across defensive quality of opponent, which we didn’t see with GF20.

From the tables above, we do see that quality of opponent can have a significant impact on a player’s statistics. When you are playing against good offensive opponents you are bound to give up a lot more goals than you will against weaker offensive opponents. The question that remains is whether some players can and do play a significantly greater amount of time against good opponents compared to other players. To take a look at this, I produced the same tables for Valtteri Filppula, a player who rarely gets to play with Datsyuk and so in theory could have a significantly different set of opponents from Datsyuk’s. Here are the same tables for Filppula.

OppHARO TOI% GA20
>1.1 42.52% 1.096
0.9-1.1 35.35% 0.716
<0.9 22.12% 0.838

 

OppHARD TOI% GF20
>1.1 32.79% 0.841
0.9-1.1 35.53% 1.197
<0.9 31.68% 1.370

 

OppCorHARO TOI% CA20
>1.1 12.88% 19.03
0.9-1.1 78.20% 16.16
<0.9 8.92% 14.40

 

OppCorHARD TOI% CF20
>1.1 20.89% 15.48
0.9-1.1 64.94% 17.16
<0.9 14.17% 19.09

Nothing too exciting or unexpected in those tables. What is more important is how the ice times differ from Datsyuk’s across groups and how those differences might affect Filppula’s statistics.

We see that Datsyuk plays a little bit more against good offensive players and a little bit less against weak offensive players, and he also plays a little bit more against good defensive players and a little bit less against weak defensive players. If we assume that Filppula played Datsyuk’s ice time distribution across the opponent groups, and that the within-group QoC was the same for both players, we can calculate what Filppula’s stats would be against a QoC similar to Datsyuk’s.

Actual w/ Datsyuk TOI
GF20 1.135 1.122
GA20 0.905 0.917
GF% 55.65% 55.02%
CF20 17.08 17.09
CA20 16.37 16.49
CF% 51.05% 50.90%

As you can see, that is not a huge difference. If we gave Filppula the same QoC as Datsyuk instead of being a 55.65% GF% player he’d be a 55.02% GF% player. That is hardly enough to worry about and the difference in CF% is even less.
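The reweighting behind that comparison can be reproduced directly from the published group rates: keep Filppula’s per-group GA20 but swap in Datsyuk’s TOI distribution. The same calculation applies to GF20 and the corsi rates. The numbers below are taken from the tables above.

```python
# Filppula's GA20 within each opponent-HARO group and his TOI shares,
# from the tables above, plus Datsyuk's TOI shares across the same groups.
filppula_ga20 = {">1.1": 1.096, "0.9-1.1": 0.716, "<0.9": 0.838}
filppula_toi = {">1.1": 0.4252, "0.9-1.1": 0.3535, "<0.9": 0.2212}
datsyuk_toi = {">1.1": 0.4684, "0.9-1.1": 0.3437, "<0.9": 0.1879}

# Actual GA20 = TOI-weighted average of the group rates.
actual = sum(filppula_ga20[g] * filppula_toi[g] for g in filppula_ga20)

# Counterfactual: same group rates, but Datsyuk's TOI mix.
with_datsyuk_toi = sum(filppula_ga20[g] * datsyuk_toi[g] for g in filppula_ga20)

print(f"GA20: actual {actual:.3f}, with Datsyuk's TOI {with_datsyuk_toi:.3f}")
# Reproduces the 0.905 vs 0.917 GA20 row in the table above (to rounding).
```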

From this and other studies I have looked at, I have found very little evidence that QoC has a significant impact on a player’s statistics. The argument that a player can have bad stats because he plays the ‘tough minutes’ is, in my opinion, a bogus argument. Player usage can have a small impact on a player’s statistics, but it is not anything to be concerned with for the vast majority of players, and it will never make a good player have bad statistics or a bad player have good statistics. Player usage charts (such as those found here or those found here) are interesting and pretty neat and do give you an idea of how a coach uses his players, but they are not a tool for justifying a player’s good, or poor, performance. The notion of ‘tough minutes’ exists, but tough minutes are not all that important over the long haul.

 

 

Mar 20, 2013
 

I generally think that the majority of people give too much importance to quality of competition (QoC) and its impact on a player’s statistics, but if we are going to use QoC metrics, let’s at least try to use the best ones available. In this post I will take a look at some QoC metrics that are available on stats.hockeyanalysis.com and explain why they might be better than those typically in use.

OppGF20, OppGA20, OppGF%

These three stats are the average GF20 (on-ice goals for per 20 minutes), GA20 (on-ice goals against per 20 minutes) and GF% (on-ice GF / [on-ice GF + on-ice GA]) of all the opposition players that a player lined up against, weighted by ice time against. In fact, these stats go a bit further in that they remove the ice time the opponent players played against the player, so that a player won’t influence his own QoC (not nearly as important as for QoT, but still a good thing to do). So, essentially these three stats are the goal scoring ability of the opposition players, the goal defending ability of the opposition players, and the overall value of the opposition players. Note that opposition goalies are not included in the calculation of OppGF20 as it is assumed the goalies have no influence on scoring goals.

The benefits of using these stats are that they are easy to understand and are in a unit (goals per 20 minutes of ice time) that is easily understood. OppGF20 is essentially how many goals we expect the player’s opponents to score on average per 20 minutes of ice time. The drawback of these stats is that if good players play against good players and bad players play against bad players, a good player and a bad player may have similar opponent statistics, but the good player is better because he did it against better quality opponents. There is no consideration for the context of the opponents’ statistics, and that may matter.
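As a sketch, an OppGF20-style calculation could look like the following. The function and sample data are hypothetical, but they follow the weighting and self-exclusion steps described above: strip the head-to-head minutes (and goals) out of each opponent’s own GF20, then average the adjusted rates weighted by ice time against.

```python
def opp_gf20(matchups):
    """matchups: list of (opp_total_gf, opp_total_toi, h2h_gf, h2h_toi)."""
    num = den = 0.0
    for opp_gf, opp_toi, h2h_gf, h2h_toi in matchups:
        # Opponent's GF20 with the shared (head-to-head) minutes stripped out,
        # so the player does not influence his own QoC.
        adj_gf20 = 20 * (opp_gf - h2h_gf) / (opp_toi - h2h_toi)
        num += adj_gf20 * h2h_toi
        den += h2h_toi
    return num / den

# Hypothetical opponents: total on-ice GF and TOI, and the share vs. our player.
matchups = [
    (45, 1000, 3, 60),  # opponent was on for 45 GF in 1000 min, 3 of them vs us
    (30, 900, 1, 40),
    (20, 800, 0, 25),
]
print(f"OppGF20 = {opp_gf20(matchups):.3f}")
```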

Let’s take a look at the top 10 forwards in OppGF20 last season.

Player Team OppGF20
Patrick Dwyer Carolina 0.811
Brandon Sutter Carolina 0.811
Travis Moen Montreal 0.811
Carl Hagelin NY Rangers 0.806
Marcel Goc Florida 0.804
Tomas Plekanec Montreal 0.804
Brooks Laich Washington 0.800
Ryan Callahan NY Rangers 0.799
Patrik Elias New Jersey 0.798
Alexei Ponikarovsky New Jersey 0.795

You will notice that every single player is from the eastern conference. The reason for this is that the eastern conference is a more offensive conference. Taking a look at the top 10 players in OppGA20 will show the opposite.

Player Team OppGA20
Marcus Kruger Chicago 0.719
Jamal Mayers Chicago 0.720
Mark Letestu Columbus 0.721
Andrew Brunette Chicago 0.723
Andrew Cogliano Anaheim 0.723
Viktor Stalberg Chicago 0.724
Matt Halischuk Nashville 0.724
Kyle Chipchura Phoenix 0.724
Matt Beleskey Anaheim 0.724
Cory Emmerton Detroit 0.724

Now, what happens when we look at OppGF%?

Player Team OppGF%
Mike Fisher Nashville 51.6%
Martin Havlat San Jose 51.4%
Vaclav Prospal Columbus 51.3%
Mike Cammalleri Calgary 51.3%
Martin Erat Nashville 51.3%
Sergei Kostitsyn Nashville 51.3%
Dave Bolland Chicago 51.2%
Rick Nash Columbus 51.2%
Travis Moen Montreal 51.0%
Patrick Marleau San Jose 51.0%

These are predominantly western conference players with a couple of eastern conference players mixed in. The reason for this western conference bias is that the western conference was the better conference, and thus it makes sense that the QoC would be tougher for western conference players.

OppFF20, OppFA20, OppFF%

These are exactly the same stats as the goal based stats above, but instead of using goals for/against/percentage they use fenwick for/against/percentage (fenwick is shots plus shots that missed the net). I won’t go into details, but you can find the top players in OppFF20 here, in OppFA20 here, and in OppFF% here. You will find a lot of similarities to the OppGF20, OppGA20 and OppGF% lists, but if you ask me which I think is a better QoC metric I’d lean towards the goal based ones. The reason for this is that the small sample size issues we see with goal statistics are not going to be nearly as significant in the QoC metrics, because across all of a player’s opponents luck will average out (for every unlucky opponent you are likely to have a lucky one to cancel out the effects). That said, if you are doing a fenwick based analysis it probably makes more sense to use a fenwick based QoC metric.

HARO QoC, HARD QoC, HART QoC

As stated above, one of the flaws of the above QoC metrics is that there is no consideration for the context of the opponents’ statistics. One of the ways around this is to use the HockeyAnalysis.com HARO (offense), HARD (defense) and HART (total/overall) ratings in calculating QoC. These are player ratings that take into account both quality of teammates and quality of competition (here is a brief explanation of what these ratings are). The HARO QoC, HARD QoC and HART QoC metrics are simply the average HARO, HARD and HART ratings of a player’s opponents.

Here are the top 10 forwards in HARO QoC last year:

Player Team HARO QoC
Patrick Dwyer Carolina 6.0
Brandon Sutter Carolina 5.9
Travis Moen Montreal 5.8
Tomas Plekanec Montreal 5.8
Marcel Goc Florida 5.6
Carl Hagelin NY Rangers 5.5
Ryan Callahan NY Rangers 5.3
Brooks Laich Washington 5.3
Michael Grabner NY Islanders 5.2
Patrik Elias New Jersey 5.2

There are a lot of similarities to the OppGF20 list, with the eastern conference dominating. There are a few changes, but not too many, which really is not that big of a surprise to me: there is very little evidence that QoC has a significant impact on a player’s statistics, so accounting for the opponents’ own QoC will not change the opponents’ stats much, and thus will not change a player’s QoC much either. That said, I believe these should produce slightly better QoC ratings. Also note that a 6.0 HARO QoC indicates that the opponent players are expected to produce a 6.0% boost on the league average GF20.
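As a quick worked example of that last point (the league-average GF20 here is a made-up placeholder, not the real league figure):

```python
# A 6.0 HARO QoC means the opponents are expected to score at roughly a
# 6% premium on the league-average GF20.
league_avg_gf20 = 0.75  # hypothetical league-average 5v5 GF20
haro_qoc = 6.0          # percent boost implied by the QoC rating

expected_opp_gf20 = league_avg_gf20 * (1 + haro_qoc / 100)
print(f"expected opponent GF20 = {expected_opp_gf20:.3f}")
```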

Here are the top 10 forwards in HARD QoC last year:

Player Team HARD QoC
Jamal Mayers Chicago 6.0
Marcus Kruger Chicago 5.9
Mark Letestu Columbus 5.8
Tim Jackman Calgary 5.3
Colin Fraser Los Angeles 5.2
Cory Emmerton Detroit 5.2
Matt Beleskey Anaheim 5.2
Kyle Chipchura Phoenix 5.1
Andrew Brunette Chicago 5.1
Colton Gillies Columbus 5.0

And now the top 10 forwards in HART QoC last year:

Player Team HART QoC
Dave Bolland Chicago 3.2
Martin Havlat San Jose 3.0
Mark Letestu Columbus 2.5
Jeff Carter Los Angeles 2.5
Derick Brassard Columbus 2.5
Rick Nash Columbus 2.4
Mike Fisher Nashville 2.4
Vaclav Prospal Columbus 2.2
Ryan Getzlaf Anaheim 2.2
Viktor Stalberg Chicago 2.1

Shots and Corsi based QoC

You can also find similar QoC stats using shots as the base stat or using corsi (shots + shots that missed the net + shots that were blocked) on stats.hockeyanalysis.com but they are all the same as above so I’ll not go into them in any detail.

CorsiRel QoC

The most common currently used QoC metric seems to be CorsiRel QoC (found on behindthenet.ca), but in my opinion this is not so much a QoC metric as a ‘usage’ metric. CorsiRel is a statistic that compares the team’s corsi differential when the player is on the ice to the team’s corsi differential when the player is not on the ice. CorsiRel QoC is the average CorsiRel of all the player’s opponents.

The problem with CorsiRel is that good players on a bad team with little depth can put up really high CorsiRel numbers compared to similarly good players on a good team with good depth, because essentially it is comparing a player relative to his teammates. The more good teammates you have, the more difficult it is to put up a good CorsiRel. So, on any given team the players with a good CorsiRel are the best players on the team, but you can’t compare CorsiRel across players on different teams because the quality of the teams could be different.

CorsiRel QoC is essentially the average CorsiRel of all the player’s opponents, but because CorsiRel is flawed, CorsiRel QoC ends up being flawed too. For players on the same team, the player with the highest CorsiRel QoC plays against the toughest competition, so in this sense it tells us who is getting the toughest minutes on the team, but again CorsiRel QoC is not really that useful when comparing players across teams. For these reasons I consider CorsiRel QoC more a tool for seeing the usage of a player relative to his teammates than a true QoC metric.
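A small sketch of the team-relativity problem, with invented shot counts: two players with identical on-ice results get very different CorsiRel simply because their teams perform differently when they sit.

```python
def corsi_rel(on_cf, on_ca, on_toi, off_cf, off_ca, off_toi):
    """Team corsi differential per 20 min with the player on ice minus off ice."""
    on_diff = 20 * (on_cf - on_ca) / on_toi
    off_diff = 20 * (off_cf - off_ca) / off_toi
    return on_diff - off_diff

def corsi_rel_qoc(opponents):
    """opponents: list of (opponent CorsiRel, TOI against); TOI-weighted average."""
    total_toi = sum(toi for _, toi in opponents)
    return sum(rel * toi for rel, toi in opponents) / total_toi

# Identical on-ice results (300 CF, 280 CA in 500 min) on a shallow team that
# struggles without the player vs. a deep team that thrives without him.
weak_team = corsi_rel(300, 280, 500, 700, 900, 2500)
deep_team = corsi_rel(300, 280, 500, 950, 900, 2500)
print(f"weak-team CorsiRel {weak_team:+.2f}, deep-team CorsiRel {deep_team:+.2f}")
```

Because the opponent CorsiRel values fed into `corsi_rel_qoc` carry this team-relative bias, the QoC number inherits it, which is the cross-team comparison problem described above.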

I may be biased, but in my opinion there is no reason to use CorsiRel QoC anymore. Whether you use OppGF20, OppGA20, OppGF%, HARO QoC, HARD QoC, and HART QoC, or any of their shot/fenwick/corsi variants, they should all produce better QoC measures that are comparable across teams (which is the major drawback of CorsiRel QoC).

 

Feb 11, 2013
 

When I updated stats.hockeyanalysis.com this season I added new metrics for Quality of Teammates (QoT) and Quality of Competition (QoC). The QoC metrics are essentially the average Hockey Analysis Rating (HARO for offense, HARD for defense and HART for overall) of the opponents that the player plays against. What is interesting about these ratings, as compared to those found elsewhere, is that I split the QoC rating up into offensive and defensive metrics. Thus, there is a QoC HARO rating for measuring the offensive quality of competition, a QoC HARD for measuring the defensive quality of competition, and a QoC HART for overall quality of competition (basically the average of QoC HARO and QoC HARD). The resulting metrics are above 1.00 for above-average competition, below 1.00 for below-average competition, and exactly 1.00 for average competition.

Let’s take a look at defensemen first, specifically the defensemen who have the highest QoC HARO during 5v5 close situations over the previous 2 seasons. This should identify the defensemen who have faced the best offensive players, and here are the top 15.

Player Name HARO QOC
GIRARDI, DAN 1.036
CHARA, ZDENO 1.036
GARRISON, JASON 1.035
MCDONAGH, RYAN 1.034
WEAVER, MIKE 1.033
GORGES, JOSH 1.031
ALZNER, KARL 1.029
GLEASON, TIM 1.026
SEABROOK, BRENT 1.025
BOYCHUK, JOHNNY 1.025
SUBBAN, P.K. 1.025
PHANEUF, DION 1.025
CARLSON, JOHN 1.022
HAMONIC, TRAVIS 1.021
LIDSTROM, NICKLAS 1.021

That’s actually a pretty decent representation of defensive defensemen, though there is a bias towards the eastern conference, in large part because the eastern conference has more offense (the top 4 teams in goals for last year were eastern conference teams, while 9 of the 11 lowest scoring teams were from the western conference).

Now, let’s take a look at the forwards with the toughest offensive competition.

Player Name HARO QOC
SUTTER, BRANDON 1.032
PERRON, DAVID 1.032
CALLAHAN, RYAN 1.031
FISHER, MIKE 1.03
SYKORA, PETR 1.029
BOLLAND, DAVE 1.028
ZAJAC, TRAVIS 1.028
ELIAS, PATRIK 1.028
BERGERON, PATRICE 1.027
HAGELIN, CARL 1.027
ZUBRUS, DAINIUS 1.027
PLEKANEC, TOMAS 1.027
WEISS, STEPHEN 1.026
RECCHI, MARK 1.026
ERAT, MARTIN 1.025

Not a lot of surprises there. They are mostly third-line, defense-first players (IMO Brandon Sutter is the best defensive center in the NHL, and this is just more evidence of why) or quality two-way players, though as you go further down the list you start to see more offensive players showing up, like Alfredsson and Spezza, which is probably evidence of a coach wanting to line match top line against top line instead of a checking line against a top line.

Where things get interesting is looking at who is 300th on the list of forwards in HARO QoC. It’s none other than Manny Malhotra, of massive defensive zone start bias fame. Malhotra’s HARO QoC is just 0.980, while the Canucks center who is assigned mostly offensive zone starts, Henrik Sedin, has a HARO QoC of 0.994, which isn’t a huge difference but is somewhat higher than Malhotra’s. So, despite all those defensive zone starts by Malhotra (presumably because he is considered a better defensive player), Henrik Sedin plays against tougher offensive opponents. How can this be? Despite Malhotra’s significant defensive zone start bias, his five most frequent 5v5close opponent forwards over the previous 2 seasons are David Jones, Matt Stajan, Tim Jackman, Jordan Eberle and Matt Cullen. Aside from Eberle, those guys don’t really scare you much. It seems Malhotra was facing Edmonton’s top line, but not Calgary’s, Minnesota’s or Colorado’s. Henrik Sedin’s top 5 opposition forwards are Dave Bolland, Dany Heatley, Curtis Glencross, Olli Jokinen and Jarome Iginla. Beyond that you have Backes, O’Reilly, Bickell, Thornton, Zetterberg and Getzlaf. Despite the massive offensive zone start bias, it seems the majority of teams are still line matching power vs. power against the Sedins. The conclusion is that defensive zone starts do not immediately imply playing against quality offensive players. It can be argued that, despite the defensive zone starts, Manny Malhotra plays relatively easy minutes.

Using a rigid zone start system like the Vancouver Canucks do actually makes it easier for opposing teams to line match on the road as they know who you are likely to be putting on the ice depending on where the face off is. If the San Jose Sharks want to avoid a Thornton against Malhotra matchup, just don’t start Thornton in the offensive zone. Here are all the forwards with >750 5v5close minutes and at least 40% of the face offs they were on the ice for being in the defensive zone along with their HARO QoC.

Player Name HARO QOC
Manny Malhotra 0.980
Jerred Smithson 0.977
Max Lapierre 0.970
Adam Burish 0.982
Steve Ott 0.993
Jay McClement 0.983
Sammy Pahlsson 1.014
Brian Boyle 1.010
Dave Bolland 1.028
Kyle Brodziak 1.002
Matt Cullen 0.998
Paul Gaustad 0.993

Only 4 of the 12 heavy defensive zone start forwards faced opposition that was above average in terms of quality while the majority of them rank quite poorly.

It is also interesting to see who plays against the best defensive forwards. One might assume it is elite first-line offensive players, but as we saw above, teams seemed to want to avoid matching up top offensive players against Manny Malhotra. So, let’s take a look.

Player Name HARD QOC
FRASER, COLIN 1.044
BOLL, JARED 1.043
MAYERS, JAMAL 1.037
JACKMAN, TIM 1.035
MACKENZIE, DEREK 1.032
ABDELKADER, JUSTIN 1.031
CLIFFORD, KYLE 1.031
EAGER, BEN 1.029
BELESKEY, MATT 1.028
MILLER, DREW 1.028
KOSTOPOULOS, TOM 1.027
MCLEOD, CODY 1.025
NICHOL, SCOTT 1.024
WINCHESTER, BRAD 1.023
PAILLE, DANIEL 1.021

Pretty much only tough guys and 3rd/4th liners on that list. Teams are deliberately using the above players in situations that avoid them facing top offensive players; as a result they face other teams’ third and fourth lines, and thus more defensive-type players.

The one conclusion we can draw from this analysis is that quality of competition is driven by line matching techniques more so than zone starts.

 

Jul 11, 2012
 

I have been wondering about the benefits of using 5v5 close data instead of 5v5 data when we do player analysis and player comparisons.  The rationale for comparing players in 5v5 close situations is that we are comparing players under similar situations.  When teams have a comfortable lead they go into a defensive shell, resulting in fewer shots for (but at a higher shooting percentage) and more shots against (but at a lower opposition shooting percentage).  The opposite of course is true when a team is trailing.  But what I have been thinking about recently is whether there is a quality of competition impact during close situations.  My hypothesis is that teams that are really good will play more time with the score close against other good teams and less time with the score close against significantly weaker teams.  Conversely, weak teams will play more minutes with the score close against other weak teams than against good teams.

My hypothesis is that players on good teams will have a tougher QoC during 5v5 close situations than during overall 5v5 situations and players on weak teams will have weaker QoC during 5v5 close situations than during overall 5v5 situations.  Let’s put that hypothesis to the test.

The first thing I did was to select one key player from each of the 30 teams to represent that team in the study.  Mostly forwards were chosen, but a few defensemen were chosen as well.  From there I looked at the average of their opponents’ goals for percentage (goals for / [goals for + goals against]) over the past 3 seasons in zone start adjusted 5v5 situations as well as zone start adjusted 5v5 close situations, and then compared the difference to the player’s team’s record over the past three seasons.  The table below is the result.

Player Team GF% 5v5 GF% Close Close – 5v5 3yr Pts Avg. Pts
Doan Phoenix 50.3% 50.6% 0.3% 303 101.0
Chara Boston 50.7% 50.9% 0.2% 296 98.7
Toews Chicago 50.4% 50.6% 0.2% 310 103.3
Datsyuk Detroit 50.8% 51.0% 0.2% 308 102.7
Weber Nashville 50.5% 50.7% 0.2% 303 101.0
Backes St. Louis 50.8% 51.0% 0.2% 286 95.3
E. Staal Carolina 50.4% 50.5% 0.1% 253 84.3
Ribeiro Dallas 50.5% 50.6% 0.1% 272 90.7
Gaborik Ny Rangers 50.1% 50.2% 0.1% 289 96.3
Malkin Pittsburgh 50.1% 50.2% 0.1% 315 105.0
Ovechkin Washington 49.9% 50.0% 0.1% 320 106.7
Enstrom Winnipeg 50.1% 50.2% 0.1% 247 82.3
Weiss Florida 50.3% 50.3% 0.0% 243 81.0
Plekanec Montreal 50.4% 50.4% 0.0% 262 87.3
Tavares NY Islanders 50.3% 50.3% 0.0% 231 77.0
Hartnell Philadelphia 50.1% 50.1% 0.0% 297 99.0
J. Thornton San Jose 50.9% 50.9% 0.0% 314 104.7
Kessel Toronto 50.1% 50.1% 0.0% 239 79.7
H. Sedin Vancouver 50.0% 50.0% 0.0% 331 110.3
Nash Columbus 50.9% 50.8% -0.1% 225 75.0
J. Eberle Edmonton 50.6% 50.5% -0.1% 198 66.0
Kopitar Los Angeles 50.6% 50.5% -0.1% 294 98.0
M. Koivu Minnesota 50.7% 50.6% -0.1% 251 83.7
Parise New Jersey 50.8% 50.7% -0.1% 286 95.3
Getzlaf Anaheim 51.0% 50.8% -0.2% 268 89.3
Roy Buffalo 50.3% 50.1% -0.2% 285 95.0
Stastny Colorado 50.3% 50.1% -0.2% 251 83.7
Spezza Ottawa 50.6% 50.4% -0.2% 260 86.7
Stamkos Tampa 50.2% 50.0% -0.2% 267 89.0
Iginla Calgary 50.5% 50.2% -0.3% 274 91.3
>0 group average: GF% 5v5 50.4%, GF% Close 50.5%, Avg Pts 97.3
=0 group average: GF% 5v5 50.3%, GF% Close 50.3%, Avg Pts 91.3
<0 group average: GF% 5v5 50.6%, GF% Close 50.4%, Avg Pts 86.6

The list above is sorted by the difference between the opposition’s 5v5 close GF% and the opposition’s 5v5 GF%.  The bottom three rows of the table tell the story.  They show the average point totals of the teams for players whose opposition 5v5 close GF% was greater than, equal to, or less than their opposition 5v5 GF%.  As you can see, the greater-than group had a team average of 97.3 points, the equal-to group had a team average of 91.3 points, and the less-than group had a team average of 86.6 points.  This means that good teams have on average tougher 5v5 close opponents than straight 5v5 opponents, and weak teams have tougher 5v5 opponents than 5v5 close opponents, which is exactly what we predicted.  It is also not unexpected.  Weak teams tend to play close games against similarly weak teams, while strong teams play close games against similarly strong teams.
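The summary step is simply a sign-based grouping of the diff column followed by an average of team points. Here is a sketch using a handful of rows from the table above (a subset only, so these group averages differ from the full-table ones):

```python
# (player, opposition Close - 5v5 GF% diff in pct points, 3yr avg team points)
rows = [
    ("Doan",    +0.3, 101.0),
    ("Chara",   +0.2, 98.7),
    ("Weiss",    0.0, 81.0),
    ("Kessel",   0.0, 79.7),
    ("Stamkos", -0.2, 89.0),
    ("Iginla",  -0.3, 91.3),
]

# Bucket teams by the sign of the diff, then average team points per bucket.
groups = {">0": [], "=0": [], "<0": []}
for _, diff, pts in rows:
    key = ">0" if diff > 0 else "<0" if diff < 0 else "=0"
    groups[key].append(pts)

for key, pts in groups.items():
    print(f"{key} group: avg team points {sum(pts) / len(pts):.1f}")
```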

Another important observation is how little each player’s opposition GF% metrics deviate from 50%.  The range for the above players is from 49.9% to 51.0%.  That is an incredibly tight range and reconfirms to me the small importance QoC has on a player’s performance, especially when considering longer periods of time.

I also conducted the same study using fenwick for percentage as the QoC metric instead of goals for percentage, but the results were less conclusive.  The >0 group had an average of 93.2 team points in the standings, the =0 group had 93.4 team points, and the <0 group had 83.25 team points.  Furthermore, there was even less variance in opposition FF% than GF%, and only 12 teams had any difference between opposition 5v5 and opposition 5v5 close FF%.  For me, this is further evidence that fenwick/corsi are not optimal measures of player value.

Finally, I looked at the difference in player performance between 5v5 and 5v5 close situations and found no trends among the different performance levels.  For GF%, almost every player had their 5v5 close GF% within 4% of their 5v5 GF% (r^2 between the two was 0.7346), and for FF% every player but Parise had their 5v5 close FF% within 1.7% of their 5v5 FF% (r^2 = 0.945).  Furthermore, there was little consistency as to which players saw an improvement (or decrease) in their 5v5 close GF% or FF%, so it seems it might be luck driven (particularly for GF%) or maybe down to coaching factors.

So what does this all mean?  It means that in 5v5 close situations good teams have a bias towards tougher QoC than weak teams do.  Does it have a significant effect on player performance?  No, because the QoC metrics vary very little across players or from situation to situation (from my perspective QoC can be ignored the majority of the time).  Does it mean that we should be using 5v5 close in our player analysis?  I am still not sure.  I think the benefits of doing so are probably quite small, if there are any at all, as 5v5 close performance metrics mirror 5v5 performance metrics quite well, and in the case of goal metrics the larger sample size of 5v5 data almost certainly supersedes any benefit of using 5v5 close data.

 

Jan 25, 2012
 

Whenever I get into a statistical debate over which player might be better than another, the inevitable argument that comes up is “yeah, but player A plays against tougher competition and gets tougher assignments”, which is a valid argument to make.  But how valid?  The other day I looked at a simple, straightforward method for accounting for zone start differences (which can be significant), and today I thought I’d take a look at quality of teammates and quality of competition.

Whenever I browse through my stats.hockeyanalysis.com site or my own database, I have always been curious about the general lack of variation in the quality of competition and, to a lesser extent, quality of teammate stats (especially over multiple seasons of data), and I thought it would be worthwhile taking a look at it more closely.

My stats site has a number of metrics that we can look at but let me define a few.

  • GF20 – Goals For per 20 minutes of ice time.
  • GA20 – Goals Against per 20 minutes of ice time.
  • TMGF20 – Weighted average (by ice time played with) of teammates GF20
  • TMGA20 – Weighted average (by ice time played with) of teammates GA20
  • OppGF20 – Weighted average (by ice time played against) of opponents GF20
  • OppGA20 – Weighted average (by ice time played against) of opponents GA20

I also have the same stats for fenwick as well identified with an F instead of a G in the above abbreviations.
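All of the TM* and Opp* metrics above are the same kind of calculation: a TOI-weighted average of a base stat. A minimal sketch for TMGF20, with hypothetical teammates:

```python
def weighted_avg_gf20(entries):
    """entries: list of (teammate GF20, shared TOI in minutes).

    TMGF20 weights by ice time played with; the Opp* metrics are the same
    idea but weighted by ice time played against.
    """
    total = sum(toi for _, toi in entries)
    return sum(gf20 * toi for gf20, toi in entries) / total

# Hypothetical teammates: (GF20, minutes played together)
teammates = [(0.95, 600), (0.80, 400), (0.70, 200)]
print(f"TMGF20 = {weighted_avg_gf20(teammates):.3f}")
```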

So, let’s take a look at a player’s offensive capabilities.  Things that would affect a player’s GF20 are the player’s own offensive talents, the offensive talents of his teammates, and the defensive talents of his opponents.  We know that not all players have the same talent level, but what about the talent levels of his teammates and his opposition?  What is the variation among them?

The chart above shows the mean goal production (GF20) in blue, along with lines representing plus and minus one standard deviation.  Also included are TMGF20 in green and OppGA20 in red, with their plus and minus standard deviation lines.  I have included data for one, two, three and four seasons, for skaters with a minimum of 400 minutes of 5v5 ice time per season on average.

As you can see, there is very little variation in quality of opposition, almost to the point that we can ignore it.  The variation in quality of teammates is significant and cannot be ignored, and while it seems to get reduced over time, its impact cannot be ignored even when using 4 years of data.

Here is the same chart except using fenwick stats instead of goal stats.

[chart: QoC_QoT_importance_fenwick]

We see pretty much the same thing when we look at fenwick data as we do with goal data.  There is very little variation in quality of opposition, but significant variation in quality of teammates.  What about on the defensive side of things?

[chart: QoC_QoT_importance_fenwick_against]

Once again, quality of opposition has very little variation across a group of players, almost to the point that it can be ignored.

All of this tells us that when comparing and evaluating players, the quality of competition a player faces varies very little from player to player, and we should be really careful when we use arguments such as “Player A faces tougher quality of competition”, because in the grand scheme of things quality of competition probably has only a very minor influence on Player A’s on-ice stats.  And if you think about it, this makes sense.  If you have a great offensive player, the theory is your opponents will want to match up their great defensive players against him.  But at the same time, you are trying to match up your great offensive player against their weakest defensive players.  When at home you get the line matching advantage, while on the road your opponent does.  When all is said and done, everything more or less evens out.