Feb 11, 2013
 

When I updated stats.hockeyanalysis.com this season I added new metrics for Quality of Teammates (QoT) and Quality of Competition (QoC). The QoC metrics are essentially the average Hockey Analysis Rating (HARO for offense, HARD for defense and HART for overall) of the opponents that the player plays against. What is interesting about these ratings, as compared to those found elsewhere, is that I split the QoC rating up into offensive and defensive metrics. Thus, there is a QoC HARO rating for measuring the offensive quality of competition, a QoC HARD for measuring the defensive quality of competition, and a QoC HART for overall quality of competition (basically the average of QoC HARO and QoC HARD). The resulting metrics give a result that is above 1.00 for above-average competition and below 1.00 for below-average competition; 1.00 would be exactly average competition.
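As a rough sketch of the idea (this is not the site's actual code; the function name and sample data are made up for illustration), a QoC rating amounts to an ice-time-weighted average of opponent ratings:

```python
# Hypothetical sketch: a QoC rating as a time-weighted average of the
# HARO (or HARD) ratings of the opponents a player faced.

def qoc_rating(opponent_ratings, ice_times):
    """Average opponent rating, weighted by head-to-head ice time.

    opponent_ratings: HARO (or HARD) ratings of opponents faced
    ice_times: minutes on ice against each corresponding opponent
    """
    total = sum(ice_times)
    return sum(r * t for r, t in zip(opponent_ratings, ice_times)) / total

# A player who spends most of his minutes against above-average
# opponents ends up with a QoC above 1.00.
print(round(qoc_rating([1.10, 0.95, 1.00], [30, 10, 20]), 3))  # 1.042
```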

Let’s look at defensemen first, starting with those who have the highest QoC HARO during 5v5close situations over the previous two seasons. This should identify the defensemen who have faced the best offensive players, and here are the top 15.

Player Name HARO QOC
GIRARDI, DAN 1.036
CHARA, ZDENO 1.036
GARRISON, JASON 1.035
MCDONAGH, RYAN 1.034
WEAVER, MIKE 1.033
GORGES, JOSH 1.031
ALZNER, KARL 1.029
GLEASON, TIM 1.026
SEABROOK, BRENT 1.025
BOYCHUK, JOHNNY 1.025
SUBBAN, P.K. 1.025
PHANEUF, DION 1.025
CARLSON, JOHN 1.022
HAMONIC, TRAVIS 1.021
LIDSTROM, NICKLAS 1.021

That’s actually a pretty decent representation of defensive defensemen, though there is a bias towards the eastern conference, in large part because the eastern conference has more offense (the top 4 teams in goals for last year were eastern conference teams, while 9 of the 11 lowest scoring teams were from the western conference).

Now, let’s take a look at the forwards facing the toughest offensive competition.

Player Name HARO QOC
SUTTER, BRANDON 1.032
PERRON, DAVID 1.032
CALLAHAN, RYAN 1.031
FISHER, MIKE 1.03
SYKORA, PETR 1.029
BOLLAND, DAVE 1.028
ZAJAC, TRAVIS 1.028
ELIAS, PATRIK 1.028
BERGERON, PATRICE 1.027
HAGELIN, CARL 1.027
ZUBRUS, DAINIUS 1.027
PLEKANEC, TOMAS 1.027
WEISS, STEPHEN 1.026
RECCHI, MARK 1.026
ERAT, MARTIN 1.025

Not a lot of surprises there. They are mostly third-line, defense-first players (in my opinion Brandon Sutter is the best defensive center in the NHL, and this is just more evidence of why) or quality two-way players. As you go further down the list you start to see more offensive players showing up, like Alfredsson and Spezza, which is probably evidence of a coach wanting to match top line against top line instead of a checking line against a top line.

Where things get interesting is looking at who is 300th on the list of forwards in HARO QoC. It’s none other than Manny Malhotra of massive defensive zone start bias fame. Malhotra’s HARO QoC is just 0.980, while the Canucks center who is assigned mostly offensive zone starts, Henrik Sedin, has a HARO QoC of 0.994, which isn’t a huge difference but is somewhat higher than Malhotra’s. So, despite all those defensive zone starts by Malhotra (presumably because he is considered the better defensive player), Henrik Sedin plays against tougher offensive opponents. How can this be? Despite Malhotra’s significant defensive zone start bias, his five most frequent 5v5close opponent forwards over the previous two seasons are David Jones, Matt Stajan, Tim Jackman, Jordan Eberle, and Matt Cullen. Aside from Eberle, those guys don’t really scare you much. It seems Malhotra was facing Edmonton’s top line but not Calgary’s, Minnesota’s or Colorado’s. Henrik Sedin’s top 5 opposition forwards are Dave Bolland, Dany Heatley, Curtis Glencross, Olli Jokinen and Jarome Iginla. Beyond that you have Backes, O’Reilly, Bickell, Thornton, Zetterberg, and Getzlaf. Despite the massive offensive zone start bias, it seems the majority of teams are still line matching power vs power against the Sedins. The conclusion is that defensive zone starts do not immediately imply playing against quality offensive players. It can be argued that, despite the defensive zone starts, Manny Malhotra plays relatively easy minutes.

Using a rigid zone start system like the Vancouver Canucks do actually makes it easier for opposing teams to line match on the road, as they know who you are likely to be putting on the ice depending on where the faceoff is. If the San Jose Sharks want to avoid a Thornton against Malhotra matchup, they just don’t start Thornton in the offensive zone. Here are all the forwards with >750 5v5close minutes and at least 40% of the faceoffs they were on the ice for taking place in the defensive zone, along with their HARO QoC.

Player Name HARO QOC
Manny Malhotra 0.980
Jerred Smithson 0.977
Max Lapierre 0.970
Adam Burish 0.982
Steve Ott 0.993
Jay McClement 0.983
Sammy Pahlsson 1.014
Brian Boyle 1.010
Dave Bolland 1.028
Kyle Brodziak 1.002
Matt Cullen 0.998
Paul Gaustad 0.993
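The selection criteria above can be expressed as a simple filter; the records below are made-up examples, not actual data from the site:

```python
# Hypothetical player records; field names are my own invention.
forwards = [
    {"name": "Player A", "min_5v5close": 812, "dz_faceoff_pct": 0.43},
    {"name": "Player B", "min_5v5close": 640, "dz_faceoff_pct": 0.45},  # too few minutes
    {"name": "Player C", "min_5v5close": 901, "dz_faceoff_pct": 0.31},  # too few DZ starts
]

# >750 5v5close minutes AND at least 40% of on-ice faceoffs in the DZ.
heavy_dzone = [
    f["name"] for f in forwards
    if f["min_5v5close"] > 750 and f["dz_faceoff_pct"] >= 0.40
]
print(heavy_dzone)  # only "Player A" meets both thresholds
```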

Only 4 of the 12 heavy defensive zone start forwards faced above-average opposition, while the majority of them rank quite poorly.

It is also interesting to see who plays against the best defensive forwards.  One might assume it is elite offensive first line players but as we saw above, teams seemed to want to avoid matching up top offensive players against Manny Malhotra. So, let’s take a look.

Player Name HARD QOC
FRASER, COLIN 1.044
BOLL, JARED 1.043
MAYERS, JAMAL 1.037
JACKMAN, TIM 1.035
MACKENZIE, DEREK 1.032
ABDELKADER, JUSTIN 1.031
CLIFFORD, KYLE 1.031
EAGER, BEN 1.029
BELESKEY, MATT 1.028
MILLER, DREW 1.028
KOSTOPOULOS, TOM 1.027
MCLEOD, CODY 1.025
NICHOL, SCOTT 1.024
WINCHESTER, BRAD 1.023
PAILLE, DANIEL 1.021

Pretty much only tough guys and 3rd/4th liners on that list. Teams are deliberately deploying the above players in situations that avoid them facing top offensive players; as a result they face other teams’ third and fourth lines, and thus more defensive-type players.

The one conclusion we can draw from this analysis is that quality of competition is driven more by line matching than by zone starts.

 

  5 Responses to “Taking a look at quality of competition”

  1.  

    Just discovered this site and I have to say I’m impressed. I have a quick question/comment. As I was reading about how the ratings were calculated I saw that at the most basic level you calculate HART as (HARO + HARD)/2 in order to produce a baseline in which a score of 1 is an average player. Perhaps I am mistaken, but I don’t think this method of adding the metrics together works. For instance, if you have a player who is 3x worse than average on offense (HARO score of 0.333) but 3x better on defense (HARD score of 3), then his HART will be 1.667, even though the root scores 1/3 and 3/1 are exactly symmetrical and would produce a GF% of 50%, and hence should produce a score of 1.

    Make sense?
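The asymmetry in this comment can be checked numerically; a purely illustrative sketch (a geometric mean is one alternative that would return 1.0 here, though the site does not claim to use one):

```python
# The commenter's thought experiment: a player 3x worse than average on
# offense but 3x better on defense. The two effects offset to a 50% GF%,
# yet the arithmetic average of the ratings comes out well above 1.
haro = 1 / 3   # 3x worse than average offensively
hard = 3.0     # 3x better than average defensively

hart_arithmetic = (haro + hard) / 2     # (0.333... + 3) / 2
hart_geometric = (haro * hard) ** 0.5   # sqrt(1) = 1.0

print(round(hart_arithmetic, 3))  # 1.667
print(round(hart_geometric, 3))   # 1.0
```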

    •  

      I need to add a good explanation of these ratings, but think of it this way: HARO = actual GF while on ice / expected GF while on ice, so someone with a HARO of 1.10 would see his team score 10% more goals when he is on the ice than is expected based on the talent levels of his linemates and the quality of the opposition. Similarly, a player with a HARD rating of 1.1 would give up 10% fewer goals against than expected given the quality of his teammates. The challenge is determining what expected goals for and against should be, but I start off with expected goals for being (average goals for of his linemates + average goals against of his opponents)/2. From there I get initial HARO and HARD ratings, and then I recalculate expected goals for and against based on the initial HARO and HARD ratings of his linemates and opponents. I then iterate over and over again, but essentially, when all is said and done, a 1.10 HARO rating means the player has 10% more goals for while on the ice than can be expected. A 0.333 HARO rating would mean the player is on the ice for 66.7% fewer goals than expected, which is downright terrible (over decently large sample sizes).
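The iterative scheme described in this reply could be sketched roughly as follows. This is a toy implementation under my own assumptions about data layout, damping, and iteration count; the real calculation presumably weights by shared ice time and includes opponents as well as linemates:

```python
# Toy sketch of an iterative rating: HARO = actual GF / expected GF,
# where expectations are re-derived each pass from current linemate
# ratings. Damping keeps the fixed-point iteration stable.

def iterate_haro(players, n_iter=300, damping=0.5):
    """players: name -> dict with
         gf    : goals for while on ice
         xgf0  : baseline expected GF (league-average linemates)
         mates : names of most common linemates
    Returns a dict of HARO-style ratings (actual / expected)."""
    haro = {p: 1.0 for p in players}
    for _ in range(n_iter):
        for p, d in players.items():
            # expected GF scales with the average rating of linemates
            mate_avg = sum(haro[m] for m in d["mates"]) / len(d["mates"])
            target = d["gf"] / (d["xgf0"] * mate_avg)
            haro[p] = (1 - damping) * haro[p] + damping * target
    return haro

# Made-up data: A outscores expectations, C falls short of them.
toy = {
    "A": {"gf": 10.8, "xgf0": 10, "mates": ["B", "C"]},
    "B": {"gf": 10.0, "xgf0": 10, "mates": ["A", "C"]},
    "C": {"gf": 8.8,  "xgf0": 10, "mates": ["A", "B"]},
}
ratings = iterate_haro(toy)
print({p: round(r, 2) for p, r in ratings.items()})
```

At the fixed point the ratings are self-consistent: each player's expected goals already account for how good his linemates are, which is the point of iterating rather than stopping at the first pass.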

  2.  

    I get it. But go look at Daniel Briere. His GF% is exactly 50%, but his HART indicates he is 20% better than expected. That is an artifact of the way HARO and HARD are added together in HART. By the numbers he is almost exactly as good at preventing goals relative to expectations as he is bad at scoring them. However, he is getting almost twice the credit for prevention as he is penalized for the lack of scoring. This is a real-life example of the thought experiment above.

    Now, over a larger sample size this problem will average out because you’ll get less extreme GF and GA numbers; however, that will disguise the problem rather than address it.

    •  

      I think there are sample size issues going on there. There just haven’t been enough games played to get reliable results. Take a look at last year, where his GF% was 52.0 and his HART rating was 1.005. That said, your comments have given me some thought about how I am calculating HARD ratings, to make sure I am doing them the best way. If I am not doing them correctly it wouldn’t make a huge difference, but it may in these small sample size situations.

  3.  

    Empirically, over a large enough sample, it probably wouldn’t make a huge difference in most cases. However, this isn’t an ordinary sample size problem in which the sample of events is not representative of the “true” nature of events. Rather, this is a theoretical problem that manifests itself most obviously at the extreme ranges of results, results that are more likely to occur with small samples.

    The problem, I think, is that the relationship between HART and event % (goal, Fenwick, whatever you are measuring) is not linear, and hence any linear equation is going to have a range where it is accurate; anything outside that range is going to produce oddities. But I’m not a statistician, so what do I know.
