Follow-up on recent conversations…
Earlier this week I participated in several conversations elsewhere and thought I’d post a followup to a few of them.
The other day I posted an article about on-ice shooting percentage being a talent and referenced a discussion I had with Gabe Desjardins of Arctic Ice Hockey at MC79Hockey.com. As part of that discussion we were negotiating a “bet” of sorts: I suggested that, as a group, Crosby, Gaborik, Ryan, St. Louis, H. Sedin, Toews, Heatley, Tanguay, Datsyuk and Horton would have an average on-ice shooting percentage over 10% this season, while Gabe suggested that the group would regress significantly to the mean. The problem was, we couldn’t agree on what mean we were regressing to.
I suggested we use 7.94%, the league-wide shooting percentage (all goals divided by all shots at 5v5 over the last 4 seasons), but Gabe suggested we use only players who were on the ice for at least 1000 shots at 5v5 over the last 4 seasons, which results in a much higher 9.12%. He argued we need to do this because we don’t want to include players who are “sub-replacement” level. This seems odd to me: since he believes variation in shooting percentage is largely random, under that theory a sub-replacement-level player is just as likely to have an elevated shooting percentage as a below-average one.
So, in the end, we never agreed on what mean on-ice shooting percentage to use, but I want to make it known that I still believe players can drive shooting percentage, and thus corsi-derived player analysis will never tell us the whole story. With that said, I’ll predict that the 10 players above will end the season with an average on-ice 5v5 shooting percentage above 10%, and that no more than 2 of them will have an on-ice shooting percentage below 9.5%. Come season’s end, let’s look back and see what happened; hopefully my prediction will come true and we can all get past the “shooting percentage is random” nonsense.
Over at Pension Plan Puppets we had another Mike Weaver discussion. In it I compared how goalies performed when Weaver was on the ice vs when Weaver was not on the ice. Let me expand on that here.
| Year | With Weaver | Without Weaver |
|------|-------------|----------------|
The table shows, for each goalie Weaver has played at least 200 minutes in front of in each of the past 4 seasons, that goalie’s goals-against average with Weaver on the ice and without Weaver on the ice. As you can see, in every instance the goalie had a significantly better goals-against average with Weaver on the ice than without him. This should put to rest some of the “but Weaver has played in front of excellent goalies” argument that came out of my previous Mike Weaver is an excellent defensive defenseman post.
Separating Goalie Performance from Defensemen
As a followup to the Mike Weaver discussion, @so_Truculent asked me how I separate out goaltender play in my goal-based analysis; i.e., how do you account for a player who plays against, on average, worse goalies, or a player who plays in front of sub-par goalies? How I do it is a somewhat complex process, but it is exactly the same as how I account for quality of competition and quality of teammates for forwards and defensemen. Let me describe the process as simply as I can.
I assign every player an offensive and a defensive rating based on their GF20 (goals for while on the ice per 20 minutes) and GA20 (goals against while on the ice per 20 minutes) stats. Let’s use an example and assume we are evaluating Mike Weaver defensively. I calculate an estimated GA20 for Weaver from the average GA20 of all the players who played with him (weighted by ice time played with him) and the average GF20 of all the players he played against (weighted by ice time played against him). This gives me an estimated GA20 based on the players Weaver plays with and against. I then assign Weaver a new defensive rating equal to his estimated GA20 divided by his actual GA20. So, if Weaver gives up fewer goals against than the average of his teammates and opposition would suggest, he gets a better-than-average rating, and if he gives up more, he gets a below-average rating.
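The estimation step above can be sketched in code. This is a minimal illustration, not the site’s actual formula: the player numbers are made up, and combining the two weighted averages with a simple average is an assumption on my part.

```python
def estimated_ga20(teammates, opponents):
    """Expected GA20 given who a player plays with and against.

    teammates: list of (teammate_GA20, minutes_with) pairs
    opponents: list of (opponent_GF20, minutes_against) pairs
    """
    # Ice-time-weighted average of teammates' goals-against rates
    tm = sum(ga * m for ga, m in teammates) / sum(m for _, m in teammates)
    # Ice-time-weighted average of opponents' goals-for rates
    op = sum(gf * m for gf, m in opponents) / sum(m for _, m in opponents)
    # Combine the two estimates (a simple average here; an assumption)
    return (tm + op) / 2

def defensive_rating(est_ga20, actual_ga20):
    # Ratio above 1.00 means fewer goals against than expected
    return est_ga20 / actual_ga20

# Hypothetical data: two teammates, two opponent groups
teammates = [(0.80, 500), (0.95, 300)]
opponents = [(0.85, 400), (0.75, 400)]
est = estimated_ga20(teammates, opponents)
rating = defensive_rating(est, actual_ga20=0.70)
```

Here the player’s actual GA20 (0.70) is lower than the estimate, so the ratio comes out above 1.00: a better-than-average defensive rating.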
I do this for every player in the league, both offensively and defensively. This includes forwards, defensemen and goaltenders (defensively only; I don’t factor goalies into offensive ratings). That’s the beauty of this system: it works the same way for all players. If a goalie makes his teammates’ GA20 ratings better and the opposition’s GF20 ratings worse, then the goalie gets a better-than-average rating.
Now, after we do this once, we have improved information about each player’s offensive and defensive ability, and new offensive and defensive ratings. So I take these new numbers and run them through the process again. And again. And again. Eventually it reaches a stable state where each iteration produces very little change in the players’ ratings. At this point we have the HARO+ and HARD+ ratings you see at stats.hockeyanalysis.com. The HART+ rating is just the average of a player’s HARO+ and HARD+ ratings. Players with a HARO+ rating above 1.00 are above average offensively (though the median is somewhat below 1.00), and players with a HARD+ rating above 1.00 are above average defensively (again, the median is somewhat below 1.00).
I don’t know for sure that these are perfect ratings, but generally speaking I am very happy with the results, and I am very happy that the iterative process converges on a solution rather than going completely haywire as some iterative processes do. I also really like that I can include goalies in the process, because separating goalie performance from the players in front of them is probably one of the hardest questions to answer in hockey, especially if, like me, you believe players can suppress shooting percentage (meaning save % is as much a team stat as a goalie stat).