Jul 15, 2014
 

Before I get into rush shots for individual players I am going to look at some teams. I am starting with the Columbus Blue Jackets, which Jeff Townsend suggested I look at; he was interested to see what impact the decline of Steve Mason and the subsequent transition to Bobrovsky had. Before we get to that though, let’s first look at the offensive side of things (and if you haven’t read my introductory pieces on rush shots, read them here, here and here).

ColumbusRushShPct

The “League” data is the league average over the past 7 seasons.

There is a lot of variability happening here, particularly in the rush shot shooting percentages. Much of this could simply be randomness, as the sample size for single-season 5v5 road data is getting pretty small, particularly for rush shot data. Having looked at a number of these charts I think sample size is definitely going to be an issue. The key will be looking for trends above and beyond the variability.

Now for save percentages.

ColumbusRushSvPct

This chart is definitely a little more stable. Steve Mason’s excellent rookie season was 2008-09, where he actually had a below average non-rush 5v5 road save percentage but an above average rush save percentage. Columbus never again posted a rush save percentage anywhere close to league average until this past season. Interestingly, despite Bobrovsky’s good season in 2012-13, his 5v5 road save percentage that year was fairly average (at home it was outstanding though, which just goes to show you how variable these things can be).

Let’s take a look at the percentage of shots that were rush shots for and against.

ColumbusRushPct

Not really sure what to read into that, but I thought I’d toss it out there for you.

Something that I haven’t looked at before is PDO which is the sum of shooting and save percentages. There is no reason we can’t do this for rush and non-rush shots so here is what it looks like for Columbus.

ColumbusRushPDO

Again, I am not sure what we can read into this PDO table. PDO is kind of an odd stat in my opinion. It typically gets used as a “luck” metric, which it can be when it deviates significantly from 100.0%, and that is certainly the case for a couple of seasons of Rush Shot PDO.
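For reference, the PDO values above are just shooting percentage plus save percentage. Here is a minimal sketch of the calculation for rush and non-rush shots, with made-up shot and goal counts standing in for the real data:

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    """PDO = shooting % + save %, expressed so that league average is ~100."""
    sh_pct = goals_for / shots_for                  # shooting percentage
    sv_pct = 1 - goals_against / shots_against      # save percentage
    return 100 * (sh_pct + sv_pct)

# Made-up rush and non-rush counts for one team-season (5v5 road only)
rush_pdo     = pdo(goals_for=25, shots_for=260, goals_against=30, shots_against=270)
non_rush_pdo = pdo(goals_for=55, shots_for=750, goals_against=60, shots_against=800)
print(round(rush_pdo, 1), round(non_rush_pdo, 1))
```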

I am still trying to figure out how useful any of this rush/non-rush information is. Certainly I think we hit some serious sample size issues when looking at a single season’s worth of road-only data, and I think that puts some of the usefulness in question. I have done some year over year correlations and truthfully they aren’t very good. I think that is largely sample size related, but I still think playing style and roster turnover can have significant impacts too. All that said, there is a clear difference between the difficulty of rush and non-rush shots, and teams that can maximize the number of rush shots they take while minimizing the number of rush shots against will be better off.

 

Jul 10, 2014
 

Yesterday I introduced the concept of rush shots, which are basically any shots we can identify as being taken subsequent to a rush up the ice, as determined by the location of the previous face off, shot, hit, giveaway or takeaway event. If you haven’t read yesterday’s post, go give it a read for a more formal definition of what a rush shot is. Today I am going to look at how rush shots vary when teams are leading vs trailing, and also investigate home/road differences, since arena biases in hits, giveaways and takeaways might have a significant impact on the results.

Leading vs Trailing

One hypothesis I had is that a team defending a lead tends to play more frequently in its own zone and thus has the potential to generate a higher percentage of its shots off the rush. Here is a table of leading vs trailing rush shot statistics.

Game Situation Rush Sh% Other Sh% Overall Sh% % Shots on Rush
Leading 10.43% 8.03% 8.62% 24.3%
Trailing 9.36% 7.15% 7.63% 22.0%
Leading-Trailing 1.07% 0.89% 0.98% 2.28%

As expected, teams see a boost in the percentage of overall shots that are rush shots when leading (24.3%) compared to when trailing (22.0%). This higher percentage of rush shots would factor into the higher shooting percentage, but it actually doesn’t seem to be all that significant. The more significant effect is that teams with the lead see a boost in shooting percentage on both rush and non-rush shots. So the hypothesis that teams have a higher shooting percentage when leading because they take more shots on the rush doesn’t seem to hold; they simply shoot better. Note that empty net situations are not included, so the higher shooting percentages when leading are not a result of empty net goals.

 Home vs Road

My concern with home stats is that the various arena game recorders dole out hits, giveaways and takeaways at different rates. I determine what is a rush shot and what isn’t based in part on those events, so there is the potential for significant arena biases in rush shot stats. To investigate, I looked at the percentage of shots that were rush shots at home and on the road for each team. Here is what I found.

RushShotPercentage_Home_vs_Road

That is about as conclusive as you can get. The rush shot percentage at home is far more variable than on the road, with higher highs and lower lows. It is possible that the line-matching tactics coaches can more easily employ at home with last change account for some of the added variability, but my guess is it is mostly due to arena scorer biases. From the chart above I suspect Buffalo, Minnesota, New Jersey and Pittsburgh don’t hand out hits, giveaways and takeaways as frequently as other arenas. The next chart takes a look at last year’s real time stats (the chart above covers the last 7 seasons combined).

HitsGiveawaysTakeaways_Home_vs_Road

Most teams have more hits+giveaways+takeaways recorded on home ice than on the road. The teams with more on the road than at home are Buffalo, Minnesota, New Jersey, Pittsburgh and St. Louis. Despite comparing a 7-year chart with a 1-year chart, the two charts line up fairly well. There do seem to be significant arena biases in rush shot statistics, so when looking at team and player stats it is certainly best to consider road stats only.
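As a rough illustration of this scorer-bias check, here is a sketch of how per-team home vs road event counts could be tallied; the column names and numbers are hypothetical stand-ins for the real-time stats data:

```python
import pandas as pd

# Hypothetical per-game totals: hits + giveaways + takeaways credited to a team,
# split by whether the game was at home or on the road.
games = pd.DataFrame({
    "team":  ["BUF", "BUF", "MIN", "MIN", "TOR", "TOR"],
    "venue": ["home", "road", "home", "road", "home", "road"],
    "hits_give_take": [28, 41, 30, 44, 52, 47],
})

# Average events per game at home vs on the road, and the home-road gap per team.
per_game = games.groupby(["team", "venue"])["hits_give_take"].mean().unstack()
per_game["home_minus_road"] = per_game["home"] - per_game["road"]
print(per_game.sort_values("home_minus_road"))
```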

 

Jul 9, 2014
 

I have been pondering doing this for a while and over the past few days I finally got around to it. I have had a theory for a while that the average shot resulting from a rush up the ice is more difficult to stop than the average shot generated by offensive zone play. It makes sense for numerous reasons:

  1. The rush may be an odd-man rush.
  2. The rush comes with speed, making it more difficult for the defense/goalie to defend.
  3. Shots are probably taken from closer in (aside from when a team wants to make a line change, teams rarely shoot from the blue line on a rush).

To test this theory I defined a shot off the rush as any of the following:

  • A shot within 10 seconds of a shot attempt by the other team on the other net.
  • A shot within 10 seconds of a face off at the other end or in the neutral zone.
  • A shot within 10 seconds of a hit, giveaway or takeaway in the other end or the neutral zone.

I initially looked at just the first two, but the results were inconclusive because the number of rush events was simply too small, so I added giveaways, takeaways and hits to the equation, which dramatically increased the sample size of rush shots. Unfortunately this introduces some arena bias, as it is well known that hits, giveaways and takeaways vary significantly from arena to arena. We will have to keep this in mind in future analysis of the data and possibly consider just road stats.
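To make the definition concrete, here is a rough sketch of how a shot could be flagged as a rush shot from play-by-play events. The event fields and zone labels are hypothetical stand-ins for however the data is actually stored, not the exact code I use:

```python
RUSH_WINDOW = 10  # seconds

def is_rush_shot(shot, prior_events):
    """Flag a shot as a rush shot if a qualifying event occurred within the previous
    10 seconds at the other end of the ice or in the neutral zone.

    `shot` and each item in `prior_events` are dicts with (hypothetical) keys:
      time (game seconds), type, team, zone ('off', 'neu', 'def' from the shooting team's view).
    """
    for ev in reversed(prior_events):                 # walk back from the most recent event
        if shot["time"] - ev["time"] > RUSH_WINDOW:
            break
        # 1. Shot attempt by the other team (i.e. on the other net)
        if ev["type"] in ("SHOT", "MISS", "BLOCK", "GOAL") and ev["team"] != shot["team"]:
            return True
        # 2. Faceoff at the other end or in the neutral zone
        if ev["type"] == "FAC" and ev["zone"] in ("def", "neu"):
            return True
        # 3. Hit, giveaway or takeaway at the other end or in the neutral zone
        if ev["type"] in ("HIT", "GIVE", "TAKE") and ev["zone"] in ("def", "neu"):
            return True
    return False
```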

For now though I am going to look at all 5v5 data. Here is a chart of how each team looked in terms of rush and non-rush shooting percentages.

Rush_vs_NonRush_ShootingPct_2007-14b

So, it is nice to see that the hypothesis holds true. Every team had a significantly higher shooting percentage on “rush” shots than on shots we couldn’t conclusively identify as rush shots (note that some of these could still be rush shots, but we didn’t have an event occur at the other end or in the neutral zone to identify them as such). As a whole, the league has a rush shot shooting percentage of 9.56% over the past 7 seasons, while the shooting percentage is just 7.34% on shots we cannot conclusively identify as rush shots. Over the 7 years, 23.5% of all shots were identified as rush shots while 28.6% of all goals were scored on the rush.

In future posts over the course of the summer I’ll investigate rush shots further including but not limited to the following:

  • How much does the frequency of rush shots drive a team’s/player’s overall shooting/save percentages?
  • Are score effects on shooting/save percentages largely due to increases/decreases in rush shot frequency?
  • Are there teams/players that are better at reducing the number of rush shots?
  • Can rush shots be used to identify and quantify “shot quality” in any useful way?
  • How does this align with the zone entry research that is being done?

 

 

Jul 4, 2014
 

The other day I put up a post on Mike Weaver’s and Bryce Salvador’s possible ability to boost their goalies’ save percentage, and I followed it up with a post on the Maple Leafs defensemen where we saw Phaneuf, Gunnarsson, Gleason and Gardiner all seemingly able to do so as well, while Robidas had the reverse effect (lowering goalie save percentage). This got some pushback from the analytics community suggesting this is not possible. My question to them is, why not?

Their answer is that if you do a year over year analysis of a player’s on-ice save percentage, or of a player’s on-ice save percentage relative to his team’s, you will find almost no correlation. While this is true, I claim it is not sufficient to prove that such a talent does not exist. Here is why.

We Know Players Can and Do Impact Save %

The most compelling argument that players can and do impact save % is that we see it happening all the time, and it is fully accepted within the hockey analytics community. It is known as score effects. Score effects are a well entrenched concept in hockey analytics. It is why we often look at 5v5 “close” or 5v5 tied statistics instead of just 5v5 statistics. Generally speaking, the trailing team usually experiences an increase in shot rate along with a decrease in shooting percentage, while the team protecting the lead experiences a decrease in shot rate but an increase in shooting percentage. The following table shows the Boston Bruins’ shooting and save percentages when tied, leading and trailing over the past 7 seasons combined.

. Tied Leading Trailing
Shooting% 7.27% 9.14% 7.66%
Save% 93.36% 93.86% 92.53%

The difference in the Bruins’ save percentage between leading and trailing is 1.33%. That is the difference between a .923 goalie and a .910 goalie, which is the difference between an elite goalie and a below average one. That is not insignificant. Is this the goalie’s fault, or does it have something to do with the players in front of him? The latter seems most likely. It makes sense that when protecting a lead, players take fewer risks in an attempt to generate offense and in return give up fewer good scoring chances against, albeit maybe more chances in total. Conversely, the team playing catch-up takes more offensive risks and so ends up giving up more quality scoring chances against. This is reflected in the team’s save and shooting percentages when leading and trailing.
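For reference, score-state splits like the Bruins table above come from a simple grouping of 5v5 shot events by the score at the time of the shot. A sketch, with hypothetical column names and a tiny made-up sample:

```python
import pandas as pd

# Hypothetical 5v5 shot-level data for one team: each row is a shot for or against,
# tagged with the score state ('tied', 'leading', 'trailing') at the time of the shot.
shots = pd.DataFrame({
    "state":    ["tied", "tied", "leading", "trailing", "leading"],
    "for_team": [True, True, False, True, True],   # True = shot by the team, False = shot against
    "goal":     [False, True, False, False, True],
})

def splits(df):
    out = {}
    for state, grp in df.groupby("state"):
        sf = grp[grp["for_team"]]     # shots for
        sa = grp[~grp["for_team"]]    # shots against
        out[state] = {
            "Sh%": 100 * sf["goal"].mean() if len(sf) else float("nan"),
            "Sv%": 100 * (1 - sa["goal"].mean()) if len(sa) else float("nan"),
        }
    return pd.DataFrame(out)

print(splits(shots))
```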

So, if a team can play a style that boosts its save percentage when protecting a lead, why is it so inconceivable that a player who plays that style of hockey all the time could see the same impact in his on-ice save percentage? If Mike Weaver and Bryce Salvador always play the style teams play when protecting a lead, why can they not boost on-ice save percentage? There is no reason they can’t.

It is Difficult to Detect Because Individual Players Don’t Have a Lot of Control over Outcomes

The average player’s individual ability to influence of what happens on the ice is actually fairly small as there are also 9 other skaters and 2 goalies on the ice with him. At best you can say the average player has a ~10% impact on outcomes while he is on the ice. That isn’t much. Last week James Mirtle tweeted a link to Connor Brown’s hockeydb.com page as evidence why +/- is a useless statistic. Over the course of three OHL season’s Brown’s +/- went from -72 to -11 to +44. I suggested to Mirtle that if this is the criteria for tossing out stats we can toss out a lot of stats including corsi% because most stats are highly team/linemate dependent. When challenged that this dramatic of reversal is not seen in corsi% I cited David Clarkson as an example.  In 2012-13 Clarkson was 4th in CF% but in 2013-14 he was 33rd (of 346) in CF%. From one year to the next he went from 4th best to 14th worst. Why is this? WEll, Clarkson essentially moved from playing with good corsi players on a good corsi team to playing with bad corsi players on a bad corsi team. No matter how much puck possession talent Clarkson has (or hasn’t) his talent doesn’t dominate over the talent level of the 4 team mates he is on the ice with.

Now think about how many players change teams from one year to the next, and how many players get moved up and down a lineup and change linemates from one season to the next. It is not an insignificant number. TSN’s UFA tracker currently lists 109 UFAs signed since July 1st, the majority of them changing teams. There are only ~800 NHL players (regulars and depth players) in a season, so that is pretty significant turnover. Some teams turn over a quarter to half their lineup while others stay largely the same. With that much roster turnover, and with so little ability for a single player to drive outcomes, it should be expected that the majority of statistics see relatively high “regression”. Regression doesn’t mean a lack of individual talent though.

Think of this scenario. We have a player with an average ability to boost on-ice save percentage, and he has been playing on a team with a number of players who are good at boosting on-ice save percentage, but generally speaking he doesn’t play with those players. Under this scenario it will appear that the player is poor at boosting on-ice save percentage because he is being compared to players who are good at it. Now that player moves to another team that isn’t very good at boosting on-ice save percentage. Now that same average player will look like a good player because he has a better on-ice save percentage than his teammates. The result is little year over year consistency, but that doesn’t mean there aren’t talent differences among players.

Hockey is not like baseball, which is a series of one-on-one matchups between pitcher and batter or isolated attempts to make a fielding play on a batted ball. Outcomes in hockey are completely interdependent with the up to 11 other players on the ice. QoT is the largest driver of a player’s statistics in hockey. Only when we factor out QoT completely can we truly identify every player’s talent level for any metric we measure. This is kind of like a chicken-and-egg problem though, because to identify a player’s talent level we need to know the talent level of his teammates, which in turn requires knowledge of his own talent level. We can’t just look at year over year regression to isolate talent level.

Comments

The “team” aspect in hockey is more significant than in any other sport, and any particular player’s statistics are largely driven by the quality of his teammates. Beyond teammates, style of play can be a significant factor in a player’s statistics. The quality of the players that a particular player plays with is a function of both the team he plays on and the role he fills on that team (offensive first line vs defensive third line), and this is maybe the greatest driver of a player’s statistics. This is why David Clarkson can be a Corsi king in New Jersey and a Corsi dud in Toronto. It also accounts for how James Neal can go from being a 25 goal guy playing on the first line in Dallas to a 40 goal guy in Pittsburgh (and probably back to a 25 goal guy in Nashville next year). This also accounts for why year over year correlation in many stats is not very good despite there being measurable differences in the talent that the stat is measuring. Significant statistical regression is not sufficient, in my opinion, to conclude there is no significant controllable talent if no serious attempt has been made to completely isolate a player’s individual contribution to team results.

Just for fun, here is a chart of Lidstrom’s on-ice save percentage vs team save percentage. It is pretty outstanding that an offensive defenseman can do this too.

LidstromOnOffSavePct

 

Apr 29, 2014
 

It seems every time a new hockey person gets hired these days, they get asked “do you believe in hockey analytics?” It started with Trevor Linden in Vancouver. Then Brendan Shanahan in Toronto. And today Brad Treliving in Calgary. Nichols on hockey has a good rundown on both Treliving’s and Burke’s responses to the question today, so go give it a read.

As we all know, Brian Burke is an analytics skeptic to say the least. A popular Brian Burke quote is the following:

“Let’s get the record straight on that too. The first analytics systems I see that’ll help us win, I’ll buy it. I’ll pay cash so that no one else can use it. I’m not a dinosaur on that.”

What Burke gets wrong here is that analytics is not a “system” you can buy; rather, it is a thought process and a way of doing business. Walmart is famous for using analytics to maximize the profits of its retail operation by knowing its customers’ buying habits: knowing what their customers will buy, how much they will buy and when they will buy it, based on everything from the weather to the economy. Analytics is a huge part of their success. That said, there is no analytics “system” that another retailer can purchase off the shelf that will allow them to do the same. It isn’t a system that makes Walmart so successful; it is the way they use analytics to operate their business, permeating the entire operation. Every retail operation has a different customer base, a different set of products to sell, and a different cost structure. There is no single “system” that can be applied to guarantee retail success. That doesn’t mean every retail operation can’t benefit from analytics, because analytics is a way of doing business. It is the mindset of wanting to know as much as you can and applying unbiased analytical techniques to that knowledge to drive decision making. It is the mindset of wanting to know as much about your customers’ buying habits as you possibly can. It is the mindset of wanting to know what your customers will want to buy, and when, and why. It is the mindset of knowing how many employees you need on staff at a given time to maximize sales and profits. It is about wanting to know how long a lineup customers will tolerate before they leave and make their purchase elsewhere. Analytics is a way of thinking that permeates throughout your organization; it is not a “system” that you can buy and apply.

I don’t know the extent to which NHL teams are using hockey analytics, but I get the feeling that very few are doing so in a really serious way. Being a highly analytical person I may be biased, but to me an NHL team that truly adopts hockey analytics would see the idea of analytics permeate the entire organization. Analytics should be an important driver of coaching tactics and decisions. It should be an important driver of scouting and player evaluation. It should be an important driver of team building. It should be an important driver of maximizing salary cap commitments. It also should not be one-directional: I firmly believe hockey analytics can benefit significantly from the hockey knowledge of players, coaches, general managers and scouts, which can be used to improve and test analytical techniques. I have my doubts that there are many NHL organizations that have truly adopted hockey analytics defined in that way. Some may be dabbling; few are truly adopting.

Interestingly though, I suspect there isn’t one NHL organization that doesn’t use analytics in a significant way on the business side of the operation, for everything from setting ticket, beer and hot dog prices, to setting advertising rates, to evaluating the effectiveness of their sales staff. I am certain analytics permeates the business side of an NHL organization in a significant way, so it is kind of surprising there is any resistance to it on the hockey side.

 

Feb 27, 2013
 

Over the last several days I have been playing around a fair bit with team data, analyzing various metrics for their usefulness in predicting future outcomes, and I have come across some interesting observations. Specifically, with more years of data, fenwick becomes significantly less important/valuable while goals and the percentages become more important/valuable. Let me explain.

Let’s first look at the year over year correlations in the various stats themselves.

Y1 vs Y2 Y12 vs Y34 Y123 vs Y45
FF% 0.3334 0.2447 0.1937
FF60 0.2414 0.1635 0.0976
FA60 0.3714 0.2743 0.3224
GF% 0.1891 0.2494 0.3514
GF60 0.0409 0.1468 0.1854
GA60 0.1953 0.3669 0.4476
Sh% 0.0002 0.0117 0.0047
Sv% 0.1278 0.2954 0.3350
PDO 0.0551 0.0564 0.1127
RegPts 0.2664 0.3890 0.3744

The above table shows the r^2 between past events and future events. The Y1 vs Y2 column is the r^2 between subsequent years (i.e. 0708 vs 0809, 0809 vs 0910, 0910 vs 1011, 1011 vs 1112). The Y12 vs Y34 column is a 2 year vs 2 year r^2 (i.e. 07-09 vs 09-11 and 08-10 vs 10-12), and the Y123 vs Y45 column is the 3 year vs 2 year comparison (i.e. 07-10 vs 10-12). RegPts is points earned during regulation play (using a win-loss-tie point system).
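For those wondering how these numbers are generated: each r^2 is just the squared Pearson correlation, across all teams, between the stat in the earlier window and the stat in the later window. A sketch with placeholder values:

```python
import numpy as np

# Placeholder team-level values: e.g. FF% over 2007-09 (x) and FF% over 2009-11 (y),
# one entry per team (a real run would have all 30 teams).
x = np.array([52.1, 48.3, 50.7, 46.9, 53.4, 49.8])
y = np.array([51.0, 47.5, 49.9, 48.2, 52.0, 50.3])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
r_squared = r ** 2
print(round(r_squared, 4))
```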

As you can see, with increased sample size the fenwick stats’ ability to predict future fenwick stats diminishes, particularly for fenwick for and fenwick %. All the other stats generally get better with increased sample size, except for shooting percentage, which has essentially no predictive power over future shooting percentage.

The increased predictive power of the goal and percentage stats with increased sample size makes perfect sense, as the larger sample decreases the random variability in those stats, but I have no definitive explanation as to why the fenwick stats can’t maintain their predictive ability with increased sample sizes.

Let’s take a look at how well each statistic correlates with regulation points using various sample sizes.

1 year 2 year 3 year 4 year 5 year
FF% 0.3030 0.4360 0.5383 0.5541 0.5461
GF% 0.7022 0.7919 0.8354 0.8525 0.8685
Sh% 0.0672 0.0662 0.0477 0.0435 0.0529
Sv% 0.2179 0.2482 0.2515 0.2958 0.3221
PDO 0.2956 0.2913 0.2948 0.3393 0.3937
GF60 0.2505 0.3411 0.3404 0.3302 0.3226
GA60 0.4575 0.5831 0.6418 0.6721 0.6794
FF60 0.1954 0.3058 0.3655 0.4026 0.3951
FA60 0.1788 0.2638 0.3531 0.3480 0.3357

Again, the values are r^2 with regulation points. Nothing too surprising there, except maybe that team shooting percentage is so poorly correlated with winning, because at the individual level it is clear that shooting percentage is highly correlated with goal scoring. It is also apparent from the table above that team save percentage is a significant factor in winning (or, as my fellow Leaf fans can attest, that a lack of save percentage is a significant factor in losing).

The final table I want to look at shows how well a few of the stats predict future regulation time point totals.

Y1 vs Y2 Y12 vs Y34 Y123 vs Y45
FF% 0.2500 0.2257 0.1622
GF% 0.2214 0.3187 0.3429
PDO 0.0256 0.0534 0.1212
RegPts 0.2664 0.3890 0.3744

The values are r^2 with future regulation point totals. Regardless of the time frame used, past regulation point totals are the best predictor of future regulation point totals. Single season FF% is slightly better than GF% at predicting the following season’s regulation point totals, but with 2 or more years of data GF% becomes a significantly better predictor, as the predictive ability of GF% improves while FF% declines. This makes sense given what we observed earlier: increasing sample size improves GF%’s ability to predict future GF% while FF% gets worse, and GF% is more highly correlated with regulation point totals than FF% is.

One thing that is clear from the above tables is that defense has been far more important to winning than offense. Whether we look at GF60, FF60 or Sh%, each trails its defensive counterpart (GA60, FA60 and Sv%) in importance, usually significantly. The defensive stats correlate more highly with winning and are more consistent from year to year. Defense and goaltending win in the NHL.

What is interesting though is that this largely differs from what we see at the individual level. At the individual level there is much more variation in the offensive stats, indicating individual players have more control over the offensive side of the game. This might suggest that team philosophies drive the defensive side of the game (i.e. how defensive minded the team is, the playing style, etc.) while the offensive side of the game is dominated more by the offensive skill level of the individual players. At the very least it is something worthy of further investigation.

The last takeaway from this analysis is the declining predictive value of fenwick/corsi with increased sample size. I am not quite sure what to make of this, and if anyone has any theories I’d be interested in hearing them. One theory I have is that fenwick rates are not a part of the average GM’s player personnel decisions, and thus over time, as players come and go, team fenwick rates will begin to drift. If this is the case, then this may represent an area of value that a GM could exploit.

 

Jan 23, 2013
 

One of the challenges in hockey analytics, or any type of data analysis, is how best to visualize data in a way that is highly informative and yet simple to understand. I have been working on a few things and came up with something that I think might be a useful tool for understanding how a player gets utilized by his coach.

Let’s start with some background. We can get an idea of how a player is utilized by looking at when the player gets used and how frequently he gets used. Offensive players get more ice time on the power play and more ice time when their team is trailing and needs a goal. Defensive players get more ice time on the PK and when protecting a lead. This all makes sense, but the issue is that some teams spend more time on the PP or PK than others, while bad teams end up trailing more than good teams and leading less. This means a straight time on ice comparison between players on different teams doesn’t always accurately depict their usage. If a player on the Red Wings plays the same number of minutes with the lead as a player on the Blue Jackets, it doesn’t mean the players are used in the same way. The Blue Jackets lead games significantly less than the Red Wings, so in that hypothetical example the Blue Jackets are depending on their player for a higher percentage of their time with a lead than the Red Wings are on theirs.

To get around this I looked at percentages. If Player A played 500 minutes with a lead and his team played a total of 2000 minutes with a lead during the games Player A appeared in, then Player A’s ice time with a lead percentage would be 25%: in the games he played, he was on the ice for 25% of the team’s time with a lead. I can calculate these percentages for any situation, from 5v5 to 4v5 or 5v4 special teams to leading and trailing situations. The challenge is to visualize the data in a clear and understandable way, and to do that I use radar charts (see the sketch below). Let’s look at a couple of examples so you get an idea, using players with extreme and opposite usages: Daniel Sedin and Manny Malhotra.
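As an illustration of both the percentage calculation and the radar charts, here is a sketch using made-up minutes; the situation categories shown are only examples of the kind used:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up ice time, in minutes, for one player and for his team in the games he played.
situations = ["5v5", "5v4 PP", "4v5 PK", "Leading", "Trailing"]
player_toi = np.array([900.0, 180.0, 10.0, 320.0, 350.0])
team_toi   = np.array([3600.0, 400.0, 380.0, 1400.0, 1500.0])

usage_pct = 100 * player_toi / team_toi   # share of the team's time in each situation

# Radar (spider) chart: one spoke per situation, closed back to the first point.
angles = np.linspace(0, 2 * np.pi, len(situations), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = np.concatenate([usage_pct, usage_pct[:1]])

ax = plt.subplot(111, polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(situations)
ax.set_title("Situational ice time usage (% of team minutes)")
plt.show()
```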

For those not up to speed on my terminology f10 is zone start adjusted ice time which ignores the 10 seconds after a face off in either the offensive or defensive zone.

The charts above are largely driven by PP and PK ice time, but players used more often in offensive roles will have charts that bulge toward the top and top right, while those in more defensive roles will have charts that bulge toward the bottom and bottom left. Also, the larger the ‘polygon’, the more ice time the player gets and the more he is relied on. In the examples above, Sedin is clearly used more often in offensive situations and clearly gets more ice time.

Let’s now look at a player who is used in a more balanced way, Zdeno Chara.

That is a chart that is representative of a big ice time player who plays in all situations. We can then take it a step further and compare players such as the following.

In normal 5v5 situations Gardiner was depended on about as much as Phaneuf, but Phaneuf was relied on a lot more on special teams and a bit more when protecting a lead. Of course, you can also compare across teams with these charts:

Phaneuf and Chara were depended on almost equally in all situations except on the PP where Phaneuf was used far more frequently.

I am not sure where I will go with these charts but I think I’ll look at them from time to time as I am sure they will be of use in certain situations and I have a few ideas as to how to expand on them to make them even more interesting/useful.

 

Sep 3, 2012
 

A month and a half ago Eric T at NHLNumbers.com had a good post quantifying players’ impact on teammate shooting percentage. I wanted to take a second look at the relative importance that impact can have, because I disagreed somewhat with Eric’s conclusions.

For a very small number of elite playmakers, the ability to drive shooting percentage can be a major component of their value. For the vast majority of the league, driving possession is a more significant and more reproducible path to success.

It is my belief that it is important to consider the impact on shooting percentage for more than a “very small number of elite playmakers”, and I’ll attempt to show that now.

The method Eric used to identify a player’s impact on shooting percentage is to compare his linemates’ shooting percentages when playing with him to their overall shooting percentages. As noted in the comments, the one flaw with this is that their overall shooting percentage is itself influenced by the player we are trying to evaluate, which ends up underestimating the impact. In the comments Eric re-did the analysis using a true “without you” shooting percentage; the impact of driving teammate shooting percentages was greater than initially estimated, but he concluded the conclusions didn’t change significantly.

Overall average for the top ten is a 1.2% boost (up from 0.9% in story) and 5 goals per year (up from 4.5). I don’t think this changes the conclusions appreciably.

In the minutes that a player is on the ice with one of the very best playmakers in the league, his shooting percentage will be about 1% better. For a player who gets ~150-200 shots per year and plays ~40-60% of his ice time with that top-tier playmaker, that’s less than a one-goal boost. It’s just not that big of a factor.

He also suggested that using the “without you” shooting percentage instead of the overall shooting percentage would probably make the analysis “more accurate but less precise”. This is because a guy like Daniel Sedin takes very few shots when playing apart from Henrik Sedin, because they rarely play apart, and this small “apart” sample might be subject to significant small sample size errors.
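To make the “with you” vs “without you” method concrete, here is a sketch of the comparison for a single linemate; the shot and goal counts are hypothetical, and the real analysis aggregates this over all of a playmaker’s linemates:

```python
def shooting_pct(goals, shots):
    """Shooting percentage, as a percentage; returns 0 if there are no shots."""
    return 100 * goals / shots if shots else 0.0

# Hypothetical totals for one linemate of the playmaker being evaluated.
with_goals, with_shots       = 22, 190   # linemate's shots taken with the playmaker on the ice
without_goals, without_shots = 14, 160   # linemate's shots taken without him

boost = shooting_pct(with_goals, with_shots) - shooting_pct(without_goals, without_shots)
print(f"Shooting % boost with the playmaker on the ice: {boost:.2f} points")
```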
