Dissent and uncertainty…

So, it seems Hockey Graphs doesn’t like dissent. I tried to post a response in the comments to Matt Cane’s defense of WAR, the latest Hockey Graphs attack (though a far more substantiated critique than those from most other Hockey Graphs writers, as I would expect from Matt). Of course, the Hockey Graphs policy is to stomp out and censor critique and discussion, so it seems my comment didn’t pass moderation. Sad, but that seems to be the way they work over there. Luckily I saved a copy of my response, which I will post here.

I don’t disagree with much of what you write but let me address a few points.

Whether they are “long run Crosby-good” is irrelevant – GAR is measuring, to a degree, their contributions this year.

Fair, however looking at this year’s GAR I don’t think you can in any way draw the conclusions the article made (bold mine):

“Unheralded Brandon Saad, Nick Foligno bring star power to Blue Jackets”

(Unheralded? Sure. Star power? Not convinced)

“Foligno and Saad might be two of the league’s most unheralded star forwards, depending on what numbers you’re looking at.”

(Sure, by cherry-picking numbers you can say almost anything about anyone; I still wouldn’t call them “star forwards”)

“In Foligno and Saad, the Jackets might not have a traditional “superstar,” but they might have two guys who bring the same value…”

(Same value as a superstar? This is a bold statement that I think needs a lot more support than one season GAR and certainly a lot more discussion than was in the article.)

“But even though the numbers are big fans of their work, it might still be a while before the rest of the league catches on.”

(Again, if you are trying to suggest GAR knows something the NHL GMs, coaches, scouts, etc. don’t know, you had better have a ton of evidence to support your case.)

My problem is actually less with using GAR as a starting point (as Dom suggests) than with the fact that the article barely moved off the starting line. Even looking at GAR for Foligno/Saad in previous seasons to see whether this season is an anomaly or reflective of true value would be a good start. There is no depth in the article, and its tone (certainly the headline and the closing paragraphs) is that these guys are unheralded star players, not good players having unusually star-like seasons. Instead, conclusions (or suggestions of conclusions) about the players are based almost solely on one season’s GAR.

As an example Curtis McElhinney is playing out of his mind this year and his save % is ahead of Cam Talbot. No one would take him over Talbot.
But no one is going to discount save percentage because of this small inconsistency – there’s nuance involved.

Well, we discount save percentage over small sample sizes all the time, as we should. We know it takes a track record before we have confidence in a goalie’s save percentage, so if a goalie has a handful of good games, or even a good season, we largely discount it if it is unusual.
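To put rough numbers on why small samples deserve that discount, here is a minimal sketch. The shot totals and the .915 “true talent” save percentage are illustrative assumptions, not real data for either goalie:

```python
import math

def sv_pct_std_error(true_sv_pct: float, shots: int) -> float:
    """Binomial standard error of an observed save percentage for a
    goalie whose true talent is true_sv_pct, over a given shot count."""
    return math.sqrt(true_sv_pct * (1 - true_sv_pct) / shots)

# Illustrative sample sizes: a handful of games vs. a full season of shots.
for label, shots in [("handful of games (~150 shots)", 150),
                     ("full season (~1500 shots)", 1500)]:
    se = sv_pct_std_error(0.915, shots)
    print(f"{label}: observed sv% = .915 +/- {2 * se:.3f} (2 SE)")
```

Over ~150 shots, two standard errors is roughly ±.045 of save percentage, which spans the gap between an elite goalie and a replacement-level one; over a full season it shrinks to under ±.015. That is exactly why a hot stretch from McElhinney tells us far less than Talbot’s track record.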

Similarly, if a player’s GAR was 5, 7, 4, 6, 5, 13 over a six-year span, we shouldn’t start calling that player a star because he posted a 13 GAR in one season. I have been fairly consistent over the years in saying that if a player has more than a one-year track record, we should be looking at all of those years when we evaluate him.
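The hypothetical GAR sequence above makes the point numerically. A minimal sketch (the recency weights here are an arbitrary illustration, not a claim about how GAR should actually be aggregated):

```python
# Hypothetical six-season GAR track record from the text above.
gar = [5, 7, 4, 6, 5, 13]

career_mean = sum(gar) / len(gar)

# A simple recency-weighted mean: newer seasons count more, but the
# outlier season still gets pulled back toward the track record.
weights = [1, 2, 3, 4, 5, 6]  # arbitrary illustrative weights
weighted_mean = sum(g * w for g, w in zip(gar, weights)) / sum(weights)

print(f"latest season:         {gar[-1]}")            # 13
print(f"career mean:           {career_mean:.1f}")    # 6.7
print(f"recency-weighted mean: {weighted_mean:.1f}")  # 7.5
```

Either aggregate puts this hypothetical player around a 7-GAR level, i.e. a good player having a star-like season, not a star.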

The only difference between David’s method and Dawson’s is the aggregation. Dawson uses an algorithm, David does it manually. Personally, I’d lean towards using the algorithm. I’ve contradicted myself within tweet threads before and don’t trust my brain to handle all that info the same way each time.

This is fair, and if you want to use GAR to help aggregate statistics in a consistent way, have at it. I am not saying never use all-encompassing stats, but use them with caution. Understand their reliability and that of their component metrics.

My second tweet in the series of tweets that started this whole mess we are in was:

and it finished with

I have been fairly consistent in my critiques of hockey analytics over the years, including that there are too many claims about players with too much (perceived) confidence and very little discussion of the underlying uncertainties. My critique of Dom’s article was no different, and I find it unfortunate it exploded into the mess it did. I apologize for any part I may have played in it getting out of hand; however, I won’t back down from critiquing the use of limited and unreliable statistics to make bold claims. The analytics community would bash Steve Simmons or Pierre McGuire for doing that, so we ought not do it ourselves.

That is the end of my comment; however, here are a couple of closing thoughts.

Bold claims get more eyeballs than talking about uncertainty, and when you are just launching a subscription-based website in Cleveland/Columbus, I guess that is what you need. However, that doesn’t make for good analytics, and I won’t apologize for calling it out. Knowing what you don’t know, or are uncertain about, is critical to the research process. No one researches what they already know, only what they don’t know and want to find out. We need to embrace uncertainty, not ignore it.

Below are some tweets I have made about uncertainty over the past couple years. Clearly it is something I have been critiquing the analytics community on for a long time now.




This article has 2 Comments

  1. Thanks for responding, David. I get what you’re saying around some of Dom’s language, and I agree to a degree that some of the statements may have been too broad for an article that only focused on this year’s stats. At the same time, I also understand the challenge of writing an interesting article without getting tied down in the disclaimers that we really should have throughout.

    I do agree with you wholeheartedly on this statement: “too many claims about players with too much (perceived) confidence and very little discussion of the underlying uncertainties.” Again, it’s definitely a tricky line to walk between boring your readers to death saying “I believe this, but there’s a good chance that the variance is so huge I could be wrong” and not fully explaining that there’s a lot we don’t know that could impact a player’s results. It’s even trickier when the primary means of communicating a lot of thoughts in the analytics sphere limits you to 140 characters. But it’s also a problem that’s not going away any time soon, so we probably should find a good way to do it.

    Anyways, I’ll leave it at that – pleasure disagreeing with you as always 🙂

    1. Matt,

      This is a major problem. Hockey discussion, for the most part, does not take place in academic journals, and there is no peer-review process. “Hockey Analytics” often justifies its approach by reference to the scientific process. Concerns regarding entertainment value (or future employment) ought not matter. Potential limitations and objections must be examined and reexamined, particularly in the absence of peer review. If you cannot clearly and charitably set forth criticisms of your own claims, as well as possible alternatives, then you ought not be comfortable with your beliefs/high credence levels. As a former attorney turned philosophy PhD, I am perpetually frustrated by this lack of critical perspective and intellectual rigor.

      Also, I notice how these ideas spread. The language used in the original article (the strength of its statements) seems to quickly settle, becoming part of “the known”, with future discussions (lazily) linking back to it: “John Doe has shown that …” You very rarely see even a mediocre literature review, showing even a glimpse of awareness of potential weaknesses.

      Along similar lines, I notice how *quickly* this process takes place. For example, Alex Novet recently wrote on strong and weak links, concluding that “hockey is a strong link game.” This was published Mar. 14, 2017. About three weeks later (April 4, 2017), Stimson was referring to it as part of the web his theory fits, as potential justification for the theory he sets forth, with no reference to potential weaknesses, limitations, etc., of Novet’s approach/claims.
