Quick Comparison of Four Public Expected Goal Models

Last week I wrote about SportLogiq and its Expected Goals model and how it compared to public expected goal models. I want to expand on that a little by comparing how four public expected goal models have performed over the past five seasons.

The most rudimentary way to evaluate an expected goals model is to look at whether, at a league-wide level, it accurately estimates the actual number of goals scored. That is what I am doing here: the graphs below show the percent deviation of each model's expected goals from total goals scored. Negative values indicate the model under-predicted the number of goals scored league-wide, and positive values indicate it over-predicted them.
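For anyone who wants to reproduce this check, here is a rough Python/pandas sketch. The column names ("goal", "xg", "season") are placeholders of my own, not any model's actual schema:

```python
import pandas as pd

def pct_deviation(shots: pd.DataFrame) -> float:
    """Percent deviation of total expected goals from actual goals.

    Negative => the model under-predicted league-wide goals;
    positive => the model over-predicted them.
    """
    actual = shots["goal"].sum()     # 'goal' is a 0/1 flag per shot
    expected = shots["xg"].sum()     # model's expected goal value per shot
    return 100 * (expected - actual) / actual
```

To get one value per season, as in the charts, you would run something like `shots.groupby("season").apply(pct_deviation)`.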

The first chart is in 5v5 situations only.

It is interesting that every model shows more or less the same pattern: predicting more goals (relative to actual goals scored) over the past two seasons and fewer goals in the three seasons prior. However, I am proud to say that my Puckalytics model has been the most consistent, with the least deviation from actual goals. The other three models perform about the same, with MoneyPuck slightly outperforming the other two.

Here is the chart for 5v4 powerplay goals.

There is a very clear pattern that is consistent across models, with Evolving Hockey always predicting the highest number of goals, MoneyPuck always predicting the fewest, and Natural Stat Trick and Puckalytics somewhere in the middle. Overall, Natural Stat Trick is the best at predicting total 5v4 goals, while MoneyPuck is the least accurate, always predicting too few 5v4 goals.

There aren’t nearly as many 4v5 short-handed goals scored, but here is the chart for them.

Where MoneyPuck under-predicted 5v4 goals, it generally over-predicts 4v5 goals. The other three models perform fairly similarly.

Comparing expected goal models to total goals scored is the most rudimentary method available. The next step is to look at how each model performs for individual teams and see whether team-by-team performance matches what we saw at the total goals level. Are there certain teams that models under- or over-predict?
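The team-level version of the check is the same calculation split by team. Again a hedged sketch with placeholder column names ("season", "team"), not any site's real data format:

```python
import pandas as pd

def pct_deviation_by_team(shots: pd.DataFrame) -> pd.DataFrame:
    """Percent deviation of expected vs. actual goals, per team per season.

    Sorting by deviation surfaces the teams a model systematically
    under- or over-predicts.
    """
    by_team = shots.groupby(["season", "team"]).agg(
        actual=("goal", "sum"),
        expected=("xg", "sum"),
    )
    by_team["pct_dev"] = 100 * (by_team["expected"] - by_team["actual"]) / by_team["actual"]
    return by_team.sort_values("pct_dev")
```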

I don’t have shot-by-shot data for models other than my own; however, if I did, I could determine how models perform on shots with similar characteristics (slot shots, point shots, mid-range shots, wrist shots, slap shots, etc.).
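If that shot-by-shot data were available, the analysis might look something like the following: compare each model's average expected goal value against the observed scoring rate within each shot category. The "shot_type" column is hypothetical.

```python
import pandas as pd

def calibration_by_shot_type(shots: pd.DataFrame) -> pd.DataFrame:
    """Observed goal rate vs. mean xG within each shot category.

    A well-calibrated model has mean_xg close to goal_rate in every
    bucket; a large pct_dev in one bucket (e.g. point shots) shows
    where the model's errors are concentrated.
    """
    table = shots.groupby("shot_type").agg(
        n_shots=("goal", "size"),     # sample size per category
        goal_rate=("goal", "mean"),   # observed scoring rate
        mean_xg=("xg", "mean"),       # model's average prediction
    )
    table["pct_dev"] = 100 * (table["mean_xg"] - table["goal_rate"]) / table["goal_rate"]
    return table
```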
