
Brownalytics 2019 Predictions Recap

At the core of the 'brownalytics' project is a question: how well can stats predict NFL game outcomes from week to week and from season to season? That question led to the creation of a statistical model that estimates the win probability for each team in each NFL game, based on its running average statistics and those of its opponent. An important benchmark for evaluating the brownalytics model, then, is how accurately it classifies the eventual winner of each game as the favorite in each matchup, based solely on the stats fed into it each week.


To that end, and as a measure of accountability for the predictions made by brownalytics' model, some high-level summary statistics about our 2019 season predictions are reviewed below to recap the year.


Overall Performance:


Overall, the brownalytics model performed roughly the same as it did during its initial season (2018) of week-by-week NFL predictions: 60.2% accuracy vs. 59.9% in 2018. While the goal was certainly a more significant increase in accuracy, the underlying model was largely unchanged from 2018 to 2019, save for the introduction of simulation: each contest was simulated 5,000 times per week in 2019, whereas the 2018 season relied on a single spot prediction to create game-by-game win probabilities.
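The simulation approach can be illustrated with a minimal Monte Carlo sketch. Everything here is an illustrative assumption rather than brownalytics' actual implementation: the function name, the Gaussian noise model, and the 0.1 standard deviation are invented for the example; the only detail taken from the text is the 5,000-simulation count.

```python
import random

def simulate_game(spot_win_prob, n_sims=5000, seed=42):
    """Estimate a team's win probability by simulating a matchup n_sims
    times. `spot_win_prob` stands in for the single spot prediction the
    model would otherwise use; each simulation perturbs it with noise
    to represent game-to-game statistical variance (an assumption)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        # jitter the spot estimate, clipped to the valid [0, 1] range
        p = min(max(spot_win_prob + rng.gauss(0, 0.1), 0.0), 1.0)
        if rng.random() < p:
            wins += 1
    return wins / n_sims
```

With a symmetric noise model like this one, the simulated win rate converges back toward the spot estimate; the value of simulation comes when the per-game inputs themselves vary, which a real model would draw from each team's statistical distributions.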

[Images: overall_record.png, away_record.png, home_record.png]

Another measure of the brownalytics model's performance can be gleaned from FiveThirtyEight.com's NFL forecasting game, where anyone on the internet is encouraged to test their predictive prowess by creating a probabilistic forecast of each week's games. Truth be told, this was a nervous leap to take. Predicting games and posting them on Twitter is one thing; seeing how they stack up against people who (presumably) know what they're doing is quite another, and - especially early on - the returns were discouraging. But eventually things evened out, and the model finished around the 82nd percentile overall (heading into the Super Bowl); a decent first try, but not close to FiveThirtyEight's own forecast accuracy, nor to the many superforecasters who post their excellent work to Twitter week in and week out (note: be sure to follow @greerreNFL and @LeeSharpeNFL for outstanding NFL analytics content).

[Image: five_thirty_eight.png]

One notable change to the methodology which is worth highlighting was an effort to down-weight home-field advantage in brownalytics' projections by "playing" each simulated game on a constant proportion of home, away and neutral fields, as opposed to simply categorizing each game as home or away based on its actual location. The logic behind this update was that a statistically superior team, over enough games while controlling for statistical variance, should pull off enough away and neutral-field wins against a statistically inferior team that the correct expected result manifests itself in the output of brownalytics' 5,000-game simulations. This approach may have been incrementally successful this year: brownalytics' success rate for home-field picks was fairly flat (61.8% vs. 62.8% in 2018) while its success rate for away-field picks rose (58.5% vs. 51.3% in 2018).
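The venue-mixing idea above can be sketched as a simple weighted blend of venue-specific simulation results. The weights below are placeholders chosen for illustration; the post does not state the actual proportions of home, away, and neutral fields that brownalytics uses.

```python
def blended_win_prob(sim_home, sim_away, sim_neutral,
                     weights=(0.45, 0.20, 0.35)):
    """Blend win probabilities simulated at home, away and neutral
    venues into one projection, so home-field advantage is only
    partially applied. The 45/20/35 split is an assumed example,
    not brownalytics' actual mix."""
    w_home, w_away, w_neutral = weights
    # the venue proportions must account for all simulations
    assert abs(w_home + w_away + w_neutral - 1.0) < 1e-9
    return w_home * sim_home + w_away * sim_away + w_neutral * sim_neutral
```

For example, a team simulated to win 70% at home, 50% away, and 60% on a neutral field would project to roughly a 62.5% win probability under these assumed weights, rather than the full 70% it would get if the game were simply categorized as a home game.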


As has been discussed at some length on #analyticstwitter, 2019 was a year where home-field advantage was down overall, so the dynamics described above may be a coincidental byproduct of natural long-cycle fluctuations (and luck). While this is certainly possible, brownalytics' increased predictive accuracy for away-team picks in 2019 is encouraging, and will be explored further through continued development prior to the 2020 season.


Weekly Performance Trends:


As was the case with the 2019 Browns season, brownalytics' week-to-week predictive accuracy was a bit of a roller coaster ride. Weeks 1-4 were rough, as the year-over-year model that brownalytics uses to make early-season predictions was exposed as quite weak (that will be a key focus for improvement heading into 2020). Weeks 5-8 built confidence, weeks 9-10 brought us crashing back to Earth, and weeks 11-17 provided a drama-free end to 2019 on brownalytics' spreadsheets while the actual Browns' season imploded on our TV sets.

[Images: linear_record.png, cumulative_record.png]

Best Week/Worst Week:


Gambling isn't what brownalytics was created for (hence the absence of point spreads from our work), but placing a couple of bets was seriously considered after the model posted a best-ever weekly success rate of 86.7% in week 8.

[Image: confusion_matrix_best.png]

Luckily, though, we didn't divert any of our operating budget into the betting markets, because week 9 began a regression that was punctuated by our worst-ever weekly success rate (23.1%) in week 10.

[Image: confusion_matrix_worst.png]

Team-level Performance:


Presented below is a team-level view of where brownalytics had the most and least success predicting outcomes in 2019, along with the same measure for our Browns. Brownalytics had the most success predicting wins for the Saints (i.e., a win was predicted and a win was observed) and took similar enjoyment in predicting Bengals losses week in and week out. On the other hand, brownalytics was unable to accurately predict either of Cincinnati's two wins, and failed in every instance where it predicted a loss for the Ravens or the Chiefs.


As a note, the summary below only includes teams for which more than one prediction was made in the category described. For example, brownalytics predicted New Orleans to win nine of their games, and they won eight of those (88.9%). Conversely, brownalytics predicted Cincinnati to lose 14 games, of which they lost 12 (85.7%). Brownalytics also correctly predicted the single loss it forecast for both San Francisco and Dallas, a 100% success rate, but those 1-of-1 cases were excluded from the visual below.
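The "more than one prediction" filter described above can be sketched as follows. The data shape and function name are hypothetical, but the counting and exclusion logic mirror the description: success rates are computed per (team, pick) category, and categories with only a single prediction are dropped.

```python
from collections import defaultdict

def team_success_rates(predictions, min_count=2):
    """Summarize per-team pick success. `predictions` is a list of
    (team, predicted_win, actual_win) tuples; (team, pick) categories
    with fewer than min_count predictions are dropped, mirroring the
    1-of-1 exclusion described in the text."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for team, predicted_win, actual_win in predictions:
        key = (team, predicted_win)       # e.g. ("NO", True) = picked to win
        totals[key] += 1
        if predicted_win == actual_win:   # the pick matched the outcome
            correct[key] += 1
    return {key: correct[key] / totals[key]
            for key in totals if totals[key] >= min_count}
```

Fed the season's results, a sketch like this would return 8/9 (88.9%) for Saints win picks and 12/14 (85.7%) for Bengals loss picks, while the 1-of-1 San Francisco and Dallas loss picks would drop out of the summary.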

[Image: tm_level_stats.png]

Lastly, below is the full table of team-level statistics evaluating the quality of the predictions made by brownalytics' model across a few important statistical measures, whose descriptions are included for reference and clarity.


Accuracy:  The model's overall ability to correctly predict both true wins and true losses out of the total population of predictions made for a given team (16 games). This is the metric listed for the Browns in the visual above.


Sensitivity:  The model's overall ability to correctly predict wins as a percentage of total actual wins by that team.


Specificity: The model's overall ability to correctly predict losses as a percentage of total actual losses by that team. 
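The three measures above are standard confusion-matrix statistics, and can be computed from a season of (predicted win, actual win) pairs as in this generic sketch (not brownalytics' actual code):

```python
def prediction_metrics(records):
    """Compute accuracy, sensitivity and specificity for one team's
    season, given a list of (predicted_win, actual_win) booleans."""
    tp = sum(1 for p, a in records if p and a)          # predicted win, actually won
    tn = sum(1 for p, a in records if not p and not a)  # predicted loss, actually lost
    fp = sum(1 for p, a in records if p and not a)      # predicted win, actually lost
    fn = sum(1 for p, a in records if not p and a)      # predicted loss, actually won
    accuracy = (tp + tn) / len(records)
    sensitivity = tp / (tp + fn) if (tp + fn) else None  # share of real wins caught
    specificity = tn / (tn + fp) if (tn + fp) else None  # share of real losses caught
    return accuracy, sensitivity, specificity
```

Note that sensitivity and specificity are undefined (returned as None here) for a team with no actual wins or no actual losses, respectively.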

[Image: tm_detail_stats.png]