We invested significant time in building an objective, accurate way of assessing fantasy expertise. Our approach has been validated by Professor Jeff Ohlmann, Ph.D., an expert in sports analytics with a background in mathematics.
Please note that this overview covers our process for scoring In-Season (Weekly) Rankings, broken down step by step below. We also run a separate analysis for Preseason (Draft) Rankings; the methodology is similar but uses each expert’s preseason cheat sheet.
Step 1: Collect the right data.
Our analysis aims to determine who provides the most accurate weekly rankings using standard scoring settings. We take a snapshot of every expert’s rankings at kickoff of the Thursday night game each week, and again at the start of the 1:00 PM ET games on Sunday. Players from the Thursday night game are locked at their rank spots, so experts cannot change those rankings after the game has begun.
Once the week concludes on Monday night, we score the predictions and incorporate the results into our Year-to-Date leaderboard that spans the full 16 weeks of the fantasy season. In 2016, we anticipate evaluating over 130 experts for our competition.
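As a rough illustration of the Thursday lock, here is a minimal sketch in Python. The function and variable names are hypothetical, not part of our actual pipeline:

```python
# Hypothetical sketch of the Thursday lock: an expert's final weekly ranking
# starts from the Sunday snapshot, but players from the Thursday night game
# keep whatever rank they held when that game kicked off.
def final_weekly_ranks(thursday_snapshot, sunday_snapshot, thursday_players):
    """Each snapshot maps player name -> rank; thursday_players is the set of
    players who appeared in the Thursday night game."""
    ranks = dict(sunday_snapshot)
    for player, rank in thursday_snapshot.items():
        if player in thursday_players:
            ranks[player] = rank  # locked at the Thursday kickoff value
    return ranks
```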
Step 2: Determine the player pool.
For each position, we evaluate relevant players as determined by our Expert Consensus Rankings (ECR) and the week’s actual fantasy leaders. This ensures that our player pool covers everyone who was fantasy-relevant in a given week, including the players who were surprise studs and busts.
In other words, if a player unexpectedly becomes a difference-maker, he will be part of our player pool since we make sure to include all key performers. On the flip side, because we also use our consensus ranks to create the player pool, anyone who surprisingly disappoints will also be evaluated. In 2016, we are grading the experts based on the following set of players (pool construction is sketched in code after the list):
Quarterbacks: Top 20 in ECR, Top 20 in Actual Points
Running Backs: Top 35 in ECR, Top 35 in Actual Points
Wide Receivers: Top 35 in ECR, Top 35 in Actual Points
Tight Ends: Top 15 in ECR, Top 15 in Actual Points
Kickers: Top 15 in ECR, Top 15 in Actual Points
Defense & Special Teams: Top 15 in ECR, Top 15 in Actual Points
Linebackers: Top 30 in ECR, Top 30 in Actual Points
Defensive Backs: Top 30 in ECR, Top 30 in Actual Points
Defensive Linemen: Top 30 in ECR, Top 30 in Actual Points
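To make the pool construction concrete, here is a minimal sketch in Python. The function and data-structure names are illustrative assumptions, not our actual code:

```python
# Illustrative sketch: a position's weekly player pool is the union of the
# top-N players by Expert Consensus Ranking (ECR) and the top-N players by
# actual fantasy points scored that week.
POOL_SIZES = {"QB": 20, "RB": 35, "WR": 35, "TE": 15, "K": 15,
              "DST": 15, "LB": 30, "DB": 30, "DL": 30}

def build_player_pool(position, ecr_order, actual_points):
    """ecr_order: player names sorted by consensus rank (best first).
    actual_points: mapping of player name -> actual fantasy points."""
    n = POOL_SIZES[position]
    top_ecr = set(ecr_order[:n])
    top_actual = set(sorted(actual_points, key=actual_points.get, reverse=True)[:n])
    return top_ecr | top_actual  # surprise studs and busts both make the pool
```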
Step 3: Score the experts’ predictions.
We evaluate the experts’ rankings by assigning each ranked player a projected point value: the actual production of whichever player finished at the rank slot the expert assigned. We then compare these projected point totals to every player’s actual point production to generate an “Accuracy Gap” for the expert’s predictions. The closer this value is to zero for a player, the better, because it indicates the prediction was closer to the player’s actual point production.
Another way to think of the “Accuracy Gap” is as the expert’s “Error” for each prediction. A perfect gap would be 0, indicating that there was no error between the expert’s predicted rank and the player’s actual rank.
As an example, if an expert ranks Kelvin Benjamin at WR #28 in Week 1, we’d assign a projected point value (e.g., 9.7 pts) to this prediction based on the production of the player who actually finished as WR #28 for Week 1. This value represents the expected point production for the player at that rank slot. In other words, the expert is effectively predicting that Kelvin Benjamin will score 9.7 points for the week.
Now, say that Benjamin outperformed expectations and finished as WR #13 for the week (15.1 pts). We take the absolute difference between the prediction (9.7 pts) and the actual production (15.1 pts), giving the expert an Accuracy Gap of 5.4 pts for the Benjamin ranking. We repeat this for every other WR in the player pool and sum the gaps to get the expert’s total WR Accuracy Gap. As noted above, a lower number is a better score.
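Here is the same calculation as a short sketch; the example values come straight from the Benjamin illustration above, while the function and argument names are illustrative:

```python
# Sketch of the Accuracy Gap for a single prediction: look up the points
# actually scored at the rank slot the expert assigned, then take the
# absolute difference from the player's own actual production.
def accuracy_gap(expert_rank, player_actual_points, slot_points):
    """slot_points: mapping of finishing rank -> points scored at that slot."""
    projected = slot_points[expert_rank]  # e.g., the WR #28 slot scored 9.7 pts
    return abs(projected - player_actual_points)

# Benjamin example: ranked WR #28 (slot worth 9.7 pts), finished with 15.1 pts.
gap = accuracy_gap(28, 15.1, {28: 9.7})
assert round(gap, 1) == 5.4  # the expert's Accuracy Gap for this ranking
```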
If an expert does not rank a player in our pool, we assign a rank one spot worse than the lowest (worst) ranking the player received across experts. The rationale is that we cannot assume an expert is projecting 0 points just because they did not rank a player: if they stopped their WR rankings at 60, the unranked player could be #61 in their mind, or ranked far lower. We also avoid simply assigning the next spot on the expert’s own list (rank 61 in this scenario) because doing so would penalize experts who take the time to create deep rank lists.
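In code, this fallback might look like the following sketch (assuming each expert’s rankings are a mapping of player to rank):

```python
# Sketch of the unranked-player fallback: impute a rank one spot worse than
# the worst rank the player received from any expert, rather than using the
# expert's own list length, which would penalize deeper rank lists.
def imputed_rank(player, all_expert_ranks):
    """all_expert_ranks: one dict per expert mapping player name -> rank.
    Assumes at least one expert ranked the player."""
    ranks = [r[player] for r in all_expert_ranks if player in r]
    return max(ranks) + 1  # one spot below the lowest (worst) published rank
```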
The flip side of the unranked-player scenario occurs when an expert ranks a player within the rank range (e.g., top 35 RBs) who winds up NOT being in our player pool. In other words, the player was neither a top-35 consensus RB for the week nor a top-35 RB in actual point production.
In this scenario, we assess a penalty equal to the expert’s Accuracy Gap for the player minus the average expert’s Accuracy Gap for the player. The penalty is only applied if the expert’s prediction rates worse than the average expert’s prediction, ensuring that penalties are assessed only where notably poor predictions have been made.
The scenario above would most commonly come into play if an expert failed to take an injured player out of his or her rankings. In that example, it is important that the expert is penalized for offering advice that could lead fantasy owners to make a poor decision.
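As a sketch, the penalty works out to a simple clipped difference; the function name here is illustrative:

```python
# Sketch of the out-of-pool penalty: the expert is docked only by how much
# worse their Accuracy Gap was than the average expert's gap for that player.
def out_of_pool_penalty(expert_gap, average_gap):
    """Both arguments are Accuracy Gaps (absolute errors) in points."""
    return max(0.0, expert_gap - average_gap)  # no penalty at or better than average
```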
Step 4: Rank the Experts.
After the results are calculated for the entire player pool across experts, we rank the experts by position from top to bottom based on their Accuracy Gap. As noted above, a lower gap is considered better because it indicates that an expert’s predictions were closer to the actual production of the players evaluated.
For the Overall assessment, we add up the Accuracy Gap totals from the QB, RB, WR, and TE positions. DST and K are excluded because (a) many experts do not produce rankings for these positions, (b) their week-to-week fantasy scoring is the most volatile, which can skew the results, and (c) many fantasy owners believe that predicting performance at these two positions involves much more luck than at the other positions. We do this calculation for each individual week and then sum the scores across the 16 weeks evaluated to determine which experts have provided the most accurate advice over the course of the season.
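To sketch the aggregation described above (the position set comes from the description; the names are ours):

```python
# Sketch of the Overall score: sum an expert's positional Accuracy Gap totals
# for QB, RB, WR, and TE in each week, then sum across all 16 weeks.
OVERALL_POSITIONS = ("QB", "RB", "WR", "TE")  # K and DST are excluded

def overall_gap(weekly_position_gaps):
    """weekly_position_gaps: one dict per week mapping position -> total gap."""
    return sum(week[pos] for week in weekly_position_gaps
               for pos in OVERALL_POSITIONS)

# Experts are then ranked ascending: the smallest total gap is the most accurate.
```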
In addition to Overall Accuracy, we’re also able to determine which experts offered the most accurate predictions for each individual player. We simply rank the Accuracy Gaps from top to bottom across experts for each player. The closer a projection is to the player’s actual point total, the smaller the expert’s error is and the better their accuracy rank is for the player.
We hope this detailed overview was helpful. Thanks for taking the time to learn more about our accuracy system!