How do you measure the accuracy of fantasy baseball experts?

The information below applies specifically to fantasy baseball. Learn more about our fantasy football accuracy methodology here.

We’ve invested a significant amount of time to make sure we offer an objective and accurate way of assessing fantasy expertise. Below is a breakdown of our process for determining our Fantasy Baseball Accuracy results. Please note that we also run a separate analysis that evaluates the accuracy of baseball projections.

Step 1: Collect the right data.

Our analysis aims to determine who provides the most accurate draft rankings using a 5×5 rotisserie format. We take a snapshot of each expert’s rankings just prior to the start of the season to ensure we’re analyzing each pundit’s final set of predictions. In 2017, a total of 56 experts were evaluated for our study.

Step 2: Determine the player pool.

For each position, we evaluate relevant players as determined by our Expert Consensus Rankings (ECR) and the season’s actual fantasy leaders based on final ranks in the FantasyPros Player Rater. This ensures that our player pool covers everyone who was fantasy relevant, including the players who were surprise studs and busts.

In other words, if a player unexpectedly becomes a difference-maker (e.g. Jean Segura), he will be part of our player pool since we make sure to include all key performers. On the flip side, because we also use preseason ranks to create the player pool, anyone who surprisingly disappoints (e.g. Byron Buxton) will also be evaluated. Since there is no standard for position eligibility, we set a player’s position based on where the majority of experts ranked the player.

We then scrub out players from secondary positional assignments to avoid duplicating players across multiple positions. For 2018, we graded the experts based on the following set of players:

First Basemen: Top 20 in ECR and Top 20 in Player Rater
Second Basemen: Top 20 in ECR and Top 20 in Player Rater
Third Basemen: Top 20 in ECR and Top 20 in Player Rater
Shortstops: Top 20 in ECR and Top 20 in Player Rater
Catchers: Top 15 in ECR and Top 15 in Player Rater
Outfielders: Top 50 in ECR and Top 50 in Player Rater
Starting Pitchers: Top 50 in ECR and Top 50 in Player Rater
Relief Pitchers: Top 25 in ECR and Top 25 in Player Rater
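The pool construction described above amounts to a set union at each position. Here is a minimal Python sketch; the placeholder player names and the cutoff of 3 (instead of the real top 20) are illustrative, not the actual 2018 data:

```python
def build_player_pool(ecr_top, rater_top):
    """Union of the preseason consensus (ECR) top-N and the end-of-season
    Player Rater top-N, so surprise studs and busts are both evaluated."""
    return set(ecr_top) | set(rater_top)

# Toy shortstop lists with a cutoff of 3 instead of the real top 20.
ecr_ss = ["Player A", "Player B", "Byron Buxton"]    # preseason top 3
rater_ss = ["Player A", "Jean Segura", "Player C"]   # actual top 3
pool = build_player_pool(ecr_ss, rater_ss)
# Buxton (the bust) and Segura (the surprise stud) both make the pool.
```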

Step 3: Score the experts’ predictions.

The experts’ rankings are evaluated by assigning a Value Over Replacement Player (VORP) score to each player based on the actual production of the rank slot the expert gave the player. We then compare these scores to each player’s actual VORP from our Player Rater to generate an “Accuracy Gap” for the expert’s predictions. The closer this value is to zero for a player, the better: it indicates the expert’s prediction was closer to the player’s actual valuation.

Another way to think of the “Accuracy Gap” is as the expert’s “Error” for each prediction. A perfect gap would be 0, indicating that there was no error between the expert’s predicted rank and the player’s actual rank.

As an example, if an expert ranks Mookie Betts at OF #6, we’d assign a projected VORP score (i.e. 11.84 in 2016) to this prediction based on the production of the player that actually finished as OF #6 for the season. This value represents the expected production for the player at that rank slot. In other words, the expert is effectively predicting that Mookie Betts will achieve a VORP of 11.84 for the season.

Now, say that Betts outperformed expectations and finished as OF #1 for the season (16.32 VORP). We take the absolute difference between the prediction (11.84 VORP) and the actual production (16.32 VORP) to assign the expert an Accuracy Gap of 4.48 for his Betts ranking. We repeat this process for every other OF in the player pool and sum the scores to get a total OF Accuracy Gap for the expert. As noted above, a lower number is a better score.
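The worked example can be condensed into a short calculation. This is a sketch using the VORP figures cited above; the function names are ours, not FantasyPros terminology:

```python
def accuracy_gap(predicted_vorp, actual_vorp):
    """Absolute error between the VORP implied by the expert's rank slot
    and the player's actual end-of-season VORP."""
    return abs(predicted_vorp - actual_vorp)

# Betts ranked OF #6: the actual OF #6 produced 11.84 VORP in 2016, so
# the expert effectively predicted 11.84. Betts finished at 16.32.
gap = accuracy_gap(11.84, 16.32)  # 4.48 (up to float rounding)

def position_gap(predictions):
    """Total Accuracy Gap for one position.
    predictions: iterable of (predicted_vorp, actual_vorp) pairs."""
    return sum(accuracy_gap(p, a) for p, a in predictions)
```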

If an expert did not rank a player in our pool, we assign a rank in one of two ways, based on how the player made it into the player pool (i.e. via preseason ECR or the end-of-season Player Rater):

1) If the player made the pool via the ECR cutoff, we assign a rank one spot below the last player the expert ranked. For example, if an expert ranked 80 outfielders and failed to include Mookie Betts, we would slot Betts as that expert’s OF #81.

2) If the player made the pool solely via the Player Rater cutoffs (i.e. he exceeded preseason expectations), we assign a rank one spot below the player’s preseason ECR.

The reason for this distinction is that we do not want to unfairly punish experts who have a deep set of rankings. For example, in 2016, Aaron Sanchez had a preseason ECR of SP #96 and finished the season as a top 20 SP in our Player Rater. He qualified for the pool of evaluated players due to his actual production. For an expert who ranked 70 SPs and didn’t include Sanchez, it would be unreasonable to assume that Sanchez would have been his or her SP #71. Instead, we slot Sanchez as their SP #97 since that is a fair expectation of the expert’s valuation based upon the industry consensus opinion.
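Under the rules above, the two fallback cases reduce to one small function. A minimal sketch; the numbers follow the Betts and Sanchez examples in the text:

```python
def fallback_rank(qualified_via_ecr, expert_depth, player_ecr):
    """Rank assigned when an expert left a pooled player unranked.

    qualified_via_ecr: True if the player made the pool via the preseason
        ECR cutoff, False if only via the end-of-season Player Rater.
    expert_depth: how many players the expert ranked at the position.
    player_ecr: the player's preseason consensus (ECR) rank.
    """
    if qualified_via_ecr:
        # Consensus pick skipped by this expert: one spot below the
        # expert's last ranked player (the Betts OF #81 case).
        return expert_depth + 1
    # Surprise performer: one spot below the industry consensus, so
    # deep rankers aren't punished (the Sanchez SP #97 case).
    return player_ecr + 1
```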

The flip side of the examples above occurs when an expert ranks a player within the rank range (e.g. Top 40 OF) who winds up NOT being in our player pool. In other words, the player was neither a top 40 consensus OF in the preseason nor a top 40 OF based on actual production. In this scenario, we assess a penalty equal to the expert’s Accuracy Gap for the player minus the average expert’s Accuracy Gap for the player. The penalty is only applied if the expert’s prediction rates worse than the average expert’s, ensuring that penalties are assessed only for notably poor predictions.
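As a sketch, the penalty rule is a clipped difference (the function name and the numbers are illustrative, not FantasyPros terminology):

```python
def out_of_pool_penalty(expert_gap, average_gap):
    """Penalty for ranking a player who finished outside the player pool.
    Assessed only when the expert's error is worse than the average
    expert's; otherwise no penalty applies."""
    return max(0.0, expert_gap - average_gap)

# Worse than the field by 4.0 VORP of error: penalized by the excess.
worse = out_of_pool_penalty(10.0, 6.0)
# Better than the field: no penalty.
better = out_of_pool_penalty(4.0, 6.0)
```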

The scenario above would most commonly come into play if an expert failed to take an injured player out of his or her rankings. In that example, it is important that the expert is penalized for offering advice that could lead fantasy owners to make a poor decision.

Step 4: Rank the experts.

After the results are calculated for the entire player pool across experts, we rank the experts by position from top to bottom based on their Accuracy Gap. As noted above, a lower gap is considered better because it indicates that an expert’s predictions were closer to the actual production of the players evaluated. For the Overall assessment, we add up the Accuracy Gap totals from the Hitter and Pitcher positions.
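Assuming the per-position gap totals are already computed, the final ranking step is a simple sort. A sketch with hypothetical expert names and totals:

```python
def rank_experts(position_gaps):
    """Order experts by Overall Accuracy Gap, lowest (best) first.
    position_gaps: {expert: {position: total Accuracy Gap}}."""
    overall = {expert: sum(gaps.values())
               for expert, gaps in position_gaps.items()}
    return sorted(overall, key=overall.get)

# Hypothetical totals: Expert A's 30.0 overall beats Expert B's 45.0.
order = rank_experts({
    "Expert A": {"Hitters": 10.0, "Pitchers": 20.0},
    "Expert B": {"Hitters": 5.0, "Pitchers": 40.0},
})
```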

In addition to Overall Accuracy, we’re also able to determine which experts offered the most accurate predictions for each individual player. We simply rank the Accuracy Gaps from top to bottom across experts for every player. The closer a prediction is to the player’s actual production, the smaller the expert’s error and the better their accuracy rank for that player.

We hope this detailed overview was helpful. Thanks for being interested enough to read through it!
