Fantasy Football – Rest of Season Accuracy Methodology

Below is a breakdown of our process for determining our Rest of Season Accuracy results. Please note that we also run separate analyses that evaluate the accuracy of the experts’ In-Season (weekly) rankings and Draft Rankings.

Step 1: Collect the right data.
Our analysis aims to determine who provides the most accurate Rest of Season (ROS) rankings using Half PPR scoring settings. We take a snapshot of each expert’s ROS rankings every Tuesday (weeks 2 through 16) at approximately 5pm ET. This ensures that we’re evaluating fresh rankings that reflect the advice fantasy owners use for waiver wire claims and trade analysis. Note that week 17 is excluded as a separate week of the analysis, since “Rest of Season” is just one week at that point and that weekly outlook is already covered by our weekly accuracy competition.

Step 2: Determine the player pool.
For each position, we evaluate the relevant players as determined by our Rest of Season Expert Consensus Rankings (ECR) and the season’s actual fantasy leaders. We set a fresh player pool for each set of weekly ROS rankings we evaluate. For instance, in week 2 the player pool is based on week 2’s ROS rankings and the actual fantasy leaders from weeks 2 through 17; in week 3, it is based on week 3’s ROS rankings and the actual fantasy leaders from weeks 3 through 17. This ensures that the player pool always covers everyone who was fantasy relevant, including the surprise studs and busts. For 2021, we grade the experts on the player sets listed below. For example, at Running Back we look at the Top 50 RBs in ROS ECR and the Top 50 RBs based on actual fantasy points. Note that this means we evaluate each expert on MORE than 50 total RBs, since some of the Top 50 RBs by actual production would not have been among the Top 50 in ROS ECR (see the sketch after the list below).

Quarterbacks
Top 25 in ECR
Top 25 in Actual Points

Running Backs
Top 50 in ECR
Top 50 in Actual Points

Wide Receivers
Top 60 in ECR
Top 60 in Actual Points

Tight Ends
Top 20 in ECR
Top 20 in Actual Points

Kickers
Top 20 in ECR
Top 20 in Actual Points

Defense & Special Teams
Top 20 in ECR
Top 20 in Actual Points

Linebackers
Top 25 in ECR
Top 25 in Actual Points

Defensive Backs
Top 25 in ECR
Top 25 in Actual Points

Defensive Linemen
Top 25 in ECR
Top 25 in Actual Points
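
To make the pool construction concrete, here is a minimal Python sketch. The function and player names are illustrative only (this is not our production code); it simply shows that the pool is the union of the two Top-N lists, which is why it usually contains more players than the cutoff.

    # Build the player pool for one position as the union of the Top-N in
    # Rest of Season ECR and the Top-N by actual fantasy points.
    def build_player_pool(ecr_rankings, actual_rankings, cutoff):
        """Both inputs are lists of player names, ordered best first."""
        return set(ecr_rankings[:cutoff]) | set(actual_rankings[:cutoff])

    # Tiny example with a cutoff of 3 (the real RB cutoff is 50).
    ecr = ["Player A", "Player B", "Player C", "Player D"]
    actual = ["Player B", "Player D", "Player E", "Player A"]
    print(sorted(build_player_pool(ecr, actual, 3)))
    # ['Player A', 'Player B', 'Player C', 'Player D', 'Player E']
    # -> 5 players evaluated from a Top-3 cutoff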

Step 3: Score the experts’ predictions.
As mentioned, the experts are evaluated on all 15 sets of Rest of Season rankings they submit throughout the season, from week 2 through week 16:

  • Week 2 (week 2 to 17 outlook)
  • Week 3 (week 3 to 17 outlook)
  • Week 4 (week 4 to 17 outlook)
  • Etc.

These rankings are weighted so that earlier weeks count more in the accuracy computation. For example, Rest of Season rankings in week 2 are more important to fantasy owners than Rest of Season rankings in week 15, so that is factored into the scoring. Specifically, the weight of each week equals the number of weeks remaining in the fantasy season: week 2 receives a weight of 15, week 3 a weight of 14, and so on down to week 16, which receives a weight of just 1.
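
As a quick illustration, the weighting rule reduces to a one-line lookup (a Python sketch; the variable names are ours):

    # Each week's weight is the number of weeks remaining in the fantasy
    # season (rankings are submitted weeks 2 through 16).
    weights = {week: 17 - week for week in range(2, 17)}
    print(weights[2], weights[3], weights[16])  # 15 14 1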

For each week of the season, we use final rankings based on fantasy points scored over that slice of the season. For example, an expert’s week 3 ROS rankings are compared against final standings based on total fantasy points scored in weeks 3-17. To do this, we assign a projected point value to each player based on the historical production (rolling 3-year average) of the rank slot the expert gave the player for that particular slice of the season. We then convert the player’s actual final rank to a point value using the same 3-year average template and compare the two totals to generate an “Accuracy Gap” for the expert’s prediction. The closer this value is to zero for a player, the better for the expert, because it indicates the prediction was closer to the player’s actual point production. Another way to think of the “Accuracy Gap” is as the expert’s “error” for each prediction: a perfect gap of 0 means there was no difference between the expert’s predicted rank and the player’s actual rank. We use a 3-year average for these values to smooth out outliers (e.g. Christian McCaffrey going bonkers in 2019).

As an example, if an expert ranks Saquon Barkley at RB #2 in their week 3 ROS rankings, we’d assign a projected point value (e.g. 248 pts) to this prediction based on the average production of the player who actually finished as RB #2 over the past few years from week 3 to week 17. This value represents the expected point production for a player at that rank slot; in other words, the expert is effectively predicting that Barkley will score 248 points from weeks 3 to 17. Now, say that Barkley under-performed relative to expectations and finished as RB #4 during that time frame (e.g. 220 points). We take the absolute difference between the prediction (248 pts) and the actual rank’s average production (220 pts) to assign the expert an Accuracy Gap of 28 pts (i.e. an error amount) for their Barkley ranking. We repeat this for every other RB in the player pool and sum the gaps to get a total RB Accuracy Gap for the expert. As noted above, a lower number is a better score.
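
In code form, the calculation looks roughly like this (a Python sketch; the baseline table below is made up for illustration, and only the 248 and 220 figures come from the example above):

    # Map a rank slot to its 3-year average point production for the slice
    # of the season being graded (illustrative values only).
    baseline_points = {1: 270.0, 2: 248.0, 3: 233.0, 4: 220.0, 5: 210.0}

    def accuracy_gap(predicted_rank, actual_rank, baseline):
        """Absolute difference between the baseline points of the two rank slots."""
        return abs(baseline[predicted_rank] - baseline[actual_rank])

    # Expert ranked Barkley RB #2 (248 pts); he finished RB #4 (220 pts).
    print(accuracy_gap(2, 4, baseline_points))  # 28.0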

If an expert does not have a player in our pool ranked, we assign a rank in one of two ways, based on how the player made it into the pool (i.e. via Rest of Season ECR or via the end-of-season actual rank for the slice of the season being evaluated):

  • If the player made the pool via the ECR cutoff, we assign a rank equal to the expert’s last ranked player +1. For example, if an expert ranked 70 running backs and failed to include Frank Gore, we would slot Gore as that expert’s RB #71.
  • If the player made the pool solely via the Actual Rank cutoff (i.e. the player exceeded Rest of Season expectations, such as Raheem Mostert in 2019), we assign whichever rank is worse: the player’s ECR rank +1 or the expert’s last ranked player +1.

The reason for this distinction is that we do not want to unfairly advantage experts who submit a shallow set of rankings. For example, in 2019, Raheem Mostert had a week 1 Rest of Season ECR of RB #103 and finished the season as RB #27 based on fantasy points scored, so he qualified for the pool of evaluated players on actual production. For an expert who ranked 60 RBs and didn’t include Mostert in their week 1 ROS ranks, it would be unreasonable to assume that Mostert would have been their RB #61. Instead, we slot Mostert as their RB #104, since that is a fair expectation of the expert’s valuation based on the industry consensus.
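
Here is how that fallback logic might look in code (a Python sketch with illustrative names, not our production system):

    # Assign a rank to a player the expert left unranked, based on how the
    # player entered the pool.
    def fallback_rank(in_pool_via_ecr, ecr_rank, expert_list_length):
        next_slot = expert_list_length + 1   # one past the expert's last ranked player
        if in_pool_via_ecr:
            return next_slot
        return max(next_slot, ecr_rank + 1)  # whichever rank is worse

    print(fallback_rank(True, 45, 70))    # 71  -> Frank Gore left out of a 70-deep list
    print(fallback_rank(False, 103, 60))  # 104 -> Mostert 2019: ECR RB #103, 60 RBs ranked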

The flip side of the example above occurs when an expert ranks a player within the rank range (e.g. Top 50 RB) who winds up NOT being in our player pool. In other words, the player was neither a top 50 consensus RB for Rest of Season nor a top 50 RB based on actual production. In this scenario, we assess a penalty equal to the expert’s Accuracy Gap for the player minus the average expert’s Accuracy Gap for the player. The penalty is applied only if the expert’s prediction rates worse than the average expert’s prediction, which ensures that penalties are assessed only for notably poor predictions.

The scenario above most commonly comes into play when an expert fails to remove an injured player from his or her rankings. In that case, it is important that the expert is penalized for offering advice that could lead fantasy owners to make a poor decision.
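
The penalty reduces to a simple clamped difference (Python sketch; the numbers are illustrative):

    # Penalize an out-of-pool ranking only by how much worse the expert's
    # Accuracy Gap was than the average expert's gap for that player.
    def out_of_pool_penalty(expert_gap, average_gap):
        return max(0.0, expert_gap - average_gap)

    print(out_of_pool_penalty(60.0, 35.0))  # 25.0 -> notably worse than the consensus
    print(out_of_pool_penalty(30.0, 35.0))  # 0.0  -> at or better than average, no penalty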

This entire process is completed for the 15 sets of ROS rankings the experts submit during the season, corresponding to ROS outlooks for Weeks 2-17, Weeks 3-17, Weeks 4-17, and so on through Weeks 16-17. We will have 15 scores for each expert at the conclusion of this assessment.

Step 4: Add weekly weighting and drop the 2 lowest weeks.
Now that we have expert accuracy scores (i.e. Accuracy Gaps) for each week at each position, we convert them to z-scores, which allows us to compare all experts on a similar scale across weeks. That is, we calculate the average expert score at each position for each week and the standard deviation across the pool of experts evaluated, and convert each expert’s score according to the formula:

Positional Z-Score = ((Individual Expert Score – Average Expert Score)/Standard Deviation) * -1 

Note that we multiply the above formula by negative 1 so that a higher Z-Score represents a better rating for the expert.
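
In code, the conversion for one position and week might look like this (a Python sketch using illustrative gaps):

    from statistics import mean, stdev

    # Convert each expert's Accuracy Gap to a z-score; the -1 flip makes a
    # higher z-score better, since a lower gap is a better result.
    def positional_z_scores(expert_gaps):
        avg, sd = mean(expert_gaps.values()), stdev(expert_gaps.values())
        return {name: -1 * (gap - avg) / sd for name, gap in expert_gaps.items()}

    gaps = {"Expert A": 400.0, "Expert B": 450.0, "Expert C": 500.0}
    print(positional_z_scores(gaps))
    # {'Expert A': 1.0, 'Expert B': -0.0, 'Expert C': -1.0}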

Thanks to this conversion to positional Z-Scores, we have expert scores for each week that are on the same scale. An important note is that we weight all weeks so that earlier weeks are valued higher: each week’s Z-Score is multiplied by the number of weeks remaining in the fantasy season. Specifically, week 2 scores are multiplied by 15, week 3 scores by 14, and so on down to week 16, which has a multiplier of just 1. We then “drop” each expert’s two worst weeks by replacing them with a score of 0, as long as those scores are negative. When determining which weeks to drop, we take the weighting into account rather than just dropping the lowest raw scores. For example, if an expert has a Z-Score of -0.5 in week 6 (weight of 11) and -1.0 in week 14 (weight of 3), week 6 is the higher priority to drop, because its weighted score of -0.5 * 11 = -5.5 is lower than week 14’s -1.0 * 3 = -3.0. This ensures each expert gets the maximum possible benefit from dropping two weeks, so there is no luck involved in whether their worst weeks carry more or less weight.
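
The drop rule, including the weighted prioritization, can be sketched as follows (Python; the scores mirror the example above):

    # Weight each week's z-score by weeks remaining, then zero out the two
    # most damaging weighted scores (only if they are negative).
    def drop_two_worst(weekly_z):
        weighted = {wk: z * (17 - wk) for wk, z in weekly_z.items()}
        for wk in sorted(weighted, key=weighted.get)[:2]:  # two lowest weighted scores
            if weighted[wk] < 0:
                weighted[wk] = 0.0
        return weighted

    scores = {wk: 0.2 for wk in range(2, 17)}  # weeks 2-16
    scores[6], scores[14] = -0.5, -1.0
    weighted = drop_two_worst(scores)
    print(weighted[6], weighted[14])  # 0.0 0.0 -> -5.5 and -3.0 were the two lowest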

Lastly, we calculate an expert’s overall Rest of Season score by taking a weighted average of their week 2 through week 16 scores. These 15 scores combine to determine the expert’s overall score, with a higher number being better.
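
Continuing the sketch, the overall score is a weighted average over the 15 weeks (illustrative Python, self-contained):

    # Overall score: sum of the already-weighted weekly scores divided by
    # the total weight (15 + 14 + ... + 1 = 120).
    def overall_score(weighted_weekly):
        return sum(weighted_weekly.values()) / sum(17 - wk for wk in weighted_weekly)

    weighted_weekly = {wk: 0.2 * (17 - wk) for wk in range(2, 17)}  # sample inputs
    print(round(overall_score(weighted_weekly), 3))  # 0.2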

Step 5: Rank the experts.
After the scores and weightings are calculated across the entire player pool for every expert, we rank the experts at each position from top to bottom based on their scores from the previous step. For the Overall assessment, we add up the scores from the QB, RB, WR and TE positions. DST and K are excluded because (a) many experts do not produce rankings for these positions, (b) they represent the widest spectrum of fantasy scoring, which can skew the results, and (c) many fantasy owners believe that predicting performance for these two positions involves much more luck than for the other positions.

We hope this detailed overview was helpful. Thanks for taking the time to read through it!