We’ve had an overwhelming response to last week’s Accuracy Awards announcement. After being knee-deep in data for the past several months, it was awesome to see that other people really care about this topic as much as we do. In light of this, we wanted to share more of the details and insights from the study.
Key Findings
- The most accurate expert was between 19% and 31% better than the least accurate expert, depending on the position. RB had the tightest spread while QB had the largest.
- There were drastically larger spreads when looking at individual weeks. On average, the difference between the best and worst expert was 58% for RB and 182% for TE.
- When looking purely at whether the expert’s predictions were right or wrong (i.e. with no weighting based on the value of the predictions), the spreads were considerably tighter.
- The data we’ve pulled together offers a ton of insight beyond just the summary accuracy scores that we’ve published so far.
More Thoughts
1. A lot of experts were bunched together with very little spread in their accuracy scores. This makes sense to us. Similar to sports betting, it’s really difficult for experts to distance themselves from the pack over the long haul. But also similar to sports betting, something as small as a 5% edge can make a huge difference. How many times have you lost your head-to-head matchup by a small margin? Just a few points can mean the difference between a win and a loss! A few other takeaways:
- Be wary of sites that claim their advice is 40% more accurate than other sites – particularly if they’re charging you for that advice. It’s especially concerning that some of these sites don’t publish their methodology. If you run a site that makes such a claim and can back it up, we’d love to hear from you and enter you in our 2010 accuracy study.
- Player list size and average fantasy points per position naturally influenced the size of the spreads. We assessed 40 RB and 50 WR spots compared to 20 QB and 15 TE spots. From a “fantasy points per prediction” perspective, QB was the highest and TE was the lowest. These two factors – fewer predictions and more points per prediction – contributed to the relatively larger accuracy differences for the QB position.
2. Similar to sports gambling again (disclaimer: we are not a sports gambling site and do not promote this activity in any way, unless you’re with us in Vegas), these “fantasy cappers” can get hot and cold with their picks from week to week. The amount of variance is again related to the number of predictions that we analyze – the more predictions, the smaller the spread (the quick simulation after this list illustrates the effect). This is evidence that:
- It may not be a good idea to draw hard conclusions from studies that only examine one ranking per year. This is one reason why we chose to focus on weekly rankings (16 weeks of data) vs. draft rankings (1 list per year). In the future, we’re going to track both draft and weekly rankings and keep year-to-year records.
- Even though the average edge may get smaller and smaller as we gather more data, I’m a strong believer that the best experts will rise to the top over time. Similar to poker, anyone can drag a monster pot or even win a tournament or two, but the players who can show positive results over millions of hands are the true experts.
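To illustrate the sample-size effect, here’s a minimal simulation. The numbers in it – a 60% true skill level, 32 experts, and the prediction counts – are made-up assumptions for illustration, not figures from our study. Even when every expert has identical true skill, the observed best-vs-worst spread shrinks as the number of scored predictions grows:

```python
import random

random.seed(42)

def observed_spread(num_experts, num_predictions, true_skill=0.60):
    """Gap between the luckiest and unluckiest expert's observed win %,
    when every expert has the same true chance of a correct call."""
    win_rates = []
    for _ in range(num_experts):
        wins = sum(random.random() < true_skill for _ in range(num_predictions))
        win_rates.append(wins / num_predictions)
    return max(win_rates) - min(win_rates)

# e.g., one week of TE calls vs. a full season's worth of RB calls
for n in (15, 40, 160, 640):
    print(f"{n:4d} predictions -> best-vs-worst spread: {observed_spread(32, n):.1%}")
```

The spread at small sample sizes is pure luck, which is exactly why single-week (or single-list) results should be interpreted cautiously.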
3. On a Win % basis, where we simply calculate how often the expert was right, the spread between the best and worst expert was only 14% to 19% depending on position. We don’t use Win % as our final accuracy rating because it doesn’t incorporate the value of each prediction: correctly picking a guy who outscores the alternative by 15 points should be worth more than correctly calling a matchup decided by a single fantasy point (the sketch after this item makes the distinction concrete). Also, when looking at these numbers, please keep in mind that:
- We’re not including every possible prediction. We only score the predictions that involve at least some disagreement between the experts. There’s no reason to score the Chris Johnson vs. Kevin Faulk matchup if every expert is picking Chris Johnson, and it’s not a decision a typical advice-seeker would actually face. So, when you look at the Win %, think of it as the expert’s ability to make the right call on decisions that fantasy players actually contemplate. Including every prediction would naturally inflate every expert’s Win %.
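Here’s a minimal sketch of the two scoring views. The matchups and point totals are made up for illustration, and this is not our actual scoring formula:

```python
# Each scored call records the fantasy points of the player the expert
# picked and of the player he picked against (unanimous matchups are
# assumed to have been filtered out already). All numbers are invented.
calls = [
    (22.0, 7.0),   # right, by a wide 15-point margin
    (11.0, 10.0),  # right, but only by 1 point
    (8.0, 9.0),    # wrong, by 1 point
]

def win_pct(calls):
    """Plain Win %: how often the picked player outscored the alternative."""
    return sum(picked > other for picked, other in calls) / len(calls)

def value_weighted_score(calls):
    """Value-weighted view: credit each call by its fantasy-point margin."""
    return sum(picked - other for picked, other in calls) / len(calls)

print(f"Win %: {win_pct(calls):.0%}")                              # 67%
print(f"Avg margin per call: {value_weighted_score(calls):+.1f}")  # +5.0
```

Under the value-weighted view, the big 15-point call dominates the two one-point calls, which is the behavior we want from a final accuracy rating.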
4. Running this analysis just got us more curious about other insights that our data can provide. We’ll do our best to share more of this cool information as we dig through it. Here are just a few of the questions we’re hoping to answer:
- For each expert, are there certain players that they have pegged much better than other experts? Are there certain players that they just completely missed on?
- Similarly, are there certain players that the expert tended to overvalue or undervalue relative to the other experts?
- Are certain experts more likely to go against the consensus opinion than others? When they do back the underdog, are they correct more often than not?
- Which experts tend to agree or disagree with each other the most?
- When an expert misses badly on a player, does he tend to overcorrect the next week, or does he stick to his original opinion?
- The list is endless…we’re just getting started!
Please note that these summary numbers refer to the RB, WR, QB, and TE positions only. DST and K were evaluated and the ratings are published, but given the wide differences in league settings for those two positions, I would take the results with a grain of salt. In fact, we did not include these two positions in our Overall category.