Here’s the link to sign up and enter your predictions!
Deadline: Monday November 4, before 12 noon US Eastern Time
My first ever post on this blog was an analysis of forecasts for the 2022 midterm elections, comparing the accuracy of PredictIt, Manifold, and Nate Silver. I’m planning a similar comparison of 2024 election forecasts, except with Polymarket rather than PredictIt (RIP).
This time there will also be a forecasting contest that regular people can enter, and an analysis of the aggregated regular-people forecasts compared to the market predictions. The idea here is basically to see how well a non-market “wisdom of crowds” approach does compared to real-money and play-money prediction markets, in an apples-to-apples comparison on the same set of questions.
Also, there will be prizes! The prizes will be (at a minimum, I might increase them if I need to get more participants):
1st Place: $500
2nd Place: $250
3rd Place: $100
Best Early Forecast: $100 (deadline: Tuesday October 22, 12 noon US Eastern Time, two weeks before the election)
And of course, the bragging rights — I’ll probably publish the names of the top 5-10 or so most accurate forecasters overall, and the top 3-5 or so that submit before the early deadline.
Scoring Methodology
Forecasts will be evaluated based on time-weighted Brier scores. Here's the raw Brier score formula:
Brier score = (1/N) × Σ (p_i − o_i)²
In this formula, N is the total number of predictions, p_i is the predicted probability for event i, and o_i is the true outcome of that event (1 if it happened, 0 if it didn't). (Sorry about the awkward notation, I haven't really been able to find a good way to write math notation like subscripts and stuff on Substack.)
Basically, this is just a measure of the forecast’s average squared deviation from the true outcomes of the events being predicted. The lower the Brier score, the better the forecast, and a perfect score is 0.
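If it helps to see it in code, here's a minimal Python sketch of how a raw Brier score can be computed. The function name and the example numbers are just mine for illustration, not part of the contest setup:

```python
def brier_score(predictions, outcomes):
    """Raw Brier score: average squared difference between predicted
    probabilities and actual 0/1 outcomes. Lower is better; 0 is perfect."""
    assert len(predictions) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Example: three questions forecast at 90%, 60%, and 20%,
# where the first two happened (1) and the third didn't (0).
print(brier_score([0.9, 0.6, 0.2], [1, 1, 0]))  # -> 0.07
```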
But I also want there to be a slight reward for earlier forecasts. So the formula for the final score will be:
where D is the number of days before the election that the forecast is made. So, for example, a forecast with a raw Brier score of 0.082 made 20 days before the election would slightly beat a forecast with a raw Brier score of 0.08 made the day before the election. It's just a slight reward for being early, but remember there's also a separate prize for early predictions made more than two weeks out.
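To show the structure (not the exact numbers), here's a sketch of how a time-weighted final score could be computed. The discount rate below is a hypothetical placeholder I picked just so the example in this post works out; the actual adjustment is the formula shown above:

```python
def time_weighted_score(raw_brier, days_before_election):
    """Final score: raw Brier score scaled down slightly for earlier forecasts
    (lower is still better). The 0.002 rate is a made-up placeholder for
    illustration, NOT the contest's actual formula."""
    discount = 1 - 0.002 * days_before_election  # hypothetical discount rate
    return raw_brier * discount

# Under this placeholder rate, 0.082 submitted 20 days out edges out
# 0.08 submitted the day before, matching the example above.
print(time_weighted_score(0.082, 20))  # -> 0.07872
print(time_weighted_score(0.08, 1))    # -> 0.07984
```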
If you have any questions you can reach me by email at mikest@udel.edu, or by leaving a comment on this post. Thanks and good luck!
PS - you don’t need to be a subscriber to this blog to win the contest, but if you would like to subscribe (it’s free and has lots of articles about forecasting and prediction markets) then just click this button: