Forecasting – Theme Review
In 2022 I set my theme as ‘forecasting’. My goal in selecting that theme was to get better at having realistic expectations about future, currently unknowable events. So much of life requires making decisions where the eventual outcome is not determined by a simple cause-and-effect equation. The important decisions tend to require that we make them without full knowledge; instead we have an opaque equation that will be resolved based on which one of a range of potential outcomes actually comes to be. Because of that, I believed that getting better at forecasting the future would allow me to make better decisions, by better assessing the risk and reward profiles of those scenarios.
Last January, when I selected it, I wrote: “My goal this year will be to learn more about forecasting and prediction. To figure out what the common shortcomings of predictors are, how to improve them in myself, and how to shape my thinking around forecasts.” To do so I sought to read a few books about the topic, put those into action with some practice, and try to achieve something new related to it. I would then come back and try to summarize some of my learnings.
This blog post serves as a summary of what I read and what I learned.
What I Read
I identified the following books as good ones on the topic, and I was able to read through the first four. You’ll find some quotes from them in the sections below.
- “Superforecasting: The Art and Science of Prediction” by Philip Tetlock – DONE
- “The Signal and the Noise” by Nate Silver – DONE
- “Thinking in Bets” by Annie Duke – DONE
- “Thinking, Fast and Slow” by Daniel Kahneman – DONE
- “Noise” by Daniel Kahneman
- “The Wisdom of Crowds” by James Surowiecki
- “How to Measure Anything” by Douglas Hubbard
- “Super Crunchers” by Ian Ayres
- “Think Again” by Adam Grant
- “The Data Detective” by Tim Harford
- “How Not to Be Wrong” by Jordan Ellenberg
What I Learned
Here are some of the things I learned this year. To test what I’ve learned, I’ve been participating in forecasting challenges on a website called Good Judgement Open. I’ll reference some of the questions I forecasted in the notes below.
1. We are hard-wired to be bad at forecasting
The future is fairly knowable, but there are lots of ways we get forecasts wrong: some are lies our brains tell us and some are lies we tell ourselves. These biases steer us away from an accurate view of the world and lead to more confusion and frustration, as well as sub-optimal decision making. Some of these biases are really hard to fight, even when you know they are there, because of the way our brains have evolved to function.
Some specific examples of common biases that impact our ability to forecast include:
The Availability Heuristic
People assume their memory is an accurate judge of the past. The problem is that our memories aren’t perfect; they hold on to some things more than others, so using whatever comes to mind as an indicator of what might happen can be a source of error.
Similarly, some things happen less than once in a lifetime, but that doesn’t mean they never happen. The pandemic of 2020 is something I heard described as ‘unprecedented’ on many occasions, but that simply isn’t true: a mere century prior, the Spanish flu of 1918 set the precedent, infecting nearly one third of the world’s population. There have been many smaller outbreaks over the past decades as well that indicated something like this was very possible.
The Unintuitive Nature of Statistics
How likely is an event to occur if there is a 10% chance and we take 10 attempts?
Intuitively, it seems like it will be 100%. In reality, it is about 65%: the chance of at least one occurrence is 1 - 0.9^10 ≈ 0.65.
Any time you’re dealing with N attempts at an event whose odds are 1 in N, the chance of the event occurring at least once is about two thirds. The limit of that equation, 1 - (1 - 1/N)^N, is about 63% (that is, 1 - 1/e), but it’s a bit higher for lower N, up to 75% for N = 2.
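To make those numbers concrete, here is a minimal sketch in Python (illustrative only) that computes the chance of at least one success in N attempts when each attempt has a 1-in-N chance:

```python
import math

# Chance of at least one success in N attempts, each with probability 1/N.
for n in [2, 5, 10, 100, 1000]:
    p_at_least_one = 1 - (1 - 1 / n) ** n
    print(f"N={n:>4}: {p_at_least_one:.1%}")

# The limit as N grows is 1 - 1/e.
print(f"limit: {1 - 1 / math.e:.1%}")  # ~63.2%
```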
One place this tends to trip people up is when some event has never occurred in a fairly small sample. If in 50 occurrences some specific event has never happened, it doesn’t mean the odds are 0% or even 2%. Even if the odds were 4% and there had been 50 occurrences, about 13% of the time the event won’t have occurred yet, so the odds could be as high as 4%, 5%, or even 10%, and we could reasonably still not have seen that event occur.
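A quick way to sanity-check that intuition, again just a sketch: compute how often an event with a given true probability would produce zero occurrences in 50 trials.

```python
# How often would an event with true probability p never appear in 50 trials?
for p in [0.02, 0.04, 0.05, 0.10]:
    p_never = (1 - p) ** 50
    print(f"true odds {p:.0%}: zero occurrences in 50 trials {p_never:.1%} of the time")
```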
Regression to the Mean
Not everything that looks like a trend actually is one. This is a common trap people fall into when they see a set of datapoints that has recently been moving in a consistent direction. They assume that direction will continue, but many times it is actually just a series of random results that only appear to be a pattern because of when we’re looking at them. As we get more random results, we’ll see a regression to the mean.
An example of this would be a random number generator showing the results 2, 3 and 4. What would we expect the next number to be? If it is truly random, anything is just as possible, and there is no reason 5 should be more likely.
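Here is a small simulation (Python, assuming an idealized random source) that measures how often an apparent three-draw upward ‘trend’ actually continues:

```python
import random

# Draw a long series of random numbers and look for runs of three
# strictly increasing values, then check whether the fourth keeps climbing.
random.seed(0)
draws = [random.randint(1, 10) for _ in range(100_000)]
trends = continued = 0
for a, b, c, d in zip(draws, draws[1:], draws[2:], draws[3:]):
    if a < b < c:           # looks like an upward trend
        trends += 1
        continued += d > c  # did it continue?
print(f"apparent trends continued {continued / trends:.1%} of the time")  # well under 50%
```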
2. The prevalent role of chance
Chance plays a big factor in how things turn out, and this is something most people undervalue. With a large enough sample size, many very rare things can happen once, but that isn’t necessarily a strong signal that they will happen again.
If one thousand people play a rock-paper-scissors tournament, someone will win. Does that have any bearing on what will happen in the second tournament? Probably not.
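A quick sketch of that idea (Python, with matches modeled as pure coin flips, which is an assumption): the winner of one tournament is no more likely than anyone else to win a rematch.

```python
import random

def run_tournament(players):
    # Single elimination: each round, adjacent pairs play a 50/50 match.
    while len(players) > 1:
        players = [random.choice(pair) for pair in zip(players[::2], players[1::2])]
    return players[0]

random.seed(1)
entrants = list(range(1024))
first_winner = run_tournament(entrants)
rematch_wins = sum(run_tournament(entrants) == first_winner for _ in range(10_000))
print(f"first winner also won {rematch_wins} of 10,000 rematches")  # expect ~10, i.e. 1/1024
```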
When forecasting, the first question I always ask myself is how much chance is involved in the scenario. Some events inherently have a lot of randomness in them: a coin flip, a roll of the dice, etc. It isn’t worth spending a lot of energy on the questions that follow if the event is completely chance-based.
Some other events with a lot of randomness in them are market prices, oil especially. Another question I forecasted fairly well this year was which cryptocurrency would perform the best. In the end, Ripple did the best during the evaluation period, not necessarily because it is the biggest or the most exciting, but because all cryptocurrencies lost value during the period and Ripple just happened to lose the least.
Most things have some amount of predictability though, so I try to understand where on the spectrum a given event lies. Professional sports are one area where I tend to lean a bit towards the side of random, as most teams performing at that level can (and sometimes do) beat teams better than them. In the World Cup, my forecasts fared fairly well by betting a bit more on randomness than folks who were favoring specific strong teams. My returns came when teams like Morocco and Croatia made it to the semi-finals over teams that were favored, like Portugal, Spain and Brazil.
3. Everything has a prior
Getting into the right range of potential outcomes is the next step. I find that for most questions there is some prior set of data to compare against in order to understand the range the answer might fall in.
How much will the stock market move next year? No one really knows. But over the past 100 years we’ve seen an average of 10% growth per year, so that might be a good place to start.
Averages, however, hide the fact that there is usually a distribution underneath. Understanding that range will help set some boundaries. Yearly stock market returns have ranged from about 50% up to 50% down, so that defines boundaries we’d be safe to stay within. There can always be outlier or first-time events, but understanding the past gives us a good place to start from.
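As a rough sketch of how a range like that can be used (the parameters here are ballpark assumptions, not fitted to real data), a simple model gives both a typical band and a sense of how rare the extremes are:

```python
from statistics import NormalDist

# Illustrative model of yearly returns: mean 10%, standard deviation 18%.
# These are assumed ballpark numbers, not real market statistics.
yearly_returns = NormalDist(mu=0.10, sigma=0.18)
low, high = yearly_returns.inv_cdf(0.05), yearly_returns.inv_cdf(0.95)
print(f"~90% of years fall between {low:.0%} and {high:.0%}")
print(f"chance of a -50% year under this model: {yearly_returns.cdf(-0.50):.3%}")
```

Real returns have fatter tails than a normal distribution, so a model like this understates the extremes, but it still beats guessing with no prior at all.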
I find that when I’m trying to make a prediction on a topic, it is okay if I have no knowledge of the topic, as long as I can find some information about the past. With that, I can usually take a stab, and often that is good enough to get close. Sometimes it is even better than being biased. Someone with no knowledge of the 2016 US Presidential election might have fared well just by looking at the general trend that every 8 years the presidency flips between a Democrat and a Republican, and that 2008–2016 had a Democrat in office.
Sometimes the prior isn’t what it seems like it would be. In sports, the announcers like to throw out stats that seem relevant but don’t really offer much valuable information: ‘Over the last 20 years this football team has beaten that football team 18 out of 20 times’. That is a nice tidbit of history, but it doesn’t say much about the next game in most cases, especially if only 10 of the 100 players on the field were involved in any of those other games, the first team is now ranked last, and the other team, the usual underdog, has a new quarterback and is undefeated.
4. Every scenario is somewhat unique
“The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.” – F. Scott Fitzgerald, quoted in Philip E. Tetlock, Superforecasting: The Art and Science of Prediction
A base rate is always just a place to start; every scenario is unique and offers signals about its specific situation. What we must figure out is how strong those signals are and how far they should steer us away from the base rate.
One of the traps we can fall into is thinking too many cases are special. Sometimes there is a strong reason to believe an outlier event will occur: a sudden drop in price, a complete reversal from the polls, etc. Most of the time, though, things do what they usually do. For me to move very far from a base rate, I require overwhelming evidence that the case truly is unique.
One place I see this come up is with startups. Most people know that 99%+ of startups fail. Most people work under the assumption that the one they’re rooting for will be the exception, though. The reason might be that it has a great idea, strong founders, or early traction in a valuable market. Most of the startups in the comparison pool had that too. Moving from a 1% base rate to a 5% assumption based on some solid data (past success from the founders, a unique patent, etc.) might be justifiable, but moving to 100% is not.
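One way to frame how far evidence should move you is a Bayesian update. The numbers below are made up for illustration; the point is how modestly even strong evidence moves a 1% base rate.

```python
# Hypothetical update from a 1% base rate of startup success.
prior = 0.01
# Assume the evidence (say, repeat founders) is 5x more common among
# successful startups than failed ones. This likelihood ratio is invented.
likelihood_ratio = 5

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"posterior chance of success: {posterior:.1%}")  # ~4.8%, nowhere near 100%
```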
5. Find more information
“The litmus test for whether you are a competent forecaster is if more information makes your predictions better.” – Nate Silver, The Signal and the Noise
I’ve found that my accuracy on forecasting questions dramatically increases when I invest more time into researching the facts. Sometimes this means finding fairly mundane details that others are missing, such as how many votes are needed to pass a measure: 51 or 60? The difference between those can take an outcome from a certainty to a long shot.
One question I performed well on was whether or not the UN would declare a famine in Afghanistan in 2022. Most people assumed that would happen, as many people are starving there and there was a lot of media attention on it. Looking for more information, I saw that the UN’s bar for declaring a famine was very high (there were five requirements that all had to be met), and looking at past data I saw that only a handful of official famines had ever been declared, while many more near-famine situations were never officially declared famines by the UN. Because of that, I forecasted only a 10% chance when the crowd was thinking there was a 66% chance. In the end the famine was not declared in the stated period, so I was correct.
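A hypothetical illustration of why the five-requirement structure matters (these probabilities are invented, not the actual reasoning behind my forecast): when a declaration requires several conditions to all hold, even individually likely conditions compound into a low joint probability.

```python
# If each of five criteria independently has a 65% chance of being met
# (an assumed number), the chance that ALL five are met is much lower.
p_each = 0.65
p_all_five = p_each ** 5
print(f"chance all five criteria are met: {p_all_five:.1%}")  # ~11.6%
```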
6. Be more creative about what could happen
Think of all of the ways you could be wrong. If you can’t think of any, you aren’t being creative enough. Surely a species-ending meteor impact would change the outcome some.
There are lots of events that are low probability but that will still have a material impact, especially when combined.
Three Olympic Games have been cancelled in the past hundred years. If we’re answering the question of what the odds are that the USA will win the most medals in the 2040 Olympics, we should certainly account for a ~5–15% chance that the games don’t happen at all. Similarly, I did well on a number of questions about box office performance simply because the movie was delayed, so the revenue was $0, and I had assigned a small percentage chance to that.
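The arithmetic is simple but easy to forget: the unconditional forecast is the conditional forecast multiplied by the chance the event happens at all. The numbers here are illustrative, not real forecasts.

```python
# Unconditional odds = P(event happens) * P(outcome | event happens).
p_games_happen = 0.90              # assume a ~10% chance of cancellation
p_usa_most_medals_if_held = 0.50   # assumed conditional probability
p_usa_most_medals = p_games_happen * p_usa_most_medals_if_held
print(f"unconditional chance: {p_usa_most_medals:.0%}")  # 45%, not 50%
```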
Shifting gears a bit to my professional life, I spend a lot of time trying to creatively think of ways projects can go wrong so we can prepare for those scenarios and stop them from happening. This creative exploration is one of the reasons teams can often outperform individuals in my field of work: more people are able to look in more directions and spot possibilities that should be accounted for. Sometimes this helps the group avoid a bad decision, and sometimes it just helps them be prepared for the event if and when it does occur. Having a culture that rewards people for highlighting those risks will ensure they keep doing it. Many cultures push that sort of truth-seeking aside in favor of a ‘yes person’ culture, and the result is usually a completely predictable disaster that no one spoke up about.
“The company should do its best to reward this constructive dissent by taking the suggestions seriously or the expression of diverse viewpoints won’t be reinforced.” – Annie Duke, Thinking in Bets
7. Practice
“Intuition is nothing more and nothing less than recognition.” – Daniel Kahneman, Thinking, Fast and Slow
Most forecasters are really bad and simply continue to be bad, because there isn’t a great feedback loop. Practice is important to becoming good at forecasting. I’ve found it is even more important to be very specific about why you are predicting something, so that in the end you can look back and learn from it.
At one point I went back through some of my best and worst questions and came away with a few interesting lessons. On one question where I performed poorly, all of my assumptions were correct, but I was doing the math wrong. Something seemed off, but I didn’t take the time to double-check it until afterwards. Now alarm bells go off in my head when something doesn’t add up, and that has saved me on a few other questions where a technical error could have cost me even though my logic was sound.
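For anyone who wants that feedback loop in code form, here is a minimal sketch of the Brier score, the metric Good Judgement Open uses to grade forecasts (the example forecasts below are made up):

```python
def brier_score(forecasts, outcomes):
    # forecasts: predicted probabilities; outcomes: 1 if it happened, else 0.
    # Lower is better; always guessing 50% scores 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

my_forecasts = [0.10, 0.80, 0.65]  # hypothetical predictions
what_happened = [0, 1, 0]
print(f"Brier score: {brier_score(my_forecasts, what_happened):.3f}")  # about 0.16
```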