Superforecasting Summary
MicroSummary: Wouldn't it be great if you could accurately predict what happens on the stock market or the result of a football game? In 'Superforecasting,' the authors present techniques to improve your predictions and achieve better results. Whatever the area, whether finance, politics, or daily life, predicting the future offers a great competitive advantage, and people who use the right tools to superforecast will get ahead!
The Art and Science of Prediction
Let’s learn it together right now!
Superforecasting PDF Summary
In some areas, we are always looking for predictions of the future. That is the case with weather forecasts, the stock market, and even the results of sports games. But these are not the only situations in which you can make predictions.
Our fixation with predictions is present in most areas of our lives, and we get bothered when events do not happen as they should. But how can we ensure that our forecasts are more accurate?
That is where superforecasting comes in: superforecasts are corrected and realigned with each new piece of information, then analyzed and improved.
The important thing here is to understand that superforecasting is a real, measurable skill.
It can be taught and improved with the right investment. The problem is that people tend to grow overconfident in their predictions and hate being wrong.
That attitude can sabotage the metrics and generate misinterpretations about the future.
One Person Can Have A Great Impact
We make forecasts all the time, either mapping our next career steps or choosing investments. Overall, our forecasts reflect our expectations about how the future will be.
Despite this, our predictions are limited, since unknown events can lead to unexpected consequences.
We live in a complex world in which a single person can set great events in motion.
For example, have you heard of the Arab Spring? It all started when a Tunisian street vendor, Mohamed Bouazizi, set himself on fire after being humiliated by corrupt police officers.
There is a theory that explains why this kind of event is so difficult to predict: chaos theory, popularly known through 'the butterfly effect.'
As the American meteorologist Edward Lorenz explained, in non-linear systems like Earth's atmosphere even tiny changes can have a considerable impact.
If the trajectory of the wind shifts by a small fraction, long-term weather patterns can change drastically. To put it more dramatically: if a butterfly flaps its wings in Brazil, it can cause a hurricane in Texas.
Forecasts Need To Be Assessed With Rigor
Despite these limitations, we should not dismiss or ignore the importance of predictions. Think about meteorology, for example: weather forecasts are relatively reliable when made a few days in advance.
That is because meteorologists analyze the accuracy of their predictions after the fact. By comparing their forecasts with what actually happened, they can better understand how the weather works.
The problem is, people in other areas usually do not measure the accuracy of their predictions.
So to improve our predictions, we need to work on precision and seriously compare what we thought would happen with what actually happened, and that means committing to metrics.
For example, until the mid-20th century, the medical field was filled with experts who relied on years of experience and believed in many different kinds of therapies and treatments.
But many of them proved incorrect, and some caused more harm than good.
The emergence of evidence-based medicine proved challenging for those doctors who relied on their experience.
They were exceptionally resistant to controlled testing, which they considered unethical.
The problem here is that feeling right is not the same as being right, and relying on data and metrics is a useful way to do away with the biases we carry.
Percentages And Accuracy In Forecasts
Measuring forecasts is not as easy as it sounds. Beyond collecting forecasts, judging accuracy, and making calculations, there are a number of factors to consider.
To ensure the accuracy of a forecast, you must first understand the meaning of the original forecast.
For example, in April 2007, the media said that Microsoft CEO Steve Ballmer made a prediction: the iPhone would not get much market share.
Considering Apple's size, Ballmer's prediction seemed absurd, and people laughed at him. Others pointed out that Apple controlled 42 percent of the smartphone market in the United States, a very significant number.
But let's look at what Ballmer actually said: the iPhone could generate a lot of money, but it would never gain a significant share of the global mobile phone market; his forecast was 2 to 3%.
Instead, software from his own company, Microsoft, would dominate the market.
And his prediction was more or less correct.
According to data from Gartner, iPhone sales in mid-2013 accounted for about 6% of global mobile phone sales, a number close to Ballmer's prediction. Meanwhile, Microsoft software was used by most cell phones sold in the world.
Predictions should also avoid vague language and need to use numbers to increase accuracy.
Vague words such as "could," "maybe," or "probably" are common in predictions, but surveys show that people attribute very different meanings to them.
Therefore, forecasters need to speak as precisely as possible, using percentages, for example.
Consider how American intelligence organizations like the NSA and the CIA claimed that Saddam Hussein was hiding weapons of mass destruction – a claim that proved utterly untrue.
If these intelligence agencies had accurately calculated and used percentages, the United States might not have invaded Iraq.
If there was a 60% chance of Iraq having WMDs, there would still be a 40% chance that it did not, hardly a solid justification for starting a war.
The Brier Score For Measuring Forecast Accuracy
We must avoid mistakes like those committed by the US intelligence agencies, so it is very important to make forecasts with greater accuracy. Let's consider some ways to achieve it:
The best way is to keep score. To do that, the author's research team established a government-funded project called the 'Good Judgment Project,' in which thousands of volunteers made more than a million forecasts over four years. By keeping score, the team hoped to improve forecast accuracy.
Participants answered questions such as "Will the president of Tunisia flee into exile next month?" or "Will the euro fall below $1.20 in the next 12 months?" Each participant estimated the likelihood of the event and adjusted that estimate after reading relevant information.
Then the team gathered the forecasts and assigned each one a Brier score, indicating its accuracy.
This score, named after Glenn W. Brier, is the most commonly used measure of prediction accuracy.
The lower the score, the more accurate the prediction: a perfect forecast scores zero, constant 50/50 guessing scores 0.5, and a completely wrong prediction scores 2.0.
The interpretation of a Brier score also depends on the question. You may have a score of 0.2, which looks pretty good, but in context your prediction can still be terrible.
Let's say you are predicting the weather. In Phoenix, Arizona, the weather is consistently hot with blue skies.
Anyone can predict warm, sunny weather there and earn a Brier score close to zero, which beats 0.2.
But if you got a score of 0.2 for predicting the weather in Springfield, Missouri, a place where the weather constantly changes, you would be considered a very good meteorologist!
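To make the scoring concrete, here is a minimal Python sketch, not taken from the book, of how such a score can be computed. It uses Brier's original two-category formulation so that the numbers match the ones above (0 for a perfect forecast, 0.5 for constant 50/50 guessing, 2.0 for being confidently wrong); the function name and example data are purely illustrative.

```python
def brier_score(forecasts, outcomes):
    """Average Brier score over a set of binary forecasts.

    forecasts: probabilities assigned to the event happening (0.0-1.0)
    outcomes:  1 if the event happened, 0 if it did not

    Brier's two-category formulation: 0.0 is perfect, 0.5 is constant
    50/50 guessing, 2.0 is always certain and always wrong.
    """
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        # squared error on "it happens" plus squared error on "it doesn't"
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

# A confident, mostly correct forecaster...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))   # ~0.04
# ...versus someone who always says 50/50
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))   # 0.5
```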
Superforecasters Use The Fermi Method
Are all the people who make good predictions great thinkers with access to secret information? No! So how do these people make such accurate predictions?
These people take a seemingly impossible problem and break it into smaller subproblems. This is known as the Fermi style of thinking.
Physicist Enrico Fermi, a central figure in the creation of the atomic bomb, could estimate with surprising precision things like the number of piano tuners in Chicago without any prior information.
He did this by separating what is known from what is unknown; that is the first step. For example, when Yasser Arafat, leader of the Palestine Liberation Organization, died of unknown causes, speculation arose that he had been poisoned.
Then, in 2012, researchers discovered high levels of polonium-210, a lethal radioactive element, in his belongings. This discovery reinforced the idea that he could have been poisoned and led to the exhumation of his body for testing in France and Switzerland.
As part of the 'Good Judgment' project, forecasters were asked whether high levels of polonium would be found in Yasser Arafat's remains.
Bill Flack, one of the volunteers, approached the question in the Fermi style, breaking it into smaller pieces.
Flack first discovered that polonium decays rapidly, so even if Arafat had been poisoned, there was a chance the polonium would no longer be detectable in his remains, since he had died in 2004.
Flack then researched tests for the substance and concluded that it could still be detectable.
Finally, Flack also took into account the fact that Arafat had Palestinian enemies who might want to blame Israel for his death.
He considered that there was a 60% chance of finding the polonium in Arafat’s body.
So Flack dealt with the basic questions first and only then layered on further assumptions, and this is exactly how good predictions are made.
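To show what a Fermi-style decomposition looks like in practice, here is a minimal sketch of the classic Chicago piano-tuner estimate. Every number below is a rough assumption invented for illustration, not data from the book.

```python
# Fermi-style estimate: break one impossible question into small, guessable parts.
# All figures are rough, made-up assumptions for illustration only.

chicago_population   = 2_700_000   # assumed city population
people_per_household = 2.5         # assumed average household size
share_with_piano     = 0.05        # assume 1 in 20 households owns a piano
tunings_per_year     = 1           # assume each piano is tuned once a year
tunings_per_tuner    = 4 * 250     # ~4 tunings a day over ~250 working days

households   = chicago_population / people_per_household
pianos       = households * share_with_piano
piano_tuners = pianos * tunings_per_year / tunings_per_tuner

print(round(piano_tuners))  # about 54 with these made-up numbers
```

The exact answer matters less than the habit: each sub-estimate is easy to sanity-check and easy to revise when better information arrives.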
Using Anchoring And Adopting An Outside View
Because each situation is unique, you need to avoid rushing to judgment on a case. The best way to approach any question is to adopt an outside view first, which means finding the base probability of that kind of event.
For example, imagine an Italian family living in a modest home in the United States. The father works as a librarian, and the mother has a part-time job in a daycare center. They live with their children and with their grandmother.
If you were asked what the chances are that this Italian family owns a pet, you could try to answer by thinking about the details of their situation. But if you start there, you may miss some important things!
Rather than looking at the details first, you should start by researching the percentage of American households that own a pet. In a few seconds, thanks to Google, you will find that this figure is about 62%. That is your outside view.
Having done so, you can then take the inside view, which gives you the details needed to adjust your percentage.
In the example of the Italian family, starting with the outside view gives you an initial estimate: roughly a 62% chance that the family owns a pet. From there you can get more specific and adjust this value, for example by checking the rate of pet ownership among Italian families living in the United States.
The reasoning behind the outside view comes from a concept called 'anchoring.' An anchor is an initial value that you then adjust. If you start with the fine details instead, your prediction will probably end up far from any sensible anchor.
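As a toy illustration of anchor-and-adjust, the sketch below starts from the 62% base rate mentioned above and nudges it with a couple of inside-view considerations. The multiplicative factors are entirely made up, and this is only one simple way to represent such adjustments, not a method prescribed by the book.

```python
# Outside view first, inside view second: a toy anchor-and-adjust sketch.
# The 62% base rate comes from the text above; every factor below is hypothetical.

base_rate = 0.62  # share of US households that own a pet (the anchor)

# Inside-view considerations, expressed as rough multiplicative nudges
# you might justify after looking at this particular family (all invented).
adjustments = {
    "modest home, little space for a large pet": 0.90,
    "children and grandmother in the household": 1.05,
}

estimate = base_rate
for reason, factor in adjustments.items():
    estimate *= factor

print(f"adjusted estimate: {estimate:.0%}")  # about 59% with these toy factors
```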
Updates And Continuous Research Are The Secret Of Superforecasts
Once you make your first prediction, you can’t just wait to see if it’s right or not. You need to update and modify your judgment based on new information.
After Bill Flack predicted a 60% chance of polonium being detected in Yasser Arafat's body, he continued to read the news and updated his prediction whenever possible. Well after Flack's first forecast, a team of Swiss researchers announced that new tests were needed and that the results would be released later.
Because Flack had done a lot of research on polonium, he knew this postponement probably meant the team had found the substance and was trying to confirm its source. So he increased his forecast to 65%.
As he predicted, the research team found polonium in Arafat's body, and Flack's final Brier score was 0.36, which is very impressive given the difficulty of the question.
But although new information can help you, it can also get in the way. For example, one question posed by the US government's Intelligence Advanced Research Projects Activity (IARPA) asked whether there would be less ice in the Arctic on September 15, 2014, than a year earlier.
Doug Lorch concluded that there was a 55% chance that the answer would be positive.
However, two days after his estimate, Lorch read a report from the Sea Ice Prediction Network, written a month earlier, which led him to raise his forecast to 95%, a major change based on a single report.
Then, on September 15, 2014, the amount of Arctic ice turned out to be higher than the previous year. Lorch's initial forecast had given that outcome a 45% probability; after his adjustment, it implied a mere 5%.
A skillful update requires careful attention to detail and careful analysis of the new information. Do not be afraid to change your mind, but think twice about how useful the new information really is.
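The book does not prescribe a formula for updating, but Bayes' rule is a standard way to reason about how much one piece of evidence should move a forecast. The sketch below is a minimal illustration; the likelihood numbers are invented, chosen only so the result lands near Flack's modest move from 60% to 65%.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the event after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical numbers: start at 60% and suppose the Swiss team's postponement
# is somewhat more likely if polonium was found (0.4) than if it was not (0.3).
posterior = bayes_update(0.60, p_evidence_if_true=0.4, p_evidence_if_false=0.3)
print(f"{posterior:.0%}")  # 67% -- a modest nudge, not a wholesale revision
```

Weak evidence moves the estimate a little; only evidence that is far more likely under one hypothesis than the other justifies a big jump like Lorch's.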
Working In Groups Can Improve Forecasts
You may have already heard the term 'groupthink.' Psychologist Irving Janis, who coined it, hypothesized that people in small groups build team spirit by unconsciously creating illusions that interfere with critical thinking.
This interference arises because people avoid confrontation and end up simply agreeing with one another.
The 'Good Judgment' research team decided to test whether group work would influence accuracy. They did this by creating online communication platforms where participants worked in separate groups.
At the outset, the research team shared advice on group dynamics and warned the online groups about groupthink. At the end of the first year, the results were in: on average, people working in teams were 23% more accurate than those who worked alone.
In the second year, the researchers put the best forecasters together into their own groups and found that these groups outperformed the regular ones.
However, group dynamics were also affected. Elaine Rich, who was in one of the best groups, was not satisfied with how it worked.
Everyone was very polite, and there was little criticism or discussion. To change this, the groups made an effort to show that they welcomed constructive criticism.
Another way to increase the effectiveness of group work is precise questioning, which encourages people to rethink their arguments.
This tactic is not new; great teachers have practiced precise questioning since the time of Socrates.
Precise questioning means exploring an argument in detail, for example by asking for the definition of a particular term.
Even when opinions are polarized, this kind of questioning reveals the thinking behind each conclusion and opens the way for further investigation.
The Characteristics Of Good Forecasts
Predictions should be clear: it should be easy for any observer to agree or disagree with you.
They need a concrete deadline. Predictions like 'unemployment will decrease with stimulus' do not make it clear when this should happen.
They must be probabilistic. For predictions to be used as a basis for decisions, it is important to know their level of confidence and to gauge confidence when necessary.
They should use specific numbers for probabilities.
Many mistakes happen when different people assign different meanings to phrases like 'there is a great possibility that this will happen,' with interpretations ranging from a 20% to an 80% chance in one case.
Making multiple predictions is important. Given partial knowledge and probabilistic events, there is no way to judge whether a single prediction of a '70% chance of rain' was wrong or merely unlucky.
Only with a large number of similar predictions can we begin to judge a forecaster's accuracy.
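To make that last point concrete, here is a minimal Python sketch, not taken from the book, of a simple calibration check: group forecasts by the probability that was stated and compare it with how often the events actually happened. The function name and toy data are purely illustrative.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, bucket_size=0.1):
    """Compare stated probabilities with observed frequencies, bucket by bucket.

    forecasts: probabilities assigned to each event (0.0-1.0)
    outcomes:  1 if the event happened, 0 if it did not
    """
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bucket = round(p / bucket_size) * bucket_size
        buckets[bucket].append(o)
    for bucket in sorted(buckets):
        hits = buckets[bucket]
        print(f"said ~{bucket:.0%}: happened {sum(hits) / len(hits):.0%} "
              f"of the time ({len(hits)} forecasts)")

# Toy data: ten occasions on which a forecaster said "70%"; the event
# occurred seven times, so those forecasts look well calibrated.
calibration_table([0.7] * 10, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
```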
Like this summary? We'd like to invite you to download our free 12min app for more amazing summaries and audiobooks.
“Superforecasting Quotes”
"For scientists, not knowing is exciting. It's an opportunity to discover; the more that is unknown, the greater the opportunity."
"For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded."
"There is no divinely mandated link between morality and competence."
"The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function."
"Forecasters who see illusory correlations and assume that moral and cognitive weakness run together will fail when we need them most."
Final Notes:
Superforecasting is not an innate gift. You can work and practice to develop this important skill.
Stay informed about events, break big problems down into smaller ones, and start taking the outside view!
Also published on Medium.