The Illusion of Skill and How to Make Better Predictions

Imagine being faced with the daunting task of predicting the future with nothing but incomplete information and a handful of hunches. Perhaps you’re considering new experiences, like travel or moving to a different town. When I imagined quitting my job, I envisioned a happier world, free from the perceived burden of corporate work. In the following months, however, my expectations were shattered upon realizing that I still largely felt the same despite my newfound freedom. In recent years, this recognition has remained vivid, sparking my curiosity about why predicting outcomes in the face of uncertainty is so difficult.

Whether anticipating a geopolitical event, forecasting stock market trends, or simply contemplating life’s next move, accurate prediction is an ever-present challenge. As I began to journal in earnest some years ago, I uncovered a fascinating trend hidden among the mundane details: my predictions were fraught with overblown concern and startling inaccuracy. Delving into the complexities of prediction and expertise, it becomes clear that many factors—from biases and cognitive distortions to the whims of randomness—shape our perceptions of certainty and accuracy. Despite these hurdles, new research offers a glimmer of hope.


The Study of Prediction Accuracy

When you or I predict something—say, the chances our employer will be bought out in a year or whether our friend will again lose his new girlfriend to climbing—we aren’t generally paid for our prediction. We can be wrong with relative impunity. Our friends or family members probably aren’t keeping score, and it’s well enough to shrug off any missed predictions. Maybe the buy-out or breakup hasn’t happened yet, but it will, you might say. Or we take comfort in signs that the company or relationship is struggling, but not altogether lost. When our prophecies eventually prove incorrect, we are prone to constructing new narratives—ones where our past selves accurately predicted today’s reality. This common distortion of memory is known as the hindsight bias.

Philip Tetlock, a psychologist and political scientist at the University of Pennsylvania, showed that experts who are paid to make predictions are no better than the rest of us. His now-famous work, published in his influential 2005 book Expert Political Judgment, upended the world of prediction punditry. Through an exhaustive analysis of over 80,000 predictions, Tetlock revealed a startling truth: so-called “experts” often fared no better—and sometimes worse—than simple statistical algorithms. Or, to put it more bluntly, they performed on par with “dart-throwing chimps.” Neither political affiliation nor professional credentials significantly improved the accuracy of predictions.

Further still, an inverse relationship emerged among the most elite predictors: an expert’s status—think job titles and credentials—was a decent proxy for error.

In other words, the more well-known and frequently cited the pundit, the less accurate the forecasts. Which raises the question: Why are even experts so often wrong?

Creating Narratives in a Random World

On a summer morning in 1945, an American bomber squadron pierced the hazy skies over Japan. The city of Kokura, their intended target, lay shrouded and obscured by clouds and drifting smoke, remnants of the previous day’s firebombing of Yahata. Faced with the mounting threat of enemy fire and dwindling fuel, the Americans nervously circled over a veil of gray in search of their objective.

Unable to find their mark, the B-29 crew continued southwest with their cargo—a 10,000-pound atomic bomb named “Fat Man.” Rumbling towards Nagasaki, they encountered similar cloud cover. With tensions mounting, the bombardier, Captain Kermit Beahan, spotted a narrow gap in the clouds and released the bomb into the void below. The detonation—an explosive force of 21,000 tons of TNT—resulted in the immediate death of at least 39,000 people in Nagasaki.

The mushroom cloud of the “Fat Man” bomb over Nagasaki, Japan. (Photo: Charles Levy)

Meanwhile, Kokura remained untouched. Atmospheric conditions—wind direction, water vapor—forever altered the lives of thousands and changed the course of history. Six days later Japan surrendered, bringing an end to World War II but not the end of suffering. Scores more perished in the years to come from burns, radiation sickness, and other injuries.

The events of that morning were not just a tragic moment in history, but also a stark reminder of the fickle nature of existence. The bombing of Nagasaki—and the luck of Kokura—underscored the profound impact of randomness.

The world is far more random and unpredictable than we’d like to believe.

The Past Makes Sense in Hindsight

It’s easy to connect the dots on past events. We can understand the forces or actions that led to successful businesses—or autocracies. But we so often fail to recognize the sheer influence of random chance. Uncertainty is forever.

The world is far more random and unpredictable than we’d like to believe. To cope with the chaos, we construct narratives in an attempt to impose order. Where information is lacking, we infer causes and consequences, filling gaps to create a satisfying story. The less information we have, the harder we subconsciously work to connect the dots. This gap-filling, like psychological masonry, can lead us to believe we know more than we actually do. Eventually, as our confidence in the unknowable grows, these inferences come to feel factual and certain, blinding us to our ignorance.

The illusion of validity is a cognitive bias that describes this overconfidence in the accuracy of our own judgments.

This bias is particularly evident in financial markets.

The Stock Market and the Illusion of Validity

Daniel Kahneman, a Nobel Prize-winning psychologist, visited an investment banking firm on Wall Street in 1984, joined by his colleagues Amos Tversky and Richard Thaler. The following interaction, detailed in his book Thinking, Fast and Slow, convinced Kahneman that perhaps an entire industry was built on an illusion of skill.

As they settled in, Kahneman admitted his ignorance to one of the firm’s managers. “When you sell a stock,” he said, “who buys it?” The manager, waving in the general direction of the window, indicated that the buyer was probably someone very much like him—another industry veteran. Kahneman found this logic puzzling. Why would this manager, he wondered, sell a stock to someone “very much like him”? Wouldn’t that person also want to sell?

Photo: Pixabay

Each day, billions of shares are traded between many buyers and sellers. In theory, these buyers and sellers have the same information, but a difference of opinion. A seller believes the share price will drop, while a buyer thinks it will rise. The same company, the same share, but two different narratives on the future value of that share. If assets in a market are priced perfectly, then no one can gain or lose by trading. But people make predictions, and those predictions are often flawed.

As I’ve discussed at length, the notion that most individual investors can beat the returns of a standard index fund over the long haul is an illusion. A landmark study examining 10,000 investors and almost 163,000 trades aimed to answer a straightforward question: Do our predictions about future stock prices hold up when we trade? Investors in the study sold stocks only to quickly buy others, assuming that the stocks they purchased would outperform the ones they sold.

However, the results showed the opposite: the stocks they sold went on to outperform the ones they bought by an average of 3.2% annually. In effect, these investors swapped future winners for future losers. Compared with the 8-10% average yearly return of a typical index fund, this gap reveals the true cost of believing in one’s trading prowess. Numerous other studies have corroborated these findings: actively managed investment portfolios consistently underperform simple index funds.
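To see what a 3.2% annual drag costs over time, here is a quick compound-growth sketch. The 8% index return and 30-year horizon are illustrative assumptions, not figures from the study:

```python
# Illustrative compounding: an 8% index return vs. the same
# starting point minus a 3.2% annual trading drag.
def grow(principal: float, annual_return: float, years: int) -> float:
    """Compound a lump sum at a fixed annual return."""
    return principal * (1 + annual_return) ** years

index_fund = grow(10_000, 0.08, 30)             # roughly $100,600
active_trader = grow(10_000, 0.08 - 0.032, 30)  # roughly $40,800

print(f"Index fund:    ${index_fund:,.0f}")
print(f"Active trader: ${active_trader:,.0f}")
```

Under these assumptions, the active trader ends with less than half the index investor’s balance—the quiet, compounding price of overconfident trading.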

In another case, a financial firm providing services to very wealthy clients invited Kahneman to speak. In preparation, the firm provided a database of investment outcomes over the previous eight years for twenty-five advisors, a spreadsheet that effectively ranked each advisor for year-end bonus calculations. Digging into the data, Kahneman found that the firm was rewarding luck, not skill: correlations between an advisor’s rankings from one year to the next were effectively zero, so variations in performance could only be attributed to randomness. When Kahneman revealed his findings, the management team wasn’t surprised. But did they change?

“I have done very well for the firm and no one can take that away from me.”

A Narrative of Skill

When a narrative of skill becomes ingrained in a culture or in one’s self-worth, it is easy to ignore statistics. Recall our recent discussion of the base rate fallacy. The illusion of skill can make us overconfident in our abilities, sometimes causing us to confuse good fortune with talent. Knowing a business prospect has a 20% chance of success, we might flip those statistics, believing our own chance of success is 80%. But there’s no outrunning the reality: an 80% chance of failure.

External and unexpected events—say, a pandemic—can intervene to undermine or strengthen a business. In mid-2020, it might have been rough to be a restaurant owner, but a great time to be selling home gardening supplies or climbing holds. If we construct a narrative of skill in a game of chance, we can mistake delusion for expertise. This is particularly likely when our livelihood or reputation rests on a mistaken belief in our talent.

The morning after Kahneman shared his findings with the Wall Street firm, one of the executives approached him and said, “I have done very well for the firm and no one can take that away from me.” Kahneman believes the findings he shared with the management team were considered and quickly swept under the rug.

While it’s common for us to associate our identity with the false belief in skill, recent research provides a glimmer of hope for enhancing our predictive abilities.

When Predictions Improve

By now we’ve seen that expertise can sometimes be illusionary, leading to erroneous predictions. But in what situations can predictions be useful? Those that involve the collective probability assessments of many others.

In a famous 1906 study that borders on myth, 800 people participated in an experiment at a county fair in Plymouth, England to estimate the weight of a slaughtered and dressed “fat ox.” Individual guesses ranged considerably, some high, some low. The median guess, however, at 1207 pounds, came shockingly close to the actual weight of 1198 pounds, within 1%.

Photo: Pexels/Carolin Wenske

A study published in 2010 revealed that polls asking voters to predict election outcomes, regardless of their political affiliation or desired candidate, were more accurate than polls solely assessing voting preferences. In other words, when voters were asked to judge the likelihood of a successful candidacy, they had to think beyond their preferences and assess probability instead. And it worked.

Using this logic, a team of researchers including Tetlock—the man who shook the public world of expertise—published a follow-up study demonstrating that crowdsourced forecasts could be highly accurate in predicting geopolitical and economic outcomes. This finding underscores the potential for collaborative prediction methods to enhance accuracy.

Improve Your Predictions

The bad news is that some of us are better at predictions than others, likely owing to intelligence. The good news? Prediction is a learned skill, and we can all improve through practice. Tetlock’s recent research, including this study, helps us understand how to predict more accurately.

Crowdsource Information

Teams outperform individuals. Remember the ox? The median of many guesses was highly accurate. Gather thoughts and input from as many sources as possible. One caveat: because groups carry their own biases, the forecasting teams in these studies were trained in effective collaboration.
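A toy simulation of the ox experiment shows why the median of many noisy, independent guesses beats the typical individual guesser. The noise level here is invented for illustration; only the true weight and the crowd size come from the Galton story:

```python
import random
import statistics

random.seed(42)  # reproducible demo

TRUE_WEIGHT = 1198  # pounds: the ox's actual dressed weight

# 800 independent guessers, each noisy but unbiased on average.
guesses = [random.gauss(TRUE_WEIGHT, 200) for _ in range(800)]

crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.median(
    abs(g - TRUE_WEIGHT) for g in guesses
)

print(f"Crowd median error:       {crowd_error:.0f} lb")
print(f"Typical individual error: {typical_individual_error:.0f} lb")
```

The individual errors largely cancel in aggregate, so the crowd’s median lands within a percent or two of the truth while the typical individual misses by over a hundred pounds.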

Domain Expertise Helps

Tetlock’s early work revealed that experts often made incorrect predictions. However, as you’ve likely experienced in interactions with your boss, prestige and status don’t always align closely with genuine domain expertise. Those with intimate knowledge of a field tend to make better predictions.

Open Minds, Better Predictions

Intuition is often misleading. The overly indoctrinated are prone to poor forecasting. Remember Y2K? I do. Those who gather and consider multiple viewpoints make more accurate predictions.

Furthermore, those willing to revise predictions as new information arrives are more successful. Take the outside view, not just the inside view, whenever possible. It’s good life advice, too.

Think in Ranges and Probabilities

Instead of making absolute guesses, start by estimating a range of probable outcomes. For example, you might estimate that a “fat ox” weighs somewhere between 100 and 10,000 pounds. Then, consider whether you can refine that range using probabilities. Are you 50% confident that the answer falls within that range, or maybe 90% confident? Where possible, consider base rates as well.
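Probabilistic forecasts can also be scored. Tetlock’s forecasting tournaments use the Brier score: the mean squared difference between your stated probabilities and what actually happened, where lower is better. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasts (probability the event happens)
# and what actually happened (1 = it did, 0 = it didn't).
score = brier_score([0.9, 0.2, 0.7], [1, 0, 1])
print(f"Brier score: {score:.3f}")  # 0.0 is perfect; always guessing 0.5 scores 0.25
```

Tracking a score like this over many predictions is what separates genuine calibration from a lucky streak.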

Defer to Statistical Models to Improve Predictions

Want to make the best predictions? Rely on statistical models. Most techniques described above are basic attempts to get humans to think like simple algorithms. I know. Ouch.
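Recall the business prospect with a 20% chance of success. A small sketch (the forecasters and their probabilities are invented) shows why even the dumbest statistical model—just predict the base rate every time—beats an overconfident human, scored by expected squared error:

```python
BASE_RATE = 0.20  # fraction of prospects that actually succeed

def expected_brier(forecast: float, base_rate: float) -> float:
    """Expected Brier score when every prospect gets the same forecast."""
    # With probability base_rate the outcome is 1, otherwise it is 0.
    return base_rate * (forecast - 1) ** 2 + (1 - base_rate) * forecast ** 2

overconfident_human = expected_brier(0.80, BASE_RATE)   # flips the statistics
base_rate_model = expected_brier(BASE_RATE, BASE_RATE)  # just predicts 20%

print(f"Overconfident human: {overconfident_human:.2f}")  # 0.52
print(f"Base-rate model:     {base_rate_model:.2f}")      # 0.16
```

The model wins not because it is clever but because it never strays from the evidence—which is exactly what the techniques above try to coax out of human forecasters.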

Conclusion

I’ve come to understand that how I feel about the future is prone to error. But in that rather clinical and sterile examination, I also recognize the pleasures of forecasting. It’s a joy and a supreme source of hope to imagine our future lives differently. And let’s be real: it’s sometimes a bit of fun to gossip or muse about things to come. Where I’ve failed in the past was in putting too much hope on low-probability outcomes. In some small way, I believed that leaving my corporate job would somehow relieve all my troubles.

With time, I’ve found value in gathering the diverse and varied opinions of others. By acknowledging our limitations and embracing probabilistic thinking, we can strive to make more informed decisions. Knowing life isn’t necessarily better without my job, I’m using these techniques to consider my next move in the chess game of life. Regardless of my decision, the future will inevitably unfold differently than envisioned.


Have questions? Need some feedback? Leave a comment or hit us up on the contact page.

If you enjoyed this post, please subscribe here for much, much more. And please, send this to someone who might enjoy or benefit from this content.

Support this free project:

Subscribe to the weekly newsletter and receive a FREE spreadsheet for tracking spending, income, and net worth!


Thanks guys, see you next time.

Affiliate links are used on this page. You will incur no extra charges if you purchase a linked product, but we will receive a tiny-baby portion of the sale. Those minimal proceeds help us keep the digital lights on around here. We wouldn’t link to a product we wouldn’t buy ourselves.

What say you friend?