Back to the Future

February 16, 2016

We are obsessed with the future. Many explanations are offered for this, but most boil down to a general anxiety about the present. The tendency has its advantages; our “future obsession” may in fact be one of the key evolutionary advances that has propelled humans forward. By worrying and wondering, we seek to “see” ahead, and in so doing imagine paths that could carry us there.

Yet it is in fact the past that holds most of the clues on which our future depends. The laws of nature, or perhaps simply the laws of “human nature,” potently drive our trajectory, yet we seem less and less interested in remembering them.

Educational institutions are great places to see where culture and society are heading. A recent survey of the Stanford University faculty directory revealed that forty-five percent of the professors teach humanities, yet only fifteen percent of the students take their classes. Students are less interested in learning what we already know than in pursuing what we (they) could create.

Ironically, when we try to make the future more predictable, we can only look to the past. Be it weather forecasting, purchase probability or unemployment, our only toehold is what we can glean from historical data.

Perhaps the one notable exception is predicting the potential of a new idea. Recent work has shown that historical bias is one of the more challenging obstacles in “creative forecasting.”

Narratives about the future are potent. The Weather Channel, the Star Wars series, or dystopian stories like George Orwell’s 1984 (published in ’49) – anything that hints at what tomorrow holds grabs our immediate and relentless attention.

Each year, communities reconvene across the globe to take a glimpse into tomorrow. Every industry has its iconic gathering. In tech it is CES, in healthcare it’s J.P. Morgan, in medical devices it’s AdvaMed – and at each, devotees pore over the pundit predictions and scribble down the forecasts and “forward-looking” statements in dutiful detail.

Our world’s leaders take their annual “dose of tomorrow” in the rarefied alpine air of Davos, Switzerland. While it is probably safe to say that every Davos session touches in some way on “the future,” the 2016 program had no fewer than 12 sessions explicitly devoted to the future of this, that or the other – from the future of Ebola outbreaks and Made-in-China to “future-proofing” the world economy, whatever that might mean. Selling the future sells tickets.

Our anxiety about the future is well founded, at least with respect to our ability to forecast it. The Good Judgment Project (GJP) is a fascinating study of our forecasting abilities – when forecasting is possible, how accurate we can expect it to be, and what makes a person (or group) particularly good at it.

The GJP started in 2011, led by Philip E. Tetlock, Barbara Mellers and Don Moore, as part of the Aggregative Contingent Estimation (ACE) forecasting tournament run by the Intelligence Advanced Research Projects Activity (IARPA). The question was simple: how close could laypeople, given basic training and access to generally available information, come to the predictions made by highly trained government intelligence experts with access to highly classified information?

Over the course of several annual competitive cycles, a few thousand non-intelligence participants were involved. For the most part, their predictions were about as good as (or worse than) those generated by the professionals, but a subset emerged that could consistently deliver greater than 30 percent higher prediction accuracy – the so-called Super Forecasters, the top two percent of the non-intelligence participants.

While each thought in remarkably different ways – some more probabilistic, some more narrative – advantageous traits emerged over time. Those who kept a more open mind and explored the range of possibilities before making their judgment were more effective: those who would take both an inside and an outside view of the problem. Drawing on what Daniel Kahneman calls “System 2” thinking, Super Forecasters also embraced nuance and worked well with others, engaging in robust discussion of their disagreements to develop a healthy debate. Yet perhaps most important, Super Forecasters focused on improvement. Having dynamic access to their scores enabled them to learn from their mistakes and forced them to remember how difficult forecasting truly is. They stuck with it. They had the grit to look at the question from all sides.

So as we hunker down to finalize our 2016 goals and objectives, and focus on delivering the futures that we’ve promised, it is helpful to be reminded how much delivering has in common with forecasting. Like those who get better at “guessing,” each of us can get better at “delivering” by remaining open-minded, deliberate, collaborative and feedback-driven.

While predicting the future is difficult, building the future is less so.