Thoughts on the outlook for 2016 and the nature of forecasting

As long-time readers may be aware, this is the time of year when I tend to start rambling on about forecasts, probabilities and prediction markets. If you’re interested in those topics, then Superforecasting, a new book by Philip Tetlock, could be good holiday reading.

What does 2016 hold?

At the heart of any forecast or outlook is a problem: even though the best forecasters have insight about the future, nobody really knows exactly how things are going to turn out. So the most prominent forecasters are not always the ones who are the best at divining what is going to happen, but rather the ones who are the most skillful in packaging and delivering their commentary.

In Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner¹ tackle this problem head on. Tetlock has been testing the effectiveness of forecasters for years, and oversaw the creation of the Good Judgment Project (GJP). When the GJP took part in a research effort sponsored by the U.S. intelligence community to improve the quality of its forecasts, it consistently beat every other source (even professional intelligence analysts with access to confidential data), so much so that "after two years, GJP was doing so much better than its academic competitors that (the agency) dropped the other teams."

And the starting point for that success was simple: you need to objectively measure the results of the forecasts. That may be simple, but it’s not easy, partly because we are dealing in probabilities, not certainties, and for a host of other reasons as well. But it is possible.

And you can only measure a forecast that is specific ("there is a 55% chance that Hillary Clinton will be the next President of the United States," for example). Vagueness or open-ended timelines preclude objective assessment.
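The post doesn't name the scoring rule, but the measure Tetlock's research actually uses for forecasts like the one above is the Brier score: the average squared gap between the stated probability and what happened. A minimal sketch (the function name and the example probabilities are illustrative, not from the book):

```python
def brier_score(forecasts, outcomes):
    """Brier score: mean squared difference between forecast probabilities
    and outcomes (1 if the event happened, 0 if not). Lower is better:
    0.0 is perfect, and a permanent 50/50 hedge scores 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 55%, 80% and 30% on three events,
# of which the first two happened:
score = brier_score([0.55, 0.80, 0.30], [1, 1, 0])  # ≈ 0.111
```

Note what the rule rewards: you can't game it by hedging everything at 50%, and confident wrong calls are punished heavily, which is exactly why it only works on forecasts specific enough to be scored at all.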

Measurement is essential, but it’s only the beginning. Having spent years identifying people who consistently produce accurate forecasts, Tetlock goes on to describe their characteristics—open-mindedness, dedication, consistent striving to improve, and so on.

Tetlock also discusses the role and effectiveness of prediction markets. One of the impressive features of the GJP’s track record is that it was able to consistently beat not only other forecasters, but also prediction markets.

Forecasts vs. predictions

Of course, much of the commentary that we are used to in the investment markets is not specific enough to pass the measurability test set out above. And, anyway, who wants to hear that there’s a 40% chance of this or a 70% chance of that? In this industry, we expect our commentators to take a stance. I’ve touched on this point previously (and it’s something Nate Silver has written about)—the short version being that a forecast is about probabilities, whereas a prediction is specific. Predictions are more tangible and hence more interesting—and they can sometimes offer a cleaner view into the underlying insight. But they are different.

For specific predictions (and perhaps a couple of forecasts) on what the new year may bring, Russell's annual market outlook will be available shortly and, as always, I shall defer to our strategist team's opinions on that front.

Meanwhile, since this is (probably) my final blog of the year (apart from a “most popular” post we’ll be running next week), allow me to wish you all a happy, healthy and prosperous 2016.

¹Tetlock, P. and D. Gardner (2015). Superforecasting: The Art and Science of Prediction. Crown Publishing Group: New York. This is, incidentally, Dan Gardner the financial journalist, not Dan Gardner who works on Russell Investments’ DC team.