“A politician needs the ability to foretell what will happen tomorrow, next week, next month and next year. And to have the ability afterwards to explain why it didn’t happen.”
– Winston Churchill

Ever watched a panel of experts on TV disagreeing over what will happen next in the economy, or in politics, or in a war… or in Libya? These impressive, informed, confident experts are keen to tell us what they think will happen, and even what would happen if political leaders followed their recommendations. Indeed, one or more of these experts may well have the ear of political decision makers.

Influential experts and the political leaders they advise believe that they can make useful forecasts about complex and uncertain situations, such as the one we currently see in Libya. But simply observing the disagreements of the experts should make us question whether relying on experts’ opinions is a good approach to making important policy decisions.

So why doesn’t someone study how to overcome this problem?

As it happens, for about 80 years, and with increasing attention in recent years, researchers have been doing just that.

The research on the value of expertise in forecasting was summarized in 1980 as the Seer-Sucker Theory: “No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers.”

Philip E. Tetlock, now the Leonore Annenberg University Professor in Democracy and Citizenship at Penn, provided exceptional support for the Seer-Sucker Theory with his 2005 book, Expert Political Judgment. The book describes the findings from Tetlock’s 20-year study of 284 political and economic experts and their 82,361 forecasts during that time. As it turned out, the experts’ forecasts were no more accurate than those of novices.

The findings also apply to conflict situations. In a recent study, Kesten C. Green of the University of South Australia and I compared the accuracy of 106 forecasts by experts and 169 forecasts by novices about eight real conflicts. The forecasts of experts who used their unaided judgment were little more accurate than those of novices, and were less accurate than forecasts from simple rules (Green & Armstrong 2007a).

In wondering why he made so many mistakes in Vietnam, former Secretary of Defense Robert McNamara suggested it was important to put oneself in the shoes of opponents in order to predict the decisions they would make. Some people in the U.S. intelligence community and others in business told us that this is what they already do. So we searched for evidence on the effectiveness of this procedure. Strangely, we found none; so we tested the recommended method.

We obtained 101 “role-thinking” (our term for the method of “standing in the other person’s shoes”) forecasts of the decisions that would be made in nine diverse conflicts from 27 Naval Postgraduate School students (experts) and 107 role-thinking forecasts from 103 second-year organizational behavior students (novices). The accuracy of the novices’ forecasts was 33 percent. The experts came in at 31 percent. Both groups were little better than pure chance (call it “guessing”), which was right 28 percent of the time. The lack of improvement in accuracy from role-thinking strengthens the finding from earlier research that thinking hard about a situation is not sufficient to predict the decisions groups of people will make when they are in conflict (Green & Armstrong 2011).

In case you are overcome with despair at the desperate state of forecasting for conflicts, please read on: All is not lost!

There are two (and only two) fully-disclosed evidence-based procedures that have been shown to provide forecasts for conflicts that are more accurate than guessing. (There are also, it should be noted, a number of secret formulas that consultants sell to government agencies and businesses with claims of success.)

One of the evidence-based procedures is called Structured Analogies. Experts in the problem area (say, revolutions, or North African politics) are asked to describe as many situations as they can that are similar (analogous) to the target situation. They list similarities and differences between each analogy and the target, and rate each analogy’s similarity to the target. The experts then indicate what outcome each analogy implies for the target situation. Finally, an administrator takes the outcome implied by each expert’s top-rated analogy as that expert’s forecast. The modal forecast from a few independent experts is used as the structured-analogies forecast.
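The administrator’s aggregation step can be sketched in a few lines of code. This is only an illustration of the final tallying logic, not the published method; the data structure (each expert supplies similarity-rated analogies with implied outcomes) and all names and example outcomes are hypothetical.

```python
from collections import Counter

def structured_analogies_forecast(experts):
    """Illustrative aggregation for Structured Analogies.

    `experts` is a list; each entry is one expert's list of
    (similarity_rating, implied_outcome) pairs. For each expert, take
    the outcome implied by that expert's top-rated analogy; the modal
    outcome across experts is the forecast.
    """
    top_outcomes = []
    for analogies in experts:
        if not analogies:
            continue  # an expert who found no analogies contributes nothing
        _, outcome = max(analogies, key=lambda a: a[0])
        top_outcomes.append(outcome)
    if not top_outcomes:
        return None
    return Counter(top_outcomes).most_common(1)[0][0]

# Hypothetical example: three experts rate analogies to a standoff.
experts = [
    [(7, "negotiated settlement"), (4, "escalation")],
    [(9, "escalation")],
    [(6, "negotiated settlement"), (5, "status quo")],
]
print(structured_analogies_forecast(experts))  # "negotiated settlement"
```

Note that each expert contributes only one implied outcome (from their single most similar analogy); the experts work independently, and disagreement is resolved only at the final modal step.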

The other method is called Simulated Interaction. To obtain forecasts using this method, subjects are asked to adopt the roles of the key people who are involved in the situation, and to interact with each other in a realistic manner. Interactions might take place face-to-face or in the form of statements to the media. As with structured analogies, experts are needed to describe the situation and the roles of the people involved. It is not necessary, however, for experts to take on the roles; doing so may even harm the accuracy of forecasts, because experts may be unable to set aside their own understanding of the situation in order to adopt the views and desires of a key protagonist. The method can be used in any situation. The U.S. and German militaries, police dealing with armed hostage-takers, law firms, and a few businesses have used variations of the method.

Our experiments were the first to provide evidence that simulated interaction and structured analogies lead to gains in forecast accuracy. And the gains were large. Whereas experts were, on average, correct for 32 percent of their predictions (little better than guessing), structured analogies were correct for 60 percent and simulated interactions were correct for 62 percent.

The structured analogies and simulated interaction methods have been fully disclosed in journals (in particular, see Green & Armstrong 2007b and Green 2005) and described to government advisors. We hope that our political and military leaders are benefiting from these methods when they consider how to respond to the situations in Libya and elsewhere. Better forecasting of the responses of all sides in a conflict will help decision makers reduce bad outcomes.

References

Green, K. C. (2005). “Game theory, simulated interaction, and unaided judgment for forecasting decisions in conflicts: Further evidence”. International Journal of Forecasting, 21, 463-472.

Green, K. C. & Armstrong, J. S. (2011). “Role thinking: Standing in other people’s shoes to forecast decisions in conflicts.” International Journal of Forecasting, 27, 69-80.

Green, K. C. & Armstrong, J. S. (2007a). “The value of expertise for forecasting decisions in conflicts.” Interfaces, 37, 287-299.

Green, K. C. & Armstrong, J. S. (2007b). “Structured analogies for forecasting.” International Journal of Forecasting, 23, 365-376.

Tetlock, P. E. (2005). Expert Political Judgment. Princeton, NJ: Princeton University Press.