Book Review: Future Babble by Dan Gardner

Today’s guest post is from Darren McKee, a contributor to the Ottawa Skeptics podcast. Want to contribute a review? Contact us.

I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.

My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway. Gardner, a columnist for the Ottawa Citizen and author of the bestselling Risk, returns to the format that made Risk such a success: Find some interesting psychological research from the past few decades; describe the research in accessible and pithy prose for a general audience; emphasize cognitive biases; extrapolate the research findings to popular events to indicate why they matter; and imply that we should change our behaviours and policies.

In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.

Gardner presents Tetlock’s experimental design clearly, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert had to answer clearly and express their degree of certainty as a percentage (e.g., dead certain = 100%). The use of precise numbers opens up more statistical options and removes the complications of vague or ambiguous language.
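To see why eliciting a numeric probability matters, consider how such forecasts can be scored once the outcomes are known. A standard approach is the Brier score (mean squared error between stated probability and outcome); Tetlock’s actual scoring system is more elaborate, so treat this as a rough sketch of the idea rather than his method:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and outcome.

    forecasts: list of (probability, outcome) pairs, where outcome is
    1 if the event happened and 0 if it did not.
    0.0 is a perfect score; always saying 50% yields 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A "dead certain" expert who is often wrong scores worse than a
# cautious expert who hedges appropriately.
confident_hedgehog = [(1.0, 0), (1.0, 0), (1.0, 1)]   # certain, wrong twice
cautious_fox = [(0.6, 1), (0.4, 0), (0.5, 1)]          # hedged, roughly calibrated

print(brier_score(confident_hedgehog))  # ≈ 0.667
print(brier_score(cautious_fox))        # ≈ 0.19
```

This also shows why vague verbal forecasts (“there’s a real chance…”) are unscoreable: without a number, there is nothing to square against the outcome.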

After letting this impressive experiment run its course for several years and crunching all the numbers to see how the predictions bore out, Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.

“What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)

The big difference is in the way the experts think.

The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. By contrast, the experts who did decently were self-critical, used multiple sources of information, and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p. 27)

Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are likely in the background questioning the ability of prediction itself while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news. This is even more concerning as one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)

Gardner did such a superb job in the first chapter that you almost don’t need to read the rest of the book. Those with a background in psychology, as well as the seasoned skeptic, will see some familiar faces: confirmation bias, hindsight bias, negativity bias, optimism bias, partisan bias, status quo bias, availability heuristic, cognitive dissonance, and so on. That said, most readers could use a primer or a refresher on such biases, and the following chapters usefully illustrate the key findings with varied examples, so I do recommend the rest of the book, particularly for those with less of a background.

Future Babble would make a great gift, and I hope that Gardner’s popularization of Tetlock’s work succeeds and the issues raised become part of a larger discussion on the validity of expert predictions.

Appendix (of sorts)

So ends the book review proper. Below I examine the book in more detail by going chapter by chapter, presenting some of my thoughts and notes. This content will likely be useful to those who want more detail, but it might be especially useful for those who have already read the work or who are looking to tease out discussion points.

Chapter 2 – The Unpredictable World
An exploration into how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.

Chapter 3 – In the Minds of Experts
A more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.

Chapter 4 – The Experts Agree: Expect Much More of the Same
An interesting and almost amusing analysis of how the rise of Japan was the big fear in the US in the early 1990s, and pretty much none of it came true. He wisely mentions how the same concerns are occurring with China now. Although these concerns might be true, we should be wary of believing them. Gardner really drives home the notion that an ordinary person has about as good a chance at making correct predictions as most experts.
I found two flaws in this chapter, neither major but worth noting.
Flaw #1: Gardner uses a gross national income statistic to compare the US and other countries, but he doesn’t use per capita measures (p. 94). This is misleading and doesn’t fit with the rigour of the rest of the book.
Flaw #2: Gardner could have had a more nuanced discussion of Tetlock’s work and how it fits into the status quo problem. The issue here is that Tetlock found that if you predict “no change,” you’ll actually do a decent job predicting things. A related notion is the status quo bias, where people assume that things will continue as they are. This is a problem because people invalidly extrapolate trend lines. There is a subtle distinction here between assuming that the present circumstances won’t change (good for prediction) and assuming that indicators in the present are valid predictors of future circumstances (bad for prediction). I don’t think it would have been too much trouble to tease this out (if only in a footnote).
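The distinction above can be made concrete with a toy example (my own illustration, not from the book): on a series that tends to revert toward a stable level, predicting “same as today” does reasonably well, while extrapolating the most recent trend line does worse:

```python
import random

random.seed(42)

# Toy mean-reverting series: values wander around a stable level of 100.
series = [100.0]
for _ in range(200):
    series.append(series[-1] + 0.5 * (100 - series[-1]) + random.gauss(0, 2))

def mae(predict):
    """Mean absolute error of one-step-ahead forecasts."""
    errors = [abs(predict(t) - series[t + 1]) for t in range(2, len(series) - 1)]
    return sum(errors) / len(errors)

status_quo = lambda t: series[t]                           # assume no change
trend = lambda t: series[t] + (series[t] - series[t - 1])  # extrapolate last move

print(mae(status_quo))  # "no change" forecast
print(mae(trend))       # trend extrapolation does worse here
```

In a mean-reverting world, the last move carries no information about the next one (if anything, it points the wrong way), so extending the trend line adds error rather than removing it.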

Chapter 5 – Unsettled by Uncertainty
While there was a lot of interesting information in this chapter, it felt disjointed and had a few too many anecdotes for my comfort. It was mainly stories of how bad things were in the 1970s, or how dire the predictions were, and how nothing that bad came to pass. It might be the weakest chapter, but the social/intellectual history was decent. To be fair, a different reader might enjoy having the concepts elaborated upon. The problem for me is that once Gardner displayed Tetlock’s findings in the early chapters, further anecdotal information does not increase how convinced I am.

Chapter 6 – Everyone Loves a Hedgehog
More about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
Minor issue: If you check footnote 56, you’ll see Gardner admit to an error that he exposes numerous others making in the body of the text. I wondered why he did this in a footnote. Was he concerned that admitting he didn’t check common wisdom for accuracy would undermine his authority as a columnist and writer?

Chapter 7 – When Prophets Fail
This might be the most entertaining chapter as it looks at prophets and prophecies, including experts who predicted Y2K chaos and calamities that never happened. There is a good exploration of Leon Festinger’s cognitive dissonance, which can generally be explained by saying that two or more beliefs come into conflict and are usually resolved in a self-enhancing manner, putting truth as a lower priority. Regarding the theme of this book, “a mind deeply committed to the truth of a prediction will do almost anything to avoid seeing evidence of the prediction’s failure for what it is.” (p. 196)
Gardner uses a nice analogy, describing dissonance as a cognitive migraine and self-enhancing belief as a pill that makes the pain go away. Once again, there are too many anecdotes for my tastes, but it is useful as case studies illustrate the concerns and help the reader understand and hopefully apply the lessons thereof.
The chapter opened with a great quotation by John Maynard Keynes: “When the facts change, I change my mind. What do you do, Sir?”
Finally, it is in chapter 7 that Gardner writes one of his best passages (p. 236):

“An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”

Chapter 8 – The End
Gardner really pulls it all together in the last chapter with a good flow and summary of aforementioned themes and facts without it feeling repetitive or awkward. Helpfully, Gardner provides specific examples of better ways to think about issues (the Fox approach). One can only hope that these tactics will be adopted and humility will increase along with the accuracy of predictions. Once again, there are nice phrases throughout, and he knows how to write quotable prose.

So, was my prediction correct?

5 Responses to “Book Review: Future Babble by Dan Gardner”

  1. Evan Harper says:

    I had rather the opposite reaction to that passage you quoted. It seems obvious that there are unfalsifiable, completely non-empirical propositions which have nothing to do with illogical dogma. Mathematical propositions, for instance. Or definitions, or axioms. Really it is a silly mistake to conflate “empirically falsifiable” with “logically well-founded.”

    • Erik Davis says:

      Good point. I think the language is a bit sloppy here — not just for that reason, but because it also doesn’t capture the fact that there’s lots of dogma that is falsifiable…and indeed has been falsified already.

      • Darren McKee says:

While you are both technically correct, I think you are missing the main gist (as Dan alluded to below). There are fundamental assumptions that must be made (e.g., that the world exists) and axioms are the foundation of math, but for the average person (who I believe is the target audience of the book), these things are not concerning. Many quote Hitchens when he says what can be asserted without evidence can also be dismissed without evidence, but they don’t then go on to say Hitchens dismisses all of math or any such thing.

      • Erik Davis says:

        Fair ’nuff. Future Babble is still on my nightstand unread, but if it’s anything close to Risk (which I reviewed here) I expect these are very minor quibbles.

  2. Dan Gardner says:

    Thanks for the review, guys.

    Evan, the passage should be read in context. It’s deep into a book about politics, economics, the environment, etc. And it’s at the end of a chapter in which I detail examples of people making predictions about these subjects and ignoring overwhelming empirical evidence that the predictions failed. It is certainly not intended to be an exhaustive statement on falsifiability and dogma in all domains and I don’t think any reasonable reader would take it as such.

  • Darren McKee

    Darren McKee is concerned with the promotion of critical thinking. He is a co-host of the weekly skepticism podcast The Reality Check and also blogs with CFI Ottawa. He thinks skeptics should incorporate more morality into their explorations and thinks everyone should read (more) Daniel Dennett and Peter Singer.