Recently, I filed a post about food allergies in the schoolyard, which was driven by catching myself in several common logical fallacies. You see, before writing the article, I had a general sense that school policies designed to protect allergic kids from peanut exposure were getting out of hand. After all, back when I was a kid, we didn’t have them and I can’t remember a single anaphylactic reaction. So when I overheard the mother of an allergic child in my son’s school complaining that a new school policy was overdoing it, it seemed pretty clear I was right all along.
Of course I wasn’t. My belief was based on anecdotal data and an appeal to antiquity, all bathed in the “warm bath of confirmation bias”, as Dan Gardner so eloquently put it. Once I’d actually looked at the data and thought about the math, policy responses like Sabrina’s Law seemed much more reasonable. Until that point though, I was pretty certain they weren’t, based on what seemed like common sense.
Now common sense is a popular notion. Here in Ontario, we had a whole revolution based on it. It also underpinned the (slightly more significant) American Revolution, and deep thinkers like Glenn Beck continue to espouse it. It’s what every parent needs and what Lehman Brothers lacked. It’s clearly an awfully good thing.
Yet it’s also somewhat curious, for a couple of reasons. First, while everyone thinks they have it, not everyone agrees with one another — a fact which causes shockingly little cognitive dissonance. And second, though this is certainly a quibble: we know for a fact that sense — meaning the ability to consistently reason soundly — is not at all common. It’s an unnatural skill for humans, and as a species we’ve only been trying to do it for a few thousand years, with many of the most significant advances coming in the last few hundred. Indeed, to use “sense” as we know it today requires deliberate training of a sort that is most uncommon.
Some Thoughts on Thinking
To illustrate this, let’s take a fantastically oversimplified look at the history of human thought. In the beginning, as any first-year philosophy student can tell you, there were logic and rhetoric. Rhetoric dates back to the beginnings of written history (~3000 BCE), and was already recognized as a mixed blessing by Plato’s time, because it can manipulate as well as it can elucidate. (The Sophists did the former, Socrates the latter.) Logic, first codified by Aristotle, was the purer tool of thought — or at least one branch of it was. Deductive reasoning, the process of carefully deriving new knowledge from a set of known premises, became the underpinning of human thought for the next 2000 years.
Now Aristotle identified a second branch of logic called inductive reasoning, which goes in the opposite direction, forming a general conclusion from the observation of specific instances, and he recognized it as useful in science. But he didn’t do much more than that, and while it was taken up periodically over the ensuing millennia, including by such heavyweights as the Persian philosopher Avicenna, there were no real advances in the field until around the 17th century. In fact, much of the attention it did receive was devoted to arguing whether it should truly be part of logic at all. After all, the conclusions of induction could be false even when the premises were true and no logical flaw existed, so how could one derive reliable knowledge from such a process?
The problem was that there was no way of knowing just how closely a sample matched the population — at least until the development of statistics, which itself had to wait for the development of probability. And probability really only started to be codified with Fermat and Pascal in the mid-17th century, with antecedents no more than a century older than that. Statistics and probability gave induction its own language, and with Bacon and others also giving induction a second look as the Age of Reason began, the scientific method was born.
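To make that idea concrete: the statistical language of induction lets us say precisely how well a sample is likely to reflect the population it was drawn from. A minimal sketch, using only the standard library and an entirely hypothetical survey, computes the common normal-approximation confidence interval for a sample proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval (95% for z=1.96) for a
    population proportion, estimated from a sample of size n."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical survey: 130 of 200 sampled people answer "yes".
low, high = proportion_ci(130, 200)
print(f"sample estimate: {130/200:.2f}, 95% CI: ({low:.3f}, {high:.3f})")
```

The interval narrows as the sample grows, which is exactly the sense in which induction from a bigger sample yields more reliable knowledge — something Aristotle had no vocabulary for.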
There is no way to overstate the importance of the scientific method. After 2000 years relying solely on deductive reasoning, we realized that there was another way to obtain knowledge. Instead of all knowledge being a product or recombination of what we already know, we now had a mechanism to learn genuinely new things — by direct measurement and a determination of the precise likelihood that our measurements reflected the natural world. This was an immense leap forward in human thought, expanding the realm of things we could “know” exponentially. Our world today would be unimaginable without it — it set the stage for the Enlightenment and the natural rights we now enjoy, and underpinned the largest creation and widest distribution of wealth in human history by speeding the pace of, and access to, technological innovation.
The Numeracy Problem
But while deductive reasoning at least seemed innate to humans (more on that below), it would be hard to call this new kind of sense common. In fact, it’s completely counter-intuitive, which is why it took thousands of years of human thought to develop. We had to create two branches of mathematics, and learn how to use them — and numeracy is a big hurdle for any mode of thought aspiring to common use. To illustrate just how uncommon this sort of thinking is, consider a few of the many, many studies that illustrate our essential innumeracy:
- On the Lipkus Numeracy Scale, an 11-question test of numeracy, respondents routinely do dramatically worse on questions that involve simple non-integers represented as decimals — even when those respondents are highly educated.
- Data from the U.S. National Assessment of Educational Progress found that mathematical literacy among 17-year-old students has barely budged in the last 25 years, and remains in the lower half of the basic range. (Here are some sample questions in case you think the test might be too hard.)
- A recent study by the Federal Reserve Bank of Atlanta found that financial literacy is highly correlated with basic numeracy, and that “limited numerical ability played a non-trivial role in the sub-prime mortgage crisis”. The short quiz they gave the survey respondents tests little more than basic arithmetic.
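The arithmetic such quizzes test really is basic. As a sketch in the same spirit (hypothetical figures, not the Fed’s actual quiz items), consider the difference between simple and compound interest — a distinction that trips up a surprising share of survey respondents:

```python
def simple_interest(principal, rate, years):
    """Balance when interest is earned on the principal only."""
    return principal * (1 + rate * years)

def compound_interest(principal, rate, years):
    """Balance when interest is earned on principal plus accumulated interest."""
    return principal * (1 + rate) ** years

# Hypothetical figures: $100 saved at 2% per year for 5 years.
print(f"simple:   ${simple_interest(100, 0.02, 5):.2f}")    # $110.00
print(f"compound: ${compound_interest(100, 0.02, 5):.2f}")  # $110.41
```

Nothing here is beyond grade-school math, yet the study suggests that many borrowers could not reliably work it out — with mortgage-sized consequences.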
So nearly 400 years after the Age of Reason began, humans still can’t speak the language of science, at least without significant training. And that training requirement has only grown in the ensuing centuries, for two reasons. First, science itself has continued to become more complex (and thus rarefied) as the field of statistics has developed over the last century or so. Today, hardly a study is conducted without the aid of a computer stats package. And second, a “third wave” of human thought has since begun.
Wave Goodbye to Reason?
By the late 19th century, Charles Sanders Peirce — one of the main codifiers of the scientific method and an early statistics pioneer — was already calling attention to the impact of biases in scientific analysis, and recommending randomization and control as important tools in their elimination. Peirce was concerned mostly with statistical biases (e.g. sampling bias), but a century later we would begin to realize that the biases forming the biggest obstacles to science were not mathematical, but cognitive.
What I’ll call the third wave of human thought — after deduction and induction — is the understanding of human cognitive processes pioneered in the 1970s and codified by Kahneman, Tversky and Slovic (1982) in Judgment under Uncertainty: Heuristics and Biases. This work has had a major impact on a wide range of fields: Kahneman, a psychologist, won the Nobel prize in economics, and cognitive biases are now routinely taught in science courses.
The main reason it’s so important is that it categorically refutes the notion, upon which the entire prior history of human thought is based, that the bulk of human thought is rational. To be sure, earlier thinkers recognized that humans were subject to irrational thought, even deliberate manipulation — witness Socrates against the Sophists — but until the recent advances in cognitive psychology, there was a belief that rational thought (particularly deductive reasoning) was our native mode. We now know this to be patently untrue.
Thirty-plus years of research has shown that humans have two cognitive processes — one rational and one heuristic-based — that essentially debate decisions before a final judgment is made. The problem is, the rational part takes a lot of coffee breaks, and lets the heuristic part run unchecked unless the decision is deemed particularly important and time allows for careful analysis. Even then, heuristic judgments are viewed as important inputs to the rational process, meaning that not only are most of our day-to-day decisions intuitive, but even the rational ones are subject to influence by the heuristic process as well.
The heuristic process uses circuitry in the oldest part of our brain, inherited from earlier species, and it has certainly served us well. In most cases, and under most conditions, it allows us to speed up our decision making dramatically by substituting easier questions for harder ones. Take the availability heuristic I’ve mentioned before. If I want to know whether the water in a well is safe to drink, I’ll try to recall instances where people drank from it and got sick. I’m answering a different question, but it’s a reasonable way to get a quick, approximate answer if I’m thirsty.
The problem is not that this method is an approximation and thus sometimes wrong. It’s that it’s subject to systematic biases which can make us consistently wrong in certain circumstances. Even when the rational brain does step in, these biases can be very hard to overcome. Some of the early research on conjunction fallacies and the representativeness heuristic was done with social science graduate students — and even stats and psych professors — who still made elementary errors because of the pull of the heuristic. That particular heuristic has also been associated with the base rate fallacy, regression fallacy, and gambler’s fallacy.
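The base rate fallacy in particular yields to a little arithmetic. A minimal sketch of Bayes’ theorem — with hypothetical numbers for prevalence and test accuracy — shows why the representativeness heuristic misleads us here: a positive result from an accurate test is still, most often, a false alarm when the condition is rare:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), via Bayes' theorem."""
    true_pos = prior * sensitivity                      # has it, tests positive
    false_pos = (1 - prior) * false_positive_rate       # lacks it, tests positive
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 1% base rate, 99% sensitivity, 5% false positives.
p = posterior(0.01, 0.99, 0.05)
print(f"P(condition | positive) = {p:.1%}")  # ≈ 16.7%, far below the intuitive ~99%
```

The heuristic judges by how representative the positive result looks and ignores the 1% base rate; the arithmetic restores it.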
We think of ourselves as rational creatures, yet study after study shows us relying on heuristics for the bulk of our decision making, exposing us to the myriad cognitive biases they engender. This has been hard-wired by evolutionary forces that have disproportionately rewarded quick decision making (“Rhino Charging: Run!”) over careful thought. If we’re honestly assessing what’s most common in human thought, it’s not using our sense at all.
Toward an Uncommon Sense
So for those of us aspiring to think and reason soundly, what’s really required? Here’s my list:
- The strength of will not to be mentally lazy — to overcome our default heuristic-based thought process and use our rational processes more of the time.
- Enough of a grounding in deductive reasoning to engender careful thought and the avoidance of logical fallacies.
- Solid numeracy skills, with a focus on probability and statistics.
- An understanding of the scientific method — the way in which knowledge is proposed, tested and affirmed — and its limitations.
- An awareness of the myriad common cognitive biases identified in the research, the ability to spot them in one’s own thinking, and the discipline to practice debiasing.
Have I missed anything? I’d be appreciative of any additions in the comments.