It was a tragic death, sweetened only by an act of parental love and the knowledge that it could have been worse. On January 12th, Marites Mendoza was struck and killed by an 83-year-old driver who ran a red light; she pushed her stroller out of the way just in time to save her 12-week-old baby boy. The accident happened not far from where I live in Toronto’s west end, and the news quickly swept the city.
It’s the type of event that sticks with us, and I’m certain we’d all remember it for quite a while regardless of what happened next. But almost at once, the story was caught up in a broader narrative that seemed to consume the city for the next several weeks. You see, Ms. Mendoza was not the only pedestrian to die on the streets that day – a 17-year-old student and an 80-year-old man were also struck and killed in separate accidents, bringing the GTA’s pedestrian death toll to 5 since the start of the year. And they kept coming: a 24-year-old woman in Brampton on the 14th; a 54-year-old man in Mississauga on the 17th; two more on the 18th, including a 60-year-old woman struck by a transit bus. By the end of January, 14 people of all ages were dead across the GTA – three times the monthly average over the past two years. And everyone wanted to know just why our streets had become so much more dangerous.
Which is just the question we’d expect them to ask. As I discussed in a past article, humans have been hard-wired by evolution to home in on certain types of data relating to risks in their environment, and to use this information to make quick decisions about those risks. One of the most important inputs we use is the perceived frequency of the event: the availability heuristic states that our perception of the risk of an event increases based on how easily we can recall an instance of it occurring, and we clearly had a mountain of data points on this one. A second input is the emotional impact of an event: the affect heuristic states that how deeply we feel about an event correlates with our perception of its risk.
Vivid, tragic stories like that of Ms. Mendoza are brimming with emotional impact, and when set against the broader narrative of increased frequency of fatalities, the affect heuristic essentially compounds the availability heuristic, causing our hard-wired risk bells to start ringing. Of course, humans always have the option to ignore those bells – they’re rung by the oldest and most primordial part of our brain (the amygdala, or “lizard brain”), and the gray matter that’s been grafted on top of it over the last few hundred thousand years does sometimes step in with a bit of countervailing rationality. Which seems to have been at play here, although in a way that illustrates just how faulty that mechanism can be.
It seemed to work something like this: the rational brain stepped in and said, “pedestrian deaths don’t just triple in frequency on their own,” which is true – they’re the sort of background risk that’s generally pretty stable. But then the deaths kept coming and those bells kept ringing, so rational brain eventually conceded lizard brain’s point: “…so there must be a cause” and eventually concluded, “…and we must find it, so we can avoid it and keep ourselves safe.”
And boy did we find causes. In the days that followed, journalists mooted a dizzying array of them, including jaywalking, hasty crosswalk signals, and that paragon of dangerous technologies, the cellphone (apparently in its free time between causing brain tumors, igniting gas stations, and bringing down commercial aircraft). One particularly toothsome article by Toronto Star Urban Issues columnist Christopher Hume blames (1) automobile dependence; (2) bad signage; (3) poor crosswalk design; (4) streets that are too wide; (5) high speed limits and low penalties for violators; (6) cars designed for drivers not pedestrians; and (7) “a culture that quietly condones putting pedestrians’ lives at risk”. Quite the perfect storm.
Yet none of these really rang true – we were grasping at straws because our lizard brain told us there was something wrong and our rational brain tried to appease it. The same cognitive dissonance ran through the official response as well: the police said they could find no pattern in the deaths, yet still ran a so-called safety blitz, handing out tickets for jaywalking and failing to stop for streetcar doors.
But if the police said it was nothing and then did something, the politicians took the opposite route. Councillor Bill Saundercook called for “reduced speed limits and heightened awareness on the part of both pedestrians and drivers that they must share the road,” and deputy mayor Joe Pantalone called for a “zero-tolerance policy for Torontonians” that seemed to require people to talk to their loved ones about safety and, if they don’t comply, “give them hell”. Solid policy responses, all.
And then, as quickly as it came, it disappeared. The news coverage all but stopped in February, and by March no one was even keeping count of the deaths anymore. No more fulminating politicians, no more safety blitzes. It’s as if it never happened. Which I guess is OK, because mathematically speaking, it never did.
As I said earlier, rational brain got it partly right – “if there’s no reason for the deaths to increase, then they won’t increase”. Rational brain also correctly (from a logic standpoint) made the inference that “If they do increase, there must be a reason for the increase.” It was obvious to rational brain that deaths had increased, and so it started looking for a reason. The logic was right, but as we’ll see, the premise was wrong.
That’s because there hadn’t, in fact, been a real increase. What we saw was merely the type of cluster that naturally occurs in a random distribution. But unfortunately, the rational brain doesn’t grok randomness particularly well without being trained to do so.
Yet the concept isn’t really hard to understand. Let’s say you toss a coin a million times. Basic probability theory suggests that you’ll see about 500,000 heads and 500,000 tails. What it doesn’t say is that the pattern of your results will be H-T-H-T-H-T-H-T-H-T-H-T-H-T-H-T. It should be obvious that in a string of a million results, there will be sections that look like this: H-T-H-H-T-H-T-T-T-T. Those last 4 T’s are a cluster, and clusters like that happen all the time in a random distribution – a phenomenon known as Poisson clumping. (You can even try it out for yourself). The problem comes when people look at the data with too narrow a lens, and all they see is the clump. For example, if you were only looking at the last 5 results in that sequence, you’d surely think the coin must be weighted toward Tails. And that’s what happened in Toronto.
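If you’d rather not flip a coin a million times by hand, a few lines of code will do it for you. Here’s a quick sketch in Python (the exact count will vary from run to run, which is rather the point):

```python
import random

random.seed(42)  # fix the seed so the run is reproducible

# One million fair coin tosses.
tosses = [random.choice("HT") for _ in range(1_000_000)]

# Count the maximal runs of four or more identical results -- the "clumps".
clumps = 0
run_length = 1
for prev, cur in zip(tosses, tosses[1:]):
    if cur == prev:
        run_length += 1
    else:
        if run_length >= 4:
            clumps += 1
        run_length = 1
if run_length >= 4:  # don't forget the final run
    clumps += 1

print(clumps)  # tens of thousands of clumps, from a perfectly fair coin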
Saying that Toronto averages 56 pedestrian deaths a year is not the same thing as saying there are about 4.7 deaths each and every month – a month is too narrow a lens. Poisson clumping is to be expected, and we saw a big clump in January. Unfortunately, for a while that clump was all we saw.
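You can even put rough numbers on how often a clump like this should turn up. As a back-of-the-envelope sketch, suppose deaths arrive independently at that long-run Toronto rate of 56 per year – a Poisson process with a mean of about 4.67 per month – and ask how often a single month would reach at least double the average:

```python
from math import exp, factorial

rate = 56 / 12  # mean deaths per month, about 4.67

def poisson_pmf(k, lam):
    """Probability of exactly k events when the mean is lam."""
    return exp(-lam) * lam**k / factorial(k)

# Probability that a month sees 10 or more deaths (double the average):
p_double = 1 - sum(poisson_pmf(k, rate) for k in range(10))
print(round(p_double, 3))  # about 0.02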
But then something interesting happened: no deaths were reported in February. (Curiously, no news reports heralded the “drop”). Four more were reported in March, but by then the papers had stopped keeping a running count (I did my own tally from the individual reports), and only one death was reported in April. Four months into the year, 19 pedestrian deaths had been reported, putting us right back on the average.
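Taking those figures at face value (the 56-per-year average is for Toronto proper, while the running count covered the GTA, so this is a rough comparison), the year-to-date rate really had converged back to the long-run average:

```python
# Figures from the article, compared as per-month rates.
long_run = 56 / 12   # long-run average deaths per month, ~4.67
observed = 19 / 4    # deaths per month over January through April

print(round(long_run, 2), round(observed, 2))  # 4.67 4.75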
Which is how the streets of Toronto were made safer by math.