During elections my ears perk up when law and order issues come up. Are politicians doing business differently this time around? Do they show that they understand what they’re talking about? The claims thrown around in these discussions deserve careful scrutiny, because the data they depend on are often ambiguous or misleading. Surprise! The evidence behind claims about crime can be incomplete, misunderstood and misused. Welcome to the nuance-filled world of criminal statistics.
Here, I will briefly outline some basics for understanding and using criminal statistics. I hope to provide a few nuggets for critically evaluating claims that use, or depend on, crime data. I won’t be addressing any specific claims, just outlining the characteristics and limitations of the most commonly cited criminal statistics. Treat a criminal statistic as if you were carefully peeling an onion, one layer at a time. Take a moment to think about why this is necessary: a given criminal statistic likely doesn’t tell you what it appears to tell you at first glance.
Whole Numbers versus Rates
I hope that this one is intuitive for most people, but it is nonetheless an important starting point. Occurrences of crime are properly expressed as a rate: the number of incidents per 100,000 people. Total counts are not informative on their own, and it is very easy to manipulate an argument by cherry-picking between a total count and a rate. Beware of claims about crime that use raw incident numbers. A change in the raw count may have no bearing on the level of crime at all, because raw counts rise and fall with the population.
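The arithmetic behind this point can be sketched in a few lines. The numbers below are invented purely for illustration; they are not real crime figures.

```python
# Hypothetical numbers for illustration only -- not real crime data.
def rate_per_100k(incidents: int, population: int) -> float:
    """Express a crime count as a rate: incidents per 100,000 people."""
    return incidents / population * 100_000

# A raw count can rise while the rate stays flat, purely because
# the population grew: 5,000 -> 5,500 incidents looks like a 10%
# increase, but the rate is identical in both years.
year_1 = rate_per_100k(5_000, 1_000_000)   # 500.0 per 100,000
year_2 = rate_per_100k(5_500, 1_100_000)   # 500.0 per 100,000
print(year_1, year_2)
```

Someone cherry-picking the raw counts here could claim a 10% jump in crime; the rate shows no change at all.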
Not every criminal statistic is equally reliable. Even though we have measures of the incidence of crimes across types and subtypes, not every one of these statistics samples the actual incidence of those crimes in the same way. Indeed, very few measure total incidence reliably at all. The crime rates you are most likely to encounter capture only crimes known to and substantiated by police. These numbers are vulnerable to variation in how crimes become known to and verified by police in the first place. Crimes very often go unreported or undiscovered. Some crimes are more likely to go unreported than others (such as sexual assaults and drug possession), and some are more difficult than others to substantiate as having occurred.
Complicating matters further is the fact that these reporting patterns vary over time and are reflected in observed trends. So, when a change in the police reported crime rate is observed from year to year or across a span of time we may be observing a “real” change, we may be observing a change in how these crimes come to the attention of police, or we may be seeing a mixture of both.
Generally, the most reliable criminal statistic is the homicide rate – it’s very difficult, though not impossible, to miss a dead body. In fact, homicides in Canada are counted in the year that they become known to police and not in the year that they occurred. Our most reliable number is very, very close, but not infallible.
Other Measures of Crime
Crimes known to the police nearly always undercount the true incidence of crime, so other measures are needed to round out our understanding. Police-reported crime is submitted every year to Statistics Canada as part of the Uniform Crime Reporting Survey. This is a very rich data set that measures police data very accurately but tells us nothing about unreported crime.
We do have some data on unreported crime available. Victims are interviewed (after self-identifying) via the General Social Survey. The survey is conducted every five years with the last data set being released in 2010 and querying about victimization in the previous 12 months during the interview period (February 2009-November 2009). This measure captures information in eight crime categories both reported, and not reported to police.
It has its own set of interpretation problems and pathways to misuse. The survey relies on self-reporting, so the accuracy of the information is open to errors from faulty memories, willingness to report, recording errors and so on. The survey doesn’t capture young people under 15, those in institutions, or those without phones. In recent years many people have moved to having only cell phones, and households with no land line aren’t sampled by the survey methodology. Whereas the survey has always missed the roughly 1% of potential participants with no phone of any kind, it now also misses the 8% of people who have only cell phones (the GSS’s own estimate). The survey does not collect specific incident data for sexual and physical assaults when the offender is a spouse or partner. Also note that data from the Northwest Territories, the Yukon and Nunavut are collected but reported separately.
From the last data set available, self-identified victims did not report 69% of violent victimizations (sexual assault, robbery and physical assault), 62% of household victimizations (break and enter, motor vehicle/parts theft, household property theft and vandalism), and 71% of personal property theft victimizations.
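These non-reporting shares imply a much larger "dark figure" behind any police-known count. As a rough sketch only: survey-based shares and police counts come from different instruments with different scopes, so this back-of-the-envelope scaling should not be read as a real estimate of total crime.

```python
# Rough illustration of the "dark figure" implied by a non-reporting
# share like the GSS's 69% for violent victimizations. The police-known
# count here is made up; this is not a real estimation method.
def implied_total(police_known: int, unreported_share: float) -> float:
    """Scale a police-known count up by the share victims say went unreported."""
    return police_known / (1 - unreported_share)

# If 69% of incidents go unreported, every 100 incidents known to
# police imply roughly 323 incidents in total.
print(round(implied_total(100, 0.69)))
```

The point is only the magnitude: at these non-reporting rates, police figures would capture less than a third of what victims describe.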
I note this information because while people generally understand that crimes go unreported and unknown to police, they tend to be surprised and perhaps even shocked at how much actually goes unreported. These numbers sound scary. However, the most common reasons victims of violent and household crime gave for not reporting were: believing the incident was not important enough (68%), believing the police couldn’t do anything about the incident (59%), and stating that the incident was dealt with in another way (42%). Also note that the survey indicated that 82% of violent incidents did not result in injuries to the victims. Do claims that we should do something about all this hidden crime make sense in light of what this crime looks like in the limited way we can understand it? How could you be reasonably certain that whatever intervention is proposed would in fact reduce the actual amount of crime and not just reduce the amount that goes unreported?
Data is collected at all levels of the crime continuum with differing levels of accuracy and applicability. This is nicely reflected in the concept of “the crime funnel”. All criminal incidents that are ever committed sit at the opening of the funnel. There is “loss” all along the way down, so that at the bottom only a small fraction of incidents become known, have charges laid, are prosecuted successfully and are responded to by the justice system. What goes into the top levels of the funnel affects what we can know at any point further down.
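The funnel idea can be sketched as compounding attrition. Every retention rate below is invented for illustration; the real fractions vary by crime type and jurisdiction and are exactly what the statistics above struggle to pin down.

```python
# Toy sketch of the "crime funnel": each stage retains only a fraction
# of the incidents that reached the previous stage. The retention
# fractions are hypothetical, not measured values.
stages = [
    ("committed",          1.00),
    ("reported to police", 0.40),
    ("substantiated",      0.75),
    ("charges laid",       0.50),
    ("convicted",          0.60),
]

remaining = 100_000  # hypothetical incidents at the funnel's mouth
for name, retained in stages:
    remaining = int(remaining * retained)
    print(f"{name}: {remaining}")
```

Even with these generous made-up fractions, fewer than one incident in ten reaches the bottom of the funnel, which is why statistics drawn from the lower stages describe only a filtered sample of crime.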
In short, understanding crime is hard. And we haven’t even scratched the surface.
There are plenty of competing claims about crime, the effectiveness of what has been done about it and what should be done about it in the future. To consider these claims, step one is to consider the data that give weight to the arguments. When it comes to crime, though, this data is often ambiguous. Even when the data seems relevant, closer inspection can show it to be of little use. Yet it’s the best we have to work with. The challenge is to sufficiently understand the nuances and limitations.
Understanding crime as rates, with reliability that varies by data source, is not meant to sanitize the discussion. It is always important to remember that the numbers reflect victims and tangible harm of all types. There were 610 homicides recorded in 2009. There were 610 victims, with families, friends, responsibilities, things you might like about them and things you might not. Quantifying crime and the harm that results from criminal acts is very, very difficult. However, if we’re interested in reducing this harm we should be prepared to understand the problem as correctly as we can. After all, decisions are being made and people want your vote to make those decisions happen. Challenge your candidates not just to use data to inform their criminal justice policies, but also to demonstrate that they understand the limits of that data.