Making Sense of Police Data: An Interview with David Johnson, PhD
Updated On: Sep 13, 2019

David Johnson is a psychologist and postdoctoral researcher at the University of Maryland. Using controlled laboratory experiments and computational models, his recent work has challenged prevailing assumptions about the racial dynamics of officer-involved shootings. The DC Police Union reached out to Dr. Johnson to discuss his work and policing research more generally.  


FOP: Dr. Johnson, thank you for meeting with us to share your perspective and discuss your research on police officers.

DJ: Glad to be here.

FOP: How did you first become interested in studying law enforcement?

DJ: As a researcher, I’ve always been drawn to areas of psychology that grapple with real-world issues.

FOP: An important topic receiving a great deal of attention lately is bias in police practices. Obviously, police have a responsibility to not discriminate against particular groups. Your work has addressed the tools we use to detect discrimination in police practices, namely the concept of “statistical benchmarks.” What is a statistical benchmark?

DJ: A benchmark is a measuring stick we use to make sense of an outcome we care about. For example, let’s say we want to know if there is racial disparity in treatment for a certain type of cancer. From medical records, we know that Black Americans make up 13% of those receiving treatment. Are Black Americans less likely to be treated for this cancer? To answer this question we need a relevant benchmark. One benchmark we could use is the overall percentage of the US population that is Black, which is 13%. According to that benchmark, there wouldn’t be racial disparity in cancer treatment.

However, the issue with this conclusion is that the relevant benchmark is not the overall population, but who actually needs treatment. If Black Americans made up 39% of those who actually have this type of cancer, we would have to conclude there is disparity in who is treated. According to this more appropriate benchmark, Black Americans receive treatment at only one-third the rate we would expect (13% vs. 39%).
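The benchmark logic above reduces to a simple ratio: the group's observed share of an outcome divided by its share under the chosen benchmark. As a back-of-the-envelope sketch (using the interview's hypothetical 13% and 39% figures; the function name is ours, for illustration):

```python
def disparity_ratio(observed_share, benchmark_share):
    """Observed share of an outcome divided by the benchmark share.
    1.0 means no disparity; below 1.0 means the group is
    under-represented relative to the benchmark."""
    return observed_share / benchmark_share

# Share of treated patients who are Black (hypothetical from the interview)
treated = 0.13

# Benchmark 1: share of the overall US population that is Black
print(disparity_ratio(treated, 0.13))  # 1.0 -> no apparent disparity

# Benchmark 2: share of those who actually have the disease
print(disparity_ratio(treated, 0.39))  # ~0.33 -> one-third the expected rate
```

The arithmetic is trivial; the substance of the argument is entirely in which `benchmark_share` you plug in.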

FOP: You’ve anticipated my next question, which is this: The benchmark most often used for assessing racial disparity in officer-involved shootings is the percentage of the population made up by a particular race. If 40% of people shot by police are Black, but only 13% of the overall population is Black, there’s a discrepancy between these percentages, and this discrepancy has led many to conclude that police are biased in their decisions to shoot. You’ve challenged the population benchmark and proposed an alternative. Please explain your reasoning.

DJ: It is critical to choose an appropriate benchmark when testing for racial disparities in officer-involved shootings. Using population comparisons as a benchmark for evidence of officer racial bias assumes that people of all races are equally involved in situations where officers are likely to use deadly force. Crime data support an uncomfortable conclusion: per capita, Black Americans commit more violent crime than other racial groups. This matters because the vast majority of officer-involved shootings take place in such situations. I want to be very clear: just because crime rates vary across race does not mean race causes those differences. There is considerable evidence that higher per capita rates of violent crime among Black Americans are tied to racial disparities in other areas, such as income and employment.

Judging racial disparities in officer-involved shootings based on population rates (rather than crime rates) is akin to judging racial disparities in cancer rates by comparing those who receive treatment to the general population, not to those who actually have the disease. Given that almost all officer-involved shootings occur in situations where violent crime is being committed or is suspected of being committed, a more appropriate benchmark is the degree to which people from different racial groups commit violent crime.

As a side note, this suggests that the problem of fatal shootings of Black Americans is larger than the issue of whether or not there is racial bias amongst police. To substantially reduce the number of Black Americans shot by police we will have to address the larger societal issues that lead to racial differences in violent crime rates. 

FOP: Absolutely. Because it’s our mission to reduce violence in the communities we serve, it would be great to see more attention devoted to the underlying causes of that violence. For better and for worse, though, law enforcement is a galvanizing subject. Here in DC, MPD has recently released data it has collected on both arrests and stops (temporary detainments of drivers or pedestrians for investigatory purposes). Advocacy groups and news media have been quick to compare demographic differences in these data to population demographics, concluding that those stopped and arrested by MPD are disproportionately Black. These analyses have received a lot of press, but they all rely on comparisons to the population benchmark. You’ve analyzed shootings, not stops or arrests. Are crime rates a better benchmark for analyzing stops and arrests as well?

DJ: Crime rates are a better benchmark for analyzing stops and arrests than population proportions, because the relevant benchmark for understanding stops and arrests is not how many people belong to a particular racial group, but how many people from a racial group actually are committing crimes. This also means that the type of crime used as a benchmark matters. For example, if the outcome of interest is violent crime arrests, then violent crime rates are an appropriate benchmark. In contrast, violent crime rates would be a poor benchmark for traffic stops. There, a better benchmark would be driving violation rates. 

What makes this issue difficult is that we often don’t know what actual crime rates are. Many crimes are not reported, and we often don’t know who committed them. We can get rough estimates from data generated by law enforcement (e.g., calls for service), but these data are often not made publicly available, and when they are, they are typically released only as summaries that do not allow breaking the data down by type of crime. This lack of data is one of the most pressing obstacles facing researchers and law enforcement agencies in understanding racial disparities, and I am a strong proponent of departments making such data publicly available.

FOP: I can imagine that organizations critical of the police might argue that the crime rate benchmark is circular. If police are more suspicious of Black people (a racial bias), they’ll catch more Black offenders than White offenders, inflating the stats on Black offending. And if the data on offending is racially biased, then it’s not a good benchmark. How would you respond to this?

DJ: This is a really important point. If rates of offending are skewed because of biased policies or policing, they will be poor benchmarks and could obscure evidence of racial disparities. I think many lay people assume this is true to some degree. I address this in my work by comparing rates of fatal officer-involved shooting by race among a number of different benchmarks for police exposure, some of which were generated from police data (offender rates) and others from outside sources (victim reports). Regardless of whether these benchmarks were generated from police data or victim reports, I did not find evidence of anti-Black racial disparity when taking into account violent crime rates. If offending rates were skewed, I would have expected to see no anti-Black bias when using police data but anti-Black bias when using victim reports.

However, one point where I agree with these organizations is that we need much better information about when police use (and don’t use) force against civilians, as well as the circumstances surrounding those encounters. This will give us a much more accurate estimate of how much exposure people from different racial groups have to police to use as a benchmark for understanding racial disparities.

FOP: The ACLU recently issued a report on MPD arrest data, comparing Black population rates to Black arrest rates districtwide, but they also compared these rates at the neighborhood level. Does this solve the issues with using population percentage as a benchmark?

DJ: Both approaches suffer from the same issue because they use population percentages as the relevant benchmark. To address the underlying issue, these organizations would need to have data about crime rates on a districtwide or neighborhood level. The issue is that such data is often not recorded or made available. This makes it difficult for researchers to test these questions rigorously, and leads to a lack of transparency and public trust.

FOP: Changing directions slightly, another of your studies used a shoot/don’t shoot simulation (similar to the MILO scenarios we use in training) that compared the performance of police to non-police participants in a controlled laboratory setting. This has been done before, but you added a feature that our members will immediately recognize as critical: look-out information from dispatchers. What did you find in this study?

DJ: In this study we focused on the rate at which officers and civilians mistakenly shot Black and White individuals who were unarmed. Not only did officers make fewer mistakes than untrained civilians, they also showed less racial bias in their decisions. As you mentioned, we also manipulated whether officers received dispatch information about the suspect. Giving demographic information about the suspect reduced racial bias in the decision to shoot even further. This suggests that lab-based studies of officer shooting decisions need to better account for features of the environment, such as dispatch information, that officers routinely have. If we don’t take those features into account, the conclusions we draw about performance are unlikely to generalize to real-world policing.

FOP: What are some other recent developments in police research that have been exciting or impactful?

DJ: One recent study that was particularly informative and rigorous came from the DC area. Recently, The Lab@DC worked with the MPD to conduct a randomized controlled trial testing whether body-worn cameras (BWCs) reduced MPD use of force and civilian complaints. They found BWCs did not have a substantial impact on officer behavior.

This suggests that people should not treat BWCs as a “silver bullet” that will result in sweeping improvements to policing. However, it also does not mean that BWCs don’t have other important uses, such as documenting police-civilian encounters and increasing public trust in law enforcement. Further randomized controlled trials could evaluate the effectiveness of BWCs on these outcomes, which departments can then use when judging whether they should invest resources into such equipment.

FOP: Essentially that study tested the belief that officers were using force gratuitously, and that this excessive force could be reduced by putting cameras on officers to hold them accountable. There were a lot of people who believed that, but as you’ve pointed out, the study found no reduction in force when officers were equipped with cameras. Most of our officers would say that’s because the presumption that we were using force gratuitously was wrong from the start. I think that speaks to the value of research like yours or like this study from the Lab@DC to, among other things, dispel common but incorrect assumptions.

Circling back to a point you made earlier about the need for better data and more research, your background is in Experimental Psychology. How can this approach uniquely contribute to research and advance policing?

DJ: Psychology, and experimental research in general, is useful to policing because it relies on evidence to directly test questions law enforcement officers and policy-makers are interested in. These tools can test ideas about what policies and practices help make policing more fair, efficient, and safe for officers. This approach is especially powerful when testing ideas generated from experts—law enforcement officers. For example, I’m currently working on a study with a large police department that tests how existing officer training programs and on-the-job experience impact police use of force.

FOP: That's a great topic. We look forward to learning how it turns out. Thank you for your work and for taking the time to discuss it with us.

Choosing the Proper Benchmark...

Does the Justice System discriminate against particular groups? This is a question of vital importance to all of us. But how would we begin to answer this question? This is where statistical benchmarks are useful.

To illustrate the point, imagine that we wanted to know if the police discriminate on the basis of gender by disproportionately arresting men for violent crimes. The first piece of information we would need would be the percentage of violent crime arrests made up by each gender (% of violent crime arrests)*. Then we would need a benchmark to compare these percentages against. One possibility would be the percentage of the overall population made up by each gender (% of population). That comparison is shown in the graph below.

*data from FBI Uniform Crime Report 2014

Despite a 1:1 ratio in the overall population, the ratio of men to women among those arrested for violent crimes is 4:1. A discrepancy so large might lead us to conclude that police do discriminate on the basis of gender. 

However, we might reasonably follow up by asking if men and women commit violent crimes at the same rate. It's possible that the police aren't biased at all, but the difference in arrests simply reflects the fact that men commit more violent crimes than women. In this case, we would need a different benchmark.

The National Crime Victimization Survey (NCVS) compiles crime statistics, including the gender of the suspected offenders. If men commit more violent crimes than women, the NCVS data would show a higher percentage of men than women in victim reports (% violent crime suspects). Comparing % violent crime arrests to this new benchmark will inform the question of whether more men are arrested due to police discrimination or due to higher rates of offending. This comparison is shown in the graph below*.

*data are from the NCVS 2015 report 

Contrary to the first comparison, when the arrest data are benchmarked against the suspect data, it appears clear that the police do not discriminate against men. Rather, the gender difference in violent crime arrests reflects the larger number of violent crimes committed by men than by women.
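The two comparisons in this sidebar can be sketched numerically. The shares below are illustrative round numbers consistent with the 4:1 arrest ratio described above, not exact FBI or NCVS figures, and the `disparity` helper is ours:

```python
# Illustrative shares (men, women); 4:1 arrest ratio as described above
arrests    = {"men": 0.80, "women": 0.20}   # % of violent crime arrests
population = {"men": 0.50, "women": 0.50}   # % of overall population
suspects   = {"men": 0.80, "women": 0.20}   # % of suspects in victim reports

def disparity(observed, benchmark):
    """Observed share divided by benchmark share, per group.
    A value near 1.0 means the outcome tracks the benchmark."""
    return {g: observed[g] / benchmark[g] for g in observed}

print(disparity(arrests, population))  # men over-represented vs. population
print(disparity(arrests, suspects))    # ~1.0 for both: arrests track offending
```

Against the population benchmark men look substantially over-represented; against the suspect benchmark the ratios sit near 1.0 for both groups, which is the sidebar's point in miniature.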

 


Choosing the proper benchmark is important because it allows us to more accurately diagnose the nature of the problem. In the example above, using only the population benchmark, some might have concluded that the arrest data reveal discrimination against men. Concluding this, they might have lobbied for investigations into police practices, for the establishment of watchdog groups to identify discriminatory officers or units, or even for the revisitation and retrial of past cases. Using the suspect benchmark, however, it appears that a better use of resources would be to identify the underlying reasons why men commit more violent crime than women.


Find links below to read more of Dr. Johnson's work:

The Conversation: A New Look at Racial Disparities in Police Use of Deadly Force

The Conversation: Database of Police Officers who Shoot Citizens Reveals Who Shot Citizens


Contact Info
DC Police Union
1524 Pennsylvania Ave SE
Washington, DC 20003



