The Perils of Predictive Policing

In the United States, police departments are using artificial intelligence to try to predict and prevent future crime. Proponents believe these algorithms can improve law enforcement. The evidence, however, indicates that the costs of predictive policing far outweigh any potential benefits.


As technology develops, life gets easier. Everything from streaming recommendations to ads on social media is driven by algorithms, which continually improve as they learn more about us. Policing has followed the same path: artificial intelligence now promises to help police target their efforts. Predictive policing is the use of algorithms to predict and prevent future crime. It can be place-based, using preexisting crime data to identify times and places with a higher risk of crime, or person-based, attempting to identify people who are more likely to be offenders or victims. Either way, the predictions are informed by data generated through traditional policing, so any problems with traditional policing carry over, as the sketch below makes concrete.
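To make the place-based version concrete, here is a deliberately minimal sketch in Python. The grid cells, dates, and the top-k counting rule are all hypothetical simplifications; real deployed systems use far more elaborate models. The circularity is already visible at this scale: the "crime" history being counted is really a record of where police logged incidents.

```python
from collections import Counter

# Hypothetical incident log of (grid cell, date) pairs drawn from past
# police records. In a real deployment this is policing data, not a
# neutral measure of crime.
historical_incidents = [
    ("cell_3", "2023-01-04"), ("cell_3", "2023-01-09"),
    ("cell_7", "2023-01-05"), ("cell_3", "2023-01-11"),
    ("cell_1", "2023-01-12"), ("cell_7", "2023-01-15"),
]

def predict_hotspots(incidents, k=2):
    """Flag the k cells with the most recorded incidents as 'hotspots'."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(k)]

print(predict_hotspots(historical_incidents))  # ['cell_3', 'cell_7']
```

Whatever model sits on top, the output can only redirect patrols toward places already heavily represented in the records.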

The application of policing in the United States has been unequal and unjust. Black Americans are about five times more likely than white Americans to report being unfairly stopped by police because of their race or ethnicity. They are also more likely to be stopped by police in general, to be incarcerated in local jails, and to be confined in the juvenile justice system. More broadly, Americans may not trust police: most crimes are not reported, and those that are reported often go unsolved. Predictive policing algorithms can perpetuate these underlying issues with policing in the US, issues that have gained prominence through recent instances of police brutality and the protests that followed, such as the killings of George Floyd and Breonna Taylor. These issues are not new, and they fuel other inequalities in our society, like mass incarceration, creating a vicious cycle. Technology can make police more powerful and efficient, which in an unjust system means more injustice can occur without paying more police salaries.


What Are Crime Data?

Algorithms are trained on data, and any data we have on crime are really policing data, such as 911 calls or reports of incidents officers saw while patrolling. These data depend on the neighborhoods and crimes that police prioritize. Crime data can also be dirty: corrupt, intentionally manipulated, or distorted by societal biases. Recent examples include corrupt police in Baltimore and New York manipulating data to make it look like the crime rate was decreasing, and we may not know the full extent of the problems with policing data beyond these two egregious cases.

Police departments make it difficult to assess the bias in predictive policing. It often takes years of pressuring law enforcement to get even incomplete information about these algorithms. The New York Police Department does not keep a log of its predictions and has not provided documentation of the algorithm it uses, making it difficult to gauge the algorithm's impact. Some departments, such as Chicago's, even train on arrest records that never led to a conviction. This lack of transparency about how the algorithms make predictions makes their continued use irresponsible and unethical. Governments at the federal, state, and local levels can mandate police transparency to ensure policing is done responsibly.


Why Not Solve More Crime?

Proponents of predictive policing might ask: why not solve more crime? The idea behind predictive policing is efficiency, police using fewer resources while preventing more crime. Police departments adopted more automated tools in recent years in response to budget cuts and a belief that algorithms are more objective than humans. Yet many departments have never evaluated the accuracy of their predictions, so it is unclear whether the investment is an efficient use of resources. And although predictive policing has led to aggressive violations of due process and privacy, there is little evidence that it reduces crime. Americans profess an obsession with true-crime media, but the public's perception of crime does not match reality: although the public believes crime is rising, violent and property crime rates have fallen since the 1990s. Given that decline, it is hard to justify expanding police resources in ways that create civil rights and privacy concerns.

Predictive policing might not make arrests more biased, but it can perpetuate existing police biases. Researchers who ran the first randomized controlled trial of place-based predictive policing found that arrests of minorities were not significantly different when the algorithm was used. The trial analyzed data from the Los Angeles Police Department that consist mostly of crimes reported to police by the public, which the researchers argue makes the algorithm less susceptible to a feedback loop in which police keep patrolling the same neighborhoods and pursuing the same suspects. However, the study did not examine whether the arrests themselves were systematically biased, with or without the algorithm. And the feedback loop does not disappear: incidents reported by the public, as opposed to incidents directly observed by police, can mitigate runaway feedback, but they cannot entirely remove it. A system trained on data from a jurisdiction with biased or unethical policing will end up reflecting those practices. The toy simulation below illustrates the dynamic.
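Here is a minimal simulation of that dynamic, a sketch under strong assumptions rather than a model of any deployed system: two neighborhoods with identical true crime rates, patrols allocated in proportion to recorded incidents, and records that grow both from public reports and from whatever incidents patrols happen to observe. The rates, seed counts, and neighborhood names are all made up.

```python
import random

def simulate(days, public_rate, discovery_rate, seed=0):
    """Return the fraction of records attributed to neighborhood A.

    Both neighborhoods have the SAME true crime rate, so a fair
    system should settle near a 50/50 split.
    """
    rng = random.Random(seed)
    records = {"A": 2, "B": 1}  # small, arbitrary initial imbalance
    for _ in range(days):
        # Patrol is sent where the records say crime is.
        share_a = records["A"] / (records["A"] + records["B"])
        patrolled = "A" if rng.random() < share_a else "B"
        # Public reports arrive regardless of where police patrol.
        for hood in ("A", "B"):
            if rng.random() < public_rate:
                records[hood] += 1
        # Observed incidents are logged only where police already are.
        if rng.random() < discovery_rate:
            records[patrolled] += 1
    return records["A"] / (records["A"] + records["B"])

# Observation-only data: the split converges to a value set largely by
# early luck, not by the (equal) true crime rates.
print(simulate(10_000, public_rate=0.0, discovery_rate=0.5))
# Mixing in public reports pulls the split back toward 50/50, though
# the observation term still distorts it over any finite horizon.
print(simulate(10_000, public_rate=0.5, discovery_rate=0.5))
```

In this simplified setup, public reports dilute the loop because they land in the data no matter where patrols go. In richer settings, where reporting rates themselves vary by neighborhood or where observed incidents shape future enforcement, the distortion persists, consistent with the mitigation-not-elimination point above.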


Conclusion

Predictive policing algorithms, trained on existing policing data, are ineffective and perpetuate existing problems with American policing. Those underlying problems make fair application of the algorithms impossible. When policing is unfair and crime is on the decline, there is no reason to expand police capacity with predictive policing. Algorithms are not automatically impartial: they work with the data and instructions humans give them, although they do have the potential to help expose our biases. Until law enforcement agencies more closely analyze the biases in their own work, they should not deploy predictive policing technologies.
