February 3, 2024

Policing and Artificial Intelligence: Promise and Peril

PERF members,

Last month, the EU approved stringent regulations on the use of AI by law enforcement, over the objections of many in the profession. Unless U.S. law enforcement leaders get involved in the conversation, the same story will play out here. In some places, it already is.

In the last two weeks, the U.S. Senate held a hearing on the use of AI in criminal investigations, House lawmakers held a hearing on the use of AI in the legislative branch, another Senate committee held a hearing on the use of AI at the Library of Congress, in government publishing, and at the Smithsonian, and a group of seven members of Congress, senators and representatives alike, sent a letter to the Department of Justice demanding an immediate pause in funding for predictive policing programs. Late last year, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which includes a section (Section 7, Advancing Equity and Civil Rights) directing the Attorney General and Secretary of Homeland Security to issue a report addressing the fair and impartial use of AI in the criminal justice system.

Suffice it to say, artificial intelligence is on the minds of policymakers here in Washington, D.C. I also know AI is a topic of conversation in many police departments. The topic has come up at various law enforcement gatherings, including the PERF Annual Meeting and Town Hall. In the September 2, 2023, Trending, I spoke with LAPD Chief Michel Moore about the department’s partnership with the University of Southern California on a study of how AI could help analyze officers’ interactions with residents during traffic stops.

We all know how quickly technology evolves: my Apple Watch has more computing power than was used on Apollo 11 or in the original supercomputers, which took up entire rooms. Both the prevalence and power of AI have exploded. Three key areas where AI is rapidly advancing are generative AI, which can create text, images, and sound; machine learning, which can conduct advanced analysis of images and identify hidden or complex patterns; and facial recognition, which can identify individual faces. Bloomberg estimates that generative AI alone will become a $1.3 trillion market in less than 10 years.

And, importantly, AI is not just being used for good; it is also facilitating a wide variety of crimes. Just this week, the New York Times reported on the challenges law enforcement will face in investigating child sexual abuse images and trafficking. Generative AI is making it even harder to detect online scammers who solicit huge sums of cash from unsuspecting victims.

Police leaders need to understand how AI can affect their departments, both in how they do their work and in what crimes they are called on to address. Right now, it feels like lawyers and privacy advocates are defining how AI will — or, more importantly, won’t — be used. Policing needs to actively engage in the discussion on the role AI can play in developing more effective and compassionate police departments. I’ll admit I have a lot to learn about AI’s full potential for policing, but policing is currently on the defensive and needs to better understand that potential, as well as the risks and limitations of AI.

Keeping up with these advances is a challenge for policymakers and police leaders alike. In cases such as identifying exploited children online, apprehending an individual intent on violence, or identifying a victim or suspect in a homicide, it is understandable why police leaders want to use whatever tools they have to protect their communities. The risk is that the use of new tools and technologies, without proper guidelines and training, can lead to problematic cases like false arrests, which not only harm innocent people but can fuel efforts to restrict access to these potentially life-saving tools.

Some of the most prominent concerns are those around bias and accuracy. The reality is that the human version of what AI does is also marred by bias. So, too, are the data used to build and train AI tools, sometimes the perspectives of those who build them, and the ways the results of those tools are used.

So, which bias is worse: the AI’s or the human’s?

That’s not just a rhetorical question. Guidelines, such as using AI-produced intelligence only as one part of a decision-making process rather than as the sole basis for a decision, are one part of the response. But agencies will also need to train officers and educate the public to appreciate that some degree of bias is likely baked into the system (as is true for society as a whole), and that the results need to be considered in that context.

This issue is similar to eyewitness identification and testimony. We know that witnesses make significant mistakes and that bias often plays a role. It is a serious issue that can result in wrongful convictions. So how has policing responded? We acknowledge the potential for bias and determine how best to mitigate it. For example, we use double-blind lineup procedures, present photos sequentially rather than simultaneously, and make sure that eyewitness identification is only one part of a comprehensive investigation. We know that eyewitness identification is a flawed but valuable tool, so we take steps to mitigate those flaws.

This is why it is essential that policing work to better understand AI, so that police departments can take the lead in developing training, policies, and procedures, and especially independent oversight and auditing processes, before acquiring any of the growing array of AI technologies, from facial recognition and body-worn camera video processing to more mundane applications like automated report writing, call triaging, and even answering non-emergency calls.

As Miami Police Department Assistant Chief Armando Aguilar testified at one of the recent Senate hearings:

My team and I set out to establish an FR [facial recognition] policy that would address these and other concerns. We were not the first law enforcement agency to use facial recognition or to develop FR policy, but we were the first to be completely transparent about it. We did not seek to impose our policy on the public — we asked them to help us write it.

It is critical for departments to be transparent about their intent when acquiring this technology and about the safeguards they will have in place. Bringing the community in to develop these guidelines is an essential first step. The community, however, is not the only stakeholder. In Seattle, when the department began testing a technology that analyzes body-worn camera audio to assess officers’ de-escalation efforts, union officials raised concerns about the technology being used to monitor and potentially discipline officers.

There is much promise and peril in increasingly omnipresent AI technology, and the policing profession needs to better understand its benefits and drawbacks so we can provide input as governments take steps to regulate it. PERF is likely to call on some of you to help develop initial plans for the future role of AI in policing and for how police can prepare to respond to AI in their communities. As we start that work, I would value hearing from you now about how your agencies use AI and how you’re addressing the issues it creates.

Best,

Chuck