April 20, 2024

With AI, the genie is out of the bottle, and policing needs to catch up


PERF members,

It is not hyperbole to say that the foundations of modern policing were laid at the Executive Sessions on Policing and Public Safety, a series of meetings led by the National Institute of Justice and the Harvard Kennedy School from 2008 to 2014. Nor is it an overstatement to say that artificial intelligence (AI) is shifting the ground beneath those foundations.

That is why this week, PERF co-hosted a return to the Harvard Kennedy School, this time to discuss some of the promises and perils of AI that I’ve raised previously in this column. In speaking with some of you after publishing that column, I realized I needed to better understand how policing might benefit from this technology and avoid the threats AI might pose to policing and the communities you all protect. So PERF brought together a small group of police chiefs and policy experts for a full-day discussion.

Let me tell you, by lunchtime I was both exhilarated and frightened. AI has the potential to simultaneously improve the world and wreak havoc.

To help us put on this meeting, PERF partnered with two experts from Harvard University: Stephen Goldsmith, now a Professor of the Practice at the Harvard Kennedy School, who previously served as the County Prosecutor in Marion County, Indiana, the Mayor of Indianapolis, and the Deputy Mayor of New York City; and Jane Wiseman, a fellow at the Ash Center for Democratic Governance and Innovation at Harvard, who previously worked at the National Institute of Justice and the Massachusetts Executive Office of Public Safety.

The group first heard from Sharad Goel, a Professor of Public Policy at the Harvard Kennedy School. Professor Goel, who has a computer science and mathematics background, laid out some of the basics of AI, starting with a usable definition. Traditional AI predicts future events based on patterns in past events. This has been in use for years through programs like crime analysis and early intervention systems. But AI has been in the news over the past year because of generative AI, which produces novel content based on past patterns. Common text-producing generative AI systems include OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, while systems like OpenAI’s DALL-E can generate images.

With a brief prompt, a PERF staff member was able to use generative AI to alter my appearance in this photo from a PERF Town Hall and create a new, unidentifiable person.

Next we heard from Mitch Weiss, a Professor of Management Practice at the Harvard Business School who previously served as Chief of Staff to former Boston Mayor Thomas Menino. Using the example of trucks repeatedly getting stuck under low overpasses in Boston, Professor Weiss demonstrated how police officials might use generative AI to brainstorm solutions to public safety problems.

One of the numerous trucks that have become stuck under low overpasses on Storrow Drive in Boston. (Source: CBS Boston)

Mimi Whitehouse, a Senior Manager at Accenture who works on AI implementation in public sector agencies, spoke about how police departments might want to implement AI in their agencies, including the appropriate guardrails that should be in place.

Perhaps the most interesting part of the day was when we asked the assembled police leaders how they could envision implementing AI in their agencies, and they had a range of ideas:

  • Tutoring recruits as they go through the academy.
  • Taking crime reports over the phone.
  • Coordinating between legacy information systems.
  • Analyzing crime trends.
  • Helping officers and other employees complete reports.
  • Providing officers with guidance on their options based on a photo or description of a situation.
  • Helping police employees quickly find relevant information on department policies and procedures.
  • Using data sources from multiple city agencies to analyze the effects of a policy or program.
  • Responding to public records requests, including making necessary redactions.
  • Identifying common mistakes made by officers and supervisors.
  • Improving early warning systems to address employees’ potentially concerning behaviors.
  • Reevaluating cold cases to assess solvability and identify patterns.
  • Improving employee scheduling practices.
  • Using data to improve problem-solving across all government agencies and other service providers.
  • Solving crime with facial recognition.
  • Providing the community with useful real-time information.
  • Quickly assessing community sentiment about the police department and public safety concerns.
  • Identifying policy and training curriculum redundancies and simplifying them.
  • Improving homicide investigations by identifying the steps that make cases more likely to be solved.
  • Analyzing large quantities of camera footage to identify potential concerns.
  • Operating more efficiently given limited manpower.

We spent time discussing ways that AI could help solve and prevent crime. The LAPD is digitizing all of its homicide case files, and AI could be used to identify patterns in those investigations, recommend ways detectives could improve, and help police take violent individuals off the street. In London, police are using facial recognition to identify violent offenders on the street. And we considered how AI might better identify situations that could escalate into domestic violence incidents, enabling police to intervene.

So that’s why I left exhilarated. There are many challenges facing policing right now, and AI has the potential to address many of them.

But I also left apprehensive. Our discussion highlighted many opportunities for AI-facilitated mistakes, misuse, and mayhem. AI will make many tasks easier for police employees, but it will also make some tasks easier for criminals and others. Agencies may find themselves inundated with AI-generated public records requests. AI-generated images and videos of police officers or chiefs could quickly spread misinformation about their agency. Illegal AI-generated material, such as images of child sexual abuse, is already a problem for federal, state, and local law enforcement agencies. And agencies are probably unaware of some of the risks; for example, material entered into a publicly available AI system may be used to train that system, so police employees need to refrain from sharing confidential or sensitive information.

Agencies will need new expertise and internal capacity to tackle these challenges and implement these innovative ideas, which may come from training current employees or recruiting outside talent. They’ll also need to keep a “human in the loop,” so they aren’t blindly following AI output without first verifying it. Our guest speakers showed us examples of AI producing false information, a problem we could only recognize because we had knowledgeable people in the room. Professor Weiss said there is early evidence showing that AI tools are more useful when used in a person’s area of expertise, because the person will be able to identify any mistakes.

The discussion was a reminder that technology often develops faster than government agencies, including the police, can implement policies, standards, and training to address that technology. As one participant remarked, “policy takes a while to catch up to technology, and the law takes a while to catch up to policy.” That’s exactly what’s happening with AI right now. Police officials are trying to sort through these issues, and they need to navigate the ethical issues, evaluate vendors’ products, and address community concerns. And, in speaking with senior officials at the Department of Justice, I know these issues are a concern for the federal government as well.

I’m very grateful to Stephen and Jane for helping plan and facilitate this meeting, and to Sharad, Mitch, and Mimi for sharing their expertise. And, of course, I greatly appreciate the police executives who took time out of their busy schedules to come to Cambridge and share their thoughts.

PERF intends to continue working on this issue, so expect to see more from us in the coming months. And we’ll discuss AI at our Annual Meeting in Orlando on May 29-31.