
Credit: Pixabay/CC0 Public Domain
As more and more sectors experiment with artificial intelligence, law enforcement has been among the quickest to adopt the new technology. That speed has brought growing pains, ranging from false arrests to concerns about facial recognition.
Now, however, law enforcement agencies around the world are using a new training tool designed to help leaders understand the technology and use it more ethically.
Based largely on the research of Cansu Canca, director of responsible AI practice at Northeastern University's Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the responsible AI toolkit is one of the first comprehensive training programs for police focused solely on AI. At the heart of the toolkit is a simple question, Canca says.
“The first thing to ask when your organization is thinking about building or deploying AI is: do you need AI?” Canca says. “Whenever we add new tools, we’re adding risks. In policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and improvement, and AI holds great promise for helping law enforcement, as long as the risks can be reduced.”
Thousands of officers have already been trained using the toolkit, and this year Canca led a training session for 60 U.S. police chiefs.
Although AI applications like facial recognition have attracted the most attention, police are also using AI for more mundane tasks, such as generating text transcriptions of body camera footage, deciphering license plate numbers in blurry videos, and determining patrol schedules.
All of these uses, no matter how minor they may seem, carry inherent ethical risks if agencies don’t understand AI’s limitations and where it is most likely to fail, Canca says.
“The most important thing is to make sure that every time we create an AI tool for law enforcement, we are as clear as possible about how this tool can fail and where it can fail, and that police agencies understand it may fail in those specific ways,” Canca says.
Even if an agency has a need or a use case for AI, the more important question is whether it is ready to deploy it. The toolkit is designed to make law enforcement agencies think about what is best for their situation. Some departments may be ready to develop their own AI tools, such as real-time crime centers. However, Canca explains, most agencies ready to adopt the technology are more likely to source it from a third-party vendor.
At the same time, it is just as important for an agency to recognize when it is not yet ready to use AI at all.
“If you’re not ready, if you can’t keep your data safe, if you can’t ensure an appropriate level of privacy, if you can’t check for bias, if your agency is basically unable to assess and monitor the technology for risk, then you probably shouldn’t be too ambitious yet. You should start slowly and exercise those ethical muscles instead,” Canca says.
Canca points out that the toolkit is not one-size-fits-all. Each sector, whether policing or education, has its own ethical framework and requires a slightly different approach, one sensitive to the specific ethical issues of that sector.
Policing “is not separate from ethics,” Canca says, but comes with its own ethical questions and criticisms, including “a truly long lineage of historical bias.”
Understanding these biases is essential before implementing tools that could reproduce them, creating a vicious cycle between technology and police practice.
“There are areas that have been historically overpoliced, so relying on the data alone is likely to lead to overpolicing those areas again,” Canca says. “Then the question is: knowing this to be true, how can we reduce the risk of discrimination, how can we supplement the data, and how can we make sure the tool is used for the appropriate purposes?”
The goal of the toolkit is to avoid these ethical pitfalls by making officers aware that humans remain an essential element of any AI system. While an AI system may be able to analyze a city’s crime data and suggest areas that need more support, it is up to humans to determine whether a particular neighborhood needs more patrols or more social workers and mental health professionals.
“Police are not trained to ask the right questions about technology and ethics,” Canca says. “We need to be there to guide them and to push technology providers to create better technology.”
Provided by Northeastern University
This story is republished courtesy of Northeastern Global News, news.northeastern.edu.
Citation: Law enforcement is learning how to use AI more ethically (2025, July 16). Retrieved 16 July 2025 from https://techxplore.com/news/2025-07-law-ai-ethy.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.