
Law enforcement is learning how to use AI more ethically

By Thefuturedatainsights | July 16, 2025


Police body cam. Credit: Pixabay/CC0 Public Domain

As more and more sectors experiment with artificial intelligence, law enforcement has been among the quickest to adopt the new technology. That speed has brought some growing pains, ranging from false arrests to concerns about facial recognition.

However, law enforcement agencies around the world are now turning to new training tools designed to help them understand the technology and use it more ethically.

Based primarily on the research of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the responsible AI toolkit is one of the first comprehensive AI-focused training programs for police. At the heart of the toolkit is a simple question, Canca says.

“The first thing to ask when your organization is thinking about building or deploying AI is: do you need AI?” Canca says. “Whenever we add new tools, we’re adding risks. With policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and improvement, and AI holds great promise for helping law enforcement, as long as the risks can be reduced.”

Thousands of officers have already been trained using the toolkit, and this year Canca led a training session for 60 US police chiefs.

Although AI applications like facial recognition have attracted the most attention, police are also using AI for more mundane tasks, such as generating text transcriptions of body camera footage, deciphering license plate numbers in blurry videos, and determining patrol schedules.

All of these uses, no matter how minor they may seem, come with inherent ethical risks if users don’t understand AI’s limitations and where it most commonly fails, Canca says.

“The most important thing is to make sure that every time we create an AI tool for law enforcement, we are as clear as possible about how and where this tool can fail, so that police agencies can see that it may fail in those specific ways,” Canca says.

Even if an agency needs AI, or claims to, the more important question is whether it is ready to deploy it. The toolkit is designed to make law enforcement agencies think about what is best for their situation. A department may be ready to develop its own AI tools, such as real-time crime centers. However, Canca explains that most agencies ready to adopt the technology are more likely to source it from a third-party vendor.

At the same time, it is important for agencies to recognize when they are not yet ready to use AI at all.

“If you’re not ready, if you can’t keep your data safe, if you can’t ensure an appropriate level of privacy, if you can’t check for bias, if your agency is basically unable to assess and monitor the technology for risk, then you probably shouldn’t be very ambitious yet. You should start slowly and build those ethical muscles instead,” Canca says.

Canca points out that the toolkit is not one-size-fits-all. Each sector, whether policing or education, has its own ethical landscape and requires a slightly different approach, one that is sensitive to that sector’s specific ethical issues.

“Policing is not separate from ethics,” Canca says. It comes with its own ethical questions and criticisms, including “a truly long lineage of historical prejudice.”

Understanding these biases is important when implementing tools that could recreate them, creating a vicious cycle between the technology and police practice.

“There are areas that have been historically over-policed, so a tool that just looks at the data is likely to over-police those areas again,” Canca says. “Then the question is: if we understand that to be true, how can we ensure that the risk of discrimination is reduced, how can we complement the data, and how can we make sure the tool is used for the appropriate purposes?”

The goal of the toolkit is to help officers avoid these ethical pitfalls by making them aware that humans remain an essential part of any AI system. While AI may be able to analyze a city and, based on crime data, suggest areas that require more support, it is up to humans to determine whether a particular neighborhood needs more patrols or more social workers and mental health professionals.

“The police are not trained to ask the right questions about technology and ethics,” Canca says. “We need to be there to guide them and to push technology providers to create better technology.”

Provided by Northeastern University

This story is republished courtesy of Northeastern Global News (news.northeastern.edu).

Citation: Law enforcement is learning how to use AI more ethically (July 16, 2025), retrieved 16 July 2025 from https://techxplore.com/news/2025-07-law-ai– ethy.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


