AI in Policing: Experts Warn of Centralized Power, Bias, and Excessive Surveillance

Police departments across India are rapidly adopting artificial intelligence tools to enhance their operations. These initiatives promise greater efficiency and better resource management. However, experts are sounding the alarm about significant dangers lurking behind this technological shift.

How Police Are Implementing AI Systems

The Delhi Police recently announced plans to install 10,000 AI-enabled cameras throughout the national capital under the Safe City Project. These sophisticated cameras will feature facial recognition systems and distress detection technologies. They can identify emergency situations by analyzing sounds and facial expressions.

Just days before this announcement, the Maharashtra government launched MahaCrime OS AI for state police forces. This platform aims to process complaints faster, analyze complex data, and streamline investigative procedures. These represent just two examples from a growing list of AI-driven police initiatives.

Srinivas Kodali, an interdisciplinary researcher focusing on data and cities, explains the current approach. "Police are integrating semi-automated systems into their operations," he says. "For instance, they use surveillance agents through CCTV cameras to semi-automate street patrolling and safety monitoring."

Officers can monitor entire cities from centralized command centers while cameras provide real-time footage. Beyond surveillance, police departments are deploying AI for purposes including crime reduction, communications, and reconnaissance.

In November 2025, Delhi Police received 75 drones from Indira Gandhi Delhi Technical University for Women. These drones will assist with crowd management and traffic control operations. The MahaCrime OS AI platform specifically focuses on predictive policing, using algorithms to detect crime patterns and predict future criminal hotspots.
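
As a rough illustration of how hotspot prediction works in principle, the sketch below buckets past incidents into a map grid and flags the busiest cells. The grid size, coordinates, and scoring are illustrative assumptions, not details of MahaCrime OS AI or any deployed system.

```python
from collections import Counter

# Minimal sketch of grid-based crime hotspot prediction (illustrative
# only; not the actual method used by any police platform). Each
# incident is a (lat, lon) pair; incidents are bucketed into a coarse
# grid and cells are ranked by historical frequency, the core idea
# behind simple hotspot models.

CELL_SIZE = 0.01  # grid resolution in degrees (an assumption)

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to its grid cell."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

def predict_hotspots(incidents: list[tuple[float, float]], top_k: int = 3):
    """Return the top_k grid cells with the most historical incidents."""
    counts = Counter(cell_of(lat, lon) for lat, lon in incidents)
    return counts.most_common(top_k)

# Hypothetical incident coordinates, fabricated for the demo.
history = [(17.385, 78.486), (17.386, 78.487), (17.385, 78.485),
           (17.440, 78.350), (17.441, 78.351), (17.500, 78.400)]

for cell, n in predict_hotspots(history):
    print(f"cell {cell}: {n} past incidents -> flagged as likely hotspot")
```

The key point is that such models only extrapolate from where crime was previously recorded, which is why the quality of the underlying records matters so much.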

Social media monitoring represents another application area. Bengaluru police announced in May 2025 that they would launch an AI-powered platform to track misleading content across social networks. All these systems require substantial training data, much of which comes from the Crime and Criminal Tracking Network and Systems (CCTNS), launched in 2009.

Major Concerns About AI Integration

Experts express deep concerns about potential negative consequences of widespread AI adoption in policing. One primary worry involves unfair targeting of specific communities. Since AI systems train on historical police data, they may inherit and amplify existing biases present in that data.
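
One way to see how this amplification can happen: if patrols are allocated to wherever recorded crime is highest, and police presence itself generates new records through stops and minor charges, the system's output feeds its own training data. The toy simulation below, built on entirely assumed numbers, shows two areas with equal true crime drifting further apart in the records.

```python
# Toy feedback-loop simulation (all numbers are illustrative
# assumptions, not real crime data). Areas A and B have EQUAL true
# crime, but A starts with more recorded incidents. Patrols go where
# records are highest; patrols generate new records; those records
# drive the next round of predictions.

recorded = {"A": 55.0, "B": 45.0}   # biased historical records
PATROLS, PER_PATROL = 10, 2.0       # units deployed, records per unit

for period in range(6):
    top = max(recorded, key=recorded.get)
    # "Predictive" allocation: 70% of patrols to the top-ranked area.
    patrols = {a: PATROLS * (0.7 if a == top else 0.3) for a in recorded}
    for a in recorded:
        recorded[a] += patrols[a] * PER_PATROL
    share = recorded["A"] / sum(recorded.values())
    print(f"period {period}: A's share of records = {share:.2f}")
```

Even though the two areas behave identically, the area that started with more records ends each round ranked higher, so the recorded gap widens toward whatever split the allocation rule enforces and the bias never self-corrects.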

Kodali emphasizes a broader societal perspective. "We need to consider the impact on society as a whole," he argues. "AI integration leads to centralization of power. Operations no longer depend on local officers but on information centralized in distant data centers."

This centralization makes police systems harder for ordinary citizens to navigate. Within police organizations themselves, hierarchies become more rigid and codified: beat officers now face constant supervision through CCTV monitoring and geotagging technologies.

Excessive policing represents another serious concern. According to Bureau of Police Research and Development documents, one CCTV camera can perform work equivalent to 100 police personnel. In Hyderabad, with approximately one million cameras installed, that equates to the theoretical surveillance capacity of 100 million officers for a population of about 11 million residents, roughly nine officers for every resident.

Such intensive monitoring can undermine fundamental civil rights, particularly the right to protest. Police can easily track and detain protesters using AI tools. Shivangi Narayan, assistant professor at Thapar School of Liberal Arts and Sciences, notes the shift in policing philosophy. "Historically, police took preventative measures like detaining known offenders," she explains. "Now the premise assumes everyone is guilty and everything is suspicious."

Widespread camera installation doesn't necessarily increase public safety. Instead, it creates constant surveillance that erodes human connection. Citizens may feel that during emergencies, no human assistance will be available because systems prioritize monitoring over response.

Transparency and Accountability Gaps

AI systems in policing suffer from significant transparency issues. Unlike traditional police manuals that guide officer training and conduct, no comprehensive rulebook exists for AI operations. This lack of clear guidelines creates accountability problems.

Real-world consequences are already emerging. In Telangana, where many policing AI systems are developed, a tragic incident occurred in 2023. Mohammed Khadeer Khan was detained by Medak police because his facial features resembled a chain-snatching suspect captured on CCTV. During detention, Khan suffered serious injuries from alleged police brutality and later died.

A subsequent investigation revealed that the CCTV footage only showed a masked thief. Police had used Khan's call record data to locate him without obtaining a warrant. This case highlights how AI-assisted policing can lead to wrongful targeting and human rights violations.

Needed Safeguards and Reforms

Calls are growing for comprehensive legal frameworks to regulate AI use in policing. Some point to examples such as the 2023 executive order signed by then US President Joe Biden, which required developers of powerful AI models to conduct safety tests and share the results with the US government before public release.

Narayan, author of "Predictive Policing and the Construction of the Criminal," believes legal compliance alone is insufficient. "Regulation must look beyond laws," she insists. "We should focus on building safer societies through employment opportunities and resource distribution. While some surveillance helps control crime, we cannot make long-term investments only in technology while neglecting institutions that create better communities."

Kodali advocates for police reforms rather than AI regulation alone. He compares the situation to earlier technologies such as fingerprint matching, which police sometimes misused. Current laws need updating to prevent AI exploitation. "Consider the Criminal Procedure (Identification) Act, 2022," he says. "It allows police to collect data from any accused person, not just convicts. If someone appears in CCTV footage near a crime scene, police can collect their information without consent. This violates privacy rights and must change."

The debate continues as Indian police forces expand their AI capabilities. Balancing technological advancement with civil liberties remains a critical challenge requiring thoughtful solutions and democratic oversight.