Indian law enforcement is undergoing a significant technological shift, actively integrating Artificial Intelligence (AI) into its core operations. From predictive crime analysis to real-time facial recognition, agencies are betting on AI to manage rising cybercrime and streamline investigations. However, this digital transformation brings with it profound concerns about entrenched biases, surveillance overreach, and a lack of legal safeguards.
AI on the Frontlines: Maharashtra's Copilot and Delhi's Surveillance Plans
A prime example of this push is MahaCrimeOS AI, a predictive tool unveiled in December 2025 for the Maharashtra Police. Developed by Hyderabad-based CyberEye with support from Microsoft's Azure OpenAI Service, the system is designed to act as an investigative copilot. It draws on India's criminal laws and open-source intelligence to help officers link cases, analyze digital evidence such as PDFs, images, and videos, and even generate investigation plans.
During its pilot across 23 police stations in Nagpur Rural, the AI proved useful in complex cases involving narcotics, cybercrime, and financial fraud. It guides officers on immediate next steps, from freezing bank accounts to examining social media profiles, reducing their dependence on senior officials for preliminary review.
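To make the copilot idea concrete, here is a minimal, purely illustrative sketch of how such a tool might call Azure OpenAI to draft next steps from a case summary. MahaCrimeOS's actual prompts, models, and integrations are not public; every name, credential, and value below is a placeholder.

```python
# Hypothetical sketch only: not MahaCrimeOS's real implementation.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR_KEY",                                 # placeholder credential
    api_version="2024-02-01",
    azure_endpoint="https://example.openai.azure.com",  # placeholder endpoint
)

case_summary = "Complainant reports Rs 5 lakh transferred after a phishing call."

response = client.chat.completions.create(
    model="gpt-4o",  # the name of your Azure deployment, assumed here
    messages=[
        {"role": "system",
         "content": "You assist police investigators. Cite relevant sections "
                    "of Indian criminal law and list immediate next steps."},
        {"role": "user", "content": case_summary},
    ],
)
print(response.choices[0].message.content)
```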
Simultaneously, the Delhi Police is planning a major expansion of AI-assisted facial recognition technology (FRT). Under a proposed Integrated Command and Control Centre (C4I), AI will analyze live CCTV feeds to identify suspects and track missing persons, supplemented by automated number-plate recognition. This move towards real-time analytics in public spaces significantly escalates surveillance capabilities.
The Double-Edged Sword: Efficiency Gains vs. Risks of Bias
For police forces, the appeal of AI is clear: it processes vast datasets from call records, CCTV, and financial trails far faster than humans. In a country grappling with a cybercrime surge and uneven police resources, AI promises enhanced efficiency and modernization without a proportional increase in manpower.
Yet critics warn that AI-driven policing risks amplifying existing societal and institutional biases. These systems often learn from historical police data, which may reflect patterns of over-policing in certain communities; models trained on such data can reinforce those patterns, leading to the unfair targeting of marginalized groups (see the sketch below). The use of FRT to identify individuals involved in the 2020 Delhi riots has already highlighted these concerns, which could intensify with more advanced AI.
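A toy simulation illustrates the feedback loop critics describe. This is not any agency's actual system: the areas, rates, and allocation rule are hypothetical, and the only point is that a model which sends patrols wherever past recorded incidents are highest keeps "confirming" its own predictions.

```python
# Toy feedback-loop simulation: two areas with IDENTICAL true crime rates,
# one of which starts out slightly over-policed. All numbers are invented.
import random

random.seed(1)

true_crime_rate = {"area_A": 0.1, "area_B": 0.1}  # identical underlying rates
recorded = {"area_A": 5, "area_B": 3}             # area_A starts over-policed

for day in range(1000):
    # Greedy "prediction": patrol the area with the most recorded incidents.
    target = max(recorded, key=recorded.get)
    # Crime is only *recorded* where officers are actually present.
    if random.random() < true_crime_rate[target]:
        recorded[target] += 1

print(recorded)  # area_A climbs to roughly 105; area_B stays frozen at 3
```

Despite equal underlying crime rates, the initial skew compounds: the over-policed area generates ever more records, which in turn attract ever more policing.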
Further pitfalls include the lack of transparency, accuracy audits, and clear legal mechanisms to challenge AI-driven decisions. India's data protection laws carve out broad exemptions for law enforcement, complicating accountability and creating a regulatory grey area as intrusive technologies become widespread.
Beyond Policing: Deepfake Detection and Festival Monitoring
The exploration of AI extends beyond traditional crime-fighting. The Centre for Development of Advanced Computing (C-DAC), under the IT Ministry, has developed deepfake detection software. Available as a web portal and as a desktop application called 'FakeCheck', it is currently being tested by select law enforcement agencies to combat digitally manipulated media.
In a novel application, the Bengaluru police deployed an AI system to enforce firecracker bans during Diwali, with plans to use it again on New Year's Eve. The technology scans live CCTV feeds for flashes, smoke, and unusual crowd activity, sending instant alerts to control rooms and patrol teams. Officials reported it helped address over 2,000 violations during Diwali.
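As a rough illustration only, a brightness-spike detector of the kind such a system might include can be sketched in a few lines of OpenCV. The Bengaluru deployment's actual design is not public; the stream URL, threshold, and alerting logic below are invented.

```python
# Hypothetical sketch: flag sudden brightness spikes in a CCTV feed as
# possible firecracker flashes. Not the deployed system's real logic.
import cv2

THRESHOLD = 40.0  # assumed jump in mean frame brightness that flags a flash

cap = cv2.VideoCapture("rtsp://example-cctv-feed")  # placeholder stream URL
prev_brightness = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness = float(gray.mean())
    # A sudden frame-to-frame brightness spike is a crude proxy for a flash.
    if prev_brightness is not None and brightness - prev_brightness > THRESHOLD:
        print("possible firecracker flash: alert control room")
    prev_brightness = brightness

cap.release()
```

A production system would need far more than this (smoke and crowd models, false-positive suppression for headlights and lightning), but the sketch shows how simple the core signal can be.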
As generative AI evolves, India's law enforcement agencies are adopting it keenly. While the potential benefits for crime prevention and investigation are substantial, the transition demands robust public debate and stringent safeguards to ensure that the pursuit of technological efficiency does not come at the cost of civil liberties and equitable justice.