The Rise of AI-Powered Deception in Corporate India
In January 2024, a finance executive at British engineering firm Arup joined what appeared to be a routine Skype meeting with familiar company leaders. The participants spoke with authority, blinked naturally, and smiled reassuringly - except they weren't real. AI-crafted deepfake doubles successfully duped the company, which has worked on prestigious projects including the Statue of Unity and India's rail infrastructure, out of nearly $25 million.
The deepfake technology proved so flawless that 15 separate transfers from the Hong Kong office were processed before anyone detected the sophisticated con. This incident marked a watershed moment in India's growing battle against synthetic media manipulation.
From Political Manipulation to Personal Fraud
India's deepfake saga began in 2020 when fabricated videos of politician Manoj Tiwari speaking fluent Haryanvi went viral ahead of Delhi assembly polls. By mid-2023, the menace had turned intensely personal. In Kerala, a 73-year-old man lost Rs 40,000 following a WhatsApp deepfake call that appeared to come from his friend urgently pleading for financial help from Dubai.
The statistics paint an alarming picture. According to global identity verification provider Sumsub, India recorded a staggering 280% year-on-year increase in deepfake incidents in Q1 2024, with a particular surge around the national elections. A McAfee survey from November 2024 found that 75% of Indians had encountered deepfake content in the past year, and 45% reported knowing someone who had fallen victim to deepfake fraud.
Detection and Digital Forensics: Reading AI's Fingerprints
Forensic experts are developing sophisticated methods to identify synthetic content. Dr. Surbhi Mathur, head of the Centre of Excellence in Multimedia Forensics at the National Forensic Sciences University (NFSU), explains that deepfake audio often sounds "too clean", lacking normal background noise. AI-generated faces typically lack natural light variations and photo response non-uniformity (PRNU) - the unique fingerprint left by camera sensors.
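The PRNU cue Dr. Mathur describes can be illustrated with a toy check: a camera sensor stamps a faint, fixed noise pattern onto every genuine photo, so the high-frequency residual of a real image correlates with the camera's reference pattern, while a synthetic image generally does not. The sketch below is illustrative only - real forensic pipelines use wavelet denoising and many reference photos, whereas here a simple box blur and a simulated "camera" stand in for both:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box blur (a stand-in for the wavelet
    denoisers used in real PRNU extraction)."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def noise_residual(img):
    """High-frequency residual: the image minus a denoised copy."""
    return img - box_blur(img)

def prnu_correlation(img, reference_pattern):
    """Normalized correlation between an image's residual and a camera's
    reference PRNU pattern (in practice estimated from many flat photos)."""
    r = noise_residual(img).ravel()
    k = reference_pattern.ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    k = (k - k.mean()) / (k.std() + 1e-9)
    return float(np.mean(r * k))

# Toy demo: a simulated "camera" stamps a fixed noise pattern on
# every genuine shot; the synthetic image carries no such fingerprint.
rng = np.random.default_rng(0)
pattern = rng.normal(0.0, 1.0, (64, 64))               # sensor fingerprint
scene = box_blur(rng.normal(0.0, 1.0, (64, 64)), k=9)  # smooth scene content
genuine = scene + 0.5 * pattern                        # fingerprint present
synthetic = scene + rng.normal(0.0, 0.5, (64, 64))     # fingerprint absent

print(round(prnu_correlation(genuine, pattern), 3))    # high correlation
print(round(prnu_correlation(synthetic, pattern), 3))  # near zero
```

The genuine image's residual correlates strongly with the reference pattern, while the synthetic one's hovers near zero - the same asymmetry forensic tools exploit at far greater sophistication.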
Naveen Sharma, co-founder of Kroop AI, which developed the detection tool Vizmantiz, witnessed this firsthand when a man used AI to impersonate a real job candidate during an online interview at a global chipmaker's India office. "He synced facial movements and tone quite well but we detected the use of deepfake technology," Sharma revealed.
Sandeep Shukla, director of the International Institute of Information Technology (IIIT), Hyderabad, notes that while detection tools claim over 90% accuracy, there is no guarantee they will catch every form of media manipulation. He emphasizes the need to train police and judges in detection techniques and their limitations, coupled with punishments severe enough to act as a deterrent.
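Shukla's caveat can be made concrete with base-rate arithmetic. The numbers below are purely illustrative - the 1% fake rate and the error rates are assumptions, not figures from any cited tool - but they show why "over 90% accuracy" still lets fakes through and buries reviewers in false alarms when genuine media vastly outnumbers deepfakes:

```python
def screening_outcomes(n_clips, fake_rate, recall, false_positive_rate):
    """Expected outcomes of screening a media stream with an imperfect
    deepfake detector (all inputs are hypothetical illustration values)."""
    fakes = n_clips * fake_rate
    genuine = n_clips - fakes
    caught = fakes * recall                      # fakes correctly flagged
    missed = fakes - caught                      # fakes that slip through
    false_alarms = genuine * false_positive_rate # genuine clips flagged
    precision = caught / (caught + false_alarms) # share of flags that are real
    return missed, false_alarms, precision

missed, false_alarms, precision = screening_outcomes(
    n_clips=100_000, fake_rate=0.01, recall=0.95, false_positive_rate=0.05)
print(f"missed fakes: {missed:.0f}")        # 50 fakes slip through
print(f"false alarms: {false_alarms:.0f}")  # 4950 genuine clips flagged
print(f"precision: {precision:.0%}")        # only ~16% of flags are real fakes
```

Even at 95% recall, 50 of 1,000 fakes pass undetected, and because genuine clips dominate the stream, five out of six flags are false alarms - which is why detection alone, without trained investigators and legal deterrence, is not enough.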
Celebrity Endorsements and Personal Threats
The surge in deepfake content is driven by mass appeal and financial gain, according to experts. Since 2023, the Deepfakes Analysis Unit (DAU) has tracked hundreds of AI-created scams featuring fabricated endorsements from prominent figures. Fake investment schemes have used the faces of Ratan Tata, N R Narayana Murthy, Rahul Gandhi, Nirmala Sitharaman, and Virat Kohli. One viral Ratan Tata "investment video" was rated 83.8% likely to be AI-generated by detection tools, prompting the industrialist to publicly debunk it on Instagram.
In November 2023, a deepfake of actor Rashmika Mandanna, created by superimposing her face onto a video of influencer Zara Patel, highlighted the grave risks to women's privacy. Experts warn that 'DeepNude' apps and other AI-generated explicit content created without consent represent one of the most disturbing applications of this technology.
Legal Framework and Future Protection
With no dedicated deepfake legislation currently in place, several Bollywood actors are pursuing legal recourse through the courts. The Delhi High Court has issued 'John Doe' orders that enable legal action against anonymous offenders and introduced 'dynamic+ injunctions' that compel platforms to remove original deepfakes while proactively monitoring and taking down re-uploads.
Sharma identifies the banking, financial services, and insurance (BFSI) sector as facing the "most dangerous" deepfake threats. During online KYC processes, he warns that "I can clone a voice or face, and the system may verify that person as genuine."
The upcoming deepfake policy's first mandate will require BFSI institutions to use deepfake detectors during KYC and document verification. Proposed amendments to the IT Rules, 2021 would also require AI-generated content to carry labels covering at least 10% of the screen area (or the opening portion of an audio clip), and mandate faster takedowns - within 36 hours, or three hours during elections. Both the individuals who circulate deepfake content and the platforms that host it, even unknowingly, would face legal action.
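The 10%-of-screen labeling requirement reduces to a simple geometric check. The function below sketches our reading of the proposed rule - the threshold and the rectangular-label assumption come from the description above, not from any official compliance specification:

```python
def label_meets_rule(frame_w, frame_h, label_w, label_h, min_fraction=0.10):
    """True if a rectangular AI-content label covers at least `min_fraction`
    of the video frame's area (a sketch of the proposed 10% rule, not
    official compliance logic)."""
    label_area = label_w * label_h
    frame_area = frame_w * frame_h
    return label_area / frame_area >= min_fraction

# A 640x360 label on a 1920x1080 frame covers ~11.1% - compliant.
print(label_meets_rule(1920, 1080, 640, 360))   # True
# A 320x180 label covers only ~2.8% - too small.
print(label_meets_rule(1920, 1080, 320, 180))   # False
```

Platforms would presumably also need to verify label persistence across the video's duration; this sketch checks only the single-frame area condition.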
India's Defense Against Deepfakes
Several initiatives are underway to combat the deepfake menace:
- Kerala's Cyberdome uses a public-private model for deepfake detection and digital forensics
- Telangana Women Safety Wing Cyber Module tackles image-based abuse and deepfake threats
- National Sahyog Portal allows anonymous reporting of deepfake and cyber abuse incidents to aid state police forces
The Ministry of Electronics and Information Technology (MeitY) is funding three strategic deepfake defense projects through India's IITs:
- Project Saakshya (IIT Jodhpur/Madras) focuses on real-time deepfake detection for images, video, and audio
- AI Vishleshak (IIT Mandi & Himachal Forensics) develops Explainable AI for deepfakes and signature forgery
- Voice Deepfake Detection System (IIT Kharagpur) specifically targets AI-generated voice impersonation to prevent fraud
As Sharma emphasizes, "Technology changes every 15-20 days and scammers evolve just as quickly. We need laws, detection, and awareness that move at the same speed." The race between deepfake creators and detectors continues to intensify across India's digital landscape.