The rapid spread of convincing AI-generated deepfakes in India has triggered a regulatory scramble, but legal experts warn that the government's proposed solution risks overcorrecting in ways that could damage the digital ecosystem. Instead of heavy-handed rules, a smarter, more balanced approach is needed.
The Scale of the Deepfake Threat in India
Manipulated synthetic media is no longer a futuristic concern. Recent incidents, like fake videos of superstar Shah Rukh Khan promoting fraudulent schemes, or a fabricated clip of Ukraine's leader calling for surrender, highlight a global problem. In India, the issue is particularly acute. A 2024 McAfee Labs survey revealed that 75% of Indian respondents had encountered a deepfake in the past year, with 38% having been directly targeted by a deepfake-enabled scam.
This alarming trend has rightly pushed policymakers to seek stronger safeguards. The government's answer lies in proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These amendments create a new category for "synthetically generated information" and impose stringent obligations on digital platforms.
Why the Proposed Rules Are Problematic
The draft rules require platforms to permanently label such synthetic content with visible markings covering at least a tenth of the screen area (or, for audio, the opening tenth of the clip) and to embed unique metadata. More consequentially, significant social media intermediaries (those with more than 5 million registered users) must ask uploaders to declare whether their content is synthetic and then verify those declarations using technical tools. If users fail to declare, platforms must apply the label themselves.
This represents a fundamental shift. Currently, Section 79 of the IT Act, 2000 provides a "safe harbour" that shields intermediaries from liability for user-generated content, provided they act as neutral platforms and address grievances promptly. The new rules would force platforms to inspect, classify, and modify content before publication, effectively blurring the line between a platform and a publisher and eroding this critical legal protection.
The challenges are not just legal but intensely practical. Even the most advanced detection technologies, like Meta's adversarially trained models, struggle to reliably distinguish deepfakes from merely edited or enhanced content. Platforms like YouTube rely largely on user declarations due to these technical limitations. While tools like Google's SynthID, which watermarks content at creation, show promise for establishing provenance, they cannot be applied retroactively to existing content.
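To make the retroactivity problem concrete, the toy Python sketch below tags an image with a provenance note at creation time using ordinary PNG text metadata. This is deliberately simplified and is not how SynthID works (SynthID embeds an imperceptible watermark into the content itself); the metadata key name here is purely hypothetical. The point survives the simplification: a file generated before any such scheme existed simply carries no mark, so a missing mark proves nothing about authenticity.

```python
# Toy illustration of creation-time provenance tagging. NOT SynthID's
# actual mechanism; the metadata key name is hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai-provenance"  # hypothetical key, for illustration only

def save_with_provenance(img: Image.Image, path: str, generator: str) -> None:
    """Attach a provenance tag at creation, before the file circulates."""
    meta = PngInfo()
    meta.add_text(PROVENANCE_KEY, f"generated-by:{generator}")
    img.save(path, pnginfo=meta)

def check_provenance(path: str) -> str | None:
    """Return the provenance tag if present; None for untagged files."""
    with Image.open(path) as img:
        return img.text.get(PROVENANCE_KEY)  # .text exposes PNG text chunks

# A freshly generated image carries the tag...
save_with_provenance(Image.new("RGB", (64, 64)), "generated.png", "demo-model")
print(check_provenance("generated.png"))  # -> "generated-by:demo-model"

# ...but files created before the scheme existed were never tagged, so
# nothing distinguishes an honest old photo from an unlabelled deepfake.
Image.new("RGB", (64, 64)).save("legacy.png")
print(check_provenance("legacy.png"))     # -> None
```

A plain metadata tag like this can also be stripped or forged in seconds, which is why serious provenance efforts pair the mark with cryptographic signatures, a point the final section returns to.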
Faced with these hurdles and the prospect of liability, platforms may resort to over-censorship, blocking or delaying legitimate content to avoid risk. For smaller Indian startups, the compliance costs could be crippling, stifling innovation in the domestic AI ecosystem.
A Smarter, More Balanced Path Forward
Other jurisdictions offer more calibrated models. The European Union's AI Act mandates disclosure of AI-generated content but avoids rigid watermarks or pre-screening, focusing on transparency. The United States, with its strong free-speech traditions, targets specific harms like election interference. Even China's strict internet regime does not mandate fixed-portion screen labels or require platforms to authenticate all user declarations.
India's proposal, in contrast, leans towards prescriptive, platform-centric control. A more effective strategy would spread responsibility across creators, users, tech developers, and platforms. Key elements should include:
- Provenance over policing: Supporting systems like the Coalition for Content Provenance and Authenticity (C2PA) that cryptographically verify a file's origin without altering its appearance (a simplified sketch of the idea follows this list).
- Empowering users: Launching nationwide digital literacy campaigns to help India's over 800 million internet users identify sophisticated fakes. No watermark can replace an informed citizenry.
- Targeted legal remedies: Preserving safe harbour for platforms while using fast-track judicial processes and coordinated action with law enforcement to address criminal misuse like fraud, impersonation, and reputational attacks.
- Supportive innovation frameworks: Using regulatory sandboxes to allow India's AI ecosystem to develop solutions without the punitive burdens only large corporations can bear.
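For readers wondering what "cryptographically verify a file's origin" means in practice, below is a minimal Python sketch of the signature idea underlying C2PA-style provenance, using the widely available `cryptography` package. It is emphatically not the real C2PA manifest format: certificate chains, embedded manifests, and the edit-history assertions C2PA records are all omitted.

```python
# Minimal sketch of the signature concept behind C2PA-style provenance:
# the creator signs a hash of the file at creation, and anyone holding the
# public key can later verify the origin. NOT the real C2PA manifest format.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_content(private_key: ed25519.Ed25519PrivateKey, data: bytes) -> bytes:
    """Creator-side: sign the content hash; the signature ships with the file."""
    return private_key.sign(hashlib.sha256(data).digest())

def verify_content(public_key: ed25519.Ed25519PublicKey,
                   data: bytes, signature: bytes) -> bool:
    """Verifier-side: altering even one byte invalidates the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

creator_key = ed25519.Ed25519PrivateKey.generate()
content = b"original media bytes"
signature = sign_content(creator_key, content)

print(verify_content(creator_key.public_key(), content, signature))        # True
print(verify_content(creator_key.public_key(), content + b"!", signature)) # False
```

Note what is absent: any visible marking. The signature travels alongside the content and tampering invalidates it, which is precisely the contrast with the draft rules' mandatory on-screen labels.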
Deepfakes present a serious threat to trust and security online, demanding a firm regulatory response. However, security cannot be achieved through regulatory overreach that chills free expression and hampers innovation. The goal must be a framework that strengthens transparency, empowers users, and preserves the open, innovative spirit that has powered India's remarkable digital growth.