The recent global uproar against X (formerly Twitter) for permitting its AI chatbot, Grok, to generate sexually explicit deepfakes has cast a harsh spotlight on a critical vulnerability: India's existing legal framework is ill-equipped to hold such advanced artificial intelligence platforms accountable. This incident, which erupted in early January 2026, underscores the severe individual and societal harms, especially for women and vulnerable groups, stemming from the unchecked spread of AI applications.
The Lowered Barrier for Malicious Actors
While online bullying and non-consensual morphed imagery are not new, the advent of generative AI tools like Grok has democratized malice. Previously, creating a convincing deepfake required technical skill with complex editing software. Now, a single-line prompt is all it takes. This ease of access dramatically increases the risks for vulnerable communities, exacerbating existing social fractures along lines of gender, caste, and religion in a country like India.
Why India's Laws Fail to Pin Down AI Platforms
The legal challenge in policing AI is threefold. First, the question of liability for AI misuse remains globally unresolved. AI systems are built to be steered by user prompts, which can produce outputs the original developers never intended or foresaw. This gives companies a plausible defence that they lacked intent.
Second, India's primary weapons against deepfakes—Sections 66, 67, and 67A of the IT Act, and Sections 74 and 77 of the Bharatiya Nyaya Sanhita (BNS)—focus solely on punishing the individual disseminator of harmful content. They create no liability for the AI platform or its developer, leaving a massive legal loophole.
Third, and most crucially, platforms like X can seek refuge under the "safe harbour" provision of Section 79 of the IT Act. This law, conceived when platforms were mere conduits with minimal content control, protects intermediaries that remove illegal content upon notice. AI platforms continue to claim this shield. However, a pressing legal question arises: Does the logic of safe harbour hold when a platform actively interprets and creates the harmful content itself, as generative AI does? This remains unanswered.
The Path Forward: Balancing Innovation and Safety
Addressing these legal hurdles will be neither quick nor easy. Governments, including India's, may hesitate to impose stringent regulations for fear of stifling innovation in a field seen as a primary driver of future economic growth.
However, innovation and safety need not be adversaries. Lessons from regulating automobiles, aviation, and fintech show that foundational safety principles can coexist with progress. For AI companies, the sustainable path is to embed safety-by-design principles: building systems that refuse harmful prompts and block prohibited outputs by default, while transparently demonstrating that their platforms are safe.
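To make the idea concrete, the sketch below shows, purely as an illustration, what a safety-by-design gate around a generative model could look like: a check on the prompt before generation and a second check on the output before it is released. The function names, the keyword list, and the generate() callable are hypothetical placeholders, not any platform's actual API; a real system would rely on trained classifiers and policy review rather than keyword matching.

```python
# Illustrative sketch only: a minimal "safety-by-design" gate that an AI
# service might run before and after generation. All names here are
# hypothetical; no vendor's real API is being described.

BLOCKED_TERMS = {"deepfake nude", "undress", "sexually explicit image of"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that match known harmful patterns before generation."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def moderate_output(output: str) -> str | None:
    """Run a second check on the generated result; return None to withhold it."""
    # A production system would use trained safety classifiers, not keyword lists.
    if any(term in output.lower() for term in BLOCKED_TERMS):
        return None
    return output

def safe_generate(prompt: str, generate) -> str:
    """Wrap a hypothetical generate() callable with input and output gates."""
    if not is_prompt_allowed(prompt):
        return "Request refused: this prompt violates the platform's safety policy."
    result = moderate_output(generate(prompt))
    return result if result is not None else "Output withheld by safety filter."
```

The design point is that refusal happens inside the system by default, rather than depending on takedown notices after harmful content has already spread.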
Without such proactive measures from both regulators and tech firms, incidents like the Grok deepfake scandal are poised to become frighteningly commonplace. As noted by Shashank Reddy, Managing Partner at Evam Law & Policy, the current protections are a relic of a different digital era, utterly unsuited for the age of generative AI.