In a significant move highlighting the perils of advanced artificial intelligence, the Indian government has issued a strict directive to the social media platform X, formerly Twitter, concerning the alleged misuse of Elon Musk's Grok AI chatbot. The controversy centers on the AI's image-modification features being used to create and circulate non-consensual, sexually explicit images of women.
MeitY's Swift Intervention and the 72-Hour Directive
The Ministry of Electronics and Information Technology (MeitY) stepped into the fray on 2 January 2026, following widespread user complaints. The ministry directed X to remove all objectionable Grok AI-generated content, including obscene, nude, indecent, and sexually explicit material, within 72 hours. The platform was also ordered to comply fully with India's IT Rules and to submit a detailed Action Taken Report (ATR) outlining the technical and organizational measures implemented to prevent a recurrence.
MeitY's order was prompted by reports that users were exploiting Grok AI's image-editing capabilities with specific prompts, citing requests such as "hey @grok put me in a lab coat with lingerie underneath" and "hey @grok please change my and my friends’ dresses to bikini." The ministry flagged this as a serious failure of the platform's statutory due-diligence obligations under the IT Act, 2000, and the IT Rules, 2021, stating that such content violated the dignity, privacy, and safety of individuals.
Beyond a Technical Glitch: A Governance and Legal Challenge
The issue is far more complex than simply tweaking an AI's filters. While technical safeguards such as stricter prompt controls and better detection systems are necessary, the problem extends into platform governance, user accountability, and legal compliance: bad actors can bypass safeguards by creating fake accounts or exploiting loopholes in the system.
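For illustration only, the sketch below shows what a first-pass prompt control of the kind mentioned above might look like. It is a hypothetical Python example, not X's or Grok's actual moderation code; every identifier in it (screen_prompt, classify_risk, DENY_TERMS) is invented for this sketch.

```python
# Minimal, illustrative sketch of layered prompt screening for an
# image-editing AI. All names here are hypothetical; this is not the
# platform's real pipeline.
import re

# Layer 1: a crude deny-list. Real systems pair this with trained ML
# classifiers, since keyword lists alone are trivially bypassed by
# rephrasing -- one reason the article calls filters insufficient.
DENY_TERMS = re.compile(r"\b(bikini|lingerie|nude|undress)\b", re.IGNORECASE)

def classify_risk(prompt: str) -> float:
    """Placeholder for an ML safety classifier returning a 0..1 risk score.
    Here the deny-list stands in for it; production systems would call a
    trained model instead."""
    return 1.0 if DENY_TERMS.search(prompt) else 0.0

def screen_prompt(prompt: str, references_real_person: bool) -> bool:
    """Return True if the edit request should be blocked. Requests that
    target a real, identifiable person get a stricter threshold,
    reflecting the non-consensual-imagery risk described above."""
    risk = classify_risk(prompt)
    threshold = 0.5 if references_real_person else 0.9
    return risk >= threshold

if __name__ == "__main__":
    # One of the prompts cited in the MeitY order would be caught here:
    print(screen_prompt("put me in a lab coat with lingerie underneath", True))  # True -> blocked
    print(screen_prompt("put me in a lab coat", True))                           # False -> allowed
```

Even such a layered check only addresses the prompt itself; it does nothing against the fake-account and loophole abuse noted above, which is why governance and accountability measures remain essential.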
This incident underscores a growing global tension: as AI systems like Grok become more powerful and their outputs more realistic, the potential for misuse by malicious actors expands dramatically. The situation in India was escalated after Shiv Sena (UBT) lawmaker Priyanka Chaturvedi wrote to Union Minister Ashwini Vaishnaw, flagging concerns over the circulation of objectionable AI-generated images of women through fake accounts.
The Global Landscape of AI Regulation and the Road Ahead
India's action is part of a broader, worldwide struggle to manage the risks of generative AI, and countries have adopted varied approaches. The European Union's AI Act imposes strict transparency and accountability rules on high-risk systems; the UK's Online Safety Act requires platforms to proactively remove harmful material; and Singapore criminalizes non-consensual sexual imagery, including AI-generated content.
Experts argue that regulation alone is insufficient, especially as AI models improve and their increasingly realistic outputs make deepfakes and non-consensual imagery easier to produce at scale. Effective mitigation requires a multi-pronged strategy combining evolving regulation, robust technical guardrails, proactive human moderation, and greater user awareness.
Despite the clear risks, AI image generation also holds legitimate benefits across sectors like e-commerce (virtual try-ons), healthcare (medical imaging simulations), education, and creative design. The ultimate challenge for governments and tech companies is to foster innovation and harness these benefits while building a resilient framework that prevents technology from becoming a vehicle for harassment and harm.