The Indian government has escalated its scrutiny of social media platform X, led by billionaire Elon Musk, demanding more detailed information on the steps taken to address the misuse of its artificial intelligence chatbot, Grok. The move follows the platform's initial submissions in response to a directive from the Centre.
Government's Directive and X's Initial Response
The controversy centres on the alleged misuse of the Grok AI tool to generate sexualised and obscene images of women. Acting on these reports, the IT Ministry issued a formal directive to the social media giant, ordering a crackdown on this specific abuse of the AI technology. The platform, formerly known as Twitter, subsequently made its initial submissions to the ministry, outlining its stance and preliminary actions.
However, the government found these initial responses insufficient, and on 7 January 2026 officials decided to press for more clarity. The Centre has now asked X for a granular breakdown of the specific actions it has taken to curb the spread of such harmful AI-generated content, seeking concrete details on the measures implemented rather than general statements of policy or intent.
Deepening Scrutiny on AI Ethics and Platform Accountability
This development highlights the growing challenges regulators face with the rapid proliferation of generative AI tools. The case of Grok AI being used to create non-consensual, intimate imagery brings issues of digital safety, platform accountability, and AI ethics to the forefront of policy discussions in India. The government's insistence on detailed action reports signals a tougher stance on ensuring social media platforms actively police the misuse of their integrated AI features.
The directive places Elon Musk's company under increased pressure to demonstrate robust content moderation mechanisms, especially for its advanced AI products. The outcome of this exchange could set a precedent for how Indian authorities regulate the intersection of social media and powerful generative AI models, balancing innovation with user protection.
What Happens Next?
All eyes are now on X's next move. The platform must compile and present a comprehensive account of its actions to satisfy the IT Ministry's queries. Failure to provide satisfactory details could lead to further regulatory action. The incident underscores a critical moment for AI governance in India, as authorities move to hold global tech leaders accountable for the societal impact of their tools.
The timeline remains fluid, with the next steps dependent on the quality and promptness of X's detailed response. The situation continues to evolve, marking a significant test case for the enforcement of digital content laws in the age of advanced artificial intelligence.