India's IT Minister Advocates Techno-Legal Framework to Combat AI-Generated Harmful Content
IT Minister Ashwini Vaishnaw has called for a comprehensive techno-legal approach to address the growing threat of AI-generated harmful content, particularly deepfakes. Speaking at the ongoing IndiaAI Impact Summit 2026, the minister highlighted the urgent need for stronger regulations and technological solutions to protect society from malicious uses of artificial intelligence.
Global Consensus on Ethical AI Use
In an interview with The Times of India, Vaishnaw noted that a positive consensus is emerging among global leaders regarding the ethical use of AI. "A good consensus is emerging among the global leaders. In our interactions with many other countries, everybody believes that AI should be used for good and all harmful impacts must be contained," he stated. The minister emphasized that addressing these issues requires more than just legislation; it demands a technological approach where safety is integrated into AI systems.
IndiaAI Safety Institute's Role
Vaishnaw pointed to the IndiaAI Safety Institute (AISI) as a key initiative in this effort. "It has to be done through a techno-legal approach and cannot be done through passing a law alone. We need technological solutions where AI can be used safely," he explained. The AISI is collaborating with academic institutions to develop indigenous, secure, and ethical AI frameworks aimed at mitigating risks like deepfakes and other harmful content.
Urgent Need for Stronger Deepfake Regulations
The minister expressed deep concern over the proliferation of deepfakes online, calling for much stronger regulation. "I think we need stronger regulation on deepfakes. It is a problem growing day by day. We need to protect our society from this harm," Vaishnaw told reporters. He revealed that the government has already initiated dialogues with industry stakeholders to address this issue, acknowledging that current measures are insufficient.
Vaishnaw further stressed the importance of parliamentary consensus for enacting robust regulations. "We definitely must create that consensus within parliament for significantly stronger regulation," he added. He also pointed to age-based restrictions, noting that the government has introduced content differentiation to shield students and young people from harmful AI-generated material.
New IT Rules Mandate AI Content Labelling
Vaishnaw's comments follow a recent amendment to India's IT intermediary rules that formally brings AI-generated content under regulatory scrutiny. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified via gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, take effect from February 20.
The new regulations impose mandatory labelling requirements for synthetically generated information (SGI) on platforms such as:
- YouTube
- Meta-owned Instagram and Facebook
- X (formerly Twitter)
These platforms must ensure that labels are prominent enough for users to identify AI-generated content at a glance. They are also required to deploy automated tools that cross-verify content by format, source, and nature before publication.
This regulatory move marks a significant step in India's efforts to combat the misuse of AI technology, aligning with Vaishnaw's call for a balanced techno-legal strategy to foster innovation while ensuring public safety.
