A report from the US Congress has revealed that China conducted an extensive artificial intelligence-powered disinformation campaign targeting India following India's Operation Sindoor military operation. The campaign specifically aimed to undermine confidence in India's Rafale fighter jets while promoting China's J-35 stealth aircraft.
The AI-Powered Disinformation Operation
According to the detailed report released by the US Congress, China systematically used AI tools to create and spread false information about India's defense capabilities. The campaign began shortly after India's Operation Sindoor demonstrated the country's growing military prowess in the region.
The disinformation effort involved creating thousands of fake social media accounts and generating AI-written content that questioned the effectiveness of India's recently acquired Rafale fighter jets. Simultaneously, these accounts heavily promoted China's J-35 fighter aircraft as a superior alternative, attempting to influence defense procurement decisions across the region.
Timing and Strategic Objectives
The campaign's timing was particularly significant: it launched immediately after Operation Sindoor showcased India's enhanced military capabilities. US analysts noted that China's information warfare units specifically targeted defense communities, policymakers, and media outlets across South Asia.
The core objective was twofold: to diminish the perceived value of India's Rafale acquisition while creating market demand for China's J-35 exports. The AI systems generated convincing but entirely fabricated performance comparisons, technical specifications, and even fake expert testimonials favoring the Chinese aircraft.
Technical Sophistication and Detection
What made this campaign particularly concerning was its technical sophistication. The AI systems could generate content in multiple regional languages, adapt messaging based on audience engagement, and create seemingly authentic multimedia content including deepfake videos and AI-generated images.
US cybersecurity firms eventually detected the campaign through advanced pattern recognition algorithms that identified the AI-generated content's digital fingerprints. The investigation revealed that the operation was coordinated through multiple layers of proxy servers and involved sophisticated bot networks that could mimic human behavior with alarming accuracy.
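The report does not disclose the firms' actual detection methods, but one of the simplest signals used in this kind of analysis is coordination: many accounts posting identical text within seconds of each other. The sketch below is a minimal, hypothetical illustration of that idea; the account names, timestamps, and post texts are invented for the example, and real pipelines rely on far richer behavioral and linguistic features.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical toy data: (account, ISO timestamp, post text) tuples standing
# in for scraped posts. Invented purely for illustration.
posts = [
    ("acct_01", "2025-06-01T10:00:03", "Rafale underperformed; J-35 is the future"),
    ("acct_02", "2025-06-01T10:00:05", "Rafale underperformed; J-35 is the future"),
    ("acct_03", "2025-06-01T10:00:04", "Rafale underperformed; J-35 is the future"),
    ("acct_04", "2025-06-02T18:41:10", "Great weather in Delhi today"),
]

def flag_coordinated(posts, window_seconds=30):
    """Flag accounts that post identical text within a short time window --
    one crude signal of coordinated inauthentic behavior."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((datetime.fromisoformat(ts), account))

    flagged = set()
    for text, entries in by_text.items():
        if len(entries) < 2:
            continue  # a unique post is not evidence of coordination
        entries.sort()  # order by timestamp
        span = entries[-1][0] - entries[0][0]
        if span <= timedelta(seconds=window_seconds):
            flagged.update(account for _, account in entries)
    return flagged

print(sorted(flag_coordinated(posts)))
# → ['acct_01', 'acct_02', 'acct_03']
```

Here the three accounts that posted the same message within a two-second burst are flagged, while the lone organic post is not. Production systems generalize this by fuzzy-matching near-duplicate text and clustering accounts on shared infrastructure, which is what makes human-mimicking bot networks so much harder to catch.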
Broader Implications for Regional Security
This revelation comes at a time when information warfare has become an increasingly critical component of international relations. The US Congress report emphasizes that such AI-powered disinformation campaigns represent a new frontier in geopolitical competition, where truth becomes the first casualty.
The report specifically warns that China's use of AI for strategic disinformation targeting regional competitors marks a significant escalation in hybrid warfare tactics. It calls for enhanced international cooperation to develop countermeasures against such AI-driven information operations.
For India, the discovery of this campaign underscores the importance of developing robust cybersecurity infrastructure and media literacy programs to combat foreign disinformation efforts. The incident also highlights the evolving nature of threats in the digital age, where artificial intelligence can be weaponized to undermine national security and influence strategic decisions.