India's rapidly evolving digital landscape faces a new regulatory challenge as the bundling of artificial intelligence tools with telecom services raises significant privacy and competition concerns. With giants like Bharti Airtel and Reliance Jio partnering with AI firms, millions of users are being ushered into a new ecosystem that existing rules are ill-equipped to govern.
The Architecture of Power and Data Flow
AI-telecom bundling involves telecom providers pre-installing or offering AI services—such as assistants, search tools, and content generators—to their subscribers, often free or at a subsidized price. While this provides convenience and value to users, it creates a powerful new architecture for data flow and control. For AI companies, these partnerships guarantee instant, massive user bases. For telecom operators, they open fresh revenue streams and deepen customer engagement.
However, this convenience is shadowed by two major complications: the opacity of commercial agreements between telecom and AI firms, and the unclear limits on how much customer behavior data the bundled AI can access. When a network provider bundles an AI assistant, critical questions about data usage, model training, and liability in case of errors remain unanswered.
Data Privacy: The Core Concern
The convergence of telecom and AI data is particularly alarming. Telecom companies hold extensive longitudinal datasets linked to user accounts, devices, and usage patterns. When combined with the access granted to bundled AI services—which can include photos, location history, call records, and device telemetry—the potential for profiling escalates dramatically.
AI analysis of this combined data can reveal intensely sensitive personal attributes, such as location patterns indicating religious practices, browsing habits suggesting political leanings, or even biometric templates derived from video analytics. The bundling model often establishes continuous data-sharing channels between telecom services and AI providers, frequently without obtaining granular, informed consent for each specific purpose, as mandated by India's Digital Personal Data Protection Act (DPDPA).
Many platforms obtain user permission for data training under vague, altruistic prompts like "improve the model for everyone," obscuring the true privacy implications. For instance, Google retains conversations with its Gemini chatbot for training unless users opt out, keeping chats for users 18 and older for 18 months by default. Similarly, OpenAI's ChatGPT and Anthropic's Claude use consumer conversations for model training by default, relying on users to find and activate opt-out settings. When such consent is buried in lengthy terms or bundled with access to essential services, it can hardly be considered voluntary or informed.
The Path Forward: Regulation with Teeth
India's recently released AI governance guidelines, which emphasize transparency, fairness, accountability, and safety, provide a principled foundation. However, these principles require enforceable rules to have real impact. Experts argue that the Telecom Regulatory Authority of India (TRAI) must step in to create a robust regulatory framework.
Key recommendations include mandating that telecom firms separate consent for core connectivity services from consent for optional AI features. Training AI models on customer data should require an explicit, clear opt-in from users, not a hard-to-find opt-out. Furthermore, any commercial relationships that could influence AI outputs must be disclosed upfront, at the point where consumers make their decisions.
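To make the opt-in principle concrete, consider the minimal sketch below (in Python). It is purely illustrative: the field names, purposes, and API are assumptions for this article, not any operator's actual system or a TRAI-mandated schema. The point is structural: connectivity stands on its own contract, and every AI-related data purpose defaults to off until the subscriber explicitly grants it.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names and structure are assumptions,
# not any operator's real consent system.

AI_PURPOSES = {
    "ai_assistant_enabled",
    "ai_training_on_user_data",
    "ai_partner_data_sharing",
}

@dataclass
class ConsentRecord:
    subscriber_id: str
    # Core connectivity is contractual and never bundled with AI consent.
    connectivity_active: bool = True
    # Every optional AI purpose defaults to off: explicit opt-in, not opt-out.
    ai_assistant_enabled: bool = False
    ai_training_on_user_data: bool = False
    ai_partner_data_sharing: bool = False

    def grant(self, purpose: str) -> None:
        """Record an explicit, per-purpose opt-in from the subscriber."""
        if purpose not in AI_PURPOSES:
            raise ValueError(f"Not a grantable AI purpose: {purpose}")
        setattr(self, purpose, True)

record = ConsentRecord(subscriber_id="SUB-001")
# Connectivity works with zero AI data flows enabled...
assert record.connectivity_active and not record.ai_training_on_user_data
# ...and training on customer data starts only after a specific opt-in.
record.grant("ai_training_on_user_data")
```

The design choice this encodes is exactly what regulators would need to verify: refusing an AI purpose can never degrade connectivity, and no AI purpose can be switched on as a side effect of another.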
The competition angle is equally critical. A recent study by the Competition Commission of India (CCI) highlighted data access as a major barrier for AI startups. Telecom-AI bundles exacerbate this by giving large, established AI firms ready access to vast datasets via telecom partners, creating formidable network effects and potentially stifling innovation. Exclusive partnerships can distort the market, restrict consumer choice, and embed hidden preferences into algorithmic recommendations.
Currently, TRAI's regulations do not adequately cover ancillary services like AI bundles, and the Consumer Protection Act falls short on addressing AI-specific issues like misinformation, algorithmic bias, or liability. While the DPDPA places responsibility on the data fiduciary, global trends point towards joint liability across the AI supply chain. Indian regulators must clearly define and assign responsibility, whether the player is domestic or foreign, especially given the cross-border nature of many AI deployments.
TRAI could extend its service quality and consumer protection standards to cover AI providers in these bundles, drawing inspiration from the Reserve Bank of India's rules for outsourcing in financial services. Shared oversight between telecom operators and AI providers is essential. Technical safeguards—such as robust encryption, segregation between network and AI service environments, strict data retention limits, and genuine data minimization protocols—should be prerequisites for permitting such bundling arrangements.
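As a rough illustration of what "data minimization" and "retention limits" could mean in an audit, the sketch below filters a combined record down to only the fields a declared AI purpose needs, keeping network identifiers on the telecom side, and flags records that have outlived their retention window. All field names, purposes, and time limits are hypothetical assumptions, not any regulator's actual rules.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: hypothetical categories and limits, not a
# prescribed standard.

RETENTION_LIMITS = {
    "ai_query_logs": timedelta(days=30),
    "network_telemetry": timedelta(days=7),
}

# Per-purpose allow-lists: the AI service sees only what its purpose needs.
MINIMIZED_FIELDS = {
    "ai_assistant": {"query_text", "response_text", "timestamp"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field the declared purpose does not strictly require."""
    allowed = MINIMIZED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def is_expired(created_at: datetime, category: str) -> bool:
    """True once a record has outlived its category's retention window."""
    return datetime.now(timezone.utc) - created_at > RETENTION_LIMITS[category]

raw = {
    "query_text": "nearest clinic",
    "response_text": "...",
    "timestamp": "2025-01-01T10:00:00Z",
    "cell_tower_id": "TWR-442",      # network data: stays on the telecom side
    "imei": "356938035643809",       # device identifier: never shared
}
shared = minimize(raw, "ai_assistant")
assert "cell_tower_id" not in shared and "imei" not in shared

old_log_time = datetime.now(timezone.utc) - timedelta(days=45)
assert is_expired(old_log_time, "ai_query_logs")  # past the 30-day window
```

Simple checks of this kind are auditable, which is what turns "genuine data minimization" from a principle into a condition a bundling arrangement can demonstrably pass or fail.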
The stakes are high for India's digital future. Getting this regulatory balance wrong risks entrenching monopolies, eroding citizen privacy, and leaving consumers without recourse when AI systems fail. Getting it right could position India as a global leader in fostering innovation that responsibly balances technological advancement with robust consumer protection.