Karen Hao Critiques Silicon Valley's AI Imperialism, Urges India to Chart Its Own Course
Karen Hao, author of the New York Times bestseller Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, has dedicated years to examining the human and environmental toll of Silicon Valley's frenetic race to construct AI infrastructure and models. During a recent visit to India for the Synapse society and technology conference, she spoke with Sujit John about what she terms the "imperial" nature of big tech. Hao emphasized that India must forge its own AI path, prioritizing small models and affordable infrastructure to avoid falling into a dependency trap.
India's AI Infrastructure Build-Out: A Critical Perspective
"India is beginning to see this big build-out of AI data centre infrastructure. How do you see it?" Hao was asked. She responded with a pointed critique: "What I have seen with a lot of countries that have this catch-up mentality is, they're looking at the Silicon Valley model and saying we need to catch up to that. And that in and of itself is what I critique as imperial. You're using the idea that's coming from the empire and simply taking that as the template for you."
Instead, Hao advocates for a fundamental rethinking. "You could just fundamentally rethink what AI should be for this country," she said, noting sessions at the AI Summit that focused on open-source models and smaller architectures. "There were people who brought up these really good points that for AI to work at scale in India, it actually cannot be large language models (LLMs) because it needs to run very cheaply, it needs to be able to run without internet on someone's mobile device so that a farmer can detect diseases for their crops, or doctors in rural areas can actually use this type of technology."
Hao believes India possesses all the necessary ingredients for this alternative approach. "If India thought from the ground up from that perspective, catching up in the AI era could look completely different. India already has all the ingredients it needs: a fantastic base of talent, the infrastructure capabilities for smaller models, the high-quality data for training these application-specific AI technologies. And then you wouldn't need these huge partnerships (with American companies) anymore, and you wouldn't actually be giving away your sovereignty."
China's Small-Model Approach and Lessons for India
When questioned about whether major countries are adopting the small-model approach and what China is doing, Hao highlighted China's strategic divergence. "China's actually doing a lot of smaller models, because they're very compute constrained. And China also has a very different approach to AI development than Silicon Valley, which has this mentality of just pushing technical advancement for technical advancement's sake."
She contrasted this with Silicon Valley's current struggles. "And now Silicon Valley's run into this issue where they don't have product-market fit. So they're trying to find ways to convince people to buy their products. During the AI Summit, Brad Smith (Microsoft president) gave a keynote and he used this very specific turn of phrase: governments need to help us generate demand for our technologies. He actually said out loud what is usually unspoken."
Hao explained that Chinese companies operate under different pressures. "Chinese companies have a different mentality. They can't afford to build technologies that no one's going to use because the venture capital model and the investment model are very different. VCs are not willing to wait years and years for their money to be returned. So they're much more thinking, what are the applications that meet our users where they are? And oftentimes those applications do not require the scale that is being built by the American companies."
While China also develops large language models, Hao noted a key efficiency. "They're also building them, but with significantly less computational cost. That's what we saw with DeepSeek, which has the same capabilities but is significantly cheaper to build and to run. And that's also why there are a lot of companies in the US now – not model developers but consumers of AI technologies – that are using Chinese models instead of models from Silicon Valley, because they're the same quality or sometimes better, and they're just cheaper."
She clarified, "I'm not saying everyone should be using Chinese models, but this is an example of how a country and its companies rethought from the bottom up what models are going to work for them. And I think India has the potential to do the same thing."
The Human Cost: Data Annotation as Modern-Day Imperialism
The conversation turned to the human toll of AI development, specifically data annotation work in India, which Hao has characterized as a form of "modern-day imperialism." She elaborated, "What I show in my book is time and time again, as the tech industry's appetite for resources accelerates exponentially, where do they go looking for those resources? Whether it's human resources, like the labour that they need to train their models, or physical resources, like the minerals that they extract, or energy resources, they always go to the poorest communities."
"India is a huge base for the labour that is supplied to these companies. And the way that they treat that labour is horrible," Hao stated. She cited a recent investigation by The Guardian exposing grim conditions. "There was an investigation that just came out in The Guardian about women in India from the poorest neighbourhoods who are being roped into doing content moderation for pornography, for child sexual abuse material. Because we are now talking about video generation models that can generate this kind of material, companies that want to prevent their models from producing it are building content moderation filters that are trained by humans in these extremely poor contexts."
Hao described the ethical breach: "And when the women protested and said, this is affecting my well-being, my family's well-being, and breaking down my communities, because I was never told that this was part of the job, they were told: you were told your job is data annotation, and this is data annotation."
Solutions: Application-Specific AI to Avoid Harmful Content
Asked about solutions to such content moderation issues, Hao pointed to the root cause. "The reason why there is all this harmful content that needs to be filtered out is because these companies are training their models on the entire internet. That's why there's harmful content."
She advocated for a targeted approach. "If you were talking about very application-specific technologies, like helping a farmer detect disease in their crops, why are you then training on pornography? That's not going to help detect crop diseases. You would just take curated photos of different types of diseases on different types of crops and train a model, so you never have to deal with content moderation."
Hao condemned the current industry practice. "But because the companies take the opposite approach, they just hoover up everything, and then they expect people in the poorest neighbourhoods to be human shields for all of their users." This stark critique underscores her call for India to develop AI that is ethical, affordable, and sovereign, breaking free from what she sees as Silicon Valley's imperial blueprint.
