2026: Reimagining AI Beyond Corporate Control for a Collective Future
Why Our AI Future Must Be Collectively Built, Not Inherited

As we step into 2026, Artificial Intelligence (AI) has cemented its place as the defining technology of our era. It has moved from an emerging frontier to an omnipresent force, woven into the fabric of daily life, education, work, and governance. This rapid integration, however, brings a familiar sense of déjà vu for observers of digital evolution, echoing the transformative waves of the internet, social media, and platformisation.

The Illusion of an Inevitable AI Destiny

The current narrative surrounding AI, particularly Generative AI, is one of predetermined fate. There is a growing resignation that these technologies will unilaterally replace human labour, creativity, and connection. Evidence of this shift is already stark: twenty percent of all YouTube videos are now AI-generated. Educational landscapes are shifting, with students relying on prompts and educators grappling with AI-assisted grading. Simultaneously, corporations race for skilled talent while workers fear obsolescence.

This path feels charted by a select few global tech oligarchies. The AI presented to the world is not a neutral or 'natural' state of technological progress. Instead, it is a specific expression moulded by extractive and profit-driven motives. For nations in the Global South, including India, this often means inheriting futures scripted in distant boardrooms, far from the local contexts where the impacts are most profoundly felt.

The Hidden Costs of an AI-Powered World

Even remarkable breakthroughs in science and medicine powered by AI are shadowed by significant collateral damage. The environmental toll of training massive models, the weaponisation of misinformation, the erosion of public trust, and the strain on care systems and planetary ecologies create a complex picture. The promise of digital hope—egalitarianism, freedom, and equity—has proven fragile. Governments despair at AI's threat to democracy, while citizens seek new structures of governance, indicating a crisis of faith in existing institutions.

The challenge, therefore, is not merely to adopt AI but to critically interrogate its trajectory. The central question becomes: Who shapes our AI future, and for whose benefit?

Imagining and Building Alternative AI Futures

The pivotal realisation for 2026 is that AI technology is not settled; it is still profoundly unsettled and in the making. This opens a window of opportunity for collective action. The task ahead is to actively construct alternative AI futures rather than passively inherit a monolithic one. This requires a fundamental shift in how we approach technology development and ownership.

Potential pathways include decentralising control and fostering digital commons. This could involve:

  • Community data trusts that protect and empower local information.
  • Cooperative infrastructures owned and governed by users.
  • Public-interest platforms designed for societal good over profit.
  • Spaces where marginalised communities have genuine authority, not just basic access.

By inviting a broader set of stakeholders—from civil society and universities to local governments and citizen groups—we can seed the development of AI with different intentions, applications, and ethical foundations. Our AI futures cannot be things we merely inherit. They must be things we collectively make. This collective reimagination is the only way to hold corporations, governments, and ourselves accountable for creating a more equitable technological landscape.

The argument is presented by Nishant Shah, a professor of Global Media at the Chinese University of Hong Kong and a faculty associate at the Berkman Klein Center for Internet & Society, Harvard University, in an article first published on January 1, 2026.