When OpenAI wrote its founding charter in 2018, it was a scrappy nonprofit trying to prove it could be trusted with one of the most consequential technologies in history. A lot has changed. On Sunday, CEO Sam Altman published 'Our Principles'—a 1,100-word, five-point framework that reflects a company now valued at over $800 billion, deep in legal battles over its nonprofit roots, and competing hard against rivals it once promised to help.
From AGI Obsession to Broad AI Rollout
The 2018 charter mentioned AGI twelve times. The new document? Twice. That is not an accident. OpenAI no longer organises itself around artificial general intelligence the way it did nearly a decade ago; it is prioritising a broad rollout of its technology instead. Where the original charter framed everything around the safe arrival of superintelligence, Altman's 2026 version talks about distributing AI widely and letting society adapt in real time.
Altman even addressed the shift on his personal blog earlier this month, writing that AGI has a 'ring of power' quality that makes people do irrational things—and that the only fix is to share the technology as broadly as possible.
The Five Principles, Quickly Explained
The new document is built around five ideas: democratisation, empowerment, universal prosperity, resilience, and adaptability.
- Democratisation means resisting AI power concentration—decisions about AI should come from democratic processes, not just lab boardrooms.
- Empowerment gives users broad latitude, though it ties that freedom to harm prevention.
- Universal prosperity links AI access to massive infrastructure buildout; OpenAI is no longer talking only about models—it is talking like a cloud company, an energy customer and a national industrial asset.
- Resilience addresses genuine risks: bioweapons, cybersecurity, critical infrastructure.
- Adaptability—perhaps the most telling principle—explicitly leaves room for OpenAI to restrict access when it deems risks too high.
The Commitment OpenAI Quietly Dropped
The 2018 charter had an unusual pledge: if a safety-conscious rival got close to building AGI first, OpenAI would stop competing and help them instead. The 2026 version makes no mention of sharing progress or stepping aside. Instead, the new document acknowledges OpenAI is now 'a much larger force in the world' and promises transparency about future changes, which is not quite the same thing as promising not to compete.
This matters given the context. OpenAI is currently in court over its shift from nonprofit to for-profit. Meanwhile, Anthropic—its most direct rival—recently surpassed it in some secondary market valuations.
Flexible Principles Are Still Principles, Just Looser Ones
The document feels softer and more flexible, giving OpenAI more room to manoeuvre than before. The old charter used language like 'we commit' and 'we will.' The new one leans on 'we believe' and 'we envision.' That is not necessarily cynical—companies evolve, and OpenAI has changed enormously since 2018. But flexibility cuts both ways. A principle that can bend with evidence can also bend with market pressure.
What Altman has written is less a set of constraints and more a statement of intent: build a lot, deploy broadly, course-correct as you go. Whether that is reassuring depends entirely on how much you trust the person doing the correcting.