Anthropic President Declares AGI Concept Outdated Amid AI's Uneven Progress

In a bold statement that questions a core ambition of the tech world, Daniela Amodei, President of leading AI firm Anthropic, has suggested that the pursuit of Artificial General Intelligence (AGI) may be aimed at an irrelevant target. Amodei argues that the concept itself is outdated, failing to capture the current reality, in which AI systems wildly outperform humans at specific tasks while struggling with others that are simple for people.

AGI: An Outdated Goal in a World of Uneven AI?

During a recent interview with CNBC, Amodei expressed her scepticism about the term AGI, which refers to the theoretical point where machine intelligence matches human-level capability across the board. "AGI is such a funny term," she remarked. "Many years ago, it was kind of a useful concept to say, 'When will artificial intelligence be as capable as a human?' I think maybe the construct itself is now wrong—or maybe not wrong, but just outdated."

This critique arrives at a time when technology giants are investing tens of billions of dollars with the explicit goal of achieving AGI. Leaders like OpenAI's Sam Altman express confidence in knowing how to build it, predicting powerful AI agents will enter the workforce by 2025. Google DeepMind's Demis Hassabis forecasts AGI within five to ten years. Interestingly, Amodei's own brother and Anthropic CEO, Dario Amodei, previously estimated its arrival around 2026. Elon Musk has also shifted his timeline, now predicting his xAI company will reach AGI by 2026, after earlier forecasting 2025.

Superhuman Skills Meet Glaring Weaknesses

Amodei pointed to a concrete example that highlights this paradox. She revealed that Anthropic's own Claude AI model now writes code comparable to that of many professional engineers within the company. "That's crazy," she said, acknowledging the breathtaking speed of advancement. The claim is backed by performance on SWE-Bench Verified, a rigorous benchmark built from real-world GitHub issues that mimics the daily work of professional developers. In November 2025, Claude Opus 4.5 became the first model to exceed 80% accuracy on the test.

Yet, the same system that can outperform engineers in coding tasks "still can't do a lot of things that humans can do," Amodei admitted. This stark contrast—where AI exhibits superhuman proficiency in narrow domains but falters at basic, general tasks—fundamentally undermines the idea of a single, universal intelligence threshold like AGI.

The Real Challenge: Adoption Over Milestones

For Amodei, the industry's obsession with an AGI finish line is missing the point. The more urgent and practical question is how businesses and society will actually integrate and use these powerful but uneven tools. She highlights that even as models rapidly improve, real-world adoption faces significant hurdles.

These hurdles include the complex work of managing organisational change, navigating lengthy procurement processes, and the critical difficulty of identifying where AI genuinely adds tangible value versus where it is merely hype. The future, in her view, won't be defined by ticking a box next to a textbook definition of AGI.

The future of AI will depend on a clear-eyed understanding of what these systems can truly do, where they consistently miss the mark, and how wisely society chooses to deploy them. For businesses in India and worldwide, the focus must shift from a distant theoretical milestone to the immediate tasks of integration, adaptation, and ethical application.