Anthropic's Claude Code Source Leak Exposes Hidden Features and Security Flaws
Anthropic, the artificial intelligence company whose product announcements have repeatedly triggered volatility in global stock markets, is now grappling with a significant internal security lapse. In an embarrassing incident, the complete source code for Claude Code—Anthropic's flagship AI coding assistant—was accidentally published on the public internet. This leak occurred through an npm package that included a source map file that should not have been distributed.
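Source maps are the leak vector here: a map file's `sourcesContent` field can embed the original source text verbatim, so shipping the `.map` file ships the code itself. A minimal sketch of how much a recipient can recover (the map object below uses made-up contents, not data from the leak):

```typescript
// A source map's "sourcesContent" array can embed the original TypeScript
// verbatim, so publishing the .map file publishes the source itself.
// Hand-built map object with illustrative contents.
const exampleMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/cli.ts"],
  sourcesContent: ["const main = () => console.log('hello');"],
  mappings: "AAAA",
};

// Anyone who downloads the published tarball can reconstruct every file
// by pairing each path in "sources" with its entry in "sourcesContent":
const recovered: Record<string, string> = Object.fromEntries(
  exampleMap.sources.map((path, i) => [path, exampleMap.sourcesContent[i]])
);

console.log(recovered["src/cli.ts"]); // prints the original source line
```

In practice the fix is a packaging one: whitelist published files with the `files` field in package.json (or exclude `*.map` via `.npmignore`) and audit the tarball with `npm pack --dry-run` before release.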
Third Time's Not a Charm: A Pattern of Leaks
The exposed material comprises roughly 2,200 files and 30 megabytes of TypeScript code. According to engineers who examined the leaked data, this is at least the third time Anthropic has made the same mistake in handling its source code. The repetition of such a critical error raises serious questions about the company's internal security protocols and software development lifecycle management.
Hidden Features Revealed: From Kairos to Buddy Pets
Developers who analyzed the code dump discovered more than just standard engineering work. Buried within the files were several unreleased features that Anthropic had been developing behind compile-time feature flags. One particularly notable feature, codenamed Kairos, appears to be an always-on background agent with memory consolidation capabilities—essentially creating a version of Claude that never completely powers down.
Another surprising discovery was a full companion pet system called Buddy, complete with 18 distinct species, rarity tiers, shiny variants, and statistical distributions. The system includes whimsical elements that contrast with Anthropic's typically serious AI development work.
Additional hidden features include:
- Undercover Mode: Automatically activates for Anthropic employees working on public repositories, stripping AI attribution from commits without a visible off switch
- Coordinator Mode: Transforms Claude into an orchestrator managing parallel worker agents
- Auto Mode: Uses an AI classifier to silently approve tool permissions, eliminating the usual confirmation prompts
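The compile-time feature-flag pattern the article describes can be sketched as follows. The flag names and environment variables here are illustrative assumptions, not identifiers from the leak:

```typescript
// Hypothetical sketch of compile-time feature flags. A bundler inlines
// the env lookups at build time, so branches behind a false flag are
// dead-code-eliminated from the shipped bundle -- yet they remain fully
// visible in the original source, and therefore in any leaked source map.
const FEATURES: Record<string, boolean> = {
  KAIROS: process.env.FEATURE_KAIROS === "1",         // background agent
  BUDDY: process.env.FEATURE_BUDDY === "1",           // companion pets
  UNDERCOVER: process.env.FEATURE_UNDERCOVER === "1", // attribution stripping
};

// Pure helper: list which flags in a given map are switched on.
function enabledFeatures(flags: Record<string, boolean>): string[] {
  return Object.entries(flags)
    .filter(([, on]) => on)
    .map(([name]) => name);
}

console.log(enabledFeatures(FEATURES));
```

This is exactly why "unreleased" features can surface in a leak: the disabled code paths are stripped from the published bundle but not from the source that generated it.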
Architectural Insights: A Codebase Under Pressure
The leak provided unprecedented external visibility into how a well-funded AI product is constructed under significant development pressure. The findings revealed both technical sophistication and concerning architectural patterns.
The main user interface consists of a single React component spanning 5,005 lines, containing 68 state hooks, 43 effects, and JSX nesting that extends 22 levels deep. Engineers noted a TODO comment adjacent to a disabled lint rule on line 4114, suggesting rushed development practices.
The entry point file, main.tsx, runs to 4,683 lines and handles everything from OAuth authentication to mobile device management. Sixty-one separate files contain explicit comments documenting circular dependency workarounds, indicating architectural challenges.
One particularly telling detail involves the hexadecimal encoding of the word "duck"—String.fromCharCode(0x64,0x75,0x63,0x6b)—because the string apparently conflicted with an internal model codename that Anthropic's continuous integration pipeline scans for. Rather than implementing a regular expression exception, developers encoded every animal species in the pet system using hexadecimal notation.
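The encoding trick is straightforward to reproduce. The first line below is the pattern reported from the leak; the helper function is our own illustration of how each species name could be converted, not code from the dump:

```typescript
// The reported workaround: spell the word via character codes so a CI
// string scan never sees the literal text. 0x64 0x75 0x63 0x6b is ASCII
// for d, u, c, k.
const species = String.fromCharCode(0x64, 0x75, 0x63, 0x6b); // "duck"

// Illustrative helper (our sketch, not from the leak): turn any name
// into the equivalent fromCharCode expression the scanner cannot match.
function toCharCodeCall(name: string): string {
  const codes = [...name]
    .map((ch) => "0x" + ch.charCodeAt(0).toString(16))
    .join(",");
  return `String.fromCharCode(${codes})`;
}

console.log(species);                // "duck"
console.log(toCharCodeCall("duck")); // String.fromCharCode(0x64,0x75,0x63,0x6b)
```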
Security Implications and Broader Concerns
This incident is not isolated. Fortune reported a separate, earlier leak this week that exposed nearly 3,000 files, including a draft blog post revealing a powerful upcoming model referred to internally as both "Mythos" and "Capybara."
Security researchers examining the Claude Code leak have warned that it could enable competitors to reverse-engineer the product's agentic framework. They also caution that some internal Anthropic systems may remain reachable even without valid credentials, raising concerns about potential nation-state exploitation of the company's most advanced AI models.
Anthropic has confirmed the incident while attempting to minimize its impact. A company spokesperson told Fortune that no sensitive customer data or credentials were exposed, characterizing the event as a release packaging issue caused by human error rather than a security breach. The spokesperson added that Anthropic is implementing measures to prevent recurrence.
IPO Timing Creates Additional Pressure
The leak comes at an awkward moment for Anthropic. Bloomberg reported this week that the company is in preliminary discussions with Goldman Sachs, JPMorgan, and Morgan Stanley about a potential initial public offering in October, with a valuation approaching $380 billion.
Anthropic has already demonstrated its capacity to influence financial markets this year. Its Cowork and Claude Code Security updates erased billions of dollars in value from software and cybersecurity stocks within weeks. Leaking proprietary source code for a third time is far from ideal optics as the company prepares for one of the most anticipated technology IPOs in recent memory.
The incident highlights the tension between rapid AI development and robust security practices, particularly as companies like Anthropic prepare for public market scrutiny while managing increasingly complex codebases and competitive pressures.