Manual Deployment Mistake Exposes $2.5 Billion AI Coding Assistant's Inner Workings
In a revelation that sent shockwaves through the artificial intelligence industry, a simple human error during a routine update laid bare the proprietary architecture of one of AI's most lucrative coding tools. Anthropic's Claude Code, the agentic coding assistant that reached a staggering $2.5 billion in annualized revenue just one year after launch, had its complete source code inadvertently published for anyone to access, download, and scrutinize.
The Accidental Public Release on npm Registry
Earlier this week, during a standard Claude Code update, Anthropic engineers accidentally included a debug source map file in the package they pushed to the public npm registry. The oversight turned a routine release into a massive intellectual-property exposure. Within hours, an alert user on the social media platform X posted a direct link to the full code archive; the post rapidly amassed over 30 million views.
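To see why a single source map file is enough to expose an entire codebase: bundlers typically embed every original source file verbatim in the map's "sourcesContent" array, so anyone holding the .map file can reconstruct the pre-bundle source tree. The minimal sketch below uses an invented map, not anything from the actual leak:

```javascript
// A source map (format v3) as a bundler might emit it. The file
// paths and contents here are made up for illustration.
const map = {
  version: 3,
  sources: ["src/cli.ts", "src/agent.ts"],
  sourcesContent: [
    "export const main = () => console.log('cli');",
    "export class Agent { /* ... */ }",
  ],
  mappings: "AAAA",
};

// Recover every original file from the map alone: pair each path
// in "sources" with its full text in "sourcesContent".
const recovered = Object.fromEntries(
  map.sources.map((p, i) => [p, map.sourcesContent[i]])
);

console.log(Object.keys(recovered)); // the original file paths
```

This is why shipping a .map alongside a minified bundle is effectively shipping the source itself.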
The exposed codebase was substantial, comprising 512,000 lines of code distributed across nearly 1,900 individual files. Mirrors and forks of the leaked code quickly flooded GitHub as developers around the world seized the chance to examine the system's inner workings.
Anthropic's Response and Root Cause Analysis
Boris Cherny, the creator of Claude Code, publicly confirmed the incident's cause on X, attributing it directly to a missed manual step in their deployment process. "Our deploy process has a few manual steps, and we didn't do one of the steps correctly," Cherny explained in his statement. Anthropic officially characterized the incident as a packaging issue rather than a security breach, emphasizing that no customer data, credentials, or sensitive information were compromised during the exposure.
Initial speculation in the developer community pointed toward Bun, the JavaScript runtime Anthropic had previously acquired, on the theory that a known bug could cause source maps to be served inadvertently. Cherny swiftly dismissed this, however, clarifying that the incident was purely the result of developer error and unrelated to any existing software vulnerability.
No Terminations and the Automation Solution
When questioned on social media about potential consequences for the responsible individual, Cherny demonstrated remarkable support for his team. "Full trust," he wrote in response to whether the person was "still breathing." He elaborated that "the problem wasn't the person, it was infra that was error prone. Anyone could have made this same mistake by accident."
Significantly, no employees were terminated over the incident. Instead, Anthropic is doubling down on automation: the company plans to add more automated checks to its deployment infrastructure, potentially using Claude itself to verify deployment results before any update is publicly released.
Discoveries Within the Leaked Codebase
Once the code became publicly accessible, developers conducted thorough examinations, uncovering numerous previously confidential details about Anthropic's development roadmap. The exploration revealed references to unreleased model versions including Opus 4.7 and Sonnet 4.8, along with internal project codenames such as "Capybara" and "Tengu."
The most extensively discussed discovery was KAIROS, an always-on background agent designed to handle tasks proactively. This system maintains daily action logs and executes a nightly routine called "autoDream" that reorganizes accumulated knowledge. Cherny confirmed that Anthropic remains undecided about whether to officially release this feature to the public.
Additional findings included a Tamagotchi-style coding companion that sits alongside the input box and reacts visually to the user's work, and a sentiment analysis system that flags swear words as negative indicators. Cherny confirmed the latter system's existence, noting that "We put it on a dashboard and call it the 'fucks' chart."
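A swear-word signal like the one Cherny described can be as simple as counting marker words in user messages. The toy sketch below invents its own (mild) word list and scoring; the leaked system's actual vocabulary and logic are not public:

```javascript
// Toy frustration signal: count occurrences of negative marker
// words in a piece of text. The word list here is invented for
// illustration, not taken from the leaked code.
const NEGATIVE_MARKERS = ["damn", "crap", "wtf"];

function negativeMarkerCount(text) {
  // Tokenize into lowercase words, then count list hits.
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  return words.filter((w) => NEGATIVE_MARKERS.includes(w)).length;
}
```

Aggregated per session or per day, a counter like this is all it takes to drive the dashboard Cherny mentioned.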
DMCA Takedown Requests Misfire
Anthropic's containment efforts encountered significant complications as well. The company filed sweeping DMCA takedown requests that inadvertently targeted legitimate forks of their own open-source Claude Code repository—code completely unrelated to the leak. Developer Theo, whose repository was incorrectly flagged during this process, described the action as "absolutely pathetic."
Cherny acknowledged that the overbroad takedown requests were unintentional, and confirmed that Anthropic collaborated with GitHub to rescind the erroneous claims against unaffected repositories.
This incident serves as a powerful reminder of how error-prone manual deployment processes remain at rapidly scaling AI companies, and of how much an accidental exposure can reveal about proprietary AI development.