Pentagon Considers Designating AI Developer Anthropic as Supply Chain Risk
The US Department of Defense is reportedly on the verge of classifying Anthropic, the developer behind the Claude artificial intelligence model, as a supply chain risk. According to a senior Pentagon official who spoke to Axios, the designation would require any entity doing business with the military to sever ties with the AI company. Defense Secretary Pete Hegseth is said to be nearing a final decision to cut off business connections with Anthropic.
Implications of the Supply Chain Risk Designation
Typically reserved as a punitive measure against foreign adversaries, this classification carries severe consequences. The senior official emphasized the complexity of disentanglement, stating, "It will be an enormous pain in the a** to disentangle, and we are going to make sure they pay a price for forcing our hand like this." Chief Pentagon spokesman Sean Parnell confirmed to Axios that the Department of Defense is reviewing its relationship with Anthropic, highlighting that national security and troop safety are paramount considerations.
The potential move holds significant ramifications, as Claude is currently the only AI model integrated into the military's classified systems. It was reportedly utilized during the US Army's January raid targeting Venezuelan ex-president Nicolas Maduro. Pentagon officials have previously lauded Claude's capabilities, making any separation a daunting and intricate process for the military and its partners.
Impact on Anthropic, Partners, and Customers
If Anthropic were designated a supply chain risk, companies doing business with the US Department of Defense would have to certify that they do not use Claude in their operations. Given that Anthropic recently disclosed that eight of the ten largest US corporations use Claude, the fallout could extend far beyond military circles, affecting a wide array of industries and customers.
The Pentagon contract in jeopardy is valued at up to $200 million, a relatively small fraction of Anthropic's $14 billion annual revenue. However, a senior administration official pointed out that competing AI models "are just behind" in specialized government applications, potentially complicating any swift transition away from Claude.
This development also sets a precedent for the Pentagon's negotiations with other AI giants, including OpenAI, Google, and xAI. While these companies have agreed to remove safeguards for use in unclassified military systems, their models are not yet deployed for more sensitive classified work. The Pentagon is confident that these firms will comply with an "all lawful use" standard, though sources indicate that discussions remain unresolved.
Root Causes of the Dispute
Anthropic and the Pentagon have engaged in months of contentious negotiations regarding the terms of military use for Claude. Anthropic is willing to relax its current usage policies but seeks assurances that its tools will not be employed for mass surveillance of American citizens or the development of fully autonomous weapons without human oversight.
The Pentagon has countered that these conditions are overly restrictive and impractical, insisting that Anthropic and other AI companies permit military utilization for all lawful purposes. Sources reveal that senior defense officials have grown frustrated with Anthropic and view this public dispute as an opportunity to apply pressure.
On the other side of the dispute, privacy advocates warn that existing mass-surveillance laws do not adequately account for AI advancements. The Pentagon already amasses extensive personal data, from social media activity to concealed carry permits, and there are fears that AI could dramatically expand its ability to target civilians.
In response to the situation, an Anthropic spokesperson stated, "We are having productive conversations, in good faith, with DoD on how to continue that work and get these new and complex issues right." The spokesperson reiterated that Claude was the first AI model deployed on classified networks, underscoring the company's dedication to leveraging frontier AI for national security objectives.
