Pentagon Official Exposes AI Contract Clauses That Triggered US Military-Anthropic Rift
A high-ranking Pentagon official has detailed the events that precipitated the abrupt severance of ties between the United States military and artificial intelligence firm Anthropic. According to a Reuters report, Emil Michael, the Under Secretary of Defense for Research and Engineering, disclosed that his review of AI contracts signed during the Biden administration revealed alarming provisions that severely constrained military capabilities.
"Holy Cow" Moment Over Restrictive AI Agreements
Speaking at the American Dynamism Summit in Washington on Tuesday, March 3, Michael described experiencing a "holy, holy cow" moment upon examining the contract terms. He elaborated that the agreements contained dozens of embedded restrictions, specifically impacting commands overseeing air operations in critical regions such as Iran, China, and South America.
"There were things ... you couldn't plan an operation ... if it would potentially lead to kinetics," Michael stated, using military terminology for explosions and combat. He emphasized that the restrictions were so sweeping they threatened to halt military planning in real time, effectively tying the hands of operators during active missions.
Critical Flashpoint: AI Model Shutdown During Operations
Most critically, Michael highlighted that the contracts were structured so that, if a military operator violated the AI provider's terms of service, the model could theoretically "just stop in the middle of an operation." While he did not explicitly name the AI company, it is widely understood that at the time of his review, Anthropic's Claude was the sole AI model accessible to the Defense Department on its classified systems.
The situation escalated further when a senior executive from an unnamed AI company began inquiring whether its software had been used in what Michael described as "one of the most successful military operations in recent memory." This operation is believed to be the U.S. government raid in January that captured former Venezuelan President Nicolás Maduro, with reports indicating Anthropic's Claude assisted in planning the mission.
Pentagon's Firm Stance and Swift Fallout
The suggestion that a private AI company might scrutinize or challenge the use of its technology in a classified military operation marked a turning point for Pentagon leadership. "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael asserted firmly.
The dispute reached a climax over Anthropic's refusal to remove restrictions related to autonomous weapons and mass surveillance from its government contracts—limits the company defended as essential ethical guardrails. In response, Defense Secretary Pete Hegseth designated Anthropic as a "supply-chain risk" to national security, effectively barring U.S. defense contractors from using its tools. President Trump subsequently issued an order completely banning Anthropic from all government business.
OpenAI Steps In Amidst the Turmoil
Within hours of the fallout, rival OpenAI announced its own agreement with the Pentagon for deploying its models on Defense Department networks. OpenAI CEO Sam Altman indicated in a statement that the Department had consented to certain restrictions with OpenAI as well, although the precise terms remain undisclosed. This rapid shift underscored a stark contrast: one AI company lost its government contracts in days, while another swiftly moved to fill the void.
Broader Implications: Who Sets the Rules for Military AI?
Michael's revelations raise profound questions extending beyond Anthropic and OpenAI, centering on the governance of AI in warfare. Should private AI companies be permitted to impose ethical limits on how the military employs their technology? Or does national security necessitate that the government, rather than a tech startup, retain ultimate authority?
For the present, the Trump administration has unequivocally stated its position. However, this debate is far from concluded, as it touches on core issues of ethics, innovation, and security in an increasingly AI-driven military landscape.
