Pentagon Issues Ultimatum to Anthropic Over AI Military Use, Threatens Defense Production Act

In a dramatic escalation, the Pentagon has delivered a firm ultimatum to Anthropic, the only artificial intelligence company currently operating on classified military systems, ordering the firm to comply with its demands by Friday. According to a senior Pentagon official, Defense Secretary Pete Hegseth has said that if Anthropic does not agree by 5:01 PM local time on Friday, the Trump administration will invoke the Defense Production Act. That move would compel Anthropic to make its AI model available for military use; the administration would also label the company a supply chain risk, potentially jeopardizing its government contracts.

Contradictory Threats Highlight Pentagon's Frustration

The Pentagon's threats present a fundamental contradiction: one would prevent the government from using Anthropic's products, while the other would force the company to allow government use. Despite this inconsistency, the threats underscore the intense anger within the Pentagon's top ranks toward Anthropic for resisting its demands and highlight the critical importance of the company's AI model to military operations. The situation reflects a high-stakes standoff over the ethical deployment of advanced AI in defense contexts.

High-Level Meeting Fails to Resolve Dispute

On Tuesday, Defense Secretary Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a meeting. While the discussion maintained a civil tone, it did not yield an agreement. When Anthropic refused to accede to Hegseth's demands, the defense secretary leveled the threats against the company, according to individuals briefed on the meeting. The confrontation marks a significant breakdown in negotiations between the tech firm and military leadership.

Anthropic's Ethical Concerns and Pentagon's Stance

Anthropic has argued that it is seeking reasonable assurances that its AI model will not be employed for surveillance of American citizens or in autonomous weapons systems, such as drone operations, that lack human oversight. The company's supporters contend that Anthropic is being penalized for being the first company on the classified system and for developing a specialized model, Claude Gov, which lacks the guardrails and restrictions present in its publicly available models. The dispute has raised ethical questions about the military's use of AI without stringent controls.

Pentagon officials have countered that ensuring the lawful use of software and weapons is their responsibility, one they take seriously. However, they assert that they cannot effectively allow all contractors to dictate how the equipment they sell to the Pentagon will be utilized. This stance highlights a broader tension between corporate ethics and military operational needs in the rapidly evolving field of artificial intelligence.

Unprecedented Use of Defense Production Act

The Defense Production Act grants the Pentagon extensive powers, typically invoked in manufacturing contexts to prioritize production for national defense. It would be highly unusual for the act to be applied to a software company like Anthropic, potentially forcing the firm to make its AI product available for free to the military. This unprecedented move could set a significant precedent for how the government interacts with tech companies in matters of national security.

An Anthropic spokesperson emphasized that the company aims to support the government but must ensure its models are used in alignment with what they can "reliably and responsibly do." This statement underscores the firm's commitment to ethical AI deployment, even under pressure from military authorities. The outcome of this ultimatum could have far-reaching implications for the future of AI governance and military-corporate relations in the United States.