OpenAI's Controversial Department of War Agreement Ignites Industry Firestorm
OpenAI CEO Sam Altman is facing significant online backlash after announcing that the artificial intelligence company has finalized an agreement with the US Department of War (DoW) to deploy its advanced AI models on the department's classified networks. The move immediately ignited widespread concern over potential applications in mass surveillance programs and the development of autonomous weapons systems.
The Department of War Agreement Details
In a detailed post on social media platform X, Altman publicly confirmed the partnership, stating: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome."
The OpenAI leader elaborated further on the company's safety framework, emphasizing: "AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman outlined specific technical safeguards OpenAI would implement, including building specialized technical protections to ensure appropriate model behavior, deploying forward-deployed engineers (FDEs) for continuous oversight, and restricting system access exclusively to secure cloud networks.
Contradictory Information and Legal Concerns
However, a Community Note appended to Altman's post presented contradictory information, claiming that government officials had stated OpenAI's models could be used for "all lawful purposes." Critics warn that under certain interpretations of post-9/11 legislation, including provisions of the USA PATRIOT Act, such broad language could authorize extensive data collection operations in some scenarios.
Anthropic's Principled Stand and Government Response
The controversy emerges against the backdrop of a significant rift between the US government and rival AI firm Anthropic, whose CEO Dario Amodei recently declined similar cooperation terms, explicitly citing fundamental safety red lines. In a detailed blog post, Anthropic reiterated two non-negotiable principles: no participation in autonomous weapons systems development and no involvement in domestic mass surveillance programs targeting US citizens.
This principled refusal prompted immediate and severe government reaction. President Donald Trump issued a directive ordering federal agencies to cease using Anthropic's technology "immediately," significantly escalating tensions between Silicon Valley's AI sector and Washington's defense establishment.
US Secretary of Defense Pete Hegseth delivered a strongly worded condemnation, stating: "This week, Anthropic delivered a master class in arrogance and betrayal... Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic." He further declared Anthropic's stance "fundamentally incompatible with American principles."
Hegseth also directed the DoW to formally designate Anthropic a "Supply-Chain Risk to National Security," effectively prohibiting government contractors from doing business with the firm while allowing a six-month transition period for existing arrangements.
Intense Online Backlash Against OpenAI
The backlash against Altman and OpenAI materialized swiftly across Reddit and X, with many users accusing the company of abandoning the ethical principles Anthropic had chosen to defend. One user expressed disapproval through action, writing: "I just canceled ChatGPT and bought Claude Pro Max."
Another commenter delivered a scathing critique: "Sam Altman is such an incredible backstabber, liar and traitor. While your competitor is taking a heroic and principled stand, you swoop in to make your deal. Imagine working for this guy - is there a greater shame?"
A third post presented a broader warning: "We've been warning for years that this is where AI was leading: mass surveillance, automated bombings, the imprisonment of freethinkers. It's urgent that we collectively boycott OpenAI and ChatGPT."
Another user called for industry consequences: "This should lead to a mass exodus from OpenAI. One stands up for the God-given rights of the American people. The other folds to tyrants."
Deepening Industry Divisions
The episode has deepened existing divisions within the AI industry: some technology leaders support the administration's push for broader AI deployment in defense and national security contexts, while others insist on significantly tighter ethical guardrails and safety protocols. The contrasting approaches of OpenAI and Anthropic have created a defining moment for AI governance and military applications.