OpenAI Revises Defense Department Agreement Amid Surveillance Fears
OpenAI is making significant amendments to its recently signed contract with the US Department of Defense after facing intense public criticism regarding whether the agreement adequately protected American citizens from AI-powered surveillance. The controversy erupted shortly after the deal was announced, with concerns focusing on potential domestic spying applications.
CEO Sam Altman Takes Personal Stand on Constitutional Limits
CEO Sam Altman confirmed the contractual changes on Monday, revealing that the company had been collaborating with Pentagon officials to incorporate clearer language into the agreement. In a remarkable personal statement, Altman declared his willingness to face legal consequences rather than comply with what he considers unconstitutional orders.
"If I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it," Altman wrote in an internal communication that he subsequently made public. This represents one of his most definitive personal positions on ethical boundaries in government contracts.
Specific Protections Added to Updated Contract
The revised contract now explicitly states that OpenAI's artificial intelligence tools "shall not be intentionally used for domestic surveillance of US persons and nationals"—including through the acquisition of commercially available personal data such as location history or browsing records. Additionally, the Pentagon has confirmed that OpenAI's services will not be utilized by intelligence agencies like the National Security Agency (NSA) under the current agreement.
Any future use by intelligence agencies would necessitate a separate contract modification, creating an additional layer of oversight for sensitive applications.
Why the Original Agreement Drew Immediate Scrutiny
The initial agreement, announced just last Friday, immediately attracted significant criticism from privacy advocates and technology experts. According to detailed reporting by The Verge, the original contract didn't actually prohibit mass surveillance—it merely required OpenAI to comply with existing laws, many of which have historically been interpreted broadly to authorize extensive domestic spying programs.
Critics pointed to historical precedents in which similar legal frameworks had been stretched to cover sweeping surveillance initiatives. The NSA's PRISM program and other bulk data collection efforts operated under the same type of legal authorization that OpenAI initially cited as its primary safeguard.
OpenAI's former head of policy research, Miles Brundage, offered a blunt assessment on social media platform X: "OpenAI employees' default assumption here should unfortunately be that OpenAI caved and framed it as not caving."
Contractual Language Remained Problematic
OpenAI initially defended its position, with a company spokesperson telling The Verge that their system "cannot be used to collect or analyze Americans' data in a bulk, open-ended, or generalized way." However, researchers like UC Berkeley's Sarah Shoker noted that vague terminology—including words like "unconstrained" and "generalized"—left substantial room for interpretation and potential misuse.
How OpenAI's Deal Contrasts with Rival Anthropic's Position
The Pentagon agreement emerged just hours after the Department of Defense designated Anthropic—OpenAI's primary competitor—as a "supply chain risk to national security," a classification typically reserved for foreign adversaries. This designation followed Anthropic's refusal to remove two crucial restrictions from its proposed contract: a prohibition against mass domestic surveillance and a ban on fully autonomous weapons systems capable of killing without human oversight.
OpenAI ultimately agreed to the Pentagon's core requirement of "all lawful use," a concession that Anthropic had steadfastly refused to make. According to The New York Times, Altman and DoD Chief Technology Officer Emil Michael had been negotiating since Wednesday and reached a framework agreement within days, a pace attributed in part to Altman's stronger personal rapport with Michael than that of Anthropic CEO Dario Amodei.
A former Trump administration undersecretary later confirmed that the OpenAI agreement represented "a compromise that Anthropic was offered, and rejected"—indicating that Anthropic had reviewed similar terms before declining the contract.
Altman Advocates for Industry-Wide Standards and Democratic Oversight
Despite the appearance of opportunism in securing the contract immediately after Anthropic's negotiations collapsed, Altman has been vocal about his desire to prevent a permanent fracture between AI companies and government agencies. In his internal communication, he revealed that he had advised the Department of Defense against designating Anthropic as a supply chain risk and requested that the amended contract terms be made available to all artificial intelligence companies.
"We do not want the ability to opine on a specific (and legal) military action," Altman wrote separately. "But we do really want the ability to use our expertise to design a safe system."
CEO Acknowledges Strategic Missteps in Rollout
Altman also admitted to procedural errors in how the initial agreement was announced. He characterized the rushed Friday announcement as appearing "opportunistic and sloppy"—a rare moment of self-criticism from a CEO who has spent the past year navigating some of Silicon Valley's most politically charged government contracts.
This episode represents another chapter in the ongoing tension between technological innovation, national security requirements, and civil liberties protections as artificial intelligence capabilities continue to advance at an unprecedented pace.
