Federal Judge Halts Pentagon's 'National Security Risk' Designation of AI Giant Anthropic
Artificial intelligence company Anthropic has secured a major legal victory after a federal judge temporarily blocked the Pentagon from labeling the firm a "national security risk." According to a detailed report by Business Insider, U.S. District Judge Rita Lin granted Anthropic's request for a preliminary injunction, effectively halting both a Presidential Directive ordering federal agencies to cease using Anthropic's technology and Defense Secretary Pete Hegseth's designation of the company as a "supply chain risk."
Injunction Prevents Immediate Action Against Anthropic
The ruling means the controversial designation cannot take effect while the injunction remains in place, allowing Anthropic to continue its business operations with defense contractors without interruption. This decision provides the AI company with crucial legal protection as it navigates the ongoing dispute with the U.S. government.
Judge Lin delivered sharp criticism of the government's actions during her remarks, suggesting they appeared to be "an attempt to cripple Anthropic" rather than a straightforward decision to discontinue use of its AI tool Claude. She noted that one amicus brief had described the Pentagon's move as "attempted corporate murder," highlighting the severity of the allegations against the government's approach.
Anthropic Welcomes Court's Swift Action
In an official statement responding to the ruling, Anthropic expressed gratitude for the court's prompt intervention. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," the company stated. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
Pentagon's Arguments Against the AI Company
According to the Business Insider report, Deputy Assistant Attorney General Eric Hamilton argued in court that the Pentagon had raised serious questions about Anthropic's "reliability and trustworthiness." Government officials warned that the company could potentially manipulate its AI models or restrict access to critical technology. Defense officials have gone further, accusing Anthropic of putting "Silicon Valley ideology above American lives."
The controversy has reached the highest levels of government, with U.S. President Donald Trump attacking the company on Truth Social, labeling it a "WOKE COMPANY" and claiming its actions jeopardize national security.
Judge Lin's Detailed Analysis of the Case
In her comprehensive remarks, Judge Lin framed the case as touching on "an important debate" about AI governance and national security. She noted that Anthropic maintains its AI product Claude is not safe for use in autonomous lethal weapons or domestic mass surveillance, requiring government users to agree not to deploy the technology for those purposes. Conversely, the Department of Defense argues that military commanders, not private companies, should determine what constitutes safe AI usage.
"It's a fascinating public policy debate, and it's not my role to decide who's right in that debate — that is Secretary Hegseth's call," Judge Lin stated. "The Department of Defense decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of Defense is free to stop using Claude and look for a more permissive AI vendor."
Three Government Actions Under Scrutiny
Judge Lin identified three specific government actions that form the core of the legal dispute:
- The president announced that every federal agency would immediately ban Anthropic from future government contracts, extending beyond defense applications to include even non-sensitive uses like the National Endowment for the Arts using Claude for website design.
- Secretary Hegseth declared that any company wanting to do business with the U.S. military must sever commercial relationships with Anthropic, affecting even companies using Claude for non-defense purposes like customer service chatbots.
- The Department of Defense designated Anthropic as a "supply chain risk," a label typically reserved for foreign intelligence agencies, terrorists, or other hostile actors who might sabotage U.S. technology systems.
"What is troubling to me about these three actions is that they don't really seem to be tailored to the stated national security concern," Judge Lin observed. "If the worry is about the integrity of the operational chain of command, the Department of Defense could just stop using Claude. It looks like defendants went further than that because they were trying to punish Anthropic."
First Amendment and Due Process Concerns
The judge expressed particular concern about whether Anthropic was being punished for criticizing the government's contracting position publicly. "Defendants say they were doing this because Anthropic's 'sanctimonious rhetoric' was an attempt to 'strong-arm the government,'" she noted. "Department of Defense records say that it designated Anthropic as a supply chain risk because it was 'hostile in the press.' So it looks like the Department of Defense is punishing Anthropic for trying to bring public scrutiny to this contracting dispute, which, of course, would be a violation of the First Amendment."
Judge Lin also raised questions about whether Congress had granted the government the authority to take such actions and whether Anthropic's due process rights had been violated by not receiving proper notice and opportunity to respond before the designation was imposed.
The preliminary injunction represents a temporary reprieve for Anthropic as the legal proceedings continue, with significant implications for how the U.S. government regulates and interacts with private AI companies moving forward.