Palantir Still Using Anthropic's AI Despite Pentagon Ban, CEO Confirms

At Palantir's AIPcon 9 event in Maryland, CEO Alex Karp openly acknowledged that his data analytics company continues to use Anthropic's Claude artificial intelligence models. The admission comes despite the Pentagon's recent formal designation of Anthropic as a supply chain risk, a classification typically reserved for foreign adversaries.

Defense Department's Contradictory Position

The Department of Defense made headlines last week when it officially labeled Anthropic as a supply chain risk, requiring all defense contractors and vendors to certify they are not using Claude AI in any Pentagon-related work. However, Karp's statements reveal a more complex reality. "The Department of War is planning to phase out Anthropic; currently, it's not phased out," Karp told CNBC during the conference. "Our products are integrated with Anthropic, and in the future, it will probably be integrated with other large language models."

This disclosure confirms that Claude AI models remain actively deployed in support of sensitive US military operations, including those related to Iran, despite the Pentagon's prohibition. The situation highlights the disconnect between policy declarations and operational realities within defense technology ecosystems.

Anthropic's Legal Challenge Against Government Action

Responding to the Pentagon's designation, Anthropic has taken aggressive legal action against the Trump administration. The AI company filed a lawsuit calling the Defense Department's move "unprecedented and unlawful" and is currently seeking an immediate stay on the Pentagon's action. Anthropic's legal team argues that hundreds of millions of dollars in government contracts hang in the balance, potentially jeopardizing both national security interests and technological innovation.

The company maintains that its AI systems provide critical capabilities for defense operations and that sudden removal could compromise military effectiveness. This legal battle represents one of the most significant confrontations between a major AI developer and the US government over technology policy and national security concerns.

Pentagon's Transition Plan Faces Practical Hurdles

Defense Department Chief Technology Officer Emil Michael provided context on the challenges of implementing the ban. "You can't just rip out a system that's deeply embedded overnight," Michael told CNBC. His acknowledgment underscores the technical and operational difficulties facing military planners as they attempt to disentangle AI systems from mission-critical defense operations.

President Donald Trump has directed federal agencies to complete the phase-out of Anthropic's products within six months. However, internal Pentagon communications indicate that exemptions may be granted for essential operations where no viable alternatives exist. This creates a potential loophole that could allow continued use of Claude AI in specific, high-priority military applications despite the broader prohibition.

Complex Integration Creates Dependence Challenges

Palantir's reliance on Anthropic's technology illustrates the profound complexity of modern defense systems, where AI components become deeply embedded in operational frameworks. The company's data analytics platforms, widely used by US military and intelligence agencies, have integrated Claude's natural language processing and machine learning capabilities to enhance decision-making processes and operational planning.

Both defense officials and private contractors acknowledge that complete separation from banned AI systems may prove impractical within the six-month timeline. The potential disruption to sensitive military missions creates pressure for exceptions and extended transition periods, particularly for operations where alternative AI solutions cannot match Claude's specific capabilities or integration depth.

The situation reveals a broader tension between national security concerns about technology supply chains and the practical realities of military operations that increasingly depend on sophisticated AI systems. As the six-month deadline approaches, defense planners face difficult choices between security protocols and operational effectiveness.