OpenAI's Sam Altman Acknowledges Rivalry with Anthropic CEO Amid Pentagon Deal
The well-documented tension between OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei has taken a new turn, with Altman openly addressing their strained relationship in front of employees. The acknowledgment came just hours before OpenAI accepted a controversial deal with the Pentagon, a contract that arch-rival Anthropic had rejected.
Altman's Strategic Move to Save OpenAI from Pentagon Wrath
According to reports from Axios, on February 26 Altman sent an all-staff message acknowledging that OpenAI shared Anthropic's red lines regarding ethical boundaries. At the same time, he emphasized his commitment to helping de-escalate the standoff between Anthropic and the Pentagon while pursuing his own agreement with the Defense Department. Altman stressed that the optics of such a deal might look unfavorable in the short term, but that he was acting on principle rather than appearances.
The situation intensified the following day, when Altman told a core group of staff that negotiations between the Pentagon and Anthropic had broken down, reportedly because of a perception that Amodei was manipulating press coverage to his advantage. As the Pentagon's 5 PM deadline approached on February 27, Altman revealed that the Defense Department believed it could offer Anthropic a way out of being labeled a 'Supply Chain Risk.'
Altman's Candid Remarks About Saving a Rival
In a candid admission, Altman told employees he found it strange to be working so hard to "save" a competitor whose CEO had, in his view, spent years attempting to undermine OpenAI. The remark underscores the deep-seated rivalry between the two AI giants and their leaders.
After OpenAI formally accepted the Pentagon deal, Altman reinforced his position in another message to staff. He made it clear that companies do not get to dictate terms to the Pentagon regarding how artificial intelligence software should be used. Altman suggested that Anthropic's desire to influence military applications might have contributed to the tensions with the Defense Department.
Pentagon's Clear Boundaries for AI Companies
During an all-hands meeting a few days later, Altman elaborated on the Pentagon's expectations. He explained that while the Defense Department would listen to OpenAI's technical expertise about AI applications, the federal agency explicitly did not want the company to pass judgment on whether specific military actions were morally justified. "You do not get to make operational decisions," Altman reportedly stated, underscoring the limits of corporate influence in defense matters.
Despite the competitive dynamics, Altman revealed he continues to advocate for the Pentagon to remove the 'Supply Chain Risk' designation from Anthropic. This label, typically reserved for adversaries of the United States and never before applied to a domestic company, carries significant reputational and operational consequences for Anthropic.
The Broader Implications for AI and Defense Partnerships
This episode illuminates several critical aspects of the evolving relationship between artificial intelligence companies and government defense agencies:
- The ethical boundaries and red lines that AI firms establish when engaging with military contracts
- The competitive tensions between leading AI organizations vying for government partnerships
- The delicate balance between corporate principles and practical business considerations
- The unprecedented nature of the 'Supply Chain Risk' designation being applied to a U.S. company
The divergent approaches of OpenAI and Anthropic toward Pentagon collaboration reflect deeper philosophical differences about the role of artificial intelligence in national security. While Anthropic maintained a firmer stance on ethical boundaries, OpenAI pursued a path of engagement with clear limitations, demonstrating how AI leaders navigate complex government relationships while managing intense industry rivalries.