Trump Blacklists Anthropic as OpenAI Wins Pentagon Deal
The Trump administration banned Anthropic from all federal contracts after the AI firm refused to remove safeguards against autonomous weapons and domestic surveillance. Within hours, rival OpenAI struck a Pentagon deal — with the same conditions in place.
A Standoff Over AI's Role in Warfare
In a dramatic confrontation exposing the fault lines between Silicon Valley and Washington's national security establishment, the Trump administration on Friday banned artificial intelligence startup Anthropic from all federal government business. Hours later, rival OpenAI struck a Pentagon deal, ironically with the same safety conditions that had triggered the standoff.
The Red Lines
The dispute had been building for weeks. The Department of Defense sought unrestricted access to Anthropic's flagship Claude AI model for "all lawful purposes" across its classified networks. Anthropic drew two firm lines: Claude would not be used for autonomous weapons systems, and it would not be deployed in the mass surveillance of American citizens.
Defense Secretary Pete Hegseth set a hard deadline — 5:01 p.m. ET on Friday — for Anthropic to drop those restrictions. The company did not move. CEO Dario Amodei was categorical: "Threats do not change our position. We cannot in good conscience accede to their request."
The Ban and Its Consequences
Within minutes of the deadline passing, Hegseth declared Anthropic's stance "fundamentally incompatible with American principles" and designated the company a supply-chain risk to national security — a label more commonly applied to foreign adversaries like Huawei. President Trump followed, ordering every federal agency to "immediately cease" doing business with Anthropic and mandating a six-month phaseout of existing contracts.
The commercial stakes are real but manageable for Anthropic. The disputed Pentagon contract is worth roughly $200 million — a fraction of the company's $14 billion in annual revenue. Still, the national-security designation carries broader risks as Anthropic prepares for a public listing, potentially deterring other government contracts and raising concerns with institutional investors.
Anthropic said it had not received direct communication from the Pentagon before the deadline and vowed to challenge the designation in court, calling it "legally unsound and a dangerous precedent for any American company that negotiates with the government."
OpenAI Steps In — With the Same Safeguards
The most striking twist came hours after Anthropic was blacklisted. OpenAI CEO Sam Altman announced his company had concluded a deal with the Department of Defense to deploy its models on the Pentagon's classified networks. The agreement, Altman noted, explicitly includes the same protections Anthropic had demanded: "prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems."
The parallel drew immediate scrutiny. If the Pentagon was willing to accept those safeguards from OpenAI, critics asked, why was it unwilling to accept them from Anthropic? The episode fueled speculation that the standoff was less about operational requirements and more about establishing government leverage over private AI developers.
A Precedent With Industry-Wide Implications
Industry observers warn the outcome sets a significant precedent. Penalizing a company for refusing to remove ethical guardrails, even in the course of legitimate contract negotiation, could chill future AI development partnerships with the federal government. Few startups can afford to absorb both a lost $200 million contract and a national-security designation.
For now, Anthropic stands as the rare tech company that called the government's bluff on AI ethics. And OpenAI demonstrated that principle and pragmatism can coexist, securing a major defense contract without abandoning its stated red lines.