OpenAI Takes Pentagon Contract as Anthropic Refuses to Back Down on AI Safety Lines

The US artificial intelligence industry was rocked this week by a confrontation that ended with one company banned from government use and another rushing to fill the gap. OpenAI has moved to claim Pentagon business after Anthropic was expelled from federal contracts for refusing to weaken its AI safety guidelines.

The conflict had been brewing for months. Anthropic’s Claude AI had been available to the government under terms that excluded two specific use cases — autonomous weapons and mass surveillance. Defense officials, wanting those restrictions removed, turned up the pressure, and when Anthropic refused to comply, the administration made the dispute very public.

President Trump’s post on Truth Social, ordering all federal agencies to stop using Anthropic products, was unambiguous and took effect immediately. It ended Anthropic’s government relationships overnight, replacing months of careful negotiation with a blunt political directive.

OpenAI responded within hours, with Sam Altman announcing a Pentagon deal and insisting that OpenAI’s own ethical principles — which mirror Anthropic’s on the key points — were written into the contract. He expressed hope that the Pentagon would offer these same terms to every AI company, suggesting the industry might yet arrive at a consistent ethical baseline.

Anthropic remained unbowed, releasing a statement that its prohibitions on mass surveillance and autonomous weapons are absolute and not subject to negotiation, regardless of political consequences. The company argued pointedly that these restrictions have never once blocked a legitimate government use, framing the entire standoff as unnecessary and politically motivated.
