The US artificial intelligence industry was rocked this week by a confrontation that ended with one company's products banned from government use and a rival rushing to fill the gap. OpenAI has moved to claim Pentagon business after Anthropic was expelled from federal contracts for refusing to weaken its AI safety guidelines.
The conflict had been brewing for months. Anthropic’s Claude AI had been available to the government under terms that excluded two specific use cases — autonomous weapons and mass surveillance. Defense officials, wanting those restrictions removed, turned up the pressure, and when Anthropic refused to comply, the administration made the dispute very public.
President Trump’s post on Truth Social, ordering all federal agencies to stop using Anthropic products, was unambiguous and took effect at once. It ended Anthropic’s government relationships overnight, replacing months of careful negotiation with a blunt political directive.
OpenAI responded within hours, with Sam Altman announcing a Pentagon deal and insisting that OpenAI’s own ethical principles — which mirror Anthropic’s on the key points — were written into the contract. He expressed hope that the Pentagon would offer these same terms to every AI company, suggesting the industry might yet arrive at a consistent ethical baseline.
Anthropic remained unbowed, releasing a statement declaring that its positions on mass surveillance and autonomous weapons are absolute and not subject to negotiation, regardless of political consequences. The company argued pointedly that these restrictions have never once prevented a legitimate government use, framing the entire standoff as unnecessary and politically motivated.
