On February 27, 2026, President Donald Trump ordered all U.S. federal agencies to stop using Anthropic's AI technology, including its popular Claude models, effective immediately. Defense Secretary Pete Hegseth then labeled Anthropic a "supply-chain risk to national security," a designation never before applied to a U.S. company. The order bars defense contractors from working with Anthropic and effectively blacklists the company from most government work.
The order is not a nationwide ban on private use, but it is the strongest action the U.S. government has yet taken against a domestic AI developer.
The dispute centered on a Pentagon contract worth up to $200 million. The Department of Defense demanded "all lawful uses" of Claude, with no exceptions, but Anthropic refused to remove two key safeguards from its Constitutional AI framework.
Dario Amodei, Anthropic's CEO, said the models are not yet reliable enough for such high-risk uses. The government acted within hours of Anthropic holding firm past the 5 p.m. deadline.
Just hours after Trump's announcement, OpenAI CEO Sam Altman confirmed that his company had won the Pentagon contract for classified networks. OpenAI kept the same basic restrictions, no mass surveillance in the U.S. and no fully autonomous weapons, while adding further layers of protection. The swift agreement let OpenAI fill the gap left by Anthropic.
The Trump administration framed the issue as one of national security and independence. Amid an AI arms race with China, officials argued that private companies should not be able to dictate terms to the U.S. military. Trump called Anthropic "out-of-control Radical Left" and accused it of trying to "strong-arm" the Pentagon.
Anthropic called the designation "legally unsound" and "unprecedented," arguing it sets a dangerous precedent for any U.S. company that does business with the government. The company said it would challenge the move in court.
As of March 2, 2026, the legal challenge is ongoing, and the six-month phase-out of existing federal use continues. The episode shows how government demands for unrestricted military use are putting growing pressure on AI safety commitments.
This "Trump Anthropic ban" is an important early test of how the U.S. will balance innovation, ethics, and defense in the age of AI.