Matt O'Brien and Konstantin Toropin
February 28, 2026 — 1:30pm
Washington: The Trump administration has ordered all US agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties in an unusually public clash between the US government and the company over AI safety.
President Donald Trump, Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline (US time), accusing it of endangering national security.
The Pentagon wants to use Anthropic’s Claude chatbot for any purpose within legal limits – but without any usage restrictions from Anthropic. The firm has insisted that Claude not be used for mass surveillance against Americans or in fully autonomous weapons operations.
Trump’s order came after Anthropic chief executive Dario Amodei refused to back down, citing concerns that the company’s products could be used in ways that would violate its safeguards.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.
Hegseth deemed the company a “supply chain risk”, a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
Anthropic said it would fight any formal designation in court, describing it in a statement as “an unprecedented action – one historically reserved for US adversaries, never before publicly applied to an American company”.
The government’s effort to assert dominance over Anthropic’s internal decision-making comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
According to The New York Times, while many military uses of artificial intelligence are still in a developmental stage, the models are already actively used for intelligence analysis.
Removing Claude from government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts and could hamper CIA analysts searching for patterns in intelligence reports, the paper suggests.
Citing former officials, the Times said CIA leaders are anxious to find a way to continue to use Claude, which has sped up their work and deepened their analysis. Before Trump’s comments, officials had warned that any presidential order could force the agency to seek other solutions.
Trump lashes out
Trump said Anthropic had made a mistake in trying to strong-arm the Pentagon. Writing on Truth Social, he said most agencies must immediately stop using Anthropic’s AI, but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.
After months of private talks exploded into public debate this week, Anthropic on Thursday said the government’s new contract language would allow “safeguards to be disregarded at will”, which Amodei said his company “cannot in good conscience accede” to.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable start-ups.
The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticise Anthropic, though their complaints contradicted one another.
Top Pentagon spokesman Sean Parnell said on social media on Thursday that Anthropic’s position was “jeopardising critical military operations and potentially putting our war fighters at risk”.
Meanwhile, Hegseth said on Friday that the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defence of the Republic”.
Hegseth’s choice to designate Anthropic as a supply chain risk uses an administrative tool designed to prevent companies owned by US adversaries from selling products harmful to American interests.
Senator Mark Warner, the top Democrat on the Senate intelligence committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations”.
Anthropic didn’t immediately reply to a request for comment on the Trump administration’s actions.
Dispute shakes up Silicon Valley
The dispute has stunned developers in Silicon Valley, where venture capitalists, prominent AI scientists and many workers from Anthropic’s top rivals – OpenAI and Google – have backed Amodei’s stance in open letters and other forums.
The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to grant access to classified military networks, and could serve as a warning to Google and OpenAI, whose contracts to supply their AI tools to the military are still evolving.
Musk sided with the Trump administration, saying on his social media platform X that “Anthropic hates Western Civilization.”
But one of Amodei’s fiercest rivals, OpenAI chief executive Sam Altman, sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview and in a letter to employees, saying OpenAI shared the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC, hours before he gathered employees for an all-hands meeting on Friday (US time).
Retired Air Force general Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end”.
Shanahan said Claude was already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable”. He said the AI large language models that power chatbots like Claude, Grok and ChatGPT were also “not ready for prime time in national security settings”, particularly not for fully autonomous weapons.
Anthropic is “not trying to play cute here,” he wrote on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”
AP