News

Trump orders all federal agencies to cut off Anthropic amid Department of Defense clash over AI guardrails

Mar 2, 2026

Key Points

  • Trump orders all federal agencies to stop using Anthropic's AI technology and threatens civil and criminal penalties if the company fails to cooperate during a six-month transition period.
  • Anthropic CEO Dario Amodei claims the company retains authority to refuse government use of its products, but the Pentagon argues private vendors selling to the military have limited say in operational deployment.
  • A supply chain risk designation could prevent defense contractors from using Anthropic's technology on government projects, potentially handing classified AI work to competitors like OpenAI willing to integrate without behavioral restrictions.

Summary

President Trump ordered all federal agencies to stop using Anthropic's AI technology and threatened civil and criminal consequences if the company fails to cooperate during a six-month transition period. The order marks an escalation in a clash between the government and Anthropic over how its systems can be deployed by the Pentagon.

The dispute emerged during an active conflict with Iran, when the Department of Defense raised concerns about AI system reliability. Anthropic had previously objected to how its technology was used in a Venezuela operation, triggering Pentagon concerns about whether the company would prove dependable during wartime.

The timeline shows friction in the negotiation process. Emil Michael, the under secretary of defense, attempted to reach Dario Amodei, Anthropic's CEO, at 5:01 p.m. Friday to negotiate revised contract terms. Anthropic did not respond until 8:25 p.m., stating it had received no direct communication from the Department of Defense. Michael tweeted the supply chain risk designation at 5:14 p.m., just 13 minutes after the call attempt and hours before Anthropic's reply, a sequencing problem that undermines claims of good-faith negotiation.

The core tension is jurisdictional. Amodei argues that Anthropic, as a private company, retains authority over how its products are deployed and can refuse sales or impose restrictions on government use. He cited specific objections: no mass domestic surveillance, no fully autonomous lethal weapons. This position collides with the principle that private companies selling to government customers have limited say in how those products are used operationally. The government's counterargument is that if a company believes its technology is unsafe for a given use case, it should not deploy that technology into those systems at all, rather than attempt to control outcomes after the sale.

Amodei acknowledged in a CBS interview that Anthropic is deeply integrated with classified systems: "We were the first to deploy models on classified clouds and the first to build custom models for national security." Replacing classified infrastructure is not a simple software migration. It involves FedRAMP certification, security vetting, and system revalidation, a process that stretches six months or longer for some agencies.

The supply chain risk designation remains uncertain. Amodei has not received a formal letter, and the label originated from a Pentagon tweet. Kalshi markets price the odds of an actual supply chain risk designation by April 1 at 42%. If applied, the designation would prevent contractors working on government contracts from using Anthropic's technology on those specific projects, though they could use it elsewhere in their businesses. Amodei called the designation "unprecedented" in its application to a U.S. company; historically, it has targeted foreign firms like Kaspersky Lab and Huawei.

The structural problem runs deeper than this one contract. Ben Thompson argues that if Anthropic's technology is genuinely as powerful as Amodei's own framing suggests, then government authority to control its deployment becomes an inevitable question of statecraft. Thompson points to the nuclear weapons precedent: the U.S. did not let private scientists retain veto power over how nuclear technology was used. Instead, the government nationalized the intellectual property while hiring contractors to operate production facilities. Those contractors answer to the government, not the reverse.

Palmer Luckey elaborated on the definitional minefields Anthropic's terms introduce. "No autonomous lethal weapons" sounds straightforward until you ask what counts as autonomous, defensive, or a target versus collateral damage, especially when one nation considers another's defensive posture an offensive threat. Allowing a private corporation to make those calls introduces potential conflicts of interest: corporate liability concerns, PR management, or personal preferences of executives could override elected leaders' military judgment.

The communication gap is material. Amodei framed his stance as technical: LLMs hallucinate and should not be used for autonomous weapons. But this paints with a broad brush, indicting LLMs as a class rather than addressing Anthropic's specific products or capabilities. A more precise defense would have been to say that Anthropic builds systems optimized for dialogue and code, and that those systems are not appropriate for autonomous weapons, rather than claiming no LLM can be trusted for any military application.

Anthropic's revenue context matters less than the narrative around it suggests. The company generates roughly $10 billion in annualized revenue; the disputed federal contract was worth approximately $200 million, about 2% of that. The issue is not financial but structural: can a private AI company that claims to prioritize AI safety maintain operational veto power over government deployment decisions, especially during wartime, when a more capable competitor might be willing to integrate without restrictions?

If Anthropic cannot negotiate terms acceptable to the Pentagon, other foundation model companies like OpenAI, Google, and xAI become the default vendors for classified AI work. OpenAI negotiated its own restrictions on autonomous weapons but did not refuse the contract or attempt to control post-sale deployment. That willingness to partner without enforcing behavioral constraints may now be OpenAI's competitive advantage in defense contracting.

What remains unresolved is whether this conflict produces legislative clarity around AI governance or simply forces every AI company into the existing defense contractor mold: build to spec, deliver to government, cede control over use. Amodei's approach appears to push the question back to Congress and the American public, but in doing so he is testing whether a private company can actually force that debate during an active military conflict.