Interview

Dean Ball: government threatening Anthropic's existence over a contract dispute is 'not how America should work'

Mar 4, 2026 with Dean Ball

Key Points

  • The administration threatened to designate Anthropic a supply chain risk unless it removed usage restrictions on its AI models that bar deployment in autonomous lethal weapons systems, a move Ball calls weaponizing government power over a contract disagreement.
  • Ball argues government oversight of frontier AI labs should work like banking regulation, applied to entities rather than products, not state ownership, which he says erodes international trust and risks eventual tyranny.
  • Ball proposes invoking the Defense Production Act to prioritize government compute access during emergencies rather than seizing AI companies or using supply chain designations as leverage in disputes.

Summary

Dean Ball, a policy researcher at Hyperdimensional focused on AI governance, argues that threatening Anthropic's existence over a contract dispute violates basic principles of American governance. The Department of Defense demanded that Anthropic remove usage restrictions on its AI models as a condition of signing a government contract. Anthropic imposed those restrictions to prevent deployment in autonomous lethal weapons systems. When the company declined, the administration signaled it would designate Anthropic as a supply chain risk, effectively barring it from doing business with federal contractors.

Ball agrees the DOD's underlying principle is sound: Dario Amodei, Anthropic's CEO, should not unilaterally decide when autonomous weapons are deployment-ready. But the solution is not to destroy a company for disagreeing on a contract. The proper response is to cancel the deal and move to another vendor. "That just can't be the way that America works," Ball says.

The precedent concerns him more. Ball points to the Biden administration's treatment of Elon Musk's companies—roughly 12 to 20 concurrent investigations and regulatory actions—as evidence that weaponizing government power against private companies has become normalized. He views the Anthropic move as an extension of that pattern.

Regulation without nationalization

Ball accepts that some form of public oversight of frontier AI labs will likely be necessary. But oversight does not require government ownership or control. His model is technocratic regulation applied to the entities themselves, not the products, analogous to how the U.S. regulates banks rather than approving individual loans. The goal is modest oversight that shapes development without seizing control.

The nuclear weapons analogy only partially holds. Nuclear weapons stayed under government lock and key, and that arrangement worked. But nuclear energy did not develop as robustly under state control, which created a regulatory single point of failure. AI differs because superintelligence, unlike nuclear bombs or nuclear energy, will be directly useful to ordinary people in their daily lives. If the government owns that, Ball argues, it "almost certainly" devolves into tyranny.

His concern about international trust is pointed. Other countries trust American companies more than Chinese ones precisely because Chinese firms are understood to be state assets. Treating U.S. AI companies as military appendages erodes that trust advantage at the exact moment the U.S. is competing for global market share and partnerships.

The policy Ball drafted

He spent early 2025 drafting an action plan on AI policy for the current administration in what he describes as an "eighteen, twenty hour a day" sprint. The document included a provision requiring the DOD to pre-arrange compute commandeering protocols with hyperscalers in case of national emergency. This approach lets the government access massive compute reserves without owning the underlying infrastructure.

Ball's frustration is that the Anthropic dispute represents a slide back toward the "Biden era mentality" of national security-inflected control over labs, undoing some of that work.

Supply chain risk as a legal weapon

Ball reads the statutory history of supply chain risk designations as intended for foreign adversary companies, primarily China. Applying it to a U.S. company over a contract disagreement inverts the category. DJI, DeepSeek, and Unitree are not on the list, yet Anthropic faces the threat. The logical inconsistency suggests the designation is being weaponized for leverage rather than applied as coherent national security policy.

Ball distinguishes between two applications. Supply chain risk applied at the contract level (no one with usage restrictions like this can do business with federal contractors) is defensible. Supply chain risk applied at the company level (this company cannot operate in America) is different and more problematic.

Labor market disruption

When asked about the layoffs at Block (formerly Square) as evidence of AI-driven job loss, Ball sees a compounding effect rather than pure causation. Tech companies over-hired during the post-COVID boom and are now the most exposed to AI adoption because they do the kind of work AI does best. That creates what he calls a "fun house like effect" that exaggerates AI's actual impact. Wider labor displacement may come eventually, but probably not at the firm level or economy-wide in the near term. Software engineering could be an exception because hiring in the discipline is concentrated.

Defense Production Act alternative

Ball's clearest policy prescription emerges in a hypothetical. If SpaceX refused to build rockets for a government military need, the answer is not commandeering SpaceX. It is invoking Title I of the Defense Production Act, which lets the government jump to the front of the priority queue for any commercial launch. The president can invoke it unilaterally in extraordinary circumstances. The same logic applies to AI compute: prioritize government access rather than seizing ownership.

AGI timelines

When asked what he tells elected officials about timelines, Ball separates the trajectory of the technology from its near-term economic effect. Current AI models are "legitimately science fiction" in capability. Yet the world does not look science fictional because transforming institutions and organizational structures takes time. The real competition between nations will be won by whoever is most imaginative at restructuring production and work to take advantage of what AI enables, just as manufacturing evolved from artisanal to factory to assembly-line models. That is conceptually hard and takes years.