Ben Thompson: if AI is like nuclear weapons, private companies can't expect to control how governments use it
Mar 2, 2026 with Ben Thompson
Key Points
- Ben Thompson argues that if AI is genuinely as powerful as nuclear weapons, private companies cannot expect governments to respect corporate control over how the technology is deployed.
- Thompson warns that restricting chip sales to China may backfire strategically: cutting China off from Taiwanese chips could rationally incentivize it to neutralize TSMC rather than accept permanent AI inferiority.
- Thompson contends Anthropic's refusal to work with the Pentagon sets a precedent that private executives, not democratic processes, control a potential primary source of national power.
Summary
Ben Thompson argues that if AI is as powerful as Dario Amodei claims—powerful enough to warrant the nuclear weapons analogy—then private companies building it cannot expect to control how governments use it, any more than a private company developing nuclear weapons could. Thompson frames this as descriptive rather than normative. He is not endorsing the Department of Defense's conduct in its dispute with Anthropic, and he is critical of the administration's approach to digital surveillance. But he argues Anthropic and the broader alignment community have run dorm-room theoretical arguments while failing to grapple with the physical reality that power ultimately flows from those who can enforce it. Laws, property rights, and corporate autonomy rest on the consent of the governed, with coercive force as the final backstop. If AI becomes a genuine source of power, governments will treat it as such.
Nuclear weapons versus software
Thompson draws a sharp distinction between nuclear weapons and AI. Fissile material is trackable, interceptable, and hard to proliferate. Software weights are none of those things. The more significant structural difference is economic. The Manhattan Project started as a government program with government assumptions baked in from day one. AI is necessarily starting with private companies because the capex required—hundreds of billions annually, approaching a trillion dollars—is only sustainable if you're selling to everyone. The government is one customer among many, not the primary funder.
That creates a tension the nuclear playbook cannot resolve. Bob Noyce's founding-era decision at Intel to sell to the government but refuse to design chips exclusively for it was premised on the same logic: only the consumer and commercial market was large enough to fund the capital cycle that would produce genuinely superior technology. Thompson argues the same dynamic applies to AI, on steroids.
Taiwan and chip policy
Thompson's most pointed criticism of Amodei concerns chip policy. Amodei has been a consistent advocate for restricting semiconductor sales to China. Thompson has argued the opposite: China should be allowed to buy chips a generation or two behind current capability, and Chinese companies should be allowed to fab with TSMC. A stable equilibrium requires China to be dependent on Taiwan, not cut off from it. Taiwan sits 70 miles off the Chinese coast. If the US achieves decisive AI superiority and China does not, the rational Chinese response may be to neutralize TSMC rather than accept permanent strategic inferiority. Amodei's only public acknowledgment of this risk amounts to noting that a Taiwan conflict would slow AI adoption, which Thompson considers a grossly insufficient treatment of the scenario.
The governance bind
Thompson's prescription is narrow: Anthropic should sell to the government, and Congress should pass new digital surveillance legislation. He acknowledges the obvious objection that Congress passing meaningful tech legislation is nearly impossible, but pushes back on the implication. If legislation is off the table, the only alternative is that a private executive makes these decisions unilaterally. Thompson calls that outcome intolerable—not to him personally, but to any government holding power, which will not indefinitely accept a private company controlling what may become a primary source of national power.
Thompson approvingly quotes a tweet that made the choice explicit: the author said they would rather have Amodei making these decisions than whoever emerges from the democratic process. Thompson's response is that the position deserves credit for honesty, because it concedes that democratic governance should be abandoned in favor of unelected, unaccountable decision-makers.
OpenAI and Google
On competitive dynamics, Thompson sees a clean split. Anthropic holds a local advantage: most people in the San Francisco AI industry appear to be with them on resisting broad government access. OpenAI, by agreeing to work with the Pentagon on lawful capabilities, is more aligned with broader public sentiment but has likely created tension with its own talent base. Thompson describes OpenAI's arrangement as essentially a jailbreak competition with the US government, which he acknowledges is probably the most defensible place to land, even if it is structurally fraught. Google, which pulled out of Project Maven after employee backlash and eventually returned under new Cloud leadership, is conspicuously absent from the current controversy, a position Thompson implies is enviable.
Anthropic's weakest argument, in Thompson's view, is the claim that current models are not technically capable of performing the missions the DoD requested. That reads as an argument crafted to set a precedent rather than a genuine capability assessment, and Thompson thinks it undermines Anthropic's more credible position on digital surveillance, which he considers the genuinely compelling case for their refusal.