Aaron Ginn on why America must become an AI exporter like Boeing — and why the China chip ban is backfiring
Jun 3, 2025 with Aaron Ginn
Key Points
- Nvidia lost 50% of its China market share over four years as export controls backfired, accelerating Huawei's chip development to competitive parity rather than containing it.
- Aaron Ginn argues the U.S. should adopt a Boeing model: export AI infrastructure globally to create dependency and geopolitical leverage instead of denying access through restrictions.
- Nvidia's NVLink interconnect platform targets China by exploiting the layer where Nvidia's technical lead over Huawei is widest; the move will likely trigger foreign direct product rules, which are more legally defensible than hardware bans.
Summary
Nvidia's latest earnings beat expectations despite significant regulatory headwinds, a result Aaron Ginn attributes to the underlying strength of global AI infrastructure demand rather than any policy tailwind. The more consequential signal from the quarter was Jensen Huang's increasingly explicit political posture: meeting with Trump at Mar-a-Lago, and advocating openly in earnings calls and at conferences for a specific global AI architecture built around Nvidia's dominance. Ginn frames this as rational given that Nvidia's market cap is roughly the size of the United Kingdom's GDP.
The Export Control Argument
Ginn's central thesis is that the Biden-era chip controls backfired and are continuing to do so under their residual framework. Huang disclosed in the earnings call that Nvidia lost 50% of its China market share over the past four years, and that Huawei accelerated from 7nm to 5nm processes during that window — roughly 12 months ahead of what U.S. regulators assumed when designing the January 2025 controls. Ginn argues the controls effectively created the market for Huawei rather than containing it, citing reporting from The Information in which Chinese GPU customers said precisely that.
The policy error, in Ginn's framing, is conflating supply-chain production controls — which he supports — with consumption controls, which he views as counterproductive. Restricting foundry equipment, lithography tools, and data center manufacturing inputs is defensible. Restricting end-use GPU sales is not, because China will simply run on Huawei infrastructure instead, and all government data across markets Huawei enters will flow through Chinese-controlled systems.
The January 2025 framework also rested on two faulty assumptions: that large language models would be accessed remotely, making physical GPU location irrelevant, and that Huawei was five years away from competitive parity. Both were wrong. Ginn notes the rules have since reverted to the October 2023 framework, which removed restrictions on allies including Switzerland, Austria, and Mexico — where 90% of Nvidia GPUs are assembled.
The Boeing Framework
Ginn's preferred model is commercial dominance through ubiquity, not denial. He draws a direct analogy to Boeing: the goal is not to prevent others from flying, but to ensure that when the world flies, it flies American. The same logic applies to AI infrastructure. Sovereign AI deployments such as G42 and the Saudi Arabia Stargate project are early expressions of a trajectory in which the U.S. becomes the default exporter of AI compute capacity globally, with data centers in the Gulf, Europe, South America, and Africa running on American hardware and software stacks.
The strategic value is leverage. Widespread Nvidia infrastructure creates dependency that can be used to press on trade issues including cheap goods, fentanyl, and Taiwan. It also creates what Ginn describes as a new NATO-style framework — a network of countries whose digital infrastructure is tied to American platforms and whose alignment can be activated in a crisis. Restricting sales to the Gulf on the grounds that proximity to China poses diversion risk — a position Ginn calls insulting to the Saudis — forfeits that footprint to Huawei by default.
The Dual-Use Claim
Ginn is sharply dismissive of the dual-use argument as it is currently applied. He characterizes it as an argument from ignorance: no specific, credible scenario has been articulated in which GPU access provides China with a military capability it does not already possess. On missile targeting specifically, he notes that China's existing missile technology is already capable enough that additional GPU access would change little. The broader productivity argument, that LLMs make soldiers marginally more efficient, would apply equally to Huawei gear, making restriction of Nvidia hardware irrelevant to the underlying risk.
His asymmetric counterargument is that if China's AI researchers — who he notes represent roughly half of global AI research talent — are building on Nvidia's CUDA stack, the U.S. retains visibility, leverage, and architectural influence over those workloads. The alternative is ceding that ground entirely.
NVLink and the CUDA Moat
Nvidia's recently announced NVLink platform, which allows third-party GPUs to plug into Nvidia's interconnect fabric, is framed publicly as a pitch to hyperscalers looking to mix custom silicon with Nvidia hardware. Ginn reads the actual target as China. The interconnect layer is where Nvidia's lead over Huawei remains widest; at the GPU level the gap is narrowing, but Nvidia's networking stack is described as being in a different category entirely. Huawei's CloudMatrix is the closest competitor and is still materially behind, with Huawei's networking revenue growing but margins thin because the company treats networking as a strategic moat investment rather than a profit center.
Commerce is expected to apply foreign direct product rules to NVLink, likely blocking it from China — which Ginn views as more legally defensible than hardware export controls since it involves Nvidia's own software IP. The deeper concern is that accelerating open-source model development, much of it driven by Chinese labs, erodes the CUDA moat over time by reducing dependence on Nvidia's closed ecosystem.
Nuclear Power Realism
On Meta's 20-year power purchase agreement with Constellation Energy for the Clinton Clean Energy Center — an existing reactor being brought back from decommissioning rather than a new build — Ginn is supportive of the technology but skeptical of nuclear as a scalable near-term power solution for AI infrastructure. The regulatory path for new nuclear capacity runs through state-level approvals, and he sees little political will to accelerate that process meaningfully. He draws a parallel to DOGE: discretionary executive mandates operate at the margin and cannot override structural federalized authority.
His base case for AI data center power is natural gas, specifically dedicated turbines co-located with data centers in deregulated energy markets. He views the wave of high-profile nuclear announcements from hyperscalers as partly a strategy to push through regulatory cycles rather than genuine near-term commitments. The Meta-Constellation deal fits a pattern Ginn considers more credible — reactivating existing licensed capacity — but he cautions against reading the broader nuclear narrative as a reliable infrastructure bet.