Aaron Ginn: America should flood China with NVIDIA chips to win the AI trade war
Dec 9, 2025 with Aaron Ginn
Key Points
- Aaron Ginn, Hydra Host co-founder, argues the U.S. should sell NVIDIA chips to China rather than restrict them, comparing the export control strategy to America's failed 5G containment of Huawei.
- Ginn expects announced domestic data center capacity to deliver at roughly 50% of projections, citing power-approval, permitting, and financing constraints for counterparties outside the top-tier hyperscalers.
- NVIDIA's dominance in AI accelerators remains durable near-term; Ginn dismisses Trainium as standalone and expects TPUs to stay confined to Google, citing switching costs from CUDA infrastructure.
Summary
Aaron Ginn, CEO and co-founder of Hydra Host, makes the case that the United States should sell NVIDIA chips — including the H200 — into China, arguing that restricting access does more damage to American commercial interests than it prevents in terms of military risk. His core framing: treating GPUs as weapons-grade technology, analogous to an F-35 or a Patriot missile, is the wrong mental model. He draws a closer parallel to telecom infrastructure, where the U.S. lost the 5G race by telling the world to avoid Huawei without offering a credible alternative.
Ginn's argument rests on separating demand-side and supply-side dynamics. China's state apparatus, he contends, is already motivated to build domestic chip capacity through SMIC regardless of what the U.S. does. Aggressive export controls do not eliminate that supply-side pressure — they add a demand-side justification for accelerating it. If Chinese companies are freely buying NVIDIA hardware, the CCP has less political cover to mandate adoption of inferior Huawei silicon. Remove that option, and Washington hands Beijing both arguments at once.
He notes that roughly half of all AI engineers globally are Chinese, and that the talent reality on the ground makes pure containment implausible. His preferred strategy is to extract revenue from the Chinese economy, recycle it into American reshoring, and focus restrictions on China's ability to manufacture chips domestically — not on whether Chinese firms can buy American products. He is skeptical that five-year plan rhetoric translates into execution, pointing to the Belt and Road Initiative's mixed delivery record across Africa and Southeast Asia as evidence that announced targets routinely underperform.
Data Center Build-Out: Skepticism on Scale and Timelines
On domestic data center expansion, Ginn is notably bearish on announced capacity figures. His rule of thumb: assume actual delivered capacity will be roughly 50% of what is announced. He attributes the gap to the difficulty of securing power approvals, permitting, and debt financing outside of top-tier hyperscaler counterparties.
He pushes back on the narrative of a data center financing bubble, arguing that lenders have actually been conservative — deals are structured around firm contracts with investment-grade customers. Private credit and traditional debt markets have not extended speculative capital into the sector, which means the frothy bubble-and-burst scenario that some analysts have floated is not yet visible at the financing layer.
On space-based data centers specifically — a concept NVIDIA recently amplified by posting renders for a company called Star Cloud from its corporate account — Ginn is skeptical of near-term timelines. He does not dismiss the concept but flags that most announced projects are still fundraising vehicles, with equity commitments representing only 10 to 15% of total required capital. The debt portion remains largely unsecured. He aligns with Gavin Baker's view that the U.S. needs to get serious about compute infrastructure buildout, but argues that current legislative and regulatory behavior does not reflect that urgency, with other countries moving faster on power approvals and site development.
AI Accelerator Competitive Landscape
Ginn sees NVIDIA's dominance in AI accelerators as durable in the near term. He is bearish on Trainium as a standalone product and skeptical that TPUs will see broad adoption outside Google, where they fit a vertically integrated strategy the company has long favored. Switching costs from CUDA-based infrastructure are significant and routinely underestimated, he argues — the multi-cloud analogy is instructive, since true multi-cloud adoption only materialized as a side effect of GPU procurement, not as an intentional architectural choice.
AMD's software stack is improving, and under Lisa Su the company represents the most credible near-term competitive pressure on NVIDIA's general-purpose compute dominance. But Ginn does not see a clear buyer for a standalone GPU business outside of a company like Meta, and he does not expect the hyperscalers — all of whom are actively trying to reduce their NVIDIA dependency — to acquire a direct competitor as part of that effort.