Tim Fist on the Nvidia-China H20 chip saga, Middle East AI deals, and why Washington is dangerously behind on AI
May 16, 2025 with Tim Fist
Key Points
- Nvidia sold roughly 1 million H20 chips to China in 2024 before export controls blocked further sales, leaving the company with $5.5 billion in stranded inventory and pushing it to open an R&D center in Shanghai to maintain market access.
- The Trump administration is negotiating AI infrastructure deals with the UAE, including a 5-gigawatt data center campus and 500,000 chips annually, but guardrails against diversion to China remain weak and G42's decoupling from Chinese investments is questionable.
- Washington policymakers dramatically underestimate AI capability development speed and lack a government-led R&D strategy comparable to ARPANET, leaving critical public-benefit applications in drug discovery and materials science unfunded while industry chases near-term B2B revenue.
Summary
Tim Fist, a policy researcher at the Institute for Progress, argues that Washington is dangerously behind on AI — and that the gap between what policymakers understand and what is actually coming could be costly.
H20 and the China chip saga
Nvidia sold roughly 1 million H20 inference chips into China in 2024. The H20 was designed to be export-control compliant, but by 2024 the strategic calculus had shifted: inference compute, driven by test-time compute scaling, reinforcement learning, and synthetic data generation, had become the critical input to frontier AI development. Fist and colleagues at the Institute for Progress publicly pushed back on plans for additional large H20 sales to major Chinese companies. The government eventually acted, issuing guidance blocking those sales. Nvidia was reportedly left holding $5.5 billion in stranded inventory.
Nvidia's response has been to open an R&D center in Shanghai, framed as a way to understand Chinese customer demands and design export-compliant products. Fist reads this as Nvidia prioritizing its commercial position over the spirit of export controls — technically legal, but in tension with the national security rationale behind the restrictions. Part of Nvidia's logic is defensive: Google, Amazon, and Microsoft are all developing custom silicon, putting Nvidia's core hyperscaler revenue at risk and pushing it to diversify geographically.
Fist frames the underlying policy question around quantity, not just quality. Even if Huawei's chips are slightly inferior to Nvidia's, allowing China access to vastly more of them — effectively TSMC-scale production volumes — could more than compensate. The priority should be squeezing both China's ability to procure chips externally and its domestic production capacity through SMIC and Huawei.
Middle East deals
The administration has walked back the AI diffusion rule and announced a series of Gulf deals, including a reported 5-gigawatt AI data center campus in Abu Dhabi and 500,000 chips per year exported to the UAE, with roughly three-quarters earmarked for US firms building data centers there and one quarter for G42, the UAE's sovereign-wealth-backed tech conglomerate.
Fist is cautiously supportive of the strategic logic — locking in early adopters and big spenders to keep the US tech stack dominant globally — but flags two structural risks. First, the guardrails against diversion are weak. Recipients could resell chips into China directly or route compute access through locally operated data centers rented out as cloud services. Second, G42's commitment to decoupling from China is questionable. After the $1.5 billion Microsoft-G42 deal, the Department of Commerce required G42 to divest its Chinese AI holdings. G42 reportedly did so, but shifted those investments into a related fund called Lunate, still ultimately controlled by the same sovereign wealth fund. Whether that constitutes real decoupling is an open question. The UAE also has documented collaboration with China in drones, 5G, and military technologies.
Fist thinks the Trump administration is well-positioned to negotiate bespoke structured deals with real guardrails, but whether it is actually doing that remains unclear given the limited public detail on the terms.
Open-source AI as geopolitical infrastructure
Fist accepts the argument that if the US doesn't supply a competitive open-source AI stack, countries building their own AI products will default to Huawei Ascend hardware and DeepSeek models. But he is skeptical about how sticky any open-source ecosystem actually is. AI developers switch base models constantly based on performance, and frontier lab revenues visibly spike and collapse with each benchmark cycle.
His hypothesis is that the durable advantage for US open-source models is trust and security, not raw capability. US cloud providers already compete successfully against cheaper Huawei and Alibaba alternatives by being more credible on data privacy. The same dynamic could apply to models — particularly given that backdoors and sleeper agents can be inserted into open-source weights with no reliable detection method. Interpretability and verifiable security could be the differentiator that makes American open-source models sticky globally.
On foundation model economics more broadly: training compute costs are scaling roughly 5x per year, moving from hundreds of millions to approaching billions of dollars per run. Companies that are only marginally competitive with the frontier cannot sustain that trajectory and eventually drop out, which is already visible among early-stage foundation model companies that raised but never reached the frontier.
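The 5x-per-year figure compounds quickly, which is why marginal competitors drop out. A minimal sketch of that compounding (the $200M starting point is a hypothetical for illustration, not a figure from the episode):

```python
# Illustrative only: compound a frontier training-run budget at the
# ~5x-per-year growth rate cited in the discussion.
def projected_run_cost(start_cost_usd: float, years: int, growth: float = 5.0) -> float:
    """Project training-run cost after `years` of ~5x annual growth."""
    return start_cost_usd * growth ** years

# A hypothetical $200M run implies ~$1B after one year and ~$5B after two,
# a trajectory a marginally competitive lab cannot sustain.
for year in range(3):
    print(year, f"${projected_run_cost(200e6, year) / 1e9:.1f}B")
```

The point of the sketch is the shape of the curve, not the exact numbers: at 5x compounding, a lab one generation behind on revenue faces a budget gap that widens by multiples every year.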
Compute allocation and the scaling law question
Fist estimates the current rough split in compute allocation is around 80% pre-training, 20% post-training, but expects that ratio to shift. Reinforcement learning is still early in its own scaling curve, and the evidence suggests allocating more resources there is higher-return than additional pre-training at the margin. Labs are still making large cluster and energy commitments, which reflects some institutional buy-in that pre-training scaling continues — but the center of gravity is moving.
On compute distribution: Fist pushes back on the idea that AI's GDP impact will come from giving every citizen access to a high-end inference chip. He thinks compute usage will be massively uneven — concentrated in specific countries and within a small number of frontier companies deploying millions to billions of agents internally, while most people globally still barely interact with tools like ChatGPT.
Washington is not ready
Fist is unambiguous: Washington will be extremely surprised by how fast AI capabilities develop. Most policymakers are not pricing in where current trends point, even setting aside AGI-level scenarios.
His deeper concern is a missed institutional opportunity. Technologies like the internet and the Human Genome Project succeeded partly because the US government saw what was coming and shaped development through basic R&D — ARPANET was designed for nuclear resilience but built the architecture for a global open network that embedded American values. There is no equivalent AI initiative today. Industry is focused on B2B SaaS and chatbots because that is where near-term revenue is, leaving massive public-benefit applications in materials discovery and drug discovery unfunded.
Fist flags a new congressional coalition called the American Science Acceleration Project as a positive signal, but describes it as an exception to a broader deficit of this kind of thinking in DC.
Meta and Llama
Meta has repositioned itself in Washington from a content-moderation lightning rod to a national AI champion, partly through the DoD's adoption of a defense-oriented Llama deployment and the broader use of Llama as US open-source AI infrastructure. The pivot to community-notes-style content moderation has also smoothed its relationship with the current administration.
But the latest Behemoth model has reportedly stalled on capability improvements, and Meta's delay announcement notably did not invoke safety as the reason — which makes the delay hard to read as anything but an admission of underperformance. Fist notes some concern in DC about Meta's leaderboard results and whether benchmark selection has been managed strategically. There is also a minority faction worried that open-source models with significant misuse potential in the cyber domain should not be freely available globally — though Fist notes that concern is less pressing while Llama lags the frontier.
AI-enabled social engineering
Fist says LLMs capable of producing more convincing phishing text than the median scam email have existed since at least GPT-3.5, making the relative absence of mass AI-powered spear-phishing campaigns surprising. He allows for three explanations: existing filters are working, bad actors are not yet technically sophisticated enough to operationalize the capability, or it is happening at scale but targeting demographics who don't report it — the same dynamic that kept cryptocurrency scam victims out of major press coverage for years.