Gemini 3 beats humans at GeoGuessr and trained entirely on Google TPUs — no Nvidia chips needed
Nov 19, 2025
Key Points
- Gemini 3 Pro outscores professional humans at GeoGuessr, trained entirely on Google TPUs without any Nvidia hardware.
- Google's performance on proprietary chips shows Nvidia's position is not the monopoly the narrative claims, yet competitors remain locked out as AMD hardware lags years behind.
- Even Google's internal teams face strict TPU allocation constraints, leaving external startups and rival labs far more squeezed on compute access.
Summary
Google's Gemini 3 Pro has outscored professional human players at GeoGuessr, the location-guessing game. The model was trained exclusively on Google's TPUs, with no Nvidia hardware involved.
Gemini 3 Pro beat professionals on raw points but underperformed at correctly identifying countries. The model appears to rely on different heuristics than human experts, who typically pattern-match on visual cues like signpost colors or road markings. Gemini 3 Pro is stronger at general contextual reference-matching but weaker at the geographic classification task itself.
One open question is whether the model is overfit to GeoGuessr, given that Google created the underlying Google Maps data source. Earlier Gemini iterations were tested on non-Street View images and still performed well, suggesting some of the capability generalizes beyond memorization.
The chip dynamics
Nvidia has long positioned itself as the essential compute provider for AI labs. That Gemini 3 Pro required zero Nvidia chips challenges the narrative that Nvidia's hardware is a monopoly in any meaningful sense. Nvidia nonetheless retains practical monopoly power in the open market, because Google does not sell TPUs to external buyers: every other lab that wants cutting-edge chips must go through Nvidia. The existence of a performant alternative inside Google signals that Nvidia's lock-in is real but narrower, and more contestable, than market pricing yet reflects.
This leaves other AI labs in a difficult position. They cannot buy TPUs and would prefer alternatives, but AMD's hardware remains immature and other efforts to in-source compute are years behind. Google's ability to build world-class models on proprietary chips, while rationing TPU allocation to balance business requirements and revenue, leaves competitors squeezed.
Google DeepMind CEO Demis Hassabis acknowledged the constraint in a Q&A, noting that compute is perpetually scarce and that decisions on TPU allocation must balance short-term revenue, long-term research potential, and direct product returns. Even for Google's own teams, the answer to "give me all the TPUs" is no. That calculus matters most for external startups and rival labs, which receive even less allocation.