Interview

Poolside's Eiso Kant on securing 40,000 GB300s from CoreWeave and building a vertically integrated AI lab

Oct 16, 2025 with Eiso Kant

Key Points

  • Poolside secures 40,000 GB300 GPUs from CoreWeave via a West Texas data center partnership, bringing 250 megawatts online in December 2025 to sidestep the 18-to-24-month lead times now required for powered capacity.
  • The company built the largest reinforcement learning environment in the world on one million real-world codebases, trained on roughly 10,000 H200s representing $150 million to $200 million in annualized compute value.
  • Poolside enters enterprise markets through air-gapped defense and government deployments, positioning intelligence as a commodity sold via API while targeting margin capture in the application layer.

Summary

Poolside is making one of the more audacious infrastructure bets among second-tier AI labs. The company has secured access to more than 40,000 GB300 GPUs through a strategic partnership with CoreWeave, with compute coming online in December 2025. That scale of capacity is effectively unavailable on the open market — GB300 supply is sold out through all of 2026 and well into 2027.

The company was founded roughly two and a half years ago by Eiso Kant and his co-founder, the former CTO of GitHub, around a thesis that reinforcement learning — not scaling language model pretraining — would become the dominant axis for improving model capabilities. That view was contrarian at the start of 2023 and is now consensus.

The infrastructure play goes beyond GPU access. Poolside is developing a data center campus in West Texas with six gigawatts of on-site gas generation capacity and an initial 250 megawatts of powered capacity coming online. Rather than conventional stick-built construction, the facility uses modular off-site manufacturing: electrical, cooling, and compute skids are delivered by flatbed truck, and capacity comes online in two-megawatt increments (roughly 1,000 GPUs per hall at current power densities). CoreWeave operates as the tenant, giving Poolside control over how much compute it allocates internally versus releasing to the broader market.
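The per-hall figure can be sanity-checked with a quick sketch. The ~2 kW all-in draw per GPU is an assumption on my part (GB300 board power plus CPU, network, and cooling overhead), not a number from the interview:

```python
# Back-of-envelope check of the hall sizing described above.
# KW_PER_GPU is an assumed all-in figure (board + overhead + cooling),
# not a number stated in the interview.

HALL_MW = 2.0        # per-hall capacity increment, from the article
KW_PER_GPU = 2.0     # assumed all-in draw per GB300 GPU

gpus_per_hall = HALL_MW * 1_000 / KW_PER_GPU
print(f"GPUs per 2 MW hall: {gpus_per_hall:.0f}")  # ~1,000, matching the article

# Implied draw of the full 40,000-GPU fleet under the same assumption,
# against the 250 MW of initial powered capacity:
fleet_mw = 40_000 * KW_PER_GPU / 1_000
print(f"Implied fleet draw: {fleet_mw:.0f} MW")
```

Under this assumption, the 40,000 GB300s would draw on the order of 80 MW, comfortably within the initial 250 megawatts.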

The strategic logic is vertical integration driven by lead-time risk. Kant argues the binding constraint in AI infrastructure is not chips or raw energy but powered data center shells. Securing 50 megawatts was feasible within six to nine months a year ago; securing 250 megawatts today requires a multi-billion-dollar, 15-year lease commitment with an 18-to-24-month delivery window — viable only for hyperscalers. Poolside's structure is designed to sidestep that bottleneck entirely.

On the model side, Poolside has run its research to date on approximately 10,000 H200s, representing roughly $150 million to $200 million in annualized compute value in Kant's framing. The company claims to have built the largest reinforcement learning environment in the world (one million real-world codebases supporting hundreds of billions of agent tasks) to train models on the process of writing code, not just its outputs.
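The annualized figure implies a per-GPU-hour rate, which is worth checking against market H200 rental prices. The hourly-rate framing below is my inference from the stated numbers, not Kant's:

```python
# Implied per-GPU-hour rate behind the $150M-$200M annualized compute
# value, assuming the full 10,000-H200 fleet runs year-round.

GPUS = 10_000
HOURS_PER_YEAR = 24 * 365  # 8,760

for annual_usd in (150e6, 200e6):
    rate = annual_usd / (GPUS * HOURS_PER_YEAR)
    print(f"${annual_usd / 1e6:.0f}M/yr -> ${rate:.2f} per H200-hour")
```

The result lands at roughly $1.70 to $2.30 per H200-hour, in line with typical cloud rental rates for that class of GPU, which suggests the $150 million to $200 million range is a straightforward fleet-times-rate calculation.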

Go-to-market has been deliberately enterprise-first, with defense and government as the initial wedge. Poolside built out deployment infrastructure for air-gapped environments, government clouds, and ATO-compliant stacks, including edge deployments on workstations in military vehicles. The company frames intelligence as a commodity business — analogous to barrels of oil sold via API — and is positioning the enterprise application layer as where it intends to capture durable margin. It is now expanding beyond defense into broader enterprise markets, with coding agents as the entry point and wider knowledge-worker applications as the medium-term target.

The company's founding memo identified energy, compute, and intelligence as the three layers that would define the next decade, with most other variables becoming rounding errors. That framing is now driving the decision to own the full stack rather than rely on third-party cloud capacity at scale.