Sarah Guo on neo labs, AI trading, storytelling, and what she's watching heading into 2026
Dec 17, 2025 with Sarah Guo
Key Points
- Neo labs raising $50M to $100M-plus are betting on focused post-training and alternative architectures rather than symmetric competition with frontier labs on pre-training.
- Open Evidence scaled from $2M to $150M annualized ad revenue by targeting high-intent medical queries, validating LLM advertising as a breakout category for 2026.
- Conviction's storytelling workshops routinely move founder pitches from D-minus to A-plus, suggesting broken strategy, not narrative skill, is the real diagnosis when founders struggle to tell their story.
Summary
Sarah Guo on AI investing, neo labs, and what 2026 looks like from the venture seat.
Storytelling Is a Founder's Job, Not a Hire
Guo is direct: the title of 'storyteller' reflects a real skill gap at startups, but outsourcing it is a mistake. Her firm Conviction runs a grant program called Embed, an uncapped note for 10 companies twice a year, and the highest-rated session in the program is a storytelling workshop. Founders routinely improve from D-minus to A-plus before demo day. The deeper point is diagnostic: if telling the story of your company is painful and takes 50 hours, the strategy is probably broken, not the narrative.
Neo Labs: Contrarian Bet or Rational Arbitrage?
A cluster of new AI research companies, post-Anthropic and post-SSI, are raising $50M to $100M-plus to fund original large-scale model research. Guo defines these as 'neo labs' and acknowledges they represent a genuine narrative violation in venture: new entrants with no users and a fraction of the capital claiming they can compete with OpenAI and Google.
The more credible version of the argument is not symmetric warfare with frontier labs but focused asymmetry. If pre-trained open-source models, including Chinese and European ones, are widely available, a neo lab can concentrate all of its capital on post-training, self-improvement, or alternative architectures like SSMs or diffusion. The thesis is that focused effort on one dimension can outperform a larger lab splitting resources across pre-training, inference, and product simultaneously.
Ilya Sutskever's framing at SSI was instructive: in the research phase, the goal is validating ideas, not one-shotting a trillion-dollar model. The funding requirement for that validation stage is large but not hyper-scale.
The Talent Acquisition Floor
Guo endorses the thesis that neo labs carry a soft downside floor because hyperscalers are effectively paying for research-talent leverage on massive existing compute budgets. Her framing: if a top researcher's GPU experimentation budget is $100M per year, a hyperscaler's willingness to pay for that person in an acquisition is proportionally high. Mira Murati, Barrett, and others who have started new labs represent exactly the profile of talent large players will continue to bid on.
The risk is a pullback in hyperscaler capex conviction, which would reprice the talent acquisition calculus quickly. As long as the ~$100B annual capex cycle holds, she expects lab tuck-ins to continue.
2026 Predictions
AI trading as a breakout category. Guo's most specific prediction is that someone generates hundreds of millions of dollars from AI-driven trading in 2026. She sees no structural barrier to it happening and notes that West Coast labs have already been recruiting from East Coast trading firms, creating a potential startup formation path.
LLM advertising ramps in H2 2026. Ad inventory inside LLMs carries conversion rates that already appear to outperform traditional search. One data point cited: founders report ChatGPT referral traffic converting at 7% versus a site average of 3%. The unlock is high-intent query context. Open Evidence, the AI tool for physicians, is the leading proof point: it reportedly went from approximately $2M to $150M in annualized ad revenue, driven by extremely high-intent medical queries where pharmaceutical ad targeting is precise and trust-anchored.
Forward-deployed engineering is a 2026 and 2027 story, not a permanent model. Guo expects enterprises to keep paying for human-led AI implementation layers through at least 2027 and possibly 2028, driven by organizational change-management friction rather than technical necessity. By 2028 she expects that work to transition into product delivery.
Applications, not infrastructure, dominate the 2026 narrative. Her broad conviction is that proof of model capability across multiple domains is now sufficient to drive a wave of new applications. She expects market panic and 'AI winter' narratives to surface but views them as noise against a strong underlying signal.
Books Flagged
- Breakneck by Dan Wang, on China-US technology competition
- For Blood and Money, on the development of the leukemia drug IMBRUVICA
- Masters of Doom, on id Software and the early video game industry
- Apprentice to Genius, on the NIH and Johns Hopkins research lineage that produced disproportionate scientific output, which Guo reads as a study in what environments consistently generate breakthrough ideas