Anjney Midha on why he backed Mistral's $200M Series A, Sesame AI's companion hardware bet, and the post-training era
May 14, 2025 with Anjney Midha
Key Points
- Andreessen Horowitz led Mistral's $200M Series A on the thesis that enterprise buyers would pay a structural premium for open-source AI models offering cost, speed, and data control over closed-source alternatives.
- Sesame AI pairs AI glasses with voice companions as a post-smartphone interface, a bet Midha backed despite near-universal skepticism, after observing that 60% of Discord's daily minutes occur in voice channels.
- The best AI teams combine strong product taste with rigorous evaluation design, shipping only when models clear capability thresholds rather than following software's ship-early ethos.
Summary
Anjney Midha — who goes by An — is a general partner in Andreessen Horowitz's AI infrastructure practice, responsible for deploying the firm's capital across foundation model companies and computing infrastructure. His background spans founding (Ubiquity6, sold to Discord), operating (he ran Discord's platform as it scaled from 1 million to 200 million monthly active users in six months during COVID), and early-stage angel investing in Anthropic, where Dario Amodei initially asked for $500 million to pursue what the team framed as a shot at "creating God."
The two themes dominating his LP day conversations are post-training and sovereign AI.
Why he backed Mistral's $200M Series A
Midha led the $200 million Series A into Mistral roughly two years ago, before the current wave of sovereign AI spending. The thesis wasn't geopolitical — it was structural. Computing infrastructure historically runs on two parallel tracks: a capabilities frontier dominated by closed-source players, and an efficiency frontier dominated by open source. Enterprise buyers care about cost, speed, and control — three dimensions on which open source consistently wins over time.
The conviction came from lived experience at Discord. When the team got early access to GPT-4, the model performed well on several tasks — but GDPR, CCPA, and platform compliance requirements meant Discord's data couldn't leave its own servers, and no open-source alternative could match closed-source performance. Midha's read was that for every dollar flowing into closed-source enterprise prototyping, ten more were waiting for a credible open-source option.
Meta's Llama was the first model to close that gap, and researchers behind Llama left Meta to start Mistral — which Midha says made the investment straightforward. He also led the first round into Black Forest Labs, the team behind Stable Diffusion, applying the same open-source infrastructure logic to image models.
Europe's subsequent €800 billion ReArm Europe defense plan, a significant portion of which is flowing into AI and compute, has made the sovereign AI tailwind more explicit than he anticipated at the time of the Mistral investment. With 400 million consumers and a continental infrastructure buildout, Europe is now a deliberate market rather than a background assumption.
Sesame AI and the companion hardware bet
Sesame was started two years ago around the thesis that AI glasses plus a voice companion — hardware and software married together — would become the primary computing interface after smartphones. Midha says the idea was met with near-universal skepticism at the time.
The two co-founders were Brendan Iribe, former CEO of Oculus (which sold for north of $2 billion), and Ankit, who was Midha's co-founder at Ubiquity6 and later ran voice AI infrastructure at Discord. The Discord data point that sharpened the thesis: 60% of all daily Discord minutes are spent in voice channels, suggesting voice is already a more frequent interaction modality for many consumers than screen-based interfaces.
The technical bar Sesame set before launching publicly was sub-200 millisecond voice response latency. Two years ago, achievable latency was too high for the companion personality to register — users would abandon the experience before encountering what the model could actually do. The team treated that latency target as a hard prerequisite for any public release.
Post-training and the eval-driven build culture
Midha draws a sharp distinction between how software teams and AI research teams should think about shipping. The software heuristic — ship early, iterate fast, be embarrassed by v1 — breaks down in frontier AI because capability needs to clear a threshold high enough that users will tolerate the friction that still exists in AI products today. The relevant metric isn't a feature checklist; it's eval scores.
Post-training progress is empirical, not deterministic: teams have intuitions about what will improve a model, but they don't know until they run the test. His formula for the best teams is strong product taste combined with strong taste in evaluation design — knowing which metrics to build, and staying heads down until both are solved. The tension between subjective taste and objective eval scores is, in his telling, one of the most uncomfortable transitions for teams coming from traditional software backgrounds.