Lightmatter's Nick Harris: photonic interconnects deliver 100x bandwidth for AI data centers
Apr 2, 2025 with Nicholas Harris
Key Points
- Lightmatter's Passage photonic interconnect delivers 114 trillion bits per second, roughly 100x the 1.6 trillion bits per second in current production data centers.
- Chips based on Passage reach market this year, with meaningful data center rollout expected in 2026 and 2027 and no software changes required for existing CUDA workflows.
- Harris argues the next 1,000x gain in AI performance comes from interconnect, not compute, positioning Lightmatter to target networking, the 25 to 30 percent slice of a roughly $300 billion annual AI hardware market.
Summary
Lightmatter, an MIT spinout based in Mountain View, has spent eight years building photonic interconnects for AI data centers. Nick Harris, co-founder and CEO, announced this week that its silicon photonics engine, called Passage, delivers 114 trillion bits per second of bandwidth. Current state of the art sits at roughly 1.6 trillion bits per second, making Passage approximately 100x faster than what's in production today.
The core argument is structural. GPUs and ASICs have improved by roughly 1,000,000x in operations per second over the past 30 years. Interconnect bandwidth has improved by only about 1,000x over the same period, and Harris argues the next 1,000x gain in AI training and inference performance will come from interconnect, not from compute.
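A minimal back-of-the-envelope sketch of what those round numbers imply, assuming smooth exponential growth over the 30-year window (the 1,000,000x and 1,000x figures are Harris's round numbers; the annualizing is ours):

```python
# Annualized growth rates implied by Harris's compute-vs-interconnect figures.
# Assumes smooth exponential improvement; the inputs are round numbers from
# the episode, not measured data.

compute_gain = 1_000_000   # ~million-x in ops/sec over 30 years (GPUs/ASICs)
interconnect_gain = 1_000  # ~thousand-x in bandwidth over the same period
years = 30

compute_cagr = compute_gain ** (1 / years) - 1        # ~58% per year
interconnect_cagr = interconnect_gain ** (1 / years) - 1  # ~26% per year

print(f"compute:      ~{compute_cagr:.0%} per year")
print(f"interconnect: ~{interconnect_cagr:.0%} per year")
print(f"accumulated gap: {compute_gain / interconnect_gain:,.0f}x")
```

Compounded over 30 years, that gap in annual growth rates is the 1,000x imbalance Harris says interconnect now has to close.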
How it works
Passage is built using standard semiconductor fabs — TSMC, GlobalFoundries — so it slots into existing manufacturing supply chains. GPUs and switches are 3D-stacked on top of a photonic wafer, optical fibers are attached, and the result is a high-bandwidth optical fabric wiring together large-scale AI clusters. Harris references xAI's 100,000-GPU Memphis deployment as the kind of infrastructure Light Matter is designed to connect.
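To see why link bandwidth dominates at cluster scale, here is a rough, hypothetical estimate of the time to exchange one round of gradients over a single link at each speed. The model size, fp16 gradients, and ring all-reduce traffic factor are illustrative assumptions, not figures from the episode:

```python
# Hypothetical gradient-exchange time over a single link at each bandwidth.
# All workload parameters below are assumptions chosen for illustration.

model_params = 1e12      # assume a 1-trillion-parameter model
bytes_per_param = 2      # fp16 gradients
traffic_factor = 2       # ring all-reduce moves ~2x the payload per rank

payload_bits = model_params * bytes_per_param * 8 * traffic_factor

for name, bits_per_sec in [("current optical (1.6 Tb/s)", 1.6e12),
                           ("Passage-class (114 Tb/s)", 114e12)]:
    print(f"{name}: {payload_bits / bits_per_sec:.1f} s per exchange")
```

Under these assumptions a gradient exchange drops from about 20 seconds to about 0.3 seconds, which is the order-of-magnitude shift Harris is pointing at for 100,000-GPU clusters.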
The go-to-market pitch hinges on one thing: no software changes required. Customers can keep running CUDA. There is no porting effort and no retraining of engineering teams, a deliberate contrast to the fate of companies like Graphcore and Cerebras. Harris says those companies largely failed because Nvidia's software moat meant the ecosystem was architected around Nvidia GPUs, leaving competing processors with an almost insurmountable adoption barrier. Lightmatter is selling into the networking layer, not trying to displace the processor.
Harris puts the addressable market at roughly $300 billion per year in AI hardware spend, with 25–30% of that, roughly $75–90 billion, going to networking, the segment Lightmatter is targeting.
Timeline and supply chain
Chips based on Passage are coming to market this year through undisclosed semiconductor partners. Harris expects meaningful data center rollout in 2026 and 2027. On supply, he estimates the current supply chain could serve at least half of the roughly 14 million GPUs and XPUs shipping annually, with a credible path to full coverage as capacity builds out.
DeepSeek and reasoning models
Harris is largely unfazed by the DeepSeek narrative. He draws a parallel to chip design — each generation of CPU was designed using the previous one — and argues that using existing LLMs to train new models is a natural compounding dynamic, not a disruptive one. His bigger point is that reasoning models are compute-hungry and latency-sensitive: Deep Research currently takes around 12 minutes to return a result. Photonic interconnects, by cutting communication energy costs and latency, could compress that to closer to one minute.
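A crude Amdahl's-law-style sketch of that compression. The 12-minute figure is from the episode; the communication-bound split and the roughly 70x interconnect speedup (the ratio of 1.6 to 114 Tb/s) are our assumptions, not Harris's:

```python
# Amdahl's-law-style estimate of reasoning-job runtime if interconnect speeds up.
# total_minutes is Harris's Deep Research figure; comm_fraction and
# interconnect_speedup are illustrative assumptions.

total_minutes = 12.0
comm_fraction = 0.95       # assume the job is heavily communication-bound
interconnect_speedup = 70  # ~1.6 Tb/s -> 114 Tb/s

compute_time = total_minutes * (1 - comm_fraction)
comm_time = total_minutes * comm_fraction / interconnect_speedup

print(f"compute-bound time: {compute_time:.2f} min")
print(f"communication time: {comm_time:.2f} min")
print(f"compressed runtime: ~{compute_time + comm_time:.1f} min")
```

The result only lands near one minute if the workload really is dominated by communication time, which is precisely Harris's claim about reasoning models.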
On new AI hardware entrants
Harris is skeptical of seed-stage AI chip startups claiming they've figured out what the 2017 wave (Graphcore, Cerebras, and others) got wrong. His specific objection: most are using TSMC on the same process node with the same packaging technology as everyone else. Architectural tricks like baking a single model into an ASIC can deliver one-off efficiency gains, but they don't generate new scaling laws. Lightmatter's bet is that photonic interconnects do.