Commentary

DeepSeek's viral growth may be overstated — hosts debate sticky adoption vs. TikTok-driven downloads

Feb 3, 2025

Key Points

  • DeepSeek's 23% of ChatGPT's daily active users may conflate first-day downloads with sustained engagement, as TikTok algorithmic promotion inflates adoption metrics without evidence of retained usage.
  • Andreessen's promotion of DeepSeek appears driven partly by frustration with OpenAI's shift to closed-source models and regulatory capture, rather than by durable growth data.
  • Open-source models like DeepSeek could theoretically contain state-level manipulation through embedded backdoors, though no evidence exists today, raising questions about AI safety even in locally-runnable systems.

Summary

DeepSeek's viral adoption metrics may conflate downloads with durable usage, according to debate among the hosts. Marc Andreessen highlighted that DeepSeek now represents 23% of ChatGPT's daily active users and boasts far higher daily app downloads. But Nikita Beard pushed back, arguing the data overstates authentic stickiness. TikTok's algorithmic promotion of DeepSeek content is warming the trend—creators get feed visibility for posting about the service—which inflates download counts without translating to retained users. Beard notes that including first-day downloads in DAU calculations distorts the picture: she downloaded DeepSeek once to understand it but doesn't use it as a daily driver, yet likely still registers as a DAU on that phone. The lack of share flows or response behavior inside DeepSeek also signals shallow engagement.

The hosts draw a parallel to Lensa, the avatar-generation app that saw explosive downloads in one week before users exhausted the novelty and churned entirely.

One host frames Andreessen's promotion of DeepSeek partly as a proxy for frustration with OpenAI. Andreessen has criticized OpenAI's shift from open-source to closed-source models and its recent regulatory capture efforts—the company released marketing materials pitching government regulation on AI safety while continuing to restrict researcher access to its reasoning processes. By that lens, Andreessen's tweet is less a data-driven assertion and more a signal of allegiance to open-source alternatives, even if the growth figures are inflated.

The segment touches a separate security concern. Dylan Patel, on a recent episode with Lex Fridman, addressed whether open-source LLMs can be poisoned at the weight level with embedded backdoors—adversarial behaviors triggered by specific inputs or network conditions. Patel doesn't believe this is happening yet, but the capability is theoretically possible. He cites the example of British English being essentially dead in all LLMs due to American dominance of training data, and notes that open-source models can inherit subtle ideological biases from internet data. Anthropic's own adversarial safety research has shown it's possible to embed keyword triggers that change model behavior even in locally-run open weights. The implication is that DeepSeek, despite being open-source and runnable offline, could theoretically contain state-level manipulation—though Patel stresses no evidence exists today. One host flips his earlier skepticism: he now favors caution around open-source models and the continued investment in AI safety research, given the potential for nation-states to use them as vectors for information control or ideological drift at scale.