Mark Cuban wants to ban AI ads — hosts argue it would lock the best AI tools behind paywalls for lower-income users
Jul 28, 2025
Key Points
- Mark Cuban proposes banning advertising in AI models entirely, arguing ads create perverse incentives that would push AI companies to prioritize engagement over accuracy, mirroring social media's addictive design.
- Hosts counter that an ad ban would force AI tools behind paywalls, making premium tutoring and knowledge engines inaccessible to lower-income users who currently benefit from ad-supported free products.
- Cuban softens to display-ad-style segregation with clearly labeled sidebar ads, while observers like Antonio Garcia Martinez argue ad-supported AI is inevitable and necessary to cover compute costs for consumer apps.
Summary
Mark Cuban argues for banning advertising in AI models entirely. His concern mirrors the social media playbook: if AI companies can monetize through ads, they will optimize models to keep users engaged longer, abandoning concise answers for rabbit holes designed to maximize impressions. In Cuban's view, this is inevitable economic behavior, the same logic that drives TikTok's engagement machine.
The hosts counter that a blanket ad ban locks the best AI tools behind paywalls, making premium tutoring, therapy, and knowledge engines inaccessible to lower-income users. Ads have historically been the economic engine that made digital products free or cheap at scale. Removing that option forces reliance on subscriptions, which creates the exact inequality Cuban likely wants to prevent.
OpenAI already has strong incentive to make ChatGPT addictive through subscription upsells alone, without any ads. Netflix's CEO said the company competes with sleep, and Netflix was subscription-only when he said it. The monetization model may matter less than the underlying fact that any for-profit platform optimizes for engagement.
Existing regulatory frameworks already address disclosure issues. Search engines label ads. Influencers must disclose paid partnerships. The FTC has rules against undisclosed endorsements. OpenAI running secret paid recommendations would be suicidal liability, and there is no evidence this is happening.
The actual economic question is more subtle. An LLM could steer users toward monetizable paths without labeling that steering, much as Google Search varies its ad load by query: a non-monetizable question like "What day is Thanksgiving?" shows no ads, while commercial queries do. The difference lies in whether steering is disclosed, whether recommendations are actually useful, and whether they are mixed seamlessly into conversation or labeled separately.
Cuban eventually softens to display-ad-style segregation, with ads listed as a clearly identified sidebar completely separate from user chats. That aligns with existing precedent. The hosts note that banning ads entirely from one venue is unusual policy. Cigarette and gambling ads face restrictions, but most media outlets choose their own ad policies. It remains a market choice, not a mandate.
Ad-supported AI could work if ads are highly relevant and clearly labeled. Users may welcome them. If ChatGPT promotes a product without disclosure and the product disappoints, the model loses trust and the conversion rates that make the ad model work. OpenAI's long-term incentive is honesty, because deception would kill the asset.
Antonio Garcia Martinez says he would bet his net worth that ads in AI will be inevitable and profitable, necessary to pay for compute in consumer apps and welcomed by users when relevant. Michael Mignano argues ads in AI could become the best money-printing machine in history, better than Google and Meta, because LLMs effectively become the friend-recommendation channel that drives product discovery.
Cuban raises a real concern about misaligned incentives, but a blanket ban is too blunt. A better approach is transparent labeling, FTC-style disclosure rules, and letting the market choose. Ad-supported free models serve people without subscription budgets, paid tiers serve power users, and open-source alternatives let anyone run a model locally.