Interview

Casey Newton debunks viral Reddit hoax accusing Uber and DoorDash of manipulating delivery speeds — and reflects on AI-generated disinformation

Jan 6, 2026 with Casey Newton

Key Points

  • A viral Reddit post claiming Uber and DoorDash deliberately slow deliveries to justify paid speed features accumulated 80,000 upvotes before Platformer's Casey Newton exposed it as a hoax built on AI-generated badge images and documents.
  • The faker used an AI image tool called Nano Banana to convert a real employee badge into a fake Uber Eats credential, which former Uber employees confirmed does not exist as a distinct badge type.
  • AI now enables low-cost generation of convincing supporting materials like images and documents, making disinformation campaigns harder to detect when the underlying narrative exploits real corporate controversies like driver-pay disputes.

Summary

A viral Reddit post accusing Uber and DoorDash of deliberately slowing competitors' deliveries to manufacture the value of a paid speed-up feature was a fabricated hoax — and Casey Newton of Platformer documented how it unraveled in real time.

The post accumulated roughly 80,000 upvotes before Newton began investigating. He messaged the Reddit account and received a reply within nine minutes, a detail he now flags as an early warning sign. The source communicated in one- or two-word answers on Signal, inconsistent with the elaborate, verbose confession post. The post's internal logic was another tell: it claimed the speed-up fee was a "psychological value add" that did nothing, yet also conceded that other orders were being slowed down, which would mean the fee did, in fact, deliver a benefit.

The fake evidence chain

The source offered an employee badge with the name and face blurred, presented as an Uber Eats credential. Newton later learned from an NBC News reporter that the faker had taken her legitimate employee badge and used an AI image tool — identified as Nano Banana — to convert it into a fake Uber Eats version. Former Uber employees subsequently confirmed there are no Uber Eats-specific employee badges; the company operates under a single Uber identity. The source also disappeared for a full day before producing an 18-page technical document purporting to describe the exploitation system. Newton now believes this, too, was AI-generated to order.

Both Uber and DoorDash responded by categorically denying the documents were real, putting their credibility on the line without hedging, a communications posture Newton describes as appropriate given the strength of their certainty.

Who and why

Motivation remains unconfirmed. Newton floats a bored teenager as the most plausible scenario, but he notes the episode raises a legitimate question: could prediction-market activity on platforms like Polymarket around the time of the post reveal a financially motivated actor, one trying to move Uber or DoorDash stock or to trade on the hoax's eventual debunking?

The broader concern is structural. AI tools now allow anyone with a motivated hypothesis to generate technically convincing supporting materials — documents, images, testimony — at low cost. Newton frames this as AI functioning as "a superpower for motivated reasoning." The hoax spread in part because both companies have genuine histories of controversial driver-pay practices, making the fabrication culturally plausible even when factually hollow.

Newton's own AI workflow

Newton uses AI more heavily than most reporters, relying on citation-based tools for industry background research and feeding his own drafted copy into ChatGPT for fact-checking; he says this error-catching function has improved materially over the past year. He draws a firm line at delegating column writing to AI. The most recent version of Claude mimicked his writing style closely enough to unsettle him, but he argues his readers are paying for his judgment, not a language model's output. He also walked back an earlier experiment using AI-generated images to illustrate columns after consistent reader pushback.