News

xAI staff were using Anthropic's Claude via Cursor — Anthropic cut off access

Jan 12, 2026

Key Points

  • Anthropic revoked xAI's access to Claude models this week, blocking internal use through Cursor, the code editor that integrates multiple AI assistants.
  • Anthropic is systematically cutting off access for competing AI labs it deems material threats, treating model access as a competitive moat.
  • The pattern mirrors earlier distillation controversies, in which researchers used ChatGPT outputs to train competing models; it remains unclear whether denying access prevents reverse-engineering or merely deprives rivals of capable tooling.

Summary

Anthropic revoked xAI's access to Claude models this week. xAI staff had been using Claude internally through Cursor, a code editor that integrates multiple AI models, until the access was cut off.

The move reflects a broader pattern: Anthropic has now blocked access for several competing AI labs it views as meaningful threats. A rival might use Claude simply because it is a capable tool, or it might mine Claude's outputs to reverse-engineer competing capabilities. Either way, Anthropic's response is becoming systematic.

This recalls an earlier controversy around distillation, when researchers used ChatGPT outputs to train competing models. The underlying dynamic is the same: once a model becomes good enough to matter, granting access to a direct rival becomes a liability for the provider.