Anthropic says it uncovered industrial-scale distillation campaigns by three AI labs (DeepSeek, Moonshot AI, and MiniMax) designed to extract capabilities from its Claude models.
According to the disclosure, more than 16 million interactions were conducted through roughly 24,000 fraudulent accounts, in violation of Anthropic's terms of service and regional access restrictions, including limits on commercial access from China.
Distillation is a standard AI training technique used to train smaller models on the outputs of stronger systems. Anthropic states that while this method is legitimate when applied to a company’s own models, competitors can misuse it to replicate advanced capabilities in less time and at lower cost.
The company describes coordinated efforts to gather Claude’s outputs in large volumes to improve rival systems.
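Sequence-level distillation of this kind is straightforward to implement, which is part of why harvested outputs are valuable. Below is a minimal, generic sketch using the Hugging Face transformers library: a small "student" model is fine-tuned to reproduce a stronger model's answers. The model name and the (prompt, answer) pairs are hypothetical placeholders; this illustrates the general technique only, not any lab's actual pipeline.

```python
# Minimal sequence-level distillation sketch: fine-tune a small "student"
# model on (prompt, teacher_answer) pairs harvested from a stronger model.
# Generic illustration only; the model and data below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small stand-in student
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

# Hypothetical harvested pairs: a prompt plus the stronger model's reply.
pairs = [
    ("Explain binary search step by step.",
     "Binary search repeatedly halves the sorted search interval..."),
]

student.train()
for prompt, teacher_answer in pairs:
    text = prompt + "\n" + teacher_answer + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss: the student learns to reproduce the teacher's
    # text, transferring behavior without any access to the teacher's weights.
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Run over millions of harvested exchanges, the same loop turns API outputs into training data, which is why the volume figures in the disclosure matter.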
Why It Matters: Frontier AI systems require significant investment in compute, data, and safety engineering. If their outputs can be systematically harvested, competitors may narrow capability gaps without making comparable investments, potentially distorting perceptions of independent technological progress. This also raises questions about how effectively safeguards and access restrictions hold up when model outputs themselves become a source of downstream training data.
- Three Attributed Campaigns, 16 Million Exchanges: Anthropic attributes over 150,000 exchanges to DeepSeek, more than 3.4 million to Moonshot AI, and over 13 million to MiniMax. Attribution was based on IP address correlation, request metadata, infrastructure indicators, timing analysis, and in some cases, corroboration from industry partners who observed similar activity.
- Targeting of High-Value Capabilities and Reasoning Data: The campaigns focused on agentic reasoning, tool use, coding, reinforcement learning support, computer-use agents, and computer vision. In some cases, prompts asked Claude to articulate step-by-step internal reasoning after producing an answer, generating chain-of-thought style training data. Other prompts sought censorship-safe reformulations of politically sensitive topics such as dissidents and authoritarianism.
- Fraudulent Accounts and Proxy “Hydra Cluster” Networks: To bypass regional restrictions, actors allegedly relied on commercial proxy services that resell access to frontier AI models. These services operated large networks of fraudulent accounts distributed across APIs and cloud providers. Anthropic describes one instance where a single proxy network managed more than 20,000 accounts simultaneously, replacing banned accounts with new ones and mixing extraction traffic with unrelated customer activity.
- Adaptive Campaigns and Model Lifecycle Visibility: Anthropic says it detected MiniMax's campaign while it was still active, before MiniMax had released the model being trained. When Anthropic launched a new Claude model during that period, MiniMax redirected nearly half of its traffic to the updated system within 24 hours, giving Anthropic direct visibility into how distillation efforts track product releases.
- National Security and Policy Implications: Anthropic argues that models built through unauthorized distillation may not retain safeguards intended to prevent misuse in areas such as bioweapons development or malicious cyber activity, and that large-scale extraction can complicate evaluations of export controls tied to advanced chip access. In response, it has deployed detection classifiers and behavioral fingerprinting systems, strengthened account verification, shared technical indicators with industry peers and authorities, and introduced product and model-level countermeasures to limit the usefulness of outputs for illicit training (a simplified sketch of this kind of traffic analysis follows this list).
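Anthropic's actual detection systems are not public, so the following is only a toy illustration of the broad signal types the disclosure names: abnormal per-account volume, templated prompts, and many accounts sharing the same infrastructure. The log schema, thresholds, and helper names are all hypothetical.

```python
# Toy illustration of extraction-pattern heuristics over API request logs.
# The log schema, thresholds, and helper names are hypothetical; the
# disclosure names only the broad signal types (volume, request metadata,
# IP and timing correlation), not how Anthropic's systems actually work.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Request:
    account_id: str
    ip_subnet: str        # coarse infrastructure signal, e.g. a /24
    prompt_template: str  # prompt text with variable spans masked out

def flag_bulk_templated_accounts(logs: list[Request],
                                 volume_threshold: int = 10_000,
                                 template_ratio: float = 0.8) -> set[str]:
    """Flag accounts whose traffic looks like scripted bulk harvesting."""
    by_account: dict[str, list[Request]] = defaultdict(list)
    for r in logs:
        by_account[r.account_id].append(r)

    flagged = set()
    for account, reqs in by_account.items():
        if len(reqs) < volume_threshold:   # signal 1: abnormal volume
            continue
        # Signal 2: one masked template dominates the account's traffic,
        # suggesting scripted generation rather than organic human queries.
        _, top_count = Counter(r.prompt_template for r in reqs).most_common(1)[0]
        if top_count / len(reqs) >= template_ratio:
            flagged.add(account)
    return flagged

def coordinated_subnets(logs: list[Request], flagged: set[str],
                        min_accounts: int = 50) -> set[str]:
    """Subnets hosting many flagged accounts: a coarse proxy-network signal."""
    accounts = defaultdict(set)
    for r in logs:
        if r.account_id in flagged:
            accounts[r.ip_subnet].add(r.account_id)
    return {s for s, a in accounts.items() if len(a) >= min_accounts}
```

The second helper reflects the "hydra" pattern described above: individual accounts are disposable and quickly replaced, so durable detection has to key on shared infrastructure and behavior rather than account identity.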
Go Deeper -> Detecting and preventing distillation attacks – Anthropic
Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model – The Hacker News