Anthropic Exposes ‘Industrial-Scale’ Chinese AI Distillation Campaigns

It started with suspicious patterns. Unusual traffic spikes. Repetitive queries that didn’t look like normal user behavior. When Anthropic’s security team began investigating, they uncovered something far more systematic than typical terms-of-service violations: a coordinated, industrial-scale operation to extract the company’s most valuable AI capabilities.

In a detailed blog post published Monday, Anthropic accused three prominent Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of running sophisticated “distillation” campaigns designed to clone Claude’s capabilities for their own models. The scale was staggering: over 24,000 fraudulent accounts generating more than 16 million exchanges with Claude.

“We have identified industrial-scale campaigns by three AI laboratories to illicitly extract Claude’s capabilities to improve their own models. These campaigns are growing in intensity and sophistication. The window to act is narrow.” — Anthropic Security Team

The Anatomy of a Distillation Attack

Distillation isn’t inherently malicious. In fact, it’s a standard technique that AI labs use internally to create smaller, more efficient versions of their own models. The process involves training a smaller “student” model on the outputs of a larger “teacher” model, transferring knowledge without requiring the massive computational resources needed for training from scratch.
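The teacher/student setup can be illustrated with a toy sketch. The example below is purely illustrative and vastly simpler than real LLM distillation: a fixed linear "teacher" classifier stands in for the large model, and an identically-shaped "student" is trained only on the teacher's softened output probabilities, never on ground-truth labels. All names and parameters here are invented for the sketch.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer labels."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed classifier standing in for the large, expensive model.
X = rng.normal(size=(500, 8))                   # queries sent to the teacher
W_teacher = rng.normal(size=(8, 3))
teacher_probs = softmax(X @ W_teacher, temperature=2.0)  # soft labels

# "Student": starts untrained and is fit to mimic the teacher's outputs,
# using only (query, teacher-output) pairs -- the essence of distillation.
W_student = np.zeros((8, 3))
learning_rate = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, temperature=2.0)
    # Gradient of cross-entropy between student and teacher distributions
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= learning_rate * grad

# How often the student now agrees with the teacher's hard predictions
agreement = np.mean(
    teacher_probs.argmax(axis=1) == softmax(X @ W_student).argmax(axis=1)
)
print(f"student/teacher agreement: {agreement:.2f}")
```

The student never sees the teacher's weights or training data, only its outputs, yet ends up closely reproducing its behavior. That asymmetry is what makes distillation cheap for the student and costly for whoever built the teacher.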

The problem arises when competitors use distillation to essentially copy homework. Rather than investing the billions of dollars and years of research required to develop frontier AI capabilities, bad actors can extract those capabilities through systematic querying—provided they have enough access and persistence.

Anthropic’s investigation revealed distinct targeting patterns for each Chinese lab. DeepSeek focused on foundational reasoning capabilities and alignment techniques, specifically seeking censorship-safe alternatives to policy-sensitive queries. Moonshot AI targeted agentic reasoning, tool use, coding, and data analysis—capabilities central to its Kimi K2.5 model released last month. MiniMax went after agentic coding and orchestration capabilities.

The sophistication was notable. When Anthropic released a new model during MiniMax’s active campaign, the company pivoted within 24 hours—redirecting nearly half its traffic to capture capabilities from the latest system. This wasn’t opportunistic scraping; it was systematic industrial espionage.

The National Security Dimension

Anthropic didn’t mince words about the broader implications. The company explicitly framed the distillation campaigns as a national security concern, warning that models built through illicit extraction lack the safety guardrails that American labs implement.

“Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to develop bioweapons or carry out malicious cyber activities. Models built through illicit distillation are unlikely to retain those safeguards.” — Anthropic Blog Post

The concerns extend beyond capability theft. Anthropic warned that authoritarian governments could deploy frontier AI for “offensive cyber operations, disinformation campaigns, and mass surveillance”—risks that multiply if those models are open-sourced and spread beyond any single government’s control.

The accusations come at a particularly sensitive moment in U.S.-China technology relations. Just weeks earlier, OpenAI sent a memo to the House Select Committee on China making similar allegations against DeepSeek. The timing suggests American AI companies are coordinating their response to what they view as an existential threat to their competitive advantage.

The Chip Connection

Anthropic’s revelations carry significant implications for the ongoing debate over AI chip export controls. The company explicitly tied the distillation campaigns to the rationale for restricting China’s access to advanced semiconductors.

The argument is straightforward: Executing extraction at the scale Anthropic documented requires substantial computational resources. Chinese labs needed access to advanced chips not just for training their own models, but for running the massive inference workloads required to systematically query Claude millions of times.

To proponents of export controls, this bolsters the case for strict restrictions. If Chinese AI advancement depends significantly on distilling American models rather than on independent innovation, then limiting access to the chips needed for both training and large-scale extraction becomes even more critical.

The timing is politically charged. Last month, the Trump administration allowed U.S. companies to export advanced AI chips like the H200 to China—a move that Anthropic publicly criticized. The distillation revelations give ammunition to China hawks who argue that any chip access advantages Chinese competitors.

“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact.” — Dmitri Alperovitch, Chairman of Silverado Policy Accelerator

The Irony of AI IP

The accusations place American AI companies in an awkward position. These same labs have vigorously defended their right to train models on copyrighted works without permission or payment, arguing that the transformative nature of AI development constitutes fair use.

Yet when the tables turn—when their own model outputs become training data for competitors—they cry foul. The inconsistency hasn’t gone unnoticed. Critics point out that AI companies can’t simultaneously claim their training data is fair game while treating their model outputs as proprietary intellectual property.

President Trump articulated this perspective at an AI event last July: “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for. When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws.”

The distinction, AI companies argue, lies in the terms of service violations and the systematic nature of the extraction. Individual users learning from Claude is expected; organized industrial-scale campaigns to clone capabilities are not. Whether that distinction holds legally and ethically remains an open question.

What Comes Next

Anthropic says it will continue investing in defenses that make distillation attacks harder to execute and easier to identify. But the company acknowledges that technical solutions alone won’t suffice.

“Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community,” the blog post concludes. The call for coordination suggests Anthropic views this as a systemic challenge that individual companies cannot solve independently.

For now, the three accused Chinese companies—DeepSeek, Moonshot AI, and MiniMax—have not publicly responded to the allegations. Models from all three rank among the top 15 on the Artificial Analysis leaderboard, suggesting their distillation efforts may have yielded genuinely competitive capabilities.

The episode highlights a fundamental tension in the global AI race: as models become more capable and valuable, the incentives to extract rather than innovate grow stronger. How the industry and policymakers respond to this challenge may shape the competitive landscape for years to come.


This article was reported by the ArtificialDaily editorial team. For more information, visit Anthropic, TechCrunch, and CNN.

By Mohsin
