On a typical Monday in San Francisco, Anthropic's security team was reviewing anomalous traffic patterns when it noticed something extraordinary. Millions of API exchanges were flowing through a complex network of proxy services, all originating from a coordinated operation that had been running quietly for months. What the team uncovered would send ripples through the global AI industry and raise serious questions about intellectual property, national security, and the future of AI development.

"These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region." — Anthropic Security Team

The Scale of the Operation

Anthropic revealed this week that it had uncovered what it described as industrial-scale intellectual property theft campaigns conducted by three prominent Chinese AI companies: DeepSeek (深度求索), Moonshot AI Technology Co (月之暗面), and MiniMax Group Inc (稀宇科技).

The technique they employed, known as "distillation," is not inherently malicious. It is a common practice in AI development, often used by companies to create cheaper, smaller versions of their own models. But when applied to a competitor's proprietary system without authorization, it becomes something else entirely: a way to rapidly boost performance at a fraction of the cost of independent research and development.

The numbers paint a stark picture. According to Anthropic, the three Chinese firms conducted approximately 16 million exchanges with its Claude model, using roughly 24,000 fake accounts to mask their activities. MiniMax alone was responsible for more than 13 million of those exchanges, making it the largest operation of the three.

Circumventing Controls, Extracting Value

Proxy services played a critical role in the campaigns.
To circumvent Anthropic's ban on commercial access from China, the labs allegedly routed traffic through intermediary services that managed vast networks of fraudulent accounts. This allowed them to bypass export controls on powerful US technology, controls specifically designed to preserve American dominance in the strategically sensitive AI sector.

The targeted capabilities reveal strategic intent. Each campaign concentrated heavily on coding, agentic reasoning, and tool use, areas in which Claude is widely considered a market leader. Rather than seeking general-purpose model improvements, the operations zeroed in on specific competencies that would be most valuable for commercial deployment.

The cost advantage is substantial. Models built through distillation can achieve performance comparable to their source systems at a fraction of the development cost. For companies racing to compete in the global AI market, this represents an enormous incentive, one that appears to have outweighed concerns about the methods used to obtain it.

"Models built through illicit distillation are unlikely to retain safety guardrails designed to prevent misuse—such as restrictions on helping develop bioweapons or enabling cyberattacks." — Anthropic Statement

National Security Implications

Anthropic's disclosure goes beyond commercial concerns. The company explicitly framed the issue as a national security risk, arguing that models developed through unauthorized distillation may not maintain the safety guardrails embedded in their source systems.

The concern is straightforward: when capabilities are extracted and repackaged without the original developer's oversight, the resulting models may lack critical restrictions. Guardrails designed to prevent misuse, such as restrictions on helping develop bioweapons, enabling cyberattacks, or generating harmful content, could be inadvertently stripped away or deliberately removed.

This isn't merely theoretical.
The rapid advancement of AI capabilities has made safety research an increasingly urgent priority for both industry and government. The prospect of powerful models circulating without adequate safeguards is precisely the scenario policymakers have been working to prevent.

OpenAI and the Broader Pattern

Anthropic is not alone in its accusations. Earlier this month, OpenAI, creator of ChatGPT and Anthropic's chief rival, told US lawmakers that Chinese companies were engaged in "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs."

The convergence of complaints from the two leading US AI companies suggests a systematic pattern rather than isolated incidents. Both firms have identified distillation as the primary technique, both have pointed to Chinese competitors as the primary actors, and both have raised concerns about the strategic implications.

Even White House AI adviser David Sacks has weighed in, expressing concern that DeepSeek specifically used this method to accelerate its development. The issue has clearly moved beyond industry disputes into the realm of policy and international relations.

The DeepSeek Precedent

The controversy has particular resonance because of what happened a year ago. When DeepSeek released a low-cost generative AI model that performed at levels comparable to ChatGPT and other top US chatbots, it upended assumptions about American dominance in the sector.

The release triggered a reassessment of competitive dynamics in the global AI race. If a Chinese company could achieve parity with leading American models at a fraction of the cost, what did that mean for the billions of dollars being invested in US AI infrastructure?

At the time, questions were raised about how DeepSeek had achieved its results so efficiently. The company's explanations, which focused on architectural innovations and training optimizations, were accepted by many observers.
Now, with Anthropic's revelations, those explanations are being revisited.

Industry and Government Response

Anthropic's statement included a clear call to action: coordinated industry and government responses to address a threat the company said no single organization could tackle alone.

The request reflects a growing recognition that AI security challenges increasingly transcend individual corporate interests. When models can be extracted, replicated, and potentially modified to remove safety constraints, the implications affect not just the original developer but the broader ecosystem, including competitors, users, and society at large.

For policymakers, the issue presents complex trade-offs. Stricter export controls and access restrictions could help protect American AI capabilities, but they might also accelerate efforts to develop independent Chinese alternatives. International agreements on AI development standards could establish norms, but enforcement would remain challenging.

The coming months will reveal whether Anthropic's disclosure catalyzes meaningful action or becomes another data point in an ongoing debate about AI governance. What is clear is that the boundary between commercial competition and national security in the AI sector is becoming increasingly difficult to define, and increasingly important to address.

This article was reported by the ArtificialDaily editorial team. For more information, visit Taipei Times and Anthropic.