U.S. AI Models Allegedly Copied by China’s AI Tigers

A leading American artificial intelligence company is taking aim at three of China’s fastest-growing AI firms, accusing them of secretly siphoning capabilities from its flagship model to fuel their own growth. Anthropic made the allegations public through an official blog post, claiming the behavior not only violates its terms of service but also threatens broader US national security interests.

The three companies named in the accusations (DeepSeek, MiniMax, and Moonshot AI) allegedly set up more than 24,000 fake accounts to gain unauthorized access to Claude, Anthropic’s proprietary AI model. Through those accounts, the firms reportedly conducted over 16 million conversations with Claude, using the resulting data to train their own systems via a process called distillation. Given that Claude is entirely unavailable in China, the creation of these accounts appears to have been a calculated effort to bypass restrictions.

Breaking Down Distillation and Why It’s Controversial

Distillation itself is not an obscure or fringe technique. In fact, it is widely used across the AI industry, typically allowing companies to produce smaller, more affordable versions of their own larger models. The controversy arises when outside parties attempt to apply this method to someone else’s technology without permission. Virtually every major AI provider, Anthropic included, strictly forbids this kind of third-party use in its terms of service.
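To make the technique concrete, here is a minimal, hypothetical sketch of distillation: a "teacher" model's soft output probabilities are used as training targets for a smaller "student" model. Both models here are toy linear classifiers in NumPy, not real frontier systems; the temperature parameter and training setup are illustrative assumptions, not anything described by Anthropic or the accused firms.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, temp=1.0):
    """Softened probabilities; higher temp spreads mass across classes."""
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed classifier standing in for a large model.
W_teacher = rng.normal(size=(4, 3))

# "Student": trained purely on the teacher's outputs, never on real labels.
W_student = np.zeros((4, 3))

X = rng.normal(size=(256, 4))              # queries sent to the teacher
temp = 2.0                                 # distillation temperature (assumed)
targets = softmax(X @ W_teacher, temp)     # teacher's soft labels

lr = 0.5
for _ in range(500):
    probs = softmax(X @ W_student, temp)
    # Gradient of the cross-entropy between student probs and soft targets.
    grad = X.T @ (probs - targets) / len(X)
    W_student -= lr * grad

# Measure how closely the student mimics the teacher's output distribution.
final = softmax(X @ W_student, temp)
kl = np.mean(np.sum(targets * (np.log(targets) - np.log(final)), axis=-1))
print(f"mean KL divergence: {kl:.4f}")
```

The point of the sketch is that the student never needs the teacher's weights or training data; a large enough sample of the teacher's outputs is sufficient to approximate its behavior, which is why terms of service are the main line of defense against unauthorized distillation.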

Anthropic is not alone in raising these concerns. OpenAI made comparable allegations earlier this month in a formal memo submitted to the US House Select Committee on China. That document accused DeepSeek and affiliated Chinese AI companies of systematically distilling OpenAI’s ChatGPT models throughout the past year, with OpenAI describing the situation as Chinese firms building their success by taking a free ride on American innovation rather than developing original capabilities independently.

DeepSeek first grabbed international attention after releasing an AI model that performed at a level comparable to top Western competitors, yet reportedly required far less computing power to build and run. That announcement rattled the industry and sparked debate over whether American export restrictions on high-end chips were actually working, or whether Chinese companies had found ways to work around them entirely.

Why Anthropic Says Export Controls Are Working, Not Failing

Rather than viewing DeepSeek’s rise as evidence that export controls have been ineffective, Anthropic draws the opposite conclusion. The company argues that the reliance on distillation by Chinese AI labs is itself proof that those restrictions are biting. If Chinese firms could independently develop frontier AI models without depending on American technology or data, they would not need to resort to covertly harvesting outputs from US-built systems.

Anthropic has been a consistent advocate for export control policies and used the blog post to reinforce that position, stating that genuine breakthroughs in AI cannot be manufactured through distillation alone. Advanced chips and access to frontier model outputs remain essential ingredients for building truly competitive AI systems.

The Bigger Danger: What Happens When Safety Is Left Out

Perhaps the most alarming dimension of Anthropic’s allegations is not the intellectual property theft itself, but what comes after it. The company argues that models built through unauthorized distillation are unlikely to include the rigorous safety measures that responsible AI developers build into their systems. That gap, Anthropic warns, could have serious real-world consequences.

In the wrong hands, these models could be weaponized to carry out sophisticated cyberattacks, support the creation of biological threats, or give authoritarian regimes powerful tools for surveillance, propaganda, and offensive digital warfare. Anthropic stressed that the timeframe for meaningful intervention is shrinking rapidly.

DeepSeek, MiniMax, and Moonshot AI, the latter known for its Kimi model, have earned the collective label of “AI tigers” in recognition of their explosive growth. All three rank within the top 15 on the Artificial Analysis leaderboard, one of the most closely watched performance rankings in the global AI space. As of now, none of the three companies have issued any public response to Anthropic’s claims, and DeepSeek has remained equally silent on OpenAI’s earlier allegations.
