Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks
Summary
Artificial intelligence firm Anthropic publicly accused three Chinese AI companies—DeepSeek, Moonshot, and MiniMax—of using outputs from its Claude large language model to train their own models through a technique called "distillation." Anthropic said it identified over 16 million exchanges generated via approximately 24,000 fraudulent accounts, targeting key Claude capabilities such as agentic reasoning and coding. While distillation is a legitimate training method, Anthropic argues these competitors used it illicitly to acquire powerful capabilities quickly and cheaply. The firm said it identified the actors through IP correlation and metadata, and warned that such foreign distillation campaigns pose geopolitical risks by potentially feeding capabilities into military or surveillance systems operated by authoritarian governments. Anthropic plans to strengthen its detection systems and called on industry and policymakers to collaborate in combating these large-scale attacks.
(Source: Cointelegraph)