Anthropic Just Accused China of What It Got Sued For
Paid $1.5B for theft. Now screaming about theft.

Yesterday, February 24th, 2026, Anthropic posted a tweet accusing three Chinese AI companies of theft.
The internet read it, remembered something, and started laughing.
The replies are hilarious.

Five months earlier, Anthropic had paid $1.5 billion to settle a class-action lawsuit brought by book authors who argued, convincingly enough for a ten-figure payout, that Anthropic had used pirated copies of their works to train Claude. No admission of liability. Just $3,000 per book and a quiet resolution.

Read the original article: The Guardian
The tweet went out anyway.

Well, according to Anthropic, DeepSeek, Moonshot AI, and MiniMax created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude, systematically extracting its capabilities to train their own models.
Fortune borrowed a line from The Office for its headline: “How the turn tables.”
OpenAI had, in fact, moved first. Around two weeks earlier, Sam Altman’s company warned US lawmakers that DeepSeek was distilling outputs from American models to train its own.
Anthropic’s public accusation followed. The frontier AI labs had both decided, independently or close enough to it, that this was the moment to draw a firm line. What makes that line so interesting is where it sits, and who drew it.
What Distillation Actually Means

Distillation, in AI terms, is not inherently sinister. It describes the process of using a large, capable model to generate outputs that then train a smaller or different model.
Done legitimately, it compresses intelligence into cheaper, faster systems. Most of the AI industry does some version of this.
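For readers who want the mechanics, here is a minimal sketch of classic knowledge distillation in PyTorch. Everything in it is an illustrative assumption, not a detail from any of the labs involved: the toy model sizes, the temperature, and the loss weighting are placeholders. The idea is simply that a large teacher produces soft output distributions and a smaller student is trained to match them.

```python
# Minimal knowledge-distillation sketch (illustrative only).
# A large "teacher" produces soft output distributions; a smaller
# "student" learns to match them alongside the true labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
    def forward(self, x):
        return self.net(x)

class StudentNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    def forward(self, x):
        return self.net(x)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-scaled distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

teacher, student = TeacherNet().eval(), StudentNet()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 128)              # a toy batch of inputs
labels = torch.randint(0, 10, (32,))  # toy ground-truth labels

with torch.no_grad():
    teacher_logits = teacher(x)       # the teacher only generates outputs

optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()
```

The temperature softens the teacher's distribution so the student learns from the relative probabilities across all answers rather than just the top one; that spread of probabilities is where the "compressed intelligence" lives.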
The accusation here is not about distillation itself. It is about the scale and the deception: 24,000 fake accounts, millions of interactions, all designed to extract the intellectual core of a competitor's product without paying for it.
Anthropic’s tweet made a specific point about national security. “Foreign labs that illicitly distill American models can remove safeguards,” they wrote, “feeding model capabilities into their own military, intelligence, and surveillance” infrastructure.
This framing matters. It transforms a commercial grievance into a geopolitical one, shifting the story from corporate rivalry to national interest. That is a very effective way to ensure regulators and policymakers pay attention.
The Glass House Problem
The settlement context is worth sitting with for a moment.
Anthropic did not invent its training methodology in a vacuum. Claude was trained on vast quantities of text, and a significant portion of that text was written by people who were never asked, never paid, and never informed.
When those authors organised and sued, Anthropic settled for what was, at the time, one of the largest AI-related legal payouts on record.
This does not make the Chinese companies’ alleged behaviour acceptable. Using fake accounts to systematically drain a competitor’s model is deceptive in a way that training on publicly available text, however ethically murky, is not. The methods are different. The scale of deception is different.
But the underlying dynamic, taking something valuable from someone without permission and using it to build your own capability, is the same pattern. Anthropic benefited from that pattern once. Now they are on the other side of it.
The industry has a short memory for its own sins and a long memory for everyone else’s.
The Playbook Is Always the Same
This isn’t new behaviour. It’s not even surprising. It’s the oldest strategy in competitive industries:
“Accuse your rivals of the exact things you do yourself, but frame your actions as innovation and theirs as theft.”
Before we had AI labs accusing each other of distillation, we had the “War of the Currents” between Thomas Edison and Nikola Tesla (technically Westinghouse, but the mythology prefers Edison versus Tesla).
Edison ran smear campaigns claiming alternating current was deadly, publicly electrocuting animals to prove AC’s danger. He even suggested AC be used for the electric chair to associate it with death.
Yet Edison’s methods weren’t pristine either. His “invention factory” in Menlo Park operated on the principle of taking employees’ ideas, developing them into patents under his name, and claiming credit for innovations that emerged from collaborative work.
The myth that Edison stole Tesla’s ideas persists because the underlying pattern is familiar: established players using every tool to discredit challengers threatening their dominance.
The tech industry repeated this pattern endlessly. When Google launched Android in 2008, Apple’s Steve Jobs called it “grand theft” and vowed “thermonuclear war” against what he saw as blatant copying of iOS.
Apple sued Samsung for copying iPhone designs. Samsung countersued for patent violations. The lawsuits dragged on for years, costing both companies billions.
The outcome? Both companies kept selling phones. Both kept making money.
The “theft” accusations were theatre in a larger competitive game where everyone copies everyone, but only your rivals’ copying is theft.
Power Protects What Power Builds
There is a deeper logic at work across all of these cases. The entity that establishes a dominant position eventually uses every available lever to protect it.
The challenger, who often built their capability by standing on the incumbent’s shoulders, finds those tools turned against them.
The Deep View, a newsletter tracking the AI industry, described this moment as American frontier labs trying to “fend off Chinese model makers from exfiltrating the frontier labs’ crown jewels.”
That framing is revealing. Crown jewels are not earned through moral purity. They are simply what the most powerful player currently possesses.
Anthropic, OpenAI and Google are not wrong to object to what they allege happened. Industrial-scale deception, fake accounts, systematic capability extraction: these are genuine violations of terms of service and arguably of law. The complaints are legitimate.
But the moral authority of the complaint is complicated by the path each of these companies took to get here. OpenAI trained on internet data that included copyrighted content at scale. Google has faced decades of intellectual property litigation across its products. Anthropic just paid $1.5 billion to the people it quietly borrowed from.
The rules of the game tend to get formalised right around the moment that the rule-makers stand to benefit from having rules.
What This Moment Actually Tells Us
The distillation accusations are significant regardless of the irony. They signal that frontier AI has reached a stage where capability gaps between American and Chinese labs have narrowed enough to make this kind of extraction worthwhile.
If DeepSeek, Moonshot AI and MiniMax were so far behind that Claude’s outputs offered little competitive value, the alleged campaigns would not have been worth running.
The accusations also reveal how the AI race is actually being fought. It is not only about compute, data and research talent. It is about access to the output of your competitor’s model at scale.
Distillation has become a form of competitive intelligence gathering. That changes what security means for an AI company.
Anthropic’s national security framing may well be sincere. The concern that distilled models, stripped of safety guardrails, could power military or surveillance applications is legitimate and worth taking seriously.
But it is also strategically useful to frame a commercial dispute as a matter of national defence. Governments move faster and more forcefully when the word “intelligence” appears in the brief.
The Uncomfortable Conclusion
Every major technology transition produces this moment. The people who broke the old rules to build new power eventually write new rules to protect what they built.
Tesla’s AC power rewired the world after Edison tried to kill it. Android phones reached three billion users after Apple tried to litigate them out of existence.
The pattern does not determine the outcome. It just tells you what stage of the game you are watching.
Right now, the American frontier labs are watching their lead narrow and reaching for every available tool: legal, political, national security framing, public accusation.
Some of those tools are justified. Some are the kind they once found pointed at themselves.
The Chinese labs, whether guilty of the specific accusations or not, are doing what challengers in every technology race have done. They are climbing using whatever handholds the current leaders left exposed.
History suggests the leaders rarely hold the lead forever. It also suggests that the moral accounting, when it finally arrives, tends to be distributed fairly evenly across both sides.
