If you've been following this blog, you know our stance: we care about who owns what. We've spent months documenting European alternatives to American tech, encouraging conscious consumption, and questioning Big Tech's loyalties. American companies donating to Trump's inauguration, bending to political pressure, chasing Pentagon contracts — we've seen it all.
But then Anthropic did something none of us expected from a San Francisco AI company valued at $380 billion: they told the most powerful government on Earth to go to hell.
Anthropic refused to let its AI be used for autonomous weapons or mass surveillance of Americans — even when the Pentagon threatened to destroy their business. They are now the only American company ever publicly designated a "supply chain risk to national security." They're challenging it in court.
What happened — the full timeline
In February 2026, the Pentagon offered Anthropic a contract worth up to $200 million to deploy Claude across classified military networks. There was one catch: the Department of Defense wanted unrestricted access to Claude for "all lawful purposes."
Anthropic agreed, but drew two red lines:
No autonomous weapons
"We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians."
No mass domestic surveillance
"We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." Anthropic refused to let Claude be used to monitor American citizens at scale.
Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline: 5:01 p.m., February 27, 2026. Relent, or face consequences.
Anthropic did not relent.
Within hours: Trump ordered all federal agencies to stop using Anthropic technology. Hegseth designated the company a "supply chain risk to national security" — a designation never before applied to an American company. And rival OpenAI announced its own Pentagon deal, conveniently timed within hours of the ban.
Anthropic vs. OpenAI: the tale of two companies
The contrast couldn't be sharper.
| | Anthropic (Claude) | OpenAI (ChatGPT) |
|---|---|---|
| Pentagon response | Refused unrestricted access | Signed the deal within hours |
| Autonomous weapons | Demanded contractual prohibition | Relies on "trust that laws will be followed" |
| Mass surveillance | Explicit contractual ban | Stated "red lines" but no hard contractual terms |
| Trump inauguration | No donation | Sam Altman donated $1 million |
| Result | Blacklisted, designated supply chain risk | Got the contract |
| Public response | Downloads surged 183%, hit #1 on App Store | Still dominant at 250M daily active users, but staff "fuming" per CNN |
OpenAI CEO Sam Altman said Anthropic was "more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with." In other words: just trust the government to follow its own rules.
As MIT Technology Review put it: "The whole reason Anthropic earned so many supporters is that they don't believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance."
The world voted with its downloads
What happened next was extraordinary. Instead of destroying Anthropic, the ban turned into the biggest marketing event in AI history.
More than a million people signed up for Claude each day during the peak. Claude became the #1 AI app in over 20 countries. To be clear: ChatGPT is still the dominant player with 250 million daily active users across iOS and Android — dwarfing Claude's 11.3 million. But the trajectory matters. Claude briefly overtook ChatGPT in daily downloads, and the message was unmistakable: people reward companies that stand on principle.
The industry rallied behind Anthropic
This wasn't just consumers. The biggest names in tech broke their silence.
The Information Technology Industry Council — representing Apple, Google, Amazon, Microsoft, Meta, Adobe, CoreWeave, and Nvidia — wrote to the Trump administration urging it to reconsider. Their argument: "Contract disputes should be resolved through continued negotiation, not by designating American companies as security threats."
Former defense and intelligence officials sent a separate letter to Congress calling for an investigation, warning the Pentagon was setting "a dangerous precedent for any American company that negotiates with the government."
Even Anthropic's competitors recognized the danger: if the government can blacklist a company for asking for ethical guardrails, no tech company is safe.
Why Europeans should care
We run a blog that usually tells you to buy European. We've catalogued hundreds of European alternatives to American products. So why are we writing a love letter to a San Francisco AI company?
Because principles matter more than passports.
Anthropic didn't refuse the Pentagon to score points in Europe. They did it because they believe AI-powered killer robots and mass surveillance are genuinely dangerous — for Americans and everyone else. That's a position most Europeans already hold. It's why the EU AI Act exists.
And Anthropic is putting its money where its mouth is in Europe:
EMEA: Fastest-growing region
Anthropic's EMEA revenue has grown 9× in the past year. Large business accounts (>$100K revenue each) grew 10× in the same period. European employees tripled.
Six European offices
London, Dublin, Zurich, Paris, Munich — and growing. European enterprises like L'Oréal, BMW, SAP, N26, Qonto, and Doctolib are using Claude for core operations.
56% of organizations using generative AI now use Anthropic — up from 29% a year ago. This isn't a niche player. This is the company that builds the AI behind this very website's brand identification.
ProduktInfo is built with Anthropic's Claude Code — an AI-assisted development tool that is rapidly becoming the industry standard for professional software engineering. We chose Claude before the Pentagon dispute, based on capability and safety practices. The events of February 2026 confirmed that choice.
What happens next
Anthropic has filed a legal challenge against the supply chain risk designation. CEO Dario Amodei stated: "We do not believe this action is legally sound, and we see no choice but to challenge it in court."
The case could set a precedent for the entire tech industry. If the government can punish a company for demanding ethical guardrails in a contract, the message to every AI company is clear: comply or be destroyed.
The counter-message from millions of new Claude users is equally clear: we'll back the company that stands its ground.
Want to know who owns the brands you buy?
Scan Owner shows you who owns the brands in your everyday life. Powered by European AI and built in Denmark. 300,000+ brands. Free to use.