AI & ETHICS

Anthropic said NO — and we forgive them for being American

The Pentagon demanded unrestricted access to Claude for autonomous weapons and mass surveillance. Anthropic refused. Trump blacklisted them. The world responded by downloading Claude in record numbers.


If you've been following this blog, you know our stance: we care about who owns what. We've spent months documenting European alternatives to American tech, encouraging conscious consumption, and questioning Big Tech's loyalties. American companies donating to Trump's inauguration, bending to political pressure, chasing Pentagon contracts — we've seen it all.

But then Anthropic did something none of us expected from a San Francisco AI company valued at $380 billion: they told the most powerful government on Earth to go to hell.

The bottom line

Anthropic refused to let its AI be used for autonomous weapons or mass surveillance of Americans — even when the Pentagon threatened to destroy their business. They are now the only American company ever publicly designated a "supply chain risk to national security." They're challenging it in court.

What happened — the full timeline

In February 2026, the Pentagon offered Anthropic a contract worth up to $200 million to deploy Claude across classified military networks. There was one catch: the Department of Defense wanted unrestricted access to Claude for "all lawful purposes."

Anthropic was willing to sign — but drew two red lines:

RED LINE #1

No autonomous weapons

"We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians."

RED LINE #2

No mass domestic surveillance

"We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." Anthropic refused to let Claude be used to monitor American citizens at scale.

Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline: 5:01 p.m., February 27, 2026. Relent, or face consequences.

Anthropic did not relent.

Within hours: Trump ordered all federal agencies to stop using Anthropic technology. Hegseth designated the company a "supply chain risk to national security" — a designation never before applied to an American company. And rival OpenAI announced its own Pentagon deal, conveniently timed within hours of the ban.


Anthropic vs. OpenAI: a tale of two companies

The contrast couldn't be sharper.

Anthropic (Claude) vs. OpenAI (ChatGPT):

Pentagon response: Anthropic refused unrestricted access; OpenAI signed the deal within hours.
Autonomous weapons: Anthropic demanded a contractual prohibition; OpenAI relies on "trust that laws will be followed."
Mass surveillance: Anthropic insisted on an explicit contractual ban; OpenAI stated "red lines" but agreed to no hard contractual terms.
Trump inauguration: Anthropic made no donation; OpenAI's Sam Altman donated $1 million.
Result: Anthropic was blacklisted and designated a supply chain risk; OpenAI got the contract.
Public response: Claude downloads surged 183% and hit #1 on the App Store; ChatGPT is still dominant at 250M daily active users, but staff are "fuming," per CNN.

OpenAI CEO Sam Altman said Anthropic was "more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with." In other words: just trust the government to follow its own rules.

As MIT Technology Review put it: "The whole reason Anthropic earned so many supporters is that they don't believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance."


The world voted with its downloads

What happened next was extraordinary. Instead of destroying Anthropic, the ban turned into the biggest marketing event in AI history.

- 183% increase in Claude daily active users since early 2026
- #1 on Apple's U.S. free apps chart, overtaking ChatGPT
- 149,000 daily downloads (vs. ChatGPT's 124,000) by March 2
- Paid subscribers doubled

More than a million people signed up for Claude each day during the peak. Claude became the #1 AI app in over 20 countries. To be clear: ChatGPT is still the dominant player with 250 million daily active users across iOS and Android — dwarfing Claude's 11.3 million. But the trajectory matters. Claude briefly overtook ChatGPT in daily downloads, and the message was unmistakable: people reward companies that stand on principle.


The industry rallied behind Anthropic

This wasn't just consumers. The biggest names in tech broke their silence.

The Information Technology Industry Council — representing Apple, Google, Amazon, Microsoft, Meta, Adobe, CoreWeave, and Nvidia — wrote to the Trump administration urging it to reconsider. Their argument: "Contract disputes should be resolved through continued negotiation, not by designating American companies as security threats."

Former defense and intelligence officials sent a separate letter to Congress calling for an investigation, warning the Pentagon was setting "a dangerous precedent for any American company that negotiates with the government."

Even Anthropic's competitors recognized the danger: if the government can blacklist a company for asking for ethical guardrails, no tech company is safe.


Why Europeans should care

We run a blog that usually tells you to buy European. We've catalogued hundreds of European alternatives to American products. So why are we writing a love letter to a San Francisco AI company?

Because principles matter more than passports.

Anthropic didn't refuse the Pentagon to score points in Europe. They did it because they believe AI-powered killer robots and mass surveillance are genuinely dangerous — for Americans and everyone else. That's a position most Europeans already hold. It's why the EU AI Act exists.

And Anthropic is putting its money where its mouth is in Europe:

EMEA: Fastest-growing region

Anthropic's EMEA revenue has grown 9× in the past year. Large business accounts (>$100K revenue each) grew 10× in the same period. European headcount tripled.

Six European offices

London, Dublin, Zurich, Paris, Munich — and growing. European enterprises like L'Oréal, BMW, SAP, N26, Qonto, and Doctolib are using Claude for core operations.

56% of organizations using generative AI now use Anthropic — up from 29% a year ago. This isn't a niche player. This is the company that builds the AI behind this very website's brand identification.

Full disclosure

ProduktInfo is built with Anthropic's Claude Code — an AI-assisted development tool that is rapidly becoming the industry standard for professional software engineering. We chose Claude before the Pentagon dispute, based on capability and safety practices. The events of February 2026 confirmed that choice.


What happens next

Anthropic has filed a legal challenge against the supply chain risk designation. CEO Dario Amodei stated: "We do not believe this action is legally sound, and we see no choice but to challenge it in court."

The case could set a precedent for the entire tech industry. If the government can punish a company for demanding ethical guardrails in a contract, the message to every AI company is clear: comply or be destroyed.

The counter-message from millions of new Claude users is equally clear: we'll back the company that stands its ground.


Frequently asked questions

Why did the Pentagon blacklist Anthropic?
Anthropic refused to grant the Pentagon unrestricted access to its Claude AI models. Specifically, Anthropic insisted on contractual prohibitions against using Claude for fully autonomous weapons and mass domestic surveillance. When the company didn't meet the Pentagon's deadline of February 27, 2026, Defense Secretary Pete Hegseth designated it a "supply chain risk to national security."
What happened to Claude downloads after the ban?
Claude's mobile daily active users surged 183% since early 2026. By March 2, Claude had 149,000 daily downloads vs. ChatGPT's 124,000. The Claude iOS app reached #1 on Apple's U.S. free apps chart, overtaking ChatGPT. Paid subscribers doubled.
What did OpenAI do differently?
Hours after Trump banned Anthropic, OpenAI announced its own Pentagon deal. OpenAI's contract includes stated "red lines" but relies on trust that existing laws will be followed — rather than demanding explicit contractual prohibitions like Anthropic did. OpenAI CEO Sam Altman had donated $1 million to Trump's inauguration.
Is Anthropic an American company?
Yes, headquartered in San Francisco. Founded in 2021 by former OpenAI executives Dario and Daniela Amodei. Major investors include Google (14% stake), Amazon ($8 billion total), and Microsoft/Nvidia. However, Anthropic is expanding rapidly in Europe with offices in London, Dublin, Zurich, Paris, and Munich.
Does the ban affect regular Claude users?
No. The ban only applies to U.S. federal agencies and defense contractors working on Pentagon contracts. Commercial and consumer use of Claude is completely unaffected worldwide.
Who supported Anthropic?
Apple, Google, Amazon, Microsoft, Meta, Adobe, CoreWeave, and Nvidia — through the Information Technology Industry Council — wrote to the Trump administration urging it to reconsider. Former defense and intelligence officials also sent a letter to Congress calling for an investigation.

Want to know who owns the brands you buy?

Scan Owner shows you who owns the brands in your everyday life. Powered by European AI and built in Denmark. 300,000+ brands. Free to use.

European AI · Built in Denmark
