In standing up to the US government, Anthropic has built up so much goodwill that people are chalking “GOD LOVES ANTHROPIC” on the sidewalks outside its San Francisco HQ. The standoff also gave OpenAI the opening it needed.
After Anthropic stood firm in its refusal to bend on its two conditions for using its AI — no mass surveillance of US citizens and no fully autonomous weapons — the Pentagon and the Trump Administration went to DEFCON 1 on Friday. They designated the company a supply chain risk, a label typically reserved for adversaries. President Donald Trump has also directed every government organization to “immediately cease” using Anthropic’s technology.
“The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield,” Secretary of War Pete Hegseth said in a post on X.
Even as it faces the loss of hundreds of millions of dollars in revenue, Anthropic has seen a massive outpouring of support, both in the AI community and beyond. In the hours after the government ban, its Claude app skyrocketed to the top of Apple's App Store, dethroning ChatGPT from the No. 1 spot. Anthropic employees, meanwhile, took to X in large numbers to praise their employer. AI leaders like Ilya Sutskever have commended the company for its stance, and the app has even earned the public admiration of celebrities who have nothing to do with AI, like Katy Perry.
In an interview with CBS News, CEO Dario Amodei called disagreeing with the government the “most American thing in the world.”
Anthropic vowed to take the Pentagon to court over the supply chain risk designation. In a statement on Friday, the company called the designation "legally unsound" and said it "set a dangerous precedent" for companies negotiating with the government.
In the meantime, OpenAI seized the opportunity to sign a contract with the Department of War to use its models instead.
While OpenAI claimed that its agreement with the Pentagon upholds its "redlines" over domestic surveillance and autonomous weapons and "has more guardrails than any previous agreement for classified AI deployments," the reality is a little more nuanced.
Anthropic sought to preserve explicit restrictions barring the use of its models for mass surveillance of U.S. citizens and fully autonomous weapons. The Pentagon, which often structures contracts around broad “all lawful purposes” language, reportedly preferred not to carve out those exceptions. Anthropic declined to move forward under those terms. OpenAI later signed a Department of War agreement structured around the standard federal contracting language. OpenAI CEO Sam Altman defended the decision, saying, "I do not believe unelected leaders of private companies should have as much power as our democratically elected government."
Our Deeper View
While all the attention is certainly a silver lining for Anthropic, the standoff was also a moment of truth for the company's founding principles of AI responsibility, safety, and ethics. By not bending to the Pentagon’s will, especially given that human rights and lives might be at stake, the company kept its promises. Its refusal to capitulate may also put pressure on rivals: an open letter titled “We Will Not Be Divided” began circulating on social media and has since garnered 537 signatures from Google employees and 89 from OpenAI employees. By holding to its stated principles and incurring the wrath of the federal government, Anthropic has effectively made itself a martyr.