Amid pressure from the Pentagon to loosen its safeguards, Anthropic continues to stand firm.
In a statement on Thursday afternoon, Anthropic CEO Dario Amodei made it clear that the company cannot accede to the Department of War’s demand to roll back the safeguards that prevent its AI models from being used in two key areas: mass surveillance of U.S. citizens and fully autonomous weapons.
Amodei noted that AI’s use in mass surveillance poses “serious, novel risks to our fundamental liberties.” And while the technology may someday prove useful in fully autonomous weaponry, he argued, the guardrails needed to deploy it safely simply don’t exist today.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in his statement. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
Amodei said that Anthropic’s Claude models are widely deployed throughout the defense and intelligence community, including on the government’s classified networks, in national laboratories, and in mission-critical applications such as intelligence analysis, modeling and simulation, operational planning, and cybersecurity operations. Thus far, he said, the company’s safeguards haven’t presented an issue in any of these cases.
Though Anthropic’s “strong preference” is to continue supporting the military, it will only do so with its safeguards in place. Otherwise, it cannot “in good conscience” comply with the Pentagon’s demands and continue the relationship.
Amodei’s response is the latest move in the fight between the company and the Pentagon. Earlier this week, the agency took its first step toward blacklisting Anthropic by labeling it a “supply chain risk,” a designation generally reserved for companies from adversarial countries.
- The unprecedented move would not only threaten Anthropic’s contract with the military but also force all defense vendors to cut ties with Anthropic.
- And after his meeting with Amodei, Secretary of War Pete Hegseth contradicted himself by threatening to invoke the Defense Production Act, which would force Anthropic to tailor its models to military requirements regardless.
- Additionally, the Pentagon struck a deal with xAI on Monday to use its Grok models in classified systems, including weapons development and battlefield operations.
Policymakers, however, have begun to warn that the sparring match between Anthropic and the Pentagon will only sour future relations between the government and Silicon Valley AI firms, with Dean Ball, former AI adviser to the Trump Administration, calling Hegseth’s contradictory threats “incoherent.”
Our Deeper View
Anthropic had little choice but to stand firm against the Pentagon’s threats: the company has built its reputation on AI safety and on deploying AI only under guidelines designed to prevent harm. Though recent changes to its Responsible Scaling Policy have already tested those moral and ethical standards, backing down here would have been a sharp about-face, a betrayal of its core principles. The fallout could cost Anthropic a sizable chunk of its revenue from government agencies and defense vendors, but there may be a silver lining: deeper trust with its primary audience of risk-averse but AI-hungry enterprises.

