New York
Saturday, February 28, 2026

Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance


Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.

It’s the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, coming down to Defense Secretary Pete Hegseth’s desire to renegotiate all AI labs’ existing contracts with the military. But Anthropic, so far, has refused to back down from its two existing red lines: no mass surveillance of Americans, and no lethal autonomous weapons (that is, weapons with license to kill targets without any human oversight whatsoever). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal had led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the Secretary reportedly issued an ultimatum to the CEO to back down by the end of business day on Friday or else.

In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

He added that the company has “never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner,” but that in a “narrow set of circumstances, we believe AI can undermine, rather than defend, democratic values,” going on to specifically name mass domestic surveillance and fully autonomous weapons. (Amodei noted that “partially autonomous weapons … are essential to the defense of democracy” and that fully autonomous weapons may eventually “prove essential for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future, but noted that they weren’t ready now.)

The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step toward designating the company a “supply chain risk” – a public threat the Pentagon had made recently (and a classification usually reserved for threats to national security). The Pentagon was also reportedly considering invoking the Defense Production Act to force Anthropic to comply.

Amodei wrote in his statement that the Pentagon’s “threats don’t change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other essential missions. Our models will remain available on the expansive terms we have proposed for as long as required.”
