Rivals support Anthropic in AI war with the Pentagon

Hundreds of Google and OpenAI employees called on their companies to put aside competition and stand with Anthropic in an AI showdown with the Pentagon.

According to TechCrunch, an open letter titled "We Will Not Be Divided" calls on technology leaders to unite and take a stand, and has received signatures from more than 540 Google employees and 90 OpenAI employees.

“We hope the leaders will put aside their differences and together continue to reject the US Department of Defense’s current request to authorize the use of models for domestic mass surveillance and automated killing without human supervision. They are trying to divide each company by instilling fear that the other will give in. That strategy only works when none of us knows the other’s position,” the letter said.

Google and OpenAI leaders have not commented on the letter, but unofficial statements indicate that both companies support Anthropic's stance. Although it is working with the Pentagon, Anthropic wants to ensure that Claude, the company's AI model, will not be used for mass surveillance of Americans or in fully autonomous weapons systems.

In an interview with CNBC on February 27, OpenAI CEO Sam Altman said he personally "does not think the Pentagon should threaten to apply the Defense Production Act (DPA) to these companies". According to CNN, an OpenAI spokesperson confirmed that the company shares "red lines" with Anthropic against autonomous weapons and mass surveillance.

Google DeepMind has not spoken out about the conflict, but chief scientist Jeff Dean has personally expressed opposition to government surveillance. "Mass surveillance violates the Fourth Amendment and negatively affects free speech. Surveillance systems are vulnerable to abuse for political or discriminatory purposes," he wrote on X.


The startup Anthropic logo and the company’s Claude model are displayed on the screen. Image: AFP

The US Department of Defense rejected Anthropic's conditions, asserting that it always operates within the framework of the law and that contractors cannot set their own terms on how the military uses their products. The Pentagon also threatened to classify Anthropic as a supply chain risk, a category usually reserved for companies from hostile nations. This could seriously damage the company's ability to work with the US government, as well as its reputation.

President Donald Trump on February 27 directed all US federal agencies to stop using Anthropic's products. "We do not need and do not want to use them, and will not cooperate with them again," Mr. Trump wrote on the social network Truth Social. The move came after the deadline set by the US Department of Defense to resolve the dispute with Anthropic had passed.

Daniel Castro, Vice President of the US Information Technology and Innovation Foundation (ITIF), commented: “Decisions about military artificial intelligence cannot be resolved through impromptu confrontations between the Pentagon and individual companies. If some AI capabilities are considered essential for national defense, they need to be publicly discussed and written into law.”

By Editor