The Pentagon finds new partners; Anthropic is ruled out. The US Department of Defense announced on Friday that it had reached agreements with seven technology companies to provide access to their artificial intelligence (AI) models for classified operations, according to a press release.
The Pentagon selected SpaceX, parent company of the AI laboratory xAI, along with OpenAI, Google, Nvidia, Reflection, Microsoft and Amazon's cloud-computing subsidiary AWS.
The government ruled out the start-up Anthropic, with which it is in a legal dispute, even though its model, Claude, is considered among the most capable in the world.
Level 6 and 7 Operations
At the end of February, the Trump administration ordered the termination of all its contracts with Anthropic, a decision the Californian start-up is contesting in court. The government then moved to diversify its AI providers for classified activities, access to which is restricted and which often concern national security.
The Pentagon had already announced agreements to that effect with OpenAI and Google. “These partnerships will accelerate the transformation of the US military into an AI-focused response force,” the department explained in the press release.
The AI models of the seven selected companies will be deployed for Level 6 and 7 operations, the highest classification levels within the Pentagon. They will be used to “make data synthesis and context understanding more effective and contribute to a fighter’s decision-making in complex environments,” according to the department.
“Long-term flexibility”
It is in this context that Claude, from Anthropic, until now the only artificial intelligence model authorized for classified operations, was used during the American offensive against Iran. Decisions relating to attacks, such as the timing of a strike or the choice of target, remain in the hands of the military.
By expanding its pool of suppliers, the department aims to “avoid being dependent on one service provider and ensure long-term flexibility,” the Pentagon said. It intends to rely on “model developers who allow their full and complete use to support (its) missions.”
The dispute between the Trump administration and Anthropic arose from the Californian company’s insistence on preventing the use of its models for mass surveillance of the American population and for lethal attacks. The Department of Defense considered its guarantee that the models would be used within the limits of the law to be sufficient.
On Monday, a letter signed by more than 600 Google employees called on the group’s management to stop providing its models to the US military for classified operations.