The Trump administration has announced that it will prohibit US government agencies from working with the artificial intelligence company Anthropic, after months of dispute between the company and the Pentagon over the use of its models in defense systems. The president ordered the termination of contracts with the company, and the Department of Defense announced that it would designate Anthropic a supply-chain risk, a step usually reserved for companies from rival countries. As part of the decision, the department is expected to terminate a contract worth up to 200 million dollars and to require suppliers working with it to declare that they do not integrate the Claude model into their systems. A six-month transition period was also established for replacing the systems and implementing alternatives.
However, hours after the official announcement, it was reported that an Anthropic Claude model had been used in the joint strike carried out by the US and Israel in Iran. According to sources familiar with the details, American commands, including US Central Command, integrate the model for intelligence assessments, target identification, and simulations of operational scenarios. Central Command declined to comment on the specific systems used in the strike, but confirmed that the tool is in use in its current operations. The report thus indicates that the model remained embedded in military systems operating in the Iranian arena even after the ban was announced.
The dispute between Anthropic and the Pentagon centered on the Department of Defense’s demand that the company allow its technology to be used for any lawful purpose, without additional restrictions imposed by the supplier. Anthropic refused to lift restrictions designed to prevent the use of its artificial intelligence for mass surveillance within the US or for operating autonomous weapons systems without human involvement.
The company’s CEO, Dario Amodei, published a public statement in which he wrote that the company cannot comply with the demand for reasons of responsibility. Pentagon officials countered that once the military purchases a technological tool, the manner of its use is determined in accordance with the law and military procedures, and that it is not feasible to negotiate individually over every possible use scenario. According to reports, Claude is currently the only model used in the US military’s classified systems. Sources familiar with the matter said it was also used in previous high-profile operations and is integrated into the work of the government’s defense contractors, including Palantir. Defense officials acknowledged that disconnecting the model from existing systems is expected to be complex and to take months, even once the engagement is terminated.
OpenAI signs a new agreement with the Pentagon
Alongside the escalation against Anthropic, OpenAI announced in the past day that it had signed an agreement with the Pentagon to deploy advanced artificial intelligence systems in classified environments. The company’s announcement stated that the agreement includes three red lines: a ban on using the technology for mass surveillance within the US, a ban on operating autonomous weapons systems without human control, and a ban on making consequential automated decisions without human approval. The company said the deployment would be carried out in the cloud only, while maintaining its safety systems and accompanied by security-cleared engineers on its behalf. It was also reported that OpenAI does not believe Anthropic should be designated a supply-chain risk.
Alongside OpenAI, xAI, Elon Musk’s artificial intelligence company, recently signed an agreement to integrate the Grok model into classified systems, and Google is in talks to expand the use of its models in defense systems. At the same time, hundreds of employees at Google and OpenAI have signed an internal petition calling for clear limits on the uses of artificial intelligence in surveillance and autonomous weapons.