Anthropic investigates unauthorized access on the Mythos AI model

The leading artificial intelligence company Anthropic is investigating whether a group of users gained unauthorized access to its Claude Mythos model, which was released only to a few trusted companies due to its advanced cybersecurity capabilities. The AI lab said Tuesday it was looking into reports that a group of people had accessed the model through a system set up for third-party companies working for Anthropic.

Official statements and concerns

The company stated: “We are investigating a report alleging unauthorized access to Claude Mythos Preview via one of our third-party vendor environments.” The incident raises concerns about Anthropic’s ability to prevent the technology it develops from falling into the hands of bad actors. Anthropic limited the distribution of Claude Mythos Preview to a small group of trusted technology companies because of the risk that the model could be misused to conduct cyberattacks at a scale and speed beyond human capabilities. The report of unauthorized access will only add to the anxiety surrounding Mythos, which has already shaken markets and triggered high-level debates among financial institutions and global regulators.

Details of the incident

According to Bloomberg, which first reported the news, one of the people who gained unauthorized access was able to use their freelancer permissions for Anthropic to reach Mythos. Anthropic said it has no evidence of activity extending beyond the “supplier environment” that third parties use to access model development systems. AI labs commonly use outside contractors for tasks such as model testing, although it is unclear which vendor was involved in the incident. Earlier this month, Anthropic launched Mythos, targeting companies including Amazon, Microsoft, Apple, Cisco and CrowdStrike.

Security risks and precedents

The San Francisco-based company said these partners would be able to find and patch cyber vulnerabilities using Mythos’ advanced capabilities before the model was made public. Security experts have warned that, in the wrong hands, the model could let hackers exploit vulnerabilities faster than organizations can fix them. Anthropic’s security processes came under intense scrutiny after descriptions of the model, including its name, were discovered in a publicly accessible data cache in March. The AI lab blamed the exposure on human error. Earlier this month, in a second incident, the internal source code of the company’s programming assistant, Claude Code, was also made public.

By Editor