Anthropic disrupts global espionage campaign that used AI-powered cyberattacks with almost no human intervention

Anthropic has identified and disrupted a “highly sophisticated” espionage campaign in which malicious actors backed by the Chinese state used its artificial intelligence (AI) model Claude to automate around 30 cyberattacks against corporations and governments, with hardly any human intervention.

The attacks, which took place in September of this year, stand out because between 80 and 90 percent of the operation was automated with AI, highlighting how this increasingly advanced technology is making the work of cybercriminals easier and more effective.

The company had already warned on previous occasions about real cyberattacks in which AI acted as an advisor; now, however, it has highlighted its use to execute attacks directly, placing emphasis on “the speed with which they have done it on a large scale.”

In the case of this latest disrupted espionage campaign, Anthropic has specified that around 30 attacks were directed against different organizations, including large technology companies and government agencies. It has also stated that there is “a high probability” that the hackers behind these threats had the support of the Chinese state.

As detailed in a statement on its website and in the investigation report, the attacking group (identified as GTG-1002) manipulated the Claude Code tool to carry out its malicious activities, turning it into a “large-scale cyberattack executed without significant human intervention.”

Specifically, the attacks were carried out with “just the press of a button,” with hardly any human intervention, as detailed by Anthropic’s head of threat intelligence, Jacob Klein, in statements to The Wall Street Journal, collected by The Verge.

As Klein explained, the malicious actors’ intervention was limited to “a few critical points,” at which they indicated whether the AI should continue acting or flagged any errors, with instructions such as “Yes, continue” or “Oh, this doesn’t seem right, Claude, are you sure?”

This automation has been possible because current models have more advanced capabilities to follow complex instructions and understand context. They can also act as agents, executing autonomous actions of their own, since they have access to a wide range of software tools.
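To make the agent pattern described above concrete, here is a minimal, hypothetical sketch: a model repeatedly chooses a tool, executes it, and feeds the result back into its history, pausing only at occasional human checkpoints like the “Yes, continue” moments the article describes. The model, tools, and plan here are all illustrative stubs invented for this example; a real agent would call an actual LLM API and real software tools.

```python
def stub_model(history):
    """Stand-in for a model: returns the next (tool, argument) step.

    A real system would send the history to an LLM and parse its reply;
    this fixed plan is purely illustrative.
    """
    plan = [("inspect", "host-a"), ("checkpoint", "continue?"), ("report", "done")]
    return plan[len(history)]

# Hypothetical tools the agent is allowed to invoke.
TOOLS = {
    "inspect": lambda arg: f"inspected {arg}",
    "report": lambda arg: f"report: {arg}",
}

def run_agent(model, approve=lambda prompt: True, max_steps=10):
    """Drive the tool loop; pause for human approval at checkpoints."""
    history = []
    for _ in range(max_steps):
        tool, arg = model(history)
        if tool == "checkpoint":
            # The "few critical points" of human input the article mentions.
            if not approve(arg):
                break
            history.append((tool, "approved"))
            continue
        history.append((tool, TOOLS[tool](arg)))
        if tool == "report":  # terminal step
            break
    return history
```

Running `run_agent(stub_model)` walks the stub plan end to end with a single automatic approval, which is the sense in which such a loop needs “hardly any human intervention” once set in motion.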

In other words, they have advanced capabilities they did not have before, allowing Claude to be turned, in this case, into a practically independent cyberattack weapon. “Groups with less experience and resources can now potentially carry out large-scale attacks of this nature,” the company stated.

FOUR TARGETS AFFECTED

Although Anthropic suspended the related accounts and notified the affected entities after detecting the suspicious activity, the malicious actors managed to steal confidential data from four of the attacked targets. However, the company has not specified which organizations or governments were harmed, although it has clarified that the US Government was not among them.

“This campaign has important implications for cybersecurity in the era of AI agents,” the technology company added, reflecting that “agents are valuable for daily work and productivity, but in the wrong hands they can significantly increase the viability of cyberattacks on a large scale.”

Therefore, to address this threat, Anthropic has detailed that it has expanded its detection capabilities and has developed better classifiers to identify malicious activity.

Other companies in the sector, such as Google, have also recently warned about the use of AI as a technology that cybercriminals have integrated into their operations to be more effective.

This was reported by experts from the Google Threat Intelligence Group (GTIG), who detailed that AI is no longer used only to increase the productivity of attacks, but also within the operations themselves, creating new malware with integrated AI that is capable of dynamically modifying its behavior while it runs.

By Editor