Researchers prove that AI agents can carry out autonomous telephone scams

Generative artificial intelligence (AI) agents with voice capabilities can make fraudulent calls autonomously, and although their success rate is currently moderate, advances in the technology are expected to make them a harmful tool in the hands of cybercriminals.

Telephone-based scams are common in the cybercrime landscape. They require a scammer to call a potential victim and convince them to hand over sensitive data, such as access credentials for digital services or a bank account number, with the goal of stealing their money.

This process requires the fraudster to perform multiple actions, such as browsing a bank’s website, obtaining and entering the user’s credentials, obtaining and entering the additional code from a multi-factor authentication system, and making the transfer, as well as reacting to problems that arise, such as the victim providing incorrect information.

In this context, researchers from the University of Illinois Urbana-Champaign (United States) have shown that an AI agent can perform these tasks in a telephone scam, as stated in their research paper, published on arXiv.org. “Autonomous scam calls are possible with new advances in AI,” they say.

Although they acknowledge that the use of AI in scams is not new, since deepfakes are already widely used, especially on social networks, they have focused on the voice capabilities of the large language models (LLMs) powering these agents, noting that “autonomous and responsive voice scams are not widely deployed due to technological limitations.”

Specifically, the researchers used ChatGPT with OpenAI’s GPT-4o model and recorded a moderate success rate, ranging between 20 and 60 percent depending on the scam, with an overall success rate of 36 percent. The research also highlights the effort the agent must make in this process: up to 26 actions to complete a bank transfer, and up to three minutes for the most complex scams.

“As we have shown, voice-enabled LLM agents can perform the actions necessary to carry out common phone scams. These agents are highly capable: they can react to changes in the environment and retry based on faulty information from the victim,” the researchers write.

With this work, they hope that OpenAI and other companies will strengthen the security of their LLMs. To prevent the technology from falling into the wrong hands, they have chosen not to make their agents’ code public.

By Editor