The new electoral threat: chatbots capable of swaying voting intentions, according to a study

A brief interaction with a chatbot can significantly change a voter’s opinion about a presidential candidate or a proposed policy in either direction, according to new research from Cornell University (United States).

The researchers report these findings in two articles published simultaneously: ‘Persuading voters through human-AI dialogues’ in Nature and ‘The levers of political persuasion with conversational AI’ in Science.

The potential for artificial intelligence to influence election results is a major public concern. These two new papers, based on experiments conducted in four countries, show that chatbots powered by large language models (LLMs) are quite effective at political persuasion, shifting the preferences of opposition voters by 10 percentage points or more in many cases. The persuasiveness of LLMs stems not from any mastery of psychological manipulation, but from the sheer number of arguments they can marshal in support of candidates’ political positions.

“LLMs can significantly influence people’s attitudes toward presidential candidates and their policies, and they do so by providing numerous factual statements that support their position,” says David Rand, professor of information science and of marketing and management communication at Cornell, and lead author of both articles. “However, these claims are not necessarily accurate, and even arguments based on accurate claims can be misleading by omission.”

In the Nature study, Rand, along with co-senior author Gordon Pennycook, an associate professor of psychology, instructed AI chatbots to try to change voters’ attitudes toward presidential candidates. They randomly assigned participants to a text conversation with a chatbot promoting one candidate or the other and then measured any changes in the participants’ opinions and voting intentions. The researchers ran this experiment three times: around the 2024 US presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election.

They found that, two months before the US election, among more than 2,300 Americans, chatbots focused on the candidates’ policies produced a modest shift in opinion. On a 100-point scale, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, an effect roughly four times larger than that of traditional ads tested during the 2016 and 2020 elections. The pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

In similar experiments with 1,530 Canadians and 2,118 Poles, the effect was much larger: chatbots shifted the attitudes and voting intentions of opposition voters by about 10 percentage points. “This was a surprisingly large effect, especially in the context of presidential politics,” Rand says.

The chatbots used a variety of persuasion tactics, but being polite and providing evidence were the most common. When the researchers prevented the model from using facts, it became far less persuasive, demonstrating the critical role that fact-based claims play in AI persuasion.

The researchers also fact-checked the chatbots’ claims using an AI model validated against professional human fact-checkers. While the claims were mostly accurate on average, chatbots supporting right-wing candidates made more inaccurate claims than those supporting left-wing candidates in all three countries. This finding, validated with politically balanced groups of citizens, echoes the well-replicated result that social media users on the right share more inaccurate information than those on the left, the researchers note.

In the Science article, Rand collaborated with colleagues at the UK AI Safety Institute to investigate what makes these chatbots so persuasive. They measured the opinion changes of almost 77,000 UK participants who interacted with chatbots on more than 700 political topics.

“Larger models are more persuasive, but the most effective ways to increase persuasiveness were instructing the models to back their arguments with as much data as possible and giving them additional training focused specifically on persuasion,” Rand notes. “The model most optimized for persuasion achieved an astonishing 25-percentage-point shift among opposition voters.”

This study also showed that the more persuasive a model was, the less accurate the information it provided. Rand suspects that as a chatbot is pushed to provide more and more factual statements, it eventually runs out of accurate information and begins to invent claims.

The finding that factual claims are key to an AI model’s persuasiveness is supported by a recent third paper, published in PNAS Nexus by Rand, Pennycook and colleagues. That study showed that AI chatbot arguments reduced belief in conspiracy theories even when people believed they were talking to a human expert, suggesting that it was the compelling messages that worked, not belief in the authority of AI.

In both studies, all participants were informed that they were conversing with an AI and were debriefed in detail afterwards. In addition, the direction of persuasion was randomized so that the experiments did not shift opinions in the aggregate.

Studying AI persuasion is essential to anticipating and mitigating its misuse, the researchers said. By testing these systems in controlled and transparent experiments, they hope to inform ethical guidelines and policy debates about how AI should and should not be used in political communication.

Rand also points out that chatbots can only be effective persuasion tools if people interact with them in the first place, which is itself a significant hurdle for would-be persuaders. But there is no doubt that AI chatbots will become an increasingly important part of political campaigns, Rand concludes. “The challenge now is to find ways to limit the damage and help people recognize and resist AI persuasion.”
