European elections: Which parties are trying to influence the vote through Facebook or Instagram?

The European elections on June 9, in which more than 370 million citizens are eligible to vote, could become fertile ground for disinformation and political manipulation. The EU Agency for Cybersecurity (Enisa) already warned of this in October, expressing great concern about the effect that generative artificial intelligence (AI) may have on the process. This technology is capable of producing convincing texts or hyper-realistic videos, which can be used to spread self-serving messages and influence citizens’ votes.

But the spread of hoaxes and self-serving messages is not the only problem citizens face. Some political parties use the digital advertising tools provided by social networks to personalize and target their message with the aim of influencing the vote. This is what Cambridge Analytica did in the 2016 US presidential elections, in that case using data from 80 million users fraudulently harvested through Facebook.

Audience segmentation, that is, dividing users into groups that share a series of characteristics, is a legal practice widely used in political marketing. Political microtargeting, on the other hand, which analyzes the interests of individuals rather than of groups, is not allowed in the EU. The General Data Protection Regulation (article 9.1) prohibits the processing of personal data that reveals citizens’ political opinions. And that is exactly what the ideological profiles drawn up through microtargeting amount to: a kind of political dossier on each individual, built from information available in their browsing history or their reactions on social networks.

Summary screen from Who Targets Me, the tool used in the “Whose target am I?” campaign, which shows the number of times the user has been exposed to personalized political advertising.

Despite being prohibited, microtargeted political advertising is still a common practice in Europe. The privacy protection group Noyb (short for None of Your Business), led by Austrian activist Max Schrems, filed a series of complaints last year against several German political parties for having resorted to this technique in the 2021 federal elections.

In Spain, all parties tried to reform the Organic Law on the General Electoral Regime (LOREG) through the Organic Law on Data Protection (LOPD, 2018) to allow parties to collect “personal data regarding citizens’ opinions” from the web and social networks ahead of the 2019 elections. A group of jurists and associations pressured the Ombudsman to appeal this change before the Constitutional Court, and the court struck it down.

“That was the biggest victory of my career,” recalls Borja Adsuara, one of the lawyers who promoted the appeal. “We managed to stop some parties that authorized themselves to collect, from websites and social networks, the political opinions of citizens linked to their personal data, that is, attributing them to natural persons with names and surnames,” he points out.

However, some parties continue to rely on this technique, even though it is banned. The digital rights activist network Xnet has launched, together with a coalition of like-minded European groups and organizations, the campaign “Whose target am I?” Its objective is to analyze how Facebook and Instagram, Meta’s two flagship social networks, exploit user data to build individualized profiles for political purposes.

The campaign pivots around Who Targets Me, a browser extension that collects, catalogs and displays the personalized electoral advertising served to Facebook users while they browse the platform. The tool gathers anonymized data from the campaign ads and posts shown to the user, stores it and processes it later.

The more users download the extension, the more representative the data that analysts can extract from it will be. The objective is to detect which parties resort to microtargeting and at what points in the campaign. Xnet will prepare a report with this data and publish it once the elections are over.
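In broad terms, an extension of this kind runs a content script in the page and watches the feed for posts labeled as sponsored, recording who the advertiser is and when the ad was shown. The TypeScript sketch below illustrates the idea only: the selectors, the “Sponsored” label and the data fields are assumptions made for illustration, not Who Targets Me’s actual code, which would also anonymize the records and upload them for analysis.

```typescript
// Minimal sketch of a content script that an ad-transparency extension might use.
// Assumptions: feed items are <article> elements and sponsored posts carry a
// visible "Sponsored" label; Facebook's real markup is obfuscated and changes often.

interface AdRecord {
  advertiser: string; // name displayed on the sponsored post
  seenAt: string;     // ISO timestamp of when the ad appeared in the feed
}

const seenAds: AdRecord[] = [];

function scanForSponsoredPosts(root: ParentNode): void {
  root.querySelectorAll("article").forEach((post) => {
    if (!post.textContent?.includes("Sponsored")) return;
    const advertiser =
      post.querySelector("h4, strong")?.textContent?.trim() ?? "unknown";
    seenAds.push({ advertiser, seenAt: new Date().toISOString() });
  });
}

// Re-scan whenever new posts are injected into the feed as the user scrolls.
const observer = new MutationObserver(() => scanForSponsoredPosts(document));
observer.observe(document.body, { childList: true, subtree: true });
scanForSponsoredPosts(document);
```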

A man holds his cell phone in the air with the Instagram app open. Unsplash

Experts and legislators agree that microtargeting is a practice that threatens the proper functioning of democracy. These techniques, which use digital data analysis to serve users information specially tailored to their profile, carry the danger of heavily influencing voters. “Political parties are the second-biggest client of the information manipulation industry, after influencers: they buy bots, user profiles, and so on,” explains Simona Levi, founder and coordinator of Xnet. “The parties’ microtargeting strategies seek to manipulate us psychologically; they are based on sending us the information we want to see. That creates information bubbles. Telling us what we want to hear, and not what they think, is not persuading: it is manipulating.”

“Any data about a person’s political opinions is particularly strictly protected by the GDPR,” says Felix Mikolasch, a privacy lawyer at Noyb. “Not only is that data extremely sensitive, it also allows for large-scale manipulation of voters, as Cambridge Analytica has demonstrated,” he notes.

Disinformation and manipulation in the age of AI

Two weeks ago, the European Commission asked X, TikTok, Facebook and other large platforms to take measures to stop the circulation of suspicious content that seeks to influence voters. Brussels, fearing a barrage of interference and disinformation, has published a series of guidelines for platforms with more than 45 million active users in the EU, aimed at combating harmful AI-generated content and misleading political advertising. Google, Meta and TikTok, for example, have set up teams specifically dedicated to combating misinformation around the elections.

In Europe there are 24 official languages to monitor, and mastery of so many languages is not common among content moderators, which is why the Commission has a special interest in strengthening this area. According to a report by X cited by Euronews, the social network has only one content moderator fluent in each of Bulgarian, Croatian, Dutch, Portuguese, Latvian and Polish on its global team of 2,294 people. No one covers 17 of the EU’s official languages, including Greek, Hungarian, Romanian and Swedish: there, everything is left to AI.

The threat of disinformation and the spread of hoaxes is now common to all elections, at least since the 2016 presidential race that brought Donald Trump to the White House. The danger increases considerably with generative AI. There are now particular fears that deepfakes, hyper-realistic videos made with AI, could have a direct influence on the vote of millions of citizens. This technology makes it possible to generate videos in which any politician appears in any situation saying anything.

A recent Microsoft report warns that China will try to influence the US presidential elections in November, as well as the South Korean and Indian elections, with AI-generated content. The technology company expects that several cyber groups associated with Beijing and Pyongyang are already working on it, as they did in Taiwan. “Although the impact of this content remains limited, China’s growing experimentation with memes, videos and audio will continue, and may prove effective in the future,” the study concludes.

“Confidence in the EU electoral process will depend critically on our ability to rely on secure cyber infrastructures, as well as on the integrity and availability of information. It is up to us to ensure that we take the necessary measures to achieve this sensitive but essential objective for our democracies,” said Enisa executive director Juhan Lepassaar.
