OpenAI stops covert operations that used its AI to influence conflicts

OpenAI has dismantled five covert influence operations (IOs) that used its artificial intelligence (AI) models to support deceptive activity on the internet related to current politics and conflicts, such as the Russian invasion of Ukraine and the war in Gaza, all driven by threat actors from Russia, China, Iran and Israel.

The company, led by Sam Altman, has reiterated its commitment to detecting and disrupting covert influence operations, which attempt to manipulate public opinion or influence political outcomes, in order to enforce its abuse prevention policies and improve transparency around AI-generated content.

Accordingly, the technology company has indicated that in the last three months it has disrupted five covert IOs in which threat actors tried to use its AI models to carry out deceptive activity on the internet. These operations were run from Russia, China, Iran and Israel and were aimed at manipulating public opinion.

Having disrupted these operations, OpenAI has indicated that, as of this May, the campaigns “do not appear to have significantly increased participation” as a result of its services, nor have they expanded the reach of their audiences.

As explained in a statement on its website, these operations used AI models for different purposes, such as generating short comments and long articles in different languages, inventing names and biographies for social media accounts, debugging simple code, and translating and correcting texts.

Likewise, they generated content on various topics related to the Russian invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, as well as criticism of the Chinese government by foreign governments and citizens opposed to it.

Specifically, one of the intercepted operations was the so-called Bad Grammar, identified as Russian in origin, which operated mainly on Telegram and targeted users in Ukraine, Moldova, the Baltic States and the United States. The malicious actors used OpenAI’s AI to debug code and run a Telegram bot, all with the aim of creating short political comments in Russian and English that were then distributed through the platform.

Another of the covert IOs, also originating in Russia, is the well-known Doppelganger. In this case, malicious actors generated comments in English, French, German, Italian and Polish, which were published on the social network X (formerly Twitter). They also used AI models to translate and edit articles in English and French published on websites linked to the operation, as well as to generate headlines and turn news articles into Facebook posts.

The same goes for the Chinese network known as Spamouflage, which, as OpenAI has clarified, used its models to research public social media activity and to generate texts in languages such as Chinese, English, Japanese and Korean, which were then published on X and Blogspot. In this operation, the actors also used AI to debug code and manage databases and websites.

Along the same lines, OpenAI also intercepted a covert IO by the well-known Iranian group International Union of Virtual Media (IUVM), which generated and translated long articles, headlines and website tags that were subsequently published on the IUVM-linked website iuvmpress[.]co.

Finally, the technology company has halted the activity of the commercial company STOIC, based in Israel, in an operation it has dubbed Zero Zeno, in which AI models were used to generate articles and comments that were published on social networks such as Instagram, Facebook and X, as well as on websites associated with the company. In this last case, OpenAI has pointed out that it managed to disrupt the operation’s activity, but not the STOIC company itself.

TRENDS AMONG MALICIOUS ACTORS

OpenAI has observed that malicious actors use AI to generate text and images with fewer errors than content produced manually. The intercepted operations combined AI-generated material with manually written texts or copied memes. Additionally, AI was used to feign social media engagement and to improve productivity, for example by summarizing posts.

DEFENSE METHODS

OpenAI has highlighted that its AI can also help defend against covert influence operations, for example by refusing to generate the content malicious actors request. The company has also built AI-powered tools to detect and analyze these threats more effectively; in recent investigations, detection took days rather than weeks or months.

OpenAI has also shared detailed threat information with industry partners to increase the effectiveness of these disruptions, and the company will continue working to identify and mitigate this kind of abuse of generative AI at scale.

By Editor