Donald Trump, Elon Musk and Taylor Swift are the public figures who have starred in the most ‘deepfakes’ in 2024, a practice that manipulates image and voice with artificial intelligence tools and that has proliferated across the internet and social networks despite the risks it poses to democracy.
‘Deepfakes’ are a type of manipulated audiovisual content that shows false images, usually simulating the voice and appearance of other people. Their peculiarity is that these images appear to be real, since they are generated using advanced deep learning and Artificial Intelligence (AI) techniques.
Although they were initially a format accessible only to developers, they are now a type of video within anyone’s reach thanks to advances in open-source voice cloning and lip synchronization technologies.
As explained by Kapwing, a video creation platform, voice cloning takes a short audio sample of a person’s voice, between 10 and 15 seconds long, as a reference. From this, it analyzes the vocal characteristics and applies them to an underlying model capable of generating speech from text.
For its part, the lip synchronization AI matches the lips in the images of the person being imitated to the sounds produced in the audio file. Together, these technologies make it possible to manipulate a video realistically and in a short period of time: as Kapwing details, an AI twin can be created from only ten seconds of initial footage, a finding drawn from research the platform has carried out into the possibilities offered by ‘deepfakes’ and the risks this technology entails for identity theft and the spread of fake news.
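The two-stage pipeline described above (clone the voice from a short sample, then re-render the lips to match the generated audio) can be sketched in pseudocode-style Python. All function names here are hypothetical placeholders for illustration, not a real library API.

```python
# Illustrative sketch of the two-stage deepfake pipeline Kapwing describes:
# voice cloning followed by lip synchronization. Every name below is a
# hypothetical placeholder, not an actual tool or library.

def clone_voice(reference_audio_seconds: float, script: str) -> str:
    """Stage 1: analyze a short reference clip (roughly 10-15 s) and
    synthesize speech for the given script in the cloned voice."""
    if reference_audio_seconds < 10:
        raise ValueError("roughly 10-15 seconds of reference audio is needed")
    # A real system would extract vocal characteristics (pitch, timbre)
    # and condition a text-to-speech model on them.
    return f"synthesized_audio({script!r})"

def lip_sync(video_frames: list, synthesized_audio: str) -> list:
    """Stage 2: re-render the mouth region of each frame so the lips
    match the sounds in the generated audio track."""
    return [f"{frame}+mouth_matched_to_audio" for frame in video_frames]

# Per the article, around ten seconds of initial footage is enough
# to assemble an "AI twin":
audio = clone_voice(12.0, "Hello, this is not really me.")
fake_video = lip_sync(["frame1", "frame2"], audio)
```

The point of the sketch is the division of labor: the audio model never touches the pixels, and the lip-sync model only edits the mouth region, which is why, as noted later in the article, body gestures often go unmodified.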
THE MOST ‘DEEPFAKED’ FAMOUS CHARACTERS
Due to the proliferation of these technologies, ‘deepfake’ videos are currently content that is frequently found on the Internet and social networks, even going viral.
Public figures often star in these ‘deepfakes’ when the objective is “to influence public opinion, enable scam or fraudulent activities, or to generate profits.” But not just any figure will do.
In this regard, according to data collected by Kapwing after analyzing text-to-video prompts in a popular Discord channel for generating videos with AI, former United States president Donald Trump, the owner of X (formerly Twitter) Elon Musk, and the singer-songwriter Taylor Swift are the most ‘deepfaked’ American public figures so far in 2024.
Specifically, the platform counted 12,384 requests for ‘deepfake’ videos related to Trump in 2024, while another 9,544 requests were linked to Musk and 8,202 to Taylor Swift. As Kapwing points out, these three personalities have been impersonated in ‘deepfakes’ more frequently than any other famous person.
They are closely followed by the current president of the United States, Joe Biden, who ranks among the most ‘deepfaked’ personalities with 7,956 requests. Also on the list are actors such as Tom Cruise, Dwayne Johnson and Will Smith.
Likewise, ‘deepfake’ videos related to athletes have been requested, such as footballers Cristiano Ronaldo and Leo Messi and basketball player Michael Jordan. The same has happened with singers of the stature of Beyoncé and personalities from the world of technology such as the CEO of Meta, Mark Zuckerberg.
DANGERS OF IMPERSONATION WITH ‘DEEPFAKES’
Although much of this content is created and posted on the Internet for entertainment, as a form of comedy (for example, impersonating famous personalities to have them tell a joke), it can also pose a risk or create confusion among users due to the realism of some ‘deepfakes’.
An example of this is a fake video of a speech by Joe Biden after withdrawing his candidacy for the next United States presidential election. On social media, some users may believe it is a real video and trust that information they have seemingly heard from the mouth of the President of the United States, even through a video, is genuine.
In addition to promoting the consumption of ‘fake news’, these ‘deepfakes’ are also a tool for cybercriminals, who can use them as a method to defraud users by taking advantage of the trust placed in public figures.
DETECT MODERN ‘DEEPFAKES’
With all this in mind, Kapwing has outlined some guidelines for detecting ‘deepfake’ videos made with modern technologies. The key, the platform notes, is to pay attention to both the audio and the video, with particular emphasis on the voice and lips.
This is because one of the signs that it is a ‘deepfake’ is that the movement of the lips and teeth is “inconsistent.” Although lip-syncing technology is constantly evolving, the process often results in glitches such as blurring the mouth or unnaturally pink lips. In the same way, teeth usually appear very white and register unrealistic movements when they are visible, since this is a complex task for AI.
Along the same lines, ‘deepfakes’ also often show unnatural body movements and gestures. As the company has explained, this is because, although the movements of the lips and mouth are adjusted to appear real, the body’s gestures are not modified.
This translates into signs such as shoulder shrugs, head movements or hand gestures that do not match what is being said, such as gestures meant to emphasize a certain point. For example, a telltale movement could be the person looking away while speaking. Likewise, a lack of blinking is also an indication of AI-generated content.
Regarding the voice, since ‘deepfakes’ use AI-generated voices, they tend to sound more monotonous and less expressive than a real human. This is noticeable in details such as breathing patterns.
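The “monotonous voice” cue above can be made concrete with a toy heuristic: given a track of per-frame pitch estimates (in Hz), flag audio whose pitch varies very little. This is an illustrative sketch only, with an arbitrary threshold; real deepfake detectors are far more sophisticated.

```python
# Toy heuristic for the monotone-voice cue: low pitch variation may
# suggest synthetic speech. Threshold and values are illustrative.

def pitch_variation(pitches: list[float]) -> float:
    """Coefficient of variation (std dev / mean) of a pitch track."""
    mean = sum(pitches) / len(pitches)
    variance = sum((p - mean) ** 2 for p in pitches) / len(pitches)
    return (variance ** 0.5) / mean

def sounds_monotonous(pitches: list[float], threshold: float = 0.05) -> bool:
    # 0.05 is an arbitrary illustrative cutoff, not a validated value.
    return pitch_variation(pitches) < threshold

expressive = [180, 210, 160, 230, 150, 200]  # wide natural pitch swings
flat = [180, 182, 181, 179, 180, 181]        # nearly constant pitch

print(sounds_monotonous(expressive))  # False
print(sounds_monotonous(flat))        # True
```

A real human voice shows pitch variation of 10-15% or more in expressive speech, while the flat track here varies by well under 1%, which is the kind of gap this cue exploits.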
“The conclusions of our study clearly show that video deepfakes are already common, as are the tools that can be used to make them. These tools need to be available safely,” said Kapwing co-founder Eric Lu, adding that they hope other providers “also make clear when content is altered and take measures to ensure security.”