MADRID, 17 Jun. (Portaltic/EP) –
Facebook and Michigan State University (United States) have developed a new approach based on reverse engineering, not only to detect ‘deepfakes’ but also to trace them and identify the model that generated them, with the aim of combating misinformation.
The content known as ‘deepfakes’, manipulated with artificial intelligence systems, offers a level of realism that raises concern over its implications for public debate and for the trust placed in authority figures.
‘Deepfakes’ are images or videos in which the face, and sometimes also the voice, has been manipulated to introduce a specific message, sometimes for entertainment purposes and at other times with malicious intent.
Distinguishing them from a real person is not always easy, and with that in mind Facebook has partnered with Michigan State University to develop a detection and tracing method based on reverse engineering.
The technology company acknowledges that reverse engineering is not a new approach in the field of deep learning, but defends it as a method to combat ‘deepfakes’, since it makes it possible to “discover the unique patterns behind the AI model used to generate a single ‘deepfake’ image”.
“This ability to detect which ‘deepfakes’ have been generated from the same AI model can be useful to discover cases of coordinated misinformation or other malicious attacks launched using ‘deepfakes’,” explain researchers Xi Yin and Tal Hassner in a post on Facebook AI.
As with cameras and digital photography, a generative model leaves a “footprint” in each image it produces: “subtle but unique patterns” that allow identification. However, deep learning techniques have made identifying these fingerprint properties harder by making the set of tools that can be used to generate images “limitless”.
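To give an intuition for what such a frequency-domain “footprint” might look like, the following sketch extracts a crude fingerprint from an image: it removes low-frequency content with a simple blur and inspects the magnitude spectrum of what remains. This is a simplified illustration under assumed details, not the researchers’ actual network.

```python
import numpy as np

def estimate_fingerprint(image: np.ndarray) -> np.ndarray:
    """Crude stand-in for a model fingerprint: the high-frequency
    residual of an image, viewed in the frequency domain.
    Hypothetical illustration, not the FEN described in the article."""
    img = image.astype(float)
    # Remove low-frequency content with a 3x3 mean (box) blur.
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    residual = img - blurred
    # Generative models tend to leave repetitive, symmetric peaks
    # in the residual's magnitude spectrum.
    return np.abs(np.fft.fft2(residual))

# Two images sharing the same periodic artifact (a stand-in for a
# generator's footprint) yield similar fingerprints.
rng = np.random.default_rng(0)
artifact = np.cos(2 * np.pi * np.arange(64) / 8)  # repetitive pattern
img_a = rng.normal(size=(64, 64)) + artifact
img_b = rng.normal(size=(64, 64)) + artifact
fp_a, fp_b = estimate_fingerprint(img_a), estimate_fingerprint(img_b)
similarity = float(fp_a.ravel() @ fp_b.ravel()) / (
    np.linalg.norm(fp_a) * np.linalg.norm(fp_b))
```

A real system would learn the fingerprint extractor from data; here the blur-and-FFT step is only meant to show why magnitude, repetitiveness and frequency symmetry are usable cues.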
The Facebook and Michigan State University system starts from general fingerprint properties, such as magnitude, repetitive nature, frequency range and symmetric frequency response, which feed their fingerprint estimation network (FEN) and serve, in turn, as inputs for the analysis of the model.
With this information they can, on the one hand, discover the properties of the model that generated the ‘deepfake’ and, on the other, compare and trace similarities among a set of ‘deepfakes’. The goal of their method is “to facilitate the detection and tracking of ‘deepfakes’ in real-world environments.”