The race to create technology capable of detecting deepfakes has begun
Two years after the emergence of deepfakes, both the technology industry and academia are striving to develop solutions that automatically detect counterfeit multimedia material (videos, images and voice recordings) using artificial intelligence.
The reason behind these efforts is as simple as it is worrying: deepfake detection algorithms continue to lag behind the technology used to generate them.
Barely a month ago, Facebook announced that it would invest no less than 10 million dollars in creating a dataset of deepfakes and in organizing a contest to spur the development of detection technology. Microsoft is another Silicon Valley giant collaborating on the project.
Google also announced last week the creation of a visual deepfakes dataset, developed in collaboration with Jigsaw, based on hundreds of recorded videos starring actors.
All of these videos, original and manipulated, have been incorporated into the dataset, which is now available to the research community. Google warns that it will keep adding new deepfakes produced with the latest generation methods as they are developed.
No technology will be our ‘silver bullet’
Siwei Lyu, a researcher at the University at Albany and one of the world's leading experts in deepfake detection, has promoted the creation of another video dataset, DeepFake Forensic (DFF), with the same objective: to keep incorporating new deepfake samples representative of the most advanced technology at any given moment.
Lyu, who a few days ago appeared before a US House of Representatives subcommittee studying impersonation and disinformation on the Internet, explained:
"It is important to have effective technologies to identify, contain and obstruct deepfakes before they can cause damage. This should be done by focusing on improving our forensic capabilities and making it more difficult to train fake generators on online videos. [...] Due to the complex nature of deepfakes, no particular method or technology will be a 'silver bullet'."
He advocates combining forensic detection methods: looking for traces of the synthesis process (faces deformed to fit the target's anatomy) or for physiological inconsistencies (such as the absence of realistic blinking), as well as betting on "using AI to hunt AI", i.e. training neural networks to learn the characteristic patterns of deepfakes.
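The blinking cue can be made concrete with a small sketch. Assuming per-frame eye landmarks are already available from some facial-landmark detector (the landmark layout, thresholds and function names below are illustrative, not Lyu's actual pipeline), one can compute an eye-aspect-ratio series and flag clips whose blink rate is implausibly low for a real person:

```python
import math

def eye_aspect_ratio(eye):
    """EAR: vertical eye opening relative to horizontal width.
    `eye` is six (x, y) landmarks in the common 68-point layout;
    the ratio drops sharply while the eye is closed."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames with EAR
    below the threshold count as one blink."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is far below human norms
    (people typically blink well over 5 times per minute)."""
    minutes = len(ear_series) / fps / 60.0
    if minutes == 0:
        return False
    return blink_count(ear_series) / minutes < min_blinks_per_minute
```

This is only one weak signal, and newer generators trained on footage that includes closed eyes can defeat it, which is precisely why Lyu argues for combining several such cues with learned classifiers.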
But how does Lyu propose to hinder the use of online images and videos for training new deepfake-generating AIs? By introducing "adversarial noise", invisible to the human eye but capable of sabotaging face-detection algorithms, forcing the forger to hand-select thousands of points in the video.
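The idea behind such adversarial noise can be sketched in a few lines. The toy example below uses a deliberately simple linear "detector" as a stand-in for a real face-detection model (real attacks compute gradients through an actual network); the point is only to show the classic gradient-sign step, where a per-pixel perturbation bounded by a small epsilon shifts the detector's score while leaving the image visually unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)              # stand-in detector weights
image = rng.uniform(0, 1, size=64 * 64)   # stand-in face image, flattened

def detector_score(x):
    """Toy linear detector: higher score = more confident 'face here'."""
    return float(w @ x)

# For a linear score the gradient w.r.t. the input is just w, so the
# gradient-sign step (as in FGSM-style attacks) is:
#   x' = clip(x - epsilon * sign(gradient), 0, 1)
epsilon = 0.05  # illustrative noise budget, far below visible contrast
adversarial = np.clip(image - epsilon * np.sign(w), 0.0, 1.0)

# Every pixel moves by at most epsilon, yet the detector's score drops.
perturbation = np.max(np.abs(adversarial - image))
```

Against a real detector the gradient would come from backpropagation rather than a closed form, but the budget-bounded, sign-based perturbation is the same mechanism Lyu describes for poisoning training footage.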
One step forward and two steps back?
Dessa, an artificial intelligence company, has released open-source software for detecting audio deepfakes, a tool that the CEOs who were impersonated a few months ago via phone calls would have appreciated.
Ragavan Thurairatnam, co-founder of Dessa, believes it is "inevitable that malicious actors move much faster than those who want to stop them," but he also trusts that his free detector will serve as a "starting point" for advancing deepfake detection.
And yet, this move could cut both ways: Thurairatnam himself acknowledges that generative AI systems can be trained specifically to fool a given detector, although he is confident that the tool's potential to help create new and better detection methods outweighs any misuse.
Lyu, however, disagrees with this optimistic forecast: he believes there are reasons to think that, in the long run, releasing the tool's code will do more harm than good:
"At first, the code will help both sides, but it will probably end up driving [the creation of] better generators."
In his appearance before the House of Representatives, he elaborated on this downside:
"As this technology continues to develop, the current barriers to creating deepfakes will fall and their quality will keep improving."
"What is also evolving is the cat-and-mouse game found in every attacker-defender relationship, and the attackers seem to hold the advantage: they can adjust their generation algorithm every time a new detection method is made public."