Deepfakes promise to be one of the great challenges for social platforms in 2020. Over the last year we have witnessed the birth, or rather the popularization, of a technology that makes it possible to manipulate a video or audio clip so that, for example, a world-renowned figure like Barack Obama or Mark Zuckerberg appears to say things they never said.
It is a challenge because deepfakes are becoming increasingly realistic, and distinguishing them from genuine video can be difficult. Faced with this, companies such as Facebook and Adobe have gotten to work, and Zuckerberg’s company has now confirmed how it will handle this type of content: by deleting it, as long as certain conditions are met.
As Monika Bickert, vice president of global policy management at Facebook, explains, “manipulations can be done through simple technologies such as Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality, generally called ‘deepfakes’.” According to Bickert, “while these videos are still rare on the Internet, they present a significant challenge to our industry and society as their use increases.”
Facebook says its approach to combating deepfakes involves several players, from academia and government to industry, as well as “more than 50 world experts with technical, political, media, legal, civic and academic backgrounds to inform the development of our policies and improve the science of detecting manipulated media.” And how will the company act when it detects a deepfake? By deleting it when it meets two criteria: the content has been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and would likely mislead them; and it is the product of artificial intelligence or machine learning that makes it appear authentic.
In other words, if the video or audio has been generated by artificial intelligence or edited in a way that can mislead an “average person,” the company will remove it from the platform. However, these measures do not extend to parody or satirical content, nor to videos that have been edited merely to omit words or change their order (something that, it must be said, could also be used to create manipulated content). And if content violates the community standards, it will be removed from Facebook whether it is false or not.
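Read as a decision rule, the policy boils down to a handful of conditions. The sketch below is purely illustrative: the field names and the should_remove function are hypothetical and simply encode the criteria and exemptions described in this article, not any actual Facebook system or API.

```python
# Illustrative sketch of the removal criteria described above; not Facebook's API.
from dataclasses import dataclass


@dataclass
class VideoReport:
    ai_generated: bool                # produced with AI / deep learning techniques
    misleads_average_person: bool     # edit not apparent to an "average person"
    is_parody_or_satire: bool         # explicitly exempt from the deepfake rule
    only_omits_or_reorders_words: bool  # also exempt, per the policy described
    violates_community_standards: bool


def should_remove(report: VideoReport) -> bool:
    """Return True if the clip would be removed under the policy described above."""
    # Content that breaks community standards is removed regardless of authenticity.
    if report.violates_community_standards:
        return True
    # Parody/satire and simple word omission or reordering are not covered.
    if report.is_parody_or_satire or report.only_omits_or_reorders_words:
        return False
    # Deepfakes: AI-generated and likely to mislead an average person.
    return report.ai_generated and report.misleads_average_person
```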
The company will not, however, get rid of fake news entirely, another area that has not been free of controversy. Facebook stands its ground, stating that if one of its 50 fact-checking partners detects a false or partially false story, it will “significantly reduce its distribution on the News Feed and reject it if it is running as an ad.” In addition, people who are about to share it, or who have already shared it, will see warnings alerting them that the content is false.
Recently, Facebook, together with partners such as Microsoft and several universities, launched the Deepfake Detection Challenge, whose objective is “to produce technology that everyone can use to better detect when artificial intelligence has been used to alter a video to deceive the viewer.” To fund it, Facebook is contributing $10 million.
A more radical example is China. Since January 1, deepfakes have been illegal there, and distributing them is a criminal offense that can land users in prison. Companies may distribute such content only if it is “clearly” indicated that it was created using artificial intelligence. California has taken a similar path, prohibiting the dissemination of videos manipulated to discredit political candidates during the 60 days before an election.