The Faceswap manifesto makes a very interesting point when it states that “it was the first AI code that anyone could download, run and learn by experimentation without having a Ph.D.”. I think it actually is a great thing that such a powerful technology is easily available to everyone. The technology itself existed before this open source project, so keeping it reserved to a few skilled people did not prevent misuse (especially because, like most IT technology, it is cheap to run, so the only real barrier is often lack of knowledge).
Similarly to Kerckhoffs’s principle, any given technology should be safe even if everything about it is public knowledge and everyone can use it. From this perspective, the Faceswap project puts a spotlight on this type of AI and allows us to reason about it.
Deepfakes are getting better and better, and these open source projects show that they are also becoming easier and easier to use. In a few months even non-developers will be able to create fake videos where people say something they never said or do something they never did. While this may sound terrifying (after all, fake news has been used to hijack elections), I think it is great news: people are already willing to believe outlandish written stories, just because such articles and posts restate their vision of the world; with fake videos and audio the situation will quickly worsen.
It takes a lot of time and effort to debunk a fake news story (an operation that, after all, does not serve its purpose, since believers will always believe and non-believers do not need the debunking), and with deepfakes the situation will become even worse. Again, I think this actually is good news, because it will push societies towards regulation. We cannot expect people to educate themselves to discern high-quality news from bad news; it would take years to bring about such a change in society, and recent events have proved that we do not have that much time.
Spreading fake news requires at least three different actors: the producer, the consumer and the infection medium. As we have seen, consumers are not getting better at consuming, while producing fake news is getting easier. The only remaining option is to act on the infection medium.
Contrary to the technology needed to create fake news, running a massive social network that reaches millions of people is hard and expensive; therefore only a few exist, and they are the perfect target for regulation. Regulation may happen at different levels, but I think two can be especially effective: building a sound reputation system and creating a robust ban mechanism.
A reputation system would ensure that it is actually expensive to inject content into the system. Everyone who has used StackOverflow knows how hard it is to earn enough reputation to perform actions. The same should go for social media. Before being allowed to inject content that can go viral, users should put in some effort (e.g. they must be active users, have commented, earned some favs); this would prevent bots and bot networks from operating efficiently and it would discourage wild behaviour from users (who wants to waste that hard-earned reputation?).
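Such a gating mechanism could be sketched very simply. This is only an illustration of the idea, not any platform's actual implementation; the class, threshold and point values are all invented for the example.

```python
# Illustrative sketch of reputation-gated posting. The threshold and
# point values are arbitrary; a real system would tune them carefully.

VIRAL_POST_THRESHOLD = 50  # reputation required before content can go viral


class Account:
    def __init__(self):
        self.reputation = 0

    def earn(self, points: int) -> None:
        """Reward low-risk activity: commenting, favoriting, etc."""
        self.reputation += points

    def penalize(self, points: int) -> None:
        """Misbehaviour burns hard-earned reputation."""
        self.reputation = max(0, self.reputation - points)

    def can_post_viral_content(self) -> bool:
        return self.reputation >= VIRAL_POST_THRESHOLD


acct = Account()
print(acct.can_post_viral_content())  # False: a fresh account (or bot) is gated
for _ in range(10):
    acct.earn(5)                      # sustained ordinary activity
print(acct.can_post_viral_content())  # True
```

The point of the design is that reputation is cheap to lose and slow to earn, which is exactly the asymmetry that makes bot networks uneconomical.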
On the other hand, a robust ban mechanism would make it difficult for abusers to get back online. Such a mechanism must, of course, include a robust identification system, but mobile-phone two-factor authentication should be more than enough: a regular person can afford to be banned only so many times if their identity is tied to a phone number.
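The key idea is that the ban is keyed on the identity behind the account (the phone number) rather than on the account itself, so a banned user cannot simply re-register. A minimal sketch, with all names invented for illustration and the number stored only as a hash:

```python
# Illustrative sketch: ban by (hashed) phone number, not by account id.
import hashlib

banned_identities: set[str] = set()


def identity_key(phone_number: str) -> str:
    # Store only a digest, never the raw phone number.
    return hashlib.sha256(phone_number.encode()).hexdigest()


def ban(phone_number: str) -> None:
    banned_identities.add(identity_key(phone_number))


def can_register(phone_number: str) -> bool:
    return identity_key(phone_number) not in banned_identities


ban("+1-555-0100")
print(can_register("+1-555-0100"))  # False: new accounts on this number are blocked
print(can_register("+1-555-0199"))  # True: other numbers are unaffected
```

Phone numbers are scarce enough that exhausting them is costly, which is what makes the ban bite.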
Of course there are other ways of addressing this issue. For instance, it would be possible to embed a fake-video detector in the video upload pipeline, but this would trigger an arms race between deepfake and detector algorithms and would not address the propagation of other kinds of fake news.
Given the current times I do not think we will see any push for regulation from politicians, nor from private companies. That is why I think that making this technology easy to use might trigger a change in this situation.