Toni Allen is General Manager of Quality Nonsense LTD and editor of WhoIsHostingThis.com, a leading publisher of web hosting reviews and buyer's guides.
How would you define deepfakes?
Deepfake is a kind of AI-driven technology that allows videos to be altered, much like photos can be altered with Photoshop. This normally takes the form of changing someone's face (in whole or in part) in a video, as Jordan Peele did in a provocative example of the dangers of the technology.
Colloquially, "deepfakes" has come to refer to the resulting videos that use this technology.
Are deepfakes a threat?
The technology is still relatively expensive to use in terms of hardware and time. But the technology is such that anyone can now produce (clearly altered) proto-deepfakes that would have been unimaginable a few years ago.
I don’t think there is a threat that people will be convicted of crimes because of deepfakes. But there is a sociological aspect of deepfakes that concerns me a bit. They erode confidence in our shared reality. We are fast approaching the point where people will not trust what they see on video. On one level, that’s no big deal. We’ve already been through that with still images and voice recordings. But given how much of our lives are online, I worry that we will become further disconnected from each other in terms of shared reality.
I’m not too worried, though. There has never been a technology that humans haven’t been able to integrate with their lives.
Do you think deepfakes are being used as false evidence in criminal trials?
I’m not an expert and we have not looked into this. If deepfakes are not used now, they certainly will be in the future. But I don’t see how this is any different than it ever has been. Fake evidence has been around for as long as there have been trials. Clearly, the possibility of deepfakes must be taken into account by all people involved in the criminal justice system. But I don’t see deepfakes as any kind of a unique threat – just a new one.
In your study, 60% of women think deepfakes should be illegal, compared to 38% of men. Why do you think there is such a difference?
There are probably many reasons for this. One we know about is simply that deepfakes are used against women so much more. In addition to that, the way they target women strikes at deep-seated social mores about sexuality. According to a recent study by Deeptrace Labs, 96% of all deepfakes were used in porn – putting the faces of women on the bodies of porn actresses. A fake sex-tape is likely to do more social harm to a woman than to a man. And with social harm comes both physical and psychological harm. To be honest, I think everyone (but especially women) would be even more concerned if they knew more about the technology and where it’s heading.
Companies like Facebook, Amazon, and Microsoft are fighting to erase deepfakes because of their impact on the companies' credibility. Do you think it will be possible to detect deepfakes in the future?
There is already an industry developing to detect deepfakes. I suspect that there will be something of an arms race where the creators of deepfakes improve the technology as the detectors catch up. How that plays out, I can't say.
What seems certain is that we will reach a point where everyone is highly skeptical of any video evidence that is at all provocative. We might reach a point where we only believe the US president said something if it came directly from an established news outlet. Maybe we’re already there.
Link to the study.