There’s a new spooky AI-powered face generator doing the rounds – WhichFaceIsReal.com, which challenges you to tell the difference between a computer-generated face and the real thing.
All users have to do is head over to the website and start clicking. It might sound like fun and games, but the project has a higher purpose. Created by two researchers from the University of Washington, the website aims to educate the public about AI-generated fakes as part of their catchily named ‘Calling Bullshit’ project. Academics Jevin West and Carl Bergstrom worry that fake faces could undermine people’s trust in photographic evidence. In an interview with The Verge, Bergstrom said:
“When a new technology like this comes along, the most dangerous period is when the technology is out there but the public isn’t aware of it. That’s when it can be used most effectively.”
West goes on to compare this new technology to the moment when the public figured out it was possible to Photoshop an image. Looking to the future, Bergstrom and West think that this technology could be abused by people looking to spread misinformation. For example, in the event of a terrorist attack, AI could be used to generate fake mugshots to spread on social media.
What would usually happen in these scenarios is that a journalist would verify the origin of an image using a Google reverse image search – however, that wouldn’t work on a face that never existed. Bergstrom explains:
“If you wanted to inject misinformation into a situation like that, if you post a picture of the perpetrator and it’s someone else it’ll get corrected very quickly. But if you use a picture of someone that doesn’t exist at all? Think of the difficulty of tracking that down.”
Known as “deepfakes,” these images are generated using a machine-learning technique called a generative adversarial network, or GAN. A GAN pits two networks against each other: a generator that learns patterns from huge data sets of real faces and tries to copy them, and a discriminator that tries to tell the fakes from the real thing. What makes the approach so good is that it tests itself; every time a generated face fails to fool the discriminator, the generator adjusts, steadily improving its artistic skills.
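To make that push-and-pull concrete, here is a minimal, purely illustrative sketch of the adversarial loop in NumPy. It is not the model behind WhichFaceIsReal.com (real face generators use deep convolutional networks); instead, a one-parameter-pair "generator" learns to mimic a simple Gaussian while a logistic "discriminator" tries to catch its fakes. All names and numbers below are assumptions chosen for the toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: fake = a * z + b, with noise z ~ N(0, 1). Starts far from the target.
a, b = 1.0, 0.0
# Discriminator: p(real) = sigmoid(w * x + c)
w, c = 0.1, 0.0
lr = 0.01  # learning rate (arbitrary choice for this toy)

for step in range(5000):
    real = rng.normal(4.0, 1.0)   # one sample of the "real" data
    z = rng.normal()
    fake = a * z + b              # one generated sample

    # Discriminator update: push p(real) up and p(fake) down
    # (gradient ascent on log p_real + log(1 - p_fake))
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * ((1 - p_real) * real - p_fake * fake)
    c += lr * ((1 - p_real) - p_fake)

    # Generator update: every failure to fool the discriminator nudges
    # the generator toward samples the discriminator would call real
    # (gradient ascent on log p_fake)
    p_fake = sigmoid(w * fake + c)
    grad = (1 - p_fake) * w       # d log p_fake / d fake
    a += lr * grad * z
    b += lr * grad

# After training, generated samples drift toward the real mean of 4
fakes = a * rng.normal(size=10_000) + b
print(f"mean of generated samples: {fakes.mean():.2f}")
```

The key point the article describes is visible in the loop: neither player is trained on a fixed target – each improves only by exploiting the other's current weaknesses.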
However, researchers like West and Bergstrom are working on ways to bust the deepfakes. Apparently, it’s pretty easy to do right now: West notes that people taking the test on their website will find it’s not that hard to tell the difference, with asymmetrical faces, bad teeth, unrealistic hair, and weird ears giving the game away.
That said, every time the program fails, it gets better. According to West, these AI-generated faces will soon be indistinguishable from the genuine article. However, West and Bergstrom insist their message isn’t that people shouldn’t believe anything – just that they should keep their guard up.