If you have not heard, we are now living in a world where what-you-see-may-not-be-what-you-get (WYSMNBWYG). Case in point: the photo you see here. The people you see in the above image are not real people. I repeat. They are NOT real people. They were generated by artificial intelligence, or in technical terms, by a Style-based Generator Architecture for Generative Adversarial Networks. How this creepy AI tech works is, roughly, that it learns facial features from a huge pile of photos of real human faces and recombines them to synthesize images of people who do not exist.
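For the curious, the core trick behind a GAN is two networks playing against each other: a generator that turns random noise into fake images, and a discriminator that tries to tell fakes from real photos. Here is a toy sketch in plain numpy that shows the shape of that game; the sizes and the single-layer "networks" are illustrative assumptions on my part, nowhere near NVIDIA's actual StyleGAN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8    # size of the random "style" vector fed to the generator
PIXELS = 16   # size of a flattened toy "image"

# Generator: maps random noise to an image-shaped output.
G = rng.normal(size=(LATENT, PIXELS))
def generate(z):
    return np.tanh(z @ G)

# Discriminator: maps an image to a probability that it is real.
D = rng.normal(size=(PIXELS, 1))
def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ D)))

z = rng.normal(size=(4, LATENT))      # a batch of 4 random latent vectors
fakes = generate(z)                   # 4 synthetic "faces"
real = rng.normal(size=(4, PIXELS))   # stand-in for real training photos

# The adversarial objective: the discriminator is trained to score
# real images high and fakes low, while the generator is trained to
# push the discriminator's score on its fakes toward "real".
d_loss = -np.mean(np.log(discriminate(real) + 1e-9)
                  + np.log(1.0 - discriminate(fakes) + 1e-9))
g_loss = -np.mean(np.log(discriminate(fakes) + 1e-9))

print(fakes.shape)  # (4, 16)
print(d_loss, g_loss)
```

In a real system both networks are deep, trained for days on GPUs, and the generator ends up so good that the discriminator (and we) can no longer tell its output from photographs.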
The real mind-blowing revelation here is how this system is able to generate photos of people so realistic that they actually look more real than the real people you see in the digital domain who, more often than not, are the products of filters and photo editing. We have no idea what the applications for this NVIDIA-developed AI tech are. All we know is, at the rate AI, machine learning and whatnot are progressing, what-you-see-may-not-be-what-you-get from here on out. The implications could be quite profound if this tech were accessible to the average Joe.
How? Well, up till now, people with ill intentions could deceive using photos they grabbed from the Internet, and those can easily be exposed with a reverse image search. But if the image is of a new person created by AI, you may be led to believe that the person is somehow real, since a reverse image search will dig up nothing. That said, I imagine this tech is still not accessible to everyday people, unlike deepfakes. Then again, the implications of deepfakes may well be smaller than those of filters, especially now that mobile phone cameras can even trim a person’s figure. As if makeup isn’t good enough to deceive men around the world…
Speaking of filters… here’s a PSA for those who dig filters:
And here’s the video demonstrating this creepy technology: