Among all the neural network, machine learning, and artificial intelligence research currently underway, plenty of experiments produce results that sit on the delicate line between interesting and scary.
Automatic image processing has become a showcase application for artificial neural networks, thanks in part to decades of photos and selfies shared on the internet. As a result, there is a huge trove of face shots to "harvest" for AI training, used for everything from simulating aging in mobile apps to generating collections of surreal faces of people who don't even exist.
The stock photography industry will never be the same again, but Mario Klingemann wondered what would happen if he asked those artificial neural networks to generate virtual head shots in sync with music. As you can see in the video below, the faces get genuinely striking when the music is booming!
StyleGAN2 – faces synchronized to the beat of the music
Klingemann used StyleGAN2, the generative adversarial network created by Nvidia and released in open-source form over a year ago. He did not undertake custom image training himself; instead, he adapted the GAN so that its generated output is transformed on the fly, driven by the sound spectrum of a supplied audio file. In this case, the track is Triggernometry by Kraftamt.
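Klingemann has not published his exact mapping, but the basic idea can be sketched: take the magnitude spectrum of each audio frame and use it to perturb a StyleGAN2-style latent vector, so louder, busier frames push the generated face further from its resting state. Everything below (the function name `latent_from_frame`, the scaling scheme, the frame size) is illustrative, not his actual code; in a real pipeline each resulting vector would be fed to the StyleGAN2 generator to render one video frame.

```python
import numpy as np

Z_DIM = 512          # StyleGAN2's default latent dimensionality
SAMPLE_RATE = 22050
FRAME_SIZE = 2048

def latent_from_frame(frame, base_latent, strength=0.5):
    """Perturb a base latent vector using one audio frame's spectrum.

    Hypothetical sketch: louder frames (higher RMS) move the latent
    further from base_latent; the spectrum's shape decides the direction.
    """
    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Stretch the spectrum to the latent dimensionality
    bins = np.interp(np.linspace(0, len(spectrum) - 1, Z_DIM),
                     np.arange(len(spectrum)), spectrum)
    # Direction from spectral shape, amplitude from frame loudness
    direction = (bins - bins.mean()) / (np.abs(bins).max() + 1e-9)
    rms = np.sqrt(np.mean(frame ** 2))
    return base_latent + strength * rms * direction

rng = np.random.default_rng(0)
base = rng.standard_normal(Z_DIM)   # the "resting" face

# A loud 440 Hz test tone vs a near-silent version of it:
# the loud frame should displace the latent much further.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
loud = np.sin(2 * np.pi * 440 * t)
quiet = 0.01 * loud

d_loud = np.linalg.norm(latent_from_frame(loud, base) - base)
d_quiet = np.linalg.norm(latent_from_frame(quiet, base) - base)
```

Walking the audio file frame by frame through a mapping like this, and interpolating between successive latents, is one plausible way to get faces that morph in time with the beat.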
Some of Klingemann's Twitter followers suggested he slow the GAN-made video down a bit to reveal the horrors hidden inside it. The stills below let you see that horror right away. Fair warning: do not scroll down if you have a heart condition or are reading this article at midnight!
* Last warning!