They're calling it neural karaoke.
University of Toronto researchers Raquel Urtasun, an associate professor of computer science and Canada Research Chair in Machine Learning and Computer Vision, and Sanja Fidler, an assistant professor of computer science, along with PhD student Hang Chu, have trained recurrent neural networks to take a digital photo and turn it into a computer-generated singalong and dance-along.
The team's ‘neural karaoke’ has been featured in The Guardian and Huffington Post.
The Guardian reports that the researchers had a program analyze Christmas photographs, generate lyrics, and then sing those lyrics to music it had composed itself.
Chu trained a neural network on 100 hours of online music, according to the story.
The team also taught the program how to dance, using an hour of footage from the video game Just Dance.