What you see here is a video demonstration of Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation. In layman's terms, it means squeezing high-quality slow-motion videos out of 30-frame-per-second footage – something that wasn't possible until now. If you have watched enough documentaries where normal-speed footage is slowed down, you'll know the quality is often less than desirable.
Using a deep-learning-based system, researchers at graphics card maker NVIDIA are able to create gorgeous 240fps slow-motion footage from standard 30fps video, or spit out insane 480fps slo-mo from 60fps original footage.
“Using NVIDIA Tesla V100 GPUs and cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.”
The result is slowed-down video that is more fluid and less blurry. But why this technology? And how is it useful to everyday people? Simple. Say you want to capture video in 4K, but slow down only the segments you choose. And when you do so, you don't want the video quality to suffer because of the slow motion.
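To see why a learned approach matters, here is a minimal sketch of the naive alternative: generating intermediate frames by pixel-wise linear blending. This is not NVIDIA's method (Super SloMo uses a convolutional neural network to estimate motion between frames); the blend below ignores motion entirely, which is exactly why simple slow-motion conversions look blurry or "ghosted" on moving objects. All function names here are illustrative, not from any real tool.

```python
import numpy as np

def interpolate_linear(frame_a, frame_b, t=0.5):
    """Naive intermediate frame: a pixel-wise blend of two frames.
    Ignores motion, so moving objects appear doubled/ghosted --
    the artifact a learned interpolator like Super SloMo avoids."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

def double_framerate(frames):
    """Turn e.g. a 30fps sequence into 60fps by inserting one
    blended frame between each consecutive pair of frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(interpolate_linear(a, b))
    out.append(frames[-1])
    return out

# Tiny demo with 2x2 grayscale "frames" of uniform brightness
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 100)]
doubled = double_framerate(frames)
print(len(doubled))       # 3 frames: original, blend, original
print(doubled[1][0, 0])   # 50 -- the midpoint brightness
```

A learned interpolator instead predicts where each pixel moves between the two frames and warps them accordingly, producing sharp intermediate frames rather than averaged ones.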
That’s a simple enough reason to get us all excited. Sure, you could record every second in slow motion. Now that modern smartphones are capable of shooting 240fps video, they can very well do that. But first, as said, you probably don’t need every single moment to be in slo-mo (frankly, it would be a pain to watch an entire one-hour video in slow motion), and secondly, it would be a pain to hold a phone steady for an extended period of time, which means you are likely to be tied down by a tripod.