Researchers Create Video from a Single Image

A team at the University of Washington has developed a method capable of turning static images of fluid motion into animated videos. The method makes it possible to animate flowing material, such as water, clouds, and smoke. It produces a short video that loops seamlessly, giving the impression of endless movement.

“A picture captures a moment frozen in time. But a lot of information is lost in a static image. What led to this moment, and how are things changing? Think about the last time you found yourself fixated on something really interesting — chances are it wasn’t totally static,” said lead author Aleksander Holynski, a doctoral student in the Paul G. Allen School of Computer Science and Engineering. “What’s special about our method is that it doesn’t require any user input or extra information. All you need is a picture. And it produces as output a high-resolution, seamlessly looping video that quite often looks like a real video.”

Developing a method that turns a single photo into a believable video has been a challenge for the field. “It effectively requires you to predict the future,” Holynski said. “And in the real world, there are nearly infinite possibilities of what might happen next.”

To estimate motion, the team trained a neural network on thousands of videos of waterfalls, rivers, oceans, and other materials with fluid motion. The training process asked the network to guess the motion of a video given only its first frame. After comparing the prediction with the actual video, the network learned to identify clues — ripples in a stream, for example — that helped it predict what happened next. The system then uses that information to determine whether and how each pixel should move.
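A minimal sketch of this kind of training setup, written in PyTorch, is shown below. The model architecture, the data pairs, and the "true" motion fields (for example, optical flow measured from the training clips) are assumptions for illustration, not the authors' code.

import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Maps an RGB frame (B, 3, H, W) to a per-pixel motion field (B, 2, H, W)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # per-pixel (dx, dy)
        )

    def forward(self, frame):
        return self.net(frame)

model = MotionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(first_frame, true_flow):
    """first_frame: (B, 3, H, W); true_flow: the motion observed in the real clip."""
    optimizer.zero_grad()
    predicted_flow = model(first_frame)        # guess the motion from a single frame
    loss = loss_fn(predicted_flow, true_flow)  # compare against what actually happened
    loss.backward()
    optimizer.step()
    return loss.item()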

Initially, the team tried to animate the photo using a technique called “splatting,” which moves each pixel according to its predicted motion. That approach, however, presented a problem.

“Think about a flowing waterfall,” Holynski said. “If you move the pixels down the waterfall, after a few frames of the video you’ll have no pixels at the top!”
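The problem is easy to see in a toy version of forward splatting. The NumPy sketch below is an illustration of the general idea, not the paper's implementation: every pixel is pushed to its predicted destination, and any destination that no source pixel reaches is left as a hole. For a purely downward flow, that means empty rows at the top of the frame.

import numpy as np

def forward_splat(image, flow):
    """image: (H, W, 3) float array; flow: (H, W, 2) per-pixel displacement in pixels."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    covered = np.zeros((h, w), dtype=bool)

    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    new_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)

    out[new_y, new_x] = image[ys, xs]  # push each source pixel to its target location
    covered[new_y, new_x] = True       # mark destinations that received at least one pixel
    return out, ~covered               # the second output marks the holes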

To solve that, the team created “symmetric splatting.” The method predicts both the past and the future for an image and combines them into one animation.

“Looking back at the waterfall example, if we move into the past, the pixels will move up the waterfall. So we will start to see a hole near the bottom,” Holynski said. “We integrate information from both of these animations so there are never any glaringly large holes in our warped images.”
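The concept can be sketched by reusing the hypothetical forward_splat helper above: warp the image forward in time, warp it backward as if it were arriving from the past, and crossfade the two so that where one warp leaves a hole, the other usually has valid pixels. Again, this is only an illustration of the idea, not the authors' implementation.

def symmetric_frame(image, flow, t, num_frames):
    """Frame t of a loop of length num_frames, blending a future warp with a past warp."""
    future, future_holes = forward_splat(image, flow * t)              # t steps into the future
    past, past_holes = forward_splat(image, -flow * (num_frames - t))  # arriving from the past

    alpha = t / num_frames                      # crossfade weight over the course of the loop
    frame = (1 - alpha) * future + alpha * past

    # Wherever one warp left a hole, fall back to the other warp's pixels.
    frame[future_holes] = past[future_holes]
    frame[past_holes] = future[past_holes]
    return frame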

The researchers wanted their animation to loop seamlessly to create a look of continuous movement. The animation network follows a few tricks to keep things clean, including transitioning different parts of the frame at different times and deciding how quickly or slowly to blend each pixel depending on its surroundings.
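Continuing the sketch above, a seamless loop falls out of the fact that the first and last frames both reduce to the original photo. Note that this toy version uses a single global crossfade, whereas the method described here adapts the transition per region and per pixel.

def render_loop(image, flow, num_frames=60):
    """Render one full cycle; playing it on repeat loops without a visible seam."""
    return [symmetric_frame(image, flow, t, num_frames) for t in range(num_frames)]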

The method works best for objects with predictable fluid motion. At present, it struggles to predict how reflections should move and how water distorts the appearance of objects beneath it.

“When we see a waterfall, we know how the water should behave. The same is true for fire or smoke. These types of motions obey the same set of physical laws, and there are usually cues in the image that tell us how things should be moving,” Holynski said. “We’d love to extend our work to operate on a wider range of objects, like animating a person’s hair blowing in the wind. I’m hoping that eventually the pictures that we share with our friends and family won’t be static images. Instead, they’ll all be dynamic animations like the ones our method produces.”

The work will be presented June 22 at the Conference on Computer Vision and Pattern Recognition.

Published: June 2021
Glossary
neural network
A computing paradigm that attempts to process information in a manner similar to that of the brain; it differs from conventional programming in that it relies not on pre-programmed rules but on the acquisition and evolution of interconnections between nodes. These computational models are used extensively in pattern recognition and machine learning applications, where the interconnections between nodes continually compute updated values from previous inputs.
algorithm
A precisely defined series of steps that describes how a computer performs a task.
computer vision
Computer vision enables computers to interpret and make decisions based on visual data, such as images and videos. It involves the development of algorithms, techniques, and systems that enable machines to gain an understanding of the visual world, similar to how humans perceive and interpret visual information. Key tasks within computer vision include image recognition: identifying and categorizing objects, scenes, or patterns within images.
