Runway, the company that co-developed the text-to-image model Stable Diffusion, has released Gen-1, a video generation AI system. Like Stable Diffusion, the model is driven by text prompts, but instead of generating images, it transforms existing videos.
A short demonstration video, published on the company’s official YouTube channel, shows how Gen-1 can turn a clip of people walking down the street into claymation puppets. A simple text prompt, “claymation style,” is all that is required for the transformation.
Later in the same video, Runway shows that Gen-1 accepts both text and image input to create new video content from existing clips. Beyond direct transformations, Gen-1 supports what Runway calls Storyboard mode.
Storyboard turns mockups into animated renders; the video shows a stack of books transformed into a skyline at night. Then there is Mask mode, which lets video editors isolate objects in a video and modify them. The example this time shows Gen-1 adding spots to a dog, and the short clip also highlights a limitation: the AI places two of the spots directly on the dog’s eyes.
Render mode turns untextured renders into realistic outputs, guided by text prompts or a reference image.
Customization mode, finally, lets users fine-tune the model for “even higher fidelity results.”
You can watch the full video below: