Adobe previewed its generative video tools earlier this year, but details were relatively scarce. Today, Adobe shared much more information, including actual videos created in its Adobe Firefly Video Model.
Adobe says its Adobe Firefly Video Model is designed to “empower film editors and video professionals” with a wide variety of tools to inspire creative vision, fill gaps in a video’s editing timeline, or add new elements to existing clips.
“The Firefly Video Model will extend Adobe’s family of generative AI models, which already includes an Image Model, Vector Model and Design Model, making Firefly the most comprehensive model offering for creative teams,” says Adobe. “To date, Adobe Firefly has been used to generate over 12 billion images globally.”
Arriving in public beta later this year, new Firefly-powered Text to Video and Image to Video capabilities will come to Adobe Firefly on the web, and some AI features will be implemented natively in Adobe Premiere Pro, which was updated yesterday with a suite of new color grading tools.
Text to Video enables users to generate video clips from simple text prompts. These prompts respond to camera-specific language, including terms for angle, motion, and zoom. With Image to Video, users can feed Firefly reference still frames to generate motion clips.
Adobe published numerous AI-generated clips, all of which were created in “under two minutes” using the Adobe Firefly Video Model.
“Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation,” says Ashley Still, senior vice president, Creative Product Group at Adobe. “We are excited to bring new levels of creative control and efficiency to video editing with Firefly-powered Generative Extend in Premiere Pro.”
Adobe notes that the camera control prompts, like angle and motion, can be combined with real video to further augment the look, flow, and feel of content without needing to reshoot something.
Adobe also shared clips that it generated to augment existing real-world footage. The first clip below is original, human-captured footage, while the second was generated using Firefly. The final clip is the combined, edited footage put together into a single sequence.
“Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage,” Still writes.
Adobe believes video creators and editors can use Adobe's AI technology to address gaps in footage, remove unwanted objects from a scene, smooth out transitions, and create the perfect B-roll clips.
As with Adobe Firefly’s other tools and functions, the Firefly Video Model is designed to be commercially safe and has been trained exclusively using content Adobe has permission to use.
The Adobe Firefly Video Model beta will be released later this year.
Image and video credits: Adobe