Stability.ai, an open-source AI company established in 2019, has announced its latest model, Stable Video Diffusion, which transforms still images into short animations. Like its predecessor, the Stable Diffusion image model, the new model has been made available as a research preview through Stability.ai's GitHub repository.
Stable Video Diffusion generates animated sequences from an uploaded still image. Using the image's content as its starting point, the model produces a video of 25 frames, yielding a brief animation; a second variant generates 14-frame videos. The output resolution tops out at 576×1024 and depends on the size of the uploaded image.
Stability.ai claims that Stable Video Diffusion outperforms rival AI models, citing a study that accompanied the release. That study has not been peer reviewed, however, so its claims have not been independently verified. The comparison covered Runway's GEN-2 model and Pika Labs' offering.
There are limitations, however. Videos generated from still images last only about 4 seconds, which suits looping content but falls short for longer original animations. The model also occasionally fails to generate an animation at all, and motion can be sluggish or unnatural.
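The ~4-second limit follows directly from the frame count and playback speed. As a rough sanity check (a sketch only; the article does not state a frame rate, so a playback rate of about 6 frames per second is assumed here), the clip length can be computed as:

```python
# Rough duration arithmetic for short image-to-video clips.
# Assumption (not from the article): playback at ~6 frames per second,
# a common default for brief generated clips.

def clip_duration_seconds(num_frames: int, fps: float = 6.0) -> float:
    """Return the playback length of a clip in seconds."""
    return num_frames / fps

# The 25-frame variant yields a clip of roughly 4 seconds,
# consistent with the limit described above.
print(clip_duration_seconds(25))  # ~4.2
print(clip_duration_seconds(14))  # ~2.3
```

At an assumed 6 fps, the 25-frame output runs about 4.2 seconds and the 14-frame variant about 2.3 seconds, which is why looping is the most practical use of such short clips.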
Like many generative models, Stable Video Diffusion struggles with certain content: text in the source image can become illegible in the resulting video, and faces may be distorted. The model is currently intended solely for research purposes, and accessing it via Stability.ai's GitHub repository requires prior experience with downloading and running code.
Stability.ai's unveiling of Stable Video Diffusion adds to the swift evolution of AI technology. Around the same time, Pika Labs introduced Pika 1.0, a text-to-video AI generator, underscoring how rapidly video and image generation are advancing through ongoing research endeavours.