Stable Diffusion Video - XT v1.1 [IMG2VID]

CHECKPOINT
Reprint



(SVD XT 1.1) Image-to-Video is a recently released latent diffusion model trained to generate short video clips from a single conditioning image.

This model was trained to generate 25 frames at a resolution of 1024x576 given a context frame of the same size, and was fine-tuned from SVD Image-to-Video [25 frames].

Fine-tuning was performed with fixed conditioning at 6 FPS and a motion bucket ID of 127 to improve output consistency without the need to adjust hyperparameters. These conditions remain adjustable and have not been removed. Performance outside the fixed conditioning settings may vary compared to SVD 1.0.
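The fixed conditioning above maps directly onto the generation parameters exposed by the Hugging Face `diffusers` library. Below is a minimal sketch of running this checkpoint with `StableVideoDiffusionPipeline`; it assumes `diffusers` (0.24+), `torch`, and a CUDA GPU, and that you have accepted the model license on the Hub. The file names are placeholders.

```python
# Hedged sketch: generating a clip with SVD XT 1.1 via diffusers.
# Assumes diffusers >= 0.24, torch with CUDA, and gated-model access on the Hub.

# Fixed conditioning the checkpoint was fine-tuned with:
NUM_FRAMES = 25           # clip length
RESOLUTION = (1024, 576)  # width x height
FPS = 6                   # frames per second used during fine-tuning
MOTION_BUCKET_ID = 127    # motion conditioning bucket


def generate_clip(image_path: str, out_path: str = "clip.mp4") -> None:
    """Generate one short clip from a single conditioning image."""
    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")

    # The context frame should match the training resolution.
    image = load_image(image_path).resize(RESOLUTION)

    frames = pipe(
        image,
        num_frames=NUM_FRAMES,
        fps=FPS,
        motion_bucket_id=MOTION_BUCKET_ID,
        decode_chunk_size=4,  # smaller chunks lower peak VRAM use
    ).frames[0]
    export_to_video(frames, out_path, fps=FPS)
```

Raising `motion_bucket_id` above 127 increases motion in the output, but per the note above, results outside the fixed conditioning settings may vary.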

---

This model is intended for research purposes only.

While this model is open source, please respect the hard work of the people at Stability AI and review the acceptable use cases set forth by the team.

  • Developed by: Stability AI

  • Funded by: Stability AI

  • Model type: Generative image-to-video model

  • Finetuned from model: SVD Image-to-Video [25 frames]

Again, please review the acceptable use policy and the details that the team has graciously provided.

Version Detail

SD 2.0
The model should not be used in any way that violates Stability AI's Acceptable Use Policy.

Project Permissions

Model reprinted from: https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors can contact us to transfer their models through our Discord channel: #claim-models.
