FramePack is Now Available on RunDiffusion


FramePack, a next-frame video diffusion framework from lllyasviel, is now available to run directly through RunDiffusion's cloud platform.

This integration gives users access to FramePack’s unique context-packing approach for efficient video generation—without needing to set up the project locally.


What is FramePack?

FramePack introduces a method for next-frame prediction in video diffusion models. It compresses the context of previously generated frames into a fixed-length representation, allowing the model to focus only on relevant prior information. This results in a consistent input size for each generation step, regardless of how many frames have been generated.

This framework is designed for progressive video generation, where frames are created one by one in sequence. The goal is to improve performance and reduce compute requirements during both training and sampling.

The official repository is here:

GitHub - lllyasviel/FramePack: Lets make video diffusion practical!

Core Features (from the repository)

  • Progressive Frame-by-Frame Video Generation: Frames are generated sequentially, one at a time.
  • Context Compression: Packs earlier frames into a condensed form before predicting the next frame.
  • Sampling System Included: Comes with a simple high-quality sampling pipeline for evaluation and demo purposes.
  • Training Code Provided: Training scripts for learning frame prediction using standard datasets.
The authors note that this packing keeps the input size constant, so the model can keep generating frames without performance degrading as the video grows longer.
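
To make the idea concrete, below is a minimal Python sketch of fixed-length context packing. This is not FramePack's actual implementation; the function name, shapes, and packing rule here are made up purely to illustrate the key property described above: the packed context stays the same size no matter how many frames have already been generated.

import numpy as np

CONTEXT_LEN = 4  # the packed context always has exactly this many slots

def pack_context(frames):
    # Toy packing rule: average every older frame into a single slot, keep the
    # three most recent frames as-is, and zero-pad when there are not enough
    # frames yet. The output shape never grows with len(frames).
    slots = [np.mean(frames[:-3], axis=0)] if len(frames) > 3 else []
    slots += frames[-3:]
    while len(slots) < CONTEXT_LEN:
        slots.insert(0, np.zeros((64, 64, 3)))
    return np.stack(slots)  # always (4, 64, 64, 3)

frames = []
for _ in range(120):                          # generate frames one by one
    context = pack_context(frames)            # constant-size input every step
    frames.append(np.random.rand(64, 64, 3))  # stand-in for the diffusion step

FramePack's actual packing is more sophisticated than this hand-written rule, but the constant-size input is the property that keeps per-frame cost flat as the video gets longer.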

How to Use FramePack on RunDiffusion

Click on FramePack in the left-hand menu and then click Select.

Set your session parameters, then click Launch to start your session. We recommend a Large server for video work.

Drop or Upload an Image.

Enter a Prompt.

Check your settings. You can change the Total Video Length, but note that this can drastically change processing time: a 5-second video takes about 10-11 minutes. Assuming generation time scales roughly with length, a 40-second video can take well over an hour.

When your settings are the way you'd like them, click Start Generation.

While the video is processing, you can End the Generation early if needed.

After processing, you can download your video by clicking the download symbol.

Example Videos

[Embedded video: 5 Second Video]

[Embedded video: 40 Second Video]

If you’re working with video and animation in diffusion models, you may also find these RunDiffusion guides useful:

LTX-2 Retake: Directable AI Video Editing for Creators on RunDiffusion | RunDiffusion
LTX-2 Retake lets creators regenerate selected moments within a video while preserving continuity. You can reshape tone, dialogue, or motion without re-rendering entire scenes, making Runnit a powerful tool for directable AI filmmaking.
LTX-2 Prompt Guide | RunDiffusion
Learn how to write cinematic prompts for LTX-2 on RunDiffusion to generate stunning AI video from images or text. This guide breaks down 9 essential prompt writing techniques with examples, covering camera behavior, atmospheric detail, character motion, and genre-specific language.
Wan2.1 Is Now Available Inside ComfyUI | RunDiffusion
Wan2.1 is now live inside ComfyUI on RunDiffusion, offering advanced open-source video and image generation features, including Text-to-Video and Image-to-Video. Access Wan2.1 easily from the ComfyUI app.
About the author
Adam Stewart
