Chroma is a new 8.9B-parameter model, still in development, based on Flux.1 Schnell.
It’s fully Apache 2.0 licensed, ensuring that anyone can use, modify, and build on top of it.
Like my HiDream workflow, this will let you work with:
- txt2img or img2img,
- Detail-Daemon,
- Inpaint,
- HiRes-Fix,
- Ultimate SD Upscale,
- FaceDetailer.
The model is still being trained, so there are many updated versions (the latest as of today, May 15th, is v29.5). You can find all the versions here: https://huggingface.co/lodestones/Chroma/tree/main
In brief, this model is:
- Trained on a 5M dataset, curated from 20M samples including anime, furry, artistic work, and photos.
- Fully uncensored, reintroducing missing anatomical concepts.
- Built as a reliable open-source option for those who need it.
Since it is based on Flux.1 Schnell, it should run on low-VRAM GPUs, so you can easily use it locally.
You will also need one of the t5xxl text encoder model files, which you can find in this repo: fp16 is recommended, but if you don't have that much memory, the fp8_scaled version is a good alternative. Put it in the ComfyUI/models/text_encoders/ folder. The VAE is the same as for FLUX or HiDream, so you should already have it.
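As a quick sketch, the file placement described above looks like this (the filename t5xxl_fp16.safetensors and the download path are assumptions; use whichever encoder variant you actually downloaded):

```shell
# Make sure the ComfyUI model folders exist
mkdir -p ComfyUI/models/text_encoders ComfyUI/models/vae

# Move the downloaded encoder into place (assumed filename, adjust to yours):
# mv ~/Downloads/t5xxl_fp16.safetensors ComfyUI/models/text_encoders/
# The FLUX/HiDream VAE goes in ComfyUI/models/vae/ if you don't have it yet.
```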