Pixel Art Style (illustrious by Skormino)

LORA
Reprint


Updated:

Creator: Skormino

Version 7.05 🔨

Usage Recommendations:

I still recommend using my custom node in ComfyUI; I believe this is the right approach. See: How to use my custom node

  • Model: Plant Milk Model Suite Walnut | indexed v1

  • CFG: 3–4

  • Steps: 28+

  • Sampler: Euler | EulerA

  • Scheduler: Simple | sgm_uniform

Write your prompt after my trigger words: masterpiece, pixpix, 8-bit, pixel_art

Please avoid adding too many quality tags: they are meant for smooth, polished images, while pixels are inherently square, so piling them on tends to work against the pixel look.
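If you generate outside ComfyUI, the recommendations above map roughly onto a diffusers call. This is only a sketch under assumptions: the checkpoint and LoRA filenames are placeholders, the Illustrious-based base model is assumed to load as a standard SDXL single file, and the author's actual workflow uses ComfyUI with their custom node.

```python
# Minimal diffusers sketch of the recommended settings. Filenames are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "plantMilkModelSuiteWalnut.safetensors",   # placeholder path to the base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# ComfyUI's "EulerA" corresponds to Euler Ancestral here; ComfyUI's
# "simple" / "sgm_uniform" scheduler options have no exact one-line equivalent.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Attach the pixel-art LoRA (placeholder filename).
pipe.load_lora_weights("pixel_art_style_illustrious_v7.safetensors")

image = pipe(
    # Trigger words first, then the rest of the prompt.
    prompt="masterpiece, pixpix, 8-bit, pixel_art, a knight on a cliff at sunset",
    guidance_scale=3.5,        # CFG 3-4
    num_inference_steps=28,    # 28+ steps
).images[0]
image.save("pixel_art_sample.png")
```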

VAE:

I usually used the baked VAE, but after updating Comfy I had no choice but to use the first one available, which happened to be lunaXLILNAIVAE_luna. Even if I wanted to, I can't check whether it makes any difference.
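For reference, swapping the baked VAE for an external one in diffusers looks roughly like the snippet below; this continues the pipeline sketch above, and the filename is only a placeholder local path for the lunaXLILNAIVAE_luna file.

```python
# Hedged sketch: replace the checkpoint's baked-in VAE with an external one.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_single_file(
    "lunaXLILNAIVAE_luna.safetensors",   # placeholder local path
    torch_dtype=torch.float16,
).to("cuda")
pipe.vae = vae   # used for decoding instead of the baked VAE
```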

Speaking of VAEs: I recently came across an SDXL-based VAE that is pixelated. The results look amazing, but SDXL itself is outdated, and that VAE conflicts with my LoRAs. I'd love to train my own VAE; about six people have offered to help with their hardware, but something always went wrong, and I still haven't managed to get anything done on external equipment.

I’m certain that a VAE trained on my data could produce better results than a LoRA.

I haven’t been able to get back to this for a while. Laziness has been choking me.

Test Model:

I’m using 72 images (none of which were in previous training, so there should definitely be a difference compared to other versions).

I like that this version produces interesting horizontal landscapes and unusual characters. The girls sometimes have issues with their eyes, but I know why—it’s clearly a dataset issue. Fixing it would require creating a whole new model.

Over the past three months, I've been hit with some serious apathy, but I've also come to understand a few technical truths. For example: did you know that if you allow even one questionable-quality image into your dataset, all the other flawless images will have almost no positive effect? The bad drags down the good. A neural network never forgets what it has seen, and anything questionable will resurface in generations. So even with what look like good images, we can still end up with mediocre results, because details intentionally drawn by artists may read to the model as artifacts and then show up in every generation.

The larger the dataset, the riskier it gets. You never know what hidden flaws an image might have during training. But the number of images determines the variety of ways something can be drawn. I could go on philosophizing forever, but I’ll stop here.

By the way, I wanted to release a version for Pony—it does produce interesting results, but Pony probably needs a much larger dataset than Illustrious, so I’ll hold off until the dataset is truly impressive.

Version Detail

Base Model: Illustrious
Trigger Words: pixpix, 8-bit, pixel_art

Project Permissions

Model reprinted from: https://civitai.com/models/1631459?modelVersionId=2396350

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors can contact us through our Discord channel, #claim-models, to have their models transferred.
