Cover images are directly from the vanilla (the original) base model in a1111, at 1MP resolution. No upscale, no plugins (face/hands inpaint fixes, CFG Rescale, etc.), not even a negative prompt. The images contain metadata, so you can drop them into a1111 and reproduce them.
Sharing merges that use this LoRA, or re-uploading it to other platforms, is prohibited. This model is only published on Civitai and TensorArt. If you see "me" and this sentence on any other platform, it is fake and the platform you are using is a thief.
Latest version update
(6/21/2025) Buzz tips:
Thanks for the buzz tips. However, since Civitai never wants to pay its creators, buzz is useless to me. So I tipped it forward to creators who use the Civitai online trainer.
If you find a creator who always releases LoRAs that are exactly 217.87MB, they most likely (95%) rely on the Civitai online trainer, because that size comes from its default LoRA structure settings.
I don't use the online trainer myself; it does not meet my requirements.
(6/21/2025) illus01 v1.185c:
Note: This is a special version. "c" stands for "colorful", "creative", sometimes "chaotic". See v1.165c log for more info.
Removed ~30% of the images (from the v1.165c dataset) that were too chaotic (could not be captioned properly).
Mixed interpolation modes (Nearest + Lanczos) when resizing images before the VAE (see the sketch below). Images should feel clearer and sharper, like +50% detail even when you zoom in, because more "noisy" pixels survive the resize.
Other changes that I forgot; there were 20 intermediate versions and I lost track.
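For reference, a minimal sketch of what such a mixed resize could look like with Pillow (the 50/50 blend ratio here is an assumption for illustration, not the exact recipe used):

```python
from PIL import Image

def mixed_resize(img: Image.Image, size: tuple[int, int], blend: float = 0.5) -> Image.Image:
    """Resize with both Nearest and Lanczos, then blend the results.

    Lanczos gives smooth, anti-aliased edges; Nearest keeps hard, "noisy"
    pixels. Blending the two preserves perceived sharpness after downscaling.
    `blend` is the Nearest fraction (0.5 is an assumed example value).
    """
    smooth = img.resize(size, Image.Resampling.LANCZOS)
    sharp = img.resize(size, Image.Resampling.NEAREST)
    return Image.blend(smooth, sharp, blend)

# Example: resize a training image before VAE encoding.
# img = mixed_resize(Image.open("sample.png"), (1024, 1024))
```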
FAQ:
If you find that this LoRA has a very strong effect even at low strength (~0.3):
This is a normal, standard LoRA (not a slider, etc.), which means its working strength range is 0~1 and its ideal strength is ~0.7. At low strength (0~0.3) there should be no obvious effect.
The example below is on NoobAI e-pred v1.1. On models with a default style, the effect should be even weaker.
However, if on your base model even a low strength (~0.2) still "dramatically" changes the output, and a higher strength (e.g. ~0.5) breaks the base model, that base model has already merged my LoRA (including old versions; they share the same dataset).
I won't be surprised, since this LoRA has been the most downloaded Illustrious LoRA for a long time.
This is the nature of open source: some create, some steal. It is the user's responsibility to notice and ******* those thieves who just upload merged checkpoints without any details or credits.
Stabilizer
It's an all-in-one finetuned base model LoRA. If you apply it to vanilla NoobAI e-pred v1.1, then you will get my personal "finetuned" base model.
It focuses on natural lighting and details, stable prompt understanding and more creativity.
It is not an overfitted style LoRA (trained on only dozens of images). The training dataset is big and very diverse. It does not hurt the creativity of the base model; it adds more. You will not get the same things (faces, backgrounds, etc.) over and over again.
You can get a clean and stable character/style as it should be. No style pollution, no overfitting effects, no matter whether it's a built-in tag or a LoRA, 2D or 3D, human or non-human. Example.
The training dataset only contains high resolution images (avg > 3MP per image, ~1800x1800). Zero AI images. So you get real texture and detail beyond the pixel level, instead of the fake edges and textureless smooth surfaces of models trained on AI images. Example.
Why all-in-one? Because if you train 10 LoRAs on 10 different datasets for different aspects and stack them all up, your base model will blow up (a toy demonstration is below). If you train those datasets in one go, there are no conflicts.
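A toy numerical illustration of the stacking problem (the dimensions and scales here are made up; real UNet layers are much larger): ten independently trained low-rank deltas all add onto the same weights, and their sum quickly rivals the base weights themselves.

```python
import torch

torch.manual_seed(0)
d, rank, n = 64, 8, 10

W = torch.randn(d, d) / d ** 0.5               # a toy base-model weight matrix
deltas = [torch.randn(d, rank) @ torch.randn(rank, d) / d
          for _ in range(n)]                    # 10 independently trained LoRA deltas

stacked = W + sum(deltas)                       # all 10 LoRAs applied at strength 1.0
print(f"|W| = {W.norm():.2f}, |sum of deltas| = {sum(deltas).norm():.2f}")
# The combined delta ends up about as large as the base weights themselves,
# pushing the model far off-distribution. Training once on the merged dataset
# yields a single delta instead of ten uncoordinated ones.
```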
Why not finetune the full base model? I not a gigachad and I don't have millions of training images, so finetuning the entire base model is not necessary. Fun fact: Most (95%) base models out here just merges of merges of merges of tons of LoRAs... Only very few base models are truly fully finetuned models trained by truly gigachad creators.
(6/4/2025): Future plan
The dataset is getting bigger and bigger, and it is difficult and expensive to train. illus01 v1.164 was trained from scratch and took almost 35 hours. So:
The NoobAI version will not be updated. I decided to put my main effort into improving the Illustrious v0.1 versions, which work on NoobAI and all later Illustrious versions (v1, v2...).
I opened a donation page. If you like my model and want to support me training on bigger cloud GPUs, you can support me directly here: https://app.unifans.io/c/fc05f3e2c72cb3f5
Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark, and it works even at a merge strength of 0.05. I coded the watermark and the detector myself. I don't want to use them, but I can.
Remember to leave feedback in the comment section, so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that literally nobody can find or see the reviews.
Have fun.
How to use
Just apply it. No trigger words needed. It also does not patch the text encoders, so you don't have to set a patch strength for the text encoder (in ComfyUI, etc.).
Version prefix:
illus01 = Trained on Illustrious v0.1. (Recommended, even for NoobAI)
nbep11 = Trained on NoobAI e-pred v1.1. (Discontinued)
Recommended usage:
Vanilla (no default style) models, e.g. NoobAI e-pred v1.1
+ This LoRA (strength 0.5~0.7; v-pred models may need lower)
+ 1~3 style tags/LoRAs you like. Note: there is no default style, so you have to hint at the style you want in the prompt or use a style LoRA. (A minimal sketch of this stack is below.)
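As a minimal sketch of this recommended stack using diffusers (the file names, the style LoRA, and the exact strengths are placeholders for illustration):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Vanilla base model (NoobAI/Illustrious are SDXL-based); file names are placeholders.
pipe = StableDiffusionXLPipeline.from_single_file(
    "noobaiXL_epsilonPred11.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# The Stabilizer patches only the UNet, so no text-encoder scale is needed.
pipe.load_lora_weights("stabilizer_illus01.safetensors", adapter_name="stabilizer")
pipe.load_lora_weights("your_style_lora.safetensors", adapter_name="style")
pipe.set_adapters(["stabilizer", "style"], adapter_weights=[0.6, 0.8])

image = pipe(
    "1girl, outdoors, watercolor (medium)",  # hint the style in the prompt
    num_inference_steps=28,
).images[0]
image.save("out.png")
```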
NOT recommended:
Base models with an AI style: AI styles are super overfitted and will override any texture instantly.
Heavily merged base models (merges of merges of merges...): they may have 20+ LoRAs inside and are going to blow up.
Old versions:
New version == new stuff and new attempts.
One big advantage of LoRA is that you can always mix different versions in a second.
You can find more info in "Update log". Beware that old versions may have very different effects.
Now ~: Focusing on natural details and textures, stable prompt understanding, and more creativity from the built-in styles.
Illus01 v1.23 / nbep11 0.138 ~: Focusing on a pure anime style with vivid colors.
Illus01 v1.3 / nbep11 0.58 ~: Focusing on anime style.
FAQ
I can't get textures like the cover images.
This LoRA may have no effect with AI styles (styles trained on AI images), because AI styles are super overfitted and will override any texture instantly. FYI, the cover images are from the vanilla base model.
How do I know if a base model (or LoRA) has an AI style?
There is no good method. Personally, I look at hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it likely has.
I got realistic faces on my anime characters.
I can guarantee that there are no realistic faces in the dataset, so this LoRA has zero knowledge of realistic faces. However, your base model may have it (many models are mixed with realistic models for better details).
Dataset
Latest or recent versions:
~7k images total. Every image is hand-picked by me.
Only normal, good-looking things. No crazy art styles that cannot be described. No AI images, no watermarks, etc.
Only high resolution images. The dataset-wide average is 3.37 MP per image, ~1800x1800.
All images have natural-language captions from Google's latest LLM.
All anime characters are tagged by wd tagger v3 first, then captioned by the Google LLM (a sketch of this two-stage pipeline is below).
Contains nature, outdoors, indoors, animals, daily objects, many things, but no real humans.
Contains all kinds of brightness conditions: very dark, very bright, and both extremes in one image.
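A rough reconstruction of what such a two-stage pipeline can look like (an illustration, not the author's actual script; `run_wd_tagger` is a hypothetical wrapper around SmilingWolf's wd tagger v3, and the Gemini model name is an assumption):

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
llm = genai.GenerativeModel("gemini-1.5-pro")   # "Google's latest LLM" (assumed name)

def caption_image(path: str) -> str:
    img = Image.open(path)
    # Stage 1: booru tags from wd tagger v3 (hypothetical wrapper function).
    tags = run_wd_tagger(img)                    # e.g. ["1girl", "outdoors", ...]
    # Stage 2: natural-language caption conditioned on the detected tags.
    prompt = (
        "Write a natural-language training caption for this image. "
        "Keep it consistent with these detected booru tags: " + ", ".join(tags)
    )
    return llm.generate_content([prompt, img]).text
```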
Other tools
Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: A LoRA trained on, and only on, the real-world dataset (no anime data). It has a stronger effect, with better backgrounds and lighting. Useful for gigachad users who like pure concepts and like to balance weights themselves.
Dark: A LoRA that fixes the high-brightness bias in anime models. Trained on the low-brightness images from the Touching Grass dataset. There are also no humans in its dataset, so it does not affect style.
Contrast Controller: A handcrafted LoRA (no joke, it did not come from training). At 300KB, the smallest LoRA you have ever seen. It controls contrast like a slider on your monitor. Unlike trained "contrast enhancers", its effect is stable, mathematically linear, and has zero side effects on style. (One possible construction is sketched below.)
Useful when your base model has oversaturation issues, or when you want something really colorful.
Example:
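For intuition, here is one plausible way to handcraft a LoRA with an exactly linear effect (an illustration of the general idea, not necessarily how this particular LoRA was built): factor a scaled copy of a small layer's weight, such as the UNet's final conv_out, into LoRA matrices via SVD, so that applying the LoRA at strength s scales that layer's output by exactly 1 + s·k.

```python
import torch

def handcraft_scaling_lora(W: torch.Tensor, k: float = 0.2):
    """Factor delta_W = k * W into exact LoRA factors B @ A via SVD.

    Small layers like SDXL's conv_out have only 4 output channels, so
    rank(W) <= 4 and the factorization is exact at rank 4. Applying the
    LoRA at strength s then gives W + s*k*W = (1 + s*k) * W, which is a
    mathematically linear scaling of the layer, with no training involved.
    """
    W2d = W.reshape(W.shape[0], -1)                 # flatten conv kernel to 2D
    U, S, Vh = torch.linalg.svd(k * W2d, full_matrices=False)
    r = int((S > 1e-6).sum())                       # effective rank
    B = U[:, :r] * S[:r].sqrt()                     # "lora_up" weight
    A = S[:r, None].sqrt() * Vh[:r]                 # "lora_down" weight
    return A, B

# e.g. A, B = handcraft_scaling_lora(unet.conv_out.weight)  # shape [4, 320, 3, 3]
```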
Style Strength Controller: or overfitting-effect reducer. Also a handcrafted LoRA, not from training, so it has zero side effects on style and a mathematically linear effect. It can reduce all kinds of overfitting effects (biases toward objects, brightness, etc.).
Effect test on Hassaku XL: the base model has many biases, e.g. high brightness, smooth and shiny surfaces, prints on walls... The prompt has the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 there is less high-brightness bias and less of the weird smooth feeling on every surface; the image feels more natural.
Differences from the Stabilizer:
The Stabilizer was trained on real-world data. It can only "reduce" overfitting effects on texture, details, and backgrounds by adding them back.
The Style Controller did not come from training. It is more like "undoing" the training of the base model, so the model becomes less overfitted. It can mathematically reduce all overfitting effects, like biases in brightness or objects. (A sketch of the idea is below.)
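One way to build such an "undo" LoRA (a sketch of the general extraction technique, not necessarily the author's exact construction) is to take the weight difference between an overfitted finetune and its original base model, compress it to low rank with SVD, and apply the result at negative strength:

```python
import torch

def extract_diff_lora(W_tuned: torch.Tensor, W_base: torch.Tensor, rank: int = 32):
    """Approximate delta = W_tuned - W_base as rank-limited LoRA factors B @ A."""
    delta = (W_tuned - W_base).reshape(W_tuned.shape[0], -1)
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank].sqrt()               # "lora_up" weight
    A = S[:rank, None].sqrt() * Vh[:rank]           # "lora_down" weight
    return A, B

# Applying this LoRA at strength -s gives roughly W_tuned - s * delta, which
# moves the weights linearly back toward the base model: s near 1 roughly
# undoes the finetune, smaller s reduces its overfitting effects proportionally.
```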
Update log