RouWei

CHECKPOINT
Reprint



https://civitai.com/models/950531?modelVersionId=1882934

An in-depth retraining of Illustrious to achieve the best prompt adherence, knowledge, and state-of-the-art performance.

A large-scale finetune trained on a GPU cluster with a dataset of ~13M pictures (~4M with natural-text captions).

  • Fresh and vast knowledge of characters, concepts, styles, cultural references, and related subjects

  • The best prompt adherence among SDXL anime models at the time of release

  • Solves the main problems with tag bleeding and biases common to Illustrious, NoobAI, and other checkpoints

  • Excellent aesthetics and knowledge across a wide range of styles (over 50,000 artists (examples), including hundreds of unique cherry-picked datasets from private galleries, some received from the artists themselves)

  • High flexibility and variety without a stability tradeoff

  • No more annoying watermarks for popular styles, thanks to a clean dataset

  • Vibrant colors and smooth gradients without a trace of burning; full range even with the epsilon version

  • Pure training from Illustrious v0.1 without involving third-party checkpoints, LoRAs, tweakers, etc.

Dataset cut-off - end of April 2025.

Features and prompting:

Important change:

When prompting artist styles, especially when mixing several, their tags MUST BE in a separate CLIP chunk. Just add BREAK after them (for A1111 and derivatives), use a conditioning concat node (for Comfy), or at least put them at the very end. Otherwise, significant degradation of results is likely.
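
For example (the artist names and subject tags here are placeholders), an A1111-style prompt with the style tags kept in their own chunk could look like:

by artist_a, by artist_b BREAK masterpiece, best quality, 1girl, cat ears, sundress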

Sampling parameters:

  • ~1 megapixel for txt2img, any aspect ratio with resolutions that are multiples of 32 (1024x1024, 1056x, 1152x, 1216x832, ...). Euler_a, 20-28 steps.

  • CFG: 4-9 for the epsilon version (7 is best); 3-5 for the vpred version

  • Sigmas multiply may improve results a bit; CFG++ samplers work fine. LCM/PCM/DMD/... and exotic samplers are untested.

  • Some schedulers don't work well.

  • Highres fix: x1.5 latent upscale + denoise 0.6, or any GAN upscaler + denoise 0.3-0.55.

  • For the vpred version a lower CFG of 3-5 is needed!

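As a rough illustration of these settings, here is a minimal diffusers sketch (assuming the checkpoint has been downloaded as a local .safetensors file; the file name and subject tags are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Load the SDXL checkpoint from a single local file (placeholder name)
pipe = StableDiffusionXLPipeline.from_single_file(
    "rouwei.safetensors", torch_dtype=torch.float16
).to("cuda")
# Euler ancestral sampler, as recommended above
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="masterpiece, best quality, 1girl, cat ears, sundress",  # placeholder
    negative_prompt="worst quality, low quality, watermark",
    width=1024, height=1024,     # ~1 megapixel, sides a multiple of 32
    num_inference_steps=24,      # 20-28 steps
    guidance_scale=7.0,          # epsilon version; use 3-5 for the vpred version
).images[0]
image.save("out.png")
```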

Quality classification:

Only 4 quality tags:

masterpiece, best quality

for positive and

low quality, worst quality

for negative.

Nothing else. Actually, you can even omit the positive quality tags and reduce the negative to low quality only, since quality tags can affect basic style and composition.

Meta tags like lowres have been removed and don't work; it's better not to use them. Low-resolution images have either been removed or upscaled and cleaned with DAT, depending on their importance.

Negative prompt:

worst quality, low quality, watermark

That's all; there is no need for "rusty trombone", "farting on prey", and the like. Do not put tags like greyscale or monochrome in the negative unless you understand what you are doing. Extra tags from the brightness/colors/contrast section below can be used.
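
A minimal prompt skeleton following these recommendations (the subject tags are placeholders):

Positive: masterpiece, best quality, 1girl, cat ears, sundress, outdoors

Negative: worst quality, low quality, watermark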

Artist styles:

Grids with examples and a list/wildcard are available (they can also be found in the "training data").

Used with "by " it's mandatory. It will not work properly without it.

"by " is a meta-token for styles to avoid mixing/misinterpret with tags/characters of similar or close name. This allows to have a better results for styles and at the same time avoid random style fluctuation that you may observe in other checkpoints.

General styles:

2.5d, anime screencap, bold line, sketch, cgi, digital painting, flat colors, smooth shading, minimalistic, ink style, oil style, pastel style

Booru tags styles:

1950s (style), 1960s (style), 1970s (style), 1980s (style), 1990s (style), 2000s (style), animification, art nouveau, pinup (style), toon (style), western comics (style), nihonga, shikishi, minimalism, fine art parody

and everything from this group.

They can be used in combinations (with artists too), with weights, in both positive and negative prompts.

Characters:

Use the full-name booru tag with proper formatting, e.g. karin_(blue_archive) -> karin \(blue archive\); use skin tags for better reproduction, e.g. karin \(bunny\) \(blue archive\). An autocomplete extension can be very useful.

Most characters are recognized just by their booru tag, but results will be more accurate if you describe their basic traits. You can easily redress your waifu/husbando with the prompt alone, without suffering from the typical leakage of basic features.
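
For example (the outfit tags here are just an illustration of redressing via the prompt):

masterpiece, best quality, karin \(blue archive\), white sundress, straw hat, outdoors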

Natural text:

You can use it in combination with booru tags (works great), use only natural text after the style and quality tags, or use just booru tags and forget about natural text altogether; it's all up to you. To get the best performance, keep track of the CLIP 75-token chunks.
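
A quick way to check chunk length is to count tokens; a minimal sketch assuming the transformers library (the prompt is a placeholder; SDXL's first text encoder uses the CLIP ViT-L tokenizer, so the counts match its 75-token chunks):

```python
from transformers import CLIPTokenizer

# Tokenizer matching SDXL's CLIP ViT-L text encoder
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "masterpiece, best quality, 1girl, cat ears, sundress"  # placeholder
token_count = len(tokenizer(prompt).input_ids) - 2  # subtract BOS/EOS tokens
print(f"{token_count} tokens (keep each chunk within 75)")
```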

Vpred version

The main thing you need to know: lower your CFG from 7 down to 5 (or less). Otherwise, usage is the same, with added advantages.

Known issues:

Of course there are:

  • Artist and style tags must be separated into a different chunk from the main prompt or come at the very end

  • There may be some positional or combinational bias in rare cases, but this is not yet clear.

  • There are some complaints about a few of the general styles.

  • The epsilon version relies too much on brightness meta tags; sometimes you will need to use them to get the desired brightness shift

  • Some newly added styles/characters might not be as good and distinct as they deserve to be

  • To be discovered

Requests for artists/characters for future models are open. If you find an artist, character, or concept that performs weakly, is inaccurate, or has a strong watermark, please report it and it will be added explicitly. Follow for new versions.

JOIN THE DISCORD SERVER

License:

Same as Illustrious. Feel free to use it in your merges, finetunes, etc., but please leave a link or mention; it is mandatory.

Thanks:

First of all, I'd like to acknowledge everyone who supports open source and develops and improves code. Thanks to the authors of Illustrious for releasing the model, and thanks to the NoobAI team for being pioneers in open finetuning at such a scale, for sharing their experience, and for raising and solving issues that previously went unnoticed.

Personal:

Artists who wish to remain anonymous - for sharing private works; a few anonymous persons - donations, code, captions, etc.; Soviet Cat - GPU sponsoring; Sv1. - LLM access, captioning, code; K. - training code; Bakariso - datasets, testing, advice, insights; NeuroSenko - donations, testing, code; LOL2024 - a lot of unique datasets; T., [] - datasets, testing, advice; rred, dga, Fi., ello - donations; TekeshiX - datasets. And all the other fellow brothers who helped. Love you so much ❤️.

And of course everyone who gave feedback and made requests; it's really valuable.

If I forgot to mention anyone, please let me know.

Donations

If you want to support me: share my models, leave feedback, make a cute picture with a kemonomimi girl. And of course, support the original artists.

AI is my hobby; I'm spending my own money on it and not begging for donations. However, it has turned into a large-scale and expensive undertaking. Consider supporting it to accelerate new training and research.

(Just keep in mind that I can waste it on alcohol or cosplay girls)

BTC: bc1qwv83ggq8rvv07uk6dv4njs0j3yygj3aax4wg6c

ETH/USDT(e): 0x04C8a749F49aE8a56CB84cF0C99CD9E92eDB17db

XMR: 47F7JAyKP8tMBtzwxpoZsUVB8wzg2VrbtDKBice9FAS1FikbHEXXPof4PAb42CQ5ch8p8Hs4RvJuzPHDtaVSdQzD6ZbA5TZ

If you can offer GPU time (A100+), send a PM.

Version Detail

Illustrious

Project Permissions

Model reprinted from : https://civitai.com/models/950531?modelVersionId=1882934

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors can contact us to transfer the models through our Discord channel --- #claim-models.
