Please use the original: https://tensor.art/models/609075900996308391
creator: Ikena
Hassaku aims to be a model with a bright, clear anime style. The model focuses on NSFW images, but also places a strong emphasis on good-looking SFW images. Join my Discord for everything related to anime models.
My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit (old hentai model)
Sponsors: Mage.space, with its amazing creators program, supports all kinds of creators like me! It comes preinstalled with 80+ high-quality models, hundreds of LoRAs, and a new animation feature. Join their Discord community here!
SinkIn.ai hosts the best Stable Diffusion models on fast GPUs. You can run Hassaku at: https://sinkin.ai/m/76EmEaz
Supporters:
Thanks to my supporters Riyu, SETI, Jelly, Kodokuna and Gpr0mpt on my Patreon!
You can support me on my Patreon, where you can get my other models and early access to new Hassaku versions.
_____________________________________________________
Using the model:
Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {}, use (), since stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt, and "worst quality" and "low quality" in the negative prompt.
My negative prompt is (low quality, worst quality:1.4), with monochrome, signature, text, or logo added when needed.
Use clip skip 1 or 2. Clip skip 2 is better for private parts, img2img, and prompt following. Clip skip 1 is visually better, I assume because the model has more time and freedom there. I use clip skip 2.
Don't use face restoration, and don't use underscores (_): type red eyes, not red_eyes.
Don't go to really high resolutions. Every model, Hassaku included, gets lost in the vastness of large images and has a much higher chance of generating, for example, a second anus.
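If you run the checkpoint through diffusers instead of the web UI, a minimal sketch of the settings above could look like this. The checkpoint filename, step count, and CFG value are placeholder assumptions, and the (tag:1.4) emphasis syntax is web-UI-specific, so plain tags are used here:

```python
# Minimal sketch, assuming the diffusers and torch packages and a locally
# downloaded checkpoint; "hassaku.safetensors" is a placeholder path.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="masterpiece, best quality, 1girl, red eyes, bright anime style",
    # Plain tags only: the (tag:1.4) emphasis syntax is parsed by the web UI,
    # not by diffusers (a helper library such as compel is needed for weighting).
    negative_prompt="low quality, worst quality, monochrome, signature, text, logo",
    width=512,
    height=768,             # stay close to the native training resolution
    num_inference_steps=25,
    guidance_scale=7.0,
    clip_skip=1,            # diffusers counts differently: 1 = penultimate CLIP layer,
                            # roughly the web UI's "Clip skip: 2"
).images[0]
image.save("hassaku_sample.png")
```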
_____________________________________________________
Every LoRA that is built to work on anyV3 or the orangeMixes works on Hassaku too. Some can be found here, here, or on Civitai by lottalewds, Trauter, Your_computer, ekune, or lykon.
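As a hedged example, stacking such a LoRA on top of the model in diffusers might look like this; the LoRA filename and strength are placeholders:

```python
# Sketch, assuming an SD1.5-style LoRA file; filenames and strength are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some_anime_lora.safetensors")  # local file or Hugging Face repo id

image = pipe(
    prompt="masterpiece, best quality, 1girl, red eyes",
    negative_prompt="low quality, worst quality",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength, comparable to <lora:name:0.8> in the web UI
).images[0]
```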
_____________________________________________________
Black result fix (VAE bug in the web UI): use --no-half-vae in your command line arguments.
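For the AUTOMATIC1111 web UI on Windows, that flag typically goes into webui-user.bat (on Linux, the matching line lives in webui-user.sh); keep any arguments you already have:

```
rem webui-user.bat: keep whatever flags you already use on this line
set COMMANDLINE_ARGS=--no-half-vae
```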
I use an Eta noise seed delta of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly verified with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).
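For diffusers users, roughly equivalent samplers can be selected as in the sketch below, continuing from the pipeline in the earlier example; note that Eta noise seed delta is a web-UI setting with no direct diffusers counterpart:

```python
# Sketch: swap the scheduler on a diffusers pipeline (pipe) like the one above.
from diffusers import DDIMScheduler, DPMSolverSDEScheduler

# DDIM (the fast option):
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# DPM++ SDE Karras (DPMSolverSDEScheduler needs the torchsde package):
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```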