>>> UPLOADING/SHARING MY MODELS OUTSIDE CIVITAI IS STRICTLY PROHIBITED* <<<
The only authorized generative service websites are:
• Mage.Space = it helps me a lot to buy a new PC
• Shakker.ai = also in collaboration with me
This model is free for personal use and free for personal merging(*).
For commercial use, please contact me via Ko-fi or by email: samuele[dot]bonzio[at]gmail[dot]com
Leaderboard of my top 3 supporters:
https://ko-fi.com/samael1976/leaderboard
Aniverse - Pony XL - making the impossible possible!
This is a long-running project; I'd like to implement something new with every update!
The name is a blend of two words, Animation and Universe (plus a pun: Any + Universe -> Anyverse -> Aniverse).
-> If you enjoy using my model, press ❤️ to follow its progress and consider leaving a ⭐⭐⭐⭐⭐ review; it really matters to me!
Thank you in advance 🙇
And remember to publish your creations using this model! I’d really love to see what your imagination can do!
Recommended Settings:
An excessive negative prompt can make your creations worse, so follow my suggestions below!
Before applying a LoRA to produce your favorite character, try without one first. You might be surprised by what this model can do!
Stable Diffusion XL with only 3 GB of VRAM (NVIDIA GPU):
If you have an NVIDIA GPU but have trouble running XL because of low VRAM, try this version made by me and nuaion. It is a portable version, so it does not interfere with an A1111 already installed on your PC and can easily run in parallel.
It is also "movable" from PC to PC without problems.
This version does not include any pre-installed template, nor ADetailer, which we preferred to leave optional.
Let me know if it works for you and what you think!
My A1111 settings:
I run A1111 on my home PC with these settings:
set COMMANDLINE_ARGS= --xformers --skip-torch-cuda-test --no-half-vae (if you have low VRAM, try adding --medvram-sdxl or --lowvram: they can help, but they slow down image creation)
If you can't install xFormers (read below), use my Google Colab setting:
set COMMANDLINE_ARGS= --disable-model-loading-ram-optimization --opt-sdp-no-mem-attention --no-half-vae (if you have low VRAM, try adding --medvram-sdxl or --lowvram: they can help, but they slow down image creation)
My A1111 version: v1.9.3 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2
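For reference, here is a minimal sketch of how these flags can sit in webui-user.bat (the launcher in the A1111 root folder); the empty PYTHON/GIT/VENV_DIR lines are the stock defaults, so only change them if you know you need to:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers --skip-torch-cuda-test --no-half-vae
rem on low VRAM, append --medvram-sdxl (or --lowvram) to the line above

call webui.bat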
If you want to activate the xFormers optimization like on my home PC (how to enable xFormers):
In A1111, click the "Settings" tab
In the left column, click "Optimization"
In "Cross attention optimization", select "xformers"
Press "Apply settings"
Restart Stable Diffusion
If you can't install xFormers, use SDP attention, like on my Google Colab:
In A1111, click the "Settings" tab
In the left column, click "Optimization"
In "Cross attention optimization", select "sdp-no-mem - scaled dot product without memory efficient attention"
Press "Apply settings"
Restart Stable Diffusion
To emulate the NVIDIA GPU random number generator, follow these steps:
In A1111, click the "Settings" tab
In the left column, click "Show all pages"
Search for "Random number generator source"
Select the "NV" option
Press "Apply settings"
Restart Stable Diffusion
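For reference, A1111 persists these UI choices in config.json in the webui root folder. Below is a rough sketch of the two relevant entries, assuming the keys are named "cross_attention_optimization" and "randn_source" (that is how I remember them being stored); the exact value strings may differ from what your UI writes, so use this to check the file rather than to edit it by hand:

"cross_attention_optimization": "xformers",
"randn_source": "NV",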
If you use my models, install the ADetailer extension for your A1111.
Navigate to the "Extensions" tab within Stable Diffusion.
Go to the "Install from URL" subsection.
Under "URL for extension's git repository" put this link: : https://github.com/Bing-su/adetailer
Click on the "Install" button to install the extension
Reboot your Stable Diffusion
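If the "Install from URL" route gives you trouble, an alternative (just a sketch, assuming the standard A1111 folder layout) is to clone the repository into the extensions folder yourself and then restart:

cd path\to\stable-diffusion-webui\extensions
git clone https://github.com/Bing-su/adetailer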
How to install the Euler Max sampler:
In A1111, click the "Extensions" tab
Click "Install from URL"
Under "URL for extension's git repository" put this link: https://github.com/licyk/advanced_euler_sampler_extension
Once installed, click the "Installed" tab
Click "Apply and quit"
Restart Stable Diffusion
Now, at the end of the sampler list, you will have the new sampler.
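As with ADetailer above, this extension can also be installed manually by cloning it into the extensions folder (same assumption about the folder layout), then restarting:

cd path\to\stable-diffusion-webui\extensions
git clone https://github.com/licyk/advanced_euler_sampler_extension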