<p>This is a proof of concept for training an SDXL LoRA on SD1.5 knowledge. Trained on 'headshot of <strong>person</strong>', where <strong>person</strong> is a wildcard.<br /><br /><em>Update:</em> much smaller LoRA + cleaner dataset + trained longer.</p><p>(<strong>WIP</strong>) Mirror: <a target="_blank" rel="ugc" href="https://ntcai.xyz/sdxl/headshot">https://ntcai.xyz/sdxl/headshot</a></p><p><strong>Usage:</strong></p><p>ComfyUI workflow: <a target="_blank" rel="ugc" href="https://ntcai.xyz/comfyui/lorasimplesdxl.json">https://ntcai.xyz/comfyui/lorasimplesdxl.json</a></p><p><strong>Quick note:</strong></p><p><em>I just set up a Patreon at </em><a target="_blank" rel="ugc" href="http://patreon.com/NTCAI"><em>patreon.com/NTCAI</em></a><em> if you'd like to support my work or vote on what to work on next. There's also an NFT drop planned for supporters, but it won't have financial value; it's just for fun.</em></p><p><strong>The process is:</strong></p><ol><li><p>Run SD1.5 img2img over SDXL output. I wrote a tutorial on this here: <a target="_blank" rel="ugc" href="https://civitai.com/articles/1430/applying-sd-15-models-and-loras-to-sdxl-1024x1024-comfyui">https://civitai.com/articles/1430/applying-sd-15-models-and-loras-to-sdxl-1024x1024-comfyui</a> (<a target="_blank" rel="ugc" href="https://ntcai.xyz/articles/applying_stable_diffusion_15_models_loras_controlnet_to_sdxl_with_comfyui_and_img2img/">mirror</a>).<br />Create a large dataset with this technique; this LoRA was trained on 800 images (a rough Python sketch of the generation loop is included below the resources list).</p><p><strong>Hint:</strong> choose subject matter that SD1.5 knows well, and consider rejecting any distorted images.</p></li><li><p>Get your files into the form sd-scripts expects (a folder-layout sketch also follows below). This tutorial helped me:</p><div data-youtube-video><iframe allowfullscreen="true" src="https://www.youtube.com/embed/AY6DMBCIZ3A" width="640" height="480"></iframe></div></li><li><p>Train an SDXL LoRA with <a target="_blank" rel="ugc" href="https://github.com/kohya-ss/sd-scripts">https://github.com/kohya-ss/sd-scripts</a> on the generated images.</p><pre><code># Full command
CUDA_VISIBLE_DEVICES=0 accelerate launch --num_cpu_threads_per_process=2 "sdxl_train_network.py" --enable_bucket --pretrained_model_name_or_path="/ml2/trained/ComfyUI/models/checkpoints/stable-diffusion-xl-base-1.0/sd_xl_base_1.0.safetensors" --train_data_dir="/ml2/trained/sd-scripts/data/headshot" --resolution="1024,1024" --output_dir="/ml2/trained/ComfyUI/models/loras/Lora/sdxl" --logging_dir="./logs" --save_model_as=safetensors --output_name="headshot2" --network_alpha="1" --network_dim=32 --network_module=networks.lora --text_encoder_lr=0.0004 --unet_lr=0.0004 --lr_scheduler="constant" --train_batch_size="16" --max_train_steps="10000" --mixed_precision="bf16" --save_every_n_epochs="1" --save_precision="bf16" --caption_extension=".txt" --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_data_loader_n_workers="0" --bucket_reso_steps=64 --gradient_checkpointing --xformers --bucket_no_upscale</code></pre></li></ol><p>My hope is that this can help creators migrate their work. Happy training!</p><p><strong>Resources used:</strong></p><ul><li><p>Eyes LoRA: <a target="_blank" rel="ugc" href="https://civitai.com/models/72447/eyes-trained-with-no-data-trick">https://civitai.com/models/72447/eyes-trained-with-no-data-trick</a></p></li><li><p>SD1.5 Aniverse: <a target="_blank" rel="ugc" href="https://civitai.com/models/107842">https://civitai.com/models/107842</a></p></li><li><p>Person wildcards (CloneCleaner): <a target="_blank" rel="ugc" href="https://civitai.com/models/98622/clonecleaner-wildcards-increase-character-consistency">https://civitai.com/models/98622/clonecleaner-wildcards-increase-character-consistency</a></p></li></ul>
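<p>For step 1, the sketch below shows roughly what the dataset-generation loop looks like if you script it with the diffusers library instead of ComfyUI. It is a minimal approximation of the workflow in the linked article, not the exact setup used for this LoRA; the model IDs, the stand-in name list, the caption contents, and the output paths are all assumptions.</p><pre><code># Sketch: build a 'headshot of person' dataset by refining SDXL output with SD1.5 img2img.
# Assumes the diffusers library and a CUDA GPU; model IDs, names, and paths are placeholders.
import os
import random
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionImg2ImgPipeline

sdxl = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
sd15 = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

names = ["Alice Example", "Bob Example"]  # stand-in for the person wildcard list
os.makedirs("raw", exist_ok=True)

for i in range(800):
    prompt = f"headshot of {random.choice(names)}"
    base = sdxl(prompt, width=1024, height=1024).images[0]             # SDXL composition
    refined = sd15(prompt=prompt, image=base, strength=0.5).images[0]  # SD1.5 detail pass
    refined.save(f"raw/{i:04d}.png")
    with open(f"raw/{i:04d}.txt", "w") as f:
        f.write(prompt)  # assumption: the caption mirrors the generation prompt
</code></pre>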
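<p>For step 2, sd-scripts reads <code>--train_data_dir</code> as a parent folder of subfolders named like <code>1_headshot</code> (repeat count, underscore, name), each holding images with matching <code>.txt</code> caption files (because of <code>--caption_extension=".txt"</code>). Here is a minimal sketch of arranging the generated files that way, assuming the <code>raw/</code> output folder from the sketch above; the repeat count of 1 is an assumption.</p><pre><code># Sketch: copy generated image/caption pairs into the folder layout sd-scripts expects,
# e.g. data/headshot/1_headshot/0000.png + data/headshot/1_headshot/0000.txt
import shutil
from pathlib import Path

src = Path("raw")                       # output of the generation loop above
dst = Path("data/headshot/1_headshot")  # "1_" = one repeat per epoch (assumption)
dst.mkdir(parents=True, exist_ok=True)

for img in sorted(src.glob("*.png")):
    shutil.copy(img, dst / img.name)                                           # image
    shutil.copy(img.with_suffix(".txt"), dst / img.with_suffix(".txt").name)   # caption
</code></pre>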
Version Detail
SDXL 1.0
<p>Trained for 8025 steps at batch size 16 on the phrase 'headshot of <strong>person</strong>', where <strong>person</strong> is a wildcard. The prompts were run through the SDXL-to-SD1.5 img2img pipeline outlined here: <a target="_blank" rel="ugc" href="https://civitai.com/articles/1430/applying-sd-15-models-and-loras-to-sdxl-1024x1024-comfyui">https://civitai.com/articles/1430/applying-sd-15-models-and-loras-to-sdxl-1024x1024-comfyui</a></p><pre><code># Full command
CUDA_VISIBLE_DEVICES=0 accelerate launch --num_cpu_threads_per_process=2 "sdxl_train_network.py" --enable_bucket --pretrained_model_name_or_path="/ml2/trained/ComfyUI/models/checkpoints/stable-diffusion-xl-base-1.0/sd_xl_base_1.0.safetensors" --train_data_dir="/ml2/trained/sd-scripts/data/headshot" --resolution="1024,1024" --output_dir="/ml2/trained/ComfyUI/models/loras/Lora/sdxl" --logging_dir="./logs" --save_model_as=safetensors --output_name="headshot2" --network_alpha="1" --network_dim=32 --network_module=networks.lora --text_encoder_lr=0.0004 --unet_lr=0.0004 --lr_scheduler="constant" --train_batch_size="16" --max_train_steps="10000" --mixed_precision="bf16" --save_every_n_epochs="1" --save_precision="bf16" --caption_extension=".txt" --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_data_loader_n_workers="0" --bucket_reso_steps=64 --gradient_checkpointing --xformers --bucket_no_upscale</code></pre><p>Reducing <code>network_dim</code> reduced the file size.</p>
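<p>As a back-of-the-envelope check on those numbers, assuming each of the ~800 images is seen once per epoch (no kohya folder repeats, no gradient accumulation), 8025 steps at batch size 16 works out to roughly 160 epochs:</p><pre><code># Rough epoch estimate for this version (assumes 800 images, no repeats, no grad accumulation).
images, batch_size, steps = 800, 16, 8025
steps_per_epoch = images // batch_size   # 50 steps per epoch
print(steps / steps_per_epoch)           # ~160 epochs
</code></pre>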
Project Permissions
Use in TENSOR Online
As an online training base model on TENSOR
Use without crediting me
Share merges of this model
Use different permissions on merges
Use Permissions
Sell generated contents
Use on generation services
Sell this model or merges
Commercial Use