Model Training: FLUX
A short explanation of the settings I currently use to train my models.


Updated:

Base Model: 4.5 (Experimental) - Real_WoW (Flux+XL)
Network Module: LoRA
Network Dim: 64, Network Alpha: 32

Epochs: 10

Captions: aim for a budget of ~150 tokens per caption

Resolution: 1024x1024
Clip Skip: 2
Text Encoder LR: 0.00001
UNet LR: 0.001464
LR Scheduler: cosine_with_restarts (4 cycles, 0 warmup steps)
Optimizer: AdamW8bit
Gradient Accumulation Steps: 16
Shuffle Caption: On, Keep n tokens: 7
Noise Offset: 0.03
Multires Noise Discount: 0.3
Multires Noise Iterations: 12
conv_dim: 64, conv_alpha: 64
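
For reference, the settings above map onto a training command roughly like the following. This is a sketch, not an exact recipe: it assumes the kohya-ss sd-scripts CLI (`train_network.py` and `accelerate`), and the model path is a placeholder — substitute your own base checkpoint and dataset config.

```shell
# Sketch of a kohya-ss sd-scripts invocation matching the settings above.
# Model/dataset paths are placeholders; adjust for your environment.
accelerate launch train_network.py \
  --pretrained_model_name_or_path "/path/to/base_model.safetensors" \
  --network_module networks.lora \
  --network_dim 64 --network_alpha 32 \
  --network_args "conv_dim=64" "conv_alpha=64" \
  --max_train_epochs 10 \
  --resolution 1024,1024 \
  --clip_skip 2 \
  --text_encoder_lr 1e-5 \
  --unet_lr 1.464e-3 \
  --lr_scheduler cosine_with_restarts \
  --lr_scheduler_num_cycles 4 --lr_warmup_steps 0 \
  --optimizer_type AdamW8bit \
  --gradient_accumulation_steps 16 \
  --shuffle_caption --keep_tokens 7 \
  --noise_offset 0.03 \
  --multires_noise_discount 0.3 --multires_noise_iterations 12
```

Note that with gradient accumulation at 16, the effective batch size is 16× the per-device batch size, so the optimizer steps per epoch are correspondingly fewer.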