The LoRA was trained on just two transformer blocks at rank 32, which keeps the file size small without any loss of quality.
Because the LoRA is applied to only two blocks, it is also less prone to bleeding effects. Many thanks to 42Lux for their support.
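If you want to use this LoRA outside the TENSOR interface, a minimal sketch with the diffusers library might look like the following. The base checkpoint ID, file path, and sampling settings are assumptions for illustration, not values from this release.

```python
# Minimal sketch: loading a FLUX.1 LoRA with diffusers (paths/settings are placeholders).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed FLUX.1 base checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Load the rank-32 LoRA weights; since only two transformer blocks are
# affected, the adapter file is small and loads quickly.
pipe.load_lora_weights("path/to/this_lora.safetensors")

image = pipe(
    prompt="describe the image you want to generate",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```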
Run62
Version Detail
FLUX.1
Project Permissions
Use in TENSOR Online
As an online training base model on TENSOR
Use without crediting me
Share merges of this model
Use different permissions on merges
Use Permissions
Sell generated content
Use on generation services
Sell this model or merges
Commercial Use