Struggling with Flux prompts? Try my Flux prompt craft GPT bot
Using the GGUF versions? Try these workflows: GGUF T5 CLIP only | GGUF UNET + GGUF T5
Looking for a Latent node optimized for Flux? Bob's FLUX Latent Optimizer Node
For optimal results, I recommend 20-32 steps. (8-12 steps also works, but the quality is noticeably lower.)
Huge TY to @jurdn for helping me with the V4, V5, V6, and now V7 Q8 and Q4KS GGUFs!
Remember to also download the custom CLIP L! The current version of CLIP L is V5.
What's New in V8?
V8 is almost here, and it marks a major shift in the evolution of Nepotism Fux. The biggest change? A complete removal of NSFW content. This decision wasn’t taken lightly, but it's a necessary step to push creative boundaries responsibly and align with modern ethical standards in AI. Read the full update article here.
Here’s the breakdown:
NSFW Content Removed: The model is now entirely focused on safe-for-work content. Users can still apply NSFW LoRAs externally, but the core model will remain SFW, prioritizing ethical AI usage.
Mild Photorealism Bias: V8 leans towards producing photorealistic images—though it CAN still reach anime fairly easily (examples in gallery). While this might not suit every user, the realism and fine details it achieves are top-tier. If you want anime or stylized outputs and run into trouble, appropriate LoRAs can adjust the model's default style; that said, I was able to hit a very wide range of artistic and anime styles in my testing.
Increased Sensitivity to LoRAs: V8 is more responsive to LoRA weights, allowing you to achieve great results with smaller adjustments. Lowering the LoRA weight to 0.2 or 0.3 can often yield the desired effect without overshooting the output.
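Why does lowering the weight help? A LoRA's effect scales roughly linearly with its weight: the low-rank delta is multiplied by the weight before being merged into the base layer. A minimal sketch of that scaling (the function and matrices here are illustrative, not the model's actual code):

```python
import numpy as np

def apply_lora(base_weight, lora_A, lora_B, weight=1.0):
    """Merge a low-rank LoRA delta into a base weight matrix.

    The update (lora_B @ lora_A) is scaled by `weight` before being
    added, so dropping the weight from 1.0 to 0.2-0.3 shrinks the
    entire adjustment proportionally instead of overshooting.
    """
    return base_weight + weight * (lora_B @ lora_A)

# Illustrative shapes: a 4x4 layer with a rank-1 LoRA.
base = np.zeros((4, 4))
A = np.ones((1, 4))
B = np.ones((4, 1))

full = apply_lora(base, A, B, weight=1.0)
gentle = apply_lora(base, A, B, weight=0.25)
print(full[0, 0], gentle[0, 0])  # the 0.25 merge applies a quarter of the full delta
```

On a LoRA-sensitive base model like V8, that proportionality is why small weight reductions are usually enough to rein in an overcooked result.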
Performance:
Cold Load (No LoRAs): 1.03-1.08 s/it
Cold Load (With LoRAs): ~2.00-3.05 s/it, dropping to 1.03-1.30 s/it post-load.
(Tested on a 4080 GPU)
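These s/it figures translate directly into per-image times: total time is roughly steps × seconds-per-iteration, plus any fixed overhead. A quick back-of-envelope check (pure arithmetic, not a benchmark):

```python
def generation_time(steps, sec_per_it, overhead=0.0):
    """Estimate wall-clock seconds per image from iteration speed."""
    return steps * sec_per_it + overhead

# Warm run, no LoRAs, 20 steps at ~1.03-1.08 s/it:
low = generation_time(20, 1.03)
high = generation_time(20, 1.08)
print(f"{low:.1f}-{high:.1f} s")  # ≈ 20.6-21.6 s per image
```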
Why Nepotism Fux Stands Out:
- Balanced Precision: This merge uses FP8, producing images that closely resemble FP16 quality in a fraction of the time. Perfect for users with mid-range PCs who want Flux1Dev-level results without the resource drain.
- Efficiency: At 20 steps, generate high-quality images in just 16-22 seconds on a 4080 GPU, compared to the 80-150 seconds typical of Flux1Dev FP16.
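The resource savings from FP8 are mostly a weight-storage story: one byte per parameter instead of two for FP16. A rough estimate, assuming the commonly cited ~12B parameter count for the Flux.1-dev transformer (that figure is an assumption here, not something from this model card):

```python
def weights_gib(n_params, bytes_per_param):
    """Approximate weight memory footprint in GiB."""
    return n_params * bytes_per_param / 2**30

N = 12e9  # assumed ~12B parameters for the Flux transformer
print(f"FP16: {weights_gib(N, 2):.1f} GiB, FP8: {weights_gib(N, 1):.1f} GiB")
# FP16 ≈ 22.4 GiB, FP8 ≈ 11.2 GiB — roughly half the VRAM for the weights
```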