🚀 Fix ModelScope Flux Klein LoRA Training for Tensor.Art Uploads
If you trained a Flux LoRA on ModelScope (Diffusers format) and are facing a "Deployment Error" or "Invalid Model Structure" on Tensor.Art, this script is the definitive fix.
This script was built upon an initial script provided by the Tensor.Art developer team. Their script provided the essential foundation for layer renaming and basic tensor merging logic, which allowed us to understand the core requirements of the platform.
❓ The Problem & Evolution
While the initial developer script is excellent for standard model renaming, we discovered a specific challenge with LoRA adapters. ModelScope outputs separate Q, K, and V LoRA tensors, but simply "stacking" them does not produce the 12288 output dimension the Flux architecture expects from its fused QKV layer.
🛠️ The Solution: Block Diagonal Merging
To solve this, we evolved the initial logic. Through intensive "vibe coding" with Gemini and KIMI, we implemented a Block Diagonal Strategy. This method:
Merges the split Q, K, and V adapters from the Diffusers format.
Expands the Rank (e.g., from 64 to 192) using a sparse diagonal matrix.
Preserves your training quality exactly: the block-diagonal layout mathematically isolates each adapter's weights (no cross-terms between Q, K, and V) while the merged output matches the required 12288 dimension.
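The three steps above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the notebook's actual code: `merge_qkv_lora` is a hypothetical helper, and the dimensions (per-head output 4096, rank 64) are assumptions chosen so the fused shapes match the 192 rank and 12288 output figures from the text.

```python
import torch

def merge_qkv_lora(q, k, v):
    """Block-diagonally merge three LoRA (down, up) pairs into one fused pair.

    Each argument is a (down, up) tuple:
      down: (rank, in_dim)   -- the "lora_down" / A matrix
      up:   (out_dim, rank)  -- the "lora_up"  / B matrix
    """
    downs = [d for d, _ in (q, k, v)]
    ups = [u for _, u in (q, k, v)]
    # Fused down: stack the three rank-r adapters vertically -> (3*rank, in_dim)
    fused_down = torch.cat(downs, dim=0)
    # Fused up: sparse block-diagonal matrix -> (3*out_dim, 3*rank).
    # The off-diagonal zero blocks are what keep Q, K, and V isolated.
    fused_up = torch.block_diag(*ups)
    return fused_down, fused_up

# Illustrative dims (assumed): rank 64 -> fused rank 192, fused output 12288.
rank, in_dim, out_dim = 64, 4096, 4096
pairs = [(torch.randn(rank, in_dim), torch.randn(out_dim, rank)) for _ in range(3)]
fd, fu = merge_qkv_lora(*pairs)
print(fd.shape, fu.shape)  # fused down (192, 4096), fused up (12288, 192)

# The fused delta equals the three per-adapter deltas stacked, with no cross-talk.
delta = fu @ fd
assert torch.allclose(delta, torch.cat([u @ d for d, u in pairs], dim=0))
```

The final `assert` is the whole point of the strategy: because `fused_up` is block diagonal, the product `fused_up @ fused_down` reproduces each adapter's original `up @ down` exactly, so no training quality is lost in the merge.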
📝 How to use this script:
Step 1: Install Dependencies & Conversion Script (Block Diagonal)
Run this cell to install the dependencies (Torch & Safetensors) and load the block-diagonal conversion script.
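If you want to run the conversion outside Colab, the two dependencies named above can be installed like this (a sketch; pin versions to whatever the notebook actually uses):

```shell
# Install the two libraries the conversion script relies on (per Step 1).
pip install -q torch safetensors
```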
Step 2: Clone Repository
Enter your ModelScope repo_id (e.g., username/model_name). If your model is private, paste your access_token; otherwise leave it blank. Run the cell to download the model.
Step 3: The "Block Diagonal" Fix & Metadata Injection
Copy the path of the downloaded model into input_file and set a path for the final_file. Enter your metadata. Run this cell to automatically perform the Matrix Merging and save the fixed file.
Step 4: Download & Upload
Download the resulting file and upload it to Tensor.Art.
🔗 Script Link: https://colab.research.google.com/github/sevunk/fixing_layer_name/blob/main/merge_qkv_flukklein_diff.ipynb
Special thanks to the Tensor.Art developers and community, and to intensive Gemini and KIMI debugging sessions, for cracking the Flux matrix dimension requirements!
Final Hope:
Let's hope the Tensor.Art developers update their backend to support Diffusers-style Flux LoRAs natively soon, so we don't have to convert them manually anymore! :D Until then, let this script do the heavy lifting for you.

