YUME
Please help me, I beg you
Discord: hirayama.ai
302 Followers · 43 Following · 99.9K Runs · 17 Downloads · 6.2K Likes · 1.5K Stars
AI Tools

Models
- The Herta (大黑塔) (マダム・ヘルタ) - Honkai Star Rail-Illustrios v0.1.2 (LORA Illustrious, Exclusive, Updated)
- Anime Style for Illustrious -A_Style2B (LORA Illustrious, Exclusive)
- Citlali - Genshin Impact - Illustrios (茜特菈莉)(シトラリ)-Illustrios v0.1 (LORA Illustrious, Exclusive)
- Shiny - NovelAi - NAI Style-NAI_A2 (LORA Illustrious, Exclusive)
- Soft NAI style - NovelAI-Illustrios v0.1.1 (LORA Illustrious, Exclusive)
- Burnice White - Zenless Zone Zero - Illustrios-Illustrios v0.1 (LORA Illustrious, Exclusive)
- Aglaea - Honkai Star Rail - 阿格莱雅-IL v0.1.1 - e12 (LORA Illustrious, Updated)
- Anime Style Alter-Anime Style v2 (LORA Illustrious)
- Blue Archive anime style-Illustrios 0.1.1 (LORA Illustrious)
- A111 Style -111A 16 (LORA Illustrious)
- Ido 9eebk-2025-01-07 00:25:59 (LORA Illustrious)
- Id5899-MINIMALIST (LORA Illustrious)
- 2453dhywz-NAI3.5 (LORA Illustrious)
- Hu Tao - Cherries snow Laden - Genshin Impact -Pony v1.0 (LORA Pony)
- Hu Tao - Cherries snow Laden - Genshin Impact -Illustrios v0.1.1 (LORA Illustrious)
- Momo Ayase - DAN DAN DAN - (綾瀬 桃)-Illustrios v0.1.1 (LORA Illustrious)
- Astarion - Baldurs Gate 3 - PDXL-Pony - Alpha 1 E16 (LORA Pony)
- Futuristic Scenarios-Illustrios v0.1 (LORA Illustrious)
- Lingsha - Honkai Star Rail - (霊砂)(영사) -Illustrios v0.1.1 (LORA Illustrious)
- Pixel Art game design-Illustrios v0.2 (LORA Illustrious)
Workflows

Articles

Model Training - Training an Illustrious Model Character
This guide will be fairly brief and simple; in the future, I plan to dedicate a complete guide to training a model from scratch. For this process, we'll assume you already have a prepared dataset.

WHAT DO WE NEED?
- A dataset (I'll use Lingsha from Star Rail this time)
- Credits (at least 300)
- Patience

FIRST - OPEN ONLINE TRAINING: Open TensorArt's online training: HERE:

SECOND - UPLOAD DATASET: Once it's open, upload your dataset. You can do this in two ways: as a ZIP file or image by image. (More experienced users can attach the tags inside the ZIP.)
- Image by image
- ZIP file: click "Upload dataset"
NOTICE: free accounts have a limit of 100 images.

THIRD - TAGGING: Once the dataset is uploaded, add the tags automatically:
1. Open "Auto Labeling".
2. Among the options, select the labeling algorithm. You'll usually use one from Waifu Diffusion (WD); I recommend wd-swinv2-tagger-v3.
3. Click confirm and wait.
Afterwards, check that all the tags match the image shown. You can delete, edit, or add tags image by image, or use the "Batch Add Labels" function.

FOURTH - PARAMETER SETTINGS: I'll keep this brief rather than explaining every detail:
- Network Module: LoRA
- Use Base Model - the model you will train on. We'll use Illustrious-XL - v0.1.
- Trigger Words - this word should be present in all tagged images. Always add it first, and make sure it is different from the usual tags, e.g. Lingsha203, lingshaHSR2024, Liiiingsha2. Why?
Because you don't want it to conflict with any character already in the Illustrious base model and end up with a mix of things. I'll use: Lingsha_HSR.
- Image Processing Parameters - I used the settings below. The trick is to reach roughly 3,000 total steps while keeping the cost below 300 credits.
- Repeat: 6 (example based on a dataset of 100 images). This is the main priority; adjust it to how many images you have, but keep the repetition low.
- Epoch: 6 (example based on a dataset of 100 images).
- Save Every N Epochs: 1. This controls how often a checkpoint is saved. The maximum shown is 10, so if you train 20 epochs, only the last 10 will appear; if you don't want just the last 10, set the value to 2 to keep a checkpoint every two epochs.
- Training Parameters - user preference. For Seed and Clip Skip, the default settings normally avoid problems. Gradient Accumulation Steps is not needed.
- Label Parameters:
  - Shuffle Caption: true. Activate it for variety and to avoid static learning of tag order.
  - Keep N Tokens: 1. Necessary if you use a trigger word; the default is 1.
- Batch Size: 2. This means processing two images at once, which uses more GPU memory but roughly halves the training time compared to 1.
- Sample Image Settings - these are previews generated at each checkpoint, which help you see how the training is progressing. Set a general prompt for the character, and leave the negative prompt empty or write basic things like low quality, text, blurry.
  - Prompt: Lingsha_HSR, 1girl, lingsha, hair ornament, red limbs, single wrist cuff, thigh strap, black heels, sitting, crossed legs, finger to mouth, smile, pink rabbit, smoke
  - Negative Prompt: (empty)
  - Sampler: euler_ancestral

FIFTH - START TRAINING: Start the training and wait for it to progress. As the model learns, the epochs will be shown along with a preview. Usually, the first epoch shows the least amount of learning.
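As a rough sanity check on the Repeat, Epoch, and Batch Size values above, you can estimate the total step count before spending credits. This is a minimal sketch under the assumption that steps are counted as images × repeats × epochs (the exact accounting on TensorArt's side may differ); the function name is mine, not part of the platform:

```python
# Rough step-count estimate for LoRA training settings.
# Assumption: total steps ~= images * repeats * epochs, with batch size
# dividing the number of optimizer steps. Treat as a sanity check only.

def estimate_steps(num_images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    """Return the approximate number of optimizer steps."""
    total_passes = num_images * repeats * epochs
    return total_passes // batch_size

# Example from this article: 100 images, Repeat = 6, Epoch = 6.
print(estimate_steps(100, 6, 6, batch_size=1))  # raw image passes: 3600
print(estimate_steps(100, 6, 6, batch_size=2))  # optimizer steps at batch 2: 1800
```

With 100 images this comfortably exceeds the ~3,000-step target mentioned above; if your dataset is smaller, raise Repeat or Epoch until the product lands in the same range.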
If you see that something is going wrong or the first epoch is a disaster, simply stop the training; the credits will be refunded based on the percentage of training completed.

SIXTH - CHOOSING THE BEST EPOCH: Once training is complete, choose the epochs you think are best. You can test each one. To do this:
1. Select "Publish", or "Download" if you want to save it to local storage (recommended).
2. Create a new project.
3. Fill in the fields according to your trained model.
4. Click "Create"!
You can skip reviewing each item for now and go directly to PUBLISH. Don't worry, you can edit your project later!
And that's it: just wait for the deployment and run tests. If you're not satisfied with the epoch you chose, simply try another one and publish it in the same project!
I hope this helps you; sorry for my English. In the future, I will expand on how to train a character by building the dataset itself!
AI TOOL - Create an easier way to use your Workflow
On this occasion, I'd like to talk a little about the importance of using AI Tools to make our workflows easier to use. ComfyUI can be quite overwhelming for less experienced or casual users, but thanks to AI Tools this is no longer a problem: users can now work without being overwhelmed by the sheer number of nodes and parameters normally seen in a workflow.

How do I create an AI Tool? Let's get started!

Assuming you already have a well-designed and functional workflow, it's time to turn it into an easy-to-use, intuitive AI Tool.

Step 1: Once your workflow is complete, go to the publish section. From the two options available, select the one that says "AI TOOL."

Step 2: The work environment will change. I recommend filling out everything up to the "Description" section.

Step 3: In the "User-configurable Settings" section, we'll edit the fields so that our AI Tool exposes the workflow parameters we want in the interface. For example, a basic step is always to include a "prompt" box. To add it, simply click "Add".
Then search for the name of the node where you typically write the text (CLIPTextEncode - text), locate it, and add it. Repeat the process, choosing whatever will be truly useful in the AI Tool interface. Normally, you would add: CheckpointLoaderSimple - ckpt_name, KSampler - steps, KSampler - cfg, KSampler - sampler_name, among others.

Step 4: Once you've selected what to add to the interface, close the window and you'll see a list of the parameters you added. You can rename them by clicking the "pencil" icon. Some fields allow further customization; for instance, with "Prompt" you can add a custom example or radio buttons to make it more intuitive.

Step 5: Finally, it's time to set a cover image and showcase your AI Tool. Once you're satisfied, click "PUBLISH."

That's it! You've simplified your workflow into an easy-to-use AI Tool for everyone. If something doesn't feel right, you can always go back and edit the AI Tool later.

This is a quick guide for anyone who wants to use AI Tools to streamline their workflow. Thank you for reading! I apologize for any translation errors; I'm Japanese, and English is not my strong suit. Take care of yourself, and Merry Christmas!
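The node/field pairs picked in Step 3 (CLIPTextEncode - text, KSampler - steps, and so on) correspond to literal inputs in the workflow's exported JSON. As a hedged illustration, here is a small sketch that lists such candidate fields from a ComfyUI API-format workflow (the `{"<id>": {"class_type": ..., "inputs": {...}}}` layout is ComfyUI's export format; the function name and the embedded mini-workflow are made up for the example, and TensorArt's own exporter may differ):

```python
import json

# Sketch: list (class_type, input_name) pairs from a ComfyUI API-format
# workflow JSON, i.e. the candidate fields you might expose in an AI Tool.

def list_configurable_inputs(workflow: dict) -> list:
    pairs = []
    for node in workflow.values():
        for input_name, value in node.get("inputs", {}).items():
            # Links to other nodes are [node_id, slot] lists; keep only
            # literal values (strings, numbers), which a user could edit.
            if not isinstance(value, list):
                pairs.append((node["class_type"], input_name))
    return sorted(set(pairs))

# Hypothetical three-node workflow for illustration.
workflow = json.loads("""
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "model.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "1girl, smile", "clip": ["1", 1]}},
  "3": {"class_type": "KSampler",
        "inputs": {"steps": 25, "cfg": 7.0, "sampler_name": "euler_ancestral",
                   "model": ["1", 0], "positive": ["2", 0]}}
}
""")
for class_type, name in list_configurable_inputs(workflow):
    print(f"{class_type} - {name}")
```

Running this on the sample prints exactly the kind of entries named above (CheckpointLoaderSimple - ckpt_name, CLIPTextEncode - text, KSampler - cfg/sampler_name/steps), which is a quick way to inventory what's worth exposing before clicking through the UI.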
TAG your dataset with custom tags for TensorArt - HALLOWEEN2024
In this article, I will teach you a few things about how to add and customize the tags for your dataset before going into online training on TensorArt. Why? Because TensorArt's online training is somewhat limited when it comes to handling tags.

REQUIREMENTS:
- The images for which you will generate tags.
- The program Dataset Processor - All-in-one Tools (download it).
- BooruDatasetTagManager, since it is more intuitive for managing tags.
- Patience.
- Need a GPU? No. (I only have a Ryzen CPU.)

PROCEDURE:
1. First, gather your images and store them in a folder, which we will use later. (This time, I will use sample images.)
2. Having downloaded Dataset Processor - All-in-one Tools, open it and navigate to the "Generate Tags" section.
3. Where it says "Select input folder," click and specify the path to your images. In the "Select output folder" field, specify the same path.
4. In the "Select the auto tagger mode" section, select the WD14v2 or WDv3 model and set the "Threshold for predictions" to between 0.25 and 0.5.
5. Scroll down and click "GENERATE TAGS." (The first time, it will take a while because the necessary models are downloaded; afterwards, it is immediate.)
When it has finished generating the tags, it will be marked as "FINISHED." That covers the tag-generation process.

To verify that everything was done correctly, go to your image folder: each image is now accompanied by a TXT file containing the generated tags. Open one and you will see the generated TAGS.

PERFECT! NOW YOU HAVE ALL THE TAGS FOR EACH IMAGE! But what if you want to edit them? You won't open each TXT file one by one; instead, use the following tool for quick management. Open BooruDatasetTagManager, which I assume you have already downloaded.
Here, simply load your dataset (the folder with the images and the TXT files), and it will load all the tags associated with the images, allowing you to edit them in bulk or individually. You can add, delete, or replace tags; you can even generate tags (though the generator doesn't work for me). Now it's just a matter of patience: trim the tags you don't need or that hurt the quality of the dataset. When you have finished, simply save the changes (or press CTRL + S) and close the program.

Next, go to your dataset folder and compress it into a ZIP file. Then open the online training in TensorArt and upload the ZIP. If you don't know how to upload a ZIP file, simply click where it says "or upload dataset" and select your ZIP file. Finally, when your dataset finishes uploading, your images will be loaded together with your customized tags. From here, you only need to adjust the parameters and train! You can also review the tags using TensorArt's own function.

DONE! It's a longer process, but it lets you modify the tags in a more personalized way, which ultimately improves the quality of your LoRA training. I HOPE THIS GUIDE HELPS YOU CREATE TAGS LOCALLY; it's not that complex if you follow it step by step. I apologize for my poor command of English, as I am Japanese. HAPPY HALLOWEEN!
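The bulk-editing and packaging steps above can also be scripted when the dataset is large. This is a minimal sketch, assuming the common layout of one `.txt` caption per image with comma-separated tags; the folder name `lingsha_dataset`, the trigger word, and the tag list are placeholders for your own values:

```python
import zipfile
from pathlib import Path

# Sketch: drop unwanted tags, prepend a trigger word to every caption
# file, then zip the dataset for upload to online training.

def clean_captions(dataset_dir: Path, trigger: str, drop: set) -> None:
    """Rewrite each .txt caption: trigger word first, unwanted tags removed."""
    for txt in dataset_dir.glob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
        tags = [t for t in tags if t and t not in drop and t != trigger]
        txt.write_text(", ".join([trigger] + tags), encoding="utf-8")

def zip_dataset(dataset_dir: Path, out_zip: Path) -> None:
    """Pack images and captions (flat, no subfolders) into a ZIP file."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(dataset_dir.iterdir()):
            if f.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp", ".txt"}:
                zf.write(f, arcname=f.name)

dataset = Path("lingsha_dataset")  # hypothetical folder of images + .txt captions
if dataset.is_dir():
    clean_captions(dataset, trigger="Lingsha_HSR", drop={"watermark", "signature"})
    zip_dataset(dataset, Path("lingsha_dataset.zip"))
```

Putting the trigger word first matters because of the Keep N Tokens = 1 setting from the training article: the first token is pinned even when Shuffle Caption is on.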