Tensor.Art

Creation

Get started with Stable Diffusion!

Spark

Every Image is a Journey!

ComfyFlow

An amazing ComfyUI experience!

Host My Model

Share my models, get more attention!

Online Training

Make LoRA Training easier!

Models

CHECKPOINT Illustrious

Illustrious-XL-V2.0-Stable-base

25K 1.7K
CHECKPOINT Lumina

Illustrious_Lumina_2B-ema

654 307
LORA Flux
EARLY ACCESS

Manga Neo arina-Manga

373 141
LORA Flux
EARLY ACCESS

PixelCraft-v1

676 304
LORA Flux
EARLY ACCESS

ClayGen style-1

427 145
CHECKPOINT Illustrious
UPDATED
EXCLUSIVE

🎨Illustrious - AnimeMaster By NukeA.I -Acapv4

1.3M 4K
LORA HunyuanDiT
EXCLUSIVE

TQ - HunYuan More Beautiful Detail-v.1.7

17K 164
LYCORIS Flux
EARLY ACCESS

Macro Miniature World - ABM-1.0

1.8K 169
LORA Pony

Samdoesarts Pony-pony

23K 1.2K
LORA Flux
EXCLUSIVE

Nezha哪吒-2.0

1.7K 208
LORA XL
EXCLUSIVE

Azazeal's Voodoo Anime-Azazeal's Voodoo Anime

191K 1.6K
LORA Flux
EARLY ACCESS

Macro Photography Aperture-MACROSCOPIC

25K 452
LORA SD3
EARLY ACCESS

KC_ISO-2024-08-18 16:02:43

345 48
LYCORIS Flux
EXCLUSIVE

Dynamic Manga Frames Panels-Prodigy Manga Artist

6.5K 406
LORA Flux
EXCLUSIVE

Pixomeda-Pixomeda Style

156K 1.3K
LORA Illustrious
EXCLUSIVE

Quantum Lotus-v1

2.1K 235
LORA Flux

The Wizard's Vintage Comic Book Cover-v1.0

5.8K 254
LORA Flux

Cozy Spooky Style [Flux Style LoRA]-v1.0

5.7K 211
LORA Illustrious
EXCLUSIVE

Grayscale Prince-v1

914 157
LORA SD3
EXCLUSIVE

Portrait Muse SD3-V1

196 24
LORA Flux
EARLY ACCESS

Real Stickers - Mirol-v1

759 93
LORA Flux

Contemporary Ink Wash 1 CpD6C5 2025-01-30 21:38:03-Ep 12

826 160
LORA HunyuanDiT
EXCLUSIVE

Manga style illustration DiT-V1

1.5K 120
LORA Flux
EXCLUSIVE

hand mecha2-2024-10-06 01:22:52

3.7K 221
LORA Illustrious
EXCLUSIVE

Lunar Parasol-v1

701 91

Workflows

Articles

Tutorial of ACG to OnlyFans style

The difference from other ACG 2 REAL tools lies in more realistic facial features and photographic quality. The limitation is that it is not suitable for overly complex movements, and hands might occasionally have issues.

How to use:
Input your photo.
Select a face model (you can check the image details in related posts to preview each numbered face).
GO. *If you add a custom prompt, please end it with a comma.

You'll receive two images: one is the standard result like other ACG2REAL tools (used as guidance for the next image), and the other is this tool's specialty image. There is also a version that consumes fewer credits here; its control is not as precise, and its reproduction of complex movements is not as good. If there is an error in the hands, you can try to fix it with this tool. Hope you like it!
101
9
Prompting: Hairstyles

Hey all! I made a visual example of the different hairstyles you can use for your prompts. Keep in mind that you can pair the hairstyles with the length of the hair too! Shoutout to annnnnie for the hairstyle list!
389
47
🔮🔮Anime to Real Life: My Free/Paid AI Tool for Converting 2D to Realistic🔮🔮

Ever wondered how your favorite anime character would look in real life? I built a set of AI tools that transform ACG (Anime, Comics, Games) images into photorealistic portraits, perfect for artists, cosplayers, or curious fans! 🔮

Access Tiers (Simple & Transparent)

Free Tier (click to use):
A2R Free V1: low credit consumption; not suitable for multiple people or complex compositions.
A2R Free V2: more realistic and cosplayer-like; details may differ.

Pro, fewer credits (click to use):
A2R Pro L V1: advanced version of A2R Free V1.
A2R Pro L V2: very real, but the details will change.

Pro, more credits (click to use):
A2R Pro M V1: a version with perfect details.
A2R Pro M V2: the original version.
A2R Pro M V3: the most faithful detail restoration.
A2R Pro M V4: the most realistic face, more like a cosplayer.

Upscaling and repair

Sometimes the generated image has misplaced hands or feet, or you may want to output at a larger size. I made two free tools to help with these adjustments (click to use): a hand and foot repair tool and an image upscale tool.
112
6
HairStyle Prompts Sharing

Beginning

Hairstyles are so diverse and ever-changing. Here are 100+ effective hairstyle prompts for you! The tested model is ✨WAI-NSFW-illustrious-SDXL✨, and these prompts also perform well on most models. If you want to watch the video tutorial (more efficient and intuitive), visit 👉 https://www.instagram.com/p/DH59PgJS-5A/ More prompt sharing will be released. Stay tuned! Welcome to follow the official TensorArt Instagram 💗👉 https://www.instagram.com/tensor.art/ This is where you can be the first to receive our shared videos. 😉

Prompts

Loose Hair
wavy hair , ウェーブヘア|波浪卷 卷发
curly hair , 巻き毛|小卷 卷发
messy hair , メッシーヘア|乱乱的头发
straight hair , ストレートヘア|直发
single sidelock , シングルサイドロック|单边发放下
asymmetrical sidelocks , アシンメトリーサイドロック|不对称鬓角
single hair intake , シングルヘアインテーク|单发发旋
hair intakes , ヘアインテーク|发旋
bob cut , ボブカット|波波头
inverted bob , 逆ボブ|反向翻转波波头
flipped hair , フリップヘア|头发下侧翘起来
wolf cut , ウルフカット|狼尾
hime cut , 姫カット|公主姬
mullet , マレット|鲻鱼头
half updo , ハーフアップ|半扎发

Tails
ponytail , ポニーテール|马尾
side ponytail , サイドポニーテール|侧马尾
high ponytail , ハイポニーテール|高马尾
folded ponytail , 折り返しポニーテール|折叠马尾
short ponytail , ショートポニーテール|短马尾
two side up , ツーサイドアップ|双侧扎发
one side up , ワンサイドアップ|单侧扎发
uneven twintails , 不揃いなツインテール|不齐双马尾
twintails , ツインテール|双马尾
low twintails , ローツインテール|低双马尾
short twintails , ショートツインテール|短双马尾
low-tied sidelocks , ロータイサイドロック|低扎鬓角
multi-tied hair , マルチタイドヘア|多重扎发

Braid
crown braid , クラウンブレード|皇冠麻花辫
folded braid , 折り込みブレード|折叠麻花辫
french braided ponytail , フレンチブレードポニーテール|法式辫马尾
french braided twintails , フレンチブレードツインテール|法式辫双马尾
half up braid , ハーフアップブレード|半扎麻花辫
low-braided long hair , ローブレードロングヘア|低麻花辫长发
side braid , サイドブレード|侧麻花辫
single braid , シングルブレード|单麻花辫
twin braids , ツイン編み込み|双麻花辫

Bun & Drills
bun with braided base , 編み込みベースお団子|麻花辫编成底部丸子头
double bun , ダブルおだんご|双丸子头
cone hair bun , コーンおだんご|锥形丸子头
donut hair bun , ドーナツおだんご|甜甜圈丸子头
bow-shaped hair , ボウシェイプヘア|蝴蝶结形发型
drill hair , ドリルヘア|钻头卷发
twin drills , ツインドリル|双钻头卷发
ringlets , リングレット|卷环发
drill sidelocks , ドリルサイドロック|钻头卷鬓角
hair rings , ヘアリング|发环
single hair ring , 一重ヘアリング|单层发环

Bangs
bangs , 前髪|刘海
bangs pinned back , ピンで留めた前髪|用发夹固定的刘海
blunt bangs , 鈍い前髪|齐刘海
braided bangs , 編み込み前髪|编辫刘海
crossed bangs , クロス前髪|交叉刘海
choppy bangs , 不揃い前髪|参差刘海
diagonal bangs , 斜め前髪|斜刘海
hair over eyes , 目にかかる髪|遮眼发
hair over one eye , 片目にかかる髪|遮一只眼发
hair between eyes , 目の間の髪|眼间发
parted bangs , 分け目のある前髪|分缝刘海
curtained hair , カーテンヘア|帘式刘海
wispy bangs , 薄い前髪|稀疏刘海
short bangs , 短い前髪|短刘海
swept bangs , 流した前髪|侧扫刘海

Ahoge
ahoge , アホ毛|呆毛
heart ahoge , ハートアホ毛|心形呆毛
huge ahoge , 巨大アホ毛|巨大呆毛
antenna hair , アンテナヘア|天线发
heart antenna hair , ハートアンテナヘア|心形天线发

Others
nihongami , 日本髪|日本髮型
pointy hair , 尖った髪|尖发
spiked hair , スパイクヘア|刺猬头
buzz cut , バズカット|平头
crew cut , クルーカット|短平头
flattop , フラットトップ|平顶头
undercut , アンダーカット|剃鬓侧削发
cornrows , コーンロウ|玉米鬃辫
dreadlocks , ドレッドヘア|脏辫
pompadour , ポンパドール|蓬巴杜发型
hair slicked back , 髪を後ろになでつける|油头
pixie cut , ピクシーカット|精灵短发

Special Thanks
Some prompts shared this time are sourced from @hypersankaku2. Follow them on Twitter to support! https://x.com/hypersankaku2
187
10
Gesture Prompts Sharing🫶

Beginning

Hand gestures have always been very challenging to control. I want to share some highly effective prompts for controlling gestures. The tested model is ✨🎨Illustrious - AnimeMaster✨, and these prompts perform well on most models. At the end of the article there are demonstration images showing the effects. If you want to watch the video tutorial (more efficient and intuitive), visit 👉 https://www.instagram.com/p/DIeFERbygkO/?hl=zh-cn I'm Annie, and this is a Prompts Series by TA Official; more prompt sharing will be released gradually in the future. Stay tuned! Welcome to follow our official Instagram 💗👉 https://www.instagram.com/tensor.art/ This is where you can be the first to receive our shared videos. 😉

Prompts

Fingers
index_finger_raised 、人差し指を上げる|举起食指
shushing 、しーっ|嘘🤫
pinky_out 、小指を外側に出す|翘小拇指
thumbs_down 、親指を下に向ける|差评👎
thumbs_up 、親指を立てる|点赞👍
double_thumbs_up 、二本親指を立てる|双手拇指点赞
finger_gun 、指銃|手枪
double_finger_gun 、二本指銃|双手手枪
two_finger_salute 、二本指で敬礼する|二指敬礼
finger_frame 、指フレーム|手指比取景框
spread_fingers 、指を広げる|分开手指
x_arms 、腕を組む|手臂比叉
x_fingers 、二本指|手指比叉
fidgeting 、そわそわする|食指相对
steepled_fingers 、尖った指|手指金字塔

Victory Sign
v 、ピース|比耶✌️
double_v 、二本V|双手比耶
v_over_eye 、vを目にかざす|眼睛前比耶
v_over_mouth 、vを口にかざす|嘴巴上比耶
gyaru_v 、ギャルのv|反手比耶 辣妹式

Fist
power_fist 、パワーフィスト|挥拳
fist_bump 、拳を突き合わせる|碰拳
fist_in_hand 、手に握りしめた拳|紧握拳头
clenched_hands 、握りしめた手|紧握双手

Pointing
pointing 、指さし|指
pointing_at_self 、自分を指さす|指自己
pointing_at_viewer 、視聴者を指差す|指向关注
pointing_down 、下を向いている|向下指
pointing_forward 、前を向いている|向前指
pointing_up 、上を向いている|向上指

Covering
covering over face 、顔を覆う|遮住脸
covering over eyes 、目を覆う|遮住眼睛
covering over mouth 、口を覆う|遮住嘴
covering over ears 、耳を覆う|遮住耳朵

Others
cupping_hands 、両手をすくめる|杯状手
own_hands_clasped 、自分の手|握住自己的手
money_gesture 、お金のジェスチャー|金钱手势
ok_sign 、OKサイン (very similar to money_gesture)|比OK
twirling_hair 、髪をくるくる回す|玩头发
shadow_puppet 、影絵|影子手偶
fox_shadow_puppet 、キツネの影絵|狐狸影子手偶
pinching_gesture 、つまむジェスチャー|捏合手势
reaching 、手を伸ばす|伸手
waving 、手を振る|挥手
beckoning 、手招きする|招手
340
28
🎉 Egg hunting with AI!? Tensor.Art's "AI Tool Easter" event is live 🥚🐰

Hello! This post introduces "AI Tool Easter - GPT-4o Revival", a unique spring event currently running on the AI image generation community Tensor.Art. Use the trending GPT-4o-style filters and collect cute Easter eggs! Here is a clear summary of how to participate, the rules, and the rewards. 🌸

🗓️ Event overview
Event name: AI Tool Easter - GPT-4o Revival
Period: April 17 to April 24, 2025 (UTC); in Japan time, 9:00 AM April 17 to 9:00 AM April 24.

🎯 What kind of event is it?
It is a playful event where publishing GPT-4o-related AI Tools and posts with the designated tag (#GPT4o) earns you various "Easter eggs" (rewards) ✨. Clear the target tasks and you receive a random Easter egg 🎁. The contents range from credits and GPU Fund to Pro plans!

🥚 Egg task list
Publish an AI Tool with the #GPT4o tag: Poached Egg (guaranteed)
Post using a #GPT4o-tagged AI Tool: random Easter egg
Your AI Tool is used by 30 or more people: random Easter egg
Your AI Tool reaches the Top 10 in "Most Run": random Easter egg
*All AI Tools must be img2img and expose at most two parameters.

🎁 What's inside the eggs?
Poached Egg: $1 of GPU Fund (guaranteed on publishing)
Golden Egg: $5 of GPU Fund (25%)
Green Egg: 777 credits (30%)
Blue Egg: 7-day Pro account (45%)
💡 Easter eggs are distributed at random and received as badges!

✅ Rules and notes
Always add the #GPT4o tag to posts and tools (forgetting it disqualifies the reward).
Inappropriate content (NSFW, celebrity images, violence, low quality) is not eligible.
Accounts with the default avatar or nickname will not receive rewards.
Cheating results in immediate disqualification.
Rewards are distributed from the next business day after task completion, and at the latest within 3 business days after the event ends.

🐣 Popular filter ideas
Examples introduced for this event include LEGO-style filters, Snoopy style, Simpsons style, and "Me in AirPods" (a character inside AirPods). Other original styles are of course welcome!

✨ Summary
Just publish AI Tools and you can earn rewards. This event is perfect for anyone who wants to try building a fun filter! 🎨 If you haven't used Tensor.Art yet, this Easter is a great chance to join. Your tool might just become popular!
👉 Join here: 🔗 https://tensor.art/blackboard/AIToolEaster?tab=info
Introduction to ControlNet.

This article explains what ControlNet is and how you can use it. It includes an example with simple instructions that you can run yourself right now, no prerequisites; it should take about 10 minutes. Just read it (trust me bro). If you want to make pictures first, you can skip the general info below and look for the pictures.

Not many people on tensor.art use ControlNet. Pictures are generated in a gacha-game style: one writes a prompt and hopes for the best. If the result is not satisfactory, it's either another try with the same parameters or a prompt update. There are also checkpoints, LoRAs, and other generation parameters, a lot of knobs that affect generation in peculiar and largely unpredictable ways. There is some control over the process indeed, but it relies heavily on luck. There's gacha-esque fun in this process. AI generation is like a box of chocolates.

Sometimes a picture turns out almost perfect. It can be awesome in every way except for having six fingers on someone's hand, and there is no way to fix that with a prompt. Ars Technica reported that the latest OpenAI image generator allows prompt iteration on a picture, so such problems may eventually get resolved. For example, you may be able to generate something and then ask for corrections by saying something like "good, but let it be sunset, and I want the second girl from the left to be blonde, go". Eventually, prompts may become the only tool an AI artist really needs to build a scene. For now, prompts are rather limited.

ControlNet doesn't fix that; it's another knob to use. But it allows you to control many aspects of image generation directly, spatially, like "the sword is right here, not just somewhere in the picture". You can actually imagine a final picture in your head and work toward it. If you can make a rough sketch, you are halfway there. You can iterate, keep the parts you like, and correct those you don't. It is still a gacha game, but your chances of getting an SSR are much higher.

It also lets you shoot for much more complex scenes. There is absolutely nothing wrong with generating hundreds of pretty-women portraits if it makes you happy. And I mean it; fun is precious, it is never a waste of time. But if you get bored with it, there are options.

ControlNet uses an image as an additional or, in a few cases, the only prompt: a sketch, a pose diagram, or an edge/depth/normal map. "A picture is worth a thousand words." A simple doodle can convey a desired composition more efficiently than any prompt. Also, models don't follow prompts all that well, and perfectly crafted prompts fail most of the time.

ControlNet works by attaching small, specialized neural networks called "adapters" to a pre-trained diffusion model. These adapters are trained to interpret specific types of visual input and influence the generation process accordingly, without retraining the whole model. This allows the base model to remain flexible and powerful while giving users a way to "steer" the output using visual cues rather than just words.

ControlNet is an open-source project based on open research publications. The main contributors to both seem to be from China. Kudos to China. It was initially developed for Stable Diffusion 1.5, then adapted for SDXL, and it works for derived models and checkpoints. There is no ControlNet for SD 3.0 or FLUX.1 as far as I know. Personally, I use mostly Pony derivatives and sometimes Illustrious checkpoints.

Using ControlNet requires persistence; iteration is the whole idea. Basic skills with a graphics editor are necessary to make changes to the control files ControlNet uses. Experience in image editing will help you a lot, but you don't have to be a classical artist: I have zero art education beyond secondary-school lessons, and I was okay-ish at best. It helps if you find joy in image editing, and the ability to use layers is a great bonus. Personally I use GIMP, but there are lots of good editors, including free options. Krita seems to be very good. Paint.NET is simple yet capable.

Below I will use the Canny and Depth adapters, because these are the two I find most useful and use frequently. There will be a separate in-depth article about them later. I will also give a brief overview of the other adapters available on tensor.art in another article; there is a rather harsh article-size limit here.

Remixing a picture using ControlNet

Let's try using ControlNet. Here is what we will be working with. Click this link and press "remix". It will set the generation parameters. Run it and be amused by the utter failure. Or just skip it; here is what I got: not too bad. I like the perspective distortion. A couple of anatomical problems, very fixable. There is no bear, though. A failure.

We got all the parameters right; the missing ingredients are the ControlNet control files. Let's add them. Download the picture we are trying to remix and remember the location. Click the "Add ControlNet" button in the "Model" section, choose "Canny" (3rd option), click the square area in the lower-left corner of the new dialog window, and pick the picture you just downloaded. Repeat the same actions one more time, but choose the "Depth" adapter this time (4th option). Set the weights for both to 0.5 and run it.

This time the clothing colors are different, which is expected, since the prompt doesn't specify them. It is a very good picture, on par with the original one.

We successfully remixed the picture without even touching the control files themselves. Let's look at them, though. Click the garbage-bin icon to remove the Canny adapter and add it again. Before you confirm your choice, click the right picture, the one in black and white. You will be presented with the control file created by the Canny adapter's preprocessor, and you can save this picture. Now you can edit it and use the edited version instead of the one created by the preprocessor. To do so, press the "Control Image" button in the dialog; it will prompt you to upload your control file.

Let's say we don't like the bear. No wonder; I got it from a quick Bing image search, and it was a cartoonish sketch. The bear sucks. Let's paint this area black. Using the new version of the control file, I got a much better-looking bear, more natural and fitting. Every time I run the generation with these parameters I will get a new bear. The bear is drawn there because the prompt asks for it and the control file doesn't leave any other option for its location. Also, the Depth adapter still indicates the presence of a large body there to the generator.

Once I am happy with the bear, I can fix it in the control file and change other aspects of the generation. I can remove the flowers, add a cat, make the woman run toward the bear, make her wear jeans or nothing, make her a demoness, make the bear run away from her. The sky is the limit now that you can work on specific aspects of the picture with intent.

As an unexpected bonus, the girl's skirt is see-through now and she seems to be going commando. Not intended, and possibly inappropriate. Let's fix it. I add to the prompt: "elaborate blue dress, orange jacket". Nice jacket. The claw is bad and the fingers are wonky. Well, you know what to do. Pull the lever, let it roll. :)

Neither the Canny nor the Depth adapter has anything to do with color, just geometry, so your hands are free there. Also, you can now switch freely between checkpoints that support ControlNet; the scene will generally persist. There are multiple examples of that in my pictures. That's it. ControlNet is that simple. People really should use it more.

A few clarifications; it might be obvious, but better safe than vague. When we supply the original image, the Canny preprocessor analyzes it and automatically creates a control file, an edge map (the black-and-white line drawing), which we can download and reuse/abuse. The weight controls how strongly the ControlNet influences the generation, same as for LoRAs. Higher values stick closer to the control image; lower values give the AI more freedom. At high values (0.7 and above), undesirable effects are very likely.

The method we used above would work for every picture on tensor.art, albeit with different degrees of success. All you need is the prompt and the picture itself; you don't necessarily need to use the same tools and LoRAs as the original author. It works for an arbitrary image too, like an anime screenshot; you just need to write a prompt adding details that the Canny and Depth adapters miss, like colors, lighting details, etc. That's what I do for almost every single picture I have published.

That's it for now; this was an introduction, and I plan to publish a few more articles on this topic. Questions and comments are welcome.

Related articles: ControlNet: Canny adapter, ControlNet: Depth adapter, ControlNet: Openpose adapter.
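For readers who prefer code, here is a minimal sketch of the same Canny workflow using the open-source diffusers library. This is my illustration, not how tensor.art implements it; the checkpoint and adapter IDs are common public defaults, and the UI's "weight" roughly corresponds to controlnet_conditioning_scale here.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# 1. Build the control image: an edge map, like the one the Canny
#    preprocessor produces on tensor.art.
source = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(source, 100, 200)                     # low/high thresholds
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Attach the Canny adapter to a base SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

# 3. 0.5 keeps the geometry while leaving the model some freedom,
#    mirroring the weight used in the article.
image = pipe(
    "a woman in a blue dress meeting a bear in a flower meadow",
    image=control,
    controlnet_conditioning_scale=0.5,
    num_inference_steps=28,
).images[0]
image.save("remix.png")
```

The Depth adapter works the same way: swap in a depth map as the control image with the corresponding depth ControlNetModel, or pass both adapters and both control images as lists to combine them.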
124
20
How to create a LoRA (basic version)

When creating an image you like, you may sometimes feel that some element is missing; you search for a relevant LoRA but still cannot find it. At this point you may think: should I make a LoRA myself? That way, your future works can highlight more of the elements you like, and you can share it with your creative partners. Let's get started!

First, collect pictures of similar elements (about 12), at a reasonably high resolution, so that when you use the LoRA together with others later the pictures stay clear, unless the element you want is a hazy feeling.

Next, it's time to start training! Open the user interface (there is a model I trained there), then click Online Training in the upper left corner to enter the training interface. Add the prepared images in the lower left corner. Select the base model type you often use in the upper right corner, such as SD1.5, SD3.5, Pony, Flux, Hunyuan, etc. Next, consider your computing-power budget: training the same images on different base models gives very different results and consumes different amounts of compute. Also think about whether the result can be used together with your commonly used LoRAs.

For the trigger word, you can input the most important elements of this LoRA, or choose not to input it at first and wait until you see the training results. The prompt field seems to affect the training results; if you have a clear goal, enter it. The default is 1girl; if your element is not a girl, change it or leave it blank.

Next, click Train, wait in the queue, and check the training time. After training finishes, look at the results: there are ten checkpoints in total. I usually pick one from the sixth to the tenth that looks closest to what I want and press Publish.

Publishing steps:
Select Create Project at the top.
Enter the project name (be careful, this cannot be changed).
Select the LoRA type.
Add LoRA tags.
Parameters: I usually choose 500 for the number of iterations.
Type the trigger word and description.
Select a base model.
Add negative prompts.
Upload files, unless they are the result of a previous training; otherwise, I usually adjust the precision to fp32.
Showcases (image/video): if you are uploading the results of a previous training, you need to upload the workbench image and the cover image.
Press Publish.

Adjust the details again, update, and wait for the system to deploy; then you can try it out and see your results! 😁😁😁
194
39
Tutorial of Doujin Master (Español)

Doujin Master tutorial (English translation of the Spanish edition)

This tool saves you the trouble of searching for anime character names and learning how to depict their poses and expressions! https://tensor.art/template/846371912090192788

Simply upload two images: an image of your anime character, and an image of the pose and content you want the character to adopt. Then select a model (each model has subtle variations; experiment and pick your favorite). Next, choose an anime artist style. The default is Yoneyama Mai, one of the most popular anime illustrators right now.

There is also an advanced option to control how similar the generated image is to your reference: higher values make the result resemble your reference image more; lower values keep it closer to your original character.

Here is an example: I selected the first model, used Sailor Moon as the character, and used an anime illustration from Pinterest as the pose reference. For [pose control detail], I chose medium.
72
4
First time using "Claude Monet's Impressionist Inspired LORA"

While exploring tools that render Western-style paintings and pointillism attractively, I came across the Claude Monet's ~~ tool and decided to generate a rough image with it. With a single AI tool, the quality of the images is often not immediately usable, but this tool produced a fairly decent result in just one go. Of course, I tried it two or three times, but I should definitely give it another shot if I think of it later.
127
18
Tutorial of Doujin master

This tool can save you the hassle of searching for anime character names and figuring out how to depict their poses and expressions!

Simply upload two images: a picture of your anime character, and an image of the pose and content you want the character to adopt. Then select a model (different models have subtle variations; feel free to experiment and pick your favorite). Next, choose an anime artist style. The default is Yoneyama Mai, one of the most popular anime illustrators right now.

There's also an advanced option to control how closely the generated image matches your reference: higher values make the output closer to your reference image; lower values keep it closer to your original character.

Here's an example: I selected the first model, input Sailor Moon as the character, and used an anime illustration from Pinterest as the pose reference. For [pose control detail], I chose medium.
145
12
🌈🌈Tutorial of Color everything

Line Art Coloring Tool Tutorial

This guide covers the free version first.
Upload an image (self-explanatory).
Optionally, add a prompt if you want to control specific color areas.
Adjust Line Drawing Control Strength (default: Low). Higher values enforce stricter adherence to the line art but reduce AI-added details.
Click Generate to receive a 1024px image (facial nodes are auto-refined). For larger sizes, use my free upscaling tool.

PRO/Paid Version Features
Larger output (1536px) with enhanced detail refinement.
Multiple style models: beyond Anime Flat, convert line art to 2.5D, 3D, or realistic styles.
Continuous updates with improved models.
Expressive H Styles (PRO exclusive): boosts visual dynamism, but avoid pairing it with High Line Drawing Control; this often causes artifacts.

You can get the PRO version (free to use during the review) or the Paid version (not online) here. Enjoy using the tool! For best results, balance the settings carefully.
112
6
HUNYUAN VIDEO CHALLENGE ROUND 1!

1. Contest overview
A video-generation contest testing your skill, creativity, and luck. Generate videos using the Hunyuan Video Model or one of its fine-tuned models, via either image2video or text2video.

2. Challenge content
Generate creative videos based on the following three themes.
Theme 1: TENTEN AND PENGUIN. Depict a meeting between Hunyuan's official penguin mascot and TA's mascot Tenten. Tag: tenguin
Theme 2: ARTWORK MEME. A meme video based on an artwork. Tag: artworkmeme
Theme 3: LIE DOWN ANYWHERE. A video of lying down anywhere. Tag: liedown
Note on video resolution: at 960x960, 5-second videos are incompatible, so choose 3 or 4 seconds.

3. Rewards
Most Likes award: top 10 in each theme (30 winners in total). Reward: 9.9 USD.
Lucky Challenger award: among the 30 most-liked entries, the posters with the 13th and 27th timestamps. Reward: 29.9 USD. Rewards are issued to the GPU Fund.
Participation award: 100 credits.
Extra bonus: 300 credits + 1 day of PRO for posting with the hashtags #tensorart & #hunyuan on channels such as Instagram, Facebook, X (Twitter), YouTube, TikTok, Pinterest, or Reddit.

4. How to participate
Post content related to the challenge on any of the platforms above with the hashtags #tensorart & #hunyuan, and answer the survey (via the link icon on the right) to receive rewards.

5. Terms
Posts must meet the event requirements (use of a Hunyuan model and a theme).
Only one event tag can be added per post.
To prevent fraud, only likes from users registered before March 3 count as "valid likes".
Users with the system default nickname and avatar are not eligible for rewards.
Posted content must be original; reposting and plagiarism are prohibited.
Child pornography and celebrity pornography or insults are prohibited on TA. NSFW content and low-quality content will be disqualified from the activity.
Cheating is prohibited. In cases of suspected abuse or violations, the platform reserves the right to revoke a user's eligibility and to pursue legal liability against offending users.
The final interpretation of the activity belongs to TensorArt.
153
14
How I LoRA: A beginner's guide to LoRA training | Part 3: Testing your LoRA

A step-by-step guide on how to train a LoRA; part 3!

Warning: this guide is based on Kohya_SS. It REQUIRES that you read "How I LoRA: A beginner's guide to LoRA training | Part 1: Dataset Prep" and "Part 2: Training Basics". This guide CAN be ported to Tensor.art's trainer, if you know what you are doing. It is an (almost) 1:1 of the following guide: https://civitai.com/articles/3522/valstrixs-crash-course-guide-to-lora-and-lycoris-training Edits were made to keep it short and only dive into the crucial details; it also removes a lot of recommendations I DO NOT follow. For more advanced information, please support the original guide. If you want to do things MY way, keep reading.

THE SETTINGS USED ARE BASED ON SDXL. DO NOT FOLLOW IF YOU ARE TRAINING ON V-PRED OR 1.5.

Testing your LoRA

There are two ways to test a LoRA: during training and after training.

During: in Kohya_ss there is a section for a "test" prompt. Use it. If you followed the guide, you set "save every N epochs" to 1, meaning every epoch saves a model and, by proxy, tests it with the given prompt. Look at each image and judge its quality.

After (the right way): once training is done, move all your safetensors files to the lora folder of your WebUI installation. I will assume you have A1111, A1111 Forge, or A1111 re-Forge (the best one). Then:

Set yourself up with all the settings you would normally use: checkpoint, scheduler, etc.
Copy/paste one of your dataset prompts into the prompt area (this tests overfitting).
Navigate to the LoRA subtab and add the first file, e.g. Shondo_Noob-000001.safetensors; this adds the LoRA to the prompt as <lora:Shondo_Noob-000001:1>. Change the :1 to :0.1.
Set a fixed seed, e.g. 1234567890.
Scroll down to the "script" area of your WebUI and select X/Y/Z.
Set your X, Y, and Z to "Prompt S/R".
On X, write all of your LoRA's filenames, e.g. Shondo_Noob-000001, Shondo_Noob-000002, Shondo_Noob-000003, Shondo_Noob-000004, etc., depending on how many files you saved and their names. ALWAYS SEPARATE WITH A COMMA.
On Y, write all the strength values from 0.1 to 1, i.e. 0.1, 0.2, 0.3, etc. ALWAYS SEPARATE WITH A COMMA.
On Z, write alternate tags to test flexibility. If your prompt is "fallenshadow, standing, dress, smile", write something like: dress, nude, swimwear, underwear, etc. This creates a grid where, instead of wearing a dress, she is nude, wears a swimsuit, and so on. ALWAYS SEPARATE WITH A COMMA.

If you did a concept LoRA or a style LoRA, follow the same steps but skip the Z axis: same WebUI setup, dataset prompt, fixed seed, and X/Y/Z script with X and Y as "Prompt S/R"; on X the filenames (e.g. doggystyle-000001, doggystyle-000002, doggystyle-000003, etc.), on Y the strengths from 0.1 to 1, always comma-separated.

Selecting the right file

Once the process finishes, you should have at least two grids, one X/Y with dress and another with nude (for example), or one if you didn't set up a Z axis. Up to you. Now look at the grid for the "best" result. Watch for art-style bias, pose bias, look bias, etc. The more flexible the better:

If on fallenshadow-000005 shondo's pose is always unique but after 000006 she is always standing the same way, ignore 000006+.
If at some point the art style gets ignored or changes and the model fixates on it, ignore it.
If at some point ANYTHING you don't want starts repeating, ignore it.
The only thing that should repeat at all times is whatever corresponds to the trained concept. If you trained a wolf with a hat but it should always be a different hat, avoid a file that gives him the same hat in the same pose with the same style.
If the result image is identical to the training data, avoid it! You are not here to reproduce your data; you are here to make new images, remember?
If colors are weird: bad. If shapes are mushy: bad. If the angle is always the same: bad (unless you prompted for it). Anything that goes against the concept or its flexibility: BAD.
Any file that has to run below 1 or 0.9: BAD. If your LoRA "works best" at 0.6 strength, it's shit.

THIS IS IT FOR PART 3. Now go make some good, cool LoRAs.
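To make the X/Y/Z mechanics concrete, here is a tiny sketch (my addition, not part of the original guide) of what "Prompt S/R" does under the hood: a search-and-replace over the prompt for every axis combination, producing one grid cell per result. The filenames and tags are the hypothetical examples from above.

```python
from itertools import product

# Base prompt: one of your dataset captions plus the LoRA tag at 0.1.
base = "fallenshadow, standing, dress, smile, <lora:Shondo_Noob-000001:0.1>"
x_files = ["Shondo_Noob-000001", "Shondo_Noob-000002", "Shondo_Noob-000003"]
y_strengths = ["0.1", "0.3", "0.5", "0.7", "0.9", "1"]
z_tags = ["dress", "nude", "swimwear"]

for f, s, t in product(x_files, y_strengths, z_tags):
    prompt = (base
              .replace("Shondo_Noob-000001", f)  # X axis: swap checkpoint file
              .replace("0.1", s)                 # Y axis: swap LoRA strength
              .replace("dress", t))              # Z axis: swap the outfit tag
    print(prompt)  # one grid cell per line, all rendered with the same seed
```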
171
16
ACG TO REAL Tutorial

Hello, welcome to my new AI TOOL: ACG TO REAL.

Another good one is here: https://tensor.art/template/814153859164446330 This is another excellent tool for converting anime to real people. Compared to ⭐️⭐️PRO⭐️⭐️Anime Come True, it may not replicate the source as accurately, but it uses fewer credits and generates results faster.

First, let me list some best-practice combinations. If you don't want to read a lengthy article, just look at these three setups. Below, I explain the model and parameter combinations to help you use the tool better.

A. Ethnicity depends mostly on the checkpoint model: some checkpoints give a Caucasian appearance, others an Asian appearance.

B. For adding realistic details, use LoRA models: some add more details, some give more realistic faces, and some reproduce the source more accurately.

C. About the Params selection: Low stays closer to the anime image, but with the lowest realism and possible body distortions; High is more realistic, with more reasonable body proportions, but deviates the most from the original anime image.

Based on this ABC information, feel free to experiment with different combinations and choose the best setup for your input anime image!
134
17
Illustrious v1.1 - Now Exclusive on Tensor.art

Next-gen AI for stunning illustrations, now on Tensor.art. Introducing Illustrious XL 1.1, the latest evolution in anime-focused text-to-image AI. Building on the foundation of Illustrious XL 0.1, this new version pushes the boundaries of fidelity, prompt understanding, and high-resolution output, making it a must-have for artists, illustrators, and animation creators.

🔹 Resolution & More Detail: generate breathtaking 1536 x 1536 images with refined aesthetic quality.
🔹 Smarter Prompt Interpretation: optimized for natural-language prompts, delivering more intuitive results.

Recommended settings for best results
💡 Negative prompts: "blurry," "worst quality," "bad quality," "bad hands"
🛠️ Sampling settings: Steps: 28 | CFG Scale: 5.5-7.5 | Sampler: Euler
🏋️ Training: try LoKr when training; it achieves better results than LoRA 🤫

To showcase the advancements, we've put the model to the test across key performance areas. Below is a direct comparison of image outputs across versions, demonstrating improvements in natural-language comprehension, high-resolution rendering, vivid color expression, and detail fidelity.

1. Natural Language Understanding
📌 Improvement: better prompt adherence and character accuracy.
🔍 Comparison:
• Illustrious XL 0.1: struggled to maintain consistent character fidelity.
• Illustrious XL 1.0: improved coherence between prompt and image, with better facial expressions.
• Illustrious XL 1.1: further refined accuracy, reducing artifacts and enhancing overall expressiveness.
📝 Prompt used: "A vibrant anime-style illustration of a young woman with golden blonde hair, striking orange eyes, and a cheerful expression. She's dressed in a unique outfit that blends sporty and whimsical elements: an orange jacket over a teal and white striped shirt, a blue neckerchief, and a distinctive white cap with orange accents. She's set against a dark green background with streaks of teal, creating a dynamic and eye-catching composition. The style is bold, energetic, and suggestive of a character from a video game or animation., masterpiece, best quality, very aesthetic, absurdres, vivid colors"

2. High-Resolution Precision
📌 Improvement: increased resolution to 1536 x 1536, maintaining clarity at larger sizes.
🔍 Comparison:
• Illustrious XL 0.1: noticeable blurring and loss of detail in high-resolution images.
• Illustrious XL 1.0: clearer textures, sharper lines, and more defined elements.
• Illustrious XL 1.1: more robust structure.
📝 Prompt used: "This masterpiece artwork, in a stylish and extremely aesthetic style evocative of artists like hyatsu, shule_de_yu, lococo:p, huke, potg_\(piotegu\), z3zz4, and moruki, showcases a tsundere solo 1girl, makise kurisu, standing at night under an iridescent sky filled with clouds and forget-me-not flowers, rendered in absurdres detail with a colorful yet partially black and white and abstract composition."

3. Vivid Colors & Dynamic Lighting
📌 Improvement: more vibrant hues, balanced contrast, and expressive compositions.
🔍 Comparison:
• Illustrious XL 0.1: muted tones and washed-out colors.
• Illustrious XL 1.0: more vibrant color balance.
• Illustrious XL 1.1: richer tones and better shadow handling.
📝 Prompt used: "1girl, hyatsu, shule_de_yu, lococo:p, makise kurisu, huke, tsundere, absurdres, potg_\(piotegu\), z3zz4, moruki, hyatsu, stylish, extremely aesthetic, abstract, colorful, night, sky, flower, cloud, iridescent, masterpiece, black and white, forget-me-not"

4. Detail Refinement & Aesthetic Quality
📌 Improvement: sharper facial details and expressive character design.
🔍 Comparison:
• Illustrious XL 0.1: some inconsistencies in facial structure and hand rendering.
• Illustrious XL 1.0: significant improvements in eye detailing and shading.
• Illustrious XL 1.1: near-professional quality with refined expressions.
📝 Prompt used: "1boy, black hair, red eyes, horns, scars, white clothes, blood stains, arm tattoos, black and red tattoos, long gloves on left hand, red sash, warrior-like attire, cold expression, sharp expression"

Get started today! The future of anime AI is here; be part of it with Illustrious XL 1.1 ✨
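As a rough illustration of those sampling settings in code (an assumption on my part: the article describes tensor.art's UI, and the checkpoint filename below is a placeholder for wherever you obtain the model), a diffusers setup might look like this:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder filename; Illustrious XL is an SDXL-based checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "illustriousXL_v11.safetensors", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config)        # Sampler: Euler

image = pipe(
    prompt="1girl, masterpiece, best quality, very aesthetic, absurdres",
    negative_prompt="blurry, worst quality, bad quality, bad hands",
    num_inference_steps=28,       # Steps: 28
    guidance_scale=6.5,           # CFG inside the recommended 5.5-7.5 range
    width=1536, height=1536,      # the new native resolution
).images[0]
image.save("illustrious_sample.png")
```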
159
18
Guide for ⭐️⭐️PRO⭐️⭐️ Anime Come True

This is a guide for ⭐️⭐️PRO⭐️⭐️Anime Come True.

Although Tensor's interface is already quite user-friendly, it's still worth explaining how to combine models to achieve the best results. First, two recommended combinations; for those who find the article too long, these two images should be sufficient.

Below are the detailed explanations. Models are divided into two parts: checkpoint and LoRA. The checkpoint model mainly determines the initial ethnicity and level of realism; checkpoints with lower realism need to be paired with high-level, realistic LoRAs.

Part One: regarding the selection of second-generation models, as shown in the image below, the red box suits Asian facial features, while the green box suits Caucasian facial features.

Part Two: regarding the Real level, 'low' is generally recommended for a more accurate restoration of ACG images. If you notice a significant number of errors in the limbs, the information in the ACG image conflicts with the realistic style; in such cases, opt for 'medium' or 'high'.

Part Three: regarding the LoRA model selection, the red box is more suitable for realistic illustrations, while the green box is better suited for anime with higher ACG content.

Examples: anime with high ACG content + Asian facial features; realistic illustration + Caucasian facial features.

That concludes the basic tutorial. We wish you all a delightful experience!
166
43
🐍 Snake Soirée Creative Contest 🐍 through January 20 (Japanese translation)

🎉 Lunar New Year: Snake Soirée 🎉

Lunar New Year, January 29, 2025, is approaching, and we are delighted to announce the Snake Soirée Creative Contest! 🌟

The Year of the Snake 🌟 symbolizes wisdom, cunning, and introspection. 2025 will be a year full of opportunities and challenges, the perfect time for creativity and celebration. 🎊 The Year of the Snake celebration begins; show us your imagination! 🎊

⏰ Event period: January 1 to January 20 (UTC). After the event ends, we will take two days for judging and officially announce the winners on January 23.

🌟 Snake Soirée: images/videos and AI Tools. The Year of the Snake soirée is divided into two sessions: the Special Soirée and the AI Tool Special Soirée. Enjoy! 😝

🖼️ Posts Soirée: Snake and Tenten
Task: long time no see! Do you all remember Tenten, TensorArt's mascot? She has come to celebrate the Year of the Snake! 🌟 Combine Tenten and snake elements in your work! 🌟 Post it on TensorArt. There are no strict guidelines; let your creativity shine and blend Tenten and the snake elements perfectly! Tag your post with snakeyear!
Winners and prizes:
Best Creativity award: 3 days Pro + 200 credits (10 winners)
Best Aesthetic award: 3 days Pro + 200 credits (10 winners)
Participation awards: image posts 50 credits; video posts 100 credits (image and video rewards can be earned simultaneously, up to 150 credits per account).

🎨 AI Tool Soirée: Design Snake
Task: let a little snake "slither" into the world of design. 🌟 Create an AI Tool in the design field 🌟 that includes snake elements or snake imagery together with design elements, for example poster design, fashion design, artistic typography design, or other AI Tools with snake elements. AI Tools can expose at most 3 parameter settings and must be design-related; otherwise they are not eligible for rewards. Tag your upload with snakeyear!
Winners and prizes:
Best Creativity award: $29.9 (3 winners)
Best Aesthetic award: $29.9 (3 winners)
Participation award: 200 credits

📝 Rules
Posts and AI Tools must fit the corresponding theme and requirements; otherwise they cannot earn rewards.
Tagging #snakeyear counts as participation in the event; without the tag you lose eligibility for rewards.
Cash rewards are deposited into the GPU Fund at the end of the event and can be withdrawn at any time.
Winners are decided by the official TensorArt team.
Users with the system default avatar and nickname will not receive rewards.
Event content must comply with community rules. NSFW, child pornography, celebrity images, violence, and low-quality content are excluded.
The final interpretation of the event belongs to TensorArt.
281
34
TensorArt 2024 Community Trends Report

2024: A Year of Breakthroughs

This year marked an explosion of innovation in AI. From language and imagery to video and audio, new technologies emerged and thrived in open-source communities. TensorArt stood at the forefront, evolving alongside our creators to witness the rise of AI artistry.

Prompt of the Year: Hair

Surprisingly, "hair" became the most-used prompt of 2024, with 260 million uses. On reflection, it makes sense: hair is essential in capturing the intricacies of portraiture. Other frequently used words included eyes (142M), body (130M), face (105M), and skin (79M). Niche terms favored by experienced users, like detailed (132M), score_8_up (45M), and 8k (25M), also dominated this year, but saw a decline in usage by mid-year. With the advent of foundational models like Flux, SD3.5, and HunYuanDiT, natural-language prompts became intuitive and multilingual, removing the need for complex or negative prompts and lowering the barriers to entry for creators worldwide.

Community Achievements

Every day, hundreds of new models are uploaded to TensorArt, fueling creativity among tensorians. This year alone: over 400,000 models are now available, and 300,000 images are generated daily, with 35,000 shared via posts, reaching 1 million viewers and earning 15,000 likes and shares. This year we introduced AI Tool and ComfyFlow, welcoming a new wave of creators. AI Tool simplified workflows for beginners and enabled integration into industry applications, with usage distributed across diverse fields. In November, TensorArt celebrated its 3 millionth user, solidifying its position as one of the most active platforms in the AI space after just 18 months. Among our loyal community are members like Goofy, MazVer, AstroBruh and Nuke, whose dedication spans back to our earliest days.

A Global Creative Exchange

AI knows no borders. Creators from around the world use TensorArt to share and connect through art. From the icy landscapes of Finland (1.6%) to the sunny shores of Australia (8.7%), from Pakistan (0.075%) to Cuba (0.003%), tensorians transcend language and geography. Generationally, 75% of our users are Gen Z or Alpha, while 9% belong to Gen X and the Baby Boomers; "it's never too late to learn" is a motto they live by. Gender representation also continues to evolve, with women now accounting for 20% of the user base. TensorArt is breaking barriers: technical, social, and economic. With no need for costly GPUs or advanced knowledge of parameters, tools like Remix make creating stunning artwork as simple as a click.

The Way Tensorians Create

Most active hours: weeknights, 7 PM to 12 AM, when TensorArt serves as the perfect way to unwind.
Platform preferences: 70% of users favor the web version, but we've prioritized app updates for Q1 2025 to close this gap.
Image ratios: female characters outnumber male ones 9:1; 67% are realistic, 28% anime, and 3% furry.
Favorite colors, in order: black, white, blue, red, green, yellow, and gray.

A Growing Creator Economy

In 2024, Creator Studio empowered users to monitor their model earnings. Membership in the TenStar Fund tripled, and average creator income grew by 1.5x compared to last year. In 2025, TensorArt will continue to balance the creator economy with market development, placing greater emphasis on encouraging creators of AI tools and workflows to provide more efficient and convenient practical tools for specific application scenarios. To this end, TA will launch the Pro Segment to further reward creators, offering them higher revenue coefficients and profit sharing from Pro user subscriptions.

2024 Milestones

This year, TensorArt hosted 26 site events and 78 social media campaigns; ran its first AI Tool partnership with Snapchat, pioneering AI-driven filters that were featured as a case study by Snapchat; launched "Realtime Generate" and "Talk to Model," revolutionizing how creators interact with AI; and collaborated with Austrian tattoo artist Fani to host a tattoo design contest, where winners received free tattoos based on their designs.

TensorArt is committed to advancing the open-source ecosystem and has made significant strides in multiple areas:
For newly released base models, TA ensures same-day online running and next-day support for online training, with limited-time discounts so tensorians can experience the latest models.
To boost creative engagement with new base models, TA hosts high-reward events for each open-source base model, incentivizing tensorians across dimensions such as models, AI tools, and posts.
Beyond image generation, TA actively supports the open-source video-model ecosystem, enabling rapid integration of CogVideo, Mochi, and HunYuanVideo into ComfyFlow and Creation. In 2025, TA plans to expand online video functionality further.
Moving from "observer" to "participant," TA has launched TensorArt Studios with the release of Turbo, a distilled version of SD3.5M. In 2025, Studios will unveil TensorArt's self-developed base model.
TensorArt continuously funds talented creators and labs, providing financial and computational resources to support model innovation. In 2025, Illustrious will exclusively collaborate with TensorArt to release its latest version.

Looking Forward

From ChatGPT's debut in 2022 to Sora's breakthrough in 2024, AI continues to redefine innovation across industries. But progress isn't driven by one company; it thrives in the collective power of open-source ecosystems, inspiring collaboration and creativity. AI is a fertile ground, filled with the dreams and ambitions of visionaries worldwide. On this soil, we've planted the seed of TensorArt. Together, we will nurture it and watch it grow.

2024 Annual Rankings

Each month of 2024 brought unforgettable moments to TensorArt. Based on events, likes, runs, and monthly trends, we've curated the 2024 Annual Rankings. Click to explore!
390
54
My Journey: Model Training a LoRA for Game Art Design

My Journey: Training a LoRA Model for Game Art Design

What is LoRA?
LoRA (Low-Rank Adaptation) is a powerful technique for creating custom AI art models, perfect for game designers looking to develop unique visual styles.

My Training Setup for the Adrar Games Art Style

Preparing your training dataset, technical specifications:
Base model: FLUX.1 - dev-fp8
Training approach: LoRA (Low-Rank Adaptation)
Trigger words: Adrr-Gmz
Epochs: 5
Learning rate: 0.0005 (UNet)

Key training parameters, network configuration:
Dimension: 2
Alpha: 16
Optimizer: AdamW 8bit
LR scheduler: Cosine with Restarts

Advanced techniques:
Noise offset: 0.1
Multires noise discount: 0.1
Multires noise iterations: 10

Sample prompt: "A game art poster of a Hero standing in a fantastic ancient city in the background, and in the top a title in a bold stylized font 'Adrar Games'"

My Learning Process

Challenges: creating a consistent game art style; capturing the essence of the "Adrar Games" visual identity; balancing technical parameters with creative vision.
Insights: LoRA allows precise control over art generation; careful parameter tuning is crucial; small adjustments can significantly impact results.
Practical takeaways: start with a clear artistic vision; experiment with different settings; don't be afraid to iterate and refine.
Recommended next steps: generate multiple sample images; analyze and compare results; adjust parameters incrementally; build a library of unique game art assets.

Would you like me to elaborate on any part of my LoRA training experience?
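For reference, here are the parameters above collected into a single kohya_ss-style configuration sketch. This is an assumption on my part: the post doesn't say which trainer produced them, and the key names follow kohya's flags; the trigger word lives in the captions rather than in a flag.

```python
# Hypothetical kohya_ss-style settings mirroring the list above.
training_config = {
    "pretrained_model_name_or_path": "FLUX.1-dev-fp8",  # base model
    "network_module": "networks.lora",                  # LoRA adapter
    "network_dim": 2,                                   # Dimension: 2
    "network_alpha": 16,                                # Alpha: 16
    "unet_lr": 5e-4,                                    # learning rate 0.0005 (UNet)
    "optimizer_type": "AdamW8bit",                      # AdamW 8bit
    "lr_scheduler": "cosine_with_restarts",             # Cosine with Restarts
    "max_train_epochs": 5,                              # Epochs: 5
    "noise_offset": 0.1,                                # advanced techniques
    "multires_noise_discount": 0.1,
    "multires_noise_iterations": 10,
}
print(training_config)
```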
264
27
I Just Want to Take It Easy: LoRA Settings with GPT (Model Training)

I don't want to move in the cold winter. I don't want to go outside in the hot summer. In pollen-heavy spring I want to stay home, and in autumn, with dead leaves flying around, I want to gaze out the window. For me, GPT is a good partner that does all sorts of things in my place; it would be perfect if it could also prepare dinner and a bath. Here is how I, someone who only ever thinks about taking it easy, work out detailed LoRA settings. Ta-dah.

Have GPT work out the detailed settings

Upload one sample image and ask something like this (change the parts in red):

"I'm planning to make a LoRA for the AI image-generation model Flux, themed on 'wedding venues'. I have prepared 40 images like the attached one. Could you suggest optimal settings for Repeat, Epoch, Network Rank (Dim), Network Alpha, and so on? Are there any cautions about tag setup? Could you also write a prompt for generating sample images?"

It also helps to mention anything else you care about.

GPT's actual reply

Here is a detailed explanation of the optimal settings, cautions, and sample-generation prompt for a wedding-venue LoRA.

LoRA training settings. With the provided 40 images, the dataset is on the small side, so the following settings are recommended:
Repeat: 5-10. Because the dataset is small (40 images), repeat each image to compensate for the amount of training data.
Epochs: 10-15. To train sufficiently for the data volume while preventing overfitting, set epochs to around 10-15.
Network Rank (Dim): 16-32. A rank around 16-32 keeps the model flexible; too low dilutes what is learned, and too high raises the risk of overfitting.
Network Alpha: 16. Setting Alpha roughly equal to the rank stabilizes LoRA training.

Notes on tag setup:
Common tags: add basic elements such as "wedding venue", "flower decorations", "arches", "drapes", and "fairy lights", so the wedding-venue elements are learned properly.
Detailed tags: tag the elements in each image in detail, for example flower types ("roses", "floral arrangements", "bouquets"), decorations ("candles", "lanterns", "string lights", "arched drapery"), and lighting/mood ("soft lighting", "romantic setting", "warm glow").
Avoid proper nouns: do not include specific venue names or brand names.

Sample image generation prompt. Once the LoRA is trained, you can generate samples with the following prompt:

(masterpiece, best quality, ultra-detailed), wedding venue, floral arch, draped curtains, fairy lights, romantic lighting, beautiful flower arrangements, glowing candles in glass lanterns, soft sunlight through tall windows, elegant decorations, roses, pastel colors, dreamy atmosphere, wooden floor, luxurious and enchanting wedding setting

Optional extras: for seasonality, spring wedding or winter wedding theme; for background changes, sunset lighting or nighttime fairy lights; for decoration emphasis, detailed flower patterns or ornate arch design.

Notes: when generating, adjust the LoRA strength (weight) to around 0.7-1.0 to optimize the model's influence. After generating samples, check concretely whether the LoRA has learned the intended elements and adjust as needed. Try these settings, and I'll be waiting for feedback if adjustments are necessary! ✨
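One quick sanity check to run on GPT's numbers (my addition, not part of the reply): the total number of optimization steps is images x repeats x epochs divided by batch size, so the suggested ranges land between 2,000 and 6,000 steps for this 40-image dataset.

```python
# Step-count check for the suggested ranges, assuming batch size 1.
images, batch = 40, 1
for repeats, epochs in [(5, 10), (10, 15)]:  # low and high ends of the advice
    steps = images * repeats * epochs // batch
    print(f"repeat={repeats}, epoch={epochs} -> {steps} steps")
# prints 2000 and 6000 steps
```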
255
26
'Padoru' Illustrious LoRA Model Training Guide

Link to the LoRA: https://tensor.art/models/806808906744706431/Podoru-Meme-LoRa-e10

I fetched my training images from https://apps.apple.com/us/app/sankaku-anime-ai-girlfriend/ and the pic collage shown here was created using https://gandr.io/ Then I used the built-in Tensor Art caption tool to set the prompts.

Example caption: "padoru 1girl solo long hair smile open mouth simple background hair ornament long sleeves hat white background holding tail full body yellow eyes :d red hair pointy ears hairclip chibi fur trim capelet fangs transparent background blush stickers monster girl slit pupils christmas red headwear santa hat santa costume meme scales sack lamia holding sack miia \(monster musume\)"

Training settings:
Base model: NovaXL (shown as the base model in the LoRA)
Unet learning rate: 0.0001
Repeat: 20, epochs: 10
Constant learning rate
Network dim: 64, alpha: 32

The loss chart is absurdly low. And here are some examples of the output from the NovaXL Illustrious model. Very cute!

That's all for this training guide. Merry Christmas! /Adcom
142
18
Model Training: The Results of the Model with 10 Prompts "Christmas Decorations Vintage"

AI Model for Classifying Vintage Christmas Decorations

AI technology, particularly machine-learning models like CNNs, can be used to identify vintage Christmas decorations. The process involves the following steps (a sketch follows the prompt list below):
Dataset: collecting images of vintage ornaments (1920-1980) with proper annotations.
Preprocessing: preparing images through resizing and augmentation.
Training: using CNN models like ResNet to detect key features such as retro designs or specific materials.
Evaluation: measuring accuracy and fine-tuning the model.

Visual Results from My Model: "Christmas Decorations Vintage"

The model can provide:
Automatic identification of vintage ornaments (e.g., pastel glass balls or retro string lights).
Visual categorization by era, such as 1950s or 1970s, displayed in collection panels.
Detection of unique materials like glass, wood, or textured plastic, accompanied by a description of their historical use.

10 Prompts for Vintage Christmas Decorations:
"A collection of 1950s pastel glass Christmas ornaments with glitter details, displayed on a wooden table with soft vintage lighting."
"Retro ceramic Christmas figurines from the 1970s, featuring angels and Santa Claus, with a cozy holiday backdrop."
"A Christmas tree decorated with 1920s-inspired metallic tinsel and glass baubles, surrounded by antique gifts."
"Close-up of vintage Christmas string lights with large, colorful bulbs glowing warmly in a dimly lit room."
"Mid-century modern Christmas ornaments in soft pastel tones, styled with minimalist retro decor."
"Handcrafted wooden nutcracker ornaments from the 1940s, arranged on a snowy windowsill."
"A wreath made of vintage glass ornaments in vibrant colors, hung on an old wooden door with a snowy background."
"A set of antique Christmas postcards from the 1930s, featuring illustrations of festive decorations and snowy scenes."
"A shelf displaying 1960s Christmas decorations: ceramic trees, vintage candles, and glittery ornaments."
"A warm holiday scene with a 1940s-inspired fireplace mantel decorated with garlands, stockings, and vintage baubles."

These prompts evoke a nostalgic and festive aesthetic, capturing the charm of vintage Christmas decor.
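As a hedged sketch of the training step described above (my illustration; the post includes no code, and the folder layout with one subfolder per era is hypothetical), a torchvision ResNet fine-tune could look like this:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Preprocessing: resizing plus a simple augmentation, as described above.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
# Hypothetical dataset layout: ornaments/1950s/, ornaments/1970s/, ...
data = datasets.ImageFolder("ornaments/", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# ResNet backbone with a new classification head, one class per era.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # evaluation and fine-tuning would follow this loop
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```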
21
It's done! I was able to do Model Training too!

"It's done! I was able to do model training too!"

Model training may be assigned as a task at an event. It is a great reward for the effort, so I definitely wanted to try it. https://tensor.art/models/806517974249805611/Christmas-Yggdrasil-2024-12-12-07:37:37 This is a model I made for an event. All I had to do was have the AI draw three pictures and "munch through" the online training. I will explain my example in five steps.

First step: think about the theme of the model and prepare three pictures. Pictures drawn by AI are fine (be careful about copyright). Anything works, but if possible pick something you like. I like big things, so I chose three pieces, including this one.

Second step: upload the images from the online training page. Then click on each uploaded image and "delete" the suggested caption words using the radio buttons on the right. The key is to "erase" rather than "keep". Do this for all images you upload.

Third step: enter the parameters. Here is what I think is the easiest and most reasonable method. For the model, select "SD 3.5 Large" from the "Standard" tab; the reason is that it's cheap right now (lol). Model training consumes a lot of credits, so this element is not trivial. (However, the theme for December 18 features the "Illustrious" model, so be careful about that.) There are several places where you have to type in text. The most important is the trigger word; enter a short phrase that inspires you. If your only purpose is to create a model, I don't think you need to fiddle with the other numbers much. In my case, Text Encoder learning rate was 0.00005, Clip Skip was 2, Network Dim and Network Alpha were both 128, repeat was 10, epoch was 20, and the LR scheduler was "cosine_with_restarts". All that remains is to write a sample-generation prompt and a negative prompt; this works just like asking the AI to draw a picture normally. Then press the button to start training, and the AI will "study drawing" for you. This amount of learning takes approximately 45 minutes.

Fourth step: create a project. Create → Add LoRA → Host my model → Create a project. This takes you to the project launch screen; enter the necessary information and publish the project. This project becomes the "container" for the model you created. At this stage, the project has no model contents yet.

Final step: "install" the completed model into the project and publish it. When training completes, you should see a results screen; click "Publish" for the set with the image you like. A "Select Project to Publish" pop-up will appear, so select the project you just created. You will be redirected to the "Host a model" screen; enter the model data and publish it.

You can perform model training with the steps above. Once you try it, you'll realize it's not that difficult. Learning can be done by the AI, but inspiration can only come from humans. Bring your inspiration to life in your models. I hope this article helps you in your creation. Thank you.
Introducing LoRA: A Lightweight Model Training Technique for Customizing Text-to-Image

What is LoRA?

LoRA (Low-Rank Adaptation) is a clever technique that lets you tweak existing AI models without having to start from scratch. Imagine you have a pre-trained model that can generate all sorts of images, but you want it to create art in a specific style, like anime or watercolor. LoRA allows you to customize this model efficiently, saving you a ton of time and computing resources.

Why is LoRA Important for Text-to-Image?

- Efficiency: Instead of retraining the entire model, LoRA only adjusts a small part of it. This means it's much faster and requires less computing power.
- Customization: Want to create your own unique art style? LoRA lets you fine-tune the model to generate images that match your specific preferences.
- Flexibility: LoRA can be applied to various text-to-image models, not just Stable Diffusion.

How Does LoRA Work?

Think of it like teaching a dog a new trick. You don't have to retrain the entire dog, just teach it a few new commands. Similarly, LoRA adds small adjustments to specific layers of the pre-trained model, allowing it to learn new things based on the data you provide. (See the code sketch after this article for what those adjustments look like in practice.)

Benefits of Using LoRA:

- Cost-effective: The faster training process means you'll spend less on computing resources.
- Better Results: With careful customization, LoRA can generate higher-quality images that are more aligned with your desired style.
- Accessibility: LoRA makes it easier for anyone to experiment with text-to-image models, even if you don't have a supercomputer.

Use Cases for LoRA:

- Creating Unique Art Styles: Train LoRA on a dataset of images in a specific style, like vintage comics or impressionist paintings, to generate new images in that style.
- Generating Specific Objects or Concepts: Want to create images of fantasy creatures or futuristic cityscapes? LoRA can help you achieve that.

Conclusion

LoRA is a powerful tool for customizing text-to-image models. Its efficiency and flexibility make it a game-changer in the field of generative AI.

Additional Tips:

- Quality Data: The quality of your training data directly impacts the quality of your results.
- Experiment with Hyperparameters: Tweaking settings like learning rate and the number of training epochs can significantly affect the model's performance.
- Join the Community: Online communities focused on LoRA can provide valuable insights and support.

In summary, LoRA is a fantastic way to create personalized AI art. Give it a try and see what you can create!
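The "small adjustments" described above are literally two trainable low-rank matrices added alongside a frozen weight. Here is a minimal PyTorch sketch of that idea; the rank, alpha, and scaling follow the common LoRA convention, and this is a conceptual illustration, not Tensor.Art's internal implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768), rank=16, alpha=16)
out = layer(torch.randn(2, 768))  # only A and B receive gradients
```

Only A and B are trained, so a rank-16 adapter over a 768×768 layer stores about 2 × 16 × 768 values instead of 768 × 768, which is why LoRA files are so small compared to full checkpoints.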
❄️Effective prompt words to get great images with the Christmas model "Model training"❄️

Hello everyone! 🥰 Christmas is coming up, so I would like to introduce some words you should include in your prompts to create better works with a Christmas model.

First of all: in AI image generation, the choice of words in your prompt greatly affects the quality of the image. This article introduces tips that are easy to understand even for beginners and will improve the quality of your images, not only for Christmas but for everyday prompts too.

1. Basic strategy

To write an effective prompt, keep the following points in mind. They make it easier for the AI to generate images that match your intentions. (A small helper sketch after this article assembles these components.)

- Style: Specifies the atmosphere and drawing method of the image. Examples: "anime style", "cyberpunk", "watercolor painting", "ink painting style", "reality photo", etc.
- Type: Indicates the composition of the image. Examples: "portrait", "landscape", "full-body character", "cowboy shot", etc.
- Background: Clarifies the situation and setting of the image. Examples: "sunset", "dense forest", "futuristic cityscape", etc.
- Mood: Sets the overall atmosphere of the image. Examples: "dramatic", "peaceful", "vibrant", etc.

2. Elements to emphasize and examples of their use

To have the AI reflect specific elements, use the following phrases.

Lighting effects:
- "golden hour lighting". Example: "a serene mountain landscape, golden hour lighting, highly detailed"
- "neon glow". Example: "a cyberpunk city at night, neon glow, atmospheric"

Camera perspective:
- "close-up of a face". Example: "a close-up of a young woman's face, freckles, cinematic lighting"
- "wide-angle view". Example: "a wide-angle view of a bustling market, colorful stalls, lively atmosphere"

Background details:
- "galaxy in the background". Example: "a futuristic spacecraft, galaxy in the background, vibrant colors"
- "snow-covered mountains". Example: "a hiker on a snowy trail, snow-covered mountains, dramatic lighting"

Texture:
- "highly detailed". Example: "a dragon with highly detailed scales, glowing eyes, fantasy setting"
- "soft and fluffy texture". Example: "a kitten sleeping on a blanket, soft and fluffy texture, warm lighting"

3. Techniques for adding originality

Adding the following phrases will give you more unique results.

Abstract/fantastic elements:
- "ethereal". Example: "an ethereal fairy in a glowing forest, delicate wings, magical atmosphere"
- "surreal". Example: "a surreal desert with floating islands, warm tones, imaginative"

Luminescence and special effects:
- "bioluminescent". Example: "an underwater world with bioluminescent plants, vibrant colors, mysterious"
- "iridescent". Example: "a bird with iridescent feathers, perched on a branch, sunlight reflection"

Environment and atmosphere:
- "dystopian". Example: "a dystopian cityscape, dark and foggy, neon lights"
- "whimsical". Example: "a whimsical tea party in a garden, pastel colors, surreal elements"

4. Tips to avoid problems

Some words and settings may cause the AI to produce unstable results. Adjust your prompts with the following in mind.

Words to avoid: "blurry", "chaotic".
Suggested alternatives:
- "soft focus". Example: "a soft focus portrait of a smiling elderly woman, warm lighting"
- "dynamic composition". Example: "a dynamic composition of a sports car racing on a track, motion blur"

5. Further examples of AI image generation

- Character design: "a steampunk warrior with intricate armor, full-body, dramatic lighting"
- Product promotion: "a sleek smartphone on a reflective surface, minimalistic design, vibrant colors"
- Recreating an art style: "a serene landscape in the style of Van Gogh, swirling brushstrokes, vivid colors"

6. Example Christmas-themed prompts

Finally, here are example prompts for a Christmas-themed image of a girl using my Christmas model "🎄Cute Christmas collection・FLUX 🎄" and the words introduced above. Use these as a starting point to create your own original prompts.

- Fantastic Christmas: "A young girl in a red and white Christmas dress, surrounded by glowing fairy lights, snowy background, whimsical atmosphere, vibrant colors, golden hour lighting"
- Traditional Christmas scene: "A cheerful girl decorating a Christmas tree, cozy living room, soft and warm lighting, fireplace in the background, soft and fluffy texture"
- Futuristic Christmas: "A futuristic Christmas celebration, a girl in a glowing cyberpunk Santa outfit, neon lights, snowy futuristic cityscape, vibrant and dynamic composition"
- Winter fairy girl: "A snow fairy girl with delicate wings, sparkling snowflakes, ethereal and magical atmosphere, soft focus, iridescent glow"
- Christmas market at night: "A young girl walking through a bustling Christmas market, surrounded by festive stalls and twinkling lights, wide-angle view, atmospheric"

Tweak these prompts and enjoy your own Christmas images! Enjoy an amazing AI-generated experience! 🎄✨
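Since section 1 breaks a prompt into style, type/subject, background, and mood, a tiny helper can assemble those parts consistently. A minimal sketch; the component values are illustrative examples, not an exhaustive vocabulary:

```python
# Build a prompt from the basic components in section 1, plus optional
# emphasis phrases from sections 2-3. All values here are examples.
def build_prompt(style, subject, background, mood, extras=()):
    parts = [subject, style, background, mood, *extras]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    style="watercolor painting",
    subject="a cheerful girl decorating a Christmas tree",
    background="cozy living room, fireplace in the background",
    mood="peaceful",
    extras=("golden hour lighting", "soft and fluffy texture"),
)
print(prompt)
```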
Model Training - How to Train an AI Model: A Step-by-Step Guide

Training an AI model may sound daunting, but it's surprisingly straightforward when broken down into clear steps. Whether you're a beginner or looking to refine your skills, this guide walks you through the process, from creating datasets to fine-tuning settings.

This is the model page: https://tensor.art/models/806678236659647115/CHRISTMAS-UGLY-SWEATER-PATTERN-V9

Step 1: Build Your Dataset

A quality dataset is the backbone of any successful AI model. Here's how to create one:

- Source images: Gather images from reliable sources like Pinterest, stock image websites, your personal photo gallery, or even AI-generated outputs. Ensure you have permission to use the images, especially for commercial purposes.
- Focus on quality: Use clear, sharp images. Avoid images with noise, blur, or watermarks. The dataset doesn't have to be massive, but clarity is key.

Example: For this guide, let's say you're building a dataset of seamless patterns inspired by ugly sweaters. Carefully curate high-quality images that fit this niche.

Step 2: Caption Your Dataset

Good captions make a significant difference in training outcomes. A well-captioned dataset ensures your model understands the nuances of your images.

Tips for effective captioning:
- Write captions manually for precision.
- Use automated captioning tools sparingly and always review their output.
- Be descriptive but concise, capturing key details like color, style, or patterns.

Example caption: for an image of a red-and-green holiday sweater with reindeer motifs, your caption might read: "Seamless pattern of a red-and-green knitted sweater with reindeer and snowflake designs." (A small sketch after this guide shows one common way to store such captions.)

Manually crafting captions takes more time, but the payoff is better accuracy in your model's outputs.

Step 3: Set Parameters and Configure Training

Once your dataset is ready, it's time to train your model. Using platforms like Tensor.Art simplifies this process.

- For beginners: start with the default settings. These are optimized for general use and save you the hassle of configuring every parameter manually.
- For advanced users: experiment with parameters such as learning rate, batch size, and epoch count to refine your model.

Bonus Tips

- Test regularly: As your model trains, run tests to ensure it's learning correctly. This helps identify issues early.
- Iterate: Training is an iterative process. Don't hesitate to tweak and retrain if the results aren't up to par.
- Document your process: Keep notes on what works and what doesn't. This saves time in future projects.

Final Thoughts

Training an AI model involves careful preparation and a bit of patience, but the results are worth the effort. By curating a high-quality dataset, writing thoughtful captions, and fine-tuning settings, you'll be on your way to creating a model that performs exactly as you envision. Dive in, experiment, and watch your AI-powered creativity take flight!
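A common convention for LoRA datasets, and one practical way to organize Step 2, is a `.txt` caption file next to each image with the same base name. A minimal sketch, assuming a `dataset/` folder and hand-written captions; the filenames and texts are illustrative:

```python
from pathlib import Path

# Hand-written captions keyed by image filename (both sides are examples).
captions = {
    "sweater_01.png": "Seamless pattern of a red-and-green knitted sweater "
                      "with reindeer and snowflake designs.",
    "sweater_02.png": "Seamless pattern of a blue knitted sweater with "
                      "snowmen and candy cane motifs.",
}

dataset = Path("dataset")
dataset.mkdir(exist_ok=True)
for image_name, caption in captions.items():
    # e.g. dataset/sweater_01.png -> dataset/sweater_01.txt
    (dataset / image_name).with_suffix(".txt").write_text(caption, encoding="utf-8")
```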
Model Training - Illustrious NoobAI LoRA Discussion

Let's talk about Illustrious and NoobAI LoRAs.

Preface

I am currently using tensor.art with Professional Mode to train my LoRAs. This article mainly discusses what I've tried, and I welcome others to join the discussion too, as there's no official finetune guide.

Guidelines

- Higher rates = stronger character features but potential loss in image quality.
- Lower rates = better image quality but weaker character features.
- Most character LoRAs work well with a UNet rate around 0.0003 and a TE rate around 0.00003.
- Lower learning rates adapt the features better but can also take longer.
- As for the dataset: say I have 40 images, 5-10 repeats, 10 epochs, and a batch size of 4. This adds up to the total steps, and then hopefully the model is trained well enough. (See the arithmetic sketch at the end of this article.)
- The ideal ratio is typically UNet:TE = 10:1.

UNet rates (0.0005 - 0.0001):
- 0.0005: Very strong influence; can overpower the base model. Good for exact character matching but may reduce image quality.
- 0.0003: Balanced influence, commonly used for character LoRAs.
- 0.0001: Subtle influence; maintains high image quality, but character features may be less pronounced.

Text Encoder (TE) rates (0.00005 - 0.00001):
- 0.00005: Strong text conditioning, helps with character recognition.
- 0.00003: Moderate text influence, a good balance for most character LoRAs.
- 0.00001: Light text conditioning, useful when you want minimal style transfer.

Network Dim (dimension rank, DR):
- 32: Standard/default rank, good balance of detail and file size.
- 64: Higher detail capture, larger file size.
- 128: Very high detail, much larger file size.
- 256: Maximum detail, extremely large file size.

Network Alpha (AR):
Alpha is typically set to match the rank or sit somewhat below or above it. Common ratios:
- AR at half the rank, or even a quarter of the DR.
- AR equal to the DR (1:1 ratio): standard training stability.
- AR at 1.5× the DR: increased stability (half again the DR).
- AR at 2× the DR: maximum stability.

The values below are not 100% settled; they are still being figured out.

Basic character LoRA (depending on the base model's preference): DR 64, AR 32. Best for simple anime/cartoon characters; file size ~70MB; good balance of detail and stability.

Complex character LoRA: DR 64-48, AR 32-24. Best for most character types; file size ~100MB; excellent for anime/game characters.

Style LoRA:
Example: https://tensor.art/models/806682226684073145/NAI3-Kawaii-Style-Illustrious-NoobAI-nai-IL-V0.1
Example: https://tensor.art/models/806356844256811271/Anima-Crayon-Sketch-Illustrious-IL-V0.1
The original article says DR 128, AR 64 to 32 seems best for a combination of complex features if the style is very detailed; otherwise lower ranks work too.

Learning rates can vary by optimizer:
- CAME and RAWR: 0.0002 UNet and 0.00002 TE will need about 2500 to 3000 steps.
- AdamW8bit and Adafactor: between 0.0003-0.0005 UNet and 0.00003-0.00005 TE at around 1000 steps.

What I use instead, my parameter settings:

- Network Module: LoRA
- Base Model: rMix NNNoobAI - V1.1
- Trigger words: nai3_kawaii
- Repeat: 10
- Epoch: 10
- Save Every N Epochs: 1
- Seed: -
- Clip Skip: -
- Text Encoder learning rate: 0.00004
- UNet learning rate: 0.00035
- LR Scheduler: cosine_with_restarts
- Optimizer: AdamW8bit
- Network Dim: 32
- Network Alpha: 16
- Gradient Accumulation Steps: -
- Shuffle caption: true
- Keep n tokens: 1
- Noise offset: 0.0357
- Multires noise discount: 0.15
- Multires noise iterations: 8
- conv_dim: -
- conv_alpha: -
- Batch Size: 2
- Sample prompt: nai3_kawaii 1girl solo long hair looking at viewer blush bangs blue eyes hair ornament dress ribbon sitting closed mouth pink hair sleeveless hairclip sailor collar two side up book blue dress sailor dress . masterpiece, best quality, amazing quality, very aesthetic, absurdres
- Sampler: euler

What works?

I'd like to hear what works and doesn't work for Illustrious: optimizer (learning rates may change depending on the optimizer chosen), scheduler, and network settings (dimension rank 128, 96, 64, 32, 16, 4; alpha rank 128, 96, 64, 32, 16, 4).

Don't use: Prodigy.

Can use:
- AdamW8bit with Constant: 0.0003 LR (TE & UNet) for aggressive character learning; 0.0002 LR for medium character learning (DR 128, AR 64).
- Adafactor with cosine_with_restarts: 0.0005-0.0003 LR (UNet), 0.00005-0.00003 LR (TE), DR 128-32, AR 64-16. I usually set the alpha to half the Network Dim.

Adapted from and inspired by: https://civitai.com/articles/9148/illustrious-lora-training-discussion
Model used for my training: rMix NNNoobAI v1.1 - https://tensor.art/models/805164110363975687
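The step counts mentioned in the guidelines follow directly from the dataset settings: total steps ≈ images × repeats × epochs ÷ batch size. A quick sketch using the numbers from this article; the 20-image dataset in the second call is an assumed size, since the article doesn't state one for the author's own run:

```python
# Rough step-count arithmetic for a LoRA run: images * repeats * epochs / batch.
def total_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    return images * repeats * epochs // batch_size

# The dataset example from the guidelines: 40 images, 10 repeats, 10 epochs, batch 4.
print(total_steps(40, 10, 10, 4))   # 1000 -> fits the AdamW8bit ~1000-step range

# The author's settings (repeat 10, epoch 10, batch 2) with an assumed 20 images:
print(total_steps(20, 10, 10, 2))   # 1000
```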
🎄“Model Training” Words that enhance the effect of X'mas models

Attention to detail in the prompt is important to produce beautiful images related to Christmas. We've put together a list of words to help you get into the Christmas spirit and create dreamy, beautiful images. By using these in combination, you are sure to be able to create a magical and warm Christmas scene.

1. Main themes: Christmas tree, Santa Claus, snowy landscape, Christmas lights, cozy fireplace, gift-wrapped presents, snow-covered village, reindeer, gingerbread house

2. Decorations and accessories: ornaments, garland, wreath, baubles, holly and ivy, Christmas stockings, snowflakes, candy canes, bells

3. Lighting effects: warm glowing lights, sparkling fairy lights, candlelit ambiance, radiant glow, soft bokeh effect, starry night, lantern-lit

4. Color effects: warm hues, rich reds, forest greens, snowy whites, golden tones, frosty blues, silver sparkle

5. Characters and animals: Santa's elves, festive animals, polar bear with scarf, penguins in winter hats, adorable snowman

6. Scenery and places: cozy cabin, snowy forest, mountain village, Christmas market, enchanted winter scene, icicles on trees, ice rink with skaters

7. Texture and atmosphere: velvet textures, frosted glass effect, glittering snow, cozy and warm atmosphere, nostalgic charm, festive elegance, whimsical charm

8. Other keywords: magical holiday, winter wonderland, silent night, joyful celebrations, family gathering, childlike wonder, frost-kissed scenery

Combine these with words like "beautiful", "wonderful", "warm", and "festive": adding such keywords to the prompt will help you generate images full of Christmas charm.
To Make Models, Online Training!!! 😍

Greetings, my fellow Tensorian artists! 😊🤗😁

@sfsinspiredwritr here! My Tensorian friend @digital_daydreamer had asked for my advice on training models, and I gave them a helping hand! It's so detailed and invaluable that I decided to give ALL Tensorians these 10 tips on how to make models online with Tensor Art! (Note: these are the things I have figured out SO FAR in my Tensor journey, and it's all from scratch!) Enjoy! 😄

Tip #1: In my experience of training models online on Tensor Art, a good margin for how many images to use (especially for the model to work nicely when it's run) is 15-22 images. Fewer or more will generally lessen the model's quality.

Tip #2: Tensor Art supports images in the formats .png, .jpg, .jpeg, and .webp. I find images by searching for the model topic on Google (for example, "digimon tai x sora"), heading to the "Images" tab, clicking an image I want to use, and right-clicking it to "Save image as". Now, ALWAYS check the image format after you download, in whatever your file manager is called on your device. Images in .avif, .jpg_large, or .jfif format are NOT supported by Tensor Art, so make sure to keep those out of your image count! (See the sketch after this article for a quick way to check.)

Tip #3: Pony is THE #1 type of model to use as a base to train your models on online! The image quality is unmatched by any base model type, IMO! Good base models to use include Nova Anime XL (v5.0 or v6.0, haven't tried the newer versions yet) and Pony - Disney Pixar Style (V2).

Tip #4: The steps, how many times the model repeats an image for training, are IMPORTANT to set up! Generally, keep the steps in the range of 24-27 for the model to train the images on.

Tip #5: Keep the image prompt as "loosely detailed" as possible. My image prompt in online training would look something like this:

(Trigger words) digimon_adventure, mimi_tachikawa, (1girl, 1_solo)
(Image standard terms) (RAW Photo, best_quality, high_definition, high_res, masterpiece, veryGOODImage_positive) (Beautiful, extremely detailed)_image, (BEST, amazing, incredible, highly detailed, anime-style)_quality, (Clear & not blurry, beautiful)_image
(Optional details) simple_background, (upper_body(_shot))
(Main prompt) There is a teenage girl resembling Mimi Tachikawa. She has long and light brown hair, (amazing, anime_style, well-drawn) brown eyes, and fair skin. She is looking at the viewer with a smile.

This is how my training prompt looks! Simple enough for the AI to train the images! 🙂

Tip #6: The steps are 24-27, and the epoch is just 1, a set of 4 images. This is the cheapest ratio I use, and with what I said about the steps and the image prompt, the model will turn out fantastic!

Tip #7: The model's initial images, when its training is done, will look... UNPROFESSIONAL for the most part. Don't be discouraged! Consider these first 4 images the "rough draft", and remember this: the model's first images are not the goal; it's the images that you run AFTER the model is published! Essentially, I just replace the first images with 10-12 NEW images to show off the model! That's the "presentation" secret (shh 🤫)!

Tip #8: Upon running the newly published model, this is the part where you work out more details for the prompt, the negative prompt, the strength ("detail weight") of the model, the CFG scale, and whatever else you can tweak to make good images with the model! Once you have this figured out, you can "Edit Model" and place your base prompt, negative prompt, strength, CFG scale, and other details into the model's description, so that users running your model (and yourself 😉 wink-wink) can use it with more ease and AMAZING results!

Tip #9: The models have to go into projects to be showcased to the Tensor community, of course! To make a project, go to "+ Host my model" on your Profile page, then "Create a project". You set its name, type (mine are usually Image & LoRA), channel (Anime, Illustration, etc.), tags (e.g. "SORA TAKENOUCHI", "DIGIMON ADVENTURE"), description (where I recommend some of my tips so users can use the model well), resource (usually "Original"), view scope (usually "Public"), and the permissions you give to users (I allow everything except those under "Commercial use"; I don't care about those), and then you "Create"!

Tip #10: Instead of using the model you trained the images on to run new presentation images, you can use a DIFFERENT one! For example, when I train models for the DIGIMON ADVENTURE characters' defined versions, the base model for training is Nova Anime, but for running images it's Disney Pixar Style! For their anime versions, it's the reverse! In my experience, using a different model for running than the one used for training really brings out the detail and quality of the images, better than using the same one! (Nothing against using the same one, though; that still works!)

And there you have it! I had a lot to say, and (again) I had to learn these priceless pieces of knowledge by MYSELF; thankfully, you new and seasoned Tensor artists should now have more ease in training your models online here on Tensor Art 🥰! I hope these tips are of great help to you (let me know if you're confused, or if I've said too much or too little 😬... this is supposed to be helpful!), and let us all grow into amazing Tensor Artists together!

P.S. When I learn something else new, I'll be sure to post "mini-articles" as updates to share these experiences with you all, Tensor Art community!

Until another time, Tensorians! Many good wishes to you all, fellow artists and creators! 🤗🤩🥰😍😊😁😀
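As a companion to Tip #2, here is a tiny sketch that flags files whose extensions Tensor Art's online training won't accept. The supported and unsupported lists come straight from the tip; the folder name is an assumption:

```python
from pathlib import Path

SUPPORTED = {".png", ".jpg", ".jpeg", ".webp"}          # accepted by Tensor Art (Tip #2)
# Known-bad formats called out in Tip #2: .avif, .jpg_large, .jfif

def check_dataset(folder: str) -> None:
    """Print which downloaded images are safe to upload and which to exclude."""
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in SUPPORTED:
            print(f"EXCLUDE {path.name}: unsupported format {path.suffix}")
        else:
            print(f"OK      {path.name}")

check_dataset("training_images")  # assumed folder of saved images
```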
📖 AI Tools Guidelines - Get new portraits with your face

🔥 [ Flux.1 ] Beautiful Portrait With FACE v2.0

1️⃣ Upload a photo
2️⃣ Set the parameters
3️⃣ Wait for generation
4️⃣ Get your new portraits
Explanation★Radio Buttons for AI Tools★AI Tool

This is a little late, but let me explain AI tool radio buttons, one of the event topics.

What is a radio button? It's something like this: several preset options plus a custom button are displayed. Let's get started making one.

ComfyUI setup: there are nodes that can use radio buttons and nodes that can't. I often use a node called promptlist, but that node can't be set up with radio buttons, so I connect a text node to it. For the text node, I recommend jjk. The "text" node here actually seems to correspond to a textbox node.

Editing the AI tool page: if the word "Edit" appears to the right of the settings screen, it worked. Now let's configure it.

In the prompt edit screen, select "Radio button" as the input method and click "Add". You get a screen that works like a folder manager. In this case, "Action" becomes the name of the radio button group, and "jogging" is added as an option.

Adding radio buttons: you can also create options that aren't among the suggestions. The top field is the display name, and the bottom field is the prompt text that actually gets inserted. (See the sketch below for how this name-to-prompt mapping works.)

Deleting radio buttons: people like me may be a minority, but I have no interest in the default prompt, so I delete "jogging". Having one option selected is mandatory, but you can delete options like this. Then I add my own, one after another. By the way, the second, seemingly empty radio button actually contains a space: my Animagine AI tool works fine even without specifying an action.

Done! That's it! After that, just give everything a quick check. AI tools seem to misbehave quite often. Good luck, everyone! ★
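The display-name/prompt-text pairing described above is essentially a small lookup table. Here is a hypothetical sketch of what the "Action" radio button group maps to; the labels and prompt fragments are illustrative, and Tensor.Art stores this mapping internally rather than as user-written code:

```python
# Hypothetical model of the "Action" radio button group: each option maps a
# display name (top field) to the prompt text actually inserted (bottom field).
action_options = {
    "Running": "running, dynamic pose",
    "Sitting": "sitting, relaxed pose",
    "(none)":  " ",   # a single space: the tool works without an action, as noted above
}

selected = "Running"                       # what the user clicks in the AI tool UI
prompt = "1girl, park, " + action_options[selected]
print(prompt)  # 1girl, park, running, dynamic pose
```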
[REYApping] Simple and Brief Explanation of AI Tool

Hello and welcome to the third edition of REYApping, a space where I write a bunch of nonsense. Without further ado, let's begin.

Never in my entire Tensor life did I expect to actually try to explain something. But here we are: an article about AI Tools. What is an AI Tool? Why make one? How is it different from Create mode? I'll try to explain.

What is an AI Tool?

Now, I might be wrong here (roast me in the comments), but here's my answer: an AI Tool is a simplified, more straightforward interface over a ComfyUI workflow. It saves you from seeing the bunch of tangled spaghetti that can potentially break your eyes and mind. Instead of customizing the workflow nodes directly, you get an interface similar to Create mode. The downside is that it can have limited parameters, since those are set by the tool's creator, and you won't know how the workflow works. Also, it sucks your credits and soul (Riiwa, 2024), but sadly doesn't suck your coc- *cough* never mind that last part.

Here's an image of a ComfyUI workflow:

Here's that same workflow made into an AI Tool:

Why Make an AI Tool?

Simplicity and straightforwardness in the palm(?) of your hand. That's it. Especially if your flow has a few variables that can be modified, such as prompts, steps, etc. If your flow has a lot of modifiable variables and/or you want more control over it, then I suggest you work directly in ComfyUI.

How is It Different from Create Mode?

Create mode lets you control basic functions such as samplers, which T5 you use, and other things like ADetailer, img2img, ControlNet, etc. An AI Tool, while it can do all that if the author sets it up, is generally limited to basics such as prompts, steps, resolution, batch size, and maybe seeds. You can't really use things like ADetailer or img2img and other fancy stuff by yourself; you depend entirely on what the tool provides. In short: Create mode allows a broader range of functions but with only basic abilities, while an AI Tool mostly allows specific functions, but can produce better results because of the dark magic trickery inside its comfy flow.

Thank you for reading this part of REYApping. See you in the next one (if there's any).
How to publish an AI Tool

To publish a tool, you need to have a workflow prepared. You can find workflows in ComfyFlow. From there you can make a new workflow, import a workflow file, or choose an already made one.

When you have selected a workflow to turn into an AI Tool, enter that workflow's editor. Inside the selected workflow, you need at least one AI Tool node (TA Nodes) integrated into the workflow. (More about TA Nodes: https://tensor.art/about/aitool-tutorial)

Then run the workflow. After it runs, press the "Publish" button in the top right corner and select "AI Tool". Now fill out the boxes (Name, Channel). If you have done everything correctly, you can also change the "User-configurable Settings". Fill everything in according to the tool/workflow and press Publish.
🎨 AI Tool: Turning Your Workflow into a Magical Black Box of Creativity! 🪄

Hey there, fellow tinkerers and pixel wizards! 🌟 Ever wanted to create an AI tool so powerful that even your future self wouldn't know how it works? Well, buckle up! Today we're diving into the quirky world of workflow wizardry, where you'll craft AI tools using ComfyUI and publish them like a mysterious, shiny black box. The best part? Your users won't see the chaos inside. 🤫

So, What's the Deal with AI Tools?

Imagine you're assembling a Lego masterpiece, except each piece is a node, and the result isn't a castle, it's an AI tool. 🏰 These tools take user inputs (like prompts or images), process them through a hidden workflow, and spit out something magical. Your users don't need to know what's under the hood; they'll just press buttons and enjoy the ride!

How to Build Your AI Tool (Without Losing Your Marbles):

1️⃣ Dream It: Start by conceptualizing what your AI tool will do. Want to turn doodles into masterpieces or mix Christmas sweaters with robot aesthetics? The possibilities are endless. 🎅🤖

2️⃣ Craft It: In ComfyUI, build your workflow by connecting nodes like a pro pipefitter. Each node has a purpose, from loading models to decoding images. This is where the magic happens, or the chaos, depending on your coffee intake. ☕✨

3️⃣ Test It: Run the workflow as an AI tool. At this stage, expect some hiccups. Maybe the colors look weird, or your robot Santa has three arms. That's fine; it's all part of the process!

4️⃣ Polish It: Update, adjust, and repeat until your tool is sleeker than a freshly polished apple. 🍎 Then publish it for the world to admire (or fear).

The Secret Sauce: Export/Import User Settings 🍔

When you update your workflow, the user-configurable settings can reset. 😱 But fear not! With the Export/Import feature, you can save and reload those settings faster than you can say "workflow meltdown."

How it works:
- Export: Before hitting the update button, export your settings. Think of it as taking a backup of your genius. 💾
- Import: After updating your workflow, reload the saved settings. Voilà, no more starting from scratch. 🪄

Pro Tip: This feature doesn't work if you change the nodes too drastically. So proceed with caution, or risk hearing your inner monologue scream. 😬

Nodes and Workflows: A Quickie Guide for the Clueless 🤷‍♂️

Nodes: Think of nodes as puzzle pieces. Each one handles a small task, like loading a model 🎒, decoding text 🧾, or sampling images 🎨. Connect them, and you've got a functional pipeline. Disconnected nodes, however, are just sad little islands of potential. 😢

Workflows: A workflow is what you get when you chain nodes together. It's like a recipe for your AI tool: load a model, process a prompt, generate an image, save it. Simple? Yes. Satisfying? Extremely. (See the sketch after this article for what that recipe looks like in practice.)

When to Publish Your AI Tool 🎉

Once you've created your workflow and polished it to perfection, it's time to publish! Your users will only see the polished front end, not the spaghetti-like chaos of nodes and connections you wrangled into submission. Encourage users to interact by configuring input fields like prompts or sliders. Their creativity meets your innovation; it's a win-win!

Tips for AI Tool Wizards-in-Training 🧙

- Start Small: Begin with simple workflows to avoid brain freeze. 🧊
- Tinker Away: Play with parameters to see how they affect the output.
- Be Bold: Experiment with styles and features. Combine multiple LoRAs for maximum chaos (and brilliance).

Conclusion

Congratulations, you're now equipped to create AI tools that will wow, confuse, and delight users! 🎉 So go forth and turn your wildest ideas into shiny black-box tools. And remember: with great power comes great responsibility, or at least some very weird outputs. 😜

Happy creating! 🎨🪄

BlackPanther

P.S. Don't forget to export those settings. Nobody likes redoing work twice!
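The "recipe" from the nodes-and-workflows section (load a model, process a prompt, generate an image, save it) is exactly the chain a minimal text-to-image workflow encodes. Here is a sketch of that chain as plain Python data, loosely modeled on ComfyUI's node-graph structure; the node IDs, wiring, and checkpoint name are illustrative, not an exact export:

```python
# A minimal text-to-image "recipe" as a node graph: each entry is a node, and
# ["<node_id>", <output_index>] references another node's output.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "example_model.safetensors"}},   # load a model
    "2": {"class_type": "CLIPTextEncode",                          # process a prompt
          "inputs": {"text": "a cozy cabin in the snow", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",                                # generate an image
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",                               # save it
          "inputs": {"images": ["6", 0], "filename_prefix": "output"}},
}
```

When this graph is published as an AI Tool, the creator exposes just a few of these inputs (say, the text on node 2 and the steps on node 5) as the sliders and text boxes users see.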
AI Tool -👌Easily create an Ai tool without prompt (Part 1)

Often we have a picture in mind and can find a similar picture, but we don't know how to write a prompt for it. Although Tensor provides a reverse-inference tool, it involves extra steps like copying and pasting, and it does not support NSFW. In short, filling in all the options is a very troublesome thing, and I am not the only one who thinks so!

Using a workflow to make a small tool can simplify a lot of this. You can see the various small tools I've made: basically, there is no need to write prompt words, because I am a very lazy artist.

The following is a simple tutorial teaching you how to make your first AI Tool. It is very simple. Just follow my pictures step by step!

Step 1: Create a new workflow.
Step 2: Select the img2img template.
Step 3: Double-click on a blank area of the interface, search for [wd] in the box that appears, and select the [WD14 Tagger] plug-in.
Step 4: Drag from the image output on the Load Image panel and connect it to the image input on the WD14 panel. This is the basis of workflows: connecting nodes!
Step 5: Change the WD14 model to the V3 version, the latest image-interrogation model. With it, you can turn your image into a prompt.
Step 6: Right-click on the CLIP Text Encode panel and select [Convert text to input].
Step 7: Double-click the blank area again and enter "string function".
Step 8: Right-click on the String Function panel and click [Convert text_b to input]; then connect [string] on the WD14 panel to [text_b] on the String Function panel.
Step 9: Connect the string output of the String Function panel to the text input of the CLIP Text Encode panel. Now your image becomes the positive prompt!
Step 10: Are you tired of reading? I am also tired of writing. Let's take a break. 😀😀😀😀😀😀😀😀
Step 11: Click ckpt_name on the Load Checkpoint panel to select a model. This time we choose a Pony model.
Step 12: In the String Function panel and the other CLIP Text Encode panel, fill in Pony's quality prompts. Positive: score_9,score_8_up,score_7_up. Negative: score_3,score_2,score_1. (The sketch after this tutorial shows this merge in plain code.)
Step 13: It's almost done! Set the numbers in the KSampler panel; refer to my values.
Step 14: Click upload in the Load Image panel, select an image you like (the longest side should not exceed 1280), then click Generate, and that's it!
Step 15: Click Publish in the upper right corner, then select Share Workflow. You will have your own workflow tool, and you can find and run it on your personal homepage.

This tutorial ends here. In the next issue, we will teach you how to convert the workflow into a gadget and make it more useful and complete. Thank you for your support!
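Steps 8-9 and 12 boil down to string concatenation: the WD14 tags are spliced together with Pony's quality tags to form the positive prompt. A sketch of that merge in plain Python; the tag list stands in for whatever WD14 Tagger actually returns for your image:

```python
# What the String Function node effectively does in steps 8-12: prepend the
# Pony quality tags to the WD14 tags to form the final positive prompt.
quality_positive = "score_9,score_8_up,score_7_up"
quality_negative = "score_3,score_2,score_1"

# Stand-in for WD14 Tagger V3 output on an uploaded image (illustrative tags).
wd14_tags = "1girl, solo, long hair, smile, outdoors, snow"

positive_prompt = f"{quality_positive}, {wd14_tags}"
print(positive_prompt)   # score_9,score_8_up,score_7_up, 1girl, solo, ...
print(quality_negative)  # goes into the second CLIP Text Encode panel
```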
🎭 AI Tool Spotlight: Facial Expression Adjuster & GPTs Flux Prompt PRO 🚀

Unleashing Creative Potential with AI: A Spotlight on the Facial Expression Adjuster and GPTs Flux Prompt PRO

In the ever-evolving world of artificial intelligence, precision and flexibility are at the heart of creating truly engaging and realistic digital content. From lifelike character animations to the fine-tuning of AI-generated imagery, a new generation of tools is enabling creators, animators, and designers to bring their visions to life with unprecedented control and detail. Two such cutting-edge tools, the Facial Expression Adjuster and GPTs Flux Prompt PRO, demonstrate the transformative power of intelligent automation in the creative workflow.

1. The Facial Expression Adjuster
Link: https://tensor.art/template/795874684511075193

The Facial Expression Adjuster is a versatile AI solution designed to enhance and personalize digital facial expressions down to the tiniest detail. Whether you're creating a 3D animated character or refining the emotional nuances of a still portrait, this tool lets you achieve unmatched accuracy and expressiveness. Key features include:

- Head Positioning: Easily control parameters such as pitch, yaw, and roll, ensuring perfect alignment and posture.
- Eye Expressions: Fine-tune blink and wink behaviors, adjust eyebrow angles, and position pupils for subtle or dramatic effects.
- Mouth Phonetics: Simulate mouth shapes corresponding to various phonemes ("A," "E," "W," etc.) to produce speech-like expressions.
- Smile Calibration: Dial in the intensity of smiles, from a faint grin to a broad beam, adding depth and realism to character personalities.

Ideal for animators, 3D artists, and AI developers, the Facial Expression Adjuster makes it simple to breathe life into digital avatars and scenes. By offering granular control over facial parameters, it unlocks new creative possibilities for storytelling and user engagement.

2. GPTs Flux Prompt PRO
Link: https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro

As AI-generated images increasingly reshape the creative landscape, the need for effective prompt engineering has never been greater. GPTs Flux Prompt PRO is a specialized tool that streamlines the process of crafting compelling, visually rich prompts for models like FLUX. By guiding creators through practical steps, offering real-world examples, and applying proven methods, it ensures that the prompts you design unlock the full potential of AI-generated visuals. Through this hands-on approach, even newcomers to prompt engineering can rapidly learn how to produce captivating outcomes that align with their artistic vision.

Reinventing Your Workflow with AI

By incorporating the Facial Expression Adjuster and GPTs Flux Prompt PRO into your toolkit, you can drastically enhance the quality and impact of your creative output. These tools don't just automate routine tasks; they empower you to direct AI-driven systems with precision and clarity, resulting in more refined, expressive, and emotionally compelling digital content.

From breathing authenticity into virtual characters to perfecting your prompt-crafting skills, these advanced resources provide a blueprint for success in a world where technology and artistry continue to converge. If you're ready to push your creative boundaries and discover new dimensions in AI-assisted art and animation, the Facial Expression Adjuster and GPTs Flux Prompt PRO stand ready to elevate your work to new heights.
AI Tool Video Generation Recommendation

Explore the evolving landscape of AI-powered video generation with our curated list of tools designed to bring your ideas to life. Each tool offers unique features tailored to diverse creative needs, from photorealistic renders to stylized motion. While their capabilities vary, these tools push the boundaries of AI creativity, though limitations like short durations and resolution constraints persist. Dive into this guide to discover the possibilities and challenges of SVD, CogVideoX, PyramidFlow, HunyuanVideo, and more: ideal companions for your video generation journey.

1. Shutterbug | SVD & SD3.5L Turbo by PictureT
https://tensor.art/template/803606557651731715
Uses SVD as its base.
Limitations and bias:
- The generated videos are rather short (<= 1.5 sec), and the model does not achieve perfect photorealism.
- The model may generate videos without motion, or with very slow camera pans.
- The model cannot be controlled through text.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.

2. Let's generate a video using CogVideo-5B by oaa
https://tensor.art/template/783248442733541899
Sample: https://image.tensorartassets.com/workflow_template_showcase/783243275902494025/a3767754-af19-ab28-8ffe-80632559b43e.mp4
Limitations:
- The generated videos are rather short.
- Limited parameters: prompt only, text2video.
- Low resolution only.

3. Make your pictures come alive with CogVideo-5B by oaa
https://tensor.art/template/783254086320651706
Sample: https://image.tensorartassets.com/workflow_template_showcase/783255194436647499/63fbaa72-4493-3d5c-886b-19b6fd481b41.mp4
Limitations:
- The generated videos are rather short.
- Img2video with support only for landscape images with a specific ratio.
- Low resolution only.

4. Let's generate a 384p video using PyramidFlow by oaa
https://tensor.art/template/783281513981656372
Sample: https://image.tensorartassets.com/workflow_template_showcase/790275614820104217/87be7526-ac5e-bca3-ac91-f80a1bfc58eb.mp4
Limitations:
- The generated videos are rather short.
- Limited parameters: prompt only, text2video.
- 384p only.

5. Make your pictures come alive with PyramidFlow by oaa
https://tensor.art/template/789854342952861190
Sample: https://image.tensorartassets.com/workflow_template_showcase/790272754371847397/2a99eba3-7917-a1b7-cf1b-d3468c90921f.mp4
Limitations:
- The generated videos are rather short.
- Img2vid only works with certain aspect ratios.
- 384p only.

6. Make your pictures come alive with PyramidFlow - 768P Version by oaa
https://tensor.art/template/789871312368614821
Sample: https://image.tensorartassets.com/workflow_template_showcase/790275017819641763/f6523cfa-a883-9b05-3149-54b4ff999427.mp4
Limitations:
- The generated videos are rather short.
- Img2vid only works with certain aspect ratios.
- Expensive to run.

7. Mochi 1 preview - video generation by oaa
https://tensor.art/template/789464613325392462
Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems.
Sample: https://image.tensorartassets.com/workflow_template_showcase/789223103034147869/9916ffbd-e375-d017-a198-e7a1af1a7dc5.mp4
Limitations:
- The generated videos are rather short.
- Text2vid only.

8. HunyuanVideo by oaa
https://tensor.art/template/803673151119656752
Sample: https://image.tensorartassets.com/workflow_template_showcase/803944541527945002/93638436-ae16-480b-3c6e-2b2e725eae0c.mp4
Limitations:
- The generated videos are rather short.
- Text2vid only.

9. DimensionX - 3D Scene Generation by oaa
https://tensor.art/template/796266016161330278
Sample: https://image.tensorartassets.com/workflow_template_showcase/796264165045080771/da5e0cb9-60ec-8277-616d-a7093d9f5bb7.mp4
Limitations:
- Img2video that only works with certain aspect ratios.
- Only rotates in the left direction.
How to create an AI Tool for beginners - Christmas Walkthrough AI TOOL

In this article I will share how easy it is to create an AI Tool as a beginner. Check it out!

1. Click ComfyFlow in the Create menu at the top.
2. Click New Workflow, or Import Workflow if you already have one.
3. Choose any template you want; in this walkthrough I will use the text2img template.
4. A new browser tab will appear; wait until it has fully loaded.
5. Set the parameters you want. In this walkthrough I will change only the checkpoint and the prompt, then do a test run.
6. After a successful test, click Publish and choose AI Tool.
7. A new tab will appear; fill it in, then click Publish.
8. TADA! Your AI Tool is now public.