Tensor.Art
Creation
Get started with Stable Diffusion!
ComfyFlow
An amazing ComfyUI experience!
Host My Model
Share my models, get more attention!
Online Training
Make LoRA Training easier!
HHM
HHM FLUX Interior inspire i2i
riwa
Turnaround Character Sheet Generator - FLUX
user_552368125411396324
Snakeyear, Jewelry Design
HuangTaobox
Anime Come True
FuturEvoLab
🐍Year of the Snake Energy Pendant & Jewelry Generator|FuturEvoLab
Antikorper_TA
Traditional Chinese art style - Snake year
user_817868125411379241
Snake Year, Shoe Design
Rinoka
Tools2
Shopon_skp
SKP Mystic Vision XL
zDKZ
ILLUSTRATION SNAKE YEAR THEME
Ai Art Vision
Holiday Magic
Ava
Replace Background
Architech1904
Wabisabi Interior Design from Architech1904
Ai Art Vision
Realistic Vision
SuperRedPig
One-click Lego style V2.0/一键转乐高风2.0版本
TensorArt
Stable Diffusion 3 (SD3) Medium Basic
HuangTaobox
Kawaii Little Painter卡哇伊小画家
SuperRedPig
乌萨奇冲冲冲/Usachi Go
Apolonia
Fluxcore ~ Edward Hopper Art
riwa
You and Snake - SNAKEYEAR 2025
riwa
TATTOO Yourself !
antoinewu990107
Artistic re-creation
KurtC PhotoEd
Flux Inpainting Mask or Img2Img + Upscale + loras + model Switch
Ava
Design your QR Code
Models
CHECKPOINT SD 1.5
EXCLUSIVE
Anime Black Line-v1.0
TXZN
CHECKPOINT Illustrious
Illustrious-XL-v0.1
onomaai
LORA Flux
EXCLUSIVE
Flux Cartoon-v1
TXZN
LORA Flux
Alcohol Ink -1
Hanna.Mateen
LORA Flux
EXCLUSIVE
Drim's Comic Book Style Lora (FLUX)-V1
Drimayya
LORA SD 1.5
EXCLUSIVE
Hands Repair |Lora-V5
poakl
LORA Flux
EARLY ACCESS
painting pavsok Style-flux
AdrarDependant
LORA Flux
Flux Arnold Lobel Style-1.9
Invisidude
LORA SD 1.5
Da Vinci Tech - World Morph-v1.0
Navimixu
LORA HunyuanDiT
EXCLUSIVE
HunyuanDiT - Cover painting-V1
SDG
LORA SD 1.5
Long cat Meme (Solo) | Concept LoRA-v1.0
FallenIncursio
LORA Flux
EXCLUSIVE
Decorate the flower sea staircase space-flux1.0-Decorate the flower
searn
LORA Flux
Tarots-V1
Ciro Negrogni
LORA Flux
EARLY ACCESS
Lantern -2024-11-27 02:14:15
Fiezafareena
CHECKPOINT HunyuanDiT
EXCLUSIVE
Hun-Flux SELECTION-TDNM-V1
TDNM
VIDEO CogVideoX 2B
CogVideoX2B-v1
hotpot
LORA SD 1.5
Leviathan レヴァイアさん || Mythical Creature-v0.1
Shippy
LORA Flux
EARLY ACCESS
Regal Fantasy Artgem-Fantasy Artgem
Orked Orked
LORA SD3
EXCLUSIVE
Random Sketch-V1
Lobbbby
LORA HunyuanDiT
EARLY ACCESS
ChugAx_WuKong_DIT-Wukong Niji
ChugAx
LORA SD 1.5
tshee - vector style art-v1.0
muf00d
CHECKPOINT HunyuanDiT
HunYuanDiT-V1.2-EMA
腾讯混元
LORA Flux
EARLY ACCESS
Flux • GTA San Andreas style-Loading
XIZ
LORA SD3
Cyborg Style SD3m-v1
Goofy Ai
LORA Flux
Flux realism Lora-v1
1ronMan
Workflows
Flux Style Transfer
user_776594434589540139
Precise Person Removal with Inpainting
user_775929092615802455
silat malay man
Ai_AienBlackRose
10 Nodes
Simple img to video for beginners.
Draco_lord
RF inversion FLUX
~DW~
11 Nodes
FLUX BASIC IMG TO IMG(LORA)
slow0000
E-commerce Product Workflow
osgoh88
14 Nodes
Flux with Lora + Upscale
ADHDTV Studios
12 Nodes
Flux.1-Fill-Inpaint
derek
11 Nodes
Halloween2024 Theme Flux
Izmys
14 Nodes
🎃 Halloween 2024 • Pony Diffusion Workflow
PictureT
21 Nodes
YT SHOWCASE FLUX.1-DEV
yt03020907
12 Nodes
Background Remover
Danlky
9 Nodes
Cyber Girl creation
petit moine 小和尚
FLUX REDUX_MIX PROMPT & STYLE IMG
Anna ai art
13 Nodes
Test crazy
Juanitouser_668708339332667411
10 Nodes
Image to Face Detailer
PictureT
Different pose
TestYun
24 Nodes
Flux Txt2Img + Loras+upscale
Aleavka
14 Nodes
ILLUSTRATION SNAKE YEAR THEME
zDKZ
Articles
🐍 Snake Soirée Creative Contest 🐍 through January 20
🎉 Lunar New Year: Snake Soirée 🎉
Lunar New Year falls on January 29, 2025, and we are delighted to announce the Snake Soirée Creative Contest! 🌟 The Year of the Snake 🌟 symbolizes wisdom, cunning, and introspection. 2025 will be a year full of opportunities and challenges, making it a perfect time for creativity and celebration. 🎊 The Year of the Snake festivities begin; show us your imagination! 🎊

⏰ Event period: January 1 to January 20 (UTC). After the event ends we will take two days for judging, and the winners will be officially announced on January 23.

🌟 Snake Soirée: images/videos and AI tools. The Snake Soirée is split into two sessions: the Posts Soirée and the AI Tools Soirée. Enjoy! 😝

🖼️ Posts Soirée: Snake and TenTen
Task: Long time no see! Do you all remember TenTen, TensorArt's mascot? She has come to celebrate the Year of the Snake! 🌟 Combine TenTen with snake elements 🌟 in your work and post it on TensorArt. There are no other guidelines: let your creativity shine and blend TenTen with snake elements however you like! Tag your post with snakeyear when you submit!
Winners and prizes:
Best Creativity Award: 3 days of Pro + 200 credits (10 winners)
Best Aesthetics Award: 3 days of Pro + 200 credits (10 winners)
Participation prizes: image post: 50 credits; video post: 100 credits (image and video rewards can be earned at the same time, up to 150 credits per account)

🎨 AI Tools Soirée: Design Snake
Task: let a little snake "slither" into the world of design. 🌟 Create an AI tool in the design field 🌟 that includes snake elements or snake imagery as design elements, for example poster design, fashion design, artistic typography design, or other AI tools with snake elements. AI tools may expose at most three parameter settings and must be design-related; otherwise they are not eligible for rewards. Tag with snakeyear when you upload!
Winners and prizes:
Best Creativity Award: $29.90 (3 winners)
Best Aesthetics Award: $29.90 (3 winners)
Participation prize: 200 credits

📝 Rules
Posts and AI tools must match the corresponding theme and requirements; otherwise they will not earn rewards. Tagging #snakeyear counts as entering the event; entries without the tag are not eligible for rewards. Cash rewards are deposited into the GPU fund at the end of the event and can be withdrawn at any time. Winners are decided by the official TensorArt team. Users with the default system avatar and nickname will not receive rewards. Event content must comply with the community rules: NSFW, child pornography, celebrity images, violence, and low-quality content are not eligible. Final interpretation of the event belongs to TensorArt.
TensorArt 2024 Community Trends Report
2024: A Year of Breakthroughs
This year marked an explosion of innovation in AI. From language and imagery to video and audio, new technologies emerged and thrived in open-source communities. TensorArt stood at the forefront, evolving alongside our creators to witness the rise of AI artistry.

Prompt of the Year: Hair
Surprisingly, "hair" became the most-used prompt of 2024, with 260 million uses. On reflection, it makes sense: hair is essential in capturing the intricacies of portraiture. Other frequently used words included eyes (142M), body (130M), face (105M), and skin (79M).
Niche terms favored by experienced users, like detailed (132M), score_8_up (45M), and 8k (25M), also dominated this year but saw a decline in usage by mid-year. With the advent of foundational models like Flux, SD3.5, and HunyuanDiT, natural-language prompts became intuitive and multilingual, removing the need for complex or negative prompts and lowering the barrier to entry for creators worldwide.

Community Achievements
Every day, hundreds of new models are uploaded to TensorArt, fueling creativity among Tensorians. This year alone:
Over 400,000 models are now available.
300,000 images are generated daily, with 35,000 shared via posts, reaching 1 million viewers and earning 15,000 likes and shares.
This year we introduced AI Tool and ComfyFlow, welcoming a new wave of creators. AI Tool simplified workflows for beginners and enabled integration into industry applications, with usage distributed across diverse fields.
In November, TensorArt celebrated its 3 millionth user, solidifying its position as one of the most active platforms in the AI space after just 18 months. Among our loyal community are members like Goofy, MazVer, AstroBruh, and Nuke, whose dedication spans back to our earliest days.

A Global Creative Exchange
AI knows no borders. Creators from around the world use TensorArt to share and connect through art. From the icy landscapes of Finland (1.6%) to the sunny shores of Australia (8.7%), from Pakistan (0.075%) to Cuba (0.003%), Tensorians transcend language and geography.
Generationally, 75% of our users are Gen Z or Gen Alpha, while another 9% belong to Gen X and the Baby Boomers. "It's never too late to learn" is a motto they live by.
Gender representation also continues to evolve, with women now accounting for 20% of the user base.
TensorArt is breaking barriers: technical, social, and economic. With no need for costly GPUs or advanced knowledge of parameters, tools like Remix make creating stunning artwork as simple as a click.

The Way Tensorians Create
Most active hours: weeknights, 7 PM to 12 AM, when TensorArt serves as the perfect way to unwind.
Platform preferences: 70% of users favor the web version, but we've prioritized app updates for Q1 2025 to close this gap.
Image ratios: female characters outnumber male ones 9:1. 67% of images are realistic, 28% are anime, and 3% are furry.
Favorite colors, in order: black, white, blue, red, green, yellow, and gray.

A Growing Creator Economy
In 2024, Creator Studio empowered users to monitor their model earnings. Membership in the TenStar Fund tripled, and average creator income grew 1.5x compared to last year.
In 2025, TensorArt will continue to prioritize the balance between the creator economy and market development. TA will place greater emphasis on encouraging creators of AI tools and workflows to provide more efficient and convenient practical tools for specific application scenarios.
To this end, TA will launch the Pro Segment to further reward creators, offering them higher revenue coefficients and profit sharing from Pro user subscriptions.

2024 Milestones
This year, TensorArt hosted:
26 site events and 78 social media campaigns.
Its first AI Tool partnership with Snapchat, pioneering AI-driven filters that were featured as a case study by Snapchat.
The launch of "Realtime Generate" and "Talk to Model," revolutionizing how creators interact with AI.
A collaboration with Austrian tattoo artist Fani to host a tattoo design contest, where winners received free tattoos based on their designs.
TensorArt is committed to advancing the open-source ecosystem and has made significant strides in multiple areas:
For newly released base models, TA ensures same-day online running and next-day support for online training. Limited-time discounts let Tensorians experience the latest models.
To boost creative engagement with new base models, TA hosts high-reward events for each open-source base model, incentivizing Tensorians across dimensions such as Models, AI Tools, and Posts.
Beyond image generation, TA actively supports the open-source video model ecosystem, enabling rapid integration of CogVideo, Mochi, and HunyuanVideo into ComfyFlow and Creation. In 2025, TA plans to expand online video functionality further.
Moving from "observer" to "participant," TA has launched TensorArt Studios with the release of Turbo, a distilled version of SD3.5M. In 2025, Studios will unveil TensorArt's self-developed base model.
TensorArt continuously funds talented creators and labs, providing financial and computational resources to support model innovation. In 2025, Illustrious will collaborate exclusively with TensorArt to release its latest version.

Looking Forward
From ChatGPT's debut in 2022 to Sora's groundbreaking arrival in 2024, AI continues to redefine innovation across industries. But progress isn't driven by one company; it thrives on the collective power of open-source ecosystems, inspiring collaboration and creativity.
AI is fertile ground, filled with the dreams and ambitions of visionaries worldwide. On this soil we've planted the seed of TensorArt. Together, we will nurture it and watch it grow.

2024 Annual Rankings
Each month of 2024 brought unforgettable moments to TensorArt. Based on events, likes, runs, and monthly trends, we've curated the 2024 Annual Rankings. Click to explore!
My Journey: Training a LoRA Model for Game Art Design
What is LoRA?
LoRA (Low-Rank Adaptation) is a powerful technique to create custom AI art models, perfect for game designers looking to develop unique visual styles.

My Training Setup for the Adrar Games Art Style
Preparing Your Training Dataset

Technical Specifications
Base Model: FLUX.1 - dev-fp8
Training Approach: LoRA (Low-Rank Adaptation)
Trigger Words: Adrr-Gmz
Epochs: 5
Learning Rate: 0.0005 (UNet)

Key Training Parameters
Network Configuration
Dimension: 2
Alpha: 16
Optimizer: AdamW 8bit
LR Scheduler: Cosine with Restarts

Advanced Techniques
Noise Offset: 0.1
Multires Noise Discount: 0.1
Multires Noise Iterations: 10

Sample Prompt
"A game art poster of a Hero standing in a fantastic ancient city in the background, and in the top a title in a bold stylized font 'Adrar Games'"

My Learning Process
Challenges:
Creating a consistent game art style
Capturing the essence of the "Adrar Games" visual identity
Balancing technical parameters with creative vision
Insights:
LoRA allows precise control over art generation
Careful parameter tuning is crucial
Small adjustments can significantly impact results

Practical Takeaways
Start with a clear artistic vision
Experiment with different settings
Don't be afraid to iterate and refine

Recommended Next Steps
Generate multiple sample images
Analyze and compare results
Adjust parameters incrementally
Build a library of unique game art assets

Would you like me to elaborate on any part of my LoRA training experience?
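A note on how Dimension and Alpha interact: in standard LoRA implementations the learned update is multiplied by alpha / rank, so the unusual pairing above (Dimension 2, Alpha 16) amplifies the adapter's effect eightfold. A minimal sketch of that arithmetic (plain Python, no dependencies; the helper name is mine):

```python
def lora_scale(network_alpha: float, network_dim: int) -> float:
    """Effective multiplier applied to the LoRA update (B @ A)
    in standard implementations: scale = alpha / rank."""
    return network_alpha / network_dim

# The setup above: Dimension (rank) = 2, Alpha = 16.
print(lora_scale(16, 2))   # 8.0 -> a strongly amplified, very low-rank adapter
# A more conventional pairing, for comparison: rank 32, alpha 16.
print(lora_scale(16, 32))  # 0.5
```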
I Just Want to Take It Easy: Various LoRA Settings (Model Training)
In cold winters I don't want to move. In hot summers I don't want to go outside. In pollen-heavy spring I want to stay home. In autumn, when dead leaves blow around, I want to gaze out the window. For me, GPT is a great partner that does all sorts of things in my place. It would be perfect if it also made my meals and ran my bath. Here is how I, someone who thinks of nothing but taking it easy, set up detailed LoRA settings. Ta-dah!

Have GPT work out the detailed settings
Upload one sample image and ask something like this (change the parts in red to suit your project):

"I plan to train a LoRA for the Flux AI image-generation model. The LoRA's theme is 'wedding venues'. I have prepared 40 images like the attached one. Could you suggest optimal settings for Repeat, Epoch, Network Rank (Dim), Network Alpha, and so on? Are there any cautions about tag settings? Could you also write a prompt for generating sample images?"

It also helps to mention anything else you want to prioritize.

GPT's actual reply
Here is a detailed explanation of the optimal settings, cautions, and sample prompt for a wedding-venue LoRA.

LoRA training settings
With the 40 images provided, the dataset is on the small side, so the following settings are recommended.
Basic settings:
Repeat: 5-10. Because the dataset is only 40 images, repeating each image compensates for the small amount of training data.
Epoch: 10-15. This gives sufficient training for the data volume while avoiding overfitting.
Network Rank (Dim): 16-32. A rank of 16-32 keeps the model flexible; too low dilutes what is learned, too high raises the risk of overfitting.
Network Alpha: 16. Setting Alpha to roughly the same value as the Rank stabilizes LoRA training.

Notes on tag settings
Common tags: add base elements such as "wedding venue", "flower decorations", "arches", "drapes", and "fairy lights" as tags, so the wedding-venue elements are learned properly.
Detailed tags: tag the elements in each image individually. For example: flower types ("roses", "floral arrangements", "bouquets"); decorations ("candles", "lanterns", "string lights", "arched drapery"); lighting and mood ("soft lighting", "romantic setting", "warm glow").
Avoiding proper nouns: be careful not to include specific venue names or brand names.

Sample image generation prompt
Once training is complete, you can generate sample images with the following prompt:
(masterpiece, best quality, ultra-detailed), wedding venue, floral arch, draped curtains, fairy lights, romantic lighting, beautiful flower arrangements, glowing candles in glass lanterns, soft sunlight through tall windows, elegant decorations, roses, pastel colors, dreamy atmosphere, wooden floor, luxurious and enchanting wedding setting

Additional enhancement elements:
For a seasonal feel: spring wedding or winter wedding theme
To change the background: sunset lighting or nighttime fairy lights
To emphasize decorations: detailed flower patterns or ornate arch design

Notes
When generating, adjust the LoRA weight to around 0.7-1.0 to optimize the model's influence. After generating samples, check specifically whether the LoRA has learned the intended elements, and adjust accordingly. Try these settings, and if adjustments are needed I await your feedback! ✨
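To sanity-check GPT's numbers, total optimizer steps follow directly from the dataset math: steps = images x repeat x epochs / batch size. A quick sketch using the ranges suggested above (the batch size of 1 is my assumption; the reply doesn't specify one):

```python
def total_steps(images: int, repeat: int, epochs: int, batch_size: int = 1) -> int:
    """Steps the trainer will actually run: each epoch sees every image
    `repeat` times, and `batch_size` images share one optimizer step."""
    return images * repeat * epochs // batch_size

# 40 wedding-venue images with the recommended low/high settings:
print(total_steps(40, repeat=5, epochs=10))   # 2000 steps
print(total_steps(40, repeat=10, epochs=15))  # 6000 steps
```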
'Padoru' Illustrious LoRA Model Training Guide
Link to the LoRA: https://tensor.art/models/806808906744706431/Podoru-Meme-LoRa-e10
I fetched my training images from: https://apps.apple.com/us/app/sankaku-anime-ai-girlfriend/
The pic collage shown here was created using https://gandr.io/
Then I used the built-in Tensor Art caption tool to set the prompts.
Example:
"padoru 1girl solo long hair smile open mouth simple background hair ornament long sleeves hat white background holding tail full body yellow eyes :d red hair pointy ears hairclip chibi fur trim capelet fangs transparent background blush stickers monster girl slit pupils christmas red headwear santa hat santa costume meme scales sack lamia holding sack miia \(monster musume\)"
Training settings:
Base model: NovaXL (shown as the base model in the LoRA)
Unet learning rate: 0.0001
Repeat: 20, Epochs: 10
Constant learning rate
Network Dim: 64, Alpha: 32
The loss chart is absurdly low. And here are some examples of the output from the NovaXL Illustrious model. Very cute!
That's all for this training guide. Merry Christmas! /Adcom
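If you want to run a LoRA like this outside Tensor Art, the pattern below is a minimal local-inference sketch with Hugging Face diffusers. The checkpoint and LoRA file names are placeholders (NovaXL would need to be downloaded separately), but any SDXL/Illustrious-family checkpoint follows the same API:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder paths: substitute your own checkpoint and LoRA files.
pipe = StableDiffusionXLPipeline.from_single_file(
    "novaxl_illustrious.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(".", weight_name="padoru_lora.safetensors")

# The trigger word from training ("padoru") must appear in the prompt.
image = pipe(
    "padoru, 1girl, solo, chibi, santa costume, holding sack, white background",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("padoru.png")
```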
Model Training: Results of the "Christmas Decorations Vintage" Model with 10 Prompts
AI Model for Classifying Vintage Christmas Decorations
AI technology, particularly machine-learning models like CNNs, can be used to identify vintage Christmas decorations. The process involves:
Dataset: collecting images of vintage ornaments (1920-1980) with proper annotations.
Preprocessing: preparing images through resizing and augmentation.
Training: using CNN models like ResNet to detect key features such as retro designs or specific materials.
Evaluation: measuring accuracy and fine-tuning the model.

Visual Results from my Model: "Christmas Decorations Vintage"
The model can provide:
Automatic identification of vintage ornaments (e.g., pastel glass balls or retro string lights).
Visual categorization by era, such as the 1950s or 1970s, displayed in collection panels.
Detection of unique materials like glass, wood, or textured plastic, accompanied by a description of their historical use.

10 Prompts for Vintage Christmas Decorations:
1. "A collection of 1950s pastel glass Christmas ornaments with glitter details, displayed on a wooden table with soft vintage lighting."
2. "Retro ceramic Christmas figurines from the 1970s, featuring angels and Santa Claus, with a cozy holiday backdrop."
3. "A Christmas tree decorated with 1920s-inspired metallic tinsel and glass baubles, surrounded by antique gifts."
4. "Close-up of vintage Christmas string lights with large, colorful bulbs glowing warmly in a dimly lit room."
5. "Mid-century modern Christmas ornaments in soft pastel tones, styled with minimalist retro decor."
6. "Handcrafted wooden nutcracker ornaments from the 1940s, arranged on a snowy windowsill."
7. "A wreath made of vintage glass ornaments in vibrant colors, hung on an old wooden door with a snowy background."
8. "A set of antique Christmas postcards from the 1930s, featuring illustrations of festive decorations and snowy scenes."
9. "A shelf displaying 1960s Christmas decorations: ceramic trees, vintage candles, and glittery ornaments."
10. "A warm holiday scene with a 1940s-inspired fireplace mantel decorated with garlands, stockings, and vintage baubles."
These prompts evoke a nostalgic and festive aesthetic, capturing the charm of vintage Christmas decor.
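The classification pipeline described above maps onto a few lines of PyTorch. A minimal sketch, assuming a folder-per-era dataset layout; all paths and class folders here are hypothetical:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Preprocessing: resize plus light augmentation, as described above.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
# Hypothetical layout: vintage_ornaments/1950s/*.jpg, vintage_ornaments/1970s/*.jpg, ...
data = datasets.ImageFolder("vintage_ornaments", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# ResNet backbone with a fresh head for the decoration-era classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):          # train, then evaluate and fine-tune as needed
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```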
It's done! I was able to do Model Training too!
"It's done! I was able to do model training too!"
Model training may be set as an assignment at an event. The reward is well worth the effort, so I definitely wanted to try it.
https://tensor.art/models/806517974249805611/Christmas-Yggdrasil-2024-12-12-07:37:37
This is a model I made for an event. All I had to do was have the AI draw three pictures and "munch through" the online training. I will explain my example in five steps.

First step: think about the theme of the model and prepare three pictures. Pictures drawn by AI are fine (be careful about copyright). Anything works, but if possible it's best to choose something you like. I like big things, so I chose three pieces including this one.

Second step: upload the images from the Online Training page. Then click on each uploaded image and "delete" the inspirational words from the radio buttons on the right. The key is to "erase" rather than "keep". Do this for all the images you upload.

Third step: enter the parameters. Let me introduce what I think is the easiest and most reasonable method. First, for the model, select "SD 3.5 Large" from the "Standard" tab. The reason is that it's cheap right now (lol). Model training consumes a lot of credits, so this element is not trivial. (However, the theme for December 18th is to showcase the "Illustrious" model, so be careful about that.) There are several places where you have to type in text. The most important is the trigger word: enter a short phrase here that inspires you. If your only goal is to create a model, I don't think you need to fiddle with the other numbers much. In my case, "Text Encoder learning rate" was set to 0.00005, "Clip Skip" to 2, "Network Dim" and "Network Alpha" to 128, "Repeat" to 10, and "Epoch" to 20. The LR Scheduler was set to "cosine_with_restarts". All that remains is to write a sample-generation prompt and a negative prompt. This prompt is just for the sample images, so write it exactly as you would when asking the AI to draw a picture. Then press the button to start training, and the AI will "study drawing" for you. This amount of learning takes approximately 45 minutes.

Fourth step: create a project. Create → Add LoRA → Host my model → Create a project. This takes you to the project launch screen. Enter the necessary information and publish your project. This project becomes the "container" for the model you created. At this stage, the project holds no model yet.

Final step: "install" the completed model into the project and publish it. When training is complete, you should see a screen like this. Click "Publish" for the set with the image you like. A "Select Project to Publish" pop-up will appear; select the project you just created. You will be redirected to the "Host a model" screen, so just enter the model data and publish it.

You can perform model training with the steps above. Once you try it, you'll realize it's not that difficult. Learning can be done by AI, but inspiration can only come from humans. Bring your inspiration to life in your models. I hope this article helps you in your creations. Thank you.
Introducing LoRA: A Lightweight Model Training Technique for Customizing Text-to-Image
What is LoRA?
LoRA (Low-Rank Adaptation) is a clever technique that lets you tweak existing AI models without having to start from scratch. Imagine you have a pre-trained model that can generate all sorts of images, but you want it to create art in a specific style, like anime or watercolor. LoRA allows you to customize this model efficiently, saving you a ton of time and computing resources.

Why is LoRA Important for Text-to-Image?
Efficiency: Instead of retraining the entire model, LoRA only adjusts a small part of it. This means it's much faster and requires less computing power.
Customization: Want to create your own unique art style? LoRA lets you fine-tune the model to generate images that match your specific preferences.
Flexibility: LoRA can be applied to various text-to-image models, not just Stable Diffusion.

How Does LoRA Work?
Think of it like teaching a dog a new trick. You don't have to retrain the entire dog, just teach it a few new commands. Similarly, LoRA adds small adjustments to specific layers of the pre-trained model, allowing it to learn new things based on the data you provide.

Benefits of Using LoRA:
Cost-effective: The faster training process means you'll spend less on computing resources.
Better Results: With careful customization, LoRA can generate higher-quality images that are more aligned with your desired style.
Accessibility: LoRA makes it easier for anyone to experiment with text-to-image models, even if you don't have a supercomputer.

Use Cases for LoRA:
Creating Unique Art Styles: Train LoRA on a dataset of images in a specific style, like vintage comics or impressionist paintings, to generate new images in that style.
Generating Specific Objects or Concepts: Want to create images of fantasy creatures or futuristic cityscapes? LoRA can help you achieve that.

Conclusion
LoRA is a powerful tool for customizing text-to-image models. Its efficiency and flexibility make it a game-changer in the field of generative AI.

Additional Tips:
Quality Data: The quality of your training data directly impacts the quality of your results.
Experiment with Hyperparameters: Tweaking settings like the learning rate and the number of training epochs can significantly affect the model's performance.
Join the Community: Online communities focused on LoRA can provide valuable insights and support.

In summary, LoRA is a fantastic way to create personalized AI art. Give it a try and see what you can create!
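The "small adjustments to specific layers" idea has a compact mathematical form: a frozen weight W is augmented with a trainable low-rank product, y = Wx + (alpha/r) * B A x, where only A and B are trained. A minimal PyTorch sketch of the idea (an illustration, not any particular library's implementation):

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank              # standard LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # B starts at zero, so the layer behaves exactly like the base
        # model until training moves A and B.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8, alpha=16)
print(layer(torch.randn(1, 768)).shape)  # torch.Size([1, 768])
```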
❄️ Effective prompt words to get great images with the Christmas model ("Model Training") ❄️
Hello everyone! 🥰 Christmas is coming up, so I would like to introduce some words you should include in your prompts to create better works with a Christmas image-generation model.

First of all
In AI image generation, the choice of words in your prompt greatly affects the quality of the image. In this article we introduce tips that are easy to understand even for beginners and will improve the quality of your images, not only for Christmas but for everyday prompts too.

1. Basic strategy
To create an effective prompt, keep the following points in mind. They make it easier for the AI to generate images that match your intent.
Style: specifies the atmosphere and drawing method of the image. Examples: "anime style", "cyberpunk", "watercolor painting", "ink painting style", "realistic photo", etc.
Type: indicates the composition of the image. Examples: "portrait", "landscape", "full-body character", "cowboy shot", etc.
Background: clarifies the setting and backdrop of the image. Examples: "sunset", "dense forest", "futuristic cityscape", etc.
Mood: sets the overall atmosphere of the image. Examples: "dramatic", "peaceful", "vibrant", etc.

2. Elements to emphasize and examples of their use
To have the AI reflect specific elements, use phrases like the following.
Lighting effects:
"golden hour lighting" (example: "a serene mountain landscape, golden hour lighting, highly detailed")
"neon glow" (example: "a cyberpunk city at night, neon glow, atmospheric")
Camera perspective:
"close-up of a face" (example: "a close-up of a young woman's face, freckles, cinematic lighting")
"wide-angle view" (example: "a wide-angle view of a bustling market, colorful stalls, lively atmosphere")
Background details:
"galaxy in the background" (example: "a futuristic spacecraft, galaxy in the background, vibrant colors")
"snow-covered mountains" (example: "a hiker on a snowy trail, snow-covered mountains, dramatic lighting")
Texture:
"highly detailed" (example: "a dragon with highly detailed scales, glowing eyes, fantasy setting")
"soft and fluffy texture" (example: "a kitten sleeping on a blanket, soft and fluffy texture, warm lighting")

3. Techniques for adding originality
Adding the following phrases will give you more unique results.
Abstract/fantastic elements:
"ethereal" (example: "an ethereal fairy in a glowing forest, delicate wings, magical atmosphere")
"surreal" (example: "a surreal desert with floating islands, warm tones, imaginative")
Luminescence and special effects:
"bioluminescent" (example: "an underwater world with bioluminescent plants, vibrant colors, mysterious")
"iridescent" (example: "a bird with iridescent feathers, perched on a branch, sunlight reflection")
Environment and atmosphere:
"dystopian" (example: "a dystopian cityscape, dark and foggy, neon lights")
"whimsical" (example: "a whimsical tea party in a garden, pastel colors, surreal elements")

4. Tips to avoid problems
Some words and settings may cause the AI to produce unstable results. Adjust your prompts with the following in mind.
Words to avoid: "blurry", "chaotic"
Suggested alternatives:
"soft focus" (example: "a soft focus portrait of a smiling elderly woman, warm lighting")
"dynamic composition" (example: "a dynamic composition of a sports car racing on a track, motion blur")

5. Further examples of AI image generation
Character design: "a steampunk warrior with intricate armor, full-body, dramatic lighting"
Product promotion: "a sleek smartphone on a reflective surface, minimalistic design, vibrant colors"
Recreating an art style: "a serene landscape in the style of Van Gogh, swirling brushstrokes, vivid colors"

6. Example Christmas-themed prompts
Finally, here are example prompts for creating Christmas-themed images of a girl using my Christmas model "🎄Cute Christmas collection・FLUX 🎄" and the words introduced above. Use these as starting points for your own original prompts.
Fantastic Christmas: "A young girl in a red and white Christmas dress, surrounded by glowing fairy lights, snowy background, whimsical atmosphere, vibrant colors, golden hour lighting"
Traditional Christmas scene: "A cheerful girl decorating a Christmas tree, cozy living room, soft and warm lighting, fireplace in the background, soft and fluffy texture"
Futuristic Christmas: "A futuristic Christmas celebration, a girl in a glowing cyberpunk Santa outfit, neon lights, snowy futuristic cityscape, vibrant and dynamic composition"
Winter fairy girl: "A snow fairy girl with delicate wings, sparkling snowflakes, ethereal and magical atmosphere, soft focus, iridescent glow"
Christmas market at night: "A young girl walking through a bustling Christmas market, surrounded by festive stalls and twinkling lights, wide-angle view, atmospheric"
Tweak these prompts and enjoy your own Christmas images! Enjoy an amazing AI-generated experience! 🎄✨
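The style/type/background/mood checklist from section 1 is easy to mechanize. A tiny illustrative helper (the function and field names are mine, not part of any tool):

```python
def build_prompt(subject: str, style: str, shot: str, background: str,
                 mood: str, extras: list[str] | None = None) -> str:
    """Assemble a comma-separated prompt from the section 1 checklist."""
    parts = [subject, style, shot, background, mood] + (extras or [])
    return ", ".join(p for p in parts if p)

print(build_prompt(
    "a young girl decorating a Christmas tree",
    style="watercolor painting",
    shot="cowboy shot",
    background="cozy living room, fireplace in the background",
    mood="peaceful",
    extras=["golden hour lighting", "soft and fluffy texture"],
))
```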
Model Training - How to Train an AI Model: A Step-by-Step Guide
Training an AI model may sound daunting, but it's surprisingly straightforward when broken down into clear steps. Whether you're a beginner or looking to refine your skills, this guide walks you through the process, from creating datasets to fine-tuning settings.
THE MODEL PAGE: https://tensor.art/models/806678236659647115/CHRISTMAS-UGLY-SWEATER-PATTERN-V9

Step 1: Build Your Dataset
A quality dataset is the backbone of any successful AI model. Here's how you can create one:
Source images: gather images from reliable sources like Pinterest, stock image websites, your personal photo gallery, or even AI-generated outputs. Ensure you have permission to use the images, especially for commercial purposes.
Focus on quality: use clear, sharp images; avoid images with noise, blur, or watermarks. The dataset doesn't have to be massive, but clarity is key.
Example: for this guide, let's say you're building a dataset of seamless patterns inspired by ugly sweaters. Carefully curate high-quality images that fit this niche.

Step 2: Caption Your Dataset
Good captions make a significant difference in training outcomes. A well-captioned dataset ensures your model understands the nuances of your images.
Tips for effective captioning:
Write captions manually for precision.
Use automated captioning tools sparingly and always review their output.
Be descriptive but concise, capturing key details like color, style, or patterns.
Example caption: for an image of a red-and-green holiday sweater with reindeer motifs, your caption might read: "Seamless pattern of a red-and-green knitted sweater with reindeer and snowflake designs."
Manually crafting captions might take more time, but the payoff is better accuracy in your model's outputs.

Step 3: Set Parameters and Configure Training
Once your dataset is ready, it's time to train your model. Using platforms like Tensor.art simplifies this process.
For beginners: start with the default settings. These are optimized for general use and save you the hassle of configuring every parameter manually.
For advanced users: experiment with parameters such as learning rate, batch size, and epoch count to refine your model.

Bonus Tips
Test regularly: as your model trains, run tests to ensure it's learning correctly. This helps identify issues early.
Iterate: training is an iterative process. Don't hesitate to tweak and retrain if the results aren't up to par.
Document your process: keep notes on what works and what doesn't. This saves time in future projects.

Final Thoughts
Training an AI model involves careful preparation and a bit of patience, but the results are worth the effort. By curating a high-quality dataset, writing thoughtful captions, and fine-tuning settings, you'll be on your way to creating a model that performs exactly as you envision. Dive in, experiment, and watch your AI-powered creativity take flight!
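On the captioning step: many trainers expect each image to have a same-named .txt sidecar file holding its caption, and Tensor.art's caption editor produces the equivalent structure. A small sketch that writes that layout locally (the paths and captions are placeholders):

```python
from pathlib import Path

captions = {  # image file -> hand-written caption (placeholders)
    "sweater_01.png": "Seamless pattern of a red-and-green knitted sweater "
                      "with reindeer and snowflake designs.",
    "sweater_02.png": "Seamless pattern of a blue knitted sweater with "
                      "white snowflakes and candy cane borders.",
}

dataset = Path("dataset/ugly_sweaters")
dataset.mkdir(parents=True, exist_ok=True)
for image_name, caption in captions.items():
    # Common trainer convention: sweater_01.png is captioned by sweater_01.txt.
    (dataset / Path(image_name).with_suffix(".txt")).write_text(caption)
```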
Model Training - Illustrious NoobAI LoRA Discussion
Let's talk about Illustrious and NoobAI LoRAs.

Preface
I am currently using tensor.art with Professional Mode to train my LoRAs. This article mainly discusses what I've tried, and I welcome others to discuss too, as there's no official finetune guide.

Guidelines
Higher rates = stronger character features but potential loss in image quality.
Lower rates = better image quality but weaker character features.
Most character LoRAs work well with UNET around 0.0003 and TE around 0.00003.
Lower learning rates adapt the features better but can also take longer. As for the dataset: say I have 40 images, 5-10 repeats, 10 epochs, and a batch size of 4; this adds up to the total steps, and then hopefully the model is trained well enough.
The ideal ratio is typically UNET:TE = 10:1.

UNET rates (0.0005-0.0001):
0.0005: very strong influence; can overpower the base model. Good for exact character matching but may reduce image quality.
0.0003: balanced influence; commonly used for character LoRAs.
0.0001: subtle influence; maintains high image quality, but character features may be less pronounced.

Text Encoder (TE) rates (0.00005-0.00001):
0.00005: strong text conditioning; helps with character recognition.
0.00003: moderate text influence; a good balance for most character LoRAs.
0.00001: light text conditioning; useful when you want minimal style transfer.

Dimension Rank (DR) - Network Dim:
32: standard/default rank; good balance of detail and file size.
64: higher detail capture; larger file size.
128: very high detail; much larger file size.
256: maximum detail; extremely large file size.

Network Alpha (AR) - Network Alpha:
Alpha is typically set to match, or sit slightly below or above, the rank. Common ratios:
AR may be half the rank, or even a quarter less than the DR.
AR = DR: standard training stability (1:1 ratio).
AR x 1.5: increased stability (a quarter more than the DR).
AR x 2: maximum stability (double the DR).

The values below are not 100% settled; they are still being figured out.

Basic Character LoRA (base model's preference): DR 64, AR 32
- Best for: simple anime/cartoon characters
- File size: ~70MB
- Good balance of detail and stability

Complex Character LoRA: DR 64-48, AR 32-24
- Best for: most character types
- File size: ~100MB
- Excellent for anime/game characters

Style LoRA
Example: https://tensor.art/models/806682226684073145/NAI3-Kawaii-Style-Illustrious-NoobAI-nai-IL-V0.1
Example: https://tensor.art/models/806356844256811271/Anima-Crayon-Sketch-Illustrious-IL-V0.1
The original article says: DR 128, AR 64 to 32 seems to be best for a combination of complex features if the style is very detailed; otherwise, lower ranks work too.
Learning rates can vary:
CAME and RAWR: 0.0002 UNET and 0.00002 TE will need about 2500 to 3000 steps.
AdamW8bit and AdaFactor: between 0.0003-0.0005 UNET and 0.00003-0.00005 TE at about 1000 steps.

But here is what I use instead:

Parameter Settings
- Network Module: LoRA
- Base Model: rMix NNNoobAI - V1.1
- Trigger words: nai3_kawaii

Image Processing Parameters
- Repeat: 10
- Epoch: 10
- Save Every N Epochs: 1

Training Parameters
- Seed: -
- Clip Skip: -
- Text Encoder learning rate: 0.00004
- Unet learning rate: 0.00035
- LR Scheduler: cosine_with_restarts
- Optimizer: AdamW8bit
- Network Dim: 32
- Network Alpha: 16
- Gradient Accumulation Steps: -

Label Parameters
- Shuffle caption: true
- Keep n tokens: 1

Advanced Parameters
- Noise offset: 0.0357
- Multires noise discount: 0.15
- Multires noise iterations: 8
- conv_dim: -
- conv_alpha: -
- Batch Size: 2

Sample Image Settings
- Prompt: nai3_kawaii 1girl solo long hair looking at viewer blush bangs blue eyes hair ornament dress ribbon sitting closed mouth pink hair sleeveless hairclip sailor collar two side up book blue dress sailor dress . masterpiece, best quality, amazing quality, very aesthetic, absurdres
- Sampler: euler

What works?
I'd like to hear what works and doesn't work for Illustrious:
Optimizer (learning rates may change depending on the optimizer chosen)
Scheduler
Network settings: (DR) dimension rank 128, 96, 64, 32, 16, 4; (AR) alpha rank 128, 96, 64, 32, 16, 4

Don't use:
Prodigy

Can use:
AdamW8Bit with a Constant scheduler:
- 0.0003 LR (TE & UNET): aggressive learning for characters
- 0.0002 LR: medium learning for characters (DR 128, AR 64)
AdaFactor with a Cosine with Restarts scheduler:
- 0.0005-0.0003 LR (UNET)
- 0.00005-0.00003 LR (TE)
- DR 128-32, AR 64-16 (I usually go half the Network Dim rank)

Plagiarized from and inspired by: https://civitai.com/articles/9148/illustrious-lora-training-discussion
Model used for my training: rMix NNNoobAI v1.1 - https://tensor.art/models/805164110363975687
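To make the optimizer-dependent rates above concrete, here is a small lookup encoding the article's pairings and its UNET:TE = 10:1 rule of thumb. The numbers come straight from the text above; treat them as starting points, not gospel:

```python
# (unet_lr, te_lr, rough total steps) as suggested in the article above.
SUGGESTED = {
    "AdamW8bit": (0.0003, 0.00003, 1000),   # aggressive character learning
    "AdaFactor": (0.0005, 0.00005, 1000),
    "CAME":      (0.0002, 0.00002, 3000),   # slower optimizer, needs more steps
}

def follows_10_to_1(unet_lr: float, te_lr: float) -> bool:
    """True if the rates follow the UNET:TE = 10:1 rule of thumb."""
    return abs(unet_lr / te_lr - 10.0) < 1.0

for name, (unet, te, steps) in SUGGESTED.items():
    print(f"{name}: unet={unet}, te={te}, ~{steps} steps, "
          f"10:1 ok={follows_10_to_1(unet, te)}")
```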
🎄 "Model Training": Words that enhance the effect of Xmas models
Attention to detail in the prompt is important to produce beautiful Christmas-related images. We've put together a list of words to help you get into the Christmas spirit and create dreamy, beautiful images. By using these in combination, you are sure to be able to create a magical and warm Christmas scene.

1. Main themes: Christmas tree, Santa Claus, snowy landscape, Christmas lights, cozy fireplace, gift-wrapped presents, snow-covered village, reindeer, gingerbread house
2. Decorations and accessories: ornaments, garland, wreath, baubles, holly and ivy, Christmas stockings, snowflakes, candy canes, bells
3. Lighting effects: warm glowing lights, sparkling fairy lights, candlelit ambiance, radiant glow, soft bokeh effect, starry night, lantern-lit
4. Color effects: warm hues, rich reds, forest greens, snowy whites, golden tones, frosty blues, silver sparkle
5. Characters and animals: Santa's elves, festive animals, polar bear with scarf, penguins in winter hats, adorable snowman
6. Scenery and places: cozy cabin, snowy forest, mountain village, Christmas market, enchanted winter scene, icicles on trees, ice rink with skaters
7. Texture and atmosphere: velvet textures, frosted glass effect, glittering snow, cozy and warm atmosphere, nostalgic charm, festive elegance, whimsical charm
8. Other keywords: magical holiday, winter wonderland, silent night, joyful celebrations, family gathering, childlike wonder, frost-kissed scenery

Together with words such as "beautiful", "wonderful", "warm", and "festive", adding these keywords to your prompt will help you generate images full of Christmas charm.
To Make Models, Online Training!!! 😍
Greetings, my fellow Tensorian artists! 😊🤗😁 @sfsinspiredwritr here! Today my Tensorian friend @digital_daydreamer asked for my advice on training models, and I gave him/her a helping hand! It's so detailed and invaluable that I decided to give ALL Tensorians these 10 tips on how to make models online with Tensor Art! (Note: these are the things I have figured out SO FAR in my Tensor journey, and it's all from scratch!) Enjoy! 😄

Tip #1: In my experience of training models (online on Tensor Art), a good margin for how many images to use (especially for the model to work nicely when it's run) is 15-22 images. Any fewer or more will lessen the model's overall quality.

Tip #2: Tensor Art supports images in the formats .png, .jpg, .jpeg, and .webp (Chrome HTML document). I find these images by searching for the model topic on Google (for example, "digimon tai x sora") and then heading to the "Images" tab to see only photos of the topic. I click on an image I want to use, then right-click on it and "Save image as". This is how I download the image to my computer. Now, ALWAYS check the image format after you download, in whatever your device's file manager is called. Images in .avif, .jpg_large, or .jfif format are NOT supported by Tensor Art, so make sure to keep those out of your image count!

Tip #3: Pony is THE #1 base model to train your models on! The image quality is unmatched by any base model type, IMO! Good base models to use include Nova Anime XL (v5.0 or v6.0; I haven't tried the newer versions yet) and Pony - Disney Pixar Style (V2).

Tip #4: The steps (how many times the model repeats an image for training) are IMPORTANT to set up! Generally, keep the steps in the range of 24-27 for the model to train the images on. (Before the Online Training update you could change each image's own steps freely; now this only works with the paid Pro account 😭 .... 😤 Nonetheless, the 24-27 step range still works!)

Tip #5: You should keep the image prompt as "loosely detailed" as possible; my image prompt in online training looks something like this:
(Trigger words) digimon_adventure, mimi_tachikawa, (1girl, 1_solo)
(Image standard terms) (RAW Photo, best_quality, high_definition, high_res, masterpiece, veryGOODImage_positive) (Beautiful, extremely detailed)_image, (BEST, amazing, incredible, highly detailed, anime-style)_quality, (Clear & not blurry, beautiful)_image
(Optional details) simple_background, (upper_body(_shot))
(Main prompt) There is a teenage girl resembling Mimi Tachikawa. She has long and light brown hair, (amazing, anime_style, well-drawn) brown eyes, and fair skin. She is looking at the viewer with a smile.
This is how my training prompt looks! Simple enough for the AI to train the images! 🙂

Tip #6: The steps are 24-27, and the epoch is just 1, a set of 4 images. This is the cheapest method I practice for making images, and with what I said about the steps and the image prompt, the model will become fantastic to use with this ratio!

Tip #7: The model's initial images, when its training is done, will look ... UNPROFESSIONAL for the most part. Don't be discouraged! Consider these first 4 images the "rough draft", and just remember this: the model's first images are not the goal; it's the images that you run AFTER the model is published! Essentially, I just replace the first images with 10-12 NEW images to show off the model! That's the "presentation" secret (shh 🤫)!

Tip #8: Upon running the newly published model, this is the part where you work out: more details for the prompt, the negative prompt, the strength ("detail weight") of the model, the CFG scale, and whatever else you can tweak to make good images with the model! Once you have this figured out, you can "Edit Model" and place your "base" prompt, negative prompt, strength, CFG scale, and other details into the model's description, so that the users running your model (and yourself 😉 wink-wink) can use it with more ease and AMAZING results!

Tip #9: Models have to go into projects to be showcased to the Tensor community, of course! To make a project, go to "+ Host my model" on your profile page, then go to "Create a project". You set its name, type (mine are usually Image & LoRA), channel (Anime, Illustration, etc.), tags (e.g. "SORA TAKENOUCHI", "DIGIMON ADVENTURE", etc.), description (where I recommend some of my tips so users can use the model well), resource (usually "Original"), view scope (usually "Public"), and the permissions you give to users (I allow everything except those under "Commercial use"; I don't care about those), and then you "Create"!

Tip #10: Instead of using the model you trained the images on to generate the new presentation images, you can use a DIFFERENT one! For example, when I train models for the DIGIMON ADVENTURE characters' Defined versions, the base model for training is Nova Anime, but for running images it's Disney Pixar Style! For their anime versions, it's the reverse! In my experience, using a different model for generation than the one used for training really brings out the detail and quality of the images, better than using the same one! (Nothing against using the same one, though; that would still work!)

And, PHOOOO, there you have it! I had a lot to say, and (again) I had to learn these priceless pieces of knowledge by MYSELF, so thankfully you new and seasoned Tensor artists should have more ease in training your models online here on Tensor Art 🥰! I hope these tips are of great help to you (let me know if you're confused, or if I've said too much or too little 😬 ... this is supposed to be helpful!), and let us all grow into amazing Tensor artists together!
P.S. When I learn something else new, I'll be sure to post "mini-articles" as updates to share these experiences with you all, Tensor Art community!
Until another time, Tensorians! Many good wishes to you all, fellow artists and creators! 🤗🤩🥰😍😊😁😀
📖 AI Tools Guidelines - Get new portraits with your face
🔥 [ Flux.1 ] Beautiful Portrait With FACE v2.0
1️⃣ Upload your photo
2️⃣ Set the parameters
3️⃣ Wait for generation
4️⃣ Get your new portraits
Explained ★ AI Tool Radio Buttons ★ AI Tool
Sorry for the delay: here is an explanation of AI tool radio buttons, the subject of the event.

What is a radio button? It's this kind of thing: several options plus a custom button are displayed. Let's build one right away.

ComfyUI setup
Some nodes can use radio buttons and some cannot. I often use a node called promptlist, but it cannot be given radio buttons, so I connect a text node instead. For text nodes, I recommend jjk. The text node appears to actually correspond to a textbox node.

Editing the AI tool page
If the word "Edit" appears to the right of the settings screen, you've succeeded. Now let's configure it.

Prompt editing screen
Select "Radio button" as the input method, then click "Add". A screen like a folder manager appears. In this case, "Action" becomes the radio button's name, and "jogging" is added as an option.

Adding radio buttons
You can also create options that aren't in the existing list. The top field is the display name; the bottom field is the prompt that actually gets written.

Deleting radio buttons
People like me may be a minority, but I have no interest in the default prompt, so I delete "jogging". Selecting one option at first is mandatory behavior, but you can delete it like this. Then I add options one after another. By the way, the second, blank-looking radio button actually contains a space: my Animagine AI tool works fine even without specifying an action.

Done!
Yes, it's finished! After that, just give it a quick check. AI tools seem to malfunction fairly often. Good luck, everyone! ★
[REYApping] Simple and Brief Explanation of AI Tool
Hello and welcome to the third edition of REYApping, a space where I write a bunch of nonsense. Without further ado, let's begin.
Never in my entire Tensor life did I think I would actually try to explain something. But here we are: an article about AI Tool. What is an AI Tool? Why make one? How is it different from "create mode"? I'll try to explain.

What is AI Tool?
Now, I might be wrong here (roast me in the comments), but here's my answer: an AI tool is a simplified, more straightforward interface over a ComfyUI workflow. It saves you from seeing the bunch of tangled spaghetti that can potentially break your eyes and mind. Instead of customizing the workflow nodes directly, you get an interface similar to "create mode". The downside is that it can have limited parameters, since those are set by the tool's creator, and you won't know how the workflow works. Also, it sucks your credits and soul (Riiwa, 2024), but sadly doesn't suck your coc- *cough* Nevermind that last part.
Here's an image of a ComfyUI workflow:
Here's that workflow made into an AI Tool:

Why Make an AI Tool?
Simplicity and straightforwardness in the palm(?) of your hand. That's it. Especially if your flow has a few variables that can be modified, such as prompts, steps, etc. If your flow has a lot of modifiable variables and/or you want more control over your workflow, then I suggest you work directly in ComfyUI.

How is It Different from Creation Mode?
Creation mode allows you to control basic functions such as samplers, which T5 you use, and other things like ADetailer, img2img, ControlNet, etc. An AI Tool, while it can do that if set up by the author, is generally limited to basic things only, such as prompts, steps, resolution, batch size, and maybe seeds. But you can't really use things like ADetailer or img2img and other fancy stuff by yourself; you depend on what the tool itself provides. In short: Creation Mode allows a broader range of functions but with only basic abilities, while an AI Tool mostly allows specific functions, but can produce better results because of the dark-magic trickery inside its comfy flow.
Thank you for reading this part of REYApping. See you in the next one (if there is any).
How to publish an AI Tool
To publish a tool you need to have a workflow prepared. You can find workflows in ComfyFlow. From there, you either make a new workflow, import a workflow file, or choose an already-made one.
When you have selected a workflow to turn into an AI tool, enter that workflow's editor. Inside the selected workflow you need at least one AI Tool Node (TA Nodes) integrated into the workflow. (More about TA Nodes: https://tensor.art/about/aitool-tutorial)
Then you need to run the workflow. After you run it, press the "Publish" button in the top-right corner and select "AI Tool". Now fill out the boxes (Name, Channel). If you have done everything correctly, you can also change the "User-configurable Settings". Fill everything out according to the tool/workflow and press Publish.
🎨 AI Tool: Turning Your Workflow into a Magical Black Box of Creativity! 🪄
Hey there, fellow tinkerers and pixel wizards! 🌟 Ever wanted to create an AI tool so powerful, even your future self wouldn't know how it works? Well, buckle up! Today we're diving into the quirky world of workflow wizardry, where you'll craft AI tools using ComfyUI and publish them like a mysterious, shiny black box. The best part? Your users won't see the chaos inside. 🤫

So, What's the Deal with AI Tools?
Imagine you're assembling a Lego masterpiece, except each piece is a node, and the result isn't a castle, it's an AI tool. 🏰 These tools take user inputs (like prompts or images), process them through a hidden workflow, and spit out something magical. Your users don't need to know what's under the hood; they'll just press buttons and enjoy the ride!

How to Build Your AI Tool (Without Losing Your Marbles):
1️⃣ Dream It: Start by conceptualizing what your AI tool will do. Want to turn doodles into masterpieces or mix Christmas sweaters with robot aesthetics? The possibilities are endless. 🎅🤖
2️⃣ Craft It: In ComfyUI, build your workflow by connecting nodes like a pro pipefitter. Each node has a purpose, from loading models to decoding images. This is where the magic happens (or chaos, depending on your coffee intake). ☕✨
3️⃣ Test It: Run the workflow as an AI tool. At this stage, expect some hiccups. Maybe the colors look weird, or your robot Santa has three arms. That's fine; it's all part of the process!
4️⃣ Polish It: Update, adjust, and repeat until your tool is sleeker than a freshly polished apple. 🍎 Then publish it for the world to admire (or fear).

The Secret Sauce: Export/Import User Settings 🍔
When you update your workflow, the user-configurable settings can reset. 😱 But fear not! With the Export/Import feature, you can save and reload those settings faster than you can say "workflow meltdown."
How It Works:
Export: Before hitting the update button, export your settings. Think of it as taking a backup of your genius. 💾
Import: After updating your workflow, reload the saved settings. Voilà, no more starting from scratch. 🪄
Pro Tip: This feature doesn't work if you change the nodes too drastically. So proceed with caution, or risk hearing your inner monologue scream. 😬

Nodes and Workflows: A Quickie Guide for the Clueless 🤷‍♂️
Nodes: Think of nodes as puzzle pieces. Each one handles a small task, like loading a model 🎒, decoding text 🧾, or sampling images 🎨. Connect them, and you've got a functional pipeline. Disconnected nodes, however, are just sad little islands of potential. 😢
Workflows: A workflow is what you get when you chain nodes together. It's like a recipe for your AI tool: load a model, process a prompt, generate an image, save it. Simple? Yes. Satisfying? Extremely.

When to Publish Your AI Tool 🎉
Once you've created your workflow and polished it to perfection, it's time to publish! Your users will only see the polished front end, not the spaghetti-like chaos of nodes and connections you wrangled into submission. Encourage users to interact by configuring input fields like prompts or sliders. Their creativity meets your innovation; it's a win-win!

Tips for AI Tool Wizards-in-Training 🧙
Start Small: Begin with simple workflows to avoid brain freeze. 🧊
Tinker Away: Play with parameters to see how they affect the output.
Be Bold: Experiment with styles and features. Combine multiple LoRAs for maximum chaos (and brilliance).

Conclusion
Congratulations, you're now equipped to create AI tools that will wow, confuse, and delight users! 🎉 So go forth and turn your wildest ideas into shiny black-box tools. And remember: with great power comes great responsibility, or at least some very weird outputs. 😜
Happy creating! 🎨🪄
BlackPanther
P.S. Don't forget to export those settings. Nobody likes redoing work twice!
AI Tool - 👌 Easily create an AI tool without a prompt (Part 1)
Often we have a picture in our mind and can even find a similar picture, but we don't know how to write a prompt. Tensor provides a reverse-inference tool, but it involves extra steps like copying and pasting, and it does not support NSFW. In short, filling in the options is a very troublesome thing, and I am not the only one who thinks so! Turning a workflow into a small tool can simplify a lot of the process. You can see the various small tools I have made: basically there is no need to write prompt words, because I am a very lazy artist.
The following simple tutorial teaches you how to make your first AI Tool. It is very simple; just follow my pictures step by step!
Step 1: Create a new workflow.
Step 2: Select the img2img template.
Step 3: Double-click a blank area of the canvas, search for [wd] in the dialog that appears, and select the [WD14 Tagger] plug-in.
Step 4: Drag the image output of the Load Image panel and connect it to the image input on the WD14 node. This is the basis of workflows: connecting nodes!
Step 5: Change the WD14 model to the V3 version, the latest image reverse-inference model. With it, you can turn your image into a prompt.
Step 6: Right-click on the CLIP Text Encode panel and select Convert Text to Input.
Step 7: Double-click a blank area again and enter "string function".
Step 8: Right-click on the String Function panel and click [convert text_b to input]; then connect [string] on the WD14 panel to [text_b] on the [String Function] panel.
Step 9: Connect the string output of the String Function panel to the text input of the CLIP Text Encode panel; now your image becomes the positive prompt!
Step 10: Are you tired of reading? I am also tired of writing; let's take a break 😀😀😀😀😀😀😀😀
Step 11: Click ckpt_name on the Load Checkpoint panel to select a model. This time we choose a Pony model.
Step 12: In the String Function and the other CLIP Text Encode panel, fill in Pony's quality prompts. Positive: score_9, score_8_up, score_7_up. Negative: score_3, score_2, score_1.
Step 13: It's almost done! Set the numbers in the KSampler panel; refer to my values.
Step 14: Click Upload in the Load Image panel, select an image you like (the longest side should not exceed 1280), then click Generate, and that's it!
Step 15: Click Publish in the upper-right corner, then select Share Workflow. You now have your own workflow tool. You can find and run it on your personal homepage.
This tutorial ends here. In the next issue, we will teach you how to convert the workflow into a gadget and make it more useful and complete. Thank you for your support!
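What the String Function node does in steps 8-12 is plain string concatenation: prepend the Pony quality tags to whatever WD14 inferred from your image. In ordinary Python it is just this (the wd14_tags value is a stand-in for the tagger's output):

```python
def build_positive_prompt(wd14_tags: str) -> str:
    """Mimics the String Function node: quality prefix + reverse-inferred tags."""
    pony_quality = "score_9, score_8_up, score_7_up"
    return f"{pony_quality}, {wd14_tags}"

NEGATIVE = "score_3, score_2, score_1"  # the negative prompt from step 12

# Stand-in for WD14 Tagger V3 output on an uploaded image:
wd14_tags = "1girl, solo, long hair, smile, outdoors, sunset"
print(build_positive_prompt(wd14_tags))
```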
🎭 AI Tool Spotlight: Facial Expression Adjuster & GPTs Flux Prompt PRO 🚀
Unleashing Creative Potential with AI: A Spotlight on the Facial Expression Adjuster and GPTs Flux Prompt PRO
In the ever-evolving world of artificial intelligence, precision and flexibility are at the heart of creating truly engaging and realistic digital content. From lifelike character animations to the fine-tuning of AI-generated imagery, a new generation of tools is enabling creators, animators, and designers to bring their visions to life with unprecedented control and detail. Two such cutting-edge tools, the Facial Expression Adjuster and GPTs Flux Prompt PRO, demonstrate the transformative power of intelligent automation in the creative workflow.

1. The Facial Expression Adjuster
Link: https://tensor.art/template/795874684511075193
The Facial Expression Adjuster is a versatile AI solution designed to enhance and personalize digital facial expressions down to the tiniest detail. Whether you're creating a 3D animated character or refining the emotional nuances of a still portrait, this tool lets you achieve unmatched accuracy and expressiveness. Key features include:
Head positioning: easily control parameters such as pitch, yaw, and roll, ensuring perfect alignment and posture.
Eye expressions: fine-tune blink and wink behaviors, adjust eyebrow angles, and position pupils for subtle or dramatic effects.
Mouth phonetics: simulate mouth shapes corresponding to various phonemes ("A," "E," "W," etc.) to produce speech-like expressions.
Smile calibration: dial in the intensity of smiles, from a faint grin to a broad beam, adding depth and realism to character personalities.
Ideal for animators, 3D artists, and AI developers, the Facial Expression Adjuster makes it simple to breathe life into digital avatars and scenes. By offering granular control over facial parameters, it unlocks new creative possibilities for storytelling and user engagement.

2. GPTs Flux Prompt PRO
Link: https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro
As AI-generated images increasingly reshape the creative landscape, the need for effective prompt engineering has never been greater. GPTs Flux Prompt PRO is a specialized tool that streamlines the process of crafting compelling, visually rich prompts for models like FLUX. By guiding creators through practical steps, offering real-world examples, and applying proven methods, it ensures that the prompts you design unlock the full potential of AI-generated visuals. Through this hands-on approach, even newcomers to prompt engineering can rapidly learn how to produce captivating outcomes that align with their artistic vision.

Reinventing Your Workflow with AI
By incorporating the Facial Expression Adjuster and GPTs Flux Prompt PRO into your toolkit, you can drastically enhance the quality and impact of your creative output. These tools don't just automate routine tasks; they empower you to direct AI-driven systems with precision and clarity, resulting in more refined, expressive, and emotionally compelling digital content.
From breathing authenticity into virtual characters to perfecting your prompt-crafting skills, these advanced resources provide a blueprint for success in a world where technology and artistry continue to converge. If you're ready to push your creative boundaries and discover new dimensions in AI-assisted art and animation, the Facial Expression Adjuster and GPTs Flux Prompt PRO stand ready to elevate your work to new heights.
AI Tool Video Generation Recommendation
Explore the evolving landscape of AI-powered video generation with our curated list of tools designed to bring your ideas to life. Each tool offers unique features tailored for diverse creative needs, from photorealistic renders to stylized motion. While their capabilities vary, these tools push the boundaries of AI creativity, though limitations like short durations and resolution constraints persist. Dive into this guide to discover the possibilities and challenges of SVD, CogVideoX, PyramidFlow, HunyuanVideo, and more: ideal companions for your video generation journey.

1. Shutterbug | SVD & SD3.5L Turbo by PictureT
https://tensor.art/template/803606557651731715
Uses SVD as its base.
Limitations and bias:
- Generated videos are rather short (<= 1.5 sec), and the model does not achieve perfect photorealism.
- The model may generate videos without motion, or with very slow camera pans.
- The model cannot be controlled through text.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.

2. Let's generate a video using CogVideo-5B by oaa
https://tensor.art/template/783248442733541899
Sample: https://image.tensorartassets.com/workflow_template_showcase/783243275902494025/a3767754-af19-ab28-8ffe-80632559b43e.mp4
Limitations:
- Generated videos are rather short
- Limited parameters: prompt only, text2video
- Low resolution only

3. Make your pictures come alive with CogVideo-5B by oaa
https://tensor.art/template/783254086320651706
Sample: https://image.tensorartassets.com/workflow_template_showcase/783255194436647499/63fbaa72-4493-3d5c-886b-19b6fd481b41.mp4
Limitations:
- Generated videos are rather short
- Img2video supports only landscape images with a specific ratio
- Low resolution only

4. Let's generate a 384p video using PyramidFlow by oaa
https://tensor.art/template/783281513981656372
Sample: https://image.tensorartassets.com/workflow_template_showcase/790275614820104217/87be7526-ac5e-bca3-ac91-f80a1bfc58eb.mp4
Limitations:
- Generated videos are rather short
- Limited parameters: prompt only, text2video
- 384p only

5. Make your pictures come alive with PyramidFlow by oaa
https://tensor.art/template/789854342952861190
Sample: https://image.tensorartassets.com/workflow_template_showcase/790272754371847397/2a99eba3-7917-a1b7-cf1b-d3468c90921f.mp4
Limitations:
- Generated videos are rather short
- Img2vid only works with certain aspect ratios
- 384p only

6. Make your pictures come alive with PyramidFlow - 768P Version by oaa
https://tensor.art/template/789871312368614821
Sample: https://image.tensorartassets.com/workflow_template_showcase/790275017819641763/f6523cfa-a883-9b05-3149-54b4ff999427.mp4
Limitations:
- Generated videos are rather short
- Img2vid only works with certain aspect ratios
- Expensive to run

7. Mochi 1 preview - video generation by oaa
https://tensor.art/template/789464613325392462
Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems.
Sample: https://image.tensorartassets.com/workflow_template_showcase/789223103034147869/9916ffbd-e375-d017-a198-e7a1af1a7dc5.mp4
Limitations:
- Generated videos are rather short
- Text2Vid only

8. HunyuanVideo by oaa
https://tensor.art/template/803673151119656752
Sample: https://image.tensorartassets.com/workflow_template_showcase/803944541527945002/93638436-ae16-480b-3c6e-2b2e725eae0c.mp4
Limitations:
- Generated videos are rather short
- Text2Vid only

9. DimensionX - 3D Scene Generation by oaa
https://tensor.art/template/796266016161330278
Sample: https://image.tensorartassets.com/workflow_template_showcase/796264165045080771/da5e0cb9-60ec-8277-616d-a7093d9f5bb7.mp4
Limitations:
- Img2video that only works with certain ratios
- Only rotates to the left
How to Create an AI Tool for Beginners - Christmas Walkthrough AI TOOL
In this article I will share how easy it is to create an AI Tool as a beginner. Check it out.
1. Click ComfyFlow in the Create menu at the top.
2. Click New Workflow, or Import Workflow if you already have one.
3. Choose any template you want; in this example I will use the text2img template.
4. A new browser tab will appear; wait until it has loaded completely.
5. Set the parameters you want. Here I will change only the checkpoint and prompt, then do a test run.
6. After a successful test, click Publish and choose AI Tool.
7. A new tab will appear; fill it in, then click Publish.
8. TADA! Your AI Tool is now public.
Using New TA Nodes with SelectParams to adjust Redux Style Model (new AI Tool)
Guide to Using New TA Nodes with SelectParams on Tensor.art

Tensor.art recently introduced the powerful TA Nodes tool, giving users more control and flexibility in AI-driven art creation. This article shows how to use the SelectParams node to adjust the application intensity of the Redux Style Model through the ConditioningAverage node.

1. What are TA Nodes?
TA Nodes is a node-based workflow system that lets you connect nodes to customize your image creation process. The SelectParams node is a crucial feature: it lets you fine-tune input parameters and control how much the Style Model influences the final output.

2. The Redux Style Model and the role of SelectParams
The Redux Style Model on Tensor.art is designed to produce artwork with a bold, minimalist yet sharp aesthetic. To manage the intensity of the Style Model's application and keep the output aligned with your creative vision, the SelectParams node lets you adjust parameters dynamically via the ConditioningAverage node.

3. Steps to use TA Nodes with SelectParams
Step 1: Create a workflow with the Redux Style Model
- Open the TA Nodes interface on Tensor.art.
- Add the Load Style Model node and select the model flux1-redux-dev.safetensors.
- Connect the Load Style Model node to the Apply Style Model node.
Step 2: Add a prompt
- Add the CLIP Text Encode (Prompt) node and input your creative idea. Example: "A cyberpunk cityscape at sunset with neon lights."
- Connect the output of CLIP Text Encode to the Apply Style Model node.
Step 3: Add the SelectParams node
- Add the SelectParams node from the node list.
- Configure the settings: Creativity Levels: choose between Low, Medium, or High, and set a corresponding value for each level (e.g. Low: 0.1, Medium: 0.5, High: 0.8).
- Connect the SelectParams node to the ConditioningAverage node.
Step 4: Integrate and adjust
- Connect the ConditioningAverage node to the output of the Apply Style Model node.
- In the ConditioningAverage node, fine-tune additional parameters like Conditioning Strength to blend the values from SelectParams effectively. (A conceptual sketch of this blend appears at the end of this article.)
Step 5: Preview and finalize
- Click Preview AI Tool to inspect the output.
- If needed, go back and adjust the values in SelectParams.
- Once satisfied, click Go to generate the final artwork.

4. Benefits of the SelectParams node
- Flexible adjustments: increase or decrease the intensity of the Apply Style Model so the final image matches your creative intent.
- Seamless integration with ConditioningAverage: it works directly with the ConditioningAverage node, letting you control the Style Model's application intensity using predefined levels (Low, Medium, High).
- Optimized workflow: quickly experiment with different settings without manually tweaking small parameters.
- High precision: fine-tuning specific levels gets you the desired result without excessive trial and error.
- Time-saving: predefined Low, Medium, and High settings make the adjustment process straightforward and efficient.

5. Tips for using the SelectParams node
- Start with Medium: this level is balanced and ideal for initial experimentation.
- Go High for bold results: increase to High when aiming for detailed or striking artistic effects.
- Use Low for subtlety: lower the intensity when you want a natural, minimalist output.

6. Conclusion
The SelectParams node not only lets you adjust the application intensity of the Redux Style Model but also streamlines your creative process. It is an ideal tool for ensuring that every piece of artwork reflects your vision and style. Start experimenting today on Tensor.art! 🎨
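For intuition, here is the kind of blend a ConditioningAverage-style node performs, sketched in Python with NumPy. This is a conceptual illustration under my own assumptions, not Tensor.art's actual node code; the tensor shapes are stand-ins, and only the Low/Medium/High values come from the guide above.

```python
import numpy as np

LEVELS = {"Low": 0.1, "Medium": 0.5, "High": 0.8}  # values suggested in Step 3

def conditioning_average(cond_from, cond_to, strength):
    """Linear blend: a higher strength pulls the result toward cond_to."""
    return (1.0 - strength) * cond_from + strength * cond_to

prompt_cond = np.random.randn(77, 768)  # stand-in for the text conditioning
style_cond = np.random.randn(77, 768)   # stand-in for the Redux style output
blended = conditioning_average(prompt_cond, style_cond, LEVELS["Medium"])
```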
Understanding "AI Tools" and How They Work on the Tensor Art Platform
Understanding AI Tools and How They Work on the Tensor Art Platform

In recent years, Artificial Intelligence (AI) has revolutionized the way artists and creators produce visual content. One of the platforms making waves in this space is Tensor Art, a hub for AI-generated art enthusiasts and professionals. But how do AI tools work on such a platform, and what makes it special? Let's break it down.

What Are AI Tools?
AI tools are software or programs powered by machine learning algorithms that analyze and learn from large datasets. In the context of art, these tools are trained on millions of images, patterns, and artistic techniques. This enables them to mimic styles, create unique visuals, and assist artists in enhancing or generating content with ease.

How AI Works on Tensor Art
The Tensor Art platform integrates AI tools to provide users with a seamless creative experience. Here's a simple overview of how it functions:
1. Input Creation: users provide an initial input, often in the form of text prompts, sketches, or existing images. For example, you might type, "A futuristic city at sunset with glowing skyscrapers."
2. AI Processing: the platform's AI engine processes the input using advanced algorithms. It deciphers the elements of your prompt, breaks down styles, and matches them with patterns in its database.
3. Image Generation: based on the input, the AI generates an image. On Tensor Art, users can choose between different artistic styles, such as impressionism, photorealism, or surrealism.
4. Customization: Tensor Art allows users to refine the generated image by adjusting parameters like color tones, composition, or level of detail. This ensures that creators retain control over their work.
5. Exporting and Sharing: once satisfied, users can download their art or share it directly through the platform's community. Tensor Art also supports high-resolution exports for professional use.

Why Use Tensor Art?
Tensor Art is designed with both amateurs and professionals in mind. Its user-friendly interface, combined with powerful AI capabilities, makes it ideal for:
- Experimenting with new art styles.
- Creating quick drafts or concepts.
- Generating high-quality visuals for personal or commercial projects.

Final Thoughts
AI tools on platforms like Tensor Art are transforming how we approach creativity. By combining human imagination with machine precision, they open up endless possibilities for artists, designers, and hobbyists alike. Whether you're looking to explore new ideas or speed up your workflow, Tensor Art is a powerful ally in the world of AI-generated art.
Christmas Walkthrough | AI Tool - small tips and tricks
Hi guys, it's me Manuela here. This is my first article, so if there are mistakes in my post, feel free to correct them d^o^b

3 small tips for beginners creating an AI TOOL:
1. You can rename any node if its name is unsatisfactory or could confuse new users.
2. You can edit the prompt directly this way, instead of going back to the ComfyUI workflow environment.
3. Instead of using the images created in the ComfyUI environment/workspace, you can upload your own unique cover image to make your AI tool look better.

Hopefully this helps you somehow. Merry Xmas UwU
AI Tool & Radio Button. The beast is not as scary as it is portrayed.
Let me start by saying that to create an "AI Tool" you first need a working workflow. It is no coincidence that the first task of the second week of the "Christmas Walkthrough" event is to create your own workflow. To start creating one, just click here, as shown in the picture. For a better understanding of how workflows work, create an empty workflow, as shown in the picture.

Now you are probably scared: a strange black grid and an unfamiliar interface. Everything is fine; it is all quite simple. Everything consists of nodes connected to each other by links, similar to wires. You can watch the Tensor.art team's video on YouTube, which introduces the main nodes. The method of adding nodes shown in the video has a drawback: the list of available nodes is very large and opens at the end of the list, while the most frequently used nodes are at the beginning. Scrolling to the beginning takes 2-3 minutes. I therefore advise using the search to add nodes. To open the node search, double-click the left mouse button on an empty space.

So which nodes exactly do you need to add, and what are they called? Let's try a method we all know from school: copying someone's finished workflow. For copying, I suggest my workflow "Introvert Christmas & Phlegmatic New Year #Christmas Walkthrough". Try to copy everything as in the picture below. I made it following the Tensor.art team's video guide on YouTube mentioned above, adding a couple of other nodes of my own. Add all the necessary nodes using the node search. Now fill in all the nodes as in the workflow "Introvert Christmas & Phlegmatic New Year #Christmas Walkthrough", or fill them with your own parameters. Then repeat the node connections exactly as in that workflow: hold down the left mouse button on the desired "light", then drag the wire to the other "light", as in the picture.

To complete the task "AI Tool containing a Radio Button": in addition to the two "CLIP Text Encode (prompt)" nodes, I added one "TA Node - PromptText" node. Then I turned the node with the positive prompt, "CLIP Text Encode (prompt)", into an Input, as in the picture. As a result, I got this. I checked that the workflow works with the "Run" button. After that I added the "Radio Button" as in the picture. The buttons are added; now you can press the Publish button. Next, select publishing as an "AI Tool" and fill in all the sections. After pressing Publish again, the "AI Tool" is ready. It's not difficult, but was it scary at first? Congratulations!
⚙️Beginner's guide to creating "AI tool": workflow basics and practice⚙️
Introduction
Hello everyone. In this article, we explain the basic mechanism for creating AI tools with Tensor Art. We introduce the particularly important concepts of "workflow" and "node", how to set them up, and the procedure.

What is an AI tool?
An AI tool is a node-based tool for visually designing AI image generation. It consists of a processing flow (workflow) that combines nodes (like puzzle pieces) to generate an image.
Main features of workflows:
- Intuitive operation: simply place and connect nodes with drag and drop.
- Flexible configuration: fully customize models (checkpoints), LoRA, prompts, and more.
- Real-time generation: you can start image generation immediately after setting up.

Important to understand first: what is a node?
A node is a small unit responsible for one process in image generation. For example, there are nodes such as "Load Checkpoint", which loads an AI model, and "CLIP Text Encode", which analyzes prompts.
Basic structure of a node:
- Input: the material that starts the processing (e.g. a prompt or model).
- Output: passes the result of the processing to the next node.
By analogy, nodes are like the parts of a pipeline; connecting them together creates the overall flow.

What is a workflow?
A workflow is a complete image generation flow designed by connecting multiple nodes. For example, you might create the following flow:
1. Load an AI model (Load Checkpoint)
2. Analyze the prompt (CLIP Text Encode)
3. Generate an image (KSampler)
4. Save the image (Save Image)
Constructing these flows visually is what enables image generation in Tensor Art.

Image generation workflow in Tensor Art: basic configuration and steps
Below, we explain the basic workflow and the role of each node in detail. The basic configuration for image generation in Tensor Art is as follows:
- Load Checkpoint (AI model): select the base generative model. → Node name: Load Checkpoint
- Encode the prompt (generation instruction): specify the direction of image generation. → Node name: CLIP Text Encode
- Apply a LoRA model (optional): add style and features. → Node name: Load LoRA
- Image generation process: generate the image based on the prompt and model. → Node name: KSampler
- VAE decode: convert the generated latent into a human-viewable image. → Node name: VAE Decode
- Save image: save the generated image to a file. → Node name: Save Image

Detailed explanation and settings for each node 🌸⬇️ Let's use a workflow using FLUX as the example. ⬇️🌸
1. Load Checkpoint
Role: loads the model that is the basis of AI image generation.
Settings: ckpt_name: specify the model name you want to use. Example: FLUX-1-dev-fp8 (recommended checkpoint for TensorArt).
2. Load LoRA (add style)
Role: applies a LoRA model that adds specific features and style.
Settings: lora_name: the name of the LoRA model you want to use; strength_model and strength_clip: model influence (1.0 recommended).
3. CLIP Text Encode
Role: converts the content of the image to be generated (the prompt) into a format the AI can understand.
Settings: example prompt: "futuristic cityscape, neon lights, digital painting".
4. KSampler (the central process of image generation)
Role: generates the actual image based on the prompt and model.
Settings: steps: generation accuracy (approximately 20-30); cfg: how strongly the prompt is applied (usually 1.0 for FLUX); sampler_name: sampling method (e.g. Euler).
5. VAE Decode
Role: converts the generated latent image into the final image data.
Note: select a VAE that corresponds to the checkpoint.
6. Save Image
Role: saves the generated image as a file.
Settings: filename_prefix: specifies the beginning of the image name (e.g. "TensorArt_").

Example of an actual workflow: node connections
Below is an example of an actual node connection. Image generation is possible by reproducing this flow in the Tensor Art node editor (the same flow is also sketched in Python at the end of this article):
1. Load Checkpoint → load the AI model.
2. Add Load LoRA if necessary and apply styles.
3. Enter prompts into CLIP Text Encode and set the generation content.
4. Use FluxGuidance (guidance scale) to fine-tune the influence of the prompt.
5. Generate an image with KSampler.
6. Decode the image through VAE Decode.
7. Finally, save the image with Save Image.

Frequently asked questions
Q1. What is the difference between a Checkpoint and a LoRA?
- Checkpoint: the model that is the basis of AI image generation; it determines the overall style.
- LoRA: a module for adding specific additional styles and fine features.
Q2. Is a VAE required?
Basically it is used together with the Checkpoint. Without a VAE, the color and resolution of the image may not be rendered properly.

Summary
Once you understand how nodes and workflows work, you can create images just the way you want them. Use this guide to get started with a simple setup! 😆👍
⬆️ This is an image generated using the workflow introduced this time 😊

Next steps:
- Try your own prompts and settings.
- Combine multiple LoRAs to pursue originality.
- Experiment with high resolutions and special styles.
By publishing the completed workflow, you can let many people use it as your AI tool 👍 Enjoy your creative adventure with Tensor Art!

Side note: tips for beginners
- Understand the basics of nodes: first, understand what each node does.
- Start with a simple workflow: try a workflow with the minimum number of nodes to grasp how it works.
- Iterate on experiments: adjust the parameters of each node and see how the generated image changes.
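As referenced above, the same Load Checkpoint → Encode Prompt → Sample → Decode → Save flow can be expressed in code. A minimal sketch using the diffusers library, assuming a generic SD 1.5 checkpoint rather than the FLUX model used in the screenshots:

```python
import torch
from diffusers import StableDiffusionPipeline

# "Load Checkpoint": pick the base generative model
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="futuristic cityscape, neon lights, digital painting",  # CLIP Text Encode
    num_inference_steps=25,  # KSampler: steps
    guidance_scale=7.5,      # KSampler: cfg (SD-style models; FLUX usually uses 1.0)
).images[0]                  # VAE decoding happens inside the pipeline

image.save("TensorArt_example.png")  # "Save Image" with filename_prefix "TensorArt_"
```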
🔰 “AI Tool” Export/import user settings
Hello 🙂 Today I will explain the "Export" and "Import" settings you encounter when publishing an AI tool, which have been mentioned several times on the site.

To publish an AI tool, roughly speaking, you:
1. Create a workflow based on the concept of the AI tool you want to make
2. Configure, adjust, and test the workflow
3. Test it as an AI tool, and publish

You will probably create and publish in that order. But after you are satisfied with step 2, when you turn it into a tool and test it in step 3, you may notice a defect or a finishing discrepancy that was not visible in workflow mode. In that case, you have to configure and adjust the underlying workflow again, then use "Update workflow" on the AI tool's Edit screen to reflect the changes in the tool's settings, and publish it again.

The catch: when you perform this "workflow update", the information displayed on the AI tool's operation screen changes, and the "User-configurable Settings" are reset! Everything you painstakingly configured and sorted starts again from scratch. I can't help but sigh at having to start over 😮

The remedy for this reset is the "Export/Import settings" function. With it, even if you need to update your workflow repeatedly, you can easily return to the User-configurable Settings you built when you first created the tool, avoiding unnecessary work.

Here's how to do it:

● Export (save settings before a workflow update)
1. Open the AI tool's Edit screen and scroll down to "User-configurable Settings". To its right are three buttons: "Import", "Export", and "Empty". Click "Export".
2. On a PC, a dialog box appears asking where to save the file. Save it somewhere easy to find so you don't lose it later; giving the file an easily identifiable name helps even more. (The procedure is similar on smartphones, tablets, and so on.)
That is the end of "Export".

◆ Import (load the saved settings after updating the workflow)
1. Go to "User-configurable Settings" in the same way as when exporting and click "Import".
2. A dialog box appears asking which file to import; select the file you saved (exported) earlier and click "OK". (On smartphones and tablets, follow the equivalent steps.)
This restores the settings that were reset by the workflow update.

That is how to "Export" and "Import" the "User-configurable Settings", but there is one thing to keep in mind: if you change a node's type, add a new node, or delete a node when updating the workflow, Export/Import will not work. These functions save and restore only the initial settings of the published tool.

There are many similar articles on the site, but I hope this helps those trying to create an AI tool for the first time during the "Christmas Walkthrough" event 🙂
Christmas Walkthrough | Add Radio Buttons to an Old AI Tool
What are Radio Buttons?
They let you use a name in your prompt to pull lines of prompt text from a predefined set. In TensorArt we use them as a substitute for personalized wildcards, so Radio Buttons are pseudo-wildcards. Check this article to learn how to manipulate and personalize them. Radio Buttons require a <CLIP Text Encode> node to be stored within.

What do we need?
Any working AI Tool. In my exploration so far, only certain <CLIP Text Encode> nodes allow you to use them as Radio Button containers. For this example I'll use my AI tool: 📸 Shutterbug | SD3.5L Turbo.

Steps:
1. Duplicate/download your AI Tool workflow (to have a backup).
2. Add a <CLIP Text Encode> node.
3. Add a <Conditioning Combine> node.
4. Assemble the nodes as the illustration shows. Be careful with the combine method: use concat if you're not experienced at combining clips; this instructs your prompting to ADD the Radio Button's prompt.
5. 💾 Save your AI Tool workflow.
6. Go to Edit mode in your AI Tool.
7. Export your current User-configurable Settings (JSON).
8. ↺ Update your AI Tool.
9. Import your old User-configurable Settings (JSON).
10. Look for the new <CLIP Text Encode> node and load it.
11. Hover over the new <CLIP Text Encode> tab and select Edit.
12. Configure your Radio Buttons.
13. Publish your AI Tool.

Done! Enjoy the Radio Button feature in your AI Tools. In my case, my new AI tool looks like this: 📹 Shutterbug | SVD & SD3.5L Turbo.
Note: I also included SVD video to meet the requirements of the Christmas Walkthrough event.
TensorArt New Feature Tutorial: Classic Workbench Text-to-Video and Image-to-Video
Hello everyone! TensorArt has recently launched a new feature in the Classic Workbench, supporting Text-to-Video and Image-to-Video. Today, I'll walk you through how to use these exciting new features to create your own video content!

Step 1: Open the Classic Workbench
First, open the TensorArt Classic Workbench and go to the main interface. Then locate the Text to Video module.

Step 2: Select Model and Settings
On the Text to Video page, you'll see two important options: Models and Settings. Currently, there are three models available for you to choose from.
- FPS (Frames Per Second): FPS indicates how many frames are displayed per second. The higher the FPS, the smoother the video looks. For example, you can set the FPS to 24, which is typically suitable for most video productions.
- Duration: how long your video plays, from start to finish. You can set it in seconds, minutes, or longer, depending on your needs.
Once you've adjusted these settings, input your Prompts (the text description of what you want to generate) and click Generate. Voila! Your video will be created based on the prompts you provided! ✨ (A quick sketch of the frame-count arithmetic follows at the end of this article.)

Step 3: Image-to-Video
Next, let's look at the Image to Video feature. Here, you'll see two models available. First, upload the image you want to use. Then set the related parameters, such as FPS and Duration. Finally, input your Prompts (describing how you want the image turned into a video) and click Generate. It's that simple! By adjusting the settings, you can create creative image-to-video works.

Summary
How easy is that? With just a few simple steps, you can turn text into lively video or transform static images into dynamic video content. Why not give it a try? If you have any questions or want to share your creations, feel free to leave a comment below! We look forward to seeing your creative works! Come try out the Text-to-Video and Image-to-Video features on TensorArt today!
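As promised above, the relationship between FPS, Duration, and the number of frames the model must generate is simple arithmetic; here is a tiny sketch (the values are just the examples used above):

```python
fps = 24            # frames per second, as suggested in Step 2
duration = 5        # clip length in seconds
total_frames = fps * duration
print(total_frames)  # 120 frames must be generated for this clip
```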
[Updated 12/9] Japanese Translation: Official Event, Nov 29 - Dec 26, Christmas Walkthrough
This is a Japanese translation of the Christmas event running from November 29 to December 26.

<Short on time, or not sure what to do?>
Post a video and an image with the "3DVideo" and "RealAnime" AI tools pinned on the home page by 8:59 AM (JST) on December 13 to get 2 days of Pro. This is the hottest deal of the event, so do at least this much.

Original articles:
https://tensor.art/blackboard/ChristmasWalkthrough
https://docs.google.com/document/d/10GsQgVS-myqSHJGDLVQT3Su9o7gjxvCFl3CehL8ICwk/edit?tab=t.0

Hello, traveler! 🎅🎄 Welcome to Tensor Impact! You are about to set out on a Christmas adventure: clear exploration task after exploration task and collect wonderful rewards! ✨

⏰ Exploration period: from November 29 UTC (November 29, 09:00 JST) to December 26 UTC (December 26, 09:00 JST). Complete all "Christmas Walkthrough" tasks within these 28 days and become a successful explorer! 🎁 Finishers receive a $49.9 cash reward and the New Year promotion (buy one, get one free!). In addition, each task earns rewards worth up to $20, Pro membership days, or credits.

📅 Exploration task calendar
One task is set for each day; clear all of a week's tasks within that week to earn the weekly badge! If you miss a task, you can make up for it with a "Magic Badge", so don't worry. Each task displays a difficulty rating (e.g. 🌟 = easy, 🌟🌟🌟 = hard), and guides are provided for the hard tasks, so make use of them! Don't forget to add the "#Christmas Walkthrough" tag to every post!

🎨 Week 1: November 29 - December 5
Complete all tasks during the period to receive 200 credits (bonus included)!
11/29 - 12/5: Post following the daily theme calendar → 20 credits per day

Week 2: December 6 - December 12 (note: this information has been updated!)
Complete all tasks during the period to receive 10 days of Pro!
12/6 Task: Publish a workflow → 1 day of Pro
12/7 Task: Publish a video post → 1 day of Pro
12/8 Task: Publish an AI Tool that generates video → 1 day of Pro
12/9 Task: Use the "RealAnime" AI Tool pinned on the homepage to publish a post → 1 day of Pro
12/10 Task: Publish an article related to AI Tools, including the text "AI Tool" in the title → 1 day of Pro
12/11 Task: Publish an AI Tool containing a "Radio Button" → 1 day of Pro
12/12 Task: Create a buffet plan (counts as completed as long as it is created before 12/13) → 1 day of Pro

Week 3: December 13 - December 19
Complete all tasks during the period to earn a $20 cash reward!

| Date | Difficulty | Task | Reward |
| --- | --- | --- | --- |
| 12/13 | ★★ | Publish a Christmas-themed Model. | $2 cash |
| 12/14 | ★★ | Publish an article related to Model, include the text "Model Training" in the title. | $2 cash |
| 12/15 | ★★ | Publish a Model in one of the "Game Design, Visual Design, Space Design" channels, matching the style of the chosen channel. | $2 cash |
| 12/16 | ★★★ | Publish a Model that successfully joined the TenStarFund. | $2 cash |
| 12/17 | ★★ | Have a Model uploaded after November 29th with over 20 user posts. | $2 cash |
| 12/18 | ★★★ | Publish a Model using Online Training with the base model being Illustrious. | $2 cash |
| 12/19 | ★★★ | Have a purchase record since 11.28 00:00 UTC. | 20 credits |

Bonus: complete all exploration tasks in the third week to earn a total of $20 cash ($2 × 7 + an extra $6).

Week 4: December 20 - December 26
Some tasks this week award a special Honor Badge!
12/20 Have a post published during the event get remixed
12/21 Share TensorArt-related content on social media and answer the survey
12/22 Like, comment on, or star a post tagged #Christmas Walkthrough
12/23 Redeem a badge for 30 credits (My Page → Credits)
12/24 Have an AI tool published during the event enter the top 100 of the "Dark Horse AI Tools" leaderboard
12/25 Have a model published during the event enter the top 100 of the "Dark Horse Models" leaderboard
12/26 Enter the top 100 of the "Creators" leaderboard

Enjoy the adventure! Santa is cheering for you, too! 🎁✨
Christmas Walkthrough [Japanese Translation] 11/29 - 12/26
This is a Japanese translation of the Christmas event. (Revised 12/7.)

Hello, traveler! Welcome to Tensor Impact. You are about to embark on a series of exploration tasks; complete them to win all kinds of luxurious rewards.

⏰ Exploration period: November 29 to December 26 (UTC). Complete all Christmas Walkthrough tasks within 28 days to become a successful explorer! Win $49.9 in cash and a buy-one-get-one-free New Year promotion! Completing each exploration task also earns the corresponding reward ($20, Pro, or credits).

📅 Exploration task calendar
There is one task each day, and completing a week's tasks within that week earns the weekly badge. If you fail to complete one of the tasks, don't worry: check the badge redemption section and redeem a Magic badge, which automatically marks unfinished tasks as complete. The number of 🌟 after a task indicates how hard it is to achieve. An "Exploration Task Guide" is provided for tasks rated three stars and above.

All participating models, AI tools, and posts must carry the tag "#Christmas Walkthrough" when published. Articles and workflows do not need to include the "ChristmasWalkthrough" tag.

# Notice: tasks do not all have to be completed on their exact day; you can do them in advance or by the end of that week. If a task is not finished by the end of its week, however, it counts as missed.

Caution: this was auto-translated into Japanese, so some of the designated keywords differ from the original; the required text is 12/10 "AI Tool" and 12/14 "Model Training".

Our Dark Horse leaderboard: [TensorArt] Christmas Walkthrough: Dark Horse Leaderboard
For the 12/21 task, answer this survey (Google Form) after posting on social media.

Daily themes

🔱 Badge introduction
Badge types:
- Daily badges: 26 in total, awarded for completing each daily exploration task (valid until January 10).
- Weekly badges: 4 in total, awarded for completing all of a week's tasks (valid until January 10).
- Ultimate badge: 1 in total, awarded for completing all exploration tasks (valid for 90 days).
- 12/23 task badge: 1 in total; redeeming it for the December 23 task requires credits (valid until January 10).
- Magic badges: 4 in total; they can be redeemed to automatically mark unfinished tasks as complete, but grant no rewards (valid until January 10).
- Honor badge: 1 in total, awarded automatically for completing the December 26 task. It can be redeemed, but grants no rewards (valid until January 10).

Issuing rules:
- All event times are calculated in UTC; be sure to complete tasks within UTC time.
- Every Friday, badges are issued for tasks completed in the previous week.
- Weekly tasks (Friday through the following Thursday) must be completed within the same week to count as completed. For example, the December 6 task must be completed between December 1 and December 7.
- Completing all of a week's tasks earns only the weekly badge, not the daily badges.
- Task, Magic, and Honor badges are granted automatically upon redemption.
- The task badge can only be obtained through redemption; it cannot be substituted with a Magic badge.
- The Honor badge can only be obtained through redemption; it cannot be substituted with a Magic badge.

Redemption rules:
- The badge redemption period runs from November 29 to December 26.
- For the December 26 task, redeeming the "Honor badge" that marks it completed costs 10,000 credits.
- There are five Magic badges; redeeming four of them costs 5, 50, 500, and 1,000 credits respectively, and they grant no completion rewards.
- Magic badges cannot redeem the December 23 and December 26 badges.
- Once redeemed, badges cannot be returned.

📜 Event rules
- Users with the system default avatar and nickname will not receive rewards.
- Cash rewards are deposited into the GPU fund at the end of the event and can be withdrawn at any time.
- Event models must be original; reposts or merges do not count.
- Event content must comply with community rules. NSFW, child pornography, celebrity images, violence, and low-quality content are not eligible.
- Cheating results in disqualification. Tensor.Art reserves the right of final interpretation of the event. If you have any questions, open a ticket on Discord and contact the staff.

Use the tag "#Christmas Walkthrough"!
(Written large because it is easy to forget.) The "#" is the mark that indicates a tag; just enter "Christmas Walkthrough" in the tag field. (Articles and workflows do not need to include the "ChristmasWalkthrough" tag.)

Notes for Japanese users:
- Tasks most likely have to follow UTC time; 9:00 AM JST is 0:00 UTC.
- Set your username and avatar.
- Child-related content is judged far more strictly than Japanese users tend to expect; avoid images of children and chibi characters.

Answers from the staff:
Q: Can the same AI tool created in week 2 of the Christmas Walkthrough count toward all of that week's tasks (if it meets all the requirements), or do I need to create separate tools? Also, if I update an old AI tool to meet the new requirements instead of publishing a new one, does that count?
A: An AI tool created in week 2 can count; every AI tool created after 11/29 counts. However, merely updating an old AI tool does not meet the requirements, so it must be a new AI tool.

About Magic badges:
A: The Magic badge is a kind of compensation mechanism. If you miss a task or cannot complete it on a given day, you can purchase a Magic badge to redeem the missed badge, making it easier to win the final reward. For example, the December 17 task is "Have a model uploaded after November 29 with over 20 user posts"; suppose you could not achieve it and are one badge short of the final $49.9 reward. Redeeming a Magic badge automatically fills the gap so you can still win the grand prize.

Forgot to post?
Even if you forget to post a daily theme on its day, don't worry: publish posts for all 7 daily themes within that week and you will still earn the badge and rewards. If you missed a few days, catch up!

Exploration task guide
This guide gives detailed instructions for the high-difficulty exploration tasks rated three stars and above.

12/7 Exploration task: Publish an AI tool that generates video.
How to complete: we recommend using one of the following video nodes: Cogvideo, Mochi, Pyramid-Flow. Build a video workflow (text-to-video or image-to-video) and publish it as an AI tool.

12/8 Exploration task: Use the "3DVideo" AI tool pinned on the homepage to publish a video post.
How to complete: use the designated AI tool 👉 3DVideo 👈 to generate and post.

12/9 Exploration task: Use the "RealAnime" AI tool pinned on the homepage to publish a post.
How to complete: use the designated AI tool 👉 RealAnime 👈 to generate an image and post it.

12/11 Exploration task: Publish an AI tool containing a "Radio Button".
How to complete: when publishing the AI tool, open a prompt node that users can configure (e.g. text) and select "Radio Button" under "Input Type".

12/16 Exploration task: Publish a model that has successfully joined TenStarFund.
How to complete: 💸 earn income by running your model through the TenStar Fund project. For detailed instructions, check: [link]

12/18 Exploration task: Publish a model using Online Training with Illustrious as the base model.
How to complete: follow the specific instructions provided for online training with the base model Illustrious.

12/26 Exploration task: Rank in the top 100 of the "Creators" leaderboard.
How to complete: click the link to view the leaderboard: [link]
RealAnime Event: Toon Drifter Faction Showdown! ~11/28 (Japanese Translation)
RealAnime, a TensorArt-exclusive model that lets anime characters break the fourth wall, is here! 🎉 With an easy-to-use AI tool, you can generate anime characters in real-world scenes: just enter a prompt and watch the magic happen.

Show Drifter
The alarm goes off: time to get up and go to work! Anime characters have to work hard to fill their bellies too. Use the designated AI tool to design the working life of your favorite anime character. 💼✨
Unlike Bruce Wayne, the Joker has to buy groceries and cook for himself after work. 🤡
Rem has to learn to make coffee and desserts at a maid café.
His salary was so low that Thanos decided to snap his fingers and blow up the company. 💥

Join the faction showdown!
Pick a faction and raise its reputation by posting with the designated tag!
Faction tags (probably required; use one of them):
#Driftermon
#DrifterAvengers
#DrifterDoom

Reputation calculation rule:
Reputation = (number of Pro users who posted × 0.4 + number of Standard users who posted × 0.2 + number of likers × 0.1 + number of remixers × 0.3) × 100
Each faction's reputation updates daily, so remember to post every day and rally support for your team. 🏆
*The official event page has a tag that displays team ratings.

Top reputation bonus: every member of the winning faction receives 500 credits and 1 day of Pro. 🎉
Special bonus: high-quality posts also have a chance at mystery rewards! 🎁

Social media post rewards:
Earn 100 credits per social media post, up to 500 credits.
Content format: unrestricted!
Must include the tags #TensorArt and #RealAnime.
Supported platforms: Instagram, TikTok, Twitter, Facebook, Reddit, YouTube, Pinterest.
Additional rewards:
- 500+ likes: $20
- 500+ retweets: $70
- If you have more than 5,000 followers, 500+ likes earns $40 and 500+ retweets earns $140.
Click the form icon, confirm your participation info, and claim your rewards! 📲

Event period: November 18 - November 28

Event rules:
- Post themes and content must match the style of the event.
- Each post may include only one event tag.
- Users with default avatars and nicknames are not eligible for rewards.
- NSFW, child/celebrity pornography, and low-quality content do not count as valid entries.
- Cheating results in disqualification from the event.
- Tensor.Art holds the right of final interpretation of the event.

The correct way to generate (official): just 4 steps to a hot "fourth-wall-breaking" image!
Click the AI tool to get started! 🖱️✨
Step 1: On the right side of the page, pick a character-name option, or click "Custom" and type an anime character's name.
Step 2: In the "do something" section below, pick a corresponding action option, or click "Custom" and describe the action. A detailed description gives more accurate results, e.g. "wearing a red dress, drinking wine in a real convertible".
Step 3: Choose the "image size"; there are 9 common sizes to pick from based on your needs.
Step 4: Click the "go" button below and wait patiently for the image to generate. Switch tabs at the top to see past results.

Tips:
- Click "Translate" to translate your input text into English.
- If you are not satisfied with the result, change the character or scene and try again. 🎨✨

Hamster-style generation method 1: on the theory that splitting it into two fields is good enough, write whatever you like.
Hamster-style generation method 2: put nothing but a space " " in field ②.
Tip: honestly, writing a normal prompt is faster.
Halloween2024 | Unlocking Creativity: The Power of Prompt Words in Writing
Unlocking Creativity: The Power of Prompt Words in Writing

Writing can sometimes feel tough, especially when you're staring at a blank page. If you're struggling to find inspiration, prompt words can be a helpful tool. These words can spark ideas and make writing easier and more fun. Let's explore how prompt words can boost your creativity and how to use them effectively.

What Are Prompt Words?
Prompt words are specific words or phrases that inspire you to write. They can be anything from a single word to a short phrase that gets your imagination going. For example, words like "adventure," "friendship," or "mystery" can lead to exciting stories or poems.

Why Use Prompt Words?
1. Overcome Writer's Block: if you're stuck and don't know what to write, a prompt word can give you a direction to start.
2. Spark Creativity: one word can trigger a flood of ideas. It helps you think outside the box.
3. Try New Styles: prompt words encourage you to write in different genres or styles you might not normally explore.
4. Build a Writing Habit: using prompt words regularly can help you develop a consistent writing routine.

How to Use Prompt Words
1. Make a List: start by writing down some prompt words that inspire you. A few examples: Adventure, Dream, Secret, Journey, Change.
2. Quick Writing Exercise: pick a prompt word and set a timer for 10 minutes. Write anything that comes to mind without worrying about making it perfect. This helps you get your ideas flowing.
3. Write a Story or Scene: choose a prompt word and try to write a short story or scene based on it. For example, if your word is "mystery," think about a detective solving a case.
4. Create a Poem: use a prompt word to write a poem. Let the word guide your ideas and feelings. You can write a simple haiku or free verse.
5. Share with Friends: share your prompt words with friends and challenge each other to write something based on the same word. This can lead to fun discussions and new ideas.

Tips for Using Prompt Words
- Write Daily: spend a few minutes each day writing with a prompt word. This builds your skills and keeps your creativity flowing.
- Make a Prompt Jar: write different prompt words on slips of paper and put them in a jar. Whenever you need inspiration, pull one out and start writing.
- Reflect on Your Work: after you write, take a moment to think about what you created. What did you like? What can you improve?
- Explore Different Genres: use prompt words to try writing in genres you don't usually write in, like fantasy or poetry. This helps you grow as a writer.

Conclusion
Prompt words are a simple yet powerful way to boost your creativity and make writing enjoyable. They can help you overcome blocks, spark new ideas, and develop a consistent writing habit. So, the next time you feel stuck, remember that a single word can lead to amazing stories. Embrace the power of prompt words and watch your creativity soar!
Are score_tags necessary in PDXL/SDXL Pony Models? | Halloween2024
Consensus is that the latest generation of Pony SDXL models no longer require "score_9 score_8 score_7" written in the prompt to "look good".

//----//

It is possible to visualize our actual input to the SD model for CLIP_L (a 1x768 tensor) as a 16x16 grid of RGB values, since 16 x 16 x 3 = 768. I'll assume CLIP_G in the SDXL model can be ignored; CLIP_G is presumably functionally the same, just at 1024 dimensions instead of 768.

So here we have the prompt: "score_9 score_8_up score_8_up". Then I can do the same for the prompt "score_9 score_8_up score_8_up" + X, where X is some random extremely sus prompt I fetch from my gallery. Assume it fills up to the full 77 tokens (I set truncate=True on the tokenizer, so it just caps off past the 77-token limit).

Examples: etc. etc.

Granted, the first three tokens of the prompt greatly influence the "theme" of the output for the 768 encoding. But from the images above, one can see that the "appearance" of the text encoding can vary a lot. Thus, the "best" way to write a prompt is rarely universal.

Here I'm running some random text I write myself to check similarity to our "score prompt" (the top result should be 100%, so I might have a rounding error):

score_6 score_7_up score_8_up : 98.03%
score 8578 : 85.42%
highscore : 82.87%
beautiful : 77.09%
score boobs score : 73.16%
SCORE : 80.1%
score score score : 83.87%
score 1 score 2 score 3 : 87.64%
score : 80.1%
score up score : 88.45%
score 123 score down : 84.62%

So even though the model is trained for "score_6 score_7_up score_8_up", we can be fairly loose in how we phrase it, if we want to phrase it at all. The same principle applies to all LoRA and their activation keywords.

Negatives are special: the text we write in the negatives is split by whitespace, and the chunks are encoded individually.

Link to the notebook if you want to run your own tests (a minimal similarity sketch also follows after the list below):
https://huggingface.co/datasets/codeShare/fusion-t2i-generator-data/blob/main/Google%20Colab%20Jupyter%20Notebooks/fusion_t2i_CLIP_interrogator.ipynb
I use this thing to search up prompt words using the CLIP_L model.

//---//

These are the most similar items to the Pony model "score prompt" within my text corpus. Items of zero similarity (perpendicular) or negative similarity (vector pointing in the opposite direction) to the encoding are omitted from these results. Note that these are encodings similar to the "score prompt" trigger encoding, not an analysis of what the Pony model considers good quality.

Prompt phrases in my text corpus most similar to "score_9 score_8_up score_8_up" according to CLIP (the peak of the graph above):

Community: sfa_polyfic - 68.3 %
holding blood ephemeral dream - 68.3 %
Excell - 68.3 %
supacrikeydave - 68.3 %
Score | Matthew Caruso - 67.8 %
freckles on face and body HeadpatPOV - 67.8 %
Kazuno Sarah/Kunikida Hanamaru - 67.8 %
iers-kraken lun - 67.8 %
blob whichever blanchett - 67.6 %
Gideon Royal - 67.6 %
Antok/Lotor/Regris (Voltron) - 67.6 %
Pauldron - 66.7 %
nsfw blush Raven - 66.7 %
Episode: s08e09 Enemies Domestic - 66.7 %
John Steinbeck/Tanizaki Junichirou (Bungou Stray Dogs) - 66.7 %
populism probiotics airspace shifter - 65.4 %
Sole Survivor & X6-88 - 65.4 %
Corgi BB-8 (Star Wars) - 65.4 %
Quatre Raberba Winner/Undisclosed - 65.2 %
resembling a miniature fireworks display with a green haze. Precision Shoot - 65.2 %
bracelet grey skin - 65.2 %
Reborn/Doctor Shamal (Katekyou Hitman Reborn!)/Original Male Character(s) - 65.2 %
James/Madison Li - 65.1 %
Feral Mumintrollet | Moomintroll - 65.1 %
wafc ccu linkin - 65.1 %
Christopher Mills - 65.0 %
at Overcast - 65.0 %
Kairi & Naminé (Kingdom Hearts) - 65.0 %
with magical symbols glowing in the air around her. The atmosphere is charged with magic Ghost white short kimono - 65.0 %
The ice age is coming - 65.0 %
Jonathan Reid & Bigby Wolf - 65.0 %
blue doe eyes cortical column - 65.0 %
Leshawna/Harold Norbert Cheever Doris McGrady V - 65.0 %
foxtv matchups panna - 65.0 %
Din Djarin & Migs Mayfeld & Grogu | Baby Yoda - 65.0 %
Epilogue jumps ahead - 65.0 %
nico sensopi - 64.8 %
秦风 - Character - 64.8 %
Caradoc Dearborn - 64.8 %
caribbean island processing highly detailed by wlop - 64.8 %
Tim Drake's Parents - 64.7 %
probiotics hardworkpaysoff onstorm allez - 64.7 %
Corpul | Coirpre - 64.7 %
Cantar de Flor y Espinas (Web Series) - 64.7 %
populist dialog biographical - 64.7 %
uf!papyrus/reader - 64.7 %
Imrah of Legann & Roald II of Conte - 64.6 %
d brown legwear - 64.6 %
Urey Rockbell - 64.6 %
bass_clef - 64.6 %
Royal Links AU - 64.6 %
sunlight glinting off metal ghost town - 64.6 %
Cross Marian/Undisclosed - 64.6 %
ccu monoxide thcentury - 64.5 %
Dimitri Alexandre Blaiddyd & Summoner | Eclat | Kiran - 64.5 %
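If you would rather not run the full notebook linked above, here is a minimal sketch of the similarity measurement described in this article, assuming the standard Hugging Face openai/clip-vit-large-patch14 text encoder (the CLIP_L used by SD-class models); the notebook's exact pipeline may differ. The reshape at the end shows the 16x16x3 = 768 visualization trick:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
text_model = CLIPTextModelWithProjection.from_pretrained(model_id)

def encode(prompt: str) -> torch.Tensor:
    # truncation=True caps the prompt at CLIP's 77-token limit
    tokens = tokenizer(prompt, truncation=True, padding="max_length",
                       max_length=77, return_tensors="pt")
    with torch.no_grad():
        return text_model(**tokens).text_embeds[0]  # shape: (768,)

ref = encode("score_9 score_8_up score_8_up")
for candidate in ["score_6 score_7_up score_8_up", "highscore", "beautiful"]:
    sim = torch.nn.functional.cosine_similarity(ref, encode(candidate), dim=0)
    print(f"{candidate}: {sim.item() * 100:.2f}%")

# The 16x16 RGB grid visualization: normalize to [0, 1], then reshape,
# since 16 * 16 * 3 = 768
grid = (ref - ref.min()) / (ref.max() - ref.min())
grid = grid.reshape(16, 16, 3)
```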
My Personal Guide to Choosing the Right AI Base Model for Generating Halloween2024 Images
A simple comparison of the models (based on personal opinion):
1. SDXL: Best for producing high-quality, realistic images and works well with various styles. It excels at detail enhancement, especially for faces, and offers many good LoRA variations. It generates large, sharp images that are perfect for detailed projects. However, some images may appear distinctly "AI-generated," which might not suit everyone's preference.
2. Pony Diffusion: Known for its artistic flexibility; it doesn't copy specific artist styles but gives beautiful, customizable results. It is also fine-tuning capable, producing stunning SFW and NSFW visuals with simple prompts. Users can describe characters specifically, making it versatile for various creative needs.
3. SD3: Focuses on generating realistic and detailed images, offering more control and customization than earlier versions. Despite the many controversies surrounding SD3, it is also widely used in ComfyUI.
4. Flux: Ideal for fixing image issues like anatomy or structure problems. It enhances image quality by adding fidelity and detail, particularly in text and small image elements, and can deliver a clearer concept and better prompt adherence with more natural depiction.
5. Kolors: Great for styling and for colorful, vibrant artwork, especially in fantasy or creative designs.
6. Auraflow: Specializes in smooth, flowing images, often with glowing or ethereal effects, perfect for fantasy or sci-fi themes.

And if you want to combine the best of different AI models, you can try my workflow or my AI tool:
- SDXL MergeSimple: this simple workflow can merge 2 checkpoints with the same base.
- Pony + FLUX Fixer: try this AI tool if you want to merge 2 different bases; since FLUX is good at fixing images, text, and small details, it is effective without having to work twice.

Finally, all of this is my personal opinion from my own experience. How about you? Do you have a different opinion, and which model do you prefer? Share your thoughts in the comments below; let's open the discussion!
LoRA Training for Stable Diffusion 3.5
The full article can be found here: Stable Diffusion 3.5 Large Fine-tuning Tutorial

Images should be cropped into these aspect ratios:
(1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216), (1344, 768), (768, 1344), (1472, 704)

If you need help automatically pre-cropping your images, this is a lightweight, barebones [script](https://github.com/kasukanra/autogen_local_LLM/blob/main/detect_utils.py) I wrote to do it. It finds the best crop depending on:
1. Is there a human face in the image? If so, the crop is oriented around that region of the image.
2. If no human face is detected, the crop uses a saliency map, which detects the most interesting region of the image. The best crop is then extracted centered around that region.
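The linked script handles the face and saliency detection; the bucket-selection step alone can be sketched like this (my own minimal version, not the script's actual code): pick the training aspect ratio closest to the image's ratio, then crop to it.

```python
# Training aspect buckets from the tutorial
ASPECTS = [(1024, 1024), (1152, 896), (896, 1152), (1216, 832),
           (832, 1216), (1344, 768), (768, 1344), (1472, 704)]

def best_bucket(width: int, height: int) -> tuple:
    """Return the bucket whose aspect ratio is closest to the image's."""
    ratio = width / height
    return min(ASPECTS, key=lambda wh: abs(wh[0] / wh[1] - ratio))

print(best_bucket(1920, 1080))  # -> (1344, 768), the closest 16:9-ish bucket
```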
k4s4, a rear view of a woman wearing a red hood and faded skirt holding a staff in each hand and steering a small boat with small white wings and large white sail towards a city with tall structures, blue sky with white clouds, cropped
If you don't have your own fine-tuning dataset, feel free to use this dataset of paintings by John Singer Sargent (downloaded from WikiArt and auto-captioned) or a synthetic pixel art dataset.I’ll be showing results from several fine-tuned LoRA models of varying dataset size to show that the settings I chose generalize well enough to be a good starting point for fine-tuning LoRA.repeats duplicates your images (and optionally rotates, changes the hue/saturation, etc.) and captions as well to help generalize the style into the model and prevent overfitting. While SimpleTuner supports caption dropout (randomly dropping captions a specified percentage of the time), it doesn’t support shuffling tokens (tokens are kind of like words in the caption) as of this moment, but you can simulate the behavior of kohya’s sd-scripts where you can shuffle tokenswhile keeping an n amount of tokens in the beginning positions. Doing so helps the model not get too fixated on extraneous tokens.Steps calculationMax training steps can be calculated based on a simple mathematical equation (for a single concept):There are four variables here:Batch size: The number of samples processed in one iteration.Number of samples: Total number of samples in your dataset.Number of repeats: How many times you repeat the dataset within one epoch.Epochs: The number of times the entire dataset is processed.There are 476 images in the fantasy art dataset. Add on top of the 5 repeats from multidatabackend.json . I chose a train_batch_size of 6 for two reasons:This value would let me see the progress bar update every second or two.It’s large enough in that it can take 6 samples in one iteration, making sure that there is more generalization during the training process.If I wanted 30 or something epochs, then the final calculation would be this:represents the number of steps per epoch, which is 396.As such, I rounded these values up to 400 for CHECKPOINTING_STEPS .⚠️ Although I calculated 11,900 for MAX_NUM_STEPS, I set it to 24,000 in the end. I wanted to see more of samples of the LoRA training. Thus, anything after the original 11,900 would give me a good gauge on whether I was overtraining or not. So, I just doubled the total steps 11,900 x 2 = 23,800, then rounded up.CHECKPOINTING_STEPS represents how often you want to save a model checkpoint. Setting it to 400 is pretty close to one epoch for me, so that seemed fine.CHECKPOINTING_LIMIT is how many checkpoints you want to save before overwriting the earlier ones. In my case, I wanted to keep all of the checkpoints, so I set the limit to a high number like 60.Multiple conceptsThe above example is trained on a single concept with one unifying trigger word at the beginning: k4s4. However, if your dataset has multiple concepts/trigger words, then your step calculation could be something like this so:2 concepts [a, b]Lastly, for learning rate, I set it to 1.5e-3 as any higher would cause the gradient to explode like so:The other relevant settings are related to LoRA.{
"--lora_rank": 768,
"--lora_alpha": 768,
"--lora_type": "standard"
}
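The step arithmetic above, as a small helper (my own sketch of the formula described in this tutorial, not SimpleTuner code):

```python
def max_train_steps(num_samples: int, repeats: int, epochs: int,
                    batch_size: int) -> float:
    """(samples * repeats * epochs) / batch_size, as described above."""
    return num_samples * repeats * epochs / batch_size

steps_per_epoch = 476 * 5 / 6          # ~396.7; the article rounds to 396/400
print(max_train_steps(476, 5, 30, 6))  # 11900.0, the MAX_NUM_STEPS baseline
```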
Personally, I received very satisfactory results using a higher LoRA rank and alpha. You can watch the more recent videos on my YouTube channel for a more precise heuristic breakdown of how image fidelity increases the higher you raise the LoRA rank (in my opinion). If you don't have the VRAM, storage capacity, or time to go so high, you can choose a lower value such as 256 or 128.

As for lora_type, I'm going with the tried and true standard. There is another option for the lycoris type of LoRA, but it's still very experimental and not well explored. I have done a deep dive into lycoris myself, but I haven't found settings that produce acceptable results.

Custom config.json miscellaneous
There are some extra settings you can change for quality of life.
{
"--validation_prompt": "k4s4, a waist up view of a beautiful blonde woman, green eyes",
"--validation_guidance": 7.5,
"--validation_steps": 200,
"--validation_num_inference_steps": 30,
"--validation_negative_prompt": "blurry, cropped, ugly",
"--validation_seed": 42,
"--lr_scheduler": "cosine",
"--lr_warmup_steps": 2400,
}

These are pretty self-explanatory:
"--validation_prompt": the prompt used to generate validation images (your positive prompt), e.g. "k4s4, a waist up view of a beautiful blonde woman, green eyes".
"--validation_negative_prompt": the negative prompt, e.g. "blurry, cropped, ugly".
"--validation_guidance": the classifier-free guidance (CFG) scale, e.g. 7.5.
"--validation_num_inference_steps": the number of sampling steps to use, e.g. 30.
"--validation_seed": the seed value when generating validation images, e.g. 42.
"--lr_warmup_steps": SimpleTuner defaults the warm-up to 10% of the total training steps if you don't set it, and that's a value I use often, so I hard-coded it (24,000 × 0.1 = 2,400). Feel free to change this.
"--validation_steps": how often validation images are generated. I set mine to 200, which is half of 400 (the number of steps in an epoch for my fantasy art example dataset), meaning I generate a validation image every half epoch. I suggest generating validation images at least every half epoch as a sanity check; if you don't, you might not catch errors as quickly as you could.
Lastly, "--lr_scheduler": I went with a cosine scheduler.

### Memory usage

If you aren't training the text encoders (we aren't), SimpleTuner saves us about 10.4 GB of VRAM. With the settings of batch size 6 and a LoRA rank/alpha of 768, training consumes about 32 GB of VRAM. Understandably, this is out of the range of consumer 24 GB VRAM GPUs. As such, I tried to decrease the memory cost by using a batch size of 1 and a LoRA rank/alpha of 128. Tentatively, I was able to bring the VRAM cost down to around 19.65 GB, but running inference for the validation prompts spikes it to around 23.37 GB.

To be safe, you might have to decrease the LoRA rank/alpha even further, to 64. If so, you'll consume around 18.83 GB of VRAM during training, going up to around 21.50 GB during validation inference, which seems safe enough.

If you do decide to go with the higher-spec training of batch size 6 and LoRA rank/alpha 768, you can use the DeepSpeed config I provided [above](https://www.notion.so/Stable-Diffusion-3-5-Large-Fine-tuning-Tutorial-11a61cdcd1968027a15bdbd7c40be8c6?pvs=21) if your GPU VRAM is insufficient and you have enough CPU RAM.
Exploring DoRA, LoRA, and LoKr: Key Insights Before Halloween2024 Training
In parameter-efficient fine-tuning (PEFT) of image-generation models, DoRA, LoRA, and LoKr are three closely related adaptation methods that appear as training options in modern LoRA trainers, including those used for events like Halloween2024. All three avoid retraining a large base model from scratch; they differ in how they parameterize the small weight update that is actually trained. This article explains each method and its respective strengths and weaknesses, with a short code sketch of the shared idea at the end.

1. DoRA (Weight-Decomposed Low-Rank Adaptation)
DoRA decomposes each pretrained weight matrix into a magnitude component and a direction component, then applies a LoRA-style low-rank update to the direction while training the magnitude separately. This gives the optimizer more freedom than a plain low-rank update and, in practice, brings results closer to full fine-tuning.

Strengths:
- Quality at low rank: DoRA often matches full fine-tuning more closely than plain LoRA at the same rank, which matters when you want small adapters.
- Stable learning dynamics: because magnitude and direction adapt independently, training tends to behave more like full fine-tuning.

Weaknesses:
- Higher cost: the decomposition adds some compute and memory overhead compared with plain LoRA.
- Less mature tooling: it is a newer method, so presets and community experience are thinner.

2. LoRA (Low-Rank Adaptation)
LoRA is the most widely used method for fine-tuning large models without significant resources. It freezes the pretrained weights and trains two small matrices whose product forms a low-rank update to each adapted layer. This allows large models (such as diffusion transformers or Vision Transformers) to be adjusted without retraining the entire network from scratch, saving time and cost.

Strengths:
- Resource efficiency: LoRA enables faster and cheaper adaptation, especially when working with large models and smaller datasets.
- Reduced overfitting: since only a small fraction of the parameters is trained, the risk of overfitting drops, which is essential with limited image datasets.
- Pretrained model reuse: large models trained on vast datasets can be adapted to specific styles or subjects with modest data.

Weaknesses:
- Limited to moderate adjustments: if the target dataset differs greatly from the base model's training data, a low-rank update may not be expressive enough, and deeper retraining may still be required.
- Dependent on the base model: results rely heavily on the quality of the pretrained checkpoint; a weak base model yields unsatisfying adapters.

3. LoKr (Low-Rank Adaptation with Kronecker Product)
LoKr, from the LyCORIS family of methods, factors the weight update as a Kronecker product of two much smaller matrices, optionally combined with a further low-rank factorization. Because the Kronecker product of small factors expands into a full-size matrix, LoKr can represent large updates from very few stored parameters.

Strengths:
- Very compact adapters: parameter counts and file sizes are typically much smaller than an equivalent LoRA.
- Structured updates: the Kronecker structure can capture patterns that a plain low-rank product misses at a similar parameter budget.

Weaknesses:
- Less intuitive hyperparameters: the factor dimensions interact in non-obvious ways, so good settings take experimentation.
- Sensitivity: results can be more sensitive to learning rates and factor sizes than with plain LoRA.

Conclusion
DoRA, LoRA, and LoKr are different parameterizations of the same idea: adapt a large pretrained image model by training a small, structured weight update. LoRA is the safe default; DoRA trades a little extra compute for quality closer to full fine-tuning; LoKr trades tuning convenience for very compact adapters. Understanding these trade-offs helps you choose the method that best fits your dataset, hardware, and quality goals.
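To make the shared idea concrete, here is a minimal numpy sketch of the weight updates these methods train. Shapes, initializations, and scaling here are illustrative only and do not follow any particular trainer's API:

```python
import numpy as np

d_out, d_in, rank, alpha = 64, 64, 8, 8

W = np.random.randn(d_out, d_in)        # frozen pretrained weight matrix

# LoRA: train two small matrices whose product is the weight update.
A = np.random.randn(rank, d_in) * 0.01  # "down" projection (trainable)
B = np.zeros((d_out, rank))             # "up" projection, zero-initialized
W_lora = W + (alpha / rank) * (B @ A)   # effective weight at inference

# LoKr: factor the update as a Kronecker product of two small matrices,
# which expands into a full-size delta from far fewer stored parameters.
C = np.random.randn(8, 8)
D = np.zeros((8, 8))
W_lokr = W + np.kron(C, D)              # (8x8) kron (8x8) -> 64x64 delta

# DoRA additionally splits W into a magnitude and a direction component
# and applies the low-rank update to the direction only (omitted here).

print(W_lora.shape, W_lokr.shape)       # both (64, 64)
```

Note the parameter counts: LoRA stores rank × (d_in + d_out) = 1,024 values here, and LoKr only 128, versus 4,096 for the full matrix.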
Halloween2024 - ComfyUI experiences
Hello everyone. I have been working more intensively with various AI tools over the last few days and weeks. In this article I would like to briefly share my opinion on the "workflows" you can create with ComfyUI.

First of all, my computer is not the "more expensive, faster, better" type. It is a Ryzen 5 with a GeForce 3060 Ti. That is not bad, but far from ideal for training LoRAs, checkpoints or other AI things; it simply takes longer than a Ryzen 9 with a GeForce 4090 ;)

But back to ComfyUI and the workflows. Since I have only been working with it for a few days (before that I used A1111, the Stable Diffusion web UI), I am far from someone who can give you tips if you run into problems. But one thing is certain: ComfyUI is dramatically faster than A1111 when creating images. With my current setup, A1111 needs over 2 minutes per SDXL image and almost 5 minutes for FLUX-based images, which is painfully slow. ComfyUI, on the other hand, needs less than 20 seconds for an SDXL image and about 60 seconds for a FLUX-based image on the same machine. Of course, that depends on the workflow.

The problem with ComfyUI, in my opinion, is that it is not at all beginner-friendly. There is a "standard" workflow, but that is not enough; after all, we want to integrate or test various checkpoints, LoRAs and other things. So you start looking at the different options... and then you don't know what to do next. Without reading documentation or studying examples, you will have an extremely difficult time understanding this tool.

With a fresh installation of ComfyUI, after a long browse you will find that the things you actually want are simply not there. This includes features such as placeholders in file names or a better way to save the images you create. That brings us to extensions. As in so many other communities, there is a huge number of them, which unfortunately also makes things very confusing. You have to look closely at what you want, need or expect, and even then there is no guarantee that an extension does what you want.

The worst thing about ComfyUI, in my opinion, is the cluttered menu, and it gets worse with every extension. If you just look at the "Workflow" tool here on Tensor.Art, you will immediately see what I mean.

Still, ComfyUI is a very good and powerful tool. Most importantly, it is much faster than the other tools I have tried so far, and I really like its flexibility. It could, however, use a better menu to make it more user-friendly. If you haven't tried it before, it's worth checking out.
Posts
Daily theme: