TensorArt

Online image & GIF generation, model training, hosting, ComfyUI workflows, and more!
The Best-Performing Open-Source Image Generation Model: Tencent Open-Sources HunYuanImage 3.0

Tensor.Art will soon support online generation and has partnered with Tencent HunYuan for an official event. Stay tuned for exciting content and abundant prizes!

September 28, 2025 — Tencent HunYuan today announced and open-sourced HunYuanImage 3.0, a native multimodal image generation model with 80B parameters. HunYuanImage 3.0 is the first open-source, industrial-grade native multimodal text-to-image model and currently the best-performing and largest open-source image generator, benchmarking against leading closed-source systems.

Users can try HunYuanImage 3.0 on the desktop version of the Tencent HunYuan website (https://hunyuan.tencent.com/image). Tensor.Art (https://tensor.art) will soon support online generation! The model will also roll out on Yuanbao. Model weights and accelerated builds are available on GitHub and Hugging Face; both enterprises and individual developers may download and use them free of charge.

HunYuanImage 3.0 brings commonsense and knowledge-based reasoning, high-accuracy semantic understanding, and refined aesthetics that produce high-fidelity, photoreal images. It can parse thousand-character prompts and render long text inside images, delivering industry-leading generation quality.

What "native multimodal" means

"Native multimodal" refers to an architecture in which a single model handles input and output across text, image, video, and audio, rather than wiring together multiple separate models for tasks like image understanding or generation. HunYuanImage 3.0 is the first open-source, industrial-grade text-to-image model built on this native multimodal foundation.

In practice, this means HunYuanImage 3.0 not only "paints" like an image model, but also "thinks" like a language model with built-in commonsense.
It's like a painter with a brain: it reasons about layout, composition, and brushwork, and uses world knowledge to infer plausible details.

Example: a user can simply prompt, "Generate a four-panel educational comic explaining a total lunar eclipse," and the model will autonomously create a coherent, panel-by-panel story, with no frame-by-frame instructions required.

Better semantics, better typography, better looks

HunYuanImage 3.0 significantly improves semantic fidelity and aesthetic quality. It follows complex instructions precisely, including small text and long passages within images.

Example: "You are a Xiaohongshu outfit blogger. Create a cover image with: 1) Full-body OOTD on the left; 2) On the right, a breakdown of items—dark brown jacket, black pleated mini skirt, brown boots, black handbag. Style: product photography, realistic, with mood; palette: autumn 'Marron/MeLàde' tones." HunYuanImage 3.0 can accurately decompose the outfit on the left into itemized visuals on the right.

For poster use cases with heavy copy, HunYuanImage 3.0 neatly renders multi-region text (top, bottom, accents) while maintaining clear visual hierarchy and harmonious color and layout—e.g., a tomato product poster with dewy, lustrous, appetizing fruit and a premium photographic feel.

It also excels at creative briefs, such as a Mid-Autumn Festival concept featuring a moon, penguins, and mooncakes, with strong composition and storytelling.

These capabilities meaningfully boost productivity for illustrators, designers, and visual creators. Comics that once took hours can now be drafted in minutes. Non-designers can produce richer, more engaging visual content. Researchers and developers, across industry and academia, can build applications or fine-tune derivatives on top of HunYuanImage 3.0.

Why architecture matters now

In text-to-image, both academia and industry are moving from traditional DiT to native multimodal architectures.
While several open-source models exist, most are small research models with image quality far below industrial best-in-class.

As a native multimodal open-source model, HunYuanImage 3.0 re-architects training to support multiple tasks and cross-task synergy. Built on HunYuan-A13B, it is trained with ~5B image-text pairs, video frames, interleaved text-image data, and ~6T tokens of text corpus in a joint multimodal-generation / vision-understanding / LLM setup. The result is strong semantic comprehension, robust long-text rendering, and LLM-grade world knowledge for reasoning.

The current release exposes text-to-image. Image-to-image, image editing, and multi-turn interaction will follow.

Track record & open-source commitment

Tencent HunYuan has continuously advanced image generation, previously releasing the first open-source Chinese native DiT image model (HunYuan DiT), the native 2K model HunYuanImage 2.1, and the industry's first industrial-grade real-time generator, HunYuanImage 2.0.

HunYuan embraces open source, offering multiple sizes of LLMs, comprehensive image / video / 3D generation capabilities, and tooling/plugins that approach commercial-model performance. There are ~3,000 derivative image/video models in the ecosystem, and the HunYuan 3D series has 2.3M+ community downloads, making it one of the world's most popular 3D open-source model families.

Links

The model will soon be available for online generation on Tensor.Art.
Model Playground (desktop only): https://hunyuan.tencent.com/modelSquare/home/play?from=modelSquare&modelId=289
Official Site: https://hunyuan.tencent.com/image
GitHub: https://github.com/Tencent-HunYuan/HunyuanImage-3.0
Hugging Face: https://huggingface.co/tencent/HunYuanImage-3.0
HunYuanImage 3.0 Prompt Handbook: https://docs.qq.com/doc/DUVVadmhCdG9qRXBU

Sample Generations & Prompts (English translations provided)

A wide image taken with a phone of a glass whiteboard, in a room overlooking the Bay Bridge. The field of view shows a woman writing.
The handwriting looks natural and a bit messy, and we see the photographer's reflection. The text reads: (left) "Transfer between Modalities: Suppose we directly model p(text, pixels, sound) [equation] with one big autoregressive transformer. Pros: image generation augmented with vast world knowledge; next-level text rendering; native in-context learning; unified post-training stack. Cons: varying bit-rate across modalities; compute not adaptive" (right) "Fixes: model compressed representations; compose autoregressive prior with a powerful decoder" On the bottom right of the board, she draws a diagram: "tokens -> [transformer] -> [diffusion] -> pixels"

Young Asian woman sitting cross-legged by a small campfire on a night beach, warm light glinting on her skin, shoulder-length wavy hair, oversized knit sweater slipped off one shoulder, holding a burning newspaper (half-scorched), high-contrast warm orange firelight under a deep-blue sky, film-grain texture, waist-up angle.

Young East Asian woman with fair, delicate skin and an oval face. Clear, refined features; large, bright dark-brown eyes looking directly at the viewer; natural brows matching hair color; petite, straight nose; full lips with pale-pink gloss. Shiny brown hair center-parted into two neat braids tied with white ruffled fabric bows. Wispy bangs and strands blown lightly by wind. Wearing a white camisole with delicate white lace trim at the neckline and straps; bare shoulders, smooth skin. Key light from front-right creating highlights on cheeks, nose bridge, and collarbones. Background: expansive water in deep blue, distant land with dark-green trees, lavender sky suggesting dusk or dawn. Overall warm, gentle tonality.

Neo-Chinese product photography: a light-green square tea box with elegant typography ("Eco-Tea") and simple graphics in a Zen-inspired vignette—ground covered with fine-textured emerald moss, paired with a naturally shaped dead branch, accented by white jasmine blossoms.
Soft gradient light-green background with blurred bamboo leaves in the top-right. Palette: fresh light greens; white flowers for highlights. Eye-level composition with the box appearing to hover lightly above the branch. Fine moss texture, natural wood grain, crisp flowers, soft lighting for a pure, tranquil mood.

Zen-inspired luxury perfume still life: a square transparent bottle with warm golden liquid and a black cap on a dark-brown pedestal. Deep-blue gradient backdrop with a minimalist black branch casting sharp silhouettes and three white magnolias in bloom. Palette: deep blue, warm gold, pure white, rich black; eye-level, centered composition with soft light, premium finish, and Eastern floristry aesthetics.

Advertising still life: a floating ketchup bottle surrounded by fresh tomatoes with splashes and flying juice. Dominant rich red scene; realistic, high-impact style; crystalline splash details and plump tomatoes. Center composition focusing on the bottle and the explosion. Include label text: "WELL 威尔番茄沙司 净含量 300g".

Neo-Chinese, Zen-style premium tea still life: light-blue "Seek-Tea" box with refined Chinese typography and gold-foil motifs; sky-blue gaiwan (bowl, lid, saucer) with a smooth glaze; and a deep-blue vase. Two dark wood furniture pieces: a taller stand on the left and a shorter stand on the right with gold vertical metal trim. Red-orange gradient background (deep red top → light orange bottom) with soft bamboo shadows. Palette: red-orange, light blue, sky blue, deep blue, dark wood, gold. Eye-level, symmetric layout; soft bamboo shadows, fine wood grain, warm ceramic sheen, refined metal textures; soft light for a high-end, Eastern ambience.

Lifestyle product photo: a woman with voluminous curls wearing over-ear headphones and a loose pale-yellow sweatshirt, seated by a sun-lit window. Background: bright yellow wall and clear blue sky. Warm, comforting vibe; palette dominated by yellows with fresh blue accents.
Side view, relaxed pose (elbow on desk, cheek in hand, gaze drifting). Details: curly texture, compact wireless headphones, translucent brown glass cup, minimalist devices, gentle natural-light shadows—cozy and serene.

Illustration style: a step-by-step tutorial explaining how to make a latte. Must include a title and English step descriptions.
Qwen Prompting Guide - Best Ever!

In this guide, we'll explore key strategies for prompt design on Qwen and ways to improve the quality and stability of generated results through precise descriptions of content and style.

Prompting Tips

General:

❗️Core Strategy: use coherent, natural sentences to describe the scene's content (subject + action + environment), and clear, concise phrases to describe the style, composition, camera angle, quality, etc. A universal template is as follows:

[Subject Description] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters] + [Emotional Atmosphere] + [On-Screen Text]

Subject Description: for a person, describe appearance, expression, and action; for an object, detail material, color, and shape.
Environmental Background: specify the scene (e.g., "library at midnight") and the spatial relationships between elements.
Style Tone: define the artistic style (e.g., "ink painting," "cyberpunk") for consistency.
Aesthetic Parameters: include visual elements like composition, perspective, angle, lighting, and color tone.
Emotional Atmosphere: define the conveyed emotion (e.g., "lively," "tense," "relaxed").
On-Screen Text: if text is needed, place it in quotes with position and font details.

💡 Tips

Maintain a consistent visual style to avoid conflicts: "The atmosphere is solemn, with a refreshing and healing tone" → "The image conveys a calm emotion with a fresh and elegant color tone."
Rephrase negative expressions into positive ones: "Avoid cartoon style" → "Create a realistic-style image"; "Don't make the image look crowded" → "The composition is simple, with the subject in the center and 1/3 of the space left empty around it."
When there is a clear use case, specify the purpose and type, such as "mobile wallpaper" or "movie poster."
Avoid unnecessary instructions unrelated to the image.

🌰 Detailed Example

Prompt: A visually striking surreal illustration depicts a giant whale made of brilliant starry skies and molten gold, gliding silently through deep space.
Its body is semi-transparent, revealing flickering star clusters and nebulae, with countless tiny lights falling from its tail, forming a trajectory. In the bottom corner of the image, on a massive, angular, dark-colored meteorite, a short futuristic sentence "WE ARE THE COSMOS DREAMING." is engraved in a glowing, futuristic font, its light mirroring the glow on the whale. The background features a deep, velvet-textured cosmos dotted with distant galaxies. The image exudes a serene, divine quality, a magnificent contrast of scale, and breathtaking details.

1. Content Description
Subject: a giant whale made of brilliant starry skies and molten gold.
Environment: a deep, velvet-textured cosmos dotted with distant galaxies.
Text: "WE ARE THE COSMOS DREAMING."

2. Tone of the Image
Style: surrealism, futuristic illustration.
Atmosphere: serene, mysterious, divine, deep.

Other examples:

Practical Design Tips

Poster Design

Unlike typical images, poster design requires special attention to the theme, visual elements, aesthetic style, and layout of the design.

Theme Description: explain the intended use, defining the general style of the image, such as "a promotional poster for a music festival" or "an advertisement poster for xxx product."
Visual Elements: describe the elements included in the poster image. If text is required, place it in quotation marks and specify its position and font.
Aesthetic Style: define the overall feeling and artistic movement of the poster, such as "vector illustration," "e-commerce style," or "abstract art."
Layout: specify the desired layout (e.g., "rule of thirds," "modular composition") and where the main subject and text should be placed.
Professional Parameters: specific image-detail and output-quality requirements, such as "4K professional level" or "clear lines."

🌰 Detailed Example

Prompt: A summer eco-market event poster in a flat illustration style with bright, cheerful colors.
In the center of the image is a large cartoon-style apple tree, with various handcrafted goods and organic produce stalls underneath. People are happily chatting and shopping. Sunlight filters through the leaves, creating dappled light spots and enhancing the relaxed and joyful atmosphere. The top of the poster features a large, pure light-blue sky for the main title and subtitle. The bottom area is reserved for event details in a clean beige color. The main title reads "Natural Carnival," with the subtitle "Discover the Joy of Green Living." Event details include "Event Date: July 27, 2026" and "Event Location: Central Park Lawn." The organizer's name, "Green Leaf Community," is in the top-right corner.

Other examples:

Font Design

Font design is relatively simple: just describe the text content, font style, color, texture/feel, and the background of the image.

💡 Important: text content must be in quotation marks.

🌰 Detailed Example

An expressive ink calligraphy piece with the text "自在" in bold, flowing cursive. The brushstrokes show rich variations in dryness, thickness, and wetness, with ink spreading naturally across rice paper, as if freshly written. Surrounding the text are faint sketches of distant mountains and a lone boat, with large areas of white space showing the slight yellowing and creases of the ancient paper. The overall atmosphere is ethereal, tranquil, and full of Eastern philosophical charm. Soft natural light shines from the side, highlighting the ink's layers.

1. Text Content: "自在"
2. Font and Calligraphy Style
Font: cursive brush.
Style: bold, dynamic, with rich variations in brushstrokes and a flowing, expressive feel.
3. Color
Ink: a gradient from deep to light ink with natural diffusion and spreading effects.
4. Texture and Feel
Ink texture: natural diffusion and spreading on the paper, showing the unique effect of wet and dry brushstrokes with a layered feel.

Aesthetic Parameters Prompting Handbook

Style Templates
1. 2000s Street Documentary (CCD Taste): 35mm straight-on view, direct flash; high contrast + cool green tint; candid look back; 3:2 ratio; noticeable grain.
2. Cyber Mech Character: rain droplets on armor, surface flowing light, volumetric beams piercing through smoke; path tracing/global illumination; 8K ultra HD.
3. Food Still Life (Dark Tone Texture): layered foreground obstruction (herbs/peppercorns), shallow depth of field, triangular composition; shadows with details.
4. Architectural Space (Order and Light): 24mm exaggerated perspective + foreground framing; hard-edge shadow geometric cuts; low saturation, cool tone.
5. Chinese Portrait (Rain Alley Night Scene): new Chinese-style high collar + velvet shawl, hair dampened by rain; misty volumetric light passing through alley lanterns, shallow depth of field, creamy bokeh; golden-spiral composition; window-light texture, natural skin-tone reflection.

Lighting Systems
1. Light Position/Texture: top light / side-back light / rim light / butterfly light / Rembrandt light / window light / honeycomb-grid light / flag-cut light.
2. Atmosphere: volumetric light / foggy beams / silhouette / high contrast / low contrast / edge highlights.
3. Mixed Colors: dual color-temperature mixed light / gel blue-green, magenta, orange, or green for deliberate color clashes.

Color/Film Effects
1. Film Reference: Portra 400 (soft skin tones) / Cinestill 800T (tungsten blue tint) / Fuji Velvia 50 (high-saturation landscape).
2. Semantic Colors: cyber blue-green / vintage ochre-brown / Hong Kong-style cyan-green / twilight orange-gold.

Post-Production and Texture
1. Cinematic depth of field / natural film grain / vignette / glow / halation.
2. Local contrast / color separation / cross processing / sharpening with contrast retention.
3. LUT stylization / film curves / S-curve / color-noise control.

🪄 Ultimate Streamlined Version

Don't want to bother with writing prompts? Too many rules to remember?
Try TensorArt prompt enhancement! Enter the workspace, input your core keywords, click the Prompt Enhancement icon, and generate the full prompt with one click, easily unlocking all Qwen-Image capabilities.

Effect Comparison

💡 If you're not satisfied with the automatically expanded prompt, you can modify it to suit your needs or regenerate it.

That's all for Qwen-Image's advanced prompt techniques. Open the workspace and make your imagination come alive! 🪄
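For readers who script their prompt generation, the guide's universal template ([Subject Description] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters] + [Emotional Atmosphere] + [On-Screen Text]) can be sketched as a small helper. This is purely illustrative glue code, not a TensorArt or Qwen API; the function name and field names are our own, mirroring the template above:

```python
def build_prompt(subject, environment="", style="", aesthetics="", mood="", text=""):
    """Assemble a prompt following the guide's template:
    [Subject] + [Environment] + [Style] + [Aesthetics] + [Mood] + [On-Screen Text].
    Empty fields are skipped; each kept field becomes one sentence."""
    parts = [subject, environment, style, aesthetics, mood]
    if text:
        # The guide asks for on-screen text to be wrapped in quotation marks.
        parts.append(f'The image contains the text "{text}"')
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p)

# Rebuild the whale example from the guide's component breakdown:
prompt = build_prompt(
    subject="A giant whale made of brilliant starry skies and molten gold glides silently through deep space",
    environment="The background is a deep, velvet-textured cosmos dotted with distant galaxies",
    style="Surreal, futuristic illustration",
    mood="Serene, mysterious, divine",
    text="WE ARE THE COSMOS DREAMING.",
)
print(prompt)
```

Keeping the fields separate like this also makes the guide's other advice mechanical: style conflicts are easier to spot when the style and mood fields sit side by side, and negative phrasings can be rewritten field by field.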
TA Update Log - Hunyuan 2.1, SRPO, Vace, New Labeling Algorithm, Enhanced Prompt etc

We bring you some recent updates on TA that you might find interesting.

HunyuanImage 2.1 Update

Following Tencent's official update of Hunyuan 2.1, which introduced ComfyUI support, TA has now synchronized its system to support this update as well. Hunyuan 2.1 brings:

Enhanced Semantic Understanding: accurately interprets complex semantics, supporting individual descriptions and precise generation for multiple subjects.
Improved Visual Quality: visual textures are more realistic, with enhanced details and significant improvements in lighting and material expression.
Faster, More Stable Performance: response and generation speeds are now more consistent, meeting a wider range of business scenarios.

📣 We now support native 2K resolution output! Don't miss out.

Try it here: https://tensor.art/models/908972930887615947

SRPO Support

SRPO is introduced to address the "oily" texture issue in the Flux model. Compared to previous Flux models, SRPO offers the following advantages:

Oiliness Reduction: effectively eliminates oiliness and AI-generated artifacts, enhancing realism by up to 3x.
Fast Training: the training process now takes only 10 minutes.

Experience it here: https://tensor.art/models/908661281618175817

VACE Support

We previously launched the video VACE feature, and with the help of Wan 2.2 VACE, we've upgraded it to make it even more powerful. VACE provides:

Precise Motion Rendering: accurately recreates action changes and facial expressions.
Simple Setup: just go to the Video Workspace, select Edit as the model type, choose the Wan 2.2 model family, and select Wan 2.2 Vace.
Easy Upload and Generation: upload your materials and hit Generate to create videos.
Multiple Functions: depth control, pose control, subject swap, multi-image reference, and recolorization.

Together these offer comprehensive video editing tools, precise control, and endless creative possibilities.

New Labeling Algorithm

The new Minicpmv4.5 series labeling algorithm is now available for online training. Compared to previous versions, it offers significant improvements:

Better Understanding of Images/Videos: captures more subtle details and semantics with greater precision.
Natural, Fluent Tagging: generates labels more aligned with human expression.

Check it out below:
1. Enter the online training mode, upload your image, and select Labeling → Auto Labeling.
2. Choose the Minicpmv4.5 series from the dropdown menu.
3. Preview the results.

Enhanced Prompt Upgrades

To help improve the quality and stability of generated content, we have upgraded the prompt enhancement feature.

Optimized Prompt Formula: the generated prompt now follows the formula [Subject Description] + [Environmental Background] + [Style Tone] + [Aesthetic Parameters] + [Emotional Atmosphere] + [On-Screen Text], ensuring detailed, accurate expression.

Simply go to the workspace, input your core keywords, and click the Prompt Enhancement icon to generate a complete prompt in one click—no more struggling with unclear or lackluster descriptions! 👇👇👇

Thank you for reading through the update! If you have any questions or suggestions, feel free to join our community on Discord and reach out to the admins for feedback.
Creator Dashboard & Withdraw

This is an introduction to the Creator Dashboard and the withdrawal process. It provides a detailed overview of how you, as a creator, can monitor and analyze the performance of your content on TensorArt, as well as how to withdraw income. 😉

How to Check My Earnings

You can check and manage all your earnings in your Creator Dashboard, where you can see your total accumulated income and your withdrawable income.

Worried about transparency? The Creator Dashboard offers detailed records for every transaction, so you'll always know exactly how much you've earned! 💰

Data Analysis

We have also provided data visualization and analysis capabilities in the "Data Center," allowing you to clearly view your revenue curve and see which models and AI tools contributed to it.

Tip: here you can monitor the performance of all your content, including the number of views, pro runs, and paid interactions. This information is very helpful for planning your future creative direction.

Withdraw

How to withdraw: on the "Income & Withdraw" page, click Withdraw to transfer your income to your bank account.

Note: we will verify your identity and collect your bank card information during your first withdrawal. Please make sure your bank card details and personal information are accurate to avoid payment failure.

How long does withdrawal take: generally, withdrawals are processed within 7 working days. We usually process withdrawal requests every Friday. It may then take another 3-5 days for your bank to transfer the funds to your account.

Tip: if you haven't received the transfer after 14 working days, please check your system notifications. If there's an issue with your transfer, we will send a notification to guide you on how to proceed. If you still have questions, contact us on Discord.

About the Service Fee

Since it's an international transfer, each transaction incurs a $15 fee from the payment channel (not from TA).

Tips:
1️⃣ Accumulate a certain amount before making a single withdrawal to minimize the impact of fees.
2️⃣ Use the Redeem feature. You can use the money in your wallet to redeem Pro membership and credits, with no service fee. It's a good option if you need Pro and credits.

If you have any questions, especially if you encounter any issues with withdrawals, feel free to contact us on Discord.
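The advice to accumulate before withdrawing can be made concrete: because the $15 channel fee is flat, the fraction of a withdrawal lost to it shrinks as the amount grows. A quick sketch (the $15 figure comes from this article; the helper function and sample amounts are illustrative):

```python
FLAT_FEE_USD = 15  # per-transaction payment-channel fee stated above

def effective_fee_rate(amount_usd: float) -> float:
    """Fraction of the withdrawal amount lost to the flat channel fee."""
    if amount_usd <= FLAT_FEE_USD:
        raise ValueError("withdrawal would not cover the fee")
    return FLAT_FEE_USD / amount_usd

# Larger single withdrawals dilute the flat fee:
for amount in (50, 150, 600):
    print(f"${amount}: {effective_fee_rate(amount):.1%} lost to the fee")
# $50: 30.0% lost to the fee
# $150: 10.0% lost to the fee
# $600: 2.5% lost to the fee
```

In other words, withdrawing $50 three times pays $45 in fees, while one $150 withdrawal pays $15, which is exactly why batching (or using Redeem, which has no fee) is recommended.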
Pro Segment - Models

What is a Pro Model?

When a model joins the Pro Segment, only Pro members can use it without limits, while standard users get only three trial runs.

Each time a Pro member runs a Pro model, or your model attracts a new user to purchase a Pro membership, it generates significant revenue for the creator through a high profit-sharing arrangement. 💸

Revenue Generation

The revenue from a Pro model consists of two parts:
1️⃣ Pro Member Usage: every time a Pro member uses your model, you earn extra income.
2️⃣ Pro Membership Referrals: when a user purchases a Pro membership to access your content, you earn a high referral bonus of $3 per person.

✨ Income from the Pro Segment does not conflict with TSF earnings. You can do both!

How to Apply?

Application process for a Pro model:
1️⃣ Click the "Apply to Join" link located at the top of the project details page, above the version information.
2️⃣ A Pro Segment window will appear. Select the version you wish to submit as Pro content and submit your application.
3️⃣ Once your application is approved, you will receive a system notification informing you that the version has been added to the Pro Segment program.

Requirements for a Pro Model

Basic Requirements
1️⃣ The project must be Public and Original.
2️⃣ The project must be Exclusive.
3️⃣ Images generated from this version can be used for commercial purposes.
4️⃣ Once your application is successful, the model cannot be reverted to a non-Pro model.
5️⃣ The model should be of high quality, with stable output performance and high responsiveness to prompts.

Additional Requirements

Besides meeting all Basic Requirements, Pro models also need to perform strongly in at least one of the following areas:

1️⃣ Commercial Value
The model solves practical problems in a specific vertical field such as design, visual arts, gaming, or architecture.
Good examples:

2️⃣ Unique Creativity
The model possesses a high degree of aesthetic creativity or specific functionality. It can also tie into current hot topics, making it distinct from the current model ecosystem and giving it a unique competitive advantage.
Good examples:

Tips & Bonus Points 💡
1️⃣ Provide a clear and straightforward model name.
2️⃣ Provide multiple high-quality showcases.
3️⃣ Provide a detailed introduction to the model's usage methods, recommended prompts, recommended parameters, and examples of image generation results.
Good example:

FAQ

Q: Can I apply if I am launching an Early Access project?
A: No, you must first convert your project into an Exclusive project before applying for a Pro model.

Q: Can I quit the Pro model program after joining?
A: No, you cannot revert your Pro model to a non-Pro model, but you can delete the Pro model.

Q: How are "Pro Membership Referrals" defined?
A: When a standard user has used up their 3 free trials of a Pro model, a pop-up reminds them that this is a Pro model and suggests purchasing a membership to continue using it. If they proceed to buy a membership through this prompt, it counts as one successful referral.

Q: Will joining the Pro model program affect the existing TenStar Fund creator incentives?
A: No, the two do not conflict. Revenue from a Pro model = Pro Segment revenue (member usage + member referrals) + TenStar Fund.

Q: If my model does not join the Pro model program, will I still receive TenStar Fund?
A: Yes, you can apply for TenStar Fund as long as you meet the application criteria and your application is approved.

Q: If my application to join the Pro model program fails, can I apply again?
A: Yes, you can apply again after modifying your model content according to the application criteria to improve your chances of approval.

Q: I am an AI tool creator. How can I earn revenue?
A: The Pro Segment for AI tools will open soon, so please stay tuned for official updates.

👀 If you have further questions, contact us on the TensorArt Discord!
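The FAQ's revenue formula (Pro model revenue = Pro Segment revenue, i.e. member usage plus member referrals, plus TenStar Fund) can be sketched numerically. The $3 referral bonus is stated in this article; the per-run payout rate is not published here, so it appears below as an explicitly hypothetical parameter:

```python
def pro_model_revenue(pro_runs: int, referrals: int, tsf_usd: float,
                      per_run_usd: float) -> float:
    """Revenue from a Pro model = Pro Segment revenue
    (member usage + member referrals) + TenStar Fund, per the FAQ.

    per_run_usd is a hypothetical placeholder; the article does not
    disclose the actual per-run payout rate."""
    REFERRAL_BONUS_USD = 3  # stated: $3 per referred Pro membership
    usage_income = pro_runs * per_run_usd
    referral_income = referrals * REFERRAL_BONUS_USD
    return usage_income + referral_income + tsf_usd

# e.g. 100 Pro runs at a hypothetical $0.50/run, 10 referrals, $20 from TSF:
print(pro_model_revenue(100, 10, 20.0, per_run_usd=0.5))  # -> 100.0
```

The point of the formula is simply that the three income streams stack rather than replace one another, which is also what the "dual earnings" FAQ answer says.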
Monetize on TensorArt - Overall Guide

💰 Monetize Your Creations on TensorArt

"Train once, thrive everywhere!" At TensorArt, there are multiple ways to monetize your work. 🎨 ➡️ 💸

✨ Turn your works into cash income: TensorArt offers tailored monetization paths to reward your creativity! ✨💸 Whether you're a model creator, a ComfyFlow creator, or an AI image artist, you can monetize your work on Tensor.Art and easily earn income. This article comprehensively introduces Tensor.Art's monetization features, making it easy for you to get started~

🌐 For the multi-language version 👉 Monetize your creation - multi language (note: this will redirect to another webpage)

🔍 Quick Navigation
Earn from models: [Join TenStar Fund], [Join Pro Model]
Earn from ComfyFlow: [Join Pro AI Tool]
Earn from AI images & videos: [Earn as AI Image Artist]
Where to check your earnings 👉 [Creator Dashboard & Withdraw]

✨ For Model Creators

We've designed two powerful programs to help you profit from high-quality models: TenStar Fund and the Pro Segment. You can also set up paid downloads and paid runs, as well as receive buffet tips from fans. Choose the one that fits your goals, or leverage them all for maximum earnings!

1️⃣ TenStar Fund

TenStar Fund (TSF) offers a stable cash reward program for high-quality model creators. You earn daily rewards based on your model's usage: the more runs your model gets, the more you earn. Our top creators earn thousands of dollars per month through TSF. 💸🤑

Why join TSF?
Sustainable Daily Income: cash rewards on a T+1 schedule. 💰
More Exposure: TSF models get prioritized display and a higher chance of being featured. More exposure, more runs, more money!
Social Media Ads and Custom Orders: promotion worldwide on Instagram, X, YouTube, etc., plus the chance to receive custom orders from enterprise clients.

How to join TSF, plus more information and tips 👉 Must-Read to Earn from TSF 😉

2️⃣ Pro Segment - Models

The Pro Segment is a new monetization opportunity provided by TA for creators. By joining the Pro Segment, creators can earn additional income through Pro members' usage and referrals.

Pro Model Earnings
Pro Member Usage: ✨ every time a Pro member uses your model, you earn extra income.
Pro Membership Referrals: ✨ when a user purchases a Pro membership to access your content, you earn a high referral bonus of $3 per person.

Why join the Pro Segment?
Higher Earnings: the income from Pro member usage and referrals is substantial, perfect for creators with an established fan base.
Dual Earnings: Pro Segment earnings and TSF income can be enjoyed simultaneously without conflict.

How to join the Pro Segment, plus more information and tips 👉 Must-Read to Earn from Pro Segment 🥰

3️⃣ Paid Download/Run + Buffet

Models can have paid downloads and paid runs, with the amount determined by the creator. You can also set up your buffet, allowing users to tip you. All paid download/run and buffet income goes entirely to the creator.

How to set up paid features and buffet 👉 Must-Read for Paid Features & Buffet Setup

✨ For ComfyUI Creators

We have also prepared a complete monetization option for ComfyUI creators, allowing you to publish your workflows as AI Tools and join the Pro Segment, turning your ComfyUI creations into income!

What is an AI Tool?

An AI Tool is essentially a packaged version of a ComfyFlow. Once packaged, users interact only with the inputs and outputs you choose to expose, making it simpler, more convenient, and beginner-friendly. You need to publish your workflow as an AI Tool before you can start earning from it. Don't worry, it's very easy.

How to publish a workflow as an AI Tool 👉 AI Tool — step-by-step guide

💸 Earn from Pro Segment - AI Tools

The revenue from a Pro AI Tool consists of two parts:
Pro Member Usage: ✨ based on how many Pro members have run your Pro AI Tool.
Pro Membership Referrals: ✨ based on how many users have purchased Pro memberships through your AI Tool.

Why join Pro AI Tool?
Higher Earnings: the income from Pro member usage and referrals is substantial, perfect for creators with an established fan base.
More Exposure: AI Tools in the Pro Segment are more likely to be featured on the homepage.

How to join Pro AI Tool, plus more tips 👉 Must-Read to Earn from AI Tools 😋

🧑‍🎨 For AI Image & Video Artists

Participating in website events is a great way to earn cash, Pro membership, and credits. TA hosts exciting events all year round, and you can grab amazing cash rewards simply by creating and posting stunning images or videos that meet the event requirements! 🎉

Check out the banner on the homepage to see the latest events, or head over to the Event page to explore past events and get inspired. Don't miss out – your next big win could be just a post away! 💰✨

💰 How to Check My Revenue & Withdraw

We have provided all the features you need in the Creator Dashboard, including viewing, analyzing, and managing your revenue, as well as withdrawals. For a more detailed introduction and some withdrawal tips, please refer to 👉 Creator Dashboard & Withdraw Tips

🤩 Ready to Start Earning?

We look forward to seeing your creations shine on TensorArt and to building the AI open-source community together with you! If you have any questions or need assistance, feel free to contact us on [Discord].
Avatar & Homepage Update Highlights

Redesigned Homepage
We've streamlined the way you enter the creative workspace from the TensorArt homepage. Now it's easier than ever to start generating images and videos right away! Each entry point is paired with inspiring examples, real works from our community, because we can't wait to see more of your amazing creations appear here.

Avatar Function Support
With just one static image and a short audio clip, you can now create cinematic-quality digital human videos. Expect natural facial expressions, perfectly synced lip movements, and fluid body gestures, with up to 60 seconds per generation and full text-based control. Applications range from livestreaming with digital avatars to film and media production, and much more.

How to get started:
1. On the homepage, click the Avatar entry to enter the workspace.
2. Select your model type: Infinite Talk excels at voice-to-character synchronization; WAN2.2-S2V offers superior visual quality.
3. Upload your character's image.
4. Upload or generate your audio file online. Finding the right audio file can be tricky, so we've added an online audio-cloning feature: the system renders your dialogue using the tone and style of your reference. Simply click Generate Audio, enter the dialogue text, and provide a reference audio clip. Don't have a reference clip? No problem: we also provide preset system voices to choose from. Once your audio is generated, click Use to apply it directly in the workspace.
5. Finally, set your parameters:
   - Prompt: Keep it simple; just describe the subject. If you'd like gestures or extra actions, add them (e.g., "This man is speaking while waving his arms").
   - Resolution: Higher resolutions yield sharper results but require more compute.
   - Generation Mode: Fast Mode is quicker and cheaper, with slight quality trade-offs; Quality Mode gives the best results but requires more time and compute.

That's it, you're ready to generate. We can't wait to see your high-quality digital humans shared in the community.
We look forward to seeing your work pinned on the homepage!
Wan2.2 Training Tutorial

In this guide, we'll walk through the full process of online training on TensorArt using Wan2.2. For this demo, we'll use image2video training so you can see direct results.

Step 1 – Open Online Training
Go to the Online Training page. Here, you can choose between Text2Video or Image2Video. 👉 For this tutorial, we'll select Image2Video.

Step 2 – Upload Training Data
Upload the materials you want to train on. You can upload them one by one, or, if you've prepared everything locally, just zip the files and upload the package.

Step 3 – Adjust Parameters
Once the data is uploaded, you'll see the parameter panel on the right. 💡 Tip: If you're training with video clips, keep them around 5 seconds for the best results.

Step 4 – Set Prompts & Preview Frames
The prompt field defines what kind of results you'll see during and after training. As training progresses, you'll see epoch previews, which help you decide which version of the model looks best. For image-to-video LoRA training, you can also set the first frame of the preview video.

Step 5 – Start Training
Click Start Training once your setup is ready. When training completes, each epoch will generate a preview video. Review these previews and publish the epoch that delivers the best result.

Step 6 – Publish Your Model
After publishing, wait a few minutes and your Wan2.2 LoRA model will be ready to use.

Recommended Training Parameters (Balanced Quality)
Network Module: LoRA
Base Model: Wan2.2 – i2v-high-noise-a14b
Trigger words: use a unique short tag, e.g. your_project_tag

Image Processing Parameters
Repeat: 1
Epoch: 12
Save Every N Epochs: 1–2

Video Processing Parameters
Frame Samples: 16
Target Frames: 20

Training Parameters
Seed: –
Clip Skip: –
Text Encoder LR: 1e-5
UNet LR: 8e-5 (lower than 1e-4 for more stability)
LR Scheduler: cosine (warmup 100 steps if available)
Optimizer: AdamW8bit
Network Dim: 64
Network Alpha: 32
Gradient Accumulation Steps: 2 (use 1 if VRAM is limited)

Label Parameters
Shuffle caption: –
Keep n tokens: –

Advanced Parameters
Noise offset: 0.025–0.03 (recommended 0.03)
Multires noise discount: 0.1
Multires noise iterations: 10
conv_dim: –
conv_alpha: –
Batch Size: 1–2 (depending on VRAM)
Video Length: 2

Sample Image Settings
Sampler: euler
Prompt (example): –

Tips
Keep training videos around ~5 seconds for best results.
Use a consistent dataset (lighting, framing, style) to avoid drift.
If previews show overfitting (blurry details, jitter), lower UNet LR to 6e-5 or reduce Epochs to 10.
For stronger style binding: increase Network Dim → 96 and Alpha → 64, while lowering UNet LR → 6e-5.
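The recommended parameters above can be collected into a single config sketch. The dictionary keys below are illustrative only (TensorArt exposes these options through its web UI, not this structure), but keeping the values in one place makes it easy to apply the "stronger style binding" tip from the Tips section programmatically when preparing several runs.

```python
# Sketch: the recommended Wan2.2 LoRA settings above as a config dict.
# Key names are illustrative, not TensorArt's actual training API.
WAN22_LORA_CONFIG = {
    "network_module": "LoRA",
    "base_model": "Wan2.2-i2v-high-noise-a14b",
    "repeat": 1,
    "epochs": 12,
    "save_every_n_epochs": 2,
    "frame_samples": 16,
    "target_frames": 20,
    "text_encoder_lr": 1e-5,
    "unet_lr": 8e-5,
    "lr_scheduler": "cosine",
    "optimizer": "AdamW8bit",
    "network_dim": 64,
    "network_alpha": 32,
    "gradient_accumulation_steps": 2,
    "noise_offset": 0.03,
    "multires_noise_discount": 0.1,
    "multires_noise_iterations": 10,
    "batch_size": 1,
    "video_length": 2,
    "sampler": "euler",
}

def stronger_style_binding(cfg: dict) -> dict:
    """Apply the tip above: Dim -> 96, Alpha -> 64, UNet LR -> 6e-5."""
    out = dict(cfg)  # copy so the balanced baseline stays untouched
    out.update(network_dim=96, network_alpha=64, unet_lr=6e-5)
    return out

tuned = stronger_style_binding(WAN22_LORA_CONFIG)
print(tuned["network_dim"], tuned["unet_lr"])  # 96 6e-05
```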
📢 Daily Credits Mission Update Notice

Dear Tensorians,

To encourage more high-quality content in our community, we've optimized the Daily Missions. The new missions focus more on creative value and content recommendations, so outstanding works can earn greater rewards.

🎯 New Daily Mission Reward Plan
Share content to external sites (once per day) 👉 +5 credits
Your post gets a like (up to 5 times per day) 👉 +2 credits each
Post featured on the Homepage (up to 3 times per day) 👉 +30 credits
Post featured on a Channel page (once per day) 👉 +10 credits

💡 Note: Credits from others running your Models & AI Tools remain unchanged.

We understand that the ways to earn credits may feel more streamlined than before, but these changes are made to:
- Improve the quality of recommended content, so users see more valuable works
- Provide greater rewards for outstanding creators
- Build a more positive and fair community environment

Effective date: September 4, 2025

🎬 New Credits Pool!
Plus, to further support video content, a brand-new Video Reward Pool is coming soon! ✨ You'll get the chance to share a pool full of rewards with fellow creators!

💜 Thank you for your continuous support. Let's make the TensorArt community even better, together!
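The reward plan above is easy to work out for a best-case day. The sketch below assumes the homepage reward is paid per feature (mirroring the per-like reward), which the notice does not state explicitly; the function itself is illustrative, not part of any TensorArt API.

```python
# Sketch of the daily-mission credit math. Rewards and caps come from
# the notice; the per-feature reading of the homepage bonus is assumed.
def daily_mission_credits(shared: bool, likes: int,
                          homepage_features: int,
                          channel_feature: bool) -> int:
    credits = 0
    credits += 5 if shared else 0              # external share, once/day
    credits += 2 * min(likes, 5)               # +2 per like, capped at 5
    credits += 30 * min(homepage_features, 3)  # +30 per homepage feature, cap 3
    credits += 10 if channel_feature else 0    # channel feature, once/day
    return credits

# Best possible day: 5 + 10 + 90 + 10
print(daily_mission_credits(True, 5, 3, True))  # 115
```

So a fully maxed-out day under this reading yields 115 mission credits, on top of unchanged earnings from others running your Models & AI Tools.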
Illustrious XL v1.1 — Now Exclusive on Tensor.art

Next-Gen AI for Stunning Illustrations — Now on Tensor.art

Introducing Illustrious XL 1.1, the latest evolution in anime-focused text-to-image AI. Building on the foundation of Illustrious XL 0.1, this new version pushes the boundaries of fidelity, prompt understanding, and high-resolution output, making it a must-have for artists, illustrators, and animation creators.

🔹 Higher Resolution & More Detail — Generate breathtaking 1536 x 1536 images with refined aesthetic quality
🔹 Smarter Prompt Interpretation — Optimized for natural language prompts, delivering more intuitive results

Recommended Settings for Best Results
💡 Negative Prompts: "blurry," "worst quality," "bad quality," "bad hands"
🛠️ Sampling Settings: Steps: 28 | CFG Scale: 5.5–7.5 | Sampler: Euler
🏋️ Training: Try LoKr when training; it achieves better results than LoRA 🤫

To showcase the advancements of Illustrious XL 1.1, we've put it to the test across key performance areas. Below is a direct comparison of image outputs across versions, demonstrating improvements in natural language comprehension, high-resolution rendering, vivid color expression, and detail fidelity.

1. Natural Language Understanding
📌 Improvement: Better prompt adherence and character accuracy.
🔍 Comparison:
• Illustrious XL 0.1: Struggled to maintain consistent character fidelity.
• Illustrious XL 1.0: Improved coherence between prompt and image, with better facial expressions.
• Illustrious XL 1.1: Further refined accuracy, reducing artifacts and enhancing overall expressiveness.
📝 Prompt Used:
"A vibrant anime-style illustration of a young woman with golden blonde hair, striking orange eyes, and a cheerful expression. She's dressed in a unique outfit that blends sporty and whimsical elements: an orange jacket over a teal and white striped shirt, a blue neckerchief, and a distinctive white cap with orange accents. She's set against a dark green background with streaks of teal, creating a dynamic and eye-catching composition. The style is bold, energetic, and suggestive of a character from a video game or animation., masterpiece, best quality, very aesthetic, absurdres, vivid colors"

2. High-Resolution Precision
📌 Improvement: Increased resolution to 1536 x 1536, maintaining clarity at larger sizes.
🔍 Comparison:
• Illustrious XL 0.1: Noticeable blurring and loss of detail in high-resolution images.
• Illustrious XL 1.0: Clearer textures, sharper lines, and more defined elements.
• Illustrious XL 1.1: More robust structure.
📝 Prompt Used:
"This masterpiece artwork, in a stylish and extremely aesthetic style evocative of artists like hyatsu, shule_de_yu, lococo:p, huke, potg_\(piotegu\), z3zz4, and moruki, showcases a tsundere solo 1girl, makise kurisu, standing at night under an iridescent sky filled with clouds and forget-me-not flowers, rendered in absurdres detail with a colorful yet partially black and white and abstract composition."

3. Vivid Colors & Dynamic Lighting
📌 Improvement: More vibrant hues, balanced contrast, and expressive compositions.
🔍 Comparison:
• Illustrious XL 0.1: Muted tones and washed-out colors.
• Illustrious XL 1.0: More vibrant color balance.
• Illustrious XL 1.1: Richer tones and better shadow handling.
📝 Prompt Used:
"1girl, hyatsu, shule_de_yu, lococo:p, makise kurisu, huke, tsundere, absurdres, potg_\(piotegu\), z3zz4, moruki, hyatsu, stylish, extremely aesthetic, abstract, colorful, night, sky, flower, cloud, iridescent, masterpiece, black and white, forget-me-not"

4. Detail Refinement & Aesthetic Quality
📌 Improvement: Sharper facial details and expressive character design.
🔍 Comparison:
• Illustrious XL 0.1: Some inconsistencies in facial structure and hand rendering.
• Illustrious XL 1.0: Significant improvements in eye detailing and shading.
• Illustrious XL 1.1: Near-professional quality with refined expressions.
📝 Prompt Used:
"1boy, black hair, red eyes, horns, scars, white clothes, blood stains, arm tattoos, black and red tattoos, long gloves on left hand, red sash, warrior-like attire, cold expression, sharp expression"

Get Started Today! The future of anime AI is here—be part of it with Illustrious XL 1.1 ✨
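The recommended settings above can be packaged into a reusable request sketch. The field names below are illustrative, not an actual Tensor.art or diffusers API; only the values (steps 28, CFG 5.5–7.5, Euler sampler, the negative prompts, and the 1536 x 1536 resolution) come from the recommendations in this post.

```python
# Sketch: the recommended Illustrious XL 1.1 settings as one request
# dict. Field names are hypothetical; values come from the post above.
RECOMMENDED_NEGATIVE = ["blurry", "worst quality", "bad quality", "bad hands"]

def build_request(prompt: str, cfg_scale: float = 6.5) -> dict:
    """Combine a user prompt with the recommended sampler settings."""
    if not 5.5 <= cfg_scale <= 7.5:
        raise ValueError("recommended CFG range is 5.5-7.5")
    return {
        "prompt": prompt,
        "negative_prompt": ", ".join(RECOMMENDED_NEGATIVE),
        "steps": 28,
        "cfg_scale": cfg_scale,
        "sampler": "Euler",
        "width": 1536,
        "height": 1536,
    }

req = build_request("1girl, masterpiece, best quality")
print(req["steps"], req["sampler"])  # 28 Euler
```

Keeping the CFG check in one helper makes it harder to accidentally wander outside the range the model was tuned for.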
TensorArt 2024 Community Trends Report

2024: A Year of Breakthroughs
This year marked an explosion of innovation in AI. From language and imagery to video and audio, new technologies emerged and thrived in open-source communities. TensorArt stood at the forefront, evolving alongside our creators to witness the rise of AI artistry.

Prompt of the Year: Hair
Surprisingly, "hair" became the most-used prompt of 2024, with 260 million uses. On reflection, it makes sense: hair is essential to capturing the intricacies of portraiture. Other frequently used words included eyes (142M), body (130M), face (105M), and skin (79M).

Niche terms favored by experienced users, like detailed (132M), score_8_up (45M), and 8k (25M), also dominated this year but saw a decline in usage by mid-year. With the advent of foundational models like Flux, SD3.5, and HunYuanDiT, natural language prompts became intuitive and multilingual, removing the need for complex or negative prompts and lowering the barrier to entry for creators worldwide.

Community Achievements
Every day, hundreds of new models are uploaded to TensorArt, fueling creativity among Tensorians. This year alone:
- Over 400,000 models are now available.
- 300,000 images are generated daily, with 35,000 shared via posts, reaching 1 million viewers and earning 15,000 likes and shares.

This year, we introduced AI Tool and ComfyFlow, welcoming a new wave of creators. AI Tool simplified workflows for beginners and enabled integration into industry applications, with usage distributed across diverse fields.

In November, TensorArt celebrated its 3 millionth user, solidifying its position as one of the most active platforms in the AI space after just 18 months. Among our loyal community are members like Goofy, MazVer, AstroBruh and Nuke, whose dedication spans back to our earliest days.

A Global Creative Exchange
AI knows no borders. Creators from around the world use TensorArt to share and connect through art.
From the icy landscapes of Finland (1.6%) to the sunny shores of Australia (8.7%), from Pakistan (0.075%) to Cuba (0.003%), Tensorians transcend language and geography.

Generationally, 75% of our users are Gen Z or Alpha, while 9% belong to Gen X and the Baby Boomers. "It's never too late to learn" is a motto they live by.

Gender representation also continues to evolve, with women now accounting for 20% of the user base.

TensorArt is breaking barriers: technical, social, and economic. With no need for costly GPUs or advanced knowledge of parameters, tools like Remix make creating stunning artwork as simple as a click.

The Way Tensorians Create
- Most active hours: Weeknights, 7 PM–12 AM, when TensorArt serves as the perfect way to unwind.
- Platform preferences: 70% of users favor the web version, but we've prioritized app updates for Q1 2025 to close this gap.
- Image ratios: Female characters outnumber male ones 9:1. 67% of images are realistic, 28% anime, and 3% furry.
- Favorite colors, in order: Black, white, blue, red, green, yellow, and gray.

A Growing Creator Economy
In 2024, Creator Studio empowered users to monitor their model earnings. Membership in the TenStar Fund tripled, and average creator income grew 1.5x compared to last year.

In 2025, TensorArt will continue to balance the creator economy with market development. TA will place greater emphasis on encouraging creators of AI tools and workflows to provide more efficient and convenient practical tools for specific application scenarios.
To this end, TA will launch the Pro Segment to further reward creators, offering higher revenue coefficients and profit sharing from Pro user subscriptions.

2024 Milestones
This year, TensorArt hosted:
- 26 site events and 78 social media campaigns.
- Our first AI Tool partnership with Snapchat, pioneering AI-driven filters that were featured as a case study by Snapchat.
- The launch of "Realtime Generate" and "Talk to Model," revolutionizing how creators interact with AI.
- A collaboration with Austrian tattoo artist Fani on a tattoo design contest, where winners received free tattoos based on their designs.

TensorArt is committed to advancing the open-source ecosystem and has made significant strides in multiple areas:
- For newly released base models, TA ensures same-day online running and next-day support for online training. To let Tensorians experience the latest models, limited-time discounts are offered.
- To boost creative engagement with new base models, TA hosts high-reward events for each open-source base model, incentivizing Tensorians across dimensions such as Models, AI Tools, and Posts.
- Beyond image generation, TA actively supports the open-source video model ecosystem, enabling rapid integration of CogVideo, Mochi, and HunYuanVideo into ComfyFlow and Creation. In 2025, TA plans to expand online video functionality further.
- Moving from "observer" to "participant," TA has launched TensorArt Studios, with the release of Turbo, a distilled version of SD3.5M. In 2025, Studios will unveil TensorArt's self-developed base model.
- TensorArt continuously funds talented creators and labs, providing financial and computational resources to support model innovation. In 2025, Illustrious will exclusively collaborate with TensorArt to release its latest version.

Looking Forward
From ChatGPT's debut in 2022 to Sora's breakthrough in 2024, AI continues to redefine innovation across industries.
But progress isn't driven by one company; it thrives on the collective power of open-source ecosystems, inspiring collaboration and creativity.

AI is fertile ground, filled with the dreams and ambitions of visionaries worldwide. On this soil, we've planted the seed of TensorArt. Together, we will nurture it and watch it grow.

2024 Annual Rankings
Each month of 2024 brought unforgettable moments to TensorArt. Based on events, likes, runs, and monthly trends, we've curated the 2024 Annual Rankings. Click to explore!