Articles

Some prompts I've collected 一些我收藏的提示词

Style prompts (Stylization):
Infographic drawing 信息图表; concept character sheet 概念人物表; character sheet style 人物表; character sheet illustration 人物表插画; Realistic 真实感; bokeh 背景散焦; Ethereal 空灵、幽雅; Warm tones 暖色调; The lighting is soft 灯光柔和; natural light 自然光; ink wash 水墨; Splash pigment effect 飞溅颜料; cybernetic illuminations 科技光; Neo-Pop 新波普艺术风格; Art nouveau 新艺术; Grandparentcore 复古老派风格; Cleancore 简约风格; red theme 红色主题; sticker 贴纸; Reflection 反射; Backlit 逆光; depth of field 景深; A digital double exposure photo 双重曝光; blurry foreground 模糊前景; blurry background 模糊背景; motion_blur 动作模糊; split theme 分裂主题; Paisley patterns 佩斯利图案(花纹); lineart 线条画; silhouette art 剪影艺术; concept art 概念艺术; graffiti art 涂鸦艺术; Gothic art 哥特式艺术; Goblincore 地精自然风格; ukiyo-e 浮世绘; sumi-e 墨绘; magazine cover 杂志封面; commercial poster 商业海报

View prompts (View):
Perspective view 透视视角; Three-quarter view 四分之三视角; Thigh-level perspective 大腿水平视角; close-up 特写; Macro photo 微距图像; Headshot 头像; portrait 肖像; low angle shot 低视角; front and back view 前视图和后视图(正反面); various views 各种视角(多视角); Panoramic view 全景; Mid-shot/Medium shot 中景; cowboy_shot 牛仔镜头; Waist-up view 腰部以上视图; Bust shot 半身照; Torso shot 躯干照; foot focus 足部焦点; looking at viewer 看着观众; from above 俯视; from below 仰视; full body 全身像; sideways/profile view 侧面; fisheye lens 鱼眼镜头; Environmental portrait 环境人像

Facial expression prompts (Facial expression):
Smile 微笑; grin 咧嘴笑; biting lip 咬嘴唇; adorable 萌; tearing up/crying_tears 泪目; tearful 含泪; wave mouth 波浪嘴; spiral_eyes 螺旋眼; Cheerful 乐观; nose blush 潮红; running mascara 流动睫毛膏

Hairstyle prompts (Hairstyle):
smooth 柔顺; hair over one eye 刘海遮住一只眼睛; twintails 双马尾; ponytail 马尾辫; diagonal bangs 斜刘海; Dynamic hair 飘发; hanging hair 垂发; ahoge 呆毛; braid 辫子; braided bun 包子头; Undercut 剃鬓发型

Ornament prompts (Ornament):
forehead mark 额头痣; mole under eye 泪痣; Skindentation 勒痕; eyepatch 单眼罩; blindfold 眼罩; hairpin 发卡; hairclip 发圈; headband 发箍; hair holder 束发; hair ribbon 发带; Ribbon 缎带、蝴蝶结; maid headdress 女仆头饰; headveil 头纱; tassel 流苏; thigh strap 大腿带

Clothing prompts (Clothing):
jk seifuku / jk 日本女子校服; miko 巫女; idol clothes 偶像服; competition swimsuit 竞速泳装; Rococo 洛可可; pelvic curtain 盆骨帘; midriff 露腰(分体式); halterneck 露背装; enmaided 女仆装; backless sweater 露背毛衣; turtleneck sweater 高领毛衣; French-style suspender skirt 法式吊带裙; winter coat 冬大衣; Trench Coat 风衣; race queen 赛车女郎; Highleg/Leotard 高叉紧身衣; slit skirt 分衩裙; Stirrup legwear 踩脚裤; fishnet stockings 渔网袜; thighhighs/thigh-high socks 大腿袜; kneehighs 过膝袜; toeless legwear 无指袜; yoga pants 瑜伽裤; frilled 荷叶边(花边)

Action prompts (Action):
crossed_legs_(sitting)/crossed legs 二郎腿坐; cross-legged sitting 盘腿坐; semireclining position 半卧姿势; head tilt 头部倾斜; leaning forward 向前俯身; planted sword 剑插地; heart hand duo 双人心形手; double thumbs up 双手点赞; peace sign 比耶; Salute 敬礼; (≧ω≦)/Energetic Pose 活力姿态; sitting on seiza 正坐

Body prompts (Body):
thick eyebrows 浓眉; Abs 腹肌; toned 强壮; navel 露脐; off-shoulder 露肩; tsurime 吊梢眼; cyborg 半机械人; tan skin 日晒肤色; cocoa skin 可可肤色; fit physique 健美体态
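These tags are meant to be mixed across categories. As a minimal sketch of that idea (the helper, category names, and tag picks are my own illustration, not part of the collection above):

```python
# Sketch: combine one tag from several of the categories above into a prompt.
# The dictionary keys and picks are illustrative, not an official schema.
tags = {
    "style": "ink wash",
    "view": "full body",
    "expression": "smile",
    "hairstyle": "twintails",
    "clothing": "miko",
    "body": "tan skin",
}
prompt = ", ".join(["1girl", *tags.values()])
print(prompt)  # 1girl, ink wash, full body, smile, twintails, miko, tan skin
```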
Prompting: Hairstyles

Hey all! I made a visual example of the different hairstyles you can use for your prompts. Keep in mind that you can pair the hairstyles with the length of the hair too! Shoutout to annnnnie for the hairstyle list!
Dual-character Action Prompts Sharing - 二人用アクションprompts - 双人动作提示词分享

Beginning
When there are two characters in a scene, do you often find it difficult to control their interactions? I've prepared multiple dual-character action prompts that, after testing, work well with most models. ✨ If you have more, comment and let everyone know! ✨ At the end of the article there are demonstration images showing the effects. If you want the video tutorial (which is more efficient and intuitive), visit 👉 https://www.instagram.com/p/DJZpqM9SzxA/
I'm Annie, and this is a Prompts Series by TA Official; more prompt sharing will be released gradually. Welcome to follow our official Instagram 💗 👉 https://www.instagram.com/tensor.art/ — this is where you can be the first to receive our shared videos. 😉

Prompts
Kiss:
kiss -- キス -- 亲吻
kiss cheek -- 頬へのキス -- 亲脸颊
kissing forehead -- おでこにキス -- 亲额头
french kiss -- ディープキス -- 法式接吻(舌吻)
pocky kiss -- ポッキーキス -- 百奇棒接吻
Head and face related:
heads together -- 頭を寄せ合う -- 头靠头
face-to-face -- 顔を合わせる -- 面对面
cheek-to-cheek -- 頬を合わせる -- 贴面
forehead-to-forehead -- おでこを合わせる -- 额头相贴
noses touching -- 鼻を触れ合わせる -- 鼻尖相碰
cheek-to-breast -- 頬を胸に当てる -- 脸颊贴胸
head on chest -- 胸に頭を乗せる -- 头靠胸膛
head on another's shoulder -- 肩に頭を乗せる -- 头靠肩膀
Eye contact:
looking at another -- アイコンタクト -- 对视
lightning glare -- 鋭い睨み -- 锐利瞪视(带有电光)
confrontation -- 対峙 -- 对峙
Hand:
holding hands -- 手をつなぐ -- 牵手
hand on another's head -- 頭に手を置く -- 手放对方头上
hand on another's cheek -- 頬に手を当てる -- 手抚脸颊
hand on another's waist -- 相手の腰をつかむ -- 扶着对方的腰
slapping -- 平手打ち -- 掌掴
handshake -- 握手 -- 握手
high five -- ハイタッチ -- 击掌
pinky swear -- 指切りげんまん -- 拉钩约定
fist bump -- フィストバンプ -- 碰拳
Arm:
heart hands duo -- ハートハンド -- 双手比心(双人)
arm around neck -- 首に腕を回す -- 搂脖子,搂肩
arm around the waist -- 腰に腕を回す -- 搂腰
one armed carry -- 片腕抱き -- 单臂托抱
locked arms -- 腕を組む -- 挽手臂
heart arms -- ハートアーム -- 手臂比心(单人)
arm wrestling -- 腕相撲 -- 掰手腕
Hug and carry:
hug -- ハグ -- 拥抱
cuddling -- 寄り添う -- 依偎
hug from behind -- 後ろから抱きつく -- 背后拥抱
shoulder carry -- 肩車 -- 扛肩
waist hug -- 腰抱き -- 环腰抱
bridal carry -- 花嫁抱っこ -- 新娘抱(婚礼式横抱)
princess carry -- お姫様抱っこ -- 公主抱
piggyback -- おんぶ -- 背背
Others:
over the knee -- 膝の上 -- 膝上姿势
lap pillow -- 膝枕 -- 膝枕
shared umbrella -- 相合傘 -- 共撑一伞
shoulder-to-shoulder -- 肩を並べる -- 肩并肩
back-to-back -- 背中合わせ -- 背靠背
asymmetrical -- 非対称フィット -- 非对称合照
symmetrical -- 対称フィット -- 对称合照
breast contest -- おっぱいコンテスト -- 胸部比拼

Demonstration Images
Prompting: Lighting

Hello! Here's another widely requested article on prompting lighting! I hope it's helpful, and as always remember that depending on the models you use, the results may vary!

Settings used to generate these images:
Prompting: Skin Colors, Conditions, Types, Marks and Scars

Hello all! This article was requested by many of you. It's a bit long, but it covers a lot of prompting words regarding skin. Remember that depending on your models, results may vary!
Skin Types / Skin Conditions / Vitiligo LoRA / Skin Colors / Skins (Fantasy & Unnatural Tones) / Scales / Fur & Feathers / Marks & Scars
Posture Tag Study Results (Tags from Danbooru)

This experiment was based on the Danbooru posture tag page.

1. Basic positions
1. Kneeling / 2. On one knee / 3. Lying / 4. Crossed legs / 5. Fetal position / 6. On back / 7. On side / 8. On stomach / 9. Sitting / 10. Butterfly sitting / 11. Crossed legs / 12. Figure four sitting / 13. Indian style / 14. Lotus position / 15. Hugging own legs / 16. Reclining / 17. Seiza / 18. Sitting on person / 19. Sitting on head / 20. Sitting on lap / 21. Shoulder_carry / 22. Human chair / 23. Straddling / 24. Thigh straddling / 25. Upright straddle / 26. Wariza / 27. Yokozuwari / 28. Standing / 29. Balancing / 30. Crossed legs / 31. Legs apart / 32. Standing on one leg

2. Movement of the body
1. Balancing / 2. Crawling / 3. Idle animation / 4. Midair / 5. Falling / 6. Floating / 7. Flying / 8. Jumping / 9. Hopping / 10. Pouncing / 11. Running / 12. Walking / 13. Walk cycle / 14. Wallwalking

3. Other postures potentially involving the whole body
1. All fours / 2. Top-down bottom-up / 3. Prostration / 4. Bear position / 5. Bowlegged pose / 6. Chest stand / 7. Chest stand handstand / 8. Triplefold / 9. Ruppelbend / 10. Quadfold / 11. Cowering / 12. Crucifixion / 13. Faceplant / 14. Full scorpion / 15. Fighting stance / 16. Battoujutsu stance * / 17. Spread eagle position / 18. Squatting
* Prompt for 16. Battoujutsu stance: katana, (battoujutsu_stance:1.4), (casual clothes:1.4), hand on hilt, ready to draw, pastel blue kimono, wide sleeves, japanese clothes, outdoors, east asian architecture, full_body
19. Stretching / 20. Superhero landing / 21. Upside-down / 22. Handstand / 23. Headstand / 24. Yoga / 25. Scorpion pose (no image)

4. Other rest points of the body
1. Arm support / 2. Head rest

5. Posture of the head
1. Head down / 2. Head tilt / 3. Head back

6. Torso inclination
1. Arched back / 2. Bent back / 3. Bent over / 4. Leaning back / 5. Leaning forward / 6. Slouching / 7. Sway back / 8. Twisted torso

7. Arms
1. Arms behind back / 2. Arm up / 3. Arm behind head / 4. Victory pose / 5. Arms up / 6. \o/ / 7. Arms behind head / 8. Outstretched arms / 9. Spread arms / 10. Arms at sides / 11. Airplane arms / 12. Crossed arms / 13. Flexing / 14. Praise the sun / 15. Reaching / 16. Shrugging / 17. T-pose / 18. A-pose / 19. V arms / 20. W arms / 21. Stroking own chin / 22. Outstretched hand / 23. V / 24. Interlocked fingers / 25. Own hands clasped / 26. Own hands together / 27. Star hands

8. Hips
1. Contrapposto / 2. Sway back

9. Legs
1. Crossed ankles / 2. Folded / 3. Leg up / 4. Legs up / 5. Knees to chest / 6. Legs over head / 7. Leg lift / 8. Outstretched leg / 9. Split / 10. Pigeon pose / 11. Standing split / 12. Spread legs / 13. Watson cross / 14. Captain morgan pose / 15. Knees apart feet together / 16. Knees together feet apart / 17. Knee up / 18. Knees up / 19. Dorsiflexion / 20. Pigeon-toed / 21. Plantar flexion / 22. Tiptoes / 23. Tiptoe kiss

10. Posture of at least two characters
1. Ass-to-ass / 2. Back-to-back / 3. Belly-to-belly / 4. Cheek-to-breast / 5. Cheek-to-cheek / 6. Eye contact / 7. Face-to-face / 8. Forehead-to-forehead / 9. Head on chest / 10. Heads together / 11. Holding hands / 12. Leg lock / 13. Locked arms / 14. Over the knee / 15. Nipple-to-nipple / 16. Noses touching / 17. Shoulder-to-shoulder / 18. Tail lock

11. Posture of at least three characters
1. Circle formation / 2. Group hug

12. Hugging
1. Hug / 2. Hugging own legs / 3. Hugging object / 4. Hugging tail / 5. Wing hug / 6. Arm hug / 7. Hug from behind / 8. Waist hug

13. Carrying someone
1. Baby carry / 2. Carried breast rest / 3. Carrying over shoulder / 4. Carrying under arm / 5. Child carry / 6. Fireman's carry / 7. Piggyback / 8. Princess carry / 9. Shoulder carry / 10. Sitting on shoulder / 11. Standing on shoulder

14. Poses
1. Rabbit pose / 2. Horns pose / 3. Paw pose / 4. Claw pose / 5. Archer pose / 6. Bras d'honneur / 7. Body bridge / 8. Contrapposto / 9. Dojikko pose / 10. Ghost pose / 11. Inugami-ke no Ichizoku pose / 12. Letter pose / 13. Ojou-sama pose / 14. Saboten pose / 15. Symmetrical hand pose / 16. Victory pose / 17. Villain pose / 18. Zombie pose / 19. Gendou pose / 20. JoJo pose / 21. Dio Brando's pose / 22. Giorno Giovanna's pose / 23. Jonathan Joestar's pose / 24. Kujo Jotaro's pose / 25. Kongou pose / 26. Kujou Karen pose

Thank you for taking a look!
Mastering FLUX Prompt Engineering: A Practical Guide with Tools and Examples

FLUX AI Tools:
https://tensor.art/template/768387980443488839
https://tensor.art/template/759877391077124092
https://tensor.art/template/761803391851647087
https://tensor.art/template/763734477867329638
FLUX Prompt Tool: https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro
⇦ Although I am doing my best to optimize my AI prompt generation tool, I am currently facing malicious negative reviews from competitors. If you have any suggestions for improvement, please feel free to share them, and I will do my best to make the necessary optimizations. However, please refrain from giving unfair ratings, as it really discourages my creative efforts. If you find this GPT helpful, please give it a fair rating. Thank you.

AI-generated images are revolutionizing the creative landscape, and mastering the art of prompt engineering is crucial for creating visually stunning outputs with models like FLUX. This guide provides practical steps and examples, and introduces a specialized tool to help you craft the perfect prompts for FLUX.

1. Start with Descriptive Adjectives
The foundation of any good prompt lies in the details. Descriptive adjectives are essential for guiding the AI to produce the nuances you desire. For instance, instead of a simple "cityscape," you might specify "a bustling, neon-lit cityscape at dusk with reflections on wet asphalt." This level of detail helps FLUX understand the specific atmosphere and mood you're aiming for, leading to richer and more visually engaging results.

2. Integrate Specific Themes and Styles
Incorporating themes or art styles can significantly shape the output. For example, you could combine cyberpunk elements with classic art references: "a cyberpunk city with Baroque architectural details, under a sky filled with digital rain." This blend of styles allows FLUX to draw from various visual traditions, creating a unique and layered image.

3. Utilize Technical Specifications
Beyond adjectives and themes, technical aspects like lighting, perspective, and camera angles add depth to your images. Consider using prompts such as "soft, diffused lighting" or "extreme close-up with shallow depth of field" to control how FLUX renders the scene. These details can make a significant difference, turning a simple image into a masterpiece by manipulating light and shadow and focusing attention where it matters most.

4. Combine Multiple Elements
To achieve a more complex and detailed output, combine several of the above strategies in a single prompt. For example: "A close-up shot of a futuristic warrior standing on a neon-lit street, wearing cyberpunk armor with glowing accents, under a sky filled with dark clouds and lightning." This prompt merges detailed descriptions, stylistic choices, and technical elements to create a vivid and engaging scene (Magai).
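As a minimal sketch of this combine-the-elements approach (the helper function and its field names are my own illustration, not part of the guide or any FLUX API):

```python
# Sketch: compose a FLUX prompt from the elements discussed above.
# Function name and fields are illustrative assumptions.
def build_flux_prompt(subject, style, lighting, camera, details=()):
    parts = [camera, subject, style, lighting, *details]
    # Drop empty slots and join into a single comma-separated prompt.
    return ", ".join(p for p in parts if p)

prompt = build_flux_prompt(
    subject="a futuristic warrior standing on a neon-lit street",
    style="cyberpunk armor with glowing accents",
    lighting="soft backlighting from holographic billboards",
    camera="a close-up shot with shallow depth of field",
    details=("dark clouds and lightning overhead",),
)
print(prompt)
```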
5. Experiment and Iterate
Prompt engineering is an iterative process. Start with a basic idea and refine it based on the results FLUX generates. If the initial output isn't what you expected, adjust the adjectives, tweak the themes, or alter the technical specifications. Continuous refinement is key to mastering prompt engineering (Hostinger).

6. Utilize the FLUX Prompt Pro Tool
If you find it challenging to craft precise prompts, or if you want to speed up your process, try the FLUX Prompt Pro tool. It is designed to generate accurate English prompts specifically for the FLUX AI model. By inputting your basic idea, the tool helps you flesh out the details, ensuring that your prompts are both clear and comprehensive. It's an excellent way to enhance your creative process and achieve better results faster. Try it here: 🚀 FLUX Prompt Pro 🚀 https://chatgpt.com/g/g-NLx886UZW-flux-prompt-pro

7. Practical Example
Let's put all these strategies into practice with an example.
Basic idea: a futuristic city.
Refined prompt: "A wide-angle shot of a neon-lit, futuristic city at night, with towering skyscrapers reflecting in rain-soaked streets, cyberpunk style, featuring soft backlighting from holographic billboards, and a lone figure in a trench coat standing on a rooftop."
This prompt uses descriptive adjectives, specific themes, and technical specifications, and combines multiple elements to create a detailed and dynamic image. By following these steps, you can consistently produce high-quality visuals with FLUX.

Conclusion
Mastering FLUX prompt engineering involves blending creativity with precision. By leveraging descriptive language, specific themes, and technical details, and by iterating on your prompts, you can unlock the full potential of FLUX to generate stunning, personalized images. Don't forget to use the FLUX Prompt Pro tool to streamline your process and achieve even better results. Keep experimenting, stay curious, and enjoy creating!

======================================================
If you enjoy listening to great music while creating AI-generated art, I highly recommend subscribing to my SUNO AI music channel. I believe it will help ignite your inspiration and creativity even more. I'll be regularly updating the channel with new AI-generated music. Thank you all for your support! Feel free to leave suggestions or let me know what music styles you'd like to hear; I'll be creating more tracks in various styles over time.
FuturEvoLab AI music: https://suno.com/invite/@futurevolab
Negative Prompts

Avoid unwanted artifacts! Did you know that using negative prompts can benefit your image? Here is a short list of good negative prompts I have collected over one year of being a Tensor user (SD1.5, 3.0 and XL compatible).

Negatives for landscapes:
blurry, boring, close-up, dark (optional), details are low, distorted details, eerie, foggy (optional), gloomy (optional), grains, grainy, grayscale (optional), homogenous, low contrast, low quality, lowres, macro, monochrome (optional), multiple angles, multiple views, opaque, overexposed, oversaturated, plain, plain background, portrait, simple background, standard, surreal, unattractive, uncreative, underexposed

Negatives for street views:
animals (optional), asymmetrical buildings, blurry, cars (optional), close-up, creepy, deformed structures, grainy, jpeg artifacts, low contrast, low quality, lowres, macro, multiple angles, multiple views, overexposed, oversaturated, people (optional), pets (optional), plain background, scary, solid background, surreal, underexposed, unreal architecture, unreal sky, weird colors

Negatives for people:
3D, absent limbs, age spot, additional appendages, additional digits, additional limbs, altered appendages, amputee, asymmetric, asymmetric ears, bad anatomy, bad ears, bad eyes, bad face, bad proportions, beard (optional), broken finger, broken hand, broken leg, broken wrist, cartoon, childish (optional), cloned face, cloned head, collapsed eyeshadow, combined appendages, conjoined, copied visage, corpse, cripple, cropped head, cross-eyed, depressed, desiccated, disconnected limb, disfigured, dismembered, disproportionate, double face, duplicated features, eerie, elongated throat, excess appendages, excess body parts, excess extremities, extended cervical region, extra limb, fat, flawed structure, floating hair (optional), floating limb, four fingers per hand, fused hand, group of people, gruesome, high depth of field, immature, imperfect eyes, incorrect physiology, kitsch, lacking appendages, lacking body, long body, macabre, malformed hands, malformed limbs, mangled, mangled visage, merged phalanges, missing arm, missing leg, missing limb, mustache (optional), nonexistent extremities, old, out of focus, out of frame, parched, plastic, poor facial details, poor morphology, poorly drawn face, poorly drawn feet, poorly drawn hands, poorly rendered face, poorly rendered hands, six fingers per hand, skewed eyes, skin blemishes, squint, stiff face, stretched nape, stuffed animal, surplus appendages, surplus phalanges, surreal, ugly, unbalanced body, unnatural, unnatural body, unnatural skin, unnatural skin tone, weird colors

Negatives for photorealism:
3D render, aberrations, abstract, anime, black and white (optional), cartoon, collapsed, conjoined, creative, drawing, extra windows, harsh lighting, illustration, jpeg artifacts, low saturation, monochrome (optional), multiple levels, overexposed, oversaturated, painting, photoshop, rotten, sketches, surreal, twisted, UI, underexposed, unnatural, unreal engine, unrealistic, video game

Negatives for drawings and paintings:
3d, bad art, bad artist, bad fan art, CGI, grainy, human (optional), inaccurate sky, inaccurate trees, kitsch, lazy art, less creative, lowres, noise, photorealistic, poor detailing, realism, realistic, render, stacked background, stock image, stock photo, text, unprofessional, unsmooth
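As a minimal sketch of how a category list like the ones above might be assembled into a single negative-prompt string (the helper and the trimmed term lists are my own illustration, not part of the article):

```python
# Sketch: assemble a negative prompt from a category list.
# Term lists are abbreviated samples; "(optional)" terms go in OPTIONAL.
LANDSCAPE_NEGATIVES = ["blurry", "boring", "close-up", "low quality",
                       "lowres", "overexposed", "oversaturated",
                       "plain background"]
OPTIONAL = ["dark", "foggy", "gloomy", "grayscale", "monochrome"]

def negative_prompt(base, extras=()):
    # Join unique terms, preserving order, into a comma-separated string.
    seen, terms = set(), []
    for term in [*base, *extras]:
        if term not in seen:
            seen.add(term)
            terms.append(term)
    return ", ".join(terms)

print(negative_prompt(LANDSCAPE_NEGATIVES, extras=["foggy"]))
```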
Additional negatives:
Bad anatomy: flawed structure, incorrect physiology, poor morphology, misshaped body
Bad proportions: improper scale, incorrect ratio, disproportionate
Blurry: unfocused, hazy, indistinct
Cloned face: duplicated features, replicated countenance, copied visage
Cropped: trimmed, cut, shortened
Dark images: dark theme, underexposed, dark colors
Deformed: distorted, misshapen, malformed
Dehydrated: dried out, desiccated, parched
Disfigured: mangled, dismembered, mutilated
Duplicate: copy, replicate, reproduce
Error: mistake, flaw, fault
Extra arms: additional limbs, surplus appendages, excess extremities
Extra fingers: additional digits, surplus phalanges, excess appendages
Extra legs: additional limbs, surplus appendages, excess extremities
Extra limbs: additional appendages, surplus extremities, excess body parts
Fingers: conjoined fingers, crooked fingers, merged fingers, fused fingers, fading fingers
Fused fingers: joined digits, merged phalanges, combined appendages
Gross proportions: disgusting scale, repulsive ratio, revolting dimensions
JPEG artifacts: compression artifacts, digital noise, pixelation
Long neck: extended cervical region, elongated throat, stretched nape
Low quality: poor resolution, inferior standard, subpar grade
Lowres: low resolution, inadequate quality, deficient definition
Malformed limbs: deformed appendages, misshapen extremities, malformed body parts
Missing arms: absent limbs, lacking appendages, nonexistent extremities
Missing legs: absent limbs, lacking appendages, nonexistent extremities
Morbid: gruesome, macabre, eerie
Mutated hands: altered appendages, changed extremities, transformed body parts
Mutation: genetic variation, aberration, deviation
Mutilated: disfigured, dismembered, butchered
Out of frame: outside the picture, beyond the borders, off-screen
Poorly drawn face: badly illustrated countenance, inadequately depicted visage, incompetently sketched features
Poorly drawn hands: badly illustrated appendages, inadequately depicted extremities, incompetently sketched digits
Signature: autograph, sign, mark
Text: written language, printed words, script
Too many fingers: excessive digits, surplus phalanges, extra appendages
Ugly: unattractive, unsightly, repellent
Username: screen name, login, handle
Watermark: identifying mark, branding, logo
Worst quality: lowest standard, poorest grade, worst resolution

Do not forget to set this article as a favorite if you found it useful. Happy generations!
Gesture Prompts Sharing🫶 - AIの手のプロンプト集 - 手部提示词分享

Beginning
Hand gestures have always been a very challenging aspect to control. I wish to share some highly effective prompts for controlling gestures. The tested model is ✨🎨Illustrious - AnimeMaster✨, and these prompts perform well on most models. At the end of the article there are demonstration images showing the effects. If you want the video tutorial (which is more efficient and intuitive), visit 👉 https://www.instagram.com/p/DIeFERbygkO/?hl=zh-cn
I'm Annie, and this is a Prompts Series by TA Official; more prompt sharing will be released gradually. Stay tuned! And welcome to follow our official Instagram 💗 👉 https://www.instagram.com/tensor.art/ — this is where you can be the first to receive our shared videos. 😉

Prompts
Fingers:
index_finger_raised 、人差し指を上げる|举起食指
shushing 、しーっ|嘘🤫
pinky_out 、小指を外側に出す|翘小拇指
thumbs_down 、親指を下に向ける|差评👎
thumbs_up 、親指を立てる|点赞👍
double_thumbs_up 、二本親指を立てる|双手拇指点赞
finger_gun 、指銃|手枪
double_finger_gun 、二本指銃|双手手枪
two_finger_salute 、二本指で敬礼する|二指敬礼
finger_frame 、指フレーム|手指比取景框
spread_fingers 、指を広げる|分开手指
x_arms 、腕を組む|手臂比叉
x_fingers 、二本指|手指比叉
fidgeting 、そわそわする|食指相对
steepled_fingers 、尖った指|手指金字塔
Victory sign:
v 、ピース|比耶✌️
double_v 、二本V|双手比耶
v_over_eye 、vを目にかざす|眼睛前比耶
v_over_mouth 、vを口にかざす|嘴巴上比耶
gyaru_v 、ギャルのv|反手比耶 辣妹式
Fist:
power_fist 、パワーフィスト|挥拳
fist_bump 、拳を突き合わせる|碰拳
fist_in_hand 、手に握りしめた拳|紧握拳头
clenched_hands 、握りしめた手|紧握双手
Pointing:
pointing 、指さし|指
pointing_at_self 、自分を指さす|指自己
pointing_at_viewer 、視聴者を指差す|指向观众
pointing_down 、下を向いている|向下指
pointing_forward 、前を向いている|向前指
pointing_up 、上を向いている|向上指
Covering:
covering over face 、顔を覆う|遮住脸
covering over eyes 、目を覆う|遮住眼睛
covering over mouth 、口を覆う|遮住嘴
covering over ears 、耳を覆う|遮住耳朵
Others:
cupping_hands 、両手をすくめる|杯状手
own_hands_clasped 、自分の手|握住自己的手
money_gesture 、お金のジェスチャー|金钱手势
ok_sign 、OKサイン (very similar to money_gesture)|比OK
twirling_hair 、髪をくるくる回す|玩头发
shadow_puppet 、影絵|影子手偶
fox_shadow_puppet 、キツネの影絵|狐狸影子手偶
pinching_gesture 、つまむジェスチャー|捏合手势
reaching 、手を伸ばす|伸手
waving 、手を振る|挥手
beckoning 、手招きする|招手
AI Tool for Storytelling: Visualizing Fictional Worlds with Custom Parameters

The power of storytelling has always been one of the most fundamental elements of human culture, connecting people through inspiring, captivating, and mesmerizing tales. Now, artificial intelligence (AI) technology is opening new doors for creators to bring their fictional worlds to life visually. With AI image-generation tools such as DALL·E, Stable Diffusion, and MidJourney, artists and writers can create complex fictional universes using only textual descriptions. This article explores how AI tools can support storytelling in innovative and unique ways.

Why is Visualization Important in Storytelling?
Visualization adds depth to storytelling. In fiction, the worlds built through words can often be difficult to imagine in detail. AI tools solve this challenge by:
- Bringing abstract concepts to life: with simple descriptions, AI can create complex images, such as "a city at the edge of the galaxy with crystal-based architecture."
- Ensuring visual consistency: AI can produce a series of images following a specific theme or style, maintaining coherence throughout the narrative.
- Accelerating the creative process: without requiring technical drawing skills, writers can focus on their imagination without being limited by artistic ability.

Using Custom Parameters for More Accurate Results
One of the most powerful features of AI tools is the ability to customize parameters to achieve results that align with the creator's vision. Here are some steps to make the most of custom parameters in storytelling:
Define style and atmosphere. Before starting, think about the mood of your fictional world. Is it a dark dystopia? Or perhaps a vibrant utopia? Use parameters such as:
- Lighting: "Dimly lit with neon hues."
- Texture: "Smooth and futuristic vs. rugged and natural."
Use negative prompts to avoid unwanted details. If you want specific results, negative prompts help eliminate distracting elements. For example, for a futuristic city without trees, use:
Prompt: "A futuristic city skyline, no vegetation, no natural elements."
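The article names no specific library; as a minimal sketch of passing a positive prompt together with a separate negative prompt, here is how it might look with the open-source diffusers library (the model ID and prompt wording are my own assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline (the model ID is an example choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt lists the distracting elements to suppress.
image = pipe(
    prompt="A futuristic city skyline at dusk, sleek towers, neon lights",
    negative_prompt="vegetation, trees, natural elements",
).images[0]
image.save("futuristic_city.png")
```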
Outpainting for expansive worlds. If your story requires a large and interconnected world, use the outpainting feature to expand images, creating seemingly endless landscapes.
Reference base images. For continuity, use a base image as a reference. The AI will generate variations based on the image, ensuring your world feels cohesive.

Case Study: Visualizing a Fictional World with AI Tools
Imagine a writer creating a story titled "The Shattered Realms," set in a parallel world blending magic and technology. Here's how to visualize it using AI:
- Sky and landscape: "A fractured sky with glowing magical rifts, over a city built on floating islands with gears and steam-powered towers."
- Characters: "A mage with glowing blue tattoos, holding a staff of crystal shards, standing in a windswept meadow of neon flowers."
- Action: "A battle scene between two armies, one wielding swords of fire, the other armed with glowing shields powered by ancient tech."

Advantages and Challenges
Advantages:
- Flexibility: AI allows easy exploration of various styles and concepts.
- Accessibility: high-quality visuals can be created without professional art skills.
- Multidisciplinary collaboration: AI tools enable collaboration between writers, artists, and designers.
Challenges:
- Detail consistency: AI sometimes generates inconsistent elements between images.
- Understanding context: AI is still limited in comprehending complex contexts, requiring manual adjustments.

Conclusion
AI tools for generating images not only support storytelling but also redefine creative boundaries. By leveraging custom parameters, creators can visually bring their fictional worlds to life, offering a more immersive experience for readers or audiences. These tools are not just instruments but collaborative partners in crafting extraordinary stories.
If you're a writer, artist, or creator, why not give these tools a try and see how far your imagination can go? Your fictional world is waiting to come to life!
Exploring the Impact of Captions on Model Training: A Comprehensive Analysis

Introduction
In the ever-evolving field of AI, the effectiveness of training methods is a crucial factor in achieving optimal model performance. A pivotal consideration in model training, especially for techniques like Flux LoRA, is whether to use captions as part of the training dataset. Captions (textual descriptions accompanying images) have been both celebrated and critiqued for their influence on model behavior. This article examines the impact of captions on model training, comparing the strengths and weaknesses of datasets with captions against those without.

The Role of Captions in Model Training
Captions provide semantic context that can significantly enhance a model's ability to associate visual elements with descriptive terms. This relationship is particularly beneficial in scenarios where specific outputs are desired, such as generating images based on text prompts.
Benefits of using captions:
- Improved specificity: captions help models better understand nuanced details in images. For example, a caption like "a red fox in a snowy forest" directs the model's attention to key elements, leading to more accurate results.
- Enhanced alignment: when paired with textual prompts, models trained on captioned datasets produce outputs that are more aligned with user intent.
- Semantic richness: captions add layers of meaning, enabling the model to learn abstract concepts like "melancholic atmosphere" or "elegant posture."
Challenges with captions:
- Data quality dependency: poorly written or ambiguous captions can mislead the model, introducing noise into the training process.
- Bias amplification: captions may carry cultural or linguistic biases that can skew model outputs.
- Computational overhead: processing captions requires additional resources, increasing the complexity and duration of training.

The Case for Caption-Free Datasets
Datasets without captions rely solely on visual features for training, which can also have distinct advantages.
Benefits of caption-free datasets:
- Flexibility in output: models trained without captions are often more creative, as they are not constrained by explicit textual guidance.
- Reduced preprocessing needs: eliminating captions simplifies dataset preparation, saving time and resources.
- Neutral learning: without captions, models are less likely to inherit textual biases, focusing instead on intrinsic visual patterns.
Challenges with caption-free datasets:
- Lack of context: without captions, models may struggle to understand the intent behind certain visual elements.
- Ambiguity in outputs: outputs can lack precision, as the model has no textual reference to guide its interpretations.

Striking a Balance: The Hybrid Approach
For many use cases, the optimal strategy lies in a hybrid approach that combines the strengths of both methodologies. By using captions selectively, models can achieve both precision and creativity.
Practical steps for implementation:
- Curate high-quality captions: ensure that captions are accurate, relevant, and free from bias.
- Segment the dataset: use captions for subsets of data where specificity is critical and leave others caption-free to foster diversity.
- Iterative training: alternate between captioned and caption-free batches to balance semantic alignment and visual flexibility, as sketched below.
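As a minimal sketch of that alternation step (the dataset structure and function names are my own illustration, not a reference training loop):

```python
# Sketch of the hybrid approach: alternate captioned and caption-free batches.
import itertools

def hybrid_batches(captioned, caption_free, batch_size=4):
    """Yield batches, alternating between the two dataset halves."""
    def chunks(items):
        for i in range(0, len(items), batch_size):
            yield items[i:i + batch_size]
    # Interleave: one captioned batch, then one caption-free batch, and so on.
    for pair in itertools.zip_longest(chunks(captioned), chunks(caption_free)):
        for batch in pair:
            if batch:
                yield batch

captioned = [("img_%03d.png" % i, "a red fox in a snowy forest") for i in range(8)]
caption_free = [("img_%03d.png" % i, "") for i in range(8, 16)]
for batch in hybrid_batches(captioned, caption_free):
    print(batch)
```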
Quantitative Analysis
Experiments with Flux LoRA reveal that:
- Models trained with captions show a 25% improvement in alignment with text-based prompts.
- Caption-free models exhibit a 30% increase in creative variation but a 15% decrease in prompt specificity.
- Hybrid models demonstrate balanced performance, with a 15% boost in both alignment and creativity.

Conclusion
Captions are a double-edged sword in model training. While they enhance semantic understanding and specificity, they can introduce noise and biases. Conversely, caption-free datasets foster creativity but risk ambiguity. A hybrid approach, tailored to the specific goals of the project, offers the most balanced outcomes. As AI training methods continue to evolve, understanding the nuanced impacts of captions will be key to unlocking new frontiers in model performance.
Prompting: Backgrounds

Hey all! I know a lot of you struggle when prompting for backgrounds, and I'd like to give you a few tips to help you achieve what you're looking for!

Template Formula
We're going to use this template to make prompting for backgrounds easier. Of course, you don't have to fill in all of these; it's just an idea to get you going!
[indoor/outdoor], [theme/style], [key background location], [important objects or structures], [time of day], [weather], [lighting/mood adjectives], [small background details]
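As a minimal sketch of filling that formula programmatically (the helper and slot names are my own illustration of the template, not part of the article):

```python
# Sketch: fill the background template above from named slots.
TEMPLATE_SLOTS = ["setting", "style", "location", "objects",
                  "time_of_day", "weather", "mood", "details"]

def background_prompt(**slots):
    # Keep the template order, skipping any slot left empty.
    return ", ".join(slots[k] for k in TEMPLATE_SLOTS if slots.get(k))

print(background_prompt(
    setting="outdoor",
    style="fantasy",
    location="enchanted forest",
    objects="ancient twisted trees, glowing mushrooms",
    time_of_day="night",
    weather="foggy",
    mood="eerie blue light",
    details="magical fog creeping at feet",
))
```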
Examples
1. 📍 INDOORS
Modern indoors:
- Apartment interior: minimalist furniture, exposed brick wall, soft warm lighting, windows
- Corporate office: glass walls, desks with chairs and computers, a plant, filing cabinets, folders, cityscape visible outside, fluorescent lights
- Café: rustic tables, warm lights, crowded shelves, coffee machine
Fantasy indoors:
- Ancient library: towering bookshelves, candles, massive dusty tomes, rustic wooden tables
- Mage's tower room: arcane symbols on walls, glowing crystals, ancient scrolls and books piled on a table, wooden desk, magic clutter
- Castle hall: stone walls, banners, chandeliers, long tables, warm light
Sci-fi indoors:
- Spaceship cockpit: holographic displays, metal walls, blinking control panels
- Cyberpunk hacker den: glowing monitors, tangled wires, dark metal desk, neon graffiti walls
- Alien temple: bioluminescent carvings, strange pulsating walls, hovering energy artifacts
Historical indoors:
- Medieval tavern: wooden beams, crowded tables, flickering candlelight, tankards of ale
- Victorian parlor: velvet chairs, ornate rugs, grand fireplace, oil paintings on walls
- Roman villa atrium: marble columns, mosaic floors, small fountain, lush plants

2. 🌲 OUTDOORS
Modern outdoors:
- Urban city street: skyscrapers, busy traffic, flashing billboards, nighttime rain reflections
- Suburban park: morning, trimmed green lawns, tree-lined walking path, bench
- Abandoned warehouse yard: cracked pavement, broken fences, overgrown weeds, graffiti murals
Fantasy outdoors:
- Enchanted forest: glowing mushrooms, ancient twisted trees, magical fog creeping at feet
- Dragon graveyard: giant skeletal remains, charred black ground, perpetual red sunset
- Mountain castle: green mountain cliffs, big stone castle, trees
Sci-fi outdoors:
- Alien desert planet: red sun, red cracked earth, distant colossal ruins, swirling sandstorms
- Mega-city skyline: impossibly tall towers, neon holograms, hovercars zipping past, misty lower levels
- Asteroid mining station exterior: rocky surface, parked space shuttles, distant planets visible in a black sky
Historical outdoors:
- Medieval battlefield: muddy trenches, shattered banners, smoke rising from distant fires
- Viking fjord village: wooden longhouses, smoky chimneys, mountains meeting icy waters
- Feudal Japan village: thatched roofs, winding stone paths, cherry blossom trees in bloom

🧠 Bonus Tips for Background Prompting
- Mention time of day: morning, dusk, night, midnight, dawn.
- Weather adds mood: rainy, snowy, foggy, stormy, sunny, dusty.
- Use adjectives to strengthen the scene: "ruined", "elegant", "oppressive", "overgrown", "desolate", "bustling".
- Think of lighting: neon lights, candlelight, moonlight, glowing fog, flickering torches, cyber glow.
- Use 2-3 small details to make scenes vivid without overloading.

🖊 Quick Ideas
1. Indoor or outdoor? indoor, outdoor
2. Theme or style? modern, fantasy, sci-fi, historical, cyberpunk, steampunk, dark horror, medieval, post-apocalyptic
3. Location examples? apartment interior, castle throne room, neon city street, floating island, viking village, abandoned lab, hacker den, secret garden, spaceship bridge
4. Objects/structures? glowing runes, steel beams, grand chandeliers, ruined statues, market stalls, old books, overgrown vines, shattered glass, ritual circles
5. Time of day? morning, afternoon, sunset, dusk, night, midnight, dawn
6. Weather? clear sky, raining, snowing, foggy, stormy, windy, dusty
7. Lighting or mood adjectives? soft candlelight, harsh neon glow, eerie blue light, golden sunset glow, dim flickering light, bioluminescent mist, crackling fires
8. Small details? footsteps in dust, abandoned backpacks, bloodstains on floor, flying paper scraps, vines creeping up walls, holographic ads, ancient symbols carved on walls
Prompting: Merfolk

Hey all! This article was made upon request by some members of my Discord, to prompt different kinds of merfolk. I hope you find it useful! I use Illustrious-based models, so keep that in mind as you might get different results!

Striped tail: Merman, short orange hair, messy, freckles, round hazel eyes, joyful expression, naked chest, orange and white striped tail, soft fin edges, playful pose, underwater reef background, holding starfish, surrounded by schools of colorful fish, bubble trail rising
Clownfish: Merman, short orange hair, messy, round hazel eyes, joyful expression, naked chest, (very detailed clownfish tail:1.8), (clownfish rounded fins), underwater reef background, surrounded by schools of colorful fish, bubble trail rising, male nipples
Lionfish: merman, spiky red hair, long slicked back hair, glowing yellow eyes, intense glare, bold black and red striped tail, venomous-looking fin spines, underwater cave lit by bioluminescence, floating aggressively, arms spread wide
Anglerfish: merman, (glowing lure antenna on forehead:1.5), sharp teeth, glowing purple eyes, creepy grin, shadowy deep-sea background, bioluminescent tendrils on chest, pitch black tail, floating alone deep underwater, eerie stillness
Seahorse: seahorse merman, light golden hair, short curls, teal eyes, curious look, (pastel yellow curled spiked tail:1.5), seahorse crown, coral reef background, naked chest, male nipples
Octopus: navy blue hair, slicked to one side, Octopus, monster boy, tentacle legs, tentacle lower body, scylla, octopus boy, purple-blue skin, glowing white eyes, playful smirk, muscular upper body, underwater background, holding enchanted scroll, tentacles mid-motion
Orca: Orca Merman, black and white tail, strong jaw, very short black hair, icy blue eyes, battle scars, wearing thick leather harness, underwater cave background, muscular arms, firm expression
Electric eel: eel hybrid male, sinuous body tapering into a long finned tail, pale electric-blue skin with dark streaks, glowing yellow eyes, sleek black wet hair, bioluminescent markings across arms, floating beside broken stone archway in submerged ruins, lightning crackling between fingers, intense glare
Betta: Merman, midnight blue hair, slicked back with loose strands, violet eyes, sharp stare, dramatic flowing tail like a crowntail betta in deep blue and red gradient, muscular torso, underwater trench, intense backlight, floating, aggressive posture
Goldfish: Goldfish Mermaid, orange curly hair, high messy bun, large golden eyes, smiling brightly, round soft body, translucent frilly tail with orange-and-white bubble eye goldfish pattern, lacy coral top, swimming through reeds and bubbles, sunlight glinting, playful twirl, underwater
Axolotl: merman, pale pink-white hair, soft waves, shoulder length, gentle teal eyes, serene smile, soft pink skin, translucent fins down arms and head, smooth short tail with a curled tip, glowing cavern lake, curled up resting pose, ethereal
Kohaku Koi: merman, white hair, long straight hair, center part, soft amber eyes, relaxed smile, white and red koi tail with large red markings, elegant white scaled bracers, underwater garden background with red lilies, leaning back on coral rock, arms open, serene expression
Showa Koi: mermaid, black curly hair, medium length, floating hair, golden brown eyes, confident smirk, black, red and white patterned koi tail, black pearl bikini top, underwater background, dramatic lighting, floating underwater
Bekko Koi: mermaid, black straight flowy hair, very long, full blunt bangs, pale blue eyes, soft expression, white koi tail with black spots, white shell bikini top, swimming over kelp, underwater
Shusui Koi: Mermaid, dark auburn hair, braided crown style, warm amber eyes, cheerful smile, sky blue koi tail with orange underbelly, simple pearl-strap top, shallow pool background with lotus pads, laying down
Utsuri Koi: Merman, black hair tied in low ponytail, violet eyes, serious look, black and golden yellow patterned koi tail, swimming over dark coral reef, arms crossed, brooding mood, naked chest, male nipples
Goshiki Koi: Mermaid, gray hair, medium loose curls, red eyes, sly grin, red and light gray koi tail, white bra top, underwater background, school of fish, swimming, twirling, hair flowing, playful expression
绘图提示词整理:第一章|常用身体部位与表情关键词 Chapter 1: Common Body Parts and Facial Expressions

Before you start generating, be clear about your composition, your style direction, and the detail keywords you want to convey. This chapter is a share of auxiliary prompt words; the layout was done for me by AI, as I am lazy about formatting.
Common reference prompts (double-exposure style): below are prompt examples I often use for a "double exposure" style. Different models behave slightly differently; these are for reference only, and corrections are welcome:
silhouette, within, blending, blended, merged, filled with / into, Jeddbleil, double exposure

Hand Gestures (手部动作)
Interaction and greetings:
waving: wave (greeting, farewell) [social scenes, welcomes, partings]
saluting: military salute (formal greeting) [military, formal occasions]
high-five: high five (celebration, friendliness) [celebrating with partners, friendship]
clapping: applause (praise, approval) [celebrations, performances]
pointing: pointing (emphasizing a direction or target) [explaining, signaling, interaction]
Expression and emotion:
thumbs-up: thumbs up (approval, encouragement) [positive feedback, liking]
thumbs-down: thumbs down (disapproval, dissatisfaction) [rejection, disappointment]
fist-clenching: clenched fist (anger, tension, determination) [before a fight, inner struggle]
open-palm: open palm (welcome, defense, prayer) [greeting, acceptance, requests]
finger-snapping: snapping fingers (rhythm, a magic moment) [performance, transformation]
Thinking and poses:
hand-on-chin: hand on chin (pondering, confusion) [thinking, hesitation]
hand-on-hip: hand on hip (confidence, displeasure) [questioning, attitude]
hand-on-face: hand covering the face (shock, shyness, dejection) [reaction shots, introverted emotion]
hand-on-heart: hand on heart (sincerity, being moved, vowing) [emotional expression, promises]
Performance and motion:
grabbing: grabbing (tension, seizing a chance) [tense moments, urgent actions]
fingertips-touching: fingertips touching (carefulness, subtlety, scheming) [thinking, quiet exchanges]
pinching: pinching (caution, care, hesitation) [inspecting, gentle touches]
flipping: flipping (boredom, anger) [flipping a book, flipping a table]
palms-together: palms pressed together (wishing, gratitude, pleading) [requests, prayer]
picking-up: picking up (focus, care) [delicate operations, actions]
Intensity and rhythm:
spreading-fingers: spread fingers (intensity, pressure) [tension, surprise]
gripping: tight grip (control, focus) [grabbing, attacking]
tapping-fingers: tapping fingers (anxiety, impatience) [waiting, urgency]
Other details:
shaking-hand: handshake (communication, agreement) [social scenes, cooperation]
twisting-wrist: twisting the wrist (fatigue, boredom) [detail actions]
flexing-fingers: flexing fingers (relaxation, recovery, exercise) [hand activity]
resting-hand: hand resting gently (relaxation, rest) [emotional ease]
clapping-back: clapping behind the back (secret support) [covert encouragement]

Leg Gestures (腿部动作)
Poses:
standing-with-legs-apart: standing with legs apart (steadiness, confidence) [battle-ready, displays of authority]
standing-with-legs-crossed: standing with legs crossed (casual, relaxed) [waiting, chatting]
sitting-cross-legged: sitting cross-legged (casual, relaxed, meditative) [home, classroom, nature scenes]
sitting-with-legs-stretched: sitting with legs stretched out (tired, relaxed) [travel, rest]
sitting-with-legs-bent: sitting with legs bent (reserved, girlish) [shy, emotionally delicate scenes]
walking-with-long-strides: long strides (confidence, urgency) [hurrying, missions]
walking-with-short-steps: short steps (caution, elegance) [nervous, formal scenes]
kneeling: kneeling (pleading, loyalty, worship) [religion, emotional climaxes]
squatting: squatting (observing, mischief, readiness) [low-angle interaction, the start of an action]
Foot actions:
tip-toeing: on tiptoe (lightness, sneaking, caution) [stealthy moves, ambushes]
pointing-toe: pointed toe (elegance, dance moves) [dance, graceful action]
flexing-ankles: flexing the ankles (fatigue, recovery, stretching) [rest, relaxation]
bouncing-on-toes: bouncing on the toes (energy, anticipation) [nervousness, eagerness]
foot-tapping: tapping a foot (anxiety, impatience) [waiting, urgency]
crossing-ankles: crossed ankles (elegance, relaxation) [relaxed, graceful sitting]
rocking-back-and-forth: rocking back and forth (anxiety, unease) [anxious waiting]
raising-heel: raised heel (unease, concentration) [tension, a ready stance]
kicking: kicking (anger, impulsiveness) [defense, venting anger]
stepping-forward: stepping forward (initiative, confidence) [attacking, approaching a goal]
stepping-backward: stepping back (retreat, avoidance) [defense, withdrawal]
twisting-feet: twisting the feet (confusion, unease) [impatience, anxiety]
Toe actions:
wiggling-toes: wiggling toes (relaxed, comfortable) [rest, relaxation]
curling-toes: curled toes (tension, unease) [tense states]
pointing-toe: pointed toes (elegance, dance) [dance, performance]
scrunching-toes: scrunched toes (discomfort, distress) [fatigue, discomfort]
tapping-toes: tapping toes (impatience, eagerness) [waiting, anxiety]
spreading-toes: spread toes (relaxation, stretching) [recovery, ease]
pressing-toes: pressing the toes down (tension, force) [control, exertion]

Finger Gestures (手指动作)
Commands and emphasis:
pointing-with-index-finger: pointing with the index finger (giving orders, emphasis) [commands, lectures, showing objects]
making-a-fist: making a fist (anger, fighting spirit) [duels, excitement, rallying]
making-a-peace-sign: two raised fingers (victory, peace, selfies) [group photos, pop culture]
making-a-come-here-gesture: beckoning someone over (hinting, guiding) [character interaction, guidance]
making-a-stop-gesture: palm thrust forward (stopping, warning) [disputes, alertness]
making-a-call-me-gesture: thumb and pinky extended (contact, socializing) [playful, implied communication]
making-a-thumbs-up-gesture: raised thumb (praise, affirmation) [victory, approval]
making-a-thumbs-down-gesture: thumb down (opposition, denial) [rejection, mockery]
Finger and palm interaction:
pointing-with-multiple-fingers: pointing with several fingers (indicating, emphasizing direction) [guiding, pointing]
touching-fingers: fingers touching each other (thinking, hesitation, delicacy) [thought, caution]
holding-a-finger-to-lips: index finger on the lips (signaling quiet, discretion) [silence, secrecy]
index-finger-and-thumb-pinch: pinching with index finger and thumb (light touch, scrutiny) [inspection, delicate handling]
Wrist gestures:
wrist-twist: twisting the wrist (relaxation, boredom) [fatigue, detail actions]
flexing-wrist: bending the wrist (exertion, force) [strength, control]
shaking-wrist: shaking the wrist (impatience, disappointment) [anxiety, reaction]
flicking-wrist: flicking the wrist (anger, agitation) [fast reactions, warnings]
Arm gestures:
raising-arm: raised arm (calling for help, asking) [seeking help, celebration]
lowering-arm: lowered arm (relaxation, calm) [disappointment, retreat]
crossing-arms: crossed arms (defensiveness, distrust) [refusal, indifference]
stretching-arm: stretching an arm (relaxation, stretching) [recovery, after fatigue]
elbow-pointing: pointing with a bent elbow (hinting, provocation) [silent communication, attitude]
pushing-with-arm: pushing with an arm (pushing, resistance) [defense, propulsion]
pulling-with-arm: pulling with an arm (holding on, attraction) [control, counter-movement]
Other hand details:
finger-waving: wagging a finger (displeasure, warning) [objection, threat]
clenching-fingers: clenched fingers (focus, anger) [tension, determination]
stretching-fingers: stretched fingers (relaxation, recovery) [recovery, ease]
thumb-over-finger: thumb pressing the index finger (deciding, hesitating) [consideration, thought]

Head Gestures (头部动作)
Agreement and denial:
nodding: nodding (agreement, approval) [normal conversation, affirmative replies]
shaking-head: shaking the head (denial, disbelief) [refusal, doubt]
Thinking and doubt:
tilting-head: tilted head (confusion, acting cute) [thinking, curiosity, coyness]
looking-up: looking up (daydreaming, inspiration, pleading) [prayer, starry skies, imagination]
looking-down: looking down (thinking, reserve, shyness) [sadness, loss, contemplation]
Emotion and expression:
bowing-head: bowed head (shame, politeness, sorrow) [apologies, reflection, formal occasions]
raising-eyebrows: raised eyebrows (surprise, flirtation) [expressive, reaction scenes]
furrowing-brows: furrowed brows (worry, anger) [conflict, stressful scenes]
winking: winking (teasing, shared understanding) [playful, a secret signal]
blinking: blinking (natural motion, sleepiness) [emotional buffering, transitions]
Head and face combos:
head-tilt-with-smile: tilted head with a smile (coyness, charm, approachability) [closeness, gentleness]
head-tilt-with-frown: tilted head with a frown (puzzlement) [curiosity, questioning]
face-palming: facepalm (shock, shame, disappointment) [surprises, embarrassment]
looking-away: looking away into the distance (thinking, avoidance) [contemplation, avoiding eye contact]
raising-chin: raised chin (confidence, challenge) [resolve, defiance]
Neck gestures:
neck-tilting: tilted neck (thinking, questioning) [doubt, curiosity]
neck-craning: craned neck (curiosity, gazing into the distance) [searching, observing]
neck-rotation: rotating the neck (relaxation, relieving stress) [stretching, unwinding]
neck-stretch: stretching the neck (fatigue, relaxation) [releasing tension, movement]
Facial combos:
smiling-with-raised-eyebrows: smile with raised eyebrows (happy, mischievous) [friendly, relaxed]
frowning-with-raised-eyebrows: frown with raised eyebrows (doubt, surprise) [confusion, questioning]
smiling-with-winking: smile with a wink (mischievous, suggestive) [playful, a secret signal]
frowning-with-winking: frown with a wink (confusion, teasing) [joking, signaling]
Eye and head combos:
eyes-widening: widened eyes (surprise, shock) [the unexpected, astonishment]
narrowing-eyes: narrowed eyes (suspicion, alertness) [doubt, vigilance]
rolling-eyes: rolling the eyes (impatience, disdain) [annoyance, disappointment]
eye-contact: eye contact (focus, trust) [communication, signaling]
looking-over: glancing over (provocation, appraisal) [observation, comparison]
looking-around: glancing around (curiosity, alertness) [searching, surveying]
Facial detail combos:
smiling-with-closed-eyes: smiling with closed eyes (happiness, joy) [delight, being moved]
squinting: squinting (confusion, looking closely) [hard to see, concentration]
grinning-with-closed-eyes: grinning with closed eyes (extreme happiness, smugness) [joy, delight]
smiling-with-one-eye-closed: smiling with one eye closed (sly, playful) [hinting, ease]

The above is what I have personally collected and organized over a long time. Stay tuned for the next chapter...
Prompt reference for "Lighting Effects"

Hello. I usually use lighting/lighting-effect words when generating images, and I will introduce some of the words I use when I want to add such effects. Please note that these words alone do not guarantee the effect: the result you get will differ depending on the base model, the LoRA, the sampling method, and where you place the word in the prompt.

Words related to "lighting effects":
・Backlight: light from behind the subject
・Colorful lighting: the subject itself is not recolored, but the color changes depending on the light
・Moody lighting: natural-feeling lighting rather than direct artificial light
・Studio lighting: a term for the artificial lighting of a photography studio
・Directional light: a light source that shines parallel rays in a chosen direction
・Dramatic lighting: a lighting technique from the field of photography
・Spot lighting: artificial light concentrated on a small area
・Cinematic lighting: a single word covering several lighting techniques used in movies
・Bounce lighting: light reflected by a reflector or similar surface
・Practical lighting: photographs and videos that depict the light source itself in the composition
・Volumetric lighting: a term from 3DCG; tends to produce a divine, golden light source
・Dynamic lighting: hard to pin down, but tends to create high-contrast images
・Warm lighting: creates a warm picture lit with warm colors
・Cold lighting: lights the scene with a cold light source
・High-key lighting: soft light, minimal shadows, low contrast, resulting in bright frames
・Low-key lighting: provides high contrast, though the impression can be a little weak
・Hard light: strong light; highlights appear strong
・Soft light: faint, diffuse light
・Strobe lighting: strong artificial light (stroboscopic lighting)
・Ambient light: ambient or indoor lighting
・Flash lighting: for some reason the characters themselves tend to emit light, and there are often flashes of light (flash-lighting photography)
・Natural lighting: tends to create a natural-looking picture, in contrast with artificial light
Understanding the Use of Parentheses in Prompt Weighting for Stable Diffusion

Prompt weighting in Stable Diffusion allows you to emphasize or de-emphasize specific parts of your text prompt, giving you more control over the generated image. Different types of brackets are used to adjust the weights of keywords, which can significantly affect the resulting image. In this tutorial, we will explore how to use parentheses (), square brackets [], and curly braces {} to control keyword weights in your prompts.

Basics of Prompt Weighting
By default, each word or phrase in your prompt has a weight of 1. You can increase or decrease this weight to control how much influence a particular word or phrase has on the generated image. Here's a quick guide to the different types of brackets:
1. Parentheses (): increase the weight of the enclosed word or phrase.
2. Square brackets []: decrease the weight of the enclosed word or phrase.
3. Curly braces {}: in some implementations, they behave similarly to parentheses but with slightly different multipliers.

Using Parentheses to Increase Weight
Parentheses () increase the weight of the enclosed keywords, so the model gives those words more importance when generating the image.
• Single parentheses increase the weight by 1.1 times. Example: (girl) increases the weight of "girl" to 1.1.
• Nested parentheses increase it further. Example: ((girl)) increases the weight of "girl" to 1.21 (1.1 * 1.1).
• You can also specify a custom weight with an exact multiplier. Example: (girl:1.5) increases the weight of "girl" to 1.5.
Example prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile

Using Square Brackets to Decrease Weight
Square brackets [] decrease the weight of the enclosed keywords, so the model gives those words less importance.
• Single square brackets decrease the weight by 0.9 times. Example: [background] decreases the weight of "background" to 0.9.
• Nested square brackets decrease it further. Example: [[background]] decreases the weight of "background" to 0.81 (0.9 * 0.9).
Example prompt:
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]

Using Curly Braces
Curly braces {} are less commonly used, but in some implementations (e.g., NovelAI) they serve a similar purpose to parentheses with different default multipliers. For instance, {word} might be equivalent to (word:1.05).
Example prompt:
(masterpiece, best quality), {beautiful girl:1.3}, highres, looking at viewer, smile

Combining Weights
You can combine different types of brackets to fine-tune the prompt further.
Example: ((beautiful girl):1.2), [[background]:0.7]
Example prompt:
(masterpiece, best quality), ((beautiful girl):1.2), highres, looking at viewer, smile, [[background]:0.7]
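The nesting arithmetic above is just repeated multiplication. As a minimal sketch (the function is my own illustration, not part of any Stable Diffusion UI):

```python
# Sketch: effective weight of a keyword wrapped in n layers of brackets.
def effective_weight(layers: int, bracket: str = "(") -> float:
    # Each pair of parentheses multiplies by 1.1; square brackets by 0.9.
    factor = {"(": 1.1, "[": 0.9}[bracket]
    return factor ** layers

print(effective_weight(1, "("))   # (girl)         -> 1.1
print(effective_weight(2, "("))   # ((girl))       -> 1.21 (approx.)
print(effective_weight(2, "["))   # [[background]] -> 0.81 (approx.)
```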
Practical Examples
Increasing emphasis. To generate an image where the focus is heavily on the "girl":
(masterpiece, best quality), (beautiful girl:1.5), highres, looking at viewer, smile, [background:0.8]
Decreasing emphasis. To generate an image where the "background" is less emphasized:
(masterpiece, best quality), beautiful girl, highres, looking at viewer, smile, [background:0.5]

Conclusion
By using parentheses, square brackets, and curly braces effectively, you can guide Stable Diffusion to prioritize or de-prioritize certain elements in your prompt, resulting in images that better match your vision. Practice using these weighting techniques to see how they affect your generated images, and adjust accordingly to achieve the best results.

〓〓〓〓〓〓〓〓〓 ★★★ FuturEvoLab ★★★ 〓〓〓〓〓〓〓〓〓
Welcome to FuturEvoLab! We greatly appreciate your continuous support. Our mission is to delve deep into the world of AI-generated content (AIGC), bringing you the latest innovations and techniques. Through this platform, we hope to learn and exchange ideas with you, pushing the boundaries of what's possible in AIGC. Thank you for your support, and we look forward to learning and collaborating with all of you.
In our exploration, we recommend several powerful models:
Pony XL (Realistic): [Pony XL]Aurora Realism - FuturEvoLab; [Pony XL]Lifelike Doll Romance - FuturEvoLab
Pony XL (Anime): [Pony XL]Cyber Futuristic Maidens - FuturEvoLab; [Pony XL]Cyberworld Anime - FuturEvoLab; Dream Brush SDXL - FuturEvoLab
SDXL 1.0 (Realistic): [SDXL]Lover's Light - FuturEvoLab; [SDXL]Real style fantasy - FuturEvoLab; [SDXL]Soulful Particle Genesis - FuturEvoLab
SDXL 1.0 (Anime): [SDXL]Lovepunk Synth - FuturEvoLab; FutureDreamWorks-SDXL-FuturEvoLab; DreamEvolution-SDXL-FuturEvoLab
Stable Diffusion 1.5 (Realistic): [SD1.5]Genesis Realistic - FuturEvoLab; Temptation Core - FuturEvoLab; [SD1.5]Meris Realistic - FuturEvoLab; [SD1.5]Fantasy Epic - FuturEvoLab; [SD1.5]Fantasy - FuturEvoLab
Stable Diffusion 1.5 (Anime): [SD1.5]LoveNourish EX Anime - FuturEvoLab; [SD1.5]LoveNourish Anime - FuturEvoLab; [SD1.5]Temptation Heart【2.5D style】- FuturEvoLab
By leveraging these models, creators can generate images that range from hyper-realistic to vividly imaginative, catering to various artistic and practical applications.
〓〓〓〓〓〓〓〓〓 ★★★ FuturEvoLab ★★★ 〓〓〓〓〓〓〓〓〓
TensorArt 2024 Community Trends Report

2024: A Year of Breakthroughs
This year marked an explosion of innovation in AI. From language and imagery to video and audio, new technologies emerged and thrived in open-source communities. TensorArt stood at the forefront, evolving alongside our creators to witness the rise of AI artistry.

Prompt of the Year: Hair
Surprisingly, "Hair" became the most-used prompt of 2024, with 260 million uses. On reflection, it makes sense: hair is essential in capturing the intricacies of portraiture. Other frequently used words included eyes (142M), body (130M), face (105M), and skin (79M).
Niche terms favored by experienced users, like detailed (132M), score_8_up (45M), and 8k (25M), also dominated this year, but saw a decline in usage by mid-year. With the advent of foundational models like Flux, SD3.5, and HunYuanDit, natural language prompts became intuitive and multilingual, removing the need for complex or negative prompts and lowering the barriers to entry for creators worldwide.

Community Achievements
Every day, hundreds of new models are uploaded to TensorArt, fueling creativity among tensorians. This year alone:
- Over 400,000 models are now available.
- 300,000 images are generated daily, with 35,000 shared via posts, reaching 1 million viewers and earning 15,000 likes and shares.
This year, we introduced AI Tool and ComfyFlow, welcoming a new wave of creators. AI Tool simplified workflows for beginners and enabled integration into industry applications, with usage distributed across diverse fields.
In November, TensorArt celebrated its 3 millionth user, solidifying its position as one of the most active platforms in the AI space after just 18 months. Among our loyal community are members like Goofy, MazVer, AstroBruh and Nuke, whose dedication spans back to our earliest days.

A Global Creative Exchange
AI knows no borders. Creators from around the world use TensorArt to share and connect through art. From the icy landscapes of Finland (1.6%) to the sunny shores of Australia (8.7%), from Pakistan (0.075%) to Cuba (0.003%), tensorians transcend language and geography.
Generationally, 75% of our users are Gen Z or Alpha, while 9% belong to Gen X and the Baby Boomers. "It's never too late to learn" is a motto they live by.
Gender representation also continues to evolve, with women now accounting for 20% of the user base.
TensorArt is breaking barriers: technical, social, and economic. With no need for costly GPUs or advanced knowledge of parameters, tools like Remix make creating stunning artwork as simple as a click.

The Way Tensorians Create
- Most active hours: weeknights, 7 PM to 12 AM, when TensorArt serves as the perfect way to unwind.
- Platform preferences: 70% of users favor the web version, but we've prioritized app updates for Q1 2025 to close this gap.
- Image ratios: female characters outnumber male ones 9:1; 67% of images are realistic, 28% anime, and 3% furry.
- Favorite colors, in order: black, white, blue, red, green, yellow, and gray.

A Growing Creator Economy
In 2024, Creator Studio empowered users to monitor their model earnings. Membership in the TenStar Fund tripled, and average creator income grew by 1.5x compared to last year.
In 2025, TensorArt will continue to prioritize the balance between the creator economy and market development. TA will place greater emphasis on encouraging creators of AI tools and workflows to provide more efficient and convenient practical tools for various specific application scenarios.
To this end, TA will be launching the Pro Segment to further reward creators, offering them higher revenue coefficients and profit sharing from Pro user subscriptions.

2024 Milestones
This year, TensorArt hosted:
- 26 site events and 78 social media campaigns.
- Our first AI Tool partnership with Snapchat, pioneering AI-driven filters, which were featured as a case study by Snapchat.
- The launch of "Realtime Generate" and "Talk to Model," revolutionizing how creators interact with AI.
- A collaboration with Austrian tattoo artist Fani to host a tattoo design contest, where winners received free tattoos based on their designs.
TensorArt is committed to advancing the open-source ecosystem and has made significant strides in multiple areas:
- For newly released base models, TA ensures same-day online running and next-day support for online training. To let tensorians experience the latest models, limited-time discounts are offered.
- To boost creative engagement with new base models, TA hosts high-reward events for each open-source base model, incentivizing tensorians across dimensions such as Models, AI tools, and Posts.
- Beyond image generation, TA actively supports the open-source video model ecosystem, enabling rapid integration of CogVideo, Mochi, and HunYuanVideo into ComfyFlow and Creation. In 2025, TA plans to expand online video functionality further.
- Moving from "observer" to "participant," TA has launched TensorArt Studios, with the release of Turbo, a distilled version of SD3.5M. In 2025, Studios will unveil TensorArt's self-developed base model.
- TensorArt continuously funds talented creators and labs, providing financial and computational resources to support model innovation. In 2025, Illustrious will exclusively collaborate with TensorArt to release its latest version.

Looking Forward
From ChatGPT's debut in 2022 to Sora's breakthrough in 2024, AI continues to redefine innovation across industries. But progress isn't driven by one company; it thrives in the collective power of open-source ecosystems, inspiring collaboration and creativity.
AI is a fertile ground, filled with the dreams and ambitions of visionaries worldwide. On this soil, we've planted the seed of TensorArt. Together, we will nurture it and watch it grow.

2024 Annual Rankings
Each month of 2024 brought unforgettable moments to TensorArt. Based on events, likes, runs and monthly trends, we've curated the 2024 Annual Rankings. Click to explore!
466
67
Prompting: Eyes. Colors, Shapes & More

Prompting: Eyes. Colors, Shapes & More

Hey all! Today I bring you an article about prompting for eyes. As always, keep in mind that results may vary depending on the models you use.

Models and settings used to generate these images:
437
51
HairStyle Prompts Sharing - AIの髪のプロンプト集 - 发型提示词分享

HairStyle Prompts Sharing - AIの髪のプロンプト集 - 发型提示词分享

Beginning
Hairstyles are so diverse and ever-changing. Here are 100+ effective hairstyle prompts for you!
The tested model is ✨WAI-NSFW-illustrious-SDXL✨; these prompts also perform well on most other models.
If you want to watch the Video Tutorial (which is more efficient and intuitive), you can visit 👉 https://www.instagram.com/p/DH59PgJS-5A/
More prompt sharing will be released. Stay tuned! And welcome to follow the TensorArt Official Instagram 💗👉 https://www.instagram.com/tensor.art/ This is where you can be the first to receive our shared videos. 😉

Prompts

Loose Hair
wavy hair , ウェーブヘア|波浪卷 卷发
curly hair , 巻き毛|小卷 卷发
messy hair , メッシーヘア|乱乱的头发
straight hair , ストレートヘア|直发
single sidelock , シングルサイドロック|单边发放下
asymmetrical sidelocks , アシンメトリーサイドロック|不对称鬓角
single hair intake , シングルヘアインテーク|单发发旋
hair intakes , ヘアインテーク|发旋
bob cut , ボブカット|波波头
inverted bob , 逆ボブ|反向翻转波波头
flipped hair , フリップヘア|头发下侧翘起来
wolf cut , ウルフカット|狼尾
hime cut , 姫カット|公主姬
mullet , マレット|鲻鱼头
half updo , ハーフアップ|半扎发

Tails
ponytail , ポニーテール|马尾
side ponytail , サイドポニーテール|侧马尾
high ponytail , ハイポニーテール|高马尾
folded ponytail , 折り返しポニーテール|折叠马尾
short ponytail , ショートポニーテール|短马尾
two side up , ツーサイドアップ|双侧扎发
one side up , ワンサイドアップ|单侧扎发
uneven twintails , 不揃いなツインテール|不齐双马尾
twintails , ツインテール|双马尾
low twintails , ローツインテール|低双马尾
short twintails , ショートツインテール|短双马尾
low-tied sidelocks , ロータイサイドロック|低扎鬓角
multi-tied hair , マルチタイドヘア|多重扎发

Braid
Crown braid , クラウンブレード|皇冠麻花辫
Folded braid , 折り込みブレード|折叠麻花辫
French braided ponytail , フレンチブレードポニーテール|法式辫马尾
French braided twintails , フレンチブレードツインテール|法式辫双马尾
Half up braid , ハーフアップブレード|半扎麻花辫
Low-braided long hair , ローブレードロングヘア|低麻花辫长发
Side braid , サイドブレード|侧麻花辫
Single braid , シングルブレード|单麻花辫
Twin braids , ツイン編み込み|双麻花辫

Bun & Drills
Bun with braided base , 編み込みベースお団子|麻花辫编成底部丸子头
double bun , ダブルおだんご|双丸子头
cone hair bun , コーンおだんご|锥形丸子头
donut hair bun , ドーナツおだんご|甜甜圈丸子头
bow-shaped hair , ボウシェイプヘア|蝴蝶结形发型
drill hair , ドリルヘア|钻头卷发
twin drills , ツインドリル|双钻头卷发
ringlets , リングレット|卷环发
drill sidelocks , ドリルサイドロック|钻头卷鬓角
hair rings , ヘアリング|发环
single hair ring , 一重ヘアリング|单层发环

Bangs
bangs , 前髪|刘海
bangs pinned back , ピンで留めた前髪|用发夹固定的刘海
blunt bangs , 鈍い前髪|齐刘海
Braided bangs , 編み込み前髪|编辫刘海
crossed bangs , クロス前髪|交叉刘海
choppy bangs , 不揃い前髪|参差刘海
diagonal bangs , 斜め前髪|斜刘海
hair over eyes , 目にかかる髪|遮眼发
hair over one eye , 片目にかかる髪|遮一只眼发
hair between eyes , 目の間の髪|眼间发
parted bangs , 分け目のある前髪|分缝刘海
curtained hair , カーテンヘア|帘式刘海
wispy bangs , 薄い前髪|稀疏刘海
short bangs , 短い前髪|短刘海
swept bangs , 流した前髪|侧扫刘海

Ahoge
ahoge , アホ毛|呆毛
heart ahoge , ハートアホ毛|心形呆毛
huge ahoge , 巨大アホ毛|巨大呆毛
antenna hair , アンテナヘア|天线发
heart antenna hair , ハートアンテナヘア|心形天线发

Others
nihongami , 日本髪|日本髮型
pointy hair , 尖った髪|尖发
spiked hair , スパイクヘア|刺猬头
buzz cut , バズカット|平头
crew cut , クルーカット|短平头
flattop , フラットトップ|平顶头
undercut , アンダーカット|剃鬓侧削发
cornrows , コーンロウ|玉米鬃辫
dreadlocks , ドレッドヘア|脏辫
pompadour , ポンパドール|蓬巴杜发型
hair slicked back , 髪を後ろになでつける|油头
pixie cut , ピクシーカット|精灵短发

Special Thanks
Some prompts shared this time are sourced from @hypersankaku2. Follow them on Twitter to show support! https://x.com/hypersankaku2
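If you want to audition many of these hairstyles quickly, a small script can splice each tag into a fixed base prompt so you can batch-generate comparisons with the same seed. A minimal sketch; the base prompt and the short tag list are just placeholders, and you would paste each printed line into the generator of your choice:

```python
# Build one test prompt per hairstyle tag so each image differs only by hair.
base = "1girl, upper body, looking at viewer, soft lighting, high quality"

hairstyles = ["wavy hair", "ponytail", "crown braid", "double bun", "hime cut"]

for tag in hairstyles:
    print(f"{tag}, {base}")
```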
433
16
Easy Guide to make a LoRA for Flux

Easy Guide to make a LoRA for Flux

Hello, my friends! Many of you already know how to create LoRAs, and there are many articles about creating a LoRA on this site. However, I wanted to add another tutorial for anyone who wants to follow along with my creation process. Let's begin! (I'll use many images I captured during the creation process.)

Step 1. Upload images
Before uploading images, you of course need to prepare them first. For a Flux LoRA of a character, somewhere from 15 to 40 images is typical. For this tutorial, I prepared only 12 images, and their quality is not really ideal either. Ideally you'd have very high quality images from various angles; avoid images of poor quality, as they only make your result worse. That said, Flux is smart enough to learn and produce a good LoRA even under harsh circumstances, so OK images can work.

If you go to the online training menu at the top of the Tensor Art site and click it, you'll see a screen like the following. Click the upload button in the empty area on the left and a pop-up window will appear. Go to the folder of your stored images and select them; they will then show up in the training window, as in the following image. If you click on any of the images there, another window will pop up showing the image and the caption (tags) created for it. (When an image is uploaded, the tool generates its caption automatically. You can also create your own caption file and upload it with each image if you want; let's skip that for now.)

Step 2. Parameter settings
Next, check the parameters for the training as follows. Make sure you are in Pro Mode; if you click the button at the top right, it will change to a "Basic Mode" button. Check the SD3/Flux Standard checkbox, then use the default base model, Flux.1. For Network Module, change it to LoKr instead of LoRA. They are similar things; personally, I prefer LoKr. Please ask ChatGPT for the details, and teach me too~ LOL. (You must select LyCORIS in the model project settings for now; don't choose LoKr until TA fixes the issue. LyCORIS is a superset that includes efficient modules like LoKr, LoHa, LoCon, etc.)

For the primary trigger word, use a distinctive token with underscores, like the ek_sku11_kn1ght I used in my case. I used 1's to replace certain letters like 'l' and 'i'. Why? I was told to use words that can't be confused with any "normal" prompt words. For additional trigger words, you can use normal phrases you want to use with the LoRA. These trigger words are used in the captions during training to teach the LoRA about the images. For example, if you want to teach "metallic surface", it must appear in the captions of the right images; the LoRA will then learn the images' metallic surface, and when you later generate with the trigger words "metallic surface", the related result will probably show up better.

At the bottom, the amount of credits used for LoRA creation is shown: 210.61 credits. However, that's for the default settings of repeat: 20 and epoch: 10. We will use smaller numbers for this training, and the credits will drop a lot.

Let's continue to the next steps. For the "Skull Knight" LoRA I'm making now, I personally don't need a complete match to the original source images, so I reduce the repeat to 15 and the epochs to 3. If the result is bad, we can continue the training from epoch 3 later. (However, this continuation of training is unfortunately only available to Pro users.) Save every N Epochs is set to 1, which means the generated LoRA (a safetensors file) is saved at every epoch. You don't know where in the training process you'll get the best result until you see the samples, so you need to save the result at every epoch.

The desirable resolution for Flux LoRA training is 1024x1024, but you can still get decent quality by choosing 512x512. For serious LoRA creation, like a beautiful woman's face, I use 1024x1024 of course. For this kind of OK LoRA, I can try 512x512; choosing the smaller image size saves a lot of credits. Check the credits at the bottom now: only 47.39! BTW, your source images are supposed to be prepared to match this resolution. (To be honest, you can use various resolutions for the source images and the training still works, but for the best training result you should use the correct source image size. Don't blame me later. 🤗)

Step 3. Parameter settings (2)
The parameter settings are long, so I've broken them into several steps. Check the numbers in the following image and just use them if you are not sure what to do here. For the LR scheduler I prefer cosine_with_restarts, though other schedulers like "constant" or "linear" work fine too. For details, ask ChatGPT and teach me later too~ lol. For the optimizer, AdamW8bit is used; there are other choices here too. Ask ChatGPT~.

Shuffle caption is set to enabled. Keep N tokens is 1, because we will put the primary trigger word at the beginning of the captions later; the trainer then doesn't treat the trigger word as part of the shuffled caption. For noise offset and the other numbers, use the defaults. Some articles say the noise offset should be 0.0357; in my experience the default 0.03 is still OK, and I couldn't tell the difference in the final results in many cases. For conv_dim, use 4. For conv_alpha, use 1. These numbers are very important; for style LoRAs, we use different settings here, like 8 and 2.

Step 4. Parameter settings (3)
For sample image generation during training, use a proper prompt, but don't use complicated prompt words as you would in real generation later. Keep it simple so you can check the effect of the LoRA more clearly; check the prompt I used here for an example, and include the trigger words in it. Set the sample image size as you wish; I chose 768x1024 here. For the sampler, use Euler or Euler Ancestral. Pro users can click the Priority Queue button for a faster training process.

Step 5. Add trigger words to the captions
You need to put the primary trigger word at the beginning of your captions. Use the labeling menu and insert it at the beginning as follows. Now the primary trigger word is placed at the start of the caption of every image; you don't have to do this manually one by one. (Well, you can, if you prepared the captions as separate files from the beginning.) Additional trigger words can be appended at the end. I have another trigger phrase, "Skull Knight", which isn't really necessary, but I used it for the tutorial. Now the new trigger words are at the end of every image's caption. Select the Priority Queue button, then press the "Start Training Now" button.

Step 6. Training starts
A new window pops up, with the training session named by a timestamp. You can rename the session: select "rename" in the menu at the top right and change it to the name you want. I put "skull knight 1.0". BTW, this name becomes the version name of your LoRA after you publish it later; I usually change it to a real version number like 0.5, 1.0 or 2.0. We can edit it later, don't worry.

Step 7. Training progress
You can open the parameter settings menu and check that everything is OK here. If you find any mistakes, you can stop the training before it starts and save your credits. (BTW, I asked TA to add this cancel feature long ago, and they accepted my suggestion. Yeah, you owe me a big one~ LOL.)

As the training progresses, you'll see the "Loss" getting lower (sometimes it goes up and down). You see the result of every epoch in the 4 sample images; click and check whether they are what you want. You can compare the sample images of different epochs and choose the epoch with your favorite results for publication. The "publish" button at the bottom of each epoch publishes that epoch's safetensors. The "continue" button at each epoch starts another session of training from that epoch, which is really convenient thanks to TA. Great job, TA~ You can see how the image in the same sample slot improves as the epochs advance.

Step 8. Publish your LoRA
OK, the training is over and you are happy with your sample results. Then publish it as your LoRA project! Easy, huh? When the following window pops up, click the "+ Create a project" menu. (You could also have created a project separately from TA's home page beforehand.) Add a project name and select the channel and tags properly. Add a description for the LoRA you just created. In 2, you can see your session name as the Model Version; change it to a different number or name if you want, and you can edit this later too. Set the default LoRA strength you want; I set 0.8 here. You can add more showcase images here; you'll come back later to make changes after you generate images with your LoRA. Choose Download enabled or prohibited; I chose "prohibited". Then press the "Publish" button.

The project edit page will then pop up. Type in the project name you want and select the tags you want. Add a description for your LoRA project. Select Original, Original - Exclusive, or Original - Early Access depending on your preference. Fill in the remaining choices, press "Update", and you are all set~ Good luck!

Step 9. Using the created LoRA
The final results are not bad at all~ From a small set of images at 512x512, we still got a decent LoRA!

Step 10. Continued training
To improve the LoRA, I continued the training up to epoch 7, adding more captions/trigger words related to spikes. The result improved to be close to the original source images. Thanks for reading! 🤗😉 (For style LoRA making, please see my 2nd article~)
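To keep the knobs from this guide in one place, here is the parameter set described in Steps 2 to 4 collected into a plain Python dictionary. This is only a reference summary with informal field names, not an actual TA API call:

```python
# Hypothetical summary of the settings used in this tutorial (character LoRA on Flux).
flux_lokr_settings = {
    "base_model": "Flux.1",
    "network_module": "LoKr",        # selected via LyCORIS in the project settings
    "repeat": 15,                    # default is 20; lowered to save credits
    "epochs": 3,                     # can continue from epoch 3 later (Pro only)
    "save_every_n_epochs": 1,        # keep a safetensors file per epoch
    "resolution": "512x512",         # 1024x1024 for serious character work
    "lr_scheduler": "cosine_with_restarts",
    "optimizer": "AdamW8bit",
    "shuffle_caption": True,
    "keep_n_tokens": 1,              # protects the leading trigger word from shuffling
    "noise_offset": 0.03,            # default; 0.0357 is sometimes recommended
    "conv_dim": 4,                   # try 8 for style LoRAs
    "conv_alpha": 1,                 # try 2 for style LoRAs
    "sampler": "Euler",              # or Euler Ancestral, for sample images
}
```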
425
43
Prompting: Male Hairstyles

Prompting: Male Hairstyles

Hello all! Today I bring you the most requested article: male hairstyles! As always, keep in mind that results may vary depending on the models you use.

This is the prompt I used to generate these images; the only difference between them is the hairstyle added to the prompt:

"1boy, upper body, looking at viewer, neutral expression, soft lighting, shallow depth of field, clean background, natural pose, high quality, detailed face"

As always, my settings for generating these images:
408
29
ControlNet: Openpose adapter

ControlNet: Openpose adapter

This article introduces the OpenPose ControlNet adapter. If you’re new to ControlNet, I recommend checking out my introductory article first.

I don’t use OpenPose much myself, since I find the Canny + Depth combination more convenient. But I did some experiments specifically for this article, so consider this a first look rather than a deep dive.

The OpenPose adapter lets you copy the pose of humanoid characters from one image to another. Like other ControlNet adapters, it uses a preprocessor that takes an input image and generates a control file, in this case a stick figure representing the positions of key joints and limbs. This stick figure then guides the image generation process. Here is an example:

Left: original picture by Chicken; center: stick figure generated by the OpenPose preprocessor; right: stick figure overlaid on the original image.

As you can see, the stick figure isn’t a full skeleton but marks key joints as dots connected by lines. The colors aren’t random; they follow a color code for different bones and joints. A Bing search gives this reference for them. Here is the list of joints and bones with the color scheme:

Looking at the example above, I don't think the preprocessor did a good job this time: a few joints seem to be quite a way off the mark, and the legs are missing. The picture is somewhat non-trivial, a close top-down view with perspective distortion, but the preprocessor should still have been able to handle it. Fortunately, it seems easy enough to alter the stick figures or even make them from scratch.

OpenPose seems to be sensitive to scheduler and sampler settings. Unlike Canny and Depth, it refused to work with the karras/dpm_adaptive I normally use, so I switched to normal/euler, 20 steps.

Here are the settings:

And here are the results:

As you can see, the pose and head position are copied to some extent.

I used the default preprocessor here; there are more:

Here is the stick figure for openpose_full:

It includes fingers. A single white dot represents the face; I guess the preprocessor just failed here. The fingers are nowhere to be seen in the results:

It seems the preprocessor and the main model are out of sync.

The dw_openpose_full stick figure looks promising:

It includes markings for the face, eye and mouth contours. The results, though, are disappointing; it seems to be completely ignored. I think the dw_openpose_full preprocessor is not compatible with the adapter model.

So, yeah, quite disappointing. It is not a complete loss; it does work to some extent and can be useful. It is just difficult to be excited about this one.

I should point out that I’m specifically talking about the ControlNet OpenPose adapter for the SDXL-based model on tensor.art; these conclusions are in no way representative of other implementations of this adapter. Also, it is possible that I am "holding it wrong". These things can be tricky, and my experience using it is very limited.

If I am missing something here, feel free to drop a comment.
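If you want to inspect or hand-edit the stick figure outside tensor.art, the same family of preprocessors is available in the open-source controlnet_aux package. A minimal sketch, assuming controlnet-aux is installed and that its OpenposeDetector API hasn't changed; the file names are placeholders:

```python
from controlnet_aux import OpenposeDetector
from PIL import Image

# Download the annotator weights and run the OpenPose preprocessor on an image.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
stick_figure = openpose(Image.open("input.png"))
stick_figure.save("pose.png")  # edit this control image before generation if needed
```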
401
40

Tips for new Users

Intro
Hey there! If you're reading this, you're probably new to AI image generation and want to learn more. If you're not, you probably already know more than me :). Yeah, full disclosure: I'm still pretty inexperienced at this whole thing, but I thought I could still share some of the things I've learned with you! So, in no particular order:

1. You can like your own posts
I doubt there's anyone who doesn't know this already, but if you're posting your favorite generations and you care about getting likes, you can always like them yourself. Sketchy? Kinda. Do I still do it? Yes. And on the topic of getting more likes:

2. Likes will often be returned
Whenever I receive a like on one of my posts, I'll look at that person's pictures and heart any that I particularly enjoy. I know a lot of people do this, so one of the best ways to get people to notice and like your content is to just browse through posts and be generous with your own likes. It's a great way to get inspiration too!

3. Use turbo/lightning LoRAs
If you find yourself running out of credits, there are ways to conserve them. When I'm iterating on an idea, I'll use an SDXL model (Meina XL) paired with this LoRA. This lets me get high quality images in 10 steps for only 0.4 credits! It's really nice, and it works with any SDXL model. Unfortunately, if there is a similar method for speeding up SD 1.5 models, I don't know it, so this only works with XL.

4. Use ADetailer smartly
ADetailer is the best solution I've found for improving faces and hands. It's also a little difficult to figure out, so, though I'm still not a professional with it, I thought I could share some of the tricks I've learned. The models I normally use are face_yolov8s.pt and hand_yolov8s.pt. The "8s" versions are better than the "8n" versions, though they are slightly slower. In addition to these models, I'll often add the Attractive Eyes and Perfect Hand LoRAs respectively. These are all just little things you can do to improve these notoriously hard parts of image generation. Also, using ADetailer before upscaling the image is cheaper in terms of credits, though the upscaling process can sometimes mess up the hands and face a little bit, so there's some give and take there.

5. Use an image editing app
Wait a minute, I hear you saying, isn't this a guide for using Tensor Art? Yes, but you can still use other tools to improve your images. If I don't like a specific part of my image, I'll download it, open it in Krita (or Photoshop or GIMP) and work on it. My art skills are pretty bad (which is why I'm using this site in the first place), but I can still remove, recolor, or edit certain aspects of the image. I can then reupload it to Tensor Art and use img2img with a high denoising strength to improve it further. You could also just try inpainting the specific thing you want to change, but I always find it a bit of a struggle to get inpaint to make the changes I want.

6. Experiment!
The best way to learn is to do, so just start generating images, fiddling with settings, and trying new things. I still feel like I'm learning new stuff every day, and this technology is improving so fast that I don't think anyone will ever truly master it. But we can still try our hardest and hone our skills through experimentation, sharing knowledge, and getting more familiar with these models. And all the anime girls are a big plus too.

Outro
If you have anything to add, or even a tip you'd like to share, definitely leave a comment and maybe I can add it to this article. This list is obviously not exhaustive, and I'm nowhere near as talented as some of the people on this platform. Still, I hope to have helped at least one person today. If that was you, maybe give the article a like? I appreciate it a ton, so if you enjoyed, just let me know. Thanks for reading!
395
77
Prompting: Perspectives

Prompting: Perspectives

Greetings! Today, I'd like to present a short guide on the angles and perspectives I use for generating images.

Bird's eye view: a shot taken from high above, showing the scene from overhead.
Worm's eye view: a shot taken from below, looking up from the ground; works the same as a low angle.
Close up: a shot showing details of a subject, such as a face, in close focus.
Extreme close up: a very close shot that captures a specific detail, such as an eye, in focus.
Upper body / Cowboy shot: a shot of the subject from the waist up.
Dutch angle: a tilted shot that creates a dynamic, distorted perspective.
High angle: a shot that looks at the subject from above.
Low angle: a shot that looks at the subject from below.
Frontal view: the subject is looking at the camera.
Back view / Rear view: the subject has their back to the camera.
Side view: the subject is shot from the side.

You can also add the following to the prompt to get a random angle: random angle, dynamic angle.

Have fun creating!
370
13
ControlNet: QR Code adapter

ControlNet: QR Code adapter

This article introduces the QR Code ControlNet adapter. If you’re not yet familiar with the general idea behind ControlNet, I suggest reading this article first.

The QR Code adapter is named after the QR code. To steal a bit from Ars Technica:

QR codes, short for Quick Response codes, are two-dimensional barcodes initially designed for the automotive industry in Japan. These codes have since found wide-ranging applications in various fields including advertising, product tracking, and digital payments, thanks to their ability to store a substantial amount of data. When scanned using a smartphone or a dedicated QR code scanner, the encoded information (which can be text, a website URL, or other data) is quickly accessed and displayed.

If you use a QR code as the control file for the QR Code adapter, the resulting picture will contain the recognizable pattern of the QR code, while still trying to represent your prompt:

The QR code is usually distorted to some extent, but it mostly works thanks to the error correction built into the QR code system. QR codes don’t expect every scan to be perfectly aligned and clean.

A much more interesting use is unrelated to QR codes. Any arbitrary pattern can be presented to this adapter, which leads to some fascinating effects:

This picture is from another Ars Technica article about this adapter.

People have gotten very creative with this:

The author of this image even made a guide based on it. Here’s the pattern he used:

I used the QR Code adapter for a few images here. I make the control file black and white and match its size to the intended picture size, although both of these steps seem to be optional. I use a weight in the 0.9 to 1.5 range.

You should give a prompt that allows the AI some freedom to fit into your pattern. Irregular, natural objects seem to work best: think clouds, fog, smoke, rocks, waves, flames, sand, or shadows. But there are really no wrong prompts. Just try and see.

An easy way to convert an image with a prominent foreground object into a QR Code control file is to make a depth map for it and then edit it.

And that's really all there is to it. This is a very simple adapter. Unlike Canny, Depth, and OpenPose, it doesn't even have a preprocessor; the only input is the pattern file.

Here are five images I posted here that were made with the QR Code adapter:

And here are the patterns I used:

Other articles about ControlNet:
Introduction to ControlNet
ControlNet: Canny adapter
ControlNet: Depth adapter
ControlNet: Openpose adapter
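Since this adapter has no preprocessor, preparing the control file is on you. Here is a minimal sketch of the prep described above (black-and-white conversion plus resizing to the generation size) using Pillow; the file names, threshold, and target size are placeholders:

```python
from PIL import Image

# Load an arbitrary pattern, force it to pure black and white,
# and match it to the intended generation size (e.g. 1024x1024).
pattern = Image.open("pattern.png").convert("L")
bw = pattern.point(lambda p: 255 if p > 128 else 0)  # hard threshold
bw = bw.resize((1024, 1024))
bw.save("control.png")
```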
367
25
Prompting: Facial Expressions

Prompting: Facial Expressions

Hey all! A quick post about some facial expressions I use in my generations. I'm pretty sure there are many more, but these are the ones I use the most! Also, depending on the models you use, these might look different.
349
17
Tutorial of ACG to OnlyFans style

Tutorial of ACG to OnlyFans style

What sets this apart from other ACG 2 REAL tools: more realistic facial features and photographic quality. The limitation is that it's not suitable for overly complex movements, and hands might occasionally have issues.

How to use:
Input your photo.
Select a face model (you can check the image details in related posts to preview each numbered effect).
GO!

*If you have a custom prompt, please add a comma at the end.

You'll receive two images: one is the standard result like other ACG2REAL tools (used as guidance for the next image), and the other is this TOOL's specialty image.

There is also a version that consumes fewer credits here; it's just that the control is not as precise, and the reproduction of complex movements is not very good. If there is an error in the hands, you can try to fix it with this tool.

Hope you like it!
334
22
🐍 Snake Soirée Creative Contest 🐍 Through January 20 (translated from Japanese)

🐍 Snake Soirée Creative Contest 🐍 Through January 20 (translated from Japanese)

🎉 Lunar New Year: Snake Soirée 🎉
Lunar New Year on January 29, 2025 is approaching, and we are delighted to announce the Snake Soirée Creative Contest!

🌟 New Year of the Snake 🌟
The Year of the Snake symbolizes wisdom, cunning, and introspection. 2025 will be a year full of opportunities and challenges, making it the perfect time for creativity and celebration. 🎊 The Year of the Snake celebration begins; show us your imagination! 🎊

⏰ Event period
January 1 to January 20 (UTC). After the event ends, we will take two days for judging and officially announce the winners on January 23.

🌟 Snake Soirée: Images/Videos and AI Tools
The Snake Soirée is split into two sessions: the Post Soirée and the AI Tool Soirée. Enjoy! 😝

🖼️ Post Soirée: Snake and TenTen
Task: Long time no see! Do you all remember TenTen, TensorArt's mascot? TenTen has come to celebrate the Year of the Snake! 🌟 Combine TenTen with snake elements in your work! 🌟 Post it on TensorArt. There are no fixed guidelines; let your creativity shine and blend TenTen and snake elements perfectly! Tag your post with snakeyear!
Winners and prizes:
Best Creativity Award: 3-day Pro + 200 credits (10 winners)
Best Aesthetics Award: 3-day Pro + 200 credits (10 winners)
Participation prizes: image post: 50 credits; video post: 100 credits (image and video rewards can be earned at the same time, up to 150 credits per account).

🎨 AI Tool Soirée: Design Snake
Task: Let a little snake "slither" into the world of design. 🌟 Create an AI Tool in the design field 🌟 that includes snake elements or snake imagery as design elements, for example poster design, fashion design, artistic typography design, or other AI Tools with snake elements. An AI Tool may expose at most 3 parameter settings and must be design-related, otherwise it will not be eligible for rewards. Tag it with snakeyear when uploading!
Winners and prizes:
Best Creativity Award: $29.9 (3 winners)
Best Aesthetics Award: $29.9 (3 winners)
Participation prize: 200 credits

📝 Rules
Posts and AI Tools must fit the corresponding theme and requirements, otherwise they cannot earn rewards.
Tagging #snakeyear counts as participating in the event; without the tag you lose eligibility for rewards.
Cash rewards will be deposited into the GPU Fund at the end of the event and can be withdrawn at any time.
Winners are decided by the official TensorArt team.
Users with the default system avatar and nickname will not receive rewards.
Event content must comply with community rules. NSFW, child pornography, celebrity images, violence, and low-quality content are not eligible.
Final interpretation of the event belongs to TensorArt.
331
36
My Journey: Training a LoRA Model for Game Art Design

My Journey: Training a LoRA Model for Game Art Design

My Journey: Training a LoRA Model for Game Art Design

What is LoRA?
LoRA (Low-Rank Adaptation) is a powerful technique for creating custom AI art models, perfect for game designers looking to develop unique visual styles.

My Training Setup for the Adrar Games Art Style

Preparing Your Training Dataset

Technical specifications:
Base model: FLUX.1 - dev-fp8
Training approach: LoRA (Low-Rank Adaptation)
Trigger words: Adrr-Gmz
Epochs: 5
Learning rate: 0.0005 (UNet)

Key training parameters. Network configuration:
Dimension: 2
Alpha: 16
Optimizer: AdamW 8bit
LR scheduler: Cosine with Restarts

Advanced techniques:
Noise offset: 0.1
Multires noise discount: 0.1
Multires noise iterations: 10

Sample prompt:
"A game art poster of a Hero standing in a fantastic ancient city in the background, and at the top a title in a bold stylized font 'Adrar Games'"

My Learning Process

Challenges:
Creating a consistent game art style
Capturing the essence of the "Adrar Games" visual identity
Balancing technical parameters with creative vision

Insights:
LoRA allows precise control over art generation
Careful parameter tuning is crucial
Small adjustments can significantly impact results

Practical takeaways:
Start with a clear artistic vision
Experiment with different settings
Don't be afraid to iterate and refine

Recommended next steps:
Generate multiple sample images
Analyze and compare results
Adjust parameters incrementally
Build a library of unique game art assets

Would you like me to elaborate on any part of my LoRA training experience?
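For quick reuse, here are the hyperparameters stated above gathered into one Python dictionary. This is just a reference summary of the article's values, not an actual training API:

```python
# Hypothetical summary of the training setup described above (FLUX.1 dev-fp8).
adrar_gmz_lora = {
    "base_model": "FLUX.1-dev-fp8",
    "trigger_words": "Adrr-Gmz",
    "epochs": 5,
    "unet_lr": 5e-4,
    "network_dim": 2,
    "network_alpha": 16,
    "optimizer": "AdamW8bit",
    "lr_scheduler": "cosine_with_restarts",
    "noise_offset": 0.1,
    "multires_noise_discount": 0.1,
    "multires_noise_iterations": 10,
}
```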
307
31
Perfect detail migration - Using the Kontext model for stable image-to-image edits

Perfect detail migration - Using the Kontext model for stable image-to-image edits

Tensor has launched the new Kontext model from Black Forest Labs, which offers excellent detail migration and a variety of operations driven by simple prompts. Here are some simple, practical ways to use it.

First, upload an image and select an appropriate size. It is usually recommended to keep the original size to ensure image quality.

Additional note: the official Kontext has a daily usage limit. After it is used up, you can use the TOOLS I made as a supplement.

1. Switch styles
Just type "restyle to" plus a style description, such as Pixar, clay, or Ghibli, then click Generate to transfer the style directly.

2. Restore old photos
Enter "Restore and colorize this image. Remove any scratches or imperfections."

3. Remove image elements
Simply type an instruction that starts with "remove" and specify what you want to remove. For example:
remove the watermark from the picture
remove astronaut from the pic

4. Change image elements
Start with "change", then connect the image content to what it should become. For example:
change the dessert to a burger
change "I LOVE YOU" to "I WANT HIM"
change the woman to side view
change the time to daytime

5. Character consistency
Describe the character's features briefly, then state what should change about the character (actions, settings). For instance:
generate the girl's front view, side view, rear view
the 3D girl dancing on the stage with colorful light

6. Extract items
Start with "Extract", then name the content to extract, followed by "over a grey background". For example:
Extract only the T-shirt over a grey background, product photography style

7. Expand the image
Enter "extend to" plus the requirements for the extension. For example:
extend to full body shot

8. Art style reference
Enter "using this style" plus what you want to draw. For example:
using this style to draw a liquid glass style icon of the camera app

9. Modify the environment
Enter "It's now" plus a description of the environment's state. For example:
It's snowing now, everything is covered with snow

In summary, those are 9 practical functions; I hope they are helpful to you. Of course, if you have a better way to use it, you are very welcome to share it with us.

In addition, I have developed several TOOLS designed to address the cumbersome prompt input and the daily usage limits, which can be handled easily with these TOOLS:
Style transfer | Cute you | Remove and replace | POSE CHANGE | change background
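If you prefer to run Kontext locally instead of through the site, recent versions of Hugging Face diffusers ship a pipeline for it. A minimal sketch, assuming you have access to the FLUX.1-Kontext-dev weights, a diffusers release that includes FluxKontextPipeline, and a CUDA GPU; the prompt is one of the examples above and the file names are placeholders:

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the Kontext editing pipeline (assumes access to the gated model weights).
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Apply an instruction-style edit to an existing image.
source = load_image("input.png")
edited = pipe(image=source, prompt="change the dessert to a burger").images[0]
edited.save("output.png")
```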
304
9
I Just Want to Take It Easy: LoRA Settings via GPT | Model Training

I Just Want to Take It Easy: LoRA Settings via GPT | Model Training

I don't want to move in the cold winter. I don't want to go out in the hot summer. In pollen-heavy spring I want to stay home, and in autumn, when dead leaves blow around, I want to gaze out the window. For me, GPT is a good partner that does all sorts of things in my place; it would be perfect if it could also make dinner and run my bath. Here is how someone who only ever thinks about taking it easy, like me, works out detailed LoRA settings. Ta-dah!

Have GPT come up with the detailed settings

Upload one sample image and ask something like this (change the parts in red):

"I'm planning to make a LoRA for Flux, the AI image generation model, themed on 'wedding venues'. I have prepared 40 images like the attached one. Could you suggest optimal settings for Repeat, Epoch, Network Rank (Dim), Network Alpha, and so on? Are there any caveats about tag settings? Could you also write a prompt for generating sample images?"

It also helps to mention anything else you care about.

GPT's actual reply:

Here is a detailed explanation of the optimal settings, caveats, and sample prompt for a wedding-venue LoRA.

LoRA training settings
With the 40 images provided, the dataset is on the small side, so the following settings are recommended.

Basic settings:
Repeat: 5-10. Since the dataset is only 40 images, repeating each image supplements the amount of training data.
Epochs: 10-15. To train sufficiently for the data volume while avoiding overfitting, set the epochs to around 10-15.
Network Rank (Dim): 16-32. A Rank of about 16-32 ensures the model's flexibility. Too low and the learned content gets diluted; too high and the risk of overfitting rises.
Network Alpha: 16. Setting Alpha to roughly the same value as the Rank stabilizes LoRA training.

Notes on tag settings:
Common tags: add basic elements such as "wedding venue", "flower decorations", "arches", "drapes", and "fairy lights" as tags, so the wedding-venue elements are learned properly.
Detailed tags: tag the elements in each image in fine detail, for example flower types ("roses", "floral arrangements", "bouquets"), decorations ("candles", "lanterns", "string lights", "arched drapery"), and lighting or mood ("soft lighting", "romantic setting", "warm glow").
Avoiding proper nouns: be careful not to include proper nouns such as specific venue names or brand names.

Sample image generation prompt
Once training is complete, you can generate samples with the following prompt:

(masterpiece, best quality, ultra-detailed), wedding venue, floral arch, draped curtains, fairy lights, romantic lighting, beautiful flower arrangements, glowing candles in glass lanterns, soft sunlight through tall windows, elegant decorations, roses, pastel colors, dreamy atmosphere, wooden floor, luxurious and enchanting wedding setting

Extra elements to strengthen:
For a seasonal feel: spring wedding or winter wedding theme
Background changes: sunset lighting or nighttime fairy lights
Emphasizing decorations: detailed flower patterns or ornate arch design

Notes
When generating, adjust the LoRA strength (weight) to around 0.7-1.0 to optimize the model's influence. After generating samples, check concretely whether the LoRA has learned the intended elements and make adjustments. Try these settings, and I'll be waiting for your feedback if adjustments are needed! ✨
301
26
Double-Click Boost! 🚀 The Secret to Skip the Queue!

Double-Click Boost! 🚀 The Secret to Skip the Queue!

While using Tensor AI Tools for image generation, I often found myself stuck in a never-ending queue after hitting the "Go" button... 😩

But then... EUREKA! 💡 I discovered a secret trick:
👉 Double-click "Go" to create a new task, and, voilà, your first task magically starts running! 🎩✨

It's like hitting NITRO in a racing game 🏁💨: DOUBLE-CLICK TO ACCELERATE!!! 🚀🔥
(Works most of the time… but no refunds if it doesn't! 😆)
298
30
How to create Lora (Basic version)

How to create Lora (Basic version)

How to create Lora (Basic version)

I believe that when creating an image you like, you may sometimes feel that some element is missing. You might search for a relevant LoRA and still not find it. At this point you may think: should I make a LoRA myself? That way, my future works can highlight more of the elements I like, and I can share it with my creative partners. Let's get started!

First, collect a number of pictures with similar elements (about 12), preferably at high resolution, so that when you use the LoRA together with other LoRAs later the pictures come out clearer, unless the element you want is a hazy feeling.

Next, it's time to start training!

Open the user interface (there is a model I trained there). Click Online Training in the upper left corner to enter the training interface. Add the prepared images in the lower left corner. In the upper right corner, select the base model type you use most often, for example: SD1.5, SD3.5, Pony, Flux, Hunyuan, etc.

Next, consider your computing power budget: training the same images on different base models gives very different results and consumes different amounts of computing power. Also consider whether the result can be used together with your commonly used LoRAs.

For the trigger word, you can input the most important elements of this LoRA. You can also leave it empty at first and wait until you see the training results.

The prompt word part seems to affect the training results. If you have a clear goal, enter it. The default is 1girl; if your element is not a girl, change it or leave it blank.

OK, next click Train, wait in the queue, and check the training time.

After the training finishes, look at the results. There are ten training results in total; I usually pick, from the sixth to the tenth, the one that looks closest to what I want, and press Publish.

Then:
Select Create Project at the top.
Enter the project name (be careful, this cannot be changed).
Select the LoRA type.
Add LoRA tags.
Parameters: I usually choose 500 for the number of iterations.
Type the trigger word and description.
Select a base model.
Add negative prompt words.
Upload files, unless they are the result of a previous training; otherwise, I usually adjust the precision to fp32.
Showcases (image/video): if you are uploading the results of a previous training, you need to upload the workbench image and cover image.
OK, press Publish.

Adjust the details again, update, and wait for the system to deploy; then you can try it out and see your results! 😁😁😁
283
55
Art Mediums (127 Styles)

Art Mediums (127 Styles)

Art Mediums

Various art mediums, prompted with '{medium} art of a woman'.

Metalpoint, Miniature Painting, Mixed Media, Monotype Printing, Mosaic Tile Art, Mosaic, Neon, Oil Paint, Origami, Papermaking, Papier-mâché, Pastel, Pen And Ink, Performance Art, Photography, Photomontage, Plaster, Plastic Arts, Polymer Clay, Printmaking, Puppetry, Pyrography, Quilling, Quilt Art, Recycled Art, Relief Printing, Resin, Reverse Glass Painting, Sand, Scratchboard Art, Screen Printing, Scrimshaw, Sculpture Welding, Sequin Art, Silk Painting, Silverpoint, Sound Art, Spray Paint, Stained Glass, Stencil, Stone, Tapestry, Tattoo Art, Tempera, Terra-cotta, Textile Art, Video Art, Virtual Reality Art, Watercolor, Wax, Weaving, Wire Sculpture, Wood, Woodcut, Glass, Glitch Art, Gold Leaf, Gouache, Graffiti, Graphite Pencil, Ice, Ink Wash Painting, Installation Art, Intaglio Printing, Interactive Media, Kinetic Art, Knitting, Land Art, Leather, Lenticular Printing, Light Projection, Lithography, Macrame, Marble, Metal, Colored Pencil, Computer-generated Imagery (CGI), Conceptual Art, Copper Etching, Crochet, Decoupage, Digital Mosaic, Digital Painting, Digital Sculpture, Diorama, Embroidery, Enamel, Encaustic Painting, Environmental Art, Etching, Fabric, Felting, Fiber, Foam Carving, Found Objects, Fresco, Augmented Reality Art, Batik, Beadwork, Body Painting, Bookbinding, Bronze, Calligraphy, Cast Paper, Ceramics, Chalk, Charcoal, Clay, Collage, Collagraphy, 3d Printing, Acrylic Paint, Airbrush, Algorithmic Art, Animation, Art Glass, Assemblage
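Since every image in this article uses the same template with only the medium swapped, a tiny script can emit the whole batch of prompts. A minimal sketch; the short list here is illustrative, so paste in the full list above:

```python
# Generate one prompt per art medium using the article's template.
mediums = ["Watercolor", "Oil Paint", "Charcoal", "Glitch Art", "Mosaic"]

for medium in mediums:
    print(f"{medium} art of a woman")
```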
257
28
Beginner Guide for Prompts (Lesson 1 Emotions)

Beginner Guide for Prompts (Lesson 1 Emotions)

This is a guide for beginners. It's very, very basic: I'll showcase emotions and how the AI portrays them (or doesn't).

The style I'll be using to test is anime. The seed of the neutral expression is used as a base, and only the "neutral expression" part of the prompt is changed.

Prompt: masterpiece, best quality, aesthetic, 4k, hd, amazing quality, very aesthetic, volumetric lighting, perfect lighting, detailed eyes, perfect anatomy, perfect proportions, high definition, masterpiece, best quality, very awa, newest, highres, absurdres, year 2024, extremely detailed, highres, detailed beautiful face, high resolution, good colors, bright skin, dynamic lighting, countershading, depth of field, ambient occlusion, raytracing, bara, face up close, face, eyes, nose, neck, chest, toned neck, adam apple, looking at viewer, head_tilt, slice-of-life style, focus face, male_only, neutral expression, 1boy, toned boy, handsome, ikemen, dynamic angle, multi color hair, blue school_uniform, idol costume, (Glittering background), outdoor, school, blue sky, heart, hair ribbon, head rest, blue glitter hair streaks, shoulder-length_hair, black hair, cyan eyes, warm-ivory_skin.

Expressions: Neutral, Happy, Angry, Sad, Pout, Bored, Smug, Naughty, Grimace (don't ask), Excited, Hopeful, Joyful, Nervous, Pensive, Relaxed, Shy, Shocked, Sleepy, Worried, Mischievous, Embarrassed, Seductive, In awe, Ahegao. Lastly: Emotionless, Scared, Disgust.

At the bottom is an example of how to enhance an emotion: describe the expression through tags. I did the Smug expression but added more details of how it should look. Search for inspo, learn from other creators, check their prompts, and try to verbalize a picture's expression; it'll help your gens look more lively.

Emotions that didn't show up: jealous, hurt, proud, suspicious, thoughtful, stressed, tired, troubled, wary, embarrassed, romantic, frustrated, fearful, confused, curious, worried, sulking, melancholic, disappointed, disgusted.
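The method above (fix the seed, swap only the expression tag) is easy to automate if you generate locally or through an API. A minimal sketch of the prompt side; the abbreviated base prompt and the expression list are placeholders standing in for the full prompt above:

```python
# Swap only the expression tag so every variant shares the same base prompt and seed.
base_prompt = "masterpiece, best quality, 1boy, looking at viewer, {expression}"
expressions = ["neutral expression", "happy", "angry", "sad", "pout", "bored"]

for expression in expressions:
    print(base_prompt.format(expression=expression))
```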
255
11
How I LoRA: A beginners guide to LoRA training | Part 3: Testing your LoRA

How I LoRA: A beginners guide to LoRA training | Part 3: Testing your LoRA

A step-by-step guide on how to train a LoRA; part 3!

Warning: This guide is based on Kohya_SS.
This guide REQUIRES that you read "How I LoRA: A beginners guide to LoRA training | Part 1: Dataset Prep." and "How I LoRA: A beginners guide to LoRA training | Part 2: Training Basics".
This guide CAN be ported to Tensor.art's trainer, if you know what you are doing.
This guide is an (almost) 1:1 of the following guide: https://civitai.com/articles/3522/valstrixs-crash-course-guide-to-lora-and-lycoris-training
Edits were made to keep it short and only dive into the crucial details. It also removes a lot of recommendations I DO NOT follow; for more advanced information, please support the original guide. If you want to do things MY way, keep reading.

THE SETTINGS USED ARE BASED ON SDXL. DO NOT FOLLOW IF YOU ARE TRAINING ON V-PRED OR 1.5.

Testing your LoRA
There are two ways to test a LoRA: during training and after training.

During:
While in Kohya_ss, there is a section for a "test" prompt. Use it. If you followed the guide, you should have set "save every N epochs" to 1, meaning that every epoch it will save a model and, by proxy, test it with the given prompt. Look at each image and judge its quality.

After (the right way):
After training is done, move all your safetensors files to the lora folder of your WebUI installation. I will assume you have A1111, A1111 Forge, or A1111 re-Forge (the best one).
On your WebUI, set yourself up with all the settings you would normally use: checkpoint, scheduler, etc.
Copy/paste one of your dataset prompts into the prompt area (this will test overfitting).
Navigate to the LoRA subtab and add the first file, e.g. Shondo_Noob-000001.safetensors. This will add the LoRA to the prompt as <lora:Shondo_Noob-000001:1>; change the :1 to :0.1.
Set a fixed seed, e.g. 1234567890.
Scroll down to the "script" area of your WebUI and select X/Y/Z.
Set your X, Y and Z to "Prompt S/R".
On X, write all of your LoRA's filenames, e.g.: Shondo_Noob-000001, Shondo_Noob-000002, Shondo_Noob-000003, Shondo_Noob-000004, etc., depending on how many files you saved, their names, etc. ALWAYS SEPARATE WITH A COMMA.
On Y, write all the strength variables from 0.1 to 1, i.e.: 0.1, 0.2, 0.3, etc. ALWAYS SEPARATE WITH A COMMA.
On Z, write an alternate tag to test flexibility. So, if your prompt is "fallenshadow, standing, dress, smile", write something like: dress, nude, swimwear, underwear, etc. This will create a grid where instead of wearing a dress she will be nude, wear a swimsuit, etc. ALWAYS SEPARATE WITH A COMMA.

If you did a concept LoRA or a style LoRA:
On your WebUI, set yourself up with all the settings you would normally use: checkpoint, scheduler, etc.
Copy/paste one of your dataset prompts into the prompt area (this will test overfitting).
Navigate to the LoRA subtab and add the first file, e.g. doggystyle-000001.safetensors. This will add the LoRA to the prompt as <lora:doggystyle-000001:1>; change the :1 to :0.1.
Set a fixed seed, e.g. 1234567890.
Scroll down to the "script" area of your WebUI and select X/Y/Z.
Set your X and Y to "Prompt S/R".
On X, write all of your LoRA's filenames, e.g.: doggystyle-000001, doggystyle-000002, doggystyle-000003, doggystyle-000004, etc., depending on how many files you saved, their names, etc. ALWAYS SEPARATE WITH A COMMA.
On Y, write all the strength variables from 0.1 to 1, i.e.: 0.1, 0.2, 0.3, etc. ALWAYS SEPARATE WITH A COMMA.

Selecting the right file
Once the process finishes, you should have at least two grids: one X/Y with dress, and another with nude (for example), or just one if you didn't set up a Z axis. Up to you.
Now look at the grid and look for the "best" result. Look at art style bias, pose bias, look bias, etc. The more flexible, the better. If on fallenshadow-000005 shondo's pose is always unique but after 000006 she's always standing the same way, ignore 000006+.
If at some point the art style gets ignored, or changes and the LoRA fixates on it, ignore it.
If at some point ANYTHING starts repeating that you don't want, ignore it.
The only thing that should repeat at all times is whatever corresponds to the trained concept. If you trained a wolf with a hat but it should always be a different hat, avoid a file that gives him the same hat in the same pose with the same style.
If the result image is identical to the training data, avoid it! You are not here to reproduce the same images as your data; you are here to make new ones, remember?
If colors are weird: bad.
If shapes are mushy: bad.
If the angle is always the same: bad (unless you prompted for it).
Anything that goes against the concept or its flexibility: BAD.
Any file that has to be used lower than 1 or 0.9: BAD. If your LoRA "works best" at 0.6 strength, it's shit.

THIS IS IT FOR PART 3. Now go make some good, cool LoRAs.
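Typing out those comma-separated filename and strength lists by hand is tedious and typo-prone. Here is a minimal sketch that prints ready-to-paste Prompt S/R values; the project name and epoch count are placeholders:

```python
# Build the comma-separated X (filenames) and Y (strengths) values for X/Y/Z Prompt S/R.
project, epochs = "Shondo_Noob", 10

x_values = ", ".join(f"{project}-{epoch:06d}" for epoch in range(1, epochs + 1))
y_values = ", ".join(f"{strength / 10:.1f}" for strength in range(1, 11))

print("X:", x_values)  # Shondo_Noob-000001, Shondo_Noob-000002, ...
print("Y:", y_values)  # 0.1, 0.2, ..., 1.0
```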
242
20
Prompting: Hair Coloring

Prompting: Hair Coloring

Hey! Here's a quick guide on how to prompt different hair coloring. Keep in mind that the results may vary depending on the models you're using!
232
5
FLUX1 - Mastering Camera Exposure to Achieve Realism in AI Image Generation - Leica Camera

FLUX1 - Mastering Camera Exposure to Achieve Realism in AI Image Generation - Leica Camera

When venturing into the realm of AI-powered image generation, precision in your prompts is paramount to achieving truly compelling results. While the term "realistic photo" might seem straightforward, it lacks the specificity needed to guide these systems effectively. AI image generators are literal interpreters, meticulously following your instructions. To unlock their full potential and generate images that feel authentic and believable, we must embrace a more nuanced approach. By incorporating detailed camera exposure commands into our prompts, we can provide the AI with a clearer roadmap, leading to more focused and visually striking outputs. Let's explore the power of these commands by experimenting with variations of a single prompt, observing firsthand how subtle changes can dramatically impact the final image.

Try experimenting by copying and pasting these variations at the end of your sentence, and experience the differences for yourself.

① General

Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 50mm prime lens.
Result: emphasizes texture with deep shadows and fine monochrome details.

Leica SL2-S with tilt-shift lens, low exposure, high contrast, ISO 100, with a 45mm tilt-shift lens.
Result: produces a surreal perspective with precise focus and deep shadows, highlighting scale and architectural details.

Leica Q2 Monochrom, low exposure, extreme high contrast, ISO 50, with a 28mm macro lens.
Result: highlights intricate details with a glowing outline, using backlighting for a dramatic effect.

Leica S3, low exposure, high contrast, ISO 50, with a 120mm macro lens.
Result: captures sharp macro details with soft contrast, enhancing textures and reflections in fine details.

Leica SL2 with long exposure, low exposure, high contrast, ISO 100, with a 35mm wide-angle lens.
Result: captures dynamic movement with streaks of light, enhancing the contrast of urban night scenes. (For better separation of the subject and a bokeh effect: a 100mm lens.)

Leica M10 Monochrom, low exposure, high contrast, ISO 100, with a 50mm prime lens.
Result: delivers fine natural textures and detail, emphasizing delicate patterns with soft lighting.

Leica SL2-S, low exposure, high contrast, ISO 50, with a 24-90mm zoom lens.
Result: captures architectural details with dramatic lighting and long shadows, creating a striking urban landscape.

Leica Q2, low exposure, high contrast, ISO 50, with a 28mm prime lens.
Result: reveals light refraction and swirling colors in delicate textures, producing a magical and ethereal image.

Leica SL2-S, low exposure, high contrast, ISO 64, with a 75mm prime lens.
Result: enhances botanical detail with soft lighting, revealing the intricate patterns of petals.

Leica S3 with bellows extension, low exposure, high contrast, ISO 64, with a 100mm macro lens.
Result: highlights mechanical precision with sharp contrast and deep shadows, focusing on fine details.

Leica M10-R with ND filter, low exposure, high contrast, ISO 50, with a 28mm wide-angle lens.
Result: freezes fast-moving water with strong contrast, capturing dramatic texture under harsh light.

Leica Q2 Monochrom with polarizing filter, low exposure, high contrast, ISO 100, with a 28mm prime lens.
Result: captures abstract reflections and the interplay of light, emphasizing contrasts on smooth reflective surfaces.

Leica M10 Monochrom, low exposure, high contrast, ISO 400 (pushed to 800), with a 50mm prime lens.
Result: creates a moody, silhouetted image with vintage film grain and intense backlighting.

Leica M-A with black and white film, low exposure, high contrast, ISO 400 (pushed to 1600), with a 50mm prime lens.
Result: utilizes deep shadows and grain to create a powerful and evocative monochrome scene.

Leica M10 with infrared film, low exposure, high contrast, ISO 400 (pushed to 800), with a 35mm wide-angle lens.
Result: produces a surreal and ethereal image, highlighting hidden patterns with unique lighting effects.

② Street & Documentary

Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 35mm prime lens.
Result: captures strong contrast and dynamic light interplay, emphasizing urban textures and shadows.

Leica Q2, low exposure, high contrast, ISO 100, with a 28mm prime lens.
Result: produces a stark silhouette, with vibrant city lights creating dramatic contrast and a halo effect.

③ Portrait & Lifestyle

Leica M11, low exposure, high contrast, ISO 200, with a 50mm prime lens.
Result: highlights natural light, creating an intimate and flattering portrait with soft shadows.

Leica Q2 Monochrom, low exposure, high contrast, ISO 800, with a 28mm prime lens.
Result: emphasizes warm, romantic light, capturing candid emotions with nostalgic undertones.

④ Landscape & Architecture

Leica M10-R, low exposure, high contrast, ISO 100, with a 24mm wide-angle lens.
Result: enhances landscape drama, capturing long shadows and the grandeur of the scene with precise detail.

Leica Q2, low exposure, high contrast, ISO 64, with a 28mm prime lens.
Result: showcases intricate architectural details with side lighting, bringing out textures and design elements.

⑤ Reflection & Abstraction

Leica SL2-S with tilt-shift lens, low exposure, high contrast, ISO 100, with a 45mm tilt-shift lens.
Result: creates a hyper-realistic scene with a unique perspective, using long shadows for dramatic effect.

Leica Q2 Monochrom, low exposure, extreme high contrast, ISO 50, with a 28mm macro lens.
Result: emphasizes the fine details of a snowflake, using backlighting to create a glowing effect and enhance texture.

⑥ Film & Mood

Leica M10 Monochrom, low exposure, high contrast, ISO 400 (pushed to 800), with a 50mm prime lens.
Result: captures a moody silhouette with film grain and backlighting, enhancing the emotional depth of the scene.

Leica M-A with black and white film, low exposure, high contrast, ISO 400 (pushed to 1600), with a 50mm prime lens.
Result: creates a powerful black-and-white image with deep shadows and film grain, emphasizing drama and raw emotion.

⑦ Still Life & Abstract

Leica S3, low exposure, high contrast, ISO 50, with a 120mm macro lens.
Result: produces a sharp, high-detail macro image, capturing the delicate textures of a single water droplet with soft shadows.

Leica SL2 with long exposure, low exposure, high contrast, ISO 100, with a 35mm wide-angle lens.
Result: captures dynamic city movement with long exposure, blending streaks of light and blurred motion for a high-energy urban image.

⑧ Mechanical & Detail

Leica S3 with bellows extension, low exposure, high contrast, ISO 64, with a 100mm macro lens.
Result: emphasizes fine mechanical details with deep shadows and sharp focus, bringing out the intricate craftsmanship of a vintage watch.

🟨 FAQ

Why are Hasselblad, Phase One, and Leica cameras the primary focus, when there are other great options?
☝️ These three brands are known for their high-end cameras, which are often used by professionals due to their superior quality. Having been trained on images produced by these high-end professionals, the AI tends to produce output that reflects the exceptional quality associated with these brands.

For Phase One cameras - https://tensor.art/articles/771043378340050555
For Hasselblad cameras - https://tensor.art/articles/771012991446530480
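Since each line above is a suffix appended to one fixed base sentence, batching the comparison is straightforward. A minimal sketch; the base scene and the short suffix list are placeholders drawn from the examples above:

```python
# Append each camera/exposure suffix to one base scene to compare their effects.
base = "A rainy street in Tokyo at night, a woman under a transparent umbrella"

suffixes = [
    "Leica M10-R, low exposure, high contrast black and white, ISO 100, with a 50mm prime lens.",
    "Leica Q2, low exposure, high contrast, ISO 100, with a 28mm prime lens.",
    "Leica M11, low exposure, high contrast, ISO 200, with a 50mm prime lens.",
]

for suffix in suffixes:
    print(f"{base}. {suffix}")
```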
227
25
[Guide] Make your own Loras, easy and free

[Guide] Make your own Loras, easy and free

This article helped me to create my first Lora and upload it to Tensor.art, although Tensor.art has its own Lora Train , this article helps to understand how to create Lora well.🏭 PreambleEven if you don't know where to start or don't have a powerful computer, I can guide you to making your first Lora and more!In this guide we'll be using resources from my GitHub page. If you're new to Stable Diffusion I also have a full guide to generate your own images and learn useful tools.I'm making this guide for the joy it brings me to share my hobbies and the work I put into them. I believe all information should be free for everyone, including image generation software. However I do not support you if you want to use AI to trick people, scam people, or break the law. I just do it for fun.Also here's a page where I collect Hololive loras.📃What you needAn internet connection. You can even do this from your phone if you want to (as long as you can prevent the tab from closing).Knowledge about what Loras are and how to use them.Patience. I'll try to explain these new concepts in an easy way. Just try to read carefully, use critical thinking, and don't give up if you encounter errors.🎴Making a Lorat has a reputation for being difficult. So many options and nobody explains what any of them do. Well, I've streamlined the process such that anyone can make their own Lora starting from nothing in under an hour. All while keeping some advanced settings you can use later on.You could of course train a Lora in your own computer, granted that you have an Nvidia graphics card with 6 GB of VRAM or more. We won't be doing that in this guide though, we'll be using Google Colab, which lets you borrow Google's powerful computers and graphics cards for free for a few hours a day (some say it's 20 hours a week). You can also pay $10 to get up to 50 extra hours, but you don't have to. We'll also be using a little bit of Google Drive storage.This guide focuses on anime, but it also works for photorealism. However I won't help you if you want to copy real people's faces without their consent.🎡 Types of LoraAs you may know, a Lora can be trained and used for:A character or personAn artstyleA poseA piece of clothingetcHowever there are also different types of Lora now:LoRA: The classic, works well for most cases.LoCon: Has more layers which learn more aspects of the training data. Very good for artstyles.LoHa, LoKR, (IA)^3: These use novel mathematical algorithms to process the training data. I won't cover them as I don't think they're very useful.📊 First Half: Making a DatasetThis is the longest and most important part of making a Lora. A dataset is (for us) a collection of images and their descriptions, where each pair has the same filename (eg. "1.png" and "1.txt"), and they all have something in common which you want the AI to learn. The quality of your dataset is essential: You want your images to have at least 2 examples of: poses, angles, backgrounds, clothes, etc. If all your images are face close-ups for example, your Lora will have a hard time generating full body shots (but it's still possible!), unless you add a couple examples of those. As you add more variety, the concept will be better understood, allowing the AI to create new things that weren't in the training data. For example a character may then be generated in new poses and in different clothes. 
You can train a mediocre Lora with a bare minimum of 5 images, but I recommend 20 or more, and up to 1000.As for the descriptions, for general images you want short and detailed sentences such as "full body photograph of a woman with blonde hair sitting on a chair". For anime you'll need to use booru tags (1girl, blonde hair, full body, on chair, etc.). Let me describe how tags work in your dataset: You need to be detailed, as the Lora will reference what's going on by using the base model you use for training. If there is something in all your images that you don't include in your tags, it will become part of your Lora. This is because the Lora absorbs details that can't be described easily with words, such as faces and accessories. Thanks to this you can let those details be absorbed into an activation tag, which is a unique word or phrase that goes at the start of every text file, and which makes your Lora easy to prompt.You may gather your images online, and describe them manually. But fortunately, you can do most of this process automatically using my new 📊 dataset maker colab.Here are the steps:1️⃣ Setup: This will connect to your Google Drive. Choose a simple name for your project, and a folder structure you like, then run the cell by clicking the floating play button to the left side. It will ask for permission, accept to continue the guide.If you already have images to train with, upload them to your Google Drive's "lora_training/datasets/project_name" (old) or "Loras/project_name/dataset" (new) folder, and you may choose to skip step 2.2️⃣ Scrape images from Gelbooru: In the case of anime, we will use the vast collection of available art to train our Lora. Gelbooru sorts images through thousands of booru tags describing everything about an image, which is also how we'll tag our images later. Follow the instructions on the colab for this step; basically, you want to request images that contain specific tags that represent your concept, character or style. When you run this cell it will show you the results and ask if you want to continue. Once you're satisfied, type yes and wait a minute for your images to download.3️⃣ Curate your images: There are a lot of duplicate images on Gelbooru, so we'll be using the FiftyOne AI to detect them and mark them for deletion. This will take a couple minutes once you run this cell. They won't be deleted yet though: eventually an interactive area will appear below the cell, displaying all your images in a grid. Here you can select the ones you don't like and mark them for deletion too. Follow the instructions in the colab. It is beneficial to delete low quality or unrelated images that slipped their way in. When you're finished, send Enter in the text box above the interactive area to apply your changes.4️⃣ Tag your images: We'll be using the WD 1.4 tagger AI to assign anime tags that describe your images, or the BLIP AI to create captions for photorealistic/other images. This takes a few minutes. I've found good results with a tagging threshold of 0.35 to 0.5. After running this cell it'll show you the most common tags in your dataset which will be useful for the next step.5️⃣ Curate your tags: This step for anime tags is optional, but very useful. Here you can assign the activation tag (also called trigger word) for your Lora. If you're training a style, you probably don't want any activation tag so that the Lora is always in effect. 
5️⃣ Curate your tags: This step is optional for anime tags, but very useful. Here you can assign the activation tag (also called trigger word) for your Lora. If you're training a style, you probably don't want any activation tag, so that the Lora is always in effect. If you're training a character, I tend to delete (prune) common tags that are intrinsic to the character, such as body features and hair/eye color; this causes them to get absorbed by the activation tag. Pruning makes prompting with your Lora easier, but also less flexible. Some people like to prune all clothing to get a single tag that defines a character outfit; I do not recommend this, as too much pruning will affect some details. A more flexible approach is to merge tags: for example, if we have redundant tags like "striped shirt, vertical stripes, vertical-striped shirt", we can replace all of them with just "striped shirt". You can run this step as many times as you want.

6️⃣ Ready: Your dataset is stored in your Google Drive. You can do anything you want with it, but we'll be going straight to the second half of this tutorial to start training your Lora!

⭐ Second Half: Settings and Training

This is the tricky part. To train your Lora we'll use my ⭐ Lora trainer colab. It consists of a single cell with all the settings you need. Many of these settings don't need to be changed; however, this guide and the colab will explain what each of them does, so that you can play with them in the future. Here are the settings:

▶️ Setup: Enter the same project name you used in the first half of the guide and it'll work automatically. Here you can also change the base model for training. There are 2 recommended default ones, but alternatively you can copy a direct download link to a custom model of your choice. Make sure to pick the same folder structure you used in the dataset maker.

▶️ Processing: These settings change how your dataset will be processed.

The resolution should stay at 512 this time, which is normal for Stable Diffusion. Increasing it makes training much slower, but it does help with finer details.
flip_aug is a trick to learn more evenly, as if you had more images, but it makes the AI confuse left and right, so it's your choice.
shuffle_tags should always stay active if you use anime tags, as it makes prompting more flexible and reduces bias.
activation_tags is important: set it to 1 if you added an activation tag during the dataset part of the guide. This is also called keep_tokens.

▶️ Steps: We need to pay attention here. There are 4 variables at play: your number of images, the number of repeats, the number of epochs, and the batch size. These result in your total steps.

You can choose to set the total epochs or the total steps; we will look at some examples in a moment. Too few steps will undercook the Lora and make it useless, and too many will overcook it and distort your images. This is why we save the Lora every few epochs, so we can compare and decide later. For this reason, I recommend few repeats and many epochs.

There are many ways to train a Lora. The method I personally follow focuses on balancing the epochs, such that I can choose between 10 and 20 epochs depending on whether I want a fast cook or a slow simmer (which is better for styles). Also, I have found that more images generally need more steps to stabilize. Thanks to the new min_snr_gamma option, Loras take fewer epochs to train. Here are some healthy values for you to try:

10 images × 10 repeats × 20 epochs ÷ 2 batch size = 1000 steps
20 images × 10 repeats × 10 epochs ÷ 2 batch size = 1000 steps
100 images × 3 repeats × 10 epochs ÷ 2 batch size = 1500 steps
400 images × 1 repeat × 10 epochs ÷ 2 batch size = 2000 steps
1000 images × 1 repeat × 10 epochs ÷ 3 batch size = 3300 steps
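The arithmetic behind those examples is just images × repeats × epochs ÷ batch size. Here's a tiny helper (my own illustration, not part of the trainer colab) that reproduces the table:

```python
def total_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Total training steps = images * repeats * epochs / batch size."""
    return images * repeats * epochs // batch_size

print(total_steps(10, 10, 20, 2))   # 1000
print(total_steps(100, 3, 10, 2))   # 1500
print(total_steps(1000, 1, 10, 3))  # 3333, i.e. roughly 3300
```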
▶️ Learning: The most important settings. However, you don't need to change any of these your first time. In any case:

The unet learning rate dictates how fast your Lora will absorb information. Like with steps, if it's too small the Lora won't do anything, and if it's too large the Lora will deep-fry every image you generate. There's a flexible range of working values, especially since you can change the intensity of the Lora in your prompts. Assuming you set dim between 8 and 32 (see below), I recommend 5e-4 unet for almost all situations. If you want a slow simmer, 1e-4 or 2e-4 will be better. Note that these are in scientific notation: 1e-4 = 0.0001.
The text encoder learning rate is less important, especially for styles. It helps the Lora learn tags better, but it'll still learn them without it. It is generally accepted that it should be either half or a fifth of the unet learning rate; good values include 1e-4 or 5e-5. Use Google as a calculator if you find these small values confusing.
The scheduler guides the learning rate over time. This is not critical, but it still helps. I always use cosine with 3 restarts, which I personally feel keeps the Lora "fresh". Feel free to experiment with cosine, constant, and constant with warmup; you can't go wrong with those. There's also the warmup ratio, which should help the training start efficiently; the default of 5% works well.

▶️ Structure: Here is where you choose the type of Lora from the 2 I mentioned at the beginning. Also, dim/alpha determine the size of your Lora. Larger does not usually mean better: I personally use 16/8, which works great for characters and is only 18 MB.

▶️ Ready: Now you're ready to run this big cell, which will train your Lora. It will take 5 minutes to boot up, after which it starts performing the training steps. In total it should take less than an hour, and it will put the results in your Google Drive.
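To keep all of the above in one view, here is how the guide's recommended values might look collected together. This is purely an illustrative Python summary; the field names are my own shorthand, and the actual trainer colab exposes these as its own form inputs:

```python
# Illustrative summary of the settings recommended in this guide.
# Field names are my own shorthand, not the colab's exact option names.
trainer_settings = {
    "project_name": "my_character",  # same name used in the dataset maker
    "resolution": 512,               # keep at 512 for now
    "flip_aug": False,               # optional; confuses left and right
    "shuffle_tags": True,            # keep active for anime tags
    "keep_tokens": 1,                # a.k.a. activation_tags; 1 if you set one
    "repeats": 10,
    "epochs": 10,
    "batch_size": 2,
    "unet_lr": 5e-4,                 # 1e-4 or 2e-4 for a slow simmer
    "text_encoder_lr": 1e-4,         # half to a fifth of the unet rate
    "scheduler": "cosine with 3 restarts",
    "warmup_ratio": 0.05,            # the default 5%
    "network_dim": 16,               # 16/8 works great for characters
    "network_alpha": 8,              # and produces a Lora of only ~18 MB
}
```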
🏁 Third Half: Testing

You read that right. I lied! 😈 There are 3 parts to this guide.

When you finish your Lora, you still have to test it to know if it's good. Go to your Google Drive, inside the /lora_training/outputs/ folder, and download everything inside your project name's folder. Each file is your Lora saved at a different epoch of the training, numbered 01, 02, 03, etc.

Here's a simple workflow to find the optimal way to use your Lora:

1. Put your final Lora in your prompt with a weight of 0.7 or 1, and include some of the most common tags you saw during the tagging part of the guide. You should see a clear effect, hopefully similar to what you tried to train. Adjust your prompt until you're either satisfied or can't seem to get it any better.

2. Use the X/Y/Z plot to compare different epochs. This is a built-in feature in webui. Go to the bottom of the generation parameters and select the script. Put the Lora of the first epoch in your prompt (like "<lora:projectname-01:0.7>"), and in the script's X value write something like "-01, -02, -03", etc. Make sure the X value is in "Prompt S/R" mode. This performs replacements in your prompt, causing it to go through the different epoch numbers of your Lora so you can compare their quality (see the sketch after this list). You can compare every 2nd or every 5th epoch first if you want to save time. You should ideally generate batches of images to compare more fairly.

3. Once you've found your favorite epoch, try to find the best weight. Do an X/Y/Z plot again, this time with an X value like ":0.5, :0.6, :0.7, :0.8, :0.9, :1". It will replace a small part of your prompt to go over the different Lora weights. Again, it's better to compare in batches. You're looking for the weight that gives the best detail without distorting the image. If you want, you can do steps 2 and 3 together as X/Y; it'll take longer but be more thorough.

4. If you found results you liked, congratulations! Keep testing different situations, angles, clothes, etc., to see if your Lora can be creative and do things that weren't in the training data.
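Prompt S/R ("search and replace") literally searches for the first value in your prompt and swaps in each of the other values. This little Python loop (my own illustration; "projectname" is a placeholder) mimics what the script does to your prompt across the grid:

```python
# Mimics webui's "Prompt S/R" mode: the first value is the search string,
# every other value is substituted in to produce one grid column each.
prompt = "masterpiece, 1girl, <lora:projectname-01:0.7>"
x_values = ["-01", "-02", "-03", "-04", "-05"]

for value in x_values[1:]:
    print(prompt.replace(x_values[0], value))
# -> <lora:projectname-02:0.7>, <lora:projectname-03:0.7>, and so on.
```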
source: civitai/holostrawberry
🔮🔮Anime to Real Life: My Free/Paid AI Tool for Converting 2D to Realistic🔮🔮

Ever wondered how your favorite anime character would look in real life? I built some AI tools that transform ACG (Anime, Comics, Games) images into photorealistic portraits: perfect for artists, cosplayers, or curious fans!

🔮 Access Tiers (Simple & Transparent)

Free Tier (click to use):
A2R Free V1: Low credit consumption; not suitable for multiple people or complex compositions.
A2R Free V2: More realistic and cosplayer-like; details may differ.

Pro, fewer credits (click to use):
A2R Pro L V1: Advanced version of A2R Free V1.
A2R Pro L V2: Very realistic, but the details will change.

Pro, more credits (click to use):
A2R Pro M V1: A version with perfect details.
A2R Pro M V2: The original version.
A2R Pro M V3: The most faithful detail restoration.
A2R Pro M V4: The most realistic face, more like a cosplayer.

Amplification and Restoration

Sometimes the generated image has misplaced hands or feet, or you may want to output it at a larger size. I made two free tools to help with these adjustments (click to use):
Hand and Foot Repair Tool
Image Upscale Tool