FoxyFluffs

Furry AI artist documenting the daily lives of a host of foxy fluffs! Sometimes nsfw, ALWAYS fluffy! Active in Discord.
380 Followers · 238 Following · 12.9K Runs · 2 Downloads · 10.8K Likes · 84 Stars

Articles

Multi-character Prompting & Inpainting Guide (Step by step)

Foxy Fluffs wants to hug Hatsune Miku! But how do you gen two different characters together? We will follow these steps:

1) Engineer an SD model prompt for our characters
2) Run our prompt and refine until we get a near-match
3) Inpaint our chosen image section by section to correct errors
4) Upscale the final product

Let's get into it!

1) Engineer an SD model prompt for our characters

When prompting in SD (with either the TAMS 2.0 or A1111 parsing method), the best practice I've found for images of multiple characters is to use the following prompt structure:

quality tags, general prompts, BREAK, character 1 description and lora keywords, BREAK, character 2 description and lora keywords

- Quality tags (like "best quality", "score_9", or "Newest") differ depending on the base model or checkpoint used (SD1.5, Pony, Illustrious, etc.).
- General prompts are all the prompts that apply to the scene and to both characters (e.g. indoors, 2girls, hugging).
- BREAK tells the model to treat the text following it as a new block of instructions. It differs from a comma, which tells the model to apply the following prompt to the previous one (up to 75 tokens).*
- Character descriptions are the unique identifiers that help depict each character's distinct qualities in the scene (e.g. "Foxy Fluffs, fox girl, green top," and "Hatsune Miku, human, aqua hair"). Expect bleedthrough no matter how carefully you structure this; we will choose the image with the fewest mistakes and edit out any remaining errors with inpainting.

*BREAK doesn't function with all parsing methods, but as a best practice I like to keep it between the unique character descriptions to act as a visual cue of where to place my prompts. I also use a full stop at the end of each unique description to further reinforce this (see below).

Here's the prompt I engineered for my image of Foxy and Miku, with Illustrious model quality tags:

Positive prompt: masterpiece, best quality, amazing quality, very aesthetic, high resolution, ultra-detailed, absurdres, newest, scenery, depth of field, volumetric lighting, 2girls, hugging, looking at another, indoors, BREAK Foxy Fluffs, 1girl, anthro, furry, foxgirl, orange fur, long brown hair, brown eyes, slit pupils, black choker with a silver heart-shaped pendant, green top, black bottoms, blushing, nervous, FoxyFluffs. BREAK hatsune miku, 1girl, human, absurdly long hair, aqua hair, twintails, hair ornament, sidelocks, hair between eyes, parted bangs, aqua eyes, (happy), smiling, white shirt, collared shirt, bare shoulders, sleeveless shirt, aqua necktie, detached sleeves, black sleeves, shoulder tattoo, fringe, black thighhighs, miniskirt, pleated skirt, zettai ryouiki, thigh boots.

Negative prompt: [blank]

**I don't use negative prompts most of the time. Maybe it's a thing with images involving fox girls (because there's nothing negative about fox girls!), but I find I get better results without negatives, so I keep the negative blank unless obvious errors creep in that need to be prompted out. YMMV!
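A small aside for anyone who likes to script things: the block structure above is really just string assembly, so here's a tiny Python sketch that builds the same prompt from its pieces. The tags are taken from the prompt above; the helper function is my own illustration, not anything TensorArt provides.

```python
# Assemble a multi-character prompt in the structure described above:
# quality tags, general prompts, BREAK, character 1, BREAK, character 2.
quality = "masterpiece, best quality, amazing quality, very aesthetic, absurdres, newest"
general = "scenery, depth of field, volumetric lighting, 2girls, hugging, looking at another, indoors"

foxy = ("Foxy Fluffs, 1girl, anthro, furry, foxgirl, orange fur, long brown hair, "
        "brown eyes, black choker with a silver heart-shaped pendant, green top, "
        "black bottoms, blushing, nervous, FoxyFluffs.")
miku = ("hatsune miku, 1girl, human, aqua hair, twintails, aqua eyes, (happy), smiling, "
        "white shirt, detached sleeves, aqua necktie, black thighhighs, pleated skirt, thigh boots.")

def build_prompt(quality: str, general: str, characters: list[str]) -> str:
    """Join the shared blocks and one block per character, with BREAK between them."""
    blocks = [f"{quality}, {general}"] + characters
    return " BREAK ".join(blocks)

print(build_prompt(quality, general, [foxy, miku]))
```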
2) Run our prompt and refine until we get a near-match

I ran the prompt using the following Illustrious models, with a square aspect ratio and 15 steps:

- Nova Anime XL (IL v2.5 Merry Christ)
- Foxy Fluffs OC Character Lora (0.8 weight)
- Hatsune Miku - Vocaloid (1.0 weight)

After 12 gens, I lucked out and got the following image: Miku came out almost perfectly (besides her expression), while Foxy's outfit, hair and expression got corrupted a bit. Now that we have a usable image, we move on to inpainting!

3) Inpaint our chosen image section by section to correct errors

We will inpaint in stages to keep control of the overall image.

For the first inpaint, we fix Foxy's hair and face. We mask the existing hair AND the area where her long brown hair should be, and we also mask her face. (We could do these separately, but this gives us a free shot at fixing her face without spending an extra credit.)

Now that we've masked the hair and face, we adjust the prompt by TEMPORARILY removing the Hatsune Miku prompts and lora to ensure they don't write over Foxy's appearance again (we will need to add these back in for the final upscale). Now our positive prompt looks like this:

There's no need to adjust the base settings. We run it like this and see what we get! If the hair isn't quite what we want, we can adjust the denoise setting up or down, and adjust the mask as well, until we get the right result.

Result of inpaint No. 1: Foxy's hair is fixed, and her face now matches the intended prompt. As an unexpected bonus, the frills on her vest got painted out too (this was a detail from Miku's costume). Lucky!

Now we want to fix the rest of Foxy's costume. Her pendant is missing from her collar, and she prefers to wear shorts and no leggings. So we mask out the area in front of her choker, as well as the miniskirt and her legs all the way down, like so:

We could probably have tried this all in one go with the previous mask to save credits, but this way gives us a little more control over individual elements of the image. The prompt remains the same (without Miku and her keywords).

Result of inpaint No. 2: Her shorts are fixed! But the pendant is looking a bit strange. We'll mask the pendant only on this image and try again (I'll skip the mask image, since I think you get the idea by now). One thing we WILL do is add greater emphasis in the prompt, so we put multiple brackets around (((silver heart-shaped pendant))) so the model knows to give it more importance.

Result of inpaint No. 3: Not perfect, but we'll roll the dice on the upscaler fixing it at the end, and move on to Miku!

Now we want to give Miku her correct expression, and add back in all of her prompts so that when we upscale in the next step, it reinforces her attributes and doesn't turn her into another fox girl (which is what will happen if I leave only Foxy's prompts in and don't add Miku's back). I also remove the emphasis brackets around Foxy's pendant so that it doesn't wind up on Miku's neck in the final image too. Using the whole original prompt, I mask Miku's face ONLY and then run the inpaint tool one last time.

Result of inpaint No. 4: Now Miku has a lovely, happy, cute expression, and Foxy has her nervous expression at meeting her idol. Their outfits are pretty much correct too (though Foxy's pendant is fading, but TRUST IN THE UPSCALE!).
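For the curious, here's roughly what each of those inpaint passes amounts to if you ran it yourself with the open-source diffusers library instead of the site's inpaint tool. The checkpoint, file names and strength value below are illustrative assumptions on my part, not TensorArt's actual settings; the idea of keeping only Foxy's block in the prompt mirrors the trick described above.

```python
# Minimal sketch of one inpaint pass with Hugging Face diffusers (not TensorArt's exact pipeline).
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# Assumed checkpoint: an SDXL inpainting model; swap in whatever checkpoint you actually use.
pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("foxy_and_miku.png")       # the chosen gen from step 2 (hypothetical filename)
mask_image = load_image("mask_hair_and_face.png")  # white = repaint, black = keep

# Only Foxy's block is in the prompt here, mirroring the "temporarily remove Miku" trick.
prompt = ("masterpiece, best quality, 2girls, hugging, indoors, Foxy Fluffs, 1girl, anthro, "
          "foxgirl, orange fur, long brown hair, brown eyes, blushing, nervous")

result = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    strength=0.6,              # the "denoise" dial: lower keeps more of the original pixels
    num_inference_steps=30,
).images[0]
result.save("foxy_inpaint_1.png")
```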
Before we move on to the final step, a confession: I totally forgot to add Miku's lora back in with her prompts 🙈 I don't know how much of a difference that made, but the final image only has my lora included. Fortunately, Miku is such a popular character that her data must already exist in the checkpoint I used, otherwise the next step might not have worked...

4) Upscale the final product

Upscaling can be quite tricky; depending on the image, higher resolution and denoise can introduce a lot of hallucinations. To avoid this, I usually prefer to upscale by 1.5x with a low denoise like 0.1-0.2 to preserve the underlying image. (A rough code sketch of this pass follows at the end of the article.)

The following image was upscaled at 1.5x, 0.2 denoise, and 35 steps (which is excessive; you can still get great results at the limit for free accounts, so don't worry if you don't have Pro!).

Foxy meets a celebrity! (Link to original post. If you can't see it, it's still caught up in the NSFW bug, which hopefully will be fixed soon!)

I was very happy with this image. There was a tiny bit of bleedthrough from Miku's outfit again (note the aqua-coloured outline on Foxy's shorts, and the green frills reintroduced on her vest), but overall the image is very sharp and pretty, and Foxy got her pendant in the upscale! (Woohoo!)

So, that's all there is to it! You can of course try this with more than two characters, though you'll need to do a lot more inpainting, as more characters introduce a lot more variables into the mix. Still, this technique should work for any number of subjects as long as you're patient enough to inpaint each of them.

I hope this guide is a help to those who read this far! Please leave your comments below if this was helpful, or if you have suggestions on how I can improve!

Lastly, a quick shoutout to Superpat50 and XMPL on Discord, who inspired me to write this article in the first place! Thanks guys!
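As promised in step 4, here's a rough sketch of that final upscale pass, again using diffusers as a stand-in for the site's upscaler. The checkpoint and file names are assumptions; the 1.5x, 0.2 denoise and 35 steps are the settings from the walkthrough.

```python
# Sketch of the 1.5x / low-denoise upscale pass using diffusers img2img (not TensorArt's upscaler).
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# Assumed checkpoint: any SDXL-class model close to the one used for the original gen.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = Image.open("foxy_inpaint_4.png").convert("RGB")   # hypothetical filename
w, h = image.size
upscaled = image.resize((int(w * 1.5), int(h * 1.5)), Image.LANCZOS)  # simple 1.5x resize first

# The full prompt (both characters, loras included!) goes back in here,
# so the low-denoise pass reinforces them instead of drifting.
prompt = "masterpiece, best quality, 2girls, hugging, indoors, Foxy Fluffs, foxgirl, hatsune miku"

final = pipe(
    prompt=prompt,
    image=upscaled,
    strength=0.2,             # low denoise preserves the underlying image
    num_inference_steps=35,
).images[0]
final.save("foxy_meets_miku_final.png")
```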
Model Training: How Should I Label My Datasets?

I made a Lora for "fox diving into snow", and I need some help refining the labels in my dataset.

Based on existing advice, I collected around 40 images that reflect the concept; however, I didn't find much documentation on exactly how I should label those images, especially given the unique pose and subject of my lora. I also wanted the lora to extend beyond foxes and be able to influence other animal and humanoid subjects.

The problem I encountered with the first version of the lora is that the fox's head should be buried in the snow and not visible in the shot. Unfortunately, some or all of the head shows up in nearly every generation, even with tagging to try to remove the head entirely!

I could not find any articles that adequately explained how to handle this, so I turned to ChatGPT, and the following is the advice it gave. (I originally attached an image from National Geographic of a fox with its head in the snow, which I won't repost here as it's probably copyrighted.)

Example: Labelling the "Fox Diving into Snow" Image

The attached image captures a fox in the middle of a hunting dive, where its hind legs and tail are prominently visible but its head is completely buried in the snow. The absence of the fox's head is a key feature that must be labelled clearly. A precise label for this image could be:

fox, snow, fox diving, head buried, tail up, legs up, dynamic pose, winter

Why This Label Works:
- Subject: "fox" is the core focus of the image.
- Action: "fox diving" describes the key behavior.
- Pose-specific detail: "head buried" emphasizes that the fox's head is submerged in the snow and not visible; "tail up" and "legs up" highlight the upward positioning of the visible body parts.
- Environment: "snow" and "winter" provide the seasonal and environmental context.
- Pose type: "dynamic pose" further emphasizes that the fox is mid-action, not static.

By including "head buried" in the labels, you clarify a critical aspect of the image, which differentiates this pose from others (e.g., standing, jumping, or resting foxes).

Best Practices for Labelling "Fox Diving into Snow" Images
- Focus on visibility: Always label which parts of the fox are visible and which are hidden. In this case, "head buried" is essential, as it defines the pose's uniqueness. Examples: "head buried," "tail visible," "hind legs up."
- Describe the pose: Use consistent terms to describe body positioning. For diving foxes, terms like "tail up," "legs up," "dynamic pose," and "mid-dive" are clear and informative.
- Highlight environment: Include "snow" and "winter" in every label to maintain context for the dataset. If the snow has specific qualities (e.g., deep snow, fluffy texture), these can be included as optional descriptors.
- Avoid ambiguity: Avoid labels like "fox jumping" or "fox standing," which may not represent the pose accurately. The term "diving" specifically describes this action.
- Consistency is key: Always use the same phrasing for similar images. For example, label every diving pose consistently as: fox, snow, head buried, tail up, legs up, dynamic pose, winter
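To make the "same phrasing every time" advice concrete, here's a tiny sketch of how caption files are commonly laid out for kohya-style LoRA training: each image sits next to a .txt file of the same name containing the comma-separated labels. The folder name and "foxdive" trigger word below are made-up examples; double-check the layout your trainer actually expects.

```python
# Sketch: write one caption .txt per image in a kohya-style dataset folder.
# Folder name, trigger word and tags are illustrative; adjust to your trainer's conventions.
from pathlib import Path

dataset_dir = Path("dataset/fox_dive")            # hypothetical folder holding the ~40 images
base_tags = "foxdive, fox, snow, head buried, tail up, legs up, dynamic pose, winter"

for image_path in sorted(dataset_dir.glob("*.jpg")):
    caption_path = image_path.with_suffix(".txt")  # e.g. fox_001.jpg -> fox_001.txt
    # Same phrasing for every diving image; per-image extras (deep snow, back view) can be appended.
    caption_path.write_text(base_tags + "\n", encoding="utf-8")
    print(f"wrote {caption_path.name}")
```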
Building the Dataset: Other Images to Collect

To create a high-quality LoRA model for "fox diving into snow", you'll need a variety of images showcasing similar and related concepts:
- Fox diving variants: different angles of foxes mid-dive (side view, back view, partial body shots); varying depths of snow where the head is buried (shallow, deep snow).
- Dynamic and static comparisons: mid-action poses (legs stretched, snow flying); after-action poses where the fox is emerging or resting in the snow.
- Visibility details: images where only tails or legs are visible above the snow.
- Environmental diversity: snow-covered fields, forests, or backdrops with varying lighting (e.g., overcast, sunrise, or sunset).
- Unique attributes: snow texture (fluffy, compacted) and disturbances caused by the fox's dive, such as kicked-up snow.

ChatGPT's Summary

For a "fox diving into snow" LoRA, labelling must emphasize the pose's unique feature: the head being buried and not visible. Consistent use of labels like "head buried," "tail up," "legs up," and "dynamic pose" ensures the AI accurately learns this concept. By collecting images with diverse angles, environments, and snow conditions, you create a robust dataset capable of generating realistic and dynamic images of foxes diving into snow.

Foxy's Summary

So there you have it. This is what I was able to get out of ChatGPT. The advice seems pretty sound, and I plan to apply it when retraining my "Fox Dive" lora (probably after we're done with the Christmas Walkthrough Event). For now, I share it with you all in the hope that more experienced users can confirm the accuracy of this information, and for other users like me who are still trying to get a grip on how best to label our datasets.

And now my head is feeling pretty frazzled by all this LoRA talk, so I think I'm gonna go outside and stick it in the snow to cool off…
An Honest Guide for the Complete Beginner to Get Started with Making an AI Tool in TensorArt

Let's be honest: to an AI Tool newcomer, ComfyUI is a ridiculously complicated pile of spaghetti and boxes, with confusing terms like "nodes" and "loaders" and a loooong list of potential tools that make the eyes bleed to scroll through. So I'm not bothering with all that nonsense (at least at first), and neither should you.

Let's be clear: the current state of AI tools and art makes it stupidly easy to "cheat" at whatever task you are trying to accomplish, whether that's writing code, making art, or writing an article about AI Tools (ahem). Admittedly, that was the first thing I tried when attempting to fulfil this task for the Christmas Walkthrough event. But after reading the overly complicated instructions on how to "get started with ComfyUI" that ChatGPT spat out, I realized what it was I really wanted to say, tossed that out, and wrote this myself.

As a relative newcomer to AI art, I've relied heavily on "borrowing" prompts, lora and parameters from other, more advanced/successful users, especially when starting off to generate the images I wanted to create. I felt a bit guilty for "pirating" other people's prompts, but over time I began to read deeper into the tools I was "borrowing", and I started to get a real idea of how the underlying models were trained and how to actually get the most out of them. It took a couple of months of messing around with image generation (read: making cute images of Pokémon and fox girls, and other stuff that should never see the light of day) before I began to seriously study and experiment with those tools to figure out what really made them work. I had a lot of fun doing that, and that is what really helped me build the confidence to go deeper.

The same methodology applies to AI Tools generated with ComfyFlow. So here is how to get started with that on TensorArt, beginning with some good old fashioned piracy borrowing! (Arrrr)

1) Find an existing AI Tool that does what you want (or close to it).
2) Navigate to the profile page of the user who created that tool.
3) Check their Workflows page to see if they published the workflow. If not, start over and find the next best tool that does what you want.
4) If the workflow is available, run it and save a copy under your Workflows.
5) Start experimenting! Make small tweaks to begin with, apply what you already know from online image generation, and don't be scared to break things and start over. If the tool you chose ends up being too complicated to wrap your head around right now, go back to step 1 and find a simpler tool, one with fewer nodes. Keep going till you start to get the hang of how it actually works!
6) Publish your first workflow as soon as you feel you've begun to understand the basics. It doesn't have to be anything too distinct from the original tool you began with, just enough to show you took some risks and tried something a little different from the original. This is your version 1, which you will use to make your first AI Tool.
7) Go back into your published workflow and follow the recently updated AI Tool guide. It's really straightforward and simple to do. I recommend sticking to just converting your text prompt to an AI input and publishing after testing. Make sure you enable pop-ups if clicking the Publish button doesn't do anything (I learned that the hard way, too!).

Good luck!
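One last demystifying aside: under the hood, a ComfyUI workflow is just a set of named boxes wired together. Below is a rough sketch (written as a Python dict) of what a few of those "nodes" look like in a workflow exported with ComfyUI's API format. The field names and values here are from memory and may vary by version, and the checkpoint filename is made up, so treat it as illustrative rather than gospel.

```python
# Illustrative only: roughly what a few ComfyUI "nodes" look like in an API-format export.
workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",        # a "loader" node: loads the model checkpoint
        "inputs": {"ckpt_name": "novaAnimeXL.safetensors"},  # hypothetical filename
    },
    "6": {
        "class_type": "CLIPTextEncode",                # turns your positive prompt into conditioning
        "inputs": {"text": "1girl, foxgirl, snow", "clip": ["4", 1]},  # ["4", 1] = wire from node 4
    },
    "3": {
        "class_type": "KSampler",                      # the box that actually generates the image
        "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                   "latent_image": ["5", 0],           # nodes 5 and 7 omitted from this sketch
                   "seed": 42, "steps": 15, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0},
    },
}
print(f"{len(workflow)} nodes, each just a class_type plus some wired-up inputs")
```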