# Prompt Master
This ComfyUI workflow, "Prompt Master," provides a versatile set of tools for crafting and refining prompts for the Flux model. It offers multiple input methods, allowing you to tailor your prompts to achieve the desired results.
## Features
This workflow includes the following prompting methods:
* Img2Prompt with JoyCaption2 LLM: Generate detailed and descriptive prompts directly from images using the powerful JoyCaption2 Large Language Model. This is ideal for quickly capturing the essence of an image and using it as a starting point for your Flux generations.
* Manual Prompt Input: For precise control, you can directly input your prompts. This allows for fine-tuning and experimentation with specific keywords and phrases.
* LLM Polished Prompt: Leverage an LLM to refine and enhance your manually created or Img2Prompt-generated prompts. This helps improve clarity, grammar, and overall effectiveness, leading to better results from the Flux model. Polishing is handled by the Searge_LLM_Node using one of the Llama 3.1 8B models listed under "LLM Models" below.
* Wildcard Prompts: Introduce variability and surprise into your generations using wildcard placeholders, processed by the ImpactWildcardProcessor node. This feature allows you to explore a wider range of possibilities with minimal effort (see the example below).
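
As a rough illustration (the wildcard names here are hypothetical, not part of the workflow), the ImpactWildcardProcessor typically accepts inline `{option|option}` choices as well as file-based `__name__` wildcards:

```text
portrait of a woman with __hair_color__ hair, wearing a {red|blue|emerald green} dress,
__lighting__, highly detailed
```

Here `__hair_color__` and `__lighting__` would be wildcard text files in your wildcards folder; each run substitutes one line from the matching file, while `{red|blue|emerald green}` picks one of the listed options at random.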
## Usage
1. Installation: Ensure you have ComfyUI installed and running, along with the custom nodes listed in the "Custom Nodes" section below (see the installation example after this list).
2. Workflow Loading: Download the `prompt_master.json` workflow file and load it into ComfyUI.
3. Prompt Input: Choose your preferred prompting method (only one can be active at a time):
* Img2Prompt: Connect the image you want to use to the Img2Prompt input node.
* Manual Prompt: Enter your prompt text into the designated input box.
* LLM Polished Prompt: Connect your manually created or Img2Prompt prompt to the LLM Polished Prompt input node.
* Wildcard Prompts: Enter a prompt containing wildcard placeholders into the ImpactWildcardProcessor node (support is currently initial).
4. Flux Model Integration: Connect the output of your chosen prompt method to the appropriate input of your Flux model nodes.
5. Generation: Run the workflow to generate images based on your prompts.
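
If any of the nodes listed below are missing from your install, one common approach (an assumption, not something bundled with this workflow) is to install ComfyUI-Manager and let it resolve them:

```bash
# Clone ComfyUI-Manager into your custom_nodes folder (adjust the path to your install)
$ cd ComfyUI/custom_nodes
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```

After restarting ComfyUI, open the Manager menu and use "Install Missing Custom Nodes" with this workflow loaded.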
## Custom Nodes
* AddLabel
* Anything Everywhere
* Bookmark (rgthree)
* DisplayText
* ExtraOptionsNode
* FaceDetailer
* Fast Groups Bypasser (rgthree)
* Image Comparer (rgthree)
* Image Filter
* ImpactWildcardProcessor
* InjectLatentNoise+
* JWInteger
* JoinStrings
* JoyCaption2_simple
* LF_RegexReplace
* OverrideVAEDevice
* Power Lora Loader (rgthree)
* ProjectFilePathNode
* Prompts Everywhere
* SAMLoader
* SaveText|pysssss
* Searge_LLM_Node
* Seed (rgthree)
* Seed Everywhere
* String
* String Replace (mtb)
* Switch any [Crystools]
* Text Concatenate
* Text Input [Dream]
* Text to Conditioning
* UltimateSDUpscale
* UltralyticsDetectorProvider
* UnetLoaderGGUF
* easy boolean
* easy showAnything
## LLM Models
* unsloth/Meta-Llama-3.1-8B-Instruct
```bash
$ huggingface-cli download unsloth/Meta-Llama-3.1-8B-Instruct --local-dir ComfyUI/models/LLM/unsloth--Meta-Llama-3.1-8B-Instruct --exclude "*.git*"
```
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
```bash
$ huggingface-cli download Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 --local-dir ComfyUI/models/LLM/Orenguteng--Llama-3.1-8B-Lexi-Uncensored-V2 --exclude "*.git*"
```
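
The download commands above assume `huggingface-cli` is available on your PATH; if it is not, it ships with the `huggingface_hub` Python package:

```bash
$ pip install -U "huggingface_hub[cli]"
```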