# Prompt Master
This ComfyUI workflow, "Prompt Master," provides a versatile set of tools for crafting and refining prompts for the Flux model. It offers multiple input methods, allowing you to tailor your prompts to achieve the desired results.
## Features
This workflow includes the following prompting methods:
* Img2Prompt with JoyCaption2: Generate detailed, descriptive prompts directly from images using the JoyCaption2 captioning model. This is ideal for quickly capturing the essence of an image and using it as a starting point for your Flux generations.
* Manual Prompt Input: For precise control, you can directly input your prompts. This allows for fine-tuning and experimentation with specific keywords and phrases.
* LLM Polished Prompt: Leverage an LLM to refine and enhance your manually created or Img2Prompt-generated prompts. This improves clarity, grammar, and overall effectiveness, leading to better results from the Flux model; the models used are listed in the LLM Models section below.
* Wildcard Prompts: Introduce variability and surprise into your generations using wildcard placeholders, processed by the ImpactWildcardProcessor node from the Impact Pack. This lets you explore a wider range of possibilities with minimal effort.
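ImpactWildcardProcessor (listed under Custom Nodes below) supports the common dynamic-prompts style of wildcard syntax: `{red|blue|green}` picks one option per run, and `__colors__` draws a random line from a wildcard text file. A minimal Python sketch of the `{…|…}` expansion, for illustration only (not the node's actual implementation):

```python
import random
import re

def expand_choices(prompt: str, rng: random.Random) -> str:
    """Resolve {a|b|c} groups by picking one option each, innermost first."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while pattern.search(prompt):
        prompt = pattern.sub(lambda m: rng.choice(m.group(1).split("|")), prompt)
    return prompt

# Each run with a different seed yields a different combination.
print(expand_choices("a {red|blue|green} car, {photo|oil painting}", random.Random()))
```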
## Usage
1. Installation: Ensure you have ComfyUI installed and running, then install the custom nodes listed in the Custom Nodes section below, either via ComfyUI-Manager's "Install Missing Custom Nodes" feature or by cloning each linked repository into `ComfyUI/custom_nodes/`.
2. Workflow Loading: Download `prompt_master.json` and load it into ComfyUI (drag the file onto the canvas, or use the Load button).
3. Prompt Input: Choose your preferred prompting method (only one can be active at a time):
* Img2Prompt: Connect the image you want to use to the Img2Prompt input node.
* Manual Prompt: Enter your prompt text into the designated input box.
* LLM Polished Prompt: Connect your manually created or Img2Prompt prompt to the LLM Polished Prompt input node.
* Wildcard Prompts: Enter a prompt containing wildcard placeholders into the ImpactWildcardProcessor node (initial support).
4. Flux Model Integration: Connect the output of your chosen prompt method to the appropriate input of your Flux model nodes.
5. Generation: Run the workflow to generate images based on your prompts.
## Custom Nodes
* AddLabel / https://github.com/kijai/ComfyUI-KJNodes
* Anything Everywhere / https://github.com/chrisgoringe/cg-use-everywhere
* Bookmark (rgthree) / https://github.com/rgthree/rgthree-comfy
* DisplayText / https://github.com/IuvenisSapiens/ComfyUI_MiniCPM-V-2_6-int4
* ExtraOptionsNode / https://github.com/TTPlanetPig/Comfyui_JC2
* FaceDetailer / https://github.com/ltdrdata/ComfyUI-Impact-Pack
* Fast Groups Bypasser (rgthree) / https://github.com/rgthree/rgthree-comfy
* Image Comparer (rgthree) / https://github.com/rgthree/rgthree-comfy
* Image Filter / https://github.com/chrisgoringe/cg-image-filter
* ImpactWildcardProcessor / https://github.com/ltdrdata/ComfyUI-Impact-Pack
* InjectLatentNoise+ / https://github.com/cubiq/ComfyUI_essentials
* JWInteger / https://github.com/jamesWalker55/comfyui-various
* JoinStrings / https://github.com/kijai/ComfyUI-KJNodes
* JoyCaption2_simple / https://github.com/TTPlanetPig/Comfyui_JC2
* LF_RegexReplace / https://github.com/lucafoscili/comfyui-lf
* OverrideVAEDevice / https://github.com/city96/ComfyUI_ExtraModels
* Power Lora Loader (rgthree) / https://github.com/rgthree/rgthree-comfy
* ProjectFilePathNode / https://github.com/MushroomFleet/DJZ-Nodes
* Prompts Everywhere / https://github.com/chrisgoringe/cg-use-everywhere
* SAMLoader / https://github.com/ltdrdata/ComfyUI-Impact-Pack
* SaveText|pysssss / https://github.com/pythongosssss/ComfyUI-Custom-Scripts
* Searge_LLM_Node / https://github.com/SeargeDP/ComfyUI_Searge_LLM
* Seed (rgthree) / https://github.com/rgthree/rgthree-comfy
* Seed Everywhere / https://github.com/chrisgoringe/cg-use-everywhere
* String / https://github.com/M1kep/ComfyLiterals
* String Replace (mtb) / https://github.com/melMass/comfy_mtb
* Switch any [Crystools] / https://github.com/crystian/ComfyUI-Crystools
* Text Concatenate / https://github.com/WASasquatch/was-node-suite-comfyui
* Text Input [Dream] / https://github.com/alt-key-project/comfyui-dream-project
* Text to Conditioning / https://github.com/WASasquatch/was-node-suite-comfyui
* UltimateSDUpscale / https://github.com/ssitu/ComfyUI_UltimateSDUpscale
* UltralyticsDetectorProvider / https://github.com/ltdrdata/ComfyUI-Impact-Subpack
* UnetLoaderGGUF / https://github.com/city96/ComfyUI-GGUF
* easy boolean / https://github.com/yolain/ComfyUI-Easy-Use
* easy showAnything / https://github.com/yolain/ComfyUI-Easy-Use
## LLM Models
* unsloth/Meta-Llama-3.1-8B-Instruct / https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct
```bash
$ huggingface-cli download unsloth/Meta-Llama-3.1-8B-Instruct --local-dir ComfyUI/models/LLM/unsloth--Meta-Llama-3.1-8B-Instruct --exclude "*.git*" "README.md"
```
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 / https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
```bash
$ huggingface-cli download Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 --local-dir ComfyUI/models/LLM/Orenguteng--Llama-3.1-8B-Lexi-Uncensored-V2 --exclude "*.git*" "README.md"
```
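Alternatively, the same models can be fetched programmatically with the `huggingface_hub` Python API; this sketch targets the same directories as the commands above:

```python
from huggingface_hub import snapshot_download

for repo_id in (
    "unsloth/Meta-Llama-3.1-8B-Instruct",
    "Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
):
    snapshot_download(
        repo_id=repo_id,
        # "owner--name" directory layout, matching the CLI commands above
        local_dir=f"ComfyUI/models/LLM/{repo_id.replace('/', '--')}",
        ignore_patterns=["*.git*", "README.md"],  # skip repo metadata, as above
    )
```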