# Comfy Image Generator
A STDIO server using the FastMCP framework to generate images based on prompts via a remote Comfy server.
This script sets up a server using the FastMCP framework to generate images based on prompts using a specified workflow. It interacts with a remote Comfy server to submit prompts and retrieve generated images.
The workflow file `Flux-Dev-ComfyUI-Workflow.json` is included only as a reference. You will need to export the API JSON from your own workflow and set the environment variables accordingly.

You can install the required packages for local development:

```shell
uvx mcp[cli]
```
Set the following environment variables:

- `COMFY_URL` to point to your Comfy server URL.
- `COMFY_WORKFLOW_JSON_FILE` to point to the absolute path of the API export JSON file for the ComfyUI workflow.
- `PROMPT_NODE_ID` to the id of the text prompt node.
- `OUTPUT_NODE_ID` to the id of the output node with the final image.
- `OUTPUT_MODE` to either `url` or `file` to select the desired output.

Optionally, if you have an Ollama server running, you can connect to it for prompt generation. Set:

- `OLLAMA_API_BASE` to the URL where Ollama is running.
- `PROMPT_LLM` to the name of the model hosted on Ollama for prompt generation.

Example:

```shell
export COMFY_URL=http://your-comfy-server-url:port
export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
export PROMPT_NODE_ID=6  # use the correct node id here
export OUTPUT_NODE_ID=9  # use the correct node id here
export OUTPUT_MODE=file
```
Comfy MCP Server can be launched with the following command:

```shell
uvx comfy-mcp-server
```

To add it to an MCP client configuration:

```json
{
  "mcpServers": {
    "Comfy MCP Server": {
      "command": "/path/to/uvx",
      "args": ["comfy-mcp-server"],
      "env": {
        "COMFY_URL": "http://your-comfy-server-url:port",
        "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
        "PROMPT_NODE_ID": "6",
        "OUTPUT_NODE_ID": "9",
        "OUTPUT_MODE": "file"
      }
    }
  }
}
```
`generate_image(prompt: str, ctx: Context) -> Image | str`

This function generates an image using the specified prompt. It follows these steps:

1. Submits the prompt to the remote Comfy server using the configured workflow.
2. Polls the server until the workflow has finished.
3. Retrieves the generated image and returns it as a file or a URL, depending on `OUTPUT_MODE`.
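The submit-and-poll flow above can be sketched roughly as follows. This is an illustration only, not the server's actual code: the `/history/{id}` endpoint is part of the standard ComfyUI HTTP API, but the helper names (`patch_workflow`, `wait_for_result`) and the polling parameters are assumptions.

```python
import json
import os
import time
import urllib.request

COMFY_URL = os.environ.get("COMFY_URL", "http://localhost:8188")

def patch_workflow(workflow: dict, prompt_node_id: str, prompt: str) -> dict:
    """Copy the exported workflow and inject the prompt into the text node."""
    wf = json.loads(json.dumps(workflow))  # cheap deep copy
    wf[prompt_node_id]["inputs"]["text"] = prompt
    return wf

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def wait_for_result(prompt_id: str, fetch=fetch_json,
                    interval: float = 1.0, max_tries: int = 120) -> dict:
    """Poll the Comfy /history endpoint until the prompt's result appears."""
    for _ in range(max_tries):
        history = fetch(f"{COMFY_URL}/history/{prompt_id}")
        if prompt_id in history:
            return history[prompt_id]
        time.sleep(interval)
    raise TimeoutError(f"prompt {prompt_id} did not finish in time")
```

Submitting is a POST of the patched workflow to the Comfy server, which returns an id that `wait_for_result` can then poll; the output node's entry in the returned history holds the final image reference.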
`generate_prompt(topic: str, ctx: Context) -> str`

This function generates a comprehensive image generation prompt from the specified topic.
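A minimal sketch of how such a chain might be wired with `langchain` and `langchain-ollama`. The template wording is a placeholder and the function body is an assumption, not the server's actual implementation:

```python
import os

# Hypothetical template; the server's real wording may differ.
PROMPT_TEMPLATE = (
    "You write prompts for image generation models. "
    "Write one detailed image generation prompt for this topic: {topic}"
)

def generate_prompt(topic: str) -> str:
    """Expand a topic into a full prompt via an Ollama-hosted LLM."""
    # Imported lazily so the Ollama dependency is only needed when called.
    from langchain_core.prompts import PromptTemplate
    from langchain_ollama import OllamaLLM

    llm = OllamaLLM(
        model=os.environ["PROMPT_LLM"],
        base_url=os.environ["OLLAMA_API_BASE"],
    )
    chain = PromptTemplate.from_template(PROMPT_TEMPLATE) | llm
    return chain.invoke({"topic": topic})
```

The chain simply pipes the filled-in template into the model named by `PROMPT_LLM`, so swapping models only requires changing the environment variable.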
- `mcp`: For setting up the FastMCP server.
- `json`: For handling JSON data.
- `urllib`: For making HTTP requests.
- `time`: For adding delays in polling.
- `os`: For accessing environment variables.
- `langchain`: For creating a simple LLM prompt chain to generate an image generation prompt from a topic.
- `langchain-ollama`: For Ollama-specific modules for LangChain.

This project is licensed under the MIT License - see the LICENSE file for details.