
Workflow Orchestration
Declarative workflow orchestration server for AI agents using YAML-based workflows.
Empower your AI agents with a powerful, declarative workflow engine.
An MCP (Model Context Protocol) server that allows a Large Language Model (LLM) to discover, understand, and execute complex, multi-step workflows defined in simple YAML files.
Built on the cyanheads/mcp-ts-template, this server follows a modular architecture with robust error handling, logging, and security features.
This server equips your AI with specialized tools to interact with the workflow engine:
Tool Name | Description | Key Features |
---|---|---|
workflow_return_list | Discovers and lists available workflows. | `category`: filter by a specific category.<br>`tags`: filter by a list of tags.<br>`includeTools`: optionally include the list of tools used in each workflow. |
workflow_get_instructions | Retrieves the complete definition for a single workflow. | `name`: the exact name of the workflow.<br>`version`: the specific version to retrieve (defaults to latest).<br>Dynamically injects global instructions for consistent execution. |
workflow_create_new | Creates a new, permanent workflow YAML file. | Takes a structured JSON object matching the workflow schema.<br>Automatically categorizes and re-indexes workflows. |
workflow_create_temporary | Creates a temporary workflow that is not listed but can be called by name. | Ideal for defining multi-step plans for complex tasks.<br>Can be passed to other agents by name. |
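
To make the table concrete, a definition passed to `workflow_create_new` might look like the sketch below. The field names (`steps`, `tool`, `instructions`) and values are illustrative assumptions drawn from the tool descriptions above, not the server's exact schema:

```yaml
# Hypothetical workflow definition -- field names are illustrative,
# not the server's authoritative schema.
name: summarize-and-file
version: 1.0.0
category: documentation
tags: [summarization, filing]
description: Summarize a source document and save the result.
steps:
  - tool: read_file
    instructions: Read the source document into context.
  - tool: write_file
    instructions: Write the generated summary to the target path.
```

Once created, such a workflow could then be retrieved by name via `workflow_get_instructions`.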
The Workflow MCP Server acts as a powerful orchestration layer that helps your LLM agents manage complex workflows. This provides a structured way to perform 'short' multi-step tasks that would otherwise require hard-coded logic or extensive manual intervention.
It's as easy as telling your LLM "Use the workflows-mcp-server to create a new workflow that does X, Y, and Z, using the tools you currently have access to" or "Find me a workflow that can help with task A". The server handles the rest, allowing your agents to focus on higher-level reasoning and decision-making. Temporary workflows let your LLM agent "collect its thoughts" and create a structured, ephemeral plan; even the act of defining a workflow can help the agent clarify its own understanding of the task at hand and improve tool-use performance.
Instead of hard-coding multi-step logic, your agents can leverage this server to discover, plan, and execute multi-step workflows declaratively.
Developer Note: This repository includes a .clinerules file that serves as a developer cheat sheet for your LLM coding agent, with quick references to codebase patterns, file locations, and code snippets.
Leverages the robust utilities provided by the mcp-ts-template:

- Structured, consistent error handling (`McpError`).
- Environment-based configuration (`dotenv`) with comprehensive validation using Zod.
- `zod` for schema validation.

Add the following to your MCP client's configuration file (e.g., `cline_mcp_settings.json`). This configuration uses `npx` to run the server, which will automatically install the package if not already present:
```json
{
  "mcpServers": {
    "workflows-mcp-server": {
      "command": "npx",
      "args": ["workflows-mcp-server"],
      "env": {
        "MCP_LOG_LEVEL": "info"
      }
    }
  }
}
```
```bash
npm install workflows-mcp-server
```
1. Clone the repository:

   ```bash
   git clone https://github.com/cyanheads/workflows-mcp-server.git
   cd workflows-mcp-server
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Build the project:

   ```bash
   npm run build
   ```
Configure the server using environment variables. These can be set in a `.env` file or directly in your shell.
Variable | Description | Default |
---|---|---|
`MCP_TRANSPORT_TYPE` | Transport mechanism: `stdio` or `http`. | `stdio` |
`MCP_HTTP_PORT` | Port for the HTTP server (if `MCP_TRANSPORT_TYPE=http`). | `3010` |
`MCP_HTTP_HOST` | Host address for the HTTP server (if `MCP_TRANSPORT_TYPE=http`). | `127.0.0.1` |
`MCP_ALLOWED_ORIGINS` | Comma-separated list of allowed origins for CORS (if `MCP_TRANSPORT_TYPE=http`). | (none) |
`MCP_LOG_LEVEL` | Logging level (`debug`, `info`, `warning`, `error`). | `debug` |
`MCP_AUTH_MODE` | Authentication mode for HTTP: `jwt` or `oauth`. | `jwt` |
`MCP_AUTH_SECRET_KEY` | Required for `jwt` auth in production. Minimum 32-character secret key. | (none) |
`NODE_ENV` | Runtime environment (`development`, `production`). | `development` |
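
Putting the table together, a minimal `.env` sketch for running over the HTTP transport might look like the following. All values are illustrative, and the secret key must be replaced with your own 32+ character value:

```ini
# Illustrative .env for HTTP transport -- adjust values for your deployment.
MCP_TRANSPORT_TYPE=http
MCP_HTTP_PORT=3010
MCP_HTTP_HOST=127.0.0.1
MCP_ALLOWED_ORIGINS=http://localhost:5173
MCP_LOG_LEVEL=info
MCP_AUTH_MODE=jwt
MCP_AUTH_SECRET_KEY=replace-with-a-32-plus-character-secret-key
NODE_ENV=production
```

With `MCP_TRANSPORT_TYPE=stdio` (the default), the HTTP-specific variables are not needed.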
The codebase follows a modular structure within the `src/` directory:
```
src/
├── index.ts          # Entry point: Initializes and starts the server
├── config/           # Configuration loading (env vars, package info)
│   └── index.ts
├── mcp-server/       # Core MCP server logic and capability registration
│   ├── server.ts     # Server setup, capability registration
│   ├── transports/   # Transport handling (stdio, http)
│   └── tools/        # MCP Tool implementations (subdirs per tool)
├── services/         # External service integrations
│   └── workflow-indexer/  # Discovers and indexes workflow YAML files
├── types-global/     # Shared TypeScript type definitions
└── utils/            # Common utility functions (logger, error handler, etc.)
```
For a detailed file tree, run `npm run tree` or see `docs/tree.md`.
The server provides a suite of tools for managing and executing workflows.
Tool Name | Description | Key Arguments |
---|---|---|
`workflow_return_list` | Lists available workflows. | `category?`, `tags?`, `includeTools?` |
`workflow_get_instructions` | Retrieves a workflow definition. | `name`, `version?` |
`workflow_create_new` | Creates a new, permanent workflow. | A structured JSON object. |
`workflow_create_temporary` | Creates a temporary, unlisted workflow. | A structured JSON object. |
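
For example, a client invoking `workflow_return_list` might pass arguments like the sketch below. The values are illustrative, not taken from a real index:

```json
{
  "category": "documentation",
  "tags": ["summarization"],
  "includeTools": true
}
```

All three arguments are optional; omitting them lists every indexed workflow.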
```bash
# Build the project (compile TS to JS in dist/ and make executable)
npm run build

# Test the server locally using the MCP inspector tool (stdio transport)
npm run inspector

# Clean build artifacts
npm run clean

# Generate a file tree representation for documentation
npm run tree

# Clean build artifacts and then rebuild the project
npm run rebuild

# Format code with Prettier
npm run format

# Start the server using stdio (default)
npm start

# Start the server using HTTP transport
npm run start:http
```
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.