Claude Prompts
Universal MCP server for managing and hot-reloading AI prompts across clients.

A Universal Model Context Protocol Server for Advanced Prompt Management
Production-ready MCP server with intelligent prompt orchestration, hot-reload capabilities, and framework-driven AI workflows. Compatible with Claude Desktop, Cursor, Windsurf, and any MCP client.
Quick Start • Features • Documentation • Configuration
Three-Tier Execution Architecture
Intelligent System Components
Universal MCP Compatibility
Manage AI capabilities directly within conversations through three consolidated MCP tools:
```bash
# Universal prompt execution with intelligent type detection
prompt_engine >>code_formatter language="Python" style="PEP8"

# Create and manage prompts with intelligent analysis
prompt_manager create name="code_reviewer" type="template" \
  content="Analyze {{code}} for security, performance, and maintainability"

# Analyze existing prompts for execution optimization
prompt_manager analyze_type prompt_id="my_prompt"

# System control and framework management
system_control switch_framework framework="ReACT" reason="Problem-solving focus"

# Execute complex multi-step LLM-driven chains
prompt_engine >>code_review_optimization_chain target_code="..." language_framework="TypeScript"
```
Key Benefits:
The server implements a sophisticated methodology system that applies structured thinking frameworks to AI interactions:
```bash
# Switch methodology for different approaches
system_control switch_framework framework="ReACT" reason="Problem-solving focus"

# Monitor framework performance and usage
system_control analytics show_details=true

# Get current framework status
system_control status
```
Result: Structured, systematic AI conversations through proven thinking methodologies.
Core Analysis Functions:
Detects template variables ({{variable}}), chain steps, and file structure.

Optional Semantic Enhancement:
Framework Control:
```bash
# Manual framework selection
system_control switch_framework framework="ReACT" reason="Problem-solving focus"
```
Get the server running in under a minute:
```bash
# Clone, install, build, and start
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start
```
Drop this into your claude_desktop_config.json:
{ "mcpServers": { "claude-prompts-mcp": { "command": "node", "args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"], "env": { "MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\prompts\\promptsConfig.json" } } } }
Configure your MCP client to connect via STDIO transport:
node["path/to/claude-prompts-mcp/server/dist/index.js"]MCP_PROMPTS_CONFIG_PATH=path/to/prompts/promptsConfig.jsonFor Claude Code CLI users:
```bash
claude mcp add-json claude-prompts-mcp '{"type":"stdio","command":"node","args":["path/to/claude-prompts-mcp/server/dist/index.js"],"env":{}}'
```
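If you want to confirm the registration took effect, the Claude Code CLI can list the MCP servers it knows about:

```bash
# Verify that claude-prompts-mcp appears among the configured servers
claude mcp list
```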
Start using the server immediately:
```bash
# Discover prompts with intelligent filtering
prompt_manager list filter="category:analysis"

# Execute prompts with automatic type detection
prompt_engine >>friendly_greeting name="Developer"

# Analyze content with framework enhancement
prompt_engine >>content_analysis input="research data"

# Run multi-step LLM-driven chains
prompt_engine >>code_review_optimization_chain target_code="..." language_framework="TypeScript"

# Monitor system performance
system_control analytics include_history=true

# Create new prompts conversationally
"Create a prompt called 'bug_analyzer' that finds and explains code issues"

# Refine existing prompts
"Make the bug_analyzer prompt also suggest performance improvements"

# Reference existing chain prompts
# Example chains available: code_review_optimization_chain, create_docs_chain, video_notes_enhanced

# Manual control when needed
prompt_engine >>content_analysis input="sensitive data" step_confirmation=true gate_validation=true
```
The system provides a structured approach to prompt management through systematic methodology application.
The orchestration engine monitors files and reloads everything seamlessly:
```
# Edit any prompt file → Server detects → Reloads automatically → Zero downtime
```
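As a concrete sketch of that loop (the file below is one of the example prompts bundled with this repository; the explicit reload call is optional, since edits are picked up automatically):

```bash
# Edit any bundled prompt template
vim server/prompts/development/code_review_optimization_chain.md

# The running server detects the change and reloads it with zero downtime.
# To force a reload from an MCP client session instead:
system_control reload reason="updated templates"
```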
Nunjucks-powered dynamic prompts with full templating capabilities:
```
Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.

{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}

{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
```
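As a sketch of how such a template might be invoked (the prompt name `content_analysis` mirrors the examples elsewhere in this README, and passing arguments under the template's own variable names is an assumption, not documented behavior):

```bash
# Hypothetical invocation: argument names match the template variables above
prompt_engine >>content_analysis content="release notes draft" focus_area="security"
```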
Multi-phase startup with comprehensive health monitoring:
```
Phase 1: Foundation    → Config, logging, core services
Phase 2: Data Loading  → Prompts, categories, validation
Phase 3: Module Init   → Tools, executors, managers
Phase 4: Server Launch → Transport, API, diagnostics
```
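The four phases map onto the module flags reported in the health check shown later in this README; a minimal sketch of verifying a clean start, assuming the `system_control status` action described above also surfaces that health information:

```bash
# Build and launch the server
npm run build && npm start

# From an MCP client session, check overall status
# (assumption: this reports the foundation/dataLoaded/modulesInitialized/serverRunning flags)
system_control status
```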
Multi-step workflows implemented as LLM-driven instruction sequences embedded in markdown templates:
Chains are defined directly in markdown files using step headers that guide Claude through a systematic process:
```markdown
# Comprehensive Code Review

## User Message Template
**Target Code**: {{target_code}}
**Language/Framework**: {{language_framework}}

This chain performs a systematic 6-step code review:

## Step 1: Structure & Organization Analysis
Analyze code architecture, organization, patterns, naming conventions...
**Output Required**: Structural assessment with identified patterns.

---

## Step 2: Functionality & Logic Review
Examine business logic correctness, edge cases, error handling...
**Output Required**: Logic validation with edge case analysis.

---

## Step 3: Security & Best Practices Audit
Review for security vulnerabilities: input validation, authentication...
**Output Required**: Security assessment with vulnerability identification.
```
Key Features:
For chains that orchestrate multiple reusable prompts, use the markdown-embedded format:
```markdown
## Chain Steps

1. promptId: extract_key_points
   stepName: Extract Key Points
   inputMapping:
     content: original_content
   outputMapping:
     key_points: extracted_points

2. promptId: sentiment_analysis
   stepName: Analyze Sentiment
   inputMapping:
     text: extracted_points
   outputMapping:
     sentiment: analysis_result
```
Capabilities:
Real Examples: See working chain implementations:
- `server/prompts/development/code_review_optimization_chain.md` - 6-step code review workflow
- `server/prompts/documentation/create_docs_chain.md` - Documentation generation chain
- `server/prompts/content_processing/video_notes_enhanced.md` - Video analysis workflow

```mermaid
graph TB
    A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
    B --> C[🧠 Orchestration Engine]
    C --> D[📝 Prompt Manager]
    C --> E[🛠️ MCP Tools Manager]
    C --> F[⚙️ Config Manager]
    D --> G[🎨 Template Engine]
    E --> H[🔧 Management Tools]
    F --> I[🔥 Hot Reload System]
    style C fill:#ff6b35
    style D fill:#00ff88
    style E fill:#0066cc
```
This server implements the Model Context Protocol (MCP) standard and works with any compatible client:
Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.
Server Configuration (`config.json`)

Fine-tune your server's behavior:
{ "server": { "name": "Claude Custom Prompts MCP Server", "version": "1.0.0", "port": 9090 }, "prompts": { "file": "promptsConfig.json", "registrationMode": "name" }, "transports": { "default": "stdio", "sse": { "enabled": false }, "stdio": { "enabled": true } } }
Prompt Organization (`promptsConfig.json`)

Structure your AI command library:
{ "categories": [ { "id": "development", "name": "🔧 Development", "description": "Code review, debugging, and development workflows" }, { "id": "analysis", "name": "📊 Analysis", "description": "Content analysis and research prompts" }, { "id": "creative", "name": "🎨 Creative", "description": "Content creation and creative writing" } ], "imports": [ "prompts/development/prompts.json", "prompts/analysis/prompts.json", "prompts/creative/prompts.json" ] }
Build sophisticated AI workflows using two complementary approaches:
The primary method uses step-by-step instructions embedded directly in markdown templates. Claude executes each step systematically while maintaining context:
```markdown
# Research Analysis Chain

## User Message Template
Research {{topic}} and provide {{analysis_type}} analysis.

## Step 1: Initial Research
Gather comprehensive information about {{topic}} from available sources.
Focus on {{analysis_type}} perspectives.
**Output Required**: Research summary with key findings.

---

## Step 2: Extract Key Insights
From the research findings, identify the most relevant insights.
Organize by importance and relevance to {{analysis_type}} analysis.
**Output Required**: Prioritized list of insights with supporting evidence.

---

## Step 3: Analyze Patterns
Examine the extracted insights for patterns, trends, and relationships.
Consider multiple analytical frameworks.
**Output Required**: Pattern analysis with framework comparisons.

---

## Step 4: Synthesize Report
Create comprehensive report combining all findings.
**Output Required**: Structured report with executive summary.
```
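Running a chain like this uses the same `prompt_engine` call as any single prompt; a sketch, where `research_analysis_chain` is a hypothetical ID for the template above:

```bash
# Hypothetical chain ID; topic and analysis_type are the template's variables
prompt_engine >>research_analysis_chain topic="edge computing" analysis_type="technical"
```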
Benefits:
For orchestrating multiple reusable prompts with explicit data flow:
```markdown
## Chain Steps

1. promptId: research_topic
   stepName: Initial Research
   inputMapping:
     query: topic
   outputMapping:
     findings: research_data

2. promptId: extract_insights
   stepName: Extract Key Insights
   inputMapping:
     content: research_data
   outputMapping:
     insights: key_findings

3. promptId: analyze_patterns
   stepName: Analyze Patterns
   inputMapping:
     data: key_findings
   outputMapping:
     analysis: pattern_results

4. promptId: generate_report
   stepName: Synthesize Report
   inputMapping:
     insights: key_findings
     patterns: pattern_results
   outputMapping:
     report: final_output
```
Capabilities:
Real Examples: View production chain implementations:
- `server/prompts/content_processing/noteIntegration.md` - Note processing with modular steps
- See the `## Chain Steps` section in any chain prompt for complete syntax

Leverage the full power of Nunjucks templating:
```
# {{ title | title }} Analysis

## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}

## Requirements
{% for req in requirements %}
{{ loop.index }}. **{{ req.priority | upper }}**: {{ req.description }}
   {% if req.examples %}
   Examples: {% for ex in req.examples %}{{ ex }}{% if not loop.last %}, {% endif %}{% endfor %}
   {% endif %}
{% endfor %}

## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}
```
Template Features:
Manage your prompts dynamically while the server runs:
```bash
# Update prompts with intelligent re-analysis
prompt_manager update id="analysis_prompt" content="new template"

# Modify specific sections with validation
prompt_manager modify id="research" section="examples" content="new examples"

# Hot-reload with comprehensive validation
system_control reload reason="updated templates"
```
Management Capabilities:
Enterprise-grade observability:
Built-in monitoring and diagnostics for production environments:
```javascript
// Health Check Response
{
  healthy: true,
  modules: {
    foundation: true,
    dataLoaded: true,
    modulesInitialized: true,
    serverRunning: true
  },
  performance: {
    uptime: 86400,
    memoryUsage: { rss: 45.2, heapUsed: 23.1 },
    promptsLoaded: 127,
    categoriesLoaded: 8
  }
}
```
Monitoring Features:
| Guide | Description | 
|---|---|
| Installation Guide | Complete setup walkthrough with troubleshooting | 
| Troubleshooting Guide | Common issues, diagnostic tools, and solutions | 
| Architecture Overview | Deep dive into the orchestration engine, modules, and data flow | 
| Prompt Format Guide | Master prompt creation including chain workflows | 
| Prompt Management | Dynamic management and hot-reload features | 
| MCP Tools Reference | Complete MCP tools documentation | 
| Roadmap & TODO | Planned features and development roadmap | 
| Contributing | Join our development community | 
Join our development community:
Released under the MIT License - see the LICENSE file for details.
Star this repo if it's improving your AI workflow
Report Bug • Request Feature • View Docs
Built for the AI development community