Claude Prompts MCP Server



🚀 The Universal Model Context Protocol Server for Any MCP Client

Supercharge your AI workflows with battle-tested prompt engineering, intelligent orchestration, and lightning-fast hot-reload capabilities. Works seamlessly with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.

⚡ Quick Start · 🎯 Features · 📚 Docs · 🛠️ Advanced


🌟 What Makes This Special?

  • 🔥 Intelligent Hot-Reload System → Update prompts instantly without restarts
  • 🎯 Advanced Template Engine → Nunjucks-powered with conditionals, loops, and dynamic data
  • ⚡ Multi-Phase Orchestration → Robust startup sequence with comprehensive health monitoring
  • 🚀 Universal MCP Compatibility → Works flawlessly with Claude Desktop, Cursor, Windsurf, and any MCP client
  • 🧠 Prompt Chain Workflows → Build complex multi-step AI processes
  • 📊 Real-Time Diagnostics → Performance metrics and health validation built-in

Transform your AI assistant experience from scattered prompts to a powerful, organized command library that works across any MCP-compatible platform.

🚀 Revolutionary Interactive Prompt Management

🎯 The Future is Here: Manage Your AI's Capabilities FROM WITHIN the AI Conversation

This isn't just another prompt server – it's a living, breathing prompt ecosystem that evolves through natural conversation with your AI assistant. Imagine being able to:

```
# 🗣️ Create new prompts by talking to your AI
"Hey Claude, create a new prompt called 'code_reviewer' that analyzes code for security issues"
→ Claude creates, tests, and registers the prompt instantly

# ✏️ Refine prompts through conversation
"That code reviewer prompt needs to also check for performance issues"
→ Claude modifies the prompt and hot-reloads it immediately

# 🔍 Discover and iterate on your prompt library
>>listprompts
→ Browse your growing collection, then ask: "Improve the research_assistant prompt to be more thorough"
```

🌟 Why This Changes Everything:

  • 🔄 Self-Evolving System: Your AI assistant literally builds and improves its own capabilities in real-time
  • 🎮 No Context Switching: Never leave your AI conversation to manage prompts – everything happens inline
  • 🧠 Collaborative Intelligence: You and your AI work together to craft the perfect prompt library
  • ⚡ Instant Gratification: Create → Test → Refine → Deploy in seconds, not minutes
  • 🌱 Organic Growth: Your prompt library naturally evolves based on your actual usage patterns

This is what conversational AI infrastructure looks like – where the boundary between using AI and building AI capabilities disappears entirely.

⚡ Features & Reliability

🎯 Developer Experience

  • 🔥 One-Command Installation in under 60 seconds
  • Hot-Reload Everything → prompts, configs, templates
  • 🎨 Rich Template Engine → conditionals, loops, data injection
  • 🚀 Universal MCP Integration → works with Claude Desktop, Cursor, Windsurf, and any MCP client
  • 📱 Multi-Transport Support → STDIO for Claude Desktop + SSE/REST for web
  • 🛠️ Dynamic Management Tools → update, delete, reload prompts on-the-fly

🚀 Enterprise Architecture

  • 🏗️ Orchestration → phased startup with dependency management
  • 🔧 Robust Error Handling → graceful degradation with comprehensive logging
  • 📊 Real-Time Health Monitoring → module status, performance metrics, diagnostics
  • 🎯 Smart Environment Detection → works across development and production contexts
  • ⚙️ Modular Plugin System → extensible architecture for custom workflows
  • 🔐 Production-Ready Security → input validation, sanitization, error boundaries

🛠️ Complete Interactive MCP Tools Suite

  • 🎮 Process Slash Commands → /prompt_name syntax for instant prompt execution
  • 📋 List Prompts → /listprompts to discover all available commands with usage examples
  • ✏️ Update Prompts → Modify existing prompts through conversation with full validation and hot-reload
  • 🗑️ Delete Prompts → Remove prompts by asking your AI assistant - automatic file cleanup included
  • 🔧 Modify Sections → "Edit the description of my research prompt" → Done instantly
  • 🔄 Reload System → Force refresh through chat - no terminal access needed
  • ⚙️ Smart Argument Parsing → JSON objects, single arguments, or fallback to {{previous_message}} (see the examples after this list)
  • 🔗 Chain Execution → Multi-step workflow management with conversational guidance
  • 🎨 Conversational Creation → "Create a new prompt that..." → AI builds it for you interactively
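
For example, the same prompt could be invoked in any of these three styles; the prompt name and argument keys below are hypothetical and only illustrate the parsing behavior:

```
# Hypothetical prompt "content_analysis" used purely for illustration

# 1. Structured arguments as a JSON object
>>content_analysis {"text": "Paste the content here", "focus": "clarity"}

# 2. A single bare argument mapped to the prompt's first parameter
>>content_analysis Paste the content here

# 3. No arguments at all: the template's {{previous_message}} placeholder
#    falls back to your preceding chat message
>>content_analysis
```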

🎯 One-Command Installation

Get your AI command center running in under a minute:

```bash
# Clone → Install → Launch → Profit! 🚀
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start
```
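
If your prompts configuration lives somewhere other than the default server/promptsConfig.json, you can point the server at it with the same MCP_PROMPTS_CONFIG_PATH variable used in the client configs below; a minimal sketch, with a placeholder path:

```bash
# Placeholder path: substitute the absolute path to your own promptsConfig.json
MCP_PROMPTS_CONFIG_PATH=/absolute/path/to/promptsConfig.json npm start
```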

🔌 Universal MCP Client Integration

Claude Desktop

Drop this into your claude_desktop_config.json:

{ "mcpServers": { "claude-prompts-mcp": { "command": "node", "args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"], "env": { "MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\promptsConfig.json" } } } }

Cursor, Windsurf & Other MCP Clients

Configure your MCP client to connect via STDIO transport:

  • Command: node
  • Args: ["path/to/claude-prompts-mcp/server/dist/index.js"]
  • Environment: MCP_PROMPTS_CONFIG_PATH=path/to/promptsConfig.json

💡 Pro Tip: Use absolute paths for bulletproof integration across all MCP clients!
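
As a concrete illustration, many MCP clients (Cursor and Windsurf among them) accept a JSON block shaped like the Claude Desktop example; the exact file name and location depend on the client, so treat this as a sketch rather than a verbatim config:

```json
{
  "mcpServers": {
    "claude-prompts-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/claude-prompts-mcp/server/dist/index.js"],
      "env": {
        "MCP_PROMPTS_CONFIG_PATH": "/absolute/path/to/claude-prompts-mcp/server/promptsConfig.json"
      }
    }
  }
}
```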

🎮 Start Building Immediately

Your AI command arsenal is ready, and it grows through conversation:

```
# Discover your new superpowers
>>listprompts

# Execute lightning-fast prompts
>>friendly_greeting name="Developer"

# 🚀 NEW: Create prompts by talking to your AI
"Create a prompt called 'bug_analyzer' that helps me debug code issues systematically"
→ Your AI creates, tests, and registers the prompt instantly!

# 🔄 Refine prompts through conversation
"Make the bug_analyzer prompt also suggest performance improvements"
→ Prompt updated and hot-reloaded automatically

# Handle complex scenarios with JSON
>>research_prompt {"topic": "AI trends", "depth": "comprehensive", "format": "executive summary"}

# 🧠 Build your custom AI toolkit naturally
"I need a prompt for writing technical documentation"
"The bug_analyzer needs to also check for security issues"
"Create a prompt chain that reviews code, tests it, then documents it"
```

🌟 The Magic: Your prompt library becomes a living extension of your workflow, growing and adapting as you work with your AI assistant.

🔥 Why Developers Choose This Server

⚡ Lightning-Fast Hot-Reload → Edit prompts, see changes instantly

Our sophisticated orchestration engine monitors your files and reloads everything seamlessly:

```
# Edit any prompt file → Server detects → Reloads automatically → Zero downtime
```

  • Instant Updates: Change templates, arguments, descriptions in real-time
  • Zero Restart Required: Advanced hot-reload system keeps everything running
  • Smart Dependency Tracking: Only reloads what actually changed
  • Graceful Error Recovery: Invalid changes don't crash the server

🎨 Next-Gen Template Engine → Nunjucks-powered dynamic prompts

Go beyond simple text replacement with a full template engine:

```
Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.

{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}

{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
```

  • Conditional Logic: Smart prompts that adapt based on input
  • Loops & Iteration: Handle arrays and complex data structures
  • Template Inheritance: Reuse and extend prompt patterns
  • Real-Time Processing: Templates render with live data injection
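
To make the rendering behavior concrete, here is roughly what the template above produces for sample arguments (the values are invented for illustration): content = "the Q3 report", focus_area = "cost trends", requirements = ["accuracy", "brevity"], and no previous_context:

```
Analyze the Q3 report for cost trends insights.
- Consider: accuracy
- Consider: brevity
```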

🏗️ Enterprise-Grade Orchestration → Multi-phase startup with health monitoring

Built like production software with comprehensive architecture:

```
Phase 1: Foundation    → Config, logging, core services
Phase 2: Data Loading  → Prompts, categories, validation
Phase 3: Module Init   → Tools, executors, managers
Phase 4: Server Launch → Transport, API, diagnostics
```

  • Dependency Management: Modules start in correct order with validation
  • Health Monitoring: Real-time status of all components
  • Performance Metrics: Memory usage, uptime, connection tracking
  • Diagnostic Tools: Built-in troubleshooting and debugging

🔄 Intelligent Prompt Chains → Multi-step AI workflows

Create sophisticated workflows where each step builds on the previous:

{ "id": "content_analysis_chain", "name": "Content Analysis Chain", "isChain": true, "chainSteps": [ { "stepName": "Extract Key Points", "promptId": "extract_key_points", "inputMapping": { "content": "original_content" }, "outputMapping": { "key_points": "extracted_points" } }, { "stepName": "Analyze Sentiment", "promptId": "sentiment_analysis", "inputMapping": { "text": "extracted_points" }, "outputMapping": { "sentiment": "analysis_result" } } ] }
  • Visual Step Planning: See your workflow before execution
  • Input/Output Mapping: Data flows seamlessly between steps
  • Error Recovery: Failed steps don't crash the entire chain
  • Flexible Execution: Run chains or individual steps as needed
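
Assuming the chain above is registered, kicking it off could look like the following; the argument key mirrors the original_content mapping in the first step, and the exact invocation syntax may differ in your client:

```
>>content_analysis_chain {"original_content": "Paste the article or transcript to analyze here"}
```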

📊 System Architecture

```mermaid
graph TB
    A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
    B --> C[🧠 Orchestration Engine]
    C --> D[📝 Prompt Manager]
    C --> E[🛠️ MCP Tools Manager]
    C --> F[⚙️ Config Manager]
    D --> G[🎨 Template Engine]
    E --> H[🔧 Management Tools]
    F --> I[🔥 Hot Reload System]
    style C fill:#ff6b35
    style D fill:#00ff88
    style E fill:#0066cc
```

🌐 MCP Client Compatibility

This server implements the Model Context Protocol (MCP) standard and works with any compatible client:

✅ Tested & Verified

  • 🎯 Claude Desktop → Full integration support
  • 🚀 Cursor & Windsurf → Native MCP compatibility

🔌 Transport Support

  • 📡 STDIO → Primary transport for desktop clients
  • 🌐 Server-Sent Events (SSE) → Web-based clients and integrations
  • 🔗 HTTP Endpoints → Basic endpoints for health checks and data queries

🎯 Integration Features

  • 🔄 Auto-Discovery → Clients detect tools automatically
  • 📋 Tool Registration → Dynamic capability announcement
  • Hot Reload → Changes appear instantly in clients
  • 🛠️ Error Handling → Graceful degradation across clients

💡 Developer Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.

🛠️ Advanced Configuration

⚙️ Server Powerhouse (config.json)

Fine-tune your server's behavior:

{ "server": { "name": "Claude Custom Prompts MCP Server", "version": "1.0.0", "port": 9090 }, "prompts": { "file": "promptsConfig.json", "registrationMode": "name" }, "transports": { "default": "stdio", "sse": { "enabled": false }, "stdio": { "enabled": true } } }

🗂️ Prompt Organization (promptsConfig.json)

Structure your AI command library:

{ "categories": [ { "id": "development", "name": "🔧 Development", "description": "Code review, debugging, and development workflows" }, { "id": "analysis", "name": "📊 Analysis", "description": "Content analysis and research prompts" }, { "id": "creative", "name": "🎨 Creative", "description": "Content creation and creative writing" } ], "imports": [ "prompts/development/prompts.json", "prompts/analysis/prompts.json", "prompts/creative/prompts.json" ] }

🚀 Advanced Features

🔄 Multi-Step Prompt Chains → Build sophisticated AI workflows

Create complex workflows that chain multiple prompts together:

```markdown
# Research Analysis Chain

## User Message Template
Research {{topic}} and provide {{analysis_type}} analysis.

## Chain Configuration
Steps: research → extract → analyze → summarize
Input Mapping: {topic} → {content} → {key_points} → {insights}
Output Format: Structured report with executive summary
```

Capabilities:

  • Sequential Processing: Each step uses output from previous step
  • Parallel Execution: Run multiple analysis streams simultaneously
  • Error Recovery: Graceful handling of failed steps
  • Custom Logic: Conditional branching based on intermediate results

🎨 Advanced Template Features → Dynamic, intelligent prompts

Leverage the full power of Nunjucks templating:

```
# {{ title | title }} Analysis

## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}

## Requirements
{% for req in requirements %}
{{ loop.index }}. **{{ req.priority | upper }}**: {{ req.description }}
{% if req.examples %}
   Examples: {% for ex in req.examples %}{{ ex }}{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}
{% endfor %}

## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}
```

Template Features:

  • Filters & Functions: Transform data on-the-fly
  • Conditional Logic: Smart branching based on input
  • Loops & Iteration: Handle complex data structures
  • Template Inheritance: Build reusable prompt components (see the sketch below)
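
As an illustration of the inheritance item above, standard Nunjucks {% extends %} / {% block %} tags let a base template expose sections that specialized prompts override; the template names are hypothetical, and whether the server's loader resolves relative template paths this way is an assumption:

```
{# base_analysis.njk: shared skeleton (hypothetical file name) #}
# {{ title }} Analysis
{% block instructions %}Provide a general analysis of {{ content }}.{% endblock %}

{# security_review.njk: overrides just the instructions block #}
{% extends "base_analysis.njk" %}
{% block instructions %}Audit {{ content }} for security vulnerabilities and unsafe patterns.{% endblock %}
```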

🔧 Real-Time Management Tools → Hot management without downtime

Manage your prompts dynamically while the server runs:

```
# Update prompts on-the-fly
>>update_prompt id="analysis_prompt" content="new template"

# Add new sections dynamically
>>modify_prompt_section id="research" section="examples" content="new examples"

# Hot-reload everything
>>reload_prompts reason="updated templates"
```

Management Capabilities:

  • Live Updates: Change prompts without server restart
  • Section Editing: Modify specific parts of prompts
  • Bulk Operations: Update multiple prompts at once
  • Rollback Support: Undo changes when things go wrong

📊 Production Monitoring → Enterprise-grade observability

Built-in monitoring and diagnostics for production environments:

```javascript
// Health Check Response
{
  healthy: true,
  modules: {
    foundation: true,
    dataLoaded: true,
    modulesInitialized: true,
    serverRunning: true
  },
  performance: {
    uptime: 86400,
    memoryUsage: { rss: 45.2, heapUsed: 23.1 },
    promptsLoaded: 127,
    categoriesLoaded: 8
  }
}
```

Monitoring Features:

  • Real-Time Health Checks: All modules continuously monitored
  • Performance Metrics: Memory, uptime, connection tracking
  • Diagnostic Tools: Comprehensive troubleshooting information
  • Error Tracking: Graceful error handling with detailed logging
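
When the SSE/HTTP transport is enabled, this data could also be polled from a shell; the route below is a placeholder since the exact endpoint path is not listed here (see the MCP Tools Reference), while port 9090 matches the config.json example above:

```bash
# Placeholder route: substitute the actual health endpoint exposed by your server
curl -s http://localhost:9090/health | jq '.healthy, .performance.uptime'
```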

📚 Documentation Hub

| Guide | Description |
|-------|-------------|
| 📥 Installation Guide | Complete setup walkthrough with troubleshooting |
| 🛠️ Troubleshooting Guide | Common issues, diagnostic tools, and solutions |
| 🏗️ Architecture Overview | A deep dive into the orchestration engine, modules, and data flow |
| 📝 Prompt Format Guide | Master prompt creation with examples |
| 🔗 Chain Execution Guide | Build complex multi-step workflows |
| ⚙️ Prompt Management | Dynamic management and hot-reload features |
| 🚀 MCP Tools Reference | Complete MCP tools documentation |
| 🗺️ Roadmap & TODO | Planned features and development roadmap |
| 🤝 Contributing | Join our development community |

🤝 Contributing

We're building the future of AI prompt engineering! Join our community.

📄 License

Released under the MIT License - see the LICENSE file for details.


⭐ Star this repo if it's transforming your AI workflow!

Report Bug · Request Feature · View Docs

Built with ❤️ for the AI development community
