Claude Custom Prompts
STDIO | A custom prompt template management service for Claude models
🚀 The Universal Model Context Protocol Server for Any MCP Client
Supercharge your AI workflows with battle-tested prompt engineering, intelligent orchestration, and lightning-fast hot-reload capabilities. Works seamlessly with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
⚡ Quick Start • 🎯 Features • 📚 Docs • 🛠️ Advanced
Transform your AI assistant experience from scattered prompts to a powerful, organized command library that works across any MCP-compatible platform.
🎯 The Future is Here: Manage Your AI's Capabilities FROM WITHIN the AI Conversation
This isn't just another prompt server – it's a living, breathing prompt ecosystem that evolves through natural conversation with your AI assistant. Imagine being able to:
```
# 🗣️ Create new prompts by talking to your AI
"Hey Claude, create a new prompt called 'code_reviewer' that analyzes code for security issues"
→ Claude creates, tests, and registers the prompt instantly

# ✏️ Refine prompts through conversation
"That code reviewer prompt needs to also check for performance issues"
→ Claude modifies the prompt and hot-reloads it immediately

# 🔍 Discover and iterate on your prompt library
>>listprompts
→ Browse your growing collection, then ask: "Improve the research_assistant prompt to be more thorough"
```
🌟 Why This Changes Everything:
This is what conversational AI infrastructure looks like – where the boundary between using AI and building AI capabilities disappears entirely.
- **🎯 Developer Experience**
- **🚀 Enterprise Architecture**
- **🛠️ Complete Interactive MCP Tools Suite**
Get your AI command center running in under a minute:
```bash
# Clone → Install → Launch → Profit! 🚀
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start
```
Drop this into your `claude_desktop_config.json`:
{ "mcpServers": { "claude-prompts-mcp": { "command": "node", "args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"], "env": { "MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\promptsConfig.json" } } } }
Configure your MCP client to connect via STDIO transport:
- **Command:** `node`
- **Arguments:** `["path/to/claude-prompts-mcp/server/dist/index.js"]`
- **Environment:** `MCP_PROMPTS_CONFIG_PATH=path/to/promptsConfig.json`
💡 Pro Tip: Use absolute paths for bulletproof integration across all MCP clients!
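If you are wiring up your own MCP client or an integration test rather than an off-the-shelf app, a minimal connection sketch using the official TypeScript SDK might look like the following (assuming `@modelcontextprotocol/sdk` is installed; the client name and paths are placeholders, not values this project prescribes):

```typescript
// Minimal sketch: connect to the server over STDIO with the MCP TypeScript SDK.
// Assumes `@modelcontextprotocol/sdk` is installed; paths here are placeholders.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["path/to/claude-prompts-mcp/server/dist/index.js"],
    env: { MCP_PROMPTS_CONFIG_PATH: "path/to/promptsConfig.json" },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // List the tools the server exposes (listprompts, update_prompt, ...).
  console.log(await client.listTools());
}

main().catch(console.error);
```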
Your AI command arsenal is ready, and it grows through conversation:
```
# Discover your new superpowers
>>listprompts

# Execute lightning-fast prompts
>>friendly_greeting name="Developer"

# 🚀 NEW: Create prompts by talking to your AI
"Create a prompt called 'bug_analyzer' that helps me debug code issues systematically"
→ Your AI creates, tests, and registers the prompt instantly!

# 🔄 Refine prompts through conversation
"Make the bug_analyzer prompt also suggest performance improvements"
→ Prompt updated and hot-reloaded automatically

# Handle complex scenarios with JSON
>>research_prompt {"topic": "AI trends", "depth": "comprehensive", "format": "executive summary"}

# 🧠 Build your custom AI toolkit naturally
"I need a prompt for writing technical documentation"
→ "The bug_analyzer needs to also check for security issues"
→ "Create a prompt chain that reviews code, tests it, then documents it"
```
🌟 The Magic: Your prompt library becomes a living extension of your workflow, growing and adapting as you work with your AI assistant.
Our sophisticated orchestration engine monitors your files and reloads everything seamlessly:
```
# Edit any prompt file → Server detects → Reloads automatically → Zero downtime
```
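You never have to touch the reload machinery yourself, but as a mental model, a watcher-driven reload loop can be sketched in a few lines. This is an illustration using `chokidar`, not the server's actual implementation, and `reloadPrompts()` is a hypothetical stand-in:

```typescript
// Illustrative only: approximating hot-reload with a file watcher.
import chokidar from "chokidar";

async function reloadPrompts(): Promise<void> {
  // Re-read promptsConfig.json and every imported prompts file,
  // then re-register the affected prompts without restarting the process.
}

chokidar
  .watch("prompts/**/*.{md,json}", { ignoreInitial: true })
  .on("all", async (event, path) => {
    console.log(`${event}: ${path}, reloading prompt registry`);
    await reloadPrompts();
  });
```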
Go beyond simple text replacement with a full template engine:
```nunjucks
Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.

{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}

{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
```
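These templates are standard Nunjucks syntax, so you can experiment with them outside the server as well. A minimal rendering sketch (assuming the `nunjucks` package, with made-up argument values):

```typescript
import nunjucks from "nunjucks";

const template = [
  "Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.",
  "{% for requirement in requirements %}- Consider: {{requirement}}",
  "{% endfor %}",
].join("\n");

// Prompt arguments become the template context at execution time.
console.log(
  nunjucks.renderString(template, {
    content: "the attached diff",
    focus_area: "security",
    requirements: ["input validation", "error handling"],
  })
);
```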
Built like production software with comprehensive architecture:
```
Phase 1: Foundation    → Config, logging, core services
Phase 2: Data Loading  → Prompts, categories, validation
Phase 3: Module Init   → Tools, executors, managers
Phase 4: Server Launch → Transport, API, diagnostics
```
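As a rough mental model (not the server's actual module layout), each phase gates the next, so a failure in configuration or data loading surfaces before any transport is opened. A sketch with hypothetical helpers:

```typescript
// Illustrative phased startup: every helper declared here is hypothetical.
declare function loadConfig(path: string): Promise<{ prompts: { file: string }; transports: { default: string } }>;
declare function loadPrompts(file: string): Promise<unknown[]>;
declare function registerTools(prompts: unknown[]): unknown;
declare function startTransport(kind: string, tools: unknown): Promise<void>;

async function launch(): Promise<void> {
  const config = await loadConfig("config.json");          // Phase 1: Foundation
  const prompts = await loadPrompts(config.prompts.file);  // Phase 2: Data Loading
  const tools = registerTools(prompts);                    // Phase 3: Module Init
  await startTransport(config.transports.default, tools);  // Phase 4: Server Launch
}
```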
Create sophisticated workflows where each step builds on the previous:
{ "id": "content_analysis_chain", "name": "Content Analysis Chain", "isChain": true, "chainSteps": [ { "stepName": "Extract Key Points", "promptId": "extract_key_points", "inputMapping": { "content": "original_content" }, "outputMapping": { "key_points": "extracted_points" } }, { "stepName": "Analyze Sentiment", "promptId": "sentiment_analysis", "inputMapping": { "text": "extracted_points" }, "outputMapping": { "sentiment": "analysis_result" } } ] }
```mermaid
graph TB
    A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
    B --> C[🧠 Orchestration Engine]
    C --> D[📝 Prompt Manager]
    C --> E[🛠️ MCP Tools Manager]
    C --> F[⚙️ Config Manager]
    D --> G[🎨 Template Engine]
    E --> H[🔧 Management Tools]
    F --> I[🔥 Hot Reload System]
    style C fill:#ff6b35
    style D fill:#00ff88
    style E fill:#0066cc
```
This server implements the Model Context Protocol (MCP) standard and works with any compatible client:
✅ Tested & Verified
|
🔌 Transport Support
|
🎯 Integration Features
|
💡 Developer Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.
Server Configuration (`config.json`)

Fine-tune your server's behavior:
{ "server": { "name": "Claude Custom Prompts MCP Server", "version": "1.0.0", "port": 9090 }, "prompts": { "file": "promptsConfig.json", "registrationMode": "name" }, "transports": { "default": "stdio", "sse": { "enabled": false }, "stdio": { "enabled": true } } }
Prompts Configuration (`promptsConfig.json`)

Structure your AI command library:
{ "categories": [ { "id": "development", "name": "🔧 Development", "description": "Code review, debugging, and development workflows" }, { "id": "analysis", "name": "📊 Analysis", "description": "Content analysis and research prompts" }, { "id": "creative", "name": "🎨 Creative", "description": "Content creation and creative writing" } ], "imports": [ "prompts/development/prompts.json", "prompts/analysis/prompts.json", "prompts/creative/prompts.json" ] }
Create complex workflows that chain multiple prompts together:
```markdown
# Research Analysis Chain

## User Message Template
Research {{topic}} and provide {{analysis_type}} analysis.

## Chain Configuration
Steps: research → extract → analyze → summarize
Input Mapping: {topic} → {content} → {key_points} → {insights}
Output Format: Structured report with executive summary
```
Capabilities:
Leverage the full power of Nunjucks templating:
```nunjucks
# {{ title | title }} Analysis

## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}

## Requirements
{% for req in requirements %}
{{loop.index}}. **{{req.priority | upper}}**: {{req.description}}
   {% if req.examples %}
   Examples: {% for ex in req.examples %}{{ex}}{% if not loop.last %}, {% endif %}{% endfor %}
   {% endif %}
{% endfor %}

## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}
```
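One caveat: `title`, `upper`, and `trim` are built-in Nunjucks filters, but `summary` is not, so a filter like it has to be registered on the templating environment. A minimal sketch in plain Nunjucks (whether and how this server exposes a hook for custom filters is not shown here; the filter body is purely illustrative):

```typescript
import nunjucks from "nunjucks";

const env = new nunjucks.Environment();

// Hypothetical 'summary' filter: keep only the first sentence, capped at 200 chars.
env.addFilter("summary", (text: string) => {
  const firstSentence = text.split(/(?<=[.!?])\s+/)[0] ?? "";
  return firstSentence.length > 200 ? `${firstSentence.slice(0, 200)}...` : firstSentence;
});

console.log(
  env.renderString("Building upon previous analysis: {{ notes | summary }}", {
    notes: "The codebase is mostly sound. Several hotspots need profiling.",
  })
);
```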
Template Features:
Manage your prompts dynamically while the server runs:
```
# Update prompts on-the-fly
>>update_prompt id="analysis_prompt" content="new template"

# Add new sections dynamically
>>modify_prompt_section id="research" section="examples" content="new examples"

# Hot-reload everything
>>reload_prompts reason="updated templates"
```
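These `>>` commands are ordinary MCP tool calls, so they can also be issued programmatically from any connected client. A hedged sketch with the TypeScript SDK, reusing the connection from the Quick Start sketch (argument names mirror the commands above; check the MCP Tools Reference for the authoritative schema):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` was connected as in the STDIO connection sketch earlier.
declare const client: Client;

async function refreshAnalysisPrompt() {
  await client.callTool({
    name: "update_prompt",
    arguments: { id: "analysis_prompt", content: "new template" },
  });

  await client.callTool({
    name: "reload_prompts",
    arguments: { reason: "updated templates" },
  });
}
```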
Management Capabilities:
Built-in monitoring and diagnostics for production environments:
```javascript
// Health Check Response
{
  healthy: true,
  modules: {
    foundation: true,
    dataLoaded: true,
    modulesInitialized: true,
    serverRunning: true
  },
  performance: {
    uptime: 86400,
    memoryUsage: { rss: 45.2, heapUsed: 23.1 },
    promptsLoaded: 127,
    categoriesLoaded: 8
  }
}
```
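For clients that consume this check programmatically, the shape of the example above can be captured as a TypeScript interface. It is derived directly from the sample response; the interface name and the unit comments are assumptions, not part of a published API:

```typescript
interface HealthCheckResponse {
  healthy: boolean;
  modules: {
    foundation: boolean;
    dataLoaded: boolean;
    modulesInitialized: boolean;
    serverRunning: boolean;
  };
  performance: {
    uptime: number;                                 // likely seconds
    memoryUsage: { rss: number; heapUsed: number }; // likely megabytes
    promptsLoaded: number;
    categoriesLoaded: number;
  };
}
```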
Monitoring Features:
| Guide | Description |
| --- | --- |
| 📥 Installation Guide | Complete setup walkthrough with troubleshooting |
| 🛠️ Troubleshooting Guide | Common issues, diagnostic tools, and solutions |
| 🏗️ Architecture Overview | A deep dive into the orchestration engine, modules, and data flow |
| 📝 Prompt Format Guide | Master prompt creation with examples |
| 🔗 Chain Execution Guide | Build complex multi-step workflows |
| ⚙️ Prompt Management | Dynamic management and hot-reload features |
| 🚀 MCP Tools Reference | Complete MCP tools documentation |
| 🗺️ Roadmap & TODO | Planned features and development roadmap |
| 🤝 Contributing | Join our development community |
We're building the future of AI prompt engineering! Join our community:
Released under the MIT License - see the LICENSE file for details.
⭐ Star this repo if it's transforming your AI workflow!
Report Bug • Request Feature • View Docs
Built with ❤️ for the AI development community