
# Rubber Duck
MCP server bridging multiple OpenAI-compatible LLMs for rubber duck debugging perspectives
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
```
   __
 <(o )___
  ( ._> /
   `---'   Quack! Ready to debug!
```
## Supported Providers

Any provider with an OpenAI-compatible API endpoint, including:

- OpenAI
- Google Gemini
- Groq
- Ollama (local models)
- Together AI
- Custom providers with an OpenAI-compatible endpoint
👉 Complete Claude Desktop setup instructions are below in the Claude Desktop Configuration section.
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies
npm install

# Build the project
npm run build

# Run the server
npm start
```
## Configuration

### Environment Variables

Create a `.env` file in the project root:
```bash
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o-mini            # Optional: defaults to gpt-4o-mini

# Google Gemini
GEMINI_API_KEY=...
GEMINI_DEFAULT_MODEL=gemini-2.5-flash       # Optional: defaults to gemini-2.5-flash

# Groq
GROQ_API_KEY=gsk_...
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile  # Optional: defaults to llama-3.3-70b-versatile

# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434/v1   # Optional
OLLAMA_DEFAULT_MODEL=llama3.2               # Optional: defaults to llama3.2

# Together AI
TOGETHER_API_KEY=...

# Custom Provider
CUSTOM_API_KEY=...
CUSTOM_BASE_URL=https://api.example.com/v1
CUSTOM_DEFAULT_MODEL=custom-model           # Optional: defaults to custom-model

# Global Settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info

# Optional: Custom Duck Nicknames (Have fun with these!)
OPENAI_NICKNAME="DUCK-4"                    # Optional: defaults to "GPT Duck"
GEMINI_NICKNAME="Duckmini"                  # Optional: defaults to "Gemini Duck"
GROQ_NICKNAME="Quackers"                    # Optional: defaults to "Groq Duck"
OLLAMA_NICKNAME="Local Quacker"             # Optional: defaults to "Local Duck"
CUSTOM_NICKNAME="My Special Duck"           # Optional: defaults to "Custom Duck"
```
**Note:** Duck nicknames are completely optional! If you don't set them, you'll get the charming defaults (GPT Duck, Gemini Duck, etc.). If you use a `config.json` file, those nicknames take priority over environment variables.
### JSON Configuration (Alternative)

Create a `config/config.json` file based on the example:
```bash
cp config/config.example.json config/config.json
# Edit config/config.json with your API keys and preferences
```
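The example file is the authoritative reference for the schema. Purely as an illustration of the idea (the field names below are assumptions inferred from the environment variables, not the real schema), a `config.json` might look something like:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "default_model": "gpt-4o-mini",
      "nickname": "DUCK-4"
    },
    "groq": {
      "api_key": "gsk_...",
      "default_model": "llama-3.3-70b-versatile",
      "nickname": "Quackers"
    }
  },
  "default_provider": "openai",
  "default_temperature": 0.7
}
```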
## Claude Desktop Configuration

This is the most common setup method for using MCP Rubber Duck with Claude Desktop.
First, ensure the project is built:
```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies and build
npm install
npm run build
```
Then edit your Claude Desktop config file:

- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
Add the MCP server configuration:
{ "mcpServers": { "rubber-duck": { "command": "node", "args": ["/absolute/path/to/mcp-rubber-duck/dist/index.js"], "env": { "OPENAI_API_KEY": "your-openai-api-key-here", "OPENAI_DEFAULT_MODEL": "gpt-4o-mini", "GEMINI_API_KEY": "your-gemini-api-key-here", "GEMINI_DEFAULT_MODEL": "gemini-2.5-flash", "DEFAULT_PROVIDER": "openai", "LOG_LEVEL": "info" } } } }
**Important:** Replace the placeholder API keys with your actual keys:

- `your-openai-api-key-here` → your OpenAI API key (starts with `sk-`)
- `your-gemini-api-key-here` → your Gemini API key from Google AI Studio

Restart Claude Desktop to load the new configuration. Once restarted, test these commands in Claude:
1. `Use the list_ducks tool with check_health: true` (should show each configured duck and its health status)
2. `Use the list_models tool`
3. `Use the ask_duck tool with prompt: "What is rubber duck debugging?", provider: "openai"`
4. `Use the compare_ducks tool with prompt: "Explain async/await in JavaScript"`
5. `Use the ask_duck tool with prompt: "Hello", provider: "openai", model: "gpt-4"`
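Behind the scenes, each of these natural-language requests becomes a standard MCP `tools/call` JSON-RPC message from Claude to the server. Step 3, for example, corresponds roughly to:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_duck",
    "arguments": {
      "prompt": "What is rubber duck debugging?",
      "provider": "openai"
    }
  }
}
```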
If the server fails to start:

- Run `ls -la dist/index.js` to confirm the project built successfully.
- Double-check the absolute path to `dist/index.js` in your Claude Desktop config.

If ducks show as unhealthy, verify your API keys and see Troubleshooting below.
## Available Tools

### ask_duck

Ask a single question to a specific LLM provider.
{ "prompt": "What is rubber duck debugging?", "provider": "openai", // Optional, uses default if not specified "temperature": 0.7 // Optional }
### chat_with_duck

Have a conversation with context maintained across messages.
{ "conversation_id": "debug-session-1", "message": "Can you help me debug this code?", "provider": "groq" // Optional, can switch providers mid-conversation }
### list_ducks

List all configured providers and their health status.
{ "check_health": true // Optional, performs fresh health check }
### list_models

List available models for LLM providers.
{ "provider": "openai", // Optional, lists all if not specified "fetch_latest": false // Optional, fetch latest from API vs cached }
### compare_ducks

Ask the same question to multiple providers simultaneously.
{ "prompt": "What's the best programming language?", "providers": ["openai", "groq", "ollama"] // Optional, uses all if not specified }
### duck_council

Get responses from all configured ducks - like a panel discussion!
{ "prompt": "How should I architect a microservices application?" }
## Usage Examples

```javascript
// Ask the default duck
await ask_duck({
  prompt: "Explain async/await in JavaScript"
});
```
```javascript
// Start a conversation
await chat_with_duck({
  conversation_id: "learning-session",
  message: "What is TypeScript?"
});

// Continue the conversation
await chat_with_duck({
  conversation_id: "learning-session",
  message: "How does it differ from JavaScript?"
});
```
```javascript
// Get different perspectives
await compare_ducks({
  prompt: "What's the best way to handle errors in Node.js?",
  providers: ["openai", "groq", "ollama"]
});
```
```javascript
// Convene the council for important decisions
await duck_council({
  prompt: "Should I use REST or GraphQL for my API?"
});
```
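Conceptually, `compare_ducks` and `duck_council` are a concurrent fan-out: the same prompt goes to every provider at once, and one failing duck shouldn't sink the rest. A sketch of that pattern, again with an illustrative `askProvider` helper rather than the server's real code:

```typescript
// Stand-in for a single provider call.
declare function askProvider(provider: string, prompt: string): Promise<string>;

async function duckCouncil(prompt: string, providers: string[]) {
  // Promise.allSettled collects every outcome instead of failing fast.
  const results = await Promise.allSettled(
    providers.map((p) => askProvider(p, prompt))
  );
  return results.map((r, i) => ({
    provider: providers[i],
    answer: r.status === "fulfilled" ? r.value : `unavailable (${r.reason})`,
  }));
}
```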
## Provider Setup

### Ollama (Local)

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# Ollama automatically provides an OpenAI-compatible endpoint at localhost:11434/v1
```
### Hosted Providers

Hosted providers only need an API key in your environment:

- Gemini: `GEMINI_API_KEY=...`
- Groq: `GROQ_API_KEY=gsk_...`
- Together AI: `TOGETHER_API_KEY=...`
### Custom Providers

To check if a provider is OpenAI-compatible:

1. Look for a `/v1/chat/completions` endpoint in their API docs.
2. Test it with curl:

```bash
curl -X POST "https://api.provider.com/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
## Development

```bash
# Run in development mode
npm run dev

# Run tests
npm test

# Lint the code
npm run lint

# Type-check
npm run typecheck
```
## Docker

```bash
# Build the image
docker build -t mcp-rubber-duck .

# Run with API keys passed as environment variables
docker run -it \
  -e OPENAI_API_KEY=sk-... \
  -e GROQ_API_KEY=gsk_... \
  mcp-rubber-duck
```
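If you already maintain a `.env` file (see Configuration above), Docker can load it wholesale instead of repeating each key:

```bash
docker run -it --env-file .env mcp-rubber-duck
```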
## Project Structure

```
mcp-rubber-duck/
├── src/
│   ├── server.ts     # MCP server implementation
│   ├── config/       # Configuration management
│   ├── providers/    # OpenAI client wrapper
│   ├── tools/        # MCP tool implementations
│   ├── services/     # Health, cache, conversations
│   └── utils/        # Logging, ASCII art
├── config/           # Configuration examples
└── tests/            # Test suites
```
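For orientation, `server.ts` registers each tool with the MCP TypeScript SDK. A hedged sketch of what registering one tool looks like with `@modelcontextprotocol/sdk` (the `askProvider` helper is hypothetical, and the real server wires up all six tools plus health checks and caching; SDK APIs also vary by version):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for the provider layer.
declare function askProvider(provider: string, prompt: string): Promise<string>;

const server = new McpServer({ name: "rubber-duck", version: "1.0.0" });

// Register one tool: a zod shape for the arguments plus an async handler.
server.tool(
  "ask_duck",
  { prompt: z.string(), provider: z.string().optional() },
  async ({ prompt, provider }) => ({
    content: [
      { type: "text", text: await askProvider(provider ?? "openai", prompt) },
    ],
  })
);

// Serve over stdio, matching the Claude Desktop configuration above.
await server.connect(new StdioServerTransport());
```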
## Troubleshooting

If ducks show as unhealthy:

- Run `list_ducks({ check_health: true })` to get a fresh health report.
- Adjust the `max_retries` and `timeout` settings.

## Contributing

Contributions are welcome! Please open an issue or submit a pull request.
## License

MIT License - see the LICENSE file for details.
🦆 Happy Debugging with your AI Duck Panel! 🦆