
Second Opinion
Get instant second opinions from multiple AI models within Claude conversations.
🎯 Get instant second opinions from 17 AI platforms and 800,000+ models
OpenAI • Gemini • Grok • Claude • HuggingFace • DeepSeek • OpenRouter • Mistral • Together AI • Cohere • Groq • Perplexity • Replicate • AI21 Labs • Stability AI • Fireworks AI • Anyscale
This MCP server allows Claude to consult other AI models for different perspectives, using five built-in personalities:

- `honest`
- `friend`
- `coach`
- `wise`
- `creative`
Core files:

- `client_manager.py`
- `ai_providers.py`
- `conversation_manager.py`
- `mcp_server.py`
- `main.py`
- `model_priority.json`
Access any of the 800,000+ models on HuggingFace Hub via their Inference API with improved reliability:

- `meta-llama/Llama-3.1-8B-Instruct` - Fast and reliable
- `meta-llama/Llama-3.1-70B-Instruct` - Powerful reasoning
- `mistralai/Mistral-7B-Instruct-v0.3` - Efficient French-developed model
- `Qwen/Qwen2.5-7B-Instruct` - Alibaba's latest model

Get opinions from DeepSeek's powerful reasoning models:

- `deepseek-chat` (DeepSeek-V3) - Fast and efficient
- `deepseek-reasoner` (DeepSeek-R1) - Advanced reasoning

Access xAI's thinking models with enhanced reasoning:

- `grok-4` - Latest flagship model
- `grok-3-thinking` - Step-by-step reasoning model (last gen)
- `grok-3-mini` - Lightweight thinking model with `reasoning_effort` control (last gen)

Start multi-AI discussions where models can see and respond to each other's input:
> "Start a group discussion about the future of AI with GPT-4.1, Claude-4, Mistral, and Perplexity"
📥 Clone the repository
```shell
git clone https://github.com/ProCreations-Official/second-opinion.git
cd second-opinion
```
⚙️ Install dependencies
```shell
pip install -r requirements.txt
```
🔑 Get API Keys
| Platform | Link | Highlights |
|---|---|---|
| OpenAI | platform.openai.com | ⭐ Popular |
| Gemini | aistudio.google.com | ⭐ Popular |
| Grok | x.ai | 🔥 Best benchmarks |
| Claude | anthropic.com | 🧠 Advanced |
| HuggingFace | huggingface.co | 🤗 800k+ Models |
| DeepSeek | deepseek.com | 🔬 Reasoning |
| OpenRouter | openrouter.ai | 🌍 200+ Models via One API |
| Mistral | console.mistral.ai | 🇫🇷 European, fast |
| Together AI | api.together.xyz | 🔗 200+ Models |
| Cohere | dashboard.cohere.com | 🏢 Enterprise |
| Groq | console.groq.com | ⚡ Ultra-Fast |
| Perplexity | perplexity.ai | 🔍 Web Search |
| Replicate | replicate.com | 🎭 Open Source |
| AI21 Labs | studio.ai21.com | 🧬 Jamba Models |
| Stability AI | platform.stability.ai | 🎨 StableLM |
| Fireworks AI | fireworks.ai | 🔥 Fast Inference |
| Anyscale | console.anyscale.com | 🚀 Ray Serving |
🔧 Choose Your Integration Method
Select the method that matches your Claude setup:
Add this to your Claude Desktop MCP configuration:
```json
{
  "mcpServers": {
    "second-opinion": {
      "command": "python3",
      "args": ["/path/to/your/main.py"],
      "env": {
        "OPENAI_API_KEY": "your_openai_key_here",
        "GEMINI_API_KEY": "your_gemini_key_here",
        "GROK_API_KEY": "your_grok_key_here",
        "CLAUDE_API_KEY": "your_claude_key_here",
        "HUGGINGFACE_API_KEY": "your_huggingface_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_key_here",
        "OPENROUTER_API_KEY": "your_openrouter_key_here",
        "MISTRAL_API_KEY": "your_mistral_key_here",
        "TOGETHER_API_KEY": "your_together_key_here",
        "COHERE_API_KEY": "your_cohere_key_here",
        "GROQ_FAST_API_KEY": "your_groq_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_key_here",
        "REPLICATE_API_TOKEN": "your_replicate_key_here",
        "AI21_API_KEY": "your_ai21_key_here",
        "STABILITY_API_KEY": "your_stability_key_here",
        "FIREWORKS_API_KEY": "your_fireworks_key_here",
        "ANYSCALE_API_KEY": "your_anyscale_key_here"
      }
    }
  }
}
```
💡 Note: You only need to add API keys for the services you want to use. Missing keys will simply disable those specific features.
🔄 Restart Claude Desktop after configuration.
First, ensure Claude Code CLI is installed globally:
npm install -g @anthropic-ai/claude-code
Use the `claude mcp add` command to add the Second Opinion server:
```shell
# Navigate to your second-opinion directory
cd /path/to/your/second-opinion

# Add the MCP server with environment variables (use -e for each API key)
claude mcp add second-opinion -s user \
  -e OPENAI_API_KEY=your_openai_key_here \
  -e GEMINI_API_KEY=your_gemini_key_here \
  -e GROK_API_KEY=your_grok_key_here \
  -e CLAUDE_API_KEY=your_claude_key_here \
  -e HUGGINGFACE_API_KEY=your_huggingface_key_here \
  -e DEEPSEEK_API_KEY=your_deepseek_key_here \
  -e OPENROUTER_API_KEY=your_openrouter_key_here \
  -e MISTRAL_API_KEY=your_mistral_key_here \
  -e TOGETHER_API_KEY=your_together_key_here \
  -e COHERE_API_KEY=your_cohere_key_here \
  -e GROQ_FAST_API_KEY=your_groq_key_here \
  -e PERPLEXITY_API_KEY=your_perplexity_key_here \
  -e REPLICATE_API_TOKEN=your_replicate_key_here \
  -e AI21_API_KEY=your_ai21_key_here \
  -e STABILITY_API_KEY=your_stability_key_here \
  -e FIREWORKS_API_KEY=your_fireworks_key_here \
  -e ANYSCALE_API_KEY=your_anyscale_key_here \
  -- /path/to/your/second-opinion/run.sh
```
💡 Quick Setup: You only need to include `-e` flags for the API keys you have. For example, if you only have OpenAI and Gemini keys:
```shell
claude mcp add second-opinion -s user \
  -e OPENAI_API_KEY=your_openai_key_here \
  -e GEMINI_API_KEY=your_gemini_key_here \
  -- /path/to/your/second-opinion/run.sh
```
Alternatively, you can manually add the server to your `.claude.json` file:
```json
{
  "mcpServers": {
    "second-opinion": {
      "type": "stdio",
      "command": "/path/to/your/second-opinion/run.sh",
      "env": {
        "OPENAI_API_KEY": "your_openai_key_here",
        "GEMINI_API_KEY": "your_gemini_key_here",
        "GROK_API_KEY": "your_grok_key_here",
        "CLAUDE_API_KEY": "your_claude_key_here",
        "HUGGINGFACE_API_KEY": "your_huggingface_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_key_here",
        "OPENROUTER_API_KEY": "your_openrouter_key_here",
        "MISTRAL_API_KEY": "your_mistral_key_here",
        "TOGETHER_API_KEY": "your_together_key_here",
        "COHERE_API_KEY": "your_cohere_key_here",
        "GROQ_FAST_API_KEY": "your_groq_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_key_here",
        "REPLICATE_API_TOKEN": "your_replicate_key_here",
        "AI21_API_KEY": "your_ai21_key_here",
        "STABILITY_API_KEY": "your_stability_key_here",
        "FIREWORKS_API_KEY": "your_fireworks_key_here",
        "ANYSCALE_API_KEY": "your_anyscale_key_here"
      }
    }
  }
}
```
| Feature | Benefit |
|---|---|
| 📦 Dependency Management | Automatically installs/updates requirements |
| 🛡️ Error Handling | Checks for python3 availability and required files |
| 🔄 Cross-platform | Works better than direct Python execution |
| ⚡ Reliability | Ensures consistent execution regardless of system |
Check that your MCP server is properly installed:
```shell
claude mcp list
```
You should see `second-opinion` in the list of available MCP servers.
🔑 Environment Variables: You only need to add API keys for the services you want to use. Missing keys will simply disable those specific AI platforms. The server will work with any combination of available API keys.
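The graceful degradation described above can be sketched in a few lines of Python. This is an illustration, not the server's actual code, and the `PROVIDER_ENV_KEYS` mapping shows only three of the seventeen platforms:

```python
import os

# Illustrative sketch (not the server's actual code): enable only the
# providers whose API key is present in the environment.
PROVIDER_ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "grok": "GROK_API_KEY",
}

def available_providers(environ=None):
    """Return the names of providers whose API key is set and non-empty."""
    environ = os.environ if environ is None else environ
    return [name for name, var in PROVIDER_ENV_KEYS.items() if environ.get(var)]

# With only an OpenAI key configured, only OpenAI is enabled:
print(available_providers({"OPENAI_API_KEY": "sk-example"}))  # ['openai']
```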
OpenAI

| Model | Description | Best For |
|---|---|---|
| o4-mini | Fast reasoning model | ⚡ Quick reasoning |
| gpt-4.1 | Latest flagship non-reasoning model | 🎯 General tasks |
| gpt-4o | Multimodal powerhouse | 🖼️ Vision + text |
| gpt-4o-mini | Lightweight GPT-4o | 💰 Cost-effective |
Google Gemini

| Model | Description | Best For |
|---|---|---|
| gemini-2.5-flash-lite-preview-06-17 | Lightweight and fast | ⚡ Quick responses |
| gemini-2.5-flash | Advanced reasoning and efficiency | 🧮 Complex analysis |
| gemini-2.5-pro | Most capable Gemini model | 🧠 Advanced tasks |
xAI Grok

| Model | Description | Best For |
|---|---|---|
| grok-4 | Latest flagship model | 🎯 Best overall |
| grok-3-thinking | Step-by-step reasoning | 🤔 Deep thinking (prefer grok-4) |
| grok-3-mini | Lightweight model | 💡 Quick insights |
Anthropic Claude

| Model | Description | Best For |
|---|---|---|
| claude-4-opus-20250522 | Most advanced Claude | 🧠 Complex reasoning |
| claude-4-sonnet-20250522 | Versatile general tasks | ⚖️ Balanced performance |
| claude-3-7-sonnet-20250224 | Stable and reliable | 🛡️ Production use |
| claude-3-5-sonnet-20241022 | Efficient, lighter model | 💨 Fast responses |
HuggingFace (800k+ Models)

Featured Models:

| Model | Description | Best For |
|---|---|---|
| meta-llama/Llama-3.1-8B-Instruct | Fast Meta model | ⚡ Speed |
| meta-llama/Llama-3.1-70B-Instruct | Powerful reasoning | 🧠 Complex tasks |
| mistralai/Mistral-7B-Instruct-v0.3 | French-developed | 🇫🇷 European AI |
| Qwen/Qwen2.5-7B-Instruct | Alibaba's latest | 🏢 Enterprise |

🌟 Special: Access to any model on HuggingFace Hub that supports text generation
DeepSeek (Reasoning)

| Model | Description | Best For |
|---|---|---|
| deepseek-chat | DeepSeek-V3 general tasks | 💬 Conversations |
| deepseek-reasoner | DeepSeek-R1 advanced reasoning | 🧠 Complex logic |
OpenRouter (200+ Models)

| Model | Description | Best For |
|---|---|---|
| anthropic/claude-3-5-sonnet-20241022 | OpenRouter access to Claude 3.5 Sonnet | 🎯 Balanced excellence |
| openai/gpt-4-turbo | OpenRouter access to GPT-4 Turbo | 🧠 Advanced reasoning |
| google/gemini-pro-1.5 | OpenRouter access to Gemini Pro 1.5 | 🔍 Long context |
| meta-llama/llama-3.1-405b-instruct | OpenRouter access to largest Llama | 🦣 Massive scale |
| mistralai/mistral-large | OpenRouter access to Mistral Large | 🇫🇷 European excellence |
| perplexity/llama-3.1-sonar-huge-128k-online | Web-connected via OpenRouter | 🌐 Current information |

🌟 Special: Access to 200+ models from multiple providers through a single OpenRouter API
Mistral AI (European)

| Model | Description | Best For |
|---|---|---|
| mistral-large-latest | Most powerful Mistral | 🎯 Top performance |
| mistral-small-latest | Fast and cost-effective | 💰 Budget-friendly |
| mistral-medium-latest | Balanced performance | ⚖️ General use |
| codestral-latest | Code generation specialist | 💻 Programming |
Together AI (200+ Models)

| Model | Description | Best For |
|---|---|---|
| meta-llama/Llama-3.1-8B-Instruct-Turbo | Fast Llama turbo | ⚡ Speed |
| meta-llama/Llama-3.1-70B-Instruct-Turbo | Powerful Llama turbo | 🚀 Performance |
| meta-llama/Llama-3.1-405B-Instruct-Turbo | Largest Llama model | 🦣 Massive scale |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Mixture of experts | 🎭 Specialized tasks |
| Qwen/Qwen2.5-72B-Instruct-Turbo | Alibaba's fast model | 🏢 Enterprise |
Cohere (Enterprise-grade)
| Model | Description | Best For |
|---|---|---|
| command-r-plus | Most capable Cohere | 🎯 Enterprise |
| command-r | Balanced performance | ⚖️ General business |
| command | Standard command model | 💼 Basic tasks |
Groq (Ultra-fast inference)
| Model | Description | Best For |
|---|---|---|
| llama-3.1-70b-versatile | Fast 70B Llama | ⚡ Quick power |
| llama-3.1-8b-instant | Lightning-fast 8B | 🏃 Instant responses |
| mixtral-8x7b-32768 | Fast Mixtral variant | 🎭 Quick specialization |
| gemma2-9b-it | Google's Gemma model | 🔍 Search-optimized |
Perplexity AI (Web-connected)
| Model | Description | Best For |
|---|---|---|
| llama-3.1-sonar-large-128k-online | Web search + large context | 🌐 Research |
| llama-3.1-sonar-small-128k-online | Web search + fast responses | 🔍 Quick search |
| llama-3.1-sonar-large-128k-chat | Pure chat without web | 💬 Conversations |
| llama-3.1-sonar-small-128k-chat | Fast chat model | ⚡ Quick chat |
Replicate (Open-source hosting)
| Model | Description | Best For |
|---|---|---|
| meta/llama-2-70b-chat | Large Llama 2 chat | 🦣 Powerful chat |
| meta/llama-2-13b-chat | Medium Llama 2 chat | ⚖️ Balanced |
| meta/codellama-34b-instruct | Code-specialized Llama | 💻 Programming |
| microsoft/wizardcoder-34b | Microsoft's coding model | 🧙 Code magic |
AI21 Labs (Advanced reasoning)
| Model | Description | Best For |
|---|---|---|
| jamba-1.5-large | State-space capabilities | 🧬 Complex reasoning |
| jamba-1.5-mini | Compact Jamba model | 💎 Efficient reasoning |
| j2-ultra | Jurassic-2 Ultra model | 🦕 Powerful |
| j2-mid | Jurassic-2 Mid model | ⚖️ Balanced |
Stability AI (StableLM family)
| Model | Description | Best For |
|---|---|---|
| stablelm-2-zephyr-1_6b | Efficient 1.6B parameter | ⚡ Lightweight |
| stable-code-instruct-3b | Code-specialized 3B | 💻 Programming |
| japanese-stablelm-instruct-beta-70b | Japanese language | 🇯🇵 Japanese tasks |
| stablelm-zephyr-3b | Balanced 3B parameter | ⚖️ General use |
Fireworks AI (Ultra-fast inference)
| Model | Description | Best For |
|---|---|---|
| accounts/fireworks/models/llama-v3p1-70b-instruct | Fast Llama 3.1 70B | 🔥 Speed + power |
| accounts/fireworks/models/llama-v3p1-8b-instruct | Fast Llama 3.1 8B | ⚡ Quick responses |
| accounts/fireworks/models/mixtral-8x7b-instruct | Fast Mixtral model | 🎭 Fast specialization |
| accounts/fireworks/models/deepseek-coder-v2-lite-instruct | Code-specialized | 💻 Fast coding |
Anyscale (Ray-powered serving)
| Model | Description | Best For |
|---|---|---|
| meta-llama/Llama-2-70b-chat-hf | Enterprise Llama 2 70B | 🏢 Enterprise chat |
| meta-llama/Llama-2-13b-chat-hf | Enterprise Llama 2 13B | 💼 Business tasks |
| codellama/CodeLlama-34b-Instruct-hf | Enterprise CodeLlama | 💻 Enterprise coding |
| mistralai/Mistral-7B-Instruct-v0.1 | Enterprise Mistral | 🇫🇷 Enterprise EU |
😤 "Give me an honest opinion about this code" (brutally frank feedback)
💕 "I need some encouragement with this project" (supportive girlfriend mode)
🏆 "Help me stay motivated to finish this task" (motivational coach)
🧙 "What's the deeper meaning behind this design pattern?" (ancient wisdom)
🎨 "Think of a creative solution to this problem" (innovative thinking)
🤖 "Just give me the best available opinion" (automatic smart selection)
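Under the hood, a personality is essentially a system prompt. The sketch below shows one plausible way to wire it up; the prompt wording is hypothetical, not the server's actual text (use `list_personalities` to see the real descriptions):

```python
# Hypothetical sketch: mapping a personality name to a system prompt.
# The prompt text here is illustrative, not the server's actual wording.
PERSONALITY_PROMPTS = {
    "honest": "Give brutally frank, direct feedback.",
    "gf": "Be warm, supportive, and encouraging.",
    "coach": "Be a motivational coach; keep the user moving forward.",
    "wise": "Answer with measured, reflective wisdom.",
    "creative": "Favor unconventional, inventive ideas.",
}

def system_prompt(personality):
    """Fall back to a neutral prompt for unknown personality names."""
    return PERSONALITY_PROMPTS.get(personality, "Be helpful and balanced.")
```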
💬 "Get a second opinion from GPT-4.1 on this coding approach"
🤔 "What would Grok-4 think about this solution?" (NEW: Latest model)
⚖️ "Compare how Claude-4-opus and gemini-2.5-flash would solve this problem"
🤗 "Get an opinion from meta-llama/Llama-3.1-70B-Instruct on HuggingFace"
🌍 "Get an OpenRouter opinion from anthropic/claude-3-5-sonnet-20241022"
🧠 "What does DeepSeek-reasoner think about this math problem?"
🇫🇷 "Ask Mistral-large-latest to review my code architecture"
⚡ "Get a fast response from Groq's llama-3.1-8b-instant model"
🌐 "Use Perplexity's web search to research the latest AI developments"
🏢 "What does Cohere's command-r-plus think about this business strategy?"
🔗 "Get Together AI's Llama-405B opinion on this complex problem"
🗣️ "Start a group discussion about AI ethics with GPT-4.1, Claude-4, Mistral, and Perplexity"
📊 "Cross-platform comparison of this algorithm across all 17 available platforms"
🎭 "Get a Replicate opinion from meta/llama-2-70b-chat on this open-source approach"
🧬 "What does AI21's Jamba-1.5-large think about this reasoning problem?"
🎨 "Ask Stability AI's StableLM about this code optimization"
🔥 "Get a super-fast response from Fireworks AI's Llama model"
🚀 "Use Anyscale's enterprise-grade Llama serving for this complex task"
- `get_openai_opinion` - Get opinion from any OpenAI model
- `get_gemini_opinion` - Get opinion from any Gemini model (enhanced with better conversation handling)
- `get_grok_opinion` - Get opinion from any Grok model (includes thinking models)
- `get_claude_opinion` - Get opinion from any Claude model
- `get_huggingface_opinion` - Get opinion from any HuggingFace model (enhanced with better reliability)
- `get_deepseek_opinion` - Get opinion from DeepSeek models
- `get_openrouter_opinion` - Get opinion from 200+ models via OpenRouter (NEW)
- `get_mistral_opinion` - Get opinion from Mistral AI models (NEW)
- `get_together_opinion` - Get opinion from Together AI's 200+ models (NEW)
- `get_cohere_opinion` - Get opinion from Cohere enterprise models (NEW)
- `get_groq_fast_opinion` - Get ultra-fast responses from Groq (NEW)
- `get_perplexity_opinion` - Get web-connected AI responses
- `get_replicate_opinion` - Get opinion from Replicate's open-source models (NEW)
- `get_ai21_opinion` - Get opinion from AI21 Labs' Jamba models (NEW)
- `get_stability_opinion` - Get opinion from Stability AI's StableLM models (NEW)
- `get_fireworks_opinion` - Get ultra-fast responses from Fireworks AI (NEW)
- `get_anyscale_opinion` - Get enterprise-grade responses from Anyscale (NEW)
- `compare_openai_models` - Compare multiple OpenAI models
- `compare_gemini_models` - Compare multiple Gemini models
- `compare_grok_models` - Compare multiple Grok models
- `compare_claude_models` - Compare multiple Claude models
- `get_personality_opinion` - Get AI responses with a specific personality (honest, gf, coach, wise, creative)
- `get_default_opinion` - Automatically uses the best available model (Grok-4 → Gemini Pro → GPT-4.1)
- `list_personalities` - See all available AI personalities and their descriptions
- `cross_platform_comparison` - Compare across all 17 AI platforms: OpenAI, Gemini, Grok, Claude, HuggingFace, DeepSeek, OpenRouter, Mistral, Together AI, Cohere, Groq Fast, Perplexity, Replicate, AI21 Labs, Stability AI, Fireworks AI & Anyscale
- `group_discussion` - Multi-round discussions between AI models with shared context (supports all platforms)
- `list_conversation_histories` - See active conversation threads
- `clear_conversation_history` - Reset conversation memory for specific models

For deeper reasoning, use thinking models:
> "Get a GPT 5 thinking opinion on this complex math problem with high reasoning effort"
This will use GPT 5 with reasoning effort set to high.
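With an OpenAI-compatible client, a thinking request could be assembled along these lines. The `reasoning_effort` field name follows the grok-3-mini parameter mentioned earlier; treat the exact field name and accepted values as assumptions to verify against each provider's API docs:

```python
def build_thinking_request(prompt, model, reasoning_effort="high"):
    """Assemble an OpenAI-style chat payload with a reasoning-effort knob.

    `reasoning_effort` is an assumption based on xAI's grok-3-mini
    parameter; verify the field name for your provider.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if reasoning_effort is not None:
        payload["reasoning_effort"] = reasoning_effort
    return payload

# Example: a high-effort request for a hard math problem.
req = build_thinking_request("Prove this identity step by step.", model="grok-3-mini")
```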
Create AI debates and collaborative problem-solving:
> "Start a group discussion about renewable energy solutions with 3 rounds between GPT-4.1, Claude-4, Gemini, and DeepSeek"
Each AI can see previous responses and build on the discussion.
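A multi-round discussion like this reduces to a loop over models with a shared transcript. The sketch below is illustrative, and `ask_model` is a hypothetical stand-in for a real provider call:

```python
def run_group_discussion(topic, models, ask_model, rounds=3):
    """Each model sees the full transcript so far before replying.

    `ask_model(model, prompt)` is a hypothetical callable standing in
    for a real provider API call.
    """
    transcript = []  # (model, reply) pairs shared across all rounds
    for _ in range(rounds):
        for model in models:
            context = "\n".join(f"{m}: {r}" for m, r in transcript)
            reply = ask_model(model, f"Topic: {topic}\n{context}")
            transcript.append((model, reply))
    return transcript
```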
Access cutting-edge open source models:
> "Get an opinion from Qwen/Qwen3-30B-A3B-Instruct-2507 about chatbot design patterns"
Note: this specific model may not always be available. This is perfect for testing specialized models or comparing open-source vs. proprietary AI.
Your API keys stay private on your machine. The MCP server only sends model responses to the client, never your credentials.
- **Import errors**: Ensure you've installed all dependencies with `pip install -r requirements.txt`
- **API errors**: Check that your API keys are correct and active
- **Server not connecting**: Verify the file path in your MCP configuration
- **Cut-off responses**: The new version uses 4000 max_tokens by default to prevent truncation
- **HuggingFace timeouts**: Some models may take time to load. Try again after a few moments.
- **Model not available**: Check if the HuggingFace model supports text generation or chat completion
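For transient failures such as a HuggingFace model that is still loading, a small retry helper with exponential backoff usually suffices. This is a generic sketch, not part of the server:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=1.0):
    """Retry `fn` on any exception, doubling the delay between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```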
Issues and pull requests welcome! This is an open-source project for the AI community.
⭐ Star us on GitHub • 🍴 Fork the project • 💖 Contribute to the future of AI