
Second Opinion
Get instant second opinions from multiple AI models (OpenAI, Gemini, Grok, Claude, HuggingFace, DeepSeek, Mistral, Together AI, Cohere, Groq Fast & Perplexity) directly within Claude conversations.
This MCP server allows Claude to consult other AI models for different perspectives on your questions, code, and ideas.
Access any of the 800,000+ models on HuggingFace Hub via their Inference API with improved reliability:

- `meta-llama/Llama-3.1-8B-Instruct` - Fast and reliable
- `meta-llama/Llama-3.1-70B-Instruct` - Powerful reasoning
- `mistralai/Mistral-7B-Instruct-v0.3` - Efficient French-developed model
- `Qwen/Qwen2.5-7B-Instruct` - Alibaba's latest model

Get opinions from DeepSeek's powerful reasoning models:

- `deepseek-chat` (DeepSeek-V3) - Fast and efficient
- `deepseek-reasoner` (DeepSeek-R1) - Advanced reasoning

Access xAI's latest thinking models with enhanced reasoning:

- `grok-3` - Latest flagship model
- `grok-3-thinking` - Step-by-step reasoning model
- `grok-3-mini` - Lightweight thinking model with `reasoning_effort` control

Start multi-AI discussions where models can see and respond to each other's input:
> "Start a group discussion about the future of AI with GPT-4.1, Claude-4, Mistral, and Perplexity"
**Clone the repository**
```bash
git clone https://github.com/ProCreations-Official/second-opinion.git
cd second-opinion
```
**Install dependencies**
```bash
pip install -r requirements.txt
```
**Get API keys**
**Configure Claude Desktop**
Add this to your Claude Desktop MCP configuration:
```json
{
  "mcpServers": {
    "second-opinion": {
      "command": "python3",
      "args": ["/path/to/your/main.py"],
      "env": {
        "OPENAI_API_KEY": "your_openai_key_here",
        "GEMINI_API_KEY": "your_gemini_key_here",
        "GROK_API_KEY": "your_grok_key_here",
        "CLAUDE_API_KEY": "your_claude_key_here",
        "HUGGINGFACE_API_KEY": "your_huggingface_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_key_here",
        "MISTRAL_API_KEY": "your_mistral_key_here",
        "TOGETHER_API_KEY": "your_together_key_here",
        "COHERE_API_KEY": "your_cohere_key_here",
        "GROQ_FAST_API_KEY": "your_groq_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_key_here"
      }
    }
  }
}
```
Note: You only need to add API keys for the services you want to use. Missing keys will simply disable those specific features.
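For example, if you only use OpenAI, a minimal config can carry a single key (the path and key value are placeholders):

```json
{
  "mcpServers": {
    "second-opinion": {
      "command": "python3",
      "args": ["/path/to/your/main.py"],
      "env": {
        "OPENAI_API_KEY": "your_openai_key_here"
      }
    }
  }
}
```

With this setup, only the OpenAI tools are active; the other platforms are simply disabled.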
**Restart Claude Desktop**
**OpenAI**

- `o4-mini` - Fast reasoning model
- `gpt-4.1` - Latest flagship model
- `gpt-4o` - Multimodal powerhouse

**Gemini**

- `gemini-2.0-flash-001` - Fast and efficient
- `gemini-2.5-flash-preview-05-20` - Advanced reasoning

**Grok**

- `grok-3` - Latest flagship model
- `grok-3-thinking` - Step-by-step reasoning
- `grok-3-mini` - Lightweight thinking model
- `grok-2` - Robust and reliable
- `grok-beta` - Experimental features

**Claude**

- `claude-4-opus-20250522` - Most advanced Claude model
- `claude-4-sonnet-20250522` - Versatile model for general tasks
- `claude-3-7-sonnet-20250224` - Stable and reliable
- `claude-3-5-sonnet-20241022` - Efficient, lighter model

**HuggingFace** (800,000+ models available - enhanced with better reliability)

- `meta-llama/Llama-3.1-8B-Instruct` - Fast and reliable Meta model
- `meta-llama/Llama-3.1-70B-Instruct` - Powerful reasoning model
- `mistralai/Mistral-7B-Instruct-v0.3` - Efficient French-developed model
- `Qwen/Qwen2.5-7B-Instruct` - Alibaba's latest model

**DeepSeek**

- `deepseek-chat` - DeepSeek-V3 for general tasks
- `deepseek-reasoner` - DeepSeek-R1 for advanced reasoning

**Mistral AI** (NEW)

- `mistral-large-latest` - Most powerful Mistral model
- `mistral-small-latest` - Fast and cost-effective
- `mistral-medium-latest` - Balanced performance
- `codestral-latest` - Specialized for code generation

**Together AI** (NEW - 200+ open-source models)

- `meta-llama/Llama-3.1-8B-Instruct-Turbo` - Fast Llama turbo
- `meta-llama/Llama-3.1-70B-Instruct-Turbo` - Powerful Llama turbo
- `meta-llama/Llama-3.1-405B-Instruct-Turbo` - Largest Llama model
- `mistralai/Mixtral-8x7B-Instruct-v0.1` - Mixture of experts
- `Qwen/Qwen2.5-72B-Instruct-Turbo` - Alibaba's fast model

**Cohere** (NEW - Enterprise-grade)

- `command-r-plus` - Most capable Cohere model
- `command-r` - Balanced performance model
- `command` - Standard command model

**Groq Fast** (NEW - Ultra-fast inference)

- `llama-3.1-70b-versatile` - Fast 70B Llama
- `llama-3.1-8b-instant` - Lightning-fast 8B model
- `mixtral-8x7b-32768` - Fast Mixtral variant
- `gemma2-9b-it` - Google's Gemma model

**Perplexity AI** (NEW - Web-connected)

- `llama-3.1-sonar-large-128k-online` - Web search + large context
- `llama-3.1-sonar-small-128k-online` - Web search + fast responses
- `llama-3.1-sonar-large-128k-chat` - Pure chat without web
- `llama-3.1-sonar-small-128k-chat` - Fast chat model

Once configured, ask Claude things like:
- "Get a second opinion from GPT-4.1 on this coding approach"
- "What would Grok-3-thinking think about this solution?"
- "Compare how Claude-4-opus and gemini-2.0-flash would solve this problem"
- "Get an opinion from meta-llama/Llama-3.1-70B-Instruct on HuggingFace"
- "What does DeepSeek-reasoner think about this math problem?"
- "Ask Mistral-large-latest to review my code architecture"
- "Get a fast response from Groq's llama-3.1-8b-instant model"
- "Use Perplexity's web search to research the latest AI developments"
- "What does Cohere's command-r-plus think about this business strategy?"
- "Get Together AI's Llama-405B opinion on this complex problem"
- "Start a group discussion about AI ethics with GPT-4.1, Claude-4, Mistral, and Perplexity"
- "Cross-platform comparison of this algorithm across all 11 available platforms"
- `get_openai_opinion` - Get opinion from any OpenAI model
- `get_gemini_opinion` - Get opinion from any Gemini model (enhanced with better conversation handling)
- `get_grok_opinion` - Get opinion from any Grok model (includes thinking models)
- `get_claude_opinion` - Get opinion from any Claude model
- `get_huggingface_opinion` - Get opinion from any HuggingFace model (enhanced with better reliability)
- `get_deepseek_opinion` - Get opinion from DeepSeek models
- `get_mistral_opinion` - Get opinion from Mistral AI models (NEW)
- `get_together_opinion` - Get opinion from Together AI's 200+ models (NEW)
- `get_cohere_opinion` - Get opinion from Cohere enterprise models (NEW)
- `get_groq_fast_opinion` - Get ultra-fast responses from Groq (NEW)
- `get_perplexity_opinion` - Get web-connected AI responses (NEW)
- `compare_openai_models` - Compare multiple OpenAI models
- `compare_gemini_models` - Compare multiple Gemini models
- `compare_grok_models` - Compare multiple Grok models
- `compare_claude_models` - Compare multiple Claude models
- `cross_platform_comparison` - Compare across all 11 AI platforms: OpenAI, Gemini, Grok, Claude, HuggingFace, DeepSeek, Mistral, Together AI, Cohere, Groq Fast & Perplexity
- `group_discussion` - Multi-round discussions between AI models with shared context (supports all platforms)
- `list_conversation_histories` - See active conversation threads
- `clear_conversation_history` - Reset conversation memory for specific models

For deeper reasoning, use thinking models:
> "Get a Grok-3-thinking opinion on this complex math problem with high reasoning effort"
The `reasoning_effort` parameter controls thinking depth:

- `low` - Faster responses with basic reasoning
- `high` - Deeper analysis with step-by-step thinking

Create AI debates and collaborative problem-solving:
> "Start a group discussion about renewable energy solutions with 3 rounds between GPT-4.1, Claude-4, Gemini, and DeepSeek"
Each AI can see previous responses and build on the discussion.
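That round-robin flow can be sketched in a few lines of Python. This is an illustration, not the server's internal code: the `group_discussion` helper and the stand-in lambda "models" below are hypothetical, showing only how a shared transcript lets each model build on earlier turns.

```python
# Sketch of a multi-round group discussion with shared context.
# In the real server each "model" would be a live API call; here
# they are stand-in functions so the flow is easy to follow.

def group_discussion(models, topic, rounds=3):
    """Run `rounds` rounds; every model sees the transcript so far."""
    transcript = []  # (round, model_name, response) tuples
    for rnd in range(1, rounds + 1):
        for name, ask in models.items():
            # Build the shared context from all earlier responses
            context = "\n".join(
                f"[round {r}] {m}: {text}" for r, m, text in transcript
            )
            prompt = f"Topic: {topic}\nDiscussion so far:\n{context}"
            transcript.append((rnd, name, ask(prompt)))
    return transcript

# Stand-in "models" that just report how much context they received
models = {
    "gpt-4.1": lambda p: f"gpt-4.1 saw {p.count('[round')} prior turns",
    "claude-4": lambda p: f"claude-4 saw {p.count('[round')} prior turns",
}
log = group_discussion(models, "AI ethics", rounds=2)
for rnd, name, text in log:
    print(rnd, name, text)
```

Each successive turn receives a longer transcript, which is what lets the discussion accumulate rather than restart with every model.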
Access cutting-edge open source models:
> "Get an opinion from microsoft/DialoGPT-large about chatbot design patterns"
Perfect for testing specialized models or comparing open source vs proprietary AI.
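Under the hood, a HuggingFace opinion boils down to a single Inference API chat request. The sketch below is not the server's actual code; it shows the general shape of such a request, with the actual API call left commented out (it needs a valid key and the `huggingface_hub` package). The `build_messages` helper and its system prompt are illustrative assumptions.

```python
def build_messages(prompt, system="You are giving a second opinion."):
    """Build a chat-completion payload in the common messages format."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]

messages = build_messages("Review this chatbot design pattern.")

# To actually call the Inference API (requires HUGGINGFACE_API_KEY):
# from huggingface_hub import InferenceClient
# client = InferenceClient(api_key="your_huggingface_key_here")
# out = client.chat_completion(
#     model="microsoft/DialoGPT-large", messages=messages, max_tokens=4000
# )
# print(out.choices[0].message.content)
```

The same messages format is what most of the other providers accept, which is why one server can fan a question out across so many platforms.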
Your API keys stay private on your machine. The MCP server only sends model responses to Claude, never your credentials.
- **Import errors**: Ensure you've installed all dependencies with `pip install -r requirements.txt`
- **API errors**: Check that your API keys are correct and active
- **Server not connecting**: Verify the file path in your MCP configuration
- **Cut-off responses**: The new version uses 4000 `max_tokens` by default to prevent truncation
- **HuggingFace timeouts**: Some models may take time to load. Try again after a few moments.
- **Model not available**: Check if the HuggingFace model supports text generation or chat completion
Issues and pull requests welcome! This is an open-source project for the AI community.
Built for developers who want access to the entire AI ecosystem at their fingertips 🧠✨