AutoGen Integration
MCP server integrating Microsoft's AutoGen framework for multi-agent conversations through a standardized interface.
A comprehensive MCP server that provides deep integration with Microsoft's AutoGen framework v0.9+, featuring the latest capabilities including prompts, resources, advanced workflows, and enhanced agent types. This server enables sophisticated multi-agent conversations through a standardized Model Context Protocol interface.
### Tools

- `create_agent` - Create agents with advanced configurations
- `create_workflow` - Build complete multi-agent workflows
- `get_agent_status` - Detailed agent metrics and health monitoring
- `execute_chat` - Enhanced two-agent conversations
- `execute_group_chat` - Multi-agent group discussions
- `execute_nested_chat` - Hierarchical conversation structures
- `execute_swarm` - Swarm-based collaborative problem solving
- `execute_workflow` - Run predefined workflow templates
- `manage_agent_memory` - Handle agent learning and persistence
- `configure_teachability` - Enable/configure agent learning capabilities

### Prompts

- `autogen-workflow` - Create sophisticated multi-agent workflows with customizable parameters: `task_description`, `agent_count`, `workflow_type`
- `code-review` - Set up collaborative code review with specialized agents: `code`, `language`, `focus_areas`
- `research-analysis` - Deploy research teams for in-depth topic analysis: `topic`, `depth`
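Each prompt's arguments travel in an MCP `prompts/get` request. As a rough sketch of how a client might assemble and sanity-check one (the `build_prompt_request` helper and its validation are illustrative, not part of this server; the prompt names and parameter lists come from the section above):

```python
# Hypothetical client-side helper: check argument names for the
# built-in prompts before sending an MCP prompts/get request.
PROMPT_PARAMS = {
    "autogen-workflow": ["task_description", "agent_count", "workflow_type"],
    "code-review": ["code", "language", "focus_areas"],
    "research-analysis": ["topic", "depth"],
}

def build_prompt_request(name: str, arguments: dict) -> dict:
    """Build a JSON-RPC prompts/get payload, rejecting unknown argument names."""
    expected = PROMPT_PARAMS.get(name)
    if expected is None:
        raise ValueError(f"unknown prompt: {name}")
    unknown = set(arguments) - set(expected)
    if unknown:
        raise ValueError(f"unexpected arguments: {sorted(unknown)}")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "prompts/get",
        "params": {"name": name, "arguments": arguments},
    }

req = build_prompt_request("code-review", {"code": "print('hi')", "language": "python"})
print(req["params"]["name"])  # code-review
```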
### Resources

- `autogen://agents/list` - Live list of active agents with status and capabilities
- `autogen://workflows/templates` - Available workflow templates and configurations
- `autogen://chat/history` - Recent conversation history and interaction logs
- `autogen://config/current` - Current server configuration and settings
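Server-side, resource URIs like these are commonly routed through a dispatch table. A minimal sketch, assuming nothing about this server's actual internals (the handler bodies below are placeholders):

```python
# Illustrative resource dispatch: map autogen:// URIs to handler
# functions. The handlers are stubs, not the server's real logic.
def list_agents() -> list:
    return []  # would return live agent records

def list_templates() -> list:
    return ["code_generation", "research"]

RESOURCES = {
    "autogen://agents/list": list_agents,
    "autogen://workflows/templates": list_templates,
}

def read_resource(uri: str):
    """Look up and invoke the handler for a resource URI."""
    try:
        return RESOURCES[uri]()
    except KeyError:
        raise ValueError(f"unknown resource: {uri}")

print(read_resource("autogen://workflows/templates"))  # ['code_generation', 'research']
```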
### Installation

To install AutoGen Server for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @DynamicEndpoints/autogen_mcp --client claude
```
Or install manually:

```bash
git clone https://github.com/yourusername/autogen-mcp.git
cd autogen-mcp
npm install
pip install -r requirements.txt --user
npm run build
cp .env.example .env
cp config.json.example config.json
# Edit .env and config.json with your settings
```
Create a `.env` file from the template:
```bash
# Required
OPENAI_API_KEY=your-openai-api-key-here

# Optional - Path to configuration file
AUTOGEN_MCP_CONFIG=config.json

# Enhanced Features
ENABLE_PROMPTS=true
ENABLE_RESOURCES=true
ENABLE_WORKFLOWS=true
ENABLE_TEACHABILITY=true

# Performance Settings
MAX_CHAT_TURNS=10
DEFAULT_OUTPUT_FORMAT=json
```
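Boolean flags like these are usually parsed leniently from the environment. A sketch of how the feature flags might be read (the parsing rules and defaults here are assumptions, not taken from the server's code):

```python
import os

def env_flag(name: str, default: bool = True) -> bool:
    """Read a true/false feature flag from the environment (assumed semantics)."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

# Simulate the .env settings above for the demo.
os.environ["ENABLE_PROMPTS"] = "true"
os.environ["MAX_CHAT_TURNS"] = "10"

enable_prompts = env_flag("ENABLE_PROMPTS")
max_turns = int(os.environ.get("MAX_CHAT_TURNS", "10"))
print(enable_prompts, max_turns)  # True 10
```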
Update `config.json` with your preferences:
```json
{
  "llm_config": {
    "config_list": [
      {
        "model": "gpt-4o",
        "api_key": "your-openai-api-key"
      }
    ],
    "temperature": 0.7
  },
  "enhanced_features": {
    "prompts": { "enabled": true },
    "resources": { "enabled": true },
    "workflows": { "enabled": true }
  }
}
```
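Before launching, it can help to sanity-check the file. A minimal validation sketch, using only the key names shown in the example above (the required-key list is an assumption, not the server's actual schema):

```python
import json

# Assumed minimal schema: only keys visible in the example config.
REQUIRED_TOP_LEVEL = ["llm_config"]

def validate_config(raw: str) -> dict:
    """Parse config JSON and check the keys the example relies on."""
    cfg = json.loads(raw)
    for key in REQUIRED_TOP_LEVEL:
        if key not in cfg:
            raise KeyError(f"missing required key: {key}")
    models = [c.get("model") for c in cfg["llm_config"].get("config_list", [])]
    if not models:
        raise ValueError("llm_config.config_list must define at least one model")
    return cfg

sample = '{"llm_config": {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}], "temperature": 0.7}}'
cfg = validate_config(sample)
print(cfg["llm_config"]["temperature"])  # 0.7
```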
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "autogen": {
      "command": "node",
      "args": ["path/to/autogen-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}
```
Test the server functionality:
```bash
# Run comprehensive tests
python test_server.py

# Test CLI interface
python cli_example.py create_agent "researcher" "assistant" "You are a research specialist"
python cli_example.py execute_workflow "code_generation" '{"task":"Hello world","language":"python"}'
```
The server provides several built-in prompts:
Available resources provide real-time data:

- `autogen://agents/list` - Current active agents
- `autogen://workflows/templates` - Available workflow templates
- `autogen://chat/history` - Recent conversation history
- `autogen://config/current` - Server configuration

Example workflow execution:

```json
{
  "workflow_name": "code_generation",
  "input_data": {
    "task": "Create a REST API endpoint",
    "language": "python",
    "requirements": ["FastAPI", "Pydantic", "Error handling"]
  },
  "quality_checks": true
}
```
```json
{
  "workflow_name": "research",
  "input_data": {
    "topic": "AI Ethics in 2025",
    "depth": "comprehensive"
  },
  "output_format": "markdown"
}
```
If dependencies are missing or broken, reinstall them:

```bash
pip install -r requirements.txt --user
npm install
```
Enable detailed logging:
```bash
export LOG_LEVEL=DEBUG
python test_server.py
```
Use `gpt-4o-mini` for faster, cost-effective operations.

Run the test suite:

```bash
# Full test suite
python test_server.py

# Individual workflow tests
python -c "
import asyncio
from src.autogen_mcp.workflows import WorkflowManager
wm = WorkflowManager()
print(asyncio.run(wm.execute_workflow('code_generation', {'task': 'test'})))
"
```
```bash
npm run build
npm run lint
```
For issues and questions, see the usage examples in `test_server.py`.
MIT License - see LICENSE file for details.
```bash
OPENAI_API_KEY=your-openai-api-key
```
### Server Configuration
1. Copy `config.json.example` to `config.json`:
```bash
cp config.json.example config.json
```

2. Update `config.json` with your settings:

```json
{
  "llm_config": {
    "config_list": [
      {
        "model": "gpt-4",
        "api_key": "your-openai-api-key"
      }
    ],
    "temperature": 0
  },
  "code_execution_config": {
    "work_dir": "workspace",
    "use_docker": false
  }
}
```
The server supports three main operations:
Creating an agent:

```json
{
  "name": "create_agent",
  "arguments": {
    "name": "tech_lead",
    "type": "assistant",
    "system_message": "You are a technical lead with expertise in software architecture and design patterns."
  }
}
```
Executing a two-agent chat:

```json
{
  "name": "execute_chat",
  "arguments": {
    "initiator": "agent1",
    "responder": "agent2",
    "message": "Let's discuss the system architecture."
  }
}
```
Running a group chat:

```json
{
  "name": "execute_group_chat",
  "arguments": {
    "agents": ["agent1", "agent2", "agent3"],
    "message": "Let's review the proposed solution."
  }
}
```
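Over the wire, each of these operations is wrapped in a JSON-RPC 2.0 `tools/call` envelope as defined by the Model Context Protocol. A sketch of building that envelope (the `tool_call` helper itself is illustrative, not part of this server):

```python
import itertools
import json

_ids = itertools.count(1)  # simple monotonically increasing request ids

def tool_call(name: str, arguments: dict) -> str:
    """Wrap a tool invocation in a JSON-RPC 2.0 tools/call envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

msg = tool_call("execute_group_chat", {
    "agents": ["agent1", "agent2", "agent3"],
    "message": "Let's review the proposed solution.",
})
print(json.loads(msg)["method"])  # tools/call
```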
Common error scenarios include:
```json
{ "error": "Agent already exists" }
```

```json
{ "error": "Agent not found" }
```

```json
{ "error": "AUTOGEN_MCP_CONFIG environment variable not set" }
```
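A client can branch on these error strings. A defensive sketch (treating "Agent already exists" as recoverable is an assumption about client policy, not documented server behavior):

```python
# Illustrative client-side handling of the error payloads above.
RECOVERABLE = {"Agent already exists"}  # assumed safe to ignore on re-create

def handle_response(resp: dict) -> str:
    """Return 'ok', 'skipped' for recoverable errors, else raise."""
    error = resp.get("error")
    if error is None:
        return "ok"
    if error in RECOVERABLE:
        return "skipped"
    raise RuntimeError(error)

print(handle_response({"error": "Agent already exists"}))  # skipped
```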
The server follows a modular architecture:
```
src/
├── autogen_mcp/
│   ├── __init__.py
│   ├── agents.py      # Agent management and configuration
│   ├── config.py      # Configuration handling and validation
│   ├── server.py      # MCP server implementation
│   └── workflows.py   # Conversation workflow management
```