
# Claude Code Python Edition

An MCP coding assistant (STDIO transport) supporting OpenAI and other LLMs.
A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.
## Installation

```bash
pip install -r requirements.txt
```

Create a `.env` file with your API keys:

```bash
# Choose one or more providers
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# Optional model selection
OPENAI_MODEL=gpt-4o
ANTHROPIC_MODEL=claude-3-opus-20240229
```
## Usage

Run the CLI with the default provider (determined from available API keys):

```bash
python claude.py chat
```
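The "default provider" decision can be as simple as checking which keys are present in the environment; a minimal sketch of that logic (the function name and exact precedence are assumptions, not the project's actual code):

```python
import os

def default_provider() -> str:
    # Hypothetical selection logic: prefer whichever API key is configured.
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic"
    raise RuntimeError("Set OPENAI_API_KEY or ANTHROPIC_API_KEY in .env")
```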
Specify a provider and model:

```bash
python claude.py chat --provider openai --model gpt-4o
```
Set a budget limit to manage costs:

```bash
python claude.py chat --budget 5.00
```
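Budget enforcement of this kind usually amounts to a running total checked after each billed call; a hypothetical sketch of what `--budget` implies (class and method names are illustrative, not the project's actual API):

```python
class BudgetTracker:
    """Stops the session once accumulated API spend reaches the limit (illustrative)."""

    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def record(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd
        if self.spent_usd >= self.limit_usd:
            raise RuntimeError(
                f"Budget exhausted: spent ${self.spent_usd:.2f} of ${self.limit_usd:.2f}"
            )
```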
## MCP Server

Run as a Model Context Protocol server:

```bash
python claude.py serve
```

Start in development mode with the MCP Inspector:

```bash
python claude.py serve --dev
```

Configure host and port:

```bash
python claude.py serve --host 0.0.0.0 --port 8000
```

Specify additional dependencies:

```bash
python claude.py serve --dependencies pandas numpy
```

Load environment variables from file:

```bash
python claude.py serve --env-file .env
```
## MCP Client

Connect to an MCP server using Claude as the reasoning engine:

```bash
python claude.py mcp-client path/to/server.py
```

Specify a Claude model:

```bash
python claude.py mcp-client path/to/server.py --model claude-3-5-sonnet-20241022
```
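Under the hood, an MCP client session over stdio has roughly this shape; the sketch below uses the official `mcp` Python SDK directly (the project's `mcp-client` command wraps a loop like this, with Claude doing the reasoning):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server as a subprocess and speak MCP over its stdio.
    params = StdioServerParameters(command="python", args=["path/to/server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

asyncio.run(main())
```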
Try the included example server:

```bash
# In terminal 1 - start the server
python examples/echo_server.py

# In terminal 2 - connect with the client
python claude.py mcp-client examples/echo_server.py
```
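For reference, a stdio echo server like the bundled example takes only a few lines with the MCP Python SDK's FastMCP helper (a sketch; the actual `examples/echo_server.py` may differ):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")

@mcp.tool()
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```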
## Multi-Agent Mode

Launch a multi-agent client with synchronized agents:

```bash
python claude.py mcp-multi-agent path/to/server.py
```

Use a custom agent configuration file:

```bash
python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
```

Example with the echo server:

```bash
# In terminal 1 - start the server
python examples/echo_server.py

# In terminal 2 - launch the multi-agent client
python claude.py mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
```
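The configuration file defines one entry per agent; the sketch below shows the kind of structure the client might parse (the field names are assumptions; check `examples/agents_config.json` for the real schema):

```python
import json
from dataclasses import dataclass

@dataclass
class AgentSpec:
    name: str   # the name you address with /talk
    role: str   # system-prompt style description of the agent's specialty
    model: str = "claude-3-5-sonnet-20241022"

def load_agents(path: str) -> list[AgentSpec]:
    with open(path) as f:
        return [AgentSpec(**entry) for entry in json.load(f)]
```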
## Architecture

Claude Code Python Edition is built with a modular architecture:

```
/claude_code/
  /lib/
    /providers/    # LLM provider implementations
    /tools/        # Tool implementations
    /context/      # Context management
    /ui/           # UI components
    /monitoring/   # Cost tracking & metrics
  /commands/       # CLI commands
  /config/         # Configuration management
  /util/           # Utility functions
  claude.py        # Main CLI entry point
  mcp_server.py    # Model Context Protocol server
```
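The `/providers/` layer is what lets the same CLI target OpenAI or Anthropic interchangeably; conceptually it is an interface along these lines (a sketch with assumed names, not the actual classes):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Assumed common interface implemented by each backend in /providers/."""

    @abstractmethod
    def complete(self, messages: list[dict], model: str) -> str:
        """Send a chat history and return the model's reply text."""

class OpenAIProvider(LLMProvider):
    def complete(self, messages: list[dict], model: str = "gpt-4o") -> str:
        from openai import OpenAI
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(model=model, messages=messages)
        return resp.choices[0].message.content
```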
## Using with Claude Desktop

Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:

1. Install and run the MCP server:

   ```bash
   python claude.py serve
   ```

2. Open the configuration page in your browser:

   ```
   http://localhost:8000
   ```

3. Follow the on-page instructions to configure Claude Desktop.
To connect to any MCP server using Claude Code:

```bash
python claude.py mcp-client path/to/server.py
```
For complex tasks, the multi-agent mode allows multiple specialized agents to collaborate:

```bash
python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
```

Inside the multi-agent console:

- `/talk Agent_Name message` for direct communication
- `/agents` to see all available agents
- `/history` to view the conversation history

## License

MIT

## Acknowledgements

This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.

# OpenAI Code Assistant
A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.
## Installation

```bash
pip install -r requirements.txt
export OPENAI_API_KEY=your_api_key
```
## CLI Mode

Run the assistant in interactive CLI mode:

```bash
python cli.py
```
Options:

- `--model`, `-m`: Specify the model to use (default: gpt-4o)
- `--temperature`, `-t`: Set temperature for response generation (default: 0)
- `--verbose`, `-v`: Enable verbose output with additional information
- `--enable-rl/--disable-rl`: Enable/disable reinforcement learning for tool optimization
- `--rl-update`: Manually trigger an update of the RL model

## API Server

Run the assistant as an API server:

```bash
python cli.py serve
```
Options:

- `--host`: Host address to bind to (default: 127.0.0.1)
- `--port`, `-p`: Port to listen on (default: 8000)
- `--workers`, `-w`: Number of worker processes (default: 1)
- `--enable-replication`: Enable replication across instances
- `--primary/--secondary`: Whether this is a primary or secondary instance
- `--peer`: Peer instances to replicate with (host:port); can be specified multiple times

## MCP Server

Run the assistant as a Model Context Protocol (MCP) server:

```bash
python cli.py mcp-serve
```
Options:

- `--host`: Host address to bind to (default: 127.0.0.1)
- `--port`, `-p`: Port to listen on (default: 8000)
- `--dev`: Enable development mode with additional logging
- `--dependencies`: Additional Python dependencies to install
- `--env-file`: Path to a .env file with environment variables

## MCP Client

Connect to an MCP server using the assistant as the reasoning engine:

```bash
python cli.py mcp-client path/to/server.py
```
Options:

- `--model`, `-m`: Model to use for reasoning (default: gpt-4o)
- `--host`: Host address for the MCP server (default: 127.0.0.1)
- `--port`, `-p`: Port for the MCP server (default: 8000)

## Deployment

For easier deployment, use the provided script:

```bash
./deploy.sh --host 0.0.0.0 --port 8000 --workers 4
```
To enable replication:

```bash
# Primary instance
./deploy.sh --enable-replication --port 8000

# Secondary instance
./deploy.sh --enable-replication --secondary --port 8001 --peer 127.0.0.1:8000
```
## Web Client

To use the web client, open `web-client.html` in your browser. Make sure the API server is running.
## API Endpoints

- `POST /conversation`: Create a new conversation
- `POST /conversation/{conversation_id}/message`: Send a message to a conversation
- `POST /conversation/{conversation_id}/message/stream`: Stream a message response
- `GET /conversation/{conversation_id}`: Get conversation details
- `DELETE /conversation/{conversation_id}`: Delete a conversation
- `GET /health`: Health check endpoint
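For example, a minimal Python client for the conversation endpoints above (the JSON field names are assumptions about the payload shape, not a documented contract):

```python
import requests

BASE = "http://127.0.0.1:8000"

# Create a conversation, then send it one message.
conv = requests.post(f"{BASE}/conversation").json()
conv_id = conv["conversation_id"]  # assumed response field

reply = requests.post(
    f"{BASE}/conversation/{conv_id}/message",
    json={"message": "Write a hello-world in Python"},  # assumed request body
)
print(reply.json())
```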
MCP protocol endpoints:

- `GET /`: Health check (MCP protocol)
- `POST /context`: Get context for a prompt template
- `GET /prompts`: List available prompt templates
- `GET /prompts/{prompt_id}`: Get a specific prompt template
- `POST /prompts`: Create a new prompt template
- `PUT /prompts/{prompt_id}`: Update an existing prompt template
- `DELETE /prompts/{prompt_id}`: Delete a prompt template

## Replication

The replication system allows running multiple instances of the assistant with synchronized state.
To set up replication:

1. Start the primary instance with `--enable-replication`.
2. Start each secondary instance with `--enable-replication --secondary --peer [primary-host:port]`.
## Commands

The assistant supports these slash commands in interactive mode:

- `/help`: Show help message
- `/compact`: Compact the conversation to reduce token usage
- `/status`: Show token usage and session information
- `/config`: Show current configuration settings
- `/rl-status`: Show RL tool optimizer status (if enabled)
- `/rl-update`: Update the RL model manually (if enabled)
- `/rl-stats`: Show tool usage statistics (if enabled)
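Compaction like `/compact` typically replaces older turns with a model-written summary so the context window stays small; a hypothetical sketch of the idea (not the project's actual implementation):

```python
from openai import OpenAI

def compact(messages: list[dict], keep_last: int = 4) -> list[dict]:
    """Summarize all but the last few turns into one system note (illustrative)."""
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    summary = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=old + [{"role": "user", "content": "Summarize this conversation briefly."}],
    ).choices[0].message.content
    return [{"role": "system", "content": f"Earlier conversation summary: {summary}"}, *recent]
```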