# Code Assistant

An AI coding assistant built in Rust that provides command-line, graphical, and MCP server interfaces for autonomous code analysis and modification.
- **Multi-Modal Tool Execution**: Adapts to different LLM capabilities with pluggable tool invocation modes (native function calling, XML-style tags, and triple-caret blocks), ensuring compatibility across various AI providers.
- **Real-Time Streaming Interface**: Advanced streaming processors parse and display tool invocations as they stream from the LLM, with smart filtering to prevent unsafe tool combinations.
- **Session-Based Project Management**: Each chat session is tied to a specific project and maintains persistent state, working memory, and draft messages with attachment support.
- **Multiple Interface Options**: Choose between a modern GUI built on Zed's GPUI framework, a traditional terminal interface, or a headless MCP server mode for integration with MCP clients such as Claude Desktop.
- **Agent Client Protocol (ACP) Support**: Full compatibility with the Agent Client Protocol standard, enabling seamless integration with ACP-compatible editors like Zed. See Zed's documentation on adding custom agents for setup instructions.
- **Intelligent Project Exploration**: Autonomously builds an understanding of codebases through working memory that tracks file structures, dependencies, and project context.
- **Auto-Loaded Repository Guidance**: Automatically includes AGENTS.md (or CLAUDE.md as a fallback) from the project root in the assistant's system context to align behavior with repo-specific instructions.
```shell
# On macOS or Linux, install the Rust toolchain via rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# On macOS, you also need the Metal toolchain:
xcodebuild -downloadComponent MetalToolchain

# Then clone the repo and build it:
git clone https://github.com/stippi/code-assistant
cd code-assistant
cargo build --release
```
The binary will be available at `target/release/code-assistant`.
After building, create your configuration files:
```shell
# Create config directory
mkdir -p ~/.config/code-assistant

# Copy example configurations
cp providers.example.json ~/.config/code-assistant/providers.json
cp models.example.json ~/.config/code-assistant/models.json

# Edit the files to add your API keys.
# Set environment variables or update the JSON files directly:
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
```
See the Configuration section for detailed setup instructions.
Create `~/.config/code-assistant/projects.json` to define available projects:
```json
{
  "code-assistant": {
    "path": "/Users/<username>/workspace/code-assistant",
    "format_on_save": {
      "**/*.rs": "cargo fmt" // Formats all files in the project, so make sure files are already formatted
    }
  },
  "my-project": {
    "path": "/Users/<username>/workspace/my-project",
    "format_on_save": {
      "**/*.ts": "prettier --write {path}" // If the formatter accepts a path, provide "{path}"
    }
  }
}
```
The optional `format_on_save` field enables automatic formatting of files after modifications. It maps file patterns (using glob syntax) to shell commands.
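As a rough sketch of how such a rule might be applied (the function name and the fallback behavior are assumptions for illustration, not the actual implementation), the `{path}` placeholder can simply be substituted into the configured command:

```rust
// Hypothetical sketch: expanding a format_on_save command for a modified file.
// If the command contains "{path}", substitute the file's path; otherwise the
// formatter is assumed to format the whole project (as with `cargo fmt`).
fn expand_command(cmd: &str, path: &str) -> String {
    if cmd.contains("{path}") {
        cmd.replace("{path}", path)
    } else {
        cmd.to_string()
    }
}

fn main() {
    println!("{}", expand_command("prettier --write {path}", "src/app.ts"));
    println!("{}", expand_command("cargo fmt", "src/main.rs"));
}
```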
See docs/format-on-save-feature.md for detailed documentation.
Important Notes:
```shell
# Start with graphical interface
code-assistant --ui

# Start GUI with initial task
code-assistant --ui --task "Analyze the authentication system"
```
```shell
# Basic usage
code-assistant --task "Explain the purpose of this codebase"

# With specific model
code-assistant --task "Add error handling" --model "GPT-5"
```
```shell
code-assistant server
```
```shell
# Run as ACP-compatible agent
code-assistant acp

# With specific model
code-assistant acp --model "Claude Sonnet 4.5"
```
The ACP mode enables integration with editors that support the Agent Client Protocol, such as Zed. When running in ACP mode, the code-assistant communicates via JSON-RPC over stdin/stdout, supporting features like pending messages, real-time streaming, and tool execution with proper permission handling.
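As a minimal sketch of the stdin/stdout transport described above (the method name `initialize` and the empty params are illustrative assumptions, not the actual ACP schema), each JSON-RPC message is a single JSON object written as one line:

```rust
// Hypothetical sketch: building a newline-delimited JSON-RPC request line
// such as an ACP client might write to the agent's stdin.
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
        id, method, params
    )
}

fn main() {
    // A client would write this line to the code-assistant process's stdin
    // and read the matching response line from its stdout.
    println!("{}", jsonrpc_request(1, "initialize", "{}"));
}
```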
The code-assistant uses two JSON configuration files to manage LLM providers and models:
`~/.config/code-assistant/providers.json` - Configure provider credentials and endpoints:
```json
{
  "anthropic": {
    "label": "Anthropic Claude",
    "provider": "anthropic",
    "config": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "base_url": "https://api.anthropic.com/v1"
    }
  },
  "openai": {
    "label": "OpenAI",
    "provider": "openai-responses",
    "config": {
      "api_key": "${OPENAI_API_KEY}"
    }
  }
}
```
`~/.config/code-assistant/models.json` - Define available models:
```json
{
  "Claude Sonnet 4.5 (Thinking)": {
    "provider": "anthropic",
    "id": "claude-sonnet-4-5",
    "config": {
      "max_tokens": 32768,
      "thinking": { "type": "enabled", "budget_tokens": 8192 }
    }
  },
  "Claude Sonnet 4.5": {
    "provider": "anthropic",
    "id": "claude-sonnet-4-5",
    "config": { "max_tokens": 32768 }
  },
  "GPT-5": {
    "provider": "openai",
    "id": "gpt-5-codex",
    "config": { "temperature": 0.7 }
  }
}
```
Environment Variable Substitution: Use `${VAR_NAME}` in provider configs to reference environment variables for API keys.
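A minimal sketch of how such a substitution could work (a simplification for illustration: it only handles values that consist entirely of one `${VAR_NAME}` placeholder, and the lookup table stands in for the process environment):

```rust
use std::collections::HashMap;

// Hypothetical sketch: resolving a "${VAR_NAME}" config value against a
// set of environment variables, falling back to the literal value otherwise.
fn resolve(value: &str, env: &HashMap<&str, &str>) -> String {
    match value.strip_prefix("${").and_then(|s| s.strip_suffix('}')) {
        Some(name) => env.get(name).copied().unwrap_or("").to_string(),
        None => value.to_string(),
    }
}

fn main() {
    let mut env = HashMap::new();
    env.insert("ANTHROPIC_API_KEY", "sk-ant-example");
    println!("{}", resolve("${ANTHROPIC_API_KEY}", &env)); // substituted
    println!("{}", resolve("https://api.anthropic.com/v1", &env)); // literal
}
```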
Full Examples: See providers.example.json and models.example.json for complete configuration examples with all supported providers (Anthropic, OpenAI, Ollama, SAP AI Core, Vertex AI, Groq, Cerebras, MistralAI, OpenRouter).
List Available Models:
```shell
# See all configured models
code-assistant --list-models

# See all configured providers
code-assistant --list-providers
```
Configure in Claude Desktop settings (Developer tab → Edit Config):
```json
{
  "mcpServers": {
    "code-assistant": {
      "command": "/path/to/code-assistant/target/release/code-assistant",
      "args": ["server"],
      "env": {
        "PERPLEXITY_API_KEY": "pplx-...", // Optional, enables the perplexity_ask tool
        "SHELL": "/bin/zsh" // Your login shell
      }
    }
  }
}
```
Configure in Zed settings:
```json
{
  "agent_servers": {
    "Code-Assistant": {
      "command": "/path/to/code-assistant/target/release/code-assistant",
      "args": ["acp", "--model", "Claude Sonnet 4.5"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}
```
Make sure your providers.json and models.json are configured with the model you specify. The agent will appear in Zed's assistant panel with full ACP support.
For detailed setup instructions, see Zed's documentation on adding custom agents.
Tool Syntax Modes:
- `--tool-syntax native`: Use the provider's built-in tool calling (most reliable, but streaming of parameters depends on the provider)
- `--tool-syntax xml`: XML-style tags, enabling streaming of parameters
- `--tool-syntax caret`: Triple-caret blocks for token efficiency and streaming of parameters

Session Recording:
```shell
# Record session (Anthropic only)
code-assistant --record session.json --model "Claude Sonnet 4.5" --task "Optimize database queries"

# Playback session
code-assistant --playback session.json --fast-playback
```
Other Options:
- `--model <name>`: Specify a model from models.json (use `--list-models` to see available options)
- `--continue-task`: Resume from previous session state
- `--use-diff-format`: Enable an alternative diff format for file editing
- `--verbose` / `-v`: Enable detailed logging (use multiple times for more verbosity)

The code-assistant features several innovative architectural decisions:
Adaptive Tool Syntax: Automatically generates different system prompts and streaming processors based on the target LLM's capabilities, allowing the same core logic to work across providers with varying function calling support.
Smart Tool Filtering: Real-time analysis of tool invocation patterns prevents logical errors like attempting to edit files before reading them, with the ability to truncate responses mid-stream when unsafe combinations are detected.
Multi-Threaded Streaming: Sophisticated async architecture that handles real-time parsing of tool invocations while maintaining responsive UI updates and proper state management across multiple chat sessions.
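The read-before-edit check behind the smart tool filtering described above can be sketched roughly like this (the type and method names are hypothetical, not the actual implementation; the real filter also truncates the stream when a violation is detected):

```rust
use std::collections::HashSet;

// Hypothetical sketch: track which files the session has read, and
// refuse an edit-style tool invocation on a file that was never read.
struct ToolFilter {
    read_files: HashSet<String>,
}

impl ToolFilter {
    fn new() -> Self {
        Self { read_files: HashSet::new() }
    }

    fn record_read(&mut self, path: &str) {
        self.read_files.insert(path.to_string());
    }

    fn allow_edit(&self, path: &str) -> bool {
        self.read_files.contains(path)
    }
}

fn main() {
    let mut filter = ToolFilter::new();
    assert!(!filter.allow_edit("src/main.rs")); // edit before read is blocked
    filter.record_read("src/main.rs");
    assert!(filter.allow_edit("src/main.rs")); // allowed after a read
    println!("ok");
}
```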
Contributions are welcome! The codebase demonstrates advanced patterns in async Rust, AI agent architecture, and cross-platform UI development.
This section is not really a roadmap, as the items are in no particular order. Below are some topics that are likely to be the next focus.
- When the LLM uses `replace_in_file`, we know quite early which file it targets. If we also know this file has changed since the LLM last read it, we can block the attempt with an appropriate error message.
- The `execute_command` tool runs a shell with the provided command line, which at the moment is completely unchecked.
- Search blocks are currently matched after light normalization (`\n` line endings, no trailing whitespace). This increases the success rate of matching search blocks quite a bit, but certain ways of fuzzy matching might increase the success rate even more. Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the `replace_in_file` tool includes the complete file and tells the LLM not to re-read it.