
Contextual MCP Server
An MCP server (STDIO transport) that provides domain-specific RAG queries with citations.
A Model Context Protocol (MCP) server that provides RAG (Retrieval-Augmented Generation) capabilities using Contextual AI. This server integrates with a variety of MCP clients. In this readme, we will show integration with both Cursor IDE and Claude Desktop.
This MCP server acts as a bridge between AI interfaces (Cursor IDE or Claude Desktop) and a specialized Contextual AI RAG agent. Queries are routed through the server to the agent, and grounded answers flow back with citations:
Cursor/Claude Desktop → MCP Server → Contextual AI RAG Agent
          ↑                 ↓                  ↓
          └─────────────────┴──────────────────┴─── Response with citations
git clone https://github.com/ContextualAI/contextual-mcp-server.git
cd contextual-mcp-server
python -m venv .venv
source .venv/bin/activate  # On Windows, use `.venv\Scripts\activate`
pip install -e .
The server requires some settings and light customization before use. For example, the single-agent server should be customized with an appropriate docstring for your RAG agent.
The docstring for your query tool is critical as it helps the MCP client understand when to route questions to your RAG agent. Make it specific to your knowledge domain. Here is an example:
A research tool focused on financial data on the largest US firms
or
A research tool focused on technical documents for Omaha semiconductors
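In code, this docstring sits directly on the MCP query tool. The sketch below is illustrative only and assumes a server built with the MCP Python SDK's FastMCP helper; the server name, tool name, and docstring are placeholders to adapt to your own agent and domain:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ContextualAI-RAG")

@mcp.tool()
def query(prompt: str) -> str:
    """A research tool focused on financial data on the largest US firms"""
    # Forward the prompt to your Contextual AI agent here and return its
    # cited answer (see the SDK sketch later in this readme).
    raise NotImplementedError

if __name__ == "__main__":
    mcp.run()  # serves the tool over STDIO by default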
The server also requires the following settings from your RAG Agent:
API_KEY: Your Contextual AI API key
AGENT_ID: Your Contextual AI agent ID
If you'd like to store these in a .env file, you can specify them like so:
cat > .env << EOF
API_KEY=key...
AGENT_ID=...
EOF
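Inside the server, these values can then be read from the environment. A minimal sketch, assuming the python-dotenv package is used to load the .env file (the variable names match the settings above):

import os
from dotenv import load_dotenv

# Load API_KEY and AGENT_ID from .env into the process environment
load_dotenv()

API_KEY = os.environ["API_KEY"]    # Contextual AI API key
AGENT_ID = os.environ["AGENT_ID"]  # ID of the agent to query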
This MCP server can be integrated with a variety of clients. To use with either Cursor IDE or Claude Desktop, create or modify the MCP configuration file in the appropriate location:
First, find the path to your uv installation:
UV_PATH=$(which uv)
echo $UV_PATH
# Example output: /Users/username/miniconda3/bin/uv
cat > mcp.json << EOF
{
  "mcpServers": {
    "ContextualAI-TechDocs": {
      "command": "$UV_PATH", # make sure this is set properly
      "args": [
        "--directory",
        "\${workspaceFolder}", # Will be replaced with your project path
        "run",
        "multi-agent/server.py"
      ]
    }
  }
}
EOF
mkdir -p .cursor/
mv mcp.json .cursor/
Configuration locations:
.cursor/mcp.json in your project directory
~/.cursor/mcp.json for system-wide access
This project uses uv for dependency management, which provides faster and more reliable Python package installation.
The server provides Contextual AI RAG capabilities through the Contextual AI Python SDK, exposing commands that MCP clients such as Cursor IDE and Claude Desktop can call. The current server focuses on the SDK's query command, but you could extend it to support other features such as listing all agents, updating retrieval settings, updating prompts, extracting retrievals, or downloading metrics.
# In Cursor, you might ask:
#   "Show me the code for initiating the RF345 microchip?"
#
# The MCP client will:
#   1. Determine if this should be routed to the MCP server
#
# Then the MCP server will:
#   1. Route the query to the Contextual AI agent
#   2. Retrieve relevant documentation
#   3. Generate a response with specific citations
#   4. Return the formatted answer to Cursor
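Under the hood, the server-side routing step amounts to a single call into the Contextual AI Python SDK. The sketch below is an assumption-laden outline rather than the repository's actual implementation: the client class, the agents.query.create method, and the response fields should all be verified against the official SDK documentation:

import os
from contextual import ContextualAI  # assumed import path for the Contextual AI Python SDK

client = ContextualAI(api_key=os.environ["API_KEY"])

def ask_agent(prompt: str) -> str:
    # Send the user's prompt to the configured RAG agent.
    result = client.agents.query.create(
        agent_id=os.environ["AGENT_ID"],
        messages=[{"role": "user", "content": prompt}],
    )
    # Return the generated answer; the response object is also expected to
    # carry retrieval and citation metadata (field names assumed).
    return result.message.content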
To add new capabilities, create additional functions decorated with @mcp.tool() and give each a clear docstring describing what it does.
Example:
@mcp.tool()
def new_tool(param: str) -> str:
    """Description of what the tool does"""
    result = ...  # Implementation goes here
    return result
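For instance, a small hypothetical tool that reports which agent the server is configured to query could look like this; it only reads the AGENT_ID environment variable and introduces no new SDK calls:

import os

@mcp.tool()
def get_agent_info() -> str:
    """Report the Contextual AI agent ID this server is configured to query."""
    return f"Configured agent: {os.environ.get('AGENT_ID', 'not configured')}"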
For all the capabilities of Contextual AI, please check the official documentation.