# Docs Documentation Expert
AI documentation expert that indexes third-party libraries for accurate, version-aware code assistance.
AI coding assistants often struggle with outdated documentation and hallucinations. The Docs MCP Server solves this by providing a personal, always-current knowledge base for your AI. It indexes third-party documentation from various sources (websites, GitHub, npm, PyPI, local files) and offers powerful, version-aware search tools via the Model Context Protocol (MCP).
This enables your AI agent to access the latest official documentation, dramatically improving the quality and reliability of generated code and integration details. It's free, open-source, runs locally for privacy, and integrates seamlessly into your development workflow.
LLM-assisted coding promises speed and efficiency, but often falls short due to outdated training knowledge, hallucinated APIs, and version ambiguity. The Docs MCP Server addresses these problems by indexing the official documentation you point it at and serving accurate, version-aware results to your AI assistant.
You can run the server via Docker, Docker Compose, or `npx`.

## What is semantic chunking?
Semantic chunking splits documentation into meaningful sections based on structure—like headings, code blocks, and tables—rather than arbitrary text size. Docs MCP Server preserves logical boundaries, keeps code and tables intact, and removes navigation clutter from HTML docs. This ensures LLMs receive coherent, context-rich information for more accurate and relevant answers.
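As a rough illustration of the idea, here is a minimal TypeScript sketch of heading-based chunking that keeps fenced code blocks intact. It is a simplified stand-in, not the server's actual implementation, which also preserves tables and strips HTML navigation clutter.

```ts
// Sketch: split a Markdown document into chunks at heading boundaries,
// keeping fenced code blocks intact. Illustrative only -- not the
// Docs MCP Server's actual chunking implementation.
interface Chunk {
  heading: string;
  content: string;
}

function chunkMarkdown(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { heading: "(preamble)", content: "" };
  let inCodeFence = false;

  for (const line of markdown.split("\n")) {
    if (line.trimStart().startsWith("```")) {
      inCodeFence = !inCodeFence; // never split inside a code fence
    }
    if (!inCodeFence && /^#{1,6}\s/.test(line)) {
      if (current.content.trim()) chunks.push(current);
      current = { heading: line.replace(/^#+\s*/, ""), content: "" };
      continue;
    }
    current.content += line + "\n";
  }
  if (current.content.trim()) chunks.push(current);
  return chunks;
}

// Example: two sections become two chunks; the code fence stays whole.
const doc = "# Install\nRun npm install.\n\n# Usage\n```js\n// # not a heading\n```\n";
console.log(chunkMarkdown(doc).map((c) => c.heading)); // [ 'Install', 'Usage' ]
```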
Get started quickly with Docker Compose, which runs the server and web interface together:
```sh
git clone https://github.com/arabold/docs-mcp-server.git
cd docs-mcp-server
cp .env.example .env  # Edit .env and set your OpenAI API key
docker compose up -d
```
- Use `-d` for detached mode; omit it to see logs in your terminal.
- To rebuild after updating, run `docker compose up -d --build`.

Then add the server to your MCP client configuration:

```json
{
  "mcpServers": {
    "docs-mcp-server": {
      "url": "http://localhost:6280/sse",
      "disabled": false,
      "autoApprove": []
    }
  }
}
```

Restart your AI assistant after updating the config.
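To sanity-check the SSE endpoint outside of an AI assistant, you can connect with the MCP TypeScript SDK and list the server's tools. This is a hedged sketch: it assumes the `@modelcontextprotocol/sdk` package and its current import paths, and the client name is arbitrary.

```ts
// Sketch: connect to the Docs MCP Server over SSE and list its tools.
// Assumes the @modelcontextprotocol/sdk package; import paths may vary
// between SDK versions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  const transport = new SSEClientTransport(new URL("http://localhost:6280/sse"));
  const client = new Client({ name: "docs-smoke-test", version: "0.0.1" });

  await client.connect(transport);
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name)); // the server's search/scrape tools
  await client.close();
}

main().catch(console.error);
```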
Open http://localhost:6281 in your browser to use the web interface.

Benefits: a single command runs the server and web interface together, and configuration is centralized in the `.env` file. To stop everything, run `docker compose down`.
To add library documentation, open http://localhost:6281 in your browser and queue a scrape job for the documentation you want to index. Once a job completes, the docs are searchable via your AI assistant or the Web UI.
You can index documentation from your local filesystem by using a `file://` URL as the source. This works in both the Web UI and CLI.
Examples:

- `https://react.dev/reference/react`
- `file:///Users/me/docs/index.html`
- `file:///Users/me/docs/my-library`
Requirements:

- Only `text/*` files are processed. This includes HTML, Markdown, plain text, and source code files such as `.js`, `.ts`, `.tsx`, and `.css`. Binary files, PDFs, images, and other non-text formats are ignored.
- Use the `file://` prefix for local files and folders.
- If the server runs in Docker, mount the folder into the container and use the container path in your `file://` URL:

```sh
docker run --rm \
  -e OPENAI_API_KEY="your-key" \
  -v /absolute/path/to/docs:/docs:ro \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest \
  scrape mylib file:///docs/my-library
```

Here the source is `file:///docs/my-library` (matching the container path). See the tooltips in the Web UI and CLI help for more details.
Note: The published Docker images support both x86_64 (amd64) and Apple Silicon (arm64).
Alternatively, you can run the server directly with Docker. This method is simple and doesn't require cloning the repository.
Add the following to your MCP client configuration:

```json
{
  "mcpServers": {
    "docs-mcp-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAI_API_KEY",
        "-v",
        "docs-mcp-data:/data",
        "ghcr.io/arabold/docs-mcp-server:latest"
      ],
      "env": {
        "OPENAI_API_KEY": "sk-proj-..." // Your OpenAI API key
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```

Replace `sk-proj-...` with your OpenAI API key, then restart your application.

Docker container settings:
- `-i`: keeps STDIN open for MCP communication.
- `--rm`: removes the container on exit.
- `-e OPENAI_API_KEY`: required.
- `-v docs-mcp-data:/data`: required for persistence.

You can pass any configuration environment variable (see Configuration) using `-e`.
Examples:
```sh
# OpenAI embeddings (default)
docker run -i --rm \
  -e OPENAI_API_KEY="your-key" \
  -e DOCS_MCP_EMBEDDING_MODEL="text-embedding-3-small" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest

# OpenAI-compatible API (Ollama)
docker run -i --rm \
  -e OPENAI_API_KEY="your-key" \
  -e OPENAI_API_BASE="http://localhost:11434/v1" \
  -e DOCS_MCP_EMBEDDING_MODEL="embeddings" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest

# Google Vertex AI
docker run -i --rm \
  -e DOCS_MCP_EMBEDDING_MODEL="vertex:text-embedding-004" \
  -e GOOGLE_APPLICATION_CREDENTIALS="/app/gcp-key.json" \
  -v docs-mcp-data:/data \
  -v /path/to/gcp-key.json:/app/gcp-key.json:ro \
  ghcr.io/arabold/docs-mcp-server:latest

# Google Gemini
docker run -i --rm \
  -e DOCS_MCP_EMBEDDING_MODEL="gemini:embedding-001" \
  -e GOOGLE_API_KEY="your-google-api-key" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest

# AWS Bedrock
docker run -i --rm \
  -e AWS_ACCESS_KEY_ID="your-aws-key" \
  -e AWS_SECRET_ACCESS_KEY="your-aws-secret" \
  -e AWS_REGION="us-east-1" \
  -e DOCS_MCP_EMBEDDING_MODEL="aws:amazon.titan-embed-text-v1" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest

# Azure OpenAI
docker run -i --rm \
  -e AZURE_OPENAI_API_KEY="your-azure-key" \
  -e AZURE_OPENAI_API_INSTANCE_NAME="your-instance" \
  -e AZURE_OPENAI_API_DEPLOYMENT_NAME="your-deployment" \
  -e AZURE_OPENAI_API_VERSION="2024-02-01" \
  -e DOCS_MCP_EMBEDDING_MODEL="microsoft:text-embedding-ada-002" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest
```
Access the web UI at http://localhost:6281:

```sh
docker run --rm \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -v docs-mcp-data:/data \
  -p 6281:6281 \
  ghcr.io/arabold/docs-mcp-server:latest \
  web --port 6281
```
- Map the port with `-p 6281:6281`.
- Pass additional environment variables with `-e` as needed.

Run CLI commands by appending them after the image name:
```sh
docker run --rm \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest \
  <command> [options]
```
Example:
```sh
docker run --rm \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -v docs-mcp-data:/data \
  ghcr.io/arabold/docs-mcp-server:latest \
  list
```
Use the same `docs-mcp-data` volume so the CLI and the server share indexed data. For command help, run:
```sh
docker run --rm ghcr.io/arabold/docs-mcp-server:latest --help
```
You can run the Docs MCP Server without installing or cloning the repo:
```sh
npx @arabold/docs-mcp-server@latest
```
Set the `OPENAI_API_KEY` environment variable:

```sh
OPENAI_API_KEY="sk-proj-..." npx @arabold/docs-mcp-server@latest
```

In your MCP client configuration, replace the `command` and `args` with the `npx` command above.

Note: Data is stored in a temporary directory and will not persist between runs. For persistent storage, use Docker or a local install.
You can run CLI commands directly with npx, without installing the package globally:
```sh
npx @arabold/docs-mcp-server@latest <command> [options]
```
Example:
```sh
npx @arabold/docs-mcp-server@latest list
```
For command help, run:
```sh
npx @arabold/docs-mcp-server@latest --help
```
The Docs MCP Server is configured via environment variables. Set these in your shell, Docker, or MCP client config.
| Variable | Description |
| --- | --- |
| `DOCS_MCP_EMBEDDING_MODEL` | Embedding model to use (see below for options). |
| `OPENAI_API_KEY` | OpenAI API key for embeddings. |
| `OPENAI_API_BASE` | Custom OpenAI-compatible API endpoint (e.g., Ollama). |
| `GOOGLE_API_KEY` | Google API key for Gemini embeddings. |
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to Google service account JSON for Vertex AI. |
| `AWS_ACCESS_KEY_ID` | AWS access key for Bedrock embeddings. |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key for Bedrock embeddings. |
| `AWS_REGION` | AWS region for Bedrock. |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key. |
| `AZURE_OPENAI_API_INSTANCE_NAME` | Azure OpenAI instance name. |
| `AZURE_OPENAI_API_DEPLOYMENT_NAME` | Azure OpenAI deployment name. |
| `AZURE_OPENAI_API_VERSION` | Azure OpenAI API version. |
| `DOCS_MCP_DATA_DIR` | Data directory (default: `./data`). |
| `DOCS_MCP_PORT` | Server port (default: `6281`). |
See examples above for usage.
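To illustrate how these variables resolve, here is a small sketch that reads the server settings with the defaults from the table; it is illustrative only, not the server's actual configuration code.

```ts
// Sketch: resolve settings from environment variables, using the
// defaults documented in the table above. Illustrative only -- not
// the Docs MCP Server's actual configuration code.
const config = {
  dataDir: process.env.DOCS_MCP_DATA_DIR ?? "./data",
  port: Number(process.env.DOCS_MCP_PORT ?? "6281"),
  embeddingModel: process.env.DOCS_MCP_EMBEDDING_MODEL ?? "text-embedding-3-small",
};

console.log(config); // e.g. { dataDir: './data', port: 6281, embeddingModel: 'text-embedding-3-small' }
```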
Set `DOCS_MCP_EMBEDDING_MODEL` to one of:

- `text-embedding-3-small` (default, OpenAI)
- `openai:llama2` (OpenAI-compatible, Ollama)
- `vertex:text-embedding-004` (Google Vertex AI)
- `gemini:embedding-001` (Google Gemini)
- `aws:amazon.titan-embed-text-v1` (AWS Bedrock)
- `microsoft:text-embedding-ada-002` (Azure OpenAI)

For more details, see ARCHITECTURE.md and the examples above.
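Note the `provider:model` naming convention in the list above, with bare model names defaulting to OpenAI. Below is a minimal sketch of that parsing rule; it is an inference from the list, not the server's actual code.

```ts
// Sketch: interpret a DOCS_MCP_EMBEDDING_MODEL value as provider:model,
// defaulting to OpenAI when no provider prefix is given. Based on the
// naming convention above, not the server's actual source.
function parseEmbeddingModel(value: string): { provider: string; model: string } {
  const idx = value.indexOf(":");
  if (idx === -1) return { provider: "openai", model: value };
  return { provider: value.slice(0, idx), model: value.slice(idx + 1) };
}

console.log(parseEmbeddingModel("text-embedding-3-small")); // { provider: 'openai', ... }
console.log(parseEmbeddingModel("vertex:text-embedding-004")); // { provider: 'vertex', ... }
```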
To develop or contribute to the Docs MCP Server, clone the repository and submit a pull request with your changes.
For questions or suggestions, open an issue.
For details on the project's architecture and design principles, please see ARCHITECTURE.md.
Notably, the vast majority of this project's code was generated by the AI assistant Cline, leveraging the capabilities of this very MCP server.
This project is licensed under the MIT License. See LICENSE for details.