
# Docy

AI assistant documentation access server providing real-time tech docs integration.
Supercharge your AI assistant with instant access to technical documentation.
Docy gives your AI direct access to the technical documentation it needs, right when it needs it. No more outdated information, broken links, or rate limits - just accurate, real-time documentation access for more precise coding assistance.
Note: Claude may default to using its built-in WebFetchTool instead of Docy. To explicitly request Docy's functionality, use a callout like: "Please use Docy to find..."
A Model Context Protocol server that provides documentation access capabilities. This server enables LLMs to search and retrieve content from documentation websites by scraping them with crawl4ai. Built with FastMCP v2.
Here are examples of how Docy can help with common documentation tasks:
# Verify implementation against documentation
Are we implementing Crawl4Ai scrape results correctly? Let's check the documentation.
# Explore API usage patterns
What do the docs say about using mcp.tool? Show me examples from the documentation.
# Compare implementation options
How should we structure our data according to the React documentation? What are the best practices?
With Docy, Claude Code can directly access and analyze documentation from configured sources, making it more effective at providing accurate, documentation-based guidance.
To ensure Claude Code prioritizes Docy for documentation-related tasks, add the following guidelines to your project's CLAUDE.md file:
## Documentation Guidelines
- When checking documentation, prefer using Docy over WebFetchTool
- Use list_documentation_sources_tool to discover available documentation sources
- Use fetch_documentation_page to retrieve full documentation pages
- Use fetch_document_links to discover related documentation
Adding these instructions to your CLAUDE.md file helps Claude Code consistently use Docy instead of its built-in web fetch capabilities when working with documentation.
## Tools

- `list_documentation_sources_tool`: List all available documentation sources
- `fetch_documentation_page`: Fetch the content of a documentation page by URL as markdown
  - `url` (string, required): The URL to fetch content from
- `fetch_document_links`: Fetch all links from a documentation page
  - `url` (string, required): The URL to fetch links from

## Resources

- `documentation_sources`
- `documentation_page`
  - `url` (string, required): URL of the specific documentation page to get
- `documentation_links`
  - `url` (string, required): URL of the documentation page to get links from

## Installation

When using `uv`, no specific installation is needed. We will use `uvx` to directly run `mcp-server-docy`.
Alternatively, you can install `mcp-server-docy` via pip:

```shell
pip install mcp-server-docy
```
After installation, you can run it as a script using:

```shell
DOCY_DOCUMENTATION_URLS="https://docs.crawl4ai.com/,https://react.dev/" python -m mcp_server_docy
```
You can also use the Docker image:

```shell
docker pull oborchers/mcp-server-docy:latest
docker run -i --rm -e DOCY_DOCUMENTATION_URLS="https://docs.crawl4ai.com/,https://react.dev/" oborchers/mcp-server-docy
```
For teams or multi-project development, see `server/README.md` for instructions on running a persistent SSE server that can be shared across multiple projects. This setup lets you maintain a single Docy instance with shared documentation URLs and cache.
Add to your Claude settings:

Using uvx:

```json
"mcpServers": {
  "docy": {
    "command": "uvx",
    "args": ["mcp-server-docy"],
    "env": {
      "DOCY_DOCUMENTATION_URLS": "https://docs.crawl4ai.com/,https://react.dev/"
    }
  }
}
```

Using Docker:

```json
"mcpServers": {
  "docy": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "oborchers/mcp-server-docy:latest"],
    "env": {
      "DOCY_DOCUMENTATION_URLS": "https://docs.crawl4ai.com/,https://react.dev/"
    }
  }
}
```

Using pip installation:

```json
"mcpServers": {
  "docy": {
    "command": "python",
    "args": ["-m", "mcp_server_docy"],
    "env": {
      "DOCY_DOCUMENTATION_URLS": "https://docs.crawl4ai.com/,https://react.dev/"
    }
  }
}
```
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.

Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
Note that the `mcp` key is needed when using the `mcp.json` file.

Using uvx:

```json
{
  "mcp": {
    "servers": {
      "docy": {
        "command": "uvx",
        "args": ["mcp-server-docy"],
        "env": {
          "DOCY_DOCUMENTATION_URLS": "https://docs.crawl4ai.com/,https://react.dev/"
        }
      }
    }
  }
}
```

Using Docker:

```json
{
  "mcp": {
    "servers": {
      "docy": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "oborchers/mcp-server-docy:latest"],
        "env": {
          "DOCY_DOCUMENTATION_URLS": "https://docs.crawl4ai.com/,https://react.dev/"
        }
      }
    }
  }
}
```
The application can be configured using environment variables:

- `DOCY_DOCUMENTATION_URLS` (string): Comma-separated list of URLs to documentation sites to include (e.g., "https://docs.crawl4ai.com/,https://react.dev/")
- `DOCY_DOCUMENTATION_URLS_FILE` (string): Path to a file containing documentation URLs, one per line (default: ".docy.urls")
- `DOCY_CACHE_TTL` (integer): Cache time-to-live in seconds (default: 432000)
- `DOCY_CACHE_DIRECTORY` (string): Path to the cache directory (default: ".docy.cache")
- `DOCY_USER_AGENT` (string): Custom User-Agent string for HTTP requests
- `DOCY_DEBUG` (boolean): Enable debug logging ("true", "1", "yes", or "y")
- `DOCY_SKIP_CRAWL4AI_SETUP` (boolean): Skip running the crawl4ai-setup command at startup ("true", "1", "yes", or "y")
- `DOCY_TRANSPORT` (string): Transport protocol to use (options: "sse" or "stdio"; default: "stdio")
- `DOCY_HOST` (string): Host address to bind the server to (default: "127.0.0.1")
- `DOCY_PORT` (integer): Port to run the server on (default: 8000)

Environment variables can be set directly or via a `.env` file.
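To make the boolean and list conventions above concrete, here is a minimal Python sketch of how such values could be parsed. The helper names are hypothetical illustrations, not the server's actual API; only the accepted truthy values and the comma-separated format come from the documentation above.

```python
import os

# Values the docs list as truthy for DOCY_DEBUG and DOCY_SKIP_CRAWL4AI_SETUP
TRUTHY = {"true", "1", "yes", "y"}


def parse_urls(value: str) -> list[str]:
    """Split a comma-separated DOCY_DOCUMENTATION_URLS value into clean URLs."""
    return [u.strip() for u in value.split(",") if u.strip()]


def parse_bool(value: str) -> bool:
    """Interpret a boolean flag the way the docs describe (case-insensitive)."""
    return value.strip().lower() in TRUTHY


# Example: read the variables with the documented defaults in mind
urls = parse_urls(os.environ.get("DOCY_DOCUMENTATION_URLS", ""))
debug = parse_bool(os.environ.get("DOCY_DEBUG", "false"))
```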
As an alternative to setting the `DOCY_DOCUMENTATION_URLS` environment variable, you can create a `.docy.urls` file in your project directory with one URL per line:

```
https://docs.crawl4ai.com/
https://react.dev/
# Lines starting with # are treated as comments
https://docs.python.org/3/
```
This approach is especially useful for:
The server will first check for URLs in the `DOCY_DOCUMENTATION_URLS` environment variable; if none are found, it will look for the `.docy.urls` file.
When using the `.docy.urls` file for documentation sources, the server implements a hot-reload mechanism that reads the file on each request rather than caching the URLs. This means you can:

- Edit the `.docy.urls` file while the server is running
- See the changes reflected immediately in `list_documentation_sources_tool` or other documentation tools

This is particularly useful during development or when you need to quickly add new documentation sources to a running server.
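The hot-reload behavior amounts to re-reading and re-parsing the file on every call instead of caching the result. A minimal Python sketch of that idea (the function name is hypothetical, not the server's actual code; the file format — one URL per line, `#` comments, blank lines ignored — is as documented above):

```python
from pathlib import Path


def load_documentation_urls(path: str = ".docy.urls") -> list[str]:
    """Re-read the URL file on every call so edits take effect immediately."""
    file = Path(path)
    if not file.exists():
        return []
    urls = []
    for line in file.read_text().splitlines():
        line = line.strip()
        # Skip blank lines and comment lines starting with '#'
        if line and not line.startswith("#"):
            urls.append(line)
    return urls
```

Because nothing is memoized, appending a URL to the file between two calls changes the result of the second call — which is exactly the hot-reload property described above.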
The URLs you configure should ideally point to documentation index or introduction pages that link out to the rest of the site. Documentation sites with well-structured subpages are highly recommended: instead of loading an entire documentation site at once, the LLM can start at the index page, identify the relevant section, and then navigate to specific subpages as needed.
The MCP server automatically caches documentation content to improve performance:

- Content fetched from the configured `DOCY_DOCUMENTATION_URLS` is cached with a time-to-live set by the `DOCY_CACHE_TTL` environment variable
- The cache is persisted to disk using the `diskcache` library, in the directory set by the `DOCY_CACHE_DIRECTORY` environment variable (default: ".docy.cache")

While most content is cached for performance, there is a specific exception: when using the `.docy.urls` file, the list of documentation sources is never cached — the file is re-read on each request to support hot-reloading of URLs. This hybrid approach offers both performance benefits for content access and flexibility for documentation source management.
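To illustrate the TTL behavior, here is a minimal in-memory sketch of a time-to-live cache. This is not the server's actual code — Docy persists its cache to disk with `diskcache` — but it shows what the `DOCY_CACHE_TTL` setting controls: how long a fetched page is served from cache before it is fetched again.

```python
import time


class TTLCache:
    """Toy in-memory TTL cache illustrating DOCY_CACHE_TTL semantics."""

    def __init__(self, ttl_seconds: float = 432000):  # default matches DOCY_CACHE_TTL
        self.ttl = ttl_seconds
        self._store = {}  # url -> (content, expiry timestamp)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None
        content, expires_at = entry
        if time.monotonic() >= expires_at:
            # Entry expired: drop it so the caller re-fetches the page
            del self._store[url]
            return None
        return content

    def set(self, url, content):
        self._store[url] = (content, time.monotonic() + self.ttl)
```

With the default TTL of 432000 seconds (5 days), a page fetched once is reused for subsequent requests until the entry expires.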
Run the development server with:

```shell
fastmcp dev src/mcp_server_docy/__main__.py --with-editable .
```

Then open the MCP Inspector at http://127.0.0.1:6274.

Alternatively, run the server directly with `uv`:

```shell
uv run --with fastmcp --with-editable /Users/oliverborchers/Desktop/Code.nosync/mcp-server-docy --with crawl4ai --with loguru --with diskcache --with pydantic-settings fastmcp run src/mcp_server_docy/__main__.py
```
You can use the MCP Inspector to debug the server. For uvx installations:

```shell
DOCY_DOCUMENTATION_URLS="https://docs.crawl4ai.com/" npx @modelcontextprotocol/inspector uvx mcp-server-docy
```

Or if you've installed the package in a specific directory or are developing on it:

```shell
cd path/to/docy
DOCY_DOCUMENTATION_URLS="https://docs.crawl4ai.com/" npx @modelcontextprotocol/inspector uv run mcp-server-docy
```
If you encounter errors like "ERROR Tool not found for mcp__docy__fetch_documentation_page" in the Claude Code CLI, follow these steps:

1. Create a `.docy.urls` file in your current directory with your documentation URLs:

```
https://docs.crawl4ai.com/
https://react.dev/
```

2. Run the server as a standalone SSE service:

```shell
docker run -i --rm -p 8000:8000 \
  -e DOCY_TRANSPORT=sse \
  -e DOCY_HOST=0.0.0.0 \
  -e DOCY_PORT=8000 \
  -v "$(pwd)/.docy.urls:/app/.docy.urls" \
  oborchers/mcp-server-docy
```

3. Update your `.mcp.json` to use the SSE endpoint:

```json
{
  "mcp": {
    "servers": {
      "docy": {
        "type": "sse",
        "url": "http://localhost:8000/sse"
      }
    }
  }
}
```

This configuration uses the `.docy.urls` file instead of environment variables for documentation sources. The SSE transport is recommended when running the server as a standalone service that needs to be accessed over HTTP, which is particularly useful for Docker deployments.
The project uses GitHub Actions for automated releases:

1. Update the version in `pyproject.toml`
2. Create a tag: `git tag vX.Y.Z` (e.g., `git tag v0.1.0`)
3. Push the tag: `git push --tags`

This will automatically:

- Verify that the version in `pyproject.toml` matches the tag
- Build and publish the release, including the Docker images `oborchers/mcp-server-docy:latest` and `oborchers/mcp-server-docy:X.Y.Z`
We encourage contributions to help expand and improve mcp-server-docy. Whether you want to add new features, enhance existing functionality, or improve documentation, your input is valuable.
For examples of other MCP servers and implementation patterns, see: https://github.com/modelcontextprotocol/servers
Pull requests are welcome! Feel free to contribute new ideas, bug fixes, or enhancements to make mcp-server-docy even more powerful and useful.
mcp-server-docy is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.