LLM.txt Directory
Extracts and serves context from llm.txt files for AI model understanding.
A Model Context Protocol (MCP) server that extracts and serves context from llm.txt files, enabling AI models to understand file structure, dependencies, and code relationships in development environments. This server provides comprehensive access to the LLM.txt Directory, supporting file listing, content retrieval, and advanced multi-query search capabilities.
Cached data is stored in the following locations:
Windows: %LOCALAPPDATA%\llm-txt-mcp
macOS: ~/Library/Caches/llm-txt-mcp
Linux: ~/.cache/llm-txt-mcp
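For illustration only, the per-OS paths above could be resolved in Node.js roughly as follows. This is a sketch, not the server's actual code; cacheDir is a hypothetical helper name.

import * as os from "node:os";
import * as path from "node:path";

// Hypothetical helper mapping process.platform to the cache paths listed above.
function cacheDir(): string {
  switch (process.platform) {
    case "win32": // %LOCALAPPDATA%\llm-txt-mcp
      return path.join(
        process.env.LOCALAPPDATA ?? path.join(os.homedir(), "AppData", "Local"),
        "llm-txt-mcp"
      );
    case "darwin": // ~/Library/Caches/llm-txt-mcp
      return path.join(os.homedir(), "Library", "Caches", "llm-txt-mcp");
    default: // ~/.cache/llm-txt-mcp on Linux and other platforms
      return path.join(os.homedir(), ".cache", "llm-txt-mcp");
  }
}

console.log(cacheDir());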
The easiest way to install is using MCP Get, which will automatically configure the server in Claude Desktop:
npx @michaellatman/mcp-get@latest install @mcp-get-community/server-llm-txt
Alternatively, you can configure the server manually by adding the following to your claude_desktop_config.json:
{ "mcpServers": { "llm-txt": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-llm-txt" ] } } }
list_llm_txt
Lists all available LLM.txt files from the directory. Results are cached locally for 24 hours.
Example response:
[ { "id": 1, "url": "https://docs.squared.ai/llms.txt", "name": "AI Squared", "description": "AI Squared provides a data and AI integration platform that helps make intelligent insights accessible to all." } ]
get_llm_txt
Fetches content from an LLM.txt file by ID (obtained from list_llm_txt).
Parameters:
id: The numeric ID of the LLM.txt file
Example response:
{ "id": 1, "url": "https://docs.squared.ai/llms.txt", "name": "AI Squared", "description": "AI Squared provides a data and AI integration platform that helps make intelligent insights accessible to all.", "content": "# AI Squared\n\n## Docs\n\n- [Create Catalog](https://docs.squared.ai/api-reference/catalogs/create_catalog)\n- [Update Catalog](https://docs.squared.ai/api-reference/catalogs/update_catalog)\n..." }
search_llm_txt
Searches for multiple substrings within an LLM.txt file.
Parameters:
id: The numeric ID of the LLM.txt file
queries: Array of strings to search for (case-insensitive)
context_lines (optional): Number of lines to show before and after matches (default: 2)
Example response:
{ "id": 1, "url": "https://docs.squared.ai/llms.txt", "name": "AI Squared", "matches": [ { "lineNumber": 42, "snippet": "- [PostgreSQL](https://docs.squared.ai/guides/data-integration/destinations/database/postgresql): PostgreSQL\n popularly known as Postgres, is a powerful, open-source object-relational database system that uses and extends the SQL language combined with many features that safely store and scale data workloads.\n- [null](https://docs.squared.ai/guides/data-integration/destinations/e-commerce/facebook-product-catalog)", "matchedLine": "- [PostgreSQL](https://docs.squared.ai/guides/data-integration/destinations/database/postgresql): PostgreSQL\n popularly known as Postgres, is a powerful, open-source object-relational database system that uses and extends the SQL language combined with many features that safely store and scale data workloads.", "matchedQueries": ["postgresql", "database"] } ] }
The tools use opaque numeric IDs rather than string IDs such as domain names or slugs. We found that with string IDs, language models were more likely to hallucinate plausible-looking but non-existent LLM.txt files; opaque numeric IDs encourage models to actually check the list of available files first rather than guessing at possible IDs.
To run in development mode with automatic recompilation:
npm install
npm run watch
License: MIT