
Cognee
Local AI knowledge management server for Claude Desktop
cognee-mcp - Run cognee’s memory engine as a Model Context Protocol server
Demo · Learn more · Join Discord · Join r/AIMemory
Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.
Please refer to our documentation here for further information.
git clone https://github.com/topoteretes/cognee.git
cd cognee/cognee-mcp
pip install uv
uv sync --dev --all-extras --reinstall
source .venv/bin/activate
export LLM_API_KEY="YOUR_OPENAI_API_KEY"
python src/server.py
or stream responses over SSE
python src/server.py --transport sse
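To sanity-check the SSE transport from another terminal, you can stream the endpoint with curl. The /sse path and port 8000 are assumptions based on common MCP SSE defaults and the Docker port mapping below, not values this README confirms:
curl -N http://localhost:8000/sse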
You can do more advanced configuration by creating a .env file from our template. To use different LLM providers or database backends, and for more details, check out our documentation.
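As a minimal sketch, a .env might look like this. LLM_API_KEY comes from this README; the embedding variables are copied from the run script shown later, so treat anything beyond LLM_API_KEY as an assumption and check the template for the authoritative list:
LLM_API_KEY="YOUR_OPENAI_API_KEY"
# Optional: local embeddings, using the same names as the run script below
EMBEDDING_PROVIDER="fastembed"
EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2"
EMBEDDING_DIMENSIONS=384
EMBEDDING_MAX_TOKENS=256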
If you’d rather run cognee-mcp in a container, you have two options:
1. Build the image yourself. Create a .env file containing only your LLM_API_KEY (and your chosen settings), then:
docker rmi cognee/cognee-mcp:main || true
docker build --no-cache -f cognee-mcp/Dockerfile -t cognee/cognee-mcp:main .
docker run --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
2. Pull the prebuilt image from Docker Hub and run it:
# With your .env file
docker run --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
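If you would rather not keep a .env file around, the same variables can be passed inline with Docker's standard -e flag instead of --env-file (plain Docker behavior, nothing cognee-specific):
docker run -e LLM_API_KEY="YOUR_OPENAI_API_KEY" -p 8000:8000 --rm -it cognee/cognee-mcp:main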
The MCP server exposes its functionality through tools. Call them from any MCP client (Cursor, Claude Desktop, Cline, Roo and more).
cognify: Turns your data into a structured knowledge graph and stores it in memory
codify: Analyses a code repository, builds a code graph, and stores it in memory
search: Query memory – supports GRAPH_COMPLETION, RAG_COMPLETION, CODE, CHUNKS, INSIGHTS
prune: Reset cognee for a fresh start
cognify_status / codify_status: Track pipeline progress
Remember – use the CODE search type to query your code graph. For huge repos, run codify on modules incrementally and cache results.
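For a quick smoke test outside an IDE, you can call a tool directly through the MCP Inspector's CLI mode (run from the activated environment in cognee-mcp). The --method, --tool-name, and --tool-arg flags are part of the Inspector's documented CLI; the argument names search_query and search_type are assumptions about cognee's tool schema, so list the tools first and adjust if the call errors:
npx @modelcontextprotocol/inspector --cli python src/server.py --method tools/list
npx @modelcontextprotocol/inspector --cli python src/server.py --method tools/call --tool-name search --tool-arg search_type=GRAPH_COMPLETION --tool-arg search_query="What do you remember?"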
After you run the server as described in the Quick Start, create a run script for cognee. Here is a simple example:
#!/bin/bash
export ENV=local
export TOKENIZERS_PARALLELISM=false
export EMBEDDING_PROVIDER="fastembed"
export EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2"
export EMBEDDING_DIMENSIONS=384
export EMBEDDING_MAX_TOKENS=256
export LLM_API_KEY=your-OpenAI-API-key
uv --directory /{cognee_root_path}/cognee-mcp run cognee
Remember to replace your-OpenAI-API-key and {cognee_root_path} with correct values.
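Before wiring the script into an MCP client, you can launch it once by hand to confirm it starts cleanly (stop it with Ctrl+C); the placeholder path here is the same one used in the Cursor config below:
sh /{path-to-your-script}/run-cognee.sh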
Install Cursor and navigate to Settings → MCP Tools → New MCP Server
Cursor will open the mcp.json file in a new tab. Configure your cognee MCP server by copy-pasting the following:
{
  "mcpServers": {
    "cognee": {
      "command": "sh",
      "args": [
        "/{path-to-your-script}/run-cognee.sh"
      ]
    }
  }
}
Remember to replace {path-to-your-script} with the path to the script you created in the first step.
That's it! You can refresh the server from the toggle next to your new cognee server. Check the green dot and the available tools to verify your server is running.
Now you can open your Cursor Agent and start using cognee tools from it via prompting.
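Claude Desktop understands the same mcpServers schema in its claude_desktop_config.json (on macOS typically ~/Library/Application Support/Claude/claude_desktop_config.json; that location is standard Claude Desktop behavior, not something this README specifies). A sketch reusing the run script above:
{
  "mcpServers": {
    "cognee": {
      "command": "sh",
      "args": [
        "/{path-to-your-script}/run-cognee.sh"
      ]
    }
  }
}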
To use the debugger, run:
mcp dev src/server.py
Open the inspector with a timeout (in milliseconds) passed as a query parameter:
http://localhost:5173?timeout=120000
To apply new changes while developing cognee, regenerate the lock file in the cognee folder, reinstall, and restart the dev server:
poetry lock
uv sync --dev --all-extras --reinstall
mcp dev src/server.py
To use a local cognee build, uncomment the following line in the cognee-mcp pyproject.toml file and set the cognee root path:
#"cognee[postgres,codegraph,gemini,huggingface,docs,neo4j] @ file:/Users/<username>/Desktop/cognee"
Remember to replace file:/Users/<username>/Desktop/cognee with your actual cognee root path.
Then install dependencies with uv in the mcp folder:
uv sync --reinstall
We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.