Graphiti
An MCP server for building and querying temporally-aware knowledge graphs.
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
This is an experimental Model Context Protocol (MCP) server implementation for Graphiti. The MCP server exposes Graphiti's key functionality through the MCP protocol, allowing AI assistants to interact with Graphiti's knowledge graph capabilities.
The Graphiti MCP server provides comprehensive knowledge graph capabilities, served over an HTTP endpoint at `/mcp/` for broad client compatibility.

To get started, clone the repository:

```bash
git clone https://github.com/getzep/graphiti.git
```
or
```bash
gh repo clone getzep/graphiti
```
For stdio-only clients, note the absolute path to the repository:

```bash
cd graphiti && pwd
```
Install the Graphiti prerequisites.
Configure Claude, Cursor, or other MCP client to use Graphiti with a stdio transport. See the client documentation on where to find their MCP configuration files.
Change to the `mcp_server` directory:

```bash
cd graphiti/mcp_server
```
```bash
docker compose up
```
This starts both FalkorDB and the MCP server in a single container.
Alternative: Run with separate containers using Neo4j:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
The server will be available at http://localhost:8000/mcp/.

For local development, use `uv` to create a virtual environment and install dependencies:

```bash
# Install uv if you don't have it already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create a virtual environment and install dependencies in one step
uv sync

# Optional: Install additional LLM providers (anthropic, gemini, groq, voyage, sentence-transformers)
uv sync --extra providers
```
The server can be configured using a config.yaml file, environment variables, or command-line arguments (in order of precedence).
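For example, a command-line flag overrides the same setting from config.yaml (these flags are documented under the CLI arguments below):

```bash
# config.yaml may set database.provider: "falkordb";
# the flag takes precedence for this run
uv run graphiti_mcp_server.py --config config.yaml --database-provider neo4j
```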
The MCP server comes with sensible defaults:
- FalkorDB as the database provider
- OpenAI as the LLM provider (model `gpt-4.1`)
- HTTP transport (http://localhost:8000/mcp/)

FalkorDB is a Redis-based graph database that comes bundled with the MCP server in a single Docker container. This is the default and recommended setup.
```yaml
database:
  provider: "falkordb"  # Default
  providers:
    falkordb:
      uri: "redis://localhost:6379"
      password: ""             # Optional
      database: "default_db"   # Optional
```
For production use or when you need a full-featured graph database, Neo4j is recommended:
```yaml
database:
  provider: "neo4j"
  providers:
    neo4j:
      uri: "bolt://localhost:7687"
      username: "neo4j"
      password: "your_password"
      database: "neo4j"  # Optional, defaults to "neo4j"
```
FalkorDB is another graph database option based on Redis:
```yaml
database:
  provider: "falkordb"
  providers:
    falkordb:
      uri: "redis://localhost:6379"
      password: ""             # Optional
      database: "default_db"   # Optional
```
The server supports multiple LLM providers (OpenAI, Anthropic, Gemini, Groq) and embedders. Edit config.yaml to configure:
```yaml
server:
  transport: "http"  # Default. Options: stdio, http

llm:
  provider: "openai"  # or "anthropic", "gemini", "groq", "azure_openai"
  model: "gpt-4.1"    # Default model

database:
  provider: "falkordb"  # Default. Options: "falkordb", "neo4j"
```
To use Ollama with the MCP server, configure it as an OpenAI-compatible endpoint:
```yaml
llm:
  provider: "openai"
  model: "gpt-oss:120b"  # or your preferred Ollama model
  api_base: "http://localhost:11434/v1"
  api_key: "ollama"      # dummy key required

embedder:
  provider: "sentence_transformers"  # recommended for local setup
  model: "all-MiniLM-L6-v2"
```
Make sure Ollama is running locally with:

```bash
ollama serve
```
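To sanity-check the endpoint before starting the MCP server, you can list the models exposed through Ollama's OpenAI-compatible API (assuming the default port shown above):

```bash
curl http://localhost:11434/v1/models
```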
Graphiti MCP Server includes built-in entity types for structured knowledge extraction. These entity types are always enabled and configured via the entity_types section in your config.yaml:
The available entity types are defined in config.yaml and can be customized by modifying their descriptions:
```yaml
graphiti:
  entity_types:
    - name: "Preference"
      description: "User preferences, choices, opinions, or selections"
    - name: "Requirement"
      description: "Specific needs, features, or functionality"
    # ... additional entity types
```
The MCP server automatically uses these entity types during episode ingestion to extract and structure information from conversations and documents.
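For example, a description can be narrowed to your domain (the wording below is illustrative):

```yaml
graphiti:
  entity_types:
    - name: "Preference"
      # Customized description; the default wording is shown above
      description: "User preferences about development tools and coding style"
```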
The config.yaml file supports environment variable expansion using ${VAR_NAME} or ${VAR_NAME:default} syntax. Key variables:
- `NEO4J_URI`: URI for the Neo4j database (default: `bolt://localhost:7687`)
- `NEO4J_USER`: Neo4j username (default: `neo4j`)
- `NEO4J_PASSWORD`: Neo4j password (default: `demodemo`)
- `OPENAI_API_KEY`: OpenAI API key (required for OpenAI LLM/embedder)
- `ANTHROPIC_API_KEY`: Anthropic API key (for Claude models)
- `GOOGLE_API_KEY`: Google API key (for Gemini models)
- `GROQ_API_KEY`: Groq API key (for Groq models)
- `AZURE_OPENAI_API_KEY`: Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL
- `AZURE_OPENAI_DEPLOYMENT`: Azure OpenAI deployment name
- `AZURE_OPENAI_EMBEDDINGS_ENDPOINT`: Optional Azure OpenAI embeddings endpoint URL
- `AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT`: Optional Azure OpenAI embeddings deployment name
- `AZURE_OPENAI_API_VERSION`: Optional Azure OpenAI API version
- `USE_AZURE_AD`: Optional; use Azure Managed Identities for authentication
- `SEMAPHORE_LIMIT`: Episode processing concurrency. See Concurrency and LLM Provider 429 Rate Limit Errors

You can set these variables in a .env file in the project directory.
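As a sketch of the expansion syntax, using the Neo4j settings above (an excerpt, not a complete config.yaml):

```yaml
database:
  providers:
    neo4j:
      uri: "${NEO4J_URI:bolt://localhost:7687}"  # falls back to the default
      username: "${NEO4J_USER:neo4j}"
      password: "${NEO4J_PASSWORD}"              # no default; must be set in the environment
```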
To run the Graphiti MCP server with the default FalkorDB setup:
```bash
docker compose up
```
This starts a single container with:
- The MCP server (http://localhost:8000/mcp/)
- FalkorDB (localhost:6379)
- The FalkorDB web UI (http://localhost:3000)

The easiest way to run with Neo4j is using the provided Docker Compose configuration:
```bash
# This starts both Neo4j and the MCP server
docker compose -f docker/docker-compose.neo4j.yaml up
```
If you have Neo4j already running:
```bash
# Set environment variables
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your_password"

# Run with Neo4j
uv run graphiti_mcp_server.py --database-provider neo4j
```
Or use the Neo4j configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-neo4j.yaml
```
To run FalkorDB and the MCP server together with Docker Compose:

```bash
# This starts both FalkorDB (Redis-based) and the MCP server
docker compose -f docker/docker-compose.falkordb.yaml up
```
```bash
# Set environment variables
export FALKORDB_URI="redis://localhost:6379"
export FALKORDB_PASSWORD=""  # If password protected

# Run with FalkorDB
uv run graphiti_mcp_server.py --database-provider falkordb
```
Or use the FalkorDB configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-falkordb.yaml
```
The following command-line arguments are available:

- `--config`: Path to YAML configuration file (default: `config.yaml`)
- `--llm-provider`: LLM provider to use (openai, anthropic, gemini, groq, azure_openai)
- `--embedder-provider`: Embedder provider to use (openai, azure_openai, gemini, voyage)
- `--database-provider`: Database provider to use (falkordb, neo4j); default: falkordb
- `--model`: Model name to use with the LLM client
- `--temperature`: Temperature setting for the LLM (0.0-2.0)
- `--transport`: Choose the transport method (http or stdio; default: http)
- `--group-id`: Set a namespace for the graph (optional). If not provided, defaults to "main"
- `--destroy-graph`: If set, destroys all Graphiti graphs on startup

Graphiti's ingestion pipelines are designed for high concurrency, controlled by the `SEMAPHORE_LIMIT` environment variable. This setting determines how many episodes can be processed simultaneously. Since each episode involves multiple LLM calls (entity extraction, deduplication, summarization), the actual number of concurrent LLM requests will be several times higher.
Default: SEMAPHORE_LIMIT=10 (suitable for OpenAI Tier 3, mid-tier Anthropic)
Recommended values by provider and rate-limit tier:

OpenAI:

- Tier 1 (free): `SEMAPHORE_LIMIT=1-2`
- Tier 2: `SEMAPHORE_LIMIT=5-8`
- Tier 3: `SEMAPHORE_LIMIT=10-15`
- Tier 4+: `SEMAPHORE_LIMIT=20-50`

Anthropic:

- Lower tiers: `SEMAPHORE_LIMIT=5-8`
- Higher tiers: `SEMAPHORE_LIMIT=15-30`

Azure OpenAI:

- Varies with your provisioned quota; consult your deployment limits

Ollama (local):

- `SEMAPHORE_LIMIT=1-5`, depending on hardware

If you encounter 429 rate limit errors, lower this value. Set it in your .env file:
```bash
SEMAPHORE_LIMIT=10  # Adjust based on your LLM provider tier
```
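Conceptually, the limit behaves like a standard asyncio semaphore that bounds how many episodes are in flight at once. A simplified sketch, not Graphiti's actual implementation:

```python
import asyncio
import os

# Read the same environment variable the server uses
SEMAPHORE_LIMIT = int(os.getenv("SEMAPHORE_LIMIT", "10"))
semaphore = asyncio.Semaphore(SEMAPHORE_LIMIT)

async def process_episode(episode: str) -> None:
    async with semaphore:  # at most SEMAPHORE_LIMIT episodes in flight
        # Each episode still fans out into several LLM calls
        # (entity extraction, deduplication, summarization),
        # so concurrent LLM requests exceed this limit.
        await asyncio.sleep(0.1)  # stand-in for the real pipeline

async def main() -> None:
    await asyncio.gather(*(process_episode(f"episode {i}") for i in range(100)))

asyncio.run(main())
```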
The Graphiti MCP server can be deployed using Docker with your choice of database backend. The Dockerfile uses uv for package management, ensuring consistent dependency installation.
A pre-built Graphiti MCP container is available at: zepai/knowledge-graph-mcp
Before running Docker Compose, configure your API keys using a .env file (recommended):
Create a .env file in the mcp_server directory:
```bash
cd graphiti/mcp_server
cp .env.example .env
```
Edit the .env file to set your API keys:
```bash
# Required - at least one LLM provider API key
OPENAI_API_KEY=your_openai_api_key_here

# Optional - other LLM providers
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key
GROQ_API_KEY=your_groq_key

# Optional - embedder providers
VOYAGE_API_KEY=your_voyage_key
```
Important: The .env file must be in the mcp_server/ directory (the parent of the docker/ subdirectory).
All commands must be run from the mcp_server directory to ensure the .env file is loaded correctly:
```bash
cd graphiti/mcp_server
```
Single container with both FalkorDB and MCP server - simplest option:
```bash
docker compose up
```
Separate containers with Neo4j and MCP server:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
Default Neo4j credentials:
- Username: `neo4j`
- Password: `demodemo`
- Bolt URI: `bolt://neo4j:7687` (within the Docker network)
- Browser UI: http://localhost:7474

Alternative setup with separate FalkorDB and MCP server containers:
```bash
docker compose -f docker/docker-compose-falkordb.yml up
```
FalkorDB configuration:
- Port: `6379`
- Web UI: http://localhost:3000
- Connection URI within the Docker network: `redis://falkordb:6379`

Once running, the MCP server is available at:
- MCP endpoint: http://localhost:8000/mcp/
- Health check: http://localhost:8000/health

If you run Docker Compose from the `docker/` subdirectory instead of `mcp_server/`, you'll need to modify the `.env` file path in the compose file:
```yaml
# Change this line in the docker-compose file:
env_file:
  - path: ../.env  # When running from mcp_server/

# To this:
env_file:
  - path: .env     # When running from mcp_server/docker/
```
However, running from the mcp_server/ directory is recommended to avoid confusion.
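To verify the server is up after starting it, query the health endpoint listed above:

```bash
# A healthy server returns HTTP 200
curl http://localhost:8000/health
```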
VS Code with GitHub Copilot Chat extension supports MCP servers. Add to your VS Code settings (.vscode/mcp.json or global settings):
{ "mcpServers": { "graphiti": { "uri": "http://localhost:8000/mcp/", "transport": { "type": "http" } } } }
To use the Graphiti MCP server with other MCP-compatible clients, configure it to connect to the server:
> [!IMPORTANT]
> You will need the Python package manager, `uv`, installed. Please refer to the `uv` install instructions.
>
> Ensure that you set the full path to the `uv` binary and your Graphiti project folder.
{ "mcpServers": { "graphiti-memory": { "transport": "stdio", "command": "/Users/<user>/.local/bin/uv", "args": [ "run", "--isolated", "--directory", "/Users/<user>>/dev/zep/graphiti/mcp_server", "--project", ".", "graphiti_mcp_server.py", "--transport", "stdio" ], "env": { "NEO4J_URI": "bolt://localhost:7687", "NEO4J_USER": "neo4j", "NEO4J_PASSWORD": "password", "OPENAI_API_KEY": "sk-XXXXXXXX", "MODEL_NAME": "gpt-4.1-mini" } } } }
For HTTP transport (default), you can use this configuration:
{ "mcpServers": { "graphiti-memory": { "transport": "http", "url": "http://localhost:8000/mcp/" } } }
The Graphiti MCP server exposes the following tools:
- `add_episode`: Add an episode to the knowledge graph (supports text, JSON, and message formats)
- `search_nodes`: Search the knowledge graph for relevant node summaries
- `search_facts`: Search the knowledge graph for relevant facts (edges between entities)
- `delete_entity_edge`: Delete an entity edge from the knowledge graph
- `delete_episode`: Delete an episode from the knowledge graph
- `get_entity_edge`: Get an entity edge by its UUID
- `get_episodes`: Get the most recent episodes for a specific group
- `clear_graph`: Clear all data from the knowledge graph and rebuild indices
- `get_status`: Get the status of the Graphiti MCP server and Neo4j connection

The Graphiti MCP server can process structured JSON data through the `add_episode` tool with `source="json"`. This allows you to automatically extract entities and relationships from structured data:
```python
add_episode(
    name="Customer Profile",
    episode_body="{\"company\": {\"name\": \"Acme Technologies\"}, \"products\": [{\"id\": \"P001\", \"name\": \"CloudSync\"}, {\"id\": \"P002\", \"name\": \"DataMiner\"}]}",
    source="json",
    source_description="CRM data"
)
```
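Once ingested, the extracted entities and relationships can be retrieved with the search tools listed above. A sketch of typical calls; parameters beyond the query string are omitted here and may differ:

```python
# Find node summaries related to the company ingested above
search_nodes(query="Acme Technologies products")

# Find facts (edges) connecting the extracted entities
search_facts(query="products offered by Acme Technologies")
```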
To integrate the Graphiti MCP Server with the Cursor IDE, follow these steps:
```bash
uv run graphiti_mcp_server.py --group-id <your_group_id>
```
Hint: specify a group_id to namespace graph data. If you do not specify a group_id, the server will use "main" as the group_id.
or
```bash
docker compose up
```
{ "mcpServers": { "graphiti-memory": { "url": "http://localhost:8000/mcp/" } } }
Add the Graphiti rules to Cursor's User Rules. See cursor_rules.md for details.
Kick off an agent session in Cursor.
The integration enables AI assistants in Cursor to maintain persistent memory through Graphiti's knowledge graph capabilities.
The Graphiti MCP Server uses HTTP transport (at endpoint /mcp/). Claude Desktop does not natively support HTTP transport, so you'll need to use a gateway like mcp-remote.
Run the Graphiti MCP server:
```bash
docker compose up

# Or run directly with uv:
uv run graphiti_mcp_server.py
```
(Optional) Install mcp-remote globally:
If you prefer to have mcp-remote installed globally, or if you encounter issues with npx fetching the package, you can install it globally. Otherwise, npx (used in the next step) will handle it for you.
```bash
npm install -g mcp-remote
```
Configure Claude Desktop:
Open your Claude Desktop configuration file (usually claude_desktop_config.json) and add or modify the mcpServers section as follows:
{ "mcpServers": { "graphiti-memory": { // You can choose a different name if you prefer "command": "npx", // Or the full path to mcp-remote if npx is not in your PATH "args": [ "mcp-remote", "http://localhost:8000/mcp/" // The Graphiti server's HTTP endpoint ] } } }
If you already have an mcpServers entry, add graphiti-memory (or your chosen name) as a new key within it.
Restart Claude Desktop for the changes to take effect.
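If Claude Desktop fails to connect, you can exercise the gateway manually with the same arguments used in the configuration:

```bash
npx mcp-remote http://localhost:8000/mcp/
```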
The Graphiti MCP server uses the Graphiti core library, which includes anonymous telemetry collection. When you initialize the Graphiti MCP server, anonymous usage statistics are collected to help improve the framework.
To disable telemetry in the MCP server, set the environment variable:
```bash
export GRAPHITI_TELEMETRY_ENABLED=false
```
Or add it to your .env file:
```bash
GRAPHITI_TELEMETRY_ENABLED=false
```
For complete details about what's collected and why, see the Telemetry section in the main Graphiti README.
This project is licensed under the same license as the parent Graphiti project.