
AI Memory
Production-ready MCP server for AI semantic memory management with PostgreSQL and vector search
A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.
📖 System Prompt Available: See SYSTEM_PROMPT.md for a comprehensive guide on how to instruct AI models to use this memory system effectively. This prompt helps models understand when and how to use memory tools, especially for proactive memory retrieval.
npm install -g mcp-ai-memory
bun install
CREATE DATABASE mcp_ai_memory;
\c mcp_ai_memory
CREATE EXTENSION IF NOT EXISTS vector;
# Create .env with your database credentials
touch .env
bun run migrate
bun run dev
bun run build
bun run start
If you see an error like:
Failed to generate embedding: Error: Embedding dimension mismatch: Model produces 384-dimensional embeddings, but database expects 768
This occurs when the embedding model changes between sessions. To fix:
Option 1: Reset and Re-embed (Recommended for new installations)
# Clear existing memories and start fresh
psql -d your_database -c "TRUNCATE TABLE memories CASCADE;"
Option 2: Specify a Consistent Model
Add EMBEDDING_MODEL to your Claude Desktop config:
{ "mcpServers": { "memory": { "command": "npx", "args": ["-y", "mcp-ai-memory"], "env": { "MEMORY_DB_URL": "postgresql://...", "EMBEDDING_MODEL": "Xenova/all-mpnet-base-v2" } } } }
Common models:
- Xenova/all-mpnet-base-v2 (768 dimensions - default, best quality)
- Xenova/all-MiniLM-L6-v2 (384 dimensions - smaller/faster)

Option 3: Run Migration for Flexible Dimensions

If you're using the source version:
bun run migrate
This allows mixing different embedding dimensions in the same database.
Ensure your PostgreSQL has the pgvector extension:
CREATE EXTENSION IF NOT EXISTS vector;
💡 For Best Results: Include the SYSTEM_PROMPT.md content in your Claude Desktop system prompt or initial conversation to help Claude understand how to use the memory tools effectively.
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{ "mcpServers": { "memory": { "command": "npx", "args": ["-y", "mcp-ai-memory"], "env": { "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db" } } } }
{ "mcpServers": { "memory": { "command": "npx", "args": ["-y", "mcp-ai-memory"], "env": { "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db", "REDIS_URL": "redis://localhost:6379", "EMBEDDING_MODEL": "Xenova/all-MiniLM-L6-v2", "LOG_LEVEL": "info" } } } }
| Variable | Description | Default |
|---|---|---|
| DATABASE_URL | PostgreSQL connection string | Required |
| REDIS_URL | Redis connection string (optional) | None - uses in-memory cache |
| EMBEDDING_MODEL | Transformers.js model | Xenova/all-MiniLM-L6-v2 |
| LOG_LEVEL | Logging level | info |
| CACHE_TTL | Cache TTL in seconds | 3600 |
| MAX_MEMORIES_PER_QUERY | Max results per search | 10 |
| MIN_SIMILARITY_SCORE | Min similarity threshold | 0.5 |
💡 Token Efficiency: Default limits are set to 10 results to optimize token usage. Increase only when needed.
Tools (see the example call after the lists below):
- memory_search - SEARCH FIND RECALL - Search stored information using natural language (USE THIS FIRST! Default limit: 10)
- memory_list - LIST BROWSE SHOW - List all memories chronologically (fallback when search fails, default limit: 10)
- memory_store - STORE SAVE REMEMBER - Store new information after checking for duplicates
- memory_update - UPDATE MODIFY EDIT - Update existing memory metadata
- memory_delete - DELETE REMOVE FORGET - Delete specific memories
- memory_batch - BATCH BULK IMPORT - Store multiple memories efficiently
- memory_batch_delete - Delete multiple memories at once
- memory_graph_search - GRAPH RELATED - Search with relationship traversal (alias for memory_traverse)
- memory_consolidate - MERGE CLUSTER - Group similar memories
- memory_stats - STATS INFO - Database statistics
- memory_relate - LINK CONNECT - Create memory relationships
- memory_unrelate - UNLINK DISCONNECT - Remove relationships
- memory_get_relations - Show all relationships for a memory
- memory_traverse - TRAVERSE EXPLORE - Traverse memory graph with BFS/DFS algorithms
- memory_graph_analysis - ANALYZE CONNECTIONS - Analyze graph connectivity and relationship patterns
- memory_decay_status - DECAY STATUS - Check decay status of a memory
- memory_preserve - PRESERVE PROTECT - Preserve important memories from decay

Resources:
- memory://stats - Database statistics
- memory://types - Available memory types
- memory://tags - All unique tags
- memory://relationships - Memory relationships
- memory://clusters - Memory clusters

Prompts:
- load-context - Load relevant context for a task
- memory-summary - Generate topic summaries
- conversation-context - Load conversation history
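As a concrete example, a client built on the MCP TypeScript SDK could invoke memory_search roughly as sketched below. This is illustrative only: the query and limit argument names are assumed from the defaults described above, so check the tool's input schema (via tools/list) before relying on them.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio, the same way Claude Desktop does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "mcp-ai-memory"],
  env: { MEMORY_DB_URL: "postgresql://user:password@localhost:5432/mcp_ai_memory" },
});

const client = new Client({ name: "memory-example", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Search first (per the guidance above); "query" and "limit" are assumed
// argument names -- verify against the tool's published input schema.
const result = await client.callTool({
  name: "memory_search",
  arguments: { query: "user name preferences personal information", limit: 10 },
});
console.log(result.content);

await client.close();
```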
Project structure:

src/
├── server.ts # MCP server implementation
├── types/ # TypeScript definitions
├── schemas/ # Zod validation schemas
├── services/ # Business logic
├── database/ # Kysely migrations and client
└── config/ # Configuration management
Environment variables (.env):

# Required
MEMORY_DB_URL=postgresql://user:password@localhost:5432/mcp_ai_memory

# Optional - Caching (falls back to in-memory if Redis unavailable)
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600 # 1 hour default cache
EMBEDDING_CACHE_TTL=86400 # 24 hours for embeddings
SEARCH_CACHE_TTL=3600 # 1 hour for search results
MEMORY_CACHE_TTL=7200 # 2 hours for individual memories

# Optional - Model & Performance
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2
LOG_LEVEL=info
MAX_CONTENT_SIZE=1048576
DEFAULT_SEARCH_LIMIT=10 # Default 10 for token efficiency
DEFAULT_SIMILARITY_THRESHOLD=0.7

# Optional - Async Processing (requires Redis)
ENABLE_ASYNC_PROCESSING=true # Enable background job processing
BULL_CONCURRENCY=3 # Worker concurrency
ENABLE_REDIS_CACHE=true # Enable Redis caching
The server implements a two-tier caching strategy: Redis is used as the shared cache when REDIS_URL is configured, with an in-memory cache as the fallback when Redis is unavailable. Separate TTLs control how long embeddings, search results, and individual memories are cached (EMBEDDING_CACHE_TTL, SEARCH_CACHE_TTL, and MEMORY_CACHE_TTL above).
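The sketch below illustrates the general idea only, not the server's actual cache service; the class and method names are invented for the example, and it assumes the ioredis client.

```typescript
import Redis from "ioredis";

// Illustrative only -- not the project's cache code. Tier 1 is an
// in-process Map; tier 2 is Redis when REDIS_URL is configured.
class TwoTierCache {
  private local = new Map<string, { value: string; expiresAt: number }>();
  private redis = process.env.REDIS_URL ? new Redis(process.env.REDIS_URL) : null;

  async get(key: string): Promise<string | null> {
    const hit = this.local.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // tier 1 hit
    if (!this.redis) return null;                            // in-memory only mode
    const value = await this.redis.get(key);                 // tier 2 lookup
    if (value !== null) {
      this.local.set(key, { value, expiresAt: Date.now() + 60_000 }); // warm tier 1
    }
    return value;
  }

  async set(key: string, value: string, ttlSeconds = 3600): Promise<void> {
    this.local.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    if (this.redis) await this.redis.set(key, value, "EX", ttlSeconds); // shared tier
  }
}
```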
When Redis is available and ENABLE_ASYNC_PROCESSING=true, the server uses BullMQ for background job processing:
# Start all workers
bun run workers

# Or start individual workers
bun run worker:embedding # Embedding generation worker
bun run worker:batch # Batch import and consolidation worker

# Test async processing
bun run test:async
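For orientation, a BullMQ worker generally follows the pattern sketched below. This is a generic example rather than this project's worker code: the queue name "embedding" and the job payload shape are assumptions; see the source for the real workers.

```typescript
import { Worker } from "bullmq";
import IORedis from "ioredis";

// Generic BullMQ worker sketch -- queue name and job data shape are hypothetical.
// BullMQ workers require maxRetriesPerRequest: null on their Redis connection.
const connection = new IORedis(process.env.REDIS_URL ?? "redis://localhost:6379", {
  maxRetriesPerRequest: null,
});

const worker = new Worker(
  "embedding", // hypothetical queue name
  async (job) => {
    const { memoryId, content } = job.data as { memoryId: string; content: string };
    // ...generate the embedding for `content` and persist it for `memoryId`...
    return { memoryId, embedded: true };
  },
  {
    connection,
    concurrency: Number(process.env.BULL_CONCURRENCY ?? 3), // mirrors BULL_CONCURRENCY above
  }
);

worker.on("completed", (job) => console.log(`job ${job.id} completed`));
worker.on("failed", (job, err) => console.error(`job ${job?.id} failed: ${err.message}`));
```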
The memory_stats tool includes queue statistics when async processing is enabled.
bun run typecheck
bun run lint
The memory tools include enhanced descriptions with keywords to help models understand when to use each tool. However, for best results with models like Gemma3, Qwen, or other open-source models:
- Use memory_search FIRST before any operation
- Use memory_list as a fallback when search returns no results

For example, include an instruction like this in the model's system prompt:

You have access to a memory system. ALWAYS start by using memory_search with query="user name preferences personal information" to check for stored user details. If no results, use memory_list to see recent memories. Default limits are 10 results for token efficiency - only increase if needed. Follow the patterns in the system prompt for best results.
The project includes a comprehensive test suite.

Run tests with bun test.
License: MIT