Claude Memory Service

A semantic memory storage service based on ChromaDB
An MCP server providing semantic memory and persistent storage capabilities for Claude Desktop using ChromaDB and sentence transformers. This service enables long-term memory storage with semantic search capabilities, making it ideal for maintaining context across conversations and instances.
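To ground what "semantic memory with ChromaDB and sentence transformers" means in practice, here is a minimal, conceptual sketch of the embed-store-query loop. The model name, collection name, and metadata layout are placeholders, and the real service wraps all of this behind its MCP tools rather than exposing ChromaDB directly.

```python
# Conceptual sketch only: embed text, store it, query it back by meaning.
# Model/collection names and metadata layout are placeholders, not the service's internals.
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")            # placeholder embedding model
client = chromadb.PersistentClient(path="./chroma_db")     # placeholder storage path
memories = client.get_or_create_collection("memories")

text = "Project deadline is May 15th"
memories.add(
    ids=["mem-1"],
    documents=[text],
    embeddings=[model.encode(text).tolist()],
    metadatas=[{"tags": "work,important"}],
)

# Semantic search: the query shares no keywords with the stored text.
hits = memories.query(
    query_embeddings=[model.encode("When is the project due?").tolist()],
    n_results=3,
)
print(hits["documents"][0])
```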
Enhanced `delete_by_tag` to support both single and multiple tags, alongside the new `delete_by_tags` (OR logic) and `delete_by_all_tags` (AND logic) operations.

The enhanced installation script automatically detects your system and installs the appropriate dependencies:
```bash
# Clone the repository
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Run the installation script
python install.py
```
The `install.py` script detects your hardware and installs the appropriate dependencies for your platform.
You can run the Memory Service using Docker:
```bash
# Using Docker Compose (recommended)
docker-compose up

# Using Docker directly
docker build -t mcp-memory-service .
docker run -p 8000:8000 \
  -v /path/to/data:/app/chroma_db \
  -v /path/to/backups:/app/backups \
  mcp-memory-service
```
We provide multiple Docker Compose configurations for different scenarios:
- `docker-compose.yml` - Standard configuration using pip install
- `docker-compose.uv.yml` - Alternative configuration using the UV package manager
- `docker-compose.pythonpath.yml` - Configuration with explicit PYTHONPATH settings

To use an alternative configuration:
```bash
docker-compose -f docker-compose.uv.yml up
```
Windows users may encounter PyTorch installation issues due to platform-specific wheel availability. Use our Windows-specific installation script:
```bash
# After activating your virtual environment
python scripts/install_windows.py
```
This script handles installing a compatible PyTorch build for Windows.
To install Memory Service for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @doobidoo/mcp-memory-service --client claude
```
For comprehensive installation instructions and troubleshooting, see the Installation Guide.
Add the following to your `claude_desktop_config.json` file:
{ "memory": { "command": "uv", "args": [ "--directory", "your_mcp_memory_service_directory", // e.g., "C:\\REPOSITORIES\\mcp-memory-service" "run", "memory" ], "env": { "MCP_MEMORY_CHROMA_PATH": "your_chroma_db_path", // e.g., "C:\\Users\\John.Doe\\AppData\\Local\\mcp-memory\\chroma_db" "MCP_MEMORY_BACKUPS_PATH": "your_backups_path" // e.g., "C:\\Users\\John.Doe\\AppData\\Local\\mcp-memory\\backups" } } }
For Windows users, we recommend using the wrapper script to ensure PyTorch is properly installed:
{ "memory": { "command": "python", "args": [ "C:\\path\\to\\mcp-memory-service\\memory_wrapper.py" ], "env": { "MCP_MEMORY_CHROMA_PATH": "C:\\Users\\YourUsername\\AppData\\Local\\mcp-memory\\chroma_db", "MCP_MEMORY_BACKUPS_PATH": "C:\\Users\\YourUsername\\AppData\\Local\\mcp-memory\\backups" } } }
The wrapper script ensures PyTorch is properly installed before launching the memory server.
The memory service is invoked through natural language commands in your conversations with Claude Desktop. For example, you can ask Claude to remember a piece of information, recall what you told it earlier, or forget memories with a particular tag.
See the Invocation Guide for a complete list of commands and detailed usage examples.
The memory service provides the following operations through the MCP server:
- `store_memory` - Store new information with optional tags
- `retrieve_memory` - Perform semantic search for relevant memories
- `recall_memory` - Retrieve memories using natural language time expressions
- `search_by_tag` - Find memories using specific tags
- `exact_match_retrieve` - Find memories with exact content match
- `debug_retrieve` - Retrieve memories with similarity scores
- `create_backup` - Create database backup
- `get_stats` - Get memory statistics
- `optimize_db` - Optimize database performance
- `check_database_health` - Get database health metrics
- `check_embedding_model` - Verify model status
- `delete_memory` - Delete specific memory by hash
- `delete_by_tag` - Enhanced: Delete memories with specific tag(s); supports both single tags and multiple tags
- `delete_by_tags` - New: Explicitly delete memories containing any of the specified tags (OR logic)
- `delete_by_all_tags` - New: Delete memories containing all specified tags (AND logic)
- `cleanup_duplicates` - Remove duplicate entries
Issue 5 Resolution: Enhanced tag deletion functionality for consistent API design. Previously, `search_by_tag` accepted arrays while `delete_by_tag` only accepted single strings:

```
// Single tag deletion (backward compatible)
delete_by_tag("temporary")

// Multiple tag deletion (new!)
delete_by_tag(["temporary", "outdated", "test"])  // OR logic

// Explicit methods for clarity
delete_by_tags(["tag1", "tag2"])                  // OR logic
delete_by_all_tags(["urgent", "important"])       // AND logic
```
```
// Store memories with tags
store_memory("Project deadline is May 15th", {tags: ["work", "deadlines", "important"]})
store_memory("Grocery list: milk, eggs, bread", {tags: ["personal", "shopping"]})
store_memory("Meeting notes from sprint planning", {tags: ["work", "meetings", "important"]})

// Search by multiple tags (existing functionality)
search_by_tag(["work", "important"])  // Returns memories with either tag

// Enhanced deletion options (new!)
delete_by_tag("temporary")                   // Delete single tag (backward compatible)
delete_by_tag(["temporary", "outdated"])     // Delete memories with any of these tags
delete_by_tags(["personal", "shopping"])     // Explicit multi-tag deletion
delete_by_all_tags(["work", "important"])    // Delete only memories with BOTH tags
```
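To make the OR/AND distinction concrete, here is a minimal sketch of the two matching rules. It illustrates the semantics only, not the service's actual implementation, and the function names are hypothetical.

```python
# Illustration of the two tag-matching rules; not the service's actual implementation.
def matches_any_tag(memory_tags: set[str], query_tags: set[str]) -> bool:
    """OR logic, as in delete_by_tags: at least one query tag is present."""
    return bool(memory_tags & query_tags)

def matches_all_tags(memory_tags: set[str], query_tags: set[str]) -> bool:
    """AND logic, as in delete_by_all_tags: every query tag is present."""
    return query_tags <= memory_tags

memory = {"work", "important"}
print(matches_any_tag(memory, {"personal", "important"}))   # True  (OR)
print(matches_all_tags(memory, {"work", "important"}))      # True  (AND)
print(matches_all_tags(memory, {"work", "urgent"}))         # False (AND)
```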
Configure through environment variables:
```
CHROMA_DB_PATH: Path to ChromaDB storage
BACKUP_PATH: Path for backups
AUTO_BACKUP_INTERVAL: Backup interval in hours (default: 24)
MAX_MEMORIES_BEFORE_OPTIMIZE: Threshold for auto-optimization (default: 10000)
SIMILARITY_THRESHOLD: Default similarity threshold (default: 0.7)
MAX_RESULTS_PER_QUERY: Maximum results per query (default: 10)
BACKUP_RETENTION_DAYS: Number of days to keep backups (default: 7)
LOG_LEVEL: Logging level (default: INFO)

# Hardware-specific environment variables
PYTORCH_ENABLE_MPS_FALLBACK: Enable MPS fallback for Apple Silicon (default: 1)
MCP_MEMORY_USE_ONNX: Use ONNX Runtime for CPU-only deployments (default: 0)
MCP_MEMORY_USE_DIRECTML: Use DirectML for Windows acceleration (default: 0)
MCP_MEMORY_MODEL_NAME: Override the default embedding model
MCP_MEMORY_BATCH_SIZE: Override the default batch size
```
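As a rough illustration of how such settings are typically consumed, the sketch below reads a few of them with their documented defaults. It is an assumption about usage, not a copy of the project's `config.py`, and the fallback paths are placeholders.

```python
# Hedged sketch of reading the documented settings; not the project's actual config.py.
import os

chroma_db_path = os.environ.get("CHROMA_DB_PATH", "./chroma_db")          # placeholder fallback
backup_path = os.environ.get("BACKUP_PATH", "./backups")                  # placeholder fallback
auto_backup_interval = int(os.environ.get("AUTO_BACKUP_INTERVAL", "24"))  # hours
similarity_threshold = float(os.environ.get("SIMILARITY_THRESHOLD", "0.7"))
max_results_per_query = int(os.environ.get("MAX_RESULTS_PER_QUERY", "10"))
log_level = os.environ.get("LOG_LEVEL", "INFO")
```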
| Platform | Architecture | Accelerator | Status |
|----------|--------------|-------------|--------|
| macOS | Apple Silicon (M1/M2/M3) | MPS | ✅ Fully supported |
| macOS | Apple Silicon under Rosetta 2 | CPU | ✅ Supported with fallbacks |
| macOS | Intel | CPU | ✅ Fully supported |
| Windows | x86_64 | CUDA | ✅ Fully supported |
| Windows | x86_64 | DirectML | ✅ Supported |
| Windows | x86_64 | CPU | ✅ Supported with fallbacks |
| Linux | x86_64 | CUDA | ✅ Fully supported |
| Linux | x86_64 | ROCm | ✅ Supported |
| Linux | x86_64 | CPU | ✅ Supported with fallbacks |
| Linux | ARM64 | CPU | ✅ Supported with fallbacks |
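For a rough sense of how a setup like this might pick an accelerator at runtime, here is a small sketch using PyTorch's availability checks. It is an assumption about the general approach; `install.py` and the server perform their own detection, and DirectML requires the separate `torch-directml` package.

```python
# Hedged sketch of accelerator selection with PyTorch; the project's own detection
# (in install.py and the server) is authoritative.
import torch

def pick_device() -> str:
    if torch.cuda.is_available():                       # NVIDIA CUDA or AMD ROCm builds
        return "cuda"
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"                                    # Apple Silicon
    return "cpu"                                        # CPU fallback (optionally ONNX/DirectML)

print(f"Using device: {pick_device()}")
```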
```bash
# Install test dependencies
pip install pytest pytest-asyncio

# Run all tests
pytest tests/

# Run specific test categories
pytest tests/test_memory_ops.py
pytest tests/test_semantic_search.py
pytest tests/test_database.py

# Verify environment compatibility
python scripts/verify_environment_enhanced.py

# Verify PyTorch installation on Windows
python scripts/verify_pytorch_windows.py

# Perform comprehensive installation verification
python scripts/test_installation.py
```
See the Installation Guide for detailed troubleshooting steps.
Common troubleshooting commands and settings:

- `python scripts/install_windows.py`
- `python install.py --force-compatible-deps`
- `python scripts/fix_sitecustomize.py`
- `python scripts/verify_environment_enhanced.py`
- Set `MCP_MEMORY_BATCH_SIZE=4` and try a smaller model
- Set `PYTORCH_ENABLE_MPS_FALLBACK=1`
- `python scripts/test_installation.py`
```
mcp-memory-service/
├── src/mcp_memory_service/  # Core package code
│   ├── __init__.py
│   ├── config.py             # Configuration utilities
│   ├── models/               # Data models
│   ├── storage/               # Storage implementations
│   ├── utils/                 # Utility functions
│   └── server.py              # Main MCP server
├── scripts/                   # Helper scripts
├── memory_wrapper.py          # Windows wrapper script
├── install.py                 # Enhanced installation script
└── tests/                     # Test suite
```
MIT License - See LICENSE file for details
The MCP Memory Service can be extended with various tools and utilities. See Integrations for a list of available options, including: