Memory Agent
An AI-powered memory management MCP server
This is an MCP server for our model driaforall/mem-agent, which can be connected to apps like Claude Desktop or LM Studio to interact with an Obsidian-like memory system.
To install and set up:

- `make check-uv` (skip this step if you already have uv installed).
- `make install`: Installs LM Studio on macOS.
- `make setup`: Opens a file selector and asks you to select the directory where you want to store the memory.
- `make run-agent`: If you're on macOS, this will prompt you to select the precision of the model you want to use. 4-bit is very usable as tested; higher-precision models are more reliable but slower.
- `make generate-mcp-json`: Generates the `mcp.json` file used in the next step.
- For Claude Desktop: copy the generated `mcp.json` to the directory where your `claude_desktop.json` is located, then quit and restart Claude Desktop. Check this guide for detailed instructions.
- For LM Studio: copy the contents of the generated `mcp.json` into LM Studio's `mcp.json`. Check this guide for detailed instructions. If there are problems, change the model name in `.mlx_model_name` (found at the root of this repo) from mem-agent-mlx-4bit or mem-agent-mlx-8bit to mem-agent-mlx@4bit or mem-agent-mlx@8bit, respectively.

The memory directory uses the following structure:
```
memory/
    ├── user.md
    └── entities/
        └── [entity_name_1].md
        └── [entity_name_2].md
        └── ...
```
user.md is the main file that contains information about the user and their relationships, with a link to the corresponding entity file in the format [[entities/[entity_name].md]] for each relationship. This link format should be followed strictly. entities/ is the directory that contains the entity files. An example user.md (a small link-resolution sketch follows these examples):

```markdown
# User Information
- user_name: John Doe
- birth_date: 1990-01-01
- birth_location: New York, USA
- living_location: Enschede, Netherlands
- zodiac_sign: Aquarius

## User Relationships
- company: [[entities/acme_corp.md]]
- mother: [[entities/jane_doe.md]]
```
An example entity file, entities/jane_doe.md:

```markdown
# Jane Doe
- relationship: Mother
- birth_date: 1965-01-01
- birth_location: New York, USA
```

An example entity file, entities/acme_corp.md:

```markdown
# Acme Corporation
- industry: Software Development
- location: Enschede, Netherlands
```
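Because the [[entities/...]] links are plain wiki-style references into the memory directory, they can be resolved with ordinary file operations. The following is a minimal, illustrative sketch (not the agent's actual tooling) that assumes only the layout shown above:

```python
import re
from pathlib import Path

# Matches wiki-style links such as [[entities/jane_doe.md]]
LINK_RE = re.compile(r"\[\[(entities/[^\]]+\.md)\]\]")

def linked_entities(memory_dir: str) -> dict:
    """Map each entity link found in user.md to that entity file's contents."""
    memory = Path(memory_dir)
    user_md = (memory / "user.md").read_text(encoding="utf-8")
    resolved = {}
    for rel_path in LINK_RE.findall(user_md):
        entity_file = memory / rel_path
        if entity_file.exists():
            resolved[rel_path] = entity_file.read_text(encoding="utf-8")
    return resolved

if __name__ == "__main__":
    for path, text in linked_entities("memory").items():
        print(path, "->", text.splitlines()[0])  # e.g. "entities/jane_doe.md -> # Jane Doe"
```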
The model is trained to accept filters on various domains, passed inside `<filter>` tags appended to the query, for example:

```
What's my mother's age? <filter> 1. Do not reveal explicit age information, 2. Do not reveal any email addresses </filter>
```
To use this functionality with the MCP server, there are two make targets:
- `make add-filters`: Opens an input loop and adds the filters given by the user to the .filters file.
- `make reset-filters`: Resets (clears) the .filters file.

Adding or removing filters does not require restarting the MCP server. A conceptual sketch of how stored filters end up attached to a query is shown below.
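How the server injects the stored filters into a request is internal to the MCP server, but conceptually it amounts to appending the numbered filters to the query inside `<filter>` tags. A rough sketch, assuming .filters simply stores one filter per line (an assumption, not the documented file format):

```python
from pathlib import Path

def apply_filters(query: str, filters_file: str = ".filters") -> str:
    """Conceptual sketch: wrap stored filters in <filter> tags next to the query."""
    path = Path(filters_file)
    lines = path.read_text().splitlines() if path.exists() else []
    filters = [line.strip() for line in lines if line.strip()]
    if not filters:
        return query
    numbered = ", ".join(f"{i}. {f}" for i, f in enumerate(filters, start=1))
    return f"{query} <filter> {numbered} </filter>"

# Reproduces the shape of the filtered query shown earlier.
print(apply_filters("What's my mother's age?"))
```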
| Connector | Description | Supported Formats | Type | 
|---|---|---|---|
| chatgpt | ChatGPT conversation exports | .zip, .json | Export |
| notion | Notion workspace exports | .zip | Export |
| nuclino | Nuclino workspace exports | .zip | Export |
| github | GitHub repositories via API | Live API | Live |
| google-docs | Google Docs folders via Drive API | Live API | Live |
The easiest way to connect your memory sources:
```bash
make memory-wizard  # or python memory_wizard.py
```
The wizard will guide you through selecting a connector and importing your data.
Quick Demo with Sample Memories:
```bash
make run-agent
make serve-mcp-http
python examples/mem_agent_cli.py
```
Sample memory packs (healthcare and client_success) are included to demonstrate mem-agent functionality with different data types. Use the interactive CLI to explore these memories and test prompts.
List Available Connectors:
```bash
make connect-memory
# or
python memory_connectors/memory_connect.py --list
```

#### ChatGPT History Import

```bash
# Basic usage
make connect-memory CONNECTOR=chatgpt SOURCE=/path/to/chatgpt-export.zip

# AI-powered categorization with TF-IDF (fast)
python memory_connectors/memory_connect.py chatgpt /path/to/export.zip --method ai --embedding-model tfidf

# AI-powered categorization with LM Studio (high-quality semantic)
python memory_connectors/memory_connect.py chatgpt /path/to/export.zip --method ai --embedding-model lmstudio

# Keyword-based with custom categories
python memory_connectors/memory_connect.py chatgpt /path/to/export.zip --method keyword --edit-keywords

# Process limited conversations
python memory_connectors/memory_connect.py chatgpt /path/to/export.zip --max-items 100
```
Categorization methods: AI-powered (with TF-IDF or LM Studio embeddings) or keyword-based, as shown in the examples above.
```bash
make connect-memory CONNECTOR=chatgpt SOURCE=/path/to/export.zip OUTPUT=./memory/custom
make connect-memory CONNECTOR=chatgpt SOURCE=/path/to/export.zip MAX_ITEMS=100
python memory_connect.py chatgpt /path/to/export.zip --output ./memory --max-items 100
```
```bash
# Basic usage
make connect-memory CONNECTOR=notion SOURCE=/path/to/notion-export.zip

# Custom output location
make connect-memory CONNECTOR=notion SOURCE=/path/to/export.zip OUTPUT=./memory/custom

# Direct CLI usage
python memory_connectors/memory_connect.py notion /path/to/export.zip --output ./memory
```
```bash
# Basic usage
make connect-memory CONNECTOR=nuclino SOURCE=/path/to/nuclino-export.zip

# Custom output location
make connect-memory CONNECTOR=nuclino SOURCE=/path/to/export.zip OUTPUT=./memory/custom

# Direct CLI usage
python memory_connectors/memory_connect.py nuclino /path/to/export.zip --output ./memory
```
```bash
# Basic usage - single repository
make connect-memory CONNECTOR=github SOURCE="microsoft/vscode" TOKEN=your_github_token

# Multiple repositories
make connect-memory CONNECTOR=github SOURCE="owner/repo1,owner/repo2" TOKEN=your_token

# Custom output and limits
make connect-memory CONNECTOR=github SOURCE="facebook/react" OUTPUT=./memory/custom MAX_ITEMS=50 TOKEN=your_token

# Direct CLI usage with interactive token input
python memory_connectors/memory_connect.py github "microsoft/vscode" --max-items 100

# Include specific content types
python memory_connectors/memory_connect.py github "owner/repo" --include-issues --include-prs --include-wiki --token your_token
```
GitHub token requirements:

- Public repositories: a token with the public_repo scope is enough.
- Private repositories: use the repo scope (full access).
- Provide the token via the --token parameter or enter it when prompted.

Note: Keep your token secure and never commit it to version control!
```bash
# Basic usage - specific folder
make connect-memory CONNECTOR=google-docs SOURCE="1ABC123DEF456_folder_id" TOKEN=your_access_token

# Using Google Drive folder URL
make connect-memory CONNECTOR=google-docs SOURCE="https://drive.google.com/drive/folders/1ABC123DEF456" TOKEN=your_token

# Custom output and limits
make connect-memory CONNECTOR=google-docs SOURCE="folder_id" OUTPUT=./memory/custom MAX_ITEMS=20 TOKEN=your_token

# Direct CLI usage with interactive token input
python memory_connectors/memory_connect.py google-docs "1ABC123DEF456_folder_id" --max-items 15
```
Option 1: Google OAuth 2.0 Playground (Quick Testing)
Use the scope https://www.googleapis.com/auth/drive.readonly.

Option 2: Google Cloud Console (Production Use)
Required Scopes: https://www.googleapis.com/auth/drive.readonly
Finding Folder ID from Google Drive URL:
For a URL like https://drive.google.com/drive/folders/1ABC123DEF456ghi789, the folder ID is the final path segment: 1ABC123DEF456ghi789.

Note: Access tokens expire (usually after 1 hour). For production use, implement token refresh or use service accounts.
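If you script the folder-ID step above, extracting the ID from a Drive URL is a simple pattern match. A minimal sketch (the ID below is the placeholder from this section, not a real folder):

```python
import re

def drive_folder_id(url_or_id: str) -> str:
    """Return the folder ID from a Google Drive folder URL, or the input unchanged if it is already an ID."""
    match = re.search(r"/folders/([A-Za-z0-9_-]+)", url_or_id)
    return match.group(1) if match else url_or_id

print(drive_folder_id("https://drive.google.com/drive/folders/1ABC123DEF456ghi789"))
# -> 1ABC123DEF456ghi789
```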
The connectors automatically organize your conversations into topic indexes and individual conversation files; a short sketch for inspecting the result follows the example structure below.
Example organized structure:
```
memory/mcp-server/
├── user.md                     # Your profile and navigation
└── entities/
    └── chatgpt-history/
        ├── index.md            # Overview and usage examples
        ├── topics/             # Topic-organized conversation lists
        │   ├── dria.md
        │   ├── ai-agents.md
        │   └── programming.md
        └── conversations/      # Individual conversation files
            ├── conv_0-project-discussion.md
            └── conv_1-technical-planning.md
```
After importing, test the memory system:
```bash
make run-agent
```

The agent should access your real conversation history instead of providing generic responses.
Prerequisites: Start your memory server first:
```bash
make run-agent  # Required: vLLM or MLX model server must be running
```
Add MCP Server:
```bash
claude mcp add mem-agent \
  --env MEMORY_DIR="/path/to/your/memory/directory" \
  -- python "/path/to/mcp_server/server.py"
```
Verify & Use:
```bash
claude mcp list  # Should show mem-agent as connected
```
Now Claude Code can access your memory system for contextual assistance during development.
Prerequisites: Complete memory setup and start your local agent:
```bash
make setup      # Configure memory directory
make run-agent  # Start local vLLM/MLX model server
```
Start MCP-Compliant HTTP Server:
```bash
make serve-mcp-http  # Starts server on localhost:8081/mcp
```
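Before exposing the endpoint, you can smoke-test it locally. The request below follows the generic MCP Streamable HTTP initialize handshake; the exact headers and protocol version your server build expects may differ, so treat this as an assumption-laden connectivity check rather than an official client flow:

```python
import json
import urllib.request

# Assumed local endpoint started by `make serve-mcp-http`.
URL = "http://localhost:8081/mcp"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    },
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Streamable HTTP servers generally expect both accept types.
        "Accept": "application/json, text/event-stream",
    },
)

with urllib.request.urlopen(req) as resp:
    print(resp.status)
    print(resp.read(500).decode("utf-8", errors="replace"))
```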
Expose with ngrok (separate terminal):
```bash
ngrok http 8081  # Copy the forwarding URL
```
Configure ChatGPT:
- Connector name: mem-agent
- Connector URL: https://your-ngrok-url.ngrok.io/mcp

Usage in ChatGPT:
Select Developer mode → Choose the mem-agent connector → Ask questions that draw on your imported memory.
Agent returns generic responses instead of using memory:
MCP connection issues:
- Check your Claude configuration at ~/.config/claude/claude_desktop.json
- Check the server log at ~/Library/Logs/Claude/mcp-server-memory-agent-stdio.log

Memory import failures:
Enable detailed logging by setting environment variables:
```bash
FASTMCP_LOG_LEVEL=DEBUG make serve-mcp
```
Or check the agent's internal reasoning in the log files during operation.
To add a new connector:

- Subclass BaseMemoryConnector.
- Implement extract_data(), organize_data(), and generate_memory_files().
- Register the new connector in memory_connect.py.

Example connector skeleton:
```python
from typing import Any, Dict

from memory_connectors.base import BaseMemoryConnector


class MyConnector(BaseMemoryConnector):
    @property
    def connector_name(self) -> str:
        return "My Service"

    @property
    def supported_formats(self) -> list:
        return ['.zip', '.json']

    def extract_data(self, source_path: str) -> Dict[str, Any]:
        # Parse source data
        pass

    def organize_data(self, extracted_data: Dict[str, Any]) -> Dict[str, Any]:
        # Organize into topics
        pass

    def generate_memory_files(self, organized_data: Dict[str, Any]) -> None:
        # Generate markdown files
        pass
```
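The three methods are meant to run as a pipeline (extract, then organize, then write). The snippet below only chains the methods defined in the skeleton; constructor arguments and the real wiring are handled by memory_connect.py, so this is illustrative rather than the registration mechanism itself:

```python
# Hypothetical manual run; in practice memory_connect.py drives this pipeline.
connector = MyConnector()  # constructor arguments, if required by the base class, are assumed away here
extracted = connector.extract_data("/path/to/export.zip")
organized = connector.organize_data(extracted)
connector.generate_memory_files(organized)
```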
This system is designed as a set of local add-ons that do not affect the main mem-agent-mcp repository.
Pull requests welcome for new connectors and improvements!