Bear Notes RAG
Connects Bear Notes to AI assistants using semantic search and retrieval-augmented generation.
Looking to supercharge your Bear Notes experience with AI assistants? This little gem connects your personal knowledge base to AI systems using semantic search and RAG (Retrieval-Augmented Generation).
I built this because I wanted my AI assistants to actually understand what's in my notes, not just perform simple text matching. The result is rather sweet, if I do say so myself.
Setting up is straightforward:
```bash
git clone [your-repo-url]
cd bear-mcp-server
npm install
```
Make the scripts executable (because permissions matter):
```bash
chmod +x src/bear-mcp-server.js
chmod +x src/create-index.js
```
Before diving in, you'll need to create vector embeddings of your notes:
```bash
npm run index
```
Fair warning: this might take a few minutes if you're a prolific note-taker like me. It's converting all your notes into mathematical vectors that capture their meaning. Clever stuff 😉.
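If you're curious what that step is actually doing, here's a rough sketch of the idea. This is not the real create-index.js: the better-sqlite3 client, Bear's table and column names, and the exact output format are all assumptions on my part.

```js
// Sketch of the indexing idea -- NOT the actual create-index.js.
// Assumes Bear's Core Data schema (ZSFNOTE with ZTITLE/ZTEXT/ZTRASHED)
// and the better-sqlite3 + @xenova/transformers packages.
import Database from 'better-sqlite3';
import { pipeline } from '@xenova/transformers';
import { writeFileSync } from 'node:fs';

const db = new Database(process.env.BEAR_DATABASE_PATH, { readonly: true });

// Pull every non-trashed note (column names assumed from Bear's schema).
const notes = db
  .prepare('SELECT Z_PK AS id, ZTITLE AS title, ZTEXT AS text FROM ZSFNOTE WHERE ZTRASHED = 0')
  .all();

// all-MiniLM-L6-v2 turns each note into a 384-dimensional vector.
const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

const index = [];
for (const note of notes) {
  const output = await embed(`${note.title}\n${note.text}`, {
    pooling: 'mean',
    normalize: true,
  });
  index.push({ id: note.id, vector: Array.from(output.data) });
}

// The real script writes note_vectors.index / note_vectors.json; this just
// dumps one JSON file to show the shape of the data.
writeFileSync('note_vectors.json', JSON.stringify(index));
```

The upshot: each note ends up as a small vector of numbers, and those vectors are what the server searches against later.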
Update your MCP configuration file:
{ "mcpServers": { "bear-notes": { "command": "node", "args": [ "/absolute/path/to/bear-mcp-server/src/bear-mcp-server.js" ], "env": { "BEAR_DATABASE_PATH": "/Users/yourusername/Library/Group Containers/9K33E3U3T4.net.shinyfrog.net.bear/Application Data/database.sqlite" } } } }
🚨 Remember to replace the path with your actual installation location. No prizes for using the example path verbatim, I'm afraid.
Semantic Search: Find notes based on meaning, not just keywords. Ask about "productivity systems" and it'll find your notes on GTD and Pomodoro, even if they don't contain those exact words.
RAG Support: Your AI assistants can now pull in relevant context from your notes, even when you haven't explicitly mentioned them.
All Local Processing: Everything runs on your machine. No data leaves your computer, no API keys needed, no internet dependency (after initial setup).
Graceful Fallbacks: If semantic search isn't available for whatever reason, it'll quietly fall back to traditional search. Belt and braces.
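Conceptually, the semantic search and the fallback described above boil down to something like this. It's a sketch under my own assumptions (the index file produced earlier, a model loaded per call), not the server's actual code, and the fallback branch is only hinted at.

```js
// Sketch of the ranking-plus-fallback idea -- not the actual server code.
// Assumes the note_vectors.json shape from the indexing sketch above.
import { readFileSync } from 'node:fs';
import { pipeline } from '@xenova/transformers';

const index = JSON.parse(readFileSync('note_vectors.json', 'utf8'));

// Vectors are normalized, so cosine similarity reduces to a dot product.
const cosine = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);

async function searchNotes(query, limit = 10) {
  try {
    // A real implementation would load the model once, not per call.
    const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
    const q = Array.from((await embed(query, { pooling: 'mean', normalize: true })).data);
    return index
      .map((note) => ({ id: note.id, score: cosine(q, note.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, limit);
  } catch {
    // Graceful fallback: if the model can't be loaded, a plain keyword
    // match over note titles and text would take over here.
    return [];
  }
}
```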
This server uses the Xenova implementation of transformers.js with the all-MiniLM-L6-v2 model to generate embeddings entirely on your machine.
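One nice consequence of the local-only design: once the model files have been downloaded on the first run, transformers.js can be told never to touch the network again. Whether this server sets these flags itself is an assumption on my part; the snippet just shows the library options involved, with an illustrative path.

```js
import { env, pipeline } from '@xenova/transformers';

// Illustrative only: the directory below is an example, and whether this
// server configures these options itself is an assumption.
env.localModelPath = './models';   // local copy of Xenova/all-MiniLM-L6-v2
env.allowRemoteModels = false;     // fail fast rather than download anything

const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
```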
The project structure is nothing too complex:
```
bear-mcp-server/
├── package.json
├── readme.md
└── src/
    ├── bear-mcp-server.js    # Main MCP server
    ├── create-index.js       # Script to index notes
    ├── utils.js              # Utility functions
    ├── lib/                  # Additional utilities and diagnostic scripts
    │   └── explore-database.js  # Database exploration and diagnostic tool
    ├── note_vectors.index    # Generated vector index (after indexing)
    └── note_vectors.json     # Note ID mapping (after indexing)
```
AI assistants connecting to this server can use these tools:
search_notes: Find notes that match a query. Parameters: query (required), limit (optional, default: 10), semantic (optional, default: true)
get_note: Fetch a specific note by its ID. Parameters: id (required)
get_tags: List all tags used in your Bear Notes
retrieve_for_rag: Get notes semantically similar to a query, specifically formatted for RAG. Parameters: query (required), limit (optional, default: 5)
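If you're wondering what a tool invocation actually looks like on the wire, it's a standard MCP tools/call request. Your MCP client builds this for you; the arguments below are just an example query against the parameters documented above.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_notes",
    "arguments": {
      "query": "productivity systems",
      "limit": 5,
      "semantic": true
    }
  }
}
```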
If things go wonky, try rebuilding the index:
```bash
npm run index
```
When in doubt, try turning it off and on again. Works more often than we'd like to admit.
Prefer containers? You can run everything inside Docker too.
```bash
docker build -t bear-mcp-server .
```
You'll still need to run the indexing step before anything useful happens:
```bash
docker run \
  -v /path/to/your/NoteDatabase.sqlite:/app/database.sqlite \
  -e BEAR_DATABASE_PATH=/app/database.sqlite \
  bear-mcp-server \
  npm run index
```
🛠 Replace `/path/to/your/NoteDatabase.sqlite` with the actual path to your Bear database.
Once indexed, fire it up:
```bash
docker run \
  -v /path/to/your/NoteDatabase.sqlite:/app/database.sqlite \
  -e BEAR_DATABASE_PATH=/app/database.sqlite \
  -p 8000:8000 \
  bear-mcp-server
```
Boom—your AI assistant is now running in a container and talking to your notes.
MIT (Feel free to tinker, share, and improve)