
Knowledge Base
This MCP server provides tools for listing and retrieving content from different knowledge bases.
Prerequisites: these instructions assume you have Node.js and npm installed on your system.

To install Knowledge Base Server for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @jeanibarz/knowledge-base-mcp-server --client claude
```

To install manually:
Clone the repository:

```bash
git clone <repository_url>
cd knowledge-base-mcp-server
```
Install dependencies:

```bash
npm install
```
Configure environment variables:

This server supports two embedding providers: Ollama (recommended for reliability) and HuggingFace (fallback option).

Option 1: Ollama (set `EMBEDDING_PROVIDER=ollama`)

To use local Ollama embeddings, pull the embedding model first:

```bash
ollama pull dengcao/Qwen3-Embedding-0.6B:Q8_0
```

Then set:

```bash
EMBEDDING_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434          # Default Ollama URL
OLLAMA_MODEL=dengcao/Qwen3-Embedding-0.6B:Q8_0  # Default embedding model
KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
```

Option 2: HuggingFace (set `EMBEDDING_PROVIDER=huggingface`, or leave it unset; this is the default)

```bash
EMBEDDING_PROVIDER=huggingface                                  # Optional, this is the default
HUGGINGFACE_API_KEY=your_api_key_here
HUGGINGFACE_MODEL_NAME=sentence-transformers/all-MiniLM-L6-v2
KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
```
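These variables control which embedding client the server creates at startup. As a rough sketch of what that selection can look like, assuming LangChain's community embedding classes (the class and option names are illustrative of those packages, not taken from this server's source):

```typescript
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// Pick an embeddings client based on EMBEDDING_PROVIDER; the defaults mirror the
// values documented above. This is a sketch, not the server's actual code.
export function makeEmbeddings() {
  if (process.env.EMBEDDING_PROVIDER === "ollama") {
    return new OllamaEmbeddings({
      model: process.env.OLLAMA_MODEL ?? "dengcao/Qwen3-Embedding-0.6B:Q8_0",
      baseUrl: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
    });
  }
  // HuggingFace is the default when EMBEDDING_PROVIDER is unset.
  return new HuggingFaceInferenceEmbeddings({
    apiKey: process.env.HUGGINGFACE_API_KEY,
    model: process.env.HUGGINGFACE_MODEL_NAME ?? "sentence-transformers/all-MiniLM-L6-v2",
  });
}
```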
You can also set the `FAISS_INDEX_PATH` environment variable to specify the path to the FAISS index. If not set, it defaults to `$HOME/knowledge_bases/.faiss`.

These environment variables can be set in your `.bashrc` or `.zshrc` file, or directly in the MCP settings.

Build the server:

```bash
npm run build
```
Add the server to the MCP settings:

Edit the `cline_mcp_settings.json` file located at `/home/jean/.vscode-server/data/User/globalStorage/saoudrizwan.claude-dev/settings/`. Add the following configuration to the `mcpServers` object:
Option 1: Ollama Configuration

```json
"knowledge-base-mcp-ollama": {
  "command": "node",
  "args": ["/path/to/knowledge-base-mcp-server/build/index.js"],
  "disabled": false,
  "autoApprove": [],
  "env": {
    "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
    "EMBEDDING_PROVIDER": "ollama",
    "OLLAMA_BASE_URL": "http://localhost:11434",
    "OLLAMA_MODEL": "dengcao/Qwen3-Embedding-0.6B:Q8_0"
  },
  "description": "Retrieves similar chunks from the knowledge base based on a query using Ollama."
},
```
"knowledge-base-mcp-huggingface": { "command": "node", "args": [ "/path/to/knowledge-base-mcp-server/build/index.js" ], "disabled": false, "autoApprove": [], "env": { "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases", "EMBEDDING_PROVIDER": "huggingface", "HUGGINGFACE_API_KEY": "YOUR_HUGGINGFACE_API_KEY", "HUGGINGFACE_MODEL_NAME": "sentence-transformers/all-MiniLM-L6-v2" }, "description": "Retrieves similar chunks from the knowledge base based on a query using HuggingFace." },
Note: add only one of these configurations to the `cline_mcp_settings.json` file, depending on your preferred embedding provider.
* Replace `/path/to/knowledge-base-mcp-server` with the actual path to the server directory.
* Replace `/path/to/knowledge_bases` with the actual path to the knowledge bases directory.
Create knowledge base directories:

* Create a subdirectory under `KNOWLEDGE_BASES_ROOT_DIR` for each knowledge base (e.g., `company`, `it_support`, `onboarding`).
* Place text files (e.g., `.txt`, `.md`) containing the knowledge base content within these subdirectories.

How indexing works:

* The server automatically indexes the text files (e.g., `.txt`, `.md`) within the specified knowledge base subdirectories.
* A hash of each file is stored in a hidden `.index` subdirectory. This hash is used to determine if the file has been modified since the last indexing.
* File content is split into chunks using the `MarkdownTextSplitter` from `langchain/text_splitter`, as sketched below.
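For illustration, a minimal sketch of that chunking step is shown below; the chunk size and overlap values are assumptions, not necessarily the server's defaults.

```typescript
import { readFile } from "node:fs/promises";
import { MarkdownTextSplitter } from "langchain/text_splitter";

// Split one knowledge base file into chunks, attaching the file path as metadata
// so results can point back to their source file.
// chunkSize and chunkOverlap are illustrative values.
async function splitKnowledgeBaseFile(filePath: string) {
  const text = await readFile(filePath, "utf-8");
  const splitter = new MarkdownTextSplitter({ chunkSize: 1000, chunkOverlap: 100 });
  return splitter.createDocuments([text], [{ source: filePath }]);
}
```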
The server exposes two tools:

* `list_knowledge_bases`: Lists the available knowledge bases.
* `retrieve_knowledge`: Retrieves similar chunks from the knowledge bases based on a query. If a knowledge base is specified, only that one is searched; otherwise, all available knowledge bases are considered. By default, at most 10 document chunks with a score below a threshold of 2 are returned. A different threshold can optionally be provided using the `threshold` parameter.

You can use these tools through the MCP interface, for example as sketched below.
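The following is a minimal sketch of calling the server from the MCP TypeScript SDK client. The argument names (`query`, `knowledge_base`, `threshold`) are assumptions based on the descriptions above; check the server's actual tool schema via `tools/list` before relying on them.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the knowledge base server over stdio (the path below is an example).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/path/to/knowledge-base-mcp-server/build/index.js"],
  });

  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // List the available knowledge bases.
  console.log(await client.callTool({ name: "list_knowledge_bases", arguments: {} }));

  // Retrieve chunks for a query; the argument names here are assumptions.
  const result = await client.callTool({
    name: "retrieve_knowledge",
    arguments: {
      query: "How do I request a new laptop?",
      knowledge_base: "it_support",
      threshold: 2,
    },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```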
The `retrieve_knowledge` tool performs a semantic search using a FAISS index. The index is automatically updated when the server starts or when a file in a knowledge base is modified.
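The sketch below shows roughly what such a FAISS-backed search looks like using LangChain's `FaissStore`. It is illustrative only: it assumes the index directory under `FAISS_INDEX_PATH` can be loaded this way, and it is not necessarily how this server is implemented internally.

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// Load a saved FAISS index and keep only chunks whose distance score is below the
// threshold (for FAISS L2 distance, lower scores mean closer matches). Swap in the
// Ollama embeddings client from the earlier sketch if that provider is configured.
async function searchIndex(query: string, threshold = 2, k = 10) {
  const embeddings = new HuggingFaceInferenceEmbeddings({
    apiKey: process.env.HUGGINGFACE_API_KEY,
    model: process.env.HUGGINGFACE_MODEL_NAME ?? "sentence-transformers/all-MiniLM-L6-v2",
  });
  const indexDir =
    process.env.FAISS_INDEX_PATH ?? `${process.env.HOME}/knowledge_bases/.faiss`;
  const store = await FaissStore.load(indexDir, embeddings);
  const results = await store.similaritySearchWithScore(query, k);
  return results.filter(([, score]) => score <= threshold);
}
```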
The output of the `retrieve_knowledge` tool is a markdown-formatted string with the following structure:
## Semantic Search Results

**Result 1:**

[Content of the most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

---

**Result 2:**

[Content of the second most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

> **Disclaimer:** The provided results might not all be relevant. Please cross-check the relevance of the information.
Each result includes the content of the most similar chunk, the source file, and a similarity score.