
SearxNG
Web search capabilities for AI assistants using the SearxNG metasearch engine
A Model Context Protocol (MCP) server that provides web search capabilities using SearxNG, allowing AI assistants like Claude to search the web.
Created by AI with human supervision - because sometimes even artificial intelligence needs someone to tell it when to take a coffee break! 🤖☕
This project implements an MCP server that connects to SearxNG, a privacy-respecting metasearch engine. The server provides a simple and efficient way for Large Language Models to search the web without tracking users.
The server is specifically designed for LLMs and includes only essential features to minimize context window usage. This streamlined approach ensures efficient communication between LLMs and the search engine, preserving valuable context space for more important information.
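As a rough illustration of what happens behind the scenes, the server forwards each query to a SearxNG instance over its HTTP search API. The request below is an assumption for illustration only (it is not taken from this project's code) and requires the instance to have the JSON output format enabled:

```bash
# Hypothetical example of the kind of request the server issues to SearxNG
# (assumes the instance enables the "json" output format in its settings)
curl -s "https://paulgo.io/search?q=model+context+protocol&format=json&language=en"
```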
Create a `.clauderc` file in your home directory and add one of the following configurations:
{ "mcpServers": { "searxng": { "command": "pipx", "args": [ "run", "searxng-simple-mcp@latest" ], "env": { "SEARXNG_MCP_SEARXNG_URL": "https://your-instance.example.com" } } } }
{ "mcpServers": { "searxng": { "command": "uvx", "args": [ "run", "searxng-simple-mcp@latest" ], "env": { "SEARXNG_MCP_SEARXNG_URL": "https://your-instance.example.com" } } } }
{ "mcpServers": { "searxng": { "command": "python", "args": ["-m", "searxng_simple_mcp.server"], "env": { "SEARXNG_MCP_SEARXNG_URL": "https://your-instance.example.com" } } } }
{ "mcpServers": { "searxng": { "command": "docker", "args": [ "run", "--rm", "-i", "--network=host", "-e", "SEARXNG_MCP_SEARXNG_URL=http://localhost:8080", "ghcr.io/sacode/searxng-simple-mcp:latest" ] } } }
Note: When using Docker with MCP servers:

- Pass environment variables with the `-e` flag in the `args` array, as the `env` object is not properly passed to the Docker container.
- Use the `--network=host` flag to allow the container to access the host's network. Otherwise, "localhost" inside the container will refer to the container itself, not your host machine.
- With `--network=host`, port mappings (`-p`) are not needed and will be ignored, as the container shares the host's network stack directly.

Configure the server using environment variables:
| Environment Variable | Description | Default Value |
|---|---|---|
| SEARXNG_MCP_SEARXNG_URL | URL of the SearxNG instance to use | https://paulgo.io/ |
| SEARXNG_MCP_TIMEOUT | HTTP request timeout in seconds | 10 |
| SEARXNG_MCP_DEFAULT_RESULT_COUNT | Default number of results to return | 10 |
| SEARXNG_MCP_DEFAULT_LANGUAGE | Language code for results (e.g., 'en', 'ru', 'all') | all |
| SEARXNG_MCP_DEFAULT_FORMAT | Default format for results ('text', 'json') | text |
| SEARXNG_MCP_LOG_LEVEL | Logging level (e.g., 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL') | ERROR |
| TRANSPORT_PROTOCOL | Transport protocol ('stdio' or 'sse') | stdio |
Note: Setting log levels higher than ERROR (such as DEBUG or INFO) may break integration with some applications due to excessive output in the communication channel.
You can find a list of public SearxNG instances at https://searx.space if you don't want to host your own.
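For example, a `.env` file (or equivalently exported variables) using the settings above might look like this; all values are placeholders:

```bash
# Example .env — values are placeholders, adjust for your setup
SEARXNG_MCP_SEARXNG_URL=https://searx.example.org
SEARXNG_MCP_TIMEOUT=10
SEARXNG_MCP_DEFAULT_RESULT_COUNT=5
SEARXNG_MCP_DEFAULT_LANGUAGE=en
SEARXNG_MCP_DEFAULT_FORMAT=text
SEARXNG_MCP_LOG_LEVEL=ERROR
TRANSPORT_PROTOCOL=stdio
```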
The easiest way to use this server is with pipx or uvx, which let you run the package without installing it permanently:

```bash
# Using pipx
pip install pipx             # Install pipx if you don't have it
pipx run searxng-simple-mcp

# OR using uvx (shipped with uv)
pip install uv               # Install uv if you don't have it
uvx searxng-simple-mcp
```
You can pass configuration options directly:
```bash
# Using pipx with a custom SearxNG instance
pipx run searxng-simple-mcp --searxng-url https://your-instance.example.com
```
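The same settings can also be supplied through environment variables instead of flags (variable names from the table above):

```bash
# Equivalent configuration via an environment variable
SEARXNG_MCP_SEARXNG_URL=https://your-instance.example.com pipx run searxng-simple-mcp
```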
For more permanent installation:
```bash
# From PyPI using pip
pip install searxng-simple-mcp

# OR using uv (faster installation)
pip install uv
uv pip install searxng-simple-mcp

# OR from source
git clone https://github.com/Sacode/searxng-simple-mcp.git
cd searxng-simple-mcp
pip install uv
uv pip install -e .
```
After installation, you can run the server with:
```bash
# Run directly after installation
python -m searxng_simple_mcp.server

# OR with configuration options
python -m searxng_simple_mcp.server --searxng-url https://your-instance.example.com
```
If you prefer using Docker:
```bash
# Pull the Docker image
docker pull ghcr.io/sacode/searxng-simple-mcp:latest

# Run the container with default settings (stdio transport)
docker run --rm -i ghcr.io/sacode/searxng-simple-mcp:latest

# Run with an environment file for configuration
docker run --rm -i --env-file .env ghcr.io/sacode/searxng-simple-mcp:latest

# Run with SSE transport (starts an HTTP server on port 8000)
docker run -p 8000:8000 -e TRANSPORT_PROTOCOL=sse ghcr.io/sacode/searxng-simple-mcp:latest

# Build the image locally
docker build -t searxng-simple-mcp:local .
docker run --rm -i searxng-simple-mcp:local

# Using Docker Compose
docker-compose up -d
```
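If your SearxNG instance runs on the host itself (for example at http://localhost:8080), the same host-networking approach used in the "Using Docker" client configuration above also works on the command line:

```bash
# Reach a SearxNG instance running on the host from inside the container
docker run --rm -i --network=host \
  -e SEARXNG_MCP_SEARXNG_URL=http://localhost:8080 \
  ghcr.io/sacode/searxng-simple-mcp:latest
```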
For complete Docker usage information, see the Docker Configuration section below.
The MCP server supports two transport protocols:
- STDIO (default): For CLI applications and direct integration
- SSE (Server-Sent Events): For web-based clients and HTTP-based integrations
To use the SSE transport protocol:
With direct execution:
```bash
# Set the transport protocol to SSE
TRANSPORT_PROTOCOL=sse python -m searxng_simple_mcp.server

# Or with FastMCP
fastmcp run src/searxng_simple_mcp/server.py --transport sse
```
With Docker:
```bash
# Run with the SSE transport protocol
docker run -p 8000:8000 \
  -e TRANSPORT_PROTOCOL=sse \
  -e SEARXNG_MCP_SEARXNG_URL=https://your-instance.example.com \
  ghcr.io/sacode/searxng-simple-mcp:latest
```
With Docker Compose (from the included docker-compose.yml):
```yaml
environment:
  - SEARXNG_MCP_SEARXNG_URL=https://searx.info
  - SEARXNG_MCP_TIMEOUT=10
  - SEARXNG_MCP_MAX_RESULTS=20
  - SEARXNG_MCP_LANGUAGE=all
  - TRANSPORT_PROTOCOL=sse  # Transport protocol: stdio or sse
```
When using SSE, the server will be accessible via HTTP at http://localhost:8000 by default.
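To quickly check that the HTTP server is listening, you can probe it with curl. The /sse path is an assumption based on FastMCP's usual SSE endpoint; adjust it if your client reports a different path:

```bash
# If the server is up, this should keep the connection open and stream events
curl -i http://localhost:8000/sse
```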
To connect to the SSE server from an MCP client, use a configuration like:
{ "mcpServers": { "searxng": { "url": "http://localhost:8000", "transport": "sse" } } }
Note: Not all applications support the SSE transport protocol. Make sure your MCP client is compatible with SSE before using this transport method.
For development and testing:
```bash
# Install dependencies
uv pip install -e .

# Run linter and formatter
ruff check .
ruff check --fix .
ruff format .

# Run the server directly
python -m src.searxng_simple_mcp.server

# OR using FastMCP
fastmcp run src/searxng_simple_mcp/server.py                  # stdio transport (default)
fastmcp run src/searxng_simple_mcp/server.py --transport sse  # sse transport

# Run in development mode (launches the MCP Inspector)
fastmcp dev src/searxng_simple_mcp/server.py
```
For maintainers who need to publish new versions of the package to PyPI:
```bash
# Install development dependencies
npm run install:deps

# Clean, build, and check the package
npm run build:package
npm run check:package

# Publish to PyPI (requires PyPI credentials)
npm run publish:pypi

# Alternatively, use the all-in-one commands to update the version and publish
npm run publish:patch  # Increments patch version (1.0.1 -> 1.0.2)
npm run publish:minor  # Increments minor version (1.0.1 -> 1.1.0)
npm run publish:major  # Increments major version (1.0.1 -> 2.0.0)
```
These commands clean, build, and check the package before uploading it to PyPI; the publish:patch, publish:minor, and publish:major variants also bump the version first.
You'll need to have a PyPI account and be authenticated with twine. You can set up authentication by:

- Creating a `.pypirc` file in your home directory
- Using environment variables (`TWINE_USERNAME` and `TWINE_PASSWORD`)
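For example, token-based authentication via environment variables might look like this; the token value is a placeholder:

```bash
# PyPI API tokens use the fixed username __token__
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=pypi-XXXXXXXXXXXXXXXX  # placeholder API token
npm run publish:pypi
```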
When using Docker with MCP servers, keep these points in mind:
- Integration with MCP clients: Use the Docker client configuration shown above (under "Using Docker") to integrate with Claude Desktop or other MCP-compliant clients.
- Transport protocols: STDIO (the default) works with `docker run -i`; SSE additionally requires exposing port 8000 or using host networking.
- Configuration options:
  - Pass settings with an environment file: `docker run --env-file .env ...`
  - Or set individual variables with the `-e` flag: `docker run -e SEARXNG_MCP_SEARXNG_URL=https://example.com ...`
- Networking:
  - Use `--network=host` when you need to access services on your host machine
  - Use `-p 8000:8000` when exposing the SSE server to your network

Project structure:

```
searxng-simple-mcp/
├── src/
│   ├── run_server.py          # Entry point script
│   └── searxng_simple_mcp/    # Main package
├── docker-compose.yml         # Docker Compose configuration
├── Dockerfile                 # Docker configuration
└── pyproject.toml             # Python project configuration
```
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.