
# MCP Web UI

Web UI for interacting with LLMs through the Model Context Protocol (MCP) architecture.
MCP Web UI is a web-based user interface that serves as a Host within the Model Context Protocol (MCP) architecture. It provides a powerful and user-friendly interface for interacting with Large Language Models (LLMs) while managing context aggregation and coordination between clients and servers.
MCP Web UI is designed to simplify and enhance interactions with AI language models by providing:

- A unified chat interface backed by multiple LLM providers (Anthropic, OpenAI, Ollama, OpenRouter)
- Context aggregation and coordination between MCP clients and servers
- Connections to MCP servers over both SSE and stdio transports
- Simple YAML-based configuration of prompts, model parameters, and servers
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/MegaGrindStone/mcp-web-ui.git
   cd mcp-web-ui
   ```
2. Configure your environment:

   ```bash
   mkdir -p $HOME/.config/mcpwebui
   cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
   ```
3. Set up API keys:

   ```bash
   export ANTHROPIC_API_KEY=your_anthropic_key
   export OPENAI_API_KEY=your_openai_key
   export OPENROUTER_API_KEY=your_openrouter_key
   ```
4. Run the server, either directly with Go:

   ```bash
   go mod download
   go run ./cmd/server/main.go
   ```
   or with Docker:

   ```bash
   docker build -t mcp-web-ui .
   docker run -p 8080:8080 \
     -v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
     -e ANTHROPIC_API_KEY \
     -e OPENAI_API_KEY \
     -e OPENROUTER_API_KEY \
     mcp-web-ui
   ```

   The UI should then be reachable at http://localhost:8080 (or whichever port your config specifies).
## Configuration

The configuration file (`config.yaml`) provides comprehensive settings for customizing the MCP Web UI. Here's a detailed breakdown:
### Core Settings

- `port`: The port on which the server will run (default: 8080)
- `logLevel`: Logging verbosity (options: debug, info, warn, error; default: info)
- `logMode`: Log output format (options: json, text; default: text)
- `systemPrompt`: Default system prompt for the AI assistant
- `titleGeneratorPrompt`: Prompt used to generate chat titles
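For instance, the top-level settings of a `config.yaml` might look like this (the two prompt strings are illustrative placeholders to adapt):

```yaml
port: 8080
logLevel: info
logMode: text
systemPrompt: You are a helpful assistant.
titleGeneratorPrompt: Generate a short, descriptive title for this conversation.
```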
### LLM Configuration

The `llm` section supports multiple providers with provider-specific configurations:
- `provider`: Choose from: ollama, anthropic, openai, openrouter
- `model`: Specific model name (e.g., `claude-3-5-sonnet-20241022`)
- `parameters`: Fine-tune model behavior:
  - `temperature`: Randomness of responses (0.0-1.0)
  - `topP`: Nucleus sampling threshold
  - `topK`: Number of highest-probability tokens to keep
  - `frequencyPenalty`: Reduce repetition of token sequences
  - `presencePenalty`: Encourage discussing new topics
  - `maxTokens`: Maximum response length
  - `stop`: Sequences to stop generation

Provider-specific settings:

- Ollama:
  - `host`: Ollama server URL (default: http://localhost:11434)
- Anthropic:
  - `apiKey`: Anthropic API key (can use the ANTHROPIC_API_KEY env variable)
  - `maxTokens`: Maximum token limit
- OpenAI:
  - `apiKey`: OpenAI API key (can use the OPENAI_API_KEY env variable)
  - `endpoint`: OpenAI API endpoint (default: https://api.openai.com/v1)
- OpenRouter:
  - `apiKey`: OpenRouter API key (can use the OPENROUTER_API_KEY env variable)

The `genTitleLLM` section allows separate configuration for title generation, defaulting to the main LLM if not specified.
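Putting these keys together, an `llm` block with a separate title-generation model might look like this (the values are illustrative; they mirror the complete example further below):

```yaml
llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  maxTokens: 1000 # Anthropic-specific token limit
  parameters:
    temperature: 0.7

genTitleLLM:
  provider: openai
  model: gpt-3.5-turbo
```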
### MCP Server Configuration

- `mcpSSEServers`: Configure Server-Sent Events (SSE) servers
  - `url`: SSE server URL
  - `maxPayloadSize`: Maximum payload size
- `mcpStdIOServers`: Configure Standard Input/Output (stdio) servers
  - `command`: Command to run the server
  - `args`: Arguments for the server command

SSE server example:
```yaml
mcpSSEServers:
  filesystem:
    url: https://yoursseserver.com
    maxPayloadSize: 1048576 # 1MB
```
StdIO server examples:
```yaml
mcpStdIOServers:
  filesystem:
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/your/files"
```
This example can be used directly, as the official filesystem MCP server is an executable package that can be run with npx. Just update the path to point to your desired directory.
```yaml
mcpStdIOServers:
  filesystem:
    command: go
    args:
      - run
      - github.com/your_username/your_app # Replace with your app
      - -path
      - "/data/mcp/filesystem" # Path to expose to MCP clients
```
For this example, you'll need to create a new Go application that imports the `github.com/MegaGrindStone/go-mcp/servers/filesystem` package. The flag naming (like `-path` in this example) is completely customizable based on how you structure your own application; it doesn't have to be called "path". This example is merely a starting point showing one possible implementation where a flag specifies which directory to expose. You're free to design your own application structure and command-line interface according to your specific needs.
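As a rough illustration, such a wrapper might look like the sketch below. Note that `filesystem.NewServer` and `srv.ServeStdIO` are assumed names used only for this sketch; consult the go-mcp package documentation for the actual constructor and serving API.

```go
package main

import (
	"flag"
	"log"

	"github.com/MegaGrindStone/go-mcp/servers/filesystem"
)

func main() {
	// The flag name is arbitrary; it only needs to match the `args`
	// entries in your mcpStdIOServers configuration.
	path := flag.String("path", ".", "directory to expose to MCP clients")
	flag.Parse()

	// NewServer and ServeStdIO are assumed names for illustration;
	// check the go-mcp documentation for the real API.
	srv, err := filesystem.NewServer(*path)
	if err != nil {
		log.Fatalf("failed to create filesystem server: %v", err)
	}
	if err := srv.ServeStdIO(); err != nil {
		log.Fatalf("server exited: %v", err)
	}
}
```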
### Complete Example

```yaml
port: 8080
logLevel: info
systemPrompt: You are a helpful assistant.

llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  maxTokens: 1000
  parameters:
    temperature: 0.7

genTitleLLM:
  provider: openai
  model: gpt-3.5-turbo
```
## Project Structure

- `cmd/`: Application entry point
- `internal/handlers/`: Web request handlers
- `internal/models/`: Data models
- `internal/services/`: LLM provider integrations
- `static/`: Static assets (CSS)
- `templates/`: HTML templates

## License

MIT License