# OpenRouter
A Model Context Protocol (MCP) server providing seamless integration with OpenRouter.ai's diverse model ecosystem. Access various AI models through a unified, type-safe interface with built-in caching, rate limiting, and error handling.
## Features

- **Model Access**: Access to the full range of models available on OpenRouter.ai
- **Performance Optimization**: Built-in caching and rate limiting
- **Unified Response Format**: Consistent `ToolResult` structure for all responses, with an `isError` flag

## Installation

```bash
pnpm install @mcpservers/openrouterai
```
## Configuration

### Environment Variables

- `OPENROUTER_API_KEY`: **Required.** Your OpenRouter API key.
- `OPENROUTER_DEFAULT_MODEL`: Optional. The default model to use if not specified in the request (e.g., `openrouter/auto`).
- `OPENROUTER_MAX_TOKENS`: Optional. Default maximum number of tokens to generate if `max_tokens` is not provided in the request.
- `OPENROUTER_PROVIDER_QUANTIZATIONS`: Optional. Comma-separated list of default quantization levels to filter by (e.g., `fp16,int8`) if `provider.quantizations` is not provided in the request. (Phase 1)
- `OPENROUTER_PROVIDER_IGNORE`: Optional. Comma-separated list of default provider names to ignore (e.g., `mistralai,openai`) if `provider.ignore` is not provided in the request. (Phase 1)
- `OPENROUTER_PROVIDER_SORT`: Optional. Default sort order for providers (`"price"`, `"throughput"`, or `"latency"`). Overridden by the `provider.sort` argument. (Phase 2)
- `OPENROUTER_PROVIDER_ORDER`: Optional. Default prioritized list of provider IDs (JSON array string, e.g., `'["openai/gpt-4o", "anthropic/claude-3-opus"]'`). Overridden by the `provider.order` argument. (Phase 2)
- `OPENROUTER_PROVIDER_REQUIRE_PARAMETERS`: Optional. Default boolean (`true` or `false`) to only use providers supporting all specified request parameters. Overridden by the `provider.require_parameters` argument. (Phase 2)
- `OPENROUTER_PROVIDER_DATA_COLLECTION`: Optional. Default data collection policy (`"allow"` or `"deny"`). Overridden by the `provider.data_collection` argument. (Phase 2)
- `OPENROUTER_PROVIDER_ALLOW_FALLBACKS`: Optional. Default boolean (`true` or `false`) to control fallback behavior if preferred providers fail. Overridden by the `provider.allow_fallbacks` argument. (Phase 2)

Example `.env` file:

```env
OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_DEFAULT_MODEL=openrouter/auto
OPENROUTER_MAX_TOKENS=1024
OPENROUTER_PROVIDER_QUANTIZATIONS=fp16,int8
OPENROUTER_PROVIDER_IGNORE=openai,anthropic
OPENROUTER_PROVIDER_SORT=price
OPENROUTER_PROVIDER_ORDER='["openai/gpt-4o", "anthropic/claude-3-opus"]'
OPENROUTER_PROVIDER_REQUIRE_PARAMETERS=true
OPENROUTER_PROVIDER_DATA_COLLECTION=deny
OPENROUTER_PROVIDER_ALLOW_FALLBACKS=false
```
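These defaults apply only when the corresponding `provider` argument is absent from a request. As a rough illustration of that precedence, here is a minimal TypeScript sketch; the helper `resolveSort` and its shape are hypothetical, not part of the server's actual code:

```typescript
// Hypothetical illustration of env-default vs. request-argument precedence.
type ProviderSort = "price" | "throughput" | "latency";

function resolveSort(requestSort?: ProviderSort): ProviderSort | undefined {
  // A request's provider.sort always wins over the environment default.
  if (requestSort !== undefined) return requestSort;
  const envSort = process.env.OPENROUTER_PROVIDER_SORT;
  if (envSort === "price" || envSort === "throughput" || envSort === "latency") {
    return envSort;
  }
  return undefined; // No default configured.
}
```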
### Setup
Add to your MCP settings configuration file (`cline_mcp_settings.json` or `claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "openrouterai": {
      "command": "npx",
      "args": ["@mcpservers/openrouterai"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "OPENROUTER_DEFAULT_MODEL": "optional-default-model",
        "OPENROUTER_MAX_TOKENS": "1024",
        "OPENROUTER_PROVIDER_QUANTIZATIONS": "fp16,int8",
        "OPENROUTER_PROVIDER_IGNORE": "openai,anthropic"
      }
    }
  }
}
```
## Response Format
All tools return responses in a standardized structure:
```typescript
interface ToolResult {
  isError: boolean;
  content: Array<{
    type: "text";
    text: string; // JSON string or error message
  }>;
}
```
**Success Example:**

```json
{
  "isError": false,
  "content": [{ "type": "text", "text": "{\"id\": \"gen-123\", ...}" }]
}
```

**Error Example:**

```json
{
  "isError": true,
  "content": [{ "type": "text", "text": "Error: Model validation failed - 'invalid-model' not found" }]
}
```
## Tools

### chat_completion

Sends a request to the OpenRouter Chat Completions API.

**Input Schema:**

- `model` (string, optional): The model to use (e.g., `openai/gpt-4o`, `google/gemini-pro`). Overrides `OPENROUTER_DEFAULT_MODEL`. Defaults to `openrouter/auto` if neither is set.
  - Append `:nitro` to a model ID (e.g., `openai/gpt-4o:nitro`) to potentially route to faster, experimental versions if available. Append `:floor` (e.g., `mistralai/mistral-7b-instruct:floor`) to use the cheapest available variant of a model, often useful for testing or low-cost tasks. Note: Availability of `:nitro` and `:floor` variants depends on OpenRouter.
- `messages` (array, required): An array of message objects conforming to the OpenAI chat completion format.
- `temperature` (number, optional): Sampling temperature. Defaults to 1.
- `max_tokens` (number, optional): Maximum number of tokens to generate in the completion. Overrides `OPENROUTER_MAX_TOKENS`.
- `provider` (object, optional): Provider routing configuration. Overrides corresponding `OPENROUTER_PROVIDER_*` environment variables.
  - `quantizations` (array of strings, optional): List of quantization levels to filter by (e.g., `["fp16", "int8"]`). Only models matching one of these levels will be considered. Overrides `OPENROUTER_PROVIDER_QUANTIZATIONS`. (Phase 1)
  - `ignore` (array of strings, optional): List of provider names to exclude (e.g., `["openai", "anthropic"]`). Models from these providers will not be used. Overrides `OPENROUTER_PROVIDER_IGNORE`. (Phase 1)
  - `sort` (`"price"` | `"throughput"` | `"latency"`, optional): Sort providers by the specified criteria. Overrides `OPENROUTER_PROVIDER_SORT`. (Phase 2)
  - `order` (array of strings, optional): A prioritized list of provider IDs (e.g., `["openai/gpt-4o", "anthropic/claude-3-opus"]`). Overrides `OPENROUTER_PROVIDER_ORDER`. (Phase 2)
  - `require_parameters` (boolean, optional): If `true`, only use providers that support all specified request parameters (like tools, functions, temperature). Overrides `OPENROUTER_PROVIDER_REQUIRE_PARAMETERS`. (Phase 2)
  - `data_collection` (`"allow"` | `"deny"`, optional): Specify whether providers are allowed to collect data from the request. Overrides `OPENROUTER_PROVIDER_DATA_COLLECTION`. (Phase 2)
  - `allow_fallbacks` (boolean, optional): If `true` (default), allows falling back to other providers if the preferred ones fail or are unavailable. If `false`, fails the request if preferred providers cannot be used. Overrides `OPENROUTER_PROVIDER_ALLOW_FALLBACKS`. (Phase 2)

**Example Usage:**
{ "tool": "chat_completion", "arguments": { "model": "anthropic/claude-3-haiku", "messages": [ { "role": "user", "content": "Explain the concept of quantization in AI models." } ], "max_tokens": 500, "provider": { "quantizations": ["fp16"], "ignore": ["openai"], "sort": "price", "order": ["anthropic/claude-3-haiku", "google/gemini-pro"], "require_parameters": true, "allow_fallbacks": false } } }
This example requests a completion from `anthropic/claude-3-haiku` and limits the response to 500 tokens. It specifies provider routing options: prefer `fp16`-quantized models, ignore `openai` providers, sort the remaining providers by price, prioritize `anthropic/claude-3-haiku` then `google/gemini-pro`, require the chosen provider to support all request parameters (like `max_tokens`), and disable fallbacks (fail if the prioritized providers cannot fulfill the request).
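For programmatic use, a tool call like the one above can be issued through the MCP TypeScript SDK. The sketch below is illustrative, not part of this package; it assumes the server is launched via `npx` as in the Setup section, and exact SDK class names or options may vary by SDK version:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over STDIO, mirroring the Setup configuration.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["@mcpservers/openrouterai"],
  env: { OPENROUTER_API_KEY: "your-api-key-here" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Issue the chat_completion call shown above.
const result = await client.callTool({
  name: "chat_completion",
  arguments: {
    model: "anthropic/claude-3-haiku",
    messages: [{ role: "user", content: "Explain quantization in AI models." }],
    max_tokens: 500,
  },
});

console.log(result);
```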
### search_models

Search and filter available models:

```typescript
interface ModelSearchRequest {
  query?: string;
  provider?: string;
  minContextLength?: number;
  capabilities?: {
    functions?: boolean;
    vision?: boolean;
  };
}

// Response: ToolResult with model list or error
```
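For example, a client might look for vision-capable models with a large context window. A minimal sketch, reusing the illustrative `client` object from the chat_completion example above:

```typescript
// Find vision-capable models with at least 32k context.
const models = await client.callTool({
  name: "search_models",
  arguments: {
    query: "claude",
    minContextLength: 32000,
    capabilities: { vision: true },
  },
});
console.log(models.content);
```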
### get_model_info

Get detailed information about a specific model:

```typescript
{
  model: string; // Model identifier
}
```
### validate_model

Check if a model ID is valid:

```typescript
interface ModelValidationRequest {
  model: string;
}

// Response:
// Success: { isError: false, valid: true }
// Error: { isError: true, error: "Model not found" }
```
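These two tools combine naturally: validate a model ID first, then fetch its details. A minimal sketch, again reusing the illustrative `client` object from the chat_completion example:

```typescript
// Validate a model ID before requesting its details.
const validation = await client.callTool({
  name: "validate_model",
  arguments: { model: "anthropic/claude-3-haiku" },
});

if (!validation.isError) {
  const info = await client.callTool({
    name: "get_model_info",
    arguments: { model: "anthropic/claude-3-haiku" },
  });
  console.log(info);
}
```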
## Error Handling

The server provides structured errors with contextual information:

```typescript
// Error response structure
{
  isError: true,
  content: [{
    type: "text",
    text: "Error: [Category] - Detailed message"
  }]
}
```
**Common Error Categories:**

- **Validation Error**: Invalid input parameters
- **API Error**: OpenRouter API communication issues
- **Rate Limit**: Request throttling detection
- **Internal Error**: Server-side processing failures

**Handling Responses:**
```typescript
async function handleResponse(result: ToolResult) {
  if (result.isError) {
    const errorMessage = result.content[0].text;
    if (errorMessage.startsWith('Error: Rate Limit')) {
      // Handle rate limiting
    }
    // Other error handling
  } else {
    const data = JSON.parse(result.content[0].text);
    // Process successful response
  }
}
```
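One common way to act on the `Rate Limit` category is a retry with exponential backoff. A minimal sketch, assuming a caller-supplied function that returns the `ToolResult` shape defined in the Response Format section (the `callWithRetry` helper is illustrative, not part of the server's API):

```typescript
// Retry a tool call on rate-limit errors with exponential backoff.
async function callWithRetry(
  call: () => Promise<ToolResult>,
  maxRetries = 3
): Promise<ToolResult> {
  for (let attempt = 0; ; attempt++) {
    const result = await call();
    const isRateLimited =
      result.isError && result.content[0].text.startsWith("Error: Rate Limit");
    if (!isRateLimited || attempt >= maxRetries) return result;
    // Wait 1s, 2s, 4s, ... before retrying.
    await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
  }
}
```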
## Contributing

See CONTRIBUTING.md for detailed contribution guidelines.

```bash
# Install dependencies
pnpm install

# Build project
pnpm run build

# Run tests
pnpm test
```

## Changelog

See CHANGELOG.md for recent updates.
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.