Scout Monitoring MCP
An MCP server (STDIO transport) integrating Scout Monitoring's performance-monitoring API
This repository contains code to locally run an MCP server that can access Scout Monitoring data via Scout's API. We provide a Docker image that can be pulled and run by your AI Assistant to access Scout Monitoring data.
This puts Scout Monitoring's performance and error data directly in the hands of your AI Assistant. For Rails, Django, FastAPI, Laravel and more. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor and codebase. N+1 queries, slow endpoints, slow queries, memory bloat, throughput issues - all your favorite performance problems surfaced and explained right where you are working.
If this makes your life a tiny bit better, why not :star: it?!
The simplest way to configure and start using the Scout MCP is with our interactive setup wizard. It handles all the prereqs and installation steps for you.
Run via npx:
```shell
npx @scout_apm/wizard
```
Build and run from source:
```shell
cd ./wizard
npm install
npm run build
node dist/wizard.js
```
The wizard will guide you through:
The wizard currently supports setup for:
For all others, it will output JSON that you can copy/paste into your AI Assistant's MCP configuration.
The Wizard is a great way to get started, but you can also set things up manually. You will need to have or create a Scout Monitoring account and obtain an API key.
The MCP server will not currently start without an API key set, either in the environment or by a command-line argument on startup.
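For example, the key can be supplied through the environment when launching the server. This is a sketch using the Docker image described below; the key value is a placeholder you must replace with your real Scout API key:

```shell
# Export the key so the Assistant (or docker) can pass it through to the server
export SCOUT_API_KEY=your_scout_api_key_here

# -e SCOUT_API_KEY forwards the variable from the host environment into the container
docker run --rm -i -e SCOUT_API_KEY scoutapp/scout-mcp-local
```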
We recommend using the provided Docker image to run the MCP server. It is intended to be started by your AI Assistant and configured with your Scout API key. Many local clients allow specifying a command to run the MCP server in some location. A few examples are provided below.
The Docker image is available on Docker Hub.
Of course, you can always clone this repo and run the MCP server directly; uv or other
environment management tools are recommended.
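A minimal sketch of running from source with uv; the clone URL is an assumption (inferred from the Docker Hub image name), and `uv run task dev` is the development entry point shown under Development below:

```shell
# Hypothetical clone URL; adjust to the actual repository location
git clone https://github.com/scoutapp/scout-mcp-local.git
cd scout-mcp-local

# The server will not start without an API key in the environment
export SCOUT_API_KEY=your_scout_api_key_here

# Start the server in development mode via uv + taskipy
uv run task dev
```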
If you would like to configure the MCP manually, this usually just means supplying a command to run the MCP server with your API key in the environment to your AI Assistant's config. Here is the shape of the JSON (the top-level key varies):
```json
{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here" }
    }
  }
}
```
```shell
claude mcp add scoutmcp -e SCOUT_API_KEY=your_scout_api_key_here -- docker run --rm -i -e SCOUT_API_KEY scoutapp/scout-mcp-local
```
MAKE SURE to update the SCOUT_API_KEY value to your actual API key in Arguments under Cursor Settings > MCP.
Add the following to your claude config file:
macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
Windows: `%APPDATA%/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here" }
    }
  }
}
```
Scout's MCP is intended to put error and performance data directly in the... hands? of your AI Assistant. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor.
Most assistants will show you both raw tool calls and perform analysis. Desktop assistants can readily create custom JS applications to explore whatever data you desire. Assistants integrated into code editors can use trace data and error backtraces to make fixes right in your codebase.
Combine Scout's MCP with your AI Assistant's other tools to:
The Scout MCP provides the following tools for accessing Scout APM data:

- `list_apps` - List available Scout APM applications, with optional filtering by last active date
- `get_app_metrics` - Get individual metric data (response_time, throughput, etc.) for a specific application
- `get_app_endpoints` - Get all endpoints for an application with aggregated performance metrics
- `get_endpoint_metrics` - Get timeseries metrics for a specific endpoint in an application
- `get_app_endpoint_traces` - Get recent traces for an app filtered to a specific endpoint
- `get_app_trace` - Get an individual trace with all spans and detailed execution information
- `get_app_error_groups` - Get recent error groups for an app, optionally filtered by endpoint
- `get_app_insights` - Get performance insights including N+1 queries, memory bloat, and slow queries

The Scout MCP provides configuration templates as resources that your AI assistant can read and apply:
- `scoutapm://config-resources/{framework}` - Setup instructions for a supported framework or library (rails, django, flask, fastapi)
- `scoutapm://config-resources/list` - List all available configuration templates
- `scoutapm://metrics` - List of all available metrics for Scout APM

Example prompts:

- "... my-app-name in the last 7 days. Generate a table with the results including the average response time, throughput, and P95 response time."
- "... Foo in the last 24 hours. Get the latest error detail, examine the backtrace and suggest a fix."
- "... Bar. Pull the specific trace by id and help me optimize it based on the backtrace data."

We are currently more interested in expanding available information than strictly controlling response size from our MCP tools. If your AI Assistant has a configurable token limit (e.g. Claude Code: `export MAX_MCP_OUTPUT_TOKENS=50000`), we recommend setting it generously high, e.g. 50,000 tokens.
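For reference, the tools and `scoutapm://` resources above are exercised over MCP's standard JSON-RPC framing. The request ids and the `rails` URI below are illustrative, not Scout-specific documentation. A client invokes a tool with `tools/call`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "list_apps", "arguments": {} }
}
```

and reads a configuration template with `resources/read`:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "scoutapm://config-resources/rails" }
}
```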
We use uv and taskipy to manage environments and run tasks for this project.
```shell
uv run task dev
```

Connect within the MCP Inspector, add your API key, and set the transport to STDIO.
```shell
docker build -t scout-mcp-local .
```
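To smoke-test the locally built image, you can run it the same way an Assistant would launch the published one (the API key value is a placeholder):

```shell
# Run the locally built image instead of scoutapp/scout-mcp-local from Docker Hub
docker run --rm -i -e SCOUT_API_KEY=your_scout_api_key_here scout-mcp-local
```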