
Kubernetes
Kubernetes cluster management server with kubectl integration and multiple authentication methods.
MCP Server that can connect to a Kubernetes cluster and manage it. Supports loading kubeconfig from multiple sources in priority order.
Demo video: https://github.com/user-attachments/assets/f25f8f4e-4d04-479b-9ae0-5dac452dd2ed
Before using this MCP server with any tool, make sure you have:

- kubectl installed and available in your PATH
- A valid kubeconfig file with access to a Kubernetes cluster
- Helm v3 installed if you plan to use the Helm tools

You can verify your connection by running `kubectl get pods` in a terminal to ensure you can connect to your cluster without credential issues.
By default, the server loads kubeconfig from `~/.kube/config`. For additional authentication options (environment variables, custom paths, etc.), see ADVANCED_README.md.
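As a rough illustration of that kind of lookup order, the sketch below resolves a kubeconfig path by checking the standard `KUBECONFIG` environment variable before falling back to the default location. This is an assumption for illustration only; the server's actual priority order may include more sources (see ADVANCED_README.md).

```python
import os

def resolve_kubeconfig(env=None):
    """Sketch of a priority-order kubeconfig lookup: an explicit
    KUBECONFIG variable wins, otherwise fall back to ~/.kube/config.
    The real server may consult additional sources."""
    env = os.environ if env is None else env
    return env.get("KUBECONFIG") or os.path.expanduser("~/.kube/config")

print(resolve_kubeconfig({"KUBECONFIG": "/tmp/kc"}))  # /tmp/kc
```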
Add the MCP server to Claude Code using the built-in command:
```shell
claude mcp add kubernetes -- npx mcp-server-kubernetes
```
This will automatically configure the server in your Claude Code MCP settings.
Add the following configuration to your Claude Desktop config file:
```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"]
    }
  }
}
```
For VS Code integration, you can use the MCP server with extensions that support the Model Context Protocol:
```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"],
      "description": "Kubernetes cluster management and operations"
    }
  }
}
```
Cursor supports MCP servers through its AI integration. Add the server to your Cursor MCP configuration:
```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"]
    }
  }
}
```
The server will automatically connect to your current kubectl context. You can verify the connection by asking the AI assistant to list your pods or create a test deployment.
mcp-chat is a CLI chat client for MCP servers. You can use it to interact with the Kubernetes server.
```shell
npx mcp-chat --server "npx mcp-server-kubernetes"
```
Alternatively, pass it your existing Claude Desktop configuration file from above (on Linux, substitute the correct path to the config):

Mac:

```shell
npx mcp-chat --config "~/Library/Application Support/Claude/claude_desktop_config.json"
```

Windows:

```shell
npx mcp-chat --config "%APPDATA%\Claude\claude_desktop_config.json"
```
Available tools include:

- `kubectl_get`
- `kubectl_describe`
- `kubectl_create`
- `kubectl_apply`
- `kubectl_delete`
- `kubectl_logs`
- `kubectl_context`
- `explain_resource`
- `list_api_resources`
- `kubectl_scale` (replaces the legacy `scale_deployment`)
- `kubectl_patch`
- `kubectl_rollout`
- `kubectl_generic`
- `ping`
- `port_forward`
- `helm_template_apply` (renders with `helm template` and applies with `kubectl apply` to bypass authentication issues)
- `helm_template_uninstall` (removes resources installed that way, to bypass authentication issues)
- `cleanup_pods` (cleans up pods in states: Evicted, ContainerStatusUnknown, Completed, Error, ImagePullBackOff, CrashLoopBackOff)
- `node_management` (cordon, drain, and uncordon nodes for maintenance and scaling operations)
- `k8s-diagnose`
Secret values are masked in the output of `kubectl get secrets` commands (masking does not affect logs).

The MCP Kubernetes server includes specialized prompts to assist with common diagnostic operations.
The `k8s-diagnose` prompt provides a systematic troubleshooting flow for Kubernetes pods. It accepts a `keyword` to identify relevant pods and an optional `namespace` to narrow the search. The prompt's output guides you through an autonomous troubleshooting flow, with instructions for identifying issues, collecting evidence, and suggesting remediation steps.
Make sure that you have bun installed. Clone the repo and install dependencies:

```shell
git clone https://github.com/Flux159/mcp-server-kubernetes.git
cd mcp-server-kubernetes
bun install
```

Start the server in development mode:

```shell
bun run dev
```

Run the tests:

```shell
bun run test
```

Build the project:

```shell
bun run build
```

Run the MCP Inspector against your local build:

```shell
npx @modelcontextprotocol/inspector node dist/index.js
# Follow further instructions on terminal for Inspector link
```

To test the local build with Claude Desktop, point your config at the build output:

```json
{
  "mcpServers": {
    "mcp-server-kubernetes": {
      "command": "node",
      "args": ["/path/to/your/mcp-server-kubernetes/dist/index.js"]
    }
  }
}
```

Or chat with the local build via mcp-chat:

```shell
bun run chat
```
See the CONTRIBUTING.md file for details.
You can run the server in a non-destructive mode that disables all destructive operations (delete pods, delete deployments, delete namespaces, etc.):
```shell
ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS=true npx mcp-server-kubernetes
```
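Conceptually, the flag acts as a filter over the tools the server advertises. A minimal Python sketch of that idea follows; the tool names come from this README, but the server's actual implementation may differ:

```python
# Sketch: hide destructive tools when non-destructive mode is enabled.
# Tool names are taken from this README; the real filtering logic may differ.
DESTRUCTIVE_TOOLS = {
    "kubectl_delete", "uninstall_helm_chart", "cleanup",
    "cleanup_pods", "node_management", "kubectl_generic",
}

def filter_tools(all_tools, non_destructive: bool):
    """Return the tools to advertise, dropping destructive ones if asked."""
    if not non_destructive:
        return list(all_tools)
    return [t for t in all_tools if t not in DESTRUCTIVE_TOOLS]

tools = ["kubectl_get", "kubectl_delete", "kubectl_apply", "kubectl_generic"]
print(filter_tools(tools, non_destructive=True))  # ['kubectl_get', 'kubectl_apply']
```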
For Claude Desktop configuration with non-destructive mode:
```json
{
  "mcpServers": {
    "kubernetes-readonly": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"],
      "env": {
        "ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS": "true"
      }
    }
  }
}
```
All read-only and resource creation/update operations remain available:

- Read operations: `kubectl_get`, `kubectl_describe`, `kubectl_logs`, `explain_resource`, `list_api_resources`
- Create/update operations: `kubectl_apply`, `kubectl_create`, `kubectl_scale`, `kubectl_patch`, `kubectl_rollout`
- Helm operations: `install_helm_chart`, `upgrade_helm_chart`, `helm_template_apply`, `helm_template_uninstall`
- Port forwarding: `port_forward`, `stop_port_forward`
- Context management: `kubectl_context`
The following destructive operations are disabled:

- `kubectl_delete`: deleting any Kubernetes resources
- `uninstall_helm_chart`: uninstalling Helm charts
- `cleanup`: cleanup of managed resources
- `cleanup_pods`: cleaning up problematic pods
- `node_management`: node management operations (can drain nodes)
- `kubectl_generic`: general kubectl command access (may include destructive operations)

The `helm_template_apply` tool provides an alternative way to install Helm charts that bypasses authentication issues commonly encountered with certain Kubernetes configurations. This tool is particularly useful when you encounter errors like:
```
WARNING: Kubernetes configuration file is group-readable. This is insecure.
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
```
Instead of using `helm install` directly, this tool:

1. Uses `helm template` to generate YAML manifests from the Helm chart
2. Applies the rendered manifests with `kubectl apply`
```json
{
  "name": "helm_template_apply",
  "arguments": {
    "name": "events-exporter",
    "chart": ".",
    "namespace": "kube-event-exporter",
    "valuesFile": "values.yaml",
    "createNamespace": true
  }
}
```
This is equivalent to running:
```shell
helm template events-exporter . -f values.yaml > events-exporter.yaml
kubectl create namespace kube-event-exporter
kubectl apply -f events-exporter.yaml -n kube-event-exporter
```
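The mapping from tool arguments to those shell steps can be sketched as pure string-building logic. The helper below is hypothetical (it is not part of the server); it only illustrates how `name`, `chart`, `namespace`, `valuesFile`, and `createNamespace` translate into the equivalent commands:

```python
# Sketch: map helm_template_apply arguments to the equivalent CLI steps.
# This helper is illustrative only, not the server's actual code.
def helm_template_apply_commands(name, chart, namespace,
                                 values_file=None, create_namespace=True):
    template = f"helm template {name} {chart}"
    if values_file:
        template += f" -f {values_file}"
    cmds = [f"{template} > {name}.yaml"]  # render manifests to a file
    if create_namespace:
        cmds.append(f"kubectl create namespace {namespace}")
    cmds.append(f"kubectl apply -f {name}.yaml -n {namespace}")
    return cmds

for cmd in helm_template_apply_commands(
        "events-exporter", ".", "kube-event-exporter", values_file="values.yaml"):
    print(cmd)
```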
Parameters:

- `name`: release name for the Helm chart
- `chart`: chart name or path to chart directory
- `repo`: chart repository URL (optional if using a local chart path)
- `namespace`: Kubernetes namespace to deploy to
- `values`: chart values as an object (optional)
- `valuesFile`: path to a values.yaml file (optional, alternative to the values object)
- `createNamespace`: whether to create the namespace if it doesn't exist (default: true)

Pod cleanup can be achieved using the existing `kubectl_get` and `kubectl_delete` tools with field selectors. This approach leverages standard Kubernetes functionality without requiring dedicated cleanup tools.
Use `kubectl_get` with field selectors to identify pods in problematic states:
Get failed pods:
```json
{
  "name": "kubectl_get",
  "arguments": {
    "resourceType": "pods",
    "namespace": "default",
    "fieldSelector": "status.phase=Failed"
  }
}
```
Get completed pods:
```json
{
  "name": "kubectl_get",
  "arguments": {
    "resourceType": "pods",
    "namespace": "default",
    "fieldSelector": "status.phase=Succeeded"
  }
}
```
Get pods with specific conditions:
```json
{
  "name": "kubectl_get",
  "arguments": {
    "resourceType": "pods",
    "namespace": "default",
    "fieldSelector": "status.conditions[?(@.type=='Ready')].status=False"
  }
}
```
Use `kubectl_delete` with field selectors to delete pods in problematic states:
Delete failed pods:
```json
{
  "name": "kubectl_delete",
  "arguments": {
    "resourceType": "pods",
    "namespace": "default",
    "fieldSelector": "status.phase=Failed",
    "force": true,
    "gracePeriodSeconds": 0
  }
}
```
Delete completed pods:
```json
{
  "name": "kubectl_delete",
  "arguments": {
    "resourceType": "pods",
    "namespace": "default",
    "fieldSelector": "status.phase=Succeeded",
    "force": true,
    "gracePeriodSeconds": 0
  }
}
```
In summary:

1. Identify problematic pods with `kubectl_get` and appropriate field selectors
2. Delete them with `kubectl_delete` using the same field selectors

Useful field selectors:

- `status.phase=Failed`: pods that have failed
- `status.phase=Succeeded`: pods that have completed successfully
- `status.phase=Pending`: pods that are pending
- `status.conditions[?(@.type=='Ready')].status=False`: pods that are not ready

Use `force=true` and `gracePeriodSeconds=0` for immediate deletion, and `allNamespaces=true` to operate across all namespaces.
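The two-step cleanup flow can be sketched as request-building logic. The helper below is hypothetical; the argument shapes follow the JSON examples in this README:

```python
# Sketch: build matching kubectl_get / kubectl_delete tool calls for
# cleaning up pods in a problematic phase. Illustrative only.
def build_cleanup_calls(phase: str, namespace: str = "default"):
    selector = f"status.phase={phase}"
    get_call = {  # step 1: identify the pods
        "name": "kubectl_get",
        "arguments": {
            "resourceType": "pods",
            "namespace": namespace,
            "fieldSelector": selector,
        },
    }
    delete_call = {  # step 2: delete them with the same selector
        "name": "kubectl_delete",
        "arguments": {
            "resourceType": "pods",
            "namespace": namespace,
            "fieldSelector": selector,
            "force": True,
            "gracePeriodSeconds": 0,
        },
    }
    return get_call, delete_call
```

Using the same field selector in both calls ensures you delete exactly the set of pods you just inspected.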
The `node_management` tool provides comprehensive node management capabilities for Kubernetes clusters, including cordoning, draining, and uncordoning operations. This is essential for cluster maintenance, scaling, and troubleshooting.

Supported operations:

- `list`: list all nodes with their status and schedulability
- `cordon`: mark a node as unschedulable (no new pods will be scheduled)
- `drain`: safely evict all pods from a node and mark it as unschedulable
- `uncordon`: mark a node as schedulable again

1. List all nodes:
```json
{
  "name": "node_management",
  "arguments": {
    "operation": "list"
  }
}
```

2. Cordon a node (mark as unschedulable):

```json
{
  "name": "node_management",
  "arguments": {
    "operation": "cordon",
    "nodeName": "worker-node-1"
  }
}
```

3. Drain a node (dry run first):

```json
{
  "name": "node_management",
  "arguments": {
    "operation": "drain",
    "nodeName": "worker-node-1",
    "dryRun": true
  }
}
```

4. Drain a node (with confirmation):

```json
{
  "name": "node_management",
  "arguments": {
    "operation": "drain",
    "nodeName": "worker-node-1",
    "dryRun": false,
    "confirmDrain": true,
    "force": true,
    "ignoreDaemonsets": true,
    "timeout": "5m"
  }
}
```

5. Uncordon a node:

```json
{
  "name": "node_management",
  "arguments": {
    "operation": "uncordon",
    "nodeName": "worker-node-1"
  }
}
```
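A cautious maintenance flow is: cordon, dry-run the drain, drain with explicit confirmation, then uncordon when maintenance is done. A hypothetical helper that emits that sequence of tool calls (argument shapes follow this README's examples) might look like:

```python
# Sketch: build the tool-call sequence for safe node maintenance.
# Illustrative only; argument shapes mirror the README's JSON examples.
def node_maintenance_plan(node: str):
    def call(operation: str, **extra):
        return {"name": "node_management",
                "arguments": {"operation": operation, "nodeName": node, **extra}}
    return [
        call("cordon"),                       # stop new pods landing on the node
        call("drain", dryRun=True),           # preview what would be evicted
        call("drain", dryRun=False,           # actually drain, with confirmation
             confirmDrain=True, ignoreDaemonsets=True),
        call("uncordon"),                     # return the node to service
    ]

plan = node_maintenance_plan("worker-node-1")
print([c["arguments"]["operation"] for c in plan])
# ['cordon', 'drain', 'drain', 'uncordon']
```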
Drain parameters:

- `force`: force the operation even if there are pods not managed by controllers
- `gracePeriod`: period of time in seconds given to each pod to terminate gracefully
- `deleteLocalData`: delete local data even if emptyDir volumes are used
- `ignoreDaemonsets`: ignore DaemonSet-managed pods (default: true)
- `timeout`: the length of time to wait before giving up (e.g., '5m', '1h')
- `dryRun`: show what would be done without actually doing it
- `confirmDrain`: explicit confirmation to drain the node; actual draining requires `confirmDrain=true` to proceed

For additional advanced features, see the ADVANCED_README.md.
See this DeepWiki link for a more in-depth architecture overview created by Devin.
This section describes the high-level architecture of the MCP Kubernetes server.
The sequence diagram below illustrates how requests flow through the system:
```mermaid
sequenceDiagram
    participant Client
    participant Transport as Transport Layer
    participant Server as MCP Server
    participant Filter as Tool Filter
    participant Handler as Request Handler
    participant K8sManager as KubernetesManager
    participant K8s as Kubernetes API

    Note over Transport: StdioTransport or<br>SSE Transport

    Client->>Transport: Send Request
    Transport->>Server: Forward Request

    alt Tools Request
        Server->>Filter: Filter available tools
        Note over Filter: Remove destructive tools<br>if in non-destructive mode
        Filter->>Handler: Route to tools handler

        alt kubectl operations
            Handler->>K8sManager: Execute kubectl operation
            K8sManager->>K8s: Make API call
        else Helm operations
            Handler->>K8sManager: Execute Helm operation
            K8sManager->>K8s: Make API call
        else Port Forward operations
            Handler->>K8sManager: Set up port forwarding
            K8sManager->>K8s: Make API call
        end

        K8s-->>K8sManager: Return result
        K8sManager-->>Handler: Process response
        Handler-->>Server: Return tool result
    else Resource Request
        Server->>Handler: Route to resource handler
        Handler->>K8sManager: Get resource data
        K8sManager->>K8s: Query API
        K8s-->>K8sManager: Return data
        K8sManager-->>Handler: Format response
        Handler-->>Server: Return resource data
    end

    Server-->>Transport: Send Response
    Transport-->>Client: Return Final Response
```
Go to the releases page, click on "Draft New Release", click "Choose a tag" and create a new tag by typing out a new version number using "v{major}.{minor}.{patch}" semver format. Then, write a release title "Release v{major}.{minor}.{patch}" and description / changelog if necessary and click "Publish Release".
This will create a new tag which will trigger a new release build via the cd.yml workflow. Once successful, the new release will be published to npm. Note that there is no need to update the package.json version manually, as the workflow will automatically update the version number in the package.json file & push a commit to main.
Adding clusters to kubectx.
If you find this repo useful, please cite:
```bibtex
@software{Patel_MCP_Server_Kubernetes_2024,
  author = {Patel, Paras and Sonwalkar, Suyog},
  month = jul,
  title = {{MCP Server Kubernetes}},
  url = {https://github.com/Flux159/mcp-server-kubernetes},
  version = {2.5.0},
  year = {2024}
}
```