Kubernetes Manager
MCP Server that connects to a Kubernetes cluster and manages it. Supports loading kubeconfig from multiple sources in priority order.
https://github.com/user-attachments/assets/f25f8f4e-4d04-479b-9ae0-5dac452dd2ed
{ "mcpServers": { "kubernetes": { "command": "npx", "args": ["mcp-server-kubernetes"] } } }
By default, the server loads kubeconfig from ~/.kube/config. For additional authentication options (environment variables, custom paths, etc.), see ADVANCED_README.md.
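If your kubeconfig lives somewhere else, one approach is to point the server at it through an environment variable in the Claude Desktop config. The sketch below assumes the server honors the standard KUBECONFIG variable the same way kubectl does; check ADVANCED_README.md for the exact variables that are supported.

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"],
      "env": {
        "KUBECONFIG": "/path/to/your/kubeconfig"
      }
    }
  }
}
```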
The server will automatically connect to your current kubectl context. Make sure you have kubectl installed and in your PATH, plus a valid kubeconfig file with access to a Kubernetes cluster.
You can verify your connection by asking Claude to list your pods or create a test deployment.
If you run into errors, open a standard terminal and run kubectl get pods to check that you can connect to your cluster without credential issues.
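For example, these standard kubectl commands confirm which context is active and whether the cluster is reachable independently of the MCP server:

```bash
# Show the context the MCP server will connect to by default
kubectl config current-context

# Verify that credentials and connectivity work outside the MCP server
kubectl get pods --all-namespaces
```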
mcp-chat is a CLI chat client for MCP servers. You can use it to interact with the Kubernetes server.
npx mcp-chat --server "npx mcp-server-kubernetes"
Alternatively, pass it your existing Claude Desktop configuration file from above (on Linux, pass the appropriate path to your config file):
Mac:
npx mcp-chat --config "~/Library/Application Support/Claude/claude_desktop_config.json"
Windows:
npx mcp-chat --config "%APPDATA%\Claude\claude_desktop_config.json"
kubectl_get
kubectl_describe
kubectl_list
kubectl_create
kubectl_apply
kubectl_delete
kubectl_logs
kubectl_context
explain_resource
list_api_resources
kubectl_scale (replaces legacy scale_deployment)
kubectl_patch
kubectl_rollout
kubectl_generic
port_forward
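Tools are invoked through the standard MCP tools/call request. The sketch below shows the general shape of such a call for kubectl_get; the argument names (resourceType, name, namespace) and the pod name are illustrative assumptions, not the server's documented schema.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "kubectl_get",
    "arguments": {
      "resourceType": "pods",
      "name": "my-pod",
      "namespace": "default"
    }
  }
}
```

In practice, MCP clients such as Claude Desktop or mcp-chat construct these calls for you from natural-language prompts.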
Make sure that you have bun installed. Clone the repo & install dependencies:
```bash
git clone https://github.com/Flux159/mcp-server-kubernetes.git
cd mcp-server-kubernetes
bun install
```
bun run dev
bun run test
bun run build
npx @modelcontextprotocol/inspector node dist/index.js # Follow the instructions printed in the terminal for the Inspector link
{ "mcpServers": { "mcp-server-kubernetes": { "command": "node", "args": ["/path/to/your/mcp-server-kubernetes/dist/index.js"] } } }
bun run chat
See the CONTRIBUTING.md file for details.
You can run the server in a non-destructive mode that disables all destructive operations (delete pods, delete deployments, delete namespaces, etc.):
ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS=true npx mcp-server-kubernetes
For Claude Desktop configuration with non-destructive mode:
{ "mcpServers": { "kubernetes-readonly": { "command": "npx", "args": ["mcp-server-kubernetes"], "env": { "ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS": "true" } } } }
All read-only and resource creation/update operations remain available:
Read operations: kubectl_get, kubectl_describe, kubectl_list, kubectl_logs, explain_resource, list_api_resources
Resource creation/update operations: kubectl_apply, kubectl_create, kubectl_scale, kubectl_patch, kubectl_rollout
Helm operations: install_helm_chart, upgrade_helm_chart
Port forwarding: port_forward, stop_port_forward
Context management: kubectl_context
The following destructive operations are disabled:
kubectl_delete: Deleting any Kubernetes resources
uninstall_helm_chart: Uninstalling Helm charts
cleanup: Cleanup of managed resources
kubectl_generic: General kubectl command access (may include destructive operations)

For additional advanced features, see the ADVANCED_README.md.
See this DeepWiki link for a more in-depth architecture overview created by Devin.
This section describes the high-level architecture of the MCP Kubernetes server.
The sequence diagram below illustrates how requests flow through the system:
```mermaid
sequenceDiagram
    participant Client
    participant Transport as Transport Layer
    participant Server as MCP Server
    participant Filter as Tool Filter
    participant Handler as Request Handler
    participant K8sManager as KubernetesManager
    participant K8s as Kubernetes API

    Note over Transport: StdioTransport or<br>SSE Transport

    Client->>Transport: Send Request
    Transport->>Server: Forward Request

    alt Tools Request
        Server->>Filter: Filter available tools
        Note over Filter: Remove destructive tools<br>if in non-destructive mode
        Filter->>Handler: Route to tools handler

        alt kubectl operations
            Handler->>K8sManager: Execute kubectl operation
            K8sManager->>K8s: Make API call
        else Helm operations
            Handler->>K8sManager: Execute Helm operation
            K8sManager->>K8s: Make API call
        else Port Forward operations
            Handler->>K8sManager: Set up port forwarding
            K8sManager->>K8s: Make API call
        end

        K8s-->>K8sManager: Return result
        K8sManager-->>Handler: Process response
        Handler-->>Server: Return tool result
    else Resource Request
        Server->>Handler: Route to resource handler
        Handler->>K8sManager: Get resource data
        K8sManager->>K8s: Query API
        K8s-->>K8sManager: Return data
        K8sManager-->>Handler: Format response
        Handler-->>Server: Return resource data
    end

    Server-->>Transport: Send Response
    Transport-->>Client: Return Final Response
```
Go to the releases page, click on "Draft New Release", click "Choose a tag" and create a new tag by typing out a new version number using "v{major}.{minor}.{patch}" semver format. Then, write a release title "Release v{major}.{minor}.{patch}" and description / changelog if necessary and click "Publish Release".
This will create a new tag which will trigger a new release build via the cd.yml workflow. Once successful, the new release will be published to npm. Note that there is no need to update the package.json version manually, as the workflow will automatically update the version number in the package.json file & push a commit to main.
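If you prefer the command line, pushing a tag of the same form should have the equivalent effect, assuming the cd.yml workflow is configured to trigger on tag pushes (the version number below is only an example):

```bash
git tag v1.2.3
git push origin v1.2.3
```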
Not planned: Adding clusters to kubectx.