
Keboola
Connect AI agents to Keboola data platform for storage access, transformations, SQL queries, and jobs.
Connect your AI agents, MCP clients (Cursor, Claude, Windsurf, VS Code ...) and other AI assistants to Keboola. Expose data, transformations, SQL queries, and job triggers—no glue code required. Deliver the right data to agents when and where they need it.
Keboola MCP Server is an open-source bridge between your Keboola project and modern AI tools. It turns Keboola features—like storage access, SQL transformations, and job triggers—into callable tools for Claude, Cursor, CrewAI, LangChain, Amazon Q, and more.
With the AI Agent and MCP Server, you can explore your storage, run SQL queries, build transformations, and trigger jobs directly from your AI assistant.
The easiest way to use Keboola MCP Server is through our Remote MCP Server. This hosted solution eliminates the need for local setup, configuration, or installation.
Our remote server is hosted on every multi-tenant Keboola stack and supports OAuth authentication. You can connect to it from any AI assistant that supports remote SSE connections and OAuth authentication.
MCP Server URL: `https://mcp.<YOUR_REGION>.keboola.com/sse`
For detailed setup instructions and region-specific URLs, see our Remote Server Setup documentation.
You can work safely in Keboola development branches without affecting your production data. The remotely hosted MCP Servers respect the KBC_BRANCH_ID parameter and scope all operations to the specified branch. You can find the development branch ID in the URL when navigating to the development branch in the UI, for example: https://connection.us-east4.gcp.keboola.com/admin/projects/PROJECT_ID/branch/BRANCH_ID/dashboard. The branch ID must be included in each request using the header X-Branch-Id: <branchId>; otherwise the MCP Server defaults to the production branch. This should be managed by the AI client or the environment handling the server connection.
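As an illustration, a client or proxy opening the SSE stream could attach the branch header like this (a sketch only; real connections also require OAuth authentication, which your AI client normally handles for you, and the branch ID below is a placeholder):

```bash
# Open the remote SSE stream with all operations scoped to a development branch
curl -N \
  -H "Accept: text/event-stream" \
  -H "X-Branch-Id: 123456" \
  "https://mcp.<YOUR_REGION>.keboola.com/sse"
```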
Run the MCP server on your own machine for full control and easy development. Choose this when you want to customize tools, debug locally, or iterate quickly. You’ll clone the repo, set Keboola credentials via environment variables or headers depending on the server transport, install dependencies, and start the server. This approach offers maximum flexibility (custom tools, local logging, offline iteration) but requires manual setup and you manage updates and secrets yourself.
The server supports multiple transport options, which can be selected by providing the `--transport <transport>` argument when starting the server:

- `stdio` - Default when `--transport` is not specified. Standard input/output, typically used for local deployment with a single client.
- `streamable-http` - Runs the server remotely over HTTP with a bidirectional streaming channel, allowing the client and server to continuously exchange messages.
- `sse` - Deprecated, use `streamable-http` instead. Runs the server remotely using Server-Sent Events (SSE) for one-way event streaming from server to client.
- `http-compat` - A custom transport supporting both `sse` and `streamable-http`. It is currently used on Keboola remote servers but will soon be replaced by `streamable-http` only.

For client–server communication, Keboola credentials must be provided to enable working with your project in your Keboola region. The following are required: `KBC_STORAGE_TOKEN`, `KBC_STORAGE_API_URL`, `KBC_WORKSPACE_SCHEMA`, and optionally `KBC_BRANCH_ID`. You can provide these in two ways:
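For example, a minimal sketch of launching the server with different transports from a terminal (credentials are read from the environment variables described below; the host and port values here are only illustrative):

```bash
# Local stdio transport (the default) - normally started for you by the MCP client
uvx keboola_mcp_server

# Remote streamable-http transport, listening on a chosen host and port
uvx keboola_mcp_server --transport streamable-http --host 0.0.0.0 --port 8000

# Deprecated SSE transport (still accepted for older clients)
uvx keboola_mcp_server --transport sse
```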
This is your authentication token for Keboola:
For instructions on how to create and manage Storage API tokens, refer to the official Keboola documentation.
Note: If you want the MCP server to have limited access, use a custom storage token; if you want it to access everything in your project, use the Master Token.
This identifies your workspace in Keboola and is used for SQL queries. However, this is only required if you're using a custom storage token instead of the Master Token:
Note: When creating a workspace manually, check the Grant read-only access to all Project data option.
Note: KBC_WORKSPACE_SCHEMA is called Dataset Name in BigQuery workspaces; simply click Connect and copy the Dataset Name.
Your Keboola Region API URL depends on your deployment region. You can determine your region by looking at the URL in your browser when logged into your Keboola project:
Region | API URL |
---|---|
AWS North America | https://connection.keboola.com |
AWS Europe | https://connection.eu-central-1.keboola.com |
Google Cloud EU | https://connection.europe-west3.gcp.keboola.com |
Google Cloud US | https://connection.us-east4.gcp.keboola.com |
Azure EU | https://connection.north-europe.azure.keboola.com |
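For example, if the URL of your project starts with connection.eu-central-1.keboola.com (AWS Europe), you would point the server at that stack:

```bash
export KBC_STORAGE_API_URL=https://connection.eu-central-1.keboola.com
```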
To operate on a specific Keboola development branch, set the branch ID using the KBC_BRANCH_ID
parameter. The MCP server scopes its functionality to the specified branch, ensuring all changes remain isolated and do not impact the production branch.
- Set `KBC_BRANCH_ID` to the numeric ID of your branch (e.g., `123456`). You can find the development branch ID in the URL when navigating to the development branch in the UI, for example: https://connection.us-east4.gcp.keboola.com/admin/projects/PROJECT_ID/branch/BRANCH_ID/dashboard.
- When connecting remotely, pass the branch ID with each request as a header: `X-Branch-Id: <branchId>` or `KBC_BRANCH_ID: <branchId>`.
Note: Make sure you have uv installed. The MCP client will use it to automatically download and run the Keboola MCP Server.
Installing uv:
macOS/Linux:
```bash
# If Homebrew is not installed on your machine, use:
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install using Homebrew
brew install uv
```
Windows:
```powershell
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or using pip
pip install uv

# Or using winget
winget install --id=astral-sh.uv -e
```
For more installation options, see the official uv documentation.
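To quickly confirm that the installation worked, you can check the version from a terminal (the exact output depends on the uv release you installed):

```bash
uv --version
```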
There are four ways to use the Keboola MCP Server, depending on your needs:
In this mode, Claude or Cursor automatically starts the MCP server for you. You do not need to run any commands in your terminal.
{ "mcpServers": { "keboola": { "command": "uvx", "args": ["keboola_mcp_server --transport <transport>"], "env": { "KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com", "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema", "KBC_BRANCH_ID": "your_branch_id_optional" } } } }
Config file locations:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
{ "mcpServers": { "keboola": { "command": "uvx", "args": ["keboola_mcp_server --transport <transport>"], "env": { "KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com", "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema", "KBC_BRANCH_ID": "your_branch_id_optional" } } } }
Note: Use short, descriptive names for MCP servers. Since the full tool name includes the server name and must stay under ~60 characters, longer names may be filtered out in Cursor and will not be displayed to the Agent.
When running the MCP server from Windows Subsystem for Linux with Cursor AI, use this configuration:
{ "mcpServers": { "keboola":{ "command": "wsl.exe", "args": [ "bash", "-c '", "export KBC_STORAGE_API_URL=https://connection.YOUR_REGION.keboola.com &&", "export KBC_STORAGE_TOKEN=your_keboola_storage_token &&", "export KBC_WORKSPACE_SCHEMA=your_workspace_schema &&", "export KBC_BRANCH_ID=your_branch_id_optional &&", "/snap/bin/uvx keboola_mcp_server --transport <transport>", "'" ] } } }
For developers working on the MCP server code itself:
{ "mcpServers": { "keboola": { "command": "/absolute/path/to/.venv/bin/python", "args": [ "-m", "keboola_mcp_server --transport <transport>" ], "env": { "KBC_STORAGE_API_URL": "https://connection.YOUR_REGION.keboola.com", "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema", "KBC_BRANCH_ID": "your_branch_id_optional" } } } }
You can run the server manually in a terminal for testing or debugging:
```bash
# Set environment variables
export KBC_STORAGE_API_URL=https://connection.YOUR_REGION.keboola.com
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema
export KBC_BRANCH_ID=your_branch_id_optional

# Start the server
uvx keboola_mcp_server --transport sse
```
Note: This mode is primarily for debugging or testing. For normal use with Claude or Cursor, you do not need to manually run the server.
Note: The server will use the SSE transport and listen on `localhost:8000` for incoming SSE connections. You can use the `--port` and `--host` parameters to make it listen elsewhere.
```bash
docker pull keboola/mcp-server:latest

docker run \
  --name keboola_mcp_server \
  --rm \
  -it \
  -p 127.0.0.1:8000:8000 \
  -e KBC_STORAGE_API_URL="https://connection.YOUR_REGION.keboola.com" \
  -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
  -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
  -e KBC_BRANCH_ID="YOUR_BRANCH_ID_OPTIONAL" \
  keboola/mcp-server:latest \
  --transport sse \
  --host 0.0.0.0
```
Note: The server will use the SSE transport and listen on `localhost:8000` for incoming SSE connections. You can change the `-p` option to map the container's port somewhere else.
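Whether you started the server from the terminal or in Docker, you can sanity-check that it is accepting SSE connections (a sketch; it assumes the default `/sse` path used by the SSE transport and the default `localhost:8000` address):

```bash
# Should keep the connection open and start streaming events
curl -N -H "Accept: text/event-stream" http://localhost:8000/sse
```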
Scenario | Need to Run Manually? | Use This Setup |
---|---|---|
Using Claude/Cursor | No | Configure MCP in app settings |
Developing MCP locally | No (Claude starts it) | Point config to python path |
Testing CLI manually | Yes | Use terminal to run |
Using Docker | Yes | Run docker container |
Once your MCP client (Claude/Cursor) is configured and running, you can start querying your Keboola data:
You can start with a simple query to confirm everything is working:
What buckets and tables are in my Keboola project?
From there, you can move on to prompts for data exploration, data analysis, and data pipelines.
MCP Client | Support Status | Connection Method |
---|---|---|
Claude (Desktop & Web) | ✅ Supported | stdio |
Cursor | ✅ Supported | stdio |
Windsurf, Zed, Replit | ✅ Supported | stdio |
Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |
Note: Your AI agents will automatically adjust to new tools.
Category | Tool | Description |
---|---|---|
Project | get_project_info | Returns structured information about your Keboola project |
Storage | get_bucket | Gets detailed information about a specific bucket |
get_table | Gets detailed information about a specific table, including DB identifier and columns | |
list_buckets | Retrieves all buckets in the project | |
list_tables | Retrieves all tables in a specific bucket | |
update_description | Updates description for a bucket, table, or column | |
SQL | query_data | Executes a SELECT query against the underlying database |
Component | add_config_row | Creates a configuration row for a component configuration |
create_config | Creates a root component configuration | |
create_sql_transformation | Creates an SQL transformation from one or more SQL code blocks | |
find_component_id | Finds component IDs matching a natural-language query | |
get_component | Retrieves details of a component by ID | |
get_config | Retrieves a specific component/transformation configuration | |
get_config_examples | Retrieves example configurations for a component | |
list_configs | Lists configurations in the project, optionally filtered | |
list_transformations | Lists transformation configurations in the project | |
update_config | Updates a root component configuration | |
update_config_row | Updates a component configuration row | |
update_sql_transformation | Updates an existing SQL transformation configuration | |
Flow | create_conditional_flow | Creates a conditional flow (keboola.flow) |
create_flow | Creates a legacy flow (keboola.orchestrator) | |
get_flow | Retrieves details of a specific flow configuration | |
get_flow_examples | Retrieves examples of valid flow configurations | |
get_flow_schema | Returns the JSON schema for the specified flow type | |
list_flows | Lists flow configurations in the project | |
update_flow | Updates an existing flow configuration | |
Jobs | get_job | Retrieves detailed information about a specific job |
list_jobs | Lists jobs with optional filtering, sorting, and pagination | |
run_job | Starts a job for a component or transformation | |
Data Apps | get_data_apps | Retrieves detailed information about a specific data app or lists the data apps in the project |
modify_data_app | Creates or updates a data app | |
deploy_data_app | Deploys or suspends Streamlit data apps in the Keboola environment | |
Documentation | docs_query | Answers questions using Keboola documentation as the source |
Other | create_oauth_url | Generates an OAuth authorization URL for a component configuration |
search | Searches for items in the project by name prefixes |
Issue | Solution |
---|---|
Authentication Errors | Verify KBC_STORAGE_TOKEN is valid |
Workspace Issues | Confirm KBC_WORKSPACE_SCHEMA is correct |
Connection Timeout | Check network connectivity |
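If authentication keeps failing, you can check the token directly against the Storage API (a hedged sketch; the `/v2/storage/tokens/verify` endpoint and the `X-StorageApi-Token` header are standard Keboola Storage API conventions, so confirm them against the API documentation for your stack):

```bash
# Returns the token details as JSON if the token is valid
curl -sS \
  -H "X-StorageApi-Token: $KBC_STORAGE_TOKEN" \
  "$KBC_STORAGE_API_URL/v2/storage/tokens/verify"
```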
Basic setup:
uv sync --extra dev
With the basic setup, you can use uv run tox
to run tests and check code style.
Recommended setup:
uv sync --extra dev --extra tests --extra integtests --extra codestyle
With the recommended setup, packages for testing and code-style checking will be installed as well, which allows IDEs like VS Code or Cursor to check the code and run tests during development.
To run integration tests locally, use uv run tox -e integtests
.
NOTE: You will need to set the following environment variables:

- `INTEGTEST_STORAGE_API_URL`
- `INTEGTEST_STORAGE_TOKEN`
- `INTEGTEST_WORKSPACE_SCHEMA`

To get these values, you need a dedicated Keboola project for integration tests.
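For example (placeholder values; point them at your dedicated integration-test project, never at production):

```bash
export INTEGTEST_STORAGE_API_URL=https://connection.YOUR_REGION.keboola.com
export INTEGTEST_STORAGE_TOKEN=your_integration_test_token
export INTEGTEST_WORKSPACE_SCHEMA=your_integration_test_workspace_schema

uv run tox -e integtests
```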
uv.lock
Update the uv.lock
file if you have added or removed dependencies. Also consider updating the lock with newer dependency
versions when creating a release (uv lock --upgrade
).
When you make changes to any tool descriptions (docstrings in tool functions), you must regenerate the TOOLS.md
documentation file to reflect these changes:
uv run python -m src.keboola_mcp_server.generate_tool_docs
⭐ The primary way to get help, report bugs, or request features is by opening an issue on GitHub. ⭐
The development team actively monitors issues and will respond as quickly as possible. For general information about Keboola, please use the resources below.