Keboola Data Bridge
Connect AI agents to Keboola, exposing data and transformations with no glue code required.
Connect your AI agents, MCP clients (Cursor, Claude, Windsurf, VS Code, and more), and other AI assistants to Keboola. Expose data, transformations, SQL queries, and job triggers with no glue code required. Deliver the right data to agents when and where they need it.
Keboola MCP Server is an open-source bridge between your Keboola project and modern AI tools. It turns Keboola features—like storage access, SQL transformations, and job triggers—into callable tools for Claude, Cursor, CrewAI, LangChain, Amazon Q, and more.
Note: Make sure you have uv installed. The MCP client will use it to automatically download and run the Keboola MCP Server.
Installing uv:
macOS/Linux:
```bash
# If Homebrew is not installed on your machine, use:
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install using Homebrew
brew install uv
```
Windows:
```powershell
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or using pip
pip install uv

# Or using winget
winget install --id=astral-sh.uv -e
```
For more installation options, see the official uv documentation.
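To confirm that uv is available on your PATH, check its version:

```bash
uv --version
```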
Before setting up the MCP server, you need three key pieces of information: your Storage API token (`KBC_STORAGE_TOKEN`), your workspace schema (`KBC_WORKSPACE_SCHEMA`), and your Keboola API URL.
`KBC_STORAGE_TOKEN` is your authentication token for Keboola.
For instructions on how to create and manage Storage API tokens, refer to the official Keboola documentation.
Note: If you want the MCP server to have limited access, use a custom storage token; if you want it to access everything in your project, use the master token.
`KBC_WORKSPACE_SCHEMA` identifies your workspace in Keboola and is used for SQL queries. It is only required if you're using a custom storage token instead of the master token.
Note: When creating a workspace manually, check the Grant read-only access to all Project data option.
Note: In BigQuery workspaces, `KBC_WORKSPACE_SCHEMA` is called Dataset Name; simply click Connect and copy the Dataset Name.
Your Keboola API URL depends on your deployment region. You can determine your region by looking at the URL in your browser when logged into your Keboola project:
| Region | API URL |
|---|---|
| AWS North America | https://connection.keboola.com |
| AWS Europe | https://connection.eu-central-1.keboola.com |
| Google Cloud EU | https://connection.europe-west3.gcp.keboola.com |
| Google Cloud US | https://connection.us-east4.gcp.keboola.com |
| Azure EU | https://connection.north-europe.azure.keboola.com |
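Before wiring anything into an MCP client, you can sanity-check the token and region together. This is a minimal sketch assuming the Storage API's token verification endpoint (`/v2/storage/tokens/verify`); substitute your region's URL from the table above.

```bash
# Returns token details on success, an error body if the token is invalid
curl -H "X-StorageApi-Token: $KBC_STORAGE_TOKEN" \
  "https://connection.YOUR_REGION.keboola.com/v2/storage/tokens/verify"
```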
There are four ways to use the Keboola MCP Server, depending on your needs:

1. Integrated with an MCP client (Claude, Cursor): the client starts the server for you automatically
2. Local development setup: for developers working on the MCP server code itself
3. Manual CLI run: for testing or debugging in a terminal
4. Docker container: for running the server in an isolated environment
In the integrated mode, Claude or Cursor automatically starts the MCP server for you. You do not need to run any commands in your terminal.
{ "mcpServers": { "keboola": { "command": "uvx", "args": [ "keboola_mcp_server", "--api-url", "https://connection.YOUR_REGION.keboola.com" ], "env": { "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema" } } } }
Config file locations:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
{ "mcpServers": { "keboola": { "command": "uvx", "args": [ "keboola_mcp_server", "--api-url", "https://connection.YOUR_REGION.keboola.com" ], "env": { "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema" } } } }
When running the MCP server from Windows Subsystem for Linux with Cursor AI, use this configuration:
{ "mcpServers": { "keboola": { "command": "wsl.exe", "args": [ "bash", "-c", "'source /wsl_path/to/keboola-mcp-server/.env", "&&", "/wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio'" ] } } }
where the `/wsl_path/to/keboola-mcp-server/.env` file contains the environment variables:

```bash
export KBC_STORAGE_TOKEN="your_keboola_storage_token"
export KBC_WORKSPACE_SCHEMA="your_workspace_schema"
```
For developers working on the MCP server code itself:
{ "mcpServers": { "keboola": { "command": "/absolute/path/to/.venv/bin/python", "args": [ "-m", "keboola_mcp_server.cli", "--transport", "stdio", "--api-url", "https://connection.YOUR_REGION.keboola.com" ], "env": { "KBC_STORAGE_TOKEN": "your_keboola_storage_token", "KBC_WORKSPACE_SCHEMA": "your_workspace_schema", } } } }
You can run the server manually in a terminal for testing or debugging:
```bash
# Set environment variables
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema

# Run with uvx (no installation needed)
uvx keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com

# OR, if developing locally
python -m keboola_mcp_server.cli --api-url https://connection.YOUR_REGION.keboola.com
```
Note: This mode is primarily for debugging or testing. For normal use with Claude or Cursor, you do not need to manually run the server.
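If you want to confirm that the server actually responds over stdio, you can pipe it a single MCP initialize request. This is a rough smoke test, not part of the official tooling; the JSON-RPC message below assumes the standard MCP initialize shape and protocol version string.

```bash
# Assumes KBC_STORAGE_TOKEN and KBC_WORKSPACE_SCHEMA are exported as above;
# the server should print an initialize response to stdout, then exit on EOF.
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  | uvx keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com
```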
To run the server in Docker:

```bash
docker pull keboola/mcp-server:latest
docker run -it \
  -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
  -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
  keboola/mcp-server:latest \
  --api-url https://connection.YOUR_REGION.keboola.com
```
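To keep credentials out of your shell history, Docker's `--env-file` flag can load the same variables from a file; a minimal variant of the command above:

```bash
# .env contains KBC_STORAGE_TOKEN=... and KBC_WORKSPACE_SCHEMA=...
docker run -it --env-file .env \
  keboola/mcp-server:latest \
  --api-url https://connection.YOUR_REGION.keboola.com
```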
| Scenario | Need to Run Manually? | Use This Setup |
|---|---|---|
| Using Claude/Cursor | No | Configure MCP in app settings |
| Developing MCP locally | No (Claude starts it) | Point config to python path |
| Testing CLI manually | Yes | Use terminal to run |
| Using Docker | Yes | Run docker container |
Once your MCP client (Claude/Cursor) is configured and running, you can start querying your Keboola data:
You can start with a simple query to confirm everything is working:
```
What buckets and tables are in my Keboola project?
```
From there, you can move on to more advanced prompts for data exploration, data analysis, and building data pipelines.
| MCP Client | Support Status | Connection Method |
|---|---|---|
| Claude (Desktop & Web) | ✅ Supported, tested | stdio |
| Cursor | ✅ Supported, tested | stdio |
| Windsurf, Zed, Replit | ✅ Supported | stdio |
| Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
| Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |
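For the HTTP+SSE rows, the client connects to an already-running server over the network instead of spawning it as a subprocess. As a sketch, assuming the `--transport` flag (shown with `stdio` elsewhere in this guide) also accepts `sse`, you would start the server yourself and point the client at it:

```bash
# Assumption: --transport accepts "sse"; consult the server's help output to confirm.
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema
uvx keboola_mcp_server --transport sse --api-url https://connection.YOUR_REGION.keboola.com
```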
Note: Your AI agents will automatically adjust to new tools.
| Category | Tool | Description |
|---|---|---|
| Storage | retrieve_buckets | Lists all storage buckets in your Keboola project |
| | get_bucket_detail | Retrieves detailed information about a specific bucket |
| | retrieve_bucket_tables | Returns all tables within a specific bucket |
| | get_table_detail | Provides detailed information for a specific table |
| | update_bucket_description | Updates the description of a bucket |
| | update_column_description | Updates the description for a given column in a table |
| | update_table_description | Updates the description of a table |
| SQL | query_table | Executes custom SQL queries against your data |
| | get_sql_dialect | Identifies whether your workspace uses the Snowflake or BigQuery SQL dialect |
| Component | create_component_root_configuration | Creates a component configuration with custom parameters |
| | create_component_row_configuration | Creates a component configuration row with custom parameters |
| | create_sql_transformation | Creates an SQL transformation with custom queries |
| | find_component_id | Returns a list of component IDs that match the given query |
| | get_component | Gets information about a specific component given its ID |
| | get_component_configuration | Gets information about a specific component/transformation configuration |
| | get_component_configuration_examples | Retrieves sample configuration examples for a specific component |
| | retrieve_component_configurations | Retrieves configurations of components present in the project |
| | retrieve_transformations | Retrieves transformation configurations in the project |
| | update_component_root_configuration | Updates a specific component configuration |
| | update_component_row_configuration | Updates a specific component configuration row |
| | update_sql_transformation_configuration | Updates an existing SQL transformation configuration |
| Job | retrieve_jobs | Lists and filters jobs by status, component, or configuration |
| | get_job_detail | Returns comprehensive details about a specific job |
| | start_job | Triggers a component or transformation job to run |
| Documentation | docs_query | Searches Keboola documentation based on natural language queries |
| Issue | Solution |
|---|---|
| Authentication Errors | Verify `KBC_STORAGE_TOKEN` is valid |
| Workspace Issues | Confirm `KBC_WORKSPACE_SCHEMA` is correct |
| Connection Timeout | Check network connectivity |
Basic setup:

```bash
uv sync --extra dev
```

With the basic setup, you can use `uv run tox` to run tests and check code style.
Recommended setup:

```bash
uv sync --extra dev --extra tests --extra integtests --extra codestyle
```

With the recommended setup, packages for testing and code style checking will be installed, which allows IDEs like VS Code or Cursor to check the code or run tests during development.
To run integration tests locally, use `uv run tox -e integtests`.
NOTE: You will need to set the following environment variables:

- `INTEGTEST_STORAGE_API_URL`
- `INTEGTEST_STORAGE_TOKEN`
- `INTEGTEST_WORKSPACE_SCHEMA`

In order to get these values, you need a dedicated Keboola project for integration tests.
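For example, with placeholder values taken from that dedicated test project:

```bash
export INTEGTEST_STORAGE_API_URL="https://connection.YOUR_REGION.keboola.com"
export INTEGTEST_STORAGE_TOKEN="your_integration_test_token"
export INTEGTEST_WORKSPACE_SCHEMA="your_integration_test_workspace_schema"

uv run tox -e integtests
```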
Update the `uv.lock` file if you have added or removed dependencies. Also consider updating the lock with newer dependency versions when creating a release (`uv lock --upgrade`).
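As a quick reference, both lock operations are standard uv commands:

```bash
# Re-resolve uv.lock after adding or removing dependencies
uv lock

# Upgrade locked dependency versions, e.g. when preparing a release
uv lock --upgrade
```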
⭐ The primary way to get help, report bugs, or request features is by opening an issue on GitHub. ⭐
The development team actively monitors issues and will respond as quickly as possible. For general information about Keboola, please use the resources below.