
# MCP Custom Host

An MCP proof of concept with a custom STDIO host for testing agentic systems.
This project is a proof of concept (POC) demonstrating how to implement a Model Context Protocol (MCP) setup with a custom-built host for experimenting with agentic systems. The code is primarily written from scratch to provide a clear understanding of the underlying mechanisms.
The primary goal of this project is to enable easy testing of agentic systems through the Model Context Protocol. For example, a `dispatch_agent` could be specialized to scan codebases for security vulnerabilities. These specialized agents can be easily tested and iterated upon using the tools provided in this repository.
Uses `github.com/mark3labs/mcp-go`.
The tools use the default GCP credentials configured by `gcloud auth login`.
- `host/openaiserver`: Implements a custom host that mimics the OpenAI API, using Google Gemini and function calling. This is the core of the POC.
- `tools`: Contains various MCP-compatible tools that can be used with the host.
You can build all the tools using the included Makefile:
```sh
# Build all tools
make all

# Or build individual tools
make Bash
make Edit
make GlobTool
make GrepTool
make LS
make Replace
make View
```
Read the `.envrc` file in the `bin` directory to set up the required environment variables:
```sh
export GCP_PROJECT=your-project-id
export GCP_REGION=your-region
export GEMINI_MODELS=gemini-2.0-flash
export IMAGEN_MODELS=imagen-3.0-generate-002
export IMAGE_DIR=/tmp/images
```
You can test the CLI (a tool similar to Claude Code) from the `bin` directory with:
```sh
./cliGCP -mcpservers "./GlobTool;./GrepTool;./LS;./View;./dispatch_agent -glob-path ./GlobTool -grep-path ./GrepTool -ls-path ./LS -view-path ./View;./Bash;./Replace"
```
⚠️ WARNING: These tools can execute commands and modify files on your system. Run them in a chroot or container environment to prevent potential damage to your system.
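As the invocation above suggests, `-mcpservers` takes a semicolon-separated list where each entry is the command line used to launch one MCP server (a binary optionally followed by its flags). A minimal sketch of parsing such a value — `splitServers` is a hypothetical helper, not the actual cliGCP code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitServers parses a semicolon-separated -mcpservers value into
// per-server argument vectors (binary followed by its flags).
func splitServers(flagValue string) [][]string {
	var servers [][]string
	for _, entry := range strings.Split(flagValue, ";") {
		if fields := strings.Fields(entry); len(fields) > 0 {
			servers = append(servers, fields)
		}
	}
	return servers
}

func main() {
	v := "./GlobTool;./dispatch_agent -glob-path ./GlobTool;./Bash"
	for _, s := range splitServers(v) {
		fmt.Printf("binary=%s args=%v\n", s[0], s[1:])
	}
}
```

Each parsed entry would then be started as a child process speaking MCP over STDIO.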
This guide will help you quickly run the `openaiserver` located in the `host/openaiserver` directory.
Navigate to the `host/openaiserver` directory:
```sh
cd host/openaiserver
```
Set the required environment variables. Refer to the Configuration section for details on the environment variables. A minimal example:
```sh
export GCP_PROJECT=your-gcp-project-id
export IMAGE_DIR=/tmp/images  # Directory must exist
```
Run the server:
```sh
go run .
```
or
```sh
go run main.go
```
The server will start and listen on the configured port (default: 8080).
## Configuration

The `openaiserver` application is configured using environment variables. The following variables are supported:
| Variable  | Description                          | Default | Required |
|-----------|--------------------------------------|---------|----------|
| PORT      | The port the server listens on       | 8080    | No       |
| LOG_LEVEL | Log level (DEBUG, INFO, WARN, ERROR) | INFO    | No       |
| IMAGE_DIR | Directory to store images            |         | Yes      |
GCP-related variables:

| Variable      | Description                           | Default                         | Required |
|---------------|---------------------------------------|---------------------------------|----------|
| GCP_PROJECT   | Google Cloud Project ID               |                                 | Yes      |
| GEMINI_MODELS | Comma-separated list of Gemini models | gemini-1.5-pro,gemini-2.0-flash | No       |
| GCP_REGION    | Google Cloud Region                   | us-central1                     | No       |
| IMAGEN_MODELS | Comma-separated list of Imagen models |                                 | No       |
| IMAGE_DIR     | Directory to store images             |                                 | Yes      |
| PORT          | The port the server listens on        | 8080                            | No       |