# Model Context Protocol
FastAPI and MCP-based server enabling standardized context interaction between AI models and development environments.
Built on FastAPI and MCP (Model Context Protocol), this project enables standardized context interaction between AI models and development environments. It enhances the scalability and maintainability of AI applications by simplifying model deployment, providing efficient API endpoints, and ensuring consistency in model input and output, making it easier for developers to integrate and manage AI tasks.
MCP (Model Context Protocol) is a unified protocol for context interaction between AI models and development environments. This project provides a Python-based MCP server implementation that supports basic MCP protocol features, including initialization, sampling, and session management.
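For instance, a session typically begins with an `initialize` handshake before any sampling. A minimal request might look like the following (the exact `params` fields, such as client info and capabilities, depend on the implementation and are left empty here):

```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "method": "initialize",
  "params": {}
}
```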
## Project Structure

```
mcp_server/
├── mcp_server.py      # MCP server main program
├── mcp_client.py      # MCP client test program
├── routers/
│   ├── __init__.py    # Router package initialization
│   └── base_router.py # Base router implementation
├── requirements.txt   # Project dependencies
└── README.md          # Project documentation
```
## Installation

```bash
git clone https://github.com/freedanfan/mcp_server.git
cd mcp_server
pip install -r requirements.txt
```

## Running the Server

```bash
python mcp_server.py
```
By default, the server starts on `127.0.0.1:12000`. You can customize the host and port using environment variables:

```bash
export MCP_SERVER_HOST=0.0.0.0
export MCP_SERVER_PORT=8000
python mcp_server.py
```
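On the server side, these variables are presumably read at startup. A typical pattern for doing so (variable names from this README, defaults from the text above) is:

```python
import os

# Read host/port from the environment, falling back to the documented defaults
host = os.environ.get("MCP_SERVER_HOST", "127.0.0.1")
port = int(os.environ.get("MCP_SERVER_PORT", "12000"))
```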
## Running the Client

Run the client in another terminal:

```bash
python mcp_client.py
```
If the server is not running at the default address, you can set an environment variable:

```bash
export MCP_SERVER_URL="http://your-server-address:port"
python mcp_client.py
```
## API Endpoints

The server provides the following API endpoints:

- `/` (root path): Provides server information
- `/api`: Handles JSON-RPC requests
- `/sse`: Handles SSE connections

Clients can send sampling requests with prompts:
```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "method": "sample",
  "params": {
    "prompt": "Hello, please introduce yourself."
  }
}
```
The server will return sampling results:
```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "result": {
    "content": "This is a response to the prompt...",
    "usage": {
      "prompt_tokens": 10,
      "completion_tokens": 50,
      "total_tokens": 60
    }
  }
}
```
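The request and response shown above can be exercised from Python. The sketch below builds the JSON-RPC envelope and POSTs it to the `/api` endpoint; the server address is assumed to be the default `127.0.0.1:12000`, and the helper names are illustrative, not part of the project:

```python
import json
import urllib.request

def build_request(method: str, params: dict, req_id: str = "request-id") -> dict:
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def send_request(payload: dict, url: str = "http://127.0.0.1:12000/api") -> dict:
    """POST a JSON-RPC request to the /api endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires a running server):
# result = send_request(build_request("sample", {"prompt": "Hello, please introduce yourself."}))
# print(result["result"]["content"])
```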
Clients can send a shutdown request:
```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "method": "shutdown",
  "params": {}
}
```
The server will gracefully shut down:
```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "result": {
    "status": "shutting_down"
  }
}
```
## Adding New MCP Methods

To add new MCP methods, add a handler function to the `MCPServer` class and register it in the `_register_methods` method:
```python
def handle_new_method(self, params: dict) -> dict:
    """Handle new method"""
    logger.info(f"Received new method request: {params}")
    # Processing logic
    return {"result": "success"}

def _register_methods(self):
    # Register existing methods
    self.router.register_method("initialize", self.handle_initialize)
    self.router.register_method("sample", self.handle_sample)
    self.router.register_method("shutdown", self.handle_shutdown)
    # Register new method
    self.router.register_method("new_method", self.handle_new_method)
```
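The register/dispatch pattern behind `register_method` can be sketched as follows. This is a minimal illustration, not the actual `base_router.py` implementation, which may differ in detail:

```python
class SimpleRouter:
    """Minimal sketch of a JSON-RPC method router."""

    def __init__(self):
        self._methods = {}  # method name -> handler function

    def register_method(self, name, handler):
        self._methods[name] = handler

    def dispatch(self, request: dict) -> dict:
        """Route a JSON-RPC request to its registered handler."""
        method = request.get("method")
        handler = self._methods.get(method)
        if handler is None:
            # -32601 is the standard JSON-RPC "method not found" error code
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32601, "message": f"Method not found: {method}"}}
        result = handler(request.get("params", {}))
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

# Example: register a method and dispatch a request to it
router = SimpleRouter()
router.register_method("new_method", lambda params: {"result": "success"})
response = router.dispatch(
    {"jsonrpc": "2.0", "id": "1", "method": "new_method", "params": {}}
)
```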
## Integrating AI Models

To integrate an actual AI model, modify the `handle_sample` method (note that the example below uses the pre-1.0 `openai` SDK interface):
```python
async def handle_sample(self, params: dict) -> dict:
    """Handle sampling request"""
    logger.info(f"Received sampling request: {params}")

    # Get prompt
    prompt = params.get("prompt", "")

    # Call AI model API
    # For example: using OpenAI API
    response = await openai.ChatCompletion.acreate(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    content = response.choices[0].message.content
    usage = response.usage

    return {
        "content": content,
        "usage": {
            "prompt_tokens": usage.prompt_tokens,
            "completion_tokens": usage.completion_tokens,
            "total_tokens": usage.total_tokens
        }
    }
```
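For local testing without an API key, the model call can be swapped for a stub that echoes the prompt. The sketch below is an illustrative stand-in, not part of the project; its "token" counts are whitespace word counts, not real tokenizer output:

```python
import asyncio

async def handle_sample(params: dict) -> dict:
    """Stub sampling handler: echoes the prompt instead of calling a real model."""
    prompt = params.get("prompt", "")
    content = f"Echo: {prompt}"
    # Approximate token counts with word counts, for illustration only
    prompt_tokens = len(prompt.split())
    completion_tokens = len(content.split())
    return {
        "content": content,
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
    }

# Example: run the stub handler directly
result = asyncio.run(handle_sample({"prompt": "Hello there"}))
```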
## Logging

Both the server and the client produce detailed logs. For more detail, raise the log level to DEBUG, for example by adjusting the logging setup in `mcp_server.py`:

```python
import logging

# Increase log level for more detail
logging.basicConfig(level=logging.DEBUG)
```
## License

This project is licensed under the MIT License. See the LICENSE file for details.