# MCP Waifu Queue (Gemini)

A conversational "waifu" character service based on the Gemini API.
This project implements an MCP (Model Context Protocol) server for a conversational AI "waifu" character, leveraging the Google Gemini API via a Redis queue for asynchronous processing. It utilizes the FastMCP
library for simplified server setup and management.
## Features

- Text generation via the Google Gemini API (`gemini-2.5-pro`).
- Asynchronous request processing through a Redis queue.
- MCP server setup and management via `FastMCP`.
- Configuration management (via a `.env` file) and API key loading from `~/.api-gemini`.

## Architecture

The project consists of several key components:
- `main.py`: The main entry point; initializes the FastMCP application and defines the MCP tools/resources.
- `respond.py`: Contains the core text generation logic, using the `google-generativeai` library to interact with the Gemini API.
- `task_queue.py`: Handles interactions with the Redis queue (using `python-rq`), enqueuing generation requests.
- `utils.py`: Contains utility functions, specifically `call_predict_response`, which is executed by the worker to call the Gemini logic in `respond.py`.
- `worker.py`: A Redis worker (`python-rq`) that processes jobs from the queue, calling `call_predict_response`.
- `config.py`: Manages configuration using `pydantic-settings`.
- `models.py`: Defines the Pydantic models for MCP request and response validation.

The flow of a request is as follows:
1. A client calls the `generate_text` MCP tool (defined in `main.py`).
2. The tool enqueues the prompt as a job on the Redis queue (via `task_queue.py`) and returns the job ID.
3. The `worker.py` process picks up the job from the queue.
4. The worker executes the `call_predict_response` function (from `utils.py`).
5. `call_predict_response` calls the `predict_response` function (in `respond.py`), which interacts with the Gemini API.
6. The generated text is returned by `predict_response` and stored as the job result by RQ.
7. The client retrieves the status and result via the `job://{job_id}` MCP resource (defined in `main.py`).
```mermaid
graph LR
    subgraph Client
        A[User/Client] -->|1. Send Prompt via MCP Tool| B("mcp-waifu-queue: main.py")
    end
    subgraph "mcp-waifu-queue Server"
        B -->|2. Enqueue Job (prompt)| C[Redis Queue]
        B -->|7. Return Job ID| A
        D["RQ Worker (worker.py)"] --> C
        D -->|3. Dequeue Job & Execute| E(utils.call_predict_response)
        E -->|4. Call Gemini Logic| F(respond.predict_response)
        F -->|5. Call Gemini API| G[Google Gemini API]
        G -->|6. Return Response| F
        F --> E
        E -->|Update Job Result in Redis| C
        A -->|8. Check Status via MCP Resource| B
        B -->|9. Fetch Job Status/Result| C
        B -->|10. Return Status/Result| A
    end
```
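For reference, the enqueue side of this flow is standard `python-rq` usage. A minimal sketch (the helper name `enqueue_generation` and the connection handling are illustrative assumptions, not the repository's exact code):

```python
# Illustrative sketch of the enqueue step in task_queue.py, assuming standard
# python-rq usage. call_predict_response is the worker-side entry point
# described above.
from redis import Redis
from rq import Queue

from mcp_waifu_queue.utils import call_predict_response

redis_conn = Redis.from_url("redis://localhost:6379")  # REDIS_URL from .env
queue = Queue(connection=redis_conn)

def enqueue_generation(prompt: str) -> str:
    """Queue a Gemini generation job and return its RQ job ID."""
    job = queue.enqueue(call_predict_response, prompt)
    return job.id  # the ID the client later polls via job://{job_id}
```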
## Prerequisites

- Python 3 with `pip` or `uv` (Python package installer)
- A Redis server
- A Google Gemini API key

You can find instructions for installing Redis on your system on the official Redis website: https://redis.io/docs/getting-started/

You can obtain a Gemini API key from Google AI Studio: https://aistudio.google.com/app/apikey
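To verify that Redis is reachable before starting the services, you can ping it with the standard `redis-cli` tool:

```bash
redis-cli ping  # prints PONG when the server is up
```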
## Installation

Clone the repository:

```bash
git clone <YOUR_REPOSITORY_URL>
cd mcp-waifu-queue
```
Create and activate a virtual environment (using `venv` or `uv`):

Using `venv` (standard library):

```bash
python -m venv .venv
source .venv/bin/activate        # On Linux/macOS
# .venv\Scripts\activate         # On Windows CMD
# source .venv/Scripts/activate  # On Windows Git Bash/PowerShell Core
```

Using `uv` (if installed):

```bash
# Ensure uv is installed (e.g., pip install uv)
python -m uv venv .venv
source .venv/bin/activate  # Or the equivalent activation for your shell
```
Install dependencies (using `pip` within the venv, or `uv`):

Using `pip`:

```bash
pip install -e .[test]  # Installs the package in editable mode with test extras
```

Using `uv`:

```bash
# Ensure uv is installed inside the venv if desired, or use the venv's python
# .venv/Scripts/python.exe -m pip install uv           # Example for Windows
.venv/Scripts/python.exe -m uv pip install -e .[test]  # Example for Windows
# python -m uv pip install -e .[test]                  # If uv is in PATH after venv activation
```
## Configuration

API Key: Create a file named `.api-gemini` in your home directory (`~/.api-gemini`) and place your Google Gemini API key inside it. Ensure the file has no extra whitespace:

```bash
echo "YOUR_API_KEY_HERE" > ~/.api-gemini
```

(Replace `YOUR_API_KEY_HERE` with your actual key.)
Other Settings: Copy the `.env.example` file to `.env`:

```bash
cp .env.example .env
```
Modify the `.env` file to set the remaining configuration values:

- `MAX_NEW_TOKENS`: Maximum number of tokens for the Gemini response (default: `2048`).
- `REDIS_URL`: The URL of your Redis server (default: `redis://localhost:6379`).
- `FLASK_ENV`, `FLASK_APP`: Optional; related to Flask if used elsewhere, not core to the MCP server/worker operation.
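Based on the defaults above, a minimal `.env` might look like this (illustrative values, not necessarily the shipped `.env.example`):

```
MAX_NEW_TOKENS=2048
REDIS_URL=redis://localhost:6379
```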
## Running the Services

Ensure Redis is running. If you installed it locally, you might need to start the Redis server process (e.g., with the `redis-server` command, or via a service manager).
Start the RQ Worker:

Open a terminal, activate your virtual environment (`source .venv/bin/activate` or similar), and run:

```bash
python -m mcp_waifu_queue.worker
```

This command starts the worker process, which listens for jobs on the Redis queue defined in your `.env` file. Keep this terminal running.
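Under the hood this is a standard `python-rq` worker loop. A rough sketch of such an entry point (queue name and connection handling are assumptions; the actual `worker.py` may differ):

```python
# Minimal python-rq worker sketch; the project's worker.py may add logging,
# configuration loading, or a custom queue name.
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis.from_url("redis://localhost:6379")  # REDIS_URL from .env

if __name__ == "__main__":
    # Blocks and processes jobs from the default queue until interrupted.
    Worker([Queue(connection=redis_conn)], connection=redis_conn).work()
```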
Start the MCP Server:

Open another terminal, activate the virtual environment, and run the MCP server using a tool like `uvicorn` (you might need to install it first: `pip install uvicorn` or `uv pip install uvicorn`):

```bash
uvicorn mcp_waifu_queue.main:app --reload --port 8000  # Example port
```

Replace `8000` with your desired port. The `--reload` flag is useful for development.
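Serving the app through `uvicorn` implies that `main.py` exposes an ASGI application. A hypothetical outline of that wiring (this is an assumption about the shape of `main.py`, not its actual code; tool and resource bodies are elided):

```python
# Hypothetical outline of main.py inferred from the description above.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-waifu-queue")

@mcp.tool()
def generate_text(prompt: str) -> dict:
    """Enqueue a generation job and return {"job_id": ...}."""
    ...  # enqueue via task_queue.py

@mcp.resource("job://{job_id}")
def job_status(job_id: str) -> dict:
    """Return {"status": ..., "result": ...} for the given job."""
    ...  # look up the RQ job and map its status

app = mcp.sse_app()  # the ASGI app that `uvicorn mcp_waifu_queue.main:app` serves
```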
Alternatively, you can use the `start-services.sh` script (primarily designed for Linux/macOS environments), which attempts to start Redis (if not already running) and the worker in the background:

```bash
# Ensure the script is executable: chmod +x ./scripts/start-services.sh
./scripts/start-services.sh
# Then start the MCP server manually as shown above.
```
## MCP API

The server provides the following MCP-compliant endpoints:

### Tool: `generate_text`

- Input: `{"prompt": "Your text prompt here"}` (type: `GenerateTextRequest`)
- Output: `{"job_id": "rq:job:..."}` (a unique ID for the queued job)

### Resource: `job://{job_id}`

- URI parameter: `job_id` (the ID returned by the `generate_text` tool)
- Output: `{"status": "...", "result": "..."}` (type: `JobStatusResponse`)
  - `status`: The current state of the job, e.g. `"queued"`, `"processing"`, `"completed"`, or `"failed"`. (RQ uses slightly different terms internally, `"started"` and `"finished"`; the resource maps these to `"processing"` and `"completed"`.)
  - `result`: The generated text from Gemini if the job status is `"completed"`, otherwise `null`. If the job failed, the result might be `null` or contain error information, depending on RQ's handling.
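The request/response shapes above suggest Pydantic models along these lines (a sketch inferred from the documented JSON, not the actual contents of `models.py`):

```python
# Sketch of the validation models described above; field names are inferred
# from the documented JSON shapes and may differ from models.py.
from pydantic import BaseModel

class GenerateTextRequest(BaseModel):
    prompt: str  # text prompt forwarded to Gemini

class JobStatusResponse(BaseModel):
    status: str                # "queued", "processing", "completed", or "failed"
    result: str | None = None  # generated text once the job completes
```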
## Testing

The project includes tests. Ensure you have installed the test dependencies (`pip install -e .[test]` or `uv pip install -e .[test]`).
Run the tests using `pytest`:

```bash
pytest tests
```
Note: Tests might require mocking Redis (`fakeredis`) and potentially the Gemini API calls, depending on their implementation.
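As an illustration of the `fakeredis` approach (a generic sketch, not a test from this repository):

```python
# Generic example of exercising RQ queue logic against fakeredis, so no live
# Redis server or worker process is needed during tests.
import fakeredis
from rq import Queue

def test_jobs_run_against_fake_redis():
    conn = fakeredis.FakeStrictRedis()
    # is_async=False makes RQ execute the job immediately, in-process.
    queue = Queue(is_async=False, connection=conn)
    job = queue.enqueue(len, "hello")
    assert job.return_value() == 5  # job.result on older RQ releases
```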
## Troubleshooting

- **"Gemini API key not found in .../.api-gemini or GEMINI_API_KEY environment variable"**: Ensure you have created the `~/.api-gemini` file in your home directory and placed your valid Gemini API key inside it. Alternatively, ensure the `GEMINI_API_KEY` environment variable is set as a fallback.
- **Gemini API errors**: Verify that the key in `~/.api-gemini` (or the fallback environment variable) is correct and valid. Ensure the API is enabled for your Google Cloud project, if applicable.
- **Jobs never finish**: Verify that the worker (`python -m mcp_waifu_queue.worker`) is running in a separate terminal and connected to the same Redis instance specified in `.env`. Check the worker logs for errors.
- **Redis connection errors**: Confirm that Redis is running and reachable at the `REDIS_URL` specified in `.env`.
- **Cannot reach the server**: Ensure the MCP server (`uvicorn ...`) is running and that you are connecting to the correct host/port.
## Contributing

1. Fork the repository.
2. Create a feature branch (`git checkout -b feature/your-feature-name`).
3. Commit your changes (`git commit -am 'Add some feature'`).
4. Push to the branch (`git push origin feature/your-feature-name`).
5. Open a Pull Request.

Please adhere to the project's coding standards and linting rules (`ruff`).
## License

This project is licensed under the MIT-0 License - see the LICENSE file for details.