Just Prompt LLM Interface
A lightweight MCP server (stdio transport) providing a unified interface to multiple LLM providers.
just-prompt is a Model Context Protocol (MCP) server that provides a unified interface to various Large Language Model (LLM) providers, including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama. See how we use the `ceo_and_board` tool to make hard decisions easy with o3.
The following MCP tools are available in the server:
- `prompt`: Send a prompt to multiple LLM models
  - `text`: The prompt text
  - `models_prefixed_by_provider` (optional): List of models with provider prefixes. If not provided, uses default models.
- `prompt_from_file`: Send a prompt from a file to multiple LLM models
  - `abs_file_path`: Absolute path to the file containing the prompt (must be an absolute path, not relative)
  - `models_prefixed_by_provider` (optional): List of models with provider prefixes. If not provided, uses default models.
- `prompt_from_file_to_file`: Send a prompt from a file to multiple LLM models and save responses as markdown files
  - `abs_file_path`: Absolute path to the file containing the prompt (must be an absolute path, not relative)
  - `models_prefixed_by_provider` (optional): List of models with provider prefixes. If not provided, uses default models.
  - `abs_output_dir` (default: "."): Absolute directory path to save the response markdown files to (must be an absolute path, not relative)
- `ceo_and_board`: Send a prompt to multiple "board member" models and have a "CEO" model make a decision based on their responses
  - `abs_file_path`: Absolute path to the file containing the prompt (must be an absolute path, not relative)
  - `models_prefixed_by_provider` (optional): List of models with provider prefixes to act as board members. If not provided, uses default models.
  - `abs_output_dir` (default: "."): Absolute directory path to save the response files and CEO decision (must be an absolute path, not relative)
  - `ceo_model` (default: "openai:o3"): Model to use for the CEO decision, in the format "provider:model"
- `list_providers`: List all available LLM providers
- `list_models`: List all available models for a specific LLM provider
  - `provider`: Provider to list models for (e.g., 'openai' or 'o')

Every model must be prefixed with the provider name; use the short name for faster referencing:
- `o` or `openai`: OpenAI (e.g. `o:gpt-4o-mini`, `openai:gpt-4o-mini`)
- `a` or `anthropic`: Anthropic (e.g. `a:claude-3-5-haiku`, `anthropic:claude-3-5-haiku`)
- `g` or `gemini`: Google Gemini (e.g. `g:gemini-2.5-pro-exp-03-25`, `gemini:gemini-2.5-pro-exp-03-25`)
- `q` or `groq`: Groq (e.g. `q:llama-3.1-70b-versatile`, `groq:llama-3.1-70b-versatile`)
- `d` or `deepseek`: DeepSeek (e.g. `d:deepseek-coder`, `deepseek:deepseek-coder`)
- `l` or `ollama`: Ollama (e.g. `l:llama3.1`, `ollama:llama3.1`)
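
For illustration, here is a minimal sketch of calling the `prompt` tool from a Python MCP client. It assumes the official `mcp` Python SDK is installed and that the command is run from the just-prompt repository root; the prompt text and model list are only examples, not part of just-prompt itself.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch just-prompt over stdio (assumes the repo root is the working directory).
    server = StdioServerParameters(command="uv", args=["--directory", ".", "run", "just-prompt"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Send one prompt to two models, using short provider prefixes.
            result = await session.call_tool(
                "prompt",
                {
                    "text": "In one paragraph, compare SQLite and Postgres.",
                    "models_prefixed_by_provider": ["o:gpt-4o-mini", "a:claude-3-5-haiku"],
                },
            )
            # Print only the text content items from the tool result.
            for item in result.content:
                if item.type == "text":
                    print(item.text)


asyncio.run(main())
```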
The same prefixed model names are used in the `--default-models` list.

To install, clone the repository and sync dependencies:

```bash
# Clone the repository
git clone https://github.com/yourusername/just-prompt.git
cd just-prompt

# Install dependencies with uv
uv sync
```
Create a `.env` file with your API keys (you can copy the `.env.sample` file):

```bash
cp .env.sample .env
```
Then edit the `.env` file to add your API keys (or export them in your shell):

```
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
GROQ_API_KEY=your_groq_api_key_here
DEEPSEEK_API_KEY=your_deepseek_api_key_here
OLLAMA_HOST=http://localhost:11434
```
In all these examples, replace the directory with the path to the just-prompt directory.
Default models are set to `openai:o3:high`, `openai:o4-mini:high`, `anthropic:claude-opus-4-20250514`, `anthropic:claude-sonnet-4-20250514`, `gemini:gemini-2.5-pro-preview-03-25`, and `gemini:gemini-2.5-flash-preview-04-17`.
If you use Claude Code right out of the repository, you can see in the `.mcp.json` file that we set the default models to:
```json
{
  "mcpServers": {
    "just-prompt": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        ".",
        "run",
        "just-prompt",
        "--default-models",
        "openai:o3:high,openai:o4-mini:high,anthropic:claude-opus-4-20250514,anthropic:claude-sonnet-4-20250514,gemini:gemini-2.5-pro-preview-03-25,gemini:gemini-2.5-flash-preview-04-17"
      ],
      "env": {}
    }
  }
}
```
The `--default-models` parameter sets the models to use when none are explicitly provided to the API endpoints. The first model in the list is also used for model name correction when needed. The value can be a comma-separated list of models.
When starting the server, it will automatically check which API keys are available in your environment and inform you which providers you can use. If a key is missing, the provider will be listed as unavailable, but the server will still start and can be used with the providers that are available.
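
As a rough illustration of that startup check (not the server's exact code), detecting which providers have credentials configured could look like this:

```python
import os

# Environment variables each provider needs (Ollama only needs a reachable host).
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "groq": "GROQ_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "ollama": "OLLAMA_HOST",
}

available = [name for name, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
missing = [name for name in PROVIDER_ENV_VARS if name not in available]
print(f"Available providers: {available}")
print(f"Unavailable providers (missing keys): {missing}")
```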
Using `claude mcp add-json`: copy the JSON below first, then paste the command into Claude Code (don't run it until the JSON is on your clipboard):

```bash
claude mcp add-json just-prompt "$(pbpaste)"
```
JSON to copy:

```json
{
    "command": "uv",
    "args": ["--directory", ".", "run", "just-prompt"]
}
```
With a custom default model set to `openai:gpt-4o`:

```json
{
    "command": "uv",
    "args": ["--directory", ".", "run", "just-prompt", "--default-models", "openai:gpt-4o"]
}
```
With multiple default models:

```json
{
    "command": "uv",
    "args": ["--directory", ".", "run", "just-prompt", "--default-models", "openai:o3:high,openai:o4-mini:high,anthropic:claude-opus-4-20250514,anthropic:claude-sonnet-4-20250514,gemini:gemini-2.5-pro-preview-03-25,gemini:gemini-2.5-flash-preview-04-17"]
}
```
Using `claude mcp add` with project scope:

```bash
# With default models
claude mcp add just-prompt -s project \
  -- \
  uv --directory . \
  run just-prompt

# With custom default model
claude mcp add just-prompt -s project \
  -- \
  uv --directory . \
  run just-prompt --default-models "openai:gpt-4o"

# With multiple default models
claude mcp add just-prompt -s user \
  -- \
  uv --directory . \
  run just-prompt --default-models "openai:o3:high,openai:o4-mini:high,anthropic:claude-opus-4-20250514,anthropic:claude-sonnet-4-20250514,gemini:gemini-2.5-pro-preview-03-25,gemini:gemini-2.5-flash-preview-04-17"
```
To remove the server:

```bash
claude mcp remove just-prompt
```
Run the tests with `uv run pytest`.

Project structure:

```
.
├── ai_docs/ # Documentation for AI model details
│ ├── extending_thinking_sonny.md
│ ├── llm_providers_details.xml
│ ├── openai-reasoning-effort.md
│ └── pocket-pick-mcp-server-example.xml
├── example_outputs/ # Example outputs from different models
├── list_models.py # Script to list available LLM models
├── prompts/ # Example prompt files
├── pyproject.toml # Python project configuration
├── specs/ # Project specifications
│ ├── init-just-prompt.md
│ ├── new-tool-llm-as-a-ceo.md
│ └── oai-reasoning-levels.md
├── src/ # Source code directory
│ └── just_prompt/
│ ├── __init__.py
│ ├── __main__.py
│ ├── atoms/ # Core components
│ │ ├── llm_providers/ # Individual provider implementations
│ │ │ ├── anthropic.py
│ │ │ ├── deepseek.py
│ │ │ ├── gemini.py
│ │ │ ├── groq.py
│ │ │ ├── ollama.py
│ │ │ └── openai.py
│ │ └── shared/ # Shared utilities and data types
│ │ ├── data_types.py
│ │ ├── model_router.py
│ │ ├── utils.py
│ │ └── validator.py
│ ├── molecules/ # Higher-level functionality
│ │ ├── ceo_and_board_prompt.py
│ │ ├── list_models.py
│ │ ├── list_providers.py
│ │ ├── prompt.py
│ │ ├── prompt_from_file.py
│ │ └── prompt_from_file_to_file.py
│ ├── server.py # MCP server implementation
│ └── tests/ # Test directory
│ ├── atoms/ # Tests for atoms
│ │ ├── llm_providers/
│ │ └── shared/
│ └── molecules/ # Tests for molecules
│ ├── test_ceo_and_board_prompt.py
│ ├── test_list_models.py
│ ├── test_list_providers.py
│ ├── test_prompt.py
│ ├── test_prompt_from_file.py
│ └── test_prompt_from_file_to_file.py
└── ultra_diff_review/ # Diff review outputs
```
Read README.md and pyproject.toml, then run `git ls-files` and `eza --git-ignore --tree` to understand the context of the project.
For OpenAI o-series reasoning models (`o4-mini`, `o3-mini`, `o3`) you can control how much internal reasoning the model performs before producing a visible answer.

Append one of the following suffixes to the model name (after the provider prefix):

- `:low` - minimal internal reasoning (faster, cheaper)
- `:medium` - balanced (default if omitted)
- `:high` - thorough reasoning (slower, more tokens)

Examples:

- `openai:o4-mini:low`
- `o:o4-mini:high`
When a reasoning suffix is present, just-prompt automatically switches to the OpenAI Responses API (when available) and sets the corresponding `reasoning.effort` parameter. If the installed OpenAI SDK is older, it gracefully falls back to the Chat Completions endpoint and embeds an internal system instruction to approximate the requested effort level.
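
For reference, this is roughly what a `:high` suffix translates to with the OpenAI Python SDK's Responses API; a sketch for illustration, not just-prompt's exact internals, and the prompt text is only an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "openai:o4-mini:high" -> model "o4-mini" with reasoning effort "high".
response = client.responses.create(
    model="o4-mini",
    reasoning={"effort": "high"},
    input="Outline a migration plan from REST to gRPC.",
)
print(response.output_text)
```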
The Anthropic Claude models `claude-opus-4-20250514` and `claude-sonnet-4-20250514` support extended thinking capabilities using thinking tokens. This allows Claude to do a more thorough thought process before answering.

You can enable thinking tokens by adding a suffix to the model name in this format:

- `anthropic:claude-opus-4-20250514:1k` - Use 1024 thinking tokens for Opus 4
- `anthropic:claude-sonnet-4-20250514:4k` - Use 4096 thinking tokens for Sonnet 4
- `anthropic:claude-opus-4-20250514:8000` - Use 8000 thinking tokens for Opus 4

Notes:

- Extended thinking is supported for the `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-3-7-sonnet-20250219` models
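
Under the hood, a suffix like `:4k` maps to Anthropic's extended-thinking parameter. A sketch with the Anthropic Python SDK, for illustration only (not just-prompt's exact code; the prompt is an example):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "anthropic:claude-sonnet-4-20250514:4k" -> 4096 thinking tokens.
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=8192,  # must be larger than the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": "Plan a database sharding strategy."}],
)

# The response interleaves thinking blocks and text blocks; print only the text.
for block in message.content:
    if block.type == "text":
        print(block.text)
```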
The Google Gemini model `gemini-2.5-flash-preview-04-17` supports extended thinking capabilities using a thinking budget. This allows Gemini to perform more thorough reasoning before providing a response.

You can enable the thinking budget by adding a suffix to the model name in this format:

- `gemini:gemini-2.5-flash-preview-04-17:1k` - Use a 1024-token thinking budget
- `gemini:gemini-2.5-flash-preview-04-17:4k` - Use a 4096-token thinking budget
- `gemini:gemini-2.5-flash-preview-04-17:8000` - Use an 8000-token thinking budget

Notes:

- The thinking budget is currently supported only for the `gemini-2.5-flash-preview-04-17` model
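
Similarly, a `:1k` suffix corresponds to Gemini's thinking budget. A sketch with the `google-genai` SDK, for illustration only (not just-prompt's exact code; the prompt is an example):

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# "gemini:gemini-2.5-flash-preview-04-17:1k" -> a 1024-token thinking budget.
response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",
    contents="Compare columnar and row-oriented storage engines.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)
```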
Learn to code with AI with the foundational Principles of AI Coding.
Follow the IndyDevDan YouTube channel for more AI coding tips and tricks.