
# CoRT
Chain-of-Recursive-Thoughts server for enhanced AI reasoning through self-debate
This is a Chain-of-Recursive-Thoughts (CoRT) MCP server. It is based on the original project below; I am very grateful for the original work.
Original: PhialsBasement/Chain-of-Recursive-Thoughts: I made my AI think harder by making it argue with itself repeatedly. It works stupidly well.
https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts
- 0.2.0: LLM list updated
- 0.1.0: Initial release
## Roo code / Cline
A 300-second timeout is recommended (the server may sometimes take longer than expected). `OPENROUTER_API_KEY` is required: https://openrouter.ai/
"CoRT-chain-of-recursive-thinking": { "command": "pipx", "args": ["run", "cort-mcp", "--log=off"], "env": { "OPENAI_API_KEY": "{apikey}", "OPENROUTER_API_KEY": "{apikey}" } }
"CoRT-chain-of-recursive-thinking": { "command": "pipx", "args": ["run", "cort-mcp", "--log=on", "--logfile=/workspace/logs/cort-mcp.log"], "env": { "OPENAI_API_KEY": "{apikey}", "OPENROUTER_API_KEY": "{apikey}" } }
- `--log=off`: disable all logging (no logs are written)
- `--log=on --logfile=/absolute/path/to/logfile.log`: enable logging and write logs to the specified absolute file path

Note:
- When logging is enabled, logs are written only to the specified absolute file path. Relative paths or omission of `--logfile` will cause an error.
- When logging is disabled, no logs are output.
- If the required arguments are missing or invalid, the server will not start and will print an error message.
- The log file must be accessible and writable by the MCP server process.
- If you have trouble running this server, it may be because an older version of cort-mcp is cached. Try running the latest version of cort-mcp (set `x.y.z` to the latest version) with the setting below:

```json
"CoRT-chain-of-recursive-thinking": {
  "command": "pipx",
  "args": ["run", "cort-mcp==x.y.z", "--log=off"],
  "env": {
    "OPENAI_API_KEY": "{apikey}",
    "OPENROUTER_API_KEY": "{apikey}"
  }
}
```
The thinking flow is outlined below.
```mermaid
flowchart TB
    Start[User query] --> DetermineRounds[AI determines thinking rounds]
    DetermineRounds -->|determine_thinking_rounds 1-5 rounds| InitialResponse[Initial response temperature=0.7]
    InitialResponse --> Round1[Starting round 1]
    subgraph "Round 1"
        Round1 --> R1A1[Create alternative 1 temperature=0.7]
        Round1 --> R1A2[Create alternative 2 temperature=0.8]
        Round1 --> R1A3[Create alternative 3 temperature=0.9]
        InitialResponse & R1A1 & R1A2 & R1A3 --> R1Eval[Evaluation temperature=0.2]
        R1Eval --> R1Best[Round 1 best response]
    end
    R1Best --> Round2[Starting round 2]
    subgraph "Round 2"
        Round2 --> R2A1[Create alternative 1 temperature=0.7]
        Round2 --> R2A2[Create alternative 2 temperature=0.8]
        Round2 --> R2A3[Create alternative 3 temperature=0.9]
        R1Best & R2A1 & R2A2 & R2A3 --> R2Eval[Evaluation temperature=0.2]
        R2Eval --> R2Best[Round 2 best response]
    end
    R2Best --> Remaining[Remaining rounds repeat the same process]
    Remaining --> FinalBest[Final round best response]
    FinalBest --> FinalResponse[Final response]
```
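The flow above can be sketched in Python. This is a simplified illustration, not the server's actual implementation: `call_llm` is a hypothetical helper standing in for a provider call, and the prompt wording is condensed. In the real server, the number of rounds (1-5) is itself decided by an LLM call.

```python
from typing import Callable

def cort_round_loop(
    prompt: str,
    call_llm: Callable[[str, float], str],
    num_rounds: int = 3,
    num_alternatives: int = 3,
) -> str:
    """Sketch of the CoRT loop: an initial answer is repeatedly challenged
    by alternatives, and a low-temperature evaluation keeps each round's winner."""
    # Initial response at temperature 0.7, as in the flowchart.
    best = call_llm(prompt, 0.7)
    for _ in range(num_rounds):
        # Alternatives are sampled at increasing temperatures: 0.7, 0.8, 0.9.
        alternatives = [
            call_llm(f"Give a different answer to: {prompt}", 0.7 + 0.1 * i)
            for i in range(num_alternatives)
        ]
        # The evaluation call runs at temperature 0.2 and replies with
        # 'current' or a 1-based index on its first line.
        reply = call_llm(
            f"Current best: {best}\nAlternatives: {alternatives}\n"
            "Respond with ONLY 'current' or a number.",
            0.2,
        ).strip()
        verdict = reply.splitlines()[0].strip() if reply else "current"
        if verdict.isdigit() and 1 <= int(verdict) <= len(alternatives):
            best = alternatives[int(verdict) - 1]
    return best
```

The winner of each round becomes the "current best" that the next round's alternatives must beat, which is what makes the self-debate recursive.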
There are several enhancements over the original CoRT methodology.
Overview: This tool adds an exploration strategy to the conventional CoRT thinking flow: a different LLM (model + provider) is randomly selected for each alternative. This makes the most of the knowledge and ideas of heterogeneous models and lets the evaluator select the optimal solution from a wider range of options.
```python
MIXED_LLM_LIST = [
    {"provider": "openai", "model": "gpt-4.1-nano"},
    {"provider": "openrouter", "model": "meta-llama/llama-4-scout:free"},
    {"provider": "openrouter", "model": "google/gemini-2.0-flash-exp:free"},
    {"provider": "openrouter", "model": "mistralai/mistral-small-3.1-24b-instruct:free"},
    {"provider": "openrouter", "model": "meta-llama/llama-3.2-3b-instruct:free"},
    {"provider": "openrouter", "model": "thudm/glm-4-9b:free"},
]
```
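The per-alternative selection can be sketched as below. `pick_llms` is a hypothetical name used for illustration, and the list is an abbreviated copy of `MIXED_LLM_LIST` above so the example is self-contained.

```python
import random

# Abbreviated copy of the MIXED_LLM_LIST shown above.
MIXED_LLM_LIST = [
    {"provider": "openai", "model": "gpt-4.1-nano"},
    {"provider": "openrouter", "model": "meta-llama/llama-4-scout:free"},
    {"provider": "openrouter", "model": "google/gemini-2.0-flash-exp:free"},
]

def pick_llms(num_alternatives: int) -> list:
    """Draw one (provider, model) pair per alternative, uniformly with
    replacement, so a single round can mix heterogeneous models."""
    return [random.choice(MIXED_LLM_LIST) for _ in range(num_alternatives)]
```

Because each alternative draws independently, one round may pit, say, a GPT answer against two Llama answers, and the evaluator picks the best regardless of source.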
Overview: The evaluation prompt has been made richer (the original prompt remains available via the tools). Use the `{toolname}.neweval` prompt, which asks the AI to explain its reasoning.
The original evaluation prompt:

```python
f"""Original message: {prompt}
Evaluate these responses and choose the best one:
Current best: {current_best}
Alternatives:
{chr(10).join([f"{i+1}. {alt}" for i, alt in enumerate(alternatives)])}
Which response best addresses the original message? Consider accuracy, clarity, and completeness.
First, respond with ONLY 'current' or a number (1-{len(alternatives)}).
Then on a new line, explain your choice in one sentence."""
```
The richer `neweval` prompt:

```python
f"""Original message: {prompt}
You are an expert evaluator tasked with selecting the response that best fulfills the user's true needs, considering multiple perspectives.
Current best: {current_best}
Alternatives: {chr(10).join([f"{i+1}. {alt}" for i, alt in enumerate(alternatives)])}
Please follow this evaluation process:
1. Intent Analysis: What is the user REALLY seeking? What underlying needs might be present beyond the surface question?
2. Context Consideration: What possible situations or backgrounds could this question arise from?
3. Diversity Assessment: Does the response consider different viewpoints or possible interpretations?
4. Practicality Evaluation: How useful would the response be in the user's real-world context?
5. Consistency Check: Is the response internally consistent and logically coherent?
For each response (including the current best):
- Does it solve the user's TRUE problem?
- Does it balance accuracy and usefulness?
- Does it avoid unnecessary assumptions or biases?
- Is it flexible enough to apply in various contexts or situations?
- Does it account for exceptions or special cases?
After completing your evaluation:
1. Indicate your choice with ONLY 'current' or a number (1-{len(alternatives)}).
2. On the next line, explain specifically why this response best meets the user's true needs."""
```
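Both prompt variants expect the same reply shape: a first line that is either `current` or a 1-based index into the alternatives, followed by a short explanation. A minimal parser sketch (the function name `parse_evaluation` is hypothetical, not part of the server's API):

```python
def parse_evaluation(reply: str, num_alternatives: int):
    """Split the evaluator's reply into (choice, reason).

    choice is 'current' or a 1-based index as a string; anything malformed
    or out of range falls back to keeping the current best."""
    lines = [line.strip() for line in reply.strip().splitlines() if line.strip()]
    choice = lines[0] if lines else "current"
    reason = " ".join(lines[1:])
    if choice.isdigit() and 1 <= int(choice) <= num_alternatives:
        return choice, reason
    return "current", reason
```

Falling back to `current` on a malformed verdict keeps a round from failing outright when the evaluator model ignores the output format.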
This API determines the actual model to be used based on the specified `provider` and `model` parameters, with fallback processing in case of errors.

Provider (`provider`) resolution:
- If `provider` is not specified, `openrouter` is used as the default provider.
- If `provider` is invalid (neither `openai` nor `openrouter`), it falls back to the default provider `openrouter`.

Model (`model`) resolution:
- If `model` is not specified and the provider is `openrouter`, the default model `mistralai/mistral-small-3.1-24b-instruct:free` is used.
- If `model` is not specified and the provider is `openai`, the default OpenAI model is used.

API call and error fallback:
- If the API call fails, the request is retried with the `openai` provider, as long as `OPENAI_API_KEY` is set in the system (this is the fallback processing).
- If no fallback is possible (the call already used `openai`, or `OPENAI_API_KEY` is not set), the initial error is returned as the final result, and this type of fallback does not occur.

Notes on environment variables:
- `OPENROUTER_API_KEY` is required to use `openrouter`.
- `OPENAI_API_KEY` is required to use `openai` or to utilize the above fallback feature.

## License

MIT
Go wild with it