# Sequential Thinking Multi-Agent System
A Multi-Agent System implementing advanced sequential thinking with specialized agents for deeper analysis.
English | 简体中文
This project implements an advanced sequential thinking process using a Multi-Agent System (MAS) built with the Agno framework and served via MCP. It represents a significant evolution from simpler state-tracking approaches by leveraging coordinated, specialized agents for deeper analysis and problem decomposition.
This server provides a sophisticated `sequentialthinking` tool designed for complex problem-solving. Unlike its predecessor, this version utilizes a true Multi-Agent System (MAS) architecture where:

- A Coordinator (the Agno `Team` object in `coordinate` mode) manages the workflow and delegates sub-tasks.
- Specialist agents (e.g., Planner, Researcher, Analyzer) handle focused sub-tasks within their roles.

The goal is to achieve a higher quality of analysis and a more nuanced thinking process than is possible with a single agent or simple state tracking, by harnessing the power of specialized roles working collaboratively.
This Python/Agno implementation marks a fundamental shift from the original TypeScript version:
| Feature/Aspect | Python/Agno Version (Current) | TypeScript Version (Original) |
| --- | --- | --- |
| **Architecture** | Multi-Agent System (MAS); active processing by a team of agents. | Single-class state tracker; simple logging/storing. |
| **Intelligence** | Distributed agent logic; embedded in specialized agents and the Coordinator. | External LLM only; no internal intelligence. |
| **Processing** | Active analysis & synthesis; agents act on the thought. | Passive logging; merely recorded the thought. |
| **Frameworks** | Agno (MAS) + FastMCP (server); uses a dedicated MAS library. | MCP SDK only. |
| **Coordination** | Explicit team coordination logic (`Team` in `coordinate` mode). | None; no coordination concept. |
| **Validation** | Pydantic schema validation; robust data validation. | Basic type checks; less reliable. |
| **External Tools** | Integrated (Exa via the Researcher agent); can perform research tasks. | None. |
| **Logging** | Structured Python logging (file + console); configurable. | Console logging with Chalk; basic. |
| **Language & Ecosystem** | Python; leverages the Python AI/ML ecosystem. | TypeScript/Node.js. |
In essence, the system evolved from a passive thought recorder to an active thought processor powered by a collaborative team of AI agents.
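The coordinator-and-specialists pattern can be illustrated with a toy sketch in plain Python. This is purely illustrative and does not use the real Agno `Team` API; the `Coordinator` and `Specialist` classes and the keyword routing are invented for the example:

```python
# Toy illustration of coordinate-mode delegation. This is NOT the Agno API:
# Coordinator, Specialist, and the keyword routing are invented for the sketch.

class Specialist:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # callable that processes a sub-task string

    def run(self, sub_task):
        return f"[{self.name}] {self.handler(sub_task)}"

class Coordinator:
    def __init__(self, specialists):
        self.specialists = specialists  # maps a trigger keyword to a Specialist

    def process(self, thought):
        # Naive decomposition: route the thought to every specialist whose
        # trigger keyword appears in it, then synthesize the results.
        results = [spec.run(thought)
                   for keyword, spec in self.specialists.items()
                   if keyword in thought.lower()]
        return " | ".join(results) or "[Coordinator] no specialist matched"

team = Coordinator({
    "research": Specialist("Researcher", lambda t: f"gathered sources for: {t}"),
    "analyze": Specialist("Analyzer", lambda t: f"broke down: {t}"),
})
print(team.process("Analyze the problem and research prior work"))
```

The real system replaces the keyword routing with LLM-driven analysis and delegation, but the shape of the flow is the same: one coordinator fans sub-tasks out and synthesizes the results.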
## How It Works

1. **Initiation:** An external LLM uses a starter prompt (e.g., `sequential-thinking-starter`) to define the problem and initiate the process.
2. **Tool Call:** The LLM calls the `sequentialthinking` tool with the first (or subsequent) thought, structured according to the `ThoughtData` Pydantic model.
3. **Validation & Context:** The tool validates the thought and records it in the shared application state (`AppContext`).
4. **Core Processing:** The validated thought is passed to the `SequentialThinkingTeam`'s `arun` method.
5. **Coordination:** The `Team` (acting as Coordinator) analyzes the input thought, breaks it down into sub-tasks, and delegates these sub-tasks to the most relevant specialist agents (e.g., Analyzer for analysis tasks, Researcher for information needs).
6. **Specialist Execution:** Delegated agents execute their sub-tasks, using tools where appropriate (`ThinkingTools` or `ExaTools`).
7. **Synthesis & Iteration:** The Coordinator synthesizes the specialists' results into a response that guides the calling LLM's next `sequentialthinking` tool call, potentially triggering revisions or branches as suggested.

⚠️ **High Token Usage:** Due to the Multi-Agent System architecture, this tool consumes significantly more tokens than single-agent alternatives or the previous TypeScript version. Each `sequentialthinking`
call invokes:

- the Coordinator agent (the `Team` itself), and
- multiple specialist agents handling delegated sub-tasks.

This parallel processing leads to substantially higher token usage (potentially 3-6x or more per thought step) compared to single-agent or state-tracking approaches. Budget and plan accordingly; this tool prioritizes analysis depth and quality over token efficiency.
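To make that multiplier concrete, here is a back-of-envelope budgeting sketch. The per-step token count and the 4x multiplier are illustrative assumptions, not measurements; only the 3-6x range comes from the note above:

```python
# Rough token budgeting for a MAS run. The per-step cost and the chosen
# multiplier are illustrative assumptions, not measured values.

def estimate_tokens(thought_steps, tokens_per_single_agent_step=2_000,
                    mas_multiplier=4):  # pick a value in the stated 3-6x range
    single = thought_steps * tokens_per_single_agent_step
    mas = single * mas_multiplier
    return single, mas

single, mas = estimate_tokens(10)
print(f"single-agent: ~{single:,} tokens, MAS: ~{mas:,} tokens")
```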
## Prerequisites

- Access to a compatible LLM API (configured for `agno`). The system currently supports:
  - **Groq:** requires `GROQ_API_KEY`.
  - **DeepSeek:** requires `DEEPSEEK_API_KEY`.
  - **OpenRouter:** requires `OPENROUTER_API_KEY`.
  - Select the provider via the `LLM_PROVIDER` environment variable (defaults to `deepseek`).
- **Exa API key** (only if using the Researcher agent): set the `EXA_API_KEY` environment variable.
- The `uv` package manager (recommended) or `pip`.

## MCP Client Configuration

This server runs as a standard executable script that communicates via stdio, as expected by MCP. The exact configuration method depends on your specific MCP client implementation; consult your client's documentation for details on integrating external tool servers.
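Conceptually, a stdio tool server reads requests on stdin and writes responses on stdout. The sketch below is a toy line-delimited JSON loop, not the actual MCP protocol framing (FastMCP handles the real framing for you):

```python
import json
import sys

# Toy stdio loop: read one JSON object per line, respond on stdout.
# Real MCP message framing is more involved and is handled by FastMCP.
def serve(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        request = json.loads(line)
        response = {"echo": request.get("thought"), "status": "success"}
        stdout.write(json.dumps(response) + "\n")
        stdout.flush()
```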
The `env` section within your MCP client configuration should include the API key for your chosen `LLM_PROVIDER`.
```json
{
  "mcpServers": {
    "mas-sequential-thinking": {
      "command": "uvx", // Or "python", "path/to/venv/bin/python" etc.
      "args": [
        "mcp-server-mas-sequential-thinking" // Or the path to your main script, e.g., "main.py"
      ],
      "env": {
        "LLM_PROVIDER": "deepseek", // Or "groq", "openrouter"
        // "GROQ_API_KEY": "your_groq_api_key", // Only if LLM_PROVIDER="groq"
        "DEEPSEEK_API_KEY": "your_deepseek_api_key", // Default provider
        // "OPENROUTER_API_KEY": "your_openrouter_api_key", // Only if LLM_PROVIDER="openrouter"
        "DEEPSEEK_BASE_URL": "your_base_url_if_needed", // Optional: if using a custom endpoint for DeepSeek
        "EXA_API_KEY": "your_exa_api_key" // Only if using Exa
      }
    }
  }
}
```
To install Sequential Thinking Multi-Agent System for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @FradSer/mcp-server-mas-sequential-thinking --client claude
```
Clone the repository:
```bash
git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
cd mcp-server-mas-sequential-thinking
```
Set Environment Variables:
Create a `.env` file in the project root directory or export the variables directly into your environment:
```bash
# --- LLM Configuration ---
# Select the LLM provider: "deepseek" (default), "groq", or "openrouter"
LLM_PROVIDER="deepseek"

# Provide the API key for the chosen provider:
# GROQ_API_KEY="your_groq_api_key"
DEEPSEEK_API_KEY="your_deepseek_api_key"
# OPENROUTER_API_KEY="your_openrouter_api_key"

# Optional: Base URL override (e.g., for custom DeepSeek endpoints)
# DEEPSEEK_BASE_URL="your_base_url_if_needed"

# Optional: Specify different models for the Team Coordinator and Specialist Agents
# Defaults are set within the code based on the provider if these are not set.
# Example for Groq:
# GROQ_TEAM_MODEL_ID="llama3-70b-8192"
# GROQ_AGENT_MODEL_ID="llama3-8b-8192"
# Example for DeepSeek:
# DEEPSEEK_TEAM_MODEL_ID="deepseek-chat"  # Note: `deepseek-reasoner` is not recommended, as it doesn't support function calling
# DEEPSEEK_AGENT_MODEL_ID="deepseek-chat" # Recommended for specialists
# Example for OpenRouter:
# OPENROUTER_TEAM_MODEL_ID="deepseek/deepseek-r1"    # Example, adjust as needed
# OPENROUTER_AGENT_MODEL_ID="deepseek/deepseek-chat" # Example, adjust as needed

# --- External Tools ---
# Required ONLY if the Researcher agent is used and needs Exa
EXA_API_KEY="your_exa_api_key"
```
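At startup, the server resolves the provider and its API key from these variables. A sketch of that resolution logic (the real implementation lives in the project code and may differ in details):

```python
import os

# Sketch of provider/key resolution at startup. The provider names and
# defaults match the docs; the function itself is illustrative.
def resolve_provider():
    provider = os.environ.get("LLM_PROVIDER", "deepseek")  # default per the docs
    key_var = {
        "deepseek": "DEEPSEEK_API_KEY",
        "groq": "GROQ_API_KEY",
        "openrouter": "OPENROUTER_API_KEY",
    }.get(provider)
    if key_var is None:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
    api_key = os.environ.get(key_var)
    if not api_key:
        raise RuntimeError(f"{key_var} must be set for provider '{provider}'")
    return provider, api_key
```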
**Note on Model Selection:**

- `TEAM_MODEL_ID` is used by the Coordinator (the `Team` object). This role benefits from strong reasoning, synthesis, and delegation capabilities. Consider using a more powerful model (e.g., `deepseek-chat`, `claude-3-opus`, `gpt-4-turbo`) here, balancing capability against cost and speed.
- `AGENT_MODEL_ID` is used by the specialist agents (Planner, Researcher, etc.). These handle focused sub-tasks, so a faster or more cost-effective model (e.g., `deepseek-chat`, `claude-3-sonnet`, `llama3-8b`) might be suitable, depending on task complexity and budget/performance needs.
- Defaults are applied in the code (e.g., `main.py`) if these environment variables are not set. Experimentation is encouraged to find the optimal balance for your use case.

**Install Dependencies:** It's highly recommended to use a virtual environment.
Using `uv` (recommended):

```bash
# Install uv if you don't have it:
# curl -LsSf https://astral.sh/uv/install.sh | sh
# source $HOME/.cargo/env  # Or restart your shell

# Create and activate a virtual environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`

# Install dependencies
uv pip install -r requirements.txt
# Or, if a pyproject.toml exists with dependencies defined:
# uv pip install .
```
Using `pip`:

```bash
# Create and activate a virtual environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`

# Install dependencies
pip install -r requirements.txt
# Or, if a pyproject.toml exists with dependencies defined:
# pip install .
```
Ensure your environment variables are set and the virtual environment (if used) is active.
Run the server. Choose one of the following methods:
Using `uv run` (recommended):

```bash
uv --directory /path/to/mcp-server-mas-sequential-thinking run mcp-server-mas-sequential-thinking
```
Directly using Python:

```bash
python main.py
```
The server will start and listen for requests via stdio, making the `sequentialthinking` tool available to compatible MCP clients configured to use it.
### `sequentialthinking` Tool Parameters

The tool expects arguments matching the `ThoughtData` Pydantic model:
```python
# Simplified representation from src/models.py
class ThoughtData(BaseModel):
    thought: str                             # Content of the current thought/step
    thoughtNumber: int                       # Sequence number (>= 1)
    totalThoughts: int                       # Estimated total steps (>= 1, suggest >= 5)
    nextThoughtNeeded: bool                  # Is another step required after this?
    isRevision: bool = False                 # Is this revising a previous thought?
    revisesThought: Optional[int] = None     # If isRevision, which thought number?
    branchFromThought: Optional[int] = None  # If branching, from which thought?
    branchId: Optional[str] = None           # Unique ID for the new branch being created
    needsMoreThoughts: bool = False          # Signal if the estimate is too low before the last step
```
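Pydantic enforces the constraints noted in the comments above. As a rough, dependency-free illustration of the same checks, a plain-Python mirror might look like this (the real model uses Pydantic's `BaseModel`, and its exact validators may differ):

```python
from dataclasses import dataclass
from typing import Optional

# Plain-Python mirror of the kind of validation Pydantic performs on
# ThoughtData. Field names match the tool's parameters; the checks are
# illustrative, not the project's actual validators.
@dataclass
class ThoughtData:
    thought: str
    thoughtNumber: int
    totalThoughts: int
    nextThoughtNeeded: bool
    isRevision: bool = False
    revisesThought: Optional[int] = None
    branchFromThought: Optional[int] = None
    branchId: Optional[str] = None
    needsMoreThoughts: bool = False

    def __post_init__(self):
        if self.thoughtNumber < 1 or self.totalThoughts < 1:
            raise ValueError("thoughtNumber and totalThoughts must be >= 1")
        if self.isRevision and self.revisesThought is None:
            raise ValueError("isRevision=True requires revisesThought")

data = ThoughtData(thought="Plan the analysis", thoughtNumber=1,
                   totalThoughts=5, nextThoughtNeeded=True)
```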
An LLM would interact with this tool iteratively:

1. **Initiation:** The LLM receives the problem definition, typically via a starter prompt (e.g., `sequential-thinking-starter`).
2. **First Call:** It calls the `sequentialthinking` tool with `thoughtNumber: 1`, the initial `thought` (e.g., "Plan the analysis..."), an estimated `totalThoughts`, and `nextThoughtNeeded: True`.
3. **Processing:** The server processes the thought and returns a `coordinatorResponse`.
4. **Next Thought:** The LLM formulates its next thought based on the `coordinatorResponse` (e.g., "Research X using available tools...").
5. **Subsequent Calls:** It calls the `sequentialthinking` tool with `thoughtNumber: 2`, the new `thought`, a potentially updated `totalThoughts`, and `nextThoughtNeeded: True`.
6. **Revisions:** To revise an earlier step, it calls the `sequentialthinking` tool with `thoughtNumber: 3`, the revision `thought`, `isRevision: True`, `revisesThought: 1`, and `nextThoughtNeeded: True`. The cycle continues until the LLM sends `nextThoughtNeeded: False`.

The tool returns a JSON string containing:
```json
{
  "processedThoughtNumber": int,          // The thought number that was just processed
  "estimatedTotalThoughts": int,          // The current estimate of total thoughts
  "nextThoughtNeeded": bool,              // Whether the process indicates more steps are needed
  "coordinatorResponse": "...",           // Synthesized output from the agent team, including analysis, findings, and guidance for the next step
  "branches": ["main", "branch-id-1"],    // List of active branch IDs
  "thoughtHistoryLength": int,            // Total number of thoughts processed so far (across all branches)
  "branchDetails": {
    "currentBranchId": "main",            // The ID of the branch the processed thought belongs to
    "branchOriginThought": null | int,    // The thought number where the current branch diverged (null for 'main')
    "allBranches": {                      // Count of thoughts in each active branch
      "main": 5,
      "branch-id-1": 2
    }
  },
  "isRevision": bool,                     // Was the processed thought a revision?
  "revisesThought": null | int,           // Which thought number was revised (if isRevision is true)
  "isBranch": bool,                       // Did this thought start a new branch?
  "status": "success | validation_error | failed", // Outcome status
  "error": null | "Error message..."      // Error details if status is not 'success'
}
```
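A client can parse this response and pull out the fields it needs before deciding on the next call. A minimal sketch (the `summarize` helper and the sample payload are invented for illustration; the field names match the schema above):

```python
import json

# Minimal handling of the documented response fields (illustrative helper).
def summarize(response_json):
    r = json.loads(response_json)
    if r["status"] != "success":
        raise RuntimeError(r.get("error") or f"tool returned {r['status']}")
    branch = r["branchDetails"]["currentBranchId"]
    return (f"thought {r['processedThoughtNumber']}/"
            f"{r['estimatedTotalThoughts']} on branch '{branch}'")

# Hand-written sample payload following the documented schema.
sample = json.dumps({
    "processedThoughtNumber": 2, "estimatedTotalThoughts": 5,
    "nextThoughtNeeded": True, "coordinatorResponse": "...",
    "branches": ["main"], "thoughtHistoryLength": 2,
    "branchDetails": {"currentBranchId": "main", "branchOriginThought": None,
                      "allBranches": {"main": 2}},
    "isRevision": False, "revisesThought": None, "isBranch": False,
    "status": "success", "error": None,
})
print(summarize(sample))  # thought 2/5 on branch 'main'
```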
## Logging

- Logs are written to `~/.sequential_thinking/logs/sequential_thinking.log` by default (the location may be adjustable in the logging setup code).
- Uses Python's standard `logging` module.

## Development

Clone the repository:

```bash
git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
cd mcp-server-mas-sequential-thinking
```
Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
```
Install development dependencies. `requirements-dev.txt` or `pyproject.toml` specifies the development tools (such as `pytest`, `ruff`, `black`, and `mypy`).
```bash
# Using uv
uv pip install -r requirements.txt
uv pip install -r requirements-dev.txt
# Or install extras if defined in pyproject.toml:
uv pip install -e ".[dev]"

# Using pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
# Or install extras if defined in pyproject.toml:
pip install -e ".[dev]"
```
Run checks and tests:

```bash
# Example commands (replace with the actual commands used in the project)
ruff check . --fix
black .
mypy .
pytest
```
## License

MIT