
Velociraptor MCP Server
A production-ready Model Context Protocol (MCP) server for seamless integration between Velociraptor DFIR and Large Language Models (LLMs).
Why? It combines Velociraptor's comprehensive digital forensics and incident response (DFIR) capabilities with the reasoning power of large language models, enabling natural language queries and intelligent analysis of your forensic data.
```bash
python -m venv .venv && source .venv/bin/activate
pip install git+https://github.com/socfortress/velociraptor-mcp-server.git
```
Alternatively, download the `.whl` file (published as the `python-package-distributions` artifact on the project's GitHub Actions page) and install it:

```bash
pip install velociraptor_mcp_server-*.whl
```
Create a `.env` file in your project directory:
```bash
# Velociraptor Server Configuration
VELOCIRAPTOR_API_KEY=/path/to/api.config.yaml
VELOCIRAPTOR_SSL_VERIFY=false
VELOCIRAPTOR_TIMEOUT=30

# MCP Server Configuration
MCP_SERVER_HOST=127.0.0.1
MCP_SERVER_PORT=8000

# Logging Configuration
LOG_LEVEL=INFO

# Tool Filtering (optional)
# VELOCIRAPTOR_DISABLED_TOOLS=CollectArtifactTool,RunVQLQueryTool
```
Note: For `VELOCIRAPTOR_API_KEY`, provide the full path to your Velociraptor `api.config.yaml` file. You can generate this file from your Velociraptor server using the admin interface or CLI.
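For example, on the Velociraptor server the CLI can typically generate an API client configuration with something like `velociraptor --config server.config.yaml config api_client --name mcp-user --role administrator,api api.config.yaml` (the user name and roles here are illustrative; grant only the permissions your workflows need and consult the Velociraptor documentation for your version).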
```bash
# Using the CLI command
velociraptor-mcp-server

# Or using the Python module
python -m velociraptor_mcp_server

# With custom configuration
velociraptor-mcp-server --host 0.0.0.0 --port 8080 --log-level DEBUG
```
The server will start and be available at http://127.0.0.1:8000
(or your configured host/port).
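To confirm the server is reachable, you can connect from any MCP client. Below is a minimal sketch using the official MCP Python SDK (an assumption: the `mcp` package is installed separately with `pip install mcp`) that connects over SSE and prints the exposed tools:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Connect to the running MCP server over SSE and list its tools
    async with sse_client("http://127.0.0.1:8000/sse/") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```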
For development, install from a local clone:

```bash
git clone https://github.com/socfortress/velociraptor-mcp-server.git
cd velociraptor-mcp-server

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks (optional)
pre-commit install
```
| Variable | Description | Default | Required |
|---|---|---|---|
| `VELOCIRAPTOR_API_KEY` | Path to Velociraptor API config file (`api.config.yaml`) | None | ✅ |
| `VELOCIRAPTOR_SSL_VERIFY` | SSL verification | `true` | ❌ |
| `VELOCIRAPTOR_TIMEOUT` | Request timeout (seconds) | `30` | ❌ |
| `MCP_SERVER_HOST` | Server host | `127.0.0.1` | ❌ |
| `MCP_SERVER_PORT` | Server port | `8000` | ❌ |
| `LOG_LEVEL` | Logging level | `INFO` | ❌ |
| `VELOCIRAPTOR_DISABLED_TOOLS` | Comma-separated list of disabled tools | None | ❌ |
```bash
velociraptor-mcp-server --help
```
Available options:
- `--host`: Host to bind the server to (default: 127.0.0.1)
- `--port`: Port to bind the server to (default: 8000)
- `--log-level`: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- `--config`: Path to Velociraptor API config file (overrides the environment variable)
- `--no-ssl-verify`: Disable SSL certificate verification

You can also embed the server in your own Python code:

```python
from velociraptor_mcp_server import Config, create_server

# Create server with environment configuration
config = Config.from_env()
server = create_server(config)

# Start the server
server.start()
```
```python
from velociraptor_mcp_server import create_server
from velociraptor_mcp_server.config import VelociraptorConfig, ServerConfig, Config

# Create custom configuration
velociraptor_config = VelociraptorConfig(
    api_key="/path/to/api.config.yaml",
    ssl_verify=False,
    timeout=60,
)

server_config = ServerConfig(
    host="0.0.0.0",
    port=8080,
    log_level="DEBUG",
)

config = Config(velociraptor=velociraptor_config, server=server_config)
server = create_server(config)
```
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langchain.agents import AgentType, initialize_agent

# Initialize LLM
model = ChatOpenAI(model="gpt-4")

# Connect to Velociraptor MCP server
client = MultiServerMCPClient({
    "velociraptor-mcp-server": {
        "transport": "sse",
        "url": "http://127.0.0.1:8000/sse/",
    }
})

# Get tools and create agent
tools = await client.get_tools()
agent = initialize_agent(
    tools=tools,
    llm=model,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

# Query your Velociraptor data
response = await agent.ainvoke({
    "input": "Show me all active Velociraptor clients and their OS information"
})

# Collect artifacts from a specific client
artifact_response = await agent.ainvoke({
    "input": "Collect Windows.System.Users artifact from client workstation-01"
})

# Get artifact collection results
results_response = await agent.ainvoke({
    "input": "Get the results from flow F.ABC123 for Windows.System.Users artifact"
})
```
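Note that the snippet above uses `await` at the top level, so run it inside an `async` function (for example via `asyncio.run(...)`) or in an environment that supports top-level await, such as a Jupyter notebook.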
The server exposes the following MCP tools for Velociraptor integration:
{"args": {}}
hostname
(required): Hostname or FQDN of the client to search for{"args": {"hostname": "workstation-01"}} {"args": {"hostname": "server.domain.com"}}
`RunVQLQueryTool`: execute a VQL query against the Velociraptor server.

- `vql` (required): VQL query string to execute
- `max_rows` (optional): Maximum number of rows to return
- `timeout` (optional): Query timeout in seconds

```json
{"args": {"vql": "SELECT client_id, os_info.hostname FROM clients() LIMIT 10"}}
{"args": {"vql": "SELECT * FROM flows() WHERE client_id = 'C.1234567890'"}}
{"args": {"vql": "SELECT name, description FROM artifacts() WHERE name =~ 'Windows'"}}
```
{"args": {}}
{"args": {}}
`CollectArtifactTool`: start an artifact collection on a client and return a flow ID.

- `client_id` (required): Velociraptor client ID to target for collection
- `artifact` (required): Name of the Velociraptor artifact to collect
- `parameters` (optional): Comma-separated string of key='value' pairs to pass to the artifact

```json
{"args": {"client_id": "C.1234567890", "artifact": "Windows.System.Users"}}
{"args": {"client_id": "C.0987654321", "artifact": "Linux.System.Uptime", "parameters": "format='seconds'"}}
```
`GetCollectionResultsTool`: retrieve the results of an artifact collection flow, retrying until the flow has finished.

- `client_id` (required): Velociraptor client ID where the collection was run
- `flow_id` (required): Flow ID returned from the initial collection
- `artifact` (required): Name of the artifact collected (e.g., Windows.NTFS.MFT)
- `fields` (optional): Comma-separated string of fields to return (default: '*')
- `max_retries` (optional): Number of times to retry if the flow hasn't finished (default: 5)
- `retry_delay` (optional): Time in seconds to wait between retries (default: 5)

```json
{"args": {"client_id": "C.1234567890", "flow_id": "F.ABC123", "artifact": "Windows.System.Users"}}
{"args": {"client_id": "C.0987654321", "flow_id": "F.DEF456", "artifact": "Linux.System.Uptime", "fields": "Uptime,BootTime"}}
```
`CollectArtifactDetailsTool`: show the description, parameters, and requirements of a specific artifact.

- `artifact_name` (required): Name of the artifact to get details for

```json
{"args": {"artifact_name": "Windows.System.Users"}}
{"args": {"artifact_name": "Linux.Network.Netstat"}}
{"args": {"artifact_name": "Windows.NTFS.MFT"}}
{"args": {"artifact_name": "Windows.RemoteDesktop.RDPConnections"}}
```
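Beyond the LangChain agent shown earlier, these tools can be invoked directly from any MCP client. The sketch below uses the official MCP Python SDK (assumptions: the `mcp` package is installed separately, the server is running on the default port, and the client ID and flow ID are placeholder values taken from the examples above rather than values parsed from the responses):

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "http://127.0.0.1:8000/sse/"

async def collect_users(hostname: str) -> None:
    async with sse_client(SERVER_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Resolve the target client from its hostname
            agent = await session.call_tool(
                "GetAgentInfo", {"args": {"hostname": hostname}}
            )
            print(agent.content)

            # 2. Start the collection; the response includes a flow ID
            flow = await session.call_tool(
                "CollectArtifactTool",
                {"args": {"client_id": "C.1234567890",
                          "artifact": "Windows.System.Users"}},
            )
            print(flow.content)

            # 3. Fetch the results once the flow has completed
            results = await session.call_tool(
                "GetCollectionResultsTool",
                {"args": {"client_id": "C.1234567890",
                          "flow_id": "F.ABC123",
                          "artifact": "Windows.System.Users"}},
            )
            print(results.content)

asyncio.run(collect_users("workstation-01"))
```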
The tools work together to provide a complete artifact collection workflow:
1. `ListLinuxArtifactsTool` or `ListWindowsArtifactsTool` to explore available artifacts
2. `CollectArtifactDetailsTool` to understand artifact parameters and requirements
3. `GetAgentInfo` to find the target client by hostname
4. `CollectArtifactTool` to start artifact collection and get a flow_id
5. `GetCollectionResultsTool` to monitor progress and retrieve results
6. `RunVQLQueryTool` for advanced custom investigations

To set up a development environment:

```bash
git clone https://github.com/socfortress/velociraptor-mcp-server.git
cd velociraptor-mcp-server

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install in development mode with dev dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=velociraptor_mcp_server

# Run specific test file
pytest tests/test_client.py

# Run with verbose output
pytest -v
```
```bash
# Install build dependencies
pip install build twine

# Build the package
python -m build

# Check the package
twine check dist/*

# Test installation
pip install dist/*.whl
```
This project uses GitHub Actions for automated building and testing:
Every push to the `main` and `develop` branches triggers a build.

To publish a release, create and push a git tag:
```bash
git tag v1.0.0
git push origin v1.0.0
```
The GitHub Action will then automatically build the package and upload the distribution files.
Visit the Actions page and download the `python-package-distributions` artifact from any successful build.
Security considerations:

- Enable SSL certificate verification in production (`VELOCIRAPTOR_SSL_VERIFY=true`)
- Use `VELOCIRAPTOR_DISABLED_TOOLS` to restrict which tools are exposed to the LLM
```
velociraptor-mcp-server/
├── velociraptor_mcp_server/     # Main package
│   ├── __init__.py              # Package initialization
│   ├── __main__.py              # CLI entry point
│   ├── client.py                # Velociraptor API client
│   ├── config.py                # Configuration management
│   ├── server.py                # MCP server implementation
│   └── exceptions.py            # Custom exceptions
├── tests/                       # Test suite
│   ├── conftest.py              # Test configuration
│   ├── test_client.py           # Client tests
│   ├── test_config.py           # Configuration tests
│   └── test_server.py           # Server tests
├── .github/workflows/           # GitHub Actions
│   └── publish.yml              # CI/CD pipeline
├── requirements.txt             # Dependencies
├── pyproject.toml               # Package configuration
├── .env.example                 # Environment template
├── Dockerfile                   # Docker configuration
└── README.md                    # This file
```
We welcome contributions! Please follow these steps:
1. Create a feature branch (`git checkout -b feature/amazing-feature`)
2. Make your changes and ensure the tests pass (`pytest`)
3. Format and lint your code (`black .`, `isort .`, `flake8 .`)
4. Commit your changes (`git commit -m 'Add amazing feature'`)
5. Push the branch (`git push origin feature/amazing-feature`)
6. Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ by SOCFortress