
# Lumo

Rust-based autonomous AI agent framework using LLMs for complex task solving.
A powerful autonomous agent framework written in Rust that leverages LLMs to solve complex tasks using tools and reasoning.
Lumo is a Rust implementation of the smolagents library, designed to create intelligent agents that can autonomously solve complex tasks. Built with performance and safety in mind, it provides a robust framework for building AI-powered applications.
You can use providers such as Groq and TogetherAI through the same OpenAI-compatible API; just supply the base URL and the API key.
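For example, a minimal sketch pointing Lumo at an OpenAI-compatible endpoint (the Groq base URL and model ID below are illustrative; substitute your provider's values):

```bash
# Any OpenAI-compatible provider works via -m openai with a custom base URL.
# The base URL and model ID here are examples; check your provider's docs.
lumo -m openai \
  --model-id llama-3.3-70b-versatile \
  -b https://api.groq.com/openai/v1 \
  -k your-groq-key
```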
**Warning:** There is no sandbox environment implemented, so be careful with the tools you use. Preferably run the agent in a controlled environment, such as a Docker container.
## Installation

```bash
cargo build --release
./target/release/lumo
```
You'll be prompted to enter your task interactively. Type 'exit' to quit the program.
You need to set the API key as an environment variable or pass it as an argument.
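For example, with the default Gemini model (`GEMINI_API_KEY` is the environment variable documented below; `-k` is the CLI flag):

```bash
# Option 1: set the key as an environment variable
export GEMINI_API_KEY=your-gemini-key
lumo

# Option 2: pass the key as an argument
lumo -k your-gemini-key
```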
You can add the binary to your PATH to run it from your terminal with the `lumo` command.
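One common way to do this on Linux or macOS (an illustrative sketch; any directory already on your PATH works):

```bash
# Copy the release binary to a directory on your PATH.
sudo cp target/release/lumo /usr/local/bin/
lumo --help
```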
Alternatively, you can use the pre-built Docker image:

```bash
# Pull the image
docker pull akshayballal95/lumo-cli:latest

# Run with your API key
docker run -it -e OPENAI_API_KEY=your-key-here lumo-cli
```
## Usage

The default model is `gemini-2.0-flash`.

```
lumo [OPTIONS]

Options:
  -a, --agent-type <TYPE>       Agent type. Options: function-calling, code, mcp [default: function-calling]
  -l, --tools <TOOLS>           Comma-separated list of tools. Options: google-search, duckduckgo, visit-website, python-interpreter [default: duckduckgo,visit-website]
  -m, --model-type <TYPE>       Model type. Options: openai, ollama, gemini [default: gemini]
  -k, --api-key <KEY>           LLM Provider API key
      --model-id <ID>           Model ID (e.g., "gpt-4" for OpenAI, "qwen2.5" for Ollama, or "gemini-2.0-flash" for Gemini) [default: gemini-2.0-flash]
  -b, --base-url <URL>          Base URL for the API
      --max-steps <N>           Maximum number of steps to take [default: 10]
  -p, --planning-interval <N>   Planning interval
  -v, --logging-level <LEVEL>   Logging level
  -h, --help                    Print help
```
Example commands:
```bash
# Using Gemini (default)
lumo -k your-gemini-key

# Using OpenAI with specific model
lumo -m openai --model-id gpt-4 -k your-openai-key

# Using Ollama with local model
lumo -m ollama --model-id qwen2.5 -b http://localhost:11434

# Using specific tools and agent type
lumo -a code -l duckduckgo,python-interpreter
```
## Environment Variables

- `OPENAI_API_KEY`: Your OpenAI API key (optional, if using the OpenAI model)
- `GEMINI_API_KEY`: Your Gemini API key (optional, if using the Gemini model)
- `SERPAPI_API_KEY`: Google Search API key (optional, if using the Google Search tool)

## Tracing

Lumo supports OpenTelemetry tracing integration with Langfuse. To enable tracing, add the following environment variables to your `.env` file:
Note: All telemetry data is private and owned by you. The data is only stored in your Langfuse instance and is not shared with any third parties.
```bash
# Development
LANGFUSE_PUBLIC_KEY_DEV=your-dev-public-key
LANGFUSE_SECRET_KEY_DEV=your-dev-secret-key
LANGFUSE_HOST_DEV=http://localhost:3000 # Or your dev Langfuse instance URL

# Production
LANGFUSE_PUBLIC_KEY=your-production-public-key
LANGFUSE_SECRET_KEY=your-production-secret-key
LANGFUSE_HOST=https://cloud.langfuse.com # Or your production Langfuse instance URL
```
Tracing is optional: if these environment variables are not provided, tracing is disabled and the application continues to run normally. When enabled, agent activity is traced to your Langfuse instance.
For the server, you can add these variables to your `.env` file or include them in your Docker run command:
```bash
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e GOOGLE_API_KEY=your-google-key \
  -e LANGFUSE_PUBLIC_KEY=your-langfuse-public-key \
  -e LANGFUSE_SECRET_KEY=your-langfuse-secret-key \
  -e LANGFUSE_HOST=https://cloud.langfuse.com \
  lumo-server
```
The server will automatically detect if tracing is configured and enable/disable it accordingly.
## MCP Configuration

You can configure multiple servers in the configuration file for MCP agent usage. The configuration file location varies by operating system:

- Linux: `~/.config/lumo-cli/servers.yaml`
- macOS: `~/Library/Application Support/lumo-cli/servers.yaml`
- Windows: `%APPDATA%\Roaming\lumo\lumo-cli\servers.yaml`
Example configuration:
```yaml
exa-search:
  command: npx
  args:
    - "exa-mcp-server"
  env:
    EXA_API_KEY: "your-api-key"

fetch:
  command: uvx
  args:
    - "mcp_server_fetch"

system_prompt: |-
  You are a powerful agentic AI assistant...
```
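With servers configured, select the MCP agent type at the command line (a sketch; assumes the default Gemini model with its API key set):

```bash
# The mcp agent type uses the servers defined in servers.yaml.
lumo -a mcp -k your-gemini-key
```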
## Server

Lumo can also be run as a server, providing a REST API for agent interactions.

```bash
# Start the server (default port: 8080)
lumo-server
```
```bash
# Build the image
docker build -f server.Dockerfile -t lumo-server .

# Run the container with required API keys
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e GOOGLE_API_KEY=your-google-key \
  -e GROQ_API_KEY=your-groq-key \
  -e ANTHROPIC_API_KEY=your-anthropic-key \
  -e EXA_API_KEY=your-exa-key \
  lumo-server
```
You can also use the pre-built image:
```bash
# Pull the image
docker pull akshayballal95/lumo-server:latest

# Run with all required API keys
docker run -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e GOOGLE_API_KEY=your-google-key \
  -e GROQ_API_KEY=your-groq-key \
  -e ANTHROPIC_API_KEY=your-anthropic-key \
  -e EXA_API_KEY=your-exa-key \
  akshayballal95/lumo-server:latest
```
Health check:

```bash
curl http://localhost:8080/health_check
```
Run a task:

```bash
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{
    "task": "What is the weather in London?",
    "model": "gpt-4o-mini",
    "base_url": "https://api.openai.com/v1/chat/completions",
    "tools": ["DuckDuckGo", "VisitWebsite"],
    "max_steps": 5,
    "agent_type": "function-calling"
  }'
```
Request parameters:

- `task` (required): The task to execute
- `model` (required): Model ID (e.g., "gpt-4", "qwen2.5", "gemini-2.0-flash")
- `base_url` (required): Base URL for the API
- `tools` (optional): Array of tool names to use
- `max_steps` (optional): Maximum number of steps to take
- `agent_type` (optional): Type of agent to use ("function-calling" or "mcp")
- `history` (optional): Array of previous messages for context (see the sketch below)

The server automatically detects the appropriate API key based on the `base_url`:
- `OPENAI_API_KEY`
- `GOOGLE_API_KEY`
- `GROQ_API_KEY`
- `ANTHROPIC_API_KEY`
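For multi-turn conversations, prior messages can be passed in `history`. A minimal sketch, assuming chat-style role/content entries (the exact message schema is an assumption; verify against the API):

```bash
# "history" carries earlier turns; the role/content shape here is assumed.
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{
    "task": "And what about Paris?",
    "model": "gpt-4o-mini",
    "base_url": "https://api.openai.com/v1/chat/completions",
    "history": [
      {"role": "user", "content": "What is the weather in London?"},
      {"role": "assistant", "content": "It is 12°C and cloudy in London."}
    ]
  }'
```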
## Contributing

Contributions are welcome! To contribute:

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a pull request

Give a ⭐️ if this project helps you or inspires your work!
## Server API Keys

The server requires API keys for the LLM providers you plan to use. These keys authenticate requests to the respective model APIs.

1. Create a `.env` file in the `lumo-server` directory:

   ```bash
   cp lumo-server/.env.example lumo-server/.env
   ```

2. Add your API keys to the `.env` file:

   ```bash
   OPENAI_API_KEY=your-openai-key
   GOOGLE_API_KEY=your-google-key
   GROQ_API_KEY=your-groq-key
   ANTHROPIC_API_KEY=your-anthropic-key
   EXA_API_KEY=your-exa-key
   ```