
Token Saver MCP
MCP server providing VSCode LSP and browser control for token-efficient AI development workflows
Transform AI from a code suggester into a true full-stack developer — with instant access to code intelligence and real browser control.
📚 Full Usage Guide & Examples → | 📖 Detailed Technical README → | 🔄 Releases
Modern AI coding assistants waste enormous context (and your money) by stuffing full grep/search results into the model window. The result: bloated prompts, slower responses, and higher bills.
Token Saver MCP fixes this.
It gives AI assistants direct access to VSCode’s Language Server Protocol (LSP) and the Chrome DevTools Protocol (CDP), so they can work like real developers:
Result: 90–99% fewer tokens, 100–1000× faster responses, and $200+ in monthly savings — while enabling AI to truly act as a full-stack engineer.
Think of your AI’s context window like a workbench. If it’s cluttered with logs, search dumps, and irrelevant snippets, the AI can’t focus.
Token Saver MCP keeps the workbench clean.
grep -r "renderProfileImage" . # 5000+ tokens, 10–30 seconds, bloated context
get_definition('src/components/UserCard.js', 25) # 50 tokens, <100ms, exact location + type info
Cleaner context = a sharper, more persistent AI assistant.
Token Saver MCP uses a split architecture designed for speed and stability:
```
AI Assistant ←→ MCP Server ←→ VSCode Gateway ←→ VSCode Internals
               (hot reload)   (stable interface)
```
🏗️ VSCode Gateway Extension
🚀 Standalone MCP Server
Why it matters: You can iterate on MCP tools instantly without rebuilding/restarting VSCode. Development is 60× faster and much more reliable.
Token Saver MCP currently provides 40 production-ready tools across five categories:
- Code intelligence (LSP): `get_definition`, `get_references`, `rename_symbol`, `get_hover`, `find_implementations`, …
- Memory & context: `smart_resume` (86-99% token savings vs /resume), `write_memory`, `read_memory`, `search_memories` (full-text search), `export_memories`, `import_memories`, …
- Browser control (CDP): `navigate_browser`, `execute_in_browser`, `take_screenshot`, `get_browser_console`, …
- Testing helpers: `test_react_component`, `test_api_endpoint`, `check_page_performance`, …
- System: `get_instructions`, `retrieve_buffer`, `get_supported_languages`, …

📚 See the full Usage Guide with JSON examples →
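For a feel of the wire format, here is a minimal sketch of one tool call as an MCP JSON-RPC `tools/call` request, written as a TypeScript object. The `tools/call` method is standard MCP; the argument names for `get_definition` are assumptions based on the example earlier, so check the Usage Guide for the real schemas.

```typescript
// Sketch of one MCP tool call as a JSON-RPC request body.
// Assumption: get_definition takes a file path and a line number,
// mirroring the get_definition('src/components/UserCard.js', 25) example above.
const getDefinitionRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',          // standard MCP method for invoking a tool
  params: {
    name: 'get_definition',      // tool name from the list above
    arguments: {
      path: 'src/components/UserCard.js',  // hypothetical argument name
      line: 25,                             // hypothetical argument name
    },
  },
}

console.log(JSON.stringify(getDefinitionRequest, null, 2))
```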
| Operation | Traditional Method | With Token Saver MCP | Improvement |
|---|---|---|---|
| Find function definition | 5–10s, 5k tokens | 10ms, 50 tokens | 100× faster |
| Find all usages | 10–30s | 50ms | 200× faster |
| Rename symbol project-wide | Minutes | 100ms | 1000× faster |
| Resume context (/resume) | 5000+ tokens | 200-500 tokens | 86-99% savings |
Token & Cost Savings (GPT-4 pricing):
Beyond backend code, Token Saver MCP empowers AI to control a real browser through CDP:
Example workflow (sketched below): the AI opens your app in a real browser, drives the UI, checks the console for errors, and captures a screenshot of the result.
➡️ No more “please test this manually” — AI tests itself.
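As a rough illustration of that loop, the sketch below chains the browser tools listed earlier; `callTool` stands in for however your MCP client invokes Token Saver MCP tools, and the URL and argument shapes are assumptions, not the documented schemas.

```typescript
// Illustrative browser-testing loop. callTool is a stand-in for your MCP
// client; argument shapes and the localhost URL are assumptions.
declare function callTool(name: string, args: Record<string, unknown>): Promise<unknown>

async function checkLoginPage() {
  // 1. Open the page in a real browser tab
  await callTool('navigate_browser', { url: 'http://localhost:3000/login' })

  // 2. Drive the UI directly in the page context
  await callTool('execute_in_browser', {
    expression: "document.querySelector('button[type=submit]').click()",
  })

  // 3. Read what the page actually logged
  const consoleOutput = await callTool('get_browser_console', {})

  // 4. Keep visual evidence of the result
  await callTool('take_screenshot', {})

  return consoleOutput
}
```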
Replace wasteful `/resume` commands with intelligent context restoration:

```javascript
smart_resume()  // 200-500 tokens, focused context only
```

Features: configurable verbosity levels, time-window filtering (`daysAgo`), and importance filtering (`minImportance`).

Example:

```javascript
// Standard resume - just the essentials
smart_resume()

// Include everything from last 3 days
smart_resume({ daysAgo: 3, verbosity: 3 })

// Critical items only for quick check-in
smart_resume({ minImportance: 4, verbosity: 1 })
```
Memory is stored locally in SQLite (`~/.token-saver-mcp/memory.db`) with automatic initialization.
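For the other memory tools listed above, here is a small illustrative sketch; `callTool` again stands in for your MCP client, and the key/value/importance field names are assumptions rather than the documented schema.

```typescript
// Illustrative use of the memory tools. callTool is a stand-in for your MCP
// client; the key/value/importance fields are assumed names, not the real schema.
declare function callTool(name: string, args: Record<string, unknown>): Promise<unknown>

async function rememberAndRecall() {
  // Persist a note for the next session (stored in ~/.token-saver-mcp/memory.db)
  await callTool('write_memory', {
    key: 'refactor-plan',
    value: 'Rename renderProfileImage and update all call sites',
    importance: 4,
  })

  // Later: pull it back directly, or find it via full-text search
  const note = await callTool('read_memory', { key: 'refactor-plan' })
  const hits = await callTool('search_memories', { query: 'renderProfileImage' })

  return { note, hits }
}
```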
Visit http://127.0.0.1:9700/dashboard to monitor tool activity and token savings in real time.
Perfect for seeing your AI’s efficiency gains in action.
```bash
# Clone repo
git clone https://github.com/jerry426/token-saver-mcp
cd token-saver-mcp

# One-step setup
./mcp setup /path/to/your/project
```
That’s it! The installer handles the rest.
➡️ Full installation & build steps: Detailed README →
Gemini clients should use the dedicated /mcp-gemini endpoint.

Endpoints include:

- http://127.0.0.1:9700/mcp (standard MCP)
- http://127.0.0.1:9700/mcp-gemini (Gemini)
- http://127.0.0.1:9700/mcp/simple (REST testing)
- http://127.0.0.1:9700/dashboard (metrics UI)
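If you just want to poke the HTTP endpoint directly, here is a rough connectivity sketch, assuming the standard /mcp endpoint accepts JSON-RPC over plain POST. Spec-compliant MCP servers usually expect an initialize handshake (and sometimes a session header) before `tools/list`, so treat this as a smoke test; for real work, point an MCP client at the URL or use the /mcp/simple REST endpoint.

```typescript
// Rough smoke test against the standard MCP endpoint (Node 18+ has fetch built in).
// Assumption: plain JSON-RPC over POST; a full client would perform the
// MCP initialize handshake first and reuse the returned session.
async function listTools() {
  const response = await fetch('http://127.0.0.1:9700/mcp', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Streamable HTTP servers typically expect clients to accept JSON or SSE
      'Accept': 'application/json, text/event-stream',
    },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
  })
  console.log(response.status, await response.text())
}

listTools()
```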
Think the claims are too good to be true? Run the built-in test suite:
```bash
python3 test/test_mcp_tools.py
```
Expected output shows: hover, completions, definitions, references, diagnostics, semantic tokens, buffer management, etc. — all passing ✅
```bash
pnpm install
pnpm run dev    # hot reload
pnpm run build
pnpm run test
```
The MCP server lives in `/mcp-server/`, with modular tools organized by category (`lsp/`, `cdp/`, `helper/`, `system/`).
See Full Technical README → for architecture diagrams, tool JSON schemas, buffer system details, and contributing guide.
Token Saver MCP already unlocks full-stack AI workflows, with more on the roadmap.
MIT — free for personal and commercial use.
👉 Start today:
```bash
./mcp setup
```
📚 For in-depth details: Full Usage Guide & Examples → | Detailed Technical README →