Lean Ctx
Hybrid Context Optimizer: Shell Hook + MCP Server. Reduces LLM token consumption by 89-99%. Single Rust binary, zero dependencies.
Context Runtime for AI Agents
The context layer for AI coding agents
Reduce token waste in Cursor, Claude Code, Copilot, Windsurf, Codex, Gemini & more by 60-95% (up to 99% on cached reads)
Shell Hook + MCP Server · 56 tools · 10 read modes · 95+ patterns · Single Rust binary
Website · Docs · Install · Demo · Benchmarks · Cookbook · Security · Changelog · Discord
lean-ctx is a local-first context runtime that compresses file reads + shell output before they reach the LLM. Cached re-reads drop to ~13 tokens.
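The savings math is simple to check. A minimal sketch with illustrative numbers (a hypothetical 2,000-token file; the 13-token figure is the cached-read cost quoted above):

```python
# Back-of-envelope savings check. The 2,000-token "before" size is an
# invented example; 13 tokens is the cached re-read figure from the README.
def savings_pct(before_tokens: int, after_tokens: int) -> float:
    """Percent of tokens saved when a read shrinks from before to after."""
    return 100.0 * (before_tokens - after_tokens) / before_tokens

print(savings_pct(2000, 13))  # → 99.35
```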
See it in action:
- Read + Shell: map-mode reads + compressed CLI output
- Gain (live): tokens + USD savings in real time
- Benchmark proof: measure compression by language + mode
All GIFs are generated from reproducible VHS tapes in demo/.
What it does
- File reads (MCP): cached + mode-aware reads (`full`, `map`, `signatures`, `diff`, …) with graph-aware related-file hints
- Shell output (hook): compresses noisy CLI output via 95+ patterns (git, npm, cargo, docker, …)
- Graph-powered intelligence: multi-edge property graph (imports, calls, exports, type_ref) with weighted impact analysis, hybrid search (BM25 + embeddings + graph proximity via RRF), and incremental git-diff updates
- PR context packs: `lean-ctx pack --pr` builds a PR-ready context pack (changed files, related tests, impact, artifacts)
- Context packages: `lean-ctx pack create` bundles Knowledge + Graph + Session + Gotchas into portable `.lctxpkg` files; share context across projects/teams with SHA-256 integrity, auto-load on session start, and smart merge (dedup facts, overlay graph)
- Session memory (CCP): persist task/facts/decisions across chats, with structured recovery queries that survive compaction
- HTTP mode: `lean-ctx serve` for Streamable HTTP MCP + `/v1/tools/call` (used by the Cookbook + SDK)
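The hybrid search above names Reciprocal Rank Fusion (RRF) as the combiner. A minimal sketch of the generic RRF formula, with made-up file rankings standing in for BM25, embedding, and graph-proximity results (not lean-ctx internals):

```python
# Generic Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).
# k=60 is the conventional default from the RRF literature.
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Invented example rankings for three retrieval signals.
bm25      = ["auth.rs", "server.rs", "main.rs"]
embedding = ["server.rs", "auth.rs", "router.rs"]
graph     = ["server.rs", "router.rs", "main.rs"]
print(rrf([bm25, embedding, graph]))
# → ['server.rs', 'auth.rs', 'router.rs', 'main.rs']
```

A file ranked well by several signals (here `server.rs`) beats a file ranked first by only one, which is the point of rank-based fusion: no score normalization across heterogeneous retrievers is needed.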
How it works (30 seconds)
AI tool → (MCP tools + shell commands) → lean-ctx → your repo + CLI
- MCP server: exposes `ctx_*` tools (read modes, caching, deltas, search, memory, multi-agent)
- Shell hook: transparently compresses common commands so the LLM sees less noise
- Property graph: multi-edge code graph powers impact analysis, related-file discovery, and search ranking
- CCP: persists session state with structured recovery queries so long-running work doesn't "cold start" every chat
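The shell-hook idea can be sketched as pattern-based rewriting. A toy illustration in that spirit; the two rules below are invented examples, not the 95+ patterns lean-ctx actually ships:

```python
# Toy pattern-based output compressor. Rules are (regex, replacement) pairs
# applied in order; both rules here are made-up illustrations.
import re

PATTERNS = [
    # Drop npm warn/notice chatter entirely.
    (re.compile(r"^npm (?:warn|notice) .*$", re.MULTILINE), ""),
    # Shorten 40-char git object hashes to their first 8 characters.
    (re.compile(r"\b([0-9a-f]{8})[0-9a-f]{32}\b"), r"\1"),
]

def compress(output: str) -> str:
    for pattern, repl in PATTERNS:
        output = pattern.sub(repl, output)
    # Remove blank lines left behind by deleted matches.
    return "\n".join(line for line in output.splitlines() if line.strip())

raw = "npm warn deprecated foo@1.0.0\ncommit 3f2a9b1c" + "d" * 32 + "\nadded 12 packages"
print(compress(raw))  # prints the shortened commit line and the npm summary
```

The real hook works on whole command invocations, but the shape is the same: the LLM receives the rewritten text, so every stripped character is a token it never pays for.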
Get started (60 seconds)
```sh
# 1) Install (pick one)
curl -fsSL https://leanctx.com/install.sh | sh    # universal (no Rust needed)
brew tap yvgude/lean-ctx && brew install lean-ctx # macOS / Linux
npm install -g lean-ctx-bin                       # Node.js
cargo install lean-ctx                            # Rust
pi install npm:pi-lean-ctx                        # Pi Coding Agent

# 2) Setup (shell + auto-detected AI tools)
lean-ctx setup

# 3) Verify
lean-ctx doctor

# 4) See the payoff
lean-ctx gain --live
lean-ctx wrapped --week
```
After setup, restart your shell and your editor/AI tool once so the MCP + hooks are active.
Troubleshooting / Safety
- Disable immediately (current shell): `lean-ctx-off`
- Run a single command uncompressed: `lean-ctx -c --raw "git status"`
- Update: `lean-ctx update`
- Diagnose (shareable): `lean-ctx doctor --json`
Supported IDEs & AI tools
lean-ctx is a standard MCP server, so it works with any MCP-compatible client. Three integration modes are auto-selected per agent:
| Mode | How it works | Best for |
|---|---|---|
| CLI-Redirect | Agent calls lean-ctx directly via shell (zero MCP schema overhead) | Agents with shell access |
| Hybrid | MCP for cached reads (13 tokens), CLI for shell + search | Mixed environments |
| Full MCP | All 56 tools via MCP protocol | Protocol-only agents |
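For the Full-MCP row, a tool call travels as standard MCP JSON-RPC framing. A sketch of what that looks like on the wire; the tool name `ctx_read` and its arguments are hypothetical members of the `ctx_*` family, not a documented lean-ctx schema:

```python
# Standard MCP tools/call framing (JSON-RPC 2.0). The tool name and
# arguments below are hypothetical examples, not a published schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ctx_read",  # hypothetical ctx_* tool
        "arguments": {"path": "src/main.rs", "mode": "map"},
    },
}
wire = json.dumps(request)
print(wire)
```

This per-call envelope is the "MCP schema overhead" the CLI-Redirect mode avoids by letting the agent shell out directly.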
Agent compatibility matrix
| Agent | CLI | Hybrid | MCP | Setup |
|---|---|---|---|---|
| Cursor | ✅ | | | lean-ctx init --agent cursor |
| Codex CLI | ✅ | | | lean-ctx init --agent codex |
| Gemini CLI | ✅ | | | lean-ctx init --agent gemini |
| Claude Code | ✅ | | | lean-ctx init --agent claude |
| CRUSH | ✅ | | | lean-ctx init --agent crush |
| Hermes | ✅ | | | lean-ctx init --agent hermes |
| OpenCode | ✅ | | | lean-ctx init --agent opencode |
| Pi | ✅ | | | lean-ctx init --agent pi |
| Qoder | ✅ | | | lean-ctx init --agent qoder |
| Windsurf | ✅ | | | lean-ctx init --agent windsurf |
| GitHub Copilot | ✅ | | | lean-ctx init --agent copilot |
| Amp | ✅ | | | lean-ctx init --agent amp |
| Cline | ✅ | | | lean-ctx init --agent cline |
| Roo Code | ✅ | | | lean-ctx init --agent roo |
| Kiro | ✅ | | | lean-ctx init --agent kiro |
| Antigravity | ✅ | | | lean-ctx init --agent antigravity |
| Amazon Q | ✅ | | | lean-ctx init --agent amazonq |
| Qwen | ✅ | | | lean-ctx init --agent qwen |
| Trae | ✅ | | | lean-ctx init --agent trae |
| Verdent | ✅ | | | lean-ctx init --agent verdent |
| JetBrains IDEs | ✅ | | | lean-ctx init --agent jetbrains |
| QoderWork | ✅ | | | lean-ctx init --agent qoderwork |
| VS Code | ✅ | | | lean-ctx init --agent vscode |
| Zed | ✅ | | | lean-ctx init --agent zed |
| Neovim | ✅ | | | lean-ctx init --agent neovim |
| Emacs | ✅ | | | lean-ctx init --agent emacs |
| Sublime Text | ✅ | | | lean-ctx init --agent sublime |
Any MCP-compatible client works out of the box; the table above shows agents with first-class auto-setup.
When to use (and when not to)
Great fit if youβ¦
- use AI coding tools daily and your sessions are shell-heavy (git/tests/builds)
- work in medium/large repos (50+ files / monorepos)
- want a local-first layer with no telemetry by default
Skip it if youβ¦
- mostly work in tiny repos and rarely call the shell from your AI tool
- always need raw/unfiltered logs (you can still use `--raw`, but ROI is lower)
Demo
Try these in any repo:
```sh
lean-ctx read rust/src/server/mod.rs -m map
lean-ctx -c "git log -n 5 --oneline"
lean-ctx gain --live
lean-ctx benchmark report .
```
- The repo ships the exact tapes used to render the GIFs in `demo/`
- Regenerate locally: `vhs demo/leanctx.tape`, `vhs demo/gain.tape`, `vhs demo/benchmark.tape`
Benchmarks
- Latest snapshot: BENCHMARKS.md
- Reproduce: `lean-ctx benchmark report .`
Docs
- Getting started: https://leanctx.com/docs/getting-started
- Tools reference: https://leanctx.com/docs/tools/
- CLI reference: https://leanctx.com/docs/cli-reference/
- FAQ: discord-faq.md
- Feature catalog (SSOT snapshot): LEANCTX_FEATURE_CATALOG.md
- Architecture: ARCHITECTURE.md
- Vision: VISION.md
Privacy & security
- No telemetry by default
- Optional anonymous stats sharing (opt-in during setup)
- Disableable update check (config `update_check_disabled = true` or `LEAN_CTX_NO_UPDATE_CHECK=1`)
- Runs locally; your code never leaves your machine unless you explicitly enable cloud sync
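The SHA-256 integrity mentioned for `.lctxpkg` packages amounts to recomputing a digest and comparing it before trusting contents. A generic sketch; the payload layout here is invented, not the actual package format:

```python
# Generic content-integrity check with SHA-256. The JSON payload is an
# invented stand-in for a package body, not the real .lctxpkg layout.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

payload = b'{"knowledge": [], "graph": {}}'
stored_digest = sha256_hex(payload)  # recorded when the package is created

# On load: recompute and compare before using the contents.
assert sha256_hex(payload) == stored_digest
print("integrity ok")
```

Any single flipped byte changes the digest, so a mismatch reliably signals corruption or tampering in transit.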
See SECURITY.md.
Uninstall
```sh
lean-ctx-off        # disable immediately (current shell session)
lean-ctx uninstall  # remove hooks + editor configs + data dir

# Remove the binary (pick your install method)
brew uninstall lean-ctx
npm uninstall -g lean-ctx-bin
cargo uninstall lean-ctx
pi uninstall npm:pi-lean-ctx  # Pi Coding Agent
```
Contributing
Start with CONTRIBUTING.md. Easy first PR: propose a new CLI compression pattern via the issue template.
License
Apache License 2.0 β see LICENSE.
Portions of this software were originally released under the MIT License. See LICENSE-MIT and NOTICE.
