codescout
MCP server giving AI coding agents IDE-grade code intelligence – symbol navigation, semantic search, persistent memory – optimized for token efficiency.
Works with Claude Code, GitHub Copilot, Cursor, and any MCP-capable agent.
What it does
- Symbol navigation – `symbols`, `references`, `symbol_at`, `call_graph`, `edit_code`, backed by LSP across 9 languages
- Semantic search – find code by concept using a bundled ONNX embedding model (22 MB, zero setup), not grep
- Library navigation – explore dependency source code with scoped search, version tracking, and auto-discovery
- Multi-project workspaces – register related projects in `workspace.toml` for cross-project navigation with per-project memory and indexing
- Token efficiency – compact by default, details on demand, never dumps full files
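
A multi-project workspace registration might look like the sketch below. The key names are illustrative assumptions, not codescout's documented schema – consult the project's own workspace documentation for the real format:

```toml
# Hypothetical sketch only – field names are assumptions, not the documented schema.
[[projects]]
name = "backend"
path = "../backend"

[[projects]]
name = "shared-lib"
path = "../shared-lib"
```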
Why not just read files?
| Without codescout | With codescout |
|---|---|
| Agent reads full files to find one function | Navigates by symbol name – zero file reads |
| `grep` returns noise (comments, strings, docs) | `references` returns exact call sites |
| Context burns on navigation overhead | Token-efficient by design – compact by default |
| State lost between sessions | Persistent memory across sessions |
| Re-reads same modules from different entry points | Symbol index built once, queried instantly |
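
The grep-noise row above can be illustrated with a toy snippet (the code text and symbol name are made up for the example):

```python
# A symbol name appears in comments and strings as well as in real code.
source = '''\
# retry_request is deprecated, see docs
RETRY_MSG = "calling retry_request failed"
def retry_request(url): ...
result = retry_request("https://example.com")
'''

# grep-style substring match: every line containing the name, noise included.
grep_hits = [line for line in source.splitlines() if "retry_request" in line]
print(len(grep_hits))  # 4 – two of these are a comment and a string literal

# An LSP-backed `references` query would instead return only the definition
# and the genuine call site.
```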
Quick Start
```sh
cargo build
./target/debug/codescout start --project /path/to/code
```
Add codescout as an MCP server in `~/.claude/settings.json`:

```json
{
  "mcpServers": {
    "codescout": {
      "command": "codescout",
      "args": ["start", "--project", "."]
    }
  }
}
```
Then use in Claude Code – it will route all file/symbol/search operations through codescout's tools.
Onboarding is essential. Before starting work on a new project, run `onboarding()` – it discovers languages, reads key project files, and generates a project-specific system prompt and memory files. Without it, the agent has no project context and will navigate the codebase blind. See the Claude Code integration guide for details.
Tip: Install the codescout-companion plugin to automatically steer Claude toward codescout tools in every session – including subagents.
Agent integrations
| Agent | Guide |
|---|---|
| Claude Code | docs/agents/claude-code.md |
| GitHub Copilot | docs/agents/copilot.md |
| Cursor | docs/agents/cursor.md |
Multi-agent infrastructure
codescout's design is informed by research on compounding error in multi-agent systems, where empirical evidence shows failure rates of 41–87% in production pipelines. That finding drove the choice of single-session, skill-based workflows over agent orchestration chains. Read the analysis →
Kotlin
codescout has first-class Kotlin support built around the reality that Kotlin projects are expensive to boot and JetBrains' kotlin-lsp allows only one LSP process per workspace.
- LSP multiplexer – a detached `codescout mux` process shares a single kotlin-lsp JVM across all codescout instances. No configuration needed. Cold start (8–15s JVM boot) happens once; subsequent sessions connect instantly.
- Concurrent instance safety – each instance gets an isolated system path to prevent IntelliJ platform lock contention, with a circuit breaker that fails fast instead of timing out.
- Gradle isolation – per-instance `GRADLE_USER_HOME` eliminates daemon lock contention between parallel sessions.
| Metric | Without mux | With mux |
|---|---|---|
| kotlin-lsp JVMs per machine | 1 per session (~2GB each) | 1 shared (~2GB total) |
| Cold start on 2nd session | 8–15s | ~0s (mux already warm) |
| Typical LSP response | 120s+ timeout | 30–270ms |
→ Kotlin LSP Multiplexer docs
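
The Gradle isolation point can be simulated with atomic directory creation standing in for Gradle's daemon registry lock. This is an illustrative sketch, not codescout's implementation; the lock-file name is made up:

```python
import os
import tempfile

# One shared Gradle home: the second "session" hits the existing lock.
shared = tempfile.mkdtemp()
os.mkdir(os.path.join(shared, "daemon.lock"))      # session A acquires the lock
try:
    os.mkdir(os.path.join(shared, "daemon.lock"))  # session B contends...
except FileExistsError:
    print("session B: blocked on shared Gradle home")

# Per-instance GRADLE_USER_HOME analogue: separate dirs, so no contention.
a, b = tempfile.mkdtemp(), tempfile.mkdtemp()
os.mkdir(os.path.join(a, "daemon.lock"))
os.mkdir(os.path.join(b, "daemon.lock"))
print("sessions A and B: isolated locks acquired")
```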
Tools (20)
Symbol navigation (5) · File operations (7) · Shell (1) · Semantic search (2) · Memory (1) · Library navigation (1) · Workflow & Config (3)
Supported languages: Rust, Python, TypeScript/JavaScript, Go, Java, Kotlin, C/C++, C#, Ruby.
→ Tool reference
Semantic Search & Embeddings
codescout bundles all-MiniLM-L6-v2 (quantized, 22 MB) as its default embedding model.
It runs locally via ONNX – no external server, no API key, no GPU needed. On first
`index(action: build)`, the model is downloaded once to `~/.cache/huggingface/hub/`.
For users with Ollama or a GPU, codescout also supports external embedding servers
(Ollama, OpenAI, llama.cpp, vLLM, TEI) via the standard `/v1/embeddings` API.
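
Conceptually, semantic search ranks code chunks by embedding similarity rather than substring match. A minimal sketch of that ranking with toy vectors – codescout's real index uses 384-dimensional MiniLM embeddings; the names and numbers here are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" standing in for real model output.
query = [0.9, 0.1, 0.0]                  # e.g. "where do we retry failed requests?"
chunks = {
    "http_retry_loop": [0.8, 0.2, 0.1],  # conceptually close to the query
    "csv_parser":      [0.1, 0.9, 0.3],  # unrelated code
}
ranked = sorted(chunks, key=lambda k: cosine(query, chunks[k]), reverse=True)
print(ranked[0])  # http_retry_loop – the conceptual match ranks first
```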
→ Embedding configuration → Model comparison & benchmark
Experimental Features
New features land on the experiments branch before reaching master.
They may change or be removed without notice, and may not be in your installed release yet.
→ Browse experimental features
Contributing
See CONTRIBUTING.md for how to get started. PRs from Claude Code are welcome!
Features
- Multi-project workspace support with per-project LSP, memory, and semantic indexing
- Library navigation with per-library embedding databases and version staleness hints
- LSP idle TTL – idle language servers are shut down automatically (Kotlin: 2h, others: 30min) and restarted transparently on the next query
- Persistent memory across sessions with semantic recall
- Output buffers (`@cmd_*`, `@file_*`) for token-efficient large output handling
- Progressive disclosure – compact by default, full detail on demand
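
The idle-TTL behaviour amounts to a timestamp check. A sketch with the intervals from the list above – the function and variable names are illustrative, not codescout's API:

```python
import time

# TTLs from the feature list: Kotlin 2h, everything else 30min.
IDLE_TTL_SECS = {"kotlin": 2 * 3600}
DEFAULT_TTL_SECS = 30 * 60

def should_shutdown(language, last_query_ts, now=None):
    """Illustrative only: true once a server has been idle past its TTL."""
    now = time.time() if now is None else now
    ttl = IDLE_TTL_SECS.get(language, DEFAULT_TTL_SECS)
    return now - last_query_ts > ttl

# An hour of idleness keeps a Kotlin server alive but retires a Rust one.
print(should_shutdown("kotlin", last_query_ts=6_400, now=10_000))  # False
print(should_shutdown("rust", last_query_ts=6_400, now=10_000))    # True
```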
