ctx++
Fast, accurate codebase intelligence for AI coding agents.
ctx++ is an MCP (Model Context Protocol) server that gives AI agents precise, structured understanding of large codebases. It extracts symbols using native tree-sitter parsing, indexes them in SQLite with both full-text and vector search, and traces call graphs to automatically map how features are assembled across files -- no hand-maintained documentation required.
Why ctx++
ctx++ is built around three principles:
- Never time out. SQLite with indexed vector and FTS search means queries are fast regardless of codebase size. The MCP server loads in under 2 seconds and all tool calls complete within the MCP timeout window.
- Return the right code, not just matching code. The call graph traversal finds everything involved in a feature by walking real call relationships -- the same way a senior engineer would read the code.
- Minimal setup. A single Go binary plus Ollama for embeddings. No cloud services, no API keys, no Docker.
Tools
| Tool | Description |
|---|---|
| `ctxpp_index` | Index or reindex the codebase. Run once after install; incremental updates happen automatically. |
| `ctxpp_search` | Search by identifier name (keyword) or natural language (semantic). Returns symbol definitions with file paths and line numbers. |
| `ctxpp_file_skeleton` | Return all symbols in a file with signatures and line ranges, without reading the full body. Cheap way to understand a file's API surface. |
| `ctxpp_feature_traverse` | Given an exact symbol name, return related symbols by walking the call graph outward via BFS. The auto-generated feature hub. |
| `ctxpp_blast_radius` | Given a symbol, return every location in the codebase that references it. Answers "what breaks if I change this?" |
Supported Languages
| Language | Extensions | Symbols Extracted |
|---|---|---|
| Go | .go | functions, methods, structs, interfaces, types, constants, variables |
| Java | .java | classes, interfaces, enums, methods, constructors, fields |
| Kotlin | .kt, .kts | functions, methods, classes, interfaces, properties, imports |
| JavaScript | .js, .mjs, .cjs, .jsx | functions, classes, methods, arrow functions |
| TypeScript | .ts, .tsx, .mts, .cts | functions, classes, interfaces, type aliases, enums |
| Rust | .rs | functions, structs, enums, traits, impl methods, type aliases |
| C# | .cs | classes, interfaces, methods, fields, imports |
| C | .c, .h | functions, structs, enums, typedefs, function-like macros |
| C++ | .cpp, .cc, .cxx, .hpp, .hh, .hxx | functions, methods, classes, structs, enums, namespaces, templates |
| SQL | .sql | tables, views, indexes, functions, procedures, triggers |
| Markdown | .md, .mdx | headings (as sections) |
| HTML | .html, .htm | headings, script/style blocks |
| Shell | .sh, .bash, .zsh, .dash | functions |
| Protobuf | .proto | messages, services, RPCs, enums |
| HTTP | .http, .rest | named requests |
| Text/Config | .txt, .env, Makefile, Dockerfile, LICENSE, etc. | file-level document symbol |
Want to add another language? See docs/ADDING-LANGUAGE-SUPPORT.md for a step-by-step implementation template and PR checklist.
Prerequisites
- Go 1.24+ for building from source
- Ollama for semantic search embeddings
```sh
# Install Ollama, then pull the default embedding model:
ollama pull bge-m3
```
Without Ollama, ctx++ still works but provides keyword search only. Semantic search and feature traversal quality depend on embeddings.
Install
```sh
go install github.com/cavenine/ctxpp@latest
```
Or build from source:
```sh
git clone https://github.com/cavenine/ctxpp
cd ctxpp
make build
```
Quick Start
1. Index your project
```sh
ctxpp index --path /path/to/your/project
```
This creates .ctxpp/index.db in the project root. Add it to .gitignore:
```
.ctxpp/
```
Subsequent runs only re-process changed files. Branch switches self-heal automatically via the file watcher.
If parser logic changes but your source files do not, force a full reparse of supported files:
```sh
ctxpp index --path /path/to/your/project --force
```
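ctx++'s exact change-detection code is internal, but "only re-process changed files" generally means comparing content digests against what the index recorded last time. A minimal sketch (the store layout and helper names here are illustrative, not ctx++'s actual implementation):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash of a file; identical content yields an identical digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(paths, stored: dict[str, str]) -> list[Path]:
    """Return only files whose content differs from the stored digest,
    so a reindex pass can skip everything else (hypothetical helper)."""
    return [p for p in paths if stored.get(str(p)) != file_digest(p)]
```

Content hashing (rather than timestamps alone) is also what makes branch switches self-heal: a file restored to previously indexed content hashes back to the same digest and needs no reparse.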
2. Add to your MCP config
The examples below use Ollama with bge-m3 (the default). If Ollama is not running, omit the CTXPP_OLLAMA_* variables; ctx++ will fall back to keyword search only.
OpenCode (opencode.json in project root):
```json
{
  "mcp": {
    "ctxpp": {
      "type": "local",
      "command": ["ctxpp", "mcp"],
      "enabled": true,
      "environment": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
Claude Code (.mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
Cursor / Windsurf (.cursor/mcp.json or .windsurf/mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
3. Use it
Ask your AI agent anything about the codebase:
```
use ctxpp to show me everything involved in account authentication
use ctxpp to find where FetchAccount is defined and what calls it
use ctxpp_blast_radius to tell me what breaks if I change the Account struct
```
Ollama Integration
ctx++ uses Ollama for embedding-based semantic search. The default model is bge-m3 (BAAI's BGE-M3, 1024 dimensions), which was selected through head-to-head quality benchmarks against multiple models on real codebases.
```sh
ollama pull bge-m3
```
ctx++ auto-detects Ollama on localhost:11434 at startup. If Ollama is not running, ctx++ falls back to keyword search only and prints a warning.
To use a different embedding model (e.g., all-minilm for faster indexing at the cost of some search quality):
"environment": {
"CTXPP_PROJECT": "/path/to/your/project",
"CTXPP_OLLAMA_MODEL": "all-minilm"
}
AWS Bedrock Integration
For environments without a local GPU, ctx++ can use Amazon Titan Text Embeddings V2 via AWS Bedrock. Quality is comparable to the Ollama/bge-m3 default (4.7/5 vs 4.8/5 on the kubernetes benchmark).
Prerequisites: AWS credentials configured via ~/.aws/credentials, AWS_PROFILE, or IAM role. The identity needs bedrock:InvokeModel permission on amazon.titan-embed-text-v2:0.
Set CTXPP_EMBED_BACKEND=bedrock and the following env vars:
OpenCode (opencode.json in project root):
```json
{
  "mcp": {
    "ctxpp": {
      "type": "local",
      "command": ["ctxpp", "mcp"],
      "enabled": true,
      "environment": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Claude Code (.mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Cursor / Windsurf (.cursor/mcp.json or .windsurf/mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Or for initial indexing from the command line:
```sh
export CTXPP_EMBED_BACKEND=bedrock
export CTXPP_BEDROCK_REGION=us-east-1
export CTXPP_BEDROCK_MODEL=amazon.titan-embed-text-v2:0
export CTXPP_BEDROCK_DIMS=1024
export CTXPP_EMBED_CONCURRENCY=100  # increase to 200 for large repos
ctxpp index --path /path/to/your/project
```
Trade-offs vs Ollama:
| | Ollama (local GPU) | Bedrock |
|---|---|---|
| Per-query embed latency | ~25ms | 100-460ms |
| Index time (kubernetes, 318K symbols) | 47m | ~7.5h |
| GPU required | Yes | No |
| Cost | Free (local) | AWS API pricing |
| Horizontal scaling | Limited by GPU | High (100-200 concurrent) |
| Quality (kubernetes benchmark) | 4.8/5 | 4.7/5 |
Bedrock is the right choice for CI/CD pipelines, cloud-hosted agents, or developer machines without a GPU. For interactive development with a GPU available, Ollama is faster.
OpenAI-Compatible Embeddings Integration
ctx++ can also use any provider that exposes the OpenAI POST /v1/embeddings API. This includes OpenAI, OpenAI-compatible proxies, vLLM, LiteLLM, LocalAI, and Ollama's OpenAI-compatible endpoint.
Set CTXPP_EMBED_BACKEND=openai and configure:
- CTXPP_OPENAI_URL
- CTXPP_OPENAI_MODEL
- CTXPP_OPENAI_DIMS
- CTXPP_OPENAI_API_KEY (optional for local unauthenticated servers)
Example with OpenAI hosted embeddings:
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "openai",
        "CTXPP_OPENAI_URL": "https://api.openai.com",
        "CTXPP_OPENAI_MODEL": "text-embedding-3-small",
        "CTXPP_OPENAI_DIMS": "1536",
        "CTXPP_OPENAI_API_KEY": "${OPENAI_API_KEY}"
      }
    }
  }
}
```
Example with Ollama's OpenAI-compatible endpoint:
```sh
export CTXPP_EMBED_BACKEND=openai
export CTXPP_OPENAI_URL=http://localhost:11434
export CTXPP_OPENAI_MODEL=bge-m3
export CTXPP_OPENAI_DIMS=1024
ctxpp index --path /path/to/your/project
```
This backend is opt-in only. Auto-detection still prefers TEI, then Ollama, then bundled fallback.
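The auto-detection order can be pictured as a first-reachable-wins scan over that preference list. A minimal sketch, with the reachability probe injected as a function so the example stays self-contained (ctx++'s real detection logic may differ):

```python
# Preference order stated in the docs: TEI, then Ollama, then bundled fallback.
AUTO_ORDER = ["tei", "ollama", "bundled"]

def pick_backend(reachable) -> str:
    """Return the first backend whose probe succeeds.

    `reachable` is a callable taking a backend name and returning True if
    its endpoint responds; the bundled fallback needs no probe.
    """
    for backend in AUTO_ORDER:
        if backend == "bundled" or reachable(backend):
            return backend
    return "bundled"
```

Setting CTXPP_EMBED_BACKEND explicitly bypasses any such scan, which is why the openai backend is opt-in only.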
Configuration
All configuration is via environment variables.
| Variable | Default | Description |
|---|---|---|
| `CTXPP_PROJECT` | `.` | Path to the project root to index |
| `CTXPP_OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `CTXPP_OLLAMA_MODEL` | `bge-m3` | Ollama embedding model |
| `CTXPP_EMBED_BACKEND` | (auto-detect) | Embedding backend: auto, ollama, tei, openai, bedrock, or bundled |
| `CTXPP_OPENAI_URL` | `https://api.openai.com` | OpenAI-compatible embeddings API base URL |
| `CTXPP_OPENAI_MODEL` | (required with openai) | OpenAI-compatible embedding model |
| `CTXPP_OPENAI_API_KEY` | (optional) | Bearer token for OpenAI-compatible providers |
| `CTXPP_OPENAI_DIMS` | (required with openai) | Embedding dimensions for the selected OpenAI-compatible model |
| `CTXPP_WORKERS` | number of CPUs | Parallel workers for initial indexing |
| `CTXPP_EMBED_CONCURRENCY` | 10 | Max concurrent embedding requests (mainly Bedrock) |
CLI Reference
```
ctxpp index [--path/-p <path>] [--force]   Index or reindex a project (default: $CTXPP_PROJECT or current directory)
ctxpp backfill [--path/-p <path>]          Re-embed symbols missing embedding vectors
ctxpp mcp                                  Start the MCP server over stdio
ctxpp version                              Print version
```
Architecture
ctx++ is written in Go and built on:
- go-tree-sitter -- native C tree-sitter bindings for fast, accurate AST parsing across all supported languages
- SQLite via modernc.org/sqlite (pure Go, no CGO), with FTS5 for full-text search and brute-force cosine similarity for vector search
- Ollama for embedding generation (default model: bge-m3)
- MCP SDK for stdio-based MCP transport
The index lives in a single .ctxpp/index.db file per project. The schema tracks files, symbols, embeddings, call edges, and import edges. All queries hit indexed columns -- no full-table scans, no loading the entire index into memory.
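Brute-force vector search of the kind described above is just a cosine-similarity scan over the stored embedding vectors. A minimal sketch of the core operation (the symbol names and 3-dimensional vectors are made up for illustration; real embeddings have 1024+ dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, embeddings, k=2):
    """embeddings: {symbol_name: vector}. Scan every vector and
    return the k symbols most similar to the query embedding."""
    ranked = sorted(embeddings, key=lambda s: cosine(query, embeddings[s]), reverse=True)
    return ranked[:k]
```

A linear scan like this stays fast at codebase scale because each comparison is a single dot product over pre-computed vectors already resident in SQLite, with no index structure to build or maintain.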
See PRD.md for full architecture and design decisions.
How Feature Traversal Works
When you ask ctxpp_feature_traverse about a symbol (e.g. "HandleLogin"):
- A keyword search finds symbols with that exact name in the index
- The call graph is walked outward from each seed via BFS: what does this function call? What do those functions call?
- Results are returned in BFS order (seed first, then direct callees, then transitive callees) up to the configured depth (default: 3 hops)
This gives you the full call tree rooted at a symbol, useful for understanding what a function orchestrates without reading every file manually. Use ctxpp_blast_radius for the reverse direction: what calls this function?
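The steps above can be sketched as a depth-limited BFS over call edges, with blast radius as the reverse lookup. The call graph and symbol names here are a toy example, not ctx++'s internal representation:

```python
from collections import deque

# Hypothetical call edges extracted by indexing: caller -> callees.
CALLS = {
    "HandleLogin": ["ValidateToken", "FetchAccount"],
    "FetchAccount": ["QueryDB"],
    "ValidateToken": [],
    "QueryDB": [],
}

def feature_traverse(seed, max_hops=3):
    """BFS outward from the seed: seed first, then direct callees,
    then transitive callees, up to max_hops (default 3, as in the docs)."""
    seen, order = {seed}, [seed]
    frontier = deque([(seed, 0)])
    while frontier:
        sym, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for callee in CALLS.get(sym, []):
            if callee not in seen:
                seen.add(callee)
                order.append(callee)
                frontier.append((callee, depth + 1))
    return order

def blast_radius(target):
    """Reverse direction: every symbol whose call edges reference the target."""
    return [caller for caller, callees in CALLS.items() if target in callees]
```

With this toy graph, `feature_traverse("HandleLogin")` visits the seed, its two direct callees, and then QueryDB one hop further out, while `blast_radius("QueryDB")` reports only FetchAccount.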
License
MIT
