rust-code-mcp
An MCP server for semantic code search in Rust codebases. Combines BM25 full-text search with vector embeddings for hybrid search, plus rust-analyzer-based code analysis.
Features
- Hybrid search - BM25 keyword search + semantic vector similarity (see the fusion sketch after this list)
- Symbol navigation - Find definitions and references across the codebase
- Call graph analysis - Trace function call relationships
- Complexity metrics - LOC, cyclomatic complexity, function counts
- Incremental indexing - Merkle tree change detection for fast re-indexing
- Background sync - Automatic index updates every 5 minutes
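How the crate weights the two search signals is internal to its implementation; as a mental model only, hybrid fusion often reduces to a weighted sum of normalized scores. A minimal sketch, assuming min-max normalization and a tunable semantic weight (neither is confirmed as this project's actual scoring):

```rust
/// Min-max normalize scores into [0, 1].
fn normalize(scores: &[f32]) -> Vec<f32> {
    let (lo, hi) = scores
        .iter()
        .fold((f32::INFINITY, f32::NEG_INFINITY), |(lo, hi), &s| (lo.min(s), hi.max(s)));
    let range = (hi - lo).max(f32::EPSILON);
    scores.iter().map(|s| (s - lo) / range).collect()
}

/// Fuse BM25 and cosine-similarity scores for the same candidate list;
/// `alpha` is the weight given to the semantic signal.
fn fuse(bm25: &[f32], cosine: &[f32], alpha: f32) -> Vec<f32> {
    normalize(bm25)
        .iter()
        .zip(normalize(cosine))
        .map(|(kw, sem)| (1.0 - alpha) * *kw + alpha * sem)
        .collect()
}
```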
Tools
| Tool | Description |
|---|---|
| `search` | Keyword search using hybrid BM25 + vectors |
| `get_similar_code` | Find semantically similar code snippets |
| `find_definition` | Locate where a symbol is defined (by name) |
| `find_references` | Find all usages of a symbol (by name) |
| `get_dependencies` | List imports for a file |
| `get_call_graph` | Show function call relationships |
| `analyze_complexity` | Calculate code complexity metrics |
| `read_file_content` | Read file contents |
| `index_codebase` | Manually trigger indexing |
| `health_check` | Check system status |
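For intuition about what `analyze_complexity` reports: cyclomatic complexity is one plus the number of branch points in a function. The real tool works on rust-analyzer's AST; the token-counting approximation below is illustrative only:

```rust
/// Naive cyclomatic-complexity estimate: 1 + branch points.
/// Substring counting over-counts matches inside strings and comments
/// (and skips match arms); an AST walk avoids both problems.
fn approx_cyclomatic(source: &str) -> usize {
    let branch_markers = ["if ", "while ", "for ", "&&", "||"];
    1 + branch_markers
        .iter()
        .map(|kw| source.matches(kw).count())
        .sum::<usize>()
}
```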
Installation
1. Build the binary
git clone https://github.com/molaco/rust-code-mcp.git
cd rust-code-mcp
cargo build --release
The binary is at target/release/file-search-mcp.
Optionally, copy it somewhere on your PATH:
cp target/release/file-search-mcp ~/.local/bin/
2. Add to Claude Code
In your Rust project directory, create .mcp.json:
{
"mcpServers": {
"rust-code-mcp": {
"command": "/absolute/path/to/file-search-mcp"
}
}
}
Or add it globally in ~/.claude.json so it's available in all projects:
{
"mcpServers": {
"rust-code-mcp": {
"command": "/absolute/path/to/file-search-mcp"
}
}
}
3. Index your codebase
Once Claude Code starts, the server is running. Use the index_codebase tool to index your project:
> index my codebase at /absolute/path/to/my-rust-project
Or call the tool directly with the `directory` parameter set to your project root. Indexing is incremental: subsequent runs only process changed files (via Merkle tree change detection). A background sync also re-indexes every 5 minutes automatically.
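As a mental model of the change detection, a minimal sketch: hash every file, fold the sorted per-file hashes into a single root, and compare against the previous snapshot. The hash function and tree layout below are assumptions, not this crate's implementation:

```rust
use std::collections::{hash_map::DefaultHasher, BTreeMap};
use std::hash::{Hash, Hasher};

fn file_hash(contents: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    contents.hash(&mut h);
    h.finish()
}

/// Fold sorted (path, hash) pairs into one root; if it equals the stored
/// root, nothing changed and indexing can be skipped outright.
fn root_hash(files: &BTreeMap<String, u64>) -> u64 {
    let mut h = DefaultHasher::new();
    for (path, hash) in files {
        path.hash(&mut h);
        hash.hash(&mut h);
    }
    h.finish()
}

/// Otherwise, re-index only paths whose hash is new or changed.
fn changed<'a>(old: &BTreeMap<String, u64>, new: &'a BTreeMap<String, u64>) -> Vec<&'a str> {
    new.iter()
        .filter(|(path, hash)| old.get(path.as_str()) != Some(*hash))
        .map(|(path, _)| path.as_str())
        .collect()
}
```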
4. Start using it
All tools accept a `directory` parameter pointing to your project root. Examples (a wire-level request sketch follows the list):
- Search code: `search` with a query like "error handling in parser"
- Find definitions: `find_definition` for a symbol name
- Find references: `find_references` to see all usages of a symbol
- Call graph: `get_call_graph` to trace function relationships
- Similar code: `get_similar_code` for semantic similarity search
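Concretely, an MCP client ends up sending a standard JSON-RPC `tools/call` request. The envelope below is the MCP wire format; the `directory` and `query` argument names come from this page, and the `serde_json` builder is just for illustration:

```rust
use serde_json::{json, Value};

/// Build the `tools/call` payload for the `search` tool.
fn search_request(directory: &str, query: &str) -> Value {
    json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search",
            "arguments": { "directory": directory, "query": query }
        }
    })
}
```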
Index data is stored in `~/Library/Application Support/dev.rust-code-mcp.search/` (macOS) or `~/.local/share/search/` (Linux), keyed by a hash of the project path, so it never writes to your project directory.
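The per-project keying can be pictured roughly like this; the directory naming and hash choice here are hypothetical, not the crate's actual scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

/// One index directory per project, named by a hash of the project path,
/// so nothing is ever written inside the project itself.
fn index_dir(data_dir: &Path, project_root: &Path) -> PathBuf {
    let mut h = DefaultHasher::new();
    project_root.hash(&mut h);
    data_dir.join(format!("{:016x}", h.finish()))
}
```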
Nix
A Nix flake is provided for easy setup:
# Enter dev shell with all dependencies
nix develop github:molaco/rust-code-mcp
# Build the binary
nix build github:molaco/rust-code-mcp
The dev shell includes nightly Rust and CUDA support.
GPU Acceleration
Embedding generation uses ONNX Runtime with CUDA support for 10-15x faster indexing on NVIDIA GPUs.
Requirements
- NVIDIA GPU (Maxwell or newer)
- CUDA 12.x + cuDNN 9.x
- The `ort` crate downloads ONNX Runtime binaries to `~/.cache/ort.pyke.io/`
MCP Server CUDA Configuration
For CUDA to work when the MCP server is spawned by Claude Code (or other MCP clients), the LD_LIBRARY_PATH must include:
- ORT cache - Contains `libonnxruntime_providers_shared.so` and `libonnxruntime_providers_cuda.so`
- CUDA libraries - `libcudart.so`, `libcublas.so`, `libcublasLt.so`
- cuDNN libraries - `libcudnn.so`
Option 1: Using flake.nix (recommended)
The included flake.nix automatically generates .mcp.json with the correct LD_LIBRARY_PATH:
nix develop
# Generates .mcp.json with dynamically discovered ORT cache path
Option 2: Manual configuration
First, find your ORT cache path:
find ~/.cache/ort.pyke.io/dfbin -name "libonnxruntime_providers_shared.so" -printf '%h\n' | head -1
# Example output: /home/user/.cache/ort.pyke.io/dfbin/x86_64-unknown-linux-gnu/8BBB.../onnxruntime/lib
Then configure your MCP client (e.g., ~/.claude.json for Claude Code):
{
"mcpServers": {
"rust-code-mcp": {
"command": "/path/to/file-search-mcp",
"args": [],
"env": {
"RUST_LOG": "info",
"CUDA_HOME": "/usr/local/cuda",
"CUDA_PATH": "/usr/local/cuda",
"LD_LIBRARY_PATH": "/home/user/.cache/ort.pyke.io/dfbin/x86_64-unknown-linux-gnu/<HASH>/onnxruntime/lib:/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu"
}
}
}
}
Replace:
- `/path/to/file-search-mcp` with your binary path
- `<HASH>` with the hash from the `find` command above
- CUDA paths with your system's CUDA installation
Note: The ORT cache hash changes when ONNX Runtime is updated. If CUDA stops working, re-run the `find` command to get the new path.
Performance
| Mode | Throughput |
|---|---|
| CPU only | ~50 chunks/sec |
| GPU (RTX 3090) | ~500 chunks/sec (full pipeline) |
| GPU (embedding stage, benchmarked in isolation) | ~8000 chunks/sec |
Stack
- tantivy - Full-text search
- fastembed - Local embeddings (ONNX)
- lancedb - Embedded vector storage
- ra_ap_syntax - AST parsing
- ra_ap_ide - Semantic analysis (goto definition, find references)
- rmcp - MCP protocol
License
MIT
