Flashlight
MCP Server that uses DeepSeek's 1M context window for whole-codebase code search
How it works
Flashlight loads your entire codebase into DeepSeek's context, then uses LLM understanding to find relevant code: no embeddings, no keyword matching, just brute-force full-context search.
It caches the codebase context on DeepSeek's side, so repeat queries are fast and cheap (cache-hit price: ¥0.02/million tokens vs. ¥1/million tokens on a miss).
For large projects exceeding the 1M token limit, Flashlight automatically shards the codebase by directory, queries all shards in parallel, and merges results.
Setup
1. Install
npm install -g @1percentsync/flashlight
2. Get a DeepSeek API key
Get one at platform.deepseek.com.
3. Configure MCP
Add to your MCP client config:
Claude Code (~/.claude.json under mcpServers):
```json
{
  "flashlight": {
    "command": "flashlight",
    "env": {
      "DEEPSEEK_API_KEY": "sk-..."
    }
  }
}
```
Usage
The server exposes a single tool, search, with the following parameters:
| Parameter | Required | Description |
|---|---|---|
| query | Yes | Natural language description of the code to find |
| scope | No | Relative directory path to narrow the search |
| file_types | No | File extensions to filter (e.g. [".ts", ".py"]) |
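A call combining these parameters might look like the following. This is an illustrative example: the argument values are invented, and the surrounding JSON-RPC envelope that an MCP client adds is omitted.

```json
{
  "name": "search",
  "arguments": {
    "query": "where do we retry failed HTTP requests?",
    "scope": "src/network",
    "file_types": [".ts"]
  }
}
```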
Output Modes
Results are returned in one of three formats (tried in order):
- Full files: all matched files with line numbers (if total ≤ 50K chars)
- Snippets: only the matched line ranges (if total ≤ 50K chars)
- Index: file paths and line ranges only (the caller should use Read to view the code)
Configuration
All via environment variables:
| Variable | Default | Description |
|---|---|---|
| DEEPSEEK_API_KEY | (required) | DeepSeek API key |
| FLASHLIGHT_MODEL | deepseek-v4-flash | Model (deepseek-v4-flash or deepseek-v4-pro) |
| FLASHLIGHT_REASONING_EFFORT | high | Thinking effort (high or max) |
| FLASHLIGHT_CHANGE_THRESHOLD | 0.1 | Ratio of changed tokens that triggers a base rebuild |
| FLASHLIGHT_MAX_CONTEXT_TOKENS | 900000 | Max tokens per shard (auto-sharding triggers when exceeded) |
| FLASHLIGHT_KEEPER_URL | (none) | URL of the keeper service for cache keepalive |
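For example, to pin the pro model and lower the per-shard budget, the env block from the setup section might look like this (the values here are illustrative):

```json
{
  "flashlight": {
    "command": "flashlight",
    "env": {
      "DEEPSEEK_API_KEY": "sk-...",
      "FLASHLIGHT_MODEL": "deepseek-v4-pro",
      "FLASHLIGHT_MAX_CONTEXT_TOKENS": "800000"
    }
  }
}
```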
How caching works
On first query, Flashlight sends all code to DeepSeek and saves a base snapshot. On subsequent queries:
- Probe: check whether DeepSeek's cache is still alive
- If alive: detect file changes, then send only the changed files plus the new query
- If expired: rebuild the base
After each rebuild, activation requests establish cache for future probes and queries.
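The decision flow above can be sketched as follows. The `Backend` interface and all names here are hypothetical stand-ins, not Flashlight's internals, and the real calls are network requests rather than synchronous functions:

```typescript
// Hypothetical sketch of the per-query cache decision.
type Snapshot = Map<string, string>; // file path -> content hash

interface Backend {
  probeCache(): boolean;                                   // is DeepSeek's cache still alive?
  sendDelta(changed: string[], query: string): string;     // cache hit: ship only diffs
  rebuildBase(snapshot: Snapshot, query: string): string;  // cache expired: full resend
}

function runQuery(backend: Backend, prev: Snapshot, curr: Snapshot, query: string): string {
  if (backend.probeCache()) {
    // Cache alive: send only files whose hash changed, plus the new query.
    const changed = [...curr]
      .filter(([path, hash]) => prev.get(path) !== hash)
      .map(([path]) => path);
    return backend.sendDelta(changed, query);
  }
  // Cache expired: resend the whole snapshot as a fresh base.
  return backend.rebuildBase(curr, query);
}
```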
Sharding (large projects)
When a project exceeds FLASHLIGHT_MAX_CONTEXT_TOKENS, Flashlight automatically:
- Splits files by directory: tries the whole project first, then recursively splits by top-level directories until each group fits
- Queries all shards in parallel
- Merges and deduplicates results
Each shard maintains independent cache state. Shard boundaries only change when a shard overflows (split eagerly, merge lazily).
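A minimal sketch of that recursive split, under the assumption that grouping descends one path segment at a time; the names and grouping strategy are illustrative, not Flashlight's actual code:

```typescript
// Hypothetical sketch of directory-based sharding.
type FileEntry = { path: string; tokens: number };

// Split a file list into shards that each fit the token budget.
function shard(files: FileEntry[], maxTokens: number, prefix = ""): FileEntry[][] {
  const total = files.reduce((sum, f) => sum + f.tokens, 0);
  if (total <= maxTokens) return [files]; // whole group fits: one shard

  // Group by the next path segment below the current prefix.
  const groups = new Map<string, FileEntry[]>();
  for (const f of files) {
    const seg = f.path.slice(prefix.length).split("/")[0];
    const key = prefix + seg;
    const group = groups.get(key);
    if (group) group.push(f); else groups.set(key, [f]);
  }
  // A single oversized file or directory leaf cannot be split further.
  if (groups.size <= 1) return [files];

  // Recurse into each subgroup until every shard fits the budget.
  return [...groups.entries()].flatMap(([key, group]) =>
    shard(group, maxTokens, key + "/")
  );
}
```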
Cache Keepalive (Docker)
For long-lived cache preservation, deploy the keeper service:
docker run -d -p 3100:3100 ghcr.io/1percentsync/flashlight-keeper
Or with docker compose (keeper/docker-compose.yml):
cd keeper && docker compose up -d
Then set FLASHLIGHT_KEEPER_URL=http://localhost:3100 in your MCP config.
How it works
The keeper learns the actual DeepSeek cache TTL through observation, then schedules activations just in time.
- Sentinel: per model, creates a tiny throwaway cache, waits ~95% of the estimated TTL, then probes once. If alive, the TTL estimate goes up; if dead, it goes down. Cost: ~¥0.002/day.
- Adaptive timing: 24 hourly buckets (UTC) per model, each with its own TTL estimate. Peak hours may have shorter TTL; off-peak may have longer. The keeper adapts automatically.
- Task activation: when a task's time since last activation reaches estimated_TTL × 80%, the keeper probes and re-activates it. If the cache is already dead (which shouldn't happen if sentinel timing is correct), the task is removed.
- TTL persistence: learned estimates are saved to disk (/app/data/ttl_estimate.json) and survive container restarts.
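The up/down adjustment and the 80% activation threshold can be sketched like this; the 1.05/0.8 adjustment factors and the 30-minute seed estimate are invented for illustration, not the keeper's real constants:

```typescript
// Hypothetical sketch of per-hour TTL learning and activation timing.
const ttlEstimateMs: number[] = new Array(24).fill(30 * 60_000); // one bucket per UTC hour

// Called after a sentinel probe fired at ~95% of the current estimate.
function updateEstimate(hourUtc: number, cacheAlive: boolean): number {
  const cur = ttlEstimateMs[hourUtc];
  // Alive at 95%: the estimate was safe, so nudge it up.
  // Dead at 95%: too optimistic, so back off harder.
  ttlEstimateMs[hourUtc] = cacheAlive ? cur * 1.05 : cur * 0.8;
  return ttlEstimateMs[hourUtc];
}

// A task is probed and re-activated once 80% of the estimate has elapsed.
function shouldActivate(lastActivationMs: number, nowMs: number, hourUtc: number): boolean {
  return nowMs - lastActivationMs >= ttlEstimateMs[hourUtc] * 0.8;
}
```

Because a live probe only nudges the estimate up while a dead one cuts it back sharply, the estimate converges from below and rarely overshoots the true TTL.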
Keeper environment variables
| Variable | Default | Description |
|---|---|---|
| PORT | 3100 | HTTP server port |
| MAX_LIFETIME_MS | 172800000 (48h) | Max task lifetime |
| ENABLE_REFRESH | false | Enable the /refresh endpoint (testing only) |
| SENTINEL_API_KEY | (none) | Dedicated API key for sentinel probes (falls back to task keys if unset) |
| DATA_DIR | /app/data | Directory for TTL-estimate persistence |
Logs
Logs are written to .flashlight/flashlight.log in the workspace root. Each query logs:
- Snapshot size and shard plan
- Cache probe result (hit/miss)
- File change detection
- Per-shard query cache hit ratio
- Search results
- Activation status
Cost
With deepseek-v4-flash on a ~50K token codebase:
| Operation | Cost |
|---|---|
| First query (build cache) | ~¥0.05 |
| Subsequent query (cache hit) | ~¥0.001 + output tokens |
| Activation (keepalive) | ~¥0.001 per shard |
License
ISC
