Gemini Researcher
A lightweight, stateless MCP (Model Context Protocol) server that lets developer agents (Claude Code, GitHub Copilot) hand off deep repository analysis to the Gemini CLI. The server is read-only, returns structured JSON (as text content), and is designed to reduce the calling agent's context and model usage.
Status: v1 complete. Core features are stable, but still early days. Feedback welcome!
If this saved you tokens, ⭐ please consider giving it a star! :)
The primary goals:
- Reduce agent context usage by letting Gemini CLI read large codebases locally and do its own research
- Reduce calling-agent model usage by offloading heavy analysis to Gemini
- Keep the server stateless and read-only for safety
Why use this?
Instead of copying entire files into your agent's context (burning tokens and cluttering the conversation), this server lets Gemini CLI read files directly from your project. Your agent sends a research query, Gemini reads and synthesizes using its large context window, and returns structured results. You save tokens, your agent stays focused, and complex codebase analysis becomes practical.
Verified clients: Claude Code, Cursor, VS Code (GitHub Copilot)
[!NOTE] It should work with other clients as well, but I haven't personally tested them yet. Please open an issue if you try it elsewhere!
Overview
Gemini Researcher accepts queries from your AI agent and uses Gemini CLI to analyze your local code files. Results are returned as formatted JSON for your agent to use.
Runtime safety
The server runs Gemini CLI with safety restrictions enabled. See docs/runtime-contract.md for full technical details.
Default invocation pattern:
gemini [ -m <model> ] --output-format json --approval-mode default [--admin-policy <path>] -p "<prompt>"
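For example, the pattern above might expand as follows for a single query (a hedged sketch: the `-m flash` value and the prompt are illustrative, not documented defaults):

```shell
# Assemble the default invocation as an argv array, then print it.
# "flash" is a placeholder model name for illustration only.
model="flash"
prompt="Explain @src/auth.ts login flow"
set -- gemini -m "$model" --output-format json --approval-mode default -p "$prompt"
printf '%s ' "$@"; echo
```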
Key safety points:
- Uses `--approval-mode default` (not YOLO mode) for controlled execution
- Enforces a read-only policy by default to prevent file changes
- The policy blocks mutating tools like `write_file`, `replace`, `run_shell_command`
- Strict enforcement can be disabled with `GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0` (not recommended)
Auth and health check
Run `health_check` with `includeDiagnostics: true` to see auth status and server health.
| authStatus | What it means | Impact |
|---|---|---|
| `configured` | Gemini CLI is authenticated | Server ready to use |
| `unauthenticated` | No valid authentication found | Server marked as degraded |
| `unknown` | Could not verify auth status | Server marked as degraded |
`health_check.status` values:
- `ok`: Gemini CLI is available, auth is working, and the safety policy is enforced
- `degraded`: Setup incomplete, auth unclear, or safety policy disabled
Prerequisites
- Node.js 18+ installed
- Gemini CLI installed: `npm install -g @google/gemini-cli`
- Gemini CLI authenticated (recommended: run `gemini` and choose "Login with Google") or set `GEMINI_API_KEY`
Quick checks:
node --version
gemini --version
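The Node.js requirement can also be checked programmatically. This small POSIX-shell helper (a sketch, not part of the package) parses the output format of `node --version`:

```shell
# Parse the major version out of `node --version` output (e.g. "v18.19.0")
# and compare it against the documented minimum of 18.
check_node_major() {
  major=${1#v}          # strip the leading "v"
  major=${major%%.*}    # keep only the major component
  if [ "$major" -ge 18 ]; then echo "OK"; else echo "TOO OLD"; fi
}

check_node_major "v18.19.0"   # OK
check_node_major "v16.20.2"   # TOO OLD
```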
Quickstart
Step 1: Validate environment
Run the setup wizard to verify Gemini CLI is installed and authenticated:
npx gemini-researcher init
Step 2: Configure your MCP client
The standard config below works in most tools:
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
[!NOTE] On native Windows, some MCP hosts use shell-less process spawning and may not resolve npm command shims (`npx`, `gemini`) reliably. If startup fails with launch errors (`spawn ... ENOENT` / `GEMINI_CLI_LAUNCH_FAILED`) despite the commands working in PowerShell, prefer Docker or WSL for immediate reliability. See the full remediation tree in `docs/platforms/windows.md`.
VS Code
Add to your VS Code MCP settings (create .vscode/mcp.json if needed):
{
"servers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
Claude Code
Option 1: Command line (recommended)
Local (user-wide) scope
# Add the MCP server via CLI
claude mcp add --transport stdio gemini-researcher -- npx gemini-researcher
# Verify it was added
claude mcp list
Project scope
Navigate to your project directory, then run:
# Add the MCP server via CLI
claude mcp add --scope project --transport stdio gemini-researcher -- npx gemini-researcher
# Verify it was added
claude mcp list
Option 2: Manual configuration
Add to .mcp.json in your project root (project scope):
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
Or add to ~/.claude/settings.json for local scope.
After adding the server, restart Claude Code and use /mcp to verify the connection.
Cursor
Go to Cursor Settings -> Tools & MCP -> Add a Custom MCP Server. Add the following configuration:
{
"mcpServers": {
"gemini-researcher": {
"type": "stdio",
"command": "npx",
"args": [
"gemini-researcher"
]
}
}
}
[!NOTE] The server uses the directory where your IDE opened the workspace (or where your terminal is running) as the project root. To analyze a different directory, optionally set `PROJECT_ROOT`:
Example
{
"mcpServers": {
"gemini-researcher": {
"command": "npx",
"args": [
"gemini-researcher"
],
"env": {
"PROJECT_ROOT": "/path/to/your/project"
}
}
}
}
Step 3: Restart your MCP client
Step 4: Test it
Ask your agent: "Use gemini-researcher to analyze the project."
Tools
All tools return structured JSON (as MCP text content). Large responses are chunked (~10KB per chunk) and cached for 1 hour.
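As a concrete illustration of the chunking rule (with hypothetical sizes, since exact byte counts will vary), a 25 KB response split into ~10 KB chunks comes back in three parts, with the remainder retrieved via `fetch_chunk`:

```shell
# Hypothetical sizes, for illustration only: a 25 KB response
# split into ~10 KB chunks yields 3 chunks (indexes 0..2).
response_bytes=25000
chunk_bytes=10000
# Ceiling division: (a + b - 1) / b
chunks=$(( (response_bytes + chunk_bytes - 1) / chunk_bytes ))
echo "chunks needed: $chunks"   # chunks needed: 3
```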
| Tool | Purpose | When to use |
|---|---|---|
| `quick_query` | Fast analysis with flash model | Quick questions about specific files or small code sections |
| `deep_research` | In-depth analysis with pro model | Complex multi-file analysis, architecture reviews, security audits |
| `analyze_directory` | Map directory structure | Understanding unfamiliar codebases, generating project overviews |
| `validate_paths` | Pre-check file paths | Verify files exist before running expensive queries |
| `health_check` | Diagnostics | Troubleshooting server/Gemini CLI issues |
| `fetch_chunk` | Get chunked responses | Retrieve remaining parts of large responses |
Query tool fallback chains are family-aware:
- `quick_query`: flash -> flash_lite -> auto
- `deep_research`: pro -> flash -> flash_lite -> auto
- `analyze_directory`: flash -> flash_lite -> auto
When using API-key auth, fallback also handles model-unavailable/unsupported errors (not only quota/capacity errors).
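The fallback idea can be sketched as a simple try-in-order loop (a toy illustration, not the server's actual implementation; `run_model` here is a stand-in that pretends only `flash_lite` is available):

```shell
# Toy stand-in for a model invocation: succeeds only for flash_lite.
run_model() { [ "$1" = "flash_lite" ]; }

# Try each model in the family-aware order until one succeeds.
try_models() {
  for model in "$@"; do
    if run_model "$model"; then
      echo "used: $model"
      return 0
    fi
  done
  echo "all models failed" >&2
  return 1
}

try_models flash flash_lite auto   # → used: flash_lite
```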
Example workflows
Understanding a security vulnerability:
Agent: Use deep_research to analyze authentication flow across @src/auth and @src/middleware, focusing on security
Quick code explanation:
Agent: Use quick_query to explain the login flow in @src/auth.ts, be concise
Mapping an unfamiliar codebase:
Agent: Use analyze_directory on src/ with depth 3 to understand the project structure
Full tool schemas (for reference)
quick_query
{
"prompt": "Explain @src/auth.ts login flow",
"focus": "security",
"responseStyle": "concise"
}
deep_research
{
"prompt": "Analyze authentication across @src/auth and @src/middleware",
"focus": "architecture",
"citationMode": "paths_only"
}
analyze_directory
{
"path": "src",
"depth": 3,
"maxFiles": 200
}
validate_paths
{
"paths": ["src/auth.ts", "README.md"]
}
health_check
{
"includeDiagnostics": true
}
fetch_chunk
{
"cacheKey": "cache_abc123",
"chunkIndex": 2
}
Docker
A pre-built multi-platform Docker image is available on Docker Hub:
# Pull the image (works on Intel/AMD and Apple Silicon)
docker pull capybearista/gemini-researcher:latest
# Run the server (mount your project and provide API key)
docker run -i --rm \
-e GEMINI_API_KEY="your-api-key" \
-v /path/to/your/project:/workspace \
capybearista/gemini-researcher:latest
For MCP client configuration with Docker:
{
"mcpServers": {
"gemini-researcher": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "GEMINI_API_KEY",
"-v", "/path/to/your/project:/workspace",
"capybearista/gemini-researcher:latest"
],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
[!NOTE]
- The `-i` flag is required for stdio transport
- The container mounts your project to `/workspace` (the project root)
- Replace `/path/to/your/project` with your actual project path
- Replace `your-api-key` with your actual Gemini API key (required for Docker usage)
Platform guides
- Native Windows launch model and remediation: `docs/platforms/windows.md`
Troubleshooting (common issues)
- Remediation decision tree:
| Error / signal | Run this check first | Change this configuration next |
|---|---|---|
| `GEMINI_CLI_LAUNCH_FAILED` or `spawn ... ENOENT` | `gemini --help` and `npx --version` in the same terminal profile used by your MCP host | Prefer a Docker or WSL config. If staying native, point the host command to a stable shim/binary path and restart the host. |
| `health_check` warning: "resolves only through `cmd /c` fallback" | Run `health_check` with `includeDiagnostics: true` and inspect `diagnostics.resolution` | Update the host config to launch the reported `.cmd` shim directly instead of relying on the `cmd /c` fallback. |
| MCP host cannot launch server via `npx` | `npx --version` | Change the host server command from `npx gemini-researcher` to the installed binary path (or the Docker transport). |
| `ADMIN_POLICY_UNSUPPORTED` / output format unsupported | `gemini --help` and confirm `--admin-policy`, `json`, `stream-json` | Upgrade Gemini CLI to v0.36.0+ |
| `AUTH_MISSING` / `AUTH_UNKNOWN` | `gemini` interactive login and rerun `health_check` | Authenticate Gemini CLI or set `GEMINI_API_KEY` |
- `GEMINI_CLI_NOT_FOUND`: Install Gemini CLI: `npm install -g @google/gemini-cli`
- `GEMINI_CLI_LAUNCH_FAILED`: This is a launch-path issue, not an auth/capability issue. On Windows, command shims can fail in shell-less spawn contexts. Validate `gemini --help` and `npx --version` interactively, then prefer Docker or WSL if the host launch mode is strict.
- `GEMINI_RESEARCHER_GEMINI_COMMAND`: Override the Gemini command name/path used by the server (for wrappers or pinned binary locations).
- `GEMINI_RESEARCHER_GEMINI_ARGS_PREFIX`: Prefix extra Gemini args for every invocation (for example `--config <file>`). `health_check` diagnostics redact sensitive token-like values in the configured args prefix output.
- `AUTH_MISSING`: Run `gemini` and authenticate, or set `GEMINI_API_KEY`
- `AUTH_UNKNOWN`: Auth could not be confirmed (often a network/CLI probe failure). If launch errors are present, fix the launch path first; otherwise verify `gemini` works interactively, then retry.
- `ADMIN_POLICY_MISSING`: Reinstall the package or verify `policies/read-only-enforcement.toml` exists in the installed package.
- `ADMIN_POLICY_UNSUPPORTED`: Upgrade Gemini CLI to v0.36.0+ (`gemini --help` should include `--admin-policy`).
- Capability errors (`ADMIN_POLICY_UNSUPPORTED`, output format unsupported) should be interpreted only after a successful `gemini --help` probe. If the probe launch fails, treat it as a launch-path failure first.
- `GEMINI_RESEARCHER_ENFORCE_ADMIN_POLICY=0`: Disables strict startup policy checks. This reduces safety guarantees.
- `.gitignore` blocking files: Gemini respects `.gitignore` by default; toggle `fileFiltering.respectGitIgnore` in `gemini /settings` if you intentionally want ignored files included (note: this changes Gemini behavior globally)
- `PATH_NOT_ALLOWED`: All `@path` references must resolve inside the configured project root (`process.cwd()` by default). Use `validate_paths` to pre-check paths.
- `QUOTA_EXCEEDED`: Server retries with fallback models; if all options are exhausted, reduce scope (use `quick_query`) or wait for quota reset.
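The containment rule behind `PATH_NOT_ALLOWED` can be sketched as a prefix check (a simplification; the actual server presumably also resolves symlinks and `..` segments before comparing):

```shell
# Allow a candidate path only if it equals the project root
# or lives inside it (simplified prefix check, illustrative paths).
is_allowed() {
  root="$1"; candidate="$2"
  case "$candidate" in
    "$root"|"$root"/*) echo "allowed" ;;
    *) echo "PATH_NOT_ALLOWED" ;;
  esac
}

is_allowed /home/user/project /home/user/project/src/auth.ts   # allowed
is_allowed /home/user/project /etc/passwd                      # PATH_NOT_ALLOWED
```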
Contributing
Read the Contributing Guide to get started.
License
Made with ⚡ for the AI-assisted dev community
