DeepCode
"DeepCode: Open Agentic Coding (Paper2Code & Text2Web & Text2Backend)"
DeepCode: Open Agentic Coding
Advancing Code Generation with Multi-Agent Systems
Interface Showcase

CLI Interface — Terminal-Based Development
Advanced terminal experience: fast command-line workflow, developer-friendly interface, real-time progress tracking. A professional terminal interface for advanced users and CI/CD integration.

Web Interface — Visual Interactive Experience
Modern web dashboard: intuitive drag-and-drop, responsive design, visual progress tracking. A polished web interface with a streamlined workflow for all skill levels.
Introduction Video
Watch our complete introduction to see how DeepCode transforms research papers and natural language into production-ready code.
"Where AI Agents Transform Ideas into Production-Ready Code"
Table of Contents
- News
- Key Features
- Architecture
- Experimental Results
- Quick Start
- nanobot Integration (Feishu Chatbot)
- Examples
- Star History
- License
News

[2026-05-01] OpenRouter model selector, session cleanup & workflow UX hardening
- OpenRouter model catalog in Settings. The new UI can now fetch OpenRouter model metadata from `https://openrouter.ai/api/v1/models`, cache it locally, and expose searchable model selectors for the Default, Planning, and Implementation phases. Use exact OpenRouter model ids such as `z-ai/glm-5.1` without editing JSON by hand.
- Runtime model switching. Saving model choices from Settings updates `deepcode_config.json` and reloads the in-process LLM runtime, so newly started workflows pick up the selected provider/model combination immediately.
- Session deletion now performs safe cascade cleanup. Deleting a session from the UI removes its persistent session store and the associated `deepcode_lab/tasks/<task_id>/` workspaces, while preserving shared `uploads/` source files. Deleting sessions with `pending`, `running`, or `waiting_for_input` tasks is blocked with a clear `409 Conflict`.
- More accurate Paper2Code progress. The frontend now shows backend stage messages and avoids marking intermediate phases as fully "Done" while long LLM work is still running.
- Workflow robustness fixes. Uploads now reject Git LFS pointer files, cancelled tasks stop backend work promptly, stale browser session ids recover cleanly, planner retries fall back to a minimal valid plan when a model defers or tool-calls incorrectly, and document segmentation skips an extra validation LLM call that could stall progress.
[2026-04-28] Persistent sessions & dual-layer logging
- Sessions are now persistent. Every CLI / UI run is automatically attached to a session under `~/.deepcode/sessions/<id>/` (override with `DEEPCODE_SESSIONS_DIR`). Sessions are JSONL, so `tail -f session.jsonl` works out of the box. List / inspect / branch them with `python cli/main_cli.py session list|show <id>|new|resume <id>|delete <id>`, or via `GET /api/v1/sessions` from the backend.
- Resume a previous run by passing `--session <id>` to the CLI or `session_id` to `POST /api/v1/workflows/paper-to-code` (or `chat-planning`). Backend restarts no longer drop task history; running tasks left over from a crash are surfaced as `interrupted`.
- CLI session UX. The interactive CLI now supports Cursor-style slash commands: `/resume` opens a numbered session picker, `/new [title]` creates and switches sessions, `/session` shows the active session, and `/help` lists commands. You can also paste inline inputs directly at the menu prompt with `@/path/to/paper.pdf`, `@"C:\path with spaces\paper.pdf"`, or `@https://...`.
- Two-layer structured logging. A global rotating JSONL lives at `logs/server-YYYYMMDD.jsonl`; per-task logs live at `deepcode_lab/tasks/<task_id>/logs/{system,llm,mcp}.jsonl`. Every `loguru.logger` call automatically picks up the active `task_id` via a contextvar, so business code did not have to change. Configure via the new `logger.{globalFile,taskFile,llm}` block in `deepcode_config.json`.
- WebSocket log streaming. Tail one task with `/ws/tasks/{task_id}/logs?channel=llm`, or merge every task in a session via `/ws/sessions/{session_id}/logs`. The legacy `/ws/logs/{session_id}` endpoint that silently ignored its parameter has been removed.
- Dead code removed. `utils/simple_llm_logger.py`, `utils/dialogue_logger.py`, and the in-memory `services/session_service.py` implementation are gone (the latter is now a thin re-export of `core.sessions.SessionStore`).
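Because sessions and per-task logs are plain JSONL (one JSON object per line), they are easy to post-process outside DeepCode. The sketch below reads a session's event log with only the standard library; the `~/.deepcode/sessions/<id>/session.jsonl` layout follows the defaults described above, while the helper's name and the event fields are illustrative assumptions:

```python
import json
from pathlib import Path

def read_session_events(session_dir: Path) -> list[dict]:
    """Parse a session's JSONL event log: one JSON object per line."""
    events = []
    jsonl = session_dir / "session.jsonl"
    if not jsonl.exists():
        return events
    for line in jsonl.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line:  # skip blank lines, e.g. a trailing newline
            events.append(json.loads(line))
    return events
```

The same line-by-line parsing applies to the per-task logs under `deepcode_lab/tasks/<task_id>/logs/`.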
[2026-04-17] Stability, Windows compatibility & secrets hygiene update
- Code Implementation no longer crashes with `name 'LoopDetector' is not defined`: added the missing `LoopDetector` / `ProgressTracker` imports in both `workflows/code_implementation_workflow.py` and `workflows/code_implementation_workflow_index.py`.
- Windows: `mkdir -p` / `touch` / `rm -rf` / `cp -r` / `mv` now work natively. `tools/command_executor.py` translates these common Unix file-tree commands via `pathlib` / `shutil` on every platform, eliminating the bug where `cmd.exe` would create a literal `-p` directory and stall the workflow.
- Removed Brave Search end-to-end. All Python code, MCP server config, Dockerfile pre-installs, nanobot integration, and docs are scrubbed of `brave` / `BRAVE_API_KEY` / `WebSearchTool`. Web fetching now relies entirely on the built-in `fetch` MCP server.
- OpenAI-compatible providers documented. A new Quick Start → Configuration snippet shows how to point the `openai` / `openrouter` blocks at Poe (`https://api.poe.com/v1`), OpenRouter, or Alibaba DashScope, plus how to set `agents.defaults.model` / `agents.planning.model` / `agents.implementation.model` (e.g. `openai/gpt-5.4`).
- Secrets hygiene. All YAML config has been collapsed into a single `deepcode_config.json` (nanobot-style), and `.gitignore` now ignores it alongside `secrets.json`, `*credentials*.json`, `.env`, and `.env.*` (with `*.env.example` whitelisted).
- Launch table fixed. `deepcode` (no flags) actually starts Docker mode; the README now shows `deepcode --local` for the no-Docker path and adds explicit Troubleshooting rows for "Docker is installed but not running", Windows GBK encoding, and the issues fixed above.
- Misc: auto-create the `logs/` directory so JSONL logging never fails on a fresh checkout, replace bare `except:` with `except Exception:` in `agent_orchestration_engine.py` (Ruff E722), and `command_executor` MCP tool descriptions now embed the host OS so the LLM picks compatible commands.
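The Windows command translation above can be sketched in a few lines. This is an illustrative reduction of what `tools/command_executor.py` is described as doing, not its actual code; the function name and the subset of commands handled are assumptions:

```python
import shutil
from pathlib import Path

def run_file_command(cmd: list[str]) -> bool:
    """Translate common Unix file-tree commands into pathlib/shutil calls.

    Returns True if the command was handled natively with no shell involved,
    so cmd.exe never sees a flag like `-p` and cannot create a literal
    `-p` directory.
    """
    if cmd[:2] == ["mkdir", "-p"]:
        for d in cmd[2:]:
            Path(d).mkdir(parents=True, exist_ok=True)
        return True
    if cmd[:1] == ["touch"]:
        for f in cmd[1:]:
            Path(f).touch()
        return True
    if cmd[:2] == ["rm", "-rf"]:
        for target in cmd[2:]:
            p = Path(target)
            if p.is_dir():
                shutil.rmtree(p, ignore_errors=True)
            elif p.exists():
                p.unlink()
        return True
    return False  # anything else falls back to the real shell
```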
[2026-02] nanobot + DeepCode. Just chat naturally with openclaw/nanobot to handle your coding tasks:
- nanobot now powers your agentic coding & engineering!
- Step away from your laptop and make vibe coding even more vibe: code directly from your phone!
- One-command deploy: `./nanobot/run_nanobot.sh` (see the Setup Guide below).
[2026-02] New Web UI Experience Upgrade!
- User-in-Loop Interaction: real-time user interaction during workflows; the AI asks clarifying questions directly in the chat.
- Inline Interaction Design: interaction prompts appear naturally within the chat flow for a seamless experience.
- One-Click Launch: simply run `deepcode` to start the new UI (cross-platform: Windows / macOS / Linux).
- Improved Process Management: enhanced service start/stop mechanism with automatic port cleanup.
- WebSocket Real-time Communication: fixed message-loss issues, ensuring proper interaction-state synchronization.

DeepCode New Web UI: a modern React-based interface.
[2025-10-28] DeepCode Achieves SOTA on PaperBench!
DeepCode sets new benchmarks on OpenAI's PaperBench Code-Dev across all categories:
- Surpasses Human Experts: 75.9% (DeepCode) vs. 72.4% for top Machine Learning PhDs (+3.5 points).
- Outperforms SOTA Commercial Code Agents: 84.8% (DeepCode) vs. 58.7% for the leading commercial code agents (+26.1 points; Cursor, Claude Code, and Codex).
- Advances Scientific Coding: 73.5% (DeepCode) vs. 51.1% for PaperCoder (+22.4 points).
- Beats LLM Agents: 73.5% (DeepCode) vs. 43.3% for the best LLM frameworks (+30.2 points).
Key Features

- Paper2Code — Automated Implementation of Complex Algorithms. Effortlessly converts complex algorithms from research papers into high-quality, production-ready code, accelerating algorithm reproduction.
- Text2Web — Automated Front-End Web Development. Translates plain textual descriptions into fully functional, visually appealing front-end web code for rapid interface creation.
- Text2Backend — Automated Back-End Development. Generates efficient, scalable, and feature-rich back-end code from simple text inputs, streamlining server-side development.
Experimental Results
We evaluate DeepCode on the PaperBench benchmark (released by OpenAI), a rigorous testbed requiring AI agents to independently reproduce 20 ICML 2024 papers from scratch. The benchmark comprises 8,316 gradable components assessed using SimpleJudge with hierarchical weighting.
Our experiments compare DeepCode against four baseline categories: (1) Human Experts, (2) State-of-the-Art Commercial Code Agents, (3) Scientific Code Agents, and (4) LLM-Based Agents.
1. Human Expert Performance (Top Machine Learning PhD)
DeepCode: 75.9% vs. Top Machine Learning PhD: 72.4% (+3.5 points)
DeepCode achieves 75.9% on the 3-paper human evaluation subset, surpassing the best-of-3 human expert baseline (72.4%) by +3.5 percentage points. This demonstrates that our framework not only matches but exceeds expert-level code reproduction capabilities, representing a significant milestone in autonomous scientific software engineering.
2. State-of-the-Art Commercial Code Agents
DeepCode: 84.8% vs. Best Commercial Agent: 58.7% (+26.1 points)
On the 5-paper subset, DeepCode substantially outperforms leading commercial coding tools:
- Cursor: 58.4%
- Claude Code: 58.7%
- Codex: 40.0%
- DeepCode: 84.8%
This represents a +26.1 percentage-point improvement over the leading commercial code agent. All commercial agents use Claude Sonnet 4.5 or GPT-5 Codex-high, highlighting that DeepCode's superior architecture, rather than base-model capability, drives this performance gap.
3. Scientific Code Agents
DeepCode: 73.5% vs. PaperCoder: 51.1% (+22.4 points)
Compared to PaperCoder (51.1%), the state-of-the-art scientific code reproduction framework, DeepCode achieves 73.5%, an improvement of +22.4 percentage points. This substantial margin validates our multi-module architecture, which combines planning, hierarchical task decomposition, code generation, and iterative debugging, over simpler pipeline-based approaches.
4. LLM-Based Agents
DeepCode: 73.5% vs. Best LLM Agent: 43.3% (+30.2 points)
DeepCode significantly outperforms all tested LLM agents:
- Claude 3.5 Sonnet + IterativeAgent: 27.5%
- o1 + IterativeAgent (36 hours): 42.4%
- o1 BasicAgent: 43.3%
- DeepCode: 73.5%
The +30.2 percentage-point improvement over the best-performing LLM agent demonstrates that sophisticated agent scaffolding, rather than extended inference time or larger models, is critical for complex code reproduction tasks.
Autonomous Self-Orchestrating Multi-Agent Architecture
The Challenges:
- Implementation Complexity: converting academic papers and complex algorithms into working code requires significant technical effort and domain expertise.
- Research Bottleneck: researchers spend valuable time implementing algorithms instead of focusing on their core research and discovery work.
- Development Delays: product teams experience long wait times between concept and testable prototypes, slowing down innovation cycles.
- Repetitive Coding: developers repeatedly implement similar patterns and functionality instead of building on existing solutions.
DeepCode addresses these workflow inefficiencies by providing reliable automation for common development tasks, streamlining your development workflow from concept to code.
```mermaid
flowchart LR
    A["Research Papers<br/>Text Prompts<br/>URLs & Documents<br/>Files: PDF, DOC, PPTX, TXT, HTML"] --> B["DeepCode<br/>Multi-Agent Engine"]
    B --> C["Algorithm Implementation<br/>Frontend Development<br/>Backend Development"]
    style A fill:#ff6b6b,stroke:#c0392b,stroke-width:2px,color:#000
    style B fill:#00d4ff,stroke:#0984e3,stroke-width:3px,color:#000
    style C fill:#00b894,stroke:#00a085,stroke-width:2px,color:#000
```
Architecture

System Overview
DeepCode is an AI-powered development platform that automates code generation and implementation tasks. Our multi-agent system handles the complexity of translating requirements into functional, well-structured code, allowing you to focus on innovation rather than implementation details.

Technical Capabilities:

Research-to-Production Pipeline
Multi-modal document analysis engine that extracts algorithmic logic and mathematical models from academic papers. Generates optimized implementations with proper data structures while preserving computational complexity characteristics.

Natural Language Code Synthesis
Context-aware code generation using fine-tuned language models trained on curated code repositories. Maintains architectural consistency across modules while supporting multiple programming languages and frameworks.

Automated Prototyping Engine
Intelligent scaffolding system generating complete application structures including database schemas, API endpoints, and frontend components. Uses dependency analysis to ensure scalable architecture from initial generation.

Quality Assurance Automation
Integrated static analysis with automated unit-test generation and documentation synthesis. Employs AST analysis for code correctness and property-based testing for comprehensive coverage.

CodeRAG Integration System
Advanced retrieval-augmented generation combining semantic vector embeddings with graph-based dependency analysis. Automatically discovers optimal libraries and implementation patterns from a large-scale code corpus.

Core Techniques
- Intelligent Orchestration Agent: central decision-making system that coordinates workflow phases and analyzes requirements. Employs dynamic planning algorithms to adapt execution strategies in real time based on evolving project complexity, and selects the optimal processing strategy for each implementation step.
- Efficient Memory Mechanism: advanced context-engineering system that manages large-scale code contexts efficiently. Implements hierarchical memory structures with intelligent compression for handling complex codebases, enabling instant retrieval of implementation patterns and maintaining semantic coherence across extended development sessions.
- Advanced CodeRAG System: global code-comprehension engine that analyzes complex interdependencies across repositories. Performs cross-codebase relationship mapping to understand architectural patterns holistically, leveraging dependency graphs and semantic analysis to provide globally aware code recommendations during implementation.
Multi-Agent Architecture of DeepCode:
- Central Orchestrating Agent: orchestrates the entire workflow and makes strategic decisions. Coordinates specialized agents based on input-complexity analysis and implements dynamic task planning and resource allocation.
- Intent Understanding Agent: performs deep semantic analysis of user requirements to decode complex intentions. Extracts functional specifications and technical constraints through advanced NLP processing, transforming ambiguous human descriptions into precise, actionable development specifications with structured task decomposition.
- Document Parsing Agent: processes complex technical documents and research papers with advanced parsing capabilities. Extracts algorithms and methodologies using document-understanding models, converting academic concepts into practical implementation specifications.
- Code Planning Agent: performs architectural design and technology-stack optimization. Builds adaptive development roadmaps through dynamic planning, enforces coding standards, and generates modular structures through automated design-pattern selection.
- Code Reference Mining Agent: discovers relevant repositories and frameworks through intelligent search. Analyzes codebases for compatibility and integration potential, and provides recommendations based on similarity metrics and automated dependency analysis.
- Code Indexing Agent: builds comprehensive knowledge graphs of discovered codebases. Maintains semantic relationships between code components, enabling intelligent retrieval and cross-referencing.
- Code Generation Agent: synthesizes the gathered information into executable code. Creates functional interfaces, integrates discovered components, and generates comprehensive test suites and documentation for reproducibility.
Implementation Tools Matrix

Powered by MCP (Model Context Protocol)
DeepCode leverages the Model Context Protocol (MCP) standard to seamlessly integrate with various tools and services. This standardized approach ensures reliable communication between AI agents and external systems, enabling powerful automation capabilities.

MCP Servers & Tools
| MCP Server | Primary Function | Purpose & Capabilities |
|---|---|---|
| `filesystem` | File System Operations | Local file and directory management, read/write operations |
| `fetch` | Web Content Retrieval | Fetch and extract content from URLs and web resources |
| `github-downloader` | Repository Management | Clone and download GitHub repositories for analysis |
| `file-downloader` | Document Processing | Download and convert files (PDF, DOCX, etc.) to Markdown |
| `command-executor` | System Commands | Execute bash/shell commands for environment management |
| `code-implementation` | Code Generation Hub | Comprehensive code reproduction with execution and testing |
| `code-reference-indexer` | Smart Code Search | Intelligent indexing and search of code repositories |
| `document-segmentation` | Smart Document Analysis | Intelligent document segmentation for large papers and technical documents |
Legacy Tool Functions (for reference)
| Function | Usage Context |
|---|---|
| `read_code_mem` | Efficient code-context retrieval from memory |
| `write_file` | Direct file-content generation and modification |
| `execute_python` | Python code testing and validation |
| `get_file_structure` | Project-structure analysis and organization |
| `set_workspace` | Dynamic workspace and environment configuration |
| `get_operation_history` | Process monitoring and operation tracking |
Multi-Interface Framework
RESTful API with CLI and web frontends featuring real-time code streaming, interactive debugging, and an extensible plugin architecture for CI/CD integration.

Multi-Agent Intelligent Pipeline:
Intelligence Processing Flow
1. Input Layer — research papers, natural language, URLs, requirements
2. Central Orchestration — strategic decision making, workflow coordination, agent management
3. Text Analysis (requirement processing) and Document Analysis (paper & spec processing)
4. Reproduction Planning — deep paper analysis, code-requirements parsing, reproduction-strategy development
5. Reference Analysis (repository discovery) and Code Indexing (knowledge-graph building)
6. Code Implementation — implementation generation, testing, documentation
7. Output Delivery — complete codebase, test suite, documentation, deployment ready
Process Intelligence Features
- Adaptive Flow: dynamic agent selection based on input complexity
- Smart Coordination: intelligent task distribution and parallel processing
- Context Awareness: deep understanding through CodeRAG integration
- Quality Assurance: automated testing and validation throughout
Quick Start

Prerequisites
Before installing DeepCode, ensure you have the following:
| Requirement | Version | Purpose |
|---|---|---|
| Python | 3.9+ | Core runtime |
| Node.js | 18+ | New UI frontend |
| npm | 8+ | Package management |
```bash
# Check your versions
python --version  # Should be 3.9+
node --version    # Should be 18+
npm --version     # Should be 8+
```
Install Node.js (if not installed)
```bash
# macOS (using Homebrew)
brew install node

# Ubuntu/Debian
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Windows
# Download from https://nodejs.org/
```
Step 1: Installation
Choose one of the following installation methods:

Direct Installation (Recommended)
```bash
# Install DeepCode package directly
pip install deepcode-hku

# Download the unified configuration template
curl -O https://raw.githubusercontent.com/HKUDS/DeepCode/main/deepcode_config.json.example
cp deepcode_config.json.example deepcode_config.json
```
Development Installation (From Source)

Using UV (Recommended for Development)
```bash
git clone https://github.com/HKUDS/DeepCode.git
cd DeepCode/
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv --python=3.13
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -r requirements.txt

# Install frontend dependencies
npm install --prefix new_ui/frontend
```

Using Traditional pip
```bash
git clone https://github.com/HKUDS/DeepCode.git
cd DeepCode/
pip install -r requirements.txt

# Install frontend dependencies
npm install --prefix new_ui/frontend
```
Editable install (so `deepcode` always runs this checkout)
If you want the global `deepcode` command to launch the source tree you are hacking on, install the project in editable mode after the steps above:
```bash
pip install -e .
```
This registers a `deepcode-hku` package (current version 1.2.0) and exposes the `deepcode` CLI entry point. Any local code change is picked up immediately on the next launch, with no reinstall needed.
If you maintain multiple DeepCode checkouts, only one of them can own the `deepcode` command at a time (the most recent `pip install -e .` wins). Reinstall in the checkout you currently want to be active.
Step 2: Configuration
The following configuration applies to all installation methods (pip, UV, source, and Docker). Everything lives in one file: `deepcode_config.json` (single source of truth, nanobot-style).
API Keys (required)
Edit `deepcode_config.json` and fill in at least one provider key. Inline strings work, and `${ENV_VAR}` references are resolved at load time.
```json
{
  "providers": {
    "openai": { "apiKey": "your_openai_api_key" },
    "anthropic": { "apiKey": "${ANTHROPIC_API_KEY}" },
    "gemini": { "apiKey": "" }
  }
}
```
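The `${ENV_VAR}` resolution described above can be sketched in a few lines of Python. This is an illustrative sketch of the mechanism, not DeepCode's actual loader; the function name and the fallback to an empty string for unset variables are assumptions:

```python
import os
import re

_ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env_refs(value):
    """Recursively replace ${ENV_VAR} references in a parsed JSON config."""
    if isinstance(value, str):
        # Substitute each ${NAME} with the environment value (or "" if unset)
        return _ENV_REF.sub(lambda m: os.environ.get(m.group(1), ""), value)
    if isinstance(value, dict):
        return {k: resolve_env_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_refs(v) for v in value]
    return value  # numbers, booleans, null pass through unchanged
```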
Using OpenAI-compatible providers (OpenRouter / Poe / DashScope / etc.)
Any OpenAI-compatible endpoint is supported by overriding `apiBase` on the matching provider entry. Then set the model name on the `agents` block (using `provider/model` slugs):
```json
{
  "agents": {
    "defaults": {
      "provider": "openrouter",
      "model": "z-ai/glm-5.1"
    },
    "planning": { "provider": "openrouter", "model": "z-ai/glm-5.1" },
    "implementation": { "provider": "openrouter", "model": "z-ai/glm-5.1" }
  },
  "providers": {
    "openai": { "apiKey": "your_openai_api_key" },
    "openrouter": { "apiKey": "your_openrouter_key", "apiBase": "https://openrouter.ai/api/v1" }
  }
}
```
OpenRouter model ids must use the exact id returned by OpenRouter, for example `z-ai/glm-5.1`, `anthropic/claude-sonnet-4.5`, or `google/gemini-2.5-pro`. In the new UI, open Settings → OpenRouter Models to search the live OpenRouter catalog and update the Default, Planning, and Implementation models without editing this file manually. Saving from the UI reloads the runtime for newly started workflows.
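You can replicate the model search outside the UI by fetching the catalog from `https://openrouter.ai/api/v1/models` and filtering the `id` fields. A minimal sketch, assuming the catalog's `{"data": [...]}` response shape (the helper name is illustrative):

```python
def search_model_ids(catalog: dict, query: str) -> list[str]:
    """Return OpenRouter model ids whose id contains the query (case-insensitive)."""
    q = query.lower()
    return sorted(m["id"] for m in catalog.get("data", []) if q in m["id"].lower())

# To search the live catalog (requires network access):
#   import json
#   from urllib.request import urlopen
#   catalog = json.load(urlopen("https://openrouter.ai/api/v1/models"))
#   print(search_model_ids(catalog, "glm"))
```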
Never commit `deepcode_config.json`; it is already in `.gitignore`.
LLM Provider (optional)
The provider is inferred from the model slug (`openai/...`, `anthropic/...`, `gemini/...`, etc.). To force a specific backend, set `agents.defaults.provider`:
```json
{
  "agents": {
    "defaults": { "provider": "openai" }
  }
}
```
Document Segmentation (optional)
```json
{
  "documentSegmentation": {
    "enabled": true,
    "sizeThresholdChars": 50000
  }
}
```
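The size-based toggle amounts to a simple length check, with a fallback to traditional single-pass processing for smaller documents. A sketch assuming only the two keys shown above (the function name and defaults are illustrative):

```python
def should_segment(document_text: str, config: dict) -> bool:
    """Decide whether a document goes through segmentation or the
    traditional single-pass pipeline, based on the config block above."""
    seg = config.get("documentSegmentation", {})
    if not seg.get("enabled", False):
        return False  # feature toggled off: always single-pass
    return len(document_text) > seg.get("sizeThresholdChars", 50000)
```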
Windows Users: Additional MCP Server Configuration
On Windows you may need to configure MCP servers manually in `deepcode_config.json` (`tools.mcpServers`):
```bash
# 1. Install MCP servers globally
npm i -g @modelcontextprotocol/server-filesystem

# 2. Find your global node_modules path
npm -g root
```
```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "type": "stdio",
        "command": "node",
        "args": ["C:/Program Files/nodejs/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js", "."]
      }
    }
  }
}
```
Replace the path with the actual global `node_modules` path from step 2.
Web Search Configuration
DeepCode performs web content retrieval through the built-in `fetch` MCP server (no API key required) and reads local files via `filesystem`. The auxiliary search server defaults to `filesystem`:
```json
{
  "tools": { "defaultSearchServer": "filesystem" }
}
```
Tip: to plug in another search backend, add it under `tools.mcpServers` in `deepcode_config.json` and set `tools.defaultSearchServer` to its name.
Step 3: Launch Application
Choose your preferred launch method:

| Docker (Recommended) | Local — no Docker | Other Methods |
|---|---|---|
| `deepcode` (no flags). No Python/Node needed; everything runs in the container. | `deepcode --local`. Runs the new UI directly on the host (frontend + backend, no container). Features: User-in-Loop, real-time progress, inline chat. Use this when Docker isn't available or you need to iterate on local source changes. | |
CLI sessions & inline inputs
The CLI is session-aware by default. A run without `--session` creates a new persistent session under `~/.deepcode/sessions/<id>/`; pass `--session <id>` to attach a new task to an existing session.
```bash
# Session management from the shell
python cli/main_cli.py session list
python cli/main_cli.py session show <session_id>
python cli/main_cli.py session resume <session_id>   # show history, then enter interactive mode
python cli/main_cli.py --session <session_id> --file paper.pdf
```
Inside `python cli/main_cli.py`, type these at the main menu prompt:
```
/resume               # pick a previous session from a numbered list
/new My experiment    # create and switch to a fresh session
/session              # show the currently active session
@/absolute/path.pdf   # process a file without opening the file picker
@"C:\path with spaces\paper.pdf"
@https://arxiv.org/pdf/....
```
Every task created from these flows inherits the active `session_id`; per-task logs are written to `deepcode_lab/tasks/<task>/logs/`.
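Parsing those `@` inline inputs is straightforward; the sketch below classifies a prompt token as a file path or URL, handling the quoted Windows form. It is illustrative only, and the real CLI parser may behave differently:

```python
def parse_inline_input(token: str):
    """Classify a menu-prompt token of the form @<path>, @"quoted path", or @<url>.

    Returns (kind, value) with kind "url" or "file", or None if the token
    is not an @ inline input at all.
    """
    if not token.startswith("@"):
        return None
    value = token[1:]
    # @"C:\path with spaces\paper.pdf" -> strip the surrounding quotes
    if len(value) >= 2 and value.startswith('"') and value.endswith('"'):
        value = value[1:-1]
    if value.startswith(("http://", "https://")):
        return ("url", value)
    return ("file", value)
```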
In the web UI, use the Sessions menu in the header to resume or delete a session. Deleting a session removes its JSONL session record and the associated task workspaces under `deepcode_lab/tasks/`, but keeps original files in `uploads/`. If the session still has `pending`, `running`, or `waiting_for_input` tasks, the backend rejects the deletion until those tasks are cancelled or completed.
Docker Management Commands
```bash
./deepcode_docker/run_docker.sh stop      # Stop
./deepcode_docker/run_docker.sh restart   # Restart (no rebuild needed for config changes)
./deepcode_docker/run_docker.sh --build   # Force rebuild
./deepcode_docker/run_docker.sh logs      # Real-time logs
./deepcode_docker/run_docker.sh status    # Health check
./deepcode_docker/run_docker.sh clean     # Remove containers & images
```
Or with Docker Compose directly:
```bash
docker compose -f deepcode_docker/docker-compose.yml up --build   # Build & start
docker compose -f deepcode_docker/docker-compose.yml down         # Stop
docker compose -f deepcode_docker/docker-compose.yml logs -f      # Logs
```
Tip: config files are mounted as volumes, so edit and restart with no rebuild needed. Windows users: run `docker compose` commands directly if the shell scripts aren't available.
Step 4: Generate Code
1. Input — upload a research paper, type requirements, or paste a URL
2. Processing — the multi-agent system analyzes, plans, and generates
3. Output — receive production-ready code with tests and documentation
Troubleshooting

Common Issues & Solutions
| Problem | Cause | Fix |
|---|---|---|
| Docker build fails with `tsc: not found` | Corrupted build cache | `docker builder prune -f`, then rebuild with `--no-cache` |
| `error during connect` / cannot find the file / Docker is installed but not running | Docker Desktop not running | Either start Docker Desktop, or skip Docker entirely with `deepcode --local` |
| Frontend blank page | Corrupted `node_modules` | `cd new_ui/frontend && rm -rf node_modules && npm install` |
| `ERR_CONNECTION_REFUSED` | Wrong port / backend not running | Docker: http://localhost:8000. Local (`--local`): frontend http://localhost:5173, backend http://localhost:8000 |
| `npm install` fails with "Could not read package.json" | Wrong directory | Use `npm install --prefix new_ui/frontend` |
| Windows: MCP servers not working | Need absolute paths | See Windows MCP Configuration above |
| Windows: `UnicodeEncodeError: 'gbk' codec can't encode...` on launch | Default GBK console can't render emoji in the startup banner | Set UTF-8 first: `set PYTHONIOENCODING=utf-8 && set PYTHONUTF8=1` (cmd) or `$env:PYTHONIOENCODING="utf-8"; $env:PYTHONUTF8="1"` (PowerShell) |
| Windows: code-implementation stage hangs / produces a `-p` directory | LLM emitted `mkdir -p ...` and `cmd.exe` treated `-p` as a folder name | Already fixed in `tools/command_executor.py`: common Unix commands (`mkdir -p`, `touch`, `rm -rf`, `cp -r`, `mv`) are now executed natively via `pathlib`/`shutil`, no shell needed |
| `name 'LoopDetector' is not defined` during code implementation | Missing import in workflow modules | Already fixed: `LoopDetector` and `ProgressTracker` are now imported from `utils.loop_detector` in both `workflows/code_implementation_workflow.py` and `workflows/code_implementation_workflow_index.py` |
nanobot Integration (Feishu Chatbot)
Chat with DeepCode from Feishu, powered by nanobot.
```mermaid
flowchart LR
    subgraph Clients["Chat Platforms"]
        direction TB
        F["<b>Feishu</b><br/>WebSocket"]
        T["<b>Telegram</b><br/>Polling"]
        D["<b>Discord</b><br/>Gateway"]
    end
    subgraph Gateway["nanobot Gateway"]
        direction TB
        A["Agent Loop<br/><i>LLM + Tool Calls</i>"]
    end
    subgraph Engine["DeepCode Engine"]
        direction TB
        P2C["Paper to Code"]
        C2C["Chat to Code"]
        TRK["Task Tracking"]
    end
    F & T & D <-->|"messages"| A
    A -->|"HTTP API"| P2C & C2C & TRK
    A -.->|"LLM API"| LLM["OpenRouter"]
    style Clients fill:#1a1a2e,stroke:#00d9ff,color:#fff
    style Gateway fill:#1a1a2e,stroke:#4ecdc4,color:#fff
    style Engine fill:#1a1a2e,stroke:#ff6b6b,color:#fff
    style LLM fill:#1a1a2e,stroke:#9b59b6,color:#fff
```
Both services run inside the same Docker Compose network. Prerequisites: Docker Desktop + OpenRouter API Key (get one) + Feishu App.
Step 1 · Create a Feishu Bot
Feishu / Lark (recommended: WebSocket, no public IP needed)
- Go to Feishu Open Platform → Create Custom App
- Enable Bot capability in App Features
- Add permissions: `im:message` · `im:message:send_as_bot`
- Event Subscription → select Long Connection → add `im.message.receive_v1`
- Note your App ID (`cli_xxx`) and App Secret, then publish the app

Note: Feishu requires an active WebSocket connection before you can save "Long Connection" mode. Start nanobot first (Step 3), then come back to configure Event Subscription.
Step 2 · Configure
```bash
cp nanobot_config.json.example nanobot_config.json
```
Edit `nanobot_config.json` and fill in the 3 required fields:
```jsonc
{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",    // Feishu App ID
      "appSecret": "xxx",    // Feishu App Secret
      "allowFrom": []        // [] = allow all users
    }
  },
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"   // OpenRouter API Key
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-sonnet-4-20250514"
    }
  }
}
```
Model choice: any model on openrouter.ai/models works. Use `anthropic/claude-sonnet-4-20250514` for English or `minimax/minimax-m2.1` for Chinese.
Step 3 · Launch
Make sure `deepcode_config.json` has your DeepCode API keys (see Configuration), then:
```bash
./nanobot/run_nanobot.sh -d   # Start both DeepCode + nanobot in the background
```
The script checks Docker, validates configs, builds images (first run only), and starts both containers.
- DeepCode API: http://localhost:8000
- Nanobot: http://localhost:18790
Now open Feishu, find your bot, and send a message!
Management Commands
```bash
./nanobot/run_nanobot.sh            # Start (foreground)
./nanobot/run_nanobot.sh -d         # Start (background)
./nanobot/run_nanobot.sh stop       # Stop all services
./nanobot/run_nanobot.sh restart    # Restart (config changes take effect immediately)
./nanobot/run_nanobot.sh --build    # Force rebuild Docker images
./nanobot/run_nanobot.sh logs       # View real-time logs
./nanobot/run_nanobot.sh status     # Health check
./nanobot/run_nanobot.sh clean      # Remove containers & images
```
Troubleshooting
| Problem | Fix |
|---|---|
| Feishu bot doesn't respond | Check logs (./nanobot/run_nanobot.sh logs), verify appId/appSecret, ensure app is published with Long Connection mode |
| Can't connect to DeepCode | Verify deepcode container is healthy: curl http://localhost:8000/health |
| Wrong language output | Switch model โ minimax-m2.1 defaults to Chinese, use Claude/GPT for English |
| Config not taking effect | Just restart: ./nanobot/run_nanobot.sh restart (no rebuild needed) |
| Clear chat history | Send /clear in chat, or: docker exec nanobot sh -c 'rm -rf /root/.nanobot/sessions/*.jsonl' |
Examples

Live Demonstrations
- Paper2Code Demo: research to implementation
- Image Processing Demo: AI-powered image tools
- Frontend Implementation: complete web application
Recent Updates

Smart Document Segmentation (v1.2.0)
- Intelligent Processing: Automatically handles large research papers and technical documents that exceed LLM token limits
- Configurable Control: Toggle segmentation via configuration with size-based thresholds
- Semantic Analysis: Advanced content understanding with algorithm, concept, and formula preservation
- Backward Compatibility: Seamlessly falls back to traditional processing for smaller documents
Coming Soon
We're continuously enhancing DeepCode with exciting new features:
Enhanced Code Reliability & Validation
- Automated Testing: Comprehensive functionality testing with execution verification and error detection.
- Code Quality Assurance: Multi-level validation through static analysis, dynamic testing, and performance benchmarking.
- Smart Debugging: AI-powered error detection with automatic correction suggestions
PaperBench Performance Showcase
- Benchmark Dashboard: Comprehensive performance metrics on the PaperBench evaluation suite.
- Accuracy Metrics: Detailed comparison with state-of-the-art paper reproduction systems.
- Success Analytics: Statistical analysis across paper categories and complexity levels.
System-wide Optimizations
- Performance Boost: Multi-threaded processing and optimized agent coordination for faster generation.
- Enhanced Reasoning: Advanced reasoning capabilities with improved context understanding.
- Expanded Support: Extended compatibility with additional programming languages and frameworks.
Star History

Ready to Transform Development?
Citation
If you find DeepCode useful in your research or applications, please cite:
```bibtex
@misc{li2025deepcodeopenagenticcoding,
      title={DeepCode: Open Agentic Coding},
      author={Zongwei Li and Zhonghang Li and Zirui Guo and Xubin Ren and Chao Huang},
      year={2025},
      eprint={2512.07921},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2512.07921},
}
```
License
MIT License - Copyright (c) 2025 Data Intelligence Lab, The University of Hong Kong


