Agent Teams MCP
MCP server: Claude Code as team lead coordinating worker agents, with true push notifications via FileChanged + asyncRewake
An MCP server that turns Claude Code into a coordinator for a team of AI worker agents — with true push from workers back to the lead and a live web UI for observing the whole team.
agent-teams-mcp
agent-teams-mcp is a Model Context Protocol server, written in Rust, that turns your Claude Code CLI into a team lead. The lead can spawn and coordinate AI worker agents (Claude Code, Codex, or Gemini CLI) as managed subprocesses, route @mention-style messages between them, and — critically — receive worker replies as automatic <system-reminder> injections in its next turn, with no polling and no manual inbox_read.
A web UI (auto-launched at http://127.0.0.1:8787) renders the live chat between the lead and every worker, so you can watch the team work in real time and even chime in as a human.
Install globally — copy the prompt below into Claude Code
The fastest way to install this on your machine is to paste the prompt below into your Claude Code session. Claude will run the install end-to-end, stop at the one mandatory CC restart, and verify everything works.
You are an AI assistant that can run shell commands. Help me install
agent-teams-mcp globally on this machine so I can use team-mode MCP from
any project.
GOAL: install the binary once, register the user-scope Claude config, and
make sure I can `cd` into any project and use worker replies as automatic
<system-reminder>s.
PREREQUISITES (verify first; stop if missing):
- Rust 1.85+ (`cargo --version`)
- Node.js 14+ (`node --version`)
STEPS (run sequentially, stop on any failure — DO NOT silently work around):
1. Clone & install agent-teams-mcp to PATH (one-time per machine):
cd /tmp
git clone https://github.com/jessepwj/agent-teams-mcp
cd agent-teams-mcp
cargo install --path .
Verify: `team_mode_service --help` works and shows `install-global`.
2. Install the global Claude config:
team_mode_service install-global
This writes user-scope `.claude.json` and `.claude/settings.json` entries
that point to the `team_mode_service relay` and hook subcommands. If it
reports a CONFLICT (existing team-mode server or hook), STOP and tell me.
3. STOP and tell me: "Please fully restart Claude Code now (close ALL CC
windows, then relaunch from any project directory)". Wait for me to
confirm before continuing — `/mcp reconnect` is NOT enough; hooks only
load at CC startup.
===== AFTER I RESTART CC =====
4. Verify connection: tell me to run `/mcp`. Expect `team-mode connected`.
5. Use any project: I can `cd` into a project and run Claude Code without
any additional project-local setup. If I specifically want isolation,
tell me to use `team_mode_service init` in that project instead.
6. Smoke test:
team_create({"name": "<a-team-name>"})
worker_add({"team": "<a-team-name>", "name": "alice", "adapter": "codex"})
send_message({"team": "<a-team-name>", "text": "@alice say hi + timestamp"})
7. Wait. Within ~10s alice's reply should auto-inject as a <system-reminder>.
If after 60s nothing arrives, run diagnostics (do NOT auto-fix):
- `tail -10 .async-wake-probe.log` (recent hook fires)
- Check the active team owner PID in the runtime JSON / team state
- Report findings to me.
8. If I asked, clean up the test team:
team_delete({"name": "<a-team-name>"})
KNOWN TRAPS (do not skip these reminders):
- Step 3: full CC restart is mandatory; `/mcp reconnect` will NOT load hooks.
- Step 6: `team_create` on each CC start is still required for hook routing.
- Don't commit `.agent-teams/` to git when working locally.
When done, tell me: "team-mode is ready on this machine. You can dispatch workers now."
Manual install (no AI assistant)
git clone https://github.com/jessepwj/agent-teams-mcp
cd agent-teams-mcp
cargo install --path .
team_mode_service install-global
# Then completely close all Claude Code windows and relaunch from any project.
# /mcp should show `team-mode connected`.
⚠️ The one critical pitfall: any change to `.mcp.json` or `.claude/settings.json` requires a full Claude Code restart (close all CC windows + relaunch). `/mcp reconnect` does NOT reload hook configuration. If worker replies stop arriving as `<system-reminder>` after a config change, this is almost always why. See § Troubleshooting for the full triage.
Migration from older init-based setups
If you already used team_mode_service init in a project, keep the existing .agent-teams/
folder in place. The old project-local runtime still works. To move that machine to the
v3 global path, stop the old service, run team_mode_service install-global, and then
fully restart Claude Code. New users should start with install-global directly.
With global install, each project still gets its own isolated team data via the caller project root; use init only when you explicitly want project-local files.
Why this exists
Claude Code exposes MCP tool calls but, by design, does not auto-react to MCP resources/updated notifications. So a naive MCP server that puts worker replies in a "resource" gets no push back to the lead's terminal. The official Channels API would solve this but requires claude.ai OAuth login; many people run Claude Code with an API key.
This project uses the documented Stop hook + asyncRewake pattern to implement a service → client → session push that works under API-key auth:
Worker reply
    ↓
team_mode_service appends a JSON line to .agent-teams/<team>/lead_pending.jsonl
    ↓
Claude Code's Stop hook (project-level, .claude/settings.json) fires when CC's turn ends
    ↓
scripts/hooks/lead-pending-async-wake.js asks the service for this CC's teams
and atomically drains the matching per-team pending files
    ↓
On hit: writes the reply to stderr and exits 2
    ↓
CC enters a NEW turn with the reply injected as a <system-reminder>
    ↓
Claude reads the reminder and continues working
No polling, no token burn, no special login. Median end-to-end latency: ~50 ms.
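The drain step in the middle of that flow is the interesting part. The real hook is a Node.js script; the Python sketch below is only an illustration of the same idea — claim the pending file with an atomic rename so concurrent hook instances can't double-deliver, emit the replies on stderr, and exit 2 so Claude Code injects them. The JSONL field names (`sender`, `text`) are assumptions, not the documented schema.

```python
import json
import os
import sys
from pathlib import Path

def drain_pending(pending: Path) -> int:
    """Atomically claim and drain a per-team lead_pending.jsonl file.

    Returning 2 mirrors the hook's exit-2 contract: Claude Code starts a
    new turn with our stderr injected as a <system-reminder>. Returning 0
    means there was nothing to report.
    """
    if not pending.exists():
        return 0
    claimed = pending.with_suffix(".draining")
    try:
        os.replace(pending, claimed)  # atomic rename: only one drainer wins
    except FileNotFoundError:
        return 0                      # another hook instance claimed it first
    lines = [ln for ln in claimed.read_text().splitlines() if ln.strip()]
    claimed.unlink()
    if not lines:
        return 0
    for line in lines:
        reply = json.loads(line)      # field names here are illustrative
        print(f"[{reply['sender']}] {reply['text']}", file=sys.stderr)
    return 2
```

The atomic-rename claim is what makes "no polling" safe: even if two Stop-hook invocations race, `os.replace` guarantees exactly one of them delivers the batch.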
Historical note: earlier versions used a long synchronous Stop-hook shepherd loop and a stdio MCP relay. ADR-020 moved the default control plane to a durable localhost Streamable HTTP service, and ADR-022 moved worker-reply wakeups to `asyncRewake` with per-team pending files. The old stdio `team_mode_mcp` + `team_mode_daemon` path is kept only as a legacy rollback / fallback route.
What you get
- Tiny MCP surface (8 tools) — `team_create` / `team_list` / `team_delete` / `worker_add` / `worker_list` / `worker_remove` / `send_message` / `inbox_read`. That's it.
- Durable HTTP service architecture — `team_mode_service` is a long-lived localhost Streamable HTTP MCP service on `127.0.0.1:8786/mcp`. Claude Code connects through `.mcp.json` + `scripts/mcp-http-headers.js`, while the service owns worker subprocesses and the web UI. The old stdio `team_mode_mcp` + `team_mode_daemon` pair remains documented only as a legacy rollback / fallback path.
- True push to the lead's terminal — Stop hook `asyncRewake` + per-team pending-file routing surface worker replies as `<system-reminder>` automatically. Idle CC sessions wake up when the next turn boundary arrives.
- Live web UI on `127.0.0.1:8787+` — three-pane layout (teams list / group chat / session details). Per-sender colors, `@mention` highlighting, click-to-filter, full Claude Code & Codex JSONL session transcripts, and a sticky composer so a human user can type into the team as a peer (lead included). Auto-opens on `team_create`.
- Multi-backend workers — `claude-code`, `codex`, `gemini-cli`. The lead must remain Claude Code (see Codex as Lead).
- Strict `@mention` routing — `send_message` rejects unmatched handles up-front and returns the active worker list (always including `@lead`) so the caller can self-correct. Matching is case-insensitive (`@Alice` finds worker `alice`). Worker messages with no `@mention` default to `@lead`.
- Caller-attributed messaging (Bug 29) — `send_message` derives `sender` from HTTP caller identity, not from a parameter. Claude Code headers → `sender = "lead"`; worker env-backed HTTP headers → `sender = <worker name>`. Workers can only send into their bound team. No forging.
- Explicit-only worker replies (Bug 29) — workers MUST call `mcp__team-mode__send_message` to communicate. Their stdout (LLM thinking, codex shell output, ANSI escapes) is treated as private working notes and never copied into messages. If a worker finishes a turn without calling the tool, the lead receives a `[SYSTEM]` "completed turn without sending message" notice.
- Strict slug validation — worker / team names match `[a-z0-9_.-]{1,64}` (must start with a lowercase letter or digit). Names that can't be `@mention`ed are rejected on creation, not silently broken later.
- Worker liveness + revival — `worker_remove` is a soft delete (process stopped, profile retained for fast resume). `worker_add` with `on_existing=reuse` brings a worker back. Dead workers are detected via an OS process check; the response includes a `hint` telling you exactly what to do.
- Per-dispatch terminal-message guarantee — every inbox dispatch produces exactly one terminal message back to the lead. Silent turn → `[SYSTEM] worker 'X' completed its turn without producing any reply text.` Pipe close mid-turn → `[SYSTEM] ... output channel closed mid-turn.` The lead never has to poll to know whether a worker actually finished.
- Mid-turn message delivery (PostToolUse hook) — worker replies arriving while the lead is busy in a long turn show up via `additionalContext` after the next tool call (~3 s typical), not at turn end. Falls back to the Stop hook's `decision:"block"` path when the lead is purely thinking with no tool activity.
- Multi-layer worker liveness — three independent paths flip a worker to "dead" on external kill: (1) a per-turn 3 s active probe inside the `agent_loop` produces a clean `[SYSTEM] OutputClosed` even when Windows pipes don't EOF, (2) a daemon-side 5 s watchdog reconciles `runtime/workers.json` so the web UI shows dead status without an MCP roundtrip, (3) the `worker_list` tool always queries the orchestrator live.
- Just-in-time runtime hints — operational guidance is delivered in tool-response `hint` / `note` / `dead_recipients_hint` fields when relevant, not buried in static tool descriptions. Tool descriptions are kept tight (~700 chars each) so they don't crowd context.
- Crash-visible `team_delete` — returns a `shutdown_failures` array so the caller knows which subprocesses might be orphans.
- Stop-hook batch-grace — near-concurrent worker replies are coalesced into a single reminder (default 500 ms window, `TEAM_MODE_STOP_BATCH_GRACE_MS`).
- Per-project isolation (v3.1) — every MCP call carries `X-Team-Mode-Project-Root`, so a single durable service can host multiple Claude Code sessions in different projects without their teams, workers, or messages bleeding across. Worker `cwd`, the web UI's `?project=` route, and `.lead-sessions.json` are all caller-scoped. Run two CCs in two projects, get two fully isolated team rooms in one browser.
- One live team per CC per project — `team_create` lets one CC own at most one active team per project; trying to create a second, different team returns a hint to archive, delete, or pass `overwrite=true`. Orphan teams from dead CC sessions are auto-cleaned and reported in `cleaned_orphan_teams`.
- Self-documenting data dir — a `README.md` is auto-regenerated inside `.agent-teams/` on every daemon startup, describing the on-disk layout.
- 351 unit tests, zero warnings.
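As an aside, the slug rule above is a single regex. A minimal sketch of the equivalent check (the helper name is ours, not the crate's API):

```python
import re

# Mirrors the documented rule: 1-64 chars of [a-z0-9_.-], and the first
# character must be a lowercase letter or digit so the name is @mention-able.
SLUG_RE = re.compile(r"[a-z0-9][a-z0-9_.-]{0,63}")

def validate_slug(name: str) -> bool:
    return SLUG_RE.fullmatch(name) is not None
```

`fullmatch` matters here: a plain `match` would accept `alice!!!` because the regex matches a prefix.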
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Claude Code (your CLI session) — the LEAD                   │
│                                                             │
│   .mcp.json             ──▶ http://127.0.0.1:8786/mcp       │
│   .claude/settings.json ──▶ Stop hook = asyncRewake script  │
└─────────────────────────────────────────────────────────────┘
                     │ Streamable HTTP MCP
┌─────────────────────────────────────────────────────────────┐
│ team_mode_service(.exe)   (THIS REPO — durable localhost)   │
│  ┌───────────────────────────────────────────────────────┐  │
│  │ MCP runtime — 8 tools                                 │  │
│  ├───────────────────────────────────────────────────────┤  │
│  │ Services                                              │  │
│  │   TeamService   MemberService   RoomService           │  │
│  │   MessageService → LeadPendingWriter                  │  │
│  │   InboxService (computed from messages.jsonl)         │  │
│  ├───────────────────────────────────────────────────────┤  │
│  │ RuntimeOrchestrator — owns worker subprocesses        │  │
│  │   ClaudeCodeBackend   CodexBackend   GeminiBackend    │  │
│  ├───────────────────────────────────────────────────────┤  │
│  │ Storage (.agent-teams/)                               │  │
│  │   <team>/  team.json   members.json (v=1)             │  │
│  │            room.json   messages.jsonl                 │  │
│  │   runtime/http-mcp.json   runtime/workers.json        │  │
│  │   .locks/   README.md (auto-generated)                │  │
│  ├───────────────────────────────────────────────────────┤  │
│  │ team_mode_web — read-only web UI on :8787+            │  │
│  │   served from inside the daemon, auto-opens           │  │
│  └───────────────────────────────────────────────────────┘  │
└──────────────────────┬──────────────────────────────────────┘
                       │
                       ▼ spawned as child processes
┌─────────────────────────────────────────────────────────────┐
│ Workers — each is a managed CLI subprocess                  │
│   alice (claude-code)    bob (codex)    carol (gemini-cli)  │
└─────────────────────────────────────────────────────────────┘
Why a service? ADR-020 retired the old default stdio relay because stdin EOF and ESC handling made MCP lifetime unreliable on Windows. The durable HTTP service stays up across Claude Code reconnects and owns all worker state. Stop it explicitly with scripts/team-mode-service.ps1 stop when you want it gone. The old stdio team_mode_mcp + team_mode_daemon implementation is retained only as a legacy rollback / fallback route.
Data flow:
- Lead → worker: `send_message` writes to `messages.jsonl`. The worker's `AgentLoop` wakes via `InboxNotifier` and injects the message into the worker's stdin.
- Worker → lead: the worker's reply enters `MessageService::send` with `Kind::Reply`. `LeadPendingWriter` appends it to `.agent-teams/<team>/lead_pending.jsonl`. The `asyncRewake` Stop hook drains that per-team file and wakes Claude Code with a `<system-reminder>`.
MCP tool reference
| Tool | Required | Optional | Summary |
|---|---|---|---|
| `team_create` | name | cwd | Create a team; virtual lead member auto-added. Auto-cleans orphan teams from dead CCs and returns them in `cleaned_orphan_teams`. Auto-launches the web UI for the team. |
| `team_list` | — | — | List all teams. Each team is decorated with `ownerStatus`: alive / orphan / unbound. |
| `team_delete` | name | — | Shut down all workers + delete team dir. Returns `shutdown_failures: [{member, reason}]` for any worker that didn't shut cleanly. |
| `worker_add` | team, name | adapter, model, cwd, system_prompt, env, on_existing | Spawn a worker. `on_existing` is required when a profile already exists: `reuse` (fast-resume saved profile) / `overwrite` (replace it; `adapter` required) / `error` (default, fail-fast). On dead-worker reuse, returns `revived_from_dead: true`. |
| `worker_list` | team | — | List workers (lead excluded). Marks dead workers with `sessionState: "dead"` and surfaces a `hint` telling you to revive with `worker_add` `on_existing=reuse`. |
| `worker_remove` | team, name | — | Soft-remove: process stopped, status = Removed, execution profile kept for fast-resume later. |
| `send_message` | team, text | — | Send into the team room; `sender` derived from caller identity (lead's relay → `"lead"`; a worker's relay → that worker, env-injected at `worker_add`). `text` SHOULD contain `@handles` — workers default to `@lead` when omitted, the lead must specify; unmatched handles fail with the available handle list (always includes `@lead`). Workers can only send into their bound team. Mixed live/dead recipient lists return `dead_recipients_hint` plus `[SYSTEM]` notices delivered to the lead's inbox. |
| `inbox_read` | team | limit, unread_only, auto_ack | Pull-mode fallback for the lead's inbox. Not the canonical channel — replies arrive automatically via the Stop hook; `inbox_read` is for backlog audits only. |
Full schemas in .plans/agent-teams-v2/docs/02-current-system/mcp-tools-reference.md.
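The `send_message` routing rules in the table (case-insensitive matching, `@lead` fallback for workers, fail-fast on unknown handles) can be sketched as follows. This is an illustrative model, not the server's Rust code; the error wording is made up.

```python
import re

MENTION_RE = re.compile(r"@([a-z0-9][a-z0-9_.-]*)", re.IGNORECASE)

def route(text: str, workers: list[str], sender: str) -> list[str]:
    """Resolve @mentions to recipients per the documented rules."""
    handles = {w.lower() for w in workers} | {"lead"}
    mentions = [m.lower() for m in MENTION_RE.findall(text)]
    unknown = sorted(set(mentions) - handles)
    if unknown:
        # Fail fast and report the valid handle list so the caller self-corrects.
        raise ValueError(f"unknown handles {unknown}; available: {sorted(handles)}")
    if not mentions:
        if sender == "lead":
            raise ValueError("lead messages must @mention a recipient")
        return ["lead"]              # worker messages default to @lead
    return sorted(set(mentions))
```

Note the asymmetry the table describes: a worker with no mention falls back to `@lead`, while the lead is required to name a recipient.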
Web UI
The service runs an embedded read-only web server on 127.0.0.1:8787 (auto-increments to 8799 on port conflicts). It auto-opens in your default browser when you call team_create (disable with TEAM_MODE_WEB_AUTO_OPEN=0).
Layout: three panes — left (team / member / filter list), center (group chat timeline), right (session / details / diagnostics tabs).
Group chat: chat-bubble style. Sender avatar + name + time + text body. @mention tokens are highlighted and clickable to filter the timeline. [SYSTEM] status messages render as centered grey notices.
Per-sender colors: lead is fixed cyan, user is fixed warm orange, workers get a stable djb2-hash color (skipping reserved hue ranges so workers never collide with lead/user).
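The stable worker colors come from the classic djb2 string hash. The sketch below shows the hash itself; the hue mapping and the exact reserved bands for lead/user are our guesses, not the UI's real palette logic.

```python
def djb2(s: str) -> int:
    """Classic djb2 string hash: h = h * 33 + c, seeded with 5381."""
    h = 5381
    for ch in s:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF
    return h

# Hypothetical reserved hue bands for lead (cyan) and user (warm orange);
# the real UI's exact ranges are not documented here.
RESERVED = [(160, 200), (20, 50)]

def worker_hue(name: str) -> int:
    """Stable per-worker hue: hash the name, then step out of reserved bands."""
    hue = djb2(name) % 360
    while any(lo <= hue <= hi for lo, hi in RESERVED):
        hue = (hue + 97) % 360   # 97 is coprime with 360, so this terminates
    return hue
```

Because the hue depends only on the worker's name, a worker keeps its color across restarts and across the filter/transcript panes.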
Session transcripts (right pane): the actual Claude Code or Codex JSONL session content for the focused member, organized into "work turns" β tool calls paired with their results, final reply highlighted. Each worker's session_id is captured from its backend stream (Claude Code init / result events; Codex thread.id) and used for precise session lookup. Codex rollouts under ~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl are parsed natively (5 s TTL cache).
Human-in-the-loop messaging: a sticky composer at the bottom lets you, the human, send messages into any team room as `@mention`s — the sender name is the reserved `user` handle. Workers reply to you the same way they reply to the lead. The lead also sees these messages (via the lead-observability rule).
For the design rationale and feature roadmap see .plans/agent-teams-v2/docs/04-web-ui/team-mode-web-guide.md and .plans/agent-teams-v2/docs/04-web-ui/history/web-frontend-plan.md.
Backend matrix
| Capability | claude-code | codex | gemini-cli |
|---|---|---|---|
| Persistent process | ✅ (NDJSON stream-json) | ✅ (codex app-server JSON-RPC) | ❌ (per-turn respawn) |
| `session_id` capture | ✅ | ✅ (thread.id) | ❌ |
| Web UI session transcript | ✅ | ✅ (rollout JSONL) | ❌ (mtime fallback only) |
| Full-access mode | ✅ (`--permission-mode bypassPermissions`) | ✅ (`sandbox_mode = "danger-full-access"`) | n/a |
| System prompt mechanism | --system-prompt flag | Prepended to first user message | System: prefix in every constructed prompt |
| Conversation memory across turns | Native (single process) | Native (single process) | In-memory rolling window (last 50 turns) |
Notes:
- Claude Code workers require `CLAUDE_CODE_GIT_BASH_PATH` on Windows. The MCP relay auto-detects it from common Git install paths at startup; set the env var manually if your install is non-standard.
- Codex workers are spawned with `approvalPolicy: "never"` and `sandbox_mode: "danger-full-access"` so they don't block waiting for permission prompts. The reasoning-effort field is intentionally not hardcoded; it falls through to your `~/.codex/config.toml` if set.
- Gemini workers lack a persistent session, so the web UI cannot show their JSONL transcript. Their conversation history is reconstructed in memory from `messages.jsonl` for each turn.
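The Gemini per-turn reconstruction can be pictured as building a fresh prompt from the message history on every turn. This is a hedged sketch: the `System:` prefix mirrors the matrix row above, but the field names and exact prompt format are assumptions.

```python
def build_gemini_prompt(system_prompt: str, history: list[dict], window: int = 50) -> str:
    """Reconstruct a per-turn prompt for a session-less backend.

    `history` entries are assumed to look like {"sender": ..., "text": ...},
    in chronological order; only the last `window` turns are kept, matching
    the in-memory rolling window described in the matrix.
    """
    lines = [f"System: {system_prompt}"]
    for msg in history[-window:]:
        lines.append(f"{msg['sender']}: {msg['text']}")
    return "\n".join(lines)
```

This is also why Gemini workers have no JSONL transcript in the web UI: there is no persistent session file to parse, only this ephemeral reconstruction.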
Installation
Recommended: setup script
git clone https://github.com/jessepwj/agent-teams-mcp
cd agent-teams-mcp
bash scripts/setup.sh
# or: powershell -ExecutionPolicy Bypass -File scripts\setup.ps1
The setup script:
- Verifies prerequisites (cargo 1.85+, node 14+).
- Builds the release binary: `team_mode_service(.exe)`.
- Generates `.mcp.json` from `.mcp.json.template`, pointing at `http://127.0.0.1:8786/mcp` and `scripts/mcp-http-headers.js`.
- Runs `cargo test --lib` (351 tests).
- Prints next steps.
Why a generated `.mcp.json`? `.mcp.json` is machine-local and gitignored. The tracked `.mcp.json.template` points Claude Code at the local HTTP MCP endpoint and uses `scripts/mcp-http-headers.js` to attach the runtime token and owner headers. Re-run setup after moving the repo so the helper path is correct, then fully restart Claude Code.
Manual install
cargo build --release --bin team_mode_service
Then copy .mcp.json.template to .mcp.json, start the service, and fully restart Claude Code:
powershell -ExecutionPolicy Bypass -File scripts\team-mode-service.ps1 start
.mcp.json should contain the HTTP endpoint:
{
"mcpServers": {
"team-mode": {
"type": "http",
"url": "http://127.0.0.1:8786/mcp",
"headersHelper": "node scripts/mcp-http-headers.js"
}
}
}
The stdio team_mode_mcp + team_mode_daemon install path is a legacy rollback / fallback path only; do not use it for the default install.
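The `headersHelper` referenced in `.mcp.json` attaches per-call identity to every MCP request. A hedged Python stand-in for what such a helper might compute — only `X-Team-Mode-Project-Root` is documented elsewhere in this README; the token header name and the exact `http-mcp.json` fields are assumptions:

```python
import json
import os
from pathlib import Path

def mcp_headers(data_dir: Path = Path(".agent-teams")) -> dict[str, str]:
    """Illustrative stand-in for scripts/mcp-http-headers.js.

    Reads the runtime descriptor written by the service and returns the
    headers each MCP call should carry. Header names other than
    X-Team-Mode-Project-Root are guesses, not the real protocol.
    """
    runtime = json.loads((data_dir / "runtime" / "http-mcp.json").read_text())
    token = Path(runtime["token_file"]).read_text().strip()
    return {
        "Authorization": f"Bearer {token}",  # assumed header shape
        "X-Team-Mode-Project-Root": runtime.get("project_root", os.getcwd()),
    }
```

The point of the indirection: the token lives in a machine-local file, so the tracked template never contains a secret.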
Use in any project on this machine (alternative: project-local install (advanced))
cargo install --path .
cd ~/my-other-project
team_mode_service init
team_mode_service --project-root . --data-dir .agent-teams &
claude
Use this only if you want per-project isolation. init writes the project-local .mcp.json, .claude/settings.json, and embedded helper/hook scripts under .agent-teams/scripts/. If an existing team-mode MCP server or lead-pending hook is already present, it stops and asks you to merge manually.
Worker cargo commands on Windows MSVC
Codex workers are child processes of team_mode_service. On Windows MSVC targets, rustc may fail to discover Visual Studio from that child process and can accidentally call Git Bash's link.exe, causing errors such as link.exe was not found or linker failures from the wrong link.exe.
Fix this by sourcing vcvars64.bat before starting the service so the service, and therefore its workers, inherit LIB, INCLUDE, and the MSVC PATH.
Use the provided script:
.\scripts\team-mode-service.ps1 start
Or source it manually:
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvars64.bat"
cargo run --release --bin team_mode_service
Recommended Codex config for workers:
[shell_environment_policy]
inherit = "all"
sandbox_mode = "danger-full-access"
approval_policy = "never"
Non-Windows users can keep using `cargo run --release --bin team_mode_service` directly, or run the release binary in the background with `--data-dir .agent-teams --project-root .`.
Push notifications β already wired
.claude/settings.json is committed with the Stop hook entry. You must restart Claude Code after the first clone so it picks up the hook config (CC only loads hooks at startup). After that, every change to .mcp.json or .claude/settings.json requires a full CC restart.
Sanity checklist
- `bash scripts/setup.sh` (or the PowerShell variant) — succeeds.
- `target/release/team_mode_service(.exe)` exists.
- `scripts/team-mode-service.ps1 start` reports `running pid=... url=http://127.0.0.1:8786/mcp`.
- `claude` launched from the repo root — `/mcp` shows `team-mode` connected.
- `team_create({"name":"smoke"})` succeeds; the web UI auto-opens.
- `worker_add({"team":"smoke","name":"alice","adapter":"claude-code"})` succeeds.
- `team_delete({"name":"smoke"})` succeeds.
- Read `.plans/agent-teams-v2/docs/03-operations/usage-tips.md` for the do's and don'ts.
If any step fails, see `.plans/agent-teams-v2/docs/03-operations/open-source-deployment.md` — it has a full triage table.
Troubleshooting β worker replies aren't pushing
The single most common issue. Symptoms: send_message returns success, but no <system-reminder> arrives in your next turn. Triage in this exact order:
1. Did you restart CC after first clone / after editing `.claude/settings.json`? Hooks load only at CC startup — `/mcp reconnect` does NOT pick them up. Quit all CC windows, relaunch `claude`, retry.
2. Is the worker actually replying? `tail -f ~/.team-mode/runtime/service.log` — you should see the reply being appended for `lead`, along with one `event=http_call_context` line per MCP call. If not, the worker is stuck (check that the backend CLI, e.g. `codex`, is installed and on PATH).
3. Is the Stop hook firing? `tail -f .agent-teams/.lead-pending-wake.log` — you should see async-wake injection lines. No entries at all → hook not loaded → see step 1. Service lookup errors → run `scripts/team-mode-service.ps1 status`.
4. Still nothing? See `.plans/agent-teams-v2/docs/03-operations/open-source-deployment.md` for the full table (15+ scenarios with fixes).
The send_message tool's response hint field is intentionally chatty about this β if you ever see "If reminders never arrive ..." in a tool result, you've already hit the issue and should restart CC.
Data directory layout
Created by the service under the lead's CWD on first tool call:
.agent-teams/
├── README.md              — auto-regenerated on each daemon start
├── .locks/                — file locks (per-team + lead_pending)
├── runtime/
│   ├── http-mcp.json      — {pid, url, token_file, base_dir, project_root}
│   └── workers.json       — worker runtime sidecar (orphans marked dead on daemon restart)
├── team-mode-service.log  — service stderr/tracing
└── <team-name>/
    ├── team.json          — team metadata (incl. owner_cc_pid)
    ├── members.json       — v=1, unified identity + execution profile
    ├── room.json          — room metadata
    ├── messages.jsonl     — append-only message history (source of truth)
    └── lead_pending.jsonl — per-team push queue, atomically drained by hook
# Hook-side scratch files:
.agent-teams/.lead-pending-wake.log
.agent-teams/.cc-identity.<session_id>.json
.lead-sessions.json β per-project, written by Stop hook so Web UI can route
each lead conversation to the right CC across reconnects
# Service-wide diagnostics (machine-global, not per project):
~/.team-mode/runtime/service.log β service stderr; one `event=http_call_context`
line per MCP call (logs resolved project_root)
~/.team-mode/runtime/relay-startup.log β one line per relay spawn (cwd + cli_arg
+ cc_session_cwd + resolved_project_root)
Old project-root lead_pending.jsonl files are migrated into per-team files by the service at startup.
A legacy .team-mode-data/ directory triggers a startup warning (not migrated — delete manually).
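Because `messages.jsonl` is the source of truth, derived views (like the inbox that `InboxService` computes) can always be rebuilt from it. A sketch under assumed field names — `recipients` and a simple line-offset ack are our inventions for illustration:

```python
import json
from pathlib import Path

def unread_for_lead(messages: Path, last_ack: int) -> list[dict]:
    """Derive the lead's inbox from the append-only message log.

    `last_ack` is the index of the last acknowledged line; everything after
    it that is addressed to the lead counts as unread. Field names here are
    assumptions, mirroring only the idea that the inbox is computed from
    messages.jsonl rather than stored separately.
    """
    out = []
    for i, line in enumerate(messages.read_text().splitlines()):
        if i <= last_ack or not line.strip():
            continue
        msg = json.loads(line)
        if "lead" in msg.get("recipients", []):
            out.append(msg)
    return out
```

An append-only log plus recomputed views is what makes the per-team files safe to drain, migrate, and regenerate without losing history.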
Development
If you're hacking on this repo (not just installing it), bootstrap a project-local dev environment:
git clone https://github.com/jessepwj/agent-teams-mcp
cd agent-teams-mcp
bash scripts/setup.sh
# or: powershell -ExecutionPolicy Bypass -File scripts\setup.ps1
claude # launch Claude Code from the repo root
This builds the HTTP service, generates a project-local .mcp.json, and runs the 351 unit tests.
# Compile check (fast, no link)
cargo check --lib
# Run the 351 unit tests (~1s)
cargo test --lib
# Build the default HTTP MCP service
cargo build --release --bin team_mode_service
# Optional web binary (built into the daemon by default; standalone build for hacking)
cargo build --release --features team-mode-web --bin team_mode_web
Useful design specs:
- `.plans/agent-teams-v2/decisions.md` — ADR-020/021/022, current HTTP service and async-wake decisions
- `.plans/agent-teams-v2/docs/05-design-history/legacy/team-mode-mcp-final.md` — legacy rollback / fallback stdio MCP runtime + tool surface + storage layout
- `.plans/agent-teams-v2/docs/02-current-system/worker-detach-refactor.md` — legacy rollback / fallback daemon architecture rationale
- `.plans/agent-teams-v2/docs/05-design-history/hook-push-design.md` — Stop hook + JSON block design
- `.plans/agent-teams-v2/docs/05-design-history/design-decisions.md` — full bug journal + alternatives considered
- `.plans/refactor-data-layout/spec.md` — current data layout spec
Adding a backend? See src/backend/{claude_code,codex,gemini}.rs for reference implementations of the Backend trait. AgentLoop drives all backends uniformly. Read CONTRIBUTING.md for code conventions.
Codex as Lead
Short version: not currently supported. The Stop-hook-based push is a Claude Code-specific feature — Codex CLI has no equivalent blocking hook (openai/codex#8375).
The only officially supported path for Codex-as-Lead would be the codex app-server JSON-RPC mode, which would require building a harness around it (~2000+ lines of Rust). Research / discussion welcome in the issue tracker.
Codex as a worker is fully supported and has full session-transcript parity with Claude Code in the web UI.
Credits
This project is derived from and builds on github.com/ZhangHanDong/agent-teams-rs (MIT, © 2025 Zhang Han Dong), which provides the core runtime, backends, team/task/inbox domain, and CLI. This fork refocuses the project around the `team_mode_service` HTTP MCP service and adds:
- The Stop-hook + JSON-block + ancestor-routing push architecture
- A durable localhost `team_mode_service` that survives Claude Code reconnects
- A live web UI on `127.0.0.1:8787` with per-sender colors, full session transcripts (Claude Code + Codex), and human-in-the-loop messaging
- A unified member file layout (`members.json` v=1 with merged identity + execution)
- Per-team subdirectory data layout with auto-generated `README.md`
- `worker_add on_existing`, strict `send_message`, `team_delete shutdown_failures`, `worker_add` ready-check
- Per-dispatch terminal-message guarantee (silent turn / pipe close → `[SYSTEM]`)
- Strict slug validation, case-insensitive `@mention`, just-in-time runtime hints
- Service observability watchdog, asyncRewake Stop-hook batching, one-live-team enforcement
- The `inbox_read` pull-mode tool
- Hook scripts, setup automation, and end-user documentation
License
MIT — see LICENSE.
