LearnLoop
A team-wide prompting coach. Every prompt sent through Claude.ai, VS Code, Claude Code, or Copilot Chat is scored on five dimensions in real time. Weak prompts trigger a short teaching loop. Strong prompts feed a team wiki, which gets injected back into the next person's context, so a team's "way of prompting" compounds without anyone writing docs.
Trailhead is the engineering codename inside the repo; LearnLoop is the product name on the marketing site.
- Landing page / waitlist: https://learnloop-gules.vercel.app/
- Demo video (3 min walkthrough): https://www.youtube.com/watch?v=kD6nnJAmRK8
Screenshots
Browser-extension popup. Toggle coaching, pick the team, scope scoring to a wiki node so the rubric reads against that subtree's conventions.
Coach output inside Claude Code. Five dimensions, a per-dimension breakdown, and a teaching paragraph for the weakest one, surfaced through the MCP coach tool before the prompt is sent.
The 5-dimension rubric
Every prompt is scored 0–10 on:
- goal_clarity – what outcome is being asked for
- specificity – concrete files, functions, errors named
- context_loading – relevant code/docs/examples attached
- constraint_articulation – what must not change, perf/style limits
- output_specification – desired shape of the response
overall = mean of the five dims. Below 7 triggers coaching; ≥ 7 lands silently. The rubric is concrete enough that a human reviewer could apply it; the LLM is the implementation, not the product.
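The threshold logic is simple enough to state in code. A minimal sketch, assuming the score payload mirrors the five rubric labels (the field names are illustrative, not the repo's actual types):

```typescript
// Hypothetical score shape mirroring the five rubric dimensions.
interface PromptScore {
  goal_clarity: number;
  specificity: number;
  context_loading: number;
  constraint_articulation: number;
  output_specification: number;
}

// overall = mean of the five dimensions, each scored 0-10.
function overall(s: PromptScore): number {
  const dims = [
    s.goal_clarity,
    s.specificity,
    s.context_loading,
    s.constraint_articulation,
    s.output_specification,
  ];
  return dims.reduce((a, b) => a + b, 0) / dims.length;
}

// Below 7 triggers the coaching loop; at or above 7 the prompt lands silently.
function needsCoaching(s: PromptScore): boolean {
  return overall(s) < 7;
}
```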
What's actually implemented
The repo is a working npm-workspaces monorepo. Six surfaces, all wired to one backend, all sharing the same TypeScript contract.
apps/api – Hono backend (TypeScript, Node 22, Postgres)
Single source of truth. Multi-tenant by X-Team-Token header, with optional
auto-creation of new teams on unknown tokens (TRAILHEAD_AUTO_CREATE_TEAMS).
Endpoints implemented in apps/api/src/index.ts:
| Method + Path | What it does |
|---|---|
| GET / | Health + endpoint catalog (unauth) |
| GET /teams | List all teams with tokens (unauth, drives the dashboard team picker) |
| POST /score | 5-dimension Gemini score; writes skill_observation rows with a 30 s per-dimension dedup window |
| POST /coach | Stateless 3-round teach→reveal coaching loop |
| POST /capture | Stores a (prompt, response, outcome) capture from any surface |
| POST /wiki/propose | Normalizes + dedups an insight on (node_id, body_normalized), increments reinforcement_count, promotes draft → durable at ≥ 3 |
| GET /context?path= | Ancestor walk: returns every wiki node whose path is a prefix of the file path, plus its durable learnings |
| GET /examples?path= | Top graduated prompts for an ancestor of a file path |
| GET /wiki/recent?since=ISO | Polling endpoint for the VS Code wiki-toast surface |
| POST /diff | Picks the closest graduated team prompt by topic + ancestry, scores both prompts, asks Gemini Pro to narrate the difference |
| POST /improve | Multi-turn Gemini-driven prompt rewrite, capped at 5 user replies |
| GET /skill-arc | Time-series of per-dimension scores (powers the dashboard hero chart) |
| GET /team/metrics | Snapshot: avg overall, reuse rate, durable count, draft count, active users |
| GET /wiki/tree | Full node + learnings tree |
| POST /onboard/repo | Bulk-upserts one node per path, idempotent; optional initial_rules[path] for seeding body_md |
| POST /onboard/repo/full | Async rich bootstrap: accepts a folder + file bundle (capped at 16 MB / 2,000 files / 32 KB per file), enqueues a wiki_jobs row, three-pass Gemini fan-out via setImmediate |
| GET /onboard/jobs/:id | Per-path progress for a rich-bootstrap job |
| DELETE /team/data | Wipes the requesting team's data; the demo team is protected unless TRAILHEAD_ALLOW_DEMO_RESET=true |
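For illustration, here is how a client might assemble a call to POST /score. Only the route and the X-Team-Token header come from the table above; the request-body fields are assumptions:

```typescript
// Hypothetical request body; field names are illustrative only.
interface ScoreRequest {
  prompt: string;
  user_id: string;
  context_path?: string; // optional wiki-node scoping
}

// Build (but do not send) the multi-tenant /score request.
function buildScoreRequest(
  apiUrl: string,
  teamToken: string,
  body: ScoreRequest,
): Request {
  return new Request(`${apiUrl}/score`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Team-Token": teamToken, // tenancy header from the API description
    },
    body: JSON.stringify(body),
  });
}
```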
LLM work runs through apps/api/src/gemini.ts: Gemini 2.5 Flash for scoring
(JSON-schema mode), Gemini 2.5 Pro for diff narration and rich bootstrap.
Every Gemini call is instrumented with Langfuse when
LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set: one trace per HTTP
request, one nested generation per LLM call, with token usage and latency.
Tracing silently no-ops when the keys are missing.
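The silent no-op behavior can be sketched as a thin facade; this illustrates the pattern, not the repo's actual tracing code:

```typescript
interface Tracer {
  enabled: boolean;
  trace<T>(name: string, fn: () => Promise<T>): Promise<T>;
}

// Facade: real tracing only when both keys are present; otherwise every
// call passes straight through with no network activity and no errors.
function makeTracer(publicKey?: string, secretKey?: string): Tracer {
  const enabled = Boolean(publicKey && secretKey);
  return {
    enabled,
    async trace<T>(_name: string, fn: () => Promise<T>): Promise<T> {
      if (!enabled) return fn(); // keys missing: silent no-op
      // Enabled path: the real code would open a Langfuse trace here, run fn
      // as a nested generation, and record token usage + latency (elided).
      return fn();
    },
  };
}
```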
apps/browser-ext – Chrome MV3 extension for Claude.ai
Vanilla TypeScript + esbuild. Manifest declares https://claude.ai/* as the
content-script host and pre-allowlists the deployed Railway API.
Implemented widgets (src/widgets/):
- Score card under the textarea: debounced 250 ms hits to /score, per-dimension bars, missing-dimension hints
- Score badge on each user bubble
- Prompt diff panel: "Compare to team" expands a /diff view inline
- Outcome rating chips on each assistant bubble (👍 / 🤷 / 👎 → /capture)
- Wiki toast: drops in when /wiki/recent polling sees a new learning
- Improve chat: multi-turn rewrite using /improve
- Context pill + popup: pick a wiki node to bias scoring
- Send-intercept: on send, ≥ 7 lets the native send fire; < 7 shows a 5-second nudge with Have Claude clarify / Send as-is; auto-sends as-is on timeout. Fail-open on every API error.
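The send gate reduces to a small decision function. A sketch, assuming the scorer resolves to the overall 0–10 number (names hypothetical):

```typescript
type SendDecision = "send" | "nudge";

// Decide what happens on send: >= 7 lets the native send fire, < 7 shows
// the 5-second nudge, and any scoring failure fails open so the user is
// never blocked by the coach.
async function decideSend(score: () => Promise<number>): Promise<SendDecision> {
  try {
    const overall = await score();
    return overall >= 7 ? "send" : "nudge";
  } catch {
    return "send"; // fail-open on every API error
  }
}
```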
Kill-switch: chrome.storage.local.set({ 'trailhead.disabled': true }) halts
the extension on next page load.
apps/vscode-ext – VS Code extension
Sidebar webview registered under the trailhead activity bar. Settings expose
trailhead.apiUrl, trailhead.teamToken, trailhead.userId. Same 5-dimension
score-card render as the browser extension, plus a wiki-diff polling loop that
toasts when /wiki/recent reports a new insight.
apps/mcp-server – MCP server for Claude Code + Copilot Chat
STDIO MCP server distributed via npx trailhead-mcp …. Four hero tools
deliberately collapsed from a previous seven-tool surface so Copilot's tool
selector picks reliably:
| Tool | Routes to |
|---|---|
| coach | POST /coach: server-side teach→reveal cycle returns proceed: true/false and a rendered text block; the directive is a thin "relay text, follow proceed" loop |
| wiki_lookup | GET /context + GET /examples (file-path based) and/or GET /search (query) |
| wiki_save | POST /wiki/propose with server-side dedup |
| wiki_bootstrap | POST /onboard/repo (skeleton) or POST /onboard/repo/full (rich, LLM-populated) |
Plus a ping tool for health checks.
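The "relay text, follow proceed" directive can be sketched as a loop over a stubbed coach transport. The proceed flag and three-round cap come from the description above; the function names are hypothetical:

```typescript
interface CoachTurn {
  proceed: boolean;
  text: string; // rendered teach or reveal block
}

// Relay loop: surface each teach block to the user, gather a reply, call
// the server back. The server ends the cycle by returning proceed: true;
// the client enforces the three-round cap.
async function runCoachLoop(
  coach: (reply?: string) => Promise<CoachTurn>,
  relay: (text: string) => Promise<string>,
): Promise<string[]> {
  const shown: string[] = [];
  let reply: string | undefined;
  for (let round = 0; round < 3; round++) {
    const turn = await coach(reply);
    shown.push(turn.text);
    if (turn.proceed) break; // server says the prompt is ready
    reply = await relay(turn.text);
  }
  return shown;
}
```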
CLI subcommands (bin/cli.mjs):
- trailhead-mcp init – per-repo install. Writes .mcp.json + CLAUDE.md for Claude Code and .vscode/mcp.json + .github/copilot-instructions.md for Copilot. Idempotent. Token derivation order: --team-token → TRAILHEAD_TEAM_TOKEN → .trailhead-team sentinel → SHA-256 of git remote get-url origin → random repo_local_* token written to .trailhead-team and added to .gitignore.
- trailhead-mcp bootstrap – walks the cwd, bundles source files, posts to /onboard/repo/full. Default rich mode shows a live progress bar. Flags: --minimal, --paths, --force, --dry-run, --yes.
- trailhead-mcp reset – wipes the team's wiki/captures/observations.
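The token derivation order can be sketched as a fallback chain. This is a simplification: the real init also writes the sentinel file and the .gitignore entry, which are elided here:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Resolve the team token: CLI flag > env var > sentinel file > SHA-256 of
// the git remote URL > fresh repo_local_* token. First hit wins.
function deriveTeamToken(opts: {
  flagToken?: string; // --team-token
  envToken?: string; // TRAILHEAD_TEAM_TOKEN
  sentinelToken?: string; // contents of .trailhead-team
  gitRemoteUrl?: string; // output of `git remote get-url origin`
}): string {
  if (opts.flagToken) return opts.flagToken;
  if (opts.envToken) return opts.envToken;
  if (opts.sentinelToken) return opts.sentinelToken;
  if (opts.gitRemoteUrl) {
    return createHash("sha256").update(opts.gitRemoteUrl).digest("hex");
  }
  return `repo_local_${randomBytes(8).toString("hex")}`;
}
```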
apps/dashboard – Next.js 15 dashboard (Vercel)
App router, server components for the team list, SWR for the live charts.
Pages (src/app/):
- / – team picker (lists every team returned by /teams)
- /skill-arc?team=… – per-dimension team chart driven by /skill-arc, polls every 2 s during the demo
- /team?team=… – L1–L2 metric cards from /team/metrics
- /wiki?team=… – node tree + durable learnings from /wiki/tree
apps/landing-page – LearnLoop marketing site
Single static index.html + JSX components loaded at runtime via Babel
standalone. Tailwind via CDN. Sections: hero, problem, solution, features,
demo, footer. Deployed at https://learnloop-gules.vercel.app/.
Packages
- packages/shared – TypeScript types for every API request/response. Every surface imports from here so wire shapes can't drift.
- packages/scoring – Locked Gemini prompt templates (score, augment, teach, topic, extract) and pure helpers (buildAugmentation, normalize, normalizePath, ancestorPaths, the teach/skip/success reveal renderers).
- packages/score-card – Pure-DOM render function for the 5-dimension card. Used by the browser extension and the VS Code webview.
- packages/db – schema.sql (idempotent, every CREATE uses IF NOT EXISTS), migrate.mjs, check.mjs, and seed.mjs for the Acme Fintech demo data.
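As an illustration, the behavior of an ancestorPaths helper like the one in packages/scoring (assumed, not copied from the repo) is roughly:

```typescript
// Every prefix path of a file path, root first, including the path itself.
// This is what the /context ancestor walk matches wiki nodes against.
// e.g. "src/api/index.ts" -> ["src", "src/api", "src/api/index.ts"]
function ancestorPaths(filePath: string): string[] {
  const parts = filePath.split("/").filter(Boolean);
  const out: string[] = [];
  for (let i = 1; i <= parts.length; i++) {
    out.push(parts.slice(0, i).join("/"));
  }
  return out;
}
```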
Database (Postgres on Neon)
Eight tables in packages/db/schema.sql:
- teams – tenancy
- nodes – one row per folder or file path; carries body_md
- learnings – accumulated insights with normalize-based dedup, a counter, and draft | durable status
- prompts – graduated prompt templates with topic and reuse_count
- captures – stored conversations with outcome
- skill_observations – per-dimension score writes; prompt_hash backs the 30 s dedup window
- wiki_jobs + wiki_job_paths – async rich-bootstrap state
The demo team (Acme Fintech, token trailhead_demo_acme_2026) is hardcoded
into the schema with a fixed UUID so every surface can reference it without a
lookup.
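The learnings dedup-and-promote flow can be sketched with an in-memory map standing in for the learnings table. The normalize rule shown is an assumption; only the (node_id, body_normalized) key, the reinforcement counter, and the ≥ 3 promotion come from the description above:

```typescript
type LearningStatus = "draft" | "durable";

interface Learning {
  node_id: string;
  body_normalized: string;
  reinforcement_count: number;
  status: LearningStatus;
}

// Hypothetical normalize: lowercase + collapse whitespace, so rephrasings
// of the same convention dedup onto one row.
function normalize(body: string): string {
  return body.trim().toLowerCase().replace(/\s+/g, " ");
}

// Upsert keyed on (node_id, body_normalized); promote draft -> durable
// once reinforcement_count reaches 3.
function propose(store: Map<string, Learning>, nodeId: string, body: string): Learning {
  const key = `${nodeId}\u0000${normalize(body)}`;
  const existing = store.get(key);
  const row: Learning = existing
    ? { ...existing, reinforcement_count: existing.reinforcement_count + 1 }
    : { node_id: nodeId, body_normalized: normalize(body), reinforcement_count: 1, status: "draft" };
  if (row.reinforcement_count >= 3) row.status = "durable";
  store.set(key, row);
  return row;
}
```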
Repository layout
apps/
api/ Hono + TypeScript backend (Railway)
browser-ext/ Chrome MV3 extension for Claude.ai
vscode-ext/ VS Code IDE extension
mcp-server/ MCP server (Claude Code + Copilot Chat) + CLI
dashboard/ Next.js 15 dashboard (Vercel)
landing-page/ Static marketing site (LearnLoop)
packages/
shared/ TypeScript types β single source of truth for API shapes
scoring/ Locked Gemini prompt templates + pure helpers
score-card/ Pure-DOM render of the 5-dimension card
db/ Postgres schema, migrations, seed data
docs/
superpowers/specs/ Design specs
roadmaps/ Per-surface 24h build roadmaps
Quick start
Prerequisites:
- Node >= 22.6
- A Postgres database (Neon – DATABASE_URL must include sslmode=require)
- A Gemini API key from https://aistudio.google.com/apikey
# 1. Install workspace dependencies
npm install
# 2. Configure environment
cp .env.example .env
# Edit .env β fill in DATABASE_URL and GEMINI_API_KEY
# 3. Apply the schema (idempotent, safe to re-run)
psql "$DATABASE_URL" -f packages/db/schema.sql
# 4. (optional) Seed the Acme Fintech demo data
node packages/db/seed.mjs
# 5. Run the API
npm run dev
# → http://localhost:3000
The API refuses to boot without DATABASE_URL and GEMINI_API_KEY.
Run individual surfaces
# Dashboard (Next.js, port 3001)
NEXT_PUBLIC_API_URL=http://localhost:3000 \
NEXT_PUBLIC_TEAM_TOKEN=trailhead_demo_acme_2026 \
npm --workspace=apps/dashboard run dev
# Browser extension – build, then load apps/browser-ext/dist as unpacked
npm --workspace=@trailhead/browser-ext run build
# chrome://extensions β Developer mode β Load unpacked β apps/browser-ext/dist/
# VS Code extension – build, then press F5 with apps/vscode-ext as the workspace
npm --workspace=apps/vscode-ext run build
# MCP server – install into a target repo
cd /path/to/your/repo
npx trailhead-mcp init
npx trailhead-mcp bootstrap
Workspace scripts
npm run typecheck # tsc --noEmit across all workspaces
npm run test # run all workspace tests
npm run build # build all workspaces that expose a build script
Environment variables
Single root .env.example; every surface reads from the same set.
| Var | Used by | Notes |
|---|---|---|
| DATABASE_URL | api | Postgres connection string, sslmode=require |
| GEMINI_API_KEY | api | Gemini 2.5 Flash + 2.5 Pro |
| LANGFUSE_PUBLIC_KEY | api | Optional. Hosted Langfuse public key (pk-lf-…) |
| LANGFUSE_SECRET_KEY | api | Optional. Hosted Langfuse secret key (sk-lf-…) |
| LANGFUSE_BASEURL | api | Defaults to https://cloud.langfuse.com (EU). Use https://us.cloud.langfuse.com for US |
| TEAM_TOKEN | clients | Demo single-tenant secret, sent as X-Team-Token |
| PORT | api | Defaults to 3000; Railway injects it automatically |
| TRAILHEAD_AUTO_CREATE_TEAMS | api | false to disable on-the-fly team creation |
| TRAILHEAD_ALLOW_DEMO_RESET | api | true to allow DELETE /team/data on the demo team |
| NEXT_PUBLIC_API_URL | dashboard | Where the dashboard fetches from |
| NEXT_PUBLIC_TEAM_TOKEN | dashboard | Team token surfaced to the browser |
| trailhead.apiUrl / .teamToken / .userId | vscode-ext | VS Code settings |
| TRAILHEAD_API_URL / TRAILHEAD_TEAM_TOKEN | mcp-server | Per-repo MCP config |
Deployment
- API – Railway. railway.json declares npm --workspace=apps/api start with a healthcheck on /.
- Dashboard – Vercel. Set NEXT_PUBLIC_API_URL and NEXT_PUBLIC_TEAM_TOKEN, then vercel --prod from apps/dashboard/.
- Landing page – Vercel; already live at https://learnloop-gules.vercel.app/.
- Browser extension – loaded unpacked from apps/browser-ext/dist/.
- VS Code extension – vsce package from apps/vscode-ext/.
- MCP server – distributed via npx trailhead-mcp init (per-repo wiring, multi-tenant token derivation from the git remote).
How the pieces fit
- An engineer types a prompt. The browser extension debounces 250 ms and hits /score. The card mounts under the textarea with five per-dimension bars and missing-dimension hints.
- Below 7: a 5-second Have Claude clarify nudge, with Send as-is as the fallback. Each /score writes 5 skill_observation rows; the dashboard's /skill-arc chart polls every 2 s, so the rightmost bucket climbs as the user prompts.
- In Claude Code or Copilot Chat, the MCP server's coach tool is called first. The server returns proceed: false plus a teach block when the score is low; the host LLM relays the block, gathers a reply, and calls back. Three rounds max, then a reveal block shows the score arc and prompt diff.
- When the user states a team-wide convention, wiki_save calls POST /wiki/propose. Server-side normalize + dedup means repeated calls reinforce the same draft instead of duplicating it; reinforcement_count >= 3 promotes draft → durable. The VS Code extension polls /wiki/recent and toasts the update: visible proof of the autonomous loop.
- The next prompt the same engineer (or a teammate) types in the same path triggers /score again, but now context_path pulls the team's HCL bundle into Gemini's system prompt, so the rubric is calibrated against the team's own conventions.
Specs
The project was specced before it was built. Source of truth for why:
- docs/superpowers/specs/2026-04-25-trailhead-design.md – master spec
- docs/superpowers/specs/2026-04-25-mcp-plugin-ux-design.md – MCP install story and four-tool surface
- docs/superpowers/specs/2026-04-25-trailhead-browser-ext-design.md – Claude.ai content-script architecture
- docs/superpowers/specs/2026-04-25-demo-completion-design.md – dashboard and seeding plan
- docs/superpowers/specs/2026-04-26-trailhead-educational-loop-design.md – the teach → reveal coaching loop
- docs/superpowers/specs/2026-04-26-wiki-bootstrap-rich-design.md – async rich bootstrap
- docs/superpowers/specs/2026-04-26-improve-widget-design.md – multi-turn improve widget
- docs/roadmaps/ – per-surface 24-hour build plans
Each app and package also has its own README.md covering surface-specific
contracts, builds, and tests.
Tech stack
- Backend: Hono, TypeScript, Node 22, @hono/node-server, raw pg
- DB: Postgres on Neon, no ORM
- LLMs: Gemini 2.5 Flash (scoring, JSON-schema mode), Gemini 2.5 Pro (diff narration, rich bootstrap)
- Observability: Langfuse (hosted) – one trace per request, one generation per LLM call
- Frontend: Next.js 15 + Tailwind + Recharts + SWR (dashboard); vanilla TS + esbuild (extensions); React via CDN (landing page)
- MCP: @modelcontextprotocol/sdk, STDIO transport
- Build: npm workspaces; per-package tsc/esbuild
- Hosts: Railway (API), Vercel (dashboard + landing page), per-repo MCP install via npx trailhead-mcp
