Auto Claude Code Research In Sleep
ARIS (Auto-Research-In-Sleep): lightweight Markdown-only skills for autonomous ML research, covering cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in; works with Claude Code, Codex, OpenClaw, or any LLM agent.
Auto-claude-code-research-in-sleep (ARIS)
Use ARIS in Claude Code / Cursor / Trae as a skill-based workflow, or get the full experience with the standalone CLI. Use it whichever way you like.
AI agents: read AGENT_GUIDE.md instead; it is structured for LLM consumption, not human browsing.
ARIS-Code CLI (standalone install) · English | Download
ARIS-Code v0.4.4 (2026-04-20): Setup UX + reviewer routing fixes (resolves #158, #162) | `/setup` no longer forces Bearer for Anthropic + custom URL (fixes ModelScope / code.newcli.com etc.) | Provider-aware proxy URL hints | Stale state no longer leaks across provider switches | LlmReview smart fallback
Previous versions
v0.4.3 (2026-04-17): Third-party Anthropic-compat proxy support (Bedrock etc.) | Skip beta flags that proxies reject | Propagate custom base URL for the `anthropic` provider | Credit @screw-44
v0.4.2 (2026-04-17): Auto-compaction corruption fix | Compaction summary preserved on OpenAI-compat executors | Shell-provided API keys no longer erased on launch
v0.4.1 (2026-04-15): Plan mode (`/plan`) | Cooperative Ctrl+C interrupt | Auto-retry (429/5xx/network) | Research Wiki (persistent knowledge base) | Self-Evolution (`/meta-optimize`) | Local models (LM Studio/Ollama) | 62 skills synced
v0.3.11 (2026-04-13): Reviewer Anthropic-compatible mode (Claude via proxy)
v0.3.9 (2026-04-11): Proxy/custom base URL (CCSwitch) | Local models (LM Studio/Ollama) | Windows (experimental)
v0.3.5 (2026-04-08): Research Wiki (persistent papers/ideas/experiments/claims + relationship graph) | Meta-Optimize self-evolution (analyze logs → propose SKILL.md patches)
v0.3.0 (2026-04-03): Multi-file memory index | Rich task system (TodoWrite) | `/plan` | Security hardening
v0.2.2 (2026-04-03): `/plan` step-by-step planning | `/tasks` persistent tracking
v0.2.1 (2026-04-03): Persistent Memory | Kimi K2.5 multi-turn fix | CJK cursor fix
v0.2.0 (2026-04-02): Open source | Kimi + MiniMax + GLM support | Smart LlmReview routing | CI/CD
v0.1.0 (2026-04-02): Initial release | Multi-executor & reviewer | 42 bundled skills
Chinese README | English

Let Claude Code do research while you sleep. Wake up to find your paper scored, weaknesses identified, experiments run, and narrative rewritten, all autonomously.
Radically lightweight: zero dependencies, zero lock-in. The entire system is plain Markdown files. No framework to learn, no database to maintain, no Docker to configure, no daemon to babysit. Every skill is a single `SKILL.md` readable by any LLM. Swap Claude Code for Codex CLI, OpenClaw, Cursor, Trae, Antigravity, Windsurf, or your own agent, and the workflows still work. Fork it, rewrite it, adapt it to your stack. ARIS is a methodology, not a platform. What matters is the research workflow; take it wherever you go.
Join Community
Custom Claude Code skills for autonomous ML research workflows. These skills orchestrate cross-model collaboration: Claude Code drives the research while an external LLM (via Codex MCP) acts as a critical reviewer. Alternative model combinations (Kimi, LongCat, DeepSeek, etc.) are also supported, with no Claude or OpenAI API required; for example, MiniMax-M2.7 as executor with GLM-5 as reviewer, or the reverse. Codex CLI native: the full skill set is also available for OpenAI Codex. Works in Cursor, Trae (ByteDance AI IDE), and Antigravity (Google's agent-first IDE). Free tier via ModelScope: zero cost, zero lock-in.
Why not self-play with a single model? Using Claude Code subagents or agent teams for both execution and review is technically possible, but tends to fall into local minima: the same model reviewing its own patterns creates blind spots.
Think of it like adversarial vs. stochastic bandits: a single model self-reviewing is the stochastic case (predictable reward noise), while cross-model review is adversarial (the reviewer actively probes weaknesses the executor didn't anticipate), and adversarial bandits are fundamentally harder to game.
Why two models, not more? Two is the minimum needed to break self-play blind spots, and 2-player games converge to Nash equilibrium far more efficiently than n-player ones. Adding more reviewers increases API cost and coordination overhead with diminishing returns: the biggest gain is going from 1→2, not 2→4.
Claude Code's strength is fast, fluid execution; Codex (GPT-5.4 xhigh) is slower but more deliberate and rigorous in critique. These complementary styles (speed × rigor) produce better outcomes than either model talking to itself.
Want the strongest possible reviewer? Add `--- reviewer: oracle-pro` to any skill to route reviews through GPT-5.4 Pro via Oracle MCP. Pro-level reasoning for proof verification, experiment auditing, and final stress tests. Works with an API key or free browser mode. Setup →
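For example, a hypothetical invocation routing a proof check through the Oracle reviewer (the `--- key: value` flag style follows the examples elsewhere in this README; `/proof-checker` is one of the skills listed as supporting `oracle-pro`):

```
/proof-checker "paper/" --- reviewer: oracle-pro
```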
More Than Just a Prompt
These are full pipelines, but you can also use each workflow independently. Already have an idea? Skip to Workflow 1.5. Have results? Jump to Workflow 3. Got reviews? Jump to Workflow 4. Want persistent memory? Enable the Research Wiki. See Quick Start for all commands and Workflows for the full breakdown.
Basic mode: give ARIS a research direction and it handles everything:
/research-pipeline "factorized gap in discrete diffusion LMs"
Targeted mode: got a paper you want to improve? Give ARIS the paper + the code:
/research-pipeline "improve method X" --- ref paper: https://arxiv.org/abs/2406.04329, base repo: https://github.com/org/project
ARIS reads the paper → finds its weaknesses → clones the codebase → generates ideas that specifically fix those weaknesses with that code → runs experiments → writes your paper. Like telling a research assistant: "read this paper, use this repo, find what's missing, and fix it."
Mix and match: `ref paper` only = "what can be improved?", `base repo` only = "what can I build with this code?", both = "improve this paper using this code."
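A sketch of the three combinations (the URLs are placeholders; the flag style follows the targeted-mode example above):

```
# Paper only: "what can be improved?"
/research-pipeline "improve method X" --- ref paper: https://arxiv.org/abs/2406.04329

# Repo only: "what can I build with this code?"
/research-pipeline "new directions" --- base repo: https://github.com/org/project

# Both: "improve this paper using this code"
/research-pipeline "improve method X" --- ref paper: https://arxiv.org/abs/2406.04329, base repo: https://github.com/org/project
```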
Rebuttal mode: reviews just dropped? Don't panic. ARIS reads every concern, builds a strategy, and drafts a rebuttal that's grounded, structured, and under the character limit:
/rebuttal "paper/ + reviews" --- venue: ICML, character limit: 5000
| Parameter | Default | What it does |
|---|---|---|
| venue | ICML | Target venue (ICML/NeurIPS/ICLR/CVPR/ACL/AAAI/ACM) |
| character limit | (required) | Required. Hard character limit for the rebuttal text |
| quick mode | false | Stop after parsing + strategy (Phases 0-3). See what reviewers want before drafting |
| auto experiment | false | Auto-run supplementary experiments via /experiment-bridge when reviewers ask for new evidence |
| max stress test rounds | 1 | How many times GPT-5.4 xhigh stress-tests the draft |
| max followup rounds | 3 | Per-reviewer follow-up round limit |
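Parameters combine in a single invocation. A hypothetical example (the values are illustrative; the flag style follows the `/rebuttal` example above):

```
/rebuttal "paper/ + reviews" --- venue: NeurIPS, character limit: 6000, quick mode: true
```

With `quick mode: true` the run stops after the parsing + strategy phases, so you can inspect what reviewers want before committing to a full draft.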
Three safety gates; the rebuttal will NOT finalize if any fails:
- No fabrication: every claim maps to a paper/review/user-confirmed result
- No overpromise: every promise is user-approved
- Full coverage: every reviewer concern is tracked
Two outputs: PASTE_READY.txt (exact char count, paste to venue) + REBUTTAL_DRAFT_rich.md (extended version for manual editing).
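The exact-character-count guarantee of PASTE_READY.txt can be sanity-checked in a few lines of Python; a minimal sketch (`fits_limit` is a hypothetical helper, not part of ARIS):

```python
def fits_limit(text: str, limit: int) -> bool:
    """Return True if the rebuttal text fits the venue's hard character limit."""
    return len(text) <= limit

# Stand-in for the contents of PASTE_READY.txt
draft = "We thank all reviewers for their careful reading. " * 80
print(len(draft), fits_limit(draft, 5000))
```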
After acceptance: your paper is in, now prepare the presentation:
/paper-slides "paper/" # → Beamer PDF + PPTX + speaker notes + Q&A prep
/paper-poster "paper/" # → A0/A1 poster PDF + editable PPTX + SVG
From idea to paper to podium: one toolchain.
Community Submissions Built with ARIS
| Paper | AI-review signal | Status | Author | Stack |
|---|---|---|---|---|
| CS Paper Submission | CSPaper simulated review: 8/10; AI reviewer recommendation: "clear accept" | Submitted to a CS conference; awaiting official feedback | @DefanXue & @Monglitay | Claude Code + GPT-5.4 |
| AAAI 2026 Paper Submission | Stanford Agentic Reviewer AAAI-style review: 7/10; AI reviewer recommendation: "good paper, accept" | Submitted to AAAI 2026 Main Technical; awaiting official decision | @xinbo820-web | Pure Codex CLI |
| UAV-CC | Under review | Submitted to IEEE TGRS | @wxx827 | Claude Opus 4.6 + Codex 5.4 xhigh + Cursor |
Built with ARIS, from idea to submission. AI-review scores are community-reported signals from simulated/third-party review tools, not official peer-review or acceptance results. Because ARIS explicitly iterates against AI reviewers, higher AI-review scores are expected and should be read as stress-test feedback; human reviewers may bring newer perspectives, venue taste, and concerns not captured by those systems. Full details + review screenshots →
What's New
- 2026-05-06: `/paper-talk` workflow + `/slides-polish` skill, an end-to-end conference talk pipeline. `/paper-talk` orchestrates paper → slide outline → Beamer + PPTX → per-page polish → assurance audits → final report (sister to `/paper-writing`, `/paper-poster`); it composes `/paper-slides`, `/slides-polish`, plus `/paper-claim-audit` + `/citation-audit` when `assurance: conference-ready`. `/slides-polish` is the post-generation visual pass: per-page Codex review against a reference PDF plus a fix-pattern catalog (PPTX font scaling 1.5-1.8× for projector-readable size, text-frame resize after the font bump, banner-as-tcolorbox, italic style leak guard, em-dash spacing, Chinese EA font hint via PingFang SC, anonymity placeholder discipline). The assurance ladder `draft / polished (default) / conference-ready` is independent of the effort axis; `effort: lite, assurance: conference-ready` is legal and means "fast pipeline, but every audit must emit a verdict before final". A Phase 4 staging adapter materializes slide text + speaker notes + talk script as a synthetic paper directory (`.aris/paper-talk/audit-input/sections/*.tex` + symlinked `.bib` / `results/` / `figures/`) so the existing audits run with their paper-shaped contracts and emit 6-state JSON verdicts per `shared-references/assurance-contract.md`.
- 2026-05-05: `/resubmit-pipeline`, Workflow 5: text-only resubmit across venues (#208). Port a polished paper from one venue to another under hard constraints (no new experiments, no bib edits, no framework changes, never overwrite prior submissions). 5 phases: physical isolation → 5-layer anonymity check → audits (proof / claim / citation `--soft-only`) → microedits via `/auto-paper-improvement-loop --edit-whitelist` with a per-round diff gate → adversarial gate via `/kill-argument` → final compile + Overleaf push via `/overleaf-sync`. Two prerequisite SKILL upgrades shipped in the same PR: `/auto-paper-improvement-loop --edit-whitelist <path>` (YAML schema with allowed/forbidden paths + `forbidden_operations` like `new_cite`/`new_theorem_env`/`numerical_claim`, `forbidden_deletions`, `requires_user_approval_for`, `max_edits_per_round`) and `/citation-audit --soft-only` (translates KEEP/FIX/REPLACE/REMOVE verdicts into text-rewrite proposals when the bib is frozen; hallucinated citations get a `drop_cite_in_body_only` action). Master `RESUBMIT_REPORT.json` ledger per `shared-references/assurance-contract.md`; 7-verdict failure mode table including the `USER_DECISION` runtime state.
- 2026-05-05: `/kill-argument`, adversarial Attack-Adjudication review for theory papers (#206). Two fresh codex 5.5 + xhigh threads: Thread 1 writes the strongest 200-word rejection memo a senior area chair would produce; Thread 2 (an independent adjudicator, NOT a defender) reads the current paper and classifies each rejection point as `answered_by_current_text` / `partially_answered` / `still_unresolved` with file:line evidence. Output: `KILL_ARGUMENT.{md,json}`, detect-only. Integrated as Phase 5.6 of `/paper-writing` (between claim-audit and citation-audit) and as the canonical implementation called from `/auto-paper-improvement-loop` Step 5.5, replacing the inline prompt in both places. Mandatory at `assurance: submission` for theory-heavy / scope-heavy papers; emits `NOT_APPLICABLE` for empirical papers without scope claims. The audit JSON is `verify_paper_audits.sh`-compatible (full schema per `shared-references/assurance-contract.md`, 6-state verdict). Catches the failure mode score-based reviews miss: when every local component is correct (numbers match, cites resolve, theorems prove) but the paper still oversells what it actually establishes.
- 2026-05-04: `/research-wiki` and 8 caller skills now resolve the helper via a fallback chain (#204). Bug: after `bash tools/install_aris.sh` the helper lives at `.aris/tools/research_wiki.py` (symlink), but skills hard-coded `tools/research_wiki.py` and silently failed when invoked, so `research-wiki/` stayed empty across full W1 runs. Fix: a 3-layer chain (`.aris/tools/` → `tools/` → `$ARIS_REPO/tools/`) codified in `shared-references/wiki-helper-resolution.md`. The manual-copy workaround at `<project>/tools/research_wiki.py` is layer 2, so users who `cp`-installed the helper as a temporary fix continue to work. Existing users: rerun `bash tools/install_aris.sh` once; it also picks up a separate Python 3.9 `ImportError` fix in the helper.
- 2026-05-03: Opt-in `--- style-ref: <source>` for writer-side skills (#202). `/paper-{plan,write,writing,illustration,poster,slides}`, `/grant-proposal`, and `/auto-paper-improvement-loop` accept an optional `--- style-ref: <source>` argument that mimics a reference paper's structural style (section ordering, theorem/figure density, sentence cadence, citation style) without copying its prose, claims, or terminology. Sources: local `.tex` dir/file, local PDF, arXiv id (`2501.12345` or `arxiv:2501.12345`), HTTP/HTTPS URL. Overleaf URLs/IDs are rejected; clone via `/overleaf-sync setup <id>` first. Default OFF; existing behavior is unchanged when the flag is absent. Reviewer / auditor sub-skills (`/proof-checker`, `/paper-claim-audit`, `/citation-audit`, the improvement-loop reviewer) never see the style ref, preserving cross-model review independence. Existing ARIS users: the helper ships at `tools/extract_paper_style.py`, distributed via the `.aris/tools` symlink (`install_aris.sh` Phase 0, added in #192). Re-run `bash tools/install_aris.sh` once to refresh the symlink and pick up the helper. Manual fallback: `cp <ARIS-repo>/tools/extract_paper_style.py <your-project>/tools/`. Without either, the writer skill aborts with a clear error pointing here.
- 2026-05-02: Community spotlight: rosetta by @SyntaxSmith. Programmatic access to ChatGPT Pro / `gpt-5.5-pro` / DeepResearch from Node, via Chrome CDP Fetch interception + WebSocket second-leg streaming; ships an MCP server for Claude Code / Codex / Cline. An alternative implementation path to Oracle MCP for ARIS users invoking `--- reviewer: oracle-pro`: same target capability (Pro-tier reviewer), different mechanics. Indexed under Awesome Community Skills & Extensions. Star it if you're using it!
- 2026-05-02: Model & MCP routing updates. (a) `/gemini-search` default bumped to `gemini-3-pro-preview` (strongest Gemini, out of the box). Action required: needs `gemini-cli` v0.40+ (run `gemini --version`; upgrade with `npm i -g @google/gemini-cli` if older). Legacy override: `/gemini-search "topic" --- model: gemini-2.5-pro`. Other overrides: `gemini-3-flash-preview` (faster), `auto-gemini-3` (load-routed). (b) `/idea-discovery` Phase 1 now includes Gemini in its literature survey by default (#199); it auto-injects `--- sources: all, gemini` into `/research-lit` unless the user passed an explicit `--- sources:`, with a graceful skip if `gemini-cli` is not installed. (c) The Oracle MCP upstream PR queue (steipete/oracle/pulls) is the first triage stop when invoking `--- reviewer: oracle-pro` (especially `o3-deep-research` / `gpt-5.5-pro`); ARIS does not vendor Oracle MCP, so check upstream first if behavior surprises you (reviewer-routing.md).
- 2026-05-02: Tools-infrastructure migration started. (a) `install_aris.sh` creates an optional `.aris/tools` symlink (#192, closes #174): Phase 0 of the 4-step tools-stability plan (#174 → #176 → #177 → #178); idempotent, zero impact until rerun. (b) `/experiment-queue` orchestration paths repaired (#193), the first real user of the symlink; 7 cascading bugs fixed via 3 rounds of Codex MCP `gpt-5.5` xhigh audit. Pure prose + docstring changes; `queue_manager.py` logic untouched. A parallel Windows `install_aris.ps1` update is tracked as a follow-up.
- 2026-05-02: Three new opt-in audit flags via the fast-path delegated-agent workflow (#187, #188, #189). `/citation-audit --uncited` surfaces bib entries with no `\cite{}` reference (detect-only). `/proof-checker --deep-fix` adds a repair-grade plan to the Phase 1 reviewer prompt (corrected statement / patch plan / closure tests + Schur/quadratic-form algebra sanity). `/proof-checker --restatement-check` adds Phase 3.6 cross-location theorem drift detection (6 drift signatures). Zero behavior change when the flags are unset. Plus doc PRs #190 (thread-policy) + #191 (auto-loop xref). Delegated-agent + maintainer-fixup pattern; Codex MCP `gpt-5.5` xhigh review caught 6+ blockers.
- 2026-05-01: Gemini + OpenAlex literature sources for `/research-lit` (#175, community contribution by @stdAri). Two opt-in sources: `/gemini-search` (AI-driven discovery via the `jamubc/gemini-mcp-tool` MCP) and `/openalex` (250M+ work open citation graph, no API key). Triggered via `--- sources: gemini` or `--- sources: openalex`; zero behavior change under the default `all` (both excluded). Maintainer fixups: corrected the `@google/gemini-cli` npm name; added `try/except ImportError` + a bash preflight for a graceful OpenAlex skip when `requests` is missing.
- 2026-04-30: `/rebuttal` per-reviewer thread mode + transferable patterns (SKILL.md). Adds `VENUE_MODE` (`single_document` | `per_reviewer_thread`) for OpenReview-style venues, `reviewer_priority: pivotal` routing, a `structural_distinction` response mode, 5 reviewer-defensive heuristics, 2 Phase 5 lints, and severity-scaled stress rounds. The default `VENUE_MODE = single_document` keeps ICML-style behavior, so zero change for existing users. Three rounds of cross-model review before/after merge.
- 2026-04-30: Codex skill mirror rebuilt + a dedicated install/update chain (#179, community contribution by @No-518). `skills/skills-codex/` now mirrors all 67 mainline skills; replaces the `mcp__codex__codex` reviewer path with Codex-native `spawn_agent` + `send_input`. New `tools/install_aris_codex.sh` + `tools/smart_update_codex.sh` handle project-local symlinks with manifest tracking. Anti-drift: `tests/test_codex_skill_mirror.py` + `tests/test_codex_install_update.py` (26 failure paths). Open discussion in #184.
- 2026-04-24: `/paper-illustration-image2`, Codex-native image generation as the Phase 2b illustration backend (#166, community contribution by @kbr19-thu). Uses ChatGPT Plus/Pro quota via a local Codex app-server MCP bridge, so no `GEMINI_API_KEY` is required. Triggered by `/paper-writing --- illustration: codex-image2`; the default stays `figurespec` (zero behavior change). Async-only API, sandboxed writes to `figures/ai_generated/`, integration-contract-compliant helper. Marked experimental (the Codex debug app-server is unstable upstream).
- 2026-04-21: Research Wiki ingest actually works now (`research_wiki.py`, `/research-wiki`). Fixes a user-reported bug where `/research-wiki init` left `papers/` empty forever (the `ingest` subcommand had no implementation; paper-reading skills had no wiki hook). A new canonical `python3 tools/research_wiki.py ingest_paper` helper owns slugging / metadata fetch / dedup / page render; all 6 paper-reading skills are wired to it. Manual backfill via `sync --arxiv-ids` or `sync --from-file`. Ships with `integration-contract.md` formalizing the six-component pattern every cross-skill integration must follow.
- 2026-04-21: Assurance Gate: `--- effort: beast | max` now really runs the mandatory audits (`assurance-contract.md`, `tools/verify_paper_audits.sh`). Fixes the silent skip of `/proof-checker` / `/paper-claim-audit` / `/citation-audit` at high effort. New `assurance` axis (`draft` | `submission`) independent from `effort`: `lite`/`balanced` → `draft` (zero behavior change), `max`/`beast` → `submission`. At submission the 3 audits emit a JSON artifact with a 6-state verdict; `paper-writing` Phase 6 runs the external verifier as the source of truth (a non-zero exit blocks the Final Report). SHA256 input hashing catches stale audits. Escape hatch: `--- effort: beast, assurance: draft`.
Earlier updates (2026-03-12 to 2026-04-20, 44 entries)
- 2026-04-20: Project install: flat layout + manifest tracking. Fixes a real bug where the previous nested install (`.claude/skills/aris/`) hid skills from Claude Code's slash-command discovery (CC only scans one directory level). Anyone who ran `install_aris.sh` before this date was silently affected. The new `install_aris.sh` creates one symlink per skill at `.claude/skills/<name>`, writes a versioned manifest to `.aris/installed-skills.txt`, and is re-runnable to reconcile new/removed upstream skills. Defense-in-depth: 13 safety rules (no-symlinked-parents, exact-target revalidation, slug regex, atomic same-dir manifest rename, no-overwrite-real-files, mkdir-based portable lock, ADOPT for crash recovery, ...). Granular `--adopt-existing` / `--replace-link` flags replace the all-or-nothing `--force`. Migration paths: `--from-old` for a legacy nested symlink, `--migrate-copy keep-user|prefer-upstream` for a legacy nested copy. `smart_update.sh --target-subdir .claude/skills/aris` is now deprecated with a redirect to `install_aris.sh`. A stale-file bug in the `cp -r` overlay is also fixed (now `rm -rf && cp -r` for the safe-update path).
- 2026-04-19: `/overleaf-sync`, a two-way bridge between a local ARIS paper directory and an Overleaf project via the official Overleaf Git bridge (Premium). Lets collaborators keep editing in the Overleaf web UI while ARIS audit/edit pipelines (`/paper-claim-audit`, `/citation-audit`, `/auto-paper-improvement-loop`) keep running locally. Sub-commands: `setup` (one-time, user-driven so the agent never sees the token) / `pull` (with a diff protocol that flags half-sentences, typos, and claim/cite changes that should re-trigger audits) / `push` (with a confirmation gate before writing to shared Overleaf state) / `status` (3-way divergence check). The token never touches the agent or any file; it is primed once into the macOS Keychain via the user's terminal, then auth-free for all subsequent agent operations.
- 2026-04-19: `/citation-audit`, the fourth and final layer of the evidence-and-claim assurance stack (experiment-audit → result-to-claim → paper-claim-audit → citation-audit). A fresh cross-family reviewer (gpt-5.4 via Codex MCP) with web/DBLP/arXiv lookup verifies every `\cite{...}` along three independent axes: existence (the paper resolves at the claimed arXiv ID/DOI/venue), metadata correctness (authors/year/venue/title match canonical sources), and context appropriateness (the cited paper actually establishes the claim it supports, the most diagnostic check). Per-entry verdicts: KEEP / FIX / REPLACE / REMOVE. Auto-integrated into Workflow 3 Phase 5.8 as the pre-submission bibliography gate. Empirical motivation: in a real submission run, several real papers were cited in contexts they did not actually support, and at least one entry shipped with `author = "Anonymous"`; none were caught by metadata-only checks.
- 2026-04-17: `/experiment-queue` integrated into Workflow 1.5 + research-pipeline. `experiment-bridge` Phase 4 Deploy now auto-routes by milestone job count: ≤5 jobs → `/run-experiment`; ≥10 jobs or phase dependencies → `/experiment-queue` (with OOM retry, stale-screen cleanup, wave-transition gating, crash-safe state). A new `--- batch: queue` override forces global queue mode. Large multi-seed sweeps from `EXPERIMENT_PLAN.md` (e.g., 36-cell N × seed × n_train grids) now get proper orchestration without manual queue invocation.
- 2026-04-17: Project-local symlink install (resolves #118), the new recommended default install. `bash tools/install_aris.sh` auto-detects the platform (Claude Code / Codex CLI), creates a `.claude/skills/aris` or `.agents/skills/aris` symlink to the ARIS repo, adds a managed `<!-- ARIS:BEGIN -->` block to `CLAUDE.md` / `AGENTS.md` telling the agent to use only project-local skills, and records install metadata in `.aris/skill-source.txt`. Solves the skill collision problem when ARIS is mixed with Superpowers / OpenHands / other community packs in the same global skill directory. A PowerShell version (`install_aris.ps1`) ships with junction support for Windows. A `smart_update.sh --target-subdir` flag was added for `.agents/skills/aris` (Codex) project-copy installs; symlinked installs now correctly refuse `smart_update` and direct users to `git pull`. Global install remains supported for power users.
- 2026-04-16: `/figure-spec`, a deterministic JSON→SVG renderer packaged as a first-class skill. The preferred default for architecture/workflow/pipeline/audit-cascade figures in papers. Shape-aware edge clipping (rect/circle/ellipse/diamond), self-loops, curved edges, multi-line labels with CJK width estimation. Editable vector output, reproducible (same spec → same SVG), no external API. Phase 2b in Workflow 3 restored: `illustration: figurespec` (new default) / `gemini` / `mermaid` / `false`, a 4-way illustration selector with complementary strengths.
- 2026-04-16: `/experiment-queue`, an SSH job queue for multi-seed/multi-config ML experiments. Designed from real 36-cell NeurIPS sweep pain points: OOM-aware retry with backoff, stale-screen cleanup, wave-transition race prevention, teacher→student phase dependencies, and a crash-safe scheduler that resumes from JSON state. Declarative grid specs expand automatically (e.g., N × seed × n_train → 36 jobs). Configurable `conda_hook` + `gpu_free_threshold_mib` for non-standard environments. Use it for ≥10 jobs; `/run-experiment` stays for ad-hoc runs.
- 2026-04-15: Paper Writing Pipeline Hardening: 10 empirically motivated patches from a real NeurIPS run. `REVIEWER_BIAS_GUARD=true`: every review round uses a fresh thread (codex-reply inflated scores 3→8/10). Reviewer Independence Protocol: no fix summaries to the reviewer. Step 4.5 Restatement Regression Test: catches theorem drift across fix rounds. Step 5.5 Kill Argument Exercise: final-round adversarial attack/defense for theory papers. Location-aware overfull blocking. Theory Paper Consistency Pass in `/paper-write`. Enforced Bib Hygiene with DBLP/CrossRef validation. Phase 5.5 Mandatory Final Claim Audit as a submission gate. Review Tracing Protocol: full prompt/response pairs saved to `.aris/traces/` for reviewer-independence audit (review-tracing.md, `save_trace.sh`). Inspired by a community contribution.
- 2026-04-15: FigureSpec Renderer v2, deterministic JSON→SVG figure generation for academic papers. Shape-aware edge clipping (rect/circle/ellipse/diamond), self-loops, curved edges, multi-line labels with CJK width estimation, comprehensive validation (type checks, structure, palette). Went through 5 rounds of Codex review (3/10 → 7/10). All architecture and workflow diagrams in the ARIS tech report were generated with this pipeline. New `--- mode: vector` for the `/paper-illustration` skill.
- 2026-04-14: `/paper-claim-audit`, zero-context paper-to-evidence verification. A fresh reviewer with NO prior context compares every number in the paper against raw result files. Catches rounding inflation, best-seed cherry-picking, config mismatches, delta errors, and scope overclaims. Auto-integrated into Workflow 3 (Phase 4.7). Completes the 3-layer audit chain: `/experiment-audit` (code) → `/result-to-claim` (science) → `/paper-claim-audit` (reporting). Visual PDF review also added to the improvement loop: the reviewer now sees the compiled PDF, not just LaTeX source. Inspired by Hermes Agent.
- 2026-04-13: GPT-5.4 Pro via Oracle: `--- reviewer: oracle-pro` on any skill for the strongest available reviewer. API mode (fast) or browser mode (free). Supported on `/research-review`, `/auto-review-loop`, `/experiment-audit`, `/proof-checker`, `/rebuttal`, `/idea-creator`, `/research-lit`. Default stays Codex xhigh. Not installed = zero impact. Setup →
- 2026-04-13: `/proof-checker`, rigorous mathematical proof verification via cross-model review. A 20-category issue taxonomy, two-axis severity, side-condition checklists (DCT/MCT/Fubini/IFT/...), a counterexample red team, and a proof-obligation ledger. Auto-integrated into Workflow 3: detects `\begin{theorem}` and runs before the improvement loop. Complements `/proof-writer`.
- 2026-04-10: Effort Levels: `--- effort: lite | balanced | max | beast`. Controls work intensity across all skills: papers found, ideas generated, review rounds, writing depth. Codex reasoning stays `xhigh` always. `beast` = every knob at maximum for top-venue sprints. The default `balanced` = zero change for existing users. Details →
- 2026-04-10: DeepXiv integration, progressive paper retrieval via the DeepXiv CLI. Opt-in: `--- sources: deepxiv` or `--- sources: all, deepxiv`. Staged reading: search → brief → head → section. `pip install deepxiv-sdk` to enable. Community contribution by @DreamEnding.
- 2026-04-10: `/experiment-audit`, cross-model experiment integrity verification. GPT-5.4 reads your eval scripts and results directly, checking for fake ground truth, self-normalized scores, phantom results, and scope inflation (#131, #57). Advisory: it warns loudly, never blocks. `/result-to-claim` auto-reads the audit if present. New experiment-integrity.md shared reference. The executor must never judge its own integrity.
- 2026-04-10: `tools/smart_update.sh`, an intelligent skill updater. Compares local vs. upstream, detects personal customizations (server paths, API keys), and only updates safe skills. `bash tools/smart_update.sh --apply`
- 2026-04-10: Community paper: UAV-CC, the first community paper with a full PDF archived. A UAV change captioning benchmark for IEEE TGRS by @wxx827. Stack: Claude Opus 4.6 + Codex 5.4 xhigh + Cursor. Papers are now archived in `community_papers/`.
- 2026-04-08: `/research-wiki`, a persistent research knowledge base inspired by Karpathy's LLM Wiki. Accumulates papers, ideas, experiments, and claims across the entire research lifecycle with typed relationships. Wiki-aware hooks in `/research-lit` (ingest papers), `/idea-creator` (read wiki + write ideas back), and `/result-to-claim` (update claim status + trigger re-ideation). Failed ideas become anti-repetition memory. ARIS now learns from its mistakes.
- 2026-04-05: `/meta-optimize`, outer-loop harness optimization for ARIS. Passively logs skill invocations, tool calls, failures, and parameter overrides via Claude Code hooks. Run `/meta-optimize` to analyze accumulated usage data and propose SKILL.md improvements; reviewer-gated and user-approved. Inspired by Meta-Harness (Lee et al., 2026). ARIS now optimizes itself.
- 2026-04-04: Codex Plugin deep integration: `/codex:rescue` is now auto-invoked when experiments fail (Workflow 1.5) or LaTeX won't compile (Workflow 3). GPT independently diagnoses the bug before Claude retries; two AI debuggers are better than one. Optional: `codex exec` powers nightmare review, `/codex:rescue` powers auto-debug. Setup →
- 2026-04-03: Modal serverless GPU. No GPU? Set `gpu: modal` in CLAUDE.md, run one command (`modal run launcher.py`), no SSH, no Docker, auto scale-to-zero. A $30/month free tier is enough to try ARIS experiments without any hardware. `pip install modal && modal setup` and go. Community contribution by @zeyuzhangzyz.
- 2026-04-03: Reviewer Difficulty Levels: `medium` (default, unchanged), `hard` (reviewer memory + debate protocol), `nightmare` (GPT reads the repo directly via `codex exec`, so Claude can't hide anything). Use `--- difficulty: nightmare` for a maximum stress test before submission.
- 2026-03-30: Auto-debug & exhaust-before-surrender: experiment-bridge auto-diagnoses failures (OOM, import, CUDA, NaN) and retries up to 3×. Inspired by PUA.
- 2026-03-30: Vast.ai GPU rental: `gpu: vast` auto-rents the cheapest GPU. By @YIHONG-JIN. MiniMax M2.7 upgrade by @octo-patch.
- 2026-03-27: IEEE venue support (9 families). Semantic Scholar. By @ypd666.
- 2026-03-26: Document-based input: `RESEARCH_BRIEF.md` auto-detect.
- 2026-03-24: Workflow 4: `/rebuttal`, a 7-phase pipeline with 3 safety gates.
- 2026-03-23: `/training-check`, `/result-to-claim`, `/ablation-planner` integrated. `compact` mode. By @JingxuanKang & @couragec.
- 2026-03-22: Templates: input templates for every workflow. 7 venue templates: CVPR, ACL, AAAI, ACM MM added. Anti-hallucination fix: Workflow 2 enforces DBLP → CrossRef → [VERIFY]. `base repo`: clone a GitHub repo as the base codebase (`--- base repo: https://github.com/org/project`).
- 2026-03-22: Codex + Gemini review guide: Codex executes, Gemini reviews via a local `gemini-review` MCP bridge. CN
- 2026-03-20: Antigravity adaptation guide: use ARIS skills in Google Antigravity (agent-first IDE). Community contribution by @PeppaPigw.
- 2026-03-20: Trae adaptation guide: use ARIS skills in Trae (ByteDance AI IDE). Community contribution by @Prometheus-cotigo. `formula-derivation`: community contribution by @Falling-Flower.
- 2026-03-19: `paper-poster`, conference posters. Community contribution by @dengzhe-hou.
- 2026-03-19: Workflow 1.5 upgraded: `/experiment-bridge` GPT-5.4 code review. W&B fix.
- 2026-03-18: `paper-slides` + Codex+Claude bridge + Cursor guide + Codex CLI skills + `grant-proposal` + `paper-illustration` (Gemini) + CitationClaw.
- 2026-03-17: Git code sync + ModelScope guide + parameter pass-through.
- 2026-03-16: `research-refine` + `experiment-plan`: turn vague ideas into problem-anchored proposals with claim-driven experiment roadmaps. Now integrated into Workflow 1 (`/idea-discovery`). Community contribution by @zjYao36.
- 2026-03-16: Alibaba Coding Plan guide: one API key, 4 models (Kimi-K2.5 + Qwen3.5+ + GLM-5 + MiniMax-M2.7), dual-endpoint setup. Community contribution by @tianhao909.
- 2026-03-15: Bring your own model! Any OpenAI-compatible API now works as a reviewer via the `llm-chat` MCP server. GLM, MiniMax, Kimi, LongCat, DeepSeek all tested; zero Claude or OpenAI API needed.
- 2026-03-15: OpenClaw adaptation guide: use ARIS research workflows in OpenClaw without Claude Code slash skills.
- 2026-03-15: `proof-writer`, a community skill for rigorous theorem proof drafting. Anti-hallucination citations: `/paper-write` now fetches real BibTeX from DBLP/CrossRef instead of LLM-generated entries; on by default, zero install.
- 2026-03-14: Feishu/Lark integration: three modes (off/push/interactive), mobile notifications for experiments, reviews, and checkpoints.
-
2026-03-13 โ ๐ Human-in-the-loop: configurable
AUTO_PROCEEDcheckpoints across all workflows. Full autopilot or step-by-step approval -
2026-03-12 โ ๐ Zotero + Obsidian + local PDFs + arXiv/Scholar: multi-source literature search with cross-model novelty verification
-
2026-03-12 โ ๐ Three end-to-end workflows complete: one prompt โ top-venue-style paper.
/research-pipelinechains idea discovery โ auto review โ paper writing autonomously -
2026-03-12 โ ๐
/paper-writingworkflow: narrative report โ structured outline โ figures โ LaTeX โ compiled PDF โ 2-round auto-improvement (4/10 โ 8.5/10)
Quick Start

```bash
# 1. Install skills
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep.git
mkdir -p ~/.claude/skills/            # create if it doesn't exist (new Claude Code versions)
cp -r Auto-claude-code-research-in-sleep/skills/* ~/.claude/skills/

# 1b. Update skills (when upstream has new versions)
cd Auto-claude-code-research-in-sleep && git pull
bash tools/smart_update.sh            # dry-run: shows what's new/changed/safe
bash tools/smart_update.sh --apply    # apply: adds new + updates safe ones

# Optional Codex mirror managed project install
bash tools/install_aris_codex.sh ~/your-codex-project
# Managed Codex project update
cd Auto-claude-code-research-in-sleep && git pull
bash tools/install_aris_codex.sh ~/your-codex-project --reconcile
# Copied Codex installs only (not for projects installed by install_aris_codex.sh)
bash tools/smart_update_codex.sh --local ~/.codex/skills
bash tools/smart_update_codex.sh --local ~/.codex/skills --apply

# 2. Set up Codex MCP (for review skills)
npm install -g @openai/codex
codex setup                           # set model to gpt-5.4 when prompted
claude mcp add codex -s user -- codex mcp-server

# 3. Use in Claude Code
claude
> /idea-discovery "your research direction"     # Workflow 1 – be specific! not "NLP" but "factorized gap in discrete diffusion LMs"
> /experiment-bridge                            # Workflow 1.5 – have a plan? implement + deploy + collect results
> /auto-review-loop "your paper topic or scope" # Workflow 2: review → fix → re-review overnight
> /paper-writing "NARRATIVE_REPORT.md"          # Workflow 3: narrative → polished PDF
> /rebuttal "paper/ + reviews" → venue: ICML    # Workflow 4: parse reviews → draft rebuttal → follow-up
> /research-pipeline "your research direction"  # Full pipeline: Workflow 1 → 1.5 → 2 → 3 end-to-end
> /research-wiki init                           # Enable persistent research memory (one-time)
> /meta-optimize                                # Meta: analyze usage logs → propose skill improvements
```
Research Wiki (optional): Give ARIS persistent memory across sessions. Papers, ideas, failed experiments – nothing is forgotten:

```bash
# In Claude Code:
> /research-wiki init   # creates research-wiki/ in your project
# That's it. From now on, /research-lit auto-ingests papers, /idea-creator reads
# the wiki before brainstorming (and writes ideas back), /result-to-claim updates
# claim status. Failed ideas become anti-repetition memory for future ideation.
```

See Research Wiki for the full guide.
Meta-optimization (optional): Run these in your normal terminal (not inside Claude Code) to enable passive usage logging:

```bash
# One-time setup in your project directory
mkdir -p .claude .aris/meta tools/meta_opt
cp Auto-claude-code-research-in-sleep/templates/claude-hooks/meta_logging.json .claude/settings.json
cp Auto-claude-code-research-in-sleep/tools/meta_opt/*.sh tools/meta_opt/
chmod +x tools/meta_opt/*.sh
# Then start Claude Code – hooks are active immediately
claude
```

Events are logged to both project-level (`.aris/meta/events.jsonl`) and global (`~/.aris/meta/events.jsonl`) logs. After 5+ workflow runs, run `/meta-optimize` to see data-driven improvement proposals. Use `/meta-optimize --global` to analyze trends across all your projects. See Workflow M for details.
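Because the log is newline-delimited JSON, it is easy to eyeball with a few lines of Python before running `/meta-optimize`. A minimal sketch – note that the `skill` field name is an assumption about the event schema for illustration, not something this README documents:

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def summarize_events(path):
    """Count logged events per skill in an ARIS events.jsonl file.

    NOTE: the "skill" field name is an assumed schema detail --
    adjust to whatever the hooks actually emit.
    """
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        if line.strip():
            counts[json.loads(line).get("skill", "unknown")] += 1
    return counts

# Demo on two synthetic log lines (real logs live in .aris/meta/events.jsonl):
log = Path(tempfile.gettempdir()) / "events.jsonl"
log.write_text('{"skill": "idea-creator", "event": "invoke"}\n'
               '{"skill": "idea-creator", "event": "done"}\n')
print(summarize_events(log))  # Counter({'idea-creator': 2})
```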
Templates available! See `templates/` for ready-to-use input templates for every workflow – research brief (Workflow 1), experiment plan (Workflow 1.5), narrative report (Workflow 3), paper plan (Workflow 3).

Optional: DeepXiv progressive retrieval

```bash
pip install deepxiv-sdk
```

Then use `/deepxiv` directly or opt into it from `/research-lit` with `→ sources: deepxiv` or `→ sources: all, deepxiv`.

Optional: Exa AI-powered web search

```bash
pip install exa-py
export EXA_API_KEY=your-key-here
```

Then use `/exa-search` directly or opt into it from `/research-lit` with `→ sources: exa` or `→ sources: all, exa`. Covers blogs, docs, news, and research papers with built-in content extraction.

Uninstall: To remove ARIS skills without affecting your own personal skills:

```bash
cd Auto-claude-code-research-in-sleep && ls skills/ | xargs -I{} rm -rf ~/.claude/skills/{}
```
Tip: All pipeline behaviors are configurable via inline overrides – append `→ key: value` to any command:

| Parameter | Default | What it does |
|---|---|---|
| `AUTO_PROCEED` | `true` | Auto-continue at idea selection gate. Set `false` to manually pick which idea to pursue before committing GPU time |
| `human checkpoint` | `false` | Pause after each review round so you can read the score, give custom modification instructions, skip specific fixes, or stop early |
| `sources` | `all` | Which literature sources to search: `zotero`, `obsidian`, `local`, `web`, `semantic-scholar`, `deepxiv`, `exa`, or `all`. Note: `semantic-scholar`, `deepxiv`, and `exa` must be explicitly listed – not included in `all` |
| `arxiv download` | `false` | Download top relevant arXiv PDFs during literature survey. When false, only fetches metadata (title, abstract, authors) |
| `DBLP_BIBTEX` | `true` | Fetch real BibTeX from DBLP/CrossRef instead of LLM-generated entries. Eliminates hallucinated citations. Zero install |
| `code review` | `true` | GPT-5.4 xhigh reviews experiment code before GPU deployment. Set `false` to skip |
| `wandb` | `false` | Auto-add W&B logging to experiment scripts. Set `true` + configure `wandb_project` in CLAUDE.md. `/monitor-experiment` pulls training curves from W&B |
| `illustration` | `gemini` | AI illustration in Workflow 3: `gemini` (default, needs `GEMINI_API_KEY`), `mermaid` (free), or `false` (skip) |
| `venue` | `ICLR` | Target venue: `ICLR`, `NeurIPS`, `ICML`, `CVPR`, `ACL`, `AAAI`, `ACM`. Determines LaTeX style file and page limit |
| `base repo` | `false` | GitHub repo URL to clone as base codebase (e.g., `→ base repo: https://github.com/org/project`). No code? Build on top of an open-source project |
| `gpu` | `local` | GPU target: `local` (default), `remote` (SSH server), or `vast` (rent on-demand from Vast.ai – auto-provision, auto-destroy) |
| `compact` | `false` | Generate compact summary files (`IDEA_CANDIDATES.md`, `findings.md`, `EXPERIMENT_LOG.md`) for short-context models and session recovery |
| `ref paper` | `false` | Reference paper to build on (PDF path or arXiv URL). Summarized first, then ideas extend/improve it. Combine with `base repo` for paper+code workflows |
| `effort` | `balanced` | Work intensity: `lite` (0.4x tokens), `balanced` (default), `max` (2.5x), `beast` (5-8x). Controls breadth/depth/iterations. Codex reasoning always `xhigh`. See Effort Levels |
| `reviewer` | `codex` | Reviewer backend: `codex` (GPT-5.4 xhigh, default), `oracle-pro` (GPT-5.4 Pro via Oracle – strongest reasoning). See Setup → |
| `difficulty` | `medium` | Reviewer adversarial level: `medium` (default), `hard` (+ memory + debate), `nightmare` (+ GPT reads repo via `codex exec`) |

```bash
/research-pipeline "your topic" → AUTO_PROCEED: false          # pause at idea selection gate
/research-pipeline "your topic" → human checkpoint: true       # pause after each review round to give feedback
/research-pipeline "your topic" → sources: zotero, web         # only search Zotero + web (skip local PDFs)
/research-pipeline "your topic" → sources: all, deepxiv        # default sources plus DeepXiv progressive retrieval
/research-pipeline "your topic" → sources: all, exa            # default sources plus Exa AI-powered web search
/research-pipeline "your topic" → arxiv download: true         # download top arXiv PDFs during literature survey
/research-pipeline "your topic" → difficulty: nightmare        # maximum adversarial review before submission
/research-pipeline "your topic" → effort: beast                # all knobs to maximum – top-venue sprint
/research-pipeline "your topic" → effort: beast, reviewer: oracle-pro  # beast + GPT-5.4 Pro reviewer – ultimate mode
/research-pipeline "your topic" → effort: lite                 # quick exploration, save tokens
/research-pipeline "your topic" → effort: max, review_rounds: 3  # max effort but cap review at 3 rounds
/research-pipeline "your topic" → AUTO_PROCEED: false, human checkpoint: true  # combine options
/proof-checker "paper/" → reviewer: oracle-pro                 # Pro-level proof verification
```
Important: Codex MCP uses the model from `~/.codex/config.toml`, not from skill files. Make sure it says `model = "gpt-5.4"` (recommended). Other options: `gpt-5.3-codex`, `gpt-5.2-codex`, `o3`. Run `codex setup` or edit the file directly.
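To check the setting without rerunning `codex setup`, the relevant line in `~/.codex/config.toml` looks like this (only the `model` key is taken from this README; leave any other keys in your file untouched):

```toml
# ~/.codex/config.toml
model = "gpt-5.4"   # alternatives: gpt-5.3-codex, gpt-5.2-codex, o3
```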
Want Codex to execute but Claude Code to review? See `docs/CODEX_CLAUDE_REVIEW_GUIDE.md`. That path installs the base `skills/skills-codex/*`, then overlays `skills/skills-codex-claude-review/*`, and routes review-heavy skills through the local `claude-review` MCP bridge.

Want Codex to execute but Gemini to review locally? See `docs/CODEX_GEMINI_REVIEW_GUIDE.md` and CN. That path installs the base `skills/skills-codex/*`, then overlays `skills/skills-codex-gemini-review/*`, and routes the reviewer-aware predefined skills through the local `gemini-review` MCP bridge using the direct Gemini API by default.

Want the Codex mirror install chain? Use `tools/install_aris_codex.sh` for managed project installs and `tools/smart_update_codex.sh` for copied Codex installs. The Claude scripts remain the mainline entry points for Claude projects.
See the full setup guide for details and for alternative model combinations if you don't have a Claude or OpenAI API key.
Update skills later? Smart update analyzes what's safe:

```bash
cd Auto-claude-code-research-in-sleep
git pull
bash tools/smart_update.sh           # dry-run: shows what's new/changed/safe
bash tools/smart_update.sh --apply   # apply: adds new + updates safe ones
```

Compares local skills with upstream, detects personal customizations (server paths, API keys, etc.), and only updates skills that are safe to replace. Skills with your personal info are flagged for manual review.
Features

- 31 composable skills – mix and match, or chain into full pipelines (`/idea-discovery`, `/auto-review-loop`, `/paper-writing`, `/research-pipeline`)
- Literature & novelty – multi-source paper search (Zotero + Obsidian + local PDFs + arXiv/Scholar) + cross-model novelty verification
- Idea discovery – literature survey → brainstorm 8-12 ideas → novelty check → GPU pilot experiments → ranked report
- Auto review loop – 4-round autonomous review, 5/10 → 7.5/10 overnight with 20+ GPU experiments
- Paper writing – narrative → outline → figures → LaTeX → PDF → auto-review (4/10 → 8.5/10), one command. Anti-hallucination citations via DBLP/CrossRef
- Cross-model collaboration – Claude Code executes, GPT-5.4 xhigh reviews. Adversarial, not self-play. Optional upgrade: `→ reviewer: oracle-pro` for GPT-5.4 Pro (strongest reasoning) via Oracle
- Peer review – review others' papers as a conference reviewer, with structured scoring and meta-review
- Review-driven experiments – when GPT-5.4 says "run an ablation", Claude Code automatically writes the script, rsyncs to your GPU server, launches in screen, collects results, and folds them back into the paper. Just configure your server in `CLAUDE.md` (setup guide). No GPU? Use `gpu: vast` to rent one from Vast.ai on demand
- Flexible models – default Claude × GPT-5.4, also supports GLM, MiniMax, Kimi, LongCat, DeepSeek, etc. – no Claude or OpenAI API required
- Human-in-the-loop – configurable checkpoints at key decisions. `AUTO_PROCEED=true` for full autopilot, `false` to approve each step
- Feishu/Lark notifications – three modes: off (default, strongly recommended for most users), push-only (webhook, mobile alerts), interactive (approve/reject from Feishu). Zero impact when unconfigured
  - Preview: Push cards (group) & Interactive chat (private)
  - Push Only – group chat cards (experiment done, checkpoint, error, pipeline complete)
  - Interactive – private chat with Claude Code (approve/reject, custom instructions)
- Research Wiki – persistent knowledge base that accumulates papers, ideas, experiments, and claims across the research lifecycle. Failed ideas become anti-repetition memory. ARIS learns from its mistakes and gets smarter with every run. Inspired by Karpathy's LLM Wiki
- Extensible – domain-specific skills welcome! Add a SKILL.md and open a PR. See community skills like `dse-loop` (architecture/EDA)
Score Progression (Real Run)
A real overnight 4-round run on an ML research project, from borderline reject to submission-ready:
| Round | Score | What Happened |
|---|---|---|
| Initial | 5.0/10 | Borderline reject |
| Round 1 | 6.5/10 | Added standard metrics, discovered metric decoupling |
| Round 2 | 6.8/10 | Key claim failed to reproduce, pivoted narrative |
| Round 3 | 7.0/10 | Large seed study killed main improvement claim |
| Round 4 | 7.5/10 | Diagnostic evidence solidified, submission ready |
The loop autonomously ran 20+ GPU experiments, rewrote the paper's narrative framing, and killed claims that didn't hold up โ all without human intervention.
Community Showcase – Papers Built with ARIS

Real projects where the ARIS pipeline was used end-to-end to produce submitted manuscripts. This section does not claim official acceptance unless a row explicitly says so: ratings and quoted verdicts are AI/third-party review signals from tools such as CSPaper and Stanford Agentic Reviewer, not venue decisions. One important caveat: ARIS is designed to optimize through AI-review loops, so elevated AI-review scores are a normal consequence of the workflow rather than independent proof of acceptance. Human reviewers can still bring updated literature knowledge, community context, venue-specific taste, and objections that an AI reviewer did not model. If you've used ARIS to complete a paper, we'd love to feature it here – open an issue or PR!
| Paper | AI-review signal | Submission status | Built by | Notes |
|---|---|---|---|---|
| CS Paper Submission | CSPaper 8/10 – AI reviewer recommendation: "Top 50% of accepted papers, clear accept" | Submitted to a CS conference; awaiting official feedback | @DefanXue & @Monglitay | Full ARIS pipeline: idea → experiments → auto-review → paper writing. The quote is from CSPaper's simulated review, not an official venue review. |
| AAAI 2026 Paper Submission | Stanford Agentic Reviewer 7/10 – AI reviewer recommendation: "Good paper, accept" | Submitted to AAAI 2026 Main Technical; awaiting official decision | @xinbo820-web | Pure Codex CLI (ARIS-Codex skills). The 7/10 signal comes from an AAAI-style Stanford Agentic Reviewer run, not an official AAAI acceptance result. |
| UAV-CC | Under review | Submitted to IEEE TGRS | @wxx827 | UAV change captioning benchmark. Claude Opus 4.6 (executor) + Codex GPT-5.4 xhigh (reviewer) + Cursor Opus 4.6 (assist). PDF → |
Reviewer screenshots
Papers built with ARIS – from idea to submission. Know more? Let us know!
Awesome Community Skills & Extensions

Domain-specific skills and external projects contributed by the community. PRs welcome – just add a skills/your-skill/SKILL.md and open a PR!

How to use: Community skills are not auto-wired into core workflows. To use one, ask your executor (Claude Code / OpenClaw / etc.) to read the skill's `SKILL.md`, then plug it into the appropriate workflow stage based on the description below.

Community Skills (13): research-refine · experiment-plan · grant-proposal · paper-poster · paper-slides · mermaid-diagram · proof-writer · comm-lit-review · dse-loop · idea-discovery-robot · formula-derivation · paper-illustration · writing-systems-papers

External Projects & Docs (12): rosetta · open-source-hardening-skills · CitationClaw · auto-hparam-tuning · paper-to-course · Antigravity Adaptation Guide · OpenClaw Adaptation Guide · Cursor Adaptation Guide · Codex+Claude Review Bridge · Trae Adaptation Guide · paper-illustration · MiniMax-AI/cli

Thanks to every contributor! We fold the tables below to keep the README readable – but every skill and project here is equally valued. PRs always welcome!

Community Skills (13) – click to expand
| Name | Domain | Description | Codex MCP? |
|---|---|---|---|
| research-refine | General | Turn a vague idea into a problem-anchored, implementation-oriented method proposal. Best inserted between /idea-discovery and /auto-review-loop | Yes |
| experiment-plan | General | Turn a refined proposal into a claim-driven experiment roadmap with ablations, budgets, and run order | No |
| research-refine-pipeline | General | One-shot chain: /research-refine → /experiment-plan for method refinement plus experiment planning | Yes |
| grant-proposal | General | Grant proposal drafting (KAKENHI/NSF/NSFC/ERC/DFG/SNSF/ARC/NWO). Chains /research-lit → /novelty-check → /research-review → /paper-illustration | Yes |
| paper-slides | General | Conference talk slides (beamer → PDF + PPTX) with speaker notes, full talk script + Q&A prep. Auto slide count from talk type | Yes |
| paper-poster | General | Conference poster (article + tcbposter → A0/A1 PDF + component PPTX + SVG). Venue-specific colors, visual review loop, Codex MCP review | Yes |
| proof-writer | ML Theory | Rigorous theorem/lemma proof drafting – feasibility triage, dependency maps, honest blockage reports | No |
| comm-lit-review | Communications / Wireless | Domain-specific literature review – IEEE/ACM/ScienceDirect priority, venue tiering, PHY/MAC/transport/NTN taxonomy | No |
| dse-loop | Architecture / EDA | Autonomous design space exploration – iteratively run, analyze, and tune parameters (gem5, Yosys, etc.) | No |
| idea-discovery-robot | Robotics / Embodied AI | Workflow 1 adaptation – grounds idea discovery in embodiment, benchmark, sim2real path, and real-robot safety constraints | Yes |
| mermaid-diagram | General | Mermaid diagrams (20+ types) – free alternative to paper-illustration, no API key needed | No |
| formula-derivation | General | Research formula development – derivation, verification, and LaTeX formatting | No |
| writing-systems-papers | Systems | Paragraph-level blueprint for 10-12 page systems papers (OSDI/SOSP/ASPLOS/NSDI/EuroSys) – page allocation, writing patterns, self-check | Yes |
External Projects & Docs (12) – click to expand

| Name | Domain | Description |
|---|---|---|
| rosetta | Pro-tier ChatGPT MCP | Programmatic access to ChatGPT Pro / gpt-5.5-pro / DeepResearch from Node, via Chrome CDP Fetch interception + WebSocket second-leg streaming. Ships an MCP server for Claude Code / Codex / Cline – alternative implementation path to Oracle MCP for `→ reviewer: oracle-pro` style high-tier review. Supports multi-turn, parallel concurrency, live token deltas, 15-min idle-timeout watchdog (long Pro thinks survive). MIT, by @SyntaxSmith |
| open-source-hardening-skills | DevOps / OSS | 10-skill pipeline to harden research code into production-ready open-source projects – audit, refactor, test, CI, docs, review |
| CitationClaw | General | Citation impact analysis – input paper title → citation crawling, scholar identification, tiered analysis, HTML dashboard |
| Antigravity Adaptation Guide | General | Use ARIS skills in Google Antigravity – native SKILL.md support, dual model (Claude Opus 4.6 / Gemini 3.1 Pro), MCP setup, EN + CN guides |
| OpenClaw Adaptation Guide | General | Use ARIS workflow methodology in OpenClaw – skill-to-stage mapping, file-based orchestration, no Claude Code CLI needed |
| Cursor Adaptation Guide | General | Use ARIS skills in Cursor – @-reference skills, MCP setup, workflow mapping, state file recovery across sessions |
| Trae Adaptation Guide | General | Use ARIS skills in Trae (ByteDance AI IDE) – EN + CN guides |
| paper-illustration | General | AI-generated architecture diagrams via Gemini. Built on PaperBanana. Integrated into Workflow 3 |
| skills-codex | General | Codex CLI sync pack for the main research skills, now including training-check, result-to-claim, ablation-planner, rebuttal, plus the shared-references/ support directory |
| auto-hparam-tuning | General | Automatic hyperparameter tuning – AI agent reads project, plans strategy, runs experiments, analyzes TensorBoard, learns from results. Hydra-based |
| Codex+Claude Review Bridge | General | Codex executes + Claude reviews via local claude-review MCP bridge with async polling |
| paper-to-course | Education | Convert research papers (PDF/LaTeX) into interactive six-module HTML courses with formula breakdowns, literature timelines, quizzes, and glossary tooltips – single bundled file, no server needed |
| MiniMax-AI/cli | General | Official MiniMax CLI – text, image, video, speech, and music generation + web search. skill/SKILL.md follows the agentskills.io standard. Drop-in companion for the Alt B (MiniMax reviewer) setup |
Workflows

These skills compose into a full research lifecycle. The four workflows can be used independently or chained together:

- Exploring a new area (e.g., writing a survey)? Start with Workflow 1 → `/idea-discovery`
- Have a plan, need to implement and run? Workflow 1.5 → `/experiment-bridge`
- Already have results, need iterative improvement? Workflow 2 → `/auto-review-loop`
- Ready to write the paper? Workflow 3 → `/paper-writing` (or step by step: `/paper-plan` → `/paper-figure` → `/paper-write` → `/paper-compile` → `/auto-paper-improvement-loop`)
- Got reviews back? Need to write a rebuttal? Workflow 4 → `/rebuttal` – parse reviews, draft a safe rebuttal, follow-up rounds
- Full pipeline? Workflow 1 → 1.5 → 2 → 3 → submit → 4 → `/research-pipeline` + `/rebuttal` – from idea to acceptance
- Want ARIS to remember and learn? `/research-wiki init` – persistent memory across sessions. Papers, ideas, failed experiments compound over time
- Want ARIS to improve itself? Workflow M → `/meta-optimize` – analyze usage logs, propose skill improvements, reviewer-gated
Important: These tools accelerate research, but they don't replace your own critical thinking. Always review generated ideas with your domain expertise, question the assumptions, and make the final call yourself. The best research comes from human insight + AI execution, not full autopilot.
Full Pipeline

```
/research-lit → /idea-creator → /novelty-check → /research-refine → /experiment-bridge → /auto-review-loop → /paper-writing → submit → /rebuttal → accept!
   (survey)      (brainstorm)    (verify novel)   (refine method)   (implement+deploy)    (review & fix)      (write paper)    (send)   (reply to reviewers)
|──────────── Workflow 1: Idea Discovery ────────────|  |─ Workflow 1.5 ─|  |── Workflow 2 ──|  |── Workflow 3 ──|           |── Workflow 4 ──|

research-wiki (persistent memory – papers, ideas, experiments, claims)
  ↑ reads before ideation, writes after every stage, failed ideas = anti-repetition memory

/meta-optimize (Workflow M – runs independently, improves ARIS itself)
  ↑ reads .aris/meta/events.jsonl (accumulated from all runs above)
```
Blog post (Chinese).
Workflow 1: Idea Discovery & Method Refinement

"What's the state of the art? Where are the gaps? How do we solve it?"

Don't have a concrete idea yet? Just give a research direction – `/idea-discovery` handles the rest:

- Survey the landscape (recent papers, open problems, recurring limitations)
- Brainstorm 8-12 concrete ideas via GPT-5.4 xhigh
- Filter by feasibility, compute cost, and quick novelty search
- Validate top ideas with deep novelty check + devil's advocate review
- Pilot top 2-3 ideas in parallel on different GPUs (30 min - 2 hr each)
- Rank by empirical signal – ideas with positive pilot results rise to the top
- Refine the top idea into a problem-anchored proposal via iterative GPT-5.4 review
- Plan claim-driven experiments with ablations, budgets, and run order
The output is a ranked IDEA_REPORT.md plus a refined proposal (refine-logs/FINAL_PROPOSAL.md) and experiment plan (refine-logs/EXPERIMENT_PLAN.md) for the top idea. Dead-end ideas are documented too, saving future exploration.
```
Idea Discovery & Method Refinement

/research-lit          /idea-creator          /novelty-check
(find papers)          (brainstorm)           (verify novelty)
     │                      │                      │
     ▼                      ▼                      ▼
scan local papers ──▶ generate 8-12 ideas ──▶ check if idea is novel
+ search                + rank                     │
                            │                      ▼
                            ▼               external LLM evaluates
                     filter by cost, novelty ──────┘
                            │
/research-refine            ▼
(refine method) ──▶ iterate until score ≥ 9 ──▶ freeze problem anchor + refine method
                            │
/experiment-plan            ▼
(plan runs)     ──▶ claim-driven experiment roadmap

Typical flow:
  1. /research-lit "discrete diffusion models"
  2. /idea-creator "DLLMs post training"
  3. Review ranked ideas, pick top 2-3
  4. /novelty-check "top idea" (deep verification)
  5. /research-review "top idea" (critical feedback)
  6. /research-refine "top idea" (problem anchor + method)
  7. /experiment-plan (claim-driven roadmap)
  8. /run-experiment → /auto-review-loop
```
Skills involved: research-lit + idea-creator + novelty-check + research-review + research-refine-pipeline
One-command shortcut: `/idea-discovery "your research direction"` runs this entire workflow automatically.

Human-in-the-loop: Each phase presents results and waits for your feedback. Not happy? Tell it what's missing – it refines the prompt and regenerates. Trust the defaults? It auto-proceeds with the top-ranked option. You decide how hands-on to be.

Pilot experiment budgets (max hours, timeout, GPU budget) are configurable – see Customization.
Blog post (Chinese).
Workflow 1.5: Experiment Bridge

"I have a plan. Now implement it, deploy it, and get me initial results."

Already have an experiment plan (from Workflow 1 or your own)? `/experiment-bridge` turns it into running code:

- Parse the experiment plan (`refine-logs/EXPERIMENT_PLAN.md`)
- Implement experiment scripts (reuse existing code, add proper argparse/logging/seeds)
- GPT-5.4 code review – cross-model review catches logic bugs before wasting GPU hours (`code review: true` by default)
- Sanity check – run the smallest experiment first to catch runtime bugs
- Deploy the full experiment suite to GPU via `/run-experiment`
- Collect initial results and update the experiment tracker
```
Workflow 1.5: Experiment Bridge

EXPERIMENT_PLAN.md
        │
        ▼
Claude Code ──▶ GPT-5.4 xhigh ──▶ sanity check
writes code     reviews code      (1 GPU)
                                      │
                                      ▼
collect results ◀── monitor progress ◀── deploy to GPUs
                    (+ W&B)
        │
        ▼
Ready for /auto-review-loop
```
Skills involved: experiment-bridge + run-experiment + monitor-experiment
One-command shortcut: `/experiment-bridge` reads `refine-logs/EXPERIMENT_PLAN.md` automatically. Or point it to any plan: `/experiment-bridge "my_plan.md"`.

`CODE_REVIEW`, `AUTO_DEPLOY`, `SANITY_FIRST`, `MAX_PARALLEL_RUNS` are configurable – see Customization.
Workflow 2: Auto Research Loop (sleep & wake up to results)

"Review my paper, fix what's wrong, repeat until it's good."

GPT-5.4 reviews → identifies weaknesses → suggests experiments → Claude Code writes scripts, deploys to GPU, monitors results, rewrites the paper – all while you sleep. Just add your GPU server config to `CLAUDE.md`.
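A `CLAUDE.md` server entry might look like the sketch below; every field name here (`host`, `user`, `workdir`, `conda env`) is an illustrative assumption, not the documented schema – see the setup guide for the real format:

```markdown
<!-- CLAUDE.md – hypothetical GPU server section; field names are illustrative -->
## GPU server
- host: gpu01.example.edu
- user: alice
- workdir: /home/alice/experiments
- conda env: research
- gpus: 4x A100
```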
```
Auto Review Loop

/research-review            /auto-review-loop
(single deep review)        (autonomous loop)
        │                          │
        ▼                          ▼
external LLM ──▶ implement fixes ──▶ monitor ──▶ repeat until
reviews          & run experiments   results     score ≥ 6

When reviewer suggests a new method direction:
  /novelty-check → verify idea isn't already published

Supporting skills:
  /run-experiment     → deploy to local/remote/vast.ai GPU
  /analyze-results    → interpret experiment outputs
  /monitor-experiment → check progress, collect results
```
Skills involved: auto-review-loop + research-review + novelty-check + run-experiment + analyze-results + monitor-experiment
One-command shortcut: `/auto-review-loop "your paper topic"` runs this entire workflow automatically.

What to pass as argument? A short topic or scope is enough – the skill automatically reads your project's narrative docs (`NARRATIVE_REPORT.md`), memory files, experiment results, and prior reviews to build the full context for GPT-5.4. Examples:

- `/auto-review-loop "factorized gap in discrete diffusion LMs"` – broad topic, skill finds everything
- `/auto-review-loop "focus on Section 3-5, our CRF results are weak"` – targeted scope with hints
- `/auto-review-loop` – also works: skill reads project files and infers the topic
Reviewer Difficulty – control how adversarial the reviewer is:

| Level | What changes | Use when |
|---|---|---|
| `medium` (default) | Standard MCP review – same as before | Normal workflow |
| `hard` | + Reviewer Memory (GPT tracks suspicions across rounds) + Debate Protocol (Claude rebuts, GPT rules) | Want tougher feedback |
| `nightmare` | + GPT reads repo directly via `codex exec` (Claude can't filter what it sees) + adversarial verification | Preparing for top venue, want maximum stress test |
/auto-review-loop "topic" โ difficulty: nightmare # GPT reads your code and verifies claims itself
Key safety features:
- MAX_ROUNDS = 4: prevents infinite loops; stops early if the score threshold is met
- Experiments over 4 GPU-hours are skipped: won't launch massive jobs; they're flagged for manual follow-up
- Prefer reframing over new experiments: when both can address a weakness, choose the cheaper path
- No hiding weaknesses: explicit rule, "Do NOT hide weaknesses to game a positive score"
- Fix before re-review: must actually implement fixes before resubmitting; no empty promises
- Compact recovery: persists state (`REVIEW_STATE.json`) after each round. If the context window fills up and auto-compacts mid-loop, the workflow reads the state file and resumes where it left off; no human intervention needed

MAX_ROUNDS, the score threshold, and GPU limits are configurable; see Customization.
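The compact-recovery pattern boils down to a tiny persist/resume layer around the loop. A minimal sketch (the field names and schema here are assumptions for illustration; the real `REVIEW_STATE.json` format is defined by the skill):

```python
import json
from pathlib import Path

STATE_FILE = Path("REVIEW_STATE.json")

def save_state(round_num: int, score: float, open_issues: list) -> None:
    # Persist after every round, so an auto-compaction cannot lose progress.
    STATE_FILE.write_text(json.dumps(
        {"round": round_num, "score": score, "open_issues": open_issues}, indent=2))

def resume_state(max_rounds: int = 4, threshold: float = 6.0) -> dict:
    # On (re)start: read the last checkpoint and decide whether to continue.
    if not STATE_FILE.exists():
        return {"round": 0, "score": 0.0, "open_issues": [], "done": False}
    state = json.loads(STATE_FILE.read_text())
    state["done"] = state["round"] >= max_rounds or state["score"] >= threshold
    return state
```

After a mid-loop restart, `resume_state()` tells the workflow which round to pick up at and whether the stop conditions (round cap or score threshold) are already met.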
Blog post (in Chinese): open-sourcing ARIS, letting Claude run experiments automatically while you sleep
Workflow 3: Paper Writing Pipeline
"Turn my research narrative into a submission-ready PDF." Requires a local LaTeX environment (see Prerequisites).
```
┌──────────────────────────────────────────────────────────────┐
│                    Paper Writing Pipeline                    │
│                                                              │
│   /paper-plan      /paper-figure        /paper-write         │
│   (outline)        (plots & tables)     (LaTeX draft)        │
│        │                 │                   │               │
│        ▼                 ▼                   ▼               │
│   ┌──────────┐     ┌──────────┐        ┌──────────┐          │
│   │ Claims-  │────▶│ Generate │───────▶│ Section  │──┐       │
│   │ Evidence │     │ figures, │        │ by       │  │       │
│   │ Matrix + │     │ tables,  │        │ section  │  │       │
│   │ Section  │     │ LaTeX    │        │ LaTeX    │  │       │
│   │ Plan     │     │ includes │        │ draft    │  │       │
│   └──────────┘     └──────────┘        └──────────┘  │       │
│        │                                             │       │
│        │                    /paper-compile           │       │
│        │                    (build PDF)              │       │
│        │                         │                   │       │
│        ▼                         ▼                   ▼       │
│   ┌─────────────────────────────────────────────────────┐    │
│   │ NARRATIVE_REPORT.md ──▶ PAPER_PLAN.md ──▶ paper/    │    │
│   │ (input)                 (outline)       (LaTeX+PDF) │    │
│   └─────────────────────────────────────────────────────┘    │
│                                                              │
│   Typical flow:                                              │
│   1. Write NARRATIVE_REPORT.md (from Workflow 2 results)     │
│   2. /paper-plan    (claims-evidence matrix + section plan)  │
│   3. /paper-figure  (comparison tables, training curves...)  │
│   4. /paper-write   (section-by-section LaTeX generation)    │
│   5. /paper-compile (build PDF, fix errors, page check)      │
│   6. /auto-paper-improvement-loop (review ×2 + format check) │
└──────────────────────────────────────────────────────────────┘
```
Skills involved: paper-plan + paper-figure + paper-write + paper-compile + auto-paper-improvement-loop + (post-acceptance) paper-poster + paper-slides
One-command shortcut: `/paper-writing "NARRATIVE_REPORT.md"` runs this entire workflow automatically.
Input: A NARRATIVE_REPORT.md describing the research: claims, experiments, results, figures. The more detailed the narrative (especially figure descriptions and quantitative results), the better the output. See templates/NARRATIVE_REPORT_TEMPLATE.md for a complete example.
Output: A paper/ directory with LaTeX source, a clean .bib (only cited entries), and a compiled PDF. The PDF is labelled submission-ready only when the run uses `— effort: max | beast` (or explicit `— assurance: submission`) and `tools/verify_paper_audits.sh` reports green on the three mandatory audits (proof-checker, paper-claim-audit, citation-audit); see Assurance Gate below. At the default balanced level, the output is a reviewed draft.
Key features:
- Claims-Evidence Matrix: every claim maps to evidence, and every experiment supports a claim
- Auto figure generation: line plots, bar charts, and comparison tables from JSON data
- Clean bib: automated filtering removes uncited entries (948 → 215 lines in testing), with real BibTeX from DBLP/CrossRef instead of LLM-generated entries
- Flexible sections: 5-8 sections depending on paper type (theory papers often need 7)
- GPT-5.4 review: each step optionally reviewed by an external LLM
- De-AI polish: removes AI writing patterns (delve, pivotal, landscape...)
- Page verification: `pdftotext`-based precise check that the main body fits the page limit
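The page check reduces to parsing the `Pages:` line that poppler's tools report for the built PDF. A minimal sketch of the idea (the real skill's check is more involved and uses `pdftotext` to locate the main-body boundary):

```python
import re
import subprocess

def page_count(pdfinfo_output: str) -> int:
    # pdfinfo prints a line like "Pages:          9"
    m = re.search(r"^Pages:\s+(\d+)$", pdfinfo_output, re.MULTILINE)
    if not m:
        raise ValueError("no Pages line found in pdfinfo output")
    return int(m.group(1))

def fits_page_limit(pdf_path: str, limit: int) -> bool:
    # Requires poppler's pdfinfo on PATH (see Prerequisites).
    out = subprocess.run(["pdfinfo", pdf_path],
                         capture_output=True, text=True, check=True).stdout
    return page_count(out) <= limit
```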
Figure generation scope: `/paper-figure` auto-generates data-driven plots (training curves, bar charts, heatmaps) and comparison tables from JSON/CSV. For architecture diagrams and method figures: `illustration: gemini` (default) uses Claude → Gemini → Nano Banana Pro for publication-quality diagrams; `illustration: mermaid` generates Mermaid diagrams for free; `illustration: false` skips AI figures entirely.

Gemini API setup (for `illustration: gemini`): get your API key at Google AI Studio, then set it as an environment variable: `export GEMINI_API_KEY="your-key"`. Or add it to your shell profile (`~/.zshrc` / `~/.bashrc`). No other dependencies are needed.
Tested end-to-end: Generated a 9-page ICLR 2026 theory paper (7 sections, 29 citations, 4 figures, 2 comparison tables) from a single NARRATIVE_REPORT.md โ zero compilation errors, zero undefined references.
Auto Paper Improvement Loop
After Workflow 3 generates the paper, /auto-paper-improvement-loop runs 2 rounds of GPT-5.4 xhigh content review → fix → recompile, plus a final format compliance check, autonomously polishing the paper from rough draft to a reviewer-scored draft. Whether the result is tagged submission-ready is decided separately by the Phase 6 assurance gate (see Assurance Gate).
Score Progression (real test: ICLR 2026 theory paper):
| Round | Score | Key Changes |
|---|---|---|
| Round 0 | 4/10 (content) | Baseline |
| Round 1 | 6/10 (content) | Fixed assumptions, softened claims, renamed notation |
| Round 2 | 7/10 (content) | Added synthetic validation, stronger limitations |
| Round 3 | 5 → 8.5/10 (format) | Removed hero fig, appendix, compressed conclusion, float spacing |
Final: 8 pages main body (ICLR limit: 9), 0 overfull hbox, ICLR-compliant. +4.5 points across 3 rounds.
Round 1 fixes (6 items)
- CRITICAL: Assumption-model mismatch. A boundedness assumption contradicted the model's distributional family. Replaced with a tail-compatible assumption and added a formal truncation bridge.
- CRITICAL: Theory-practice gap. Theory assumes idealized encoders; experiments use learned nonlinear encoders. Softened "validate" → "demonstrate practical relevance" and added an explicit disclaimer.
- MAJOR: Missing quantitative metrics. Added a parameter count table (latent vs total) with honest accounting of system cost.
- MAJOR: Theorem not self-contained. Added an "Interpretation" paragraph listing all dependencies explicitly.
- MAJOR: Overclaim in novelty statement. Scoped a broad "first convergence guarantee" to the precise conditions under which it holds.
- MAJOR: Notation confusion. Renamed a symbol that clashed with another key variable; added a Notation paragraph.
Round 2 fixes (4 items)
- MAJOR: Missing theory-aligned experiments. Added a synthetic validation subsection directly testing the two main theoretical predictions under controlled conditions.
- MAJOR: Overclaim softening. Replaced strong equivalence claims with appropriately hedged language across all files.
- MAJOR: Informal theoretical argument. Formalized an informal justification into a proper proposition with explicit error bounds.
- MINOR: Weak limitations. Expanded to explicitly list all assumptions and acknowledge missing standard evaluations.
Round 3 format fixes (8 items)
- Removed hero figure block (saved ~0.7 pages)
- Compressed conclusion from 15 to 9 lines
- Moved synthetic validation to Appendix A
- Moved comparison tables to Appendix B
- Fixed overfull hbox (85pt) with `\resizebox`
- Added compact float spacing (`\captionsetup`, `\textfloatsep`)
- Inlined centered question block in introduction
- Tightened `itemize` environments
Workflow 4: Rebuttal (reply to reviewers safely)
"Reviews are in. Help me draft a safe, grounded rebuttal."
Got reviews back? /rebuttal parses them, builds a strategy, and drafts a venue-compliant response:
- Parse: normalize reviews, validate venue rules (character limit, text-only, etc.)
- Atomize: split each review into issue cards (type, severity, reviewer stance)
- Strategize: global themes, per-reviewer priorities, character budget, blocked claims
- Evidence sprint: if `auto experiment: true`, auto-run supplementary experiments via `/experiment-bridge`
- Draft: global opener + numbered per-reviewer responses + closing for the meta-reviewer
- Safety check: 6 lints covering coverage, provenance, commitment, tone, consistency, and limit
- GPT-5.4 stress test: internal skeptical review of the draft
- Finalize: two outputs, `PASTE_READY.txt` (exact character count) + `REBUTTAL_DRAFT_rich.md` (extended version for manual editing)
- Follow-up rounds: delta replies for reviewer discussions, with escalating technical depth
```
┌──────────────────────────────────────────────────────────────┐
│                     Workflow 4: Rebuttal                     │
│                                                              │
│   Reviews arrive                                             │
│        │                                                     │
│        ▼                                                     │
│   ┌───────────┐     ┌──────────┐      ┌────────────┐         │
│   │ Parse &   │────▶│ Strategy │─────▶│ Evidence   │         │
│   │ atomize   │     │ plan     │      │ sprint     │         │
│   │ reviews   │     │          │      │ (optional) │         │
│   └───────────┘     └──────────┘      └────────────┘         │
│                                            │                 │
│                                            ▼                 │
│   ┌────────────┐    ┌──────────┐      ┌──────────┐           │
│   │ Finalize   │◀───│ GPT-5.4  │◀─────│ Draft    │           │
│   │ 2 versions │    │ stress   │      │ rebuttal │           │
│   │            │    │ test     │      │          │           │
│   └────────────┘    └──────────┘      └──────────┘           │
│        │                                                     │
│        ▼                                                     │
│   PASTE_READY.txt (strict) + RICH.md (extended)              │
│        │                                                     │
│        ▼                                                     │
│   Follow-up rounds (delta replies, per-reviewer threads)     │
└──────────────────────────────────────────────────────────────┘
```
Skills involved: rebuttal
Quick mode: `/rebuttal — quick mode: true` stops after parsing + strategy (Phase 0-3). See what reviewers want before committing to a full draft.

`VENUE`, `AUTO_EXPERIMENT`, `QUICK_MODE`, and `MAX_STRESS_TEST_ROUNDS` are configurable; see Customization.
Three safety gates (rebuttal will NOT finalize if any fails):
- Provenance: every claim maps to a paper/review/user-confirmed result. No fabrication.
- Commitment: every promise is user-approved. No overpromising.
- Coverage: every reviewer concern is tracked. Nothing disappears.
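The coverage gate, for instance, reduces to a simple invariant: every atomized reviewer concern must be addressed somewhere in the draft. A toy version (the issue-card fields `id` and `anchor` are assumptions for illustration, not the skill's real schema, and real lints are semantic rather than substring-based):

```python
def coverage_gaps(issue_cards, draft):
    # An issue counts as covered only if its anchor phrase appears in the draft.
    # Returns the ids of concerns that would block finalization.
    return [card["id"] for card in issue_cards
            if card["anchor"].lower() not in draft.lower()]
```

An empty return value is the "gate passes" condition; anything else blocks finalization.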
Research Wiki: Persistent Research Memory
"Stop re-deriving. Start compounding." (inspired by Karpathy's LLM Wiki)
Without the wiki, ARIS is stateless: every /idea-discovery starts from scratch. With the wiki, ARIS accumulates knowledge across the entire research lifecycle: papers read, ideas tested, experiments run, claims verified or invalidated.
The key insight: failed ideas are the most valuable memory. A researcher who knows what doesn't work generates better ideas than one starting from zero.
Setup:
```
> /research-wiki init   # one-time, creates research-wiki/ in your project
```
That's it. Once initialized, the wiki works automatically:
| When | What happens | Wiki action |
|---|---|---|
| `/research-lit` finds papers | Papers auto-ingested | `papers/<slug>.md` created, edges added, query_pack rebuilt |
| `/idea-creator` runs | Reads wiki first | Failed ideas = banlist, gaps = search seeds, papers = known prior work |
| `/idea-creator` finishes | ALL ideas written back | Both recommended AND eliminated ideas → `ideas/<id>.md` |
| `/result-to-claim` judges | Results written back | Experiment page created, claim status updated (supported/invalidated) |
| 3+ ideas fail | Re-ideation suggested | "Consider re-running /idea-creator; the wiki now knows what doesn't work" |
Four entity types:
| Entity | What it stores | Example |
|---|---|---|
| Paper | Structured summary: thesis, method, limitations, reusable ingredients | paper:chen2025_factorized_gap |
| Idea | Hypothesis, status (proposed/failed/succeeded), failure notes, lessons | idea:001 |
| Experiment | Metrics, verdict, hardware, duration | exp:001 |
| Claim | Testable statement + evidence status (reported/supported/invalidated) | claim:C1 |
Typed relationships (stored in `graph/edges.jsonl`):
```
paper --extends--> paper          idea --inspired_by--> paper
paper --contradicts--> paper      idea --tested_by--> experiment
paper --addresses_gap--> gap      experiment --supports--> claim
paper --supersedes--> paper       experiment --invalidates--> claim
```
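Because each line of `graph/edges.jsonl` is one typed edge, a few lines of stdlib Python suffice to query the graph. A sketch (the field names `src`/`rel`/`dst` are assumptions about the on-disk format):

```python
import json
from collections import defaultdict

def load_edges(lines):
    # Build an adjacency map: src entity -> [(relation, dst entity), ...]
    graph = defaultdict(list)
    for line in lines:
        edge = json.loads(line)
        graph[edge["src"]].append((edge["rel"], edge["dst"]))
    return graph

def invalidated_claims(lines):
    # Which claims have at least one invalidating experiment?
    return {dst for rels in load_edges(lines).values()
            for rel, dst in rels if rel == "invalidates"}
```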
Spiral learning in action:
```
Round 1: read 15 papers → wiki remembers → idea A → experiment → FAIL
         wiki records: "A fails because OOM at batch>32, loss diverges"
Round 2: /idea-creator reads wiki → sees A failed → generates idea D (avoids A's trap)
         → experiment → PARTIAL SUCCESS
         wiki records: "D works on small models, fails on large"
Round 3: /idea-creator reads wiki → knows A failed + D partial → generates idea F
         (combines D's success with new approach) → experiment → SUCCESS
```
Subcommands:
```
/research-wiki init                                  # initialize wiki
/research-wiki ingest "paper title" — arxiv: xxx     # manually add a paper
/research-wiki query "topic"                         # rebuild query_pack.md
/research-wiki update idea:001 — outcome: negative   # update entity
/research-wiki lint    # health check (orphans, contradictions, stale claims)
/research-wiki stats   # overview (paper/idea/experiment/claim counts)
```
Safe by design: all workflow hooks are guarded by `if research-wiki/ exists`. No wiki = no impact. Zero dependencies (pure Python stdlib). You choose when to enable it.
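That guard is literally an existence check, so the hooks stay inert until you opt in. A sketch of the pattern:

```python
from pathlib import Path

def wiki_enabled(project_root: str = ".") -> bool:
    # No research-wiki/ directory means every wiki hook is a no-op.
    return (Path(project_root) / "research-wiki").is_dir()
```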
Workflow M: Meta-Optimize (ARIS optimizes itself)
"Analyze my usage patterns and improve your own skills."
Unlike Workflows 1-4, which optimize research artifacts (papers, code, experiments), Workflow M optimizes the harness itself: the SKILL.md instructions, default parameters, and convergence rules that govern how ARIS operates. Inspired by Meta-Harness (Lee et al., 2026).
Setup (one-time, in a normal terminal):
```
mkdir -p .claude .aris/meta tools/meta_opt
cp Auto-claude-code-research-in-sleep/templates/claude-hooks/meta_logging.json .claude/settings.json
cp Auto-claude-code-research-in-sleep/tools/meta_opt/*.sh tools/meta_opt/
chmod +x tools/meta_opt/*.sh
claude   # hooks active immediately
```
Usage (after 5+ workflow runs):
```
> /meta-optimize                      # analyze current project
> /meta-optimize "auto-review-loop"   # focus on one skill
> /meta-optimize --global             # analyze trends across ALL projects
> /meta-optimize apply 1              # apply recommended change #1
```
How it works:
- Passive logging: Claude Code hooks silently record every skill invocation, tool call, failure, parameter override, and user prompt. Events are written to both project-level (`.aris/meta/events.jsonl`) and global (`~/.aris/meta/events.jsonl`, with a `"project"` tag) logs. Zero user effort.
- Pattern analysis: `/meta-optimize` reads the log and identifies:
  - Parameters users override most often (bad defaults)
  - Tools that fail repeatedly in specific skills (missing error handling)
  - Review score plateaus (convergence rules too loose/tight)
  - Manual corrections users make (skill gaps)
- Patch proposal: generates minimal diffs to target SKILL.md files with data-backed justifications
- Reviewer gate: GPT-5.4 xhigh reviews each patch: does the evidence support it? Could it hurt other users?
- User approval: only applied with explicit user consent. All changes are logged and reversible.
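The "most-overridden parameters" query is just counting over the event log. A toy version (the event fields `type`/`skill`/`param` are assumptions about the log format, not its documented schema):

```python
import json
from collections import Counter

def most_overridden(event_lines, top=3):
    # Frequent explicit overrides of the same parameter suggest a bad default.
    counts = Counter()
    for line in event_lines:
        ev = json.loads(line)
        if ev.get("type") == "param_override":
            counts[(ev["skill"], ev["param"])] += 1
    return counts.most_common(top)
```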
```
┌──────────────────────────────────────────────────────────────┐
│                   Workflow M: Meta-Optimize                  │
│                                                              │
│   Normal ARIS usage (W1-W4)                                  │
│        │  (hooks log events passively)                       │
│        ▼                                                     │
│   .aris/meta/events.jsonl                                    │
│        │                                                     │
│        ▼                                                     │
│   ┌──────────┐      ┌──────────┐      ┌──────────┐           │
│   │ Analyze  │─────▶│ Propose  │─────▶│ GPT-5.4  │           │
│   │ patterns │      │ SKILL.md │      │ reviews  │           │
│   │          │      │ patches  │      │ patch    │           │
│   └──────────┘      └──────────┘      └──────────┘           │
│                                            │                 │
│                                            ▼                 │
│                                     User approves?           │
│                                       Yes → Apply            │
│                                       No  → Skip             │
└──────────────────────────────────────────────────────────────┘
```
What gets optimized (harness components):
| Component | Example |
|---|---|
| Skill prompts | Reviewer instructions, quality gates, step descriptions |
| Default parameters | difficulty, MAX_ROUNDS, threshold |
| Convergence rules | When to stop the review loop, retry counts |
| Error handling | Auto-debug patterns, failure recovery steps |
What does NOT get optimized: research artifacts (papers, code, experiments); that's what W1-W4 do.
Skills involved: meta-optimize
This is a maintenance workflow, not part of the W1 → W1.5 → W2 → W3 → W4 research pipeline. Run it periodically, like `git gc` for your research harness.
Effort Levels
"How hard should ARIS work?" Every skill accepts `— effort: lite | balanced | max | beast`.
| Level | Tokens | Best for | What changes |
|---|---|---|---|
| `lite` | ~0.4x | Quick exploration, budget users | Fewer papers, ideas, rounds. Minimum viable depth |
| `balanced` | 1x | Normal workflow (default) | Current ARIS behavior. Zero change for existing users |
| `max` | ~2.5x | Serious submission prep | More papers, deeper review, more ablations |
| `beast` | ~5-8x | Top-venue final sprint | Every knob to maximum. No budget limit |
What NEVER changes regardless of effort:
- Codex reasoning: always xhigh (reviewer quality is non-negotiable)
- DBLP/CrossRef citations: always on
- Reviewer independence: always on
- Experiment integrity: always on
```
# Every skill accepts effort independently
/research-lit "topic" — effort: beast      # 40-50 papers, 15+ queries
/idea-creator "direction" — effort: lite   # 4-6 ideas, quick filter
/auto-review-loop — effort: max            # 6 rounds, 4-6 fixes/round

# Mix with specific overrides
/auto-review-loop — effort: beast, review_rounds: 3   # beast everything, but cap at 3 rounds

# Full pipeline
/research-pipeline "your topic" — effort: beast       # top-venue sprint mode
```
Full effort comparison table (click to expand)
| Skill | Dimension | lite | balanced | max | beast |
|---|---|---|---|---|---|
| research-lit | papers | 6-8 | 10-15 | 18-25 | 40-50 |
| idea-creator | ideas | 4-6 | 8-12 | 12-16 | 20-30 |
| idea-creator | pilots | 1-2 | 2-3 | 3-4 | 5-6 |
| novelty-check | claims | 2-3 | 3-4 | 4-6 | all |
| research-refine | rounds | 3 | 5 | 7 | 10+ |
| experiment-plan | experiments | 3 | 5 | 7 | 10+ |
| experiment-plan | seeds | 1 | 3 | 5 | 5 |
| auto-review-loop | rounds | 2 | 3-4 | 6 | 8+ |
| paper-improvement | rounds | 1 | 2 | 3 | 5 |
| paper-illustration | iterations | 2 | 3 | 5 | 7 |
| rebuttal | stress tests | 0-1 | 1 | 2 | 3 |
| experiment-audit | depth | skip | basic | full | line-by-line |
Full specification: `shared-references/effort-contract.md`
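Conceptually, effort is just a row lookup in a knob table, with explicit per-call overrides winning. A sketch (the numbers approximate the comparison rows above; single values stand in for documented ranges):

```python
EFFORT_KNOBS = {
    "lite":     {"papers": (6, 8),   "ideas": (4, 6),   "review_rounds": 2},
    "balanced": {"papers": (10, 15), "ideas": (8, 12),  "review_rounds": 4},
    "max":      {"papers": (18, 25), "ideas": (12, 16), "review_rounds": 6},
    "beast":    {"papers": (40, 50), "ideas": (20, 30), "review_rounds": 8},
}

def resolve_knobs(effort="balanced", **overrides):
    # Explicit overrides always beat the effort preset, so
    # "effort: beast, review_rounds: 3" caps the loop at 3 rounds.
    knobs = dict(EFFORT_KNOBS[effort])
    knobs.update(overrides)
    return knobs
```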
Assurance Gate (effort: max | beast)
ARIS has two independent axes: effort controls how much work is done (breadth/depth), while assurance controls whether mandatory audits are load-bearing. Default mapping:
| effort | Implied assurance | Paper-writing Phase 6 behavior |
|---|---|---|
| `lite` / `balanced` (default) | draft | Current behavior, zero change. Audits run only if their content detector matches; missing artifacts are non-blocking. |
| `max` / `beast` | submission | Phase 6 force-invokes `/proof-checker`, `/paper-claim-audit`, and `/citation-audit` in fresh threads, runs `tools/verify_paper_audits.sh`, and refuses to emit the Final Report if the verifier returns non-zero (missing / stale / FAIL / BLOCKED / ERROR). |
What this fixes: previously, `— effort: beast` did not actually guarantee that the three mandatory audits ran; the content detectors could silently skip them, so beast-mode papers could ship without proof verification or citation checks. The assurance axis makes audit enforcement externally verifiable via `tools/verify_paper_audits.sh` (the verifier's exit code is the source of truth, not the executor's self-report).
Backwards compatibility: users on the default balanced level see zero change. Only users who opt up to max / beast, or who explicitly pass `— assurance: submission`, see the new gate.
Escape hatch: `— effort: beast, assurance: draft` restores the old "depth-only, no audit gate" behavior. Legal but discouraged for actual submissions.
Optional harness hardening (advanced): teams who want the model to be physically prevented from ending a session while the verifier is red can register a Stop hook in `~/.claude/settings.json` (replace `<ARIS_REPO>` with the absolute path to your ARIS clone, e.g. `/Users/you/Auto-claude-code-research-in-sleep`):

```json
{
  "hooks": {
    "Stop": [
      {"command": "bash <ARIS_REPO>/tools/verify_paper_audits.sh paper/ --assurance submission"}
    ]
  }
}
```
This is not required: the default repo behavior (Phase 6 verifier-as-truth) already blocks Final Report emission on a red verdict. The Stop hook is a belt-and-suspenders layer for teams that want harness-level enforcement.
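A stripped-down version of the verifier-as-truth idea: the gate trusts only artifacts on disk, never the executor's self-report. The verdict file names and strings below are assumptions for illustration, not the real `verify_paper_audits.sh` contract:

```python
from pathlib import Path

MANDATORY_AUDITS = ["proof-checker", "paper-claim-audit", "citation-audit"]
BLOCKING = {"FAIL", "BLOCKED", "ERROR"}

def audits_green(paper_dir: str) -> bool:
    for audit in MANDATORY_AUDITS:
        report = Path(paper_dir) / f"{audit}.verdict"
        if not report.exists():   # a missing artifact blocks the gate
            return False
        if report.read_text().strip() in BLOCKING:
            return False
    return True
```

Only a green result from this check (not any claim made in the transcript) would allow the Final Report to be emitted.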
Full specification: `shared-references/assurance-contract.md`
Optional: GPT-5.4 Pro via Oracle
For expert researchers who want the strongest possible reviewer.
Oracle unlocks GPT-5.4 Pro as an ARIS reviewer, the strongest reasoning model available. Pro excels at deep mathematical proof verification, line-by-line code auditing, and complex experimental design critique.
Setup:
```
# 1. Install Oracle
npm install -g @steipete/oracle

# 2. Add Oracle MCP to Claude Code
claude mcp add oracle -s user -- oracle-mcp

# 3. Restart Claude Code

# 4a. API mode (fast, recommended):
export OPENAI_API_KEY="your-key"

# 4b. Browser mode (free, no API key; log in to ChatGPT in Chrome):
#     Just open Chrome → chatgpt.com → log in
```
Usage: add `— reviewer: oracle-pro` to any skill:
```
/research-review "my draft" — reviewer: oracle-pro    # Pro-level paper critique
/proof-checker "paper/" — reviewer: oracle-pro        # deepest mathematical verification
/experiment-audit — reviewer: oracle-pro              # Pro audits your eval code
/auto-review-loop "scope" — reviewer: oracle-pro      # Pro stress tests each round
/idea-creator "direction" — reviewer: oracle-pro      # Pro evaluates your ideas
/rebuttal "paper/ + reviews" — reviewer: oracle-pro   # Pro stress tests your rebuttal
```
Default is always Codex xhigh. Oracle not installed = zero impact. `— reviewer: oracle-pro` without Oracle installed = graceful fallback to Codex plus a warning.
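The routing rule is a straightforward fallback chain. A sketch (`oracle_available` stands in for whatever installation detection the skill actually performs):

```python
def pick_reviewer(requested=None, oracle_available=False):
    # Default is Codex xhigh; oracle-pro is used only when explicitly
    # requested AND actually installed, otherwise fall back with a warning.
    if requested == "oracle-pro":
        if oracle_available:
            return "oracle-pro", None
        return "codex-xhigh", "Oracle not installed; falling back to Codex"
    return "codex-xhigh", None
```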
Full specification: `shared-references/reviewer-routing.md`
All Skills
Full Pipeline
| Skill | Description | Codex MCP? |
|---|---|---|
| research-pipeline | End-to-end: Workflow 1 → 1.5 → 2 → 3, from research direction to submission | Yes |
Workflow 1: Idea Discovery & Method Refinement
| Skill | Description | Codex MCP? |
|---|---|---|
| idea-discovery | Pipeline orchestrator: runs all skills below in sequence | Yes |
| ↳ research-lit | Multi-source literature search (Zotero + Obsidian + local PDFs + arXiv API + web) | No |
| ↳ idea-creator | Brainstorm 8-12 ideas, filter by feasibility, pilot on GPU, rank by signal | Yes |
| ↳ novelty-check | Verify idea novelty against recent literature (multi-source + GPT-5.4 cross-check) | Yes |
| ↳ research-review | Single-round deep review from an external LLM (xhigh reasoning) | Yes |
| ↳ research-refine-pipeline | Refine method + plan experiments in one chain | Yes |
| 　↳ research-refine | Problem anchor → iterative method refinement (up to 5 rounds, score ≥ 9) | Yes |
| 　↳ experiment-plan | Claim-driven experiment roadmap with ablations, budgets, and run order | No |
Workflow 1.5: Experiment Bridge
| Skill | Description | Codex MCP? |
|---|---|---|
| experiment-bridge | Read experiment plan → implement code → sanity check → deploy to GPU → collect initial results | No |
| ↳ run-experiment | Deploy experiments to local, remote, or Vast.ai GPU (gpu: local/remote/vast) | No |
| ↳ monitor-experiment | Monitor running experiments, check progress, collect results | No |
| ↳ vast-gpu | Rent, manage, and destroy on-demand GPU instances from Vast.ai | No |
Workflow 2: Auto Research Loop
| Skill | Description | Codex MCP? |
|---|---|---|
| auto-review-loop | Pipeline orchestrator: autonomous review → fix → re-review (max 4 rounds) | Yes |
| ↳ research-review | Deep review from external LLM (shared with Workflow 1) | Yes |
| ↳ novelty-check | Verify novelty when reviewer suggests new directions | Yes |
| ↳ run-experiment | Deploy experiments to local, remote, or Vast.ai GPU (gpu: local/remote/vast) | No |
| ↳ analyze-results | Analyze experiment results, compute statistics, generate insights | No |
| ↳ monitor-experiment | Monitor running experiments, check progress, collect results | No |
| auto-review-loop-llm | Same as above, but uses any OpenAI-compatible API via the llm-chat MCP server | No |
Workflow 3: Paper Writing
| Skill | Description | Codex MCP? |
|---|---|---|
| paper-writing | Pipeline orchestrator: runs all skills below in sequence | Yes |
| ↳ paper-plan | Claims-evidence matrix, section structure, figure plan, citation scaffolding | Yes |
| ↳ paper-figure | Publication-quality matplotlib/seaborn plots + LaTeX comparison tables | Optional |
| ↳ paper-illustration | AI-generated architecture diagrams and method figures via Gemini (when illustration: true) | No (needs Gemini API) |
| ↳ paper-write | Section-by-section LaTeX generation (ICLR/NeurIPS/ICML); anti-hallucination BibTeX via DBLP/CrossRef | Yes |
| ↳ paper-compile | Compile LaTeX to PDF, auto-fix errors, submission readiness checks | No |
| ↳ auto-paper-improvement-loop | 2-round content review + format check (4/10 → 8.5/10) | Yes |
Workflow 4: Rebuttal
| Skill | Description | Codex MCP? |
|---|---|---|
| rebuttal | Parse reviews → atomize → strategy → draft → safety check → stress test → finalize (2 versions) → follow-up | Yes |
Standalone / Utility
| Skill | Description | Codex MCP? |
|---|---|---|
| arxiv | Search, download, and summarize arXiv papers. Standalone or as a /research-lit supplement | No |
| semantic-scholar | Search published venue papers (IEEE, ACM, Springer) via the Semantic Scholar API. Citation counts, venue metadata, TLDR | No |
| deepxiv | Progressive paper retrieval via the DeepXiv CLI: search, brief, section map, section reads, trending, web search | Yes (pip install deepxiv-sdk) |
| exa-search | AI-powered broad web search via Exa: blogs, docs, news, companies, research papers with content extraction (highlights, text, summaries) | Yes (pip install exa-py) |
| alphaxiv | Quick single-paper lookup via AlphaXiv LLM-optimized summaries. Three-tier fallback: overview → full markdown → LaTeX source | No |
| pixel-art | Generate pixel art SVG illustrations for READMEs, docs, or slides | No |
| feishu-notify | Feishu/Lark push (webhook) or interactive (bidirectional). Off by default | No |
Setup
Prerequisites
- Claude Code installed
- (For review skills) Codex CLI installed and configured as an MCP server:
```
npm install -g @openai/codex
claude mcp add codex -s user -- codex mcp-server
```
- (For Workflow 3: paper writing) A LaTeX environment with `latexmk` and `pdfinfo`:
```
# macOS
brew install --cask mactex   # or: brew install basictex
brew install poppler         # provides pdfinfo

# Ubuntu/Debian
sudo apt install texlive-full latexmk poppler-utils

# Verify
latexmk --version && pdfinfo -v
```
If you only need Workflows 1 & 2 (idea discovery + auto review), LaTeX is not required.
Install Skills
Recommended: project-local flat symlink install (since 2026-04-20). Each ARIS skill is symlinked individually into `.claude/skills/<skill-name>`, so Claude Code's slash-command discovery picks them up. A manifest at `.aris/installed-skills.txt` tracks what ARIS installed; uninstall and reconcile only ever touch managed entries, never your own skills.

Codex mirror route: keep Claude on `install_aris.sh` / `smart_update.sh`. For Codex-native project installs, use `install_aris_codex.sh`; for copied Codex installs, use `smart_update_codex.sh`.
```
# 1. Clone ARIS once to a stable location
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep.git ~/aris_repo

# 2. For each project that uses ARIS, attach via symlinks:
cd ~/your-paper-project
bash ~/aris_repo/tools/install_aris.sh
#   → creates one symlink per skill: .claude/skills/<skill> → ~/aris_repo/skills/<skill>
#   → writes manifest .aris/installed-skills.txt (tracks every entry ARIS installed)
#   → updates managed CLAUDE.md ARIS block (best-effort, compare-and-swap)
#   → re-runnable: rerun anytime to reconcile new/removed upstream skills

# 3. To update existing skills' content for ALL attached projects:
cd ~/aris_repo && git pull   # symlinks resolve to live upstream → content updates automatically

# 3a. To pick up newly added or removed upstream skills, rerun the installer:
bash ~/aris_repo/tools/install_aris.sh ~/your-paper-project   # adds new symlinks, removes broken ones

# Other useful flags:
bash ~/aris_repo/tools/install_aris.sh --dry-run     # show plan, no changes
bash ~/aris_repo/tools/install_aris.sh --uninstall   # remove only managed symlinks (per manifest)
bash ~/aris_repo/tools/install_aris.sh --from-old    # migrate from old nested .claude/skills/aris/

# Windows (PowerShell, requires admin or developer mode for junctions):
.\tools\install_aris.ps1 C:\path\to\your-paper-project
```
Why "git pull" alone isn't enough for new/removed skills: the flat layout uses one symlink per skill, so upstream additions/deletions don't propagate until the installer is re-run. The trade-off bought us Claude Code's automatic slash-command discovery (which only scans one directory level deep).
Migrating from the old nested install (pre-2026-04-20)
If you previously installed via `install_aris.sh` (which created `.claude/skills/aris/` as a single nested symlink) or via `smart_update.sh --target-subdir .claude/skills/aris`, your slash commands probably weren't being auto-discovered by Claude Code. Migrate to the flat layout:
```
# Symlink-style legacy install:
bash ~/aris_repo/tools/install_aris.sh ~/your-project --from-old

# Copy-style legacy install (with possible local edits; choose a strategy explicitly):
bash ~/aris_repo/tools/install_aris.sh ~/your-project --from-old --migrate-copy keep-user
#   → keeps your nested .claude/skills/aris/ copy intact alongside the new flat install
bash ~/aris_repo/tools/install_aris.sh ~/your-project --from-old --migrate-copy prefer-upstream
#   → archives the nested copy to .aris/legacy-copy-backup-<timestamp>/, then flattens
```
Alternative installs (advanced)
Project-local copy (no symlinks, useful for per-project skill edits):
```
mkdir -p ~/your-project/.claude/skills
bash ~/aris_repo/tools/smart_update.sh --project ~/your-project --apply
# Default --target-subdir is .claude/skills (flat), which is what Claude Code expects.
# (The old --target-subdir .claude/skills/aris is now deprecated; see the migration block above.)
```
Global install (one copy in your home dir, available to every project):
```
mkdir -p ~/.claude/skills
cp -r ~/aris_repo/skills/* ~/.claude/skills/
# Update with: bash tools/smart_update.sh --apply
```
Global install increases the risk of skill name collisions with other globally installed packs. Use it only if you don't mix ARIS with Superpowers / OpenHands / etc.; otherwise prefer the project-local install above.
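Before copying globally, it's easy to check for such collisions yourself. A sketch (skills are assumed to live one directory per skill, as in the flat layout above):

```python
from pathlib import Path

def name_collisions(existing_dir, incoming_dir):
    # Skill names that already exist globally and would be overwritten by the copy.
    existing = {p.name for p in Path(existing_dir).iterdir() if p.is_dir()}
    incoming = {p.name for p in Path(incoming_dir).iterdir() if p.is_dir()}
    return sorted(existing & incoming)
```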
New Claude Code versions may not auto-create `~/.claude/skills/`. If using the global install, create it first: `mkdir -p ~/.claude/skills/`. The symlink installer handles directory creation automatically.
Optional: Codex Plugin for Code Review
`codex-plugin-cc` provides additional Codex capabilities that ARIS auto-detects when installed:
```
# In Claude Code:
/plugin marketplace add openai/codex-plugin-cc
/plugin install codex@openai-codex
/reload-plugins
/codex:setup
```
Where ARIS uses the plugin:
| Command | Workflow | What it does |
|---|---|---|
| /codex:review | Workflow 1.5 | Review experiment code before GPU deployment |
| /codex:adversarial-review | Workflow 1.5 | Adversarial code review (find edge cases, bugs) |
| /codex:rescue | Workflow 1.5 + 3 | Auto-debug rescue: when an experiment or LaTeX compilation fails after 2 attempts, Codex independently diagnoses the root cause before the next retry |
All plugin features are optional; if the plugin is not installed, ARIS falls back to Claude's own diagnosis. The plugin just adds a second pair of eyes.
Note: ARIS's core cross-model review (paper scoring, idea evaluation, rebuttal stress test) still uses Codex MCP, which allows custom prompts. The plugin cannot replace this.
Update Skills
```
cd Auto-claude-code-research-in-sleep
git pull

# Smart update (recommended): analyzes what's safe to update
bash tools/smart_update.sh           # dry-run: shows what would change
bash tools/smart_update.sh --apply   # apply: adds new + updates safe ones

# Manual options (if you prefer):
# cp -r skills/* ~/.claude/skills/                   # Option A: overwrite all
# cp -rn skills/* ~/.claude/skills/                  # Option B: only add new, keep yours
# cp -r skills/experiment-bridge ~/.claude/skills/   # Option C: specific skill
```
💡 Smart update compares your local skills with upstream, detects personal customizations (server paths, API keys, etc.), and only updates skills that are safe to replace. Skills with your personal info are flagged for manual review.
Usage
# Workflow 1: Idea Discovery
> /idea-discovery "your research direction" # full pipeline
> /research-lit "topic" # just literature survey (all sources)
> /research-lit "topic" — sources: zotero, web # mix and match sources
> /research-lit "topic" — sources: deepxiv # DeepXiv-only progressive retrieval
> /research-lit "topic" — sources: exa # Exa AI-powered web search with content extraction
> /research-lit "topic" — arxiv download: true # also download top arXiv PDFs
> /arxiv "discrete diffusion" — download # standalone arXiv search + download
> /idea-creator "topic" # just brainstorm
# Workflow 2: Auto Research Loop
> /auto-review-loop "your paper topic" # review → fix → repeat
> /research-review "your paper" # single deep review
# Workflow 3: Paper Writing
> /paper-writing "NARRATIVE_REPORT.md" # full pipeline
> /paper-plan "NARRATIVE_REPORT.md" # just outline
> /paper-compile "paper/" # just compile
# Full Pipeline
> /research-pipeline "your research direction" # Workflow 1 → 2 → 3 end-to-end
# Supporting Skills
> /run-experiment train.py --lr 1e-4 --epochs 100
> /analyze-results figures/*.json
> /monitor-experiment server5
Auto-Allow for Overnight Runs (Optional)
To run the auto-review loop without clicking permission prompts, add to .claude/settings.local.json:
{
"permissions": {
"allow": [
"mcp__codex__codex",
"mcp__codex__codex-reply",
"Write",
"Edit",
"Skill(auto-review-loop)"
]
}
}
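If you prefer to create the file from the shell, the same allow-list can be written in one step (note this overwrites an existing `.claude/settings.local.json`, so merge by hand if you already have one):

```shell
# Write the allow-list from the shell. Overwrites an existing
# .claude/settings.local.json -- merge manually if you already have one.
mkdir -p .claude
cat > .claude/settings.local.json << 'EOF'
{
  "permissions": {
    "allow": [
      "mcp__codex__codex",
      "mcp__codex__codex-reply",
      "Write",
      "Edit",
      "Skill(auto-review-loop)"
    ]
  }
}
EOF
# Validate the JSON before relying on it overnight
python3 -m json.tool .claude/settings.local.json > /dev/null && echo "settings OK"
```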
🖥️ GPU Server Setup (For Auto-Experiments)
When GPT-5.4 says "run an ablation study" or "add a baseline comparison", Claude Code automatically writes the experiment script and deploys it to your GPU server. For this to work, Claude Code needs to know your server environment.
Three GPU modes are supported — pick one and add it to your project's CLAUDE.md:
Option A: Remote SSH Server (gpu: remote)
## Remote Server
- gpu: remote
- SSH: `ssh my-gpu-server` (key-based auth, no password)
- GPU: 4x A100
- Conda env: `research` (Python 3.10 + PyTorch)
- Activate: `eval "$(/opt/conda/bin/conda shell.bash hook)" && conda activate research`
- Code directory: `/home/user/experiments/`
- Use `screen` for background jobs: `screen -dmS exp0 bash -c '...'`
Claude Code reads this and knows how to SSH in, activate the environment, and launch experiments. GPT-5.4 (the reviewer) only decides what experiments to run — Claude Code figures out how based on your CLAUDE.md.
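For illustration, here is the kind of launch command that could be assembled from the CLAUDE.md fields above; the server alias, conda env, script name, and paths are the example's placeholders, not a prescribed layout:

```shell
# Sketch: compose a remote launch command from the CLAUDE.md fields above.
# `my-gpu-server`, the `research` env, and the paths are example placeholders.
ACTIVATE='eval "$(/opt/conda/bin/conda shell.bash hook)" && conda activate research'
REMOTE_CMD="cd /home/user/experiments && $ACTIVATE && python train.py --lr 1e-4"
# Wrap it in a detached screen session so the job survives SSH disconnects.
LAUNCH="ssh my-gpu-server \"screen -dmS exp0 bash -c '$REMOTE_CMD'\""
echo "$LAUNCH"   # inspect first; run with `eval "$LAUNCH"` when it looks right
```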
Option B: Local GPU (gpu: local)
If you are already on the GPU server, you can add the following to your CLAUDE.md:
## GPU Environment
- gpu: local
- This machine has direct GPU access (no SSH needed)
- GPU: 4x A100 80GB
- Experiment environment: `YOUR_CONDA_ENV` (Python 3.x + PyTorch)
- Activate before any Python command: `The command to activate your experiment environment` (uv, conda, etc.)
- Code directory: `/home/YOUR_USERNAME/YOUR_CODE_DIRECTORY/`
Option C: Vast.ai On-Demand GPU (gpu: vast)
No GPU? Rent one from Vast.ai on demand. ARIS analyzes your training task (model size, dataset, estimated time), searches for the cheapest GPU that fits, and presents options with estimated total cost — not just $/hr. After you pick, it handles everything: rent → setup → run → collect results → destroy.
Prerequisites:
- Create a Vast.ai account at https://cloud.vast.ai/ and add billing (credit card or crypto)
- Install the `vastai` CLI (requires Python ≥ 3.10): `pip install vastai`. If your Python is older (check with `python --version`), use a virtual environment with Python ≥ 3.10 (e.g., `conda create`, `pyenv`, `uv venv`, etc.)
- Set your API key — get it from https://cloud.vast.ai/cli/:
  vastai set api-key YOUR_API_KEY
- Upload your SSH public key at https://cloud.vast.ai/manage-keys/ — this is required before renting any instance (keys are baked in at creation time). If you don't have one:
  ssh-keygen -t ed25519 -C "your_email@example.com"
  cat ~/.ssh/id_ed25519.pub # copy this to Vast.ai
- Verify setup — test that search works:
  vastai search offers 'gpu_ram>=24 reliability>0.95' -o 'dph+' --limit 3
Add to CLAUDE.md:
## Vast.ai
- gpu: vast # rent on-demand GPU from vast.ai
- auto_destroy: true # auto-destroy after experiment completes (default)
- max_budget: 5.00 # optional: warn if estimated cost exceeds this
That's it — no GPU model or hardware config needed. When you run /run-experiment, ARIS reads your experiment scripts/plan, estimates VRAM and training time, and presents options like:
| # | GPU | VRAM | $/hr | Est. Hours | Est. Total | Offer ID |
|---|-----------|-------|-------|------------|------------|----------|
| 1 | RTX 4090 | 24 GB | $0.28 | ~4h | ~$1.12 | 6995713 | ← best value
| 2 | A100 SXM | 80 GB | $0.95 | ~2h | ~$1.90 | 7023456 | ← fastest
Pick a number and it handles the rest. Use /vast-gpu directly for manual control.
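Under the hood the lifecycle boils down to a handful of `vastai` CLI calls. Here is a dry-run sketch; the offer ID, image, and flags are illustrative, so confirm the exact syntax with `vastai --help` for your CLI version:

```shell
# Dry-run sketch of the rent -> run -> destroy lifecycle ARIS automates for
# gpu: vast. Offer ID, image, and flags are illustrative placeholders.
OFFER_ID=6995713
run() { echo "+ $*"; }   # swap `echo` for real execution when you're ready
run vastai create instance "$OFFER_ID" --image pytorch/pytorch:latest --disk 40
run vastai show instances                  # note the instance ID + SSH endpoint
# ... rsync code up, launch training, rsync results back ...
run vastai destroy instance INSTANCE_ID    # always destroy to stop billing
```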
No server at all? The review and rewriting skills still work without GPU access. Only experiment-related fixes will be skipped (flagged for manual follow-up).
Zotero Integration (Optional)
If you use Zotero to manage your paper library, /research-lit can search your collections, read your annotations/highlights, and export BibTeX — all before searching the web.
Recommended: zotero-mcp (1.8k⭐, semantic search, PDF annotations, BibTeX export)
# Install
uv tool install zotero-mcp-server # or: pip install zotero-mcp-server
# Add to Claude Code (Local API — requires Zotero desktop running)
claude mcp add zotero -s user -- zotero-mcp -e ZOTERO_LOCAL=true
# Or use Web API (works without Zotero running)
claude mcp add zotero -s user -- zotero-mcp \
-e ZOTERO_API_KEY=your_key -e ZOTERO_USER_ID=your_id
Get your API key at https://www.zotero.org/settings/keys
What it enables in /research-lit:
- Search your Zotero library by topic (including semantic/vector search)
- Browse collections and tags
- Read your PDF annotations and highlights (what you personally found important)
- Export BibTeX for direct use in paper writing
Not using Zotero? No problem — /research-lit automatically skips Zotero and uses local PDFs + web search instead.
Obsidian Integration (Optional)
If you use Obsidian for research notes, /research-lit can search your vault for paper summaries, tagged references, and your own insights.
Recommended: mcpvault (760⭐, no Obsidian app needed, 14 tools, BM25 search)
# Add to Claude Code (point to your vault path)
claude mcp add obsidian-vault -s user -- npx @bitbonsai/mcpvault@latest /path/to/your/vault
Optional complement: obsidian-skills (13.6k⭐, by the Obsidian CEO) — teaches Claude to understand Obsidian-specific Markdown (wikilinks, callouts, properties). Copy to your vault:
git clone https://github.com/kepano/obsidian-skills.git
cp -r obsidian-skills/.claude /path/to/your/vault/
What it enables in /research-lit:
- Search your vault for notes on the research topic
- Find notes by tags (e.g., `#paper-review`, `#diffusion-models`)
- Read your processed summaries and insights (more valuable than raw papers)
- Follow wikilinks to discover related notes
Not using Obsidian? No problem — /research-lit automatically skips Obsidian and works as before.
💡 Zotero + Obsidian together: Many researchers use Zotero for paper storage and Obsidian for notes. Both integrations work simultaneously — /research-lit checks Zotero first (raw papers + annotations), then Obsidian (your processed notes), then local PDFs, then web search.
arXiv Integration
/research-lit automatically queries the arXiv API for structured metadata (title, abstract, full author list, categories) — richer than web search snippets. No setup required.
By default, only metadata is fetched (no files downloaded). To also download the most relevant PDFs:
/research-lit "topic" โ arxiv download: true # download top 5 PDFs
/research-lit "topic" โ arxiv download: true, max download: 10 # download up to 10
For standalone arXiv access, use the dedicated /arxiv skill:
/arxiv "attention mechanism" # search
/arxiv "2301.07041" โ download # download specific paper
📱 Feishu/Lark Integration (Optional)
Get mobile notifications when experiments finish, reviews are scored, or checkpoints need your input — without sitting in front of the terminal.
Three modes — you choose per project:
| Mode | What happens | You need |
|---|---|---|
| Off (default) | Nothing. Pure CLI, no Feishu | Nothing |
| Push only | Webhook notifications at key events. Mobile push, no reply | Feishu bot webhook URL |
| Interactive | Full bidirectional. Approve/reject ideas, reply to checkpoints from Feishu | feishu-claude-code running |
Push Only Setup (5 min)
Group notifications with rich cards โ experiment done, review scored, pipeline complete. Mobile push, no reply needed.
Step 1: Create a Feishu group bot
- Open your Feishu group (or create a test group)
- Group Settings → Bots → Add Bot → Custom Bot
- Name it (e.g., `ARIS Notifications`) and copy the Webhook URL
- Security: add the custom keyword `ARIS` (all notifications include this word), or leave unrestricted
Step 2: Create config file
cat > ~/.claude/feishu.json << 'EOF'
{
"mode": "push",
"webhook_url": "https://open.feishu.cn/open-apis/bot/v2/hook/YOUR_WEBHOOK_ID"
}
EOF
Step 3: Test it
curl -s -X POST "YOUR_WEBHOOK_URL" \
-H "Content-Type: application/json" \
-d '{
"msg_type": "interactive",
"card": {
"header": {"title": {"tag": "plain_text", "content": "๐งช ARIS Test"}, "template": "blue"},
"elements": [{"tag": "markdown", "content": "Push mode working! ๐"}]
}
}'
You should see a blue card in your group. Skills will now automatically send rich cards at key events:
| Event | Card color | Content |
|---|---|---|
| Review scored ≥ 6 | 🟢 Green | Score, verdict, top weaknesses |
| Review scored < 6 | 🟠 Orange | Score, verdict, action items |
| Experiment complete | 🟢 Green | Results table, delta vs baseline |
| Checkpoint waiting | 🟡 Yellow | Question, options, context |
| Error | 🔴 Red | Error message, suggested fix |
| Pipeline done | 🟣 Purple | Score progression, deliverables |
Interactive Setup (15 min)
Everything Push mode does, plus bidirectional private chat with Claude Code via Feishu. Approve/reject ideas, reply to checkpoints, give custom instructions — all from your phone.
How it works: Push cards go to the group (everyone sees status). Interactive conversations happen in private chat with the bot (you reply, Claude Code acts on it).
Step 1: Complete Push setup above first (you'll keep both)
Step 2: Create a Feishu app on open.feishu.cn
- Click Create Enterprise App → name it (e.g., `ARIS Claude Bot`) → create
- Left menu → Add Capabilities → check Bot
- Left menu → Permissions → search and enable these 5 permissions:

| Permission | Scope | Why |
|---|---|---|
| im:message | Send & receive messages | Core messaging |
| im:message:send_as_bot | Send as bot | Bot replies |
| im:message.group_at_msg:readonly | Receive group @mentions | Group messages |
| im:message.p2p_msg:readonly | Receive private messages | ⚠️ Easy to miss! Without this, the bot connects but never receives your messages |
| im:resource | Access attachments | Images/files |

- Left menu → Events & Callbacks → select Long Connection mode → add event `im.message.receive_v1` → save

⚠️ Important: The "Long Connection" page may show "未检测到应用连接信息" (no app connection detected) — this is normal. Start the bridge first (Step 3), then come back and save.

- Left menu → Version Management → Create Version → fill description → Submit for Review
For personal/test Feishu organizations, approval is usually instant.
Step 3: Deploy the bridge
git clone https://github.com/joewongjc/feishu-claude-code.git
cd feishu-claude-code
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# Configure
cp .env.example .env
Edit .env:
FEISHU_APP_ID=cli_your_app_id # From app credentials page
FEISHU_APP_SECRET=your_app_secret # From app credentials page
DEFAULT_MODEL=claude-opus-4-6 # ⚠️ Default is sonnet — change to opus for best results
DEFAULT_CWD=/path/to/your/project # Working directory for Claude Code
PERMISSION_MODE=bypassPermissions # Or "default" for safer mode
⚠️ Model matters: The default claude-sonnet-4-6 works but may struggle with complex project context. claude-opus-4-6 correctly identified 18 ARIS skills on the first try where sonnet could not.
Start the bridge:
python main.py
# Expected output:
# ✅ Connecting to the Feishu WebSocket long connection (auto-reconnect)... (logged in Chinese)
# [Lark] connected to wss://msg-frontier.feishu.cn/ws/v2?...
For long-running use, put it in a screen session:
screen -dmS feishu-bridge bash -c 'cd /path/to/feishu-claude-code && source .venv/bin/activate && python main.py'
Step 4: Save event config — go back to Feishu Open Platform → Events & Callbacks → the long connection should now show "已检测到连接" (connection detected) → Save
If you published the app version before the bridge was running, you may need to create a new version (e.g., 1.0.1) and re-publish after saving event config.
Step 5: Test private chat
- In Feishu, find the bot in your contacts (search by app name)
- Send it a message, e.g. `你好` ("hello")
- It should reply via Claude Code
If the bot doesn't reply: Send /new to reset the session, then try again. Common issues:
| Symptom | Cause | Fix |
|---|---|---|
| Bot connects but never receives messages | Missing im:message.p2p_msg:readonly permission | Add permission → create new version → publish |
| Bot replies but doesn't know your project | DEFAULT_CWD points to the wrong directory | Edit .env → restart bridge |
| Bot replies but seems less capable | Using claude-sonnet-4-6 | Change to claude-opus-4-6 in .env → restart |
| Old session has stale context | Session cached from before config change | Send /new in chat to start a fresh session |
| "未检测到应用连接信息" (no app connection detected) when saving events | Bridge not running yet | Start the bridge first, then save event config |
Step 6: Update ARIS config
cat > ~/.claude/feishu.json << 'EOF'
{
"mode": "interactive",
"webhook_url": "https://open.feishu.cn/open-apis/bot/v2/hook/YOUR_WEBHOOK_ID",
"interactive": {
"bridge_url": "http://localhost:5000",
"timeout_seconds": 300
}
}
EOF
Now skills will:
- Push rich cards to the group (status notifications, everyone sees)
- Private chat you for decisions (checkpoints, approve/reject, custom instructions)
Which skills send notifications?
| Skill | Events | Push | Interactive |
|---|---|---|---|
| /auto-review-loop | Review scored (each round), loop complete | Score + verdict | + wait for continue/stop |
| /auto-paper-improvement-loop | Review scored, all rounds done | Score progression | Score progression |
| /run-experiment | Experiments deployed | GPU assignment + ETA | GPU assignment + ETA |
| /vast-gpu | Instance rented/destroyed | Instance ID + cost | Instance ID + cost |
| /monitor-experiment | Results collected | Results table | Results table |
| /idea-discovery | Phase transitions, final report | Summary at each phase | + approve/reject at checkpoints |
| /research-pipeline | Stage transitions, pipeline done | Stage summary | + approve/reject |
Not using Feishu? No problem — without ~/.claude/feishu.json, all skills behave exactly as before. Zero overhead, zero side effects.
💡 Alternative IM platforms: The push-only webhook pattern works with any service that accepts incoming webhooks (Slack, Discord, DingTalk, WeChat Work). Just change the `webhook_url` and card format in `feishu-notify/SKILL.md`. For bidirectional support, see cc-connect (multi-platform bridge) or clawdbot-feishu.
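As a sketch, the same push could target a Slack incoming webhook like this; the URL is a placeholder, and Slack expects a plain `text` field rather than a Feishu card:

```shell
# Dry-run sketch: push-only notification aimed at a Slack incoming webhook.
# The URL is a placeholder; Slack payloads use a `text` field, not Feishu cards.
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXXXXXX"
PAYLOAD='{"text": "ARIS: review scored 7/10, loop complete"}'
# Print the request instead of sending it; drop the `echo` to actually send.
echo curl -s -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$WEBHOOK_URL"
```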
Customization
Skills are plain Markdown files. Fork and customize:
💡 Parameter pass-through: Parameters flow down the call chain automatically. For example, /research-pipeline "topic" — sources: zotero, arxiv download: true passes `sources` and `arxiv download` through `idea-discovery` all the way down to `research-lit`. This also works for optional sources such as `deepxiv` and `exa`: /research-pipeline "topic" — sources: all, deepxiv, exa. You can set any downstream parameter at any level — just add `— key: value` to your command. Call chain: research-pipeline invokes idea-discovery (which runs research-lit, idea-creator, novelty-check, and research-review), experiment-bridge (which runs run-experiment), and auto-review-loop.
Full Research Pipeline (research-pipeline)
| Constant | Default | Description | Pass-through |
|---|---|---|---|
| AUTO_PROCEED | true | Auto-continue with top-ranked option if user doesn't respond | → idea-discovery |
| ARXIV_DOWNLOAD | false | Download top arXiv PDFs after literature search | → idea-discovery → research-lit |
| HUMAN_CHECKPOINT | false | When true, pause after each review round for approval | → auto-review-loop |
| WANDB | false | Auto-add W&B logging to experiments | → experiment-bridge → run-experiment |
| CODE_REVIEW | true | GPT-5.4 reviews experiment code before deployment | → experiment-bridge |
| BASE_REPO | false | GitHub repo URL to clone as base codebase for experiments | → experiment-bridge |
| GPU | local | GPU target: local, remote (SSH), or vast (Vast.ai on-demand rental) | → experiment-bridge → run-experiment |
| COMPACT | false | Generate compact summary files for short-context models and session recovery | → all workflows |
| REF_PAPER | false | Reference paper (PDF path or URL) to base ideas on. Summarized first, then used as context | → idea-discovery |
| ILLUSTRATION | gemini | AI illustration: gemini (default), mermaid (free), or false (skip) | → paper-writing |
Override inline: /research-pipeline "topic" — auto proceed: false, illustration: mermaid
Auto Review Loop (auto-review-loop)
| Constant | Default | Description |
|---|---|---|
| MAX_ROUNDS | 4 | Maximum review → fix → re-review iterations |
| POSITIVE_THRESHOLD | 6/10 | Score at which the loop stops (submission-ready) |
| >4 GPU-hour skip | 4h | Experiments exceeding this are flagged for manual follow-up |
Idea Discovery (idea-discovery / idea-creator)
| Constant | Default | Description | Pass-through |
|---|---|---|---|
| PILOT_MAX_HOURS | 2h | Skip any pilot estimated to take longer per GPU | — |
| PILOT_TIMEOUT_HOURS | 3h | Hard timeout — kill runaway pilots, collect partial results | — |
| MAX_PILOT_IDEAS | 3 | Maximum number of ideas to pilot in parallel | — |
| MAX_TOTAL_GPU_HOURS | 8h | Total GPU budget across all pilots | — |
| AUTO_PROCEED | true | Auto-continue with top-ranked option if user doesn't respond | — |
| ARXIV_DOWNLOAD | false | Download top arXiv PDFs after literature search | → research-lit |
Override inline: /idea-discovery "topic" — pilot budget: 4h per idea, sources: zotero, arxiv download: true
Experiment Bridge (experiment-bridge)
| Constant | Default | Description |
|---|---|---|
| CODE_REVIEW | true | GPT-5.4 xhigh reviews code before deployment. Catches logic bugs before wasting GPU hours |
| AUTO_DEPLOY | true | Automatically deploy experiments after implementation + review. Set false to manually inspect |
| SANITY_FIRST | true | Run the smallest experiment first to catch setup bugs before full deployment |
| MAX_PARALLEL_RUNS | 4 | Maximum experiments to deploy in parallel (limited by available GPUs) |
| WANDB | false | Auto-add W&B logging. Requires wandb_project in CLAUDE.md |
| BASE_REPO | false | GitHub repo URL to clone as base codebase for experiments |
Override inline: /experiment-bridge — base repo: https://github.com/org/project
Literature Search (research-lit)
| Constant | Default | Description |
|---|---|---|
| PAPER_LIBRARY | papers/, literature/ | Local directories to scan for PDFs before searching online |
| MAX_LOCAL_PAPERS | 20 | Max local PDFs to scan (first 3 pages each) |
| SOURCES | all | Which sources to search: zotero, obsidian, local, web, semantic-scholar, deepxiv, exa, or all. semantic-scholar, deepxiv, and exa must be explicitly listed |
| ARXIV_DOWNLOAD | false | When true, download top relevant arXiv PDFs to PAPER_LIBRARY after search |
| ARXIV_MAX_DOWNLOAD | 5 | Maximum number of PDFs to download when ARXIV_DOWNLOAD = true |
Override inline:
/research-lit "topic" — sources: zotero, web
/research-lit "topic" — sources: all, deepxiv
/research-lit "topic" — sources: all, exa
/research-lit "topic" — arxiv download: true, max download: 10
Paper Writing (paper-write)
| Constant | Default | Description |
|---|---|---|
| DBLP_BIBTEX | true | Fetch real BibTeX from DBLP/CrossRef instead of LLM-generated entries |
| TARGET_VENUE | ICLR | Target venue: ICLR, NeurIPS, ICML, CVPR, ACL, AAAI, ACM, IEEE_JOURNAL, IEEE_CONF |
| ANONYMOUS | true | Use anonymous author block for blind review. Note: most IEEE venues are NOT anonymous — set false for IEEE |
| MAX_PAGES | 9 | Page limit. ML conferences: main body excl. refs. IEEE: total pages incl. refs |
| ILLUSTRATION | gemini | AI illustration mode: gemini (default, needs GEMINI_API_KEY), mermaid (free), or false (skip) |
Override inline: /paper-write — target venue: NeurIPS, illustration: mermaid
General (all skills using Codex MCP)
| Constant | Default | Description |
|---|---|---|
| REVIEWER_MODEL | gpt-5.4 | OpenAI model used via Codex MCP. Also available: gpt-5.3-codex, gpt-5.2-codex, o3. See supported models for the full list. |
- Prompt templates — tailor the review persona and evaluation criteria
- `allowed-tools` — restrict or expand what each skill can do
Alternative Model Combinations
Don't have Claude / OpenAI API access? You can swap in other models — same cross-model architecture, different providers.
⭐ We strongly recommend Claude + GPT-5.4 (the default setup). It's the most tested and reliable combination. Alternative setups work but may require prompt tuning.
| Setup | Executor | Reviewer | Need Claude API? | Need OpenAI API? | Guide |
|---|---|---|---|---|---|
| Default ⭐ | Claude Opus/Sonnet | GPT-5.4 (Codex MCP) | Yes | Yes | Quick Start |
| Alt A | GLM-5 (Z.ai) | GPT-5.4 (Codex MCP) | No | Yes | Setup below |
| Alt B | GLM-5 (Z.ai) | MiniMax-M2.7 | No | No | MINIMAX_MCP_GUIDE |
| Alt C | Any CC-compatible | Any OpenAI-compatible | No | No | LLM_API_MIX_MATCH_GUIDE |
| Alt D | Kimi-K2.5 / Qwen3.5+ | GLM-5 / MiniMax-M2.7 | No | No | ALI_CODING_PLAN_GUIDE |
| Alt E | DeepSeek-V3.1 / Qwen3-Coder | DeepSeek-R1 / Qwen3-235B | No | No | MODELSCOPE_GUIDE |
| Alt F | Codex CLI (GPT-5.4) | Codex spawn_agent (GPT-5.4) | No | Yes | skills-codex/ |
| Alt G | Codex CLI | Claude Code CLI (claude-review MCP) | No* | No* | CODEX_CLAUDE_REVIEW_GUIDE |
| Alt H | Antigravity (Claude Opus 4.6 / Gemini 3.1 Pro) | GPT-5.4 (Codex MCP) or any via llm-chat | No | Optional | ANTIGRAVITY_ADAPTATION |
| Alt I | Codex CLI | Gemini direct API (gemini-review MCP) | No | No | CODEX_GEMINI_REVIEW_GUIDE |
Alt C supports tested providers: GLM (Z.ai), Kimi (Moonshot), LongCat (Meituan) as executors; DeepSeek, MiniMax as reviewers. Any OpenAI-compatible API should also work via the generic llm-chat MCP server. Alt D uses Alibaba Coding Plan — one API key for both executor and reviewer, 4 models included (Kimi, Qwen, GLM, MiniMax). Alt E uses ModelScope — free (2000 calls/day), one key, no automation restrictions. Alt G keeps Codex as executor but swaps the reviewer to Claude Code CLI via the local claude-review MCP bridge, with async polling for long paper/review prompts. Alt H uses Google Antigravity as the executor with native SKILL.md support — choose Claude Opus 4.6 (Thinking) or Gemini 3.1 Pro (high) as the execution model. Alt I keeps Codex as executor, adds only a thin skills-codex-gemini-review overlay, and routes the reviewer-aware predefined skills through the local gemini-review MCP bridge with direct Gemini API by default. It is the closest Gemini analogue to the existing Codex+Claude review path, while minimizing skill changes, and now also covers poster PNG review via the same bridge. Free-tier availability, rate limits, and data-use terms remain subject to Google's current policy.
* Alt G normally relies on local Codex CLI and Claude Code CLI logins. Direct API keys are optional, not required.
Alt A: GLM + GPT
Only replace the executor (Claude → GLM), keeping GPT-5.4 as the reviewer via Codex MCP.
npm install -g @anthropic-ai/claude-code
npm install -g @openai/codex
codex setup # set model to gpt-5.4
Configure ~/.claude/settings.json:
{
"env": {
"ANTHROPIC_AUTH_TOKEN": "your_zai_api_key",
"ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
"API_TIMEOUT_MS": "3000000",
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
"ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.7",
"ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5"
},
"mcpServers": {
"codex": {
"command": "/opt/homebrew/bin/codex",
"args": ["mcp-server"]
}
}
}
Codex CLI uses your existing OPENAI_API_KEY (from ~/.codex/config.toml or the environment) — no extra config needed for the reviewer side.
Alt B: GLM + MiniMax
No Claude or OpenAI API needed. Uses a custom MiniMax MCP server instead of Codex (because MiniMax doesn't support OpenAI's Responses API). Full guide: docs/MINIMAX_MCP_GUIDE.md.
Alt C: Any Executor + Any Reviewer
Mix and match freely using the generic llm-chat MCP server. Supports any OpenAI-compatible API as reviewer. Full guide: docs/LLM_API_MIX_MATCH_GUIDE.md.
Example combinations: GLM + DeepSeek, Kimi + MiniMax, Claude + DeepSeek, LongCat + GLM, etc.
After Setup: Install Skills & Verify
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep.git
cd Auto-claude-code-research-in-sleep
cp -r skills/* ~/.claude/skills/
claude
⚠️ For non-Claude executors (GLM, Kimi, etc.): Let the model read through the project once to ensure skills are correctly parsed. This is especially important if you've rewritten skills to use a different reviewer MCP (e.g., `mcp__llm-chat__chat` instead of `mcp__codex__codex`) — the new executor needs to understand the changed tool call patterns:
Read through this project and verify all skills are working: /idea-creator, /research-review, /auto-review-loop, /novelty-check, /idea-discovery, /research-pipeline, /research-lit, /run-experiment, /analyze-results, /monitor-experiment, /pixel-art
⚠️ Note: Alternative models may behave differently from Claude and GPT-5.4. You may need to tune prompt templates for best results. The core cross-model architecture remains the same.
Roadmap
Done
- Human-in-the-loop checkpoints — idea-discovery and research-pipeline pause at key decision points for user approval. Configurable via `AUTO_PROCEED` (default: auto-continue; set `false` to always wait)
- Alternative model combinations — GLM + GPT, GLM + MiniMax fully documented with setup guides. No Claude or OpenAI API required
- Workflow 3: Paper Writing Pipeline — full chain: `/paper-plan` → `/paper-figure` → `/paper-write` → `/paper-compile`. ICLR/NeurIPS/ICML templates, claims-evidence matrix, publication-quality figures, latexmk auto-fix. Inspired by claude-scholar, Research-Paper-Writing-Skills, baoyu-skills
- Configurable REVIEWER_MODEL — all Codex-dependent skills support a custom reviewer model (default `gpt-5.4`; also works with `gpt-5.3-codex`, `gpt-5.2-codex`, `o3`, etc.)
- Local paper library scanning — `/research-lit` scans local `papers/` and `literature/` directories before external search, leveraging papers you've already read
- Idea Discovery pipeline — `/idea-discovery` orchestrates research-lit → idea-creator → novelty-check → research-review in one command, with pilot experiments on GPU
- Full research pipeline — `/research-pipeline` chains Workflow 1 (idea discovery) → implementation → Workflow 2 (auto-review-loop) end-to-end
- Peer review skill — `/peer-review` for reviewing others' papers as a conference reviewer, with GPT-5.4 meta-review (planned; currently use `/research-review` with a paper PDF)
- Cross-model collaboration — Claude Code (executor) × Codex GPT-5.4 xhigh (reviewer) architecture, avoiding single-model self-play local minima
- Feishu/Lark integration — three modes (off/push/interactive), configurable via `~/.claude/feishu.json`. Push-only needs just a webhook URL; interactive uses feishu-claude-code. Off by default — zero impact on existing workflows. See setup guide
- Zotero MCP integration — `/research-lit` searches Zotero collections, reads annotations/highlights, exports BibTeX. Recommended: zotero-mcp (1.8k⭐). See setup guide
- Obsidian integration — `/research-lit` searches your Obsidian vault for research notes, tagged references, wikilinks. Recommended: mcpvault (760⭐) + obsidian-skills (13.6k⭐). See setup guide
- More executor × reviewer combinations — any OpenAI-compatible API works via the `llm-chat` MCP server. GLM, MiniMax, Kimi, LongCat, DeepSeek all tested — no Claude or OpenAI API required
- GitHub-based code sync — `/run-experiment` supports `code_sync: git` (`git push` → `ssh "git pull"`)
- W&B integration — auto `wandb.init()` + `wandb.log()` when `wandb: true`. `/monitor-experiment` pulls training curves
- ModelScope integration — free (2000 calls/day), one API key, dual-protocol
Planned
- Daemon mode — auto-restart the Claude Code session via `launchd`/`systemd` for true unattended operation. Currently the orchestration layer requires an active CLI session; state files (`REVIEW_STATE.json`, `AUTO_REVIEW.md`) support resuming across sessions, but relaunch is manual (#11)
- Reference-style figure generation — read figures from reference PDFs → identify chart type, color scheme, layout → generate same-style figures with your own data. Sub-goal remaining: data charts (extract color/font style → matplotlib rcParams). Method diagrams — solved by `paper-illustration`
- Workflow execution report — after each workflow (1/1.5/2/3) completes, auto-generate a structured summary: what was done, key decisions made, experiments run, results obtained, scores, and time spent. Output as `WORKFLOW_REPORT.md` for progress tracking, team reporting, and supervisor updates
- Document-based pipeline input — `/idea-discovery` and `/research-pipeline` auto-detect `RESEARCH_BRIEF.md` in the project root. Detailed context replaces a one-line prompt. Template: `templates/RESEARCH_BRIEF_TEMPLATE.md`
- Auto hyperparameter tuning skill — rewrite auto-hparam-tuning as an ARIS SKILL.md. 5-step cycle: understand project → plan tuning strategy → run experiments → analyze metrics (TensorBoard/W&B) → learn and iterate. Would plug into Workflow 1.5 (`/experiment-bridge`) or Workflow 2 (`/auto-review-loop`) when the reviewer says "tune hyperparameters"
- Plugin format — package ARIS as a Claude Code Plugin for one-click install via `/plugin install aris`. The skills version continues for cross-platform compatibility (Codex CLI, Cursor, Trae, etc.)
💬 Community
Domain-specific skills welcome! The core skills cover general research workflows, but every field has its own tools and patterns. We welcome PRs that add new skills for your domain — EDA, bioinformatics, robotics, HPC, or anything else. Just add a skills/your-skill/SKILL.md and open a PR. See dse-loop for an example.
Join the WeChat group for discussion on Claude Code + AI-driven research workflows:
๐ Citation
If you use ARIS in your research, please cite:
@misc{yang2026aris,
author = {Yang, Ruofeng and Li, Yongcan and Li, Shuai},
title = {ARIS: Fully Autonomous Research via Adversarial Multi-Agent Collaboration},
year = {2026},
url = {https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep},
}
⭐ Star History
Acknowledgements
ARIS is inspired by:
- AI Scientist (Sakana AI) — automated research pioneer
- AutoResearch (Andrej Karpathy) — end-to-end research automation
- FARS (Analemma) — Fully Automated Research System
- PaperBanana (PKU) — multi-agent academic illustration framework
This project builds on and integrates with many excellent open-source projects:
Core Infrastructure
- Claude Code — Anthropic's CLI for Claude, the execution backbone
- Codex CLI — OpenAI's CLI, used as an MCP server for cross-model review
Zotero Integration (setup guide)
- zotero-mcp — Zotero MCP server with semantic search and PDF annotations
- Zotero — open-source reference manager
Obsidian Integration (setup guide)
- mcpvault — Obsidian vault MCP server (no app required)
- obsidian-skills — Claude Code skills for Obsidian Markdown by Steph Ango (Obsidian CEO)
Paper Writing Inspiration
- claude-scholar — academic paper writing with Claude
- Research-Paper-Writing-Skills — paper writing skill templates
- baoyu-skills — Claude Code skills collection
Feishu/Lark Integration (setup guide)
- feishu-claude-code — bidirectional Feishu ↔ Claude Code bridge
- clawdbot-feishu — Feishu bot for Claude
- cc-connect — multi-platform messaging bridge
- lark-openapi-mcp — official Lark MCP server
Community
- awesome-agent-skills — curated list of Claude Code skills (featured)
Special Thanks โ Platform Adaptation
ARIS wouldn't run on so many platforms without these contributors:
- @Falling-Flower — adapted all ARIS skills for Codex CLI using `spawn_agent`
- @No-518 — ongoing maintenance of the Codex skill set, keeping parity with the latest updates
- @YecanLee — wrote the Cursor adaptation guide and local GPU setup docs
- @DefanXue & @Monglitay — first community paper built entirely with ARIS, scored 8/10 at a CS conference
Special Thanks โ Architecture & Vision
- 💡 @JingxuanKang — beyond code contributions (training-check, result-to-claim, ablation-planner, watchdog, templates, session recovery), deeply shaped ARIS through discussions on architecture design, compact mode, workflow state management, and the vision of what autonomous research workflows should look like. Many of today's core features — from structured project files to context-aware session recovery — grew out of these conversations.
License
MIT
