> Human-centered AI-assisted research can no longer sustain the next great leaps of our civilization. What we need is not just more tools, but an AI researcher that thinks and acts independently – a new entity to replace the human role in science. This is DARE.
# DARE – De-Anthropocentric Research Engine

Personal side project. Actively under development.

DARE is not a tool that helps you do research. It is the researcher. You set the direction – DARE searches, reads, discovers gaps, generates ideas, designs experiments, and executes them on GPUs. Autonomously. Iteratively. Without asking for permission.
## What It Does
- Autonomous literature survey – searches Google Scholar, downloads full papers (not just abstracts), reads them cover-to-cover with a three-pass protocol
- Gap discovery – identifies what the field is missing, not what you tell it to find
- Idea generation – 31 ideation methods across 5 categories (SCAMPER, component surgery, cross-domain, perspective forcing, structural deconstruction), filtered by the MAP-Elites quality-diversity algorithm
- Adversarial debate – Proposer-Critic-Judge architecture validates every gap, insight, and idea through structured 4+3 round debates
- INSIGHT pipeline – 7-step deep analysis (root cause → stakeholder → tension → HMW → abstraction → assumption → validation)
- Self-review loop – an independent AI process reviews all outputs, scores them, and selectively re-runs weak stages
- Research depth enforcement – quantitative budget floors (S/M/L topic tiers), state ledgers, budget gates, and adversarial completeness probes ensure the AI cannot take shortcuts or produce shallow results
- Forced cross-domain discovery – before ideation, mandatory search across 3+ unrelated domains (biology, physics, economics, etc.) to fuel cross-domain collision methods
- Experiment design & execution – designs experiments and runs them on remote GPU pods, autonomously
- Method evolution (planned) – AlphaEvolve-inspired evolutionary improvement of DARE's own methods (mutation + crossover + Elo ranking). Core tools implemented, full loop coming in v3.2+
- Deep reference exploration – traces citation graphs via Semantic Scholar
- Full-text caching – every paper and web page converted to markdown, cached locally
- Git-based context transfer – research context pushed to GitHub, cloned on a remote GPU pod, executed by a fresh AI instance
## Roadmap
Active development continues. Near-term priorities:
- Search pipeline overhaul – major refactor of the `dare-web` and `dare-scholar` search flows to better integrate with AlphaXiv MCP and the upcoming Perplexity MCP, reducing redundancy and improving retrieval quality
- Method-evolve full loop (v3.2+) – AlphaEvolve-inspired evolutionary improvement of DARE's own research methods. Core tools (mutate, crossover, evaluate) implemented; full autonomous Elo-tournament loop next
- Paper-writing implementation (v3.1+) – automated academic paper composition from research outputs. Strategy interface defined, implementation pending
- GitHub MCP integration – native GitHub MCP adapter for issue tracking, PR-driven experiment workflows, and automated result reporting
- Perplexity MCP adapter – leverage Perplexity's search-augmented generation as an additional web research backend
## Design Philosophy
### Why "De-Anthropocentric"?
The bottleneck in modern research is not data or compute – it's the human in the loop. Every existing "AI research assistant" still requires a human to decide what to search, what to read, which gaps matter, and which ideas are worth pursuing. DARE removes this bottleneck entirely. The human provides only the initial direction; everything after that is autonomous.
This isn't about replacing researchers – it's about creating a parallel research capacity that operates on timescales and breadths impossible for any individual.
### The Military Metaphor: Four-Layer Command Structure
DARE's architecture follows a military command hierarchy – not because research is war, but because the decomposition pattern is remarkably effective:
```
General   (Meta-Strategy)  →  "Take that hill"         →  WHAT to research
Colonel   (Strategy)       →  "Flank from the east"    →  WHEN and WHY
Captain   (Tactic)         →  "Squad A cover, B move"  →  HOW to combine
Sergeant  (SOP)            →  "Fire, reload, advance"  →  HOW to execute
```
Each layer has a single concern and calls only the layer directly below it. A Strategy never touches MCP tools directly; a Tactic never decides research direction. This strict layering means every component is independently testable, replaceable, and composable.
### Micro-Agent Paradigm: Every Tool Thinks
Traditional MCP tools are dumb functions – they take input, return output, no reasoning involved. DARE's `dare-agents` tools are fundamentally different. Each of the 49 tools is a single-responsibility LLM micro-agent with its own system prompt, personality, and reasoning chain.
When DARE runs `root-cause-drilling`, it's not calling a template – it's spawning an AI agent whose entire existence is devoted to drilling from surface symptoms to root causes. When `debate-critic` runs, it genuinely tries to destroy the idea it's reviewing. This is what makes DARE's outputs qualitatively different from prompt-chaining systems.
Built on pi-ai – a lightweight framework for building LLM-powered tools as MCP servers.
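To make the idea concrete, here is a minimal sketch of a single micro-agent written directly against the Anthropic SDK (pi-ai's actual API may differ; the tool name, system prompt, and wrapper shape are illustrative, not DARE's real code):

```typescript
// Illustrative only: one "micro-agent" = one system prompt = one concern.
// The real dare-agents tools are built with pi-ai; this sketch calls the
// Anthropic SDK directly and invents the wrapper for clarity.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_AUTH_TOKEN });

// The agent's fixed "personality": it does root-cause drilling and nothing else.
const ROOT_CAUSE_SYSTEM = `You are a root-cause analyst. Given a surface-level
research problem, drill down level by level ("why does this happen?") until you
reach causes that are structural, not symptomatic. Output a numbered causal chain.`;

export async function rootCauseDrilling(problem: string): Promise<string> {
  const response = await client.messages.create({
    model: process.env.ANTHROPIC_MODEL ?? "claude-sonnet-4-20250514",
    max_tokens: 2048,
    system: ROOT_CAUSE_SYSTEM,
    messages: [{ role: "user", content: problem }],
  });
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```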
### Adversarial Validation: Ideas Must Survive Attack
Every significant output in DARE goes through adversarial debate before being accepted. The Proposer-Critic-Judge architecture isn't decoration – it's the core quality mechanism:
- A Proposer presents the gap/insight/idea
- A Critic attacks it from every angle (4 rounds of critique)
- A Defender responds to each attack (3 rounds of defense)
- A Judge evaluates the exchange and scores the result
Ideas that survive this gauntlet are genuinely robust. Ideas that don't are discarded or refined. No hand-waving, no "sounds good to me."
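As a rough illustration of how such a 4+3 exchange could be orchestrated (the function names, transcript format, and scoring regex are hypothetical stand-ins, not DARE's actual interfaces):

```typescript
// Hypothetical orchestration of a Proposer–Critic–Defender–Judge debate.
// `runAgent(role, prompt)` stands in for a call to the corresponding
// dare-agents micro-agent; its real signature may differ.
interface DebateResult { transcript: string[]; verdict: string; score: number; }

async function adversarialDebate(
  proposal: string,
  runAgent: (role: "critic" | "defender" | "judge", prompt: string) => Promise<string>,
): Promise<DebateResult> {
  const transcript: string[] = [`PROPOSAL: ${proposal}`];

  // 4 rounds of critique; the defender answers the first 3.
  for (let round = 1; round <= 4; round++) {
    const attack = await runAgent("critic", transcript.join("\n\n"));
    transcript.push(`CRITIQUE ${round}: ${attack}`);
    if (round <= 3) {
      const defense = await runAgent("defender", transcript.join("\n\n"));
      transcript.push(`DEFENSE ${round}: ${defense}`);
    }
  }

  // The judge sees the whole exchange and returns a verdict plus a numeric score.
  const judgement = await runAgent("judge", transcript.join("\n\n"));
  const score = Number(judgement.match(/score\s*[:=]\s*(\d+(\.\d+)?)/i)?.[1] ?? 0);
  return { transcript, verdict: judgement, score };
}
```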
### Quality × Diversity: Not Just Good Ideas, Different Ideas
Most AI systems optimize for a single quality metric – they'll give you 10 variations of the same good idea. DARE uses MAP-Elites, a quality-diversity algorithm that maintains a population of ideas spanning multiple dimensions of variation. The result: you get the best idea in each niche, not 10 copies of the same insight.
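For readers unfamiliar with MAP-Elites, the core mechanism is a grid of behavioural niches where each cell keeps only its best occupant. A minimal sketch of such an archive (the idea shape and feature descriptors are invented for illustration, not DARE's schema):

```typescript
// Minimal MAP-Elites-style archive: one elite per feature-space cell.
interface Idea { text: string; quality: number; features: [number, number]; }

class MapElitesArchive {
  private cells = new Map<string, Idea>();
  constructor(private binsPerAxis = 5) {}

  private key(features: [number, number]): string {
    // Map each feature in [0, 1] onto a discrete bin index.
    return features
      .map(f => Math.min(this.binsPerAxis - 1, Math.floor(f * this.binsPerAxis)))
      .join(",");
  }

  offer(idea: Idea): boolean {
    const k = this.key(idea.features);
    const incumbent = this.cells.get(k);
    if (!incumbent || idea.quality > incumbent.quality) {
      this.cells.set(k, idea);   // new elite for this niche
      return true;
    }
    return false;                // dominated within its own niche
  }

  elites(): Idea[] { return [...this.cells.values()]; }
}
```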
### Research Depth & Breadth Enforcement
AI agents naturally take the path of least resistance – searching a handful of papers and declaring victory. DARE embeds hard enforcement mechanisms directly into every skill to prevent this:
- Research Budget: Every strategy declares quantitative floors with three topic-size tiers (Small / Medium / Large). A literature survey on a Medium topic must fetch 40+ papers and 50+ web pages.
- State Ledger: A progress table printed before every iteration – the AI cannot lose track of where it stands.
- Budget Gate: A `<HARD-GATE>` that blocks the strategy from exiting its loop until 80% of the budget is met. The AI cannot stop early no matter how "satisfied" it feels (see the sketch after this list).
- Adversarial Completeness Probe: After the budget is met, a qualitative self-check probes for blind spots (missing sub-areas, unchecked citations, unexplored perspectives). Up to 2 extra iterations if gaps are found.
- Yield Reports: Every tactic prints execution metrics (papers fetched, ideas generated, methods used) that feed the calling strategy's ledger.
- Cross-Domain Discovery: Before any ideation method runs, a mandatory phase searches 3+ unrelated domains for analogical inspiration – because the best ideas come from unexpected collisions.
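A rough sketch of how such a budget gate could be checked in code. The paper floors mirror the tiers quoted elsewhere in this README (S:20 / M:40 / L:60+ papers, 50+ web pages for a Medium topic); the Small/Large web-page floors and the ledger shape are illustrative assumptions:

```typescript
// Hypothetical state ledger + 80% budget gate for a literature-survey strategy.
interface Ledger { papersFetched: number; webPagesFetched: number; iteration: number; }
interface BudgetFloor { papers: number; webPages: number; }

const FLOORS: Record<"S" | "M" | "L", BudgetFloor> = {
  S: { papers: 20, webPages: 25 },   // web-page floor illustrative
  M: { papers: 40, webPages: 50 },
  L: { papers: 60, webPages: 75 },   // web-page floor illustrative
};

function budgetGateOpen(ledger: Ledger, tier: "S" | "M" | "L"): boolean {
  const floor = FLOORS[tier];
  const paperRatio = ledger.papersFetched / floor.papers;
  const webRatio = ledger.webPagesFetched / floor.webPages;
  // The strategy may exit its loop only once 80% of every floor is met.
  return Math.min(paperRatio, webRatio) >= 0.8;
}
```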
```
You ask a question
        │
┌──────────────────────────────────────────────────────────────────┐
│  Phase 0: Brainstorming (structured requirement clarification)   │
│  Phase 1: Intake (research brief)                                │
│  Phase 2: Research Loop (Stages 1-3, up to 7 rounds)             │
│    ├── Literature Survey (S:20 / M:40 / L:60+ papers)            │
│    ├── Gap Analysis (S:10 / M:15 / L:25+ papers)                 │
│    ├── Insight (7-step pipeline)                                 │
│    ├── Ideation (cross-domain discovery → 31 methods × 5)        │
│    └── Review → Selective Redo → Review (score ≥ 8/10)           │
│  Phase 3: Experiment Design                                      │
│  Phase 4: GPU Execution (remote pod, fully autonomous)           │
└──────────────────────────────────────────────────────────────────┘
        │
Results returned via git
```
Each stage runs SEARCH → READ → REFLECT → EVALUATE cycles with autonomous gap discovery and dynamic stopping. No human in the loop.
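In code, one stage's loop could be sketched roughly as follows; the injected functions and their names are hypothetical stand-ins for the corresponding strategies, gates, and probes, not DARE's actual skill interfaces:

```typescript
// Hypothetical shape of one research-loop stage: SEARCH → READ → REFLECT → EVALUATE,
// repeated until the quantitative budget gate and the completeness probe both pass.
interface StageLedger { papersFetched: number; webPagesFetched: number; iteration: number; }

interface StageDeps {
  search(topic: string, ledger: StageLedger): Promise<string[]>;   // SEARCH: scholar + web
  read(hits: string[], ledger: StageLedger): Promise<void>;        // READ: full text → cache
  reflect(ledger: StageLedger): Promise<string[]>;                 // REFLECT: what's still missing?
  evaluate(ledger: StageLedger, gaps: string[]): Promise<boolean>; // EVALUATE: self-score the round
  budgetGateOpen(ledger: StageLedger): boolean;                    // 80% budget gate
  completenessProbe(ledger: StageLedger): Promise<string[]>;       // adversarial blind-spot check
}

async function runStage(topic: string, deps: StageDeps, maxRounds = 7): Promise<StageLedger> {
  const ledger: StageLedger = { papersFetched: 0, webPagesFetched: 0, iteration: 0 };
  let extraRounds = 0;

  while (ledger.iteration < maxRounds) {
    ledger.iteration++;
    const hits = await deps.search(topic, ledger);
    await deps.read(hits, ledger);
    const gaps = await deps.reflect(ledger);
    const satisfied = await deps.evaluate(ledger, gaps);

    if (satisfied && deps.budgetGateOpen(ledger)) {
      // Even once the gate opens, blind spots can force up to 2 extra iterations.
      const blindSpots = await deps.completenessProbe(ledger);
      if (blindSpots.length === 0 || extraRounds >= 2) break;
      extraRounds++;
    }
  }
  return ledger;
}
```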
## Architecture (v3.0)
Four-layer skill hierarchy where each layer calls only the layer below:
```
┌────────────────────────────────────────────────────────────────┐
│  META-STRATEGY (/dare)                                         │
│  Entry point – orchestrates the full research pipeline         │
├────────────────────────────────────────────────────────────────┤
│  STRATEGY (8)                                                  │
│  intake, lit-survey, gap-analysis, insight, ideation,          │
│  round, paper-writing, method-evolve                           │
├────────────────────────────────────────────────────────────────┤
│  TACTIC (15)                                                   │
│  academic-research, web-research, insight, multiagent-debate,  │
│  review, idea-generation, idea-augmentation, scamper,          │
│  component-surgery, cross-domain-collision, and more           │
├────────────────────────────────────────────────────────────────┤
│  SOP (60)                                                      │
│  Single-responsibility wrappers around dare-agents tools       │
├────────────────────────────────────────────────────────────────┤
│  TOOL LAYER (MCP servers – atomic operations)                  │
│  dare-agents, dare-scholar, dare-web, apify, brave, runpod     │
└────────────────────────────────────────────────────────────────┘
```
- Meta-Strategy = WHAT to research (entry point, pipeline orchestration)
- Strategy = WHEN and WHY (iteration loops, state management, stopping conditions)
- Tactic = HOW to combine (orchestrates multiple SOPs into coherent workflows)
- SOP = HOW to execute (single dare-agents tool wrapper with protocol)
- Tool = WHAT to do (atomic MCP operations)
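To make the strict layering concrete, here is one way the contracts could be typed – purely illustrative interfaces, not DARE's actual skill definitions:

```typescript
// Illustrative only: each layer exposes one concern and depends solely on the
// layer directly beneath it. Names and shapes are hypothetical.
interface Tool   { call(name: string, args: Record<string, unknown>): Promise<unknown>; } // atomic MCP op
interface Sop    { execute(input: string, tool: Tool): Promise<string>; }                 // wraps ONE tool
interface Tactic { run(goal: string, sops: Sop[]): Promise<string>; }                     // combines SOPs
interface Strategy {
  // Owns iteration loops, the state ledger, and stopping conditions; never touches tools.
  iterate(brief: string, tactics: Tactic[]): Promise<string>;
}
interface MetaStrategy {
  // Entry point: decides WHAT to research and sequences the strategies.
  orchestrate(direction: string, strategies: Strategy[]): Promise<string>;
}
```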
## dare-agents – LLM-Powered Micro-Agent Tools
The core engine of v3: 49 tools built with pi-ai, each a single-responsibility LLM micro-agent with its own system prompt.
| Category | Tools | Count |
|---|---|---|
| Insight | root-cause-drilling, stakeholder-mapping, tension-mining, question-reformulation, abstraction-laddering, assumption-audit, validation | 7 |
| Debate | debate-critic, debate-defender, debate-judge | 3 |
| SCAMPER | substitute, combine, adapt, modify, put-other-use, eliminate, reverse | 7 |
| Component Surgery | subtract, multiply, divide, unify, redirect | 5 |
| Cross-Domain & Others | analogical-transfer, forced-bridge, triz-contradiction, morphological-matrix, axiom-negation, constraint-injection, random-paper-entry, reverse-engineering, worst-method-analysis, method-problem-matrix, time-machine, anti-benchmark, ablation-brainstorm, benchmark-sweep, failure-taxonomy | 15 |
| Utility | facet-extraction, facet-bisociation, digest-extraction, paper-rating, quality-diversity-filtering, self-review, reviewer2-hat, theorist-hat, practitioner-hat | 9 |
| Method-Evolve | mutate, crossover, evaluate | 3 |
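The Method-Evolve tools feed the planned Elo-tournament loop mentioned in the roadmap. For reference, the standard Elo update such a tournament would typically rely on looks like this (a generic sketch, not DARE code):

```typescript
// Standard Elo update for ranking methods from pairwise comparisons.
// K controls how fast ratings move; 32 is a common default.
function eloUpdate(ratingA: number, ratingB: number, aWins: boolean, k = 32): [number, number] {
  const expectedA = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
  const scoreA = aWins ? 1 : 0;
  const delta = k * (scoreA - expectedA);
  return [ratingA + delta, ratingB - delta];
}
```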
## Monorepo Structure
```
dare/
├── packages/
│   ├── agents/    # dare-agents MCP – 49 LLM micro-agent tools (72 tests)
│   ├── scholar/   # dare-scholar MCP – academic paper pipeline (5 tools)
│   ├── ss/        # dare-ss MCP – Semantic Scholar API (8 tools, 75 tests)
│   ├── web/       # dare-web MCP – web page fetching & caching (2 tools)
│   └── session/   # dare-session – pod provisioning scripts
├── skills/
│   ├── dare/      # /dare meta-strategy (entry point)
│   ├── strategy/  # 8 strategies (lit-survey, gap-analysis, insight, ...)
│   ├── tactic/    # 15 tactics (debate, scamper, surgery, ...)
│   └── sop/       # 60 SOPs (one per dare-agents tool)
├── package.json   # Root workspace config
└── .mcp.json      # MCP server configuration (gitignored)
```
## MCP Servers
| Server | Source | Tools | Purpose |
|---|---|---|---|
| dare-agents | packages/agents | 49 | LLM micro-agent tools (ideation, debate, insight, method-evolve) |
| dare-scholar | packages/scholar | 5 | Academic paper pipeline β search, enrich, fetch, read, reference |
| dare-ss | packages/ss | 8 | Semantic Scholar API β paper lookup, citations, references, recommendations, author info |
| dare-web | packages/web | 2 | Web page fetching and markdown caching |
| dare-session | packages/session | – | Git-based context transfer to remote GPU pods |
| apify | @apify/actors-mcp-server | 2 | Google Scholar search + web page scraping |
| brave-search | @brave/brave-search-mcp-server | 1 | Web search API |
| runpod | @runpod/mcp-server | 4 | GPU pod lifecycle management |
| alphaxiv | AlphaXiv MCP (SSE) | 6 | Paper search, Q&A, code exploration (arXiv) |
## Quick Start
- Clone and install:

```bash
git clone https://github.com/Pthahnix/De-Anthropocentric-Research-Engine.git
cd De-Anthropocentric-Research-Engine
npm install
```

- Install external MCP servers:

```bash
npm install -g @apify/actors-mcp-server @brave/brave-search-mcp-server @runpod/mcp-server
```

- Copy `.mcp.example.json` to `.mcp.json` and fill in your API keys and paths.
- Claude Code will auto-discover all tools from the configured MCP servers.
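For orientation, a `.mcp.json` entry for the local dare-agents server might look roughly like this; the `command`/`args` path is an assumption about how the built server is launched, so defer to `.mcp.example.json` for the authoritative layout:

```json
{
  "mcpServers": {
    "dare-agents": {
      "command": "node",
      "args": ["packages/agents/dist/index.js"],
      "env": {
        "ANTHROPIC_AUTH_TOKEN": "sk-...",
        "ANTHROPIC_MODEL": "claude-sonnet-4-20250514"
      }
    }
  }
}
```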
## Configuration
### dare-agents
| Variable | Description |
|---|---|
| `ANTHROPIC_AUTH_TOKEN` | API key for LLM completions (Anthropic or compatible proxy) |
| `ANTHROPIC_BASE_URL` | API base URL (optional, for proxy/gateway) |
| `ANTHROPIC_MODEL` | Model ID (default: `claude-sonnet-4-20250514`) |
### dare-scholar
| Variable | Description |
|---|---|
| `APIFY_TOKEN` | Apify API token for PDF → markdown conversion |
| `EMAIL` | Email for Unpaywall API (polite pool) |
| `DARE_CACHE` | Cache directory (must be an absolute path) |
| `OPENAI_API_KEY` | OpenAI-compatible API key for AI paper reading |
| `OPENAI_BASE_URL` | API base URL |
| `OPENAI_MODEL` | Model name for the paper-reading agent |
### dare-ss
| Variable | Description |
|---|---|
| `SS_API_KEY` | Semantic Scholar API key (optional – the public API works without a key at lower rate limits) |
### dare-web
| Variable | Description |
|---|---|
| `DARE_CACHE` | Cache directory, shared with dare-scholar (must be an absolute path) |
| `APIFY_TOKEN` | Apify API token for `rag-web-browser` |
### dare-session
| Variable | Description |
|---|---|
| `RUNPOD_API_KEY` | RunPod API key (for GPU pod targets) |
| `REMOTE_HOST` | SSH hostname/IP (for remote server targets) |
| `REMOTE_USER` | SSH username (for remote server targets) |
| `HF_TOKEN` | Hugging Face token (passed to the pod for model downloads) |
## License
Built by Pthahnix
![Star History Chart](https://api.star-history.com/svg?repos=Pthahnix/De-Anthropocentric-Research-Engine&type=Date)