Aiden
Local-first AI execution runtime for Linux & Windows · secure automation · AGPL-3.0 · built by a solo developer
Autonomous AI Engine
72 skills · 42 tools · 19 providers · 9 channels · AGPL-3.0
Windows · Linux · WSL · macOS (API Mode)
Local-first · Self-healing routing · Browser & terminal control · Persistent memory
Website · Contact · Discord · Download · Book
v4.1.0 – Multi-channel autonomous AI engine: Telegram + MCP server + subagent fanout + voice CLI + skill mining. Hardened cron, structured markdown, cross-platform CI. See changelog below.
Support Aiden
Solo-built, AGPL-3.0. If Aiden saves you time, consider sponsoring development.
Funds go to ongoing development, infrastructure costs, and contributor bounties.
Why Aiden
Most AI agents answer questions. Aiden runs work end-to-end on your machine.
- Runs on your machine – local-first, no telemetry, no cloud account required
- Controls your desktop – vision loop, browser, terminal, files. Not a chatbot wrapped in a sandbox.
- Automates any browser – 10 Playwright-driven tools (navigate, click, type, fill, scroll, extract, screenshot, get-url, close, captcha-check)
- Self-healing provider routing – 6-slot fallback chain (`together → groq × 4`) advances slots in under a second on rate-limit
- OAuth subscription routing – sign in with Claude Pro or ChatGPT Plus; queries route to your subscription quota, not pay-as-you-go
- Persistent memory – `MEMORY.md`, `USER.md`, `SOUL.md`, plus semantic recall and a `LESSONS.md` failure log that grows every session
- Lives where you do – identity files re-read every turn; edit `USER.md` mid-conversation and the change lands within one reply
- One command to start – `npx aiden-runtime` installs, configures, and runs everything
- Honest failures – every tool error names the tool, provider, retry count, fallback chain, error, and next step. No silent swallowing.
- Plugin extension – drop a plugin into `<aiden-home>/plugins/` and call `ctx.commandRegistry.register()` to add slash commands without touching core
- Open source – AGPL-3.0 core, Apache-2.0 skills. Read every line, modify anything, contribute back.
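A minimal sketch of what a dropped-in plugin could look like. Only the `ctx.commandRegistry.register()` call is documented here; the `activate` hook name and the surrounding types are illustrative assumptions, not Aiden's real plugin API:

```typescript
// Hypothetical plugin shape — only ctx.commandRegistry.register() is
// documented; everything else below is an assumption for illustration.
interface SlashCommand {
  name: string;                                   // e.g. "/hello"
  description: string;
  run: (args: string) => Promise<string> | string;
}

interface PluginContext {
  commandRegistry: { register: (cmd: SlashCommand) => void };
}

// A plugin dropped into <aiden-home>/plugins/ might export an activate hook
// that registers its slash commands at load time:
function activate(ctx: PluginContext): void {
  ctx.commandRegistry.register({
    name: "/hello",
    description: "Say hello from a plugin",
    run: (args) => `Hello${args ? ", " + args.trim() : ""}!`,
  });
}
```

With a shape like this, typing `/hello world` in the REPL would dispatch to the plugin's `run` handler without any change to core.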
Aiden is a local-first AI operating system. It runs entirely on your machine: no cloud account required, no telemetry, no data leaving your hardware unless you configure a cloud provider. It installs as a global npm package (`aiden-runtime`, ~16 MB) on Windows, Linux, WSL, and macOS; Node.js 18+ is the only prerequisite. Features: 68 bundled skills, 42 built-in tools across 11 categories, multi-layer memory architecture, self-healing provider routing across 19 providers, and the ability to control your screen, browse the web, run code, send emails and messages, manage files, and hold a full conversation, fully offline via Ollama.
Disclaimer
Aiden is a hobby project built solo by one person. It is provided "as-is" without warranty of any kind, express or implied.
⚠️ Important risks to understand before installing:
- Aiden controls your computer. It can run shell commands, edit or delete files, automate your browser, capture screenshots, and send messages on your behalf. Always review what it's doing, especially when running in `/yolo` (no-approval) mode.
- Use API keys responsibly. You provide your own provider keys and pay for any usage on them. Aiden has built-in budget caps, but you should also set spending limits with your provider.
- Back up important data. Aiden can patch and delete files. Always work in version-controlled directories or with backups.
- Skills and plugins may send data externally. Built-in capabilities are local-first, but third-party skills you install may make external API calls. Review skill source before use.
- OAuth providers may change behavior. Claude Pro and ChatGPT Plus subscription routing depends on provider-side gates. If your subscription is restricted by Anthropic/OpenAI, fall back to direct API keys.
- Not for production-critical work without review. Aiden is designed for personal use and exploration. For business-critical workflows, use approval mode (default) and review every action.
By installing Aiden you accept these risks. The author and contributors accept no liability for data loss, financial loss, account suspensions, or any other damages arising from use of this software.
For commercial deployments with support and indemnification, see aiden.taracod.com/contact?type=enterprise.
Platform support
All platforms use the same npm-based install path. Node.js 18+ is the only prerequisite.
| Platform | Install | Skills available |
|---|---|---|
| Windows 10/11 | `npm install -g aiden-runtime` | All 68 (including Windows-only skills) |
| Linux | `npm install -g aiden-runtime` | ~62 (Windows-only skills auto-skipped) |
| WSL 2 | `npm install -g aiden-runtime` | ~62 (Windows-only skills auto-skipped) |
| macOS | `npm install -g aiden-runtime` | ~62 (Windows-only skills auto-skipped) |
Windows-only skills (clipboard history, Defender, OneNote, Outlook COM, registry, Task Scheduler, etc.) are tagged `platform: windows` and silently skipped on other platforms at load time.
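The load-time gate described above might look roughly like this; the field names and the `win32` mapping are assumptions based on Node's `process.platform` conventions, only the `platform: windows` tag is documented:

```typescript
// Sketch of platform-gated skill loading (field names are assumptions).
interface SkillMeta {
  name: string;
  platform?: "windows" | "linux" | "darwin"; // absent = cross-platform
}

// Keep untagged skills everywhere; keep tagged skills only on a matching
// platform. Node reports Windows as "win32", so map it to the "windows" tag.
function loadableSkills(skills: SkillMeta[], platform: string): SkillMeta[] {
  const current = platform === "win32" ? "windows" : platform;
  return skills.filter((s) => !s.platform || s.platform === current);
}
```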
Get an API key
Aiden needs at least one AI provider configured. You can use the setup wizard (aiden setup) to walk through this, or manually set keys in .env.
Free providers (recommended for getting started):
| Provider | Where to get a key | Free tier |
|---|---|---|
| Groq | console.groq.com/keys | Yes – fast Llama 3 / Qwen |
| Gemini | aistudio.google.com/apikey | Yes – generous free tier |
| OpenRouter | openrouter.ai/keys | Free credits + paid tier |
| NVIDIA NIM | build.nvidia.com | Free playground tier |
Paid providers:
| Provider | Where to get a key | Notes |
|---|---|---|
| Anthropic | console.anthropic.com | Best Claude models |
| OpenAI | platform.openai.com/api-keys | GPT-4, GPT-5 |
| Together AI | api.together.xyz/settings/api-keys | Default in Aiden |
Subscription routing (use your existing subscription):
- Claude Pro / Max – sign in with `/auth login claude-pro` inside Aiden. Routes to your subscription quota instead of pay-as-you-go.
- ChatGPT Plus – sign in with `/auth login chatgpt-plus`. Routes to the Codex backend.
Fully offline (no API key needed):
- Ollama – install from ollama.ai, then `ollama pull qwen2.5:7b`. Aiden auto-detects and uses it as a fallback.
Quick Start
Fastest – npx (no install needed)
npx aiden-runtime
That's it. Node.js 18+ is the only prerequisite. On first run it asks which AI provider you want (Groq is free), validates your key, saves config to ~/.aiden/, and starts the chat REPL. Subsequent runs skip the wizard and go straight to the assistant.
Or install globally for the aiden command:
npm install -g aiden-runtime
aiden
Prerequisites (all platforms)
- Node.js 18+
- Git (only for the manual install path below)
- Ollama (optional, for offline mode): ollama.ai
Windows – one-line install
irm aiden.taracod.com/install.ps1 | iex
The installer verifies Node.js 18+ then runs npm install -g aiden-runtime. Same package as npx aiden-runtime above; just adds the aiden command to your PATH so you can launch from any terminal.
Linux / WSL / macOS – one-line install
curl -fsSL aiden.taracod.com/install.sh | bash
Manual install (all platforms)
git clone https://github.com/taracodlabs/aiden.git
cd aiden
npm install
cp .env.example .env
# Edit .env β add at minimum one API key (Groq is free: console.groq.com)
Run (manual install)
# Build, then start
npm run build
aiden # CLI
# Or run the API server explicitly:
npm start # API server on port 4200
After pulling updates (manual install)
git pull
npm run build
aiden
Uninstall
Windows – open Settings → Apps (or Control Panel → Programs) and uninstall Aiden. To also remove user data:
Remove-Item -Recurse -Force "$env:APPDATA\aiden"
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\aiden"
Linux / macOS / WSL
curl -fsSL aiden.taracod.com/uninstall.sh | bash
Or manually:
rm -rf ~/.local/share/aiden ~/.config/aiden
npm uninstall -g aiden-runtime
Minimum .env to get started
GROQ_API_KEY=your_key_here # free at console.groq.com/keys
Set AIDEN_HEADLESS=true to suppress the Electron GUI when running the packaged app.
Known limitations (v4.1.0)
We're shipping honestly – things that work, things that don't:
Tested and working:
- Windows 10/11 native (primary platform, full QA)
- Linux via WSL2 (cross-platform paths verified)
- Together AI (default provider, fast)
- Groq 4-slot fallback chain
- ChatGPT Plus OAuth (verified end-to-end with Codex backend)
- Claude Pro OAuth (verified – subscription routing, sanitised identity)
Untested at launch:
- macOS native – best-effort, may need user reports
- Linux distributions beyond Ubuntu/Debian (Snap/Flatpak Chrome detection)
- Hugging Face / Vercel AI Gateway providers – registered but unverified
Not in v4.0:
- Subagent fanout / parallel agent swarm – single-loop only; deferred to v4.x
- OCR – not bundled (vision-loop screen capture works, but no Tesseract)
- Full agentskills.io ecosystem install – held pending license review
- Docker sandbox backend – dropped in the v4 rewrite
Landed in v4.1:
- Telegram channel adapter (DM polling + per-chat memory) – see docs/channels/telegram.md
Beta features:
- OAuth providers – provider-side gates may apply; use API keys as a fallback
- Auto-update – notifies on an outdated version, doesn't auto-install
Found a bug? Report at https://github.com/taracodlabs/Aiden-v4/issues with output of aiden doctor for fast triage.
Getting Started
Once Aiden is running, type these in the chat prompt:
| First thing to do | What to type |
|---|---|
| See all available commands | /help |
| Switch providers / models | /model |
| List configured providers | /providers |
| Browse available skills | /skills |
| Run health checks | /doctor (or aiden doctor from shell) |
| Schedule a recurring task | /cron add "0 9 * * 1-5" 'morning briefing' |
| Sign in with Claude Pro | /auth login claude-pro |
| Sign in with ChatGPT Plus | /auth login chatgpt-plus |
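The `/cron` example above uses a standard 5-field expression; the feature list also mentions 5/6-field support and `@daily`/`@hourly` shortcodes. A sketch of how those might be normalised before handing off to the croner engine (the shortcode mapping follows common cron convention; the validation logic is illustrative):

```typescript
// Hypothetical pre-processing for cron schedules. The shortcode expansions
// are standard cron conventions, not Aiden's verified internals.
const SHORTCODES: Record<string, string> = {
  "@hourly": "0 * * * *",
  "@daily": "0 0 * * *",
};

function normaliseCron(expr: string): string {
  const expanded = SHORTCODES[expr.trim()] ?? expr.trim();
  const fields = expanded.split(/\s+/);
  // Accept 5-field (minute-first) or 6-field (second-first) expressions.
  if (fields.length < 5 || fields.length > 6) {
    throw new Error(`cron expression needs 5 or 6 fields, got ${fields.length}`);
  }
  return expanded;
}
```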
Ask anything in plain English – no special syntax needed for regular tasks:
summarize the PDF on my desktop
open chrome and search for latest AI news
take a screenshot and describe what you see
remind me to deploy at 5pm
play me a popular hindi song
what files did I download today
Type / to browse all 28 commands with instant search. Skills register their own dynamic slash commands at load time.
Troubleshooting
"Cannot find module" or TypeScript errors
npm run build # always rebuild after git pull
"npm run serve" not found
There is no serve script. Use npm start instead.
Server not responding
# Check if API server is running on port 4200
netstat -ano | findstr :4200 # Windows
lsof -i :4200 # Linux/macOS
Ollama not connecting
ollama serve # make sure Ollama is running
ollama pull qwen2.5:7b # pull your chosen model
Changing the Ollama model or inference settings (no recompile needed – edit .env):
OLLAMA_MODEL=qwen2.5:7b
OLLAMA_TEMPERATURE=0.3
OLLAMA_CONTEXT_LENGTH=4096
OLLAMA_NUM_GPU=99
Use with any OpenAI client (Open WebUI, Chatbox, Cursor, …)
Base URL: http://localhost:4200
API Key: none required (or set AIDEN_API_KEY=… for Bearer auth)
Model: aiden-3.13 (alias preserved for client compatibility)
Screenshots
Terminal (TUI)
Boot card with environment + capabilities. Status pills (core / mode / model / memory). Per-turn rule separator. Random spinner phrases. Provider/context/elapsed footer. Runs in any terminal.
Desktop app
Full chat interface with live activity panel. Local-first, connects to Ollama or any of 19 cloud providers via your own API key.
Memory graph
Multi-layer memory visualised β every conversation, task, and learned pattern becomes a node in the local knowledge graph. Persisted to disk, searchable.
Features
| Category | What Aiden does |
|---|---|
| Inference & providers | 19 providers including Anthropic, OpenAI, Groq (4-slot fallback), Together, Gemini, NVIDIA NIM, OpenRouter, DeepSeek, Mistral, Z.ai, Kimi, MiniMax, Hugging Face, custom OpenAI-compatible endpoints, and Ollama for fully offline. OAuth subscription routing for Claude Pro and ChatGPT Plus. |
| 42 built-in tools | Web search & fetch, deep research, YouTube search, Playwright browser automation (10 tools), file ops (read, list, write, patch, delete, move, copy), process control (spawn, kill, list, log-read, wait), shell exec, code execution, system info, MCP bridge, memory add/replace/remove, session list/search, skill view/list/manage. |
| 68 bundled skills | Composable workflows each with a SKILL.md prompt, optional helper scripts, and tool requirements. Includes: GitHub PR/issue workflows, NSE / Upstox / Zerodha trading, Censys / Shodan / VirusTotal lookups, Windows Defender / Task Scheduler, Docker management, YouTube content tools, ASCII art, and more. |
| 6-layer memory | MEMORY.md (declarative facts), conversation/session/workspace memory, semantic search (BM25 + embeddings), learning memory (LESSONS.md), structured user profile. Dirty-bit invalidation rebuilds the prompt when files change mid-session. |
| Voice | Edge TTS / Windows SAPI text-to-speech, speech-to-text helpers. |
| Channel adapters | Discord, Slack, Telegram, WhatsApp, Email (IMAP+SMTP), Webhook, Twilio SMS, iMessage (macOS), Signal – any channel triggers the same agent loop. |
| Computer use | Screenshot capture, screen-state vision loop, browser automation. Mouse/keyboard automation partial. |
| Cron scheduler | Persistent recurring tasks via the croner engine. Atomic state writes, output capture, 5/6-field cron + @daily/@hourly shortcodes. |
| Plugins | Three bundled plugins: Chrome DevTools Protocol bridge, Claude Pro OAuth, ChatGPT Plus OAuth. Plugin system with permission-state machine (pending-grant / loaded / suspended). |
| MCP | Model Context Protocol bridge – stdio + HTTP transports, schema discovery, tool dispatch. |
| Security moat | 10-module safety layer: tiered approval engine (safe / caution / dangerous), dangerous-command pattern classifier, honesty enforcement (post-loop scan rewrites false claims), memory guard, planner-guard tool narrowing, SSRF-safe URL fetcher, secret/PII pre-write scanner, skill-teacher (auto-create skills from successful flows), pro-license gate, provider-chain glue. |
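As one illustration of the tiered approval engine in the security row above, a dangerous-command classifier might bucket shell commands like this. The real pattern lists live inside Aiden's safety layer; these regexes are examples only:

```typescript
// Illustrative tier classifier — patterns are examples, not Aiden's real list.
type Tier = "safe" | "caution" | "dangerous";

const DANGEROUS = [
  /rm\s+-rf\s+\//,        // recursive delete from root-ish paths
  /mkfs/,                 // filesystem format
  /:\(\)\s*\{.*\};:/,     // classic fork bomb
  /format\s+c:/i,         // Windows drive format
];

const CAUTION = [
  /\brm\b/,               // any delete
  /\bmv\b/,               // moves can clobber files
  /curl .*\|\s*(ba)?sh/,  // piping the internet into a shell
  /npm\s+publish/,        // publishing acts on your behalf
];

function classifyCommand(cmd: string): Tier {
  if (DANGEROUS.some((p) => p.test(cmd))) return "dangerous";
  if (CAUTION.some((p) => p.test(cmd))) return "caution";
  return "safe";
}
```

In the default approval mode, anything above "safe" would presumably pause for user confirmation; `/yolo` skips that gate.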
What Aiden is
Aiden runs locally on your machine. It controls your desktop, browser, and terminal through natural conversation. It learns from your work and remembers what matters across sessions.
- Local-first β your conversations and data stay on your machine. No cloud account required.
- Real desktop control β vision, browser, terminal, files. Not a chatbot wrapped in a sandbox.
- Persistent memory β Aiden remembers facts, preferences, and lessons from prior sessions. The longer you use it, the better it knows your work.
- Honest by design β when a tool fails, Aiden surfaces the failure rather than fabricating success.
- Open source β AGPL-3.0. Read every line, modify anything, contribute back.
Architecture
User input (any channel)
        │
        ▼
┌────────────────────────────┐
│ AidenAgent – single loop   │  ← per turn: build prompt → ask provider →
│ core/v4/aidenAgent.ts      │    dispatch tools → loop until stop
└────┬──────────┬────────────┘
     │          │
     │          ▼
     │   ┌──────────────────┐
     │   │ Tool dispatcher  │──▶ 42 built-in tools
     │   └──────────────────┘    + skill-driven dynamic tools
     │
     ▼
┌─────────────────────────────────────┐
│ Memory                              │
│ MEMORY.md · USER.md · SOUL.md       │
│ conversation · session · workspace  │
│ semantic (BM25 + embeddings)        │
│ learning (LESSONS.md)               │
└─────────────────────────────────────┘
        │
        ▼
┌──────────────────────────────────┐
│ Provider router + fallback       │  ← 19 providers, 6-slot self-healing
│ providers/v4/runtimeResolver.ts  │    chain (together → groq × 4)
└──────────────────────────────────┘
        │
        ▼
Response (streamed back to originating channel)
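The provider router's 6-slot chain could be sketched as follows. The `together → groq × 4` shape and "cooldown + least-used selection" are stated in the docs; the cooldown duration and data layout here are assumptions:

```typescript
// Sketch of self-healing slot selection (layout and timings are assumptions).
interface Slot {
  provider: string;      // e.g. "together", "groq"
  uses: number;          // requests served by this slot
  cooldownUntil: number; // epoch ms; 0 = usable now
}

// Pick the next usable slot: skip slots still cooling down after a
// rate-limit, then prefer the least-used (stable sort keeps chain order
// on ties, so the primary slot wins when counts are equal).
function nextSlot(chain: Slot[], now: number): Slot | undefined {
  return chain
    .filter((s) => s.cooldownUntil <= now)
    .sort((a, b) => a.uses - b.uses)[0];
}

// On a rate-limit (HTTP 429), bench the slot and advance within the same turn.
function onRateLimit(slot: Slot, now: number, cooldownMs = 30_000): void {
  slot.cooldownUntil = now + cooldownMs;
}
```

This is what lets the chain advance "in under a second" on a 429: the failing slot is benched and the next request immediately goes to the next eligible slot.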
See ARCHITECTURE.md for a full layer-by-layer breakdown, prompt-slot composition, and the skill system design.
Configuration
Copy .env.example to .env in the Aiden data directory.
cp .env.example .env
Key environment variables:
| Variable | Default | Notes |
|---|---|---|
| `OLLAMA_HOST` | `http://127.0.0.1:11434` | Override if Ollama runs on a different host/port |
| `OLLAMA_MODEL` | `qwen2.5:7b` | Default chat model for offline mode |
| `ANTHROPIC_API_KEY` | – | Optional cloud provider |
| `OPENAI_API_KEY` | – | Optional cloud provider |
| `GROQ_API_KEY` | – | Free tier: fast Llama 3 / Qwen inference |
| `TOGETHER_API_KEY` | – | Default cloud provider |
| `AIDEN_HEADLESS` | `false` | `true` suppresses the Electron GUI |
| `AIDEN_BROWSER_HEADLESS` | `false` | `true` runs Playwright headless |
| `AIDEN_UI_ICONS` | `0` | `1` enables emoji tool-row icons |
| `AIDEN_UI_TIMESTAMPS` | `0` | `1` prepends HH:MM:SS to chat lines |
| `AIDEN_API_KEY` | – | Set to require Bearer auth on the OpenAI-compatible API |
See .env.example for the full list covering voice, messaging integrations, search, computer use, and more.
Use with any OpenAI client
Aiden exposes an OpenAI-compatible API at localhost:4200. Point any OpenAI client at Aiden to get the full agent loop instead of raw LLM inference:
| Setting | Value |
|---|---|
| Base URL | http://localhost:4200 |
| API Key | (none required locally) |
| Model | aiden-3.13 (alias preserved for client compatibility) |
Works with: Open WebUI Β· LibreChat Β· Chatbox Β· Continue.dev Β· Cursor Β· TypingMind Β· any app using the OpenAI SDK.
# Python example β zero config
from openai import OpenAI
client = OpenAI(base_url="http://localhost:4200", api_key="none")
response = client.chat.completions.create(
model="aiden-3.13",
messages=[{"role": "user", "content": "search news about AI agents"}]
)
print(response.choices[0].message.content)
Optional: set AIDEN_API_KEY=your-secret in .env to require Bearer-token authentication.
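The same call works from TypeScript with the built-in `fetch` and no SDK. The request below follows the standard OpenAI chat-completions wire format; the `/v1/chat/completions` path is an assumption based on the `localhost:4200/v1` endpoint mentioned elsewhere in this README, so confirm it against your running server:

```typescript
// Sketch: calling Aiden's OpenAI-compatible API with plain fetch.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(messages: ChatMessage[], apiKey?: string) {
  return {
    url: "http://localhost:4200/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Bearer auth only matters if AIDEN_API_KEY is set on the server.
        ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
      },
      body: JSON.stringify({ model: "aiden-3.13", messages }),
    },
  };
}

// Usage (requires a running Aiden API server):
// const { url, init } = buildChatRequest([{ role: "user", content: "hi" }]);
// const data = await (await fetch(url, init)).json();
```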
Commands
Start Aiden
| Command | Description |
|---|---|
| `npx aiden-runtime` | Install, configure, and start (recommended) |
| `aiden` | Start the chat REPL |
| `aiden doctor` | Run diagnostic health checks |
| `aiden setup` | Re-run the setup wizard |
| `npm start` | Start the API server (port 4200) |
| `npm run build` | Rebuild after source changes |
In-chat slash commands (28 total)
Session
| Command | Description |
|---|---|
| `/clear` | Clear the current conversation |
| `/compress` | Compress the conversation to free context |
| `/save` | Save the current session |
| `/title` | Set a title for the session |
Configuration
| Command | Description |
|---|---|
| `/model` | Two-step provider/model picker |
| `/providers` | List configured providers + status |
| `/personality` | Switch personality overlay |
| `/skin` | Switch terminal colour skin |
| `/streaming on\|off` | Toggle streamed responses |
| `/reasoning` | Show reasoning toggle for capable models |
| `/verbose compact\|normal\|verbose` | Verbosity level |
| `/debug-prompt` | Print the system prompt for inspection |
Identity
| Command | Description |
|---|---|
| `/identity` | Print SOUL.md / USER.md identity blocks |
System
| Command | Description |
|---|---|
| `/doctor` | Run health checks |
| `/license` | Show / set Pro license |
| `/plugins` | List, grant, suspend plugins |
| `/reload-mcp` | Reconnect MCP servers |
| `/tools` | List registered tools |
| `/skills` | List, view, install skills |
| `/usage` | Token usage + cost summary |
| `/yolo` | No-approval mode (use carefully) |
| `/cron` | Schedule recurring tasks |
| `/quit` | Exit the REPL |
Authentication
| Command | Description |
|---|---|
| `/auth login claude-pro` | Sign in with Claude Pro / Max subscription |
| `/auth login chatgpt-plus` | Sign in with ChatGPT Plus subscription |
| `/auth status` | Show current auth state |
| `/auth logout` | Sign out of OAuth providers |
Help
| Command | Description |
|---|---|
| `/help` | List all commands grouped by section |
Skills can register their own dynamic slash commands at load time.
CLI vs Dashboard quick reference
Both the terminal CLI and the browser dashboard (localhost:4200/ui) expose the full feature set. Use whichever fits your workflow.
| Feature | Terminal CLI | Browser (localhost:4200/ui) |
|---|---|---|
| Chat | ✅ inline prompt | ✅ chat panel |
| Streaming responses | ✅ token-by-token | ✅ live SSE |
| Markdown rendering | ✅ | ✅ |
| Slash commands | ✅ all 28 | ✅ same commands |
| `/` command dropdown | ✅ instant, 28 commands | ✅ |
| Provider panel | `/providers` | ✅ Providers tab |
| Memory panel | `/identity` + tool calls | ✅ Memory tab |
| Skills panel | `/skills` | ✅ Skills tab |
| Plugin hooks | ✅ | ✅ |
| MCP server mode | `aiden mcp` | ✅ |
| OpenAI-compatible API | ✅ | ✅ `localhost:4200/v1` |
Tech stack
- TypeScript 5.9 – strict mode, full typing across core, providers, CLI, API.
- Node.js 18+ – runtime; `node-fetch` not needed (built-in `fetch`).
- Electron 41 – optional desktop wrapper; the primary install is npm-based.
- Next.js (app router) – `dashboard-next/` for the browser UI.
- React 18 – dashboard component model.
- Playwright 1.58 – browser automation backbone.
- Ollama – fully offline LLM via the local Ollama daemon.
- Model Context Protocol 1.27 – `@modelcontextprotocol/sdk` for tool / server dispatch.
- better-sqlite3 + sql.js – local persistence.
- croner – cron scheduler.
- discord.js, @slack/web-api, whatsapp-web.js, twilio, nodemailer, imap-simple – channel adapters.
- Vitest 4 – test runner; ~1,500 unit + integration tests.
- esbuild – bundler for the npm package; electron-builder – optional desktop wrapper.
- Cloudflare Workers – landing page + license server + install-script proxy.
Contributing
Contributions are welcome β see CONTRIBUTING.md for the full guide.
Quickstart:
git clone https://github.com/taracodlabs/aiden.git
cd aiden
npm install
cp .env.example .env # add at minimum one API key (Groq is free: console.groq.com/keys)
npm run build
aiden # CLI
- Bug fixes and new skills are the easiest entry points.
- All contributors sign the CLA once via PR comment.
- Follow Conventional Commits.
- Run `npm run typecheck` and `npm test` before opening a PR.
Community
| Discord | discord.gg/gMZ3hUnQTm – chat, support, share what you build |
| Skills registry | agentskills.io – agentskills.io-compatible format |
| Bug reports & features | github.com/taracodlabs/aiden/issues |
| Star the repo | github.com/taracodlabs/aiden ⭐ |
| npm | npm install -g aiden-runtime |
| Sponsor | github.com/sponsors/taracodlabs |
Documentation
| Document | Description |
|---|---|
| ARCHITECTURE.md | Layer-by-layer breakdown, data flow diagrams, skill system design |
| AGENTS.md | Agent-loop contract – public API, honesty moat, memory layers |
| CONTRIBUTING.md | How to contribute – skills, tools, providers, docs |
| CODE_OF_CONDUCT.md | Community standards |
| CHANGELOG.md | Full release history |
| .env.example | All ~90 environment variables with descriptions |
| workspace-templates/ | Starter workspace configs and example plugins |
| Download installer | github.com/taracodlabs/aiden-releases/releases/latest |
| Releases & changelog | github.com/taracodlabs/aiden-releases |
| License | AGPL-3.0 core Β· Apache-2.0 skills |
Migration from v3.x
v4.0.0 is a clean rewrite. Existing v3 installs need a migration step:
- npm package renamed – `aiden-os` → `aiden-runtime`. Run `npm uninstall -g aiden-os && npm install -g aiden-runtime`.
- Slash commands consolidated – v3's `/switch`, `/budget`, `/memory`, `/profile`, `/permissions`, `/sandbox`, `/retry`, `/failed`, `/publish` are gone. Use `/model`, `/usage`, `/identity`, `/yolo` for equivalent functionality. See `/help` for the v4 list.
- Subagent fanout removed – v4 is single-loop only; subagent support deferred to v4.x.
- Docker sandbox dropped – `AIDEN_SANDBOX_MODE` no longer applies. Tools run on the host. The `tirithScanner` secret/PII guard, `ssrfProtection`, and tiered approval engine remain as the safety layer.
- Skill registry install changed – auto-fetch from external repos held pending license review. Skills install via `/skills install <local-path-or-url>` only at v4.0.
- Config compatible – most environment variables (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GROQ_API_KEY`, etc.) are recognised as-is. Copy your existing `.env` and Aiden picks them up.
More from the author
If you want a deeper read on the philosophy behind Aiden – autonomy, local-first AI, why solo developers should build their own tools – Shiva's book is on Amazon:
Build your own thing – solo-dev playbook
Buying the book directly funds Aiden's development.
Sponsors
Aiden is built and maintained by one person. If it saves you time, consider sponsoring:
Changelog
See CHANGELOG.md for the full history. v4.0.0 highlights:
- Clean-room core rewrite – every adapter, every prompt slot, every loop. 7 dual-attribution files rewritten under full Aiden copyright.
- 19 providers including OAuth subscription routing for Claude Pro and ChatGPT Plus (subscription quota, not pay-as-you-go).
- Single-loop agent – sequential tool dispatch, 90-turn cap with budget warnings at 70% / 90%.
- 6-slot self-healing fallback – `together → groq × 4` – cooldown + least-used selection.
- Neofetch-style boot card – banner + status pills + Environment / Capabilities + parchment credits + bottom hint.
- Cron scheduler – `/cron add|list|pause|resume|delete|run` with atomic state writes and output capture.
- 42 built-in tools across 11 categories – web, files, browser (10), sessions, skills, memory, process, system, terminal, code, MCP.
- Inline JSON tool-call recovery – Llama / Qwen / NVIDIA-Llama emit raw JSON in answer text? It's detected, validated against the request's tool list, and dispatched as a proper tool call. Code-fenced examples are left alone.
- Spinner has personality – 20-phrase pool (Thinking · Brewing · Cogitating · Brain yakka · …) sampled per turn.
- Env-gated polish – `AIDEN_UI_ICONS=1` for tool-row emoji, `AIDEN_UI_TIMESTAMPS=1` for HH:MM:SS line prefix.
- Honest failure surface – every tool failure names the tool, provider, retry count, fallback chain, error, and next step.
License
| Component | License |
|---|---|
| Core (`cli/`, `api/`, `core/`, `providers/`, `dashboard-next/`) | AGPL-3.0-only |
| Skills (`skills/`) | Apache-2.0 |
Commercial use
Aiden's core is AGPL-3.0. You can self-host, modify, and study it freely. Embedding it in a closed-source commercial product or offering it as a hosted service requires either releasing your modifications under AGPL-3.0 or purchasing a commercial license.
Skills in skills/ are Apache-2.0 and can be used in commercial products without copyleft obligations.
For commercial licensing and enterprise deployments: aiden.taracod.com/contact?type=enterprise
Taracod · Built by Shiva Deore · AGPL-3.0