Awareness Local
Local-first AI agent memory: one command, works offline, no account needed. Give your Claude Code, Cursor, Windsurf, or OpenClaw agent persistent memory. Markdown storage, hybrid search (FTS5 + embeddings), MCP protocol, web dashboard.
Languages: English | 简体中文
Give your AI agent persistent memory. One command. No account. Works offline.
Awareness Local is a local-first MCP memory server for AI coding agents. It gives Cursor, Claude Code, Copilot, Cline, and other MCP IDEs persistent memory, hybrid semantic + keyword retrieval, and reusable knowledge cards for long-running software projects.
It runs a lightweight daemon on your machine, stores memory as Markdown, indexes recall with SQLite FTS5 + embeddings, and keeps your AI workflow fast, explainable, and offline-ready.
npx @awareness-sdk/setup
That's it. Your AI agent now remembers everything across sessions.
Why Awareness Local
AI coding agents lose context between sessions. Awareness Local provides cross-session memory recall so agents can continue work without re-explaining architecture, past decisions, pending tasks, and implementation constraints.
- Persistent memory for AI coding agents
- Local-first MCP server with offline support
- Hybrid retrieval (keyword + semantic)
- Knowledge card extraction for decisions, solutions, and risks
Quick Start
npx @awareness-sdk/setup
Then open your IDE and start coding. Awareness tools become available for recall, record, and session initialization.
Popular Use Cases
- Long-running codebase migrations across many sessions
- Team handoffs where AI should remember prior implementation context
- Personal coding workflows that need durable preferences and conventions
- Multi-agent setups that share decision history and task memory
FAQ
Does Awareness Local work offline?
Yes. Local mode works fully offline with memory stored on your machine.
Where is data stored?
Memory is stored as Markdown in .awareness/, with a local SQLite index for retrieval.
Do I need a cloud account?
No. Cloud sync is optional and can be enabled later.
Which IDEs are supported?
Any MCP-compatible IDE, including Cursor, Claude Code, Copilot, Cline, Windsurf, and others.
Benchmark: LongMemEval (ICLR 2025)
Evaluated on LongMemEval, the industry-standard benchmark for long-term conversational memory: 500 human-curated questions across 5 core capabilities.
Awareness Memory: LongMemEval Benchmark Results

- Benchmark: LongMemEval (ICLR 2025)
- Dataset: 500 human-curated questions
- Variant: LongMemEval_S (~115k tokens per question)

| Metric | Score | Questions |
|---|---|---|
| Recall@1 | 77.6% | 388 / 500 |
| Recall@3 | 91.8% | 459 / 500 |
| Recall@5 (primary) | 95.6% | 478 / 500 |
| Recall@10 | 97.4% | 487 / 500 |

- Method: Hybrid RRF (BM25 + semantic vector search)
- Embedding: all-MiniLM-L6-v2 (384d)
- LLM calls: 0 (pure retrieval, no generation cost)
- Hardware: Apple M1, 8GB RAM; 14 min total
Long-Term Memory Retrieval: R@5 Leaderboard (LongMemEval, ICLR 2025, 500 questions)

| System | R@5 | Note |
|---|---|---|
| MemPalace (ChromaDB raw) | 96.6% | R@5 only * |
| Awareness Memory (Hybrid) | 95.6% | Hybrid RRF |
| OMEGA | 95.4% | QA accuracy |
| Mastra (GPT-5-mini) | 94.9% | QA accuracy |
| Mastra (GPT-4o) | 84.2% | QA accuracy |
| Supermemory | 81.6% | QA accuracy |
| Zep / Graphiti | 71.2% | QA accuracy |
| GPT-4o (full context) | 60.6% | QA accuracy |

\* MemPalace's 96.6% is Recall@5 only, not QA accuracy; the palace hierarchy was not used in the evaluation.
Awareness Memory: R@5 by Question Type

| Question type | R@5 |
|---|---|
| knowledge-update | 100% |
| multi-session | 98.5% |
| single-session-assistant | 98.2% |
| temporal-reasoning | 94.7% |
| single-session-user | 88.6% |
| single-session-preference | 86.7% |
| **Overall** | **95.6%** |

Ablation study:

| Retrieval method | R@5 |
|---|---|
| Vector-only | 92.6% |
| BM25-only | 91.4% |
| Hybrid RRF | 95.6% |

Hybrid RRF is roughly +3 points over either single method alone.

Paper: arxiv.org/abs/2410.10813 · Site: awareness.market
Zero LLM calls. Reproducible benchmark scripts →
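The hybrid method above is Reciprocal Rank Fusion (RRF): BM25 and vector search each return a ranking, and a document's fused score is the sum of 1/(k + rank) over the rankings it appears in. A minimal sketch; the function name and the conventional k=60 constant are illustrative, not Awareness internals:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Combine ranked result lists with Reciprocal Rank Fusion.

    rankings: list of ranked doc-id lists (best first).
    Each doc scores sum(1 / (k + rank)) across the lists it appears in,
    so items ranked highly by multiple retrievers float to the top.
    """
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["m3", "m1", "m7"]   # hypothetical keyword (BM25) ranking
vector_hits = ["m1", "m9", "m3"] # hypothetical embedding ranking
print(rrf_fuse([bm25_hits, vector_hits]))  # → ['m1', 'm3', 'm9', 'm7']
```

Because RRF only uses ranks, it needs no score normalization between the two retrievers, which is part of why it works well as a zero-LLM fusion step.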
What It Does
Before: Every session starts from scratch. You re-explain the codebase, re-justify decisions, watch the agent redo work.
After: Your agent says "I remember you were migrating from MySQL to PostgreSQL. Last session you completed the schema changes and had 2 TODOs remaining..."
Session 1                     Session 2
┌─────────────────────────┐   ┌─────────────────────────┐
│ Agent: "What database?" │   │ Agent: "I remember we   │
│ You: "PostgreSQL..."    │   │ chose PostgreSQL for    │
│ Agent: "What framework?"│   │ JSON support. You had   │
│ You: "FastAPI..."       │   │ 2 TODOs left. Let me    │
│ (repeat every session)  │   │ continue from there."   │
└─────────────────────────┘   └─────────────────────────┘
Supported IDEs (13+)
| IDE | Auto-detected | Plugin |
|---|---|---|
| Claude Code | ✅ | awareness-memory |
| Cursor | ✅ | via MCP |
| Windsurf | ✅ | via MCP |
| OpenClaw | ✅ | @awareness-sdk/openclaw-memory |
| Cline | ✅ | via MCP |
| GitHub Copilot | ✅ | via MCP |
| Codex CLI | ✅ | via MCP |
| Kiro | ✅ | via MCP |
| Trae | ✅ | via MCP |
| Zed | ✅ | via MCP |
| JetBrains (Junie) | ✅ | via MCP |
| Augment | ✅ | via MCP |
| AntiGravity (Jules) | ✅ | via MCP |
How It Works
Your IDE / AI Agent
  │
  │ MCP Protocol (localhost:37800)
  ▼
Awareness Local Daemon
  ├─ Markdown files → human-readable, git-friendly
  ├─ SQLite FTS5 → fast keyword search
  ├─ Local embeddings → semantic search (optional: npm i @huggingface/transformers)
  ├─ Knowledge cards → auto-extracted decisions, solutions, risks
  ├─ Web dashboard → http://localhost:37800/
  └─ Cloud sync (optional)
       ├─ One-click device auth
       ├─ Bidirectional sync
       ├─ Semantic vector search
       └─ Team collaboration
Your Data
All memories are stored as Markdown files in .awareness/, human-readable, editable, and git-friendly:
.awareness/
├── memories/
│   ├── 2026-03-22_decided-to-use-postgresql.md
│   ├── 2026-03-22_fixed-auth-bug.md
│   └── ...
├── knowledge/
│   ├── decisions/postgresql-over-mysql.md
│   └── solutions/auth-token-refresh.md
├── tasks/
│   └── open/implement-rate-limiting.md
└── index.db (search index, auto-rebuilt)
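The keyword half of retrieval rests on SQLite FTS5. The snippet below sketches how an FTS5 virtual table answers keyword queries with BM25 ranking; the table name and two-column schema are illustrative only, not the daemon's actual index.db layout:

```python
import sqlite3

# Illustrative schema: a full-text index over memory titles and bodies.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memories USING fts5(title, body)")
db.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("decided-to-use-postgresql", "Chose PostgreSQL over MySQL for JSON support."),
        ("fixed-auth-bug", "Token refresh failed when the clock skewed."),
    ],
)

# MATCH does tokenized, case-folded search; bm25() ranks matches
# (lower scores are better in SQLite's convention).
rows = db.execute(
    "SELECT title FROM memories WHERE memories MATCH ? ORDER BY bm25(memories)",
    ("postgresql",),
).fetchall()
print(rows)  # → [('decided-to-use-postgresql',)]
```

Since the source files stay plain Markdown, an index like this can always be rebuilt from scratch, which is why the tree above marks index.db as auto-rebuilt.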
Features
MCP Tools (available in your IDE)
| Tool | What it does |
|---|---|
| awareness_init | Load session context: recent knowledge, tasks, rules |
| awareness_recall | Search memories with progressive disclosure (summary → full) |
| awareness_record | Save decisions, code changes, insights, with knowledge extraction |
| awareness_lookup | Fast lookup: tasks, knowledge cards, session history, risks |
| awareness_get_agent_prompt | Get agent-specific prompts for multi-agent setups |
Progressive Disclosure (Smart Token Usage)
Instead of dumping everything into context, Awareness uses a two-phase recall:
Phase 1: awareness_recall(query, detail="summary")
→ Lightweight index (~80 tokens each): title + summary + score
→ Agent reviews and picks what's relevant
Phase 2: awareness_recall(detail="full", ids=[...])
→ Complete content for selected items only
→ No truncation, no wasted tokens
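Concretely, the two phases map onto two awareness_recall tool calls. The argument names below follow the text above; the memory id is a made-up placeholder, and the exact wire format is whatever the MCP tool schema defines:

```python
# Phase 1: cheap summary pass over the whole index.
phase1 = {
    "tool": "awareness_recall",
    "arguments": {"query": "database migration decisions", "detail": "summary"},
}

# The agent reads the ~80-token summaries and selects relevant ids,
# then fetches full content only for those (id below is hypothetical):
phase2 = {
    "tool": "awareness_recall",
    "arguments": {"detail": "full", "ids": ["mem-postgresql-decision"]},
}
print(phase1["arguments"]["detail"], phase2["arguments"]["detail"])
```

The design choice here is token budgeting: the expensive full-content fetch is deferred until the agent has committed to specific items.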
Web Dashboard
Visit http://localhost:37800/ to browse memories, knowledge cards, tasks, and manage cloud sync.
Cloud Sync (Optional)
Connect to Awareness Cloud for:
- Semantic vector search (100+ languages)
- Cross-device real-time sync
- Team collaboration
- Memory marketplace
npx @awareness-sdk/setup --cloud
# Or click "Connect to Cloud" in the dashboard
SDK & Plugin Ecosystem
Awareness Local is part of the Awareness ecosystem:
| Package | For | Install |
|---|---|---|
| Awareness Local | Local daemon + MCP server | npx @awareness-sdk/setup |
| Python SDK | wrap_openai() / wrap_anthropic() interceptors | pip install awareness-memory-cloud |
| TypeScript SDK | wrapOpenAI() / wrapAnthropic() interceptors | npm i @awareness-sdk/memory-cloud |
| OpenClaw Plugin | Auto-recall + auto-capture | openclaw plugins install @awareness-sdk/openclaw-memory |
| Claude Code Plugin | Skills + hooks | /plugin marketplace add edwin-hao-ai/Awareness-SDK → /plugin install awareness-memory@awareness |
| Setup CLI | One-command setup for 13+ IDEs | npx @awareness-sdk/setup |
Full SDK docs: awareness.market/docs
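The wrap_openai() / wrapOpenAI() interceptors listed above follow the standard wrapper pattern: delegate the call to the underlying client, then record the exchange as memory. The sketch below shows only that generic pattern with a fake client; it is not the SDK's actual implementation or API:

```python
import functools

def wrap_chat(create_fn, record):
    """Generic interceptor sketch: call the wrapped chat-completion
    function, then hand the request/response pair to a recorder."""
    @functools.wraps(create_fn)
    def wrapped(*args, **kwargs):
        response = create_fn(*args, **kwargs)
        record({"request": kwargs.get("messages"), "response": response})
        return response
    return wrapped

captured = []
fake_create = lambda **kw: {"content": "ok"}   # stand-in for a real client call
chat = wrap_chat(fake_create, captured.append)

chat(messages=[{"role": "user", "content": "hi"}])
print(len(captured))  # → 1
```

The appeal of this pattern is that memory capture needs no changes to existing call sites: the wrapped client is a drop-in replacement.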
Requirements
- Node.js 18+
- Any MCP-compatible IDE
No Python, no Docker, no cloud account needed.
License
Apache 2.0
