# Lore Context

The control plane for AI-agent memory, eval, and governance.

Know what every agent remembered, used, and should forget, before memory becomes production risk.

Getting Started · API Reference · Architecture · Project Plan · Roadmap · Integrations · Deployment · Changelog

🌐 Read this in your language: English · 简体中文 · 繁體中文 · 日本語 · 한국어 · Tiếng Việt · Español · Português · Русский · Türkçe · Deutsch · Français · Italiano · Ελληνικά · Polski · Українська · Bahasa Indonesia
Localized docs may lag the current English release notes; the canonical v0.6 docs are the English README and docs/ set.
## What is Lore Context
Lore Context is an open-core control plane for AI-agent memory: it composes context across memory, search, and tool traces; evaluates retrieval quality on your own datasets; routes governance review for sensitive content; and exports memory as a portable interchange format you can move between backends.
It does not try to be another memory database. The unique value is what sits on top of memory:
- Context Query – a single endpoint composes memory + web + repo + tool traces and returns a graded context block with provenance.
- Memory Eval – runs Recall@K, Precision@K, MRR, stale-hit rate, and p95 latency on datasets you own; persists runs and diffs them for regression detection.
- Governance Review – six-state lifecycle (`candidate` / `active` / `flagged` / `redacted` / `superseded` / `deleted`), risk-tag scanning, poisoning heuristics, and an immutable audit log.
- MIF-like Portability – JSON + Markdown export/import preserving `provenance` / `validity` / `confidence` / `source_refs` / `supersedes` / `contradicts`; works as a migration format between memory backends.
- Multi-Agent Adapter – first-class `agentmemory` integration with version probe + degraded-mode fallback, plus a clean adapter contract for additional runtimes.
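The eval metrics named above are standard information-retrieval measures. As a reference for how they are typically defined, here is a minimal TypeScript sketch; it is illustrative only, not the actual `packages/eval` implementation:

```typescript
// Standard IR metrics over a ranked list of retrieved memory IDs.
// Illustrative only; not the actual packages/eval implementation.
function recallAtK(ranked: string[], relevant: Set<string>, k: number): number {
  if (relevant.size === 0) return 0;
  const hits = ranked.slice(0, k).filter((id) => relevant.has(id)).length;
  return hits / relevant.size; // fraction of relevant items found in the top K
}

function precisionAtK(ranked: string[], relevant: Set<string>, k: number): number {
  if (k === 0) return 0;
  const hits = ranked.slice(0, k).filter((id) => relevant.has(id)).length;
  return hits / k; // fraction of the top K that is relevant
}

// Mean Reciprocal Rank: average of 1/rank of the first relevant hit per query.
function mrr(queries: { ranked: string[]; relevant: Set<string> }[]): number {
  const rr = queries.map(({ ranked, relevant }) => {
    const idx = ranked.findIndex((id) => relevant.has(id));
    return idx === -1 ? 0 : 1 / (idx + 1);
  });
  return rr.reduce((a, b) => a + b, 0) / rr.length;
}
```

Persisting these per-run and diffing consecutive runs is what turns the raw numbers into the regression signal described above.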
## When to use it
| Use Lore Context when... | Use a memory database (agentmemory, Mem0, Supermemory) when... |
|---|---|
| You need to prove what your agent remembered, why, and whether it was used | You just need raw memory storage |
| You run multiple agents (Claude Code, Cursor, Qwen, Hermes, Dify) and want shared trustable context | You're building a single agent and OK with a vendor-locked memory tier |
| You require local or private deployment for compliance | You prefer a hosted SaaS |
| You need eval on your own datasets, not vendor benchmarks | Vendor benchmarks are sufficient signal |
| You want to migrate memory between systems | You don't plan to ever switch backends |
## Quick Start

```bash
# 1. Clone + install
git clone https://github.com/Lore-Context/lore-context.git
cd lore-context && pnpm install

# 2. Run the quickstart helper and inspect the activation report
pnpm quickstart -- --dry-run --activation-report

# 3. Generate a real API key (do not use placeholders in any environment beyond local-only dev)
export LORE_API_KEY=$(openssl rand -hex 32)

# 4. Start the API (file-backed, no Postgres required)
pnpm build && PORT=3000 LORE_STORE_PATH=./data/lore-store.json pnpm start:api

# 5. Write a memory
curl -H "Authorization: Bearer $LORE_API_KEY" -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:3000/v1/memory/write \
  -d '{"content":"Use Postgres pgvector for Lore Context production storage.","memory_type":"project_rule","project_id":"demo"}'

# 6. Query context, then inspect the returned traceId in the Evidence Ledger
curl -H "Authorization: Bearer $LORE_API_KEY" -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:3000/v1/context/query \
  -d '{"query":"production storage","project_id":"demo","token_budget":1200}'
```
For full setup (Postgres, Docker Compose, Dashboard, MCP integration), see docs/getting-started.md.
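From application code, the same two calls can be made with plain `fetch`. The endpoint paths and request fields below come from the Quick Start; the response handling is a hedged sketch, since the full response shape beyond `traceId` is not documented here:

```typescript
// Minimal client sketch for the two Quick Start endpoints.
// Response shape beyond a traceId is assumed for illustration.
const BASE = "http://127.0.0.1:3000";
const headers = {
  Authorization: `Bearer ${process.env.LORE_API_KEY}`,
  "Content-Type": "application/json",
};

async function writeMemory(content: string, projectId: string): Promise<unknown> {
  const res = await fetch(`${BASE}/v1/memory/write`, {
    method: "POST",
    headers,
    body: JSON.stringify({ content, memory_type: "project_rule", project_id: projectId }),
  });
  if (!res.ok) throw new Error(`write failed: ${res.status}`);
  return res.json();
}

async function queryContext(query: string, projectId: string): Promise<unknown> {
  const res = await fetch(`${BASE}/v1/context/query`, {
    method: "POST",
    headers,
    body: JSON.stringify({ query, project_id: projectId, token_budget: 1200 }),
  });
  if (!res.ok) throw new Error(`query failed: ${res.status}`);
  return res.json(); // inspect the returned traceId in the Evidence Ledger
}
```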
For AI-readable discovery, the website publishes `/llms.txt` and `/llms-full.txt` from public documentation only. Distribution drafts live under `docs/distribution`, launch drafts under `docs/launch`, and design partner intake under `docs/design-partners`.
## Architecture

```
                     ┌────────────────────────────────────────────┐
 MCP clients ──────► │ apps/api (REST + auth + rate limit + logs) │
 (Claude Code,       │  ├── context router (memory/web/repo/tool) │
  Cursor, Qwen,      │  ├── context composer                      │
  Dify, Hermes...)   │  ├── governance + audit                    │
                     │  ├── eval runner                           │
                     │  └── MIF import/export                     │
                     └────────┬───────────────────────────────────┘
                              │
              ┌───────────────┼───────────────────────┐
              ▼               ▼                       ▼
    Postgres+pgvector    agentmemory adapter    packages/search
    (incremental         (version-probed,       (BM25 / hybrid
     persistence)         degraded-mode safe)    pluggable)

  ┌──────────────────────────┐
  │ apps/dashboard (Next.js) │ ──► apps/api
  │ protected by Basic Auth  │
  │ memory · traces · eval   │
  │ governance review queue  │
  └──────────────────────────┘
```
For detail, see docs/architecture.md.
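The "version-probed, degraded-mode safe" adapter behavior in the diagram can be sketched as follows. The interface and class names here are hypothetical, not the actual `packages/agentmemory-adapter` contract:

```typescript
// Hypothetical adapter contract illustrating version probe + degraded-mode
// fallback; not the actual packages/agentmemory-adapter API.
interface MemoryBackend {
  probeVersion(): Promise<string | null>; // null => backend unreachable
  search(query: string): Promise<string[]>;
}

class DegradedSafeAdapter {
  degraded = false;

  constructor(private backend: MemoryBackend, private supported: RegExp) {}

  async init(): Promise<void> {
    const version = await this.backend.probeVersion();
    // Unknown or incompatible version: fall back instead of failing hard.
    this.degraded = version === null || !this.supported.test(version);
  }

  async search(query: string): Promise<string[]> {
    if (this.degraded) return []; // degraded mode: empty results, never throws
    try {
      return await this.backend.search(query);
    } catch {
      this.degraded = true; // trip to degraded mode on runtime failure
      return [];
    }
  }
}
```

The design choice being illustrated: a control plane composing context from several sources should treat any one backend as optional, so a version mismatch degrades one source rather than failing the whole context query.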
## What's in v0.6.0-alpha

| Capability | Status | Where |
|---|---|---|
| REST API with API-key auth (reader/writer/admin) | ✅ Production | apps/api |
| OpenAPI 3.1 contract at /openapi.json | ✅ Production | apps/api/src/openapi.ts |
| pnpm quickstart local adoption helper | ✅ Alpha | scripts/lore-quickstart.mjs |
| Quickstart activation report with redacted first-value proof | ✅ Alpha | scripts/lore-quickstart.mjs |
| AI-readable docs (/llms.txt, /llms-full.txt) | ✅ Alpha | apps/website |
| MCP stdio server (legacy + official SDK transport) | ✅ Production | apps/mcp-server |
| Next.js dashboard with HTTP Basic Auth gating | ✅ Production | apps/dashboard |
| Evidence Ledger API + Dashboard summary | ✅ Alpha | apps/api, apps/dashboard |
| Postgres + pgvector incremental persistence | ✅ Optional | apps/api/src/db/ |
| Governance state machine + audit log | ✅ Production | packages/governance |
| Eval runner (Recall@K / Precision@K / MRR / staleHit / p95) | ✅ Production | packages/eval |
| Eval report export (json / markdown) | ✅ Alpha | GET /v1/eval/report |
| Public-safe eval report CLI | ✅ Alpha | scripts/export-eval-report.mjs |
| MIF v0.2 import/export with supersedes + contradicts | ✅ Production | packages/mif |
| agentmemory adapter with version probe + degraded mode | ✅ Production | packages/agentmemory-adapter |
| Rate limiting (per-IP + per-key with backoff) | ✅ Production | apps/api |
| Structured JSON logging with sensitive-field redaction | ✅ Production | apps/api/src/logger.ts |
| Docker Compose private deployment | ✅ Production | docker-compose.yml |
| Demo dataset + smoke tests + Playwright UI test | ✅ Production | examples/, scripts/ |
| Distribution docs, launch drafts, design partner intake | ✅ Alpha | docs/distribution/, docs/launch/, docs/design-partners/ |
| Hosted multi-tenant cloud sync | ⏳ Roadmap | — |
See CHANGELOG.md for the full v0.6.0-alpha release notes.
## Release focus
The v0.6 release is the Distribution and Trust Sprint. The goal is not more memory storage features; it is making the v0.5 substrate easier to discover, install, verify, and share without leaking local secrets or private data.
Shipped v0.6 work:
- AI-readable website docs at `/llms.txt` and `/llms-full.txt`;
- canonical, Open Graph, Twitter, and static header metadata for public docs;
- `pnpm quickstart -- --activation-report` with redacted dry-run and real first-value proof;
- stricter activation proof that fails instead of skipping when the target port is occupied;
- public-safe eval reporting and smoke coverage for eval export plus MIF JSON export;
- distribution metadata drafts for MCP registry, marketplace listings, and agent plugins;
- launch content drafts and design partner intake/scorecard workflow.
It deliberately does not claim public SaaS, billing, managed sync, remote MCP HTTP, or benchmark wins.
See docs/project-plan.md, docs/roadmap.md, and docs/release-governance.md.
## Integrations
Lore Context speaks MCP and REST and integrates with most agent IDEs and chat frontends:
| Tool | Setup guide |
|---|---|
| Claude Code | docs/integrations/claude-code.md |
| Cursor | docs/integrations/cursor.md |
| Qwen Code | docs/integrations/qwen-code.md |
| OpenClaw | docs/integrations/openclaw.md |
| Hermes | docs/integrations/hermes.md |
| Dify | docs/integrations/dify.md |
| FastGPT | docs/integrations/fastgpt.md |
| Cherry Studio | docs/integrations/cherry-studio.md |
| Roo Code | docs/integrations/roo-code.md |
| OpenWebUI | docs/integrations/openwebui.md |
| Other / generic MCP | docs/integrations/README.md |
## Deployment
| Mode | Use when | Doc |
|---|---|---|
| Local file-backed | Solo dev, prototype, smoke testing | This README, Quick Start above |
| Local Postgres+pgvector | Production-grade single-node, semantic search at scale | docs/deployment/README.md |
| Docker Compose private | Self-hosted team deployment, isolated network | docs/deployment/compose.private-demo.yml |
| Hosted cloud | Future private roadmap, not a public alpha claim | — |
All deployment paths require explicit secrets: `POSTGRES_PASSWORD`, `LORE_API_KEYS`, and `DASHBOARD_BASIC_AUTH_USER`/`PASS`. The `scripts/check-env.mjs` script refuses production startup if any value matches a placeholder pattern.
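That refusal behavior can be approximated like this. The variable names match the paragraph above; the placeholder patterns are illustrative and may differ from the real `scripts/check-env.mjs`:

```typescript
// Refuse production startup when a required secret looks like a placeholder.
// Pattern list is illustrative; the real scripts/check-env.mjs may differ.
const REQUIRED = [
  "POSTGRES_PASSWORD",
  "LORE_API_KEYS",
  "DASHBOARD_BASIC_AUTH_USER",
  "DASHBOARD_BASIC_AUTH_PASS",
];
const PLACEHOLDER = /^(changeme|password|secret|example|test|xxx+|\s*)$/i;

// Returns the names of variables that are missing or placeholder-like.
function checkEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => {
    const value = env[name];
    return value === undefined || PLACEHOLDER.test(value);
  });
}

// In production, any offending variable would abort startup, e.g.:
// if (process.env.NODE_ENV === "production" && checkEnv(process.env).length > 0) process.exit(1);
```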
## Security
v0.6.0-alpha keeps the v0.5 adoption baseline and adds distribution-facing AI-readable docs, activation evidence, public-safe reports, and launch materials. The security posture remains appropriate for local and private alpha deployments:
- Authentication: API-key bearer tokens with role separation (`reader` / `writer` / `admin`) and per-project scoping. Empty-keys mode fails closed in production.
- Rate limiting: per-IP + per-key dual bucket with auth-failure backoff (429 after 5 failures in 60 s, then a 30 s lockout).
- Dashboard: HTTP Basic Auth middleware. Refuses to start in production without `DASHBOARD_BASIC_AUTH_USER`/`PASS`.
- Containers: all Dockerfiles run as the non-root `node` user; HEALTHCHECK on api + dashboard.
- Secrets: zero hardcoded credentials; all defaults are required-or-fail variables. `scripts/check-env.mjs` rejects placeholder values in production.
- Governance: PII / API key / JWT / private-key regex scanning on writes; risk-tagged content auto-routed to the review queue; immutable audit log on every state transition.
- Memory poisoning: heuristic detection on consensus + imperative-verb patterns.
- MCP: zod schema validation on every tool input; mutating tools require a `reason` (≥8 chars) and surface `destructiveHint: true`; upstream errors are sanitized before client return.
- Logging: structured JSON with auto-redaction of `content`, `query`, `memory`, `value`, `password`, `secret`, `token`, and `key` fields.
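The six-state lifecycle with an append-only audit trail can be modeled as an explicit transition table. The edge set below is a plausible reading of the states listed earlier, not the actual `packages/governance` rules:

```typescript
// Plausible transition table for the six governance states; the real
// packages/governance state machine may allow a different edge set.
type GovState = "candidate" | "active" | "flagged" | "redacted" | "superseded" | "deleted";

const TRANSITIONS: Record<GovState, GovState[]> = {
  candidate: ["active", "flagged", "deleted"],
  active: ["flagged", "superseded", "deleted"],
  flagged: ["active", "redacted", "deleted"],
  redacted: ["deleted"],
  superseded: ["deleted"],
  deleted: [], // terminal state
};

function transition(from: GovState, to: GovState, audit: string[]): GovState {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`);
  }
  audit.push(`${from} -> ${to}`); // append-only audit trail entry
  return to;
}
```

Making the edge set explicit is what lets an audit log be trusted: every state a memory ever held is reachable only through a recorded, validated transition.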
Vulnerability disclosures: SECURITY.md.
## Project structure

```
apps/
  api/                  # REST API + Postgres + governance + eval (TypeScript)
  dashboard/            # Next.js 16 dashboard with Basic Auth middleware
  mcp-server/           # MCP stdio server (legacy + official SDK transports)
  web/                  # Server-side HTML renderer (no-JS fallback UI)
  website/              # Marketing site (handled separately)
packages/
  shared/               # Shared types, errors, ID/token utilities
  agentmemory-adapter/  # Bridge to upstream agentmemory + version probe
  search/               # Pluggable search providers (BM25 / hybrid)
  mif/                  # Memory Interchange Format (v0.2)
  eval/                 # EvalRunner + metric primitives
  governance/           # State machine + risk scan + poisoning + audit
docs/
  i18n/<lang>/          # Localized README in 17 languages
  integrations/         # 11 agent-IDE integration guides
  deployment/           # Local + Postgres + Docker Compose
  legal/                # Privacy / Terms / Cookies (Singapore law)
scripts/
  check-env.mjs         # Production-mode env validation
  smoke-*.mjs           # End-to-end smoke tests
  apply-postgres-schema.mjs
```
## Requirements

- Node.js `>=22`
- pnpm `10.30.1`
- (Optional) Postgres 16 with pgvector for semantic-search-grade memory
## Contributing
Contributions are welcome. Please read CONTRIBUTING.md for the development workflow, commit message protocol, and review expectations.
For documentation translations, see the i18n contributor guide.
## Operated by
Lore Context is operated by REDLAND PTE. LTD. (Singapore, UEN 202304648K). Company profile, legal terms, and data handling are documented under docs/legal/.
## License
The Lore Context repository is licensed under Apache License 2.0. Individual packages under packages/* declare MIT to enable downstream consumption. See NOTICE for upstream attribution.
## Acknowledgments
Lore Context builds on top of agentmemory as a local memory runtime. Upstream contract details and version-compatibility policy are documented in UPSTREAM.md.
