Attestor
The memory layer for agent teams. Self-hosted, deterministic retrieval, zero LLM in the critical path.
pip install attestor
| Version | 4.0.0 (stable; greenfield rebuild – no v3 migration path) |
| PyPI | attestor |
| Import | attestor |
| Live site | https://attestor.dev/ |
| Repo | https://github.com/bolnet/attestor |
| License | MIT |
Designed and built by Surendra Singh – building auditable infrastructure for multi-agent AI, with fifteen years of production-systems discipline brought to the memory layer. Companion projects:
claude-finance (Claude-powered financial analytics) · private-equity (PE × AI workshop). Reach out if you're hiring a senior IC for AI infrastructure.
What it is
Attestor is a memory store for agent teams that need a shared, tenant-isolated memory with bi-temporal replay, deterministic retrieval, and an auditable supersession chain. It runs as a Python library, a Starlette REST service, or an MCP server – same API in all three.
It is built around three claims, each grounded in code:
- Bi-temporal – replay any past state. Every memory has both event time (valid_from/valid_until) and transaction time (t_created/t_expired). Nothing is deleted; everything is queryable forever (attestor/temporal/manager.py:43-73, core.py:888-890).
- Semantic-first retrieval, no LLM in the hot path. A six-step deterministic pipeline. Same query → same ranking. Unit-testable (attestor/retrieval/orchestrator.py:1-14).
- Conversation ingest with auditable conflict resolution. Two-pass speaker-locked extraction, then a four-decision (ADD / UPDATE / INVALIDATE / NOOP) resolver per fact. Every supersession carries an evidence_episode_id (attestor/extraction/conflict_resolver.py:98).
Designed for
- Multi-agent products where many LLMs write to the same memory store
- Regulated chat systems that need point-in-time reconstruction (compliance, audit, FOIA-style queries)
- Self-hosted deployments – your VPC, your Postgres, your Neo4j
Not designed for
- A general-purpose vector database
- A RAG framework with built-in chunking, reranking, and orchestration
- An LLM agent runtime – Attestor is the memory backend; the agent loop is yours
Quick start
1. Install
pip install attestor # or: pipx install attestor
Or pull the container (introspection-grade image, single layer over python:3.12-slim, currently linux/amd64):
docker pull ghcr.io/bolnet/attestor:latest # recommended – anonymous pull, mirrored to all registries below
Same image is mirrored to:
| Registry | Pull address |
|---|---|
| GHCR | ghcr.io/bolnet/attestor:latest |
| Docker Hub | bolnet2025/attestor:latest |
| Quay | quay.io/bolnet/attestor:latest |
| AWS ECR Public | public.ecr.aws/m6h5j7o3/attestor:latest |
| GCP AR | us-central1-docker.pkg.dev/coral-marker-452616-n4/attestor/attestor:latest |
(An internal Azure ACR mirror exists at memwright.azurecr.io/attestor but is private – Azure customers should use az acr import from one of the public registries above.)
The image's default entrypoint is attestor mcp (MCP server over stdio). For full production use, point the container at an external Postgres + Neo4j via env vars (or compose them with attestor/infra/local/docker-compose.yml); override the entrypoint to run attestor doctor, attestor api, etc.
2. Bring up local Postgres + Pinecone + Neo4j
attestor setup local # writes attestor/infra/local/docker-compose.yml
docker compose -f attestor/infra/local/docker-compose.yml up -d
The default stack ships three containers (one per storage role):
| Container | Role | Port | Purpose |
|---|---|---|---|
| Postgres 16 | Document | 5432 | Source of truth – content, tags, entity, ts, provenance, RLS-isolated by user_id |
| Pinecone Local | Vector | 5080-5089 | Dense embeddings, free per-namespace isolation, plain gRPC (no HTTPS) |
| Neo4j 5 + GDS | Graph | 7687 | Entity nodes + typed edges, PageRank / BFS / Leiden |
pgvector remains in the Postgres schema as an opt-in fallback for single-process / self-contained deploys, but the default vector role is Pinecone as of 2026-04-29 – it delivered the +10pp LME-S temporal-reasoning lift in the most recent bench run.
3. Configure the embedder
The default embedder is Pinecone Inference llama-text-embed-v2 (NVIDIA-hosted, 1024-D) – one vendor for embedder + storage, free Starter tier (5M tokens/month per organization, see § Cost & runtime guide). Set PINECONE_API_KEY in .env and the auto-detect chain in attestor/store/embeddings.py picks it up.
echo "PINECONE_API_KEY=pcsk_β¦" >> .env # cloud key for the embedder; storage can stay local
Alternative providers (override via ATTESTOR_EMBEDDING_PROVIDER / ATTESTOR_EMBEDDING_MODEL):
- voyage – Voyage AI voyage-4 (1024-D, paid)
- openai – text-embedding-3-small (1024-D via Matryoshka)
- ollama – bge-m3, local-first (free, requires ollama pull bge-m3)
4. Verify (mandatory)
attestor doctor
All four checks must be green for the default install: Document Store (Postgres), Vector Store (Pinecone Local or Cloud), Graph Store (Neo4j), Retrieval Pipeline. Graph (Neo4j) is required – the 6-step retrieval pipeline narrows on graph neighborhoods and the conversation ingest path writes typed edges (uses, authored-by, supersedes). The only hard dependency that cannot be down is the document store (Postgres); transient vector-probe failures are surfaced in the response trace rather than swallowed (retrieval/orchestrator.py, vector_error field).
5. Use it
from attestor import AgentMemory, AgentContext, AgentRole
mem = AgentMemory() # picks up env / ~/.attestor.toml automatically
ctx = AgentContext(
agent_id="researcher-1",
role=AgentRole.RESEARCHER,
namespace="acme-prod",
)
mem.add(
content="Alice is the engineering manager",
entity="alice",
category="role",
context=ctx,
)
results = mem.recall(query="who runs engineering?", context=ctx)
for r in results:
print(r.score, r.memory.content)
SOLO mode (zero-config). In v4, AgentMemory().add('foo') auto-provisions a singleton local user, an Inbox project (metadata.is_inbox=true), and a daily session – so the snippet above works on a fresh database without configuring identity (core.py:179-209). For multi-tenant production use, pass an explicit AgentContext with a real namespace.
6. Run a smoke benchmark (optional)
Verify your install end-to-end against a tiny LongMemEval slice. Defaults come from configs/attestor.yaml: Pinecone Inference llama-text-embed-v2 (1024-D) embedder + Pinecone vector store, openai/gpt-5.5 answerer, dual judges (openai/gpt-5.5 + anthropic/claude-sonnet-4-6), parallel=2.
set -a && source .env && set +a # OPENROUTER_API_KEY, PINECONE_API_KEY, NEO4J_PASSWORD
.venv/bin/python scripts/lme_smoke_local.py --n 2 --yes
Every model and parameter comes from YAML – see § Benchmarking below for the full bench harness.
Benchmarking
Every benchmark – smoke, single slice, full sweep, synthetic supersession – reads its knobs from two YAMLs:
| File | What lives there |
|---|---|
| configs/attestor.yaml | Stack – embedder, models, retrieval features, DBs, registries, clouds |
| configs/bench.yaml | Bench-only – variants, category iteration order, target scores, output paths |
The two files must have disjoint keys. The CI test tests/test_config_no_duplicate_keys.py enforces this; the bench loader (attestor.bench_config.get_bench) crashes on overlap. If you want a one-off override (different model for one bench run), use an env var or CLI flag – never duplicate the key in bench.yaml.
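For intuition, a minimal sketch of such a disjointness gate (assuming PyYAML and top-level keys only; the shipped test may check more deeply):

# Hypothetical sketch of the disjoint-keys gate, top-level keys only.
import yaml

def test_no_duplicate_keys():
    with open("configs/attestor.yaml") as f:
        stack = yaml.safe_load(f)
    with open("configs/bench.yaml") as f:
        bench = yaml.safe_load(f)
    overlap = set(stack) & set(bench)
    assert not overlap, f"keys present in both YAMLs: {sorted(overlap)}"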
What LongMemEval is
LongMemEval (Wu et al., 2024 – published at ICLR '25) is the canonical benchmark for memory-augmented chat assistants. It measures whether an AI system can correctly answer questions that require recalling facts from long, multi-session conversation histories – the exact scenario Attestor is built for.
500 questions, 6 reasoning categories, 3 haystack sizes. Same questions across all three sizes; only the noise around the answer-bearing session changes:
| Variant | Tokens / Q | Sessions | What it measures |
|---|---|---|---|
| oracle | ~3-15k | 1-3 gold | Reasoning ceiling – what the answerer can do with perfect retrieval. If you score low here, your prompt or LLM is broken (retrieval can't help). |
| s (Standard / Small) | ~115k | ~50 | Public leaderboard – the canonical comparison. Fits in a single Claude/GPT context window, so Attestor's retrieval is benchmarked against the "just stuff everything into long context" baseline. |
| m (Plus / Medium) | ~1M+ | ~500 | Pure retrieval – too big for any context window. Memory layer is forced; no long-context shortcut available. |
LME-S is the headline number to beat. A memory layer that scores within 5% of a long-context baseline at 30× lower token cost is the marketing pitch.
The 6 reasoning categories (cleaned LME-S, 500 questions total – note: no abstention slice in the cleaned split, which the synthetic supersession suite covers):
| Category | N | What it tests |
|---|---|---|
| multi-session | 133 | Fact spans across multiple sessions – must track an entity over time |
| temporal-reasoning | 133 | Date arithmetic ("two weeks ago", "before X") – Attestor's bi-temporal layer is built for this slice |
| knowledge-update | 78 | Supersession – newer fact must beat older fact when both exist |
single-session-user | 70 | One session, fact stated by the user |
single-session-assistant | 56 | One session, fact stated by the assistant |
single-session-preference | 30 | One session, user preference |
Why this benchmark for Attestor: the temporal-reasoning and knowledge-update slices directly exercise features that distinguish Attestor from a vanilla RAG: bi-temporal recall, supersession-on-contradiction, event-time vs transaction-time disambiguation. A high score on those slices is the regulated-AI / audit / compliance pitch.
For the published Attestor numbers, see docs/bench/ – bench artifacts persist as lme-{variant}-{category}-{date}.{report,summary}.json. The Reporting section below shows how to render them as a table.
Download the LongMemEval dataset (one-time, before any bench run)
All lme_*.sh scripts use the cleaned LongMemEval split published on HuggingFace by xiaowu0162/longmemeval-cleaned. It auto-downloads on first use, but you'll want to know what's happening.
Cache location (created on first call):
~/.cache/attestor/longmemeval/
(Or $XDG_CACHE_HOME/attestor/longmemeval/ if you set XDG_CACHE_HOME.)
Variants and on-disk sizes:
| Variant | Filename | Size | Tokens / Q | Use |
|---|---|---|---|---|
| oracle | longmemeval_oracle.json | ~5 MB | ~3-15k | Reasoning ceiling – cheapest smoke |
| s | longmemeval_s_cleaned.json | ~250 MB | ~115k | Public leaderboard (canonical) |
| m | longmemeval_m_cleaned.json | ~2 GB | ~1M+ | Forces retrieval (no long-context shortcut) |
Option A β auto-download (recommended)
Just run any bench command. The first call downloads and caches; every subsequent call reads from disk:
# Will download longmemeval_oracle.json (~5 MB) the first time
.venv/bin/python scripts/lme_smoke_local.py --n 2 --yes --variant oracle
# Will download longmemeval_s_cleaned.json (~250 MB) the first time
scripts/bench/lme_run.sh knowledge-update
You only pay the download cost once per variant. Internet flake during the first run? Delete the partial file in the cache dir and rerun.
Option B β pre-warm the cache (offline / CI)
Pre-fetch every variant you plan to use before the bench day:
.venv/bin/python -c "
from attestor.longmemeval import load_or_download
for v in ('oracle', 's', 'm'):
samples = load_or_download(variant=v)
print(f'{v}: {len(samples)} samples')
"
Expected output:
oracle: 500 samples
s: 500 samples
m: 500 samples
Option C β manual download (firewalled environments)
If your runner can't reach huggingface.co, fetch the files on a connected machine and drop them into the cache dir manually:
mkdir -p ~/.cache/attestor/longmemeval
cd ~/.cache/attestor/longmemeval
# pick the variants you need
curl -L -o longmemeval_oracle.json \
https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_oracle.json
curl -L -o longmemeval_s_cleaned.json \
https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_s_cleaned.json
curl -L -o longmemeval_m_cleaned.json \
https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_m_cleaned.json
The bench harness checks for these filenames exactly – don't rename them.
Verify the dataset is loadable
After download (auto or manual), confirm the loader picks it up cleanly:
.venv/bin/python -c "
from attestor.longmemeval import load_or_download
from collections import Counter
samples = load_or_download(variant='s')
cnt = Counter(s.question_type for s in samples)
print(f'Loaded {len(samples)} samples')
for cat, n in sorted(cnt.items(), key=lambda x: -x[1]):
print(f' {cat}: {n}')
"
Expected for the cleaned s variant (500 questions, 6 categories – note: no abstention slice in the cleaned split):
Loaded 500 samples
multi-session: 133
temporal-reasoning: 133
knowledge-update: 78
single-session-user: 70
single-session-assistant: 56
single-session-preference: 30
If counts don't match, the file is truncated – re-download.
Quick smoke (≤ 1 minute, ≤ $0.10)
Confirm the pipeline runs end-to-end before committing or running anything bigger:
.venv/bin/python scripts/lme_smoke_local.py --n 2 --yes --variant oracle
oracle is the cheapest variant (gold sessions only, no distractor haystack). Schema is reapplied automatically; pass --skip-schema if you want to keep a populated DB between runs.
Single category β scripts/bench/lme_run.sh
# all 6 categories, current variant from bench.yaml (default: s)
scripts/bench/lme_run.sh
# one slice – full
scripts/bench/lme_run.sh knowledge-update
# one slice – capped at N samples (smoke)
scripts/bench/lme_run.sh knowledge-update 10
# one slice on a different variant (oracle = cheapest, m = ~1M tokens)
scripts/bench/lme_run.sh knowledge-update "" oracle
Valid --category values: single-session-user, single-session-assistant, single-session-preference, multi-session, temporal-reasoning, knowledge-update. See What LongMemEval is above for sample counts and what each category tests.
Each run persists two files:
docs/bench/lme-{variant}-{category}-{YYYYMMDD}.report.json # full LMERunReport
docs/bench/lme-{variant}-{category}-{YYYYMMDD}.summary.json # BenchmarkSummary
Full sweep β scripts/bench/lme_all.sh
Iterates bench.yaml's lme.categories list in order. Adding/removing slices is a YAML edit, not a script edit:
# All 6 slices, current variant
scripts/bench/lme_all.sh
# All 6 slices, capped at 10 samples each (smoke)
scripts/bench/lme_all.sh 10
# All 6 slices on Oracle variant
scripts/bench/lme_all.sh "" oracle
If one slice fails, the script logs it and moves on to the next.
Reporting β scripts/bench/lme_report.py
Aggregates every docs/bench/lme-*.summary.json into one markdown table; picks the most-recent file per (variant, category):
.venv/bin/python scripts/bench/lme_report.py # latest-per-slice
.venv/bin/python scripts/bench/lme_report.py --variant s # filter to LME-S
.venv/bin/python scripts/bench/lme_report.py \
--markdown-out docs/bench/LME-S.md # also write file
.venv/bin/python scripts/bench/lme_report.py --trend # progression over time
Default mode (latest-per-slice):
| Variant | Category | Score | N | Date | Answer | Judges |
| ------- | -------- | -----:| -:| ---- | ------ | ------ |
| s | knowledge-update | 87.5% | 78 | 20260429 | openai/gpt-5.4-mini | openai/gpt-5.5, anthropic/claude-sonnet-4-6 |
Trend mode (--trend) reads docs/bench/trend.csv – one row appended per bench run (auto-populated by lme_run.sh) – and shows progression with a Δ column:
| Variant | Category | Date | N | Score | Δ | SHA | Features | Run |
| ------- | -------- | ---- | -:| -----:| -:| --- | -------- | --- |
| s | knowledge-update | 20260429 | 78 | 80.0% | | a126e7a | | bench |
| s | knowledge-update | 20260430 | 78 | 88.0% | +8.0 | badcf1b | multi_query | bench |
| s | knowledge-update | 20260501 | 78 | 91.5% | +3.5 | xxxxxxx | multi_query,hyde | bench |
The Features column records exactly which retrieval/answerer flags were enabled per run, so you can see at a glance which knob produced which lift.
Retrieval + answerer feature flags
Five orthogonal features land via configs/attestor.yaml boolean flips. All disabled by default – pick one per bench run, measure the lift, decide which to ship enabled.
| Flag | What it does | Lift | Cost overhead |
|---|---|---|---|
| retrieval.multi_query | rewrite question into N paraphrases, RRF-merge N+1 vector lanes | +6-10% (lit.); regressed −10pp on LME-S temporal smoke | 1 small LLM call + N extra vector searches per recall |
| retrieval.hyde | event-descriptive hypothetical-document embedding (temperature=0), embedded as a parallel vector lane | +10pp measured on LME-S temporal-reasoning (30q smoke, 70% → 80% → 96.7% with BM25 hybrid) | 1 small LLM call + 1 extra vector search per recall |
| retrieval.temporal_prefilter | regex-detect "two weeks ago" etc.; narrow event-time window before vector | +1.5% (lit.); 0pp on LME-S interrogative-anchor questions | Free (regex-only, no LLM) |
| self_consistency | answerer draws K=5 samples at nonzero temperature, elects consensus | +3-6% (lit.) | 5× answerer cost |
| critique_revise | answer → critique → conditional revise | +3-5% (lit.) | ~3× answerer worst case |
multi_query and hyde are mutually exclusive in this release (multi_query wins if both flags are on with a logged warning). self_consistency and critique_revise are similarly mutually exclusive on the answerer side. Combinations across the two sides (e.g. hyde + self_consistency) are fine.
HyDE v2 prompt (attestor/retrieval/hyde.py) – generates an event-descriptive snippet rather than an answer-shape response, so the embedding lands close to source-shape conversation turns instead of question-shape queries. This is the lever that produced the +10pp measured lift on LME-S temporal-reasoning. temperature=0 is pinned so re-runs are deterministic.
Honest negative results documented above – multi_query and temporal_prefilter did NOT generalize from their literature numbers on the LME-S temporal-reasoning slice. multi_query paraphrases stay in question-shape and RRF dilutes marginal hits; temporal_prefilter heuristic anchors don't help interrogative-form questions ("how many days ago…"). HyDE was the right tool. Per-feature methodology + diagnostic artifacts in docs/bench/pinecone-lme-temporal-diagnostic-{baseline,mq3,hyde,hyde-bm25}-20260429.json.
Cross-vector-DB diagnostic harness – experiments/pinecone_lme_temporal_diagnostic.py runs retrieval-only LME-S diagnostics against Pinecone Local with --baseline / --multi-query / --hyde / --bm25-hybrid / --temporal-prefilter / --category flags. No answerer, no judge – pure recall@K ceiling. --skip-ingest reuses populated namespaces for fast retrieval-flag iteration (~60s for 30q vs ~50min with fresh ingest).
To benchmark a single feature: flip its enabled: true in configs/attestor.yaml, run the bench slice, compare against a same-day baseline run with everything off. The trend table will show the delta in the Δ column.
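A sketch of what that flip can look like; the exact nesting inside configs/attestor.yaml is assumed here (only the enabled: key is documented above):

# assumed layout of configs/attestor.yaml, for illustration only
retrieval:
  multi_query:
    enabled: false
  hyde:
    enabled: true        # the single feature under test this run
  temporal_prefilter:
    enabled: false
self_consistency:
  enabled: false
critique_revise:
  enabled: false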
Synthetic supersession suite – python -m evals.knowledge_updates
50 hand-curated cases, 10 contradiction categories × 5 each (numeric, categorical, temporal, preference, entity, locational, intent, relational, count, status_binary). Each case ingests two sessions (Session 1 states a fact, Session 5 contradicts it) and asks a question that should resolve to the newer fact. Metric: % of cases where retrieval surfaces the new fact as top-1.
# All 50 cases – ~5 min, ~$0.50 worth of embedding calls
.venv/bin/python -m evals.knowledge_updates
# Smoke – first 5 cases
.venv/bin/python -m evals.knowledge_updates --limit 5
# Custom fixtures
.venv/bin/python -m evals.knowledge_updates --fixtures my_cases.json
Outputs:
docs/bench/knowledge-updates-{YYYYMMDD}.report.json # per-case verdicts (new_wins | stale_wins | miss | ambiguous)
docs/bench/knowledge-updates-{YYYYMMDD}.summary.json # aggregate score + per-category breakdown
Target score (configurable in bench.yaml): 92% new_wins. Below that, the supersession-confidence-decay weight in attestor/retrieval/scorer.py needs tuning.
Cost & runtime guide
Approximate, at reasoning_effort=high for answerer + judge, parallel=2, OpenRouter pricing:
| Run | N | Wall time | Cost |
|---|---|---|---|
| Quick smoke | 2 oracle | ~1 min | < $0.10 |
| knowledge-update slice | 78 | ~30-60 min | ~$3-5 |
| temporal-reasoning slice | 133 | ~50-100 min | ~$5-8 |
| Full LME-S 500q | 500 | ~75-180 min | ~$20-30 |
| Synthetic supersession | 50 | ~5 min | ~$0.50 (embeddings only) |
To cut costs, edit configs/attestor.yaml's models.reasoning_effort.{answerer,judge} from high to medium or low.
Configuration cheat sheet – configs/bench.yaml
bench:
lme:
variant: s # s | m | oracle
cache_dir: ~/.cache/attestor/lme
output_dir: docs/bench
sample_limit: null # null = full dataset; int = truncate
category: null # null = all 6; or single slice name
categories: [...] # iteration order for lme_all.sh
variants_to_run: [...] # for full size matrix
knowledge_updates:
fixtures_path: evals/knowledge_updates/fixtures.json
n_cases: 50
target_score: 0.92
categories: [numeric, categorical, ...]
report:
headline_slice: abstention
trend_csv: docs/bench/trend.csv
markdown_path: docs/bench/LME-S.md
Architecture
Bi-temporal – replay any past state
Every memory carries two time axes:
| Axis | Columns | Meaning |
|---|---|---|
| Event time | valid_from, valid_until | When the fact is true in the world |
| Transaction time | t_created, t_expired | When the row landed in the store |
Plus a superseded_by chain. Old facts are never deleted – they remain queryable forever (attestor/temporal/manager.py:30-66).
# What did we believe on March 1?
mem.recall(query="who runs engineering?", as_of="2026-03-01T00:00:00Z", context=ctx)
# Show me everything we knew about Alice between Feb and Apr
mem.recall(query="alice", time_window=("2026-02-01", "2026-04-01"), context=ctx)
as_of and time_window propagate end-to-end through the orchestrator and document store. Auto-supersession on write is wired into core.py:add() (core.py:762, 784-785): on every add, the temporal manager finds active rows with the same (entity, category, namespace) and different content, marks them superseded, sets valid_until=now, and links superseded_by=<new_id>. Detection is rule-based string equality today.
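A minimal sketch of the behavior, built only from the public calls shown in Quick start (timestamps illustrative):

from attestor import AgentMemory, AgentContext, AgentRole
import datetime as dt

mem = AgentMemory()
ctx = AgentContext(agent_id="writer-1", role=AgentRole.EXECUTOR, namespace="acme-prod")

mem.add(content="Alice is the engineering manager",
        entity="alice", category="role", context=ctx)
checkpoint = dt.datetime.now(dt.timezone.utc).isoformat()

# Same (entity, category, namespace), different content: the first row is
# marked superseded (valid_until=now, superseded_by=<new id>), never deleted.
mem.add(content="Alice is now VP of Engineering",
        entity="alice", category="role", context=ctx)

# Point-in-time replay between the two writes still surfaces the old fact.
old = mem.recall(query="alice role", as_of=checkpoint, context=ctx)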
Tenant isolation β Postgres Row-Level Security
Every tenant table (users, projects, sessions, episodes, memories, user_quotas, deletion_audit) carries a tenant_isolation_* policy keyed off the attestor.current_user_id session variable. An empty / unset value fails closed – no rows visible (attestor/store/schema.sql:311-327).
Honest disclosure. Enforcement lives in Postgres, not Python. The AgentRole enum in attestor/context.py:49-56 is metadata that flows onto memories for provenance; it does not gate operations in Python. RLS is what actually controls access. This is correct architecture for a memory backend, but worth knowing if you read the Python alone.
The retrieval pipeline – semantic-first, six steps
attestor/retrieval/orchestrator.py runs the same six steps for every query:
1. Vector top-K – Pinecone cosine, k=50 (pgvector remains as opt-in fallback for self-contained deploys)
2. Graph narrow – Neo4j BFS depth ≤ 2 from each candidate's entity to the question entities; affinity bonus per hop (0-hop = +0.30, 1-hop = +0.20, 2-hop = +0.10; unreachable = −0.05). Discrete, not "soft".
3. Triples inject – typed-edge facts (uses, authored-by, supersedes) injected as synthetic memories
4. MMR rerank – λ=0.7 (a sketch follows this list)
5. Confidence decay + temporal boost – recency lifts; stale, low-confidence rows fall
6. Budget fit – greedy monotonic-by-score pack into the caller's token budget
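For orientation, a minimal sketch of the MMR step as described (λ=0.7); the function shape is illustrative, not attestor/retrieval/scorer.py's internal API:

def mmr_rerank(candidates, query_sim, pair_sim, lam=0.7, top_n=10):
    """Maximal Marginal Relevance: lam * relevance - (1 - lam) * max redundancy."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < top_n:
        best = max(
            pool,
            key=lambda d: lam * query_sim(d)
            - (1 - lam) * max((pair_sim(d, s) for s in selected), default=0.0),
        )
        selected.append(best)
        pool.remove(best)
    return selected

With λ=0.7 the relevance term dominates; lowering λ trades relevance for diversity among the selected memories.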
Every call writes a JSONL trace to logs/attestor_trace.jsonl (disable via ATTESTOR_TRACE=0).
Async retrieval – lower latency without weakening audit
Independent recall steps run concurrently via asyncio.gather, but none of the eight audit invariants are relaxed. You don't trade trust for speed – you get both.
| Async step | Latency win | Audit invariant preserved |
|---|---|---|
| HyDE LLM call ∥ original-question vector embed | −33% on HyDE-enabled recalls (~600 ms → ~400 ms in the simulated unit-test) | A7 – generator pins temperature=0.0, same prompt + same model = same hypothetical = same RRF order. Async amplifies non-determinism risk if T > 0; we explicitly pin T=0. |
| Per-lane vector searches in parallel (HyDE / multi-query) | proportional to N (≈ N × per-lane → max per-lane) | RRF over the lanes is deterministic given identical inputs – gather order does not corrupt rank positions (test_multi_query_async_preserves_RRF_order). |
| Self-consistency K-fanout (answerer side) | 5× on K=5 sampling | Vote consensus is order-independent; answerer-side change, doesn't touch the document store. |
| Vector ∥ BM25 ∥ graph candidate-fetch | −20% on baseline recalls | A2 recall_started_at ceiling – every cross-store read carries the same monotonic timestamp captured at recall start. Concurrent writes that land mid-recall are simply not visible. |
| Graph BFS ∥ Postgres doc-fetch | −50 ms typical | Same ceiling. |
Write-side stays sync. All add(), update(), supersession writes are explicitly non-goals for the async refactor – the audit chain depends on serial write ordering and the bi-temporal t_created order must be linearizable per row. Async is read-side only.
Trace stays reconstructable. Every event carries recall_id + monotonic seq + optional parent_event_id, so the audit dashboard renders concurrent recalls as a tree of events rather than a stream – (recall_id, seq) reconstructs causal order from the JSONL log.
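Because every event carries those fields, a few lines of Python reconstruct per-recall causal order from the log (path from the Retrieval section; top-level recall_id/seq keys in each JSONL record are an assumption):

import json
from collections import defaultdict

events = defaultdict(list)
with open("logs/attestor_trace.jsonl") as f:
    for line in f:
        e = json.loads(line)
        events[e["recall_id"]].append(e)

# (recall_id, seq) recovers causal order even when recalls ran concurrently.
for recall_id, evs in events.items():
    for e in sorted(evs, key=lambda e: e["seq"]):
        print(recall_id, e["seq"], e.get("parent_event_id"))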
Same recall(as_of=X) replay guarantee. A past recall remains byte-for-byte reproducible from the bi-temporal columns + deletion_audit + the trace JSONL – async parallelism doesn't change what gets read, only when. The load-bearing test (tests/test_as_of_replay.py) is in the regression gate of every async PR.
Full design + audit-invariant matrix: docs/plans/async-retrieval/PLAN.md. Convention: every async PR ships with an audit-preservation argument and the matching invariant test (tests/async_retrieval/test_audit_invariants_under_async.py) GREEN before merge.
Three storage roles
| Role | Purpose | Default | Alternatives |
|---|---|---|---|
| Document | Source of truth (content, tags, entity, ts, provenance, confidence) | Postgres 16 | AlloyDB, ArangoDB, DynamoDB, Cosmos DB |
| Vector | Dense embedding per memory | Pinecone (Local Docker / Cloud) | pgvector, AlloyDB ScaNN, ArangoDB, OpenSearch Serverless, Cosmos DiskANN |
| Graph | Entity nodes + typed edges | Neo4j 5 + GDS | Apache AGE on AlloyDB, ArangoDB, Neptune, NetworkX (Azure) |
Postgres is the source of truth. Pinecone vectors and Neo4j graph are derived state, both rebuildable from Postgres – but both are required for the canonical install: vector cosine is step 1 of the retrieval pipeline, graph expansion is step 2, and conversation ingest writes typed edges. The only role that cannot be down is the document store; the orchestrator records transient vector-probe failures in the response trace (vector_error) instead of swallowing them.
Optional BM25 / FTS lane
A trigger-maintained content_tsv tsvector + GIN index lifts queries that embeddings under-recall (acronyms, IDs, rare proper nouns). Enabled when v4 schema is detected; fuses with the vector lane via Reciprocal Rank Fusion (RRF, k=60). Graceful no-op on backends without the column (core.py:122-130).
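RRF is simple enough to show inline: each document scores the sum of 1 / (k + rank) across the lanes it appears in, with k=60 as above. A sketch (illustrative, not the library's internals):

def rrf_fuse(lanes, k=60):
    """Reciprocal Rank Fusion over ranked lists of doc ids (ranks are 1-based)."""
    scores = {}
    for lane in lanes:
        for rank, doc_id in enumerate(lane, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fuse the vector lane with the BM25 lane
fused = rrf_fuse([["m1", "m2", "m3"], ["m2", "m7", "m1"]])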
Conversation ingest
The heavyweight write path that turns conversation turns into auditable memories. core.py:ingest_round(turn) orchestrates four passes:
turn → extract_user_facts(user_turn)
     → extract_agent_facts(assistant_turn)
     → resolve_conflicts → apply
Two-pass speaker-locked extraction
attestor/extraction/round_extractor.py:216, 258 – separate prompts for user vs assistant turns. The user-turn extractor only emits facts attributable to the user; the assistant-turn extractor only emits facts the assistant introduced. Stops cross-attribution. The "+53.6 over Mem0" delta in our LongMemEval scores comes from this split.
Four-decision conflict resolver
attestor/extraction/conflict_resolver.py:40, 98 – for each newly-extracted fact, an LLM call against existing similar memories returns one of:
| Decision | Effect |
|---|---|
|---|---|
| ADD | New info, no existing match → write fresh memory |
| UPDATE | Same entity + predicate, refined value → keep existing id |
| INVALIDATE | Old memory contradicted → mark superseded (timeline replays) |
| NOOP | Already represented → skip |
Each Decision carries evidence_episode_id. Every supersession is auditable. Failsafe: parse failure on a single fact yields ADD-by-default – better a duplicate-ish row than a silent drop.
Two write paths, two contracts. mem.add(...) runs the lightweight rule-based supersession (§ Bi-temporal). mem.ingest_round(turn) runs the full four-decision pipeline. Pick ingest_round for conversational data; pick add for structured writes where you've already done the conflict reasoning.
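A sketch of the conversational path; the turn payload shape below is hypothetical (core.py:ingest_round defines the real contract):

# hypothetical turn shape, for illustration only
turn = {
    "user": "Actually, Bob took over as engineering manager last week.",
    "assistant": "Got it. Bob is the engineering manager now.",
}
mem.ingest_round(turn)  # 2-pass extraction, then the 4-decision resolver, then apply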
Sleep-time consolidation
mem.consolidate() (core.py:526) re-extracts and synthesizes facts from recent episodes with a stronger model. Currently a Python-API-only call – no CLI command. Schedule it from your application (cron, systemd timer, ECS scheduled task) when you want fresher facts than the streaming extractor produces.
Reflection engine
attestor/consolidation/reflection.py runs periodic synthesis across N episodes for one user. Outputs:
- stable_preferences – patterns appearing in 3+ episodes
- stable_constraints – rules the user repeatedly invokes
- changed_beliefs – preferences that shifted (old → new, with explicit invalidate)
- contradictions_for_review – flagged for HUMAN REVIEW, not auto-resolved
The "do not auto-resolve" stance is the load-bearing piece for regulated chat systems. The prompt is explicit (reflection.py:35-66): "Do NOT auto-resolve contradictions. Flag them for human review."
Chain-of-Note reading
recall() returns a list. recall_as_pack() returns a typed retrieval envelope an agent can actually reason about – every field a Chain-of-Note flow needs to cite, abstain, or pick the right validity window when memories conflict:
pack = mem.recall_as_pack(query="who runs engineering?", context=ctx)
for entry in pack.memories:
print(entry.id, # cite this in the answer
entry.confidence, # weight or abstain
entry.valid_from, # bi-temporal window for conflict resolution
entry.valid_until,
entry.source_episode_id) # provenance back to the round it came from
agent.send(pack.render_prompt()) # Chain-of-Note prompt, memories interpolated as JSON
ContextPack is frozen=True, hashable, JSON-serializable. It drops cleanly into a tool call. The default prompt has explicit ABSTAIN and CONFLICT clauses – every frontier model defaults to confabulation otherwise.
Multi-agent primitives
Six roles
AgentRole: ORCHESTRATOR, PLANNER, EXECUTOR, RESEARCHER, REVIEWER, MONITOR (attestor/context.py:49-56). The role flows onto every memory's metadata for provenance. Access enforcement is two-layer:
- AgentContext layer – ROLE_PERMISSIONS matrix gates writes / forgets per role. Matrix: ORCHESTRATOR = R+W+F; PLANNER/EXECUTOR/RESEARCHER = R+W; REVIEWER/MONITOR = R only. read_only=True is an independent kill switch.
- Postgres RLS layer – row-level filter on user_id (see § Tenant isolation).
AgentContext – handoff, scratchpad, trail
orchestrator = AgentContext.from_env(agent_id="orchestrator", namespace="project:acme")
planner = orchestrator.as_agent("planner", role=AgentRole.PLANNER)
executor = planner.as_agent("executor", role=AgentRole.EXECUTOR)
# Each child carries parent_agent_id + accumulating agent_trail.
# All three share the same scratchpad: Dict[str, Any] for typed handoff data.
as_agent() creates a child context with parent_agent_id, full agent_trail, and a shared scratchpad. The trail accumulates – useful for proving "this answer came from agent X who got it from agent Y."
Per-agent token budgets
AgentContext.token_budget (default 20 000) is enforced – recall() packs results greedily until the budget is exhausted (scorer.py:fit_to_budget). token_budget_used accumulates across calls in a session.
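A sketch of tightening the budget per agent; whether token_budget is a constructor argument is an assumption (the attribute and its 20 000 default are documented above):

ctx = AgentContext(
    agent_id="executor-1",
    role=AgentRole.EXECUTOR,
    namespace="acme-prod",
    token_budget=8_000,   # assumed constructor arg; default is 20 000
)
results = mem.recall(query="deployment history", context=ctx)  # packed to fit 8k tokens
print(ctx.token_budget_used)  # accumulates across calls in a session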
Optional write quotas
mem.set_quota(user_id, daily_writes=...) – enforced on add against the v4 user_quotas table (core.py:592-621). Optional; unset means unlimited.
Security & Compliance
Row-Level Security
Cross-link to § Tenant isolation. RLS policies are the access-control surface; the Python layer trusts them. Set attestor.current_user_id per connection.
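If you manage your own connections against the same database, the session variable can be set like this (a sketch assuming psycopg 3; any Postgres driver works):

import psycopg  # assumed driver; set_config() is standard Postgres

with psycopg.connect("dbname=attestor") as conn:
    with conn.cursor() as cur:
        # Scope this connection to one tenant. Empty/unset fails closed:
        # the RLS policies then return zero rows.
        cur.execute(
            "SELECT set_config('attestor.current_user_id', %s, false)",
            ("user-42",),
        )
        cur.execute("SELECT count(*) FROM memories")
        print(cur.fetchone()[0])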
Provenance on every memory
Every memory carries agent_id, session_id, source_episode_id. The supersession chain (superseded_by) is preserved forever. Conversation episodes are stored verbatim, separate from the memories extracted from them – meaning you can always reconstruct which conversation turn produced which fact.
Deletion audit log
Hard deletes (e.g., GDPR purges) write a row to deletion_audit before the cascade – what was deleted, when, why, by whom. This is the carve-out for the otherwise-immutable schema.
GDPR – export and purge
mem.export_user(external_id="user-42") # full data export (memories + episodes + sessions + projects)
mem.purge_user(external_id="user-42", # cascading hard delete with audit trail
reason="GDPR right-to-erasure request 2026-04-27")
mem.deletion_audit_log(limit=100) # forensic readback
core.py:557-590. v4 only. Returns / writes everything Subject Access requires for Art. 15 / Art. 17.
Optional: Ed25519 provenance signing
Enable via config (signing.enabled = true). On every add, attestor signs the canonical payload id || agent_id || t_created || content_hash with an Ed25519 key. mem.verify_memory(memory_id) returns bool (core.py:623-640). Optional, off by default – turn on for adversarial-write contexts where you need cryptographic non-repudiation.
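A sketch of the round trip, assuming add() returns the new memory's id (check the API; it may return a richer object):

# assumes signing.enabled = true in ~/.attestor.toml
memory_id = mem.add(
    content="Alice approved the Q3 budget",
    entity="alice", category="decision", context=ctx,
)
assert mem.verify_memory(memory_id)  # False means the payload or signature was tampered with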
Runtime topologies
Same API across all three. Only configuration changes.
| Mode | Shape | When to use |
|---|---|---|
| A – Embedded library | AgentMemory(config) in-process; talks directly to Postgres + Neo4j | Single-process agents, scripts, notebooks |
| B – Sidecar | attestor api on localhost:8080; language-agnostic HTTP client shares the same Postgres + Neo4j | Polyglot agents on one box (Python + TS + Go) |
| C – Shared service | One Attestor service in front of an agent mesh (App Runner / Cloud Run / Container Apps) backed by managed Postgres + Neo4j | Production multi-agent platforms |
attestor api --port 8080 # Mode B / C – Starlette ASGI REST (HTTP)
attestor mcp --path ~/.attestor # MCP stdio server (zero-config; for Claude Desktop / Cursor / Windsurf)
attestor serve ~/.attestor # MCP stdio server (positional-path variant; equivalent transport)
Backends
| Backend | Document | Vector | Graph | Status |
|---|---|---|---|---|
| Postgres + Neo4j (default) | ✓ | pgvector | Neo4j + GDS | Production-ready |
| ArangoDB | ✓ | ✓ | ✓ | Production-ready (one engine, all 3 roles) |
| AWS | DynamoDB | OpenSearch Serverless | Neptune | Backend code + Terraform shipped |
| Azure | Cosmos DB | Cosmos DiskANN | NetworkX (in-process) | Backend code shipped, Terraform forthcoming |
| GCP | AlloyDB | AlloyDB ScaNN | AGE on AlloyDB | Backend code shipped, Terraform forthcoming |
Override the default via config:
# ~/.attestor.toml
backend = "postgres+neo4j" # or "arangodb" | "aws" | "azure" | "gcp"
Reference Terraform lives under attestor/infra/.
Embeddings
Provider auto-detect (attestor/store/embeddings.py:get_embedding_provider), in this order:
- Local Ollama bge-m3 – 1024-D, 8K context; used when http://localhost:11434 is reachable
- Cloud-native – Bedrock Titan / Vertex / Azure OpenAI when their SDK + creds are present
- OpenAI text-embedding-3-large – 3072-D native; pin OPENAI_EMBEDDING_DIMENSIONS=1024 for schema compat
- OpenRouter – for federated runs
Local-first by design. Override:
export ATTESTOR_DISABLE_LOCAL_EMBED=1 # skip the Ollama probe entirely
export ATTESTOR_EMBEDDING_PROVIDER=openai
export ATTESTOR_EMBEDDING_MODEL=text-embedding-3-large
CLI
attestor --help lists everything. The most useful commands:
| Command | Purpose |
|---|---|
| attestor init | Create a starter config |
| attestor setup local | Generate Docker Compose for Postgres + Pinecone Local + Neo4j |
| attestor doctor | Health-check every store + the retrieval pipeline |
| attestor add / recall / search / list | CRUD-ish memory ops |
| attestor timeline | Entity timeline (uses bi-temporal manager) |
| attestor stats | Store statistics |
| attestor export / import | JSON dump / restore |
| attestor compact | Remove archived memories |
| attestor update / forget | Mutate / archive a memory |
| attestor inspect | Inspect raw database state |
| attestor api | Start the Starlette REST API |
| attestor serve <path> | Start MCP stdio server (positional-path variant) |
| attestor mcp [--path …] | Start MCP stdio server (zero-config; default for Claude Desktop / Cursor / Windsurf) |
| attestor ui | Read-only browser UI for the store |
| attestor hook {session-start, post-tool-use, stop} | Run a Claude Code lifecycle hook |
| attestor lme / locomo / mab | Built-in benchmark runners (see § Evaluation) |
MCP server
attestor mcp (or attestor serve <path>) exposes an MCP stdio server with eight tools:
| Tool | Purpose |
|---|---|
| memory_add | Write a memory with provenance |
| memory_get | Fetch one memory by id |
| memory_recall | Run the full retrieval pipeline |
| memory_search | Filtered list (entity / category / time / namespace) |
| memory_forget | Archive a memory by id |
| memory_timeline | Chronology for an entity |
| memory_stats | Store statistics |
| memory_health | Per-role health snapshot – call this first when integrating |
Plus MCP resources (memory listings) and prompts (canned recall prompts for IDE assistants).
Hooks (Claude Code)
Three lifecycle hooks ship in attestor/hooks/:
- session_start – injects relevant memories into the session context based on cwd / repo
- post_tool_use – auto-captures useful artifacts from Write / Edit / Bash
- stop – writes a session summary on exit
Wire them up via the installer (next section) or by hand in ~/.claude/settings.json.
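Hand-wiring looks roughly like this; a sketch assuming Claude Code's current hooks schema in settings.json (the attestor hook commands are from the CLI table above):

{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "attestor hook session-start" }] }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit|Bash",
        "hooks": [{ "type": "command", "command": "attestor hook post-tool-use" }]
      }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "attestor hook stop" }] }
    ]
  }
}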
Install for Claude Code
Single instruction users can give Claude Code:
install attestor
(Or run /install-attestor.) The installer interviews you on:
- Scope – global (~/.claude/.mcp.json) vs project (.mcp.json)
- Postgres connection – local Docker, Neon, RDS, etc.
- Neo4j connection – local Docker, AuraDB, etc.
- Backend override – default postgres+neo4j, or arangodb / aws / azure / gcp
- Embedding provider – local Ollama (default), OpenAI, or cloud-native
- Hooks – whether to wire session-start / post-tool-use / stop
- Namespace + default token budget
Then it installs attestor via pipx, writes the MCP config, optionally writes settings.json hooks, and runs attestor doctor to verify.
Install as a Skill (2026 agent SDKs)
Attestor ships with a canonical SKILL.md at skills/attestor-memory/SKILL.md. Both Anthropic (skills-2025-10-02) and OpenAI's Responses API converged on this format – a markdown file with YAML frontmatter – for distributing reusable agent expertise. The wheel ships the SKILL.md, so every 2026-grade harness can auto-discover it after a single pip install attestor.
The skill teaches the agent the six core primitives (recall, add, timeline, current_facts, forget, audit) plus the v4 enterprise surface (bi-temporal as_of replay, RBAC roles, namespace isolation, provenance signing, GDPR export / purge). Every code example references methods that actually exist on attestor.AgentMemory, and a CI test (tests/test_skill_md.py) keeps the SKILL.md from drifting from the live API.
To pin the contract in your own host:
pip install attestor
python -c "import attestor, importlib.resources as r; print(r.files('attestor'))" # confirm wheel installed
# Point your agent harness at the bundled SKILL.md or read it directly:
python -c "from pathlib import Path; import attestor; \
print((Path(attestor.__file__).parent.parent / 'skills' / 'attestor-memory' / 'SKILL.md').read_text())"
Evaluation
Boundary statement. The dual-LLM judge stack is a benchmarking mechanism, not the runtime contract. Recall in production is single-pipeline and deterministic. Multiple judges score answers in evaluation only – never in user-facing reads.
| Runner | Source | Measures |
|---|---|---|
| attestor lme | LongMemEval (Wu et al., 2024; see above) | answer accuracy under long history, distillation, dual-judge cross-family |
| attestor locomo | LoCoMo | conversational long-memory consistency |
| attestor mab | MultiAgentBench | multi-agent coordination |
| AbstentionBench (CI gate) | internal | when not to answer – known unknowns |
| scripts/lme_smoke_local.py | dual-LLM smoke | quick install verification (see Quick Start § 6) |
The smoke driver mirrors the canonical published-benchmark stack exactly. See --help for the full env-var / CLI-flag override matrix.
Project layout
attestor/
core.py -- AgentMemory (main public API)
client.py -- MemoryClient (HTTP drop-in for remote Attestor)
context.py -- AgentContext, AgentRole, Visibility
models.py -- Memory, RetrievalResult, ContextPack
cli.py -- attestor CLI entry point
api.py -- Starlette ASGI REST API
longmemeval.py -- LongMemEval benchmark runner (dual-judge)
locomo.py -- LoCoMo runner
doctor_v4.py -- v4 schema + invariant validator
init_wizard.py -- interactive install flow
store/
base.py -- DocumentStore / VectorStore / GraphStore protocols
registry.py -- backend selection
connection.py -- config layering / env resolution
embeddings.py -- provider auto-detect (Ollama / OpenAI / Bedrock / Vertex / Azure)
postgres_backend.py -- pgvector (document + vector roles)
neo4j_backend.py -- Neo4j + GDS (graph role)
arango_backend.py -- all 3 roles in one
aws_backend.py -- DynamoDB + OpenSearch Serverless + Neptune
azure_backend.py -- Cosmos DB DiskANN + NetworkX
gcp_backend.py -- AlloyDB pgvector + AGE + ScaNN
schema.sql -- v4 Postgres schema (RLS, bi-temporal columns, content_tsv)
conversation/
ingest.py -- ingest_round() pipeline
extraction/
round_extractor.py -- 2-pass speaker-locked extraction
conflict_resolver.py -- 4-decision contract (ADD/UPDATE/INVALIDATE/NOOP)
rule_based.py -- deterministic fact extraction (no LLM)
prompts.py -- shared prompt templates
consolidation/
consolidator.py -- sleep-time re-extraction
reflection.py -- cross-thread synthesis (stable patterns + flagged contradictions)
graph/
extractor.py -- entity / relation extraction
retrieval/
orchestrator.py -- 6-step semantic-first pipeline
tag_matcher.py
scorer.py -- MMR, confidence decay, entity boost, fit-to-budget
trace.py -- JSONL trace writer
temporal/
manager.py -- timelines, supersession, contradiction detection, as_of replay
identity/
signing.py -- Ed25519 provenance signing (optional)
defaults.py -- SOLO mode auto-provisioning
mcp/
server.py -- MCP server (tools, resources, prompts)
hooks/
session_start.py
post_tool_use.py
stop.py
ui/
app.py -- Starlette read-only viewer
static/, templates/ -- Evidence Board UI
utils/
config.py, tokens.py
infra/
local/ -- Docker Compose (Postgres + Pinecone Local + Neo4j)
aws_arango/ -- Reference Terraform
tests/ -- Unit tests; live cloud tests env-gated
evals/ -- LongMemEval / LoCoMo / MultiAgentBench / AbstentionBench harnesses
docs/ -- Architecture notes, ADRs
commands/ -- /install-attestor, etc.
scripts/ -- lme_smoke_local.py, etc.
Development
poetry install
poetry run pytest tests/ -q # unit tests, no external services needed
ATTESTOR_LIVE_PG=1 poetry run pytest tests/live -q # live integration (env-gated)
Style: black formatting, isort imports, ruff lint, mypy types. PEP 8, type-annotated signatures, dataclasses for DTOs. Many small files (200–400 lines typical, 800 max).
Conventions worth knowing:
- Postgres is the source of truth. Neo4j is derived; rebuild it from Postgres if it drifts.
- Non-fatal errors in vector / graph paths are caught and logged. The document path never silently breaks.
- Configuration layering: env vars → ~/.attestor.toml → in-code overrides.
- Two write paths: add() for structured (lightweight rule-based supersession), ingest_round() for conversational (full 2-pass + 4-decision contract).
Health check
Always call this first when integrating:
attestor doctor # CLI
from attestor import AgentMemory # Python API
mem = AgentMemory()
print(mem.health())
// MCP
{ "tool": "memory_health" }
It probes Document Store (Postgres), Vector Store (Pinecone, or the pgvector fallback), Graph Store (Neo4j), and the retrieval pipeline. All four are required for the default topology – graph expansion is step 2 of the canonical pipeline, not an optional accelerator. Transient vector-probe failures surface in the recall() trace (vector_error) so callers can distinguish a degraded result from a clean one.
Status & versioning
- Version: 4.0.0 (stable) – published to PyPI and the MCP Registry as io.github.bolnet/attestor. pip install attestor returns 4.0.0 (no --pre flag needed).
- v3 → v4: greenfield rebuild on a v4-native Postgres schema with hard tenant isolation, bi-temporal facts, and a no-LLM retrieval critical path. There is no automated migration. v3 was alpha-only with no production users; drop your v3 DB and reinstall.
- See CHANGELOG.md for the full track-by-track changelog.
License
MIT. See LICENSE.