CogniRepo
Persistent memory and context for any AI tool. Not a chatbot — infrastructure.

What it does
Every AI conversation starts from zero. Claude, Cursor, Gemini — none of them remember what you fixed yesterday, which files relate to which features, or what decisions were made last sprint. CogniRepo fixes that.
It sits between your codebase and any AI tool, providing:
- Semantic memory — FAISS vector store with sentence-transformer embeddings. Store decisions, docs, architecture notes. Retrieve them with natural language.
- Episodic log — append-only event journal. Know what happened before that error.
- Knowledge graph — NetworkX DiGraph linking functions, classes, files, imports, inheritance chains, call relationships, and concepts. All queryable.
- AST reverse index — O(1) symbol lookup across your entire codebase in any supported language.
- User behavior profiling — tracks how you prompt so Claude adapts its response style without you having to re-explain preferences every session.
- Error tracking — records errors with prevention hints so Claude avoids repeating the same mistake across sessions.
- Session history — persists conversation exchanges so any session can resume where the last one ended.
- Architectural summaries — auto-generated on first init; built entirely from the local AST index (no API key needed). File → directory → repo summary tree, embedded into FAISS for semantic search.
- Multi-model orchestration — classify query complexity → build context → route to the right model. Claude for deep reasoning, Gemini Flash for quick lookups. All automatic.
Every AI tool that connects gets the same accumulated project knowledge. Memory persists across sessions, across tools, across time.
When to use CogniRepo
Most effective on codebases ≥ 15K LOC. On small repos (< 10K LOC), native file reads are fast enough that the MCP tool-schema overhead (~3,900 tokens for 32 tools) costs more than you save. Break-even is roughly 4 tool calls on a medium-sized repo.
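As a rough sanity check, the break-even point follows from simple arithmetic: the one-time ~3,900-token schema overhead divided by the per-call saving. The per-query figures below are illustrative assumptions, not measured constants:

```python
# Rough break-even estimate: how many tool calls before the one-time
# MCP schema overhead pays for itself. Per-query numbers are illustrative.
SCHEMA_OVERHEAD = 3_900       # tokens to register 32 tool schemas (measured)
BASELINE_PER_QUERY = 4_000    # assumed tokens for a native file-read answer
COGNIREPO_PER_QUERY = 3_000   # assumed tokens with context_pack

saving_per_call = BASELINE_PER_QUERY - COGNIREPO_PER_QUERY   # 1,000 tokens
break_even_calls = -(-SCHEMA_OVERHEAD // saving_per_call)    # ceiling division

print(break_even_calls)  # → 4 calls until the overhead is amortized
```

Larger per-call savings (typical on bigger repos) pull the break-even point lower.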
CogniRepo vs. claude-context / similar tools:
| Feature | CogniRepo | claude-context / similar |
|---|---|---|
| Pure code retrieval | ✅ (FAISS + graph + AST) | ✅ Often faster on first use |
| Episodic memory (what happened last sprint) | ✅ Persistent BM25 + vector | ❌ |
| Cross-agent handoff (Claude → Gemini → Cursor) | ✅ last_context.json shared | ❌ |
| User behaviour profile (adapts depth/style) | ✅ get_user_profile() | ❌ |
| Error pattern avoidance (learns from past fails) | ✅ record_error() | ❌ |
| Architectural decision records | ✅ record_decision() | ❌ |
| Multi-repo org graph (microservices) | ✅ CHILD_OF / CALLS_API edges | ❌ |
Conclusion: prefer CogniRepo when you value institutional memory across sessions. Use simpler tools when you just need one-shot code retrieval on a small codebase.
Why it helps — measured numbers
Benchmarked across 6 real open-source repos (FastAPI, Flask, Celery, Ansible, Moby/Docker, Kubernetes) using 30 structured prompts tested against Claude, Gemini, and Cursor/Codex.
| Metric | Value | Notes |
|---|---|---|
| Token reduction — Python repos | 50–84% | FastAPI FA-2: 12,000 → 2,500 · FA-4: 2,000 → 450 · FL-4: 8,000 → 1,250 |
| Token reduction — average (all tested) | ~60% | Across FA/FL/CE/AN where both baselines were captured |
| Token reduction — complex dynamic codebases | 20–35% | Celery CE-4/CE-5; deep async/dynamic-dispatch patterns reduce gains |
| Symbol lookup latency | < 1 ms | vs. grep at 2–8 s on large repos |
| Accuracy vs. baseline | equal or better in 100% of tests | No regression observed; FA-2 accuracy improved Moderate → High |
| Cross-agent context handoff | ✅ validated | CE-4: Claude primed the index, Gemini CLI consumed it → 35% token saving, same accuracy |
| Dynamic dispatch coverage | honest gap | CE-3 (APScheduler beat dispatch) returned NA for both; CogniRepo does not fabricate call chains |
| Go / multi-language coverage | partial | Moby MO-2 showed 67% savings; MO-3–5 / K8-* incomplete pending Go grammar improvements |
Honest limits: CogniRepo adds the most value on Python repos with clear static structure. Dynamic dispatch patterns (Celery beat, plugin registries), deep Go codebases, and Ansible's 22-level variable precedence chains reduce retrieval confidence. The tool reports uncertainty rather than hallucinating call chains.
Measured: precision@k and index build time (4 external repos)
Indexed 4 real repos, measured with cognirepo index-repo + context_pack queries. CPU-only, no GPU.
| Repo | Files | Index time | Lookup latency | precision@3 |
|---|---|---|---|---|
| flask | 83 | 12s | 0.011 ms | 100% |
| fastapi | 1,122 | 34s | 0.005 ms | 89% |
| celery | 416 | 44s | 0.025 ms | 100% |
| ansible | 1,813 | 145s | 0.018 ms | 80% |
All repos: symbol hit rate 5/5, lookup latency < 0.1ms. All quality gates pass. Full numbers: docs/METRICS.md.
Run cognirepo benchmark on your own codebase to reproduce. See docs/METRICS.md.
How it works
User / AI Tool
     │
     └── MCP stdio (Claude Desktop, Gemini CLI, Cursor)
              │
     tools/ — single entry point to memory engine
              │
    ┌─────────┼────────────────────────────┐
    ▼         ▼                            ▼
memory/    retrieval/hybrid.py      graph/knowledge_graph.py
FAISS      3-signal merge:          NetworkX DiGraph
episodic   vector + graph +         node types:
embeddings behaviour                FILE, FUNCTION, CLASS,
                                    CONCEPT, QUERY, SESSION,
           indexer/ast_indexer.py   ERROR
           tree-sitter              edge types:
           multi-language           CALLS, CALLED_BY,
           + stdlib ast fallback    DEFINED_IN, CO_OCCURS,
                                    IMPORTS, INHERITS,
           graph/behaviour_tracker.py  RELATES_TO,
           per-symbol hit counts    QUERIED_WITH
           user behavior profile
           error pattern tracking
           session history
     │
.cognirepo/ (Fernet encrypted if storage.encrypt: true)
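The hybrid retrieval stage merges three signals — vector similarity, graph proximity, and behavioural hit counts — into one ranking. A minimal sketch of such a weighted merge (the weight values and score shapes here are illustrative assumptions, not CogniRepo's actual defaults):

```python
# Hypothetical 3-signal merge: combine normalized scores per candidate
# symbol into one ranked list. Weights are illustrative, not the real ones.
WEIGHTS = {"vector": 0.5, "graph": 0.3, "behaviour": 0.2}

def merge(candidates):
    """candidates: {symbol: {"vector": s, "graph": s, "behaviour": s}} with scores in [0, 1]."""
    ranked = {
        sym: sum(WEIGHTS[sig] * scores.get(sig, 0.0) for sig in WEIGHTS)
        for sym, scores in candidates.items()
    }
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

results = merge({
    "verify_token": {"vector": 0.9, "graph": 0.4, "behaviour": 0.8},  # 0.45+0.12+0.16
    "parse_config": {"vector": 0.3, "graph": 0.9, "behaviour": 0.1},  # 0.15+0.27+0.02
})
print(results[0][0])  # → verify_token (0.73 vs 0.44)
```

A symbol that is merely similar in embedding space can be outranked by one the user actually touches often — that is the point of blending in the behavioural signal.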
Quick start
Requirements
- Python 3.11+
- API key (optional — only needed for cognirepo ask): ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENAI_API_KEY, or GROK_API_KEY. Indexing, memory, summarization, and all MCP tools work fully offline.
Install
# Recommended — ONNX/fastembed, no GPU/CUDA required (~50 MB install):
pip install 'cognirepo[languages]'
# For encryption at rest:
pip install 'cognirepo[languages,security]'
# With model routing (cognirepo ask — needs an API key):
pip install 'cognirepo[languages,providers]'
# Full development install:
pip install -e '.[dev,security,languages]'
Note: CPU-only embeddings are the default (fastembed/ONNX, no PyTorch/CUDA required). For GPU acceleration, use pip install 'cognirepo[gpu]' and install torch separately: pip install torch --index-url https://download.pytorch.org/whl/cu121
Run
# One-command onboarding (init + index + auto-configure MCP for Claude/Cursor/VS Code):
cognirepo setup
# Or step by step:
cognirepo init --no-index # scaffold .cognirepo/
cognirepo index-repo . # index your codebase (required before MCP tools work)
cognirepo index-repo . --daemon # index and run watcher in background
# Check everything is working:
cognirepo status # shows symbol count, graph nodes, signal warmth
cognirepo doctor # full health check
# Query through multi-model orchestrator:
cognirepo ask "why is auth slow?"
# Manage background watchers:
cognirepo list # show all running watcher daemons
cognirepo list -n <PID> --view # tail the log of a specific watcher
cognirepo list -n <PID> --stop # stop a watcher
First-time setup: cognirepo init + cognirepo index-repo . must complete before MCP tools (context_pack, lookup_symbol, who_calls, etc.) return data.
Connect your AI tools
Claude Code / Claude Desktop (recommended — project-scoped)
Run cognirepo init inside your project — it asks whether you want to configure Claude and automatically writes .claude/CLAUDE.md and .claude/settings.json with the correct project-locked connector.
Each project gets its own isolated connector named cognirepo-<project>:
{
"mcpServers": {
"cognirepo-myproject": {
"command": "cognirepo",
"args": ["serve", "--project-dir", "/abs/path/to/myproject"],
"env": {}
}
}
}
The --project-dir flag locks the MCP server to that project's .cognirepo/ directory. When Claude has multiple projects open simultaneously, each connector reads only its own memories — never mixing data across projects or teams.
Cursor / Copilot
cognirepo export-spec
cp adapters/cursor_mcp_config.json .cursor/mcp.json
# Restart Cursor — CogniRepo tools appear in the tool selector
Docker
cp .env.example .env # add your API keys
docker compose up mcp # MCP stdio server
MCP Tools — complete reference
All 32 tools are available to Claude, Cursor, and any MCP-compatible client.
Core retrieval
| Tool | Description | When to use |
|---|---|---|
| context_pack(query, max_tokens=2000) | Token-budget code + memory context | Every session — FIRST call before any file read |
| lookup_symbol(name) | O(1) symbol lookup → file + line | Before grepping for a function |
| who_calls(function_name) | Trace callers + dynamic-dispatch fallback | Impact analysis, refactoring |
| search_token(word) | Word-level reverse index across names, docs, comments | Finding where a concept lives |
| retrieve_memory(query, top_k=5) | Semantic similarity search over stored memories | Before answering — pull past context |
| search_docs(query) | Full-text search in all .md files | Documentation lookups |
| semantic_search_code(query, language=None) | Vector search over code symbols only | Code-specific semantic queries |
| subgraph(entity, depth=2) | Local knowledge-graph neighbourhood | Understand symbol relationships |
| graph_stats() | Node/edge counts and graph health | Check if the graph has data |
| episodic_search(query, limit=10) | BM25 keyword search in event history | Find past decisions or incidents |
| dependency_graph(module, direction="both") | Import/dependency relationships | Module coupling analysis |
| explain_change(target, since="7d") | What changed in a file/function + git cross-ref | Understanding recent changes |
| architecture_overview(scope="root") | Pre-computed architectural summaries | Big-picture questions |
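Under the hood, every one of these tools is invoked as a JSON-RPC 2.0 tools/call request over the MCP stdio transport. A minimal sketch of the per-call payload a client would send for context_pack (the standard MCP initialize handshake that precedes any tool call is omitted here):

```python
import json

# Shape of an MCP tools/call request (JSON-RPC 2.0 over stdio).
# A real client performs the MCP initialize handshake first; this
# only illustrates the per-call message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "context_pack",
        "arguments": {"query": "why is auth slow?", "max_tokens": 2000},
    },
}
wire = json.dumps(request)  # sent as one line on the server's stdin
print(wire)
```

MCP-capable hosts (Claude Desktop, Cursor, Gemini CLI) build these messages for you; you only ever see the tool names above.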
User & session intelligence
| Tool | Description | When to use |
|---|---|---|
| get_user_profile() | User's interaction style: depth preference, question types, vocabulary | Call at session start — calibrates Claude's response style |
| get_session_history(limit=10) | Recent conversation exchanges across sessions | Resuming context from prior sessions |
| record_user_preference(key, value, context="") | Store a style or format preference | When the user corrects an interpretation or states a preference |
Error tracking & prevention
| Tool | Description | When to use |
|---|---|---|
| get_error_patterns(min_count=1) | Recurring errors with prevention hints | Before proposing a fix — check whether it has failed before |
| record_error(error_type, message, file_path, query_context) | Log an error for future avoidance | After any error Claude or the user encounters |
Session start
| Tool | Description | When to use |
|---|---|---|
| get_agent_bootstrap() | Single-call session start: brief + last context + profile + errors (~300 tokens vs ~900) | Preferred first call — replaces the 4-call sequence |
| get_session_brief() | Architecture + hot symbols + index health | First call when you need the granular parts separately |
| get_last_context() | Most recent context_pack snapshot from a prior session | Resume where the previous agent left off |
Memory & storage
| Tool | Description | When to use |
|---|---|---|
| store_memory(text, source="") | Persist a memory to the FAISS index | After solving bugs, recording decisions |
| log_episode(event, metadata={}) | Append an event to the episodic journal | Track milestones, incidents, deployments |
| record_decision(summary, rationale="") | Record an architectural decision to episodic memory | When making non-obvious design choices |
| supersede_learning(old_memory_id, new_text) | Deprecate and replace an outdated memory in one call | When a past decision or fact has changed |
Cross-repo (organization)
| Tool | Description | When to use |
|---|---|---|
| org_search(query) | Search memories across all org repos | Multi-repo context queries |
| org_wide_search(query) | Search across every project in the org | Broadest cross-repo sweep |
| org_dependencies(depth=2) | Bidirectional inter-repo dependency graph | "What does this service depend on?" |
| cross_repo_search(query, scope="project") | Project-scoped or org-scoped search | Finding shared components |
| cross_repo_traverse(symbol, direction="both") | Traverse the org graph from a repo or symbol | Tracing bugs across service boundaries |
| list_org_context() | Org metadata + sibling repos | Understanding repo relationships |
| link_repos(src_repo, dst_repo, relationship) | Record a cross-repo dependency | When you discover one repo imports another |
Knowledge graph β what gets indexed
The knowledge graph is significantly richer than a simple call graph.
Node types
| Type | Description |
|---|---|
| FILE | Every indexed source file |
| FUNCTION | Function and method definitions with docstrings |
| CLASS | Class definitions with base classes |
| CONCEPT | Semantic concepts extracted from docstrings and identifiers |
| QUERY | Recorded query nodes (for retrieval scoring) |
| SESSION | Conversation session nodes |
| ERROR | Recurring error pattern nodes |
| MEMORY | Cross-agent memory nodes (synced from Claude/Gemini) |
Edge types
| Type | Direction | Description |
|---|---|---|
| DEFINED_IN | symbol → file | Symbol lives in this file |
| CALLS / CALLED_BY | bidirectional | Function call relationships with purpose labels |
| IMPORTS | file → file | Python import dependencies |
| INHERITS | class → parent | Inheritance hierarchy |
| CO_OCCURS | file → file | Files edited together (behavioural co-edit signal) |
| RELATES_TO | concept → symbol | Semantic concept linkage |
| QUERIED_WITH | query → symbol | Retrieval tracking for scoring |
IMPORTS and INHERITS edges are built automatically during index-repo from Python AST.
Use subgraph("MyClass", depth=2) or dependency_graph("mymodule") to query them.
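A depth-2 neighbourhood of the kind subgraph returns can be reproduced with plain NetworkX. A minimal sketch over hypothetical nodes — the attribute names and the exact traversal are illustrative, not CogniRepo's internal schema:

```python
import networkx as nx

# Toy knowledge graph using the node/edge types described above.
G = nx.DiGraph()
G.add_node("auth.py", type="FILE")
G.add_node("MyClass", type="CLASS")
G.add_node("verify_token", type="FUNCTION")
G.add_node("config.py", type="FILE")
G.add_edge("MyClass", "auth.py", type="DEFINED_IN")
G.add_edge("verify_token", "MyClass", type="CALLED_BY")
G.add_edge("auth.py", "config.py", type="IMPORTS")

# subgraph("MyClass", depth=2) is roughly an ego graph of radius 2,
# ignoring edge direction for reachability:
hood = nx.ego_graph(G, "MyClass", radius=2, undirected=True)
print(sorted(hood.nodes()))
# → ['MyClass', 'auth.py', 'config.py', 'verify_token']
```

Everything within two undirected hops of MyClass is included — its defining file, a caller, and a transitive import.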
User behavior profiling
CogniRepo tracks how you interact across sessions and builds a profile that Claude uses to calibrate its responses — without you having to repeat preferences every session.
What gets tracked
- Depth preference — inferred from average query length: concise/medium/detailed
- Question types — distribution across: why, what, how, fix, explain, where, refactor, add
- Domain vocabulary — top terms that appear frequently in your queries
- Code focus — percentage of queries referencing code identifiers (symbols, functions)
- Sample queries — last 3 queries for Claude to infer framing style
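These signals are cheap to compute. A minimal sketch of how depth preference and question-type distribution could be derived from a query log (the thresholds and buckets are illustrative assumptions, not CogniRepo's actual values):

```python
from collections import Counter

QUESTION_WORDS = ("why", "what", "how", "fix", "explain", "where", "refactor", "add")

def profile(queries):
    """Infer a toy user profile from raw query strings."""
    avg_len = sum(len(q.split()) for q in queries) / len(queries)
    # Illustrative thresholds — the real buckets are an implementation detail.
    depth = "concise" if avg_len < 5 else "medium" if avg_len < 12 else "detailed"
    types = Counter(q.split()[0].lower() for q in queries
                    if q.split()[0].lower() in QUESTION_WORDS)
    return {"depth_preference": depth,
            "top_question_type": types.most_common(1)[0][0] if types else None}

p = profile(["how does token refresh work in the auth middleware",
             "why is auth slow",
             "how do I validate sessions"])
print(p)  # → {'depth_preference': 'medium', 'top_question_type': 'how'}
```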
Accessing your profile
# MCP tool (Claude calls automatically at session start):
get_user_profile()
# CLI:
cognirepo user-prefs
Example profile output
{
"depth_preference": "detailed",
"top_question_type": "how",
"question_type_distribution": {"how": 12, "why": 8, "fix": 5},
"top_terminology": ["auth", "token", "session", "middleware", "validate"],
"code_focus_percent": 73,
"framing_hints": "prefers detailed responses; often asks 'how' questions; domain vocabulary: auth, token, session",
"total_queries_tracked": 47
}
Claude receives framing_hints at session start and adjusts response length, code density, and terminology accordingly. The profile accumulates over time — the more you use it, the more accurate it gets.
Error tracking & prevention
CogniRepo logs every error that occurs during sessions — whether it's a Python exception, a failed build step, or a tool call that went wrong. Errors are stored with:
- Dedup signature — prevents the same error from inflating the count
- Prevention hint — a targeted suggestion to avoid the same error class
- Occurrence context — last 5 occurrences with file path and error message
- Query context — the query or action that triggered the error
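One way such a dedup signature could work is to hash the error type and site together with a normalized message, so variable values don't spawn new entries. A sketch — the normalization rules and key format here are illustrative assumptions, not CogniRepo's actual scheme:

```python
import hashlib
import re

def dedup_signature(error_type, message, file_path):
    """Hypothetical dedup key: the same error class at the same site collapses
    to one signature even when literal values in the message differ."""
    normalized = re.sub(r"\d+", "<n>", message.lower())   # mask numbers
    normalized = re.sub(r"'[^']*'", "<s>", normalized)    # mask quoted values
    raw = f"{error_type}:{file_path}:{normalized}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

a = dedup_signature("KeyError", "'user_42' missing", "api/handlers.py")
b = dedup_signature("KeyError", "'user_99' missing", "api/handlers.py")
print(a == b)  # → True: both occurrences count toward one pattern
```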
Logging errors
# MCP tool (Claude calls after errors):
record_error("TypeError", "expected str got int", "config/parser.py", "fix config loading")
Viewing error patterns
# MCP tool:
get_error_patterns()
Returns:
[
  {
    "error_type": "TypeError",
    "count": 7,
    "files": ["config/parser.py", "api/handlers.py"],
    "last_seen": "2026-04-22T10:30:00Z",
    "prevention_hint": "Wrong type → validate inputs at function boundary.",
    "recent_context": "expected str got int in parse_config"
  }
]
Built-in prevention hints
| Error class | Prevention hint |
|---|---|
| NameError | Undefined variable → check imports and scope before use |
| ImportError | Import failed → verify the package is installed and the module path is correct |
| AttributeError | Object missing attribute → check type, None-guard, or spelling |
| TypeError | Wrong type → validate inputs at function boundary |
| KeyError | Missing dict key → use .get() with a default or check existence first |
| IndexError | List index out of range → guard with a len() check before access |
| OSError | File/IO error → always guard file ops with try/except OSError |
| SyntaxError | Syntax error → run a linter before committing |
| Timeout | Timeout → add an explicit timeout parameter and retry logic |
| AssertionError | Assertion failed → review invariants; do not use assert in production |
Session history
Every cognirepo ask exchange is persisted to .cognirepo/sessions/.
Sessions are indexed by UUID and retrievable via:
# List recent sessions:
cognirepo sessions
# MCP tool — Claude calls at session start to resume context:
get_session_history(limit=5)
Each entry returns: session ID, created timestamp, message count, model used, and the last user/assistant exchange for quick context scan.
Architectural summaries
cognirepo init automatically prompts to run cognirepo summarize after the first index. This produces a three-level architectural summary of the entire codebase:
- Level 1 — repo-wide summary (what the project does, key modules, entry points)
- Level 2 — per-directory summaries (what each package is responsible for)
- Level 3 — per-file summaries (what each file contains, key functions/classes)
Summaries are stored in .cognirepo/index/summaries.json and served via the architecture_overview MCP tool — zero token cost for Claude to understand the big picture.
# Auto-prompted on first init. Run manually anytime:
cognirepo summarize
# Fully local — no API key required. Reads from ast_index.json, runs in < 1 second.
# File summaries are also embedded into FAISS for semantic architecture queries.
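The file → directory → repo rollup can be sketched in a few lines. The data shapes below are illustrative — the real summaries come from ast_index.json and live in .cognirepo/index/summaries.json:

```python
from collections import defaultdict
from pathlib import PurePosixPath

# Illustrative Level-3 per-file summaries, as an AST index might yield them.
file_summaries = {
    "api/routes.py": "HTTP route handlers",
    "api/deps.py": "request-scoped dependencies",
    "core/config.py": "settings loader",
}

# Level 2: roll file summaries up into one line per directory.
dirs = defaultdict(list)
for path, summary in file_summaries.items():
    dirs[str(PurePosixPath(path).parent)].append(summary)
level2 = {d: "; ".join(parts) for d, parts in dirs.items()}

# Level 1: one repo-wide line naming the top-level packages.
level1 = f"Packages: {', '.join(sorted(level2))}"

print(level1)         # → Packages: api, core
print(level2["api"])  # → HTTP route handlers; request-scoped dependencies
```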
Multi-model orchestration
cognirepo ask automatically picks the right model for each query:
| Tier | Score | Default model | Use case |
|---|---|---|---|
| QUICK | ≤ 2 | local resolver | Single-token / trivial — zero API, fastest path |
| STANDARD | ≤ 4 | Haiku | Quick lookup, factual, single symbol |
| COMPLEX | ≤ 9 | Sonnet | Moderate reasoning |
| EXPERT | > 9 | Opus | Cross-file, architectural, ambiguous — full context, best model |
cognirepo ask "where is verify_token defined?"        # → QUICK, answered locally
cognirepo ask "why is auth slow?"                     # → EXPERT, Claude with full context
cognirepo ask --verbose "explain the circuit breaker" # show tier/score/signals
Provider fallback chain: Grok → Gemini → Anthropic → OpenAI.
All errors are logged to .cognirepo/errors/<date>.log — no raw tracebacks shown to users.
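A minimal sketch of this kind of complexity scoring and routing — the signal words, weights, and length heuristic below are illustrative assumptions (the real scorer lives in orchestrator/), only the tier thresholds mirror the table above:

```python
# Hypothetical complexity scorer: each signal word bumps the score;
# tier thresholds match the table (≤2 QUICK, ≤4 STANDARD, ≤9 COMPLEX, >9 EXPERT).
SIGNALS = {
    "why": 5, "architecture": 5, "refactor": 4,  # reasoning-heavy words
    "slow": 4, "explain": 2,
}

def route(query):
    words = query.lower().replace("?", "").split()
    score = len(words) // 4 + sum(SIGNALS.get(w, 0) for w in words)
    for tier, limit in (("QUICK", 2), ("STANDARD", 4), ("COMPLEX", 9)):
        if score <= limit:
            return tier, score
    return "EXPERT", score

print(route("where is verify_token defined?"))  # → ('QUICK', 1)
print(route("why is auth slow?"))               # → ('EXPERT', 10)
```

The cheap tiers never touch an API at all, which is where most of the latency and cost savings come from.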
Language support
| Language | Extensions | Install |
|---|---|---|
| Python | .py | built-in |
| JavaScript / TypeScript | .js .ts .jsx .tsx | cognirepo[languages] |
| Java | .java | cognirepo[languages] |
| Go | .go | cognirepo[languages] |
| Rust | .rs | cognirepo[languages] |
| C / C++ | .c .cpp .h | cognirepo[languages] |
Full details and roadmap: docs/LANGUAGES.md
Storage layout
.cognirepo/
  config.json — project settings (project_id, model, retrieval weights)
  vector_db/
    semantic.index — FAISS flat index for semantic memory
    ast.index — FAISS IndexIDMap2 for code symbols
    ast_metadata.json — parallel metadata for ast.index rows
  graph/
    graph.pkl — NetworkX DiGraph (optionally Fernet-encrypted)
    behaviour.json — per-symbol hit counts, user profile, error patterns
  index/
    ast_index.json — reverse symbol index + file records
    manifest.json — git SHA + platform info for integrity checks
    summaries.json — architectural summaries (Levels 1–3)
  memory/
    episodic.json — append-only event journal
  sessions/
    <uuid>.json — conversation session files
    current.json — pointer to the most recent session
  errors/
    <date>.log — daily error logs (full tracebacks, never shown to users)
  learnings/
    learnings.json — structured learnings: decisions, bugs, prod issues
Everything under .cognirepo/ is .gitignored by default — never committed.
Fernet encryption is opt-in at storage.encrypt: true in config.json.
CLI reference
# Setup
cognirepo init # scaffold + configure; auto-indexes + auto-summarizes
cognirepo setup-env # interactive API key wizard
cognirepo test-connection # test API key connectivity
cognirepo migrate-config # migrate deprecated config keys
# Indexing
cognirepo index-repo [path] # AST-index a codebase
cognirepo summarize # generate LLM architectural summaries (auto-prompted on init)
cognirepo seed --from-git # seed behaviour weights from git history
cognirepo verify-index # verify AST index integrity
cognirepo coverage # per-directory symbol counts
# Querying
cognirepo ask <query> # route through multi-model orchestrator
cognirepo retrieve-memory <q> # similarity search
cognirepo search-docs <q> # full-text search in .md files
cognirepo log-episode <event> # append episodic event
cognirepo history # print recent episodic events
cognirepo sessions # list recent conversation sessions
# Memory management
cognirepo store-memory <text> # save a semantic memory
cognirepo user-prefs # view/set global user preferences
cognirepo prune [--dry-run] # prune low-score memories
# Health & monitoring
cognirepo prime # generate session bootstrap brief
cognirepo status # live retrieval signal weights + index health
cognirepo doctor [--fix] # full health check; --fix auto-repairs common issues
cognirepo benchmark # run quantitative value benchmarks
# Organization
cognirepo org create <name> # create local organization
cognirepo org link <org> [path] # link repo to organization
cognirepo org list # list organizations
# Daemon management
cognirepo list # list MCP servers, running daemons
cognirepo watch # manage background file-watcher daemon
Future Plans
Priorities drawn from the v0.3.0 benchmark findings and community feedback.
Near-term (v0.3.0)
- Go call-graph indexing — the tree-sitter-go grammar is loaded but call extraction is incomplete; the Moby/Kubernetes tests (MO-3–5, K8-*) could not be completed without it. Adding Go-aware who_calls and IMPORTS edges is the single highest-impact unblocked item.
- cognirepo ask — multi-model orchestrator (QUICK/STANDARD/COMPLEX/EXPERT tiers). Initial implementation stubbed in v0.2.0; orchestrator logic is implemented in orchestrator/ and being wired to a working API-key flow in v0.3.0.
- Incremental re-index on save — the file-watcher daemon exists (cognirepo watch) but re-index on write is not yet debounced correctly; large repos see spurious full re-indexes.
- CLAUDE.md mandatory-call relaxation — benchmark feedback (Moby tests) flagged that forcing context_pack before every file read adds latency under memory pressure. Will add a --fast mode that skips the tool-first gate for files under 50 lines.
Medium-term (v0.4.0)
- Kubernetes / 2M-LOC scale validation — the K8-1 through K8-5 test suite is not yet completed. Goal: a full scheduling-decision trace at < 8,000 tokens with CogniRepo vs. > 50,000 without.
- Plugin-registry pattern detection — Ansible AN-3/AN-4 (22-level variable precedence, strategy plugins) and Celery CE-3 (dynamic dispatch) returned NA. Plan: a static heuristic pass that detects register, entry_points, and __init_subclass__ patterns and annotates them as DYNAMIC_DISPATCH nodes in the graph.
- BM25 over symbol names — the current keyword search uses an exact-word reverse index; adding BM25 TF-IDF ranking over symbol names and docstrings would improve partial-match recall (e.g. HttpClient matching http_client).
- Cross-session memory warm-up — the Ansible benchmark noted that episodic/memory retrieval is low-value on fresh sessions. cognirepo prime exists but is not run automatically on init; will make it an opt-in default.
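The BM25-over-symbol-names idea can be sketched in pure Python. Splitting CamelCase/snake_case names into shared tokens is what lets HttpClient match http_client; the corpus below is illustrative:

```python
import math
import re

def tokens(symbol):
    """Split snake_case and CamelCase identifiers into lowercase word tokens."""
    return [t.lower() for t in
            re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|\d+", symbol)]

def bm25_best(query, symbols, k1=1.5, b=0.75):
    """Return the symbol whose token bag best matches the query (Okapi BM25)."""
    docs = {s: tokens(s) for s in symbols}
    avg_len = sum(len(d) for d in docs.values()) / len(docs)
    scores = {}
    for sym, doc in docs.items():
        score = 0.0
        for term in tokens(query):
            df = sum(term in d for d in docs.values())       # document frequency
            idf = math.log((len(docs) - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avg_len))
        scores[sym] = score
    return max(scores, key=scores.get)

print(bm25_best("HttpClient", ["http_client", "parse_config", "verify_token"]))
# → http_client
```

An exact-word reverse index would score zero here, since the literal string "HttpClient" appears in no symbol.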
Longer-term
- cognirepo ask streaming REPL — a full interactive session with tier routing, session persistence, and sub-agent delegation.
- Ruby, PHP, C#, Swift grammar support — tree-sitter grammars exist; need _TS_FUNCTION_TYPES / _TS_CLASS_TYPES mappings and call-extraction rules per language.
- Similarity edges in knowledge graph — embedding-distance clustering to connect semantically related symbols across files (not yet implemented).
- VS Code / JetBrains extension — surface lookup_symbol, context_pack, and who_calls directly in the editor sidebar without requiring an MCP-capable host.
Documentation
| Document | Description |
|---|---|
| docs/ARCHITECTURE.md | System design, component responsibilities, data flow |
| docs/architecture/SPECIFICATION.md | Technical spec, complexity signals, storage layout |
| docs/USAGE.md | Complete CLI, MCP, and Docker reference |
| docs/METRICS.md | Quantitative benchmarks: token reduction, lookup speedup, recall |
| CONTRIBUTING.md | How to add adapters, tools, and language support |
| SECURITY.md | Vulnerability reporting, data handling, trust model |
| docs/LANGUAGES.md | Language support details and roadmap |
License
CogniRepo is licensed under the MIT License.
- Free to use, study, modify, and distribute
- Use in proprietary products and commercial services β no restrictions
- No requirement to open-source your application
See LICENSE for full details.
