MenteDB MCP Server
Beta: MenteDB is under active development. APIs may change between minor versions.
The MCP (Model Context Protocol) server for MenteDB, the mind database for AI agents.
What is this?
This MCP server lets any AI agent (Claude, GPT, Copilot, or any MCP-compatible client) use MenteDB as persistent memory. It connects to MenteDB Cloud by default: no local database, no file locks, and multiple sessions can run simultaneously.
Quick Start
Install and configure in one command:

```sh
npx mentedb-mcp@latest setup copilot
```

Then authenticate:

```sh
npx mentedb-mcp@latest login
```
That's it. Your agent now has persistent memory that works across all your sessions and devices. Replace `copilot` with `cursor` or `claude` for other editors.
How it works
Once logged in, the MCP server runs as a thin HTTP client: all memory operations (store, search, recall) are handled by MenteDB Cloud. This means:
- No local database locks
- Multiple editor sessions can run simultaneously
- Memories sync across devices automatically
- Embeddings and extraction are handled server-side (no local GPU needed)
Local mode (offline/self-hosted)
If you prefer to run entirely offline without cloud:
```sh
mentedb-mcp --local
```
In local mode, the server uses an embedded database at `~/.mentedb/`. Only one instance can run at a time due to file locking.
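The single-instance restriction is the usual embedded-database pattern: the process takes an exclusive lock on a file under the data directory, and a second process that can't get the lock exits. A generic sketch of that guard (not MenteDB's actual implementation; the `lock` filename is invented):

```python
import fcntl
import os
import tempfile

def acquire_instance_lock(data_dir):
    """Try to take an exclusive, non-blocking advisory lock on
    <data_dir>/lock. Returns the open file on success (keep it open
    for the process lifetime), or None if another instance holds it."""
    os.makedirs(data_dir, exist_ok=True)
    lock_file = open(os.path.join(data_dir, "lock"), "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return lock_file
    except BlockingIOError:
        lock_file.close()
        return None

data_dir = tempfile.mkdtemp(prefix="mentedb-demo-")
first = acquire_instance_lock(data_dir)    # succeeds
second = acquire_instance_lock(data_dir)   # denied while the first lock is held
print(first is not None, second is None)   # True True
```

Cloud mode avoids this entirely because no local database files are opened.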
Alternative: install from source
If you prefer building from source instead of npx:
```sh
cargo install mentedb-mcp
mentedb-mcp setup copilot
mentedb-mcp login
```
Updating
After upgrading, instructions auto-update on server startup. To manually review and confirm changes:
```sh
mentedb-mcp update copilot
```
The update command shows you the exact instructions that will be written and asks for confirmation. If you've customized the MenteDB block, it warns you and creates a .bak backup. Your own instructions outside the MenteDB block are always preserved.
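The preserve-outside-the-block behavior is the standard managed-block pattern: only text between begin/end markers is rewritten. A generic sketch, assuming hypothetical marker strings (not MenteDB's actual markers):

```python
BEGIN = "<!-- MENTEDB:BEGIN -->"  # hypothetical marker for illustration
END = "<!-- MENTEDB:END -->"

def replace_managed_block(text, new_block):
    """Rewrite only the text between BEGIN/END markers; everything the
    user wrote outside the block is returned untouched."""
    if BEGIN in text and END in text:
        head, rest = text.split(BEGIN, 1)
        _, tail = rest.split(END, 1)
        return head + BEGIN + "\n" + new_block + "\n" + END + tail
    # no managed block yet: append a fresh one
    return text + "\n" + BEGIN + "\n" + new_block + "\n" + END + "\n"

doc = "my own notes\n<!-- MENTEDB:BEGIN -->\nold instructions\n<!-- MENTEDB:END -->\nmore notes"
updated = replace_managed_block(doc, "new instructions")
print(updated)
```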
CLI Commands
| Command | Description |
|---|---|
| `setup <client>` | Auto-configure MCP for copilot, cursor, or claude |
| `update <client>` | Update agent instructions (preserves customizations) |
| `login` | Authenticate with MenteDB Cloud via browser |
| `logout` | Remove cloud credentials |
| `status` | Check cloud connection and token validity |
Authentication
```sh
npx mentedb-mcp@latest login
```

This opens your browser to authorize the CLI. Once authenticated, credentials are saved to `~/.mentedb/cloud.json` and the MCP server connects to MenteDB Cloud on subsequent runs.
To check your connection:
```sh
npx mentedb-mcp@latest status
```
To revoke access:
```sh
npx mentedb-mcp@latest logout
```
You can also revoke sessions from the web dashboard at app.mentedb.com.
Manual Configuration
Claude Desktop
Add to `~/.config/claude/claude_desktop_config.json` (macOS/Linux) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "mentedb": {
      "command": "npx",
      "args": ["-y", "mentedb-mcp@latest"]
    }
  }
}
```
Cursor
Add to your Cursor MCP configuration:
```json
{
  "mcpServers": {
    "mentedb": {
      "command": "npx",
      "args": ["-y", "mentedb-mcp@latest"],
      "transportType": "stdio"
    }
  }
}
```
GitHub Copilot CLI
Add to `~/.copilot/mcp-config.json`:
```json
{
  "mcpServers": {
    "mentedb": {
      "command": "npx",
      "args": ["-y", "mentedb-mcp@latest"],
      "alwaysAllow": [
        "process_turn", "store_memory", "search_memories", "forget_memory"
      ]
    }
  }
}
```
The `alwaysAllow` list lets memory tools run without approval prompts.
Tools
By default, the server exposes 4 essential tools:
| Tool | Description |
|---|---|
| `process_turn` | Call every turn. Stores conversation, retrieves context, detects contradictions, generates pain warnings. Triggers automatic enrichment when an LLM is configured. Accepts `project_context` and `agent_id` for scoping. |
| `store_memory` | Store an important fact with type, tags, and optional scope. |
| `search_memories` | Semantic search by query, or get full content by memory UUID. Accepts `limit` (default 10, max 50) and a `memory_type` filter. |
| `forget_memory` | Delete a memory by ID. Accepts an optional `reason` for audit logging. |
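Under the hood, MCP clients invoke these tools with JSON-RPC `tools/call` requests. A minimal sketch of the payload a client might send for `store_memory` (the argument names mirror the tool descriptions above; the exact schema is defined by the server, so treat this as illustrative):

```python
import json

def tool_call(request_id, tool, arguments):
    """Build a JSON-RPC 2.0 tools/call request as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Argument names are assumptions based on the tool descriptions above.
req = tool_call(1, "store_memory", {
    "content": "User prefers Rust over Go",
    "memory_type": "semantic",
    "tags": ["preferences"],
    "scope": "contextual",
})
print(json.dumps(req, indent=2))
```

In practice your editor's MCP client builds these requests for you; this only shows what travels over the stdio transport.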
What `process_turn` returns
| Field | Description |
|---|---|
| `context` | Top 10 semantically relevant memories plus all always-scoped memories |
| `stored` | Number of facts auto-extracted and stored from this turn |
| `contradictions` | Number of contradictions detected |
| `contradiction_details` | Array of `{ memory_id, explanation }` for each contradiction |
| `pain_warnings` | Array of `{ id, warning }` from `anti_pattern` memories matching the current context |
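A sketch of how an agent might consume such a response (field names are taken from the table above; the sample payload itself is invented):

```python
def summarize_turn(result):
    """Flatten a process_turn result into human-readable notes.
    Field names follow the response table above; illustrative only."""
    notes = [f"retrieved {len(result['context'])} memories, stored {result['stored']} facts"]
    for c in result.get("contradiction_details", []):
        notes.append(f"contradicts {c['memory_id']}: {c['explanation']}")
    for w in result.get("pain_warnings", []):
        notes.append(f"warning {w['id']}: {w['warning']}")
    return notes

sample = {
    "context": [{"content": "User prefers Rust over Go"}],
    "stored": 2,
    "contradictions": 1,
    "contradiction_details": [{"memory_id": "abc-123", "explanation": "User now says they prefer Go"}],
    "pain_warnings": [{"id": "ap-7", "warning": "Never force-push to main"}],
}
for note in summarize_turn(sample):
    print(note)
```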
Automatic Enrichment
When an LLM provider is configured, `process_turn` automatically triggers a background enrichment pipeline that enhances your memory graph over time:
| Phase | What it does |
|---|---|
| Extraction | Converts raw conversations into structured semantic facts and entity nodes |
| Entity Linking | Resolves duplicates and aliases (e.g., "JS" → "JavaScript") using rules + LLM |
| Community Detection | Groups related entities and generates summaries per community |
| User Model | Builds an always-available user profile from accumulated knowledge |
Enrichment is fully automatic: no additional tools or configuration needed beyond setting an LLM provider. Results feed directly into future `process_turn` context retrieval, improving recall quality over time.
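To make the entity-linking phase concrete, here is a toy rules-only resolver in the spirit of the pipeline above. The alias table and matching order are invented for illustration; the real pipeline also consults an LLM:

```python
# Toy alias table; the real pipeline combines rules with an LLM.
ALIASES = {"js": "JavaScript", "ts": "TypeScript", "pg": "Postgres"}

def resolve_entity(mention, known_entities):
    """Resolve a mention to a canonical entity: exact match first,
    then the alias table, then a case-insensitive match."""
    if mention in known_entities:
        return mention
    if mention.lower() in ALIASES:
        return ALIASES[mention.lower()]
    for entity in known_entities:
        if entity.lower() == mention.lower():
            return entity
    return mention  # genuinely new entity: keep as-is

known = {"JavaScript", "Rust", "Postgres"}
print(resolve_entity("JS", known))       # JavaScript
print(resolve_entity("rust", known))     # Rust
print(resolve_entity("MenteDB", known))  # MenteDB (new entity)
```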
Configure an LLM provider via environment variables:
```sh
# OpenAI (recommended)
export MENTEDB_OPENAI_API_KEY=sk-...

# Or Anthropic
export MENTEDB_ANTHROPIC_API_KEY=sk-ant-...

# Or Ollama (local, no key needed)
export MENTEDB_LLM_PROVIDER=ollama
```
Without an LLM provider, the MCP server still works normally; enrichment simply doesn't run.
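The provider-specific key variables act as shortcuts, as described: setting an OpenAI or Anthropic key selects that provider automatically. A sketch of that selection logic (the precedence order here is an assumption, not documented behavior):

```python
def resolve_llm_provider(env):
    """Pick the LLM provider from environment variables. An explicit
    MENTEDB_LLM_PROVIDER wins; otherwise a provider-specific key selects
    that provider. Precedence order is an assumption for illustration."""
    if env.get("MENTEDB_LLM_PROVIDER"):
        return env["MENTEDB_LLM_PROVIDER"]
    if env.get("MENTEDB_OPENAI_API_KEY"):
        return "openai"
    if env.get("MENTEDB_ANTHROPIC_API_KEY"):
        return "anthropic"
    return None  # no provider configured: enrichment does not run

print(resolve_llm_provider({"MENTEDB_OPENAI_API_KEY": "sk-..."}))  # openai
print(resolve_llm_provider({"MENTEDB_LLM_PROVIDER": "ollama"}))    # ollama
print(resolve_llm_provider({}))                                    # None
```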
Memory Types
| Type | Use for | Example |
|---|---|---|
| `semantic` | Facts, preferences, project details | "User prefers Rust over Go" |
| `episodic` | What happened in a specific interaction | "Debugged OOM in prod on Jan 5" |
| `procedural` | How to do things | "To release: bump version, tag, push" |
| `correction` | Something was wrong and is now right | "API key goes in .env, not config.toml" |
| `anti_pattern` | Things to never do | "Never force-push to main" |
| `reasoning` | Why a decision was made | "Chose DynamoDB over Postgres for scaling" |
Memory Scope
| Scope | Behavior |
|---|---|
| `contextual` (default) | Retrieved by semantic similarity when relevant to the conversation |
| `always` | Returned on every `process_turn` call regardless of topic. Use for critical rules. |
Set `scope: 'always'` when the user says "always remember this" or states a hard constraint.
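A toy heuristic for that rule (the trigger phrases are invented for illustration; a real agent would judge intent from context):

```python
# Invented trigger phrases, for illustration only.
ALWAYS_TRIGGERS = ("always remember", "never forget", "hard rule", "must always", "must never")

def pick_scope(user_message):
    """Return 'always' when the user states a hard constraint,
    otherwise the default 'contextual'."""
    text = user_message.lower()
    return "always" if any(t in text for t in ALWAYS_TRIGGERS) else "contextual"

print(pick_scope("Always remember this: deploys happen on Fridays only"))  # always
print(pick_scope("I used tabs in that file"))                              # contextual
```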
Memory Quality Guidelines
- One fact per memory: don't combine unrelated facts in a single memory
- Self-contained: "User prefers tabs over spaces in Python", not just "prefers tabs"
- Include context: "For mentedb-mcp, deploy with..." not just "deploy with..."
- Keep under 200 words: summarize if needed
- Don't store: greetings, temporary info, large code blocks, chitchat
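Some of these guidelines can be checked mechanically before storing. A rough sketch; the word limit comes from the list above, while the greeting set and code-block detection are invented:

```python
GREETINGS = {"hi", "hello", "thanks", "thank you", "bye"}  # invented list

def check_memory_quality(content):
    """Return guideline violations for a candidate memory, mirroring
    the checklist above (word limit and 'don't store' rules)."""
    problems = []
    if len(content.split()) > 200:
        problems.append("over 200 words; summarize first")
    if content.strip().lower() in GREETINGS:
        problems.append("chitchat; don't store")
    if "```" in content or content.count("\n") > 20:
        problems.append("large code block; don't store")
    return problems

print(check_memory_quality("User prefers tabs over spaces in Python"))  # []
print(check_memory_quality("thanks"))
```

Whether a memory is self-contained or combines unrelated facts still needs the agent's judgment; only the cheap checks are automatable.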
Resilience
Even if `process_turn` fails or errors on a turn, always call it again on the next turn. Never skip it because of a prior failure.
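In client code this amounts to calling `process_turn` inside a handler that never latches a failure. A sketch with a stand-in callable (`call_process_turn` is a placeholder for your actual MCP call):

```python
def safe_process_turn(call_process_turn, turn_text):
    """Call process_turn for one turn; on failure, note it and continue.
    The next turn calls this again: failures are never sticky."""
    try:
        return call_process_turn(turn_text)
    except Exception as exc:
        print(f"process_turn failed ({exc}); continuing without memory this turn")
        return None

# Stand-in client that fails once, then recovers.
calls = []
def flaky(turn):
    calls.append(turn)
    if len(calls) == 1:
        raise RuntimeError("transient cloud error")
    return {"stored": 1}

safe_process_turn(flaky, "first turn")            # fails, but is not fatal
result = safe_process_turn(flaky, "second turn")  # called again, succeeds
print(result)  # {'stored': 1}
```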
Local mode: full tools (`--full-tools`)
In local mode (`--local`), you can expose all 32 tools with `--full-tools` for advanced memory operations, including knowledge graph traversal, consolidation, cognitive systems, and GDPR forget.
Configuration
CLI Arguments
```text
mentedb-mcp [OPTIONS]

Options:
  --local                    Force local mode (embedded database, single instance)
  --data-dir <PATH>          Data directory path [default: ~/.mentedb]
  --embedding-dim <DIM>      Embedding vector dimension [default: 128]
  --llm-provider <PROVIDER>  LLM provider for local extraction: openai, anthropic, ollama, mock [default: mock]
  --llm-api-key <KEY>        API key for the LLM provider (overrides env var)
  --llm-model <MODEL>        Model name override for the LLM provider
  --full-tools               Expose all 32 tools (local mode only; default: 4 essential tools)
  -h, --help                 Print help
```
Environment Variables
| Variable | Description |
|---|---|
| `MENTEDB_API_URL` | Override cloud API URL (default: https://api.mentedb.com) |
| `MENTEDB_CLOUD_URL` | Override cloud dashboard URL (for the login flow) |
| `MENTEDB_LLM_PROVIDER` | LLM provider: openai, anthropic, ollama, mock |
| `MENTEDB_LLM_API_KEY` | API key for LLM extraction |
| `MENTEDB_LLM_MODEL` | Model name override |
| `MENTEDB_OPENAI_API_KEY` | OpenAI API key (sets provider to openai automatically) |
| `MENTEDB_ANTHROPIC_API_KEY` | Anthropic API key (sets provider to anthropic automatically) |
The server writes logs to both stderr and a rolling file at `~/.mentedb/mentedb-mcp.log`.
Architecture
Cloud mode (default): The server runs as a lightweight HTTP proxy over stdio transport. All memory operations are forwarded to MenteDB Cloud, which handles embedding generation (via AWS Bedrock Titan), semantic search, LLM extraction (via Claude), and DynamoDB storage. No local state is kept.
Local mode (`--local`): The server uses the full MenteDB engine with an embedded fjall database, local Candle embeddings (all-MiniLM-L6-v2), and optional LLM extraction. This mode supports all 32 tools, including knowledge graph, consolidation, and cognitive systems.
Issues
Found a bug or have a feature request? Open an issue.
License
Apache-2.0
