RLM Memory MCP Server
Recursive Large Model (RLM) Memory System - A Model Context Protocol (MCP) server that provides AI agents with persistent memory and semantic file discovery.
The core philosophy: The AI Agent is intentionally blinded to the file system. Instead of using ls, grep, find, or dir, the AI relies on the MCP to be its eyes and memory.
How It Works
┌────────────────────────────────────────────────────────────┐
│ YOU (Developer)                                            │
│                                                            │
│ npm start → Opens UI at http://localhost:3848              │
│ View all projects and memories in real-time                │
│ Test all tools via the testing interface                   │
└────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│ projects/ directory                                        │
│ rlm-memory-mcp-server/projects/                            │
│ ├── jumpinotech/.rlm/                                      │
│ ├── my-app/.rlm/                                           │
│ └── another-project/.rlm/                                  │
└────────────────────────────────────────────────────────────┘
                              ▲
                              │
┌────────────────────────────────────────────────────────────┐
│ AI Agents (Claude Code, Codex, etc.)                       │
│                                                            │
│ NEW! Bi-directional communication:                         │
│ Agent asks: "What files for this task?" → MCP answers      │
│ MCP asks: "Is indexing complete?" → Agent confirms         │
└────────────────────────────────────────────────────────────┘
Quick Start
cd rlm-memory-mcp-server
# Install dependencies
npm install
# Build
npm run build
# Create .env file with your Gemini API key
echo 'GEMINI_API_KEY=your-key-here' > .env
# Start the UI (for you to view memories and test tools)
npm start
# → Opens http://localhost:3848
Scripts
| Command | Description |
|---|---|
| `npm start` | Start the UI server (for viewing memories + testing) |
| `npm run dev` | Start the UI in development mode with auto-reload |
| `npm run mcp` | Run the MCP server directly (for testing) |
| `npm run build` | Build TypeScript to JavaScript |
Environment Variables
Create a .env file:
# Required for AI features
GEMINI_API_KEY=your-gemini-api-key
# Optional
UI_PORT=3848
Get a Gemini API key at Google AI Studio.
MCP Configuration for AI Agents
Claude Code
Add to ~/.claude.json:
{
"mcpServers": {
"rlm-memory": {
"command": "node",
"args": ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]
}
}
}
Or use CLI:
claude mcp add rlm-memory -- node D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js
OpenAI Codex CLI
Add to ~/.codex/config.toml:
[mcp_servers.rlm-memory]
command = "node"
args = ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]
Gemini CLI
Add to ~/.gemini/mcp.json:
{
"servers": {
"rlm-memory": {
"command": "node",
"args": ["D:\\rlm_memory\\rlm-memory-mcp-server\\dist\\index.js"]
}
}
}
Tools Available to AI Agents
For AI Agent Integration: See example_agents.md for concise rules AI agents should follow.
Core Tools
| Tool | Purpose |
|---|---|
| `rlm_init` | Initialize a new project for tracking |
| `rlm_status` | Get project statistics |
| `rlm_list_projects` | List all tracked projects |
Discovery & Search Tools
| Tool | Purpose |
|---|---|
| `rlm_query` | PRIMARY - Ask the MCP about relevant files for a user request |
| `rlm_recall_memory` | Retrieve relevant past context by keywords |
| `rlm_find_files_by_intent` | Semantic file search by natural language |
Indexing & Memory Tools
| Tool | Purpose |
|---|---|
| `rlm_index_codebase` | Scan & index an existing codebase |
| `rlm_verify_index` | Verify indexing is complete (post-index check) |
| `rlm_smart_memory` | RECOMMENDED - Create a memory with rich metadata |
| `rlm_create_memory` | Basic memory creation (legacy) |
Sitemap Management Tools
| Tool | Purpose |
|---|---|
| `rlm_manage_sitemap` | Delete, move, or update file entries when the codebase changes |
New Tools (v2.0)
1. rlm_query - Bi-Directional Communication (PRIMARY)
The main tool for AI agent ↔ MCP communication.
The AI agent asks: "The user wants to fix the login button, what files should I look at?" The MCP's Gemini searches memory, the file map, and the edit history, then returns relevant files with context.
{
"project_name": "my-app",
"user_request": "The user wants to fix the submit button color on the login form",
"include_memories": true,
"include_suggestions": true,
"max_files": 10
}
Returns:
- relevant_files: Files with descriptions, recent changes, component type, feature area
- relevant_memories: Past work related to this request
- ai_analysis: Explanation of how to approach the task
- suggestions: Tips for the AI agent
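The returned structure can be pictured as a TypeScript type. The top-level field names follow the list above; the nested field shapes are illustrative assumptions, not the server's actual Zod schemas.

```typescript
// Illustrative shape of an rlm_query response. Top-level fields match the
// documented return values; nested shapes are assumptions for this sketch.
interface RlmQueryResponse {
  relevant_files: Array<{
    path: string;
    description: string;
    recent_changes: string[]; // drawn from the file's edit history
    component_type: string;   // e.g. "button", "form"
    feature_area: string;     // e.g. "auth", "checkout"
  }>;
  relevant_memories: Array<{ user_prompt: string; changes_summary: string }>;
  ai_analysis: string;        // how to approach the task
  suggestions: string[];      // tips for the AI agent
}

// A hypothetical response for the "fix the submit button" request above.
const queryExample: RlmQueryResponse = {
  relevant_files: [{
    path: "src/components/LoginForm.tsx",
    description: "Login form component",
    recent_changes: ["Updated button color to theme.primary"],
    component_type: "form",
    feature_area: "auth",
  }],
  relevant_memories: [{
    user_prompt: "Fix login timeout",
    changes_summary: "Increased session timeout from 30min to 2hrs",
  }],
  ai_analysis: "The submit button lives in LoginForm.tsx; adjust its color token.",
  suggestions: ["Check the theme file for the primary color value"],
};
```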
2. rlm_smart_memory - Enhanced Memory Creation (RECOMMENDED)
Creates memory entries with rich metadata. The AI agent provides detailed context, and Gemini:
- Extracts optimal keywords for semantic search
- Classifies files by component type (button, form, modal, api-endpoint, etc.)
- Classifies files by feature area (auth, checkout, dashboard, etc.)
- Tracks edit history for each file
{
"project_name": "my-app",
"user_prompt": "Fix the submit button color",
"changes_context": "Changed the submit button in LoginForm to use the primary theme color instead of hardcoded blue. Also added hover state styling.",
"files_modified": [
{
"path": "src/components/LoginForm.tsx",
"change_type": "modified",
"change_summary": "Updated button color to use theme.primary, added hover state"
}
],
"new_features": ["themed-buttons"],
"affected_areas": ["auth", "ui"]
}
3. rlm_verify_index - Post-Indexing Verification
After indexing a codebase, this tool asks: "Is this everything? Are you sure?"
{
"project_name": "my-app",
"expected_features": ["authentication", "payment", "dashboard"],
"report_format": "summary"
}
Returns:
- Files indexed grouped by type and feature area
- Potential gaps detected (e.g., "No test files found")
- Confirmation prompt for the AI agent
4. rlm_manage_sitemap - Sitemap Management
Keep your sitemap in sync when the codebase changes.
AI agents can use this tool to:
- Delete entries for files that no longer exist
- Move entries when files are renamed/moved
- Update metadata (description, keywords, component_type, feature_area)
{
"project_name": "my-app",
"operations": [
{ "action": "delete", "file_path": "src/old-component.tsx" },
{ "action": "move", "file_path": "src/Button.tsx", "new_path": "src/ui/Button.tsx" },
{
"action": "update",
"file_path": "src/api/auth.ts",
"updates": {
"description": "JWT authentication service",
"keywords": ["jwt", "auth", "token"],
"feature_area": "security"
}
}
]
}
Returns:
- Summary of successful/failed operations
- Detailed results for each operation
- Current sitemap entry count
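The three operations can be sketched as a function over an in-memory map keyed by file path. The operation shapes mirror the request example above; the `applySitemapOps` helper and `Entry` type are illustrative, not the server's internals.

```typescript
// Hypothetical sketch: apply delete/move/update operations to a sitemap
// held as a Map keyed by file path. Shapes mirror the request above.
type Entry = { description?: string; keywords?: string[]; feature_area?: string };

type SitemapOp =
  | { action: "delete"; file_path: string }
  | { action: "move"; file_path: string; new_path: string }
  | { action: "update"; file_path: string; updates: Entry };

function applySitemapOps(
  sitemap: Map<string, Entry>,
  ops: SitemapOp[],
): { succeeded: number; failed: number } {
  let succeeded = 0;
  let failed = 0;
  for (const op of ops) {
    const entry = sitemap.get(op.file_path);
    if (!entry) { failed++; continue; }      // path not in the sitemap
    if (op.action === "delete") {
      sitemap.delete(op.file_path);
    } else if (op.action === "move") {
      sitemap.delete(op.file_path);
      sitemap.set(op.new_path, entry);       // keep metadata under the new path
    } else {
      sitemap.set(op.file_path, { ...entry, ...op.updates });
    }
    succeeded++;
  }
  return { succeeded, failed };
}
```

Operations against missing paths are counted as failures rather than aborting the batch, matching the per-operation results the tool reports.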
Enhanced Features
File Metadata
Each file in the map now includes:
- component_type: button, form, modal, hook, service, api-endpoint, etc.
- feature_area: auth, checkout, dashboard, user-profile, etc.
- edit_history: Array of past changes with dates and summaries
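A file-map entry with these fields could look like the following TypeScript sketch; the interface and field shapes beyond `component_type`, `feature_area`, and `edit_history` are assumptions for illustration.

```typescript
// Hypothetical file-map entry. Only component_type, feature_area, and
// edit_history are documented fields; the rest is an illustrative guess.
interface EditHistoryEntry {
  date: string;    // ISO date of the change
  summary: string; // short description of what changed
}

interface FileMapEntry {
  path: string;
  description: string;
  keywords: string[];
  component_type: string; // e.g. "button", "form", "api-endpoint"
  feature_area: string;   // e.g. "auth", "checkout", "dashboard"
  edit_history: EditHistoryEntry[];
}

const sampleEntry: FileMapEntry = {
  path: "src/components/LoginForm.tsx",
  description: "Login form with themed submit button",
  keywords: ["login", "form", "button"],
  component_type: "form",
  feature_area: "auth",
  edit_history: [
    { date: "2024-01-15", summary: "Updated button color to theme.primary" },
  ],
};
```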
Smart Semantic Search
The rlm_find_files_by_intent tool now:
- Uses component type and feature area to narrow results
- Considers edit history for relevance scoring
- Won't return ALL buttons when you ask for ONE specific button
- Provides reasoning for why files were selected
Fallback Mode
All tools work without Gemini API (keyword-based fallback):
- rlm_query: Uses weighted keyword matching
- rlm_smart_memory: Infers types from file paths
- rlm_find_files_by_intent: Basic keyword search
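Weighted keyword matching in the fallback could look roughly like the sketch below: score each indexed file by keyword, path, and feature-area overlap with the request. The weights and the `fallbackQuery` helper are illustrative, not the server's actual implementation.

```typescript
// Minimal sketch of a keyword-based fallback. Weights are illustrative:
// keyword and feature-area hits count more than path substring hits.
interface IndexedFile {
  path: string;
  keywords: string[];
  feature_area: string;
}

function fallbackQuery(
  request: string,
  files: IndexedFile[],
  maxFiles = 10,
): IndexedFile[] {
  // Split the request into lowercase terms, dropping very short words.
  const terms = request.toLowerCase().split(/\W+/).filter((t) => t.length > 2);
  const scored = files.map((file) => {
    let score = 0;
    for (const term of terms) {
      if (file.keywords.some((k) => k.toLowerCase().includes(term))) score += 2;
      if (file.path.toLowerCase().includes(term)) score += 1;
      if (file.feature_area.toLowerCase() === term) score += 2;
    }
    return { file, score };
  });
  return scored
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxFiles)
    .map((s) => s.file);
}
```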
Existing Tools Reference
rlm_init - Initialize Project
{
"project_name": "jumpinotech",
"working_directory": "D:\\projects\\jumpinotech"
}
Creates projects/jumpinotech/.rlm/ with memory storage.
rlm_recall_memory - Recall Context (CALL FIRST!)
{
"project_name": "jumpinotech",
"keywords": ["auth", "login", "session"]
}
Returns relevant memories from past work.
rlm_find_files_by_intent - Find Files
{
"project_name": "jumpinotech",
"user_prompt": "I need to fix the submit button color"
}
Uses AI to find relevant files from the semantic map.
rlm_create_memory - Save Memory (Legacy)
{
"project_name": "jumpinotech",
"user_prompt": "Fix login timeout",
"changes_summary": "Increased session timeout from 30min to 2hrs",
"files_modified": ["src/config/auth.ts"],
"keywords": ["auth", "session", "timeout"]
}
rlm_index_codebase - Index Existing Codebase
{
"project_name": "jumpinotech",
"directory_path": "D:\\projects\\jumpinotech",
"max_files": 200,
"read_content": true
}
Now also extracts: component_type, feature_area, and prompts for verification.
The RLM Workflow
For New Projects
User: "Help me work on this new project"
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 1. rlm_init                              │
│    Initialize project                    │
└──────────────────────────────────────────┘
                     │
                     ▼
Ready for RLM workflow!
For Existing Codebases (First Time)
User: "Index this codebase"
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 1. rlm_init + rlm_index_codebase         │
│    Scans directory, builds file map      │
│    with AI-generated descriptions        │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 2. rlm_verify_index                      │
│    MCP asks: "Is this everything?"       │
│    Shows what was indexed + gaps         │
└──────────────────────────────────────────┘
                     │
                     ▼
Project is ready!
Regular Task Workflow (Recommended)
User: "Fix the submit button"
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 1. rlm_query (PRIMARY TOOL)              │
│    "User wants to fix submit button"     │
│    → Gets: relevant files, past          │
│      memories, AI suggestions            │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 2. AI reads & fixes the files            │
│    Using context from rlm_query          │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 3. rlm_smart_memory (MANDATORY!)         │
│    Records changes with rich context     │
│    Updates file map with edit history    │
└──────────────────────────────────────────┘
Legacy Workflow (Still Supported)
User: "Fix the submit button"
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 1. rlm_recall_memory                     │
│    keywords: ["submit", "button"]        │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 2. rlm_find_files_by_intent              │
│    "Fix submit button not working"       │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 3. AI reads & fixes the files            │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ 4. rlm_create_memory                     │
│    Records what was done                 │
└──────────────────────────────────────────┘
Project Structure
rlm-memory-mcp-server/
├── src/
│   ├── index.ts              # MCP server (for AI agents via stdio)
│   ├── ui/
│   │   └── server.ts         # Web UI (for you at localhost:3848)
│   ├── services/
│   │   ├── database.ts       # File-based storage
│   │   └── gemini.ts         # Gemini AI (semantic search, keywords)
│   ├── tools/
│   │   ├── query.ts          # NEW: rlm_query
│   │   ├── smart-memory.ts   # NEW: rlm_smart_memory
│   │   ├── verify-index.ts   # NEW: rlm_verify_index
│   │   ├── index-codebase.ts # Enhanced with types
│   │   ├── find-files.ts     # Enhanced semantic search
│   │   ├── recall-memory.ts
│   │   ├── create-memory.ts
│   │   └── init-status.ts
│   └── schemas/              # Zod validation
├── projects/                 # All project data stored here
│   ├── jumpinotech/.rlm/
│   └── my-app/.rlm/
├── dist/                     # Built JavaScript
├── .env                      # Your API keys
└── package.json
Web UI Features
Open http://localhost:3848 after running npm start:
- Real-time updates - Auto-refreshes every 5 seconds
- Project browser - See all tracked projects
- Memory viewer - View all memories with timestamps
- File map - See the semantic file index with component types and feature areas
- Search - Filter projects by name
- Tool testing - Test all MCP tools directly from the UI
FAQ
Why not store data in each project folder?
Centralized storage in projects/ means:
- One place to back up all AI memories
- Easy to view across all projects in the UI
- No cluttering project repos with .rlm folders
- Works even if you delete project folders
Can I use this without Gemini API?
Yes! Falls back to keyword matching. AI features just won't be as smart.
How do I back up my memories?
Just copy the projects/ folder.
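For example, a dated copy could be made like this (the directory layout here is a stand-in created for illustration; in practice you would copy the server's real projects/ folder):

```shell
# Create a stand-in projects/ directory so this snippet is self-contained;
# in real use, run the cp from the rlm-memory-mcp-server directory.
mkdir -p projects/my-app/.rlm
echo '{}' > projects/my-app/.rlm/memory.json

# Copy the whole projects/ tree to a dated backup folder.
cp -r projects/ "projects-backup-$(date +%Y%m%d)"
```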
What's the difference between rlm_query and rlm_recall_memory?
- rlm_query: Comprehensive - searches files + memories + edit history, returns AI analysis and suggestions
- rlm_recall_memory: Simple - just searches memories by keywords
What's the difference between rlm_smart_memory and rlm_create_memory?
- rlm_smart_memory: Rich metadata - extracts component types, feature areas, tracks edit history
- rlm_create_memory: Basic - just stores the memory entry
License
MIT
