Synq
SYNQ: ContextBridge – persistent memory layer for AI agents. SYNQ is a Chrome extension + local backend that gives your AI assistant a memory it was never designed to have.
SYNQ
Your AI forgets. SYNQ makes it remember.
Every time you open a new chat, your AI starts from zero. SYNQ gives your AI persistent, cross-session memory.
Works with Claude · ChatGPT · Gemini · DeepSeek – and Claude Code · Cursor · Windsurf via MCP.
https://github.com/user-attachments/assets/ab003d01-3e36-405c-a7a4-9eae417b77ca
Try SYNQ in 30 Seconds
Don't have Docker? No problem. Use the new Zero-Docker (SQLite) mode for a one-click setup.
Fastest path (macOS/Linux):
- Install Docker Desktop + Ollama + Node 20
- `git clone https://github.com/Eshaan-Nair/Synq && cd Synq && ./install.sh`
- Load `extension/dist` as an unpacked Chrome extension
- Open Claude/ChatGPT → click the SYNQ badge → Save Chat

⏱ First install: ~5 min (model download included)
The Problem
You're deep into a complex project. You've had 12 conversations with Claude about your architecture, your auth flow, your database schema. Then you open a new chat – and it's all gone.
SYNQ captures your conversations, distills them into a semantic knowledge graph, and automatically injects the most relevant context into every new prompt.
Table of Contents
- Key Features
- System Requirements
- Quick Start
- MCP Server
- Usage Guide
- How It Works
- Architecture
- Tech Stack
- Privacy and Security
- What's New in v1.4.4
- Documentation
- Contributing
- License
Key Features
| Feature | Description |
|---|---|
| Hybrid Search RAG | Combines Vector search with Knowledge Graph facts for 2x better accuracy |
| Auto-Connect | Intercepts every prompt and injects relevant context automatically |
| 100% Local | Ollama runs embeddings and extraction on your machine – nothing leaves your network |
| Zero Data Loss | Sliding window chunker preserves every word – no filtering, no minimum length |
| Prompt Injection Defence | Chunks scanned for injection patterns; context wrapped in XML delimiters |
| Knowledge Graph | 22 entity types, 20+ relation types – captures technical and personal context |
| D3.js Dashboard | Force-directed graph with degree-scaled nodes, hover tooltips, zoom controls |
| MCP Evolution | Smart project detection + Hybrid recall in Claude Code, Cursor, Windsurf |
| Resilient Selectors | 5-strategy fallback per platform; weekly CI detects when selectors go stale |
| Lite Mode | Runs on 4 GB RAM machines – skips Neo4j, RAG still works fully |
System Requirements
| Mode | RAM | Disk | Docker? | What runs |
|---|---|---|---|---|
| Full | 8 GB+ | 15 GB+ | Required | All features – Neo4j, MongoDB, ChromaDB, Ollama |
| Lite | 4 GB+ | 10 GB+ | Required | RAG only – MongoDB, ChromaDB (no knowledge graph) |
| SQLite | 2 GB+ | 5 GB+ | Not needed | All features – single .db file + Ollama |
No Docker? Use SQLite Mode
Set SYNQ_STORAGE_MODE=sqlite in backend/.env before starting.
The installer detects Docker automatically and sets this for you if Docker is missing.
All launchers (start.bat, start.sh, install.bat, install.sh) auto-detect RAM and choose the right mode. Override with SYNQ_PROFILE=full or SYNQ_PROFILE=lite.
Quick Start
Prerequisites
| Requirement | Version | Link |
|---|---|---|
| Docker Desktop | 24.0+ | docker.com (enable WSL2 on Windows) |
| Node.js | 20 LTS+ | nodejs.org |
| Ollama | Latest | ollama.com |
| Groq API Key | – | console.groq.com (free; only needed if Ollama is unavailable) |
First-time Setup
Windows – double-click `install.bat`. It checks Docker + Node.js, opens the Ollama download page if missing, pulls models, installs npm deps, builds all packages, detects RAM, and starts Docker with the correct profile.
**Configure API Keys** (Optional)
Copy `backend/.env.example` to `backend/.env` and add your `GROQ_API_KEY` for faster extraction if Ollama is slow or unavailable.
macOS / Linux:
git clone https://github.com/Eshaan-Nair/Synq.git
cd Synq
chmod +x install.sh && ./install.sh
Daily Use
Windows: start.bat
macOS/Linux: ./start.sh
Load the Extension
- Open Chrome → chrome://extensions
- Enable Developer mode (top-right toggle)
- Load unpacked → select `Synq/extension/dist`
- The SYNQ badge appears on Claude, ChatGPT, Gemini, and DeepSeek
Dashboard
Start the backend, then open http://localhost:3001
The dashboard is a production build served by the backend – no separate window needed.
MCP Server
As of v1.4.2, SYNQ works in any MCP-compatible AI tool.
Build the backend first:
cd backend && npm run build
Add to your AI tool's config:
Claude Desktop (~/.claude/claude_desktop_config.json):
{
"mcpServers": {
"synq": {
"command": "node",
"args": ["/path/to/Synq/backend/dist/mcp/server.js"]
}
}
}
Cursor / Windsurf (.cursor/mcp.json in project root):
{
"mcpServers": {
"synq": { "command": "node", "args": ["/path/to/Synq/backend/dist/mcp/server.js"] }
}
}
Available tools: recall_context · store_memory · search_memory · list_projects · get_project_summary
Full guide: MCP_SETUP.md
Usage Guide
Saving a Conversation
- Have a conversation on Claude, ChatGPT, Gemini, or DeepSeek
- Click the SYNQ icon in the toolbar
- Enter a project name and click Save Chat
- SYNQ scrubs PII, chunks, embeds locally, and extracts graph triples – typically under 5 seconds
Auto-Connect
Once a session is active, SYNQ auto-attaches on every page load. Just type β context is prepended automatically. Click Pause in the popup to suspend. Click again to resume.
Classic Inject
Click Inject Context (one-time) to paste the knowledge graph summary directly into the chat input for manual sending. Useful for priming a cold start.
Dashboard
Open http://localhost:3001:
| Tab | Content |
|---|---|
| Graph | D3.js force graph – hover nodes for connections |
| History | All semantic triples with timestamps |
| Chat | Full conversation with color-coded bubbles |
How It Works
1. CAPTURE
Save Chat → scrape conversation → FNV-1a deduplication
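The FNV-1a deduplication step is a standard, tiny hash; this is an illustrative 32-bit implementation, not SYNQ's exact code:

```typescript
// Standard 32-bit FNV-1a hash, shown for illustration: captured
// conversations hash to a stable fingerprint, so identical content
// can be skipped on re-save.
function fnv1a(text: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < text.length; i++) {
    hash ^= text.charCodeAt(i);
    // Multiply by the FNV prime 16777619 using shifts, then truncate
    // back to an unsigned 32-bit integer with >>> 0.
    hash =
      (hash +
        (hash << 1) +
        (hash << 4) +
        (hash << 7) +
        (hash << 8) +
        (hash << 24)) >>>
      0;
  }
  return hash;
}
```

Because the hash is deterministic, saving the same conversation twice yields the same fingerprint and the duplicate can be dropped.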
2. PRIVACY SCRUB
API keys, JWTs, emails, connection strings → [REDACTED]
Done in the browser before transmission
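A browser-side scrub pass roughly looks like this (the regexes below are illustrative examples, not SYNQ's actual rule set):

```typescript
// Illustrative redaction pass: secrets are replaced with [REDACTED]
// before the text ever leaves the browser. Patterns are examples only.
const REDACTION_PATTERNS: RegExp[] = [
  /\b(?:mongodb|postgres(?:ql)?|mysql|redis):\/\/\S+/g, // connection strings
  /\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b/g, // JWTs (header.payload.signature)
  /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, // email addresses
  /\bsk-[A-Za-z0-9]{20,}\b/g, // API-key-like tokens
];

function scrubPII(text: string): string {
  return REDACTION_PATTERNS.reduce(
    (scrubbed, pattern) => scrubbed.replace(pattern, "[REDACTED]"),
    text,
  );
}
```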
3A. VECTOR TRACK
Sliding window chunker (300 words / 80 overlap) → Ollama embeddings → ChromaDB cosine storage

3B. GRAPH TRACK
Ollama llama3.1:8b (Groq fallback if unavailable) → summarize → extract triples → Neo4j MERGE
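The vector track's chunker can be sketched from the numbers above (300-word windows, 80-word overlap); the real `chunker.ts` may differ in details:

```typescript
// Sliding-window chunker sketch: 300-word windows advancing 220 words
// at a time, so consecutive chunks overlap by 80 words and no text is
// dropped at a chunk boundary.
function slidingWindowChunks(
  text: string,
  windowSize = 300,
  overlap = 80,
): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const step = windowSize - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + windowSize).join(" "));
    if (start + windowSize >= words.length) break; // final window covers the tail
  }
  return chunks;
}
```

A 500-word transcript becomes two chunks (words 0–299 and 220–499), each embedded and stored separately.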
4. AUTO-CONNECT (every prompt)
Intercept → ChromaDB cosine search
→ sanitizeChunks() → injection patterns redacted
→ wrapInContextBlock() → XML delimiters
→ top-3 chunks prepended → sent to AI
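The wrapping step can be pictured like this; only the function name `wrapInContextBlock()` comes from the pipeline above, and the delimiter tag and format are assumptions:

```typescript
// Hypothetical context wrapper: retrieved chunks are fenced inside XML
// delimiters so the model treats them as reference data rather than
// instructions. The <synq-context> tag name is an assumption.
function wrapInContextBlock(chunks: string[]): string {
  const body = chunks
    .map((chunk) => chunk.replace(/<\/?synq-context>/g, "")) // prevent delimiter spoofing
    .join("\n---\n");
  return `<synq-context>\n${body}\n</synq-context>\n\n`;
}
```

Stripping the delimiter string out of chunk bodies keeps a stored chunk from closing the block early and smuggling instructions outside it.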
Architecture
Synq/
├── backend/src/
│   ├── mcp/                  server.ts + tools/recall|store|search|projects|summary
│   ├── middleware/           sanitize.ts
│   ├── routes/               chat · context · graph · rag
│   ├── services/             chroma · chunker · embeddings · extractor · mongo · neo4j
│   └── utils/                logger · privacy
├── dashboard/src/            React 19 + D3.js + Vite (built to dashboard/dist/)
├── extension/src/
│   ├── platform/             resolver.ts (multi-strategy selector engine)
│   ├── platforms/            claude · chatgpt · gemini · index
│   ├── content.ts            DOM scraping, prompt interception
│   └── background.ts         service worker, backend proxy
├── MCP_SETUP.md              MCP server setup guide
├── .github/workflows/        integration-tests · selector-check · release
├── docker-compose.yml        full profile
├── docker-compose.lite.yml   lite profile
├── install.bat / install.sh  first-time setup
└── start.bat / start.sh      daily launcher
Ports
| Service | Port | Notes |
|---|---|---|
| Backend + Dashboard | 3001 | API + sirv static serving |
| Neo4j | 7474 / 7687 | Full mode only |
| MongoDB | 27017 | Always |
| ChromaDB | 8000 | Always |
| Ollama | 11434 | Local AI |
| MCP Server | stdio | External tool integration |
Tech Stack
| Layer | Technology |
|---|---|
| Extension | TypeScript, Chrome MV3, esbuild |
| Backend | Node.js, Express 5, TypeScript |
| Knowledge graph | Neo4j 5.18 |
| Vector store | ChromaDB 0.6.3 (cosine) |
| Embeddings | Ollama nomic-embed-text (768-dim, CPU) |
| LLM | Ollama llama3.1:8b primary + Groq fallback |
| MCP | @modelcontextprotocol/sdk (stdio) |
| Dashboard | React 19, Vite 7, D3.js v7 |
| Static serving | sirv |
| Infrastructure | Docker Compose (full/lite profiles) |
| Testing | Jest + ts-jest, pipeline integration test |
| CI/CD | GitHub Actions – tests, selector check, auto-release |
Privacy and Security
All data lives in local Docker volumes. Nothing syncs externally.
Ollama is the primary extraction backend – fully local. Groq is an automatic fallback only if Ollama is unavailable, with a console warning.
| Control | Detail |
|---|---|
| Prompt injection defence | Chunks scanned + XML context delimiters |
| PII auto-redaction | API keys, JWTs, emails, connection strings |
| Rate limiting | 200 req/min global · 10 req/min on /api/chat/save |
| CORS | localhost:3001, localhost:5173, chrome-extension:// only |
| Input validation | sessionId as ObjectId, platform as enum, text length enforced |
| Security headers | helmet on every response |
| Shared secret | Optional X-SYNQ-Secret header |
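For intuition, the limits in the table behave like a per-client fixed-window counter. A minimal sketch (the backend's real middleware is presumably an Express rate limiter; this class is illustrative only):

```typescript
// Fixed-window rate limiter sketch: e.g. 200 requests per 60 s globally,
// or 10 per 60 s on a sensitive route. Not SYNQ's actual middleware.
class FixedWindowLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(
    private limit: number,
    private windowMs: number = 60_000,
  ) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.windows.get(key);
    if (!entry || now - entry.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 }); // fresh window
      return true;
    }
    if (entry.count >= this.limit) return false; // over budget, reject
    entry.count += 1;
    return true;
  }
}
```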
See SECURITY.md for the full threat model and vulnerability reporting policy.
What's New in v1.4.4
- Architectural Hardening – resolved UI deadlocks and ghost jobs, added robust API polling routes
- Data Portability – export and import entire sessions (with chat history and graphs) as JSON directly from the Dashboard
- Context Budgeting – character-based context window management to prevent LLM overflow
- Memory Decay – time-based relevance scoring for aging conversations
- Test Coverage & Benchmarking – expanded unit test suite and a new quantitative RAG benchmarking harness
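The context-budgeting idea above is simple to sketch: keep adding retrieved chunks until a character budget is spent. The function name and default budget here are illustrative, not SYNQ's actual values:

```typescript
// Character-based context budgeting sketch: chunks are assumed sorted
// by relevance, and lower-ranked chunks are dropped once the budget is
// exhausted, so injected context never overflows the model's window.
function fitToBudget(chunks: string[], maxChars = 8000): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const chunk of chunks) {
    if (used + chunk.length > maxChars) break; // budget exhausted
    kept.push(chunk);
    used += chunk.length;
  }
  return kept;
}
```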
See CHANGELOG.md for the full history.
Documentation
| File | Description |
|---|---|
| ARCHITECTURE.md | Data flow, security model, data models, env vars |
| RAG_PIPELINE.md | Pipeline details, scoring, threshold tuning |
| PLATFORM_SELECTORS.md | Selectors, resolver system, staleness guide |
| MCP_SETUP.md | MCP setup for all supported AI tools |
| ROADMAP.md | Versioned milestones |
| SELF_HOSTING.md | Ports, passwords, backups, reverse proxy |
| CONTRIBUTING.md | Fork workflow, commit format, new platforms |
| CHANGELOG.md | Full version history |
| SECURITY.md | Threat model, vulnerability reporting |
Contributing
Bug fixes, new platform support, UI improvements, documentation, and test coverage are all welcome.
Contributing Guide · Code of Conduct
Look for issues labeled good first issue.
License
MIT β see LICENSE.
Stop re-explaining yourself. Give your AI the memory it should have had from day one.
Built by Eshaan Nair
