Grafyn
A desktop knowledge graph and Canvas for capturing how you think, what you know, and how a future digital twin should reason with your evidence.
Windows Β· macOS Β· Linux
Download Β· What It Is Β· Twin Pipeline Β· Quick Start Β· Guidelines
Early development - expect rough edges. Grafyn is currently focused on local evidence capture, knowledge organization, and the first native RAG twin workflow. It is not yet a scratch-trained personal model.
What Grafyn Is
Grafyn is a desktop-only app for building a local knowledge vault and using it inside a multi-model Canvas. The long-term goal is to become the capture layer for a personal digital twin pipeline: users work inside Grafyn, Grafyn records explicit and passive evidence about their knowledge and reasoning patterns, and later twin systems can use that evidence.
The current app is not claiming to be "you." It captures evidence about you.
The first usable twin mode is a native RAG twin:
- Retrieve relevant notes from your vault.
- Retrieve reviewed user records about your thinking and preferences.
- Assemble that context into Canvas prompts.
- Let the chosen model answer in either advisor mode or explicitly labeled simulation mode.
- Feed your accept/reject/correct/rank feedback back into the evidence loop.
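The loop above can be sketched in a few lines. This is an illustrative Python sketch, not Grafyn's actual API: the function names, record shape, and keyword-overlap relevance scoring are all assumptions standing in for the real Tantivy-backed retrieval.

```python
# Hypothetical sketch of the native RAG twin loop described above.
# Record fields ("status", "claim") and the relevance heuristic are assumed.

def build_twin_context(prompt, vault_notes, user_records, top_k=3):
    """Assemble Canvas context from vault notes and reviewed user records."""
    # Naive relevance proxy: keyword overlap between the prompt and each note.
    terms = set(prompt.lower().split())
    scored = sorted(
        vault_notes,
        key=lambda n: len(terms & set(n["text"].lower().split())),
        reverse=True,
    )
    # Only reviewed (endorsed or auto-promoted) records join the context.
    approved = [r for r in user_records if r["status"] in ("endorsed", "auto_promoted")]
    return {
        "notes": [n["title"] for n in scored[:top_k]],
        "records": [r["claim"] for r in approved],
    }

def record_feedback(record_id, verdict, evidence_log):
    """Feed accept/reject/correct/rank verdicts back into the evidence loop."""
    evidence_log.append({"record": record_id, "verdict": verdict})
```

In the real app the retrieval step is chunk-level and graph-aware; the point here is only the shape of the loop: retrieve, filter by review status, assemble, then log feedback as new evidence.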
Core Features
Knowledge Vault
- Markdown notes with `[[wikilinks]]` and YAML frontmatter.
- Draft, evidence, and canonical note status workflow.
- Full-text search powered by Tantivy.
- Backlinks, outgoing links, and graph-aware retrieval.
- Conversation import from ChatGPT, Claude, Grok, Gemini, and Codex-style exports.
Knowledge Graph And Hub Clustering
- D3 force-directed graph view for notes and topic hubs.
- Auto-managed topic hubs for broad knowledge areas.
- Deterministic graph-based clustering over explicit links and shared topic/title signals.
- Label-propagation communities so linked note groups become broader hubs instead of many tiny hubs.
- Noise filtering so model/provider names such as `Claude` do not become hubs just because they appear in a prompt.
- Major hubs keep minor recurring tags under a `Subtopics` section instead of exploding the sidebar with one folder per tiny topic.
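Label propagation is the standard trick for turning many tiny clusters into a few broad ones. The following is a minimal, deterministic sketch of the technique, not Grafyn's actual implementation: each note starts in its own community and repeatedly adopts the most common label among its linked neighbours, so linked groups converge to one hub.

```python
# Illustrative label-propagation community detection (assumed simplification
# of Grafyn's clustering): converges linked note groups onto shared labels.
from collections import Counter

def label_propagation(edges, nodes, rounds=10):
    """edges: iterable of (a, b) undirected links; returns node -> community label."""
    neighbours = {n: set() for n in nodes}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    labels = {n: n for n in nodes}  # start with one community per node
    for _ in range(rounds):
        changed = False
        for n in sorted(nodes):  # fixed iteration order keeps results deterministic
            if not neighbours[n]:
                continue
            counts = Counter(labels[m] for m in neighbours[n])
            # Break ties deterministically by label name.
            best = min(counts, key=lambda l: (-counts[l], l))
            if best != labels[n]:
                labels[n] = best
                changed = True
        if not changed:
            break
    return labels
```

Running this on two disjoint link chains yields two communities, which is exactly the "broader hubs instead of many tiny hubs" behaviour described above.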
Multi-LLM Canvas
- Compare multiple OpenRouter models side by side.
- Stream responses in parallel.
- Branch from model responses.
- Debate mode for model critique and synthesis.
- Semantic note context from the vault.
- Twin context mode using reviewed user records.
- Smart web search detection for prompts that need current information.
- Save Canvas sessions as notes.
Twin Capture And Review
- Canvas feedback controls: `Matches Me`, `Not Me`, `Correct`, `Rank Selection`, `Capture Insight`, and `Export Twin Data`.
- Local evidence capture from feedback, branching, note exports, canonical promotion, debate choices, and related passive signals.
- Local signal inference for `Fact`, `Preference`, and `ReasoningPattern` records.
- Review dashboard at `/twin` for candidate, auto-promoted, endorsed, rejected, private, and no-train records.
- Evidence resolution so records can be traced back to the prompts, sessions, models, and excerpts that supported them.
- Revert/reject support that prevents rejected inference keys from being silently auto-promoted again.
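The rejection guard can be sketched as a tiny state store. This is a hedged illustration, not Grafyn's actual types: the point is only that a rejected inference key is remembered, so re-running inference downgrades it to a candidate needing explicit re-endorsement rather than auto-promoting it again.

```python
# Hypothetical sketch of the revert/reject guard described above.
# Class and status names are illustrative, not Grafyn's actual API.

class TwinReviewStore:
    def __init__(self):
        self.records = {}         # inference_key -> current status
        self.rejected_keys = set()  # keys that must never auto-promote again

    def reject(self, key):
        self.records[key] = "rejected"
        self.rejected_keys.add(key)

    def infer(self, key, confident=True):
        """Register an inferred record; auto-promote only if never rejected."""
        if key in self.rejected_keys:
            self.records[key] = "candidate"  # needs explicit re-endorsement
        elif confident:
            self.records[key] = "auto_promoted"
        else:
            self.records[key] = "candidate"
        return self.records[key]
```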
Native RAG Twin
Canvas supports a Twin context mode with two answer modes:
- Advisor - decision-support assistant using your reviewed notes and user records.
- Simulation - explicitly labeled likely-user-style simulation. It is not represented as the actual user.
Twin context uses:
- Relevant vault notes and chunks.
- Approved user records: `endorsed` and `auto_promoted`.
- Relevant candidate records only when they match the prompt, disclosed separately as tentative.
Twin context excludes:
- `rejected`
- `private`
- `no_train`
Rejected records are preserved for export as negative evidence, not used as live answer context.
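The inclusion rules above amount to a single filter. A minimal sketch, with illustrative record fields and a simple topic-match stand-in for real relevance scoring: approved records enter twin context, matching candidates are surfaced separately as tentative, and rejected/private/no-train records never appear.

```python
# Illustrative filter for the twin-context inclusion rules above.
# Field names ("status", "topic", "claim") are assumptions.

def select_twin_records(records, prompt_topics):
    approved, tentative = [], []
    for r in records:
        status = r["status"]
        if status in ("endorsed", "auto_promoted"):
            approved.append(r["claim"])
        elif status == "candidate" and r.get("topic") in prompt_topics:
            tentative.append(r["claim"])  # disclosed separately as tentative
        # rejected / private / no_train fall through: never live answer context
    return {"approved": approved, "tentative": tentative}
```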
Twin Capture Pipeline
Grafyn currently learns in the evidence and retrieval sense, not by changing model weights.
```
User work in Canvas/Notes
        |
        v
Trace events + feedback + note actions
        |
        v
Local signal inference
        |
        v
Evidence-linked user records
        |
        v
Twin Review: endorse / reject / private / no-train
        |
        v
Native RAG twin context + export bundles
```
Current Stage: Local Evidence And RAG
Grafyn stores what happened and infers specific records such as:
- "Prefers evidence-backed implementation detail."
- "Rejects vague strategic answers."
- "Often asks for blunt tradeoff analysis."
These records are linked to evidence. They are not broad personality labels.
Export Contract
Twin exports separate reviewed records into different JSONL files:
- `approved_user_records.jsonl` - endorsed and auto-promoted records.
- `candidate_user_records.jsonl` - tentative records for later review or weak-signal use.
- `rejected_user_records.jsonl` - negative evidence for future pipelines.
The export manifest includes matching counts and paths.
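The contract above can be sketched as a partition-and-count step. The file names follow the contract; everything else (record shape, the assumption that private and no-train records are simply not exported) is illustrative.

```python
# Minimal sketch of the export contract: partition reviewed records into the
# three JSONL files and build a manifest of matching counts.
import json

def export_twin_records(records):
    buckets = {
        "approved_user_records.jsonl": [],
        "candidate_user_records.jsonl": [],
        "rejected_user_records.jsonl": [],
    }
    for r in records:
        if r["status"] in ("endorsed", "auto_promoted"):
            buckets["approved_user_records.jsonl"].append(r)
        elif r["status"] == "candidate":
            buckets["candidate_user_records.jsonl"].append(r)
        elif r["status"] == "rejected":
            buckets["rejected_user_records.jsonl"].append(r)
        # private / no_train records are assumed excluded from export entirely
    files = {path: "\n".join(json.dumps(r) for r in rows) for path, rows in buckets.items()}
    manifest = {path: len(rows) for path, rows in buckets.items()}
    return files, manifest
```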
Future Training Paths
Grafyn's data can later support stronger personal models, but those are not v1:
- RAG twin - implemented first; no model weights change.
- Preference/ranking model - learns what answer shape or decision style you choose.
- Local adapters or fine-tuning - adjusts a capable base model using reviewed examples.
- Scratch-trained personal model - research path only. Prompts alone are not enough; it would require large volumes of personal writing, decisions, outcomes, corrections, and domain evidence.
Quick Start
Download
Grab the latest installer from Releases:
| Platform | File |
|---|---|
| Windows x64 | Grafyn_*_x64-setup.exe |
| Windows ARM64 | Grafyn_*_arm64-setup.exe |
| macOS Apple Silicon | Grafyn_*_aarch64.dmg |
| Linux Debian/Ubuntu | grafyn_*_amd64.deb |
| Linux Universal | grafyn_*_amd64.AppImage |
Grafyn auto-updates after installation.
Build From Source
Prerequisites:
- Node.js 20+
- Rust via rustup
- Tauri v1 dependencies
```bash
cd frontend
npm install
node scripts/generate-icons.cjs
npm run tauri:dev     # run in development mode
npm run tauri:build   # build release installers
```
Configuration
On first launch, Grafyn walks you through setup:
- Vault path - where markdown notes are stored. Default: `~/Documents/Grafyn/vault/`.
- OpenRouter API key - required for Canvas model execution, distillation, link discovery, and native RAG twin answers.
Local vault data stays on your machine. Canvas model calls send the selected prompt context to the configured model runtime.
MCP Integration
Grafyn bundles a native Rust MCP server, `grafyn-mcp`, for desktop agents such as Claude Desktop or Codex Desktop.
Use Grafyn Settings to copy the generated MCP config snippet, or configure it manually:
```json
{
  "mcpServers": {
    "grafyn": {
      "command": "path/to/grafyn-mcp",
      "args": ["--vault", "path/to/vault", "--data", "path/to/data"]
    }
  }
}
```
The MCP binary shares the same vault and index paths as the desktop app. If the desktop app is holding the search writer lock, MCP falls back to read-only search.
Architecture
Grafyn is a single desktop app: Vue 3 frontend, Rust/Tauri backend, local filesystem storage.
```
Tauri Desktop App
├── Vue 3 Frontend
│   ├── Notes
│   ├── Knowledge Graph
│   ├── Canvas
│   └── Twin Review
├── Rust Backend
│   ├── Tauri IPC commands
│   ├── Knowledge store
│   ├── Tantivy search and chunk retrieval
│   ├── Graph index and topic clustering
│   ├── Canvas session store
│   ├── Twin evidence store
│   └── OpenRouter integration
├── grafyn-mcp
└── ~/Documents/Grafyn/
    ├── vault/
    └── data/
```
Tech Stack
| Layer | Technology |
|---|---|
| Frontend | Vue 3, Vite, Pinia, D3.js |
| Desktop | Tauri 1.8 |
| Backend | Rust |
| Search | Tantivy |
| Graph | petgraph + local graph algorithms |
| LLM Runtime | OpenRouter via reqwest |
| MCP | rmcp over stdio |
| Storage | Local markdown vault + JSON data files |
| Updates | Cloudflare R2 + Workers |
Developer Guidelines
Product Rules
- Grafyn is desktop-first and local-first. Do not add a hosted backend for core vault or twin storage.
- Treat user records as evidence-linked claims, not personality labels.
- Do not silently train on or use records marked `rejected`, `private`, or `no_train`.
- Candidate records may influence live RAG answers only when relevant and must be disclosed as tentative.
- Advisor mode is the default for decision support.
- Simulation mode must be clearly labeled as simulation.
- Scratch-trained personal models are future research, not current product behavior.
Hub And Graph Rules
- Prefer broad major hubs over many narrow hubs.
- Use graph structure first, then deterministic canonicalization as fallback.
- Model names, providers, transcript artifacts, and generic UI words should not become hubs.
- Auto-managed duplicate hubs can be merged or removed by sync.
- User-authored hubs should not be silently deleted.
- Minor recurring themes belong in a hub's `Subtopics` section unless they become large enough to justify their own major hub.
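The hub rules above can be condensed into one classification pass. This is an illustrative sketch only: the stoplist contents and the size threshold are assumptions, not Grafyn's actual values.

```python
# Hypothetical topic classifier for the hub rules above: noise words never
# become hubs, small themes become subtopics, large ones become major hubs.

HUB_STOPLIST = {"claude", "gpt", "gemini", "grok", "openrouter", "sidebar", "settings"}
MAJOR_HUB_MIN_NOTES = 5  # assumed threshold, not Grafyn's real value

def classify_topics(topic_counts):
    """topic -> note count; returns (major_hubs, subtopics), each sorted."""
    major, subtopics = [], []
    for topic, count in sorted(topic_counts.items()):
        if topic.lower() in HUB_STOPLIST:
            continue  # model/provider/UI noise never becomes a hub
        if count >= MAJOR_HUB_MIN_NOTES:
            major.append(topic)
        else:
            subtopics.append(topic)  # lives under a hub's Subtopics section
    return major, subtopics
```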
Development Commands
```bash
# Frontend tests
cd frontend
npm run test:run
npm run build

# Prepare Rust/Tauri test prerequisites
cd frontend
npm run prepare:sidecar

# Rust tests
cd frontend/src-tauri
cargo test
```
Known test noise:
- Some HomeView unit tests emit `router-link` resolution warnings.
- A Canvas store test intentionally logs a failed delete.
- Rust currently warns that `SimilarityProvider::encode_batch` is unused.
Contributing
- Fork the repository and create a feature branch.
- Keep changes scoped and evidence-backed.
- Add or update tests for behavior changes.
- Run the frontend and Rust verification commands.
- Submit a pull request.
See CLAUDE.md, WORKING_GUIDE.md, and TWIN_RAG_SPEC.md for deeper architecture and workflow notes.
