Beever Atlas
Turn your team's Slack, Discord, Teams & Mattermost chats into a self-maintaining wiki – automatically.
Beever Atlas pulls the conversations your team already has on Slack, Discord, Microsoft Teams, and Mattermost, extracts atomic facts, deduplicates them, and clusters them into topic pages with citations. A graph store links the people, decisions, and projects mentioned across channels. Ask questions in natural language and get answers cited back to the source messages – through the dashboard, or through MCP into Claude Code and Cursor.
If you want a knowledge base that grows on its own from the chats your team already has, this is it.
Features in action
Six short clips: connect a workspace, sync history, watch memory build, browse the auto-generated wiki, ask questions, plug external AI agents in via MCP.
Multi-Platform
Connect Slack, Discord, Teams, Mattermost, or file imports. One bot, every workspace.
Message Sync
Pull channel history on demand or on a schedule. Resumable and rate-limit aware.
Memory Ingestion
6-stage ADK pipeline distils messages into atomic facts, entities, and relationships.
LLM Wiki
Auto-maintained wiki per channel: overview, topics, people, decisions, citations.
QA Agent
Streams cited answers over SSE. Smart router picks semantic or graph per question.
MCP Server
Plug Claude Code / Cursor into your knowledge base – 16 tools, per-agent auth.
Architecture
Conversations from any supported platform flow into a unified ingestion pipeline that produces two complementary memory systems: a 3-tier semantic store (channel / topic / atomic fact) for fast hybrid search, and a graph store that extracts entities and their relationships. Those memories fuel two consumer surfaces: the LLM Wiki (distilled, auto-maintained) and QA Agents (served through the dashboard directly, or through MCP into Claude Code / Cursor).
From chat platforms to MCP agents: one ingestion path, two memory systems, two delivery surfaces.
Under the hood, three services (backend, bot, frontend) are backed by four data stores (Weaviate, Neo4j, MongoDB, Redis). See the architecture overview on the documentation site for the full design: component responsibilities, dual-memory internals, and the smart query router.
Why Wiki-First RAG?
Most RAG systems answer questions by retrieving raw message snippets and feeding them straight to an LLM. Beever Atlas takes a different approach: it continuously distils conversations into a structured, auto-maintained wiki (topic pages, entity graphs, decisions, and citations) before any query is issued. When you ask a question, the retrieval layer works against clean, deduplicated knowledge rather than noisy chat history. This means answers are more consistent, citations are traceable to source messages, and the wiki itself becomes a useful artifact your team can browse independently of the Q&A interface. The dual-memory architecture (semantic + graph) lets the query router pick the right retrieval strategy per question, keeping latency low and context precise.
A live auto-generated channel wiki: overview, concept map, topics, FAQ, glossary – distilled from 246 Slack messages, not hand-written.
The inspiration: LLMs read wikis, not chat logs
The per-channel wiki concept is directly inspired by Andrej Karpathy's observation that LLMs are far better at reasoning over curated, encyclopedic content (books, docs, wikis) than over raw conversational transcripts. Chat history is noisy, redundant, temporally scattered, and full of implicit context that only humans resolve. A wiki, by contrast, is the already-distilled form of that knowledge: deduplicated, structured, citation-bearing, and organised by topic rather than by timestamp.
Beever Atlas operationalises this insight: every synced channel gets its own auto-generated, continuously-updated wiki (sections for topics, entities, decisions, open questions, and timelines), rebuilt incrementally as new messages arrive. The QA agent retrieves against this wiki first, falling back to raw messages only when a fact hasn't been distilled yet.
What this unlocks in practice
Better answers, fewer hallucinations: retrieval operates on fact-dense prose with explicit entity relationships, not on fragmented turn-by-turn chat.
Traceable citations: every wiki claim links back to the source messages that produced it, so answers are auditable all the way down to the original Slack/Discord/Teams thread.
A browsable artifact, not just a Q&A box: the wiki is useful on its own. New teammates onboarding to a channel can read the distilled wiki instead of scrolling three months of history.
Cheaper inference at query time: the expensive distillation work happens once, at ingestion. Queries hit compact, pre-digested context instead of re-summarising raw logs on every request.
Graph-aware reasoning: the entity graph built alongside the wiki lets the query router answer relational questions ("who worked on X with Y?") that pure vector RAG struggles with.
For a detailed comparison with other LLM knowledge tools, see the comparison page on the documentation site.
Quick Start
Beever Atlas ships as a Docker Compose stack (backend + bot + web + 4 datastores). You can try a seeded demo in 30 seconds with zero keys, then pick one of three deployment options to install it for real.
1. Get the code
git clone https://github.com/beever-ai/beever-atlas.git
cd beever-atlas
2. Try the demo first (optional, no keys needed for seeding)
make demo
make demo brings up the full stack pre-loaded with a public Wikipedia corpus (Ada Lovelace + Python history). Seeding uses pre-computed fixtures – no API keys required. Asking questions via /api/ask needs a free-tier GOOGLE_API_KEY because the QA agent calls Gemini. See demo/README.md for curl examples.
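For instance, a hypothetical curl against /api/ask (the endpoint name comes from above; the auth header, payload shape, and streaming flag are assumptions – demo/README.md has the canonical examples):

```shell
# Hypothetical request shape – see demo/README.md for the real one.
# -N disables output buffering so the SSE answer stream renders as it arrives.
curl -s -N \
  -H "Authorization: Bearer dev-key-change-me" \
  -H "Content-Type: application/json" \
  -d '{"question": "When was Ada Lovelace born?"}' \
  http://localhost:8000/api/ask
```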
Skip this step if you're ready to install for real.
3. Before you start: get your API keys
Two free keys are required before installing: GOOGLE_API_KEY (Gemini) and JINA_API_KEY (embeddings). Both offer generous free tiers – enough to sync a small team's channels for testing.
Optional: a Tavily key (tavily.com) adds external web search when QA retrieval confidence is low.
Slack / Discord / Teams bot tokens are not needed yet – they are configured via the web UI after setup, not .env; the bot stores platform credentials encrypted in MongoDB.
Tip: Keep the two required keys handy before you start. Option 1 prompts for them interactively; Options 2 and 3 need them pasted into .env.
4. Choose a deployment option
Option 1 – One-line install (recommended). When to use: you want the fastest path to a running stack. Time to "up": ~2 min first run.
Option 2 – Manual Docker. When to use: CI/CD, ops environments, or when you want explicit control over every step. Time to "up": ~3 min first run.
Option 3 – Local development. When to use: active contributors who need hot-reload on backend and frontend. Time to "up": varies.
Option 1 – One-line install (recommended)
./atlas
The atlas installer walks you through a guided 4-step checklist:
Required LLM keys – prompts for GOOGLE_API_KEY (Gemini) and JINA_API_KEY (embeddings); press Enter to skip either.
Optional integrations – Tavily web search, Ollama, MCP server for Claude Code / Cursor.
Graph backend – Neo4j (default) or skip.
Auth tokens – keep dev defaults or rotate now.
Under the hood it verifies docker + docker compose, copies .env.example → .env (preserving your values on re-run, chmod 600), auto-generates CREDENTIAL_MASTER_KEY (64 hex) and WEAVIATE_API_KEY (32 hex), runs a port-conflict preflight, launches the stack via docker compose up -d --build --force-recreate --remove-orphans, and polls /api/health before printing the ready card.
Option 3 – Local development

cp .env.example .env
# Fill in GOOGLE_API_KEY, JINA_API_KEY, CREDENTIAL_MASTER_KEY, WEAVIATE_API_KEY (same as Option 2)
# Start just the databases
docker compose up -d weaviate neo4j mongodb redis
# Backend (terminal 1)
uv sync
uv run uvicorn beever_atlas.server.app:app --reload --port 8000
# Bot (terminal 2)
cd bot && npm install && npm run dev
# Web (terminal 3) – Vite dev server with HMR
cd web && npm install && npm run dev
The Vite dev server proxies /api/* to http://localhost:8000 (configured via VITE_API_URL).
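If you skip the ./atlas installer (Options 2 and 3), the two secrets it would auto-generate can be produced by hand; a sketch using Python's stdlib secrets module, with lengths taken from the installer description above (64 and 32 hex chars):

```shell
# CREDENTIAL_MASTER_KEY – 64 hex chars (32 random bytes)
python3 -c "import secrets; print(secrets.token_hex(32))"
# WEAVIATE_API_KEY – 32 hex chars (16 random bytes)
python3 -c "import secrets; print(secrets.token_hex(16))"
```

Paste the two values into .env before starting the stack.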
Before going to production
.env.example defaults are tuned for local testing. Before any real deploy, rotate the secrets that ship with placeholder values and flip the environment flag:
BEEVER_API_KEYS, BEEVER_ADMIN_TOKEN – ship as dev-key-change-me / dev-admin-change-me, public placeholders. Generate each with python -c "import secrets; print(secrets.token_hex(24))".
BRIDGE_API_KEY – shared secret between backend and bot; blank by default, required outside local dev. Same secrets.token_hex(24).
VITE_BEEVER_API_KEY, VITE_BEEVER_ADMIN_TOKEN – Vite bakes these into the web bundle at build time, so they must mirror the rotated backend values above. Copy the rotated BEEVER_API_KEYS / BEEVER_ADMIN_TOKEN values.
NEO4J_PASSWORD and the password half of NEO4J_AUTH – the dev password is public in this repo. Pick a strong password; both values must match.
BEEVER_ENV=production – enables fail-fast startup that rejects every dev default above. Flip the value in .env.
Option 1 (./atlas) handles all of this through the "Rotate auth tokens" prompt in step 4 of the checklist: answer Y and the installer generates random tokens and mirrors the VITE_* values for you. If you used Option 2 or 3, you can re-run ./atlas on the existing .env, skip every other prompt with Enter, and only accept the rotation prompt.
5. Connect a workspace

Real mode (default, ADAPTER_MOCK=false): connect a workspace in Settings → Connections. Slack / Discord / Teams tokens are entered through the UI, not .env.
Mock mode (ADAPTER_MOCK=true): uses fixture data – opt in for local UI iteration without platform credentials.
6. Sync a channel
From the dashboard: Connections → Add Workspace → Select channels → Sync.
Or via API (auto-extracts your bearer token from .env):
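The command itself is not reproduced here; purely as a hypothetical sketch (the sync path, channel ID, and the grep-based token extraction are assumptions, not the documented API):

```shell
# Hypothetical endpoint and channel ID – check the API docs for the real shape.
# Pull the bearer token out of .env, then trigger a sync for one channel.
BEEVER_API_KEY=$(grep '^BEEVER_API_KEYS=' .env | cut -d= -f2)
curl -s -X POST \
  -H "Authorization: Bearer $BEEVER_API_KEY" \
  http://localhost:8000/api/channels/C0123456/sync
```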
Beever Atlas exposes a curated MCP (Model Context Protocol) server at /mcp for AI agents like Claude Code and Cursor. This allows external code assistants to query your team's knowledge base without using the dashboard.
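One way this wiring might look for Claude Code (the server name beever-atlas is arbitrary, and the exact claude mcp add flags should be checked against your installed Claude Code version; Cursor takes an equivalent entry in its MCP settings):

```shell
# Register the Atlas MCP server with Claude Code over HTTP transport
claude mcp add --transport http beever-atlas http://localhost:8000/mcp
```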
docker compose up -d # Start in background
docker compose logs -f beever-atlas # Tail backend logs
docker compose down # Stop (keeps data)
docker compose down -v # Stop and DELETE all indexed data
make demo # Full stack + seeded demo corpus
make docker-up # Shortcut for `docker compose up -d`
Privacy & Telemetry
Beever Atlas collects no telemetry. No usage data, error reports, or analytics are sent anywhere by default. All LLM calls go through API keys you configure in your own .env, and all data stays in the databases you control.
API Stability
All /api/* endpoints are UNSTABLE in 0.1.0. v0.2.0 will introduce a /api/v1/* prefix; clients pinning current paths will break. See SECURITY.md.
Community & Contact
Discord: discord.gg/VshBCUUX – get help, share what you're building, talk to the team
X / Twitter: @Beever_AI – release notes, posts, announcements
Website: beever.ai – about the company and other projects