macchiatoBot
English | δΈζ
macchiatoBot is an LLM assistant built around a tool-driven, kernel-style runtime. The project is designed for long-running use: interactive chat, scheduled automation, multi-session concurrency, and frontend integrations all go through the same execution pipeline.
The current codebase separates reasoning from execution:
- AgentCore handles prompt building, LLM calls, memory recall, and the tool-calling loop.
- AgentKernel executes tools, enforces tool visibility and permissions, and handles context compression.
- KernelScheduler and the automation layer own session lifecycle, IPC, queueing, and background jobs.
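The split above can be sketched in a few lines of Python. This is an illustrative skeleton, not the project's actual API: class names match the modules described, but the method names and the permission check are assumptions.

```python
# Hypothetical sketch of the reasoning/execution split.
# Method names and the permission gate are illustrative only.

class AgentKernel:
    """Execution layer: runs tools; the reasoning layer never runs them directly."""
    def __init__(self, tools):
        self.tools = tools  # name -> callable; acts as the visibility list

    def execute(self, name, **kwargs):
        if name not in self.tools:       # tool visibility / permission gate
            raise PermissionError(name)
        return self.tools[name](**kwargs)

class AgentCore:
    """Reasoning layer: builds prompts and drives the tool-calling loop via the kernel."""
    def __init__(self, kernel):
        self.kernel = kernel

    def run_turn(self, tool_calls):
        # In the real loop the LLM emits the tool calls; here they are given.
        return [self.kernel.execute(name, **args) for name, args in tool_calls]

kernel = AgentKernel({"echo": lambda text: text})
core = AgentCore(kernel)
print(core.run_turn([("echo", {"text": "hi"})]))  # ['hi']
```

The point of the split is that every tool call, whatever frontend it came from, passes through the same kernel-side gate before execution.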
What It Supports
- Interactive CLI backed by a long-running daemon
- Feishu WebSocket gateway with shared sessions and slash commands
- Local MCP stdio server
- Scheduled jobs and notification-oriented automation
- Tool registry with file, bash, memory, web, multimodal, Canvas, Shuiyuan, and SJTU helpers
- Multi-provider LLM routing with runtime model switching
- Working memory, chat history retrieval, and long-term/content memory
- Subagent and multi-agent tooling
Architecture
Frontend / External Trigger
ββ CLI
ββ Feishu gateway
ββ MCP stdio server
ββ Automation jobs
β
βΌ
Automation IPC / Core Gateway / Task Queue
β
βΌ
KernelScheduler / CorePool / SessionSummarizer
β
βΌ
AgentKernel
ββ tool execution
ββ permission checks
ββ path resolution
ββ context compression
β
βΌ
AgentCore
ββ prompt assembly
ββ LLM provider routing
ββ working set + tool selection
ββ memory recall / persistence
ββ multi-turn reasoning loop
Layer Map
| Layer | Main modules | Responsibility |
|---|---|---|
| Frontend | main.py, feishu_ws_gateway.py, mcp_server.py, src/frontend/* | User entrypoints and channel-specific adapters |
| Automation | automation_daemon.py, src/system/automation/* | IPC server/client, scheduled jobs, queue consumption, session rotation |
| Kernel | src/system/kernel/* | Core pooling, scheduling, terminal shell, output routing, summarization |
| Agent runtime | src/agent_core/agent/*, src/agent_core/llm/*, src/agent_core/context/* | Prompting, LLM loop, checkpoints, multimodal staging, session state |
| Tooling | src/system/tools/*, src/agent_core/tools/*, src/agent_core/mcp/* | Tool registry, permissions, MCP proxying, runtime tools |
| Integrations | src/frontend/feishu/*, src/frontend/shuiyuan_integration/*, src/frontend/canvas_integration/* | External platform integration and connector logic |
Repository Layout
src/
βββ agent_core/
β βββ agent/ # AgentCore, checkpoints, prompt builder, workspace/memory paths
β βββ llm/ # Provider resolution and OpenAI-compatible adapters
β βββ memory/ # Working memory, long-term memory, chat history DB
β βββ tools/ # Core tools such as bash / ask_user / permission flow
β βββ mcp/ # MCP client, pool, and proxy tools
β βββ prompts/ # System prompts and skills
βββ system/
β βββ automation/ # Daemon runtime, queue, IPC, connectors, config sync
β βββ kernel/ # AgentKernel, scheduler, core pool, terminal
β βββ tools/ # App-level tools: memory, web, canvas, shuiyuan, planner
β βββ multi_agent/ # Multi-agent registry and constants
βββ frontend/
βββ cli/ # Interactive CLI loop
βββ feishu/ # Feishu gateway, cards, callbacks, routing
βββ mcp_server/ # Local MCP stdio server
βββ canvas_integration/
βββ shuiyuan_integration/
Quick Start
1. Install dependencies
uv sync --all-groups
Optional helper:
source init.sh
init.sh is a convenience script. It runs uv sync, exports PYTHONPATH, and loads .env into the current shell. You do not need to source it before every command if your environment is already set up.
2. Prepare config
cp config/config.example.yaml config/config.yaml
cp .env.example .env
Then fill provider keys in .env, for example OPENAI_API_KEY, DASHSCOPE_API_KEY, GEMINI_API_KEY, DEEPSEEK_API_KEY, or KIMI_CODE_API_KEY.
3. Start the daemon
uv run automation_daemon.py
The daemon is the shared runtime for:
- CLI requests
- Feishu requests
- scheduled automation jobs
- session expiration and rotation
4. Start a frontend
uv run main.py
uv run feishu_ws_gateway.py
For a single command:
uv run main.py "schedule a meeting tomorrow at 3pm"
Optional session identity override:
SCHEDULE_USER_ID=root SCHEDULE_SOURCE=cli uv run main.py
Runtime Model
Daemon-first workflow
main.py is a thin IPC client. It does not run the full agent locally; instead it connects to automation_daemon.py over a Unix socket. If the daemon is not running, the CLI exits with an error.
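A thin client of this kind can be sketched as follows. The socket path and the newline-delimited JSON framing are assumptions for illustration; the daemon's real wire protocol may differ.

```python
# Hypothetical sketch of a thin Unix-socket IPC client like main.py.
# SOCKET_PATH and the JSON-lines framing are assumptions, not the real protocol.
import json
import socket

SOCKET_PATH = "/tmp/macchiato_daemon.sock"  # illustrative path

def encode_request(command, **params):
    """Frame one request as a single JSON line."""
    return (json.dumps({"command": command, **params}) + "\n").encode()

def send_request(command, **params):
    """Connect, send one request, and read one reply line."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)  # fails fast if the daemon is not running
        s.sendall(encode_request(command, **params))
        return s.makefile().readline()

frame = encode_request("chat", text="hello")
```

Because the client only frames and forwards requests, starting it is cheap; all model state, memory, and tool registries live in the daemon process.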
What the daemon does
- loads config and tool registry
- syncs automation.jobs from config/config.yaml
- runs queue consumers and job scheduling
- hosts the IPC server used by CLI and other frontends
- centralizes session expiration, rotation, and summarization
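The scheduling part of such a daemon can be sketched with a priority queue of due times. This is a minimal poll-based sketch under assumed semantics; the job names and intervals are illustrative, not the project's actual automation config.

```python
# Minimal sketch of a daemon-style job scheduler (illustrative only).
import heapq

def run_due_jobs(jobs, now):
    """Pop and run every job whose next_run <= now, then reschedule it."""
    ran = []
    while jobs and jobs[0][0] <= now:
        next_run, name, interval, fn = heapq.heappop(jobs)
        fn()
        ran.append(name)
        heapq.heappush(jobs, (next_run + interval, name, interval, fn))
    return ran

jobs = []
heapq.heappush(jobs, (0, "rotate_sessions", 60, lambda: None))
heapq.heappush(jobs, (0, "sync_config", 300, lambda: None))
print(run_due_jobs(jobs, now=0))  # ['rotate_sessions', 'sync_config']
```

A real daemon would run this loop alongside the IPC server and queue consumers, typically under an event loop rather than blocking calls.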
Common Commands
CLI and Feishu share the same slash command surface through IPC:
/help
/model
/model list
/model <name>
/session
/session whoami
/session list
/session new [id]
/session switch <id>
/session delete <id>
Example:
/session
/session new cli:work
/session list
/session switch cli:root
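Sharing one command surface across CLI and Feishu implies parsing slash commands before dispatch over IPC. The command names below come from the list above, but the parser itself is an illustrative sketch, not the project's implementation.

```python
# Illustrative slash-command parser; not the project's actual dispatcher.

def parse_slash(line):
    """Split '/session new cli:work' into ('session', ['new', 'cli:work'])."""
    if not line.startswith("/"):
        return None  # plain chat message, not a command
    parts = line[1:].split()
    return (parts[0], parts[1:]) if parts else None

print(parse_slash("/session new cli:work"))  # ('session', ['new', 'cli:work'])
print(parse_slash("hello"))                  # None
```

Parsing once and dispatching the (command, args) pair over IPC is what lets both frontends behave identically without duplicating command logic.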
Configuration
Main config: config/config.yaml. Example: config/config.example.yaml.
Important areas:
| Key | Purpose |
|---|---|
| llm.* | active provider, vision provider, provider fragments, request defaults |
| multimodal.* | multimodal input limits and timeout |
| agent.* | iteration limits, subagent caps, working set size |
| tools.* | core tool exposure and template-based tool sets |
| memory.* | working-memory limits, recall policy, persistent memory |
| automation.jobs | scheduled jobs managed by the daemon |
| file_tools.* | file read/write/modify controls |
| command_tools.* | bash enablement, workspace isolation, writable roots |
| canvas.* | Canvas integration |
| shuiyuan.* | Shuiyuan integration |
| sjtu_jw.* | SJTU course schedule sync |
| mcp.* | external MCP server configuration |
| feishu.* | Feishu app and gateway settings |
Provider fragments live in config/llm/providers.d/*.yaml. The active provider is selected by llm.active, and can be changed at runtime with /model <name>.
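A fragment in config/llm/providers.d/ might look like the following. The field names here are guesses based on common OpenAI-compatible adapter configs; check config/config.example.yaml for the real schema.

```yaml
# config/llm/providers.d/deepseek.yaml (illustrative; field names are guesses)
name: deepseek
base_url: https://api.deepseek.com/v1
api_key_env: DEEPSEEK_API_KEY   # key is read from .env, not stored here
model: deepseek-chat
```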
MCP
Local MCP entry:
uv run mcp_server.py
External MCP servers can be configured under mcp.servers in config/config.yaml.
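An mcp.servers entry might look like the following. The key names are assumptions modeled on common MCP client configs; consult config/config.example.yaml for the schema this project actually uses.

```yaml
# config/config.yaml (illustrative; key names under mcp.servers are guesses)
mcp:
  servers:
    - name: filesystem
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```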
Development
uv sync --all-groups
uv run pytest tests/ -v
The test suite currently covers the agent runtime, automation, permissions, multimodal handling, frontend integrations, and tool behavior.
If you want the shell to inherit values from .env, either use source init.sh once for that shell, or load .env with your own workflow.
Additional Docs
License
MIT
