Hatchify
English | 简体中文
Cloud Version: https://hatchify.ai/ - Try Vibe Graph instantly without installation!
Introduction
Hatchify is a powerful multi-agent workflow platform that enables complex AI Agent collaboration through a dynamic graph execution engine. Built on FastAPI + AWS Strands SDK, it supports dynamic creation and execution of Agent workflows via JSON configuration.
Core Features
- Dynamic Multi-Agent Orchestration: Build and execute Agent workflows dynamically through JSON configuration
- Intelligent Routing System: Support for multiple routing strategies including Rules, JSONLogic, Router Agent, and Orchestrator
- MCP Protocol Integration: Native support for the Model Context Protocol, easily extending tool capabilities
- Web Builder: Conversational web application generation with real-time preview and deployment (in progress)
- Event-Driven Architecture: Real-time event streaming based on SSE, complete execution tracking
- Version Management: Version snapshots and rollback support for Graph specifications
- Multi-Model Support: Unified LLM interface supporting OpenAI, Gemini, Claude, and other mainstream models
- Enterprise Architecture: Layered design (API/Business/Repository), easy to extend and maintain
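Since workflows are defined as plain JSON, a graph can be built, stored, and submitted programmatically. The sketch below shows a minimal, hypothetical workflow document in the spirit of GraphSpec; the field names (`agents`, `edges`, etc.) are illustrative only, so consult the actual GraphSpec schema before relying on them.

```python
import json

# A minimal, hypothetical workflow definition in the spirit of the JSON-driven
# GraphSpec described above. Field names are illustrative, not Hatchify's real schema.
graph_spec = {
    "name": "research-and-summarize",
    "agents": [
        {"id": "researcher", "model": "gpt-4o", "tools": ["fs_read_file"]},
        {"id": "writer", "model": "claude-sonnet-4-5-20250929", "tools": []},
    ],
    "edges": [
        {"source": "researcher", "target": "writer"},
    ],
}

# Because workflows are plain JSON, they can be diffed, versioned,
# and submitted to the API like any other document.
payload = json.dumps(graph_spec, indent=2)
print(payload)
```

The same JSON shape is what conversational tools like Vibe Graph generate on your behalf.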
Quick Start
Requirements
Backend:
- Python 3.13+
- SQLite / PostgreSQL (optional)
Frontend:
- Node.js 20+
- pnpm 9+
Installation
Backend
# Clone repository
git clone https://github.com/Sider-ai/hatchify.git
cd hatchify
# Install dependencies (recommended using uv)
uv sync
Frontend
# Navigate to web directory
cd web
# Install dependencies
pnpm install
# Build icons package (required before first run)
pnpm build:icons
Configuration
Backend Configuration
- Copy configuration files
cp resources/example.mcp.toml resources/mcp.toml
cp resources/example.models.toml resources/models.toml
cp resources/example.tools.toml resources/tools.toml
- Edit model configuration (resources/models.toml)
[[models]]
name = "gpt-4o"
provider = "openai"
api_key = "your-api-key-here"
api_base = "https://api.openai.com/v1"
- Edit predefined tools configuration (resources/tools.toml) (optional)
[nano_banana]
enabled = true
model = "gemini-3-pro-image-preview"
api_key = "your-google-genai-api-key"
- Edit MCP server configuration (resources/mcp.toml) (optional)
[[servers]]
name = "filesystem"
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
Frontend Configuration
Create .env file in the web directory:
# API endpoint configuration (default: http://localhost:8000)
VITE_API_TARGET=http://localhost:8000
See web/.env.example for all available environment variables.
Launch
Backend
# Development mode
uvicorn hatchify.launch.launch:app --reload --host 0.0.0.0 --port 8000
# Or use main.py
python main.py
Visit http://localhost:8000/docs to view API documentation.
Frontend
# Navigate to web directory (if not already there)
cd web
# Development mode (with hot reload)
pnpm dev
# Production build
pnpm build
# Preview production build
pnpm preview
Visit http://localhost:5173 (default Vite dev server port) to access the web interface.
Docker Deployment
1. Build Image
docker build -t hatchify .
2. Start Container
# Run in background with port mapping and volume mounting
docker run -itd \
--name=hatchify \
-p 8000:8000 \
-v ./data:/app/data \
-v ./resources:/app/resources \
hatchify
Parameter Explanation:
- `-p 8000:8000`: Map container port 8000 to host port 8000
- `-v ./data:/app/data`: Mount the data directory (including database, storage, sessions, etc.)
- `-v ./resources:/app/resources`: Mount the configuration directory (`mcp.toml`, `models.toml`, `development.yaml`)
3. View Logs
# Real-time log viewing
docker logs -f hatchify
# View last 100 lines
docker logs --tail 100 hatchify
4. Container Management
# Stop container
docker stop hatchify
# Start container
docker start hatchify
# Restart container
docker restart hatchify
# Remove container
docker rm -f hatchify
5. Environment Variable Configuration
Override configuration with environment variables:
docker run -itd \
--name=hatchify \
-p 8000:8000 \
-e HATCHIFY__SERVER__BASE_URL=https://your-domain.com \
-e HATCHIFY__SERVER__PORT=8000 \
-v ./data:/app/data \
-v ./resources:/app/resources \
hatchify
Important Notes:
- ⚠️ For production deployment, make sure to change `HATCHIFY__SERVER__BASE_URL` to the actual public URL
- Ensure the `./data` and `./resources` directories exist with proper permissions
- Configure `resources/mcp.toml` and `resources/models.toml` before first startup
Project Structure
Hatchify/
├── hatchify/                  # Main application package
│   ├── business/              # Business layer
│   │   ├── api/v1/            # RESTful API routes
│   │   ├── db/                # Database configuration
│   │   ├── models/            # ORM models
│   │   ├── repositories/      # Data access layer
│   │   └── services/          # Business logic layer
│   ├── common/                # Shared layer
│   │   ├── domain/            # Domain models (Entity, Event)
│   │   ├── extensions/        # Extension modules
│   │   └── settings/          # Configuration management
│   ├── core/                  # Core engine
│   │   ├── factory/           # Factory pattern (Agent, LLM, Tool)
│   │   ├── graph/             # Dynamic graph building system
│   │   ├── manager/           # Managers (MCP, Model, Tool)
│   │   ├── mcp/               # MCP protocol integration
│   │   └── stream_handler/    # Event stream processing
│   └── launch/                # Application entry point
├── resources/                 # Configuration directory
│   ├── development.yaml       # Environment configuration
│   ├── mcp.toml               # MCP server configuration
│   └── models.toml            # Model configuration
└── main.py                    # Program entry point
Core Features
1. Vibe Graph - Natural Language-Driven Workflow Generation
Through natural language interaction, leveraging LLM's semantic understanding to automatically generate GraphSpec specifications, enabling end-to-end conversion from requirement descriptions to executable workflows. The system uses structured output mechanisms to parse user intent into complete graph definitions containing Agent nodes, tool configurations, and routing strategies.
Core Capabilities:
- Semantic Parsing: LLM-based intent understanding, mapping natural language requirements to GraphSpec structure
- Intelligent Inference: Auto-infer Agent role positioning, tool dependencies, and inter-node routing logic
- Conversational Iteration: Support multi-turn dialogue for workflow structure optimization and dynamic node configuration
- Auto-Orchestration: Automatically select LLM models, assign tool sets, and configure routing strategies based on task characteristics
2. Flexible Graph Building System
Graphs consist of nodes and edges, supporting declarative definition of complex multi-agent collaboration processes.
Node Types:
Agent Nodes - LLM-based intelligent nodes
- General Agent: General-purpose Agent executing specific tasks (e.g., data analysis, content generation)
- Router Agent: Routing Agent that decides workflow transitions based on upstream structured output fields
- Orchestrator Agent: Orchestration Agent that centrally coordinates all nodes, supporting a `COMPLETE` signal for process termination
Each Agent can be configured with:
- Dynamic model selection (supporting OpenAI, Gemini, Claude, etc.)
- Tool set registration (MCP tools, custom local tools)
- Structured output Schema (for routing decisions and data passing)
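A structured output Schema is typically declared as a Pydantic model so downstream nodes receive typed data. The sketch below is a hypothetical schema a Router Agent could branch on; the class and field names are illustrative, not part of Hatchify's API.

```python
from typing import Literal

from pydantic import BaseModel

# Hypothetical structured-output schema an Agent might be configured with.
# A downstream Router Agent could branch on the `route` field.
class TriageDecision(BaseModel):
    route: Literal["billing", "technical", "general"]
    confidence: float
    summary: str

# An Agent constrained to this schema always emits machine-checkable JSON.
decision = TriageDecision(route="billing", confidence=0.92, summary="Refund request")
print(decision.model_dump_json())
```

Constraining Agent output this way is what makes rules-based and JSONLogic routing reliable: the routing layer matches on known fields instead of parsing free text.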
Function Nodes - Deterministic function nodes
- Defined using the `@tool` decorator as independent nodes in a Graph
- Receive structured output from upstream Agents as input
- Execute deterministic operations (e.g., data transformation, formatting, computation)
- Must return a Pydantic BaseModel type for type-safe data passing
- Referenced via `function_ref` pointing to pre-registered function names
Tools and Custom Extensions:
1. Agent Tools (Called by Agents)
- MCP Tools: Support the Model Context Protocol standard and dynamically load external tool servers
  - File system operations (`@modelcontextprotocol/server-filesystem`)
  - Git operations (`@modelcontextprotocol/server-github`)
  - Database queries, etc.
- Custom Local Tools: Define using the `@tool` decorator and register to `ToolRouter`:

from strands import tool, ToolContext
from hatchify.core.factory.tool_factory import ToolRouter

tool_router = ToolRouter()

@tool(name="add", description="Add two numbers", context=True)
async def add(a: float, b: float, tool_context: ToolContext) -> float:
    return a + b

tool_router.register(add)
2. Function Nodes (As Graph Nodes)
- Use the same `@tool` decorator but register to the Function Router
- Must define a Pydantic output model

from pydantic import BaseModel
from strands import tool

class EchoResult(BaseModel):
    text: str

@tool(name="echo_function", description="Echo input")
async def echo_function(text: str) -> EchoResult:
    return EchoResult(text=f"[ECHO] {text}")
3. Unified Configuration Management
Manage models and tools through declarative configuration files, supporting multiple Providers and transport protocols.
Model Configuration (resources/models.toml)
Support multiple Provider configurations for unified management of different LLM service providers:
default_provider = "openai-like"
[providers.openai]
id = "openai"
name = "OpenAI"
family = "openai"
base_url = "https://api.openai.com/v1"
api_key = "sk-xxx"
enabled = true
priority = 3 # Priority, lower number = higher priority
[[providers.openai.models]]
id = "gpt-4o"
name = "gpt-4o"
max_tokens = 16384
context_window = 128000
description = "..."
[providers.anthropic]
id = "anthropic"
family = "anthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-xxx"
enabled = true
priority = 4
[[providers.anthropic.models]]
id = "claude-sonnet-4-5-20250929"
max_tokens = 64000
context_window = 200000
Configuration Features:
- Support multiple Provider configurations simultaneously (OpenAI, Anthropic, DeepSeek, etc.)
- The `priority` field controls Provider fallback order (lower number = higher priority)
- Support individually disabling models (`enabled = false`)
- Compatible with OpenAI-Like interfaces (adapts third-party proxy services)
MCP Tool Configuration (resources/mcp.toml)
Support three transport protocols for dynamically loading external tool servers:
1. Stdio Transport (Local Process)
[[servers]]
name = "filesystem"
transport = "stdio"
enabled = true
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
prefix = "fs" # Tool name prefix
# Optional configuration
cwd = "/tmp"
encoding = "utf-8"
[servers.env]
NODE_ENV = "production"
[servers.tool_filters]
allowed = ["read_file", "write_file"] # Whitelist
2. SSE Transport (Server-Sent Events)
[[servers]]
name = "calculator-sse"
transport = "sse"
enabled = true
url = "http://localhost:8000/sse"
prefix = "calc"
timeout = 5
sse_read_timeout = 300
[servers.headers]
Authorization = "Bearer your-token"
3. StreamableHTTP Transport
[[servers]]
name = "weather-api"
transport = "streamablehttp"
enabled = true
url = "http://localhost:8001/mcp/"
prefix = "weather"
timeout = 30
terminate_on_close = true
MCP Configuration Features:
- Support three transport protocols (stdio / sse / streamablehttp)
- Tool filters (whitelist `allowed` / blacklist `rejected`)
- Tool name prefixes (avoid naming conflicts)
- Dynamically enable/disable servers (`enabled` field)
4. Web Builder - Vibe Website Builder
Status: In Development. Some functions may not be fully implemented yet.
Through natural language conversation, let AI automatically generate and customize web applications, from requirement description to deployment in one stop.
Tech Stack:
- React 19 + TypeScript
- Vite 7 (Build tool)
- Tailwind CSS 4 (Styling framework)
- React JSON Schema Form (Dynamic form generation)
Workflow:
1. Project Initialization
   - Auto-generate a web project based on the Graph's `input_schema` and `output_schema`
   - Generate a form page (for inputting data and triggering the Webhook)
   - Generate a result display page (intelligently renders Graph output)
2. Conversational Customization
   - The Agent can call tools to modify the project:
     - `file_read`: Read project files
     - `editor`: Edit code files
     - `file_write`: Create new files
     - `shell`: Bash tool implementation
   - Support multi-turn dialogue for iterative interface design and functionality optimization
3. Intelligent Content Rendering
   - Auto-identify output data types (images, URLs, structured data, lists, etc.)
   - Defensive programming, compatible with data-schema mismatches
   - Responsive design, adapts to various device sizes
4. One-Click Deployment
   - Auto-execute `npm install` and `npm run build`
   - Mount build artifacts at the `/preview/{graph_id}` path
   - Push build logs and progress in real time
   - Support hot updates, auto-rebuild after modifications
Use Cases:
- Quickly generate web interfaces for Graph workflows
- No frontend development experience needed, customize interfaces through conversational interaction
- Auto-generate dynamic forms based on JSON Schema
- Intelligently render various types of Graph output results
5. Environment Configuration System
Centrally manage all runtime configurations through resources/development.yaml.
Core Configuration Items:
1. Server Configuration
hatchify:
  server:
    host: 0.0.0.0
    port: 8000
    base_url: http://localhost:8000  # ⚠️ Must change to public URL in production

⚠️ Important Note: base_url is the most critical configuration item.
- Local development: http://localhost:8000
- Production deployment: must be changed to the actual public URL (e.g., https://your-domain.com)
- Impact scope: Webhook callbacks, Web Builder project API addresses, preview page resource loading
2. Model Configuration
models:
  spec_generator:   # Model used by the Vibe Graph generator
    model: claude-sonnet-4-5-20250929
    provider: anthropic
  schema_extractor: # Model used by the Schema extractor
    model: claude-sonnet-4-5-20250929
    provider: anthropic
  web_builder:      # Model used by the Web Builder
    model: claude-sonnet-4-5-20250929
    provider: anthropic
3. Database Configuration
db:
  platform: sqlite  # Currently only supports: sqlite
  sqlite:
    driver: sqlite+aiosqlite
    file: ./data/dev.db
    echo: False
    pool_pre_ping: True
⚠️ Note: The current version only supports SQLite. PostgreSQL and MySQL support will be added in future releases.
4. Storage Configuration
storage:
  platform: opendal  # Currently only supports: opendal
  opendal:
    schema: fs       # Supports: fs / s3 / oss, etc. (based on OpenDAL)
    bucket: hatchify
    folder: dev
    root: ./data/storage
5. Session Management Configuration
session_manager:
  manager: file  # Currently only supports: file
  file:
    folder: dev
    root: ./data/session
6. Web Builder Configuration
web_app_builder:
  repo_url: https://github.com/Sider-ai/hatchify-web-app-template.git
  branch: master
  workspace: ./data/workspace

  # Environment variable injection during project initialization
  init_steps:
    - type: env
      file: .env
      vars:
        VITE_API_BASE_URL: "{{base_url}}"  # Auto-uses server.base_url
        VITE_GRAPH_ID: "{{graph_id}}"
        VITE_BASE_PATH: "/preview/{{graph_id}}"

  # Security configuration
  security:
    allowed_directories:  # Whitelist: directories the Agent can access
      - ./data/workspace
      - /tmp
    sensitive_paths:      # Blacklist: sensitive paths forbidden to access
      - ~/.ssh
      - ~/.aws
      - /etc/passwd
      - /root
Environment Variable Override:
Configuration values can be overridden via environment variables using the HATCHIFY__ prefix:
# Override server port
export HATCHIFY__SERVER__PORT=8080
# Override base_url (use in production deployment)
export HATCHIFY__SERVER__BASE_URL=https://your-domain.com
# Override database platform
export HATCHIFY__DB__PLATFORM=postgresql
Configuration Priority: Environment Variables > YAML Configuration File > Default Values
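The priority chain above can be sketched in a few lines. This is an illustrative stand-in for the real settings loader, assuming only what the examples show: a `HATCHIFY__` prefix with double underscores separating nested keys, and env values kept as strings (a real loader would also coerce types).

```python
import os

# Defaults mirror the development.yaml examples above.
DEFAULTS = {"server": {"port": 8000, "base_url": "http://localhost:8000"}}

def resolve(yaml_config: dict, env: dict) -> dict:
    # Start from defaults, layer the YAML file on top, then env vars on top of that.
    config = {section: dict(values) for section, values in DEFAULTS.items()}
    for section, values in yaml_config.items():
        config.setdefault(section, {}).update(values)
    for key, value in env.items():
        if not key.startswith("HATCHIFY__"):
            continue
        # HATCHIFY__SERVER__PORT -> config["server"]["port"]
        _, section, field = key.lower().split("__", 2)
        config.setdefault(section, {})[field] = value
    return config

cfg = resolve(
    {"server": {"port": 9000}},          # value from the YAML file
    {"HATCHIFY__SERVER__PORT": "8080"},  # env var wins over YAML
)
print(cfg["server"]["port"])
```

Here the environment variable overrides the YAML value, while `base_url` falls through to its default because neither layer sets it.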
6. Enterprise-Grade Layered Architecture
Adopting classic three-tier architecture design (API โ Service โ Repository), achieving high cohesion and low coupling through generics and dependency injection.
Architecture Layers:
┌─────────────────────────────────────────────┐
│ API Layer (FastAPI Router)                  │
│ - Route definition, request validation,     │
│   response serialization                    │
│ - Dependency injection via Depends          │
└─────────────────┬───────────────────────────┘
                  │ Calls
┌─────────────────▼───────────────────────────┐
│ Service Layer (GenericService[T])           │
│ - Business logic orchestration,             │
│   transaction management                    │
│ - Cross-Repository coordination             │
└─────────────────┬───────────────────────────┘
                  │ Uses
┌─────────────────▼───────────────────────────┐
│ Repository Layer (BaseRepository[T])        │
│ - Data access abstraction, CRUD operations  │
│ - Query building, pagination encapsulation  │
└─────────────────┬───────────────────────────┘
                  │ Operates
┌─────────────────▼───────────────────────────┐
│ Database Layer (SQLAlchemy ORM)             │
│ - ORM models, database connections          │
└─────────────────────────────────────────────┘
1. Repository Layer - Data Access Abstraction
Core Features:
- Generic design, type-safe
- Asynchronous operations, high performance
- Unified pagination interface (based on `fastapi-pagination`)
- Flexible query filtering (`find_by(**filters)`)
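The repository interface can be illustrated with an in-memory stand-in. The real `BaseRepository[T]` is backed by SQLAlchemy's async sessions; only the shape of the API (generic, async, `find_by(**filters)`) is the point of this sketch.

```python
import asyncio
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

# In-memory sketch of the generic repository pattern; not Hatchify's real class.
class BaseRepository(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []

    async def add(self, item: T) -> T:
        self._items.append(item)
        return item

    async def find_by(self, **filters) -> list[T]:
        # Keep items whose attributes match every filter key/value pair.
        return [
            item
            for item in self._items
            if all(getattr(item, k, None) == v for k, v in filters.items())
        ]

@dataclass
class Graph:
    id: str
    name: str

async def main() -> list[Graph]:
    repo: BaseRepository[Graph] = BaseRepository()
    await repo.add(Graph(id="g1", name="demo"))
    await repo.add(Graph(id="g2", name="other"))
    return await repo.find_by(name="demo")

graphs = asyncio.run(main())
print([g.id for g in graphs])  # ['g1']
```

Because the repository is generic over the entity type, one base class serves every ORM model without per-entity CRUD boilerplate.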
2. Service Layer - Business Logic Orchestration
Core Features:
- Transaction management (auto commit/rollback)
- Data validation (based on Pydantic)
- Cross-Repository coordination
- Business logic reuse
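The transaction boundary described above can be sketched as follows. Every class here is a simplified stand-in (not Hatchify's real `GenericService`); the pattern shown is commit-on-success, rollback-on-failure around repository calls.

```python
import asyncio

# Fakes standing in for a SQLAlchemy session and a repository.
class FakeSession:
    def __init__(self) -> None:
        self.committed = False
        self.rolled_back = False

    async def commit(self) -> None:
        self.committed = True

    async def rollback(self) -> None:
        self.rolled_back = True

class FakeRepository:
    def __init__(self) -> None:
        self.items = []

    async def add(self, item):
        self.items.append(item)
        return item

class GenericService:
    def __init__(self, repository, session) -> None:
        self.repository = repository
        self.session = session

    async def create(self, entity):
        try:
            created = await self.repository.add(entity)
            await self.session.commit()  # success: persist the transaction
            return created
        except Exception:
            await self.session.rollback()  # failure: undo partial writes
            raise

session = FakeSession()
service = GenericService(FakeRepository(), session)
asyncio.run(service.create({"name": "demo"}))
print(session.committed)  # True
```

Keeping commit/rollback in the service layer means repositories stay free of transaction policy and can be reused across different business flows.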
3. API Layer - Routing and Dependency Injection
Core Features:
- Dependency injection (`ServiceManager`, `RepositoryManager`)
- Unified response format (`Result[T]`)
- Automatic parameter validation (Pydantic)
- Unified exception handling
Architecture Advantages:
- Separation of Concerns: Clear responsibilities per layer, easy to maintain
- Testability: Each layer can be unit tested independently
- Extensibility: Quickly extend new entities through generic base classes
- Type Safety: Generics + Pydantic ensure type correctness
- Development Efficiency: Common CRUD operations available out of the box
API Endpoints Overview
Graph Management
- `GET /api/graphs` - List all Graphs
- `POST /api/graphs` - Create a new Graph
- `GET /api/graphs/{graph_id}` - Get Graph details
- `PUT /api/graphs/{graph_id}` - Update a Graph
- `DELETE /api/graphs/{graph_id}` - Delete a Graph
Execution
- `POST /api/webhooks/{graph_id}` - Execute a Graph (Webhook)
- `GET /api/executions` - Query execution records
Web Builder
- `POST /api/web_builder/create` - Create a Web Builder session
- `POST /api/web_builder/chat` - Conversational building
- `POST /api/web_builder/deploy` - Deploy the generated web application
Version Management
- `GET /api/graph_versions` - List version history
- `POST /api/graph_versions` - Create a version snapshot
Sessions and Messages
- `GET /api/sessions` - List sessions
- `POST /api/sessions` - Create a session
- `GET /api/messages` - Query message history
System
- `GET /api/tools` - List available tools
- `GET /api/models` - List available models
Common Tasks
Adding New Agent Type
1. Define configuration in `AgentCard`
2. Add it to `GraphSpec.agents`
3. `AgentFactory` automatically handles creation
Adding New Function Node
1. Implement the function in `core/graph/functions/`
2. Register it in `FunctionManager`
3. Reference it in `GraphSpec.functions`
Adding New Tool
- Strands Tools: Implement in `core/graph/tools/`
- MCP Tools: Configure the MCP server in `resources/mcp.toml`
Adding New Event Type
1. Define the event class in `common/domain/event/` (inherit from `StreamEvent`)
2. Trigger it in the corresponding stream processor (e.g., `GraphExecutor`)
3. The frontend receives it via SSE
Custom Routing Logic
Extend routing types in `DynamicGraphBuilder._create_edge_condition()`.
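An edge condition is conceptually just a predicate over an upstream node's structured output. The sketch below is hypothetical: the real extension point is `DynamicGraphBuilder._create_edge_condition()`, and the helper and field names here are illustrative only.

```python
# Hypothetical sketch of a rules-style edge condition: a closure that checks
# one field of the upstream node's structured output. Names are illustrative.
def make_field_equals_condition(field: str, expected: object):
    def condition(upstream_output: dict) -> bool:
        # Traverse the edge only when the upstream output matches.
        return upstream_output.get(field) == expected
    return condition

route_to_billing = make_field_equals_condition("route", "billing")
print(route_to_billing({"route": "billing"}))  # True
```

A custom routing type would register a factory like this for its own condition syntax (rules, JSONLogic expressions, and so on), keeping conditions pure functions that are easy to test in isolation.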
Development Guide
Database
- Supported Databases: SQLite (default), PostgreSQL (planned), MySQL (planned)
- Connection Configuration: Via `resources/development.yaml`
- Initialization: Database tables are auto-created on app startup (`init_db()` in `business/db/session.py`)
Storage System
- Abstraction Layer: OpenDAL
- Supported Schemas: fs, s3, oss, etc.
- Configuration: Via `resources/development.yaml`
⚠️ Important Notes
- Async First: All database and I/O operations use `async`/`await`
- Dependency Injection: Services and Repositories are obtained through Manager singletons
- Version Management: A Graph's `current_spec` is the single source of truth; the version table stores snapshots
- Security: Web Builder file operations are restricted by `security.allowed_directories` (see `development.yaml`)
- Configuration Priority: Environment Variables > YAML > .env file
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Links
- Official Website: https://hatchify.ai/
- Documentation: Coming soon
- GitHub: https://github.com/Sider-ai/hatchify
Contact
For questions or feedback, please open an issue on GitHub.
Made with ❤️ by Sider.ai
