# PicoFlare
Cloudflare-native AI agent with Code Mode MCP, R2 storage, and Vectorize memory.
## What makes PicoFlare different: LLM-defined agent scaling
Describe an agent, get it. PicoFlare scales agents differently: instead of hand-coding each specialist or wiring up complex orchestration, you tell the LLM what kind of agent you want and it creates it for you.
- `/createagent` – Describe an agent in plain language (e.g. "Next.js specialist for App Router" or "Python data science expert for pandas"). The LLM writes a skill to `workspace/skills/<name>/SKILL.md`, and it's loaded into context for future use.
- Agent self-creation – The agent itself can create new agents. Ask it to "create a Rust async specialist" and it uses the `create_skill` tool to add that capability. No code changes, no redeploys.
- Skills as domain knowledge – Each skill is markdown with YAML frontmatter, injected into the system prompt. New agents become first-class capabilities the main agent can draw on.
This is the core differentiator: agent scaling via natural language, not pipelines or hardcoded roles.
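As a concrete sketch, a skill created via `/createagent` might live at `workspace/skills/nextjs-app-router/SKILL.md` and look like the following. The frontmatter field names here are illustrative assumptions; the schema actually parsed by `pkg/skills/loader.go` may differ:

```markdown
---
name: nextjs-app-router
description: Next.js specialist for the App Router
---

# Next.js App Router Specialist

Prefer server components by default; add `"use client"` only when a
component needs state, effects, or browser APIs. Use route handlers in
`app/api/*/route.ts` instead of legacy `pages/api`.
```

On the next agent turn, the loader injects this markdown into the system prompt, so the new specialist is available without a redeploy.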
## What it does
- Cloudflare MCP – Calls the Cloudflare MCP server via Streamable HTTP (JSON-RPC). Two tools: `search` (query the OpenAPI spec) and `execute` (run API calls via Code Mode). ~1k tokens instead of ~244k.
- R2 Storage – S3-compatible object storage via Cloudflare R2.
- Vectorize – Vector memory for RAG (semantic search over stored knowledge).
- Telegram – Bot channel with `/createagent` and voice notes.
## Setup
```sh
cp .env.example .env
# Fill in your Cloudflare credentials and Telegram token
```
| Variable | Source |
|---|---|
| `CLOUDFLARE_ACCOUNT_ID` | Cloudflare Dashboard |
| `CLOUDFLARE_API_TOKEN` | API Tokens → needs R2 Edit |
| `R2_ACCESS_KEY_ID` / `R2_SECRET_ACCESS_KEY` | R2 API Token from dashboard |
| `TELEGRAM_BOT_TOKEN` | @BotFather |
## Build & Run
```sh
go build -o picoflare .
./picoflare          # default: run pico-flare agent (interactive)
./picoflare agent    # run pico-flare agent (interactive)
./picoflare bot      # Telegram bot (TELEGRAM_BOT_TOKEN required)
./picoflare mcp-test # create R2 bucket + Vectorize index via MCP
./picoflare help     # show usage
```
When the MCP server is unavailable, pico-flare agent falls back to the Cloudflare REST API so you still get Workers, R2, KV, D1, and Vectorize tools.
## Webhook mode (Cloudflare deployment)
Set `TELEGRAM_WEBHOOK_URL` to run behind Cloudflare Tunnel or a reverse proxy:
```sh
TELEGRAM_WEBHOOK_URL=https://picoflare.example.com/bot ./picoflare bot
```
See DEPLOY_CLOUDFLARE.md for full instructions.
## Project Structure
```
PicoFlare/
├── main.go                  # Entry point, bot / mcp-test commands
├── pkg/
│   ├── agent/               # Agent loop, Code Mode tools, create_skill
│   ├── bot/                 # Telegram handlers, /createagent
│   ├── mcpclient/client.go  # Cloudflare MCP client (Streamable HTTP)
│   ├── skills/loader.go     # Load SKILL.md from workspace/skills/*
│   ├── storage/r2.go        # R2 object storage (S3-compatible)
│   └── memory/vectorize.go  # Vectorize REST client (RAG memory)
├── skills/
│   └── mcp-builder/         # Built-in skill: create MCP servers on Workers
├── workspace/skills/        # LLM-created agents (via /createagent)
├── AGENT_DESIGN.md          # Architecture + token-aware design
├── .env.example             # Template (no secrets)
└── .gitignore
```
## Design
See AGENT_DESIGN.md for the full architecture: MCP Builder skill, token tracking, and optimization principles. The agent scaling model (LLM-defined skills via `/createagent` and `create_skill`) is documented there as well.
## License
MIT
