AgentAI
AI Code Assistant
A command-line tool that helps you write code faster with AI assistance, smart planning, and support for multiple AI providers.
Quick Start • Features • Usage • Configuration • Providers
Features
Goal-Based Development
- Describe what you want to build in plain English
- AI automatically creates a step-by-step plan
- Handles file creation, code writing, testing, and command execution
Multiple AI Providers
- Gemini - Google's advanced AI model
- OpenAI - GPT models with strong capabilities
- OpenRouter - Access to many AI models through one service
- Ollama - Run AI models locally on your machine
- Cloudflare AI Gateway - Enterprise AI with analytics and caching
Easy Configuration
- Project Settings: Store settings in your project folder
- Global Settings: Store settings for all projects
- Environment Variables: Override settings when needed
- Switch Easily: Change AI providers and models anytime
Smart Memory
- Project Memory: Remembers your conversation and project state
- Code Understanding: Analyzes your existing code automatically
- Context Awareness: Keeps track of what you're working on
Safe and Secure
- Command Safety: Validates shell commands before running
- Local Processing: Your code stays on your computer
- No External Servers: Built-in file operations
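The README does not show how command validation actually works; a minimal allowlist check in Go (the project's language) might look like the sketch below. The function name, the allowlist contents, and the metacharacter filter are all assumptions for illustration, not AgentAI's real implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedCommands is a hypothetical allowlist; AgentAI's actual rules are not
// documented in this README.
var allowedCommands = map[string]bool{
	"go": true, "git": true, "ls": true, "cat": true, "mkdir": true,
}

// isSafeCommand approves a shell command only when its first token is on the
// allowlist and the command contains no shell metacharacters.
func isSafeCommand(cmd string) bool {
	if strings.ContainsAny(cmd, ";|&`$><") {
		return false
	}
	fields := strings.Fields(cmd)
	if len(fields) == 0 {
		return false
	}
	return allowedCommands[fields[0]]
}

func main() {
	fmt.Println(isSafeCommand("go build -o agentai .")) // true
	fmt.Println(isSafeCommand("curl evil.sh | sh"))     // false: pipe rejected
}
```

A real validator would likely be stricter (argument checks, path escapes); this only shows the shape of an allowlist-first design.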
Interactive Interface
- Terminal UI: Clean, modern interface
- Real-time Updates: See progress as it happens
- Activity Logging: Track what AgentAI is doing
Installation
- Go 1.22+
- Clone or navigate to the project:
  git clone https://github.com/marcuwynu23/agentai.git
  cd agentai
- Build:
  go build -o agentai .   # Windows: agentai.exe
- Configure (e.g., Gemini):
  agentai config set provider gemini --local
  agentai config set api_key YOUR_GEMINI_API_TOKEN --local
  agentai config set model gemini-2.5-flash --local
  Or use .env (see .env.example) with GEMINI_API_TOKEN, etc.
Configuration
- Config file: .agentai/config.json
  - Local: <current-directory>/.agentai/config.json (use --local)
  - Global: ~/.agentai/config.json (use --global)
- Keys: provider, api_key, model, base_url
- Resolution: explicit --config path → local file → global file → environment variables
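Putting the keys together, a local `.agentai/config.json` might look like the fragment below. The exact schema is not shown in this README, so treat the shape as illustrative rather than authoritative.

```json
{
  "provider": "gemini",
  "api_key": "YOUR_GEMINI_API_TOKEN",
  "model": "gemini-2.5-flash",
  "base_url": ""
}
```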
Commands:
agentai config show --local # Show repo config
agentai config show --global # Show user config
agentai config set provider ollama --local
agentai config set model llama3.2 --local
agentai config set base_url http://192.168.1.55:11434 --local # Optional
Environment (optional):
- GEMINI_API_TOKEN → Gemini API key (used when provider is gemini and no api_key in file)
- OPENAI_API_KEY → OpenAI key (for the openai provider)
- GEMINI_MODEL → Model name (e.g., gemini-2.5-flash)
- REQUEST_DELAY, MAX_RETRIES → Rate limiting
- WORKSPACE_PATH, LOGS_PATH → Paths (defaults: cwd, empty)
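The resolution order above can be sketched in Go. The Config struct mirrors the keys documented here, but the field names, function signature, and the Gemini-only env fallback are illustrative assumptions, not AgentAI's source code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Config mirrors the keys documented above; field names are assumptions.
type Config struct {
	Provider string `json:"provider"`
	APIKey   string `json:"api_key"`
	Model    string `json:"model"`
	BaseURL  string `json:"base_url"`
}

// loadConfig follows the documented resolution order: an explicit --config
// path, then the local file, then the global file, then environment variables.
func loadConfig(explicitPath string) Config {
	home, _ := os.UserHomeDir()
	candidates := []string{
		explicitPath,
		filepath.Join(".agentai", "config.json"),
		filepath.Join(home, ".agentai", "config.json"),
	}
	for _, path := range candidates {
		if path == "" {
			continue
		}
		data, err := os.ReadFile(path)
		if err != nil {
			continue // file missing or unreadable: try the next source
		}
		var cfg Config
		if json.Unmarshal(data, &cfg) == nil {
			return cfg
		}
	}
	// No file found: fall back to environment variables (Gemini shown here).
	return Config{
		Provider: "gemini",
		APIKey:   os.Getenv("GEMINI_API_TOKEN"),
		Model:    os.Getenv("GEMINI_MODEL"),
	}
}

func main() {
	cfg := loadConfig("")
	fmt.Printf("provider=%s model=%s\n", cfg.Provider, cfg.Model)
}
```

The key design point is precedence: the first readable, parseable file wins, and environment variables are only consulted when no file is found.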
Supported Providers
| Provider | Default base URL |
|---|---|
| gemini | https://generativelanguage.googleapis.com/v1beta/models |
| openai | https://api.openai.com/v1 |
| openrouter | https://openrouter.ai/api/v1 |
| ollama | http://localhost:11434 |
| cloudflare | https://gateway.ai.cloudflare.com/v1 |
Override any with base_url in config.
Cloudflare AI Gateway Setup
For Cloudflare AI Gateway, you need:
- Account ID - Find it in the Cloudflare dashboard
- API Token - Create a token with AI Gateway - Read and AI Gateway - Edit permissions
- Base URL format - https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat
Example configuration:
agentai config set provider cloudflare --local
agentai config set api_key YOUR_CLOUDFLARE_API_TOKEN --local
agentai config set base_url https://gateway.ai.cloudflare.com/v1/123456789/default/compat --local
agentai config set model openai/gpt-4 --local
Usage
Run the interactive TUI chat:
agentai chat
Then type your goal and press enter to start!
With Ollama (default localhost):
agentai config set provider ollama --local
agentai config set model llama3.2 --local
agentai chat
Development Status
- Multi-provider AI: Implemented (Gemini, OpenAI, OpenRouter, Ollama, Cloudflare AI Gateway) via raw HTTP
- Config: Local/global .agentai/config.json and env
- File operations: Create, modify, read in project directory
- Code generation: AI-generated code with cleanup
- Test creation: AI-generated or template test files
- Command execution: Validated, safe command execution
- TUI chat interface: Implemented with live activity logging
How It Works
- Config: Load provider, api_key, model, base_url (file + env).
- Project: New run → AI suggests a project name and directory; existing run → reuse.
- Analysis: Scan codebase; summarize for planner.
- Plan: AI produces a JSON plan (steps with types and dependencies).
- Execution: For each step, AI reasons then the MCP client runs file/command/test logic.
- Memory: Results and conversation saved to <project-name>/.memory.json.
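The plan in step 4 is described only as "JSON with steps, types, and dependencies"; a hypothetical shape (every field name here is invented for illustration) could be:

```json
{
  "project": "todo-api",
  "steps": [
    { "id": 1, "type": "create_file", "target": "main.go", "depends_on": [] },
    { "id": 2, "type": "write_code", "target": "main.go", "depends_on": [1] },
    { "id": 3, "type": "run_command", "command": "go test ./...", "depends_on": [2] }
  ]
}
```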
Requirements
- Go 1.22+
- API key for Gemini, OpenAI, OpenRouter, or Cloudflare AI Gateway; or local Ollama (no key)
Documentation
Core Documentation
- USAGE.md → Complete usage guide, providers, configuration, and troubleshooting
- CONTRIBUTING.md → Development guidelines and contribution process
- CHANGELOG.md → Version history and release notes
- LICENSE → MIT license information
Development Resources
- docs/architecture.md → Technical architecture and design decisions
Community Resources
- CODE_OF_CONDUCT.md → Community guidelines and code of conduct
- SECURITY.md → Security policies and vulnerability reporting
- RELEASE-NOTES.md → Detailed release information
Issue Templates
- .github/ISSUE_TEMPLATE/bug_report.md → Bug report template
- .github/ISSUE_TEMPLATE/feature_request.md → Feature request template
- .github/PULL_REQUEST_TEMPLATE.md → Pull request template
