Bob - Ambient AI Agent
Every AI assistant you've used requires you to drive it actively in chat.
Bob is different.
Set your goal, connect your data, and Bob goes ambient.
When you need it, it responds instantly. When it spots something you should know, it messages you. When you're busy, it keeps working.

Clone Bob (Hosted)
Then set the following environment variables in the dashboard:
ANTHROPIC_API_KEY=your-key
OPENAI_API_KEY=your-key
RAILWAY_ENVIRONMENT=production
Clone Bob (Local)
Docker (Recommended)
# Copy the Docker environment template
cp .env.docker.example .env.docker
# Edit .env.docker and add your API keys:
# ANTHROPIC_API_KEY=your_anthropic_api_key_here
# OPENAI_API_KEY=your_openai_api_key_here
# Build all services (takes ~10-15 minutes the first time)
docker compose -f docker-compose.local.yml build
# Start all services (Bob Agent + Phoenix + All MCP Servers)
docker compose -f docker-compose.local.yml up
Manual Setup
1. Install Dependencies
# Install Node.js dependencies
npm install
# Install Python dependencies for MCP servers
pip install -e packages/memory-mcp-server
pip install -e packages/observability-mcp-server
pip install -e packages/ability-mcp-server
2. Set Up Environment Variables
Create a .env file in the project root:
# Required
ANTHROPIC_API_KEY=your_anthropic_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
# Optional (defaults shown)
AGENT_MODEL=claude-sonnet-4-5-20250929
MAX_TURNS=10
PHOENIX_COLLECTOR_ENDPOINT=http://localhost:6006
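At startup, the agent needs the two required keys and can fall back to the defaults shown above for the optional values. A minimal sketch of such a config loader (hypothetical; `loadConfig` is not the project's actual API, and the variable names simply mirror the list above):

```typescript
// Hypothetical config loader: fails fast on missing required keys,
// fills in the documented defaults for the optional ones.
// Call it with process.env in a real entry point.
const REQUIRED = ["ANTHROPIC_API_KEY", "OPENAI_API_KEY"] as const;

function loadConfig(env: Record<string, string | undefined>) {
  const missing = REQUIRED.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    anthropicApiKey: env.ANTHROPIC_API_KEY!,
    openaiApiKey: env.OPENAI_API_KEY!,
    agentModel: env.AGENT_MODEL ?? "claude-sonnet-4-5-20250929",
    maxTurns: Number(env.MAX_TURNS ?? "10"),
    phoenixEndpoint: env.PHOENIX_COLLECTOR_ENDPOINT ?? "http://localhost:6006",
  };
}
```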
3. Build the Project
npm run build
4. Run Examples
Basic Command Structure:
npm run dev -- --goal "Your goal here" --evaluation "Success criteria here"
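The entry point presumably pulls `--goal` and `--evaluation` out of the argument list; a minimal sketch of that kind of flag parsing (hypothetical, not the actual `src/index.ts`):

```typescript
// Hypothetical parser for the two flags used by the CLI above.
// Each flag consumes the argument that follows it.
function parseArgs(argv: string[]): { goal?: string; evaluation?: string } {
  const out: { goal?: string; evaluation?: string } = {};
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === "--goal") out.goal = argv[++i];
    else if (argv[i] === "--evaluation") out.evaluation = argv[++i];
  }
  return out;
}
```

Note the `--` after `npm run dev`: it tells npm to pass the remaining flags through to the script rather than interpreting them itself.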
Example 1: Simple Task
npm run dev -- --goal "Calculate 10 + 15 and tell me the result" --evaluation "Returns 25"
Example 2: File Operations
npm run dev -- --goal "Read package.json and summarize the project" --evaluation "Provides project summary"
Example 3: Multi-Step Analysis
npm run dev -- --goal "Analyze the Bob Agent project: 1) Count total TypeScript files, 2) List all packages, 3) Summarize the architecture" --evaluation "Provides file count, package list, and architecture summary"
Example 4: Parallel Execution
npm run dev -- --goal "Read these three files in parallel: package.json, tsconfig.json, and README.md. Then summarize each file's purpose." --evaluation "Reads all three files and provides summaries"
Example 5: Complex Workflow
npm run dev -- --goal "Create a comprehensive report: 1) Find all .ts files, 2) Count lines of code in each, 3) Identify the largest files, 4) Summarize findings" --evaluation "Provides comprehensive code analysis"
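Example 4 asks the agent to read files in parallel; in Node.js this kind of fan-out is typically a `Promise.all` over independent reads. A hedged sketch of the pattern (illustrative only, not the agent's actual code):

```typescript
import { readFile } from "node:fs/promises";

// Read several files concurrently and pair each path with its contents.
async function readAll(paths: string[]): Promise<Array<{ path: string; text: string }>> {
  return Promise.all(
    paths.map(async (path) => ({ path, text: await readFile(path, "utf8") }))
  );
}
```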
Project Structure
bob-agent/
├── README.md                      # This file
├── LICENSE                        # MIT License
├── CONTRIBUTING.md                # Contribution guidelines
├── CHANGELOG.md                   # Version history
├── Dockerfile                     # Multi-stage Docker build
├── Dockerfile.railway             # Optimized Railway deployment
├── docker-compose.yml             # Production deployment
├── docker-compose.mcp.yml         # MCP servers only
├── docker-compose.local.yml       # Complete local stack (Recommended!)
├── .env.example                   # Environment template (native)
├── .env.docker.example            # Environment template (Docker)
├── .dockerignore                  # Docker exclusions
├── mcp.json                       # MCP server configuration (native)
├── mcp.docker.json                # MCP server configuration (Docker)
├── mcp.railway.json               # Railway MCP configuration
├── railway.toml                   # Railway deployment settings
├── setup.sh                       # Automated setup script
├── package.json                   # Node.js dependencies
├── package-lock.json              # Locked dependencies (for Railway)
├── tsconfig.json                  # TypeScript config
├── src/
│   ├── index.ts                   # Entry point
│   ├── agent.ts                   # PlanAct agent implementation
│   ├── mcp-config.ts              # MCP configuration loader
│   ├── types/
│   │   └── dag.ts                 # DAG type definitions
│   └── dag/
│       └── ExecutionDAG.ts        # DAG management with cycle detection
├── packages/
│   ├── memory-mcp-server/         # Python Memory Server (Mem0)
│   ├── observability-mcp-server/  # Python Observability Server (Phoenix)
│   ├── ability-mcp-server/        # Python Ability Server (Agent Lightning)
│   └── tools/                     # Sample MCP tools
├── tests/
│   ├── python/e2e/                # Python end-to-end tests
│   ├── typescript/integration/    # TypeScript integration tests
│   ├── scripts/                   # Test scripts and verification
│   ├── outputs/                   # Test outputs and logs
│   └── README.md                  # Test documentation
├── notepad/                       # Research and design notes
│   ├── notepad_react_planact_research_*.md
│   ├── notepad_planact_design_*.md
│   ├── notepad_dag_methods_*.md
│   └── notepad_e2e_testing_*.md
└── docs/
    ├── ARCHITECTURE_PLANACT.md    # PlanAct architecture documentation
    ├── DAG_METHODS.md             # DAG definition patterns (50+ pages)
    ├── REACT_VS_PLANACT_COMPARISON.md  # Comparison guide
    ├── TESTING_REPORT_FINAL.md    # Testing results
    ├── DEPLOYMENT.md              # Deployment guide
    └── DOCKER.md                  # Docker documentation
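`src/dag/ExecutionDAG.ts` is described above as DAG management with cycle detection. The core of cycle detection is usually a depth-first search that flags a "back edge" to a node still on the current path; a minimal sketch of that idea (hypothetical, not the file's actual implementation):

```typescript
// Detect a cycle in a directed graph given as adjacency lists.
// A node marked "visiting" that is reached again lies on the current
// DFS path, i.e. we found a back edge, i.e. a cycle.
function hasCycle(edges: Map<string, string[]>): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (node: string): boolean => {
    const s = state.get(node);
    if (s === "visiting") return true; // back edge: cycle found
    if (s === "done") return false;    // already fully explored
    state.set(node, "visiting");
    for (const next of edges.get(node) ?? []) {
      if (visit(next)) return true;
    }
    state.set(node, "done");
    return false;
  };
  for (const node of edges.keys()) {
    if (visit(node)) return true;
  }
  return false;
}
```

An execution DAG would run such a check whenever a new dependency edge is added, rejecting any edge that would make the task graph unschedulable.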
References
- Mem0: "Building Production-Ready AI Agents with Scalable Long-Term Memory" (arXiv:2504.19413)
- Agent Lightning: "Agent Lightning: Train ANY AI Agents with Reinforcement Learning" (arXiv:2508.03680)
- Mem0 Documentation
- Arize Phoenix Documentation
- Agent Lightning Documentation
- Claude Agent SDK Documentation
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Key areas for contribution:
- Additional MCP tool integrations
- Performance optimizations
- Documentation improvements
- Bug fixes and testing
- Example agent implementations
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/
Acknowledgements
Built on:
- Claude Agent SDK by Anthropic
- Mem0 memory system
- Arize Phoenix observability platform
- Agent Lightning by Microsoft Research
- FastMCP for MCP server implementation
