Agent Studio Starter
Stop building AI agents from scratch. Bootstrap a starter agent app with LangGraph, CopilotKit, and beautiful generative UIs.
LangGraph Deep Agents + CopilotKit Generative UI Template
A starter template for building AI agent applications with beautiful generative UIs, combining LangChain Deep Agents (LangGraph-powered backend) and CopilotKit (React frontend with generative UI). This project demonstrates a weather assistant application that showcases real-time tool calling with custom UI components.
🎯 Project Overview
This is a project bootstrap/demonstration that serves as a template for building AI agent applications with:
- Backend: Python FastAPI service using LangChain Deep Agents framework for intelligent agent orchestration
- Frontend: Next.js application with CopilotKit for generative UI and real-time agent interaction
- Infrastructure: Kubernetes deployment with Skaffold for streamlined local development
Key Features
✨ Deep Agents Framework - Built on LangGraph with support for:
- Complex multi-step task planning
- Tool usage and sub-agent delegation
- Long-term memory with checkpointing
- Streaming responses
🎨 Generative UI - Dynamic UI components rendered based on agent tool calls:
- Custom weather cards with animations
- Real-time status updates
- Beautiful gradients and responsive design
🚀 Kubernetes-Native Deployment:
- Container orchestration
- Dev-loop automation with Skaffold
- Hot reloading for development
- Multi-service architecture
🏗️ Architecture
High-Level System Architecture
graph TB
subgraph "Frontend - Next.js + CopilotKit"
UI[React UI]
CK[CopilotKit Runtime]
RT[useRenderToolCall Hook]
end
subgraph "Backend - FastAPI + Deep Agents"
API[FastAPI Server]
DA[Deep Agent Graph]
LLM[LLM - ChatLiteLLM]
TOOLS[Tools - get_weather]
MEM[Memory Checkpointer]
end
subgraph "Kubernetes Cluster"
FS[Frontend Service]
BS[Backend Service]
end
UI --> CK
CK --> |HTTP| API
API --> DA
DA --> LLM
DA --> TOOLS
DA --> MEM
CK --> RT
RT --> |Render Tool Calls| UI
FS --> UI
BS --> API
style UI fill:#61dafb
style CK fill:#0ea5e9
style DA fill:#f59e0b
style LLM fill:#8b5cf6
Data Flow - Tool Call with Generative UI
sequenceDiagram
participant User
participant Frontend
participant CopilotKit
participant Backend
participant DeepAgent
participant LLM
participant Tool
User->>Frontend: "What's the weather in SF?"
Frontend->>CopilotKit: Send message
CopilotKit->>Backend: POST /api/copilotkit
Backend->>DeepAgent: Process message
DeepAgent->>LLM: Generate response
LLM->>DeepAgent: Call get_weather tool
DeepAgent->>Tool: get_weather("San Francisco")
Tool-->>DeepAgent: Weather data (JSON)
DeepAgent-->>Backend: Tool result
Backend-->>CopilotKit: Stream response
CopilotKit->>Frontend: Tool call event
Frontend->>Frontend: useRenderToolCall renders custom UI
Frontend-->>User: Beautiful weather card 🌤️
Component Architecture
graph LR
subgraph "Backend Container :8123"
MAIN[main.py]
UTILS[utils.py]
AGUI[ag_ui_langgraph]
CKBE[CopilotKit Backend]
MAIN --> UTILS
MAIN --> AGUI
UTILS --> |create_deep_agent| DA[Deep Agent Graph]
AGUI --> |LangGraphAGUIAgent| CKBE
end
subgraph "Frontend Container :3000"
PAGE[page.tsx]
ROUTE[route.ts]
CKRT[CopilotRuntime]
UI[Generative UI]
PAGE --> UI
ROUTE --> CKRT
CKRT --> |LangGraphHttpAgent| HTTP
end
HTTP[HTTP Client] --> |http://backend:8123| CKBE
style DA fill:#f59e0b
style UI fill:#61dafb
style CKBE fill:#0ea5e9
style CKRT fill:#0ea5e9
🐍 Backend - Deep Agents with LangGraph
The backend is a FastAPI service that uses the Deep Agents framework - an agent harness built on top of LangGraph for complex, multi-step tasks.
Technology Stack
| Component | Technology | Purpose |
|---|---|---|
| Framework | FastAPI | High-performance async API server |
| Agent Framework | Deep Agents (0.3.12) | Agent orchestration and planning |
| Runtime | LangGraph | Durable execution, streaming, HITL |
| LLM | ChatLiteLLM | Flexible LLM integration (GitHub Copilot) |
| Integration | CopilotKit | Frontend-backend agent communication |
| Memory | MemorySaver | Conversation state persistence |
Project Structure
backend/
├── src/
│   └── agent/
│       ├── main.py          # FastAPI app + CopilotKit integration
│       └── utils.py         # Agent builder + tools
├── tests/
│   └── agent/
│       └── test_main.py     # Unit tests
├── k8s/
│   └── deployment.yaml      # Kubernetes manifests
├── Dockerfile               # Container image
├── pyproject.toml           # Python dependencies
└── Makefile                 # Build and run commands
Key Components
1. Agent Builder (utils.py)
from deepagents import create_deep_agent
from copilotkit import CopilotKitMiddleware
from langchain_community.chat_models import ChatLiteLLM
from langgraph.checkpoint.memory import MemorySaver

def build_agent():
    agent_graph = create_deep_agent(
        model=ChatLiteLLM(model="github_copilot/gpt-5-mini"),
        tools=[get_weather],
        middleware=[CopilotKitMiddleware()],
        system_prompt="You are a helpful assistant",
        checkpointer=MemorySaver(),
    )
    return agent_graph
Deep Agents Features Used:
- 🧠 Model: Flexible LLM integration via LiteLLM
- 🔧 Tools: Custom tool functions (get_weather)
- 🔄 Middleware: CopilotKit for streaming and UI updates
- 💾 Checkpointer: Conversation memory across sessions
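The get_weather tool itself isn't reproduced in this README; a minimal sketch of what such a tool could look like, with canned data and hypothetical field names standing in for a real weather API call, is:

```python
import json

def get_weather(location: str) -> str:
    """Return the current weather for a location as a JSON string.

    Hypothetical stub: a real implementation would call a weather API.
    """
    # Canned data so the agent (and the frontend weather card) has
    # structured output to render
    data = {
        "location": location,
        "temperature": 18,
        "unit": "C",
        "weather": "Partly cloudy",
    }
    return json.dumps(data)
```

Returning a JSON string matters here: the frontend render callback parses the tool result with JSON.parse before building the weather card.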
2. FastAPI Server (main.py)
from fastapi import FastAPI
from ag_ui_langgraph import add_langgraph_fastapi_endpoint
from copilotkit import LangGraphAGUIAgent

from agent.utils import build_agent

app = FastAPI()
agent_graph = build_agent()

add_langgraph_fastapi_endpoint(
    app=app,
    agent=LangGraphAGUIAgent(
        name="weather_application_assistant",
        graph=agent_graph,
    ),
    path="/",
)
The LangGraphAGUIAgent wraps the Deep Agents graph and exposes it via FastAPI endpoints that CopilotKit can connect to.
Running the Backend
# Install dependencies
cd backend
uv sync
# Run locally
uv run python src/agent/main.py
# Run tests
uv run pytest
# Build Docker image
make build
The backend listens on http://0.0.0.0:8123 and exposes:
- / - LangGraph agent endpoints
- /healthz - Health check endpoint
⚛️ Frontend - Next.js with CopilotKit Generative UI
The frontend is a Next.js application that uses CopilotKit to create beautiful generative UIs that respond to agent tool calls in real-time.
Technology Stack
| Component | Technology | Purpose |
|---|---|---|
| Framework | Next.js 16 | React framework with App Router |
| UI Library | CopilotKit (1.51.4) | Agent integration + generative UI |
| Styling | Tailwind CSS | Utility-first styling |
| Agent Client | LangGraphHttpAgent | HTTP client for backend connection |
| Language | TypeScript | Type-safe development |
Project Structure
frontend/
├── src/
│   └── app/
│       ├── page.tsx         # Main page with generative UI
│       ├── layout.tsx       # App layout with CopilotKit provider
│       ├── globals.css      # Global styles
│       └── api/
│           └── copilotkit/
│               └── route.ts # CopilotKit API endpoint
├── k8s/
│   └── deployment.yaml      # Kubernetes manifests
├── Dockerfile               # Container image
├── package.json             # Node dependencies
└── next.config.ts           # Next.js configuration
Key Components
1. CopilotKit Runtime (route.ts)
import { CopilotRuntime } from "@copilotkit/runtime";
import { LangGraphHttpAgent } from "@copilotkit/runtime/langgraph";

const runtime = new CopilotRuntime({
  agents: {
    weather_assistant: new LangGraphHttpAgent({
      url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
    }),
  },
});
The LangGraphHttpAgent connects to the backend FastAPI server and streams tool calls and responses.
2. Generative UI with useRenderToolCall (page.tsx)
import { useRenderToolCall } from "@copilotkit/react-core";
useRenderToolCall({
  name: "get_weather",
  render: ({ status, args, result }) => {
    // Guard against parsing before the tool has finished
    if (status !== "complete") return <p>Loading weather...</p>;
    const weatherData = JSON.parse(result);
    return (
      <div className="weather-card">
        <h3>{weatherData.location}</h3>
        <div className="temp">{weatherData.temperature}°{weatherData.unit}</div>
        <p>{weatherData.weather}</p>
      </div>
    );
  },
});
Generative UI Features:
- 🎨 Custom Rendering: Fully customizable UI components
- ⚡ Real-time Updates: Status changes as tool executes
- 🔄 Loading States: Built-in loading animations
- 📊 Rich Data Display: Parse and display structured data
3. Weather Card Component
The weather card demonstrates advanced generative UI features:
- Dynamic gradients based on weather conditions
- Weather icons (☀️, ☁️, 🌧️, ⛈️, ❄️)
- Animated loading states
- Responsive hover effects
- Real-time status indicators
Running the Frontend
# Install dependencies
cd frontend
npm install
# Run development server
npm run dev
# Build for production
npm run build
# Start production server
npm start
The frontend runs on http://localhost:3000 and connects to the backend via the LANGGRAPH_DEPLOYMENT_URL environment variable.
☸️ Kubernetes & Skaffold Deployment
This project uses Skaffold for streamlined Kubernetes development with hot reloading and automatic rebuilds.
Architecture
graph TB
subgraph "Kubernetes Cluster"
subgraph "Frontend Service"
FP[Port 3000]
FD[Deployment: frontend]
FC[Container: Next.js]
FP --> FD
FD --> FC
end
subgraph "Backend Service"
BP[Port 8123]
BD[Deployment: backend]
BC[Container: FastAPI]
BP --> BD
BD --> BC
end
FC --> |HTTP| BP
end
DEV[Developer] --> |skaffold dev| Skaffold
Skaffold --> |Build Images| Docker
Skaffold --> |Deploy| K8S[kubectl]
K8S --> FD
K8S --> BD
Skaffold --> |Port Forward| FP
DEV --> |http://localhost:3000| FP
style FD fill:#61dafb
style BD fill:#f59e0b
style Skaffold fill:#0ea5e9
Skaffold Configuration
The skaffold.yaml file defines the build and deployment pipeline:
build:
  artifacts:
    - image: frontend
      sync:
        infer:
          - "**/*.ts"
          - "**/*.tsx"
          - "**/*.css"
    - image: backend
      sync:
        infer:
          - "**/*.py"
manifests:
  rawYaml:
    - frontend/k8s/deployment.yaml
    - backend/k8s/deployment.yaml
portForward:
  - resourceType: service
    resourceName: frontend
    port: 3000
Key Features
| Feature | Description |
|---|---|
| Hot Reload | File sync for .ts, .tsx, .py files - no rebuild needed |
| Auto Rebuild | Automatic Docker image rebuild on code changes |
| Port Forwarding | Access frontend at localhost:3000 |
| Service Discovery | Backend accessible at http://backend:8123 from frontend |
| Local Development | Full Kubernetes environment on your machine |
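The service-discovery row above amounts to an environment variable plus an in-cluster DNS name. The resolution logic, sketched in Python for illustration (the variable name comes from the frontend manifest), is simply:

```python
import os

def backend_url() -> str:
    """Resolve the backend URL: the in-cluster DNS name when
    LANGGRAPH_DEPLOYMENT_URL is set by the Deployment, localhost
    when running outside the cluster."""
    return os.environ.get("LANGGRAPH_DEPLOYMENT_URL", "http://localhost:8123")
```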
Kubernetes Resources
Backend Service (backend/k8s/deployment.yaml)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  ports:
    - port: 8123
  selector:
    app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend
          ports:
            - containerPort: 8123
Frontend Service (frontend/k8s/deployment.yaml)
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
    - port: 3000
  selector:
    app: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend
          ports:
            - containerPort: 3000
          env:
            - name: LANGGRAPH_DEPLOYMENT_URL
              value: "http://backend:8123"
Development Workflow
# Start development environment
skaffold dev
# This will:
# 1. Build Docker images for frontend and backend
# 2. Deploy to Kubernetes
# 3. Set up port forwarding
# 4. Watch for file changes and hot reload
# Access the application
open http://localhost:3000
# Clean up
# Press Ctrl+C to stop skaffold
# Resources are automatically deleted
Production Deployment
For production, you would:
- Build and push images to a container registry
- Update image references in deployment manifests
- Apply manifests to production cluster
- Configure ingress for external access
# Build production images
docker build -t your-registry/backend:v1.0 ./backend
docker build -t your-registry/frontend:v1.0 ./frontend
# Push to registry
docker push your-registry/backend:v1.0
docker push your-registry/frontend:v1.0
# Deploy to production
kubectl apply -f backend/k8s/deployment.yaml
kubectl apply -f frontend/k8s/deployment.yaml
🚀 Getting Started
Prerequisites
- Python 3.13+ and uv for the backend
- Node.js 20+ and npm for the frontend
- Docker for containerization
- A local Kubernetes cluster (Minikube or similar)
- Skaffold CLI for the development workflow
- API keys: GitHub Copilot or another LLM provider
Quick Start
- Clone the repository
git clone https://github.com/nsphung/agent-studio-starter.git
cd agent-studio-starter
- Start with Skaffold
# Make sure a local Kubernetes cluster is running (Minikube or similar)
skaffold dev
- Access the application
Open your browser to http://localhost:3000 and start chatting!
Try asking:
- "What's the weather in San Francisco?"
- "Tell me the weather in Tokyo"
- "How's the weather in London?"
Manual Setup (Without Skaffold)
Backend
cd backend
uv sync
uv run python src/agent/main.py
Frontend
cd frontend
npm install
npm run dev
📁 Project Structure
agent-studio-starter/
├── backend/             # Python FastAPI backend
│   ├── src/agent/       # Agent code
│   ├── tests/           # Unit tests
│   ├── k8s/             # Kubernetes manifests
│   ├── Dockerfile       # Container image
│   └── pyproject.toml   # Dependencies
│
├── frontend/            # Next.js frontend
│   ├── src/app/         # Next.js app
│   ├── k8s/             # Kubernetes manifests
│   ├── Dockerfile       # Container image
│   └── package.json     # Dependencies
│
├── notebooks/           # Jupyter notebooks for evaluation
│   └── evaluate.ipynb   # Agent evaluation
│
├── skaffold.yaml        # Skaffold configuration
├── Makefile             # Build commands
└── README.md            # This file
📚 Learning Resources
LangChain Deep Agents
CopilotKit
Kubernetes & Skaffold
🔧 Customization Guide
Adding New Tools
- Define the tool in backend/src/agent/utils.py:

def my_custom_tool(param: str) -> str:
    """Tool description for the LLM."""
    # Your tool logic
    return result
- Add it to the agent's tools:

agent_graph = create_deep_agent(
    model=model,
    tools=[get_weather, my_custom_tool],  # Add your tool
    middleware=[CopilotKitMiddleware()],
)
- Create the generative UI in frontend/src/app/page.tsx:

useRenderToolCall({
  name: "my_custom_tool",
  render: ({ status, args, result }) => (
    <YourCustomComponent data={result} />
  ),
});
Switching LLM Providers
Update the model in backend/src/agent/utils.py:
# OpenAI
model = ChatLiteLLM(model="gpt-4")
# Anthropic
model = ChatLiteLLM(model="anthropic/claude-3-5-sonnet")
# Azure OpenAI
model = ChatLiteLLM(model="azure/gpt-4")
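Since LiteLLM identifies providers through the provider/model-name string, one convenient pattern (an assumption of this guide, not something the template ships) is to read that string from an environment variable so providers can be swapped without code changes:

```python
import os

# Hypothetical helper: AGENT_MODEL is not a variable the template defines
def resolve_model_name(default: str = "github_copilot/gpt-5-mini") -> str:
    """Read the LiteLLM model identifier (provider/model-name) from the
    environment, falling back to the template's default model."""
    return os.environ.get("AGENT_MODEL", default)
```

It would then be wired in as model = ChatLiteLLM(model=resolve_model_name()).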
Adding Persistent Storage
Update the checkpointer to use PostgreSQL or other backends:
from langgraph.checkpoint.postgres import PostgresSaver

# from_conn_string returns a context manager; call setup() once to create tables
with PostgresSaver.from_conn_string("postgresql://...") as checkpointer:
    checkpointer.setup()
🧪 Testing
Backend Tests
cd backend
uv run pytest
Frontend Tests
cd frontend
npm test
Agent Evaluation
Use the Jupyter notebook for agent evaluation:
jupyter notebook notebooks/evaluate.ipynb
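The notebook's contents aren't shown in this README; the core loop of such an evaluation can be sketched as below, with a stub standing in for the real agent graph and a simple substring check as the scorer (both are assumptions for illustration):

```python
from typing import Callable

def evaluate(agent: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Score an agent: fraction of cases whose reply contains the expected substring."""
    passed = 0
    for prompt, expected in cases:
        reply = agent(prompt)
        if expected.lower() in reply.lower():
            passed += 1
    return passed / len(cases)

# Stub standing in for a call into the real agent graph
def stub_agent(prompt: str) -> str:
    return "The weather there is sunny and 18 degrees."

score = evaluate(stub_agent, [
    ("What's the weather in SF?", "sunny"),
    ("Tell me the weather in Tokyo", "weather"),
])
```

A real run would replace stub_agent with a call such as agent_graph.invoke(...) and extract the final message text before scoring.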
🤝 Contributing
This is a template project designed to be forked and customized for your own use cases. Feel free to:
- Add new tools and capabilities
- Enhance the UI with more generative components
- Integrate with external APIs
- Add authentication and authorization
- Deploy to production Kubernetes clusters
📄 License
See LICENSE file for details.
💡 Use Cases
This template can be adapted for various AI agent applications:
- 🔍 Research Assistants - Web search and document analysis
- 📊 Data Analysis Tools - Query databases and visualize results
- 🛒 E-commerce Assistants - Product search and recommendations
- 📧 Email Automation - Draft and send emails
- 📅 Scheduling Agents - Calendar management
- 💻 Code Analysis - Review and explain code
- 📈 Financial Advisors - Market data and portfolio analysis
🙏 Acknowledgments
Built with amazing open-source technologies:
- LangChain - Building blocks for LLM applications
- LangGraph - Agent orchestration
- Deep Agents - Agent framework
- CopilotKit - Generative UI framework
- Next.js - React framework
- FastAPI - Python web framework
- Skaffold - Kubernetes development tool
Happy Building! 🚀
For questions or issues, please open an issue on GitHub.
