even-g2-agentic-app
An agentic AI framework for Even Realities G2 smart glasses. Voice-controlled, tool-using AI assistant that renders interactive widgets directly on the glasses display.
Talk to your glasses. They talk back, with real data.
Architecture
Three services work together to create an agentic experience on the G2 glasses:
+----------------------------+              +------------------------------+
|  Glasses App (Vite :5173)  |  WebSocket   |  Voice Backend (:8000)       |
|  TypeScript/Vite           | <----------> |  FastAPI/Python              |
|                            |              |                              |
|  Even Hub SDK              |              |  Claude API + Deepgram STT   |
|  (iPhone WebView->BLE->G2) |              |  MCP Client (JSON-RPC/HTTP)  |
+----------------------------+              +---------------+--------------+
                                                            |
                                            +---------------v--------------+
                                            |  MCP Server (:3001)          |
                                            |  Express/TypeScript          |
                                            |  MCP App Tools               |
                                            +------------------------------+
How it works
- You speak → the glasses capture audio via the built-in mic
- Speech-to-text → audio streams to Deepgram for real-time transcription
- AI reasoning → Claude processes your request and decides whether tools are needed
- Tool execution → MCP tools fetch live data (weather, time, etc.)
- Visual response → results render as widgets on the 200x100px glasses display
- Voice response → Claude summarizes the answer in 1-2 sentences
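The flow above can be sketched as typed messages arriving over the WebSocket. The message shapes below are illustrative assumptions, not the project's actual wire format (which lives in src/chatbot/ws-client.ts and voice-backend/server.py):

```typescript
// Assumed (not actual) server-to-client message shapes for the voice socket.
type ServerMessage =
  | { type: "transcript"; text: string }                  // Deepgram STT output
  | { type: "tool_result"; tool: string; data: unknown }  // MCP tool output for widgets
  | { type: "assistant_text"; text: string };             // Claude's short spoken summary

// Route each message to the display step it corresponds to in the list above.
function handle(msg: ServerMessage): string {
  switch (msg.type) {
    case "transcript":
      return `show transcript: ${msg.text}`;
    case "tool_result":
      return `render widget for ${msg.tool}`;
    case "assistant_text":
      return `speak/show reply: ${msg.text}`;
  }
}
```

A discriminated union like this lets the glasses client exhaustively handle every message type at compile time.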
MCP Apps
Tools are built as MCP Apps: they include both a data-fetching backend and an interactive HTML UI. When used through Claude Desktop, you get rich web UIs. When used through the glasses, results render as pixel-art widgets on the tiny display.
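In other words, one tool result carries two render paths. A hypothetical host-side dispatch (the field names here are illustrative, not the MCP Apps API) might look like:

```typescript
// Illustrative result shape: text payload for the glasses, ui:// resource for rich hosts.
interface ToolResult {
  content: { type: "text"; text: string }[];
  resourceUri?: string; // HTML UI resource, used by hosts like Claude Desktop
}

// Hypothetical dispatcher: rich hosts get the HTML UI, the glasses get the text
// payload, which a widget renderer turns into pixels.
function pickRenderPath(result: ToolResult, host: "desktop" | "glasses"): string {
  if (host === "desktop" && result.resourceUri) return `html:${result.resourceUri}`;
  return `widget:${result.content[0].text}`;
}
```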
Prerequisites
- Node.js v22+ (we recommend nvm)
- Python 3.14+ with uv
- Even Realities G2 glasses + iPhone with Even App
- API Keys: Anthropic + Deepgram
Quick Start
# 1. Clone and install
git clone https://github.com/YOUR_USERNAME/even-g2-agentic-app.git
cd even-g2-agentic-app
npm install
cd mcp-server && npm install && cd ..
cd voice-backend && uv sync && cd ..
# 2. Configure API keys
cp .env.example .env
# Edit .env with your ANTHROPIC_API_KEY and DEEPGRAM_API_KEY
# 3. Launch everything
./run
# Choose option 1: "All services + QR code"
# 4. Scan the QR code with the Even App on your iPhone
Or start services individually:
npm run dev # Glasses app only (Vite :5173)
cd mcp-server && npm run dev # MCP server (:3001)
cd voice-backend && uv run --env-file ../.env uvicorn server:app --host 0.0.0.0 --port 8000 --reload
Project Structure
├── src/                      # Glasses app (TypeScript/Vite)
│   ├── main.ts               # Entry point: menu system + render loop
│   ├── bridge.ts             # Even Hub SDK wrapper (display, audio, events)
│   ├── canvas.ts             # GlassesCanvas: 200x100px drawing primitives
│   ├── types.ts              # Demo interface, display constants
│   ├── menu.ts               # Text menu renderer
│   ├── demos/                # Visual demos + chatbot
│   │   ├── chatbot.ts        # Voice AI chatbot with widget rendering
│   │   ├── clock.ts          # Analog clock demo
│   │   ├── hello.ts          # Bouncing text demo
│   │   ├── rainbow.ts        # Color gradient demo
│   │   ├── plasma.ts         # Plasma effect demo
│   │   └── circle.ts         # Animated circle demo
│   └── chatbot/              # Chatbot subsystem
│       ├── ws-client.ts      # WebSocket client for voice backend
│       ├── display.ts        # Chat history formatter (2000-char limit)
│       └── widgets.ts        # MCP tool result → canvas renderers
│
├── mcp-server/               # MCP tool server (Express/TypeScript)
│   ├── server.ts             # Tool registry: add new tools here
│   ├── main.ts               # HTTP + stdio transport
│   ├── tools/
│   │   ├── clock.ts          # Current time tool (no external API)
│   │   └── weather.ts        # Weather forecast tool (Open-Meteo API)
│   └── ui/                   # MCP App HTML UIs (bundled via Vite)
│       ├── current-time.*    # SVG analog clock + digital display
│       └── weather-forecast.* # Animated weather with themed effects
│
├── voice-backend/            # Voice AI backend (Python/FastAPI)
│   ├── server.py             # WebSocket endpoint: audio in, responses out
│   ├── config.py             # Environment config + system prompt
│   ├── claude_llm.py         # Claude streaming + agentic tool loop
│   ├── deepgram_stt.py       # Real-time speech-to-text
│   └── mcp_client.py         # Lightweight JSON-RPC 2.0 MCP client
│
├── run                       # Interactive launcher script
├── .env.example              # API key template
└── app.json                  # Even Hub app manifest
Adding a New MCP Tool
This is the main extension point. Each tool follows a consistent pattern:
1. Create the tool backend
Create mcp-server/tools/my-tool.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { registerAppTool, registerAppResource, RESOURCE_MIME_TYPE } from "@modelcontextprotocol/ext-apps/server";
import { z } from "zod";
import fs from "node:fs/promises";
import path from "node:path";

const RESOURCE_URI = "ui://my-tool/my-tool.html";

export function registerMyTools(server: McpServer, distDir: string): void {
  registerAppTool(
    server,
    "my-tool",
    {
      title: "My Tool",
      description: "What this tool does. Claude reads this to decide when to use it",
      inputSchema: {
        query: z.string().describe("What to look up"),
      },
      _meta: { ui: { resourceUri: RESOURCE_URI } },
    },
    async ({ query }) => {
      // Fetch data, compute results, etc.
      const data = { result: `Hello ${query}` };
      const summary = `Result for ${query}: ${data.result}`;
      return {
        content: [{ type: "text", text: JSON.stringify(data) }],
        structuredContent: {
          type: "resource",
          resource: { uri: RESOURCE_URI, mimeType: RESOURCE_MIME_TYPE, text: summary },
        },
      };
    }
  );

  registerAppResource(server, "My Tool UI", RESOURCE_URI,
    { description: "My tool display", mimeType: RESOURCE_MIME_TYPE },
    async () => {
      const html = await fs.readFile(path.join(distDir, "ui", "my-tool.html"), "utf-8");
      return { contents: [{ uri: RESOURCE_URI, mimeType: RESOURCE_MIME_TYPE, text: html }] };
    }
  );
}
2. Register it in the server
In mcp-server/server.ts:
import { registerMyTools } from "./tools/my-tool.js";
// Inside createServer():
registerMyTools(server, DIST_DIR);
3. Create the UI (optional)
Add mcp-server/ui/my-tool.html, .ts, .css and a build script in mcp-server/package.json:
"build:ui:my-tool": "cross-env INPUT=ui/my-tool.html vite build"
The UI uses the @modelcontextprotocol/ext-apps client library for bidirectional communication with the host.
4. Add a glasses widget renderer (optional)
In src/chatbot/widgets.ts, add an entry to the RENDERERS map:
const RENDERERS: Record<string, WidgetRenderFn> = {
  'weather-forecast': renderWeatherWidget,
  'current-time': renderClockWidget,
  'my-tool': renderMyToolWidget, // Add this
}
The renderer parses the JSON from content[].text and draws to a 200x100 GlassesCanvas.
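A renderer for the hypothetical my-tool entry might look like the sketch below. The CanvasLike interface is a minimal stand-in for the real GlassesCanvas class in src/canvas.ts (whose actual method signatures may differ), and the parsed payload shape matches whatever the tool put in its text content:

```typescript
// Minimal stand-in for GlassesCanvas (src/canvas.ts) so this sketch is
// self-contained; the real class exposes richer drawing primitives.
interface CanvasLike {
  clear(): void;
  text(x: number, y: number, s: string): void;
}

// Hypothetical renderer for the "my-tool" entry in the RENDERERS map.
// contentText is the JSON string the tool returned in content[0].text.
function renderMyToolWidget(canvas: CanvasLike, contentText: string): void {
  const data = JSON.parse(contentText) as { result?: string };
  canvas.clear();
  // 200x100 display, ~35 chars per line: keep the text short and centered.
  canvas.text(10, 45, (data.result ?? "no data").slice(0, 35));
}
```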
Adding a New Glasses Demo
Demos are standalone visual programs that render to the glasses display:
- Create src/demos/my-demo.ts implementing the Demo interface:
import type { Demo } from '../types'
import { GlassesCanvas } from '../canvas'

export const myDemo: Demo = {
  name: 'My Demo',
  fps: 10,
  render(canvas: GlassesCanvas, t: number, frame: number): void {
    canvas.clear()
    canvas.text(10, 40, 'HELLO', GlassesCanvas.hsv((t * 60) % 360), 1, 3)
  },
}
- Add it to src/demos/index.ts.
Claude Desktop Integration
The MCP server also works with Claude Desktop via stdio transport:
{
  "mcpServers": {
    "g2-tools": {
      "command": "node",
      "args": ["path/to/even-g2-agentic-app/mcp-server/dist-server/main.js", "--stdio"]
    }
  }
}
Build first: cd mcp-server && npm run build
Display Constraints
The G2 glasses have a unique display:
- Resolution: 576x288 pixels per eye, 4-bit greyscale
- Image canvas: 200x100 pixels (rendered as PNG, pushed via BLE)
- Text: Max 2000 characters, ~35 chars per line
- Containers: Max 4 per page (text, image, list)
- Input: Tap, double-tap, swipe gestures via temple touch bar
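Text pushed to a page has to respect both the 2000-character cap and the ~35-characters-per-line width. A simple word-wrap sketch that enforces both (this is an illustration, not the project's actual src/chatbot/display.ts formatter):

```typescript
// Wrap text to the glasses' ~35-char line width, truncating at the
// 2000-char page cap first. Words longer than the width get their own line.
function wrapForGlasses(text: string, width = 35, maxChars = 2000): string[] {
  const words = text.slice(0, maxChars).split(/\s+/).filter(Boolean);
  const lines: string[] = [];
  let line = "";
  for (const w of words) {
    if (line.length === 0) line = w;
    else if (line.length + 1 + w.length <= width) line += " " + w;
    else { lines.push(line); line = w; }
  }
  if (line) lines.push(line);
  return lines;
}
```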
Environment Variables
| Variable | Required | Description |
|---|---|---|
| ANTHROPIC_API_KEY | Yes | Claude API key for LLM |
| DEEPGRAM_API_KEY | Yes | Deepgram API key for speech-to-text |
| MCP_SERVERS | Auto | Comma-separated MCP server URLs (set by launcher) |
| CLAUDE_MODEL | No | Override Claude model (default: claude-sonnet-4-20250514) |
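A config loader mirroring this table might validate and default as follows. The variable names match the table; the helper itself is an illustration, not the project's voice-backend/config.py:

```typescript
// Illustrative config reader for the env vars above: the two API keys are
// required, MCP_SERVERS is parsed as a comma-separated list, and the model
// falls back to the documented default.
function loadConfig(env: Record<string, string | undefined>) {
  const anthropicKey = env.ANTHROPIC_API_KEY;
  const deepgramKey = env.DEEPGRAM_API_KEY;
  if (!anthropicKey || !deepgramKey) {
    throw new Error("ANTHROPIC_API_KEY and DEEPGRAM_API_KEY are required");
  }
  return {
    anthropicKey,
    deepgramKey,
    mcpServers: (env.MCP_SERVERS ?? "").split(",").filter(Boolean),
    model: env.CLAUDE_MODEL ?? "claude-sonnet-4-20250514",
  };
}
```

Failing fast on missing keys at startup beats a cryptic 401 from the API mid-conversation.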
Reference
- Even Realities G2 – the glasses hardware
- Community G2 SDK notes – invaluable SDK reference
- MCP Apps specification – how MCP App UIs work
- Even G2 starter template – a simpler starter for G2 plugins
License
MIT
