AgentLens MCP Server
MCP server for AgentLens — self-instrumentation tools for AI agents
AgentLens
Observability and analytics for AI agents.
See what your agents actually do — every LLM call, tool use, and token spent.
Quick Start • Integrate • Dashboard • API • Deploy
AgentLens is a self-hosted observability platform for autonomous AI agents. Think of it as Datadog, but for agents — not servers.
It captures telemetry (LLM calls, tool usage, token consumption) and turns it into actionable insights:
- Session traces — See exactly what your agent did, step by step
- Agent Loop Intervention — A "Kill Switch": safely stall looping agents at the network-proxy layer and inject a natural-language hint from the Dashboard to steer them back on track
- Loop detection — Automatically detect when agents get stuck in repetitive cycles
- Dynamic cost tracking — Instead of hardcoding model prices (which change constantly), a background NestJS `PricingService` syncs daily via cron with the open-source LiteLLM JSON registry, fuzzy-matches your agent's model to its current per-token prices, and computes the USD cost as `(inputTokens * inputPrice) + (outputTokens * outputPrice)` during ingestion, for zero-maintenance billing observability
- RL-powered insights — Q-learning scores each tool based on real outcomes
- Retention — Track how often users return to agentic workflows
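As a rough sketch of that cost calculation, assuming per-token prices looked up from a registry (the constants and function name below are illustrative, not the real `PricingService` values or API):

```typescript
// Illustrative per-token USD prices; the real service syncs these daily
// from the LiteLLM registry and fuzzy-matches model names.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 0.0000025, output: 0.00001 }, // made-up numbers
};

// Cost of one LLM span: (inputTokens * inputPrice) + (outputTokens * outputPrice)
function spanCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) return 0; // unknown model; the real service fuzzy-matches instead
  return inputTokens * p.input + outputTokens * p.output;
}
```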
Architecture
```
┌──────────────────────────────────────────────────────────────┐
│                      docker compose up                       │
│                                                              │
│  ┌──────────┐  ┌──────────┐  ┌───────────┐  ┌─────────────┐  │
│  │ Postgres │  │  Redis   │  │    API    │  │  Dashboard  │  │
│  │  :5432   │  │  :6379   │  │   :9471   │  │    :9472    │  │
│  └──────────┘  └──────────┘  └─────┬─────┘  └─────────────┘  │
│                                    │                         │
│                             ┌──────┴──────┐                  │
│                             │  LLM Proxy  │                  │
│                             │    :9473    │                  │
│                             └─────────────┘                  │
└──────────────────────────────────────────────────────────────┘
```
| Service | Port | What it does |
|---|---|---|
| API | 9471 | REST API — receives telemetry, serves analytics |
| Dashboard | 9472 | Next.js analytics UI |
| LLM Proxy | 9473 | Transparent proxy — auto-logs LLM calls from any client |
| PostgreSQL | 5432 | Data storage |
| Redis | 6379 | Job queues (BullMQ) |
Quick Start
Prerequisites
- Docker and Docker Compose
- Node.js 18+ (for local development)
1. Clone and start
```bash
git clone https://github.com/itzvenkat/agentlens.git
cd agentlens
cp .env.example .env.development
docker compose up -d
```
All 5 services start up. Wait ~30 seconds for everything to be healthy, then open:
- Dashboard → http://localhost:9472
- API → http://localhost:9471/health
- Proxy → http://localhost:9473/health
2. Create a project
Every agent/app gets its own project with a unique API key:
```bash
curl -X POST http://localhost:9471/v1/projects \
  -H "X-Master-Key: agentlens_master_dev_key" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-first-agent", "description": "Testing AgentLens"}'
```
The response includes your API key (it starts with `al_`). Save it — it's only shown once.
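If you're scripting project creation, the same call can be sketched in TypeScript. The helper name is hypothetical, and the shape of the JSON response is an assumption (the docs only say the key starts with `al_`):

```typescript
// Hypothetical helper that mirrors the curl call above.
interface CreateProjectCall {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function createProjectCall(
  baseUrl: string,
  masterKey: string,
  name: string,
  description = ""
): CreateProjectCall {
  return {
    url: `${baseUrl}/v1/projects`,
    method: "POST",
    headers: { "X-Master-Key": masterKey, "Content-Type": "application/json" },
    body: JSON.stringify({ name, description }),
  };
}

// Usage with Node 18+'s built-in fetch:
// const { url, ...init } = createProjectCall("http://localhost:9471", "agentlens_master_dev_key", "my-first-agent");
// const project = await (await fetch(url, init)).json(); // response shape is assumed
```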
3. Start sending data
Pick the integration method that matches your setup:
How to Integrate
AgentLens offers four ways to connect, from zero-code to full programmatic control:
| Method | Best for | Effort |
|---|---|---|
| LLM Proxy | Desktop apps & IDEs (Claude, Cursor, ChatGPT) | Change 1 URL |
| SDK | Custom agents, TypeScript/Node.js apps | 2 lines of code |
| MCP Server | MCP-compatible tools (Claude Desktop, Copilot) | Edit 1 config file |
| REST API | Any language, direct HTTP | POST request |
Option A: LLM Proxy
Best for: Desktop apps, IDEs, and anything where you can't modify code.
The proxy sits between your client and the LLM API. It forwards requests unchanged and silently logs telemetry to AgentLens.
Step 1: Add your API key to .env.development:
```
AGENTLENS_API_KEY=al_your_key_here
```
Step 2: Restart the proxy:
```bash
docker compose up -d proxy
```
Step 3: Point your client to the proxy:
| Client | Where to change | Value |
|---|---|---|
| Cursor | Settings → Models → OpenAI Base URL | http://localhost:9473/v1 |
| Any OpenAI client | Environment variable | OPENAI_BASE_URL=http://localhost:9473/v1 |
| Anthropic SDK | Auto-detected from request headers | ANTHROPIC_BASE_URL=http://localhost:9473 |
| Ollama clients | Set upstream to Ollama | UPSTREAM_BASE_URL=http://localhost:11434 |
The proxy auto-detects the provider (OpenAI, Anthropic, Google, OpenRouter, Ollama) based on request headers and URL patterns.
How it works: Client → `localhost:9473` → proxy logs the request → forwards to the real API → returns the response → logs the response.
Option B: SDK
Best for: TypeScript/Node.js apps where you want fine-grained control.
```bash
npm install @itzvenkat0/agentlens-sdk
```
OpenAI:
```typescript
import OpenAI from 'openai';
import { AgentLensClient, wrapOpenAI } from '@itzvenkat0/agentlens-sdk';

const lens = new AgentLensClient({
  apiKey: 'al_your_key_here',
  endpoint: 'http://localhost:9471',
});

const openai = wrapOpenAI(lens, new OpenAI());

// That's it — all calls are now automatically traced
const result = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is 2+2?' }],
});
```
Anthropic:
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { AgentLensClient, wrapAnthropic } from '@itzvenkat0/agentlens-sdk';

const lens = new AgentLensClient({ apiKey: 'al_...', endpoint: 'http://localhost:9471' });
const anthropic = wrapAnthropic(lens, new Anthropic());
```
Vercel AI SDK:
```typescript
import { generateText, streamText } from 'ai';
import { AgentLensClient, wrapVercelAI } from '@itzvenkat0/agentlens-sdk';

const lens = new AgentLensClient({ apiKey: 'al_...', endpoint: 'http://localhost:9471' });
const ai = wrapVercelAI(lens, { generateText, streamText });
```
Generic (any provider via fetch):
```typescript
import { AgentLensClient, wrapFetch } from '@itzvenkat0/agentlens-sdk';

const lens = new AgentLensClient({ apiKey: 'al_...', endpoint: 'http://localhost:9471' });
globalThis.fetch = wrapFetch(lens, globalThis.fetch);
// All subsequent fetch() calls to known LLM APIs are auto-traced
```
Manual spans:
```typescript
const trace = lens.trace('task-123');
const span = trace.span('tool', 'read_file');
// ... do work ...
span.end({ status: 'ok', toolName: 'read_file' });
await trace.end('success');
```
See `libs/sdk/README.md` for the full API reference.
Option C: MCP Server
Best for: MCP-compatible agents (Claude Desktop, Copilot, Gemini CLI, Cursor).
Install globally:
```bash
npm install -g @itzvenkat0/agentlens-mcp-server
```
Then add to your MCP config:
```json
{
  "mcpServers": {
    "agentlens": {
      "command": "agentlens-mcp",
      "env": {
        "AGENTLENS_API_URL": "http://localhost:9471",
        "AGENTLENS_API_KEY": "al_your_key_here"
      }
    }
  }
}
```
The agent gets three tools: `report_progress`, `report_result`, and `report_error`.
Config file locations:
| Client | Config path |
|---|---|
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Cursor | .cursor/mcp.json in your project |
| Gemini CLI | .gemini/settings.json in your project |
| VS Code | .vscode/mcp.json in your project |
Option D: REST API
Best for: Any language, any framework, full control.
```bash
curl -X POST http://localhost:9471/v1/ingest \
  -H "X-API-Key: al_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "spans": [{
      "traceId": "task-123",
      "spanId": "span-1",
      "type": "llm",
      "model": "gpt-4o",
      "inputTokens": 150,
      "outputTokens": 80,
      "durationMs": 1200,
      "status": "ok"
    }]
  }'
```
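The same ingest call can be sketched in TypeScript. The `Span` shape below is inferred from the example payload; which fields are optional is an assumption:

```typescript
// Span fields taken from the example ingest payload.
interface Span {
  traceId: string;
  spanId: string;
  type: "llm" | "tool";
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  durationMs: number;
  status: "ok" | "error";
}

// Build the JSON body for POST /v1/ingest.
function ingestBody(spans: Span[]): string {
  return JSON.stringify({ spans });
}

// Usage with Node 18+'s built-in fetch:
// await fetch("http://localhost:9471/v1/ingest", {
//   method: "POST",
//   headers: { "X-API-Key": "al_your_key_here", "Content-Type": "application/json" },
//   body: ingestBody([{ traceId: "task-123", spanId: "span-1", type: "llm",
//                       model: "gpt-4o", inputTokens: 150, outputTokens: 80,
//                       durationMs: 1200, status: "ok" }]),
// });
```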
Dashboard
Open http://localhost:9472 to see:
- Overview — KPIs, RL tool ratings, recent sessions
- Sessions (with Kill Switch) — Filterable list with full trace data, plus a Halt & Steer UI to intervene in looping agents and inject custom developer hints
- Tool Efficiency — Which tools help agents succeed vs. cause loops
- Retention — Daily agent activity and return rates
API Reference
All endpoints require the `X-API-Key` header except those marked public.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/v1/projects` | Create a project (requires `X-Master-Key`) |
| POST | `/v1/ingest` | Ingest a batch of spans |
| POST | `/v1/ingest/end-session` | End a session with final status |
| GET | `/v1/interventions/:traceId` | Check active intervention state for a trace |
| POST | `/v1/interventions/resolve/:sessionId` | Submit a developer hint to release a stuck trace |
| GET | `/v1/analytics/overview` | KPI summary |
| GET | `/v1/analytics/sessions` | Paginated session list |
| GET | `/v1/analytics/sessions/:id/trace` | Span waterfall for a session |
| GET | `/v1/analytics/tools` | Tool efficiency metrics |
| GET | `/v1/analytics/retention` | Retention data |
| GET | `/v1/analytics/rl-insights` | RL Q-value tool rankings |
| GET | `/v1/analytics/stream` | Real-time updates via SSE |
| GET | `/health` | Health check (public) |
Configuration
Copy `.env.example` to `.env.development` and edit:
| Variable | Default | Description |
|---|---|---|
| `APP_PORT` | 9471 | API server port |
| `DB_HOST` | postgres | PostgreSQL host |
| `DB_PASSWORD` | — | Database password |
| `REDIS_HOST` | redis | Redis host |
| `MASTER_API_KEY` | — | Master key for creating projects |
| `APP_CORS_ORIGINS` | http://localhost:9472 | Allowed CORS origins |
| `AGENTLENS_API_KEY` | — | API key for the LLM proxy |
| `PROXY_PORT` | 9473 | LLM proxy port |
| `UPSTREAM_BASE_URL` | https://api.openai.com | Default upstream LLM API |
| `DASHBOARD_PORT` | 9472 | Dashboard port |
| `LOOP_DETECTION_THRESHOLD` | 3 | Duplicate tool calls before flagging a loop |
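Putting the table together, a minimal `.env.development` might look like this (the password and keys are placeholder values, not real defaults):

```
APP_PORT=9471
DB_HOST=postgres
DB_PASSWORD=change-me
REDIS_HOST=redis
MASTER_API_KEY=agentlens_master_dev_key
APP_CORS_ORIGINS=http://localhost:9472
AGENTLENS_API_KEY=al_your_key_here
PROXY_PORT=9473
UPSTREAM_BASE_URL=https://api.openai.com
DASHBOARD_PORT=9472
LOOP_DETECTION_THRESHOLD=3
```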
Deploy
Docker (recommended)
```bash
# Production
cp .env.example .env.production
# Edit .env.production with real credentials
NODE_ENV=production docker compose up -d
```
Local Development
```bash
# Install everything
npm install --legacy-peer-deps
cd dashboard && npm install && cd ..

# Start only infrastructure
docker compose up -d postgres redis

# Start API with hot reload
npm run start:api:dev

# Start dashboard (separate terminal)
cd dashboard && npm run dev

# Start proxy (separate terminal)
npm run start:proxy
```
Commands
| Command | Description |
|---|---|
| `docker compose up -d` | Start all services |
| `docker compose down` | Stop all services |
| `docker compose logs -f` | Follow all logs |
| `npm run build:all` | Build everything |
| `npm run start:api:dev` | API with hot reload |
| `npm run start:proxy` | Start proxy server |
| `npm run build:proxy` | Build proxy |
| `npm test` | Run tests |
| `npm run lint` | Lint code |
| `npm run format` | Format code |
Project Structure
```
agentlens/
├── apps/
│   ├── api/                 # NestJS REST API
│   │   └── src/
│   │       ├── auth/        # API key auth, project management
│   │       ├── ingest/      # Telemetry ingestion, PII scrubbing
│   │       ├── analytics/   # Queries, aggregations, SSE
│   │       ├── processor/   # Loop detection, RL engine, daily aggregation
│   │       └── config/      # Multi-env config with Joi validation
│   ├── mcp-server/          # MCP server (stdio transport)
│   └── proxy/               # Transparent LLM proxy
├── libs/
│   ├── common/              # Shared entities, DTOs, constants
│   └── sdk/                 # TypeScript SDK (@itzvenkat0/agentlens-sdk)
│       ├── client.ts        # Core client (batching, flush, PII)
│       ├── trace.ts         # Trace + Span classes
│       └── wrappers/        # OpenAI, Anthropic, Vercel AI, fetch
├── dashboard/               # Next.js 15 analytics dashboard
├── docker/                  # Dockerfiles, init SQL
├── .github/workflows/       # CI + npm publish
├── .env.example             # Config template
└── docker-compose.yml       # Full stack (one command)
```
How the RL Engine Works
AgentLens learns from session outcomes using Q-learning:

1. **Reward signal** — Each completed session gets a score based on:
   - Success/failure (+1.0 / −0.5)
   - Token efficiency (budget usage)
   - Loop penalty (−0.3 per detected loop)
   - Speed bonus (faster = higher)
2. **Q-value updates** — Each tool's Q-value is updated incrementally:
   `Q(tool) = Q(tool) + α × (reward − Q(tool))`
   Tools closer to the outcome get more credit (γ = 0.95).
3. **Dashboard insights** — Q-values surface as ranked tool recommendations, showing which tools to keep, improve, or deprecate.
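Steps 1 and 2 can be sketched as follows. The reward weights come from the list above; the function names and the `alpha` default are hypothetical, and the γ-based credit assignment across the tool sequence is omitted for brevity:

```typescript
// Reward for a finished session: success/failure term plus loop penalty.
// (Token-efficiency and speed terms are left out of this sketch.)
function sessionReward(success: boolean, loops: number): number {
  return (success ? 1.0 : -0.5) - 0.3 * loops;
}

// Incremental Q-value update: Q(tool) = Q(tool) + alpha * (reward - Q(tool))
function updateQ(q: number, reward: number, alpha = 0.1): number {
  return q + alpha * (reward - q);
}

// e.g. a successful session with one detected loop yields reward 0.7,
// and updateQ moves each participating tool's Q-value toward it.
```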
Tech Stack
| Layer | Technology |
|---|---|
| API | NestJS 11, TypeORM, PostgreSQL 16 |
| Queue | BullMQ, Redis 7 |
| Dashboard | Next.js 15, React 19 |
| SDK | TypeScript, zero dependencies |
| MCP | @modelcontextprotocol/sdk |
| Proxy | Pure Node.js HTTP, zero dependencies |
| Containers | Docker, multi-stage Alpine builds |
Packages
| Package | npm | Description |
|---|---|---|
| `@itzvenkat0/agentlens-sdk` | npm | SDK with auto-instrumentation wrappers |
| `@itzvenkat0/agentlens-mcp-server` | npm | MCP server for agent self-instrumentation |