MCP Teleport Control Plane v0.2
Model B: Production-grade remote Control Plane with Postgres state, policy-gating, lease management, and comprehensive Teleport admin + access tooling.
Architecture
High-level flow
- MCP Clients (OpenWebUI, agents, LLMs) →
- HTTP/SSE MCP Gateway (FastAPI + FastMCP) →
- Policy Engine (capability allowlists per actor role) →
- Execution Engine (tctl/tsh runners with timeouts, redaction, isolation) →
- Teleport Proxy (actual cluster)
- Postgres (operations + leases durable store)
- ProcessManager (long-running proxy lease management)
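To make the layering concrete, the sketch below wires a single read-only tool through FastMCP, a policy check, and a tctl runner. It is illustrative only: the policy_allows and run_tctl helpers are assumptions rather than the project's actual code, and FastMCP run() arguments vary between versions.
# Illustrative sketch only: one read-only tool wired through the layers above.
import subprocess
from fastmcp import FastMCP

mcp = FastMCP("teleport-control-plane")

def policy_allows(actor: str, capability: str) -> bool:
    # Stand-in for the policy engine: the real one checks per-role capability allowlists.
    return capability.startswith("inventory.")

def run_tctl(args: list[str], timeout: int = 30) -> str:
    # Stand-in for the execution engine: tctl with a hard timeout; output would be redacted.
    result = subprocess.run(["tctl", *args], capture_output=True, text=True, timeout=timeout)
    return result.stdout

@mcp.tool()
def list_nodes() -> str:
    """Teleport nodes inventory (read-only)."""
    if not policy_allows(actor="example-actor", capability="inventory.nodes"):
        raise PermissionError("capability not allowed for this actor")
    return run_tctl(["get", "nodes", "--format=json"])

if __name__ == "__main__":
    # Transport/argument names differ across FastMCP versions; adjust as needed.
    mcp.run(transport="sse", host="0.0.0.0", port=8000)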
Capability planes
- Admin Plane (tctl): Inventory (nodes/proxies/roles/users/tokens/CAs), health checks, RBAC scans, audit search/anomaly
- Access Plane (tsh): Access requests (list/show/create/review/drop)
- Proxy/Session Plane: Lease-managed tsh proxy * processes (SSH/DB/App/Kube/MCP, vnet)
- Passthrough Plane (optional, disabled by default): raw tsh/tctl command passthroughs for operators
Quick start
Prerequisites
- Docker & Docker Compose
- Teleport cluster (with tctl + tsh binaries, or deploy in container)
- Service identity file for the control plane (or headless auth)
1. Clone & setup
cd mcp-teleport-control-plane
cp config.yaml.example config.yaml
# Edit config.yaml with your Teleport proxy + identity file
2. Configure
Edit config.yaml:
teleport:
proxy: teleport.example.com:443
identity_file: /path/to/control-plane-identity.pem # or set TELEPORT_IDENTITY_FILE
tctl_path: tctl
tsh:
tsh_path: tsh
home: /var/lib/mcp-teleport/home # optional isolated TELEPORT_HOME
service:
host: 0.0.0.0
port: 8000
log_level: INFO
database:
url: postgresql+psycopg://mcp:mcp@localhost:5432/mcpteleport
policy:
allow_tsh_passthrough: false # OFF by default (security)
allow_tctl_passthrough: false # OFF by default
default_proxy_ttl_seconds: 3600 # 1 hour leases
max_concurrent_proxies_per_actor: 10
3. Start services
docker-compose up -d
# Wait for Postgres to be healthy
# Then control-plane starts automatically
4. Check health
curl http://localhost:8000/health
# {
# "status": "healthy"
# }
5. Connect MCP client
Point your MCP host to:
http://localhost:8000/mcp/
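Any MCP-capable host can connect, but for a quick smoke test you can also drive the endpoint from Python. The snippet below assumes the FastMCP 2.x client; the tool names come from the list in the next section.
# Hedged sketch: connect to the gateway and call a couple of tools.
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://localhost:8000/mcp/") as client:
        tools = await client.list_tools()
        print([tool.name for tool in tools])   # discover available capabilities
        health = await client.call_tool("run_health_check", {})
        print(health)

asyncio.run(main())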
Tools
Inventory & Diagnostics (read-only)
- list_nodes() → Teleport nodes inventory
- list_proxies() → Teleport proxies inventory
- list_roles() → RBAC roles
- list_users() → Users + role assignments
- list_tokens() → Join tokens
- list_cas() → Certificate authorities + rotation warnings
- get_auth_preference() → Auth settings (MFA, second_factor, etc.)
Health & Security
- scan_rbac() → RBAC risk scan (wildcard perms, privileged roles, MFA gaps, etc.)
- run_health_check() → Full system health (inventory + checks)
Audit
- search_audit(hours=24) → Event aggregation by type, failure counts, top users
- detect_audit_anomalies(hours=24) → Detect storms, brute force, role spikes, escalation signals
Access Requests
- list_access_requests(limit=10) → List pending/history
- show_access_request(request_id) → Show details
- create_access_request(resources, reason, ticket?) → Create new request
- review_access_request(request_id, approve, reason) → Approve/deny
- drop_access_request(request_id) → Withdraw request
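As a worked example, the flow below creates and then approves a request through these tools using the FastMCP client; the argument shapes are inferred from the signatures above and are not a verified schema.
# Illustrative only: create and review an access request via the MCP tools.
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://localhost:8000/mcp/") as client:
        created = await client.call_tool("create_access_request", {
            "resources": ["role:db-admin"],   # hypothetical resource spec
            "reason": "debug db issues",
            "ticket": "SEC-123",
        })
        print(created)
        # A reviewer (a different actor/role) would then approve or deny:
        # await client.call_tool("review_access_request",
        #     {"request_id": "<id>", "approve": True, "reason": "approved per SEC-123"})

asyncio.run(main())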
Proxy Leases (long-running tsh proxy *)
- start_proxy(kind, target?, listen_addr?, ttl_seconds?) → Start a proxy lease
  - kind: proxy_ssh | proxy_db | proxy_app | proxy_kube | proxy_mcp | vnet
  - target: optional dict like {"db": "mydb"} or {"app": "myapp"}
  - listen_addr: local bind address (e.g., 127.0.0.1:0)
  - ttl_seconds: max 1h by default
- get_proxy_status(lease_id) → Check status, pid, listening address
- get_proxy_logs(lease_id, tail=200) → Last N lines of proxy logs
- stop_proxy(lease_id) → Clean shutdown
- list_proxies_leases(active_only=true) → List all leases for actor
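A typical lease lifecycle with these tools looks like the sketch below; the parameter values follow the descriptions above, and the lease_id placeholder is whatever start_proxy returns.
# Sketch of a proxy-lease lifecycle (start, inspect, stop).
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://localhost:8000/mcp/") as client:
        lease = await client.call_tool("start_proxy", {
            "kind": "proxy_db",
            "target": {"db": "mydb"},
            "listen_addr": "127.0.0.1:0",
            "ttl_seconds": 1800,
        })
        print(lease)  # expect a lease_id plus the local listening address
        # Point your database client at the reported address, then:
        # await client.call_tool("get_proxy_logs", {"lease_id": "<lease_id>", "tail": 50})
        # await client.call_tool("stop_proxy", {"lease_id": "<lease_id>"})

asyncio.run(main())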
Passthrough (disabled by default)
- run_tsh_command(args, reason, ticket?) → Raw tsh (requires allow_tsh_passthrough=true)
- run_tctl_command(args, reason, ticket?) → Raw tctl (requires allow_tctl_passthrough=true)
Policy & security
Capability allowlists per role
# Built-in roles:
BUILTIN_ROLES = {
"admin": {...all capabilities...},
"operator": {...inventory, audit, proxies, requests...},
"user": {...basic access + home proxies...},
"readonly": {...inventory, audit, health only...},
}
Service identities (e.g., teleport-ai-prod) default to "operator" role.
Enforcement
Every tool invocation goes through the policy engine:
- Actor identification: service principal or user role
- Capability check: does [actor.role] have the capability?
- Constraints: apply TTLs, concurrency caps, etc.
- Context requirements: mutating actions require reason + optional ticket
Example policy decision:
decision = policy.evaluate(
actor="teleport-ai-prod",
capability="proxy.start",
context={"reason": "debug db issues", "ticket": "SEC-123"}
)
# PolicyDecision(
# allow=True,
# reason="Capability 'proxy.start' allowed for role 'operator'",
# constraints={"max_ttl_seconds": 3600}
# )
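A minimal sketch of how such an evaluate() could be implemented is shown below; it assumes each role maps to a set of capability strings (a simplification of BUILTIN_ROLES above) and hard-codes one constraint, whereas the real engine also resolves the actor's role from its service identity.
# Hedged sketch of a policy check, not the project's actual engine.
from dataclasses import dataclass, field

@dataclass
class PolicyDecision:
    allow: bool
    reason: str
    constraints: dict = field(default_factory=dict)

ROLE_CAPABILITIES = {
    "operator": {"inventory.nodes", "audit.search", "proxy.start", "access_request.create"},
    "readonly": {"inventory.nodes", "audit.search"},
}
MUTATING = {"proxy.start", "access_request.create", "access_request.review"}

def evaluate(role: str, capability: str, context: dict) -> PolicyDecision:
    allowed = ROLE_CAPABILITIES.get(role, set())
    if capability not in allowed:
        return PolicyDecision(False, f"Capability '{capability}' not allowed for role '{role}'")
    if capability in MUTATING and not context.get("reason"):
        return PolicyDecision(False, "Mutating action requires a reason")
    return PolicyDecision(True, f"Capability '{capability}' allowed for role '{role}'",
                          constraints={"max_ttl_seconds": 3600})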
Safe defaults
- Passthrough disabled: tsh/tctl commands not exposed unless explicitly enabled
- Output redacted: tokens, certs, keys removed from logs + telemetry
- Timeouts: all tctl/tsh runs have 30s timeout (customizable)
- Concurrency caps: max 10 proxies per actor by default
- Leases are explicit: no fire-and-forget; every long-running process has a TTL + health heartbeat
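The redaction step applies pattern-based scrubbing to command output before it is logged or persisted; the sketch below shows the general idea with illustrative patterns, not the project's exact rules.
# Illustrative redaction helper for stdout/stderr before logging.
import re

REDACTIONS = [
    (re.compile(r"-----BEGIN [A-Z ]+-----.*?-----END [A-Z ]+-----", re.S), "[REDACTED PEM]"),
    (re.compile(r"\b[A-Za-z0-9_-]{32,}\b"), "[REDACTED TOKEN]"),
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text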
Architecture: Postgres state store
Operations table
Every tool invocation creates an operation record:
op_id (uuid)
request_id (from MCP)
actor
capability
inputs (validated JSON)
status (queued|running|succeeded|failed)
started_at / ended_at
exit_code, stdout/stderr (truncated)
error
decision (policy decision audit trail)
teleport_audit_refs (deep-links to Teleport audit)
Leases table
Long-running proxies are managed via leases:
lease_id (uuid)
kind (proxy_db, proxy_ssh, vnet, etc.)
actor
cluster
target (JSON: {db: "mydb"}, {app: "myapp"}, ...)
listen_addr
pid (central execution) | edge_agent_id (delegated)
created_at / expires_at
status (starting|active|stopping|stopped|expired|failed)
last_heartbeat_at
logs_ref
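As one way to express this schema, the sketch below models the leases table with SQLAlchemy; SQLAlchemy itself is an assumption (consistent with the postgresql+psycopg URL in config.yaml), and the column types are simplified.
# Hedged SQLAlchemy sketch of the leases table; columns mirror the fields above.
import uuid
from datetime import datetime
from sqlalchemy import JSON, DateTime, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Lease(Base):
    __tablename__ = "leases"

    lease_id: Mapped[str] = mapped_column(String, primary_key=True,
                                          default=lambda: str(uuid.uuid4()))
    kind: Mapped[str] = mapped_column(String)                 # proxy_db, proxy_ssh, vnet, ...
    actor: Mapped[str] = mapped_column(String)
    cluster: Mapped[str] = mapped_column(String)
    target: Mapped[dict] = mapped_column(JSON, default=dict)  # {"db": "mydb"}, {"app": "myapp"}, ...
    listen_addr: Mapped[str | None] = mapped_column(String, nullable=True)
    pid: Mapped[int | None] = mapped_column(Integer, nullable=True)
    created_at: Mapped[datetime] = mapped_column(DateTime)
    expires_at: Mapped[datetime] = mapped_column(DateTime)
    status: Mapped[str] = mapped_column(String, default="starting")
    last_heartbeat_at: Mapped[datetime | None] = mapped_column(DateTime, nullable=True)
    logs_ref: Mapped[str | None] = mapped_column(String, nullable=True)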
Cleanup
Background job (stub in code) periodically:
- Kills expired leases
- Cleans up orphaned PIDs
- Marks stale edge agents as offline
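In outline, that job can be a simple periodic loop; the version below is a sketch under the assumption that lease lookups and status updates are injected as callables, since the stub's real interface is not shown here.
# Hedged sketch of the periodic lease-cleanup job.
import asyncio
import os
import signal
from datetime import datetime, timezone

CLEANUP_INTERVAL_SECONDS = 60

async def cleanup_loop(get_expired_leases, mark_lease_stopped):
    while True:
        now = datetime.now(timezone.utc)
        for lease in get_expired_leases(now=now):          # hypothetical query helper
            if lease.pid:
                try:
                    os.kill(lease.pid, signal.SIGTERM)     # stop the orphaned tsh proxy
                except ProcessLookupError:
                    pass                                   # process already gone
            mark_lease_stopped(lease.lease_id, reason="expired")  # hypothetical updater
        await asyncio.sleep(CLEANUP_INTERVAL_SECONDS)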
Observability
Structured logging
All logs are JSON with operation context:
{
"timestamp": "2026-03-05T12:34:56Z",
"level": "info",
"message": "operation.start",
"extra": {
"op_id": "...",
"actor": "teleport-ai-prod",
"capability": "inventory.nodes"
}
}
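A formatter that produces records in this shape can be small; the sketch below uses the standard logging module and is an assumption about form, not the service's actual logging setup.
# Minimal JSON log formatter sketch matching the record shape above.
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            "extra": getattr(record, "op_context", {}),
        })

logger = logging.getLogger("mcp-teleport")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.info("operation.start", extra={"op_context": {"op_id": "...", "capability": "inventory.nodes"}})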
Metrics (stub)
- operation.total (capability, outcome)
- operation.duration_ms (capability histogram)
- lease.event (kind, event type)
Replace with Prometheus/StatsD exporter as needed.
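For example, a prometheus_client replacement for the three stubs could look like the sketch below; the metric names and the /metrics port are assumptions.
# Hedged sketch: swapping the metric stubs for prometheus_client equivalents.
from prometheus_client import Counter, Histogram, start_http_server

OPERATION_TOTAL = Counter("operation_total", "Tool invocations", ["capability", "outcome"])
OPERATION_DURATION_MS = Histogram("operation_duration_ms", "Tool duration in ms", ["capability"])
LEASE_EVENT = Counter("lease_event_total", "Lease lifecycle events", ["kind", "event"])

start_http_server(9100)  # expose /metrics on a side port (port is an assumption)

# Example usage inside the execution engine:
OPERATION_TOTAL.labels(capability="inventory.nodes", outcome="succeeded").inc()
OPERATION_DURATION_MS.labels(capability="inventory.nodes").observe(42.0)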
Tracing (optional)
OpenTelemetry integration (OTEL_ENABLED in config) for distributed tracing when used within a larger system.
Deployment models
Model A: Docker Compose (dev/test)
docker-compose up -d
Suitable for:
- Development
- Testing
- Small teams
- Single-region
Model B: Kubernetes + Helm (production)
helm repo add mcp-teleport-control-plane https://...
helm install mcp-teleport-cp mcp-teleport-control-plane/control-plane \
--values values.yaml
- Horizontal scale MCP gateway pods
- Postgres HA (CloudSQL, RDS, Patroni)
- RBAC + network policies
- Audit logging (GCP Cloud Audit, Datadog, etc.)
Advanced: Edge agent (optional)
For environments where proxies must run on user workstations (DB access restricted by network):
- Deploy mcp-teleport-edge-agent on user VDI/jumpbox
- Control plane delegates long-running proxy leases to edge agents
- Edge agents stream logs back to control plane
- Tight lifecycle control: lease expiry → auto-kill process on edge
(Edge agent code TBD; requires leader election + webhook for lease delegation.)
Security & compliance
Identity
- Service identity per environment (teleport-ai-dev, teleport-ai-stage, teleport-ai-prod)
- Each scoped to a cluster via RBAC
- Certificates rotated on schedule
Secrets
- Identity files stored on disk (0600) or injected from Vault/KMS
- Environment variables sanitized from logs
- Token/cert patterns redacted in all output
Audit
- Every operation logged to Postgres
- Reason + ticket required for mutating actions
- Teleport audit event references included in operations
- Full reconstruction of what was done (command, args, outcome, duration)
Network
- MCP server behind ingress (mTLS or OIDC)
- Egress restricted to Teleport proxy (+ optional target DBs if central proxies)
- Request signing + rate limiting at edge
Troubleshooting
"tctl: permission denied"
- Check identity file path and permissions
- Verify service identity has appropriate RBAC in cluster
- Test manually:
tctl --proxy=... --identity=... get nodes
Postgres connection errors
# Check pg
docker-compose logs postgres
# Manually test
docker-compose exec postgres psql -U mcp -d mcpteleport -c "SELECT 1;"
Lease stuck in "starting" state
- Check logs: docker-compose logs control-plane
- Manually kill any orphaned process
- Query Postgres: SELECT * FROM leases WHERE status = 'starting';
- Hard-stop: UPDATE leases SET status = 'stopped' WHERE lease_id = '...';
Next steps
- Deploy edge agent (for user network proxies)
- Helm charts (for K8s)
- API gateways (Envoy, Kong, AWS ALB + mTLS)
- Observability (Prometheus, Datadog, Splunk)
- Backup/DR (Postgres replication, secrets rotation)
- Multi-region (cross-cluster federated control planes)
License
MIT (see LICENSE)
Support
- Teleport Docs: https://goteleport.com/docs
- MCP Spec: https://modelcontextprotocol.io
