io.github.temurkhan13/openclaw-health-mcp
MCP server for OpenClaw deployment health: gateway, resources, errors, skills, upgrades, disk.
openclaw-health-mcp
MCP server for AI agent deployment health – gateway status, CPU/RAM/swap, recent errors from journalctl/dmesg, skill-registry integrity, upgrade outcomes, cron + disk usage in a single tool call. Each component gets a HEALTHY/DEGRADED/CRITICAL classification, with an overall rollup + ranked critical findings. The linux-proc backend works on any Linux/macOS/Windows host; OpenClaw operators get native ~/.openclaw/ parsing as a built-in reference implementation. Keywords: AI agent health, production AI monitoring, deployment readiness, MCP infrastructure observability.
What it does
Anyone running production AI agents needs a single tool that answers "is this deployment healthy right now?" without SSH'ing in to run six separate commands. The HN front-page thread *Ask HN: How are you monitoring AI agents in production?* (March 2026) made the gap explicit – the most-upvoted comments described:
- "observability and governance cannot live inside the agent framework. They have to live in an independent execution layer" – framework-level monitoring leaks the audit trail when teams use multiple frameworks
- "agent makes 10,000 correct $0.02 decisions that collectively don't make sense" – per-call rate limits miss systemic patterns
- the gap that "actually hurts during post-mortems" – knowing whether a model drifted, a context window failed, or a tool misbehaved
Existing options (LangSmith, Langfuse, AgentShield, OTEL/LGTM) sit at the framework or proxy layer. openclaw-health-mcp sits one level closer to the agent runtime – read-only, local, MCP-native – surfacing infrastructure-layer health (gateway, CPU/RAM, recent errors, skill-registry, upgrade outcome, cron, disk) to the same Claude conversation that's running the agent. Works on any Linux/macOS/Windows host out of the box via the linux-proc backend; OpenClaw operators get an additional native backend that parses ~/.openclaw/ paths.
> claude: is my OpenClaw deployment healthy?
[MCP tool: health_overview]
overall_health: critical
component_summary:
gateway: degraded (bound to 0.0.0.0, 1 crash in 24h)
resources: degraded (memory at 78%, swap at 12%)
skill_registry: critical (skill 'clawhub-trending-bot-v2' flagged suspicious)
upgrade: degraded (last upgrade rolled back)
cron: degraded (1 overdue job)
disk: degraded (root at 82%, log dir +187 MB/24h)
critical_findings:
[CRITICAL] Skill 'clawhub-trending-bot-v2' flagged – possible exfiltration. Disable.
[DEGRADED] Last upgrade 2026.4.23→2026.4.26 rolled back: websocket_stalls, cpu_spike.
[DEGRADED] Root disk at 82% – set up log rotation before reaching 95%.
[DEGRADED] 1 cron job(s) overdue. Install silentwatch-mcp for silent-failure detection.
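The `overall_health: critical` rollup above follows a "worst component wins" rule. A minimal sketch of that logic, assuming a simple ordered enum (the names are illustrative, not the server's internal types):

```python
from enum import IntEnum

class HealthLevel(IntEnum):
    HEALTHY = 0
    DEGRADED = 1
    CRITICAL = 2

def overall_health(components: dict) -> HealthLevel:
    # The worst component level determines the overall rollup.
    return max(components.values(), default=HealthLevel.HEALTHY)

# The component states from the snapshot above:
snapshot = {
    "gateway": HealthLevel.DEGRADED,
    "resources": HealthLevel.DEGRADED,
    "skill_registry": HealthLevel.CRITICAL,
    "upgrade": HealthLevel.DEGRADED,
    "cron": HealthLevel.DEGRADED,
    "disk": HealthLevel.DEGRADED,
}
print(overall_health(snapshot).name)  # CRITICAL
```

One CRITICAL component is enough to flip the whole deployment to CRITICAL, which is why the skill-registry finding leads the ranked list.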
Why openclaw-health-mcp
Three things that existing tools (Datadog, Prometheus, raw top/free/df) don't do for OpenClaw specifically:
- OpenClaw-aware probes. Detects 0.0.0.0-binding (the default-publicly-exposed misconfig per the 135k exposed-instances stat), parses ClawHub skill-registry diffs, recognizes named upgrade-regression patterns (`websocket_stalls`, `cpu_spike` post-2026.4.26), distinguishes intentional restarts from crashes.
- MCP-native, no integration layer. Claude Desktop, Cline, Continue, OpenClaw agents – any MCP-aware client queries directly. No Grafana plugin, no API wrapper, no JSON to parse manually.
- Composable with the rest of the production-AI MCP stack. Pairs with silentwatch-mcp (cron silent-failure detection – `cron_health` here is intentionally basic and defers to silentwatch when present). Skill-registry vetting in this server is light heuristics; deep static analysis goes in `openclaw-skill-vetter-mcp` (planned).
Built for the SMB self-hoster running OpenClaw on a $40 VPS where Datadog is overkill β but the OpenClaw-specific patterns are valuable on enterprise infra too.
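As a rough illustration of the 0.0.0.0-binding probe described above, here is a minimal sketch. The function name and the exact classification rules are assumptions, not the server's actual code:

```python
# Hypothetical helper: classify a gateway bind address the way an
# OpenClaw-aware probe might. Not the server's real implementation.

def classify_bind(bind_addr: str) -> str:
    """Loopback-only binds are healthy; wildcard binds mean the gateway
    is reachable on every interface (publicly exposed unless a firewall
    intervenes), which is the default misconfig the probe flags."""
    if bind_addr in ("127.0.0.1", "::1", "localhost"):
        return "healthy"
    if bind_addr in ("0.0.0.0", "::"):
        return "degraded"  # default-publicly-exposed misconfig
    return "unknown"       # a specific interface: needs operator review

print(classify_bind("0.0.0.0"))  # degraded
```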
Tool surface
The server registers these MCP tools (full spec in SPEC.md):
| Tool | Returns |
|---|---|
| `health_overview` | Full snapshot – every component + overall HealthLevel + ranked critical findings |
| `gateway_status` | Gateway alive/dead, uptime, restarts, crashes, bind address |
| `cpu_memory_health` | CPU/memory/swap snapshot + 24h OOM count + load averages |
| `recent_errors(window_hours, min_severity)` | Recent error/warning entries, filterable by lookback + severity |
| `skill_registry_check` | Skill counts, recent additions/modifications, light heuristic flags |
| `last_upgrade_status` | From-version, to-version, outcome, regression markers, available upgrade |
| `cron_health` | Basic cron summary (defers to silentwatch-mcp when richer detection is wanted) |
| `disk_usage` | Root disk + log directory size + 24h growth + largest log files |
Resources:
- `health://overview` – full snapshot (same as the `health_overview` tool)
- `health://gateway` – gateway-only
- `health://resources` – CPU/memory-only
Prompts:
- `diagnose-degraded-health` – diagnostic walk-through, ranked corrective actions
- `summarize-health-trend` – daily operational digest
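To make the `recent_errors(window_hours, min_severity)` parameters concrete, here is a sketch of the filter semantics. The entry field names and severity ordering are assumptions about the tool's behavior, not its actual schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical severity ordering; the server's actual levels may differ.
SEVERITY_ORDER = {"info": 0, "warning": 1, "error": 2, "critical": 3}

def filter_recent_errors(entries, window_hours=24, min_severity="warning"):
    """Keep entries newer than the lookback window and at or above the
    requested severity (entries are hypothetical log-entry dicts)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    floor = SEVERITY_ORDER[min_severity]
    return [
        e for e in entries
        if e["timestamp"] >= cutoff and SEVERITY_ORDER[e["severity"]] >= floor
    ]

now = datetime.now(timezone.utc)
entries = [
    {"timestamp": now - timedelta(hours=1),  "severity": "error", "msg": "gateway crash"},
    {"timestamp": now - timedelta(hours=48), "severity": "error", "msg": "old error"},
    {"timestamp": now - timedelta(hours=2),  "severity": "info",  "msg": "routine restart"},
]
print([e["msg"] for e in filter_recent_errors(entries)])  # ['gateway crash']
```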
Quickstart
Install
pip install openclaw-health-mcp
Configure for Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
"mcpServers": {
"openclaw-health": {
"command": "python",
"args": ["-m", "openclaw_health_mcp"],
"env": {
"OPENCLAW_HEALTH_BACKEND": "mock"
}
}
}
}
Restart Claude Desktop. Test:
Show me a full health snapshot of my OpenClaw deployment.
The mock backend returns deliberately mixed data (gateway DEGRADED, skill registry CRITICAL, etc.) so the response demonstrates the full schema.
Backends
| Backend | Status | Description |
|---|---|---|
| mock | ✅ v1.0 | Sample data for protocol-wiring verification (default) |
| linux-proc | ✅ v1.0 | psutil-based system metrics (CPU/memory/swap/load/disk), cross-platform; Linux-specific OOM-event detection via journalctl/dmesg; recent-error log parsing via journalctl. Returns UNKNOWN for OpenClaw-specific components (gateway, skill_registry, upgrade, cron) – those need the openclaw backend |
| openclaw | ⏳ v1.1 | Parses OpenClaw config + log directory + ClawHub manifest + upgrade journal |
Select via OPENCLAW_HEALTH_BACKEND env var. Multi-backend support (federating linux-proc system metrics + openclaw application-specific) is planned for v1.2.
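Selection by environment variable can be sketched roughly as follows. The registry dict and module paths are illustrative assumptions; only the `OPENCLAW_HEALTH_BACKEND` variable name and the `mock` default come from the docs:

```python
import os

# Hypothetical backend registry; the real package's module layout may differ.
BACKENDS = {
    "mock": "openclaw_health_mcp.backends.mock",
    "linux-proc": "openclaw_health_mcp.backends.linux_proc",
    "openclaw": "openclaw_health_mcp.backends.openclaw",  # planned for v1.1
}

def select_backend() -> str:
    """Resolve a backend module path from OPENCLAW_HEALTH_BACKEND,
    defaulting to the mock backend."""
    name = os.environ.get("OPENCLAW_HEALTH_BACKEND", "mock")
    if name not in BACKENDS:
        raise ValueError(f"unknown backend {name!r}; choose from {sorted(BACKENDS)}")
    return BACKENDS[name]

os.environ["OPENCLAW_HEALTH_BACKEND"] = "linux-proc"
print(select_backend())  # openclaw_health_mcp.backends.linux_proc
```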
Roadmap
| Version | Scope | Status |
|---|---|---|
| v0.1 | Protocol wiring, mock backend, 8 tools / 3 resources / 2 prompts, 40 tests | ✅ |
| v1.0 | linux-proc backend (psutil + journalctl/dmesg OOM detection + log parsing); GitHub Actions CI matrix; PyPI Trusted Publishing; MCP Registry submission; 59 tests | ✅ |
| v1.1 | openclaw backend – parses OpenClaw config, log dir, ClawHub manifest, upgrade journal | ⏳ |
| v1.2 | Backend federation (linux-proc + openclaw); expanded log sources | ⏳ |
| v1.x | cowork backend, custom backend SDK, webhook emitter for alerts | ⏳ |
Need this adapted to your stack?
openclaw-health-mcp ships the mock and linux-proc backends today, with the openclaw backend landing in v1.1. If your AI agent runtime is different – Claude Code, Cowork, custom Python services, agent harnesses on AWS / GCP – and you want the same single-pane health visibility for it, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for an existing runtime with documented logging/metrics | $8,000β$10,000 | 1β2 weeks |
| Standard | Custom backend + custom severity rules + integration with your existing alerting | $15,000β$20,000 | 2β4 weeks |
| Complex | Multi-backend federation + RBAC + audit-log integration + on-call workflow | $25,000β$35,000 | 4β8 weeks |
To engage:
- Email temur@pixelette.tech with subject `Custom MCP Build inquiry`
- Include a one-paragraph description of your stack + which tier you're considering
- You'll receive a reply within 2 business days with a 30-min discovery call slot
This server is part of a production-AI infrastructure MCP suite – companion to silentwatch-mcp (cron silent-failure detection) and the upcoming AI Production Discipline Framework Notion template (the methodology these tools operationalize).
Production AI audits
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present, and write the corrective-action plan – that's the engagement this MCP server is built to support:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2β3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3β4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
Contributing
PRs welcome. Backends are intentionally pluggable β see src/openclaw_health_mcp/backends/ for the contract.
To add a new backend:
- Subclass `HealthBackend` in `backends/<your_backend>.py`
- Implement the 7 abstract probe methods (one per component)
- Register it in `backends/__init__.py`
- Add tests in `tests/test_backend_<your_backend>.py`
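The steps above can be sketched as follows. `HealthBackend` is the documented base-class name, but the probe-method names and return shapes here are assumptions about the contract, not copied from the source tree:

```python
from abc import ABC, abstractmethod

# Sketch of the pluggable-backend contract; method names are illustrative.
class HealthBackend(ABC):
    @abstractmethod
    def gateway_status(self) -> dict: ...
    @abstractmethod
    def cpu_memory_health(self) -> dict: ...
    @abstractmethod
    def recent_errors(self, window_hours: int, min_severity: str) -> list: ...
    @abstractmethod
    def skill_registry_check(self) -> dict: ...
    @abstractmethod
    def last_upgrade_status(self) -> dict: ...
    @abstractmethod
    def cron_health(self) -> dict: ...
    @abstractmethod
    def disk_usage(self) -> dict: ...

class NullBackend(HealthBackend):
    """Toy backend: every probe reports UNKNOWN (like linux-proc does
    for OpenClaw-specific components)."""
    def gateway_status(self): return {"level": "unknown"}
    def cpu_memory_health(self): return {"level": "unknown"}
    def recent_errors(self, window_hours, min_severity): return []
    def skill_registry_check(self): return {"level": "unknown"}
    def last_upgrade_status(self): return {"level": "unknown"}
    def cron_health(self): return {"level": "unknown"}
    def disk_usage(self): return {"level": "unknown"}

print(NullBackend().gateway_status())  # {'level': 'unknown'}
```

A backend that forgets one of the abstract methods fails at instantiation time rather than at first tool call, which is the usual reason to reach for `abc` here.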
Bug reports + feature requests: open a GitHub issue.
License
MIT – see LICENSE.
Related
- Production-AI MCP Suite (Gumroad bundle) – this server plus 6 others in one curated 7-pack bundle with a decision tree, day-one drill, and Custom MCP Build CTA. $99, or $49 with `LAUNCH50` for the first 30 days.
- silentwatch-mcp – cron silent-failure detection. Install alongside this server for richer `cron_health` data.
- openclaw-cost-tracker-mcp – token-cost telemetry + 429 prediction (v1.1+)
- openclaw-skill-vetter-mcp – ClawHub skill security vetting
- openclaw-upgrade-orchestrator-mcp – read-only upgrade advisor + provider-side regression detection (v1.2+)
- openclaw-output-vetter-mcp – agent claim verification (inline grounding-check + swallowed-exception scanner + multi-turn transcript review)
- AI Production Discipline Framework – Notion template, $29 – the methodology these MCP tools implement.
- SPEC.md – full server design.
- Model Context Protocol – protocol overview.
Built by Temur Khan – independent practitioner on production AI systems. Contact: temur@pixelette.tech
