silentwatch-mcp
An MCP server that surfaces scheduled-job state and detects silent failures (exit 0 but no useful output) for cron, systemd timers, and OpenClaw schedulers, enabling AI agents to query job health and overdue status directly.
MCP server for catching cron silent failures: when scheduled jobs exit 0 with empty output, when retry storms run away, when action budgets leak. Surfaces overdue jobs, length anomalies, and silent-fail patterns to any Claude or MCP-aware agent. Works with system cron, systemd timers, OpenClaw cron logs, and any JSONL run-log out of the box. Keywords: AI agent monitoring, cron health, scheduled-task observability, production AI ops.
What it does
Real silent failures from production AI deployments in the last 30 days:
- GitHub Issue #54260, anthropics/claude-code: Claude Code Routines cron triggers fire and the routine state advances (`ended_reason: run_once_fired`), but the cloud container never reaches prompt execution. This silently affected the operator's routines for at least 28 days before they noticed the output files weren't updating.
- GitHub Issue #1243, anthropics/claude-code-action: `claude-sonnet-4-6` returns empty assistant turns in a tight loop (`stop_reason: null`, `output_tokens: 8`) for ~20 minutes. The workflow step then exits as `success` with no artifacts produced; the GitHub Actions API can't distinguish "completed cleanly" from "returned empty for 20 minutes burning Claude Max budget."
- dev.to: "5 Silent Failure Patterns I Keep Finding in Production AI Systems": the systematic taxonomy.
These all map to one underlying problem: exit-code monitoring lies. The job returned 0; the data is broken anyway. Any team running scheduled jobs has hit at least one of these:
- **Silent failure**: the job ran, returned exit code 0, but produced no useful output (a web-search cron returning empty, a backup that wrote a 0-byte file, a digest email that sent with `<no rows>` in the body). Traditional monitoring sees a green checkmark; the data is broken anyway.
- **Overdue without alert**: a job stopped running for 3 days; nobody noticed because nobody was watching.
- **Last-success drift**: the job runs every hour but only succeeded once in the last 12 attempts; everyone assumes it's healthy because the most recent run was green.
- **Audit-trail gap**: you need to know when a specific job last completed for a compliance check, and the only "log" is `journalctl` output that rotated last week.
silentwatch-mcp exposes that visibility as MCP tools your AI agent can query directly. No metrics pipeline, no separate dashboard, no SaaS subscription.
> claude: which of my cron jobs have silent failures in the last 24 hours?
[MCP tool: find_silent_failures]
3 jobs flagged:
• web-search-refresh - ran 12× successfully but output empty in 8 (67% silent-fail rate)
• daily-summary - ran 1× successfully (24× expected); output normal
• audit-snapshot - last success 5 days ago, all subsequent runs returned exit 0 with empty body
Why silentwatch-mcp
Three things existing tools (Cronitor, Healthchecks.io, Datadog, Prometheus) don't do:
- **Detect silent failures, not just exit codes.** Traditional cron monitoring assumes `exit 0` = success. We check the output against configurable rules: empty output, length anomaly vs. historical median, error keywords in stdout despite exit 0, duration anomaly. The job that "ran successfully" but returned nothing useful is the failure mode that hides for weeks. We catch it.
- **MCP-native, no integration layer.** Claude Desktop, Cline, Continue, OpenClaw agents: any MCP-aware client queries directly. No Grafana plugin, no API wrapper, no JSON to parse manually.
- **Multi-source out of the box.** OpenClaw native JSONL logs, system crontab (`/etc/crontab` + `/etc/cron.d/*` + per-user `crontab -l`), and systemd timers (`systemctl list-timers` + `journalctl`). All four backends (including mock) ship in v0.3, so you can run `silentwatch-mcp` against whatever scheduler you have. No vendor lock-in.

Built for the SMB self-hoster running a $40 VPS where Datadog is overkill and a "$0/mo open-source MCP" is the right price point; the silent-failure detection is just as valuable on enterprise infra.
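The detection rules listed above can be sketched roughly like this. This is a minimal illustration, not the server's actual rule engine: the run-record shape, keyword list, and thresholds are all assumptions.

```python
from statistics import median

# Hypothetical run record: (exit_code, stdout, duration_seconds)
ERROR_KEYWORDS = ("error", "traceback", "exception", "denied")

def is_silent_failure(run, history, len_ratio=0.2, dur_ratio=3.0):
    """Flag a run that exited 0 but whose output looks suspicious.

    `history` is a list of previous runs used to establish baselines.
    Thresholds are illustrative, not silentwatch-mcp's shipped defaults.
    """
    exit_code, stdout, duration = run
    if exit_code != 0:
        return False  # a loud failure, not a silent one

    # Rule 1: empty output despite exit 0
    if not stdout.strip():
        return True

    # Rule 2: length anomaly vs. historical median of successful runs
    lengths = [len(h[1]) for h in history if h[0] == 0 and h[1].strip()]
    if lengths and len(stdout) < median(lengths) * len_ratio:
        return True

    # Rule 3: error keywords in stdout despite exit 0
    lowered = stdout.lower()
    if any(k in lowered for k in ERROR_KEYWORDS):
        return True

    # Rule 4: duration anomaly (suspiciously fast or slow vs. median)
    durations = [h[2] for h in history if h[0] == 0]
    if durations:
        med = median(durations)
        if med > 0 and (duration > med * dur_ratio or duration < med / dur_ratio):
            return True

    return False
```

For example, given a history of ten healthy runs that each wrote output in ~4 seconds, a run that exits 0 with an empty stdout, or with a `Traceback` in its output, gets flagged; a run that looks like the baseline does not.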
Tool surface
The server registers these MCP tools (full spec in SPEC.md):
| Tool | What it does |
|---|---|
| `list_jobs` | Enumerate all known cron jobs with last-run summary |
| `get_job_status(job_id)` | Detailed status for one job: last run, last success, success rate over window |
| `get_job_runs(job_id, limit)` | Recent run history with timing + status + output snippet |
| `find_overdue_jobs` | Jobs whose schedule says they should have run but haven't |
| `find_silent_failures(window_hours)` | Jobs that ran "successfully" but output looks suspicious |
| `tail_job_logs(job_id, lines)` | Recent log output for one job |
Resources:
- `cron://jobs` - list of all jobs (manifest)
- `cron://job/{id}` - individual job manifest + recent runs
- `cron://run/{id}` - individual run instance with full output
Prompts:
- `diagnose-overdue` - diagnostic prompt template for an overdue job
- `summarize-cron-health` - daily digest of cron activity + anomalies
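On the wire, an MCP client invokes these tools as standard JSON-RPC 2.0 `tools/call` requests. A sketch of a `find_silent_failures` call; the argument name matches the tool signature above, but consult SPEC.md for the authoritative schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "find_silent_failures",
    "arguments": { "window_hours": 24 }
  }
}
```

MCP-aware clients like Claude Desktop construct this request for you; you only ever see the natural-language question and the tool result.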
Quickstart
v0.3 beta: all 4 backends shipped, plus real overdue detection via cron-schedule parsing (croniter). Mock, OpenClaw JSONL, crontab, and systemd backends are all production-ready. 74 tests passing. v1.0 is now polish: PyPI release + GitHub Actions CI + MCP registry submissions.
Install
```shell
# Not yet on PyPI - install from source for now:
pip install -e .
```

Once published, `pip install silentwatch-mcp` will work directly.
Configure for Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "silentwatch": {
      "command": "python",
      "args": ["-m", "silentwatch_mcp"],
      "env": {
        "SILENTWATCH_BACKEND": "mock"
      }
    }
  }
}
```
Backends (all four shipped as of v0.3):
- `SILENTWATCH_BACKEND=mock` - returns sample data (default for development)
- `SILENTWATCH_BACKEND=openclaw-jsonl` - parses OpenClaw's native cron-run JSONL files (set `SILENTWATCH_OPENCLAW_LOGS` to the directory; default `~/.openclaw/cron-runs/`); richest data: full run history + silent-fail detection
- `SILENTWATCH_BACKEND=crontab` - parses `/etc/crontab` + `/etc/cron.d/*` + user crontabs (`crontab -l`); last run inferred from `/var/log/syslog` or `/var/log/cron` (set `SILENTWATCH_SYSLOG` to override)
- `SILENTWATCH_BACKEND=systemd` - parses `systemctl list-timers --all --output=json` + `journalctl -u <unit>` for run history; lifts `OnCalendar=` into the schedule field

All non-mock backends gracefully return empty results on platforms or hosts where the underlying tooling isn't present, so the configuration is safe to leave in place across environments.
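Overdue detection itself boils down to: parse the schedule, work out when the job should last have fired, and compare with the last observed run. The server uses croniter for full cron expressions; the sketch below shows the same idea for simple fixed-interval schedules only, with a hypothetical grace factor.

```python
from datetime import datetime, timedelta

def is_overdue(interval: timedelta, last_run: datetime,
               now: datetime, grace: float = 1.5) -> bool:
    """True if the job hasn't run within `grace` intervals of `now`.

    Illustrative only: silentwatch-mcp parses real cron expressions
    (via croniter) rather than fixed intervals, and its grace policy
    may differ from this 1.5x default.
    """
    return (now - last_run) > interval * grace

now = datetime(2026, 5, 2, 12, 0)
hourly = timedelta(hours=1)
print(is_overdue(hourly, now - timedelta(minutes=50), now))  # → False (ran recently)
print(is_overdue(hourly, now - timedelta(hours=4), now))     # → True (overdue)
```

The grace factor is what separates "a run is slightly late" from "the job has stopped"; without it, every hourly job would flap to overdue at minute 61.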
Restart Claude Desktop
The server registers as silentwatch. Test:
Show me all my cron jobs and their last-run status.
Roadmap
| Version | Scope | Status |
|---|---|---|
| v0.1 | Protocol wiring, mock backend, all 6 tools registered with stub data, tests pass | ✅ Complete |
| v0.2 | OpenClaw JSONL backend implemented (real cron-run parsing, malformed-line handling, silent-fail enrichment) | ✅ Complete (2026-05-02) |
| v0.3 | Crontab + systemd backends; cron-schedule parsing for real overdue detection (croniter); 35 new tests | ✅ Complete (2026-05-02) |
| v1.0 | Polish: PyPI release, GitHub Actions CI, MCP registry submissions (Glama + PulseMCP), refined silent-fail rule configuration | ⏳ Phase 1 ship target (W3, May 18) |
| v1.x | Additional backends (Cowork scheduler, Claude Code background tasks, generic JSON config), webhook emitter for alerts | ⏳ Phase 2+ |
Need this adapted to your stack?
silentwatch-mcp ships with 4 backends (mock, OpenClaw JSONL, crontab, systemd). If your scheduler is something else (AWS EventBridge, GCP Cloud Scheduler, Hangfire, Sidekiq, Temporal, Apache Airflow, Prefect, Dagster, or a custom job runner) and you want the same silent-failure-detection MCP visibility surface for it, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for an existing scheduler with documented API (e.g., GCP Cloud Scheduler) | $8,000β$10,000 | 1β2 weeks |
| Standard | Custom backend + custom silent-fail rules + integration with your existing alerting (PagerDuty, Slack, etc.) | $15,000β$20,000 | 2β4 weeks |
| Complex | Multi-backend (federated cron across regions / clusters / tenants) + RBAC + audit-log integration + on-call workflow | $25,000β$35,000 | 4β8 weeks |
To engage:
- Email temur@pixelette.tech with subject "Custom MCP Build inquiry"
- Include a one-paragraph description of your scheduler stack and which tier you're considering
- You'll get a reply within 2 business days with a 30-min discovery-call slot
This server is also part of the AI Production Discipline Framework, the methodology underlying the production AI audits I run.
Production AI audits
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns that are already present, and write the corrective-action plan, that's the service this MCP server was built to support. The standalone audit service:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2β3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3β4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
Contributing
PRs welcome. The structure is intentionally flat to make custom backends easy to add; see src/silentwatch_mcp/backends/ for existing examples.
To add a new backend:
- Subclass `CronBackend` in `backends/<your_backend>.py`
- Implement `list_jobs`, `get_job_runs`, `tail_logs`
- Register in `backends/__init__.py`
- Add tests in `tests/test_backend_<your_backend>.py`
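A new backend might look roughly like this. It's a sketch: the real `CronBackend` base class and its method/return shapes live in `backends/`, and everything here (the `Run`/`Job` records, the `EventBridgeBackend` name) is a hypothetical stand-in.

```python
# backends/eventbridge.py - hypothetical example backend
from dataclasses import dataclass, field

@dataclass
class Run:
    started_at: str
    exit_code: int
    output: str

@dataclass
class Job:
    job_id: str
    schedule: str
    runs: list = field(default_factory=list)

class CronBackend:  # stand-in for the real base class in backends/
    def list_jobs(self): raise NotImplementedError
    def get_job_runs(self, job_id, limit=20): raise NotImplementedError
    def tail_logs(self, job_id, lines=50): raise NotImplementedError

class EventBridgeBackend(CronBackend):
    """Hypothetical adapter; a real one would query the scheduler's API."""

    def __init__(self, jobs):
        self._jobs = {j.job_id: j for j in jobs}

    def list_jobs(self):
        return list(self._jobs.values())

    def get_job_runs(self, job_id, limit=20):
        return self._jobs[job_id].runs[-limit:]

    def tail_logs(self, job_id, lines=50):
        runs = self._jobs[job_id].runs
        return runs[-1].output.splitlines()[-lines:] if runs else []
```

The point of the flat structure: a backend only has to answer "what jobs exist, what ran, what did it print"; the silent-fail rules and overdue logic sit above it and come for free.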
Bug reports + feature requests: open a GitHub issue.
License
MIT - see LICENSE.
Related
- Production-AI MCP Suite (Gumroad bundle) - this server plus 6 others (`openclaw-health-mcp`, `openclaw-cost-tracker-mcp`, `openclaw-skill-vetter-mcp`, `openclaw-upgrade-orchestrator-mcp`, `openclaw-output-vetter-mcp`, `bash-vet-mcp`) in one curated 7-pack bundle with a decision tree, day-one drill, and Custom MCP Build CTA. $99, or $49 with `LAUNCH50` for the first 30 days.
- openclaw-health-mcp - deployment health (gateway, CPU/RAM, skills, recent errors)
- openclaw-cost-tracker-mcp - token-cost telemetry + 429 prediction (v1.1+)
- openclaw-skill-vetter-mcp - ClawHub skill + agent-config security vetting (v1.1+)
- openclaw-upgrade-orchestrator-mcp - read-only upgrade advisor + provider-side regression detection (v1.2+)
- openclaw-output-vetter-mcp - agent claim verification (inline grounding-check + swallowed-exception scanner + multi-turn transcript review + action-outcome verifier, v1.1+)
- bash-vet-mcp - pre-execution shell-command vetting (28 destructive-pattern rules across 8 families)
- AI Production Discipline Framework - Notion template, $29; the full 14-pattern catalog this MCP server is built around
- AI Production Auditor (GPT Store) - paste your config or agent setup, get a 5 Cs audit report. Free, ChatGPT-only.
- SPEC.md - full server design
- Model Context Protocol - protocol overview
Built by Temur Khan, independent practitioner on production AI systems. Contact: temur@pixelette.tech
