Writing MCP
Metadata-first fiction editing and reasoning tools for long-form writing projects.
mcp-writing
An MCP service for AI-assisted reasoning and editing on long-form fiction projects.
Designed to work with OpenClaw but compatible with any MCP-capable AI gateway.
Quick launch
For local stdio MCP clients, run the published package directly:
```sh
WRITING_SYNC_DIR=/path/to/sync-dir DB_PATH=./writing.db npx -y @hanna84/mcp-writing
```
The CLI wrapper defaults to stdio transport and adds the Node 22 SQLite flag automatically when needed.
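Many stdio MCP clients are configured through a JSON manifest rather than a shell command. A minimal sketch of the equivalent entry, assuming a Claude Desktop-style `mcpServers` file (the `writing` key and both paths are placeholders):

```json
{
  "mcpServers": {
    "writing": {
      "command": "npx",
      "args": ["-y", "@hanna84/mcp-writing"],
      "env": {
        "WRITING_SYNC_DIR": "/path/to/sync-dir",
        "DB_PATH": "./writing.db"
      }
    }
  }
}
```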
VS Code extension
For VS Code-native setup flows (including prose styleguide setup), use the mcp-writing-vscode extension listed in the Documentation table below.
What it does
Instead of feeding an entire manuscript to an AI and hoping it fits in the context window, mcp-writing builds a structured index from your scene files. The AI queries that index first (finding relevant characters, beats, and loglines), then loads only the specific prose it needs.
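A minimal sketch of that two-step flow over stdio using the official MCP TypeScript SDK (usable from plain JavaScript as shown here). The tool names `find_scenes` and `get_scene_prose` come from this README; the filter name, scene ID format, and client wiring are illustrative assumptions:

```js
// Query the index first, then load only the prose that matters.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@hanna84/mcp-writing"],
  env: { WRITING_SYNC_DIR: "/path/to/sync-dir", DB_PATH: "./writing.db" },
});
const client = new Client({ name: "example-client", version: "0.0.1" });
await client.connect(transport);

// Step 1: metadata query; cheap, and never loads prose.
const hits = await client.callTool({
  name: "find_scenes",
  arguments: { character: "Mara" }, // assumed filter name
});

// Step 2: load only the specific scene the task needs.
const prose = await client.callTool({
  name: "get_scene_prose",
  arguments: { scene_id: "ch03-s02" }, // assumed ID format
});
```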
Current status:
- Core platform complete: Metadata-first analysis, sidecar-backed metadata maintenance, AI-assisted prose editing with confirmation + git history, review bundles, and Scrivener Direct extraction are all implemented.
- Recently delivered: guideline generation, now implemented and tracked in the done PRDs.
- Active development: OpenClaw integration is the current focus area.
- Deferred backlog: embeddings search is intentionally deferred for later exploration.
Who it is for
- Novelists and writing teams working on long manuscripts with many scenes, characters, and continuity constraints.
- AI-assisted editing workflows where you want targeted context retrieval instead of full-manuscript prompting.
- Projects that need traceable, reversible edits with metadata that stays synchronized as drafts evolve.
Documentation
| Guide | Description |
|---|---|
| docs/setup.md | Prerequisites, first-time setup, Scrivener import, native sync format |
| mcp-writing-vscode | VS Code extension for client-native setup flows |
| docs/docker.md | Docker Compose, OpenClaw integration, SSH hardening |
| docs/data-ownership.md | Which tools write which files, import safety rules |
| docs/tools.md | Full tool reference, auto-generated from source |
| docs/development.md | Running locally, tests, environment variables, troubleshooting |
Breaking changes
`describe_workflows` surface redesign
`describe_workflows` now exposes an outcome-first, discovery-first workflow map. This is a breaking change if your prompts or automation depend on the previous workflow IDs or ordering.
Update integrations using this mapping:
- `manuscript_exploration` -> `question_driven_discovery` (or `targeted_scene_reading` when the task is prose inspection)
- `prose_editing` -> `safe_scene_revision`
- `character_management` -> `character_understanding`
- `place_management` -> `place_understanding`
- `review_bundle` -> `review_preparation`
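If prompts or automation store workflow IDs as strings, a small translation map keeps the migration in one place. A sketch (only the ID pairs above come from this release; the helper itself is hypothetical, not part of mcp-writing):

```js
// Old-to-new workflow IDs from the mapping above.
const WORKFLOW_ID_MIGRATION = {
  manuscript_exploration: "question_driven_discovery", // or targeted_scene_reading for prose inspection
  prose_editing: "safe_scene_revision",
  character_management: "character_understanding",
  place_management: "place_understanding",
  review_bundle: "review_preparation",
};

// Hypothetical helper: translate an ID, passing already-migrated IDs through.
export function translateWorkflowId(id) {
  return WORKFLOW_ID_MIGRATION[id] ?? id;
}
```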
New workflow IDs added:
- `thread_understanding`
- `parity_recovery`
Styleguide workflows are still available, but no longer positioned as part of the primary daily workflow surface.
`find_scenes` and `get_arc` response-shape standardization
`find_scenes` and `get_arc` now always return structured envelopes, including for non-paginated calls.
- Envelope fields: `results`, `total_count`.
- Pagination fields are included when paging is active.
- `warning` / `next_step` are included when relevant.
If your integration previously handled raw arrays for non-paginated calls, update it to parse envelopes consistently.
Safe parsing pattern:
```js
// toolText is the raw text payload returned by a find_scenes or get_arc call.
const parsed = JSON.parse(toolText);
const scenes = parsed.results ?? [];                     // always present in the envelope
const totalCount = parsed.total_count ?? scenes.length;  // always present in the envelope
const warning = parsed.warning ?? null;                  // included when relevant
const nextStep = parsed.next_step ?? null;               // included when relevant
```
Usage scenarios
1) Continuity pass before sending chapters to beta readers
Goal: catch inconsistencies before sharing pages.
- Run `sync` after your latest writing session.
- Ask `find_scenes` for scenes involving a specific character or tag (for example, all scenes tagged `injury` or `promise`).
- Use `get_arc` to review that character's ordered progression across the manuscript.
- Load only the suspect scenes with `get_scene_prose`.
- Attach follow-up notes with `flag_scene` where continuity needs a fix.
Outcome: you review one narrative thread at a time instead of rereading the entire novel to find contradictions.
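As a sketch, reusing the connected `client` from the earlier example (tool names are from this README; every argument shape below is an assumption):

```js
// Continuity pass: one narrative thread at a time.
async function continuityPass(client) {
  await client.callTool({ name: "sync", arguments: {} });

  // Find every scene carrying the tag under review.
  const tagged = await client.callTool({
    name: "find_scenes",
    arguments: { tag: "injury" }, // assumed filter name
  });

  // Review the character's ordered progression across the manuscript.
  const arc = await client.callTool({
    name: "get_arc",
    arguments: { character: "Mara" }, // assumed argument name
  });

  // Load only the suspect scene, then flag it for a fix.
  await client.callTool({ name: "get_scene_prose", arguments: { scene_id: "ch07-s01" } });
  await client.callTool({
    name: "flag_scene",
    arguments: { scene_id: "ch07-s01", note: "Wound switches sides between ch. 5 and 7" },
  });
  return { tagged, arc };
}
```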
2) Planning and tracking subplot beats during revisions
Goal: make sure subplot threads progress intentionally and resolve on time.
- Run `list_threads` for the project.
- Use `get_thread_arc` to inspect scene order and beat labels for each thread.
- When a beat is missing, call `upsert_thread_link` to add or update it on the right scene.
- Re-run `get_thread_arc` to confirm pacing and coverage.
Outcome: subplot structure stays visible and auditable, which reduces dropped threads in late drafts.
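A sketch of that loop against the same connected client (thread, scene, and beat field names are assumptions):

```js
// Subplot audit: inspect, patch the missing beat, re-check coverage.
async function auditThread(client, projectId, threadId) {
  await client.callTool({ name: "list_threads", arguments: { project_id: projectId } });

  const before = await client.callTool({
    name: "get_thread_arc",
    arguments: { thread_id: threadId },
  });

  // Add or update the missing beat on the right scene.
  await client.callTool({
    name: "upsert_thread_link",
    arguments: { thread_id: threadId, scene_id: "ch12-s03", beat: "midpoint_reversal" }, // assumed fields
  });

  // Confirm pacing and coverage after the change.
  const after = await client.callTool({
    name: "get_thread_arc",
    arguments: { thread_id: threadId },
  });
  return { before, after };
}
```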
3) Tightening scene metadata after heavy prose edits
Goal: keep indexes accurate without manually re-tagging everything.
- After rewriting scenes, call `enrich_scene` to re-derive lightweight metadata from current prose.
- Use `update_scene_metadata` for intentional editorial fields (for example, beat, POV, timeline position, and tags).
- Use `search_metadata` and `find_scenes` to verify scenes are discoverable under the expected filters.
Outcome: your AI assistant can reliably find the right scenes without drifting from the manuscript.
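A sketch of the refresh sequence (the editorial field names here are assumptions, not the documented schema):

```js
// Metadata refresh after heavy prose edits.
async function refreshSceneMetadata(client, sceneId) {
  // Re-derive lightweight metadata from the current prose.
  await client.callTool({ name: "enrich_scene", arguments: { scene_id: sceneId } });

  // Set intentional editorial fields explicitly.
  await client.callTool({
    name: "update_scene_metadata",
    arguments: { scene_id: sceneId, pov: "Mara", beat: "all_is_lost", tags: ["promise"] }, // assumed fields
  });

  // Verify the scene is discoverable under the expected filter.
  return client.callTool({ name: "find_scenes", arguments: { tag: "promise" } });
}
```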
4) Safe AI-assisted line edits with rollback
Goal: let AI propose prose edits without losing control of your draft.
- Ask the AI to call `propose_edit` for a specific scene.
- Review the staged diff.
- Accept with `commit_edit` or reject with `discard_edit`.
- Use `list_snapshots` (and optional `snapshot_scene`) to inspect or preserve revision history.
Outcome: you get AI speed with explicit approval and recoverable history for every applied change.
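A sketch of the approval loop; the review step is a stand-in for however your client surfaces the staged diff, and all argument shapes are assumptions:

```js
// Reviewed line edit: propose, inspect, then commit or discard.
async function reviewedEdit(client, sceneId, instruction) {
  const proposal = await client.callTool({
    name: "propose_edit",
    arguments: { scene_id: sceneId, instruction },
  });

  // Stand-in for a real review UI showing the staged diff.
  const approved = await askHuman(proposal);

  await client.callTool({
    name: approved ? "commit_edit" : "discard_edit",
    arguments: { scene_id: sceneId },
  });

  // Revision history stays inspectable either way.
  return client.callTool({ name: "list_snapshots", arguments: { scene_id: sceneId } });
}

// Hypothetical placeholder; replace with your client's diff-review flow.
const askHuman = async (_proposal) => true;
```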
5) Refreshing scene-character links after imports or major rewrites
Goal: rebuild scene-to-character links in a controlled way after imported prose changes or metadata drift.
- Start with `enrich_scene_characters_batch` using the default `dry_run=true` to preview inferred links for a project, chapter, or explicit scene list.
- Poll `get_async_job_status` until the batch job completes, then review `job.result.results` for changed scenes, ambiguous matches, and partial failures.
- Spot-check a few affected scenes with `get_scene_prose` if the changes touch important continuity or cast-heavy chapters.
- Re-run `enrich_scene_characters_batch` with `dry_run=false` once the preview looks correct.
- If you want a destructive overwrite instead of additive merge behavior, use `replace_mode=replace` with `confirm_replace=true` deliberately.
Outcome: character-link maintenance becomes a preview-first batch operation instead of a one-off regex script or manual sidecar cleanup.
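A sketch of the preview-then-apply loop. `dry_run`, `get_async_job_status`, and `job.result.results` come from the docs above; the job-ID plumbing and response parsing are assumptions:

```js
// Preview-first batch relink with polling.
async function relinkCharacters(client, projectId) {
  const started = await client.callTool({
    name: "enrich_scene_characters_batch",
    arguments: { project_id: projectId, dry_run: true },
  });
  const jobId = JSON.parse(started.content[0].text).job_id; // assumed response shape

  // Poll until the batch settles, then review per-scene results.
  let job;
  do {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const status = await client.callTool({
      name: "get_async_job_status",
      arguments: { job_id: jobId },
    });
    job = JSON.parse(status.content[0].text); // assumed response shape
  } while (job.status !== "completed" && job.status !== "failed"); // assumed status values

  console.log(job.result.results); // changed scenes, ambiguous matches, partial failures

  // Apply for real only once the preview looks correct.
  return client.callTool({
    name: "enrich_scene_characters_batch",
    arguments: { project_id: projectId, dry_run: false },
  });
}
```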
6) Post-upgrade recovery after legacy migration warnings
Goal: recover index confidence quickly when legacy upgrade warnings indicate ambiguous rows were skipped.
- Start by checking `get_runtime_config` (or `describe_workflows`) and confirm whether `db_migration_warnings` contains `LEGACY_JOIN_ROWS_SKIPPED`.
- If present, run `sync` immediately to rebuild scene relationships from current sidecars and prose metadata.
- Continue normal discovery (`find_scenes`, `get_arc`, `get_thread_arc`) and watch for stale-metadata warnings.
- When you touch stale scenes, run `enrich_scene(scene_id, project_id)` to recover metadata parity incrementally.
- If many scenes remain stale, switch to `enrich_scene_characters_batch` (dry-run first) for broader catch-up.
Outcome: upgrade-related data loss risk becomes an explicit, operator-visible recovery workflow instead of a silent state mismatch.
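A sketch of the recovery sequence; the warning code and tool names come from the docs above, while the payload scan and argument shapes are assumptions:

```js
// Post-upgrade recovery: detect the warning, resync, then catch up incrementally.
async function recoverAfterUpgrade(client, projectId, staleSceneIds) {
  const config = await client.callTool({ name: "get_runtime_config", arguments: {} });

  // Crude but safe: scan the whole payload for the warning code.
  if (JSON.stringify(config).includes("LEGACY_JOIN_ROWS_SKIPPED")) {
    // Rebuild scene relationships from current sidecars and prose metadata.
    await client.callTool({ name: "sync", arguments: {} });
  }

  // Recover metadata parity incrementally as stale scenes surface.
  for (const sceneId of staleSceneIds) {
    await client.callTool({
      name: "enrich_scene",
      arguments: { scene_id: sceneId, project_id: projectId },
    });
  }
}
```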
License
AGPL-3.0-only
