pastorsimon1798/mcp-video
screen, batch processing, format conversion, subtitles, watermarks, and more. 380 tests, CI on Python 3.11+3.12, progress callbacks, works with Claude Code, Cursor, and any MCP client.
mcp-video
Video editing MCP server for AI agents.
Structured tools for FFmpeg video editing, cinematic prompt planning, media analysis, subtitles, audio, effects, Hyperframes video creation, and local repurposing packages.
Install • Quick Start • Agent Workflows • Tools • Tool Reference • AI Discovery • llms.txt
Public Discovery
mcp-video is a free, open-source Model Context Protocol (MCP) server, Python library, and CLI that gives AI agents a real video-editing surface. It wraps FFmpeg, PUSHING CREATION-style planning, media analysis, quality checks, subtitles, audio generation, effects, Hyperframes 0.5 rendering, and local repurposing packages behind structured tool schemas.
Best-fit searches:
- video editing MCP server
- AI agent video editing
- FFmpeg MCP tools
- Claude Code video editing
- Cursor MCP video tools
- Python video editing library
- subtitle automation
- reels and shorts automation
- agentic media pipeline
- local AI video workflow
- Hyperframes video creation
- YouTube Shorts repurposing
Why It Exists
AI agents can write FFmpeg commands, but they should not have to guess flags, parse brittle stderr, or silently publish broken media. mcp-video gives agents typed operations, inspectable tool metadata, structured results, and quality checkpoints so a video workflow can be automated and reviewed without turning into shell-command roulette.
Use it when you want an AI assistant to:
- trim, merge, resize, crop, rotate, transcode, or export video;
- add text, subtitles, watermarks, overlays, filters, fades, effects, and transitions;
- extract audio, normalize audio, synthesize audio, add generated audio, or create waveforms;
- detect scenes, make thumbnails, generate storyboards, compare quality, and create release checkpoints;
- scaffold cinematic projects, read STYLE_/NEG_ blocks, parse storyboard tables, and expand shot prompts;
- create new Hyperframes projects, inspect rendered layouts, capture websites, generate local speech, remove backgrounds, and post-process the result with FFmpeg tools;
- repurpose one source video into vertical, horizontal, and square local delivery packages with manifests and review artifacts;
- drive repeatable media workflows from Claude Code, Cursor, Codex-style clients, scripts, or CI.
Installation
Prerequisite: FFmpeg must be installed and available on PATH.
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt install ffmpeg
Run without a global install:
uvx --from mcp-video mcp-video doctor
Or install with pip:
pip install mcp-video
mcp-video doctor
Hyperframes tools additionally need Node.js 22+ because they call the Hyperframes CLI through npx.
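The external prerequisites (FFmpeg on PATH, and Node.js for the Hyperframes tools) can be checked before installation with the standard library alone. A minimal sketch — the exact checks that `mcp-video doctor` performs are not documented here, so this only approximates them:

```python
import shutil

def check_prerequisites() -> dict:
    """Report whether the external tools mcp-video relies on are on PATH."""
    return {
        "ffmpeg": shutil.which("ffmpeg") is not None,
        "node": shutil.which("node") is not None,  # only needed for Hyperframes tools
    }

if __name__ == "__main__":
    for tool, found in check_prerequisites().items():
        print(f"{tool}: {'ok' if found else 'missing'}")
```

If `ffmpeg` is missing, install it first; if only `node` is missing, everything except the Hyperframes tools should still work.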
Quick Start
Claude Code
claude mcp add mcp-video -- uvx --from mcp-video mcp-video
Claude Desktop
{
"mcpServers": {
"mcp-video": {
"command": "uvx",
"args": ["--from", "mcp-video", "mcp-video"]
}
}
}
Cursor
{
"mcpServers": {
"mcp-video": {
"command": "uvx",
"args": ["--from", "mcp-video", "mcp-video"]
}
}
}
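Since Claude Desktop and Cursor share the same `mcpServers` entry, the config can also be generated programmatically. A sketch — the config file path varies by client and platform, so writing the file is left to you:

```python
import json

def mcp_video_server_entry() -> dict:
    """Build the mcpServers entry shown in the Claude Desktop and Cursor examples."""
    return {
        "mcpServers": {
            "mcp-video": {
                "command": "uvx",
                "args": ["--from", "mcp-video", "mcp-video"],
            }
        }
    }

print(json.dumps(mcp_video_server_entry(), indent=2))
```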
Then ask your agent:
Trim this interview into a 45-second vertical clip, add burned captions, normalize the audio, make a thumbnail, and create a release checkpoint before export.
Python Client
from mcp_video import Client
editor = Client()
clip = editor.trim("interview.mp4", start="00:02:15", duration="00:00:45")
caption_file = "captions.srt"
editor.ai_transcribe(clip.output_path, output_srt=caption_file)
captioned = editor.subtitles(clip.output_path, subtitle_file=caption_file)
vertical = editor.resize(captioned.output_path, aspect_ratio="9:16")
checkpoint = editor.release_checkpoint(vertical.output_path)
print(checkpoint["thumbnail"])
print(checkpoint["storyboard"])
CLI
mcp-video info interview.mp4
mcp-video trim interview.mp4 -s 00:02:15 -d 45
mcp-video video-ai-transcribe clip.mp4 --output captions.srt
mcp-video subtitles clip.mp4 captions.srt
mcp-video resize clip.mp4 --aspect-ratio 9:16
mcp-video video-quality-check clip.mp4
mcp-video repurpose clip.mp4 --platforms youtube-shorts instagram-reel tiktok
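For batch automation, the CLI commands above can be driven from a script. A hedged sketch that builds the `repurpose` invocation for every MP4 in a folder — flag names are taken from the examples above; verify them against `mcp-video --help` before relying on this:

```python
import subprocess
from pathlib import Path

def build_repurpose_cmd(clip: Path, platforms: list[str]) -> list[str]:
    """Assemble the documented `mcp-video repurpose` command for one clip."""
    return ["mcp-video", "repurpose", str(clip), "--platforms", *platforms]

def repurpose_folder(folder: str, platforms: list[str], dry_run: bool = True) -> list[list[str]]:
    """Build (and optionally run) one repurpose command per MP4 in `folder`."""
    cmds = [build_repurpose_cmd(p, platforms) for p in sorted(Path(folder).glob("*.mp4"))]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

The `dry_run` default means the function only returns the commands it would run, which is useful for review before touching real media.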
What Agents Can Do
| Workflow | Example prompt |
|---|---|
| Social clips | "Turn this landscape recording into a captioned TikTok and YouTube Short." |
| Podcast production | "Find the strongest segment, trim it, normalize audio, add chapters, and export." |
| Product demos | "Create a short launch video from screenshots, title cards, and voiceover." |
| Cinematic planning | "Create a style pack and storyboard, then render shot prompts for generation." |
| Quality review | "Compare these two exports, make thumbnails, and flag visual or audio problems." |
| Batch automation | "Convert this folder of clips to web-ready MP4 with consistent loudness." |
| Code-created video | "Scaffold a Hyperframes composition, inspect it, render it, then add subtitles and a watermark." |
| Local repurposing | "Turn this master clip into Shorts, Reels, TikTok, and YouTube assets with thumbnails and a manifest." |
MCP Tools
mcp-video registers a broad MCP tool surface, including a search_tools discovery tool so agents can find the right operation without loading every tool description into context.
| Category | Count | Highlights |
|---|---|---|
| Core video editing | 32 | trim, merge, resize, crop, rotate, convert, overlays, subtitles, export, cleanup, templates |
| Cinematic creation | 4 | project scaffold, style-pack parsing, storyboard parsing, shot prompt expansion |
| AI-assisted media | 11 | transcription, scene detection, upscaling, stem separation, silence removal, color grading |
| Hyperframes | 18 | init, preview, render, snapshots, inspect, catalog, website capture, local TTS, transcription, background removal, diagnostics, benchmark, post-process |
| Repurposing | 2 | dry-run manifests, platform-ready variants, thumbnails, storyboards, release checkpoints |
| Procedural audio | 7 | synthesize, compose, presets, effects, sequences, generated audio, spatial audio |
| Visual effects | 8 | vignette, glow, noise, scanlines, chromatic aberration, luma key, mask, shape mask |
| Transitions | 3 | glitch, morph, pixelate |
| Layout and motion | 6 | grid, picture-in-picture, animated text, counters, progress bars, auto-chapters |
| Analysis | 8 | scene detection, thumbnail, preview, storyboard, quality compare, metadata, waveform, release checkpoint |
| Image analysis | 3 | extract colors, generate palettes, analyze product images |
| Discovery | 1 | search_tools |
from mcp_video import Client
editor = Client()
matches = editor.search_tools("subtitle")
print(matches["tools"])
Full reference: docs/TOOLS.md
Agent-Safe Workflow
For autonomous agents, the intended path is inspect, edit, verify, then ask a human to review release artifacts:
from mcp_video import Client
client = Client()
print(client.inspect("trim"))
result = client.pipeline(
[
{"op": "trim", "input": "source.mp4", "start": "00:01:00", "duration": "00:00:45"},
{"op": "add_text", "text": "Launch clip", "position": "top-center"},
{"op": "normalize_audio"},
{"op": "resize", "aspect_ratio": "9:16"},
{"op": "export", "quality": "high"},
{"op": "release_checkpoint"},
],
output_path="final-short.mp4",
)
Safety contract:
- Media-producing calls return structured results with output paths.
- Analysis and discovery calls return structured JSON reports.
- Tool discovery is available through search_tools() and Client.inspect().
- Unexpected keyword errors are converted into actionable MCPVideoError guidance.
- Do not publish agent-generated video without video_quality_check, video_release_checkpoint, and human visual/audio inspection.
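The publish rule in the safety contract can be enforced with a small release gate. A sketch assuming the checkpoint report is a dict with `thumbnail` and `storyboard` keys, as in the Python Client example earlier — the real report may carry more fields:

```python
class ReleaseGateError(Exception):
    """Raised when a checkpoint report is missing required review artifacts."""

REQUIRED_ARTIFACTS = ("thumbnail", "storyboard")

def gate_release(checkpoint: dict) -> dict:
    """Refuse to proceed unless every review artifact is present in the report."""
    missing = [k for k in REQUIRED_ARTIFACTS if not checkpoint.get(k)]
    if missing:
        raise ReleaseGateError(f"checkpoint missing artifacts: {missing}")
    return checkpoint  # safe to hand to a human reviewer
```

Note this gate only checks that artifacts exist; the human visual/audio inspection the contract calls for still has to happen.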
Documentation
Testing
Development verification lives in docs/TESTING.md. Keep public-surface, media workflow, and security checks current when changing tool behavior.
Development
git clone https://github.com/KyaniteLabs/mcp-video.git
cd mcp-video
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v -m "not slow and not hyperframes"
Community
- Contributing
- Code of Conduct
- Governance
- Maintainers
- Security
- Support
- Roadmap
- Changelog
- GitHub Discussions
License
Apache 2.0. See LICENSE.
Built with FFmpeg, Hyperframes, and the Model Context Protocol.
