io.github.kapillamba4/meta-prompt-mcp
MCP server providing official Google and Anthropic prompting guides for meta-prompt generation.
Meta-Prompt MCP
A Prompting Oracle: an MCP server that bridges official Prompting Guides with your LLM workflow to help you generate highly accurate, effective, and structured meta-prompts.
What It Does
Meta-Prompt MCP is a specialized Model Context Protocol (MCP) server that acts as an automated "Prompting Oracle." It empowers any MCP-compatible host (Claude Desktop, Cursor, etc.) to query expert Prompting Guides mid-conversation.
When building AI workflows, creating robust "meta-prompts" (system prompts for agents) is critical. Instead of guessing how to instruct an LLM, this server provides immediate access to authoritative guidelines. By surfacing these best practices on-demand, it ensures the meta-prompts you generate are exceptionally accurate, helpful, and grounded in proven methodology.
Architecture
┌─────────────────────┐   stdio   ┌───────────────────────────┐
│      MCP Host       │──────────►│      Meta-Prompt MCP      │
│  (Claude Desktop,   │           │                           │
│   Cursor, IDEs)     │           │  ┌─────────────────────┐  │
│                     │           │  │   FastMCP Server    │  │
│                     │           │  │  • get_google_      │  │
│                     │           │  │    guide            │  │
│                     │           │  │  • get_anthropic_   │  │
│                     │           │  │    guide            │  │
│                     │           │  └──────────┬──────────┘  │
│                     │           │             │             │
│                     │           │  ┌──────────▼──────────┐  │
│                     │           │  │       ./data/       │  │
│                     │           │  │  (markdown files)   │  │
│                     │           │  └─────────────────────┘  │
└─────────────────────┘           └───────────────────────────┘
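Over the stdio transport shown above, host and server exchange JSON-RPC 2.0 messages. As a minimal sketch, this is the shape of the `tools/list` request an MCP host sends to discover the server's two tools (initialization handshake and message framing omitted; the request `id` is arbitrary):

```python
import json

# A JSON-RPC 2.0 request the host writes to the server's stdin to list
# available tools; the MCP spec names this method "tools/list".
# The server's response echoes the same id back.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
print(json.dumps(request))
```

The response would enumerate get_google_guide and get_anthropic_guide along with their input schemas, which is how hosts like Claude Desktop learn what the server can do without any prior configuration.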
Key Features
| Feature | Details |
|---|---|
| get_google_guide tool | Retrieves the comprehensive Google Prompting Guide to inform clear, context-rich prompting strategies |
| get_anthropic_guide tool | Retrieves the full Anthropic Prompting Guide for mastering capabilities and system prompts |
| Offline capable | Runs entirely locally, reading from bundled markdown files with zero API dependencies |
Benchmark Results
To validate the tool's impact, we ran a benchmark comparing prompts generated with and without the prompting guides across 5 diverse tasks. An independent judge LLM scored each prompt on Clarity, Specificity, Structure, Effectiveness, and Overall quality (1β10 scale).
Run the benchmark yourself:
export OPENROUTER_API_KEY=sk-or-...
make benchmark
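The judging step described above can be sketched as a simple aggregation: one score dict per task, averaged per criterion. (The dict shape and function name here are assumptions; `benchmarks/benchmark.py` may structure its results differently.)

```python
from statistics import mean

# The five criteria the judge LLM scores on a 1-10 scale.
CRITERIA = ("clarity", "specificity", "structure", "effectiveness", "overall")

def average_scores(task_scores):
    """Average each criterion's judge score across all benchmark tasks."""
    return {c: mean(s[c] for s in task_scores) for c in CRITERIA}
```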
Quick Start
1. Install
# Via uvx (recommended: run without installing globally)
uvx meta-prompt-mcp
# Or install via pip
pip install meta-prompt-mcp
The package ships with bundled markdown guides; no API keys or setup needed.
2. Configure Your MCP Host
Claude Desktop
Add to your claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json; Windows: %APPDATA%\Claude\claude_desktop_config.json):
{
"mcpServers": {
"meta-prompt-mcp": {
"command": "uvx",
"args": ["meta-prompt-mcp"]
}
}
}
Cursor
Add to your MCP settings:
{
"mcpServers": {
"meta-prompt-mcp": {
"command": "uvx",
"args": ["meta-prompt-mcp"]
}
}
}
Claude Code
Run the following command in your terminal:
claude mcp add meta-prompt-mcp -- uvx meta-prompt-mcp
Development
# Clone the repo
git clone <your-repo-url>
cd meta-prompt-mcp
# Install in dev mode
make dev
# Run the server
make run
Make Targets
| Command | Description |
|---|---|
| make dev | Install in editable mode with dev dependencies |
| make run | Start the MCP server |
| make benchmark | Run prompt quality benchmark (requires OPENROUTER_API_KEY) |
| make lint | Run linter |
| make format | Auto-format code |
| make test | Run tests |
| make build | Build distribution packages |
| make publish | Publish to PyPI |
Project Structure
meta-prompt-mcp/
├── pyproject.toml        # Package config & dependencies
├── Makefile              # Dev commands (make help)
├── README.md
├── .env.example          # Env template (OPENROUTER_API_KEY)
├── benchmarks/
│   ├── benchmark.py      # Prompt quality benchmark
│   └── results.md        # Generated benchmark results
└── src/
    └── meta_prompt_mcp/
        ├── __init__.py
        ├── __main__.py   # python -m support
        ├── server.py     # FastMCP server with tools
        └── data/         # Bundled markdown guides
License
MIT
