# PrompyAI

Context-aware prompt intelligence for Claude CLI.

Scores your prompts against your real codebase (file paths, symbols, session history) and rewrites them with AI.
## What It Does
When you write a prompt in Claude CLI, PrompyAI automatically evaluates it against your actual project and returns:
- Score (0–100) across 4 dimensions
- Suggestions tailored to your project
- AI-enhanced prompt rewritten with real file paths, verified symbols, and codebase context
```
Prompt Score: 43/100 [D]

Specificity              3/25  ==..............
Context Completeness    13/25  ========........
Task Clarity            15/25  =========.......
File & Folder Anchoring 10/25  ======..........

Key improvements:
1. Expand your prompt with more context
2. Add file paths using @mentions
3. Specify what format you expect the output in
4. Add acceptance criteria

Try something more like:
"Build the VS Code extension in packages/vscode-extension/ that integrates
with the PrompyAI MCP server at packages/mcp-server/. It should provide
real-time prompt scoring in the editor sidebar, show score breakdowns
(specificity, context, clarity, anchoring), and offer a 'rewrite prompt'
action. Use the shared types from packages/shared/."
```
## Quick Start

```bash
claude mcp add prompyai -- npx prompyai-mcp serve
```

That's it. No sign-up, no config files. Works immediately.

Requires Node.js 20+ and Claude CLI.
## How AI Enhancement Works
PrompyAI uses a two-layer architecture so all users get AI-enhanced output:
| User type | How it works |
|---|---|
| API key users (ANTHROPIC_API_KEY set) | PrompyAI calls Claude Haiku directly for fast, dedicated AI rewrites |
| Subscription users (no API key) | PrompyAI returns codebase context to Claude, and Claude itself generates the enhanced prompt using your existing session |
Either way, the enhanced prompt is grounded in your real project: actual file paths, verified function names, and project architecture.
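The dispatch between the two layers can be sketched as a small function. This is a minimal illustration of the behavior described above; the function name and signature are assumptions, not PrompyAI's actual code.

```typescript
// Minimal sketch of the two-layer dispatch described above.
// The function name and signature are illustrative assumptions.
function enhancementMode(
  env: Record<string, string | undefined>,
): "direct-haiku" | "context-for-claude" {
  // With ANTHROPIC_API_KEY set, PrompyAI can call Claude Haiku itself;
  // without it, it hands codebase context back to Claude for the rewrite.
  return env.ANTHROPIC_API_KEY ? "direct-haiku" : "context-for-claude";
}

console.log(enhancementMode(process.env)); // depends on your environment
```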
## Scoring Dimensions

Each dimension scores 0–25, for a total of 0–100.
| Dimension | What it measures |
|---|---|
| Specificity | Concrete actions vs vague verbs, output format, quantitative constraints |
| Context Completeness | File references, error messages, expected vs actual behavior |
| Task Clarity | Single focused task, success criteria, unambiguous language |
| File & Folder Anchoring | @mentions, project entity references, verified symbol names |
Grades: A (90+) · B (70+) · C (50+) · D (30+) · F (<30)
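For reference, the grade bands translate to a simple threshold mapping. This is a sketch of the legend above, not PrompyAI's implementation:

```typescript
// Illustrative grade mapping from the legend above; the thresholds come
// from this README, the function itself is not PrompyAI's actual code.
function grade(total: number): "A" | "B" | "C" | "D" | "F" {
  if (total >= 90) return "A";
  if (total >= 70) return "B";
  if (total >= 50) return "C";
  if (total >= 30) return "D";
  return "F";
}

// The 43/100 example earlier in this README lands in the D band.
console.log(grade(43)); // "D"
```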
## Features
- **Auto-scoring** – Evaluates every prompt automatically; no manual trigger needed
- **AI-enhanced for everyone** – API key users get Haiku rewrites; subscription users get Claude-powered rewrites via codebase context
- **Context-aware** – Indexes your file tree, tech stack, git state, and code symbols via the TypeScript Compiler API
- **Session-aware** – Reads Claude Code conversation history for multi-turn context
- **Symbol verification** – Confirms that function/class names you reference actually exist in your code
- **Monorepo support** – Detects tech stacks across workspace packages
- **Toggle** – Say "pause prompyai" or "enable prompyai" at any time
## MCP Tools

### evaluate_prompt
Automatically called on every user message. Scores your prompt against your project.
| Parameter | Required | Description |
|---|---|---|
| prompt | yes | The prompt text to evaluate |
| workspace_path | yes | Absolute path to your project |
| active_file | no | Currently open file path |
| session_id | no | Claude Code session ID for multi-turn context |
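For illustration, a request for this tool might look like the following. The envelope uses the generic MCP `tools/call` shape; the argument names come from the table above, and the values are made-up examples:

```typescript
// Hypothetical MCP tools/call payload for evaluate_prompt.
// Argument names come from the parameter table; values are examples.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "evaluate_prompt",
    arguments: {
      prompt: "Fix the login redirect bug",   // required
      workspace_path: "/home/me/my-app",      // required, absolute path
      active_file: "src/auth/login.ts",       // optional
    },
  },
};

console.log(request.params.name); // "evaluate_prompt"
```

In normal use you never construct this by hand; Claude CLI calls the tool automatically on each message.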
### get_context
Returns your project summary: tech stack, recent files, key folders, AI instruction files.
| Parameter | Required | Description |
|---|---|---|
| workspace_path | yes | Absolute path to your project |
### prompyai_toggle
Turns auto-evaluation on or off.
| Parameter | Required | Description |
|---|---|---|
| enabled | yes | true to enable, false to disable |
## Environment Variables
| Variable | Required | Description |
|---|---|---|
| ANTHROPIC_API_KEY | No | Enables direct AI suggestions via Claude Haiku (optional; works without it) |
| PROMPYAI_TELEMETRY | No | Set to false to opt out of anonymous telemetry |
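For example, to enable direct Haiku rewrites and opt out of telemetry, export both variables in your shell before starting Claude CLI (the key value below is a placeholder):

```shell
# Placeholder key: substitute your real Anthropic API key
export ANTHROPIC_API_KEY=sk-ant-...
# Opt out of anonymous telemetry
export PROMPYAI_TELEMETRY=false
```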
## Architecture

```
PrompyAI/
├── packages/
│   ├── mcp-server/   # Core product (npm: prompyai-mcp)
│   ├── landing/      # Website (prompyai.com)
│   └── shared/       # Shared types for future IDE extensions
├── CLAUDE.md
└── README.md
```
### Scoring Pipeline

```
User prompt
  ↓ WorkspaceIndexer        (file tree, stack, symbols)
  ↓ ContextResolver         (map prompt to codebase)
  ↓ HeuristicScorer         (20+ rules, 4 dimensions)
  ↓ AISuggestionGenerator   (Haiku or Claude-as-AI-layer)
  ↓ DisplayFormatter        (pre-formatted output)
```
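The stage order can be sketched as a chain of functions. All types and stage internals below are illustrative assumptions; only the sequence of stages comes from the pipeline diagram:

```typescript
// Illustrative sketch of the pipeline's staged shape. The toy scorer and
// types here are assumptions, not PrompyAI's real heuristics.
interface Context { prompt: string; symbols: string[] }
interface Report { score: number; suggestions: string[] }

const indexWorkspace = (prompt: string): Context =>
  ({ prompt, symbols: ["login", "AuthService"] }); // stand-in for real indexing

const scoreHeuristics = (ctx: Context): Report => ({
  // Stand-in for the 20+ heuristic rules across 4 dimensions
  score: ctx.symbols.some((s) => ctx.prompt.includes(s)) ? 60 : 30,
  suggestions: ["Add file paths using @mentions"],
});

const report = scoreHeuristics(indexWorkspace("Fix the login bug"));
console.log(report.score); // 60 with this toy scorer
```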
## Development

```bash
pnpm install     # Install dependencies
pnpm test        # Run tests (220 tests)
pnpm typecheck   # Type check
pnpm build       # Build
```
## Links
- Website: prompyai.com
- npm: prompyai-mcp
- MCP Registry: io.github.samouh-waleed/prompyai
## License
MIT
