io.github.backspacevenkat/perspectives
Query multiple AI models (GPT-4, Claude, Gemini, Grok) in parallel for diverse perspectives
Polydev - Multi-Model AI Perspectives
Get unstuck faster. Query GPT 5.2, Claude Opus 4.5, Gemini 3, and Grok 4.1 simultaneously: one API call, four expert opinions.
Why Polydev?
Stop copy-pasting between ChatGPT, Claude, and Gemini. Get all their perspectives in your IDE with one request.
| Metric | Result |
|---|---|
| SWE-bench Verified | 74.6% Resolve@2 |
| Cost vs Claude Opus | 62% lower |
| Response time | 10-40 seconds |
"Different models have different blind spots. Combining their perspectives eliminates yours."
Supported Models
| Model | Provider | Strengths |
|---|---|---|
| GPT 5.2 | OpenAI | Reasoning, code generation |
| Claude Opus 4.5 | Anthropic | Analysis, nuanced thinking |
| Gemini 3 Pro | Google | Multimodal, large context |
| Grok 4.1 | xAI | Real-time knowledge, directness |
Quick Start
1. Get your free API token
polydev.ai/dashboard/mcp-tokens
| Tier | Messages/Month | Price |
|---|---|---|
| Free | 1,000 | $0 |
| Pro | 10,000 | $19/mo |
2. Install in your IDE
Claude Code
claude mcp add polydev -- npx -y polydev-ai@latest
Then set your token:
export POLYDEV_USER_TOKEN="pd_your_token_here"
Or add to ~/.claude.json:
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
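If you already have other MCP servers configured, the `polydev` entry merges alongside them rather than replacing the file. A minimal sketch (assuming the config is plain JSON as shown above; the `add_polydev` helper is hypothetical, not part of the package):

```python
import json

# The server entry from the config snippet above.
POLYDEV_ENTRY = {
    "command": "npx",
    "args": ["-y", "polydev-ai@latest"],
    "env": {"POLYDEV_USER_TOKEN": "pd_your_token_here"},
}

def add_polydev(config: dict) -> dict:
    """Add the polydev entry without clobbering other configured servers."""
    servers = config.setdefault("mcpServers", {})
    servers["polydev"] = POLYDEV_ENTRY
    return config

# Example: a config that already registers another server.
existing = {"mcpServers": {"other": {"command": "other-cmd"}}}
merged = add_polydev(existing)
print(json.dumps(merged, indent=2))
```

The same merge applies to the Cursor and Windsurf configs below, which use an identical `mcpServers` shape.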
Cursor
Add to ~/.cursor/mcp.json:
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
Windsurf
Add to your MCP configuration:
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
Cline (VS Code)
- Open Cline settings (gear icon)
- Go to "MCP Servers" → "Configure"
- Add the same JSON config as above
OpenAI Codex CLI
Add to ~/.codex/config.toml:
[mcp_servers.polydev]
command = "npx"
args = ["-y", "polydev-ai@latest"]
[mcp_servers.polydev.env]
POLYDEV_USER_TOKEN = "pd_your_token_here"
[mcp_servers.polydev.timeouts]
tool_timeout = 180
session_timeout = 600
Usage
Natural Language
Just mention "polydev" or "perspectives" in your prompt:
"Use polydev to debug this infinite loop"
"Get perspectives on: Should I use Redis or PostgreSQL for caching?"
"Use polydev to review this API for security issues"
MCP Tool
Call the get_perspectives tool directly:
{
  "tool": "get_perspectives",
  "arguments": {
    "prompt": "How should I optimize this database query?",
    "user_token": "pd_your_token_here"
  }
}
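Under the hood, an MCP client wraps a tool call like this in a JSON-RPC 2.0 envelope using the protocol's `tools/call` method. A sketch of that envelope (the transport framing and request-id management are handled by your client, so this is illustrative only):

```python
import json

# JSON-RPC 2.0 request an MCP client sends for the tool call above,
# per the Model Context Protocol "tools/call" method.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_perspectives",
        "arguments": {
            "prompt": "How should I optimize this database query?",
            "user_token": "pd_your_token_here",
        },
    },
}
print(json.dumps(request))
```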
Example Response
🤖 Multi-Model Analysis

┌─ GPT 5.2 ─────────────────────────────────────────
│ The N+1 query pattern is causing performance issues.
│ Consider using eager loading or batch queries...
└───────────────────────────────────────────────────
┌─ Claude Opus 4.5 ─────────────────────────────────
│ Looking at the execution plan, the table scan on
│ `users` suggests a missing index on `email`...
└───────────────────────────────────────────────────
┌─ Gemini 3 ────────────────────────────────────────
│ The query could benefit from denormalization for
│ this read-heavy access pattern...
└───────────────────────────────────────────────────
┌─ Grok 4.1 ────────────────────────────────────────
│ Just add an index. The real problem is you're
│ querying in a loop - fix that first.
└───────────────────────────────────────────────────

✅ Consensus: Add index on users.email, fix N+1 query
💡 Recommendation: Use eager loading with proper indexing
Research
Our approach achieves 74.6% on SWE-bench Verified (Resolve@2), matching Claude Opus at 62% lower cost.
| Approach | Resolution Rate | Cost/Instance |
|---|---|---|
| Claude Haiku (baseline) | 64.6% | $0.18 |
| + Polydev consultation | 66.6% | $0.24 |
| Resolve@2 (best of both) | 74.6% | $0.37 |
| Claude Opus (reference) | 74.4% | $0.97 |
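The "62% lower cost" headline follows directly from the table: Resolve@2 runs at $0.37/instance versus $0.97/instance for Claude Opus. A quick arithmetic check:

```python
# Sanity check on the per-instance cost figures in the table above.
resolve_at_2 = 0.37  # Resolve@2 (best of both), $/instance
opus = 0.97          # Claude Opus reference, $/instance

savings = 1 - resolve_at_2 / opus
print(f"{savings:.0%}")  # 62%
```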
Available Tools
| Tool | Description |
|---|---|
| `get_perspectives` | Query multiple AI models simultaneously |
| `get_cli_status` | Check status of local CLI tools |
| `force_cli_detection` | Re-detect installed CLI tools |
| `send_cli_prompt` | Send prompts to local CLIs with fallback |
Links
- Website: polydev.ai
- Dashboard: polydev.ai/dashboard
- npm: npmjs.com/package/polydev-ai
- Research: SWE-bench Paper
IDE Guides
License
MIT License - see LICENSE for details.
Built by Polydev AI
Multi-model consultation for better code
