Secretless AI
One command to keep secrets out of AI. Works with Claude Code, Cursor, Copilot, Windsurf, and any AI coding tool.
Ask AI about Secretless AI
Powered by Claude Β· Grounded in docs
I know everything about Secretless AI. Ask me about installation, configuration, usage, or troubleshooting.
0/500
Reviews
Documentation
OpenA2A: CLI Β· HackMyAgent Β· Secretless Β· AIM Β· Browser Guard Β· DVAA
secretless-ai
Keep API keys and secrets invisible to AI coding tools. Works with Claude Code, Cursor, GitHub Copilot, Windsurf, Cline, and Aider.
Quick Start
npx secretless-ai init
Secretless v0.17.0
Keeping secrets out of AI
Configured: Claude Code (1 of 1 detected)
Created:
+ .claude/hooks/secretless-guard.sh
+ CLAUDE.md
Modified:
~ .claude/settings.json (added 69 deny patterns)
Next steps:
Verify: secretless-ai verify
Scan: secretless-ai scan
Status: secretless-ai status

For a full security dashboard covering credentials, shadow AI, config integrity, and more:
npx opena2a-cli review
MCP Server Protection
Every MCP server config has plaintext API keys in JSON files on your machine. The LLM sees them. Secretless encrypts them.
npx secretless-ai protect-mcp
Scanned 1 client(s)
+ claude-desktop/browserbase
BROWSERBASE_API_KEY (encrypted)
+ claude-desktop/github
GITHUB_PERSONAL_ACCESS_TOKEN (encrypted)
+ claude-desktop/stripe
STRIPE_SECRET_KEY (encrypted)
3 secret(s) encrypted across 3 server(s).
MCP servers start normally -- no workflow changes needed.
Scans configs across Claude Desktop, Cursor, Claude Code, VS Code, and Windsurf. Secrets move to your configured backend. Non-secret env vars (URLs, regions) stay untouched.
npx secretless-ai protect-mcp --backend 1password # Store MCP secrets in 1Password
npx secretless-ai mcp-status # Show which servers are protected
npx secretless-ai mcp-unprotect # Restore original configs from backup
Triage Helpers
npx secretless-ai scan --min-confidence 0.85 # Show only high-confidence findings
npx secretless-ai ignore docs/migration.md # Append a path to .secretlessignore
npx secretless-ai ignore --pattern '*.golden.txt'
npx secretless-ai diff main # Audit secretless-managed file changes vs a git ref
scan now renders a Confidence: high (0.92) line under every finding. The score combines pattern specificity, value entropy, value length, and path tier. With --no-ignore, findings whose path matches the default-ignore list are tagged (looks like a test fixture) so users can keep them in view without re-suppressing them.
How It Works
- Scans your project for hardcoded credentials in config files and source code (56 patterns from
@opena2a/credential-patterns@0.1.1, lockstep-asserted, across .js, .ts, .py, .go, .java, .rb, and more). Suppresses fixture-path false positives via.secretlessignoredefaults (test/,__tests__/,examples/,e2e/,docs/vhs/,node_modules/...) - Migrates them to secure storage (OS keychain, 1Password, Vault, GCP Secret Manager)
- Blocks AI tools from reading credential files (21 file patterns)
- Brokers access through environment variables -- secrets never enter AI context
Architecture
Secretless has three layers. You can use one, two, or all three β each is independent and works against any supported backend.
Tier 1 β In-process SDK. Credentials resolved in the call stack and zeroized after use. Available in the Python and TypeScript AIM SDKs. Sub-millisecond overhead.
Tier 2 β Vault Exec. A subprocess primitive that injects a credential into a child process's environment without exposing it to the parent. The agent running under an AI assistant never sees the secret.
npx secretless-ai vault exec github -- curl https://api.github.com/user
The child process receives $GITHUB. The parent shell, the AI tool's context, and any process listing see nothing. Language-agnostic β wraps any command.
Tier 3 β Broker with identity policy. A local daemon that mediates credential access across multiple agents. Policy rules allow or deny access by agent ID, credential name, time window, and rate limit. Optional AIM integration adds trust-score and capability constraints.
npx secretless-ai broker start
See Run the Broker for when to use the daemon and how to configure it.
AIM is optional. Tier 1 and Tier 2 work against any of the five storage backends with no AIM involvement. Tier 3 adds identity-bound policy when an AIM server is reachable; it still enforces default-deny locally without one.
Use Cases
Step-by-step guides for common workflows: docs/USE-CASES.md
- Protect My Credentials -- Keep API keys out of AI tools (2 min)
- Secure MCP Configs -- Encrypt MCP server credentials (3 min)
- Bring Your Own Vault -- Point Secretless at HashiCorp Vault, GCP SM, or 1Password (3 min)
- Run the Broker -- Policy-gated credential daemon for multi-agent runtimes (3 min)
- Team Setup -- Shared backend, CI/CD, onboarding (5 min)
- Migrate from .env -- Move .env files to encrypted storage (3 min)
Supported Tools
| Tool | Protection Method |
|---|---|
| Claude Code | PreToolUse hook (blocks reads before they happen) + deny rules + CLAUDE.md |
| Cursor | .cursorrules instructions |
| GitHub Copilot | .github/copilot-instructions.md instructions |
| Windsurf | .windsurfrules instructions |
| Cline | .clinerules instructions |
| Aider | .aiderignore file patterns |
Claude Code gets the strongest protection because it supports hooks -- a shell script runs before every file read and blocks access at the tool level.
Storage Backends
| Backend | Storage | Best For |
|---|---|---|
local | AES-256-GCM encrypted file | Quick start, single machine |
keychain | macOS Keychain / Linux Secret Service | Native OS integration |
1password | 1Password vault | Teams, CI/CD, multi-device |
vault | HashiCorp Vault KV v2 | Enterprise, self-hosted |
gcp-sm | GCP Secret Manager | GCP-native workloads |
npx secretless-ai backend set 1password # Switch backend
npx secretless-ai migrate --from local --to 1password # Migrate existing secrets
NanoMind Integration
Optional integration with NanoMind for enhanced security analysis:
npm install @nanomind/guard @nanomind/engine # Optional
- MCP injection screening:
protect-mcpscreens env var values for prompt injection patterns and warns when suspicious content is detected - Rich scan explanations:
scan --explaingenerates context-aware security explanations for each finding using NanoMind's local inference engine
Both features gracefully degrade when NanoMind packages are not installed.
Using with opena2a-cli
opena2a-cli unifies all OpenA2A security tools:
npm install -g opena2a-cli
opena2a review # Full security dashboard
opena2a secrets init # Initialize secretless protection
Development
npm run build && npm test # 809 tests
Telemetry
Secretless sends anonymous tier-1 usage data to the OpenA2A Registry: tool name (secretless-ai), version, command name (scan, protect, etc.), success, duration, platform, Node major version, and a stable per-machine install_id. No content is collected β no scanned secrets, no file paths, no env-var values, no rule contents, no IPs.
- Policy: opena2a.org/telemetry.
- Status:
secretless-ai telemetry status. - Disable per-invocation:
OPENA2A_TELEMETRY=off secretless-ai <anything>. - Disable persistently:
secretless-ai telemetry off. - Audit every payload:
OPENA2A_TELEMETRY_DEBUG=print secretless-ai <anything>echoes each event to stderr in JSON.
Fire-and-forget with a 2-second timeout β telemetry never blocks Secretless.
License
Apache-2.0
Part of the OpenA2A ecosystem. Full reference: opena2a.org/docs/secretless
