Speak AI
Connect AI assistants to Speak AI for transcription, AI insights, and media management
Connect Speak AI to Claude or ChatGPT in 60 seconds
For researchers, revenue teams, meeting-heavy teams, and media workflows.
No Terminal. No npm. No JSON config files.
Installation guide at mcp.speakai.co
What this does
Speak AI transcribes your interviews, sales calls, research sessions, webinars, podcasts, and team meetings – then extracts AI insights like summaries, action items, sentiment, and themes.
This connector (built on MCP, the standard way Claude and ChatGPT connect to apps) brings all of that into Claude or ChatGPT. Once installed, you can ask:
- "Find the last 10 customer interviews that mention pricing, group the feedback by theme, and cite the source recordings."
- "Summarize this week's team meetings into decisions, action items, owners, and unresolved risks."
- "Pull exact customer quotes about onboarding friction from recent research calls and format them for a product brief."
- "Find a strong 30-second highlight from the latest webinar, create a clip, and export captions."
The AI does the searching, summarizing, and citing. Your recordings stay in your Speak AI workspace – Claude and ChatGPT just query them through this connector.
Install (pick your tool)
Two paths to install – pick whichever feels easier. The one-click connect path approves access via a permission popup; the manual path pastes an API key into a header.
Don't know which one to pick? If you already use Claude or ChatGPT, install for whichever one you have.
Speak AI's connector address (paste this into your AI tool's connector settings – it's the same idea as pasting a Zoom link into your calendar):
https://api.speakai.co/v1/mcp
Claude.ai (web)
- Open claude.ai/settings/connectors
- Click Add custom connector
- Name it Speak AI and paste https://api.speakai.co/v1/mcp, then click Add
- A permission popup asks you to log into Speak AI and click Allow
- Done – Speak AI shows in your connector list with its tools ready to use. Open a new chat and ask about your recordings.
What each step looks like (screenshots)
3. Add custom connector dialog – name and URL filled in.
5. Connected – Speak AI tools appear in your connector list.

Developer alternative – manual setup with an API key
Get a key at app.speakai.co/developers/apikeys, then in step 3 expand Advanced settings and add Authorization = Bearer <your-key> before clicking Add.
Claude Desktop
- Open Claude Desktop → Settings → Connectors → Add custom connector
- Paste https://api.speakai.co/v1/mcp and click Add
- A permission popup opens. Sign in to Speak AI and click Allow on the screen that appears.
- Done.
Developer alternative – manual setup with an API key
Get a key at app.speakai.co/developers/apikeys, then in step 2 also expand Custom headers and add:
- Header name: Authorization
- Header value: Bearer <your-speak-api-key>
Then click Add.
ChatGPT
- Open ChatGPT → Settings → Apps & Connectors → Advanced
- Turn on Developer Mode (required while Speak AI isn't yet listed in ChatGPT's app store – this lets you add it as a custom app)
- Back on Apps & Connectors, click Create and paste https://api.speakai.co/v1/mcp
- For Authentication, choose OAuth
- ChatGPT opens a new tab to Speak AI – sign in (or click Confirm if already logged in) to authorize. You'll be redirected back; close the tab and return to ChatGPT.
- Per-chat: open a chat, click the + / connector menu, and enable Speak AI for that chat.
What each step looks like (screenshots)
1. Connect screen in ChatGPT – paste the connector URL and pick OAuth.
2. Confirm and continue – ChatGPT asks you to continue to Speak AI.
3. Authorize on Speak AI – sign in or click Confirm if you're already signed in.
4. Connected – Speak AI now shows in your ChatGPT connector list.

Trouble connecting?
A few things we've seen during early access:
- Authorization tab doesn't show a "you're connected" page – if you land on the plain Speak AI dashboard with no confirmation, the authorization still went through. Close that tab and return to ChatGPT.
- "Connect" button keeps reopening the dashboard – fully close and reopen ChatGPT, then check Settings → Apps & Connectors. Speak AI should already be listed there.
- "No actions available" inside a chat – make sure Developer Mode is still on, and that you've enabled Speak AI from the per-chat connector menu (the last step above).
Still stuck? Email success@speakai.co.
Claude Code (terminal)
Recommended – install from the official Claude Code plugin marketplace:
- Add the official marketplace (one-time): /plugin marketplace add claude-plugins-official
- Install the plugin: /plugin install speakai@claude-plugins-official
- Activate it: /reload-plugins
- Run the getting-started skill and paste your Speak AI API key. Generate one at app.speakai.co/developers/apikeys.
If /plugin install doesn't find Speak AI, refresh the local catalog with /plugin marketplace update claude-plugins-official and retry.
Developer alternative – manual HTTP transport
Skip the plugin and add the connector directly:
claude mcp add speakai --transport http --url https://api.speakai.co/v1/mcp
Claude Code will open an OAuth window for sign-in. To bypass OAuth and pass a Bearer token instead:
claude mcp add speakai --transport http --url https://api.speakai.co/v1/mcp \
--header "Authorization: Bearer $SPEAKAI_KEY"
Set SPEAKAI_KEY in your shell first, or paste your key inline. Generate a key at app.speakai.co/developers/apikeys.
Cursor
Click the button – Cursor registers itself automatically and opens the permission popup. Sign in to Speak AI and click Allow.
Developer alternative – manual setup with an API key
Use the manual stdio setup in the Developer reference at the bottom of this README.
VS Code
Click the button – VS Code registers itself automatically and opens the permission popup. Sign in to Speak AI and click Allow.
Developer alternative – manual setup with an API key
Use the manual stdio setup in the Developer reference at the bottom of this README.
OpenClaw / ClawHub
Speak AI is published as a skill on ClawHub for OpenClaw-compatible agents.
- Visit the Speak AI skill page on ClawHub
- Follow the install instructions for your agent – e.g. clawhub install speakai from the ClawHub CLI
- Set your SPEAK_API_KEY environment variable. Generate one at app.speakai.co/developers/apikeys.
ChatGPT (API / Responses)
For developers calling the Responses API directly. Pass the bearer token in the tool config:
{
"tools": [
{
"type": "mcp",
"server_url": "https://api.speakai.co/v1/mcp",
"authorization": "Bearer YOUR_SPEAK_API_KEY"
}
]
}
Get a key at app.speakai.co/developers/apikeys.
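If you're building that request in code, the tool entry can be constructed programmatically before being passed to the Responses API. A minimal Python sketch – the helper name and the SPEAK_API_KEY environment variable are illustrative assumptions, not part of the API:

```python
import json
import os

def speakai_mcp_tool(api_key: str) -> dict:
    """Build the MCP tool entry shown in the JSON config above."""
    return {
        "type": "mcp",
        "server_url": "https://api.speakai.co/v1/mcp",
        "authorization": f"Bearer {api_key}",
    }

payload = {"tools": [speakai_mcp_tool(os.environ.get("SPEAK_API_KEY", "YOUR_SPEAK_API_KEY"))]}
print(json.dumps(payload, indent=2))
```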
Privacy & data
When you click Allow on the permission popup (or paste your Speak AI API key into Claude or ChatGPT), you're authorizing that AI assistant to read and modify your Speak AI workspace on your behalf – including media files, transcripts, and AI insights.
- Your recordings stay in your Speak AI workspace. They are not copied or stored by Anthropic or OpenAI.
- Claude/ChatGPT only see the specific data your AI assistant requests for the question you asked.
- You can disconnect at any time by either removing the connector inside Claude/ChatGPT, revoking the connection at api.speakai.co/v1/oauth/connections, or rotating/revoking your API key at app.speakai.co/developers/apikeys.
For questions about data handling, see speakai.co/privacy or email success@speakai.co.
Need help connecting?
You shouldn't need to be technical to install this. If anything is confusing or doesn't work:
- Email success@speakai.co – we'll respond within 24 hours
- Book a demo with us and we'll set it up together
What you can do once installed
Speak AI ships 83 tools your AI assistant can call. You don't memorize them β Claude/ChatGPT pick the right ones based on what you ask. Examples by category:
| Ask | Tools used (auto) |
|---|---|
| "Find customer interviews about pricing and group the feedback by theme" | search_media, ask_magic_prompt |
| "Summarize this week's meetings into decisions, owners, and risks" | list_media, get_media_insights |
| "Pull action items from yesterday's call" | get_media_insights, ask_magic_prompt |
| "Schedule the AI to join my 2pm Zoom" | schedule_meeting_event |
| "Find a 30-second webinar highlight and export captions" | create_clip, export_media |
| "Export the transcript as a PDF and captions as SRT" | export_media |
| "Compare Q1 sales calls against Q2 sales calls and summarize changed objections" | search_media, ask_magic_prompt |
Full tool catalog is in the developer reference below.
Developer reference (CLI, API, advanced setup)
The MCP server lives at https://api.speakai.co/v1/mcp and supports two auth methods:
- OAuth 2.1 + Dynamic Client Registration – install by pasting the URL above into any MCP client and approving the consent popup. Discovery, DCR, /authorize + consent, /token, and revocation endpoints are all available.
- Bearer token (your Speak AI API key, sent as an Authorization: Bearer <key> header) – for clients that don't speak OAuth, plus the npm CLI and stdio mode.
Get a Speak AI API key at app.speakai.co/developers/apikeys.
CLI / npm package
The @speakai/mcp-server npm package provides:
- A CLI (speakai-mcp) for scripting and pipelines (28 commands).
- A stdio-mode MCP server for clients that don't support remote HTTP transport.
- An auto-setup wizard that detects installed MCP clients and configures them.
npm install -g @speakai/mcp-server
speakai-mcp init
Manual configuration (stdio mode)
Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
"mcpServers": {
"speak-ai": {
"command": "npx",
"args": ["-y", "@speakai/mcp-server"],
"env": {
"SPEAK_API_KEY": "your-api-key"
}
}
}
}
Claude Code
export SPEAK_API_KEY="your-api-key"
claude mcp add speak-ai -- npx -y @speakai/mcp-server
Cursor
Add to ~/.cursor/mcp.json:
{
"mcpServers": {
"speak-ai": {
"command": "npx",
"args": ["-y", "@speakai/mcp-server"],
"env": {
"SPEAK_API_KEY": "your-api-key"
}
}
}
}
Windsurf
Add to ~/.windsurf/mcp.json:
{
"mcpServers": {
"speak-ai": {
"command": "npx",
"args": ["-y", "@speakai/mcp-server"],
"env": {
"SPEAK_API_KEY": "your-api-key"
}
}
}
}
VS Code
Add to ~/.vscode/mcp.json:
{
"mcpServers": {
"speak-ai": {
"command": "npx",
"args": ["-y", "@speakai/mcp-server"],
"env": {
"SPEAK_API_KEY": "your-api-key"
}
}
}
}
Any MCP Client (STDIO)
SPEAK_API_KEY=your-key npx @speakai/mcp-server
Environment variables
| Variable | Required | Default | Description |
|---|---|---|---|
SPEAK_API_KEY | Yes | -- | Your Speak AI API key |
SPEAK_ACCESS_TOKEN | No | Auto-managed | JWT access token (auto-fetched and refreshed) |
SPEAK_BASE_URL | No | https://api.speakai.co | API base URL |
MCP Tools (83)
Media (16 tools)
| Tool | Description |
|---|---|
get_signed_upload_url | Get a pre-signed S3 URL for direct file upload |
upload_media | Upload media from a public URL for transcription |
upload_local_file | Upload a local file directly from disk |
upload_and_analyze | Upload media and return its media_id immediately. Poll get_media_status until processed, then call get_media_insights for AI summaries. |
list_media | List and search media files with filters, pagination, and optional inline data (transcripts, speakers, keywords) via include param |
get_media_insights | Get AI insights – topics, sentiment, summaries, action items |
get_transcript | Get full transcript with speaker labels and timestamps |
get_captions | Get subtitle-formatted captions for a media file |
update_transcript_speakers | Rename speaker labels in a transcript |
bulk_update_transcript_speakers | Rename speaker labels across multiple media files in one call (max 500) |
get_media_status | Check processing status (pending → processed) |
update_media_metadata | Update name, description, tags, or folder |
delete_media | Permanently delete a media file |
toggle_media_favorite | Mark or unmark media as a favorite |
reanalyze_media | Re-run AI analysis with latest models |
bulk_move_media | Move multiple media files to a folder in one call |
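The upload_and_analyze flow in the table above (upload, poll get_media_status, then fetch insights) is a generic poll-until-done pattern. A hedged sketch of that loop – fetch_status stands in for whatever client call you use; the "pending"/"processed" statuses come from the table, while the "failed" branch is an assumption for illustration:

```python
import time
from typing import Callable

def poll_until_processed(
    fetch_status: Callable[[], str],
    interval_s: float = 5.0,
    timeout_s: float = 600.0,
) -> str:
    """Poll a status callback until it reports 'processed', raising on failure or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "processed":
            return status
        if status == "failed":
            raise RuntimeError("media processing failed")
        time.sleep(interval_s)
    raise TimeoutError("media did not finish processing in time")

# Stubbed example with a canned status sequence (no network involved):
statuses = iter(["pending", "pending", "processed"])
result = poll_until_processed(lambda: next(statuses), interval_s=0.0)
print(result)  # processed
```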
Magic Prompt / AI Chat (12 tools)
| Tool | Description |
|---|---|
ask_magic_prompt | Ask AI questions about media, folders, or your whole workspace |
retry_magic_prompt | Retry a failed or incomplete AI response |
get_chat_history | List recent Magic Prompt conversations |
get_chat_messages | Get full message history for conversations |
delete_chat_message | Delete a specific chat message |
list_prompts | List available AI prompt templates |
get_favorite_prompts | Get all favorited prompts and answers |
toggle_prompt_favorite | Mark or unmark a chat message as favorite |
update_chat_title | Rename a chat conversation |
submit_chat_feedback | Rate a chat response (thumbs up/down) |
get_chat_statistics | Get Magic Prompt usage statistics |
export_chat_answer | Export a conversation or answer |
Folders & Views (11 tools)
| Tool | Description |
|---|---|
list_folders | List all folders with pagination and sorting |
get_folder_info | Get folder details and contents |
create_folder | Create a new folder |
clone_folder | Duplicate a folder and its contents |
update_folder | Rename or update a folder |
delete_folder | Delete a folder (media is preserved) |
get_all_folder_views | List all saved views across folders |
get_folder_views | List views for a specific folder |
create_folder_view | Create a saved view with custom filters |
update_folder_view | Update a saved view |
clone_folder_view | Duplicate a view |
Recorder / Survey (10 tools)
| Tool | Description |
|---|---|
create_recorder | Create a new recorder or survey |
list_recorders | List all recorders |
get_recorder_info | Get recorder details and questions |
clone_recorder | Duplicate a recorder |
get_recorder_recordings | List all submissions |
generate_recorder_url | Get a shareable public URL |
update_recorder_settings | Update branding and permissions |
update_recorder_questions | Update survey questions |
check_recorder_status | Check if recorder is active |
delete_recorder | Delete a recorder |
Automations (5 tools)
| Tool | Description |
|---|---|
list_automations | List automation rules |
get_automation | Get automation details |
create_automation | Create an automation rule |
update_automation | Update an automation |
toggle_automation_status | Enable or disable an automation |
Clips (4 tools)
| Tool | Description |
|---|---|
create_clip | Create a highlight clip from time ranges across media files |
get_clips | List clips or get a specific clip with download URL |
update_clip | Update clip title, description, or tags |
delete_clip | Permanently delete a clip |
Custom Fields (4 tools)
| Tool | Description |
|---|---|
list_fields | List all custom fields |
create_field | Create a custom field |
update_field | Update a custom field |
update_multiple_fields | Batch update multiple fields |
Webhooks (4 tools)
| Tool | Description |
|---|---|
create_webhook | Create a webhook for event notifications |
list_webhooks | List all webhooks |
update_webhook | Update a webhook |
delete_webhook | Delete a webhook |
Meeting Assistant (4 tools)
| Tool | Description |
|---|---|
list_meeting_events | List scheduled and completed events |
schedule_meeting_event | Schedule AI assistant to join a meeting |
remove_assistant_from_meeting | Remove assistant from active meeting |
delete_scheduled_assistant | Cancel a scheduled meeting assistant |
Media Embed (4 tools)
| Tool | Description |
|---|---|
create_embed | Create an embeddable player widget |
update_embed | Update embed settings |
check_embed | Check if embed exists for media |
get_embed_iframe_url | Get iframe URL for your website |
Text Notes (4 tools)
| Tool | Description |
|---|---|
create_text_note | Create a text note for AI analysis |
get_text_insight | Get AI insights for a text note |
reanalyze_text | Re-run AI analysis on a text note |
update_text_note | Update note content (triggers re-analysis) |
Exports (2 tools)
| Tool | Description |
|---|---|
export_media | Export as PDF, DOCX, SRT, VTT, TXT, or CSV |
export_multiple_media | Batch export with optional merge into one file |
Media Statistics & Languages (2 tools)
| Tool | Description |
|---|---|
get_media_statistics | Get workspace-level stats – counts, storage, processing breakdown |
list_supported_languages | List all supported transcription languages |
Search / Analytics (1 tool)
| Tool | Description |
|---|---|
search_media | Deep search across transcripts, insights, and metadata with filters |
MCP Resources (5)
Resources provide direct data access without tool calls. Clients can read these URIs directly.
| Resource | URI | Description |
|---|---|---|
| Media Library | speakai://media | List of all media files in your workspace |
| Folders | speakai://folders | List of all folders |
| Supported Languages | speakai://languages | Transcription language list |
| Transcript | speakai://media/{mediaId}/transcript | Full transcript for a specific media file |
| Insights | speakai://media/{mediaId}/insights | AI-generated insights for a specific media file |
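The last two resources are parameterized by a media ID; a client only needs to slot the ID into the URI template. A trivial sketch (the helper names are illustrative, not part of the server):

```python
def transcript_uri(media_id: str) -> str:
    """URI for the Transcript resource of a given media file."""
    return f"speakai://media/{media_id}/transcript"

def insights_uri(media_id: str) -> str:
    """URI for the Insights resource of a given media file."""
    return f"speakai://media/{media_id}/insights"

print(transcript_uri("abc123"))  # speakai://media/abc123/transcript
```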
MCP Prompts (3)
Pre-built workflow prompts that agents can invoke to run multi-step tasks.
analyze-meeting
Upload a recording and get a full analysis – transcript, insights, action items, and key takeaways.
Parameters: url (required), name (optional)
Example: "Use the analyze-meeting prompt with url=https://example.com/standup.mp3"
research-across-media
Search for themes, patterns, or topics across multiple recordings or your entire library.
Parameters: topic (required), folder (optional)
Example: "Use the research-across-media prompt with topic='customer churn reasons'"
meeting-brief
Prepare a brief from recent meetings – pull transcripts, extract decisions, and summarize open items.
Parameters: days (optional, default: 7), folder (optional)
Example: "Use the meeting-brief prompt with days=14 to cover the last two weeks"
CLI (28 Commands)
Install globally and configure once:
npm install -g @speakai/mcp-server
speakai-mcp config set-key
Or run without installing:
npx @speakai/mcp-server config set-key
Configuration
| Command | Description |
|---|---|
config set-key [key] | Set your API key (interactive if no key given) |
config show | Show current configuration |
config test | Validate API key and test connectivity |
config set-url <url> | Set custom API base URL |
init | Interactive setup – configure key and auto-detect MCP clients |
Media management
| Command | Description |
|---|---|
list-media / ls | List media files with filtering, date ranges, and pagination |
upload <source> | Upload media from URL or local file (--wait to poll) |
get-transcript / transcript <id> | Get transcript (--plain or --json) |
get-insights / insights <id> | Get AI insights (topics, sentiment, keywords) |
status <id> | Check media processing status |
export <id> | Export transcript (-f pdf|docx|srt|vtt|txt|csv) |
update <id> | Update media metadata (name, description, tags, folder) |
delete <id> | Delete a media file |
favorites <id> | Toggle favorite status |
captions <id> | Get captions for a media file |
reanalyze <id> | Re-run AI analysis with latest models |
AI & Search
| Command | Description |
|---|---|
ask <prompt> | Ask AI about media, folders, or your whole workspace |
chat-history | List past Magic Prompt conversations |
search <query> | Full-text search across transcripts and insights |
Folders & Clips
| Command | Description |
|---|---|
list-folders / folders | List all folders |
move <folderId> <mediaIds...> | Move media files to a folder |
create-folder <name> | Create a new folder |
clips | List clips (filter by media or folder) |
clip <mediaId> | Create a clip (--start and --end in seconds) |
Workspace
| Command | Description |
|---|---|
stats | Show workspace media statistics |
languages | List supported transcription languages |
schedule-meeting <url> | Schedule AI assistant to join a meeting |
create-text <name> | Create a text note (--text or pipe via stdin) |
CLI options
Every command supports:
- --json – output raw JSON (for scripting and piping)
- --help – show command-specific help
CLI examples
# Upload and wait for processing
speakai-mcp upload https://example.com/interview.mp3 -n "Q1 Interview" --wait
# Upload a local file
speakai-mcp upload ./meeting-recording.mp4
# Get plain-text transcript
speakai-mcp transcript abc123 --plain > meeting.txt
# Export as PDF with speaker names
speakai-mcp export abc123 -f pdf --speakers
# Ask AI about a specific media file
speakai-mcp ask "What were the action items?" -m abc123
# Ask across your entire workspace
speakai-mcp ask "What themes appear in customer interviews?"
# Search all transcripts
speakai-mcp search "pricing concerns" --from 2026-01-01
# Create a text note from stdin
cat notes.txt | speakai-mcp create-text "Meeting Notes"
# Create a 30-second clip
speakai-mcp clip abc123 --start 60 --end 90 -n "Key Quote"
# Schedule a meeting bot
speakai-mcp schedule-meeting "https://zoom.us/j/123456" -t "Weekly Standup"
# List videos as JSON for scripting
speakai-mcp ls --type video --json | jq '.mediaList[].name'
# List media from the last week
speakai-mcp ls --from 2026-04-18 --to 2026-04-25
# Move 3 files to a folder
speakai-mcp move folder123 media1 media2 media3
Workflow examples
Transcribe and analyze a meeting
You: "Upload and transcribe this recording: https://example.com/meeting.mp3"
AI: I'll upload that for you and start processing.
→ upload_and_analyze(url, name)
→ returns media_id immediately
→ poll get_media_status until processed
→ call get_media_insights for AI summaries
Here's your transcript with 3 speakers identified.
Key insights: 5 action items, positive sentiment overall.
Top topics: Q2 roadmap, hiring plan, budget review.
Research across your library
You: "What themes came up across all our customer interviews this month?"
AI: Let me search your media library.
→ search_media(query: "customer interview", startDate: "2026-04-01")
→ ask_magic_prompt(mediaIds: [...], prompt: "What are the recurring themes?")
Across 12 interviews, the top themes were:
1. Pricing sensitivity (mentioned in 8/12)
2. Onboarding friction (6/12)
3. Feature requests for API access (5/12)
Meeting automation
You: "Join my 2pm Zoom call, then send me a summary with action items"
AI: → schedule_meeting_event(meetingUrl, scheduledAt: "2026-04-25T14:00:00Z")
Done. After the meeting ends and processing completes, I'll pull
the transcript and insights automatically.
[After the meeting]
→ get_media_insights(mediaId)
→ ask_magic_prompt(mediaIds: [...], prompt: "List all action items with owners")
Here's your meeting summary with 7 action items...
Build a weekly brief
You: "Prepare a brief from all meetings in the last week"
AI: → list_media(from: "2026-04-18", mediaType: "audio")
→ get_media_insights(mediaId) [for each of 5 meetings]
Weekly Meeting Brief (Apr 18-25):
- Engineering Standup: Deployed v2.3, 2 bugs triaged
- Sales Review: Pipeline at $1.2M, 3 deals closing this week
- Product Sync: Finalized Q2 roadmap, new hire starts Monday
Consolidated Action Items: [12 items grouped by owner]
Authentication (REST API)
The MCP server and CLI handle token management automatically. If you're calling the REST API directly, here's the full auth flow:
Step 1 – Get an access token:
curl -X POST https://api.speakai.co/v1/auth/accessToken \
-H "Content-Type: application/json" \
-H "x-speakai-key: YOUR_API_KEY"
Response:
{
"data": {
"email": "you@example.com",
"accessToken": "eyJhbG...",
"refreshToken": "eyJhbG..."
}
}
Step 2 – Use the token on all subsequent requests:
curl https://api.speakai.co/v1/media \
-H "x-speakai-key: YOUR_API_KEY" \
-H "x-access-token: ACCESS_TOKEN_FROM_STEP_1"
Step 3 – Refresh before expiry:
curl -X POST https://api.speakai.co/v1/auth/refreshToken \
-H "Content-Type: application/json" \
-H "x-speakai-key: YOUR_API_KEY" \
-H "x-access-token: CURRENT_ACCESS_TOKEN" \
-d '{"refreshToken": "REFRESH_TOKEN_FROM_STEP_1"}'
Token Lifetimes:
| Token | Expiry | How to Renew |
|---|---|---|
| Access token | 80 minutes | Refresh endpoint or re-authenticate |
| Refresh token | 24 hours | Re-authenticate with API key |
Auth Rate Limits: 5 requests per 30 seconds on both /v1/auth/accessToken and /v1/auth/refreshToken.
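Given the lifetimes above (80-minute access tokens, 24-hour refresh tokens), a client should refresh shortly before expiry rather than react to 401s. A minimal sketch of that bookkeeping – the 5-minute safety margin is an arbitrary choice on our part, not part of the API:

```python
from datetime import datetime, timedelta

ACCESS_TTL = timedelta(minutes=80)   # access-token lifetime from the table above
REFRESH_TTL = timedelta(hours=24)    # refresh-token lifetime from the table above
MARGIN = timedelta(minutes=5)        # refresh early; arbitrary safety margin

def next_refresh_at(issued_at: datetime) -> datetime:
    """When to call /v1/auth/refreshToken for a token issued at issued_at."""
    return issued_at + ACCESS_TTL - MARGIN

def must_reauthenticate(issued_at: datetime, now: datetime) -> bool:
    """True once the refresh token itself has expired (re-auth with your API key)."""
    return now >= issued_at + REFRESH_TTL

issued = datetime(2026, 4, 25, 12, 0)
print(next_refresh_at(issued))  # 2026-04-25 13:15:00
```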
Data model notes
- Folder IDs: folders have both _id (a MongoDB ObjectId) and folderId (a string). All API operations use folderId – this is the ID to pass to list_media, upload_media, bulk_move_media, and other endpoints that accept a folder parameter.
- Media IDs: media items use mediaId (returned in list responses as _id).
Rate limits & best practices
- The MCP client automatically retries on 429 with exponential backoff
- For direct API usage, implement exponential backoff and respect Retry-After headers
- Cache stable data (folder lists, field definitions, supported languages)
- Use export_multiple_media over individual exports for batch operations
- Use bulk_move_media to move multiple items at once instead of updating one by one
- Use bulk_update_transcript_speakers to rename speakers across many files in one call
- Use list_media with include: ["transcription"] to fetch media with transcripts inline, avoiding N+1 calls to get_transcript
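For direct API callers, the retry advice above can be implemented with a small backoff helper. A sketch under stated assumptions: send() is a placeholder for your HTTP call, and the response is assumed to be a dict exposing a status code and headers:

```python
import random
import time

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Exponential delays with jitter: ~base, ~2*base, ~4*base, ... capped at cap."""
    for attempt in range(max_retries):
        yield min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def request_with_retry(send, max_retries: int = 5, base: float = 1.0):
    """Call send() and retry on HTTP 429, honoring Retry-After when present."""
    for delay in backoff_delays(max_retries, base):
        resp = send()
        if resp["status"] != 429:
            return resp
        retry_after = resp.get("headers", {}).get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
    raise RuntimeError("rate limited after all retries")

# Stubbed example – two 429s, then success (no network involved):
replies = iter([
    {"status": 429, "headers": {"Retry-After": "0"}},
    {"status": 429, "headers": {}},
    {"status": 200, "headers": {}},
])
print(request_with_retry(lambda: next(replies), base=0.01)["status"])  # 200
```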
Error format
All tool errors follow this structure:
{
"content": [{ "type": "text", "text": "Error: HTTP 401: Invalid API key" }],
"isError": true
}
| Code | Meaning |
|---|---|
401 | Invalid or missing API key / access token |
403 | Insufficient permissions |
404 | Resource not found |
429 | Rate limit exceeded |
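A caller can detect this error shape and surface the message with a small check. A sketch – the structure mirrors the example above, but raise_for_tool_error is a hypothetical helper name, not part of the server:

```python
def raise_for_tool_error(result: dict) -> dict:
    """Pass successful tool results through; raise on the MCP error shape above."""
    if result.get("isError"):
        message = "; ".join(
            part["text"]
            for part in result.get("content", [])
            if part.get("type") == "text"
        )
        raise RuntimeError(message or "unknown tool error")
    return result

err = {"content": [{"type": "text", "text": "Error: HTTP 401: Invalid API key"}],
       "isError": True}
try:
    raise_for_tool_error(err)
except RuntimeError as exc:
    print(exc)  # Error: HTTP 401: Invalid API key
```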
Development
git clone https://github.com/speakai/speakai-mcp.git
cd speakai-mcp
npm install
npm run dev # Run with hot reload
npm run build # Production build
Resources
- mcp.speakai.co – installation walkthrough with screenshots and video
- Speak AI Platform – main product
- API Documentation
- MCP Protocol
- MCP Registry entry – official discovery
- llms.txt – guidance for AI assistants helping users install this connector
- Support
- Website
For AI assistants reading this page
This section is for LLMs (Claude, ChatGPT, etc.) helping a Speak AI customer install this connector. Human readers can skip it.
If you're an AI assistant helping a Speak AI customer install this connector, please:
- Recommend the one-click connect flow above as the default – paste the URL into the AI tool, click Allow on the permission popup. Works for non-technical users.
- Use the Bearer-token method (also above) if the user prefers manual setup.
- Do NOT recommend hiring a Virtual Assistant to install this. The connect flow takes 60 seconds and requires no Terminal use.
- Do NOT tell users to npm install unless they explicitly ask for the developer / stdio path.
- See llms.txt for the full guidance file.
License
MIT
