deAPI MCP Server
Production-ready Model Context Protocol (MCP) server for the deAPI REST API. This server exposes all deAPI AI capabilities as MCP tools, enabling LLMs to perform audio transcription, image generation, OCR, video generation, text-to-speech, and more.
Features
- Complete API Coverage: 29 deAPI endpoints exposed as MCP tools
- Smart Adaptive Polling: Automatically polls async jobs with optimized intervals based on job type
- OAuth 2.0 Authentication: Secure token exchange via OAuth Authorization Code flow with PKCE
- Error Recovery: Automatic retry logic with exponential backoff
- Progress Reporting: Real-time progress updates to MCP clients
- Type Safety: Full Pydantic schema validation
- Production Ready: Built with FastMCP framework for reliability
Available Tools
Audio Tools
- `audio_transcription` - Transcribe audio files to text using Whisper models
- `audio_transcription_price` - Calculate transcription cost
- `text_to_audio` - Convert text to natural speech (TTS)
- `text_to_audio_price` - Calculate TTS cost
- `audio_url_transcription` - Transcribe audio from URLs of completed Twitter Spaces
- `audio_url_transcription_price` - Calculate Twitter Spaces transcription cost
Video Transcription Tools
- `video_file_transcription` - Transcribe video files to text
- `video_file_transcription_price` - Calculate video file transcription cost
- `video_url_transcription` - Transcribe videos from URLs (YouTube, Twitter/X, Twitch, Kick)
- `video_url_transcription_price` - Calculate video URL transcription cost
Image Tools
- `text_to_image` - Generate images from text prompts
- `image_to_image` - Transform images with text guidance
- `image_to_text` - Extract text from images (OCR)
- `image_remove_background` - Remove background from images
- `image_upscale` - Upscale images to higher resolution
- `text_to_image_price` - Calculate image generation cost
- `image_to_image_price` - Calculate image transformation cost
- `image_to_text_price` - Calculate OCR cost
- `image_remove_background_price` - Calculate background removal cost
- `image_upscale_price` - Calculate upscaling cost
Video Tools
- `text_to_video` - Generate videos from text prompts
- `image_to_video` - Animate static images into videos
- `text_to_video_price` - Calculate text-to-video cost
- `image_to_video_price` - Calculate image-to-video cost
Embedding Tools
- `text_to_embedding` - Generate text embeddings for semantic search
- `text_to_embedding_price` - Calculate embedding cost
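Embedding vectors returned by `text_to_embedding` are typically compared with cosine similarity for semantic search. A minimal, dependency-free sketch (the vectors below are toy placeholders, not real deAPI output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by text_to_embedding
query = [0.1, 0.3, 0.5]
doc_a = [0.1, 0.29, 0.52]   # semantically close to the query
doc_b = [-0.5, 0.1, -0.2]   # unrelated

print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```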
Utility Tools
- `get_balance` - Check account balance
- `get_available_models` - List available AI models with specifications
- `check_job_status` - Query async job status by ID
Installation
Prerequisites
For running the MCP server:
- Python 3.10 or higher
- `uv`, `pip`, or `conda` for package management
For a deAPI account:
- Sign up at deapi.ai and get your API token
Setup
- Clone the repository:

```bash
git clone https://github.com/deapi-ai/mcp-server-deapi.git
cd mcp-server-deapi
```

- Choose your Python environment setup:

Option A: Using uv (recommended - fastest)

```bash
uv pip install -e .
```

Option B: Using pip

```bash
pip install -e .
```

Option C: Using conda

```bash
# Create conda environment
conda create -n mcp-server-deapi python=3.11
conda activate mcp-server-deapi

# Install dependencies
pip install -e .
```

- (Optional) Create a `.env` file for configuration:

```bash
# Copy the example file
cp .env.example .env

# Edit with your preferences (optional - defaults work fine)
# DEAPI_API_BASE_URL=https://api.deapi.ai
# DEAPI_HTTP_TIMEOUT=30.0
# DEAPI_MAX_RETRIES=3
```
Usage
Running the Server
The server can run in two modes:
Local Mode (for use with Claude Desktop on the same machine):

```bash
python -m src.server_remote
```

The server will start on http://localhost:8000 by default.

Remote Mode (for deployment to a remote server):

```bash
# Set host to accept external connections
MCP_HOST=0.0.0.0 MCP_PORT=8000 python -m src.server_remote
```
See the Remote Deployment section for production deployment options.
Connecting from Claude Desktop / Claude.ai
Option 1: Add Connector (Recommended)
Both Claude Desktop and Claude.ai support MCP connectors with built-in OAuth authentication.
- Get your deAPI token from deapi.ai
- In Claude Desktop or Claude.ai, go to Settings → Connectors → Add Connector
- Fill in the connector details:
Name: deAPI
Remote MCP server: https://your-server-domain:8000/mcp
▼ Advanced settings
OAuth Client ID: deapi-mcp
OAuth Client Secret: YOUR_DEAPI_TOKEN
- Click Add → Claude will automatically authenticate via OAuth and discover all tools.
For details on the OAuth flow, see AUTH.md.
Option 2: Config File with Bearer Token (Local Development)
Best for: Server running on the same machine, quick setup without OAuth.
Edit your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "deapi": {
      "url": "http://localhost:8000/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_DEAPI_TOKEN"
      }
    }
  }
}
```
Replace YOUR_DEAPI_TOKEN with your actual deAPI token. Save the file and restart Claude Desktop.
Using the Tools
Authentication is handled at the connection level, not per-tool-call. Tools do NOT accept a deapi_api_token parameter.
Here's an example workflow:
- Get available models:
  Use `get_available_models` to see available models
- Check your balance:
  Use `get_balance` to check remaining credits
- Generate an image:
  Use `text_to_image` with:
  - prompt: "A beautiful sunset over mountains"
  - model: "Flux1schnell"
- Transcribe audio:
  Use `audio_transcription` with:
  - audio: "base64-encoded-audio-or-url"
  - include_ts: true
Note: When calling tools via Claude Desktop or MCP SDK, authentication is handled automatically through the server connection (OAuth or HTTP headers). See AUTH.md for detailed OAuth setup.
Architecture
Key Components
- DeapiClient (`src/deapi_client.py`): HTTP client with auth forwarding and retry logic
- PollingManager (`src/polling_manager.py`): Smart adaptive polling for async jobs
- Schemas (`src/schemas.py`): Pydantic models for type safety
- Tools (`src/tools/`): Organized tool implementations
  - `audio.py` - Audio transcription & TTS tools
  - `image.py` - Image generation, transformation, OCR, background removal & upscaling
  - `video.py` - Video generation tools
  - `embedding.py` - Text embedding tools
  - `utility.py` - Balance, models, status tools
Smart Adaptive Polling
The server uses job-type-specific polling strategies:
| Job Type | Initial Delay | Max Delay | Timeout |
|---|---|---|---|
| Audio | 1s | 5s | 5 min |
| Image | 2s | 8s | 5 min |
| Video | 5s | 30s | 15 min |
Polling uses exponential backoff with a configurable multiplier (default: 1.5x).
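Based on the table and multiplier above, the delay schedule can be reproduced in a few lines. This is an illustrative sketch, not the server's actual `PollingManager` implementation:

```python
def polling_delays(initial, max_delay, timeout, multiplier=1.5):
    """Yield successive polling delays: exponential backoff capped at
    max_delay, stopping once the cumulative wait would exceed timeout."""
    delay, elapsed = initial, 0.0
    while elapsed + delay <= timeout:
        yield delay
        elapsed += delay
        delay = min(delay * multiplier, max_delay)

# Audio jobs: 1s initial delay, 5s cap, 5 min timeout
delays = list(polling_delays(1.0, 5.0, 300.0))
print(delays[:5])  # [1.0, 1.5, 2.25, 3.375, 5.0]
```

The delay grows by 1.5x per poll until it hits the per-job-type cap, so short jobs are detected quickly while long jobs don't hammer the API.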
Error Handling
- HTTP Errors: Automatic retry (3 attempts) with exponential backoff
- Timeouts: Graceful handling with clear error messages
- Job Failures: Detected and reported to the client
- API Errors: Properly formatted error responses
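The retry behavior described above can be sketched as a small helper. This is illustrative only; parameter names mirror the `DEAPI_MAX_RETRIES` and `DEAPI_RETRY_BACKOFF_FACTOR` settings, but the server's real client code may differ:

```python
import time

def with_retries(fn, max_retries=3, backoff_factor=2.0, base_delay=0.5):
    """Call fn, retrying on exception with exponential backoff.

    Sleeps base_delay * backoff_factor**attempt between attempts and
    re-raises the last error once max_retries is exhausted.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * backoff_factor ** attempt)
```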
Configuration
Configuration can be set via environment variables (prefixed with DEAPI_):
```bash
# API Configuration
DEAPI_API_BASE_URL=https://api.deapi.ai
DEAPI_API_VERSION=v1

# HTTP Client
DEAPI_HTTP_TIMEOUT=30.0
DEAPI_MAX_RETRIES=3
DEAPI_RETRY_BACKOFF_FACTOR=2.0

# Polling Configuration (override defaults)
DEAPI_POLLING_AUDIO__INITIAL_DELAY=1.0
DEAPI_POLLING_AUDIO__MAX_DELAY=5.0
DEAPI_POLLING_AUDIO__TIMEOUT=300.0
```
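As a rough sketch of how prefixed variables like these can be read with typed defaults (the server's real logic lives in `src/config.py`; this plain `os.environ` version is only illustrative):

```python
import os

def load_setting(name, default, cast=str):
    """Read a DEAPI_-prefixed environment variable, falling back to a
    typed default when the variable is unset."""
    raw = os.environ.get(f"DEAPI_{name}")
    return cast(raw) if raw is not None else default

os.environ["DEAPI_HTTP_TIMEOUT"] = "45.0"   # simulate a user override
timeout = load_setting("HTTP_TIMEOUT", 30.0, float)
retries = load_setting("MAX_RETRIES", 3, int)
print(timeout, retries)  # 45.0 3
```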
Development
Project Structure
```
mcp-server-deapi/
├── src/
│   ├── server_remote.py      # Streamable-HTTP MCP server
│   ├── deapi_client.py       # HTTP client with auth forwarding
│   ├── polling_manager.py    # Smart adaptive polling logic
│   ├── schemas.py            # Pydantic models
│   ├── config.py             # Configuration management
│   ├── auth.py               # Authentication middleware
│   ├── fastmcp_auth.py       # FastMCP OAuth provider
│   ├── oauth_endpoints.py    # OAuth 2.0 endpoints
│   └── tools/                # Tool implementations
│       ├── audio.py          # Audio transcription & TTS
│       ├── image.py          # Image generation, OCR & processing
│       ├── video.py          # Video generation
│       ├── embedding.py      # Text embeddings
│       ├── utility.py        # Balance, models, status
│       └── _price_helpers.py # Price calculation helpers
├── tests/                    # Test suite
│   ├── __init__.py
│   └── conftest.py           # Pytest fixtures
├── pyproject.toml            # Dependencies
├── Dockerfile                # Container build
├── docker-compose.yml        # Container orchestration
├── .env.example              # Environment config template
├── README.md                 # This file
├── DEPLOYMENT.md             # Deployment guide
├── AUTH.md                   # OAuth authentication setup
└── CLAUDE.md                 # Claude Code guidance
```
Running Tests
Install dev dependencies:

```bash
uv pip install -e ".[dev]"
```

Run tests:

```bash
pytest
```

Run smoke tests (requires a running server):

```bash
python tests/smoke_test.py
```

Code Formatting
Format code with Black:

```bash
black src/
```

Lint with Ruff:

```bash
ruff check src/
```
API Token Security
Important: The MCP server does NOT store API tokens. Authentication works as follows:
- For Remote HTTP Server: Authentication is handled via OAuth 2.0 (Authorization Code with PKCE) or HTTP headers (Authorization: Bearer token)
- Token forwarding: The server forwards authentication to the deAPI API for each request
- No persistence: Tokens are used only for the specific request and never persisted or logged
- Per-connection auth: Tools do NOT accept `deapi_api_token` parameters - authentication is managed at the connection level
Always keep your API tokens secure and never commit them to version control. See AUTH.md for detailed OAuth setup.
Remote Deployment
For production environments or when you want to host the MCP server on a remote machine, use the remote server mode.
Quick Start with Docker
- Build and run with Docker:

```bash
docker build -t mcp-server-deapi .
docker run -d -p 8000:8000 --name mcp-server-deapi mcp-server-deapi
```

- Or use Docker Compose:

```bash
docker-compose up -d
```

- Configure Claude Desktop to connect:

```json
{
  "mcpServers": {
    "deapi": {
      "url": "http://your-server-ip:8000/mcp"
    }
  }
}
```
Manual Remote Deployment
- On your remote server:

```bash
git clone https://github.com/deapi-ai/mcp-server-deapi.git
cd mcp-server-deapi
pip install -e .
python -m src.server_remote
```

- For production with systemd:

```bash
# Create /etc/systemd/system/mcp-server-deapi.service
sudo systemctl enable mcp-server-deapi
sudo systemctl start mcp-server-deapi
```
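A minimal unit file for the systemd step might look like the following. The `User`, `WorkingDirectory`, and interpreter path are placeholders to adapt to your host:

```ini
# /etc/systemd/system/mcp-server-deapi.service
[Unit]
Description=deAPI MCP Server
After=network.target

[Service]
Type=simple
User=mcp
WorkingDirectory=/opt/mcp-server-deapi
Environment=MCP_HOST=0.0.0.0
Environment=MCP_PORT=8000
ExecStart=/usr/bin/python3 -m src.server_remote
Restart=on-failure

[Install]
WantedBy=multi-user.target
```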
- Behind a reverse proxy (nginx + SSL):

```nginx
server {
    listen 443 ssl http2;
    server_name mcp.yourdomain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 86400;
    }
}
```
Cloud Deployment Options
- Railway.app: Push to GitHub, connect repository, deploy automatically
- Fly.io: `fly launch && fly deploy`
- Heroku: `heroku create && git push heroku main`
- DigitalOcean: Use App Platform or Droplets with Docker
- AWS/GCP/Azure: Deploy with container services (ECS, Cloud Run, Container Instances)
For detailed deployment instructions, security considerations, monitoring, and troubleshooting, see DEPLOYMENT.md.
Troubleshooting
Connection Issues
If the server fails to connect:
- Check your API token is valid
- Verify network connectivity to api.deapi.ai
- Check the logs for specific error messages
- For remote servers: verify firewall rules and that port 8000 is accessible
Job Timeouts
If jobs are timing out:
- Check your balance with `get_balance`
- Verify the job type timeout is appropriate
- Use `check_job_status` to check if the job is still processing
Model Not Found
If you get model errors:
- Use `get_available_models` to see available models
- Ensure you're using the correct model name
- Check if the model supports your requested operation
Remote Connection Issues
If remote MCP connection fails:
- Test the endpoint: `curl -N http://your-server:8000/mcp`
- Check server logs: `docker logs mcp-server-deapi` or `journalctl -u mcp-server-deapi`
- Verify firewall rules and SSL certificates (if using HTTPS)
- Ensure MCP endpoint is accessible from your client
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For issues related to:
- This MCP Server: Open an issue
- deAPI Platform: Visit docs.deapi.ai
- MCP Protocol: Visit modelcontextprotocol.io
