Audio Server
An audio transcription server that transcribes meeting recordings, generates meeting notes, and intelligently splits audio files for efficient management. Open-source, built with FastMCP and Groq/OpenAI Whisper.
MCP Audio Server
A Model Context Protocol (MCP) server that provides audio transcription, intelligent splitting, and meeting analysis tools. This server exposes audio processing capabilities to MCP-compatible clients like Claude Desktop, enabling seamless integration of audio workflows into AI conversations.
What is MCP?
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect to external data sources and tools. This MCP server provides audio processing tools that can be used by any MCP-compatible client, allowing AI assistants to:
- Transcribe audio files directly in conversations
- Split large audio files for processing
- Generate meeting summaries and insights
- Analyze multiple audio transcripts simultaneously
Features
- Audio Transcription: High-quality transcription using Groq's Whisper models
- Smart Audio Splitting: Automatically split large audio files by size or duration
- Transcript Summarization: Generate comprehensive meeting summaries with context
- Multi-file Analysis: Chat with multiple transcript files simultaneously
- Format Fallbacks: Robust export with an MP3 → AAC → WAV fallback chain
- Size Management: Automatic handling of the 25 MB Groq API limit
- Intelligent Break Points: Uses silence detection to choose optimal split points
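The silence-based break-point selection can be sketched roughly as follows. This is a simplified illustration, not the server's actual code: given silent intervals already detected (e.g. by ffmpeg or pydub), pick the silence midpoint closest to the target split time so cuts land in pauses rather than mid-word.

```python
def pick_break_point(target_ms, silences):
    """Pick the split point closest to target_ms, preferring silence midpoints.

    silences: list of (start_ms, end_ms) silent intervals.
    Falls back to target_ms itself if no silence was detected.
    """
    if not silences:
        return target_ms
    midpoints = [(start + end) // 2 for start, end in silences]
    return min(midpoints, key=lambda m: abs(m - target_ms))

# Example: target a split at the 10-minute mark (600000 ms)
silences = [(598500, 599300), (612000, 613500), (200000, 201000)]
print(pick_break_point(600_000, silences))  # 598900, the nearest silence midpoint
```

In practice the candidate silences would come from an audio analysis pass; here they are hard-coded to keep the sketch self-contained.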
Installation
Prerequisites
- Python 3.13+
- uv package manager
- ffmpeg (for audio processing)
- Groq API key
Setup
1. Clone the repository:

```bash
git clone <your-repo-url>
cd mcp-audio-server
```

2. Install dependencies with uv:

```bash
uv sync
```

3. Set up your Groq API key:

```bash
export GROQ_API_KEY="your-groq-api-key-here"
```

4. Verify the installation:

```bash
uv run python setup.py
```
Usage
With Claude Desktop
Add this server to your Claude Desktop configuration:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
```json
{
  "mcpServers": {
    "mcp-audio-server": {
      "command": "uv",
      "args": ["run", "python", "-m", "mcp_audio_server.server"],
      "cwd": "/path/to/mcp-audio-server",
      "env": {
        "GROQ_API_KEY": "your-groq-api-key-here"
      }
    }
  }
}
```
With Other MCP Clients
Use the provided configuration file:
```bash
# Copy and edit the config
cp mcp-config.json my-config.json
# Edit my-config.json with your API key and paths
```
Then connect your MCP client using the configuration.
Standalone Testing
Test the server directly:
```bash
# Start the MCP server
uv run python -m mcp_audio_server.server

# Or test with the CLI client
uv run mcp-audio-client transcribe path/to/audio.mp3
```
MCP Tools
This server exposes the following tools to MCP clients:
transcribe_audio
Transcribe audio files using Groq's Whisper API.
Parameters:
- `file_path` (string): Path to the audio file
- `model` (string, optional): Groq model (default: "whisper-large-v3")
- `language` (string, optional): Audio language
Example in Claude:
"Please transcribe the audio file at
/path/to/meeting.mp3"
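Under the hood, an MCP client frames tool invocations as JSON-RPC 2.0 `tools/call` requests. A sketch of what such a request for this tool might look like (the file path and language values are placeholders; the argument names match the parameter list above):

```python
import json

# Hypothetical MCP "tools/call" request an MCP client would send for this tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "transcribe_audio",
        "arguments": {
            "file_path": "/path/to/meeting.mp3",
            "model": "whisper-large-v3",
            "language": "en",
        },
    },
}
print(json.dumps(request, indent=2))
```

Claude Desktop builds and sends this framing for you; it is shown here only to illustrate how the parameters travel over the wire.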
split_audio
Split audio files with multiple strategies.
Parameters:
- `file_path` (string): Path to the audio file
- `splits` (array, optional): Manual split points
- `output_dir` (string, optional): Output directory
- `max_size_mb` (number, optional): Max size for auto-splitting (default: 24 MB)
- `max_duration_minutes` (number, optional): Max duration for auto-splitting
- `auto_split_by_size` (boolean): Enable size-based splitting (default: true)
- `auto_split_by_duration` (boolean): Enable duration-based splitting
Example in Claude:
"Please split the large audio file at
/path/to/long_meeting.mp3into segments under 25MB"
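The arithmetic behind size-based auto-splitting is worth seeing once. A minimal sketch (not the server's code) that assumes a roughly constant bitrate, so segment duration scales with the size budget:

```python
import math

def plan_size_splits(duration_s, file_size_mb, max_size_mb=24.0):
    """Estimate evenly spaced split times (seconds) so each segment
    stays under max_size_mb, assuming roughly constant bitrate."""
    if file_size_mb <= max_size_mb:
        return []  # already small enough, no splits needed
    n_segments = math.ceil(file_size_mb / max_size_mb)
    seg_len = duration_s / n_segments
    return [round(seg_len * i, 1) for i in range(1, n_segments)]

# A 2-hour recording of ~110 MB needs 5 segments, i.e. 4 split points:
print(plan_size_splits(7200, 110))  # [1440.0, 2880.0, 4320.0, 5760.0]
```

The real tool then nudges each of these target times toward a nearby silence so cuts do not land mid-sentence.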
summarize_transcript
Generate summaries from transcripts.
Parameters:
- `transcript` (string): Transcript text
- `context` (string, optional): Additional context
- `custom_prompt` (string, optional): Custom system prompt
- `model` (string, optional): Groq model (default: "llama3-8b-8192")
Example in Claude:
"Please summarize this meeting transcript with context about our quarterly planning session"
multi_file_chat
Analyze multiple files simultaneously.
Parameters:
- `file_paths` (array): List of file paths
- `question` (string): Question to ask
- `system_prompt` (string, optional): Custom system prompt
- `model` (string, optional): Groq model (default: "llama3-8b-8192")
Example in Claude:
"Please analyze these three meeting transcripts and tell me what the common themes were"
Configuration
Environment Variables
GROQ_API_KEY: Your Groq API key (required for transcription/summarization)
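A fail-fast check for the required key can be as small as the following. This is an illustrative sketch of the startup check, not necessarily how this server validates it:

```python
import os

def require_groq_key():
    """Return GROQ_API_KEY, or raise a clear error when it is missing."""
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError(
            "GROQ_API_KEY not set; export it or add it to the "
            "env block of your MCP configuration"
        )
    return key
```

Raising early with an actionable message beats a cryptic 401 from the API later in the workflow.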
Supported Audio Formats
- MP3, M4A, WAV, FLAC, OGG, and more (via pydub)
Export Formats
- Primary: MP3 (most compatible)
- Fallback: AAC (ADTS format)
- Final Fallback: WAV (uncompressed)
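The fallback chain amounts to trying each format in order and keeping the first that succeeds. A minimal sketch; `export_fn` is a placeholder for whatever performs the actual export (e.g. an ffmpeg invocation), not a real API of this server:

```python
def export_with_fallback(export_fn, formats=("mp3", "aac", "wav")):
    """Try each export format in order; return the first that succeeds.

    export_fn(fmt) should perform the export and raise on failure.
    """
    errors = {}
    for fmt in formats:
        try:
            export_fn(fmt)
            return fmt
        except Exception as exc:  # a real implementation would narrow this
            errors[fmt] = exc
    raise RuntimeError(f"all export formats failed: {errors}")

# Simulate an environment whose ffmpeg build lacks an MP3 encoder:
def fake_export(fmt):
    if fmt == "mp3":
        raise OSError("no libmp3lame")

print(export_with_fallback(fake_export))  # aac
```

WAV sits last in the chain because it always encodes successfully, at the cost of much larger files.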
Example Workflows
Complete Meeting Processing
- Split large recording: "Please split this 2-hour meeting recording into manageable segments"
- Transcribe segments: "Now transcribe each segment"
- Generate summary: "Create a comprehensive summary of all segments with action items"
Multi-Meeting Analysis
- Transcribe multiple meetings: Process several meeting recordings
- Cross-meeting analysis: "What are the recurring themes across these three meetings?"
- Action item tracking: "What action items were mentioned and who owns them?"
Development
Setup Development Environment
```bash
uv sync --dev
```
Run Tests
```bash
uv run pytest
```
Code Formatting
```bash
uv run black src/
uv run ruff check src/
```
Testing the MCP Server
```bash
# Test server startup
uv run python -m mcp_audio_server.server

# Test with the example client
uv run python examples/basic_usage.py
```
Requirements
- Python 3.13+
- uv package manager
- ffmpeg (for audio processing)
- Groq API key
Installing ffmpeg
```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg

# Windows: download from https://ffmpeg.org/
```
Troubleshooting
Common Issues
"GROQ_API_KEY not set"
- Ensure your API key is exported: `export GROQ_API_KEY="your-key"`
- For Claude Desktop, add it to the `env` block of your MCP configuration

"ffmpeg not found"
- Install ffmpeg using the instructions above
- Ensure it's on your system PATH

"File too large" errors
- Use the `split_audio` tool first to break large files into segments under 25 MB
- Then transcribe each segment individually

MCP connection issues
- Verify the server path in your MCP configuration
- Check that uv is installed and accessible
- Ensure the working directory is correct
License
MIT License - see LICENSE file for details.
Contributing
- Fork the repository
- Create a feature branch
- Make your changes with uv: `uv sync --dev`
- Add tests if applicable
- Submit a pull request
Support
For issues and questions:
- Open an issue on GitHub
- Check the MCP documentation: https://modelcontextprotocol.io/
- Review the examples in this repository
Note: This is an MCP server that requires a compatible MCP client (like Claude Desktop) to use. The server provides audio processing tools that integrate seamlessly into AI conversations.
