Jezweb MCP Core v3.0.1 - Adaptable Multi-Provider Architecture
A production-ready Model Context Protocol (MCP) server featuring an adaptable, provider-agnostic architecture that supports multiple LLM providers through a unified interface. Built with a "Shared Core with Thin Adapters" architecture for maximum flexibility and simplicity.
Universal MCP Server - Three Ways to Connect
Choose the deployment option that best fits your needs:
Option 1: Cloudflare Workers (Production Ready - v3.0 Unified Architecture)
Production URL: https://openai-assistants-mcp.jezweb.ai/mcp/{api-key}
- ✅ Adaptable Architecture - Support for multiple LLM providers (OpenAI, Claude, etc.)
- ✅ Simple Configuration - Environment-first configuration, no complex setup
- ✅ Lightweight & Fast - Sub-100ms response times with global edge distribution
- ✅ Zero Dependencies - No local setup required
- ✅ LIVE & OPERATIONAL - v3.0 unified architecture deployed and tested
Option 2: NPM Package (Local Stdio - v3.0 Deployment Adapter)
Package: jezweb-mcp-core@3.0.1
- ✅ Provider-Agnostic - Unified core with deployment-specific adapter
- ✅ Simple Configuration - Environment variables and sensible defaults
- ✅ Direct stdio transport - No proxy required
- ✅ Local execution - Full control over environment
- ✅ 100% Backward Compatible - Seamless upgrade from OpenAI-specific versions
Option 3: Local Development Server
Local Build: Clone and run locally
- ✅ Full source code access
- ✅ Customizable implementation
- ✅ Development and testing
- ✅ Private deployment options
Key Features - Jezweb MCP Core v3.0
Adaptable Multi-Provider Architecture
- Provider-Agnostic Design - Support for OpenAI, Anthropic Claude, Google, and more
- Extensible Provider System - Easy to add new LLM providers
- Unified Interface - Same tools and resources across all providers
- Smart Provider Selection - Automatic fallback and load balancing
- Simple Configuration - Environment-first setup with sensible defaults
Core Capabilities
- Complete Assistant API Coverage - All 22 tools for full assistant, thread, message, and run management
- Universal Deployment - Three deployment options with identical functionality
- Production Ready - Deployed on Cloudflare Workers with modern architecture
- Lightweight - Minimal dependencies and fast execution
- Type Safe - Full TypeScript implementation with comprehensive type definitions
Enhanced User Experience
- Enhanced Tool Descriptions - Workflow-oriented descriptions with practical examples
- MCP Resources - 9 comprehensive resources including templates, workflows, and documentation
- Improved Validation - Detailed error messages with examples and suggestions
- Tool Annotations - Proper MCP annotations for better client understanding
- Assistant Templates - Pre-configured templates for common use cases
Technical Excellence
- Secure Authentication - URL-based API key authentication (Workers) or environment variables (NPM)
- Advanced Error Handling - Context-aware error messages with actionable guidance
- CORS Support - Ready for web-based MCP clients
- Real-time Operations - Support for streaming and real-time assistant interactions
- Comprehensive Testing - Built-in test suites for both deployment options
Architecture Overview
Provider System
Jezweb MCP Core uses a sophisticated provider registry system that abstracts away provider-specific details:
// Multiple providers supported
const providers = {
openai: { /* OpenAI configuration */ },
anthropic: { /* Claude configuration */ },
google: { /* Gemini configuration */ }
};
// Automatic provider selection
const provider = registry.selectProvider({
strategy: 'capability-based',
requiredCapabilities: ['assistants', 'threads']
});
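As a rough sketch of how such a registry could work (the class, method, and field names here are illustrative, not the actual jezweb-mcp-core API), capability-based selection with automatic fallback might look like:

```typescript
// Illustrative sketch only — not the actual jezweb-mcp-core API.
interface ProviderEntry {
  name: string;
  capabilities: string[];
  healthy: boolean;
}

class ProviderRegistry {
  private providers: ProviderEntry[] = [];

  register(entry: ProviderEntry): void {
    this.providers.push(entry);
  }

  // Pick the first healthy provider that supports every required capability;
  // later registrations act as automatic fallbacks.
  selectProvider(required: string[]): ProviderEntry {
    const match = this.providers.find(
      (p) => p.healthy && required.every((cap) => p.capabilities.includes(cap))
    );
    if (!match) {
      throw new Error(`No provider supports: ${required.join(", ")}`);
    }
    return match;
  }
}

const registry = new ProviderRegistry();
registry.register({ name: "openai", capabilities: ["assistants", "threads"], healthy: true });
registry.register({ name: "anthropic", capabilities: ["threads"], healthy: true });

const chosen = registry.selectProvider(["assistants", "threads"]);
```

Because selection scans registered providers in order, load balancing or health checks can be layered on by reordering or flagging entries before each call.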
Simple Configuration
Environment-first configuration with sensible defaults:
# Cloudflare Workers - via Wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
# NPM Package - via environment variables
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
Unified Architecture
shared/                          # Unified shared core (single source of truth)
├── core/                        # Core business logic and handlers
├── services/                    # Provider registry and LLM service abstraction
│   ├── llm-service.ts           # Generic LLM provider interface
│   ├── provider-registry.ts     # Provider management and selection
│   └── providers/               # Individual provider implementations
└── types/                       # Unified type definitions
src/                             # Cloudflare Workers deployment
├── worker.ts                    # Cloudflare Workers entry point
└── mcp-handler.ts               # Worker-specific MCP handler
npm-package/                     # NPM package deployment
├── src/                         # NPM-specific implementation
└── universal-mcp-server.cjs     # NPM package entry point
Quick Start - Choose Your Installation Method
Prerequisites
- API key for your chosen LLM provider (OpenAI, Anthropic, etc.)
- Node.js 18+ (for NPM package or local development)
- MCP client (Claude Desktop, Roo, or other MCP-compatible client)
Getting Started with LLM Providers
OpenAI Setup
- Visit the OpenAI API Keys page
- Create a new API key
- Monitor usage at OpenAI Dashboard
Anthropic Claude Setup
- Visit the Anthropic Console
- Create an API key
- Review Claude API documentation
Option 1: NPM Package (Recommended for Most Users)
Installation
# Option A: Use directly with npx (recommended for latest fixes)
npx jezweb-mcp-core@latest
# Option B: Install globally
npm install -g jezweb-mcp-core@latest
# Option C: Install locally in your project
npm install jezweb-mcp-core@latest
Claude Desktop Configuration
Add to your claude_desktop_config.json:
{
"mcpServers": {
"jezweb-mcp-core": {
"command": "npx",
"args": ["jezweb-mcp-core@latest"],
"env": {
"OPENAI_API_KEY": "your-openai-api-key-here",
"ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
}
}
}
}
Roo Configuration
Add to your Roo configuration file:
{
"mcpServers": {
"jezweb-mcp-core": {
"command": "npx",
"args": ["jezweb-mcp-core@latest"],
"env": {
"OPENAI_API_KEY": "your-openai-api-key-here",
"ANTHROPIC_API_KEY": "your-anthropic-api-key-here"
},
"alwaysAllow": [
"assistant-create",
"assistant-list",
"assistant-get",
"assistant-update",
"assistant-delete",
"thread-create",
"thread-get",
"thread-update",
"thread-delete",
"message-create",
"message-list",
"message-get",
"message-update",
"message-delete",
"run-create",
"run-list",
"run-get",
"run-update",
"run-cancel",
"run-submit-tool-outputs",
"run-step-list",
"run-step-get"
]
}
}
}
Option 2: Cloudflare Workers (Zero Setup)
Claude Desktop Configuration
- Install the MCP proxy:
npm install -g mcp-proxy
- Add to your claude_desktop_config.json:
{
"mcpServers": {
"jezweb-mcp-core": {
"command": "npx",
"args": [
"mcp-proxy",
"https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_OPENAI_API_KEY_HERE"
]
}
}
}
Option 3: Local Development Server
Setup
- Clone the repository:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
- Install dependencies:
npm install
- Set up environment variables:
# Add your API keys to wrangler.toml or use wrangler secrets
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
- Start development server:
npm run dev
Available Tools
Assistant Management
- assistant-create - Create a new assistant with instructions and tools
- assistant-list - List all assistants with pagination and sorting
- assistant-get - Get detailed information about a specific assistant
- assistant-update - Update assistant instructions, tools, or metadata
- assistant-delete - Delete an assistant permanently
Thread Management
- thread-create - Create a new conversation thread
- thread-get - Get thread details and metadata
- thread-update - Update thread metadata
- thread-delete - Delete a thread permanently
Message Management
- message-create - Add a message to a thread
- message-list - List messages in a thread with pagination
- message-get - Get details of a specific message
- message-update - Update message metadata
- message-delete - Delete a message from a thread
Run Management
- run-create - Start a new assistant run on a thread
- run-list - List runs for a thread with filtering
- run-get - Get run details and status
- run-update - Update run metadata
- run-cancel - Cancel a running assistant execution
- run-submit-tool-outputs - Submit tool call results to continue a run
Advanced Operations
- run-step-list - List steps in a run execution
- run-step-get - Get details of a specific run step
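Put together, a typical interaction chains these tools in a fixed order: create a thread, add a message, start a run, poll it, then read the reply. The sketch below models that lifecycle as plain data; the tool names match the list above, but the argument shapes are simplified for illustration.

```typescript
// Simplified model of the thread → message → run lifecycle using the tool
// names listed above. Argument shapes are abbreviated, not the full schemas.
type ToolCall = { tool: string; args: Record<string, unknown> };

function assistantLifecycle(assistantId: string, threadId: string, userText: string): ToolCall[] {
  return [
    { tool: "thread-create", args: {} },
    { tool: "message-create", args: { thread_id: threadId, role: "user", content: userText } },
    { tool: "run-create", args: { thread_id: threadId, assistant_id: assistantId } },
    // Poll run-get until the run reaches a terminal status.
    { tool: "run-get", args: { thread_id: threadId, run_id: "<returned by run-create>" } },
    { tool: "message-list", args: { thread_id: threadId } },
  ];
}

const steps = assistantLifecycle("asst_abc123", "thread_abc123", "Hello!");
```

If the run pauses with status requires_action, run-submit-tool-outputs is inserted before the final message-list step.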
MCP Resources Available
This server provides 9 comprehensive MCP resources to help you get started quickly:
Assistant Templates (4 resources)
- assistant://templates/coding-assistant - Pre-configured coding assistant
- assistant://templates/writing-assistant - Professional writing assistant
- assistant://templates/data-analyst - Data analysis assistant
- assistant://templates/customer-support - Customer support assistant
Workflow Examples (2 resources)
- examples://workflows/create-and-run - Complete workflow examples
- examples://workflows/batch-processing - Efficient batch processing
Documentation (3 resources)
- docs://jezweb-mcp-core-api - Comprehensive API reference
- docs://error-handling - Common errors and solutions
- docs://best-practices - Guidelines for optimal usage
Enhanced Usage Examples
Multi-Provider Usage
# Create an assistant (automatically selects best available provider)
"Create an assistant named 'Code Helper' with instructions to help with programming tasks"
# Use specific provider
"Create an assistant using OpenAI's GPT-4 model"
"Create an assistant using Anthropic's Claude model"
Assistant Management
# List all assistants
"List my assistants"
# Get assistant details
"Get details of assistant asst_abc123"
# Update an assistant
"Update assistant asst_abc123 to include the code_interpreter tool"
Thread and Message Management
# Create a new thread
"Create a new conversation thread"
# Add a message to a thread
"Add the message 'Hello, how can you help me?' to thread thread_abc123"
# List messages in a thread
"List all messages in thread thread_abc123"
Run Management
# Start an assistant run
"Start a run with assistant asst_abc123 on thread thread_abc123"
# Get run status
"Get status of run run_abc123"
# Cancel a running execution
"Cancel run run_abc123"
Deployment Option Parity
All deployment options provide identical functionality with all 22 tools working seamlessly:
✅ Functional Parity
- Identical Tools: All 22 tools work exactly the same way
- Same API Surface: Identical tool names, parameters, and responses
- Consistent Behavior: Error handling, validation, and responses are uniform
- Multi-Provider Support: All deployment options support multiple LLM providers
Transport Differences
| Feature | Cloudflare Workers | NPM Package |
|---|---|---|
| Transport | HTTP/SSE via mcp-proxy | Direct stdio |
| Setup | Zero setup required | Node.js 18+ required |
| Performance | Sub-100ms global edge | Direct process communication |
| Dependencies | No local dependencies | Local Node.js execution |
| API Key | URL-based authentication | Environment variable |
| Scaling | Automatic global scaling | Single process |
| Offline | Requires internet | Works offline (after setup) |
Architecture - Provider-Agnostic Design
Core Design Principles
- Adaptable - Support for multiple LLM providers through unified interface
- Simple - Environment-first configuration with sensible defaults
- Lightweight - Minimal dependencies and fast execution
- Extensible - Easy to add new providers and capabilities
- Reliable - Comprehensive error handling and fallback mechanisms
Provider System Architecture
// Provider Registry manages multiple LLM providers
interface LLMProvider {
createAssistant(request: GenericCreateAssistantRequest): Promise<GenericAssistant>;
listAssistants(request?: GenericListRequest): Promise<GenericListResponse<GenericAssistant>>;
// ... all assistant API methods
}
// Providers implement the same interface
class OpenAIProvider implements LLMProvider { /* ... */ }
class AnthropicProvider implements LLMProvider { /* ... */ }
class GoogleProvider implements LLMProvider { /* ... */ }
Configuration System
Simple, environment-first configuration using standard environment variables:
# Required - at least one provider API key
export OPENAI_API_KEY="your-openai-key-here"
export ANTHROPIC_API_KEY="your-anthropic-key-here"
# Optional configuration
export JEZWEB_LOG_LEVEL="info"
export JEZWEB_DEFAULT_PROVIDER="openai"
The system automatically detects the deployment environment and applies appropriate defaults.
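A minimal sketch of environment-first loading with sensible defaults, using the variables documented above (the function and config shape are illustrative, not the package's actual internals):

```typescript
// Illustrative environment-first config loader: reads the documented
// variables and falls back to defaults when they are unset.
interface CoreConfig {
  logLevel: string;
  defaultProvider: string;
  apiKeys: Record<string, string | undefined>;
}

function loadConfig(env: Record<string, string | undefined>): CoreConfig {
  return {
    logLevel: env["JEZWEB_LOG_LEVEL"] ?? "info",
    defaultProvider: env["JEZWEB_DEFAULT_PROVIDER"] ?? "openai",
    apiKeys: {
      openai: env["OPENAI_API_KEY"],
      anthropic: env["ANTHROPIC_API_KEY"],
    },
  };
}

// In real use you would pass process.env; a literal object is used here
// so the example is self-contained.
const config = loadConfig({ OPENAI_API_KEY: "sk-test", JEZWEB_LOG_LEVEL: "debug" });
```

Explicit environment values win; anything unset falls back to a default, so a single OPENAI_API_KEY is enough to get started.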
Testing Infrastructure - Modern Vitest Framework
Comprehensive Test Suites
The project uses Vitest as the modern testing framework with comprehensive test coverage:
Available Test Commands
# Run all tests
npm test
# Run specific test categories
npm run test:unit # Unit tests only
npm run test:integration # Integration tests
npm run test:performance # Performance tests
npm run test:error-handling # Error handling tests
npm run test:edge-cases # Edge case tests
npm run test:deployment # Deployment tests
# Run specific deployment tests
npm run test:cloudflare # Cloudflare Workers tests
npm run test:npm # NPM package tests
# Development and debugging
npm run test:watch # Watch mode for development
npm run test:ui # Interactive UI for test exploration
npm run test:coverage # Generate coverage reports
npm run test:debug # Debug mode with inspector
npm run test:ci # CI-optimized test run
Test Categories
- Integration Tests: All 22 tools across both deployment options
- Performance Tests: Response time and memory usage benchmarks
- Error Handling Tests: Comprehensive error scenario coverage
- Edge Case Tests: Boundary conditions and Unicode handling
- Deployment Tests: Cloudflare Workers and NPM package specific tests
Manual Testing
Test the Cloudflare Workers deployment:
# List available tools
curl -X POST "https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
Test the NPM Package:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | npx jezweb-mcp-core@latest
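Both transports speak plain JSON-RPC 2.0, so any client can construct requests like the ones above. A small helper for building the envelope might look like this (illustrative, not part of the package):

```typescript
// Builds the same JSON-RPC 2.0 envelope used in the curl example above.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

let nextId = 0;
function buildRequest(method: string, params: Record<string, unknown> = {}): JsonRpcRequest {
  nextId += 1; // monotonically increasing request IDs
  return { jsonrpc: "2.0", id: nextId, method, params };
}

const listTools = buildRequest("tools/list");

// Sending it to the Workers deployment (requires a valid API key in the URL):
// await fetch("https://openai-assistants-mcp.jezweb.ai/mcp/YOUR_API_KEY", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(listTools),
// });
```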
Development
Local Development
- Clone and install:
git clone https://github.com/jezweb/openai-assistants-mcp.git
cd openai-assistants-mcp
npm install
- Set up environment:
wrangler secret put OPENAI_API_KEY
wrangler secret put ANTHROPIC_API_KEY
- Start development:
npm run dev
Adding New Providers
- Implement the LLMProvider interface
- Create a provider factory
- Register with the provider registry
- Add configuration schema
Example:
class MyCustomProvider implements LLMProvider {
// Implement all required methods
}
const factory: LLMProviderFactory = {
create: (config) => new MyCustomProvider(config),
getMetadata: () => ({ name: 'my-provider', ... }),
validateConfig: (config) => true
};
registry.registerFactory(factory);
Enhanced Validation & Error Handling
Intelligent Error Messages
- Format Examples: Error messages include correct format examples
- Documentation References: Errors link to relevant documentation
- Suggestion Guidance: Invalid values show supported alternatives
- Provider Context: Errors include provider-specific guidance
Validation Features
- ID Format Validation: Strict format checking with helpful messages
- Provider Validation: Validates provider availability and capabilities
- Configuration Validation: Comprehensive config validation
- Parameter Validation: Type and range checking with examples
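For instance, ID format validation with an example-bearing error message could be sketched like this. The prefixes match the OpenAI-style IDs shown in the usage examples; the function itself is illustrative, not the server's actual validator:

```typescript
// Illustrative ID validator: checks OpenAI-style resource IDs and, on
// failure, returns a message that includes the expected format and example.
const ID_PREFIXES: Record<string, string> = {
  assistant: "asst_",
  thread: "thread_",
  message: "msg_",
  run: "run_",
};

// Returns null when the ID is valid, otherwise a helpful error message.
function validateId(kind: string, id: string): string | null {
  const prefix = ID_PREFIXES[kind];
  if (prefix === undefined) {
    return `Unknown resource kind "${kind}".`;
  }
  if (new RegExp(`^${prefix}[A-Za-z0-9]+$`).test(id)) {
    return null;
  }
  return `Invalid ${kind} ID "${id}". Expected format: ${prefix}<alphanumeric>, e.g. "${prefix}abc123".`;
}

const ok = validateId("assistant", "asst_abc123");
const err = validateId("thread", "abc123");
```

Returning the expected format and a concrete example in the message is what turns a bare rejection into actionable guidance.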
Security
- API Key Protection - Secure handling of multiple provider API keys
- Enhanced Input Validation - Comprehensive validation with helpful feedback
- Provider Isolation - Each provider operates in isolation
- CORS Security - Proper CORS headers for web clients
- Rate Limiting - Inherits provider-specific rate limits
Performance
- Global Edge - Deployed on Cloudflare's global network
- Sub-100ms - Typical response times under 100ms
- Provider Selection - Smart provider selection for optimal performance
- Efficient - Minimal memory footprint and fast execution
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
License
MIT License - see LICENSE for details.
Migration Guide
From OpenAI Assistants MCP v2.x
The migration is seamless - just update your package name:
# Old
npx openai-assistants-mcp@latest
# New
npx jezweb-mcp-core@latest
All existing tools and functionality remain identical. The new version adds multi-provider support while maintaining 100% backward compatibility.
Configuration Migration
Old environment variables continue to work:
# Still supported
OPENAI_API_KEY=your-key-here
# New multi-provider support
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
Ready to get started? Choose your preferred installation method from the Quick Start guide above and begin building with multiple LLM providers through a unified interface!
