DevoChat
English | 한국어
Unified AI Chat Platform
DevoChat is a web application that allows you to use various multimodal AI models and MCP (Model Context Protocol) servers through a single interface. Check out the live demo.
Screenshots
Main Page | Model Selection | File Upload | Image Upload | Image Generation | Image Editing | Code Highlighting | Formula Rendering | URL Processing | Real-time Conversation | MCP Server Selection | MCP Server Usage
Key Features
- Unified Conversation System
  - Uses a MongoDB-based unified schema to freely switch between AI models during a conversation without losing context.
  - Provides client layers that normalize data to meet each AI provider's API requirements.
  - Offers an integrated management environment for media files, including images, PDFs, and documents.
- Advanced Conversation Features
  - Provides parameter controls including temperature, reasoning intensity, response length, and system prompt modification.
  - Supports markdown, LaTeX formula, and code block rendering.
  - Streams responses, and simulates streaming for non-streaming models by sending the complete response in chunks (see the sketch after this list).
  - Supports image generation via Text-to-Image and Image-to-Image models.
  - Supports real-time, low-latency STS (Speech-to-Speech) conversation through the Realtime API.
- Model Switching Architecture
  - Allows new AI models to be added immediately through JSON configuration, without code changes.
  - Supports toggling additional features such as reasoning, web search, and research on hybrid models.
  - Links separate provider models (e.g., Qwen3-235B-A22B-Instruct-2507 and Qwen3-235B-A22B-Thinking-2507) via a "switch" variant so they act as a single hybrid model.
- Web-based MCP Client
  - Connects directly to all types of MCP servers (SSE, local) from the web browser.
  - Provides simple access to local MCP servers from anywhere on the web using the secure-mcp-proxy package.
  - Supports visual monitoring of real-time tool calls and execution.
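To illustrate the simulated-streaming feature mentioned above, here is a minimal sketch, assuming the response is a plain string; the function name, chunk size, and delay are illustrative, not the project's actual implementation:

```python
import asyncio

async def simulate_stream(text: str, chunk_size: int = 24, delay: float = 0.02):
    """Yield a complete (non-streaming) response in small chunks so the UI
    can render it progressively, mimicking a genuine streaming model."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]
        await asyncio.sleep(delay)  # pace the chunks like network arrival

async def main():
    async for chunk in simulate_stream("A complete answer from a non-streaming model."):
        print(chunk, end="", flush=True)

asyncio.run(main())
```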
Project Structure
devochat/
├── frontend/                        # React frontend
│   ├── public/                      # Static public assets
│   ├── src/
│   │   ├── components/              # UI components
│   │   ├── contexts/                # State management
│   │   ├── pages/                   # Page components
│   │   ├── resources/               # Static resources
│   │   ├── styles/                  # CSS stylesheets
│   │   ├── utils/                   # Utility functions
│   │   └── App.js                   # Main app component
│   ├── build/                       # Production build output
│   ├── releases/                    # Archived frontend builds
│   ├── package.json
│   └── package-lock.json
│
├── backend/                         # FastAPI backend
│   ├── config/                      # Configuration files
│   │   ├── chat_models.json         # Text AI model settings
│   │   ├── image_models.json        # Image generation AI model settings
│   │   ├── mcp_servers_example.json # MCP server config template
│   │   ├── mcp_servers.json         # MCP server settings
│   │   └── realtime_models.json     # Real-time conversation model settings
│   ├── generated/                   # Generated image outputs
│   ├── icons/                       # MCP server icons
│   ├── prompts/                     # System prompts
│   ├── routes/                      # API routers
│   │   ├── chat_clients/            # Text AI model clients
│   │   ├── image_clients/           # Image generation AI model clients
│   │   ├── auth.py                  # Authentication/authorization management
│   │   ├── common.py                # Common utilities
│   │   ├── conversations.py         # Conversation management API
│   │   ├── realtime.py              # Real-time communication
│   │   └── uploads.py               # File upload handling
│   ├── shared_pages/                # Generated shared conversation pages
│   ├── uploads/                     # Uploaded files and images
│   ├── logging_util.py              # Logging utility
│   ├── main.py                      # FastAPI application entry point
│   └── requirements.txt             # Python dependencies
├── mcp-proxy/                       # Local MCP proxy package and servers
│   ├── servers/                     # Local MCP server definitions
│   ├── src/                         # Proxy source package
│   ├── servers.json
│   └── pyproject.toml
└── samples/                         # README screenshots
Tech Stack
React (frontend), FastAPI (backend), MongoDB (conversation storage), and MCP (Model Context Protocol) for tool integration.
Installation and Setup
Frontend
Environment Variables
Create a `.env` file in the `frontend/` directory:
WDS_SOCKET_PORT=0
REACT_APP_FASTAPI_URL=http://localhost:8000
Package Installation and Start
$ cd frontend
$ npm install
$ npm start
Build and Deploy
$ cd frontend
$ npm run build
$ npx serve -s build
Backend
Python Virtual Environment Setup
$ cd backend
$ python -m venv .venv
$ source .venv/bin/activate # Windows: .venv\Scripts\activate
$ pip install -r requirements.txt
Environment Variables
Create a `.env` file in the `backend/` directory:
MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/chat_db
PRODUCTION_URL=https://your-production-domain.com
DEVELOPMENT_URL=http://localhost:3000
AUTH_KEY=your_auth_secret_key
# API Key Configuration
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GEMINI_API_KEY=...
PERPLEXITY_API_KEY=...
HUGGINGFACE_API_KEY=...
XAI_API_KEY=...
MISTRAL_API_KEY=...
OPENROUTER_API_KEY=...
FIREWORKS_API_KEY=...
FRIENDLI_API_KEY=...
FLUX_API_KEY=...
BYTEPLUS_API_KEY=...
ALIBABA_API_KEY=...
Run FastAPI Server
$ uvicorn main:app --host=0.0.0.0 --port=8000 --reload
Usage
chat_models.json Configuration
Define the AI models available in the application and their properties through the chat_models.json file.
{
"default": "google/gemini-3-flash-preview",
"alias": "google/gemini-3.1-flash-lite-preview",
"models": [
{
"model_name": "google/gemini-3-flash-preview",
"model_alias": "Gemini 3 Flash",
"description": "Default Gemini model",
"endpoint": "/chat/openrouter",
"billing": {
"in_billing": "0.5",
"out_billing": "3"
},
"capabilities": {
"stream": true,
"vision": true,
"reasoning": "toggle",
"web_search": "toggle",
"research": false,
"mcp": true
},
"controls": {
"instructions": true,
"temperature": false,
"reason": {
"levels": ["low", "medium", "high", "xhigh"],
"default": "high"
},
"verbosity": false
},
"admin": false
},
{
"model_name": "gpt-5.5",
"model_alias": "GPT 5.5",
"description": "High-performance GPT model",
"endpoint": "/chat/gpt",
"billing": {
"in_billing": "5",
"out_billing": "30"
},
"capabilities": {
"stream": true,
"vision": true,
"reasoning": "toggle",
"web_search": "toggle",
"research": false,
"mcp": true
},
"controls": {
"instructions": true,
"temperature": false,
"reason": {
"levels": ["low", "medium", "high", "xhigh"],
"default": "medium"
},
"verbosity": {
"levels": ["low", "medium", "high"],
"default": "medium"
}
},
"admin": true
},
{
"model_name": "grok-4.20-0309-non-reasoning",
"model_alias": "Grok 4.2",
"description": "Default Grok model",
"endpoint": "/chat/grok",
"billing": {
"in_billing": "2",
"out_billing": "6"
},
"variants": {
"reasoning": "grok-4.20-0309-reasoning"
},
"capabilities": {
"stream": true,
"vision": true,
"reasoning": "switch",
"web_search": "toggle",
"research": false,
"mcp": true
},
"controls": {
"instructions": true,
"temperature": true,
"reason": false,
"verbosity": false
},
"admin": false
}
]
}
Parameter Description
| Parameter | Description |
|---|---|
| `default` | Default chat model selected when the app initializes |
| `alias` | Model used to generate conversation aliases/titles |
| `model_name` | The actual identifier of the model used in API calls |
| `model_alias` | User-friendly name displayed in the UI |
| `description` | Brief description of the model for reference when selecting |
| `endpoint` | API path for handling model requests in the backend (e.g., `/chat/gpt`, `/chat/claude`, `/chat/grok`, `/chat/openrouter`) |
| `billing` | Object containing model usage cost information |
| `billing.in_billing` | Billing cost for input tokens (prompts). Unit: USD per million tokens |
| `billing.out_billing` | Billing cost for output tokens (responses). Unit: USD per million tokens |
| `variants` | Defines target models for `"switch"` capability values. Keys such as `reasoning`, `web_search`, and `research` point to the feature-specific model; `base` points back to the normal model |
| `capabilities` | Defines the features supported by the model |
| `capabilities.stream` | Whether streaming responses are supported. Possible values: `true`, `false` |
| `capabilities.vision` | Whether image input is supported. Possible values: `true`, `false` |
| `capabilities.reasoning` | Whether reasoning is supported. Possible values: `true`, `false`, `"toggle"`, `"switch"` |
| `capabilities.web_search` | Whether web search is supported. Possible values: `true`, `false`, `"toggle"`, `"switch"` |
| `capabilities.research` | Whether research mode is supported. Possible values: `true`, `false`, `"toggle"`, `"switch"` |
| `capabilities.mcp` | Whether MCP server integration is supported. Possible values: `true`, `false` |
| `controls` | Defines the user control options supported by the model |
| `controls.instructions` | Whether custom instructions can be set. Possible values: `true`, `false` |
| `controls.temperature` | Whether temperature can be adjusted. Possible values: `true`, `false` |
| `controls.reason` | Defines selectable reasoning-intensity levels. Possible values: `false` or an object |
| `controls.reason.levels` | String array defining the selectable options shown in the UI |
| `controls.reason.default` | Default value applied when the model is selected |
| `controls.verbosity` | Defines selectable response-length levels. Possible values: `false` or an object |
| `controls.verbosity.levels` | String array defining the selectable options shown in the UI |
| `controls.verbosity.default` | Default value applied when the model is selected |
| `admin` | If `true`, only admin users can access/select this model |
Value Description
| Value | Description |
|---|---|
| `true` | The feature is always enabled. |
| `false` | The feature is not supported. |
| `"toggle"` | Users can turn the feature on or off without changing the selected model. |
| `"switch"` | When a user toggles the feature, the selected model changes to another model defined in the `variants` object. |
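For reference, a minimal sketch (hypothetical helper names, not code from the repository) of how a backend might load chat_models.json and look up the default model and its capabilities:

```python
import json
from pathlib import Path

CONFIG = Path("backend/config/chat_models.json")  # path per the project tree above

def load_models(path: Path = CONFIG) -> dict:
    """Load the model registry from chat_models.json."""
    with path.open(encoding="utf-8") as f:
        return json.load(f)

def get_model(config: dict, model_name: str) -> dict:
    """Return the entry whose model_name matches, or raise KeyError."""
    for model in config["models"]:
        if model["model_name"] == model_name:
            return model
    raise KeyError(model_name)

config = load_models()
default = get_model(config, config["default"])
print(default["model_alias"], default["capabilities"])
```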
image_models.json Configuration
Define the image generation AI models available in the application and their properties through the image_models.json file:
{
"default": "gemini-2.5-flash-image",
"alias": "google/gemini-3.1-flash-lite-preview",
"models": [
{
"model_name": "gemini-2.5-flash-image",
"model_alias": "Nano Banana",
"description": "Google",
"endpoint": "/image/google/gemini",
"billing": {
"in_billing": "0",
"out_billing": "0.039"
},
"capabilities": { "vision": true, "max_input": 10 },
"admin": false
},
{
"model_name": "bytedance/seedream-v4.5",
"model_alias": "Seedream 4.5",
"description": "BytePlus",
"endpoint": "/image/wavespeed",
"billing": {
"in_billing": "0",
"out_billing": "0.04"
},
"variants": {
"vision": "bytedance/seedream-v4.5/edit"
},
"capabilities": { "vision": "switch" },
"admin": false
},
{
"model_name": "bytedance/seedream-v4.5/edit",
"model_alias": "Seedream 4.5",
"description": "BytePlus",
"endpoint": "/image/wavespeed",
"billing": {
"in_billing": "0",
"out_billing": "0.04"
},
"variants": {
"base": "bytedance/seedream-v4.5"
},
"capabilities": { "vision": "switch", "max_input": 10 },
"admin": false
}
]
}
Image Model Parameter Description
| Parameter | Description |
|---|---|
| `default` | Default image model selected when the image page initializes |
| `alias` | Model used to generate image conversation aliases/titles |
| `variants` | Defines target models for `"switch"` capability values. `vision` points to the image-editing model; `base` points back to the text-to-image model |
| `capabilities.vision` | Whether image input is supported. `true`: supported, `false`: not supported, `"switch"`: switch to the variant model |
| `capabilities.max_input` | Maximum number of images that can be input simultaneously |
Model Switching System (Variants)
You can define various variants of models through the variants object.
Example
[
{
"model_name": "grok-4.20-0309-non-reasoning",
"variants": {
"reasoning": "grok-4.20-0309-reasoning"
},
"capabilities": {
"reasoning": "switch"
}
},
{
"model_name": "grok-4.20-0309-reasoning",
"variants": {
"base": "grok-4.20-0309-non-reasoning"
},
"capabilities": {
"reasoning": "switch"
}
}
]
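Following the example above, a client resolving a "switch" capability only needs to follow the variants mapping in both directions. A sketch with hypothetical function names:

```python
def resolve_switch(models: list[dict], current: str, feature: str, enabled: bool) -> str:
    """Return the model_name to use after toggling `feature` on `current`.

    With capabilities[feature] == "switch", enabling the feature follows
    variants[feature]; disabling it follows variants["base"]."""
    entry = next(m for m in models if m["model_name"] == current)
    if entry["capabilities"].get(feature) != "switch":
        return current  # true/false/"toggle" never change the selected model
    key = feature if enabled else "base"
    return entry.get("variants", {}).get(key, current)

models = [
    {"model_name": "grok-4.20-0309-non-reasoning",
     "variants": {"reasoning": "grok-4.20-0309-reasoning"},
     "capabilities": {"reasoning": "switch"}},
    {"model_name": "grok-4.20-0309-reasoning",
     "variants": {"base": "grok-4.20-0309-non-reasoning"},
     "capabilities": {"reasoning": "switch"}},
]
print(resolve_switch(models, "grok-4.20-0309-non-reasoning", "reasoning", True))
# -> grok-4.20-0309-reasoning
```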
realtime_models.json Configuration
Define the real-time voice models available in the application through the realtime_models.json file.
{
"default": "gpt-realtime-1.5:coral",
"models": [
{
"model_name": "gpt-realtime-1.5:marin",
"model_alias": "Marin",
"model_gender": "female",
"description": "A warm motivator"
},
{
"model_name": "gpt-realtime-1.5:ash",
"model_alias": "Ash",
"model_gender": "male",
"description": "A steady supporter who believes in you"
}
]
}
Realtime Model Parameter Description
| Parameter | Description |
|---|---|
| `default` | Default real-time voice model selected when the real-time page initializes |
| `model_name` | Actual voice/model identifier used by the Realtime API |
| `model_alias` | User-friendly voice name displayed in the UI |
| `model_gender` | UI grouping/style hint for the voice. Current values use `female` or `male` |
| `description` | Short voice/personality description displayed in the model picker |
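The model_name values above appear to follow a model:voice convention (e.g., gpt-realtime-1.5:coral). Assuming that convention, a hypothetical helper to split an identifier:

```python
def parse_realtime_model(model_name: str) -> tuple[str, str]:
    """Split an identifier like "gpt-realtime-1.5:marin" into (model, voice).

    Assumes the model:voice convention used in realtime_models.json."""
    model, _, voice = model_name.partition(":")
    return model, voice

print(parse_realtime_model("gpt-realtime-1.5:marin"))  # ('gpt-realtime-1.5', 'marin')
```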
MCP Server Configuration
DevoChat is a web-based MCP (Model Context Protocol) client.
You can define external servers to connect to in the mcp_servers.json file.
mcp_servers.json
{
"server-id": {
"url": "https://example.com/mcp/endpoint",
"authorization_token": "your_authorization_token",
"name": "Server_Display_Name",
"admin": false
}
}
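For reference, a minimal sketch of connecting to one configured SSE server and listing its tools, using the official mcp Python SDK (an assumption for illustration; DevoChat's own client code may differ):

```python
import asyncio
import json

from mcp import ClientSession
from mcp.client.sse import sse_client

async def list_server_tools(server_id: str) -> None:
    # Read one entry from the config shown above.
    with open("backend/config/mcp_servers.json", encoding="utf-8") as f:
        entry = json.load(f)[server_id]
    headers = {}
    if entry.get("authorization_token"):
        # Bearer scheme is an assumption; adjust to your server's auth.
        headers["Authorization"] = f"Bearer {entry['authorization_token']}"
    # Open the SSE transport, then perform the MCP handshake.
    async with sse_client(entry["url"], headers=headers) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_server_tools("server-id"))
```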
Local MCP Server Integration
To connect local MCP servers, use secure-mcp-proxy:
$ git clone https://github.com/gws8820/secure-mcp-proxy
$ cd secure-mcp-proxy
$ uv run python -m secure_mcp_proxy --named-server-config servers.json --port 3000
Contributing
- Fork this repository
- Create a new branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Create a Pull Request
License
This project is distributed under the MIT License.