dsclaude
One script to run Claude Code with DeepSeek V4
dsclaude: Claude Code & Claude Desktop launchers for alternative backends
A small collection of shell scripts that point Claude Code and Claude Desktop at non-Anthropic model backends.
Author: Agents365-ai · Bilibili
Scripts
| Script | Agent | Platform | Backend | Models |
|---|---|---|---|---|
| dsclaude | Claude Code (CLI) | macOS / Linux | DeepSeek API (Anthropic-compatible endpoint) | deepseek-v4-pro[1m] (default, unified reasoning) · deepseek-v4-flash[1m] (fast / haiku tier) |
| mmclaude | Claude Code (CLI) | macOS / Linux | Xiaomi MiMo (Anthropic-compatible endpoint) | mimo-v2.5-pro |
| dsclaude-desktop | Claude Desktop (GUI) | macOS | DeepSeek API (Anthropic-compatible endpoint) | deepseek-v4-pro · deepseek-v4-flash (1M context on both) |
| dsclaude-desktop.ps1 | Claude Desktop (GUI) | Windows (untested) | DeepSeek API (Anthropic-compatible endpoint) | same as above |
| skills/deepseek-vision | skill (any agent that loads SKILL.md) | macOS / Linux | DashScope (Anthropic / OpenAI-compatible) | qwen3.6-flash (default vision) |
| dsvision-mcp | MCP server (Claude Desktop / Cowork / any MCP client) | macOS / Linux | DashScope | qwen3.6-flash (default vision) |
dsclaude exposes the alternate model in Claude Code's /model picker so you can hot-swap mid-session, sets ANTHROPIC_DEFAULT_HAIKU_MODEL so background/cheap tasks route to the fast model, and honors optional env overrides for context window and output token limits.
dsclaude-desktop is a one-command configurator for Claude Desktop's built-in Configure Third-Party Inference feature (Developer menu). It writes the same config that the dialog would write, pre-filled for DeepSeek, and restarts the app. Claude Desktop's launch chooser handles switching back to Anthropic mode natively, so there's no --revert flag.
Quick start
git clone https://github.com/Agents365-ai/dsclaude.git
cd dsclaude
That's it: the bash scripts ship with the executable bit set, so no chmod is needed. Each tool has its own usage section below.
To make dsclaude (Claude Code launcher) globally available:
sudo cp dsclaude /usr/local/bin/
The other tools (dsclaude-desktop, skills/deepseek-vision/analyze-image) reference their own paths or directories, so leave them in the repo.
dsclaude
Follows the official DeepSeek guide: Integrate with Coding Agents / Anthropic API.
export DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxxxx # add this line to ~/.zshrc or ~/.bashrc
dsclaude # start on deepseek-v4-pro (default, full reasoning)
dsclaude fast # start on deepseek-v4-flash[1m] (cheaper / faster)
dsclaude long # request a 1M context window (1,048,576 tokens)
dsclaude long fast # 1M + flash
Sets the DeepSeek-recommended env vars under the hood: ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic, Opus/Sonnet/Haiku model mappings, CLAUDE_CODE_SUBAGENT_MODEL, and CLAUDE_CODE_EFFORT_LEVEL=max (override via DSCLAUDE_EFFORT).
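Those exports can be sketched roughly as follows. This is a minimal sketch, not the script's actual code: the base URL, the Haiku/subagent/effort variable names, and the DSCLAUDE_EFFORT override are taken from the description above, while the exact names of the other model-slot variables are illustrative guesses.

```shell
# Hedged sketch of the environment dsclaude sets before exec'ing claude.
# Variable names besides those listed in the README are illustrative.
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"
export ANTHROPIC_MODEL="deepseek-v4-pro[1m]"                  # main model (default mode)
export ANTHROPIC_DEFAULT_HAIKU_MODEL="deepseek-v4-flash[1m]"  # cheap/background tasks
export CLAUDE_CODE_SUBAGENT_MODEL="deepseek-v4-flash[1m]"
export CLAUDE_CODE_EFFORT_LEVEL="${DSCLAUDE_EFFORT:-max}"     # override via DSCLAUDE_EFFORT
```

The `${DSCLAUDE_EFFORT:-max}` expansion is what makes the documented override work: an exported DSCLAUDE_EFFORT wins, otherwise the effort level defaults to max.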
In-session: switch with /model deepseek-v4-flash[1m] or /model deepseek-v4-pro[1m].
Note: Both deepseek-v4-pro and deepseek-v4-flash natively support a 1M-token context window. In Claude Code, the [1m] suffix is required on each model name to enable it (deepseek-v4-pro[1m], deepseek-v4-flash[1m]). dsclaude sets this for you.
mmclaude
Follows the official Xiaomi MiMo Claude Code configuration guide. One model (mimo-v2.5-pro), no fast/long aliases.
export MIMO_API_KEY=sk-xxxxxxxxxxxxxxxxxx # pay-as-you-go (add to ~/.zshrc)
# or
export MIMO_API_KEY=tp-xxxxxxxxxxxxxxxxxx # Token Plan
mmclaude # start on mimo-v2.5-pro
mmclaude update # git pull this repo
The base URL is auto-detected from the key prefix (sk-* → https://api.xiaomimimo.com/anthropic, tp-* → https://token-plan-cn.xiaomimimo.com/anthropic). Token Plan subscribers with a custom subdomain can override with MIMO_BASE_URL=.... Sets all four model slots (main / Opus / Sonnet / Haiku) to mimo-v2.5-pro and unsets ANTHROPIC_API_KEY per the MiMo docs (lingering official credentials shadow the bearer token).
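The prefix detection amounts to a case statement on the key. A sketch under stated assumptions (mimo_base_url is a hypothetical helper name, not necessarily what the script calls it):

```shell
# Hypothetical sketch of the key-prefix -> base-URL auto-detection.
mimo_base_url() {
  case "$1" in
    sk-*) echo "https://api.xiaomimimo.com/anthropic" ;;           # pay-as-you-go
    tp-*) echo "https://token-plan-cn.xiaomimimo.com/anthropic" ;; # Token Plan
    *)    echo "unrecognized MIMO_API_KEY prefix" >&2; return 1 ;;
  esac
}

MIMO_API_KEY="${MIMO_API_KEY:-sk-example}"   # example key for this sketch
# MIMO_BASE_URL, if set, wins over auto-detection (custom Token Plan subdomains).
BASE_URL="${MIMO_BASE_URL:-$(mimo_base_url "$MIMO_API_KEY")}"
```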
dsclaude-desktop
A one-command configurator for Claude Desktop's built-in third-party inference feature, pre-filled for DeepSeek.
This is not a hack or workaround. Anthropic ships a "Configure Third-Party Inference" dialog inside Claude Desktop (Developer menu) where you can manually point the app at any Anthropic-compatible endpoint. The dialog has six required fields and a model list. dsclaude-desktop writes the same JSON config that the dialog would write, then restarts the app, saving you the menu navigation.
Prerequisites
- Claude Desktop installed (download from claude.ai/download)
- Developer Mode enabled in Claude Desktop
  - Help → Troubleshooting → Enable Developer Mode
  - Only needs to be done once. The script verifies this on each run.
- DeepSeek API Key in $DEEPSEEK_API_KEY, your shell rc, or pasted at the prompt
Usage
export DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxxxx # or add to ~/.zshrc
./dsclaude-desktop # configure and restart Claude Desktop
./dsclaude-desktop -h # help
What it does:
- Generates an entry under ~/Library/Application Support/Claude-3p/configLibrary/ with your DeepSeek key, base URL https://api.deepseek.com/anthropic, auth scheme bearer, and deepseek-v4-pro + deepseek-v4-flash (1M context) as the model list
- Sets appliedId to your entry in _meta.json (existing entries are preserved)
- Restarts Claude Desktop with killall Claude && open -a Claude
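The generated entry presumably looks something like the following. This is a hedged sketch only: the directory path and the baseUrl/bearer/model values come from the description above, but the JSON field names are illustrative guesses, not the dialog's verified schema.

```shell
# Illustrative sketch of writing a config entry; field names are guesses
# at the "Configure Third-Party Inference" dialog's JSON schema.
CONFIG_DIR="$HOME/Library/Application Support/Claude-3p/configLibrary"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/deepseek.json" <<EOF
{
  "baseUrl": "https://api.deepseek.com/anthropic",
  "authScheme": "bearer",
  "apiKey": "$DEEPSEEK_API_KEY",
  "models": ["deepseek-v4-pro", "deepseek-v4-flash"]
}
EOF
python3 -m json.tool "$CONFIG_DIR/deepseek.json" > /dev/null  # sanity check: parses as JSON
```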
Switching modes
Claude Desktop's launch chooser handles mode switching natively; no --revert flag is needed:
Even on the Anthropic sign-in page you can swap back to Gateway mode.
To switch: click your profile in Claude Desktop → Disconnect (or sign out) → at next launch, pick the other option.
What you get
In Gateway mode the Cowork and Code modes route to DeepSeek. The model picker shows your masked DeepSeek models:
One feature is unavailable: classic Chat (claude.ai-style conversation). Chat depends on Anthropic-hosted features (memory, projects, artifacts, web search) that aren't part of the inference API surface. To use Chat, switch back to Anthropic mode via the launch chooser.
Windows
dsclaude-desktop.ps1 is the PowerShell port. Same JSON schema, same flow:
$env:DEEPSEEK_API_KEY = "sk-xxxxxxxxxxxxxxxxxx"
pwsh ./dsclaude-desktop.ps1
Prerequisites mirror the macOS version: Claude Desktop installed, Developer Mode enabled, DeepSeek API key. The script writes to %APPDATA%\Claude-3p\configLibrary\ instead of ~/Library/Application Support/Claude-3p/configLibrary/.
Untested by the maintainer. The schema and gotchas were discovered on macOS; Anthropic ships the same Electron app on Windows so they should hold, but please open an issue if anything misbehaves.
deepseek-vision skill
A skill that gives any agent (especially text-only ones like DeepSeek) the ability to "see" images. When the agent encounters an image (file path or URL), it calls skills/deepseek-vision/analyze-image, which sends the image to Qwen3.6-Flash (DashScope) and returns a text description the agent can reason over.
export DASHSCOPE_API_KEY=sk-xxxxxxxxxxxxxxxxxx
# Local file:
./skills/deepseek-vision/analyze-image /path/to/screenshot.png "What error is shown?"
# Or an http(s) URL, passed through directly, no download needed:
./skills/deepseek-vision/analyze-image https://example.com/diagram.png
Loaded by any agent that reads SKILL.md files (Claude Code, Cowork, etc.). Default model is qwen3.6-flash; override via DSVISION_MODEL=qwen3.6-plus for higher quality, or DSVISION_BASE_URL=... for a different provider (Xiaomi MiMo-VL via SiliconFlow is one swap away).
Hardening: 10MB image cap with clear error, 60s curl timeout, empty-response detection, exits non-zero with a stderr message on any failure.
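The pre-flight checks could look roughly like this. A sketch, not the script's actual code: check_image is a hypothetical helper name, and only the 10MB cap, 60s timeout, and empty-response behavior are taken from the description above.

```shell
# Hypothetical sketch of analyze-image's hardening checks.
MAX_BYTES=$((10 * 1024 * 1024))  # 10MB cap per the README

check_image() {
  local f="$1"
  [ -f "$f" ] || { echo "error: image not found: $f" >&2; return 1; }
  local size
  size=$(wc -c < "$f")
  [ "$size" -le "$MAX_BYTES" ] || { echo "error: image exceeds 10MB cap" >&2; return 1; }
}

# The API call itself would then carry the documented timeout and
# empty-response detection, e.g.:
#   response=$(curl --max-time 60 ...) && [ -n "$response" ] || exit 1
```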
Inline-image caveat: this skill needs a file path or URL; it cannot read images that the user drag-drops, pastes, or attaches via Claude Desktop's "+ → Add files or photos" menu. For those, use dsvision-mcp below, which runs outside Cowork's sandbox and auto-finds Claude Code's image cache.
In action: Claude Code (CLI) running on DeepSeek via dsclaude:
User pasted a screenshot and said "explain the image". Claude Code recognized the skill (Skill(deepseek-vision) Successfully loaded skill), called analyze-image with the cached path under ~/.claude/image-cache/, and returned an accurate description of the Claude Code startup screen.
dsvision-mcp
A small MCP server that does the same job as the deepseek-vision skill, but bypasses two limitations the skill hits inside Cowork:
- Sandbox network egress. Cowork's VM only allows outbound traffic to *.anthropic.com / *.claude.com, so a bash skill calling dashscope.aliyuncs.com is firewalled. The MCP server runs as a Claude Desktop child process (outside the VM) and bypasses the egress filter.
- Inline images. Claude Code caches every attached/pasted image to ~/.claude/image-cache/<session-uuid>/N.png on the host filesystem. The MCP server reads from there directly when the agent calls analyze_image() with no path: it auto-picks the most recent cached image, so drag-drop, "+ → Add files or photos", and paste workflows now just work. (macOS only. On Windows Cowork, inline images are not cached to disk; pass the file path explicitly instead.)
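The auto-pick step amounts to "newest file under the cache directory". Sketched here in shell for illustration (the server itself is Python, and latest_cached_image is a hypothetical name):

```shell
# Hypothetical sketch: pick the most recent image in Claude Code's cache.
# The ~/.claude/image-cache/<session-uuid>/N.png layout is per the README.
latest_cached_image() {
  local cache="${1:-$HOME/.claude/image-cache}"
  # ls -t sorts by mtime, newest first; 2>/dev/null covers an empty cache
  ls -t "$cache"/*/*.png 2>/dev/null | head -n 1
}
```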
Prerequisites
- Python 3.8+ (macOS ships Python 3; check with python3 --version)
- DASHSCOPE_API_KEY, from DashScope (Alibaba Cloud account required)
Install
# 1. Install Python dependencies
pip3 install fastmcp requests
# 2. Set your API key (add this line to ~/.zshrc so it survives restarts)
export DASHSCOPE_API_KEY=sk-xxxxxxxxxxxxxxxxxx
# 3. Get the absolute path to dsvision-mcp
cd /path/to/dsclaude && pwd # e.g. /Users/niehu/github/dsclaude
Then add to your Claude Desktop MCP config. Pick the right file for your mode:
| Mode | Config file |
|---|---|
| 3P / Gateway (DeepSeek via dsclaude-desktop) | ~/Library/Application Support/Claude-3p/claude_desktop_config.json |
| Standard Anthropic mode | ~/Library/Application Support/Claude/claude_desktop_config.json |
If you use both modes, add the entry to both files. If the file doesn't exist yet, create it:
{
"mcpServers": {
"dsvision": {
"command": "/Users/niehu/github/dsclaude/dsvision-mcp"
}
}
}
Replace /Users/niehu/github/dsclaude/dsvision-mcp with the path you got from step 3 above.
Restart Claude Desktop. The analyze_image tool will appear to the agent automatically.
Verify it's working
Open Claude Desktop and ask the agent to "analyze the most recent image in cache." If dsvision is connected, you'll see it in the Connectors panel (Cowork) or as a tool call in the message stream (Code mode). If the tool doesn't appear, check Claude → Settings → Developer → MCP Logs for startup errors.
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| Tool doesn't appear | Wrong config file | Double-check which mode you're in and use the matching path from the table above |
| Tool doesn't appear | Invalid JSON | Validate with python3 -m json.tool <config-file> |
| Tool appears but errors | DASHSCOPE_API_KEY not set | The server reads env + ~/.zshrc. Make sure the key is exported in ~/.zshrc and restart Claude Desktop |
| ModuleNotFoundError: fastmcp | Wrong Python | Use pip3, not pip; Claude Desktop runs the system Python 3 |
| Image not found | Path is relative | Always pass an absolute path, or leave empty for auto-detect |
| Image not found (Windows Cowork) | Inline images not cached on Windows | Windows Cowork (3P Gateway) does not cache pasted/dragged images to ~/.claude/image-cache/. Auto-detect won't work; save the image to disk first and pass the full path: analyze_image(image_path="C:\Users\...\screenshot.png") |
Usage from the agent's perspective
analyze_image() # auto: latest image in ~/.claude/image-cache/
analyze_image(image_path="/abs/path/to/foo.png")
analyze_image(focus="What error is shown?") # custom prompt
In action: both surfaces of Claude Desktop 3P, running on DeepSeek:
Cowork mode (task agent; dsvision visible in the Connectors panel):
Claude Code in Desktop (Code mode; "Used analyze image" tool call inline):
In both cases the user attached an image and said "explain the image". The DeepSeek agent invoked analyze_image, the MCP server fetched the cached image from ~/.claude/image-cache/, sent it to Qwen3.6-Flash, and returned an accurate description back into the conversation, including details Qwen read off the image such as the model name deepseek-v4-pro[1m] and the working directory.
When to pick which (TL;DR):
| Scenario | Use |
|---|---|
| Claude Code (CLI), explicit paths | skills/deepseek-vision (zero deps, simpler) |
| Cowork / Claude Desktop with inline images | dsvision-mcp (only thing that works) |
| Cowork with explicit path + not minding sandbox tweaks | either |
Why a skill instead of an MCP server: zero new dependencies (just bash + curl + jq), no daemon process, and a single markdown + bash file you can read in 2 minutes.
License
MIT
Support
If these scripts save you time, consider supporting the author:
WeChat Pay | Alipay | Buy Me a Coffee | Give a Reward
Author
Agents365-ai
- Bilibili: https://space.bilibili.com/441831884
- GitHub: https://github.com/Agents365-ai