vMLX
Local AI Engine for Apple Silicon
Run LLMs, VLMs, and image generation models entirely on your Mac.
OpenAI + Anthropic + Ollama compatible API. No cloud. No API keys. No data leaving your machine.
vMLX v2 (Swift) is the recommended desktop app. Native Swift + Metal, 50-95 t/s on M-series. Pure SwiftUI, zero PyTorch in the hot path.
This Python repo (main branch) keeps shipping for users who run the engine standalone via pip install vmlx or who want the legacy Electron panel.
Quickstart • Models • Features • Image Gen • API • Desktop App • JANG • CLI • Config • Contributing • 한국어
JANG 2-bit destroys MLX 4-bit on MiniMax M2.5:
| Quantization | MMLU (200q) | Size |
|---|---|---|
| JANG_2L (2-bit) | 74% | 89 GB |
| MLX 4-bit | 26.5% | 120 GB |
| MLX 3-bit | 24.5% | 93 GB |
| MLX 2-bit | 25% | 68 GB |
Adaptive mixed-precision keeps critical layers at higher precision. Scores at jangq.ai. Models at JANGQ-AI.
Screenshots: chat with any MLX model -- thinking mode, streaming, and syntax highlighting; agentic chat with full coding capabilities -- tool use and structured output.
Quickstart
Install from PyPI
Published on PyPI as vmlx -- install and run in one command:
# Recommended: uv (fast, no venv hassle)
brew install uv
uv tool install vmlx
vmlx serve mlx-community/Qwen3-8B-4bit
# Or: pipx (isolates from system Python)
brew install pipx
pipx install vmlx
vmlx serve mlx-community/Qwen3-8B-4bit
# Or: pip in a virtual environment
python3 -m venv ~/.vmlx-env && source ~/.vmlx-env/bin/activate
pip install vmlx
vmlx serve mlx-community/Qwen3-8B-4bit
Note: On macOS 14+, bare pip install fails with "externally-managed-environment". Use uv, pipx, or a venv.
Your local AI server is now running at http://localhost:8000 with an OpenAI + Anthropic compatible API. Works with any model from mlx-community -- thousands of models ready to go.
Or download the desktop app
Get MLX Studio -- a native macOS app with chat UI, model management, image generation, and developer tools. No terminal required. Just download the DMG and drag to Applications.
Use with OpenAI SDK
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="local",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Use with Anthropic SDK
import anthropic
client = anthropic.Anthropic(base_url="http://localhost:8000", api_key="not-needed")
message = client.messages.create(
    model="local",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
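Streaming works over the same endpoint; a minimal sketch using the SDK's streaming helper, assuming the server implements Anthropic's streaming events (the Messages API advertises streaming support):

with client.messages.stream(
    model="local",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)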
Use with curl
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": true
}'
Model Support
vMLX runs any MLX model. Point it at a HuggingFace repo or local path and go.
| Type | Models |
|---|---|
| Text LLMs | Qwen 2/2.5/3/3.5/3.6, Llama 3/3.1/3.2/3.3/4, Mistral/Mixtral, Mistral-Medium-3.5 (ministral3), Mistral-Small-4, Gemma 3/4, Phi-4, DeepSeek V2/V3/V4, GLM-4/5, MiniMax M2.5/M2.7, Nemotron, Laguna (poolside), ZAYA (CCA + MoE), Kimi K2.5/K2.6, StepFun, and any mlx-lm model |
| Vision LLMs | Qwen-VL, Qwen3.5-VL / Qwen3.6-VL, Pixtral, InternVL, LLaVA, Gemma 3n / 4-VL, Mistral-Medium-3.5 (PIXTRAL) |
| Multimodal Omni | Nemotron-3-Nano-Omni (text + image + audio + video) -- Parakeet audio encoder + RADIO ViT vision tower; routed via OmniMultimodalDispatcher across /v1/chat/completions, /v1/messages, /v1/responses, /api/chat |
| MoE Models | Qwen 3.5/3.6 MoE (A3B/A10B), Mixtral, DeepSeek V2/V3/V4, MiniMax M2.5/M2.7, Llama 4, Laguna (256 routed experts top-8) |
| Hybrid SSM | Nemotron-H, Jamba, GatedDeltaNet (Mamba + Attention), Qwen3.5-A3B hybrid, Granite MoE Hybrid, LFM2 |
| Image Gen | Flux Schnell/Dev, Z-Image Turbo (via mflux) |
| Image Edit | Qwen Image Edit (via mflux) |
| Embeddings | Any mlx-lm compatible embedding model |
| Reranking | Cross-encoder reranking models |
| Audio | Kokoro TTS, Whisper STT (via mlx-audio) |
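Vision models take images through the standard OpenAI chat format. A minimal sketch, assuming the server accepts base64 data URLs in image_url content parts as the OpenAI API does:

import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Encode a local image as a data URL
with open("photo.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="local",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)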
Features
Inference Engine
| Feature | Description |
|---|---|
| Continuous Batching | Handle multiple concurrent requests efficiently |
| Prefix Cache | Reuse KV states for repeated prompts -- makes follow-up messages instant |
| Paged KV Cache | Block-based caching with content-addressable deduplication |
| KV Cache Quantization | Compress cached states to q4/q8 for 2-4x memory savings |
| Disk Cache (L2) | Persist prompt caches to SSD -- survives server restarts |
| Block Disk Cache | Per-block persistent cache paired with paged KV cache |
| Speculative Decoding | Small draft model proposes tokens for 20-90% speedup |
| Prompt Lookup Decoding | No draft model needed -- reuses n-gram matches from the prompt/context. Best for structured or repetitive output (code, JSON, schemas). Enable with --enable-pld. |
| JIT Compilation | mx.compile Metal kernel fusion (experimental) |
| Hybrid SSM Support | Mamba/GatedDeltaNet layers handled correctly alongside attention |
| Distributed Compute | Pipeline parallelism across multiple Macs via Thunderbolt 5 / Ethernet / WiFi |
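These features compose; a typical invocation enabling batching plus the cache stack (each flag is documented under Configuration below):

vmlx serve mlx-community/Qwen3-8B-4bit \
  --continuous-batching \
  --enable-prefix-cache \
  --use-paged-cache \
  --kv-cache-quantization q8 \
  --enable-disk-cache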
Distributed Inference (Multi-Mac)
Run models too large for a single Mac across 2+ machines. Each Mac loads a subset of transformer layers and they communicate hidden states over the network.
# On worker Macs:
pip install vmlx
vmlx-worker --secret mysecret
# On coordinator Mac (runs the server):
vmlx serve JANGQ-AI/Qwen3.5-Coder-Rerank-397B-A27B-JANG_2L --distributed --cluster-secret mysecret
| Feature | Description |
|---|---|
| Pipeline Parallelism | Split layers across nodes -- hidden state (~8KB/step) flows sequentially |
| Auto-Discovery | Bonjour mDNS, UDP broadcast, HTTP probes, Tailscale, cached peers, manual IP |
| Capability-Scored Election | Most powerful Mac becomes coordinator automatically |
| Any Network Works | TB5 (120 Gbps), 10GbE, 1GbE, WiFi, Tailscale -- PP is not bandwidth-bound |
| JANG Support | Each worker loads its layer range from JANG safetensors (mmap) |
| Live Node List | Desktop app shows discovered nodes, link type, latency, layer assignments |
| Cluster API | /v1/cluster/status, /v1/cluster/nodes, /v1/cluster/scan REST endpoints |
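Cluster state is inspectable over REST once the nodes are up, e.g. from the coordinator:

# Cluster topology and discovered nodes
curl http://localhost:8000/v1/cluster/status
curl http://localhost:8000/v1/cluster/nodes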
5-Layer Cache Architecture
Request -> Tokens
|
L1: Memory-Aware Prefix Cache (or Paged Cache)
| miss
L2: Disk Cache (or Block Disk Store)
| miss
Inference -> float16 KV states
|
KV Quantization -> q4/q8 for storage
|
Store back into L1 + L2
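Cache behavior is observable at runtime via the stats endpoint (also listed under Endpoints); response fields are version-dependent, but this is the quickest way to see whether L1/L2 are being hit:

curl http://localhost:8000/v1/cache/stats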
Tool Calling
Auto-detected parsers for every major model family:
qwen - llama - mistral - hermes - deepseek - glm47 - minimax - nemotron - granite - functionary - xlam - kimi - step3p5
Reasoning / Thinking Mode
Auto-detected reasoning parsers that extract <think> blocks:
qwen3 (Qwen3, QwQ, MiniMax, StepFun) - deepseek_r1 (DeepSeek R1, Gemma 3, GLM, Phi-4) - openai_gptoss (GLM Flash, GPT-OSS)
Audio
| Feature | Description |
|---|---|
| Text-to-Speech | Kokoro TTS via mlx-audio -- multiple voices, streaming output |
| Speech-to-Text | Whisper STT via mlx-audio -- transcription and translation |
Image Generation & Editing
Generate and edit images locally with Flux models via mflux.
pip install vmlx[image]
# Image generation
vmlx serve schnell # or dev, z-image-turbo
vmlx serve ~/.mlxstudio/models/image/flux1-schnell-4bit
# Image editing
vmlx serve qwen-image-edit # instruction-based editing
Generation API
curl http://localhost:8000/v1/images/generations \
-H "Content-Type: application/json" \
-d '{
"model": "schnell",
"prompt": "A cat astronaut floating in space with Earth in the background",
"size": "1024x1024",
"n": 1
}'
# Python (OpenAI SDK)
response = client.images.generate(
    model="schnell",
    prompt="A cat astronaut floating in space",
    size="1024x1024",
    n=1,
)
Editing API
# Edit an image with a text prompt (Flux Kontext / Qwen Image Edit)
curl http://localhost:8000/v1/images/edits \
-H "Content-Type: application/json" \
-d '{
"model": "flux-kontext",
"prompt": "Change the background to a sunset",
"image": "<base64-encoded-image>",
"size": "1024x1024",
"strength": 0.8
}'
# Python
import base64
import requests

with open("source.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post("http://localhost:8000/v1/images/edits", json={
    "model": "flux-kontext",
    "prompt": "Make the sky purple",
    "image": image_b64,
    "size": "1024x1024",
    "strength": 0.8,
})
Supported Image Models
Generation Models:
| Model | Steps | Speed | Memory |
|---|---|---|---|
| Flux Schnell | 4 | Fastest | ~6-24 GB |
| Z-Image Turbo | 4 | Fast | ~6-24 GB |
| Flux Dev | 20 | Slow | ~6-24 GB |
Editing Models:
| Model | Steps | Type | Memory |
|---|---|---|---|
| Qwen Image Edit | 28 | Instruction-based editing | ~54 GB |
API Reference
API Gateway
The desktop app runs an API Gateway on a single port (default 8080) that routes requests to all loaded models by name. Run multiple models simultaneously and access them all through one URL.
# All models accessible through the gateway
curl http://localhost:8080/v1/chat/completions \
-d '{"model": "Qwen3.5-122B", "messages": [{"role": "user", "content": "Hi"}]}'
# Works with Ollama CLI too
OLLAMA_HOST=http://localhost:8080 ollama run Qwen3.5-122B
The gateway supports OpenAI, Anthropic, and Ollama wire formats. Configure the port in the API tab.
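Because the gateway speaks the OpenAI wire format on one port, SDK clients only need the gateway URL. A sketch, assuming a model session named Qwen3.5-122B is loaded:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="Qwen3.5-122B",  # routed by name to the matching session
    messages=[{"role": "user", "content": "Hi"}],
)
print(resp.choices[0].message.content)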
Endpoints
OpenAI / Anthropic
| Method | Path | Description |
|---|---|---|
| POST | /v1/chat/completions | OpenAI Chat Completions API (streaming + non-streaming) |
| POST | /v1/messages | Anthropic Messages API |
| POST | /v1/responses | OpenAI Responses API |
| POST | /v1/completions | Text completions |
| POST | /v1/images/generations | Image generation |
| POST | /v1/images/edits | Image editing (Qwen Image Edit) |
| POST | /v1/embeddings | Text embeddings |
| POST | /v1/rerank | Document reranking |
| POST | /v1/audio/transcriptions | Speech-to-text (Whisper) |
| POST | /v1/audio/speech | Text-to-speech (Kokoro) |
| GET | /v1/models | List loaded models |
| GET | /v1/cache/stats | Cache statistics |
| GET | /health | Server health check |
Ollama
| Method | Path | Description |
|---|---|---|
| POST | /api/chat | Chat completion (NDJSON streaming) |
| POST | /api/generate | Text generation (NDJSON streaming) |
| GET | /api/tags | List loaded models |
| POST | /api/show | Model details |
| POST | /api/embeddings | Generate embeddings |
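A minimal request in the Ollama wire format (model naming follows the server examples above; set "stream": true for NDJSON chunks):

curl http://localhost:8000/api/chat \
  -d '{
    "model": "local",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'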
curl Examples
Chat completion (streaming)
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"messages": [{"role": "user", "content": "Explain quantum computing in 3 sentences."}],
"stream": true,
"temperature": 0.7
}'
Chat completion with thinking mode
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"messages": [{"role": "user", "content": "Solve: what is 23 * 47?"}],
"enable_thinking": true,
"stream": true
}'
Tool calling
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
"tools": [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"}
},
"required": ["location"]
}
}
}]
}'
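The response carries tool_calls in OpenAI format; your code runs the function and hands the result back for a final answer. A sketch of the round trip -- the get_weather stub and tools schema mirror the curl example above and are yours to replace:

import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def get_weather(location: str) -> dict:
    return {"location": location, "temp_c": 21}  # stub -- replace with a real lookup

tools = [{"type": "function", "function": {
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {"type": "object",
                   "properties": {"location": {"type": "string"}},
                   "required": ["location"]}}}]

messages = [{"role": "user", "content": "What is the weather in Tokyo?"}]
first = client.chat.completions.create(model="local", messages=messages, tools=tools)

call = first.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)  # e.g. {"location": "Tokyo"}

# Run the tool locally, then feed the result back to the model
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id,
                 "content": json.dumps(get_weather(**args))})

final = client.chat.completions.create(model="local", messages=messages, tools=tools)
print(final.choices[0].message.content)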
Anthropic Messages API
curl http://localhost:8000/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: not-needed" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "local",
"max_tokens": 1024,
"messages": [{"role": "user", "content": "Hello!"}]
}'
Embeddings
curl http://localhost:8000/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"input": "The quick brown fox jumps over the lazy dog"
}'
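The same endpoint works through the OpenAI SDK:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
embedding = client.embeddings.create(
    model="local",
    input="The quick brown fox jumps over the lazy dog",
).data[0].embedding  # list of floats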
Text-to-speech
curl http://localhost:8000/v1/audio/speech \
-H "Content-Type: application/json" \
-d '{
"model": "kokoro",
"input": "Hello, welcome to vMLX!",
"voice": "af_heart"
}' --output speech.wav
Speech-to-text
curl http://localhost:8000/v1/audio/transcriptions \
-F file=@audio.wav \
-F model=whisper
Image generation
curl http://localhost:8000/v1/images/generations \
-H "Content-Type: application/json" \
-d '{
"model": "schnell",
"prompt": "A mountain landscape at sunset",
"size": "1024x1024"
}'
Reranking
curl http://localhost:8000/v1/rerank \
-H "Content-Type: application/json" \
-d '{
"model": "local",
"query": "What is machine learning?",
"documents": [
"ML is a subset of AI",
"The weather is sunny today",
"Neural networks learn from data"
]
}'
Cache stats
curl http://localhost:8000/v1/cache/stats
Health check
curl http://localhost:8000/health
Desktop App
vMLX includes a native macOS desktop app (MLX Studio) with 5 modes:
| Mode | Description |
|---|---|
| Chat | Conversation interface with chat history, thinking mode, tool calling, agentic coding |
| Server | Manage model sessions -- start, stop, configure, monitor |
| Image | Text-to-image generation and image editing with Flux, Kontext, Qwen, and Fill models |
| Tools | Model converter, GGUF-to-MLX, inspector, diagnostics |
| API | Live endpoint reference with copy-pasteable code snippets |
Screenshots: image generation and editing with Flux models; developer tools -- model conversion and diagnostics; Anthropic Messages API endpoint -- full compatibility; GGUF to MLX conversion -- bring your own models.
Download
Get the latest DMG from MLX Studio Releases, or build from source:
git clone https://github.com/jjang-ai/vmlx.git
cd vmlx/panel
npm install && npm run build
npx electron-builder --mac dmg
Menu Bar
vMLX lives in your menu bar showing all running models, GPU memory usage, and quick controls.
Advanced Quantization
vMLX supports standard MLX quantization (4-bit, 8-bit uniform) out of the box. For users who want to push further, JANG adaptive mixed-precision assigns different bit widths to different layer types -- attention gets more bits, MLP layers get fewer -- achieving better quality at the same model size.
JANG Profiles
| Profile | Attention | Embeddings | MLP | Avg Bits | Use Case |
|---|---|---|---|---|---|
| JANG_2M | 8-bit | 4-bit | 2-bit | ~2.5 | Balanced compression |
| JANG_2L | 8-bit | 6-bit | 2-bit | ~2.7 | Quality 2-bit |
| JANG_3M | 8-bit | 3-bit | 3-bit | ~3.2 | Recommended |
| JANG_4M | 8-bit | 4-bit | 4-bit | ~4.2 | Standard quality |
| JANG_6M | 8-bit | 6-bit | 6-bit | ~6.2 | Near lossless |
Convert
pip install vmlx[jang]
# Standard MLX quantization
vmlx convert my-model --bits 4
# JANG adaptive quantization
vmlx convert my-model --jang-profile JANG_3M
# Activation-aware calibration (better at 2-3 bit)
vmlx convert my-model --jang-profile JANG_2L --calibration-method activations
# Serve the converted model
vmlx serve ./my-model-JANG_3M --continuous-batching --use-paged-cache
Pre-quantized JANG models are available at JANGQ-AI on HuggingFace.
Smelt Mode (Partial Expert Loading)
For MoE models that don't fit in RAM, Smelt loads only a subset of experts per layer from SSD and keeps the backbone resident. Routing is biased toward the resident experts, so response quality stays coherent while RAM usage drops. Trade-off: the smaller the loaded expert fraction, the lower the throughput, because expert swaps hit SSD on the hot path.
# Load 50% of experts per layer (default when --smelt alone)
vmlx serve ./MyMoE-JANG_4M --smelt --smelt-experts 50
# Aggressive: load 25% -- smallest RAM, slowest
vmlx serve ./MyMoE-JANG_4M --smelt --smelt-experts 25
Benchmarks on Nemotron-Cascade-2-30B-A3B-JANG_4M (23 MoE layers × 128 experts, Apple M3 Ultra / 128 GB, dedicated machine, no parallel models):
| --smelt-experts | Active RAM | Decode tok/s | RAM saving | Coherent |
|---|---|---|---|---|
| off (baseline) | 17,408 MB | 89.9 | -- | ✓ |
| 50 | 9,529 MB | 66.5 | -45% | ✓ |
| 25 | 5,590 MB | * | -68% | ✓ |
* Responses too short (2-5 tokens) for reliable steady-state tok/s measurement at 25%. Subjectively responsive.
All three configurations produced coherent, non-looping output. No quality degradation observed -- routing bias keeps the model on-topic.
Credit: Smelt mode is inspired by Anemll's flash-moe -- a pure C / Objective-C / Metal inference engine that showed huge MoE models (Qwen3.5-397B) can run on modest Apple Silicon hardware by streaming expert weights from SSD with pread() on demand. vMLX Smelt takes a different implementation path: Python/MLX, tied to the JANG quantization format, and loading a fixed subset of experts per layer at startup (backbone resident, routing biased toward the loaded subset) rather than on-demand per-token. It plugs into the full vMLX server with continuous batching, paged cache, and OpenAI-compatible API. Different engine, same core insight -- thanks to the flash-moe team for validating the approach.
Smelt is mutually exclusive with VLM mode. vMLX detects smelt and automatically disables --is-mllm (with a warning) because the vision tower is not wired through the partial-expert loader -- image input on a smelt-loaded VLM would produce garbage logits. Use a text-only model when running smelt, or disable smelt when running a VLM.
Smelt requires an MoE model in JANG format. Not compatible with dense models (no experts to partial-load) or with non-JANG formats.
CLI Commands
vmlx serve <model> # Start inference server
vmlx convert <model> --bits 4 # MLX uniform quantization
vmlx convert <model> -j JANG_3M # JANG adaptive quantization
vmlx info <model> # Model metadata and config
vmlx doctor <model> # Run diagnostics
vmlx bench <model> # Performance benchmarks
vmlx-worker --secret <secret> # Start distributed worker node
Configuration
Server Options
vmlx serve <model> \
--host 0.0.0.0 \ # Bind address (default: 0.0.0.0)
--port 8000 \ # Port (default: 8000)
--api-key sk-your-key \ # Optional API key authentication
--continuous-batching \ # Enable concurrent request handling
--enable-prefix-cache \ # Reuse KV states for repeated prompts
--use-paged-cache \ # Block-based KV cache with dedup
--kv-cache-quantization q8 \ # Quantize cache: q4 or q8
--enable-disk-cache \ # Persist cache to SSD
--enable-jit \ # JIT Metal kernel compilation
--tool-call-parser auto \ # Auto-detect tool call format
--reasoning-parser auto \ # Auto-detect thinking format
--log-level INFO \ # Logging: DEBUG, INFO, WARNING, ERROR
--max-model-len 8192 \ # Max context length
--speculative-model <model> \ # Draft model for speculative decoding
--enable-pld \ # Prompt Lookup Decoding -- no draft model, best for code/JSON/schemas
--distributed \ # Enable multi-Mac pipeline parallelism
--cluster-secret <secret> \ # Shared auth secret for workers
--distributed-mode pipeline \ # pipeline (default) or tensor (coming soon)
--worker-nodes ip:port,... \ # Manual worker IPs (overrides auto-discovery)
--cors-origins "*" # CORS allowed origins
Quantization Options
vmlx convert <model> \
--bits 4 \ # Uniform quantization bits: 2, 3, 4, 6, 8
--group-size 64 \ # Quantization group size (default: 64)
--output ./output-dir \ # Output directory
--jang-profile JANG_3M \ # JANG mixed-precision profile
--calibration-method activations # Activation-aware calibration
Image Generation & Editing Options
pip install vmlx[image]
# Generation models
vmlx serve schnell \ # or dev, z-image-turbo
--image-quantize 4 \ # Quantization: 4, 8 (omit for full precision)
--port 8001
# Editing models
vmlx serve qwen-image-edit \ # Instruction-based editing (full precision only)
--port 8001
# Local model directory
vmlx serve ~/.mlxstudio/models/image/FLUX.1-schnell-mflux-4bit
Audio Options
TTS and STT require the mlx-audio package:
pip install mlx-audio
# TTS: serve Kokoro model
vmlx serve kokoro --port 8002
# STT: serve Whisper model
vmlx serve whisper --port 8003
Optional Dependencies
pip install vmlx # Core: text LLMs, VLMs, embeddings, reranking
pip install vmlx[image] # + Image generation (mflux)
pip install vmlx[jang] # + JANG quantization tools
pip install vmlx[dev] # + Development/testing tools
pip install vmlx[image,jang] # Multiple extras
Architecture
+--------------------------------------------+
| Desktop App (Electron) |
| Chat | Server | Image | Tools | API |
+--------------------------------------------+
| Session Manager (TypeScript) |
| Process spawn | Health monitor | Tray |
+--------------------------------------------+
| vMLX Engine (Python / FastAPI) |
| +--------+ +---------+ +-----------+ |
| |Simple | | Batched | | ImageGen | |
| |Engine | | Engine | | Engine | |
| +---+----+ +----+----+ +-----+-----+ |
| | | | |
| +---+------------+--+ +-----+-----+ |
| | mlx-lm / mlx-vlm | | mflux | |
| +--------+-----------+ +-----------+ |
| | |
| +--------+----------------------------+ |
| | MLX Metal GPU Backend | |
| | quantized_matmul | KV cache | SDPA | |
| +--------------------------------------+ |
+--------------------------------------------+
| L1: Prefix Cache (Memory-Aware / Paged) |
| L2: Disk Cache (Persistent / Block Store) |
| KV Quant: q4/q8 at storage boundary |
+--------------------------------------------+
Contributing
Contributions are welcome. Here is how to set up a development environment:
git clone https://github.com/jjang-ai/vmlx.git
cd vmlx
# Python engine
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev,jang,image]"
pytest tests/ -k "not Async" # 2000+ tests
# Electron desktop app
cd panel && npm install
npm run dev # Development mode with hot reload
npx vitest run # 1545+ tests
Project Structure
vmlx/
vmlx_engine/ # Python inference engine (FastAPI server)
panel/ # Electron desktop app (React + TypeScript)
src/main/ # Electron main process
src/renderer/ # React frontend
src/preload/ # IPC bridge
tests/ # Python test suite
assets/ # Screenshots and logos
Guidelines
- Run the full test suite before submitting PRs
- Follow existing code style and patterns
- Include tests for new features
- Update documentation for user-facing changes
License
Apache License 2.0 -- see LICENSE.
Built by Jinho Jang (eric@jangq.ai)
JANGQ AI • PyPI • GitHub • Downloads
한국어 (Korean)
vMLX -- Local AI Engine for Apple Silicon
Run LLMs, VLMs, and image generation and editing models entirely locally on your Mac. OpenAI + Anthropic compatible API. No cloud. No API keys. Your data never leaves your machine.
Quickstart
pip install vmlx
vmlx serve mlx-community/Llama-3.2-3B-Instruct-4bit
Key Features
| Feature | Description |
|---|---|
| Text generation | LLM inference in MLX and JANG formats |
| Vision-language models | Image + text multimodal inference |
| Image generation | Flux Schnell/Dev, Z-Image Turbo (via mflux) |
| Image editing | Qwen Image Edit (text-instruction-based image editing) |
| 5-layer caching | Prefix, paged, KV quantization, disk, and memory-aware caches |
| Continuous batching | Handles multiple concurrent requests |
| Agent tools | 30 built-in tools (files, web search, Git, terminal) |
| OpenAI API | /v1/chat/completions, /v1/images/generations, /v1/images/edits |
| Anthropic API | /v1/messages (streaming, tool calling, system prompts) |
Image Generation
pip install vmlx[image]
vmlx serve schnell # fast generation (4 steps)
vmlx serve dev # high-quality generation (20 steps)
Image Editing
vmlx serve qwen-image-edit # text-instruction-based image editing
# Image editing API
curl http://localhost:8000/v1/images/edits \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-image-edit",
"prompt": "Change the background to a sunset",
"image": "<base64-encoded image>",
"size": "1024x1024",
"strength": 0.8
}'
Desktop App (MLX Studio)
A native macOS desktop app with 5 modes:
| Mode | Description |
|---|---|
| Chat | Conversation interface with chat history, tool calling, and agentic coding |
| Server | Model session management -- start, stop, configure, monitor |
| Image | Text-to-image generation and image editing (Flux, Qwen models) |
| Tools | Model converter, GGUF-to-MLX conversion, diagnostics |
| API | Live endpoint reference and code snippets |
Installation
pip install vmlx # Core: text LLMs, VLMs, embeddings
pip install vmlx[image] # + image generation/editing (mflux)
pip install vmlx[jang] # + JANG quantization tools
pip install vmlx[audio] # + TTS/STT (mlx-audio)
License
Apache License 2.0 -- see LICENSE.
Developer: Jinho Jang (eric@jangq.ai)
JANGQ AI • Support on Ko-fi





