vllm-mlx
An OpenAI- and Anthropic-compatible inference server for Apple Silicon. Runs LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support on a native MLX backend at 400+ tok/s. Works with Claude Code.
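
Because the server advertises an OpenAI-compatible API, existing OpenAI client code should work by pointing the base URL at the local instance. The following is a minimal sketch; the port, the lack of an API key, and the model name are assumptions for illustration, not values documented by this listing:

from openai import OpenAI

# Point the standard OpenAI Python client at a locally running vllm-mlx server.
# base_url and api_key are assumptions; the server may not require a key at all.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # illustrative model name
    messages=[{"role": "user", "content": "Explain continuous batching in one sentence."}],
)
print(response.choices[0].message.content)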
Quick Install
npx vllm-mlx
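
To use the server with Claude Code, it can be registered in the standard mcpServers configuration format. A minimal sketch is below; the entry name and the assumption that the Quick Install command also launches the server are illustrative, not confirmed by this listing:

{
  "mcpServers": {
    "vllm-mlx": {
      "command": "npx",
      "args": ["vllm-mlx"]
    }
  }
}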

No README available for this server yet.

No reviews yet.