📦 YALMR – Yet Another LLM Runtime
Run local GGUF models in .NET 10 via llama.cpp. Features streaming generation, tool calling, vision (multimodal), conversation compaction, MCP integration, and a multi-model server.
0 installs
Trust: 34 – Low
Documentation
No README available
