# DesktopAssistant
A desktop AI agent application built with .NET 9 and Avalonia UI. Supports any OpenAI-compatible LLM provider, real-time MCP server installation, conversation branching, multiple simultaneous chats, and recursive sub-agent orchestration.

## Features

### Conversations
- Multiple simultaneous chats – open as many conversations as needed, each running independently
- Conversation branching – fork any message to explore alternative responses, similar to ChatGPT's branching model
- Manual summarization – right-click any message and summarize the preceding context; a `SummaryNode` is inserted into the message tree, keeping the context window lean without losing history
- Structured history reduction – uses a custom `IChatHistoryReducer` that asks the LLM to compact conversation history via a structured `submit_history` tool call, preserving roles, function calls, and function results as proper `ChatMessageContent` objects (compatible with agents using `FunctionChoiceBehavior.Required`); a minimal sketch follows this list
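
For orientation, here is a minimal sketch of what an `IChatHistoryReducer` that compacts history through the LLM can look like. The class name, threshold, and prompt below are illustrative assumptions, not this project's actual implementation, which returns the compacted turns via a structured `submit_history` tool call rather than free-form text.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Illustrative reducer: compacts long histories through the LLM. The real
// implementation receives the compacted turns back via a structured
// submit_history tool call instead of a free-form summary.
public sealed class CompactingHistoryReducer(IChatCompletionService chat, int threshold = 40)
    : IChatHistoryReducer
{
    public async Task<IEnumerable<ChatMessageContent>?> ReduceAsync(
        IReadOnlyList<ChatMessageContent> chatHistory,
        CancellationToken cancellationToken = default)
    {
        // Returning null signals that no reduction was necessary.
        if (chatHistory.Count <= threshold)
        {
            return null;
        }

        // Feed the whole transcript to the model and ask for a compact version.
        var prompt = new ChatHistory(
            "Compact the following conversation, preserving roles, tool calls, and tool results.");
        foreach (var message in chatHistory)
        {
            prompt.Add(message);
        }
        var summary = await chat.GetChatMessageContentAsync(prompt, cancellationToken: cancellationToken);

        // Keep the original system message (if any) and replace the rest.
        var reduced = new List<ChatMessageContent>();
        if (chatHistory.Count > 0 && chatHistory[0].Role == AuthorRole.System)
        {
            reduced.Add(chatHistory[0]);
        }
        reduced.Add(new ChatMessageContent(AuthorRole.Assistant, $"[Summary] {summary.Content}"));
        return reduced;
    }
}
```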

### Sub-Agents
- Recursive sub-agent creation – enable it per conversation; the agent can then spawn child agents to delegate subtasks (the tool surface is sketched after this list)
- Configurable sub-agents – when creating a sub-agent, the parent specifies its assistant profile (model, endpoint, temperature), system prompt, and whether the sub-agent can spawn its own sub-agents
- Task assignment to existing sub-agents – the parent agent can send new tasks to already-created sub-agents, not just create new ones
- Conversation tree – each sub-agent runs as a linked conversation visible in the sidebar, showing its status and relationship to the parent
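
The tool surface the parent model sees can be pictured as a Semantic Kernel plugin along these lines. The tool names match the list above; the signatures and in-memory bookkeeping are assumptions for illustration, not the project's code.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Illustrative shape only; signatures and storage are assumptions.
public sealed class SubagentPluginSketch
{
    // Stand-in for real child sessions: sub-agent id -> system prompt.
    private readonly Dictionary<string, string> _subagents = new();

    [KernelFunction("create_subagent")]
    [Description("Create a child agent with its own profile and system prompt.")]
    public string CreateSubagent(
        [Description("Assistant profile to use (model, endpoint, temperature).")] string profile,
        [Description("System prompt for the child agent.")] string systemPrompt,
        [Description("Whether the child may spawn its own sub-agents.")] bool canSpawnSubagents = false)
    {
        var id = Guid.NewGuid().ToString("N");
        _subagents[id] = systemPrompt;
        // A real implementation would create a linked conversation and a kernel
        // for the chosen profile, registering this same plugin on the child
        // when canSpawnSubagents is true (which is what makes recursion work).
        return id;
    }

    [KernelFunction("list_subagents")]
    [Description("List the ids of sub-agents created in this conversation.")]
    public IReadOnlyCollection<string> ListSubagents() => _subagents.Keys;
}
```

Registered via `kernel.Plugins.AddFromType<SubagentPluginSketch>("subagents")`, these functions become tools the parent model can call.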

### AI & LLM
- OpenAI-compatible providers – works with OpenAI, Azure OpenAI, local models via Ollama or LM Studio, or any other OpenAI-compatible endpoint
- Configurable profiles – multiple assistant profiles with independent model, endpoint, temperature, and token settings
- Per-conversation system prompt – set a custom system prompt for each conversation
- Semantic Kernel – powered by Microsoft Semantic Kernel for LLM orchestration
- Streaming responses – real-time token streaming, as sketched below
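
As a concrete sketch, pointing Semantic Kernel at an OpenAI-compatible endpoint and streaming a reply looks roughly like this. The Ollama URL and model id are placeholders, and the custom-endpoint overload is marked experimental (`SKEXP0010`) in some SK releases.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

#pragma warning disable SKEXP0010 // custom-endpoint overload is experimental in some SK versions
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "llama3.1",                            // placeholder model id
    endpoint: new Uri("http://localhost:11434/v1"), // e.g. a local Ollama server
    apiKey: "unused-for-local-models");
#pragma warning restore SKEXP0010
var kernel = builder.Build();

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful desktop assistant.");
history.AddUserMessage("Hello!");

// Print tokens as they arrive instead of waiting for the full completion.
await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history))
{
    Console.Write(chunk.Content);
}
```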

### MCP (Model Context Protocol)
- Agent-driven installation – the agent has a built-in set of tools that let it install and configure MCP servers on its own, without user involvement
- Install from GitHub – the agent can install any MCP server directly from a GitHub repository URL
- Built-in server catalog – a curated list of popular servers is available to the agent as a knowledge base (Tavily Search, Exa Search, Filesystem, Git, Fetch, Memory, Playwright, Qdrant, MySQL/PostgreSQL/SQLite, Kubernetes, Docker, Sequential Thinking, and more)
- Manual management – users can install and manage servers directly by editing the config file
- No restart required – servers connect at runtime; new tools become available immediately (see the sketch after this list)
- Per-tool auto-approval – configure which tools run automatically without confirmation prompts
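
Mechanically, runtime connection boils down to something like the following sketch with the `ModelContextProtocol` client package; the filesystem server command and path are just examples, and the project's own install/config tooling sits on top of this.

```csharp
using Microsoft.SemanticKernel;
using ModelContextProtocol.Client;

// Connect to an MCP server over stdio at runtime; server command and
// arguments here are placeholders.
var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "filesystem",
    Command = "npx",
    Arguments = ["-y", "@modelcontextprotocol/server-filesystem", @"C:\work"],
});

await using var client = await McpClientFactory.CreateAsync(transport);
var tools = await client.ListToolsAsync();

// McpClientTool derives from AIFunction, so each tool can be wrapped as a
// Semantic Kernel function and becomes callable by the model immediately.
var kernel = Kernel.CreateBuilder().Build();
kernel.Plugins.AddFromFunctions(
    "filesystem",
    tools.Select(tool => tool.AsKernelFunction()));
```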

### Security & Storage
- API keys via Windows DPAPI – credentials encrypted at rest, never stored in plain text (round-trip sketched below)
- SQLite database – local storage for conversations, messages, settings
- No telemetry – all data stays on your machine
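
The DPAPI round-trip itself is small. A minimal sketch using the `System.Security.Cryptography.ProtectedData` package (the Windows-only dependency behind this feature); the helper class is illustrative, with the project's `DpapiCredentialStore` playing this role:

```csharp
using System.Security.Cryptography;
using System.Text;

// Minimal DPAPI round-trip; illustrative stand-in for DpapiCredentialStore.
public static class DpapiSketch
{
    public static byte[] Protect(string apiKey) =>
        ProtectedData.Protect(
            Encoding.UTF8.GetBytes(apiKey),
            optionalEntropy: null,
            scope: DataProtectionScope.CurrentUser); // bound to the Windows user account

    public static string Unprotect(byte[] ciphertext) =>
        Encoding.UTF8.GetString(
            ProtectedData.Unprotect(ciphertext, optionalEntropy: null, scope: DataProtectionScope.CurrentUser));
}
```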

### UI
- Avalonia UI – cross-platform desktop framework with Fluent theme
- System theme detection – follows Windows dark/light mode
- Markdown rendering – assistant responses rendered as rich markdown

## Planned Features
- File support – work with images, documents, and other file types as conversation context
- Extended thinking – support for models with explicit reasoning/thinking steps
- Agent isolation – sandboxed execution environments per agent
- Daemon agents – long-running background agents and sub-agents
- Scheduled tasks – deferred and periodic task execution

## Architecture
Clean Architecture with four layers:
- Domain – entities, value objects, domain interfaces (no external dependencies)
- Application – use cases, application services, DTOs, interface definitions
- Infrastructure – LLM/SK integration, MCP, SQLite persistence, DPAPI security
- UI – Avalonia MVVM presentation layer (CommunityToolkit.Mvvm)
Key infrastructure components:
- `KernelFactory` – creates Semantic Kernel instances per assistant profile
- `AgentKernelFactory` – wraps `KernelFactory`, conditionally registering sub-agent tools and plugins
- `ConversationSession` – encapsulates the LLM turn loop, tool approval, and streaming for a single conversation
- `ConversationSessionService` – singleton session pool keyed by conversation ID (sketched below)
- `SubagentService` – manages sub-agent lifecycle and inter-agent communication
- `SubagentPlugin` – SK plugin exposing `create_subagent`, `send_message_to_subagent`, and `list_subagents` tools to the LLM
- `DpapiCredentialStore` – Windows DPAPI credential store
- `AvailableToolsService` – aggregates static SK plugins and dynamic MCP tools
- `SummarizationExecutor` – orchestrates history compaction via `ChatHistoryCompactionReducer` (see `src/DesktopAssistant.Infrastructure/AI/Summarization/`)
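
As one example of these pieces, a conversation-keyed session pool in the spirit of `ConversationSessionService` can be sketched as below; the generic shape is an assumption, with the real service managing concrete `ConversationSession` objects.

```csharp
using System.Collections.Concurrent;

// Illustrative stand-in for a session pool keyed by conversation id.
public sealed class SessionPool<TSession>(Func<Guid, TSession> factory)
{
    private readonly ConcurrentDictionary<Guid, TSession> _sessions = new();

    // Concurrent callers asking for the same conversation get the same session.
    public TSession GetOrCreate(Guid conversationId) =>
        _sessions.GetOrAdd(conversationId, factory);

    public bool Remove(Guid conversationId) =>
        _sessions.TryRemove(conversationId, out _);
}
```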

## Requirements
- OS: Windows 10/11 (DPAPI credential storage is Windows-only)
- Runtime: .NET 9 SDK
- Node.js: Required for NPX-based MCP servers (most servers in the catalog)
- LLM provider: An API key for OpenAI or any compatible provider, or a locally running model

## Getting Started
1. Clone the repository

   ```bash
   git clone https://github.com/00wz/DesktopAssistant.git
   cd DesktopAssistant
   ```

2. Build and run

   ```bash
   dotnet run --project src/DesktopAssistant.UI
   ```

   Or open `DesktopAssistant.sln` in Visual Studio 2022 / JetBrains Rider and run the `DesktopAssistant.UI` project.
3. First-time setup
   - Open Settings → Profiles
   - Create an assistant profile – enter your LLM provider base URL and model ID
   - Your API key is stored securely via Windows DPAPI
   - Start a new conversation
4. Adding MCP servers
   - Open Settings → Tools
   - Browse the built-in catalog or enter a custom server command
   - The server connects immediately – no restart required
   - Configure per-tool auto-approval as needed

## Configuration
All settings are stored locally in a SQLite database (`desktop_assistant.db`) in the application directory (sketched after the table below).

| Setting | Storage |
|---|---|
| API keys | Windows DPAPI (encrypted) |
| Assistant profiles | SQLite |
| Conversations & messages | SQLite |
| Tool auto-approval | SQLite |
| MCP server configs | SQLite |
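
A hypothetical sketch of the kind of EF Core + SQLite wiring behind this table; the entity and context names are illustrative, not the project's actual schema.

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative context; the real schema covers all rows of the table above.
public sealed class AssistantDbContext : DbContext
{
    public DbSet<AssistantProfile> Profiles => Set<AssistantProfile>();

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlite("Data Source=desktop_assistant.db");
}

public sealed class AssistantProfile
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public string Endpoint { get; set; } = "";
    public string ModelId { get; set; } = "";
    public double Temperature { get; set; }
}
```
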
Logging is configured in `appsettings.json` via Serilog. Log files are written to the `logs/` directory.
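
Expressed in code, a Serilog setup writing daily files under `logs/` looks like this; the sink settings are assumptions, since the project drives them from `appsettings.json`.

```csharp
using Serilog;

// Programmatic equivalent of the JSON-driven logging setup described above.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .WriteTo.File(
        Path.Combine("logs", "log-.txt"),
        rollingInterval: RollingInterval.Day) // one file per day under logs/
    .CreateLogger();

Log.Information("DesktopAssistant starting");
```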

## Tech Stack
| Component | Library / Version |
|---|---|
| UI Framework | Avalonia UI 11.3 |
| MVVM | CommunityToolkit.Mvvm 8.4 |
| LLM Orchestration | Microsoft Semantic Kernel 1.67 |
| MCP Client | ModelContextProtocol 0.2 |
| ORM | Entity Framework Core + SQLite 9.0 |
| Logging | Serilog 4.1 |
| Markdown | LiveMarkdown.Avalonia |
| Target Framework | .NET 9 |

## Contributing
Feedback and pull requests are welcome. For major changes, please open an issue first.
