AgentRunKit
Lightweight Swift 6 framework for building LLM-powered agents — cloud + on-device inference via MLX on Apple Silicon
A Swift 6 SDK for building LLM-powered agents with type-safe tool calling.
Zero-dependency core · Full Sendable · Async/await · Cloud + Local · MCP
Quick Start
```swift
import AgentRunKit

let client = OpenAIClient.openAI(apiKey: "sk-...", model: "gpt-5.4")

let weatherTool = try Tool<WeatherParams, String, EmptyContext>(
    name: "get_weather",
    description: "Get the current weather"
) { params, _ in
    "72°F and sunny in \(params.city)"
}

let agent = Agent(client: client, tools: [weatherTool])
let result = try await agent.run(userMessage: "What's the weather in SF?", context: EmptyContext())

if let content = result.content {
    print(content)
}
```
`result.content` is optional. Completed runs return finish-tool content, while structural terminal reasons such as max-iteration or token-budget exhaustion surface through `result.finishReason` with no final content.
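A minimal sketch of handling both outcomes, continuing from the Quick Start above. The specific `finishReason` case names are not shown in this README, so the `else` branch only prints the reason rather than matching on assumed cases:

```swift
let result = try await agent.run(
    userMessage: "What's the weather in SF?",
    context: EmptyContext()
)

if let content = result.content {
    // Normal completion: the run produced final content.
    print(content)
} else {
    // Structural termination (e.g. iteration limit or token budget hit):
    // no final content; inspect the finish reason instead.
    print("Run ended without content: \(result.finishReason)")
}
```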
Documentation
Full documentation including guides and API reference is available on Swift Package Index.
Runnable Example
Examples/AgentCode is an interactive terminal coding agent built with AgentRunKit. It demonstrates the full agent loop in a local workspace: streaming events, type-safe tools, approval-gated edits and command execution, bounded file access, transcript export, and deterministic offline mode.
```sh
cd Examples/AgentCode
swift run agent-code
```
By default it opens a bundled broken Swift package so you can ask it to fix failing tests. Set `OPENAI_API_KEY` for a live OpenAI-compatible provider, or run without a key to exercise the CLI with the offline test client.
Installation
Add the package to your `Package.swift`:

```swift
dependencies: [
    .package(url: "https://github.com/Tom-Ryder/AgentRunKit.git", from: "2.4.0")
]
```

Then add the product to your target:

```swift
.target(name: "YourApp", dependencies: ["AgentRunKit"])
```
For on-device inference, additional targets are available:
- `AgentRunKitMLX` for MLX on Apple Silicon (links mlx-swift-lm)
- `AgentRunKitFoundationModels` for Apple Foundation Models (iOS 26+ and macOS 26+, no external dependencies)
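A sketch of a target that also pulls in the on-device product, assuming the products are exposed under the target names listed above:

```swift
.target(
    name: "YourApp",
    dependencies: [
        "AgentRunKit",
        "AgentRunKitMLX"  // on-device inference via MLX; Apple Silicon only
    ]
)
```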
Features
- Agent loop with configurable iteration limits and token budgets
- Streaming with `AsyncThrowingStream` and an `@Observable` SwiftUI wrapper
- Type-safe tools with compile-time JSON schema validation
- Sub-agent composition with depth control and streaming propagation
- Context management: automatic compaction, pruning, token budgets
- Structured output with JSON schema constraints
- Multimodal input: images, audio, video, PDF
- Text-to-speech with concurrent chunking and MP3 concatenation
- MCP client: stdio transport, tool discovery, JSON-RPC
- Extended thinking / reasoning model support
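The streaming surface can be sketched roughly as below. The method name `runStream` and the event case names are illustrative assumptions, not the confirmed API; the README only states that streaming is backed by `AsyncThrowingStream`:

```swift
// Hypothetical sketch: consuming streamed agent events.
for try await event in agent.runStream(userMessage: "What's the weather in SF?",
                                       context: EmptyContext()) {
    switch event {
    case .textDelta(let chunk):      // assumed case name
        print(chunk, terminator: "")
    case .toolCallStarted(let name): // assumed case name
        print("\n[tool: \(name)]")
    default:
        break
    }
}
```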
Providers
| Provider | Description |
|---|---|
| `OpenAIClient` | OpenAI and compatible APIs (OpenRouter, Groq, Together, Ollama) |
| `AnthropicClient` | Anthropic Messages API |
| `GeminiClient` | Google Gemini API |
| `VertexAnthropicClient` | Anthropic models on Google Vertex AI |
| `VertexGoogleClient` | Google models on Vertex AI |
| `ResponsesAPIClient` | OpenAI Responses API with same-substrate continuity replay |
| `FoundationModelsClient` | Apple on-device (macOS 26+ / iOS 26+) |
| `MLXClient` | On-device via MLX on Apple Silicon |
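Every client plugs into the same `Agent`, so swapping providers is a one-line change. The `AnthropicClient` initializer below is an assumption for illustration (only the `OpenAIClient.openAI` constructor appears in the Quick Start); check the API reference for the real signature:

```swift
// Known from the Quick Start:
let cloud = OpenAIClient.openAI(apiKey: "sk-...", model: "gpt-5.4")

// Hypothetical construction — initializer shape assumed:
let anthropic = AnthropicClient(apiKey: "sk-ant-...", model: "...")

// The agent loop is identical regardless of the client behind it.
let agent = Agent(client: anthropic, tools: [weatherTool])
```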
Requirements
| Platform | Version |
|---|---|
| iOS | 18.0+ |
| macOS | 15.0+ |
| Swift | 6.0+ |
| Xcode | 26+ for local development and CI |
License
MIT License. See LICENSE for details.
