Ns Flue
Using https://flueframework.com/ with NativeScript where the device itself is the agent's sandbox.
The agent's shell is the phone.
A Flue agent harness framework running on NativeScript, where the device itself is the agent's sandbox, Apple Intelligence (FoundationModels) is the on-device LLM, and @nstudio/nstreamdown renders the agent's streaming markdown output natively.
https://github.com/user-attachments/assets/30471b0e-ca05-46c1-acae-64c11fdb9ddf
```
┌────────────────────────────────────────────────────────────────┐
│                   Agnostic UI (NativeScript)                   │
│  Mission picker · live transcript · animated tool-call cards   │
└───────────────┬────────────────────────────────────────────────┘
                │ Stream of FlueEvent
┌───────────────┴────────────────────────────────────────────────┐
│  Flue runtime (in-tree, NS-native; mirrors @flue/sdk/client)   │
│  init() · session.prompt/skill/task · markdown-defined skills  │
└───────────────┬────────────────────────────────────────────────┘
                │ SessionEnv interface
┌───────────────┴────────────────────────────────────────────────┐
│                 Device Sandbox (NativeScript)                  │
│  fs: NS Folder / File mapped to app sandbox                    │
│  tools: UIKit / AVFoundation / etc.                            │
│  model: FoundationModels (iOS 26+) / fallback elsewhere        │
└────────────────────────────────────────────────────────────────┘
```
Two runtimes, side by side
This project ships two Flue runtimes and lets you switch between them at runtime from the Polyfills tab. The switch is persisted across launches.
Runtime A: real (the actual `@flue/sdk@0.3.x`, default at first launch)
The real package, bundled into the NS app via a polyfill layer in `webpack.config.js` and `src/flue/real/`. When you switch to this runtime:

- `createFlueContext`, `FlueAgent`, `FlueSession`, `InMemorySessionStore`, pi-agent-core's harness, pi-ai's event stream: all the actual upstream bytes execute on your phone.
- Your `NativeSandbox` (`@nativescript/core` `Folder`/`File`-backed) is plugged in as Flue's `createDefaultEnv` and `createLocalEnv`.
- An on-device pi-ai provider for Apple FoundationModels is registered via the framework's own `registerApiProvider` extension point: no fork, no patch, just the public API.
Both runtimes drive the same UI, the same missions, the same skills, the same tools. The Polyfills tab shows you which one is active and lets you flip between them so you can A/B the same prompt under both.
Runtime B: shim
A Flue-API-shaped runtime in `src/flue/` (~600 LOC) that mirrors `@flue/sdk/client`'s public surface: same names, same call shapes, same Valibot-style schemas, same skill loading, same RxJS-style events. Boots instantly, has no dependencies, and adds ~5 KB to the bundle.
Why we needed a polyfill layer at all
`@flue/sdk@0.3.x` is built for Node.js, Cloudflare Workers, and CI runtimes. Its transitive dependency tree pulls in `just-bash`, `@hono/node-server`, `esbuild`, `@modelcontextprotocol/sdk`, `@google/genai`, `@aws-sdk/client-bedrock-runtime`, `google-auth-library`, `undici`, `ws`, `proxy-agent`, and a long tail of `node:fs` / `node:net` / `node:tls` / `node:crypto` reach-throughs. NativeScript's V8 runtime has none of those.
- Replace `@mariozechner/pi-ai/dist/providers/register-builtins.js` with an in-tree stub that registers no cloud providers. This removes the entire google-genai / openai / anthropic / mistral / bedrock chain (the bulk of the Node-API blockers).
- Alias `pkce-challenge/dist/index.node.js` → `pkce-challenge/dist/index.browser.js`. The package ships a Web Crypto build; we route MCP HTTP transport through it instead of `node:crypto`.
- `resolve.fallback` the remaining Node modules to `false` (they're only reachable from cloud-provider code paths the on-device demo never exercises).
- `NormalModuleReplacementPlugin` to strip `node:` URI prefixes so `node:crypto` falls through to the same `crypto` fallback as a plain `import 'crypto'`.
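A sketch of what those four moves can look like in `webpack.config.js`, assuming the `@nativescript/webpack` chain API; the stub path and plugin wiring here are illustrative, not the repo's actual config:

```javascript
// webpack.config.js — illustrative sketch only; see the repo for the real file.
const webpack = require('@nativescript/webpack');
const { NormalModuleReplacementPlugin } = require('webpack');
const path = require('path');

module.exports = (env) => {
  webpack.init(env);

  webpack.chainWebpack((config) => {
    // 1. Swap the cloud-provider registry for an in-tree no-op stub
    //    (stub path is an assumption).
    config.resolve.alias.set(
      '@mariozechner/pi-ai/dist/providers/register-builtins.js',
      path.resolve(__dirname, 'src/flue/real/polyfills/register-builtins-stub.js')
    );
    // 2. Route pkce-challenge to its Web Crypto build.
    config.resolve.alias.set(
      'pkce-challenge/dist/index.node.js',
      'pkce-challenge/dist/index.browser.js'
    );
    // 3. Fallback the remaining Node built-ins to false.
    config.resolve.set('fallback', { fs: false, net: false, tls: false, crypto: false });
    // 4. Strip the `node:` prefix so node:crypto hits the same fallback as 'crypto'.
    config.plugin('strip-node-prefix').use(NormalModuleReplacementPlugin, [
      /^node:/,
      (resource) => { resource.request = resource.request.replace(/^node:/, ''); },
    ]);
  });

  return webpack.resolveConfig();
};
```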
The result: real `@flue/sdk` runs end-to-end in NativeScript, the Polyfills tab shows every decision live, and the diff between modes is reduced to a single `AgentService.setRuntime('shim' | 'real')` call.
See `webpack.config.js` and `src/flue/real/polyfills/` for the full source.
Prerequisites
| Requirement | Why |
|---|---|
| Node 22+ and npm | NativeScript CLI install |
| `nativescript` (`ns` command) | `npm i -g nativescript` |
| Xcode 26+ | iOS build; FoundationModels framework available in SDK |
| iOS 26 simulator or device | Required to actually use Apple Intelligence on-device |
| iPhone 15 Pro / iPhone 16 / 16 Pro | Apple Intelligence is gated to these device classes |
| Android Studio + emulator (optional) | Android target boots, runs the mock LLM, and runs all tools |
The app also runs on the iOS simulator and on devices that don't support Apple Intelligence: it falls back to a deterministic mock LLM that emits plausible XML tool-call programs, so the full UI flow lights up.
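A minimal sketch of what such a deterministic fallback can look like, assuming cumulative-chunk streaming as described in the runtime walkthrough; names and chunk contents are illustrative, not the repo's `MockLLM`:

```typescript
type Chunk = { text: string };

// The bridge streams *cumulative* text: each chunk repeats everything so far.
function* mockStream(_prompt: string): Generator<Chunk> {
  const full = 'Checking the battery now. <tool name="battery_status">{}</tool>';
  for (let i = 1; i <= full.length; i += 16) {
    yield { text: full.slice(0, i) };
  }
  yield { text: full };
}

// The UI only ever needs to render the latest cumulative chunk.
function lastChunk(prompt: string): string {
  let latest = '';
  for (const { text } of mockStream(prompt)) latest = text;
  return latest;
}
```

Because the mock always ends its turn with a well-formed `<tool>` block, the tool loop, timeline cards, and markdown rendering all exercise the same code paths as the real model.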
Install & run
```bash
git clone https://github.com/NathanWalker/ns-flue
cd ns-flue
npm install
```

iOS

```bash
ns run ios                  # simulator, default booted device
ns run ios --device <udid>  # specific device or simulator
```

Android

```bash
ns run android
```
The Android target uses the mock LLM (Apple Intelligence is iOS-only). The `NativeSandbox`, all schema-typed tools, the markdown skill loader, the live event timeline, and the entire UI all work on Android.
What you can actually do
The home screen, Mission Control, gives you four canned missions:

- **Brief me** (`brief-me` skill): checks battery and location, narrates a one-liner via TTS. The "Phase 0/1 spike" demo.
- **Catalog this shelf** (`catalog-shelf` skill): writes a Markdown inventory into the agent's workspace; camera/object-classification tools land in Phase 4.
- **Find my missing thing** (`find-my-thing` skill): a hotter/colder game with haptics + TTS; the BLE RSSI tool lands in Phase 4.
- **Free prompt**: an open shell. Send anything to the agent.
While the agent runs, the Live Timeline on the run screen narrates each step: streaming text, tool calls (icon per tool), tool results, errors, nested tasks, final reply. This is the Flue harness narrating itself.
The other tabs:

- **Skills**: picker for the bundled `*.md` skills, with each skill's full instructions visible. Demonstrates the OTA story: skills are content, not code.
- **Architecture**: the layered diagram + per-Flue-primitive callouts + phased plan status.
- **Polyfills**: the demonstration tab. Live status of every Node API + library surface `@flue/sdk` pulls in (real / polyfilled / stubbed / not applicable), a runtime selector to flip shim ↔ real, plus a note for NS maintainers on which surfaces would unlock the most SDK integrations.
- **About**: the inversion pitch and the three demo-mission descriptions verbatim from the plan.
Markdown rendering: @nstudio/nstreamdown
The agent emits Markdown (skills, replies, streamed turns). We render it natively via `@nstudio/nstreamdown`'s `Streamdown` component.
How the runtime works (in 60 seconds)
1. `AgentService.ensureReady()` (root service) calls `init({ tools, skills, model })` once.
2. `init()` creates a `NativeSandbox` (a `SessionEnv` backed by `Documents/agent-workspace/`) and an RxJS `Subject<FlueEvent>` for UI consumption.
3. The user picks a mission. `agent.session('mission-<id>').skill(name, { args })` is called.
4. Skill markdown is loaded from `knownFolders.currentApp()/agent-skills/<name>.md`, the frontmatter is parsed, and the body is rendered with `{{ args }}` substitution.
5. The model (`FoundationModelsLLM` on iOS 26+, `MockLLM` everywhere else) streams cumulative text via the Swift `FlueAI.streamPrompt` bridge.
6. `runToolLoop` parses any `<tool name="…">{json}</tool>` blocks the model emits, schema-validates the args via the in-tree `valibot-mini`, dispatches against the sandbox, then feeds `<tool_result for="…">…</tool_result>` blocks back into the next turn. Capped at `maxSteps` (default 8).
7. Every step is emitted as a `FlueEvent` (`agent_start`, `thinking`, `text_delta`, `tool_start`, `tool_end`, `task_*`, `turn_end`, `agent_end`). The `EventTimelineComponent` collapses streaming deltas into a "thinking" card and pairs `tool_start`/`tool_end` into a single animated tool card.
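The tool-block extraction step can be sketched in isolation. This is an illustration of the tag format described above, not the repo's `runToolLoop`; `ToolCall` and `parseToolCalls` are illustrative names:

```typescript
// Minimal sketch of extracting <tool name="…">{json}</tool> blocks from a turn.
interface ToolCall {
  name: string;
  args: unknown;
}

function parseToolCalls(turn: string): ToolCall[] {
  const calls: ToolCall[] = [];
  const re = /<tool name="([^"]+)">([\s\S]*?)<\/tool>/g;
  for (const m of turn.matchAll(re)) {
    // The real loop schema-validates args (valibot-mini) before dispatch.
    calls.push({ name: m[1], args: JSON.parse(m[2]) });
  }
  return calls;
}

// Tool output is fed back into the next turn as a result block.
function toolResultBlock(name: string, result: string): string {
  return `<tool_result for="${name}">${result}</tool_result>`;
}
```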
Extending
Add a new tool
```ts
// src/flue/tools/take-photo.ts
import * as v from 'valibot';
import type { ToolDef } from '../types';

export const takePhoto: ToolDef = {
  name: 'take_photo',
  description: 'Capture a photo from the rear camera. Returns the saved path.',
  schema: v.object({
    hint: v.optional(v.string()),
  }),
  async execute({ hint }, ctx) {
    // ...wire to AVCapturePhotoOutput, write to ctx.env, return path...
    return JSON.stringify({ path: 'photos/2026-05-02-001.jpg', hint });
  },
};
```
Register it in `src/flue/tools/index.ts` and the agent will discover it through the LLM system prompt automatically.
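For context, the registry-to-system-prompt hookup can be sketched with hypothetical shapes (the real `src/flue/tools/index.ts` may differ):

```typescript
// Hypothetical registry shape; the repo's tool index may differ.
interface ToolMeta {
  name: string;
  description: string;
}

const TOOLS: ToolMeta[] = [
  { name: 'take_photo', description: 'Capture a photo from the rear camera.' },
  { name: 'speak', description: 'Speak text via TTS.' },
];

// One plausible way a registry feeds the LLM system prompt: a tool manifest.
function toolManifest(tools: ToolMeta[]): string {
  return tools.map((t) => `- ${t.name}: ${t.description}`).join('\n');
}
```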
Add a new skill
```md
<!-- src/agent-skills/my-skill.md -->
---
name: my-skill
description: One-liner shown in the Skills tab.
---

You are doing X.

Steps:

1. Call `tool_a` with these args.
2. If condition, call `tool_b`.
3. Speak a short summary via `speak`.
```
The webpack copy rule (`agent-skills/**/*.md`) packages it into the app bundle. `loadBundledSkills()` discovers it on the next cold start.
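The load path (frontmatter parse plus `{{ args }}` substitution) can be sketched in isolation; this is an illustration, not the in-app loader:

```typescript
// Minimal sketch of skill-markdown loading. Illustrative names only.
interface Skill {
  name: string;
  description: string;
  body: string;
}

function parseSkill(md: string): Skill {
  // Split YAML-ish frontmatter from the instruction body.
  const m = md.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) throw new Error('missing frontmatter');
  const meta: Record<string, string> = {};
  for (const line of m[1].split('\n')) {
    const i = line.indexOf(':');
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { name: meta.name ?? '', description: meta.description ?? '', body: m[2].trim() };
}

// Substitute {{ arg }} placeholders in the body with mission args.
function renderSkill(skill: Skill, args: Record<string, string>): string {
  return skill.body.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, k) => args[k] ?? '');
}
```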
Add a new mission card
Append a `MissionDefinition` to the `MISSIONS` array in `src/app/agent.service.ts`. It'll show up on the home screen automatically.
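A hypothetical entry, assuming fields like `id`/`title`/`skill` (check `src/app/agent.service.ts` for the real shape):

```typescript
// Assumed MissionDefinition fields; the actual interface may differ.
interface MissionDefinition {
  id: string;
  title: string;
  skill: string;                  // matches a bundled agent-skills/<skill>.md
  args?: Record<string, string>;  // passed through {{ args }} substitution
}

const MISSIONS: MissionDefinition[] = [
  { id: 'brief-me', title: 'Brief me', skill: 'brief-me' },
  { id: 'my-mission', title: 'My mission', skill: 'my-skill', args: { thing: 'keys' } },
];

const mission = MISSIONS.find((m) => m.id === 'my-mission');
```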
Capability probe
The home screen's Capability Probe card surfaces, at first launch:
- Active platform & SDK version
- Whether `FlueAI.shared.isAvailable()` returns true (FoundationModels framework linkable + on-device model present)
- Active model id (`apple/foundationmodels-on-device` vs `flue/mock-on-device`)
- A human-readable reason if FoundationModels is unavailable
- Skill / tool counts
Use this as your first triage step when something looks off.
Links
- Flue framework: https://flueframework.com · https://github.com/withastro/flue
- NativeScript: https://nativescript.org · https://docs.nativescript.org
- nstreamdown (native streaming markdown): https://nstreamdown.ai
- Reference FoundationModels NS app: https://github.com/NathanWalker/ns-ios-foundationmodels
