v0.2.9 · stable [ macOS · Linux beta · Win beta ]

A swarm of specialist agents, running entirely on your machine.

AgentiqFlow is a local-first operating system for AI agents. Run a Director, 25+ specialists, MCP servers, channels, cron, and your own vault — all on hardware you own. No SaaS in the loop. No tokens leaking to a hosting provider.

Agents
25+
Providers
42
MCP servers
[ swarm.viewport ]
live
director · idle
DISPATCH director → researcher · 12ms
MCP github → fetched 47 issues
VAULT secrets refreshed · 42 keys
CRON 0 9 * * 1 → digest run · 2.3s
DISCORD reply · #ops
WORKFLOW invoice-export.v3 · ok
SKILL crawl-news.v7 · 0.85s
EMBED 1283 chunks · 312KB
PROVIDER ollama-cloud · 18 reqs/min
iMESSAGE inbound · alex
TASK created · #q3-pricing
POWER-UP webull · session ok
CHANNEL discord/general · 03 events
AGENT critic ↘ writer · feedback
FILE inbox/contract.pdf → vault
CRON */15 * * * * → market-pulse
[ 1 · manifesto ]

The opposite
of "AI cloud".

Built for people who don't want to rent their intelligence — and who'd rather know exactly which process is running which model on which CPU.

01

Local by default.

Models run on your hardware. Data stays in your filesystem. Nothing is forwarded unless you say so. No telemetry. No fine-print analytics.

02

Composable, not closed.

Every agent is a swappable module. Skills are scripts. Channels are adapters. MCP servers plug into the same bus. You ship the swarm you need.

03

Long-lived, not one-shot.

Cron, queues, and durable workflows make agents wait, retry, and hand work back and forth. Your AI is a daemon, not a chat box.
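Cron specs like "0 9 * * 1" and "*/15 * * * *" are standard five-field expressions (minute, hour, day-of-month, month, day-of-week). A minimal matcher, illustrative only and not the engine's actual scheduler:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):                 # step values, e.g. */15
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, ts: datetime) -> bool:
    """True if a 5-field cron expression fires at ts.
    Minimal on purpose: supports '*', '*/n', and comma lists only."""
    minute, hour, dom, mon, dow = expr.split()
    cron_dow = (ts.weekday() + 1) % 7          # cron convention: Sun=0 … Sat=6
    checks = [
        (minute, ts.minute), (hour, ts.hour), (dom, ts.day),
        (mon, ts.month), (dow, cron_dow),
    ]
    return all(_field_matches(f, v) for f, v in checks)
```

So "0 9 * * 1" fires at 09:00 on Mondays, and "*/15 * * * *" at every quarter hour; a daemon only has to re-evaluate each expression once a minute.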

[ 2 · capabilities ]

Everything an
operating system for agents needs.

[ Director ] 01/09

A conductor for the swarm

One Director routes each intent to the right specialist, splits work across them in parallel, gathers the results, and decides what to do next.

engine.dispatch · 25 specialists
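A rough shape of that fan-out/gather loop. The real engine.dispatch API is not shown on this page, so run_specialist, director, and route below are illustrative names only:

```python
import asyncio

async def run_specialist(name: str, subtask: str) -> str:
    await asyncio.sleep(0)                       # stand-in for a real model call
    return f"{name}: {subtask} · done"

async def director(intent: str, route) -> list:
    # split the intent into (specialist, subtask) pairs, fan out in parallel,
    # then gather the results for the next decision step
    jobs = [run_specialist(agent, sub) for agent, sub in route(intent)]
    return list(await asyncio.gather(*jobs))

def route(intent: str):
    # toy router: every intent fans out to a researcher and a writer
    return [("researcher", intent), ("writer", intent)]

results = asyncio.run(director("q3-pricing digest", route))
```

asyncio.gather preserves submission order, so the Director can match each result back to the specialist that produced it.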
[ Skills hub ] 02/09

Hundreds of plug-and-play skills

Browse, install, fork. Skills are signed scripts your agents can call — from web crawl to invoice gen to market scrape.

shelf · 1,400+ skills
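A skill's shape, sketched under assumptions: the real manifest format and entry-point contract are not documented here, so SKILL and run below are illustrative:

```python
# Hypothetical manifest: what the shelf would index and the engine would
# validate inputs against before invoking the script.
SKILL = {
    "name": "crawl-news",
    "version": "v7",
    "inputs": {"url": "string", "max_items": "int"},
}

def run(url: str, max_items: int = 10) -> list:
    """Entry point the engine would call with validated inputs."""
    # a real skill would fetch and parse `url`; return a stub result here
    return [{"source": url, "rank": i} for i in range(max_items)]
```

Because a skill is just a script with a declared interface, forking one means copying the file and editing the manifest.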
[ Channels ] 03/09

Reach you where you live

Native iMessage, Discord, email, Slack. Agents reply, react, and send push notifications when long jobs finish.

macOS · DMs · groups
[ MCP servers ] 04/09

Bring any tool the model knows

Hook in any Model Context Protocol server. GitHub, Linear, Gmail, Drive, Stripe, Postgres. Connect once, reuse forever.

discover · install · pin
[ Workflows ] 05/09

Drag-and-drop or write code

Visual graph editor for repeatable jobs. Branches, retries, queues, timers, embed steps — composable like Lego, durable like a job runner.

visual · scripted · cron
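The retry semantics a durable step relies on can be sketched like this; with_retries and flaky_export are illustrative, not the shipped workflow DSL:

```python
import time

def with_retries(step, attempts: int = 3, backoff: float = 0.0):
    """Run a workflow step, retrying on failure with exponential backoff."""
    last = None
    for i in range(attempts):
        try:
            return step()                        # success: hand the result on
        except Exception as exc:
            last = exc
            time.sleep(backoff * (2 ** i))       # backoff doubles per attempt
    raise last                                   # exhausted: surface the failure

calls = []
def flaky_export():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient upstream error")
    return "invoice-export.v3 · ok"

result = with_retries(flaky_export)
```

A durable runner additionally persists each attempt, so a crash mid-workflow resumes from the last completed step instead of restarting.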
[ Memory ] 06/09

Obsidian-style brain

Notes, wiki-links, sessions, transcripts. Auto-embedded, semantic search, agent-shared. Yours forever — plain markdown on disk.

vault · graph · embed
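Because the vault is plain markdown, the link graph is recoverable with a few lines; the note names and contents below are made up:

```python
import re

# [[Target]] and [[Target|alias]] style wiki-links define the graph edges.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def link_graph(notes: dict) -> dict:
    """Map note name -> list of outgoing wiki-link targets."""
    return {name: WIKILINK.findall(body) for name, body in notes.items()}

graph = link_graph({
    "digest.md": "See [[Q3 pricing]] and [[market-pulse|the pulse run]].",
    "Q3 pricing.md": "Numbers live in [[digest]].",
})
```

Nothing here is proprietary: any markdown tool, including Obsidian itself, reads the same files.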
[ Vault ] 07/09

Encrypted secrets, local-only

API keys, OAuth tokens, browser sessions sealed in a per-machine vault. Agents request scoped access; nothing leaks to logs.

ed25519 · scoped
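The scoped-access contract, sketched with the sealing itself elided; the Vault class and its grants table are illustrative, not the real API:

```python
class Vault:
    def __init__(self, secrets: dict, grants: dict):
        self._secrets = secrets          # key name -> value (sealed on disk in reality)
        self._grants = grants            # agent name -> set of keys it may read

    def get(self, agent: str, key: str) -> str:
        if key not in self._grants.get(agent, set()):
            # denied requests fail loudly; secret values never reach logs
            raise PermissionError(f"{agent} has no grant for {key}")
        return self._secrets[key]

vault = Vault(
    {"GITHUB_TOKEN": "ghp_stub", "WEBULL_SESSION": "sess_stub"},
    {"researcher": {"GITHUB_TOKEN"}},
)
```

The point of the contract: an agent names the key it wants, and the vault decides; no agent ever enumerates or bulk-reads the store.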
[ Live ops ] 08/09

See every tick of every agent

Live stream of dispatches, tool calls, costs, latencies. Drill into a single execution trace, replay it, fork it.

tracing · replay · cost
[ Power-ups ] 09/09

Native integrations, opt-in

Webull trading, Apple Photos, Calendar, Drive, Notion, Browserbase, Vercel deploy — first-class connectors that respect your scope.

12 ready · more weekly
[ 3 · architecture ]

Five layers,
zero cloud hops.

Every layer is on your machine. Outbound only happens when you call a model provider — and you choose which calls leave the box.

Architecture deep-dive →
  1. L1

    UI · React + Vite

    Dashboard, Chat, Live, Memory graph, Workflows. Loads from disk, runs in browser at localhost:8858.

    :8858
  2. L2

    Sidecar · Node

    Long-lived host process. Routing, MCP host, channels, queues, cron, license. Streams SSE end-to-end.

    engine
  3. L3

    Engine · runtime

    Director, swarm scheduler, persona memory, skills runner, providers, embed pipeline.

    core
  4. L4

    Vault · disk

~/.agentiqflow stores secrets (sealed), sessions, tasks, workflows, and your Obsidian-style memory vault.

    fs
  5. L5

    Models · pluggable

    Anthropic, OpenAI, Gemini, NVIDIA NIM, OpenRouter, Ollama Cloud, local Ollama. Swap per agent.

    providers
  6. Boundary · network egress only on your call
    opt-in
[ 4 · providers ]

42 providers.
One bus.

Configure once. Route per-agent. Mix frontier with open-weights, fast with cheap, hosted with local. Failover and budget caps are built in.

Routing
per agent
Caching
prompt+context
Failover
automatic
Budgets
$ + tokens
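Failover plus a budget cap can be sketched as follows; the config shape, provider order, and function names are assumptions, not the shipped format:

```python
class BudgetExceeded(Exception):
    pass

def route_call(cfg: dict, spent_usd: float, call):
    """Refuse over-budget agents, then try providers in order until one works."""
    if spent_usd >= cfg["budget_usd"]:
        raise BudgetExceeded(cfg["agent"])
    last = None
    for provider in cfg["providers"]:
        try:
            return call(provider)
        except Exception as exc:
            last = exc                   # failover: fall through to the next
    raise last

cfg = {"agent": "researcher",
       "providers": ["anthropic", "openrouter", "ollama-local"],
       "budget_usd": 5.0}

def call(provider):
    if provider == "anthropic":
        raise RuntimeError("rate limited")   # simulate an upstream failure
    return provider

winner = route_call(cfg, spent_usd=0.42, call=call)
```

Mixing hosted with local falls out of the same list: put a local model last and the agent degrades gracefully instead of stalling.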
Anthropic
Claude Opus / Sonnet / Haiku
01/42
OpenAI
GPT-5 family
02/42
Google
Gemini Pro / Flash
03/42
NVIDIA NIM
Hosted + on-prem GPUs
04/42
Mistral
Mistral Large / Codestral
05/42
Cohere
Command R+
06/42
Groq
Llama / Qwen at speed
07/42
Together
Open-weights at scale
08/42
Fireworks
Tuned + LoRA serving
09/42
OpenRouter
Unified relay · 200+ models
10/42
Ollama Cloud
Hosted open-weights
11/42
Ollama local
Run on your GPU
12/42
vLLM
Self-hosted inference
13/42
Replicate
Image / video / audio
14/42
DeepSeek
DeepSeek V3 / R1
15/42
xAI
Grok 3
16/42
Perplexity
Sonar online models
17/42
Qwen
Qwen 3 family
18/42
+ 24 more · see /models
[ 5 · pricing ]

Pay for the runtime.
Bring your own keys.

Model spend stays between you and your provider; we never mark up tokens. AgentiqFlow charges only for the runtime, channels, sync, and support tier you pick.

[ Personal ]
Free
forever

For solo operators.

Download
  • 1 device
  • 5 active agents
  • All built-in skills + MCP
  • Local memory + vault
  • Community Discord
  • Auto-update on stable channel
  • Not included: Channels (Discord/iMessage), Cloud sync, Premium skills shelf
Most popular
[ Pro ]
$29
/ month

For power users who run agents 24/7.

Start 14-day trial
  • 3 devices
  • Unlimited agents + workflows
  • Channels: Discord, iMessage, Slack, email
  • Premium skills shelf
  • Cloud sync (E2E encrypted)
  • Priority providers & failover
  • 1:1 onboarding call
  • Not included: Team workspaces, SSO / SAML
[ Studio ]
$99
/ seat / month

For teams + agencies running flows for clients.

Talk to sales
  • Unlimited devices per seat
  • Team workspaces + shared workflows
  • Audit logs + role-based access
  • Private skills registry
  • SSO / SAML / SCIM
  • Dedicated support engineer
  • Custom MCP server hosting
Need an enterprise plan?

SSO, SCIM, on-prem update server, custom MSAs.

enterprise@agentiqflow.ai →
Annual billing

2 months free on Pro / Studio when paid yearly.

pay annually →
Education + OSS

Free Pro for verified students and maintainers of qualifying OSS projects.

apply →
[ 6 · download ]

Release
0.2.9

Signed with ed25519. SHA-256 published. Auto-update on stable channel. Source on GitHub.

Released
2026-05-08
Channel
stable
Signature
ed25519
License
MIT (core)
[ release.manifest ]
verified
Target · Format · Get
macOS · DMG (Apple Silicon · Intel) · ↓ 0.2.9
macOS · PKG installer · ↓ 0.2.9
Linux · AppImage (x64) · ↓ 0.2.9
Windows · EXE (x64 NSIS) · ↓ 0.2.9
sha256: 4f52ea4493b40688ae19ab00e553ff7b2c2a1ccf0c4e5bcc3a18ba61f5dd60e4
[ 7 · faq ]

Questions
worth answering.

Is my data ever sent to AgentiqFlow?

[ open ]

No. The runtime is local. The only network calls are: model providers you configure (using your keys), MCP servers you install, and our update / license server. The update server only sees your license key + device ID + app version.

01/08

Do I need a GPU?

[ open ]

No. Most agents work fine through hosted providers. You only need a GPU if you want to run local Ollama models or local video / image diffusion.

02/08

How are model costs handled?

[ open ]

We never mark up tokens. You bring your own API keys (or run local). The app offers optional per-agent budget caps, daily ceilings, and prompt-cache analytics.

03/08

What is a Skill vs an MCP server vs a Power-up?

[ open ]

Skills are local scripts your agents can call (Python or Node). MCP servers are tool servers that follow the Model Context Protocol — agents discover them and use their tools. Power-ups are first-party native integrations (Webull, iMessage, Drive) that wrap platform APIs and store credentials in your vault.

04/08

Can teams use this together?

[ open ]

Studio plan supports team workspaces with shared workflows, audit logs, role-based access, and SSO/SCIM. Memory and vaults stay device-local; workspace artifacts sync end-to-end encrypted.

05/08

Is the source available?

[ open ]

Core runtime is MIT-licensed and on GitHub. Premium components (channels, cloud sync, marketplace) are source-available under a paid license.

06/08

What about offline mode?

[ open ]

Fully supported. With local Ollama models you can run a completely offline swarm; the license server tolerates up to 30 days offline without a re-check.

07/08

Is there a free trial of Pro?

[ open ]

14 days, no card required. Cancel any time — Personal tier always remains free.

08/08
[ ready ]

Stop renting your
intelligence.

Install in 90 seconds. Bring your keys. Wire up your channels. Watch your swarm do work while you do other things.

Get the app · 31 MB · View pricing
macOS · Linux beta · Windows beta