A conductor for the swarm
One Director routes each intent to the right specialist, fans work out in parallel, gathers results, and decides what to do next.
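A minimal sketch of that route / fan-out / gather loop in plain Python. The specialist names, keyword matching, and function signatures below are invented for illustration; they are not AgentiqFlow's actual API.

```python
import concurrent.futures

# Hypothetical specialists, keyed by the keyword that routes to them.
SPECIALISTS = {
    "research": lambda task: f"research: summarized '{task}'",
    "code":     lambda task: f"code: patched '{task}'",
}

def route(intent: str) -> list[str]:
    """Pick every specialist whose keyword appears in the intent."""
    return [name for name in SPECIALISTS if name in intent.lower()]

def dispatch(intent: str) -> dict[str, str]:
    """Fan out to the matched specialists in parallel and gather results."""
    names = route(intent)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(SPECIALISTS[name], intent) for name in names}
        return {name: f.result() for name, f in futures.items()}

results = dispatch("Research the market and code a scraper")
```

Here both the `research` and `code` specialists run concurrently and the Director-like `dispatch` function collects their answers before deciding the next step.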
Run a Director, 25 specialists, MCP servers, channels, cron, and an encrypted vault — all on hardware you own. No SaaS in the loop. No API tokens leaking to a third-party host. License-gated, fully local.
license-gated runtime · plans from $100 / mo
Built for people who don't want to rent their intelligence — and who'd rather know exactly which process is running which model on which CPU.
Models run on your hardware. Data stays in your filesystem. Nothing is forwarded unless you say so. No telemetry. No fine-print analytics.
Every agent is a swappable module. Skills are scripts. Channels are adapters. MCP servers plug into the same bus. You ship the swarm you need.
Cron, queues, and durable workflows make agents wait, retry, and hand work back and forth. Your AI is a daemon, not a chat box.
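The "wait, retry, hand back" behavior above can be illustrated with a plain retry-with-backoff loop. This is a generic sketch of durable-step semantics, not AgentiqFlow's actual workflow engine; all names here are invented.

```python
import time

def run_with_retry(step, max_attempts=3, base_delay=0.01):
    """Retry a step with exponential backoff; re-raise after the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds, simulating a transient tool error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

assert run_with_retry(flaky) == "done"  # succeeded on the third attempt
```

A daemonized runtime runs loops like this around every step, so a flaky tool call pauses and resumes instead of killing the whole job.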
Browse, install, fork. Skills are signed scripts your agents can call — from web crawl to invoice gen to market scrape.
Native iMessage, Discord, email, Slack. Agents reply, react, and send push notifications when long jobs finish.
Hook in any Model Context Protocol server. GitHub, Linear, Gmail, Drive, Stripe, Postgres. Connect once, reuse forever.
Visual graph editor for repeatable jobs. Branches, retries, queues, timers, embed steps — composable like Lego, durable like a job runner.
Notes, wiki-links, sessions, transcripts. Auto-embedded, semantic search, agent-shared. Yours forever — plain markdown on disk.
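Semantic search over on-disk notes boils down to: embed each note, embed the query, rank by similarity. A toy sketch, using a bag-of-words vector as a stand-in for a real embedding model (the note contents and file names are invented):

```python
import math
from collections import Counter

# Hypothetical markdown vault contents.
NOTES = {
    "meeting.md": "quarterly revenue review with the finance team",
    "recipe.md":  "sourdough starter feeding schedule",
}

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts instead of a learned vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str) -> str:
    """Return the note whose vector is most similar to the query's."""
    q = embed(query)
    return max(NOTES, key=lambda path: cosine(q, embed(NOTES[path])))
```

Swap the `embed` stand-in for a real embedding model and the same ranking logic gives semantic rather than keyword matching, while the notes themselves stay plain markdown on disk.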
API keys, OAuth tokens, browser sessions sealed in a per-machine vault. Agents request scoped access; nothing leaks to logs.
Live stream of dispatches, tool calls, costs, latencies. Drill into a single execution trace, replay it, fork it.
Webull trading, Apple Photos, Calendar, Drive, Notion, Browserbase, Vercel deploy — first-class connectors that respect your scope.
Every layer is on your machine. Outbound only happens when you call a model provider — and you choose which calls leave the box.
Architecture deep-dive →
Dashboard, Chat, Live, Memory graph, Workflows. Loads from disk, runs in the browser at localhost:8858.
Long-lived host process. Routing, MCP host, channels, queues, cron, license. Streams SSE end-to-end.
Director, swarm scheduler, persona memory, skills runner, providers, embed pipeline.
~/.agentiqflow stores secrets (sealed), sessions, tasks, workflows, and your Obsidian-style memory vault.
Anthropic, OpenAI, Gemini, NVIDIA NIM, OpenRouter, Ollama Cloud, local Ollama. Swap per agent.
Configure once. Route per-agent. Mix frontier with open-weights, fast with cheap, hosted with local. Failover and budget caps are built in.
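Per-agent routing with failover and budget caps can be pictured as a routing table consulted before every model call. Everything below (agent names, provider names, the cap semantics) is a hypothetical sketch, not AgentiqFlow's configuration format.

```python
# Route order per agent: frontier first, failover second, or local-only.
AGENT_ROUTES = {
    "director":   ["anthropic", "openrouter"],
    "summarizer": ["ollama-local"],
}
# Daily spend caps in USD; 0 means uncapped (local models cost nothing).
DAILY_CAP_USD = {"director": 5.00, "summarizer": 0.00}
spend_today = {"director": 0.0, "summarizer": 0.0}
# Simulated provider health: anthropic is down, forcing a failover.
available = {"anthropic": False, "openrouter": True, "ollama-local": True}

def pick_provider(agent: str) -> str:
    """First available provider in the agent's route, unless over budget."""
    cap = DAILY_CAP_USD[agent]
    if cap > 0 and spend_today[agent] >= cap:
        raise RuntimeError(f"{agent} hit its daily budget cap")
    for provider in AGENT_ROUTES[agent]:
        if available[provider]:
            return provider
    raise RuntimeError(f"no provider available for {agent}")
```

With `anthropic` marked unavailable, the `director` agent silently fails over to `openrouter`, while the `summarizer` never leaves the box.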
AgentiqFlow ships as a paid runtime — every install needs a valid, active license to download, install, and run. Model spend is between you and your provider; we never mark up tokens. The runtime, channels, sync, and support tier scale with the price.
For one operator running their own swarm.
Buy license
For power users running agents 24/7 across machines.
Buy license
For teams + agencies running flows for clients.
Buy license
Signed with ed25519. SHA-256 checksums published. Auto-update on the stable channel. Source on GitHub.
| Target | Format | Get |
|---|---|---|
| macOS (Apple Silicon · Intel) | DMG | v0.2.9 |
| macOS installer | PKG | v0.2.9 |
| Linux x64 | AppImage | v0.2.9 |
| Windows x64 (NSIS) | EXE | v0.2.9 |
No. The runtime is local. The only network calls are: model providers you configure (using your keys), MCP servers you install, and our update / license server. The update server only sees your license key + device ID + app version.
No. Most agents work fine through hosted providers. You only need a GPU if you want to run local Ollama models or local video / image diffusion.
We never mark up tokens. You bring your own API keys (or run local models). The app also includes optional per-agent budget caps, daily spend ceilings, and prompt-cache analytics.
Skills are local scripts your agents can call (Python or Node). MCP servers are tool servers that follow the Model Context Protocol — agents discover them and use their tools. Power-ups are first-party native integrations (Webull, iMessage, Drive) that wrap platform APIs and store credentials in your vault.
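A skill, then, is just a script the runtime executes. As a sketch, here is what one could look like; the JSON-on-stdin / JSON-on-stdout contract shown is an assumption for illustration, not AgentiqFlow's documented skill interface.

```python
#!/usr/bin/env python3
"""Hypothetical skill script: reads JSON args from stdin, writes JSON to
stdout. The I/O contract is an assumed convention, not a documented one."""
import json
import sys

def run(args: dict) -> dict:
    # A trivial 'skill': compute a line total for an invoice row.
    return {"total": round(args["qty"] * args["unit_price"], 2)}

if __name__ == "__main__":
    print(json.dumps(run(json.load(sys.stdin))))
```

Because the skill is an ordinary script, it can be versioned, signed, and forked like any other file, which is what makes the browse/install/fork model work.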
Studio plan supports team workspaces with shared workflows, audit logs, role-based access, and SSO/SCIM. Memory and vaults stay device-local; workspace artifacts sync end-to-end encrypted.
The runtime source is published on GitHub; running it still requires an active paid license. Premium components (channels, cloud sync, marketplace) are source-available under the same paid license.
Yes. With local Ollama models you can run a fully offline swarm. The license server tolerates up to 30 days offline between checks.
No. AgentiqFlow is paid-only — every download, install, and runtime invocation requires a valid, paid license. Plans start at $100/month for Solo and scale to $500/seat/month for Studio. Cancel any time in the Stripe Billing Portal; access stays active through the end of your paid period.
If a payment fails or you cancel, your license flips to past_due (Stripe dunning) and then revoked. The local sidecar checks license status against the worker with a 5-minute cache; once revoked, agent dispatch is gated and the dashboard prompts you to reactivate billing.
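The 5-minute cache described above is straightforward to picture: the gate only re-asks the license worker once the cached answer has aged out. A hypothetical sketch (class and method names are invented; `fetch_status` stands in for the real network call):

```python
import time

CACHE_TTL = 300  # seconds: the 5-minute cache window

class LicenseGate:
    def __init__(self, fetch_status):
        self.fetch_status = fetch_status   # callable hitting the license worker
        self.cached = None
        self.fetched_at = float("-inf")    # force a fetch on first use

    def status(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.fetched_at >= CACHE_TTL:
            self.cached = self.fetch_status()
            self.fetched_at = now
        return self.cached

    def can_dispatch(self, now=None):
        # Agent dispatch is gated on anything other than an active license.
        return self.status(now) == "active"

gate = LicenseGate(lambda: "active")
assert gate.can_dispatch(now=0.0)
gate.fetch_status = lambda: "revoked"
assert gate.can_dispatch(now=100.0)       # within TTL: cached 'active' still used
assert not gate.can_dispatch(now=400.0)   # cache expired: revoked state gates dispatch
```

The cache is why revocation takes effect within minutes rather than instantly, and why brief network blips to the license worker don't interrupt running agents.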
Install in 90 seconds. Bring your keys. Wire up your channels. Watch your swarm do work while you do other things.