**Personal**: for solo operators. Download

- 1 device
- 5 active agents
- All built-in skills + MCP
- Local memory + vault
- Community Discord
- Auto-update on stable channel

Not included: channels (Discord/iMessage), cloud sync, premium skills shelf.
Model spend stays between you and your provider; we never mark up tokens. AgentiqFlow charges only for the runtime, channels, sync, and support tier you pick.
**Personal**: for solo operators. Download
**Pro**: for power users who run agents 24/7. Start 14-day trial
**Studio**: for teams + agencies running flows for clients. Talk to sales

| Feature | Personal | Pro | Studio |
|---|---|---|---|
| Devices | 1 | 3 | Unlimited / seat |
| Active agents | 5 | Unlimited | Unlimited |
| Workflows + cron | Unlimited | Unlimited | Unlimited |
| MCP servers | Unlimited | Unlimited | Unlimited |
| Channels | — | Discord, iMessage, Slack, Email | All + custom |
| Cloud sync (E2E) | — | ✓ | ✓ |
| Skills shelf | Free skills | + premium shelf | + private registry |
| Provider failover | Basic | Priority | Priority |
| Audit log | — | 30 days | 365 days |
| RBAC + workspaces | — | — | ✓ |
| SSO / SAML / SCIM | — | — | ✓ |
| Support | Community | Priority email | Dedicated SE |
| Onboarding | Self-serve | 1:1 call | White-glove |
**Does my data leave my machine?**

No. The runtime is local. The only network calls are to the model providers you configure (using your keys), the MCP servers you install, and our update/license server, which sees only your license key, device ID, and app version.
**Do I need a GPU?**

No. Most agents work fine through hosted providers. You only need a GPU to run local Ollama models or local image/video diffusion.
**How do model costs work?**

We never mark up tokens. You bring your own API keys (or run local models). The app also offers optional per-agent budget caps, daily ceilings, and prompt-cache analytics.
**What's the difference between skills, MCP servers, and power-ups?**

Skills are local scripts your agents can call (Python or Node). MCP servers are tool servers that follow the Model Context Protocol; agents discover them and use their tools. Power-ups are first-party native integrations (Webull, iMessage, Drive) that wrap platform APIs and store credentials in your vault.
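AgentiqFlow's skill interface isn't documented here, so the shape below is a hypothetical sketch of the common pattern for local script tools: a plain function plus a JSON-over-stdio wrapper that an agent runtime could invoke as a subprocess.

```python
# Illustrative local "skill" (hypothetical shape, not AgentiqFlow's
# actual skill API): the runtime would pipe JSON arguments to stdin
# and read a JSON result from stdout.
import json
import sys


def run(args: dict) -> dict:
    """Example skill: report word and character counts for a block of text."""
    text = args.get("text", "")
    return {"words": len(text.split()), "chars": len(text)}


def main() -> None:
    # e.g. echo '{"text": "hello"}' | python skill.py
    print(json.dumps(run(json.load(sys.stdin))))
```

Because a skill is just a process speaking JSON, the same script works whether the calling agent runs a hosted model or a local one.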
**Can my team use it?**

The Studio plan supports team workspaces with shared workflows, audit logs, role-based access, and SSO/SCIM. Memory and vaults stay device-local; workspace artifacts sync end-to-end encrypted.
**Is it open source?**

The core runtime is MIT-licensed and on GitHub. Premium components (channels, cloud sync, marketplace) are source-available under a paid license.
**Can it run fully offline?**

Yes. With local Ollama models you can run a fully offline swarm. The license server tolerates 30 days offline before requiring a re-check.
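In the offline case, agents talk to Ollama's local HTTP API, which listens on `http://localhost:11434`. The helper below builds a request for its documented `/api/generate` endpoint; the model name is just an example, and the actual call is left commented so the sketch runs without a local server.

```python
# Building a request for Ollama's local /api/generate endpoint.
# The endpoint and JSON fields follow Ollama's documented API;
# "llama3.2" is an example model name.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


req = build_request("llama3.2", "Summarize today's task queue in one line.")
# With Ollama running locally, the call itself would be:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Since everything stays on localhost, this path generates no traffic to any provider or to AgentiqFlow's servers.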
**How does the trial work?**

14 days, no card required. Cancel any time; the Personal tier always remains free.
Install in 90 seconds. Bring your keys. Wire up your channels. Watch your swarm do work while you do other things.