Multi-agent · Workflow · Project

Run your AI company,
on your own machine.

Hire a small team of AI agents — each with a name, a role, and a face. Assign work from a chat window, watch the output land. Single binary, single SQLite file, your machine.

See it in action.

A 60-second walkthrough — talk to Lead, get a workflow proposal, hit Run, watch the output land back in chat. No timeskip, no editing tricks.

Take a tour.

Hover or click to see each surface. Agents, group rooms, workflows, projects, a dashboard, the skill library, and the model picker — they're separate pieces that fit together, not a single bolted-on chat box.

Agent profile screen

Each agent has a name, a face, a system prompt, a model binding, and a role title. A "Lead" agent acts as your secretary, coordinating the rest.

Group chat room

Drop into a room of agents and watch them answer in parallel or round-robin, streaming token by token. Hit "let them continue" for autonomous rounds.

Workflow editor

Wire agents and groups into a directed graph. Sequential, parallel, review loops, aggregators — every dispatch produces a tracked run with full step history. Schedule them with cron / interval / once-off.

Project detail

Containers for long-running goals. Pick a coordinator and members, slice quotas by % per day / month, set milestones, and get an auto-generated daily report.

Dashboard

One screen for who's busy, who's idle, today's spend, and the latest runs. Every row links straight into the trace, the project, or the agent's chat.

Skill harness

The library holds three kinds of capability: reusable prompt fragments, built-in Python tools, and external MCP servers. Mount any of them on an agent — Lead picks them up automatically when proposing workflows.

Models settings

Bedrock, Anthropic, OpenAI, Azure, Gemini, MiniMax, and any OpenAI-compatible local endpoint — one normalised interface behind them all. Per-agent model binding, BYO credentials.

01 · Roster

Agents with names, faces, and roles.

Each agent has its own system prompt, avatar, working hours, and model binding. A "Lead" agent acts as your secretary — surfacing proposals, coordinating runs, and holding the conversational thread the others reply into.

02 · Collaboration

Watch a room of agents work in real time.

Drop into a group room and every agent in the group answers in parallel or round-robin, streaming token by token in the same window. Hit "let them continue" and they take autonomous rounds while you read along.

03 · Automation

Workflows are graphs, not scripts.

Wire agents and groups into a directed graph: sequential, parallel, review loops, aggregators. Every dispatch becomes a tracked run with full step history. Trigger by chat, by API, or on cron / interval / once-off.
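As an illustration of the graph idea only — every field name below is hypothetical, not the real schema — a two-node "draft, then review" workflow posted to an assumed create endpoint might look like:

```shell
# Hypothetical shape, for illustration; the actual workflow schema may differ.
curl -X POST https://your-host/api/workflows \
  -H "Authorization: Bearer hlns_•••" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Draft & review",
    "nodes": [
      {"id": "draft",  "agent": "Writer"},
      {"id": "review", "agent": "Editor", "after": ["draft"]}
    ],
    "schedule": {"type": "cron", "expr": "0 9 * * 1"}
  }'
```

The point is the structure, not the syntax: nodes name agents, edges (`after`) order them, and the same graph can be dispatched from chat, a script, or a schedule.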

04 · Goals

Projects hold the long-running stuff.

Containers for goals that outlast a single chat: a coordinator agent, a roster, quota slices by percent per day or month, milestone checkboxes, and an auto-generated daily report when the project has activity.

05 · Observability

One screen for the company you're running.

Who's busy, who's idle, today's spend, what shipped, what failed. Every row links straight into the run trace, the project, or the agent's chat — no hunting through tabs to find why a workflow stalled.

06 · Capabilities

The library is three things, mounted on agents.

Reusable prompt fragments, built-in Python tools, and external MCP servers all live in one library. Mount any combination on an agent — Lead picks them up automatically when proposing workflows that need them.

07 · Models

Bring your own brain — any provider.

Bedrock, Anthropic, OpenAI, Azure, Gemini, MiniMax, plus any OpenAI-compatible local endpoint (Ollama, vLLM). One normalised interface behind them all, per-agent model binding, BYO credentials. No SaaS plan, ever.

Three surfaces. One backend.

Want your agents floating around the edges of your Mac? Open the desktop app.
Want a dashboard your team shares? Use the web console.
Want cron jobs to dispatch workflows on their own? Mint an API token.

Tauri

Desktop overlay

Transparent, always on top, cursor passes through empty space. A Python sidecar handles the backend; all data lives in local SQLite.

Vite + React

Web console

Same React bundle as the desktop app. Dialog Center, Workflow Editor, Project detail, Records — full browser access.

Python · SSE

HTTP API

Authenticate with a session cookie, a desktop token, or Bearer hlns_…. Live SSE streaming so chat replies arrive token by token, plus webhooks, Slack and Telegram bridges.
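As a sketch of what that looks like from a terminal — the endpoint path is an assumption; only the `Bearer hlns_…` scheme is from the text above:

```shell
# Sketch: subscribe to a live SSE stream (the path /api/chat/stream is hypothetical).
# -N disables curl's output buffering so events print as they arrive.
curl -N https://your-host/api/chat/stream \
  -H "Authorization: Bearer hlns_•••" \
  -H "Accept: text/event-stream"
```

Each reply token arrives as its own `data:` event, which is what lets chat render token by token.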

How a chat turn becomes a workflow run.

You say a sentence — the lead proposes a workflow — you hit Run — the engine dispatches — workers execute — the result lands back in chat. Every step's tokens, cost, and output are logged and replayable.

1
Talk to Lead

Tell the lead what you need. It checks your agents, skills, and quotas, then proposes a workflow card or answers directly.

2
Approve & Dispatch

One click on the workflow card. The engine creates a run row and pushes the first node onto the task queue.

3
Workers chain it

One or more workers process sequential / parallel nodes, calling each agent's LLM client through a normalised adapter.

4
Result lands back

The run summary returns to the lead thread. Every cost, token, and step is logged — exportable, webhookable, replayable.

Open API. Your data, your machine.

Personal mode runs on local SQLite — no cloud, ever. Managed mode runs on Postgres + pgvector for a team. Your tokens, your data, your machine; wire it to a script or a webhook however you like.

# Dispatch a workflow from a script — no UI required.
curl -X POST https://your-host/api/workflows/42/run \
  -H "Authorization: Bearer hlns_•••" \
  -H "Content-Type: application/json" \
  -d '{"input":"draft Q3 board memo","project_id":1}'

# → returns { "run_id": 1526, "status": "queued" }
# Poll /api/runs/1526 for steps + cost, or subscribe to SSE.
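Continuing the example, polling the run is one more authenticated GET — the response shape shown is illustrative, not guaranteed:

```shell
# Sketch: fetch status, steps, and cost for the run dispatched above.
curl -s https://your-host/api/runs/1526 \
  -H "Authorization: Bearer hlns_•••"
# → illustrative shape: { "run_id": 1526, "status": "succeeded", "steps": [...], "cost": ... }
```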

What's next.

Holons is one developer shipping in public. These are the three directions we're leaning into — none of them are dated promises, and feedback in Discussions changes priorities every week.

Surface

Polish the desktop overlay

Native detail panels in-overlay (no browser hop), deeper tray menus, multi-monitor placement, smarter auto-hide. The desktop is the differentiator — make it feel like macOS, not a webview.

Audience

For non-engineers

The dream user is a one-person founder or ops lead who's never opened a YAML file. Plain-English workflows, fewer settings, hire-by-clicking, results that arrive in chat instead of dashboards.

System

Smarter, broader agents

More LLM providers, an MCP server marketplace, an agent template library, deeper Lead reasoning over project state. Ship the boring infra so building a new agent is a 5-minute job, not a half-day.

Have a use case we should chase? Open an Idea discussion or email us — we read everything.

Ready to put your agents on payroll?

Open source, self-hosted, every byte on your own machine. Personal mode is a double-click on the .dmg; managed mode is one docker compose up for backend + Postgres.

Download for macOS
Read the architecture doc

Get in touch.

Building on top of Holons, want to partner, hit a bug you'd rather not file in public, or curious about a managed deployment? Drop a line — replies usually land within a couple of business days.

Security reports → please follow the SECURITY.md flow.