## Why self-host
- Privacy. Your codebase stays on your machine. We never see file contents, only the task title, status, and a few per-step log lines you choose to emit.
- Provider freedom. Use any Anthropic / OpenAI / Google model you have access to, with the keys you already pay for.
- No outbound surprises. The daemon is the only component that talks to model providers. Audit it, fork it, sandbox it.
## Prerequisites
- Node.js 20.x or newer.
- pnpm 9.x.
- An Open Bee workspace and an API key (Settings → API keys → New).
- At least one provider key (Anthropic, OpenAI, or Google).
- ~200 MB of free disk space for the runtime + dependencies.
## 1. Install the runtime
```sh
git clone https://github.com/openbee/openbee.git
cd openbee
pnpm install
pnpm --filter @openbee/agent-runtime build
```

## 2. Configure provider keys
The daemon reads `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, and `GOOGLE_API_KEY` from your environment. Set whichever you use in your shell rc file, or pass them inline:
```sh
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...   # optional
export GOOGLE_API_KEY=AIza...  # optional
```

## 3. Link the daemon to your workspace
```sh
pnpm --filter @openbee/agent-runtime open-bee link \
  --workspace your-workspace-slug \
  --key obee_xxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

This writes a config to `~/.open-bee/config.json`. Multiple workspaces are supported; re-run `link` to add another.
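The layout of that config file isn't documented here; a plausible shape (field names are assumptions, not the actual schema) might look like:

```json
{
  "workspaces": [
    {
      "slug": "your-workspace-slug",
      "key": "obee_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }
  ]
}
```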
## 4. Start the daemon
```sh
pnpm --filter @openbee/agent-runtime open-bee start
```

The daemon long-polls the cloud at 4-second intervals for new missions scoped to your workspace. When a mission is claimed, it executes locally with the configured provider, streaming step events back via HTTPS. On completion, the metrics (tokens, cost, duration) are reported for billing parity.
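The poll-claim-execute cycle can be sketched roughly as follows. This is an illustrative model with injected dependencies, not the runtime's actual API; `claimNext`, `report`, and the payload shapes are assumptions.

```typescript
// Hypothetical sketch of the daemon's mission loop; the real runtime's
// endpoints, payloads, and error handling will differ.
type Mission = { id: string; title: string };

interface Cloud {
  claimNext(): Promise<Mission | null>;  // atomic claim, or null if the queue is empty
  report(id: string, event: string): Promise<void>;
}

async function runLoop(
  cloud: Cloud,
  execute: (m: Mission) => Promise<string>,
  polls: number,
  intervalMs = 4000,
): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < polls; i++) {
    const mission = await cloud.claimNext();
    if (mission) {
      await cloud.report(mission.id, "started");
      const result = await execute(mission);   // runs locally with your provider keys
      await cloud.report(mission.id, "completed");
      results.push(result);
    } else {
      await new Promise((r) => setTimeout(r, intervalMs)); // wait before the next poll
    }
  }
  return results;
}
```

The key property to notice: execution happens entirely inside `execute`, on your machine; only the step events and completion metrics go back over the wire.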
## Operating notes
- Concurrency. One mission at a time per daemon. To run more in parallel, start multiple daemons against the same workspace; task claiming is atomic, so they won't double-execute.
- Crash recovery. A mission that is claimed but not completed stays in `running` for 10 minutes; then the cloud auto-fails it so you can retry from the UI.
- Tool sandboxing. The default toolset (filesystem, shell, web fetch) runs with the OS permissions of the daemon process. Run it in a container or under a non-privileged user account if you don't fully trust the missions you accept.
- Updates. Run `git pull && pnpm install` on the runtime checkout. The cloud API is versioned; old daemons continue to work for at least 6 months after a contract change.
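The atomic-claim guarantee means that two daemons racing for the same mission resolve to exactly one winner. A toy in-memory model of that semantics (the real check-and-set happens server-side, in the cloud's database):

```typescript
// Toy model of atomic task claiming. In production the cloud performs
// this check-and-set transactionally; this just illustrates the contract.
type Status = "queued" | "running" | "done" | "failed";

class TaskStore {
  private status = new Map<string, Status>();

  add(id: string): void {
    this.status.set(id, "queued");
  }

  // Returns true only for the first daemon to claim the task;
  // every later attempt sees a non-queued status and backs off.
  claim(id: string): boolean {
    if (this.status.get(id) !== "queued") return false;
    this.status.set(id, "running");
    return true;
  }
}
```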
## Diagnostics
```sh
# Check link state and which workspace is active
open-bee status

# Stream verbose logs
DEBUG=open-bee:* open-bee start

# Drop the link and start over
open-bee unlink
```

## Production deployments
For always-on operation we recommend:
- A small VM (1 vCPU / 2 GB RAM is plenty) running the daemon under systemd.
- A separate non-root user account so tool calls can't escape the mission scope.
- `logrotate` on the daemon's log file with 30-day retention.
Sample systemd unit, runbook, and Docker image are coming in the open-bee 0.3 release.
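Until the official unit ships, a minimal sketch can get you started. The paths, service user, and install location below are assumptions; adjust them to your checkout:

```ini
# /etc/systemd/system/open-bee.service -- illustrative only; the paths,
# user, and pnpm location are assumptions, not an official unit.
[Unit]
Description=Open Bee agent daemon
After=network-online.target
Wants=network-online.target

[Service]
User=openbee
WorkingDirectory=/home/openbee/openbee
Environment=ANTHROPIC_API_KEY=sk-ant-...
ExecStart=/usr/bin/pnpm --filter @openbee/agent-runtime open-bee start
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now open-bee`. Keeping provider keys in an `EnvironmentFile=` readable only by the service user is a tighter option than inlining them.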
## WhatsApp bridge (optional)
WhatsApp messages can flow into Mission Control via your daemon. Unlike Telegram (which uses a cloud webhook), WhatsApp requires a live browser session running `whatsapp-web.js`, so the daemon owns the client end-to-end and messages never traverse our cloud.
### Install the optional deps
```sh
pnpm --filter @openbee/agent-runtime install --include=optional
```

This adds `whatsapp-web.js` and `qrcode`. On first run, Puppeteer downloads Chromium (~300 MB) into a local cache.
### Pair your phone
- Start the daemon with the WhatsApp flag: `open-bee start --whatsapp`.
- In the cloud UI: Settings → Bridges → WhatsApp → Start pairing. The cloud creates a pending bridge.
- The daemon notices the bridge on its next sync (every 30 s) and launches Chromium. `whatsapp-web.js` emits a QR string; the daemon converts it to a PNG data URL and pushes it to the cloud.
- The Bridges page polls for the QR every 2 s and renders it. Open WhatsApp on your phone → Settings → Linked Devices → Link a Device → scan.
- On `ready`, the daemon updates the bridge to `active` with your phone number as the display id. The UI flips to a green active panel.
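The bridge's lifecycle during pairing can be modeled as a tiny state machine. The states come from the steps above; the transition code itself is illustrative, not the cloud's implementation:

```typescript
// Illustrative model of a bridge's pairing lifecycle:
// pending -> qr (daemon uploads a QR data URL) -> active (on "ready").
type BridgeState = "pending" | "qr" | "active";

class Bridge {
  state: BridgeState = "pending";
  qrPng?: string;      // PNG data URL the UI polls for and renders
  displayId?: string;  // phone number shown once the bridge is active

  pushQr(png: string): void {
    // QR codes rotate, so re-pushing while still in "qr" is allowed.
    if (this.state === "pending" || this.state === "qr") {
      this.qrPng = png;
      this.state = "qr";
    }
  }

  activate(phoneNumber: string): void {
    // Fired when the whatsapp-web.js client reports "ready".
    this.displayId = phoneNumber;
    this.state = "active";
  }
}
```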
### How messages flow
Once paired, every inbound WhatsApp message:

- Arrives at the daemon's `whatsapp-web.js` client (a browser tab in the Puppeteer process).
- Is POSTed by the daemon to `/api/agent/bridges/<id>/incoming`, creating a Mission Control task scoped to your workspace.
- Is claimed by the daemon's normal task poll; the agent runtime executes it, and on completion the daemon sends the agent's reply back over the same WhatsApp chat.
At no point does the message body touch our cloud beyond what's stored in your workspace's task history (which you control via Privacy).
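The relay step can be sketched with injected transports. The function and parameter names below are assumptions for illustration; `postIncoming` stands in for the HTTPS call to `/api/agent/bridges/<id>/incoming`:

```typescript
// Hypothetical relay of one inbound WhatsApp message through the daemon.
// All three callbacks are injected so the flow is visible in one place.
interface InboundMsg {
  chatId: string;
  body: string;
}

async function relayInbound(
  bridgeId: string,
  msg: InboundMsg,
  postIncoming: (bridgeId: string, body: string) => Promise<string>, // returns task id
  awaitTaskReply: (taskId: string) => Promise<string>,
  sendWhatsApp: (chatId: string, text: string) => Promise<void>,
): Promise<void> {
  const taskId = await postIncoming(bridgeId, msg.body); // create the Mission Control task
  const reply = await awaitTaskReply(taskId);            // normal poll/claim/execute cycle
  await sendWhatsApp(msg.chatId, reply);                 // answer in the same chat
}
```

Note that the message body only goes to your workspace's task store; the reply travels back through the same local `whatsapp-web.js` session.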
### Operating notes for WhatsApp
- Keep the daemon running. If the daemon process dies, the WhatsApp session dies with it. Use systemd / launchd / Docker so it auto-restarts.
- Session storage. `whatsapp-web.js` uses `LocalAuth` by default; credentials are persisted in `./.wwebjs_auth/<bridge-id>` next to the daemon process. Don't commit this directory to git.
- One device per bridge. WhatsApp's "Linked Devices" limit applies. If you need multiple WhatsApp accounts in one workspace, run separate daemons against separate bridges.
- ToS. Using `whatsapp-web.js` automates the WhatsApp Web client, which Meta's ToS technically restricts. Use at your own risk; for production deployments, consider migrating to the official WhatsApp Business API.
## Need help?
File issues on GitHub or email hello@openbee.ai.