Deployment
RivetOS supports three deployment targets: Docker (recommended for most users), Proxmox (homelab), and bare-metal (manual). This guide covers each approach, multi-agent setups, networking, and backup/restore.
Docker Deployment
The simplest way to run RivetOS. Works on any machine with Docker.
Single Agent
```bash
# Clone and install
git clone https://github.com/philbert440/rivetOS.git
cd rivetOS
npm install
```
```bash
# Run the interactive setup
npx rivetos init
# Choose "Docker" as deployment target
# Configure your agent, API key, and channels
# The wizard generates config.yaml + .env + starts containers
```
```bash
# Or manually:
cp config.example.yaml config.yaml
cp .env.example .env
# Edit both files, then:
npx rivetos build
docker compose up -d
```

Multi-Agent
Run multiple agents in the same Docker Compose stack:
```yaml
agents:
  opus:
    provider: anthropic
  grok:
    provider: xai
  local:
    provider: ollama
    local: true
```
```yaml
channels:
  discord:
    channel_bindings:
      "111111111": opus
      "222222222": grok
      "333333333": local
```

```bash
# Start with the multi-agent profile
docker compose --profile multi up -d
```

This creates a separate container for each agent plus a shared datahub (Postgres + shared storage).
Docker Compose Architecture
```
┌─────────────────────────────────────────────┐
│ Docker Network: rivetos-net                 │
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │   opus   │  │   grok   │  │  local   │   │
│  │  :3100   │  │  :3101   │  │  :3102   │   │
│  │ agent img│  │ agent img│  │ agent img│   │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘   │
│       │             │             │         │
│       └─────────────┼─────────────┘         │
│                     │                       │
│             ┌───────┴───────┐               │
│             │    datahub    │               │
│             │  postgres:16  │               │
│             │   pgvector    │               │
│             │   /shared/    │               │
│             │     :5432     │               │
│             └───────────────┘               │
└─────────────────────────────────────────────┘
```
Volumes:
- rivetos-pgdata → Postgres data (survives rebuilds)
- rivetos-shared → Shared storage (agent collaboration)
- ./workspace/ → Agent workspace files (bind mount)
- ./config.yaml → Configuration (bind mount)
- ./.env → Secrets (bind mount)

Data Persistence
Containers are stateless. All persistent data lives on the host:
| Data | Storage | Survives Update |
|---|---|---|
| Workspace files (CORE.md, memory/, skills/) | Bind mount ./workspace/ | ✅ |
| Configuration | Bind mount ./config.yaml | ✅ |
| Secrets | .env on host | ✅ |
| PostgreSQL data | Named volume rivetos-pgdata | ✅ |
| Shared storage | Named volume rivetos-shared | ✅ |
| Plugins | In source tree | ✅ |
| Runtime code | Rebuilt from source | 🔄 |
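
A minimal docker-compose.yml fragment matching the table above might look like this. This is a sketch, not the generated file: the service names and volume names follow this guide, but the container-side mount paths (`/app/...`) are assumptions.

```yaml
services:
  opus:
    build: .
    volumes:
      - ./workspace:/app/workspace       # agent workspace (bind mount)
      - ./config.yaml:/app/config.yaml   # configuration (bind mount)
      - rivetos-shared:/shared           # shared storage (named volume)
    env_file: .env                       # secrets stay on the host
  datahub:
    image: postgres:16
    volumes:
      - rivetos-pgdata:/var/lib/postgresql/data  # survives rebuilds

volumes:
  rivetos-pgdata:
  rivetos-shared:
```

Because runtime code lives only in the image, `docker compose down && docker compose up -d --build` replaces the code while every row marked ✅ above is untouched.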
Updating
```bash
npx rivetos update
```

This pulls the latest source, rebuilds container images, and restarts. Your workspace, config, secrets, and database survive.
For a specific version:
```bash
npx rivetos update --version 0.8.2
```

Proxmox Deployment
For homelab setups with Proxmox VE. Each agent runs in its own LXC container.
Prerequisites
- Proxmox VE 8.x
- At least one node with sufficient RAM (1-2 GB per agent container)
- Network bridge configured (e.g., vmbr1)
Configuration
```yaml
deployment:
  target: proxmox

datahub:
  postgres: true
  shared_storage: true

image:
  build_from_source: true

proxmox:
  api_url: https://10.4.20.1:8006
  nodes:
    - name: pve1
      host: 10.4.20.1
      role: datahub   # Runs Postgres + NFS
    - name: pve2
      host: 10.4.20.2
      role: agents    # Runs agent containers
    - name: pve3
      host: 10.4.20.3
      role: agents
  network:
    bridge: vmbr1
    subnet: 10.4.20.0/24
    gateway: 10.4.20.1
```

Deployment
```bash
# Preview what will be created
npx rivetos infra preview

# Deploy
npx rivetos infra up

# Check status
npx rivetos infra status
```

Proxmox Architecture
```
┌─────────────────────────────────────────────────────┐
│ Network: 10.4.20.0/24 (vmbr1)                       │
│                                                     │
│  PVE1 (datahub)    PVE2 (agents)    PVE3 (agents)   │
│  ┌────────────┐    ┌────────────┐   ┌────────────┐  │
│  │ CT 106     │    │ CT 101     │   │ CT 100     │  │
│  │ postgres   │    │ opus       │   │ local      │  │
│  │ NFS server │    │ 10.4.20.101│   │ 10.4.20.100│  │
│  │ /shared/   │    ├────────────┤   └────────────┘  │
│  │ 10.4.20.106│    │ CT 102     │                   │
│  └────────────┘    │ grok       │                   │
│                    │ 10.4.20.102│                   │
│                    └────────────┘                   │
│                                                     │
│  NFS exports /shared/ to all agents                 │
│  Agents mount /shared/ via bind mount               │
└─────────────────────────────────────────────────────┘
```

Multi-Node Shared Storage
The datahub node runs NFS to share /shared/ across all agents:
```bash
# On the datahub node (automatic with rivetos infra up):
apt install nfs-kernel-server
echo "/shared 10.4.20.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On each Proxmox host:
mount -t nfs 10.4.20.106:/shared /shared
# Add to fstab for persistence
echo "10.4.20.106:/shared /shared nfs defaults 0 0" >> /etc/fstab
```

Each agent container gets /shared/ as a bind mount.
Updating on Proxmox
```bash
# Update all agents (rolling — one at a time with health checks)
npx rivetos update --mesh

# Update a single agent
npx rivetos update
```

Multi-Agent Mesh
Multiple RivetOS instances can form a mesh for cross-instance collaboration.
Setting Up a Mesh
First instance (seed node):
```bash
npx rivetos init
# Configure normally — this becomes the seed
```

Additional instances:

```bash
npx rivetos init --join 10.4.20.101
# Discovers the existing mesh and registers
```

Mesh Operations
```bash
# List all mesh nodes
npx rivetos mesh list

# Health check all peers
npx rivetos mesh ping

# Show local mesh status
npx rivetos mesh status

# Join an existing mesh
npx rivetos mesh join 10.4.20.101
```

How Mesh Delegation Works
When an agent receives a delegate_task targeting an agent that isn’t local:
- Check local agents → not found
- Check mesh registry → found on remote node
- Send delegation request via HTTP to the remote agent channel
- Remote agent processes the task
- Result returned to the requesting agent
This is transparent — the requesting agent doesn’t know or care whether the delegate is local or remote.
Mesh Configuration
```yaml
# Agent channel config (enables mesh)
channels:
  agent:
    port: 3100
    secret: ${RIVETOS_AGENT_SECRET}

# Mesh seeds (optional — for discovery)
# Peers are also discovered via rivetos init --join
```

Bare-Metal Deployment
Run RivetOS directly on your machine without containers.
Setup
```bash
git clone https://github.com/philbert440/rivetOS.git
cd rivetOS
npm install

# Configure
cp config.example.yaml config.yaml
cp .env.example .env
# Edit both files

# Start
npx rivetos start
```

Systemd Service
```bash
# Install as a systemd service
npx rivetos service install

# Manage
sudo systemctl start rivetos
sudo systemctl stop rivetos
sudo systemctl status rivetos
sudo systemctl enable rivetos   # Start on boot

# Uninstall
npx rivetos service uninstall
```

PostgreSQL Setup
You need PostgreSQL 16+ with pgvector running separately:
```bash
# Ubuntu/Debian
sudo apt install postgresql-16 postgresql-16-pgvector
sudo -u postgres createdb rivetos
sudo -u postgres psql rivetos -c "CREATE EXTENSION IF NOT EXISTS vector;"

# Set connection string
echo 'RIVETOS_PG_URL=postgresql://localhost:5432/rivetos' >> .env
```

Networking
Port Reference
| Port | Service | Description |
|---|---|---|
| 3100 | Agent HTTP | Agent channel (delegation, mesh, health) |
| 5432 | PostgreSQL | Database (datahub only) |
Firewall Rules
For multi-instance setups, agents need to reach each other on port 3100 and the datahub on port 5432:
```bash
# Allow agent mesh traffic (adjust subnet)
ufw allow from 10.4.20.0/24 to any port 3100
ufw allow from 10.4.20.0/24 to any port 5432
```

DNS / Service Discovery
The mesh uses seed-node discovery by default. When you run rivetos init --join <host>, the joining node contacts the seed's /api/mesh/join endpoint and receives the full registry of known peers.
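
Conceptually, the join handshake looks like this. Only the endpoint, port, and shared secret come from this guide; the header and response shape shown here are assumptions, not the documented wire format:

```
POST http://10.4.20.101:3100/api/mesh/join
Authorization: Bearer <RIVETOS_AGENT_SECRET>

→ 200 OK with the registry of known peers, e.g.
  [{ "name": "opus", "host": "10.4.20.101", "port": 3100 }, ...]
```

The joining node then stores that registry locally, which is what makes later delegation lookups possible without re-contacting the seed.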
mDNS auto-discovery is planned but not yet implemented.
Backup & Restore
What to Back Up
| Component | Location | Method |
|---|---|---|
| Config | ./config.yaml | File copy |
| Secrets | ./.env | File copy (secure!) |
| Workspace | ./workspace/ | File copy / rsync |
| Database | PostgreSQL | pg_dump |
| Shared storage | /shared/ or volume | File copy / rsync |
Backup Script
```bash
#!/bin/bash
BACKUP_DIR="./backups/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"

# Config and secrets
cp config.yaml "$BACKUP_DIR/"
cp .env "$BACKUP_DIR/"

# Workspace
rsync -a workspace/ "$BACKUP_DIR/workspace/"

# Database (-T disables TTY allocation so the dump isn't mangled)
docker compose exec -T datahub pg_dump -U rivetos rivetos > "$BACKUP_DIR/database.sql"

# Shared storage
rsync -a /shared/ "$BACKUP_DIR/shared/"

echo "Backup complete: $BACKUP_DIR"
```

Restore
```bash
BACKUP_DIR="./backups/20260405-120000"

# Config and secrets
cp "$BACKUP_DIR/config.yaml" ./
cp "$BACKUP_DIR/.env" ./

# Workspace
rsync -a "$BACKUP_DIR/workspace/" workspace/

# Database
docker compose exec -T datahub psql -U rivetos rivetos < "$BACKUP_DIR/database.sql"

# Shared storage
rsync -a "$BACKUP_DIR/shared/" /shared/

# Restart
npx rivetos update
```

Automated Backups
Set up a cron job:
```bash
# Daily at 3am
0 3 * * * /path/to/rivetos/backup.sh >> /var/log/rivetos-backup.log 2>&1
```

Resource Requirements
Minimum (Single Agent, Docker)
- CPU: 1 core
- RAM: 1 GB (512 MB for agent + 512 MB for Postgres)
- Disk: 2 GB (source + node_modules + database)
Recommended (Multi-Agent, Docker)
- CPU: 2+ cores
- RAM: 2-4 GB (512 MB per agent + 512 MB for Postgres)
- Disk: 10 GB (room for database growth and skills)
Proxmox (Per Container)
- Agent CT: 512 MB RAM, 1 vCPU, 2 GB disk
- Datahub CT: 1 GB RAM, 1 vCPU, 10 GB disk
Health Monitoring
Health Endpoint
Each agent exposes:
- GET /health — Full runtime status (agents, providers, channels, memory, metrics)
- GET /health/live — Simple liveness check (returns 200)
- GET /metrics — Raw metrics (turns, tool calls, tokens, latency)
CLI Checks
```bash
npx rivetos status     # Runtime overview
npx rivetos doctor     # 12-category health check
npx rivetos test       # Smoke test (provider, memory, tools)
npx rivetos mesh ping  # Check all mesh peers
```

Docker Health Checks
The agent Dockerfile includes a built-in health check:
```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -qO- http://localhost:3100/health/live || exit 1
```

Docker Compose uses this for dependency ordering — agents wait for the datahub to be healthy before starting.
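
That ordering could be expressed in a compose file roughly like this (a sketch; the generated file and the datahub's own health check command may differ):

```yaml
services:
  datahub:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U rivetos"]
      interval: 10s
      timeout: 5s
      retries: 5
  opus:
    build: .
    depends_on:
      datahub:
        condition: service_healthy  # start only after Postgres answers
```

With `condition: service_healthy`, Compose delays starting the agent until the datahub's health check passes, rather than merely until its container is created.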