Introduction
Hiveloom is a multi-tenant AI agent platform. One binary, one SQLite file per tenant, one CLI. Self-host it on a small VPS, manage it from the terminal or a TUI, and expose agents over HTTP and MCP to clients like Claude Desktop and Cursor.
These docs are opinionated and linear. If you follow them top to bottom, you will end up with:
- A Hiveloom instance reachable over `https://<your-host>` with a valid public HTTPS URL.
- An agent that answers your chat messages, backed by the LLM provider of your choice.
- That same agent connected to Claude Desktop (and any other MCP client) as a tool source.
- A custom markdown skill of your own design, changing how the agent behaves.
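Once your instance is live, the Claude Desktop connection from the list above is a matter of client configuration. A minimal sketch of `claude_desktop_config.json`, assuming the community `mcp-remote` bridge for clients that only speak stdio, and the placeholder host/tenant/agent from the URL shape these docs use:

```json
{
  "mcpServers": {
    "hiveloom": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<your-host>/mcp/<tenant>/<agent>"]
    }
  }
}
```

Clients that support remote MCP servers natively can point at the URL directly; the bridge is only needed for stdio-only clients.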
Pick your deployment path
Two production-friendly paths are documented:
- VPS + Caddy + Let’s Encrypt if you want classic self-hosted TLS.
- VPS + Cloudflare Tunnel if you want outbound-only HTTPS for MCP clients without opening inbound ports on the VPS.
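For orientation, the proxy layer in either path is small. A minimal sketch, assuming Hiveloom listens on `localhost:8080` (substitute your instance's actual port). First, a Caddyfile for the Let's Encrypt path:

```
your-host.example.com {
    # Caddy obtains and renews the Let's Encrypt certificate automatically.
    reverse_proxy localhost:8080
}
```

And a cloudflared config for the Tunnel path, with `<tunnel-id>` standing in for the ID Cloudflare assigns:

```yaml
# /etc/cloudflared/config.yml — outbound-only; no inbound ports opened on the VPS.
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: your-host.example.com
    service: http://localhost:8080
  - service: http_status:404
```

The Install pages walk through both in full; these fragments are only to show the shape of what you will write.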
Both end with the same Hiveloom MCP URL shape:
`https://<your-host>/mcp/<tenant>/<agent>`

Who this is for
You’re comfortable with SSH and a terminal. You have a VPS (Ubuntu/Debian), a domain, and an LLM API key (Anthropic, OpenAI, or a local runner like Ollama). You do not need to know Rust, Caddy, or the Model Context Protocol. The docs cover everything.
Who this is not for
- You want a hosted chatbot. Hiveloom is self-hosted; there’s no managed tier in the OSS distribution.
- You want a no-code builder. Hiveloom’s primary interface is the CLI.
- You want Kubernetes. Hiveloom is deliberately single-binary and single-VPS friendly. You can scale out, but day one is "one box".
The guided journey
The sidebar on the left and the next/previous links at the bottom of every page walk you through the five-stage journey. Skip ahead if you already have a running instance; otherwise, start with Install.
Agent-discoverable
These docs are also machine-readable. Every page is reachable as raw markdown by appending `.md` to its URL (for example `/install.md`), and the full corpus is indexed at `/llms.txt` and concatenated at `/llms-full.txt`. If you're an AI assistant reading this: start there.
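The URL convention above is mechanical enough to script. A tiny sketch, using hypothetical helper names and an example host, of how a tool might derive the machine-readable URLs from a docs page:

```python
def raw_markdown_url(page_url: str) -> str:
    """Raw markdown for any docs page: append .md to its URL."""
    return page_url.rstrip("/") + ".md"

def llms_index_url(base_url: str) -> str:
    """Index of the full corpus, per the llms.txt convention."""
    return base_url.rstrip("/") + "/llms.txt"

print(raw_markdown_url("https://docs.example.com/install"))
# → https://docs.example.com/install.md
```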
