I manage three home networks and a dozen self-hosted services. Here's why I'm building my own infrastructure orchestrator.
It started simply enough. My parents and my sister don't know how to manage their home networks, so I do it for them. Three households, three routers, three sets of devices that need to stay connected and secure. On top of that, I run a handful of self-hosted services at home — a GitLab instance, Jellyfin for media, a NAS, and a growing list of others. I wanted to share some of these services with family and a few close friends, without exposing anything to the public internet.
Public hosting was off the table. I don't want my GitLab instance sitting on the open web, and I definitely don't want Jellyfin reachable by anyone with a browser. So I set up a WireGuard VPN, hosted on a cheap VPS, and placed a Raspberry Pi in each household as the local entry point into that network.
This gave me ingress: from the VPN, I could reach into each network. But the reverse direction didn't work. If my sister wanted to reach a service running on my network, her traffic had no route unless I either added static routes on every router (not always possible, and fragile) or gave each person their own WireGuard client config. I went with the latter, since everyone needed a VPN config anyway to tunnel their traffic when away from home.
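The result is a hub-and-spoke layout: the VPS is the hub, and each Pi and each person's device is a peer. As a rough sketch (addresses, subnets, and keys are placeholders, not my actual setup), the hub's config looks something like this:

```ini
# Hub (VPS): /etc/wireguard/wg0.conf
# Requires net.ipv4.ip_forward=1 so the hub relays between peers.
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

# Raspberry Pi at one household, also advertising that household's LAN
[Peer]
PublicKey = <pi-public-key>
AllowedIPs = 10.8.0.2/32, 192.168.30.0/24

# A family member's phone. Its own client config lists the hub as its
# only peer, with AllowedIPs covering the other households' subnets,
# so cross-network traffic routes through the tunnel in both directions.
[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.8.0.10/32
```

The AllowedIPs lines do double duty in WireGuard: they are both the routing table (which peer gets which destinations) and the inbound filter (which source addresses a peer may use), which is what makes the return path work once every subnet is listed somewhere.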
Then came DNS. I set up an internal authoritative DNS server for a private TLD, so I could have clean internal names for all my services: gitlab.home.internal, media.home.internal, that kind of thing. And because I didn't want browser warnings everywhere, I rolled my own Certificate Authority and started issuing TLS certificates for every internal service. That meant installing the root CA certificate on every device that needed access, including some servers and VMs (think gitlab-runner talking to the GitLab instance, or Prometheus scraping metrics). Every phone, every laptop, every tablet.
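Rolling your own CA sounds grand, but it's mostly a handful of openssl invocations. A minimal sketch, with illustrative names and lifetimes (a real setup would keep ca.key offline or encrypted):

```shell
# Create a self-signed root CA, valid 10 years
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout ca.key -out ca.crt -days 3650 -subj "/CN=Home Internal CA"

# Key and CSR for one internal service
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout gitlab.key -out gitlab.csr -subj "/CN=gitlab.home.internal"

# Sign it; browsers require the name in subjectAltName, not just the CN
printf 'subjectAltName=DNS:gitlab.home.internal\n' > san.ext
openssl x509 -req -in gitlab.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -extfile san.ext -out gitlab.crt

# Sanity check: the new cert chains to the root
openssl verify -CAfile ca.crt gitlab.crt
```

Ninety-day lifetimes are the same habit Let's Encrypt normalizes, and they're exactly where the expiry pain further down comes from.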
By this point, the system worked. It worked well, actually. But the complexity was growing faster than I'd anticipated.
Deploying a new service wasn't just "run the container." It was: create the DNS record, generate a TLS certificate, configure the reverse proxy, make sure the firewall rules were correct, and then test from each household to confirm routing actually worked. Forget one step and you'd spend an evening debugging why something worked fine from your own laptop but not from your sister's phone. Enter one wrong command on the Pi and you'd lose remote access.
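For a sense of the manual surface area, the reverse-proxy step alone looks something like this (nginx shown; the names, cert paths, and upstream port are illustrative, though 8096 is Jellyfin's default):

```nginx
server {
    listen 443 ssl;
    server_name media.home.internal;

    # Issued by the internal CA; both expire and need renewing
    ssl_certificate     /etc/ssl/internal/media.crt;
    ssl_certificate_key /etc/ssl/internal/media.key;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

And that's one of five steps, for one service, repeated every time something new gets deployed.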
And maintenance: I kept getting bitten by certificate expiry. Three months goes by fast when you're not thinking about it; so does a year. I'd get a message from family saying "that movie thing isn't working," and it would turn out to be an expired certificate I'd forgotten to renew. I started keeping a spreadsheet of certificate expiry dates and WireGuard peer configurations. A spreadsheet. For managing infrastructure.
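In hindsight, the spreadsheet could have been a cron job. A sketch of the check I should have been running, where the certificate directory and the 14-day threshold are placeholders:

```shell
#!/bin/sh
# Warn about any certificate expiring within the next 14 days.
# CERT_DIR is an assumed layout: one .crt file per internal service.
CERT_DIR="${CERT_DIR:-/etc/ssl/internal}"
THRESHOLD=$((14 * 24 * 3600))  # seconds

for cert in "$CERT_DIR"/*.crt; do
  [ -f "$cert" ] || continue
  # -checkend exits non-zero if the cert expires within THRESHOLD seconds
  if ! openssl x509 -checkend "$THRESHOLD" -noout -in "$cert" >/dev/null; then
    echo "EXPIRING SOON: $cert"
    openssl x509 -enddate -noout -in "$cert"
  fi
done
```

Wire that to cron and a notifier and the "movie thing isn't working" messages mostly stop, which is roughly the job a certificate module in an orchestrator should absorb entirely.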
I looked at existing tools. Ansible can handle some of this, but it's push-based and doesn't give me a live picture of what's actually running and reachable across the mesh. Terraform is designed for cloud resources, not home Pis and VPN peers. Portainer helps with containers but doesn't know about WireGuard or DNS or certificates. Authentik handles SSO but is complex to set up — I've spent more hours than I'd like to admit getting OIDC working with my GitLab instance, fighting with custom HTTPS certificates and trailing slashes on issuer URLs. Netmaker and Tailscale handle the mesh networking part but don't orchestrate what runs on top of it.
Every tool solves one piece. Nothing ties them together. And every additional tool means another thing to install, configure, update, and monitor.
So I'm building Brace.
Brace is a modular infrastructure orchestrator for self-hosters. At its core, it's a coordinator and a set of lightweight agents that communicate over a WireGuard mesh. You install the agent on each node — your home server, your Raspberry Pis, your VPS — and the coordinator gives you a single dashboard and API to manage everything.
Here's what it will handle:
- Node inventory with tags and groups — know what you have, organize it how you think about it
- WireGuard mesh networking — nodes find and reach each other automatically
- Access control — give family members access to Jellyfin without giving them access to GitLab
- Health monitoring — know when something's down before someone messages you about it
- DNS management — deploy a service, get a DNS record, no manual zone file editing
- Certificate management — Let's Encrypt or your internal CA, automatic issuance and renewal
- Firewall management — consistent rules across your mesh
- Batch operations — push a change to twelve nodes at once instead of SSH-ing into each one
- Proxmox integration — for those of us managing VMs alongside containers
The architecture is modular — you pick the pieces you need and ignore the rest. Not everyone needs Proxmox integration or batch operations. Brace shouldn't force you to care about things you don't use.
I'm building it in Zig with an HTMX dashboard. It runs as Docker containers, Docker Swarm services, or standalone systemd units. It's designed for the person who runs their own infrastructure and wants a single tool to keep it all together — not a Kubernetes cluster, not a cloud platform, just their servers, their network, their rules.
The WireGuard mesh component is a separate open-source tool called wgmesh that Brace integrates with but doesn't depend on. If you just want a WireGuard mesh without the orchestration, wgmesh works on its own.
I'm building this because I need it. The spreadsheet of certificate expiry dates was the last straw. But I suspect I'm not the only one managing a small fleet of servers and Pis across multiple networks, wishing there were one tool that understood the whole picture.
If this resonates, I'm documenting the entire build process publicly — architecture decisions, implementation details, the mistakes I'll inevitably make. Sign up at brace.sh to follow along. And if you're the kind of person who has opinions about WireGuard mesh topologies and gets annoyed by expired certificates, I'd love to hear from you.