Introduction

Every self-hosting project starts with the same five decisions: how to route traffic, how to connect networks, how to manage containers, how to store photos, and where to put the servers. Each of these decisions has a clear community favorite, and in most cases, the community favorite is not what I chose.

This article accompanies a proof-of-concept project in which a private cloud was built using a repurposed Dell Optiplex 7050 Micro as a home server and a Contabo VPS (€4.99/month, 4 vCPU, 8 GB RAM, 150 GB SSD, 300 Mbps, located in Germany) as the public-facing networking layer. The system runs Immich for photo management on the home server, with Headscale, Traefik, and Komodo on the VPS providing VPN coordination, reverse proxying, and container management.

Each technology was chosen over more established alternatives. This article examines five of those decisions with technical reasoning, comparative data, and where relevant, configuration examples. All GitHub statistics and pricing are current as of March 2026.

flowchart LR
    User[User] --> DNS[DNS] --> VPS[Contabo VPS]
    VPS --- Traefik[Traefik]
    VPS --- HS[Headscale]
    VPS --- Komodo[Komodo]
    Traefik -->|port 2283| WG[WireGuard Tunnel]
    Komodo -->|port 8120| WG
    WG --> Optiplex[Optiplex]
    Optiplex --- Immich[Immich]

1. Reverse Proxy: why Traefik wins for mixed infrastructure

The reverse proxy is the single entry point for all public traffic. It terminates TLS, routes requests to the correct backend, and in this architecture, bridges two networks: the VPS’s local Docker environment and a home server accessible only through a WireGuard VPN tunnel.

Three tools were evaluated: Traefik (Go, ~60.9K GitHub stars), Caddy (Go, ~70.9K stars), and Nginx Proxy Manager (TypeScript/OpenResty, ~32.2K stars).

The dual-provider model

Traefik’s distinguishing feature is its ability to run multiple configuration providers simultaneously. A provider is a source of routing rules. The Docker provider watches the Docker socket and reads container labels to configure routes automatically when containers start or stop. The file provider reads static YAML definitions from a directory and watches for changes. Both run concurrently within a single Traefik process.

This matters because the architecture has two categories of services. Headscale and its admin UI run as Docker containers on the VPS, alongside Traefik itself. These are discovered automatically through Docker labels. Immich runs on the Optiplex at home, reachable only via the Headscale VPN at 100.64.0.2:2283. It has no presence in the VPS’s Docker environment.

The Traefik compose file illustrates the dual-provider setup:

traefik:
  image: "traefik:v3.6"
  command:
    # Docker provider: auto-discovers containers on the same host
    - "--providers.docker=true"
    - "--providers.docker.exposedbydefault=false"
    # File provider: reads static routes from YAML files
    - "--providers.file.directory=/dynamic"
    - "--providers.file.watch=true"
    # HTTPS with automatic Let's Encrypt certificates
    - "--certificatesresolvers.myresolver.acme.httpchallenge=true"
    - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web"
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock:ro"
    - "./dynamic:/dynamic:ro"

A service on the same VPS (Headscale) is configured through Docker labels in its own compose definition:

headscale:
  image: 'headscale/headscale:0.28.0'
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.headscale.rule=Host(`headscale.example.com`)"
    - "traefik.http.routers.headscale.tls.certresolver=myresolver"

When Headscale starts, Traefik detects it within seconds and begins routing traffic. No configuration file needs editing. When the container stops, the route is removed automatically.
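
The same pattern extends to any additional container on the VPS. A sketch for the Headscale admin UI mentioned earlier (the image and router name here are illustrative; any label-enabled container works the same way):

headscale-ui:
  image: "ghcr.io/gurucomputing/headscale-admin:latest"  # illustrative admin UI image
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.hs-admin.rule=Host(`headscale.example.com`) && PathPrefix(`/admin`)"
    - "traefik.http.routers.hs-admin.tls.certresolver=myresolver"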

A service on a remote host (Immich on the Optiplex) is configured through a static YAML file placed in the dynamic/ directory:

http:
  routers:
    immich:
      rule: "Host(`immich.example.com`)"
      entrypoints: [websecure]
      tls:
        certResolver: myresolver
      service: immich-service
  services:
    immich-service:
      loadBalancer:
        servers:
          - url: "http://100.64.0.2:2283"

Traefik picks up this file automatically (thanks to the --providers.file.watch=true flag) and begins routing traffic to the Optiplex’s Headscale IP. Adding a new remote service means dropping a new YAML file into the directory. No restart required.

flowchart TB
    subgraph Traefik
        T1[Docker Provider] & T2[File Provider] --> T3[Routing]
    end
    subgraph Caddy
        C1[Caddyfile] --> C3[Routing]
        C2[docker-proxy plugin] -.-> C3
    end
    subgraph NPM
        N1[Web GUI] --> N3[Routing]
    end

Neither Caddy nor Nginx Proxy Manager offers this dual model natively. Caddy is purely file-configured; a community plugin (caddy-docker-proxy) adds Docker label support, but it is third-party maintained and not part of Caddy’s core. Nginx Proxy Manager is purely GUI-driven with no Docker auto-discovery at all. Both can route to external hosts, but neither can mix automatic container discovery with static external routes in a single configuration without workarounds.

There is one important caveat with NPM worth calling out: if any single proxy host configuration is invalid, the entire Nginx process fails to reload, taking down all proxy hosts including the admin GUI itself. Traefik and Caddy both isolate failures to individual routes.

From a resource perspective, Traefik idles at roughly 50-100 MB of RAM. Caddy is lighter at 30-40 MB. NPM is heavier at 100-150 MB due to bundling Node.js and a database. The most rigorous publicly available benchmark (Tyblog’s “35 Million Hot Dogs” test) found Caddy and Nginx performing nearly identically at realistic concurrency, with Traefik consistently slightly behind. At homelab scale, where traffic rarely exceeds a few hundred requests per minute, this difference is purely academic.

2. VPN: Headscale gives you sovereignty, Tailscale gives you sleep

The home server has no static public IP and sits behind a consumer router. Exposing it to the internet requires either port forwarding (a security risk), a tunnel service like Cloudflare Tunnel (which routes traffic through a US corporation), or a mesh VPN.

Tailscale is a mesh VPN built on WireGuard. It uses a coordination server to handle key exchange, device registration, and NAT traversal. Normally this coordination server is hosted by Tailscale Inc. Headscale (Go, ~36.1K stars, BSD-3-Clause, v0.28.0) is an open-source reimplementation that you run yourself.

What the coordination server actually knows

This is the question that matters, and Tailscale deserves credit for being transparent about it. The coordination server never touches data-plane traffic. WireGuard encryption is end-to-end between devices. However, the coordination server handles the control plane and therefore has access to: device metadata (IPs, OS versions, hostnames), connection topology (which nodes communicate with which), public keys, ACL policies, and authentication records.

On Tailscale’s hosted service, this metadata resides on multi-tenant AWS infrastructure operated by a US-incorporated company. Tailscale states that all metadata is encrypted at rest with AES-256 and in transit via TLS, and the company has completed a SOC 2 Type II audit.

The legal issue is the US CLOUD Act (2018), which allows law enforcement to compel US-based companies to produce data regardless of where it is physically stored. This directly conflicts with GDPR. For an EU-based user, running Headscale on a German VPS places the coordination metadata under German data protection law, which requires judicial oversight for data access requests. Tailscale cannot hand over traffic content (it never has it) or private keys (they never leave devices), so the practical risk is metadata exposure, not data interception. Whether that matters to you is a personal judgment.

flowchart LR
    subgraph Tailscale
        TC[Coordination Server\nUS - multi-tenant]
        D1[26+ DERP relays]
    end
    subgraph Headscale
        HC[Coordination Server\nYour VPS - single-tenant]
        D2[1 DERP relay]
    end
    Phone[Phone] & Laptop[Laptop] & Server[Server]
    Phone <--> Laptop
    Phone <--> Server
    TC -.->|metadata| Phone
    HC -.->|metadata| Phone

Feature parity and the unusual maintainer arrangement

Headscale supports MagicDNS, ACLs, Taildrop file sharing, subnet routers, exit nodes, Tailscale SSH, and an embedded DERP relay. Features not yet implemented include Tailscale Funnel, Tailscale Serve, and Tailnet Lock. For a homelab use case, the missing features are not relevant.
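
A trimmed config.yaml sketch showing how the embedded DERP relay and MagicDNS are switched on (key names follow recent Headscale releases; check the sample config shipped with your version):

server_url: https://headscale.example.com    # public URL clients register against
listen_addr: 0.0.0.0:8080
prefixes:
  v4: 100.64.0.0/10                          # tailnet range; source of the 100.64.0.x IPs above
derp:
  server:
    enabled: true                            # embedded DERP relay on this VPS
    stun_listen_addr: "0.0.0.0:3478"
dns:
  magic_dns: true
  base_domain: tailnet.example.com           # must differ from the server_url domain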

An unusual aspect of the project deserves attention: Kristoffer Dalby, one of Headscale’s active maintainers, is employed by Tailscale and permitted to spend work hours on the project. Tailscale’s stated position is that it does not set Headscale’s direction but collaborates on client compatibility. The original creator, Juan Font of the European Space Agency, remains an active maintainer. This arrangement provides strong assurance of long-term protocol compatibility, and it is remarkably generous given that Headscale directly undermines the business case for Tailscale’s paid coordination service.

The DERP relay trade-off

Tailscale operates 26+ geographically distributed DERP relay servers for when direct peer-to-peer connections fail (estimated 5-10% of connections). Self-hosted Headscale typically runs a single embedded DERP server. For a single-region deployment where all devices are in Western Europe, this is functionally equivalent. For globally distributed devices, it adds meaningful latency on relayed connections.

Data-plane performance between Tailscale and Headscale is identical because both use the same WireGuard protocol and the same Tailscale client. The coordination server is only involved during device registration and topology changes, not during active data transfer.

Tailscale’s free tier allows 3 users and 100 devices. Paid plans start at $6/user/month.

3. Container Management: Komodo is the right tool nobody has heard of

With services split across two machines, a container management tool provides a unified interface for deploying, monitoring, and updating containers. Four tools were evaluated: Portainer (Go, ~36.5K stars), Dockge (TypeScript, ~22.5K stars), Komodo (Rust, ~10.5K stars), and Compose Farm (Python, ~293 stars).

xychart-beta title "GitHub Stars (thousands, March 2026)" x-axis ["Portainer", "Dockge", "Komodo", "Compose Farm"] y-axis "Stars (K)" 0 --> 40 bar [36.5, 22.5, 10.5, 0.3]

GitHub stars are a measure of awareness, not quality. The interesting comparison is in architecture.

The multi-host problem

The critical requirement is managing Docker containers on a remote host (the Optiplex) from a central UI (on the VPS). This immediately narrows the field.

Dockge, created by Louis Lam of Uptime Kuma fame, is beloved for its simplicity. It stores compose files as standard compose.yaml on the host filesystem. Uninstall Dockge and your files remain, fully portable. But its multi-host support (added in v1.4) requires each host to run a full Dockge instance with bidirectional network access. There is no lightweight agent, no Git integration, and development has slowed significantly, with v1.5.0 (March 2025) remaining the latest release.
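
Dockge’s own compose file makes the portability point concrete: the stacks directory is a plain bind mount, so compose files live directly on the host (paths as documented in the Dockge README):

dockge:
  image: louislam/dockge:1
  ports:
    - "5001:5001"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./data:/app/data
    - /opt/stacks:/opt/stacks        # stacks remain plain compose.yaml files here
  environment:
    - DOCKGE_STACKS_DIR=/opt/stacks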

Portainer solves remote management through its Edge Agent, which initiates outbound connections from the remote host. This works well behind NAT. However, Portainer’s Community Edition now limits the free tier to 3 nodes, and features like RBAC, SSO, and Git-based auto-polling require Business Edition at $99/month. The more fundamental issue is philosophical: stacks created through Portainer’s UI are stored in its internal BoltDB database rather than as files on disk. Uninstalling Portainer without a prior export loses those definitions. For a project about sovereignty and control, storing infrastructure definitions in a proprietary database format is a hard sell.

Compose Farm, by Bas Nijholt, takes a fundamentally different approach. It is agentless: no server, no agent, just SSH. A single YAML config maps stack names to hosts, and running cf apply reconciles reality with the config, starting missing stacks, migrating moved ones, and stopping removed ones. Compose files stay as plain directories on disk, unchanged. The catch is that it requires shared storage (NFS, Syncthing, or similar) so compose files are accessible at the same path on all hosts. For this project, the VPS and the Optiplex have no shared filesystem, and setting one up over a WireGuard tunnel would add more infrastructure than it saves. Compose Farm shines for NAS-backed homelabs where all hosts mount the same storage.
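
The shape of that workflow, as a hypothetical sketch (the real config schema is in the Compose Farm README; the hosts and stack names below are invented for illustration):

# hypothetical compose-farm config: stack name -> target host
hosts:
  vps: deploy@vps.example.com
  optiplex: deploy@100.64.0.2
stacks:
  traefik: vps
  headscale: vps
  immich: optiplex     # cf apply would migrate this stack if the mapping changes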

Komodo uses a Core and Periphery architecture. The Core (UI, API, database) runs on the VPS. A stateless Periphery agent runs on each managed server, exposing an API that the Core calls to execute Docker commands, retrieve system stats, and stream logs. The Periphery agent authenticates requests using a shared passkey and an IP allowlist. In this project, the allowlist contains only the VPS’s Headscale IP (100.64.0.1), and all communication travels through the encrypted VPN tunnel. No shared filesystem needed, no open ports, no agent phoning home to a third party.
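
A sketch of the Periphery agent’s compose file on the Optiplex, assuming Komodo’s documented environment variables (names and image tag should be verified against the current Komodo docs):

periphery:
  image: "ghcr.io/moghtech/komodo-periphery:latest"
  environment:
    PERIPHERY_PASSKEYS: "${PERIPHERY_PASSKEY}"  # shared secret; must match Core's config
    PERIPHERY_ALLOWED_IPS: "100.64.0.1"         # accept requests only from the VPS's Headscale IP
  ports:
    - "8120:8120"                               # reachable only over the WireGuard tunnel
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock # lets the agent run Docker commands for Core
    - /proc:/proc                               # system stats reported back to Core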

xychart-beta title "Idle RAM Usage (MB)" x-axis ["Dockge", "Caddy", "Traefik", "NPM", "Portainer", "Komodo"] y-axis "MB" 0 --> 300 bar [30, 35, 75, 125, 150, 256]

Komodo’s RAM usage is the highest of the group at ~256 MB for Core plus its FerretDB database, but this runs on the VPS which has 8 GB to spare. The Periphery agent on the Optiplex is stateless and minimal.

Compose file philosophy

Komodo supports three modes for defining stacks: compose files written in the web UI, read from existing files on the host filesystem, or pulled from a Git repository with webhook-triggered auto-deploy. In all three modes, the compose files are accessible as standard YAML. There is no proprietary format.

|                         | Komodo           | Portainer CE                 | Dockge                  | Compose Farm                   |
| ----------------------- | ---------------- | ---------------------------- | ----------------------- | ------------------------------ |
| Language                | Rust             | Go                           | TypeScript              | Python                         |
| GitHub stars            | ~10.5K           | ~36.5K                       | ~22.5K                  | ~293                           |
| Multi-host model        | Core + Periphery | Server + Edge Agent          | Full instance per host  | SSH (agentless)                |
| Compose storage         | UI, disk, or Git | Internal database            | Disk (native YAML)      | Disk (native YAML)             |
| Git integration         | First-class      | BE only ($99/mo)             | None                    | Version-controllable by design |
| Shared storage required | No               | No                           | No                      | Yes                            |
| Free node limit         | Unlimited        | 3                            | Unlimited               | Unlimited                      |
| License                 | GPL-3.0          | Zlib (CE) / Proprietary (BE) | MIT                     | MIT                            |

Komodo was chosen because it offers Portainer’s multi-host management without the licensing restrictions, Dockge’s compose-on-disk portability with Git integration on top, and an architecture that naturally fits the VPN-only connectivity between the two machines, which rules out Compose Farm’s shared-storage model.

4. Photo Management: Immich has effectively won

The self-hosted photo management space was fragmented for years. PhotoPrism, LibrePhotos, Nextcloud Photos, and half a dozen smaller projects each had trade-offs that kept users bouncing between them. Immich has ended that conversation.

With ~95,000 GitHub stars (up from 40,000 in mid-2024), 1,558+ contributors, and a development pace of 350+ commits per six-week release cycle, Immich reached stable release (v2.0.0) on October 9, 2025, after 271 pre-stable releases over 1,337 days. Current version is v2.6.0.
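
For reference, a trimmed sketch of the Immich stack on the Optiplex, adapted from the official compose file (the machine-learning container and most environment variables are omitted, and image tags change between releases, so take the canonical versions from Immich’s docs):

immich-server:
  image: "ghcr.io/immich-app/immich-server:release"
  volumes:
    - ./library:/usr/src/app/upload        # photo storage on the Optiplex's disk
  ports:
    - "2283:2283"                          # the port Traefik's file provider targets over the VPN
  depends_on: [redis, database]
redis:
  image: "docker.io/valkey/valkey:8"
database:
  image: "ghcr.io/immich-app/postgres:16"  # check Immich's docs for the exact tag; this image bundles the required extensions
  environment:
    POSTGRES_USER: immich
    POSTGRES_DB: immich
    POSTGRES_PASSWORD: "${DB_PASSWORD}"
  volumes:
    - ./postgres:/var/lib/postgresql/data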

The FUTO funding model

In May 2024, the Immich core team joined FUTO, an Austin-based organization that funds open-source software. FUTO pays the team to work full-time while the project retains complete autonomy. The license remains AGPL-3.0 with no Contributor License Agreement, meaning the code cannot be relicensed regardless of what happens to FUTO. No features are paywalled. Revenue comes from an optional purchase key. This matters because it addresses the sustainability problem that kills most open-source projects without introducing the perverse incentives of venture capital, where growth must eventually be monetized through the users.

Why not the alternatives

PhotoPrism (~39K stars, AGPL-3.0, Go) offers the best metadata editing in the category and a distinctive 3D Earth map view, but has no native mobile app. Users rely on PWA or third-party sync tools. Its facial recognition is notably weaker in community comparisons. LibrePhotos (~7.9K stars, MIT, Python/Django) has capable AI tagging but is maintained by a single developer, requires 8GB+ RAM, and has no mobile app. Nextcloud Photos integrates with the broader Nextcloud ecosystem but its PHP architecture struggles with large libraries.

Why not Google Photos

Google Photos remains the usability gold standard, but the costs go beyond money. The 2TB Google One plan in the EU runs approximately €13.99/month (~€168/year). Google states it does not use Photos content for advertising and does not train generative AI on your photos outside Google Photos itself. However, with the Ask Photos/Gemini features, Google has acknowledged that it may train on summaries, inferences, and generated media derived from your library.

More concerning are the failure modes. In 2022, Google settled a $100 million class action in Illinois for collecting biometric face data without consent. Separately, there are documented cases of users permanently losing access to their entire Google account after automated CSAM detection flagged legitimate medical photos of their children. During COVID lockdowns, a father photographed a skin infection on his child for a telemedicine appointment. Google’s automated scanning flagged the image, his account was locked, police investigated and cleared him, but Google refused to reinstate the account. His emails, contacts, phone number, and years of photos were gone permanently. When your photo library is also your email, your calendar, your phone number, and your two-factor authentication, a single false positive can erase your digital life.

xychart-beta title "2TB Storage: 5-Year Cost (EUR)" x-axis ["Year 1", "Year 2", "Year 3", "Year 4", "Year 5"] y-axis "EUR" 0 --> 900 bar [168, 336, 504, 672, 840] bar [120, 180, 240, 300, 360]

Red bars represent Google One 2TB, blue bars represent self-hosted costs.

Self-hosting 2TB of photos on a NAS-grade HDD costs roughly €60-80 for the drive plus €20-50/year in electricity. The break-even point against Google One falls within the first 1-2 years. At 4TB+, where Google One jumps to approximately €110/month, the economics become overwhelming.

5. EU VPS: all providers are raising prices, pick your trade-off

The VPS hosts the public-facing networking layer: Headscale, Traefik, and Komodo Core. The requirements were: EU data center (for GDPR jurisdiction), at least 4 GB RAM, a public IPv4 with no NAT, and a price under €10/month.

The EU VPS market in early 2026 is shaped by one macroeconomic fact: DRAM prices surged roughly 171% year-over-year through 2025, driven by AI infrastructure consuming HBM production capacity and squeezing commodity memory supply. Every major provider is raising prices, with increases of 30-67% taking effect April 2026.

Hetzner (Germany) is consistently ranked highest in benchmarks. VPSBenchmarks named it Best Global VPS for both 2025 and 2026. The CX32 (4 vCPU, 8 GB RAM, 80 GB SSD) costs €6.80/month, rising to approximately €9/month after April. Hetzner owns its data centers in Falkenstein, Nuremberg, and Helsinki, and holds ISO 27001 certification. All plans include 20 TB traffic.

Contabo (Germany, Munich) offers the most raw specs per euro. The plan used in this project provides 4 vCPU, 8 GB RAM, and 150 GB SATA SSD with 300 Mbps bandwidth for €4.99/month. Contabo assigns public IPs directly with no NAT layer, which simplifies firewall and VPN configuration considerably compared to Oracle Cloud’s VCN/subnet model. However, Contabo’s reliability record took a hit in late 2024: servers froze repeatedly due to ZRAM (compressed RAM swap) interactions with the Linux kernel under full memory load. A task force including external kernel developers traced the root cause, and ZRAM was deactivated across all systems by December 2024. The community widely interpreted this as evidence of overprovisioning. Contabo’s SLA guarantees only 95% uptime (roughly 18 days of allowed downtime per year), compared to Hetzner’s 99.9%.

OVHcloud (France) differentiates with unmetered bandwidth on all VPS plans but carries the legacy of the 2021 Strasbourg data center fire, which destroyed 29,000 servers. French courts rejected OVH’s force majeure defense because storing backups on the same servers that burned was deemed unreasonable.

Netcup (Germany) deserves attention as a value pick: the VPS 1000 G12 offers 4 vCPU, 8 GB DDR5 ECC RAM, and 256 GB NVMe for ~€8.71/month with a 2.5 Gbps network port and unmetered traffic. It holds ISO 27001 and 27701 certifications and uses 100% renewable energy.

Oracle Cloud’s free tier offers an almost unbelievable 4 ARM cores and 24 GB RAM at no cost. This is where the project originally ran. The catch: provisioning ARM instances in popular regions requires automated retry scripts running for days. Oracle warns about reclaiming underutilized instances after 7 days of low usage. The networking (VCN, subnet, security list configuration) is more complex than any standard VPS provider. And it is US infrastructure under US jurisdiction, which is what motivated the migration to Contabo.

| Provider        | vCPU  | RAM       | Storage     | Traffic   | Price/mo      | Trade-off                      |
| --------------- | ----- | --------- | ----------- | --------- | ------------- | ------------------------------ |
| Hetzner CX32    | 4     | 8 GB      | 80 GB       | 20 TB     | ~€9 (Apr ‘26) | Best reliability, smaller disk |
| Contabo VPS     | 4     | 8 GB      | 150 GB      | 32 TB     | €4.99         | Most specs, lowest reliability |
| OVHcloud VPS-2  | 4     | 8 GB      | 80 GB       | Unlimited | ~€9.15        | Unlimited BW, fire history     |
| Netcup VPS 1000 | 4     | 8 GB DDR5 | 256 GB NVMe | Unmetered | ~€8.71        | Best storage, 2.5 Gbps         |
| Oracle Free     | 4 ARM | 24 GB     | 200 GB      | 10 TB     | Free          | If you can get it              |

Contabo was chosen for this project on price: €4.99/month for 8 GB RAM is enough to run Headscale, Traefik, and Komodo Core simultaneously, and the German data center satisfies the EU jurisdiction requirement. For a production system where uptime is critical, Hetzner would be the stronger choice. For the best storage-per-euro with modern hardware, Netcup stands out.

Conclusion

Each decision followed the same methodology: identify the specific technical requirement, survey the options with concrete data, and select the tool whose architecture fits the constraint most naturally. The resulting stack (Traefik, Headscale, Komodo, Immich, Contabo) is not the most popular combination. That would probably be Nginx Proxy Manager, Tailscale, Portainer, Google Photos, and AWS. But it is the combination best supported by the data for a two-node, EU-hosted, privacy-conscious self-hosting architecture on a student budget.

The numbers tell a clear story: choose tools built on strong architectural foundations, backed by sustainable funding models and open licenses, and hosted on infrastructure you control within your jurisdiction.

References

  1. Traefik GitHub repository. https://github.com/traefik/traefik
  2. Caddy GitHub repository. https://github.com/caddyserver/caddy
  3. Nginx Proxy Manager. https://nginxproxymanager.com/
  4. Headscale GitHub repository. https://github.com/juanfont/headscale
  5. Headscale features documentation. https://headscale.net/stable/about/features/
  6. Tailscale security documentation. https://tailscale.com/security
  7. Tailscale control and data planes. https://tailscale.com/docs/concepts/control-data-planes
  8. Komodo GitHub repository. https://github.com/moghtech/komodo
  9. Portainer documentation. https://docs.portainer.io/
  10. Dockge GitHub repository. https://github.com/louislam/dockge
  11. Compose Farm GitHub repository. https://github.com/basnijholt/compose-farm
  12. Immich stable release announcement. https://immich.app/blog/stable-release
  13. Immich joins FUTO. https://immich.app/blog/immich-joins-futo
  14. FUTO sustainability model. https://futo.org/blog/beyond-donations/
  15. PhotoPrism vs Immich vs LibrePhotos comparison. https://selfhosting.sh/compare/photoprism-vs-immich-vs-librephotos/
  16. Hetzner Cloud pricing. https://www.hetzner.com/cloud
  17. Contabo reliability post-mortem. https://contabo.com/blog/how-we-resolved-the-server-issues-of-2024-a-peek-behind-the-scenes/
  18. VPSBenchmarks Contabo vs Hetzner. https://www.vpsbenchmarks.com/compare/contabo_vs_hetzner
  19. Tyblog reverse proxy benchmark. https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/
  20. Google One storage pricing. https://support.google.com/googleone/answer/2375123
  21. Google Photos Gemini privacy. https://support.google.com/photos/answer/15344015
  22. Google Photos privacy concerns. https://proton.me/blog/is-google-photos-safe
  23. Tailscale open source statement. https://tailscale.com/opensource