There is a moment, somewhere between your third Google account recovery and the notification that your iCloud storage is full again, when you start to wonder who all of this is really for. You have thousands of photos, years of memories, sitting on a server in a data center you will never visit, managed by a company whose terms of service you have never read, in a country whose data laws may not protect you. And for the privilege, you pay a monthly fee.

I decided to stop paying. Not just with money, but with my data, my autonomy, and my trust. This article documents how I built a private cloud using a decommissioned office computer, open-source software, and a five-euro European VPS. It is a proof of concept for digital sovereignty at the individual level.

Why this matters

If you spend any time in the right-to-repair and open-source communities, you already know the feeling. Louis Rossmann dismantling Apple’s arguments against independent repair. iFixit publishing teardowns and repairability scores to hold manufacturers accountable. Linus Tech Tips showing millions of viewers that building and maintaining your own hardware is not just possible, but worthwhile. Jeff Geerling turning single-board computers and home servers into accessible, well-documented projects that make self-hosting feel approachable. The principle running through all of these is the same: you should have the right to understand, modify, and control the things you own.

Rossmann is also part of FUTO, an organization that funds open-source software built to give control back to users. Their mission statement reads: “Computers should belong to you, the people. We develop and fund technology to give them back.” FUTO is the reason I discovered Immich, the photo management tool at the center of this project, and ultimately the reason this project exists at all.

That principle does not stop at hardware. The software and services running on your devices deserve the same scrutiny. When Google Photos offers you unlimited storage, or when Apple bundles iCloud into every device, you are the product being sold, not the storage. Your photos train machine learning models. Your usage patterns feed advertising profiles. Your data sits in jurisdictions where a government subpoena can access it without your knowledge.

The European Union has recognized this. The General Data Protection Regulation, now nearly a decade old, established that privacy is a fundamental right. But GDPR was only the beginning. The Digital Markets Act, which took effect in 2023, directly targets the gatekeeping behavior of big tech platforms. Apple was fined €500 million in 2025 for restricting how app developers could inform users about alternative payment options. Meta was fined €200 million for failing to offer users a service option that collects less personal data. The Digital Services Act layers on requirements around transparency, content moderation, and algorithmic accountability.

These are not abstract regulatory exercises. They reflect a shift in how Europe thinks about technology: that platforms should serve users, not exploit them. Digital sovereignty, the idea that individuals and nations should control their own digital infrastructure and data, has moved from academic papers to the top of the EU’s political agenda. At a 2025 summit in Berlin, European leaders agreed on strategic pillars to reduce technological dependency on non-EU companies, driven in part by fears of what some officials described as a potential American digital kill switch.

For me, this goes beyond politics. As a full-stack development student, I work with cloud services every day. I understand the convenience. I also understand what is being traded for that convenience. Self-hosting is my way of putting right-to-repair principles into practice, applied to the broken relationship between users and the services they depend on.

The hardware: a Dell Optiplex gets a second life

The server at the heart of this project is a Dell Optiplex 7050 Micro. It is a small form-factor office computer that was decommissioned from a corporate fleet, the kind of machine that ends up in bulk recycling or gathering dust in a storage closet. I picked it up for a fraction of the cost of any purpose-built NAS or home server.

The Optiplex came with its own challenges. A BIOS password, leftover from its corporate life, locked me out of hardware configuration. Solving that required research, patience, and a willingness to work at the motherboard level. Once past the BIOS, I installed Ubuntu Server, added a 2TB NVMe drive for the operating system and database, and repurposed a 1TB SATA drive (reformatted from its previous Windows partition to ext4) for media storage.

Repurposed hardware has clear advantages, both economic and environmental. The machine draws minimal power, runs silently, and fits behind a monitor. It does not need a rack, a dedicated room, or a cooling system. And it keeps functional hardware out of the waste stream. Anyone who has watched a DankPods teardown of perfectly good electronics being trashed knows how much usable tech gets thrown away.

The stack

Immich: photos without the cloud

Immich is a self-hosted alternative to Google Photos. It handles photo and video backup from mobile devices, provides a web interface for browsing and organizing media, supports facial recognition and location-based search, and runs entirely on your own hardware. The project is open-source, licensed under AGPL, and actively developed. In 2024, FUTO brought Immich’s core team on full-time, funding development while keeping the project free, with no paywalled features, no ads, and no data mining. The code stays open, the developers get paid, and users keep control. It is the kind of model the open-source world needs more of.

On the Optiplex, Immich runs as a set of Docker containers. The media library lives on the SATA drive, while the PostgreSQL database stays on the faster NVMe. This deliberate split keeps the database responsive without wasting SSD space on large photo files. The Docker Compose setup follows the official Immich documentation, with the key customization being the UPLOAD_LOCATION pointing to the mounted SATA drive.
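To make that split concrete, here is roughly what the relevant part of the Immich `.env` file looks like. The paths and version number are illustrative, not my exact values; the variable names come from the official Immich compose template.

```
# Media library on the 1TB SATA drive (mounted at /mnt/media in this sketch)
UPLOAD_LOCATION=/mnt/media/immich/library

# PostgreSQL data stays on the faster NVMe drive
DB_DATA_LOCATION=/srv/immich/postgres

# Pin the version; do not rely on :latest
IMMICH_VERSION=v1.119.0

DB_PASSWORD=change-me
```

Everything else in the compose file can stay stock; these two location variables are the only customization the hardware layout requires.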

In daily use, this works like Google Photos. My phone backs up photos automatically, I can browse them from any device, and search works as expected. The difference is that every byte of that data stays on a machine I own, in a room I control.

Headscale: a mesh VPN under my control

Making a home server accessible from the outside world is a well-known self-hosting problem. Opening ports on your router is a security risk. Dynamic DNS is fragile. Cloudflare Tunnels work but route your traffic through yet another American corporation.

Headscale solves this. It is a self-hosted implementation of the Tailscale coordination server. Tailscale itself is a mesh VPN built on WireGuard, the modern VPN protocol that is fast, lightweight, and cryptographically sound. Normally, Tailscale’s coordination server (the component that manages keys and tells your devices how to find each other) runs on Tailscale’s own infrastructure. Headscale replaces that component with something you run yourself.

With Headscale running, my phone, my laptop, the Optiplex, and my VPS all see each other as if they were on the same local network, secured by WireGuard encryption. No ports need to be opened on my home router. No traffic passes through a third party’s relay, assuming direct connections succeed (more on that shortly).
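Joining a device to the mesh looks something like the following. The domain is a placeholder, and the exact flags vary between Headscale versions, so treat this as a sketch rather than a copy-paste recipe.

```
# On the VPS: create a user and a short-lived pre-auth key
headscale users create home
headscale preauthkeys create --user home --expiration 1h

# On each device: point the stock Tailscale client at the
# self-hosted coordination server instead of Tailscale's
tailscale up --login-server https://headscale.example.com --auth-key <key>
```

The clients themselves are unmodified Tailscale apps; only the coordination endpoint changes.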

I originally ran Headscale on an Oracle Cloud free-tier VPS. It worked, but Oracle Cloud is a US-based provider, and the free tier comes with bandwidth limitations that caused noticeable performance issues. Uploads to Immich through the VPN were throttled to around 500 kbps, far below what my home connection could handle. The culprit was traffic being relayed through Oracle’s DERP server instead of establishing direct peer-to-peer connections.

This led to the migration that shaped the final architecture of this project.

The migration: Oracle Cloud to Contabo

Moving the networking layer from Oracle Cloud to Contabo, a German VPS provider, served two purposes. First, it put the coordination infrastructure on EU soil, under EU jurisdiction. Second, it gave me a VPS with more forgiving throttling and a clean network setup. Contabo assigns public IPs directly with no NAT, unlike Oracle’s layered security groups.

The migration taught me a lot about infrastructure portability. Headscale’s state lives in three critical files: a SQLite database (containing all users, nodes, and keys), a Noise protocol private key (the server’s cryptographic identity), and a DERP server private key. Preserving these files means every client device trusts the new server as if nothing changed. No re-registration required. The domain-based server_url in the configuration means DNS handles the IP change transparently.
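Backing those up before the cutover is one command. The paths below are Headscale's defaults under `/var/lib/headscale`; the DERP key location is set in `config.yaml` and may differ on your install.

```
# Archive the three files that constitute Headscale's identity
tar czf headscale-state.tar.gz \
    /var/lib/headscale/db.sqlite \
    /var/lib/headscale/noise_private.key \
    /var/lib/headscale/derp_server_private.key
```

Restore the archive to the same paths on the new server, and from the clients' perspective nothing has changed.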

The cutover was straightforward: set up the new server on Contabo, test it by pointing /etc/hosts at the new IP, stop Headscale on Oracle Cloud, update the DNS A records, and start Headscale on Contabo. With the TTL already at 300 seconds, a rollback to Oracle Cloud was never more than ten minutes away.
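The /etc/hosts trick is worth spelling out, because it lets you rehearse the entire cutover without touching public DNS. The IP and domain below are placeholders.

```
# Before touching DNS: force this one machine to resolve the domain
# to the new Contabo IP, then test client registration against it
echo "203.0.113.10 headscale.example.com" | sudo tee -a /etc/hosts

# After the real cutover: confirm the A record has propagated
dig +short headscale.example.com
```

Once the rehearsal succeeds, remove the /etc/hosts entry and let DNS do the same thing for every client.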

One thing I learned the hard way: Headscale 0.28.0 introduced stricter configuration validation than earlier versions. A base_domain value that worked fine before was now rejected because it could theoretically overlap with the server_url in a way that makes the DERP server unreachable. The fix was simple, but it is a good reminder that version upgrades in self-hosted infrastructure require attention. You are your own operations team.
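The fix amounts to keeping the two values on clearly separate subdomains. A sketch of the relevant `config.yaml` lines, with placeholder domains:

```yaml
# The public endpoint clients connect to
server_url: https://headscale.example.com

# The MagicDNS domain assigned to nodes; as of 0.28 this must not
# overlap the server_url hostname, or validation rejects the config
base_domain: ts.example.com
```

With `base_domain` as a sibling subdomain rather than a parent of the server's hostname, the stricter validation passes.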

Traefik: the reverse proxy

Traefik sits on the Contabo VPS and handles all incoming web traffic. When someone visits the Immich subdomain, the request hits Traefik, which terminates TLS with automatically renewed Let’s Encrypt certificates and forwards the request through the Headscale VPN tunnel to the Optiplex at 100.64.0.2:2283.

The important design choice here is Traefik’s file provider. Services running in Docker on the same VPS (like Headscale itself) are discovered automatically through Docker labels. But Immich runs on a different machine entirely, reachable only through the VPN. For that, a static YAML file in Traefik’s dynamic configuration directory defines the routing rule and the backend URL. Traefik watches this directory for changes, so adding a new service requires no restart.
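The file for Immich is short. Something like the following, with a placeholder domain and certificate resolver name; the `url` is the Optiplex's Headscale VPN address from earlier.

```yaml
# dynamic/immich.yml — picked up by Traefik's file provider on save,
# no restart required
http:
  routers:
    immich:
      rule: "Host(`photos.example.com`)"
      entryPoints:
        - websecure
      service: immich
      tls:
        certResolver: letsencrypt
  services:
    immich:
      loadBalancer:
        servers:
          # The home server, reachable only over the VPN tunnel
          - url: "http://100.64.0.2:2283"
```

Adding another self-hosted service later means dropping one more file like this into the directory.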

Because of this setup, the Optiplex never needs to be directly exposed to the internet. All public traffic enters through the VPS, crosses the encrypted VPN tunnel, and reaches the home server. The attack surface stays small.

Komodo: managing containers across machines

With services split across two machines, managing them from a single interface becomes important. Komodo is a Rust-based container management platform with a Core and Periphery architecture. The Core (UI, API, and database) runs on the Contabo VPS. A lightweight Periphery agent runs on each managed server, exposing an API that lets the Core start, stop, and monitor containers remotely.

The Periphery agent on the Optiplex communicates with the Core through the Headscale VPN tunnel. The agent’s configuration whitelists the Core’s VPN IP and requires a shared passkey for authentication. No additional ports need to be opened because the VPN handles connectivity.
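The agent's configuration is a small TOML file. The field names below reflect the Komodo documentation at the time of writing and the values are placeholders, so verify them against the version you deploy.

```toml
# periphery.config.toml on the home server
port = 8120

# Only accept requests originating from the Core's VPN address
allowed_ips = ["100.64.0.1"]

# Shared secret the Core must present on every request
passkeys = ["<generated-passkey>"]
```

Because both the IP whitelist and the passkey are enforced, a stray request that somehow reached the agent's port would still be rejected twice over.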

From a single dashboard, I can see container status, system stats (CPU, memory, disk), and logs across both machines. Stacks can be deployed from the UI or pulled from a Git repository, which ties directly into the GitOps workflow.

Infrastructure as code

Every configuration file, compose definition, and setup script lives in a GitLab repository. Sensitive values like database passwords, API keys, and VPN passkeys are never committed. Instead, .env.example files document the expected variables with placeholder values.
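A `.env.example` is nothing more than the real file with the secrets blanked out. The variable names here are illustrative, not the exact ones from my repository:

```
# .env.example — committed to the repo; the real .env is gitignored
# and lives only on the machine that needs it
DB_PASSWORD=changeme
DB_USERNAME=immich
PERIPHERY_PASSKEY=changeme
```

Anyone cloning the repository can see exactly which values they need to supply without ever seeing mine.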

The article you are reading is also in that repository. It is written in Markdown, built with Hugo (a static site generator), and deployed to GitLab Pages through a CI/CD pipeline that triggers on every push to the main branch.
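The pipeline definition is minimal. GitLab Pages requires the job to be named `pages` and to publish a `public` artifact; the Hugo image and tag below are illustrative, and should be pinned to whatever version the site was built with.

```yaml
# .gitlab-ci.yml — build the Hugo site and publish it to GitLab Pages
pages:
  image: hugomods/hugo:0.139.0
  script:
    - hugo --minify
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

Push to main, and a few minutes later the updated article is live.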

The cost of sovereignty

Monthly, this setup costs roughly €5 for the Contabo VPS and whatever electricity the Optiplex draws (estimated at €3-5 given its low-power profile). A domain costs around €10-15 per year. The Optiplex itself was a one-time purchase.

Compare this to Google One (€30/year for 100GB, €100/year for 2TB), iCloud+ (€36/year for 200GB, €120/year for 2TB), or any other cloud storage subscription. Within a year, the self-hosted setup pays for itself, and there is no storage ceiling. When the SATA drive fills up, I buy another one. No subscription tier upgrade, no monthly fee increase.

The real cost, though, is time and knowledge. Setting up this infrastructure took research, troubleshooting, and a willingness to read documentation. This is not for everyone, and it should not have to be. I am not trying to convince every person to run their own server. I want to demonstrate that alternatives exist, that they work, and that the skills to build them are accessible to anyone willing to learn.

Lessons learned

Convenience has a cost. Every managed service you use is a dependency you accept. When Oracle Cloud throttled my VPN traffic, I had no recourse except to migrate. When Headscale changed its configuration validation between versions, I had to debug it at 11 PM. Owning your infrastructure means owning its problems, but at least they are your problems to solve.

Direct connections matter. The difference between a relayed VPN connection (500 kbps through a DERP server) and a direct peer-to-peer connection (full speed) is enormous. Getting STUN ports open and NAT traversal working is worth the effort.
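In this setup, the concrete step was opening the STUN port on the VPS firewall. Headscale's embedded DERP server answers STUN on UDP 3478 by default, and clients need it to discover their public endpoints and negotiate a direct path (ufw shown here; adjust for your firewall of choice).

```
# Allow STUN so clients can hole-punch direct WireGuard connections
sudo ufw allow 3478/udp
```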

DNS is always the answer, and always the problem. Almost every issue I encountered during the migration came down to DNS. Propagation delays, TTL caching, misconfigured records. If something does not work, check DNS first.

Version-pin everything. Using latest tags in Docker is convenient until an upstream breaking change takes your service down. Pin your image versions in compose files and upgrade deliberately.
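In practice that means compose entries like this, with an explicit tag that only changes when you decide it should (the version shown is illustrative):

```yaml
services:
  immich-server:
    # Pinned tag: upgrades happen deliberately, after reading the
    # release notes, not whenever the upstream pushes a new image
    image: ghcr.io/immich-app/immich-server:v1.119.0
```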

Conclusion

This project started as a way to self-host my photos. It grew into a practical exercise in digital sovereignty, touching networking, system administration, container orchestration, and EU data policy along the way.

The right to repair your own hardware and the right to host your own data share a common foundation: the belief that ownership should mean something. When you buy a device, you should be able to open it, fix it, and decide what runs on it. When you take a photo, you should decide where it lives and who can see it.

The tools to do this exist today. They are free, open-source, and improving rapidly. The regulatory environment in Europe is shifting to support this kind of independence. All that is missing is more people willing to try.