My personal server has been running a handful of services for a while — this site, a home automation app, a Minecraft server, a monitoring stack. They all lived in Docker containers, managed by hand. It worked fine, but deploying anything meant SSHing in, pulling the latest image, restarting things, and hoping nothing broke. Tedious enough that I kept putting off small updates.

I recently moved everything over to Coolify, a self-hosted platform that handles deployments for you. You hook it up to a GitHub repo, and from then on a git push is all it takes to deploy. It also sorts out HTTPS automatically, which used to be its own annoying thing to manage.

The migration was not as smooth as I'd hoped. Here's what happened.

What it actually is

Coolify is a web dashboard that manages your server's services. Point it at a GitHub repo, tell it what to run, fill in any secrets or environment variables, and it does the rest — builds the image, starts the containers, sets up a domain with SSL. When you push new code, it redeploys.

I've been running four things through it: this site, HOUSE (a planning app for tasks around the house), a Minecraft server, and a monitoring stack that keeps an eye on everything else.

You can use it too

Since deploying new things is now pretty painless, there's capacity on the server that isn't being used. If you have a side project, a small app, a bot — something that needs somewhere to live — I can add it. Your code stays in your GitHub repo, secrets stay in the dashboard rather than in the repo, and updates are a git push. Let me know.

What broke during migration

Two things didn't survive the move, and they were basically the same mistake twice.

The monitoring stack came up with no data. Config files I'd been pointing to by relative path weren't there when the containers started — Coolify only puts the compose file in the deployment folder, not the whole repo. So anything your app needs at runtime either has to be baked into the Docker image at build time, or placed on the server manually before the first deploy. Neither of those is obvious until you've been bitten by it.
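The baked-in-at-build-time option looks roughly like this. A minimal sketch, assuming a Prometheus-style monitoring container and a config file at monitoring/prometheus.yml in the repo — the image name and paths are illustrative, not what Coolify requires:

```dockerfile
FROM prom/prometheus:latest
# Copy the config into the image at build time, since Coolify's
# deployment folder won't contain the rest of the repo at runtime.
COPY monitoring/prometheus.yml /etc/prometheus/prometheus.yml
```

Once the config ships inside the image, there's nothing on the server for the deploy to miss.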

The HOUSE app had a similar problem — it expected a folder on the server to already exist. It didn't, so the container started, couldn't find its data, and failed without saying much useful. Once I knew what to look for it was a quick fix, but figuring out what to look for took a while.
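The fix for this one is just creating the directory on the host before the first deploy. Something like the following — the path is an example, substitute whatever your app actually mounts:

```shell
# Create the data directory the app expects, ahead of the first deploy.
# -p makes parent directories as needed and is a no-op if it already exists.
mkdir -p /srv/house/data
```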

The thing that will bite you if you have any stored data

Coolify recreates storage volumes on each deploy. If your app writes anything to a named Docker volume, that data gets wiped when you next deploy. I don't know why it works this way, but it does.

The workaround is to point your mounts at specific paths on the host machine instead — Coolify doesn't touch those. Everything on this server that stores anything important uses that approach now. But I wish I'd known before the first deploy rather than after.
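In docker-compose terms the difference is one line. The first mount below is a named volume, which Coolify recreates on deploy; the second is a host path, which it leaves alone (service name and paths are illustrative):

```yaml
services:
  app:
    image: myapp:latest
    volumes:
      # - app-data:/data          # named volume: wiped on the next deploy
      - /srv/app/data:/data       # host bind mount: survives redeploys
```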

A couple of Traefik things

Coolify uses Traefik under the hood for routing and SSL. It mostly handles itself, but two things caught me out.

There's a manual step to connect Traefik to the network your services run on, and if Traefik ever gets recreated it loses that connection and routing breaks. Easy to fix, annoying to diagnose at midnight.
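The fix itself is a one-liner on the host — something like this, assuming the shared network is called coolify and the proxy container is coolify-proxy (both names may differ on your setup):

```shell
docker network connect coolify coolify-proxy
```

Worth keeping in your notes, because you'll want it again the next time the proxy container gets recreated.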

HTTP-to-HTTPS redirect also doesn't happen automatically — there's an extra config file that needs to exist in a specific place. Without it, HTTP requests just hang rather than redirecting. Coolify doesn't tell you this anywhere obvious.
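What that file contains is standard Traefik dynamic configuration: a catch-all router on the HTTP entrypoint plus a redirectScheme middleware. A sketch of the idea, using Traefik v2 syntax — where exactly Coolify expects the file is the setup-specific part, so treat the location and entrypoint name as assumptions:

```yaml
http:
  routers:
    http-catchall:
      rule: "HostRegexp(`{host:.+}`)"   # match every hostname on the HTTP entrypoint
      entryPoints:
        - web
      middlewares:
        - redirect-to-https
  middlewares:
    redirect-to-https:
      redirectScheme:
        scheme: https
        permanent: true                 # 301 rather than 302
```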

Overall

The rough edges during migration were frustrating but they were all one-time problems. Now that everything's running, deployments are boring in the best way — push, wait thirty seconds, done. I should have done this earlier.