<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  
  <title>bastian.nu</title>
  <subtitle>Bastian&#39;s personal site — thoughts on tech and the internet.</subtitle>
  <link href="https://bastian.nu/feed.xml" rel="self" />
  <link href="https://bastian.nu/" />
  <updated>2026-04-04T00:00:00Z</updated>
  <id>https://bastian.nu/</id>
  <author>
    <name>Bastian</name>
  </author>
  <entry>
    <title>Hello, World!</title>
    <link href="https://bastian.nu/posts/hello-world/" />
    <updated>2026-03-13T00:00:00Z</updated>
    <id>https://bastian.nu/posts/hello-world/</id>
    <content type="html">&lt;p&gt;Welcome to my little corner of the internet. 👋&lt;/p&gt;
&lt;p&gt;This site is built with &lt;a href=&quot;https://www.11ty.dev/&quot;&gt;11ty&lt;/a&gt; and styled with &lt;a href=&quot;https://jdan.github.io/98.css/&quot;&gt;98.css&lt;/a&gt; to give it that classic Windows 98 charm. I&#39;ve always had a soft spot for the era when the web felt handmade and personal — Geocities pages, blinking text, and all.&lt;/p&gt;
&lt;h2&gt;Why a personal site?&lt;/h2&gt;
&lt;p&gt;Because owning your own slice of the web still matters. Social media platforms come and go, but a simple static site? That can stick around for decades.&lt;/p&gt;
&lt;h2&gt;What you&#39;ll find here&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Blog posts&lt;/strong&gt; — thoughts on tech, software, and whatever I&#39;m currently obsessing over&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Projects&lt;/strong&gt; — things I&#39;ve built or am building&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bookmarks&lt;/strong&gt; — links worth sharing&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More soon. Thanks for stopping by!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Out With the Old — Why I Rebuilt My Site From Scratch</title>
    <link href="https://bastian.nu/posts/new-site/" />
    <updated>2026-03-13T00:00:00Z</updated>
    <id>https://bastian.nu/posts/new-site/</id>
    <content type="html">&lt;p&gt;The old site had been sitting untouched for a while. Every time I looked at it, I felt that specific kind of guilt that comes from something you built years ago that no longer represents who you are or what you care about. So I blew it up and started over.&lt;/p&gt;
&lt;h2&gt;What Was Wrong With the Old Site&lt;/h2&gt;
&lt;p&gt;Nothing was technically &lt;em&gt;broken&lt;/em&gt;, which is almost the problem. It just felt stale — a relic of a different era of my thinking about the web. The design was generic, the content was sparse, and the whole thing ran on a stack that I&#39;d grown out of. Maintaining it felt like homework.&lt;/p&gt;
&lt;p&gt;More practically: it wasn&#39;t mine in any meaningful sense. It used a theme someone else made, lived on a platform someone else controlled, and looked like a thousand other developer portfolios. I could have been anyone.&lt;/p&gt;
&lt;h2&gt;What I Wanted Instead&lt;/h2&gt;
&lt;p&gt;I wanted something that felt personal and a little weird. The web used to be full of pages like that — handmade, opinionated, idiosyncratic. Somewhere along the way, everyone&#39;s site started looking the same.&lt;/p&gt;
&lt;p&gt;That&#39;s where the Windows 98 aesthetic came from. It&#39;s not just nostalgia (though it is a little nostalgic). It&#39;s a statement: this place is mine, I built it, and it looks exactly how I wanted it to look. If that&#39;s not your taste, that&#39;s fine — but you&#39;ll remember it.&lt;/p&gt;
&lt;h2&gt;The New Stack&lt;/h2&gt;
&lt;p&gt;The new site is built with &lt;a href=&quot;https://www.11ty.dev/&quot;&gt;11ty&lt;/a&gt;, which is about as simple as a static site generator gets. No magic, no framework overhead, no build tooling I don&#39;t understand. Markdown goes in, HTML comes out.&lt;/p&gt;
&lt;p&gt;Styled with &lt;a href=&quot;https://jdan.github.io/98.css/&quot;&gt;98.css&lt;/a&gt; for the Windows 98 look, with some custom CSS on top for layout. The whole thing runs as a static site behind nginx in a Docker container — dead simple to deploy, dead simple to update.&lt;/p&gt;
&lt;h2&gt;Why It Matters&lt;/h2&gt;
&lt;p&gt;Honestly, the best reason to have your own site is that it&#39;s yours. Social platforms come and go. Algorithms bury things. Account bans happen. A static site you control doesn&#39;t have any of those problems.&lt;/p&gt;
&lt;p&gt;And there&#39;s something about the act of building it yourself — really building it, not dragging blocks around in a CMS — that makes you more invested in actually writing things. This post exists partly because I want to see if that&#39;s true.&lt;/p&gt;
&lt;p&gt;We&#39;ll find out.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>I Set a Trap for AI Crawlers</title>
    <link href="https://bastian.nu/posts/ai-tarpit/" />
    <updated>2026-03-14T00:00:00Z</updated>
    <id>https://bastian.nu/posts/ai-tarpit/</id>
    <content type="html">&lt;p&gt;AI companies have been sending crawlers to scrape every website they can find, feeding the content into training datasets whether site owners want it or not. The polite ones respect &lt;code&gt;robots.txt&lt;/code&gt;. Most don&#39;t.&lt;/p&gt;
&lt;p&gt;So I set a trap.&lt;/p&gt;
&lt;h2&gt;What&#39;s a tarpit?&lt;/h2&gt;
&lt;p&gt;A tarpit is a honeypot that doesn&#39;t block bots — it wastes their time instead. Rather than returning a 403 and letting the crawler move on in milliseconds, a tarpit serves an endless stream of content as slowly as possible, tying up the bot&#39;s connection for as long as it&#39;ll sit there.&lt;/p&gt;
&lt;p&gt;This site is running &lt;a href=&quot;https://zadzmo.org/code/nepenthes/&quot;&gt;Nepenthes&lt;/a&gt;, an AI-specific tarpit. Any request to &lt;code&gt;/blog/&lt;/code&gt; that looks like a scraper gets handed off to it. Nepenthes responds with a page full of randomly generated nonsense — Markov chain babble trained on public domain books — plus a maze of links pointing to more fake pages. The content is delivered at a trickle, 4–25 seconds per response, so the crawler spends real CPU time and bandwidth receiving text that is completely worthless.&lt;/p&gt;
&lt;h2&gt;How it works here&lt;/h2&gt;
&lt;p&gt;Nginx proxies &lt;code&gt;/blog/&lt;/code&gt; to Nepenthes running in a Docker container. The fake pages are seeded from a corpus of Project Gutenberg texts (Pride and Prejudice, Frankenstein, Sherlock Holmes) blended together into convincing-looking gibberish. Every page links to several others, creating a maze the crawler can wander indefinitely.&lt;/p&gt;
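&lt;p&gt;The handoff itself is just a proxy rule. A minimal sketch of the kind of nginx config involved — the upstream port and header choices here are illustrative, not my exact setup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Send anything under /blog/ to the Nepenthes container.
# Adjust the port to whatever your compose file exposes.
location /blog/ {
    proxy_pass http://127.0.0.1:8893;
    # Forward the client IP so the tarpit's stats stay meaningful.
    proxy_set_header X-Real-IP $remote_addr;
}
&lt;/code&gt;&lt;/pre&gt;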
&lt;p&gt;The delay is the real weapon. If a crawler holds 10 connections open and each response takes 20 seconds, that&#39;s 200 connection-seconds of crawler resources spent on nothing. At scale, across many sites running tarpits, this meaningfully raises the cost of indiscriminate scraping.&lt;/p&gt;
&lt;h2&gt;What it catches&lt;/h2&gt;
&lt;p&gt;From the first few hours of running:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bots hitting &lt;code&gt;/blog/&lt;/code&gt; receive the tarpit, not this site&#39;s actual content&lt;/li&gt;
&lt;li&gt;Each connection is held open for up to 25 seconds&lt;/li&gt;
&lt;li&gt;The generated text goes straight into the void — or, if they&#39;re not careful, into a training dataset full of Victorian novel slurry&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The real posts (like this one) live at &lt;code&gt;/blog/posts/&lt;/code&gt; and are served normally. The tarpit is what you get if you wander in without reading.&lt;/p&gt;
&lt;h2&gt;Monitoring&lt;/h2&gt;
&lt;p&gt;I hooked up Prometheus and Grafana to track it — hits, unique IPs, bytes wasted, total delay inflicted. Watching the numbers tick up is genuinely satisfying.&lt;/p&gt;
&lt;p&gt;If you&#39;re running your own site and are fed up with your content being scraped without consent, Nepenthes is worth a look. The setup is a single Docker container and a few lines of nginx config.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>My Server Now Has a PaaS — And You Can Deploy to It Too</title>
    <link href="https://bastian.nu/posts/coolify-migration/" />
    <updated>2026-04-04T00:00:00Z</updated>
    <id>https://bastian.nu/posts/coolify-migration/</id>
    <content type="html">&lt;p&gt;My personal server has been running a handful of services for a while — this site, a home automation app, a Minecraft server, a monitoring stack. They all lived in Docker containers, managed by hand. It worked fine, but deploying anything meant SSHing in, pulling the latest image, restarting things, and hoping nothing broke. Tedious enough that I kept putting off small updates.&lt;/p&gt;
&lt;p&gt;I recently moved everything over to &lt;a href=&quot;https://coolify.io/&quot;&gt;Coolify&lt;/a&gt;, a self-hosted platform that handles deployments for you. You hook it up to a GitHub repo, and from then on a git push is all it takes to deploy. It also sorts out HTTPS automatically, which used to be its own annoying thing to manage.&lt;/p&gt;
&lt;p&gt;The migration was not as smooth as I&#39;d hoped. Here&#39;s what happened.&lt;/p&gt;
&lt;h2&gt;What it actually is&lt;/h2&gt;
&lt;p&gt;Coolify is a web dashboard that manages your server&#39;s services. Point it at a GitHub repo, tell it what to run, fill in any secrets or environment variables, and it does the rest — builds the image, starts the containers, sets up a domain with SSL. When you push new code, it redeploys.&lt;/p&gt;
&lt;p&gt;I&#39;ve been running four things through it: this site, HOUSE (a planning app for tasks around the house), a Minecraft server, and a monitoring stack that keeps an eye on everything else.&lt;/p&gt;
&lt;h2&gt;You can use it too&lt;/h2&gt;
&lt;p&gt;Since deploying new things is now pretty painless, there&#39;s capacity on the server that isn&#39;t being used. If you have a side project, a small app, a bot — something that needs somewhere to live — I can add it. Your code stays in your GitHub repo, secrets stay in the dashboard rather than in the repo, and updates are a git push. Let me know.&lt;/p&gt;
&lt;h2&gt;What broke during migration&lt;/h2&gt;
&lt;p&gt;Two things didn&#39;t survive the move, and they were basically the same mistake twice.&lt;/p&gt;
&lt;p&gt;The monitoring stack came up with no data. Config files I&#39;d been pointing to by relative path weren&#39;t there when the containers started — Coolify only puts the compose file in the deployment folder, not the whole repo. So anything your app needs at runtime either has to be baked into the Docker image at build time, or placed on the server manually before the first deploy. Neither of those is obvious until you&#39;ve been bitten by it.&lt;/p&gt;
&lt;p&gt;The HOUSE app had a similar problem — it expected a folder on the server to already exist. It didn&#39;t, so the container started, couldn&#39;t find its data, and failed without saying much useful. Once I knew what to look for it was a quick fix, but figuring out what to look for took a while.&lt;/p&gt;
&lt;h2&gt;The thing that will bite you if you have any stored data&lt;/h2&gt;
&lt;p&gt;Coolify recreates storage volumes on each deploy. If your app writes anything to a named Docker volume, that data gets wiped when you next deploy. I don&#39;t know why it works this way, but it does.&lt;/p&gt;
&lt;p&gt;The workaround is to point your mounts at specific paths on the host machine instead — Coolify doesn&#39;t touch those. Everything on this server that stores anything important uses that approach now. But I wish I&#39;d known before the first deploy rather than after.&lt;/p&gt;
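&lt;p&gt;In compose terms, the difference is a single line. A hypothetical sketch — service name and paths are made up:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;services:
  app:
    image: myapp:latest
    volumes:
      # Named volume: recreated on each Coolify deploy, wiping the data.
      # - appdata:/data
      # Host path bind mount: Coolify leaves it alone across redeploys.
      - /srv/app/data:/data
&lt;/code&gt;&lt;/pre&gt;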
&lt;h2&gt;A couple of Traefik things&lt;/h2&gt;
&lt;p&gt;Coolify uses Traefik under the hood for routing and SSL. It mostly handles itself, but two things caught me out.&lt;/p&gt;
&lt;p&gt;There&#39;s a manual step to connect Traefik to the network your services run on, and if Traefik ever gets recreated it loses that connection and routing breaks. Easy to fix, annoying to diagnose at midnight.&lt;/p&gt;
&lt;p&gt;HTTP-to-HTTPS redirect also doesn&#39;t happen automatically — there&#39;s an extra config file that needs to exist in a specific place. Without it, HTTP requests just hang rather than redirecting. Coolify doesn&#39;t tell you this anywhere obvious.&lt;/p&gt;
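&lt;p&gt;For the curious, the missing piece is a Traefik dynamic config file along these lines. This is a sketch of the standard v2-style redirectScheme pattern, not necessarily the exact file Coolify expects:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Dynamic configuration: catch all HTTP traffic and redirect to HTTPS.
http:
  routers:
    http-catchall:
      rule: HostRegexp(`{host:.+}`)
      entryPoints:
        - web
      middlewares:
        - redirect-to-https
  middlewares:
    redirect-to-https:
      redirectScheme:
        scheme: https
        permanent: true
&lt;/code&gt;&lt;/pre&gt;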
&lt;h2&gt;Overall&lt;/h2&gt;
&lt;p&gt;The rough edges during migration were frustrating but they were all one-time problems. Now that everything&#39;s running, deployments are boring in the best way — push, wait thirty seconds, done. I should have done this earlier.&lt;/p&gt;
</content>
  </entry>
</feed>