AI Usage
Let’s talk about the elephant in the room. Or rather, the flood of AI-generated content out there.
Honestly, it’s pretty dystopian that I even have to write a dedicated page for this to begin with. But in an era where every second blog post reads like a robot trying to mimic human enthusiasm, I want to be fully transparent about what actually goes on behind the scenes here.
Wait, so you actually write all of this?
Every single rant, architecture deep-dive, and obscure network protocol exploration on this site comes straight from my own brain.
The stories are my actual experiences, the pain of debugging is entirely genuine, and the code snippets are typed out by yours truly. Those code blocks are usually written after hours of staring at logs.
Well, except for that one time in 2021 when I let GitHub Copilot write a few sentences in a post about hiring challenges, just to see if anyone would notice… But I digress!
I genuinely don’t use AI to write my posts. There’s no fun in having an algorithm hallucinate a narrative about routing failures or cloud infrastructure quirks. The soul of this blog is the human element, complete with all its flaws and raw frustration.
Okay, but where do LLMs come into play?
I do use Large Language Models occasionally, but their scope is strictly limited.
Think of them as my personal, overly strict grammar linters. English is messy, and when I’m quickly jotting down thoughts after finally getting a cluster to behave, typos happen, of course. I use LLMs solely to:
- Proofread my drafts and catch glaring spelling mistakes.
- Smooth out clunky sentences so they flow a bit better.
- Provide minor stylistic adjustments without altering my voice.
They are an editing tool, nothing more. The substance, the opinions, and the technical analysis are all 100% human-crafted.
Hopefully, that clears things up. Now back to breaking things on purpose!