Building an anycast authoritative DNS infrastructure for funsies
It’s been some time since I wrote something, and frankly, I blame DNS.
Always blame DNS, right? It’s the silent killer of Friday afternoons and the invisible glue holding this fragile house of cards we call the internet together.
But what if you didn’t just consume DNS? What if you actually owned your entire DNS infrastructure?
Why do this to yourself?
Historically, running your own authoritative DNS was a nightmare reserved for masochists.
You had to deal with BIND9 config files that looked like ancient hieroglyphs. Then came the DDoS attacks, the routing loops, and the constant fear of a single point of failure taking down your entire digital existence. T_T
But despite all the pain, owning your DNS infrastructure grants you absolute freedom.
You’re no longer tied to a provider’s feature set. No more waiting for their API rate limits to reset or dealing with their proprietary quirks. You are the master of your own domain. Literally.
The old way of doing things
Let’s look at how we used to build this stuff.
Typically, you’d have a Primary Authoritative server somewhere, sweating under the load. Then you’d sprinkle a few Secondaries across different datacenters.
graph TD
    User([End User]) -->|DNS Query| Sec1
    User -->|DNS Query| Sec2
    User -->|DNS Query| Sec3
    Primary[Primary Authoritative Server] -->|AXFR/IXFR| Sec1[Secondary EU]
    Primary -->|AXFR/IXFR| Sec2[Secondary US]
    Primary -->|AXFR/IXFR| Sec3[Secondary AP]
It worked, sure. But if an end user in Tokyo hit your secondary in Frankfurt… well, they had enough time to grab a coffee before the NXDOMAIN came back.
Latency was terrible, and routing was entirely dependent on GeoIP tricks that broke half the time.
Magic Anycast Dust
Enter Anycast. It’s not a protocol; it’s a routing trick that feels like pure magic.
Instead of giving each server a unique IP address, you give all your servers the same IP address.
You announce this IP from multiple locations using BGP. The internet’s routing tables do the heavy lifting, automatically sending the user to the geographically closest (or at least, network-closest) node.
Boom. DNS greatly benefits from this because DNS is mostly UDP. It’s connectionless. You want to ask a question, get an answer, and move on with your life as quickly as possible.
Anycast is an incredibly versatile tool for keeping response latencies as low as the network will allow.
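To make that concrete: each node originates the shared prefix locally and exports it to its upstream over BGP. Here is a minimal BIRD 2 sketch of that idea — the prefix, AS numbers, and neighbor address are all placeholders, not the real Anycst.Net setup:

```
# bird.conf — sketch only; prefix, ASNs, and neighbor IP are placeholders
router id 203.0.113.10;

protocol device { }

protocol static anycast_v4 {
    ipv4;
    # The shared anycast prefix, originated identically from every node
    route 192.0.2.0/24 blackhole;
}

protocol bgp upstream {
    local as 64512;
    neighbor 203.0.113.1 as 64511;
    ipv4 {
        import none;
        # Announce only the anycast prefix, nothing else
        export where proto = "anycast_v4";
    };
}
```

With every node announcing the same prefix, BGP path selection does the rest: users land on whichever node is network-closest to them.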
Cooking up Anycst.Net
So, naturally, I had to build my own. I call it Anycst.Net.
I know what you’re thinking. “Not another side project!” Yes. Another one. Bear with me.
Anycst.Net is my baby. It consists of anycasted DNS secondaries distributed globally. But the real secret sauce is the backend.
The Master node is completely hidden from the public internet. It only talks to the secondaries through a bespoke, low-latency WireGuard tunnel mesh.
# A tiny peek into the WireGuard mesh config
peers:
  - name: sec-tokyo
    endpoint: 203.0.113.88:51820
    public_key: "xXsuperSecretKeyXx="
    allowed_ips: "10.53.0.2/32"
  - name: sec-frankfurt
    endpoint: 198.51.100.12:51820
    public_key: "yYevenMoreSecretYy="
    allowed_ips: "10.53.0.3/32"
Because of this mesh, change propagation between the DNS nodes happens in milliseconds, no matter where they are physically located.
Update a record on the Master, and bam, the entire globe has it almost instantly. Heaven.
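For illustration, each mesh entry ultimately boils down to a plain WireGuard peer definition. A wg-quick-style sketch of the master’s side, reusing the placeholder keys and endpoints from the snippet above:

```
# /etc/wireguard/mesh0.conf on the master — sketch; keys and endpoints
# are the same placeholders as in the mesh config above
[Interface]
Address = 10.53.0.1/24
PrivateKey = <master-private-key>
ListenPort = 51820

[Peer]
# sec-tokyo
PublicKey = xXsuperSecretKeyXx=
Endpoint = 203.0.113.88:51820
AllowedIPs = 10.53.0.2/32

[Peer]
# sec-frankfurt
PublicKey = yYevenMoreSecretYy=
Endpoint = 198.51.100.12:51820
AllowedIPs = 10.53.0.3/32
```

Each secondary mirrors this with the master as its single peer, so every DNS node sees every other node on a flat 10.53.0.0/24 overlay.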
Knot-DNS and QUIC
Now, you can’t build this kind of Ferrari and put a lawnmower engine in it. BIND was out. PowerDNS was an option. But I went with Knot-DNS.
Honestly, Knot-DNS is an absolute beast.
Its performance is staggering, and its flexibility for these types of edge setups is unparalleled. It just chews through queries and asks for more.
But here is the real game changer: AXFR/IXFR over QUIC.
Standard zone transfers over TCP are slow, prone to blocking, and generally a pain when dealing with packet loss on long-haul links.
By pushing zone transfers over QUIC, we get encrypted, multiplexed, UDP-based transfers with no head-of-line blocking — a single lost packet no longer stalls the whole transfer the way it does with TCP.
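In Knot terms this only takes a few lines of knot.conf on each side, assuming Knot 3.2 or newer. A sketch — the tunnel IPs and zone name are illustrative:

```
# knot.conf — sketch; tunnel IPs and the zone name are illustrative

# On the hidden primary: accept QUIC connections on the tunnel address
server:
    listen-quic: 10.53.0.1@853

# On each secondary: pull the zone from the primary over QUIC
remote:
  - id: hidden-primary
    address: 10.53.0.1@853
    quic: on

zone:
  - domain: example.com
    master: hidden-primary
```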
# Knot logs showing QUIC in action
info: [example.com] zone transfer, incoming IXFR over QUIC from 10.53.0.1
info: [example.com] zone transfer, completed successfully
It makes plain TCP zone transfers look like a dial-up modem.
What’s next?
I don’t blame you if you think this is massive overkill for a personal project. It is.
But it’s also the foundation for something bigger.
We’re currently working closely with NordNIC to become a domain reseller as part of r0cket.cloud.
If everything goes according to plan, we hope to provide full, blazing-fast DNS capabilities for all r0cket.cloud customers by 2027.
Building this was a fun weekend project that got slightly out of hand, but the results speak for themselves. We’ll see how the integration pans out!