Setting Up DNS
The first service I needed to get up and running was a DNS server. It's not strictly necessary, but I wanted to be able to give family and guests domain names to reach our services instead of IP addresses. A simple quality-of-life improvement. This is how I set it up.
Public vs Private Domain Names
While researching DNS software, I needed to decide what domain I would use. I chose a public domain name so that I could use Caddy as a web proxy and get TLS certs, which, in my head, was a really cool idea. Especially since Caddy has a bunch of DNS plugins that would let me use Let's Encrypt without opening a port in my firewall. I did run into an issue, which I'll get to in a minute.
First, let's talk domain names and DNS. Almost everyone today is familiar with a domain name. You buy one from a registrar, point it to an IP, and "ta-dah!" you have a website. That bit on the end (.com, .net, .org, etc.) is called a Top-Level Domain (TLD). Generally, it's recommended to own whatever domain you use inside your network. Lots of businesses do it; it's very common. The IETF and ICANN have also set aside a list of TLDs that anyone can use in their own network without fear of them ever becoming real domain names on the internet. They are:
.internal
.test
.example
Now what's great is that you could use these to make really short domain names for your services, like stream.internal for your media server and office.internal for NextCloud. Rad. The only downside is that you would need to run everything over HTTP, or run a certificate authority if you want or need TLS certificates. Running everything over HTTP isn't crazy, and I'm sure most people won't notice or care. Running an internal certificate authority is too much work for almost no reward (at least for a home network).
Downsides aside, it's a great option.
Now, I wanted to be fancy and have TLS certificates, so I got a real domain and set it up internally. Functionally, it works the same, except when you're using DNS over HTTPS. I learned that Firefox has this on by default and will ignore your DNS settings in favor of its DNS-over-HTTPS provider. An internal-only TLD wouldn't have this problem, because Firefox doesn't send those TLDs to its DNS-over-HTTPS provider.
I've tried this in Firefox, Chromium, Vivaldi, and GNOME Web. Only Firefox does this, and it can be disabled (which I did). I haven't tested regular Chrome, so I'm not sure whether it behaves the same.
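If you want to check or flip that switch yourself, the toggle lives under Settings, then Privacy & Security, then DNS over HTTPS. To my knowledge of current Firefox builds, it's also exposed as a preference in about:config:

network.trr.mode = 5

A value of 5 means DNS over HTTPS is explicitly off, and Firefox falls back to whatever resolver the operating system is configured with.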
The Software: PowerDNS
My research turned up a lot of DNS server options. Right off the bat, I ditched BIND. It's very capable, but I had no interest in writing DNS records by hand, and there wasn't a simple UI for managing it. The same went for most other options: there's a lot of great DNS software out there, but it all takes a fairly manual approach to editing domain entries. I know myself, and I'm not going to remember what to do on the few occasions I'll need to edit something. So I chose PowerDNS and PowerDNS-Admin.
PowerDNS is a really solid DNS server with a fairly straightforward setup. PowerDNS-Admin is a Python app that uses SQLite to store data and pushes changes to PowerDNS via its API. Setting them up was new to me, but not terribly difficult.
The setup requires three running services. The first is the PowerDNS Authoritative Server, which stores the custom domain records for all of our services. The second is the PowerDNS Recursor, which goes out to find domain records and serves them to whoever requests them (basically like the public DNS servers you probably use now). We also point the Recursor at our Authoritative Server and say that any request for our chosen domain name needs to look there. This is what lets the rest of the internet keep working while we use custom domain records. Finally, we set up PowerDNS-Admin to manage those custom records.
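Before diving in, here's a rough sketch of how a request flows through this setup (the ports are the ones I'll configure below, and custom.com is standing in for whatever domain you use):

client -> Recursor (port 53)
            |- custom.com names -> Authoritative Server (port 5300, inside the pod)
            '- everything else  -> the internet's root/TLD servers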
Setting up the Containers
We have a plan, so now we implement it. MicroOS comes with Podman, a container runtime like Docker that works in almost the same way. One of its really cool features is Pods. A Pod is a group of containers; anyone familiar with Kubernetes will recognize the concept. What's great about pods is that all of the containers can communicate with each other as if they were services running on one machine. Additionally, only the ports defined on the pod are exposed to the rest of the world. This lets the recursor and authoritative servers talk to each other without exposing the authoritative server to anything else. Same with the API used between services. The only thing the rest of the world sees is the DNS port and the web UI.
So I started with the Pod by creating a new pod file at /etc/containers/systemd/pdns.pod. It looks like this:
[Unit]
Description=PowerDNS and PowerDNS Admin

[Install]
WantedBy=default.target

[Pod]
PublishPort=53:5353
PublishPort=53:5353/udp
PublishPort=9191:80
If you're familiar with writing systemd services, this will look very similar. Podman has done a pretty good job of documenting these settings as well; you can find them in the podman-systemd.unit man page.
Our pod isn't doing much. It exposes our DNS port (53) from the recursor and our web port (9191) from the Admin UI using the PublishPort setting. If you're unfamiliar with containers, the first number is the port on the outside of the container and the second number is the port on the inside. In our DNS port above (53:5353), the rest of the network sees port 53, which forwards to our recursor running on port 5353. Docker and Podman assume you're only using TCP by default; if you need UDP as well, you have to set it explicitly. If you're curious why I set the ports the way I did, I'll get to that at the respective containers.
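Once everything is up (we'll get to starting it below), you can sanity-check the published ports from the server itself. This assumes the iproute2 ss tool is available, which it should be on MicroOS:

podman pod ps
ss -tulpn | grep -E ':(53|9191)\b'

The first command lists the pod along with its port mappings; the second confirms the host is actually listening on 53 (TCP and UDP) and 9191.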
The Authoritative Server
The authoritative server, as stated above, is where our custom domain records will live. PowerDNS supports a bunch of database backends; I'm just using SQLite because I'm not doing anything fancy.
Similar to the pod, we need to create a container file at /etc/containers/systemd/pdns-auth.container:
[Unit]
Description=PowerDNS Auth

[Install]
WantedBy=multi-user.target default.target

[Service]
Restart=always

[Container]
Image=docker.io/powerdns/pdns-auth-49
AutoUpdate=registry
Pod=pdns.pod
Volume=/srv/pdns/auth/conf:/etc/powerdns/pdns.d:Z,U
Volume=pdns-auth-data.volume:/var/lib/powerdns
So here are some notes about what's going on.
Restart=always tells systemd that if the container shuts down for any reason, it needs to be started back up.
Image=docker.io/powerdns/pdns-auth-49 is the container image we're using: the official image made by PowerDNS, pinned at version 4.9. When the container starts, Podman will automatically download this image. It's also important to give the full URL because of the next setting.
AutoUpdate=registry makes updating the container really easy. We log into the server, run podman auto-update, and Podman will check each container for this setting, update it if possible, and then restart the container.
Pod=pdns.pod tells Podman that this container is part of the pdns pod we set up earlier. This allows the containers to share a network and talk to each other over localhost.
We have two volumes attached. The first is our configuration directory. It tells the container to mount /srv/pdns/auth/conf from outside the container to /etc/powerdns/pdns.d inside the container. The extra important bit here is the pair of letters after the final colon: Z,U. These letters do two things. Z updates the SELinux labels so the container can access the directory and its files. Yes, MicroOS runs SELinux; frustratingly, it does not include any tools to manage it. You can technically install them with transactional-update, but I won't be doing that. The other option, U, updates the file owners on our server to match what's being used in the container. Some containers don't run as root for security reasons, and PowerDNS is one of them. That's not documented anywhere except the Dockerfile, and I had to do some digging to find out why the container kept failing to find its config. That's why.
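If you're curious what those two options actually did, you can look at the directory from the host after the container has started once:

ls -lZ /srv/pdns/auth/conf

You should see the files labeled with an SELinux container type (container_file_t) and owned by whatever unprivileged UID the PowerDNS image runs as, rather than root.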
The other volume is where the database will be kept. While I mount config files to server locations, I tend to keep database files in a volume. I never touch the files, and using a volume eliminates extra headaches (like the one I ran into with the config files). You'll notice that this setting references a .volume file. Setting that up is easy. Create a new file at /etc/containers/systemd/pdns-auth-data.volume and write the following to it:
[Volume]
That's it. That's all you need. Podman will do the rest.
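If you ever need to find that database on disk, Podman can tell you where the volume lives. One note: as I understand Quadlet's defaults, the generated volume is named systemd-pdns-auth-data unless you set VolumeName= in the .volume file:

podman volume ls
podman volume inspect systemd-pdns-auth-data

The inspect output includes the Mountpoint path on the host.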
With the container set up, we need to configure our Authoritative Server. As I said in my last post, I'm stashing all of that in the /srv directory. It's empty, unused, and ready to go. I follow the pattern of either container/volume or pod/container/volume. For our Auth Server, I made the directory /srv/pdns/auth for all my authoritative server needs. Inside that folder I made a conf directory and created my pdns.conf configuration file with the following contents:
local-address=127.0.0.1
local-port=5300

webserver
api
api-key={api key}
webserver-address=0.0.0.0
webserver-allow-from=0.0.0.0/0
webserver-password={webserver password}
local-address and local-port control how the server listens for DNS requests. I set local-address to 127.0.0.1 so it only listens to requests from within the pod. For some reason, it wasn't responding to requests when I had it set to 0.0.0.0. That should have worked as well, but the recursor couldn't talk to it that way. I chose 5300 as the port because I have two DNS servers running (auth and recursor), and the container runs as an unprivileged user, so no ports under 1024.
The next block of settings turns on the web server and API server. These are needed so that the admin UI can send commands to update the records. It isn't obvious from the config, but the default port for the webserver is 8081.
Be sure to generate an api-key and a webserver-password. I used my password manager to do this. I don't recall why I needed the webserver password; I don't use it anywhere else, but it didn't want to work without it.
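If you don't have a password manager handy, anything that spits out a long random string will do the job. For example:

openssl rand -base64 32

Run it twice and paste one result into api-key and the other into webserver-password.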
With that, your Authoritative Server is ready.
The Recursor Server
The Recursor is the DNS server that all of your devices will actually use. When a device asks for a DNS record, the recursor reaches out to the internet to figure it out. It has its own independent way of seeking out records, so you don't have to set an upstream DNS server (like Google's 8.8.8.8 or Cloudflare's 1.1.1.1) for it to work. All we have to do is tell it where to get our custom domain records.
Starting with our container file at /etc/containers/systemd/pdns-recursor.container, I gave it the following:
[Unit]
Description=PowerDNS Recursor

[Install]
WantedBy=multi-user.target default.target

[Service]
Restart=always

[Container]
Image=docker.io/powerdns/pdns-recursor-51
AutoUpdate=registry
Pod=pdns.pod
Volume=/srv/pdns/recursor/conf:/etc/powerdns/recursor.d:Z,U
You'll notice that it's very similar to our other container. We have a different container image, which is also an official PowerDNS one, and it attaches itself to the same pod. It has just one volume, which points to its config directory. Like the authoritative server, we need to set the Z,U options on it for it to work.
The config file is as easy as the container file. I made a new directory at /srv/pdns/recursor/conf and created a recursor.conf with the following:
local-address=0.0.0.0
local-port=5353
forward-zones=custom.com=127.0.0.1:5300
Again, like the authoritative server, we start with the local-address and local-port settings. This time we definitely need to set the address to 0.0.0.0 so that the server can listen to requests from outside the pod. We set our port to 5353 because, again, the container runs as an unprivileged user.
The next setting is forward-zones. This tells the recursor to look at specific servers for certain domains. Here it says to look for custom.com records at 127.0.0.1:5300, which is our authoritative server. If you were using a private TLD (say .internal, for example), you would set the forward-zones as:
forward-zones=internal=127.0.0.1:5300
If you want to support multiple domains, you can comma-separate them:
forward-zones=internal=127.0.0.1:5300, test=127.0.0.1:5300
Okay! Recursor Server out of the way.
PowerDNS-Admin
PowerDNS-Admin is a Python application. The one we're using is the official container, but it's no longer updated. That's not because it's been abandoned; the application grew so quickly that it became difficult to maintain, so the developers are working on a rewrite to make future development easier. I'm not sure when it will be finished, but this works for us for now.
As before, let's create a new container file at /etc/containers/systemd/pdns-admin.container with the following:
[Unit]
Description=PowerDNS Admin

[Install]
WantedBy=multi-user.target default.target

[Service]
Restart=always

[Container]
Image=docker.io/powerdnsadmin/pda-legacy:latest
AutoUpdate=registry
Pod=pdns.pod
Environment=SECRET_KEY={secret key}
Volume=pdns-admin-data.volume:/data
You'll notice a new setting, Environment. As you can guess from the name, it sets an environment variable inside the container when it's created. The variable we're setting here is an encryption key that keeps session data safe. Like above, I used my password manager to generate one.
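If you'd rather generate it on the server, a Python one-liner feels appropriate for a Python app; any long random string should work:

python3 -c 'import secrets; print(secrets.token_hex(32))'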
PowerDNS-Admin stores all of its information in a SQLite database, so we've added a volume here for that. I created a new volume file at /etc/containers/systemd/pdns-admin-data.volume and put the following inside:
[Volume]
Starting Our New Service
With our pod, containers, volumes, and config files defined, we're ready to start everything up. But first, just like with any systemd service change, we need to reload the unit files with:
systemctl daemon-reload
Pods show up in systemd as name-pod. So for our pod, pdns.pod, the systemd service is called pdns-pod. We can start it with:
systemctl start pdns-pod
And we have access to the other systemd commands:
systemctl restart pdns-pod
systemctl stop pdns-pod
systemctl status pdns-pod
We can also look at logs for this service by running:
journalctl -xeu pdns-pod.service
This is especially useful if there are issues with any of the containers.
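One quirk worth knowing: if a quadlet file has a syntax error, systemd may simply not generate the service at all after a daemon-reload, with nothing obvious in the logs. You can run Podman's generator by hand to see the errors; the path below is where it lives on most distros, so adjust if yours differs:

/usr/lib/systemd/system-generators/podman-system-generator --dryrun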
In addition to the pod, each of the containers and volumes is created as a systemd service as well. Containers show up as just their name, and volumes, like pods, follow the pattern name-volume. Our complete list of services for this pod is as follows:
pdns-pod.service
pdns-auth.service
pdns-recursor.service
pdns-admin.service
pdns-auth-data-volume.service
pdns-admin-data-volume.service
Each of these can be managed the same way using systemctl, and each can have its logs inspected with journalctl. This also helps in those times when you need to restart just one container instead of the whole pod. As you can guess, running the commands against pdns-pod starts/stops/restarts everything.
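This is also a good moment for a smoke test from another machine on the network. Swap in your server's actual IP for 192.0.2.10 below:

dig @192.0.2.10 example.org

That should resolve through the recursor even before you've created any custom records. Once your zones exist (next section), the same command with one of your own domain names should answer from the authoritative server.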
Setting Up PowerDNS-Admin
We're almost there! The final step is to set up PowerDNS-Admin. I went to my server's IP on port 9191 (example: http://127.0.0.1:9191). Once it loaded, I set up the first admin user and then logged in.
After logging in, the first important step is tying the Admin app to the Authoritative Server. That can be done by going to Settings, and then to Server. There will be a form where you put in the URL, API key, and server version.
The URL is http://127.0.0.1:8081, which works because the Admin container shares the pod's localhost with the auth server. The key is the same one I set above in my auth config file. And the version is 4.9.1; at least, the version that I'm running is 4.9.1. If you're following these steps, you can find your PowerDNS version by running systemctl status pdns-auth. At the bottom of the service status will be some logs; press the right arrow key and look for a line that says something like: PowerDNS Authoritative Server 4.9.1
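Alternatively, you can ask the server directly. If I understand Quadlet's naming correctly, the container is called systemd-pdns-auth unless you set ContainerName=, so something like this should print the version:

podman exec systemd-pdns-auth pdns_server --version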
Put those values in the form, and hit Save Settings.
The Conclusion
From there, all that's left is to create a new zone and some records, which is out of scope for this post. Once I was done making records, I changed the DNS server of my local network to point at this one. I haven't set a public DNS server as a backup, and I'm not sure how that would behave. Namely, I would want my server to be the primary and the public DNS to be a backup in case my server goes down, but I don't know whether clients honor that ordering or take a "see who answers first" approach. I should probably know this or look it up, but I have not.
Either way, we're up and running. Next step is the web proxy powered by Caddy.