DNS With DNSSEC Validation

Intro On What & Why

This has been (and still is…) an interesting and somewhat challenging project. Challenging mostly in that it involves things I’ve never done before.

I learned DNS stuff back when “bind” was the only choice and there was no encryption nor certificate validation. All that is either “new to me” in DNS, or requires some “unlearning” of things I’ve done for decades. Plus, at the best of times, I only did DNS “at a distance”. There’s a lot of “stuff” in it that I’ve always just ignored as either I didn’t need it to make my tablet talk to the Telco router at home; or at work “I had a guy” working for me who was good at it on the industrial scale.

So y’all get to watch me flounder around a little bit on some of this ;-)

First, a little background on the set-up. I’ve got PiHole running on 2 Raspberry Pi boards, plus I’ve got the Telco Boundary Router (that they like to call a modem as it talks to phone wires). These sometimes like to argue with each other.

The Telco Router is SURE it is in charge, plus the Phone Company can fiddle with the config if they like (meaning anything I do can evaporate overnight or be diddled in an “update”). Sometimes that’s a feature, often it isn’t. It wants to be the DHCP server (handing out IP numbers and domain names) and it wants to be the DNS server target (though the actual Telco DNS server is upstream somewhere).

Now it’s not that I don’t love and trust AT&T, it is just that they have been in bed with various Government Agencies since at least the 1960s, never saw any information they didn’t want to hand over to TLAs at the drop of a hint, and are not very efficient. So DNS lookups to their server have a few negative properties: Slow. Subject to Government Edict, so who knows what is dropped from it. Subject to all manner of Government Snoops requesting records (it is now just Standard Operating Procedure to get / hand over all Telco information). A constant target of Black Hats. And a few more minor things…

So I choose to use other upstream DNS providers.

Which means I run my own DNS servers AND often do a manual configuration of various equipment to point the gear at those servers, or sometimes at the upstream providers.

Traditional DNS requests are sent “in the clear”, meaning that anyone snooping on the wire (OR whoever runs your upstream DNS server, just by looking in the logs) can see what sites you are visiting via their DNS resolution. They can also then attempt to feed you bogus lookup returns to send you to malware sites (Man In The Middle attack and / or data leakage). This is done by Black Hats and by Agencies (with full Telco cooperation) when convenient for them. Basically it is a HUGE privacy and security hole that is a common exploit.

So for a decade or two folks have been trying to work out ways to “fix that” while not breaking the internet in the process. For this reason there are a few competing ways to do some of these things: DNSCrypt vs TLS vs SSL vs DNSSEC vs… In particular, “bind” has become a legacy bloated beast with attempts to do it all. So some folks just did a clean rewrite and named it “unbound” (bind, unbound; yeah, yet another cutesy crap name, but the software is good).

Sidebar On Squid Proxy Server & History

Oh, and installing a Squid Proxy Server is also a nice protective measure. It can cache some data locally speeding things up, plus any attack that tries to crawl back down the wire hits the Proxy Server and not your important personal machine.

https://chiefio.wordpress.com/2018/12/22/installing-squid-proxy-server-on-devuan/

First step was something I did years back. Install a local DNS Server. I ran one on Alpine Linux (linux router) using dnsmasq (yet another DNS server, but cut way back from “bind” to something a mortal can configure…) and some custom ban lists. Then along came PiHole. A marvelous little malware and advertisement blocking DNS server.

That was good for a while. The local DNS server does ONE lookup on a name then caches it locally. Subsequent DNS lookups from any machine in the house don’t need to talk out the Telco Wire, so information leakage is reduced some. IF I get 100 DNS lookups for chiefio.wordpress.com in a day, only ONE goes to the upstream DNS provider, and that isn’t the TELCO once you set this up. Instead of being in their logs, they need to ‘tap the wire’ with a snoop to get that information (which they likely are doing anyway … firewalls now doing Deep Packet Inspection and all).

And yes, putting this all in a VPN hides all of it from the Telco / TLA@telco. But just moves the question / exposure to the VPN provider.

The local cache of lookups makes things MUCH faster. I was really surprised how much faster. Some lookups go from a couple of seconds to a couple of milliseconds. IF you have a dozen of those in a web page, well, it adds up. Removing some of the high page weight crap that advertisers shove at you speeds things up a lot too.

BUT it is still going “in the clear” and it is still subject to Man In The Middle DNS spoofing / capture redirecting your MyBank.com to their “FraudBank.ch”…

In this posting I’m going to use DNSSEC to help block MITM attacks. Encrypting the tunnel will be left for later (because it looks complicated and I’ve not worked it out yet…). So this will still be “in the clear” DNS, but authenticated with cryptographic signatures. You will be getting good and proven DNS lookups and, incidentally, blocking crap sites that are bogus.

First thing to do is get a Raspberry Pi (or similar SBC Single Board Computer) and install Linux on it. Armbian works fine and I ran it for a good many years. But I’ve also had odd problems from System D, so I now run Devuan. It can be a slightly long process to get the current Devuan installed, but IMHO is worth it. The first part of that posting (up through updating the OS to current, but skipping the i2p bits) gives you a clean Beowulf Devuan 3.0:

https://chiefio.wordpress.com/2021/04/09/raspberry-pi-model-3-devuan-beowulf-with-i2p/

Then toss on a copy of PiHole:

https://chiefio.wordpress.com/2016/06/01/pi-hole-where-to-stick-advertizing-you-dont-want/

Into The Present With “unbound”

And I also installed “unbound”. It is trivial to install, though harder to figure out what to configure. After the usual:

apt-get update
apt-get upgrade

then:

apt-get install unbound binutils

You may not need the binutils stuff, but it is good to have anyway. (If you want “dig” for testing lookups, that lives in the dnsutils package.)

At that point, the fun begins…

I followed a couple of models in doing this. One is:
https://docs.pi-hole.net/guides/dns/unbound/

It is pretty good, but a bit SystemD centric (so you get “systemctl restart foo” instead of “service foo restart”…)

Details Of My config File

The config file for “unbound” is in /etc/unbound/unbound.conf but the bulk of what you want in it is not.

root@XU4uDevuan3:/etc/unbound# head unbound.conf
# Unbound configuration file for Debian.
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.

#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.
include: "/etc/unbound/unbound.conf.d/*.conf"

It wants you to put stuff in a gaggle of individual files in a conf.d directory for local mods. It also points you at a place where it has installed a copy of a FULL configuration file… “See /usr/share/doc…”

So I just go to the bottom of the config file; in vi it is easy to read another file in at that point. Copy that example file in as a template for your configuration. For those unsure how to do this in an editor, you can just concatenate the two files with a linux command:

cat /usr/share/doc/unbound/examples/unbound.conf >> /etc/unbound/unbound.conf

Then you get to slog through a thousand lines of example comments and options trying to figure out which ones to change / turn on or off. No, really, 1000 lines:

root@XU4uDevuan3:/etc/unbound# wc -l /usr/share/doc/unbound/examples/unbound.conf 
987 /usr/share/doc/unbound/examples/unbound.conf

987 + what was already in the /etc/unbound/unbound.conf
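If you want to see only the active settings amid all those comments, a grep filter does it. A minimal sketch, demonstrated on a tiny stand-in file (on the real box, point the grep at /usr/share/doc/unbound/examples/unbound.conf instead):

```shell
# Show only the active (non-comment, non-blank) lines of a config file.
# The stand-in file here mimics the unbound.conf style used in this post.
cat > /tmp/sample-unbound.conf <<'EOF'
# a comment
server:
	verbosity: 1

	# port: 53
	 port: 5335
EOF
# -E: extended regex; -v: invert match.
# Drops lines that are blank (or whitespace-only) or whose first
# non-whitespace character is '#'.
grep -Ev '^[[:space:]]*(#|$)' /tmp/sample-unbound.conf
```

That prints just the three live lines (server:, verbosity, port), which is roughly how the 186-line SHORT.conf below was produced.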

The good news is that almost all of it is just comments telling you what things do, and pointing out the default that is already set the way you want it. I stripped out the comments and blank lines and made a minimal list for this posting. Do note that I make NO CLAIM that this is best or even substantially correct. Just that it seems to work right. It is 186 lines, but I’m going to inject comments along the way. The rest of it was left as-is in the model:

root@headless1:/home/chiefio# cat SHORT.conf 
include: "/etc/unbound/unbound.conf.d/*.conf"
server:
	verbosity: 1

	# specify the interfaces to answer queries from by ip-address.
	# The default is to listen to localhost (127.0.0.1 and ::1).
	# specify 0.0.0.0 and ::0 to bind to all available interfaces.

I left it as 127.0.0.1 on one of my servers, and added an interface line for the actual IP on the other. Both seem to work fine. In debugging I had to turn verbosity up to 2 to find out what I was doing wrong. It is now set to 0 on the other server, which prevents big log files…
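As a sketch, on the box where I added the interface line, the relevant bit looks like this (the 192.168.16.253 address is this LAN’s; adjust to yours):

```
	 interface: 127.0.0.1
	 interface: 192.168.16.253
```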

	# port to answer queries from
	# port: 53
	 port: 5335

	# specify the interfaces to send outgoing queries to authoritative
	# server from by ip-address. If none, the default (all) interface
	# is used. 
	 outgoing-interface: 192.168.16.252

I don’t know why 5335 was chosen, it was just in the model I was following so I kept it. 53 is the usual DNS port. 853 is used by DNS over TLS, and 5353 by multicast DNS. Whatever. It just needs to match what you set in the PiHole config as the PiHole upstream IP#port.

You likely don’t need to explicitly call out the upstream interface, but I just wanted it constrained as the Pi M3 does have WiFi too.
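For reference, the matching Custom upstream entry in the PiHole DNS settings page uses that IP#port form, so with the port above it is:

```
127.0.0.1#5335
```

You can also poke unbound directly from the command line to check it is answering on that port, with something like “dig @127.0.0.1 -p 5335 chiefio.wordpress.com” run on the Pi itself.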

In this next segment, I figured I was not going to have huge spikes like a full corporation, but 1 M was not that much memory to burn to assure a nice buffer.

	# buffer size for UDP port 53 incoming (SO_RCVBUF socket option).
	# 0 is system default.  Use 4m to catch query spikes for busy servers.
	# so-rcvbuf: 0
	 so-rcvbuf: 1m

	# buffer size for UDP port 53 outgoing (SO_SNDBUF socket option).
	# 0 is system default.  Use 4m to handle spikes on very busy servers.
	# so-sndbuf: 0
	 so-sndbuf: 1m

	# EDNS reassembly buffer to advertise to UDP peers (the actual buffer
	# is set with msg-buffer-size). 1472 can solve fragmentation (timeouts)
	# edns-buffer-size: 4096
	 edns-buffer-size: 1472

I set that to the value that “solves fragmentation” just because “why not?”

I raised TTL minimums as my goal is maximum DNS lookups kept on site, and I don’t care if I miss some site changing their TTL as they are doing maintenance or changing their IP Address. In professional operations I have set TTL down to a few minutes so as to drain cached values prior to an IP swap, but for home use? Do I care if I can’t get to the National Bank Of Ickystan for 8 hours while their IP change happens?

	# the time to live (TTL) value lower bound, in seconds. Default 0.
	# If more than an hour could easily give trouble due to stale data.
	# cache-min-ttl: 0
	 cache-min-ttl: 3600

	# minimum wait time for responses, increase if uplink is long. In msec.
	# infra-cache-min-rtt: 50
	 infra-cache-min-rtt: 350

	# Enable IPv4, "yes" or "no".
	# do-ip4: yes

	# Enable IPv6, "yes" or "no".
	# do-ip6: yes
	 do-ip6: no

I’m not ready to let these guys do IPv6 yet. I like having all my traffic behind a NAT Firewall boundary router. Just one more hurdle for an intruder to get past. At some point, yeah, I’ll set up a minimal IPv6 subnet with the DNS servers in it and let them talk straight to the Upstream DNS provider over IPv6. But not right now.

Then you must tell it what IP ranges are local and allowed to talk to it. By using the “non-routing” private ranges only (and no IPv6…) that limits the potential interactions with it.

	# control which clients are allowed to make (recursive) queries
	# to this server. Specify classless netblocks with /size and action.
	# By default everything is refused, except for localhost.
	# Choose deny (drop message), refuse (polite error reply),
	# allow (recursive ok), allow_setrd (recursive ok, rd bit is forced on),
	# allow_snoop (recursive and nonrecursive ok)
	# deny_non_local (drop queries unless can be answered from local-data)
	# refuse_non_local (like deny_non_local but polite error reply).
	# access-control: 0.0.0.0/0 refuse
	# access-control: 127.0.0.0/8 allow
	# access-control: ::0/0 refuse
	# access-control: ::1 allow
	# access-control: ::ffff:127.0.0.1 allow
	 access-control: 10.0.0.0/8 allow
	 access-control: 172.16.0.0/12 allow
	 access-control: 192.168.0.0/16 allow

“unbound” will not make the needed directory for the log file, but will make the logfile itself. So you need to do a:

mkdir /var/log/unbound
chown unbound:unbound /var/log/unbound

Otherwise at launch it will toss an error and not start. I also chose ascii time over decimal time…

	# the log file, "" means log to stderr.
	# Use of this option sets use-syslog to "no".
	# logfile: ""
	 logfile: "/var/log/unbound/unbound.log"

	# Log to syslog(3) if yes. The log facility LOG_DAEMON is used to
	# log to. If yes, it overrides the logfile.
	# use-syslog: yes
	 use-syslog: no

	# print UTC timestamp in ascii to logfile, default is epoch in seconds.
	# log-time-ascii: no
	 log-time-ascii: yes

This ‘grab root hints’ step is supposed to be a periodic task, once every few months. I just did it long hand and will automate later, maybe. Then I turned on maybe 1/2 of the “harden” options.

wget https://www.internic.net/domain/named.root

Then I moved that file (and renamed it) to /etc/unbound/root.hints
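For the “automate later, maybe” part, here’s a sketch of a monthly cron job that does the fetch-and-move in one go (the file name is my invention, and I’m assuming the sysvinit “service unbound restart” works on your box; not tested as a unit):

```
#!/bin/sh
# /etc/cron.monthly/roothints (sketch)
# Fetch a fresh root hints file; only replace the old one on success.
wget -q https://www.internic.net/domain/named.root -O /etc/unbound/root.hints.new &&
  mv /etc/unbound/root.hints.new /etc/unbound/root.hints &&
  chown unbound:unbound /etc/unbound/root.hints &&
  service unbound restart
```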

	# file to read root hints from.
	# get one from https://www.internic.net/domain/named.cache
	# root-hints: ""
	 root-hints: "/etc/unbound/root.hints"

	# Harden against out of zone rrsets, to avoid spoofing attempts.
	 harden-glue: yes

	# Harden against receiving dnssec-stripped data. If you turn it
	# off, failing to validate dnskey data for a trustanchor will
	# trigger insecure mode for that zone (like without a trustanchor).
	# Default on, which insists on dnssec data for trust-anchored zones.
	 harden-dnssec-stripped: yes

We’re getting near the end, hang in there ;-)

We need to tell it some addresses are not for sharing:


	# Enforce privacy of these addresses. Strips them away from answers.
	# It may cause DNSSEC validation to additionally mark it as bogus.
	# Protects against 'DNS Rebinding' (uses browser as network proxy).
	# Only 'private-domain' and 'local-data' names are allowed to have
	# these private addresses. No default.
	 private-address: 10.0.0.0/8
	 private-address: 172.16.0.0/12
	 private-address: 192.168.0.0/16
	 private-address: 169.254.0.0/16
	 private-address: fd00::/8
	 private-address: fe80::/10
	 private-address: ::ffff:0:0/96

	# Allow the domain (and its subdomains) to contain private addresses.
	# local-data statements are allowed to contain private addresses too.
	# private-domain: "example.com"
	 private-domain: "chiefio.lab"
	 private-domain: "chiefio.home"

One of the fun things about running your own “authoritative” DNS server for your own private domains is that you need not comply with the “rules” of whoever thinks they are in control of the name space. There’s a LOT of “rogue” domains in use (and a wiki page on them) and it is more than just the .onion domain.

I’ve chosen to accept that I may have “issues” if any formal .lab or .home network high level qualifier is put into production. Though I’ve not set up a DNSSEC key set and all, so I have to declare them outside the DNSSEC authentication:

	# if yes, perform prefetching of almost expired message cache entries.
	# prefetch: no
	 prefetch: yes

	# File with trusted keys, kept uptodate using RFC5011 probes,
	# initial file like trust-anchor-file, then it stores metadata.
	# Use several entries, one per domain name, to track multiple zones.
	#
	# If you want to perform DNSSEC validation, run unbound-anchor before
	# you start unbound (i.e. in the system boot scripts).  And enable:
	# Please note usage of unbound-anchor root anchor is at your own risk
	# and under the terms of our LICENSE (see that file in the source).
	# auto-trust-anchor-file: "/etc/unbound/root.key"

	# Ignore chain of trust. Domain is treated as insecure.
	# domain-insecure: "example.com"
	 domain-insecure: "chiefio.lab"
	 domain-insecure: "chiefio.home"

Again I’m setting things to “give me a number” first, and worry about it having changed last. Almost always that will work fine, and when it doesn’t, odds are I’m not interested anyway. It isn’t like the numbers change a lot for most places I go. (Though folks doing round robin and dynamic DNS can cause grief, but I’m not going to care if I get a miss, I’ll just come back in a few minutes and try again.)

	# Serve expired responses from cache, with TTL 0 in the response,
	# and then attempt to fetch the data afresh.
	# serve-expired: no
	 serve-expired: yes
	
	# Limit serving of expired responses to configured seconds after
	# expiration. 0 disables the limit.
	# serve-expired-ttl: 0
	 serve-expired-ttl: 60

	# Set the TTL of expired records to the serve-expired-ttl value after a
	# failed attempt to retrieve the record from upstream. This makes sure
	# that the expired records will be served as long as there are queries
	# for it.
	# serve-expired-ttl-reset: no
	 serve-expired-ttl-reset: yes 

Then there’s a section where you can put in a table of the stuff in your location, so you can serve your own IP numbers by name, if desired.

	# You can add locally served data with
	# local-zone: "local." static
	# local-data: "mycomputer.local. IN A 192.0.2.51"
	# local-data: 'mytext.local TXT "content of text record"'

	 local-data: "Netgear.chiefio.home. IN A 192.168.16.251"
	 local-data: "netgear.chiefio.home. IN A 192.168.16.251"
	 local-data: "PiHole.chiefio.home. IN A 192.168.16.252"
	 local-data: "pihole.chiefio.home. IN A 192.168.16.252"
	 local-data: "PiOne.chiefio.home. IN A 192.168.16.253"
	 local-data: "pione.chiefio.home. IN A 192.168.16.253"
	 local-data: "Telco.chiefio.home. IN A 192.168.16.254"
	 local-data: "telco.chiefio.home. IN A 192.168.16.254"

	 local-data: "Netgear.chiefio.lab. IN A 10.1.1.254"
	 local-data: "netgear.chiefio.lab. IN A 10.1.1.254"

Zones… there must be zones…

I likely have this configured somewhat wrongly. It’s a bastard 1/2 way between a recursive and an authoritative. The first “forward-zone” says to just forward requests to a couple of DNS providers. In this case, the “filtering” DNS provided by OpenDNS (now owned by CISCO, so I’ll likely change it later).

Not that I don’t trust CISCO, but they signed up for the Prism Program to give all your data to the TLAs, so even though that program is supposedly scrapped, they stay on the Asshole List…

Then I’ve got an auth-zone set up that looks at the root servers (using that root.hints file) and tries to start at the top of authority and work down to where the actual source of authority is for any given identity, then send the DNS request directly to the authoritative server. In theory, this means that should I do a lookup on bobsmachine.someco.uk, only the authoritative server for someco.uk would see that I was asking about bobsmachine… Everyone else in the food chain just gets “he wants something from auth server for someco.uk”. A BIG plus.

But I’m not sure if “unbound” chooses forward first or auth-zone first, or what. So this part needs some work. I’ll likely try to just shut off the forward-zone and see if it all still works. But OTOH, for initial bring up, using a forward-zone worked while I figured out the root.hints and such.

 forward-zone:
 	name: "."
 	forward-addr: 208.67.222.222
 	forward-addr: 208.67.220.220

 auth-zone:
	name: "."
	master: 199.9.14.201         # b.root-servers.net	
	master: 192.33.4.12          # c.root-servers.net	
	master: 199.7.91.13          # d.root-servers.net	
	master: 192.5.5.241          # f.root-servers.net	
	master: 192.112.36.4         # g.root-servers.net	
	master: 193.0.14.129         # k.root-servers.net	
	master: 192.0.47.132         # xfr.cjr.dns.icann.org	
	master: 192.0.32.132         # xfr.lax.dns.icann.org	
	fallback-enabled: yes	
	for-downstream: no	
	for-upstream: yes

Once all that is working, you point the PiHole at it instead of at a regular forwarding / recursive DNS upstream. This is done in one panel on the PiHole and is nearly trivial.

UN-buggering /etc/resolv.conf

But first, a word about /etc/resolv.conf. Due to various food fights over how to do DNS, the present process is a layer cake of folks all thinking they are in charge. DHCP, Wicd, Network Manager, etc. Various things try to “help” by constantly changing the contents of /etc/resolv.conf. I chose to just take a crowbar to it and lock it down so I KNOW where my DNS is resolving. “chattr +i /etc/resolv.conf” sets the “immutable” attribute and even root can’t edit the file. Yes, it means I also get to do a “chattr -i /etc/resolv.conf” any time I want to change it. OTOH, I no longer suddenly find myself in the att.net domain with my DNS resolution going back to the Telco…

I left in the “nag” about somebody ELSE thinking they were in charge… just as a nose tweak.

I’ve pointed DNS on this machine to itself and set the search and domain names to my choice… and LOCKED IT DOWN with chattr. I’ve also left in 2 nameserver entries, though commented out. This is useful in debugging. When getting “No Joy” on DNS and you really really want to check something in the browser on a web page… just swap them in and 127.0.0.1 out. Then lookups local to the machine go out to a recursive DNS provider, your browser works again, and you can get what you wanted, then swap back and try again…

root@headless1:/# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND — YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.0.1
search chiefio.home
domain chiefio.home
#nameserver 208.67.222.222
#nameserver 208.67.220.220

Configuring PiHole

Here’s a screen shot of the PiHole config page (click to embiggen):

PiHole DNS setting page


You just “uncheck” the “BIG NAMES” from the left hand column and “check” the custom names on the right, then enter your DNS server names. Because I have 2 of them, I have one entry for “self” and one that points at the other one. Likely a bit pointless as if “self” is having issues it is unlikely to be able to talk well to the other one, but who knows. Also note that in the above long config file, I do not have the interface IP defined as a DNS target for the .252 PiHole, only the 127.0.0.1 target. So a bit of an asymmetry. The .253 one can see requests both on 127.0.0.1 (from the local PiHole) and on the ethernet interface (for the other PiHole and testing), but the .252 can only see itself on 127.0.0.1 so only the PiHole or folks on the computer can talk to “unbound”.

Something else to decide which way to go…

Resource Usage & Swap Management

Here’s a ssh login / htop from the Pi Model 1 showing that with the LXDE Login out of the way, there’s very little resource usage for this thing.

R. Pi Model One Htop


Yeah, 98 MB of memory and 3.9% of CPU. CPU rises when the squid proxy is in use, but not by a lot. DNS load is nearly nothing. And yes, that says 8 GB of swap on it. Why?

Because an early attempt had swap use rise to about 1.6 GB over a couple of days. Not sure exactly why, but decided to change swappiness and such.

ems@devuan:/etc$ tail sysctl.conf
[...]
net.ipv6.conf.all.disable_ipv6=1
vm.swappiness=0
vm.vfs_cache_pressure=100
#vm.vfs_cache_pressure=500
vm.dirty_background_ratio=10
vm.dirty_ratio=20

So I shut off IPv6, set vm.swappiness to 0 so the kernel strongly avoids swapping out process pages, left vfs_cache_pressure at 100 (with a 500 line ready if old cached stuff needed more encouragement to leave), and then set background writeback of dirty pages to kick in at 10%, with applications blocked for a synchronous flush at 20% (that only gets reached if the background cleanup is having a slow day…). A “sysctl -p” loads the new settings without a reboot.

Now it does look like swap is basically unused (that 20 ish on swap was from when I was logged in using a full desktop and it is a ‘left over’)

So some swap tuning on low resource machines might be in order ;-)

Some Links

There’s a nice manual page on unbound here:

https://man.openbsd.org/unbound.conf

So if you want to know what those 1000 lines are doing, look there.

Next up for me is attempting to sort out that whole encrypting tunnel thing. TLS vs HTTPS and all. What do I want to do and how to do it.

I published this 1/2 way step (really more like 3/4 of the way…) as it gets a LOT of DNS Bogosity and risk and snooping out of the way. Plus, if you are already using a VPN, you can stuff your traffic inside it anyway.

Also on my ToDo List, is to see about turning on HTTPS DNS in my FireFox browser but pointed at MY DNS servers. Sort of a ‘two fer’. Blocking FireFox from sending my browser DNS lookups off site, and at the same time getting HTTPS DNS working on my servers.

There’s 2 directions for encrypted links ( upstream to servers and downstream to clients) plus at least 2 major protocols (TLS / HTTPS) and so it’s a bit of a 4-way puzzle to sort out to set it up.

That’s next on the ToDo list.

This is an interesting page at a German site. It lists the test cases too:

http://dnssec.vs.uni-due.de/

Test validation

dig sigok.verteiltesysteme.net @127.0.0.1 (should return A record)
dig sigfail.verteiltesysteme.net @127.0.0.1 (should return SERVFAIL)

If DNSSEC validation does not seem to work, check whether you’re using more than one DNS resolver and whether each of them has DNSSEC validation enabled. The most common configuration error is to use a secondary DNS resolver without DNSSEC validation. Upon validation error, the operating system will fall back to the secondary resolver and the security checks of the primary resolver will be moot.

I found it easiest to see the “SERVFAIL” notice in the PiHole Admin / logfile page where the things are a lot easier to read.

There’s an “unbound” tutorial here:

https://dnswatch.com/dns-docs/UNBOUND/

Where I’ll be spending some quality time trying to get a better handle on the encrypting bit and how forwarding interacts with auth-zone. Zones in general really.

Here’s a how-to for PiHole DNS over HTTPS (DoH), but using Cloudflare and SystemD stuff.

https://nathancatania.com/posts/pihole-dns-doh/#set-cloudflare-doh-as-the-upstream-dns-provider

I’m not a big fan of Cloudflare, but they are OK. I may set up one of mine this way as a first immersion… but needing a ‘special daemon’ seems a bit much.

Here’s another model. Just doing DNSSEC from PiHole is not as hard as going through unbound, but I’m doing it that way as unbound lets me get the encryption step too, plus move to auth-zone tree traversal and a lot more:

https://www.mpauli.de/dnssec-on-a-raspberry-pi-in-5-minutes.html

And some docs:

https://docs.pi-hole.net/guides/dns/unbound/#configure-unbound

Then some folks talking about it with hints:

https://github.com/pi-hole/docs/issues/207

And another POV:

https://dev.to/jldohmann/the-ultimate-ad-blocker-configuring-pi-hole-with-unbound-dns-20eo

So folks wanting to “Dig Here!” some more, y’all got your pointers.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

18 Responses to DNS With DNSSEC Validation

  1. E.M.Smith says:

    Well, that was interesting.

    I shut off the “forward” zone and just left it with the auth-zone. Then did a lookup on a site I knew I’d not visited / referenced in the last weeks. NYT.

    With verbosity set to “3”, you get a LOT of information. First the query:

    ems@devuan:/var/log/unbound$ dig nyt.com  @192.168.1.252 
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> nyt.com @192.168.1.252
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14388
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;nyt.com.			IN	A
    
    ;; ANSWER SECTION:
    nyt.com.		3600	IN	A	151.101.65.164
    nyt.com.		3600	IN	A	151.101.1.164
    nyt.com.		3600	IN	A	151.101.193.164
    nyt.com.		3600	IN	A	151.101.129.164
    
    ;; Query time: 114 msec
    ;; SERVER: 192.168.1.252#53(192.168.1.252)
    ;; WHEN: Sat Apr 24 19:28:50 UTC 2021
    ;; MSG SIZE  rcvd: 100
    

    At 114 ms, it was doing more than hitting cache. So the log file on the server:

    Apr 24 12:23:56 unbound[14910:0] info: start of service (unbound 1.9.0).
    Apr 24 12:23:58 unbound[14910:0] debug: auth zone . updated to serial 2021042401
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_new
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: resolving nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 1 labels
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  192.48.79.30#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33040 rrset=38111 infra=4149 val=33196 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  192.48.79.30#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: new target dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: new target dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: new target dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  208.80.124.13#53
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 3 labels
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 2 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  192.41.162.30#53
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 2 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  192.35.51.30#53
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 2 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  192.31.80.30#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33040 rrset=41221 infra=5085 val=33652 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  208.80.124.13#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: prime trust anchor
    Apr 24 12:28:50 unbound[14910:0] info: generate keytag query _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: resolving . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: validate keys with anchor(DS): sec_status_secure
    Apr 24 12:28:50 unbound[14910:0] info: Successfully primed trust anchor . DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: resolving _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] info: query response was NXDOMAIN ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query _ta-4f66. NULL IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_subquery event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: validated DS com. DS IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_state_initial event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: resolving com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 2):  com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: resolving (init part 3):  com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  192.42.93.30#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33520 rrset=43282 infra=5318 val=34618 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  192.41.162.30#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 1 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.44.65#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33520 rrset=45580 infra=5557 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  192.31.80.30#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 1 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.45.1#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33520 rrset=45580 infra=5796 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  192.35.51.30#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was REFERRAL
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: removing 1 labels
    Apr 24 12:28:50 unbound[14910:0] info: sending query: p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33520 rrset=45580 infra=6035 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.44.65#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was nodata ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.44.1#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33675 rrset=45783 infra=6274 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.45.1#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was nodata ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33675 rrset=45783 infra=6274 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was nodata ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: processQueryTargets: dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: sending query: dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: sending to target:  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33675 rrset=45783 infra=6274 val=35001 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: response for com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  192.42.93.30#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] info: validated DNSKEY com. DNSKEY IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_wait_subquery event:module_event_pass
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] info: NSEC3s for the referral proved no DS.
    Apr 24 12:28:50 unbound[14910:0] info: Verified that unsigned response is INSECURE
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_wait_module event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query nyt.com. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33820 rrset=46651 infra=6274 val=35974 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.44.1#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query dns2.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=33980 rrset=46801 infra=6274 val=35974 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query dns1.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=34140 rrset=46951 infra=6274 val=35974 subnet=41372
    Apr 24 12:28:50 unbound[14910:0] debug: iterator[module 2] operate: extstate:module_wait_reply event:module_event_reply
    Apr 24 12:28:50 unbound[14910:0] info: iterator operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: response for dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] info: reply from  198.51.45.65#53
    Apr 24 12:28:50 unbound[14910:0] info: query response was ANSWER
    Apr 24 12:28:50 unbound[14910:0] info: finishing processing for dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: validator[module 1] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: validator operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: subnet[module 0] operate: extstate:module_state_initial event:module_event_moddone
    Apr 24 12:28:50 unbound[14910:0] info: subnet operate: query dns3.p06.nsone.net. A IN
    Apr 24 12:28:50 unbound[14910:0] debug: cache memory msg=34300 rrset=47101 infra=6274 val=35974 subnet=41372
    
    
    So it started at the root, worked down to the Start of Authority (the authoritative server), got an ANSWER, then did a whole lot of who-knows-what backing out (likely caching some information from the various levels).
    
    2nd time same query?
    
    
    ;; Query time: 2 msec
    ;; SERVER: 192.168.1.252#53(192.168.1.252)
    ;; WHEN: Sat Apr 24 19:35:42 UTC 2021
    ;; MSG SIZE  rcvd: 100
    

    I think I can live with 2 ms DNS resolutions ;-)

    And NOTHING in the unbound logs as the PiHole had cached it. As more things are looked up, unbound learns more authoritative servers for different zones, and doesn’t need to go to the root servers as often. It gets faster over time too.

    OK, so the above config of mine is a bit bogus in that the forward-zone for “.” sends everything to the forwarding / recursive resolvers and it never gets to the auth-zone part. But comment that out, and it does the full Monty from the root servers on down. Got it. Like this:

    # forward-zone:
    # 	name: "."
    # 	forward-addr: 208.67.222.222
    # 	forward-addr: 208.67.220.220
    

    Shuts off the forwarding to OpenDNS… and I’m good with that. It is where I was headed anyway.

    So pick your preference in the above config. Forwarding to recursive company resolvers, or all authoritative all the time.
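    For reference, the “all authoritative all the time” half of that choice is the auth-zone stanza. A minimal sketch, along the lines of the example in the unbound docs (the two addresses shown are examples; verify against the current root server list before relying on them):

```
auth-zone:
    name: "."
    primary: 199.9.14.201          # b.root-servers.net
    primary: 192.0.32.132          # lax.xfr.dns.icann.org
    fallback-enabled: yes
    for-downstream: no
    for-upstream: yes
    zonefile: "root.zone"
```

    With for-upstream: yes and for-downstream: no, unbound keeps a local copy of the root zone to skip trips to the root servers, while still answering clients via normal recursion.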

    I think I’ll leave these two DNS servers as “one of each” for a little while, then decide ;-)

  2. E.M.Smith says:

    This site has what looks like a nearly trivial config for making it TLS. Near as I can tell, he just sets SSL to “yes” (and SSL is a synonym for TLS in the configs, IIRC):

    https://bartonbytes.com/posts/configure-pi-hole-for-dns-over-tls/

    Here’s his config file. I’ve bolded 2 bits. The added privacy bits, and the one line that looks like it is all that is turning on TLS security via the SSL option:

    ## DNS Over TLS, Simple ENCRYPTED recursive caching DNS, TCP port 853
    ## unbound.conf -- original at https://calomel.org/unbound_dns.html
    ## tweaks by bartonbytes.com
    server:
        access-control: 127.0.0.0/8 allow
        cache-max-ttl: 14400
        cache-min-ttl: 600
        do-tcp: yes
        hide-identity: yes
        hide-version: yes
        interface: 127.0.0.1
        minimal-responses: yes
        prefetch: yes
        qname-minimisation: yes
        rrset-roundrobin: yes
        ssl-upstream: yes
        use-caps-for-id: yes
        verbosity: 1
        port: 5533
    #
    forward-zone:
        name: "."
        forward-addr: 9.9.9.9@853         # quad9.net primary
        forward-addr: 1.1.1.1@853         # cloudflare primary
        forward-addr: 149.112.112.112@853 # quad9.net secondary
        forward-addr: 1.0.0.1@853         # cloudflare secondary
    
    He's just doing forward-zone stuff to Big Name DNS sites, but I presume the same thing would work with the root servers "auth-zone" config. Interesting...
  3. philjourdan says:

    Just curious – ever considered TinyDNS? It has some issues, but it is easy to set up and use. (The issues are not fatal as long as you respect the RFC limits on TXT records – TinyDNS does not.)

  4. E.M.Smith says:

    This one runs unbound in a chroot partition for added isolation / security. I’m not going to bother as the entire hardware platform is essentially an isolated unit from my POV. He uses the TLS option of the command that turns on upstream tunnels. Lots of other stuff wrapped around it, but again looks like just a “one and done” to make it tunnel. I’m skipping the more generic parts of his config file:

    # request upstream over TLS (with plain DNS inside the TLS stream).
    # Default is no. Can be turned on and off with unbound-control.
    # tls-upstream: yes

    ## Forward zones
    forward-zone:
        name: "."
        forward-addr: 208.67.222.222
        forward-addr: 208.67.220.220
        forward-addr: 8.8.8.8
        forward-addr: 8.8.4.4
        forward-addr: 2001:4860:4860::8844
        forward-addr: 2001:4860:4860::8888
    

    He also has IPv6 on and set to preferred. IIRC, the tls- syntax is now preferred over the older ssl- syntax. It looks like all those other TLS configuration settings are optional overrides.
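    Putting those two remarks together, a sketch of the newer tls- spelling with per-server certificate names. Untested on my end, and the #authname suffix on forward-addr needs a reasonably recent unbound:

```
server:
    tls-upstream: yes

forward-zone:
    name: "."
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
```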

    Here are two for using DNScrypt. I thought it was now less preferred, yet I ran into a couple of places giving DNScrypt a thumbs up:

    https://blog.sean-wright.com/dns-with-pi-hole-dnscrypt/

    https://edhull.co.uk/blog/2017-08-07/dnscrypt-pihole

    Looks like a dedicated proxy is installed to run DNS into DNScrypt links.

    General overview of “unbound” from the guys / place that wrote it.
    https://nlnetlabs.nl/projects/unbound/about/

    So TLS looks easy (we’ll see when I try it…)

    Now on to DNS over HTTPS (DoH).

    OK not that keen on the ways used for DoH.

    More think time needed. I think I’ll try the one line TLS first. Then move on to DNSCrypt and leave DoH for browser wars and / or a distant future attempt…

  5. E.M.Smith says:

    @Phil:

    Not familiar with TinyDNS. I’ll take a look.

    Looks like their niche is DNS hand-holdy tutorials for folks new to the stuff. They have a comparison of DNS software, but not their own option:

    https://tinydns.org/category/dns-servers/software-comparison/

    So mostly they want to tell me things I already know and revisit the question of what software when I’ve already chosen…

  6. philjourdan says:

    One of the “acquired” companies in my current empire of companies used it as their main DNS. They had a genius who hated to spend money when GNU things sufficed (he preferred HAProxy over F5, etc.). It was only after I discovered the illegal TXT record that I found that fault. Other than that, it worked great. But we are a “big” company now, so I am tasked with getting rid of it (moving it to F5 DNS).

    Other than the TXT record violation, and idiots that did not listen to me, it worked well

  7. philjourdan says:

    BTW – I have been bleeding DNS since the first network “crisis” was traced to a bad DNS entry from the Windows team. I have only one requirement when I accept a job – who OWNS DNS? If they say Wintel? I say see you later!

  8. E.M.Smith says:

    Well… did a test case with just setting the TLS value to yes.

    Seemed to work both for the forward-zone and the auth-zone configurations.

    I need to up the logging to a 3 and retest it so as to get confirmation it is negotiating TLS tunnels, but as it stands it sure looks like it worked without any error message.

    Still going to try a DNScrypt path as it looks like it gives more options for encrypting / hiding the communications, but as a first and easy encrypted tunnel, the TLS option looks easy for “upstream”. Most of the TLS configurations look like things you need to set to SERVE over TLS to your downstream, and I’m just not seeing the need to do that between things all inside my own home ;-)

    So after dinner, one by one, I’m going to cut my DNS servers and test bed over to TLS encryption of the communications. Folks can still see the DNS port being hit with traffic, but not what’s in it.

    Provided the detail debugging logs confirm it, that’s like 99% of everything I want.

  9. E.M.Smith says:

    @Phil:

    Nice to know I’ve got backup ;-)

  10. E.M.Smith says:

    OK, some subtle thing is chewing on my ankles…

    The TLS test case works FINE on the XU4 (which has no PiHole front end bound to port 53 and talking to it on port 5335) but fails on the actual DNS servers despite almost identical configurations.

    I think it is stuff arguing over ports….

    I guess maybe the fix will be specifying ports on the interface definitions.

    I further guess that asking for TLS upstream causes an assumption of 853 to more than just the upstream, and this conflicts with the 5335 downstream to the PiHole. Explicitly setting tls-port sets it for the downstream (which I’m not doing) while the XU4 has no downstream as it is just servicing for itself. (127.0.0.1)

    I’m going to leave this sit for a while while I have a bit of a think about how to get the PiHole to listen on port 53, talk to unbound on port 5335, and have unbound talk TLS upstream on port 853… or if that is actually the issue. Soooo…. close…

  11. E.M.Smith says:

    From that openBSD manual page, I think this is the Majic Sauce…

    port:
    The port number, default 53, on which the server responds to queries.

    interface:
    Interface to use to connect to the network. This interface is listened to for queries from clients, and answers to clients are given from it. Can be given multiple times to work on several interfaces. If none are given the default is to listen to localhost. If an interface name is used instead of an ip address, the list of ip addresses on that interface are used. The interfaces are not changed on a reload (kill -HUP) but only on restart. A port number can be specified with @port (without spaces between interface and port number), if not specified the default port (from port) is used.

    ip-address:
    Same as interface: (for ease of compatibility with nsd.conf).

    The “upstream” is not expecting port 5335. It is expecting 53 or 853. BUT, by not specifying an overriding port on the upstream interface, it gets 5335 when TLS is declared. (Why that doesn’t happen when TLS is not declared is still unclear… )

    In any case, it looks like I’ve got to fiddle the port on an interface declaration for the upstream when TLS is declared AND the “port” option is set. I guess…. maybe.
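    If it really is a port fight, the interface/port split would look something like the sketch below. This is guesswork at this point, assuming the PiHole hands off to unbound on 5335:

```
server:
    # Listen for the PiHole on loopback, port 5335:
    interface: 127.0.0.1@5335
    port: 5335
    # TLS to upstream resolvers only; 853 comes from the
    # @853 on each forward-addr, not from the local port settings.
    tls-upstream: yes

forward-zone:
    name: "."
    forward-addr: 9.9.9.9@853
```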

  12. E.M.Smith says:

    Well that didn’t work.

    Reading further, I’m suspecting my ‘test case’ didn’t actually work either, but had just bypassed the tls method due to other settings / context.

    It looks like (reading in the commented conf file) there’s some salt and certs stuff that needs to be set up for this to actually work.

    I’m going to look at it again tomorrow. I’m done for the night.

    I’ve got 2 DNS servers doing a dandy recursive DNS with DNSSEC and filtered through PiHole, and a workstation that does the same recursive DNS with DNSSEC but without PiHole for testing. That’s enough for now.

    Wrapping it all in encrypted tunnels can wait.

  13. Pete_dtm says:

    Strictly amateur at this; but been running pi-hole for several years. Started when my ISP scragged ‘my’ router; and then imposed a limit of 16 devices in their dhcp.
    After getting the dhcp up; read up on the authoritative dns and just used the native pi-hole implementation; and keep promising myself to look into the DNSSEC stuff (for various reasons my get up & go; got up & went; and my to do list doesn’t progress much).

    So I am going to be really interested when you start looking at that!

    Wife & still at home daughter get annoyed by all the adverts when they are out and about. From time to time I have to make holes (whitelist) to allow work related functions happen (Microsoft and Google of course). Both my employer (large US automation company) and my wife’s employer (NHS, UK shudder) seem unconcerned about employees’ personal security.
    The pi-hole just sits on a Pi3; I manually run apt-get update / apt-get upgrade from time to time, mainly for the pleasure of seeing a non-Windows system do updates (including the kernel) without requiring multiple re-boots, risking BSODs, or downloading GB of install files (I have to be in Windows land at work, and also get exposed to the nightmare of Windows Server Update Server incompetence). Who knew doing a patch session could actually be enjoyable!

  14. E.M.Smith says:

    @Pete_dtm:

    I’m presently on a contract that is “all Micro$oft all the time” so I literally “feel your pain”. One “fix” I was asked to do was that an automatic update to Windows 10 had broken some stuff and Might I Fix It?…

    Frankly, a LOT of the Linux stuff I do is just to keep skills up and enjoy a nice clean well designed system. UN-fortunately, SystemD (mented) is busy trying to turn Linux into a MS Clone (with monolithic management, a registry-like store, and all…). Thus my avoidance of it (and the bugs in it I’ve tripped over – shades of M$ Bug Of The Day…)

    Um, the above implementation does do authoritative DNS. Both using the Authoritative Server for the final domains in external lookups, and for your Local IPs as the embedded private domains and machines where you are the authority. Or were you talking about setting up a public authoritative domain? That requires a site registration (i.e. real domain name) and DNS reg with upstream.

    Being recursive ( IF you use the auth-zone section) it starts at a root server and works down to the authoritative server. So say I was looking up webserver.somecompany.com, the Root Server (any one from that list) will say “.com is managed by FOO, go see them at yyy.yyy.yyy.yyy”, then you go to FOO and it says “yeah, I got .com, you want somecompany.com? They manage their own, so go see BAR server at address…” You get to bar.somecompany.com (their dns authoritative server) and it says “I AM THE AUTHORITY for somecompany.com, what you want? Webserver? Sure, it is xx.xxx.xxx.xxx done and authoritative”
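    That walk can be sketched in a few lines of Python. A toy, purely illustrative: the names are hypothetical and no real DNS traffic happens; a real resolver like unbound follows actual referrals over the network.

```python
# Toy sketch of the zones a recursive resolver walks through,
# root first, one label at a time (compare the "removing N labels"
# lines in the unbound debug log above).

def zone_chain(qname: str) -> list:
    """Zones visited from the root down to the query name."""
    labels = qname.rstrip(".").split(".")
    chain = ["."]  # start at a root server
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

# Each step is a referral: "." says ask the .com servers, .com says
# ask somecompany.com's server, which finally gives the ANSWER.
print(zone_chain("webserver.somecompany.com"))
# → ['.', 'com.', 'somecompany.com.', 'webserver.somecompany.com.']
```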

    Also, since it has DNSSEC turned on, each of those answers is validated against the zone’s DNSSEC signatures, so you know the records are authentic and weren’t forged in transit. So it is VALIDATED and AUTHORITATIVE as it stands.

    The one thing it is not is encrypted in transit. I’m working on that now, but it just looks like a Royal Pain to set up all the certs stuff without being a Certificate Authority or paying one. (I’m convinced now that that’s where I’m stuck. It is NOT just turn on tls-upstream and it automagically just uses temp keys, but instead you must use openssl and gen public private keys and all that crap. At least, that’s the thesis this morning…)
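    FWIW, most of the writeups I’ve seen suggest the client (upstream) side only needs a CA bundle to validate the far end’s certificate; generating your own keys with openssl only matters if you SERVE TLS to your downstream. So the missing piece may be just this one line (an assumption until tested, and the bundle path varies by distro):

```
server:
    tls-upstream: yes
    # Validate upstream certificates against the system CA store:
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"
```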

    But, in fact, that is really “small stuff” in terms of privacy and security. Once you are doing recursive authoritative resolution, the only one in the chain who knows your end target site IS the authoritative server at the other end (bar.somecompany.com) and you will be talking to them anyway. Then, even with fully encrypted DNS traffic, your ISP can see your connection IP numbers to bar.somecompany.com AND the following connection to webserver.somecompany.com too. So even if the DNS traffic were encrypted, they would still know what you connected to. At most a reverse DNS lookup for them to get the name from the IP number.

    The encryption only really protects against folks sticking a sniffer on the wire somewhere between you and bar.somecompany.com and reading it clandestinely (mostly TLAs) but they can get it from the Telco anyway.

    The better solution, really, is to stick everything in an encrypted tunnel via VPN to some jurisdiction outside of your own (so they need an international warrant… and hopefully the VPN company does no logging…) or use TOR or i2p routing for really secret stuff.

    So why am I fooling around with TLS / HTTPS based DNS? Mostly because it is there… It’s free and I want to know. I also hope to set up enough to trap the HTTPS DNS (DoH) lookups showing up in browsers. I want to defeat their bypass of MY DNS isolation. Firefox lets you turn this off, but I’m not finding a place in Chromium, and don’t know about Safari… (who, IIRC, use Cloudflare, and I’ve seen some Cloudflare traffic go by that I suspect is the spousal laptop Mac…)
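    One piece of that browser bypass can be trapped at the DNS layer: Firefox checks a “canary” domain before enabling its built-in DoH, and answering NXDOMAIN for it tells Firefox to stick with your local resolver. A sketch for unbound (this only handles the Firefox canary; Chromium and others would need firewall rules blocking the DoH provider IPs):

    ```
    server:
        # Firefox disables its built-in DoH when this canary
        # domain returns NXDOMAIN.
        local-zone: "use-application-dns.net." always_nxdomain
    ```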

    But the reality is simple:

    The above configuration does most of everything I want:

    1) It caches lookups for a long time. Basically one lookup for a site per day or longer for most things. Some longer than that. MUCH faster and a lot less information leakage (like how often you have a particular lookup happen)

    2) It uses DNSSEC to Validate the DNS providers. Man In The Middle kept away, and you KNOW you are talking to authentic DNS servers.

    3) It does not use “forwarding” servers (your Telco, Cisco / OpenDNS, Cloudflare) who can and often do choose to decide what you can look at or not, and sell your browsing habits to others.

    4) It DOES use recursive authoritative DNS. The only one who gives you the final target IP address is the authority for that domain. You know it is them (DNSSEC) and you know they are the authority (recursive from root) and you know you would get that information from them anyway even if using a forwarding server.
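    On point 1), unbound honors each zone’s TTLs by default; if you want to force longer caching you can clamp the TTL range. A sketch (note cache-min-ttl overrides what the zone owner asked for, so the trade-off is possibly serving stale answers):

    ```
    server:
        # Don't cache anything longer than a day...
        cache-max-ttl: 86400
        # ...and keep even short-TTL answers for at least an hour
        # (overrides the zone's own TTL; may serve stale data).
        cache-min-ttl: 3600
    ```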

    What it does not do is protect against wire snoops in the middle. But a VPN does that better anyway. (I just don’t want to pay for one ;-) It also does not prevent your ISP from blocking / censoring access to sites, as they can and do still do that via IP# rather than DNS. Again, a VPN fixes that.

    Hopefully that helps more than it confuses…

  15. E.M.Smith says:

    Much shorter form:

    The PiHole is pointed at “unbound” on port 5335. The “unbound” config has an “auth-zone” config that points at the root servers, and uses DNSSEC to assure everyone is who they say they are. Then when a lookup happens, it gets the answer from the “Authoritative Server” for that zone.
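    That PiHole-to-unbound hookup is just a listener stanza in unbound. A minimal sketch of the relevant bits (addresses are assumptions; adjust to your LAN):

    ```
    server:
        # Listen where PiHole can reach us, on the
        # non-standard port 5335.
        interface: 127.0.0.1
        port: 5335
        # Only answer our own boxes.
        access-control: 127.0.0.0/8 allow
        access-control: 192.168.0.0/16 allow
    ```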

    Both “unbound” and PiHole cache the result, so depending on whose cache times out first, you get it from PiHole, or from “unbound” via the PiHole; or worst case go back upstream. Except…

    “unbound” retains the information from the root server about who manages .com, and from the ‘.com source of authority’ about who manages somecompany.com. So even if the specific lookup result’s time-to-live has expired, ‘unbound’ will go back to the deepest still-valid branch of that tree to get the lookup done again. So rarely will you ever go back and start at a root server again for that site (or for anyone else in the .com domain… or at somecompany.com).
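    unbound can also refresh popular cache entries shortly before their TTL expires, so the “go back upstream” case happens in the background instead of while you wait. The relevant knobs (off by default):

    ```
    server:
        # Re-fetch popular cache entries just before they expire.
        prefetch: yes
        # Likewise pre-fetch the DNSKEYs needed for DNSSEC
        # validation.
        prefetch-key: yes
    ```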

    Over time, most of your lookups will start at the ‘source of authority’ for that domain.

  16. E.M.Smith says:

    Some interesting data:

    My test station is the Odroid XU4. I have it (presently) configured with unbound running, but local lookups resolve (via /etc/resolv.conf) via my R. Pi DNS servers.

    This presented a fun opportunity to watch both of them do the recursion and caching, then see the times. (This is NOT a pure test, as I’m not sure just how much of the stuff like the .com source of authority is already cached in each. First-ever lookups on the Pi ran about 450 ms and traversed the whole tree from the root server on down; the ones below are faster.)

    Going via Pi Model 1 server (as it is first in resolv.conf):

    root@XU4uDevuan3:/etc# time nslookup apple.com
    Server:		192.168.16.253
    Address:	192.168.16.253#53
    
    Non-authoritative answer:
    Name:	apple.com
    Address: 17.253.144.10
    
    
    real	0m0.044s
    user	0m0.025s
    sys	0m0.015s
    

    So a sloppy 44 ms, as I was using “time” instead of the timer built into ‘dig’. But a lot less than the 450 ms it takes for a first ever.

    root@XU4uDevuan3:/etc# dig apple.com
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> apple.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6478
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;apple.com.			IN	A
    
    ;; ANSWER SECTION:
    apple.com.		3577	IN	A	17.253.144.10
    
    ;; Query time: 2 msec
    ;; SERVER: 192.168.16.253#53(192.168.16.253)
    ;; WHEN: Sun Apr 25 17:57:10 UTC 2021
    ;; MSG SIZE  rcvd: 54
    

    A VERY nice 2 ms for the second dip.

    Took me a while to think of somewhere I’ve NEVER visited / looked up… Oh, I know, Greenpeace!

    root@XU4uDevuan3:/etc# dig greenpeace.org
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> greenpeace.org
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37465
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;greenpeace.org.			IN	A
    
    ;; ANSWER SECTION:
    greenpeace.org.		3600	IN	A	35.184.130.59
    
    ;; Query time: 375 msec
    ;; SERVER: 192.168.16.253#53(192.168.16.253)
    ;; WHEN: Sun Apr 25 18:06:32 UTC 2021
    ;; MSG SIZE  rcvd: 59
    

    375 ms. That’s what happens when I start at a root server… but subsequent lookups are 2 ms, even from a Raspberry Pi Model ONE on a slow Ethernet (10 Mb/sec) and a couple of switch hops away.

    Now, because I’ve already asked about ONE .org site, asking about another does not start at a root server, it starts at the “.org” source of authority server:

    root@XU4uDevuan3:/etc# dig goodwill.org
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> goodwill.org
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51772
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;goodwill.org.			IN	A
    
    ;; ANSWER SECTION:
    goodwill.org.		3600	IN	A	104.22.6.97
    goodwill.org.		3600	IN	A	172.67.25.176
    goodwill.org.		3600	IN	A	104.22.7.97
    
    ;; Query time: 177 msec
    ;; SERVER: 192.168.16.253#53(192.168.16.253)
    ;; WHEN: Sun Apr 25 18:10:27 UTC 2021
    ;; MSG SIZE  rcvd: 89
    

    So about twice as fast at 177 ms. Again, subsequent lookups for goodwill.org will run about 2 ms until their TTL expires, and then likely be about 60 ms until the source of authority TTL for “goodwill” runs out, then 177 ms until the TTL for “.org” runs out, then it will once again be 300 to 400 ms once only. (I’d guess about once a month or year…)

    Now, what happens if I point at the unbound DNS on my desktop / workstation directly? (Remember the above lookups went via /etc/resolv.conf to my R.Pi servers, while the local unbound used for testing just looks to root servers and knows not of the R.Pi caches…)

    root@XU4uDevuan3:/etc# dig apple.com @127.0.0.1
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> apple.com @127.0.0.1
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41857
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;apple.com.			IN	A
    
    ;; ANSWER SECTION:
    apple.com.		3600	IN	A	17.253.144.10
    
    ;; Query time: 46 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Sun Apr 25 17:58:10 UTC 2021
    ;; MSG SIZE  rcvd: 54
    

    At 46 ms, still pretty respectable. I know I’ve done other .com sites before though so that’s likely starting from the “.com” server. Still, what happens when we hit the cache on this machine?

    root@XU4uDevuan3:/etc# !!
    dig apple.com @127.0.0.1
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> apple.com @127.0.0.1
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7734
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;apple.com.			IN	A
    
    ;; ANSWER SECTION:
    apple.com.		3593	IN	A	17.253.144.10
    
    ;; Query time: 0 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Sun Apr 25 17:58:17 UTC 2021
    ;; MSG SIZE  rcvd: 54
    

    Yeah, ZERO ms. As in none. As it can’t measure it ;-)

    Now THAT’S fast DNS!

    For comparison, here’s the first ever lookup of greenpeace.org on this particular ‘unbound’ instance, so again starting from a root server:

    root@XU4uDevuan3:/etc# dig greenpeace.org @127.0.0.1
    
    ; <<>> DiG 9.11.5-P4-5.1+deb10u3-Debian <<>> greenpeace.org @127.0.0.1
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49573
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1472
    ;; QUESTION SECTION:
    ;greenpeace.org.			IN	A
    
    ;; ANSWER SECTION:
    greenpeace.org.		3600	IN	A	35.184.130.59
    
    ;; Query time: 251 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Sun Apr 25 18:16:45 UTC 2021
    ;; MSG SIZE  rcvd: 59
    

    So about 200 ms faster when the PiHole filtering is out of the way and I’m not running my DNS lookups on a 700 MHz armel chip, but instead directly on a nearly 2 GHz XU4 and talking directly to the root et al. servers. And subsequent lookups will be at ZERO ms too :-)

    Any wonder I’m talking about how much faster things go with local DNS?

    Now for most things, like my Tablet and phone, I’m not going to worry about 2 ms vs ZERO ms lookups. Also, as the whole house goes via those PiHole / unbound servers, lots and lots of stuff will end up cached locally and lots of intermediate servers (.com .org .gov…) will be known to unbound so the odds of hitting a ‘first ever’ top level domain rapidly approach zero. Time lost to the 2 ms vs Zero ms tends to be made up in lots more cache hits.

    OTOH, it really can be sweet having a major workstation with direct recursive DNS to the authoritative servers and a big fat cache ;-) Things go really fast ;-)

  17. Pete_dtm says:

    Thanks for the full reply, going to re-read a few times, but I believe I’ve got the gist.
    As an adblocker pi-hole rocks!

  18. E.M.Smith says:

    I probably ought to draw a clear line around what kinds of “authoritative” are in use in this description.

    For DNS Servers, there are 3 kinds, and these are jargon terms with precise meanings, not the same as regular old English. In regular old English, you could mean “I find an Authoritative DNS server” as opposed to “this IS an Authoritative DNS server”.

    The Jargon:

    1) Forwarding. This is a DNS server that holds a cache of some lookups and, if it doesn’t know, will forward your request on to someone else. These are services like Cloudflare, OpenDNS, Google DNS, your Telco. It is also what most folks run in their home or business if they run their own DNS server. This section points at forwarding servers (and makes you a forwarding server too):

     forward-zone:
     	name: "."
     	forward-addr: 208.67.222.222
     	forward-addr: 208.67.220.220
    

    2) Recursive. That’s what I implemented above with the “auth-zone” entries. It does NOT depend on a “forwarding” server at all. It knows about the Root Servers (list in the config above), and when you do a DNS lookup, it ‘disassembles’ the name and starts from the root, working server by server down the list, until it finds the “Authoritative Server” for that final location. This section, if used, makes you a Recursive DNS server and starts at the Authoritative Root Servers:

     auth-zone:
    	name: "."
    	master: 199.9.14.201         # b.root-servers.net	
    	master: 192.33.4.12          # c.root-servers.net	
    	master: 199.7.91.13          # d.root-servers.net	
    	master: 192.5.5.241          # f.root-servers.net	
    	master: 192.112.36.4         # g.root-servers.net	
    	master: 193.0.14.129         # k.root-servers.net	
    	master: 192.0.47.132         # xfr.cjr.dns.icann.org	
    	master: 192.0.32.132         # xfr.lax.dns.icann.org	
    	fallback-enabled: yes	
    	for-downstream: no	
    	for-upstream: yes
    

    “unbound” will rotate between the root servers and work on down from there.

    Note that in attempting to use both “forward-zone” and “auth-zone” I could not get them to play well together. In theory it looks to me like I ought to be able to define a forward-zone of “name.top” and have the rest go via “auth-zone”, but I wasn’t able to make that work right (yet… if it can be done…) so Pick One (at least to start…)

    So yada.what.foo.mycompany.com gets sent to a root server, which chases down to “.com” and says “go talk to the guy handling .com at xx.xxx.xxx.xxx”. That guy reads down the list to “mycompany” (and already knows he runs .com) and says “go talk to the ‘mycompany’ source of authority at yy.yy.yy.yy” (which for little companies is likely their ISP or their domain name registrar, but for big companies is a machine at the company itself). Now, inside MyCompany, that server chases down the name to “foo” and says “Hey, I’ve delegated that to the server over in Europe” and sends you off to it. That machine chases down the name to “what” (because it knows it is .foo and that it is at .mycompany) and says “Oh, yeah, .what is run on the marketing department’s authoritative DNS server”, and hands you off to it, which finally says “Yeah, I’m the guy authoritative for all of .what.foo.mycompany.com; here’s the number for yada”.

    Then all the caching as described above.

    3) Which brings up the jargon of “Authoritative DNS server”. Those final leaf nodes in that chain of recursion are the Authoritative Servers. Where the recursion ends. IF you are talking about setting up an Authoritative DNS Server, you can do it for a Private Domain (say the Bob.Mygigle domain, as .Mygigle is clearly not a valid top level qualifier and the root servers will not have any clue what to do with you). Like the .onion domain in Tor routing, or the chiefio.lab domain in my config above. Folks in the rest of the world will not know who you are or be able to get your IP numbers, as it is all darknet and private inside your own network. BUT, most Authoritative Servers are public. They are the leaf nodes of the whole DNS tree.

    Now the Private Authoritative DNS is seen in the config above with the “private-domain” and “local-data” entries:

    	 private-address: 192.168.0.0/16
    	 private-address: 169.254.0.0/16
    [...]
    	 private-domain: "chiefio.lab"
    	 private-domain: "chiefio.home"
    [...]
    	 local-data: "Netgear.chiefio.lab. IN A 10.1.1.254"
    	 local-data: "netgear.chiefio.lab. IN A 10.1.1.254"
    

    That makes ME and this config be an Authoritative DNS Server for my PRIVATE domain, network, and machines. Nobody but me has the authority to hand those out. BUT, I can’t do that to the public network.

    To be a PUBLIC Authoritative DNS server, you need to have a formally assigned DNS Name, and ownership of a block of addresses you can assign too. I don’t have an example of that above as I don’t have an assigned address block and DNS name.

    I assume you put it in the “auth-zone” as a named block but one that is not a private-domain but instead a public domain. Then you also need to assure that is propagated up the DNS tree above you (likely done by the ISP / Registrar from whom you bought your name and IP block).
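    My best guess at how that looks in unbound is the same “auth-zone” mechanism with for-downstream flipped on, serving the zone from a file. A hedged sketch (the zone name and file path are made up for illustration; a dedicated authoritative server like NSD or bind is the usual tool for a real public zone):

    ```
    auth-zone:
        name: "example-company.com"
        # Serve answers for this zone to our clients directly
        # from a local zonefile...
        zonefile: "/etc/unbound/example-company.com.zone"
        for-downstream: yes
        # ...rather than using it as an upstream source.
        for-upstream: no
    ```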

    So what my (somewhat…) unclear description above was trying to say is that the config I use, above, is Authoritative for my Private Domains, and that the leaf nodes in my Recursive DNS (or even in Forwarded DNS, if you used that) would also be Authoritative Domain Servers, but PUBLIC Authoritative DNS. And I’m assuming you don’t need to set up a Public Authoritative DNS, but just a Private Domain Authoritative DNS service. And the above config does that.
