Quoting Craig Sanders (c...@taz.net.au):

> On Fri, Sep 16, 2016 at 01:12:07AM -0700, Rick Moen wrote:
> 
> > _But_ that is completely unrelated to pdnsd.
> 
> ah, my mistake.  i assumed he was talking about powerdns.

No worries.  ;->

> > http://linuxmafia.com/faq/Network_Other/dns-servers.html
> 
> good page that, i've read it before but not for some time. IMO a useful
> addition to it would be a list of authoritative servers that use bind9
> RFC-1034 zonefiles.

You know, they kind of _could_ have called that format the RFC-1034 file
format, as some RRs are defined there, but because all the key ones are
defined in the accompanying RFC 1035, it's generally called 'RFC-1035
format'.

Anyway, yes, good idea -- and I actually do document RFC 1035 support
where I know about it.

> apart from "it aint broke, why fix it?" laziness, one of the reasons i'm
> still using bind9 is because I don't want to rewrite my zone files in
> a new format (or even have to learn a new format), and I haven't been
> overly happy with the few alternatives I've tried that could use bind
> zonefiles.
> 
>  - powerdns is serious overkill for my needs (home server with only a
>    few domains).

Yeah.  $WORK did a massive conversion of hundreds of domains from BIND9
to PowerDNS Authoritative Server, and there were various problems along
the way.  I'm not convinced it was a good idea, even for a large
Internet firm handling that many domains.  Probably it was, on balance
(gains in performance and security), but with some reservations.

>  - last time i looked at it (years ago, not long after it was released),
>    there were some incompatibilities between NSD's interpretation of
>    bind zonefiles and bind9's interpretation.

I believe you, but haven't seen this.  I've administered NSD on
ns1.svlug.org from NSD 2.x days onwards, and it's been really good.
I've not encountered any zonefile-parsing weirdness.  (I still run BIND9
on ns1.linuxmafia.com .)

Searching for data on this, I find some docs in their initial public
release candidate:
https://www.nlnetlabs.nl/downloads/nsd/OLD/nsd-1.0.0-rc2/REQUIREMENTS
'Section C. Technical Specifications has C.1. Zone file format and RR
records.'  It basically _claimed_ NSD would parse any valid RFC 1035
file containing only IN-class RRs.  FWIW, I've not seen NSE 2.x and
later's parser reject or get wrong anything from my own zones.
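
For concreteness, here's the sort of minimal RFC 1035-style zone I'd
expect either parser to handle identically (example.com and the
addresses are placeholders):

  $ORIGIN example.com.
  $TTL 86400
  @    IN  SOA  ns1.example.com. hostmaster.example.com. (
            2016091601  ; serial
            7200        ; refresh
            3600        ; retry
            1209600     ; expire
            3600 )      ; negative-caching TTL
  @    IN  NS   ns1.example.com.
  ns1  IN  A    192.0.2.53
  www  IN  A    192.0.2.80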

> Also, I didn't want to
> have to run two name servers (internet-facing authoritative and
> private LAN recursive) - although dnsproxy or similar could solve
> that problem now. it's probably worth another look.

I found about a year ago what struck me at the time as the ideal
solution to that problem but failed to add it to my linuxmafia.com
knowledgebase.  Maybe it was dnsproxy.  

Here's a creative solution from one of the NLnet Labs guys:
https://www.nlnetlabs.nl/pipermail/nsd-users/2014-August/001998.html

  It is possible, but not using the same address+port of course. One
  solution is to have NSD only listen on localhost while unbound listens
  on the external address. You can then use stub-zone configuration in
  unbound to make it use the localhost address for lookups in any zone
  you are serving from NSD.

  This is what I do for my home network; for a production setup I would
  rather keep authoritative and caching DNS services fully separated.

However, a followup from a different poster stresses that this is
appropriate only for serving a private zone from NSD, as answers
relayed through Unbound wouldn't have the AA bit set.  This is similar:
https://www.nlnetlabs.nl/pipermail/nsd-users/2014-August/002000.html

The ArchLinux wiki proposes a different solution:  Bind NSD to
127.0.0.1:53530, and bind Unbound to *:53 with the authoritative zones
declared as ones to refer to NSD using the 'local-zone' and 'stub-zone'
features:
https://wiki.archlinux.org/index.php/Nsd
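
For anyone who'd rather not dig through the wiki, here's an untested
sketch of that arrangement -- the domain, zonefile name, and addresses
are placeholders, and a validating resolver will also want the
domain-insecure line:

  # nsd.conf -- authoritative, loopback only
  server:
      ip-address: 127.0.0.1
      port: 53530

  zone:
      name: "example.com"
      zonefile: "example.com.zone"

  # unbound.conf -- recursive, world-facing
  server:
      interface: 0.0.0.0
      do-not-query-localhost: no      # let Unbound query NSD on loopback
      domain-insecure: "example.com"  # skip DNSSEC validation for this zone

  stub-zone:
      name: "example.com"
      stub-addr: 127.0.0.1@53530      # hand example.com lookups to NSD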

The 'Dnsspoof' examples on
https://web.archive.org/web/20160329083109/https://calomel.org/unbound_dns.html
show some ways to leverage the DNS host being dual-homed (if it is).

Other solutions might beckon if the host is multihomed, e.g., bind NSD
to the public-facing real IP, and bind Unbound to the private RFC1918
address.

Personally, when I do my next server rebuild on ns1.linuxmafia.com 
(which is a totally public-facing 'bastion host', not dual-homed),
what I'll probably do is IP-alias a second public IP address onto the
public network port (its sole network port other than loopback), 
and bind NSD to one and Unbound to the other -- which has the benefit of
simplicity, letting me easily ACL the daemons individually, and keeping
their configurations totally separate.  Fortunately, I have spare IPs.
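
Roughly what I have in mind, sketched here with made-up RFC 5737
addresses (and an interface name that's pure assumption):

  # alias a second public address onto the existing NIC (iproute2)
  ip addr add 198.51.100.54/24 dev eth0

  # nsd.conf -- authoritative, bound to the aliased address
  server:
      ip-address: 198.51.100.54

  # unbound.conf -- recursive, bound to the primary address, ACLed separately
  server:
      interface: 198.51.100.53
      access-control: 198.51.100.0/24 allow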

None of this tested by your present correspondent.  Yet.  ;->

>  - maradns provides a conversion tool for bind zonefiles, but doesn't use
>    them natively.  otherwise, i'd probably switch to it.   I've used it
>    several times on gateway boxes i've built for other people.

I like author Sam Trenholme quite a bit, and have corresponded with him
frequently on nameservice matters.  Some of the details on that page,
notably the very thorough information in the entry for Deadwood (his
modern recursive server, intended to replace the old one in MaraDNS),
come directly from Sam.

I get the impression that Deadwood is really, really excellent, on a par
with Unbound, PowerDNS Recursive Server, and dnscache.

As to Sam's authoritative-only daemon from MaraDNS, I honestly don't
know, and have to admit I haven't tried it out.  Many people are quite
happy with it.  Its proper competition includes NSD, Knot DNS (newest),
tinydns, and PowerDNS Authoritative Server (overlarge, over-complex).

Differentiators include which ones fully support IPv6 and DNSSEC/EDNS0.
Personally, I'm getting a bit cynical about the standards churn for the
latter, e.g., EDNS0 got rewritten via RFC 6891 in 2013.  I'm tempted to
react 'Fine, let us know when you're done playing standards gods, and
I'll start paying attention.'
