Re: Freezing a Zone vs. Stopping the DNS Server

2021-09-29 Thread Timothe Litt
Why make manual changes to the zone file?  The zone is already
dynamically updated, so the usual reasons (formatting, structure,
in-line signing) don't apply.

Use nsupdate to add your entries.  Named will update the zone, handle
updating the serial number - and even do some validation on the records.
It's easier, doesn't stop service, and, because it automates the
mechanics, is safer.

BTW: I recommend using TSIG for authorization with nsupdate rather than
IP addresses.
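
For example, a minimal sketch of the whole flow (the zone names,
addresses, and key name are illustrative, not from this thread):

# Generate a TSIG key (tsig-keygen ships with BIND 9.16):
tsig-keygen -a hmac-sha256 ddns-key > /etc/bind/ddns-key.conf

# Reference the key in named.conf and grant it update rights, e.g.:
#   include "/etc/bind/ddns-key.conf";
#   zone "example.com" {
#       type primary;
#       file "example.com.db";
#       update-policy { grant ddns-key zonesub ANY; };
#   };

# Add the A and PTR records without freezing anything:
nsupdate -k /etc/bind/ddns-key.conf <<'EOF'
server ns1.example.com
zone example.com
update add host7.example.com. 3600 IN A 192.0.2.7
send
zone 2.0.192.in-addr.arpa
update add 7.2.0.192.in-addr.arpa. 3600 IN PTR host7.example.com.
send
EOF

named bumps the serial itself, and rndc sync will fold the journal back
into the text zone file whenever you want to read it.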


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 29-Sep-21 07:41, Frank Kyosho Fallon wrote:
> Hi,
>
> Occasionally I need to add hosts manually to forward/reverse lookup
> zones in BIND 9.16. We also have ISC DHCP. Both are on a Mac Mini
> using MacPorts to install.
>
> Since dynamic updates are continually in progress, I understand I need
> to use *rndc freeze zone* and *rndc thaw zone* before and after
> making changes (including manually incrementing the serial number). 
>
> Can I safely accomplish the same thing by issuing an *rndc stop*
> command? Would that allow me to make zone changes followed by an *rndc
> reload* command?
>
> Also, is it safe to simply reboot the server after OS updates, or is
> it necessary to manually stop the DNS server first?
>
> Does it matter where in the dynamically updated zone files I insert
> the new host A record and PTR record?
>
> With /etc/hosts I can add hosts on different subnets. To do that in
> DNS, do I first need to add a reverse zone for the additional subnet
> so that I can add PTR records to correspond to A records in the
> forward zone?
>
> Thanks for any light you can shed on this subject.
> -- 
> Frank Kyosho Fallon
> My pronouns are: He, Him




Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
On 10-Sep-21 13:11, Evan Hunt wrote:
> Recently a critical bug was discovered in which map files that were
> generated by a previous version of BIND caused a crash in newer versions.
> It took over a month for anybody to report the bug to us, which suggests
> that the number of people willing to put up with such a finicky format
> must be pretty small. (Or that the people who use it aren't keeping up with
> regular software updates, I guess.)

Thanks for the history/data.

In my experience, the bigger the operator (of any system), the more
slowly they are likely to update it.

A month doesn't seem like a long time - everyone wants to be second to
update (except for CVEs, and even there I don't rush to update for CVEs
related to features I don't use).

> it would be nice not to have to worry about map files when it came to
> maintaining feature parity.)

I wouldn't worry all that much about blowing away old map files with a
version upgrade; they're pretty well documented as a cache, not a
primary format.  And you supply the tools to convert to a stable format.

In fact, were you to come up with a data structure and loading scheme
that made raw as fast as map, you could treat "map" as a hint that a
user values speed over size & portability - and just write raw format
instead.  Until the pendulum swings again.






Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
I'm not a consumer of this and agree that it's up to users to speak up,
so I'll stop here - with one final observation.

The issue comment containing the benchmarks includes:

> Speedup provided by the map format does not seem significant enough
> to warrant the complexity of map format, especially when we take into
> account that the difference measured in terms of "real time" is in
> order of 10s of seconds.
10s of seconds *per zone* can certainly add up.  Call it 10 secs/zone *
100,000 zones = 1M sec / 3600 = 278 hrs *saved*.

Suppose loading zones is not disk limited, and cores scale linearly
(e.g. no lock conflicts & an index lets each core find a zone to work on
for free).  So give it 16 cores (each taking on one complete zone), and
it's still 17 hrs saved.  Real life won't be that efficient - meaning
cores won't help that much.

A new memory-mapped data structure that didn't require "updating node
pointers" (e.g. one that used offsets instead of pointers) may be worth
considering.  On current hardware, with a decent compiler and careful
coding, the apparent cost of offsets over absolute pointers may well be
vanishingly small.
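
As a sketch of the idea (illustrative only - not BIND's actual rbtdb
layout):

/* A mmap-friendly node that stores base-relative offsets instead of
 * pointers, so the image can be loaded at any address with no fix-up
 * pass over the nodes. */
#include <stddef.h>
#include <stdint.h>

typedef struct node {
    uint64_t left_off;    /* byte offset of left child; 0 = none */
    uint64_t right_off;   /* byte offset of right child; 0 = none */
    uint32_t name_len;    /* length of the owner name that follows */
} node_t;

static inline node_t *node_at(void *base, uint64_t off)
{
    /* One add per dereference; it typically folds into the addressing
     * mode, so the cost over an absolute pointer is near zero. */
    return off ? (node_t *)((char *)base + off) : NULL;
}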

OK, that was two.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 10-Sep-21 12:56, Victoria Risk wrote:
>
>>>> After all the "other improvements in performance" that you cited,
>>>> what is the performance difference between map and the other formats?
>>>
>>> I don’t know that, to be honest. We don’t have the resources to
>>> benchmark everything. Maybe someone on this list could?  We would
>>> also like to be able to embark on a wholesale update to the rbtdb
>>> next year and this is the sort of thing that might complicate
>>> refactoring unnecessarily.
>
>
> I was wrong, and in fact we have benchmarked it.
> See https://gitlab.isc.org/isc-projects/bind9/-/issues/2882
> <https://gitlab.isc.org/isc-projects/bind9/-/issues/2882> for details.
> Map format is still faster than raw, but not so much faster that it
> warrants retaining it, given it is riskier, harder to maintain and we
> have no feedback from users that it is important to them.  It also
> seems not to work with large numbers of zones (>100K), at least in
> current versions of 9.11 and 9.16, which is further indication that it
> isn’t in wide use or we would have had complaints. 
>
> We also have discussed internally that there are other factors, other
> than loading the zone files, that may have more impact on the time it
> takes a BIND server to restart.
>
> If anyone out there is using it successfully, and wants us to keep
> this feature, this would be the time to speak up!
>
> Thank you,
>
> Vicky


Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt

On 10-Sep-21 08:36, Victoria Risk wrote:
>
>
>> On Sep 10, 2021, at 7:24 AM, Timothe Litt <l...@acm.org> wrote:
>>
>> Clearly map format solved a big problem for some users.  Asking
>> whether it's OK to drop it with no statement of what those users
>> would give up today is not reasonable.
>>
> Actually, we are not sure there ARE any users. In fact, the one
> example I could come up with was Anand, who has replied to the list
> that he is in fact NOT using map zone.  I should have asked directly -
> is anyone on this list USING MAP ZONE format?
>
Well, if the answer is "no one", that simplifies matters :-)

I do remember that startup time was a big issue before map came out, and
that the complaints subsided thereafter.  No personal knowledge as to
whether that was cause and effect or a realignment of the planets.  In
general, I don't look to Astrology for answers :-)

>> After all the "other improvements in performance" that you cited,
>> what is the performance difference between map and the other formats?
>
> I don’t know that, to be honest. We don’t have the resources to
> benchmark everything. Maybe someone on this list could?  We would also
> like to be able to embark on a wholesale update to the rbtdb next year
> and this is the sort of thing that might complicate refactoring
> unnecessarily.

IIRC, when I did some work on the stats channel & was concerned with
scalability, Evan said that you keep some large datasets (1M+ zones)
around for testing and produced some numbers for that.  So it ought to
be possible to get some basic data.

I'm not suggesting a full benchmarking campaign - but one or two
datapoints are a lot better than none.  E.g. if there's no difference
with 1 or 10M zones of, say, 10K records each, it's pretty clear that
map's time is past.  If it's orders of magnitude faster (and it's used),
it's not.

I don't remember - did your user survey ask about how many/how large
zones people serve?  I vaguely think so, but it's been a while...

>> For a case which took 'several hours' before map was introduced, what
>> would the restart time be for named if raw format was used now?
>>
>>> If I knew that I would have said. 'Raw’ was much faster than the
>>> text version. Map was faster than raw. Raw is apparently not a
>>> problem to maintain.  I believe the improvement with raw was ~3x.
>>>
>
I think the questions are: (a) is startup time an issue (however it's
solved)?  (b) If so, is map format the solution?  (c) If it is and people
are using it, what would the consequences be for them if it went away?
(d) If it is, and people aren't using it - is the documentation too
scary (as Anand said it is for him)?
>> It's pretty clear to me that if map format saves a few seconds in the
>> worst case, it's not worth keeping.  If it saves hours for large
>> operators, then the alternative isn't adequate.  Maybe "map" isn't
>> the answer - how might 'raw' compare to a tuned database back end? 
>> (Which has other advantages for some.)  What if operators specified a
>> priority order for loading zones?  Or zones were loaded on demand
>> during startup, with low activity zones added as a background task? 
>> Or???
>
> Well, back when we added map zone format, startup time was a major
> pain point for some users. Now, it seems as though large operators are
> updating their zones all the time (also updating RPZ feeds) and
> efficiency in transfers seems to be a bigger issue. 
>
What I was getting at is how hard the definition of "startup time" is. 
Time to serving all zones?  Important zones? Is it OK for responses to
be slow during startup, or is startup only complete when responses are
at nominal speed?

I wonder if this comes from large operators using a database (DLZ) back
end.  Database developers tend to have a single-minded focus on
performance, and direct updates are probably faster than going thru
named & its generalized authentication/validation.  Plus, depending on
how you set up your server architecture, DB replication can replace DNS
zone transfers.

> We don’t have any direct data on what features are being used, we can
> only judge based on complaints we receive via bug tickets or posts on
> this list.
You did a survey a while back...
>>
>> A fair question for users would be what restart times are acceptable
>> for their environment - obviously a function of the number and
>> size/content of zones.  And is a restart "all or nothing", or would
>> some priority/sequencing of zone availability meet requirements?
>>
> That is a good question. Can you answer it for yourself?

Sure.  I'm not a large operator, but I've always thought big and
implemented smaller.  

Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
Vicky,

I never reflexively "howl in protest", but it's really hard to have an
opinion on this proposal without some data.

Clearly map format solved a big problem for some users.  Asking whether
it's OK to drop it with no statement of what those users would give up
today is not reasonable.

After all the "other improvements in performance" that you cited, what
is the performance difference between map and the other formats?

For a case which took 'several hours' before map was introduced, what
would the restart time be for named if raw format was used now?

It's pretty clear to me that if map format saves a few seconds in the
worst case, it's not worth keeping.  If it saves hours for large
operators, then the alternative isn't adequate.  Maybe "map" isn't the
answer - how might 'raw' compare to a tuned database back end?  (Which
has other advantages for some.)  What if operators specified a priority
order for loading zones?  Or zones were loaded on demand during startup,
with low activity zones added as a background task?  Or???

A few data points would get you much more useful responses. 

A fair question for users would be what restart times are acceptable for
their environment - obviously a function of the number and size/content
of zones.  And is a restart "all or nothing", or would some
priority/sequencing of zone availability meet requirements?

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 09-Sep-21 15:13, Victoria Risk wrote:
> Greetings bind-users,
>
> The `map` zone file format was introduced in BIND
> 9.10. 
> https://bind9.readthedocs.io/en/v9_16_20/reference.html?highlight=map%20zone#additional-file-formats
>
> At the time, this format significantly sped up a named restart, which
> could take several hours in some situations. This new file format is
> very complicated, and maintaining it has proven difficult. Meanwhile,
> the performance advantage versus the `raw` format, or the default text
> files, has decreased as we have made other improvements in performance. 
>
> We would like to deprecate the `map` zone file format in future
> branches of BIND. The proposal is to deprecate the feature in the 9.16
> branch (users will see a warning when this feature is used, but it
> will still work through the end of the 9.16 branch), and to disable
> the feature in 9.20.0 (it will no longer work in this and subsequent
> versions). 
>
> Per our policy on deprecating named options, we are notifying the user
> mailing list.  You are welcome now to howl in protest or point out
> something we haven’t considered.  ;-)
>
> Regards,
>
> Vicky Risk
> Product Manager




Re: RE: No more support for windows

2021-06-10 Thread Timothe Litt
On 09-Jun-21 18:46, Richard T.A. Neal wrote:
> Evan Hunt wrote:
>
>>> My understanding is BIND will still run fine under WSL; it's only the 
>>> native Visual Studio builds that we're removing. 
>>> For people who want to run named on windows, WSL seems like the best way to 
>>> go.
> Sadly no. To quote myself from an earlier email on this topic:
>
> There are two versions of WSL: WSL1 and WSL2. Development has all but ceased 
> on WSL1, but WSL1 is the only version that can be installed on Windows Server 
> 2019.
>
> Microsoft have not yet confirmed whether WSL2 will be available for Windows 
> Server vNext (Windows Server 2022, or whatever they name it).
>
> Even if WSL2 is made available for Windows Server 2022 it has some serious 
> networking limitations: it uses NAT from the host, so your Linux instance 
> gets a private 172.x.y.z style IP address, and that IP address is different 
> every reboot. Proxy port forwarding must therefore be reconfigured on every 
> reboot as well.
>
> Personally I'm comfortable with the decision that's been made and I 
> understand the logic. Saddened, like saying goodbye to an old friend, but 
> comfortable.
>
> Richard.

As I suggested early on, it would be great if the tools could somehow be
available as native binaries.  Sounds like there's progress there -
thanks Evan!

As for running a BIND server, all things considered it seems to me that
the simplest approach is to create a bare-bones VM running Linux.  Run
that on the Windows server (using VMware, VirtualBox, or similar).  If
the only things running in that machine are named, a firewall, a text
editor, logwatch, and backups, there's really not much effort in keeping
that machine running.  Just remember to do a distribution update once in
a while (e.g. dnf update, apt-get, etc.).  You might want to keep
SELinux/AppArmor, but with no other services, it may not be worth the
effort.  You can tailor Linux distributions down to a very minimal set
of services.  It's often done for embedded applications.  You can even
do the backups by snapshotting the VM.

You can update the zone files via UPDATE.  You can update the config
(and zone files if you like) in the VM, or via an exported directory
from the Windoze host.  (E.g. VirtualBox does this trivially.)
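
For instance, with VirtualBox (a sketch - the VM and share names are
placeholders, and the guest needs the Guest Additions for vboxsf):

# Export a host directory holding named.conf and the zone files:
VBoxManage sharedfolder add "bind-vm" --name bindcfg --hostpath "C:\bind" --automount

# In the guest, it appears as a vboxsf filesystem:
#   mount -t vboxsf bindcfg /etc/bind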

This would completely eliminate the complexity of dealing with the
Windows networking stack - the Linux machine (and named) just see an
ethernet adapter (or two, or...) on the host's network.  (Mechanically,
the VM's "adapter"  injects and retrieves raw ethernet packets into the
driver stack very close to the wire.)  No NAT or proxy (unless you want
it, in which case it can be static.)  And whatever kernel
features/networking libraries ISC uses are just there - no porting.

I haven't measured performance, but I do run my Linux machines in
VirtualBox VMs (mostly hosted on a Linux server, but some on Windows). 
I haven't run into issues - but then I'm not a big operator.  I do use
CPUs (and IO) with hardware virtualization support. 

In any case, the workload on ISC would be zero - unless they choose to
provide the VM (there are portable formats).  That work might be
something that someone who wants a Windows solution could afford to
sponsor.  The biggest part would be scripting packaging from the
selected distro and a test system.  Plus a bit of keeping it
up-to-date.  And documentation.  Optionally, someone might want to do
some configuration/performance tuning - but most of that is what ISC
does anyway inside the VM.  Again, the work would seem to be something
that the Windows community could donate and/or sponsor.

It might even be the case that ISC could use the same VM as part of its
test suite - many CI engines are using that approach to get wide
coverage with minimal hardware.  (The CI folks, like GitHub Actions,
GitLab, etc spin up a VM, install the OS and minimal packages, then run
your tests.)

I confess that this is a practical approach - it won't satisfy those who
insist on a "pure" windows solution. (Though I bet if you looked inside
their routers, storage, phone systems, and certainly cars there'd be
Linux purring away under the hood...)  Nor anyone who thinks that the
status quo is ideal or that only a "no effort" solution is acceptable. 
Anyhow, it's not an attempt to start a religious war or to prolong the
debate on what ISC does.  It assumes BIND won't support windows, that
WSL is imperfect, and that an alternative to complaining might be
helpful...  Feel free to s/Linux/(Solaris|FreeBSD|VMS|yourfavorite)/g.

I don't have a need for BIND (except the tools) under Windows, so I'm
not volunteering to implement this.

FWIW.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: root.hints - apparmor access error with Bind from PPA

2021-06-04 Thread Timothe Litt
I'm not an apparmor user - but have you looked at the parent directory
permissions?  From what you posted, that would be the logical culprit.
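
A quick way to check the whole path, using the file from your log
(namei is part of util-linux):

ls -ld /usr /usr/share /usr/share/dns
namei -l /usr/share/dns/root.hints    # long listing of every path component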

In any case, unless you are using a private root zone, root.hint is
unnecessary, since named has the root nameserver addresses built in. 
(Even if one or two change addresses before the next
release, as does happen infrequently, once named starts it will ask the
network for the full set.  It only needs one - of the 13 - to bootstrap
itself.)
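
You can watch the priming yourself - asking any one root server returns
the full, current set (any of the 13 will do, assuming outbound port 53
is open):

dig +norec @a.root-servers.net . NS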

There is an argument for running your own root server with a copy of the
root zone - but most small operators don't.  Simplifying, it makes sense
if you are "far" from the global root servers, have regular outages that
leave a local region intact, or are very concerned about privacy.  (In
the latter case, qname minimization is likely a better choice.)

It seems that a lot of distributions configure a root.hint out of
habit.  It's actually a step backwards, since unless you have a process
to update root.hint, your copy is likely to end up being older than
named's built-ins...

It's been a while since I looked, but at that time, a 20ish year old
root.hint had only a couple of IPv4 addresses wrong.  (Didn't have many
IPv6.)  root.hint really IS stable - and so, therefore, are the named
built-ins.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 03-Jun-21 22:45, 3coma3 wrote:
> Dear list:
>
> I've used the PPA at https://launchpad.net/~isc/+archive/ubuntu/bind to
> upgrade
> bind from 9.11.3+dfsg-1ubuntu1.15 (current version for
> bionic-{updates,security}) to 9.16.16-2+ubuntu18.04.1+isc+1
>
> (I was needing to use the validate-except clause and this new version
> supports it)
>
> After the upgrade, attempting to start the named service failed with
> this error:
> Jun  3 22:03:53 top named[19946]: could not configure root hints from
> '/usr/share/dns/root.hints': permission denied
>
> Right below that apparmor logs this:
>
> Jun  3 22:03:53 top kernel: [17981.067014] audit: type=1400
> audit(1622768633.158:559): apparmor="DENIED" operation="open"
> profile="/usr/sbin/named" name="/usr/share/dns/root.hints" pid=19946
> comm="isc-worker" requested_mask="r" denied_mask="r" fsuid=129 ouid=0
>
>
> What's puzzling is that the apparmor profile apparently allows the read
> @ line 36:
>
> find /etc/apparmor.d -type f | xargs grep -n '/usr/share/dns'
> /etc/apparmor.d/usr.sbin.named:36:  /usr/share/dns/root.* r,
>
> dpkg -S /etc/apparmor.d/usr.sbin.named
> bind9: /etc/apparmor.d/usr.sbin.named
>
> apt-cache policy bind9
> bind9:
>   Installed: 1:9.16.16-2+ubuntu18.04.1+isc+1
>   Candidate: 1:9.16.16-2+ubuntu18.04.1+isc+1
>   Version table:
>  *** 1:9.16.16-2+ubuntu18.04.1+isc+1 500
>     500 http://ppa.launchpad.net/isc/bind/ubuntu bionic/main amd64
> Packages
>     100 /var/lib/dpkg/status
>  1:9.11.3+dfsg-1ubuntu1.15 500
>     500 http://mirrors.us.kernel.org/ubuntu bionic-updates/main
> amd64 Packages
>     500 http://security.ubuntu.com/ubuntu bionic-security/main amd64
> Packages
>  1:9.11.3+dfsg-1ubuntu1 500
>     500 http://mirrors.us.kernel.org/ubuntu bionic/main amd64 Packages
>
>
> Although the error appears to not be related to file perms, here's for
> completeness:
>
> ls -la /usr/share/dns
> total 28
> drwxr-xr-x   2 root root    55 dic 13  2019 .
> drwxr-xr-x 457 root root 12288 jun  3 21:44 ..
> -rw-r--r--   1 root root   166 feb  1  2018 root.ds
> -rw-r--r--   1 root root  3315 feb  1  2018 root.hints
> -rw-r--r--   1 root root   864 feb  1  2018 root.key
>
>
> It helped me to find a previous report at
> https://lists.isc.org/pipermail/bind-users/2020-July/103454.html
>
> And then I ended up solving the problem as Brett did there, by copying
> /usr/share/dns to /etc/bind/dns and changing the zone definition.
>
> Still I am reporting this in case it's affecting someone else, and
> because maybe you guys have an idea as to what's going on with apparmor
> here? I'm not very knowledgeable in it and would appreciate any info /
> help to solve the root cause (and maybe learn something).
>
> Thanks in advance
>
>
> full log:
>
> Jun  3 22:03:53 top systemd[1]: Started BIND Domain Name Server.
> Jun  3 22:03:53 top named[19946]: starting BIND 9.16.16-Ubuntu (Stable
> Release) 
> Jun  3 22:03:53 top named[19946]: running on Linux x86_64
> 5.6.7-050607-generic #202004230933 SMP Thu Apr 23 09:35:28 UTC 2020
> Jun  3 22:03:53 top named[19946]: built with '--build=x86_64-linux-gnu'
> '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr

Re: Deprecating BIND 9.18+ on Windows (or making it community improved and supported)

2021-04-29 Thread Timothe Litt
I gave up on running named on Windows long ago, so I generally support
this direction.

However, I do use the diagnostic tools (dig, delv, rndc, nsupdate) for
troubleshooting.  It can be helpful to diagnose from the client
environment (e.g. thru the same firewalls, anti-virus, buggy network
stack, and APIs).  The BIND tools are better than the Windows tools, and
using the same tools everywhere is always beneficial.

Would reducing support to just the diagnostic tools be a helpful middle
ground?

It seems to me that they're much simpler (mostly if not all
single-threaded) and easier to maintain.  Do they have the same VS
issues? (I haven't built on windows for some time.)

I don't include tools that assume a local named instance in the
"diagnostic" category - e.g. named-journalprint, dnstap, etc. 

The first-order discriminant is whether the tool talks to the network
(to make DNS queries - no, not to a local named! - including control
traffic).  If yes: prefer to keep it.

FWIW - YMMV.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 29-Apr-21 07:35, Ondřej Surý wrote:
> Hi,
>
> we’ve been discussing the /subj for quite some time, and we are thinking
> about either deprecating BIND 9 on Windows completely or just handing
> it over to the “community supported” level.
>
> There are a couple of reasons for the move:
>
> * Neither the Visual Studio 2017 which we use nor VS2019 supports the C11
> features we extensively use (stdatomic.h), which forces us to write
> horrible shims on top of the Windows API
> * No BIND 9 developer uses Windows even as secondary platform
> * BIND 9 doesn’t compile on Windows 10 nor with VS2019, and fixing that
> would require extensive work
> * Windows now has WSL2 
> (https://docs.microsoft.com/en-us/windows/wsl/install-win10) that can be used 
> to run BIND 9 natively
>
> We think that the resources required to support new Windows and Visual
> Studio versions would be better spent elsewhere, and therefore we would
> like to deprecate official support for Windows as of BIND 9.18 (the next
> ESV, to be released in 2022); the Windows support for BIND 9.16 will be
> kept intact.
>
> Now, there are two options:
>
> a) The support will be completely dropped and the official way to run BIND 9 
> on Windows would be using WSL2
> b) A volunteer will step up and improve the Windows implementation to support 
> newer platforms and make it up to par with POSIX platforms.
>
> 1. Let me be absolutely clear here - we are not interested in keeping the
> Windows port just on life support; that would miss the point. It has been
> neglected for too long, and if we are to keep it, there are several other
> areas that need improvement - the installer, the system integration,
> and the build system would have to be extensively improved as well.
>
> Thanks,
> Ondrej
> --
> Ondřej Surý (He/Him)
> ond...@isc.org




Re: Status of zytrax.com "DNS for Rocket Scientists" website

2021-04-21 Thread Timothe Litt
Meantime, you can find it on archive.org:

https://web.archive.org/web/20201223052910/https://www.zytrax.com/

https://web.archive.org/web/20201223034301/https://www.zytrax.com/books/dns/

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 20-Apr-21 19:09, Victoria Risk wrote:
> Ron Aitchison called me this afternoon. He is fine, and he promised to
> try to resurrect his site. He has been struggling with his hosting
> provider and he said he *might* be looking for new hosting, but he hasn’t
> thrown in the towel yet.
>
> I will report back if I get further updates. I told him that a lot of
> users still find his site very useful and to let ‘us’ know if he ever
> plans to pull the plug. 
>
> Vicky
>
>> On Apr 19, 2021, at 8:49 AM, Victoria Risk  wrote:
>>
>> I will contact Ron and see what is up.
>>
>> Thank you for pointing it out Carsten!
>>
>> Vicky
>>
>>> On Apr 19, 2021, at 7:21 AM, Richard T.A. Neal  
>>> wrote:
>>>
>>> Carsten Strotmann wrote:
>>>
>>>> does anyone know about the status of the zytrax.com website and the 
>>>> excellent "DNS for Rocket Scientists" guide?
>>>> The webpage first had a x509 certificate error (expired) in December
>>>> 2020 and now the web server is unreachable.
>>>> I (and colleagues) have tried to reach Ron Aitchison by mail and other 
>>>> communication means, but no success.
>>> Unfortunately I don't but if anyone is able to make contact with Ron I'd be 
>>> very happy to offer to host an archive of the site at no cost.
>>>
>>> Best,
>>> Richard.
>>>
>




Re: How Zone Files Are Read

2020-12-16 Thread Timothe Litt

On 16-Dec-20 13:52, Tim Daneliuk wrote:
> On 12/16/20 12:25 PM, Timothe Litt wrote:
>> On 16-Dec-20 11:37, Tim Daneliuk wrote:
>>> I ran into a situation yesterday which got me pondering something about 
>>> bind.
>>>
>>> In this case, a single line in a zone file was bad.  The devops automation
>>> had inserted a space in the hostname field of a PTR record.
>>>
>>> What was interesting was that - at startup - bind absolutely refused
>>> to load the zone file at all.  I would have expected it to complain
>>> about the bad record and ignore it, but load the rest of the
>>> good records.
>>>
>>> Can someone please explain the rationale or logic for this?  Not 
>>> complaining,
>>> just trying to understand for future reference.
>>>
>>> TIA,
>>> Tim
>> DNS is complicated.  The scope of an error in a zonefile is hard to 
>> determine.
>>
>> To avoid this, your automation should use named-checkzone before releasing a 
>> zone file.
>>
>> This will perform all the checks that named will when it is loaded.
>>
>
> Kind of what I thought.  Whoever build the environment in question
> really didn't understand DNS very well and hacked together a kludge
> that I am still trying to get my head around.
>
For a simple example of why it's complicated - what if the typo you had
was for a host that sends e-mail?

You'll see intermittent delivery errors when remote hosts can't resolve
the host's address; some require that a reverse lookup resolve to the
host as an anti-spoofing measure.  Others won't.  You'll spend a long
time diagnosing.

named can't tell this case from a typo for a local printer's PTR - where
it's unlikely that a reverse lookup failure will matter.  Of course,
this means it could go undetected for years - until it IS needed.

Or the typo is in a NS record - which you probably won't detect until
the other NS goes down...

And, any errors are cached for their TTL by resolvers.  The TTL may
(hopefully for query rate reduction) be large.  In your case, it would
be the negative TTL (meaning that even adding the record later wouldn't
have immediate effect).

The bottom line is that named must assume that anything placed in a zone
file is important, and that the external impact - either sin of omission
or commission - might be large.

Thus, while named can't detect all (or even most) errors, those that it
does detect cause immediate failure to load.  That prevents caching and
propagation as well as getting human attention.

When something's wrong, it's best to stop and fix it.  Error recovery is
a very good thing - but only when you can demonstrate that the cure is
better than the disease.  Skipping format errors in a zone file would
not satisfy that constraint.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 






Re: How Zone Files Are Read

2020-12-16 Thread Timothe Litt
On 16-Dec-20 11:37, Tim Daneliuk wrote:
> I ran into a situation yesterday which got me pondering something about bind.
>
> In this case, a single line in a zone file was bad.  The devops automation
> had inserted a space in the hostname field of a PTR record.
>
> What was interesting was that - at startup - bind absolutely refused
> to load the zone file at all.  I would have expected it to complain
> about the bad record and ignore it, but load the rest of the
> good records.
>
> Can someone please explain the rationale or logic for this?  Not complaining,
> just trying to understand for future reference.
>
> TIA,
> Tim

DNS is complicated.  The scope of an error in a zonefile is hard to
determine.

To avoid this, your automation should use named-checkzone before
releasing a zone file.

This performs all the checks that named does when it loads the zone.
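
For example, as a gate in the pipeline (the zone name and path are
illustrative):

named-checkzone example.com /var/named/zones/example.com.db && echo "OK to release"

named-checkzone exits non-zero on the same errors that make named refuse
the load, such as the stray space in that PTR record.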

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: [External] Re: How can I launch a private Internet DNS server?

2020-11-08 Thread Timothe Litt
On 07-Nov-20 14:06, Tom J. Marcoen wrote:
> Having at least two name servers is not a requirement of the RFC
> standards, but which TLD allows only one NS server to be given when
> you register a domain?
>
> On Sat, 7 Nov 2020 at 16:53, Kevin A. McGrail <kmcgr...@pccc.com> wrote:
>
> On 11/7/2020 10:15 AM, Reindl Harald wrote:
>>
>> https://tools.ietf.org/html/rfc1537
>> Common DNS Data File Configuration Errors
>>
>> 6. Missing secondary servers
>>
>> > It is required that there be a least 2 nameservers
>> > for a domain.
>>
>> -
>>
>> that above is common knowledge virtually forever and the
>> difference of "must" and "should" in IETF wordings is also very
>> clear 
>
> While I agree this is common knowledge as a best practice, this
> rfc is a memo NOT a standard from my reading:
>
>   This memo provides information for the Internet community.  It does
>not specify an Internet standard.  Distribution of this memo is
>unlimited.
>
> Regards,
> KAM
>
>

I'm amazed that this thread has persisted for so long on this list of
knowledgeable people.

RFC1034 <https://tools.ietf.org/html/rfc1034>, one of the two
foundational RFCs for the DNS:

P.18 in section 4.1 (NAME SERVERS => Introduction):

A given zone will be available from several name servers to insure its
availability in spite of host or communication link failure.  By
administrative fiat, we require every zone to be available on at least
two servers, and many zones have more redundancy than that.

In case the font is too small, the key phrase is:

"we require every zone to be available on at least two servers"

That's "REQUIRE" at least TWO SERVERS.

https://tools.ietf.org/html/rfc1537 documents common misconfigurations -
that is, cases of non-conformance to the RFCs that the author
encountered circa 1993.  It was superseded in 1996 by RFC 1912
(https://tools.ietf.org/html/rfc1912), where section 2.8 starts with
"You are required to have at least two nameservers for every domain". 
Neither document supersedes RFC1034; rather they attempt to help with
interpreting it.

https://www.iana.org/help/nameserver-requirements  consolidates
information from several RFCs, since the DNS has evolved over time.  It
is not an RFC, but a convenient summary.  It primarily documents the
tests performed by IANA when it processes a delegation change to the
root, .INT, and .ARPA zones.  These tests validate conformance to the
RFCs.  As the introduction says, "These tests do not measure against
best practices or comprehensively measure protocol conformance. They are
a practical set of baseline requirements that catch common
misconfiguration errors that impact stable operations of the DNS."

Bottom line: two servers per zone are required by the DNS architecture. 
It's not folklore.  It's not optional.
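
A quick sanity check for a zone you operate (the name is illustrative):

dig +short example.com NS

It should return at least two names - ideally on well-separated networks
(see RFC 2182).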

It is true that the DNS is robust enough to function with a number of
misconfigurations (including just one server for a zone, since in
practice this is almost indistinguishable from transient conditions.)

Nonetheless, the goal of the DNS architecture (and most of its
operators) is to have a stable and robust name service. 
Misconfigurations, such as those documented in RFC 1537, make the DNS
unstable and fragile.  The architecture tends to contain the effects of
many misconfigurations, but that doesn't make them wise.

As I noted earlier: "DNS appears deceptively simple at first blush. 
Setting up a serviceable infrastructure requires an investment of
thought and on-going maintenance.  You will not be happy if you skimp on
that investment, since broken DNS is externally visible - and frequently
catastrophic."

I'll finish with a 1987 quote from Leslie Lamport on distributed
systems, which the DNS most certainly is:

"A distributed system is one in which the failure of a computer you
didn't even know existed can render your own computer  unusable."

Can the quibbling stop now?

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: How can I launch a private Internet DNS server?

2020-11-07 Thread Timothe Litt

On 06-Nov-20 08:50, Reindl Harald wrote:
>
>
> Am 06.11.20 um 13:25 schrieb Tom J. Marcoen:
>> First of all, sorry that I cannot reply within the thread, I was not
>> yet a member of the mailing list when those emails were sent.
>>
>>> On Thu 15/Oct/2020 18:57:16 +0200 Jason Long via bind-users wrote:
>>>>
>>>> Excuse me, I just have one server for DNS and that tutorial is
>>>> about secondary
>>>> DNS server too.
>>>
>>> Just skip the chapter about the secondary.  You're better off buying
>>> secondary
>>> DNS services externally.  A good secondary offloads your server
>>> noticeably, and
>>> keeps the domain alive in case of temporary failures.
>>>
>>> Best
>>> Ale
>>
>> Is it not a requirement to have at least two authoritative name
>> servers? I believe all TLDs require at least two name servers but I
>> must be mistaking as no one pointed this out yet.
>
> yes, and "You're better off buying secondary DNS services externally"
> don't say anything else
>
> the point is that the two nameservers are required to be located on
> two different ip-ranges anyways to minimize the risk that both going
> down at the same time
>
Do a web search for "secondary dns provider" and "backup dns provider". 
There are a number of them, some paid, some free.   Not all are equal -
last time I looked, support for DNSSEC was uncommon, especially among
the free ones.  IPv6 support has been lagging, but improving.  Also, if
you use UPDATE, make sure the service that you use supports NOTIFY. 
Some limit or charge according to the number of queries, zones and/or
names - but that doesn't necessarily correlate with price. 

Also look for minimum TTL restrictions - especially with free services. 

I use a free service that does support IPv6, DNSSEC & NOTIFY - and runs
on BIND.

Often the external services provide better geographic diversity than a
small operation can - and have better internet connections. 

If you have the resources, you can also setup an agreement with a
similarly-situated organization for mutual secondary service - you slave
their zones & they slave yours.  This can work well - often at no cost -
especially if the resource demands are roughly equal.

Other caveats: external services typically won't use hostnames in your
domain - or if you want that, will charge you for it.  And if you depend
on views, external services will only work for external views - you'll
need to provide your own secondary servers for internal-only views. 

Finally, if performance matters and you have a dispersed user base, look
for a provider that has a solid infrastructure - ANYCAST is one good
clue.  You'll almost always have to subscribe to a paid service in these
cases, especially with high query rates.

RFC2182 (https://tools.ietf.org/html/rfc2182) is fairly readable and
describes many of the considerations involved in selecting secondary DNS
servers. 

DNS appears deceptively simple at first blush.  Setting up a serviceable
infrastructure requires an investment of thought and on-going
maintenance.  You will not be happy if you skimp on that investment,
since broken DNS is externally visible - and frequently catastrophic.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Request for review of performance advice

2020-07-10 Thread Timothe Litt
These suggestions - like most performance articles - are oriented toward
achieving the highest performance with large configurations.  E.g. "How
big can/should you go to support big loads?"

That's useful for many users.  But there are also many people who run
smaller operations, where the goal is to provide adequate (or even
exceptional) performance with a minimum footprint. When BIND is one of
many services, overall performance can be improved by minimizing BIND's
resource requirements.  This is also true in embedded applications,
where footprint matters.

So a discussion about how to optimize for the smaller cases - what do
you trade-off?  What knobs can one turn down - and how far? would be a
useful part of or complement to the proposed article.  E.g. "How small
can/should you go when your loads are smaller?"

FWIW, a wizard - even just a spreadsheet - that encapsulates known
performance results might also be useful.  E.g. Given a processor,
number/size of zones, query rate, & type, produce a memory size, disk &
network I/O rates, and starting configuration parameters... Obviously,
this could become arbitrarily complicated, but a simple spreadsheet with
configuration (hardware & software) and performance data that's
searchable would give people a good starting point.  Especially if it's
real-world. (It can be challenging to map artificial
"performance"/stress tests done in a development/verification
environment to the real world...)  While full automation can be fun,
it's amazing how much one can get out of a spreadsheet with autofilter. 
(For the next level, pivot tables and/or charts...)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 07-Jul-20 21:57, Victoria Risk wrote:
> A while ago we created a KB article with tips on how to improve your
> performance with our Kea dhcp server. The tips were fairly obvious to
> our developers and this was pretty successful. We would like to do
> something similar for BIND, provide a dozen or so tips for how to
> maximize your throughput with BIND. However, as usual, everything is
> more complicated with BIND.
>
> Can those of you who care about performance, who have worked to
> improve your performance, share some of your suggestions that have the
> most impact?  Please also comment if you think any of these ideas
> below are stupid or dangerous. I have combined advice for resolvers
> and for authoritative servers, I hope it is clear which is which...
>
> The ideas we have fall into four general categories:
>
> System design
> 1a) Use a load balancer to specialize your resolvers and maximize your
> cache hit ratio.  A load balancer is traditionally designed to spread
> the traffic out evenly among a pool of servers, but it can also be
> used to concentrate related queries on one server to make its cache as
> hot as possible. For example, if all queries for domains in .info are
> sent to one server in a pool, there is a better chance that an answer
> will be in the cache there.
>
> 1b) If you have a large authoritative system with many servers,
> consider dedicating some machines to propagate transfers. These
> machines, called transfer servers, would not answer client queries,
> but just send notifies and process IXFR requests.
> 1c) Deploy ghost secondaries.  If you store copies of authoritative
> zones on resolvers (resolvers as undelegated secondaries), you can
> avoid querying those authoritative zones. The most obvious uses of
> this would be mirroring the root zone locally or mirroring your own
> authoritative zones on your resolver.
>
> we have other system design ideas that we suspect would help, but we
> are not sure, so I will wait to see if anyone suggests them.
>
> OS settings and the system environment
> 2a) Run on bare metal if possible, not on virtual machines or in the
> cloud. (any idea how much difference this makes? the only reference we
> can cite is pretty out of date
> - 
> https://indico.dns-oarc.net/event/19/contributions/234/attachments/217/411/DNS_perf_OARC_Apr_14.pdf
> )
>
> 2b) Consider using with-tuning-large.
> (https://kb.isc.org/docs/aa-01314) This is a compile time option, so
> not something you can switch on and off during production. 
>
> 2c) Consider which R/W lock choice you want to use -
> https://kb.isc.org/docs/choosing-a-read-write-lock-implementation-to-use-with-named
> For the highest tested query rates (> 100,000 queries per second),
> pthreads read-write locks with hyper-threading /enabled/ seem to be
> the best-performing choice by far.
>
> 2d) Pay attention to your choice of NIC cards. We have found wide
> variations in their performance. (Can anyone suggest what specifically

Re: Question About Recursion In A Split Horizon Setup

2020-04-17 Thread Timothe Litt
On 17-Apr-20 10:56, Tim Daneliuk wrote:
> On 4/17/20 9:50 AM, Bob Harold wrote:
>> Agree, that's odd, and not what the man page says.  Any chance that there is 
>> some other DNS helper running, like resolved, nscd, dnsmasq, etc?
> Nope.  This is vanilla FreeBSD with vanilla bind running.
>
>> 'dig' should tell you what address it used, at the bottom of the output - 
>> what does it say?
>
>
> ;; Query time: 0 msec
> ;; SERVER: ::1#53(::1)
> ;; WHEN: Fri Apr 17 09:53:51 CDT 2020
> ;; MSG SIZE  rcvd: 83
>
>
> Does the SERVER line indicate it's trying to get to the local instance via
> IPV6 or is this just standard notation?  (This is an IPV4 only environment).
>
>
You seem to be selecting views based on IP address.

If the host on which you are running dig is multi-homed, the OS may pick
a source address other than what you intend.  Use -b to explicitly bind
to a particular source address.

(Or, if you use TSIG to match views, -k)
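
For example (the addresses and key file here are illustrative):

# Pin the source address so the intended view matches:
dig -b 192.168.10.5 @192.168.10.1 host.internal.example A

# Or, if the views match on TSIG keys:
dig -k /etc/bind/internal-view.key @192.168.10.1 host.internal.example A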


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: DNSSEC - many doubts

2020-04-03 Thread Timothe Litt
The entropy problem is especially severe in many VMs.  Besides Warren's
suggestion:

Many current machines have hardware random-noise sources that solve (or
at least put a big dent in) the entropy problem.  A Raspberry Pi is
inexpensive and, unless you are generating zillions of keys, will solve
most of these issues.  I use entropy broker
(https://www.vanheusden.com/entropybroker/) to distribute entropy from a
Pi to my network.  (And you can always add another RPi.)  I don't recall
the last time I ran out of entropy - and no, I'm not talking about the
"organization" of my physical desktop :-)

For a while, USB keys with entropy sources were a good choice - but with
hardware sources built into most CPUs, I think their time has passed. 
The same low-power RPi that feeds entropy is also a great NTP server,
VPN gateway, and a few other things - for ~USD 40.  And any Intel or AMD
CPU since ~2015 has RDRAND/RDSEED.

There are some religious arguments about booby-trapped hardware sources -
these days, kernels will mix all sources, so I don't get too upset.  But
YMMV.
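
Some quick checks on Linux (a sketch - availability varies by kernel and
CPU):

grep -m1 -o -E 'rdrand|rdseed' /proc/cpuinfo   # CPU instruction support
cat /proc/sys/kernel/random/entropy_avail      # kernel's pool estimate
ls /dev/hwrng                                  # dedicated hwrng device, if any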

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 02-Apr-20 11:58, Warren Kumari wrote:
> On Thu, Apr 2, 2020 at 11:14 AM David Alexandre M. de Carvalho
>  wrote:
>> Hello, good afternoon.
>> My first post in this list :)
>>
>> I'm running BIND Chroot for many years (currently version 9.8.2) on some old 
>> hardware running Oracle Linux 6.
>> I believe it was last year when I was reading about implementing DNSSEC, and 
>> I think I've even tried to generate a
>> keypair in the slowest server, which after more than a day, wasn't ready 
>> yet. Maybe I was doing something wrong, I
>> honestly don't know.
> You almost definitely were -- even a really really slow machine should
> be able to generate keys in a small number of seconds -- you didn't
> list what commands you used, but I'm going to assume you were trying
> to generate an rsa key - you should be able to get a feel for how long
> this takes by running:
> time openssl genrsa -out private.key 2048
> or
> time openssl genrsa -out private.key 4096
>  (note that this is very different to running 'openssl speed rsa2048
> rsa4096', which benchmarks RSA operations, not key generations).
>
> I'm fairly sure that your issue was a lack of entropy -- in order to
> generate cryptographically good keys, you need a good source of
> randomness. If you are running an older machine and older kernel, the
> /dev/random source is blocking, and if you try and read too much from
> it it will just hang until it has enough entropy to give "safe"
> output. Newer kernels do a better job of mixing in external event
> noise, but there are a number of modules which help with this -
> haveged being the best known (http://www.issihosts.com/haveged/ ).
> You could also test if this is the issue by using /dev/urandom, which
> doesn't block, or 'while true; do cat
> /proc/sys/kernel/random/entropy_avail; sleep 2; done' and see if the
> available entropy drops to zero during key generation...
>
> W
>
>> So now I had some time and reading about this again.
>>
>> If I query either of my servers about my domain:
>> dig @dns di.ubi.pt DNSKEY
>> I do get the DNSKEY, but I have no records when querying about +dnssec. My 
>> topdomain (ubi.pt) doesn't have DNSSEC yet
>> either.
>>
>> my named.conf already has the following:
>>
>> dnssec-enable yes;
>> dnssec-validation auto;
>> dnssec-lookaside auto;
>> bindkeys-file "/etc/named.iscdlv.key";
>> managed-keys-directory "/var/named/dynamic";
>>
>> Outside the configuration file I also have a /etc/named.root.key
>>
>> My questions:
>> 1) Will my old servers (1GB RAM) become much slower with  DNSSEC? Is it 
>> worth it?
>> 2) I have one global "hosts" file and 3 reverse zone files, each for the 
>> respective IP network. Can I use the same
>> Keypair in all of them?
>> 3) Are the files /etc/named.root.key file and /etc/named.iscdlv.key already 
>> being used? I compared them to the result
>> of the DNSKEY dig query but they are different.
>>
>> Thank you so much for your time!
>> Best regards
>>
>> Os melhores cumprimentos
>> David Alexandre M. de Carvalho
>> ---
>> Especialista de Informática
>> Departamento de Informática
>> Universidade da Beira Interior
>>
>>
>>
>> ___
>> Please visit https://l

Re: Machine friendly alternative to nsupdate

2020-04-01 Thread Timothe Litt
These projects tend to be custom... there may be a prepackaged solution,
but everything I've run into has either been tied to the specific
abstractions of a project - or very low level.

Mine uses the Perl Net::DNS module to set up update transactions.

Net::DNS gives you the ability to send update, use TSIG, get all the
response fields conveniently, and get display text.  It's pretty well
supported - and the basis for a number of DNS tools and tests.

When first approached, it can be, er, less than obvious exactly how to
make UPDATE work.  If you get stuck, I can probably extract the code to
do (TSIG-signed) updates.

As for the next layer - XML or whatever - that's another project.  If
you speak Perl, it would not be difficult to wrap Net::DNS to meet your
needs.

P.S. Other than using it (and reporting the occasional bug), I have no
relationship with Net::DNS :-)
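
To give a flavor of it, here's a minimal sketch (the zone, names, server
address, and key are placeholders; error handling is trimmed).  Note the
prerequisite, which turns your "CNAME already there" case into an
explicit YXRRSET rcode instead of a silent NOERROR:

#!/usr/bin/perl
use strict;
use warnings;
use Net::DNS;

# Build an UPDATE transaction for the zone.
my $update = Net::DNS::Update->new('example.com');

# Prerequisite: fail (YXRRSET) if a CNAME already occupies the name.
$update->push( pre    => nxrrset('host42.example.com. CNAME') );
$update->push( update => rr_add('host42.example.com. 600 A 192.0.2.42') );

# TSIG-sign the request.
$update->sign_tsig( 'ddns-key', 'BASE64SECRET==' );

my $res   = Net::DNS::Resolver->new( nameservers => ['192.0.2.53'] );
my $reply = $res->send($update);

# The rcode is machine-parseable: NOERROR, YXRRSET, NOTAUTH, ...
print $reply ? $reply->header->rcode : $res->errorstring, "\n";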

Timothe Litt

ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 01-Apr-20 05:07, Petr Bena wrote:
> Hello,
>
> Some preamble: Some time ago I created an open source DNS admin web
> GUI *1 that is basically a wrapper around dig and nsupdate that allows
> people with "less CLI knowledge" to easily manipulate DNS records. The
> main reason for this was that in our corporation we have about 400
> internal DNS zones hosted on over 100 different BIND master servers,
> in more than 10 countries around the planet and this tool allowed us
> to unify the management as it allowed integration with different
> master servers, allow granular role based access for individual zones
> (integrated with LDAP groups), including some web API for our
> automation tools etc.
>
> Now to the actual problem: as I said, this tool is just a wrapper
> around nsupdate and dig, I like it that way because it's non-invasive,
> unlike other similar DNS admin panels, it doesn't require ANY changes
> on DNS server configuration and it integrates well with other
> solutions already in place. The problem I have however, is, that
> nsupdate was created as a tool for humans, rather than machines and
> parsing its output and even giving it input is very hard. Plus some
> things don't even seem to be possible in it.
>
> Is there any alternative to nsupdate, something that can work with XML
> or JSON payloads or provide output in such machine parseable format?
> For example, typical problem I am facing right now - is that nsupdate
> silently ignores things that IMHO shouldn't be ignored - for example
> when someone try to add a record that already exists, or try to add an
> A record over CNAME, nsupdate silently ignores this, even in debug
> output I can't see any difference, in first send the record is
> created, resulting in NOERROR, in second identical send, update is
> ignored resulting in NOERROR, so I have no way to tell users of my app
> that record was not in fact created or changed (because it already
> exists). For example:
>
> Here is operation where I first add a CNAME record and then try to add
> same A record (imagine two different users were doing this so user B
> was unaware that CNAME already exists) you can see in both cases
> nsupdate respond with same answer, despite record is created only in
> first case. And on top of that this answer is not easy to machine parse.
>
> > debug
> > update add petrbena.test.zone. 600 CNAME this.is.test.
> > send
> Sending update to 10.15.12.17#53
> Outgoing update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 48433
> ;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; UPDATE SECTION:
> petrbena.test.zone.    600    IN    CNAME    this.is.test.
>
> ;; TSIG PSEUDOSECTION:
> server. 0    ANY    TSIG    hmac-md5.sig-alg.reg.int. 1585729680 300
> 16 xx== 48433 NOERROR 0
>
>
> Reply from update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 48433
> ;; flags: qr ra; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; TSIG PSEUDOSECTION:
> server. 0    ANY    TSIG    hmac-md5.sig-alg.reg.int. 1585729680 300
> 16 xx== 48433 NOERROR 0
>
> > update add petrbena.test.zone. 600 A 0.0.0.0
> > send
> Sending update to 10.15.12.17#53
> Outgoing update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 30709
> ;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; UPDATE SECTION:
> petrbena.test.zone.    600    IN    A    0.0.0.0
>
> ;; TSIG PSEUDOSECTION:
>
> server. 0    ANY    TSIG 

Re: with dot in NAME for ACME via dynamic update (Axel Rau)

2020-03-14 Thread Timothe Litt
Er,

dig _acme-challenge.imap.lrau.net.

is missing a record type.  The default is A.


dig _acme-challenge.imap.lrau.net. txt

will likely give you better results

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 14-Mar-20 13:31, bind-users-requ...@lists.isc.org wrote:
> On 14.03.2020 at 18:14, Chuck Aurora <c...@nodns4.us> wrote:
>
>> it seems, the dynamic update protocol does not allow things like
>> _acme-challenge.some-host.some.domain
>> TXT"tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"
>> because there is no zone
>> some-host.some.domain
>
> I am pretty sure that is not correct, but we can't help unless you
> show your work.  If you need to specify the zone to update, you can
> and should.  BIND's nsupdate(8) and other dynamic DNS clients allow
> you to do this.

With this file
- - -
server localhost
debug
zone lrau.net
ttl 3600
add _acme-challenge.imap.lrau.net. 3600 TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"
show
send
answer
- - -
I get:
- - -
# nsupdate -k /usr/local/etc/namedb/dns-keys/ddns-key.conf
~/admin/ns-update-example.txt
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; UPDATE SECTION:
_acme-challenge.imap.lrau.net. 3600 IN TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"

Sending update to ::1#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 4
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; UPDATE SECTION:
_acme-challenge.imap.lrau.net. 3600 IN TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"

;; TSIG PSEUDOSECTION:
ddns-key. 0 ANY TSIG hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0


Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 4
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; TSIG PSEUDOSECTION:
ddns-key. 0 ANY TSIG hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0

Answer:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 4
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; TSIG PSEUDOSECTION:
ddns-key. 0 ANY TSIG hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0

# dig _acme-challenge.imap.lrau.net. @localhost

; <<>> DiG 9.16.0 <<>> _acme-challenge.imap.lrau.net. @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6153
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 404b9f34e94920a4ef3dd3065e6d14308acdeabfe0744b88 (good)
;; QUESTION SECTION:
;_acme-challenge.imap.lrau.net.        IN      A

;; AUTHORITY SECTION:
lrau.net.  3600  IN  SOA  ns4.lrau.net. hostmaster.lrau.net. 2020030850 86400 7200 604800 3600

;; Query time: 0 msec
;; SERVER: ::1#53(::1)
;; WHEN: Sat Mar 14 17:28:16 UTC 2020
;; MSG SIZE  rcvd: 145

(pki_dev_p37) [root@hermes /usr/local/py_venv/pki_dev_p37/src]# 

Axel
---
PGP-Key: CDE74120  ☀  computing @ chaos claudius


signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: Advice on balancing web traffic using geoip ACls

2020-02-23 Thread Timothe Litt
"Splitting traffic evenly" may not be in the interest of your clients -
suppose their locations are skewed?


In any case, this seems like a lot of work - including committing to
ongoing maintenance - for not much gain.


Consider setting up an anycast address - let the network do the work. 
This will route to the server closest to the client.  You can do this
with two DNS servers - pair each with a webserver, and have each server's
zone file point at its co-located webserver.  And/or anycast the webservers
themselves - that works well for static content, though dynamic content
raises a distributed-database challenge.
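
To make that concrete, a hedged sketch with placeholder addresses - each
anycast node serves the same zone name from a slightly different zone file,
so clients routed to one node get the co-located webserver:

    ; zone file fragment on the west-coast node
    www.example.com.  300  IN  A  198.51.100.10   ; west webserver

    ; zone file fragment on the east-coast node
    www.example.com.  300  IN  A  203.0.113.10    ; east webserver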


(It might be nice if someone with experience could write an end-to-end
tutorial on how to do this - from obtaining a suitable address - at a
reasonable cost - to setting up the BGP routing to the servers...)


Of course the simplest way out is to use a CDN - as this is a previously
solved problem.  It trades money for effort, which may be worthwhile if
it allows you to concentrate on your unique value proposition.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 22-Feb-20 20:25, Scott A. Wozny wrote:
> Greetings BIND gurus,
>
> I’m setting up hot-hot webserver clusters hosted on the west and east
> coasts of the US and would like to use Bind 9.11.4 with the Maxmind
> GeoIP database to split the traffic about evenly between those
> clusters.  Most of the traffic will be from the US so what I would
> like most to do is set up my ACLs to use the longitude parameter in
> the city DB and send traffic less than X (let's say -85) to a zone
> file that prioritizes the west coast servers and those greater than X
> to the east coast servers.  However, when I look through the 9.11.4
> ARM it doesn’t include the longitude field in the geoip available
> field list in section 7.1.  Has anyone tried this and it actually
> works as an undocumented feature or, because it’s not an “exact match”
> type operation, this is a non-starter?
>
> If this isn’t an option at all, does anyone have any suggestions on
> how to get a reasonably close split with ACLs using the geoIP
> database?  My first thought is to do continent based assignments to
> west and east coast zone files for all the non North American IPs with
> country based assignments of the non-US North American countries and
> then region (which, in the US, I believe translates to states) based
> assignments within the US.   I would need to do some balancing, but it
> seems fairly straightforward.  The downside is that the list would be
> fairly long and ACLs in most software can be kind of a performance hit.  
>
> The other alternative I was considering was doing splits by time zone,
> but there are a little over 400 TZs in the MaxMind GeoLite DB last
> time I checked and that also seems like it would be a performance hit
> UNLESS I could use wildcards in the ACL to group overseas time zones.
>  While I’ve not seen a wildcard in a geoip ACL, that doesn’t
> necessarily mean it can’t be done so I was wondering if anyone was
> able to make that work.
>
> Finally, I could try a hybrid of continent matches outside North
> America and then the North American timezones which seems like a
> reasonable compromise, but only if my preferred options of longitude <
> > isn’t available nor is wildcarding tz matches.  OR am I overthinking
> all of this and there is a simple answer for splitting my load that I
> haven’t thought of?  The documentation and examples available online
> are fairly limited so I thought I’d check with the people most likely
> to have actually done this.
>
> Any thoughts or suggestions would be appreciated.
>
> Thanks,
>
> Scott


signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: A policy for removing named.conf options.

2019-07-07 Thread Timothe Litt

On 13-Jun-19 06:46, Matthijs Mekking wrote:
> Dear BIND 9 users,
>
> BIND 9 has a lot of configuration options.  Some have lost value over
> the years, but the policy was to keep the options to not break old
> configurations.
>
> However, we also want to clean up the code at some point.  Keeping these
> options increases the number of corner cases and makes maintenance more
> cumbersome.  It can also be confusing to new users.  We are trying to
> establish an orderly, staged process that gives existing users ample
> time to react and adapt to deprecated options.
>
> The policy below describes our proposal for the procedure for removing
> named.conf options. We would like to hear your feedback.
>
> Thanks, best regards,
>
> Matthijs
> [Snip]

Slowly catching-up from being off-line, I've reviewed the discussion on
this.  A couple of observations:

So far, the suggestions have included logging & making sure that
named-checkconf flags deprecated syntax.  While helpful, these all
suffer from a timing problem: notice is only provided after the new
software is installed.  For advance notification, they require someone
to look at "the" log file (or proactively run named-checkconf).  But if
it isn't going to bite now, there's a good chance that won't happen. 
And if it does bite now, the software has been installed - best case in
a test environment, worst case in production.  [The last may be more
likely with packaged distributions than with build-from source sites.] 
Further, "the" log file varies by startup mechanism (sysVinit, systemd,
named's logs, consoles, ...) - and in embedded cases, logs may be remote
and/or hard to access.

One approach to notification that hasn't been mentioned would be to
include a deprecation notice and scan of the default configuration file
in 'make install'.   This should be a separate script called from
install, that can also be used stand-alone.

This has limitations, but covers some interesting cases:

Advantages:

Proactive: can stop install if obsolete directives/syntax is detected -
before starting the test (or for the adventurous, production) environment.

Does not depend on logging, or on anyone reading the logs.

Does not depend on which startup mechanism is in use.

Should be caught by the packagers' build.  They are generally
responsible enough to pass on the deprecations to their users.  The
packagers can run the check script in their package's 'install' mechanism.

Works for most people who build from source.

Limitations:

Does not work for installations who use a non-default configuration
file. (e.g. named -c ...)

May be messy for chroot and enhanced security (selinux,apparmor,...)
environments

Will not inspect dynamic configurations (e.g. rndc addzone, modzone...)

Notes:

In all cases, make install could include a short notice of the form "See
DEPRECATIONS for changes that may require changes in your
configuration files".   The README can also refer to this file to avoid
duplication.

Why install?  Eventually, even packaged distributions use install - it
may be buried in an RPM's spec file, but it's run somewhere.  Install
allows the newly built(or distributed) version to check before the new
version is activated.  "configure" is too soon - you don't have the new
images, and with packaged (and cross-compiled) distributions, it's never
run on the target.

Probably, running the check should be the default (maximum coverage),
but a make install-nocheck target would probably be necessary.

Another mechanism would be to add a --fix option to named-checkconf. 
This would generate a new file(s), commenting out options that no longer
serve a purpose - with an easily detectable marker (e.g. '# OBSOLETE -
named-checkconf V19.2').  For options that are simply renamed, it can
insert the new, equivalent syntax.  For options that can't be
automatically updated, create a marker "# ATTENTION: named-checkconf
V19.2 - The 'use-Klingon-names' option is not supported, see
DEPRECATIONS section 659.712 for details" - and don't comment out the
option!  A log file listing all files modified should be produced. 
--fix would shift the burden of finding the affected options from the
user to software - making it (a) more likely to happen (b) easier -
especially for configurations that span dozens (or hundreds) of
'include'd files.
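
To make the install-time check concrete, a rough sketch - nothing like this
exists today, and the paths, messages, and 'install-nocheck' target are all
invented for illustration:

    #!/bin/sh
    # check-deprecations.sh - run the *new* named-checkconf against the
    # existing configuration before 'make install' overwrites anything.
    CONF=${1:-/etc/named.conf}
    LOG=deprecations.log

    if ! ./named-checkconf "$CONF" >"$LOG" 2>&1; then
        echo "named-checkconf rejected $CONF - see $LOG" >&2
        echo "Fix the configuration, or use 'make install-nocheck'." >&2
        exit 1
    fi
    # Warnings (e.g. about deprecated options) don't stop the install,
    # but surface them prominently.
    grep -Ei 'deprecat|obsolete' "$LOG" >&2 && \
        echo "Deprecated options found - see DEPRECATIONS." >&2
    exit 0

make install would invoke it before copying binaries; stand-alone use is
just ./check-deprecations.sh /path/to/named.conf.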

I don't think there's a single universal solution to handling
deprecations, but I hope that these observations are helpful.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: allow-update in global options (was Re: bind and certbot with dns-challenge)

2019-03-17 Thread Timothe Litt
Data points:

I saw another report of this issue on gitlab - #913 just after my
previous note.  It indicated that a distribution's initial configuration
breaks with the change.  I see that it has been updated by Alan since.

I checked my configuration files.

I use allow-update-forwarding at the options level.

I use update-policy at the zone level.

I don't currently use either at the view level.

So my configurations would break.  (I haven't had the cycles to run
9.13, unfortunately for you - apparently, fortunately for me :-)

I don't see the serious harm in allowing these options to be inherited -
there are certainly other options that, if incorrectly/accidentally
inherited, could be dangerous.  Allow-transfer, allow-query,
deny-answer-*... I could go on alphabetically, but I'm pretty sure a case
could be made for the majority of options causing mischief if
inadvertently inherited.

I'm curious about why these particular options were singled out -- yes,
they update persistent zone data.  But denial of service, information
leaks, and using the wrong directories can also be serious.

In any case, where a change is made that invalidates existing
configurations, I strongly prefer a deprecation warning at least one
(non-development) release prior.  With documentation.

Given that these prerequisites didn't happen in this case, I believe
that regardless of the merits, the previous behavior should be reinstated.

If there is a determination that the benefits of the change outweigh the
costs, then add a deprecation warning a stable release prior (perhaps
now?) and update the documentation -- including the ARM & release notes.

Also, the same arguments should be applied to all the other inheritable
options -- if there is justification for other changes, it's much better
to force operators to make a bundled set of changes than to dribble them
out piecemeal.

FWIW: In general, I choose to place configuration statements at the
level resulting in the shortest configuration.  (Not for performance,
but for clarity/ease of maintenance.)  So that's sometimes "global
enable, exception disable", and sometimes the inverse.  (This can be
frustrated when there's no obvious inverse to a directive, but that's
for another day.)

Finally, I looked at the 9.13 ARM for a list of which options are
allowed in the view statement.  The view Statement Grammar lists
[view_option; ...] - 'view_option' appears nowhere else in the ARM.  The
definition and usage section (in chapter 5) says only: "Many of the
options given in the *options* statement can also be used within
a *view* statement,".  To find an explicit list, one has to go to the
VIEW section of chapter 8 (the man page for named.conf) - which isn't
tagged with 'view_option'.  This frustrates searchers and people
unfamiliar with the ARM structure.  Note that allow-update and
allow-update-forwarding both appear as valid in the view syntax there,
although in chapter 5 the descriptions on p.97 say "only zone, not
options or view".

My 3.5¢ (USD, but your local currency will do :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 17-Mar-19 16:37, Alan Clegg wrote:
> On 3/17/19 2:51 PM, Alan Clegg wrote:
>> On 3/17/19 7:13 AM, Stephan von Krawczynski wrote:
>>> Hello all,
>>>
>>> I am using "BIND 9.13.7 (Development Release) " on arch linux. 
>>> Up
>>> to few days ago everything was fine using "certbot renew". I had
>>> "allow-update" in nameds' global section, everything worked well. Updating 
>>> to
>>> the above version threw a config error that "allow-update" has no global 
>>> scope
>>> and is to be used in every single zone definition.
>> And you may have found a bug.  I'm checking internally at this time.
> So, after a discussion with one of the BIND engineers this afternoon,
> this turned out to be quite an interesting and deep-rooted issue.
>
> During a cleanup of other code (specifically named-checkconf), code was
> changed that enforced what was believed to have been the default
> previously: specifically, allow-update was only allowed in zone stanzas.
>  The chain of changes follows:
>
> 5136.   [cleanup]   Check in named-checkconf that allow-update and
> allow-update-forwarding are not set at the
> view/options level; fix documentation. [GL #512]
>
> This, if the change remains, will be updated to [func] and additional
> documentation will be added to the release notes.
>
> The other changes down this long and twisting passage are:
>
> 4836.   [bug]   Zones created using "rndc addzone" could
> tem

Re: bind and certbot with dns-challenge

2019-03-17 Thread Timothe Litt
Named has options at the global, view and zone levels.  The 9.11 ARM
shows allow-update
in the options and zone statements.  If it's broken in 9.13 - note that
it is a "Developement Release".
So bugs are expected, and you should raise an issue on bind9-bugs or on
gitlab
(https://gitlab.isc.org/isc-projects/bind9/issues).

You can work around your issue by using 'include "my-common-stuff.conf";'
to simplify your configuration.  This is a useful strategy for things
that don't fit the three-level model.

If you have large zones, you can speed up load time with
masterfile-format raw or map;
see the "tuning" section of the ARM for more information. 

Parsing configuration data is unlikely to be the dominant factor in
startup, but I'm sure that the developers would welcome a reproducible
test case that shows otherwise.

You should consider update-policy instead of allow-update; it provides
much better control and better security.
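
For the certbot case, something along these lines (a sketch - the key name,
secret, zone, and file are placeholders) confines a TSIG key to the single
record it needs:

    key "certbot-key" {
        algorithm hmac-sha256;
        secret "bWFkZS11cC1zZWNyZXQ=";          // placeholder
    };

    zone "example.com" {
        type master;
        file "example.com.db";
        update-policy {
            // this key may touch only the ACME challenge TXT record
            grant certbot-key. name _acme-challenge.example.com. TXT;
        };
    };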

> It is really very obvious that this is only done by
> ideologists, not technical oriented people.
Actually, I've found that the contributors to named are very technical,
practical people.
Sometimes they introduce bugs, or ideas that work in one context but not
another.
They're responsive to criticism & contributions.  But name-calling is
generally not an
effective way to get anyone to help you.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 17-Mar-19 10:35, Stephan von Krawczynski wrote:
> On Sun, 17 Mar 2019 12:40:35 +0100
> Reindl Harald  wrote:
>
>> Am 17.03.19 um 12:13 schrieb Stephan von Krawczynski:
>>> So why is it, that there is no global way of defining default zone
>>> definitions which are only overriden by the actual zone definition?  
>> maybe because it brings a ton of troubles and whoever deals with more
>> than 5 zones has automatic config management in place anyways?
> If you don't want to follow the positive way (how about a nice additional
> feature), then please accept the negative way: someone broke the config
> semantics by implementing a zone based-only "allow update". This option worked
> globally before (too), so we can assume it is in fact broken now.
> Can someone please point me to the discussion about this incompatible change?
>
>>> Why is there no way to define a hosts-type-of-file with an URL-to-IP list?
>>> Do you really want people to define 50.000 zones to perform adblocking?  
>> no, just use the right tool for the task, this don't fit into the domain
>> concept of named and hence you have dnsmasq and rbldnsd to step into
>> that niche
> In todays' internet this is no niche any more. And the right tool means mostly
> "yet-another-host" because you then need at least a cascade of two, one for
> dnsmasq and one for bind/named. A lot of overhead for quite a simple task...
>
>>> Configs have to be reloaded every now and then, is there really no idea
>>> how to shorten things a bit?  
>> ??
> Shorter config = shorter load time. The semantic change of "allow update" 
> alone
> leaves every setup with 1000 domains in a situation where 999 config statments
> more have to be read, interpreted and configured - just to end up in the same
> runtime setup. It is really very obvious that this is only done by
> ideologists, not technical oriented people.
>


signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: named cpu usage pretty high because of dns_dnssec_findzonekeys2 -> file not found

2019-03-11 Thread Timothe Litt
On 11-Mar-19 03:52, Mark Andrews wrote:
> Because you removed the key from disk before it was removed from the zone.  
> Presumably named
> was logging other error messages before you removed the key from disk or the 
> machine was off
> for a period or you mismanaged the key roll and named keep the key alive.
>
> Named’s re-signing strategy is different to when you are signing the whole 
> zone at once as
> you are signing it incrementally.  You should be allowing most of the 
> sig-validity interval
> before you delete the DNSKEY after you inactive it.  One should check that 
> there are no RRSIGs
> still present in the zone before deleting the DNSKEY from the zone.  
> Inactivating it stops the
> DNSKEY being used to generate new signatures but it needs to stay around 
> until all those RRSIGs
> have expired from caches which only happens after new replacement signatures 
> have been generated.

There are a lot of these "administrator should know" events and timeouts
in DNSSEC.  One could argue that these complexities are one of the
barriers to adoption.

It seems worth considering ways to make life easier, for administrators
and automation alike.

A few thoughts come immediately to mind - no doubt there are more:

- Rather than documenting "wait for n TTLs (or sig-validity interval)",
have bind log events that require/enable administrator actions (at
non-debug levels), such as:

"key (keyid) /foo/bar/.. no longer required and can be removed" - issue
at inactivation + max TTL of any RRSIG is signed.  Allows an admin (or
script) to know when it's safe rather than requiring research and/or math.

"key (keyid) /foo/baz... is now signing zone(s)
example.net,example.org.  It expires on <> and will be removed on <>"

- Provide an "obsolete-keys" directory - have named move keys that are
no longer required there.  (Or delete the files.  But emptying
obsolete-keys, like emptying /tmp, can be automated, and deleting a key
might be a problem if forensics - or audits - is required.) The key idea
is that an admin never removes a file from "keys".  And that should
prevent mistakes.

- Rather than relying on the keys directory for signing, use it only to
import/update keys.  Once named starts using a key, put a copy (or move
it) to ".active-keys" - or a database file - that persists as long is
the protocol requires it.  If the file in the keys directory is updated
with new dates, generate the appropriate events - but work from
.active-keys.  If the file disappears from "keys" before it should, use
.active-keys to restore it -- and add a comment explaining why.  ("#
Restored by named at 1-apr-2411: sig-validity interval for
lost.example.net (internal) extends to 15-may-2412")

- Provide an rndc show class command (or stats channel output) that
explains the status/fate of each signing key.  Perhaps a table:

  Key                      Zone         View      State      Created     Publish     Active      Deactivate   Remove      Next event

  key (keyid) /foo/baz...  example.net  external  Published  1-jan-2000  1-jun-2000  1-jul-2000  31-dec-2000  1-feb-2001  activate 1-jun-2000   # assumes today is 11-Mar-2000
  key (keyid) /foo/baz...  example.org  external  Published  1-jan-2000  1-jun-2000  1-jul-2000  31-dec-2000  1-feb-2001  activate 1-jun-2000   # same key, different zone

- Think more about what admins want to do, rather than how named (and
the protocols) do it.  E.g. "sign a zone", "roll key now|every month",
"use latest|specified|safest signature algorithm | key length", 
"enable/disable nsec|nsec3", "unsign zone"... Provide scripts and/or
named primitives that do this.  "dnssec settime -xyz" doesn't do a good
job of specifying intent - one has to do a lot of math, and the intent
isn't logged - just the date change.
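
For instance (illustrative only - the key file name is a placeholder),
retiring a key means computing the dates yourself:

    # stop signing with this key in 30 days; delete it 60 days after that
    dnssec-settime -I +30d -D +90d Kexample.net.+008+12345

Nothing records *why* +30d and +90d were chosen; the intent lives only in
the admin's head.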

I'm aware of the dnssec keymgr effort - it's still more oriented to
timeouts and e.g. coverage periods than to what one wants to
accomplish.  (As far as I can tell, it also doesn't support multiple
views - which makes it unusable for me.  I don't think this is an
unusual configuration...)

If you look at validate() in policy.py.in, there are 6 different errors
for conditions involving timer relationships.  [And the errors are
reported in seconds, not even as something vaguely human - such as
57w2d1h30m12s.] Why not (by default) adjust the timers & log the result?

I'm sure someone will opine that for every case, there's a choice
between shrinking one timer and extending the other. This is undoubtedly
true.  But better to pick a strategy that is consistent with safe
practice than to kick back each error to an admin.  An admin who has
particular requirements can read the log.  But for those who "just want
things to work", I suspect that we can identify a driver (I nominate key
lifetime) & adjust everything else to fit...

I'm sure there are some challenges in the details - but I hope the
message is clear.  Avoid blaming the admin for trying to make things
work.  Instead, package actions at admin-oriented levels of
abstraction.  Guard data that named needs, and 

Re: Forward zone inside a view

2019-02-12 Thread Timothe Litt
All these replies are correct in the details (as usual), but miss the point.

Blocking name resolution, while popular, does not meet the OP's requirement:

"The point is I have several desktops that *must* have access **only**
to internal domains."

Let's say that your client's favorite illicit site is facebook.com.

One dig (or host) command reveals that:

  facebook.com has address 157.240.3.35
  facebook.com has IPv6 address 2a03:2880:f101:83:face:b00c:0:25de

Fits on scrap of paper.  Carry in to office.  Connect - with a Host
header for http, SNI for TLS, and off you go.  Or just put it in
hosts.txt/hosts.

Or use a public nameserver.   Or...

If you want to block access, you need a firewall.  If you merely want to
inconvenience people or reduce the risk of clicking on ransomware
hyperlinks, mess with their default nameserver.  RPZ is good for that. 
If you have a private address space & need to resolve some names
differently inside and out, views are good for that. (Or you can have a
different nameserver; tastes vary.)  If you are resource limited and
want to benefit from a public server's larger cache, while serving
authoritatively some local names, forwarding can be a good choice.

But "**must** have access **only**" implies that one expects that the
solution should resist *more* than a cooperative or unmotivated client. 
NO DNS-only based solution will do that.

Governments and political pressure groups think that DNS corruption is
an effective tool for limiting access.  People here know better.  It
deters certain casual problem behavior.  It does not prevent anyone with
a modicum of knowledge and determination from watching cat videos.  (Or
downloading malware, or whatever other behavior a policy maker wishes to
ban.)

It is worth listening to the OP's problem statement and steering him
away from illusory technology.  It's the responsible thing to do.

That there are technical answers to the question asked doesn't mean that
it's the right question.  If it's not (and in this case it does not
appear to be), those answers are not helpful.  Even though they are
correct in other contexts.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

> On 12-Feb-19 17:45, Kevin Darcy wrote:

> Define root zone. 
>
> Delegate teamviewer.com from root zone.
>
> Define teamviewer.com as "type forward".
>
> "recursion no" is incompatible with *any* type of forwarding or
> iterative resolution. Should only be used if *everything* you resolve
> is from authoritative data, i.e. for a hosting-only BIND instance.
> Since you want to forward -- selectively -- you need "recursion yes".
> Nothing outside of that part of the namespace will be forwarded, since
> named considers everything else to be contained in the root zone.
>
>           - Kevin
>
> On Mon, Feb 11, 2019 at 9:06 AM Roberto Carna <robertocarn...@gmail.com> wrote:
>
> Matus, I've followed what you say:
>
> view "internet" {
>    match-clients { internet_clients; key "pnet"; };
>
> recursion yes;
>
> zone "teamviewer.com <http://teamviewer.com>" {
>         type forward;
>         forward only;
>         forwarders {
>                 8.8.8.8;
>         };
> };
>
> };
>
> but clients can resolve ANY public Internet domain, in addition to
> teamviewer.com.  I think "recursion yes" applies to every public
> domain and not just for "teamviewer.com",
> but I don't know why.
>
> Please can you give me more details, using forward or not, how I can
> let some clients resolve just teamviewer.com?  I confirm that my BIND is an
> authoritative name server for internal domains.
>
> Thanks a lot again.
>
> On Mon, 11 Feb 2019 at 10:49, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
>
> On 11.02.19 10:38, Roberto Carna wrote:
> >Dear Matus, thanks a lot for your help.
> >
> >>> what is the point of running DNS server with only two hostnames allowed
> >>> to resolve?
> >
> >The point is I have several desktops that must have access only to internal
> >domains. The unique exception is they have access to teamviewer.com in
> >order to download the Teamviewer client and a 

Re: Forward zone inside a view

2019-02-11 Thread Timothe Litt
On 11-Feb-19 08:38, Roberto Carna wrote:

> The point is I have several desktops that must have access only to
> internal domains. The unique exception is they have access to
> teamviewer.com in order to download the
> Teamviewer client and a pair of operations in this public domain.
>
(Ab)using the DNS for this is almost certainly the wrong approach, though
this sort of question comes up frequently.

Any sufficiently motivated user can list a blacklisted domain in HOSTS.TXT,
change his DNS server to a public one, use an IP address (obtained at home,
the local internet cafe, or elsewhere), or use other work-arounds.

So besides being painful to set up, it's likely ineffective.  You can clamp
down on some of these with file system or other administrative controls -
but not all.  It will be a frustrating path.

If you want (or are required) to create a walled garden, the only effective
approach is likely to be a firewall configuration.  You can set it up to only
allow traffic from particular IP addresses to the permitted ones.  And
control protocols.  You can either send "not reachable" ICMP responses, or
redirect connection attempts to a port-appropriate warning/notification
service (e.g. a web page, e-mail robot, etc.).

You need a process to update the firewall in the unlikely event that the IP
address of a permitted service changes.  And if your clients get their
addresses from DHCP, you'll want to set up distinct address pools - and
possibly VLANs.
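
A minimal sketch of that sort of ruleset (iptables syntax; all addresses
are placeholders):

    # the restricted subnet may reach only the permitted web service...
    iptables -A FORWARD -s 192.0.2.0/24 -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
    # ...everything else from that subnet is refused with an ICMP error
    iptables -A FORWARD -s 192.0.2.0/24 -j REJECT --reject-with icmp-net-unreachable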

DNS is the wrong hammer for this nail. 

Whether you should hammer the nail at all is a political, not a
technical issue.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 11-Feb-19 08:38, Roberto Carna wrote:
> Dear Matus, thanks a lot for your help.
>
> >> what is the point of running DNS server with only two hostnames allowed to
> >> resolve? 
>
> The point is I have several desktops that must have access only to
> internal domains. The unique exception is they have access to
> teamviewer.com in order to download the
> Teamviewer client and a pair of operations in this public domain.
>
> I think if I have setup "recursion = no", if I define a forward zone
> with "type forward" and the corresponding forwarder, this option
> enable the recursion just for this defined zone.
>
> In general, my question is how to forward a public domain to a DNS
> resolver like 8.8.8.8 ???
>
> Thanks again.
>
> On Sat, 9 Feb 2019 at 12:28, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
>
> On 07.02.19 16:30, Roberto Carna wrote:
> >Desktops I mentioned can only access to web apps from internal domains, but
> >in some web apps there are links to download Teamviewer client software
> >from Internet. I can create a private zone "teamviewer.com" with all the
> >hostnames and IP's we will use, but if they change I will be in trouble.
> >
> >So we need to forward the query to our resolvers in order to get a valid
> >response.
> >
> >So I think we can use the forward option from BIND, but it doesn't work at
> >all as I described:
> >
> >1. "recursion no" can only be set at the top (view) level, not
> overridden
> >   at the zone level.
> >
> >2. If I set "recursion no" at the view level, then a "type forward"
> >   zone has no effect:
> >
> >  view "foo" {
> >    recursion no;
> >    ...
> >    zone "teamviewer.com <http://teamviewer.com>" {
> >      type forward;
> >      forward only;
> >      forwarders {172.18.1.1; 172.18.1.2;};
> >    };
> >
> >-- query for foo.teamviewer.com fails and says it's not a recursive query
>
> the whole point of "recursion no" is not to answer recursive queries,
> so there should be no wonder it works that way.
>
>
> >3. If I define "recursion yes" at view level:
> >
> >  view "foo" {
> >    recursion yes;
> >    ...
> >    zone "teamviewer.com <http://teamviewer.com>" {
> >      type forward;
> >      forward only;
> >      forwarders {172.18.1.1; 172.18.1.2;};
> >    };
> >
> >-- query for foo.teamviewer.com is

Re: forward all but ANY requests

2018-11-30 Thread Timothe Litt
On 30-Nov-18 08:14, Erich Eckner wrote:

> On 30.11.18 12:26, Timothe Litt wrote:
>> On 30-Nov-18 06:04, Erich Eckner wrote:
>>> Hi,
>>>
>>> I'm running a bind9 name server (9.13.4 on debian) which forwards some
>>> zone (onion.) to tor's name server. Unfortunately, tor's name server
>>> only answers A and AAAA requests, but not e.g. ANY requests.
>>>
>>> 192.168.1.3 is running the tor dns,
>>> 192.168.1.13 is running bind9 forwarding to 192.168.1.3:9053
>>>
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion ANY
>>> ;; Connection to 192.168.1.3#9053(192.168.1.3) for
>>> 3g2upl4pq6kufc4m.onion failed: connection refused.
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion A
>>> 10.255.55.223
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion AAAA
>>> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion ANY
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion A
>>> 10.255.55.223
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion AAAA
>>> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>>>
>>> Is there any option:
>>>  - to make bind fall back to A or AAAA when the ANY request fails (even
>>> the connection fails!) or
>>>  - to only forward requests of certain type(s) or
>>>  - to answer ANY requests _always_ with A or AAAA records (not trying if
>>> the ANY request can be forwarded successfully), possibly for certain
>>> zones only?
>>>
>>> Sry, if that has been asked before, but I seem unable to find anything
>>> useful on the internet, since "ANY" is not a good search term ;-) and
>>> without "ANY" I only turn up how to set bind to ipv4/ipv6-only.
>>>
>>> regards,
>>> Erich
>> This reflects a common misunderstanding.
>>
>> A query for ANY does not return 'everything'.  It returns what the
>> server happens to have cached.  It's a diagnostic.
>>
>> You have to ask explicitly for the record types that you want.
>>
>> Many people have fallen into the trap of thinking that an ANY query will
>> return all records in the DNS, and assume that therefore it can be used
>> to make fewer queries.  You're not the first.
>>
>> Any software (or wetware) that relies on ANY for any purpose other than
>> determining what's in a server's cache for diagnostic purposes is broken.
>>
>>
>> Timothe Litt
>> ACM Distinguished Engineer
>> --
>> This communication may not represent the ACM or my employer's views,
>> if any, on the matters discussed. 
> Thank you for the clarification. Indeed, I can (after querying A and
> AAAA) retrieve those records via ANY requests. :-)
>
> Regards,
> Erich

Note that this result is not guaranteed.  The server is not required to
cache records.  The records may have a TTL less than the time between
your queries.  (E.g. 0)  The records may be evicted from a busy cache
before the TTL expires.  Or the server may reboot between queries.  Or...

Unless you have some specific reason for finding out what is in a
server's cache, you don't want to use queries for ANY.  The results will
seem confusing/unpredictable - and while they may "seem to work" for a
while, will end up wasting a lot of your time.

ANY queries are a classic "sharp tool".  If used properly, they can cut
the time required to diagnose a problem.  If used improperly, they will
cut you instead.  For most people, in most circumstances, the best
strategy is to never issue a ANY query.  (dig is also a sharp tool; else
issuing an ANY query would produce an "are you sure?" prompt :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 



signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: forward all but ANY requests

2018-11-30 Thread Timothe Litt
On 30-Nov-18 06:04, Erich Eckner wrote:
> Hi,
>
> I'm running a bind9 name server (9.13.4 on debian) which forwards some
> zone (onion.) to tor's name server. Unfortunately, tor's name server
> only answers A and AAAA requests, but not e.g. ANY requests.
>
> 192.168.1.3 is running the tor dns,
> 192.168.1.13 is running bind9 forwarding to 192.168.1.3:9053
>
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion ANY
> ;; Connection to 192.168.1.3#9053(192.168.1.3) for
> 3g2upl4pq6kufc4m.onion failed: connection refused.
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion A
> 10.255.55.223
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion AAAA
> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion ANY
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion A
> 10.255.55.223
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion AAAA
> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>
> Is there any option:
>  - to make bind fall back to A or AAAA when the ANY request fails (even
> the connection fails!) or
>  - to only forward requests of certain type(s) or
>  - to answer ANY requests _always_ with A or AAAA records (not trying if
> the ANY request can be forwarded successfully), possibly for certain
> zones only?
>
> Sry, if that has been asked before, but I seem unable to find anything
> useful on the internet, since "ANY" is not a good search term ;-) and
> without "ANY" I only turn up how to set bind to ipv4/ipv6-only.
>
> regards,
> Erich

This reflects a common misunderstanding.

A query for ANY does not return 'everything'.  It returns what the
server happens to have cached.  It's a diagnostic.

You have to ask explicitly for the record types that you want.

Many people have fallen into the trap of thinking that an ANY query will
return all records in the DNS, and assume that therefore it can be used
to make fewer queries.  You're not the first.

Any software (or wetware) that relies on ANY for any purpose other than
determining what's in a server's cache for diagnostic purposes is broken.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 



signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: dig @ipv6-address

2018-11-29 Thread Timothe Litt
On 29-Nov-18 06:21, Christian Weiske wrote:
> Hello,
>
>
> I'm trying to use dig version 9.10.3 to test my local DNS
> server which listens on IPv6 only.
>
> I only get an error when running dig:
>
>> $ dig @2a01:488:66:1000:53a9:2dde:0:1 cweiske.de
>> couldn't get address for '2a01:488:66:1000:53a:53': not found

This looks like a typo. And the error doesn't match the command given.

I suspect that your actual 'dig' command was 'dig
@2a01:488:66:1000:53a:53 cweiske.de', which will reproduce the error.

'2a01:488:66:1000:53a:53' is not an IPv6 address.  You are missing a ::
or a couple of words.  (There should be 8 16-bit words delimited by ':',
or a single '::' ellipsis to represent a run of zeroes.)  Since it
doesn't parse as an IPv6 address, dig (probably getaddrinfo()) tried to
translate the string as a hostname.  Hence the error.

It's really not in anyone's interest when people post obfuscated
questions...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 



signature.asc
Description: OpenPGP digital signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: Method of writing zone files

2018-11-13 Thread Timothe Litt
On 12-Nov-18 14:39, Marcus Frenkel asked about backing up slave zone
files & bind's update mechanism:

I believe you're asking the wrong questions and are likely to run into
complications.  You don't know when BIND will merge the journal, or that
rsync will atomically snapshot the zone file and the journal.  So you
will get a coherent copy of the zone file - but it may be stale. 
The journal may be earlier or later than the zone file, or it may be
incomplete. This means that a restore  may have the slave serve stale
data - until it queries the master for any updates & if necessary, fixes
the journal file.  If the master happens to be down, this is especially
suboptimal.

So I would exclude both files from rsync, and use another approach to
save a coherent copy of the zone file(s) in my backup procedure.

One approach is to axfr the zone (from the slave, or any other server)
to your backup server.

You can do that directly with dig, or using a library (e.g. Perl's
Net::DNS).  Recent versions of BIND write slave zone files in a binary
format by default.  Just pipe the data through named-compilezone -F when
writing the file.  This has the advantage that it doesn't have to run on
the slave. (You can pull the data to your backup server, or push it from
a machine other than the slave.)
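
A sketch of the pull approach (server, zone and paths are placeholders;
/dev/stdin is a Linux-ism):

    # pull a coherent copy of the zone over AXFR and store it as text
    dig @slave.example.net example.com AXFR +onesoa \
        | named-compilezone -F text -o example.com.backup example.com /dev/stdin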

If you are dealing with very large files and/or a slow network link, you
may prefer to use named-compilezone -j to merge the zone & any journal
file into a temporary snapshot & back that up with rsync.  I'm not sure
how BIND's binary format interacts with rsync - it's possible that
outputting text format would be more efficient.  You'd have to benchmark
to decide.
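
Something like this (again a sketch, with placeholder paths):

    # on the slave: merge zone file + journal into a text snapshot
    named-compilezone -j -f raw -F text -o example.com.snapshot \
        example.com /var/named/slaves/example.com.db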

If you restore from such a backup, you'll want to delete the journal
file to save named some work when it loads the zone.  And, of course,
make sure that named isn't running :-)  If you backup the zone in text
format, you can compile it only during restore to save time at the cost
of space.  (Backups should be frequent; restores rare.)

This assumes that your recovery strategy depends on being able to
restore the slave with the master down.

If you can assume that the master will be up (or restored first), by far
the simplest approach is not backup the zone data (file or journal) on
the slaves at all.  This is often the case, since restoring a master
tends to be higher priority than restoring slaves.  To restore, simply
start named.  If you are restoring over existing files, delete both zone
and journal files first.  Named will notice that it has no data, and
will do the axfr(s) & create the files in fairly short order.  Named
will spread the transfers of multiple zones out over time, but you'll
want to do some math to determine if the restore time and impact on your
network are acceptable.

Although you asked about slaves, note that masters have similar issues &
solutions.  Masters will also have journal files if they do dynamic
updates (including for DNSSEC).

For servers which are masters for some zones and slaves for others, the
strategy should be determined per-zone, not per server.

The bottom line: backups of databases (and DNS is a distributed
database) are complicated.  The best approach depends on the details of
your operational environment -- and not on the minutiae of BINDs
implementation.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 12-Nov-18 14:39, Marcus Frenkel wrote:
> Thank you for the quick reply Tony!
>
> Follow-up questions just to be sure:
> 1. The new zone file is renamed in the placed of the old one, only
> after all changes to the new file are written?
> 2. Is the zone file atomically replaced during the renaming process,
> in a sense that there is no window in which the file is empty or
> non-existent?
>
> I'm running BIND on Debian 9. Based on this
> <http://man7.org/linux/man-pages/man2/rename.2.html> Linux man page,
> the rename function should be atomic. I would not imagine that BIND
> does it in different way, like the worst case scenario to first remove
> the current file and then move the new one to the same path. I know
> I'm too cautious, I'm just trying to avoid any chance for rsync to
> transfer incomplete or empty zone file, or maybe delete the file at
> the destination if it does not exist at the source for a short moment.
>
> Marcus
>
> On Mon, Nov 12, 2018 at 7:19 PM Tony Finch  <mailto:d...@dotat.at>> wrote:
>
> Marcus Frenkel  <mailto:marcus.fren...@gmail.com>> wrote:
> >
> > I need to know how BIND writes to slave zone files after zone
> has been
> > updated. Does it modify the file in place or it replaces the
> file with
> > new one at once?
>
> Changes are written to a journal append-only style. Eve

Re: Dropping queries from some well-known ports

2018-08-03 Thread Timothe Litt
On 03-Aug-18 14:00, Petr Menšík wrote:
> Hi!
>
> Our internal support reached to me with question, why are some queries
> bound to low ports silently dropped. I have found there is feature for
> that, that will silently drop queries from selected ports.
>
> I admit queries from such low ports are wrong. But why are some ports
> allowed when some ports are not? Should not it be configured by firewall
> instead?
>
> Just try this command:
> $ sudo dig @127.0.0.1 -b 127.0.0.1#32 localhost
>
> If bind is running on local interface, it will drop the query. If any
> other server is running there, it will respond.
>
> Does such feature make sense in year 2018? Can you remember what was
> motivation to implement it? Is it wise to still enable it by default,
> without at least configure option to disable it?
>
> 1.
> https://gitlab.isc.org/isc-projects/bind9/commit/05d32f6b0f6590ca22136b753309f070ce769000
Those particular ports are reserved for services that have the rather
odd property that any junk sent to them will result in a response.  E.g.
simply opening a connection to daytime will result in a response with
the current date and time in some (unspecified) ASCII format.  (The
companion time service returns a 32-bit binary time - that will overflow
"soon"; you should be using NTP instead.)

They were designed for diagnostic purposes at a time when the internet
was young and friendly.

Suppose someone knows of a server running one of those services (they
have mostly been replaced/blocked for this and other reasons).

If that someone were able to spoof a request from one of these ports on
that server to your named, then responding with anything - including a
FORMERR response - would provoke yet another response.  Named would take
that as another ill-formed request, and reply... an infinite loop
using whatever bandwidth is available.  This amounts to a denial of
service attack on both servers, for the cost of a single
packet/connection.  Dropping these packets is the right thing to do,
since the non-named services are acting correctly (according to their
specifications).  And if operating according to their specifications,
none of those servers would ever *initiate* a connection to anyone -
including named.

As for why other low-numbered ports are not dropped: unlike these, they
may have legitimate needs for name resolution.  You could configure a
firewall to drop these - and probably should.  But it certainly doesn't
hurt for named to protect itself from this particular attack.

I should note that your example used port 32 - which is not dropped by
the commit that you cited.  Port 32 is not assigned by IANA.

[Although this is a security issue, I'm not revealing anything new
here.  The commit is 12 years old.  It has been standard advice for many
years not to run these services on the public internet.  If anyone IS
running them (I think NIST is still running the time services), they
should know the risk, and at least rate-limit requests from any given
client IP...]

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: Authoritative dns with private IP for hostname

2018-07-27 Thread Timothe Litt

On 27-Jul-18 11:59, Elias Pereira wrote:
> hello,
>
> Can an authoritative dns for a domain, eg mydomain.tdl, have a
> hostname, example, wordpress.mydomain.tdl with a private IP?
>
> Would this be accessible from the internet via hostname, if I did a
> nat on the firewall?
>
> -- 
> Elias Pereira

No.  Two issues seem to be conflated here.

For DNS, what you probably want is a setup with views; that way the site
will resolve to the private IP address from inside your site, but to the
external address from outside.
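
A minimal two-view sketch (networks and file names are placeholders):

    acl internal { 192.168.0.0/16; 10.0.0.0/8; };

    view "inside" {
        match-clients { internal; };
        zone "mydomain.tdl" {
            type master;
            file "mydomain.tdl.inside";    // wordpress A <private address>
        };
    };

    view "outside" {
        match-clients { any; };
        zone "mydomain.tdl" {
            type master;
            file "mydomain.tdl.outside";   // wordpress A <public/NAT address>
        };
    };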

For making your servers accessible, NAT will probably be necessary for
the webserver and the DNS server inside your firewall to be accessible
from outside.  Your secondary DNS servers are required to be
geographically separate.  So either you have another location with a
firewall (where you again NAT), or you use a secondary DNS service.

Views are in the bind ARM, and have been discussed on this list before.

There are some middleboxes (among them Cisco Routers) that do attempt to
rewrite DNS records on the fly in a NAT like fashion.  Stay away from
those.  They tend to break things in the best of circumstances, and
absolutely break DNSSEC.




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: tool for finding undelegated children in your DNS

2018-07-27 Thread Timothe Litt
On 26-Jul-18 19:46, Victoria Risk wrote:
> I have been told this is a very poor description of the problem.
>
> What I am concerned about is, how people with a sort of lazy zone file
> can assess the potential impact of QNAME minimization on their ability
> to answer for all of their zones.
>
> I have gotten two suggestions off list:
> - I would use named-checkzone to print the zone with all owner names
> printed out and then use text processing tools
> - “dig ds -f list-of-zones”. Those that return NXDOMAIN are likely
> missing NS records.
>
> Any other ideas?
> Has anyone done this kind of housekeeping on their own zones?
>
>
>> On Jul 26, 2018, at 11:41 AM, Victoria Risk > <mailto:vi...@isc.org>> wrote:
>>
>> Does anyone know of a good tool that you can run on your DNS records
>> to find parent + child pairs where there is no NS record for the
>> child in the parent?
>>
>> Someone must have a perl script for that, right?
>>
>> Thank you for any suggestions.
>>
>> Vicky
>>
>>
If you want to do this validation with zone files, then text tools (e.g.
Perl, awk, etc.) are a reasonable approach.  It would not be
particularly difficult - though you do have to handle include files. 
Rather than working from zone files, the easiest approach is to do a dig
axfr to get the actual zone...
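
And since someone asked for a perl script: a rough, untested sketch with
Net::DNS (zone and server are placeholders; assumes the server permits
AXFR from the client):

    use strict;
    use warnings;
    use Net::DNS;

    my ($zone, $server) = ('example.com', '192.0.2.1');
    my $res = Net::DNS::Resolver->new(nameservers => [$server]);

    my @rrs = $res->axfr($zone);
    die 'AXFR failed: ' . $res->errorstring . "\n" unless @rrs;

    my (%has_ns, %seen);
    $has_ns{lc $_->owner}++ for grep { $_->type eq 'NS' } @rrs;

    for my $rr (@rrs) {
        my $name = lc $rr->owner;
        next if $name eq lc($zone) or $seen{$name}++;
        (my $parent = $name) =~ s/^[^.]+\.//;     # immediate parent
        # walk up to (but not including) the apex, flagging any
        # intermediate name that carries no NS record
        while ($parent ne lc($zone) && $parent =~ /(^|\.)\Q$zone\E$/i) {
            print "$name sits under $parent, which has no NS record\n"
                unless $has_ns{$parent};
            $parent =~ s/^[^.]+\.//;
        }
    }

This flags empty non-terminals too, so treat it as a starting point rather
than a finished tool.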

I tend to use dnsviz (http://dnsviz.net) and zonemaster
(https://www.zonemaster.net/domain_check) for consistency checking. 

I don't tend to have issues with internal views because of the tools
that I use to update my zones (they pretty
much ensure that mistakes made there will also show up externally :-(). 
So the web checkers are my tools of choice.

But both dnsviz (https://github.com/dnsviz/dnsviz) and zonemaster
(https://github.com/zonemaster/zonemaster) are on GitHub & can be run
internally.  Zonemaster is Perl; dnsviz is Python.  Zonemaster requires
a database (MySQL/MariaDB/PostgreSQL).  The web version of dnsviz is
graphic, and has accessibility issues.  Zonemaster is standard HTML &
more suitable if you use a screen reader.

dnsviz run locally has command line options that will do the analysis -
see the GitHub readme.

Both tools do extensive checks (dnsviz is oriented around DNSSEC, but
does many other checks).

It's a good idea to run one or the other regardless of this particular
issue.  Actually - I run both.

Of course the usual caveats about stealth (unlisted) servers apply.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: PKCS#11 vs OpenSSL (BIND Future Development Question)

2018-06-03 Thread Timothe Litt
at eventually, of course, as part of refactoring to consolidate some/all
of the libraries.  But the end user just configures one name for his/her
provider.  The idea is to keep things simple - for the user, and for
development.

On the other hand, if I have multiple machines with different providers,
I don't have to compile a unique BIND for each.  I build it once with
(at least) the union of all the required providers, and deploy the same
image everywhere.  Better yet, with luck my distribution ships with all
the provider libraries that I need, and I don't compile anything!  The
config file is the only variant.

So, if you can switch to OpenSSL, it seems the best long-run option.  If
you can't, (or are encouraged not to by other customers), you could
solve a lot of the customer pain by making the provider loadable.

For entropy, I use a mixture of USB keys and CPU hardware generators. 
As I may have mentioned, I use EntropyBroker to distribute the entropy
securely - this keeps cost reasonable, especially with many VMs (some of
which don't naturally generate a lot of entropy...).  See
https://www.vanheusden.com/entropybroker/ &
https://github.com/flok99/entropybroker. 

Hope this helps.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Should we bundle the MaxMind GeoIP db?

2018-05-30 Thread Timothe Litt

On 30-May-18 17:27, Victoria Risk wrote:
> Hello GeoIP users,
>
> We are aware that Maxmind is discontinuing their older free GeoLite
> location database and replacing it with a new database with a new
> format (GeoLite2). https://dev.maxmind.com/geoip/geoip2/geolite2/
>
> We have an issue open in the BIND gitlab to update our Geo-IP support
> to use the new database api.
>  https://gitlab.isc.org/isc-projects/bind9/issues/182
>
> The question is, would it be useful if we included the GeoLite2
> database with the BIND distribution? Since we update at least twice a
> year, we could keep it fairly well up to date, and it would save users
> having to go get and update the db themselves. It would add about
> 1.5MB to the BIND distribution (depending on whether we use the
> country or city level).
>
> Votes, comments welcome. 
>
> Thank you,
>
> Vicky
> -
> Product Manager
> Internet Systems Consortium
> vi...@isc.org 
>
>
I use GeoIP with webservers, but not with BIND.  I run a cron job that
pulls the Maxmind updates roughly monthly.  IP address allocations
change a bit more frequently than twice a year.

Rather than bundling the database, you might want to bundle a script to
automate the update process... preferably one that you don't have to
maintain.  (Stick to your core competency...)
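(For instance - a sketch, assuming MaxMind's geoipupdate client with a
configured /etc/GeoIP.conf; nothing here for ISC to maintain:)

    # /etc/cron.d/geoipupdate - refresh the GeoLite2 databases monthly
    30 3 5 * *  root  /usr/bin/geoipupdate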

I think that would be more useful (and less likely to complicate the
lives of packagers) than bundling the database.

And less work for you :-)






Re: BIND Server running but not responding

2018-04-18 Thread Timothe Litt
On 18-Apr-18 09:51, Admin Hardy wrote:
>
> I would be so grateful of your help in this issue.
>
> I am running BIND 9 on Windows 7
> Service "ISC BIND" shows as started up
>

Warren's right.  And change your rndc-key's secret ASAP.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: Suggestions for a distributed DNS zone hosting solution I'm designing

2018-03-09 Thread Timothe Litt
On 08-Mar-18 07:52, Tony Finch wrote:
> Best way to achieve this is with anycast, which can be pretty
> time-consuming to set up - try searching for Nat Morris's presentation
> "anycast on a shoestring" which he gave at several NOG meetings.
> The advantage of anycast (as opposed to having NS records in lots of
> locations) is that you are depending less on resolvers to work out for
> themselves which of your servers is fastest.
>
Does anyone know what happened to his project?

It looked like an interesting secondary DNS, but it seems to be out of
business.

noc.esgob.com has a recently expired certificate, and redirects to a
one-line text page (his name).

The github repository is empty.

So it appears to be defunct.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: DNSSEC validation without current time

2017-12-18 Thread Timothe Litt
rom NTP time every week or two - that's more than sufficient for
DNSSEC & NTP bootstrap.)

Alternatively, as previously discussed, if you need the best (non PTP)
time, add a GPS receiver, with pool backup.

You can skip the DNS cyclic dependency completely if you have
locally-trusted NTP and DHCP servers - provide your clients with the NTP
server addresses via DHCP.  (They're sent as IP addresses, not names.) 
This isn't as hard as it appears.  If you run NTP on all your machines
(yes, there's NTP for windows), your Pi can get time from them.

Further, since you run your own DNS server - presumably within some
firewall - you can trust it to serve your local zones.  DNSSEC not
required.  If you include your local machines in your NTP
configurations, everything is under your control.  It then becomes a
sequencing issue only if your entire site goes down.   (If so, you want
your local master to be up first.  Otherwise, the rest will coast using
other NTP sources.)  If you're really serious, you run at least 3 local
clocks - preferably something like GPS, WWV (or other radio source), and
a local atomic (or at least, TCXO) clock.  If you start looking at
failure scenarios, it gets more interesting.

As previously noted, startup scripts need to have the "right" definition
of "system time available" & dependencies for your applications
(including named) to start.

Because they draw minimal power (and so will run a long time with a
modest UPS), I use an RPi with GPS & some pool servers as my preferred
time source.  It boots using an RTC.  My edge router also runs NTP,
preferentially taking time from the RPi - but also configured with other
Public and local servers.  In case the RPi goes down, the local machines
also participate - the low latency and dispersion pretty much ensures
that they'll be taken over the public servers.  I may add another Pi
with another GPS and/or radio receiver, when I acquire enough round TUITs.

So, what to conclude?

  * If you have other machines in your local network, use them as NTP
sources and provide the addresses to your RPi via DHCP.  This is
cheapest and easiest.
  * If you don't need precise time (e.g. for purposes beyond DNSSEC),
the next cheapest solution (in $ and time) is to just add an RTC.
  * If you also want precise time, but don't need it to be highly
available, add a GPS.
  * For more availability, do both.  And possibly add other time sources
(Radio, TCXO, geographically dispersed GPS, more RPis...).

In any case, let us know what you end up with.

Have fun!

(1) This isn't an expensive problem to solve.  My RPi's RTC (TOY) uses a
DS1302 - I got a bunch from e-bay for about $2 (including battery &
shipping).  I could publish the software if there's interest.


rtc/rtc-ctl --show --debug
TOY Clock registers as read (UTC):
81: 57 RUN 57 sec
83: 42 42 min
85: 12 24H 12 hr
87: 18 18 date
89: 12 12 month
8B: 02 02 weekday
8D: 17 17 year
8F: 80  WP ctl
Applying drift correction of -28.370 PPM to 10869574.837 seconds
(125d 19h 19m 35s) elapsed
TOY    time is Mon Dec 18 2017 07:48:05 EST
System time is Mon Dec 18 2017 07:48:07.234 EST
Remaining offset is -2.234 sec (-0.206 PPM)

(2) 20 ppm is ~ one min/month.  Typical crystals can be 100 ppm or more
(depending on temperature & PCB layout), so 5 min/month.  TSIG fudge is
nominally 5 min, so resyncing every 1-2 weeks is close enough.  And also
close enough for sane DNSSEC configurations.  You can resync more often,
but it's a fair bit of bit-banging on a slow bus (I2C or SPI for most),
and there's no point.

Oh, why mention TSIG?  Because ... it's another time-sensitive part of
named, and often used for DHCP - DNS updates...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


Re: Re: DNSSEC validation without current time

2017-12-15 Thread Timothe Litt

On 15-Dec-17 07:44, Mukund Sivaraman wrote:

On Fri, Dec 15, 2017 at 12:45:11PM +0100, Petr Menšík wrote:
>> Hi folks.
>>
>> I am looking for a way to validate name also on systems, where current
>> time is not available or can be inaccurate.
> I use a Garmin 18x LVC 1pps GPS receiver device connected to RS-232
> serial port. The device plus cables cost me $70 altogether, and ntpd
> works natively with it using the NMEA refclock driver (there's no need
> of gpsd). It has a 1s PPS signal accurate to 1us. It is accurate to
> within +/- 100us on Fedora where due to no hardpps kernel support
> because of tickless kernel, the PPS signal is timestamped and available
> on /dev/pps0 but the kernel doesn't use it to directly maintain the
> clock and it has to be done from userland which is affected by the
> system load.  If you were to recompile a kernel that's configured
> appropriately, I feel the clock can be synchronized to about 1us
> accuracy.
>
> It is more or less reliable and value for $70 if one wants UTC on their
> computer without accessing the internet. This is more than sufficient
> for DNSSEC validation and many other network services, and certainly
> more accurate than using the ntp.org pools.
>
>   Mukund
>
I use a 19x LVC too (on Raspbian == Debian).  But I also have an RTC.
GPS does have outages,  can take a while to get a fix, and NTP wants
consensus.  So I use my GPS receiver as a local clock source
(preferred), but also configure several servers from the pools as a
sanity check - and to deal with any GPS outages/slow starts.  It's
worked well for me.
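For reference, the relevant part of my ntp.conf looks roughly like this
(a sketch - the NMEA refclock is driver 20; the mode bits and fudge
values depend on the receiver, baud rate, and wiring, so calibrate your
own):

    # GPS via the NMEA refclock (127.127.20.x), preferred local source
    server 127.127.20.0 mode 17 minpoll 4 prefer
    fudge  127.127.20.0 flag1 1 time2 0.350  # flag1 enables PPS; time2 = serial lag

    # pool servers as sanity check and fallback
    pool 0.pool.ntp.org iburst
    pool 1.pool.ntp.org iburst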

Along those lines, I haven't splurged yet, but Adafruit has an
interesting module for ~$40 (US) on a breakout board ($45 on a Pi Hat -
which is cheaper/easier than building your own PCB), which includes a
GPS patch antenna.  If you need an external antenna, it comes up to
about the cost of the Garmin, but draws only 20mA vs. 90, and is a more
modern receiver.  On paper it looks good.

See https://www.adafruit.com/?q=ultimate%20gps - I'm not affiliated with
Adafruit, and while I've looked at the specs, don't have direct
experience.  YMMV.

Enjoy.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 







Re: DNSSEC validation without current time

2017-12-15 Thread Timothe Litt

On 15-Dec-17 06:45, Petr Menšík wrote:
> Hi folks.
>
> I am looking for a way to validate name also on systems, where current
> time is not available or can be inaccurate.
>
> This is related to booting with NTP client, when the only configuration
> is hostname that has to be resolved. There is a bit circle dependencies.
> First current time is required for DNSSEC validator to verify signatures
> of all keys. However that is hard to maintain on systems without RTC
> clock running when it is down. Raspberry PI is example of such system.
> Until hostname is known, time cannot be synchronized and corrected to
> real value. They sort of depend on each other. The only secure way I
> found is to hardcode IP address into NTP client or obtain IP from other
> trusted source (DHCP?).
>
> Available option is of course to disable validation until valid time is
> received. It seems to me that is unnecessary lowering the security. I
> would like some option to limit checking validity period of used keys
> instead. Just validate existing keys from trust anchor and trust the
> last key that can validate. I think that is far better than no
> verification at all.
>
> Is it possible to do that in BIND? Maybe bootstrap verification could be
> done only with delv tool with time-checking disabled. I found no way to
> do that. Is there good reason why it is not available? Is better method
> for solving secure configuration of timeless system available?
>

I added an RTC to my Pis :-)  It makes life a lot simpler, even though I
had to write a driver and calibration mechanism.

But if you have access to a DHCP server, have the client request Option
42; this returns one or more NTP servers' IP addresses in preference
order.  You can use NTPD (or ntpdate) to get a time.   ISC DHCP client
supports this option; see dhcp-users if you need help.
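With ISC dhclient, that's roughly (a sketch; the dhclient.conf name for
option 42 is ntp-servers):

    # /etc/dhcp/dhclient.conf - ask for NTP servers (option 42) as well
    request subnet-mask, broadcast-address, time-offset, routers,
            domain-name, domain-name-servers, ntp-servers;

dhclient-script then exposes the result as $new_ntp_servers, which a
hook script can hand to ntpd or ntpdate.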

DNSSEC requires reasonably accurate time, as signatures have validity
periods.  Your scheme would not work; you need time to validate ANY
signature - from the trust anchor down.  If there's no time, you can't
validate any part of the chain - so you might as well use ordinary DNS. 
NTP is fairly robust; it uses consensus from multiple servers to
establish correct time.  For a rogue DNS to inject bad time into your
PI, it would have to know which NTP servers you are using.

Another option is to use DHCP to get the address of a validating
resolver, and rely on that for bootstrapping NTP.  Again, this depends
on whether you control/trust your DHCP server.  More ISPs are providing
validating DNS servers, but it's not universal.  Hardcoding one of the
public ones (e.g. Google - 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888,
2001:4860:4860::8844) is fairly safe.

NTP server addresses are more volatile, and it's a serious breach of
netiquette to hardcode them; there are a number of stories of how this
has gone badly wrong for all concerned.

The choice depends on your requirements, available resources, and risk
tolerance.

You also need valid time for many other applications; TSIGs require a
reasonably close (on the order of minutes) time sync between sender and
receiver.

So rather than try to tweak NAMED, focus on getting a reasonable time
early in boot - and make sure that dependencies on a valid time are
properly expressed in your startup scripts.

Bottom line: your problem is getting a reasonable time, not with the
consumer(s).

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




Re: DNAME usage?

2017-11-21 Thread Timothe Litt
On 17-Nov-17 18:04, Mark Andrews wrote:
> DYN used to just require a TSIG signed update request set to a server 
> specified in
> a SRV record.
Depends on which service.  The one I referred to is the one that was
popular (free) for people who wanted to reach a machine on a dynamic IP
address.  Because it was popular, it was implemented in a number of
routers, including Linksys (low end) and Cisco (IOS).  I believe they
discontinued the free version, but the protocol lives on.

It's worse than DNS UPDATE in a number of respects - but is trivial to
implement in a router or script, as the core is just an HTTP GET.
>
> We have a perfectly fine protocol for updating the DNS but DNS hosting 
> companies
> want to reinvent the wheel.
Agree. I wish that the DNS UPDATE protocol was the only one in the
wild.  Unfortunately, (non-jail broken) routers don't provide that
option, but do provide the http ("dyn") version.  So if you want to use
a service that requires it - or want to bridge a router that supports it
to DNS UPDATE, some invention is required.  I outlined an approach that
works for me.

For reference, cisco's IOS (now) supports both methods - to some extent.

See
https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_dns/configuration/15-sy/dns-15-sy-book/Dynamic-DNS-Support.html#GUID-DCA9088D-EB90-46DE-9E33-306C30BB79CE

And from that page, here's the reference to dyndns (you can change the
URI for other http services; it lists 6 others)

add

http://test:t...@members.dyndns.org/nic/update?system=dyndns==

I use https, of course.

Naturally, IOS doesn't support TSIG - so DNS UPDATE from it has to be
authorized by IP address. :-(

RFCs 2136/7 have been around since 1997, so there's really no excuse for
DNS providers not to support them.

But we live in a world of excuses :-(




Re: Re: DNAME usage?

2017-11-17 Thread Timothe Litt

On 17-Nov-17 14:48, Mark Andrews wrote:
> Alternatively use a http server that can update the records for the 
> interfaces it is listening on. 
>
> This sort of thing is possible. Named gets informed by the OS when addresses 
> get added and removed. It currently just adds and removes listening sockets 
> but you could trigger other actions like sending dynamic dns updates.
>
> Unless you ask for the functionality it won’t be added.
>
>
> --
> Mark Andrews
>
>> On 18 Nov 2017, at 06:38, Mark Andrews  wrote:
>>
>> Just have the machine hosting the http server do a dynamic update of the A 
>> ana  records when they are assigned to the interface.
>>
>> It should be possible to get the os to run a program when this happens so it 
>> can perform a second dynamic update on a the different name. 
>>
>> -- 
>> Mark Andrews
We don't have the whole story from the OP, but in the typical
configuration that prompts this question, neither will solve the
problem.  The problem is that the dhcp client and http server are likely
not the same machine.

If you have a dynamic IP(v4) address & want to update DNS for a server,
it's probably NATed (by a router) before the HTTP server sees it.

The HTTP server always listens on the inside NAT address.  So it won't
see an address change on its interface.

The router implementing NAT is what will get the dynamic address, from
an ISP.  If it's a sealed box, it probably has support for updating DNS
- though it's typically the dyndns protocol, not DNS update.  (Assuming
the ISP hasn't disabled the feature.)  This is what dyndns, NO-IP, &
others use.  If you can modify the URL that it uses, you can point it to
your own script, which then does a DNS UPDATE transaction.  I use this
approach with Cisco IOS routers - though many others allow this - and
still others can be fooled (e.g. with a HOSTS entry for one of the
update servers).  What's nice about this is that you don't have to
jailbreak or modify anything.  Just pretend to be an update service. 

If you're using a jailbroken or other Linux router, and it happens to be
the same physical machine as HTTP server, it could look for routing
updates on the external interface.  I don't think this is a common case
(except for jailbroken routers - like OpenWRT).

Most often, the HTTP server is on a separate machine and LAN - it can't
see the external interface that gets the dynamic address.

When the router won't notify someone about address changes, the usual
solution is for something behind the NAT to poll an external public
server for your IP address, then use the result to initiate a DNS
UPDATE.  (E.g., a local script asks the external server to return the IP
address that contacted it (REMOTE_ADDR).)  There are a bunch of services
and scripts for this.  Most of the scripts update a DNS provider with
the dyndns protocol (others use it).  The nicer "what's my IP address"
scripts return JSON.  But changing them to do DNS UPDATE is pretty
simple - see Net::DNS if you're a Perl person.

If you have more than one site - or a friend - and prefer to be
independent, you can easily write your own CGI script to return the
other's IP address, e.g.:

    #!/bin/sh
    printf 'Content-Type: text/plain\nConnection: close\n\n%s\n' "$REMOTE_ADDR"

(If your friend doesn't have a static IP address, beware of deadlocks.)

If you have access to the DHCP client's status (e.g. a leases file or
some GUI or CLI on the router), you can sometimes get the external
address from there. 

A web search for "dynamic IP update script" will turn up lots of
resources - scripts & services.

A drawback with polling solutions is that they're not instantaneous -
you get the polling delay on top of whatever minimum TTL the DNS service
imposes.  (And there are limits on how fast you can - or would want to -
poll.)  That's fine for home hobbyists - especially since dynamic IP
addresses are often stable for a VERY long time.  But I would be careful
about running a business or other critical server where DNS updates lag
address changes.

So get a router that talks some dynamic update protocol and go from
there.  That minimizes the delay, and avoids having to retrieve your
public address from an external source.

https://help.dyn.com/remote-access-api/perform-update/ defines the
dyndns update protocol - writing a server is straightforward.

Of course if you have IPv6 - and are getting a dynamic address - you
don't have to deal with NAT.  In that case, you can certainly have
dhclient or RTNETLINK (see ip monitor) trigger a script.  

But note this part of the problem statement:
> the super domain is managed by an outside service. 
This probably makes the OP's life more difficult.  Those services tend
not to support DNS UPDATE (or even dyndns update).  In that case, you're
into driving forms on their web GUI with curl/wget.  And tracking their
"improvements".

Grief like that is why I ended up running my own DNS master server...and
getting static IP addresses for my central site. 

I guess I 

Re: Re: checkhints: view “internal”: b.root-servers.net/AAAA (2001:500:200::b) extra record in hints

2017-09-10 Thread Timothe Litt
The most sensible thing to do is ignore the message, and keep named
reasonably up-to-date.

I used to maintain a local hints file with a script that periodically
downloads and updates it (from internic or the DNS), reconfiguring named
when it changes.  It works well - but it's really not worth the effort. 
I've switched to just using the built-in hints.
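(For the record, the whole job can be as small as something like this -
a sketch assuming curl, a hints file at /var/named/named.ca, and that
your named re-reads the hints on reconfig:)

    # fetch the current root hints; only poke named if they changed
    curl -sf https://www.internic.net/domain/named.root -o /tmp/named.root &&
        ! cmp -s /tmp/named.root /var/named/named.ca &&
        cp /tmp/named.root /var/named/named.ca &&
        rndc reconfig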

The hints are only used to locate a root server ("root priming"); as the
message indicates, once any one is found, named will query it  for the
current servers/addresses and check for consistency.   It uses the query
results; the multiple hints provide redundancy for the initial query -
but you don't need all 13 (26) to be correct.  The only reason to worry
is if most of the hint addresses go stale at once - which would be
unprecedented in the history of the DNS.

Note that when root server addresses go stale, the convention is that
the old address is kept in service for some time after the change, so
there's plenty of time for clients to catch up with no impact.  For B
root, the plan is at least 6 months. 
(https://b.root-servers.org/news/2017/06/01/new-ipv6.html)

There does seem to be an issue where if cache memory size is small &
root references rare, the root server records are evicted - causing the
hints to be re-fetched and the messages repeated.  Arguably, named
should treat these as more precious than other records when doing cache
evictions.

But they're just informational messages.  You should run a reasonably
current version of named for security and performance.  As long as you
do, the built-in hints will be perfectly adequate.  Even if you don't,
the hint addresses from a decade ago are adequate to bootstrap named. 
The only good reason to have private hints is if you have an alternate
DNS universe - which is highly discouraged.

For more detail, see
https://kb.isc.org/article/AA-01309/0/Root-hints-a-collection-of-operational-and-configuration-FAQs.html

Bottom line is that these messages are a nuisance & in almost all cases
the most effective use of your time is to ignore them... The effort of
maintaining a private copy of the root hints isn't worthwhile.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 09-Sep-17 23:14, Stefan Sticht wrote:
> Hi,
>
> thanks for all the suggestions.
>
> I have no forwarders configured.
> I started downloading and using the hints file from 
> ftp://FTP.INTERNIC.NET/domain/named.cache shortly after I noticed the problem.
>
> # grep B.ROOT /var/named/named.ca
> .                    3600000  NS    B.ROOT-SERVERS.NET.
> B.ROOT-SERVERS.NET.  3600000  A     192.228.79.201
> B.ROOT-SERVERS.NET.  3600000  AAAA  2001:500:200::b
>
> I wouldn’t expect a problem with my hints file.
>
> Thanks,
> Stefan

Re: Re: make AAAA type the default for dig

2017-06-14 Thread Timothe Litt
On the original topic, it would be nice to have a dig option that
returned both A and AAAA with one command.

Since it does this, I tend to use 'host' (note that host -v gives the
same response detail as dig -t A, dig -t AAAA, and dig -t MX).

On the other remarks, inline.

On 14-Jun-17 21:09, Mark Andrews wrote:
> In message <20170614132510.6ff832a5@ime1.iment.local>, Paul Kosinski writes:
>> Has IPv4 faded away and I didn't notice? Unlike the well planned switch
>> to Area Codes, IPv6 is not backward compatible.
> It has started to fade away.  If you have IPv6 at home, statistically,
> most of your traffic will be IPv6.  There will be outlier homes but
> in general IPv6 will carry more traffic than IPv4.
Not that I've noticed here in the US.  Comcast does have IPv6 to the
home (well, except for some of their
acquisitions that haven't been upgraded yet.)  Pretty much no other ISP
offers it.  The fiber projects - Google and Verizon - both stopped
deployment of fiber.  I think Google's supports IPv6.  Verizon's does not.

Beyond that, you can get fiber (and sometimes IPv6) if you're a large
business.  When I looked for an alternative to Verizon, I was quoted
~$50K for an "engineering feasibility study" for getting fiber to the
house, with corresponding monthly charges.  Not viable for my hobbies.

There are some fringe ISPs in a few markets that offer IPv6 over DSL if
you insist - but who wants DSL speeds (and prices) when you can usually
at least get cable, and if you're lucky fiber at a much lower cost/bit/sec?

> B2B traffic isn't quite as high but there too IPv6 takes a significant
> amount of traffic.
>
>> (The telcos would have gotten rather a lot of complaints if they said
>> every had to get a new telephone number, and also -- new telephones.)
> I've had to get new telephone numbers to fit in more customers over
> the years with support for the old number being removed after a
> year or so.
>
>   462910 -> 4162910 -> 94162910
Yes, here in the US we have periodic "area code" splits that cause
renumbering, stationary and advertising changes, and general angst.

> As for new telephones, yes this has been mandatory, switching from
> rotary to DTMF.  There was a period of overlap but support for
> rotary phones was turned off in the exchange.
Rotary phones are still supported here.

But I use VoIP.  Over IPv4.  (And my VoIP adapters do support rotary
dialing)

> Most of you have thrown out several generations of computing devices
> that have supported IPv6 without even being aware you were doing
> so.  IPv6 support is 20+ years old.  My daughter, who has now left
> home, has lived her entire life with equipement in the house that
> has supported IPv6.  The house had working IPv6 connectivity before
> she went to primary school.  She graduated from Y12 last year.
>
> I'm still waiting for my ISP to turn on IPv6.  The CPE router
> supports it.  I just routed around them to get IPv6 at home.
I still can't get native IPv6 - but I have FTTH and can get 500Mb/s IPv4
(for a price I won't pay).
So Tunnels.  BTW, SixXS has retired, leaving no U.S. tunnel provider
that supports DNSSEC
for the reverse delegations.  (Well, none in my price range.)

Bottom line is that experiences vary.  The US has a complex regulatory
environment - and large diverse geography.  It moves with a deliberate
lack of speed.

The other consideration for the ISPs is that it's a lot harder for them
to justify charging for static/more than 1 IPv6 address.  There's an
extreme disincentive for them to cut their revenue stream.  (I've seen
some plans where they're seriously proposing to issue /128s.  As you
say, Luddites - capitalist Luddites.  Sigh.)

The address space exhaustion hasn't really moved the needle at the
consumer/small business level - the ISPs are quite happy to NAT - and
they hoard.

> If you have a piece of computing equipement bought in the last 10
> years that doesn't suppport IPv6 today it is because the manufacture
> is a ludite, not because IPv6 doesn't work.
Agree, though there is also the point of view that since customers can't
get IPv6, shipping it in products adds cost (qualification, risk of
bugs, memory/documentation) with no perceived benefit to the vendor or
customer.  I don't subscribe to that POV - but it isn't entirely irrational.

> Mark
>
>> On Wed, 14 Jun 2017 22:10:25 +1000
>> Mark Andrews  wrote:
>>
>>> In message , "Marco
>>> Davids (SIDN)" writes:
 Hi,

 Not sure if this has been proposed before, but I am wondering:

 Has ISC ever considered to change the default 'dig -t' option from
 A to ?

 --
 Marco
>>> This would break too many scripts.  You can do this for yourself
>>> by setting the type in $HOME/.digrc
>>>
>>> % cat ~/.digrc
>>> -t AAAA
>>> % dig isc.org
>>> ;; BADCOOKIE, retrying.
>>>
>>> ; <<>> DiG 9.12.0-pre-alpha+hotspot+add-prefetch+marka <<>> isc.org
>>> ;; global options: +cmd

Re: RE: Providing GeoIP information for servers

2017-05-11 Thread Timothe Litt
On 10-May-17 17:50, John W. Blue wrote:
> >From the it-could-be-worse department:
>
> https://arstechnica.com/tech-policy/2016/08/kansas-couple-sues-ip-mapping-firm-for-turning-their-life-into-a-digital-hell/
>
> I am more a fan of continental geolocation accuracy when it comes to IP 
> addresses.
>
> John
If your static IP address has a reverse name in DNS, it's a short hop
through whois to your actual location.

Well, usually. It is possible that none of the contact addresses are
where the IP address is located - especially for large organizations.
And there are whois proxies that obscure your physical location.

Still, it's pretty hard to hide.  (Even in a Kansas lake.)

Depending on your situation, you may wish to have different accuracy
and/or precision in internal and external LOC records.

But on the original topic:  Contact Maxmind and see if they'll fix your
address. https://support.maxmind.com/geoip-data-correction-request/ 
They may require evidence that Comcast has delegated the address to you.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


> 
> From: bind-users <bind-users-boun...@lists.isc.org> on behalf of Mark Andrews 
> <ma...@isc.org>
>
>
> AFAIK Maxmind et al don't lookup LOC records.  That being said if
> enough people published LOC records they might start.
>
> For Google you can update the location using a app which uses the
> phone's GPS.
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
>


Re: Re: Slow zone signing with ECDSA

2017-04-19 Thread Timothe Litt

On 19-Apr-17 21:43, Mark Andrews wrote:
> ...
> DSA requires random values as part of the signing process.  Really
> all CPU's should have real random number sources built into them
> and new genuine random values should only be a instruction code away.
>
> Mark
Most recent ones do.  See RDRAND for Intel (and AMD).  Even Raspberry Pi.

The tinfoil hat brigade in some distributions has resisted using them,
fearing some conspiracy to provide not-so-random numbers.  (Despite the
fact that /dev/random hashes/whitens the inputs to the entropy pool.) 
You may need to take a positive action to enable use of the hardware
source.  Google RDRAND for plenty of entertainment.

There are also fairly inexpensive (~USD 50) USB devices that provide
reasonable entropy quality at decent speeds (but much lower than
RDRAND).  They're good for the old hardware that you recycle for
single-purpose servers.

Systems that have low activity/low entropy can benefit from
entropybroker (https://www.vanheusden.com/entropybroker/).  Use it to
distribute entropy from those who have to those who don't.  It's really
handy for VMs, and for that isolated system that you use for your root keys.

For most uses, use /dev/urandom - which doesn't block.  /dev/random will
block if the entropy pool is depleted.  (However, if you have a hardware
source, very, very rarely.)  /dev/random is recommended for long lived
keys - which usually includes KSKs, and may include ZSKs.  I don't
believe named makes a distinction...you get to pick one for everything.
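Two quick Linux checks, if you're curious where you stand (a sketch;
these paths are standard on recent kernels):

    # how much entropy the kernel currently has pooled
    cat /proc/sys/kernel/random/entropy_avail

    # does this CPU advertise the RDRAND instruction?
    grep -m1 -o rdrand /proc/cpuinfo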

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 



Re: Re: Bind Queries log file format

2017-02-04 Thread Timothe Litt

On 04-Feb-17 04:27, Phil Mayers wrote:
> On 03/02/17 16:45, Mukund Sivaraman wrote:
>
>> The query log is getting more fields at the end of it such as
>> CLIENT-SUBNET logging.
>
> Although it would be super-disruptive, has any thought been given to
> moving to an entirely new log format, for example k/v or JSON? They're
> a lot more extendable going forward and most SIEM/ML systems will read
> them with no additional configuration.
>
> Adding the query log hex/ptr thing just inconvenienced me. Strangely,
> changing the entire format to k/v would have massively helped me. This
> applies across all logs (RPZ in particular).
>
> Obviously one sample isn't enough but it's maybe something to consider?
I'm not sure whether I'm in favor of this approach, but it's not
necessarily very disruptive.

It would be trivial to script a converter from JSON to the current log
format - or even one that took a format string to select whatever fields
in any order.  Pipe a new log file through it to existing log readers,
and you're done.

For almost complete transparency, embed it in a daemon that continuously
reads the JSON log & appends to the traditional log; the existing log
readers can then read the old format in near real-time...

Then when a support issue (or other requirement) comes up, the enhanced
data is in the JSON log.

When your old log processor is upgraded to use a new field, just add it
to the converter (format).

New processors would preferably read the JSON/native format directly.

The only annoyance is having to manage 2 log files (and some disk space).

FWIW





Re: Re: DNSSEC validation failures for www.hrsa.gov

2016-06-25 Thread Timothe Litt

On 24-Jun-16 22:13, Jay Ford wrote:
> On Sat, 25 Jun 2016, Mark Andrews wrote:
>> The servers for webfarm.dr.hrsa.gov are not EDNS and DNSSEC compliant.
>> They are returning FORMERR to queries with EDNS options.  Unknown
>> EDNS options are supposed to be ignored (RFC 6891).
>>
>> You can workaround this with a server clause to disable sending the
>> cookie option with a server clause.
>>
>> server <ip-address> { request-sit no; };   // 9.10.x
>> server <ip-address> { send-cookie no; };   // 9.11.x
>
> That did it, at least for now.
>
>> Now one could argue that FORMERR is legal under RFC 2671 (the initial
>> EDNS specification) as no options were defined and to use a option
>> you need to bump the EDNS version but the servers don't do EDNS
>> version negotiation either as they return FORMERR to a EDNS version 1
>> query rather than BADVERS.  They also incorrectly copy back unknown
>> EDNS flags.
>
>> Whether this is the cause of your issue I don't know but it won't be
>> helping.
>
> The HRSA folks claim that their "site is fine".  In hopes of
> disabusing them of that notion I'll have our folks who have to try to
> use the HRSA site pass along the trouble report.
>
> Thanks for the diagnosis & work-around.  Excellent as always & crazy
> fast, too!
>
> 
> Jay Ford, Network Engineering Group, Information Technology Services
> University of Iowa, Iowa City, IA 52242
> email: jay-f...@uiowa.edu, phone: 319-335-
>

FWIW, dnsfp identifies the DNS servers as:

fingerprint (162.99.248.222, 162.99.248.222): Unlogic Eagle DNS 1.0 -- 1.0.1 
[New Rules]  

If this is correct, the project website for Eagle DNS would appear to
be: http://www.unlogic.se/projects/eagledns

It seems a rather odd choice for a .gov (US Health and Human Services)
owned domain...though one never knows what IT outsourcing will produce :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




Re: Writeable file already in use

2016-01-05 Thread Timothe Litt
Jan-Piet Mens <jpmens@gmail.com> wrote:
> This might make you sad if you have lots of zones or large zones.
> .. or even just want to look at what was transferred (whitout having to
> recurse to a `dig axfr').
>
> I see no reason to omit 'file' (except on a diskless slave)
Or if you care about availability, which is a strong reason for having a
slave in the first place. (Performance is the other.)

If a diskless slave restarts when the master is down, it has no data to
serve.  This will also make you (or your clients) sad, even if you only
have a few small zones :-(

I agree - don't omit 'file', except on a diskless slave.  Don't try to
share the file, even when it seems to work.  And think twice about why
you have a diskless slave...

The only fault that I find with bind's decision to prohibit shared
writable files is that it took so long to arrive.  Instead of
complaining, which seems to appear here every few months, the response
should be "Thank you - for *finally* preventing this disastrous
misconfiguration."

I've lost count of how many times I've encountered someone who had
corruption due to this misconfiguration.  There are many (working) ways
to replicate data.  Among them: in-view, dname, external scripts to copy
files, external tools that write records to multiple files, and
replicators triggered by file writes (e.g. inotify) or database update
triggers.

Although I remember when a 1MB ("hard") disk was huge - today disk space
is cheap.  Don't trade a few MB (or GB) of space for eventual data
corruption.  And the manpower to implement any of the above is far less
that that spent on recovering from corruption, which can go undetected
for a long time.  [And usually, the folks who run into it haven't tested
their backups...]

As for the "I know I'll never have bind update that zone" - that may be
true today.  But it changes -- perhaps when your successor discovers
it.  Either a tool requires dynamic update, or someone discovers signed
zones, or realizes that dnssec maintain saves a lot of work, or the next
technology comes along.  To misappropriate a K quote - "Your constant
is my variable".  Or the ever popular "If you don't take the time to do
it right, you'll have to make the time to do it over...and over again".

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: intermittent SERVFAIL with a DLV domain

2015-12-24 Thread Timothe Litt
On 23-Dec-15 08:34, Tony Finch wrote:
> Tony Finch <d...@dotat.at> wrote:
>
> Also, why is it trying to get address records for a reverse DNS name? 

An ip6.arpa or in-addr.arpa zone is not restricted to PTR records.
There's nothing special about 'reverse zones'.

dnsviz uses some heuristics to guess what records are worth looking for.

A while ago I asked Casey to have DNSVIZ check for more than PTR+DNSSEC
records in reverse zones, which he did.
There's a panel in dnsviz where you can change what it looks for if you
want more (or less).

A/AAAA records are used in reverse zones by an obscure RFC (1101
encoding of subnet masks), and by others for similar purposes.

(It shouldn't be surprising that CNAME, TXT, RP, LOC and DNSSEC-related
records can be in reverse zones too.)

dnsviz launches its queries in parallel, so asking for a few extra
records doesn't hurt anyone.


> 23-Dec-2015 13:20:54.328 lame-servers: info: broken trust chain resolving 'a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/DS/IN': 94.126.40.2#53
> 23-Dec-2015 13:20:54.328 lame-servers: info: broken trust chain resolving '1.0.0.0.3.2.1.0.0.0.0.0.0.0.0.0.2.0.0.f.a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/AAAA/IN': 2a01:8000:1ffa:f003:bc9d:1dff:fe9b:7466#53
> 23-Dec-2015 13:20:54.398 lame-servers: info: broken trust chain resolving '1.0.0.0.3.2.1.0.0.0.0.0.0.0.0.0.2.0.0.f.a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/A/IN': 217.168.153.95#53
>
> Tony.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477

2015-09-08 Thread Timothe Litt

On 08-Sep-15 00:46, stavrostseriotis wrote:
>
> Ok here is what I did:
>
> · After extracting the package I looked out at directories
> */usr/local/bin *and */usr/local/sbin *as mentioned in the procedure
> but I found that there are no files there.
>
> · I run *configure* command *without openssl* because I had
> trouble with the openssl library when it was enabled. Also since I am
> not currently using DNSSEC I guess that this is not a problem.
>
> · Then I run *make* and I didn’t get any error.
>
> · I run *make install* and I didn’t get any error again.
>
> · Stopped named service
>
> · I copied the /etc/named.conf file and then created another
> empty file as instructed with the correct permissions.
>
> · Started named service. It started normally without any error
> and also the process that was up is the same as before.
>
> · When I do *named –V* and also *rpm –q bind* I still see the
> same versions as before.
>
>  
>
> Yes I know that if I was using the RedHat package I wouldn’t had this
> problem because I already do this for other linux machines. Just this
> machine is old and when it was configured to work as nameserver the
> guys did it this way. Now we are in the process to build a new machine
> for nameserver with RedHat subscription and everything but until that
> happens it will be best if we can get rid of this security
> vulnerability cause I don’t know how long it will take.
>
>  
>
> Thank you for your responses.
>
>  
>
You are not making it easy to diagnose your problem.  The exact commands
that you are using and command output are missing.

From your description, you successfully built named and installed it -
somewhere.

You are not running the image that you built.  To confirm the version of
what you built, from the build directory you can run "./bin/named/named
-V"  This will also show us the configure options, including where it
should have been installed.

If the process has the same ID, you didn't successfully stop the old
named.  This can happen if you have a mix of RedHat and non-RedHat
startup (init) files. 

If rpm -q bind shows a version, then there is a RedHat package on the
system & you are trying to supersede it.  You probably are using the
RedHat startup files, which may be different from what you expect.  As I
wrote previously, the startup environment may have a different PATH from
your terminal.

You should have stopped named BEFORE running make install.

Please provide the output of at least:
named -V; echo $PATH; (build-directory)/bin/named/named -V; systemctl
status named.service; find / -xdev -type f -name named -ls

A few lines from make install should confirm that the new file is being
installed where you expect it.

lsof -p (named's pid) will confirm which image is actually running.

systemctl show --all named.service will show what service you're trying
to start.
systemctl status named.service should match

Or run 'service named status' & look in /etc/init.d/named if you're not
running systemd and named is a SYSV script on your version of RedHat.

You should not have trouble building with openssl.  Make sure that you
have the openssl-dev RPMs installed.  Don't try to build that from
source; RedHat heavily patches it & other packages depend on the changes.

Switching to the RedHat version of named may be your best option.  This
should not be difficult; make uninstall; yum install; edit the config. 
Depending on how your predecessors did things, you may need to yum
remove first, possibly with --force.


Timothe Litt

ACM Distinguished Engineer

--

This communication may not represent the ACM or my employer's views,

if any, on the matters discussed. 


> *From:*bind-users-boun...@lists.isc.org
> [mailto:bind-users-boun...@lists.isc.org] *On Behalf Of *Timothe Litt
> *Sent:* Monday, September 07, 2015 2:29 PM
> *To:* bind-users@lists.isc.org
> *Subject:* Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
>
>  
>
> Subject:
>
> Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
>
> From:
>
> stavrostseriotis <stavrostserio...@semltd.com.cy>
> <mailto:stavrostserio...@semltd.com.cy>
>
> Date:
>
> 07-Sep-15 05:24
>
>  
>
> To:
>
> bind-users@lists.isc.org <mailto:bind-users@lists.isc.org>
>
>  
>
> Hello,
>
>  
>
> I have a RedHat 5.11 machine and currently I am facing the issue
> with BIND vulnerability CVE-2015-5477. I cannot update my BIND
> using yum because I didn’t install BIND from RedHat at the first
> place so I need to do it manually.
>
> I downloaded the package of version 9.9.7-P2 from isc website but

Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477

2015-09-07 Thread Timothe Litt
> Subject:
> Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
> From:
> stavrostseriotis <stavrostserio...@semltd.com.cy>
> Date:
> 07-Sep-15 05:24
>
> To:
> bind-users@lists.isc.org
>
>
> Hello,
>
>  
>
> I have a RedHat 5.11 machine and currently I am facing the issue with
> BIND vulnerability CVE-2015-5477. I cannot update my BIND using yum
> because I didn’t install BIND from RedHat at the first place so I need
> to do it manually.
>
> I downloaded the package of version 9.9.7-P2 from isc website but
> since it is not an rpm file I have to build it myself.
>
> I followed the instructions I found on website
> https://deepthought.isc.org/article/AA-00768/0/Getting-started-with-BIND-ho
> but it does not change the version of bind. I don’t know what I am
> doing wrong.
>
> I am wondering if you can give me a little guideline on how to build
> and install the new version.
>
>  
>
> Thank you
>
"does not change the version of bind" - as reported how?  By named -V? 
Or by a DNS query to version.bind CH TXT?
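(For the latter, the usual check against the server in question is:

    dig @127.0.0.1 version.bind CH TXT +short
)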

If the former, you probably have more than one named executable - with
the old one earlier in your PATH.  "which named" should help.  If the
latter, did you remember to restart named?  And did the restart
succeed?  And does your startup process have the same PATH as your
terminal?  (Often they do not.)

Re-read the instructions - and pay special attention to how you run
configure.  The default is to build/install in /usr/local/*bin - which
is not the default for most distributions' startup files.

I strongly recommend keeping track of each step as you build (a big
scrollback buffer helps).  Either write your own instructions, or turn
it into a script.  There are enough steps that it's easy to make a
mistake - and you will be re-building bind again to upgrade.  Plus, if
you ask for help, you will be able to provide the details of what you
did.  Without details of what you did and what you see, people can't
provide specific help.

Note that RedHat usually has a number of patches (often for SeLinux and
systemd) that you won't get if you build yourself from ISC sources. 

Or remove bind and switch to the RedHat version.  You're paying RedHat
to do the maintenance, so unless you have local patches or very special
requirements, you might as well let them do the work. 

Typically, if you really need the latest from ISC on RedHat you're
better off getting the SRC RPM from RedHat & modifying the rpmbuild
config file to fetch the latest ISC source, then build RPMs.  If you
stay with the same ISC code stream, you won't have too many patch
conflicts to resolve.  After you've done this once or twice, you'll want
to revisit your need for local changes - either decide they're not that
important, or offer them to ISC.  Maintaining a private version is work.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 






Re: Re: Identify source of rndc reconfig command?

2015-08-25 Thread Timothe Litt
Robert,

While all the advice provided is good, you might also send a suggestion
to bind9-bugs.

The received control channel command message  would be more useful if
it included the peer address  port e.g.:
   ... general: info: received control channel command 'reconfig' from
127.0.0.1:48466 .

That would avoid having to use tcpdump to identify the source of these
sorts of problems.

Other thoughts:

If you have selinux enabled, you can (temporarily) deny access to port
953 with a local policy module, and use the resulting audit log entries
to identify the offending command.  To avoid service disruption, use
setenforce 0 (permissive) for the duration.  This is the simplest
approach (fewest tools, quickest & most certain results).  But you do
need to know how to set up an LPM... and if you're not running selinux
already, it can be a hassle to set up.  (I recommend doing it, but not
in the middle of this fire.)

Every 30 mins sounds like some sort of monitor.  Check that named.conf
isn't changing (which could trigger such a monitor.)  Or stop all system
management/monitoring packages until you find the culprit.

Consider  inotify-tools.  If a monitor is keeping an eye on bind, you
can catch it looking at (or touching) named's files.

lsof is a bit heavyweight for this.  Consider ss -p (ss is part of
iproute2) if you have it.

A final thought - look for log file managers (e.g. logrotate).  They may
be noticing named's file size & doing a reconfig to close/reopen the log
file.   (In which case, report a bug in the log manager's config -
named's own log file management avoids all those hassles.)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 24-Aug-15 17:55, Mark Andrews wrote:
 The first thing I would do is make sure only the users you want to
 be able to use the rndc key can read it.  I would then generate a
 new rndc key and configure both rndc and named to use it.

 If that doesn't work generate a new rndc.conf file with a different
 name that refers to a new rndc key.  Teach named to use that key
 then update all the scripts that you know about to use the new
 rndc.conf file.

rndc -c rndc.conf.path

 Mark

 In message 60946bf48ada4e6fb2ed7b0aa297d...@mxph4chrw.fgremc.it, Darcy 
 Kevin
  (FCA) writes:
 Does the rndc protocol have a timeout? If so, what is it set to? I don't see 
 anything about a configurable timeout interval in the man pages for rndc or r
 ndc.conf.

 What I'd probably do is turn off rndc in named.conf, set up a dummy server 
 to listen on port 953, which just accepts the connection, but doesn't respond
  to anything sent to it. That means that whatever is sending this command is 
 going to be stuck for some period of time -- possibly infinitely -- waiting
  for a response from the server. Then you can use something like lsof (whic
 h I assume exists in Debian) to track down which process it is.

  - Kevin

 -Original Message-
 From: bind-users-boun...@lists.isc.org [mailto:bind-users-boun...@lists.isc.o
 rg] On Behalf Of Robert Senger
 Sent: Monday, August 24, 2015 5:02 PM
 To: bind-users@lists.isc.org
 Subject: Identify source of rndc reconfig command?

 Hi all,

 after upgrading from Debian Wheezy to Jessie, bind9 receives rndc reconfig 
 commands every 30 minutes. I've never seen this before. Some of my own script
 s run rndc restart/reload after fiddling with network interfaces, but none 
 of these is the source of the observed 30 minutes interval. There are also no
  cron jobs.

 In the bind9 logs I see this:

 24-Aug-2015 22:53:43.431 general: info: received control channel command 'reconfig'
 24-Aug-2015 22:53:43.458 general: info: loading configuration from '/etc/bind/named.conf'
 ... [more than 350 lines reconfig log]

 Running tcpdump on the lo interface gives me this:

 root@prokyon:/etc/bind# tcpdump -i lo port 953
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
 21:23:35.071602 IP localhost.48466 > localhost.953: Flags [S], seq 3862717043, win 43690, options [mss 65495,sackOK,TS val 196635776 ecr 0,nop,wscale 5], length 0
 21:23:35.071699 IP localhost.953 > localhost.48466: Flags [S.], seq 2391140312, ack 3862717044, win 43690, options [mss 65495,sackOK,TS val 196635776 ecr 196635776,nop,wscale 5], length 0
 21:23:35.071821 IP localhost.48466 > localhost.953: Flags [.], ack 1, win 1366, options [nop,nop,TS val 196635776 ecr 196635776], length 0
 21:23:35.075355 IP localhost.48466 > localhost.953: Flags [P.], seq 1:148, ack 1, win 1366, options [nop,nop,TS val 196635777 ecr 196635776], length 147
 21:23:35.075435 IP localhost.953 > localhost.48466: Flags [.], ack 148, win 1399, options [nop,nop,TS val 196635777 ecr 196635777], length 0
 21:23:35.115513 IP localhost

Re: DNSSEC secondary (free) - Was - Re: Can I run two name servers on one host with two IP addresses?

2015-08-20 Thread Timothe Litt
On 20-Aug-15 10:50, /dev/rob0 wrote:
 On Thu, Aug 20, 2015 at 02:07:57PM +0200, Robert Senger wrote:
 There are a number of providers out there offering secondary
 dns services for free or for a few bucks/month. Even DNSSEC
 is possible for free.
 This is good news!  I knew there were several good choices for free 
 DNS hosting, but this is the first I heard of them supporting signed 
 zones.

 https://acc.rollernet.us/help/dns/secondary.php

 Are there others?  I saw another one amongst your NS hosts, but that 
 seems to be your own domain.  (If you're offering secondary NS for
 free, please do mention your service here.)
I use https://puck.nether.net/dns/.  It's free, it uses a current version
of BIND, supports DNSSEC, and has been stable for several years.  Only
drawback is that if you're in Chicago, you won't get enough geographic
diversity.  They have only one server, which is there.  And of course,
with free the SLA is best efforts, no guarantee.

I am not affiliated, just reporting my personal experience.





Of long names...

2015-03-15 Thread Timothe Litt
Discussing a 'you don't handle long names' issue that I discovered with
an application's developer, I thought I'd create a test case or two for him.

I did, but they don't resolve.  I might be missing something, so some
other eyes would be appreciated.

The test domain is hosted on godaddy's DNS.  (Because, well, it's a test
domain.)

dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'.  Does
anyone have experience with this server?

The recursive servers queried are mine (bind) - I've flushed their
caches.  I've also tried several web services that run DNS lookups; the
results are consistent: NXDOMAIN.

The two names in question each have AAAA records:

oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us

oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

The current zone serial number is 2015031503

dig reports that serial with a NXDOMAIN response to each name, so it's
not a propagation issue.

Exporting the zone file (yes, this is the entire file -- 10 records) gives:

; SOA Record
LITTS.US.  3600  IN  SOA  ns71.domaincontrol.com.  dns.jomax.net. (
    2015031503
    28800
    7200
    604800
    3600
)

; A Records
@  3600  IN  A  97.74.42.79

; CNAME Records
www  3600  IN  CNAME  @

; MX Records
@  3600  IN  MX  10  nano.litts.net

; TXT Records
@  3600  IN  TXT  "v=spf1 ip4:96.233.62.58 ip4:96.233.62.59
ip4:96.233.62.60 ip4:96.233.62.61 ip4:96.233.62.62 mx a:micro.litts.net
a:nano.litts.net a:pico.sb.litts.net a:overkill.sb.litts.net
a:hagrid.sb.litts.net a:smtp.litts.net -all"

; AAAA Records
oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay
    1800  IN  AAAA  2001:4830:11a2:941::43
beautiful.feeling  600  IN  AAAA  2001:4830:11a2:941::43
oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay
    600  IN  AAAA  2001:4830:11a2:941::43

; NS Records
@  3600  IN  NS  ns71.domaincontrol.com
@  3600  IN  NS  ns72.domaincontrol.com

Dig lookups fail on the long names, but the SOA shows the correct serial.

> dig oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us AAAA

; <<>> DiG 9.9.4-P1 <<>> oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us AAAA
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57860
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us. IN AAAA

;; AUTHORITY SECTION:
litts.us.  3600  IN  SOA  ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

;; Query time: 136 msec
;; SERVER: 192.168.148.6#53(192.168.148.6)
;; WHEN: Sun Mar 15 06:57:55 EDT 2015
;; MSG SIZE  rcvd: 216

> dig oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us AAAA

; <<>> DiG 9.9.4-P1 <<>> oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us AAAA
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60478
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

;; AUTHORITY SECTION:
litts.us.  2617  IN  SOA  ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

;; Query time: 7 msec
;; SERVER: 192.168.148.4#53(192.168.148.4)
;; WHEN: Sun Mar 15 07:01:16 EDT 2015
;; MSG SIZE  rcvd: 216

I have verified that bind is happy to create and resolve similar names...

Oh, and the third AAAA record does resolve, which makes me suspicious of
the name length.

Any ideas on this mystery?

-- 
Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Of long names...

2015-03-15 Thread Timothe Litt
Thanks.  I appreciate the extra eyes.

I'm pretty sure that GoDaddy has a problem between their WebGUI's
database and their servers.  The records appear in the former, but not
(as you saw) the latter - even though their GUI exports the zone file
containing them with the same zone serial number that your dig's SOA
revealed.

After some more detective work, I had a long, unsatisfactory 'webchat'
with GoDaddy
support.  They had all sorts of reasons why they have no problem and
I'm, er, 'wrong'.
Some would be extremely funny if told to a technical audience.

And since there's no problem, they refuse to escalate.  I've made an
out-of-band
attempt to get the attention of their management.

FWIW, bind is quite happy to accept these names in a domain where I run
my own
servers.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 15-Mar-15 19:49, Mukund Sivaraman wrote:
 On Sun, Mar 15, 2015 at 08:26:35AM -0400, Timothe Litt wrote:
 Discussing a 'you don't handle long names' issue that I discovered with
 an application's developer, I thought I'd create a test case or two for him.

 I did, but they don't resolve.  I might be missing something, so some
 other eyes would be appreciated.

 The test domain is hosted on godaddy's DNS.  (Because, well, it's a test
 domain.)

 dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'  Does
 anyone have experience with this server?

 The recursive servers queried are mine (bind) - I've flushed their
 caches.  I've also tried several web services that run DNS lookups; the
 results are consistent.  NXDOMAIN
 The authoritative nameservers for litts.us are returning NXDOMAIN for
 AAAA queries on these names:

 [muks@totoro ~]$ dig -t NS litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> -t NS litts.us
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25029
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;litts.us.  IN  NS

 ;; ANSWER SECTION:
 litts.us. 3600IN  NS  ns71.domaincontrol.com.
 litts.us. 3600IN  NS  ns72.domaincontrol.com.

 ;; ADDITIONAL SECTION:
 NS72.domaincontrol.com.   132465  IN  A     208.109.255.46
 NS72.domaincontrol.com.   172484  IN  AAAA  2607:f208:302::2e
 ns71.domaincontrol.com.   132465  IN  A     216.69.185.46
 ns71.domaincontrol.com.   172484  IN  AAAA  2607:f208:206::2e

 ;; Query time: 83 msec
 ;; SERVER: 127.0.0.1#53(127.0.0.1)
 ;; WHEN: Mon Mar 16 05:13:23 IST 2015
 ;; MSG SIZE  rcvd: 185

 [muks@totoro ~]$ dig +norecurse @ns71.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> +norecurse @ns71.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us
 ; (2 servers found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 65035
 ;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

 ;; AUTHORITY SECTION:
 litts.us. 3600IN  SOA ns71.domaincontrol.com. dns.jomax.net. 
 2015031503 28800 7200 604800 3600

 ;; Query time: 86 msec
 ;; SERVER: 216.69.185.46#53(216.69.185.46)
 ;; WHEN: Mon Mar 16 05:14:53 IST 2015
 ;; MSG SIZE  rcvd: 216

 [muks@totoro ~]$ dig +norecurse @ns72.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> +norecurse @ns72.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us
 ; (2 servers found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 15081
 ;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

 ;; AUTHORITY SECTION:
 litts.us. 3600IN  SOA ns71.domaincontrol.com. dns.jomax.net. 
 2015031503 28800 7200 604800 3600

 ;; Query time: 83 msec
 ;; SERVER: 208.109.255.46#53(208.109.255.46)
 ;; WHEN: Mon Mar 16 05:15:41 IST 2015
 ;; MSG SIZE  rcvd: 216


   Mukund





Re: Of long names...

2015-03-15 Thread Timothe Litt
Mark,

Not a failure to bump serial.  I omitted some detail.

I don't edit the SOA explicitly.  The GoDaddy GUI is responsible for
bumping the serial, which it seems to do with a 'save'.
I did add records incrementally, saved the zone, and verified that the
serial increased each time - both with dig and with their 'export zone'
option.
After adding the first long name, dig reported NXDOMAIN.  The SOA in the
authority section had the new serial.
I added the shorter one.  Serial bumped.  The short one can be found
with dig; the long one can't.
Added the second long name, which had the same length but fewer labels.
Saved.  dig reported NXDOMAIN; the authority section's SOA serial
incremented again.  Still only the short one was visible.  The short one
has two labels (left of the domain), so names with more than one label
are accepted.
Both nameservers produce the same results.
Logged out and in to their GUI.  The GUI could retrieve all three names.
So the DB behind the GUI stored the long records - but either it didn't
pass them to the server, or the server rejected them.

The limits on name and label length date back to RFC 1035; they're in
violation.  Even if they have some policy that reduces them (which I
would object to), the GUI should reject names that violate that policy,
not leave them in limbo.  And I tried some additional names that broke
the 63-character label length limit, which the GUI correctly rejected,
so it does some level of validation.  Thus, it's either total length or
number of labels that sends my names into limbo.  Or maybe both.  Or
maybe they apply a dictionary and drop names that look like sentences -
with no error.  But their engineers can sort that out.  Easier to do
with the code than by experiment.

I think that's conclusive, which is why I stepped into the support
morass.  I'm tempted
to move the domain to my own servers, but I really hate to let vendors
get away with
customer-unfriendly support.  Other people don't have the same ability
to fight back.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 15-Mar-15 20:59, Mark Andrews wrote:
 In message 55062475.6030...@acm.org, Timothe Litt writes:
 Thanks.  I appreciate the extra eyes.

 I'm pretty sure that GoDaddy has a problem between their WebGUI's database
 and their servers.  The records appear in the former, but not (as you
 saw), the latter.
 Even though their GUI exports the zone file containing them with the
 same zone serial number that your dig's SOA revealed.
 I would say "It looks like you failed to update the serial.  Bump
 the serial and reload." if this was reported to bind-bugs.

 What is exported could be "to be loaded" content, rather than
 currently loaded content.  Add another record and publish the zone.
 If you get that record and not the long name records then you
 have proof of a problem.  You can then remove the extra record.

 After some more detective work, I had a long, unsatisfactory 'webchat'
 with GoDaddy support.  They had all sorts of reasons why they have no
 problem and I'm, er, 'wrong'. Some would be extremely funny if told to a
 technical audience.

 And since there's no problem, they refuse to escalate.  I've made an
 out-of-band attempt to get the attention of their management.

 FWIW, bind is quite happy to accept these names in a domain where I run
 my own servers.

 Timothe Litt
 ACM Distinguished Engineer
 --
 This communication may not represent the ACM or my employer's views,
 if any, on the matters discussed.

 On 15-Mar-15 19:49, Mukund Sivaraman wrote:
 On Sun, Mar 15, 2015 at 08:26:35AM -0400, Timothe Litt wrote:
 Discussing a 'you don't handle long names' issue that I discovered with
 an application's developer, I thought I'd create a test case or two
 for him.
 I did, but they don't resolve.  I might be missing something, so some
 other eyes would be appreciated.

 The test domain is hosted on godaddy's DNS.  (Because, well, it's a
 test
 domain.)

 dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'
 Does
 anyone have experience with this server?

 The recursive servers queried are mine (bind) - I've flushed their
 caches.  I've also tried several web services that run DNS lookups; the
 results are consistent.  NXDOMAIN
 The authoritative nameservers for litts.us are returning NXDOMAIN for
 AAAA queries on these names:

 [muks@totoro ~]$ dig -t NS litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> -t NS litts.us
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25029
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;litts.us.  IN  NS

 ;; ANSWER SECTION:
 litts.us.   3600IN  NS  ns71.domaincontrol.com.
 litts.us.   3600

Re: BIND DNSSEC Guide draft

2015-01-04 Thread Timothe Litt
On 31-Dec-14 21:00, Jeremy C. Reed wrote:
 ISC is seeking feedback and review for our first public draft of the 
 BIND DNSSEC Guide.  It was written in collaboration with DeepDive 
 Networking.
I haven't had a chance to look in detail, but a quick scan resulted in
several observations that I hope are useful.  Also, I posted your note to
dnssec-deployment, where there should be enthusiasm for the topic :-)

The private network section 6.5.4 doesn't talk about how to configure
views/stub zones so that authoritative (internal) zones on a shared
resolver/authoritative
server get validated.  (point 1 in the section dismisses the
possibility.)  This can be done.

Further, it's useful.  People are much more likely to experiment on
internal zones.
More important, consider a typical scenario: my web server on the
internal view
has a different address from the external view.  (Besides efficiency,
some commercial
routers don't do NAT on a stick  - e.g. allow an internal client to NAT
to an external
address served by that router, which is NATed to an internal server.)

So we want to train users to look for DNSSEC authentication.  Unless one
makes
this work, a notebook on the road will authenticate, but the same
notebook in the office
will not.  Don't bother trying to explain this to users; they'll simply
ignore the distinction.

Which is sort of a long way of saying: if the goal is to encourage
people to adopt DNSSEC,
your guide should make Private Networks and the corresponding recipes 
first class
citizens, not a 'don't bother with this' afterthought.  Both for admins
to feel freer to
experiment, and for users to have a consistent experience.

On key rollover - this is still a major hassle.  And while the recipes
look pretty, the process
is ugly.  Key rollover really needs to be automated.  There
are too many steps that require too much indirection.  And too many 'you
could do
this or you could do that' choices - that don't really matter,
especially for getting started. 
I don't see why a person should have to change parameters, dates,
manually generate
keys, etc.  You can work on the recipes, but I don't think they'll make
the problem
approachable - or safe.  Computers are good at this stuff - and people
aren't.

It really needs something like a daily cron job with a simple config
file that does all the work.  Trigger based on dates, or a 'do it now'
emergency/manual command.  Key generation, date setting, permissions,
etc.  As for key uploads to external registrars, it can mail the new
keys/DS records to the admin with 'please upload these by <date>', and
only proceed with the roll-over when it can 'dig' them.  (The e-mail
can - via the config file - include a hyperlink to the upload page...)
For internal, it can update the trusted keys include file, rndc
reconfig, etc.  And the config file should come with reasonable default
parameters, so it 'just works' out of the box - e.g. roll the ZSKs every
6 months and the KSKs every 2 years.  (Semi-random numbers, let's not
fight about them.)
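Something as simple as this would do (a sketch; the maintenance script
named here is hypothetical - nothing like it ships with BIND today):

  # /etc/cron.d/dnssec-maintain  (hypothetical tool)
  15 3 * * *  root  /usr/local/sbin/dnssec-maintain --config /etc/dnssec-maintain.conf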

Also, re TLSA - I think it's better to match just the subject public key
(SPKI) - there are several cases where this reduces management overhead.
I know generating the hash for that with openssl isn't fun.  But
https://www.huque.com/bin/gen_tlsa is the easiest way that I've found to
generate TLSA records, and it supports SPKI selectors...  So you might
want to point to it.
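(For the record, the openssl incantation is roughly this - a sketch for
a "3 1 1" record, with cert.pem as a placeholder name:

  openssl x509 -in cert.pem -noout -pubkey |
    openssl pkey -pubin -outform DER |
    openssl dgst -sha256 -hex
)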

I'll try to have a closer look later.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 31-Dec-14 21:00, Jeremy C. Reed wrote:
 ISC is seeking feedback and review for our first public draft of the 
 BIND DNSSEC Guide.  It was written in collaboration with DeepDive 
 Networking.

 The document provides introductory information on how DNSSEC works, how 
 to configure BIND to support some common DNSSEC features, as well as 
 some basic troubleshooting tips.  It has lots of interesting content, 
 including examples of using ISC's delv tool and using a common 
 provider's web-based interface to manage DS records.

 This is a beta edition of the guide. We'd appreciate any feedback or 
 suggestions, good or bad. You may email me directly, or to our 
 bind9-bugs@ bug tracker email, or back to this list as appropriate (such 
 as needing further community discussion). Or you may use the GitHub to 
 provide feedback (or fixes).  We plan to announce the first edition of 
 this BIND DNSSEC Guide at the end of January.

 The guide also has a recipes chapter with step-by-step examples of some 
 common configurations. If you have any requests or would like to 
 contribute some content, please let us know.

 The beta of the guide is available in HTML and PDF formats at

 http://users.isc.org/~jreed/dnssec-guide/dnssec-guide.html
 http://users.isc.org/~jreed/dnssec-guide/dnssec-guide.pdf

 The docbook source for the guide is at GitHub:
 https://github.com/isc-projects/isc

Re: Re: Wrong NSEC3 for wildcard cname

2014-11-20 Thread Timothe Litt
On 19-Nov-14 19:03, Graham Clinch wrote:
 Hi Casey & List folks,
 My apologies - this was actually a bug in DNSViz.  The NSEC3 computation
 was being performed on the wrong name (the wrong origin was being
 applied).  It should be fixed now, as shown in:

 http://dnsviz.net/d/foo.cnametest.lancs.ac.uk/VGzlkA/dnssec/
 http://dnsviz.net/d/foo.cnametest.palatine.ac.uk/VGzrqg/dnssec/
 Thanks - that's certainly looking less red.  DNSViz is an exceptionally
 useful tool!

 The cnametest records were an attempt at simplifying a real issue that's
 been reported to us.

 An unsimplified version is cnametest2.lancs.ac.uk (here the RR is
 *.cnametest2 CNAME cnametest2, with an A RR for cnametest2), which (now)
 passes DNSViz, but not Verisign's DNSSEC debugger
 (http://dnssec-debugger.verisignlabs.com/foo.cnametest2.lancs.ac.uk).

 I'm more confident that this is a bug in Verisign's debugger, as the
 error is 'No DS records found for cnametest2.lancs.ac.uk in the
 cnametest2.lancs.ac zone' (where's the .uk gone, and why the interest in
 a DS where there's no zone cut?).  Do any Verisign DNSSEC debugger
 maintainers lurk on bind-users?  (The 'Contact Us' link on the page
 looks very corporate and not very useful)
Try the dnssec-deployment mailing list. 
dnssec-deploym...@dnssec-deployment.org

 delv +vtrace continues to report NSEC3 at super-domain only for
 foo.cnametest2.palatine.ac.uk records, and not for
 foo.cnametest2.lancs.ac.uk.  Is this a similar
 miscalculating-the-owner-name as for DNSViz?  I'll try to dig (haha!)
 into the delv source tomorrow.  Tested with delv 9.10.0 & 9.10.1.

 I think this might be one of those cases where I should have trusted my
 gut instinct (to blame the validating resolver), but the more I
 investigated the more red and missing lines in output...

 I'm attempting to discover more about the validating resolver, but since
 I have no access to it and the reporter is just a user of that resolver,
 odds are not stacked in our favour.

 *snipping the bits where I obviously need to read about
 NSEC3 again*
 At the start of the year, I received a piece of wisdom regarding NSEC3:
 "It is much harder to understand and debug."  At the time I was sure
 that I could outsmart it.  Maybe not so much now.

 Regards,

 Graham







Re: recursive lookups for UNSECURE names fail if dlv.isc.org is unreachable and dnssec-lookaside is 'auto'

2014-08-28 Thread Timothe Litt
On 27-Aug-14 20:35, Doug Barton wrote:
 On 8/27/14 3:03 PM, Timothe Litt wrote:
 So you really meant that validating resolvers should only consult DLV if
 their administrator knows that users are looking-up names that are in
 the DLV?  That's how I read your advice.

 You're correct.

 I don't see how that can work; hence we'll disagree.  I think the only
 viable strategy for*resolvers*  is to consult the DLV - as long as it
 exists.

 So that leads to a Catch-22, as ISC has stated that they will continue
 to provide the DLV as long as it is used. You're saying that people
 should continue to consult it as long as it exists.

 Now that the root is signed the traditional argument against continued
 indiscriminate use of the DLV is that it makes it easier for
 registries, service providers, etc. to give DNSSEC a low priority:
 "You don't need me to provide DNSSEC for you, you can use the DLV."
 Based on my experience I think there is a lot of validity to that
 argument, although I personally don't think it's persuasive on its own.

I don't want to see indiscriminate use of the DLV.  See below.
 While I appreciate the tone of reasoned discourse in the message I'm
 responding to, what you have done is provide additional details to
 support your thesis that changing providers is hard. I'm not arguing
 the contrary position, so we can agree to agree on that. What you
 haven't done is provide any evidence to refute my thesis that It's
 hard != It's impossible. I'll even go so far as to agree with you
 that in some cases it's really, really hard.

For me, it's impossible.  I've stated why.  I am a very small player - I
run a network for my extended (multi-state) family, and some free
services for a few hundred former colleagues.  I considered the options
that you suggested - they are not practical, affordable or both.  No ISP
in my geography will provide DNSSEC for reverse DNS.  I have asked (in
dnssec-deployment) for help in pressuring the ISPs to solve this
problem.  Comcast (which is not in my geography) has acknowledged the
issue, and has had it on their list for several years.  None of the
others have gone even that far. 

 What that leaves us with is your position (which I will state in an
 admittedly uncharitable way): "Some of us would like to have the
 benefits of protecting our authoritative data with DNSSEC without
 having to endure the cost and inconvenience of migrating our resources
 to providers that support it.  Therefore the entire Internet should use
 the DLV."  In contrast, my position is that people and/or organizations
 which need the protection of DNSSEC should vote with their feet.  In
 this way providers that offer DNSSEC will be rewarded, and those that
 do not will be punished.
I would vote with my feet if I could.  I can't.  The problem with your
market-driven approach is that ISPs are largely unregulated monopolies -
at least for those of us who are based in residences/small businesses.
I'm fortunate to have 2 cables pass my house - fiber and cable TV.
Only the fiber provider has enough outbound bandwidth for site-site
backup, which I get for low 3 figures/mo.  The cable-TV-based provider
says 'yes, since you have business class service (static IPs), we will
provide a fiber to your premises.  First, there's the engineering study
for 5 figures, then a construction fee, then 4 figures/month... unless
you want serious bandwidth, in which case it's more.'  So there's no
competition.  Neither cares about DNSSEC.  Neither is required to care
by regulation, RFC, ICANN/IANA or organized community pressure.

The answer is different when you're an enterprise with a large budget.
I've been there.  'Let us consolidate your voice & data networks; sure,
we'll eat the engineering costs of switching you to a few OC-48 fibers;
saves us money maintaining all those copper wires.  You want a couple
of dark fibers, and a couple of hundred PI IP addresses routed - no
problem.  Switch your phone system to VoIP too?  Oh, you got a quote
from them, including running new fiber from the highway to your plant
for free?  Let me re-work our numbers.  Can we shine your shoes?'  When
you pay several $100K/mo for bandwidth per site, it's amazing how
responsive vendors can be.  So your approach works for some, according
to the golden rule (she who has the gold makes the rules).

 Completely aside from what I believe to be the absurdity of your
 argument, the position I suggest will almost certainly result in
 market forces which encourage the deployment of DNSSEC. At bare
 minimum it has the moral value of rewarding providers who have done
 the right thing.

I don't think it's absurd to note that people in my position - and there
are a lot of us - are forced to use DLV for some cases.  The most
prominent is reverse DNS.  We *can't* switch providers.  We *can't* get
IP addresses from other sources (and get them routed) without literally
going bankrupt.

Since no one can predict what names a validating resolver

Re: recursive lookups for UNSECURE names fail if dlv.isc.org is unreachable and dnssec-lookaside is 'auto'

2014-08-27 Thread Timothe Litt
On 27-Aug-14 14:54, Doug Barton wrote:
 On 8/26/14 10:35 AM, Timothe Litt wrote:
 I think this is misleading, or at least poorly worded and subject to
 misinterpretation.

 I chose my words carefully, and I stand by them.

The OP was asking about configuring a resolver (bind's).

Where I thought there could be confusion is in conflating two issues:
1) Should validating resolvers consult the DLV?
2) Should entries be made in the DLV?

So you really meant that validating resolvers should only consult DLV if
their administrator knows that users are looking-up names that are in
the DLV?  That's how I read your advice.

I don't see how that can work; hence we'll disagree.  I think the only
viable strategy for *resolvers* is to consult the DLV - as long as it
exists.

If you meant that an administrator should only put entries in DLV for a
domain:
  a) If there is no direct trust path to the root; and
  b) the domain benefits from being DNSSEC-secured (know your user base)
then we agree.

 I did not say that the DLV has no value, and I specifically mentioned
 that there are circumstances when it is valuable and should be used.
 You clearly have a different view, which is fine.

 When it comes to gTLDs, I completely reject the notion that users
 cannot change registrars. It can be hard, no doubt, but it's a
 cost/benefit analysis. If the benefit of DNSSEC outweighs the
 difficulty of moving, then it's worth it. If not, it's not. The fact
 that it's hard doesn't mean it's impossible.

Impossible is a very high standard.  DNSSEC is only one part of the
cost/benefit analysis in choosing/sticking with a registrar.  And part
of the benefit of DNSSEC goes to the registrant's users, not all to the
registrant - this is hard to account for.  Also, it's not just the
technical/financial difficulty of switching registrars.  Some have
policies/practices that some users find unacceptable; unfortunately, for
quite some time those were the ones that offered DNSSEC.  That's
improving, but it's still an issue in some circles. 

DLV has a different set of costs (and benefits - especially when some
resolvers don't consult it). 

If the question is how can I implement DNSSEC in my zones, the
preferred path is certainly not DLV.  But if the choice is a
difficult/expensive switch of registrar or no DNSSEC, DLV is worth
considering.  

 That said, I do recognize that there are situations where a chain of
 trust to the root is not possible (such as some reverse zones). Again,
 this becomes a cost/benefit analysis. For reverse zones if DNSSEC is
 important it may be worth the effort of changing providers, or even
 getting a PI assignment. For TLDs where DNSSEC is not yet available, a
 change may be in order. If enough people vote with their feet in this
 way those providers and TLDs that lose customers may reconsider their
 offerings.

 No one said it would be easy. :)

 Doug

I agree that a chain to the root is the preferred option.

I would love to vote with my feet.  I have a few small problems with
that strategy.

There is no ISP in my geography that provides dnssec reverse delegation
for IPv4.  Not for lack of complaints/escalations from me. 

There is only one ISP here that offers fiber speeds at prices that an
individual can afford.  So it can afford not to care.

For IPv6 - well, I can't get IPv6 directly from any ISP, but my tunnel
provider does allow DNSSEC reverse delegation.  When my ISP finally
implements IPv6 (promised for over 2 years, but again, they don't care),
I'll have to choose between a direct IPv6 connection with no reverse
DNSSEC, or sticking with my tunnel.

Provider-independent IP addresses are out of reach for all but the
largest/best-financed organizations - not just getting them, but the
additional costs of having to get them routed.  And just try to get an
ISP to route a small number of IP addresses for a home/small business
(or even medium business) customer... at any price.

So yes, there are trade-offs and a cost/benefit analysis is helpful. 
And if you're a big enough customer and/or you're fortunate enough to
have a choices that enable a direct trust chain to the root, we agree
that is the preferred choice from a strictly DNSSEC perspective.

Certainly DNSSEC is not easy.  It's getting somewhat easier, though not
fast enough. 

One way to make it easier - for now - is to encourage *resolvers* to
consult DLV.  That allows validated resolution of the domains that
require DLV.  That's a good thing. 

And that's where this thread started.  I think that's the only part
that's strictly on-topic for this list...







Re: Re: clients-per-query vs max-clients-per-query

2014-06-08 Thread Timothe Litt
On 07-Jun-14 12:36, Evan Hunt wrote:
 On Sat, Jun 07, 2014 at 12:02:24PM -0400, Jorge Fábregas wrote:
 For me, this clients-per-query of 10 is an upper limit (maximum number
 of clients before it starts dropping).  So then, what's the purpose of
 max-clients-per-query?
 Over time, as it runs, named tries to self-tune the clients-per-query
 value.

 If you set clients-per-query to 10 and max-clients-per-query to 100
 (i.e., the default values), that means that the initial limit will be
 10, but if we ever actually hit the limit and drop a query, we try
 adjusting the limit up to 15, then 20, and so on, until we can keep
 up with the queries *or* until we reach 100.

 Once we get to a point where we're not spilling queries anymore, we
 start experimentally adjusting the limit back downward -- reducing it
 by 1 every 20 minutes, if I recall correctly.

 If clients-per-query is 0, that means we don't have a clients-per-query
 limit at all.  If max-clients-per-query is 0, that means there's no upper
 bound on clients-per-query and it can grow as big as it needs to.
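(For reference, the two knobs as they appear in named.conf - the values
shown are the defaults cited above:

  options {
      clients-per-query 10;
      max-clients-per-query 100;
  };
)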


This doesn't quite make sense, assuming I understand it correctly from
your + Mark's descriptions.

Consider a continuous stream of queries to a slow server.  For the sake
of exposition, assume the incremental adjustment is 1 rather than 5.

Named drops the 11th query, but increases the limit.

So the 12th query will be accepted.  Why is the 12th query more valuable
than the 11th?

Next, the limit is 11, but the 13th arrives - is dropped & the limit
increased.

  So the 14th is accepted.

And this continues, dropping every other (actually every 5i-th) query
until there's a response or the max is reached.

Meantime, named expects the clients whose requests were dropped to
retry. (Typically 3 sec, up to 5 times.)
If there's a delay at the next stage of resolution, a client has the
same chance of being unlucky again.

This algorithm seems to attempt to deal with two distinct cases:
  o drop abusive bursts
  o limit resource consumption by unresponsive servers/servers of
varying responsiveness

For the former, a global threshold makes some sense - an abusive burst
of queries can be for multiple zones - or focused on one.
But isn't this what response rate limiting is for?  Given RRL, does this
still make sense?

For the latter, separating the measurement/threshold tuning from the
decision to drop would seem to produce more sensible behavior than
dropping every 5i-th packet.  And for it to make any sense at all, it
must be adjusted per server, not globally...

Or I'm missing something, in which case the documentation needs some
more/different words :-(

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: AIX and 9.9.5 compiling

2014-05-09 Thread Timothe Litt
On 09-May-14 14:53, Alan Clegg wrote:
 I do, but I don't have early access, so other than a brief "yep, it
 works", I can't get it into the README.  8-)
I'm glad that you make that effort. 

 I was responding to Jeremy's solicitation for suggestions on what
should be done more officially/thoroughly.   (Including routine builds
during development.)

Including ARM - native and cross-compiled - would support parts of the
community that don't get much attention (nor make much noise.)   
Embedded and cross-architecture compilers.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


On 09-May-14 14:53, Alan Clegg wrote:
 On 5/9/14, 2:06 PM, Timothe Litt wrote:
 If you have a suggestion for an important or popular OS version I should 
 add to our build farm, please let me know why.
 I have one suggestion:  get a Raspberry PI and build/run on it (the
 usual OS is Debian - 'Raspbian', but people run a variety of others.)
  I do, but I don't have early access, so other than a brief "yep, it
  works", I can't get it into the README.  8-)


 AlanC






Re: Re: changing NSEC3 salt

2014-02-06 Thread Timothe Litt

On 06-Feb-14 05:56, Cathy Almond wrote:

On 05/02/2014 18:54, David Newman wrote:

The Michael W. Lucas DNSSEC book recommends changing NSEC3 salt every
time a zone's ZSK changes.

Is this just a matter of a new 'rndc signing' command, or is some action
needed to remove the old salt?

thanks

dn

rndc signing -nsec3param ...

I would expect the old NSEC3 chain and old NSEC3PARAM record to be
removed, once the new chain is in place.

(Similarly, the new NSEC3PARAM record will not appear in the zone until
the new NSEC3 chain has been completely generated).

Cathy

This seems silly.  Why should a person have to select a salt at all?  
It's just a random number, and people are really bad at picking random 
numbers.  Seems like a miss in 'DNSSEC for humans' :-)


There should be a mechanism to tell named to pick a random number and 
use it for the salt.  (I suggest '*' - '-' already means 'none'.)  named 
already has to know how to get random numbers, so this should not be 
difficult.  It should work for records supplied in UPDATE transactions 
as well as rndc signing.
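In the meantime, one can at least avoid a human-chosen salt by generating
it on the command line (a sketch; the zone name is a placeholder, and 8
random bytes meet the RFC 5155 64-bit recommendation):

  rndc signing -nsec3param 1 0 10 $(openssl rand -hex 8) example.com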


A bit more work to have it function when loaded from a zone file, though 
that doesn't seem unreasonable.  (E.g. if read from a zone file, pick a 
salt, treat the record as if loaded with that value, and do all the 
requisite (re-)signing.)


I'm copying bind9-bugs so this doesn't get lost.  Please don't copy that 
list if you comment on this. (Careful with that 'reply all'!)


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: Re: changing NSEC3 salt

2014-02-06 Thread Timothe Litt

On 06-Feb-14 09:14, Klaus Darilion wrote:



On 06.02.2014 14:58, Cathy Almond wrote:

On 06/02/2014 12:58, Timothe Litt wrote:

On 06-Feb-14 05:56, Cathy Almond wrote:

On 05/02/2014 18:54, David Newman wrote:

The Michael W. Lucas DNSSEC book recommends changing NSEC3 salt every
time a zone's ZSK changes.

Is this just a matter of a new 'rndc signing' command, or is some 
action

needed to remove the old salt?

thanks

dn

rndc signing -nsec3param ...

I would expect the old NSEC3 chain and old NSEC3PARAM record to be
removed, once the new chain is in place.

(Similarly, the new NSEC3PARAM record will not appear in the zone 
until

the new NSEC3 chain has been completely generated).

Cathy


This seems silly.  Why should a person have to select a salt at all?
It's just a random number, and people are really bad at picking random
numbers.  Seems like a miss in 'DNSSEC for humans' :-)

There should be a mechanism to tell named to pick a random number and
use it for the salt.  (I suggest '*' - '-' already means 'none'.)  
named

already has to know how to get random numbers, so this should not be
difficult.  It should work for records supplied in UPDATE transactions
as well as rndc signing.

A bit more work to have it function when loaded from a zone file, 
though

that doesn't seem unreasonable.  (E.g. if read from a zone file, pick a
salt, treat the record as if loaded with that value, and do all the
requisite (re-)signing.)

I'm copying bind9-bugs so this doesn't get lost.  Please don't copy 
that

list if you comment on this. (Careful with that 'reply all'!)

Timothe Litt
ACM Distinguished Engineer


Sounds like a good idea - thanks.


Indeed. It would also solve the theoretical problem of NSEC3 hash 
collisions (see my email from 3. Feb 2014)


regards
Klaus


Not quite.  It would enable a solution, but it doesn't solve it unless 
named also checks for a collision, picking a new salt and re-trying in 
that case.  That would be a good idea (though creating a test case would 
be a good student challenge).  [If it isn't tested, it doesn't work...]


Note also the RFC 5155 recommendation:

The salt SHOULD be at least 64 bits long and unpredictable, so that
an attacker cannot anticipate the value of the salt and compute the
next set of dictionaries before the zone is published.
In case it wasn't obvious, I should have noted that the length would be 
a config file entry.



Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Re: Slowing down bind answers ?

2014-01-05 Thread Timothe Litt

  On 04-Jan-14 14:58, Nicolas C. wrote:

On 03/01/2014 18:00, wbr...@e1b.org wrote:

From: Mark Andrews ma...@isc.org

 After that specify a final date for them to fix their machines by,
 after which you will send NXDOMAIN responses.  Sometimes sending a
 poisoned response is the only way to get people's attention.

 zone "." {
     type master;
     file "empty";
 };

 empty:
 @ 0 IN SOA . stop.using.this.nameserver 0 0 0 0 0
 @ 0 IN NS .
 @ 0 IN A 127.0.0.1


Or really mess with them and answer all A queries with 199.181.132.249


It's not a bad idea. I could wildcard all requests to an internal HTTP 
server saying that the DNS configuration of the client is deprecated.



Which is great until someone tries to send e-mail, ftp a file, lookup a 
SIP server - or any other service.  Do any clients rely on SIP for 
emergency telephone service?  (VoIP phones, softphones, building alarms 
among others)


DNS redirection is evil - and tricky; the world is not just DNS and HTTP 
from a user's desktop/notebook.


To get people's attention, NXDOMAIN to www.* queries is often reasonably 
safe.  Embedded systems are another story.  (Elevators, HVAC 
controllers, security systems, routers, ...)


Think about the all consequences in your environment.  Do you want to be 
responsible if someone can't make an emergency call?  Someone who has 
been out on leave?  Someone stuck in an elevator?


It may be better to simply alias (if necessary, route) the old IP 
address(es) to the new server.  That way you can manage the 
notifications and consequences on a per-service basis.


You can also turn on query logging (which helps slow down the old 
server) - and use the logs to backtrack to the machines that need to be 
reconfigured.  Scripts can send an e-mail daily with a warning and 
instructions on how to reconfigure.  If you have the ownership data, 
scripts can escalate to a manager/sponsor if ignored. Hopefully this 
will get you down to a manageable list of miscreants that require manual 
follow-up.


Redirecting to disney.com is a fine humorous response - but I'd be very 
careful about taking it - or similar - action seriously. Running DNS is 
a serious responsibility.


Whatever transition plan you adopt needs to fit your circumstances and 
manage all the risks.  A 'simple' plan might work for you - or it might 
not.


The risks of draconian operations to encourage migration are a lot 
larger than they were in years past.


--
Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.







Re: Unable to transfer IPv4 reverse zone

2013-12-19 Thread Timothe Litt
I doubt you'll get help without providing configuration data for master 
and slaves and exact log and error messages.


But I'll take one blind guess.  DNSSEC validation enabled and your 
in-addr.arpa zones are not delegated and not in DLV?


In my configuration IPv4 Reverse zones (which are DNSSEC signed) 
transfer just fine.


Not helpful without my configuration?  That's the point.  Post yours 
with the log messages showing the transfer attempts & failures and maybe
someone (else) will help.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.


On 19-Dec-13 13:11, Daniel Lintott wrote:

Hi,

I have two BIND DNS servers both running 9.9.4-P1.

I have configured them as master and slave, but have a strange issue.
The IPv4 reverse zone, fails to transfer to the slave.

I have tested the AXFR from the command line and this also fails with
SERVFAIL.

Out of 5 zones (3 forward, 1 IPv6 reverse, 1 IPv4 Reverse) the IPv4
reverse zone is the only one which fails.

The configuration on the master is the same for all zones.

Any ideas?

Regards

Daniel Lintott







Re: bind-users Digest, Vol 1629, Issue 1

2013-09-19 Thread Timothe Litt

At the risk of continuing an off-topic thread:


I have toyed with trying to find a cheap Stratum-1 server for home.

I've had success with a Raspberry Pi & GPS.  You can build a very
respectable stratum-1 server for less than USD $200, if you can handle a
soldering iron and build a Linux kernel.

The RPi is nominally $35 - but add a case, power supply, SD card etc.
and it's closer to $100, depending on what you have on hand.  You need a
monitor, KB & mouse to get started (configured), but not thereafter (SSH
will do).  A GPS receiver will cost about $70.  The hockey-puck version
(e.g. Garmin GPS 18x LVC) is a reasonable choice and can be placed
outdoors, though it needs a level shifter; or the ceramic MCMs (e.g.
Adafruit 746) at $40 - but you'll probably spend another $40 on antenna &
battery backup.  It will run quite happily on an SD card; an external
disk isn't required.  Might want a small UPS.


If you build NTP and a kernel with PPS support, you end up with a pretty 
stable server.  Doesn't use much power (~5W).
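The ntp.conf fragment involved looks something like this (a sketch using
ntpd's generic NMEA refclock, driver 20; the mode and fudge values are
illustrative and depend on the receiver and wiring):

  server 127.127.20.0 mode 16 minpoll 4 prefer   # NMEA GPS at 9600 baud
  fudge  127.127.20.0 flag1 1 time2 0.400 refid GPS   # flag1 enables PPS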


For an interface to the 18x LVC, see
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=41&t=1970&start=194


If you want multiple servers, the second one usually costs less because 
you kept all the bootstrapping supplies.


Further discussion should probably find another list...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 19-Sep-13 08:00, bind-users-requ...@lists.isc.org wrote:

I have toyed with trying to find a cheap Stratum-1 server for home.






Re: BIND 9.9.3b1 is now available

2013-01-25 Thread Timothe Litt

On 25-Jan-13 17:32, Michael McNally wrote:

  BIND 9.9.3b1 is the first beta release of BIND 9.9.3.

Makes available a new XML schema (version 3.0) for the statistics
channel that adds query type statistics at the zone level,
flattens the XML tree and uses compressed format to optimize
parsing. It also includes new XSL that permits charting via the
Google Charts API on browsers that support javascript in XSL.
To enable, build BIND with configure --enable-newstats. [RT
#30023]

(c) 2001-2013 Internet Systems Consortium


2 bits of feedback on the beta announcement:

I have software that reads the stats channel.

Please, if you have a new schema, put it on another URI so that software 
that wants the old schema gets it, and software that wants the new 
explicitly requests it.  E.g.  '/statistics/v3'


Flag day changes are not good...

I also have a patch that provides just the config data on another URI
(/config) - which I wish you'd accept in some form - it's very useful
for management software that doesn't want to parse all the stats (which
in perl takes forever), but does want the list of zones served.  I sent
it to you folks quite some time ago (and could resend).
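(For anyone following along: the channel such tooling scrapes is enabled
with a stanza like this - address and port are illustrative - and fetched
with any HTTP client:

  statistics-channels {
      inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
  };

  $ curl http://127.0.0.1:8053/ > stats.xml
)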


Since you're obviously in the code, would you re-consider this? It's 
pretty straightforward, it simply selects a subset of the data in the 
(then-) existing flow.


Thanks on both counts.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: BIND 9.9.3b1 is now available

2013-01-25 Thread Timothe Litt

Maybe it will chug along for a while if you don't configure with newstats.

But suppose someday you want to update to the newer, better software.

Are you going to run a separate server to test against?

If you manage more than one server, when the monitoring software 
changes, are all the servers going to go down and reboot with the new 
config synchronously?  What if you have lots of them (e.g. 10s or 
100s)?  In different admin domains?


As you say, this is an API.

Flag days are never fun, and this is avoidable.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 25-Jan-13 18:37, Mike Hoskins (michoski) wrote:

-Original Message-

From: Timothe Litt l...@acm.org
Date: Friday, January 25, 2013 6:13 PM
To: bind-users@lists.isc.org bind-users@lists.isc.org
Subject: Re: BIND 9.9.3b1 is now available


On 25-Jan-13 17:32, Michael McNally wrote:

   BIND 9.9.3b1 is the first beta release of BIND 9.9.3.

 Makes available a new XML schema (version 3.0) for the statistics
 channel that adds query type statistics at the zone level,
 flattens the XML tree and uses compressed format to optimize
 parsing. It also includes new XSL that permits charting via the
 Google Charts API on browsers that support javascript in XSL.
 To enable, build BIND with configure --enable-newstats. [RT
 #30023]

(c) 2001-2013 Internet Systems Consortium


2 bits of feedback on the beta announcement:

I have software that reads the stats channel.

 Me too.  Took a while to get right; I'd hate to see it break.  :-(


Please, if you have a new schema, put it on another URI so that software
that wants the old schema gets it, and software that wants the new
explicitly requests it.  E.g.  '/statistics/v3'

Some sort of API-like deprecation would at least be cool...

But am I reading right?  If I don't build with --enable-newstats, all my
monitoring and trending scripts will continue to chug happily along with
the old view?







Logging

2013-01-08 Thread Timothe Litt

1. Should ISC change the default logging for lame servers to disabled?


Well, since you asked:  the lame server logging goes back to when the 
internet was a small, collegial place and one wrote a quick note to a 
friend to fix these issues.  And people who accidentally had a lame 
server were embarrassed.  Those days, sadly, are gone.


The current logging only tells the victim why a query failed; it's 
pretty much useless unless troubleshooting a persistent, impactful 
problem.  And at that point, it's easy enough to turn on for the 
duration. So I'd vote for disabled - and the ability to enable for 
resolution of queries to specific domains/nameservers via rndc for 
troubleshooting.


What I think would be more useful is if named actually reported the
issues to where they'd do some good.  Perhaps a DNS extension "I got an
invalid message from you" - so it shows up in the log of the server (and
administrator) with the problem.  (I'd worry about denial of service,
though if the server is in fact lame, it's not providing service - at
least to that zone.  Abuse of the reporting mechanism is the main risk,
and avoiding it would take some careful engineering.)


Or, perhaps logged to a 'troubled' list of nameservers like the email 
RBL blacklists.  People don't like being on 'bad citizen' lists, so if 
that list sent the whois registered technical contact for the domain an 
e-mail once a week in addition to making the list public... maybe some 
shame would work.   But it's probably a dream. And there'd be a lot of 
fingers pointed at client firewalls...


Since choice 2 is out-of-band, it would be a lot easier to put in place 
- if someone (ISC?) volunteered to host the list...


In general, logging is most useful when the data goes to someone who can 
do something about it.  Logging at the victim is useful for isolating a 
problem - but if no-one is actually troubleshooting (and won't), it's 
largely wasted.


DNSSEC is another area where issues need to be forwarded to the source, 
not the victim.


That's my 3 cents.

--
Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Logging

2013-01-08 Thread Timothe Litt

> You might as well solve world poverty and cure cancer while you're at it. :-)

Maybe tomorrow.

> How do you notify someone -- good luck getting valid contact data for
> the domain holder
As I suggested - if we put data into a database/trouble list, shame 
should work.  Or their customers will find it and complain.   And in 
that scenario, the database has to be accessible - not the faulty 
server/domain.


 If we log back to their servers (the IP address is known, since we traced
the delegation and got a lame reply), there's some chance it will be seen.

> that their DNS is bust if you can't get to their web site or send them email?
I didn't say it was perfect.  But these days, the admin e-mail is as 
often as not in another domain - I recommend this.  And in theory there 
is supposed to be phone/fax/even snail-mail contact info in Whois - 
though that is controversial these days.  And, frequently the issue is 
that one nameserver is lame, but another is not.  So the admin thinks 
her domain is up, and clients just get slow/broken responses a fraction 
of the time.  So e-mail may go thru.


In any case, *attempting* to record the data where it might be acted 
upon seems like it would be a step up from the current situation.  
Today, the lame server logging delivers data to the source about 0% of 
the time.   If my suggestion increases that to any non-zero number, it 
would be an improvement.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 08-Jan-13 08:44, Jim Reid wrote:

On 8 Jan 2013, at 13:19, Timothe Litt l...@acm.org wrote:


What I think would be more useful is if named actually reported the issues to 
where they'd do some good.

You might as well solve world poverty and cure cancer while you're at it. :-)

I think you may well have not thought this out. How do you notify someone -- 
good luck getting valid contact data for the domain holder -- that their DNS is 
bust if you can't get to their web site or send them email?

FYI, I had to contact all ICANN-accredited registrars last year. Around 15% of 
the email addresses they'd supplied to ICANN when they got accredited didn't 
work. A few of those registrars had no working email servers or DNS server at 
all. If that's what happens with people who are supposedly DNS-clueful, imagine 
what it must be like for the general public.








Root hints updates

2012-09-06 Thread Timothe Litt
In doing some system administration, I realized that I have a tool that
might be generally useful - ISC is welcome to add it to contribs.
Hopefully the attachment will make it through the mailing list server.

This is a script to automagically update the root hints file.  There are a
bunch of these floating around the internet; most don't work; those that do
don't work well.  I wrote this several years ago; it's worked for me.

It will FTP the new file - or, if you value speed over comments, will
fabricate a copy from the existing root servers - yes, it will deal with
the case that a root server is renumbered or returns partial data.  It
acts as a SYS V init script so that it runs on every boot; it's smart
enough to requeue itself hourly if it fails to get data.  It verifies FTP
transfers.

It also runs as a cron job monthly to catch any updates.  It will log
actions to syslog; will also send mail if you like.  It preserves file
ownership and the timestamp of last download.  It knows to run rndc
reconfig when it gets a new file.  (And not when nothing has changed.)

I did some cleanup for this release, but the core logic has run for several
years on Fedora and random embedded Linuxes.  For me, it's install & forget.

README:
Install it (or create a link to it) in /etc/init.d/ as update_root.  E.g. if
it's 
in /usr/local/sbin, then  
   ln -sf ../../../usr/local/sbin/update_root /etc/init.d/ 
Then execute
  /etc/init.d/update_root setup 
and 
  /etc/init.d/update_root  

Create a /etc/sysconfig/update_root file if you want a non-default
configuration.
The most useful configuration variables are:

# Undefined uses FTP (default)
#USEDNS=yes
# Root file name
HINT=ROOT.HINT
# named control address (undef for none)
NAMEDRNDC=127.0.0.1
# Root file owner (used when there's no existing file; normally copied from old)
DEFAULTOWNER=named:named
# Define for e-mail recipient (default is undef = none)
#TO=hostmas...@example.com
# Cron directories
CRONMONTHLY=/etc/cron.monthly
CRONHOURLY=/etc/cron.hourly
# No IPV6?  This may speed FTP connections.
WGET="$WGET -4"

Other parameters are in the first ~80 lines of the script.
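
If you just want the one-shot equivalent by hand, the script is roughly
automating this - a sketch, assuming the standard InterNIC location and a
hints file at /var/named/ROOT.HINT:

    wget -O /var/named/ROOT.HINT ftp://ftp.internic.net/domain/named.root
    rndc reconfig    # pick up the new hints

plus the ownership, verification and logging details described above.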

The script commands are:
  start - check for update (default if no command)
  setup - run chkconfig and link to monthly queue (don't if you use crontab)
  status - list current file

One caution: Do not copy the script using copy & paste; there are places
where literal tabs and spaces are important.  [Some environments have very
limited regexps.]

It's freely redistributable, with the usual caveat that there is no
warranty or promise of support & that you use it at your own risk.

Enjoy.


Timothe Litt
ACM Distinguished Engineer
-
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

 


update_root
Description: Binary data

RE: Root hints updates

2012-09-06 Thread Timothe Litt
 Since the first thing BIND does at startup is to check the root NS set,
and since DNSSEC guarantees that it is genuine, is there still a use for
this tool?

Unless bind updates the hint file as a result of these checks, yes.

It's not a question of authenticity; named has to start somewhere to find
the root NS; this is the bootstrap cache. 

It wouldn't be a bad thing if bind did the update itself (sort of like
DNSSEC's RFC 5011 for keys).  But so far as I know, it doesn't.

Since I run the tool, I can't say that I've ever seen a message from BIND
complaining about the root hints being out of date.  I know there was a root
hints update last June...  Does it sync to what it finds, or just complain?

Until someone authoritative tells me that BIND manages the hints file on its
own, I'm taking the conservative route and letting my tool run.
BTW, I do have systems that come on-line every 5 years or so.  Automation is
good :-)

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: Stephane Bortzmeyer [mailto:bortzme...@nic.fr] 
Sent: Thursday, September 06, 2012 09:08
To: Timothe Litt
Cc: bind-users@lists.isc.org
Subject: Re: Root hints updates

On Thu, Sep 06, 2012 at 08:06:45AM -0400,  Timothe Litt l...@acm.org wrote
a message of 466 lines which said:

 This is a script to automagically update the root hints file. 

Since the first thing BIND does at startup is to check the root NS set, and
since DNSSEC guarantees that it is genuine, is there still a use for this
tool?



RE: Re: .TLD minimum number of nameservers rule

2011-12-13 Thread Timothe Litt
Actually, there's a simpler solution to meeting the rule for 2 NS.

Use any of the secondary nameserver services.  They come in a range of
prices/service levels.  (Price and delivered service don't always
correlate.)  Generally they act as slaves off your master; some are
BIND-based and use IXFR; others poll.  Besides the required redundancy,
they will meet the requirement for geographic separation.  There is at
least one free service that supports DNSSEC (though that's rare.)

Google "secondary DNS" or "backup DNS" for a starting point.



-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: nudge...@fastmail.fm [mailto:nudge...@fastmail.fm] 
Sent: Tuesday, December 13, 2011 03:54
To: Fajar A. Nugraha
Cc: bind-us...@isc.org
Subject: Re: Re: .TLD minimum number of nameservers rule

 
 What IS the problem, exactly? You're describing two things that 
 don't seem to be related: number of NS for a zone, and PTR/DNAME 
 records.

My apologies if, in an attempt to be succinct, I failed to be clear.

 
 If you don't own an IP address, then usually you don't need to 
 bother about PTR records at all. If you need to change PTR record for 
 an IP address that you use (e.g. VPS, colo, home connection, etc) you 
 usually need to ask your ISP to update/change it.

The company in question has a single public IP address connecting its
internal LAN with the internet.  A classic NAT configuration.

 DNAME creates an alias for one or more subdomains of a domain. Chances 
 are you won't need it for common uses.

I'm not so sure I'd make that assumption.

  For instance, would this be a problem when implementing a wide area 
  bonjour subdomain using my own local dns server for clients that are 
  mobile (internal/external) ?
 
 Bonjour should work even without a DNS server.

Reminds me of Cool Hand Luke: "what we have here is a failure to
communicate".

 You could always create your own DNS server if you REALLY need those 
 record types :) The cheapest VPS is about $15/year, which should be 
 more than enough for a secondary DNS server.

I'm running BIND 9.6 and dnsextd (LLQ and TSIG handling). I have split DNS
views based on source IP address and possession of a TSIG key:
internal-trusted/external-trusted/internal-visitor/external-visitor.
The DNS server and clients are all Mac OS X 10.6+ so I'm taking advantage
of mDNSResponder features such as looking in the system keychain for the
TSIG keys. I have a WAB subdomain for dns-sd, etc. I've had to replace
dnsextd with an older version, since the current Mac OS X versions of it
are dead.

I wondered if the limited access to DNS records at the top level of my
domain would be a problem.  My first thought was to take over the DNS for
this domain, but RFC 882 saying a domain must have at least 2 nameservers
rules that out.  Frankly, I probably don't understand enough about how
glue records function...

Thanks for your help




RE: Exercising RFC 5011 rollovers

2011-11-26 Thread Timothe Litt
There are tools for this.  E.g. libfaketime
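
E.g., a sketch - the library path varies by distro, and named may need
other options in your environment:

    # run named with the clock shifted 35 days ahead, so RFC 5011
    # hold-down timers can be exercised without waiting
    FAKETIME="+35d" LD_PRELOAD=/usr/lib/faketime/libfaketime.so.1 \
        named -g -c /etc/named.conf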



-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: Phil Mayers [mailto:p.may...@imperial.ac.uk] 
Sent: Saturday, November 26, 2011 04:20
To: bind-users@lists.isc.org
Subject: Re: Exercising RFC 5011 rollovers

On 11/25/2011 08:49 PM, Evan Hunt wrote:

 Timing considerations make it difficult to have an automatic test for 
 this in the standard BIND test suite; the RFC requires certain things 
 to take a very long time.  Unless you modify named to speed

Feature suggestion: some sort of synthetic clock option to named for use in
the test suite (--test-unixtime-offset) or something?

Obviously non-trivial.




RE: DNAME?

2011-07-02 Thread Timothe Litt
Agree that getting DNAME into the TLDs and served is a political issue
rather than a technical one.  However, that isn't much consolation;
political issues are less tractable than technical ones :-(  As a very small
player, dealing with retail registrars like godaddy, mydomain, and the like,
my leverage is zero.  (I was pretty happy with the days when an e-mail, or
just a phone call to Jon or Joyce would get a change made.  But those days
are gone.)

I found a draft of BNAME - while it seems workable, it's another solution
that would hit name servers, resolvers and take time to deploy -- and
require registrants to get their registrars to put a special record into the
TLD.

I think my suggestion to do the work for aliasing in the name server is more
practical than BNAME.  The TLD would delegate in the familiar way to a
server; the server just has to fetch/translate the data from another zone
rather than a file.  And no impact on resolvers.  No new record types, and
the politics are limited to the domain getting the benefit.

Everyone would see the domain normally (as today); only the authoritative
server for the aliased domain would know that there's anything special about
it, and that server would do the extra work.  Since it's responsible for the
domain getting the benefit, it seems fair for it to do the work.  And since
named as a forwarder/resolver would have to know how to track down BNAMEs
under that proposal, asking named to track down aliases on the authoritative
side seems like no more (and probably less) work.

So, if I wanted to solve the problem that BNAME is trying to address (and my
orignal problem), I'd pursue a server solution rather than BNAME.

However, in the meantime it looks like I'll need to come up with some
solution on the back end to keep the aliased zones in sync.  Maybe just
AXFR the real zones periodically, run sed over them and write / sign
traditional zone files.  Yuck; another kludge that will not-quite-properly
paper over a problem.
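
That kludge might look something like this - a sketch only; the TSIG key,
paths and key directory are placeholders, and a naive sed rewrite will
mangle records that merely mention the old name:

    dig -y hmac-md5:TSIG_GLOBAL_KEY:BASE64KEYDATA== \
        @ns1.example.net example.net AXFR \
      | sed -e 's/example\.net\./example.us./g' > EXAMPLE_US.DB
    dnssec-signzone -S -K /path/to/us-keys -o example.us EXAMPLE_US.DB
    rndc reload example.us

Cron it, and hope nothing changes between transfers.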

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: Mark Andrews [mailto:ma...@isc.org] 
Sent: Friday, July 01, 2011 21:58
To: Timothe Litt
Cc: 'Jon F.'; bind-us...@isc.org
Subject: Re: DNAME?


When DNAME was being developed the working group had to make a decision
about whether DNAME should redirect the node it was at or just the names
below it.  The decision was made to do the latter because it didn't require
TLD operators to know about DNAME at the cost of a little more work to keep
the apex records in sync.

In hindsight we should have done both as there are use cases for both.
Getting other types added to TLDs isn't a technical issue, its a political
issue.  There are TLDs that accept MX, A,  and I believe DNAME today
instead of NS records at what would be the delegation point.  It's just as
easy to serve these records as it is to serve a delegation.

Mark

In message 2fa4ed65dac044849aa3f57fbcfe2...@sb.litts.net, Timothe Litt
writes:
 
 Yes, the example.us zone loads.  As I mentioned, no errors in 
 named.log, and the statistics webserver (in named) shows example.us as 
 active, albeit with '-' for the serial number instead of the number in the
zone file.
  
 How did you get a DNAME into .com? 
 
 I did make example.us a zone - it is one, isn't it?  If the DNAME has 
 to go in .us, I don't see making this scheme work.  As a practical 
 matter, registrars will put NS records into the TLDs, and some (with 
 encouragement) are starting to accept DNSSEC records for the TLDs).  
 But I've yet to see one that provides a means for a registrant to have a
DNAME inserted...
 Unless I'm missing something.  Did you actually manage to do this, or 
 is your setup working in third+-level domains?
  
 I was hoping/expecting that since my server is the authoritative 
 server for example.us, the DNAME could go in the example.us zone.  I 
 expected that when, as the authoritative server, it was asked for 
 foo.example.us, it would respond with foo.example.net.  But the RFC 
 wasn't clear, which is why I asked.
  
 thanks.
 
 -
 This communication may not represent my employer's views, if any, on 
 the matters discussed.
   
 
  
 
   _
 
 From: Jon F. [mailto:pikel@gmail.com]
 Sent: Thursday, June 30, 2011 16:11
 To: Timothe Litt
 Cc: bind-users@lists.isc.org
 Subject: Re: DNAME?
 
 
 I have a similar set up to that and it works. Have you checked the 
 logs to make sure the zone properly loaded? I'm

RE: DNAME?

2011-07-01 Thread Timothe Litt
Yes, all my zones are (or will be) signed.  And all are dynamic update;
tricks like pointing all zones to the same zone files don't work.

So the bottom line is that either way I would somehow need to get my
registrar(s) to put special records  (DNAME or BNAME if it escapes the
politics) into the TLDs (.US, .INFO)?

Considering how hard it is just to get DNSSEC records installed, that
doesn't sound like a fun time.  I haven't seen a DNAME option in the GUI for
any of the registrars that I use.  And if I got a record in, I wonder (a) if
it would stay in and (b) if I could get it changed (or removed) when
circumstances change.  Does anyone have a real experience with this?
Especially someone who isn't a megacorp :-)?

Does the BNAME proposal address the MX/CNAME issues with DNAME?

Either way, having to put a record in the parent zone is no big deal -
except when registrars / TLDs are involved.

It seems to me that there's a more manageable approach than that described
for BNAME - that is solely under the control of named.

Given that my BIND servers are authoritative for the real (.net) and aliased
(.us, .info) zones (and, for the external views, properly delegated from the
TLDs), wouldn't it be more practical to have a named solution?  E.g. a
mechanism to tell named to respond authoritatively to all queries to aliased
zones (in my current case, .US, .INFO) as though it was resolving DNAME in
the parent zone?  Put another way: the aliased server is authoritative for
the aliased zone.  Where it gets the zone data from is a private matter.
Normally, it's a zone file.  But for an alias, it could simply query some
other real zone (it might even also be authoritative for that), substitute
the alias name for the real zone names, and serve the data as
authoritative.  (Signing as necessary.)

That would avoid doing anything in the TLD (parent in the general case), and
it would also make it easy to do more subtle things.  For example, put some
records in the aliased zone, and only go to the real zone if no record
matches a query.  Pretty much required for DNSSEC keys, so might as well
look for any record here first. That would seem very flexible.  And, since
it wouldn't need a new record type, no IETF politics!

It might look like:

    zone example.us {
        type master;
        alias-of example.net;  # Zone to mirror: reflect queries for
                               # example.us to example.net; verify any
                               # signatures, then rewrite example.net
                               # names to example.us in the reply,
                               # re-sign, and respond as authoritative.
        file example.us.exceptions.db;  # Required to contain (minimally)
                                        # the .us DNSSEC keys.  Optionally,
                                        # consulted before the alias zone
                                        # when resolving.
    };
Of course, the synthesized data can be cached per the usual rules; think of
the alias-of zone as serving misses from the zone file.

I know I'm not the only user with this problem - many corporations get
theirname.{everything possible} and then try to make them look like
theirname.com.  Usually with HTTP redirects - but that doesn't address all
the other services.

But I conclude that as of today, this is wishful thinking - there is no
practical approach.  Sigh.

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: Mark Andrews [mailto:ma...@isc.org] 
Sent: Thursday, June 30, 2011 20:58
To: Jon F.
Cc: Timothe Litt; bind-us...@isc.org
Subject: Re: DNAME?


In message BANLkTim=maau1y+xh7yzibmrznvx30z...@mail.gmail.com, Jon F.
write
s:
 I have a similar set up to that and it works. Have you checked the 
 logs to make sure the zone properly loaded? I'm assuming the zone data 
 you posted below is from the example.us zone but your first question 
 makes it sound like you put it in a separate zone. That would explain 
 the SERVFAIL if the zone data never loaded but the server was 
 authoritative. It does need to be in the .us.
 
 
 ;; ANSWER SECTION:
 example.com.   60  IN  DNAME   example.net.
 test.example.com. 60  IN  CNAME   test.example.net.
 test.example.net.   60  IN  A   127.0.0.1
 
 
 
 And that's with zone data like this:
 example.com.  IN NS ns1.example.net.
 example.com.   IN NS ns2.example.net.
 example.com.  IN A 10.0.0.1
 example.com. IN DNAME example.net.
 
 
 Truthfully I haven't looked at DNAMEs in a long time so I'm unsure 
 how to do it fully for a domain without adding an A record as well. 
 But what you're doing works; it's just not very pretty. Someone may have a
 better way.

There is an outstanding proposals for BNAME.  This would be added to the
parent zone instead of NS records and would synthesis CNAMEs records for the
domain and its children.

This has got bogged down in IETF politics over how to fix idn rather that be
allowed to stand on its own merits.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE

RE: DNAME?

2011-07-01 Thread Timothe Litt
Yes, the example.us zone loads.  As I mentioned, no errors in named.log, and
the statistics webserver (in named) shows example.us as active, albeit with
'-' for the serial number instead of the number in the zone file.
 
How did you get a DNAME into .com? 

I did make example.us a zone - it is one, isn't it?  If the DNAME has to go
in .us, I don't see making this scheme work.  As a practical matter,
registrars will put NS records into the TLDs, and some (with encouragement)
are starting to accept DNSSEC records for the TLDs.  But I've yet to see
one that provides a means for a registrant to have a DNAME inserted...
Unless I'm missing something.  Did you actually manage to do this, or is
your setup working in third+-level domains?
 
I was hoping/expecting that since my server is the authoritative server for
example.us, the DNAME could go in the example.us zone.  I expected that
when, as the authoritative server, it was asked for foo.example.us, it would
respond with foo.example.net.  But the RFC wasn't clear, which is why I
asked.
 
thanks.

-
This communication may not represent my employer's views,
if any, on the matters discussed.
  

 

  _  

From: Jon F. [mailto:pikel@gmail.com] 
Sent: Thursday, June 30, 2011 16:11
To: Timothe Litt
Cc: bind-users@lists.isc.org
Subject: Re: DNAME?


I have a similar set up to that and it works. Have you checked the logs to
make sure the zone properly loaded? I'm assuming the zone data you posted
below is from the example.us zone but your first question makes it sound
like you put it in a separate zone. That would explain the SERVFAIL if the
zone data never loaded but the server was authoritative. It does need to be
in the .us.


;; ANSWER SECTION:
example.com.   60  IN  DNAME   example.net.
test.example.com. 60  IN  CNAME   test.example.net.
test.example.net.   60  IN  A   127.0.0.1



And that's with zone data like this:
example.com.  IN NS ns1.example.net.
example.com.   IN NS ns2.example.net.
example.com.  IN A 10.0.0.1
example.com. IN DNAME example.net.


Truthfully I haven't looked at DNAMEs in a long time so I'm unsure how to
do it fully for a domain without adding an A record as well. But what
you're doing works; it's just not very pretty. Someone may have a better way.




On Thu, Jun 30, 2011 at 2:01 PM, Timothe Litt l...@acm.org wrote:


I have domain example.net in production, and have recently acquired
example.us and example.info.

For whatever reason, I want example.us to simply mirror example.net, which
is dynamically updated (and DNSSEC).  And I want example.us to be zero
maintenance. (Well, OK I know I need separate DNSSEC keys, but I don't want
to mirror every update made in .net to .us)

So, I add a zone to ns1.example.net that looks like:
(In view internal)
   zone example.us {
   auto-dnssec maintain;
   type master;
   allow-transfer { key TSIG_GLOBAL_KEY; };
   file EXAMPLE_US.DB;
   update-policy {
   grant TSIG_GLOBAL_KEY subdomain example.us. ANY ;
   };
};

$ORIGIN .
$TTL 600; 10 minutes
example.us.   IN SOA  ns1.example.net.
examplenetadmin.example.net. (
   2011063001 ; serial
   172800 ; refresh (2 days)
   600; retry (10 minutes)
   2419200; expire (4 weeks)
   600; minimum (10 minutes)
   )
example.us. IN DNAME example.net.
example.us. IN NS ns1.example.net.
example.us. IN NS ns2.example.net.

I get SERVFAIL with dig if I ask about, say www.example.us @ns1.example.net
(www.example.net does exist).

I see nothing in the named.log, except the trace 99 /notrace commands
bracketing the dig, and if I turn on querylog:
client ns1 IP#33256: view internal: query: www.example.us IN A + (ns1
IP).

If I look at the named statistics channel, I see that example.us is being
served, but the zone serial is '-', not '2011063001'.

Questions:
   o Am I confused about DNAME placement - would it have to go in .US?
If so, is this possible?  (I don't mean technically possible - I mean
practically - e.g. thru a registrar such as godaddy, enom, etc).  If not,
what explains the SERVFAIL?
 o Why is '-' reported for the zone serial?
   o I understand that DNAME and MX don't play well together (DNAME is
essentially CNAME, and MX doesn't allow
 CNAMEs).  I suspect I'd have to live with that - unless there are
wiser heads?
   o Is there a better approach?  (Assume that I'll also want to do the
same thing to example.info...)

Thanks.

-
This communication may not represent my employer's views,
if any, on the matters discussed.



DNAME?

2011-06-30 Thread Timothe Litt
I have domain example.net in production, and have recently acquired
example.us and example.info.

For whatever reason, I want example.us to simply mirror example.net, which
is dynamically updated (and DNSSEC).  And I want example.us to be zero
maintenance. (Well, OK I know I need separate DNSSEC keys, but I don't want
to mirror every update made in .net to .us)

So, I add a zone to ns1.example.net that looks like:
(In view internal)
zone example.us {
auto-dnssec maintain;
type master; 
allow-transfer { key TSIG_GLOBAL_KEY; }; 
file EXAMPLE_US.DB;
update-policy {
grant TSIG_GLOBAL_KEY subdomain example.us. ANY ;
};
 };

$ORIGIN .
$TTL 600; 10 minutes
example.us.   IN SOA  ns1.example.net.
examplenetadmin.example.net. (
2011063001 ; serial
172800 ; refresh (2 days)
600; retry (10 minutes)
2419200; expire (4 weeks)
600; minimum (10 minutes)
)
example.us. IN DNAME example.net.
example.us. IN NS ns1.example.net. 
example.us. IN NS ns2.example.net.

I get SERVFAIL with dig if I ask about, say www.example.us @ns1.example.net
(www.example.net does exist).

I see nothing in the named.log, except the trace 99 /notrace commands
bracketing the dig, and if I turn on querylog:
client ns1 IP#33256: view internal: query: www.example.us IN A + (ns1
IP).

If I look at the named statistics channel, I see that example.us is being
served, but the zone serial is '-', not '2011063001'.

Questions:
o Am I confused about DNAME placement - would it have to go in .US?
If so, is this possible?  (I don't mean technically possible - I mean
practically - e.g. thru a registrar such as godaddy, enom, etc).  If not,
what explains the SERVFAIL?
  o Why is '-' reported for the zone serial?
o I understand that DNAME and MX don't play well together (DNAME is
essentially CNAME, and MX doesn't allow
  CNAMEs).  I suspect I'd have to live with that - unless there are
wiser heads?
o Is there a better approach?  (Assume that I'll also want to do the
same thing to example.info...)

Thanks.

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 



RE: start script for bind9

2011-04-14 Thread Timothe Litt
YMMV wrt "just works".  Yes, running the latest ISC BIND can be worthwhile
after the OS distribution stops updating (or before it gets around to
packaging the latest ISC version.)

People considering the approach suggested by David & Alan should be aware
that the OS startup files often do more than just running named.

For example, under fedora: checkconf, checkzone, chroot and (in some
versions) selinux setup are handled in /etc/init.d/named, along with quite a
few sanity checks.

You may be better off copying the OS script & changing it to run named from
/usr/local/sbin if your environment is non-trivial.

Even if it's not, it's worth doing a few sanity checks (like making sure
networking is up).

In any case, be sure to understand what you're giving up if you elect to go
with the "just works" defaults in the named executable.  It may be OK for
some, but I tend to be more careful with important systems.

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: David Forrest [mailto:d...@maplepark.com] 
Sent: Thursday, April 14, 2011 11:53
To: Dave Forrest
Cc: bind-users@lists.isc.org
Subject: Re: start script for bind9

On Thu, 14 Apr 2011, Alan Clegg wrote:

 On 4/14/2011 10:23 AM, hugo hugoo wrote:

 I know that if bind is installed via apt-get install (I am using 
 debian linux version), there is automatically a bind9 startup script 
 in /etc/init.d/ directory.

 Since named "just works" and I do everything else using rndc, I have 
 the following line in /etc/rc.local:

/usr/local/sbin/named

 AlanC

I also find that named "just works" and, since the source install uses
/usr/local/sbin as the default target, it does not overwrite my
distribution (Fedora) binary.  It is also important to my setup that named
is always running, so I use a root crontab entry:
    /usr/bin/pgrep named >/dev/null || /usr/local/sbin/named -u named
that runs every minute to ensure it is up.  I can then stop it with rndc
stop and it will restart on the next minute's crontab event.
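
As the actual crontab entry, that is something like this sketch of what's
described above:

    * * * * * /usr/bin/pgrep named >/dev/null || /usr/local/sbin/named -u named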

I can return to my distribution's upstart by stopping named with rndc and
immediately starting its binary using the service command, but have found
the ISC source tarballs a better solution to my needs, which include DNSSEC
and IPv6 while Fedora 11 is EOL.  YMMV.

Dave F
-- 
St. Louis, Missouri(Sent by ALPINE 2.02 FEDORA 11 LINUX)




RE: can I set the second nameserver to a public dns cache?

2011-03-28 Thread Timothe Litt
No - a public cache is a recursive resolver, not an authoritative server
for your zone, so it can't properly answer the non-recursive queries that
resolvers will send to a listed NS.  But you can use a public (commercial
or non-commercial) secondary DNS service.

Google "secondary dns" or "free secondary dns".  You will find a number of
services and reviews.

Be careful in selecting - many charge or limit you based on the number of
queries and/or zones.  QoS and reliability vary, as do levels of support.
Note that not all secondary services use BIND.

Many of the free services don't yet support DNSSEC, don't accept NOTIFY
(polling instead) or are unreliable.  Most don't support IPv6 and don't
have any QOS guarantee.  Also, for any serious use, you want geographic
separation for disaster-tolerance.  Nonetheless, you can find reasonable
free services.

Commercial services also vary the same parameters as well as price and
support.

I settled on puck.nether.net/dns for my personal domain, which seems to stay
current with BIND, has been reliable, supports IPv6 and NOTIFY and is
located in Chicago.  But your mileage (and criteria) may vary.


-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 
-Original Message-
From: terry [mailto:te...@list.dnsbed.com] 
Sent: Monday, March 28, 2011 08:26
To: bind-users
Subject: can I set the second nameserver to a public dns cache?

Hello,

I have only one nameserver for a domain.
Can I set the second nameserver for this domain to a public dns cache?
for example:


abc.com.  IN  NS  ns1.abc.com.
abc.com.  IN  NS  ns2.abc.com.

ns2.abc.com.  IN  A  8.8.8.8  # 8.8.8.8 is google's public dns server


Since a DNS cache does recursive resolution, will it also answer with the
correct result?

Thanks.




RE: DNSSEC, views & trusted keys...episode 43

2010-11-01 Thread Timothe Litt
I have tried to consolidate the several suggestions for how to configure a
view that would respond with AD to recursive queries for authoritative
zones.
 
I don't have a working recipe.  I could use some help.
 
At this point, it looks like the recursive view is still going to the
external nameservers.

Validation fails because the external views don't use the same keys as the
internal views (when the zones have the same name).

Stub zones are active - as in the queries were made and their .db files
created.  Their content IS coming from the internal view (as expected).

Trusted keys are being specified (I intend to switch to managed-keys, but
one step at a time!)

Here is the configuration - I'm eliding include statements, truncating keys
and showing only one zone.  

Bind 9.7.2-P2

view r-internal in {
//match-clients { !any_external; all_internal; };
match-clients { 192.168.148.136; }; // This is for debugging
match-recursive-only yes;
transfer-source   192.168.42.6; // Required so recursion hits the right
view
query-source address 192.168.42.6; // likewise, but doesn't seem to be obeyed
recursion yes;
allow-recursion { all_internal; };
allow-query-cache { all_internal; };
trusted-keys {
example.net. 257 3 7
AwEAAd8UA5VFFxqqyj+2peMH+/KOhm3q2H/(...);
};
// dig @192.168.42.6 example.net dnskey
// example.net. 60 IN DNSKEY  257 3 7
AwEAAd8UA5VFFxqqyj2peMH+/KOhm3q2H/(...)
// Yes, there is a ZSK too

zone example.net in {
type stub;
file EXAMPLE_NET.stub.DB;
masters { 192.168.148.4; };
};
};

Named.log:

01-Nov-2010 13:58:41.436 lame-servers: info: error (no valid KEY) resolving
'example.net/DNSKEY/IN': external-IP of NS1#53
01-Nov-2010 13:58:41.535 lame-servers: info: error (no valid KEY) resolving
'example.net/DNSKEY/IN': external-IP of NS2#53
01-Nov-2010 13:58:41.628 lame-servers: info: error (no valid KEY) resolving
'example.net/DNSKEY/IN': external-IP of NS3#53
01-Nov-2010 13:58:41.630 lame-servers: info: error (broken trust chain)
resolving 'www.example.net/A/IN': external-IP of NS3#53

Note that none of the three nameservers in the log are the one configured
for this test.

I suspect that some internal shortcut is causing named to ignore the
query-source directive.
Perhaps in not wanting to do UDP to itself, named isn't setting (or looking
at) the source address?  

In any case, this server is authoritative (a slave) for all the zones
involved, so it's also a bit of a mystery as to why it goes outside to
resolve this at all.

I have created a level 99 trace of this lookup - perhaps it will mean
something to a named internals wizard...

Named.run (domain & IP addresses masked with sed)

01-Nov-2010 14:33:40.579 debug level is now 99
01-Nov-2010 14:33:40.581 socket 0x406bd410: socket_recv: event 0x408ee3c8 -
task 0x40655008
01-Nov-2010 14:33:40.583 socket 0x406bd410: dispatch_recv:  event 0x408ee3c8
- task 0x40655008
01-Nov-2010 14:33:40.584 socket 0x406bd410: internal_recv: task 0x40655008
got event 0x406bd470
01-Nov-2010 14:33:40.578 socket 0x406bd410: destroying
01-Nov-2010 14:33:43.162 socket 0x40697568: dispatch_recv:  event 0x408ef968
- task 0x40910c08
01-Nov-2010 14:33:43.162 socket 0x40697568: internal_recv: task 0x40910c08
got event 0x406975c8
01-Nov-2010 14:33:43.162 socket 0x40697568 192.168.148.136#41145: packet
received correctly
01-Nov-2010 14:33:43.163 socket 0x40697568: processing cmsg 0x40642190
01-Nov-2010 14:33:43.163 client 192.168.148.136#41145: UDP request
01-Nov-2010 14:33:43.164 client 192.168.148.136#41145: view r-internal:
using view 'r-internal'
01-Nov-2010 14:33:43.164 client 192.168.148.136#41145: view r-internal:
request is not signed
01-Nov-2010 14:33:43.158 client 192.168.148.136#41145: view r-internal:
recursion available
01-Nov-2010 14:33:43.158 client 192.168.148.136#41145: view r-internal:
query
01-Nov-2010 14:33:43.158 client 192.168.148.136#41145: view r-internal:
ns_client_attach: ref = 1
01-Nov-2010 14:33:43.159 client 192.168.148.136#41145: view r-internal:
query 'www.example.net/A/IN' approved
01-Nov-2010 14:33:43.160 client 192.168.148.136#41145: view r-internal:
replace
01-Nov-2010 14:33:43.160 clientmgr @0x4063f3c8: createclients
01-Nov-2010 14:33:43.160 clientmgr @0x4063f3c8: recycle
01-Nov-2010 14:33:43.161 createfetch: www.example.net A
01-Nov-2010 14:33:43.161 fctx 0x40867c38(www.example.net/A'): create
01-Nov-2010 14:33:43.168 fctx 0x40867c38(www.example.net/A'): join
01-Nov-2010 14:33:43.168 fetch 0x4064bc70 (fctx
0x40867c38(www.example.net/A)): created
01-Nov-2010 14:33:43.169 client @0x40a08008: udprecv
01-Nov-2010 14:33:43.169 socket 0x40697568: socket_recv: event 0x408c91e8 -
task 0x40a050c8
01-Nov-2010 14:33:43.169 fctx 0x40867c38(www.example.net/A'): start
01-Nov-2010 14:33:43.170 fctx 0x40867c38(www.example.net/A'): try
01-Nov-2010 14:33:43.170 fctx 0x40867c38(www.example.net/A'): cancelqueries
01-Nov-2010 14:33:43.170 fctx 0x40867c38(www.example.net/A'): getaddresses

Auto signing ARM

2010-09-20 Thread Timothe Litt
I'm trying to get my management tool cooperating with named
on DNSSEC key management.

I'm seeing behavior with auto-signing that doesn't strictly 
match the ARM and would like to know what's correct.  I'm
also not clear on what named expects for some cases.

4 questions after a little context:

9.7.1-P2

Consider this configuration snippet:

view internal in {
    key-directory /...;
    ...
};
zone xx.example.net in {
    auto-dnssec maintain;
    type master;
    file ...;
    allow-transfer ...;
    update-policy {
        grant ...;
    };
};

I run (This is a test, /dev/urandom isn't used in real life)
 dnssec-keygen -q -a NSEC3RSASHA1 -b 1024 -P now -A +3mo -r /dev/urandom -K
/... xx.example.net.

I get a Kxx.example.net+... file with all the right permissions.

Now, according to the ARM:

4.9.5 DNSKEY rollovers via UPDATE
It is possible to perform key rollovers via dynamic update. You need to add
the K* files for the new keys
so that named can find them. You can then ***add the new DNSKEY RRs via
dynamic update***. named
will then cause the zone to be signed with the new keys. When the signing is
complete the private type
records will be updated so that the last octet is non zero.

But: if I DON'T add the keys by dynamic update, but instead issue an 
  rndc sign xx.example.net in internal

The new key shows up in the zone.  As expected, nothing is signed.

So, it seems that it is NOT necessary to insert the DNSKEY RRs 
via dynamic update.  Either dynamic update or rndc wakes up named and
causes a scan of the keys directory.

1) Before I decide whether to rely on it, is this a bug or a feature?
   Dynamic update is a bit less work - but avoids having the control
   channel open beyond the local host.  So there are trade-offs.

In the same area of the ARM, the 5011 section seems to be a good way
to let the slave servers learn about key changes.  The section talks
about dnssec-signzone -S as the way to trigger distribution.

2) I would expect that adding a key to an auto-dnssec maintain zone via
the dynamic update/rndc sign route would also satisfy the 5011
requirements.  Is that correct?

3) If dnssec-revoke or dnssec-settime are invoked, I assume that
rndc sign would trigger publication.  If one would rather do everything
with dynamic update, what's the simplest transaction that will trigger
re-scanning the changed key?  Do I have to read the key file & insert
the key?
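
For concreteness, the dynamic-update route I have in mind is a sketch like
this - the TSIG key path is a placeholder, and the DNSKEY RDATA would be
pasted from the new K* .key file:

    # new-key.nsup contains:
    #   server ns1.example.net
    #   zone xx.example.net
    #   update add xx.example.net. 3600 IN DNSKEY 257 3 7 AwEAAc...
    #   send
    nsupdate -k /path/to/tsig.key new-key.nsup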

That leaves the DS records for internally delegated zones.  As best
I can tell, I still need to find the parent zone and insert them via
dynamic update.  But: in the case where the parent zone is served by
the same view in the same server, named has everything it needs to
autogenerate DS record(s) when a DNSKEY is published and install it in
the parent. Well, maybe which hash type(s) are desired, but that would
be easy to put in a .conf file...

4) Shouldn't named handle this?

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 



RE: DNSSEC, views & trusted keys...

2010-09-14 Thread Timothe Litt
This is getting very involved - or I'm getting confused.  Maybe
both :-)

I've tried to work out how this can work, but each solution 
seems to uncover another question.  I don't want to experiment 
to get to "seems to work", only to find the next problem much later...

There doesn't seem to be much description of stub zones in the ARM.

I take it that a stub zone will fetch data from the zone using non-
recursive queries, but the view can provide recursive service for
queries to zones served elsewhere?

I gather that they contain just an SOA and NS records.  Presumably
this means I have to create a new set of zone files for the master -
e.g. grep for SOA and self (but not delegating) NS records.

How are these maintained?  It wouldn't be too bad if the master
stub server would grab SOA & NS changes from the full zone &
propagate them to the primary copy of the stub zone.  But the
full zone is in a different view from the stub...  If this
is to work, these queries would have to be non-recursive for
the match-recursive-only view selection to support it.

Since we know that the server is authoritative for each zone, it
would seem that the stub should always have a 'masters' clause that 
points at itself (even if the non-stub zone is in fact a slave).  
Otherwise there's a good chance that resolving a query would go
across the wire to some other server, ignoring the local data.
But then update-forwarding won't work, will it?

It would be helpful if someone expert in all the interactions could trace
out the flows (where each starts, where it goes, how the destination/view
is selected) for:
   o Initializing the stub zones on the master and their replication to
     the slaves
   o Adding or removing a nameserver for the full zone (specifically, how
     this propagates to the stub)
   o A client's recursive query
   o Dynamic update
   o Zone notifies/refreshes (full and stub)

Sorry if I'm being opaque -- though if we expect DNSSEC to be used, I won't
be the only person trying to get this to work!

-
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

  
-Original Message-
From: Chris Buxton [mailto:chris.p.bux...@gmail.com] 
Sent: Saturday, September 11, 2010 22:41
To: Phil Mayers
Cc: bind-users@lists.isc.org
Subject: Re: DNSSEC, views & trusted keys...


On Sep 11, 2010, at 2:34 AM, Phil Mayers wrote:

 On 09/10/2010 11:12 PM, Timothe Litt wrote:
 
 So it looks like the new (r-internal) view is starting at the root when
it
 resolves -- ignoring what it has data for locally.   It sorta works for
 
 You'll need a:
 
 zone name {
  type forward;
  forward only;
  forwarders {
ips;
  };
 };
 
 It won't automatically detect that another view contains the zone and
redirect it; you have to tell it.

Use a stub zone instead of a forward zone, so that the query will actually
reach the authoritative view. With a forward zone, the query is recursive,
so will be picked up by the recursive view - the view will query itself and
not receive an answer.

zone zone.name {
    type stub;
    file /path/to/recursive-view-data/zone.name;
    masters { 127.0.0.1; }; // or whatever the correct IP is to reach
                            // the internal view
};

Chris Buxton
BlueCat Networks





Statistics channel patch

2010-09-13 Thread Timothe Litt
I have found the statistics channel useful for getting the active zone
configuration - this lets my management GUI autoconfigure validity checks
and pull-down menus for zones.  This will be especially helpful when the
dynamic add/delete zone situation is sorted out.  But it's useful now
because it avoids having the GUI follow all the nested includes of named's
config files -- or, as that's so ugly, the separate configuration that I had
before.

However, I ran into a few issues with this approach:

 o There is no way to distinguish a built-in zone from a user zone.  I
currently use serial == 0, but that's not guaranteed.  Of course, _bind is
in the CH domain, but that still leaves the built-in private address zones
(like 8.B.D.0.1.0.0.2.IP6.ARPA/IN/internal).

  o The vast quantity of statistics is quite a lot for a Perl XML parser to
handle - especially on older machines.  (XML::Simple using XML::Parser on a
400MHz machine takes 20 seconds, because the structure of the XML generates
zillions of deeply nested anonymous hashes!)

  o I build quite a few data structures, and want to cache them.  But it's
really hard (computes and lines of code) to separate out the dynamic data
(statistics) from the fairly static (configured zones).  In fact, the zone
serial number, which is not relevant for configuration, is in that section.
So it would require field by field decisions, and on-going maintenance.

This resulted in the attached patch, which others may find useful (and I'd
be really happy if it made it into the next release in some form).

It implements the /config URL on the statistics channel.  /config returns
everything that's fairly static. (e.g. If it weren't for dynamic zone
addition/deletion, you'd find it in a config file - not a zone file or
memory.)  

It uses exactly the same XML structure as the current / URL and in fact uses
the same code.  

It also adds a "type" tag, with the values "builtin" or "user".  (This is
still based on serial == 0, but perhaps someone can find a better way to
make the determination.)  Except for this addition, the / URL's output is
unchanged.

With this, it's possible to do a simple hash of the returned data and
invalidate the cache when it differs from the stored hash of the cached
copy.  This gets the whole thing down to ~1sec even on my slow machine,
which is quite tolerable.

It should be easy to add any other useful data, as long as the rule that
/config must not have rapidly changing data is obeyed.  

I did not update the XSL to understand that the dynamic stuff is now
optional.  It just shows up as blank if you use a web browser.  

A side note - I was surprised that the view/zones/zone name element is of
the form name/class/view, since class and view names are available as
separate tags.  Easy enough to deal with, but it might be worth considering
removing the redundant /class/view from the name at some point.  I didn't
because I wanted to stay compatible with the mainline.

The patch only looks large - as kdiff3 would show, it's mostly indentation
where I skip over the unwanted statistics when responding to the new URL.
About 16 non-whitespace changes, mostly 1 line.

Enjoy.

-
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

  


bind_config_url.patch
Description: Binary data

RE: DNSSEC, views & trusted keys...

2010-09-10 Thread Timothe Litt
Mark,

I must be opaque; I don't see how to make this approach work in any
reasonable way.

I tried this:

(DLV is enabled, and my external keys for example.com are there.)
view r-internal in {
match-clients { !any_external; all_internal; };
match-recursive-only yes;
transfer-source  192.168.self;
query-source address 192.168.self; // This should make resolution match
internal
recursion yes;
allow-recursion { all_internal; };
allow-query-cache { all_internal; };
include internal_trusted_keys.conf;  // contains trusted-keys {}; for
the internal zone apexes
 // example.net,
10.in-addr.arpa, 168.192.in-addr.arpa, etc
};

view internal in {
(as before)
key-directory /xx../internal-keys;
match-clients { !any_external; all_internal; };
 zone example.net {
auto-dnssec maintain;
type master; 
...
 };
zone 168.192.IN-ADDR.ARPA in {
auto-dnssec maintain;
type master;
...
}; 
zone xx.example.net in {...}
zone xx.168.192.IN-ADDR.ARPA in {...}
}
// This has to do with interfaces that have internal addresses, but 
// see the DNS as if they were outside.  Management tools...
view r-external in {
match-clients { any_external; }; 
match-recursive-only yes;
transfer-source  192.168.self-x;
query-source address 192.168.self-x;
recursion yes;
allow-recursion { any_external; };
allow-query-cache { any_external; };
// external trust comes thru the DNS (dlv)
};
view external in {
... 
}
The number of active zones reported by rndc status jumped from 56 to 90!

I expected the r-internal view to see that it was serving no zones & to
recursively resolve all the client requests with RR=0.  Then the internal
view would catch them.  But that seems to be wrong.

I did get AD set on the first few queries to example.net.  But after a while
I started seeing SERVFAILs and claims that no trusted key matched rrsets.

Once I started querying the in-addr.arpa zone, things definitely fell apart.
It seems that the resolver was going outside - in fact, I saw "trust chain
broken" messages in the logs, where the address of the server was one of the
1918 blackhole servers and the query was to an internal 1918 address's PTR
record in a zone of the internal view.  I also got these for example.net...

So it looks like the new (r-internal) view is starting at the root when it
resolves -- ignoring what it has data for locally.   It sorta works for
example.net names because it happens that the internal and external views
use the same (nsx.example.net) names for their nameservers - but of course
the addresses are different! And NAT gets in the way.  in-addr.arpa will
work for non-1918 addresses - mostly.  But for private addresses, this won't
work at all...

It's all logical - but not productive.  Even if the scheme works, it's
certainly going to put a lot of redundant data into memory.  (Which is
limited on my embedded servers.)

I still think that BIND should look at RD on queries that it resolves from
an authoritative zone, and if set it should validate from the trust root to
the key it used to sign the zone.  I can be persuaded that there's not much
point in actually verifying the signatures on the data in the response -
authoritative does mean that the file can be trusted about as much as the
assertion that BIND isn't lying about having validated...

Other ideas?


-Original Message-
From: Mark Andrews [mailto:ma...@isc.org] 
Sent: Thursday, September 09, 2010 22:06
To: Phil Mayers
Cc: bind-us...@isc.org
Subject: Re: DNSSEC, views & trusted keys...


In message 4c891404.3000...@imperial.ac.uk, Phil Mayers writes:
 On 09/09/2010 03:45 PM, Timothe Litt wrote:
 
 
  There is other advice in the ARM that says to put 'your 
  organization's public keys in the trusted-keys list'.  That doesn't 
  help - and in fact, confuses me even more since example.net has TWO 
  different public keys - one for each view.  And trusted-keys is a global
server option...
 
  I must be missing something.
 
 I don't think so. Currently AFAICT bind will not set AD on 
 authoritative zones, with any combination of options.

Add a match-recursive-only view:

view secure {
    match-clients { internal; };
    match-recursive-only yes;
    recursion yes;
};

view internal {
    match-clients { internal; };
    recursion no;
};

view external {
    match-clients { !internal; any; };
    recursion no;
};
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org


___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


DNSSEC, views & trusted keys...

2010-09-09 Thread Timothe Litt
I have 9.7.1-P2 running and since it's supposed to be 'for humans', I guess
I'm trying to determine if I am one.  It's not going as well as hoped... :-)

I have a domain - example.net, with two views, the usual 'internal' and
'external'; a third is planned.  The master maintains all the sub-domains
with auto-dnssec maintain.  Master and slaves have dnssec-validation on and
lookaside auto.

My internal systems use these servers as their resolvers.  The external view
doesn't allow recursion.

example.net's internal view is signed by ksk-internal. (Yes, the ZSK sigs
are there too.)
example.net's external view is signed by ksk-external, which is distinct
from ksk-internal.

The external keys are registered in the ISC DLV, and dnsviz seems quite
happy to validate a host that is in a delegated sub-domain signed by a yet
another key.

I'm unclear about how to configure this for the validation side of
example.net.

The ARM has a sentence where it says that BIND 'won't do crypto validation
on zones for which it is authoritative'.

And sure enough, dig +adflag to either view never has AD set on the
response.  (It will on ., isc.org, .gov, so validation is working.)
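
E.g., a sketch ("ns1" standing in for either view's server):

    dig @ns1 www.example.net a +adflag   # AA set, but AD never set
    dig @ns1 www.isc.org a +adflag       # AD set - validation works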

This doesn't seem right.  How is an ordinary internal client supposed to
know that it has authoritative (signed) data?  Yes, someday there may be
client resolver libraries that provide end-to-end validation.  But until
then, if trusting AD from my configured server is good enough for .gov, why
isn't it good enough for example.net?  

I've heard the argument that 'it doesn't make sense to verify the zone on
your own disk', but I don't buy it.

I'd like, for example, for my internal servers to show green with
http://www.dnssec-validator.cz/'s Firefox plugin...

If a server is authoritative for a zone that it maintains, it knows that the
signatures are all valid (or not).  It also should be able to check with its
parent (dlv, trusted-key list...) that its delegation is still valid.  So
it's surprising that it won't set AD.

The idea that the client should trust AA without AD in this case also seems
a step backwards.

There is other advice in the ARM that says to put 'your organization's
public keys in the trusted-keys list'.  That doesn't help - and in fact,
confuses me even more since example.net has TWO different public keys - one
for each view.  And trusted-keys is a global server option...

I must be missing something.

Bottom line question:
Short of configuring some other systems as caching-only validating
nameservers and having clients point to them, how does one configure BIND to
get AD for authoritative zones - preferably iff it can validate that the
chain of delegations to it is valid?

And no, it's not practical to run nested copies of BIND - most of my systems
are small embedded systems with very limited memory.  Nor is it practical to
double the number of name servers in my network.

Semi-related question:
Does anyone know of a public validating resolver that uses the ISC
DLV?  That doesn't solve the internal problem, of course, but it would be
handy for testing from 'outside'.

-
This communication may not represent my employer's views,
if any, on the matters discussed. 
 



RE: rndc addzone/delzone in 9.7.2rc1 (was: rndc reconfig delays)

2010-08-28 Thread Timothe Litt
Seems to me that if you stick with this, a couple of things are necessary
for manageability:

 o Some command to translate a zone file name to a view/zone name, and
vice-versa.  That would enable people to debug based on file contents...

 o A method to migrate zones from today's 'named.conf-configured' to
'named-managed'.  I think this needs to be scalable to Rob's 10k+ zones.
Perhaps a migration renames a zone file to the new scheme, and writes a stub
file containing a magic token under the old name to tell named to ignore the
named.conf entry and look for the new file?  This way, named.conf can be
cleaned of the old entries at leisure...

  o And, as I think I mentioned before, I'd really prefer to see this
function added to the RFC2136 protocol than added under rndc.  Rndc is not
easy to automate reliably (as Rob notes). And of course it will drive
similar non-standardized approaches in the other nameservers - which is a
hassle for management tools.  If you stick with rndc as the mechanism, I'd
at least like to see a perl library that talks the rndc protocol and
provides reliable communications and useful status.  (Of course if 2136 were
used, extending Net::DNS (::SEC) would make this easier.)
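
For plain record changes, 2136 already gives a clean, scriptable path, which
is what makes extending it for zone add/delete attractive.  A sketch, with
the key file and names illustrative:

    $ nsupdate -k /etc/named/update.key
    > server 192.168.148.4
    > zone example.net
    > update add host.example.net. 3600 A 192.0.2.50
    > send
    > quit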

I have always managed my zones as dynamic - and I think DNSSEC will drive
many others to do the same.  I'm all in favor of making it possible to
add/delete zones dynamically - but it has to be possible to
manage/troubleshoot the result.  (Other interesting operations are 'rename',
and perhaps 'copy')
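
For concreteness, the new commands as I understand them from 9.7.2rc1 (zone
and file names illustrative):

    rndc addzone example.org '{ type master; file "example.org.db"; };'
    rndc delzone example.org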


-
This communication may not represent my employer's views,
if any, on the matters discussed.
-Original Message-
From: Rob Foehl [mailto:r...@loonybin.net] 
Sent: Friday, August 27, 2010 18:46
To: Evan Hunt
Cc: bind-users@lists.isc.org
Subject: Re: rndc addzone/delzone in 9.7.2rc1 (was: rndc reconfig delays)

On Fri, 27 Aug 2010, Evan Hunt wrote:

> Non-obvious isn't the point.  We thought of having the file be named
> directly after the view, but view names are allowed to include
> characters that are forbidden in file names.  Before opening the file
> we'd have to check the name's legality, ensure it doesn't include
> ../ at the beginning, etc.  Rather than deal with that, I decided to
> just hash the view name, and get a guaranteed-unique, guaranteed-legal
> filename for each view.

How does this compare with the defaults for, say, the managed keys zones for
each view?  In any case, 3bf305731dd26307.nzf isn't obvious, having more
than one configured view will make troubleshooting more difficult for the
uninitiated, and something like dynamic-zones.conf.viewname (where
'viewname' is a sanitized version of such -- say all non-alphanumerics
replaced with underscores or dashes) should be simple enough.
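
A sanitizer of that sort is a shell one-liner, e.g.:

    printf '%s' "$view" | tr -cs '[:alnum:]' '_'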

> We needed a unique filename for each view because views can't share
> new-zone files.  (In the prior version, this wasn't explicitly
> disallowed, but it caused big ugly failure modes if you tried it.)

Shouldn't named explicitly check for overlap, then?  That seems in line with
many of the other sanity checks named does during normal operation...

>> Why take away the ability to remove arbitrary zones from the current
>> configuration?
>
> There are two parts to removing a zone: removing it from the currently
> running server, and removing it from the configuration file so that it
> doesn't come back when you restart.
>
> The second part can only be done with zones that are in the new-zone file.
> (You wouldn't want named to be directly editing named.conf.)
>
> If you haven't done the second part, then the zone isn't really
> removed, just temporarily disabled.  I felt that if we can't do both
> parts, we shouldn't do the first.  If you have a strong argument
> otherwise, though, I'm listening...

I have a process that implements very careful zone configuration management
and bulk zone updates, which currently triggers per-zone rndc reloads for
existing zones followed by an rndc reconfig if zones have been added or
removed.  The problem I've run into is that rndc reconfig is intolerably
slow past 50,000 or so configured zones, and I'm trying to determine whether
addzone/delzone would be a viable option.
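
Concretely, per batch that amounts to something like (zone and view names
illustrative):

    rndc reload example.org IN external   # each changed existing zone
    rndc reconfig                         # once, if zones were added/removed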

So, I explicitly don't want named to be managing the config.  Changing the
current server state without touching a config would be a drop-in change
here, whereas having named manage the config removes most of the visibility
I have into whether or not changes were successful.  The boolean error
status available from rndc is insufficiently robust for this purpose,
unfortunately; my process makes a number of decisions about whether or not
it should retry an operation based on how it failed.

Of course, none of this would matter if reconfig wasn't a problem with this
many zones, so I'm still interested in that question too... :)

-Rob



Resolving .gov w/dnssec

2010-04-22 Thread Timothe Litt
I'm having trouble resolving uspto.gov with bind 9.6.1-P3 and 9.6-ESV
configured as validating resolvers.

Using dig, I get a connection timeout error after a long (~10 sec) delay.
+cdflag provides an immediate response.

state.gov does not get this error.  Note that it uses different nameservers
than uspto.

Resolving uspto.gov using comcast's resolver (75.75.75.75) does not get this
error.

Is anyone else seeing this?  Ideas on how to troubleshoot?

Here are details (using the -ESV server).

Subset named.conf:

options {
listen-on { 192.168.148.4; 192.168.148.5; };
dnssec-enable yes;
  dnssec-validation yes;
  dnssec-lookaside . trust-anchor dlv.isc.org.;
  sig-validity-interval 8 2;
};
trusted-keys {
dlv.isc.org. 257 3 5
BEPHMu/5onzrEE7z1egmhg/WPO0+juoZrW3euWEn4MxDCE1+lLy2
brhQv5rN32RKtMzX6Mj70jdzeND4XknW58dnJNPCxn8+jAGl2FZLK8t+
1uq4W+nnA3qO2+DL+k6BD4mewMLbIYFwe0PG73Te9fZ2kJb56dhgMde5
ymX4BI/oQ+cAK50/xvJv00Frf8kw6ucMTwFlgPe+jnGxPPEmHAte/URk
Y62ZfkLoBAADLHQ9IrS2tryAe7mbBZVcOwIeU/Rw/mRx/vwwMCTgNboM
QKtUdvNXDrYJDSHZws3xiRXF1Rf+al9UmZfSav/4NWLKjHzpT59k/VSt TDN0YUuWrBNh;
};

Examples:


; <<>> DiG 9.6-ESV <<>> @192.168.148.4 state.gov
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35438
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0

;; QUESTION SECTION:
;state.gov. IN  A

;; ANSWER SECTION:
state.gov.  60  IN  A   72.166.186.160

;; AUTHORITY SECTION:
state.gov.  299 IN  NS  Ns1.terrenap.net.
state.gov.  299 IN  NS  Ns3.yipes.com.
state.gov.  299 IN  NS  Ns1.yipes.com.
state.gov.  299 IN  NS  Ns2.terrenap.net.
state.gov.  299 IN  NS  Ns2.yipes.com.

;; Query time: 441 msec
;; SERVER: 192.168.148.4#53(192.168.148.4)
;; WHEN: Thu Apr 22 07:37:46 2010
;; MSG SIZE  rcvd: 154

 dig @192.168.148.4 uspto.gov

; <<>> DiG 9.6-ESV <<>> @192.168.148.4 uspto.gov
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

 dig @192.168.148.4 +cdflag uspto.gov

; <<>> DiG 9.6-ESV <<>> @192.168.148.4 +cdflag uspto.gov
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18584
;; flags: qr rd ra cd; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 0

;; QUESTION SECTION:
;uspto.gov. IN  A

;; ANSWER SECTION:
uspto.gov.  7200IN  A   151.207.247.130
uspto.gov.  7200IN  A   151.207.243.129

;; AUTHORITY SECTION:
uspto.gov.  78721   IN  NS  DNS2.uspto.gov.
uspto.gov.  78721   IN  NS  DNS1.uspto.gov.

;; Query time: 27 msec
;; SERVER: 192.168.148.4#53(192.168.148.4)
;; WHEN: Thu Apr 22 07:40:27 2010
;; MSG SIZE  rcvd: 97

dig +dnssec @192.168.148.4 dlv.isc.org

; <<>> DiG 9.6-ESV <<>> +dnssec @192.168.148.4 dlv.isc.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43521
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 7, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;dlv.isc.org.   IN  A

;; ANSWER SECTION:
dlv.isc.org.300 IN  A   149.20.16.8
dlv.isc.org.300 IN  RRSIG   A 5 3 300 20100522083002
20100422083002 64263 dlv.isc.org.
MG9aDOgjqEMA3QcUQDDUac/YcHki0bPnXre6iyehi2jY3swg/zp3IOb4
Wf5cFQfIxQIf2n9EAw7tkBxhFZ2alDMEkotEVTPF13SYc+PP8EhV7vEF
OZc1snFat7R0YeeATpkZD5xaeYzkLZS1coiSJGiqCYrNoWDKi/DoP9TB RFo=

;; AUTHORITY SECTION:
dlv.isc.org.2696IN  NS  dlv.ord.sns-pb.isc.org.
dlv.isc.org.2696IN  NS  dlv.ams.sns-pb.isc.org.
dlv.isc.org.2696IN  NS  ns2.isc.ultradns.net.
dlv.isc.org.2696IN  NS  dlv.sfba.sns-pb.isc.org.
dlv.isc.org.2696IN  NS  ns1.isc.ultradns.net.
dlv.isc.org.2696IN  NS  ns.isc.afilias-nst.info.
dlv.isc.org.2696IN  RRSIG   NS 5 3 3600 20100522083002
20100422083002 64263 dlv.isc.org.
Ae2XBq3ibOKvx36NfB5ghOnHOH5XG1XFzVC/4ZCyu7lwxxh1RlVrMLcU
UHboYzBqdc/4bQ7SlELBSi34IN8BPm0tDpNmGmafXHj8ZqdojJxyLc07
Q9Hx15IJRkOcqKSmLAZq5VzfJDV9VeaPp6Xt4uVVpV1huzNwdzongjkB F0s=

;; Query time: 16 msec
;; SERVER: 192.168.148.4#53(192.168.148.4)
;; WHEN: Thu Apr 22 07:52:49 2010
;; MSG SIZE  rcvd: 561

Dnssec logging for uspto.gov lookup:

22-Apr-2010 08:00:09.497 dnssec: debug 3: validating @0x8550e58: uspto.gov
A: starting
22-Apr-2010 08:00:09.497 dnssec: debug 3: validating @0x8550e58: uspto.gov
A: looking for DLV
22-Apr-2010 08:00:09.497 dnssec: debug 3: validating @0x8550e58: uspto.gov
A: plain DNSSEC returns unsecure (.): looking for DLV
22-Apr-2010 08:00:09.497 dnssec: debug 3: validating @0x8550e58: uspto.gov
A: looking for DLV 

RE: Resolving .gov w/dnssec

2010-04-22 Thread Timothe Litt
So, others are also seeing this, and it's not unique to bind or my corner of
the internet.  Thanks.

It seems to have been going on for weeks, so it isn't going to fix itself.

Who do I report this to so that it gets resolved?  

FWIW, I tried +vc - from here, it doesn't help.  Also, one sometimes gets
SERVFAIL - and once in a while, it actually resolves!

As for the "make work project" and "less stability" comment -- it seems
likely to me that if DNS packets are being mishandled, others are too --
just not as visibly.  So DNSSEC may well be an over-due network diagnostic;
fixing these sorts of problems could equally well reduce retries, delays and
other mishandled fragments for other protocols. I'm not ready to blame the
indicator for the underlying problem.  At least until we get to a
DNSSEC-unique root cause.
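
If lost fragments are indeed the root cause here, one resolver-side
workaround (at some performance cost) is to advertise a smaller EDNS buffer
so that large answers are truncated and retried over TCP - a sketch:

    options {
        edns-udp-size 512;
    };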

-
This communication may not represent my employer's views,
if any, on the matters discussed.
-Original Message-
From: Chris Thompson [mailto:c...@hermes.cam.ac.uk] On Behalf Of Chris
Thompson
Sent: Thursday, April 22, 2010 10:52
To: Paul Wouters
Cc: Timothe Litt; Bind Users Mailing List
Subject: Re: Resolving .gov w/dnssec

On Apr 22 2010, Paul Wouters wrote:

> On Thu, 22 Apr 2010, Timothe Litt wrote:
>
>> I'm having trouble resolving uspto.gov with bind 9.6.1-P3 and 9.6-ESV
>> configured as validating resolvers.
>>
>> Using dig, I get a connection timeout error after a long (~10 sec) delay.
>> +cdflag provides an immediate response.
>>
>> Is anyone else seeing this?  Ideas on how to troubleshoot?
>
> I have the same problems with our validating unbound instance.

I suspect that this has to do with

  dig +dnssec +norec dnskey uspto.gov @dns1.uspto.gov.
dig +dnssec +norec dnskey uspto.gov @dns2.uspto.gov.

failing with timeouts, while 
  
  dig +dnssec +norec +vc dnskey uspto.gov @dns1.uspto.gov.
  dig +dnssec +norec +vc dnskey uspto.gov @dns2.uspto.gov.

work fine ... with a 1736-byte answer. Probably the fragmented UDP response
is getting lost somewhere near the authoritative servers themselves.
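
One way to test that theory from the resolver side is to cap the advertised
buffer and see whether the answer comes back truncated and then completes
over TCP:

    dig +dnssec +norec +bufsize=512 dnskey uspto.gov @dns1.uspto.gov.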

--
Chris Thompson
Email: c...@cam.ac.uk
