Re: 9.18 BIND not resolving .gov.bd site

2023-10-30 Thread Timothe Litt

On 30-Oct-23 03:46, Marco M. wrote:

On 30.10.2023 at 12:25:32, Mosharaf Hossain wrote:


mofa.gov.bd.    86400   IN  NS  ns1.bcc.gov.bd.
mofa.gov.bd.    86400   IN  NS  ns2.bcc.gov.bd.
couldn't get address for 'ns1.bcc.gov.bd': not found
couldn't get address for 'ns2.bcc.gov.bd': not found
dig: couldn't get address for 'ns1.bcc.gov.bd': no more
root@ns1:/etc/bind#

I can resolve them, but only A records exist.
Please try it again.

dig a ns2.bcc.gov.bd


When encountering these sorts of errors, particularly if not a DNS 
expert, the easiest diagnostic to use is https://dnsviz.net


It's graphical and detailed, and while oriented toward DNSSEC, it detects 
many other misconfigurations.


Fix the errors and warnings shown at 
https://dnsviz.net/d/mofa.gov.bd/dnssec/ and retest.
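
A quick command-line cross-check of the same delegation (a sketch using 
the names from this thread; any working resolver will do):

    dig NS mofa.gov.bd +trace      # walk the delegation down from the root
    dig A ns1.bcc.gov.bd +short    # the delegated NS names must resolve
    dig A ns2.bcc.gov.bd +short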



Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Zone stats

2023-08-21 Thread Timothe Litt

(Sorry for the duplicate/reply without context).  See below.

On 21-Aug-23 11:11, Mark Elkins wrote:


Hi,

I'm writing some software to be able to read information from a Zone 
file. I am a legally authorised Secondary Authoritative Nameserver for 
a number of domains, or rather zone files, e.g. EDU.ZA (and others). Is 
there an easy way to:


1) Count how many delegated domains there are (Names with NS records)

2) Extract the above Names - so I can look for changes (Added/Deleted 
names)


3) find out how many unique names have DS records (I can DIG I suppose)

I'd also like to spot broken stuff (named-checkzone ?)

So the zones (such as EDU.ZA) contain the domain name of the entity 
(whois.edu.za) along with the Nameserver records and, in this case, a 
DS record.  E.g., "whois.edu.za" looks like:


whois   NS  control.vweb.co.za.
        NS  secdns1.posix.co.za.
        NS  secdns2.posix.co.za.
        NS  secdns3.posix.co.za.
        DS  27300 13 2 8ED21DB407F6AC3E6EA757AE566953C1BBADD8B652BE4C7C0744B1D7 9DF42894
        DS  17837 13 2 36FD5B19450B672988AE507FB7D2F948ED1E889546C6E16554C7EAF9 CE9C3FEA


One hindrance is that journal files are present - so it is not just 
the zone file but the zone.jnl file as well.


Some African ccTLDs have everything in one zone e.g. their COM, EDU, 
GOV - etc. In South Africa, these are all separate zones, making life 
somewhat easier.


I'd hate to re-invent software that already exists.

The primary purpose is to pull data into an (ICANN requested) 
African DNS Observatory.



--

Mark James ELKINS  -  Posix Systems - (South) Africa
m...@posix.co.za Tel: +27.826010496 
For fast, reliable, low cost Internet in ZA: https://ftth.posix.co.za


Mark,

a) Use named-compilezone to extract the zone with journals applied.

b) My favorite: do an AXFR of the zone, which gives the correct data 
with all the pseudo-ops expanded.


c) Use a library - I use Perl's Net::DNS - and write code to do the AXFR 
& walk the zone; it gives you access to the fields in the records.


https://github.com/tlhackque/certtools has a simple utility called 
acme_token_check  that does (c) to remove stray ACME records - it shows 
how to do the transfer and walk the zone.   (And also how to use DNS 
UPDATE to maintain it.)
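
A rough sketch of (a) and (b) with standard tools, assuming the server 
permits AXFR to your host; the server name, file paths, and output file 
names are placeholders:

    # (a) flatten zone + journal into canonical text
    named-compilezone -j -o edu.za.txt edu.za /var/named/edu.za.db

    # (b) transfer, then answer the three questions with awk
    dig AXFR edu.za @zone.server.example > edu.za.axfr
    awk '$4 == "NS" && $1 != "edu.za." { print $1 }' edu.za.axfr | sort -u > delegated.txt
    wc -l < delegated.txt                                        # 1) count of delegated names
    diff old-delegated.txt delegated.txt                         # 2) added/deleted names vs. last run
    awk '$4 == "DS" { print $1 }' edu.za.axfr | sort -u | wc -l  # 3) unique names with DS records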


Enjoy.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: RFC7344 (was: Funky Key Tag in AWS Route53 (2)) (2)

2022-12-29 Thread Timothe Litt


On 29-Dec-22 19:30, Mark Andrews wrote:
Valid base64 includes spaces and new lines. Poorly written record 
parsers reject valid records.


--
Mark Andrews



True for DNS records; the RFC clearly states that whitespace is allowed 
in the presentation form's base64 fields of DNSSEC records.  And as 
described, the AWS parser is "poorly written".


Not true in general.  In fact, the base64 RFC states the opposite.  Of 
course, confusion results.  I often wonder why so much effort goes into 
writing RFCs when so many people don't read them carefully.


gnu base64 (the command) does what engineers do when there are multiple 
interpretations - provides an option.  See man (1) base64's 
--ignore-garbage and remarks:


   The  data  are  encoded  as described for the base64 alphabet in RFC
   3548.  Decoding requires compliant input by default; use
   --ignore-garbage to attempt to recover from non-alphabet
   characters (such as newlines) in the encoded stream.

Sigh.


https://datatracker.ietf.org/doc/html/rfc3548#page-3

2.3.  Interpretation of non-alphabet characters in encoded data

   Base encodings use a specific, reduced, alphabet to encode binary
   data.  Non alphabet characters could exist within base encoded data,
   caused by data corruption or by design.  Non alphabet characters may
   be exploited as a "covert channel", where non-protocol data can be
   sent for nefarious purposes.  Non alphabet characters might also be
   sent in order to exploit implementation errors leading to, e.g.,
   buffer overflow attacks.

   Implementations MUST reject the encoding if it contains characters
   outside the base alphabet when interpreting base encoded data,
   unless the specification referring to this document explicitly
   states otherwise.  Such specifications may, as MIME does, instead
   state that characters outside the base encoding alphabet should
   simply be ignored when interpreting data ("be liberal in what you
   accept").  Note that this means that any CRLF constitute "non
   alphabet characters" and are ignored.  Furthermore, such
   specifications may consider the pad character, "=", as not part of
   the base alphabet until the end of the string.  If more than the
   allowed number of pad characters are found at the end of the string,
   e.g., a base 64 string terminated with "===", the excess pad
   characters could be ignored.
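
A two-command illustration of the difference, assuming GNU coreutils 
base64; the sample string is eight alphabet characters with an embedded 
space:

    printf "AwEA AaD+" | base64 -d | od -An -tx1                    # rejected: "invalid input"
    printf "AwEA AaD+" | base64 -d --ignore-garbage | od -An -tx1   # decodes the six bytes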




Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: RFC7344 (was: Funky Key Tag in AWS Route53 (2)) (2)

2022-12-29 Thread Timothe Litt


On 29-Dec-22 18:37, Eric Germann wrote:
The really annoying part is it isn’t obvious that they want the public 
key and not the result of dnssec-dsfromkey; they do it themselves. 
 The annoying part is they throw an error if the key isn’t valid 
Base64 (think spaces or newlines), but gladly accept the DS output 
from dnssec-dsfromkey.  Somehow or another they are getting the key 
tag from the incorrect DS  record, because they encode again the 
already encoded string.




Somehow or another they are getting the key tag from the incorrect 
DS record, because they encode again the already encoded string.

This isn't quite what's happening.

There is no mystery here.  The DS hash data is expressed in hex.  All hex 
characters are valid base64.  The DS hash is 64 characters long, which 
is a multiple of 4, so no padding is required.  Thus, it's a valid base64 
string and can be decoded into 48 bytes.


The key tag is just a folded checksum 
(https://datatracker.ietf.org/doc/html/rfc2535#page-46), which doesn't 
care what data it's working on.  It's supposed to be computed over the 
KEY RR, but there's not much you can say about a key unless you know the 
type that determines its structure.  So it's perfectly reasonable to 
compute it blindly over whatever data is presented.


So they decode the DS hash to binary - which will work - apply the 
checksum, and you have a "keytag" - or something that looks like one.


By chance, DS has 4 RDATA fields: tag, alg, type, digest.  And DNSKEY 
has 4: flags, proto, alg, key.  In both cases, the first three are 
decimal numbers.  So they look the same to a decoder.
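
A sketch of that fold (RFC 4034 Appendix B, unchanged from RFC 2535) as 
a Perl one-liner, matching the thread's idiom.  The input is an assumed 
reconstruction of what AWS saw: the hex digest from Eric's DS record 
treated as a base64 key, prefixed with wire-encoded DNSKEY fields 
257 3 8.  It reproduces the 22755 tag that dnssec-dsfromkey computes for 
the same data later in this archive.

    perl -MMIME::Base64 -e '
      # Wire RDATA: flags 257, protocol 3, algorithm 8, then the "key" bytes
      my $rdata = pack("nCC", 257, 3, 8)
          . decode_base64("A17DF360A9E0CB485BD396A839119441C5FF62A9C9E46D586EBDD1D084E2E36B");
      my ($ac, $i) = (0, 0);
      # Fold: even-offset bytes go in the high half, odd-offset bytes in the low half
      $ac += ($i++ & 1) ? $_ : $_ << 8 for unpack("C*", $rdata);
      $ac += ($ac >> 16) & 0xFFFF;
      print $ac & 0xFFFF, "\n";     # 22755
    '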


That said, I agree that it should be obvious what they want. E.g. put a 
sample record above the input box.  And say "KEY".  For extra credit, 
check that the length matches the key algorithm; make sure that an RSA 
key is odd (if not prime), ...


What I've seen from others is that DNSSEC is not viewed as important 
enough to merit a careful human factors design for the interfaces.  It's 
more "what's the minimum we can do to quiet those few people who insist 
on it?".    So they don't label the fields in their forms, don't provide 
meaningful help, don't advertise the capability.  And, surprise, only 
the truly motivated people use it.  And those customers are so grateful 
to have anything that no complaints are received.    Which discourages 
adoption, keeps the user base small, validates the "don't do much" 
strategy, and - catch-22, DNSSEC doesn't expand beyond the hardcore techies.


The problem is politics, not technology.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: RFC7344 (was: Funky Key Tag in AWS Route53 (2)) (2)

2022-12-29 Thread Timothe Litt
Apparently I didn't include the DNS script library link mentioned in my 
note.  Sorry.


https://github.com/srvrco/getssl/tree/master/dns_scripts

On 29-Dec-22 13:45, Peter wrote:


On Thu, Dec 29, 2022 at 09:17:26AM -0500, Timothe Litt wrote:

! (Manual processes
! are error-prone.  That getting registrars to adopt CDS/CDNSKEY - RFC7344 -
! has been so slow is unfortunate.)

Seconded. Do You have information about this moving at all? Because to
me it looks very much like dead-in-the-water, and my registrar didn't
even know what that is.

Otherwise I would have perfect automation for continuous rollover -
but still I have to either hack the data into their webinterface, or
figure out what kind of crappy api they might have - and in my view the
first option is boring and the second is superfluous work.

cheerio,
PMc


Yup, Eric's case was a classic example.  He tried to do the right thing, 
put in the wrong record, and the system didn't produce the expected 
results.  To his credit, he persisted.  Most people don't.  A while ago 
there was a study (cloudflare/APNIC 
<https://blog.cloudflare.com/automatically-provision-and-maintain-dnssec/>) 
that showed that only about 40% of people who enabled DNSSEC for 
their accounts successfully served DS records in their registry.


There are some registrars and registries who support CDS/CDNSKEY 
(RFC7344/8078).  Unfortunately, not enough.


I don't track it closely, but here are a few who claim support when last 
I looked.


.cz, .ch, .li, .ne, .se, .sk

DNSSimple, domainnameshop

GoDaddy publishes CDS and CDNSKEY when it manages DNSSEC, but doesn't 
poll when delegated.  I don't think they bridge (poll & then use EPP for 
domain registries that don't poll.)


Cloudflare was an advocate, and has published for a long time. Again, 
the issue is registries.


https://github.com/oskar456/cds-updates has a list that seems more 
current.  Note that none of the big 13 are on it - .com, .net, .org, 
.info, .gov, .edu, ...


There are hybrid approaches.  Most of the registrars have some sort of 
proprietary API that allows a script to insert/delete/modify records.  
So you can let BIND generate them and script the registry updates.  But 
it's ad-hoc for each registrar.
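
A quick way to see what a zone publishes and whether its parent has 
picked it up (the zone name is a placeholder):

    dig CDS example.com +short      # what the child asks the parent to publish
    dig CDNSKEY example.com +short
    dig DS example.com +short       # what the parent zone actually serves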


For some idea of what a mess that is, here is a library of DNS update 
scripts for a number of registrars (used by a LetsEncrypt script, but 
the aggravation/diversity is the same).


I suspect that to get any forward progress, someone will have to come up 
with a business case that shows why the registries should take action.  
Or get ICANN to mandate it.  There are various user constituencies in 
ICANN, but that's a highly political process.


So much like DNSSEC itself, the technology is there, but the will to use 
it everywhere it's needed is not.


(I'm not involved with any of the players, aside from reviewing the RFC 
drafts.  Just an interested, and frustrated, observer.)


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.







Re: Funky Key Tag in AWS Route53 (2)

2022-12-29 Thread Timothe Litt
What is annoying is it accepts the hash as perfectly valid and gets 
the  DS record number as the wrong ID.
A key is just a bundle of bits, no way to validate it.  Well, perhaps 
the length should be consistent with the key type...


In fact, with the same input, dnssec-dsfromkey produces the same keytag 
as AWS.  Below I pretend that the hash from your DS is in a DNSKEY record.


cat >tmp.key
ericgermann.photography. DNSKEY 257 3 8 A17DF360A9E0CB485BD396A839119441C5FF62A9C9E46D586EBDD1D084E2E36B

dnssec-dsfromkey -2 tmp
ericgermann.photography. IN DS 22755 8 2 2E81A125523957ED2C3076B4E58BE159027F659D74E184E2F0B81D922D1E7FA9


So, as I concluded, AWS was generating a DS for a different "key".  Its 
keytag was correct for the data it got.


Glad you got to a solution.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 29-Dec-22 10:39, Eric Germann wrote:
I understand all the tools and output.  The error I was trying to find 
is why they disagreed and checking all the points along the way. 
 Thanks for your scripts.


Anyways, for GoogleFu, I got it fixed and it works correctly now 
thanks to 
https://www.digitalocean.com/community/tutorials/how-to-setup-dnssec-on-an-authoritative-bind-dns-server-2


For entering the DS record in to Route53, you enter the whole public 
key in Base64 without spaces or newlines, not the hash of the key like 
the registrars I’ve used for other domains.


What is annoying is it accepts the hash as perfectly valid and gets 
the  DS record number as the wrong ID.


Thanks to all who helped!

Eric



On Dec 29, 2022, at 10:06, Timothe Litt  wrote:


That’s why I wanted to decode the DS record to see if it’s encoding 
it as 32686 or 22755


As I said, no decoding required.  Just look at the DS record.  The 
keytag is immediately after "DS" in plain, unencoded text.


If the question is how to verify the keytag from the DNSKEY it 
references, I've shown you two different tools that produce the same 
result.


If you use the same input file, you get the same answer from ISC and 
Net::DNS::SEC.


cat >tmp.key
ericgermann.photography. DNSKEY 257 3 8 AwEAAatPHgdYxFA74X+17xAMmZNn+I6XVzodbnA/m4M6vV+axYh+PTNt xrZSQ4PXEcJkNXF5OR1UPfPWea/gGIuYUbjMaa2H7fd+TXqc+C44U/2O vbZqefSUXl1QzqyxPyG7xZuAgTApFt+PuK9CrQtP7IV9qu34cXAXLGF1 SgrhBi843sTESw8nBAv1MDLMBCDEULVOSghqqxdJQ57yGOdsgYFdt6kL UNA1zntZV49dDWHGttZWwhEnnMuNz+e6bRroETOIhtzxLn4HOievnZmV 4rqzh5Zku/06QMNiUWwePW07RIGVVzUszU0LaAgBh/m111x5UiYfup2N egWHPunS1IM=

dnssec-dsfromkey -2 tmp
ericgermann.photography. IN DS 32686 8 2 A17DF360A9E0CB485BD396A839119441C5FF62A9C9E46D586EBDD1D084E2E36B


That's the same answer as Net::DNS::SEC.  Two different tools from 
reputable sources, same answer.


None of the installed keys have 22755.  DNSviz does show a DS record 
installed with 22755 (and no matching key).  So AWS is installing 
that DS from whatever input you provide it.


That leaves:

  * Different input to AWS vs. the local tools
  o perhaps you have a file with a different DNSKEY that you are
uploading to AWS.  I've been known to accidentally overwrite,
rename, or confuse files. (Not often, but it happens.)
  o have you verified that the contents of the file that you are
using matches what's in the DNS?
  o Does AWS have an option to use a DNSKEY from your zone?  That
would avoid the manual step.
  * If you're copy/pasting the DNSKEY file into AWS, corruption in
the process (buffer overruns?)
  * It's not inconceivable that AWS has a bug, but someone should
have hit one like this before you

Before blaming AWS, I'd be very sure that the same key is being 
input.  If it is, they have a bug.


You might also consider using a different key experimentally, on the 
off chance that a wrong keytag bug is data-dependent.


But the most likely scenario is that somehow AWS is generating a DS 
for a different key.


I don't use AWS, so that's as far as I can go.

Good luck.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.
On 29-Dec-22 09:28, Eric Germann wrote:
Yeah, that’s the problem I’m trying to solve.  I run the key thru 
dnssec-dsfromkey and get 32686, When I put the key in to Route53, I 
get 22755 from the decoded DS record in the console for Route53.


That’s why I wanted to decode the DS record to see if it’s encoding 
it as 32686 or 22755




On Dec 29, 2022, at 09:17, Timothe Litt  wrote:

On 28-Dec-22 19:40, Eric Germann wrote:

My question is

Is there any way to decode the DS record and see what key tag is 
actually encoded in it?  If it’s 32686 it’s an issue with Route53. 
 If it’s 22755 it’s an issue with dnssec-dsfromkey.


If anyone wants the DNSKEY for algorithm

Re: Funky Key Tag in AWS Route53 (2)

2022-12-29 Thread Timothe Litt
That’s why I wanted to decode the DS record to see if it’s encoding it 
as 32686 or 22755


As I said, no decoding required.  Just look at the DS record.  The 
keytag is immediately after "DS" in plain, unencoded text.


If the question is how to verify the keytag from the DNSKEY it 
references, I've shown you two different tools that produce the same result.


If you use the same input file, you get the same answer from ISC and 
Net::DNS::SEC.


cat >tmp.key
ericgermann.photography. DNSKEY 257 3 8 AwEAAatPHgdYxFA74X+17xAMmZNn+I6XVzodbnA/m4M6vV+axYh+PTNt xrZSQ4PXEcJkNXF5OR1UPfPWea/gGIuYUbjMaa2H7fd+TXqc+C44U/2O vbZqefSUXl1QzqyxPyG7xZuAgTApFt+PuK9CrQtP7IV9qu34cXAXLGF1 SgrhBi843sTESw8nBAv1MDLMBCDEULVOSghqqxdJQ57yGOdsgYFdt6kL UNA1zntZV49dDWHGttZWwhEnnMuNz+e6bRroETOIhtzxLn4HOievnZmV 4rqzh5Zku/06QMNiUWwePW07RIGVVzUszU0LaAgBh/m111x5UiYfup2N egWHPunS1IM=

dnssec-dsfromkey -2 tmp
ericgermann.photography. IN DS 32686 8 2 A17DF360A9E0CB485BD396A839119441C5FF62A9C9E46D586EBDD1D084E2E36B


That's the same answer as Net::DNS::SEC.  Two different tools from 
reputable sources, same answer.


None of the installed keys have 22755.  DNSviz does show a DS record 
installed with 22755 (and no matching key).  So AWS is installing that 
DS from whatever input you provide it.


That leaves:

 * Different input to AWS vs. the local tools
 o perhaps you have a file with a different DNSKEY that you are
   uploading to AWS.  I've been known to accidentally overwrite,
   rename, or confuse files.  (Not often, but it happens.)
 o have you verified that the contents of the file that you are
   using matches what's in the DNS?
 o Does AWS have an option to use a DNSKEY from your zone? That
   would avoid the manual step.
 * If you're copy/pasting the DNSKEY file into AWS, corruption in the
   process (buffer overruns?)
 * It's not inconceivable that AWS has a bug, but someone should have
   hit one like this before you

Before blaming AWS, I'd be very sure that the same key is being input.  
If it is, they have a bug.


You might also consider using a different key experimentally, on the off 
chance that a wrong keytag bug is data-dependent.


But the most likely scenario is that somehow AWS is generating a DS for 
a different key.


I don't use AWS, so that's as far as I can go.

Good luck.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 29-Dec-22 09:28, Eric Germann wrote:
Yeah, that’s the problem I’m trying to solve.  I run the key thru 
dnssec-dsfromkey and get 32686, When I put the key in to Route53, I 
get 22755 from the decoded DS record in the console for Route53.


That’s why I wanted to decode the DS record to see if it’s encoding it 
as 32686 or 22755




On Dec 29, 2022, at 09:17, Timothe Litt  wrote:

On 28-Dec-22 19:40, Eric Germann wrote:

My question is

Is there any way to decode the DS record and see what key tag is 
actually encoded in it?  If it’s 32686 it’s an issue with Route53. 
 If it’s 22755 it’s an issue with dnssec-dsfromkey.


If anyone wants the DNSKEY for algorithm 8, ping me off list and I 
will share it with you in a private email.


Thoughts?

And because it's trivial, here are the keytags for all your keys and 
DS records and how to get them. Note that you have DNSKEY 32686: 
installed in the DNS, and that the installed DS is 22755.


Can't say how it got that way, but that's what is there.  (Manual 
processes are error-prone.  That getting registrars to adopt 
CDS/CDNSKEY - RFC7344 - has been so slow is unfortunate.)  It's 
rarely the tools.


perl -MNet::DNS::SEC -e'@keys = split /\n/, qx(dig +cdflag +short ericgermann.photography DNSKEY); print "$_ => ",Net::DNS::RR->new("ericgermann.photography. DNSKEY $_")->keytag,"\n" foreach (@keys);'

257 3 8 AwEAAatPHgdYxFA74X+17xAMmZNn+I6XVzodbnA/m4M6vV+axYh+PTNt xrZSQ4PXEcJkNXF5OR1UPfPWea/gGIuYUbjMaa2H7fd+TXqc+C44U/2O vbZqefSUXl1QzqyxPyG7xZuAgTApFt+PuK9CrQtP7IV9qu34cXAXLGF1 SgrhBi843sTESw8nBAv1MDLMBCDEULVOSghqqxdJQ57yGOdsgYFdt6kL UNA1zntZV49dDWHGttZWwhEnnMuNz+e6bRroETOIhtzxLn4HOievnZmV 4rqzh5Zku/06QMNiUWwePW07RIGVVzUszU0LaAgBh/m111x5UiYfup2N egWHPunS1IM= => 32686
256 3 8 AwEAAaD+/5eN/zIqYhG/CXXastruIQEBBuD2Y2Yinx+IqWvInKc5Kb6K AWvUWECjn0Q7Lrt1s759/04SZXm2M4GwuKBzY+Ern2ukWi0hQmUBqoET VSrFhu75FJpi0+8wJZhx5UVPg7NTriYXC29rSTBt/OCr/Ot+utf2P9G2 hr/BXQqcwausick9Gu9zZtzB0072IEM6okZW1rDwlAwmlDjicJgbAnRt qgpWX21CgRG/G8Jjz4pGSP1rt54ilxVbCL8KR3huRaJGb6lnnJnQJckL oN2+rGaps1bLYC79fgdL5Y/fzR43J+te7RBo4AJXFhW9n1WL6KOKbprE pbl7yiINzTU= => 43126
256 3 13 bX62WTOQmhTaqnQprecHwUjDzBGAQbF0kqywkNzE1yBTrmP/zBNhvtp+ H9iYf1OOcfyDo6iE1XXUCNKHKZFHkg== => 36584
256 3 15 9SM6gMjImcK0sKPvIlEr9ZNKxsqmSL9zO7P9kZTH8XQ= => 48248
257 3 15 A8W3oD5oGEkHjOTfCmPbEBzHHTILksfywXvjQ5r9/dA= => 13075

Re: Funky Key Tag in AWS Route53 (2)

2022-12-29 Thread Timothe Litt

On 28-Dec-22 19:40, Eric Germann wrote:

My question is

Is there any way to decode the DS record and see what key tag is 
actually encoded in it?  If it’s 32686 it’s an issue with Route53.  If 
it’s 22755 it’s an issue with dnssec-dsfromkey.


If anyone wants the DNSKEY for algorithm 8, ping me off list and I 
will share it with you in a private email.


Thoughts?

And because it's trivial, here are the keytags for all your keys and DS 
records and how to get them.  Note that you have DNSKEY 32686 installed 
in the DNS, and that the installed DS is 22755.


Can't say how it got that way, but that's what is there.  (Manual 
processes are error-prone.  That getting registrars to adopt CDS/CDNSKEY 
- RFC7344 - has been so slow is unfortunate.)  It's rarely the tools.


perl -MNet::DNS::SEC -e'@keys = split /\n/, qx(dig +cdflag +short ericgermann.photography DNSKEY); print "$_ => ",Net::DNS::RR->new("ericgermann.photography. DNSKEY $_")->keytag,"\n" foreach (@keys);'

257 3 8 AwEAAatPHgdYxFA74X+17xAMmZNn+I6XVzodbnA/m4M6vV+axYh+PTNt xrZSQ4PXEcJkNXF5OR1UPfPWea/gGIuYUbjMaa2H7fd+TXqc+C44U/2O vbZqefSUXl1QzqyxPyG7xZuAgTApFt+PuK9CrQtP7IV9qu34cXAXLGF1 SgrhBi843sTESw8nBAv1MDLMBCDEULVOSghqqxdJQ57yGOdsgYFdt6kL UNA1zntZV49dDWHGttZWwhEnnMuNz+e6bRroETOIhtzxLn4HOievnZmV 4rqzh5Zku/06QMNiUWwePW07RIGVVzUszU0LaAgBh/m111x5UiYfup2N egWHPunS1IM= => 32686
256 3 8 AwEAAaD+/5eN/zIqYhG/CXXastruIQEBBuD2Y2Yinx+IqWvInKc5Kb6K AWvUWECjn0Q7Lrt1s759/04SZXm2M4GwuKBzY+Ern2ukWi0hQmUBqoET VSrFhu75FJpi0+8wJZhx5UVPg7NTriYXC29rSTBt/OCr/Ot+utf2P9G2 hr/BXQqcwausick9Gu9zZtzB0072IEM6okZW1rDwlAwmlDjicJgbAnRt qgpWX21CgRG/G8Jjz4pGSP1rt54ilxVbCL8KR3huRaJGb6lnnJnQJckL oN2+rGaps1bLYC79fgdL5Y/fzR43J+te7RBo4AJXFhW9n1WL6KOKbprE pbl7yiINzTU= => 43126
256 3 13 bX62WTOQmhTaqnQprecHwUjDzBGAQbF0kqywkNzE1yBTrmP/zBNhvtp+ H9iYf1OOcfyDo6iE1XXUCNKHKZFHkg== => 36584
256 3 15 9SM6gMjImcK0sKPvIlEr9ZNKxsqmSL9zO7P9kZTH8XQ= => 48248
257 3 15 A8W3oD5oGEkHjOTfCmPbEBzHHTILksfywXvjQ5r9/dA= => 13075
257 3 13 DBT06AacWTT1cD//OgwSSNRT9UTZdAgbJOnU/sWcFYhJ+x9SHvpfZGF6 tkGehWujsuYtwLf0aKt2b1mjQUk/BA== => 49677

perl -MNet::DNS::SEC -e'@keys = split /\n/, qx(dig +cdflag +short ericgermann.photography DS); print "$_ => ",Net::DNS::RR->new("ericgermann.photography. DS $_")->keytag,"\n" foreach (@keys);'

22755 8 2 2E81A125523957ED2C3076B4E58BE159027F659D74E184E2F0B81D92 2D1E7FA9 => 22755

You can, of course, use data from your files instead of dig.  This works 
for both DS and DNSKEY:


perl -MNet::DNS -MNet::DNS::SEC -e' print Net::DNS::RR->new("ericgermann.photography. DS 22755 8 2 2E81A125523957ED2C3076B4E58BE159027F659D74E184E2F0B81D92 2D1E7FA9")->keytag,"\n"'



Enjoy.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.







Re: Funky Key Tag in AWS Route53

2022-12-29 Thread Timothe Litt


On 28-Dec-22 19:40, Eric Germann wrote:

My question is

Is there any way to decode the DS record and see what key tag is 
actually encoded in it?  If it’s 32686 it’s an issue with Route53.  If 
it’s 22755 it’s an issue with dnssec-dsfromkey.


If anyone wants the DNSKEY for algorithm 8, ping me off list and I 
will share it with you in a private email.


Thoughts?



Perhaps you have TTL issues.

dnssec-dsfromkey and dnsviz are both accurate.

The keytag is visible in the DS record.  No decoding needed; it's the 
first field after "DS":

ericgermann.photography. 3600 IN DS 22755 8 2 2E81A125523957ED2C3076B4E58BE159027F659D74E184E2F0B81D92 2D1E7FA9

See also Perl Net::DNS::SEC.  Here are some one-liners from your domain 
that print the keytag from DS and DNSKEY records.


perl -MNet::DNS -MNet::DNS::SEC -e' print Net::DNS::RR->new("ericgermann.photography. DS 22755 8 2 2E81A125523957ED2C3076B4E58BE159027F659D74E184E2F0B81D92 2D1E7FA9")->keytag,"\n"'
22755

perl -MNet::DNS -MNet::DNS::SEC -e' print Net::DNS::RR->new("ericgermann.photography. DNSKEY 256 3 15 9SM6gMjImcK0sKPvIlEr9ZNKxsqmSL9zO7P9kZTH8XQ=")->keytag,"\n"'
48248

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: parental-agents clause - IP address only ?

2022-12-06 Thread Timothe Litt


On 06-Dec-22 01:58, Erich Eckner wrote:


[snip]
This made me curious: Is there some design rule forbidding bind to use 
the system resolver to resolve names it does not know about? I.e. why 
does it not query any resolvers in /etc/resolv.conf (probably via some 
system interface - sry, I have no idea, how "normal" programs resolve 
names) if it encounters an unknown name at a place where only an ip 
address is allowed so far?


That being said: I'm not saying, it *should* do so, I'm merely 
curious, why it does not. :-)


regards,
Erich


See man 3 getnameinfo and getaddrinfo for the current way of resolving 
DNS names via the standard library.  gethostbyname and gethostbyaddr are 
the older functions (overall less functional, harder to deal with IPv6 - 
but they can deal with multiple names for a host).
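
From a shell on a glibc system, getent exercises those same library 
paths (the hostname is a placeholder):

    getent ahosts ns1.example.net   # getaddrinfo() path: nsswitch.conf, then resolv.conf
    getent hosts ns1.example.net    # older gethostbyname()-style lookup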


Resolving names in a resolver can be complicated.  Especially when 
recovering from an outage - if you are the first resolver back, who do 
you ask?  Additionally, resolving names is slow - not a big problem for 
configuration data, unless you are operating a really large server 
(which many named operators are...).  But you don't want unnecessary 
resolution on live code paths.  Which raises the question: if a name is 
in, say, an ACL, does that mean "whatever the name resolved to when the 
server started", or "whatever the name resolves to when the ACL is 
used"?  The latter might be expected, but performance would crater.  
Then there are the security issues: if someone can fool you into using 
their server for name resolution, they can make whatever configuration 
items use names do what they want.  So, if allow-update is supposed to 
name your management host, they might supply an IP address that allows 
them to update your DNS.  Of course, there's DNSSEC - but that requires 
more infrastructure to be up when you boot that first server after a 
blackout.


None are insurmountable technical problems, but it's a lot of complexity 
(hence room for bugs).  The consensus is that it's not worth it for the 
return.  As noted earlier in the thread, most places where IP addresses 
are used are fairly static.  That lends itself to an external solution.  
(As examples, I have a root hints file from the 80s - while a couple of 
addresses have changed, it's still good enough today.  The root DNSSEC 
key has only changed once.  Server IP addresses change on a timescale of 
years - when hardware is replaced - maybe.  And when a corporate merger 
renumbers networks. Or if you're a small operator and don't own your IP 
addresses, when you change ISPs.)


This is also why I emphasized "TRUSTED" in selecting a suitable resolver 
for an external process.  In any case, using "include" in configurations 
can help to modularize/isolate the places where IP addresses are used.



Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: parental-agents clause - IP address only ?

2022-12-05 Thread Timothe Litt


On 05-Dec-22 11:17, vom513 wrote:

On Dec 5, 2022, at 4:06 AM, Matthijs Mekking  wrote:

'parental-agents' works the same as 'primaries'. It only supports addresses.

Listing them as domain names would technically be possible to implement, but it 
requires an authoritative server to act as a resolver. Adding resolver code to 
the path of an authoritative server is like crossing the streams. It adds 
security risks that are unnecessary for an authoritative server, so I'd rather 
not add such functionality.


Thanks for the confirmation - and yes makes sense.

Also thanks to Timothe in this thread for the script inspiration.  I cooked my 
own up and tested it - works brilliant.  I also added some logic to email me if 
there is a diff from the last run.  Will be interesting to see how often there 
actually is.


I'm glad it was helpful.  Rather than do-it-yourself diffs/email, I'd 
suggest simply committing changes to git (or another source control 
system).  A commit hook can handle the diff and/or e-mail.  And having 
your configuration under source control can be very helpful when things 
go wrong.  It's trivial to roll back or forward, visualize history, and 
(sometimes) bisect.  And it enforces documenting why changes are made.  
Plus, of course, it's easy to replicate changes to a local backup with a 
push...
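
A minimal sketch of such a hook, assuming a local mail command is 
available (the recipient address is a placeholder).  Drop it in 
.git/hooks/post-commit and make it executable:

    #!/bin/sh
    # Mail the latest commit (subject, stat, and full diff) after each commit
    git log -1 --stat -p | mail -s "named config: $(git log -1 --format='%h %s')" hostmaster@example.net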


If you've developed something that's generally useful - or could be made 
generally useful - I encourage you to share it.


Here, or especially if larger, a pointer to one of the usual platforms. 
(GitHub, GitLab, sourceforge, etc).


The community works best when everyone contributes what they can.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: parental-agents clause - IP address only ?

2022-12-05 Thread Timothe Litt


On 04-Dec-22 21:34, vom513 wrote:

Hello all,

So I set up parental-agents lists for my zones, and actually got to see it work 
(awesome !).  bind detected the parent DS records and acted accordingly.

However, I currently have these lists configured using the IP (v4 only at the 
moment) addresses of the parent NS’es.  I tried inputting hostnames, and I got 
errors (i.e. syntax) every time.

I would prefer to put these in as hostnames.  While at a certain level in the 
tree these don’t change very often, they can and do.  I’d rather not have to 
keep track of these in this manner.

So my question - am I just mangling the syntax - or does this clause really 
only support IPs ?  I was thinking if so - perhaps the reason is some chicken 
vs. egg / security reason ?  I.e. not trusting the name (which would have to be 
itself resolved) ?

Thanks in advance for clue++


Let the computer do the work.

Assuming you have a TRUSTED resolver, a work-around for this sort of 
issue is to replace the definition with an 'include'.


Run a cron job that queries your resolver & writes the resolved IP 
addresses.  You can template the include file.  (Or the entire config, 
but I get confused when the main .conf file is modified frequently.)


e.g. I use something like this in other cases.  Season to taste. Don't 
use 8.8.8.8...


include "myagents.conf"

|myagents.conf.template|

|parental-agents port 99 { %host.example.com% key secret ; 
%host.example.net% key sesame; }||

||parental-agents port 96 { %host.example.edu% key password ; }||
||
||agent-update|

#!/bin/bash

# Update IP addresses

IP4HOSTS="host.example.com host.example.edu"
IP6HOSTS="host.example.net"

TRUSTED="8.8.8.8"
CONF="myagents.conf"

trap "rm -f ${CONF}.tmp" EXIT
if ! cp -p "${CONF}.template" "${CONF}.tmp" ; then
    exit 1
fi

function resolve () {
    local HOST="$1" TYPE="$2" IP=""
    if ! IP="$(dig +short "$HOST" "$TYPE" "@$TRUSTED")"; then
        echo "Failed to resolve \"${HOST}\" \"$TYPE\"" >&2
        exit 1
    fi
    if [ -z "$IP" ]; then
        echo "Failed to resolve \"${HOST}\" \"$TYPE\"" >&2
        exit 1
    fi
    sed -i "${CONF}.tmp" -e"s/%${HOST}%/${IP}/g"
}

for HOST in $IP4HOSTS; do
    resolve "$HOST" "a"
done
for HOST in $IP6HOSTS; do
    resolve "$HOST" "aaaa"
done
if ! mv "${CONF}.tmp" "${CONF}" ; then
    exit 1
fi

exit 0

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

Hmm.  Your resolv.conf says that it's written by NetworkManager.

What I suggested should have stopped it from updating resolv.conf.

See 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/manually-configuring-the-etc-resolv-conf-file_configuring-and-managing-networking


After restarting the service, did you edit (or replace) resolv.conf to 
remove the ATT address?


If not, stop here & edit the file.

If so, perhaps some other manager is editing the file without replacing 
the comment.


Check to see if resolv.conf is a symlink - some managers (e.g. 
systemd-resolved) will do that.  Not sure when/if it found its way to 
centos (I don't run it), but if it's there, systemctl stop & disable 
it.  It would be running on 127.0.0.53:53, but it usually points 
resolv.conf to itself.


The other managers that I know of aren't in redhat distributions.

You may need to use auditing to identify what is writing the file.
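
A sketch with the Linux audit subsystem, assuming auditd is running:

    auditctl -w /etc/resolv.conf -p wa -k resolvconf   # watch writes and attribute changes
    # ...wait for the file to change, then ask who did it:
    ausearch -k resolvconf -i | grep -E "comm=|exe="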

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 03-Aug-22 14:39, Robert Moskowitz wrote:



On 8/3/22 12:59, Timothe Litt wrote:


Try

echo -e "[main]\ndns=none" > /etc/NetworkManager/conf.d/no-dns.conf
systemctl restart NetworkManager.service



Same content in resolv.conf.  BTW this is on CentOS 7.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.
On 03-Aug-22 12:36, Robert Moskowitz wrote:



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run 
a second instance of named.


As for the intermittent issues with resolving external names, 
that's frequently a case of hitting different nameservers.  Or a 
firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that 
IPv6 addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want 
that IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this, as 
ATT has said there is no assurance that the IPv6 addresses won't 
change.  So I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to 
stop it pushing into my resolv.conf.








Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

Try

echo -e "[main]\ndns=none" > /etc/NetworkManager/conf.d/no-dns.conf
systemctl restart NetworkManager.service

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 03-Aug-22 12:36, Robert Moskowitz wrote:



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run a 
second instance of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that 
IPv6 addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this, as ATT 
has said there is no assurance that the IPv6 addresses won't change.  So 
I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to 
stop it pushing into my resolv.conf.






,Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to resolve 
internal names.  There is no guarantee that your client resolver will 
try nameservers in order.  If you want a backup, run a second instance 
of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace will 
show what happens (if it still does - it could be that the ATT router's 
resolver is at fault).


An intermediate step would be to use dig.
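
For example, with the two resolver addresses from this thread (the 
queried name is a placeholder for one of your internal names):

    dig @23.123.122.146 www.htt-consult.com A +short          # your named: should answer
    dig @2600:1700:9120:4330::1 www.htt-consult.com A +short  # ATT router: won't know internal names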

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: DNSSEC adoption

2022-08-03 Thread Timothe Litt


On 03-Aug-22 09:27, Bob Harold wrote:
I think the best way to soften the effect, and make DNSSEC much less 
brittle, without losing any of the security, is to reduce the TTL of 
the DS record in the parent zone (usually TLD's) drastically - from 2 
days to like 30 minutes.  That allows quick recovery from a failure.  
I realize that will cause an increase in DNS traffic, and I don't know 
how much of an increase, but the 24-48 hour TTL of the DS record is 
the real down-side of DNSSEC, and why it is taking me so long to try 
to develop a bullet-proof process before signing my zones.


--
Bob Harold
University of Michigan



Yes, in planning for DNSSEC changes it's a good idea to include reducing 
TTLs, verifying the change, then increasing the TTLs.


That means keeping track of important (I'd say non-automated) events, 
and reducing TTL a few days in advance.


If you do that, you get the benefit of long TTLs most of the time.  KSK 
rollover - probably the most common cause of errors - is not a frequent 
event.


Then again, with proper planning, you don't make nearly as many mistakes.

Also, while I haven't gotten around to migrating, for a new setup I'd 
look at the dnssec-policy in 9.16+, which appears to do most of the 
automation for you.  All of it if you have a registrar who supports 
CDS/CDNSKEY, in which the parent zone pulls the new DS into itself. 
https://kb.isc.org/docs/dnssec-key-and-signing-policy
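
A minimal named.conf sketch of that approach, assuming 9.16+ (zone and 
file names are placeholders; "default" is the built-in policy):

    zone "example.com" {
        type master;
        file "example.com.db";
        dnssec-policy default;    # named generates keys, signs, and times rollovers
    };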


Some issues have been reported on this mailing list, but from a distance 
it seems to be a great improvement and doing well.


At this point, creating a new process doesn't seem like a great use of 
time... at least unless you've identified issues with the tools that you 
can't get fixed.  The ISC folks working on dnssec-policy seem to have 
been responsive.


FWIW

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: RE: DNSSEC adoption

2022-08-02 Thread Timothe Litt


On 02-Aug-22 13:51, Brown, William wrote:



my guess is that they see dnssec as fragile, have not seen _costly_
dns subversion, and measure a dns outages in thousands of dollars a
minute.

No one wants to be this guy:
http://www.dnssec.comcast.net/DNSSEC_Validation_Failure_NASAGOV_20120118_FINAL.pdf

so, to me, a crucial question is whether dnssec could be made to fail more 
softly and/or with a smaller blast radius?
randy

I'm more of a mail guy than DNS, so yes, like hard v. soft fail in SPF.  Or 
perhaps some way of the client side deciding how to handle hard v. soft 
failure.


As Mark has pointed out, validation is a client issue.  Setting up 
DNSSEC properly and keeping it running is for the server admin - which 
bind is incrementally automating.


For bind, the work-around for bad servers (which is mentioned in the 
article) is to set up negative trust anchors in the client for zones that 
fail, and notify the zone administrator to fix the problem.  I usually 
point them to a DNSVIZ report on their zone.
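
In bind, that knob is rndc nta (a sketch; the zone name is a 
placeholder):

    rndc nta example.gov            # temporarily skip validation for the broken zone
    rndc nta -dump                  # list active negative trust anchors
    rndc nta -remove example.gov    # remove it once the zone is re-signed correctly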


The nasa.gov failure was avoidable.  nasawatch, which is an excellent 
resource for space news, jumped to an incorrect conclusion about the 
outage - and never got the story straight. In fact, all validating 
resolvers (including mine) correctly rejected the signatures.  It wasn't 
comcast's fault - they were a victim.


It is an unfortunate reality that admins will make mistakes.  And that 
there is no way to get all resolvers to fix them - you can't even find 
all the resolvers.  (Consider systemd-resolved, or simply finding all 
the recursive bind, powerdns, etc instances...)


There is no global "soft" option - aside from unsigning the zone and 
waiting for the TTLs to expire.  And besides being a really bad idea, 
it's easier to fix the immediate problem and learn not to repeat it.


Long term, automation of the (re-)signing and key roll-overs will reduce 
the likelihood of these outages.  It is truly unfortunate that it's so 
late in coming.


It may take a flag day to get major resolver operators, dns servers, and 
client resolvers all on the same page.  I'm not holding my breath.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: DNSSEC signing of an internal zone gains nothing (unless??)

2022-08-02 Thread Timothe Litt

On 02-Aug-22 13:18, Peter wrote:

On Tue, Aug 02, 2022 at 11:54:02AM -0400, Timothe Litt wrote:
!
! On 02-Aug-22 11:09,bind-users-requ...@lists.isc.org  wrote:
!
! > | Before your authoritative view, define a recursive view with the internal
! > ! zones defined as static-stub, match-recursive-only "yes",  and a
! > ! server-address of localhost.
! >
! > Uh? Why before?
!
! Because each request attempts to match the views in order.  You want the
! stub view to match recursive requests.  The non-RD requests will fall thru
! to the internal zone and get the authoritative data.

Ahh, I see. But this does not work so well for me, because I have the
public authoritative server also in the same process. And from the
Internet will come requests with RD flag set, and these must get a
REFUSED ("recursion desired but not available").

So I considered it too dangerous to select views depending on the RD
flag being present or not, and resolve this with a slightly different
ordering of the views.

-- PMc


Order matters, and changing it will change behaviors.

My example combines the internal and public servers into one bind instance.  It provides 
recursive and authoritative service to internal clients; the recursive view will set AD.  
For external clients, it is authoritative - but there is provision for known clients to 
get recursive service (with AD).  Such clients would be excluded from matching the 
"*internal" views.  You might use this for DMZ systems, or for management tools 
(e.g. if you want to AXFR the external view.)

My public authoritative server(s) are the "external" view.

The server doesn't select ONLY on the RD flag.  It also selects on IP address 
and/or TSIG keys.  The RD flag is only used to select between the recursive and 
authoritative view pairs for MATCHING CLIENTS.

So you should order the views as I showed.

The public clients will fail the "match-clients" clause of the internal views 
regardless of the RD because of their IP addresses.  They will fall thru to the 
r-external view.  That will also fail unless they are listed clients.  So again, they 
fall thru to the external view.  That has recursion no - which means that RD will return 
REFUSED.

The only danger comes from failing to properly setup the client matching ACLs, 
or from making changes to the logic without understanding how it works.

Instead of guessing, use what I provided and test it.  It works.  It has worked 
for many years.  Once you have tested it and completely understand it, THEN 
make changes.  Carefully.  And test them.

This technique is straightforward if you completely understand what it's doing, 
but if you make assumptions you are likely to get into trouble.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: bind-users Digest, Vol 4031, Issue 3

2022-08-02 Thread Timothe Litt


On 02-Aug-22 11:09, bind-users-requ...@lists.isc.org wrote:


| Before your authoritative view, define a recursive view with the internal
! zones defined as static-stub, match-recursive-only "yes",  and a
! server-address of localhost.

Uh? Why before?


Because each request attempts to match the views in order.  You want the 
stub view to match recursive requests.  The non-RD requests will fall 
thru to the internal zone and get the authoritative data.  The latter 
includes the requests that the stub zone makes for authoritative data 
for its zones.  (You don't want the authoritative view to match the 
recursive requests, since that will not return AD.)  The ordered 
evaluation is why "match-clients { any; };" in the "external" (last) view 
does NOT match the preceding views.


Then any non-matching clients (e.g. external) go thru the same process.  
Generally external follows internal because you know how to match 
internal (e.g. your IP addresses / RFC 1918 addresses), and "external" is 
everyone else.


I find that views require less management than multiple instances, and, 
properly set up, I don't buy the "safety in separate instances for 
authoritative and recursive servers" argument.  But that's somewhat of a 
religious discussion - there are arguments on both sides.


In any case, the point is that contrary to other advice, it IS possible 
to run an authoritative server that also returns AD to recursive requests.


You seem to have more than two views - that doesn't change the 
principle.  For each authoritative view that you want to return AD, you 
need to add a recursive view that is a static-stub.


The outline that I provided is extracted from my working configuration.

You do need the allow-* and match-clients, but those are site-specific.

You can also slave the root zone - that's orthogonal to AD.

I suggest taking one step at a time.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Bind 9.11/RHEL7 Server Freezes FUTEX_WAKE_PRIVATE

2022-08-02 Thread Timothe Litt


On 01-Aug-22 18:29, Grant Taylor wrote:

On 8/1/22 4:21 PM, Greg Choules via bind-users wrote:

Off the top of my head, could it be this?

random-device

...

BIND will need a good source of randomness for crypto operations.


Drive by plug:  If it is lack of entropy, try installing and running 
Haveged.  At least as a troubleshooting aid.


Or my favorite: entropybroker + a hardware entropy source (or two).  
There are USB keys; I currently use an RPi (its CPU has a hardware 
source); recent Intel CPUs also have one.  If you use multiple sources, 
you don't have to worry about one being defective/compromised...


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: DNSSEC signing of an internal zone gains nothing (unless??)

2022-08-02 Thread Timothe Litt

On 01-Aug-22 12:15, John W. Blue wrote:


While that extra overhead is true, it is more accurate to say that if 
internal clients are talking directly to an authoritative server the 
AD flag will not be set.  You will only get the AA flag.  So there is 
nothing to be gained from signing an internal zone.


You can get the AD flag set, with a bit of extra work.  I've done this 
for years.


The question of whether the client resolver does/should trust the AD 
flag is situation dependent.


Before your authoritative view, define a recursive view with the 
internal zones defined as static-stub, match-recursive-only "yes",  and 
a server-address of localhost.  In the authoritative view, you can share 
the cache (attach-cache) with the recursive view.


It's pretty straightforward to automate keeping the static-stub list in 
sync - I keep it in a separate .conf file.


e.g. this outline (the order matters, views are selected first-match)

|view||"r-internal" in {||
||  match-clients {...};
||match-recursive-only yes;
||recursion yes;
   -- standard config --
};|

|/* Included */||
|||

|||-- trusted-keys --

  zone||"example.net" in {||
    type static-stub;
server-addresses {127.0.0.1; };
||   };|

|}:|

|view||"internal" in {||
||attach-cache "r-internal";
||recursion no;|

|  --- standard config --|

|/* included */
|

|  zone "example.net" in {
||auto-dnssec maintain;
||type master;
    file ...;|

|--standard config--
  };|

|||};|

|view "r-external" in { /* if you allow external recursion, or use acls 
to fake external clients */

|

|...|

|};|

|view "external" in {|

|...|

|};
|

A script along the lines of:

perl -e'while(<>){/^\s*zone/ && print $_," type static-stub;\n  server-addresses { 127.0.0.1; };\n};\n"}' >internal_stub_zones.conf


will generate the static-stub declarations.
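
For illustration (an assumption: the input is a file of one-line zone 
headers, as used for the internal view), a line such as

    zone "example.net" in {

comes out as

    zone "example.net" in {
     type static-stub;
      server-addresses { 127.0.0.1; };
    };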

Of course, depending on how you add/remove zones, YMMV.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.








Re: Supporting LOC RR's

2022-05-13 Thread Timothe Litt


On 13-May-22 12:21, Philip Prindeville wrote:

That's interesting, and clever work to solve the problem of making APs into 
reliable location references.

They are doing a more involved/automated version of what I suggested - using 
GPS (in their case built-in GPS, plus AP-AP communication) for APs to locate 
themselves.  Once the AP knows where it is, the clients can find out where they 
are in physical (WGS84 coordinate) space using the APs as references.  Note 
that it's an enterprise solution - definitely not for most homes - since it 
requires at least 4, and probably many more suitable APs.

But LOC records don't have any role in what's described.  They *could* be an 
output (e.g. an AP could use DNS UPDATE to install LOC records).  But there's 
still no obviously useful consumer for the LOCs...so why bother?

If you're in WiFi range of the AP, a client is better off getting precise 
information from its broadcast.  If not, it's useless.  And as previously 
noted, LOC for servers suffers from AnyCast, cache, and CDN uncertainty.

LOC was proposed in simpler times.


Actually, if the AP doesn't have GPS but does offer WiFi Certified Location 
Service, then it could use its own LOC record to provision itself...

-Philip

WiFi Certified Location service computes the *relative* location of 2 
WiFi devices. https://www.wi-fi.org/discover-wi-fi/wi-fi-location To 
offer an absolute location (what LOC provides), at least one AP has to 
know where it is (and broadcast it).  Then additional APs can compute 
their positions relative to it, and compute their absolute location(s).  
Either your AP knows where it is, or it finds out via WCL (or some other 
means: e.g. GPS or configuration).


If you want an AP to find out where it is via LOC, someone has to 
generate the LOC record.  And the AP needs to be able to find it - 
meaning it has been configured with an IP address and/or name.  If you 
want to participate in WCL, you want the LOC to have a precise (and 
accurate) position.  OTOH, if an AP is participating in WCL and doesn't 
know its absolute location, it can compute it using WCL if some other AP 
knows and broadcasts its own absolute location.


This conversation has come full circle.  Where does the LOC record's 
position data come from, and who (or what) provisions it?  And (assuming 
the AP doesn't have GPS or another reference, such as installed WCL APs) 
why is that easier/better than putting the data in the AP's config?


As I noted, *after* an AP knows where it is, it *could* generate a LOC 
record, and even install it. But that makes it an *output* of 
provisioning, not an input.  And there's still no obvious customer.  
Yes, some other AP could then use the first AP's LOC with WCL to 
determine its absolute location.  (Well, you probably need three APs to 
triangulate...)  But it's less work all around to get it from the first 
AP's broadcast.  And you still have the bootstrapping problem.  WCL 
clients have no use for LOC.  If you want to map your APs, you can ask 
them for their locations directly.


Much as some would like it to be, involving DNS isn't the answer to 
everything.


If you still want to convince yourself that LOC is useful: starting with 
an empty building, some unprovisioned APs, and no LOC records, provide 
an algorithm that provisions your AP(s). Specify all inputs and where 
they come from.  Contrast it with the HP video and/or manual 
configuration.  Show what steps your algorithm eliminates and/or 
facilitates, and at what cost.  I don't expect a positive outcome, but 
if I'm wrong, by all means post the details.


Since this has indeed come full circle, I'm done.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: Supporting LOC RR's

2022-05-02 Thread Timothe Litt


On 02-May-22 09:02, Stephane Bortzmeyer wrote:

On Wed, Apr 13, 2022 at 03:39:33PM +0200,
  Bjørn Mork  wrote
  a message of 14 lines which said:


Which problems do LOC solve?

I remember adding LOC records for fun?() in the previous millennium when
RFC 1876 was fresh out of the press.  But even back then paranoia
finally took over, and I deleted all of them.

Don't think I ever found anything to actually use them for.

Fun is a sufficient reason.

French zip codes to LOC:

% dig +short +nodnssec LOC 34000.cp.bortzmeyer.fr
43 36 47.108 N 3 52 9.113 E 0.00m 1m 1m 10m

https://www.bortzmeyer.org/dns-code-postal-lonlat.html  (in French)


I would never discourage anyone from having (harmless) fun.

On the other hand, unless your codes postaux are spherical (or a 
circular projection), your LOC will be at best an approximation of a 
point in the postal zone.  Perhaps the post office, or the geographic 
center, or a public building, or a monument, or the best restaurant, 
or...?  LOC can't represent the boundaries of most (any?) postal zones, 
as they tend to be polygons (or with curves, simply a closed path).  So 
the most fun may be guessing the meaning of the result :-)


Still, overall DNS seems to generate more problems than fun, so if LOC 
provides amusement, it's a good thing.


Malheureusement, LOC's practical application remains unclear.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: Supporting LOC RR's

2022-05-02 Thread Timothe Litt


On 01-May-22 05:03, Bob Harold wrote:


On Wed, Apr 13, 2022 at 9:39 AM Bjørn Mork  wrote:

Timothe Litt  writes:

> Anyhow, it's not clear exactly what problem you're asking LOC (or
> anything) to solve.

Which problems do LOC solve?

I remember adding LOC records for fun?() in the previous
millennium when
RFC 1876 was fresh out of the press.  But even back then paranoia
finally took over, and I deleted all of them.

Don't think I ever found anything to actually use them for.


Bjørn
-- 



WIth regard to locating access points:
Self-locating wifi APs
https://www.youtube.com/watch?v=kVdFNR0R3EE

That's interesting, and clever work to solve the problem of making APs 
into reliable location references.


They are doing a more involved/automated version of what I suggested - 
using GPS (in their case built-in GPS, plus AP-AP communication) for APs 
to locate themselves.  Once the AP knows where it is, the clients can 
find out where they are in physical (WGS84 coordinate) space using the 
APs as references.  Note that it's an enterprise solution - definitely 
not for most homes - since it requires at least 4, and probably many 
more suitable APs.


But LOC records don't have any role in what's described.  They *could* 
be an output (e.g. an AP could use DNS UPDATE to install LOC records).  
But there's still no obviously useful consumer for the LOCs...so why bother?


If you're in WiFi range of the AP, a client is better off getting 
precise information from its broadcast.  If not, it's useless. And as 
previously noted, LOC for servers suffers from AnyCast, cache, and CDN 
uncertainty.


LOC was proposed in simpler times.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Supporting LOC RR's

2022-04-12 Thread Timothe Litt


On 12-Apr-22 14:15, Philip Prindeville wrote:

In my case, I do split-horizon for my domain in-house and use RFC-1918 
addresses, so leaking them with the internet would be pointless anyway.


I have separate LOC records for in-house and external views.  The 
in-house version is high precision.  The external version is fuzzed.
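
For instance, in RFC 1876 presentation format (the coordinates are purely 
illustrative):

    ; internal view - precise, 1m size and horizontal precision
    gw.example.net.  IN LOC 42 21 54.675 N 71 6 18.343 W -24m 1m 1m 10m
    ; external view - rounded to whole minutes, precision widened to 10km
    gw.example.net.  IN LOC 42 22 0 N 71 6 0 W 0m 1m 10000m 10m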


I use LOC records on domains; the comparison with IP geolocation is 
because the usual alternative to LOC is to translate the domain name to 
an IP address; then geolocate that using one of the commercial databases.


Of course, that gets tricky when a hostname has multiple IP addresses or 
is served by anycast (such as a CDN).  In that case, the semantics 
aren't obvious - should the location be that of the CDN server (and 
which one)? The origin server?  And with 1918/NAT, the origin server may 
be in different locations depending on the protocol used.  (E.g. one 
public IP address, with an SMTP server in one building and a WWW server 
in another)


With WPs, you're not trying to locate a host at all; you're trying to 
infer (or calculate) the mobile device client's location.  Or assist the 
mobile device to calculate its location.


It's not clear to me that it's less work to prepopulate LOC records than 
to put a cellphone on top of the WAP before turning it on, getting the 
GPS coordinates (e.g. see the 'gpstest' app), and pasting them into the 
WAP's configuration.


If you really want cm scale accuracy, you need some kind of surveying 
instrument - whose data has to go someplace - be it LOC or the WAP 
configuration.  Or the new AP figures out its location based on 
triangulating from existing APs that somehow are deemed trustworthy.  
THEY might have LOC records to help, but that's not pre-provisioning.  
Maybe the new AP could then publish a LOC record with its location to 
help clients.  But I don't see how pre-provisioning helps setting up a 
new AP in this case; you might do the survey before the WAP arrives, but 
once the survey instrument reports a position, you either have prepared 
a configuration file for it (usual case), or you have to find it & 
configure it at that point.  Either way, setting the location is the 
smallest part of setting up a configuration - VLANs, SSIDs, access 
control/portals take much more work.


Anyhow, it's not clear exactly what problem you're asking LOC (or 
anything) to solve.


BTW, RFC1876 is worth reading for the suggested search algorithms.  I 
don't think it ever moved from "experimental", which may be part of why 
uptake hasn't been great.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: Supporting LOC RR's

2022-04-12 Thread Timothe Litt


On 12-Apr-22 01:46, Philip Prindeville wrote:

Does anyone use LOC RR's?  And if so, how?

I've had some Apple devices get seriously confused by their location services 
and I'm trying to provide strong hints.

It would also be nice to prime WiFi 6 Certified WAPs with their locations based 
on LOC RR's since we happen to have convenient infrastructure to do exactly 
that.

Thanks.

LOC RR's are not currently very popular.  But I have some where they 
provide different locations from geolocation services based on IP 
address.  Google, for one, reports the city from the LOC record in 
preference to data associated with the IP address.  I haven't looked 
recently, but last time I looked, the geolocation services' use of LOC 
was spotty.  Some used LOC, others didn't.  For them, it's pretty cheap 
to mine WHOIS and address block assignments; doing a DNS lookup for each 
address (and walking up the tree on a miss) gets expensive fairly fast.


There are some concerns with overly precise LOC records - great if you 
want a shopper to show up at your store, perhaps less so if you run a 
shelter or secure facility.  I've been known to intentionally misplace 
LOC records so that they're good enough for routing, census, and other 
coarse applications, but not accurate enough for navigation.


With respect to part 2: You might also consider that GPS receivers are 
cheap (every cellphone has one) and retail USB receivers are easily 
found at less than $20.  This may be a better choice than LOC records.  
GPS tells you where you are; LOC tells everyone else...


HTH

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: freebsd ipfw question

2022-02-21 Thread Timothe Litt


On 21-Feb-22 18:36, Randy Bush wrote:

for some reason lost in time, i have the following in `/etc/ipfw.rules`
on a freebsd system running bind9

 add allow tcp from any to me 53 limit src-addr 1 setup
 add deny tcp from any to me 53


Except that rule wouldn't help.  I put the non-local  connections into a 
file, and executed:


sed zz.tmp -e's/^.*->//; s/:[0-9]\+ .*$//;' | sort      | wc -l
sed zz.tmp -e's/^.*->//; s/:[0-9]\+ .*$//;' | sort -u | wc -l

I get the same number in both cases - 156.  They're mostly IPv6 
remotes.  So while there are IPv6 address blocks that are making a lot 
of connections, each address only makes one.  So the rule (limiting to 1 
connection/address) would have no effect.


Interestingly, they come from sequentially numbered hosts. Mostly in 
2607:f8b0:4002::.  (use 'less' instead of 'wc -l' to see this).  Whois says 
the address block 2607:f8b0::/32 is assigned to google (AS15169).


Why these blocks are making connections - and how long they persist may 
deserve some investigation.


They could be a DDOS - or a parallelized DNS survey.

If you decide they are abusive, the previous firewall rule isn't the 
right mitigation.


It's important not to jump to conclusions...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: freebsd ipfw question

2022-02-18 Thread Timothe Litt

On 17-Feb-22 16:45, Randy Bush wrote:

for some reason lost in time, i have the following in `/etc/ipfw.rules`
on a freebsd system running bind9

 add allow tcp from any to me 53 limit src-addr 1 setup
 add deny tcp from any to me 53

the results are

 01000  48358531  6390772849 allow tcp from any to me 53 setup limit src-addr 1 :default
 01100    165225     9379997 deny tcp from any to me 53

is this about normal?

randy


This seems like an artifact of a time when people assumed that TCP use 
was rare (and expensive), and likely only used for zone transfers.  Were 
that the case, this would have been an attempt to protect against denial 
of service attacks.


This was always a bad assumption.  With today's larger responses & 
traffic profiles, if it ever made sense, it's long past its expiration 
date.  TCP is required, and no RFC requires a client (or clients) on a 
host to minimize the number of TCP connections. Nor to limit the number 
of active zone transfers per host.


The effect is likely to be that client responses are slow and/or pushed 
away from this server to one that's more tolerant.  Whether the 165K 
dropped connections are significant is impossible to tell without (a) 
knowing the amount of time it represents and (b) what those attempts 
were trying to do.  They represent about 0.3% of the traffic in this 
interval - but that doesn't measure their importance.


Since you don't have a specific rationale for the rule based on a known 
situation, I would remove it.  (More precisely, remove the limit, which 
means replacing these rules with something like 'allow tcp from any to 
me 53'.)  If that results in abusive traffic, another (traffic-specific) 
approach to dealing with it would be in order.  And if it comes to that, 
do yourself (and your successors) a favor and document the problem you 
encounter and how your solution works...
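
Concretely, that means replacing the pair in /etc/ipfw.rules with just:

    add allow tcp from any to me 53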


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






Re: ipv6 adoption (HE & DNSSEC)

2022-02-17 Thread Timothe Litt

On 17-Feb-22 04:06, G.W. Haywood wrote:

Hi Grant,

On Thu, 17 Feb 2022, Grant Taylor wrote:

Please clarify if you are talking about DNSSEC for your own zone that 
they are doing secondary transfers of or if you are talking about 
DNSSEC for the IPv6's reverse DNS namespace that they delegate to you.


Ah, good point Grant.

The reverse zones are delegated to us but they aren't signed.

Yes, the issue with HE is that while they will delegate reverse zones to 
you, they don't accept DS records.  So you can sign your zones, but 
there is no signature chain to the root.


Before ISC retired DLV, it was possible to use that path - and I did.  
But unfortunately that ship has sailed.


dnsviz shows that HE hasn't signed its reverse zone.  That would be a 
prerequisite to DNSSEC for zones it delegates to customers, as would be 
a mechanism for submitting DS records to HE.


The issue has been open for (almost) 12 years.  I haven't seen any 
updates from HE since the incoherent reply in the thread at 
https://forums.he.net/index.php?topic=890.msg22055#msg22055


It's rather difficult to exert pressure on a vendor that's providing a 
free service.   But enough polite requests might help.


Perhaps further discussion of this belongs elsewhere...it seems to be 
wandering from BIND.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: ipv6 adoption

2022-02-16 Thread Timothe Litt


On 16-Feb-22 07:38, Andrew Baker wrote:


Firstly, thanks for the advice about the hidden master the other day, 
that’s now setup, working fine and we’ve just finished transferring 
about 4500 records across!


My software team came up this morning and slapped me across the face 
with a wet fish (figuratively speaking as It’s not Thursday yet!) by 
informing me that they are developing a mobile app for one of our 
companies that Apple have mandated an ipv6 DNS requirement before they 
publish.


At the moment, all our infrastructure from ISP device inwards is ipv4 
so setting up the zone on our DNS is going to require a lot of 
significant changes! There are a couple of things reference all this 
that I’m unsure about and am hoping you can educate me on.


Firstly, we are running bind 9.11 on Debian 10 hosts.

  * Is it worth us upgrading to Debian 11 to get the newer version of
bind?
  * Are there any issues/bugs/holes in 9.11 that will cause us a
problem, especially if we start messing with ipv6?
  * If I do upgrade the on-premise servers, is it better to do master
then slaves or the other way around?
  * If we have DNSSEC configured, is it going to break anything
upgrading? (I have lots of backups of the zones and hosts files)

Secondly, reference bind config

  * For the “listen-on-v6” statement, are the only options still
‘none’ or ‘all’?
  * Can the “listen-on-v6” only be enabled globally in the
‘named.conf.options’ or is it possible to enable per zone as we
are (currently) only going to have 1 zone needing ipv6?
  * Once ipv6 is enabled. Is it advisable to setup a sub-domain for
the ipv6 addresses to avoid dual-stacking?

The reverse zones for our ipv4 are handled (badly) by our local 
telecoms provider. How big an issue is it going to be for ipv6 if the 
reverse lookups are badly/not implemented?


If our ISP can’t give us a public ipv6 address, can we still run our 
bind to give out ipv6 addresses or not?


Finally, can anyone point me towards any good reading on bind 
configuration and DNS best practice (preferably with idiot proof 
examples)? I must decide fairly quickly if we roll this zone back to 
our domain registrar who is setup to handle ipv6 or do we strike out 
and bring our DNS setup up to date and future proofed!


Thanks for your time and expertise.

Andy Baker

**

You can get IPv6 via a tunnel broker.  Hurricane Electric 
(http://he.net/) is one of the larger ones.  You can get a /48 from them 
- for free.  Bandwidth is modest.  You can setup reverse zones; they'll 
delegate.  I don't think they support DNSSEC - it's been on their 
wishlist for years.


I use 9.11 (and have used previous) versions of bind with IPv6 - no IPv6 
issues.


Zones have nothing to do with dual stack.  If you create an AAAA record, 
your host can be found via IPv6.  If you create an A record, IPv4.  Both 
gives you "dual stack".  I tend to create x.v[46].example.net style 
names in addition to x.example.net for cases where I want one or the 
other.  This doesn't require a zone - it's just a name.
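
A sketch of that naming pattern in a zone file (the addresses are the 
documentation prefixes, purely illustrative):

    ; dual-stack name - clients get both records
    x     IN A     192.0.2.10
    x     IN AAAA  2001:db8::10
    ; protocol-specific names
    x.v4  IN A     192.0.2.10
    x.v6  IN AAAA  2001:db8::10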


One reason to not configure your host with both A and AAAA records may 
be that most resolvers will prefer V6, but if you have a tunnel for V6 & 
a wide pipe to your ISP, you may find that you're connecting thru the 
tunnel & limited by its bandwidth.


There is no requirement for named to listen on IPv6 for it to serve AAAA 
records.  That's orthogonal, and dependent on what the resolver(s) can 
live with.
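
If you do want named itself reachable over IPv6: listen-on-v6 is an 
options-level (server-wide) statement, so it cannot be enabled per zone.  
A minimal sketch (the address is illustrative):

    options {
        listen-on-v6 { any; };
        # or a specific address, no longer just the old any/none choice:
        # listen-on-v6 { 2001:db8::53; };
    };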


HE has a lot of IPv6 educational materials (not bind-specific) that are 
quite good.


Depending on where you are in the world, there are other brokers.  I 
switched to HE when SixXS went out of business and have been happy.  I 
have no other connection to HE.  YMMV.


DNSSEC doesn't care what transport protocol is used or what records are 
served.  It just signs them.  Moving, you do need to make sure that the 
keys and delegations are present on the receiving end.  Once the move is 
complete, it may be a good time to do a key roll.


Finally, it's not clear from your note how you're setup, but if you run 
your own servers, you need to meet the geographic dispersion rules.  At 
least 2 servers in two places.  That's true no matter what protocols you 
use.  There are backup DNS services that support IPv6.  A free one that 
supports both IPv6 and DNSSEC is puck.nether.net/dns.


There are plenty of DNS books/guides.  I'll let someone else do the reviews.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.




Re: Freezing a Zone vs. Stopping the DNS Server

2021-09-29 Thread Timothe Litt
Why make manual changes to the zone file?  The zone is already
dynamically updated, so the usual reasons (formatting, structure,
in-line signing) don't apply.

Use nsupdate to add your entries.  Named will update the zone, handle
updating the serial number - an even do some validation on the records. 
It's easier, doesn't stop service, and because it automates the
mechanics, safer.

BTW: I recommend using TSIG for authorization with nsupdate rather than
IP addresses.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 29-Sep-21 07:41, Frank Kyosho Fallon wrote:
> Hi,
>
> Occasionally I need to add hosts manually to forward/reverse lookup
> zones in BIND 9.16. We also have ISC DHCP. Both are on a Mac Mini
> using MacPorts to install.
>
> Since dynamic updates are continually in progress, I understand I need
> to use 'rndc freeze zone' and 'rndc thaw zone' before and after
> making changes (including manually incrementing the sequence number).
>
> Can I safely accomplish the same thing by issuing an 'rndc stop'
> command? Would that allow me to make zone changes followed by an 'rndc
> reload' command?
>
> Also, is it safe to simply reboot the server after OS updates, or is
> it necessary to manually stop the DNS server first?
>
> Does it matter where in the dynamically updated zone files I insert
> the new host A record and PTR record?
>
> With /etc/hosts I can add hosts on different subnets. To do that in
> DNS, do I first need to add a reverse zone for the additional subnet
> so that I can add PTR records to correspond to A records in the
> forward zone?
>
> Thanks for any light you can shed on this subject.
> -- 
> Frank Kyosho Fallon
> My pronouns are: He, HIm




Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
On 10-Sep-21 13:11, Evan Hunt wrote:
> Recently a critical bug was discovered in which map files that were
> generated by a previous version of BIND caused a crash in newer versions.
> It took over a month for anybody to report the bug to us, which suggests
> that the number of people willing to put up with such a finicky format
> must be pretty small. (Or that the people who use it aren't keeping up with
> regular software updates, I guess.)

Thanks for the history/data.

In my experience, the bigger the operator (of any system), the more
slowly they are likely to update it.

A month doesn't seem like a long time - everyone wants to be second to
update (except for CVEs, and even there I don't rush to update for CVEs
related to features I don't use).

> it would be nice not to have to worry about map files when it came to
> maintaining feature parity.)

I wouldn't worry all that much about blowing away old map files with a
version upgrade; they're pretty well documented as a cache, not a
primary format.  And you supply the tools to convert to a stable format.

In fact, were you to come up with a data structure and loading scheme
that made raw as fast as map, you could treat "map" as a hint that a
user values speed over size & portability - and just write raw format
instead.  Until the pendulum swings again.






Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
I'm not a consumer of this and agree that it's up to users to speak up,
so I'll stop here - with one final observation.

The issue comment containing the benchmarks includes:

> Speedup provided by the map format does not seem significant enough
> to warrant the complexity of map format, especially when we take into
> account that the difference measured in terms of "real time" is in
> order of 10s of seconds.
10s of seconds *per zone* can certainly add up.  Call it 10 secs/zone *
100,000 zones = 1M sec / 3600 = 278 hrs *saved*.

Suppose loading zones is not disk limited, and cores scale linearly
(e.g. no lock conflicts & an index lets each core find a zone to work on
for free).  So give it 16 cores (each taking on one complete zone), and
it's still 17 hrs saved.  Real life won't be that efficient - meaning
cores won't help that much.

A new memory mapped data structure that didn't require "updating node
pointers" (e.g. that used offsets instead of pointers) may be worth
considering.  In current hardware and with a decent compiler and coding,
the apparent cost of this over absolute pointers may well be vanishingly
small.
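
To make "offsets instead of pointers" concrete, a hypothetical sketch in C
(types and field names invented for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* Links are byte offsets from the start of the mapped file, so the
     * image is position-independent: no node-pointer fixups at load time,
     * wherever mmap() happens to place it. */
    typedef struct node {
        uint32_t left;   /* offset of left child; 0 means none */
        uint32_t right;  /* offset of right child */
        uint32_t name;   /* offset of the owner-name data */
    } node_t;

    static inline node_t *node_at(void *base, uint32_t off)
    {
        return off ? (node_t *)((char *)base + off) : NULL;
    }

The extra add per dereference is generally hidden by the memory latency
the load pays anyway.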

OK, that was two.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 10-Sep-21 12:56, Victoria Risk wrote:
>
>>>> After all the "other improvements in performance" that you cited,
>>>> what is the performance difference between map and the other formats?
>>>
>>> I don’t know that, to be honest. We don’t have the resources to
>>> benchmark everything. Maybe someone on this list could?  We would
>>> also like to be able to embark on a wholesale update to the rbtdb
>>> next year and this is the sort of thing that might complicate
>>> refactoring unnecessarily.
>
>
> I was wrong, and in fact we have benchmarked it.
> See https://gitlab.isc.org/isc-projects/bind9/-/issues/2882 for details.
> Map format is still faster than raw, but not so much faster that it
> warrants retaining it, given it is riskier, harder to maintain and we
> have no feedback from users that it is important to them.  It also
> seems not to work with large numbers of zones, (>100K) at least in
> current versions of 9.11 and 9.16, which is further indication that it
> isn’t in wide use or we would have had complaints. 
>
> We also have discussed internally that there are other factors, other
> than loading the zone files, that may have more impact on the time it
> takes a BIND server to restart.
>
> If anyone out there is using it successfully, and wants us to keep
> this feature, this would be the time to speak up!
>
> Thank you,
>
> Vicky


Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt

On 10-Sep-21 08:36, Victoria Risk wrote:
>
>
>> On Sep 10, 2021, at 7:24 AM, Timothe Litt <l...@acm.org> wrote:
>>
>> Clearly map format solved a big problem for some users.  Asking
>> whether it's OK to drop it with no statement of what those users
>> would give up today is not reasonable.
>>
> Actually, we are not sure there ARE any users. In fact, the one
> example I could come up with was Anand, who has replied to the list
> that he is in fact NOT using map zone.  I should have asked directly -
> is anyone on this list USING MAP ZONE format?
>
Well, if the answer is "no one", that simplifies matters :-)

I do remember that startup time was a big issue before map came out, and
that the complaints subsided thereafter.  No personal knowledge as to
whether that was cause and effect or a realignment of the planets.  In
general, I don't look to Astrology for answers :-)

>> After all the "other improvements in performance" that you cited,
>> what is the performance difference between map and the other formats?
>
> I don’t know that, to be honest. We don’t have the resources to
> benchmark everything. Maybe someone on this list could?  We would also
> like to be able to embark on a wholesale update to the rbtdb next year
> and this is the sort of thing that might complicate refactoring
> unnecessarily.

IIRC, when I did some work on the stats channel & was concerned with
scalability, Evan said that you keep some large datasets (1M+zones)
around for testing and produced some numbers for that.  So it ought to
be possible to get some basic data.

I'm not suggesting a full benchmarking campaign -but one or two
datapoints are a lot better than none.  E.g. If there's no difference
with 1 or 10M zones with, say, 10K records each, it's pretty clear that
map's time is past.  If it's orders of magnitude faster (and it's used),
it's not.

I don't remember - did your user survey ask about how many/how large
zones people serve?  I vaguely think so, but it's been a while...

>> For a case which took 'several hours' before map was introduced, what
>> would the restart time be for named if raw format was used now?
>>
>>> If I knew that I would have said. 'Raw’ was much faster than the
>>> text version. Map was faster than raw. Raw is apparently not a
>>> problem to maintain.  I believe the improvement with raw was ~3x.
>>>
>
I think the questions are: (a) is startup time an issue (however it's
solved)?, (b) if so, is map format the solution? (c) If it is and people
are using it, what would the consequences be to them if it went away?
(d) If it is, and people aren't using it - is the documentation too
scary (as Anand said it is for him)?
>> It's pretty clear to me that if map format saves a few seconds in the
>> worst case, it's not worth keeping.  If it saves hours for large
>> operators, then the alternative isn't adequate.  Maybe "map" isn't
>> the answer - how might 'raw' compare to a tuned database back end? 
>> (Which has other advantages for some.)  What if operators specified a
>> priority order for loading zones?  Or zones were loaded on demand
>> during startup, with low activity zones added as a background task? 
>> Or???
>
> Well, back when we added map zone format, startup time was a major
> pain point for some users. Now, it seems as though large operators are
> updating their zones all the time (also updating RPZ feeds) and
> efficiency in transfers seems to be a bigger issue. 
>
What I was getting at is how hard "startup time" is to define. 
Time to serving all zones?  Important zones? Is it OK for responses to
be slow during startup, or is startup only complete when responses are
at nominal speed?

I wonder if this comes from large operators using a database(DLZ)  back
end.  Database developers tend to have a single-minded focus on
performance, and direct updates are probably faster than going thru
named & its generalized authentication/validation.  Plus, depending on
how you set up your server architecture, DB replication can replace DNS
zone transfers.

> We don’t have any direct data on what features are being used, we can
> only judge based on complaints we receive via bug tickets or posts on
> this list.
You did a survey a while back...
>>
>> A fair question for users would be what restart times are acceptable
>> for their environment - obviously a function of the number and
>> size/content of zones.  And is a restart "all or nothing", or would
>> some priority/sequencing of zone availability meet requirements?
>>
> That is a good question. Can you answer it for yourself?

Sure.  I'm not a large operator, but I've always thought big and
implemented smaller.  

Re: Notice of plan to deprecate map zone file format

2021-09-10 Thread Timothe Litt
Vicky,

I never reflexively "howl in protest", but it's really hard to have an
opinion on this proposal without some data.

Clearly map format solved a big problem for some users.  Asking whether
it's OK to drop it with no statement of what those users would give up
today is not reasonable.

After all the "other improvements in performance" that you cited, what
is the performance difference between map and the other formats?

For a case which took 'several hours' before map was introduced, what
would the restart time be for named if raw format was used now?

It's pretty clear to me that if map format saves a few seconds in the
worst case, it's not worth keeping.  If it saves hours for large
operators, then the alternative isn't adequate.  Maybe "map" isn't the
answer - how might 'raw' compare to a tuned database back end?  (Which
has other advantages for some.)  What if operators specified a priority
order for loading zones?  Or zones were loaded on demand during startup,
with low activity zones added as a background task?  Or???

A few data points would get you much more useful responses. 

A fair question for users would be what restart times are acceptable for
their environment - obviously a function of the number and size/content
of zones.  And is a restart "all or nothing", or would some
priority/sequencing of zone availability meet requirements?

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 09-Sep-21 15:13, Victoria Risk wrote:
> Greetings bind-users,
>
> The `map` zone file format was introduced in BIND
> 9.10. 
> https://bind9.readthedocs.io/en/v9_16_20/reference.html?highlight=map%20zone#additional-file-formats
>
> At the time, this format significantly sped up a named restart, which
> could take several hours in some situations. This new file format is
> very complicated, and maintaining it has proven difficult. Meanwhile,
> the performance advantage versus the `raw` format, or the default text
> files, has decreased as we have made other improvements in performance. 
>
> We would like to deprecate the `map` zone file format in future
> branches of BIND. The proposal is to deprecate the feature in the 9.16
> branch, (users will see a warning when this feature is used but it
> will still work through the end of the 9.16 branch), and to disable
> the feature in 9.20.0 (it will no longer work in this and subsequent
> versions). 
>
> Per our policy on deprecating named options, we are notifying the user
> mailing list.  You are welcome now to howl in protest or point out
> something we haven’t considered.  ;-)
>
> Regards,
>
> Vicky Risk
> Product Manager




Re: RE: No more support for windows

2021-06-10 Thread Timothe Litt
On 09-Jun-21 18:46, Richard T.A. Neal wrote:
> Evan Hunt wrote:
>
>>> My understanding is BIND will still run fine under WSL; it's only the 
>>> native Visual Studio builds that we're removing. 
>>> For people who want to run named on windows, WSL seems like the best way to 
>>> go.
> Sadly no. To quote myself from an earlier email on this topic:
>
> There are two versions of WSL: WSL1 and WSL2. Development has all but ceased 
> on WSL1, but WSL1 is the only version that can be installed on Windows Server 
> 2019.
>
> Microsoft have not yet confirmed whether WSL2 will be available for Windows 
> Server vNext (Windows Server 2022, or whatever they name it).
>
> Even if WSL2 is made available for Windows Server 2022 it has some serious 
> networking limitations: it uses NAT from the host, so your Linux instance 
> gets a private 172.x.y.z style IP address, and that IP address is different 
> every reboot. Proxy port forwarding must therefore be reconfigured on every 
> reboot as well.
>
> Personally I'm comfortable with the decision that's been made and I 
> understand the logic. Saddened, like saying goodbye to an old friend, but 
> comfortable.
>
> Richard.

As I suggested early on, it would be great if the tools could somehow be
available as native binaries.  Sounds like there's progress there -
thanks Evan!

As for running a BIND server, all things considered it seems to me that
the simplest approach is to create a bare-bones VM running Linux.  Run
that on the windows server (use VMware, VirtualBox)  If the only things
running in that machine are named, a firewall, a text editor, logwatch,
and backups, there's really not much effort in keeping that machine
running.  Just remember to do a distribution update once in a while
(e.g. dnf update/apt-get, etc).  You might want to keep
SeLinux/Apparmor, but with no other services, it may not be worth the
effort.  You can tailor Linux distributions down to a very minimal set
of services.  It's often done for embedded applications.  You can even
do the backups by snapshotting the VM.

You can update the zone files via UPDATE.  You can update the config
(and zone files if you like) in the VM, or via an exported directory
from the Windoze host.  (E.g. VirtualBox does this trivially.)

This would completely eliminate the complexity of dealing with the
Windows networking stack - the Linux machine (and named) just see an
ethernet adapter (or two, or...) on the host's network.  (Mechanically,
the VM's "adapter"  injects and retrieves raw ethernet packets into the
driver stack very close to the wire.)  No NAT or proxy (unless you want
it, in which case it can be static.)  And whatever kernel
features/networking libraries ISC uses are just there - no porting.

I haven't measured performance, but I do run my Linux machines in
VirtualBox VMs (mostly hosted on a Linux server, but some on Windows). 
I haven't run into issues - but then I'm not a big operator.  I do use
CPUs (and IO) with hardware virtualization support. 

In any case, the workload on ISC would be zero - unless they choose to
provide the VM (there are portable formats).  That work might be
something that someone who wants a Windows solution could afford to
sponsor.  The biggest part would be scripting packaging from the
selected distro and a test system.  Plus a bit of keeping it
up-to-date.  And documentation.  Optionally, someone might want to do
some configuration/performance tuning - but most of that is what ISC
does anyway inside the VM.  Again, the work would seem to be something
that the Windows community could donate and/or sponsor.

It might even be the case that ISC could use the same VM as part of its
test suite - many CI engines are using that approach to get wide
coverage with minimal hardware.  (The CI folks, like GitHub Actions,
GitLab, etc spin up a VM, install the OS and minimal packages, then run
your tests.)

I confess that this is a practical approach - it won't satisfy those who
insist on a "pure" windows solution. (Though I bet if you looked inside
their routers, storage, phone systems, and certainly cars there'd be
Linux purring away under the hood...)  Nor anyone who thinks that the
status quo is ideal or that only a "no effort" solution is acceptable. 
Anyhow, it's not an attempt to start a religious war or to prolong the
debate on what ISC does.  It assumes BIND won't support windows, that
WSL is imperfect, and that an alternative to complaining might be
helpful...  Feel free to s/Linux/(Solaris|FreeBSD|VMS|yourfavorite)/g.

I don't have a need for BIND (except the tools) under Windows, so I'm
not volunteering to implement this.

FWIW.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: root.hints - apparmor access error with Bind from PPA

2021-06-04 Thread Timothe Litt
I'm not an apparmor user - but have you looked at the parent directory
permissions?  From what you posted, that would be the logical culprit.

In any case, unless you are using a private root zone, since named has
the root nameserver addresses built-in, the use of root.hint is
unnecessary.  (Even if one or two change addresses before the next
release, as does happen infrequently, once named starts it will ask the
network for the full set.  It only needs one - of the 13 - to bootstrap
itself.)

There is an argument for running your own root server with a copy of the
root zone - but most small operators don't.  Simplifying, it makes sense
if you are "far" from the global root servers, have regular outages that
leave a local region intact, or are very concerned about privacy.  (In
the latter case, qname minimization is likely a better choice.)

It seems that a lot of distributions configure a root.hint out of
habit.  It's actually a step backwards, since unless you have a process
to update root.hint, your copy is likely to end up being older than
named's built-ins...
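
A quick way to check a hints file for drift (a sketch; any root server
will do):

    # the live root NS set, straight from a root server
    dig +norecurse @a.root-servers.net . NS
    # versus the shipped file
    grep -v '^;' /usr/share/dns/root.hints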

It's been a while since I looked, but at that time, a 20ish year old
root.hint had only a couple of IPv4 addresses wrong.  (Didn't have many
IPv6.)  root.hint really IS stable - and so, therefore, are the named
built-ins.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 03-Jun-21 22:45, 3coma3 wrote:
> Dear list:
>
> I've used the PPA at https://launchpad.net/~isc/+archive/ubuntu/bind to
> upgrade
> bind from 9.11.3+dfsg-1ubuntu1.15 (current version for
> bionic-{updates,security}) to 9.16.16-2+ubuntu18.04.1+isc+1
>
> (I was needing to use the validate-except clause and this new version
> supports it)
>
> After the upgrade, attempting to start the named service failed with
> this error:
> Jun  3 22:03:53 top named[19946]: could not configure root hints from
> '/usr/share/dns/root.hints': permission denied
>
> Right below that apparmor logs this:
>
> Jun  3 22:03:53 top kernel: [17981.067014] audit: type=1400
> audit(1622768633.158:559): apparmor="DENIED" operation="open"
> profile="/usr/sbin/named" name="/usr/share/dns/root.hints" pid=19946
> comm="isc-worker" requested_mask="r" denied_mask="r" fsuid=129 ouid=0
>
>
> What's puzzling is that the apparmor profile apparently allows the read
> @ line 36:
>
> find /etc/apparmor.d -type f | xargs grep -n '/usr/share/dns'
> /etc/apparmor.d/usr.sbin.named:36:  /usr/share/dns/root.* r,
>
> dpkg -S /etc/apparmor.d/usr.sbin.named
> bind9: /etc/apparmor.d/usr.sbin.named
>
> apt-cache policy bind9
> bind9:
>   Installed: 1:9.16.16-2+ubuntu18.04.1+isc+1
>   Candidate: 1:9.16.16-2+ubuntu18.04.1+isc+1
>   Version table:
>  *** 1:9.16.16-2+ubuntu18.04.1+isc+1 500
>     500 http://ppa.launchpad.net/isc/bind/ubuntu bionic/main amd64
> Packages
>     100 /var/lib/dpkg/status
>  1:9.11.3+dfsg-1ubuntu1.15 500
>     500 http://mirrors.us.kernel.org/ubuntu bionic-updates/main
> amd64 Packages
>     500 http://security.ubuntu.com/ubuntu bionic-security/main amd64
> Packages
>  1:9.11.3+dfsg-1ubuntu1 500
>     500 http://mirrors.us.kernel.org/ubuntu bionic/main amd64 Packages
>
>
> Although the error appears to not be related to file perms, here's for
> completeness:
>
> ls -la /usr/share/dns
> total 28
> drwxr-xr-x   2 root root    55 dic 13  2019 .
> drwxr-xr-x 457 root root 12288 jun  3 21:44 ..
> -rw-r--r--   1 root root   166 feb  1  2018 root.ds
> -rw-r--r--   1 root root  3315 feb  1  2018 root.hints
> -rw-r--r--   1 root root   864 feb  1  2018 root.key
>
>
> It helped me to find a previous report at
> https://lists.isc.org/pipermail/bind-users/2020-July/103454.html
>
> And then I ended up solving the problem as Brett did there, by copying
> /usr/share/dns to /etc/bind/dns and changing the zone definition.
>
> Still I am reporting this in case it's affecting someone else, and
> because maybe you guys have an idea as to what's going on with apparmor
> here? I'm not very knowledgeable in it and would appreciate any info /
> help to solve the root cause (and maybe learn something).
>
> Thanks in advance
>
>
> full log:
>
> Jun  3 22:03:53 top systemd[1]: Started BIND Domain Name Server.
> Jun  3 22:03:53 top named[19946]: starting BIND 9.16.16-Ubuntu (Stable
> Release) 
> Jun  3 22:03:53 top named[19946]: running on Linux x86_64
> 5.6.7-050607-generic #202004230933 SMP Thu Apr 23 09:35:28 UTC 2020
> Jun  3 22:03:53 top named[19946]: built with '--build=x86_64-linux-gnu'
> '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr

Re: Deprecating BIND 9.18+ on Windows (or making it community improved and supported)

2021-04-29 Thread Timothe Litt
I gave up on running named on Windows long ago, so I generally support
this direction.

However, I do use the diagnostic tools (dig, delv, rndc, nsupdate) for
troubleshooting.  It can be helpful to diagnose from the client
environment (e.g. through the same firewalls, anti-virus, buggy network
stack, and APIs).  The BIND tools are better than the Windows tools, and
using the same tools everywhere is always beneficial.

Would reducing support to just the diagnostic tools be a helpful middle
ground?

It seems to me that they're much simpler (mostly if not all
single-threaded) and easier to maintain.  Do they have the same VS
issues? (I haven't built on Windows for some time.)

I don't include tools that assume a local named instance in the
"diagnostic" category - e.g. named-journalprint, dnstap, etc. 

A first-order discriminant is whether the tool talks to the network to
make DNS queries (no, not named itself!), including control traffic - if
yes: prefer to keep it.

FWIW - YMMV.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 29-Apr-21 07:35, Ondřej Surý wrote:
> Hi,
>
> we’ve been discussing the /subj for quite some time and we are either 
> thinking about deprecating the BIND 9 on Windows completely or just handing 
> it over to the “community supported” level.
>
> There are a couple of reasons for the move:
>
> * Neither the VisualStudio 2017 which we use nor VS2019 supports the C11 
> features we extensively use (stdatomic.h), which forces us to write horrible 
> shims on top of the Windows API
> * No BIND 9 developer uses Windows even as a secondary platform
> * BIND 9 doesn’t compile on Windows 10 nor with VS2019, and fixing that 
> would require extensive work
> * Windows now has WSL2 
> (https://docs.microsoft.com/en-us/windows/wsl/install-win10) that can be used 
> to run BIND 9 natively
>
> We think that the resources required to support new Windows and 
> Visual Studio versions would be better spent elsewhere, and therefore we would 
> like to deprecate the official support for Windows starting with BIND 9.18 (the next 
> ESV, to be released in 2022); the Windows support for BIND 9.16 will be kept intact.
>
> Now, there are two options:
>
> a) The support will be completely dropped and the official way to run BIND 9 
> on Windows would be using WSL2
> b) A volunteer will step up and improve the Windows implementation to support 
> newer platforms and make it up to par with POSIX platforms.
>
> 1. Let me be absolutely clear here - we are not interested in keeping the 
> Windows port on life support; that would miss the point. It has been 
> neglected for too long and if we are to keep it, there are several other 
> areas that would need an improvement - the installer, the system integration 
> and the build system would have to be extensively improved as well.
>
> Thanks,
> Ondrej
> --
> Ondřej Surý (He/Him)
> ond...@isc.org




Re: Status of zytrax.com "DNS for Rocket Scientists" website

2021-04-21 Thread Timothe Litt
Meantime, you can find it on archive.org:

https://web.archive.org/web/20201223052910/https://www.zytrax.com/

https://web.archive.org/web/20201223034301/https://www.zytrax.com/books/dns/

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 20-Apr-21 19:09, Victoria Risk wrote:
> Ron Aitchison called me this afternoon. He is fine, and he promised to 
try to resurrect his site. He has been struggling with his hosting provider and 
he said he *might* be looking for new hosting, but he hasn’t thrown in the 
towel yet.  
>
> I will report back if I get further updates. I told him that a lot of users 
> still find his site very useful and to let ‘us’ know 
if he ever plans to pull the plug. 
>
> Vicky
>
>> On Apr 19, 2021, at 8:49 AM, Victoria Risk  wrote:
>>
>> I will contact Ron and see what is up.
>>
>> Thank you for pointing it out Carsten!
>>
>> Vicky
>>
>>> On Apr 19, 2021, at 7:21 AM, Richard T.A. Neal  
>>> wrote:
>>>
>>> Carsten Strotmann wrote:
>>>
>>>> does anyone know about the status of the zytrax.com website and the 
>>>> excellent "DNS for Rocket Scientists" guide?
>>>> The webpage first had a x509 certificate error (expired) in December
>>>> 2020 and now the web server is unreachable.
>>>> I (and colleagues) have tried to reach Ron Aitchison by mail and other 
>>>> communication means, but no success.
>>> Unfortunately I don't but if anyone is able to make contact with Ron I'd be 
>>> very happy to offer to host an archive of the site at no cost.
>>>
>>> Best,
>>> Richard.
>>>
>




Re: How Zone Files Are Read

2020-12-16 Thread Timothe Litt

On 16-Dec-20 13:52, Tim Daneliuk wrote:
> On 12/16/20 12:25 PM, Timothe Litt wrote:
>> On 16-Dec-20 11:37, Tim Daneliuk wrote:
>>> I ran into a situation yesterday which got me pondering something about 
>>> bind.
>>>
>>> In this case, a single line in a zone file was bad.  The devops automation
>>> had inserted a space in the hostname field of a PTR record.
>>>
>>> What was interesting was that - at startup - bind absolutely refused
>>> to load the zone file at all.  I would have expected it to complain
>>> about the bad record and ignore it, but load the rest of the
>>> good records.
>>>
>>> Can someone please explain the rationale or logic for this?  Not 
>>> complaining,
>>> just trying to understand for future reference.
>>>
>>> TIA,
>>> Tim
>> DNS is complicated.  The scope of an error in a zonefile is hard to 
>> determine.
>>
>> To avoid this, your automation should use named-checkzone before releasing a 
>> zone file.
>>
>> This will perform all the checks that named will when it is loaded.
>>
>
> Kind of what I thought.  Whoever built the environment in question
> really didn't understand DNS very well and hacked together a kludge
> that I am still trying to get my head around.
>
For a simple example of why it's complicated - what if the typo you had
was for a host that sends e-mail?

You'll see intermittent delivery errors when remote hosts can't resolve
the host's address; some require that a reverse lookup resolve to the
host as an anti-spoofing measure.  Others won't.  You'll spend a long
time diagnosing.

named can't tell this case from a typo for a local printer's PTR - where
it's unlikely that a reverse lookup failure will matter.  Of course,
this means it could go undetected for years - until it IS needed.

Or the typo is in a NS record - which you probably won't detect until
the other NS goes down...

And, any errors are cached for their TTL by resolvers.  The TTL may
(hopefully for query rate reduction) be large.  In your case, it would
be the negative TTL (meaning that even adding the record later wouldn't
have immediate effect).

The bottom line is that named must assume that anything placed in a zone
file is important, and that the external impact - either sin of omission
or commission - might be large.

Thus, while named can't detect all (or even most) errors, those that it
does detect cause immediate failure to load.  That prevents caching and
propagation as well as getting human attention.

When something's wrong, it's best to stop and fix it.  Error recovery is
a very good thing - but only when you can demonstrate that the cure is
better than the disease.  Skipping format errors in a zone file would
not satisfy that constraint.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 






Re: How Zone Files Are Read

2020-12-16 Thread Timothe Litt
On 16-Dec-20 11:37, Tim Daneliuk wrote:
> I ran into a situation yesterday which got me pondering something about bind.
>
> In this case, a single line in a zone file was bad.  The devops automation
> had inserted a space in the hostname field of a PTR record.
>
> What was interesting was that - at startup - bind absolutely refused
> to load the zone file at all.  I would have expected it to complain
> about the bad record and ignore it, but load the rest of the
> good records.
>
> Can someone please explain the rationale or logic for this?  Not complaining,
> just trying to understand for future reference.
>
> TIA,
> Tim

DNS is complicated.  The scope of an error in a zonefile is hard to
determine.

To avoid this, your automation should use named-checkzone before
releasing a zone file.

This will perform all the checks that named will when it is loaded.
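
A minimal sketch of such a pre-release gate (zone name and path are
placeholders):

    named-checkzone -q example.com /var/named/db.example.com \
        || { echo "zone has errors - not releasing" >&2; exit 1; }

named-checkzone exits non-zero on any error named would reject, so it
drops straight into a release script or CI job.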

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: [External] Re: How can I launch a private Internet DNS server?

2020-11-08 Thread Timothe Litt
On 07-Nov-20 14:06, Tom J. Marcoen wrote:
> Having at least two name servers is not a requirement by the RFC
> standards, but which TLD allows for only one NS server to be given when
> you register a domain?
>
> On Sat, 7 Nov 2020 at 16:53, Kevin A. McGrail wrote:
>
> On 11/7/2020 10:15 AM, Reindl Harald wrote:
>>
>> https://tools.ietf.org/html/rfc1537
>> Common DNS Data File Configuration Errors
>>
>> 6. Missing secondary servers
>>
>> > It is required that there be a least 2 nameservers
>> > for a domain.
>>
>> -
>>
>> that above is common knowledge virtually forever and the
>> difference of "must" and "should" in IETF wordings is also very
>> clear 
>
> While I agree this is common knowledge as a best practice, this
> rfc is a memo NOT a standard from my reading:
>
>   This memo provides information for the Internet community.  It does
>not specify an Internet standard.  Distribution of this memo is
>unlimited.
>
> Regards,
> KAM
>
>

I'm amazed that this thread has persisted for so long on this list of
knowledgeable people.

RFC1034 <https://tools.ietf.org/html/rfc1034>, one of the two
foundational RFCs for the DNS:

P.18 in section 4.1 (NAME SERVERS => Introduction):

A given zone will be available from several name servers to insure its
availability in spite of host or communication link failure.  By
administrative fiat, we require every zone to be available on at least
two servers, and many zones have more redundancy than that.

In case the font is too small, the key phrase is:

"we require every zone to be available on at least two servers"

That's "REQUIRE" at least TWO SERVERS.

https://tools.ietf.org/html/rfc1537 documents common misconfigurations -
that is, cases of non-conformance to the RFCs that the author
encountered circa 1993.  It was superseded in 1996 by RFC 1912
<https://tools.ietf.org/html/rfc1912>, where section 2.8 starts with
"You are required to have at least two nameservers for every domain". 
Neither document supersedes RFC1034; rather they attempt to help with
interpreting it.

https://www.iana.org/help/nameserver-requirements  consolidates
information from several RFCs, since the DNS has evolved over time.  It
is not an RFC, but a convenient summary.  It primarily documents the
tests performed by IANA when it processes a delegation change to the
root, .INT, and .ARPA zones.  These tests validate conformance to the
RFCs.  As the introduction says, "These tests do not measure against
best practices or comprehensively measure protocol conformance. They are
a practical set of baseline requirements that catch common
misconfiguration errors that impact stable operations of the DNS."

Bottom line: two servers per zone are required by the DNS architecture. 
It's not folklore.  It's not optional.

It is true that the DNS is robust enough to function with a number of
misconfigurations (including just one server for a zone, since in
practice this is almost indistinguishable from transient conditions.)

Nonetheless, the goal of the DNS architecture (and most of its
operators) is to have a stable and robust name service. 
Misconfigurations, such as those documented in RFC 1537, make the DNS
unstable and fragile.  The architecture tends to contain the effects of
many misconfigurations, but that doesn't make them wise.

As I noted earlier: "DNS appears deceptively simple at first blush. 
Setting up a serviceable infrastructure requires an investment of
thought and on-going maintenance.  You will not be happy if you skimp on
that investment, since broken DNS is externally visible - and frequently
catastrophic."

I'll finish with a 1987 quote from Leslie Lamport on distributed
systems, which the DNS most certainly is:

"A distributed system is one in which the failure of a computer you
didn't even know existed can render your own computer unusable."

Can the quibbling stop now?

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: How can I launch a private Internet DNS server?

2020-11-07 Thread Timothe Litt

On 06-Nov-20 08:50, Reindl Harald wrote:
>
>
> On 06.11.20 at 13:25, Tom J. Marcoen wrote:
>> First of all, sorry that I cannot reply within the thread, I was not
>> yet a member of the mailing list when those emails were sent.
>>
>>> On Thu 15/Oct/2020 18:57:16 +0200 Jason Long via bind-users wrote:
>>>>
>>>> Excuse me, I just have one server for DNS and that tutorial is
>>>> about secondary
>>>> DNS server too.
>>>
>>> Just skip the chapter about the secondary.  You're better off buying
>>> secondary
>>> DNS services externally.  A good secondary offloads your server
>>> noticeably, and
>>> keeps the domain alive in case of temporary failures.
>>>
>>> Best
>>> Ale
>>
>> Is it not a requirement to have at least two authoritative name
>> servers? I believe all TLDs require at least two name servers but I
>> must be mistaking as no one pointed this out yet.
>
> yes, and "You're better off buying secondary DNS services externally"
> don't say anything else
>
> the point is that the two nameservers are required to be located on
> two different ip-ranges anyways to minimize the risk that both going
> down at the same time
>
Do a web search for "secondary dns provider" and "backup dns provider". 
There are a number of them, some paid, some free.   Not all are equal -
last time I looked, support for DNSSEC was uncommon, especially among
the free ones.  IPv6 support has been lagging, but improving.  Also, if
you use UPDATE, make sure the service that you use supports NOTIFY. 
Some limit or charge according to the number of queries, zones and/or
names - but that doesn't necessarily correlate with price. 

Also look for minimum TTL restrictions - especially with free services. 

I use a free service that does support IPv6, DNSSEC & NOTIFY - and runs
on BIND.

Often the external services provide better geographic diversity than a
small operation can - and have better internet connections. 

If you have the resources, you can also setup an agreement with a
similarly-situated organization for mutual secondary service - you slave
their zones & they slave yours.  This can work well - often at no cost -
especially if the resource demands are roughly equal.
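
A minimal sketch of such an arrangement in named.conf (names and
addresses are illustrative; in practice you'd also protect the transfers
with TSIG):

    // On your primary:
    zone "example.net" {
        type master;
        file "db.example.net";
        allow-transfer { 203.0.113.53; };   // partner's server
        also-notify { 203.0.113.53; };
    };

    // On the partner's server:
    zone "example.net" {
        type slave;
        masters { 192.0.2.53; };            // your primary
        file "slaves/db.example.net";
    };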

Other caveats: external services typically won't use hostnames in your
domain - or if you want that, will charge you for it.  And if you depend
on views, external services will only work for external views - you'll
need to provide your own secondary servers for internal-only views. 

Finally, if performance matters and you have a dispersed user base, look
for a provider that has a solid infrastructure - ANYCAST is one good
clue.  You'll almost always have to subscribe to a paid service in these
cases, especially with high query rates.

RFC2182 (https://tools.ietf.org/html/rfc2182) is fairly readable and
describes many of the considerations involved in selecting secondary DNS
servers. 

DNS appears deceptively simple at first blush.  Setting up a serviceable
infrastructure requires an investment of thought and on-going
maintenance.  You will not be happy if you skimp on that investment,
since broken DNS is externally visible - and frequently catastrophic.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Request for review of performance advice

2020-07-10 Thread Timothe Litt
These suggestions - like most performance articles - are oriented toward
achieving the highest performance with large configurations.  E.g. "How
big can/should you go to support big loads?"

That's useful for many users.  But there are also many people who run
smaller operations, where the goal is to provide adequate (or even
exceptional) performance with a minimum footprint. When BIND is one of
many services, overall performance can be improved by minimizing BIND's
resource requirements.  This is also true in embedded applications,
where footprint matters.

So a discussion about how to optimize for the smaller cases - what do
you trade off?  What knobs can one turn down, and how far? - would be a
useful part of, or a complement to, the proposed article.  E.g. "How small
can/should you go when your loads are smaller?"

FWIW, a wizard - even just a spreadsheet - that encapsulates known
performance results might also be useful.  E.g. Given a processor,
number/size of zones, query rate, & type, produce a memory size, disk &
network I/O rates, and starting configuration parameters... Obviously,
this could become arbitrarily complicated, but a simple spreadsheet with
configuration (hardware & software) and performance data that's
searchable would give people a good starting point.  Especially if it's
real-world. (It can be challenging to map artificial
"performance"/stress tests done in a development/verification
environment to the real world...)  While full automation can be fun,
it's amazing how much one can get out of a spreadsheet with autofilter. 
(For the next level, pivot tables and/or charts...)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 07-Jul-20 21:57, Victoria Risk wrote:
> A while ago we created a KB article with tips on how to improve your
> performance with our Kea dhcp server. The tips were fairly obvious to
> our developers and this was pretty successful. We would like to do
> something similar for BIND, provide a dozen or so tips for how to
> maximize your throughput with BIND. However, as usual, everything is
> more complicated with BIND.
>
> Can those of you who care about performance, who have worked to
> improve your performance, share some of your suggestions that have the
> most impact?  Please also comment if you think any of these ideas
> below are stupid or dangerous. I have combined advice for resolvers
> and for authoritative servers, I hope it is clear which is which...
>
> The ideas we have fall into four general categories:
>
> System design
> 1a) Use a load balancer to specialize your resolvers and maximize your
> cache hit ratio.  A load balancer is traditionally designed to spread
> the traffic out evenly among a pool of servers, but it can also be
> used to concentrate related queries on one server to make its cache as
> hot as possible. For example, if all queries for domains in .info are
> sent to one server in a pool, there is a better chance that an answer
> will be in the cache there.
>
> 1b) If you have a large authoritative system with many servers,
> consider dedicating some machines to propagate transfers. These
> machines, called transfer servers, would not answer client queries,
> but just send notifies and process IXFR requests.
> 1c) Deploy ghost secondaries.  If you store copies of authoritative
> zones on resolvers (resolvers as undelegated secondaries), you can
> avoid querying those authoritative zones. The most obvious uses of
> this would be mirroring the root zone locally or mirroring your own
> authoritative zones on your resolver.
>
> we have other system design ideas that we suspect would help, but we
> are not sure, so I will wait to see if anyone suggests them.
>
> OS settings and the system environment
> 2a) Run on bare metal if possible, not on virtual machines or in the
> cloud. (any idea how much difference this makes? the only reference we
> can cite is pretty out of date
> - 
> https://indico.dns-oarc.net/event/19/contributions/234/attachments/217/411/DNS_perf_OARC_Apr_14.pdf
> )
>
> 2b) Consider using with-tuning-large.
> (https://kb.isc.org/docs/aa-01314) This is a compile time option, so
> not something you can switch on and off during production. 
>
> 2c) Consider which R/W lock choice you want to use -
> https://kb.isc.org/docs/choosing-a-read-write-lock-implementation-to-use-with-named
> For the highest tested query rates (> 100,000 queries per second),
> pthreads read-write locks with hyper-threading /enabled/ seem to be
> the best-performing choice by far.
>
> 2d) Pay attention to your choice of NIC cards. We have found wide
> variations in their performance. (Can anyone suggest what specifically

Re: Question About Recursion In A Split Horizon Setup

2020-04-17 Thread Timothe Litt
On 17-Apr-20 10:56, Tim Daneliuk wrote:
> On 4/17/20 9:50 AM, Bob Harold wrote:
>> Agree, that's odd, and not what the man page says.  Any chance that there is 
>> some other DNS helper running, like resolved, nscd, dnsmasq, etc?
> Nope.  This is vanilla FreeBSD with vanilla bind running.
>
>> 'dig' should tell you what address it used, at the bottom of the output - 
>> what does it say?
>
>
> ;; Query time: 0 msec
> ;; SERVER: ::1#53(::1)
> ;; WHEN: Fri Apr 17 09:53:51 CDT 2020
> ;; MSG SIZE  rcvd: 83
>
>
> Does the SERVER line indicate it's trying to get to the local instance via
> IPV6 or is this just standard notation?  (This is an IPV4 only environment).
>
>
You seem to be selecting views based on IP address.

If the host on which you are running dig is multi-homed, the OS may pick
a source address other than what you intend.  Use -b to explicitly bind
to a particular source address.

(Or, if you use TSIG to match views, -k)
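
For example (addresses are illustrative):

    dig -b 192.0.2.10 @ns1.example.net www.example.net A

sends the query from source address 192.0.2.10, so named's match-clients
selects the view you expect.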


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: DNSSEC - many doubts

2020-04-03 Thread Timothe Litt
The entropy problem is especially severe in many VMs.  Besides Warren's
suggestion:

Many current machines have hardware random noise sources that solve (or
at least put a big dent in) the entropy problem.  A Raspberry Pi is
inexpensive, and unless you are generating zillions of keys, will solve
most of these issues.  I use entropy broker
https://www.vanheusden.com/entropybroker/ to distribute entropy from a Pi to
my network.  (And you can always add another RPi.)  I don't recall the
last time I ran out of entropy - and no, I'm not talking about the
"organization" of my physical desktop :-)

For a while, USB keys with entropy sources were a good choice - but with
hardware sources built into most CPUs, I think their time has passed.
The same low-power RPi that feeds entropy is also a great NTP server,
VPN gateway and a few other things - for ~USD 40.  Or any Intel or AMD
CPU since ~2015 has RDRAND/RDSEED.
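
Two quick checks on a Linux box (paths are Linux-specific):

    grep -o -m1 -E 'rdrand|rdseed' /proc/cpuinfo   # hardware RNG instructions?
    cat /proc/sys/kernel/random/entropy_avail      # kernel entropy pool now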

There are some religious arguments about booby-trapped hardware sources -
these days, kernels will mix all sources, so I don't get too upset.  But
YMMV.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 02-Apr-20 11:58, Warren Kumari wrote:
> On Thu, Apr 2, 2020 at 11:14 AM David Alexandre M. de Carvalho
>  wrote:
>> Hello, good afternoon.
>> My first post in this list :)
>>
>> I'm running BIND Chroot for many years (currently version 9.8.2) on some old 
>> hardware running Oracle Linux 6.
>> I believe it was last year when I was reading about implementing DNSSEC, and 
>> I think I've even tried to generate a
>> keypair in the slowest server, which after more than a day, wasn't ready 
>> yet. Maybe I was doing something wrong, I
>> honestly don't know.
> You almost definitely were -- even a really really slow machine should
> be able to generate keys in a small number of seconds -- you didn't
> list what commands you used, but I'm going to assume you were trying
> to generate an rsa key - you should be able to get a feel for how long
> this takes by running:
> time openssl genrsa -out private.key 2048
> or
> time openssl genrsa -out private.key 4096
>  (note that this is very different to running 'openssl speed rsa2048
> rsa4096', which benchmarks RSA operations, not key generations).
>
> I'm fairly sure that your issue was a lack of entropy -- in order to
> generate cryptographically good keys, you need a good source of
> randomness. If you are running an older machine and older kernel, the
> /dev/random source is blocking, and if you try and read too much from
> it it will just hang until it has enough entropy to give "safe"
> output. Newer kernels do a better job of mixing in external event
> noise, but there are a number of modules which help with this -
> haveged being the best known (http://www.issihosts.com/haveged/ ).
> You could also test if this is the issue by using /dev/urandom, which
> doesn't block, or 'while true; do cat
> /proc/sys/kernel/random/entropy_avail; sleep 2; done' and see if the
> available entropy drops to zero during key generation...
>
> W
>
>> So now I had some time and reading about this again.
>>
>> If I query either of my servers about my domain:
>> dig @dns di.ubi.pt DNSKEY
>> I do get the DNSKEY, but I have no records when querying about +dnssec. My 
>> topdomain (ubi.pt) doesn't have DNSSEC yet
>> either.
>>
>> my named.conf already has the following:
>>
>> dnssec-enable yes;
>> dnssec-validation auto;
>> dnssec-lookaside auto;
>> bindkeys-file "/etc/named.iscdlv.key";
>> managed-keys-directory "/var/named/dynamic";
>>
>> Outside the configuration file I also have a /etc/named.root.key
>>
>> My questions:
>> 1) Will my old servers (1GB RAM) become much slower with  DNSSEC? Is it 
>> worth it?
>> 2) I have one global "hosts" file and 3 reverse zone files, each for the 
>> respective IP network. Can I use the same
>> Keypair in all of them?
>> 3) Are the files /etc/named.root.key file and /etc/named.iscdlv.key already 
>> being used? I compared them to the result
>> of the DNSKEY dig query but they are different.
>>
>> Thank you so much for your time!
>> Best regards
>>
>> Os melhores cumprimentos
>> David Alexandre M. de Carvalho
>> ---
>> Especialista de Informática
>> Departamento de Informática
>> Universidade da Beira Interior
>>
>>
>>

Re: Machine friendly alternative to nsupdate

2020-04-01 Thread Timothe Litt
These projects tend to be custom... there may be a prepackaged solution,
but everything I've run into has either been tied to the specific
abstractions of a project - or very low level.

Mine uses the Perl Net::DNS module to set up update transactions.

Net::DNS gives you the ability to send updates, use TSIG, get all the
response fields conveniently, and get display text.  It's pretty well
supported - and the basis for a number of DNS tools and tests.

When first approached, it can be, er, less than obvious exactly how to
make UPDATE work.  If you get stuck, I can probably extract the code to
do (TSIG-signed) updates.
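
A minimal sketch (zone, name, key, and server address are placeholders) -
note the prerequisite, which turns the "silently ignored" case from the
quoted message below into a distinguishable rcode instead of NOERROR:

    use strict;
    use warnings;
    use Net::DNS;

    my $update = Net::DNS::Update->new('test.zone');

    # Fail with YXDOMAIN if any record already exists at this name,
    # instead of the server silently ignoring the add.
    $update->push( prerequisite => nxdomain('petrbena.test.zone') );
    $update->push( update => rr_add('petrbena.test.zone. 600 A 192.0.2.1') );

    # TSIG, as with nsupdate -k:
    $update->sign_tsig( 'ddns-key', 'base64secretgoeshere==' );

    my $res   = Net::DNS::Resolver->new( nameservers => ['192.0.2.53'] );
    my $reply = $res->send($update) or die "no reply from server\n";
    print $reply->header->rcode, "\n";   # NOERROR, YXDOMAIN, REFUSED, ...

The rcode - and every record, via the packet's accessor methods - comes
back as data, which avoids scraping nsupdate's human-oriented output.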

As for the next layer - XML or whatever - that's another project.  If
you speak Perl, it would not be difficult to wrap Net::DNS to meet your
needs.

P.S. Other than using it (and reporting the occasional bug), I have no
relationship with Net::DNS :-)

Timothe Litt

ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 01-Apr-20 05:07, Petr Bena wrote:
> Hello,
>
> Some preamble: Some time ago I created an open source DNS admin web
> GUI *1 that is basically a wrapper around dig and nsupdate that allows
> people with "less CLI knowledge" to easily manipulate DNS records. The
> main reason for this was that in our corporation we have about 400
> internal DNS zones hosted on over 100 different BIND master servers,
> in more than 10 countries around the planet and this tool allowed us
> to unify the management as it allowed integration with different
> master servers, allow granular role based access for individual zones
> (integrated with LDAP groups), including some web API for our
> automation tools etc.
>
> Now to the actual problem: as I said, this tool is just a wrapper
> around nsupdate and dig, I like it that way because it's non-invasive,
> unlike other similar DNS admin panels, it doesn't require ANY changes
> on DNS server configuration and it integrates well with other
> solutions already in place. The problem I have however, is, that
> nsupdate was created as a tool for humans, rather than machines and
> parsing its output and even giving it input is very hard. Plus some
> things don't even seem to be possible in it.
>
> Is there any alternative to nsupdate, something that can work with XML
> or JSON payloads or provide output in such machine parseable format?
> For example, typical problem I am facing right now - is that nsupdate
> silently ignores things that IMHO shouldn't be ignored - for example
> when someone try to add a record that already exists, or try to add an
> A record over CNAME, nsupdate silently ignores this, even in debug
> output I can't see any difference, in first send the record is
> created, resulting in NOERROR, in second identical send, update is
> ignored resulting in NOERROR, so I have no way to tell users of my app
> that record was not in fact created or changed (because it already
> exists). For example:
>
> Here is an operation where I first add a CNAME record and then try to add
> an A record with the same name (imagine two different users doing this, so
> user B was unaware that the CNAME already exists).  You can see that in both
> cases nsupdate responds with the same answer, although the record is created
> only in the first case.  And on top of that, this answer is not easy to
> machine-parse.
>
> > debug
> > update add petrbena.test.zone. 600 CNAME this.is.test.
> > send
> Sending update to 10.15.12.17#53
> Outgoing update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 48433
> ;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; UPDATE SECTION:
> petrbena.test.zone.    600    IN    CNAME    this.is.test.
>
> ;; TSIG PSEUDOSECTION:
> server. 0    ANY    TSIG    hmac-md5.sig-alg.reg.int. 1585729680 300
> 16 xx== 48433 NOERROR 0
>
>
> Reply from update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 48433
> ;; flags: qr ra; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; TSIG PSEUDOSECTION:
> server. 0    ANY    TSIG    hmac-md5.sig-alg.reg.int. 1585729680 300
> 16 xx== 48433 NOERROR 0
>
> > update add petrbena.test.zone. 600 A 0.0.0.0
> > send
> Sending update to 10.15.12.17#53
> Outgoing update query:
> ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 30709
> ;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
> ;; ZONE SECTION:
> ;test.zone.            IN    SOA
>
> ;; UPDATE SECTION:
> petrbena.test.zone.    600    IN    A    0.0.0.0
>
> ;; TSIG PSEUDOSECTION:
>
> server. 0    ANY    TSIG 

Re: with dot in NAME for ACME via dynamic update (Axel Rau)

2020-03-14 Thread Timothe Litt
Er,

dig _acme-challenge.imap.lrau.net.

is missing a record type.  The default is A.


dig _acme-challenge.imap.lrau.net. txt

will likely give you better results

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 14-Mar-20 13:31, bind-users-requ...@lists.isc.org wrote:
> On 14.03.2020 at 18:14, Chuck Aurora wrote:
>
>> it seems, the dynamic update protocol does not allow things like
>> _acme-challenge.some-host.some.domain
>> TXT"tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"
>> because there is no zone
>> some-host.some.domain
>
> I am pretty sure that is not correct, but we can't help unless you
> show your work.  If you need to specify the zone to update, you can
> and should.  BIND's nsupdate(8) and other dynamic DNS clients allow
> you to do this.

With this file
- - -
server localhost
debug
zone lrau.net
ttl 3600
add _acme-challenge.imap.lrau.net. 3600 TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"
show
send
answer
- - -
I get:
- - -
# nsupdate -k /usr/local/etc/namedb/dns-keys/ddns-key.conf
~/admin/ns-update-example.txt
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:      0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; UPDATE SECTION:
_acme-challenge.imap.lrau.net. 3600 IN TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"

Sending update to ::1#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  4
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; UPDATE SECTION:
_acme-challenge.imap.lrau.net. 3600 IN TXT "tR0VhMRfb4v5WsctEgoD3aWNRJ73n2wqn9hlTPE9pA0"

;; TSIG PSEUDOSECTION:
ddns-key.               0       ANY     TSIG    hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0


Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  4
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; TSIG PSEUDOSECTION:
ddns-key.               0       ANY     TSIG    hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0

Answer:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  4
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;lrau.net.                      IN      SOA

;; TSIG PSEUDOSECTION:
ddns-key.               0       ANY     TSIG    hmac-sha256. 1584206515 300 32 . . . 4 NOERROR 0

# dig _acme-challenge.imap.lrau.net. @localhost

; <<>> DiG 9.16.0 <<>> _acme-challenge.imap.lrau.net. @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6153
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 404b9f34e94920a4ef3dd3065e6d14308acdeabfe0744b88 (good)
;; QUESTION SECTION:
;_acme-challenge.imap.lrau.net.                 IN      A

;; AUTHORITY SECTION:
lrau.net.               3600    IN      SOA     ns4.lrau.net. hostmaster.lrau.net. 2020030850 86400 7200 604800 3600

;; Query time: 0 msec
;; SERVER: ::1#53(::1)
;; WHEN: Sat Mar 14 17:28:16 UTC 2020
;; MSG SIZE  rcvd: 145

(pki_dev_p37) [root@hermes /usr/local/py_venv/pki_dev_p37/src]# 

Axel
---
PGP-Key: CDE74120  ☀  computing @ chaos claudius




Re: Advice on balancing web traffic using geoip ACls

2020-02-23 Thread Timothe Litt
"Splitting traffic evenly" may not be in the interest of your clients -
suppose their locations are skewed?


In any case, this seems like a lot of work - including committing to
ongoing maintenance - for not much gain.


Consider setting up an anycast address - let the network do the work. 
This will route to the server closest to the client.  You can do this
with two DNS servers - pair each with a webserver, and have each server's
copy of the zone select its corresponding webserver.  And/or anycast the
webservers themselves - that works well for static content; dynamic
content poses a distributed-database challenge.


(It might be nice if someone with experience could write an end-to-end
tutorial on how to do this - from obtaining a suitable address - at a
reasonable cost - to setting up the BGP routing to the servers...)


Of course the simplest way out is to use a CDN - as this is a previously
solved problem.  It trades money for effort, which may be worthwhile if
it allows you to concentrate on your unique value proposition.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 22-Feb-20 20:25, Scott A. Wozny wrote:
> Greetings BIND gurus,
>
> I’m setting up hot-hot webserver clusters hosted on the west and east
> coasts of the US and would like to use Bind 9.11.4 with the Maxmind
> GeoIP database to split the traffic about evenly between those
> clusters.  Most of the traffic will be from the US so what I would
> like most to do is set up my ACLs to use the longitude parameter in
> the city DB and send traffic less than X (let's say -85) to a zone
> file that prioritizes the west coast servers and those greater than X
> to the east coast servers.  However, when I look through the 9.11.4
> ARM it doesn’t include the longitude field in the geoip available
> field list in section 7.1.  Has anyone tried this and it actually
> works as an undocumented feature or, because it’s not an “exact match”
> type operation, this is a non-starter?
>
> If this isn’t an option at all, does anyone have any suggestions on
> how to get a reasonably close split with ACLs using the geoIP
> database?  My first thought is to do continent based assignments to
> west and east coast zone files for all the non North American IPs with
> country based assignments of the non-US North American countries and
> then region (which, in the US, I believe translates to states) based
> assignments within the US.   I would need to do some balancing, but it
> seems fairly straightforward.  The downside is that the list would be
> fairly long and ACLs in most software can be kind of a performance hit.  
>
> The other alternative I was considering was doing splits by time zone,
> but there are a little over 400 TZs in the MaxMind GeoLite DB last
> time I checked and that also seems like it would be a performance hit
> UNLESS I could use wildcards in the ACL to group overseas time zones.
>  While I’ve not seen a wildcard in a geoip ACL, that doesn’t
> necessarily mean it can’t be done so I was wondering if anyone was
> able to make that work.
>
> Finally, I could try a hybrid of continent matches outside North
> America and then the North American timezones which seems like a
> reasonable compromise, but only if my preferred options of longitude <
> > isn’t available nor is wildcarding tz matches.  OR am I overthinking
> all of this and there is a simple answer for splitting my load that I
> haven’t thought of?  The documentation and examples available online
> are fairly limited so I thought I’d check with the people most likely
> to have actually done this.
>
> Any thoughts or suggestions would be appreciated.
>
> Thanks,
>
> Scott




Re: A policy for removing named.conf options.

2019-07-07 Thread Timothe Litt

On 13-Jun-19 06:46, Matthijs Mekking wrote:
> Dear BIND 9 users,
>
> BIND 9 has a lot of configuration options.  Some have lost value over
> the years, but the policy was to keep the options to not break old
> configurations.
>
> However, we also want to clean up the code at some point.  Keeping these
> options increases the number of corner cases and makes maintenance more
> cumbersome.  It can also be confusing to new users.  We are trying to
> establish an orderly, staged process that gives existing users ample
> time to react and adapt to deprecated options.
>
> The policy below describes our proposal for the procedure for removing
> named.conf options. We would like to hear your feedback.
>
> Thanks, best regards,
>
> Matthijs
> [Snip]

Slowly catching-up from being off-line, I've reviewed the discussion on
this.  A couple of observations:

So far, the suggestions have included logging & making sure that
named-checkconf flags deprecated syntax.  While helpful, these all
suffer from a timing problem: notice is only provided after the new
software is installed.  For advance notification, they require someone
to look at "the" log file (or proactively run named-checkconf).  But if
it isn't going to bite now, there's a good chance that won't happen. 
And if it does bite now, the software has been installed - best case in
a test environment, worst case in production.  [The last may be more
likely with packaged distributions than with build-from-source sites.] 
Further, "the" log file varies by startup mechanism (sysVinit, systemd,
named's logs, consoles, ...) - and in embedded cases, logs may be remote
and/or hard to access.

One approach to notification that hasn't been mentioned would be to
include a deprecation notice and scan of the default configuration file
in 'make install'.   This should be a separate script called from
install, that can also be used stand-alone.

This has limitations, but covers some interesting cases:

Advantages:

Proactive: can stop install if obsolete directives/syntax is detected -
before starting the test (or for the adventurous, production) environment.

Does not depend on logging, or on anyone reading the logs.

Does not depend on which startup mechanism is in use.

Should be caught by the packagers' build.  They are generally
responsible enough to pass on the deprecations to their users.  The
packagers can run the check script in their package's 'install' mechanism.

Works for most people who build from source.

Limitations:

Does not work for installations that use a non-default configuration
file. (e.g. named -c ...)

May be messy for chroot and enhanced security (selinux,apparmor,...)
environments

Will not inspect dynamic configurations (e.g. rndc addzone, modzone...)

Notes:

In all cases, make install could include a short notice of the form "See
DEPRECATIONS for changes that may require changes in your
configuration files".   The README can also refer to this file to avoid
duplication.

Why install?  Eventually, even packaged distributions use install - it
may be buried in an RPM's spec file, but it's run somewhere.  Install
allows the newly built (or distributed) version to check before the new
version is activated.  "configure" is too soon - you don't have the new
images, and with packaged (and cross-compiled) distributions, it's never
run on the target.

Probably, running the check should be the default (maximum coverage),
but a make install-nocheck target would probably be necessary.

Another mechanism would be to add a --fix option to named-checkconf. 
This would generate a new file(s), commenting out options that no longer
serve a purpose - with an easily detectable marker (e.g. '# OBSOLETE -
named-checkconf V19.2').  For options that are simply renamed, it can
insert the new, equivalent syntax.  For options that can't be
automatically updated, create a marker "# ATTENTION: named-checkconf
V19.2 - The 'use-Klingon-names' option is not supported, see
DEPRECATIONS section 659.712 for details" - and don't comment out the
option!  A log file listing all files modified should be produced. 
--fix would shift the burden of finding the affected options from the
user to software - making it (a) more likely to happen (b) easier -
especially for configurations that span dozens (or hundreds) of
'include'd files.

I don't think there's a single universal solution to handling
deprecations, but I hope that these observations are helpful.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 






Re: allow-update in global options (was Re: bind and certbot with dns-challenge)

2019-03-17 Thread Timothe Litt
Data points:

I saw another report of this issue on gitlab - #913 just after my
previous note.  It indicated that a distribution's initial configuration
breaks with the change.  I see that it has been updated by Alan since.

I checked my configuration files.

I use allow-update-forwarding at the options level.

I use update-policy at the zone level.

I don't currently use either at the view level.

So my configurations would break.  (I haven't had the cycles to run
9.13, unfortunately for you - apparently, fortunately for me :-)

I don't see the serious harm in allowing these options to be inherited -
there are certainly other options that, if incorrectly/accidentally
inherited, could be dangerous: allow-transfer, allow-query,
deny-answer-*... I could go on alphabetically, but I'm pretty sure a case
could be made for the majority of options causing mischief if
inadvertently inherited.

I'm curious about why these particular options were singled out -- yes,
they update persistent zone data.  But denial of service, information
leaks, and using the wrong directories can also be serious.

In any case, where a change is made that invalidates existing
configurations, I strongly prefer a deprecation warning at least one
(non-development) release prior.  With documentation.

Given that these prerequisites didn't happen in this case, I believe
that regardless of the merits, the previous behavior should be reinstated.

If there is a determination that the benefits of the change outweigh the
costs, then add a deprecation warning a stable release prior (perhaps
now?) and update the documentation -- including the ARM & release notes.

Also, the same arguments should be applied to all the other inheritable
options -- if there is justification for other changes, it's much better
to force operators to make a bundled set of changes than to dribble them
out piecemeal.

FWIW: In general, I choose to place configuration statements at the
level resulting in the shortest configuration.  (Not for performance,
but for clarity/ease of maintenance.)  So that's sometimes "global
enable, exception disable", and sometimes the inverse.  (This can be
frustrated when there's no obvious inverse to a directive, but that's
for another day.)

Finally, I looked at the 9.13 ARM for a list of which options are
allowed in the view statement.  The view Statement Grammar lists
[view_option; ...] - 'view_option' appears nowhere else in the ARM.  The
definition and usage section (in chapter 5) says only: "Many of the
options given in the *options* statement can also be used within
a *view* statement,".  To find an explicit list, one has to go to the
VIEW section of chapter 8 (the man page for named.conf) - which isn't
tagged with 'view_option'.  This frustrates searchers and people
unfamiliar with the ARM structure.  Note that allow-update and
allow-update-forwarding both appear as valid in the view syntax there,
although in chapter 5 the description on p.97 says "only zone, not
options or view".

My 3.5¢ (USD, but your local currency will do :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 17-Mar-19 16:37, Alan Clegg wrote:
> On 3/17/19 2:51 PM, Alan Clegg wrote:
>> On 3/17/19 7:13 AM, Stephan von Krawczynski wrote:
>>> Hello all,
>>>
>>> I am using "BIND 9.13.7 (Development Release) " on arch linux. 
>>> Up
>>> to few days ago everything was fine using "certbot renew". I had
>>> "allow-update" in nameds' global section, everything worked well. Updating 
>>> to
>>> the above version threw a config error that "allow-update" has no global 
>>> scope
>>> and is to be used in every single zone definition.
>> And you may have found a bug.  I'm checking internally at this time.
> So, after a discussion with one of the BIND engineers this afternoon,
> this turned out to be quite an interesting and deep-rooted issue.
>
> During a cleanup of other code (specifically named-checkconf), code was
> changed that enforced what was believed to have been the default
> previously: specifically, allow-update was only allowed in zone stanzas.
>  The chain of changes follows:
>
> 5136.   [cleanup]   Check in named-checkconf that allow-update and
> allow-update-forwarding are not set at the
> view/options level; fix documentation. [GL #512]
>
> This, if the change remains, will be updated to [func] and additional
> documentation will be added to the release notes.
>
> The other changes down this long and twisting passage are:
>
> 4836.   [bug]   Zones created using "rndc addzone" could
> tem

Re: bind and certbot with dns-challenge

2019-03-17 Thread Timothe Litt
Named has options at the global, view and zone levels.  The 9.11 ARM
shows allow-update
in the options and zone statements.  If it's broken in 9.13 - note that
it is a "Developement Release".
So bugs are expected, and you should raise an issue on bind9-bugs or on
gitlab
(https://gitlab.isc.org/isc-projects/bind9/issues).

You can work around your issue by using 'include "my-common-stuff.conf";'
to simplify your configuration.  This is a useful strategy for things
that don't fit
the three-level model.

If you have large zones, you can speed up load time with
masterfile-format raw or map;
see the "tuning" section of the ARM for more information. 

Parsing configuration data is unlikely to be the dominant factor in
startup, but I'm
sure that the developers would welcome a reproducible test case that
shows otherwise.

You should consider update-policy instead of allow-update; it provides
much better control
and better security.
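
A minimal sketch for the certbot case (zone and key name are
placeholders) - this key can touch only the ACME challenge TXT record,
nothing else in the zone:

    zone "example.net" {
        type master;
        file "db.example.net";
        update-policy {
            grant certbot-key name _acme-challenge.example.net. TXT;
        };
    };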

> It is really very obvious that this is only done by
> ideologists, not technical oriented people.
Actually, I've found that the contributors to named are very technical,
practical people.
Sometimes they introduce bugs, or ideas that work in one context but not
another.
They're responsive to criticism & contributions.  But name-calling is
generally not an
effective way to get anyone to help you.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 17-Mar-19 10:35, Stephan von Krawczynski wrote:
> On Sun, 17 Mar 2019 12:40:35 +0100
> Reindl Harald  wrote:
>
> On 17.03.19 at 12:13, Stephan von Krawczynski wrote:
>>> So why is it, that there is no global way of defining default zone
>>> definitions which are only overriden by the actual zone definition?  
>> maybe because it brings a ton of troubles and whoever deals with more
>> than 5 zones has automatic config management in place anyways?
> If you don't want to follow the positive way (how about a nice additional
> feature), then please accept the negative way: someone broke the config
> semantics by implementing a zone based-only "allow update". This option worked
> globally before (too), so we can assume it is in fact broken now.
> Can someone please point me to the discussion about this incompatible change?
>
>>> Why is there no way to define a hosts-type-of-file with an URL-to-IP list?
>>> Do you really want people to define 50.000 zones to perform adblocking?  
>> no, just use the right tool for the task, this don't fit into the domain
>> concept of named and hence you have dnsmasq and rbldnsd to step into
>> that niche
> In todays' internet this is no niche any more. And the right tool means mostly
> "yet-another-host" because you then need at least a cascade of two, one for
> dnsmasq and one for bind/named. A lot of overhead for quite a simple task...
>
>>> Configs have to be reloaded every now and then, is there really no idea
>>> how to shorten things a bit?  
>> ??
> Shorter config = shorter load time. The semantic change of "allow update" alone
> leaves every setup with 1000 domains in a situation where 999 config statements
> more have to be read, interpreted and configured - just to end up in the same
> runtime setup. It is really very obvious that this is only done by
> ideologists, not technically oriented people.
>




Re: named cpu usage pretty high because of dns_dnssec_findzonekeys2 -> file not found

2019-03-11 Thread Timothe Litt
On 11-Mar-19 03:52, Mark Andrews wrote:
> Because you removed the key from disk before it was removed from the zone.  
> Presumably named
> was logging other error messages before you removed the key from disk or the 
> machine was off
> for a period or you mismanaged the key roll and named kept the key alive.
>
> Named’s re-signing strategy is different to when you are signing the whole 
> zone at once as
> you are signing it incrementally.  You should be allowing most of the 
> sig-validity interval
> before you delete the DNSKEY after you inactive it.  One should check that 
> there are no RRSIGs
> still present in the zone before deleting the DNSKEY from the zone.  
> Inactivating it stops the
> DNSKEY being used to generate new signatures but it needs to stay around 
> until all those RRSIGs
> have expired from caches which only happens after new replacement signatures 
> have been generated.

There are a lot of these "administrator should know" events and timeouts
in DNSSEC.  One could argue that these complexities are one of the
barriers to adoption.

It seems worth considering ways to make life easier, for administrators
and automation alike.

A few thoughts come immediately to mind - no doubt there are more:

- Rather than documenting "wait for n TTLs (or sig-validity interval)",
have bind log events that require/enable administrator actions (at
non-debug levels), such as:

"key (keyid) /foo/bar/.. no longer required and can be removed" - issue
at inactivation + max TTL of any RRSIG is signed.  Allows an admin (or
script) to know when it's safe rather than requiring research and/or math.

"key (keyid) /foo/baz... is now signing zone(s)
example.net,example.org.  It expires on <> and will be removed on <>"

- Provide an "obsolete-keys" directory - have named move keys that are
no longer required there.  (Or delete the files.  But emptying
obsolete-keys, like emptying /tmp, can be automated, and deleting a key
might be a problem if forensics - or audits - is required.) The key idea
is that an admin never removes a file from "keys".  And that should
prevent mistakes.

- Rather than relying on the keys directory for signing, use it only to
import/update keys.  Once named starts using a key, put a copy (or move
it) to ".active-keys" - or a database file - that persists as long is
the protocol requires it.  If the file in the keys directory is updated
with new dates, generate the appropriate events - but work from
.active-keys.  If the file disappears from "keys" before it should, use
.active-keys to restore it -- and add a comment explaining why.  ("#
Restored by named at 1-apr-2411: sig-validity interval for
lost.example.net (internal) extends to 15-may-2412")

- Provide an rndc show class command (or stats channel output) that
explains the status/fate of each signing key.  Perhaps a table:

   Key                      Zone         View      State      Created     Publish     Active      Deactivate   Remove      Next event
   key (keyid) /foo/baz...  example.net  external  Published  1-jan-2000  1-jun-2000  1-Jul-2000  31-dec-2000  1-feb-2001  activate 1-Jun-2000
   key (keyid) /foo/baz...  example.org  external  Published  1-jan-2000  1-jun-2000  1-Jul-2000  31-dec-2000  1-feb-2001  activate 1-Jun-2000

   (Assumes today is 11-Mar-2000; the second row is the same key in a
   different zone.)

- Think more about what admins want to do, rather than how named (and
the protocols) do it.  E.g. "sign a zone", "roll key now|every month",
"use latest|specified|safest signature algorithm | key length", 
"enable/disable nsec|nsec3", "unsign zone"... Provide scripts and/or
named primitives that do this.  "dnssec settime -xyz" doesn't do a good
job of specifying intent - one has to do a lot of math, and the intent
isn't logged - just the date change.

I'm aware of the dnssec keymgr effort - it's still more oriented to
timeouts and e.g. coverage periods than to what one wants to
accomplish.  (As far as I can tell, it also doesn't support multiple
views - which makes it unusable for me.  I don't think this is an
unusual configuration...)

If you look at validate() in policy.py.in, there are 6 different errors
for conditions involving timer relationships.  [And the errors are
reported in seconds, not even as something vaguely human - such as
57w2d1h30m12s.] Why not (by default) adjust the timers & log the result?
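
For instance, a trivial formatter (a sketch in Python, since policy.py.in
is Python; the function is mine, not part of keymgr):

    def human_duration(secs):
        # Render a second count as e.g. 57w2d1h30m12s
        out = ""
        for unit, span in (("w", 604800), ("d", 86400), ("h", 3600), ("m", 60)):
            n, secs = divmod(secs, span)
            if n:
                out += "%d%s" % (n, unit)
        return out + "%ds" % secs

    print(human_duration(34651812))   # -> 57w2d1h30m12s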

I'm sure someone will opine that for every case, there's a choice
between shrinking one timer and extending the other. This is undoubtedly
true.  But better to pick a strategy that is consistent with safe
practice than to kick back each error to an admin.  An admin who has
particular requirements can read the log.  But for those who "just want
things to work", I suspect that we can identify a driver (I nominate key
lifetime) & adjust everything else to fit...

I'm sure there are some challenges in the details - but I hope the
message is clear.  Avoid blaming the admin for trying to make things
work.  Instead, package actions at admin-oriented levels of
abstraction.  Guard data that named needs, and 

Re: Forward zone inside a view

2019-02-12 Thread Timothe Litt
All these replies are correct in the details (as usual), but miss the point.

Blocking name resolution, while popular, does not meet the OP's requirement:

"The point is I have several desktops that *must* have access **only**
to internal domains.*"

Let's say that your client's favorite illicit site is facebook.com.

One dig (or host) command reveals that:

  facebook.com has address 157.240.3.35
 facebook.com has IPv6 address 2a03:2880:f101:83:face:b00c:0:25de

Fits on scrap of paper.  Carry in to office.  Connect - with a Host
header for http, SNI for TLS, and off you go.  Or just put it in
hosts.txt/hosts.

Or use a public nameserver.   Or...

If you want to block access, you need a firewall.  If you merely want to
inconvenience people or reduce the risk of clicking on ransomware
hyperlinks, mess with their default nameserver.  RPZ is good for that. 
If you have a private address space & need to resolve some names
differently inside and out, views are good for that. (Or you can have a
different nameserver; tastes vary.)  If you are resource limited and
want to benefit from a public server's larger cache, while serving
authoritatively some local names, forwarding can be a good choice.
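
For the record, a minimal RPZ sketch (the policy zone's name and the
blocked domain are illustrative).  In named.conf:

    options {
        response-policy { zone "rpz.local"; };
    };

    zone "rpz.local" {
        type master;
        file "db.rpz.local";
    };

and in db.rpz.local, where "CNAME ." rewrites a name to NXDOMAIN:

    $TTL 300
    @               SOA   localhost. admin.localhost. 1 3600 300 86400 60
                    NS    localhost.
    facebook.com    CNAME .
    *.facebook.com  CNAME .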

But "**must** have access **only**" implies that one expects that the
solution should resist *more* than a cooperative or unmotivated client. 
NO DNS-only based solution will do that.

Governments and political pressure groups think that DNS corruption is
an effective tool for limiting access.  People here know better.  It
deters certain casual problem behavior.  It does not prevent anyone with
a modicum of knowledge and determination from watching cat videos.  (Or
downloading malware, or whatever other behavior a policy maker wishes to
ban.)

It is worth listening to the OP's problem statement and steering him
away from illusory technology.  It's the responsible thing to do.

That there are technical answers to the question asked doesn't mean that
it's the right question.  If it's not (and in this case it does not
appear to be), those answers are not helpful.  Even though they are
correct in other contexts.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

> On 12-Feb-19 17:45, Kevin Darcy wrote:

> Define root zone. 
>
> Delegate teamviewer.com from root zone.
>
> Define teamviewer.com as "type forward".
>
> "recursion no" is incompatible with *any* type of forwarding or
> iterative resolution. Should only be used if *everything* you resolve
> is from authoritative data, i.e. for a hosting-only BIND instance.
> Since you want to forward -- selectively -- you need "recursion yes".
> Nothing outside of that part of the namespace will be forwarded, since
> named considers everything else to be contained in the root zone.
>
> - Kevin
>
> On Mon, Feb 11, 2019 at 9:06 AM Roberto Carna
> <robertocarn...@gmail.com> wrote:
>
> Matus, I've followed what you say:
>
> view "internet" {
>    match-clients { internet_clients; key "pnet"; };
>
> recursion yes;
>
> zone "teamviewer.com <http://teamviewer.com>" {
>         type forward;
>         forward only;
>         forwarders {
>                 8.8.8.8;
>         };
> };
>
> };
>
> but clients can resolve ANY public Internet domain, in addition to
> teamviewer.com.  I think "recursion yes" applies to every public
> domain and not just for "teamviewer.com",
> but I don't know why.
>
> Please can you give me more details, using forward or not, how can I
> let some clients resolve just teamviewer.com ???  I confirm that my
> BIND is an authoritative name server for internal domains.
>
> Thanks a lot again.
>
> On Mon., 11 Feb. 2019 at 10:49, Matus UHLAR - fantomas
> (<uh...@fantomas.sk>) wrote:
>
> On 11.02.19 10:38, Roberto Carna wrote:
> >Dear Matus, thanks a lot for your help.
> >
> >>> what is the point of running DNS server with only two
> hostnames allowed
> >>> to resolve?
> >
> >The point is I have several desktops that must have access
> only to internal
> >domains. The unique exception is they have access to
> teamviewer.com  in
> >order to download the Teamviewer client and a 

Re: Forward zone inside a view

2019-02-11 Thread Timothe Litt
On 11-Feb-19 08:38, Roberto Carna wrote:

> The point is I have several desktops that must have access only to
> internal domains. The unique exception is they have access to
> teamviewer.com  in order to download the
> Teamviewer client and a pair of operations in this public domain.
>
(Ab)using the DNS for this is almost certainly the wrong approach,
though this sort of question comes up frequently.

Any sufficiently motivated user can list a blacklisted domain in
HOSTS.TXT, change his DNS server to a public one, use an IP address
(obtained at home, the local internet cafe, or elsewhere), or use
other work-arounds.

So besides being painful to set up, it's likely ineffective.  You can
clamp down on some of these with file system or other administrative
controls - but not all.  It will be a frustrating path.

If you want (or are required) to create a walled garden, the only
effective approach is likely to be a firewall configuration.  You can
set it up to only allow traffic from particular IP addresses to the
permitted ones.  And control protocols.  You can either send "not
reachable" ICMP responses, or redirect connection attempts to a
port-appropriate warning/notification service.  (e.g. a web page,
e-mail robot, etc.)

You need a process to update the firewall in the unlikely event that
the IP address of a permitted service changes.  And if your clients
get their addresses from DHCP, you'll want to set up distinct address
pools - and possibly VLANs.
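
A minimal sketch of that (Linux iptables; the desktop pool 192.0.2.0/28
and the permitted address 203.0.113.10 are illustrative, and a real
deployment needs a script to keep the permitted addresses current):

    iptables -N WALLED
    iptables -A FORWARD -s 192.0.2.0/28 -j WALLED
    iptables -A WALLED -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
    iptables -A WALLED -d 10.0.0.0/8 -j ACCEPT    # internal services
    iptables -A WALLED -j REJECT --reject-with icmp-host-unreachable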

DNS is the wrong hammer for this nail. 

Whether you should hammer the nail at all is a political, not a
technical issue.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 11-Feb-19 08:38, Roberto Carna wrote:
> Dear Mathus, thanks al lot for your help.
>
> >> what is the point of running DNS server with only two hostnames allowed to
> >> resolve? 
>
> The point is I have several desktops that must have access only to
> internal domains. The unique exception is they have access to
> teamviewer.com  in order to download the
> Teamviewer client and a pair of operations in this public domain.
>
> I think if I have setup "recursion = no", if I define a forward zone
> with "type forward" and the corresponding forwarder, this option
> enable the recursion just for this defined zone.
>
> In general, my question is how to forward a public domain to a DNS
> resolver like 8.8.8.8 ???
>
> Thanks again.
>
> On Sat., 9 Feb. 2019 at 12:28, Matus UHLAR - fantomas
> (<uh...@fantomas.sk>) wrote:
>
> On 07.02.19 16:30, Roberto Carna wrote:
> >Desktops I mentioned can only access to web apps from internal
> domains, but
> >in some web apps there are links to download Teamviewer client
> software
> >from Internet. I can create a private zone "teamviewer.com" with all the
> >hostnames and IP's we will use, but if they change I will be in
> trouble.
> >
> >So we need to forward the query to our resolvers in order to get
> a valid
> >response.
> >
> >So I think we can use the forward option from BIND, but it
> doesn't work at
> >all as I described:
> >
> >1. "recursion no" can only be set at the top (view) level, not
> overridden
> >   at the zone level.
> >
> >2. If I set "recursion no" at the view level, then a "type forward"
> >   zone has no effect:
> >
> >  view "foo" {
> >    recursion no;
> >    ...
> >    zone "teamviewer.com <http://teamviewer.com>" {
> >      type forward;
> >      forward only;
> >      forwarders {172.18.1.1; 172.18.1.2;};
> >    };
> >
> >-- query for foo.teamviewer.com fails
> and tell it's not a recursive query
>
> the whole point of "recursion no" is not to answer recursive queries,
> so there should be no wonder it works that way.
>
>
> >3. If I define "recursion yes" at view level:
> >
> >  view "foo" {
> >    recursion yes;
> >    ...
> >    zone "teamviewer.com <http://teamviewer.com>" {
> >      type forward;
> >      forward only;
> >      forwarders {172.18.1.1; 172.18.1.2;};
> >    };
> >
> >-- query for foo.teamviewer.com is

Re: forward all but ANY requests

2018-11-30 Thread Timothe Litt
On 30-Nov-18 08:14, Erich Eckner wrote:

> On 30.11.18 12:26, Timothe Litt wrote:
>> On 30-Nov-18 06:04, Erich Eckner wrote:
>>> Hi,
>>>
>>> I'm running a bind9 name server (9.13.4 on debian) which forwards some
>>> zone (onion.) to tor's name server. Unfortunately, tor's name server
>>> only answers A and AAAA requests, but not e.g. ANY requests.
>>>
>>> 192.168.1.3 is running the tor dns,
>>> 192.168.1.13 is running bind9 forwarding to 192.168.1.3:9053
>>>
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion ANY
>>> ;; Connection to 192.168.1.3#9053(192.168.1.3) for
>>> 3g2upl4pq6kufc4m.onion failed: connection refused.
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion A
>>> 10.255.55.223
>>> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion AAAA
>>> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion ANY
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion A
>>> 10.255.55.223
>>> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion AAAA
>>> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>>>
>>> Is there any option:
>>>  - to make bind fall back to A or AAAA when the ANY request fails (even
>>> the connection fails!) or
>>>  - to only forward requests of certain type(s) or
>>>  - to answer ANY requests _always_ with A or AAAA records (not trying if
>>> the ANY request can be forwarded successfully), possibly for certain
>>> zones only?
>>>
>>> Sry, if that has been asked before, but I seem unable to find anything
>>> useful on the internet, since "ANY" is not a good search term ;-) and
>>> without "ANY" I only turn up how to set bind to ipv4/ipv6-only.
>>>
>>> regards,
>>> Erich
>> This reflects a common misunderstanding.
>>
>> A query for ANY does not return 'everything'.  It returns what the
>> server happens to have cached.  It's a diagnostic.
>>
>> You have to ask explicitly for the record types that you want.
>>
>> Many people have fallen into the trap of thinking that an ANY query will
>> return all records in the DNS, and assume that therefore it can be used
>> to make fewer queries.  You're not the first.
>>
>> Any software (or wetware) that relies on ANY for any purpose other than
>> determining what's in a server's cache for diagnostic purposes is broken.
>>
>>
>> Timothe Litt
>> ACM Distinguished Engineer
>> --
>> This communication may not represent the ACM or my employer's views,
>> if any, on the matters discussed. 
> Thank you for the clarification. Indeed, I can (after querying A and
> ) retrieve those records via ANY requests. :-)
>
> Regards,
> Erich

Note that this result is not guaranteed.  The server is not required to
cache records.  The records may have a TTL less than the time between
your queries.  (E.g. 0)  The records may be evicted from a busy cache
before the TTL expires.  Or the server may reboot between queries.  Or...

Unless you have some specific reason for finding out what is in a
server's cache, you don't want to use queries for ANY.  The results will
seem confusing/unpredictable - and while they may "seem to work" for a
while, will end up wasting a lot of your time.

ANY queries are a classic "sharp tool".  If used properly, they can cut
the time required to diagnose a problem.  If used improperly, they will
cut you instead.  For most people, in most circumstances, the best
strategy is to never issue an ANY query.  (dig is also a sharp tool; else
issuing an ANY query would produce an "are you sure?" prompt :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: forward all but ANY requests

2018-11-30 Thread Timothe Litt
On 30-Nov-18 06:04, Erich Eckner wrote:
> Hi,
>
> I'm running a bind9 name server (9.13.4 on debian) which forwards some
> zone (onion.) to tor's name server. Unfortunately, tor's name server
> only answers A and AAAA requests, but not e.g. ANY requests.
>
> 192.168.1.3 is running the tor dns,
> 192.168.1.13 is running bind9 forwarding to 192.168.1.3:9053
>
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion ANY
> ;; Connection to 192.168.1.3#9053(192.168.1.3) for
> 3g2upl4pq6kufc4m.onion failed: connection refused.
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion A
> 10.255.55.223
> $ dig +short @192.168.1.3 -p9053 3g2upl4pq6kufc4m.onion AAAA
> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion ANY
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion A
> 10.255.55.223
> $ dig +short @192.168.1.13 3g2upl4pq6kufc4m.onion AAAA
> febe:5163:d2b9:98aa:345b:ee04:2c32:d10e
>
> Is there any option:
>  - to make bind fall back to A or AAAA when the ANY request fails (even
> the connection fails!) or
>  - to only forward requests of certain type(s) or
>  - to answer ANY requests _always_ with A or AAAA records (not trying if
> the ANY request can be forwarded successfully), possibly for certain
> zones only?
>
> Sry, if that has been asked before, but I seem unable to find anything
> useful on the internet, since "ANY" is not a good search term ;-) and
> without "ANY" I only turn up how to set bind to ipv4/ipv6-only.
>
> regards,
> Erich

This reflects a common misunderstanding.

A query for ANY does not return 'everything'.  It returns what the
server happens to have cached.  It's a diagnostic.

You have to ask explicitly for the record types that you want.

Many people have fallen into the trap of thinking that an ANY query will
return all records in the DNS, and assume that therefore it can be used
to make fewer queries.  You're not the first.

Any software (or wetware) that relies on ANY for any purpose other than
determining what's in a server's cache for diagnostic purposes is broken.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: dig @ipv6-address

2018-11-29 Thread Timothe Litt
On 29-Nov-18 06:21, Christian Weiske wrote:
> Hello,
>
>
> I'm trying to use dig version 9.10.3 to test my local DNS
> server which listens on IPv6 only.
>
> I only get an error when running dig:
>
>> $ dig @2a01:488:66:1000:53a9:2dde:0:1 cweiske.de
>> couldn't get address for '2a01:488:66:1000:53a:53': not found

This looks like a typo. And the error doesn't match the command given.

I suspect that your actual 'dig' command was 'dig
@2a01:488:66:1000:53a:53 cweiske.de', which will reproduce the error.

'2a01:488:66:1000:53a:53' is not an IPv6 address.  You are missing a ::
or a couple of words.  (There should be 8 16-bit words delimited by ':',
or a single '::' ellipsis to represent a run of zeroes.)  Since it
doesn't parse as an IPv6 address, dig (probably getaddrinfo()) tried to
translate the string as a hostname.  Hence the error.

It's really not in anyone's interest when people post obfuscated
questions...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Method of writing zone files

2018-11-13 Thread Timothe Litt
On 12-Nov-18 14:39, Marcus Frenkel asked about backing up slave zone
files & bind's update mechanism:

I believe you're asking the wrong questions and are likely to run into
complications.  You don't know when BIND will merge the journal, or that
rsync will atomically snapshot the zone file and the journal.  So you
will get a coherent copy of the zone file - but it may be stale. 
The journal may be earlier or later than the zone file, or it may be
incomplete. This means that a restore  may have the slave serve stale
data - until it queries the master for any updates & if necessary, fixes
the journal file.  If the master happens to be down, this is especially
suboptimal.

So I would exclude both files from rsync, and use another approach to
save a coherent copy of the zone file(s) in my backup procedure.

One approach is to axfr the zone (from the slave, or any other server)
to your backup server.

You can do that directly with dig, or using a library (e.g. Perl's
Net::DNS).  Recent versions of BIND write slave zone files in a binary
format by default.  Just pipe the data through named-compilezone -F when
writing the file.  This has the advantage that it doesn't have to run on
the slave. (You can pull the data to your backup server, or push it from
a machine other than the slave.)
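
For example, a sketch (server and zone names are illustrative; the
transfer must be permitted, e.g. by TSIG):

    dig @slave.example.net example.net axfr | \
        named-compilezone -F text -o example.net.db example.net /dev/stdin

which also gives you a named-checkzone-style validity check for free.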

If you are dealing with very large files and/or a slow network link, you
may prefer to use named-compilezone -j to merge the zone & any journal
file into a temporary snapshot & back that up with rsync.  I'm not sure
how BIND's binary format interacts with rsync - it's possible that
outputting text format would be more efficient.  You'd have to benchmark
to decide.

If you restore from such a backup, you'll want to delete the journal
file to save named some work when it loads the zone.  And, of course,
make sure that named isn't running :-)  If you backup the zone in text
format, you can compile it only during restore to save time at the cost
of space.  (Backups should be frequent; restores rare.)

This assumes that your recovery strategy depends on being able to
restore the slave with the master down.

If you can assume that the master will be up (or restored first), by far
the simplest approach is not backup the zone data (file or journal) on
the slaves at all.  This is often the case, since restoring a master
tends to be higher priority than restoring slaves.  To restore, simply
start named.  If you are restoring over existing files, delete both zone
and journal files first.  Named will notice that it has no data, and
will do the axfr(s) & create the files in fairly short order.  Named
will spread the transfers of multiple zones out over time, but you'll
want to do some math to determine if the restore time and impact on your
network are acceptable.

Although you asked about slaves, note that masters have similar issues &
solutions.  Masters will also have journal files if they do dynamic
updates (including for DNSSEC).

For servers which are masters for some zones and slaves for others, the
strategy should be determined per-zone, not per server.

The bottom line: backups of databases (and DNS is a distributed
database) are complicated.  The best approach depends on the details of
your operational environment -- and not on the minutiae of BIND's
implementation.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 12-Nov-18 14:39, Marcus Frenkel wrote:
> Thank you for the quick reply Tony!
>
> Follow-up questions just to be sure:
> 1. The new zone file is renamed in the placed of the old one, only
> after all changes to the new file are written?
> 2. Is the zone file atomically replaced during the renaming process,
> in a sense that there is no window in which the file is empty or
> non-existent?
>
> I'm running BIND on Debian 9. Based on this
> <http://man7.org/linux/man-pages/man2/rename.2.html> Linux man page,
> the rename function should be atomic. I would not imagine that BIND
> does it in different way, like the worst case scenario to first remove
> the current file and then move the new one to the same path. I know
> I'm too cautious, I'm just trying to avoid any chance for rsync to
> transfer incomplete or empty zone file, or maybe delete the file at
> the destination if it does not exist at the source for a short moment.
>
> Marcus
>
> On Mon, Nov 12, 2018 at 7:19 PM Tony Finch <d...@dotat.at> wrote:
>
> Marcus Frenkel <marcus.fren...@gmail.com> wrote:
> >
> > I need to know how BIND writes to slave zone files after zone
> has been
> > updated. Does it modify the file in place or it replaces the
> file with
> > new one at once?
>
> Changes are written to a journal append-only style. Eve

Re: Dropping queries from some well-known ports

2018-08-03 Thread Timothe Litt
On 03-Aug-18 14:00, Petr Menšík wrote:
> Hi!
>
> Our internal support reached to me with question, why are some queries
> bound to low ports silently dropped. I have found there is feature for
> that, that will silently drop queries from selected ports.
>
> I admit queries from such low ports are wrong. But why are some ports
> allowed when some ports are not? Should not it be configured by firewall
> instead?
>
> Just try this command:
> $ sudo dig @127.0.0.1 -b 127.0.0.1#32 localhost
>
> If bind is running on local interface, it will drop the query. If any
> other server is running there, it will respond.
>
> Does such feature make sense in year 2018? Can you remember what was
> motivation to implement it? Is it wise to still enable it by default,
> without at least configure option to disable it?
>
> 1.
> https://gitlab.isc.org/isc-projects/bind9/commit/05d32f6b0f6590ca22136b753309f070ce769000
Those particular ports are reserved for services that have the rather
odd property that any junk sent to them will result in a response.  E.g.
simply opening a connection to daytime will result in a response with
the current date and time in some (unspecified) ASCII format.  (The
related time protocol returns a 32-bit binary time - that will overflow
"soon"; you should be using NTP instead.)

They were designed for diagnostic purposes at a time when the internet
was young and friendly.

Suppose someone knows of a server running one of those services (they
have mostly been replaced/blocked for this and other reasons).

If that someone were able to spoof a request from one of these ports on
that server to your named, responding with anything - including a
FORMERR response, would result in another response.  Named would take
that as another ill-formed request, and reply...  In an infinite loop
using whatever bandwidth is available.  This amounts to a denial of
service attack on both servers, for the cost of a single
packet/connection.  Dropping these packets is the right thing to do,
since the non-named services are acting correctly (according to their
specifications).  And if operating according to their specifications,
none of those servers would ever *initiate* a connection to anyone -
including named.

As for why other low-numbered ports are not dropped: unlike these, they
may have legitimate needs for name resolution.  You could configure a
firewall to drop these - and probably should.  But it certainly doesn't
hurt for named to protect itself from this particular attack.

I should note that your example used port 32 - which is not dropped by
the commit that you cited.  Port 32 is not assigned by IANA.

[Although this is a security issue, I'm not revealing anything new
here.  The commit is 12 years old.  It has been standard advice for many
years not to run these services on the public internet.  If anyone IS
running them (I think NIST is still running the time services), they
should know the risk, and at least rate-limit requests from any given
client IP...]

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 







Re: Authoritative dns with private IP for hostname

2018-07-27 Thread Timothe Litt

On 27-Jul-18 11:59, Elias Pereira wrote:
> hello,
>
> Can an authoritative dns for a domain, eg mydomain.tdl, have a
> hostname, example, wordpress.mydomain.tdl with a private IP?
>
> Would this be accessible from the internet via hostname, if I did a
> nat on the firewall?
>
> -- 
> Elias Pereira

No.  Two issues seem to be conflated here.

For DNS, what you probably want is a setup with views; that way the site
will resolve to the private IP address from inside your site, but to the
external address from outside.
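
A minimal two-view sketch (addresses and file names are illustrative):

    view "inside" {
        match-clients { 192.168.0.0/16; localhost; };
        zone "mydomain.tdl" {
            type master;
            file "db.mydomain.internal";
        };
    };

    view "outside" {
        match-clients { any; };
        zone "mydomain.tdl" {
            type master;
            file "db.mydomain.external";
        };
    };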

For making your servers accessible, NAT will probably be necessary for
the webserver and the DNS server inside your firewall to be accessible
from outside.  Your secondary DNS servers are required to be
geographically separate.  So either you have another location with a
firewall (where you again NAT), or you use a secondary DNS service.

Views are in the bind ARM, and have been discussed on this list before.

There are some middleboxes (among them Cisco Routers) that do attempt to
rewrite DNS records on the fly in a NAT like fashion.  Stay away from
those.  They tend to break things in the best of circumstances, and
absolutely break DNSSEC.






Re: tool for finding undelegated children in your DNS

2018-07-27 Thread Timothe Litt
On 26-Jul-18 19:46, Victoria Risk wrote:
> I have been told this is a very poor description of the problem.
>
> What I am concerned about is, how people with a sort of lazy zone file
> can assess the potential impact of QNAME minimization on their ability
> to answer for all of their zones.
>
> I have gotten two suggestions off list:
> - I would use named-checkzone to print the zone with all owner names
> printed out and then use text processing tools
> - “dig ds -f list-of-zones”.  Those that return NXDOMAIN are likely
> missing NS records.
>
> Any other ideas?
> Has anyone done this kind of housekeeping on their own zones?
>
>
>> On Jul 26, 2018, at 11:41 AM, Victoria Risk <vi...@isc.org> wrote:
>>
>> Does anyone know of a good tool that you can run on your DNS records
>> to find parent + child pairs where there is no NS record for the
>> child in the parent?
>>
>> Someone must have a perl script for that, right?
>>
>> Thank you for any suggestions.
>>
>> Vicky
>>
>>
If you want to do this validation with zone files, then text tools (e.g.
Perl, awk, etc.) are a reasonable approach.  It would not be
particularly difficult - though you do have to handle include files. 
Rather than working from zone files, the easiest approach is to do a dig
axfr to get the actual zone...
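
For example, a sketch with Python's dnspython (the server address and
zone are illustrative, and the parent must permit the transfer):

    import dns.query
    import dns.rdatatype
    import dns.zone

    # Transfer the parent zone and list its delegations (NS below apex);
    # compare the output against the zones you serve to spot missing ones.
    zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.53", "edu.za"))
    for name, rdataset in zone.iterate_rdatasets(dns.rdatatype.NS):
        if name.to_text() != "@":
            print(name)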

I tend to use dnsviz (http://dnsviz.net) and zonemaster
(https://www.zonemaster.net/domain_check) for consistency checking. 

I don't tend to have issues with internal views because of the tools
that I use to update my zones (they pretty
much ensure that mistakes made there will also show up externally :-(). 
So the web checkers are my tools of choice.

But both dnsviz (https://github.com/dnsviz/dnsviz) and zonemaster
(https://github.com/zonemaster/zonemaster) are on GitHub & can be run
internally.  Zonemaster is Perl; dnsviz is Python.  Zonemaster requires
a database (MySQL/MariaDB/PostgreSQL).  The web version of dnsviz is
graphic, and has accessibility issues.  Zonemaster is standard HTML &
more suitable if you use a screen reader.

dnsviz run locally has command line options that will do the analysis -
see the GitHub readme.

Both tools do extensive checks (dnsviz is oriented around DNSSEC, but
does many other checks).

It's a good idea to run one or the other regardless of this particular
issue.  Actually - I run both.

Of course the usual caveats about stealth (unlisted) servers apply.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: PKCS#11 vs OpenSSL (BIND Future Development Question)

2018-06-03 Thread Timothe Litt
at eventually, of course, as part of refactoring to consolidate some/all
of the libraries.  But the end user just configures one name for his/her
provider.  The idea is to keep things simple - for the user, and for
development.

On the other hand, if I have multiple machines with different providers,
I don't have to compile a unique BIND for each.  I build it once with
(at least) the union of all the required providers, and deploy the same
image everywhere.  Better yet, with luck my distribution ships with all
the provider libraries that I need, and I don't compile anything!  The
config file is the only variant.

So, if you can switch to OpenSSL, it seems the best long-run option.  If
you can't (or are encouraged not to by other customers), you could
solve a lot of the customer pain by making the provider loadable.

For entropy, I use a mixture of USB keys and CPU hardware generators. 
As I may have mentioned, I use EntropyBroker to distribute the entropy
securely - this keeps cost reasonable, especially with many VMs (some of
which don't naturally generate a lot of entropy...).  See
https://www.vanheusden.com/entropybroker/ &
https://github.com/flok99/entropybroker. 

Hope this helps.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Should we bundle the MaxMind GeoIP db?

2018-05-30 Thread Timothe Litt

On 30-May-18 17:27, Victoria Risk wrote:
> Hello GeoIP users,
>
> We are aware that Maxmind is discontinuing their older free GeoLite
> location database and replacing it with a new database with a new
> format (GeoLite2). https://dev.maxmind.com/geoip/geoip2/geolite2/
>
> We have an issue open in the BIND gitlab to update our Geo-IP support
> to use the new database api.
>  https://gitlab.isc.org/isc-projects/bind9/issues/182
>
> The question is, would it be useful if we included the GeoLite2
> database with the BIND distribution? Since we update at least twice a
> year, we could keep it fairly well up to date, and it would save users
> having to go get and update the db themselves. It would add about
> 1.5MB to the BIND distribution (depending on whether we use the
> country or city level).
>
> Votes, comments welcome. 
>
> Thank you,
>
> Vicky
> -
> Product Manager
> Internet Systems Consortium
> vi...@isc.org 
>
>
I use GeoIP with webservers, but not with BIND.  I run a cron job that
pulls the Maxmind updates roughly monthly.  IP address allocations
change a bit more frequently than twice a year.

Rather than bundling the database, you might want to bundle a script to
automate the update process... preferably one that you don't have to
maintain.  (Stick to your core competency...)
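
For example, a one-line crontab sketch (assuming MaxMind's geoipupdate
tool and its usual config path):

    17 4 * * 3  /usr/bin/geoipupdate -f /etc/GeoIP.conf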

I think that would be more useful (and less likely to complicate the
lives of packagers) than bundling the database.

And less work for you :-)






Re: BIND Server running but not responding

2018-04-18 Thread Timothe Litt
On 18-Apr-18 09:51, Admin Hardy wrote:
>
> I would be so grateful of your help in this issue.
>
> I am running BIND 9 on Windows 7
> Service "ISC BIND" shows as started up
>

Warren's right.  And change your rndc-key's secret ASAP.
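
For instance, a sketch (the key file's location varies by platform):

    rndc-confgen -a    # writes a new rndc.key with a fresh secret

Then restart the BIND service so named and rndc agree on the new key.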

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: Suggestions for a distributed DNS zone hosting solution I'm designing

2018-03-09 Thread Timothe Litt
On 08-Mar-18 07:52, Tony Finch wrote:
> Best way to achieve this is with anycast, which can be pretty
> time-consuming to set up - try searching for Nat Morris's presentation
> "anycast on a shoestring" which he gave at several NOG meetings.
> The advantage of anycast (as opposed to having NS records in lots of
> locations) is that you are depending less on resolvers to work out for
> themselves which of your servers is fastest.
>
Does anyone know what happened to his project?

It looked like an interesting secondary DNS, but it seems to be out of
business.

noc.esgob.com has a recently expired certificate, and redirects to a
one-line text page (his name).

The github repository is empty.

So it appears to be defunct.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





Re: Re: DNSSEC validation without current time

2017-12-18 Thread Timothe Litt
rom NTP time every week or two - that's more than sufficient for
DNSSEC & NTP bootstrap.)

Alternatively, as previously discussed, if you need the best (non PTP)
time, add a GPS receiver, with pool backup.

You can skip the DNS cyclic dependency completely if you have
locally-trusted NTP and DHCP servers - provide your clients with the NTP
server addresses via DHCP.  (They're sent as IP addresses, not names.) 
This isn't as hard as it appears.  If you run NTP on all your machines
(yes, there's NTP for windows), your Pi can get time from them.

Further, since you run your own DNS server - presumably within some
firewall - you can trust it to serve your local zones.  DNSSEC not
required.  If you include your local machines in your NTP
configurations, everything is under your control.  It then becomes a
sequencing issue only if your entire site goes down.   (If so, you want
your local master to be up first.  Otherwise, the rest will coast using
other NTP sources.)  If you're really serious, you run at least 3 local
clocks - preferably something like GPS, WWV (or other radio source), and
a local atomic (or at least, TXCO)  clock.  If you start looking at
failure scenarios, it gets more interesting.

As previously noted, startup scripts need to have the "right" definition
of "system time available" & dependencies for your applications
(including named) to start.

Because they draw minimal power (and so will run a long time with a
modest UPS), I use an RPi with GPS & some pool servers as my preferred
time source.  It boots using an RTC.  My edge router also runs NTP,
preferentially taking time from the RPi - but also configured with other
Public and local servers.  In case the RPi goes down, the local machines
also participate - the low latency and dispersion pretty much ensures
that they'll be chosen over the public servers.  I may add another Pi
with another GPS and/or radio receiver, when I acquire enough round TUITs.

So, what to conclude?

  * If you have other machines in your local network, use them as NTP
sources and provide the addresses to your RPi via DHCP.  This is
cheapest and easiest.
  * If you don't need precise time (e.g. for purposes beyond DNSSEC),
the next cheapest solution (in $ and time) is to just add an RTC.
  * If you also want precise time, but don't need it to be highly
available, add a GPS.
  * For more availability, do both.  And possibly add other time sources
(Radio, TCXO, geographically dispersed GPS, more RPis...).

In any case, let us know what you end up with.

Have fun!

(1) This isn't an expensive problem to solve.  My RPi's RTC (TOY) uses a
DS1302 - I got a bunch from e-bay for about $2 (including battery &
shipping).  I could publish the software if there's interest.


rtc/rtc-ctl --show --debug
TOY Clock registers as read (UTC):
81: 57 RUN 57 sec
83: 42 42 min
85: 12 24H 12 hr
87: 18 18 date
89: 12 12 month
8B: 02 02 weekday
8D: 17 17 year
8F: 80  WP ctl
Applying drift correction of -28.370 PPM to 10869574.837 seconds
(125d 19h 19m 35s) elapsed
TOY    time is Mon Dec 18 2017 07:48:05 EST
System time is Mon Dec 18 2017 07:48:07.234 EST
Remaining offset is -2.234 sec (-0.206 PPM)

(2) 20 ppm is ~ one min/month (20e-6 x ~2.6 million seconds/month is
about 52 seconds).  Typical crystals can be 100 ppm or more (depending
on temperature & PCB layout), so ~5 min/month.  TSIG fudge is nominally
5 min, so resyncing every 1-2 weeks is close enough.  And also close
enough for sane DNSSEC configurations.  You can resync more often, but
it's a fair bit of bit-banging on a slow bus (I2C or SPI for most), and
there's no point.

Oh, why mention TSIG?  Because ... it's another time-sensitive part of
named, and often used for DHCP - DNS updates...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


Re: Re: DNSSEC validation without current time

2017-12-15 Thread Timothe Litt

On 15-Dec-17 07:44, Mukund Sivaraman wrote:

On Fri, Dec 15, 2017 at 12:45:11PM +0100, Petr Menšík wrote:
>> Hi folks.
>>
>> I am looking for a way to validate name also on systems, where current
>> time is not available or can be inaccurate.
> I use a Garmin 18x LVC 1pps GPS receiver device connected to RS-232
> serial port. The device plus cables cost me $70 altogether, and ntpd
> works natively with it using the NMEA refclock driver (there's no need
> of gpsd). It has a 1s PPS signal accurate to 1us. It is accurate to
> within +/- 100us on Fedora where due to no hardpps kernel support
> because of tickless kernel, the PPS signal is timestamped and available
> on /dev/pps0 but the kernel doesn't use it to directly maintain the
> clock and it has to be done from userland which is affected by the
> system load.  If you were to recompile a kernel that's configured
> appropriately, I feel the clock can be synchronized to about 1us
> accuracy.
>
> It is more or less reliable and value for $70 if one wants UTC on their
> computer without accessing the internet. This is more than sufficient
> for DNSSEC validation and many other network services, and certainly
> more accurate than using the ntp.org pools.
>
>   Mukund
>
I use an 18x LVC too (on Raspbian == Debian).  But I also have an RTC. 
GPS does have outages, can take a while to get a fix, and NTP wants
consensus.  So I use my GPS receiver as a local clock source
(preferred), but also configure several servers from the pools as a
sanity check - and to deal with any GPS outages/slow starts.  It's
worked well for me.
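
The relevant ntp.conf lines look roughly like this (a sketch - the NMEA
mode bits and fudge flags depend on your wiring and baud rate):

    server 127.127.20.0 mode 1 prefer    # NMEA refclock on /dev/gps0
    fudge  127.127.20.0 flag1 1          # use the PPS signal
    pool   pool.ntp.org iburst           # public servers as sanity check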

Along those lines, I haven't splurged yet, but Adafruit has an
interesting GPS module for ~$40 (US) on a breakout board ($45 on a Pi
Hat - which is cheaper/easier than building your own PCB), which
includes a GPS patch antenna.  (If you need an external antenna, it
comes up to about the cost of the Garmin, but draws only 20mA vs. 90,
and is a more modern receiver.)  On paper it looks good.

See https://www.adafruit.com/?q=ultimate%20gps - I'm not affiliated with
Adafruit, and while I've looked at the specs, don't have direct
experience.  YMMV.

Enjoy.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 







Re: DNSSEC validation without current time

2017-12-15 Thread Timothe Litt

On 15-Dec-17 06:45, Petr Menšík wrote:
> Hi folks.
>
> I am looking for a way to validate name also on systems, where current
> time is not available or can be inaccurate.
>
> This is related to booting with NTP client, when the only configuration
> is a hostname that has to be resolved. There is a bit of a circular dependency.
> First current time is required for DNSSEC validator to verify signatures
> of all keys. However that is hard to maintain on systems without RTC
> clock running when it is down. Raspberry PI is example of such system.
> Until hostname is known, time cannot be synchronized and corrected to
> real value. They sort of depend on each other. The only secure way I
> found is to hardcode IP address into NTP client or obtain IP from other
> trusted source (DHCP?).
>
> Available option is of course to disable validation until valid time is
> received. It seems to me that is unnecessary lowering the security. I
> would like some option to limit checking validity period of used keys
> instead. Just validate existing keys from trust anchor and trust the
> last key that can validate. I think that is far better than no
> verification at all.
>
> Is it possible to do that in BIND? Maybe bootstrap verification could be
> done only with delv tool with time-checking disabled. I found no way to
> do that. Is there good reason why it is not available? Is better method
> for solving secure configuration of timeless system available?
>

I added an RTC to my Pis :-)  It makes life a lot simpler, even though I
had to write a driver and calibration mechanism.

But if you have access to a DHCP server, have the client request Option
42; this returns one or more NTP servers' IP addresses in preference
order.  You can use NTPD (or ntpdate) to get a time.   ISC DHCP client
supports this option; see dhcp-users if you need help.
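
With ISC dhclient, that's one line in dhclient.conf (a sketch; trim the
request list to taste):

    request subnet-mask, broadcast-address, routers,
            domain-name-servers, ntp-servers;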

DNSSEC requires reasonably accurate time, as signatures have validity
periods.  Your scheme would not work; you need time to validate ANY
signature - from the trust anchor down.  If there's no time, you can't
validate any part of the chain - so you might as well use ordinary DNS. 
NTP is fairly robust; it uses consensus from multiple servers to
establish correct time.  For a rogue DNS to inject bad time into your
PI, it would have to know which NTP servers you are using.

Another option is to use DHCP to get the address of a validating
resolver, and rely on that for bootstrapping NTP.  Again, this depends
on whether your control/trust your DHCP server.  More ISPs are providing
validatiing DNS server, but it's not universal. Hardcoding one of the
public ones (e.g. Google - 8.8.8.8, 8.8.4.4, 2001:4860:4860::,
2001:4860:4860::8844) is fairly safe. 

NTP server addresses are more volatile, and it's a serious breach of
netiquette to hardcode them; there are a number of stories of how this
has gone badly wrong for all concerned.

The choice depends on your requirements, available resources, and risk
tolerance.

You also need valid time for many other applications; TSIGs require a
reasonably close (on the order of minutes) time sync between sender and
receiver.

So rather than try to tweak NAMED, focus on getting a reasonable time
early in boot - and make sure that dependencies on a valid time are
properly expressed in your startup scripts.

Bottom line: your problem is getting a reasonable time, not with the
consumer(s).

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




Re: DNAME usage?

2017-11-21 Thread Timothe Litt
On 17-Nov-17 18:04, Mark Andrews wrote:
> DYN used to just require a TSIG signed update request sent to a server 
> specified in
> a SRV record.
Depends on which service.  The one I referred to is the one that was
popular (free) for people who wanted to reach a machine on a dynamic IP
address.  Because it was popular, it was implemented in a number of
routers, including Linksys (low end) and Cisco (IOS).  I believe they
discontinued the free version, but the protocol lives on.

It's worse than DNS UPDATE in a number of respects - but is trivial to
implement in a router or script as the core is just an HTTP GET.
>
> We have a perfectly fine protocol for updating the DNS but DNS hosting 
> companies
> want to reinvent the wheel.
Agree. I wish that the DNS UPDATE protocol was the only one in the
wild.  Unfortunately, (non-jail broken) routers don't provide that
option, but do provide the http ("dyn") version.  So if you want to use
a service that requires it - or want to bridge a router that supports it
to DNS UPDATE, some invention is required.  I outlined an approach that
works for me.

For reference, Cisco's IOS (now) supports both methods - to some extent.

See
https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_dns/configuration/15-sy/dns-15-sy-book/Dynamic-DNS-Support.html#GUID-DCA9088D-EB90-46DE-9E33-306C30BB79CE

And from that page, here's the reference to dyndns (you can change the
URI for other http services; it lists 6 others)

add

http://test:t...@members.dyndns.org/nic/update?system=dyndns&hostname=<h>&myip=<a>

I use https, of course.

Naturally, IOS doesn't support TSIG - so DNS UPDATE from it has to be
authorized by IP address. :-(

2136/7 have been around since 1997, so there's really no excuse for DNS
providers not to support them.

But we live in a world of excuses :-(




Re: Re: DNAME usage?

2017-11-17 Thread Timothe Litt

On 17-Nov-17 14:48, Mark Andrews wrote:
> Alternatively use a http server that can update the records for the 
> interfaces it is listening on. 
>
> This sort of thing is possible. Named gets informed by the OS when addresses 
> get added and removed. It currently just adds and removes listening sockets 
> but you could trigger other actions like sending dynamic dns updates.
>
> Unless you ask for the functionality it won’t be added.
>
>
> --
> Mark Andrews
>
>> On 18 Nov 2017, at 06:38, Mark Andrews  wrote:
>>
>> Just have the machine hosting the http server do a dynamic update of the A 
>> and AAAA records when they are assigned to the interface.
>>
>> It should be possible to get the os to run a program when this happens so it 
>> can perform a second dynamic update on a the different name. 
>>
>> -- 
>> Mark Andrews
We don't have the whole story from the OP, but in the typical
configuration that prompts this question, neither will solve the
problem.  The problem is that the dhcp client and http server are likely
not the same machine.

If you have a dynamic IP(v4) address & want to update DNS for a server,
it's probably NATed (by a router) before the HTTP server sees it.

The HTTP server always listens on the inside NAT address.  So it won't
see an address change on its interface.

The router implementing NAT is what will get the dynamic address, from
an ISP.  If it's a sealed box, it probably has support for updating DNS
- though it's typically the dyndns protocol, not DNS update.  (Assuming
the ISP hasn't disabled the feature.)  This is what dyndns, NO-IP, &
others use.  If you can modify the URL that it uses, you can point it to
your own script, which then does a DNS UPDATE transaction.  I use this
approach with Cisco IOS routers - though many others allow this - and
still others can be fooled (e.g. with a HOSTS entry for one of the
update servers).  What's nice about this is that you don't have to
jailbreak or modify anything.  Just pretend to be an update service. 

If you're using a jailbroken or other Linux router, and it happens to be
the same physical machine as HTTP server, it could look for routing
updates on the external interface.  I don't think this is a common case
(except for jailbroken routers - like OpenWRT).

Most often, the HTTP server is on a separate machine and LAN - it can't
see the external interface that gets the dynamic address.

When the router won't notify someone about address changes, the usual
solution is for something behind the NAT to poll an external public
server for your IP address, then use the result to initiate a DNS
UPDATE.  (e.g. A local script asks the external server to return the IP
address that contacted it. (REMOTE_ADDR))  There are a bunch of services
and scripts for this.  Most of the scripts update a DNS provider with
the dyndns protocol (others use it).  The nicer "what's my IP address"
scripts return JSON.  But changing them to do DNS UPDATE is pretty
simple - see Net::DNS if you're a Perl person.
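
A sketch of the UPDATE half with nsupdate (names, key file, and address
are illustrative):

    nsupdate -k /etc/ddns.key <<'EOF'
    server ns1.example.net
    zone dyn.example.net
    update delete home.dyn.example.net. A
    update add home.dyn.example.net. 300 A 203.0.113.7
    send
    EOF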

If you have more than one site - or a friend - and prefer to be
independent, you can easily write your own CGI scripts to return the
other's IP address, e.g.: printf 'Content-Type:
text/plain\nConnection: close\n\n%s\n' "$REMOTE_ADDR"; exit  (plain echo
won't expand the \n escapes portably; printf will.  If your friend
doesn't have a static IP address, beware of deadlocks.)

If you have access to the DHCP client's status (e.g. a leases file or
some GUI or CLI on the router), you can sometimes get the external
address from there. 

A web search for "dynamic IP update script" will turn up lots of
resources - scripts & services.

A drawback with polling solutions is that they're not instantaneous -
you get the polling delay on top of whatever minimum TTL the DNS service
imposes.  (And there are limits on how fast you can - or would want to -
poll.)  That's fine for home hobbyists - especially since dynamic IP
addresses are often stable for a VERY long time.  But I would be careful
about running a business or other critical server where DNS updates lag
address changes.

So get a router that talks some dynamic update protocol and go from
there.  That minimizes the delay, and avoids having to retrieve your
public address from an external source.

https://help.dyn.com/remote-access-api/perform-update/ defines the
dyndns update protocol - writing a server is straightforward.
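
A minimal sketch of the server side, as a CGI in Perl (the zone, key,
and names are hypothetical, and a real one needs the protocol's
basic-auth checks too):

    #!/usr/bin/perl
    # Sketch of a dyndns-style endpoint: the router calls
    # GET /nic/update?hostname=...&myip=...; we turn that into a DNS UPDATE.
    use strict;
    use warnings;
    use CGI qw(param);
    use Net::DNS;

    my $host = param('hostname') || '';
    my $ip   = param('myip') || $ENV{REMOTE_ADDR};

    print "Content-Type: text/plain\n\n";
    unless ( $host =~ /^[\w-]+\.example\.net$/ && $ip =~ /^[\d.]+$/ ) {
        print "notfqdn\n";              # dyndns protocol error reply
        exit;
    }

    my $update = Net::DNS::Update->new('example.net');
    $update->push( update => rr_del("$host. A") );
    $update->push( update => rr_add("$host. 300 A $ip") );
    $update->sign_tsig( 'ddns-key.example.net', 'base64secret==' );

    my $reply = Net::DNS::Resolver->new( nameservers => ['ns1.example.net'] )
                    ->send($update);
    print( ( $reply && $reply->header->rcode eq 'NOERROR' )
           ? "good $ip\n" : "dnserr\n" );   # dyndns protocol replies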

Of course if you have IPv6 - and are getting a dynamic address - you
don't have to deal with NAT.  In that case, you can certainly have
dhclient or RTNETLINK (see ip monitor) trigger a script.  
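
A rough sketch of that hook (names and key path invented; it doesn't
bother to distinguish address-added from address-deleted events):

    ip -6 monitor address | grep --line-buffered 'scope global' |
    while read -r line; do
        addr=${line#*inet6 }; addr=${addr%%/*}
        printf 'server ns1.example.net\nzone example.net\nupdate delete host.example.net. AAAA\nupdate add host.example.net. 300 AAAA %s\nsend\n' "$addr" |
            nsupdate -k /etc/ddns.key
    done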

But note that in the problem statement is:
> the super domain is managed by an outside service. 
This probably makes the OP's life more difficult.  Those services tend
not to support DNS UPDATE (or even dyndns update).  In that case, you're
into using curl/wget to post forms to their web GUI.  And tracking their
"improvements".

Grief like that is why I ended up running my own DNS master server...and
getting static IP addresses for my central site. 

I guess I 

Re: Re: checkhints: view “internal”: b.root-servers.net/AAAA (2001:500:200::b) extra record in hints

2017-09-10 Thread Timothe Litt
The most sensible thing to do is ignore the message, and keep named
reasonably up-to-date.

I used to maintain a local hints file with a script that periodically
downloads and updates it (from internic or the DNS), reconfiguring named
when it changes.  It works well - but it's really not worth the effort. 
I've switched to just using the built-in hints.
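
(For anyone who still wants a local copy, the script amounted to little
more than this sketch - paths assumed:

    wget -q https://www.internic.net/domain/named.root -O /var/named/named.ca.new &&
      ! cmp -s /var/named/named.ca.new /var/named/named.ca &&
      mv /var/named/named.ca.new /var/named/named.ca &&
      rndc reconfig
)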

The hints are only used to locate a root server ("root priming"); as the
message indicates, once any one is found, named will query it  for the
current servers/addresses and check for consistency.   It uses the query
results; the multiple hints provide redundancy for the initial query -
but you don't need all 13 (26) to be correct.  The only reason to worry
is if most of the hint addresses go stale at once - which would be
unprecedented in the history of the DNS.

Note that when root server addresses go stale, the convention is that
the old address is kept in service for some time after the change, so
there's plenty of time for clients to catch up with no impact.  For B
root, the plan is at least 6 months. 
(https://b.root-servers.org/news/2017/06/01/new-ipv6.html)

There does seem to be an issue where if cache memory size is small &
root references rare, the root server records are evicted - causing the
hints to be re-fetched and the messages repeated.  Arguably, named
should treat these as more precious than other records when doing cache
evictions.

But they're just informational messages.  You should run a reasonably
current version of named for security and performance.  As long as you
do, the built-in hints will be perfectly adequate.  Even if you don't,
the hint addresses from a decade ago are adequate to bootstrap named. 
The only good reason to have private hints is if you have an alternate
DNS universe - which is highly discouraged.

For more detail, see
https://kb.isc.org/article/AA-01309/0/Root-hints-a-collection-of-operational-and-configuration-FAQs.html

Bottom line is that these messages are a nuisance & in almost all cases
the most effective use of your time is to ignore them... The effort of
maintaining a private copy of the root hints isn't worthwhile.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 09-Sep-17 23:14, Stefan Sticht wrote:
> Hi,
>
> thanks for all the suggestions.
>
> I have no forwarders configured.
> I started downloading and using the hints file from 
> ftp://FTP.INTERNIC.NET/domain/named.cache shortly after I noticed the problem.
>
> # grep B.ROOT /var/named/named.ca
> .                    360  NS    B.ROOT-SERVERS.NET.
> B.ROOT-SERVERS.NET.  360  A     192.228.79.201
> B.ROOT-SERVERS.NET.  360  AAAA  2001:500:200::b
>
> I wouldn’t expect a problem with my hints file.
>
> Thanks,
> Stefan
> .org
> https://lists.isc.org/mailman/listinfo/bind-users


___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: make AAAA type the default for dig

2017-06-14 Thread Timothe Litt
On the original topic, it would be nice to have a dig option that
returned both A and AAAA with one command.

Since it does this, I tend to use 'host' (note that host -v gives the
same response detail as dig -t A; dig -t AAAA; and dig -t MX).

On the other remarks, inline.

On 14-Jun-17 21:09, Mark Andrews wrote:
> In message <20170614132510.6ff832a5@ime1.iment.local>, Paul Kosinski writes:
>> Has IPv4 faded away and I didn't notice? Unlike the well planned switch
>> to Area Codes, IPv6 is not backward compatible.
> It has started to fade away.  If you have IPv6 at home, statistically,
> most of your traffic will be IPv6.  There will be outlier homes but
> in general IPv6 will carry more traffic than IPv4.
Not that I've noticed here in the US.  Comcast does have IPv6 to the
home (well, except for some of their
acquisitions that haven't been upgraded yet.)  Pretty much no other ISP
offers it.  The fiber projects - google and verizon both stopped
deployment (of fiber).  I think Google's fiber supports IPv6; Verizon's does not.

Beyond that, you can get fiber (and sometimes IPv6) if you're a large
business.  When I looked for an alternative to Verizon, I was quoted
~$50K for an "engineering feasibility study" for getting fiber to the
house, with corresponding monthly charges.  Not viable for my hobbies.

There are some fringe ISPs in a few markets that offer IPv6 over DSL if
you insist - but who wants DSL speeds (and prices) when you can usually
at least get cable, and if you're lucky fiber at a much lower cost/bit/sec?

> B2B traffic isn't quite as high but there too IPv6 takes a significant
> amount of traffic.
>
>> (The telcos would have gotten rather a lot of complaints if they said
>> every had to get a new telephone number, and also -- new telephones.)
> I've had to get new telephone numbers to fit in more customers over
> the years with support for the old number being removed after a
> year or so.
>
>   462910 -> 4162910 -> 94162910
Yes, here in the US we have periodic "area code" splits that cause
renumbering, stationary and advertising changes, and general angst.

> As for new telephones, yes this has been mandatory, switching from
> rotary to DTMF.  There was a period of overlap but support for
> rotary phones was turned off in the exchange.
Rotary phones are still supported here.

But I use VoIP.  Over IPv4.  (And my VoIP adapters do support rotary
dialing)

> Most of you have thrown out several generations of computing devices
> that have supported IPv6 without even being aware you were doing
> so.  IPv6 support is 20+ years old.  My daughter, who has now left
> home, has lived her entire life with equipement in the house that
> has supported IPv6.  The house had working IPv6 connectivity before
> she went to primary school.  She graduated from Y12 last year.
>
> I'm still waiting for my ISP to turn on IPv6.  The CPE router
> supports it.  I just routed around them to get IPv6 at home.
I still can't get native IPv6 - but I have FTTH and can get 500Mb/s IPv4
(for a price I won't pay).
So Tunnels.  BTW, SixXS has retired, leaving no U.S. tunnel provider
that supports DNSSEC
for the reverse delegations.  (Well, none in my price range.)

Bottom line is that experiences vary.  The US has a complex regulatory
environment - and large diverse geography.  It moves with a deliberate
lack of speed.

The other consideration for the ISPs is that it's a lot harder for them
to justify charging for static/more than 1 IPv6 address.  There's an
extreme disincentive for them to cut their revenue stream.  (I've seen
some plans where they're seriously proposing to issue /128s.  As you
say, Luddites - capitalist Luddites.  Sigh.)

The address space exhaustion hasn't really moved the needle at the
consumer/small business level - the ISPs are quite happy to NAT - and
they hoard.

> If you have a piece of computing equipement bought in the last 10
> years that doesn't suppport IPv6 today it is because the manufacture
> is a ludite, not because IPv6 doesn't work.
Agree, though there is also the point of view that since customers can't
get IPv6, shipping it in products adds cost (qualification, risk of
bugs, memory/documentation) with no perceived benefit to the vendor or
customer.  I don't subscribe to that POV - but it isn't entirely irrational.

> Mark
>
>> On Wed, 14 Jun 2017 22:10:25 +1000
>> Mark Andrews  wrote:
>>
>>> In message , "Marco
>>> Davids (SIDN)" writes:
 Hi,

 Not sure if this has been proposed before, but I am wondering:

 Has ISC ever considered to change the default 'dig -t' option from
 A to AAAA?

 --
 Marco
>>> This would break too many scripts.  You can do this for yourself
>>> by setting the type in $HOME/.digrc
>>>
>>> % cat ~/.digrc
>>> -t AAAA
>>> % dig isc.org
>>> ;; BADCOOKIE, retrying.
>>>
>>> ; <<>> DiG 9.12.0-pre-alpha+hotspot+add-prefetch+marka <<>> isc.org
>>> ;; global options: +cmd
>>> 

Re: RE: Providing GeoIP information for servers

2017-05-11 Thread Timothe Litt
On 10-May-17 17:50, John W. Blue wrote:
> >From the it-could-be-worse department:
>
> https://arstechnica.com/tech-policy/2016/08/kansas-couple-sues-ip-mapping-firm-for-turning-their-life-into-a-digital-hell/
>
> I am more a fan of continental geolocation accuracy when it comes to IP 
> addresses.
>
> John
If your static IP address has a reverse name in DNS, it's a short hop
through whois to your actual location.

Well, usually. It is possible that none of the contact addresses are
where the IP address is located - especially for large organizations.
And there are the whois proxies that obscure your physical location.

Still, it's pretty hard to hide.  (Even in a Kansas lake.)

Depending on your situation, you may wish to have different accuracy
and/or precision in internal and external LOC records.

But on the original topic:  Contact Maxmind and see if they'll fix your
address. https://support.maxmind.com/geoip-data-correction-request/ 
They may require evidence that Comcast has delegated the address to you.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


> 
> From: bind-users <bind-users-boun...@lists.isc.org> on behalf of Mark Andrews 
> <ma...@isc.org>
>
>
> AFAIK Maxmind et al don't lookup LOC records.  That being said if
> enough people published LOC records they might start.
>
> For Google you can update the location using a app which uses the
> phone's GPS.
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
>

___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: Slow zone signing with ECDSA

2017-04-19 Thread Timothe Litt

On 19-Apr-17 21:43, Mark Andrews wrote:
> ...
> DSA requires random values as part of the signing process.  Really
> all CPU's should have real random number sources built into them
> and new genuine random values should only be a instruction code away.
>
> Mark
Most recent ones do.  See RDRAND for Intel (and AMD).  Even Raspberry Pi.

The tinfoil hat brigade in some distributions has resisted using them,
fearing some conspiracy to provide not-so-random numbers.  (Despite the
fact that /dev/random hashes/whitens the inputs to the entropy pool.) 
You may need to take a positive action to enable use of the hardware
source.  Google RDRAND for plenty of entertainment.

There are also fairly inexpensive (~usd 50) USB devices that provide
reasonable entropy quality at decent speeds.  (But much lower than
RDRAND.)  They're good for the old hardware that you recycle for
single-purpose servers.

Systems that have low activity/low entropy can benefit from
entropybroker (https://www.vanheusden.com/entropybroker/).  Use it to
distribute entropy from those who have to those who don't.  It's really
handy for VMs, and for that isolated system that you use for your root keys.

For most uses, use /dev/urandom - which doesn't block.  /dev/random will
block if the entropy pool is depleted.  (However, if you have a hardware
source, very, very rarely.)  /dev/random is recommended for long lived
keys - which usually includes KSKs, and may include ZSKs.  I don't
believe named makes a distinction...you get to pick one for everything.
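
For example - a sketch; the option and the paths vary by build and
distribution:

    options { random-device "/dev/urandom"; };   # named.conf: one source for everything

    # Long-lived keys: point the generator at the blocking source
    dnssec-keygen -r /dev/random -f KSK -a ECDSAP256SHA256 example.net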

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: Bind Queries log file format

2017-02-04 Thread Timothe Litt

On 04-Feb-17 04:27, Phil Mayers wrote:
> On 03/02/17 16:45, Mukund Sivaraman wrote:
>
>> The query log is getting more fields at the end of it such as
>> CLIENT-SUBNET logging.
>
> Although it would be super-disruptive, has any thought been given to
> moving to an entirely new log format, for example k/v or JSON? They're
> a lot more extendable going forward and most SIEM/ML systems will read
> them with no additional configuration.
>
> Adding the query log hex/ptr thing just inconvenienced me. Strangely,
> changing the entire format to k/v would have massively helped me. This
> applies across all logs (RPZ in particular).
>
> Obviously one sample isn't enough but it's maybe something to consider?
I'm not sure whether I'm in favor of this approach, but it's not
necessarily very disruptive.

It would be trivial to script a converter from JSON to the current log
format - or even one that took a format string to select whatever fields
in random order.  Pipe a new log file through it to existing log readers,
and you're done. 
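
For instance, a sketch of such a converter - the JSON field names are
invented, since no such format exists yet:

    #!/usr/bin/perl
    # Convert a hypothetical JSON query log back to the traditional
    # BIND query-log line format, one object per input line.
    use strict;
    use warnings;
    use JSON::PP;

    my $json = JSON::PP->new;
    while ( my $line = <STDIN> ) {
        my $q = eval { $json->decode($line) } or next;   # skip non-JSON lines
        printf "%s queries: info: client %s (%s): query: %s %s %s %s (%s)\n",
            @{$q}{qw(time client qname qname qclass qtype flags server)};
    }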

For almost complete transparency, embed in a daemon that continuously
reads the JSON log & appends to the traditional log; the existing log
readers can read the old format in near real-time...

Then when a support issue (or other requirement) comes up, the enhanced
data is in the JSON log.

When your old log processor is upgraded to use a new field, just add it
to the converter (format).

New processors would preferably read the JSON/native format directly.

The only annoyance is having to manage 2 log files (and some disk space).

FWIW




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: DNSSEC validation failures for www.hrsa.gov

2016-06-25 Thread Timothe Litt

On 24-Jun-16 22:13, Jay Ford wrote:
> On Sat, 25 Jun 2016, Mark Andrews wrote:
>> The servers for webfarm.dr.hrsa.gov are not EDNS and DNSSEC compliant.
>> They are returning FORMERR to queries with EDNS options.  Unknown
>> EDNS options are supposed to be ignored (RFC 6891).
>>
>> You can workaround this with a server clause to disable sending the
>> cookie option with a server clause.
>>
>> server <address> { request-sit no; };   // 9.10.x
>> server <address> { send-cookie no; };   // 9.11.x
>
> That did it, at least for now.
>
>> Now one could argue that FORMERR is legal under RFC 2671 (the initial
>> EDNS specification) as no options were defined and to use a option
>> you need to bump the EDNS version but the servers don't do EDNS
>> version negotiation either as they return FORMERR to a EDNS version 1
>> query rather than BADVERS.  They also incorrectly copy back unknown
>> EDNS flags.
>
>> Whether this is the cause of your issue I don't know but it won't be
>> helping.
>
> The HRSA folks claim that their "site is fine".  In hopes of
> disabusing them of that notion I'll have our folks who have to try to
> use the HRSA site pass along the trouble report.
>
> Thanks for the diagnosis & work-around.  Excellent as always & crazy
> fast, too!
>
> 
> Jay Ford, Network Engineering Group, Information Technology Services
> University of Iowa, Iowa City, IA 52242
> email: jay-f...@uiowa.edu, phone: 319-335-
>

FWIW, dnsfp identifies the DNS servers as:

fingerprint (162.99.248.222, 162.99.248.222): Unlogic Eagle DNS 1.0 -- 1.0.1 
[New Rules]  

If this is correct, the project website for Eagle DNS would appear to
be: http://www.unlogic.se/projects/eagledns

It seems a rather odd choice for a .gov (US Health and Human Services)
owned domain...though one never knows what IT outsourcing will produce :-)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 



smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Writeable file already in use

2016-01-05 Thread Timothe Litt
Jan-Piet Mens <jpmens@gmail.com> wrote:
> This might make you sad if you have lots of zones or large zones.
> .. or even just want to look at what was transferred (whitout having to
> recurse to a `dig axfr').
>
> I see no reason to omit 'file' (except on a diskless slave).
Or if you care about availability, which is a strong reason for having a
slave in the first place. (Performance is the other.)

If a diskless slave restarts when the master is down, it has no data to
serve.  This will also make you (or your clients) sad, even if you only
have a few small zones :-(

I agree - don't omit 'file', except on a diskless slave.  Don't try to
share the file, even when it seems to work.  And think twice about why
you have a diskless slave...

The only fault that I find with bind's decision to prohibit shared
writable files is that it took so long to arrive.  Instead of
complaining, which seems to appear here every few months, the response
should be "Thank you - for *finally* preventing this disastrous
misconfiguration."

I've lost count of how many times I've encountered someone who had
corruption due to this misconfiguration.   There are many (working) ways
to replicate data.  Among them: in-view, dname, external scripts to copy
files, external tools that write records to multiple files, replicators
triggered by file writes (e.g. inotify) or database update triggers 

Although I remember when a 1MB ("hard") disk was huge - today disk space
is cheap.  Don't trade a few MB (or GB) of space for eventual data
corruption.  And the manpower to implement any of the above is far less
that that spent on recovering from corruption, which can go undetected
for a long time.  [And usually, the folks who run into it haven't tested
their backups...]

As for the "I know I'll never have bind update that zone" - that may be
true today.  But it changes -- perhaps when your successor discovers
it.  Either a tool requires dynamic update, or someone discovers signed
zones, or realizes that dnssec maintain saves a lot of work, or the next
technology comes along.  To misappropriate a K quote - "Your constant
is my variable".  Or the ever popular "If you don't take the time to do
it right, you'll have to make the time to do it over...and over again".

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: intermittent SERVFAIL with a DLV domain

2015-12-24 Thread Timothe Litt
On 23-Dec-15 08:34, Tony Finch wrote:
> Tony Finch <d...@dotat.at> wrote:
>
> Also, why is it trying to get address records for a reverse DNS name? 

An ip6.arpa or in-addr.arpa zone is not restricted to PTR records. 
There's nothing special about 'reverse zones'.

dnsviz uses some heuristics to guess what records are worth looking for.

A while ago I asked Casey to have DNSVIZ check for more than PTR+DNSSEC
records in reverse zones, which he did.
There's a panel in dnsviz where you can change what it looks for if you
want more (or less).

A/AAAA records are used in reverse zones by an obscure RFC (1101
encoding of subnet masks), and by others for similar purposes.

(It shouldn't be surprising that CNAME, TXT, RP, LOC and DNSSEC-related
records can be in reverse zones too.)

dnsviz launches its queries in parallel, so asking for a few extra
records doesn't hurt anyone.


> 23-Dec-2015 13:20:54.328 lame-servers: info: broken trust chain resolving 
> 'a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/DS/IN': 94.126.40.2#53
> 23-Dec-2015 13:20:54.328 lame-servers: info: broken trust chain resolving 
> '1.0.0.0.3.2.1.0.0.0.0.0.0.0.0.0.2.0.0.f.a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/AAAA/IN':
>  2a01:8000:1ffa:f003:bc9d:1dff:fe9b:7466#53
> 23-Dec-2015 13:20:54.398 lame-servers: info: broken trust chain resolving 
> '1.0.0.0.3.2.1.0.0.0.0.0.0.0.0.0.2.0.0.f.a.f.f.1.0.0.0.8.1.0.a.2.ip6.arpa/A/IN':
>  217.168.153.95#53
>
> Tony.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477

2015-09-08 Thread Timothe Litt

On 08-Sep-15 00:46, stavrostseriotis wrote:
>
> Ok here is what I did:
>
> · After extracting the package I looked out at directories
> /usr/local/bin and /usr/local/sbin as mentioned in the procedure
> but I found that there are no files there.
>
> · I run the configure command without openssl because I had
> trouble with the openssl library when it was enabled. Also since I am
> not currently using DNSSEC I guess that this is not a problem.
>
> · Then I run make and I didn’t get any error.
>
> · I run make install and I didn’t get any error again.
>
> · Stopped named service
>
> · I copied the /etc/named.conf file and then created another
> empty file as instructed with the correct permissions.
>
> · Started named service. It started normally without any error
> and also the process that was up is the same as before.
>
> · When I do named -V and also rpm -q bind I still see the
> same versions as before.
>
>  
>
> Yes I know that if I was using the RedHat package I wouldn’t had this
> problem because I already do this for other linux machines. Just this
> machine is old and when it was configured to work as nameserver the
> guys did it this way. Now we are in the process to build a new machine
> for nameserver with RedHat subscription and everything but until that
> happens it will be best if we can get rid of this security
> vulnerability cause I don’t know how long it will take.
>
>  
>
> Thank you for your responses.
>
>  
>
You are not making it easy to diagnose your problem.  The exact commands
that you are using and command output are missing.

From your description, you successfully built named and installed it -
somewhere.

You are not running the image that you built.  To confirm the version of
what you built, from the build directory you can run "./bin/named/named
-V"  This will also show us the configure options, including where it
should have been installed.

If the process has the same ID, you didn't successfully stop the old
named.  This can happen if you have a mix of RedHat and non-RedHat
startup (init) files. 

If rpm -q bind shows a version, then there is a RedHat package on the
system & you are trying to supersede it.  You probably are using the
RedHat startup files, which may be different from what you expect.  As I
wrote previously, the startup environment may have a different PATH from
your terminal.

You should have stopped named BEFORE running make install.

Please provide the output of at least:
named -V; echo $PATH; (build-directory)/bin/named/named -V; systemctl
status named.service; find / -xdev -type f -name named -ls

A few lines from make install should confirm that the new file is being
installed where you expect it.

lsof -p (named's pid) will confirm which image is actually running.

systemctl show --all named.service will show what service you're trying
to start.
systemctl status named.service should match

Or run service named status & look in /etc/init.d/named if you're not
running systemd and named is a SysV script on your version of RedHat.

You should not have trouble building with openssl.  Make sure that you
have the openssl-dev RPMs installed.  Don't try to build that from
source; RedHat heavily patches it & other packages depend on the changes.

Switching to the RedHat version of named may be your best option.  This
should not be difficult; make uninstall; yum install; edit the config. 
Depending on how your predecessors did things, you may need to yum
remove first, possibly with --force.


Timothe Litt

ACM Distinguished Engineer

--

This communication may not represent the ACM or my employer's views,

if any, on the matters discussed. 


> From: bind-users-boun...@lists.isc.org
> [mailto:bind-users-boun...@lists.isc.org] On Behalf Of Timothe Litt
> Sent: Monday, September 07, 2015 2:29 PM
> To: bind-users@lists.isc.org
> Subject: Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
>
>  
>
> Subject:
>
> Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
>
> From:
>
> stavrostseriotis <stavrostserio...@semltd.com.cy>
> <mailto:stavrostserio...@semltd.com.cy>
>
> Date:
>
> 07-Sep-15 05:24
>
>  
>
> To:
>
> bind-users@lists.isc.org <mailto:bind-users@lists.isc.org>
>
>  
>
> Hello,
>
>  
>
> I have a RedHat 5.11 machine and currently I am facing the issue
> with BIND vulnerability CVE-2015-5477. I cannot update my BIND
> using yum because I didn’t install BIND from RedHat at the first
> place so I need to do it manually.
>
> I downloaded the package of version 9.9.7-P2 from isc website but

Re: Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477

2015-09-07 Thread Timothe Litt
> Subject:
> Install BIND 9.9.7-P2 to fix vulnerability CVE-2015-5477
> From:
> stavrostseriotis <stavrostserio...@semltd.com.cy>
> Date:
> 07-Sep-15 05:24
>
> To:
> bind-users@lists.isc.org
>
>
> Hello,
>
>  
>
> I have a RedHat 5.11 machine and currently I am facing the issue with
> BIND vulnerability CVE-2015-5477. I cannot update my BIND using yum
> because I didn’t install BIND from RedHat at the first place so I need
> to do it manually.
>
> I downloaded the package of version 9.9.7-P2 from isc website but
> since it is not an rpm file I have to build it myself.
>
> I followed the instructions I found on website
> https://deepthought.isc.org/article/AA-00768/0/Getting-started-with-BIND-ho
> but it does not change the version of bind. I don’t know what I am
> doing wrong.
>
> I am wondering if you can give me a little guideline on how to build
> and install the new version.
>
>  
>
> Thank you
>
"does not change the version of bind" - as reported how?  By named -V? 
Or by a DNS query to version.bind CH TXT?

If the former, you probably have more than one named executable - with
the old one earlier in your PATH.  "which named" should help.  If the
latter, did you remember to restart named?  And did the restart
succeed?  And does your startup process have the same PATH as your
terminal?  (Often they do not.)

Re-read the instructions - and pay special attention to how you run
configure.  The default is to build/install in /usr/local/*bin - which
is not the default for most distributions' startup files.

I strongly recommend keeping track of each step as you build (a big
scrollback buffer helps).  Either write your own instructions, or turn
it into a script.  There are enough steps that it's easy to make a
mistake - and you will be re-building bind again to upgrade.  Plus, if
you ask for help, you will be able to provide the details of what you
did.  Without details of what you did and what you see, people can't
provide specific help.

Note that RedHat usually has a number of patches (often for SeLinux and
systemd) that you won't get if you build yourself from ISC sources. 

Or remove bind and switch to the RedHat version.  You're paying RedHat
to do the maintenance, so unless you have local patches or very special
requirements, you might as well let them do the work. 

Typically, if you really need the latest from ISC on RedHat you're
better off getting the SRC RPM from RedHat & modifying the rpmbuild
config file to fetch the latest ISC source, then build RPMs.  If you
stay with the same ISC code stream, you won't have too many patch
conflicts to resolve.  After you've done this once or twice, you'll want
to revisit you need for local changes - either decide they're not that
important, or offer them to ISC.  Maintaining a private version is work.
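
Roughly, a sketch (the spec file details and paths vary by release):

    rpm -i bind-9.x.y-z.el5.src.rpm
    # edit Version:/Source: in the bind spec to point at the ISC release,
    # then rebuild and install the resulting RPMs
    rpmbuild -ba /usr/src/redhat/SPECS/bind.spec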

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: Identify source of rndc reconfig command?

2015-08-25 Thread Timothe Litt
Robert,

While all the advice provided is good, you might also send a suggestion
to bind9-bugs.

The received control channel command message would be more useful if
it included the peer address & port, e.g.:
   ... general: info: received control channel command 'reconfig' from
127.0.0.1:48466.

That would avoid having to use tcpdump to identify the source of these
sorts of problems.

Other thoughts:

If you have selinux enabled, you can (temporarily) deny access to port
953 with a local policy module, and use the resulting audit log entries
to determine the command.  To avoid service disruption, use setenforce 0
(permissive) for the duration.  This is the simplest approach (fewest
tools, quickest  most certain results).  But you do need to know how to
setup a LPM... and if you're not running selinux already, it can be a
hassle to setup.  (I recommend doing it, but not in the middle of this
fire.)

Every 30 mins sounds like some sort of monitor.  Check that named.conf
isn't changing (which could trigger such a monitor.)  Or stop all system
management/monitoring packages until you find the culprit.

Consider  inotify-tools.  If a monitor is keeping an eye on bind, you
can catch it looking at (or touching) named's files.

lsof is a bit heavyweight for this.  Consider ss -p (ss is part of
iproute2) if you have it.
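
Or catch the culprit in the act with a throwaway listener - a sketch
(run as root, with named stopped or its control channel moved aside):

    #!/usr/bin/perl
    # Accept connections on the control port and log the peer.  It never
    # answers, so the mystery client also stays visible to ps/lsof/ss
    # while it waits for a response.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $srv = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 953,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "can't listen: $!\n";

    while ( my $c = $srv->accept ) {
        printf "%s: connection from %s:%s\n",
            scalar localtime, $c->peerhost, $c->peerport;
    }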

A final thought - look for log file managers (e.g. logrotate).  They may
be noticing named's file size & doing a reconfig to close/reopen the log
file.   (In which case, report a bug in the log manager's config -
named's own log file management avoids all those hassles.)

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 24-Aug-15 17:55, Mark Andrews wrote:
 The first thing I would do is make sure only the users you want to
 be able to use the rndc key can read it.  I would then generate a
 new rndc key and configure both rndc and named to use it.

 If that doesn't work generate a new rndc.conf file with a different
 name that refers to a new rndc key.  Teach named to use that key
 then update all the scripts that you know about to use the new
 rndc.conf file.

rndc -c rndc.conf.path

 Mark

 In message 60946bf48ada4e6fb2ed7b0aa297d...@mxph4chrw.fgremc.it, Darcy Kevin (FCA) writes:
 Does the rndc protocol have a timeout? If so, what is it set to? I don't see
 anything about a configurable timeout interval in the man pages for rndc or rndc.conf.

 What I'd probably do is turn off rndc in named.conf, set up a dummy server
 to listen on port 953, which just accepts the connection, but doesn't respond
 to anything sent to it. That means that whatever is sending this command is
 going to be stuck for some period of time -- possibly infinitely -- waiting
 for a response from the server. Then you can use something like lsof (which
 I assume exists in Debian) to track down which process it is.

  - Kevin

 -Original Message-
 From: bind-users-boun...@lists.isc.org [mailto:bind-users-boun...@lists.isc.org] On Behalf Of Robert Senger
 Sent: Monday, August 24, 2015 5:02 PM
 To: bind-users@lists.isc.org
 Subject: Identify source of rndc reconfig command?

 Hi all,

 after upgrading from Debian Wheezy to Jessie, bind9 receives rndc reconfig
 commands every 30 minutes. I've never seen this before. Some of my own
 scripts run rndc restart/reload after fiddling with network interfaces,
 but none of these is the source of the observed 30 minutes interval.
 There are also no cron jobs.

 In the bind9 logs I see this:

 24-Aug-2015 22:53:43.431 general: info: received control channel command 'reconfig'
 24-Aug-2015 22:53:43.458 general: info: loading configuration from '/etc/bind/named.conf'
 ... [more than 350 lines reconfig log]

 Running tcpdump on the lo interface gives me this:

 root@prokyon:/etc/bind# tcpdump -i lo port 953
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
 21:23:35.071602 IP localhost.48466 > localhost.953: Flags [S], seq 3862717043, win 43690, options [mss 65495,sackOK,TS val 196635776 ecr 0,nop,wscale 5], length 0
 21:23:35.071699 IP localhost.953 > localhost.48466: Flags [S.], seq 2391140312, ack 3862717044, win 43690, options [mss 65495,sackOK,TS val 196635776 ecr 196635776,nop,wscale 5], length 0
 21:23:35.071821 IP localhost.48466 > localhost.953: Flags [.], ack 1, win 1366, options [nop,nop,TS val 196635776 ecr 196635776], length 0
 21:23:35.075355 IP localhost.48466 > localhost.953: Flags [P.], seq 1:148, ack 1, win 1366, options [nop,nop,TS val 196635777 ecr 196635776], length 147
 21:23:35.075435 IP localhost.953 > localhost.48466: Flags [.], ack 148, win 1399, options [nop,nop,TS val 196635777 ecr 196635777], length 0
 21:23:35.115513 IP localhost

Re: DNSSEC secondary (free) - Was - Re: Can I run two name servers on one host with two IP addresses?

2015-08-20 Thread Timothe Litt
On 20-Aug-15 10:50, /dev/rob0 wrote:
 On Thu, Aug 20, 2015 at 02:07:57PM +0200, Robert Senger wrote:
 There are a number of providers out there offering secondary
 dns services for free or for a few bucks/month. Even DNSSEC
 is possible for free.
 This is good news!  I knew there were several good choices for free 
 DNS hosting, but this is the first I heard of them supporting signed 
 zones.

 https://acc.rollernet.us/help/dns/secondary.php

 Are there others?  I saw another one amongst your NS hosts, but that 
 seems to be your own domain.  (If you're offering secondary NS for
 free, please do mention your service here.)
I use https://puck.nether.net/dns/ .  It's free, it uses a current
version of bind, supports DNSSEC, and has been stable for several
years.  Only drawback is that if you're in Chicago, you won't get
enough geographic diversity.  They have only one server, which is
there.  And of course, with free the SLA is best efforts, no guarantee.

I am not affiliated, just reporting my personal experience.




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Of long names...

2015-03-15 Thread Timothe Litt
Discussing a 'you don't handle long names' issue that I discovered with
an application's developer, I thought I'd create a test case or two for him.

I did, but they don't resolve.  I might be missing something, so some
other eyes would be appreciated.

The test domain is hosted on godaddy's DNS.  (Because, well, it's a test
domain.)

dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'  Does
anyone have experience with this server?

The recursive servers queried are mine (bind) - I've flushed their
caches.  I've also tried several web services that run DNS lookups; the
results are consistent.  NXDOMAIN

The two names in question each have AAAA records:

oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us

oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

The current zone serial number is 2015031503

dig reports that serial with a NXDOMAIN response to each name, so it's
not a propagation issue.

Exporting the zone file (yes, this is the entire file  -- 10 records) gives:

; SOA Record
LITTS.US.   3600    IN  SOA  ns71.domaincontrol.com.  dns.jomax.net (
    2015031503
    28800
    7200
    604800
    3600
)

; A Records
@   3600    IN  A   97.74.42.79

; CNAME Records
www 3600    IN  CNAME   @

; MX Records
@   3600    IN  MX  10  nano.litts.net

; TXT Records
@   3600    IN  TXT v=spf1 ip4:96.233.62.58 ip4:96.233.62.59 ip4:96.233.62.60 ip4:96.233.62.61 ip4:96.233.62.62 mx a:micro.litts.net a:nano.litts.net a:pico.sb.litts.net a:overkill.sb.litts.net a:hagrid.sb.litts.net a:smtp.litts.net -all

; AAAA Records
oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay    1800    IN  AAAA    2001:4830:11a2:941::43
beautiful.feeling   600 IN  AAAA    2001:4830:11a2:941::43
oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay    600 IN  AAAA    2001:4830:11a2:941::43

; NS Records
@   3600    IN  NS  ns71.domaincontrol.com
@   3600    IN  NS  ns72.domaincontrol.com

Dig lookups fail on the long names, but the SOA shows the correct serial.

 dig oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us

; <<>> DiG 9.9.4-P1 <<>> oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57860
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;oh.what.a.beautiful.morning.oh.what.a.beautiful.day.oh.what.a.wonderful.feeling.everythings.lost.in.the.hay.litts.us. IN AAAA

;; AUTHORITY SECTION:
litts.us.   3600    IN  SOA ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

;; Query time: 136 msec
;; SERVER: 192.168.148.6#53(192.168.148.6)
;; WHEN: Sun Mar 15 06:57:55 EDT 2015
;; MSG SIZE  rcvd: 216

 dig oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

; <<>> DiG 9.9.4-P1 <<>> oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60478
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

;; AUTHORITY SECTION:
litts.us.   2617    IN  SOA ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

;; Query time: 7 msec
;; SERVER: 192.168.148.4#53(192.168.148.4)
;; WHEN: Sun Mar 15 07:01:16 EDT 2015
;; MSG SIZE  rcvd: 216

I have verified that bind is happy to create and resolve similar names...

Oh, and the third AAAA record does resolve, which makes me suspicious of
the name length.

Any ideas on this mystery?
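
For the record, both names are well within the RFC 1035 limits (63
octets per label, 255 for the whole encoded name), as a few lines of
Perl confirm:

    #!/usr/bin/perl
    # Check a name against the RFC 1035 limits: 63 octets per label,
    # 255 octets for the whole name in wire form.
    use strict;
    use warnings;

    my $name = shift or die "usage: $0 fqdn\n";
    ( my $fqdn = $name ) =~ s/\.$//;
    my $wire = 1;                      # the root label's length octet
    for my $label ( split /\./, $fqdn ) {
        die "label '$label' exceeds 63 octets\n" if length($label) > 63;
        $wire += 1 + length($label);   # length octet + label
    }
    printf "%s: %d octets on the wire (%s)\n",
        $name, $wire, $wire <= 255 ? 'legal' : 'TOO LONG';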

-- 
Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Of long names...

2015-03-15 Thread Timothe Litt
Thanks.  I appreciate the extra eyes.

I'm pretty sure that GoDaddy has a problem between their WebGUI's
database and their servers.  The records appear in the former, but not
(as you saw) the latter.  Even though their GUI exports the zone file
containing them with the same zone serial number that your dig's SOA
revealed.

After some more detective work, I had a long, unsatisfactory 'webchat'
with GoDaddy support.  They had all sorts of reasons why they have no
problem and I'm, er, 'wrong'.  Some would be extremely funny if told to
a technical audience.

And since there's no problem, they refuse to escalate.  I've made an
out-of-band attempt to get the attention of their management.

FWIW, bind is quite happy to accept these names in a domain where I run
my own servers.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 15-Mar-15 19:49, Mukund Sivaraman wrote:
 On Sun, Mar 15, 2015 at 08:26:35AM -0400, Timothe Litt wrote:
 Discussing a 'you don't handle long names' issue that I discovered with
 an application's developer, I thought I'd create a test case or two for him.

 I did, but they don't resolve.  I might be missing something, so some
 other eyes would be appreciated.

 The test domain is hosted on godaddy's DNS.  (Because, well, it's a test
 domain.)

 dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'  Does
 anyone have experience with this server?

 The recursive servers queried are mine (bind) - I've flushed their
 caches.  I've also tried several web services that run DNS lookups; the
 results are consistent.  NXDOMAIN
 The authoritative nameservers for litts.us are returning NXDOMAIN for
 AAAA queries on these names:

 [muks@totoro ~]$ dig -t NS litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> -t NS litts.us
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25029
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;litts.us.                     IN      NS

 ;; ANSWER SECTION:
 litts.us.              3600    IN      NS      ns71.domaincontrol.com.
 litts.us.              3600    IN      NS      ns72.domaincontrol.com.

 ;; ADDITIONAL SECTION:
 NS72.domaincontrol.com. 132465 IN      A       208.109.255.46
 NS72.domaincontrol.com. 172484 IN      AAAA    2607:f208:302::2e
 ns71.domaincontrol.com. 132465 IN      A       216.69.185.46
 ns71.domaincontrol.com. 172484 IN      AAAA    2607:f208:206::2e

 ;; Query time: 83 msec
 ;; SERVER: 127.0.0.1#53(127.0.0.1)
 ;; WHEN: Mon Mar 16 05:13:23 IST 2015
 ;; MSG SIZE  rcvd: 185

 [muks@totoro ~]$ dig +norecurse @ns71.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> +norecurse @ns71.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us
 ; (2 servers found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 65035
 ;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

 ;; AUTHORITY SECTION:
 litts.us.      3600    IN      SOA     ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

 ;; Query time: 86 msec
 ;; SERVER: 216.69.185.46#53(216.69.185.46)
 ;; WHEN: Mon Mar 16 05:14:53 IST 2015
 ;; MSG SIZE  rcvd: 216

 [muks@totoro ~]$ dig +norecurse @ns72.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> +norecurse @ns72.domaincontrol.com -t AAAA oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us
 ; (2 servers found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 15081
 ;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;oh-what-a-beautiful-morning.oh-what-a-beautiful-day.oh-what-a-wonderful-feeling.everythings-lost-in-the-hay.litts.us. IN AAAA

 ;; AUTHORITY SECTION:
 litts.us.      3600    IN      SOA     ns71.domaincontrol.com. dns.jomax.net. 2015031503 28800 7200 604800 3600

 ;; Query time: 83 msec
 ;; SERVER: 208.109.255.46#53(208.109.255.46)
 ;; WHEN: Mon Mar 16 05:15:41 IST 2015
 ;; MSG SIZE  rcvd: 216


   Mukund




smime.p7s
Description: S/MIME Cryptographic Signature

Re: Of long names...

2015-03-15 Thread Timothe Litt
Mark,

Not a failure to bump serial.  I omitted some detail.

    I don't edit the SOA explicitly.  The GoDaddy GUI is responsible for
    bumping the serial, which it seems to do with a 'save'.
    I did add records incrementally, saved the zone, and verified that
    the serial increased each time - both with dig and with their
    'export zone' option.
    After adding the first long name, dig reported NXDOMAIN.  The SOA in
    the authority section had the new serial.
    I added the shorter one.  Serial bumped.  Short one can be found
    with dig.  Long can't.
    Added second long name, which had the same length, but fewer labels.
    Saved.  dig reported NXDOMAIN; the authority section's SOA serial
    incremented again.  Still only the short one was visible.  Short one
    has two labels (left of the domain), so domains with more than one
    label are accepted.
    Both nameservers produce the same results.
    Logged out and back in to their GUI.  The GUI could retrieve all
    three names.  So the DB behind the GUI stored the long records.  But
    either it didn't pass them to the server, or the server rejected them.

The limits on name and label length date back to RFC 1035; they're in
violation.  Even if they have some policy that reduces them (which I
would object to), the GUI should reject names that violate that policy.
Not leave them in limbo.  And I tried some additional names that broke
the 63 char label length limit, which the GUI correctly rejected.  So
it does some level of validation.  Thus, it's either total length or
number of labels that sends my names into limbo.  Or maybe both.  Or
maybe they apply a dictionary and drop names that look like sentences -
with no error.  But their engineers can sort that out.  Easier to do
with the code than by experiment.

I think that's conclusive, which is why I stepped into the support
morass.  I'm tempted to move the domain to my own servers, but I really
hate to let vendors get away with customer-unfriendly support.  Other
people don't have the same ability to fight back.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 15-Mar-15 20:59, Mark Andrews wrote:
 In message 55062475.6030...@acm.org, Timothe Litt writes:
 Thanks.  I appreciate the extra eyes.

 I'm pretty sure that GoDaddy has a problem between their WebGUI's database
 and their servers.  The records appear in the former, but not (as you
 saw), the latter.
 Even though their GUI exports the zone file containing them with the
 same zone serial number that your dig's SOA revealed.
 I would say "It looks like you failed to update the serial.  Bump
 the serial and reload." if this was reported to bind-bugs.

 What is exported could be the to-be-loaded content, rather than
 currently loaded content.  Add another record and publish the zone.
 If you get that record and not the long name records then you
 have proof of a problem.  You can then remove the extra record.

 After some more detective work, I had a long, unsatisfactory 'webchat'
 with GoDaddy support.  They had all sorts of reasons why they have no
 problem and I'm, er, 'wrong'. Some would be extremely funny if told to a
 technical audience.

 And since there's no problem, they refuse to escalate.  I've made an
 out-of-band attempt to get the attention of their management.

 FWIW, bind is quite happy to accept these names in a domain where I run
 my own servers.

 Timothe Litt
 ACM Distinguished Engineer
 --
 This communication may not represent the ACM or my employer's views,
 if any, on the matters discussed.

 On 15-Mar-15 19:49, Mukund Sivaraman wrote:
 On Sun, Mar 15, 2015 at 08:26:35AM -0400, Timothe Litt wrote:
 Discussing a 'you don't handle long names' issue that I discovered with
 an application's developer, I thought I'd create a test case or two
 for him.
 I did, but they don't resolve.  I might be missing something, so some
 other eyes would be appreciated.

 The test domain is hosted on godaddy's DNS.  (Because, well, it's a
 test
 domain.)

 dns fingerprint (w3dt.net) claims their server is 'VeriSign ATLAS'
 Does
 anyone have experience with this server?

 The recursive servers queried are mine (bind) - I've flushed their
 caches.  I've also tried several web services that run DNS lookups; the
 results are consistent.  NXDOMAIN
 The authoritative nameservers for litts.us are returning NXDOMAIN for
  queries on these names:

 [muks@totoro ~]$ dig -t NS litts.us

 ; <<>> DiG 9.9.6-P1-RedHat-9.9.6-8.P1.fc21 <<>> -t NS litts.us
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25029
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 5

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;litts.us.  IN  NS

 ;; ANSWER SECTION:
 litts.us.   3600    IN  NS  ns71.domaincontrol.com.
 litts.us.   3600

Re: BIND DNSSEC Guide draft

2015-01-04 Thread Timothe Litt
On 31-Dec-14 21:00, Jeremy C. Reed wrote:
 ISC is seeking feedback and review for our first public draft of the 
 BIND DNSSEC Guide.  It was written in collaboration with DeepDive 
 Networking.
I haven't had a chance to look in detail, but a quick scan  resulted in
several
observations that I hope are useful.  Also, I posted your note to
dnssec-deployment, where there should be enthusiasm for the topic :-)

The private network section 6.5.4 doesn't talk about how to configure
views/stub zones so that authoritative (internal) zones on a shared
resolver/authoritative server get validated.  (Point 1 in the section
dismisses the possibility.)  This can be done.

Further, it's useful.  People are much more likely to experiment on
internal zones.
More important, consider a typical scenario: my web server on the
internal view has a different address from the external view.  (Besides
efficiency, some commercial routers don't do "NAT on a stick" - e.g.
allow an internal client to NAT to an external address served by that
router, which is NATed to an internal server.)

So we want to train users to look for DNSSEC authentication.  Unless
one makes this work, a notebook on the road will authenticate, but the
same notebook in the office will not.  Don't bother trying to explain
this to users; they'll simply ignore the distinction.

Which is sort of a long way of saying: if the goal is to encourage
people to adopt DNSSEC, your guide should make Private Networks and the
corresponding recipes first class citizens, not a 'don't bother with
this' afterthought.  Both for admins to feel freer to experiment, and
for users to have a consistent experience.
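
For what it's worth, a sketch of one shape this can take - the zone,
addresses, and key are made up, and the listen-on/query-source plumbing
varies with the installation:

    view "auth" {
        match-destinations { 127.0.0.2; };    // queries sent to the 2nd loopback
        recursion no;
        zone "corp.example" { type master; file "corp.example.db.signed"; };
    };
    view "internal" {
        match-clients { 10.0.0.0/8; 127.0.0.0/8; };
        recursion yes;
        zone "corp.example" {
            type static-stub;                 // resolve, don't serve, so answers
            server-addresses { 127.0.0.2; };  // from the auth view get validated
        };
        trusted-keys { "corp.example." 257 3 13 "<base64 KSK public key>"; };
    };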

On key rollover - this is still a major hassle.  And while the recipes
look pretty, the process is ugly.  Key rollover really needs to be
automated.  There are too many steps that require too much indirection.
And too many 'you could do this or you could do that' choices - that
don't really matter, especially for getting started.  I don't see why a
person should have to change parameters, dates, manually generate keys,
etc.  You can work on the recipes, but I don't think they'll make the
problem approachable - or safe.  Computers are good at this stuff - and
people aren't.

It really needs something like a daily cron job with a simple config
file that does all the work.  Trigger based on dates, or a 'do it now'
emergency/manual command.  Key generation, date setting, permissions,
etc.  As for key uploads to external registrars, it can mail the new
keys/DS records to the admin with 'please upload these by 'date'', and
only proceed with the roll-over when it can 'dig' them.  (The e-mail
can - via the config file - include a hyperlink to the upload page...)
For internal, it can update the trusted keys include file, rndc
reconfig, etc.  And the config file should come with reasonable default
parameters, so it 'just works' oob.  E.g. roll the zsks every 6 months
and the ksks every 2 years.  (Semi-random numbers, let's not fight
about them.)
(Semi-random numbers, let's not fight about them.)

Also, RE TLSA - I think it's better to match just the subject public
key - there are several cases where this reduces management overhead.
I know generating the hash for that with openssl isn't fun.  But,
https://www.huque.com/bin/gen_tlsa is the easiest way that I've found
to generate TLSA records.  And it supports SPKI selectors...  So you
might want to point to it.
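
For reference, the openssl pipeline for a SubjectPublicKeyInfo
(selector 1) SHA-256 association data field is something like this
(cert path assumed):

    openssl x509 -in cert.pem -noout -pubkey |
      openssl pkey -pubin -outform DER |
      openssl dgst -sha256 -hex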

I'll try to have a closer look later.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

On 31-Dec-14 21:00, Jeremy C. Reed wrote:
 ISC is seeking feedback and review for our first public draft of the 
 BIND DNSSEC Guide.  It was written in collaboration with DeepDive 
 Networking.

 The document provides introductory information on how DNSSEC works, how 
 to configure BIND to support some common DNSSEC features, as well as 
 some basic troubleshooting tips.  It has lots of interesting content, 
 including examples of using ISC's delv tool and using a common 
 provider's web-based interface to manage DS records.

 This is a beta edition of the guide. We'd appreciate any feedback or 
 suggestions, good or bad. You may email me directly, or to our 
 bind9-bugs@ bug tracker email, or back to this list as appropriate (such 
 as needing further community discussion). Or you may use the GitHub to 
 provide feedback (or fixes).  We plan to announce the first edition of 
 this BIND DNSSEC Guide at the end of January.

 The guide also has a recipes chapter with step-by-step examples of some 
 common configurations. If you have any requests or would like to 
 contribute some content, please let us know.

 The beta of the guide is available in HTML and PDF formats at

 http://users.isc.org/~jreed/dnssec-guide/dnssec-guide.html
 http://users.isc.org/~jreed/dnssec-guide/dnssec-guide.pdf

 The docbook source for the guide is at GitHub:
 https://github.com/isc-projects/isc

Re: Re: Wrong NSEC3 for wildcard cname

2014-11-20 Thread Timothe Litt
On 19-Nov-14 19:03, Graham Clinch wrote:
 Hi Casey & List folks,
 My apologies - this was actually a bug in DNSViz.  The NSEC3 computation
 was being performed on the wrong name (the wrong origin was being
 applied).  It should be fixed now, as shown in:

 http://dnsviz.net/d/foo.cnametest.lancs.ac.uk/VGzlkA/dnssec/
 http://dnsviz.net/d/foo.cnametest.palatine.ac.uk/VGzrqg/dnssec/
 Thanks - that's certainly looking less red.  DNSViz is an exceptionally
 useful tool!

 The cnametest records were an attempt at simplifying a real issue that's
 been reported to us.

 An unsimplified version is cnametest2.lancs.ac.uk (here the RR is
 *.cnametest2 CNAME cnametest2, with an A RR for cnametest2), which (now)
 passes DNSViz, but not Verisign's DNSSEC debugger
 (http://dnssec-debugger.verisignlabs.com/foo.cnametest2.lancs.ac.uk).

 I'm more confident that this is a bug in Verisign's debugger, as the
 error is 'No DS records found for cnametest2.lancs.ac.uk in the
 cnametest2.lancs.ac zone' (where's the .uk gone, and why the interest in
 a DS where there's no zone cut?).  Do any Verisign DNSSEC debugger
 maintainers lurk on bind-users?  (The 'Contact Us' link on the page
 looks very corporate and not very useful)
Try the dnssec-deployment mailing list. 
dnssec-deploym...@dnssec-deployment.org

 delv +vtrace continues to report NSEC3 at super-domain only for
 foo.cnametest2.palatine.ac.uk records, and not for
 foo.cnametest2.lancs.ac.uk.  Is this a similar
 miscalculating-the-owner-name as for DNSViz?  I'll try to dig (haha!)
 into the delv source tomorrow.  Tested with delv 9.10.0 & 9.10.1.

 I think this might be one of those cases where I should have trusted my
 gut instinct (to blame the validating resolver), but the more I
 investigated the more red and missing lines in output...

 I'm attempting to discover more about the validating resolver, but since
 I have no access to it and the reporter is just a user of that resolver,
 odds are not stacked in our favour.

 *snipping the bits where I obviously need to read about
 NSEC3 again*
 At the start of the year, I received a piece of wisdom regarding NSEC3:
 'It is much harder to understand and debug.'  At the time I was sure
 that I could outsmart it.  Maybe not so much now.

 Regards,

 Graham






smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: recursive lookups for UNSECURE names fail if dlv.isc.org is unreachable and dnssec-lookaside is 'auto'

2014-08-28 Thread Timothe Litt
On 27-Aug-14 20:35, Doug Barton wrote:
 On 8/27/14 3:03 PM, Timothe Litt wrote:
 So you really meant that validating resolvers should only consult DLV if
 their administrator knows that users are looking-up names that are in
 the DLV?  That's how I read your advice.

 You're correct.

 I don't see how that can work; hence we'll disagree.  I think the only
 viable strategy for*resolvers*  is to consult the DLV - as long as it
 exists.

 So that leads to a Catch-22, as ISC has stated that they will continue
 to provide the DLV as long as it is used. You're saying that people
 should continue to consult it as long as it exists.

 Now that the root is signed the traditional argument against continued
 indiscriminate use of the DLV is that it makes it easier for
 registries, service providers, etc. to give DNSSEC a low priority.
 'You don't need me to provide DNSSEC for you, you can use the DLV.'
 Based on my experience I think there is a lot of validity to that
 argument, although I personally don't think it's persuasive on its own.

I don't want to see indiscriminate use of the DLV.  See below.
 While I appreciate the tone of reasoned discourse in the message I'm
 responding to, what you have done is provide additional details to
 support your thesis that changing providers is hard. I'm not arguing
 the contrary position, so we can agree to agree on that. What you
 haven't done is provide any evidence to refute my thesis that 'It's
 hard' != 'It's impossible'. I'll even go so far as to agree with you
 that in some cases it's really, really hard.

For me, it's impossible.  I've stated why.  I am a very small player - I
run a network for my extended (multi-state) family, and some free
services for a few hundred former colleagues.  I considered the options
that you suggested - they are not practical, affordable or both.  No ISP
in my geography will provide DNSSEC for reverse DNS.  I have asked (in
dnssec-deployment) for help in pressuring the ISPs to solve this
problem.  Comcast (which is not in my geography) has acknowledged the
issue, and has had it on their list for several years.  None of the
others have gone even that far. 

 What that leaves us with is your position (which I will state in an
 admittedly uncharitable way), 'Some of us would like to have the
 benefits of protecting our authoritative data with DNSSEC without
 having to endure the cost and inconvenience of migrating our resources
 to providers that support it. Therefore the entire Internet should use
 the DLV.' In contrast, my position is that people and/or organizations
 which need the protection of DNSSEC should vote with their feet. In
 this way providers that offer DNSSEC will be rewarded, and those that
 do not will be punished. 
I would vote with my feet if I could.  I can't.  The problem with your
market-driven approach is that ISPs are largely unregulated monopolies.
At least, for those of us who are based in residences/small businesses.
I'm fortunate to have 2 cables pass my house - fiber and cable TV.
Only the fiber provider has enough outbound bandwidth for site-site
backup, which I get for a low 3 figures/mo.  The cable-TV-based
provider says 'yes, since you have business-class service (static IPs),
we will provide a fiber to your premises.  First, there's the
engineering study for 5 figures, then a construction fee, then 4
figures/month... unless you want serious bandwidth, in which case it's
more.'  So there's no competition.  Neither cares about DNSSEC.  Neither
is required to care by regulation, RFC, ICANN/IANA or organized
community pressure.

The answer is different when you're an enterprise with a large budget. 
I've been there.  Let us consolidate your voice & data networks; sure,
we'll eat the engineering costs of switching you to a few OC-48 fibers;
saves us money maintaining all those copper wires.  You want a couple
of dark fibers, and a couple of hundred PI IP addresses routed - no
problem.  Switch your phone system to VoIP too?  Oh, you got a quote
from them, including running new fiber from the highway to your plant
for free?  Let me re-work our numbers.  Can we shine your shoes?  When
you pay several $100K/mo for bandwidth per site, it's amazing how
responsive vendors can be.  So your approach works for some, according
to the golden rule (she who has the gold, makes the rules.)

 Completely aside from what I believe to be the absurdity of your
 argument, the position I suggest will almost certainly result in
 market forces which encourage the deployment of DNSSEC. At bare
 minimum it has the moral value of rewarding providers who have done
 the right thing.

I don't think it's absurd to note that people in my position - and there
are a lot of us - are forced to use DLV for some cases.  The most
prominent is reverse DNS.  We *can't* switch providers.  We *can't* get
IP addresses from other sources (and get them routed) without literally
going bankrupt.

Since no one can predict what names a validating resolver will be asked
to look up, consulting the DLV - for as long as it exists - is the only
workable default.

Re: recursive lookups for UNSECURE names fail if dlv.isc.org is unreachable and dnssec-lookaside is 'auto'

2014-08-27 Thread Timothe Litt
On 27-Aug-14 14:54, Doug Barton wrote:
 On 8/26/14 10:35 AM, Timothe Litt wrote:
 I think this is misleading, or at least poorly worded and subject to
 misinterpretation.

 I chose my words carefully, and I stand by them.

The OP was asking about configuring a resolver (bind's).

Where I thought there could be confusion is in conflating two issues:
1) Should validating resolvers consult the DLV?
2) Should entries be made in the DLV?

So you really meant that validating resolvers should only consult DLV if
their administrator knows that users are looking-up names that are in
the DLV?  That's how I read your advice.

I don't see how that can work; hence we'll disagree.  I think the only
viable strategy for *resolvers* is to consult the DLV - as long as it
exists.

If you meant that an administrator should only put entries in DLV for a
domain:
  a) If there is no direct trust path to the root; and
  b) the domain benefits from being DNSSEC-secured (know your user base)
then we agree.

 I did not say that the DLV has no value, and I specifically mentioned
 that there are circumstances when it is valuable and should be used.
 You clearly have a different view, which is fine.

 When it comes to gTLDs, I completely reject the notion that users
 cannot change registrars. It can be hard, no doubt, but it's a
 cost/benefit analysis. If the benefit of DNSSEC outweighs the
 difficulty of moving, then it's worth it. If not, it's not. The fact
 that it's hard doesn't mean it's impossible.

Impossible is a very high standard.  DNSSEC is only one part of the
cost/benefit analysis in choosing/sticking with a registrar.  And part
of the benefit of DNSSEC goes to the registrant's users, not all to the
registrant - this is hard to account for.  Also, it's not just the
technical/financial difficulty of switching registrars.  Some have
policies/practices that some users find unacceptable; unfortunately, for
quite some time those were the ones that offered DNSSEC.  That's
improving, but it's still an issue in some circles. 

DLV has a different set of costs (and benefits - especially when some
resolvers don't consult it). 

If the question is how can I implement DNSSEC in my zones, the
preferred path is certainly not DLV.  But if the choice is a
difficult/expensive switch of registrar or no DNSSEC, DLV is worth
considering.  

 That said, I do recognize that there are situations where a chain of
 trust to the root is not possible (such as some reverse zones). Again,
 this becomes a cost/benefit analysis. For reverse zones if DNSSEC is
 important it may be worth the effort of changing providers, or even
 getting a PI assignment. For TLDs where DNSSEC is not yet available, a
 change may be in order. If enough people vote with their feet in this
 way those providers and TLDs that lose customers may reconsider their
 offerings.

 No one said it would be easy. :)

 Doug

I agree that a chain to the root is the preferred option.

I would love to vote with my feet.  I have a few small problems with
that strategy.

There is no ISP in my geography that provides dnssec reverse delegation
for IPv4.  Not for lack of complaints/escalations from me. 

There is only one ISP here that offers fiber speeds at prices that an
individual can afford.  So it can afford not to care.

For IPv6 - well, I can't get IPv6 directly from any ISP, but my tunnel
provider does allow DNSSEC reverse delegation.  When my ISP finally
implements IPv6 (promised for over 2 years, but again, they don't care),
I'll have to choose between a direct IPv6 connection with no reverse
DNSSEC, or sticking with my tunnel.

Provider-independent IP addresses are out of reach for all but the
largest/best financed organizations.  Not just getting them, but the
additional costs of having to get them routed.  And just try to get an
ISP to route a small number of IP addresses for a home/small business
(or even medium business) customer...at any price. 

So yes, there are trade-offs and a cost/benefit analysis is helpful. 
And if you're a big enough customer and/or you're fortunate enough to
have a choices that enable a direct trust chain to the root, we agree
that is the preferred choice from a strictly DNSSEC perspective.

Certainly DNSSEC is not easy.  It's getting somewhat easier, though not
fast enough. 

One way to make it easier - for now - is to encourage *resolvers* to
consult DLV.  That allows validated resolution of the domains that
require DLV.  That's a good thing. 
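
To be concrete, that's one line in the resolver's named.conf - a
minimal sketch:

options {
    dnssec-validation auto;   // trust anchor for the signed root
    dnssec-lookaside auto;    // also consult dlv.isc.org for islands of trust
};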

And that's where this thread started.  I think that's the only part
that's strictly on-topic for this list...






smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: clients-per-query vs max-clients-per-query

2014-06-08 Thread Timothe Litt
On 07-Jun-14 12:36, Evan Hunt wrote:
 On Sat, Jun 07, 2014 at 12:02:24PM -0400, Jorge Fábregas wrote:
 For me, this clients-per-query of 10 is an upper limit (maximum number
 of clients before it starts dropping).  So then, what's the purpose of
 max-clients-per-query?
 Over time, as it runs, named tries to self-tune the clients-per-query
 value.

 If you set clients-per-query to 10 and max-clients-per-query to 100
 (i.e., the default values), that means that the initial limit will be
 10, but if we ever actually hit the limit and drop a query, we try
 adjusting the limit up to 15, then 20, and so on, until we can keep
 up with the queries *or* until we reach 100.

 Once we get to a point where we're not spilling queries anymore, we
 start experimentally adjusting the limit back downward -- reducing it
 by 1 every 20 minutes, if I recall correctly.

 If clients-per-query is 0, that means we don't have a clients-per-query
 limit at all.  If max-clients-per-query is 0, that means there's no upper
 bound on clients-per-query and it can grow as big as it needs to.


This doesn't quite make sense, assuming I understand it correctly from
your + Mark's descriptions.
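
For reference, the two knobs under discussion live in the options block;
a minimal named.conf sketch with the default values Evan mentions:

options {
    clients-per-query 10;        // initial limit on clients waiting on one (qname, qtype)
    max-clients-per-query 100;   // upper bound for the self-tuning; 0 = unbounded
};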

Consider a continuous stream of queries to a slow server.  For the sake
of exposition, assume the incremental adjustment is 1 rather than 5.

Named drops the 11th query, but increases the limit.

So the 12th query will be accepted.  Why is the 12th query more valuable
than the 11th?

Next, the limit is 11, but the 13th arrives - is dropped & the limit
increased.

  So the 14th is accepted.

And this continues, dropping every other (actually every 5i-th) query
until there's a response or the max is reached.

Meantime, named expects the clients whose requests were dropped to
retry. (Typically 3 sec, up to 5 times.)
If there's a delay at the next stage of resolution, a client has the
same chance of being unlucky again.

This algorithm seems to attempt to deal with two distinct cases:
  o drop abusive bursts
  o limit resource consumption by unresponsive servers/servers of
    varying responsiveness

For the former, a global threshold makes some sense - an abusive burst
of queries can be for multiple zones - or focused on one.
But isn't this what response rate limiting is for?  Given RRL, does this
still make sense?

For the latter, separating the measurement/threshold tuning from the
decision to drop would seem to produce more sensible behavior than
dropping every 5i-th packet.  And for it to make any sense at all, it
must be adjusted per server, not globally...

Or I'm missing something, in which case the documentation needs some
more/different words :-(

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: AIX and 9.9.5 compiling

2014-05-09 Thread Timothe Litt
On 09-May-14 14:53, Alan Clegg wrote:
 I do, but I don't have early access, so other than a brief 'yep, it
 works', I can't get it into the README.  8-)
I'm glad that you make that effort. 

 I was responding to Jeremy's solicitation for suggestions on what
should be done more officially/thoroughly.   (Including routine builds
during development.)

Including ARM - native and cross-compiled - would support parts of the
community that don't get much attention (nor make much noise):
embedded systems and cross-architecture compilers.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 


On 09-May-14 14:53, Alan Clegg wrote:
 On 5/9/14, 2:06 PM, Timothe Litt wrote:
 If you have a suggestion for an important or popular OS version I should 
 add to our build farm, please let me know why.
 I have one suggestion:  get a Raspberry PI and build/run on it (the
 usual OS is Debian - 'Raspbian', but people run a variety of others.)
  I do, but I don't have early access, so other than a brief 'yep, it
  works', I can't get it into the README.  8-)


 AlanC





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: changing NSEC3 salt

2014-02-06 Thread Timothe Litt

On 06-Feb-14 05:56, Cathy Almond wrote:

On 05/02/2014 18:54, David Newman wrote:

The Michael W. Lucas DNSSEC book recommends changing NSEC3 salt every
time a zone's ZSK changes.

Is this just a matter of a new 'rndc signing' command, or is some action
needed to remove the old salt?

thanks

dn

rndc signing -nsec3param ...

I would expect the old NSEC3 chain and old NSEC3PARAM record to be
removed, once the new chain is in place.

(Similarly, the new NSEC3PARAM record will not appear in the zone until
the new NSEC3 chain has been completely generated).

Cathy

This seems silly.  Why should a person have to select a salt at all?  
It's just a random number, and people are really bad at picking random 
numbers.  Seems like a miss in 'DNSSEC for humans' :-)


There should be a mechanism to tell named to pick a random number and 
use it for the salt.  (I suggest '*' - '-' already means 'none'.)  named 
already has to know how to get random numbers, so this should not be 
difficult.  It should work for records supplied in UPDATE transactions 
as well as rndc signing.
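
In the meantime, it can be approximated from the shell - a sketch (the
zone is a placeholder; 1 0 10 = SHA-1, no opt-out, 10 iterations):

# Install a new NSEC3 chain with a random 64-bit salt:
rndc signing -nsec3param 1 0 10 $(openssl rand -hex 8) example.net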


A bit more work to have it function when loaded from a zone file, though 
that doesn't seem unreasonable.  (E.g. if read from a zone file, pick a 
salt, treat the record as if loaded with that value, and do all the 
requisite (re-)signing.)


I'm copying bind9-bugs so this doesn't get lost.  Please don't copy that 
list if you comment on this. (Careful with that 'reply all'!)


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: changing NSEC3 salt

2014-02-06 Thread Timothe Litt

On 06-Feb-14 09:14, Klaus Darilion wrote:



On 06.02.2014 14:58, Cathy Almond wrote:

On 06/02/2014 12:58, Timothe Litt wrote:

On 06-Feb-14 05:56, Cathy Almond wrote:

On 05/02/2014 18:54, David Newman wrote:

The Michael W. Lucas DNSSEC book recommends changing NSEC3 salt every
time a zone's ZSK changes.

Is this just a matter of a new 'rndc signing' command, or is some action
needed to remove the old salt?

thanks

dn

rndc signing -nsec3param ...

I would expect the old NSEC3 chain and old NSEC3PARAM record to be
removed, once the new chain is in place.

(Similarly, the new NSEC3PARAM record will not appear in the zone until
the new NSEC3 chain has been completely generated).

Cathy


This seems silly.  Why should a person have to select a salt at all?
It's just a random number, and people are really bad at picking random
numbers.  Seems like a miss in 'DNSSEC for humans' :-)

There should be a mechanism to tell named to pick a random number and
use it for the salt.  (I suggest '*' - '-' already means 'none'.)  named
already has to know how to get random numbers, so this should not be
difficult.  It should work for records supplied in UPDATE transactions
as well as rndc signing.

A bit more work to have it function when loaded from a zone file, though
that doesn't seem unreasonable.  (E.g. if read from a zone file, pick a
salt, treat the record as if loaded with that value, and do all the
requisite (re-)signing.)

I'm copying bind9-bugs so this doesn't get lost.  Please don't copy that
list if you comment on this. (Careful with that 'reply all'!)

Timothe Litt
ACM Distinguished Engineer


Sounds like a good idea - thanks.


Indeed. It would also solve the theoretical problem of NSEC3 hash 
collisions (see my email from 3. Feb 2014)


regards
Klaus


Not quite.  It would enable a solution, but it doesn't solve it unless 
named also checks for a collision, picking a new salt and re-trying in 
that case.  That would be a good idea (though creating a test case would 
be a good student challenge).  [If it isn't tested, it doesn't work...]


Note also the RFC 5155 recommendation:

The salt SHOULD be at least 64 bits long and unpredictable, so that
an attacker cannot anticipate the value of the salt and compute the
next set of dictionaries before the zone is published.
In case it wasn't obvious, I should have noted that the length would be 
a config file entry.



Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.




smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Re: Slowing down bind answers ?

2014-01-05 Thread Timothe Litt

  On 04-Jan-14 14:58, Nicolas C. wrote:

On 03/01/2014 18:00, wbr...@e1b.org wrote:

From: Mark Andrews ma...@isc.org

After that specify a final date for them to fix their machines by
after which you will send NXDOMAIN responses.  Sometimes sending a
poisoned response is the only way to get people's attention.

zone . {
type master;
file empty;
};

empty:
@ 0 IN SOA . stop.using.this.nameserver 0 0 0 0 0
@ 0 IN NS .
@ 0 IN A 127.0.0.1


Or really mess with them and answer all A queries with 199.181.132.249


It's not a bad idea. I could wildcard all requests to an internal HTTP 
server saying that the DNS configuration of the client is deprecated.



Which is great until someone tries to send e-mail, ftp a file, lookup a 
SIP server - or any other service.  Do any clients rely on SIP for 
emergency telephone service?  (VoIP phones, softphones, building alarms 
among others)


DNS redirection is evil - and tricky; the world is not just DNS and HTTP 
from a user's desktop/notebook.


To get people's attention, NXDOMAIN to www.* queries is often reasonably 
safe.  Embedded systems are another story.  (Elevators, HVAC 
controllers, security systems, routers, ...)


Think about the all consequences in your environment.  Do you want to be 
responsible if someone can't make an emergency call?  Someone who has 
been out on leave?  Someone stuck in an elevator?


It may be better to simply alias (if necessary, route) the old IP 
address(es) to the new server.  That way you can manage the 
notifications and consequences on a per-service basis.
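
E.g., on a Linux box - a sketch; the interface and address are
placeholders:

# Answer on the old server's address from the new machine:
ip addr add 192.0.2.53/32 dev eth0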


You can also turn on query logging (which helps slow down the old 
server) - and use the logs to backtrack to the machines that need to be 
reconfigured.  Scripts can send an e-mail daily with a warning and 
instructions on how to reconfigure.  If you have the ownership data, 
scripts can escalate to a manager/sponsor if ignored. Hopefully this 
will get you down to a manageable list of miscreants that require manual 
follow-up.
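
A rough sketch of the idea (log path and format vary with your logging
configuration, so treat this as illustrative):

rndc querylog on
# Later, pull the unique client addresses out of the query log:
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' /var/log/named/query.log | sort -u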


Redirecting to disney.com is a fine humorous response - but I'd be very 
careful about taking it - or similar - action seriously. Running DNS is 
a serious responsibility.


Whatever transition plan you adopt needs to fit your circumstances and 
manage all the risks.  A 'simple' plan might work for you - or it might 
not.


The risks of draconian operations to encourage migration are a lot 
larger than they were in years past.


--
Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.






smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: Unable to transfer IPv4 reverse zone

2013-12-19 Thread Timothe Litt
I doubt you'll get help without providing configuration data for master 
and slaves and exact log and error messages.


But I'll take one blind guess.  DNSSEC validation enabled and your 
in-addr.arpa zones are not delegated and not in DLV?
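
A quick way to test that guess on the slave (the zone is a placeholder):

dig @127.0.0.1 soa 2.0.192.in-addr.arpa        # SERVFAIL if validation fails
dig @127.0.0.1 +cd soa 2.0.192.in-addr.arpa    # +cd skips validation; if this
                                               # works, suspect DNSSEC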


In my configuration IPv4 Reverse zones (which are DNSSEC signed) 
transfer just fine.


Not helpful without my configuration?  That's the point.  Post yours 
with the log messages showing the transfer attempts & failures and maybe
someone (else) will help.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.


On 19-Dec-13 13:11, Daniel Lintott wrote:

Hi,

I have two BIND DNS servers both running 9.9.4-P1.

I have configured them as master and slave, but have a strange issue.
The IPv4 reverse zone, fails to transfer to the slave.

I have tested the AXFR from the command line and this also fails with
SERVFAIL.

Out of 5 zones (3 forward, 1 IPv6 reverse, 1 IPv4 Reverse) the IPv4
reverse zone is the only one which fails.

The configuration on the master is the same for all zones.

Any ideas?

Regards

Daniel Lintott






smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: bind-users Digest, Vol 1629, Issue 1

2013-09-19 Thread Timothe Litt

At the risk of continuing an off-topic thread:


I have toyed with trying to find a cheap Stratum-1 server for home.

   I've had success with a Raspberry Pi & GPS.  You can build a very
respectable stratum-1 server for less than USD 200, if you can handle a
soldering iron and build a Linux kernel.

The RPi is nominally $35 - but add a case, power supply, SD card etc and 
it's closer to $100 - depending on what you have on-hand. You need a 
monitor, KB & mouse to get started (configured), but not thereafter (SSH 
will do).  A GPS receiver will cost about $70.  The hockey puck version 
(e.g. GPS 18x LVC) is a reasonable choice and can be placed outdoors, 
though it needs a level shifter; or the ceramic MCMs (e.g. Adafruit 746) 
at $40 - but you'll probably spend another $40 on antenna & battery 
backup.  It will run quite happily on an SD card; an external disk isn't 
required.  Might want a small UPS.


If you build NTP and a kernel with PPS support, you end up with a pretty 
stable server.  Doesn't use much power (~5W).
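
For reference, the ntpd side looks something like this - a sketch for
the NMEA+PPS driver; mode bits, fudge offset and device wiring depend
on your GPS and kernel:

# NMEA refclock (driver 20, unit 0 -> /dev/gps0, PPS via /dev/gpspps0):
server 127.127.20.0 mode 17 minpoll 4 prefer   # $GPRMC only, 9600 baud
fudge  127.127.20.0 flag1 1 time2 0.350 refid GPS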


For an interface to the 18x LVC, see 
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=41&t=1970&start=194


If you want multiple servers, the second one usually costs less because 
you kept all the bootstrapping supplies.


Further discussion should probably find another list...

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 19-Sep-13 08:00, bind-users-requ...@lists.isc.org wrote:

I have toyed with trying to find a cheap Stratum-1 server for home.





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users

Re: BIND 9.9.3b1 is now available

2013-01-25 Thread Timothe Litt

On 25-Jan-13 17:32, Michael McNally wrote:

  BIND 9.9.3b1 is the first beta release of BIND 9.9.3.

Makes available a new XML schema (version 3.0) for the statistics
channel that adds query type statistics at the zone level,
flattens the XML tree and uses compressed format to optimize
parsing. It also includes new XSL that permits charting via the
Google Charts API on browsers that support javascript in XSL.
To enable, build BIND with configure --enable-newstats. [RT
#30023]

(c) 2001-2013 Internet Systems Consortium


2 bits of feedback on the beta announcement:

I have software that reads the stats channel.

Please, if you have a new schema, put it on another URI so that software 
that wants the old schema gets it, and software that wants the new 
explicitly requests it.  E.g.  '/statistics/v3'


Flag day changes are not good...
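
For instance - a sketch; the port is illustrative, and later BIND
releases did adopt versioned paths (/xml/v2, /xml/v3) in this spirit:

statistics-channels {
    inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};

# A client can then pin the schema version it understands:
curl http://127.0.0.1:8053/xml/v3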

I also have a patch that provides just the config data on another URI 
(/config)  - which I wish you'd accept in some form - it's very useful 
for management software that doesn't want to parse all the stats (which 
in perl takes forever), but does want the list of zones served.   I sent 
it to you folks quite some time ago (and could resend).


Since you're obviously in the code, would you re-consider this? It's 
pretty straightforward, it simply selects a subset of the data in the 
(then-) existing flow.


Thanks on both counts.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





smime.p7s
Description: S/MIME Cryptographic Signature
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users
