Re: Question about linking jemalloc with Bind 9.18.x when doing the compile.

2022-08-03 Thread Michal Nowak

On 02/08/2022 18:46, Bhangui, Sandeep - BLS CTR via bind-users wrote:

Hello all

We are getting ready to test Bind 9.18.x. Currently we are running the 
latest version of the 9.16.x branch.


We have downloaded and successfully installed the jemalloc module on the 
server (RHEL 7.9) and are getting ready to compile the latest version of 
Bind 9.18.x.


Can someone please point me to documentation on the exact 
flags/parameters to use to properly link jemalloc when compiling the 
latest version of Bind 9.18.x with "configure", so that the compile is 
done correctly on the first run.


Thanks in advance.

Sandeep




Sandeep,

Not much is needed: BIND 9's ./configure script handles it for you 
when the jemalloc and jemalloc-devel packages are installed.


Just check that the ./configure output includes the following two lines:

Optional features enabled:
Memory allocator: jemalloc

Once BIND 9 is compiled, run "ldd /path/to/named" and look for the 
jemalloc line; it should look similar to this:


libjemalloc.so.2 => /lib64/libjemalloc.so.2 (0x7f895f20)
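For reference, the whole build-and-verify sequence can be sketched as below. The install prefix (/usr/local), the tarball version, and the package names are assumptions; adjust them for your environment:

```shell
# Sketch: build BIND 9.18 with jemalloc picked up automatically.
# Assumes the jemalloc and jemalloc-devel RPMs are already installed.
tar xf bind-9.18.5.tar.xz && cd bind-9.18.5   # version number is illustrative
./configure 2>&1 | tee configure.out          # no jemalloc-specific flag needed
grep -A3 'Optional features enabled' configure.out   # expect "Memory allocator: jemalloc"
make -j"$(nproc)" && sudo make install
ldd /usr/local/sbin/named | grep jemalloc     # expect a libjemalloc.so.2 => ... line
```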

Michal
--
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.


bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: DNSSEC signing of an internal zone gains nothing (unless??)

2022-08-03 Thread Peter
On Tue, Aug 02, 2022 at 02:04:22PM -0400, Timothe Litt wrote:   
! On 02-Aug-22 13:18, Peter wrote:  
! > On Tue, Aug 02, 2022 at 11:54:02AM -0400, Timothe Litt wrote:   
! > !   
! > ! On 02-Aug-22 11:09,bind-users-requ...@lists.isc.org  wrote:   
! > !   
! > ! > ! Before your authoritative view, define a recursive view with the
! > ! > ! internal zones defined as static-stub, match-recursive-only "yes", and a
! > ! > ! server-address of localhost.
! > ! > 
! > ! > Uh? Why before? 
! > !   
! > ! Because each request attempts to match the views in order.  You want the  
! > ! stub view to match recursive requests.  The non-RD requests will fall thru
! > ! to the internal zone and get the authoritative data.  
! > 
! > Ahh, I see. But this does not work so well for me, because I have the   
! > public authoritative server also in the same process. And from the  
! > Internet will come requests with RD flag set, and these must get a  
! > REFUSED ("recursion desired but not available").
! > 
! > So I considered it too dangerous to select views depending on the RD
! > flag being present or not, and resolve this with a slightly different   
! > ordering of the views.  
! > 
! > -- PMc  
!   
! Order matters, and changing it will change behaviors. 

That is obvious.

! The server doesn't select ONLY on the RD flag.  It also selects on IP address 
!and/or TSIG keys.  The RD flag is only used to select between the recursive and
!authoritative view pairs for MATCHING CLIENTS. 

Fine.   

! So you should order the views as I showed.

That's not going to work for me. I posted the description of   
my approach, so either you provide evidence of why my logic is  
flawed, or you stop telling me that I should obey you.  

I devised my logic, and it is quite possible that it is flawed; 
but if so, then I want to understand the exact flaw, and learn  
and improve.

! The public clients will fail the "match-clients" clause of the internal views 
!regardless of the RD because of their IP addresses.  They will fall thru to the
!r-external view.  That will also fail unless they are listed clients.  So  
!again, they fall thru to the external view.  That has recursion no - which 
!means that RD will return REFUSED. 

Fine. Same here.

! The only danger comes from failing to properly setup the client matching ACLs,
!or from making changes to the logic without understanding how it works.

Mistakes can happen, e.g. when in a hurry.  

! Instead of guessing, use what I provided and test it.  It works.

Re: DNSSEC signing of an internal zone gains nothing (unless??)

2022-08-03 Thread Peter
On Wed, Aug 03, 2022 at 04:49:35PM +1000, Mark Andrews wrote:
! Additionally, authoritative servers for a zone are supposed to answer queries 
with RD=1 with RA=0 if the client is not being offered recursion.  REFUSED 
is the wrong answer if the query name involves zones you serve. Only if you are 
a recursive-only server should you be considering REFUSED. 
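This can be spot-checked with dig. A hedged example follows; the server and zone names are placeholders:

```shell
# Query your authoritative server for a zone it serves, with RD=1 (dig's default).
# A correct answer carries "aa" and "rd" in the flags line, but no "ra" if
# recursion is not offered to this client.
dig @ns1.example.com example.com SOA +noall +comments | grep 'flags:'

# For a name outside your zones, an authoritative-only server should REFUSE.
dig @ns1.example.com www.google.com A +noall +comments | grep 'status:'
```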

I am seeing queries for example.com (literally). I didn't talk about
people querying my own domains. Those seem to get their answer, plus
"recursion desired but ..."

-- PMc


Re: Stopping ddos

2022-08-03 Thread Victor Johansson via bind-users

Hey,

I just want to add that there is a better way to do this in iptables 
with hashlimit. The normal rate limit in iptables is too crude.


Below is an example from the rate-limit-chain, to which you simply send 
all port 53 traffic from the INPUT chain (make sure to exclude 
127.0.0.1/127.0.0.53 though :) ).



-A INPUT -p udp -m udp --dport 53 -j DNS-RATE-LIMIT
-A INPUT -p tcp -m tcp --dport 53 -j DNS-RATE-LIMIT

-A DNS-RATE-LIMIT -s 127.0.0.1/32 -m comment --comment "Don't rate-limit 
localhost" -j RETURN
-A DNS-RATE-LIMIT -m hashlimit --hashlimit-upto 100/sec 
--hashlimit-burst 300 --hashlimit-mode srcip --hashlimit-name DNS-drop 
--hashlimit-htable-expire 2000 -j ACCEPT

-A DNS-RATE-LIMIT -m limit --limit 1/sec -j LOG --log-prefix "DNS-drop: "
-A DNS-RATE-LIMIT -m comment --comment "ansible[dns rate limiting]" -j DROP
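Note that the DNS-RATE-LIMIT chain has to exist before the rules above can load. If you apply them by hand with iptables rather than via iptables-restore, a minimal sketch (run as root) might be:

```shell
# Create the user-defined chain first (flush it if it already exists),
# then add the -A rules shown above, e.g.:
iptables -N DNS-RATE-LIMIT 2>/dev/null || iptables -F DNS-RATE-LIMIT
iptables -A DNS-RATE-LIMIT -s 127.0.0.1/32 -j RETURN
# ...remaining hashlimit/LOG/DROP rules as listed above...
```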


//Victor


On 8/2/22 23:16, Michael De Roover wrote:
For my servers I'm using iptables rules to achieve ratelimiting. They 
look as follows:
-A INPUT -p tcp -m tcp --dport 25 -m state --state NEW -m recent 
--update --seconds 600 --hitcount 4 --name DEFAULT --mask 
255.255.255.255 --rsource -j DROP
-A INPUT -p tcp -m tcp --dport 25 -m state --state NEW -m recent --set 
--name DEFAULT --mask 255.255.255.255 --rsource


It should be fairly trivial to convert these to use UDP 53, and tweak 
the timings you want. These rules are intended to allow 4 connections 
(which normally should be entire SMTP transactions) every 10 minutes. 
Since I have 2 edge nodes with these rules, that is doubled to 8 
connections total. If you're an authoritative name server only, 
realistically mostly recursors / caching servers would query your 
servers and not too often. You can easily restrict traffic here. If 
you're a recursor too, this becomes a bit more complicated.


Regarding the legitimate queries, it would be prudent to allow common 
recursors (Google, Cloudflare, Quad9 etc) to have exceptions to this 
rule. Just allow their IP addresses to send traffic either 
unrestricted, or using a more relaxed version of the above.


HTH,
Michael

On Tue, 2022-08-02 at 16:02 -0400, Robert Moskowitz wrote:

Recently I have been having problems with my server not responding to my
requests.  I thought it was all sorts of issues, but I finally looked at
the logs and:

Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80 114.29.194.4#11205
(.): view external: query (cache) './A/IN' denied
Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80
114.29.216.196#64956 (.): view external: query (cache) './A/IN' denied
Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80 64.68.114.141#39466
(.): view external: query (cache) './A/IN' denied
Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80
209.197.198.45#13280 (.): view external: query (cache) './A/IN' denied
Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80
114.29.202.117#41955 (.): view external: query (cache) './A/IN' denied
Aug  2 15:47:19 onlo named[6155]: client @0xaa3cad80 62.109.204.22#4406
(.): view external: query (cache) './A/IN' denied
Aug  2 15:47:49 onlo named[6155]: client @0xa9420720 64.68.104.9#38518
(.): view external: query (cache) './A/IN' denied
Aug  2 15:47:50 onlo named[6155]: client @0xaa882dc8 114.29.202.117#9584
(.): view external: query (cache) './A/IN' denied

grep -c denied messages
45868

And that is just since Jul 31 3am.

This is fairly recent so I never looked into what I might do to protect
against this.  I am the master for my domain, so I do need to allow for
legitimate queries.

Any best practices on this?

I am running bind 9.11.4

thanks



Re: RE: DNSSEC adoption

2022-08-03 Thread Bob Harold
I think the best way to soften the effect, and make DNSSEC much less
brittle, without losing any of the security, is to reduce the TTL of the DS
record in the parent zone (usually TLD's) drastically - from 2 days to like
30 minutes.  That allows quick recovery from a failure.  I realize that
will cause an increase in DNS traffic, and I don't know how much of an
increase, but the 24-48 hour TTL of the DS record is the real down-side of
DNSSEC, and why it is taking me so long to try to develop a bullet-proof
process before signing my zones.
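The DS TTL currently served by the parent is easy to inspect with dig. The zone name below is a placeholder, and the quoted TTL is as observed at the time of writing:

```shell
# Ask a .com server directly, so you see the parent's configured TTL
# rather than a partially-expired cached value.
dig +noall +answer DS example.com @a.gtld-servers.net
# The first numeric column of the answer is the TTL
# (86400, i.e. 24 hours, for .com at the time of writing).
```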

-- 
Bob Harold
University of Michigan


On Tue, Aug 2, 2022 at 2:21 PM Timothe Litt  wrote:

>
> On 02-Aug-22 13:51, Brown, William wrote:
>
> my guess is that they see dnssec as fragile, have not seen _costly_
> dns subversion, and measure a dns outages in thousands of dollars a
> minute.
>
No one wants to be this guy:
http://www.dnssec.comcast.net/DNSSEC_Validation_Failure_NASAGOV_20120118_FINAL.pdf
>
so, to me, a crucial question is whether dnssec could be made to fail more 
softly and/or with a smaller blast radius?
>
> randy
>
> I'm more of a mail guy than DNS, so yes, like hard vs. soft fail in SPF.  Or 
> perhaps some way for the client side to decide how to handle hard vs. soft 
> failure.
>
> As Mark has pointed out, validation is a client issue.  Setting up DNSSEC
> properly and keeping it running is for the server admin - which bind is
> incrementally automating.
>
> For bind, the work-around for bad servers (which is mentioned in the
> article) is to setup negative trust anchors in the client for zones that
> fail.  And notify the zone administrator to fix the problem.  I usually
> point them to a DNSVIZ report on their zone.
>
> The nasa.gov failure was avoidable.  nasawatch, which is an excellent
> resource for space news, jumped to an incorrect conclusion about the outage
> - and never got the story straight.  In fact, all validating resolvers
> (including mine) correctly rejected the signatures.  It wasn't comcast's
> fault - they were a victim.
>
> It is an unfortunate reality that admins will make mistakes.  And that
> there is no way to get all resolvers to fix them - you can't even find all
> the resolvers.  (Consider systemd-resolved, or simply finding all the
> recursive bind, powerdns, etc instances...)
>
> There is no global "soft" option - aside from unsigning the zone and
> waiting for the TTLs to expire.  And besides being a really bad idea, it's
> easier to fix the immediate problem and learn not to repeat it.
>
> Long term, automation of the (re-)signing and key roll-overs will reduce
> the likelihood of these outages.  It is truly unfortunate that it's so late
> in coming.
>
> It may take a flag day to get major resolver operators, dns servers, and
> client resolvers all on the same page.  I'm not holding my breath.
>
> Timothe Litt
> ACM Distinguished Engineer
> --
> This communication may not represent the ACM or my employer's views,
> if any, on the matters discussed.
>
>


caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz
Part of my problem is that caching does not seem to be working in my 
internal view.


Something is happening such that my internal systems AND the server 
itself cannot resolve names, and lose them again even 5 min later, 
indicating no caching.


I read https://kb.isc.org/docs/aa-00851

In my include for the internal view (named.internal) I have:

    match-clients        { httnets; };
    match-destinations    { httnets; };
    allow-query        { httnets; };
    allow-query-cache    { httnets; };
    allow-recursion        { any; };
    recursion yes;
    empty-zones-enable yes;

Yet I get on my DNS server:

ping www.google.com
ping: www.google.com: Name or service not known

Then later it works.

Then later it doesn't again.

Sigh.  If at least caching were working for internal use, I would be able 
to work more smoothly.







Re: Stopping ddos

2022-08-03 Thread Robert Moskowitz

Thanks.  I will look into this.

On 8/3/22 07:47, Victor Johansson via bind-users wrote:




Re: caching does not seem to be working for internal view

2022-08-03 Thread Greg Choules via bind-users
Hi Robert.
May we see the file /etc/resolv.conf and your BIND configuration? It's
difficult to guess what might be going on with only a small snippet of
information.
If you "ping somewhere" (or "ssh a-server", or whatever) the OS will
consult resolv.conf to determine where to send DNS queries. If that's not
your local instance of BIND then you could be looking for trouble in the
wrong place.

If you *do* have an address of the local machine as the first 'nameserver'
entry in resolv.conf you will need to know what that query looks like to
determine how BIND is going to handle it.
You also need to know what BIND will try and do when it does receive
queries.

Packet captures are your friend here, using tcpdump (to disk, not to
screen). Gather evidence first, then make theories.
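For instance, a capture of everything the server sends and receives on port 53 might look like the following; the interface name is an assumption:

```shell
# Capture DNS packets to disk; stop with Ctrl-C after reproducing the failure.
tcpdump -i eth0 -w /tmp/dns.pcap port 53

# Review later, with -nn so the trace itself triggers no name lookups.
tcpdump -nn -r /tmp/dns.pcap | less
```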

Cheers, Greg

On Wed, 3 Aug 2022 at 14:29, Robert Moskowitz  wrote:



Re: DNSSEC adoption

2022-08-03 Thread Timothe Litt


On 03-Aug-22 09:27, Bob Harold wrote:
I think the best way to soften the effect, and make DNSSEC much less 
brittle, without losing any of the security, is to reduce the TTL of 
the DS record in the parent zone (usually TLD's) drastically - from 2 
days to like 30 minutes.  That allows quick recovery from a failure.  
I realize that will cause an increase in DNS traffic, and I don't know 
how much of an increase, but the 24-48 hour TTL of the DS record is 
the real down-side of DNSSEC, and why it is taking me so long to try 
to develop a bullet-proof process before signing my zones.


--
Bob Harold
University of Michigan



Yes, in planning for DNSSEC changes it's a good idea to include reducing 
TTLs, verifying the change, then increasing the TTLs.


That means keeping track of important (I'd say non-automated) events, 
and reducing TTL a few days in advance.


If you do that, you get the benefit of long TTLs most of the time.  KSK 
rollover - probably the most common cause of errors - is not a frequent 
event.


Then again, with proper planning, you don't make nearly as many mistakes.

Also, while I haven't gotten around to migrating, for a new setup I'd 
look at the dnssec-policy in 9.16+, which appears to do most of the 
automation for you.  All of it if you have a registrar who supports 
CDS/CDNSKEY, in which the parent zone pulls the new DS into itself. 
https://kb.isc.org/docs/dnssec-key-and-signing-policy
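As a sketch, switching a zone to automated signing with dnssec-policy can be as small as the fragment below. The zone and file names are placeholders:

```
// named.conf fragment, BIND 9.16+ (sketch, not a complete config)
zone "example.com" {
    type primary;
    file "example.com.zone";
    dnssec-policy default;   // built-in policy: key generation, signing, rollovers
    inline-signing yes;      // keep the unsigned zone file untouched
                             // (explicit here; newer releases may imply it)
};
```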


Some issues have been reported on this mailing list, but from a distance 
it seems to be a great improvement and doing well.


At this point, creating a new process doesn't seem like a great use of 
time...at least unless you've identified issues with the tools that you 
can't get fixed... The ISC folks working on dnssec-policy seem to have 
been responsive.


FWIW

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: DNSSEC adoption

2022-08-03 Thread rainer

Am 2022-08-03 15:27, schrieb Bob Harold:

I think the best way to soften the effect, and make DNSSEC much less
brittle, without losing any of the security, is to reduce the TTL of
the DS record in the parent zone (usually TLD's) drastically - from 2
days to like 30 minutes.  That allows quick recovery from a failure.
I realize that will cause an increase in DNS traffic, and I don't know
how much of an increase, but the 24-48 hour TTL of the DS record is
the real down-side of DNSSEC, and why it is taking me so long to try
to develop a bullet-proof process before signing my zones.



These days, companies of all sizes are using ultra-short TTLs of 60s 
(and I've seen less) for all sorts of "fail-over" mechanisms and 
load-balancing schemes.


So one more short-TTL record should *in theory* not matter much. 
Personally, though, I'm not too happy about short TTLs; this trend is 
likely already undermining the stability and redundancy of the internet 
as a whole.






RE: DNSSEC adoption

2022-08-03 Thread Brown, William
> One more thing should *in theory* not matter much. Personally, I'm not too 
> happy about short TTLs. This trend is likely significantly undermining the 
> stability and redundancy of the internet as a whole already.

In the days of limited, expensive hardware and slow links, long TTLs made 
sense.  Our one-vCPU name servers are almost wasting power serving up several 
hundred domains.

To me it seems that a short TTL is going to let you route to a different server 
in case of outage in the absence of multicasting, etc. which may be overkill in 
some cases but a redundant server is adequate.  How does this undermine 
stability and redundancy of the internet?


Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz via bind-users
Thanks, Greg.  Yes, I need to figure out how to troubleshoot this. But 
here is some stuff:


# cat resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 2600:1700:9120:4330::1

My server is 23.123.122.146.  That IPv6 addr is my ATT router.

# cat named.conf
    include "/etc/named/named.acl";

options {
    listen-on port 53 { any; };
    listen-on-v6 port 53 { any; };
    use-v4-udp-ports { range 10240 65535; };
    use-v6-udp-ports { range 10240 65535; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { localhost; };

    dnssec-enable no;
    dnssec-validation no;
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

view "internal" {
    include "/etc/named/named.internal";
};

view "external" {
    include "/etc/named/named.external";
};

include "/etc/named/rndc.key";
include "/etc/named.root.key";

# cat named.acl
acl "httslaves"  {
//    address of NSs
    208.83.69.35;    // ns1.mudkips.net
    208.83.66.130;    // ns2.mudkips.net
    63.68.132.50;    // ns1.icsl.net
    2607:f4b8:2600:1::1;    // ns1.mudkips.net
    2607:f4b8:2600:6::1;    // ns2.mudkips.net
};

acl "httnets" {
    127.0.0.1;
    23.123.122.144/28;
    192.168.32.0/24;
    192.168.64.0/24;
    192.168.96.0/24;
    192.168.160.0/23;
    192.168.128.0/23;
    192.168.192.0/22;
    192.168.224.0/24;
    ::1;
    2600:1700:9120:4330::/64;
};


# cat named.internal

    match-clients        { httnets; };
    match-destinations    { httnets; };
    allow-query        { httnets; };
    allow-query-cache    { httnets; };
    allow-recursion        { any; };
    recursion yes;
    empty-zones-enable yes;

    zone "." IN {
        type hint;
        file "named.ca";
    };

    include "/etc/named.rfc1912.zones";

    zone "htt-consult.com" {
        type master;
        file "httin-consult.com.zone";
    };
    zone "labs.htt-consult.com" {
        type master;
        file "labs.htt-consult.com.hosts";
    };
    zone "intelcon.htt-consult.com" {
        type master;
        file "intelcon.htt-consult.com.hosts";
    };
    zone "mobile.htt-consult.com" {
        type master;
        file "mobile.htt-consult.com.hosts";
    };
    zone "test.htt-consult.com" {
        type master;
        file "test.httin-consult.com.hosts";
    };
    zone "128.168.192.in-addr.arpa" {
        type master;
        file "128.168.192.in-addr.arpa.zone";
    };
    zone "0-24.128.168.192.in-addr.arpa" {
        type master;
        file "0-24.128.168.192.in-addr.arpa.zone";
    };
    zone "htt" {
        type master;
        file "htt.zone";
    };
    zone "home.htt" {
        type master;
        file "home.htt.zone";
    };


Do you also want my named.external?


On 8/3/22 09:39, Greg Choules wrote:


Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to resolve 
internal names.  There is no guarantee that your client resolver will 
try nameservers in order.  If you want a backup, run a second instance 
of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace will 
show what happens (if it still does - it could be that the ATT router's 
resolver is at fault).


Intermediate step would be to use dig.
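For example, querying the local BIND and the ATT router separately will show which resolver is misbehaving. The addresses are taken from the resolv.conf quoted above:

```shell
dig @23.123.122.146 www.google.com A +stats          # local named
dig @2600:1700:9120:4330::1 www.google.com A +stats  # ATT router
# Compare status, answer section, and "Query time" between the two.
```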

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.





Re: Stopping ddos

2022-08-03 Thread Nathan Ollerenshaw via bind-users

On 8/2/22 3:29 PM, Robert Moskowitz wrote:


My clients use my internal view.  My external view has:

     match-clients        { any; };
     match-destinations    { any; };
     allow-query        { any; };
     allow-query-cache    { localhost; };
     recursion no;


it's been a while but I don't think you need to respond to requests for 
'.' ... so I think you can block access to all zones except the one you 
want to respond for.


I am way behind the times, as I really have not made any significant 
changes to my config for a couple years.  Things have been stable.


And I am running CentOS7-arm which only has 9.11.4...

BTW, I am in the market for an 'affordable' DNS box to run here and get 
out of the business of maintaining my own software.  I am approaching 
72, and this is not something I want to do anymore.  And I have not seen 
a service provider that would let me really config my own zone files...


I was in the same boat and ended up shifting my personal stuff to 
Route53 in Amazon AWS. It costs like, $1 a month per zone to host and 
nobody is going to be killing Route53.


You can configure all the records in the zone however you like, and 
there are APIs if you want to script things, so for something like a 
residential network connection you can have it update its A record in 
Route53 with a script when the IP changes.



Re: DNSSEC adoption

2022-08-03 Thread Peter
I see a two-fold issue with DNSSEC:

1. The wide-spread tutorials seem to explain a key rollover as an
   exceptional activity, a *change* that is infrequently done. And
   changes, specifically the infrequent ones, bring along the
   possibility of failure, mostly due to human error.

   I don't see a reason why this is so. DNSSEC can be fully
   automated (mine is), and then it can be done frequently, with the
   human factor out of the loop. It is then no longer a change,
   but a regular operation that happens on a regular schedule
   without anybody even needing to notice it.
   (Let's Encrypt did the same for certificates, and that also works
   well.)

2. TCP seems still to be considered a second-class-citizen in the
   DNS world. (If I got the details right, TCP is only "optional",
   and must only be tried as a second choice after receiving TC.)
   So people may be induced to try and squeeze replies into whatever
   512 or 1280 or 1500 bytes. Which means, they probably cannot use
   more than one key, and so take possible redundancy out of the game.

   I do not currently know about how or where this issue could be
   tackled appropriately; I for my part have decided to happily ignore
   it, and am using *four* KSK, thereby supporting RFC 5011 and RFC
   7344, all with one simple script - and anyway now I have the longest;
   here you can see it in action: https://dnsviz.net/d/daemon.contact/dnssec/
   Let's see where this leads into problems; for now it appears not to.
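The "regular operation" idea in point 1 is easy to see as plain date arithmetic, which is why it automates so cleanly. A minimal sketch of a periodic rollover schedule (the interval names and values are illustrative assumptions, not recommendations):

```python
from datetime import date, timedelta

def rollover_schedule(start, publish_lead_days=30, active_days=180, retire_lag_days=30):
    """Compute key lifecycle dates for a periodic, hands-off rollover.
    The successor key is published before the active period ends so
    validators (and RFC 5011 trust-anchor tracking) see it in time."""
    publish  = start
    activate = start + timedelta(days=publish_lead_days)
    retire   = activate + timedelta(days=active_days)
    delete   = retire + timedelta(days=retire_lag_days)
    return {"publish": publish, "activate": activate,
            "retire": retire, "delete": delete}

s = rollover_schedule(date(2022, 8, 1))
print(s["activate"], s["retire"])  # 2022-08-31 2023-02-27
```

A cron job that runs this once a day and generates/retires keys when a date is reached is all the "change management" such a rollover needs.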

-- PMc


Re: rate limiting queries with firewall (was: Stopping ddos)

2022-08-03 Thread Grant Taylor via bind-users

On 8/2/22 3:15 PM, Grant Taylor via bind-users wrote:
It looks like you're dealing with A queries for the root domain.  I've 
blocked this, and similar queries, via an iptables firewall in the past.


I've seen a number of responses to Robert's "Stopping ddos" thread 
discussing using firewalls (iptables) to /rate/ /limit/ queries.


I wanted to add an overarching comment that such /rate/ /limiting/ 
ultimately means that some amount of state must be maintained on 
systems.  This is a potential vector for a denial of service if left 
unchecked.


So I'd like to clarify that I believe that it is better in some 
situations to /statelessly/ /drop/ traffic that has no reason for going 
to a server.  E.g. a server that's only authoritative for 2nd level 
domains has no business responding to any form of queries for the root zone.


To wit, I have the following rule in the PREROUTING chain of the raw 
table to filter out queries for the root zone.


iptables -t raw -A PREROUTING -i eth0 -p udp -m udp --dport 53 -m string 
--hex-string "|ff0001|" --algo bm --from 40 --to 65535 -j DROP
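If I read the DNS wire format right, the `|ff0001|` pattern lands on QTYPE ANY (0x00ff) followed by the first byte of QCLASS IN, so this rule catches ". ANY IN" floods rather than plain A queries (`--from 40` assumes a 20-byte IPv4 header, 8-byte UDP header, and 12-byte DNS header before the question section). A minimal Python sketch building both queries by hand to show where the matched bytes sit (the transaction ID and flags are arbitrary assumptions):

```python
import struct

def build_query(qname_labels, qtype, qclass=1, txid=0x1234):
    # DNS header: ID, flags (RD set), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode() for l in qname_labels) + b"\x00"
    question = qname + struct.pack(">HH", qtype, qclass)
    return header + question

root_any = build_query([], qtype=255)   # ". ANY IN" -- matched by the rule
root_a   = build_query([], qtype=1)     # ". A IN"   -- not matched

print(b"\xff\x00\x01" in root_any)  # True
print(b"\xff\x00\x01" in root_a)    # False
```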


Just a follow up / drive by comment.



--
Grant. . . .
unix || die





Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run a 
second instance of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that IPv6 
addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this as ATT 
has said there is no assurance that the IPv6 addresses won't change.  
So I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to stop 
it from being pushed into my resolv.conf.





Re: DNSSEC adoption

2022-08-03 Thread Mark Elkins via bind-users

I generally agree with you - comments in line

On 8/3/22 5:56 PM, Peter wrote:

I see a two-fold issue with DNSSEC:

1. The wide-spread tutorials seem to explain a key rollover as an
exceptional activity, a *change* that is infrequently done. And
changes, specifically the infrequent ones, bring along the
possibility of failure, mostly due to human error.


Domains with Cloudflare seem to get signed once (KSK/DS, etc.) and 
that's it!




I don't see a reason why this is so. DNSSEC can be fully
automated (mine is), and then it can be done frequently, with the
human factor out of the loop. It is then no longer a change,
but a regular operation that happens on a regular schedule
without anybody even needing to notice it.
(Let's Encrypt did the same for certificates, and that also works
well.)


Both my DNSSEC and Let's Encrypt are totally automated as well. I 
usually run two KSKs, overlapping by 6 months - so plenty of "rollover" 
time. For other domains, there is only a second KSK for a week or so.




2. TCP seems still to be considered a second-class-citizen in the
DNS world. (If I got the details right, TCP is only "optional",


Agh! No. NOT OPTIONAL. One might see it as a fall-back for when UDP 
fails (Truncated) but it is completely necessary!




and must only be tried as a second choice after receiving TC.)
So people may be induced to try and squeeze replies into whatever
512 or 1280 or 1500 bytes. Which means, they probably cannot use
more than one key, and so take possible redundancy out of the game.

I do not currently know about how or where this issue could be
tackled appropriately; I for my part have decided to happily ignore
it, and am using *four* KSK, thereby supporting RFC 5011 and RFC
7344, all with one simple script - and anyway now I have the longest;
here you can see it in action: https://dnsviz.net/d/daemon.contact/dnssec/
Let's see where this leads into problems; for now it appears not to.

-- PMc



Fair enough. And Elliptic Curve (Algo 13?) - so much shorter.
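To put rough numbers on "so much shorter", here is a back-of-the-envelope comparison of DNSKEY RDATA sizes (a sketch; the key parameters are assumed typical values, not taken from any particular zone):

```python
# Approximate DNSKEY RDATA sizes: 4 bytes of fixed fields plus key material.
FIXED = 4  # flags (2) + protocol (1) + algorithm (1)

# RSA-2048 public key: 1-byte exponent length + 3-byte exponent + 256-byte modulus
rsa2048 = FIXED + 1 + 3 + 256

# ECDSA P-256 (algorithm 13): two 32-byte curve point coordinates
ecdsa_p256 = FIXED + 64

print(rsa2048, ecdsa_p256)  # 264 68
```

Roughly a 4x saving per key (and signatures shrink similarly, 256 bytes down to 64), which is why multiple ECDSA keys fit comfortably in responses where multiple RSA keys would force truncation.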

ps - Algorithm rollovers can be fun!!!

--

Mark James ELKINS  -  Posix Systems - (South) Africa
m...@posix.co.za   Tel: +27.826010496 







Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

Try

echo -e "[main]\ndns=none" > /etc/NetworkManager/conf.d/no-dns.conf
systemctl restart NetworkManager.service

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 03-Aug-22 12:36, Robert Moskowitz wrote:



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run a 
second instance of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that 
IPv6 addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this as ATT 
has said there is no assurance that the IPv6 addresses won't change.  
So I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to 
stop it pushing into my resolv.conf.






Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Anand Buddhdev

On 03/08/2022 18:36, Robert Moskowitz wrote:

Hi Robert,

[snip]


ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


Calm down. Just add "PEERDNS=no" in your ifcfg-eth0 file. This way, the 
resolv.conf file will only contain your specified DNS servers, and 
nothing else.


Regards,
Anand


Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz



On 8/3/22 13:10, Anand Buddhdev wrote:

On 03/08/2022 18:36, Robert Moskowitz wrote:

Hi Robert,

[snip]


ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


Calm down. Just add "PEERDNS=no" in your ifcfg-eth0 file. This way, 
the resolv.conf file will only contain your specified DNS servers, and 
nothing else.


I was excited to see this simple approach.  I did systemctl restart 
NetworkManager.service


And no change.  :(

I will try Timothe's recommendation next.

BTW, it seems on top of everything else my fiber connection was going 
south, to the point that the ATT firewall was interrupting Firefox 
browsing with a message to reset my fiber broadband router! That has 
helped.  Some.


Multiple failures, which is often the case.




Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz



On 8/3/22 12:59, Timothe Litt wrote:


Try

echo -e "[main]\ndns=none" > /etc/NetworkManager/conf.d/no-dns.conf
systemctl restart NetworkManager.service



Same content in resolv.conf.  BTW this is on Centos7.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.
On 03-Aug-22 12:36, Robert Moskowitz wrote:



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run a 
second instance of named.


As for the intermittent issues with resolving external names, that's 
frequently a case of hitting different nameservers.  Or a firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that 
IPv6 addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want that 
IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this as 
ATT has said there is no assurance that the IPv6 addresses won't 
change.  So I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to 
stop it pushing into my resolv.conf.








Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz
Perhaps this is only caching the zones in the Internal View, not all 
public stuff looked up by internal clients?


I say this because I get fast responses to internal servers, but slow if 
at all to external ones.


Grasping here because my search-fu is weak and I can't find where it is 
defined exactly what IS cached.


On 8/3/22 10:52, Robert Moskowitz via bind-users wrote:
thanks Greg.  Yes I need to figure out how to troubleshoot this. But 
here is some stuff:


# cat resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 2600:1700:9120:4330::1

My server is 23.123.122.146.  That IPv6 addr is my ATT router.

# cat named.conf
    include "/etc/named/named.acl";

options {
    listen-on port 53 { any; };
    listen-on-v6 port 53 { any; };
    use-v4-udp-ports { range 10240 65535; };
    use-v6-udp-ports { range 10240 65535; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { localhost; };

    dnssec-enable no;
    dnssec-validation no;
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

view "internal" {
    include "/etc/named/named.internal";
};

view "external" {
    include "/etc/named/named.external";
};

include "/etc/named/rndc.key";
include "/etc/named.root.key";

# cat named.acl
acl "httslaves"  {
//    address of NSs
    208.83.69.35;    // ns1.mudkips.net
    208.83.66.130;    // ns2.mudkips.net
    63.68.132.50;    // ns1.icsl.net
    2607:f4b8:2600:1::1;    // ns1.mudkips.net
    2607:f4b8:2600:6::1;    // ns2.mudkips.net
};

acl "httnets" {
    127.0.0.1;
    23.123.122.144/28;
    192.168.32.0/24;
    192.168.64.0/24;
    192.168.96.0/24;
    192.168.160.0/23;
    192.168.128.0/23;
    192.168.192.0/22;
    192.168.224.0/24;
    ::1;
    2600:1700:9120:4330::/64;
};


# cat named.internal

    match-clients        { httnets; };
    match-destinations    { httnets; };
    allow-query        { httnets; };
    allow-query-cache    { httnets; };
    allow-recursion        { any; };
    recursion yes;
    empty-zones-enable yes;

    zone "." IN {
    type hint;
    file "named.ca";    };

    include "/etc/named.rfc1912.zones";

    zone "htt-consult.com" {
    type master;
    file "httin-consult.com.zone";    };

    zone "labs.htt-consult.com" {
    type master;
    file "labs.htt-consult.com.hosts";    };
    zone "intelcon.htt-consult.com" {
    type master;
    file "intelcon.htt-consult.com.hosts";    };
    zone "mobile.htt-consult.com" {
    type master;
    file "mobile.htt-consult.com.hosts";    };
    zone "test.htt-consult.com" {
    type master;
    file "test.httin-consult.com.hosts";    };
    zone "128.168.192.in-addr.arpa" {
    type master;
    file "128.168.192.in-addr.arpa.zone";  };
    zone "0-24.128.168.192.in-addr.arpa" {
    type master;
    file "0-24.128.168.192.in-addr.arpa.zone";    };
    zone "htt" {
    type master;
    file "htt.zone";  };
    zone "home.htt" {
    type master;
    file "home.htt.zone";    };


Do you also want my named.external?


On 8/3/22 09:39, Greg Choules wrote:

Hi Robert.
May we see the file /etc/resolv.conf and your BIND configuration? 
It's difficult to guess what might be going on with only a small 
snippet of information.
If you "ping somewhere" (or "ssh a-server", or whatever) the OS will 
consult resolv.conf to determine where to send DNS queries. If that's 
not your local instance of BIND then you could be looking for trouble 
in the wrong place.


If you *do* have an address of the local machine as the first 
'nameserver' entry in resolv.conf you will need to know what that 
query looks like to determine how BIND is going to handle it.
You also need to know what BIND will try and do when it does receive 
queries.


Packet captures are your friend here, using tcpdump (to disk, not to 
screen). Gather evidence first, then make theories.


Cheers, Greg

On Wed, 3 Aug 2022 at 14:29, Robert Moskowitz  
wrote:


Part of my problem is that caching does not seem to be working in my
internal view.

Something is happening such that my internal systems AND the server
itself cannot resolve names, and then lose them even 5 minutes later,
indicating no caching.

I read https://kb.isc.org/docs/aa-00851

In my include for the internal view (named.internal) I have:

 match-clients        { httnets; };
     match-destinations    { httnets; };
     all

Re: ,Re: caching does not seem to be working for internal view

2022-08-03 Thread Timothe Litt

Hmm.  Your resolv.conf says that it's written by NetworkManager.

What I suggested should have stopped it from updating resolv.conf.

See 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/manually-configuring-the-etc-resolv-conf-file_configuring-and-managing-networking


After restarting the service, did you edit (or replace) resolv.conf to 
remove the AT&T address?


If not, stop here & edit the file.

If so, perhaps some other manager is editing the file without replacing 
the comment.


Check to see if resolv.conf is a symlink - some managers (e.g. 
systemd-resolved) will do that.  Not sure when/if it found its way to 
centos (I don't run it), but if it's there, systemctl stop & disable 
it.  It would be running on 127.0.0.53:53, but it usually points 
resolv.conf to itself.
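A quick, non-destructive way to check this from a script (a hedged sketch; it only inspects the file and assumes the usual glibc resolv.conf format):

```python
import os

def resolv_conf_info(path="/etc/resolv.conf"):
    """Report whether the path is a symlink (a hint that systemd-resolved
    or another manager owns it) and, if so, what it points at."""
    if os.path.islink(path):
        return ("symlink", os.readlink(path))
    return ("regular file", None)

def parse_nameservers(text):
    """Extract nameserver addresses from resolv.conf-style text."""
    return [line.split()[1]
            for line in text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1]

sample = """# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 2600:1700:9120:4330::1
"""
print(parse_nameservers(sample))  # ['23.123.122.146', '2600:1700:9120:4330::1']
```

If the first call reports a symlink into /run/systemd/resolve/, systemd-resolved is the writer; otherwise look at NetworkManager or whatever the generated-by comment names.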


The other managers that I know of aren't in redhat distributions.

You may need to use auditing to identify what is writing the file.

Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.

On 03-Aug-22 14:39, Robert Moskowitz wrote:



On 8/3/22 12:59, Timothe Litt wrote:


Try

echo -e "[main]\ndns=none" > /etc/NetworkManager/conf.d/no-dns.conf
systemctl restart NetworkManager.service



Same content in resolv.conf.  BTW this is on Centos7.


Timothe Litt
ACM Distinguished Engineer
--
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.
On 03-Aug-22 12:36, Robert Moskowitz wrote:



On 8/3/22 11:35, Timothe Litt wrote:

On 03-Aug-22 10:53, bind-users-requ...@lists.isc.org wrote:

# cat resolv.conf

My server is 23.123.122.146.  That IPv6 addr is my ATT router.



You don't want to do that.  The ATT router will not know how to 
resolve internal names.  There is no guarantee that your client 
resolver will try nameservers in order.  If you want a backup, run 
a second instance of named.


As for the intermittent issues with resolving external names, 
that's frequently a case of hitting different nameservers.  Or a 
firewall.


Get rid of the ATT router first.  Then as suggested, a packet trace 
will show what happens (if it still does - it could be that the ATT 
router's resolver is at fault).




Thank you for your advice.  my ifcfg-eth0 has:

DEVICE="eth0"
BOOTPROTO=none
ONBOOT="yes"
TYPE="Ethernet"
NAME="eth0"
MACADDR=02:67:15:00:00:02
MTU=1500
DNS1=23.123.122.146
GATEWAY="23.123.122.158"
IPADDR="23.123.122.146"
NETMASK="255.255.255.240"
IPV6INIT="yes"

And I am ASSuMEing that it is that IPV6INIT that is providing that 
IPv6 addr in resolv.conf.  So I added:


DNS2=192.168.224.2

And now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1

ARGH!

I want the IPv6 addr from my firewall/gateway.  But I don't want 
that IPv6 nameserver!


So I added the IPv6 address for my server.  I had not done this as 
ATT has said there is no assurance that the IPv6 addresses won't 
change.  So I added:


DNS3=2600:1700:9120:4330::49

and now:

# cat /etc/resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 192.168.224.2
nameserver 2600:1700:9120:4330::1
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 2600:1700:9120:4330::49

Sigh.  I have to take that dynamic IPv6 assignment.  But I want to 
stop it pushing into my resolv.conf.








Re: DNSSEC adoption

2022-08-03 Thread Ondřej Surý
Not really. Using ECDSA (or EdDSA) CSK is pretty lightweight even during 
rollover.

Ondrej
--
Ondřej Surý — ISC (He/Him)

My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.

> On 3. 8. 2022, at 19:10, Peter  wrote:
> 
> So people may be induced to try and squeeze replies into whatever
>   512 or 1280 or 1500 bytes. Which means, they probably cannot use
>   more than one key, and so take possible redundancy out of the game.


Re: caching does not seem to be working for internal view

2022-08-03 Thread Robert Moskowitz

This is borderline not thinking on my part.

OF COURSE those FQDNs resolve fast; they are in local zone files.  No 
lookup needed.


Sheesh.

"Slow down, you move too fast.  Got to make the mornin' last!"  :)

On 8/3/22 14:43, Robert Moskowitz wrote:
Perhaps this is only caching the zones in the Internal View, not all 
public stuff looked up by internal clients?


I say this because I get fast responses to internal servers, but slow 
if at all to external ones.


Grasping here because my search-fu is weak and I can't find where it 
is defined exactly what IS cached.


On 8/3/22 10:52, Robert Moskowitz via bind-users wrote:
thanks Greg.  Yes I need to figure out how to troubleshoot this.  But 
here is some stuff:


# cat resolv.conf
# Generated by NetworkManager
search attlocal.net htt-consult.com
nameserver 23.123.122.146
nameserver 2600:1700:9120:4330::1

My server is 23.123.122.146.  That IPv6 addr is my ATT router.

# cat named.conf
    include "/etc/named/named.acl";

options {
    listen-on port 53 { any; };
    listen-on-v6 port 53 { any; };
    use-v4-udp-ports { range 10240 65535; };
    use-v6-udp-ports { range 10240 65535; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { localhost; };

    dnssec-enable no;
    dnssec-validation no;
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

view "internal" {
    include "/etc/named/named.internal";
};

view "external" {
    include "/etc/named/named.external";
};

include "/etc/named/rndc.key";
include "/etc/named.root.key";

# cat named.acl
acl "httslaves"  {
//    address of NSs
    208.83.69.35;    // ns1.mudkips.net
    208.83.66.130;    // ns2.mudkips.net
    63.68.132.50;    // ns1.icsl.net
    2607:f4b8:2600:1::1;    // ns1.mudkips.net
    2607:f4b8:2600:6::1;    // ns2.mudkips.net
};

acl "httnets" {
    127.0.0.1;
    23.123.122.144/28;
    192.168.32.0/24;
    192.168.64.0/24;
    192.168.96.0/24;
    192.168.160.0/23;
    192.168.128.0/23;
    192.168.192.0/22;
    192.168.224.0/24;
    ::1;
    2600:1700:9120:4330::/64;
};


# cat named.internal

    match-clients        { httnets; };
    match-destinations    { httnets; };
    allow-query        { httnets; };
    allow-query-cache    { httnets; };
    allow-recursion        { any; };
    recursion yes;
    empty-zones-enable yes;

    zone "." IN {
    type hint;
    file "named.ca";    };

    include "/etc/named.rfc1912.zones";

    zone "htt-consult.com" {
    type master;
    file "httin-consult.com.zone";    };

    zone "labs.htt-consult.com" {
    type master;
    file "labs.htt-consult.com.hosts";    };
    zone "intelcon.htt-consult.com" {
    type master;
    file "intelcon.htt-consult.com.hosts";    };
    zone "mobile.htt-consult.com" {
    type master;
    file "mobile.htt-consult.com.hosts";    };
    zone "test.htt-consult.com" {
    type master;
    file "test.httin-consult.com.hosts";    };
    zone "128.168.192.in-addr.arpa" {
    type master;
    file "128.168.192.in-addr.arpa.zone";  };
    zone "0-24.128.168.192.in-addr.arpa" {
    type master;
    file "0-24.128.168.192.in-addr.arpa.zone";    };
    zone "htt" {
    type master;
    file "htt.zone";  };
    zone "home.htt" {
    type master;
    file "home.htt.zone";    };


Do you also want my named.external?


On 8/3/22 09:39, Greg Choules wrote:

Hi Robert.
May we see the file /etc/resolv.conf and your BIND configuration? 
It's difficult to guess what might be going on with only a small 
snippet of information.
If you "ping somewhere" (or "ssh a-server", or whatever) the OS will 
consult resolv.conf to determine where to send DNS queries. If 
that's not your local instance of BIND then you could be looking for 
trouble in the wrong place.


If you *do* have an address of the local machine as the first 
'nameserver' entry in resolv.conf you will need to know what that 
query looks like to determine how BIND is going to handle it.
You also need to know what BIND will try and do when it does receive 
queries.


Packet captures are your friend here, using tcpdump (to disk, not to 
screen). Gather evidence first, then make theories.
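To make that concrete, here is a command-line sketch (a fragment, not something to run blindly: the interface name and output path are placeholders to adjust for your system):

```shell
# Capture DNS traffic to a file for offline analysis; replace eth0
# with the interface your resolver actually listens on.
tcpdump -n -i eth0 -s 0 -w /var/tmp/dns.pcap port 53

# Read the capture back later:
tcpdump -n -r /var/tmp/dns.pcap
```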


Cheers, Greg

On Wed, 3 Aug 2022 at 14:29, Robert Moskowitz  
wrote:


Part of my problem is that caching does not seem to be working in my
internal view.

Something is happening such that my internal systems AND the server
itself cannot resolve names and lose it even 5

Re: caching does not seem to be working for internal view

2022-08-03 Thread Greg Choules via bind-users
Hi Robert.
Turn on query logging by running "rndc querylog". You should see a message
in "named.log" confirming that it has been enabled; each query will now be
logged there. If you have views, each query log entry will record which view
was matched. So this will tell you two things:

   1. If the queries you are making are actually being handled by BIND at
   all.
   2. If they are, which view handled them.

Try that and see what you discover.

By the way, if you want to see a snapshot of the cache you will have to
dump it to disk using "rndc dumpdb -all". This will produce a file called
"named_dump.db" in the working directory. Commonly this will be the same
location as your zone files. It's a text file, so you can look through it
with cat/more/less etc.
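As an illustration of working through such a dump, here is a minimal Python sketch that pulls out the cached records for one owner name. The assumed line layout ("name ttl class type rdata", with ";" comment lines) matches typical named_dump.db output, but check it against your own file before relying on this.

```python
# Sketch: scan a BIND cache dump (named_dump.db) for records owned by a name.
# Assumes master-file-style lines: "name ttl class type rdata".

def find_records(dump_text, name):
    """Return (owner, ttl, rtype, rdata) tuples whose owner matches name."""
    hits = []
    for line in dump_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";") or line.startswith("$"):
            continue  # skip comments and directives
        fields = line.split()
        if len(fields) >= 5 and fields[0].rstrip(".") == name.rstrip("."):
            owner, ttl, _cls, rtype = fields[:4]
            hits.append((owner, int(ttl), rtype, " ".join(fields[4:])))
    return hits

# Hypothetical dump excerpt for demonstration only.
sample = """\
; Start view internal
example.com. 3600 IN A 93.184.216.34
example.com. 3600 IN AAAA 2606:2800:220:1:248:1893:25c8:1946
www.example.com. 300 IN CNAME example.com.
"""

print(find_records(sample, "example.com"))
```

Grepping the dump directly works too, of course; the point of parsing is that you also get the remaining TTL, which tells you whether an answer really was served from cache.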

Cheers, Greg

On Wed, 3 Aug 2022 at 21:23, Robert Moskowitz  wrote:

> This is borderline not thinking on my part.
>
> OF COURSE those FQDNs resolve fast; they are in local zone files.  No
> lookup needed.
>
> Sheesh.
>
> "Slow down, you move too fast.  Got to make the mornin' last!"  :)
>
> On 8/3/22 14:43, Robert Moskowitz wrote:
>
> Perhaps this is only caching the zones in the Internal View, not all
> public stuff looked up by internal clients?
>
> I say this because I get fast responses to internal servers, but slow if
> at all to external ones.
>
> Grasping here because my search foo is weak and I can't find where it is
> defined exactly what IS cached.
>
> On 8/3/22 10:52, Robert Moskowitz via bind-users wrote:
>
> thanks Greg.  Yes I need to figure out how to troubleshoot this.  But here
> is some stuff:
>
> # cat resolv.conf
> # Generated by NetworkManager
> search attlocal.net htt-consult.com
> nameserver 23.123.122.146
> nameserver 2600:1700:9120:4330::1
>
> My server is 23.123.122.146.  That IPv6 addr is my ATT router.
>
> # cat named.conf
> include "/etc/named/named.acl";
>
> options {
> listen-on port 53 { any; };
> listen-on-v6 port 53 { any; };
> use-v4-udp-ports { range 10240 65535; };
> use-v6-udp-ports { range 10240 65535; };
> directory "/var/named";
> dump-file "/var/named/data/cache_dump.db";
> statistics-file "/var/named/data/named_stats.txt";
> memstatistics-file "/var/named/data/named_mem_stats.txt";
> allow-query { localhost; };
>
> dnssec-enable no;
> dnssec-validation no;
> bindkeys-file "/etc/named.iscdlv.key";
> managed-keys-directory "/var/named/dynamic";
> pid-file "/run/named/named.pid";
> session-keyfile "/run/named/session.key";
> };
>
> logging {channel default_debug {
> file "data/named.run";
> severity dynamic;};};
>
> view "internal"
> {include "/etc/named/named.internal";};
>
> view "external"
> {include "/etc/named/named.external";};
>
> include "/etc/named/rndc.key";
> include "/etc/named.root.key";
>
> # cat named.acl
> acl "httslaves"  {
> //address of NSs
> 208.83.69.35;// ns1.mudkips.net
> 208.83.66.130;// ns2.mudkips.net
> 63.68.132.50;// ns1.icsl.net
> 2607:f4b8:2600:1::1;// ns1.mudkips.net
> 2607:f4b8:2600:6::1;// ns2.mudkips.net
> };
>
> acl "httnets" {
> 127.0.0.1;
> 23.123.122.144/28;
> 192.168.32.0/24;
> 192.168.64.0/24;
> 192.168.96.0/24;
> 192.168.160.0/23;
> 192.168.128.0/23;
> 192.168.192.0/22;
> 192.168.224.0/24;
> ::1;
> 2600:1700:9120:4330::/64;
> };
>
>
> # cat named.internal
>
> match-clients{ httnets; };
> match-destinations{ httnets; };
> allow-query{ httnets; };
> allow-query-cache{ httnets; };
> allow-recursion{ any; };
> recursion yes;
> empty-zones-enable yes;
>
> zone "." IN {
> type hint;
> file "named.ca";};
>
> include "/etc/named.rfc1912.zones";
>
> zone "htt-consult.com" {
> type master;
> file "httin-consult.com.zone";};
>
> zone "labs.htt-consult.com" {
> type master;
> file "labs.htt-consult.com.hosts";};
> zone "intelcon.htt-consult.com" {
> type master;
> file "intelcon.htt-consult.com.hosts";};
> zone "mobile.htt-consult.com" {
> type master;
> file "mobile.htt-consult.com.hosts";};
> zone "test.htt-consult.com" {
> type master;
> file "test.httin-consult.com.hosts";};
> zone "128.168.192.in-addr.arpa" {
> type master;
> file "128.168.192.in-addr.arpa.zone";  };
> zone "0-24.128.168.192.in-addr.arpa" {
> type master;
> file "0-24.128.168.192.in-addr.arpa.zone";};
> zone "htt" {
> type master;
> file "htt.zone";  };
> zone "home.htt" {
> type master;
> file "home.htt.zone";};
>
>
> Do you also want my named.external?
>
>
> On 8/3/22 09:39, Greg Choules wrote:
>
> Hi Robert.
>

Re: caching does not seem to be working for internal view

2022-08-03 Thread Lee
On 8/3/22, Robert Moskowitz via bind-users wrote:
> thanks Greg.  Yes I need to figure out how to troubleshoot this. But
> here is some stuff:
>
> # cat resolv.conf
> # Generated by NetworkManager
> search attlocal.net htt-consult.com
> nameserver 23.123.122.146
> nameserver 2600:1700:9120:4330::1
>
> My server is 23.123.122.146.  That IPv6 addr is my ATT router.
>
> # cat named.conf
>  include "/etc/named/named.acl";
>
> options {
>  listen-on port 53 { any; };
>  listen-on-v6 port 53 { any; };
>  use-v4-udp-ports { range 10240 65535; };
>  use-v6-udp-ports { range 10240 65535; };
>  directory "/var/named";
>  dump-file "/var/named/data/cache_dump.db";
>  statistics-file "/var/named/data/named_stats.txt";
>  memstatistics-file "/var/named/data/named_mem_stats.txt";
>  allow-query { localhost; };

seems wrong; shouldn't that be
  allow-query { httnets; };
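For what it's worth, a sketch of the options block with that suggestion applied (a configuration fragment, not a full file). Note that in BIND an allow-query inside a view overrides the global one in options, so the per-view httnets ACLs already apply to matched clients; aligning the global default just avoids surprises for queries that match no view:

```
options {
    // ...other options unchanged...
    // Allow the internal networks, not just localhost. Per-view
    // allow-query statements still override this global default.
    allow-query { httnets; };
};
```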

Lee


Re: Stopping ddos

2022-08-03 Thread Paul Kosinski via bind-users
On Wed, 3 Aug 2022 13:47:41 +0200
Victor Johansson via bind-users  wrote:

> Hey,
> 
> I just want to add that there is a better way to do this in iptables 
> with hashlimit. The normal rate limit in iptables is too crude.
> 
> Below is an example from the rate-limit-chain, to which you simply send 
> all port 53 traffic from the INPUT chain (make sure to exclude 
> 127.0.0.1/127.0.0.53 though :) ).
> 
> 
> -A INPUT -p udp -m udp --dport 53 -j DNS-RATE-LIMIT
> -A INPUT -p tcp -m tcp --dport 53 -j DNS-RATE-LIMIT
> 
> -A DNS-RATE-LIMIT -s 127.0.0.1/32 -m comment --comment "Dont rate-limit 
> localhost" -j RETURN
> -A DNS-RATE-LIMIT -m hashlimit --hashlimit-upto 100/sec 
> --hashlimit-burst 300 --hashlimit-mode srcip --hashlimit-name DNS-drop 
> --hashlimit-htable-expire 2000 -j ACCEPT
> -A DNS-RATE-LIMIT -m limit --limit 1/sec -j LOG --log-prefix "DNS-drop: "
> -A DNS-RATE-LIMIT -m comment --comment "ansible[dns rate limiting]" -j DROP
> 
> 
> //Victor
>


I was using iptables hashlimit for a while but stopped. It wasn't really 
solving my main problem, which was not so much "overloading" my BIND server as 
causing my log files to get filled with useless warnings about bad queries (or 
packets dropped).

It would be nice if BIND had a way to record such error messages into a
dumpable table with query, source IP and count.
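Until something like that exists in BIND itself, the aggregation can be done after the fact over the log. A minimal Python sketch, assuming a hypothetical "query (cache) ... denied" log line shape; the regex will need adjusting to match whatever your logging channel actually emits:

```python
import re
from collections import Counter

# Sketch: aggregate rejected-query log lines into (source IP, query) counts.
# The line format below is an assumed example, not BIND's exact output.
LINE_RE = re.compile(
    r"client @\S+ (?P<ip>[\da-fA-F.:]+)#\d+ "
    r"\((?P<qname>[^)]+)\): query \(cache\) '[^']+' denied"
)

def tally_denied(lines):
    """Count denied queries keyed by (source IP, query name)."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(m.group("ip"), m.group("qname"))] += 1
    return counts

# Hypothetical log excerpt for demonstration only.
sample_log = [
    "client @0x7f1 192.0.2.7#5353 (example.org): query (cache) 'example.org/A/IN' denied",
    "client @0x7f1 192.0.2.7#5353 (example.org): query (cache) 'example.org/A/IN' denied",
    "client @0x7f2 198.51.100.9#4242 (test.net): query (cache) 'test.net/MX/IN' denied",
]

for (ip, qname), n in tally_denied(sample_log).most_common():
    print(ip, qname, n)
```

Run periodically (or fed from a log pipe), this gives the "dumpable table" shape described above without filling the main log with per-packet noise.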
