We recently ran a test to see how our internal servers would react to a
loss of their external peers, the goal being that the internal servers
would switch from forwarding to doing recursive queries for clients.
Normally, the internal servers forward to the external servers. To
simulate
If you have a global forwarder in place, there are two options that affect
its use: "forward first" (the default) and "forward only".
Forward first exhausts the forwarders you have configured and then attempts
to follow NS records; forward only will only ever use the forwarders.
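In named.conf terms, the two modes look roughly like this (a sketch; the forwarder addresses are placeholders):

```
options {
    // "forward first" is the default when forwarders are configured:
    // try the forwarders, then fall back to following NS records.
    forward first;

    // With "forward only", named never falls back to its own recursion:
    // forward only;

    forwarders { 192.0.2.10; 192.0.2.11; };
};
```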
The delay you are seeing is likely the
Ben,
I seem to recall reading at some point in the past that after X amount of
time, BIND would stop trying to contact servers it figured to be dead (at
least it would stop trying for some amount of time). Is that in fact the
case, and would it eventually come into play here? Any configurable
If a given forwarder is bad, it gets its round-trip time (RTT) set high
and will not be used until that value comes back down via the normal RTT
decay mechanism in BIND. I have not tested the behaviour when all
forwarders are down; my assumption would be that if all are down, they
will all have to be tried before
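As a rough illustration of the selection behaviour described above (a toy model only; the constants and update rule are made up, not BIND's actual algorithm), penalizing a failed forwarder's RTT and letting it decay back might look like:

```python
# Toy model of RTT-based forwarder selection. Illustration only:
# PENALTY_RTT and DECAY are hypothetical, not BIND's real values.
PENALTY_RTT = 10_000.0  # ms; "effectively dead"
DECAY = 0.75            # decay applied to forwarders that were not tried

rtt = {"192.0.2.1": 20.0, "192.0.2.2": 35.0}

def pick():
    """Always try the forwarder with the lowest current RTT estimate."""
    return min(rtt, key=rtt.get)

def record_failure(addr):
    """A bad forwarder gets its RTT set high so it stops being chosen."""
    rtt[addr] = PENALTY_RTT

def query_round():
    """One query: use the best forwarder, decay the estimates of the rest."""
    used = pick()
    for addr in rtt:
        if addr != used:
            rtt[addr] *= DECAY
    return used

record_failure("192.0.2.1")
used = [query_round() for _ in range(20)]
assert all(u == "192.0.2.2" for u in used)  # skipped while its RTT is high
assert query_round() == "192.0.2.1"         # decay lets it be tried again
```

The point of the sketch is just the shape of the mechanism: a failure inflates the estimate, and only gradual decay (not a timer) makes the server eligible again.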
I finally got a chance to dig through the syslogs on one of the
internal name servers, and I'm seeing a lot of these three entries for
various domains. I have to assume that one or all of these items
would also contribute to the lengthy times to resolve queries?
named[16593]: error
If you get the EDNS errors for many or most remote name servers, suspect
your firewall as the culprit. Otherwise, a few of these messages are normal.
You might be able to set the query-source (and other *-source) options to
IPv4 addresses only, to disable the use of IPv6. However, this shouldn't
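For example, pinning the *-source options to an IPv4 address might look like this (a sketch; substitute an address actually configured on the server):

```
options {
    // Bind outgoing queries, zone transfers, and notifies
    // to an IPv4 address so named does not use IPv6.
    query-source address 192.0.2.53;
    transfer-source 192.0.2.53;
    notify-source 192.0.2.53;
};
```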