We recently ran a test to see how our internal servers would react to a
loss of their external peers, the goal being that the internal servers
would switch from forwarding to performing recursive queries for clients
themselves. Normally, the internal servers forward to the external servers. To
simulate
If you have a global forwarder in place, there are two options that affect
its use: forward first (the default) and forward only.
Forward first exhausts the forwarders you have configured and then attempts to
follow NS records itself. Forward only uses only the forwarders.
The delay you are seeing is likely the
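For reference, the two modes described above are selected with the `forward` statement in named.conf. A minimal sketch, with placeholder addresses and zone names:

```
options {
    // "first" (the default): try each forwarder, then fall back to
    // iterative resolution by following NS records.
    forward first;
    forwarders { 192.0.2.1; 192.0.2.2; };
};

zone "example.internal" {
    type forward;
    // "only": never fall back; if all forwarders fail, the query fails.
    forward only;
    forwarders { 192.0.2.3; };
};
```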
Ben,
I seem to recall reading at some point in the past that after X amount of
time, BIND would stop trying to contact servers it considered dead (at
least it would stop trying for some period). Is that in fact the
case, and would it eventually come into play here? Any configurable
If a given forwarder is bad, it gets its round-trip time (RTT) set high
and will not be used until that value comes back down via the normal RTT decay
mechanism in BIND. I have not tested the behaviour when all are down. My
assumption would be that if all are down, they will all have to be tried
before
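BIND's actual server-selection bookkeeping differs in detail, but the penalty-and-decay idea described above can be sketched as follows; the class, constants, and decay factor here are all illustrative, not BIND's:

```python
class Forwarder:
    def __init__(self, addr):
        self.addr = addr
        self.rtt = 0.0  # smoothed round-trip-time estimate, in ms

def record_success(fwd, measured_ms, alpha=0.3):
    # Blend a new measurement into the smoothed RTT estimate.
    fwd.rtt = (1 - alpha) * fwd.rtt + alpha * measured_ms

def record_timeout(fwd, penalty_ms=10_000):
    # A dead forwarder gets its RTT set very high so it sorts last.
    fwd.rtt = penalty_ms

def decay(forwarders, factor=0.98):
    # Periodic decay lets a penalized forwarder drift back into use.
    for f in forwarders:
        f.rtt *= factor

def pick(forwarders):
    # Prefer the lowest smoothed RTT. If every forwarder has been
    # penalized, the least-bad one is still returned, so with all
    # forwarders down they each end up being tried eventually.
    return min(forwarders, key=lambda f: f.rtt)
```

A forwarder that times out sorts last until repeated decay passes bring its estimate back down, at which point it is retried.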
Hi,
I seem to have hit the same issue on BIND 9.7.3.
=== [Test environment] ===
- The affected system is a cache server. It has no zone for which it
can respond as a master server.
- A server that receives a recursive query resolves it iteratively,
from the root servers down to the last server in
I finally got a chance to dig through the syslogs on one of the
internal name servers, and I'm seeing a lot of these three entries for
various domains. I have to assume that one or all of these items
would also contribute to the lengthy times to resolve queries?
named[16593]: error
Hello,
I have DNSSEC validation running on a caching name server, and it is working
fine. In addition, I have tried to add an entry in named.conf to forward
lookups for a local Active Directory domain name used for testing purposes, so
we can easily resolve the handful of servers in this
On 01/11/11 16:14, vinny_abe...@dell.com wrote:
resolution fail since NXDOMAIN is the valid answer... done, end of
story. I thought the forwarder type would bypass this but apparently
I am wrong. Is there some other way to handle this for non-existent
domains just for testing purposes?
Don't
If you get the EDNS errors for many or most remote name servers, look to your
firewall as a suspected culprit. Otherwise, a few of these messages are normal.
You might be able to set query-source (and other *-source) options to just IPv4
addresses to disable use of IPv6. However, this shouldn't
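The *-source options mentioned above look roughly like this in named.conf; the addresses are placeholders:

```
options {
    // Pin outgoing queries (and transfers/notifies) to an IPv4 source
    // address so named does not attempt IPv6 transport.
    query-source address 192.0.2.10;
    transfer-source 192.0.2.10;
    notify-source 192.0.2.10;
};
```

Running named with the -4 command-line option is another way to disable IPv6 transport entirely.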
Hi Phil,
Thanks, but I can't control the domain in question, unfortunately. It is
what it is; we have to work with it. I completely understand why this doesn't work
and actually agree with the design, but I just don't have a workaround or
way to force forwarders for this domain with dnssec
There have been discussions in the past over this, but we were once again
bitten by this dnssec-signzone bug:
Tue Nov 1 12:11:28 2011 signDomain: sign command: /usr/sbin/dnssec-signzone -C
-u -r /dev/random -t -o openswan.org -f /var/tmp/openswan.org.sign.tmp -i
1296000 -e +2592000 -j
On 11/01/2011 06:24 PM, Lyle Giese wrote:
A workaround (be aware that it has side effects and could be
undesirable) is to declare .internal
as a master zone on your DNS servers and then delegate
policydomain.internal to your Windows AD servers in
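A sketch of that workaround in named.conf, with placeholder file names; the delegation records themselves go in the zone file:

```
// Serve "internal" locally; being locally authoritative and unsigned,
// names under it are not validated via the root chain of trust.
zone "internal" {
    type master;
    file "master/internal.db";
};
```

Inside master/internal.db you would then add NS records (and glue A records) delegating policydomain.internal to the Windows AD servers.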
On 11/01/2011 06:34 PM, Scott Morizot wrote:
Alternatively, you can sign 'policydomain.internal' and configure its key
as one of the trust anchors on the validating name servers. The order of
validation is, if I recall correctly, locally configured trust anchors,
then chain of trust from root,
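In BIND 9.7-era syntax, the locally configured trust anchor Scott describes would be a trusted-keys entry; the key data shown here is a hypothetical placeholder that would come from the signed zone's own DNSKEY (KSK) record:

```
trusted-keys {
    // flags 257 = KSK; protocol 3; algorithm 8 = RSA/SHA-256
    policydomain.internal. 257 3 8 "AwEAAc...base64-key-data...";
};
```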
On Tue, 1 Nov 2011, Paul Wouters wrote:
There have been discussions in the past over this, but we were once again
bitten by this dnssec-signzone bug:
Tue Nov 1 12:11:28 2011 signDomain: sign command: /usr/sbin/dnssec-signzone
-C -u -r /dev/random -t -o openswan.org -f
Please ignore. Internal test from ISC.
-Dan Mahoney
ISC Operations
bind-users mailing list
bind-users@lists.isc.org