Re: pre heat cache

2016-02-18 Thread Tony Finch
William Taylor wrote:
> Is there any way to pre-heat the cache in BIND on startup besides
> having a custom script that does a bunch of queries on top hosts?

Funnily enough I recently wrote a tool to do this but I have been
failing to publish a blog article about it...
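In the meantime, the quick-and-dirty script approach William describes
can be as small as a loop over dig. A minimal sketch, assuming a
resolver listening on 127.0.0.1 and a hypothetical topnames.txt with
one name per line (this is the baseline such a tool replaces, not how
Tony's tool works):

    #!/bin/sh
    # Warm the resolver's cache by querying a list of popular names.
    # topnames.txt and the resolver address are placeholders.
    while read -r name; do
        dig @127.0.0.1 "$name" A    +tries=1 +time=2 >/dev/null
        dig @127.0.0.1 "$name" AAAA +tries=1 +time=2 >/dev/null
    done < topnames.txt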

Re: pre heat cache

2016-02-18 Thread Tony Finch
Tony Finch wrote:
> Funnily enough I recently wrote a tool to do this but I have been
> failing to publish a blog article about it... Have a look at this:
> https://git.csx.cam.ac.uk/x/ucs/ipreg/adns-masterfile.git

Longer blurb now published at:

Tuning for lots of SERVFAIL responses

2016-02-18 Thread John Miller
A couple of weeks ago, we experienced an outage on our external Internet links. Ideally, this shouldn't affect queries for internal resources - we expect those queries to continue to be answered. That being said, we saw a bunch of messages in our logs such as: client 192.168.1.2#56075: no more

Re: pre heat cache

2016-02-18 Thread Robert Edmonds
Tony Finch wrote:
> Tony Finch wrote:
> > Funnily enough I recently wrote a tool to do this but I have been
> > failing to publish a blog article about it... Have a look at this:
> > https://git.csx.cam.ac.uk/x/ucs/ipreg/adns-masterfile.git
>
> Longer blurb now published at:

Re: pre heat cache

2016-02-18 Thread Tony Finch
> On 18 Feb 2016, at 18:59, Robert Edmonds wrote:
>
> A large proportion of records are only ever used "once, or a handful
> of times", according to researchers: [...]

Yes, and there's an amazing amount of crap in the cache too. About 14%
of our cache is weird Sonicwall

ZSK rollover detail needed.

2016-02-18 Thread Thomas Schulz
A recommended way to set up a ZSK rollover is to set the inactive date of the current key one month later than the publish date of the replacement key. This makes sense as the RRSIG records are created to last one month from their creation date. Now if I try to speed up the ZSK rollover to make
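For readers following along, the knobs involved here are the RRSIG
lifetime (sig-validity-interval in named.conf, 30 days by default) and
the key timing metadata set with dnssec-settime. A sketch of
compressing the schedule, with purely illustrative key names and dates
rather than Thomas's actual setup:

    # Create a successor key; its publish/activate dates are derived
    # from the predecessor's inactive date.
    dnssec-keygen -S Kexample.com.+008+12345

    # Pull the current ZSK's inactive and delete dates forward to
    # shorten the rollover. The delete date must still allow every
    # RRSIG made with this key (up to 30 days after it goes
    # inactive) to expire first.
    dnssec-settime -I now+7d -D now+40d Kexample.com.+008+12345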

Re: pre heat cache

2016-02-18 Thread Mark Andrews
In message <20160218185957.ga2...@mycre.ws>, Robert Edmonds writes:
> Tony Finch wrote:
> > Tony Finch wrote:
> > > Funnily enough I recently wrote a tool to do this but I have been
> > > failing to publish a blog article about it... Have a look at this:

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Tony Finch
John Miller wrote:
> A couple of weeks ago, we experienced an outage on our external
> Internet links. Ideally, this shouldn't affect queries for internal
> resources - we expect those queries to continue to be answered.

We've had a few connectivity losses over the last

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread John Miller
Thanks for the reply, Tony. With the recent glibc bug, I figured most
folks would be off putting out those fires!

On Thu, Feb 18, 2016 at 3:04 PM, Tony Finch wrote:
> John Miller wrote:
>> A couple of weeks ago, we experienced an outage on our external

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Mark Andrews
In message

RE: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Darcy Kevin (FCA)
Ah, so "recursive-clients" is the quota of queries that require named to recurse to get the answer, right? I was going to respond with the same advice -- slave your internal zones -- but then I somehow convinced myself that "recursive-clients" was merely the quota of concurrent RD=1 queries

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Mark Andrews
In message <686c619a5c4e4bcabdc6cfaf1e27d...@mxph4chrw.fgremc.it>,
"Darcy Kevin (FCA)" writes:
> Ah, so "recursive-clients" is the quota of queries that require named
> to recurse to get the answer, right?

Yes.

> I was going to respond with the same advice -- slave your internal
> zones --

Re: ZSK rollover detail needed.

2016-02-18 Thread Mark Andrews
In message <201602181942.u1ijgrkf001...@dolphin.adi.com>, Thomas Schulz
writes:
> A recommended way to set up a ZSK rollover is to set the inactive date
> of the current key one month later than the publish date of the
> replacement key. This makes sense as the RRSIG records are created to
> last

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Dave Warren
On 2016-02-18 14:06, Mark Andrews wrote:
> For some reason people are afraid to slave internal zones. Back when
> I was working for CSIRO I used to slave all the internal zones for
> all of the sites the division had. Each site administered its own
> zones but all sites slaved all of them. That way
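For anyone wary of it, slaving a zone is only a few lines of
named.conf. A sketch with placeholder zone name and master addresses
(BIND 9 syntax of the era):

    // Hold a local copy of each internal zone so answers keep
    // flowing even when connectivity to the masters is lost.
    zone "internal.example" {
        type slave;
        masters { 192.0.2.1; 192.0.2.2; };
        file "slaves/internal.example.db";
    };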

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread John Miller
On Thu, Feb 18, 2016 at 5:06 PM, Mark Andrews wrote:
> For some reason people are afraid to slave internal zones. Back
> when I was working for CSIRO I used to slave all the internal zones
> for all of the sites the division had. Each site administered its
> own zones but all

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Tony Finch
John Miller wrote:
> Thanks for the reply, Tony. With the recent glibc bug, I figured most
> folks would be off putting out those fires!

If they haven't done it by now then, gosh, I feel sorry for them. (It's
SO NICE to have a redundant service that you can patch and

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread John Miller
>> I was going to respond with the same advice -- slave your internal
>> zones -- but then I somehow convinced myself that "recursive-clients"
>> was merely the quota of concurrent RD=1 queries that named would
>> handle, thus slaving wouldn't help in a network-outage situation,
>> since named

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Mark Andrews
In message , John Miller writes:
> On Thu, Feb 18, 2016 at 5:06 PM, Mark Andrews wrote:
> > For some reason people are afraid to slave internal zones. Back
> > when I was working for CSIRO I used to slave all

Re: Tuning for lots of SERVFAIL responses

2016-02-18 Thread Mark Andrews
In message , John Miller writes:
> >> I was going to respond with the same advice -- slave your internal
> >> zones -- but then I somehow convinced myself that
> >> "recursive-clients" was merely the quota of concurrent