On Jul 3, 2013, at 5:14 PM, Paul Hoffman <[email protected]> wrote:

> On Jul 3, 2013, at 2:01 PM, "John Levine" <[email protected]> wrote:
> 
>>>>>  If yes, message cache elements are prefetched before they expire
>>>>>  to  keep  the  cache  up to date.  Default is no.  Turning it on
>>>>>  gives about 10 percent more traffic and load on the machine, but
>>>>>  popular items do not expire from the cache.
>> 
>>> My question earlier still stands: does Unbound do what HAMMER says
>>> (waits for a request before refreshing the cache) or does it just
>>> refresh the cache automatically? The Unbound doc is unclear (at least
>>> to me).
>> 
>> If it did it automatically, I'd expect a lot more than 10% more
>> traffic.  I enabled it on my not terribly busy server and I see
>> numbers that look like they're in the 1% range:
>> 
>> Jul 03 14:23:21 unbound[40669:0] info: server stats for thread 0: 47798 
>> queries, 20108 answers from cache, 27690 recursions, 242 prefetch
>> Jul 03 15:23:21 unbound[40669:0] info: server stats for thread 0: 53322 
>> queries, 20319 answers from cache, 33003 recursions, 291 prefetch
>> Jul 03 16:23:21 unbound[40669:0] info: server stats for thread 0: 55260 
>> queries, 20697 answers from cache, 34563 recursions, 272 prefetch
>> 
> 
> Those are pictures. Source code, or developer assurance, or it didn't happen 
> ( to badly bastardize a phrase that the kids these days use ).

It *appears* to me from looking at the source that Unbound triggers the 
prefetch upon an incoming query (basically what HAMMER suggests).
I should note that this is the first time I have looked at the Unbound source, 
and so it is entirely possible that I'm missing something.

It seems that worker_handle_request() in worker.c has the interesting bits.
From what I can tell, the worker will only have this sort of request if it was 
initiated by an incoming query.

Horrendous pseudocode seems like:
worker_handle_request()
 if answer_from_cache:
    if prefetch_ttl_expired:
       reply_and_prefetch()


/** Reply to client and perform prefetch to keep cache up to date */
reply_and_prefetch()
  reply()
  /* create the prefetch in the mesh as a normal lookup without
   * client addrs waiting, which has the cache blacklisted (to bypass
   * the cache and go to the network for the data). */
  /* this (potentially) runs the mesh for the new query */
  mesh_new_prefetch(worker->env.mesh, qinfo, flags, leeway +
          PREFETCH_EXPIRY_ADD);


There is also a PREFETCH_EXPIRY_ADD which I don't really understand:

/** 
 * seconds to add to prefetch leeway.  This is a TTL that expires old rrsets
 * earlier than they should in order to put the new update into the cache.
 * This additional value is to make sure that if not all TTLs are equal in
 * the message to be updated(and replaced), that rrsets with up to this much
 * extra TTL are also replaced.  This means that the resulting new message
 * will have (most likely) this TTL at least, avoiding very small 'split
 * second' TTLs due to operators choosing relative primes for TTLs (or so).
 * Also has to be at least one to break ties (and overwrite cached entry).
 */
#define PREFETCH_EXPIRY_ADD 60

I must admit that I'm somewhat shaky on much of the above; it would be great if 
someone who is more familiar with the architecture / code could comment.


W




> 
> _______________________________________________
> DNSOP mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/dnsop
> 

-- 
Do not meddle in the affairs of wizards, for they are subtle and quick to 
anger.  
    -- J.R.R. Tolkien

