Are there any models of DNS cache behavior, either analytic or
simulation-based?  What I have in mind is something that would help me
see whether I should partition a cache among various kinds of traffic,
or perhaps limit max TTLs, or experiment with replacement strategies.
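For concreteness, the kind of experiment I mean could be run against a
toy simulator like the sketch below: a bounded cache with LRU eviction
and an optional TTL cap, fed a query trace.  All the names and
parameters here are made up for illustration, not taken from any paper.

```python
from collections import OrderedDict

class DNSCacheSim:
    """Toy DNS cache: bounded size, LRU eviction, optional max-TTL cap.

    A sketch for experimenting with the knobs mentioned above
    (capacity, TTL limits, replacement policy); purely illustrative.
    """

    def __init__(self, capacity, max_ttl=None):
        self.capacity = capacity
        self.max_ttl = max_ttl
        self.store = OrderedDict()  # name -> absolute expiry time
        self.hits = self.misses = 0

    def lookup(self, name, ttl, now):
        """Return True on a cache hit; on a miss, insert the record."""
        exp = self.store.get(name)
        if exp is not None and exp > now:
            self.hits += 1
            self.store.move_to_end(name)  # LRU: mark as recently used
            return True
        self.misses += 1
        if self.max_ttl is not None:
            ttl = min(ttl, self.max_ttl)  # cap the advertised TTL
        self.store[name] = now + ttl
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return False
```

Swapping `popitem(last=False)` for another victim-selection rule, or
keeping separate `DNSCacheSim` instances per traffic class, would cover
the partitioning and replacement-strategy questions above.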

For that matter, what's the state of DNS modelling in general?

I found a 2003 paper by Jung et al. on cache models which starts by
asserting that caches are so big that entries only drop out due to TTL
expiry, does a lot of analysis and simulation, and concludes that
15-minute TTLs get nearly the same cache benefit as 24-hour TTLs.
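That TTL-expiry-only model is easy to play with directly.  Here's a
minimal sketch: an unbounded cache where a hit does not refresh the TTL,
run over a synthetic trace with Zipf-like name popularity.  The trace
generator and all names are my invention, just to make the code
self-contained; it is not a reconstruction of the paper's workload.

```python
import random

def hit_rate(trace, ttl):
    """Hit rate of an unbounded cache where entries leave only when
    their TTL expires (the Jung et al. assumption); a hit does not
    refresh the expiry.  trace is a list of (timestamp, name) pairs."""
    expiry = {}
    hits = 0
    for t, name in trace:
        if expiry.get(name, -1) > t:
            hits += 1
        else:
            expiry[name] = t + ttl  # miss: fetch and cache until t + ttl
    return hits / len(trace)

def zipf_trace(n_queries, n_names, seed=1):
    """Synthetic trace, one query per second, with Zipf-like popularity
    (name i drawn with weight 1/(i+1)).  Purely illustrative; real
    resolver traces have different structure."""
    rng = random.Random(seed)
    names = [f"host{i}.example" for i in range(n_names)]
    weights = [1.0 / (i + 1) for i in range(n_names)]
    return [(t, rng.choices(names, weights)[0]) for t in range(n_queries)]
```

Comparing `hit_rate(trace, 15 * 60)` against `hit_rate(trace, 24 * 3600)`
on the same trace shows how quickly the benefit of longer TTLs
saturates; since a longer TTL can only reduce the per-name miss count
in this model, the 24-hour rate is always at least the 15-minute rate.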

A 2010 paper by Alexiou et al. models the Kaminsky DNS poisoning attack
and the port-randomization fix, which is interesting but not what I'm
looking for.  (They conclude that the attack is real and the fix works
OK.)

Anything else I should be looking at?


-- 
Regards,
John Levine, [email protected], Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
