On 15Nov21, Hugo Salgado allegedly wrote:
> > What is the technical or other reason(s) for such TTL limiting?

> There are risks with excessively long TTLs; for example, they are used as
> a technique when hijacking or poisoning a domain, to keep the fake record

It could be those reasons, or it might simply be that they are exposing a
side-effect of their cache eviction implementation.

Someone like 8.8.8.8, which at my last reckoning was getting something like
8% of global queries, probably has fairly substantial memory demands, even
with their selective support of ECS.

The interesting thing about DNS caches is that the metadata can easily swamp
the actual cached data, so most sophisticated attempts to manage the cache
probably cost more memory than they are worth.
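To illustrate the point with some back-of-envelope arithmetic (all the
per-field sizes below are my assumptions for a hypothetical resolver, not
measurements of any real implementation): the bookkeeping for a cached A
record can easily exceed the record itself.

```python
# Back-of-envelope sketch: per-entry cache metadata vs. the data cached.
# Every number here is an assumption for illustration only.
RDATA_A = 4            # an A record's rdata: one IPv4 address
NAME_AVG = 20          # assumed average owner-name length in bytes

METADATA = (
    8 * 2              # LRU prev/next pointers
    + 8                # hash-chain pointer
    + 8                # expiry timestamp
    + 4                # hit counter
    + 4                # flags / fetch-cost estimate
)

overhead_ratio = METADATA / (RDATA_A + NAME_AVG)
print(f"metadata is {overhead_ratio:.1f}x the cached data")  # → 1.7x
```

Even with these modest assumptions the "sophisticated" metadata outweighs the
entry it manages, and every extra field you add for smarter eviction makes the
ratio worse.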

With too much time on your hands you could implement a pretty fancy-pants
cache system with all sorts of memory-heavy metadata around LRU, hit rates,
fetch costs, poison risks and so on. Alternatively, you could go for the
absolute minimalist memory-cost system: a dumb global memory-pressure knob,
expressed as a modified TTL that reflects the eviction decision.
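A minimal sketch of that knob (purely hypothetical, not a claim about what
8.8.8.8 actually does; the function name, the 6-hour default cap, and the
linear pressure scaling are all my inventions):

```python
# Hypothetical minimalist eviction policy: no per-entry metadata at all.
# One global memory-pressure value shrinks the TTL cap, so entries simply
# age out of the cache sooner when memory is tight.

def effective_ttl(auth_ttl: int, memory_pressure: float,
                  max_cap: int = 21600) -> int:
    """Return the TTL to expose to clients.

    auth_ttl        -- TTL supplied by the authoritative server
    memory_pressure -- 0.0 (cache idle) .. 1.0 (cache full); assumed scale
    max_cap         -- cap at zero pressure (21600s = 6h, arbitrary choice)

    The exposed TTL never exceeds the auth-supplied value.
    """
    cap = max(1, int(max_cap * (1.0 - memory_pressure)))
    return min(auth_ttl, cap)

print(effective_ttl(86400, 0.0))  # day-long auth TTL capped to 21600
print(effective_ttl(86400, 0.5))  # pressure halves the cap: 10800
print(effective_ttl(30, 0.9))     # short auth TTL passes through: 30
```

The appeal is exactly the memory math above: the entire "policy" is one float
shared by every entry, rather than per-entry LRU links and counters.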

In the 8.8.8.8 case, I'd probably be inclined towards the latter.

I note that 8.8.8.8 preserves differing TTLs on RRsets, so they haven't gone
completely gonzo on memory minimization, so who knows?

In any event, as long as the exposed TTL is not greater than the
auth-supplied value, it seems like an innocuous possibility to me.


Mark.
_______________________________________________
dns-operations mailing list
[email protected]
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
