On 12/1/2018 4:41 AM, Hugo Connery wrote:
....
> Recursive resolvers serve a wide variety of stub resolvers
> over varying operating systems from some network(s), which the
> operator gets to choose, but the resolvers need to be within those
> network(s).  Some companies choose to offer recursive resolvers
> that serve the entire internet (8.8.8.8, 1.1.1.1 etc.), but these
> are currently the tiniest minority of recursive resolvers (*),
> though this is likely to change with "recurser as a service".
> The consequence of total failure of these resolver services for
> the community within those network(s) is "the internet does
> not work" (for them).

The non-ISP resolvers like 8.8.8.8 or 1.1.1.1 are actually much more
than "the tiniest minority". When we measure that, we find that they
serve at least a quarter, and maybe a third, of Internet users
worldwide.

....

> One can identify recursive resolvers because they will honour
> your TTL values and will be a sparse collection of addresses
> within network(s).  

You wish. We actually track that as part of ICANN's "Identifier
Technology Health Indicators" (ITHI). The results are available at
https://ithi.research.icann.org/graph-m5.html. (The data are collected
by APNIC.) We can determine that 30% of resolvers re-fetch according to
TTL, 16% don't, and 54% do their own thing that we cannot really
categorize. Plus, 11% of resolvers "auto-refresh" their caches, i.e. do
their own queries without waiting for user traffic; these 11% of
resolvers serve 63% of Internet users.
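
For illustration, here is a rough sketch of the kind of per-source
classification one could attempt from an authoritative server's query
log, using only re-query timing against the TTL that was served. This
is not the ITHI/APNIC methodology; classify_source() and its
thresholds are made up for the example.

    # Hypothetical classifier (Python): given the times at which one
    # source address re-queried a single cached name, and the TTL we
    # served for it, guess which bucket the source falls into.
    # Thresholds are arbitrary, for illustration only.
    def classify_source(query_times, ttl, tolerance=0.15):
        if len(query_times) < 3:
            return "not enough data"
        gaps = [t2 - t1 for t1, t2 in zip(query_times, query_times[1:])]
        near_ttl = sum(1 for g in gaps if abs(g - ttl) <= tolerance * ttl)
        early = sum(1 for g in gaps if g < (1 - tolerance) * ttl)
        if near_ttl / len(gaps) > 0.8:
            return "re-fetches per TTL"        # roughly the 30% bucket
        if early / len(gaps) > 0.8:
            return "refreshes early / auto-refresh"
        return "uncategorized"                 # roughly the 54% bucket

    # A 300-second TTL with re-queries arriving about every 300 seconds:
    print(classify_source([0, 301, 600, 902, 1203], ttl=300))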

As for the number of resolvers, there are a few big ones, but there is
also a very long tail. We measure that the top 10,000 resolvers serve
92% of users. We see about 20,000 more after them, each serving only a
few users, and then there is of course another part of the long tail
that we don't see.

Looking at root servers produces similar data, and then some. Root
servers don't just see "regular" traffic; they also attract lots of
random sources.

> This is not a perfect protection because
> recursers can "go rogue" (cracked) or deliberately behave nicely
> for a while (planned attack), but it should help with DDOS type
> attacks that are not well planned.  Similarly, you can grade
> your deliberate refusal of service to network(s) when under attack
> from them.  Refuse service to all not already known recursers
> for a while before refusing service to the entire network(s)
> if the initial service restriction doesn't help.  But, if that's
> the case, you have a good idea who is attacking you from those
> network(s).

Maybe. The numbers above give an idea of the size of the field. Big
servers can probably maintain a list of the top 10,000 plus the next
20,000 or so. For small servers, that will be harder, because they will
only see a fraction of the resolvers on a given day, and the fraction
they see could very well change from day to day. It would be nice if
whatever we do did not become yet another reason to concentrate all
the services on a few big platforms...
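
Just to make the graded refusal concrete, here is a minimal sketch of
what an authoritative server's policy could look like, assuming the
operator has some list of previously seen resolver addresses. The
class name, the plain Python sets, and the two-step escalation are all
assumptions for the sketch, not a recommended implementation.

    # Sketch of graded refusal of service (Python).  "known" holds
    # resolver addresses seen before the attack; "restricted" prefixes
    # get answers only for known resolvers; "blocked" prefixes get none.
    class GradedRefusal:
        def __init__(self, known_resolvers):
            self.known = set(known_resolvers)
            self.restricted = set()
            self.blocked = set()

        def should_answer(self, addr, prefix):
            if prefix in self.blocked:
                return False
            if prefix in self.restricted:
                return addr in self.known
            return True

        def escalate(self, prefix):
            # First step: serve only already-known resolvers in the prefix.
            # Second step: if the load does not drop, refuse the whole
            # prefix -- which also tells you the attack came through
            # addresses you had classified as resolvers.
            if prefix in self.restricted:
                self.restricted.discard(prefix)
                self.blocked.add(prefix)
            else:
                self.restricted.add(prefix)

    policy = GradedRefusal(known_resolvers={"192.0.2.53"})
    policy.escalate("192.0.2.0/24")
    print(policy.should_answer("192.0.2.53", "192.0.2.0/24"))  # True
    print(policy.should_answer("192.0.2.99", "192.0.2.0/24"))  # False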

-- Christian Huitema


