On Mar 13, 2007, at 11:00 AM, Dean Anderson wrote:
On Sat, 10 Mar 2007, Douglas Otis wrote:
The higher-gain attacks leverage a large RR of a type not normally found in most authoritative DNS zones.
This assertion isn't true. Several examples were given of common
large record types frequently found on authority servers.
A distributed attack requires some number of servers publishing RRs
large enough to pose a higher gain threat. Several servers can be a
problem, but not at the same level as with tens of thousands.
Defending against a broadly distributed attack becomes more
difficult. Blocking recursive servers also disrupts services for a
greater number of valid recipients. Every open recursive server can
become a proxy for the worst of the worst RR. One poorly considered
RR can then be fodder for tens of thousands of open recursive
servers. Blocking a problematic authoritative server at least
directly affects the entity creating the problem. Perhaps this could
become a self-healing problem.
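To make the "gain" being argued over concrete, here is a minimal sketch of the per-query amplification arithmetic. The byte counts are hypothetical illustrations, not measurements from any real server:

```python
# Hypothetical sketch of reflection "gain": a small spoofed query
# elicits a much larger response, so each reflector multiplies the
# attacker's bandwidth. The sizes below are illustrative only.

def amplification_gain(query_bytes: int, response_bytes: int) -> float:
    """Ratio of reflected response size to spoofed query size."""
    return response_bytes / query_bytes

# E.g. a ~60-byte EDNS0 query drawing a ~4000-byte TXT/SPF answer:
gain = amplification_gain(60, 4000)
print(f"per-query gain: {gain:.1f}x")
```

The larger the RR relative to the query, the higher this ratio, which is why oversized records are the point of contention here.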
Recursive servers can be easily located without any scanning
Really? How?
Bad actors also run authoritative servers and pay attention. :(
and then asked to obtain an answer referencing problematic RRs
(which may be poorly considered SPF RRs).
RRs which come from authority servers. If you are the attacker, why
not simply hit the authority servers? Why not use the root servers
in the attack?
Once a suitably large RR is found, open recursive servers can become
tens of thousands of sources for this single RR.
Preventing access to recursive servers significantly reduces the
possible sources relied upon by a distributed mode of attack.
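The fan-out described above can be sketched with some rough arithmetic. All figures here are assumptions chosen for illustration, not observed attack data:

```python
# Rough, illustrative arithmetic for one oversized RR cached by
# tens of thousands of open recursive servers. Every number below
# is an assumption for the sketch.

def aggregate_attack_rate(reflectors: int, queries_per_sec: int,
                          response_bytes: int) -> float:
    """Victim-facing traffic in bits per second."""
    return reflectors * queries_per_sec * response_bytes * 8

# 20,000 open resolvers, 10 spoofed queries/sec each, ~4 KB answers:
bps = aggregate_attack_rate(20_000, 10, 4096)
print(f"{bps / 1e9:.1f} Gbit/s toward the victim")
```

Note that each individual reflector contributes only a trickle, which is part of why the per-server traffic looks unremarkable.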
Your claim here restates your premise without proof. Repeating your
claim does not make it true. I have shown that it is far easier to
find a very large number of authority servers than it is to scan
for recursors.
Which is harder? Finding a distribution of large RRs that greatly
exceed the MTU, or finding recursive servers? You contend finding
recursive servers is harder, but attacks seem to demonstrate otherwise.
There are many authority servers with large records, including the
root servers, that have access to very high bandwidth connections,
making them ideal for use in a DOS attack.
An attack may become apparent when the query rate for a specific RR
greatly exceeds the norms. Many of the reflected attacks using open
recursion did not produce any statistically significant effect except
for the victim. Apparent normality makes devising a defensive
strategy extremely difficult.
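The detection idea above could be sketched as a simple rate check: flag a record whose query rate greatly exceeds its historical norm. The baseline counts and the 3-sigma threshold are hypothetical; a real resolver would sample continuously:

```python
# Minimal sketch: flag one RR's query rate when it far exceeds its
# historical norm. Baseline numbers and threshold are hypothetical.
from statistics import mean, stdev

def rate_is_anomalous(history: list[int], current: int,
                      k: float = 3.0) -> bool:
    """True when the current rate is more than k standard
    deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return current > mu + k * max(sigma, 1.0)

baseline = [12, 9, 15, 11, 10, 14, 13]   # queries/min for one RR
print(rate_is_anomalous(baseline, 13))    # ordinary rate -> False
print(rate_is_anomalous(baseline, 500))   # reflected spike -> True
```

The catch, as noted above, is that a distributed attack spreads the extra queries across many resolvers, so no single server may cross any such threshold.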
Why would a DOS attacker try to search out recursors, which can be
mitigated, when the attacker could launch a much more devastating
attack with much less work?
This has been a concern with Sender-ID of course, but in the case of
open recursive servers, an attempt to mitigate open recursive attacks
will be highly disruptive as well. You contend that finding open
recursive DNS is difficult, and that finding an ample distribution of
large RRs is comparatively easier. Consider me skeptical.
Further, 'closing' the recursors creates additional problems,
including opportunity for additional DOS attacks.
Do you mean that when a provider's DNS is taken out of service, then
finding an alternative will become more difficult?
You haven't addressed any of these harms. As I said, your proposed
solution is worse than the original problem.
Limiting access to recursive DNS is not my proposed solution. BCP38
is another solution that comes to mind, but just as with limiting
access to recursive DNS, BCP38 represents a type of hygiene that
works best when everyone is diligent.
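For readers unfamiliar with BCP38, a toy illustration of the idea: an edge router refuses packets whose source address falls outside the customer's assigned prefix, so spoofed queries never reach a reflector in the first place. The prefix and addresses below are documentation examples, not real assignments:

```python
# Toy illustration of BCP38-style ingress filtering: drop packets
# whose source address is outside the customer's assigned prefix.
# The prefix and addresses are made up for the example.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("198.51.100.0/24")

def permit_ingress(src_ip: str) -> bool:
    """Accept only packets sourced from the customer's own prefix."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

print(permit_ingress("198.51.100.7"))   # legitimate source
print(permit_ingress("203.0.113.9"))    # spoofed source, dropped
```

As with closing open recursion, the filter only helps at the networks that actually deploy it, which is the "hygiene" point above.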
-Doug
_______________________________________________
DNSOP mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/dnsop