On Wed, 14 Mar 2007, Douglas Otis wrote:
>
> On Mar 13, 2007, at 11:00 AM, Dean Anderson wrote:
>
> > On Sat, 10 Mar 2007, Douglas Otis wrote:
> >
> >> The higher gain attacks leverage a large RR not normally found in
> >> most authoritative DNS.
> >
> > This assertion isn't true. Several examples were given of common
> > large record types frequently found on authority servers.
>
> A distributed attack requires some number of servers publishing RRs
> large enough to pose a higher gain threat.
No. Just one server on a 10GigE link is probably fine.
If that isn't enough, several hundred anycast root servers on
high-bandwidth links at key points in the Internet ought to do pretty
well.
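Back-of-the-envelope arithmetic (a sketch with assumed, illustrative
numbers, not measurements of any real server):

# Illustrative amplification arithmetic; the packet sizes are assumptions.
query_bytes = 60          # small forged UDP query
response_bytes = 4000     # large RR set (big TXT / DNSSEC-signed answer)

gain = response_bytes / query_bytes
print("amplification factor: about %.0fx" % gain)          # ~67x

# At that gain, filling a 1 Gb/s victim link takes only ~15 Mb/s of
# forged queries aimed at one well-connected server.
needed_bps = 1e9 / gain
print("attacker bandwidth needed: about %.0f Mb/s" % (needed_bps / 1e6))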
> Several servers can be a problem, but not at the same level as with
> tens of thousands.
There aren't tens of thousands of domains with large SPF records?
Really? Others claim SPF is widely deployed, some of them from your
company, I think. Perhaps you should check around the office and get the
'real story' about SPF deployment.
There aren't high-volume servers with large records? Really. That's
amazing.
> Defending against a broadly distributed attack becomes more
> difficult.
> Blocking recursive servers also disrupts services for a
> greater number of valid recipients.
???? This is what I said. I think you must mean blocking during an
attack, in which case one blocks queries from the 'target/victim'. And
the answer to that is no: blocking queries from a 'victim' could harm
only the victim, and only if they are really a user of the recursor. You
don't need to block the recursor from everything. By contrast, blocking
the victim from the roots, or from a large authority server, could be
more disruptive than the attack. My customers want to see CNN. I'm not
going to block CNN because someone is blasting CNN with forged queries
from my nameservers.
> Every open recursive server can become a proxy for the worst of the
> worst RR. One poorly considered RR can then be fodder for tens of
> thousands of open recursive servers. Blocking a problematic
> authoritative server at least directly affects the entity creating the
> problem. Perhaps this could become a self healing problem.
This assumes the authority server has no legitimate purpose in creating
a large DNS record. Obviously, I think, this assumption is false, and
you cannot base an argument on a false assumption.
>
> Once a suitably large RR is found, open recursive servers can become
> tens of thousands of sources for this single RR.
You have to find them first. It's much easier to find tens of thousands
of large DNS records, or better, a few records on some key
high-bandwidth servers. There is a neat list of 13 IP addresses that
will net several hundred high-bandwidth servers that 1) have large
records, and 2) can't be easily blocked by the recipient.
> Which is harder? Finding a distribution of large RRs that greatly
> exceed the MTU, or finding recursive servers? You contend finding
> recursive servers is harder, but attacks seem to demonstrate otherwise.
Yes. That is interesting, isn't it. It's clear now that a certain small
group of people can find recursors more easily than the rest of the
world. Funny coincidence, isn't it.
> > Why would a DOS attacker try to search out recursors, which can be
> > mitigated, when the attacker could launch a much more devastating
> > attack with much less work?
>
> This has been a concern with Sender-ID of course, but in the case of
> open recursive servers, an attempt to mitigate open recursive attacks
> will also be highly disruptive as well. You contend that finding
> open recursive DNS is difficult, and that finding an ample
> distribution of large RRs is comparatively easier. Consider me to be
> skeptical.
>
> > Further, 'closing' the recursors creates additional problems,
> > including opportunity for additional DOS attacks.
>
> Do you mean that when a provider's DNS is taken out of service, then
> finding an alternative will become more difficult?
I mean that many people have statically configured DNS server addresses,
and when they travel to Starbucks with their laptops, they won't be able
to use their home servers if everyone follows this proposed draft.
I mean that when you turn off recursion, you still get a referral to the
authority servers, and that referral can be large, especially if the
query asked for DNSSEC data.
Sometimes, as was noted with a previous SPF problem, 'not recursing' is
worse than recursing.
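To make the referral point concrete, here is a minimal sketch using the
dnspython library; the query name and server address are placeholders,
not a real target:

# Sketch: with recursion not requested (RD cleared), a query still gets a
# referral, and asking for DNSSEC data (DO bit) can make it much larger.
import dns.flags
import dns.message
import dns.query
import dns.rcode

q = dns.message.make_query("example.com", "A", want_dnssec=True)
q.flags &= ~dns.flags.RD                       # do not ask for recursion

resp = dns.query.udp(q, "192.0.2.53", timeout=2)  # placeholder server IP
print("rcode:", dns.rcode.to_text(resp.rcode()))
print("response size:", len(resp.to_wire()), "bytes")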
> > You haven't addressed any of these harms. As I said, your proposed
> > solution is worse than the original problem.
>
> Limiting access to recursive DNS is not my proposed solution.
This draft proposes limiting public access to recursive DNS.
> BCP38 is another solution that comes to mind, but just as with
> limiting access to recursive DNS, BCP38 represents a type of hygiene
> that works best when everyone is diligent.
I agree.
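For anyone who hasn't looked at BCP38, the check a provider is asked to
make at its customer edge is conceptually this simple (a toy sketch,
prefixes made up):

# Toy BCP38-style ingress check: accept a packet from a customer port
# only if its source address falls inside that customer's assigned
# prefixes. The prefix below is an example.
import ipaddress

customer_prefixes = [ipaddress.ip_network("198.51.100.0/24")]

def ingress_ok(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in customer_prefixes)

print(ingress_ok("198.51.100.7"))   # True  - legitimate customer source
print(ingress_ok("203.0.113.9"))    # False - spoofed source, drop it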
Of course, getting people to follow recommendations requires credibility
that has to be based in rationality, reason, honesty and integrity.
Irrational and unreasonable demands will be debunked as such, and that
will harm the credibility needed for other recommendations.
Obviously, if I were the victim of such an attack, I would respond in
exactly the same way as with any other forged-source-IP-address attack:
I would work through my upstreams to filter out this traffic. If it
comes from random recursors, it would be easier to filter than if it
comes from authority or root servers. Abused recursors can be blocked
from sending unwanted responses, while still permitting the wanted
queries, without serious harm to any party [not so with authorities; see
below]. During the attack all responses from those recursors are
unwanted and can be dropped temporarily, but their queries are still
allowed, because we want them to be able to see our names.
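The filter I have in mind is asymmetric, something like this sketch
(the addresses are examples only, not a real firewall configuration):

# Sketch: during the attack, drop DNS *responses* arriving from the
# abused recursors, but keep accepting their *queries* so they can still
# resolve our names. The addresses are examples.
abused_recursors = {"192.0.2.10", "192.0.2.11"}

def allow_packet(src_ip, src_port, qr_bit):
    # qr_bit == 1 means the packet is a DNS response (reflected traffic).
    if src_ip in abused_recursors and src_port == 53 and qr_bit == 1:
        return False        # unwanted reflected response: drop it
    return True             # queries and everything else still welcome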
Obviously, the group of abused recursive servers is easily identified,
even if there are tens of thousands, by the mere fact that no queries
were sent to the servers that provided a response. [again, not so with
key authorities, roots]
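Mechanically, the identification is just set subtraction between our own
query log and the sources of incoming responses (a sketch with made-up
addresses):

# Sketch: a recursor is being abused against us if it sends us responses
# although we never sent it a query. The sets would come from our own
# query log and a packet capture; these addresses are made up.
queried_servers    = {"198.51.100.1", "198.51.100.2"}
responding_servers = {"198.51.100.1", "203.0.113.5", "203.0.113.6"}

abused = responding_servers - queried_servers
print("unsolicited responses came from:", sorted(abused))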
Then it is a matter of working with some of those to identify where the
forged traffic is coming from. This identifies the provider of the
botnet/abuser. Once they are shut down, the whole abuse stops. It is a
myth that with large distributed attacks one needs to stop tens of
thousands of servers. That's nonsense. One typically needs to find and
stop just one botnet operator. The more servers they (ab)use, the easier
it is to find someone who will cooperate in finding them.
If this attack became common with recursors, people would change the IP
addresses of their recursors and then watch for scanning. Large sites
and root operators would also keep a closer eye on who might be logging
this information, and would fire or charge those employees when they are
discovered. If open relay abuse is a guide, then over time the chances
of detecting the scanning rise to close to 100%.
But if this attack became common with authorities or roots, then it
could become quite pernicious, since it would be very difficult to
distinguish the wanted responses from the unwanted responses. The
better positioned the server is for attack, the harder it generally is
to block [try blocking CNN or MSN, or the roots; they cannot be
'helpful' by blocking your IP address at their servers]. This is why
mitigation is difficult or impossible. And the bad guy can do the
scanning without drawing suspicion.
So, you do the game theory: you're the bad guy, which approach do you
choose?
--Dean
--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000