On Thu, 15 Mar 2007, Douglas Otis wrote:
> On Wed, 2007-03-14 at 22:36 -0400, Dean Anderson wrote:
> > On Wed, 14 Mar 2007, Douglas Otis wrote:
> > >
> > > A distributed attack requires some number of servers publishing RRs
> > > large enough to pose a higher gain threat.
> >
> > No. Just one on a 10GigE is probably just fine.
>
> This is neglecting the basic concern. There are large networks
> everywhere. A single attack source at a lower gain would be much
> easier to squelch.
Sure. And there are authority servers on those networks with large
records and large bandwidth. My point here is that a lot of servers
aren't necessary for an attack if you have a few with big bandwidth.
Such an attack is easier to conduct because the abuser doesn't have to
look very far to find the aggregate bandwidth needed.
> Sender-ID is not a matter of record deployment, but rather the number
> of servers processing the scripts.
Umm, '_a_ problem with sender-id involves the processing of scripts'.
This isn't the only problem, but the relevant issue here is the size of
the DNS record.
> BCP38 and ACLs prevent source-spoofed reflected attacks.
I agree, this is the solution that best addresses the problem.
> However, these precautions will not be effective against
> Sender-ID exploits that obtain much higher gain measured against spam
> traffic. Of course they would be spamming anyway. Sender-ID actually
> provides a virtually free method of attack!
Agreed. I was against SPF several years ago, too. sigh.
> A lower level of distribution facilitates squelching a persistent
> attack.
This is true _only_ if they can be squelched, and only if the
'squelching' isn't worse than the attack. For example, blocking the
victim's nameservers from the roots is worse, since now the victim has
no DNS at all.
> Yes. Squelching queries entering a network from resolvers serving a
> large population of users increases the number of victims caused by a
> DoS attack. On the other hand, if CNN published a large RR and then
> allowed this RR to support a high gain reflected attack, squelching
> traffic from this server could help mitigate an attack with less
> collateral damage.
Not if my customers want to get to CNN. That's worse: that's a total
outage for CNN. I don't think you appreciate that the effects of
mitigation may be worse than the attack itself.
> > You have to find them first. Its much easier to find tens of thousands
> > of large DNS records, or better, a few records on some very key high
> > bandwidth servers. There is a neat list of 13 IP addresses that will
> > net several hundred high bandwidth servers that 1) have large records,
> > and 2) can't be easily blocked by the recipient.
>
> What gain is obtained in this scenario?
Experiment for yourself: block the root servers. See how well DNS works
afterwards. How do you find, say, .com without the roots? Load it into
your cache??? Now do the same for all second-level domains. Suppose the
.com servers are the ones being abused? Now try blocking all of .com.
The mitigation is worse than the attack. For the attacker:
- no searching is necessary,
- the several hundred root copies have high bandwidth,
- the mitigation of blocking roots is worse than the attack,
- the victim has to sort wanted responses from unwanted responses.
> > > Which is harder? Finding a distribution of large RRs that greatly
> > > exceed the MTU, or finding recursive servers? You contend finding
> > > recursive servers is harder, but attacks seem to demonstrate otherwise.
> >
> > Yes. That is interesting, isn't it? It's clear now that a certain small
> > group of people can find recursors more easily than the rest of the
> > world. Funny coincidence, isn't it?
>
> A recursive server for a large population of users simply stands out by
> the level of requests being made. Examine a DNS log. It is not a
> mystery.
I understand the 'how' of getting the information. The coincidence is
the point here. The logs are from authority servers, which get requests
from recursors. The authority servers that talk to a large number of
recursors are run by large ISPs, root server operators, and secondary
domain operators. Those authority servers are administered by a finite
group of administrators. These administrators have easy access to the
identification of large numbers of recursors. The rest of the world
doesn't have this access. For the small group that does have access,
though, abusing recursors is quite easy. For the rest, it's quite hard.
The strange coincidence is that the actual abuse so far has involved
recursors, not authority servers, which suggests the abuser(s) works for
a fairly large ISP or a root server operator, or a secondary level
domain operator or maybe a cctld operator, and has used logs of their
servers to obtain the list of recursors. While seemingly broad, this
probably represents a small number of administrators---probably no more
than several hundred people.
> > > Do you mean that when a provider's DNS is taken out of service, then
> > > finding an alternative will become more difficult?
> >
> > I mean that many people have statically configured DNS addresses, and
> > when they travel to Starbucks with their laptop, they won't be able to
> > use their home servers if everyone follows this proposed draft.
>
> This would not be a default setting.
It is the default if you don't use DHCP.
> They should also know how to re-adjust the settings. Tunneling over
> SSH via port 443 also thwarts poisoning that might be attempted and
> this port is never blocked.
Most people don't do that. 'Tunneling' as a solution to various
problems has long been suggested, and long been rejected as a
non-starter. Only a small percentage of internet users even know
what SSH is.
> > I mean that when you turn off recursion, you still get a referral to
> > authority servers, which can be large, especially if the query asked it
> > to be DNSSEC signed.
>
> Some of the reflected attacks used several KB RRs.
So? The math of an attack is not limited to RR size alone:
number of queries times response size equals attack volume.
A large response size is not the only factor. The response has to be
large enough, and the servers also commonly found. Closed recursors
still give an amplified response, and would still be commonly found.
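The arithmetic can be sketched quickly; all the figures below are
illustrative assumptions on my part, not measurements of any real
attack:

```python
# Back-of-the-envelope DNS amplification arithmetic.
# Every size and rate here is an illustrative assumption.

def attack_volume(queries_per_sec, response_bytes):
    """Attack volume in bytes/sec: queries times response size."""
    return queries_per_sec * response_bytes

def gain(response_bytes, query_bytes):
    """Amplification gain: response size over query size."""
    return response_bytes / query_bytes

QUERY = 60        # small UDP query, bytes (assumed)
BIG_RR = 4000     # large SPF/DNSSEC-style response, bytes (assumed)
REFERRAL = 500    # referral from a closed (non-recursing) server, bytes (assumed)

# A closed server's referral is smaller than a big RR, but the gain is
# still well above 1, and such servers are far more commonly found.
print(gain(BIG_RR, QUERY))              # roughly 66x
print(gain(REFERRAL, QUERY))            # roughly 8x
print(attack_volume(10_000, REFERRAL))  # 5,000,000 bytes/sec
```

The point the sketch makes is that even the lower-gain referral case
produces substantial volume once the query rate and the number of
reflectors are factored in.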
> > Sometimes, as was noted with another previous SPF problem, the 'not
> > recursing' is worse than the recursion.
>
> I am guessing, but I suspect this depends upon what is being measured.
I'm referring to
http://www1.ietf.org/mail-archive/web/dnsop/current/msg04978.html
=======================
> Victim servers would each answer that they are non-authoritative but
> would have to include large domain (imagine big.bad.example.com being
> close to maximum DNS label size)
No, only one server would answer. That server either recurses (remember
the discussion about "reflectors are evil"), with the right answer (from
cache, even), or else it gives a (relatively larger) response containing
authority records directing to the nameservers that are authoritative
for the domain.
[Gee, it looks like reflectors are actually "good". I wish I had put
this in my opposition to draft-ietf-dnsop-reflectors-are-evil-02.txt]
=======================
> > If it comes from random recursors, it would be easier to filter than
> > if it comes from authority or root servers.
>
> When an attack is coming from thousands of open recursive servers all
> attempting to forward some multi-KB fragmented RR, the problem is not
> easily squelched. One might limit which of the roots are allowed
> transit in either direction. That could limit the gain of the attack
> to nil.
Sure it is. They are sending responses. It is easy for upstream to
filter DNS response packets from a list of recursors.
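The filter being described is simple, because responses and queries are
distinguishable by port. A minimal sketch, assuming the victim has
supplied a list of abused recursor addresses (the addresses and the
packet representation here are hypothetical, drawn from the RFC 5737
documentation ranges):

```python
# Hypothetical upstream filter: drop DNS *responses* (UDP source port 53)
# arriving from a victim-supplied list of abused recursors, while still
# allowing queries *to* those addresses (destination port 53), so the
# recursors can keep resolving the victim's names.

ABUSED_RECURSORS = {"192.0.2.10", "198.51.100.7"}  # example list

def allow(packet):
    """packet: dict with src_ip, src_port, dst_port."""
    is_dns_response = packet["src_port"] == 53
    if is_dns_response and packet["src_ip"] in ABUSED_RECURSORS:
        return False  # unwanted reflected response: drop
    return True       # queries and all other traffic pass

# A reflected response from a listed recursor is dropped:
print(allow({"src_ip": "192.0.2.10", "src_port": 53, "dst_port": 1234}))
# An outbound query toward port 53 still passes:
print(allow({"src_ip": "203.0.113.5", "src_port": 1234, "dst_port": 53}))
```

In practice this would be an ACL at the upstream's border rather than
host code, but the match criteria are the same two fields.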
> > Abused recursors can be blocked from sending unwanted responses, while
> > permitting the wanted queries, without serious harm to any party.
> > [not so with authorities see below]. All responses are unwanted
> > temporarily, but queries are still allowed because we want them to see
> > our names.
>
> When there are so many of them, the attack from each server is in the
> noise. Only the victim clearly sees the attack in progress. How can
> that be stopped?
The victim contacts their upstreams with a list of IPs. It is no
different whatsoever from a smurf attack.
> > Obviously, the group of abused recursive servers is easily identified,
> > even if there are tens of thousands, by the mere fact that no queries
> > were sent to the servers that provided a response. [again, not so with
> > key authorities, roots]
>
> You mean the victim can see the source of the attack?
They can clearly see that they are getting responses for which there was
no query. Just like in a smurf attack.
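The detection logic is exactly the smurf case: any response that matches
no outstanding query is attack traffic. A toy sketch of that bookkeeping
(state reduced to a set of (server, transaction-id) pairs; a real
resolver or firewall tracks full flows, and these names are mine, not
any real API):

```python
# Toy unsolicited-response detector: remember which (server, txid) pairs
# we actually queried; any DNS response not in that set is unsolicited.

outstanding = set()

def sent_query(server_ip, txid):
    """Record a query we sent, keyed by server address and DNS txid."""
    outstanding.add((server_ip, txid))

def on_response(server_ip, txid):
    """Return True if this response answers a query we actually sent."""
    if (server_ip, txid) in outstanding:
        outstanding.discard((server_ip, txid))
        return True
    return False  # unsolicited: candidate attack traffic

sent_query("192.0.2.53", 0x1A2B)
print(on_response("192.0.2.53", 0x1A2B))    # matches our query
print(on_response("198.51.100.9", 0x9999))  # never queried: unsolicited
```

Every source address that repeatedly trips the unsolicited branch goes
on the list handed to the upstream.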
> When it is coming from virtually everywhere, the victim is left with
> few options.
This claim is nonsense. Years of abuse filtering for various forged
source IP attacks prove otherwise. I have a great deal of personal
experience with this, having excited a certain group of script kiddies
with botnets over the years.
> > Then it is a matter of working with some of those to identify where the
> > forged traffic is coming from. This identifies the provider of the
> > botnet/abuser. Once they are shut, the whole abuse stops. It is always
> > a myth that on large distributed attacks one needs to stop tens of
> > thousands of servers. That's nonsense. One typically needs to find and
> > stop just one botnet operator. The more servers they (ab)use, the
> > easier it is to find someone who will cooperate in finding them.
>
> Some C&C sources can be identified, but this game is evolving.
Yes. But information theory works here, too.
> It is not just one botnet operator.
It is often a small group, I think. You don't need to catch the botnet
operator to get them to stop. You just need to get close, and kill some
of their bots, and they will stop.
> Few servers are in the same place more than a few minutes.
DNS servers tend to stay put. Some operators will be helpful in
tracking down the bots that are spoofing traffic.
> > If this attack became common with recursors, people would change the IP
> > addresses of their recursors, and then keep track of scanning. Large
> > sites and root operators would also keep a closer eye on who might be
> > logging this information and firing/charging those employees when they
> > are discovered. If open relay abuse is a guide, then over time, the
> > chances of detection of scanning increase to close to 100%.
>
> There is no need to scan for these servers.
There is if you aren't one of the people who has privileged access to
this information.
> > But if this attack became common with authorities or roots, then it
> > could become quite pernicious, since it would be very difficult to
> > distinguish the wanted responses from the unwanted responses. The
> > better positioned the server is for attack, the harder it is generally
> > to block. [try blocking CNN or MSN, or the roots---they cannot be
> > 'helpful' by blocking your IP address at their servers]. This is why
> > mitigation is difficult or impossible. And the bad guy can do the
> > scanning without drawing suspicion.
>
> To squelch an attack, why not block servers permitting a high gain
> reflected attack?
As I already said: Experiment for yourself; block the root servers, see
how well DNS works after that.
> > So, you do the game theory: you're the bad guy, which approach do you
> > choose?
>
> Lowering the gain achieved lessens the chance resources will be
> expended in a perhaps futile effort. Not allowing open recursive DNS
> still appears to be a good solution at this time.
You think blocking the roots is a good solution, too. I don't think you
fully appreciate the problem.
--Dean
--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000
_______________________________________________
DNSOP mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/dnsop