On Wed, 2007-03-14 at 22:36 -0400, Dean Anderson wrote:
> On Wed, 14 Mar 2007, Douglas Otis wrote:
> >
> > A distributed attack requires some number of servers publishing RRs
> > large enough to pose a higher gain threat.
>
> No. Just one on a 10GigE is probably just fine.
This is neglecting the basic concern. There are large networks everywhere. One source of an attack at a lower gain would be much easier to squelch.

> > Several servers can be a problem, but not at the same level as with
> > tens of thousands.
>
> There aren't tens of thousands of domains with large SPF records?
> Really? Some others claim SPF is widely deployed. Some from your
> company, I think. Perhaps you should check around the office and get
> the 'real story' about SPF deployment.

Sender-ID is not a matter of record deployment, but rather of the number of recipients processing the scripts. BCP38 and ACLs prevent source-spoofed reflected attacks. However, these precautions will not be effective against Sender-ID exploits that obtain much higher gain when measured against spam traffic. Of course, the attackers would be spamming anyway; Sender-ID actually provides a virtually free method of attack!

> There aren't high-volume servers with large records? Really. That's
> amazing.

A lower level of distribution facilitates squelching a persistent attack.

> > Blocking recursive servers also disrupts services for a
> > greater number of valid recipients.
>
> ???? This is what I said. I think you must mean blocking during an
> attack, in which case, one blocks queries from the 'target/victim'. And
> so the answer to that is no: blocking queries from a 'victim' could harm
> only the victim, and only if they are really a user of the recursor. You
> don't need to block the recursor from everything. By contrast, blocking
> the victim from the roots, or a large authority server, could be more
> disruptive than the attack. My customers want to see CNN. I'm not
> going to block CNN because someone is blasting CNN with forged queries
> from my nameservers.

Yes. Squelching queries entering a network from resolvers serving a large population of users increases the number of victims caused by a DoS attack.
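The "gain" being argued over here is just the ratio of reflected response bytes to spoofed query bytes. A minimal back-of-the-envelope sketch, where both sizes are illustrative assumptions rather than measurements:

```python
# Rough amplification gain for a reflected DNS attack.
# Both sizes below are illustrative assumptions, not measurements.
QUERY_BYTES = 64       # small spoofed UDP query (with an EDNS0 OPT record)
RESPONSE_BYTES = 4096  # a large TXT/SPF answer carried over EDNS0

gain = RESPONSE_BYTES / QUERY_BYTES
print(f"amplification gain: {gain:.0f}x")
```

The larger the published RR relative to the query that fetches it, the more attractive the server is as a reflector, whether it is one authority on a fat pipe or tens of thousands of open recursors.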
On the other hand, if CNN published a large RR and then allowed this RR to support a high-gain reflected attack, squelching traffic from this server could help mitigate an attack with less collateral damage.

> This assumes the authority server has no legitimate purpose in creating
> a large DNS record. Obviously, I think, this assumption is false, and
> you cannot base an argument on a false assumption.

There are a number of strategies one might use to squelch a DoS attack, provided an attack is apparent. An abnormally high rate of queries for some large RR provides evidence of an ongoing attack at the server.

> > Once a suitably large RR is found, open recursive servers can become
> > tens of thousands of sources for this single RR.
>
> You have to find them first. It's much easier to find tens of thousands
> of large DNS records, or better, a few records on some very key
> high-bandwidth servers. There is a neat list of 13 IP addresses that
> will net several hundred high-bandwidth servers that 1) have large
> records, and 2) can't be easily blocked by the recipient.

What gain is obtained in this scenario?

> > Which is harder? Finding a distribution of large RRs that greatly
> > exceed the MTU, or finding recursive servers? You contend finding
> > recursive servers is harder, but attacks seem to demonstrate otherwise.
>
> Yes. That is interesting, isn't it. It's clear now that a certain small
> group of people can find recursors more easily than the rest of the
> world. Funny coincidence, isn't it.

A recursive server for a large population of users simply stands out by the level of requests being made. Examine a DNS log. It is not a mystery.

> > Do you mean that when a provider's DNS is taken out of service, then
> > finding an alternative will become more difficult?
> I mean that many people have statically configured DNS addresses, and
> when they travel to Starbucks with their laptop, they won't be able to
> use their home servers if everyone follows this proposed draft.

This would not be a default setting, and they should also know how to readjust the settings. Tunneling over SSH via port 443 also thwarts poisoning that might be attempted, and this port is rarely blocked.

> I mean that when you turn off recursion, you still get a referral to
> authority servers, which can be large, especially if the query asked it
> to be DNSSEC signed. Some of the reflected attacks used several-KB RRs.
> Sometimes, as was noted with another previous SPF problem, the 'not
> recursing' is worse than the recursion.

I am guessing, but I suspect this depends upon what is being measured.

> If it comes from random recursors, it would be easier to filter than
> if it comes from authority or root servers.

When an attack is coming from thousands of open recursive servers all attempting to forward some multi-KB fragmented RR, the problem is not easily squelched. One might instead limit which of the roots are allowed transit in either direction. That could limit the gain of the attack to nil.

> Abused recursors can be blocked from sending unwanted responses, while
> permitting the wanted queries, without serious harm to any party
> [not so with authorities; see below]. All responses are unwanted
> temporarily, but queries are still allowed because we want them to see
> our names.

When there are so many of them, the attack from each server is lost in the noise. Only the victim clearly sees the attack in progress. How can that be stopped?

> Obviously, the group of abused recursive servers is easily identified,
> even if there are tens of thousands, by the mere fact that no queries
> were sent to the servers that provided a response. [Again, not so with
> key authorities, roots.]

You mean the victim can see the source of the attack?
When it is coming from virtually everywhere, the victim is left with few options.

> Then it is a matter of working with some of those to identify where the
> forged traffic is coming from. This identifies the provider of the
> botnet/abuser. Once they are shut down, the whole abuse stops. It is
> always a myth that on large distributed attacks one needs to stop tens
> of thousands of servers. That's nonsense. One typically needs to find
> and stop just one botnet operator. The more servers they (ab)use, the
> easier it is to find someone who will cooperate in finding them.

Some C&C sources can be identified, but this game is evolving. It is not just one botnet operator, and few servers remain in the same place more than a few minutes.

> If this attack became common with recursors, people would change the IP
> addresses of their recursors, and then keep track of scanning. Large
> sites and root operators would also keep a closer eye on who might be
> logging this information, firing/charging those employees when they
> are discovered. If open-relay abuse is a guide, then over time the
> chances of detection of scanning increase to close to 100%.

There is no need to scan for these servers.

> But if this attack became common with authorities or roots, then it
> could become quite pernicious, since it would be very difficult to
> distinguish the wanted responses from the unwanted responses. The
> better positioned the server is for attack, the harder it is generally
> to block. [Try blocking CNN or MSN, or the roots---they cannot be
> 'helpful' by blocking your IP address at their servers.] This is why
> mitigation is difficult or impossible. And the bad guy can do the
> scanning without drawing suspicion.

To squelch an attack, why not block servers permitting a high-gain reflected attack?

> So, you do the game theory: you're the bad guy, which approach do you
> choose?

Lowering the gain achieved lessens the chance resources will be expended in a perhaps futile effort.
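One concrete way to lower the gain at a reflector is to rate-limit identical responses toward any single destination, so a flood of spoofed queries yields far fewer reflected bytes. A minimal token-bucket sketch of this idea; the class name and the rate/burst figures are illustrative assumptions, not any server's actual implementation:

```python
import time


class ResponseRateLimiter:
    """Per-destination token bucket (hypothetical sketch): allow at most
    `rate` responses per second toward one destination, plus a small burst."""

    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate      # tokens replenished per second
        self.burst = burst    # bucket capacity
        self.buckets = {}     # dest_ip -> (tokens, last_timestamp)

    def allow(self, dest_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(dest_ip, (self.burst, now))
        # replenish tokens for the time elapsed, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[dest_ip] = (tokens - 1.0, now)
            return True       # send the response
        self.buckets[dest_ip] = (tokens, now)
        return False          # drop: likely a spoofed-source flood
```

A victim being reflected at through such a limiter receives at most `burst + rate * t` responses in `t` seconds, no matter how many spoofed queries arrive, while legitimate resolvers querying at normal rates are unaffected.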
Not allowing open recursive DNS still appears to be a good solution at this time.

-Doug

_______________________________________________
DNSOP mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/dnsop
