Ron DuFresne wrote:
>
> Has anyone looked at the work described here:
>
> http://www.arstechnica.com/reviews/2q00/networking/networking-3.html#three_other_attacks
> Ars Technica: Network Services in an Uncooperative Environment

Yes.

(I should be discussing this with the author directly, but since you asked, here goes ...)

Note: I believe that Stefan Savage is describing methods of congestion control, methods of circumventing them, and corrective measures, rather than a peril to our networks. As a paper on congestion control, it certainly is a nice piece of work.

All three attacks are basically variations on making a server send its output as quickly as possible. The result, according to the author, is that the server exhausts its own internet connection by not limiting how much it sends. However, I do not see that this constitutes a "new" and "dangerous" attack, beyond thwarting some fairly new (seen from a TCP/IP historical perspective) attempts at congestion control. I would be a LOT more worried about DDoS than this. Any "old", "simple" or "plain bad" TCP implementation, with no real attempt at congestion control, will do what the author describes _by_default_; no "attack" is necessary at all. It simply happens.

Granted, the suggested "defense" against the first two attacks,
> a. Ack division
> b. DupACK spoofing
would make it harder for an attacker to circumvent the congestion control features of newer stacks. (A toy illustration of the ACK division effect is included at the end of this mail.) However, simply making multiple requests for a large document would have exactly the same effect, and automatic per-connection congestion control would not help at all.

On a side note: one thing I believe Stefan Savage should have mentioned alongside all the graphs of utilization scaling up logarithmically is that after 16/32/64 KB, the TCP send window will fill up, and the server will cease sending packets simply because the recipient is not ACKing them. (A back-of-the-envelope calculation of this limit also follows below.)

The third "attack",
> c. Optimistic acking
is indeed harder to "defend" against. The proposed solution involves random cookies that the sending end would use to identify legitimate ACKs. This solution, IMHO, is outright dangerous and would also require a complete rewrite of all TCP stacks. If a server ever changed the sending MSS mid-connection (for instance, lowering it after detecting a smaller path MTU) while holding unACKed and perhaps lost segments, it would completely break the connection, since the cookies would no longer match up.

As a final note, all the "attacks" require quite some work on the part of the attacker. Basically, you get one large packet of data for every ACK you send, and you also have to suffer receipt of all the data you requested. This all sounds like a basic resource exhaustion attack, and it also looks a lot like expected TCP behaviour to me :-)

Seen from a "danger to our networks" perspective, it all boils down to bandwidth management, which ought to be debated and addressed more than it is. I for one do not think these issues should be handled automatically at the TCP layer. Rather, some form of manual application bandwidth management is better: if you want to give your web server daemon more bandwidth than your mail server daemon, you tell your web server to use up to X Mbps and your mail server to use up to Y Mbps, where X > Y and X + Y = your allotted server bandwidth. (A rough sketch of such a limiter also follows below.)

Flames and constructive input welcome.
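To make the ACK division point a bit more concrete: during slow start (RFC 2581), a sender grows its congestion window by roughly one MSS for every ACK that covers new data, so a receiver that acknowledges each segment in several sub-segment pieces harvests several window increments per segment instead of one. The toy simulation below is mine, not the paper's, and the numbers are arbitrary placeholders; it only illustrates the growth rule.

  # Toy model of slow-start growth (RFC 2581 style): the congestion
  # window grows by one MSS for each ACK that acknowledges new data.
  # An "ACK dividing" receiver ACKs every segment in several pieces
  # and so collects several increments per segment.

  def cwnd_after(segments, acks_per_segment, initial_cwnd=1):
      """Congestion window (in MSS) after ACKing 'segments' segments,
      sending 'acks_per_segment' ACKs for each of them."""
      cwnd = initial_cwnd
      for _ in range(segments * acks_per_segment):
          cwnd += 1                  # one increment per ACK received
      return cwnd

  print("well-behaved receiver:", cwnd_after(10, 1))    # -> 11 MSS
  print("ACK-dividing receiver:", cwnd_after(10, 10))   # -> 101 MSS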
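Regarding the send window side note: once the advertised window (16/32/64 KB in the graphs) is full of unACKed data, the sender stops, and even a receiver that keeps ACKing cannot pull more than window/RTT out of a single connection. A quick back-of-the-envelope calculation; the RTT values are simply examples I picked:

  # Rough upper bound on a single TCP connection when limited by the
  # advertised window: rate <= window / RTT.

  def max_rate_mbps(window_bytes, rtt_seconds):
      return window_bytes * 8 / rtt_seconds / 1e6

  for window_kb in (16, 32, 64):
      for rtt_ms in (20, 100):
          rate = max_rate_mbps(window_kb * 1024, rtt_ms / 1000.0)
          print("%2d KB window, %3d ms RTT: ~%5.2f Mbps"
                % (window_kb, rtt_ms, rate))

So a 64 KB window over a 100 ms path tops out around 5 Mbps per connection; the attacker needs many connections (or a short path) to do real damage this way.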
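And on the manual bandwidth management idea: one way an application (or a small wrapper in front of its send path) could enforce an "up to X Mbps" budget is a token bucket consulted before every write. This is only a minimal sketch under my own assumptions (a cooperating daemon that calls the limiter itself; the class name and figures are made up), not a description of any particular product:

  import time

  class TokenBucket:
      """Crude token bucket: allows bursts up to 'burst_bytes' and
      refills at 'rate_bps' bits per second."""
      def __init__(self, rate_bps, burst_bytes):
          self.rate = rate_bps / 8.0        # refill rate, bytes/second
          self.capacity = burst_bytes
          self.tokens = float(burst_bytes)
          self.last = time.time()

      def consume(self, nbytes):
          """Block until 'nbytes' may be sent within the budget."""
          while True:
              now = time.time()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= nbytes:
                  self.tokens -= nbytes
                  return
              time.sleep((nbytes - self.tokens) / self.rate)

  # Give the web daemon X = 8 Mbps and the mail daemon Y = 2 Mbps,
  # then call the matching consume(len(chunk)) before each send().
  web_limit  = TokenBucket(8000000, 64 * 1024)
  mail_limit = TokenBucket(2000000, 64 * 1024)

A border device or the kernel could of course do the same job, but the arithmetic is the same: X + Y stays within the allotted uplink.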
Regards,
Mikael Olsson

--
Mikael Olsson, EnterNet Sweden AB, Box 393, SE-891 28 ÖRNSKÖLDSVIK
Phone: +46-(0)660-105 50   Fax: +46-(0)660-122 50
Mobile: +46-(0)70-66 77 636   WWW: http://www.enternet.se
E-mail: [EMAIL PROTECTED]
