On Thursday 01 December 2005 23:13, Matthew Dillon wrote:
> :...
> :
> :> of latency occurring every once in a while would not have any
> :> adverse effect.
> :
> :A few milliseconds of latency / jitter can sometimes completely kill
> :TCP throughput at gigabit speeds. A few microseconds won't matter,
> :though.
> :
> :Cheers,
> :
> :Marko
>
> Not any more, not with scaled TCP windows and SACK.  A few
> milliseconds doesn't matter.  The only effect is that you need a
> larger transmit buffer to hold the data until the round-trip ack
> arrives.  So, e.g., a 1-megabyte buffer would allow you to have
> 10 ms of round-trip latency.  That's an edge case, of course, so
> to be safe one would want to cut it in half and say 5 ms with a
> 1-megabyte buffer.
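
(A rough sketch of the buffer-sizing arithmetic quoted above, in C.  The
1 Gbit/s rate and 10 ms round trip are the figures from the mail; the
socket and SO_SNDBUF call are only there to show where such a number
would end up, they are not DragonFly-specific code.)

    /* Sketch only: bandwidth-delay product for sizing a TCP send buffer.
     * Rate and RTT are the figures discussed above. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        double rate_bps = 1e9;     /* link rate: 1 Gbit/s    */
        double rtt_s    = 0.010;   /* round-trip time: 10 ms */

        /* Bytes needed in flight to keep the pipe full for one RTT:
         * 1 Gbit/s * 10 ms = 10 Mbit = 1.25 MB. */
        double bdp = rate_bps * rtt_s / 8.0;
        printf("BDP: %.0f bytes (~%.2f MB)\n", bdp, bdp / 1e6);

        /* Asking the kernel for a matching send buffer. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int sndbuf = (int)bdp;
        if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                                 &sndbuf, sizeof(sndbuf)) < 0)
            perror("send buffer setup");
        return 0;
    }

(Strictly, 1 MB drains in about 8 ms at full GigE rate, so the 10 ms
figure is a round approximation, which is presumably why it gets halved
to be safe.)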
Mostly true, but having TCP window sizes as large as a megabyte doesn't
come for free, as you describe later in your note (and there may be
other problems as well).  But I don't think that today's gigabit cards
ever delay interrupts for more than a few dozen microseconds (unless
explicitly misconfigured ;), so we probably have a non-issue here.

Cheers,

Marko

> TCP isn't really the problem, anyway, because it can tolerate any
> amount of latency without 'losing' packets.  So if you have a TCP
> link and you suffer, say, 15 ms of delay once every few seconds,
> the aggregate bandwidth is still pretty much maintained.
>
> The real problem with TCP is packet backlogs appearing at choke
> points.  For example, if you have a GigE LAN and a 45 Mbit WAN, an
> incoming TCP stream from a host with an awful TCP stack (such as a
> Windows server) might build up a megabyte worth of packets on your
> network provider's border router, all trying to squeeze down into
> 45 Mbits.  NewReno, RED, and other algorithms try to deal with it,
> but the best solution is for the server not to push out so much data
> in the first place if the target's *PHYSICAL* infrastructure doesn't
> have the bandwidth.  But that's another issue.
>
>                                       -Matt
>                                       Matthew Dillon
>                                       <[EMAIL PROTECTED]>
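
(Likewise, a rough sketch of the choke-point arithmetic quoted above.
The GigE and 45 Mbit figures are from the mail; the 10 ms burst length
is only an illustrative assumption.)

    /* Sketch only: how fast a backlog builds at the GigE -> 45 Mbit
     * choke point described above. */
    #include <stdio.h>

    int main(void)
    {
        double lan_bps = 1e9;    /* sender's LAN:        1 Gbit/s  */
        double wan_bps = 45e6;   /* provider's WAN link: 45 Mbit/s */
        double burst_s = 0.010;  /* sender bursts unthrottled for 10 ms */

        /* Excess bytes per second the border router must queue or drop. */
        double growth = (lan_bps - wan_bps) / 8.0;
        printf("queue grows at ~%.0f MB/s; a %.0f ms burst queues ~%.2f MB\n",
               growth / 1e6, burst_s * 1e3, growth * burst_s / 1e6);
        return 0;
    }

So only about 10 ms of unrestrained sending already accounts for the
"megabyte worth of packets" figure at the border router.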
