Excellent insight on the issue. But it seems to me that the issue correlates more with other TCP parameter tunings, and it calls for control at both the transmitting and the receiving ends.
I'm attaching a paper from the same source you're pursuing about an automatic implementation for this problem. (Not any more, since it was bounced by our list server, so here is the link to the PDF file if anyone is interested: http://citeseer.ist.psu.edu/dunigan02tcp.html )

For me, it makes more sense to tackle the issue of network communication buffering at the network driver level, i.e. QoS. Even if you had a big transmission buffer (at the sending end) and a big receiving buffer (at the receiving end; though it's not guaranteed that the communicating ends will agree on such window sizes), it is still the gateway, with its behaviour, that shapes the performance, and you might wind up with a big retransmission queue of unacknowledged segments waiting for processing, and you end up with the same situation. (Just some rambling, don't worry yourself.)

Also, some notes on the calculations:

> buffer size = .0003s * 100Mbps = .03Mb =~ 3900 Bytes

should be:

buffer size = .0003 s * 100 Mbps = .03 Mb = 30.72 Kb = 3.84 kBytes

(The default buffer size is 32 kBytes, so the ideal size for this RTT comes out almost a factor of 10 below the default.) It certainly does not conflict with the concepts presented, though.

> Well, let me start off by giving the credit for the technical info I'm mentioning next to Brian Tierney.
>
> Let's say you're writing a small program to transfer a file from one machine to another on a 100 Mbps network.
>
> Let's consider two important equations:
>
> Throughput = buffer size / latency
>
> So to get better throughput we want a bigger buffer size and a lower latency. Lower latency is usually improved at the network level, so we'll leave that for the moment.
>
> Let's look at buffer size:
>
> buffer size = RTT * bandwidth
>
> A simple ping between the two machines can get you the Round Trip Time.
>
> So for a 100 Mbps bandwidth and, say, a 0.3 ms RTT on a LAN - I was pinging Ammar's box - the ideal buffer size = .0003s * 100Mbps = .03Mb =~ 3900 Bytes.
> OK, now let's look at the TCP buffer size on my Mac:
>
> net.inet.tcp.sendspace: 32768
> net.inet.tcp.recvspace: 32768
>
> Hmm .. well, that's 32K ... pretty good, eh?
>
> Yet the problem appears somewhere else. Let's say I'm transferring the same file with the same bandwidth over a WAN link (the Internet maybe, a VPN, you name it). My RTT can go up to something close to 300 ms ... well, in this case:
>
> buffer size (ideal) = .3s * 100Mbps = 3.75MBytes!!!!
>
> How much do I have? 32K?
>
> So when opening my socket I'd be really dumb not to play with the system's TCP buffer size and push it up a lil' bit! With the C language, check the manpage of setsockopt.
>
> Of course, the OS usually has a maximum TCP buffer size that you can ask your admin to increase (use sysctl). On Linux the kernel variables look like this:
>
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
>
> Bear in mind, of course, that we're tackling this problem from an application point of view and for the specific case of high-bandwidth/high-latency networks.
>
> Hope that was beneficial!
>
> On 11/22/05, Yaman Saqqa <[EMAIL PROTECTED]> wrote:
>> Hi guys,
>>
>> Anybody ever noticed how scp has always been significantly slower than, for example, a traditional FTP transfer? Well, I was reading an article about TCP performance tuning, and it was discussing playing with the TCP buffer size (with socket options) so as to gain better bandwidth utilization (if anybody is interested in the details, I can spend 5 minutes on a sequel email). The interesting piece that I found is that the OpenSSH suite (of which scp is a member) uses "statically defined internal flow control buffers" that stop the TCP buffer from getting larger, which results in underutilization of the bandwidth! The Pittsburgh Supercomputing Center has its own patched version to overcome this.
>>
>> I just thought it was worth sharing, as I noticed this issue a couple of years back but only just figured out the trick!
>>
>> Happy hacking
>>
>> --
>> abulyomon
>>
>> www.KiLLTHeUPLiNK.com
>
>
> --
> abulyomon
>
> www.KiLLTHeUPLiNK.com
>
> _______________________________________________
> General mailing list
> [email protected]
> http://mail.jolug.org/mailman/listinfo/general_jolug.org
