Dan McDonald wrote:
9 months ago, I asked:

On Thu, Nov 20, 2008 at 02:10:45PM -0500, Dan McDonald wrote:
shell(~)[0]% ndd -get /dev/tcp tcp_xmit_hiwat
49152
shell(~)[0]% ndd -get /dev/tcp tcp_recv_hiwat
49152
shell(~)[0]%
Why are our default TCP window sizes so small?  Many transactions these days
are performed over long distances, with latency-inducing encryption from IPsec
or some VPN middlebox/middleware, or worse, with one or more layers of
latency-inducing middleboxes like NATs in between.


Given the recent discussion on another list about the performance of a
network-centric service, I have to wonder how many of those problems are
related to wicked-small window sizes.

I do realize that the long-term Right Answer (TM) is some sort of auto-tuning
mechanism, but in the short term, it shouldn't be rocket science to put larger
default values back into the ON gate.

Am I on crack?  Or is this sensible?  One other OS seems to default theirs to
512k.  I'd personally prefer 1 MB, but I can be swayed.


As you know, a number of us have been using Bill's package to run with 1 MB for a long time without any noticeably bad effects, and I've recommended it to many without adverse reports back. Admittedly most of these are client usages, but that's where the pain most often shows up, anyway. I was sorely tempted to recommend it in the Bible, but it would have added several pages to go into ndd and so on, so it didn't make the cut. I think an adjustment is long overdue, though.
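
For anyone who wants to try it in the meantime, the runtime adjustment is
roughly the following (a sketch assuming the 1 MB figure above; note that ndd
settings don't survive a reboot, so they usually end up in a startup script):

# ndd -set /dev/tcp tcp_xmit_hiwat 1048576
# ndd -set /dev/tcp tcp_recv_hiwat 1048576

If I remember right, going above 1 MB also means raising tcp_max_buf, since
that caps both these defaults and what applications can request via
SO_SNDBUF/SO_RCVBUF.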

Dave
