Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-25 Thread Sebastian Reitenbach
Hi,

On Monday, December 5, 2011 10:53 CET, "Sebastian Reitenbach" wrote:
 
> On Sunday, December 4, 2011 21:01 CET, Mark Kettenis wrote:
>  
> > > Date: Sun, 4 Dec 2011 15:10:56 +0100
> > > From: Claudio Jeker 
> > > 
> > > On Sun, Dec 04, 2011 at 01:35:33PM +0100, Sebastian Reitenbach wrote:
> > > > On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar wrote:
> > > >  
> > > > > On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > > > > > the default maximum size of the tcp send and receive buffer used by 
> > > > > > the autosizing algorithm is way too small, when trying to get 
> > > > > > maximum speed with high bandwidth and high latency connections.
> > > > > 
> > > > > I have tweaked SB_MAX on a system too, but it was for UDP.
> > > > > 
> > > > > When running a busy Unbound resolver, the recommendation is to bump the
> > > > > receive buffer to 4M or even 8M. See
> > > > > http://unbound.net/documentation/howto_optimise.html
> > > > > 
> > > > > Otherwise a lot of queries are dropped when the cache is cold.
> > > > > 
> > > > > I don't think there's a magic value that's right for everyone, so a
> > > > > sysctl would be nice.  Maybe separate ones for tcp and udp.
> > > > > 
> > > > > I know similar sysctl's have been removed recently, and that they are
> > > > > sometimes abused, but I'd say we have two valid use cases now.
> > > > > 
> > > > > So I'd love some more discussion.  :-)
> > > > 
> > > > since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
> > > > attitude (which I think is not a bad one), I just bumped the SB_MAX value.
> > > > If there is consensus that a sysctl would make sense, I'd also look into
> > > > that approach and send a new patch.
> > >  
> > > SB_MAX is there to protect your system. It gives an upper bound on how much
> > > memory a socket may allocate. The current value is a compromise. Running
> > > with a huge SB_MAX may make one connection faster but it will cause
> > > resource starvation issues on busy systems.
> > > Sure you can bump it but be aware of the consequences (and that is why I
> > > think we should not bump it at the moment). A proper change needs to
> > > include some sort of resource management that ensures that we do not run
> > > the kernel out of memory.
> > 
> > But 256k simply isn't enough for some use cases.  Turning this into a
> > sysctl tunable like FreeBSD and NetBSD would be a good idea if you ask
> > me.  Yes, people will use it to shoot themselves in the foot.  I don't
> > care.
> 
> So to be able to shoot myself in the foot without the need to compile the 
> kernel, I'll look into adding a sysctl to tweak the maximum size of the 
> buffer. Well, depending on time and how fast I figure out how to do that, 
> it might take some time.

here is a first try at adding such a sysctl. I called it net.inet.ip.sb-max; a
better name, or a different hierarchy, maybe?
The default value SB_MAX defined in sys/sys/socketvar.h has not changed. I used
sysctl_int for the sysctl, but I am not entirely sure that is right: sb_max is
a u_long in sys/kern/uipc_socket2.c, so should it maybe use sysctl_quad instead?
Tested and works for me on i386.

It's my first try in kernel land, and I'm no expert with regard to the network
stack, so there may be things I should have done better. Please comment and let
me know.
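
For anyone who wants to poke at the new knob programmatically, here is a
minimal userland sketch (not part of the diff) that reads, and optionally
sets, the proposed net.inet.ip.sb-max via sysctl(3). The IPCTL_SBMAX mib
index is whatever the kernel part of the diff assigns; the value below is
only a placeholder. And since the diff uses sysctl_int, the value is
handled as an int here; with sysctl_quad it would be an int64_t instead.

#include <sys/param.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <netinet/in.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef IPCTL_SBMAX
#define IPCTL_SBMAX	24	/* placeholder: substitute the diff's value */
#endif

int
main(int argc, char *argv[])
{
	int mib[4] = { CTL_NET, PF_INET, IPPROTO_IP, IPCTL_SBMAX };
	int cur, nval;
	size_t len = sizeof(cur);

	if (sysctl(mib, 4, &cur, &len, NULL, 0) == -1)
		err(1, "sysctl read");
	printf("net.inet.ip.sb-max = %d\n", cur);

	if (argc > 1) {			/* setting it needs root */
		nval = atoi(argv[1]);
		if (sysctl(mib, 4, NULL, NULL, &nval, sizeof(nval)) == -1)
			err(1, "sysctl write");
		printf("net.inet.ip.sb-max set to %d\n", nval);
	}
	return 0;
}

From the shell the equivalent would simply be
"sysctl net.inet.ip.sb-max=4194304" as root.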

cheers,
Sebastian

Index: lib/libc/gen/sysctl.3
===================================================================
RCS file: /cvs/src/lib/libc/gen/sysctl.3,v
retrieving revision 1.210
diff -u -r1.210 sysctl.3
--- lib/libc/gen/sysctl.3   9 Dec 2011 16:14:54 -   1.210
+++ lib/libc/gen/sysctl.3   25 Dec 2011 13:50:15 -
@@ -1210,6 +1210,7 @@
 .It ip Ta porthilast Ta integer Ta yes
 .It ip Ta portlast Ta integer Ta yes
 .It ip Ta redirect Ta integer Ta yes
+.It ip Ta sb-max Ta integer Ta yes
 .It ip Ta sourceroute Ta integer Ta yes
 .It ip Ta stats Ta structure Ta no
 .It ip Ta ttl Ta integer Ta yes
@@ -1517,6 +1518,9 @@
 .Tn IP
 packets,
 and should normally be enabled on all systems.
+.It Li ip.sb-max
Maximum size of socket buffers.
This value is also used by the TCP send and receive buffer autosizing algorithm.
 .It Li ip.sourceroute
 Returns 1 when forwarding of source-routed packets is enabled for
 the host.
Index: sbin/sysctl/sysctl.8
===================================================================
RCS file: /cvs/src/sbin/sysctl/sysctl.8,v
retrieving revision 1.162
diff -u -r1.162 sysctl.8
--- sbin/sysctl/sysctl.8   3 Sep 2011 22:59:08 -   1.162
+++ sbin/sysctl/sysctl.8   25 Dec 2011 13:50:54 -
@@ -228,6 +228,7 @@
 .It net.inet.ip.porthilast Ta integer Ta yes
 .It net.inet.ip.maxqueue Ta integer Ta yes
 .It net.inet.ip.encdebug Ta integer Ta yes
+.It net.inet.ip.sb-max Ta integer Ta yes
 .It net.inet.ip.ipsec-expire-acquire Ta integer Ta yes
 .It net.inet.ip.ipsec-invalid-life Ta integer Ta yes
 .It net.inet.ip.ipsec-pfs Ta integer Ta yes
Index: sys/kern/uip

Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Henning Brauer
* Geoff Steckel  [2011-12-05 16:32]:
> On 12/05/2011 05:25 AM, Kevin Chadwick wrote:
> >On Mon, 05 Dec 2011 10:53:00 +0100
> >"Sebastian Reitenbach" wrote:
> >
> >>So to be able to shoot myself in the foot without the need to compile the 
> >>kernel, I'll look into adding a sysctl to tweak the maximum size of the 
> >>buffer. Well, depending on time and how fast I figure out how to do that, 
> >>it might take some time.
> >
> >I don't know the best word but what about something like
> >net.inet.tcp.footshooter.
> >
> >You'd have to be pretty dumb to write into the mailing list saying:
> >
> >I don't know what's wrong. The only changes I've made are to increase
> >performance by tweaking net.inet.tcp.footshooter.sbmax?
> footshooter.net.sbmax perhaps? Such a hierarchy could be populated
> with all the parameters it's, umm, unwise to tweak without a lot of
> knowledge. A 90% frivolous suggestion.

and now that everybody had his/her fun back to serious pls.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Geoff Steckel

On 12/05/2011 05:25 AM, Kevin Chadwick wrote:
> On Mon, 05 Dec 2011 10:53:00 +0100
> "Sebastian Reitenbach" wrote:
>
> > So to be able to shoot myself in the foot without the need to compile the
> > kernel, I'll look into adding a sysctl to tweak the maximum size of the
> > buffer. Well, depending on time and how fast I figure out how to do that,
> > it might take some time.
>
> I don't know the best word but what about something like
> net.inet.tcp.footshooter.
>
> You'd have to be pretty dumb to write into the mailing list saying:
>
> I don't know what's wrong. The only changes I've made are to increase
> performance by tweaking net.inet.tcp.footshooter.sbmax?

footshooter.net.sbmax perhaps? Such a hierarchy could be populated with
all the parameters it's, umm, unwise to tweak without a lot of
knowledge. A 90% frivolous suggestion.


Geoff Steckel



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Claudio Jeker
On Mon, Dec 05, 2011 at 10:25:14AM +, Kevin Chadwick wrote:
> On Mon, 05 Dec 2011 10:53:00 +0100
> "Sebastian Reitenbach" wrote:
> 
> > So to be able to shoot myself in the foot without the need to compile the 
> > kernel, I'll look into adding a sysctl to tweak the maximum size of the 
> > buffer. Well, depending on time and how fast I figure out how to do that, 
> > it might take some time.
> 
> 
> I don't know the best word but what about something like
> net.inet.tcp.footshooter.
> 
> You'd have to be pretty dumb to write into the mailing list saying:
> 
> I don't know what's wrong. The only changes I've made are to increase
> performance by tweaking net.inet.tcp.footshooter.sbmax?
> 

SB_MAX has no business under net.inet.tcp. SB_MAX is used by more than just
TCP, and by more than just IP.

-- 
:wq Claudio



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Henning Brauer
* Mark Kettenis  [2011-12-04 21:02]:
> But 256k simply isn't enough for some use cases.  Turning this into a
> sysctl tunable like FreeBSD and NetBSD would be a good idea if you ask
> me.  Yes, people will use it to shoot themselves in the foot.  I don't
> care.

I agree.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Kevin Chadwick
On Mon, 05 Dec 2011 10:53:00 +0100
"Sebastian Reitenbach" wrote:

> So to be able to shoot myself in the foot without the need to compile the 
> kernel, I'll look into adding a sysctl to tweak the maximum size of the 
> buffer. Well, depending on time and how fast I figure out how to do that, 
> it might take some time.


I don't know the best word but what about something like
net.inet.tcp.footshooter.

You'd have to be pretty dumb to write into the mailing list saying:

I don't know what's wrong. The only changes I've made are to increase
performance by tweaking net.inet.tcp.footshooter.sbmax?



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-05 Thread Sebastian Reitenbach
On Sunday, December 4, 2011 21:01 CET, Mark Kettenis wrote:
 
> > Date: Sun, 4 Dec 2011 15:10:56 +0100
> > From: Claudio Jeker 
> > 
> > On Sun, Dec 04, 2011 at 01:35:33PM +0100, Sebastian Reitenbach wrote:
> > > On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar wrote:
> > >  
> > > > On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > > > > the default maximum size of the tcp send and receive buffer used by 
> > > > > the autosizing algorithm is way too small, when trying to get maximum 
> > > > > speed with high bandwidth and high latency connections.
> > > > 
> > > > I have tweaked SB_MAX on a system too, but it was for UDP.
> > > > 
> > > > When running a busy Unbound resolver, the recommendation is to bump the
> > > > receive buffer to 4M or even 8M. See
> > > > http://unbound.net/documentation/howto_optimise.html
> > > > 
> > > > Otherwise a lot of queries are dropped when the cache is cold.
> > > > 
> > > > I don't think there's a magic value that's right for everyone, so a
> > > > sysctl would be nice.  Maybe separate ones for tcp and udp.
> > > > 
> > > > I know similar sysctl's have been removed recently, and that they are
> > > > sometimes abused, but I'd say we have two valid use cases now.
> > > > 
> > > > So I'd love some more discussion.  :-)
> > > 
> > > since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
> > > attitude (which I think is not a bad one), I just bumped the SB_MAX value.
> > > If there is consensus that a sysctl would make sense, I'd also look into
> > > that approach and send a new patch.
> >  
> > SB_MAX is there to protect your system. It gives an upper bound on how much
> > memory a socket may allocate. The current value is a compromise. Running
> > with a huge SB_MAX may make one connection faster but it will cause
> > resource starvation issues on busy systems.
> > Sure you can bump it but be aware of the consequences (and that is why I
> > think we should not bump it at the moment). A proper change needs to
> > include some sort of resource management that ensures that we do not run
> > the kernel out of memory.
> 
> But 256k simply isn't enough for some use cases.  Turning this into a
> sysctl tunable like FreeBSD and NetBSD would be a good idea if you ask
> me.  Yes, people will use it to shoot themselves in the foot.  I don't
> care.

So to be able to shoot myself in the foot without the need to compile the 
kernel, I'll look into adding a sysctl to tweak the maximum size of the buffer. 
Well, depending on time and how fast I figure out how to do that, it might
take some time.

Sebastian



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Mark Kettenis
> Date: Sun, 4 Dec 2011 15:10:56 +0100
> From: Claudio Jeker 
> 
> On Sun, Dec 04, 2011 at 01:35:33PM +0100, Sebastian Reitenbach wrote:
> > On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar wrote:
> >  
> > > On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > > > the default maximum size of the tcp send and receive buffer used by the 
> > > > autosizing algorithm is way too small, when trying to get maximum speed 
> > > > with high bandwidth and high latency connections.
> > > 
> > > I have tweaked SB_MAX on a system too, but it was for UDP.
> > > 
> > > When running a busy Unbound resolver, the recommendation is to bump the
> > > receive buffer to 4M or even 8M. See
> > > http://unbound.net/documentation/howto_optimise.html
> > > 
> > > Otherwise a lot of queries are dropped when the cache is cold.
> > > 
> > > I don't think there's a magic value that's right for everyone, so a
> > > sysctl would be nice.  Maybe separate ones for tcp and udp.
> > > 
> > > I know similar sysctl's have been removed recently, and that they are
> > > sometimes abused, but I'd say we have two valid use cases now.
> > > 
> > > So I'd love some more discussion.  :-)
> > 
> > since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
> > attitude (which I think is not a bad one), I just bumped the SB_MAX value.
> > If there is consensus that a sysctl would make sense, I'd also look into
> > that approach and send a new patch.
>  
> SB_MAX is there to protect your system. It gives an upper bound on how much
> memory a socket may allocate. The current value is a compromise. Running
> with a huge SB_MAX may make one connection faster but it will cause
> resource starvation issues on busy systems.
> Sure you can bump it but be aware of the consequences (and that is why I
> think we should not bump it at the moment). A proper change needs to
> include some sort of resource management that ensures that we do not run
> the kernel out of memory.

But 256k simply isn't enough for some use cases.  Turning this into a
sysctl tunable like FreeBSD and NetBSD would be a good idea if you ask
me.  Yes, people will use it to shoot themselves in the foot.  I don't
care.



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Henning Brauer
* Geoff Steckel  [2011-12-04 16:17]:
> To generalize this problem: kernel memory is limited. It is
> autosized at boot time.

that might have been true a decade ago, but not today.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Geoff Steckel

On 12/04/2011 09:10 AM, Claudio Jeker wrote:
> On Sun, Dec 04, 2011 at 01:35:33PM +0100, Sebastian Reitenbach wrote:
> > On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar wrote:
> > > On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > > > the default maximum size of the tcp send and receive buffer used by the
> > > > autosizing algorithm is way too small, when trying to get maximum speed
> > > > with high bandwidth and high latency connections.
> > >
> > > I have tweaked SB_MAX on a system too, but it was for UDP.
> > >
> > > When running a busy Unbound resolver, the recommendation is to bump the
> > > receive buffer to 4M or even 8M. See
> > > http://unbound.net/documentation/howto_optimise.html
> > >
> > > Otherwise a lot of queries are dropped when the cache is cold.
> > >
> > > I don't think there's a magic value that's right for everyone, so a
> > > sysctl would be nice.  Maybe separate ones for tcp and udp.
> > >
> > > I know similar sysctl's have been removed recently, and that they are
> > > sometimes abused, but I'd say we have two valid use cases now.
> > >
> > > So I'd love some more discussion.  :-)
> >
> > since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
> > attitude (which I think is not a bad one), I just bumped the SB_MAX value.
> > If there is consensus that a sysctl would make sense, I'd also look into
> > that approach and send a new patch.
>
> SB_MAX is there to protect your system. It gives an upper bound on how much
> memory a socket may allocate. The current value is a compromise. Running
> with a huge SB_MAX may make one connection faster but it will cause
> resource starvation issues on busy systems.
> Sure you can bump it but be aware of the consequences (and that is why I
> think we should not bump it at the moment). A proper change needs to
> include some sort of resource management that ensures that we do not run
> the kernel out of memory.

How many high-speed, high-latency connections would it take to use a 
"significant" proportion of kernel memory? Waving hands at the problem: 
at 500 ms round trip delay, a saturated 1Gb/s link needs about 63MB of 
buffer per direction. A 100Mb/s link would use 7MB per direction. 
Multiple sockets on such a link should use a similar amount in total 
under the autosizing algorithm. If this is approximately correct, 
documenting a formula might be useful for sysadmins.
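
The formula in question is just the bandwidth-delay product,
buffer = bandwidth * RTT. A quick sanity check of the figures above
(a throwaway helper, not from this thread; bdp_mbytes() is made up):

#include <stdio.h>

/* bandwidth-delay product in megabytes, from Mbit/s and RTT in ms */
static double
bdp_mbytes(double mbit_per_s, double rtt_ms)
{
	return mbit_per_s * 1e6 * (rtt_ms / 1e3) / 8.0 / 1e6;
}

int
main(void)
{
	printf("1Gb/s   @ 500ms: %.2f MB per direction\n",
	    bdp_mbytes(1000, 500));	/* 62.50, i.e. the ~63MB above */
	printf("100Mb/s @ 500ms: %.2f MB per direction\n",
	    bdp_mbytes(100, 500));	/* 6.25, i.e. the ~7MB above */
	printf("100Mb/s @ 180ms: %.2f MB per direction\n",
	    bdp_mbytes(100, 180));	/* 2.25, Sebastian's test path */
	return 0;
}

The last line also explains why 2048*1024 worked best in Sebastian's
tests: it is the first of his tried values above the ~2.25MB
bandwidth-delay product of a 100MBit/180ms path.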


A system with 512MB physical memory should be able to saturate 2 or 3 
100Mb/s links with large delays without seriously depleting kernel 
memory. It seems unlikely that a small system with multiple saturated 
1Gb/s links (or one 10Gb/s link) could do anything very useful.


The pathological case: many sockets, each one sequentially saturating the 
link and then going idle. The current limit does not defend against this.


To generalize this problem: kernel memory is limited. It is autosized at 
boot time. Allowing any kernel subsystem to use a large amount 
jeopardizes system stability.


Does it make sense, philosophically and technically, to allow the 
sysadmin to add physical memory to the kernel at run time, perhaps 
limited to (arbitrarily) 50% of physical memory?


Geoff Steckel



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Claudio Jeker
On Sun, Dec 04, 2011 at 01:35:33PM +0100, Sebastian Reitenbach wrote:
> On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar wrote:
>  
> > On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > > the default maximum size of the tcp send and receive buffer used by the 
> > > autosizing algorithm is way too small, when trying to get maximum speed 
> > > with high bandwidth and high latency connections.
> > 
> > I have tweaked SB_MAX on a system too, but it was for UDP.
> > 
> > When running a busy Unbound resolver, the recommendation is to bump the
> > receive buffer to 4M or even 8M. See
> > http://unbound.net/documentation/howto_optimise.html
> > 
> > Otherwise a lot of queries are dropped when the cache is cold.
> > 
> > I don't think there's a magic value that's right for everyone, so a
> > sysctl would be nice.  Maybe separate ones for tcp and udp.
> > 
> > I know similar sysctl's have been removed recently, and that they are
> > sometimes abused, but I'd say we have two valid use cases now.
> > 
> > So I'd love some more discussion.  :-)
> 
> since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
> attitude (which I think is not a bad one), I just bumped the SB_MAX value.
> If there is consensus that a sysctl would make sense, I'd also look into
> that approach and send a new patch.
 
SB_MAX is there to protect your system. It gives an upper bound on how much
memory a socket may allocate. The current value is a compromise. Running
with a huge SB_MAX may make one connection faster but it will cause
resource starvation issues on busy systems.
Sure you can bump it but be aware of the consequences (and that is why I
think we should not bump it at the moment). A proper change needs to
include some sort of resource management that ensures that we do not run
the kernel out of memory.

-- 
:wq Claudio



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Sebastian Reitenbach
On Sunday, December 4, 2011 13:24 CET, Camiel Dobbelaar  wrote: 
 
> On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> > the default maximum size of the tcp send and receive buffer used by the 
> > autosizing algorithm is way too small, when trying to get maximum speed 
> > with high bandwidth and high latency connections.
> 
> I have tweaked SB_MAX on a system too, but it was for UDP.
> 
> When running a busy Unbound resolver, the recommendation is to bump the
> receive buffer to 4M or even 8M. See
> http://unbound.net/documentation/howto_optimise.html
> 
> Otherwise a lot of queries are dropped when the cache is cold.
> 
> I don't think there's a magic value that's right for everyone, so a
> sysctl would be nice.  Maybe separate ones for tcp and udp.
> 
> I know similar sysctl's have been removed recently, and that they are
> sometimes abused, but I'd say we have two valid use cases now.
> 
> So I'd love some more discussion.  :-)

since they were removed, and given the keep-it-simple, too-many-knobs-are-bad
attitude (which I think is not a bad one), I just bumped the SB_MAX value.
If there is consensus that a sysctl would make sense, I'd also look into that
approach and send a new patch.


Sebastian

> 
> --
> Cam



Re: raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Camiel Dobbelaar
On 4-12-2011 13:01, Sebastian Reitenbach wrote:
> the default maximum size of the tcp send and receive buffer used by the 
> autosizing algorithm is way too small, when trying to get maximum speed with 
> high bandwidth and high latency connections.

I have tweaked SB_MAX on a system too, but it was for UDP.

When running a busy Unbound resolver, the recommendation is to bump the
receive buffer to 4M or even 8M. See
http://unbound.net/documentation/howto_optimise.html

Otherwise a lot of queries are dropped when the cache is cold.
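
For context, the kernel clamps such requests at SB_MAX. A minimal sketch
of what a daemon like Unbound runs into when it asks for a big UDP
receive buffer (assuming, as I read sbreserve(), that an oversized
request makes setsockopt() fail; the exact errno may vary):

#include <sys/socket.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int s, want = 4 * 1024 * 1024, got;
	socklen_t len = sizeof(got);

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		err(1, "socket");
	/* with the stock 256k SB_MAX this 4M request is refused */
	if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) == -1)
		warn("setsockopt(SO_RCVBUF, %d)", want);
	if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &got, &len) == -1)
		err(1, "getsockopt");
	printf("requested %d bytes, kernel granted %d\n", want, got);
	close(s);
	return 0;
}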

I don't think there's a magic value that's right for everyone, so a
sysctl would be nice.  Maybe separate ones for tcp and udp.

I know similar sysctl's have been removed recently, and that they are
sometimes abused, but I'd say we have two valid use cases now.

So I'd love some more discussion.  :-)

--
Cam



raise max value for tcp autosizing buffer [WAS: misc@ network tuning for high bandwidth and high latency]

2011-12-04 Thread Sebastian Reitenbach
Hi,

the default maximum size of the tcp send and receive buffer used by the 
autosizing algorithm is way too small when trying to get maximum speed with 
high bandwidth and high latency connections.

I tested to find the best speed over a connection between Germany and Canada,
with ping times around 180ms.
At the site in Germany I have a 155MBit Internet uplink, but the card in the
test box only negotiated 100MBit, so effectively a 100MBit uplink to the
Internet. (Why it only negotiated 100MBit is not the topic here...)
At the site in Canada I have even more than 155MBit of Internet uplink.

The maximum size of the buffer is defined in sys/sys/socketvar.h:

#define SB_MAX (256*1024)

With this value, downloading on the Canadian box from the host in Germany, I
got average download speeds of about 1.5MB/s, which is definitely not optimal.

Doubling the value to (512*1024), I got average speeds around 2.5-3MB/s.
Doubling the value again to (1024*1024), I got average speeds of about 5-6MB/s.
Doubling the value again to (2048*1024), I got average speeds of about 8MB/s.
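
Those numbers track the single-stream ceiling of window/RTT quite
closely; a throwaway check (hypothetical helper, not part of the patch):

#include <stdio.h>

int
main(void)
{
	const double rtt = 0.180;	/* ~180ms Germany <-> Canada */
	const int kb[] = { 256, 512, 1024, 2048 };
	size_t i;

	/* a single TCP stream cannot move more than window / RTT */
	for (i = 0; i < sizeof(kb) / sizeof(kb[0]); i++)
		printf("SB_MAX %4d KB -> at most %.1f MB/s\n",
		    kb[i], kb[i] / 1024.0 / rtt);
	return 0;
}

This prints ceilings of about 1.4, 2.8, 5.6 and 11.1 MB/s, matching the
measured 1.5, 2.5-3, 5-6 and 8 MB/s; the last falls short of its ceiling
mostly because the 100MBit link itself tops out around 12.5 MB/s.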

On the same box in Germany, I had a second hard disk with Linux openSUSE 11.2
installed. There, even with tweaking the buffers, I never got more than 4MB/s
average transfer speed, but that may be because I did not find the right
values to use.

The tcp buffer autosizing algorithm is at the end of sys/netinet/tcp_usrreq.c.
As far as I understand the comments and the code, it should be safe to raise
the value.
The buffer size is scaled up and down depending on its fill level and on
whether the receiver can ack packets within a given time. Also, in case of
low memory, the buffer is reset to a small value.
Therefore I think it should also be safe on busy servers with many connections
on high speed LANs.
But since I'm far from a network stack expert, I might be missing something.
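
To make that growth/shrink behaviour concrete, here is a deliberately
simplified toy model of such an autosizing rule (a sketch only, with
made-up thresholds; the real logic in tcp_usrreq.c differs in detail):

#include <stdio.h>

#define SB_DEFAULT	(16 * 1024)
#define SB_MAX		(2048 * 1024)

/* one autosizing decision: returns the new high-water mark */
static long
autosize(long hiwat, long used, int acked_in_time, int low_memory)
{
	if (low_memory)
		return SB_DEFAULT;	/* reset to a small value */
	if (acked_in_time && used >= hiwat / 2 && hiwat * 2 <= SB_MAX)
		return hiwat * 2;	/* scale up while the peer keeps up */
	if (used < hiwat / 4 && hiwat / 2 >= SB_DEFAULT)
		return hiwat / 2;	/* scale down when mostly idle */
	return hiwat;
}

int
main(void)
{
	long hiwat = SB_DEFAULT;
	int round;

	/* a connection that keeps its buffer full and acks in time */
	for (round = 0; round < 10; round++) {
		hiwat = autosize(hiwat, hiwat, 1, 0);
		printf("round %2d: hiwat %7ld\n", round, hiwat);
	}
	return 0;
}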

So the patch raises SB_MAX to (2048*1024), which seemed to be the best for my
use case so far. I also tested higher values, but that did not gain me
anything. I did not find a gigabit network card with good transfer rates, so I
was unable to utilize the full 155MBit on my side here in Germany; the value I
propose might therefore still be too small for even faster connections.

So please test, comment, or even OK?

cheers,
Sebastian

Index: sys/sys/socketvar.h
===================================================================
RCS file: /cvs/src/sys/sys/socketvar.h,v
retrieving revision 1.50
diff -u -r1.50 socketvar.h
--- sys/sys/socketvar.h 4 Jul 2011 22:53:53 -   1.50
+++ sys/sys/socketvar.h 4 Dec 2011 11:29:58 -
@@ -104,7 +104,7 @@
 	short	sb_flags;	/* flags, see below */
 	u_short	sb_timeo;	/* timeout for read/write */
 } so_rcv, so_snd;
-#define	SB_MAX		(256*1024)	/* default for max chars in sockbuf */
+#define	SB_MAX		(2048*1024)	/* default for max chars in sockbuf */
 #define	SB_LOCK		0x01		/* lock on data queue */
 #define	SB_WANT		0x02		/* someone is waiting to lock */
 #define	SB_WAIT		0x04		/* someone is waiting for data/space */