On 1 Dec 2009, at 12:05, Elliot Finley wrote:

> On Mon, Nov 30, 2009 at 6:29 PM, Hiroki Sato <[email protected]> wrote:
> Jack Vogel <[email protected]> wrote
>  in <[email protected]>:
> 
> jf> I will look into this, Hiroki; as time goes on, the older hardware does
> jf> not always get test cycles like one might wish.
> 
> 
> Here's some more info to throw into the mix.  I have several new boxes 
> running 8-Stable (a few hours after release).
> 
> Leaving all sysctls at their defaults, I get around 400 Mbps testing with netperf 
> or iperf.  If I set the following on the box running 'netserver' or 'iperf -s':
> 
> kern.ipc.maxsockbuf=16777216
> net.inet.tcp.recvspace=1048576
> 
> then I can get around 926 Mbps.  But if I then make those same changes on the 
> box running the client side of netperf or iperf, the performance drops back 
> down to around 400 Mbps.
> 
> All boxes have the same hardware; each has two 4-port Intel NICs.
> 
> e...@pci0:5:0:1: class=0x020000 card=0x10a48086 chip=0x10a48086 rev=0x06 
> hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82571EB Gigabit Ethernet Controller'
>     class      = network
>     subclass   = ethernet
> 
> Any pointers on further network tuning to get bidirectional link saturation 
> would be much appreciated.  These boxes are not in production yet, so anyone 
> who would like access to troubleshoot, just ask.

I've CC'd Lawrence Stewart in on this thread, as he's been doing work on the 
TCP stack lately and may have insight into what you're running into. 
Lawrence -- there's a bit of a back thread with configuration and problem 
details in the stable@ archives.
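As a rough sanity check on those buffer sizes: the receive buffer needed to saturate a link is approximately the bandwidth-delay product (link rate times round-trip time). A minimal sketch, assuming a 1 Gbit/s link and a hypothetical 8 ms RTT (the actual RTT on Elliot's boxes isn't stated in the thread):

```shell
# Bandwidth-delay product: bytes that must be in flight to keep the pipe full.
# Assumed example values -- 1 Gbit/s link, 8 ms round-trip time.
BITS_PER_SEC=1000000000
RTT_MS=8
BDP_BYTES=$(( BITS_PER_SEC / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP_BYTES} bytes"   # prints: BDP: 1000000 bytes
```

At that RTT the BDP comes out close to the 1048576-byte recvspace above, which is consistent with the one-sided tuning getting near line rate; a too-small buffer on either end caps throughput at roughly buffer/RTT.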

Robert