I will put the throughput vs. clients data up on a web page this week. I'll send out a URL in a day or two.
-Andrew

On Thursday 01 August 2002 2:58 am, Goetz Rieger wrote:
> Hey Andrew,
>
> On Wed, 31 Jul 2002 12:56:04 -0500
> Andrew Theurer <[EMAIL PROTECTED]> wrote:
>
> great stuff. As you can imagine, I have never had the chance to run "real"
> NetBench runs, so I am not very familiar with interpreting the results...
>
> Could you publish more data, like a plot of throughput against clients?
>
> Any performance data on Samba/Linux is relevant. ;-)
>
> Regards,
> Goetz
>
> > I thought I'd share some NetBench results on one of our servers.
> >
> > Server:
> >   4 x 1.5 GHz P4, 256K L3, 32 MB L4, 2 GB memory
> >   4 x 1 Gbps acenic ethernet
> >   14 SCSI disks in hardware RAID1 with 128 MB writeback NVRAM
> >   SuSE 8.0, 2.4.18 kernel
> >   Samba 2.2.3a
> >   Ext3 fs
> >
> > Clients:
> >   48 x 866 MHz PIII running Windows 2000
> >
> > Results:
> >
> >   Baseline                   576 Mbps
> >   ext3 data=writeback        623 Mbps
> >   samba smblog=1             673 Mbps
> >   sendfile/zerocopy          801 Mbps
> >   O(1) scheduler             809 Mbps
> >   Evenly affined IRQs        800 Mbps  *needed to get process affinity correct
> >   Process affinity           848 Mbps
> >   /proc/sys/net/hll=764      853 Mbps
> >   case sens enforced         895 Mbps
> >   samba spinlocks            912 Mbps
> >   dcache read copy update    923 Mbps  (also had 5% idle time)
> >
> > I have also achieved 1002 Mbps with ext2.
> >
> > Some other things I think may be worth investigating:
> >
> > gettimeofday(). Samba calls this a lot, once for every reply I think, to
> > check for connection timeouts. This means we go into kernel mode every
> > single time we call it, something I'd like to avoid, and I don't think we
> > need the resolution of gettimeofday() for this anyway. How about some sort
> > of timer in samba with 1-second granularity? I admit I have not thought
> > through how to do this, but there has to be a way.
> >
> > Locking for the samba db. Spinlocks got us a little better than flocks,
> > but again I'd rather not go into kernel mode every time. Has anyone
> > considered using Rusty Russell's futexes for this?
> >
> > Hyperthreading. With 2 physical processors I get 25% better results!
> > With 4 physical processors I only get 2% better. I may be running into
> > other bottlenecks in the 4 physical / 8 logical CPU case, so I hope there
> > is room for improvement. However, there are probably a lot more 2-way P4
> > systems out there than 4-way ones, so I bet this could really benefit a
> > lot of people.
> >
> > Anyway, I have a lot of analysis data (mostly kernel profiles) that goes
> > along with this if anyone is interested in looking at it. My intention was
> > to make some sort of article out of this, so I wanted to get some feedback
> > from this list. Is this stuff interesting or relevant to anyone out there?
> > What else would you like to see in terms of Samba on Linux performance?
> >
> > Thanks,
> >
> > Andrew Theurer
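
A couple of sketches to make the quoted points more concrete.

For the "Process affinity" line in the table above, one way to pin a process is the sched_setaffinity() interface. This is only an illustration, assuming a kernel and glibc that provide the call (the O(1) scheduler patches are what made it available around 2.4.18); it is not necessarily how the numbers above were produced:

/* Hypothetical sketch: pin the current process to one CPU, e.g. so each
 * smbd stays on the processor paired with its NIC's interrupts.
 * Illustration only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int pin_to_cpu(int cpu)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);

    /* pid 0 = apply to the calling process */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    int cpu = (argc > 1) ? atoi(argv[1]) : 0;

    if (pin_to_cpu(cpu) == 0)
        printf("pinned to CPU %d\n", cpu);
    return 0;
}

The IRQ half of the pairing is just a CPU bitmask written to /proc/irq/<N>/smp_affinity.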
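
On the gettimeofday() point, here is a rough sketch of the 1-second-granularity timer idea (illustration only, not Samba code; the names coarse_now, coarse_clock_init and connection_timed_out are made up). A SIGALRM handler refreshes a cached time_t once a second, so the per-reply timeout check becomes a plain memory read:

/* Hypothetical sketch: cache the current time with 1-second granularity
 * so the per-reply timeout check avoids a gettimeofday() syscall. */
#include <signal.h>
#include <time.h>
#include <unistd.h>

static volatile time_t coarse_now;      /* refreshed once a second */

static void tick(int sig)
{
    (void)sig;
    coarse_now = time(NULL);            /* one syscall per second, not per reply */
    alarm(1);                           /* re-arm the timer */
}

void coarse_clock_init(void)
{
    struct sigaction sa = { 0 };

    sa.sa_handler = tick;
    sa.sa_flags = SA_RESTART;           /* don't break blocking I/O with EINTR */
    sigaction(SIGALRM, &sa, NULL);
    tick(SIGALRM);                      /* prime the cache and start the alarm */
}

/* In the reply loop, the check is now just a memory read: */
int connection_timed_out(time_t last_activity, time_t timeout_secs)
{
    return (coarse_now - last_activity) > timeout_secs;
}

Each smbd would call coarse_clock_init() once at startup; the cost drops to one time() call per second per process instead of one gettimeofday() per reply.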
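
And on the futex question, something along the lines of the usual futex lock pattern could keep the uncontended case entirely in user space. Again just a sketch against the futex interface, written with C11 atomics for brevity, not actual tdb code; the lock word would have to live in memory shared by the smbd processes, e.g. inside the mmap'd tdb:

/* Rough sketch of a futex-based lock: 0 = free, 1 = held,
 * 2 = held with waiters.  Illustration only. */
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

static long futex(atomic_int *addr, int op, int val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

void db_lock(atomic_int *lock)
{
    int expected = 0;

    /* Fast path: an uncontended acquire never enters the kernel. */
    if (atomic_compare_exchange_strong(lock, &expected, 1))
        return;

    /* Slow path: mark the lock contended and sleep until woken. */
    while (atomic_exchange(lock, 2) != 0)
        futex(lock, FUTEX_WAIT, 2);
}

void db_unlock(atomic_int *lock)
{
    /* Only enter the kernel if someone may actually be waiting. */
    if (atomic_exchange(lock, 0) == 2)
        futex(lock, FUTEX_WAKE, 1);
}

The point is that lock and unlock only make a syscall when there is real contention, which is exactly the case where flock/fcntl hurts us today.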
