Steve Herber wrote:
Could you document your setup and test procedure?
I use NFS for most disk storage but have never tried to tune it much.
Any performance suggestions are welcome.

Thanks,

Steve Herber    [EMAIL PROTECTED]        work: 206-221-7262
Security Engineer, UW Medicine, IT Services    home: 425-454-2399

Hi,

My apologies for the late reply.

Now for the details of the setup. The links for the optimizations that I found
on the net are at the bottom.

Hardware.

Motherboard: MSI Neo4 Platinum

Ram: 2GB

CPU: AMD Athlon 64 3000+
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 47
model name      : AMD Athlon(tm) 64 Processor 3000+
stepping        : 0
cpu MHz         : 1809.307
cache size      : 512 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow
pni lahf_lm
bogomips        : 3622.42
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc


IDE drives: 2 x 80GB in RAID 1 using Linux software RAID for the OS.

Data drives: 8 x 300GB Maxtor SATA drives on a Promise EX8350 RAID card set to
RAID 5 with the stripe size set to 64KB. The drives are split into two groups of
four, so each group is a logical drive. The Promise RAID card is in a PCI
Express slot.

Ethernet: two onboard Gigabit Ethernet controllers, one Marvell and one Nvidia,
plus an Intel Gigabit Ethernet card in a PCI slot.

Here is the output of the "lspci" command.
0000:00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev 
a3)
0000:00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev a3)
0000:00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
0000:00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)
0000:00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)
0000:00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
0000:00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev a2)
0000:00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev a3)
0000:00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
0000:00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
0000:00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
0000:00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
0000:01:06.0 VGA compatible controller: S3 Inc. 86c325 [ViRGE] (rev 06)
0000:01:08.0 Ethernet controller: Intel Corporation 82544GC Gigabit Ethernet
Controller (Copper) (rev 02)
0000:02:00.0 PCI bridge: Intel Corporation 80333 Segment-A PCI Express-to-PCI
Express Bridge
0000:02:00.2 PCI bridge: Intel Corporation 80333 Segment-B PCI Express-to-PCI
Express Bridge
0000:03:0e.0 RAID bus controller: Promise Technology, Inc.: Unknown device 8350
0000:05:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 Gigabit
Ethernet Controller (rev 15)



Software.

OS: 64-bit Gentoo Linux
Linux version 2.6.14-gentoo ([EMAIL PROTECTED]) (gcc version 3.4.4 (Gentoo
3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8)) #1 Tue Nov 1 19:01:57 SGT 2005

Filesystem: ext3 for both the OS and the data drives. I tried XFS, but there
were some problems with the earlier kernel, so I went with ext3. Some tests with
XFS on kernel 2.6.14 looked good; I will revisit it once I can get better speed
out of NFS.
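On the NFS server side, one knob from the tuning guides that I still have to test is the number of nfsd threads. The default of 8 is said to be low for gigabit links; the 16 below is just a guess on my part, not a measured value:

```shell
# Bump the number of kernel nfsd threads from the default of 8
# (16 is a guess; the right number depends on client load).
rpc.nfsd 16

# Afterwards, /proc/net/rpc/nfsd shows how busy the threads are.
cat /proc/net/rpc/nfsd
```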

This is what I have set on /etc/exports
/VideoDisk01    *(async,insecure,no_root_squash,rw,nohide)
/VideoDisk02    *(async,insecure,no_root_squash,rw,nohide)
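For completeness, the clients mount these exports with something like the line below. The server address 10.0.0.1 and the local mount point are placeholders, and the rsize/wsize values are taken from the tuning guides linked at the bottom, not something I have measured on this box:

```shell
# Hypothetical client-side mount for the exports above.
# 10.0.0.1 stands in for the server's address; rsize/wsize come from
# the NFS tuning guides linked below.
mount -t nfs -o rw,tcp,hard,intr,rsize=32768,wsize=32768 \
    10.0.0.1:/VideoDisk01 /mnt/VideoDisk01
```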

Here are some tests I did on the drives on the Promise RAID card.
# time dd if=/dev/zero of=/VideoDisk02/test/lala bs=8192k count=512
512+0 records in
512+0 records out

real    0m46.754s
user    0m0.008s
sys     0m16.321s

That is 4GB (512 x 8MB) in about 47 seconds, or roughly 87MB per second.
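The arithmetic, as a quick shell check:

```shell
# dd wrote 512 blocks of 8 MiB each = 4096 MiB, in ~47 s of wall-clock time
mib=$((512 * 8))
secs=47
echo "$((mib / secs)) MB/s"   # prints "87 MB/s"
```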

More tests on the RAID drives.
# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   3004 MB in  2.00 seconds = 1501.91 MB/sec
 Timing buffered disk reads:  336 MB in  3.01 seconds = 111.70 MB/sec

# hdparm -Tt /dev/sdb

/dev/sdb:
 Timing cached reads:   3032 MB in  2.00 seconds = 1515.91 MB/sec
 Timing buffered disk reads:  356 MB in  3.00 seconds = 118.66 MB/sec

Tests on the Ethernet connection.
# nttcp -T 10.0.0.100
     Bytes  Real s   CPU s Real-MBit/s  CPU-MBit/s   Calls  Real-C/s   CPU-C/s
l  8388608    0.09    0.02    788.3382   2918.2842    2048  24058.17   89059.0
1  8388608    0.09    0.08    776.0673    838.8084    2195  25383.65   27435.8

Another test, done a second later.
# nttcp -T 10.0.0.100
     Bytes  Real s   CPU s Real-MBit/s  CPU-MBit/s   Calls  Real-C/s   CPU-C/s
l  8388608    0.50    0.05    134.7476   1369.7643    2048   4112.17   41801.9
1  8388608    0.52    0.07    129.5928    986.8370    3355   6478.79   49335.3

It looks like my network speed follows a bell curve: it climbs steadily to nearly 800Mbit/s (788Mbit/s above), then slowly comes down to around 130Mbit/s, and then the cycle of rising and falling starts again.
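To watch that pattern over time I can script repeated nttcp runs and pull out just the throughput column. The awk extraction below works on one sample result line from above; wrapped in a loop like "for i in $(seq 1 10); do nttcp -T 10.0.0.100 | awk ...; sleep 1; done" it would show whether the up/down cycle repeats:

```shell
# One local ("l") result line from nttcp, as captured above.
sample='l  8388608    0.09    0.02    788.3382   2918.2842    2048  24058.17   89059.0'

# Field 5 is Real-MBit/s (field 1 is the "l"/remote marker, 2 is Bytes,
# 3 is Real s, 4 is CPU s), so print just the measured throughput.
echo "$sample" | awk '/^l/ {print $5 " Mbit/s"}'   # prints "788.3382 Mbit/s"
```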

It could be my network; I need to do more checking.
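Based on the TCP tuning links at the bottom, the next thing I plan to try is raising the kernel's socket buffer limits for gigabit. This is just a sketch with values copied from those guides; I have not measured the effect on this box yet:

```shell
# Raise the maximum TCP send/receive buffer sizes for gigabit links.
# Values are taken from the TCP tuning guides linked below, not measured here.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608

# min / default / max auto-tuning bounds for TCP read and write buffers.
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
```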

Please do comment or make some suggestions.

One more thing: kernel 2.6.15 is out, and looking at the changelog there seem to be many changes with regard to NFS. Maybe 2.6.15 will be better; I will do more tests.

Here are all the links I collected for the optimizations.

http://www.enterpriseitplanet.com/networking/features/article.php/3497796
http://www-didc.lbl.gov/TCP-tuning/linux.html
http://libarynth.f0.am/cgi-bin/twiki/view/Libarynth/NFSonOsX
http://www.metaconsultancy.com/whitepapers/nfs.htm
http://naeblis.cx/rtomayko/2004/08/09/NFSAutomountOSX
http://astcomm.net/tech/nfs_howto/client/
http://dawuss.student.utwente.nl/blog/entry/38
http://lists.freebsd.org/pipermail/freebsd-net/2004-December/005997.html
http://sial.org/howto/osx/automount/
http://datatag.web.cern.ch/datatag/howto/tcp.html
http://bio3d.colorado.edu/tor/sadocs/filesys/nfs.html
http://www.psc.edu/networking/projects/tcptune/
http://people.redhat.com/alikins/system_tuning.html#tcp







--
[email protected] mailing list
