On Mon, Jul 25, 2011 at 12:06 PM, Kevin Taylor <groucho.64...@hotmail.com> wrote:
>
> We have a RAID set up as our main fileserver (running samba 3.0.33 on linux,
> CentOS 5). The main disk area is an XFS partition of about 8TB. I'm using
> iostat to monitor disk I/O since we've gotten complaints about speed and I'm
> noticing that when I write something to the samba share, the write speed is
> horrible. For a 15GB file it is reporting to finish in about 20 minutes.
>
> With the command: dd if=/dev/zero of=/data/testfile bs=1024k count=10000
>
> I saw the 10GB write with a speed of 270MB/s, which is decent, so I'm not
> thinking there's anything wrong with the disk or raid controller.
>
dd isn't really a great test, since it leans heavily on caches and is about as sequential as you can get, whereas Samba access is more likely to be highly random. iometer with dynamo can get you a more "real workload" type of benchmark.

That said, to me this sounds like a block-size-and-alignment plus write-back type of issue. Here's some background and examples with xfs+lvm+mdadm; the basic concepts apply to hardware RAID too:

http://www.linux.sgi.com/archives/xfs/2007-06/msg00411.html

Even if you are getting acceptable performance locally, you may be able to do better if you aren't doing these things, and anything remote will greatly amplify any latency. Next, toss in Windows wanting to flush at 4K or 64K, which should pass straight through to the disk, causing a 128K stripe to flush with every 4K write, and multiple 128K stripes if things aren't aligned just right. Then add in the read + modify + recompute-parity + write operation that RAID5 does for partial-stripe writes and you can start to see where performance falls apart. Hardware RAID with a battery-backed write cache can alleviate this, since it won't wait for the disk spindles.

Samba can possibly be tweaked to match your stripe size; I don't know how off-hand.
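To make the RAID5 penalty concrete, here's a small sketch of the parity math behind a partial-stripe update. The 128 KiB chunk size, three data disks, and the per-I/O accounting are all illustrative, not a description of the original poster's array:

```python
# Sketch of RAID5's read-modify-write penalty for sub-stripe writes.
# Hypothetical layout: 128 KiB chunk per disk, 3 data disks + 1 parity
# disk; parity is the XOR of the data chunks.

CHUNK = 128 * 1024

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Replace one chunk in place; return (new_parity, disk I/O count)."""
    # RMW: read old data + old parity, recompute parity, write data + parity.
    new_parity = xor(xor(old_parity, old_data), new_data)
    return new_parity, 4          # 2 reads + 2 writes for one small update

# Initial full stripe: parity computed directly from the data chunks.
data = [bytes([i]) * CHUNK for i in range(3)]
parity = xor(xor(data[0], data[1]), data[2])

new_chunk = bytes([9]) * CHUNK
parity, ios = small_write(data[0], parity, new_chunk)
data[0] = new_chunk

# The incrementally updated parity matches a full recompute.
assert parity == xor(xor(data[0], data[1]), data[2])
print(ios)  # 4 disk I/Os to land a single small write
```

A full-stripe write, by contrast, needs no reads at all (parity is computed from the new data alone), which is why every misaligned 4K flush hurts so much.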
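On the alignment side, the mkfs.xfs stripe geometry discussed in the SGI thread boils down to a simple calculation. This sketch assumes a hypothetical 8-disk RAID5 with a 128 KiB chunk; substitute your own controller's values:

```python
# Deriving mkfs.xfs stripe-unit/stripe-width (-d su=...,sw=...) values
# for a hypothetical RAID5 array: 8 disks (7 data + 1 parity), 128 KiB
# per-disk chunk. Substitute the real chunk size and disk count.

chunk_kib = 128                  # per-disk chunk size ("stripe unit")
total_disks = 8
parity_disks = 1                 # RAID5 sacrifices one disk to parity
data_disks = total_disks - parity_disks

su = f"{chunk_kib}k"             # mkfs.xfs -d su=  (stripe unit)
sw = data_disks                  # mkfs.xfs -d sw=  (stripe width, in units)
full_stripe_kib = chunk_kib * data_disks

print(f"mkfs.xfs -d su={su},sw={sw} /dev/yourdevice")
print(f"full stripe = {full_stripe_kib} KiB")
# Writes issued in multiples of the full stripe, aligned to it, avoid the
# read-modify-write path entirely.
```

The key point is that sw counts data disks only; including the parity disk gives you a stripe width the filesystem can never actually fill, so every "full" write still triggers RMW.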