On 12/08/2011 11:50 PM, Dallin Jones wrote:
> I am having a horrible time getting my servers to actually perform at
> the throughput 3ware's marketing says I should be getting. I have used dd to
> copy from /dev/zero to an image file and I am only getting around
> 40-50 MB/s. I figure that I should be getting better than that,
> but I'm not quite sure if my benchmark is even the right thing.
>
> Currently things aren't too bad, but I am trying to set up GlusterFS in
> a replicated/distributed setup (10 w/3 drives each). Unfortunately,
> when I run dd against my Gluster volume, I only get 3 MB/s. This is
> crazy slow, to the point of being completely unusable. I know the Gluster
> team recommends gigabit Ethernet, but I figure that I should at least
> be able to max out my current 100 Mb network pretty easily. I'm not even
> coming close, though.
>
> So the question: I remember some coworkers previously saying there were
> issues with 3ware RAID cards, but that they were able to fix the problem.
> I wonder if I am hitting the same bug, or maybe it is something
> else.
>
> I have turned on the write cache as well as blockdev --setra 16384,
> but it doesn't appear to help much. Enabling the write cache did give
> a small speed increase, but nothing like I expected.
>
> Any thoughts on better ways to benchmark this, and also thoughts on
> tuning my 3ware controller? I am using a 32-bit version of Linux.
> Hopefully that is enough info to point me in the right direction.
>
>
> --Dallin Jones
>
> /*
> PLUG: http://plug.org, #utah on irc.freenode.net
> Unsubscribe: http://plug.org/mailman/options/plug
> Don't fear the penguin.
> */

Are these SATA, SAS, or Fibre Channel?
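
One benchmarking note first: dd from /dev/zero into a file mostly measures the page cache unless you force a flush, which could explain numbers that don't match the hardware. Something along these lines (the target path is just an example; point it at a filesystem on the 3ware array) gives a more honest sequential-write figure:

```shell
# conv=fdatasync makes dd flush data to disk before it reports a rate,
# so the page cache doesn't inflate the MB/s figure.
# /tmp/ddtest.img is only an example target path.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest.img
```

Using oflag=direct instead bypasses the page cache entirely, and `blockdev --getra /dev/sdX` will show the readahead value actually in effect (in 512-byte sectors).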
I've had a similar problem using both RAID 5 and RAID 6 on a low-end Hitachi array of SATA disks. The arrays are all 7 disks wide, and the controller cache is 1 GB. I can burst up to 300 MB/s on a single dd process on (very rare) occasion. Most of the time the best I can get is 40 MB/s sustained write. If I add a second thread doing reading or writing, my throughput drops to 4 MB/s sustained. On a single-threaded read, I get between 109 and 120 MB/s.

I've tried aligning my filesystems and LVM physical volumes to the stripe boundaries, and tried a number of other per-disk tweaks. Most things I've tried have had very little effect on the final throughput.

My SAS and Fibre Channel arrays with similar RAID configs have been able to burst up to 400 MB/s for a longer period of time, but eventually they still drop to about 100 MB/s sustained writes. But these are higher-end arrays with more cache.

After looking at the array's back-end statistics, it appears the array is doing a read/verify on every write, since there are as many back-end reads as writes, even though my dd process should be generating enough contiguous writes to ensure full-stripe writes.

Eventually I ran out of ideas and accepted that these "Enterprise" SATA drives may still be of the 1/4-duty-cycle variety and will not perform well on writes, especially when the best-case write load ends up being a mixed read/write load anyway.

If someone else knows of black voodoo magic that improves array performance, feel free to share.

Grazie,
;-Daniel Fussell

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
