Re: optimising raid performance
> dual PII-500 single 233 :)

Don't underestimate the StrongArm; if the Mylex firmware writers have
half a brain I think you'd be surprised.

> s/w raid 1 over 2 6-disk h/w raid0's is what I meant to ask for
> I trust the strongarm to handle raid0, but that's about it :)

OK, I'll give it a go on Thursday afternoon when I can get my hands on
the kit.

> what stripe/chunk sizes are you using in the raid?  My exp. has been
> smaller is better down to 4k, although I'm not sure why :)

We're currently using 8k, but with our load, if I can go smaller I
will.  Is there any merit in using -R on mke2fs if we're doing raid1?

Chris Good - Technical Architect - Webtop.com
--
Chris Good - Dialog Corp.  The Westbrook Centre, Milton Rd, Cambridge UK
Phone: 01223 715000  Fax: 01223 715001  http://www.dialog.com
Re: optimising raid performance
[ Monday, January 10, 2000 ] [EMAIL PROTECTED] wrote:
> I've currently got a hardware raid system that I'm maxing out so any
> ideas on how to speed it up would be gratefully received.

Just some quick questions for additional info...

- What kinds of numbers are you getting for performance now?
- I'd check bonnie with a filesize of twice your RAM, and then get
  http://www.icon.fi/~mak/tiotest/tiotest-0.16.tar.gz, do a make, and
  then run ./tiobench.pl --threads 16 (a command sketch follows this
  message).
- Did you get a chance to benchmark raid 1+0 against 0+1?
- Of the 12 disks over 2 channels, which are in the raid 0+1, which in
  the raid5, and which is the spare?  How are the drive packs
  configured?
- Is the card using its write cache?  Write-back or write-through?
- Do you have the latest firmware on the card?
- Which kernel are you using?
- What block size is the filesystem?  Did you create it with a -R
  param?
- What is your percentage of I/O operations that are writes?

> Since there is a relatively high proportion of writes a single raid5
> set seems to be out.  The next best thing looks like a mirror but
> which is going to be better performance wise, 6 mirror pairs striped
> together or mirroring 2 stripes of 6 discs?

IMO raid 1+0 for 2 stripes of 6 discs (better be around when a drive
goes, though, as that second failure will have about a 55% chance of
taking out the array :)

> Does the kernel get any scheduling benefit by seeing the discs and
> doing things in software?  As you can see the machine has a very low
> cpu load so I'd quite happily trade some cpu for io throughput...

I'd really love to see you do a s/w raid 1 over 2 6-disk raid0's from
the card and check that performance-wise...  I believe putting the
raid1 and raid0 logic on separate processors could help, and worst
case it'll give a nice test case for any read-balancing patches
floating around (although you've noted that you are more
write-intensive).

James
-- Miscellaneous Engineer --- IBM Netfinity Performance Development
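[A minimal sketch of that benchmark run, assuming 256MB of RAM (hence
a 512MB bonnie file) and that the filesystem under test is mounted on
/data1 -- both values are assumptions, adjust for the real box:

    # bonnie with a file size of twice RAM, run inside the target fs
    bonnie -d /data1 -s 512

    # build and run tiobench with 16 threads (assuming the tarball
    # unpacks into tiotest-0.16)
    tar xzf tiotest-0.16.tar.gz
    cd tiotest-0.16
    make
    ./tiobench.pl --threads 16
]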
Re: optimising raid performance
> - What kinds of numbers are you getting for performance now?

Kinda hard to say; we're far more interested in random IO rather than
sequential stuff.

> and do a make and then ./tiobench.pl --threads 16

/tiotest/ is a single root disk
/data1 is the 8 disc raid 1+0
/data2 is the 3 disc raid 5

All discs are IBM DMVS18D, so 18Gb 10k rpm, 2mb cache, sca2 discs.
http://www.storage.ibm.com/hardsoft/diskdrdl/prod/us18lzx36zx.htm

Directory    Size(MB)  BlkSz  Threads    Read    Write     Seeks
-----------  --------  -----  -------  -------  -------  --------
/tiotest/       512     4096      1     18.092    6.053   2116.40
/tiotest/       512     4096      2     16.363    5.792    829.876
/tiotest/       512     4096      4     17.164    5.882   1520.91
/tiotest/       512     4096      8     14.533    5.852    932.401
/tiotest/       512     4096     16     16.244    5.806   1731.60
/data1/tiot     512     4096      1     29.257   14.406   2234.63
/data1/tiot     512     4096      2     38.124   13.734       .11
/data1/tiot     512     4096      4     31.373   12.864   5128.20
/data1/tiot     512     4096      8     29.341   12.460   4705.88
/data1/tiot     512     4096     16     34.806   12.121       .55
/data2/tiot     512     4096      1     23.063   16.269   1851.85
/data2/tiot     512     4096      2     21.576   16.754   1498.12
/data2/tiot     512     4096      4     17.908   17.021   3125.00
/data2/tiot     512     4096      8     15.773   17.107   3478.26
/data2/tiot     512     4096     16     15.394   16.920   4166.66

> - Did you get a chance to benchmark raid 1+0 against 0+1?
> - Of the 12 disks over 2 channels, which are in the raid0+1, which in
>   the raid5, which spare?  How are the drive packs configured?

6 discs on each channel; discs 1-4 of each pack form the raid 6
stripe, discs 5 and 6 on ch1 and disc 5 on ch2 are in the raid5, and
disc 6 on ch2 is the spare.

> - Is the card using its write cache?  Write-back or write-through?

It's using write-back on both devices.

> - Do you have the latest firmware on the card?

Pretty much; the firmware changelog implies the only real change is to
support PCI hotswap.

> - Which kernel are you using?

Standard Redhat 6.1 kernel:
Linux xxx.yyy.zzz 2.2.12-20smp #1 SMP Mon Sep 27 10:34:45 EDT 1999
i686 unknown

> - What block size is the filesystem?  Did you create it with a -R
>   param?

4k blocksize; didn't use -R as this is currently hardware raid.

> - What is your percentage of I/O operations that are writes?

Approx 50%.

> IMO raid 1+0 for 2 stripes of 6 discs (better be around when a drive
> goes, though, as that second failure will have about a 55% chance of
> taking out the array :)

Can't fault your logic there...  But don't you mean 0+1, i.e. 2
stripes of 6 discs mirrored together, rather than 1+0 (6 mirrored
pairs striped together)?

> I'd really love to see you do a s/w raid 1 over 2 6-disk raid0's from
> the card and check that performance-wise...  I believe putting the
> raid1 and raid0 logic on separate processors could help, and worst
> case it'll give a nice test case for any read-balancing patches
> floating around (although you've noted that you are more
> write-intensive)

Which would you like me to try: all software, or part in software and
part in hardware?  And if the latter, which part?  The raid card seems
pretty good (233MHz strongarm onboard) so I doubt that is limiting us.

thanks, Chris
--
Chris Good - Dialog Corp.  The Westbrook Centre, Milton Rd, Cambridge UK
Phone: 01223 715000  Fax: 01223 715001  http://www.dialog.com
Re: optimising raid performance
[ Tuesday, January 11, 2000 ] [EMAIL PROTECTED] wrote:
> > I'd really love to see you do a s/w raid 1 over 2 6-disk raid0's
> > from the card and check that performance-wise...  I believe putting
> > the raid1 and raid0 logic on separate processors could help, and
> > worst case it'll give a nice test case for any read-balancing
> > patches floating around (although you've noted that you are more
> > write-intensive)
>
> Which would you like me to try: all software, or part in software and
> part in hardware?  And if the latter, which part?  The raid card
> seems pretty good (233MHz strongarm onboard) so I doubt that is
> limiting us.

dual PII-500 single 233 :)

s/w raid 1 over 2 6-disk h/w raid0's is what I meant to ask for.
I trust the strongarm to handle raid0, but that's about it :)

what stripe/chunk sizes are you using in the raid?  My exp. has been
smaller is better down to 4k, although I'm not sure why :)

James
-- Miscellaneous Engineer --- IBM Netfinity Performance Development
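[For reference, a minimal sketch of the /etc/raidtab that setup might
use with the 0.90 raidtools.  The two 6-disk hardware raid0 packs are
assumed to appear as /dev/rd/c0d0p1 and /dev/rd/c0d1p1 (DAC960-style
names; adjust for the real system), mirrored into one md device:

    raiddev /dev/md0
        raid-level            1
        nr-raid-disks         2
        nr-spare-disks        0
        persistent-superblock 1
        chunk-size            4

        # each "disk" here is a whole 6-disk hardware raid0 pack
        device                /dev/rd/c0d0p1
        raid-disk             0
        device                /dev/rd/c0d1p1
        raid-disk             1

Then mkraid /dev/md0 builds the mirror; the StrongArm only ever sees
plain raid0 work, while the two PII-500s handle the raid1 logic.]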
Re: optimising raid performance
[ Tuesday, January 11, 2000 ] [EMAIL PROTECTED] wrote:
> > what stripe/chunk sizes are you using in the raid?  My exp. has
> > been smaller is better down to 4k, although I'm not sure why :)
>
> We're currently using 8k, but with our load, if I can go smaller I
> will.  Is there any merit in using -R on mke2fs if we're doing raid1?

I've always interpreted -R stride= as meaning "how many ext2 blocks to
gather before sending to the lower-level block device".  This way the
block device can deal with things more efficiently.  Since stride=
must default to 1 (I can't see how it could pick a different one), any
time your device (h/w or s/w raid) is using larger block sizes, -R
would seem to be a good choice (for 8K block sizes, stride=2).

The raid1 shouldn't matter as much, so try without stride= and then
with stride=2 (if still using 8K block sizes).

I get the feeling that the parallelism vs. efficiency tradeoff in
block sizes still isn't fully understood, but lots of random writes
should almost certainly do best with the smallest block sizes
available, down to a single page (4k).

As always, I'd like to solicit other views on this :)

James
-- Miscellaneous Engineer --- IBM Netfinity Performance Development
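[A minimal sketch of the two mke2fs runs being suggested, assuming the
array is /dev/md0 (an assumed name) with 4k filesystem blocks over an
8k raid chunk size:

    # plain 4k-block filesystem, no stride hint
    mke2fs -b 4096 /dev/md0

    # same, but telling ext2 about the 8k chunks (2 x 4k blocks)
    mke2fs -b 4096 -R stride=2 /dev/md0
]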
optimising raid performance
I've currently got a hardware raid system that I'm maxing out, so any
ideas on how to speed it up would be gratefully received.

Current system is a dual P2 500 with a Mylex eXtremeRAID with 6 10k
rpm discs on each of its 2 channels.  Of this I have an 8 disc raid 6
(0+1) with 4 discs on each channel, and a 3 disc raid5; the remaining
disc is the spare.  Root FS is on a separate disc on a separate
controller.

Load is lots of small (4k) reads and writes.  We have 16 processes
using the filesystem and they tend to be blocking most of the time.

My options seem to be:
1) Combine the 2 raid groups
2) Use some software raid rather than hardware.
3) Add more disks.

1) seems a pretty obvious choice; since the raid controller doesn't
seem to be the bottleneck I don't think 2) will help, and 3) would
cost money...

Since there is a relatively high proportion of writes a single raid5
set seems to be out.  The next best thing looks like a mirror, but
which is going to be better performance-wise: 6 mirror pairs striped
together, or mirroring 2 stripes of 6 discs?  (A config sketch of the
first option follows this message.)  Does the kernel get any
scheduling benefit by seeing the discs and doing things in software?
As you can see the machine has a very low cpu load, so I'd quite
happily trade some cpu for io throughput...

 procs                 memory     swap         io     system       cpu
 r  b  w  swpd   free   buff  cache  si  so  bi  bo   in   cs  us sy id
 0 15  2     0   2440 154028 329396   0   0 350 184 1024 1910   0  5 95
 0 16  8     0   3128 153664 329072   0   0 364 179 1135 2093   0  4 95
 0 11  0     0   1924 153864 330084   0   0 316 243 1023 1916   0  4 96
 0  9  0     0   2420 153244 330204   0   0 390 156 1107 2095   0  4 96
 0 16  3     0   3000 153468 329400   0   0 381 374 1006 2181   0  4 95
 0  1  0     0   2236 153460 330160   0   0 390  95 1035 1900   0  4 96

--
Chris Good - Dialog Corp.  The Westbrook Centre, Milton Rd, Cambridge UK
Phone: 01223 715000  Fax: 01223 715001  http://www.dialog.com
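[For concreteness, a sketch of the "6 mirror pairs striped together"
(1+0) option as 0.90 raidtools config.  All device names here are
assumptions: it supposes the 12 discs were exported individually as
/dev/sda1 through /dev/sdl1 rather than hidden behind the controller,
and that layered md (raid0 over raid1) is workable on this kernel:

    # one of six mirror pairs; md1-md5 repeat this with sdc1/sdd1,
    # sde1/sdf1, and so on
    raiddev /dev/md0
        raid-level            1
        nr-raid-disks         2
        chunk-size            4
        persistent-superblock 1
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1

    # stripe the six mirrors together with an 8k chunk size
    raiddev /dev/md6
        raid-level            0
        nr-raid-disks         6
        chunk-size            8
        persistent-superblock 1
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1
        device                /dev/md2
        raid-disk             2
        device                /dev/md3
        raid-disk             3
        device                /dev/md4
        raid-disk             4
        device                /dev/md5
        raid-disk             5

The "mirroring 2 stripes of 6 discs" (0+1) alternative just inverts
the layering: two 6-disc raid0 sets with a raid1 mirror on top.]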