s/w raid5 over 10 raid0s of 2 drives each (64k h/w stripe size)
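
(For context, the s/w layer is a plain md raid5; a raidtools
/etc/raidtab for it would look roughly like the sketch below -- the
device names and parity algorithm are just placeholders, the devices
standing in for the ten h/w raid0 logical drives.)

    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           10
            nr-spare-disks          0
            persistent-superblock   1
            parity-algorithm        left-symmetric
            # chunk-size was 64 for the first run below, 4 for the second
            chunk-size              64
            device                  /dev/sda1
            raid-disk               0
            device                  /dev/sdb1
            raid-disk               1
            # ... and so on through raid-disk 9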

s/w chunk size of 64k, mke2fs -b 4096 -R stride=16

     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047 18810 99.8 25180 27.1 14528 43.7 19286 78.0 38419 59.8 847.8 12.3

s/w chunk size of 4k, mke2fs -b 4096 -R stride=1

     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047 18642 99.5 27073 26.7 17203 50.3 20612 82.8 40463 63.6 704.2  9.2
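
(The stride values above are just the s/w chunk size divided by the
ext2 block size, i.e. the filesystems were made with something like
this, /dev/md0 standing in for the real md device:)

    # stride = s/w raid chunk size / ext2 block size
    mke2fs -b 4096 -R stride=16 /dev/md0    # 64k chunk / 4k blocks = 16
    mke2fs -b 4096 -R stride=1  /dev/md0    #  4k chunk / 4k blocks = 1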

Sure looks like the smaller s/w chunk size wins almost every throughput
column (only the random seeks still favor the 64k chunk)... I guess I'm
going to have to drop the h/w stripe size down as well. The
sync;date;dd;date test gave 34.13 MB/sec for 1MB-block reads and 21.33
MB/sec for 1MB-block writes (kind of bizarre that it disagrees with
bonnie, but bonnie goes through the filesystem and dd hits the raw
device directly *shrug*)
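
(The dd test is nothing fancy -- roughly the lines below, run against
the raw md device. The device name, block size and count are just
placeholders here, and the write pass of course destroys whatever fs
is on the device; the MB/sec figures come from dividing the MB moved
by the elapsed seconds between the two dates.)

    sync; date; dd if=/dev/md0 of=/dev/null bs=1024k count=2048; date
    sync; date; dd if=/dev/zero of=/dev/md0 bs=1024k count=2048; sync; date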

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
