Dropping each of the 2 channels down to 4 drives started dropping
the performance... barely. The 99.6% CPU util I'm still seeing on s/w
raid0 over 2 h/w raid0's scares me, but I'll try the HZ and NR_STRIPES
settings later on. I'm getting worried that I'm not bottlenecking on anything
SCSI-related at all, and that it's something else in the kernel *shrug*
raiddev /dev/md0
raid-level 0
nr-raid-disks 2
nr-spare-disks 0
chunk-size 4
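(For completeness: a full raidtab stanza also needs the device / raid-disk
pairs. The names below are just inferred from the dd command further down,
not pasted from the actual config, and the whole-drive run is assumed; the
partition runs would point at the first partition on each LUN instead.)

device /dev/rd/c0d0
raid-disk 0
device /dev/rd/c0d1
raid-disk 1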
Partitions:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
           MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         2047 22015 99.6 54881 44.3 20914 46.9 23882 88.9 42410 62.0 609.7  5.6
Last time I started with whole drives, got bad performance, then
went to partitions and got good performance, so this time I did them
in the reverse order. I ran dd if=/dev/zero of=/dev/rd/c0d{0,1} bs=512
count=100 to make sure the partition tables were clear.
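(Expanded, that brace expression is just two commands, assuming a shell that
does brace expansion; zeroing the first 100 sectors covers the partition
table in sector 0 with plenty of slack:

dd if=/dev/zero of=/dev/rd/c0d0 bs=512 count=100
dd if=/dev/zero of=/dev/rd/c0d1 bs=512 count=100
)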
Whole drives:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
           MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         2047 22238 99.0 54198 43.2 20813 47.0 24282 90.4 42598 60.5 623.5  7.3
So I have no idea what was causing my previous performance problems
with whole drives but not with partitions :) Of course, this does mean 1)
my CPU util is still insanely high for a 4-way Xeon and 2) I'm still
writing much faster than reading :)
James
--
Miscellaneous Engineer --- IBM Netfinity Performance Development