[EMAIL PROTECTED] wrote:
>
> Dropping each of the 2 channels down to 4 drives started dropping
> the performance...barely. That I'm still getting 99.6% CPU util on s/w
> raid0 over 2 h/w raid0's scares me, but I'll try the HZ and NR_STRIPES
> settings later on. I'm getting worried I'm not bottlenecking on anything
> scsi-related at all, and it's something else in the kernel *shrug*
I've seen some funny results at times running a single I/O benchmark
on various sw/hw raid levels.
Try running multiple parallel instances of bonnie. You might want
to write a script that starts with a single bonnie and then
increases the number of bonnies by one until you have more bonnies
running than disks.
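Something like this untested Python sketch might do it; the bonnie
invocation ("bonnie -d <dir> -s <size>"), the /mnt/raid mount point
and the disk count are assumptions you'd adjust for your box:

#!/usr/bin/env python
# Untested sketch: run rounds of 1..DISKS+1 parallel bonnie instances
# and time each round, so you can see where aggregate throughput peaks.
# The bonnie flags (-d scratch dir, -s size in MB), the mount point and
# the disk count below are assumptions -- adjust them for your setup.
import os
import subprocess
import time

DISKS = 8              # number of disks in the array (assumption)
SIZE_MB = 1024         # per-instance file size; keep it well above RAM
MOUNT = "/mnt/raid"    # hypothetical mount point of the RAID volume

for count in range(1, DISKS + 2):     # from 1 bonnie up to disks + 1
    procs = []
    start = time.time()
    for i in range(count):
        workdir = os.path.join(MOUNT, "bonnie.%d" % i)
        os.makedirs(workdir, exist_ok=True)
        out = open(os.path.join(workdir, "bonnie.out"), "w")
        procs.append(subprocess.Popen(
            ["bonnie", "-d", workdir, "-s", str(SIZE_MB)], stdout=out))
    for p in procs:
        p.wait()
    elapsed = time.time() - start
    # Each instance moves SIZE_MB several times (write, rewrite, read
    # passes), so compare rounds by wall-clock time here and by the
    # per-instance numbers bonnie prints into each bonnie.out.
    print("%2d parallel bonnies finished in %.1f seconds" % (count, elapsed))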
In the normal case the summed throughput should stay roughly the
same regardless of the instance count, but in some cases I've
found that maximum total performance is reached when the number of
benchmarks running equals the number of disks minus one.
In the extreme case, I've seen a Nike array on HPUX (in a high
availability cluster configuration with multiple controllers,
etc.) where the total throughput of multiple bonnies was more than
twice the performance of a single instance.
I've also observed cases where some of the processes would more
or less block until others were finished, especially when the
number of parallel benchmarks exceeds the number of disks.
My experience is mostly from HPUX and IRIX. I haven't tried RAID
on Linux yet; I'm just starting to look at it for use in my company.
Anyone know what the maximum file system throughput of ext2 is
on, for instance, a PIII 500?
Terje Marthinussen
[EMAIL PROTECTED]