> I'd doubt that it would get much better because of that. Your CPUs can
> probably already handle way beyond 1GB/s in the parity-calculation loop,
> and parity is not calculated during reads anyway.

True... but (1) I only care about block writes, and (2) I'm hoping that
the cache-avoiding techniques KNI uses may help.
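
(By "cache-avoiding" I mean the non-temporal stores KNI adds.)  Just to
illustrate the idea (this is not the actual md driver code), an XOR loop
that streams its result past the caches would look roughly like this,
with made-up function and buffer names:

  #include <stddef.h>
  #include <xmmintrin.h>

  /* rough sketch: XOR two buffers and write the result with non-temporal
     (cache-bypassing) stores, so the parity data doesn't flush the caches */
  void xor_stream(float *dst, const float *a, const float *b, size_t n)
  {
      size_t i;
      for (i = 0; i < n; i += 4) {                     /* 4 floats = 16 bytes */
          __m128 va = _mm_load_ps(&a[i]);              /* 16-byte aligned loads */
          __m128 vb = _mm_load_ps(&b[i]);
          _mm_stream_ps(&dst[i], _mm_xor_ps(va, vb));  /* movntps, skips cache */
      }
      _mm_sfence();                                    /* drain write-combining buffers */
  }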

To be honest, I'm still a little confused that I can't seem to find
any s/w + h/w combination that will yield better than 30MB/sec writes
to 20 drives (18.2GB Cheetah-3's spread over 2 LVD channels, each
running at 80MB/sec).  Any tips?  Should the s/w RAID stripe size be
(# of h/w RAIDs) * (h/w RAID stripe size), or should it be
(# of physical drives) * (h/w RAID stripe size)?
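
For reference, the knob behind the s/w RAID stripe size is the chunk-size
line in /etc/raidtab; the RAID-0 entry across the two h/w RAID LUNs looks
roughly like the sketch below (device names and the 128KB value are just
placeholders until I know the right formula; chunk-size is in KB):

  raiddev /dev/md0
          raid-level              0
          nr-raid-disks           2
          persistent-superblock   1
          chunk-size              128
          device                  /dev/sdc1
          raid-disk               0
          device                  /dev/sdd1
          raid-disk               1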

All I care about is getting the fastest block writes possible.
All blocks will be 64KB in size.  Also, is there any way to
make sure that each block gets written on a stripe boundary
(i.e., any way to make sure that the ext2 fs underneath starts
each file on a 16-block/64KB boundary)?
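
One thing I can at least do is check the alignment after the fact with
the FIBMAP ioctl; a rough sketch (needs root, assumes the 4KB fs block
size from above):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>           /* FIBMAP */

  /* print the fs block where logical block 0 of a file lives, and whether
     that block number is a multiple of 16 (i.e. on a 64KB boundary) */
  int main(int argc, char **argv)
  {
      int fd, block = 0;

      if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
          perror("open");
          return 1;
      }
      if (ioctl(fd, FIBMAP, &block) < 0) {
          perror("FIBMAP");
          return 1;
      }
      printf("%s starts at fs block %d (%s64KB-aligned)\n",
             argv[1], block, (block % 16 == 0) ? "" : "not ");
      close(fd);
      return 0;
  }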

> Did you experiment with different chunk-sizes, and did you set the block-size
> on the e2fs to 4KB ?
> And did you remember the -R stride=  option ?

Yup, -b4096 -R stride=(chunk/4k)
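With a 64KB chunk, for example, that would be stride = 64KB/4KB = 16, i.e.

  mke2fs -b 4096 -R stride=16 /dev/md0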

> > Out of curiosity, any idea if bonnie is doing %5d somewhere and any
> > rates over 100MB/sec would get cut-off?
> 
> Nope  :)   (sorry, don't have the source handy)
> 
> But a   sync ; date ; dd if=/dev/md0 of=/dev/zero bs=1024k count=1024 ; date
> should give you an idea about what's going on.

Great idea...
Fri Aug 13 09:57:51 EDT 1999
1024+0 records in
1024+0 records out
Fri Aug 13 09:58:17 EDT 1999

~40MB/sec reading (1024MB in 26 seconds)... bleah.  Reversing if/of for writing:

Fri Aug 13 09:59:41 EDT 1999
1024+0 records in
1024+0 records out
Fri Aug 13 10:00:29 EDT 1999

Only 21MB/sec (1024MB in 48 seconds)... and I thought the bonnie run was
bad!  (That was 2GB in size on a 1GB RAM machine.)  Oh, well...

Thanks,

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
