Optimizing the md driver for Bonnie is, IMHO, foolishness. Bonnie is a
sequential read/write test and does not produce numbers that mean much
for typical data access patterns. Example: the read_ahead value is
bumped way up (1024 sectors, i.e. 512 KB), which kills performance for
more normal accesses, since the average contiguous request Linux issues
is much smaller than 512 KB. Yes, this makes Bonnie look better, but it
does not help a real working system.
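
To make the tradeoff concrete, here is a minimal microbenchmark sketch.
It is NOT Bonnie and NOT the md driver's logic; the /dev/md0 path, the
64 MB span, and the 4 KB "typical" request size are my own assumptions.
It times a pass of large sequential reads (where a big read_ahead
streams usefully ahead of you) against a pass of small scattered reads
(where everything read ahead past 4 KB is wasted I/O):

/*
 * Minimal sketch, NOT Bonnie and NOT the md driver's code: time large
 * sequential reads vs. small random reads on the same device, to show
 * the two patterns a big read_ahead value trades off between.
 * Device path, span and request sizes are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define DEV       "/dev/md0"    /* assumed device under test       */
#define SPAN      (64UL << 20)  /* 64 MB region to read within     */
#define SEQ_CHUNK (512 * 1024)  /* one Bonnie-sized streaming read */
#define RND_CHUNK 4096          /* one small "typical" request     */
#define RND_COUNT 512

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    char *buf = malloc(SEQ_CHUNK);
    unsigned long off;
    double t0;
    int i, fd = open(DEV, O_RDONLY);

    if (fd < 0 || !buf) {
        perror(DEV);
        return 1;
    }

    /* Sequential pass: read-ahead streams usefully ahead of us. */
    t0 = now();
    for (off = 0; off + SEQ_CHUNK <= SPAN; off += SEQ_CHUNK)
        if (pread(fd, buf, SEQ_CHUNK, off) != SEQ_CHUNK) {
            perror("seq read");
            return 1;
        }
    printf("sequential: %.2f s\n", now() - t0);

    /* Random pass: every block read ahead past RND_CHUNK is wasted. */
    srand(42);
    t0 = now();
    for (i = 0; i < RND_COUNT; i++) {
        off = ((unsigned long)rand() % (SPAN / RND_CHUNK)) * RND_CHUNK;
        if (pread(fd, buf, RND_CHUNK, off) != RND_CHUNK) {
            perror("random read");
            return 1;
        }
    }
    printf("random:     %.2f s\n", now() - t0);

    close(fd);
    free(buf);
    return 0;
}

For the timings to mean anything, the span should be larger than RAM
(or the buffer cache flushed between passes); otherwise the random
pass just hits data the sequential pass already cached.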

It is nice to have high Bonnie results, but not at the expense of a
working system. I wish I knew of a more statistically oriented data
access test, something like Netbench but on the server side. The reason
Bonnie is so popular is that it is easy (and cheap).

In the RAID1 case, a Bonnie test will not highlight the advantages of
read balancing. Someone could tune the chunk size to work best with
Bonnie, but the best chunk size for a RAID1 run under Bonnie will most
likely be a bad choice for a normal operating system.
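
To illustrate what read balancing buys (and why a lone sequential
stream cannot show it), here is a toy sketch of a nearest-head policy.
It is NOT the md driver's actual balancing code, just the general
idea: a single sequential reader like Bonnie sticks to one disk, while
two interleaved readers end up spread across both mirrors.

/*
 * Toy RAID1 read-balancing illustration, NOT the md driver's code.
 * Each read is sent to the mirror whose head is nearest the
 * requested sector.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_MIRRORS 2

/* Where each disk's head ended up after its last read. */
static long last_sector[NR_MIRRORS];

static int pick_mirror(long sector)
{
    int i, best = 0;
    long dist, best_dist = -1;

    for (i = 0; i < NR_MIRRORS; i++) {
        dist = labs(last_sector[i] - sector);
        if (best_dist < 0 || dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    last_sector[best] = sector;
    return best;
}

int main(void)
{
    long seq[] = { 0, 8, 16, 24 };          /* one sequential stream  */
    long rnd[] = { 10000, 40, 10008, 48 };  /* two interleaved readers */
    int i;

    /* The sequential stream lands on disk 0 every time. */
    for (i = 0; i < 4; i++)
        printf("seq sector %5ld -> disk %d\n", seq[i], pick_mirror(seq[i]));

    /* The two interleaved readers split across disks 0 and 1. */
    for (i = 0; i < 4; i++)
        printf("rnd sector %5ld -> disk %d\n", rnd[i], pick_mirror(rnd[i]));
    return 0;
}

The win only shows up under concurrent, scattered reads -- exactly the
load that Bonnie's single sequential stream never generates.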

Please don't assume that Bonnie results always mean much. They are fun
to compare, but be careful about how the numbers are interpreted.


<>< Lance.


[EMAIL PROTECTED] wrote:
> 
> On Wed, 15 Sep 1999, James Manning wrote:
> 
> > >             -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> > > Machine  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
> > > md0     192  5933 86.4 15222 21.8  4172 11.8  5672 81.3  9014 11.2 218.4  4.6
> > > sd0     192  6411 92.0 15072 18.5  4265 11.7  5760 80.6 12069 13.1 201.8  4.5
> >
> > More cases with faster write access (significantly) than read... am I
> > wrong in thinking this is strange?  Is bonnie really worth trusting?
> > Is there a better tool currently available?
> 
> bonnie is the main benchmark i'm optimizing for. hdparm -tT is rather
> useless in this regard; it is relevant maybe only for e2fsck times.
> 
> i'll have a look at RAID1 read balancing. I once ensured we read better
> than single-disk, but we might have lost this property meanwhile ...
> 
> -- mingo
