On Mon, Jun 9, 2008 at 11:28 PM, Sharpe, Sam J <[EMAIL PROTECTED]>
wrote:

>  At some point, solarflow99 wrote:
> >> On Mon, Jun 9, 2008 at 8:45 AM, solarflow99 wrote:
> >>> The first one is a 12-drive SATA RAID 6 array, the second one is a
> >>> 6 x 750GB SATA RAID 5 array.  I can understand 12 drives should
> >>> provide more iops, but this makes no sense.
> >>
> >> Sorry, too many variables. You are using at least:
> >>
> >> -different OS (el4 vs el5)
> >> -different drives (platters/firmware/cache?)
> >> -different RAID levels
> >>
> >> Also perhaps:
> >> -different RAID cards (model/cache/bus/driver)
> >
> >
> > really now, the hdparm tests show the throughput; this is a case of
> > real-world results being completely opposite of hdparm tests.  it still
> > makes no sense.
>
> Err. hdparm is an artificial benchmark - nowhere does it guarantee
> equivalency to real-world metrics.
>
> For one thing, you haven't considered the in-memory disk cache. Are these
> systems exactly equivalent? Basically, comparisons of this sort are only
> valid when everything else is the same.
>
> The same system connected to two different disk arrays is a valid
> comparison. Two systems using two different arrays are not, because any
> equation derived has too many variables to be solvable.
>
It seems hdparm is not as useful as I thought, then. I wonder if you have
any preferred programs for disk I/O benchmarks?  It still strikes me as
surprising that the results could be so drastically different: the RAID
controllers were not the same brand, but the hardware was fairly similar,
and nothing I can see accounts for that much difference.
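For what it's worth, one way to get a number that is less inflated by the
page cache than hdparm's figures is a dd run that forces the data to disk
before reporting (a rough sketch; the path and sizes are arbitrary
placeholders, not anything from the systems discussed above):

```shell
# Rough throughput sketch: conv=fdatasync makes dd flush the written data
# to disk before it prints a rate, so the in-memory cache inflates the
# result far less than a plain dd or an hdparm cached read would.
# /tmp/ddtest and the 256 MB size are arbitrary example values.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync

# Clean up the test file afterwards.
rm -f /tmp/ddtest
```

For workloads closer to real use, tools like fio, iozone, or bonnie++ can
drive random and mixed read/write patterns, which tend to expose RAID 5/6
write-penalty differences that hdparm's sequential reads never touch.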

Thanks,
_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
