On May 29, 2008, at 11:44 PM, [EMAIL PROTECTED] wrote:

> On Thu, May 29, 2008 at 02:04:21PM -0700, eric kustarz wrote:
>>>
>>> Currently both tools give different views of I/O. While filebench
>>> simulates real workloads, it cannot show what vdbench shows.
>>> E.g. we were doing some hardware array tests, and it turned out
>>> (using vdbench) that the array works in a really strange way:
>>> sometimes its IOPS jumps very high, sometimes it drops very low.
>>> There were a few other tests where vdbench showed how IOPS behaved
>>> in each second of the workload. I am afraid filebench _currently_
>>> cannot give us the same data.
>>
>> Good point.
>>
>> We actually have this via Xanadu, though i just tried it and it looks
>> like Xanadu is not working.
>>
>> Also, at the UCSC benchmarking conference last Monday we kicked
>> around the idea of showing a distribution of results instead of
>> just averages.
>
> Averages can be really misleading. They hide the in-depth data.
>
> I remember another test of hardware arrays (yes, two arrays) where
> vdbench showed that one array is a bit faster (more IOPS) than the
> other, while filebench (which tries to simulate real workloads)
> showed that the other array is slightly faster. Which is true? I
> don't know. But I wouldn't like to use just one tool. Both have
> their advantages and disadvantages, and _both_ together give a much
> wider picture of how the storage behaves under a particular workload.
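The point above about averages hiding behavior can be made concrete with a small sketch (the numbers are purely illustrative, not from any test in this thread): two per-second IOPS series with the same mean look identical by average, but a distribution view (spread, minimum) immediately exposes the spiky array.

```python
import statistics

# Hypothetical per-second IOPS samples (illustrative, not measured data):
# a steady array vs. one whose IOPS "jumps very high" and then stalls.
steady = [1000, 1010, 990, 1005, 995, 1000, 1008, 992]
spiky = [2000, 50, 1950, 100, 1900, 150, 1850, 0]

for name, series in (("steady", steady), ("spiky", spiky)):
    # Mean alone makes the two arrays look identical; stdev and min do not.
    print(f"{name}: mean={statistics.mean(series):.0f} IOPS, "
          f"stdev={statistics.stdev(series):.0f}, min={min(series)}")
```

Both series average 1000 IOPS, yet the second stalls to 0 in some seconds; only per-second reporting (as vdbench gives) or a distribution of results (as proposed above) reveals that.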

So that's disturbing.  What workload were you running?

eric

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
