On May 30, 2008, at 7:56 AM, Sean McGrath - Sun Microsystems Ireland wrote:

> eric kustarz stated:
> < >
> < > Currently the two tools give different views of I/O. While filebench
> < > simulates real workloads, it cannot show what vdbench shows.
> < > E.g. we were doing some hardware array tests and it turned out (using
> < > vdbench) that the array behaves in a really strange way: sometimes its
> < > IOPS jumps very high, sometimes it drops very low. There were
> < > a few other tests where vdbench showed how IOPS behaves in each
> < > second of the workload. I am afraid filebench _currently_ cannot give
> < > us the same data.
> <
> < Good point.
> <
> <
> < We actually have this via Xanadu, though I just tried it and it looks
> < like Xanadu is not working.
> <
> < Also, at the UCSC benchmarking conference last Monday we kicked around
> < the idea of showing a distribution of results instead of just averages.
>
> Yes, averages tell only part of the story.

Agreed.

>
>
> As an example, libmicro has some nice statistics that it gives with its
> metric results.  Could the methods be used within filebench?

Possibly; it's something we're discussing, so we'll take input (like
yours)...

I like the distribution output (perhaps similar to DTrace's quantize)
better, though.
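
For illustration, a quantize-style view of per-operation latency can be as
simple as bucketing samples into power-of-two bins and printing an ASCII
histogram. Below is a minimal, self-contained C sketch; the sample values,
bucket scheme, and formatting are assumptions for illustration only, not
filebench or DTrace code.

/*
 * Sketch of a quantize-style power-of-two latency histogram.
 * Sample data and formatting are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

#define	NBUCKETS	32

static void
quantize(const uint64_t *lat_us, int n)
{
        uint64_t buckets[NBUCKETS] = { 0 };
        uint64_t max = 0;
        int i, b, bars;

        for (i = 0; i < n; i++) {
                /* smallest b with 2^b > sample; bucket b holds [2^(b-1), 2^b) */
                for (b = 0; b < NBUCKETS - 1 && (1ULL << b) <= lat_us[i]; b++)
                        ;
                buckets[b]++;
                if (buckets[b] > max)
                        max = buckets[b];
        }

        printf("%16s %-40s %s\n", "value",
            "------------- Distribution -------------", "count");
        for (b = 0; b < NBUCKETS; b++) {
                if (buckets[b] == 0)
                        continue;
                bars = (int)(buckets[b] * 40 / max);
                printf("%16llu |", (unsigned long long)(b ? 1ULL << (b - 1) : 0));
                for (i = 0; i < bars; i++)
                        putchar('@');
                printf("%*s %llu\n", 40 - bars, "",
                    (unsigned long long)buckets[b]);
        }
}

int
main(void)
{
        /* hypothetical per-operation latencies in microseconds */
        uint64_t lat[] = { 120, 130, 128, 2048, 125, 131, 4100, 127, 129, 126 };

        quantize(lat, sizeof (lat) / sizeof (lat[0]));
        return (0);
}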

eric

>
>
>  eg:
>
> # STATISTICS         usecs/call (raw)          usecs/call (outliers removed)
> #                    min      0.00448                 0.00448
> #                    max      0.00623                 0.00459
> #                   mean      0.00453                 0.00451
> #                 median      0.00453                 0.00453
> #                 stddev      0.00013                 0.00003
> #         standard error      0.00001                 0.00000
> #   99% confidence level      0.00002                 0.00000
> #                   skew     11.81224                 0.45940
> #               kurtosis    153.61917                -0.55683
> #       time correlation     -0.00000                -0.00000
> #
> #            elapsed time      0.01943
> #      number of samples          196
> #     number of outliers            6
> #      getnsecs overhead          288
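
As a rough illustration of the post-processing behind a summary like the one
above, the C sketch below computes min/max/mean/median/stddev from raw
per-operation samples and then repeats the summary after a simple
2-standard-deviation outlier cut. The sample values and the cutoff are
assumptions for illustration; this is not libmicro's actual algorithm.

/*
 * Sketch of libmicro-style summary statistics with a simple outlier pass.
 * Sample values and the 2-sigma cutoff are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int
cmp_dbl(const void *a, const void *b)
{
        double d = *(const double *)a - *(const double *)b;
        return (d < 0 ? -1 : d > 0 ? 1 : 0);
}

/* print min/max/mean/median/stddev for n samples; return mean and stddev */
static void
summarize(double *v, int n, const char *label, double *meanp, double *sdp)
{
        double sum = 0, sumsq = 0, mean, sd, median;
        int i;

        qsort(v, n, sizeof (double), cmp_dbl);
        for (i = 0; i < n; i++) {
                sum += v[i];
                sumsq += v[i] * v[i];
        }
        mean = sum / n;
        sd = sqrt(sumsq / n - mean * mean);
        median = (n & 1) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2;

        printf("# %-20s min %.5f max %.5f mean %.5f median %.5f stddev %.5f\n",
            label, v[0], v[n - 1], mean, median, sd);
        *meanp = mean;
        *sdp = sd;
}

int
main(void)
{
        /* hypothetical usecs/call samples with one obvious outlier */
        double raw[] = { 4.48, 4.51, 4.53, 4.52, 4.50, 4.54, 6.23, 4.49,
            4.53, 4.51 };
        int n = sizeof (raw) / sizeof (raw[0]);
        double trimmed[sizeof (raw) / sizeof (raw[0])];
        double mean, sd;
        int i, m = 0;

        summarize(raw, n, "raw", &mean, &sd);

        /* drop samples more than 2 stddevs from the mean (arbitrary cutoff) */
        for (i = 0; i < n; i++) {
                if (fabs(raw[i] - mean) <= 2 * sd)
                        trimmed[m++] = raw[i];
        }
        summarize(trimmed, m, "outliers removed", &mean, &sd);
        return (0);
}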
>
> <
> < eric
>
> -- 
> Sean.

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
