2017-03-14 15:42 GMT+01:00 Nick Coghlan <ncogh...@gmail.com>:
> That would suggest that the implicit assumption of a measure-of-centrality
> with a measure-of-symmetric-deviation may need to be challenged, as at least
> some meaningful performance problems are going to show up as non-normal
> distributions in the benchmark results.
>
> Network services typically get around the "inherent variance" problem by
> looking at a few key percentiles like 50%, 90% and 95%. Perhaps that would
> be appropriate here as well?

Right now, there is almost no visualisation tool for perf :-( I
started to list projects that could be reused to visualize benchmark
results, to "see" the distribution.

A first step would be to add these "key percentiles like 50%, 90% and
95%" to the perf stats command. I don't know how to compute them.
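
For what it's worth, percentiles can be computed with the standard
library alone, using linear interpolation between sorted samples. A
minimal sketch (the helper name and sample timings are made up, not
part of perf's API):

```python
import math

def percentile(samples, pct):
    """Return the pct-th percentile of samples, with linear interpolation."""
    values = sorted(samples)
    # Fractional rank of the requested percentile in the sorted list
    k = (len(values) - 1) * pct / 100.0
    lower = math.floor(k)
    upper = math.ceil(k)
    if lower == upper:
        return values[lower]
    # Interpolate between the two surrounding samples
    return values[lower] + (values[upper] - values[lower]) * (k - lower)

# Hypothetical benchmark timings in ms, for illustration only
timings = [21.5, 22.1, 22.3, 22.4, 22.8, 23.0, 24.9, 30.2]
for pct in (50, 90, 95):
    print("p%d: %.2f ms" % (pct, percentile(timings, pct)))
```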

But my question is about the most important summary: the output of the
"perf show" command, which is what most users see, unless they use
more advanced commands.

Victor
_______________________________________________
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed
