On 23 April 2018 at 05:00, Matthew Woodcraft <matt...@woodcraft.me.uk> wrote:
> To get comprehensible results, I think I really need to summarise the
> speed of a particular build+hardware combination as a single number,
> representing Python's performance for "general purpose code".
>
> So does anyone have any recommendations on what the best figure to
> extract from pyperformance results would be?

There's no such number in the general case, since the way different
aspects should be weighted differs significantly based on your use
case (e.g. a long-running server or GUI application may care very
little about startup time, while it's critical for command-line
application responsiveness). That's why we have a benchmark suite,
rather than just a single benchmark.

https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
is an example of working through the suite and calling out specific
benchmarks based on the kind of code they best represent.

So I don't think you're going to be able to get away from coming up
with your own custom scheme that emphasises a particular usage
profile. The simplest approach is the one the linked article took
(i.e. weight one benchmark at a time at 100% and ignore the others).
Beyond that, searching for "combining multiple benchmark results into
an aggregate score" returned
https://pubsonline.informs.org/doi/pdf/10.1287/ited.2013.0124 as the
first link for me, and based on skimming the abstract and
introduction, I think it's likely to be quite relevant to your
question.
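
To make that concrete, here is a rough sketch of one such custom
scheme: folding per-benchmark results into a single figure via a
weighted geometric mean of per-benchmark speedups. The benchmark
names, timings, and weights below are made-up placeholders; in
practice you'd pull the numbers out of the JSON files that
pyperformance writes and pick weights that match your own usage
profile.

    import math

    # Hypothetical mean timings (seconds) for a baseline build and the
    # build under test, keyed by benchmark name.
    baseline = {"startup": 0.012, "json_loads": 0.45, "django_template": 1.10}
    candidate = {"startup": 0.015, "json_loads": 0.40, "django_template": 0.95}

    # Hypothetical weights expressing how much each benchmark matters for
    # the chosen usage profile (here startup dominates, as it would for a
    # command-line tool).
    weights = {"startup": 0.6, "json_loads": 0.2, "django_template": 0.2}

    def weighted_geomean_speedup(baseline, candidate, weights):
        """Weighted geometric mean of per-benchmark speedups (>1 = faster)."""
        total_weight = sum(weights.values())
        log_sum = 0.0
        for name, weight in weights.items():
            speedup = baseline[name] / candidate[name]
            log_sum += weight * math.log(speedup)
        return math.exp(log_sum / total_weight)

    print(weighted_geomean_speedup(baseline, candidate, weights))

Shifting the weights toward the template and serialisation benchmarks
instead would give a score tuned for a server-style profile, which is
really the point: the aggregate number only means something relative
to the weighting you chose.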

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia