Hi,

I implemented a new feature in the pyperf compare_to command: it computes the geometric mean of the benchmark means, normalized to the reference benchmark suite.
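For context, here is a minimal sketch of how such a geometric mean can be computed (this is an illustration of the idea, not pyperf's actual code; the mean values are taken from the example run below):

```python
import math

def geometric_mean(values):
    # nth root of the product of n values
    return math.prod(values) ** (1.0 / len(values))

# Benchmark means from the example below, in microseconds.
ref_means = [2.13, 3.70, 4.61]      # mult_list_py36 (reference)
changed_means = [2.09, 5.28, 6.05]  # mult_list_py37

# Normalize each mean to the reference: a ratio > 1.0 means slower.
ratios = [c / r for r, c in zip(ref_means, changed_means)]

print(round(geometric_mean(ratios), 2))  # 1.22 (slower)
```

The same result can also be obtained with statistics.geometric_mean() on Python 3.8+.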
Before making a release, I'm looking for testers and feedback, to check that I implemented it properly and to make sure that it's useful and relevant. You can test it using these commands:
---
git clone https://github.com/psf/pyperf/
cd pyperf
python3 -m pyperf compare_to ref.json my_change.json
---

Example of the new feature:

$ python3 -m pyperf compare_to ./pyperf/tests/mult_list_py36.json ./pyperf/tests/mult_list_py37.json
[1]*1000: Mean +- std dev: [mult_list_py36] 2.13 us +- 0.06 us -> [mult_list_py37] 2.09 us +- 0.04 us: 1.02x faster (-2%)
[1,2]*1000: Mean +- std dev: [mult_list_py36] 3.70 us +- 0.05 us -> [mult_list_py37] 5.28 us +- 0.09 us: 1.42x slower (+42%)
[1,2,3]*1000: Mean +- std dev: [mult_list_py36] 4.61 us +- 0.13 us -> [mult_list_py37] 6.05 us +- 0.11 us: 1.31x slower (+31%)
Geometric mean: 1.22 (slower)

$ python3 -m pyperf compare_to ./pyperf/tests/mult_list_py36.json ./pyperf/tests/mult_list_py37.json -G
Slower (2):
- [1,2]*1000: 3.70 us +- 0.05 us -> 5.28 us +- 0.09 us: 1.42x slower (+42%)
- [1,2,3]*1000: 4.61 us +- 0.13 us -> 6.05 us +- 0.11 us: 1.31x slower (+31%)

Faster (1):
- [1]*1000: 2.13 us +- 0.06 us -> 2.09 us +- 0.04 us: 1.02x faster (-2%)

Geometric mean: 1.22 (slower)

$ python3 -m pyperf compare_to ./pyperf/tests/mult_list_py36.json ./pyperf/tests/mult_list_py37.json --table
+----------------+----------------+------------------------------+
| Benchmark      | mult_list_py36 | mult_list_py37               |
+================+================+==============================+
| [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster (-2%)  |
+----------------+----------------+------------------------------+
| [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower (+42%) |
+----------------+----------------+------------------------------+
| [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower (+31%) |
+----------------+----------------+------------------------------+
| Geometric mean | (ref)          | 1.22 (slower)                |
+----------------+----------------+------------------------------+

$ python3 -m pyperf compare_to ./pyperf/tests/mult_list_py36.json ./pyperf/tests/mult_list_py37.json --table -G
+----------------+----------------+------------------------------+
| Benchmark      | mult_list_py36 | mult_list_py37               |
+================+================+==============================+
| [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster (-2%)  |
+----------------+----------------+------------------------------+
| [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower (+31%) |
+----------------+----------------+------------------------------+
| [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower (+42%) |
+----------------+----------------+------------------------------+
| Geometric mean | (ref)          | 1.22 (slower)                |
+----------------+----------------+------------------------------+

Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
_______________________________________________
Speed mailing list -- speed@python.org
To unsubscribe send an email to speed-le...@python.org
https://mail.python.org/mailman3/lists/speed.python.org/