STINNER Victor added the comment:
Hmm, since the discussion has restarted, I'm reopening the issue ...
"Well, pybench is not just one benchmark, it's a whole collection of benchmarks
for various different aspects of the CPython VM and per concept it tries to
calibrate itself per benchmark, since each benchmark has different overhead."
In the performance module, you now get an individual timing for each pybench
benchmark, instead of an overall total, which was less useful.
"The number of iterations per benchmark will not change between runs, since
this number is fixed in each benchmark."
Please take a look at the new performance module; it has a different design.
Calibration is based on a minimum time per sample, no longer on hardcoded
iteration counts. I modified all benchmarks, not only pybench.
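The idea of time-based calibration can be sketched as follows. This is a hypothetical simplification, not the perf module's actual API: the inner-loop count is doubled until a single sample takes at least a minimum wall-clock time, so each benchmark finds its own count instead of relying on a hardcoded one.

```python
import time

def calibrate(bench, min_time=0.1):
    """Double the inner-loop count until one sample takes at least
    min_time seconds.  Hypothetical sketch of time-based calibration;
    the real perf module's implementation differs."""
    loops = 1
    while True:
        t0 = time.perf_counter()
        bench(loops)
        elapsed = time.perf_counter() - t0
        if elapsed >= min_time:
            return loops
        loops *= 2

# Example micro-benchmark: the callable runs the workload `loops` times.
def bench_concat(loops):
    for _ in range(loops):
        "a" + "b"

print(calibrate(bench_concat))
```

Because the count is derived from measured time, the same benchmark gets a larger count on a faster machine, keeping each sample long enough to be measured reliably.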
"BTW: Why would you want to run benchmarks in child processes and in parallel ?"
Child processes are run sequentially, not in parallel.
Running benchmarks in multiple processes helps to get more reliable results.
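The multi-process idea can be sketched like this (a minimal illustration, not the perf module's real worker protocol): the same timing snippet is run in several fresh interpreter processes, one after another, and the samples are aggregated in the parent. Each child starts with its own hash-randomization seed and memory layout, so a single unlucky process state does not bias every sample.

```python
import statistics
import subprocess
import sys

# Snippet executed by each child process: time a tiny workload and
# print the result so the parent can collect it.
CHILD_CODE = """
import timeit
print(timeit.timeit('"a" + "b"', number=100000))
"""

def run_workers(nworkers=3):
    """Spawn nworkers child processes sequentially and collect one
    timing sample from each (hypothetical sketch)."""
    timings = []
    for _ in range(nworkers):  # sequential: one child at a time
        out = subprocess.run([sys.executable, "-c", CHILD_CODE],
                             capture_output=True, text=True, check=True)
        timings.append(float(out.stdout))
    return timings

samples = run_workers()
print(statistics.mean(samples))
```

Running the children one at a time, rather than in parallel, avoids the workers competing with each other for CPU and skewing the timings.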
Read my article if you want to learn more about the design of my perf module:
"Ideally, the pybench process should be the only CPU intense work load on the
entire CPU to get reasonable results."
The perf module automatically uses isolated CPUs. It strongly suggests using
this amazing Linux feature to run benchmarks!
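On Linux, CPUs isolated with the `isolcpus=` kernel parameter are listed in `/sys/devices/system/cpu/isolated` using the kernel's CPU-list syntax (e.g. `2-3,7`). A sketch of detecting them and pinning the current process there, assuming a Linux system (`parse_cpu_list` is a helper written for this example, not a perf API):

```python
import os

def parse_cpu_list(text):
    """Parse a Linux CPU list like '2-3,7' into a set of CPU numbers
    (the format used by /sys/devices/system/cpu/isolated)."""
    cpus = set()
    text = text.strip()
    if not text:
        return cpus
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Pin the current process to the isolated CPUs, if any are configured.
# Sketch only: the real perf module handles this internally.
path = "/sys/devices/system/cpu/isolated"
if os.path.exists(path):
    with open(path) as f:
        isolated = parse_cpu_list(f.read())
    if isolated and hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, isolated)
```

Since the kernel scheduler keeps ordinary processes off isolated CPUs, a benchmark pinned there is the only CPU-intensive workload on those cores, which is exactly the condition quoted above.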
I have started writing advice on how to get stable benchmarks:
Note: See also the https://mail.python.org/mailman/listinfo/speed mailing list
resolution: fixed ->
status: closed -> open
Python tracker <rep...@bugs.python.org>