2016-05-18 20:54 GMT+02:00 Maciej Fijalkowski <fij...@gmail.com>:
>> Ok. I'm not sure yet that it's feasible to get exactly the same memory
>> addresses for "hot" objects allocated by Python between two versions
>> of the code (...)
>
> Well the answer is to do more statistics really in my opinion. That
> is, perf should report average over multiple runs in multiple
> processes. I started a branch for pypy benchmarks for that, but never
> finished it actually.

I'm not sure I understood you correctly. As I wrote, running the
same benchmark in two separate processes already gives exactly the
same timing.

I already modified perf.py locally to run multiple processes and to
report the average + standard deviation rather than the minimum of a
single process.

Example:

run 10 processes x 3 loops (total: 30 runs)
Run average: 205.4 ms +/- 0.1 ms (min: 205.3 ms, max: 205.4 ms)
Run average: 205.3 ms +/- 0.1 ms (min: 205.2 ms, max: 205.4 ms)
Run average: 205.2 ms +/- 0.0 ms (min: 205.2 ms, max: 205.3 ms)
Run average: 205.3 ms +/- 0.1 ms (min: 205.2 ms, max: 205.4 ms)
Run average: 205.3 ms +/- 0.1 ms (min: 205.2 ms, max: 205.4 ms)
Run average: 205.4 ms +/- 0.1 ms (min: 205.3 ms, max: 205.4 ms)
Run average: 205.3 ms +/- 0.2 ms (min: 205.1 ms, max: 205.4 ms)
Run average: 205.2 ms +/- 0.1 ms (min: 205.1 ms, max: 205.2 ms)
Run average: 205.3 ms +/- 0.1 ms (min: 205.2 ms, max: 205.4 ms)
Run average: 205.3 ms +/- 0.1 ms (min: 205.2 ms, max: 205.4 ms)
Total average: 205.3 ms +/- 0.1 ms (min: 205.1 ms, max: 205.4 ms)

The "total" concatenates all lists of timings.

Note: by the way, the timing also depends on the presence of .pyc
files ;-) I modified perf.py to add a first warm-up run with a single
iteration just to rebuild the .pyc files, since the benchmark always
starts by removing all .pyc files...
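
Something like this (a sketch only; run_benchmark() is a placeholder
for whatever actually runs the benchmark loop):

# Warm-up: one throwaway iteration so the timed runs don't pay
# the cost of recompiling the .pyc files.
run_benchmark(loops=1)            # rebuild .pyc, result discarded
timings = run_benchmark(loops=3)  # the runs we actually report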

Victor
_______________________________________________
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed