STINNER Victor <victor.stin...@gmail.com> added the comment:

> Likewise, on the same builds, running _decimal/tests/bench.py does not show a 
> significant difference: 
> https://gist.github.com/elprans/fb31510ee28a3aa091aee3f42fe65e00

Note: it may be interesting to rewrite this benchmark with my perf module to be able
to easily check whether a benchmark result is significant.

http://perf.readthedocs.io/en/latest/cli.html#perf-compare-to
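
For illustration, a minimal sketch of what a perf-based harness could look like
(the workload and names below are placeholders, not the real bench.py code, and
I'm assuming perf's Runner.bench_func() API here):

    # bench_perf.py -- placeholder workload, not the real bench.py
    import decimal
    import perf


    def decimal_workload(prec):
        """Toy _decimal workload standing in for bench.py."""
        decimal.getcontext().prec = prec
        total = decimal.Decimal(0)
        for i in range(1, 1000):
            total += decimal.Decimal(1) / decimal.Decimal(i)
        return total


    runner = perf.Runner()
    runner.bench_func('decimal_workload_prec9', decimal_workload, 9)

Run it once per build, writing the results to JSON (e.g. "python bench_perf.py
-o before.json", then "-o after.json" on the other build), and "python -m perf
compare_to before.json after.json" tells you whether the difference is
significant.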

"perf determines whether two samples differ significantly using a Student’s 
two-sample, two-tailed t-test with alpha equals to 0.95."

=> https://en.wikipedia.org/wiki/Student's_t-test
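
To show the idea, here is roughly what such a check looks like with SciPy
(perf implements its own t-test internally; the timings below are invented):

    from scipy import stats

    # Two sets of timings in seconds, e.g. one list per build (made-up values).
    before = [0.102, 0.100, 0.101, 0.103, 0.099, 0.101]
    after = [0.097, 0.098, 0.096, 0.099, 0.097, 0.098]

    # Student's two-sample, two-tailed t-test.
    t_stat, p_value = stats.ttest_ind(before, after)

    # With the usual alpha = 0.05, p < alpha means the two sample sets
    # differ significantly.
    print('significant' if p_value < 0.05 else 'not significant', p_value)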


Usually, I consider that anything between 5% slower and 5% faster is not significant.
But it depends on how the benchmark was run, on the type of benchmark,
etc. Here I don't know bench.py, so I cannot judge.

For example, for an optimization, I'm more interested in a change that makes
a benchmark 10% faster ;-)
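
Just to make those thresholds concrete (with invented means):

    # Made-up mean timings in seconds, one per build.
    mean_before = 0.100
    mean_after = 0.096

    speedup = (mean_before - mean_after) / mean_before * 100
    print('%.1f%% faster' % speedup)  # 4.0% faster: still within my 5% noise band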

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32630>
_______________________________________