On Tue, Apr 26, 2016 at 6:36 PM, Antoine Pitrou <solip...@pitrou.net> wrote:
> On Tue, 26 Apr 2016 18:28:32 +0200
> Maciej Fijalkowski <fij...@gmail.com> wrote:
>>
>> taking the minimum is a terrible idea anyway, none of the statistical
>> discussion makes sense if you do that
>
> The minimum is a reasonable metric for quick throwaway benchmarks as
> timeit is designed for, as it has a better hope of alleviating the
> impact of system load (as such throwaway benchmarks are often run on
> the developer's workstation).
>
> For a persistent benchmarks suite, where we can afford longer
> benchmark runtimes and are able to keep system noise to a minimum, we
> might prefer another metric.
>
> Regards
>
> Antoine.
No it's not, Antoine. The minimum is not better than one random measurement. We had this discussion before, but you guys keep happily dismissing all the papers written on the subject. Taking the minimum *does* get rid of random system noise, but it *also* gets rid of all the effects related to gc/malloc/caches and countless other details that don't behave in the same predictable fashion.
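To make it concrete, here is a quick sketch (the workload and run counts are just illustrative) of what you throw away when you only report the minimum of timeit.repeat() instead of looking at the distribution:

    import statistics
    import timeit

    # Toy allocation-heavy snippet; anything touching the allocator/gc will do.
    stmt = "sorted(list(range(10000)))"

    # timeit.repeat() returns one total time per repeat run.
    runs = timeit.repeat(stmt, repeat=20, number=100)

    # Reporting only min() (what the timeit CLI prints as "best of N")
    # collapses the whole distribution to a single point; the mean and
    # stdev keep the gc/malloc/cache variation visible.
    print("min   : %.4f s" % min(runs))
    print("mean  : %.4f s" % statistics.mean(runs))
    print("stdev : %.4f s" % statistics.stdev(runs))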