Serhiy Storchaka added the comment:

> * Display the average, rather than the minimum, of the timings *and* display 
> the standard deviation. It should help a little bit to get more reproducible 
> results.

This makes it hard to compare results with those from older Python versions.
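The difference between the two reporting styles can be sketched with `timeit.repeat` and the `statistics` module (a minimal sketch; the statement, loop count, and repeat count are illustrative):

```python
import statistics
import timeit

# Run the same micro-benchmark several times; each element of the
# list is the total time for `number` loop iterations.
timings = timeit.repeat("sum(range(100))", repeat=5, number=10_000)

# The traditional report: the minimum, i.e. the least-disturbed run.
best = min(timings)

# The proposed report: the mean plus the standard deviation.
mean = statistics.mean(timings)
stdev = statistics.stdev(timings)

print(f"best: {best:.6f}s  mean: {mean:.6f}s +- {stdev:.6f}s")
```

A script that compares "best of N" numbers across Python versions would have to special-case the new mean-based output.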

> * Change the default repeat from 3 to 5 to have a better distribution of 
> timings. It makes the timeit CLI 66% slower (ex: 1 second instead of 600 ms). 
> That's the price of stable benchmarks :-)

Currently a default timeit run takes from 0.8 to 8 seconds. Adding yet more 
seconds on top of that will only make users angrier.

> * Don't disable the garbage collector anymore! Disabling the GC is not fair: 
> real applications use it.

But this makes short microbenchmarks less stable.
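Note that the current behavior can already be overridden: timeit disables the garbage collector by default, but the documented workaround is to re-enable it as the first statement of `setup`, which makes GC part of the measured workload exactly as the proposed default would (a minimal sketch; the workload is illustrative):

```python
import timeit

# Allocation-heavy statement, so the GC has something to do.
stmt = "[str(i) for i in range(1000)]"

# Re-enabling the GC in `setup` makes collection pauses part of the
# measurement -- what the proposed change would do by default.
with_gc = timeit.timeit(stmt, setup="import gc; gc.enable()", number=1000)

# Default behavior: the GC stays disabled during the measurement,
# which keeps short micro-benchmarks more stable.
without_gc = timeit.timeit(stmt, number=1000)

print(f"GC enabled: {with_gc:.4f}s, GC disabled: {without_gc:.4f}s")
```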

> * autorange: start with 1 loop instead of 10 for slow benchmarks like 
> time.sleep(1)

This is good if you run a relatively slow benchmark, but it makes the result 
less reliable. You can always specify -n 1, but at your own risk.

> * Display large number of loops as power of 10 for readability, ex: "10^6" 
> instead of "1000000". Also accept "10^6" syntax for the --num parameter.

The 10^6 syntax doesn't look Pythonic. And this change breaks third-party 
scripts that parse timeit's output.
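The "not Pythonic" objection is concrete: in Python itself, `^` is bitwise XOR, not exponentiation, so `10^6` already means something quite different:

```python
# In Python, ^ is bitwise XOR: 0b1010 ^ 0b0110 == 0b1100.
print(10 ^ 6)    # 12, not a million

# Actual exponentiation uses the ** operator.
print(10 ** 6)   # 1000000
```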

> * Add support for "ns" unit: nanoseconds (10^-9 second)

Even "pass" takes at least 0.02 usec on my computer. What do you want to 
measure that takes < 1 ns? I think timeit is just the wrong tool for that.
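The overhead floor is easy to observe (a minimal sketch; the absolute numbers depend on the machine):

```python
import timeit

number = 1_000_000

# Even an empty statement has per-loop overhead from the timing
# harness itself; per-loop results far below this floor are
# dominated by that overhead rather than by the measured code.
total = timeit.timeit("pass", number=number)
per_loop_ns = total / number * 1e9

print(f"'pass': {per_loop_ns:.1f} ns per loop")
```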

The patch also makes the warning about unreliable results go to stdout and be 
always visible. This is yet another compatibility break. The current code lets 
the user control the visibility of the warning with the -W Python option, and 
doesn't mix the warning with the result output.
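The current behavior relies on the `warnings` machinery, which writes to stderr and honors the `-W` option, so results on stdout stay clean. A minimal sketch of that separation (`report` is a hypothetical helper for illustration, not timeit's actual code):

```python
import warnings

def report(result_line: str) -> None:
    """Print the benchmark result on stdout; route the reliability
    warning through the warnings machinery (stderr, -W aware)
    instead of mixing it into the result output."""
    print(result_line)
    warnings.warn("results may be unreliable", UserWarning)

# Capture the warning to show it travels separately from the result.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    report("1000000 loops, best of 3: 0.02 usec per loop")

print(f"caught {len(caught)} warning(s): {caught[0].message}")
```

Running the script with `python -W ignore` would suppress the warning entirely while leaving the result line untouched.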


Python tracker <>