On Sun, Sep 30, 2012 at 07:12:47PM -0400, Brett Cannon wrote:

> > python3 perf.py -T --basedir ../benchmarks -f -b py3k
> > ../cpython/builds/2.7-wide/bin/python ../cpython/builds/3.3/bin/python3.3

> ### call_method ###
> Min: 0.491433 -> 0.414841: 1.18x faster
> Avg: 0.493640 -> 0.416564: 1.19x faster
> Significant (t=127.21)
> Stddev: 0.00170 -> 0.00162: 1.0513x smaller

I'm not sure if this is the right place to discuss this, but what is the 
justification for recording the average and standard deviation of the 
benchmarks?

If the benchmarks are based on timeit, the timeit docs warn against 
taking any statistic other than the minimum.
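For reference, here is the pattern the timeit docs recommend (the statement
being timed is just a stand-in, not anything perf.py actually runs): collect
several independent runs with repeat() and report only the minimum, since
slower runs reflect interference from other processes rather than variability
in the interpreter itself.

```python
import timeit

# Five independent runs of the same statement; each entry is the total
# time for 10_000 executions. The stmt here is an arbitrary example.
timings = timeit.repeat(stmt="sum(range(100))", repeat=5, number=10_000)

# Per the timeit docs, the minimum is the meaningful statistic; the
# mean and standard deviation are inflated by OS scheduling noise.
print(min(timings))
```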



-- 
Steven
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
