Some people have brought up the idea of tweaking how perf.py drives the
benchmarks. I personally wonder if we should go from measuring the elapsed
time of a fixed workload to counting the number of executions completed in
a set amount of time. That would give a more stable number that's easier
to measure and will stay meaningful even as Python and computers get
faster (I got this idea from Mozilla's Dromaeo benchmark suite:
https://wiki.mozilla.org/Dromaeo).
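To make the idea concrete, here's a rough sketch of what such a driver
loop might look like. `runs_per_second` is a hypothetical helper for
illustration, not anything in perf.py:

```python
import time

def runs_per_second(bench, duration=1.0):
    """Count how many times `bench` completes within `duration` seconds.

    Hypothetical helper sketching the Dromaeo-style approach: instead of
    timing N fixed iterations, fix the time budget and report the
    iteration rate, which grows as the interpreter gets faster.
    """
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        bench()
        count += 1
    # Normalize to executions per second so different budgets compare.
    return count / duration

# Example with a trivial workload and a short budget:
rate = runs_per_second(lambda: sum(range(1000)), duration=0.1)
```

One wrinkle this glosses over: the final iteration can overshoot the
deadline, so very slow benchmarks would need a correction for that, or a
longer budget.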
_______________________________________________
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed
