Hi

I'm a bit worried about the current state of our benchmarks. We have
around 4 benchmarks that have seen noticeable slowdowns recently, and
we keep adding new features that speed up other things. How can we
even say we have actually fixed the original issue? Can we have a
policy of not merging new performance features before we have a story
for why the benchmarks got slower?
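
To make that concrete, here is a rough sketch (purely hypothetical, not
tied to our actual benchmark infrastructure or the speed.pypy.org API)
of the kind of check such a merge gate could run: compare a branch's
per-benchmark timings against baseline timings and flag anything that
got slower beyond a tolerance, so that each flagged benchmark needs an
explanation before the merge goes ahead. The benchmark names and
numbers are made up for illustration.

    # Hypothetical merge-gate check: compare per-benchmark timings of a
    # branch against a baseline and flag regressions beyond a tolerance.

    TOLERANCE = 0.05  # flag anything more than 5% slower than baseline

    def find_regressions(baseline, candidate, tolerance=TOLERANCE):
        """Return {benchmark: relative slowdown} for benchmarks that got slower."""
        regressions = {}
        for name, base_time in baseline.items():
            new_time = candidate.get(name)
            if new_time is None:
                continue  # benchmark not run on the branch
            slowdown = (new_time - base_time) / base_time
            if slowdown > tolerance:
                regressions[name] = slowdown
        return regressions

    if __name__ == "__main__":
        # Made-up timings in seconds, just to show the shape of the check.
        baseline = {"spectral-norm": 0.52, "spitfire": 4.1, "telco": 1.3}
        candidate = {"spectral-norm": 0.61, "spitfire": 4.0, "telco": 1.31}
        for name, slowdown in sorted(find_regressions(baseline, candidate).items()):
            print("%s got %.1f%% slower -- needs a story before merging"
                  % (name, slowdown * 100))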

Current list:

http://speed.pypy.org/timeline/?exe=1&base=none&ben=spectral-norm&env=tannit&revs=50

http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50

This is a good example of why we should not keep working the way we do now:

http://speed.pypy.org/timeline/?exe=1&base=none&ben=slowspitfire&env=tannit&revs=200

There was an issue, then the issue was fixed, but apparently not quite
(the 7th of June is quite a bit slower than the 25th of May), and then
recently we introduced something that makes it faster altogether. Can
we even fish out the original issue at this point?

http://speed.pypy.org/timeline/?exe=1&base=none&ben=bm_mako&env=tannit&revs=200

http://speed.pypy.org/timeline/?exe=1&base=none&ben=nbody_modified&env=tannit&revs=50
(is it relevant or just noise?)

http://speed.pypy.org/timeline/?exe=1&base=none&ben=telco&env=tannit&revs=50

Cheers,
fijal