On 07/16/2011 10:44 PM Maciej Fijalkowski wrote:
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski <fij...@gmail.com> wrote:
Hi

I'm a bit worried about the current state of our benchmarks. We have around 4
benchmarks that have slowed down noticeably recently, and we keep adding
new features that speed up other things. How can we even say we have
actually fixed the original issue? Can we have a policy of not merging
new performance features before having a story for why the benchmarks got
slower?

Current list:

http://speed.pypy.org/timeline/?exe=1&base=none&ben=spectral-norm&env=tannit&revs=50

this fixed itself; recent runs are fast again (and anto could not
reproduce it at all)

I am wondering what was sharing the tannit Xeon with the actual code being
benchmarked.
Are the timing values written via stdout to file(s)? How are they buffered?
Could allocating file storage space on the disk have hit a nonlinear overhead
hump which, once passed, temporarily impacted the benchmark, so that it would
seem to have "fixed itself"?

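To make that concrete, here is a purely hypothetical sketch (I have not looked
at how the runner actually records its results, so work(), the paths and the
iteration count are all made up): it compares timing an iteration with the
result line flushed to disk inside the timed window against keeping results in
memory and writing them only afterwards.

    # Purely hypothetical check, not the actual runner code: compare
    # per-iteration times when the result line is flushed to disk inside the
    # timed window versus collected in memory and written at the end.
    import os
    import time

    def work():
        # stand-in for a benchmark body
        return sum(i * i for i in range(200000))

    def timed_with_inline_io(path, iterations=50):
        # the write/flush/fsync for each iteration happens inside the timed window
        times = []
        with open(path, "w") as f:
            for _ in range(iterations):
                t0 = time.time()
                work()
                f.write("iteration done\n")
                f.flush()
                os.fsync(f.fileno())  # worst case: force the block to disk
                times.append(time.time() - t0)
        return times

    def timed_with_deferred_io(path, iterations=50):
        # results are kept in memory and written only after all timing is done
        times = []
        for _ in range(iterations):
            t0 = time.time()
            work()
            times.append(time.time() - t0)
        with open(path, "w") as f:
            for t in times:
                f.write("%f\n" % t)
        return times

    if __name__ == "__main__":
        inline = timed_with_inline_io("/tmp/inline_timing.txt")
        deferred = timed_with_deferred_io("/tmp/deferred_timing.txt")
        print("inline   avg: %f" % (sum(inline) / len(inline)))
        print("deferred avg: %f" % (sum(deferred) / len(deferred)))

If the two averages differ noticeably, the runner's own I/O path is bleeding
into the measured interval; if not, the noise has to come from somewhere else.
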
I am thinking of file system work clobbering warm caches in some temporarily
systematic way, e.g. due to convoying [1] between benchmark I/O and background
file system activity. Even though user time is not system time, some CPU and
cache resources must be shared. I wonder what a dedicated SSD with large
pre-allocated open files could do to normalize these effects, especially if
the disk were reformatted anew for each run (a rough sketch of the
pre-allocation idea follows the link below).
[1] http://en.wikipedia.org/wiki/Lock_convoy
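
To illustrate the pre-allocation idea, here is a rough sketch of a helper
(again hypothetical, not something in the current runner; the path and size
are arbitrary) that forces the result file's blocks to be allocated before any
timing starts:

    # Hypothetical helper, not part of the existing runner: fill the result
    # file with zeros up front so its blocks are already allocated and later
    # writes during the timed run never have to grow the file.
    import os

    def preallocate(path, size_bytes, chunk_size=1024 * 1024):
        """Write zeros until the file occupies size_bytes on disk."""
        chunk = b"\0" * chunk_size
        written = 0
        with open(path, "wb") as f:
            while written < size_bytes:
                n = min(chunk_size, size_bytes - written)
                f.write(chunk[:n])
                written += n
            f.flush()
            os.fsync(f.fileno())  # make sure the allocation actually reaches the disk

    # e.g. reserve 64 MiB before the timed section starts; the runner would
    # then reopen the file in "r+b" mode and overwrite in place, not truncate
    preallocate("/tmp/bench_results.dat", 64 * 1024 * 1024)

On Linux, a new enough Python offers os.posix_fallocate() (3.3+), and the
fallocate(1) tool does the same thing, without writing the zeros by hand.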

Regards,
Bengt Richter

