PyPy branches, mostly (a "did this improve or not?" kind of question)
On Fri, Jun 25, 2010 at 11:23 AM, Miquel Torres <tob...@googlemail.com> wrote:
> There is no problem in running tests for branches. What other branches or
> interpreters would you, for example, run?
>
>
> 2010/6/25 Maciej Fijalkowski <fij...@gmail.com>
>>
>> On Fri, Jun 25, 2010 at 5:08 AM, Miquel Torres <tob...@googlemail.com>
>> wrote:
>> > Hi all!
>> >
>> > I want to announce a new version of the benchmarks site speed.pypy.org.
>> >
>> > After about 6 months, it finally shows the vision I had for such a
>> > website: useful for PyPy developers but also for the general public
>> > following PyPy's or even other Python implementations' development.
>> > On to the changes.
>> >
>> > There are now three views: "Changes", "Timeline" and "Comparison":
>> >
>> > The Overview was renamed to Changes, and its inline plot bars were
>> > removed because you can get the exact same plot in the Comparison view
>> > now (and then some).
>> >
>> > The Timeline got a selectable baseline and "humanized" date labels for
>> > the x axis.
>> >
>> > The new Comparison view allows, well, comparing "competing"
>> > interpreters, which will also be of interest to the wider Python
>> > community (especially if we can add Unladen Swallow, IronPython and
>> > Jython results).
>> >
>> >
>> > Two examples of interesting comparisons are:
>> >
>> > - relative bars
>> > (http://speed.pypy.org/comparison/?bas=2%2B35&chart=relative+bars):
>> > here we see that the JIT is faster than Psyco in all cases except
>> > spambayes and slowspitfire, where the JIT cannot make up for pypy-c's
>> > abysmal performance. Interestingly, in the only other case where the
>> > JIT is slower than CPython, the ai benchmark, Psyco performs even
>> > worse.
>> >
>> > - stacked bars, horizontal
>> > (http://speed.pypy.org/comparison/?hor=true&bas=2%2B35&chart=stacked+bars):
>> > This is not meant to "demonstrate" that overall the JIT is over two
>> > times faster than CPython.
>> > It is just another way for a developer to picture how long a programme
>> > would take to complete if it were composed of 21 such tasks. You can
>> > see that CPython's benchmarks (the normalization chosen) all take one
>> > "relative" second. pypy-c needs more or less the same time, some
>> > "tasks" being slower and some faster. Psyco shows an interesting
>> > picture: from meteor-contest downwards (fortuitously), all benchmarks
>> > are extremely "compressed", which means they are sped up by Psyco
>> > quite a lot. But any further speed-up wouldn't make the overall time
>> > much shorter, because the first group of benchmarks now takes most of
>> > the time to complete. pypy-c-jit is a more extreme case of this: if
>> > the JIT accelerated all "fast" benchmarks to 0 seconds (infinitely
>> > fast), it would only get about twice as fast as it is now, because ai,
>> > slowspitfire, spambayes and twisted_tcp now need half of the entire
>> > execution time. A good demonstration of "you are only as fast as your
>> > slowest part". Of course the aggregate of all benchmarks is not a real
>> > app, but it is still fun.
>> >
>> > I hope you find the new version useful, and as always any feedback is
>> > welcome.
>> >
>> > Cheers!
>> > Miquel
>> >
>>
>> Wow, I really like it, great job.
>>
>> Can we see how we can use these features for branches?
>>
>> Cheers,
>> fijal
>
> _______________________________________________
> pypy-...@codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev
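[Editor's note: the "only as fast as your slowest part" observation in the announcement can be sketched numerically. The figures below are illustrative stand-ins, not the actual speed.pypy.org data; the real stacked-bars chart is at the URL quoted above.]

```python
# Each benchmark is normalized so that CPython takes 1 "relative" second.
# Hypothetical pypy-c-jit relative times: many benchmarks sped up a lot,
# a few (ai, slowspitfire, spambayes, twisted_tcp) barely improved.
fast = [0.25] * 17               # 17 heavily accelerated benchmarks
slow = [1.0, 1.2, 1.1, 0.9]      # 4 stubborn benchmarks

total_now = sum(fast) + sum(slow)   # current stacked-bar total
total_best = sum(slow)              # even if every "fast" task took 0 s

# The slow group takes about half the total, so the best possible
# further improvement is only about 2x -- Amdahl's-law-style reasoning.
print(round(total_now / total_best, 2))
```

With these numbers the slow four benchmarks account for roughly half of the stacked total, so driving the other 17 to zero would cut the overall time only about in half, which matches the reasoning in the post.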