On Sat, Dec 31, 2011 at 6:58 PM, Laurence Tratt <lau...@tratt.net> wrote:
> On Sat, Dec 31, 2011 at 05:45:35PM +0100, Armin Rigo wrote:
>
> Hi Armin,
>
>>> func main():
>>>     i := 0
>>>     while i < 100000:
>>>         i += 1
>
>> A quick update: on this program, with 100 times the number of iterations,
>> "converge-opt3" runs in 2.6 seconds on my laptop, and "converge-jit" runs
>> in less than 0.7 seconds. That's already a 4x speed-up :-) I think that
>> you simply underestimated the warm-up times.
>
> In fairness, I did know that that program benefits from the JIT at least
> somewhat :) I was wondering if there are other micro-benchmarks that the
> PyPy folk found particularly illuminating / surprising when optimising PyPy.
>
> There's also something else that's weird. Try "time make regress" with
> --opt=3 and --opt=jit. The latter is often twice as slow as the former. I
> have no useful intuitions as to why at the moment.
>
> Yours,
>
> Laurie
> --
> Personal                          http://tratt.net/laurie/
> The Converge programming language http://convergepl.org/
> https://github.com/ltratt         http://twitter.com/laurencetratt
> _______________________________________________
> pypy-dev mailing list
> pypy-dev@python.org
> http://mail.python.org/mailman/listinfo/pypy-dev
The most coarse-grained test would be:

  PYPYLOG=jit-summary:- <whatever commands you want to run>

That should give you some feedback on tracing and other warmup-related times,
as well as some basic stats. Note that pypy-jit is almost never slower than
pypy-no-jit, but it's certainly slower than CPython for running tests (most
of the time). Tests are a particular case of JIT-unfriendly code, because
ideally they execute each piece of code only once.

Cheers,
fijal
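To illustrate fijal's point about why test suites are JIT-unfriendly, here is a toy sketch of a tracing JIT's hotness counter. This is not PyPy's actual mechanism, and the threshold value is made up for illustration; the idea is just that a loop only gets compiled once its trip count crosses a threshold, so code that runs once (as each test ideally does) pays interpretation and profiling costs without ever reaching compiled speed.

```python
# Toy model of a tracing JIT's hotness counter.  A loop body is only
# "compiled" after it has executed JIT_THRESHOLD times; until then,
# every iteration pays interpretation + bookkeeping overhead.
JIT_THRESHOLD = 1000  # made-up number; PyPy's real threshold differs


class ToyLoop:
    def __init__(self):
        self.counter = 0
        self.compiled = False

    def run_iteration(self):
        if not self.compiled:
            self.counter += 1
            if self.counter >= JIT_THRESHOLD:
                self.compiled = True  # pretend we traced and compiled it


# A long-running benchmark loop crosses the threshold and gets compiled.
hot = ToyLoop()
for _ in range(100000):
    hot.run_iteration()

# A test runs each piece of code once, so it never becomes hot.
cold = ToyLoop()
cold.run_iteration()

print(hot.compiled)   # True
print(cold.compiled)  # False
```

This is also why the micro-benchmark earlier in the thread only shows a speed-up once the iteration count is large enough to amortise the warm-up time.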