On 17/07/11 22:15, Maciej Fijalkowski wrote:
> I think to summarize we're good now, except spitfire which is to be
> investigated by armin.
>
> Then new thing about go is a bit "we touched the world". Because the
> unoptimized traces are now shorter, less gets aborted, less gets run
> based on functions and it's slower. saying trace_limit=6000 makes it
> fast again. I guess "too bad" is the answer.
This made me wonder how effective our "compile the loops" idea is in
practice, so I benchmarked translate.py, first with the default settings
and then with function jitting only:

pypy ./translate.py -Ojit

[Timer] Timings:
[Timer] annotate                   --- 453.0 s
[Timer] rtype_lltype               --- 310.7 s
[Timer] pyjitpl_lltype             --- 392.6 s
[Timer] backendopt_lltype          --- 132.8 s
[Timer] stackcheckinsertion_lltype ---  32.1 s
[Timer] database_c                 --- 197.1 s
[Timer] source_c                   --- 252.0 s
[Timer] compile_c                  --- 707.6 s
[Timer] ===========================================
[Timer] Total:                     --- 2478.0 s

pypy --jit threshold=-1 ./translate.py -Ojit

[Timer] Timings:
[Timer] annotate                   --- 486.2 s
[Timer] rtype_lltype               --- 297.8 s
[Timer] pyjitpl_lltype             --- 396.6 s
[Timer] backendopt_lltype          --- 128.7 s
[Timer] stackcheckinsertion_lltype ---  32.6 s
[Timer] database_c                 --- 190.0 s
[Timer] source_c                   --- 240.0 s
[Timer] compile_c                  --- 594.6 s
[Timer] ===========================================
[Timer] Total:                     --- 2366.5 s

As you can see, if we ignore the time spent in compile_c, the total time
is almost the same. What can we conclude? That "compiling the loops" is
ineffective and we only care about compiling single functions? :-(

ciao,
Anto

_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
http://mail.python.org/mailman/listinfo/pypy-dev
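For what it's worth, the "almost the same" claim checks out arithmetically. A quick sketch (phase names and numbers copied from the two timing tables above; the `total_without` helper is just for illustration, not part of translate.py):

```python
# Timings in seconds, taken verbatim from the two runs above.
default_run = {
    "annotate": 453.0, "rtype_lltype": 310.7, "pyjitpl_lltype": 392.6,
    "backendopt_lltype": 132.8, "stackcheckinsertion_lltype": 32.1,
    "database_c": 197.1, "source_c": 252.0, "compile_c": 707.6,
}
threshold_off = {  # pypy --jit threshold=-1, i.e. function jitting only
    "annotate": 486.2, "rtype_lltype": 297.8, "pyjitpl_lltype": 396.6,
    "backendopt_lltype": 128.7, "stackcheckinsertion_lltype": 32.6,
    "database_c": 190.0, "source_c": 240.0, "compile_c": 594.6,
}

def total_without(timings, skip="compile_c"):
    # Sum all phases except the one we want to ignore.
    return sum(t for phase, t in timings.items() if phase != skip)

print(round(total_without(default_run), 1))    # 1770.3
print(round(total_without(threshold_off), 1))  # 1771.9
```

So excluding compile_c the two runs differ by only about 1.6 s out of roughly 1770 s, well within the noise of a translation run.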