> >> A number of benchmarks are not applicable to us, or they are
> >> uninteresting at this point (e.g. pickling, regexp, or just
> >> microbenchmarks...).
>
> Uninteresting for benchmarking the jit, but important for python users.
And benchmarking the jit is what we're actually doing.

> >> That would leave 2 usable benchmarks, at first glance: 'ai', and
> >> possibly 'spitfire/slowspitfire'.
>
> The django one is also interesting.

That one is more of a dummy loop for template generation than real "django". You can probably reduce it to something as advanced as a dictionary lookup plus string concatenation in a loop.

> >> (Btw, I wonder why they think that richards is "too artificial" when
> >> they include a number of microbenchmarks that look far more artificial
> >> to me...)
>
> I thought that too... maybe just adding richards is okay, they can
> discard the results if they want.

I think richards does not reflect what they do at Google (like pickling :-)

_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev
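For reference, the "dictionary lookup + string concatenation in a loop" reduction of the template benchmark mentioned above could be sketched roughly like this (all names here are illustrative, not the actual spitfire/django benchmark code):

```python
# Hypothetical reduction of a template-rendering benchmark to its core:
# look a value up in a dict and concatenate strings in a loop.
def render(context, n):
    parts = []
    for i in range(n):
        # dictionary lookup + string concatenation per iteration
        parts.append("<li>" + context["item"] + str(i) + "</li>")
    return "".join(parts)

if __name__ == "__main__":
    print(render({"item": "row"}, 3))
```

A loop like this exercises little beyond dict access, `str.__add__`, and list growth, which is the point being made: it measures those primitives rather than a realistic django workload.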
