We on the Pyston team have created some new benchmarks which I can
recommend using; I wouldn't call them "macrobenchmarks" since they don't
test entire applications, but we've found them to be better than the
existing benchmarks, which tend to be quite microbenchmarky.  For example,
our django-templating benchmark actually exercises the django templating
system, as opposed to bm_django.py, which just tests unicode concatenation.
You can find them here:
https://github.com/dropbox/pyston-perf/tree/master/benchmarking/benchmark_suite
The current ones we look at are django_template3_10x,
sqlalchemy_imperative2_10x, and pyxl_bench_10x.
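To make the micro- vs. macro-benchmark distinction concrete, here's a minimal, self-contained sketch. The concatenation function mimics the kind of work a microbenchmark like bm_django.py is described as doing, while the second function pushes work through a substitution engine. (I'm using the stdlib's string.Template as a stand-in so the example runs anywhere; the actual pyston-perf benchmarks exercise Django itself.)

```python
import timeit
from string import Template

def concat_micro(n=100):
    # Microbenchmark-style work: raw unicode concatenation in a loop.
    s = ""
    for i in range(n):
        s += "row %d" % i
    return s

# Stand-in for a real template engine (assumption: Django's engine does
# far more work -- parsing, context resolution, filters, loops).
TEMPLATE = Template("Hello, $name! You have $count messages.")

def template_render():
    # Exercises substitution machinery rather than bare concatenation.
    return TEMPLATE.substitute(name="world", count=3)

if __name__ == "__main__":
    print("concat:  ", timeit.timeit(concat_micro, number=1000))
    print("template:", timeit.timeit(template_render, number=1000))
```

The point isn't the absolute numbers; it's that a benchmark timing only the first function tells you little about how a real templating workload behaves.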

On Thu, Feb 11, 2016 at 10:36 AM, Brett Cannon <br...@python.org> wrote:

> Are we happy with the current benchmarks? Are there some we want to drop?
> How about add? Do we want to have explanations as to why each benchmark is
> included? A better balance of micro vs. macro benchmarks (and probably
> matching groups)?
>
> _______________________________________________
> Speed mailing list
> Speed@python.org
> https://mail.python.org/mailman/listinfo/speed
>
>
