On Thu, 11 Feb 2016 18:36:33 +0000
Brett Cannon <br...@python.org> wrote:
> Are we happy with the current benchmarks? Are there some we want to drop?
> How about add? Do we want to have explanations as to why each benchmark is
> included?

There are no real explanations except the provenance of said benchmarks:
- the benchmark suite was originally developed for Unladen Swallow
- some benchmarks were taken and adapted from the "Great Computer
  Language Shootout" (which I think is a poor source of benchmarks)
- some benchmarks have been added for specific concerns that may not be
  of general enough interest (for example micro-benchmarks of
  method calls, or benchmarks of json / pickle performance; a toy
  sketch of such a micro-benchmark is below)
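
(To give an idea of what is meant by a micro-benchmark here: the
following is only a toy sketch, not code from the suite, and the
payload and loop counts are arbitrary.)

    import json
    import timeit

    # Arbitrary sample payload; a real benchmark would use a larger,
    # fixed data set so results are comparable between runs.
    DATA = {"name": "benchmark", "values": list(range(1000))}
    ENCODED = json.dumps(DATA)

    def bench_dumps():
        json.dumps(DATA)

    def bench_loads():
        json.loads(ENCODED)

    for func in (bench_dumps, bench_loads):
        # Take the best of 5 repetitions of 10000 calls each.
        best = min(timeit.repeat(func, number=10000, repeat=5))
        print("%s: %.4fs per 10000 calls" % (func.__name__, best))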

> A better balance of micro vs. macro benchmarks (and probably
> matching groups)?

Easier said than done :-) Macro-benchmarks are harder to write,
especially with the constraints that 1) runtimes should be short enough
for convenient use and 2) performance numbers should be stable enough
across runs.
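
(Again a toy sketch, not how the suite actually measures things; the
workload and repetition counts are made up. It only shows how one might
gauge run-to-run stability with timeit.)

    import statistics
    import timeit

    def workload():
        # Stand-in for a macro-benchmark body; purely illustrative.
        sum(i * i for i in range(100000))

    # Measure several independent runs and look at the spread.
    runs = timeit.repeat(workload, number=50, repeat=7)
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    print("mean %.4fs, stdev %.4fs (%.1f%% of mean)"
          % (mean, stdev, 100.0 * stdev / mean))
    # If the relative spread is large, comparisons between interpreters
    # are not trustworthy without many more (and longer) runs.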

Regards

Antoine.


