On Wednesday, 17 September 2014 at 18:30:37 UTC, David Nadlinger wrote:
On Wednesday, 17 September 2014 at 14:59:48 UTC, Andrei Alexandrescu wrote:
Awesome. A suggestion for leveraging crowdsourcing: first focus on setting up the test bed so that adding benchmarks is easy. Then you and others can add a bunch of benchmarks.

On a somewhat related note, I've been working on a CI system to keep tabs on compile-time/run-time performance, memory usage, and file size for our compilers. It's strictly geared towards executing the same test case on different compiler configurations, though, so it doesn't really overlap with what is proposed here.

Right now, it's continually building DMD/GDC/LDC from Git and measuring some 40 mostly small benchmarks, but I need to improve the web UI a lot before it is ready for public consumption. Just thought I would mention it here to avoid scope creep in what Peter Alexander (and others) might be working on.
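(This isn't David's actual setup, but to make the idea concrete, here is a minimal sketch of measuring compile time and binary size per compiler, assuming a hypothetical single-file test case named testcase.d. Memory usage is omitted; that would need an external tool such as GNU time. Note that the output-file flags differ across compilers: dmd and ldc2 take -of=..., while gdc uses GCC-style -o.)

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.file : getSize;
import std.process : execute;
import std.stdio : writefln;

void main()
{
    // Hypothetical single-file test case; any self-contained D module works.
    enum testCase = "testcase.d";

    // Output-file flags differ: dmd and ldc2 take -of=..., gdc takes GCC-style -o.
    string[][string] commands = [
        "dmd":  ["dmd",  "-of=bench_dmd",  testCase],
        "gdc":  ["gdc",  "-o", "bench_gdc", testCase],
        "ldc2": ["ldc2", "-of=bench_ldc2", testCase],
    ];

    foreach (name, cmd; commands)
    {
        auto sw = StopWatch(AutoStart.yes);
        auto result = execute(cmd);
        sw.stop();

        if (result.status != 0)
        {
            writefln("%s failed:\n%s", name, result.output);
            continue;
        }
        writefln("%s: compiled in %s ms, binary is %s bytes",
                 name, sw.peek.total!"msecs", getSize("bench_" ~ name));
    }
}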

That sounds great. I'm not planning anything grand with this. I'm just going to get the already existing benchmark framework working with dmd, ldc, and gdc, and put it on GitHub so people can contribute implementations.

I imagine what you have could probably be extended to do comparisons with other languages, but I think there's still value in getting these benchmarks working because they are so well known and respected.
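On the run-time side, a common approach is to run each compiled binary several times and keep the best wall-clock time. A minimal sketch, assuming the bench_* binaries from a compile step like the one above:

import std.algorithm : min;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.process : execute;
import std.stdio : writefln;

void main()
{
    enum runs = 5;  // repeat each benchmark; keep the best time to cut noise

    // Hypothetical binaries from a compile step like the one sketched above.
    foreach (binary; ["./bench_dmd", "./bench_gdc", "./bench_ldc2"])
    {
        long best = long.max;
        foreach (_; 0 .. runs)
        {
            auto sw = StopWatch(AutoStart.yes);
            execute([binary]);  // wall-clock time; the child's output is captured and discarded
            sw.stop();
            best = min(best, sw.peek.total!"msecs");
        }
        writefln("%s: best of %s runs is %s ms", binary, runs, best);
    }
}

execute captures the child's output, which is fine for a sketch; for output-heavy benchmarks, spawnProcess with output redirected to a null sink would distort the timing less.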
