On Thursday, 9 June 2016 at 01:46:45 UTC, Dave wrote:
In short, the truer metric is how fast the code runs when it is written casually. A good language will run fast without the programmer having to think in great detail about it. It is also useful to know how fast the language can get when someone dives into the details; don't get me wrong. I just think the casual case is far more important.
Not sure about that: the inner loops, where you want to put in the most effort, are often the bottleneck. But the code should be similar, and the benchmarks should test different types of optimizations:
- recursion
- loop unrolling, converting to SIMD
- compile-time evaluation
- converting heap allocations to stack allocations
- heap fragmentation

And so on.
And they tout their ability to tweak the compiler and write (at times very esoteric) code for the sake of performance as a win for their language. That's also missing the point.
Yes, that is very bad for floating point. "Fast" can mean completely wrong results in the general case (even though the output may look correct for the benchmark input). The higher the probability of wrong results, the faster it goes...
Benchmarks for floating point should also test accuracy.
