Hello colleague,

On Wednesday, June 4, 2014 4:36:12 AM UTC+2, John Myles White wrote:
>
> I’m not sure there’s any single correct way to do benchmarks without 
> information about what you’re trying to optimize.
>
> If you’re trying to optimize the experience of people using your code, I 
> think it’s important to use means rather than medians because you want to 
> use a metric that’s affected by the entire shape of the distribution of 
> times and not entirely determined by the "center" of that distribution.
>
> If you want a theoretically pure measurement for an algorithm, I think 
> measuring time is kind of problematic. For algorithms, I’d prefer seeing a 
> count of CPU instructions.
>

I agree that an FPU/ALU instruction count gives the purest information about 
what an algorithm looks like as compiler output. But then ... in the age of 
superscalar architectures (for the last 25 or 50 years, depending on who you 
ask) that does not answer how long you have to wait for a result when you 
actually run the algorithm...
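
To make that concrete, here is a small illustration (a minimal sketch in 
current Julia syntax; the function names and the matrix size are just 
placeholders I made up): the two traversals below execute essentially the 
same number of FP additions, yet their wall-clock times can differ a lot on 
cache-based, superscalar hardware, so the instruction count alone does not 
tell you how long you wait.

    # Two sums over the same matrix with (roughly) the same instruction
    # count but very different memory access patterns.
    function sum_columnwise(A)          # inner loop over the first index:
        s = 0.0                         # contiguous, cache-friendly access
        for j in axes(A, 2), i in axes(A, 1)
            s += A[i, j]
        end
        s
    end

    function sum_rowwise(A)             # inner loop over the second index:
        s = 0.0                         # strided, cache-unfriendly access
        for i in axes(A, 1), j in axes(A, 2)
            s += A[i, j]
        end
        s
    end

    A = rand(4_000, 4_000)
    sum_columnwise(A); sum_rowwise(A)   # warm up (compile) before timing
    @time sum_columnwise(A)
    @time sum_rowwise(A)

The arithmetic is the same in both; only the memory traffic differs, and that 
is exactly the part an instruction count cannot show you.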

I participated in some work on benchmarking (especially in the sense of 
comparing complete HW/OS/compiler systems), and overall: wall-clock time is 
still a good measure for real-world problems. For synthetic benchmarks, small 
data sets, or special questions about organising data (MMU problems), 
runtime, instruction count, or code+memory footprint may be the right 
measures instead.
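
What I mean by wall clock in practice is something simple like the following 
(a minimal sketch in current Julia syntax; mean/median come from the 
Statistics stdlib, and the summed random vector is just an arbitrary 
workload): collect raw samples of the whole operation and look at the mean 
and the median side by side, since the mean is pulled up by the slow tail 
(GC pauses, cache misses) that users actually feel.

    using Statistics                            # mean, median

    # Collect raw wall-clock samples of f() and return them in seconds.
    function wallclock_samples(f, n)
        times = Float64[]
        for _ in 1:n
            t0 = time_ns()
            f()
            push!(times, (time_ns() - t0) / 1e9)
        end
        times
    end

    ts = wallclock_samples(() -> sum(rand(10^6)), 200)
    println("mean:   ", mean(ts))               # sensitive to the slow tail
    println("median: ", median(ts))             # mostly the "center" only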

I especially like Stefan's idea of not excluding the GC. 
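
A rough sketch of what that could look like (assuming a recent Julia where 
@timed returns a named tuple with a gctime field; churn is just a made-up 
allocation-heavy workload): leave the GC enabled and report how much of the 
total it accounted for, instead of filtering it out of the measurement.

    # Allocation-heavy toy workload, so the GC has to run while we time it.
    function churn(n)
        total = 0.0
        for _ in 1:n
            total += sum(rand(1_000))
        end
        total
    end

    churn(1)                                    # warm up (compile) first
    stats = @timed churn(100_000)
    println("total time:  ", stats.time, " s")
    println("of which GC: ", stats.gctime, " s (included, not excluded)")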
