Hi all,
I'm noticing a large (factor of >100) performance regression in the promote
function when it is fed more than 5 arguments ("julia" below is Julia 0.4.5;
"./julia" is the current official nightly build, version 0.5.0-dev+4438,
commit aa1ce87):
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5))'
================ Benchmark Results ========================
Time per evaluation: 2.00 ns [1.96 ns, 2.04 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
Number of allocations: 0 allocations
Number of samples: 4101
Number of evaluations: 99601
R² of OLS model: 0.952
Time spent benchmarking: 1.20 s
% ./julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5))'
================ Benchmark Results ========================
Time per evaluation: 2.11 ns [2.06 ns, 2.16 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
Number of allocations: 0 allocations
Number of samples: 3801
Number of evaluations: 74701
R² of OLS model: 0.950
Time spent benchmarking: 2.35 s
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6))'
================ Benchmark Results ========================
Time per evaluation: 2.38 ns [2.34 ns, 2.42 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
Number of allocations: 0 allocations
Number of samples: 6301
Number of evaluations: 811601
R² of OLS model: 0.956
Time spent benchmarking: 1.75 s
% ./julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6))'
================ Benchmark Results ========================
Time per evaluation: 306.79 ns [300.44 ns, 313.14 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 144.00 bytes
Number of allocations: 3 allocations
Number of samples: 4001
Number of evaluations: 90501
R² of OLS model: 0.955
Time spent benchmarking: 2.64 s
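For context, promote's job is to compute a common type for all of its
arguments and convert each one to it. Here is a self-contained sketch of
those semantics (illustration only, not Base's actual definition, which is
recursive; my guess is that past a certain arity the compiler stops
specializing the recursive varargs path, which would explain the
allocations):

```julia
# Hypothetical re-implementation of promote's semantics (NOT Base's code):
# find the joint promotion type of all arguments, then convert each to it.
allpromote(xs...) = begin
    T = promote_type(map(typeof, xs)...)  # e.g. promote_type(Int, Float64) is Float64
    map(x -> convert(T, x), xs)           # tuple of converted values
end

allpromote(1, 2.0)       # → (1.0, 2.0)
allpromote(1, 2.0, 3//1) # → (1.0, 2.0, 3.0)
```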
I get the same results with a custom build of the same commit (I first
thought the problem was with my own build, so I downloaded the nightly build
from the site). Actually, something similar happens on Julia 0.4 as well,
but only starting from 9 arguments:
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6,7,8))'
================ Benchmark Results ========================
Time per evaluation: 3.71 ns [3.63 ns, 3.79 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
Number of allocations: 0 allocations
Number of samples: 3801
Number of evaluations: 74701
R² of OLS model: 0.955
Time spent benchmarking: 1.13 s
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6,7,8,9))'
================ Benchmark Results ========================
Time per evaluation: 6.27 μs [4.53 μs, 8.02 μs]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 928.00 bytes
Number of allocations: 29 allocations
Number of samples: 100
Number of evaluations: 100
Time spent benchmarking: 0.18 s
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6,7,8,9,10))'
================ Benchmark Results ========================
Time per evaluation: 12.85 μs [11.13 μs, 14.56 μs]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 1.11 kb
Number of allocations: 33 allocations
Number of samples: 100
Number of evaluations: 100
Time spent benchmarking: 0.18 s
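If it helps in tracking this down: the allocations should be visible in the
inferred code. On the regressed builds I'd expect the high-arity call to show
abstractly-typed values or a generic varargs call rather than a fully
specialized one (the exact output of course depends on the version):

```julia
# Inspect type inference for the 6-argument call; a fully specialized call
# should be allocation-free, whereas a non-specialized varargs path would
# show up as abstract types in the output.
@code_warntype promote(1, 2, 3, 4, 5, 6)
```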
Is this normal?
Bye,
Mosè