WARNING: silly benchmark ahead

count.nim

```nim
var c = 0
for i in 0 ..< 1_000_000:
  c += 1
echo c
```
count.py

```python
c = 0
for i in range(1, 1000001):
    c += 1
print(c)
```

count.bash

```bash
c=0
for ((i=1; i<=1000000; i++)); do
  ((c++))
done
echo "$c"
```

count.rb

```ruby
c = 0
(1..1000000).each do
  c += 1
end
puts "#{c}"
```

benchmarking with hyperfine (which includes interpreter startup): <https://github.com/sharkdp/hyperfine>

```
rm -rf ~/.cache/nim/count_* && hyperfine --runs 10 \
  'nim r -d:debug count.nim' \
  'nim r -d:release count.nim' \
  'nim r -f -d:debug count.nim' \
  'nim r -f -d:release count.nim' \
  'nim e count.nim' \
  'python count.py' \
  'bash count.bash' \
  'ruby count.rb'
```

```
Benchmark 1: nim r -d:debug count.nim
  Time (mean ± σ):     158.5 ms ± 135.1 ms    [User: 143.3 ms, System: 25.4 ms]
  Range (min … max):   112.0 ms … 542.7 ms    10 runs

  Warning: The first benchmarking run for this command was significantly slower than the rest (542.7 ms).
  This could be caused by (filesystem) caches that were not filled until after the first run.
  You should consider using the '--warmup' option to fill those caches before the actual benchmark.
  Alternatively, use the '--prepare' option to clear the caches before each timing run.

Benchmark 2: nim r -d:release count.nim
  Time (mean ± σ):     214.1 ms ± 320.8 ms    [User: 207.6 ms, System: 27.9 ms]
  Range (min … max):   110.1 ms … 1127.0 ms    10 runs

  Warning: The first benchmarking run for this command was significantly slower than the rest (1.127 s).
  This could be caused by (filesystem) caches that were not filled until after the first run.
  You should consider using the '--warmup' option to fill those caches before the actual benchmark.
  Alternatively, use the '--prepare' option to clear the caches before each timing run.

Benchmark 3: nim r -f -d:debug count.nim
  Time (mean ± σ):     536.1 ms ± 9.7 ms    [User: 545.6 ms, System: 101.0 ms]
  Range (min … max):   523.5 ms … 557.4 ms    10 runs

Benchmark 4: nim r -f -d:release count.nim
  Time (mean ± σ):     1.227 s ± 0.139 s    [User: 1.340 s, System: 0.118 s]
  Range (min … max):   1.150 s … 1.615 s    10 runs

  Warning: Statistical outliers were detected.
  Consider re-running this benchmark on a quiet PC without any interferences from other programs.
  It might help to use the '--warmup' or '--prepare' options.

Benchmark 5: nim e count.nim
  Time (mean ± σ):     322.9 ms ± 10.7 ms    [User: 300.9 ms, System: 21.1 ms]
  Range (min … max):   310.5 ms … 337.6 ms    10 runs

Benchmark 6: python count.py
  Time (mean ± σ):     160.6 ms ± 11.0 ms    [User: 139.5 ms, System: 21.6 ms]
  Range (min … max):   145.6 ms … 181.1 ms    10 runs

Benchmark 7: bash count.bash
  Time (mean ± σ):     2.114 s ± 0.081 s    [User: 2.112 s, System: 0.001 s]
  Range (min … max):   1.984 s … 2.265 s    10 runs

Benchmark 8: ruby count.rb
  Time (mean ± σ):     95.1 ms ± 4.0 ms    [User: 80.1 ms, System: 14.6 ms]
  Range (min … max):   90.8 ms … 103.2 ms    10 runs

Summary
  'ruby count.rb' ran
    1.67 ± 1.42 times faster than 'nim r -d:debug count.nim'
    1.69 ± 0.14 times faster than 'python count.py'
    2.25 ± 3.37 times faster than 'nim r -d:release count.nim'
    3.39 ± 0.18 times faster than 'nim e count.nim'
    5.64 ± 0.26 times faster than 'nim r -f -d:debug count.nim'
   12.90 ± 1.56 times faster than 'nim r -f -d:release count.nim'
   22.23 ± 1.26 times faster than 'bash count.bash'
```
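Since hyperfine times the whole process, each figure above bundles interpreter startup (and, for `nim r`, compilation or cache lookup) together with the loop itself. To see how much of the Python number is startup, one can time just the loop in-process — a minimal sketch, not part of the benchmark above:

```python
import time

c = 0
start = time.perf_counter()
for i in range(1, 1_000_001):
    c += 1
# elapsed time of the loop alone, excluding interpreter startup
elapsed_ms = (time.perf_counter() - start) * 1000
print(c)
print(f"loop only: {elapsed_ms:.1f} ms")
```

Subtracting this from hyperfine's wall-clock figure gives a rough estimate of startup overhead; for very short workloads like this one, startup can dominate the total.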