I'm not getting any speed difference.

julia> @time f(a, p)
  0.051503 seconds (11 allocations: 304 bytes)
801.6933350167617
julia> @time mapre(a, p)
  0.059369 seconds (9 allocations: 272 bytes)
801.6933350167617

More seriously, I did:

julia> using BenchmarkTools

julia> @benchmark f(a, p)
BenchmarkTools.Trial:
  samples:          98
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  144.00 bytes
  allocs estimate:  7
  minimum time:     51.16 ms (0.00% GC)
  median time:      51.21 ms (0.00% GC)
  mean time:        51.24 ms (0.00% GC)
  maximum time:     51.57 ms (0.00% GC)

julia> @benchmark mapre(a, p)
BenchmarkTools.Trial:
  samples:          92
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  112.00 bytes
  allocs estimate:  5
  minimum time:     54.47 ms (0.00% GC)
  median time:      54.57 ms (0.00% GC)
  mean time:        54.64 ms (0.00% GC)
  maximum time:     55.72 ms (0.00% GC)

Personally I think the generator looks nicer, and it's slightly faster. I like this syntax the best:

f2(a, p) = 100 * sumabs(err(ai, pi) for (ai, pi) in zip(a, p)) / length(a)

though @benchmark has this slightly slower than the eachindex() version. No idea why. (A self-contained script for the whole comparison is below the quoted message.)

On Thursday, October 6, 2016 at 10:13:22 AM UTC-4, Martin Florek wrote:
>
> Hi All,
>
> I'm new to Julia and I need to decide on the more correct/better
> implementation for two data collections. I have implemented mean absolute
> percentage error (MAPE) with a *generator expression* (a comprehension
> without brackets):
>
> a = rand(10_000_000)
> p = rand(10_000_000)
>
> err(actual, predicted) = (actual - predicted) / actual
>
> f(a, p) = 100 * sumabs(err(a[i], p[i]) for i in eachindex(a)) / length(a)
>
> and with the *mapreduce()* function:
>
> function mapre(a, p)
>     s = mapreduce(t -> begin b, c = t; abs((b - c) / b) end, +, zip(a, p))
>     s * 100 / length(a)
> end
>
> When I compare *@time f(a, p)* I get:
>
> 0.026515 seconds (11 allocations: 304 bytes)
> 797.1301337918511
>
> and *@time mapre(a, p)*:
>
> 0.079932 seconds (9 allocations: 272 bytes)
> 797.1301337918511
>
> Thanks in advance,
> Martin
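
For reference, here is a minimal self-contained sketch that pulls together the definitions from the quoted message plus the zip-based generator, so all three versions can be benchmarked side by side. It assumes the Julia 0.5-era setup used in this thread (sumabs; on later Julia versions you would write sum(abs, ...) instead) and that BenchmarkTools is installed:

using BenchmarkTools

# data and error function from the original message
a = rand(10_000_000)
p = rand(10_000_000)
err(actual, predicted) = (actual - predicted) / actual

# MAPE via a generator over indices
f(a, p) = 100 * sumabs(err(a[i], p[i]) for i in eachindex(a)) / length(a)

# MAPE via a generator over zip(a, p)
f2(a, p) = 100 * sumabs(err(ai, pi) for (ai, pi) in zip(a, p)) / length(a)

# MAPE via an explicit mapreduce over zip(a, p)
function mapre(a, p)
    s = mapreduce(t -> begin b, c = t; abs((b - c) / b) end, +, zip(a, p))
    s * 100 / length(a)
end

# run each in the REPL to see the full Trial summaries
@benchmark f(a, p)
@benchmark f2(a, p)
@benchmark mapre(a, p)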