I ran your functions, using `@time`, on 0.4:

> julia> @time tf1(1:1000000) ;
>    7.140 milliseconds (7 allocations: 7813 KB)
> julia> @time tf2(1:1000000) ;
>    9.419 milliseconds (7 allocations: 7813 KB, 19.99% gc time)
> julia> @time tf3(1:1000000) ;
>  927.697 microseconds (7 allocations: 7813 KB)
> julia> @time tf4(1:1000000) ;
>   46.060 milliseconds (1999 k allocations: 39054 KB, 6.87% gc time)

As you can see, the map version does roughly 2 allocations per loop iteration... 
probably at least part of the problem...
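For anyone wanting to reproduce this: the original `tf1`–`tf4` definitions weren't quoted in this thread, so the following is a hypothetical reconstruction (assuming a simple squaring kernel, written in current syntax) of what the four variants most likely looked like. The per-iteration allocations in `tf4` are consistent with how anonymous functions passed to `map` were handled on 0.4 before closures were fully inlined.

```julia
# Hypothetical reconstructions -- the actual kernel in the benchmarks
# was not shown, so x^2 is assumed here.

tf1(r) = collect(r) .^ 2          # "vectorized": materialize, then broadcast

function tf2(r)                   # explicit loop with preallocated output
    out = Vector{Int}(undef, length(r))
    for (i, x) in enumerate(r)
        out[i] = x^2
    end
    return out
end

tf3(r) = [x^2 for x in r]         # comprehension

tf4(r) = map(x -> x^2, r)         # map with an anonymous function
```

All four should return the same vector; the differences the thread observes are purely in timing and allocation counts, e.g. `@time tf3(1:1000000)`.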

On Saturday, June 20, 2015 at 5:25:54 PM UTC-4, Xiubo Zhang wrote:
>
> Same tests on latest build (0.4.0-dev+5468):
>
> tf1 (vectorized):    0.00457 seconds
> tf2 (loop):          0.00805 seconds
> tf3 (comprehension): 0.00197 seconds
> tf4 (map):           0.04186 seconds
>
> I have to say the progress in the performance department is exciting -- 
> improvements on all fronts! Yet, relatively speaking (I guess this is 
> almost nitpicking), the comprehension is still faster than the others.
>
> I did suspect GC to be the cause. But I feel I lack the knowledge to 
> properly analyse its behaviour with the @time(d) macros. I did notice, 
> however, that the standard deviations of the timings are vastly different:
>
> tf1 (vectorized):    0.0000976434 seconds
> tf2 (loop):          0.0009162303 seconds
> tf3 (comprehension): 0.0001677752 seconds
> tf4 (map):           0.0020671911 seconds
>
> Maybe the variations are related to GC behaviours?
>
>
> On Saturday, 20 June 2015 20:24:47 UTC+1, Scott Jones wrote:
>>
>> Also, you might want to retry your tests on 0.4 (if you don't mind living 
>> on the bleeding edge!), there've been a number of changes there that would 
>> affect your results.
>>
>
