And also compare (note the @sync):

@time @sync @parallel for i in 1:10
    sleep(1)
end
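Without @sync, @time only measures how long it takes to spawn the tasks; @parallel hands back the Futures immediately (as in the output quoted below), so you can also wait on them yourself. A rough sketch (futs is just my name for the return value):

futs = @parallel for i in 1:10
    sleep(1)
end
foreach(wait, futs)    # blocks until every chunk has finished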
Also note that using a reduction with @parallel will wait as well:

z = @parallel (*) for i = 1:n
    A
end
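For completeness: the 0.009 s in the example below only measures spawning the tasks, and the master's z is never updated by the @parallel loop, because each worker operates on its own copy of z. If you actually want z = A^n computed in parallel, the reduction form above does it. A minimal, self-contained sketch (the worker count and the smaller n are my choices; with this A the original n = 10^9 would overflow to Inf):

addprocs(4)                      # start 4 local workers (pick a count for your machine)
A = [1.0 1.0001; 1.0002 1.0003]
n = 1000
z = @parallel (*) for i = 1:n
    A                            # each worker builds a partial product; (*) combines them
end
z ≈ A^n                          # true up to floating-point rounding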
On Friday, July 22, 2016 at 3:11:15 AM UTC+10, Kristoffer Carlsson wrote:
>
> julia> @time for i in 1:10
>            sleep(1)
>        end
>  10.054067 seconds (60 allocations: 3.594 KB)
>
> julia> @time @parallel for i in 1:10
>            sleep(1)
>        end
>   0.195556 seconds (28.91 k allocations: 1.302 MB)
> 1-element Array{Future,1}:
>  Future(1,1,8,#NULL)
>
> On Thursday, July 21, 2016 at 6:00:47 PM UTC+2, Ferran Mazzanti wrote:
>>
>> Hi,
>>
>> I'm mostly showing my astonishment, but I can't even understand the figures
>> in this stupid parallelization code:
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:1000000000
>>     z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>> 1.0 1.0001
>> 1.0002 1.0003
>>
>>
>>
>> But then add @parallel to the for loop:
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:1000000000
>>     z *= A
>> end
>> toc()
>> A
>>
>> and get
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>> 1.0 1.0001
>> 1.0002 1.0003
>>
>>
>> Look at the difference in elapsed times! And I'm running this on my Xeon
>> desktop, not even on a cluster.
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>> 0.0 0.0
>> 0.0 0.0
>>
>>
>> So is this what one should expect from this kind of simple
>> parallelization? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.