So again, this is an informal description... In particular, my nomenclature is not precise...
So basically, @parallel is a construct which will take the work to be done in each iteration of a for loop and farm it out to the available remote processors, all at once. This happens asynchronously, which means that all these jobs are started without waiting for any of them to finish. You then want to wait for all the jobs to complete before going on to the "Consume tmp" stage. Hence you put an @sync around this, to wait for all the parallel tasks to complete. Hope this makes it a little more understandable. I realise this does not help in designing a parallel system from scratch, but that is a much longer story.

Note that with "tmp" being a SharedArray, this code will work only when all Julia processes are on a single physical machine. Also, the @parallel construct is most useful when you combine a reduction operator with the for loop.

Hope this helps

- Avik

On Wednesday, 17 June 2015 10:49:58 UTC+1, Daniel Carrera wrote:
>
> On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:
>>
>> I haven't used @everywhere in combination with begin..end blocks; I
>> usually pair @sync with @parallel - see an example here
>> <https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl>,
>> where I've parallelized the entire nested loop ranging from lines 25 to 47.
>
> Aha! Thanks. Copying your example I was able to produce this:
>
> N = 5
> tmp = SharedArray(Int, (N))
>
> for i = 1:N
>     # Compute tmp in parallel #
>     @sync @parallel for j = (i + 1):N
>         tmp[j] = i * j
>     end
>
>     # Consume tmp in serial #
>     for j = (i + 1):N
>         println(tmp[j])
>     end
> end
>
> This seems to work correctly and gives the same answer as the serial code.
> Can you help me understand how it works? What does "@sync @parallel" do? I
> feel like I half-understand it, but the concept is not clear in my head.
>
> Thanks.
>
> Daniel.
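P.S. To make the waiting behaviour concrete, here is a minimal sketch of what @sync does, using plain @async tasks instead of remote @parallel jobs so it runs on a single process (the names `results` and the sleep times are just illustrative):

```julia
# @sync blocks until every task started inside its block has finished.
results = zeros(Int, 4)
@sync for j = 1:4
    @async begin            # start each task without waiting for it
        sleep(0.01 * j)     # pretend to do some remote work
        results[j] = j^2
    end
end
# This line is only reached after all four tasks are done,
# so results is fully populated here: [1, 4, 9, 16].
```

Without the @sync, the loop would return immediately and `results` might still be all zeros when you read it.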
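P.P.S. A minimal sketch of the reduction form mentioned above. With a reduction operator you don't need a SharedArray at all, because each iteration just contributes a value that gets combined. (In the Julia version current to this thread you would write `@parallel (+) for ...`; in later Julia the same macro is spelled `@distributed` in the Distributed standard library, which is what the sketch below uses.)

```julia
using Distributed

# Each iteration produces one value; (+) folds them together,
# so this computes 2 + 3 + 4 + 5 across the available workers.
s = @distributed (+) for j = 2:5
    j
end
# s == 14
```

This also works with a single process, where the loop simply runs locally.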
