I assume pmap works the same way? That is, a SharedArray referenced inside 
the function passed to pmap is treated as a reference, not copied and 
transferred? I suppose @profile is the only way to measure the amount of 
communication overhead going on; it is hard to fix what you can't measure! 
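
For what it's worth, here is the pattern I have in mind — a sketch only, not 
verified; my understanding is that a closure's *globals* are looked up by name 
on the worker (hence the "not defined" errors below), while captured *locals* 
are serialized with the closure, and a SharedArray serializes as a handle to 
the shared segment rather than a copy of the data:

```julia
# Sketch, assuming S is the 5x11 SharedArray from the session below
# and ssq is defined @everywhere. The `let` binds S to a local, so it
# is captured and serialized with the closure -- for a SharedArray,
# that means only a shared-memory descriptor crosses the wire.
colsums = let A = S
    pmap(j -> ssq(A[:, j]), 1:size(A, 2))
end
```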

Thanks much.  

On Monday, September 29, 2014 8:47:57 PM UTC-7, Travis Porco wrote:
>
> Configuration:
> Version 0.3.0 (2014-08-20 20:43 UTC), x86_64-apple-darwin13.3.0
> julia -p 1
>
> Make a shared array, no problem:
> julia> S = SharedArray(Int, (5,11), init = S -> S[localindexes(S)] = myid(), pids=[1, 2])
>
> A similar nonshared array, also no problem:
> julia> nonshared = zeros(5,11)
>
> Toy function, no problem:
> @everywhere function ssq(x)
>               sum(x .^ 2)
>               end
>
> Test ability to do something on the workers, seems to work fine:
> julia> for ww in workers()
>        remotecall(ww,println,ww)
>        end
>
> julia>     From worker 2:    2
>
> I don't expect the nonshared object to be visible:
> julia> @sync for ww in workers()
>               remotecall_fetch(ww, x->println(ssq(nonshared[myid()-1,:])), ww)
>               end
> exception on 2: ERROR: nonshared not defined
>  in anonymous at none:2
>  in anonymous at multi.jl:855
>  in run_work_thunk at multi.jl:621
>  in anonymous at task.jl:855
>
> Try to reference the shared array on the workers:
> julia> for ww in workers()
>               remotecall_fetch(ww, x->println(ssq(S[myid()-1,:])), ww)  # I know the ww does nothing here
>               end
> exception on 2: ERROR: S not defined
>  in anonymous at none:2
>  in anonymous at multi.jl:855
>  in run_work_thunk at multi.jl:621
>  in anonymous at task.jl:855
>
> S is supposed to be a shared array, but the worker can't find it; I 
> expected a shared array to be visible on every participating process.
>
> I can call the function on S:
> julia> for ww in workers()
>               remotecall_fetch(ww, x->println(ssq(x[myid()-1,:])),S)
>               end
>     From worker 2:    26
>
> but then again, the nonshared version works in this context too, so I 
> have to think it's moving the data from pid 1 over to 2 and then doing 
> the computation. This makes me think the operation above with S is doing 
> the same thing: moving the data in S from pid 1 to 2 rather than letting 
> 2 access it in place. However, I don't fully understand the issues.
> julia> for ww in workers()
>               remotecall_fetch(ww, x->println(ssq(x[myid()-1,:])), nonshared)
>               end
>     From worker 2:    0
>
> So the question: is this normal behavior? If so, what is the right way to 
> access S on the workers, avoiding moving data? 
> The manual describes SharedArray as experimental, so I have to 
> check... I've looked around for some very elementary examples of this sort 
> but must have missed them.
>
> Thanks.
>
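
On the measurement point: short of @profile, a crude @time comparison can at 
least show the difference between shipping a SharedArray descriptor and 
copying a plain Array. A sketch, using the 0.3-style 
remotecall_fetch(pid, f, args...) signature from the session above and 
assuming worker 2 exists; the array size is illustrative:

```julia
# Compare transfer cost: a SharedArray argument should send only a
# shared-memory descriptor, while a plain Array is copied in full.
big_shared = SharedArray(Float64, (1000, 1000), pids=[1, 2])
big_plain  = zeros(1000, 1000)
@time remotecall_fetch(2, x -> nothing, big_shared)
@time remotecall_fetch(2, x -> nothing, big_plain)
```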
