Would just like to add that a regular DArray constructor takes an init
function that initializes the localparts of the DArray - there is no
copying. But in your case, with randperm(n), I think we will have to
create the permutation in the caller and then distribute the parts.
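A hedged sketch of that init-function constructor (Julia 0.2/0.3-era API; the exact signature may differ in your version):

```julia
# Sketch only: DArray(init, dims) calls init on each participating worker
# with a tuple of index ranges I describing that worker's chunk; the array
# it returns becomes the localpart -- nothing is copied from the caller.
n = 1000
d = DArray((n,)) do I
    # I[1] is the range of global indices this worker owns;
    # here each worker fills its chunk with random values locally.
    rand(length(I[1]))
end
```

Note that this cannot build a global randperm(n), since each chunk is generated independently - hence creating the permutation in the caller and distributing it.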


On Mon, Jan 27, 2014 at 8:58 AM, Amit Murthy <[email protected]> wrote:

> The "@parallel for" works only with ranges - only the data referenced in
> the for body is copied to the workers. We should print a better error
> message, though.
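> Since the range-based form does work, one workaround (sketch only) is to
> loop over indices and index into the zipped arrays inside the body:
>
> ```julia
> n = 1000
> x = randperm(n); y = randperm(n)
> # iterate a range instead of zip(x, y); x and y are serialized
> # to each worker that references them in the loop body
> @parallel for i in 1:n
>     println(x[i], y[i])
> end
> ```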
>
> I cannot think of a way to have a distributed randperm that does not
> involve copying other than using a SharedArray.
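> A sketch of the SharedArray route (assuming all workers are on the same
> host, which SharedArray requires):
>
> ```julia
> n = 1000
> sx = SharedArray(Int, n); sy = SharedArray(Int, n)
> sx[:] = randperm(n); sy[:] = randperm(n)
> # every local worker maps the same memory, so there are
> # no per-worker copies of the permutations
> @parallel for i in 1:n
>     println(sx[i], sy[i])
> end
> ```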
>
> If copying only the specific parts of the distribution to each worker is
> not an issue, a DArray can also serve your purpose.
>
> n=1000
> x = randperm(n); y = randperm(n)
>
> d = distribute(map(t->t, zip(x,y)))
> # Only the specific localparts are copied to each of the workers
> # participating in the darray...
>
>
> @sync begin
>     for p in procs(d)
>         @async begin
>             remotecall_fetch(p,
>                 D -> begin
>                     for t in localpart(D)
>                         println(t)
>                         # do any work on the localpart of the DArray
>                     end
>                 end,
>                 d)
>         end
>     end
> end
>
>
> On Sun, Jan 26, 2014 at 11:54 PM, Madeleine Udell <
> [email protected]> wrote:
>
>> @parallel breaks when parallelizing a loop over a Zip. Is there a
>> workaround that lets me avoid explicitly forming the sequence I'm
>> iterating over? I'd like to avoid unnecessarily copying the data from
>> the sequences I'm zipping up.
>>
>> n = 1000
>> x = randperm(n); y = randperm(n)
>> @parallel for t in zip(x,y)
>>     x,y = t
>>     println(x,y)
>> end
>>
>> exception on 1: ERROR: no method
>> getindex(Zip2{Array{Int64,1},Array{Int64,1}}, Range1{Int64})
>>  in anonymous at no file:1467
>>  in anonymous at multi.jl:1278
>>  in run_work_thunk at multi.jl:575
>>  in run_work_thunk at multi.jl:584
>>  in anonymous at task.jl:88
>>
>
>
