Thanks for the ideas, Tim and Amit.
I was playing around with some simple ideas, and the following seems to 
work...

# The file Pmodule.jl contains

module Pmodule
export make_b, use_b
function make_b(n::Int)
    global b        # bind/overwrite the module-level global b
    b = rand(n,n)
end
function use_b(x::Float64)
    global b        # read the module-level global b
    x + b
end
end

Here is an interactive Julia session:

julia> pid = addprocs(2)
2-element Array{Any,1}:
 2
 3

julia> @everywhere using Pmodule

julia> @everywhere make_b(3)

julia> remotecall_fetch(pid[1], use_b, 10.0)
3x3 Array{Float64,2}:
 10.1983  10.3043  10.5157
 10.3442  10.9273  10.3606
 10.0316  10.8404  10.8781

julia> remotecall_fetch(pid[2], use_b, 4.0)
3x3 Array{Float64,2}:
 4.57154  4.06708  4.07531
 4.47332  4.42889  4.07916
 4.83437  4.06286  4.19575

julia> use_b(6.0)
3x3 Array{Float64,2}:
 6.59869  6.75167  6.74789
 6.99774  6.22108  6.54333
 6.85287  6.7234   6.00127

Is this a reasonable way to do it?  Will it be inefficient because I'm 
using a global variable?
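[Editor's note: untyped globals are a known performance hazard in Julia, since the compiler cannot assume their type is stable. One common workaround is to keep b in a const, concretely typed Ref inside the module. The sketch below is my own variant (module name and details are illustrative, written against current Julia syntax, where scalar-plus-matrix needs a broadcast dot):]

```julia
# Variant of Pmodule that avoids an untyped global: b lives in a const,
# concretely typed Ref, so use_b stays type-stable.
module PmoduleRef
export make_b, use_b
const b = Ref{Matrix{Float64}}()            # const binding, concrete type
make_b(n::Int) = (b[] = rand(n, n); nothing)
use_b(x::Float64) = x .+ b[]                # broadcast add in current Julia
end

using .PmoduleRef
make_b(3)
size(use_b(10.0))   # (3, 3)
```

Used with @everywhere exactly as in the session above, each worker gets its own Ref and fills it via make_b.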

Thanks,
Peter


On Thursday, February 20, 2014 7:20:34 PM UTC-8, Amit Murthy wrote:
>
> Since he needs to simultaneously compute b, how about using a DArray just 
> to store the local copies. Something like:
>
> d = DArray(init_b_func, (nworkers(),), workers()), where 
> init_b_func(indexes) initializes and returns a single object of TypeB. 
> The length of the DArray is the same as the number of workers.
>
> This is probably not the most canonical use of a DArray, but hey, 
> references to all instances of b on all processes are kept in the main 
> process in a single object.
> Retrieving b on each of the workers is via localpart(d).
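[Editor's note: a rough sketch of Amit's idea in current Julia, where DArray lives in the DistributedArrays.jl package rather than in Base; the constructor details below are an assumption against that package's current API, not code from the thread:]

```julia
using Distributed
addprocs(2)
@everywhere using DistributedArrays

# One cell per worker; the init function builds that worker's local b.
# dist = [nworkers()] splits the 1-D array into one chunk per worker.
d = DArray(I -> [rand(3, 3) for _ in I[1]], (nworkers(),), workers(), [nworkers()])

# On any worker, its own copy is localpart(d)[1]:
remotecall_fetch(() -> size(localpart(d)[1]), workers()[1])
```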
>
>
>
>
> On Fri, Feb 21, 2014 at 8:36 AM, Tim Holy <[email protected]> wrote:
>
>> julia> wpid = addprocs(1)[1]
>> 2
>>
>> julia> rr = RemoteRef(wpid)
>> RemoteRef(2,1,5)
>>
>> julia> put!(rr, "Hello")
>> RemoteRef(2,1,5)
>>
>> julia> fetch(rr)
>> "Hello"
>>
>> --Tim
>>
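[Editor's note: in later Julia versions RemoteRef was split into Future and RemoteChannel in the Distributed standard library; Tim's example roughly becomes the following sketch:]

```julia
using Distributed
wpid = addprocs(1)[1]

# Storage for the channel lives on worker wpid; capacity 1.
rr = RemoteChannel(() -> Channel{Any}(1), wpid)
put!(rr, "Hello")
fetch(rr)   # "Hello" (fetch reads without removing the value)

rmprocs(wpid)
```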
>> On Thursday, February 20, 2014 05:20:49 PM Peter Simon wrote:
>> > I'm finding myself in need of similar functionality lately.  Has any
>> > progress been made on this front?  In my case, since the large object b
>> > needs to be computed at run-time, I would prefer to simultaneously
>> > compute it on all workers at the beginning, then have these copies stick
>> > around for later reuse.  In the Matlab version of the code I'm porting
>> > to Julia, I do this with persistent variables.
>> >
>> > Thanks,
>> > --Peter
>> >
>> > On Thursday, December 13, 2012 2:04:12 AM UTC-8, Viral Shah wrote:
>> > > Need to wrap up remote_call / remote_call_fetch in a few higher level
>> > > functions for such things. I'm going to get cracking on improving our
>> > > parallel support soon.
>> > >
>> > > -viral
>> > >
>> > > On Thursday, December 13, 2012 12:29:58 PM UTC+5:30, Miles Lubin wrote:
>> > >> Seemingly simple question that I haven't managed to figure out after
>> > >> reading the documentation and playing around: How can you broadcast
>> > >> an object from one process (say the main process) to all running
>> > >> processes?  I come from an MPI background where this is a fundamental
>> > >> operation.
>> > >>
>> > >> To give an example, say I have a function f(a,b), where b is some
>> > >> large 100MB+ dataset/matrix/object, and I want to compute f(a,b) for
>> > >> a in some range and b fixed. It doesn't make sense to send a new copy
>> > >> of b with each call. Instead I'd like to broadcast b to each process
>> > >> and keep a persistent copy in each process to use during the pmap.
>> > >> What's the best and prettiest way to do this?
>> > >>
>> > >> Thanks,
>> > >> Miles
>>
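[Editor's note: for Miles's original question, one MPI-broadcast-like pattern in current Julia is to interpolate the value into an @everywhere expression, then reference it only from functions that are themselves defined everywhere, so pmap does not re-ship b with each call. The names below are illustrative, not from the thread:]

```julia
using Distributed
addprocs(2)

b = rand(100, 100)          # the large object, computed on the master

# Interpolating $b sends its *value* once to every process, where it is
# bound to the global B.
@everywhere B = $b

# A named, everywhere-defined function that reads B avoids capturing the
# data in a closure (which would re-serialize it on each pmap call).
@everywhere use_B(a) = a + sum(B)

results = pmap(use_B, 1:4)  # consecutive entries differ by exactly 1.0
```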
