Hi Jim- I realized the issue I was having with pmap was related to yours. Moreover, your method of using include() will not always work in 0.4. So I posted a link to a working example below that you may find helpful as a future reference.
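To make the patterns described below concrete, here is a minimal runnable sketch. The names fun, parms, and MyData are placeholders, not code from the attached files, and note that on current Julia versions pmap lives in the Distributed standard library, while on 0.4 it was in Base:

```julia
using Distributed   # provides pmap on Julia >= 0.7; on 0.4 pmap was in Base

# Placeholder worker function: combines a parameter vector with shared data.
fun(parms, data) = sum(parms) + sum(data)

Nprocessors = 4
parms  = [0.3, 2.0, 0.33]
MyData = [1.0, 2.0, 3.0]

# One copy of the parameter vector per processor.
input1 = Any[parms for i = 1:Nprocessors]

# Close over MyData in the anonymous function instead of
# duplicating it in a second array of arrays.
output = pmap(x1 -> fun(x1, MyData), input1)
```

With no worker processes added, pmap simply runs serially, so the sketch runs as-is; once workers are added with addprocs, the definitions of fun and MyData would also need @everywhere so they exist on every worker.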
As an alternative to the "super" array, you can actually use multiple inputs. For example:

output = pmap((x1,x2)->fun(x1,x2), input1, input2)

where input1 and input2 are arrays of arrays:

input1 = Any[parms for i = 1:Nprocessors]

If you want to pass data to the function (e.g. to fit a model), you can do this:

@everywhere MyData        # define MyData on all workers
output = pmap((x1)->fun(x1,MyData), input1)

rather than duplicate it in an array of arrays. Or, alternatively, you can pass MyData through a function containing pmap, in which case @everywhere MyData can be omitted. (I don't understand the scoping completely, but those two methods appear to work.)

Here are two sources for more information. The second source contains an example of parallel code that works in 0.4. One thing that is counterintuitive is that you have to state: using MyModule, followed by @everywhere using MyModule. The method you have been using does not seem to work when packages need to be available to multiple processors; for some reason, there are variable conflicts due to scoping when using that method in 0.4.

http://stackoverflow.com/questions/24637064/julia-using-pmap-correctly
https://groups.google.com/forum/#!searchin/julia-users/christopher$20fisher%7Csort:date/julia-users/jgT47LBBtWk/Caje1PhiAwAJ

Best regards and happy coding!

Chris

On Sat, Oct 10, 2015 at 9:01 AM, Christopher Fisher <[email protected]> wrote:
> Creating a "super" array has worked for me in the past. Of course, someone
> with more experience might have a more elegant solution. Good luck!
>
>
> On Saturday, October 10, 2015 at 8:52:38 AM UTC-4, [email protected]
> wrote:
>>
>> Hi Christopher:
>>
>> Thank you for the suggestions. I think I understand.
>>
>> Is the fix to create a "super" array, super_array, that contains all the
>> inputs PF_outer wants to pass to PF_inner, i.e., output = pmap(PF_Inner,
>> super_array)?
>>
>> I will give your suggestions a go later today or tomorrow and let you
>> know the outcome.
>>
>> Best wishes,
>>
>> Jim
>>
>> On Saturday, October 10, 2015 at 8:38:51 AM UTC-4, Christopher Fisher
>> wrote:
>>>
>>> Another thing to check is how you are mapping your inputs to the
>>> available workers. Just to give you a simple example, suppose you have a
>>> function called MyModel that accepts an array of parameters and each
>>> worker receives the same parameters. The basic steps would be:
>>>
>>> Nworkers = 3
>>>
>>> parms = [.3 2.0 .33]
>>>
>>> #Array of parameter arrays, one for each worker
>>> ParmArray = Any[parms for i = 1:Nworkers]
>>>
>>> output = pmap(MyModel,ParmArray)
>>>
>>> So you would have to adapt your more complex input structure to the
>>> example above.
>>>
>>>
>>> On Wednesday, October 7, 2015 at 7:46:13 PM UTC-4, [email protected]
>>> wrote:
>>>>
>>>> Hi All:
>>>>
>>>> Julia is opened in a terminal session using julia -p 4.
>>>>
>>>> A program test_SI_AR1.jl is run from the Julia command line, which
>>>> returns:
>>>>
>>>> exception on exception on exception on 4: exception on 3: 2: 5: ERROR:
>>>> `PF_SI_AR1_inner` has no method matching PF_SI_AR1_inner(::Array{Any,1})
>>>> in anonymous at multi.jl:855
>>>> in run_work_thunk at multi.jl:621
>>>> in anonymous at task.jl:855
>>>> ERROR: `PF_SI_AR1_inner` has no method matching
>>>> PF_SI_AR1_inner(::Array{Any,1})
>>>> in anonymous at multi.jl:855
>>>> in run_work_thunk at multi.jl:621
>>>> in anonymous at task.jl:855
>>>> ERROR: `PF_SI_AR1_inner` has no method matching
>>>> PF_SI_AR1_inner(::Array{Any,1})
>>>> in anonymous at multi.jl:855
>>>> in run_work_thunk at multi.jl:621
>>>> in anonymous at task.jl:855
>>>> ERROR: `PF_SI_AR1_inner` has no method matching
>>>> PF_SI_AR1_inner(::Array{Any,1})
>>>> in anonymous at multi.jl:855
>>>> in run_work_thunk at multi.jl:621
>>>> in anonymous at task.jl:855
>>>> ERROR: `exp` has no method matching exp(::Array{MethodError,1})
>>>> in PF_SI_AR1_outer at
>>>> /home/jim_nason/jmn_work/smith/NS4/jl_code_Summer2015/SI_rho/MH_PF_test/PF_outer_example.jl:123
>>>> in include at ./boot.jl:245
>>>> in include_from_node1 at ./loading.jl:128
>>>> in reload_path at loading.jl:152
>>>> in _require at loading.jl:67
>>>> in require at loading.jl:51
>>>> while loading
>>>> /home/jim_nason/jmn_work/smith/NS4/jl_code_Summer2015/SI_rho/MH_PF_test/test_SI_AR1.jl,
>>>> in expression starting on line 88
>>>>
>>>> Note that these error statements keep repeating until Julia is forced
>>>> to shut down.
>>>>
>>>> The code for PF_outer_example.jl is attached. PF_outer_example.jl
>>>> calls PF_inner_example.jl, which is also attached.
>>>>
>>>> This code implements a particle filter Markov chain Monte Carlo
>>>> estimator of a state space model. The particle filter is run in
>>>> PF_inner_example.jl.
>>>>
>>>> test_SI_AR1.jl loads the parameters and coefficients of the model to
>>>> be estimated, along with the data, yyy, which is an nvar x obs array,
>>>> and passes these to PF_outer_example.jl.
>>>>
>>>> PF_outer_example.jl runs the particle filter on all the observations
>>>> of yyy, j = 1, 2, ..., obs. At each j, PF_outer_example.jl does several
>>>> computations and passes these, yyy[:,j], and the parameters and
>>>> coefficients to PF_inner_example.jl.
>>>>
>>>> PF_inner_example.jl runs the particle filter on observations yyy[:,j]
>>>> for m = 1, 2, ..., mprt replications, where mprt is the number of
>>>> particles, which can be anywhere from 500 to 10,000.
>>>>
>>>> I am trying to implement a pmap command in PF_outer_example.jl to run
>>>> the mprt particles in PF_inner_example.jl in parallel, but with no
>>>> success.
>>>>
>>>> I have tried a couple of variations of the pmap command, but the same
>>>> error message above is always returned by Julia. The variations of the
>>>> pmap command that I have tried are listed in PF_outer_example.jl.
>>>>
>>>> Obviously, I do not understand something that is fundamental to pmap.
>>>> Any advice/suggestions are welcome.
>>>>
>>>> Best,
>>>>
>>>> Jim
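On the quoted trace itself: an error of the form `f` has no method matching f(::Array{Any,1}) arises because pmap hands each element of the input collection to the function as a single argument, while the function was defined with several positional arguments. A minimal sketch of the mismatch and two possible fixes follows; inner and inputs are placeholder names, not the actual code in the attached files:

```julia
using Distributed   # provides pmap on Julia >= 0.7; on 0.4 pmap was in Base

# Placeholder function defined with three positional arguments.
inner(a, b, c) = a + b + c

# pmap passes each element of this array to the function as ONE argument,
# so pmap(inner, inputs) raises: no method matching inner(::Array{Any,1})
inputs = Any[Any[1, 2, 3] for i = 1:4]

# Fix 1: splat the array inside a wrapper.
out1 = pmap(x -> inner(x...), inputs)

# Fix 2: add a method that accepts the array directly.
inner(x::AbstractArray) = inner(x...)
out2 = pmap(inner, inputs)
```

With worker processes added via addprocs, these definitions would also need @everywhere so that the new methods exist on every worker rather than only on the master process.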
