I've been trying to modify Bradley's code to do something much like that, 
but I keep running into an error. I can get it to work on one core, but I 
am having trouble getting pmap to work using Bradley's example. Basically, 
rather than parallelizing a single process that requires the same data over 
and over, I want to parallelize a process where each core requires the same 
data X AND each core gets a different, unique data vector.

I can get it to work just fine on one core (in the example below that means 
samples_pmap_core_1 is correct), but as soon as I try Brad's method with 
pmap, it doesn't seem to work (i.e. samples_pmap has an error that the 
function is not defined on the worker). I feel like a solution here would 
help with the OP's problem as well. 

Best, 

Alex

Here is the separate file that is required on each core, called 
"needed_all_over.jl":
## Data that stays the same
X = rand(100,1)

## Function
function square(x)
    estimate = mean(x.^2)
    return estimate
end

## Estimate basic function here
sim_estimates = zeros(sim_reps_node,1);

function square_with_rand(error)
    for i = 1:sim_reps_node
        sim_estimates[i,1] = square([X ; error[i]])
    end
    return sim_estimates
end
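As an aside, square_with_rand leans on the globals X, sim_estimates, and sim_reps_node, which is part of what makes it fragile across workers. Here's a minimal sketch of a self-contained variant that passes everything as arguments instead (square_with_rand2 is a hypothetical name, not from the original code):

```julia
# Sketch: same computation as square_with_rand, but with no globals,
# so a worker only needs this one definition to run it.
# (square_with_rand2 is a hypothetical name for illustration.)
function square_with_rand2(X, errs)
    # For each draw, append one error term to X and take the mean of squares
    # (sum/length is used instead of mean to avoid any extra imports).
    [sum(vcat(vec(X), e).^2) / (length(X) + 1) for e in errs]
end
```

Something like est = square_with_rand2(X, error[1:50]) would then return a 50-element vector with no hidden state.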

Here is the file that runs on the outside:
srand(2);

## Generate Data that is different across draws (Error Sequence)
## The number of Draws
sim_reps_main=100;
sim_reps_node=int(sim_reps_main/2);
## Generate the std normal error sequence
error=rand(sim_reps_main);

require("needed_all_over.jl")

samples_pmap_core_1 = vcat(square_with_rand(error[1:50]), 
square_with_rand(error[1:50]))

addprocs(1)
nprocs() # Now I am doing this on two cores.

samples_pmap = pmap(square_with_rand,[error[1:50],error[51:100]])
samples = vcat(samples_pmap[1],samples_pmap[2])
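For what it's worth, the pattern that usually fixes "function not defined on the worker" is to add the processes first, and only then load the definitions on every process (with @everywhere, or by calling require after addprocs on older Julia). A minimal self-contained sketch of that pattern, where square_mean and the data chunks are made up for illustration:

```julia
using Distributed   # needed explicitly on Julia 1.x; built in on older versions

addprocs(1)         # add workers BEFORE defining/loading the worker code

# @everywhere evaluates the definition on the master and on every worker,
# so pmap can call it remotely.
@everywhere function square_mean(v)
    sum(v.^2) / length(v)
end

chunks = [[1.0, 2.0], [3.0, 4.0]]   # stand-in for error[1:50], error[51:100]
results = pmap(square_mean, chunks) # one result per chunk, in order
```

The same idea applied to the example above would be to move addprocs(1) before the require("needed_all_over.jl") line and load the file on all processes.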
