I tried @sync @parallel and got a segmentation fault, so I'm not sure how
to parallelize this loop. It sits inside a larger module, so I don't know
whether something special has to be done (e.g. adding @everywhere
somewhere). I've searched this forum and the documentation, which is where
I got the idea to add @sync. I'd appreciate any input, as I'm stuck. The
larger code in which this parallelization should take place is here:
https://github.com/pazzo83/QuantJulia.jl/blob/master/src/methods/lattice.jl
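
For reference, this is the minimal SharedArray + @sync @parallel pattern I
pieced together from the manual; f and n here are placeholders, not my
actual code:

```julia
# Placeholder kernel; @everywhere makes it visible on all workers.
# (Code from a module would likewise need @everywhere using/include.)
@everywhere f(j) = 2.0 * j

n = 10
result = SharedArray(Float64, n)
@sync @parallel for j = 1:n
    result[j] = f(j)   # each worker fills its chunk of the shared array
end
vals = sdata(result)   # @sync has waited, so the array is fully written
```

Without the @sync, @parallel returns immediately and sdata can be read
before the workers have written anything, which would explain the zeros.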
Thanks!
Chris
On Friday, January 29, 2016 at 4:54:18 PM UTC-5, Christopher Alexander
wrote:
>
> Hello all, I have a question about the proper usage of SharedArray /
> @parallel. I am trying to use it in a particular part of my code, but I am
> not getting the expected results (instead I am getting an array of zeros
> each time).
>
> Here are the two functions that are involved:
> function partial_rollback!(lattice::TreeLattice, asset::DiscretizedAsset, t::Float64)
>     from = asset.common.time
>
>     if QuantJulia.Math.is_close(from, t)
>         return
>     end
>
>     iFrom = findfirst(lattice.tg.times .>= from)
>     iTo = findfirst(lattice.tg.times .>= t)
>
>     @simd for i = iFrom-1:-1:iTo
>         newVals = step_back(lattice, i, asset.common.values)
>         @inbounds asset.common.time = lattice.tg.times[i]
>         asset.common.values = sdata(newVals)
>
>         if i != iTo
>             adjust_values!(asset)
>         end
>     end
>
>     return asset
> end
>
> function step_back(lattice::TreeLattice, i::Int, vals::Vector{Float64})
>     newVals = SharedArray(Float64, get_size(lattice.impl, i))
>     @parallel for j = 1:length(newVals)
>         val = 0.0
>         for l = 1:lattice.n
>             val += probability(lattice.impl, i, j, l) * vals[descendant(lattice.impl, i, j, l)]
>         end
>         val *= discount(lattice.impl, i, j)
>         newVals[j] = val
>     end
>     retArray = sdata(newVals)
>
>     return retArray
> end
>
> Is that too much complexity for the parallel loop? The maximum number of
> iterations I've seen for this loop is in the 9000+ range, which is why I
> thought @parallel would be better than pmap.
> Any suggestions?
>
> Thanks!
>
> Chris
>