Here's some behavior of Julia's parallel processing that really 
surprised me. I'm not sure whether this is a bug (in which case I 
should report it) or whether I'm just missing something. A really stupid but 
minimal example:

This code fails:
@everywhere function doit(vec::Array{Int64,1})
  AccumulateVectorEntries = SharedArray(Int64, 1, 1)
  pmap(enumerate(vec)) do index_pair  # note: pmap runs this in parallel
    (i, entry) = index_pair
    AccumulateVectorEntries += entry  # the only line that differs from the version below
  end
  return sdata(AccumulateVectorEntries)
end

result=doit([1,2,3,4,5])

This code works as intended:
@everywhere function doit(vec::Array{Int64,1})
  AccumulateVectorEntries = SharedArray(Int64, 1, 1)
  pmap(enumerate(vec)) do index_pair  # note: pmap runs this in parallel
    (i, entry) = index_pair
    AccumulateVectorEntries[1,1] += entry  # indexed assignment into the SharedArray
  end
  return sdata(AccumulateVectorEntries)
end

result=doit([1,2,3,4,5])

Is this sensible given some principle of how parallel processing works, 
or am I just missing something? P.S. I'm trained as a statistician, so 
please excuse any naivete re: computing principles.
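
My current guess (which may well be off base) is that the difference is 
rebinding versus in-place mutation: += on the whole array builds a new array 
and rebinds the name AccumulateVectorEntries to it, so nothing is ever written 
into the shared buffer, while the indexed += writes into the buffer itself. 
Here's a little serial sketch of the distinction I have in mind -- no pmap 
involved, and the names A, B, C are just mine:

A = SharedArray(Int64, 1, 1)
A[1,1] = 0

B = A              # B is just another name for the same shared buffer
B += 5             # like the failing version: builds a new array and rebinds B; A is untouched
println(A[1,1])    # prints 0

C = A
C[1,1] += 5        # like the working version: writes into the shared buffer itself
println(A[1,1])    # prints 5

If that's the right mental model, it would at least explain why only the 
indexed version ends up changing the shared array, but I'd appreciate 
confirmation.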

BTW -- I'm using:

julia> versioninfo()
Julia Version 0.3.8-pre+22
Commit 5078421* (2015-04-28 09:05 UTC)
Platform Info:
  System: Linux (x86_64-amazon-linux)
  CPU: Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
  WORD_SIZE: 64
  BLAS: libmkl_rt
  LAPACK: libmkl_rt
  LIBM: libimf
  LLVM: libLLVM-3.3
