No. I've thought about maybe writing a wrapper for Spark, but only after
Julia 0.4 (and the new and improved dataframes) has landed. It also depends
on how much time I could spend on it at my day job :)
On Friday, May 1, 2015 at 2:57:57 PM UTC+2, Sebastian Good wrote:
Steven, are you working on
It'd be nice to see a distributed array implemented on top of MPI (or
similar high-performance distribution libraries), like Fortran co-arrays,
but since I'm out of academia and no longer have access to real
supercomputers, I'm actually more interested in wrappers for cloud-based
distributed computing
DistributedArray performance is pretty bad. The reason for removing them
from base was to spur their development. All I can say at this time is
that we are actively working on making their performance better.
Every parallel program has some implicit serial overhead (this is
especially
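The serial-overhead point is essentially Amdahl's law; a quick back-of-the-envelope sketch (the function name here is just for illustration):

```julia
# Amdahl's law: if a fraction s of the work is inherently serial,
# the speedup on p processors is bounded by 1 / (s + (1 - s) / p).
amdahl(s, p) = 1 / (s + (1 - s) / p)

amdahl(0.0, 8)    # 8.0: perfect scaling only when nothing is serial
amdahl(0.1, 8)    # ~4.7: even 10% serial overhead roughly halves the ideal 8x
amdahl(0.1, Inf)  # 10.0: the hard ceiling no number of processors can beat
```

So a "bad" 2x speedup on 8 processors can simply mean the serial fraction is large, not that the parallel runtime is broken.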
Also, you want map(fetch, refs), not pmap.
With that I get a better speedup (still not great, but at least 2x with 8
processors)
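For anyone reading along, the map(fetch, refs) pattern looks roughly like this (a sketch against the 0.3/0.4-era Base parallel API; chunk_sum is a made-up kernel, and on later Julia versions you'd need `using Distributed` first):

```julia
# Sketch of the map(fetch, refs) pattern (Julia 0.3/0.4-era Base API).
# chunk_sum is hypothetical, just to have something to run remotely.
addprocs(4)  # start four worker processes

@everywhere function chunk_sum(n)
    s = 0.0
    for i in 1:n
        s += sin(i)
    end
    s
end

# One @spawn per worker returns a remote reference immediately;
# the work runs in the background on the workers.
refs = [@spawn chunk_sum(10^6) for _ in 1:nworkers()]

# map(fetch, refs) just waits for each already-running call to finish.
# pmap would instead treat the refs as fresh work items to schedule,
# adding a pointless extra round of task distribution.
results = map(fetch, refs)
total = sum(results)
```

The point is that pmap is for distributing a function over a collection of inputs; once you already hold remote references, fetching is all that's left to do.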
julia> N=100; T=1000; A=rand(3,N); @time SimulationSerial(A,N,T)
elapsed time: 1.822478028 seconds (233 kB allocated)
julia> N=100; T=1000; dA=drand(3,N); @time
Yes, performance will be largely the same on 0.4.
If you have to write any performance-sensitive code at scale, MPI is really
the only option I can recommend right now. I don't know what you are trying
to do, but the MPI.jl library is a bit incomplete, so it would be great if
you used it and could
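In case it helps, a minimal MPI.jl program looks something like this (a sketch only; MPI.jl's API surface was still moving at this point, so the exact names may differ by version):

```julia
# Minimal MPI.jl hello-world sketch.
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)   # this process's rank, 0-based
size = MPI.Comm_size(comm)   # total number of ranks

println("hello from rank $rank of $size")

MPI.Barrier(comm)            # wait for all ranks before shutting down
MPI.Finalize()
```

You launch it with an MPI runner, e.g. `mpirun -np 4 julia hello_mpi.jl` (the filename is just an example).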
Hi Jake,
Jake Bolewski jakebolew...@gmail.com writes:
DistributedArray performance is pretty bad. The reason for removing
them from base was to spur their development. All I can say at this
time is that we are actively working on making their performance
better.
OK, thanks. Should I try