For a solution, take a look at this Stack Overflow thread:
http://stackoverflow.com/questions/27396024/poor-parallel-performance

In essence, the solution is to use @everywhere to define the graph data as 
a global variable on each worker process, so that pmap does not have to 
serialize the graph to the workers with every task.
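
Roughly, it looks like this (a minimal sketch: load_graph and num_vertices 
are hypothetical stand-ins for the file-reading code in the attached files, 
and centrality_mean is the function from centrality_mean.jl):

    using Distributed                  # needed on Julia 1.x; built in on the 0.x releases
    addprocs(8)                        # or start with `julia -p 8`

    @everywhere begin
        include("centrality_mean.jl")  # defines centrality_mean on every worker
        # Hypothetical loader standing in for the file-reading code in
        # test_parallel_pmap.jl; each worker builds its own copy of the graph.
        const graph = load_graph("graph.txt")
        # Named worker function: it uses the worker-local global `graph`,
        # so pmap only ever has to ship a vertex id, never the graph itself.
        centrality_for(v) = centrality_mean(graph, v)
    end

    # num_vertices is also a stand-in for however the graph reports its size.
    results = pmap(centrality_for, 1:num_vertices(graph))

The point is that centrality_for is defined on every worker, so pmap 
serializes only the function reference and a vertex id per task; the graph 
itself never leaves the process it was loaded on.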

Dejan

On Wednesday, December 10, 2014 10:58:41 AM UTC-8, Dejan Miljkovic wrote:
>
> I am getting a performance degradation after parallelizing the code that 
> calculates graph centrality. The graph is relatively large, about 100K 
> vertices, and the single-threaded version takes approximately 7 minutes. 
> As recommended on the julialang site 
> (http://julia.readthedocs.org/en/latest/manual/parallel-computing/#man-parallel-computing), 
> I adapted the code to use the pmap API to parallelize the calculation. I 
> started it with 8 processes (julia -p 8 test_parallel_pmap). To my 
> surprise, I got a 10-fold slowdown: the parallel run now takes more than 
> an hour. I noticed that it takes several minutes for the parallel 
> processes to initialize and start calculating, and even after all 8 CPUs 
> are 100% busy with the julia app, the calculation is still very slow.
>
> Attached is the Julia code:
>
> 1) test_parallel_pmap.jl reads the graph from a file and starts the 
> parallel calculation. 
>
> 2) centrality_mean.jl calculates the centrality. The code is based on 
> https://gist.github.com/SirVer/3353761
>
>
> Any suggestions on how to improve the parallel performance are greatly appreciated. 
>
> Thanks,
>
> Dejan
