Thanks, Bradley. I really like your example, and in fact I have played with pmap already. I think it is a great tool for getting into distributed computing since, as far as I know, pmap sends the different input values to different workers and communicates the results back.
In some cases shared memory access might be more feasible (such as in the example I posted above). Does anybody know how to do that in parallel?

On Tuesday, September 9, 2014 3:42:02 PM UTC-4, Alex wrote:
>
> Bradley,
>
> That's an awesome tutorial. Thanks for putting that together.
>
> On Monday, August 18, 2014 7:32:17 AM UTC-7, Bradley Setzler wrote:
>>
>> I found that the easiest way was to use two files - one file contains the
>> function to be run in parallel, and the other file uses require() to load
>> the function on the workers and pmap to call it.
>>
>> I have a working example of the two-file approach here:
>>
>> http://juliaeconomics.com/2014/06/18/parallel-processing-in-julia-bootstrapping-the-mle/
>>
>> Best,
>> Bradley
>>
>> On Wednesday, November 6, 2013 10:08:38 PM UTC-6, Lars Ruthotto wrote:
>>>
>>> I am relatively new to Julia and doing some simple experiments. So far,
>>> I am very impressed by its nice, intuitive syntax and performance. Good
>>> job!
>>>
>>> However, I have a simple question regarding parallel for loops that the
>>> manual could not answer for me. Say I am interested in parallelizing
>>> this code:
>>>
>>> a = zeros(100000)
>>> for i = 1:100000
>>>     a[i] = i
>>> end
>>>
>>> The manual says (and I verified) that
>>>
>>> a = zeros(100000)
>>> @parallel for i = 1:100000
>>>     a[i] = i
>>> end
>>>
>>> does not give the correct result. Unfortunately, it does not say (or I
>>> couldn't find it) how this can be done in Julia. Does anyone have an
>>> idea?
>>>
>>> Thanks!
>>> Lars
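To illustrate the pmap pattern Bradley describes, here is a minimal sketch in one file rather than two - the function and data are made up for illustration and are not from his tutorial. `@everywhere` plays the role of the second file loaded on each worker:

```julia
using Distributed

addprocs(2)  # spawn two local worker processes (count is arbitrary here)

# @everywhere defines the function on every worker process, standing in
# for the separate file that the two-file approach loads on each worker.
@everywhere squareplus(x) = x^2 + 1

# pmap sends each input element to an available worker, runs the function
# there, and collects the returned values in input order on the caller.
results = pmap(squareplus, 1:10)
```

Note this only helps when the work is expressed as a function of independent inputs; it does not give workers write access to a shared array.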
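For the shared-memory case, one way to make a loop like Lars's work is a SharedArray: its memory is mapped into every local worker process, so a parallel loop can write into it in place. A minimal sketch, using current Julia syntax (the `Distributed` and `SharedArrays` standard libraries, and `@distributed` where the Julia of this thread spelled it `@sync @parallel` with `SharedArray(Float64, n)`):

```julia
using Distributed, SharedArrays

addprocs(2)  # spawn two local worker processes (count is arbitrary here)

# A SharedArray is backed by memory shared with all local workers; a plain
# Array is copied to each worker, so their writes never reach the caller -
# which is why the @parallel loop over zeros(100000) gave the wrong result.
a = SharedArray{Float64}(100000)

# @distributed splits the range across workers, each writing its chunk of
# `a` in place; @sync blocks until all of them have finished.
@sync @distributed for i = 1:100000
    a[i] = i
end

# After the loop, every slot holds its own index.
```

SharedArrays only work across processes on the same machine; for a cluster spanning hosts, the pmap-style send-inputs/collect-results approach is still the way to go.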
