If you don't start any worker processes, then you certainly won't get any parallel speedup. In any case, it seems like this code doesn't work on a single process without the @parallel macro, so the first step would be to get it to work at all, before trying to parallelize it.
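
For example, assuming you want a handful of local workers (the count of 4 below is arbitrary):

    addprocs(4)    # start 4 local worker processes; equivalently, launch with `julia -p 4`
    nprocs()       # now returns 5: the master process plus the 4 workers
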
On Wed, Jun 11, 2014 at 12:00 PM, <[email protected]> wrote:

> Many things are not right.
>
>> L = 1:1000
>> stride = 5
>>
>> function dowork(i)
>>     if i < 1000:    *this colon is Python syntax, not Julia*
>>         sleep(5)
>>     else
>>         break    *you are assuming your code will be inlined, but it won't*
>>     end    *another end is missing here, to close the function*
>
> That specific piece seems weird. I'm not sure this is legal, but at the
> very least it is confusing:
>
>> i = 1
>> @sync @parallel for i in i:i+stride
>>     dowork(i)
>>     i += 1
>> end
>
> I would propose:
>
> j = 1
> ...
> @parallel for i in j:j+stride
> ...
> j += stride + 1
> ...
>
> to make things clear again. I'm not sure you'll get what you want, though.
>
> If you want dowork to break the loop, you should have it return something,
> like a boolean:
>
> if dowork(i) break end
>
> I've not spent much time coding in parallel yet; however, I'm really not
> sure @sync behaves as you expect
> <http://julia.readthedocs.org/en/latest/manual/parallel-computing/#synchronization-with-remote-references>.
> From what I've understood, @sync acts like a barrier: the @sync block will
> surrender control to the main task once every @async job has run (but
> I may well be totally wrong). In your case, if the job for i = 1500
> finishes before i = 1499, your loop will be broken without dowork(1499)
> being calculated. This is the ugliest kind of bug one can encounter: the
> kind that occurs at random.
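
To put those suggestions together, here is a rough sketch of what the chunked loop could look like, assuming dowork returns true when the work is finished and that you run, say, 4 local workers (both assumptions are mine, not from the original code). The reducer form @parallel (|) for ... collects the booleans from the workers, so the break happens in the serial outer loop rather than inside the parallel one:

    addprocs(4)                        # assumption: 4 local worker processes

    @everywhere function dowork(i)     # must be defined on every worker
        if i < 1000
            sleep(5)
            return false               # more work to do
        else
            return true                # signal that the loop should stop
        end
    end

    L = 1:1000
    stride = 5
    j = first(L)
    while j <= last(L)
        # (|) reduces the chunk's results: true if any iteration hit the stop condition
        done = @parallel (|) for i in j:min(j + stride, last(L))
            dowork(i)
        end
        done && break                  # break the outer, serial loop
        j += stride + 1
    end

As far as I can tell, the reducer form waits for the whole chunk before returning, so no @sync is needed, and whether you stop no longer depends on which iteration happens to finish first.
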
