If you use @threads and each iteration of the loop acts on a 
different part of the array, it'll be thread-safe. 
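
For instance, a minimal sketch of that safe pattern (the array and loop here are just an illustration, not from the thread):

```julia
using Base.Threads

# Each iteration writes only to its own index A[i], so no two
# threads ever touch the same memory location: this is thread-safe.
A = zeros(100)
@threads for i in 1:100
    A[i] = i^2
end
```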

I think the better examples are of what's *not* thread-safe. If you're doing 
something like looping over tmp += A[i], then multiple threads can read the 
same value of tmp and overwrite each other's updates, so components of A 
get dropped from the sum and you end up with the wrong answer. 

Another problem comes up with globals. You might have a loop using 
BigFloats where the precision gets changed if a certain condition is met. 
However, since BigFloat precision is a global setting, changing it changes 
the precision for all of the threads. This becomes a problem because the 
behavior is non-deterministic: which iterations have already run when the 
precision changes depends on scheduling, i.e. on which threads happened to 
run slightly faster or slower. 
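
A sketch of the racy summation and one common fix (the function names unsafe_sum/safe_sum are mine, not from the thread):

```julia
using Base.Threads

A = rand(1_000_000)

# UNSAFE: tmp is shared by all threads. The read-modify-write in
# tmp += A[i] is not atomic, so threads overwrite each other's
# updates and components of A are silently lost from the sum.
function unsafe_sum(A)
    tmp = 0.0
    @threads for i in eachindex(A)
        tmp += A[i]   # data race
    end
    return tmp
end

# SAFE: give each thread its own accumulator slot and combine
# the partial sums at the end, after the parallel loop is done.
function safe_sum(A)
    partials = zeros(nthreads())
    @threads for i in eachindex(A)
        partials[threadid()] += A[i]  # each thread owns one slot
    end
    return sum(partials)
end
```

Per-thread accumulators avoid the race because no memory location is written by more than one thread inside the loop.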

On Thursday, September 1, 2016 at 11:17:25 AM UTC-7, digxx wrote:
>
> Do you have a "simple" example of how to write something "thread safe" if 
> I plan to use @threads ?
>
> Am Mittwoch, 31. August 2016 07:35:06 UTC+2 schrieb Chris Rackauckas:
>>
>> That's pretty much it. Threads are shared memory, which have less 
>> overhead (and are thus faster), and can share variables between them. 
>> @parallel is multiprocessing, i.e. each worker process has its own set of 
>> defined variables which do not overlap, and data has to be transferred 
>> between them. @parallel has the advantage that it does not have to be 
>> local: different processes can be on completely different computers/nodes. 
>> But it has a higher startup cost and is thus better suited to larger 
>> parallelizable problems.
>>
>> However, as Yichao states, @threads is still experimental. For one, since 
>> the memory is shared, you have to make sure everything is 
>> "thread-safe" in order to be correct and not fail (example: two threads 
>> can't write to the same spot at once or else it can be non-deterministic as 
>> to what the result is). But also, the internals still have some rough 
>> edges. Just check out the repo for bug reports and you'll see that things 
>> can still go wrong, or that your performance can even decrease due to 
>> type-inference bugs. <https://github.com/JuliaLang/julia/issues/17395> Thus 
>> it is something to play around with, but it definitely isn't something that 
>> you should put into production yet (though in many cases it is already 
>> looking pretty good!).
>>
>> On Tuesday, August 30, 2016 at 5:46:57 PM UTC-7, Andrew wrote:
>>>
>>> I have also been wondering this. I tried @threads yesterday and it got 
>>> me around a 4-fold speedup on a loop which applied a function to each 
>>> element in an array, and I conveniently didn't need to bother using 
>>> SharedArrays as I would with @parallel.
>>>
>>> On Tuesday, August 30, 2016 at 7:20:36 PM UTC-4, digxx wrote:
>>>>
>>>> Sorry if there is already some information on this though I didn't find 
>>>> it...
>>>> So: What is the difference between these?
>>>> I have used @parallel so far for parallel loops but recently saw this 
>>>> @threads in some video and I was wondering what the difference is?
>>>> Could anyone elaborate or give me a link with some info?
>>>> Thanks digxx
>>>>
>>>
