Wait-free, lock-free, and obstruction-free are well defined terms. They 
are all non-blocking, meaning that suspended threads cannot indefinitely 
prevent all others from making progress, but they differ in the forward 
progress expectations of individual threads. The wikipedia entry does a 
fair job of explaining what each means. It is forward progress in a bounded 
number of steps that is key; "fairness" is not part of any of 
them. You can be extremely unfair (as in making forward progress at 
1,000,000,000:1 ratios) and wait-free at the same time.

There is a difference between the wait-free/lock-free classification of an 
operation/instruction and the classification of a mechanism/algorithm using 
that operation/instruction...

Both the LOCK XADD and LOCK CAS operations are wait-free. There 
is nothing any other processor can do to prevent your processor from making 
forward progress and eventually (in a bounded number of steps) executing 
either instruction. In fact, [at the very least] all user-mode x86 CPU 
instructions are trivially wait-free; otherwise very bad things could be 
done by user-mode code...

The key difference between a LOCK XADD and a LOCK CAS is that the LOCK XADD 
will actually and unconditionally force a visible change to the shared 
memory state in a way that no other processor can prevent, while the shared 
memory effect of a LOCK CAS operation is conditional, and will only occur 
if the compare shows that the expected value and the memory value are the 
same. Simple (but by no means all) uses of CAS often perform retry loops, 
resulting in mechanisms that are lock-free but not wait-free.
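To make the distinction concrete, here is a minimal Java sketch (on x86, 
AtomicLong's compareAndSet is implemented with LOCK CMPXCHG and addAndGet 
with LOCK XADD). The CAS retry loop is lock-free but not wait-free, since a 
thread can in principle lose the race indefinitely; the XADD-based 
increment completes in a bounded number of steps:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    static final AtomicLong counter = new AtomicLong();

    // Lock-free but NOT wait-free: a thread can keep losing the CAS race
    // if other threads change the value between its read and its
    // compareAndSet. Some thread always wins, but no bound per thread.
    static long incrementWithCasLoop() {
        long observed;
        do {
            observed = counter.get();
        } while (!counter.compareAndSet(observed, observed + 1));
        return observed + 1;
    }

    // Wait-free: on x86 this is a single LOCK XADD, which unconditionally
    // applies the increment; no retry loop is needed.
    static long incrementWithXadd() {
        return counter.addAndGet(1);
    }
}
```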

It is easy to see how one can use LOCK XADD operations to build wait-free 
mechanisms. E.g. implementing a wait-free atomic counter using XADD is 
trivial. More complicated synchronization primitives can be wait-free too: 
e.g. my WriterReaderPhaser 
<http://stuff-gil-says.blogspot.com/2014/11/writerreaderphaser-story-about-new.html>
is wait-free on architectures that support atomic increment operations, but 
(in my current implementation) is only lock-free on architectures that 
force us to resort to a CAS. It is somewhat harder to construct wait-free 
mechanisms using LOCK CAS, but it is by no means impossible, and 
usually just means spending some more memory per concurrent thread or CPU. 
E.g. see "Wait-Free Queues With Multiple Enqueuers and Dequeuers" 
<http://www.cs.technion.ac.il/~erez/Papers/wfquque-ppopp.pdf>, where 
wait-free queues are implemented using CAS constructs.
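As a sketch of the "spend some memory per concurrent thread" idea (this is 
a hypothetical striped counter of my own for illustration, not the 
construction from the queue paper): if each thread owns its own slot, its 
increments never contend with another thread's, so even a CAS-backed 
increment succeeds on the first attempt and every operation completes in a 
bounded number of steps:

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical striped counter: one slot per thread, so increments are
// uncontended and wait-free, at the cost of maxThreads slots of memory.
public class StripedCounter {
    private final AtomicLongArray slots;

    public StripedCounter(int maxThreads) {
        slots = new AtomicLongArray(maxThreads);
    }

    // threadId must be a unique index in [0, maxThreads).
    public void increment(int threadId) {
        // Only this thread ever writes this slot, so there is no CAS
        // race to lose: bounded steps regardless of other threads.
        slots.incrementAndGet(threadId);
    }

    // The reader sums all slots; the total reflects some state during
    // the scan (fine for a statistical counter).
    public long sum() {
        long total = 0;
        for (int i = 0; i < slots.length(); i++) {
            total += slots.get(i);
        }
        return total;
    }
}
```

This is the same trade the paper makes in spirit: more per-thread memory in 
exchange for a per-thread progress bound. (JDK's LongAdder applies a 
similar striping idea to reduce contention.)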

On Friday, August 24, 2018 at 10:00:22 PM UTC-7, Peter Veentjer wrote:
>
> I'm polishing up my lock free knowledge and one question keeps on bugging 
> me.
>
> The question is about the LOCK XADD and why it is considered to be wait 
> free.
>
> AFAIK for wait freedom there needs to be some fairness. 
>
> So imagine a concurrent counter using a spin on a CAS to increment a 
> counter: at least 1 thread wins and makes progress. Therefore this 
> implementation is lock free. It isn't wait free because some thread could 
> be spinning without bound. The problem here is that there is no fairness.
>
> Now imagine this counter were implemented using a LOCK XADD and 
> therefore there is no need for a loop. What is guaranteeing that every 
> thread is going to make progress in a bounded number of steps? E.g. could it 
> happen that one thread is continuously denied exclusive access to the 
> memory and therefore it won't complete in a bounded number of steps? Or do the 
> requests get stored in some kind of list and processed in 
> that order? That order would provide fairness and then it would be wait free.
>
> I have been looking at the Intel documentation of LOCK XADD; but it 
> remains unclear.
>

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.