On 08/10/2017 09:27 AM, Waiman Long wrote:
> On 08/10/2017 07:50 AM, Peter Zijlstra wrote:
>> On Wed, May 24, 2017 at 09:38:28AM -0400, Waiman Long wrote:
>>> # of thread    w/o patch       with patch      % Change
>>> -----------    ---------       ----------      --------
>>>      4       4053.3 Mop/s    4223.7 Mop/s       +4.2%
>>>      8       3310.4 Mop/s    3406.0 Mop/s       +2.9%
>>>     12       2576.4 Mop/s    2674.6 Mop/s       +3.8%
>> Waiman, could you run those numbers again but with the below 'fixed'?
>>> @@ -361,6 +361,13 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>>>  	 * observe its next->locked value and advance itself.
>>>  	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
>>> +	 *
>>> +	 * The write to next->locked in arch_mcs_spin_unlock_contended()
>>> +	 * must be ordered before the read of pn->state in the cmpxchg()
>>> +	 * below for the code to work correctly. However, this is not
>>> +	 * guaranteed on all architectures when the cmpxchg() call fails.
>>> +	 * Both x86 and PPC can provide that guarantee, but other
>>> +	 * architectures may not.
>>>  	 */
>>>  	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
>> Ideally the Power CPU can optimize back-to-back SYNC instructions, but
>> who knows...
> Yes, I can run the numbers again. However, the change here is in the
> slowpath. My current patch optimizes the fast path only, and I think my
> original test doesn't stress the slowpath at all. I will have to make
> some changes to stress the slowpath.
Looking at past emails, I remember why I put the comment there. Putting
an smp_mb() here will definitely have a negative performance impact on
x86. So I put the comment here to remind myself that the current code
may not work on ARM64.
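To make this concrete, a minimal sketch (not the actual patch) of the
conservative fix in pv_kick_node() would be an explicit full barrier
ahead of the cmpxchg():

	/*
	 * Sketch only: force the next->locked store done in
	 * arch_mcs_spin_unlock_contended() to be ordered before the
	 * pn->state load even when the cmpxchg() below fails. On x86
	 * the cmpxchg() is already a full barrier, so this smp_mb()
	 * is pure overhead there -- hence my concern above.
	 */
	smp_mb();
	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
		return;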
To fix that, my current thought is to have a cmpxchg variant that
guarantees ordering on both success and failure, for example
cmpxchg_ordered(). That way, we only need to insert the barrier on
architectures that need it. That will be a separate patch rather than
being integrated into this one.
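A rough sketch of what I have in mind -- cmpxchg_ordered() does not
exist yet, and the generic fallback below just mirrors the existing
__atomic_op_fence() pattern in include/linux/atomic.h:

	/*
	 * Fully ordered on both success and failure. Architectures
	 * whose plain cmpxchg() already orders the failure case
	 * (e.g. x86, PPC) can override this with plain cmpxchg() so
	 * they pay no extra cost.
	 */
	#ifndef cmpxchg_ordered
	#define cmpxchg_ordered(ptr, old, new)				\
	({								\
		typeof(*(ptr)) __ret;					\
		smp_mb__before_atomic();				\
		__ret = cmpxchg_relaxed((ptr), (old), (new));		\
		smp_mb__after_atomic();					\
		__ret;							\
	})
	#endif

For pv_kick_node() only the leading barrier actually matters, but a
generic primitive should probably be fully ordered on both sides to
avoid surprises elsewhere.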