On Tue, Apr 28, 2015 at 01:38:41PM +0000, Zheng, Lv wrote:
> > -   raw_spin_lock(&ghes_nmi_lock);
> > +   if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
> > +           return ret;
> > +
> 
> if (atomic_cmpxchg(&ghes_in_nmi, 0, 1))
>       return ret;

Ok, now I understand what you mean.

We absolutely want to use atomic_add_unless() because it saves us the
expensive

        LOCK; CMPXCHG

if the value was already 1. Which is exactly what this patch is trying
to avoid - a thundering herd of cores CMPXCHGing a global variable.

I.e.,

        movl    ghes_in_nmi(%rip), %ecx # MEM[(const int *)&ghes_in_nmi], c
        cmpl    $1, %ecx        #, c
        je      .L311   #,                              <--- exit here if ghes_in_nmi == 1.
        leal    1(%rcx), %edx   #, D.37163
        movl    %ecx, %eax      # c, c
#APP
# 177 "./arch/x86/include/asm/atomic.h" 1
        .pushsection .smp_locks,"a"
.balign 4
.long 671f - .
.popsection
671:
        lock; cmpxchgl %edx,ghes_in_nmi(%rip)   # D.37163, MEM[(volatile u32 *)&ghes_in_nmi]
# 0 "" 2
#NO_APP
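
For reference, a minimal C sketch of the entry/exit pattern the hunk above
implements (the function name and the body around the atomic ops are
illustrative stand-ins, not the actual ghes_notify_nmi() from the driver;
only the atomic_add_unless()/atomic_dec() pairing is the point):

        #include <linux/atomic.h>

        /* 0 == no CPU is handling the GHES NMI, 1 == one CPU already is. */
        static atomic_t ghes_in_nmi = ATOMIC_INIT(0);

        static int ghes_nmi_entry_sketch(void)
        {
                int ret = 0;    /* the real handler returns NMI_DONE here */

                /*
                 * atomic_add_unless() does a plain load of ghes_in_nmi first
                 * and bails out if it is already 1, so the herd of CPUs
                 * entering the NMI handler never reaches the LOCK; CMPXCHG.
                 */
                if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
                        return ret;

                /* ... process the hardware error records here ... */

                atomic_dec(&ghes_in_nmi);       /* let the next NMI in */
                return ret;
        }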

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.