On Wed, May 18, 2016 at 22:51:09 +0300, Sergey Fedorov wrote:
> On 14/05/16 06:34, Emilio G. Cota wrote:
> > +static inline void qemu_spin_lock(QemuSpin *spin)
> > +{
> > +    while (atomic_test_and_set_acquire(&spin->value)) {
> 
> A possible optimization might be using unlikely() here, compare:

Testing with a spinlock-heavy workload shows a small improvement:

taskset -c 0 tests/qht-bench \
        -d 5 -n 1 -u 100 -k 4096 -K 4096 -l 4096 -r 4096 -s 4096

I'm running this 10 times. Results in Mops/s:
Head                    31.283 +- 0.190557661148069
while (unlikely)        31.397 +- 0.107501937967028
if (likely) + while     31.524 +- 0.219605707272527 

The last case does:
    if (likely(__sync_lock_test_and_set(&spin->value, true) == false)) {
        return;
    }
    while (__sync_lock_test_and_set(&spin->value, true)) {
        while (atomic_read(&spin->value)) {
            cpu_relax();
        }
    }

Although I don't like how this will do the TAS twice if the lock is
contended.

I'll just add unlikely() to the while().
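
That is, something like this (just a sketch; the inner read + cpu_relax()
loop is the same one used in the snippet above):

    static inline void qemu_spin_lock(QemuSpin *spin)
    {
        /* unlikely(): the uncontended acquire is expected to succeed */
        while (unlikely(atomic_test_and_set_acquire(&spin->value))) {
            /* spin on a plain read to avoid hammering the line with TAS */
            while (atomic_read(&spin->value)) {
                cpu_relax();
            }
        }
    }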

> > +static inline int qemu_spin_trylock(QemuSpin *spin)
> > +{
> > +    if (atomic_test_and_set_acquire(&spin->value)) {
> > +        return -EBUSY;
> > +    }
> > +    return 0;
> > +}
> 
> Here we could also benefit from unlikely(), I think.

I never liked this branch in _trylock, because there will
be a branch around the function anyway. How about:

static inline bool qemu_spin_trylock(QemuSpin *spin)
{
    return __sync_lock_test_and_set(&spin->value, true);
}

We don't return -EBUSY, which nobody cares about anyway; callers
will still do if (!trylock). With this we save a branch,
and let callers sprinkle likely/unlikely based on how contended
they expect the lock to be.
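
For instance, a caller would end up with something like this
(hypothetical caller, only to show the calling convention; false from
trylock means we got the lock, and I'm assuming the usual
qemu_spin_unlock() counterpart from this patch):

    static bool do_something_locked(QemuSpin *spin)
    {
        /* likely(): assume the lock is rarely contended at this call site */
        if (likely(!qemu_spin_trylock(spin))) {
            /* ... critical section ... */
            qemu_spin_unlock(spin);
            return true;
        }
        return false; /* lock was busy */
    }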

                Emilio

