On Tue, Aug 11, 2009 at 12:27 PM, Jamie Lokier <ja...@shareable.org> wrote:

> Jim Donelson wrote:
> > >It's actually not the best, because when it returns "did not match"
> > >you have to loop and try again.
> >
> > Not sure what else you would do?  The purpose of a spin lock is to
> > avoid a more expensive kernel call
>
> It's not a spinlock.
>
> This loop is for a different reason.  You can tell it's different
> because it spins when *unlocking* too; a spinlock never does that.
>
> The "did not match" case for compare-exchange is to simulate an atomic
> operation like this example:
>
>    int atomic_dec_test(unsigned *mem)
>    {
>        unsigned old, new;
>
>        do {
>            old = *mem;
>            new = old - 1;
>        } while (!compare_exchange(mem, old, new));
>
>        return (new != 0);
>    }
>
>    void mutex_unlock(unsigned *mem)
>    {
>        if (atomic_dec_test(mem))
>            futex(FUTEX_WAKE, mem);
>    }
>

I'd like to see the code for compare_exchange and the lock function.
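(For concreteness, here is a minimal sketch of what compare_exchange and a lock built on it could look like - assuming GCC/Clang __sync builtins and the Linux futex syscall; the names and the 0/1 locking scheme are illustrative guesses, not Jamie's actual code.)

```c
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Returns nonzero if *mem was equal to 'oldval' and has been atomically
 * replaced by 'newval'; zero means "did not match", so callers retry. */
static int compare_exchange(unsigned *mem, unsigned oldval, unsigned newval)
{
    return __sync_bool_compare_and_swap(mem, oldval, newval);
}

/* 0 = unlocked, 1 = locked.  A contended locker sleeps in the kernel
 * until the word may have changed, then retries the compare-exchange. */
static void mutex_lock(unsigned *mem)
{
    while (!compare_exchange(mem, 0, 1))
        syscall(SYS_futex, mem, FUTEX_WAIT, 1, NULL, NULL, 0);
}
```

Note the fast path never enters the kernel: an uncontended lock is a single compare-exchange.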


>
> > > The amount of looping depends on contention level.
> >
> > The real secret is to reduce the time spent holding the mutex in general.
>
> Think about the code above.  The amount of looping in
> atomic_dec_test() is not reduced by reducing the time spent
> holding the mutex.
>
> If you hold the mutex for shorter times in more places (moving the
> mutex from large regions to small ones), paradoxically it will
> *increase* the average amount of looping in atomic_dec_test() and the
> other atomic ops.  Usually not by enough to care, but it depends on
> the program.
>
> That's why compare-exchange (and load-locked/store-conditional CPU
> instructions), though universally usable, doesn't have the same
> performance characteristics as atomic read-modify-write ops.  Though
> atomic ops are sometimes implemented as load-locked/store-conditional
> in the chip anyway... it's a subtle area.
>
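To make the distinction concrete, here is a sketch of the two styles side by side, using GCC/Clang __atomic builtins (my example, not code from the thread): the compare-exchange version may loop under contention, while the read-modify-write version never retries in source code.

```c
/* Increment via a compare-exchange retry loop: may iterate several
 * times if other threads update the counter concurrently. */
static unsigned cas_increment(unsigned *mem)
{
    unsigned oldval, newval;

    do {
        oldval = __atomic_load_n(mem, __ATOMIC_RELAXED);
        newval = oldval + 1;
    } while (!__atomic_compare_exchange_n(mem, &oldval, newval,
                                          0 /* strong */,
                                          __ATOMIC_SEQ_CST,
                                          __ATOMIC_RELAXED));
    return newval;
}

/* Increment via a single atomic read-modify-write: no visible loop,
 * though the CPU may use LL/SC internally, as noted above. */
static unsigned rmw_increment(unsigned *mem)
{
    return __atomic_add_fetch(mem, 1, __ATOMIC_SEQ_CST);
}
```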
> > The purpose of a spin lock is to avoid a more expensive kernel call
> > if the mutex is released quickly (or not taken at all). Presumably
> > you enter the kernel after n tries and sleep so that you are not
> > using up quanta while spinning.
>
> That doesn't work on a single processor.  While you are spinning,
> it's impossible for the other thread to release the mutex until you
> are preempted, so the potential benefit from spinning is marginal and
> often outweighed by the benefit of not spinning.
>

Of course it does - sleeping on a spinlock means "preempt me now".
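The scheme being described - spin a bounded number of times in userspace, then ask the kernel to put us to sleep - might look like this (a sketch; SPIN_TRIES and the futex usage are illustrative, not from the thread):

```c
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPIN_TRIES 100   /* arbitrary; real code tunes or adapts this */

static void spin_then_sleep_lock(unsigned *mem)
{
    int i;

    /* Cheap user-space attempts first, hoping the owner releases soon. */
    for (i = 0; i < SPIN_TRIES; i++)
        if (__sync_bool_compare_and_swap(mem, 0, 1))
            return;

    /* Give up spinning: sleep in the kernel until *mem may have changed,
     * which also lets the owner run on a single processor. */
    while (!__sync_bool_compare_and_swap(mem, 0, 1))
        syscall(SYS_futex, mem, FUTEX_WAIT, 1, NULL, NULL, 0);
}
```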

>
> > As for priority inversion (where a lower priority task gets to execute
> > because is it holding a mutex that a higher priority thread is waiting
> > on) that should be addressed in the kernel. The lower priority thread
> > should get temporary priority elevation.
>
> Yes, that's what the robust PI mutexes implementation does.
>
> It uses futexes in userspace to avoid entering the kernel, just like
> the standard futex algorithms, but applies priority inheritance when
> it does enter the kernel, and, cleverly, if a thread or process
> crashes while holding the mutex, it is still safely recoverable.
>
> -- Jamie
> _______________________________________________
> uClinux-dev mailing list
> uClinux-dev@uclinux.org
> http://mailman.uclinux.org/mailman/listinfo/uclinux-dev
> This message was resent by uclinux-dev@uclinux.org
> To unsubscribe see:
> http://mailman.uclinux.org/mailman/options/uclinux-dev
>