Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>  > The root of all this: When Nick Piggin posted his first suggestion for
>  > ticket spinlocks on LKML, I immediately liked the idea. For details
>  > check LWN [1], in a nutshell: This algorithm enforces strict FIFO order
>  > for the admission to contended spinlocks, thus it improves the
>  > determinism on SMP systems with more than 2 CPUs.
>  > 
>  > Meanwhile, ticket spinlocks are mainline (2.6.25). But that version has
>  > two drawbacks for us: it doesn't support nesting like xnlock does, and it
>  > is x86-only so far.
>  > 
>  > So I designed a version for Xenomai which is both nestable and
>  > arch-independent. It is certainly not as efficient as mainline's version,
>  > but our code path stresses the locking code differently anyway.
>  > 
>  > This thing here /seems/ to work, but I'm lacking CPUs at home to test.
>  > You can't truly stress ticket locks with only a single dual-core :-/.
>  > QEMU runs into a live-lock with -smp 2, this patch applied and two
>  > moderate latency loops, but that might be an artifact of its
>  > single-threaded VCPU scheduling. And kvm currently locks up under SMP
>  > even without any change, but kvm and SMP is a story of its own. There is
>  > hope: 16-way is waiting at work... :)
> Xenomai uses a Big Kernel Lock, an approach known to not scale very
> well. So, if we want to scale correctly on machines with many cpus, we
> should change our locking strategy first.

Per-cpu IPC objects, per-cpu nklock - I'm all with you! But that's stuff
for a massive restructuring we could schedule for Xenomai 3.

This thing here surely does not help to scale RT loads across 16 CPUs or
more. But it already pays off determinism-wise with 3 or 4 CPUs involved
in RT jobs with moderate contention on nklock. It is nothing for 2-way
boxes (except for testing), where the existing algorithm is more efficient
(and where you don't have the risk of unfair lock admission anyway).


Xenomai-core mailing list