Philippe Gerum wrote:
> ...
> Read my mail, without listening to your own grumble at the same time,
> you should see that this is not a matter of being right or wrong, it is
> a matter of who needs what, and how one will use Xenomai. Your grumble
> does not prove anything unfortunately, otherwise everything would be
> fixed since many moons.
Why things remain unfixed has something to do with their complexity. RPI
is a complex thing AND it is a mechanism separate from the core (that's
why I was suggesting to reuse the PI code where possible - something
that has already been integrated for many moons).
> What I'm suggesting now, so that you can't tell the rest of the world
> that I'm such an old and deaf cranky meatball, is that we do place RPI
> under strict observation until the latest 2.4-rc is out, and we would
> decide at this point whether we should change the default value for the
> skins for which it makes sense (both for v2.3.x and 2.4). Obviously,
> this would only make sense if key users actually give hell to the 2.4
> testing releases (Mathias, the world is watching you).
OK, let's go through this another time, this time under the motto "get
the locking right". As a start (and a help for myself), here comes an
overview of the scheme the final version may expose - as long as there
are separate locks:
gatekeeper_thread / xnshadow_relax:
  rpilock, followed by nklock
  (while xnshadow_relax puts both under irqsave...)
xnshadow_unmap:
  nklock, then rpilock nested
xnshadow_start:
  rpilock, followed by nklock
xnshadow_renice:
  nklock, then rpilock nested
schedule_event:
  only rpilock
setsched_event:
  nklock, followed by rpilock, followed by nklock again
And then there is xnshadow_rpi_check which has to be fixed to:
  nklock, followed by rpilock (here was our lock-up bug)
That's a scheme which /should/ be safe. Unfortunately, I see no way to
get rid of the remaining nestings.
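Just to spell out how I read that table (an illustration only, with
plain pthread spinlocks standing in for nklock and rpilock; this is not
nucleus code, and the real locks are of course taken with interrupts
off): "followed by" means the first lock is dropped before the second
one is taken, so those paths hold at most one lock at a time, while the
"nested" paths always keep nklock as the outer lock. As long as every
path that holds both locks at once respects that order, there is no
ABBA cycle:

/*
 * Illustration only -- not nucleus code. Plain pthread spinlocks stand
 * in for nklock and rpilock.
 */
#include <pthread.h>

static pthread_spinlock_t nklock_stub;   /* stand-in for nklock  */
static pthread_spinlock_t rpilock_stub;  /* stand-in for rpilock */

/* "rpilock, followed by nklock": sequential, never held together, so
 * this path imposes no ordering constraint (gatekeeper/relax/start,
 * and rpi_check once fixed, just with the locks the other way round). */
static void sequential_path(void)
{
	pthread_spin_lock(&rpilock_stub);
	/* ... touch RPI state only ... */
	pthread_spin_unlock(&rpilock_stub);

	pthread_spin_lock(&nklock_stub);
	/* ... touch scheduler state only ... */
	pthread_spin_unlock(&nklock_stub);
}

/* "nklock, then rpilock nested": whenever both are held at once,
 * nklock is the outer lock (unmap/renice). */
static void nested_path(void)
{
	pthread_spin_lock(&nklock_stub);
	pthread_spin_lock(&rpilock_stub);
	/* ... both protected states consistent here ... */
	pthread_spin_unlock(&rpilock_stub);
	pthread_spin_unlock(&nklock_stub);
}

int main(void)
{
	pthread_spin_init(&nklock_stub, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&rpilock_stub, PTHREAD_PROCESS_PRIVATE);
	sequential_path();
	nested_path();
	return 0;
}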
And I still doubt we are gaining much from the lock split-up on SMP
(for UP it is pointless anyway due to xnshadow_relax). Under heavy
migration activity across multiple cores/CPUs, we now regularly contend
for two locks in the hot paths instead of just the one everyone has to
go through anyway. And while we obviously don't win a dime in the worst
case, the average reduction in spinning time trades off against
additional atomic (cache-line bouncing) operations. Were you able to
measure any improvement?
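If someone wants a quick feeling for that trade-off before profiling
the real thing, here is a toy userland sketch (nothing Xenomai-specific,
all names and numbers made up): the same hot loop once under a single
lock and once with a second lock nested inside, so the extra atomic
operations and cache-line traffic of the split become visible:

/* Toy microbenchmark sketch, illustration only. Build with -lpthread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define THREADS 4
#define LOOPS   1000000L

static pthread_spinlock_t lock_a, lock_b; /* stand-ins for nklock/rpilock */
static int use_split;                     /* 0: single lock, 1: both locks */
static volatile long shared_a, shared_b;

static void *worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < LOOPS; i++) {
		pthread_spin_lock(&lock_a);
		if (use_split) {
			/* split scheme: second lock nested in the hot path */
			pthread_spin_lock(&lock_b);
			shared_b++;
			pthread_spin_unlock(&lock_b);
		} else
			shared_b++; /* single-lock scheme covers both states */
		shared_a++;
		pthread_spin_unlock(&lock_a);
	}
	return NULL;
}

static double run(int split)
{
	pthread_t tids[THREADS];
	struct timespec t0, t1;

	use_split = split;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < THREADS; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (int i = 0; i < THREADS; i++)
		pthread_join(tids[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
	pthread_spin_init(&lock_a, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&lock_b, PTHREAD_PROCESS_PRIVATE);
	printf("single lock: %.3f s\n", run(0));
	printf("split locks: %.3f s\n", run(1));
	return 0;
}

Of course such a throughput toy says nothing about worst-case latency,
which is what the nucleus actually cares about.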
Jan
