Jan Kiszka wrote:
Philippe Gerum wrote:

Jan Kiszka wrote:

Besides this, we then may want to consider if introducing a pending
ownership of synch objects is worthwhile to improve efficiency of PIP
users. Not critical, but if it comes at a reasonable price... Will try
to draft something.

I've committed an implementation of this stuff to the trunk, tested on
your testcase over the simulator. So far, it's ok.

I'll give up, you are too fast for me. ;)

The only thing that
should change downstream compared to the previous behaviour is that
xnsynch_sleep_on() might unblock immediately at skin level without any
prior xnsynch_wakeup_sleeper() call, since the blocking call does the
stealing during the pending-ownership window.

This means that skins now _must_ fix their internal state when unblocked
from xnsynch_sleep_on() if they track their own resource-owner field,
for instance, since they might become the owner of such a resource
without any unlock/release routine being called at the skin level. I've
fixed a couple of skins for that purpose (RTDM not checked, btw), but it
would be safer if you could double-check the impact of this change on
the interfaces you've crafted.
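To make the hazard concrete, here is a minimal, self-contained simulation of the pending-ownership window; it is not actual Xenomai code, and all names (struct synch, wakeup_sleeper, sleep_on, skin_mutex, skin_lock) are illustrative stand-ins for the nucleus/skin split. It shows why a skin that tracks its own owner field must resync that field when the blocking call returns with ownership stolen:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a PIP-enabled synch object with pending
 * ownership. This is a sketch, not the real nucleus implementation. */
struct task { int prio; };
struct synch {
    struct task *owner;    /* effective owner */
    struct task *pending;  /* woken sleeper that has not resumed yet */
};

/* Roughly what waking a sleeper does: the resource is reserved for the
 * woken task, but not yet claimed by it. */
static void wakeup_sleeper(struct synch *s, struct task *t)
{
    s->owner = NULL;
    s->pending = t;
}

/* Roughly what a blocking acquire does: a higher-priority claimer may
 * steal the resource during the pending-ownership window and return
 * immediately as the new owner. Returns 1 if the caller now owns it. */
static int sleep_on(struct synch *s, struct task *t)
{
    if (s->owner == NULL &&
        (s->pending == NULL || t->prio > s->pending->prio)) {
        s->owner = t;      /* free, or stolen from the pending owner */
        s->pending = NULL;
        return 1;
    }
    return 0;              /* would block in the real nucleus */
}

/* Skin-level wrapper tracking its own owner field: it must resync that
 * field after sleep_on() returns, since ownership can be acquired here
 * without any unlock/release routine running at the skin level. */
struct skin_mutex {
    struct synch core;
    struct task *skin_owner; /* the skin's private bookkeeping */
};

static int skin_lock(struct skin_mutex *m, struct task *t)
{
    if (!sleep_on(&m->core, t))
        return 0;
    m->skin_owner = t;       /* the fix: resync on immediate grant too */
    return 1;
}
```

Without the final assignment in skin_lock(), the skin's owner field would go stale whenever a high-priority task steals the object from a lower-priority pending owner.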

Well, if this means that once you have called xnsynch_wakeup_sleeper()
for some lower-prio task, you must call xnsynch_sleep_on() to steal it
for a higher-prio task, then RTDM needs fixing (it only sets a private
lock bit in this case).

No need to call xnsynch_sleep_on() more often than usual; just have a look at rt_mutex_lock() in native/mutex.c and follow the code labeled grab_mutex, which should give you a proper illustration of the issue.
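For RTDM's case, the same pattern applies to its private lock bit rather than an owner pointer. The sketch below is loosely modeled on the grab-path idea referred to above; every name in it (struct sem, try_grab, rtdm_style_lock) is hypothetical, and the core-object model is simplified. The point is that the skin-private flag must be set on the immediate-grant path too, not only after a wakeup/release sequence:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins; not the real RTDM or nucleus API. */
struct task { int prio; };
struct sem {
    struct task *owner;
    struct task *pending;  /* woken sleeper, not yet resumed */
    bool lock_bit;         /* skin-private "locked" flag */
};

/* Simplified blocking acquire: an immediate grant may be a steal from
 * a lower-priority pending owner. */
static bool try_grab(struct sem *s, struct task *t)
{
    if (s->owner == NULL &&
        (s->pending == NULL || t->prio > s->pending->prio)) {
        s->owner = t;
        s->pending = NULL;
        return true;       /* immediate grant, possibly a steal */
    }
    return false;          /* would block for real */
}

static bool rtdm_style_lock(struct sem *s, struct task *t)
{
    if (!try_grab(s, t))
        return false;
    /* Grab path: the private lock bit must be updated here as well,
     * since no release routine ran at the skin level beforehand. */
    s->lock_bit = true;
    return true;
}
```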

This change only affects PIP-enabled synchronization objects in a
reasonably limited manner and seems to behave properly, but please, give
this code hell on your side too.

Will do.


PS: The usage can also be checked via the cross-reference:



Xenomai-core mailing list