Philippe Gerum wrote:
> On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
>>Philippe Gerum wrote:
>>>On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
>>>>So it clearly states that a non-preemptible task may block (and
>>>>rescheduling occurs in this case).
>>>Ok, so this is a must fix. Will do. Thanks for reporting.
>>I had a look at the OSEK specification, it also has non-preemptible
>>tasks. So, I guess we should add an xnpod_locked_schedule that simply does
>>if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
>>      xnpod_unlock_sched();
>>      xnpod_lock_sched();
>>} else
>>      xnpod_schedule();
>>and call this xnpod_locked_schedule() instead of xnpod_schedule() in
>>these skins.
> The more I think of it, the more it becomes obvious that the current
> implementation of the scheduler locks is uselessly restrictive.
> Actually, the only thing we gain from not allowing threads to block
> while holding this kind of lock is the opportunity to panic at best if
> the debug switch is on, or to go badly south if not.
> Even the pattern above would not solve the issue in fact, because things
> like xnsynch_sleep_on() which fire a rescheduling call would have to
> either get a special argument telling us about the policy in this
> matter, or forcibly unlock the scheduler behind the curtains before
> calling xnpod_suspend() internally. While we are at it, we would be
> better off incorporating the latter at the core, and assume that
> callers/skins that do _not_ want to allow sleeping schedlocks did the
> proper sanity checks to prevent this before running the rescheduling
> procedure. Others would just benefit from the feature.
> In short, the following patch against 2.3.0 stock fixes the issue,
> allowing threads to block while holding the scheduler lock. 

Ok, but this means that the skins which use XNLOCK with the previous
meaning need fixing.

                                                 Gilles Chanteperdrix

Xenomai-core mailing list
