On 2011-07-12 14:06, Gilles Chanteperdrix wrote:
> On 07/12/2011 01:58 PM, Jan Kiszka wrote:
>> On 2011-07-12 13:56, Jan Kiszka wrote:
>>> However, this parallel unsynchronized execution of the gatekeeper and
>>> its target thread leaves an increasingly bad feeling on my side. Did we
>>> really catch all corner cases now? I wouldn't guarantee that yet.
>>> Specifically as I still have an obscure crash of a Xenomai thread in
>>> Linux schedule() on my plate.
>>> What if the target thread woke up due to a signal, continued much
>>> further on a different CPU, blocked in TASK_INTERRUPTIBLE, and then the
>>> gatekeeper continued? I wish we could already eliminate this complexity
>>> and do the migration directly inside schedule()...
>> BTW, why do we mask out TASK_ATOMICSWITCH when checking the task state in
>> the gatekeeper? What would happen if we included it (state ==
> I would tend to think that what we should check is
> xnthread_test_info(XNATOMIC). Or maybe check both, the interruptible
> state and the XNATOMIC info bit.

Actually, neither the info bits nor the task state is sufficiently
synchronized against the gatekeeper yet. We need to hold a shared lock
when testing and resetting the state. I'm not sure yet if that is
fixable given the gatekeeper architecture.


Xenomai-core mailing list