Jan Kiszka wrote:
> Am 07.11.2010 11:03, Philippe Gerum wrote:
>> On Sun, 2010-11-07 at 09:31 +0100, Gilles Chanteperdrix wrote:
>>> Jan Kiszka wrote:
>>>>>> Anyway, after some thoughts, I think we are going to try and make the
>>>>>> current situation work instead of going back to the old way.
>>>>>> You can find the patch which attempts to do so here:
>>>>> Ack. At last, this addresses the real issues without asking for
>>>>> regression funkiness: fix the lack of barrier before testing XNSCHED.
>>>> Check the kernel, we actually need it on both sides. Wherever the final
>>>> barriers will be, we should leave a comment behind why they are there.
>>>> Could be picked up from kernel/smp.c.
>>> We have it on both sides: the non-local flags are modified while holding
>>> the nklock. Unlocking the nklock implies a barrier.
>> I think we may have an issue with this kind of construct:
>>
>>     xnlock_get_irqsave(&nklock, s);     /* outer lock */
>>     ...
>>     xnlock_get_irqsave(&nklock, s);     /* inner, recursive */
>>     <set XNSCHED for the remote sched>
>>     <send rescheduling IPI>
>>     xnlock_put_irqrestore(&nklock, s);  /* inner, not a real unlock */
>>     ...
>>     xnlock_put_irqrestore(&nklock, s);  /* outer, actual unlock */
>> =====> xnpod_schedule_handler on dest CPU
>> The issue would be triggered by the use of recursive locking: in that
>> case, the source CPU only syncs its cache when the lock is actually
>> dropped by the outer xnlock_put_irq* call; the inner
>> xnlock_get/put_irq* calls do not act as barriers, so the remote
>> rescheduling handler will not always see the XNSCHED update done
>> remotely and may end up being a no-op. So we need a barrier before
>> sending the IPI.
> That's what I said.
> And we need it on the reader side as an rmb().
This one we have, in xnpod_schedule_handler.
Xenomai-core mailing list