Jan Kiszka wrote:
> On 07.11.2010 09:31, Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>>>> Anyway, after some thoughts, I think we are going to try and make the
>>>>> current situation work instead of going back to the old way.
>>>>>
>>>>> You can find the patch which attempts to do so here:
>>>>> http://sisyphus.hd.free.fr/~gilles/sched_status.txt
>>>> Ack. At last, this addresses the real issues without asking for
>>>> regression funkiness: fix the lack of a barrier before testing XNSCHED in
>>> Check the kernel, we actually need it on both sides. Wherever the final
>>> barriers will be, we should leave a comment behind why they are there.
>>> Could be picked up from kernel/smp.c.
>> We have it on both sides: the non-local flags are modified while holding
>> the nklock. Unlocking the nklock implies a barrier.
>
> I think this does not work if nklock is used nested, i.e. held across
> xnsched_set_resched and the next xnpod_schedule.
Ok. There is a race here even without nesting: we need a barrier before
sending the IPI.
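
Something like this is what I have in mind (only a sketch to illustrate
the barrier pairing, not the actual code; send_resched_ipi() and
resched_ipi_handler() are made-up names, and the locking is simplified):

/*
 * Sender side: without the barrier, the XNSCHED write may still sit
 * in a store buffer when the IPI fires, so the target CPU can test
 * the flag before the write becomes visible to it.
 */
static void resched_remote(struct xnsched *sched)
{
	sched->status |= XNSCHED;	/* done under nklock */
	smp_mb();			/* flag write before IPI */
	send_resched_ipi(xnsched_cpu(sched));	/* made-up helper */
}

/*
 * Receiver side (IPI handler): the barrier pairs with the sender's
 * one, in the spirit of the comments in kernel/smp.c you mentioned.
 */
static void resched_ipi_handler(struct xnsched *sched)
{
	smp_mb();	/* pairs with the barrier in resched_remote() */
	if (sched->status & XNSCHED)
		xnpod_schedule();
}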
>>> I do not understand the split logic - or some bits are simply not yet
>>> migrated. XNHDEFER, XNSWLOCK, XNKCOUT are all local-only as well, no?
>>> Then better put them in the _local_ status field, that's more
>>> consistent (and would help if we ever wanted to optimize their cache
>>> line usage).
>> Maybe the naming is not good then. ->status is everything which is
>> modified under nklock, ->lstatus is for XNINIRQ and XNHTICK, which are
>> modified without holding the nklock.
>
> And this is not a good clustering IMHO, as it makes no difference for
> a local-only flag whether it is protected by nklock or not (as long as
> IRQs are off, of course). What makes a difference is the CPU using the
> related field. If we group according to local and remote usage, less
> cache line bouncing can be achieved (if both fields are also cache
> line aligned, but that is a second step that only helps if the first
> is made).
The two status fields are in the same cache line.
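
That is, the current layout is roughly (a sketch, not the verbatim
declaration):

struct xnsched {
	unsigned long status;	/* modified under nklock, including
				 * by remote CPUs (e.g. XNSCHED) */
	unsigned long lstatus;	/* XNINIRQ, XNHTICK: local-only,
				 * modified without nklock */
	/* ... */
};

so moving bits from one word to the other changes nothing for the cache
behaviour; only your second step, i.e. something like tagging each
field with ____cacheline_aligned, would actually stop the bouncing.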
--
Gilles.