Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> As the xnsched structures are only initialized later, during
>> xnpod_init, xnsched_cpu always returned 0 in the gatekeeper_thread
>> prologue. That caused all gatekeepers to be bound to CPU 0.
>>
>> Signed-off-by: Jan Kiszka <jan.kis...@siemens.com>
>> ---
>>
>>  ksrc/nucleus/shadow.c |    5 ++++-
>>  1 files changed, 4 insertions(+), 1 deletions(-)
>>
>> diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
>> index 1dedd85..2243c0e 100644
>> --- a/ksrc/nucleus/shadow.c
>> +++ b/ksrc/nucleus/shadow.c
>> @@ -823,11 +823,14 @@ static int gatekeeper_thread(void *data)
>>      struct task_struct *this_task = current;
>>      DECLARE_WAITQUEUE(wait, this_task);
>>      struct xnsched *sched = data;
>> -    int cpu = xnsched_cpu(sched);
>>      struct xnthread *target;
>>      cpumask_t cpumask;
>> +    int cpu;
>>      spl_t s;
>>  
>> +    /* sched not fully initialized, xnsched_cpu does not work yet */
>> +    cpu = sched - nkpod_struct.sched;
>> +
>>      this_task->flags |= PF_NOFREEZE;
>>      sigfillset(&this_task->blocked);
>>      cpumask = cpumask_of_cpu(cpu);
> 
> This does not look good: it means that the gatekeeper accesses the sched
> structure before it is initialized. So, IMO, the proper fix would be to
> start the gatekeepers only after the sched structure has been initialized.
> 

I briefly thought about moving xnshadow_mount into xnpod_init. But given
that it worked like this before, and that I was not able to quickly rule
out new regressions from reordering things, I decided to restore the old
pattern.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux

_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
