Jan Kiszka wrote:
Philippe Gerum wrote:

Jan Kiszka wrote:

Philippe, do you see any remaining issues, e.g. that the leak survived
the task termination? Does this have any meaning for correct driver and
skin code?

The only way I could see this leakage survive a switch transition would
require it to happen over the root context, not over a primary context.
Was it the case?

The task had to leave from primary mode. If I forced it to secondary
before terminating, the problem did not show up.

But could the code causing the leakage have been run by different
contexts in sequence, including the root one?

I don't think so. Bugs in our software aside, there should be no switch
to secondary mode until termination. Moreover, we installed a SIGXCPU
handler, and it didn't trigger either.

I just constructed a simple test by placing rthal_local_irq_disable() in
rt_timer_spin() and setting up this user-space app:

#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <native/task.h>
#include <native/timer.h>

RT_TASK task;

void func(void *arg)
{
    /* body lost in the archive; presumably spins inside the
     * patched rt_timer_spin() so that IRQs end up stalled */
    rt_timer_spin(1000000000);
}

void terminate(int sig)
{
    /* body lost in the archive; presumably tears down the task */
    rt_task_delete(&task);
    exit(0);
}

int main()
{
    signal(SIGINT, terminate);
    rt_task_spawn(&task, "lockup", 0, 10, T_FPU | T_JOINABLE | T_WARNSW,
                  func, NULL);
    return 0;
}

Should this lock up (as it currently does), or rather continue to run
normally after the RT task terminated? BTW, I'm still not sure whether we
are hunting shadows (is IRQs-off a legal state for user space in some
skin?) or a real problem - i.e. whether it is worth the time.

I've just tested this frag against the current SVN head, patching rt_timer_spin() as required, and cannot reproduce the lockup. As expected, the incoming root thread reinstates the correct stall bit (i.e. clears it) after the RT thread terminates. Any chance some potentially troublesome stuff exists in your setup?



Xenomai-core mailing list
