Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
On Tue, May 13, 2008 at 11:10 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
> Gilles Chanteperdrix wrote:
>> On Tue, May 13, 2008 at 10:26 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>>> @@ -1236,6 +1236,9 @@ void xnpod_delete_thread(xnthread_t *thr
>>>  		xnthread_cleanup_tcb(thread);
>>>  		xnarch_finalize_no_switch(xnthread_archtcb(thread));
>>> +
>>> +		if (xnthread_test_state(sched->runthread, XNROOT))
>>> +			xnfreesync();
>>>  	}
>>
>> No, this does not look good. The point of deferring TCB freeing is
>> that the TCB will be accessed shortly after it is freed.
>
> By whom in this case? The thread was not active anymore. IIRC, the
> use-after-release issue was related to self-deletions.

I do not remember the issue precisely; I know that it was related to
the thread deletion hooks. The XNARCH_WANT_UNLOCKED_CTXSW case
complicated the issue further.

--
Gilles

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
On Tue, May 13, 2008 at 10:26 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
> @@ -1236,6 +1236,9 @@ void xnpod_delete_thread(xnthread_t *thr
>  		xnthread_cleanup_tcb(thread);
>  		xnarch_finalize_no_switch(xnthread_archtcb(thread));
> +
> +		if (xnthread_test_state(sched->runthread, XNROOT))
> +			xnfreesync();
>  	}

No, this does not look good. The point of deferring TCB freeing is that
the TCB will be accessed shortly after it is freed. IMHO, the right
solution is to add a call to xnpod_schedule, or even directly to
xnfreesync, in the right place (in the skins code, after all threads
have been freed).

--
Gilles
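[Editor's note: to make the mechanism under discussion concrete, here is a minimal toy model of a deferred-free backlog. The function names echo the nucleus API (xnfree/xnfreesync), but the implementation below is invented purely for illustration and is not the real Xenomai heap code.]

```c
/* Toy model of a deferred-free backlog: blocks released from contexts
 * that must not enter the allocator right away are queued, and a later
 * flush call, made from a safe (root) context, actually releases them.
 * Hypothetical sketch only -- not the real nucleus implementation. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct free_link {
	struct free_link *next;
};

static struct free_link *free_backlog; /* singly-linked backlog head */

/* Queue a block instead of releasing it immediately. The block must be
 * at least sizeof(struct free_link) bytes so we can reuse its storage
 * as the list link. */
static void xnfree_deferred(void *block)
{
	struct free_link *link = block;

	link->next = free_backlog;
	free_backlog = link;
}

/* Flush the backlog: actually release every queued block. Only safe
 * once nothing can still be accessing the queued memory. Returns the
 * number of blocks released. */
static int xnfreesync_flush(void)
{
	int count = 0;

	while (free_backlog) {
		struct free_link *link = free_backlog;

		free_backlog = link->next;
		free(link);
		count++;
	}
	return count;
}
```

The hazard the thread is about is visible even in this toy: if the flush runs while some path still holds a pointer to a queued block (here, a freed TCB), that path dereferences freed memory.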
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
Gilles Chanteperdrix wrote:
> On Tue, May 13, 2008 at 10:26 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>> @@ -1236,6 +1236,9 @@ void xnpod_delete_thread(xnthread_t *thr
>>  		xnthread_cleanup_tcb(thread);
>>  		xnarch_finalize_no_switch(xnthread_archtcb(thread));
>> +
>> +		if (xnthread_test_state(sched->runthread, XNROOT))
>> +			xnfreesync();
>>  	}
>
> No, this does not look good. The point of deferring TCB freeing is
> that the TCB will be accessed shortly after it is freed.

By whom in this case? The thread was not active anymore. IIRC, the
use-after-release issue was related to self-deletions.

> IMHO, the right solution is to add a call to xnpod_schedule, or even
> directly to xnfreesync, in the right place (in the skins code, after
> all threads have been freed).

Well, as I said, we should rather move all these syncs out of critical
sections, so better avoid xnpod_schedule. I need to think about this a
bit more, I'm afraid.

It is safe to delete the TCB once the terminating thread is scheduled
away and its successor is about to leave (or has left) xnpod_schedule,
right?

Jan

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> On Tue, May 13, 2008 at 11:10 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>>> Gilles Chanteperdrix wrote:
>>>> No, this does not look good. The point of deferring TCB freeing is
>>>> that the TCB will be accessed shortly after it is freed.
>>> By whom in this case? The thread was not active anymore. IIRC, the
>>> use-after-release issue was related to self-deletions.
>> I do not remember the issue precisely; I know that it was related to
>> the thread deletion hooks.
> The point is that we have to run the thread deletion hooks on behalf
> of the outgoing thread context; this is a strong requirement.

Yes, I remember.

>> The XNARCH_WANT_UNLOCKED_CTXSW case complicated the issue further.

I now wonder when, in this unlocked case, we schedule the TCB for
deletion - it should be _before_ the switch - and what then prevents
the TCB from being flushed by some other CPU that happens to run
xnfreesync while we are unlocked during that switch. Or does
XNARCH_WANT_UNLOCKED_CTXSW imply !CONFIG_SMP?

Jan

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
On Tue, May 13, 2008 at 11:32 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> On Tue, May 13, 2008 at 11:10 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>>>> Gilles Chanteperdrix wrote:
>>>>> No, this does not look good. The point of deferring TCB freeing
>>>>> is that the TCB will be accessed shortly after it is freed.
>>>> By whom in this case? The thread was not active anymore. IIRC, the
>>>> use-after-release issue was related to self-deletions.
>>> I do not remember the issue precisely; I know that it was related
>>> to the thread deletion hooks.
>> The point is that we have to run the thread deletion hooks on behalf
>> of the outgoing thread context; this is a strong requirement.
> Yes, I remember.

Ok. Now I remember. The point is that the TCB is scheduled for deletion
by the thread deletion hooks, and accessed shortly after by
xnthread_cleanup_tcb and xnarch_finalize_no_switch. So basically, your
initial patch looks Ok. However, you should add the same call to
__xnpod_finalize_zombie.

> I now wonder when, in this unlocked case, we schedule the TCB for
> deletion - it should be _before_ the switch - and what then prevents
> the TCB from being flushed by some other CPU that happens to run
> xnfreesync while we are unlocked during that switch.

The XNSWLOCK bit.

> Or does XNARCH_WANT_UNLOCKED_CTXSW imply !CONFIG_SMP?

No, it is supposed to work. The sched->zombie pointer points to this
zombie, which is finalized in xnpod_finish_unlocked_switch.

--
Gilles
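[Editor's note: the ordering Gilles describes - park the outgoing TCB in sched->zombie, hold a lock bit across the unlocked switch so concurrent flushes back off, finalize in the finish path - can be sketched as follows. Names and fields are simplified stand-ins, not the real nucleus definitions.]

```c
/* Hypothetical model of zombie finalization across an unlocked context
 * switch. swlock stands in for the XNSWLOCK bit; the real nucleus code
 * is more involved. */
#include <assert.h>
#include <stddef.h>

struct toy_thread {
	int finalized;
};

struct toy_sched {
	struct toy_thread *zombie; /* outgoing TCB awaiting finalization */
	int swlock;                /* set while a switch is in flight */
};

/* Called just before switching away from a self-deleting thread:
 * park the TCB and mark the switch as in progress. */
static void begin_unlocked_switch(struct toy_sched *sched,
				  struct toy_thread *out)
{
	sched->swlock = 1;
	sched->zombie = out;
}

/* A flush attempted from elsewhere (e.g. another CPU) must not touch
 * the parked zombie mid-switch. Returns 1 if flushing is allowed. */
static int try_flush(struct toy_sched *sched)
{
	if (sched->swlock)
		return 0; /* back off: switch still in progress */
	return 1;
}

/* Runs on the incoming thread once the switch has completed: only now
 * is it safe to finalize the parked TCB and drop the lock bit. */
static void finish_unlocked_switch(struct toy_sched *sched)
{
	if (sched->zombie) {
		sched->zombie->finalized = 1;
		sched->zombie = NULL;
	}
	sched->swlock = 0;
}
```

The point of the sketch is the window: between begin and finish the zombie is reachable but must not be released, which is exactly what the lock bit protects.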
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
Gilles Chanteperdrix wrote:
> On Tue, May 13, 2008 at 11:10 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>> Gilles Chanteperdrix wrote:
>>> No, this does not look good. The point of deferring TCB freeing is
>>> that the TCB will be accessed shortly after it is freed.
>> By whom in this case? The thread was not active anymore. IIRC, the
>> use-after-release issue was related to self-deletions.
> I do not remember the issue precisely; I know that it was related to
> the thread deletion hooks.

The point is that we have to run the thread deletion hooks on behalf of
the outgoing thread context; this is a strong requirement.

> The XNARCH_WANT_UNLOCKED_CTXSW case complicated the issue further.

--
Philippe.
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
Gilles Chanteperdrix wrote:
> On Tue, May 13, 2008 at 11:32 AM, Jan Kiszka [EMAIL PROTECTED] wrote:
>> Philippe Gerum wrote:
>>> The point is that we have to run the thread deletion hooks on
>>> behalf of the outgoing thread context; this is a strong requirement.
>> Yes, I remember.
> Ok. Now I remember. The point is that the TCB is scheduled for
> deletion by the thread deletion hooks, and accessed shortly after by
> xnthread_cleanup_tcb and xnarch_finalize_no_switch. So basically,
> your initial patch looks Ok. However, you should add the same call to
> __xnpod_finalize_zombie.

That's for trunk only:

Index: xenomai/ksrc/nucleus/pod.c
===
--- xenomai/ksrc/nucleus/pod.c	(Revision 3770)
+++ xenomai/ksrc/nucleus/pod.c	(Arbeitskopie)
@@ -583,6 +583,9 @@ void __xnpod_finalize_zombie(xnsched_t *
 	xnarch_finalize_no_switch(xnthread_archtcb(thread));

+	if (xnthread_test_state(sched->runthread, XNROOT))
+		xnfreesync();
+
 	sched->zombie = NULL;
 }

@@ -1231,6 +1234,9 @@ void xnpod_delete_thread(xnthread_t *thr
 	xnthread_cleanup_tcb(thread);
 	xnarch_finalize_no_switch(xnthread_archtcb(thread));
+
+	if (xnthread_test_state(sched->runthread, XNROOT))
+		xnfreesync();
 }

 unlock_and_exit:

Everyone fine when I apply both patches?

>> I now wonder when, in this unlocked case, we schedule the TCB for
>> deletion - it should be _before_ the switch - and what then prevents
>> the TCB from being flushed by some other CPU that happens to run
>> xnfreesync while we are unlocked during that switch.
> The XNSWLOCK bit.
>> Or does XNARCH_WANT_UNLOCKED_CTXSW imply !CONFIG_SMP?
> No, it is supposed to work. The sched->zombie pointer points to this
> zombie, which is finalized in xnpod_finish_unlocked_switch.

OK. However, I would still prefer to get xnfreesync out of the critical
paths. Is there anything which speaks against my VIRQ idea (except that
it still has to be written :) )?

Jan

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
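[Editor's note: the thread never spells out the VIRQ idea, but as far as it is described - keep the critical path cheap, defer the flush to a virtual interrupt handler - it could look like the sketch below. The posting mechanism here is a plain flag; a real implementation would post an actual I-pipe VIRQ. Everything here is an assumption, not Jan's actual design.]

```c
/* Hypothetical sketch of deferring the backlog flush to a VIRQ:
 * the deletion path only posts the request, and the handler runs
 * later, outside any critical section. */
#include <assert.h>

static int virq_pending; /* stands in for a posted virtual interrupt */
static int flush_count;  /* how many times the backlog was flushed */

/* Critical path: cheap, just mark the flush as pending. Posting twice
 * before the handler runs still yields a single flush. */
static void post_flush_virq(void)
{
	virq_pending = 1;
}

/* VIRQ handler, invoked later in a safe context. */
static void flush_virq_handler(void)
{
	if (!virq_pending)
		return;
	virq_pending = 0;
	flush_count++; /* here the real handler would run the flush */
}
```

Gilles' concern in the next message maps directly onto the gap this sketch leaves open: between posting and the handler running, the caller has no control over when the flush actually happens.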
Re: [Xenomai-core] [PATCH] Flush xnfree backlog after thread deletion in root context
On Tue, May 13, 2008 at 12:46 PM, Jan Kiszka [EMAIL PROTECTED] wrote:
> Gilles Chanteperdrix wrote:
>> No, it is supposed to work. The sched->zombie pointer points to this
>> zombie, which is finalized in xnpod_finish_unlocked_switch.
> OK. However, I would still prefer to get xnfreesync out of the
> critical paths. Is there anything which speaks against my VIRQ idea
> (except that it still has to be written :) )?

Ok. Even though I am not sure how it works now, getting thread deletion
to work with XNARCH_WANT_UNLOCKED_CTXSW was by far the hardest part of
the work. I am not sure you can have the same control over what happens
with a VIRQ.

--
Gilles