Module: xenomai-2.5
Branch: master
Commit: 56ff4329ffa5e779034697d6c6e095f137087b44

Author: Philippe Gerum <>
Date:   Sat Aug 28 17:29:42 2010 +0200

nucleus/sched: prevent remote wakeup from triggering a debug assertion

"The task that was scheduled in without XNRESCHED set locally has been
woken up by a remote CPU. The waker requeued the task and set the
resched condition for itself and in the resched proxy mask for the
remote CPU. But there is at least one place in the Xenomai code where
we drop the nklock between xnsched_set_resched and xnpod_schedule:
do_taskexit_event (I bet there are even more). Now the resched target
CPU runs into a timer handler, issues xnpod_schedule unconditionally,
and happens to find the woken-up task before it is actually informed
via an IPI."

"Yes, and whether we set the bit and call xnpod_schedule atomically
does not really matter either: the IPI takes time to propagate, and
since xnarch_send_ipi does not wait for the IPI to have been received
on the remote CPU, there is no guarantee that xnpod_schedule could not
have been called in the meantime.

More importantly, since in order to do an action on a remote
xnsched_t, we need to hold the nklock, is there any point in not
setting the XNRESCHED bit on that distant structure, at the same time
as when we set the cpu bit on the local sched structure mask and send
the IPI? This way, setting the XNRESCHED bit in the IPI handler would
no longer be necessary, and we would avoid the race."

What this patch does is exactly that, in an attempt to make the remote
rescheduling code safer and simpler:

- by testing XNRESCHED in __xnpod_test_resched() instead of the resched
  bitmask for the current CPU; this bitmask is now only used to
  broadcast the IPI to the CPUs with a pending reschedule, from the
  local processor's POV.

- by setting the XNRESCHED bit immediately in the remote scheduler's
  status, which fixes the unwanted assertion.

See the discussion regarding this issue at:


 include/nucleus/sched.h |    6 ++++--
 ksrc/nucleus/pod.c      |    6 +-----
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/include/nucleus/sched.h b/include/nucleus/sched.h
index 441a3a2..19b4b08 100644
--- a/include/nucleus/sched.h
+++ b/include/nucleus/sched.h
@@ -176,15 +176,17 @@ static inline int xnsched_self_resched_p(struct xnsched 
 /* Set self resched flag for the given scheduler. */
 #define xnsched_set_self_resched(__sched__) do {               \
-  xnarch_cpu_set(xnsched_cpu(__sched__), (__sched__)->resched); \
   setbits((__sched__)->status, XNRESCHED);                     \
 } while (0)
 /* Set specific resched flag into the local scheduler mask. */
 #define xnsched_set_resched(__sched__) do {                            \
   xnsched_t *current_sched = xnpod_current_sched();                    \
-  xnarch_cpu_set(xnsched_cpu(__sched__), current_sched->resched);      \
   setbits(current_sched->status, XNRESCHED);                           \
+  if (current_sched != (__sched__))    {                               \
+      xnarch_cpu_set(xnsched_cpu(__sched__), current_sched->resched);  \
+      setbits((__sched__)->status, XNRESCHED);                         \
+  }                                                                    \
 } while (0)
 void xnsched_zombie_hooks(struct xnthread *thread);
diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
index 50b6d01..c377a31 100644
--- a/ksrc/nucleus/pod.c
+++ b/ksrc/nucleus/pod.c
@@ -285,7 +285,6 @@ void xnpod_schedule_handler(void) /* Called with hw interrupts off. */
-       xnsched_set_self_resched(sched);
@@ -2159,10 +2158,7 @@ static inline void xnpod_switch_to(xnsched_t *sched,
 static inline int __xnpod_test_resched(struct xnsched *sched)
-       int cpu = xnsched_cpu(sched), resched;
-       resched = xnarch_cpu_isset(cpu, sched->resched);
-       xnarch_cpu_clear(cpu, sched->resched);
+       int resched = testbits(sched->status, XNRESCHED);
 #ifdef CONFIG_SMP
        /* Send resched IPI to remote CPU(s). */
        if (unlikely(xnsched_resched_p(sched))) {

Xenomai-git mailing list