Author: Philippe Gerum <r...@xenomai.org>
Date: Wed Sep 4 11:46:34 2013 +0200
cobalt/posix/thread: throttle real-time activity upon sched_yield() nop
When sched_yield() does not beget any context switch from primary
mode, throttle the calling thread to ensure cooperative scheduling
with activities in secondary mode.
This replaces the former forced migration through secondary mode in
the same situation, which could not always prevent lockups when
sched_yield() was called in tight loops. By introducing an actual
delay for the real-time caller, indexed on regular kernel activity, we
prevent such lockups without relying on arbitrary sleep times.
kernel/cobalt/posix/thread.c | 22 +++++++++++++++-------
1 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/kernel/cobalt/posix/thread.c b/kernel/cobalt/posix/thread.c
index 205d4eb..83f94d0 100644
@@ -32,6 +32,7 @@
@@ -1225,23 +1226,30 @@ int cobalt_sched_yield(void)
* If the round-robin move did not beget any context switch to
- * a thread running in primary mode, then force a domain
- * transition through secondary mode.
+ * a thread running in primary mode, then wait for the next
+ * linux context switch to happen.
* Rationale: it is most probably unexpected that
* sched_yield() does not cause any context switch, since this
* service is commonly used for implementing a poor man's
- * cooperative scheduling. By forcing a migration through the
- * secondary mode then back, we guarantee that the CPU has
+ * cooperative scheduling. By waiting for a context switch to
+ * happen in the regular kernel, we guarantee that the CPU has
* been relinquished for a while.
* Typically, this behavior allows a thread running in primary
* mode to effectively yield the CPU to a thread of
* same/higher priority stuck in secondary mode.
+ * NOTE: calling xnshadow_yield() with no timeout
+ * (i.e. XN_INFINITE) is probably never a good idea. This
+ * means that a SCHED_FIFO non-rt thread stuck in a tight loop
+ * would prevent the caller from waking up, since no
+ * linux-originated schedule event would happen for unblocking
+ * it on the current CPU. For this reason, we pass the
+ * arbitrary TICK_NSEC value to limit the wait time to a
+ * reasonable amount.
*/
- xnshadow_relax(0, 0);
- return xnshadow_harden();
+ return xnshadow_yield(TICK_NSEC);