Gilles Chanteperdrix wrote:
Philippe Gerum wrote:
> Likely not this time (keep it cold anyway, who knows); I strongly suspect a bug
> in xnarch_switch_to() for all archs but x86.


Now that you are talking about it, I may be the one who owes a beer to
everyone, by having been the first to put a bug in the ia64 context-switch
code... If I am not wrong, the bug should be observable on ia64 too.
Unfortunately, I am unable to compile my ia64 kernel right now, so I wrote
a patch for PowerPC, and would be glad if some PowerPC owner could try it.


Nope, same lockup with hybrid scheduling.

It should work on PPC64 too...



------------------------------------------------------------------------

Index: include/asm-powerpc/system.h
===================================================================
--- include/asm-powerpc/system.h        (revision 410)
+++ include/asm-powerpc/system.h        (working copy)
@@ -182,13 +182,19 @@
 {
     struct task_struct *prev = out_tcb->active_task;
     struct task_struct *next = in_tcb->user_task;
+    static unsigned long last_ksp;
+    static xnarchtcb_t *last_tcb;
+    last_tcb = out_tcb;
     in_tcb->active_task = next ?: prev;
     if (next && next != prev) /* Switch to new user-space thread? */
        {
        struct mm_struct *mm = next->active_mm;
+       if (prev != out_tcb->user_task)
+           last_ksp = prev->thread.ksp;
+
        /* Switch the mm context. */
 #ifdef CONFIG_PPC64
@@ -245,6 +251,22 @@
     else
         /* Kernel-to-kernel context switch. */
         rthal_switch_context(out_tcb->kspp, in_tcb->kspp);
+
+    /* If we are not the epilogue of a regular Linux schedule(),... */
+    if (likely(test_bit(xnarch_current_cpu(), &rthal_cpu_realtime)) &&
+        /* we are a user-space thread,... */
+        out_tcb->user_task &&
+        /* the last context switch used the Linux switch routine,... */
+        last_tcb->active_task != out_tcb->user_task &&
+        /* but the last Xenomai context was a kernel thread,... */
+        last_tcb->user_task != last_tcb->active_task)
+        {
+        /* then the Linux context switch routine saved the kernel thread's
+           stack pointer in the last user-space context. So we put the
+           stack pointers back in the right place. */
+        last_tcb->ksp = last_tcb->active_task->thread.ksp;
+        last_tcb->active_task->thread.ksp = last_ksp;
+        }
 }
 static inline void xnarch_finalize_and_switch (xnarchtcb_t *dead_tcb,


--

Philippe.
