[Xenomai-git] Philippe Gerum : nucleus/shadow: wakeup the gatekeeper immediately when possible
Module: xenomai-rpm
Branch: master
Commit: 35e9df567e19410526089e24fbbaf92c47de9c43
URL: http://git.xenomai.org/?p=xenomai-rpm.git;a=commit;h=35e9df567e19410526089e24fbbaf92c47de9c43

Author: Philippe Gerum
Date:   Sun Jul 3 17:34:47 2011 +0200

nucleus/shadow: wakeup the gatekeeper immediately when possible

223685ce enabled the task hardening code over hybrid PREEMPT_RT +
I-pipe enabled kernels, by delaying the wake up call for the
gatekeeper to the schedule event handler. We may delay the wake up
call over non-RT 2.6 kernels as well (and actually did for a while),
but we may not do this when running over 2.4 (scheduler innards would
not allow this, causing weirdnesses when hardening tasks). So we
always do the early wake up when running non-RT, which implicitly
includes 2.4 kernels.

---
 ksrc/nucleus/shadow.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index f7407e9..7df7625 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -880,10 +880,12 @@ static void lostage_handler(void *cookie)
 			kill_proc(p->pid, arg, 1);
 			break;

+#ifdef CONFIG_PREEMPT_RT
 		case LO_GKWAKE_REQ:
 			sched = xnpod_sched_slot(cpu);
 			wake_up_interruptible_sync(&sched->gkwaitq);
 			break;
+#endif
 		}
 	}
 }
@@ -1060,6 +1062,23 @@ redo:
 	sched->gktarget = thread;
 	xnthread_set_info(thread, XNATOMIC);
 	set_current_state(TASK_INTERRUPTIBLE | TASK_ATOMICSWITCH);
+#ifndef CONFIG_PREEMPT_RT
+	/*
+	 * We may not hold the preemption lock across calls to
+	 * wake_up_*() services over fully preemptible kernels, since
+	 * tasks might sleep when contending for spinlocks. The wake
+	 * up call for the gatekeeper will happen later, over an APC
+	 * we kick in do_schedule_event() on the way out for the
+	 * hardening task.
+	 *
+	 * We could delay the wake up call over non-RT 2.6 kernels as
+	 * well, but not when running over 2.4 (scheduler innards
+	 * would not allow this, causing weirdnesses when hardening
+	 * tasks). So we always do the early wake up when running
+	 * non-RT, which includes 2.4.
+	 */
+	wake_up_interruptible_sync(&sched->gkwaitq);
+#endif
 	schedule();

 	xnthread_clear_info(thread, XNATOMIC);
@@ -2605,8 +2624,10 @@ static inline void do_schedule_event(struct task_struct *next_task)
 	prev_task = current;
 	prev = xnshadow_thread(prev_task);

+#ifdef CONFIG_PREEMPT_RT
 	if (prev && xnthread_test_info(prev, XNATOMIC))
 		schedule_linux_call(LO_GKWAKE_REQ, prev_task, 0);
+#endif

 	next = xnshadow_thread(next_task);
 	set_switch_lock_owner(prev_task);

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : nucleus/shadow: wakeup the gatekeeper immediately when possible
Module: xenomai-head
Branch: master
Commit: 87dffa36f9e6965a9839f601e34a9dbe33306b29
URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=87dffa36f9e6965a9839f601e34a9dbe33306b29

Author: Philippe Gerum
Date:   Sun Jul 3 17:34:47 2011 +0200

nucleus/shadow: wakeup the gatekeeper immediately when possible

223685ce enabled the task hardening code over hybrid PREEMPT_RT +
I-pipe enabled kernels, by delaying the wake up call for the
gatekeeper to the schedule event handler. We may delay the wake up
call over non-RT 2.6 kernels as well (and actually did for a while),
but we may not do this when running over 2.4 (scheduler innards would
not allow this, causing weirdnesses when hardening tasks). So we
always do the early wake up when running non-RT, which implicitly
includes 2.4 kernels.

---
 ksrc/nucleus/shadow.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index 08ce462..39aceaf 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -817,10 +817,12 @@ static void lostage_handler(void *cookie)
 			kill_proc(p->pid, arg, 1);
 			break;

+#ifdef CONFIG_PREEMPT_RT
 		case LO_GKWAKE_REQ:
 			sched = xnpod_sched_slot(cpu);
 			wake_up_interruptible_sync(&sched->gkwaitq);
 			break;
+#endif
 		}
 	}
 }
@@ -997,6 +999,23 @@ redo:
 	sched->gktarget = thread;
 	xnthread_set_info(thread, XNATOMIC);
 	set_current_state(TASK_INTERRUPTIBLE | TASK_ATOMICSWITCH);
+#ifndef CONFIG_PREEMPT_RT
+	/*
+	 * We may not hold the preemption lock across calls to
+	 * wake_up_*() services over fully preemptible kernels, since
+	 * tasks might sleep when contending for spinlocks. The wake
+	 * up call for the gatekeeper will happen later, over an APC
+	 * we kick in do_schedule_event() on the way out for the
+	 * hardening task.
+	 *
+	 * We could delay the wake up call over non-RT 2.6 kernels as
+	 * well, but not when running over 2.4 (scheduler innards
+	 * would not allow this, causing weirdnesses when hardening
+	 * tasks). So we always do the early wake up when running
+	 * non-RT, which includes 2.4.
+	 */
+	wake_up_interruptible_sync(&sched->gkwaitq);
+#endif
 	schedule();

 	xnthread_clear_info(thread, XNATOMIC);
@@ -2538,8 +2557,10 @@ static inline void do_schedule_event(struct task_struct *next_task)
 	prev_task = current;
 	prev = xnshadow_thread(prev_task);

+#ifdef CONFIG_PREEMPT_RT
 	if (prev && xnthread_test_info(prev, XNATOMIC))
 		schedule_linux_call(LO_GKWAKE_REQ, prev_task, 0);
+#endif

 	next = xnshadow_thread(next_task);
 	set_switch_lock_owner(prev_task);
[Xenomai-git] Philippe Gerum : nucleus/shadow: wakeup the gatekeeper immediately when possible
Module: xenomai-2.5
Branch: master
Commit: 35e9df567e19410526089e24fbbaf92c47de9c43
URL: http://git.xenomai.org/?p=xenomai-2.5.git;a=commit;h=35e9df567e19410526089e24fbbaf92c47de9c43

Author: Philippe Gerum
Date:   Sun Jul 3 17:34:47 2011 +0200

nucleus/shadow: wakeup the gatekeeper immediately when possible

223685ce enabled the task hardening code over hybrid PREEMPT_RT +
I-pipe enabled kernels, by delaying the wake up call for the
gatekeeper to the schedule event handler. We may delay the wake up
call over non-RT 2.6 kernels as well (and actually did for a while),
but we may not do this when running over 2.4 (scheduler innards would
not allow this, causing weirdnesses when hardening tasks). So we
always do the early wake up when running non-RT, which implicitly
includes 2.4 kernels.

---
 ksrc/nucleus/shadow.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index f7407e9..7df7625 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -880,10 +880,12 @@ static void lostage_handler(void *cookie)
 			kill_proc(p->pid, arg, 1);
 			break;

+#ifdef CONFIG_PREEMPT_RT
 		case LO_GKWAKE_REQ:
 			sched = xnpod_sched_slot(cpu);
 			wake_up_interruptible_sync(&sched->gkwaitq);
 			break;
+#endif
 		}
 	}
 }
@@ -1060,6 +1062,23 @@ redo:
 	sched->gktarget = thread;
 	xnthread_set_info(thread, XNATOMIC);
 	set_current_state(TASK_INTERRUPTIBLE | TASK_ATOMICSWITCH);
+#ifndef CONFIG_PREEMPT_RT
+	/*
+	 * We may not hold the preemption lock across calls to
+	 * wake_up_*() services over fully preemptible kernels, since
+	 * tasks might sleep when contending for spinlocks. The wake
+	 * up call for the gatekeeper will happen later, over an APC
+	 * we kick in do_schedule_event() on the way out for the
+	 * hardening task.
+	 *
+	 * We could delay the wake up call over non-RT 2.6 kernels as
+	 * well, but not when running over 2.4 (scheduler innards
+	 * would not allow this, causing weirdnesses when hardening
+	 * tasks). So we always do the early wake up when running
+	 * non-RT, which includes 2.4.
+	 */
+	wake_up_interruptible_sync(&sched->gkwaitq);
+#endif
 	schedule();

 	xnthread_clear_info(thread, XNATOMIC);
@@ -2605,8 +2624,10 @@ static inline void do_schedule_event(struct task_struct *next_task)
 	prev_task = current;
 	prev = xnshadow_thread(prev_task);

+#ifdef CONFIG_PREEMPT_RT
 	if (prev && xnthread_test_info(prev, XNATOMIC))
 		schedule_linux_call(LO_GKWAKE_REQ, prev_task, 0);
+#endif

 	next = xnshadow_thread(next_task);
 	set_switch_lock_owner(prev_task);
[Xenomai-git] Philippe Gerum : nucleus/shadow: wakeup the gatekeeper immediately when possible
Module: xenomai-rpm
Branch: for-upstream
Commit: 35e9df567e19410526089e24fbbaf92c47de9c43
URL: http://git.xenomai.org/?p=xenomai-rpm.git;a=commit;h=35e9df567e19410526089e24fbbaf92c47de9c43

Author: Philippe Gerum
Date:   Sun Jul 3 17:34:47 2011 +0200

nucleus/shadow: wakeup the gatekeeper immediately when possible

223685ce enabled the task hardening code over hybrid PREEMPT_RT +
I-pipe enabled kernels, by delaying the wake up call for the
gatekeeper to the schedule event handler. We may delay the wake up
call over non-RT 2.6 kernels as well (and actually did for a while),
but we may not do this when running over 2.4 (scheduler innards would
not allow this, causing weirdnesses when hardening tasks). So we
always do the early wake up when running non-RT, which implicitly
includes 2.4 kernels.

---
 ksrc/nucleus/shadow.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index f7407e9..7df7625 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -880,10 +880,12 @@ static void lostage_handler(void *cookie)
 			kill_proc(p->pid, arg, 1);
 			break;

+#ifdef CONFIG_PREEMPT_RT
 		case LO_GKWAKE_REQ:
 			sched = xnpod_sched_slot(cpu);
 			wake_up_interruptible_sync(&sched->gkwaitq);
 			break;
+#endif
 		}
 	}
 }
@@ -1060,6 +1062,23 @@ redo:
 	sched->gktarget = thread;
 	xnthread_set_info(thread, XNATOMIC);
 	set_current_state(TASK_INTERRUPTIBLE | TASK_ATOMICSWITCH);
+#ifndef CONFIG_PREEMPT_RT
+	/*
+	 * We may not hold the preemption lock across calls to
+	 * wake_up_*() services over fully preemptible kernels, since
+	 * tasks might sleep when contending for spinlocks. The wake
+	 * up call for the gatekeeper will happen later, over an APC
+	 * we kick in do_schedule_event() on the way out for the
+	 * hardening task.
+	 *
+	 * We could delay the wake up call over non-RT 2.6 kernels as
+	 * well, but not when running over 2.4 (scheduler innards
+	 * would not allow this, causing weirdnesses when hardening
+	 * tasks). So we always do the early wake up when running
+	 * non-RT, which includes 2.4.
+	 */
+	wake_up_interruptible_sync(&sched->gkwaitq);
+#endif
 	schedule();

 	xnthread_clear_info(thread, XNATOMIC);
@@ -2605,8 +2624,10 @@ static inline void do_schedule_event(struct task_struct *next_task)
 	prev_task = current;
 	prev = xnshadow_thread(prev_task);

+#ifdef CONFIG_PREEMPT_RT
 	if (prev && xnthread_test_info(prev, XNATOMIC))
 		schedule_linux_call(LO_GKWAKE_REQ, prev_task, 0);
+#endif

 	next = xnshadow_thread(next_task);
 	set_switch_lock_owner(prev_task);