[Xenomai-git] Philippe Gerum : nucleus: fix build issue introduced by 4860668b6
Module: xenomai-2.5
Branch: master
Commit: e959040d6ebe5f6edb4e2eef14d139df9dc49317
URL:    http://git.xenomai.org/?p=xenomai-2.5.git;a=commit;h=e959040d6ebe5f6edb4e2eef14d139df9dc49317

Author: Philippe Gerum <r...@xenomai.org>
Date:   Wed Sep  1 20:08:28 2010 +0200

nucleus: fix build issue introduced by 4860668b6

---

 ksrc/nucleus/select.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/ksrc/nucleus/select.c b/ksrc/nucleus/select.c
index 6ba59cd..1ffdb6a 100644
--- a/ksrc/nucleus/select.c
+++ b/ksrc/nucleus/select.c
@@ -459,7 +459,8 @@ int xnselect_mount(void)
 
 int xnselect_umount(void)
 {
-	return rthal_apc_free(xnselect_apc);
+	rthal_apc_free(xnselect_apc);
+	return 0;
 }
 
 /*@}*/

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : nucleus/sched: raise self-resched condition when unlocking scheduler
Module: xenomai-2.5
Branch: master
Commit: 38f2ca83a8e63cc94eaa911ff1c0940c884b5078
URL:    http://git.xenomai.org/?p=xenomai-2.5.git;a=commit;h=38f2ca83a8e63cc94eaa911ff1c0940c884b5078

Author: Philippe Gerum <r...@xenomai.org>
Date:   Wed Sep  1 18:37:16 2010 +0200

nucleus/sched: raise self-resched condition when unlocking scheduler

This patch turns the xnsched_set_resched() call into
xnsched_set_self_resched(), in xnpod_unlock_sched() where we always
deal with the local scheduler.

---

 ksrc/nucleus/pod.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
index 0f9ea71..7db0ccf 100644
--- a/ksrc/nucleus/pod.c
+++ b/ksrc/nucleus/pod.c
@@ -2361,7 +2361,7 @@ void xnpod_unlock_sched(void)
 
 	if (--xnthread_lock_count(curr) == 0) {
 		xnthread_clear_state(curr, XNLOCK);
-		xnsched_set_resched(curr->sched);
+		xnsched_set_self_resched(curr->sched);
 		xnpod_schedule();
 	}

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : nucleus/sched: fix race in non-atomic suspend path
Module: xenomai-2.5
Branch: master
Commit: 47dac49c71e89b684203e854d1b0172ecacbc555
URL:    http://git.xenomai.org/?p=xenomai-2.5.git;a=commit;h=47dac49c71e89b684203e854d1b0172ecacbc555

Author: Philippe Gerum <r...@xenomai.org>
Date:   Wed Sep  1 18:01:01 2010 +0200

nucleus/sched: fix race in non-atomic suspend path

f6af9b831 revealed a nasty race on a legit usage of the scheduling
support code, specifically when running the following sequence
non-atomically, i.e. nklock-free:

	xnpod_suspend_thread(current_thread)
	...
	xnpod_schedule()
	...

Doing so should have been 100% valid. Unfortunately, this used to be
unsafe under the hood (see __xnpod_schedule). This patch fixes it, and
also tests the XNRESCHED bit to avoid a useless rescheduling from the
code path introduced by f6af9b831.

---

 ksrc/nucleus/pod.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
index c377a31..0f9ea71 100644
--- a/ksrc/nucleus/pod.c
+++ b/ksrc/nucleus/pod.c
@@ -276,14 +276,17 @@ EXPORT_SYMBOL_GPL(xnpod_fatal_helper);
 
 void xnpod_schedule_handler(void) /* Called with hw interrupts off. */
 {
-	xnsched_t *sched = xnpod_current_sched();
+	xnsched_t *sched;
 
 	trace_mark(xn_nucleus, sched_remote, MARK_NOARGS);
 #if defined(CONFIG_SMP) && defined(CONFIG_XENO_OPT_PRIOCPL)
+	sched = xnpod_current_sched();
 	if (testbits(sched->status, XNRPICK)) {
 		clrbits(sched->status, XNRPICK);
 		xnshadow_rpi_check();
 	}
+#else
+	(void)sched;
 #endif /* CONFIG_SMP && CONFIG_XENO_OPT_PRIOCPL */
 	xnpod_schedule();
 }
@@ -1467,7 +1470,7 @@ void xnpod_suspend_thread(xnthread_t *thread, xnflags_t mask,
 	 */
 	if (mask & XNRELAX) {
 		xnlock_clear_irqon(&nklock);
-		__xnpod_schedule(sched);
+		xnpod_schedule();
 		return;
 	}
 	/*
@@ -2172,8 +2175,8 @@ static inline int __xnpod_test_resched(struct xnsched *sched)
 
 void __xnpod_schedule(struct xnsched *sched)
 {
-	struct xnthread *prev, *next, *curr = sched->curr;
 	int zombie, switched, need_resched, shadow;
+	struct xnthread *prev, *next, *curr;
 	spl_t s;
 
 	if (xnarch_escalate())
@@ -2183,6 +2186,8 @@ void __xnpod_schedule(struct xnsched *sched)
 
 	xnlock_get_irqsave(&nklock, s);
 
+	curr = sched->curr;
+
 	xnarch_trace_pid(xnthread_user_task(curr) ?
 			 xnarch_user_pid(xnthread_archtcb(curr)) : -1,
 			 xnthread_current_priority(curr));

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : common: fix private heap unmapping upon fork.
Module: xenomai-head
Branch: master
Commit: def92aae66fe3335664c92e4b212cc52fd501365
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=def92aae66fe3335664c92e4b212cc52fd501365

Author: Gilles Chanteperdrix <gilles.chanteperd...@xenomai.org>
Date:   Sun Aug 22 14:11:16 2010 +0200

common: fix private heap unmapping upon fork.

In order to fix private heap unmapping/remapping behaviour with
dlopen, an atfork handler was installed by commit
e70ce487ac7ab62cd8de28af8ddccf7309f1259d, which unmaps and remaps the
private heap.

Unfortunately, the atfork handler for semaphore heaps used a system
call to get the private heap data, which turned out to return data
for the shared heap after fork, leading to:
- a simple segfault if the private heap and shared heap have
  different sizes;
- silent corruption of the shared heap when using private objects in
  the child otherwise.

So, we fix this by only unmapping the private heap upon fork, and
waiting for any skin to be bound in the child process before
remapping the private heap.

---

 src/skins/common/sem_heap.c |   86 +++++++++++++++++++++++--------------------
 1 files changed, 50 insertions(+), 36 deletions(-)

diff --git a/src/skins/common/sem_heap.c b/src/skins/common/sem_heap.c
index 2adbdbc..c2f62cd 100644
--- a/src/skins/common/sem_heap.c
+++ b/src/skins/common/sem_heap.c
@@ -16,10 +16,16 @@
 #include <asm-generic/bits/current.h>
 #include "sem_heap.h"
 
-unsigned long xeno_sem_heap[2] = { 0, 0 };
+#define PRIVATE 0
+#define SHARED 1
 
 struct xnvdso *nkvdso;
 
+unsigned long xeno_sem_heap[2] = { 0, 0 };
+
+static pthread_once_t init_private_heap = PTHREAD_ONCE_INIT;
+static struct xnheap_desc private_hdesc;
+
 void *xeno_map_heap(struct xnheap_desc *hd)
 {
 	int fd, ret;
@@ -47,43 +53,42 @@ void *xeno_map_heap(struct xnheap_desc *hd)
 
 static void *map_sem_heap(unsigned int shared)
 {
-	struct xnheap_desc hdesc;
+	struct xnheap_desc global_hdesc, *hdesc;
 	int ret;
 
-	ret = XENOMAI_SYSCALL2(__xn_sys_sem_heap, &hdesc, shared);
+	hdesc = shared ? &global_hdesc : &private_hdesc;
+	ret = XENOMAI_SYSCALL2(__xn_sys_sem_heap, hdesc, shared);
 	if (ret < 0) {
 		errno = -ret;
 		perror("Xenomai: sys_sem_heap");
 		return MAP_FAILED;
 	}
 
-	return xeno_map_heap(&hdesc);
+	return xeno_map_heap(hdesc);
 }
 
-static void unmap_sem_heap(unsigned long addr, unsigned int shared)
+static void unmap_on_fork(void)
 {
-	struct xnheap_desc hdesc;
-	int ret;
-
-	ret = XENOMAI_SYSCALL2(__xn_sys_sem_heap, &hdesc, shared);
-	if (ret < 0) {
-		errno = -ret;
-		perror("Xenomai: unmap sem_heap");
-		return;
-	}
-
-	munmap((void *)addr, hdesc.size);
-}
-
-static void remap_on_fork(void)
-{
-	unmap_sem_heap(xeno_sem_heap[0], 0);
-
-	xeno_sem_heap[0] = (unsigned long)map_sem_heap(0);
-	if (xeno_sem_heap[0] == (unsigned long)MAP_FAILED) {
-		perror("Xenomai: mmap local sem heap");
-		exit(EXIT_FAILURE);
-	}
+	/*
+	   Remapping the private heap must be done after the process
+	   has been bound again, in order for it to have a new private
+	   heap. Otherwise the global heap would be used instead, which
+	   leads to unwanted effects.
+
+	   We set xeno_sem_heap[PRIVATE] to NULL on machines with an
+	   MMU, so that any reference to the private heap prior to
+	   re-binding will cause a segmentation fault.
+
+	   On machines without an MMU, we keep the address unchanged,
+	   it will cause unwanted mutual exclusion with the father,
+	   but at least, we will not get any memory corruption.
+	 */
+
+	munmap((void *)xeno_sem_heap[PRIVATE], private_hdesc.size);
+#ifdef CONFIG_MMU
+	xeno_sem_heap[PRIVATE] = NULL;
+#endif
+	init_private_heap = PTHREAD_ONCE_INIT;
 }
 
 static void xeno_init_vdso(void)
@@ -98,22 +103,29 @@ static void xeno_init_vdso(void)
 		exit(EXIT_FAILURE);
 	}
 
-	nkvdso = (struct xnvdso *)(xeno_sem_heap[1] + sysinfo.vdso);
+	nkvdso = (struct xnvdso *)(xeno_sem_heap[SHARED] + sysinfo.vdso);
 	if (!xnvdso_test_feature(XNVDSO_FEAT_DROP_U_MODE))
 		xeno_current_warn_old();
 }
 
-static void xeno_init_sem_heaps_inner(void)
+/* Will be called once at library loading time, and when re-binding
+   after a fork */
+static void xeno_init_private_heap(void)
 {
-	xeno_sem_heap[0] = (unsigned long)map_sem_heap(0);
-	if (xeno_sem_heap[0] == (unsigned long)MAP_FAILED) {
+	xeno_sem_heap[PRIVATE] = (unsigned long)map_sem_heap(PRIVATE);
+	if (xeno_sem_heap[PRIVATE] == (unsigned long)MAP_FAILED) {
 		perror("Xenomai: mmap local sem heap");
 		exit(EXIT_FAILURE);
 	}
-	pthread_atfork(NULL, NULL, remap_on_fork);
+}
 
-	xeno_sem_heap[1] = (unsigned long)map_sem_heap(1);
-	if
[Xenomai-git] Gilles Chanteperdrix : common: fix comment
Module: xenomai-head
Branch: master
Commit: aa749d2f611d513e9551fd7a4f56bb4cac278fb0
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=aa749d2f611d513e9551fd7a4f56bb4cac278fb0

Author: Gilles Chanteperdrix <gilles.chanteperd...@xenomai.org>
Date:   Sun Aug 22 16:52:32 2010 +0200

common: fix comment

---

 src/skins/common/sem_heap.c |   10 +++-------
 1 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/src/skins/common/sem_heap.c b/src/skins/common/sem_heap.c
index c2f62cd..189272e 100644
--- a/src/skins/common/sem_heap.c
+++ b/src/skins/common/sem_heap.c
@@ -75,19 +75,15 @@ static void unmap_on_fork(void)
 	   Otherwise the global heap would be used instead, which
 	   leads to unwanted effects.
 
-	   We set xeno_sem_heap[PRIVATE] to NULL on machines with an
-	   MMU, so that any reference to the private heap prior to
+	   We set xeno_sem_heap[PRIVATE] to NULL. On machines with an
+	   MMU, any reference to the private heap prior to
 	   re-binding will cause a segmentation fault.
 
-	   On machines without an MMU, we keep the address unchanged,
-	   it will cause unwanted mutual exclusion with the father,
-	   but at least, we will not get any memory corruption.
+	   On machines without an MMU, there is no such thing as fork.
 	 */
 
 	munmap((void *)xeno_sem_heap[PRIVATE], private_hdesc.size);
-#ifdef CONFIG_MMU
 	xeno_sem_heap[PRIVATE] = NULL;
-#endif
 	init_private_heap = PTHREAD_ONCE_INIT;
 }

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : fast_sync: Set user-space current thread handle to XN_NO_HANDLE after fork
Module: xenomai-head
Branch: master
Commit: ce8a5675124c505732ea52a12a3e2d3cc9a78aa9
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=ce8a5675124c505732ea52a12a3e2d3cc9a78aa9

Author: Gilles Chanteperdrix <gilles.chanteperd...@xenomai.org>
Date:   Sun Aug 22 16:09:58 2010 +0200

fast_sync: Set user-space current thread handle to XN_NO_HANDLE after fork

---

 include/asm-generic/bits/current.h |    5 +++++
 src/skins/common/current.c         |   12 ++++++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/bits/current.h b/include/asm-generic/bits/current.h
index 0f299ea..79123e8 100644
--- a/include/asm-generic/bits/current.h
+++ b/include/asm-generic/bits/current.h
@@ -32,6 +32,10 @@ static inline unsigned long xeno_get_current_mode(void)
 #else /* !HAVE___THREAD */
 extern pthread_key_t xeno_current_key;
 
+xnhandle_t xeno_slow_get_current(void);
+
+unsigned long xeno_slow_get_current_mode(void);
+
 static inline xnhandle_t xeno_get_current(void)
 {
 	void *val = pthread_getspecific(xeno_current_key);
@@ -59,6 +63,7 @@ static inline unsigned long xeno_get_current_mode(void)
 
 void xeno_set_current(void);
 unsigned long *xeno_init_current_mode(void);
+void xeno_init_current_keys(void);
 
 #endif /* _XENO_ASM_GENERIC_CURRENT_H */
diff --git a/src/skins/common/current.c b/src/skins/common/current.c
index 4e75690..50d8a65 100644
--- a/src/skins/common/current.c
+++ b/src/skins/common/current.c
@@ -3,10 +3,11 @@
 #include <string.h>
 #include <pthread.h>
 
-#include <asm/xenomai/syscall.h>
 #include <nucleus/types.h>
 #include <nucleus/thread.h>
 #include <nucleus/vdso.h>
+#include <asm/xenomai/syscall.h>
+#include <asm-generic/bits/current.h>
 
 pthread_key_t xeno_current_mode_key;
 
@@ -85,13 +86,20 @@ static void cleanup_current_mode(void *key)
 	}
 }
 
+static void xeno_current_fork_handler(void)
+{
+	if (xeno_get_current() != XN_NO_HANDLE)
+		__xeno_set_current(XN_NO_HANDLE);
+}
+
 static void init_current_keys(void)
 {
 	int err = create_current_key();
-	if (err) goto error_exit;
+	if (err)
+		goto error_exit;
+
+	pthread_atfork(NULL, NULL, xeno_current_fork_handler);
 
 	err = pthread_key_create(&xeno_current_mode_key, cleanup_current_mode);
 	if (err) {
 	  error_exit:

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : common: Fix typo in edaf1e2e54343b6e4bf5cf6ece9175ec0ab21cad
Module: xenomai-head
Branch: master
Commit: 2023fc82801dc9af1ae5d60b14fd6f25d21f0898
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=2023fc82801dc9af1ae5d60b14fd6f25d21f0898

Author: Gilles Chanteperdrix <gilles.chanteperd...@xenomai.org>
Date:   Mon Aug 23 13:10:26 2010 +0200

common: Fix typo in edaf1e2e54343b6e4bf5cf6ece9175ec0ab21cad

---

 src/skins/common/current.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/skins/common/current.c b/src/skins/common/current.c
index 50d8a65..91a5088 100644
--- a/src/skins/common/current.c
+++ b/src/skins/common/current.c
@@ -47,7 +47,7 @@ static inline int create_current_key(void)
 
 static inline void __xeno_set_current(xnhandle_t current)
 {
-	current = (current == XN_NO_HANDLE ? current : (xnhandle_t)(0));
+	current = (current != XN_NO_HANDLE ? current : (xnhandle_t)(0));
 	pthread_setspecific(xeno_current_key, (void *)current);
 }

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : powerpc: upgrade I-pipe support to 2.6.34.4-powerpc-2.10-04
Module: xenomai-head
Branch: master
Commit: 13ed6c91c199982a98a890cbcf753c8bb7ec37b6
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=13ed6c91c199982a98a890cbcf753c8bb7ec37b6

Author: Philippe Gerum <r...@xenomai.org>
Date:   Sun Aug 15 12:12:49 2010 +0200

powerpc: upgrade I-pipe support to 2.6.34.4-powerpc-2.10-04

---

 ... => adeos-ipipe-2.6.34.4-powerpc-2.10-04.patch} |  163 ++++++++++----------
 1 files changed, 80 insertions(+), 83 deletions(-)

diff --git a/ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34-powerpc-2.10-03.patch b/ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34.4-powerpc-2.10-04.patch
similarity index 99%
rename from ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34-powerpc-2.10-03.patch
rename to ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34.4-powerpc-2.10-04.patch
index 233e0b4..a584678 100644
--- a/ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34-powerpc-2.10-03.patch
+++ b/ksrc/arch/powerpc/patches/adeos-ipipe-2.6.34.4-powerpc-2.10-04.patch
@@ -262,7 +262,7 @@ index bd100fc..8fa1901 100644
   * or should we not care like we do now ? --BenH.
 diff --git a/arch/powerpc/include/asm/ipipe.h b/arch/powerpc/include/asm/ipipe.h
 new file mode 100644
-index 000..31d54bb
+index 000..32e2f6d
 --- /dev/null
 +++ b/arch/powerpc/include/asm/ipipe.h
 @@ -0,0 +1,277 @@
@@ -313,10 +313,10 @@ index 000..31d54bb
 +#include <asm/paca.h>
 +#endif
 +
-+#define IPIPE_ARCH_STRING	"2.10-03"
++#define IPIPE_ARCH_STRING	"2.10-04"
 +#define IPIPE_MAJOR_NUMBER	2
 +#define IPIPE_MINOR_NUMBER	10
-+#define IPIPE_PATCH_NUMBER	3
++#define IPIPE_PATCH_NUMBER	4
 +
 +#ifdef CONFIG_IPIPE_WANT_PREEMPTIBLE_SWITCH
 +
@@ -1240,7 +1240,7 @@ index 8773263..aafe4c0 100644
  obj-$(CONFIG_PPC_OF)		+= of_device.o of_platform.o prom_parse.o
  obj-$(CONFIG_PPC_CLOCK)		+= clock.o
 diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
-index c09138d..d725dd6 100644
+index b894721..51aeaf6 100644
 --- a/arch/powerpc/kernel/asm-offsets.c
 +++ b/arch/powerpc/kernel/asm-offsets.c
 @@ -131,8 +131,12 @@ int main(void)
@@ -2186,7 +2186,7 @@ index 50504ae..01b3d31 100644
  #define FP_UNAVAILABLE_EXCEPTION				\
  	START_EXCEPTION(FloatingPointUnavailable)		\
 diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
-index 7255265..b03d7a8 100644
+index edd4a57..c442e23 100644
 --- a/arch/powerpc/kernel/head_fsl_booke.S
 +++ b/arch/powerpc/kernel/head_fsl_booke.S
 @@ -488,7 +488,11 @@ interrupt_base:
@@ -2262,7 +2262,7 @@ index 5328709..8c3a2b7 100644
  	sync
 diff --git a/arch/powerpc/kernel/ipipe.c b/arch/powerpc/kernel/ipipe.c
 new file mode 100644
-index 000..85d9642
+index 000..47603b6
 --- /dev/null
 +++ b/arch/powerpc/kernel/ipipe.c
 @@ -0,0 +1,866 @@
@@ -2997,7 +2997,7 @@ index 000..85d9642
 +	if (unlikely(__ipipe_ipending_p(p))) {
 +		add_preempt_count(PREEMPT_ACTIVE);
 +		clear_bit(IPIPE_STALL_FLAG, &p->status);
-+		__ipipe_sync_pipeline(IPIPE_IRQ_DOALL);
++		__ipipe_sync_pipeline();
 +		sub_preempt_count(PREEMPT_ACTIVE);
 +	}
 +
@@ -3088,7 +3088,7 @@ index 000..85d9642
 +
 +	p = ipipe_root_cpudom_ptr();
 +	if (__ipipe_ipending_p(p))
-+		__ipipe_sync_pipeline(IPIPE_IRQ_DOVIRT);
++		__ipipe_sync_pipeline();
 +
 +#ifdef CONFIG_PPC32
 +	local_irq_enable_hw();
@@ -3133,7 +3133,7 @@ index 000..85d9642
 +EXPORT_SYMBOL_GPL(atomic_clear_mask);
 +#endif	/* !CONFIG_PPC64 */
 diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
-index 066bd31..e75a1d3 100644
+index 127e443..58922b5 100644
 --- a/arch/powerpc/kernel/irq.c
 +++ b/arch/powerpc/kernel/irq.c
 @@ -95,6 +95,8 @@ EXPORT_SYMBOL(irq_desc);
@@ -3155,7 +3155,7 @@ index 066bd31..e75a1d3 100644
  #endif /* CONFIG_PPC64 */
 
  static int show_other_interrupts(struct seq_file *p, int prec)
-@@ -315,7 +320,7 @@ void fixup_irqs(cpumask_t map)
+@@ -318,7 +323,7 @@ void fixup_irqs(cpumask_t map)
  #endif
 
  #ifdef CONFIG_IRQSTACKS
@@ -3164,7 +3164,7 @@ index 066bd31..e75a1d3 100644
  {
  	struct thread_info *curtp, *irqtp;
  	unsigned long saved_sp_limit;
-@@ -356,13 +361,13 @@ static inline void handle_one_irq(unsigned int irq)
+@@ -359,13 +364,13 @@ static inline void handle_one_irq(unsigned int irq)
  		set_bits(irqtp->flags, &curtp->flags);
  	}
  #else
@@ -3180,7 +3180,7 @@ index 066bd31..e75a1d3 100644
  {
  #ifdef CONFIG_DEBUG_STACKOVERFLOW
  	long sp;
-@@ -378,6 +383,16 @@ static inline void check_stack_overflow(void)
+@@ -381,6 +386,16 @@ static inline void check_stack_overflow(void)
  #endif
  }
 
@@ -5396,10 +5396,10 @@ index d5b3876..010aa8b 100644
  #endif /* LINUX_HARDIRQ_H */
 diff --git a/include/linux/ipipe.h b/include/linux/ipipe.h
 new file mode 100644
-index 000..c458883
+index 000..9af86be
 --- /dev/null
 +++
[Xenomai-git] Philippe Gerum : nucleus: requeue blocked non-periodic timers properly
Module: xenomai-head
Branch: master
Commit: fb095979f8244d885283852a259a19ef93db8636
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=fb095979f8244d885283852a259a19ef93db8636

Author: Philippe Gerum <r...@xenomai.org>
Date:   Fri Aug 20 07:51:59 2010 +0200

nucleus: requeue blocked non-periodic timers properly

Single-stepping into a Xenomai application should freeze timers to
avoid overruns, until the program is continued. Unfortunately,
non-periodic timers are not requeued when the time base is unblocked,
preventing their timeout handlers from firing, thus causing the tasks
pending on them to hang indefinitely.

This patch requeues those timers properly, at 250 ms intervals, until
the timebase is unblocked.

---

 ksrc/nucleus/timer.c |   23 +++++++++++++++--------
 1 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/ksrc/nucleus/timer.c b/ksrc/nucleus/timer.c
index bb9416c..aa7fce4 100644
--- a/ksrc/nucleus/timer.c
+++ b/ksrc/nucleus/timer.c
@@ -342,10 +342,10 @@ void xntimer_tick_aperiodic(void)
 {
 	xnsched_t *sched = xnpod_current_sched();
 	xntimerq_t *timerq = &sched->timerqueue;
+	xnticks_t now, interval;
 	xntimerh_t *holder;
 	xntimer_t *timer;
 	xnsticks_t delta;
-	xnticks_t now;
 
 	/*
 	 * Optimisation: any local timer reprogramming triggered by
@@ -389,13 +389,18 @@ void xntimer_tick_aperiodic(void)
 				__setbits(timer->status, XNTIMER_FIRED);
 			} else if (likely(!testbits(timer->status, XNTIMER_PERIODIC))) {
 				/*
-				 * Postpone the next tick to a
-				 * reasonable date in the future,
-				 * waiting for the timebase to be
-				 * unlocked at some point.
+				 * Make the blocked timer elapse again
+				 * at a reasonably close date in the
+				 * future, waiting for the timebase to
+				 * be unlocked at some point. Timers
+				 * are blocked when single-stepping
+				 * into an application using a
+				 * debugger, so it is fine to wait for
+				 * 250 ms for the user to continue
+				 * program execution.
 				 */
-				xntimerh_date(&timer->aplink) = xntimerh_date(&sched->htimer.aplink);
-				continue;
+				interval = xnarch_ns_to_tsc(250000000ULL);
+				goto requeue;
 			}
 		} else {
 			/*
@@ -411,8 +416,10 @@ void xntimer_tick_aperiodic(void)
 			continue;
 		}
 
+		interval = timer->interval;
+	  requeue:
 		do {
-			xntimerh_date(&timer->aplink) += timer->interval;
+			xntimerh_date(&timer->aplink) += interval;
 		} while (xntimerh_date(&timer->aplink) < now + nklatency);
 		xntimer_enqueue_aperiodic(timer);
 	}

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : arm: force enable preemptible switch support in SMP mode
Module: xenomai-head
Branch: master
Commit: b712571a23982ef639adcbc061cb0fef45a6133c
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=b712571a23982ef639adcbc061cb0fef45a6133c

Author: Philippe Gerum <r...@xenomai.org>
Date:   Sun May 2 12:20:50 2010 +0200

arm: force enable preemptible switch support in SMP mode

---

 ksrc/arch/arm/Kconfig |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/ksrc/arch/arm/Kconfig b/ksrc/arch/arm/Kconfig
index 8dcb28f..deee854 100644
--- a/ksrc/arch/arm/Kconfig
+++ b/ksrc/arch/arm/Kconfig
@@ -18,8 +18,8 @@ depends on XENO_OPT_NUCLEUS
 
 config IPIPE_WANT_PREEMPTIBLE_SWITCH
 	bool
-	default y if XENO_HW_UNLOCKED_SWITCH
-	default n if !XENO_HW_UNLOCKED_SWITCH
+	default y if (XENO_HW_UNLOCKED_SWITCH || SMP)
+	default n if (!XENO_HW_UNLOCKED_SWITCH && !SMP)
 
 config XENO_HW_FPU
 	bool "Enable FPU support"

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : arm: use rthal_processor_id() over non-linux contexts
Module: xenomai-head
Branch: master
Commit: 271a5afc9793920a117c83dd1afd66f820ad7e2a
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=271a5afc9793920a117c83dd1afd66f820ad7e2a

Author: Philippe Gerum <r...@xenomai.org>
Date:   Thu May 6 20:06:17 2010 +0200

arm: use rthal_processor_id() over non-linux contexts

---

 include/asm-arm/bits/pod.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/asm-arm/bits/pod.h b/include/asm-arm/bits/pod.h
index 9304250..3dc51f8 100644
--- a/include/asm-arm/bits/pod.h
+++ b/include/asm-arm/bits/pod.h
@@ -153,7 +153,7 @@ static inline void xnarch_enable_fpu(xnarchtcb_t *tcb)
 		   save the fpu state and disable them, to get linux
 		   fpu fault handler take care of them correctly. */
 		rthal_save_fpu(tcb->fpup, fpexc);
-		last_VFP_context[smp_processor_id()] = NULL;
+		last_VFP_context[rthal_processor_id()] = NULL;
 		rthal_disable_fpu();
 	}
 #else /* !CONFIG_VFP */
@@ -214,7 +214,7 @@ static inline void xnarch_restore_fpu(xnarchtcb_t * tcb)
 		   task, into the FPU area of the last non RT task which
 		   used the FPU before the preemption by Xenomai. */
-		last_VFP_context[smp_processor_id()] = NULL;
+		last_VFP_context[rthal_processor_id()] = NULL;
 		rthal_disable_fpu();
 	}
 #else /* !CONFIG_VFP */

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : powerpc: resync thread switch code with mainline >= 2.6.32
Module: xenomai-head
Branch: master
Commit: 5957985beea3da52047b26fbec3258ba9775f102
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=5957985beea3da52047b26fbec3258ba9775f102

Author: Philippe Gerum <r...@xenomai.org>
Date:   Tue Aug 24 15:23:39 2010 +0200

powerpc: resync thread switch code with mainline >= 2.6.32

---

 ksrc/arch/powerpc/switch_64.S |  141 ++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 138 insertions(+), 3 deletions(-)

diff --git a/ksrc/arch/powerpc/switch_64.S b/ksrc/arch/powerpc/switch_64.S
index 88512f0..a241727 100644
--- a/ksrc/arch/powerpc/switch_64.S
+++ b/ksrc/arch/powerpc/switch_64.S
@@ -127,7 +127,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	addi	r1,r1,SWITCH_FRAME_SIZE
 	blr
 
-#else /* Linux >= 2.6.24 */
+#elif LINUX_VERSION_CODE < KERNEL_VERSION(2,6,32)
 
 	.align	7
 _GLOBAL(rthal_thread_switch)
@@ -140,6 +140,11 @@ _GLOBAL(rthal_thread_switch)
 	mflr	r20		/* Return to switch caller */
 	mfmsr	r22
 	li	r0, MSR_FP
+#ifdef CONFIG_VSX
+BEGIN_FTR_SECTION
+	oris	r0,r0,MSR_VSX@h	/* Disable VSX */
+END_FTR_SECTION_IFSET(CPU_FTR_VSX)
+#endif /* CONFIG_VSX */
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
 	oris	r0,r0,MSR_VEC@h	/* Disable altivec */
@@ -150,7 +155,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	and.	r0,r0,r22
 	beq+	1f
 	andc	r22,r22,r0
-	mtmsrd	r22
+	MTMSRD(r22)
 	isync
 1:	std	r20,_NIP(r1)
 	mfcr	r23
@@ -258,7 +263,137 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	addi	r1,r1,SWITCH_FRAME_SIZE
 	blr
 
-#endif
+#else /* Linux >= 2.6.32 */
+
+_GLOBAL(rthal_thread_switch)
+	mflr	r0
+	std	r0,16(r1)
+	stdu	r1,-SWITCH_FRAME_SIZE(r1)
+	/* r3-r13 are caller saved -- Cort */
+	SAVE_8GPRS(14, r1)
+	SAVE_10GPRS(22, r1)
+	mflr	r20		/* Return to switch caller */
+	mfmsr	r22
+	li	r0, MSR_FP
+#ifdef CONFIG_VSX
+BEGIN_FTR_SECTION
+	oris	r0,r0,MSR_VSX@h	/* Disable VSX */
+END_FTR_SECTION_IFSET(CPU_FTR_VSX)
+#endif /* CONFIG_VSX */
+#ifdef CONFIG_ALTIVEC
+BEGIN_FTR_SECTION
+	oris	r0,r0,MSR_VEC@h	/* Disable altivec */
+	mfspr	r24,SPRN_VRSAVE	/* save vrsave register value */
+	std	r24,THREAD_VRSAVE(r3)
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+#endif /* CONFIG_ALTIVEC */
+	and.	r0,r0,r22
+	beq+	1f
+	andc	r22,r22,r0
+	MTMSRD(r22)
+	isync
+1:	std	r20,_NIP(r1)
+	mfcr	r23
+	std	r23,_CCR(r1)
+	std	r1,KSP(r3)	/* Set old stack pointer */
+
+#ifdef CONFIG_SMP
+	/* We need a sync somewhere here to make sure that if the
+	 * previous task gets rescheduled on another CPU, it sees all
+	 * stores it has performed on this one.
+	 */
+	sync
+#endif /* CONFIG_SMP */
+
+	ld	r8,KSP(r4)	/* new stack pointer */
+	ld	r3,PACACURRENT(r13) /* we must return prev when paired to switch_to() */
+
+	cmpwi	cr5,r5,0	/* is it a kernel thread */
+	bne-	cr5,10f		/* if so, don't touch 'current' */
+
+	addi	r6,r4,-THREAD	/* Convert THREAD to 'current' */
+	std	r6,PACACURRENT(r13)	/* Set new 'current' */
+10:
+#ifdef CONFIG_PPC_BOOK3S
+BEGIN_FTR_SECTION
+  BEGIN_FTR_SECTION_NESTED(95)
+	clrrdi	r6,r8,28	/* get its ESID */
+	clrrdi	r9,r1,28	/* get current sp ESID */
+  FTR_SECTION_ELSE_NESTED(95)
+	clrrdi	r6,r8,40	/* get its 1T ESID */
+	clrrdi	r9,r1,40	/* get current sp 1T ESID */
+  ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_1T_SEGMENT, 95)
+FTR_SECTION_ELSE
+	b	2f
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_SLB)
+	clrldi.	r0,r6,2		/* is new ESID c00000000? */
+	cmpd	cr1,r6,r9	/* or is new ESID the same as current ESID? */
+	cror	eq,4*cr1+eq,eq
+	beq	2f		/* if yes, don't slbie it */
+
+	/* Bolt in the new stack SLB entry */
+	ld	r7,KSP_VSID(r4)	/* Get new stack's VSID */
+	oris	r0,r6,(SLB_ESID_V)@h
+	ori	r0,r0,(SLB_NUM_BOLTED-1)@l
+BEGIN_FTR_SECTION
+	li	r9,MMU_SEGSIZE_1T	/* insert B field */
+	oris	r6,r6,(MMU_SEGSIZE_1T << SLBIE_SSIZE_SHIFT)@h
+	rldimi	r7,r9,SLB_VSID_SSIZE_SHIFT,0
+END_FTR_SECTION_IFSET(CPU_FTR_1T_SEGMENT)
+
+	/* Update the last bolted SLB.  No write barriers are needed
+	 * here, provided we only update the current CPU's SLB shadow
+	 * buffer.
+	 */
+	ld	r9,PACA_SLBSHADOWPTR(r13)
+	li	r12,0
+	std	r12,SLBSHADOW_STACKESID(r9) /* Clear ESID */
+	std	r7,SLBSHADOW_STACKVSID(r9)  /* Save VSID */
+	std	r0,SLBSHADOW_STACKESID(r9)  /* Save ESID */
+
+	/* No need to check for CPU_FTR_NO_SLBIE_B here, since when
+	 * we have 1TB segments, the only CPUs known
[Xenomai-git] Philippe Gerum : x86: increase SMP calibration value
Module: xenomai-head
Branch: master
Commit: c847aa98f44ba3ba734a373238f572fe8c65bb3b
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=c847aa98f44ba3ba734a373238f572fe8c65bb3b

Author: Philippe Gerum <r...@xenomai.org>
Date:   Thu Aug 26 17:53:36 2010 +0200

x86: increase SMP calibration value

---

 include/asm-x86/calibration.h |   13 ++++++++-----
 1 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/asm-x86/calibration.h b/include/asm-x86/calibration.h
index bef4ad3..fa3b3b8 100644
--- a/include/asm-x86/calibration.h
+++ b/include/asm-x86/calibration.h
@@ -33,14 +33,17 @@ static inline unsigned long xnarch_get_sched_latency (void)
 #if CONFIG_XENO_OPT_TIMING_SCHEDLAT != 0
 	sched_latency = CONFIG_XENO_OPT_TIMING_SCHEDLAT;
 #else
-#ifdef CONFIG_X86_LOCAL_APIC
+#ifdef CONFIG_SMP
+	sched_latency = 3350;
+#elif defined(CONFIG_X86_LOCAL_APIC)
 	sched_latency = 1000;
 #else /* !CONFIG_X86_LOCAL_APIC */
 	/*
-	 * Use the bogomips formula to identify low-end x86 boards when using
-	 * the 8254 PIT. The following is still grossly experimental and needs
-	 * work (i.e. more specific cases), but the approach is definitely
-	 * saner than previous attempts to guess such value dynamically.
+	 * Use the bogomips formula to identify low-end x86 boards
+	 * when using the 8254 PIT. The following is still grossly
+	 * experimental and needs work (i.e. more specific cases), but
+	 * the approach is definitely saner than previous attempts to
+	 * guess such value dynamically.
 	 */
 #define __bogomips (current_cpu_data.loops_per_jiffy/(500000/HZ))
 	sched_latency = (__bogomips < 250 ? 17000 :

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : nucleus/sched: move locking to resume_rpi/suspend_rpi
Module: xenomai-head
Branch: master
Commit: 46281296696b1342f79045d880e32ae24571928a
URL:    http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=46281296696b1342f79045d880e32ae24571928a

Author: Philippe Gerum <r...@xenomai.org>
Date:   Fri Aug 27 07:51:21 2010 +0200

nucleus/sched: move locking to resume_rpi/suspend_rpi

Most scheduling classes do not implement RPI resume/suspend callbacks
upon thread state transition, so there is no need to grab the nklock
for running an empty stub for them.

This patch lets the resume_rpi/suspend_rpi callbacks deal with proper
locking internally, instead of grabbing the nucleus lock
unconditionally around those calls.

---

 ksrc/nucleus/sched-sporadic.c |   12 ++++++++++++
 ksrc/nucleus/shadow.c         |   19 +++++++++----------
 2 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/ksrc/nucleus/sched-sporadic.c b/ksrc/nucleus/sched-sporadic.c
index fe12400..8a788b8 100644
--- a/ksrc/nucleus/sched-sporadic.c
+++ b/ksrc/nucleus/sched-sporadic.c
@@ -401,14 +401,26 @@ static struct xnthread *xnsched_sporadic_peek_rpi(struct xnsched *sched)
 
 static void xnsched_sporadic_suspend_rpi(struct xnthread *thread)
 {
+	spl_t s;
+
+	xnlock_get_irqsave(&nklock, s);
+
 	if (thread->pss)
 		sporadic_suspend_activity(thread);
+
+	xnlock_put_irqrestore(&nklock, s);
 }
 
 static void xnsched_sporadic_resume_rpi(struct xnthread *thread)
 {
+	spl_t s;
+
+	xnlock_get_irqsave(&nklock, s);
+
 	if (thread->pss)
 		sporadic_resume_activity(thread);
+
+	xnlock_put_irqrestore(&nklock, s);
 }
 
 #endif /* CONFIG_XENO_OPT_PRIOCPL */
diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index 3d14784..609151e 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -347,18 +347,16 @@ static void rpi_clear_remote(struct xnthread *thread)
 
 static void rpi_migrate(struct xnsched *sched, struct xnthread *thread)
 {
-	spl_t s;
-
 	rpi_clear_remote(thread);
 	rpi_push(sched, thread);
 	/*
 	 * The remote CPU already ran rpi_switch() for the leaving
 	 * thread, so there is no point in calling
-	 * xnsched_suspend_rpi() for the latter anew.
+	 * xnsched_suspend_rpi() for the latter anew. Proper locking
+	 * is left to the resume_rpi() callback, so that we don't grab
+	 * the nklock uselessly for nop calls.
 	 */
-	xnlock_get_irqsave(&nklock, s);
 	xnsched_resume_rpi(thread);
-	xnlock_put_irqrestore(&nklock, s);
 }
 
 #else /* !CONFIG_SMP */
@@ -400,10 +398,13 @@ static inline void rpi_switch(struct task_struct *next_task)
 			xnsched_pop_rpi(prev);
 			prev->rpi = NULL;
 			xnlock_put_irqrestore(&sched->rpilock, s);
-			/* Do NOT nest the rpilock and nklock locks. */
-			xnlock_get_irqsave(&nklock, s);
+			/*
+			 * Do NOT nest the rpilock and nklock locks.
+			 * Proper locking is left to the suspend_rpi()
+			 * callback, so that we don't grab the nklock
+			 * uselessly for nop calls.
+			 */
 			xnsched_suspend_rpi(prev);
-			xnlock_put_irqrestore(&nklock, s);
 		} else
 			xnlock_put_irqrestore(&sched->rpilock, s);
 	}
@@ -457,9 +458,7 @@ static inline void rpi_switch(struct task_struct *next_task)
 			xnsched_push_rpi(sched, next);
 			next->rpi = sched;
 			xnlock_put_irqrestore(&sched->rpilock, s);
-			xnlock_get_irqsave(&nklock, s);
 			xnsched_resume_rpi(next);
-			xnlock_put_irqrestore(&nklock, s);
 		}
 	} else if (unlikely(next->rpi != sched))
 		/* We hold no lock here. */

_______________________________________________
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : hal/generic: inline APC scheduling code
Module: xenomai-head Branch: master Commit: a27bab82ca474073108766d0c60a847a5d14a058 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=a27bab82ca474073108766d0c60a847a5d14a058 Author: Philippe Gerum r...@xenomai.org Date: Sat Aug 28 12:52:55 2010 +0200 hal/generic: inline APC scheduling code rthal_apc_schedule() may be called on the hot path, its implementation is simple and we need it to be as fast as possible. This patch inlines it. In the same move, error checking on the APC parameter is either removed (APC scheduling) or converted to a BUG_ON() assertion, since that interface is way too internal for recovering sanely from such a misuse. --- include/asm-generic/hal.h | 22 +++- ksrc/arch/generic/hal.c | 123 + 2 files changed, 67 insertions(+), 78 deletions(-) diff --git a/include/asm-generic/hal.h b/include/asm-generic/hal.h index 5ff7fb0..a933404 100644 --- a/include/asm-generic/hal.h +++ b/include/asm-generic/hal.h @@ -451,6 +451,10 @@ extern unsigned long rthal_apc_map; extern struct rthal_apc_desc rthal_apc_table[RTHAL_NR_APCS]; +extern unsigned long rthal_apc_pending[RTHAL_NR_CPUS]; + +extern unsigned int rthal_apc_virq; + extern int rthal_arch_init(void); extern void rthal_arch_cleanup(void); @@ -524,9 +528,23 @@ int rthal_apc_alloc(const char *name, void (*handler)(void *cookie), void *cookie); -int rthal_apc_free(int apc); +void rthal_apc_free(int apc); + +static inline void __rthal_apc_schedule(int apc) +{ + int cpu = rthal_processor_id(); + if (!__test_and_set_bit(apc, rthal_apc_pending[cpu])) + rthal_schedule_irq_root(rthal_apc_virq); +} + +static inline void rthal_apc_schedule(int apc) +{ + unsigned long flags; -int rthal_apc_schedule(int apc); + rthal_local_irq_save(flags); + __rthal_apc_schedule(apc); + rthal_local_irq_restore(flags); +} int rthal_irq_affinity(unsigned irq, cpumask_t cpumask, diff --git a/ksrc/arch/generic/hal.c b/ksrc/arch/generic/hal.c index 304962b..9ac52dc 100644 --- a/ksrc/arch/generic/hal.c +++ 
b/ksrc/arch/generic/hal.c @@ -66,10 +66,6 @@ EXPORT_SYMBOL(rthal_supported_cpus); static int rthal_init_done; -static unsigned rthal_apc_virq; - -static unsigned long rthal_apc_pending[RTHAL_NR_CPUS]; - static rthal_spinlock_t rthal_apc_lock = RTHAL_SPIN_LOCK_UNLOCKED; static atomic_t rthal_sync_count = ATOMIC_INIT(1); @@ -91,6 +87,10 @@ EXPORT_SYMBOL_GPL(rthal_apc_table); volatile int rthal_sync_op; +unsigned long rthal_apc_pending[RTHAL_NR_CPUS]; + +unsigned int rthal_apc_virq; + unsigned long rthal_critical_enter(void (*synch) (void)) { unsigned long flags = rthal_grab_superlock(synch); @@ -508,26 +508,28 @@ void rthal_apc_kicker(unsigned virq, void *cookie) int rthal_apc_alloc(const char *name, void (*handler) (void *cookie), void *cookie) { -unsigned long flags; -int apc; + unsigned long flags; + int apc; -if (handler == NULL) -return -EINVAL; + if (handler == NULL) + return -EINVAL; -rthal_spin_lock_irqsave(rthal_apc_lock, flags); + rthal_spin_lock_irqsave(rthal_apc_lock, flags); -if (rthal_apc_map != ~0) { -apc = ffz(rthal_apc_map); -__set_bit(apc, rthal_apc_map); -rthal_apc_table[apc].handler = handler; -rthal_apc_table[apc].cookie = cookie; -rthal_apc_table[apc].name = name; -} else -apc = -EBUSY; + if (rthal_apc_map == ~0) { + apc = -EBUSY; + goto out; + } -rthal_spin_unlock_irqrestore(rthal_apc_lock, flags); + apc = ffz(rthal_apc_map); + __set_bit(apc, rthal_apc_map); + rthal_apc_table[apc].handler = handler; + rthal_apc_table[apc].cookie = cookie; + rthal_apc_table[apc].name = name; +out: + rthal_spin_unlock_irqrestore(rthal_apc_lock, flags); -return apc; + return apc; } /** @@ -540,10 +542,6 @@ int rthal_apc_alloc(const char *name, * @param apc The APC id. to release, as returned by a successful call * to the rthal_apc_alloc() service. * - * @return 0 is returned upon success. Otherwise: - * - * - -EINVAL is returned if @a apc is invalid. 
- * * Environments: * * This service can be called from: @@ -551,59 +549,10 @@ int rthal_apc_alloc(const char *name, * - Any domain context. */ -int rthal_apc_free(int apc) +void rthal_apc_free(int apc) { -if (apc 0 || apc = RTHAL_NR_APCS || -!test_and_clear_bit(apc, rthal_apc_map)) -return -EINVAL; - -return 0; -} - -/** - * @fn int rthal_apc_schedule (int apc) - * - * @brief Schedule an APC invocation. - * - * This service marks the APC as pending for the Linux domain, so that - * its handler will be called as soon as possible, when the Linux - * domain gets back in control. - * - * When posted from the Linux domain, the
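The inlined fast path this commit creates boils down to a test-and-set on a per-CPU pending word, posting the virtual IRQ only on the 0→1 transition. Here is a minimal userspace sketch of that idea; the plain, non-atomic `test_and_set_bit()` stands in for the kernel's `__test_and_set_bit()` (safe in the real code because the caller runs with hardware interrupts off), and `virq_posts` stands in for `rthal_schedule_irq_root()`.

```c
#include <assert.h>

static unsigned long apc_pending;   /* one word per CPU in the real code */
static int virq_posts;              /* stand-in for rthal_schedule_irq_root() */

/* Non-atomic stand-in for the kernel's __test_and_set_bit(). */
static int test_and_set_bit(int nr, unsigned long *word)
{
    unsigned long mask = 1UL << nr;
    int was_set = (*word & mask) != 0;
    *word |= mask;
    return was_set;
}

/* The inlined fast path: re-arming an already-pending APC costs only a
 * bit test; the virtual IRQ is raised once per 0->1 transition. */
static void apc_schedule(int apc)
{
    if (!test_and_set_bit(apc, &apc_pending))
        virq_posts++;
}

/* Handler side: drain the pending mask, clearing bits as it goes. */
static int apc_drain(void)
{
    int handled = 0;
    while (apc_pending) {
        int apc = __builtin_ffsl((long)apc_pending) - 1;
        apc_pending &= ~(1UL << apc);
        handled++;
    }
    return handled;
}
```

Scheduling the same APC twice before the handler runs posts the IRQ once, which is exactly why the fast path is cheap enough to inline on hot paths.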
[Xenomai-git] Philippe Gerum : nucleus, posix: use fast APC scheduling call
Module: xenomai-head Branch: master Commit: 6310b2822e0efcbabe50bc46f81316f4726b9f38 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=6310b2822e0efcbabe50bc46f81316f4726b9f38 Author: Philippe Gerum r...@xenomai.org Date: Sat Aug 28 13:04:45 2010 +0200 nucleus, posix: use fast APC scheduling call --- ksrc/nucleus/pipe.c | 16 ksrc/nucleus/registry.c |4 ++-- ksrc/nucleus/select.c |3 +-- ksrc/nucleus/shadow.c |4 ++-- ksrc/skins/posix/apc.c |3 +-- 5 files changed, 14 insertions(+), 16 deletions(-) diff --git a/ksrc/nucleus/pipe.c b/ksrc/nucleus/pipe.c index 303dce9..7dc32a3 100644 --- a/ksrc/nucleus/pipe.c +++ b/ksrc/nucleus/pipe.c @@ -214,9 +214,9 @@ static void xnpipe_wakeup_proc(void *cookie) xnlock_put_irqrestore(nklock, s); } -static inline void xnpipe_schedule_request(void) +static inline void xnpipe_schedule_request(void) /* hw IRQs off */ { - rthal_apc_schedule(xnpipe_wakeup_apc); + __rthal_apc_schedule(xnpipe_wakeup_apc); } static inline ssize_t xnpipe_flush_bufq(void (*fn)(void *buf, void *xstate), @@ -346,11 +346,11 @@ int xnpipe_connect(int minor, struct xnpipe_operations *ops, void *xstate) } } - xnlock_put_irqrestore(nklock, s); - if (need_sched) xnpipe_schedule_request(); + xnlock_put_irqrestore(nklock, s); + return minor; } EXPORT_SYMBOL_GPL(xnpipe_connect); @@ -415,11 +415,11 @@ cleanup: xnpipe_minor_free(minor); } - xnlock_put_irqrestore(nklock, s); - if (need_sched) xnpipe_schedule_request(); + xnlock_put_irqrestore(nklock, s); + return 0; } EXPORT_SYMBOL_GPL(xnpipe_disconnect); @@ -474,11 +474,11 @@ ssize_t xnpipe_send(int minor, struct xnpipe_mh *mh, size_t size, int flags) need_sched = 1; } - xnlock_put_irqrestore(nklock, s); - if (need_sched) xnpipe_schedule_request(); + xnlock_put_irqrestore(nklock, s); + return (ssize_t) size; } EXPORT_SYMBOL_GPL(xnpipe_send); diff --git a/ksrc/nucleus/registry.c b/ksrc/nucleus/registry.c index 9896c37..9a6f66e 100644 --- a/ksrc/nucleus/registry.c +++ b/ksrc/nucleus/registry.c @@ -485,7 +485,7 @@ 
static inline void registry_export_pnode(struct xnobject *object, object-pnode = pnode; removeq(registry_obj_busyq, object-link); appendq(registry_obj_procq, object-link); - rthal_apc_schedule(registry_proc_apc); + __rthal_apc_schedule(registry_proc_apc); } static inline void registry_unexport_pnode(struct xnobject *object) @@ -501,7 +501,7 @@ static inline void registry_unexport_pnode(struct xnobject *object) object-pnode-ops-touch(object); removeq(registry_obj_busyq, object-link); appendq(registry_obj_procq, object-link); - rthal_apc_schedule(registry_proc_apc); + __rthal_apc_schedule(registry_proc_apc); } else { /* * Unexporting before the lower stage has had a chance diff --git a/ksrc/nucleus/select.c b/ksrc/nucleus/select.c index 656671d..6ba59cd 100644 --- a/ksrc/nucleus/select.c +++ b/ksrc/nucleus/select.c @@ -405,9 +405,8 @@ void xnselector_destroy(struct xnselector *selector) inith(selector-destroy_link); xnlock_get_irqsave(nklock, s); appendq(xnselectors, selector-destroy_link); + __rthal_apc_schedule(xnselect_apc); xnlock_put_irqrestore(nklock, s); - - rthal_apc_schedule(xnselect_apc); } EXPORT_SYMBOL_GPL(xnselector_destroy); diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c index 609151e..5f2a6be 100644 --- a/ksrc/nucleus/shadow.c +++ b/ksrc/nucleus/shadow.c @@ -903,9 +903,9 @@ static void schedule_linux_call(int type, struct task_struct *p, int arg) rq-req[reqnum].task = p; rq-req[reqnum].arg = arg; - splexit(s); + __rthal_apc_schedule(lostage_apc); - rthal_apc_schedule(lostage_apc); + splexit(s); } static inline int normalize_priority(int prio) diff --git a/ksrc/skins/posix/apc.c b/ksrc/skins/posix/apc.c index 9092d5d..176687e 100644 --- a/ksrc/skins/posix/apc.c +++ b/ksrc/skins/posix/apc.c @@ -47,9 +47,8 @@ void pse51_schedule_lostage(int request, void *arg, size_t size) rq-req[reqnum].arg = arg; rq-req[reqnum].size = size; rq-in = (reqnum + 1) (PSE51_LO_MAX_REQUESTS - 1); + __rthal_apc_schedule(pse51_lostage_apc); splexit(s); - - 
rthal_apc_schedule(pse51_lostage_apc); } static void pse51_lostage_handle_request(void *cookie) ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
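The `__`-prefixed variant used throughout this commit follows a common kernel convention: the caller guarantees hardware interrupts are already masked (typically because it still holds an irq-masking lock), so the fast path skips a redundant save/restore. A hedged sketch of that contract, with invented names and a simple flag standing in for the CPU interrupt mask:

```c
#include <assert.h>

static int hw_irqs_off;     /* stand-in for the CPU's interrupt mask */
static int apc_pending;

/* Fast inner path: the contract is that interrupts are ALREADY masked,
 * e.g. the caller sits inside an xnlock_get_irqsave() section. */
static void __apc_schedule(void)
{
    assert(hw_irqs_off);    /* document the contract with an assertion */
    apc_pending = 1;
}

/* Outer wrapper for callers running with interrupts enabled. */
static void apc_schedule(void)
{
    int was_off = hw_irqs_off;
    hw_irqs_off = 1;        /* rthal_local_irq_save() */
    __apc_schedule();
    hw_irqs_off = was_off;  /* rthal_local_irq_restore() */
}

/* Caller pattern from the patch: issue the request while the lock (and
 * hence the irq mask) is still held, instead of after dropping it. */
static void locked_section(void)
{
    hw_irqs_off = 1;        /* xnlock_get_irqsave() */
    __apc_schedule();       /* cheap: no redundant irq save/restore */
    hw_irqs_off = 0;        /* xnlock_put_irqrestore() */
}
```

This is why the diffs above move `__rthal_apc_schedule()` calls *inside* the `xnlock_get_irqsave()`/`xnlock_put_irqrestore()` window rather than leaving them after it.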
[Xenomai-git] Philippe Gerum : nucleus/sched: prevent remote wakeup from triggering a debug assertion
Module: xenomai-head Branch: master Commit: 7ea2e7bd261de0f8b7ce41e530e2f1fecda4bf43 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=7ea2e7bd261de0f8b7ce41e530e2f1fecda4bf43 Author: Philippe Gerum r...@xenomai.org Date: Sat Aug 28 17:29:42 2010 +0200 nucleus/sched: prevent remote wakeup from triggering a debug assertion Jan: The task that was scheduled in without XNRESCHED set locally has been woken up by a remote CPU. The waker requeued the task and set the resched condition for itself and in the resched proxy mask for the remote CPU. But there is at least one place in the Xenomai code where we drop the nklock between xnsched_set_resched and xnpod_schedule: do_taskexit_event (I bet there are even more). Now the resched target CPU runs into a timer handler, issues xnpod_schedule unconditionally, and happens to find the woken-up task before it is actually informed via an IPI. Gilles: --- Yes, and whether we set the bit and call xnpod_schedule atomically does not really matter either: the IPI takes time to propagate, and since xnarch_send_ipi does not wait for the IPI to have been received on the remote CPU, there is no guarantee that xnpod_schedule could not have been called in the mean time. More importantly, since in order to do an action on a remote xnsched_t, we need to hold the nklock, is there any point in not setting the XNRESCHED bit on that distant structure, at the same time as when we set the cpu bit on the local sched structure mask and send the IPI? This way, setting the XNRESCHED bit in the IPI handler would no longer be necessary, and we would avoid the race. What this patch does is exactly that, in an attempt to make the remote rescheduling code safer and simpler: - by testing XNSCHED in __xnpod_test_resched() instead of the resched bitmask for the current CPU; this bitmask is now only used to broadcast the IPI to the CPUs pending a reschedule, from the local processor POV. 
- by setting the XNSCHED bit immediately in the remote scheduler's status, which fixes the unwanted assertion. See there for the discussion regarding this issue: https://mail.gna.org/public/xenomai-core/2010-08/msg00084.html --- include/nucleus/sched.h |6 -- ksrc/nucleus/pod.c |6 +- 2 files changed, 5 insertions(+), 7 deletions(-) diff --git a/include/nucleus/sched.h b/include/nucleus/sched.h index c46ba4b..d608bdf 100644 --- a/include/nucleus/sched.h +++ b/include/nucleus/sched.h @@ -177,15 +177,17 @@ static inline int xnsched_self_resched_p(struct xnsched *sched) /* Set self resched flag for the given scheduler. */ #define xnsched_set_self_resched(__sched__) do { \ - xnarch_cpu_set(xnsched_cpu(__sched__), (__sched__)-resched); \ setbits((__sched__)-status, XNRESCHED); \ } while (0) /* Set specific resched flag into the local scheduler mask. */ #define xnsched_set_resched(__sched__) do {\ xnsched_t *current_sched = xnpod_current_sched();\ - xnarch_cpu_set(xnsched_cpu(__sched__), current_sched-resched); \ setbits(current_sched-status, XNRESCHED); \ + if (current_sched != (__sched__)){ \ + xnarch_cpu_set(xnsched_cpu(__sched__), current_sched-resched); \ + setbits((__sched__)-status, XNRESCHED); \ + }\ } while (0) void xnsched_zombie_hooks(struct xnthread *thread); diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c index df6009f..678d667 100644 --- a/ksrc/nucleus/pod.c +++ b/ksrc/nucleus/pod.c @@ -285,7 +285,6 @@ void xnpod_schedule_handler(void) /* Called with hw interrupts off. 
*/ xnshadow_rpi_check(); } #endif /* CONFIG_SMP CONFIG_XENO_OPT_PRIOCPL */ - xnsched_set_self_resched(sched); xnpod_schedule(); } @@ -2159,10 +2158,7 @@ static inline void xnpod_switch_to(xnsched_t *sched, static inline int __xnpod_test_resched(struct xnsched *sched) { - int cpu = xnsched_cpu(sched), resched; - - resched = xnarch_cpu_isset(cpu, sched-resched); - xnarch_cpu_clear(cpu, sched-resched); + int resched = testbits(sched-status, XNRESCHED); #ifdef CONFIG_SMP /* Send resched IPI to remote CPU(s). */ if (unlikely(xnsched_resched_p(sched))) { ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
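The fix reduces to this rule: when requesting a reschedule on a remote CPU, set the remote scheduler's XNRESCHED status bit immediately, under the same lock, instead of letting the IPI handler set it later — so a rescheduling pass that races ahead of the IPI still sees a consistent flag. A toy two-CPU model (no real IPIs; structure and function names are illustrative):

```c
#include <assert.h>

#define XNRESCHED 0x1UL

struct toy_sched {
    int cpu;
    unsigned long status;        /* XNRESCHED lives here */
    unsigned long resched_cpus;  /* CPUs to kick with an IPI */
};

static int ipis_sent;

/* After the patch: mark the remote status eagerly; the cpu mask is only
 * used to decide which CPUs receive the IPI broadcast. */
static void set_resched(struct toy_sched *cur, struct toy_sched *target)
{
    cur->status |= XNRESCHED;
    if (cur != target) {
        cur->resched_cpus |= 1UL << target->cpu;
        target->status |= XNRESCHED;   /* closes the race window */
    }
}

/* __xnpod_test_resched() analogue: test XNRESCHED, not the cpu mask. */
static int test_resched(struct toy_sched *s)
{
    int need = (s->status & XNRESCHED) != 0;
    if (s->resched_cpus) {             /* send the pending IPIs */
        ipis_sent++;
        s->resched_cpus = 0;
    }
    s->status &= ~XNRESCHED;
    return need;
}
```

The key property: the remote scheduler observes its reschedule condition even if it runs `test_resched()` before the IPI arrives, which is the scenario Jan described with the timer handler calling `xnpod_schedule()` early.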
[Xenomai-git] Philippe Gerum : nucleus/shadow: shorten the uninterruptible path to secondary mode
Module: xenomai-head Branch: master Commit: d0a2e0f45a46e25adac820a3d672b23241ed2ba5 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=d0a2e0f45a46e25adac820a3d672b23241ed2ba5 Author: Philippe Gerum r...@xenomai.org Date: Sat Aug 28 16:36:23 2010 +0200 nucleus/shadow: shorten the uninterruptible path to secondary mode Switching a thread from primary to secondary mode entails running a significantly long code path with interrupts off, to hand over the relaxing thread to the Linux scheduler. Investigation on different architectures showed that such code path was involved most of the time in latency peaks, typically when an interrupt arrives at the very beginning of the migration sequence, and remains blocked until the thread is fully switched out. Having RPI enabled may increase the penalty, since pushing the relaxing thread to the local RPI queue is part of this sequence (rpi_push). Tracing reveals that a significant portion of the uninterruptible sequence is actually spent running the rescheduling procedure (xnpod_schedule). However, nothing requires us to suspend /and/ switch out a relaxing thread atomically; actually, this is even inefficient, since this tends to give a high priority to a thread going for less real-time guarantees, over a real-time activity which could be started by a pending interrupt. This patch introduces a special handling of the XNRELAX bit condition in xnpod_suspend_thread(), so that all locks (smp and local interrupts) are dropped right before switching out the current thread, to open a window for interrupt preemption. Additionally, interrupt management is now shared between xnshadow_relax() and xnpod_suspend_thread(), so that basic assumptions can be made on the current interrupt state, to further reduce interrupt masking. Best cases: - no interrupt will be pending, so the relaxed thread will be switched out immediately. 
- an interrupt will be pending for the Xenomai domain, performing time-critical duties such as waking up a real-time thread, in which case the latency to handle a real-time event will have been lower. Worst case: - an interrupt will be pending for Linux, in which case the rescheduling will be postponed until the interrupt pipeline has logged it (but not dispatched, since we will be running over the high priority Xenomai domain). --- include/asm-generic/system.h |1 + ksrc/nucleus/pod.c | 19 +-- ksrc/nucleus/shadow.c| 24 3 files changed, 38 insertions(+), 6 deletions(-) diff --git a/include/asm-generic/system.h b/include/asm-generic/system.h index a2c8fb9..4b5ce95 100644 --- a/include/asm-generic/system.h +++ b/include/asm-generic/system.h @@ -83,6 +83,7 @@ typedef unsigned long spl_t; #else /* !CONFIG_SMP */ #define splexit(x) rthal_local_irq_restore(x) #endif /* !CONFIG_SMP */ +#define splmax()rthal_local_irq_disable() #define splnone() rthal_local_irq_enable() #define spltest() rthal_local_irq_test() #define splget(x) rthal_local_irq_flags(x) diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c index 27d0da8..df6009f 100644 --- a/ksrc/nucleus/pod.c +++ b/ksrc/nucleus/pod.c @@ -1456,12 +1456,27 @@ void xnpod_suspend_thread(xnthread_t *thread, xnflags_t mask, nkpod-schedhook(thread, mask); #endif /* __XENO_SIM__ */ - if (thread == sched-curr) + if (thread == sched-curr) { + /* +* If the current thread is being relaxed, we must +* have been called from xnshadow_relax(), in which +* case we introduce an opportunity for interrupt +* delivery right before switching context, which +* shortens the uninterruptible code path. This +* particular caller expects us to always return with +* interrupts enabled. +*/ + if (mask XNRELAX) { + xnlock_clear_irqon(nklock); + __xnpod_schedule(sched); + return; + } /* * If the thread is runnning on another CPU, -* xnpod_schedule will just trigger the IPI. +* xnpod_schedule will trigger the IPI as needed. 
*/ xnpod_schedule(); + } #ifdef CONFIG_XENO_OPT_PERVASIVE /* * Ok, this one is an interesting corner case, which requires diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c index 5f2a6be..76ea3ae 100644 --- a/ksrc/nucleus/shadow.c +++ b/ksrc/nucleus/shadow.c @@ -1146,7 +1146,6 @@ void xnshadow_relax(int notify, int reason) xnthread_t *thread = xnpod_current_thread(); siginfo_t si; int prio; - spl_t s; XENO_BUGON(NUCLEUS, xnthread_test_state(thread, XNROOT)); @@ -1158,13 +1157,30 @@ void xnshadow_relax(int notify, int reason) trace_mark(xn_nucleus, shadow_gorelax,
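The core idea of this commit — do not hold all locks across both suspending *and* switching out a relaxing thread; drop them first to open a preemption window — can be modeled with a couple of flags. This is a behavioral sketch only (invented names, no real context switching):

```c
#include <assert.h>

#define XNRELAX 0x2UL

static int irqs_masked;
static int preemption_window_opened;

static void lock_clear_irqon(void) { irqs_masked = 0; }

/* Scheduling-pass analogue: record whether an interrupt could have
 * preempted us right before the context switch. */
static void do_schedule(void)
{
    if (!irqs_masked)
        preemption_window_opened = 1;
}

/* xnpod_suspend_thread() analogue for the current thread. */
static void suspend_current(unsigned long mask)
{
    irqs_masked = 1;           /* xnlock_get_irqsave(&nklock, s) */
    /* ... queue manipulation under the lock ... */
    if (mask & XNRELAX) {
        /* Relaxing: drop the lock *before* switching out, so a pending
         * interrupt can run first -- this is the latency win. */
        lock_clear_irqon();
        do_schedule();
        return;                /* returns with interrupts enabled */
    }
    do_schedule();             /* normal path: still under the lock */
    irqs_masked = 0;
}
```

In the real code the XNRELAX branch is only reachable from `xnshadow_relax()`, which is what allows the assumption that returning with interrupts enabled is acceptable to the caller.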
[Xenomai-git] Philippe Gerum : powerpc: upgrade I-pipe support to 2.6.35.4-powerpc-2.11-00
Module: xenomai-head Branch: master Commit: e5eb73853555b07f9eecdf40af971b374dfd2d73 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=e5eb73853555b07f9eecdf40af971b374dfd2d73 Author: Philippe Gerum r...@xenomai.org Date: Mon Aug 30 07:29:50 2010 +0200 powerpc: upgrade I-pipe support to 2.6.35.4-powerpc-2.11-00 --- ... = adeos-ipipe-2.6.35.4-powerpc-2.11-00.patch} | 3097 1 files changed, 1943 insertions(+), 1154 deletions(-) Diff: http://git.xenomai.org/?p=xenomai-head.git;a=commitdiff;h=e5eb73853555b07f9eecdf40af971b374dfd2d73 ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : sem_heap: map an inaccessible heap upon fork.
Module: xenomai-head Branch: master Commit: b2ea4145342b4d89b55f83b4d8f264c295ad2cf6 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=b2ea4145342b4d89b55f83b4d8f264c295ad2cf6 Author: Gilles Chanteperdrix gilles.chanteperd...@xenomai.org Date: Sun Aug 29 15:06:39 2010 +0200 sem_heap: map an inaccessible heap upon fork. This prevents private mutexes from corrupting memory if they are used after fork. --- src/skins/common/sem_heap.c | 20 +--- 1 files changed, 13 insertions(+), 7 deletions(-) diff --git a/src/skins/common/sem_heap.c b/src/skins/common/sem_heap.c index 189272e..2355dd8 100644 --- a/src/skins/common/sem_heap.c +++ b/src/skins/common/sem_heap.c @@ -75,15 +75,21 @@ static void unmap_on_fork(void) Otherwise the global heap would be used instead, which leads to unwanted effects. - We set xeno_sem_heap[PRIVATE] to NULL. On machines with an - MMU, any reference to the private heap prior to - re-binding will cause a segmentation fault. - On machines without an MMU, there is no such thing as fork. - */ - munmap((void *)xeno_sem_heap[PRIVATE], private_hdesc.size); - xeno_sem_heap[PRIVATE] = NULL; + As a protection against access to the heaps by the fastsync + code, we set up an inaccessible mapping where the heap was, so + that access to these addresses will cause a segmentation + fault. + */ +#if defined(CONFIG_XENO_FASTSYNCH) + void *addr = mmap((void *)xeno_sem_heap[PRIVATE], + private_hdesc.size, PROT_NONE, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0); + if (addr != (void *)xeno_sem_heap[PRIVATE]) +#endif /* CONFIG_XENO_FASTSYNCH */ + munmap((void *)xeno_sem_heap[PRIVATE], private_hdesc.size); + xeno_sem_heap[PRIVATE] = 0UL; init_private_heap = PTHREAD_ONCE_INIT; } ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : posix: add a magic to internal structures.
Module: xenomai-head Branch: master Commit: e768b24bd0173acefce3505d3c408766ce6cfa68 URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=e768b24bd0173acefce3505d3c408766ce6cfa68 Author: Gilles Chanteperdrix gilles.chanteperd...@xenomai.org Date: Sun Aug 29 16:17:35 2010 +0200 posix: add a magic to internal structures. These structures may still be referenced by a process' child if they were destroyed by the father after the fork, so, use a magic in order to be able to detect this case. --- ksrc/skins/posix/cond.c | 37 + ksrc/skins/posix/mutex.c | 30 ++- ksrc/skins/posix/mutex.h |4 ++- ksrc/skins/posix/sem.c | 57 ++--- src/skins/posix/mutex.c |6 ++-- 5 files changed, 79 insertions(+), 55 deletions(-) diff --git a/ksrc/skins/posix/cond.c b/ksrc/skins/posix/cond.c index f86e85e..769eb4b 100644 --- a/ksrc/skins/posix/cond.c +++ b/ksrc/skins/posix/cond.c @@ -51,6 +51,7 @@ #include posix/cond.h typedef struct pse51_cond { + unsigned magic; xnsynch_t synchbase; xnholder_t link;/* Link in pse51_condq */ @@ -101,7 +102,7 @@ static void cond_destroy_internal(pse51_cond_t * cond, pse51_kqueues_t *q) * @see * a href=http://www.opengroup.org/onlinepubs/95399/functions/pthread_cond_init.html; * Specification./a - * + * */ int pthread_cond_init(pthread_cond_t * cnd, const pthread_condattr_t * attr) { @@ -142,6 +143,7 @@ int pthread_cond_init(pthread_cond_t * cnd, const pthread_condattr_t * attr) shadow-magic = PSE51_COND_MAGIC; shadow-cond = cond; + cond-magic = PSE51_COND_MAGIC; xnsynch_init(cond-synchbase, synch_flags, NULL); inith(cond-link); cond-attr = *attr; @@ -179,7 +181,7 @@ int pthread_cond_init(pthread_cond_t * cnd, const pthread_condattr_t * attr) * @see * a href=http://www.opengroup.org/onlinepubs/95399/functions/pthread_cond_destroy.html; * Specification./a - * + * */ int pthread_cond_destroy(pthread_cond_t * cnd) { @@ -189,12 +191,13 @@ int pthread_cond_destroy(pthread_cond_t * cnd) xnlock_get_irqsave(nklock, s); - if (!pse51_obj_active(shadow, 
PSE51_COND_MAGIC, struct __shadow_cond)) { + cond = shadow-cond; + if (!pse51_obj_active(shadow, PSE51_COND_MAGIC, struct __shadow_cond) + || !pse51_obj_active(cond, PSE51_COND_MAGIC, struct pse51_cond)) { xnlock_put_irqrestore(nklock, s); return EINVAL; } - cond = shadow-cond; if (cond-owningq != pse51_kqueues(cond-attr.pshared)) { xnlock_put_irqrestore(nklock, s); return EPERM; @@ -206,6 +209,7 @@ int pthread_cond_destroy(pthread_cond_t * cnd) } pse51_mark_deleted(shadow); + pse51_mark_deleted(cond); xnlock_put_irqrestore(nklock, s); @@ -224,10 +228,10 @@ static inline int mutex_save_count(xnthread_t *cur, { pse51_mutex_t *mutex; - if (!pse51_obj_active(shadow, PSE51_MUTEX_MAGIC, struct __shadow_mutex)) -return EINVAL; - mutex = shadow-mutex; + if (!pse51_obj_active(shadow, PSE51_MUTEX_MAGIC, struct __shadow_mutex) + || !pse51_obj_active(mutex, PSE51_MUTEX_MAGIC, struct pse51_mutex)) +return EINVAL; if (xnsynch_owner_check(mutex-synchbase, cur) != 0) return EPERM; @@ -267,6 +271,7 @@ int pse51_cond_timedwait_prologue(xnthread_t *cur, /* If another thread waiting for cond does not use the same mutex */ if (!pse51_obj_active(shadow, PSE51_COND_MAGIC, struct __shadow_cond) + || !pse51_obj_active(cond, PSE51_COND_MAGIC, struct pse51_cond) || (cond-mutex cond-mutex != mutex-mutex)) { err = EINVAL; goto unlock_and_return; @@ -403,7 +408,7 @@ int pse51_cond_timedwait_epilogue(xnthread_t *cur, * @see * a href=http://www.opengroup.org/onlinepubs/95399/functions/pthread_cond_wait.html; * Specification./a - * + * */ int pthread_cond_wait(pthread_cond_t * cnd, pthread_mutex_t * mx) { @@ -470,7 +475,7 @@ int pthread_cond_wait(pthread_cond_t * cnd, pthread_mutex_t * mx) * @see * a href=http://www.opengroup.org/onlinepubs/95399/functions/pthread_cond_timedwait.html; * Specification./a - * + * */ int pthread_cond_timedwait(pthread_cond_t * cnd, pthread_mutex_t * mx, const struct timespec *abstime) @@ -521,7 +526,7 @@ int pthread_cond_timedwait(pthread_cond_t * cnd, * @see * a 
href=http://www.opengroup.org/onlinepubs/95399/functions/pthread_cond_signal.html.; * Specification./a - * + * */ int pthread_cond_signal(pthread_cond_t * cnd) { @@ -531,12 +536,13 @@ int pthread_cond_signal(pthread_cond_t * cnd) xnlock_get_irqsave(nklock, s); - if (!pse51_obj_active(shadow, PSE51_COND_MAGIC, struct __shadow_cond)) { +
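The pattern behind this commit — stamp a magic word into the internal object at init, invalidate it at destroy, and check *both* the user-visible shadow and the internal object before use — looks like this in miniature. Names, struct layouts, and the magic value are invented for illustration; the real code validates with `pse51_obj_active()` and invalidates with `pse51_mark_deleted()`.

```c
#include <assert.h>

#define COND_MAGIC 0x50534531u   /* arbitrary illustrative value */

struct kcond {                   /* internal, kernel-side object */
    unsigned magic;
    int waiters;
};

struct shadow_cond {             /* user-visible handle */
    unsigned magic;
    struct kcond *cond;
};

static int obj_active(unsigned magic, unsigned expected)
{
    return magic == expected;
}

static void cond_init(struct shadow_cond *shadow, struct kcond *cond)
{
    cond->magic = COND_MAGIC;
    cond->waiters = 0;
    shadow->magic = COND_MAGIC;
    shadow->cond = cond;
}

static void cond_destroy(struct shadow_cond *shadow)
{
    shadow->cond->magic = ~COND_MAGIC;  /* pse51_mark_deleted() analogue */
    shadow->magic = ~COND_MAGIC;
}

/* Validation now checks BOTH objects, catching a child process holding
 * a shadow whose backing object the parent destroyed after fork(). */
static int cond_valid(const struct shadow_cond *shadow)
{
    return obj_active(shadow->magic, COND_MAGIC)
        && obj_active(shadow->cond->magic, COND_MAGIC);
}
```

Checking only the shadow, as before the patch, would let a live-looking handle dereference a recycled internal object; the second magic check turns that into a clean EINVAL-style failure.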
[Xenomai-git] Philippe Gerum : nucleus/sched: fix race in non-atomic suspend path
Module: xenomai-head Branch: master Commit: c7ddc41a3f315f447d85c583f9882a2bbc27193c URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=c7ddc41a3f315f447d85c583f9882a2bbc27193c Author: Philippe Gerum r...@xenomai.org Date: Wed Sep 1 18:01:01 2010 +0200 nucleus/sched: fix race in non-atomic suspend path f6af9b831 revealed a nasty race on a legit usage of the scheduling support code, specifically when running the following sequence non-atomically, i.e. nklock-free: xnpod_suspend_thread(current_thread) ... xnpod_schedule() ... Doing so should have been 100% valid. Unfortunately, this used to be unsafe under the hood (see __xnpod_schedule). This patches fixes it, and also goes through testing the XNRESCHED bit to avoid a useless rescheduling from the code path introduced by f6af9b831. --- ksrc/nucleus/pod.c | 11 --- 1 files changed, 8 insertions(+), 3 deletions(-) diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c index 678d667..d250503 100644 --- a/ksrc/nucleus/pod.c +++ b/ksrc/nucleus/pod.c @@ -276,14 +276,17 @@ EXPORT_SYMBOL_GPL(xnpod_fatal_helper); void xnpod_schedule_handler(void) /* Called with hw interrupts off. 
*/ { - xnsched_t *sched = xnpod_current_sched(); + xnsched_t *sched; trace_mark(xn_nucleus, sched_remote, MARK_NOARGS); #if defined(CONFIG_SMP) defined(CONFIG_XENO_OPT_PRIOCPL) + sched = xnpod_current_sched(); if (testbits(sched-status, XNRPICK)) { clrbits(sched-status, XNRPICK); xnshadow_rpi_check(); } +#else + (void)sched; #endif /* CONFIG_SMP CONFIG_XENO_OPT_PRIOCPL */ xnpod_schedule(); } @@ -1467,7 +1470,7 @@ void xnpod_suspend_thread(xnthread_t *thread, xnflags_t mask, */ if (mask XNRELAX) { xnlock_clear_irqon(nklock); - __xnpod_schedule(sched); + xnpod_schedule(); return; } /* @@ -2172,8 +2175,8 @@ static inline int __xnpod_test_resched(struct xnsched *sched) void __xnpod_schedule(struct xnsched *sched) { - struct xnthread *prev, *next, *curr = sched-curr; int zombie, switched, need_resched, shadow; + struct xnthread *prev, *next, *curr; spl_t s; if (xnarch_escalate()) @@ -2183,6 +2186,8 @@ void __xnpod_schedule(struct xnsched *sched) xnlock_get_irqsave(nklock, s); + curr = sched-curr; + xnarch_trace_pid(xnthread_user_task(curr) ? xnarch_user_pid(xnthread_archtcb(curr)) : -1, xnthread_current_priority(curr)); ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Philippe Gerum : nucleus/sched: raise self-resched condition when unlocking scheduler
Module: xenomai-head Branch: master Commit: d1c3156036698c682e73cafed2056712f34b5bcc URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=d1c3156036698c682e73cafed2056712f34b5bcc Author: Philippe Gerum r...@xenomai.org Date: Wed Sep 1 18:37:16 2010 +0200 nucleus/sched: raise self-resched condition when unlocking scheduler This patch turns the xnsched_set_resched() call into xnsched_set_self_resched(), in xnpod_unlock_sched() where we always deal with the local scheduler. --- ksrc/nucleus/pod.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c index d250503..2d0a842 100644 --- a/ksrc/nucleus/pod.c +++ b/ksrc/nucleus/pod.c @@ -2361,7 +2361,7 @@ void xnpod_unlock_sched(void) if (--xnthread_lock_count(curr) == 0) { xnthread_clear_state(curr, XNLOCK); - xnsched_set_resched(curr-sched); + xnsched_set_self_resched(curr-sched); xnpod_schedule(); } ___ Xenomai-git mailing list Xenomai-git@gna.org https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : doc: regenerate
Module: xenomai-head
Branch: master
Commit: a60c65e91fae06e6a6864a4d8ad9ac0d58d981fb
URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=a60c65e91fae06e6a6864a4d8ad9ac0d58d981fb

Author: Gilles Chanteperdrix gilles.chanteperd...@xenomai.org
Date: Thu Sep 2 20:40:15 2010 +0200

doc: regenerate

---
 doc/generated/html/api/bufp-label_8c-example.html  |  261 +
 .../html/api/bufp-readwrite_8c-example.html        |  226
 doc/generated/html/api/functions_0x61.html         |   81 +++
 doc/generated/html/api/functions_0x62.html         |   98
 doc/generated/html/api/functions_0x63.html         |  127 +
 doc/generated/html/api/functions_0x64.html         |  120
 doc/generated/html/api/functions_0x65.html         |   91 +++
 doc/generated/html/api/functions_0x66.html         |   90 +++
 doc/generated/html/api/functions_0x67.html         |   79 +++
 doc/generated/html/api/functions_0x68.html         |   81 +++
 doc/generated/html/api/functions_0x69.html         |  102
 doc/generated/html/api/functions_0x6c.html         |   90 +++
 doc/generated/html/api/functions_0x6d.html         |   94
 doc/generated/html/api/functions_0x6e.html         |   94
 doc/generated/html/api/functions_0x6f.html         |   89 +++
 doc/generated/html/api/functions_0x70.html         |  110
 doc/generated/html/api/functions_0x72.html         |  111
 doc/generated/html/api/functions_0x73.html         |  143 +
 doc/generated/html/api/functions_0x74.html         |  100
 doc/generated/html/api/functions_0x76.html         |   80 +++
 doc/generated/html/api/functions_0x77.html         |   81 +++
 doc/generated/html/api/functions_vars_0x62.html    |   97
 doc/generated/html/api/functions_vars_0x63.html    |  126 +
 doc/generated/html/api/functions_vars_0x64.html    |  119
 doc/generated/html/api/functions_vars_0x65.html    |   90 +++
 doc/generated/html/api/functions_vars_0x66.html    |   89 +++
 doc/generated/html/api/functions_vars_0x67.html    |   78 +++
 doc/generated/html/api/functions_vars_0x68.html    |   80 +++
 doc/generated/html/api/functions_vars_0x69.html    |  101
 doc/generated/html/api/functions_vars_0x6c.html    |   89 +++
 doc/generated/html/api/functions_vars_0x6d.html    |   93
 doc/generated/html/api/functions_vars_0x6e.html    |   93
 doc/generated/html/api/functions_vars_0x6f.html    |   88 +++
 doc/generated/html/api/functions_vars_0x70.html    |  109
 doc/generated/html/api/functions_vars_0x72.html    |  110
 doc/generated/html/api/functions_vars_0x73.html    |  142 +
 doc/generated/html/api/functions_vars_0x74.html    |   99
 doc/generated/html/api/functions_vars_0x76.html    |   79 +++
 doc/generated/html/api/functions_vars_0x77.html    |   80 +++
 doc/generated/html/api/globals_0x62.html           |   79 +++
 doc/generated/html/api/globals_0x67.html           |   79 +++
 doc/generated/html/api/globals_0x69.html           |   85 +++
 doc/generated/html/api/globals_defs_0x62.html      |   75 +++
 doc/generated/html/api/globals_defs_0x69.html      |   75 +++
 doc/generated/html/api/globals_defs_0x78.html      |   87 +++
 doc/generated/html/api/globals_func_0x62.html      |   70 +++
 doc/generated/html/api/globals_func_0x63.html      |   72 +++
 doc/generated/html/api/globals_func_0x67.html      |   74 +++
 doc/generated/html/api/globals_func_0x73.html      |   74 +++
 doc/generated/html/api/globals_vars.html           |   60 ++
 doc/generated/html/api/group__vfile.html           |  580
 doc/generated/html/api/group__vfile.png            |  Bin 0 -> 1021 bytes
 doc/generated/html/api/iddp-label_8c-example.html  |  278 ++
 .../html/api/iddp-sendrecv_8c-example.html         |  234
 .../html/api/structrtipc__port__label.html         |   76 +++
 doc/generated/html/api/structsockaddr__ipc.html    |   80 +++
 .../html/api/structxnvfile__lock__ops.html         |  103
 .../html/api/structxnvfile__regular__iterator.html |  156 ++
 .../html/api/structxnvfile__regular__ops.html      |  205 +++
 .../html/api/structxnvfile__rev__tag.html          |   76 +++
 .../html/api/structxnvfile__snapshot.html          |   61 ++
 .../api/structxnvfile__snapshot__coll__graph.map   |    2 +
 .../api/structxnvfile__snapshot__coll__graph.md5   |    1 +
 .../api/structxnvfile__snapshot__coll__graph.png   |  Bin 0 -> 1843 bytes
 .../api/structxnvfile__snapshot__iterator.html     |  183 ++
 ...uctxnvfile__snapshot__iterator__coll__graph.map |    3 +
 ...uctxnvfile__snapshot__iterator__coll__graph.md5 |    1 +
 ...uctxnvfile__snapshot__iterator__coll__graph.png |  Bin 0 -> 2420 bytes
 .../html/api/structxnvfile__snapshot__ops.html     |  240
 doc/generated/html/api/vfile_8c.html               |   85 +++
 doc/generated/html/api/vfile_8c__incl.map          |   19 +
 doc/generated/html/api/vfile_8c__incl.md5          |    1 +
 doc/generated/html/api/vfile_8c__incl.png
[Xenomai-git] Gilles Chanteperdrix : arm: fix VFP context handling on SMP systems
Module: xenomai-2.5
Branch: master
Commit: 348638e82364649062f60e60abbc448adffdf164
URL: http://git.xenomai.org/?p=xenomai-2.5.git;a=commit;h=348638e82364649062f60e60abbc448adffdf164

Author: Gilles Chanteperdrix gilles.chanteperd...@xenomai.org
Date: Thu Sep 2 22:57:15 2010 +0200

arm: fix VFP context handling on SMP systems

---
 include/asm-arm/bits/pod.h | 24 +++-
 1 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/asm-arm/bits/pod.h b/include/asm-arm/bits/pod.h
index 3dc51f8..71a7330 100644
--- a/include/asm-arm/bits/pod.h
+++ b/include/asm-arm/bits/pod.h
@@ -141,17 +141,31 @@ static inline void xnarch_enable_fpu(xnarchtcb_t *tcb)
 	   newly switched thread uses the FPU, to allow the kernel
 	   handler to pick the correct FPU context. */
-	if (likely(!tcb->is_root)
-	    || (tcb->fpup && tcb->fpup == rthal_task_fpenv(tcb->user_task))) {
+	if (likely(!tcb->is_root)) {
+		rthal_enable_fpu();
+		/* No exception should be pending, since it should have caused
+		   a trap earlier.
+		*/
+	} else if (tcb->fpup && tcb->fpup == rthal_task_fpenv(tcb->user_task)) {
 		unsigned fpexc = rthal_enable_fpu();
+#ifndef CONFIG_SMP
 		if (likely(!(fpexc & RTHAL_VFP_ANY_EXC)
 			   && !(rthal_vfp_fmrx(FPSCR) & FPSCR_IXE)))
 			return;
-
-		/* If current process has pending exceptions it is
+		/*
+		   If current process has pending exceptions it is
 		   illegal to restore the FPEXC register with them, we must
 		   save the fpu state and disable them, to get linux
-		   fpu fault handler take care of them correctly. */
+		   fpu fault handler take care of them correctly.
+		*/
+#endif
+		/*
+		   On SMP systems, if we are restoring the root
+		   thread, running the task holding the FPU context at
+		   the time when we switched to real-time domain,
+		   forcibly save the FPU context. It seems to fix SMP
+		   systems for still unknown reasons.
+		*/
 		rthal_save_fpu(tcb->fpup, fpexc);
 		last_VFP_context[rthal_processor_id()] = NULL;
 		rthal_disable_fpu();

___
Xenomai-git mailing list
Xenomai-git@gna.org
https://mail.gna.org/listinfo/xenomai-git
[Xenomai-git] Gilles Chanteperdrix : arm: fix VFP context handling on SMP systems
Module: xenomai-head
Branch: master
Commit: d096040c5b2552c21bd19e3e8b7e70601d353889
URL: http://git.xenomai.org/?p=xenomai-head.git;a=commit;h=d096040c5b2552c21bd19e3e8b7e70601d353889

Author: Gilles Chanteperdrix gilles.chanteperd...@xenomai.org
Date: Thu Sep 2 22:57:15 2010 +0200

arm: fix VFP context handling on SMP systems

---
 include/asm-arm/bits/pod.h | 24 +++-
 1 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/asm-arm/bits/pod.h b/include/asm-arm/bits/pod.h
index 3dc51f8..71a7330 100644
--- a/include/asm-arm/bits/pod.h
+++ b/include/asm-arm/bits/pod.h
@@ -141,17 +141,31 @@ static inline void xnarch_enable_fpu(xnarchtcb_t *tcb)
 	   newly switched thread uses the FPU, to allow the kernel
 	   handler to pick the correct FPU context. */
-	if (likely(!tcb->is_root)
-	    || (tcb->fpup && tcb->fpup == rthal_task_fpenv(tcb->user_task))) {
+	if (likely(!tcb->is_root)) {
+		rthal_enable_fpu();
+		/* No exception should be pending, since it should have caused
+		   a trap earlier.
+		*/
+	} else if (tcb->fpup && tcb->fpup == rthal_task_fpenv(tcb->user_task)) {
 		unsigned fpexc = rthal_enable_fpu();
+#ifndef CONFIG_SMP
 		if (likely(!(fpexc & RTHAL_VFP_ANY_EXC)
 			   && !(rthal_vfp_fmrx(FPSCR) & FPSCR_IXE)))
 			return;
-
-		/* If current process has pending exceptions it is
+		/*
+		   If current process has pending exceptions it is
 		   illegal to restore the FPEXC register with them, we must
 		   save the fpu state and disable them, to get linux
-		   fpu fault handler take care of them correctly. */
+		   fpu fault handler take care of them correctly.
+		*/
+#endif
+		/*
+		   On SMP systems, if we are restoring the root
+		   thread, running the task holding the FPU context at
+		   the time when we switched to real-time domain,
+		   forcibly save the FPU context. It seems to fix SMP
+		   systems for still unknown reasons.
+		*/
 		rthal_save_fpu(tcb->fpup, fpexc);
 		last_VFP_context[rthal_processor_id()] = NULL;
 		rthal_disable_fpu();