[Xenomai-git] Jan Kiszka : nucleus: Move XNINLOCK to xnsched::lflags

2013-01-19 Thread GIT version control
Module: xenomai-2.6
Branch: master
Commit: ef6ff0ecfea2c417b006f552b2c2b967dd7efb7f
URL:
http://git.xenomai.org/?p=xenomai-2.6.git;a=commit;h=ef6ff0ecfea2c417b006f552b2c2b967dd7efb7f

Author: Jan Kiszka jan.kis...@siemens.com
Date:   Fri Jan 18 20:08:51 2013 +0100

nucleus: Move XNINLOCK to xnsched::lflags

Via RTDM spin locks, XNINLOCK is set/cleared outside of the nklock
protection. Thus it has to be carried by lflags, not status, which could
concurrently be manipulated by a different CPU.

Signed-off-by: Jan Kiszka jan.kis...@siemens.com

---

 include/nucleus/sched.h |    2 +-
 ksrc/nucleus/pod.c      |   10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/nucleus/sched.h b/include/nucleus/sched.h
index b9d0f01..8db23cc 100644
--- a/include/nucleus/sched.h
+++ b/include/nucleus/sched.h
@@ -42,12 +42,12 @@
 #define XNINTCK    0x4000  /* In master tick handler context */
 #define XNINSW     0x2000  /* In context switch */
 #define XNRESCHED  0x1000  /* Needs rescheduling */
-#define XNINLOCK   0x0800  /* Scheduler locked */
 
 /* Sched local flags */
 #define XNHTICK    0x8000  /* Host tick pending  */
 #define XNINIRQ    0x4000  /* In IRQ handling context */
 #define XNHDEFER   0x2000  /* Host tick deferred */
+#define XNINLOCK   0x1000  /* Scheduler locked */
 
 /* Sched RPI status flags */
 #define XNRPICK    0x8000  /* Check RPI state */
diff --git a/ksrc/nucleus/pod.c b/ksrc/nucleus/pod.c
index cf6c9de..a5afaa5 100644
--- a/ksrc/nucleus/pod.c
+++ b/ksrc/nucleus/pod.c
@@ -1200,7 +1200,7 @@ void xnpod_delete_thread(xnthread_t *thread)
 	 * thread zombie state to go through the rescheduling
 	 * procedure then actually destroy the thread object.
 	 */
-	__clrbits(sched->status, XNINLOCK);
+	__clrbits(sched->lflags, XNINLOCK);
 	xnsched_set_resched(sched);
 	xnpod_schedule();
 #ifdef CONFIG_XENO_HW_UNLOCKED_SWITCH
@@ -1453,7 +1453,7 @@ void xnpod_suspend_thread(xnthread_t *thread, xnflags_t mask,
 #endif /* __XENO_SIM__ */
 
 	if (thread == sched->curr) {
-		__clrbits(sched->status, XNINLOCK);
+		__clrbits(sched->lflags, XNINLOCK);
 		/*
 		 * If the current thread is being relaxed, we must
 		 * have been called from xnshadow_relax(), in which
@@ -2312,7 +2312,7 @@ reschedule:
 		goto reschedule;
 
 	if (xnthread_lock_count(curr))
-		__setbits(sched->status, XNINLOCK);
+		__setbits(sched->lflags, XNINLOCK);
 
 	xnlock_put_irqrestore(nklock, s);
 
@@ -2345,7 +2345,7 @@ void ___xnpod_lock_sched(xnsched_t *sched)
 	struct xnthread *curr = sched->curr;
 
 	if (xnthread_lock_count(curr)++ == 0) {
-		__setbits(sched->status, XNINLOCK);
+		__setbits(sched->lflags, XNINLOCK);
 		xnthread_set_state(curr, XNLOCK);
 	}
 }
@@ -2360,7 +2360,7 @@ void ___xnpod_unlock_sched(xnsched_t *sched)
 
 	if (--xnthread_lock_count(curr) == 0) {
 		xnthread_clear_state(curr, XNLOCK);
-		__clrbits(sched->status, XNINLOCK);
+		__clrbits(sched->lflags, XNINLOCK);
 		xnpod_schedule();
 	}
 }
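
For illustration (this is not part of the patch), a self-contained userspace sketch of the race the commit message describes; the types, helpers, and mask values below are simplified stand-ins mirroring the diff, not the real nucleus API:

/* race_sketch.c -- illustrative only; simplified from the nucleus types. */
#include <stdio.h>

#define FAKE_XNRESCHED 0x1000  /* lives in the shared status word   */
#define FAKE_XNINLOCK  0x0800  /* moved to the CPU-local lflags word */

struct fake_sched {
	unsigned long status;  /* may be written by any CPU, under nklock */
	unsigned long lflags;  /* only ever written by the owning CPU     */
};

/* Non-atomic read-modify-write, like the __setbits() helper in the diff. */
static void fake_setbits(unsigned long *flags, unsigned long mask)
{
	unsigned long v = *flags;  /* load   */
	v |= mask;                 /* modify */
	*flags = v;                /* store  */
}

int main(void)
{
	struct fake_sched sched = { 0, 0 };

	/*
	 * If two CPUs interleave the load/modify/store above on the same
	 * word, one store silently drops the other's bit.  RTDM spin locks
	 * flip XNINLOCK without holding nklock, so the flag must live in
	 * lflags, which only the local CPU writes -- no atomicity needed.
	 */
	fake_setbits(&sched.status, FAKE_XNRESCHED); /* remote CPU, nklock held */
	fake_setbits(&sched.lflags, FAKE_XNINLOCK);  /* local CPU, lock-free    */

	printf("status=%#lx lflags=%#lx\n", sched.status, sched.lflags);
	return 0;
}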




[Xenomai-git] Jan Kiszka : nucleus: Fix migration race in schedule_linux_call

2013-01-19 Thread GIT version control
Module: xenomai-2.6
Branch: master
Commit: c828958a8b62d35fe317942d5c442f31dc3e1eae
URL:
http://git.xenomai.org/?p=xenomai-2.6.git;a=commit;h=c828958a8b62d35fe317942d5c442f31dc3e1eae

Author: Jan Kiszka jan.kis...@siemens.com
Date:   Fri Jan 18 17:44:01 2013 +0100

nucleus: Fix migration race in schedule_linux_call

schedule_linux_call may also be invoked over preemptible, thus
migratable Linux contexts. Therefore we must not read the CPU number
outside the splhigh/splexit section.

Signed-off-by: Jan Kiszka jan.kis...@siemens.com

---

 ksrc/nucleus/shadow.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/ksrc/nucleus/shadow.c b/ksrc/nucleus/shadow.c
index 260fdef..c91a6f3 100644
--- a/ksrc/nucleus/shadow.c
+++ b/ksrc/nucleus/shadow.c
@@ -820,7 +820,7 @@ static void lostage_handler(void *cookie)
 
 static void schedule_linux_call(int type, struct task_struct *p, int arg)
 {
-   int cpu = rthal_processor_id(), reqnum;
+   int cpu, reqnum;
struct __lostagerq *rq;
spl_t s;
 
@@ -832,6 +832,7 @@ static void schedule_linux_call(int type, struct task_struct *p, int arg)
 
 	splhigh(s);
 
+	cpu = rthal_processor_id();
 	rq = &lostagerq[cpu];
 	reqnum = rq->in;
 	rq->in = (reqnum + 1) & (LO_MAX_REQUESTS - 1);
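
As an aside, here is a self-contained userspace sketch of the ordering the fix enforces; splhigh/splexit and rthal_processor_id are stubbed stand-ins for the real i-pipe primitives, and the ring layout is simplified:

/* migration_sketch.c -- illustrative only; stubbed primitives. */
#include <stdio.h>

#define LO_MAX_REQUESTS 64  /* power of two, as in shadow.c */
#define FAKE_NR_CPUS 4

typedef unsigned long spl_t;
static struct { int in; } lostagerq[FAKE_NR_CPUS];

static void splhigh(spl_t *s) { *s = 0; }         /* would mask interrupts */
static void splexit(spl_t s)  { (void)s; }        /* would restore them    */
static int rthal_processor_id(void) { return 0; } /* would read the CPU id */

static void schedule_linux_call_sketch(void)
{
	int cpu, reqnum;
	spl_t s;

	splhigh(&s);
	/*
	 * The fix in one line: sample the CPU id only after splhigh().
	 * Before the patch it was read at declaration time, so a
	 * preemptible caller could migrate between CPUs and then queue
	 * its request into another CPU's ring.
	 */
	cpu = rthal_processor_id();
	reqnum = lostagerq[cpu].in;
	lostagerq[cpu].in = (reqnum + 1) & (LO_MAX_REQUESTS - 1);
	splexit(s);
}

int main(void)
{
	schedule_linux_call_sketch();
	printf("cpu0 ring head: %d\n", lostagerq[0].in);
	return 0;
}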




Re: [Xenomai] GPIO Interrupts problem with RTDM

2013-01-19 Thread Gilles Chanteperdrix
On 01/18/2013 11:11 PM, Paul wrote:

 On Friday 18 January 2013, Pierre LE COZ wrote:
 I have simple program to manage the GPIO of my raspberry Pi using
 RTDM.

 I use :
 rtdm_irq_request(&irq_exemple, num_irq, exemple_handler,
 RTDM_IRQTYPE_SHARED | RTDM_IRQTYPE_EDGE, "MyProgram", NULL);
 in order to detect interrupts.
 
 rtdm_irq_request does not explicitly enable the interrupts on a gpio pin 
 with ian_cim's patch


This may be a generic issue, which I thought was fixed, but may not be.
Could you try the following patch?

http://git.xenomai.org/?p=ipipe-gch.git;a=commit;h=c14c79d29fed82267560c7bf26d628ef4d39f5b7

-- 
Gilles.



Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Gilles Chanteperdrix
On 01/17/2013 02:30 PM, Bas Laarhoven wrote:

 On 17-1-2013 9:53, Gilles Chanteperdrix wrote:
 On 01/17/2013 08:59 AM, Bas Laarhoven wrote:

 On 16-1-2013 20:36, Michael Haberler wrote:
 Am 16.01.2013 um 17:45 schrieb Bas Laarhoven:

 On 16-1-2013 15:15, Michael Haberler wrote:
 ARM work:

 Several people have been able to get the Beaglebone ubuntu/xenomai setup 
 working as outlined here: 
 http://wiki.linuxcnc.org/cgi-bin/wiki.pl?BeagleboneDevsetup
 I have updated the kernel and rootfs image a few days ago so the kernel 
 includes ext2/3/4 support compiled in, which should take care of two 
 failure reports I got.

 Again that xenomai kernel is based on 3.2.21; it works very stably for 
 me but there have been several reports of 'sudden stops'. The BB is a 
 bit sensitive to power fluctuations but it might be more than that. As 
 for that kernel, it works, but it is based on a branch which will see no 
 further development. It supports most of the stuff needed for 
 development; there might be some patches coming from more active BB 
 users than me.
 Hi Michael,

 Are you saying you haven't seen these 'sudden stops' yourself?
 No, never, after swapping to stronger power supplies; I have two of these 
 boards running over NFS all the time. I don't have Linuxcnc running on them 
 though, I'll do that and see if that changes the picture. Maybe keeping 
 the torture test running helps trigger it.
 Beginner's error! :-P The power supply is indeed critical, but the
 stepdown converter on my BeBoPr is dimensioned for at least 2A and
 hasn't failed me yet.

 I think that running linuxcnc is mandatory for the lockup. After a dozen
 runs, it looks like I can reproduce the lockup with 100% certainty
 within one hour.
 Using the JTAG interface to attach a debugger to the Bone, I've found
 that once stalled the kernel is still running. It looks like it won't
 schedule properly and almost all time is spent in the cpu_idle thread.

 This is typical of a tsc emulation or timer issue. On a system without
 anything running, please let the tsc -w command run. It will take some
 time to run (the wrap time of the hardware timer used for tsc
 emulation). If it runs correctly, then you need to check whether the
 timer is still running when the bug happens (cat /proc/xenomai/irq
 should continue increasing when, for instance, the latency test is
 running). If the timer is stopped, it may have been programmed for a too
 short delay. To avoid that, you can try:
 - increasing the ipipe_timer min_delay_ticks member (by default, it uses
 a value corresponding to the min_delta_ns member in the clockevent
 structure);
 - checking, after programming the timer (in the set_next_event method),
 whether the timer counter is already 0, in which case you can return a
 negative value, usually -ETIME.
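
[Editor's illustration of that last suggestion: a minimal sketch with a fake down-counter standing in for the SoC's timer register; the real check would live in the board's clockevent driver, and the accessor names are made up.]

/* set_next_event_sketch.c -- illustrative only. */
#include <errno.h>
#include <stdio.h>

static unsigned int fake_counter;  /* stands in for the hardware register */

static void timer_write_load(unsigned int ticks) { fake_counter = ticks; }
static unsigned int timer_read_counter(void)     { return fake_counter; }

/*
 * Shape of a set_next_event hook following the quoted advice: after
 * programming the one-shot timer, detect a counter that already hit 0
 * and return -ETIME so the caller reprograms instead of losing the tick.
 */
static int set_next_event_sketch(unsigned long delta)
{
	timer_write_load((unsigned int)delta);
	if (timer_read_counter() == 0)
		return -ETIME;
	return 0;
}

int main(void)
{
	printf("delta=100 -> %d\n", set_next_event_sketch(100)); /* 0      */
	printf("delta=0   -> %d\n", set_next_event_sketch(0));   /* -ETIME */
	return 0;
}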

 
 Hi Gilles,
 
 Thanks for the swift reply.
 
 As far as I can see, tsc -w runs without an error:
 
 ARM: counter wrap time: 179 seconds
 Checking tsc for 6 minute(s)
 min: 5, max: 12, avg: 5.04168
 ...
 min: 5, max: 6, avg: 5.03771
 min: 5, max: 28, avg: 5.03989 -> 0.209995 us
 
 real6m0.284s
 
 I've also done the other regression tests and all were successful.
 
 Problem is that once the bug happens I won't be able to issue the cat 
 command.
 I've fixed my debug setup so I don't have to use the System.map to 
 manually translate the debugger addresses :/
 Now I'm waiting for another lockup to see what's happening.


You may want to have a look at the xeno-regression-test script to put
your system under pressure (and likely generate the lockup faster).

-- 
Gilles.



Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Michael Haberler

Am 19.01.2013 um 14:29 schrieb Gilles Chanteperdrix:

 [...]
 You may want to have a look at the xeno-regression-test script to put
 your system under pressure (and likely generate the lockup faster).

Running tsc -w and xeno-regression-test in parallel, I get errors like so (not 
on every run; no lockup so far):

++ /usr/xenomai/bin/mutex-torture-native
simple_wait
recursive_wait
timed_mutex
mode_switch
pi_wait
lock_stealing
NOTE: lock_stealing mutex_trylock: not supported
deny_stealing
simple_condwait
recursive_condwait
auto_switchback
FAILURE: current prio (0) != expected prio (2)

dmesg 
[501963.390598] Xenomai: native: cleaning up mutex  (ret=0).
[502170.164984] usb 1-1: reset high-speed USB device number 2 using musb-hdrc

on another run, I got a segfault while running sigdebug:
++ /usr/xenomai/bin/regression/native/sigdebug
mayday page starting at 0x400eb000 [/dev/rtheap]
mayday code: 0c 00 9f e5 0c 70 9f e5 00 00 00 ef 00 00 a0 e3 00 00 80 e5 2b 02 
00 0a 42 00 0f 00 db d7 ee b8
mlockall
syscall
signal
relaxed mutex owner
page fault
watchdog
./xeno-regression-test: line 53:  4210 Segmentation fault  
/usr/xenomai/bin/regression/native/sigdebug

root@bb1:/usr/xenomai/bin# dmesg 
[502442.312996] Xenomai: watchdog triggered -- signaling runaway thread 
'rt_task'
[502443.054186] Xenomai: native: cleaning up mutex prio_invert (ret=0).
[502443.055730] Xenomai: native: cleaning up sem send_signal (ret=0).
[502518.134977] usb 1-1: reset high-speed USB device number 2 using musb-hdrc


unsure what to make of it - any 

Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 03:09 PM, Michael Haberler wrote:

 
 [...]

Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Michael Haberler

Am 19.01.2013 um 15:10 schrieb Gilles Chanteperdrix:

 [...]

Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 03:14 PM, Michael Haberler wrote:

 that was xenomai 2.6.1 as per release tag in the git repo; the rest as 
 outlined here: 
 http://www.xenomai.org/pipermail/xenomai/2013-January/027164.html


Please upgrade to xenomai master. You are hitting bugs which have already
been fixed since 2.6.1.

 [502738.607343] switchtest: page allocation failure: order:4, mode:0xd0


That is an allocation failure. I am afraid you can run
xeno-regression-test only once after the system boot (it is supposed to
run for several hours anyway).


-- 
Gilles.



Re: [Xenomai] GPIO Interrupts problem with RTDM

2013-01-19 Thread Pierre LE COZ
 Add some debug messages in the areas you request resources and then use
 dmesg - The most likely source of error is either attempting to use a
 resource that has been claimed or not releasing when you have finished
 with it.


Here's my entire init:

static int __init exemple_init(void)
{
	int err;

	printk(KERN_INFO "Requesting GPIO %d\n", GPIO_IN);
	if ((err = gpio_request_one(GPIO_IN, GPIOF_DIR_IN,
				    THIS_MODULE->name)) != 0) {
		printk(KERN_INFO "error %d: could not request gpio: %d\n",
		       err, GPIO_IN);
		return err;
	}

	printk(KERN_INFO "Setting the irq type to trigger rising\n");
	irq_set_irq_type(BUTTON_IRQ, IRQF_TRIGGER_RISING);

	printk(KERN_INFO "Requesting irq %d\n", BUTTON_IRQ);
	if ((err = rtdm_irq_request(&irq_exemple, BUTTON_IRQ,
				    exemple_handler, 0, THIS_MODULE->name,
				    NULL)) != 0) {
		printk(KERN_INFO "error %d: could not request irq: %d\n",
		       err, BUTTON_IRQ);
		gpio_free(GPIO_IN);
		return err;
	}

	printk(KERN_INFO "Enabling irq %d\n", BUTTON_IRQ);

	/* enable_irq(BUTTON_IRQ); */
	rtdm_irq_enable(&irq_exemple);
	return 0;
}
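
[Editor's note: for completeness, a sketch of the matching cleanup such a module would need, assuming the same irq_exemple handle, BUTTON_IRQ and GPIO_IN as above; rtdm_irq_free drops the handler registered by rtdm_irq_request.]

static void __exit exemple_exit(void)
{
	/* Release in reverse order of acquisition. */
	rtdm_irq_disable(&irq_exemple);
	rtdm_irq_free(&irq_exemple);
	gpio_free(GPIO_IN);
}

module_init(exemple_init);
module_exit(exemple_exit);
MODULE_LICENSE("GPL");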

The module loads and runs fine, but still no interrupt is detected:
# insmod rpi-rtdm.ko

Requesting GPIO 23
Setting the irq type to trigger rising
Requesting irq 108
Enabling irq 108
/MyDEV/06_GPIO_rtdm_livre # cat /proc/xenomai/irq
IRQ         CPU0
  3:        3491    [timer]
108:           0    rpi_rtdm
259:           0    [virtual]
/MyDEV/06_GPIO_rtdm_livre #

When replacing rtdm_irq_enable(&irq_exemple); with
enable_irq(BUTTON_IRQ); to try to enable the irq, the module fails:
# insmod rpi-rtdm.ko
Requesting GPIO 23
Setting the irq type to trigger rising
Requesting irq 108
Enabling irq 108
[ cut here ]
WARNING: at kernel/irq/manage.c:421 enable_irq+0x50/0x6c()
Unbalanced enable for IRQ 108
Modules linked in: rpi_rtdm(O+)
[<c00136bc>] (unwind_backtrace+0x0/0xe4) from [<c0021c6c>]
(warn_slowpath_common+0x4c/0x64)
[<c0021c6c>] (warn_slowpath_common+0x4c/0x64) from [<c0021d04>]
(warn_slowpath_fmt+0x2c/0x3c)
[<c0021d04>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0060c50>]
(enable_irq+0x50/0x6c)
[<c0060c50>] (enable_irq+0x50/0x6c) from [<bf0020b4>]
(exemple_init+0xb4/0xe0 [rpi_rtdm])
[<bf0020b4>] (exemple_init+0xb4/0xe0 [rpi_rtdm]) from [<c00086a8>]
(do_one_initcall+0x9c/0x17c)
[<c00086a8>] (do_one_initcall+0x9c/0x17c) from [<c0051a5c>]
(sys_init_module+0x1658/0x1830)
[<c0051a5c>] (sys_init_module+0x1658/0x1830) from [<c000dbe0>]
(ret_fast_syscall+0x0/0x30)
---[ end trace d3f5d198ddeaf1cf ]---


About the fact that interrupts are not detected:

This may be a generic issue, which I thought was fixed, but may not be.
 Could you try the following patch?


http://git.xenomai.org/?p=ipipe-gch.git;a=commit;h=c14c79d29fed82267560c7bf26d628ef4d39f5b7


Thank you Gilles for the patch. I tried it, but I could not rebuild my
kernel:

  CC  arch/arm/kernel/ipipe.o
arch/arm/kernel/ipipe.c: In function ‘ipipe_get_sysinfo’:
arch/arm/kernel/ipipe.c:226:23: error: ‘__ipipe_hrclock_freq’ undeclared
(first use in this function)
arch/arm/kernel/ipipe.c:226:23: note: each undeclared identifier is
reported only once for each function it appears in
arch/arm/kernel/ipipe.c: In function ‘__switch_mm_inner’:
arch/arm/kernel/ipipe.c:450:3: error: ‘active_mm’ undeclared (first use in
this function)
arch/arm/kernel/ipipe.c:456:3: error: implicit declaration of function
‘__do_switch_mm’
arch/arm/kernel/ipipe.c: In function ‘deferred_switch_mm’:
arch/arm/kernel/ipipe.c:486:3: error: ‘active_mm’ undeclared (first use in
this function)
arch/arm/kernel/ipipe.c:492:3: error: implicit declaration of function
‘__deferred_switch_mm’
arch/arm/kernel/ipipe.c: At top level:
arch/arm/kernel/ipipe.c:564:1: error: ‘cpu_set_reserved_ttbr0’ undeclared
here (not in a function)
arch/arm/kernel/ipipe.c:564:1: warning: type defaults to ‘int’ in
declaration of ‘cpu_set_reserved_ttbr0’
make[1]: *** [arch/arm/kernel/ipipe.o] Error 1
make: *** [arch/arm/kernel] Error 2


Re: [Xenomai] GPIO Interrupts problem with RTDM

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 04:46 PM, Pierre LE COZ wrote:

 [...]
 Thank you Gilles for the patch. I tried it, but I could not rebuild my
 kernel:


I do not know what you took. What you were supposed to apply is this diff:

http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=c14c79d29fed82267560c7bf26d628ef4d39f5b7;hp=3b3b1d3969106b561ec5fee6c0006eff2f1bc1bb

Which should not cause any such issue.

-- 
Gilles.



Re: [Xenomai] Xenomai Documentation for ARM Integration

2013-01-19 Thread Gregory Perry

From: Gilles Chanteperdrix [gilles.chanteperd...@xenomai.org]
Sent: Saturday, January 19, 2013 7:31 AM
To: Gregory Perry
Cc: xenomai@xenomai.org
Subject: Re: [Xenomai] Xenomai Documentation for ARM Integration
[...]
This process is already documented in the README.INSTALL guide...
The part we do not document is the configuration and compilation of
the Linux kernel, someone else volunteered to provide more documentation
on this subject, you may want to coordinate your efforts with him.

It seems that this is the step where most of the confusion arises; OpenEmbedded 
with Bitbake recipes is the recommended method for compiling a kernel on the 
BeagleBone, but the way that OE is set up does not facilitate easy patching of 
the kernel, or even a reasonable way to fetch a specific kernel revision that is 
compatible with Xenomai.  Subsequent invocations of Bitbake will then destroy 
changes that have been made to a custom kernel; there has to be a better way to 
maintain a kernel with Xenomai than this.  I am thinking of a non-OE kernel + 
Xenomai integrated git repo, with Buildroot for creating a minimal target kernel 
and filesystem with userland Xenomai support.  Unless there is a better way 
than this?

Regards

Gregory Perry



Re: [Xenomai] Xenomai Documentation for ARM Integration

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 07:12 PM, Gregory Perry wrote:

 From: Gilles Chanteperdrix [gilles.chanteperd...@xenomai.org]
 Sent: Saturday, January 19, 2013 7:31 AM
 To: Gregory Perry
 Cc: xenomai@xenomai.org
 Subject: Re: [Xenomai] Xenomai Documentation for ARM Integration
 [...]
 This process is already documented in the README.INSTALL guide... 
 The part we do not document is the configuration and compilation
 of the Linux kernel, someone else volunteered to provide more
 documentation on this subject, you may want to coordinate your
 efforts with him.
 
 [...] I am thinking of a non-OE kernel + Xenomai integrated git repo,
 with Buildroot for creating a minimal target kernel and filesystem with
 userland Xenomai support.  Unless there is a better way than this?


As I have already said several times, putting Xenomai and the I-pipe
kernel in the same repository is a bad idea. It makes it hard to upgrade
either of the two without the other, and following the Xenomai stable
branch is the recommended way of working. The build rules should be made
such that xenomai and the kernel live in two separate dirs, with
prepare-kernel.sh run before compiling the kernel. The custom rootfs
build system we use for testing xenomai works this way, I have worked
this way with snapgear, buildroot works this way[1], and I bet any
rootfs build system can be made to work this way.

[1] http://git.buildroot.net/buildroot/tree/linux/linux-ext-xenomai.mk
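
[Editor's sketch of that two-directory workflow; the repository URL, kernel version and I-pipe patch name are illustrative, not prescriptive:]

# Two separate trees: Xenomai is grafted into the kernel, never merged with it.
git clone git://git.xenomai.org/xenomai-2.6.git
tar xjf linux-3.5.7.tar.bz2    # a vanilla kernel tree matching the I-pipe patch

# Graft the Xenomai kernel bits into the kernel tree, then build as usual.
xenomai-2.6/scripts/prepare-kernel.sh --linux=linux-3.5.7 \
    --adeos=ipipe-core-3.5.7-arm-1.patch --arch=arm

cd linux-3.5.7 && make menuconfig && make    # userland is built separately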



-- 
Gilles.



Re: [Xenomai] Xenomai Documentation for ARM Integration

2013-01-19 Thread Willy Lambert
2013/1/19 Gilles Chanteperdrix gilles.chanteperd...@xenomai.org:
 On 01/18/2013 10:26 PM, Gregory Perry wrote:

 Hello,

 I have been following the Xenomai project for a few months now and it
 seems there is a lot of confusion about the optimal path to
 integrating Xenomai with the appropriate I-pipe patches and other
 legwork required to get it running and stable on ARM platforms such
 as the BeagleBone and Raspberry Pi.


 The simplest solution is to integrate the support for these processors
 to the mainline kernel so that the I-pipe support can be merged in the
 I-pipe patch itself. That is what I tried to tell here:
 http://xenomai.org/index.php/I-pipe-core:ArmPorting#Publishing_your_modifications

 Failing that, I recommended, several times, to provide pre and post
 patches allowing to apply the support for the additional processor in
 addition to the I-pipe patch, explained how to generate these
 patches, and said that we would be glad to distribute these patches
 as part of the xenomai tarball.


 I would be willing to document at length the entire process required
 for building a kernel successfully with all userland and kernel
 support required, but I want to make sure that it is a workflow
 advocated by the project maintainers in terms of the build
 environment, patching process, and performance testing objectives of
 the documentation.


 This process is already documented in the README.INSTALL guide...
 The part we do not document is the configuration and compilation of
 the Linux kernel, someone else volunteered to provide more documentation
 on this subject, you may want to coordinate your efforts with him.


FYI it's me; I'm looking for time to do it ^^


 Is there any interest in this and would someone from the Xenomai
 project be willing to provide answers to questions if I were to write
 up the documentation?  In the long run it would probably cut down on
 most if not all of the redundant how do I get Xenomai running on the
 BeagleBone threads that pop up from time to time.


 From my point of view, these threads exist because the work on BeagleBone
 is still separated.


 --
 Gilles.



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 09:17 PM, John Morris wrote:

 Hi list,
 
 These are the final tests on 32-bit before the initial RedHat packages
 can be released for wider testing.
 
 On this host, a Dell Celeron with ICH5 chipset, the CLOCK_REALTIME
 numbers look funny.  All other tests run correctly.
 
 ++ /usr/lib/xenomai/clocktest -T 30
 == Tested clock: 0 (CLOCK_REALTIME)
 CPU      ToD offset [us]  ToD drift [us/s]  warps  max delta [us]
 ---      ---------------  ----------------  -----  --------------
   0              0.0                 0.000      0             0.0
   0       -1074721.3                39.245      0             0.0
   0       -1074711.5                39.172      0             0.0
   0       -1074701.8                39.096      0             0.0
   0       -1074691.9                39.184      0             0.0
 
 This is Gilles's i-pipe dev tree and xenomai master pulled yesterday,
 kernel 3.5.7.


Nothing really wrong here: the ToD offset is due to the fact that the
realtime clock was changed between the time Xenomai read it to set its
own real-time clock and the time clocktest was run, and the drift is due
to the tsc frequency adjustments made by Linux after Xenomai started.

-- 
Gilles.



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread John Morris


On 01/19/2013 02:25 PM, Gilles Chanteperdrix wrote:
 [...]

 Nothing really wrong here: the ToD offset is due to the fact that the
 realtime clock was changed between the time Xenomai read it to set its
 own real-time clock and the time clocktest was run, and the drift is due
 to the tsc frequency adjustments made by Linux after Xenomai started.
 

Woo hoo!  That means it's time to start working on the package repos.
I'll report back soon.  Thanks for all the help.

John



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 09:32 PM, John Morris wrote:

 
 
 [...]

 
 Woo hoo!  That means it's time to start working on the package repos.
 I'll report back soon.  Thanks for all the help.


It would make sense to wait for Jan's results on Monday.


-- 
Gilles.



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread John Morris
On 01/19/2013 02:17 PM, John Morris wrote:
 [...]

Whoops, how embarrassing, I reported the wrong thing:  it's the
native/tsc numbers that look funny.

++ /usr/lib/xenomai/regression/native/tsc
Checking tsc for 1 minute(s)
min: 4294967295, max: 0, avg: -nan
min: 4294967295, max: 0, avg: -nan
min: 4294967295, max: 0, avg: -nan
min: 4294967295, max: 0, avg: -nan
min: 4294967295, max: 0, avg: -nan
min: 4294967295, max: 0, avg: -nan
[...]
min: 4294967295, max: 0, avg: -nan -> -nan us

John

 http://www.zultron.com/static/2013/01/xenomai/3.5.7-test-32-bit/config-3.5.7.txt
 
 http://www.zultron.com/static/2013/01/xenomai/3.5.7-test-32-bit/dmesg.log
 
 http://www.zultron.com/static/2013/01/xenomai/3.5.7-test-32-bit/xeno-regression-test.log



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 11:12 PM, John Morris wrote:

 [...]
 
 Whoops, how embarrassing, I reported the wrong thing:  it's the
 native/tsc numbers that look funny.
 
 ++ /usr/lib/xenomai/regression/native/tsc
 Checking tsc for 1 minute(s)
 min: 4294967295, max: 0, avg: -nan
 min: 4294967295, max: 0, avg: -nan
 min: 4294967295, max: 0, avg: -nan
 min: 4294967295, max: 0, avg: -nan
 min: 4294967295, max: 0, avg: -nan
 min: 4294967295, max: 0, avg: -nan
 [...]
 min: 4294967295, max: 0, avg: -nan -> -nan us


You are probably missing --enable-x86-tsc on the configure command line.
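
[Editor's note: for reference, a minimal rebuild with the flag spelled out; other configure options are omitted here.]

./configure --enable-x86-tsc    # plus your usual options
make && make install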

-- 
Gilles.



Re: [Xenomai] 32-bit regression tests: CLOCK_REALTIME wonkiness

2013-01-19 Thread Gilles Chanteperdrix
On 01/19/2013 11:14 PM, Gilles Chanteperdrix wrote:

 [...]
 
 
 You are probably missing --enable-x86-tsc on the configure command line.
 

(or rather have passed --disable-x86-tsc as --enable-x86-tsc should be
the default).

-- 
Gilles.
