Re: [Xenomai-core] xenomai 2.5.3/native, kernel 2.6.31.8 and fork()

2010-08-20 Thread Krzysztof Błaszkowski

 Yes, now if you find the culprit option, it would be nice to report here
 so that we can fix the I-pipe patch.
 


I still don't know it. All I have are two configs: one which does not work
and one which works. So far I have tried breaking the working one and also
fixing the broken one. Both attempts have been unsuccessful.

I tried many obvious settings, mainly in Processor type and features,
with no luck.

This process must take some time (I can't spend whole days trying each
difference one by one, recompiling the kernel, syncing the target's rootfs,
rebooting the target and running the fork regression test, even though I
have automated many of those steps).

Regards,
-- 
Krzysztof Blaszkowski




Re: [Xenomai-core] xenomai 2.5.3/native, kernel 2.6.31.8 and fork()

2010-08-20 Thread Gilles Chanteperdrix
Krzysztof Błaszkowski wrote:
 Yes, now if you find the culprit option, it would be nice to report here
 so that we can fix the I-pipe patch.

 
 
 I still don't know it. All I have are two configs: one which does not work
 and one which works. So far I have tried breaking the working one and also
 fixing the broken one. Both attempts have been unsuccessful.
 
 I tried many obvious settings, mainly in Processor type and features,
 with no luck.
 
 This process must take some time (I can't spend whole days trying each
 difference one by one, recompiling the kernel, syncing the target's rootfs,
 rebooting the target and running the fork regression test, even though I
 have automated many of those steps).

Ever heard about bisecting? List the diffs between the two configs and
apply half of them; if it still works, apply half of the rest; if it does
not, unapply half of the ones you applied; etc.
If there are 65000 differences, you will get to the result in 16 steps.
You can keep the same rootfs; all you have to do is rebuild the kernel
(without make clean, so that only what changed in the .config is
re-compiled).

It should take just an hour or two.
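
A rough sketch of such a config-bisection helper (purely illustrative: fixed-size
tables, minimal error handling, and it only considers options present in the first
file) could look like this:

/* bisect-config.c - illustrative sketch only: list the CONFIG_ options whose
 * setting differs between two .config files and print the first half of them,
 * i.e. the set to toggle in the next bisection step. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_OPTS 65536

struct opt { char name[96]; char line[256]; };

static struct opt *load(const char *path, int *count)
{
        struct opt *tab = calloc(MAX_OPTS, sizeof(*tab));
        char line[256], name[96];
        FILE *f = fopen(path, "r");
        int n = 0;

        if (!f || !tab) {
                perror(path);
                exit(1);
        }
        while (fgets(line, sizeof(line), f) && n < MAX_OPTS) {
                /* both "CONFIG_FOO=..." and "# CONFIG_FOO is not set" name an option */
                if (sscanf(line, "CONFIG_%95[A-Za-z0-9_]", name) == 1 ||
                    sscanf(line, "# CONFIG_%95[A-Za-z0-9_]", name) == 1) {
                        snprintf(tab[n].name, sizeof(tab[n].name), "%s", name);
                        snprintf(tab[n].line, sizeof(tab[n].line), "%s", line);
                        n++;
                }
        }
        fclose(f);
        *count = n;
        return tab;
}

static const char *lookup(const struct opt *tab, int n, const char *name)
{
        int i;

        for (i = 0; i < n; i++)
                if (!strcmp(tab[i].name, name))
                        return tab[i].line;
        return "";      /* option absent from the other config */
}

int main(int argc, char **argv)
{
        static const char *diff[MAX_OPTS];
        int na, nb, i, ndiff = 0;
        struct opt *a, *b;

        if (argc != 3) {
                fprintf(stderr, "usage: %s working.config broken.config\n", argv[0]);
                return 1;
        }
        a = load(argv[1], &na);
        b = load(argv[2], &nb);

        /* collect the options whose setting differs between the two configs */
        for (i = 0; i < na; i++)
                if (strcmp(a[i].line, lookup(b, nb, a[i].name)))
                        diff[ndiff++] = a[i].name;

        /* bisection step: toggle the first half of the differing options */
        printf("%d differing options, flip these %d first:\n", ndiff, (ndiff + 1) / 2);
        for (i = 0; i < (ndiff + 1) / 2; i++)
                printf("  CONFIG_%s\n", diff[i]);

        return 0;
}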

 
 Regards,


-- 
Gilles.




Re: [Xenomai-core] [PATCH] Mayday support

2010-08-20 Thread Philippe Gerum
On Fri, 2010-08-20 at 14:32 +0200, Jan Kiszka wrote:
 Jan Kiszka wrote:
  Philippe Gerum wrote:
  I've toyed a bit to find a generic approach for the nucleus to regain
  complete control over a userland application running in a syscall-less
  loop.
 
  The original issue was about recovering gracefully from a runaway
  situation detected by the nucleus watchdog, where a thread would spin in
  primary mode without issuing any syscall, but this would also apply for
  real-time signals pending for such a thread. Currently, Xenomai rt
  signals cannot preempt syscall-less code running in primary mode either.
 
  The major difference between the previous approaches we discussed about
  and this one, is the fact that we now force the runaway thread to run a
  piece of valid code that calls into the nucleus. We do not force the
  thread to run faulty code or at a faulty address anymore. Therefore, we
  can reuse this feature to improve the rt signal management, without
  having to forge yet-another signal stack frame for this.
 
  The code introduced only fixes the watchdog related issue, but also does
  some groundwork for enhancing the rt signal support later. The
  implementation details can be found here:
  http://git.xenomai.org/?p=xenomai-rpm.git;a=commit;h=4cf21a2ae58354819da6475ae869b96c2defda0c
 
  The current mayday support is only available for powerpc and x86 for
  now, more will come in the next days. To have it enabled, you have to
  upgrade your I-pipe patch to 2.6.32.15-2.7-00 or 2.6.34-2.7-00 for x86,
  2.6.33.5-2.10-01 or 2.6.34-2.10-00 for powerpc. That feature relies on a
  new interface available from those latest patches.
 
  The current implementation does not break the 2.5.x ABI on purpose, so
  we could merge it into the stable branch.
 
  We definitely need user feedback on this. Typically, does arming the
  nucleus watchdog, with that patch support in, properly recover from your
  favorite "get me out of here" situation? TIA,
 
  You can pull this stuff from
  git://git.xenomai.org/xenomai-rpm.git, queue/mayday branch.
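
For reference, the kind of syscall-less runaway being discussed can be reproduced
with a trivial native-skin task along these lines (an illustrative sketch only;
the task name and priority are arbitrary, and it assumes the Xenomai 2.5 native
skin API):

/* cpu-hog.c - illustrative sketch of a syscall-less runaway task, i.e. the
 * situation the watchdog/mayday mechanism is meant to recover from. */
#include <stdio.h>
#include <sys/mman.h>
#include <native/task.h>

static RT_TASK hog;

static void hog_body(void *cookie)
{
        (void)cookie;
        for (;;)
                ;       /* spin in primary mode, never issuing a syscall */
}

int main(void)
{
        int err;

        mlockall(MCL_CURRENT | MCL_FUTURE);     /* required for Xenomai user-space tasks */

        err = rt_task_create(&hog, "cpu-hog", 0, 99, T_JOINABLE);
        if (err) {
                fprintf(stderr, "rt_task_create failed: %d\n", err);
                return 1;
        }
        rt_task_start(&hog, hog_body, NULL);
        rt_task_join(&hog);     /* only returns once the hog is torn down */

        return 0;
}

With CONFIG_XENO_OPT_WATCHDOG enabled, the watchdog/mayday mechanism described
above is expected to break such a task out of its loop after the watchdog delay
instead of leaving the CPU locked up.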
 
  
  I've retested the feature as it's now in master, and it has one
  remaining problem: If you run the cpu hog under gdb control and try to
  break out of the while(1) loop, this doesn't work before the watchdog
  expired - of course. But if you send the break before the expiry (or hit
  a breakpoint), something goes wrong. The Xenomai task continues to spin,
  and there is no chance to kill its process (only gdb).
  
  # cat /proc/xenomai/sched
  CPU  PID    CLASS  PRI   TIMEOUT   TIMEBASE   STAT       NAME
    0  0      idle   -1    -         master     RR         ROOT/0

Eeek, we really need to have a look at this funky STAT output.

    1  0      idle   -1    -         master     R          ROOT/1
    0  6120   rt     99    -         master     Tt         cpu-hog
  # cat /proc/xenomai/stat
  CPU  PID    MSW        CSW        PF    STAT       %CPU  NAME
    0  0      0          0          0     00500088    0.0  ROOT/0
    1  0      0          0          0     00500080   99.7  ROOT/1
    0  6120   0          1          0     00342180  100.0  cpu-hog
    0  0      0          21005      0                 0.0  IRQ3340: [timer]
    1  0      0          35887      0                 0.3  IRQ3340: [timer]
  
 
 Fixable by this tiny change:
 
 diff --git a/ksrc/nucleus/sched.c b/ksrc/nucleus/sched.c
 index 5242d9f..04a344e 100644
 --- a/ksrc/nucleus/sched.c
 +++ b/ksrc/nucleus/sched.c
 @@ -175,7 +175,8 @@ void xnsched_init(struct xnsched *sched, int cpu)
 		       xnthread_name(&sched->rootcb));
 
  #ifdef CONFIG_XENO_OPT_WATCHDOG
 -	xntimer_init(&sched->wdtimer, &nktbase, xnsched_watchdog_handler);
 +	xntimer_init_noblock(&sched->wdtimer, &nktbase,
 +			     xnsched_watchdog_handler);
 	xntimer_set_name(&sched->wdtimer, "[watchdog]");
 	xntimer_set_priority(&sched->wdtimer, XNTIMER_LOPRIO);
 	xntimer_set_sched(&sched->wdtimer, sched);
 
 
 I.e. the watchdog timer should not be stopped by any ongoing debug
 session of a Xenomai app. Will queue this for upstream.

Yes, that makes a lot of sense now. The watchdog would not fire if the
task was single-stepped anyway, since the latter would have been moved
to secondary mode first.

Did you see this bug happening in a uniprocessor context as well?

 
 Jan
 

-- 
Philippe.





Re: [Xenomai-core] [PATCH] Mayday support

2010-08-20 Thread Jan Kiszka
Philippe Gerum wrote:
 On Fri, 2010-08-20 at 14:32 +0200, Jan Kiszka wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 I've toyed a bit to find a generic approach for the nucleus to regain
 complete control over a userland application running in a syscall-less
 loop.

 The original issue was about recovering gracefully from a runaway
 situation detected by the nucleus watchdog, where a thread would spin in
 primary mode without issuing any syscall, but this would also apply for
 real-time signals pending for such a thread. Currently, Xenomai rt
 signals cannot preempt syscall-less code running in primary mode either.

 The major difference between the previous approaches we discussed about
 and this one, is the fact that we now force the runaway thread to run a
 piece of valid code that calls into the nucleus. We do not force the
 thread to run faulty code or at a faulty address anymore. Therefore, we
 can reuse this feature to improve the rt signal management, without
 having to forge yet-another signal stack frame for this.

 The code introduced only fixes the watchdog related issue, but also does
 some groundwork for enhancing the rt signal support later. The
 implementation details can be found here:
 http://git.xenomai.org/?p=xenomai-rpm.git;a=commit;h=4cf21a2ae58354819da6475ae869b96c2defda0c

 The current mayday support is only available for powerpc and x86 for
 now, more will come in the next days. To have it enabled, you have to
 upgrade your I-pipe patch to 2.6.32.15-2.7-00 or 2.6.34-2.7-00 for x86,
 2.6.33.5-2.10-01 or 2.6.34-2.10-00 for powerpc. That feature relies on a
 new interface available from those latest patches.

 The current implementation does not break the 2.5.x ABI on purpose, so
 we could merge it into the stable branch.

 We definitely need user feedback on this. Typically, does arming the
 nucleus watchdog, with that patch support in, properly recover from your
 favorite "get me out of here" situation? TIA,

 You can pull this stuff from
 git://git.xenomai.org/xenomai-rpm.git, queue/mayday branch.

 I've retested the feature as it's now in master, and it has one
 remaining problem: If you run the cpu hog under gdb control and try to
 break out of the while(1) loop, this doesn't work before the watchdog
 expired - of course. But if you send the break before the expiry (or hit
 a breakpoint), something goes wrong. The Xenomai task continues to spin,
 and there is no chance to kill its process (only gdb).

 # cat /proc/xenomai/sched
 CPU  PID    CLASS  PRI   TIMEOUT   TIMEBASE   STAT       NAME
   0  0      idle   -1    -         master     RR         ROOT/0
 
 Eeek, we really need to have a look at this funky STAT output.

I've a patch for this queued as well. Was only a cosmetic thing.

 
   1  0      idle   -1    -         master     R          ROOT/1
   0  6120   rt     99    -         master     Tt         cpu-hog
 # cat /proc/xenomai/stat
 CPU  PID    MSW        CSW        PF    STAT       %CPU  NAME
   0  0      0          0          0     00500088    0.0  ROOT/0
   1  0      0          0          0     00500080   99.7  ROOT/1
   0  6120   0          1          0     00342180  100.0  cpu-hog
   0  0      0          21005      0                 0.0  IRQ3340: [timer]
   1  0      0          35887      0                 0.3  IRQ3340: [timer]

 Fixable by this tiny change:

 diff --git a/ksrc/nucleus/sched.c b/ksrc/nucleus/sched.c
 index 5242d9f..04a344e 100644
 --- a/ksrc/nucleus/sched.c
 +++ b/ksrc/nucleus/sched.c
 @@ -175,7 +175,8 @@ void xnsched_init(struct xnsched *sched, int cpu)
 		       xnthread_name(&sched->rootcb));
 
  #ifdef CONFIG_XENO_OPT_WATCHDOG
 -	xntimer_init(&sched->wdtimer, &nktbase, xnsched_watchdog_handler);
 +	xntimer_init_noblock(&sched->wdtimer, &nktbase,
 +			     xnsched_watchdog_handler);
 	xntimer_set_name(&sched->wdtimer, "[watchdog]");
 	xntimer_set_priority(&sched->wdtimer, XNTIMER_LOPRIO);
 	xntimer_set_sched(&sched->wdtimer, sched);


 I.e. the watchdog timer should not be stopped by any ongoing debug
 session of a Xenomai app. Will queue this for upstream.
 
 Yes, that makes a lot of sense now. The watchdog would not fire if the
 task was single-stepped anyway, since the latter would have been moved
 to secondary mode first.

Yep.

 
 Did you see this bug happening in a uniprocessor context as well?

No, as it is impossible on a uniprocessor to interact with gdb while a cpu
hog runs - the only existing CPU is simply not available. :)

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux



Re: [Xenomai-core] [PATCH] Mayday support

2010-08-20 Thread Philippe Gerum
On Fri, 2010-08-20 at 16:06 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
  On Fri, 2010-08-20 at 14:32 +0200, Jan Kiszka wrote:
  Jan Kiszka wrote:
  Philippe Gerum wrote:
  I've toyed a bit to find a generic approach for the nucleus to regain
  complete control over a userland application running in a syscall-less
  loop.
 
  The original issue was about recovering gracefully from a runaway
  situation detected by the nucleus watchdog, where a thread would spin in
  primary mode without issuing any syscall, but this would also apply for
  real-time signals pending for such a thread. Currently, Xenomai rt
  signals cannot preempt syscall-less code running in primary mode either.
 
  The major difference between the previous approaches we discussed about
  and this one, is the fact that we now force the runaway thread to run a
  piece of valid code that calls into the nucleus. We do not force the
  thread to run faulty code or at a faulty address anymore. Therefore, we
  can reuse this feature to improve the rt signal management, without
  having to forge yet-another signal stack frame for this.
 
  The code introduced only fixes the watchdog related issue, but also does
  some groundwork for enhancing the rt signal support later. The
  implementation details can be found here:
  http://git.xenomai.org/?p=xenomai-rpm.git;a=commit;h=4cf21a2ae58354819da6475ae869b96c2defda0c
 
  The current mayday support is only available for powerpc and x86 for
  now, more will come in the next days. To have it enabled, you have to
  upgrade your I-pipe patch to 2.6.32.15-2.7-00 or 2.6.34-2.7-00 for x86,
  2.6.33.5-2.10-01 or 2.6.34-2.10-00 for powerpc. That feature relies on a
  new interface available from those latest patches.
 
  The current implementation does not break the 2.5.x ABI on purpose, so
  we could merge it into the stable branch.
 
  We definitely need user feedback on this. Typically, does arming the
  nucleus watchdog, with that patch support in, properly recover from your
  favorite "get me out of here" situation? TIA,
 
  You can pull this stuff from
  git://git.xenomai.org/xenomai-rpm.git, queue/mayday branch.
 
  I've retested the feature as it's now in master, and it has one
  remaining problem: If you run the cpu hog under gdb control and try to
  break out of the while(1) loop, this doesn't work before the watchdog
  expired - of course. But if you send the break before the expiry (or hit
  a breakpoint), something goes wrong. The Xenomai task continues to spin,
  and there is no chance to kill its process (only gdb).
 
  # cat /proc/xenomai/sched
  CPU  PID    CLASS  PRI   TIMEOUT   TIMEBASE   STAT       NAME
    0  0      idle   -1    -         master     RR         ROOT/0
  
  Eeek, we really need to have a look at this funky STAT output.
 
 I've a patch for this queued as well. Was only a cosmetic thing.
 
  
    1  0      idle   -1    -         master     R          ROOT/1
    0  6120   rt     99    -         master     Tt         cpu-hog
  # cat /proc/xenomai/stat
  CPU  PID    MSW        CSW        PF    STAT       %CPU  NAME
    0  0      0          0          0     00500088    0.0  ROOT/0
    1  0      0          0          0     00500080   99.7  ROOT/1
    0  6120   0          1          0     00342180  100.0  cpu-hog
    0  0      0          21005      0                 0.0  IRQ3340: [timer]
    1  0      0          35887      0                 0.3  IRQ3340: [timer]
 
  Fixable by this tiny change:
 
  diff --git a/ksrc/nucleus/sched.c b/ksrc/nucleus/sched.c
  index 5242d9f..04a344e 100644
  --- a/ksrc/nucleus/sched.c
  +++ b/ksrc/nucleus/sched.c
  @@ -175,7 +175,8 @@ void xnsched_init(struct xnsched *sched, int cpu)
  		       xnthread_name(&sched->rootcb));
  
   #ifdef CONFIG_XENO_OPT_WATCHDOG
  -	xntimer_init(&sched->wdtimer, &nktbase, xnsched_watchdog_handler);
  +	xntimer_init_noblock(&sched->wdtimer, &nktbase,
  +			     xnsched_watchdog_handler);
  	xntimer_set_name(&sched->wdtimer, "[watchdog]");
  	xntimer_set_priority(&sched->wdtimer, XNTIMER_LOPRIO);
  	xntimer_set_sched(&sched->wdtimer, sched);
 
 
  I.e. the watchdog timer should not be stopped by any ongoing debug
  session of a Xenomai app. Will queue this for upstream.
  
  Yes, that makes a lot of sense now. The watchdog would not fire if the
  task was single-stepped anyway, since the latter would have been moved
  to secondary mode first.
 
 Yep.
 
  
  Did you see this bug happening in a uniprocessor context as well?
 
 No, as it is impossible on a uniprocessor to interact with gdb while a cpu
 hog runs - the only existing CPU is simply not available. :)

I was rather thinking of your hit-a-breakpoint-or-^C-early scenario... I
thought you did see this on UP as well, and scratched my head to
understand how this would have been possible. Ok, so let's merge this.

 
 Jan
 

-- 
Philippe.




[Xenomai-core] [git pull] small RTDM fixes and assorted patches

2010-08-20 Thread Jan Kiszka
The following changes since commit 7e2735614ebe515d57abeaa3ff6df375a7c4149f:

  sched: avoid infinite reschedule loops (2010-08-03 00:11:21 +0200)

are available in the git repository at:
  git://git.xenomai.org/xenomai-jki.git for-upstream

Jan Kiszka (8):
  rt_print: Properly return printed length
  RTDM: Protect xnshadow_ppd_get via nklock
  RTDM: Plug race between proc_read_dev_info and device deregistration
  RTDM: Properly clean up on xnvfile setup errors
  RTDM: Extend device name space in open_fildes proc output
  Fix symbolic status output of root threads
  Create watchdog as non-blockable timer
  native: Improve documentation of rt_task_join and rt_task_delete

 ksrc/nucleus/sched.c |3 ++-
 ksrc/nucleus/thread.c|4 
 ksrc/skins/native/task.c |   15 +--
 ksrc/skins/rtdm/core.c   |2 ++
 ksrc/skins/rtdm/proc.c   |   45 +++--
 src/rtdk/rt_print.c  |1 +
 6 files changed, 57 insertions(+), 13 deletions(-)

The bug fixes (patches 1-4 and 7) should all be considered for 2.5 as
well, but some need rebasing. Will look into this once the series is
acceptable.

Jan



Re: [Xenomai-core] [git pull] small RTDM fixes and assorted patches

2010-08-20 Thread Stefan Kisdaroczi
Hi Jan,

https://mail.gna.org/public/xenomai-core/2010-05/msg00059.html

Stefan

Am 20.08.2010 17:02, schrieb Jan Kiszka:
 The following changes since commit 7e2735614ebe515d57abeaa3ff6df375a7c4149f:
 
   sched: avoid infinite reschedule loops (2010-08-03 00:11:21 +0200)
 
 are available in the git repository at:
   git://git.xenomai.org/xenomai-jki.git for-upstream
 
 Jan Kiszka (8):
   rt_print: Properly return printed length
   RTDM: Protect xnshadow_ppd_get via nklock
   RTDM: Plug race between proc_read_dev_info and device deregistration
   RTDM: Properly clean up on xnvfile setup errors
   RTDM: Extend device name space in open_fildes proc output
   Fix symbolic status output of root threads
   Create watchdog as non-blockable timer
   native: Improve documentation of rt_task_join and rt_task_delete
 
  ksrc/nucleus/sched.c |3 ++-
  ksrc/nucleus/thread.c|4 
  ksrc/skins/native/task.c |   15 +--
  ksrc/skins/rtdm/core.c   |2 ++
  ksrc/skins/rtdm/proc.c   |   45 +++--
  src/rtdk/rt_print.c  |1 +
  6 files changed, 57 insertions(+), 13 deletions(-)
 
 The bug fixes (patches 1-4 and 7) should all be considered for 2.5 as
 well, but some need rebasing. Will look into this once the series is
 acceptable.
 
 Jan
 
 





Re: [Xenomai-core] [git pull] small RTDM fixes and assorted patches

2010-08-20 Thread Jan Kiszka
Stefan Kisdaroczi wrote:
 Hi Jan,
 
 https://mail.gna.org/public/xenomai-core/2010-05/msg00059.html

Oh, sorry - hard vacation reset, still recovering. Will add this.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux



[Xenomai-core] [git pull v2] small RTDM fixes and assorted patches

2010-08-20 Thread Jan Kiszka
The following changes since commit 7e2735614ebe515d57abeaa3ff6df375a7c4149f:

  sched: avoid infinite reschedule loops (2010-08-03 00:11:21 +0200)

are available in the git repository at:
  git://git.xenomai.org/xenomai-jki.git for-upstream

Jan Kiszka (8):
  rt_print: Properly return printed length
  RTDM: Protect xnshadow_ppd_get via nklock
  RTDM: Plug race between proc_read_dev_info and device deregistration
  RTDM: Properly clean up on xnvfile setup errors
  RTDM: Extend device name space in open_fildes proc output
  Fix symbolic status output of root threads
  Create watchdog as non-blockable timer
  native: Improve documentation of rt_task_join and rt_task_delete

Stefan Kisdaroczi (1):
  RTDM device profiles: Document open_rt, socket_rt and close_rt deprecation

 include/rtdm/rtcan.h |4 ++--
 include/rtdm/rtserial.h  |4 ++--
 include/rtdm/rttesting.h |4 ++--
 ksrc/nucleus/sched.c |3 ++-
 ksrc/nucleus/thread.c|4 
 ksrc/skins/native/task.c |   15 +--
 ksrc/skins/rtdm/core.c   |2 ++
 ksrc/skins/rtdm/proc.c   |   45 +++--
 src/rtdk/rt_print.c  |1 +
 9 files changed, 63 insertions(+), 19 deletions(-)

Updated to include Stefan's long-pending RTDM profile updates.



Re: [Xenomai-core] xenomai 2.5.3/native, kernel 2.6.31.8 and fork()

2010-08-20 Thread Krzysztof Błaszkowski
On Fri, 2010-08-20 at 11:54 +0200, Gilles Chanteperdrix wrote:
 Krzysztof Błaszkowski wrote:
  Yes, now if you find the culprit option, it would be nice to report here
  so that we can fix the I-pipe patch.
 
  
  
  I still don't know it. All I have are two configs: one which does not work
  and one which works. So far I have tried breaking the working one and also
  fixing the broken one. Both attempts have been unsuccessful.
  
  I tried many obvious settings, mainly in Processor type and features,
  with no luck.
  
  This process must take some time (I can't spend whole days trying each
  difference one by one, recompiling the kernel, syncing the target's rootfs,
  rebooting the target and running the fork regression test, even though I
  have automated many of those steps).
 
 Ever heard about bisecting?

Sure I have.


 List the diffs between the two configs and apply half of them; if it
 still works, apply half of the rest; if it does not, unapply half of the
 ones you applied; etc.
 If there are 65000 differences, you will get to the result in 16 steps.
 You can keep the same rootfs; all you have to do is rebuild the kernel
 (without make clean, so that only what changed in the .config is
 re-compiled).
 

I used to use a more fine-grained change set until it made me tired.

And, as you may know, most changes in processor features lead to
recompiling the whole kernel - so not cleaning won't save anything.


 It should take just an hour or two.

Possibly, but I don't think so.

Regards,

 
  
  Regards,
 
 


-- 
Krzysztof Blaszkowski




[Xenomai-core] rt timer jitter

2010-08-20 Thread Krzysztof Błaszkowski

Do you have any idea about reducing rt timer jitter?
I experience annoyingly big jitter in a thread which is supposed to run
at a 400us period (I reckon this is nothing extra demanding for an Atom @ 1.6GHz).


the thread's loop looks like:

{
function1()
..2()
..3()
..4()

rt_task_wait_period()
}

(^yet another simplified model^)

The task is periodic while the native skin works in aperiodic timer mode
(the periodic timer has horrible timings - it is apparently not an rt timer).

rt_task_wait_period() always exits with 0 (no overruns), and these
functions take no longer than 120usec (the average is 80..90).

After rt_task_wait_period() I read the TSC (on the Atom it is constant
and any frequency adjusting is disabled),

and comparing it to the previous readout, converted to ns, I get jitter in
the range of 10usec..-18usec. 10usec means that wait_period exited
before the given time point, and -18usec means that it did so with an 18usec
delay.
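
A reduced, self-contained version of that measurement loop might look like the
sketch below (illustrative only: it uses rt_timer_read() instead of a raw TSC
readout, and the task name, priority and the 10usec reporting threshold are
placeholders):

/* jitter-probe.c - illustrative sketch of the described measurement: a periodic
 * native task records the wakeup-to-wakeup delta and reports the deviation from
 * the nominal 400us period. Assumes the native skin in aperiodic timer mode. */
#include <stdio.h>
#include <sys/mman.h>
#include <rtdk.h>
#include <native/task.h>
#include <native/timer.h>

#define PERIOD_NS 400000        /* 400 us */

static RT_TASK servo;

static void servo_body(void *cookie)
{
        RTIME prev, now;
        long long jitter;

        (void)cookie;
        rt_task_set_periodic(NULL, TM_NOW, PERIOD_NS);
        prev = rt_timer_read();

        for (;;) {
                rt_task_wait_period(NULL);      /* returns 0 when no overrun occurred */
                now = rt_timer_read();
                jitter = (long long)(now - prev) - PERIOD_NS;
                prev = now;
                if (jitter > 10000 || jitter < -10000)  /* report deviations > 10 us */
                        rt_printf("jitter %lld ns\n", jitter);
                /* ... function1() .. function4() would run here ... */
        }
}

int main(void)
{
        mlockall(MCL_CURRENT | MCL_FUTURE);
        rt_print_auto_init(1);                  /* needed for rt_printf() */
        rt_task_create(&servo, "servo", 0, 99, T_JOINABLE);
        rt_task_start(&servo, servo_body, NULL);
        rt_task_join(&servo);
        return 0;
}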

I noticed that the UP configuration has slightly less jitter.

a part of dmesg:

SGEN-lpc 0x148f: division factor 20 (700), tcks: 2, 15
sgen_fpga_init:736 [0]: acc  (before rst ) test 
sgen_fpga_init:742  c000
sgen_fpga_init:742  c000
sgen_fpga_init:742  c000
sgen_fpga_init:752  8000
thread_task:2651 peak rt jitter -710[ns], tsc delta 667850
SGEN-lpc :detected inputs failure. Mask 0x000c
MOTION: setting Traj cycle time to 40 nsecs
MOTION: setting Servo cycle time to 400 nsecs
thread_task:2651 peak rt jitter -3512[ns], tsc delta 672520
thread_task:2651 peak rt jitter 3694[ns], tsc delta 660510
thread_task:2651 peak rt jitter -4382[ns], tsc delta 673970
thread_task:2651 peak rt jitter 5146[ns], tsc delta 658090
thread_task:2651 peak rt jitter -5558[ns], tsc delta 675930
thread_task:2651 peak rt jitter 5626[ns], tsc delta 657290
thread_task:2651 peak rt jitter -5828[ns], tsc delta 676380
thread_task:2651 peak rt jitter 7264[ns], tsc delta 654560
thread_task:2651 peak rt jitter -7742[ns], tsc delta 679570
thread_task:2651 peak rt jitter -9626[ns], tsc delta 682710
thread_task:2651 peak rt jitter 10156[ns], tsc delta 649740
thread_task:2651 peak rt jitter -16262[ns], tsc delta 693770
SGEN-lpc [0] Fmax set to 285714Hz (18)
SGEN-lpc [1] Fmax set to 285714Hz (18)
SGEN-lpc [2] Fmax set to 285714Hz (15)
thread_task:2651 peak rt jitter -18470[ns], tsc delta 697450


I must say it is 4-5 times worse compared to RTAI 3.7 / 2.6.27.19
UP.

I now use Xenomai 2.5.4 with the adeos 2.2-06 patch on the same 2.6.27 kernel
to make these comparisons more reliable.


Can I do something about this?

Regards,


-- 
Krzysztof Blaszkowski




Re: [Xenomai-core] rt timer jitter

2010-08-20 Thread Krzysztof Błaszkowski
On Fri, 2010-08-20 at 18:01 +0200, Gilles Chanteperdrix wrote:
 Krzysztof Błaszkowski wrote:
  Can i do something with this ?
 
 Do you observe the same latencies with the latency test?
 

This test does not produce reliable results, only some hints.

E.g. the min latency shifts by about 1.5usec when I run
dd if=/dev/urandom of=/dev/null bs=16k on a second console.

As I recall, the max latency was more than 10usec.



 


-- 
Krzysztof Blaszkowski




Re: [Xenomai-core] rt timer jitter

2010-08-20 Thread Philippe Gerum
On Fri, 2010-08-20 at 18:20 +0200, Krzysztof Błaszkowski wrote:
 On Fri, 2010-08-20 at 18:06 +0200, Philippe Gerum wrote:
  On Fri, 2010-08-20 at 17:55 +0200, Krzysztof Błaszkowski wrote:
   Do you have any idea about reducing rt timer jitter?
   I experience annoyingly big jitter in a thread which is supposed to run
   at a 400us period (I reckon this is nothing extra demanding for an Atom @ 1.6GHz).
   
   
   the thread's loop looks like:
   
   {
   function1()
   ..2()
   ..3()
   ..4()
   
   rt_task_wait_period()
   }
   
   (^yet another simplified model^)
  
  This is the typical pattern of the latency test. What figures do you get
  with:
  
  # /usr/xenomai/bin/latency -t0
  ...
  # /usr/xenomai/bin/latency -t1
  
 
 t0:
 
 RTS|  -1.337|  -0.039|  13.285|       0|     0|    00:02:13/00:02:13
 

Those are common figures for user-space latency on the kind of hw you
run this test on.

 
 I can't run -t1 because of the missing xeno_timerbench.ko (I have no idea
 how to find the config option which would build it).
 

Did you consider using the Search feature from
xconfig/gconfig/whatever, looking for timerbench?

config XENO_DRIVERS_TIMERBENCH
        depends on XENO_SKIN_RTDM
        tristate "Timer benchmark driver"
        default y
        help
          Kernel-based benchmark driver for timer latency evaluation.
          See testsuite/latency for a possible front-end.

If you run your app in kernel space, then -t1 is what you want to run.

-- 
Philippe.





Re: [Xenomai-core] rt timer jitter

2010-08-20 Thread Philippe Gerum
On Fri, 2010-08-20 at 18:14 +0200, Krzysztof Błaszkowski wrote:
 On Fri, 2010-08-20 at 18:01 +0200, Gilles Chanteperdrix wrote:
  Krzysztof Błaszkowski wrote:
   Can i do something with this ?
  
  Do you observe the same latencies with the latency test?
  
 
 This test does not produce reliable results, only some hints.
 
 E.g. the min latency shifts by about 1.5usec when I run
 dd if=/dev/urandom of=/dev/null bs=16k on a second console.
 

and?

 As I recall, the max latency was more than 10usec.
 

which is correct on your platform.

 
 
  
 
 

-- 
Philippe.





[Xenomai-core] problem - eldk 4.2 - ppc

2010-08-20 Thread Rodolfo Oliveira
Hi,

Today I use ELDK 3.0 with the MPC5200B PPC processor, but I want to
upgrade the kernel to version 2.6 and to ELDK 4.2, which should make this
easier.
The problem happens when I try to compile by running the following commands:

make distclean
make clean
make menuconfig
make

Many minutes later, the following error appears:
  LD      drivers/net/bonding/built-in.o
  CC [M]  drivers/net/bonding/bond_main.o
  CC [M]  drivers/net/bonding/bond_3ad.o
  CC [M]  drivers/net/bonding/bond_alb.o
  CC [M]  drivers/net/bonding/bond_sysfs.o
  LD [M]  drivers/net/bonding/bonding.o
  LD      drivers/net/can/built-in.o
  CC [M]  drivers/net/can/vcan.o
drivers/net/can/vcan.c: In function 'vcan_setup':
drivers/net/can/vcan.c:207: error: implicit declaration of function 'SET_MODULE_OWNER'
make[3]: *** [drivers/net/can/vcan.o] Error 1
make[2]: *** [drivers/net/can] Error 2
make[1]: *** [drivers/net] Error 2
make: *** [drivers] Error 2
rodolfo@df2684:/$ ls opt/eldk/ppc_82xx/usr/src/linux-2.6.24
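
For what it's worth, SET_MODULE_OWNER was reduced to a no-op and then removed
from mainline around 2.6.24, which is why driver code still calling it fails to
build against that tree. An untested workaround sketch is to define the macro
away before the offending code, e.g. in a local compatibility header:

/* Untested sketch: restores compilation of code written for kernels that
 * still provided SET_MODULE_OWNER; the macro had long been a no-op anyway. */
#ifndef SET_MODULE_OWNER
#define SET_MODULE_OWNER(dev) do { } while (0)
#endif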

can anyone help me?

Thanks

-- 

[]´s



===
Rodolfo R. de O. Neto, Eng. , MBA
Computer Engineer
MBA - IT Governance
E-mail: rodolforo...@gmail.com
===