Re: [Xenomai-core] system() question

2007-02-08 Thread Gilles Chanteperdrix
Stéphane ANCELOT wrote:
  my Linux user task uses a system() call in order to run a bash script 
  that restarts the realtime task, as follows:
  
  user interface C call:
  system("restart_task.sh");
  then give control back to the user interface
  
  
  
  
  
  bash script restart_task.sh:
  killall -15 mytask
  sleep 2
  mytask &
  
  
the system() call dies because the sh script has been launched

Try calling daemon(0,0) in mytask instead of using & to make it run in
the background.

-- 


Gilles Chanteperdrix.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Linux lock-up with rtcanrecv

2007-02-08 Thread Wolfgang Grandegger

Jan Kiszka wrote:

Jan Kiszka wrote:

Hi all,

fiddling with latest Xenomai trunk and 2.3.x on one of our robots (there
is still a bug in trunk w.r.t. broken timeouts of rt_dev_read on
xeno_16550A - different issue...), I ran into a weird behaviour of
rtcanrecv:

I have a continuous stream of a few thousand packets/s towards the
robot. When I start up two "rtcanrecv rtcan0 -p1000" instances (or one +
our own receiver application), the second one causes a Linux lock-up.
Sometimes this happens during startup of the second rtcanrecv, but at
the latest on its termination. Other RT tasks are still running. I can
resolve the lock-up by pulling the CAN cable; everyone is fine
afterwards and can be cleaned up. I played with quite a few combinations
of recent ipipe patches and Xenomai revisions (even back to #1084 in
v2.3.x), no noticeable difference.



Forgot to mention one further observation: removing the usleep from
rtcanrecv's cleanup() works around the shutdown lock-up. I can't
interpret this yet. [BTW, Wolfgang, what is it good for?]


Hm, I think the usleep() only makes sense for rtcansend, to allow messages 
to get out before the close. You can remove it.


Wolfgang.




[Xenomai-core] Re: Linux lock-up with rtcanrecv

2007-02-08 Thread Wolfgang Grandegger

Hi Jan,

Jan Kiszka wrote:

Hi all,

fiddling with latest Xenomai trunk and 2.3.x on one of our robots (there
is still a bug in trunk /wrt broken timeouts of rt_dev_read on
xeno_16550A - different issue...), I ran into a weird behaviour of
rtcanrecv:

I have a continuous stream of a few thousand packets/s towards the
robot. When I start up two rtcanrecv rtcan0 -p1000 instances (or one +
our own receiver application), the second one causes a Linux lock-up.
Sometimes this happens during startup of the second rtcanrecv, but at
latest on its termination. Other RT tasks are still running. I can
resolve the lock-up by pulling the CAN cable, everyone is fine
afterwards and can be cleaned up. I played with quite a few combinations
of recent ipipe patches and Xenomai revisions (even back to #1084 in
v2.3.x), no noticeable difference.

Seems like I have to take a closer look - once time permits and the
robot is available. So any ideas or attempts to reproduce this are
welcome, current .config attached (no magic knob found there yet).


I will try to reproduce the problem a.s.a.p.


Jan


PS: Wolfgang, any objections against decoupling -v from -p and
lowering the receiver priority to 0?


No, -v with -p looks like a bug anyway. And does it make sense to define 
an option for the task priority?




Index: src/utils/can/rtcanrecv.c
===================================================================
--- src/utils/can/rtcanrecv.c   (revision 2146)
+++ src/utils/can/rtcanrecv.c   (working copy)
@@ -192,6 +192,7 @@ int main(int argc, char **argv)

case 'p':
print = strtoul(optarg, NULL, 0);
+   break;

case 'v':
verbose = 1;
@@ -312,7 +313,7 @@ int main(int argc, char **argv)
 }

 snprintf(name, sizeof(name), "rtcanrecv-%d", getpid());
-ret = rt_task_shadow(&rt_task_desc, name, 1, 0);
+ret = rt_task_shadow(&rt_task_desc, name, 0, 0);
 if (ret) {
	fprintf(stderr, "rt_task_shadow: %s\n", strerror(-ret));
	goto failure;



Wolfgang.



[Xenomai-core] Fixed two timer base regressions

2007-02-08 Thread Jan Kiszka
Hi Philippe,

the trivial bugs are fixed already: see #2152 for the reason why
rt_dev_read timeouts took too long (the timer mode was ignored by
xnsynch_sleep_on), and I also found a yet invisible bug in
rtdm_toseq_init that would have picked the wrong time base (#2153).

Now just that rtcanrecv issue remains...

Jan





[Xenomai-core] Re: Linux lock-up with rtcanrecv

2007-02-08 Thread Jan Kiszka
Wolfgang Grandegger wrote:
 Hi Jan,
 
 Jan Kiszka wrote:
 Hi all,

 fiddling with latest Xenomai trunk and 2.3.x on one of our robots (there
 is still a bug in trunk /wrt broken timeouts of rt_dev_read on
 xeno_16550A - different issue...), I ran into a weird behaviour of
 rtcanrecv:

 I have a continuous stream of a few thousand packets/s towards the
 robot. When I start up two rtcanrecv rtcan0 -p1000 instances (or one +
 our own receiver application), the second one causes a Linux lock-up.
 Sometimes this happens during startup of the second rtcanrecv, but at
 latest on its termination. Other RT tasks are still running. I can
 resolve the lock-up by pulling the CAN cable, everyone is fine
 afterwards and can be cleaned up. I played with quite a few combinations
 of recent ipipe patches and Xenomai revisions (even back to #1084 in
 v2.3.x), no noticeable difference.

 Seems like I have to take a closer look - once time permits and the
 robot is available. So any ideas or attempts to reproduce this are
 welcome, current .config attached (no magic knob found there yet).
 
 I will try to reproduce the problem a.s.a.p.

TiA.

 
 Jan


 PS: Wolfgang, any objections against decoupling -v from -p and
 lowering the receiver priority to 0?
 
 No, -v with -p looks like a bug anyway. And does it make sense to define
 an option for the task priority?

I don't think so, because a) the timestamps are recorded at IRQ level
anyway, and b) we printf the result in secondary mode. My reason for
lowering the prio was to avoid that the receiver runs under Linux with
SCHED_FIFO.

Jan





[Xenomai-core] Re: Fixed two timer base regressions

2007-02-08 Thread Philippe Gerum
On Thu, 2007-02-08 at 09:38 +0100, Jan Kiszka wrote:
 Hi Philippe,
 
 the trivial bugs are fixed already: see #2152 for the reason why
 rt_dev_read timeouts took too long (the timer mode was ignored by
 xnsynch_sleep_on),

Ok.

  and I also found a yet invisible bug in
 rtdm_toseq_init that would have picked the wrong time base (#2153).
 

Using xnpod_current_thread()'s time base in rtdm_toseq_init() will
always pick the master one when called over a secondary mode context,
which, according to the doc, is allowed. Is this intended?

 Now just that rtcanrecv issue remains...
 
 Jan
 
-- 
Philippe.





[Xenomai-core] Re: Fixed two timer base regressions

2007-02-08 Thread Jan Kiszka
Philippe Gerum wrote:
 On Thu, 2007-02-08 at 09:38 +0100, Jan Kiszka wrote:
 Hi Philippe,

 the trivial bugs are fixed already: see #2152 for the reason why
 rt_dev_read timeouts took too long (the timer mode was ignored by
 xnsynch_sleep_on),
 
 Ok.
 
  and I also found a yet invisible bug in
 rtdm_toseq_init that would have picked the wrong time base (#2153).

 
 Using xnpod_current_thread()'s time base in rtdm_toseq_init() will
 always pick the master one when called over a secondary mode context,
 which according to the doc, is allowed. Is this intended?

rtdm_toseq_init will only be called from primary context; it belongs
in the same category as rtdm_mutex_timedlock, rtdm_sem_timeddown, etc.





Re: [Xenomai-core] Re: Fixed two timer base regressions

2007-02-08 Thread Jan Kiszka
Jan Kiszka wrote:
 Philippe Gerum wrote:
 On Thu, 2007-02-08 at 09:38 +0100, Jan Kiszka wrote:
 Hi Philippe,

 the trivial bugs are fixed already: see #2152 for the reason why
 rt_dev_read timeouts took too long (the timer mode was ignored by
 xnsynch_sleep_on),
 Ok.

  and I also found a yet invisible bug in
 rtdm_toseq_init that would have picked the wrong time base (#2153).

 Using xnpod_current_thread()'s time base in rtdm_toseq_init() will
 always pick the master one when called over a secondary mode context,
 which according to the doc, is allowed. Is this intended?
 
 rtdm_toseq_init will only be called over primary context, it belongs
 into the same context as rtdm_mutex_timedlock, rtdm_sem_timeddown, etc.
 

OK, got your point: you were referring to the rtdm_toseq_init doc which
talks about secondary mode usage - this needs fixing now (and never made
any sense :( ).

Jan





Re: [Xenomai-core] Re: Linux lock-up with rtcanrecv

2007-02-08 Thread Jan Kiszka
Jan Kiszka wrote:
 Wolfgang Grandegger wrote:
 Hi Jan,

 Jan Kiszka wrote:
 Hi all,

 fiddling with latest Xenomai trunk and 2.3.x on one of our robots (there
 is still a bug in trunk /wrt broken timeouts of rt_dev_read on
 xeno_16550A - different issue...), I ran into a weird behaviour of
 rtcanrecv:

 I have a continuous stream of a few thousand packets/s towards the
 robot. When I start up two rtcanrecv rtcan0 -p1000 instances (or one +
 our own receiver application), the second one causes a Linux lock-up.
 Sometimes this happens during startup of the second rtcanrecv, but at
 latest on its termination. Other RT tasks are still running. I can
 resolve the lock-up by pulling the CAN cable, everyone is fine
 afterwards and can be cleaned up. I played with quite a few combinations
 of recent ipipe patches and Xenomai revisions (even back to #1084 in
 v2.3.x), no noticeable difference.

 Seems like I have to take a closer look - once time permits and the
 robot is available. So any ideas or attempts to reproduce this are
 welcome, current .config attached (no magic knob found there yet).
 I will try to reproduce the problem a.s.a.p.
 
 TiA.

Grmbl. You can forget about it, I found the magic knob.

Normally I don't even notice that the tracer is running in the background.
This time I did notice it, but didn't realise that it was the reason.
Simply disabling it at runtime solves my problem. It looks like
its overhead, combined with a few more Linux debug options, the high
IRQ load, and a low-end board, drove the otherwise only moderately loaded
box into starvation.

Sorry for making noise, let's go back to business.

Jan





[Xenomai-core] [PATCH] Enable usage of pthread_set_{mode, name}_np from kernel space

2007-02-08 Thread Stelian Pop
Hi,

Is there a reason why pthread_set_{mode,name}_np are not allowed to be
called from a kernel-space POSIX thread?

If there is none, please apply the patch below.

Thanks.

Index: ksrc/skins/posix/thread.c
===================================================================
--- ksrc/skins/posix/thread.c   (revision 2162)
+++ ksrc/skins/posix/thread.c   (working copy)
@@ -745,3 +745,5 @@
 EXPORT_SYMBOL(pthread_self);
 EXPORT_SYMBOL(pthread_make_periodic_np);
 EXPORT_SYMBOL(pthread_wait_np);
+EXPORT_SYMBOL(pthread_set_name_np);
+EXPORT_SYMBOL(pthread_set_mode_np);

-- 
Stelian Pop [EMAIL PROTECTED]

