Re: [Xenomai-core] Re: POSIX include problem

2006-03-16 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
   Hi Gilles,
   
   don't know yet what's going wrong, but the following demo code doesn't
   compile against the POSIX skin due to unresolved SIG_BLOCK:
   
    #include <pthread.h>
    #include <signal.h>
   
   int main()
   {
   return SIG_BLOCK;
   }
   
   Comment out the pthread include, and it will work again. Any ideas?
 
 Fixed in revision 714
 

Yep, thanks.

I found this while trying Thomas Gleixner's cyclic test over the POSIX
skin (http://www.tglx.de/projects/misc/cyclictest). After fixing a
rather ugly bug in his code (missing mlockall) I ran into an as yet
unknown issue with the POSIX skin: the code just hangs when wrapped to
Xenomai.
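
For the record, the mlockall fix boils down to something like this
minimal sketch (error handling is mine, the rest of cyclictest omitted):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock current and future pages into RAM so the measurement
       loop never takes a page fault. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall");
        return 1;
    }

    /* ... real-time setup and measurement loop ... */
    return 0;
}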

Compilation:
gcc -o cyclictest cyclictest.c <posix-cflags> <posix-ldflags>

Invocation:
cyclictest -n -p 99

Maybe it's just real-time starvation (but the watchdog doesn't trigger,
and I do not see why it should starve), maybe it's a crash (I will try to
attach a serial console later). Anyway, it's an easy test case (and also
a nice tool), so you may want to have a look as well.

Jan





Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-16 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
  Gilles Chanteperdrix wrote:
   Jan Kiszka wrote:
 Likely too simple: The periodic IRQ seems to pop up on every CPU so that
 the TSC could be recorded, but will this happen synchronously? At least
 we will see (IRQ) jitters, and those jitters could already create in the
 single-CPU case a non-monotonic clock...
   
   I do not know how this issue is solved in Linux, but there seems to be a
   simple solution: before adding the tsc offset to the last tick time,
   this tsc offset should be compared with the tick duration in tsc
   counts; if it is greater, then replace it with the tick duration in tsc
   counts.
   
  
  Hmm, I would rather express it in absolute tsc values, i.e. always save
  the tuple (absolute_tsc, jiffies):
  
  [timer IRQ]
  new_tsc = read_tsc();
  if (new_tsc > old_tsc + period_in_tsc_ticks)
      new_tsc = old_tsc + period_in_tsc_ticks;
  old_tsc = new_tsc;
  
  Disclaimer: I haven't thought about potential accuracy side effects of
  this implementation, e.g. what would happen over the long term if the
  condition is always fulfilled and executed...

Here is what I meant:

[timer IRQ]
irq_tsc = rdtsc();
irq_jitter_ns = read_8254();

[xnpod_gettime_offset]
offset_ns = tsc2ns(rdtsc() - irq_tsc) + irq_jitter_ns;
if (offset_ns > period_ns)
    offset_ns = period_ns; /* Avoid a non-monotonic clock. */
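
Spelled out as a self-contained C sketch; rdtsc(), read_8254() and
tsc2ns() are extern placeholders for the actual primitives, not the real
Xenomai API:

#include <stdint.h>

extern uint64_t rdtsc(void);           /* placeholder */
extern uint64_t read_8254(void);       /* placeholder */
extern uint64_t tsc2ns(uint64_t tsc);  /* placeholder */

static uint64_t irq_tsc;        /* TSC latched at the last timer IRQ */
static uint64_t irq_jitter_ns;  /* IRQ latency sampled from the 8254 */
static uint64_t period_ns;      /* nominal tick period, set at timer start */

/* Timer interrupt side. */
void timer_irq(void)
{
    irq_tsc = rdtsc();
    irq_jitter_ns = read_8254();
}

/* Reader side: offset since the last tick, clamped to one period so
   the clock cannot overshoot and later jump backwards. */
uint64_t gettime_offset(void)
{
    uint64_t offset_ns = tsc2ns(rdtsc() - irq_tsc) + irq_jitter_ns;

    if (offset_ns > period_ns)
        offset_ns = period_ns;
    return offset_ns;
}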

-- 
Gilles Chanteperdrix.



Re: [Xenomai-core] Synchronising TSC and periodic timer

2006-03-16 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
   Gilles Chanteperdrix wrote:
Jan Kiszka wrote:
  Likely too simple: The periodic IRQ seems to pop up on every CPU so that
  the TSC could be recorded, but will this happen synchronously? At least
  we will see (IRQ) jitters, and those jitters could already create in the
  single-CPU case a non-monotonic clock...

I do not know how this issue is solved in Linux, but there seems to be a
simple solution: before adding the tsc offset to the last tick time,
this tsc offset should be compared with the tick duration in tsc
counts; if it is greater, then replace it with the tick duration in tsc
counts.

   
   Hmm, I would rather express it in absolute tsc values, i.e. always save
   the tuple (absolute_tsc, jiffies):
   
   [timer IRQ]
   new_tsc = read_tsc();
   if (new_tsc > old_tsc + period_in_tsc_ticks)
       new_tsc = old_tsc + period_in_tsc_ticks;
   old_tsc = new_tsc;
   
   Disclaimer: I haven't thought about potential accuracy side effects of
   this implementation, e.g. what would happen over the long term if the
   condition is always fulfilled and executed...
 
 Here is what I meant:
 
 [timer IRQ]
 irq_tsc = rdtsc();
 irq_jitter_ns = read_8254();
 
 [xnpod_gettime_offset]
 offset_ns = tsc2ns(rdtsc() - irq_tsc) + irq_jitter_ns;
 if (offset_ns > period_ns)
     offset_ns = period_ns; /* Avoid a non-monotonic clock. */
 

Ah, I see. Hmm, wouldn't this create a resolution hole between the
time offset_ns exceeds one precise period and the time the next IRQ
actually strikes? The returned timestamps would then just stick to the
last (irq_tsc + irq_jitter_ns) until the next update occurs.
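
A tiny simulation with invented numbers makes the hole visible: once the
raw offset passes one period, every reading is clamped to the same value
until the late IRQ finally arrives:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t period_ns = 1000000;  /* invented 1 ms tick */
    /* Raw offsets sampled while the next IRQ is ~2 us late: */
    uint64_t raw[] = { 999000, 1000500, 1002000 };

    for (int i = 0; i < 3; i++) {
        uint64_t reported = raw[i] > period_ns ? period_ns : raw[i];
        /* The last two readings both report 1000000 ns: the clock
           appears frozen during the hole. */
        printf("raw %7llu ns -> reported %7llu ns\n",
               (unsigned long long)raw[i],
               (unsigned long long)reported);
    }
    return 0;
}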

Jan



Re: [Xenomai-core] Re: POSIX include problem

2006-03-16 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
   ...
   I found this while trying Thomas Gleixner's cyclic test over the POSIX
   skin (http://www.tglx.de/projects/misc/cyclictest). After fixing a
   rather ugly bug in his code (missing mlockall) I ran into an as yet
   unknown issue with the POSIX skin: the code just hangs when wrapped to
   Xenomai.
   
   Compilation:
   gcc -o cyclictest cyclictest.c <posix-cflags> <posix-ldflags>
   
   Invocation:
   cyclictest -n -p 99
   
   Maybe it's just real-time starvation (but the watchdog doesn't trigger,
   and I do not see why it should starve), maybe it's a crash (I will try
   to attach a serial console later). Anyway, it's an easy test case (and
   also a nice tool), so you may want to have a look as well.
 
 A second, better guess: the created thread is not a Xenomai real-time
 thread, so it never suspends (Xenomai calls return EPERM when not called
 from a real-time thread) and the program hangs. Replacing
 sched_setscheduler with pthread_setschedparam should solve this issue.

Haven't tried this yet, but I'm quite sure that this is the reason. If
so, this must have been a classic Linux SCHED_FIFO lock-up.
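
For reference, the suggested replacement amounts to roughly this sketch
(priority value and error handling are illustrative, not the actual
cyclictest patch):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Call from within the thread that should become real-time; per the
   explanation above, this goes through the POSIX skin, while
   sched_setscheduler() leaves the thread a plain Linux thread. */
static void make_thread_rt(int prio)
{
    struct sched_param param;

    param.sched_priority = prio;  /* e.g. 99, as in the test run */
    if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &param))
        fprintf(stderr, "pthread_setschedparam failed\n");
}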

 
 I would not be surprised if, with NPTL, sched_setscheduler had an effect
 on the whole process, i.e. set the priority of all the threads in the
 process.
 

From reading the POSIX spec, I would say that calling sched_setscheduler
multiple times from individual threads indicates wrong usage, doesn't
it? And what NPTL does with it, specifically in the presence of multiple
threads, is a good question...

Jan


