Re: [Xenomai-core] Re: SVN checkin #2010

2007-01-03 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 
Philippe Gerum wrote:


On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:


Philippe Gerum wrote:


On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:



Philippe Gerum wrote:



On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:




Hi all - and happy new year,

I haven't looked at all the new code yet, only the commit messages. I
found something similar to my fast-forward-on-timer-overrun patch in
#2010 and wondered if Gilles' original concerns on side effects for the
POSIX skin were addressed [1]. I recalled that my own final summary on
this was "leave it as it is" [2].


The best approach is to update the POSIX skin so that it does not rely
on the timer code to act in a sub-optimal way; that's why this patch
found its way in. Scheduling and processing timer shots uselessly is a
bug, not a feature in this case.

There is some work to be done on the POSIX skin anyway; this will all be
done at once. By the way, I tested the trunk on ARM, and I still get a lockup
when the latency period is too low. I wonder if we should not compare to
now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.

You mean as below, in order to account for the time spent in the 
handler(s)?   

-	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
+	while ((xntimerh_date(&timer->aplink) += timer->interval) <
+	       xnarch_get_cpu_tsc())
		;


I mean even:

while ((xntimerh_date(&timer->aplink) += timer->interval) <
       xnarch_get_cpu_tsc() + nkschedlat)
	;

Because if the timer date is between now and now + nkschedlat, its
handler will be called again.


Ack.



Keep in mind that this code is now a performance regression for the
non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
than just a simple CPU register access.

My previous "leave it as it is" was also due to the consideration that
we shouldn't pay too much in hotpaths for things that go wrong on
misdesigned systems.
 
 
 What about a greedy version like this?

This version seems to help a bit: starting latency -p 100 on ARM no
longer locks up immediately; the latency test runs with some overruns
for a while before locking up. latency -p 200 runs with no overruns,
but eventually locks up as well.


-- 
 Gilles Chanteperdrix

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Re: SVN checkin #2010

2007-01-03 Thread Philippe Gerum
On Tue, 2007-01-02 at 15:32 +0100, Gilles Chanteperdrix wrote:

 What about a greedy version like this?
 

Applied, thanks.

-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Philippe Gerum
On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
 Hi all - and happy new year,
 
 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].
 

The best approach is to update the POSIX skin so that it does not rely
on the timer code to act in a sub-optimal way; that's why this patch
found its way in. Scheduling and processing timer shots uselessly is a
bug, not a feature in this case.

 Jan
 
 [1]https://mail.gna.org/public/xenomai-core/2006-08/msg00122.html
 [2]https://mail.gna.org/public/xenomai-core/2006-08/msg00133.html
 
-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
 
Hi all - and happy new year,

I haven't looked at all the new code yet, only the commit messages. I
found something similar to my fast-forward-on-timer-overrun patch in
#2010 and wondered if Gilles' original concerns on side effects for the
POSIX skin were addressed [1]. I recalled that my own final summary on
this was "leave it as it is" [2].

 
 
 The best approach is to update the POSIX skin so that it does not rely
 on the timer code to act in a sub-optimal way; that's why this patch
 found its way in. Scheduling and processing timer shots uselessly is a
 bug, not a feature in this case.

There is some work to be done on the POSIX skin anyway; this will all be
done at once. By the way, I tested the trunk on ARM, and I still get a lockup
when the latency period is too low. I wonder if we should not compare to
now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.

-- 
 Gilles Chanteperdrix



[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Philippe Gerum
On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
  
 Hi all - and happy new year,
 
 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].
 
  
  
  The best approach is to update the POSIX skin so that it does not rely
  on the timer code to act in a sub-optimal way; that's why this patch
  found its way in. Scheduling and processing timer shots uselessly is a
  bug, not a feature in this case.
 
 There is some work to be done on the POSIX skin anyway; this will all be
 done at once. By the way, I tested the trunk on ARM, and I still get a lockup
 when the latency period is too low. I wonder if we should not compare to
 now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.

You mean as below, in order to account for the time spent in the handler(s)?

-	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
+	while ((xntimerh_date(&timer->aplink) += timer->interval) <
+	       xnarch_get_cpu_tsc())
	;

-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
 
Philippe Gerum wrote:

On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:


Hi all - and happy new year,

I haven't looked at all the new code yet, only the commit messages. I
found something similar to my fast-forward-on-timer-overrun patch in
#2010 and wondered if Gilles' original concerns on side effects for the
POSIX skin were addressed [1]. I recalled that my own final summary on
this was "leave it as it is" [2].



The best approach is to update the POSIX skin so that it does not rely
on the timer code to act in a sub-optimal way; that's why this patch
found its way in. Scheduling and processing timer shots uselessly is a
bug, not a feature in this case.

There is some work to be done on the POSIX skin anyway; this will all be
done at once. By the way, I tested the trunk on ARM, and I still get a lockup
when the latency period is too low. I wonder if we should not compare to
now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.
 
 
 You mean as below, in order to account for the time spent in the handler(s)?  
 
 -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
 +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 +	       xnarch_get_cpu_tsc())
 	;
 

I mean even:

while ((xntimerh_date(&timer->aplink) += timer->interval) <
       xnarch_get_cpu_tsc() + nkschedlat)
	;

Because if the timer date is between now and now + nkschedlat, its
handler will be called again.

-- 
 Gilles Chanteperdrix



[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Philippe Gerum
On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
  
 Philippe Gerum wrote:
 
 On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
 
 
 Hi all - and happy new year,
 
 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].
 
 
 
 The best approach is to update the POSIX skin so that it does not rely
 on the timer code to act in a sub-optimal way; that's why this patch
 found its way in. Scheduling and processing timer shots uselessly is a
 bug, not a feature in this case.
 
 There is some work to be done on the POSIX skin anyway; this will all be
 done at once. By the way, I tested the trunk on ARM, and I still get a lockup
 when the latency period is too low. I wonder if we should not compare to
 now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.
  
  
  You mean as below, in order to account for the time spent in the 
  handler(s)?
  
  -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
  +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
  +	       xnarch_get_cpu_tsc())
  	;
  
 
 I mean even:
 
  	while ((xntimerh_date(&timer->aplink) += timer->interval) <
  	       xnarch_get_cpu_tsc() + nkschedlat)
  	;
 
 Because if the timer date is between now and now + nkschedlat, its
 handler will be called again.
 

Ack.

-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Jan Kiszka
Philippe Gerum wrote:
 On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:

 Philippe Gerum wrote:

 On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:


 Hi all - and happy new year,

 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].


 The best approach is to update the POSIX skin so that it does not rely
 on the timer code to act in a sub-optimal way; that's why this patch
 found its way in. Scheduling and processing timer shots uselessly is a
 bug, not a feature in this case.
 There is some work to be done on the POSIX skin anyway; this will all be
 done at once. By the way, I tested the trunk on ARM, and I still get a lockup
 when the latency period is too low. I wonder if we should not compare to
 now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.

 You mean as below, in order to account for the time spent in the 
 handler(s)?

 -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
 +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 +	       xnarch_get_cpu_tsc())
 	;

 I mean even:

 	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 	       xnarch_get_cpu_tsc() + nkschedlat)
 	;

 Because if the timer date is between now and now + nkschedlat, its
 handler will be called again.

 
 Ack.
 

Keep in mind that this code is now a performance regression for the
non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
than just a simple CPU register access.

My previous "leave it as it is" was also due to the consideration that
we shouldn't pay too much in hotpaths for things that go wrong on
misdesigned systems.

Jan





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Philippe Gerum wrote:
 
On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:

Philippe Gerum wrote:

On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:


Philippe Gerum wrote:


On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:



Hi all - and happy new year,

I haven't looked at all the new code yet, only the commit messages. I
found something similar to my fast-forward-on-timer-overrun patch in
#2010 and wondered if Gilles' original concerns on side effects for the
POSIX skin were addressed [1]. I recalled that my own final summary on
this was "leave it as it is" [2].


The best approach is to update the POSIX skin so that it does not rely
on the timer code to act in a sub-optimal way; that's why this patch
found its way in. Scheduling and processing timer shots uselessly is a
bug, not a feature in this case.

There is some work to be done on the POSIX skin anyway; this will all be
done at once. By the way, I tested the trunk on ARM, and I still get a lockup
when the latency period is too low. I wonder if we should not compare to
now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.

You mean as below, in order to account for the time spent in the 
handler(s)?

-	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
+	while ((xntimerh_date(&timer->aplink) += timer->interval) <
+	       xnarch_get_cpu_tsc())
	;


I mean even:

 	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 	       xnarch_get_cpu_tsc() + nkschedlat)
 	;

Because if the timer date is between now and now + nkschedlat, its
handler will be called again.


Ack.

 
 
 Keep in mind that this code is now a performance regression for the
 non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
 than just a simple CPU register access.
 
 My previous "leave it as it is" was also due to the consideration that
 we shouldn't pay too much in hotpaths for things that go wrong on
 misdesigned systems.

What about a greedy version like this?

-- 
 Gilles Chanteperdrix
Index: ksrc/nucleus/timer.c
===================================================================
--- ksrc/nucleus/timer.c	(révision 2037)
+++ ksrc/nucleus/timer.c	(copie de travail)
@@ -184,10 +184,10 @@
 	xntimer_t *timer;
 	xnticks_t now;
 
+	now = xnarch_get_cpu_tsc();
 	while ((holder = xntimerq_head(timerq)) != NULL) {
 		timer = aplink2timer(holder);
 
-		now = xnarch_get_cpu_tsc();
 		if (xntimerh_date(&timer->aplink) - nkschedlat > now)
 			/* No need to continue in aperiodic mode since timeout
 			   dates are ordered by increasing values. */
@@ -199,6 +199,7 @@
 			if (!testbits(nktbase.status, XNTBLCK)) {
 				timer->handler(timer);
 
+				now = xnarch_get_cpu_tsc();
 				if (timer->interval == XN_INFINITE ||
 				    !testbits(timer->status, XNTIMER_DEQUEUED)
 				    || testbits(timer->status, XNTIMER_KILLED))
@@ -221,8 +222,9 @@
 			   translates into precious microsecs on low-end hw. */
 			__setbits(sched->status, XNHTICK);
 
-		while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
-			;
+		do {
+			xntimerh_date(&timer->aplink) += timer->interval;
+		} while (xntimerh_date(&timer->aplink) < now + nkschedlat);
 		xntimer_enqueue_aperiodic(timer);
 	}
 


[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Philippe Gerum
On Tue, 2007-01-02 at 15:22 +0100, Jan Kiszka wrote:
 Philippe Gerum wrote:
  On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:
  Philippe Gerum wrote:
  On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
 
  Philippe Gerum wrote:
 
  On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
 
 
  Hi all - and happy new year,
 
  I haven't looked at all the new code yet, only the commit messages. I
  found something similar to my fast-forward-on-timer-overrun patch in
  #2010 and wondered if Gilles' original concerns on side effects for the
  POSIX skin were addressed [1]. I recalled that my own final summary on
  this was "leave it as it is" [2].
 
 
  The best approach is to update the POSIX skin so that it does not rely
  on the timer code to act in a sub-optimal way; that's why this patch
  found its way in. Scheduling and processing timer shots uselessly is a
  bug, not a feature in this case.
  There is some work to be done on the POSIX skin anyway; this will all be
  done at once. By the way, I tested the trunk on ARM, and I still get a lockup
  when the latency period is too low. I wonder if we should not compare to
  now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.
 
  You mean as below, in order to account for the time spent in the 
  handler(s)?  
 
  -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
  +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
  +	       xnarch_get_cpu_tsc())
  	;
 
  I mean even:
 
  	while ((xntimerh_date(&timer->aplink) += timer->interval) <
  	       xnarch_get_cpu_tsc() + nkschedlat)
  	;
 
  Because if the timer date is between now and now + nkschedlat, its
  handler will be called again.
 
  
  Ack.
  
 
 Keep in mind that this code is now a performance regression for the
 non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
 than just a simple CPU register access.
 
 My previous "leave it as it is" was also due to the consideration that
 we shouldn't pay too much in hotpaths for things that go wrong on
 misdesigned systems.

Sure, but on the other hand, it's precisely when things tend to go wrong
that one may expect the system to be resilient to sporadic issues; IOW,
people who do provide for some contingency plan in their code upon
missed deadlines should be able to rely on the timer infrastructure not
to worsen the situation.

To address your concern, nothing prevents us from providing an
arch-specific wrapper like:

#ifdef reading_tsc_is_cheap
#define xnarch_refresh_from_tsc(oldtsc)  xnarch_get_cpu_tsc()
#else
#define xnarch_refresh_from_tsc(oldtsc)  (oldtsc)
#endif

Not that I would be particularly fond of that, mm, thing, but it would
allow us to fix the bogus x86+8254 setup relic, which is likely the only
one that would cause any significant delay among the supported
archs/platforms.

 
 Jan
 
-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Philippe Gerum
On Tue, 2007-01-02 at 15:32 +0100, Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
  Philippe Gerum wrote:
  
 On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:
 
 Philippe Gerum wrote:
 
 On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
 
 
 Philippe Gerum wrote:
 
 
 On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
 
 
 
 Hi all - and happy new year,
 
 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].
 
 
 The best approach is to update the POSIX skin so that it does not rely
 on the timer code to act in a sub-optimal way; that's why this patch
 found its way in. Scheduling and processing timer shots uselessly is a
 bug, not a feature in this case.
 
 There is some work to be done on the POSIX skin anyway; this will all be
 done at once. By the way, I tested the trunk on ARM, and I still get a lockup
 when the latency period is too low. I wonder if we should not compare to
 now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.
 
 You mean as below, in order to account for the time spent in the 
 handler(s)?  
 
 -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
 +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 +	       xnarch_get_cpu_tsc())
 	;
 
 
 I mean even:
 
 	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 	       xnarch_get_cpu_tsc() + nkschedlat)
 	;
 
 Because if the timer date is between now and now + nkschedlat, its
 handler will be called again.
 
 
 Ack.
 
  
  
  Keep in mind that this code is now a performance regression for the
  non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
  than just a simple CPU register access.
  
  My previous "leave it as it is" was also due to the consideration that
  we shouldn't pay too much in hotpaths for things that go wrong on
  misdesigned systems.
 
 What about a greedy version like this?
 

Likely the best trade-off.

-- 
Philippe.





[Xenomai-core] Re: SVN checkin #2010

2007-01-02 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:

 On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:

 Philippe Gerum wrote:

 On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:


 Philippe Gerum wrote:


 On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:



 Hi all - and happy new year,

 I haven't looked at all the new code yet, only the commit messages. I
 found something similar to my fast-forward-on-timer-overrun patch in
 #2010 and wondered if Gilles' original concerns on side effects for the
 POSIX skin were addressed [1]. I recalled that my own final summary on
 this was "leave it as it is" [2].

 The best approach is to update the POSIX skin so that it does not rely
 on the timer code to act in a sub-optimal way; that's why this patch
 found its way in. Scheduling and processing timer shots uselessly is a
 bug, not a feature in this case.
 There is some work to be done on the POSIX skin anyway; this will all be
 done at once. By the way, I tested the trunk on ARM, and I still get a lockup
 when the latency period is too low. I wonder if we should not compare to
 now + nkschedlat, or even use xnarch_get_cpu_tsc() instead of now.
 You mean as below, in order to account for the time spent in the 
 handler(s)?  

 -	while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
 +	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 +	       xnarch_get_cpu_tsc())
 	;

 I mean even:

 	while ((xntimerh_date(&timer->aplink) += timer->interval) <
 	       xnarch_get_cpu_tsc() + nkschedlat)
 	;

 Because if the timer date is between now and now + nkschedlat, its
 handler will be called again.

 Ack.


 Keep in mind that this code is now a performance regression for the
 non-overflow case, specifically when xnarch_get_cpu_tsc() costs more
 than just a simple CPU register access.

 My previous "leave it as it is" was also due to the consideration that
 we shouldn't pay too much in hotpaths for things that go wrong on
 misdesigned systems.
 
 What about a greedy version like this?
 
 
 
 
 
 Index: ksrc/nucleus/timer.c
 ===================================================================
 --- ksrc/nucleus/timer.c	(révision 2037)
 +++ ksrc/nucleus/timer.c	(copie de travail)
 @@ -184,10 +184,10 @@
  	xntimer_t *timer;
  	xnticks_t now;
  
 +	now = xnarch_get_cpu_tsc();
  	while ((holder = xntimerq_head(timerq)) != NULL) {
  		timer = aplink2timer(holder);
  
 -		now = xnarch_get_cpu_tsc();
  		if (xntimerh_date(&timer->aplink) - nkschedlat > now)
  			/* No need to continue in aperiodic mode since timeout
  			   dates are ordered by increasing values. */
 @@ -199,6 +199,7 @@
  			if (!testbits(nktbase.status, XNTBLCK)) {
  				timer->handler(timer);
  
 +				now = xnarch_get_cpu_tsc();
  				if (timer->interval == XN_INFINITE ||
  				    !testbits(timer->status, XNTIMER_DEQUEUED)
  				    || testbits(timer->status, XNTIMER_KILLED))
 @@ -221,8 +222,9 @@
  			   translates into precious microsecs on low-end hw. */
  			__setbits(sched->status, XNHTICK);
  
 -		while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
 -			;
 +		do {
 +			xntimerh_date(&timer->aplink) += timer->interval;
 +		} while (xntimerh_date(&timer->aplink) < now + nkschedlat);
  		xntimer_enqueue_aperiodic(timer);
  	}
  

Unless I'm overlooking some pitfall right now: looks good! Would also
avoid the #if/#else stuff Philippe reluctantly proposed and I didn't
dare to come up with on my own. :)

Jan


