Jan Kiszka wrote:
> Philippe Gerum wrote:
> 
>>On Tue, 2007-01-02 at 14:56 +0100, Gilles Chanteperdrix wrote:
>>
>>>Philippe Gerum wrote:
>>>
>>>>On Tue, 2007-01-02 at 14:30 +0100, Gilles Chanteperdrix wrote:
>>>>
>>>>
>>>>>Philippe Gerum wrote:
>>>>>
>>>>>
>>>>>>On Tue, 2007-01-02 at 11:20 +0100, Jan Kiszka wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>>Hi all - and happy new year,
>>>>>>>
>>>>>>>I haven't looked at all the new code yet, only the commit messages. I
>>>>>>>found something similar to my fast-forward-on-timer-overrun patch in
>>>>>>>#2010 and wondered if Gilles' original concerns on side effects for the
>>>>>>>POSIX skin were addressed [1]. I recalled that my own final summary on
>>>>>>>this was "leave it as it is" [2].
>>>>>>>
>>>>>>
>>>>>>The best approach is to update the POSIX skin so that it does not rely
>>>>>>on the timer code to act in a sub-optimal way; that's why this patch
>>>>>>found its way in. Scheduling and processing timer shots uselessly is a
>>>>>>bug, not a feature in this case.
>>>>>
>>>>>There is some work to be done on the POSIX skin anyway; this will all be
>>>>>done at once. By the way, I tested the trunk on ARM, and I still get a lockup
>>>>>when the latency period is too low. I wonder if we should not compare to
>>>>>"now + nkschedlat", or even use xnarch_get_cpu_tsc() instead of "now".
>>>>
>>>>You mean as below, in order to account for the time spent in the
>>>>handler(s)?
>>>>
>>>>-   while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
>>>>+   while ((xntimerh_date(&timer->aplink) += timer->interval) < xnarch_get_cpu_tsc())
>>>>                ;
>>>>
>>>
>>>I mean even:
>>>
>>>     while ((xntimerh_date(&timer->aplink) += timer->interval) <
>>>             xnarch_get_cpu_tsc() + nkschedlat)
>>>                 ;
>>>
>>>Because if the timer date is between now and now + nkschedlat, its
>>>handler will be called again.
>>>
>>
>>Ack.
>>
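
To spell out the lockup scenario with made-up numbers: say now = 1000 and
nkschedlat = 50 when the tick handler runs, and the fast-forward loop leaves
the timer's date at 1020, i.e. past "now" but still inside the latency
window. The due-test of the scan loop (visible as context in the patch
below) then lets the timer fire again:

	/* Due-test from the aperiodic scan loop. With date = 1020,
	 * 1020 - 50 = 970 and "970 > 1000" is false, so we do not
	 * stop scanning here: the handler runs once more, and with
	 * a small enough interval we never catch up -> lockup. */
	if (xntimerh_date(&timer->aplink) - nkschedlat > now)
		/* No need to continue in aperiodic mode since timeout
		   dates are ordered by increasing values. */
		break;

Fast-forwarding past now + nkschedlat instead guarantees the date lands
beyond that window.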
> 
> 
> Keep in mind that this code now introduces a performance regression for
> the non-overrun case, specifically when xnarch_get_cpu_tsc() costs more
> than a simple CPU register access.
> 
> My previous "leave it as it is" was also based on the consideration that
> we shouldn't pay too much in hot paths for things that only go wrong on
> misdesigned systems.

What about a greedy version like this? It reads the tsc once when entering
the scan loop, refreshes it only after a handler has actually run, and
fast-forwards past now + nkschedlat in one go.

-- 
                                                 Gilles Chanteperdrix
Index: ksrc/nucleus/timer.c
===================================================================
--- ksrc/nucleus/timer.c	(revision 2037)
+++ ksrc/nucleus/timer.c	(working copy)
@@ -184,10 +184,10 @@
 	xntimer_t *timer;
 	xnticks_t now;
 
+	now = xnarch_get_cpu_tsc();
 	while ((holder = xntimerq_head(timerq)) != NULL) {
 		timer = aplink2timer(holder);
 
-		now = xnarch_get_cpu_tsc();
 		if (xntimerh_date(&timer->aplink) - nkschedlat > now)
 			/* No need to continue in aperiodic mode since timeout
 			   dates are ordered by increasing values. */
@@ -199,6 +199,7 @@
 			if (!testbits(nktbase.status, XNTBLCK)) {
 				timer->handler(timer);
 
+				now = xnarch_get_cpu_tsc();
 				if (timer->interval == XN_INFINITE ||
 				    !testbits(timer->status, XNTIMER_DEQUEUED)
 				    || testbits(timer->status, XNTIMER_KILLED))
@@ -221,8 +222,9 @@
 			   translates into precious microsecs on low-end hw. */
 			__setbits(sched->status, XNHTICK);
 
-		while ((xntimerh_date(&timer->aplink) += timer->interval) < now)
-		    ;
+		do {
+			xntimerh_date(&timer->aplink) += timer->interval;
+		} while (xntimerh_date(&timer->aplink) < now + nkschedlat);
 		xntimer_enqueue_aperiodic(timer);
 	}
 
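For illustration only, here is a standalone sketch of the two fast-forward
strategies outside the nucleus; xnticks_t is modelled as a plain 64-bit
integer and all values are invented:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t xnticks_t;	/* stand-in for the nucleus type */

	/* Old strategy: stop as soon as the date passes "now". The date
	 * may land inside [now, now + schedlat] and still be due. */
	static xnticks_t ff_old(xnticks_t date, xnticks_t interval,
				xnticks_t now)
	{
		while ((date += interval) < now)
			;
		return date;
	}

	/* Greedy strategy: advance at least one period, and only stop
	 * once the date clears the scheduling latency window. */
	static xnticks_t ff_greedy(xnticks_t date, xnticks_t interval,
				   xnticks_t now, xnticks_t schedlat)
	{
		do
			date += interval;
		while (date < now + schedlat);
		return date;
	}

	int main(void)
	{
		xnticks_t now = 1000, schedlat = 50, interval = 10;

		/* Old: 995 -> 1005, inside the window, fires again. */
		printf("old:    %llu\n",
		       (unsigned long long)ff_old(995, interval, now));
		/* Greedy: 995 -> 1055, beyond now + schedlat. */
		printf("greedy: %llu\n",
		       (unsigned long long)ff_greedy(995, interval, now,
						     schedlat));
		return 0;
	}

Note that the patch also moves the xnarch_get_cpu_tsc() read out of the
scan loop and refreshes it only after a handler has actually run, so the
no-overrun path does not pay extra tsc accesses per queued timer.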