Jim Cromie wrote:

IIUC, a periodic timer cannot handle this problem, except by slicing
time into small chunks and transitioning on multiples of them.
This sounds bad: it either raises the interrupt rate or makes the
pulse-width changes very coarse (limited steps).


As Gilles pointed out, asking for aperiodic mode (TM_ONESHOT) would properly solve this issue.



Is this something I should be using the nucleus for?
(While I'm asking: rt_timer_start is part of the native services,
and xntimer_start is a nucleus service?)


xn* services are part of the nucleus interface, which should be used solely to implement the upper APIs or skins (e.g. native, RTAI, VxWorks, VRTX, etc.); IOW, these services form the abstract RTOS API. If you need to program apps, then I'd suggest you use the native API (rt_*), which is built over it.

It seems that the API supports creating multiple timers,
but I see no advantage in using multiple timers vs. a single one for my app;
either way I have to reprogram the timer duration and handler each time.


See rt_alarm_*() from the native API; this allows you to create any number of watchdogs (in the VxWorks sense). Internally, the nucleus uses this ability to have multiple concurrent timers to properly deal with timeouts, because in the course of execution, one may find it very useful to have more than a single pending timer on any given task/thread.
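For the PWM case, an aperiodic alarm whose handler restarts it with the width of the next half-period might look roughly like the sketch below. This is a hedged, non-runnable sketch only: the rt_alarm_create()/rt_alarm_start() signatures are from memory of the fusion-era native skin (kernel-space handler form) and may not match your tree, so check native/alarm.h before relying on them; the pin-toggling step is left as a stub.

```c
#include <native/alarm.h>
#include <native/timer.h>

static RT_ALARM pulse_alarm;

/* Kernel-space handler form (assumed signature -- verify against
 * native/alarm.h in your tree). */
static void pulse_handler(RT_ALARM *alarm, void *cookie)
{
    /* Toggle the output pin here, then restart the alarm with the
     * width of the next half-period; in aperiodic mode this value
     * need not be a tick multiple. */
}

int setup_pwm(void)
{
    int err = rt_alarm_create(&pulse_alarm, "pwm", &pulse_handler, NULL);
    if (err)
        return err;
    /* First shot after 250 us; no fixed interval, i.e. oneshot. */
    return rt_alarm_start(&pulse_alarm, 250000, TM_INFINITE);
}
```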


Also, I see that

int xnpod_start_timer(u_long nstick, xnisr_t tickhandler)
    Start the system timer.


suggests that there is one hardware timer.
Isn't there hardware that has more than one timer? (Examples?)
How does one use the extra capabilities?


The hardware timer is used to provide the multiple software timer/alarm/watchdog abstraction; the former defines the overall timing mode for the latter. IOW, if the hw timer is set in aperiodic mode, the alarm system will be able to provide wakeup times not necessarily aligned on tick boundaries, at the expense of reprogramming the hw timer each time the current watchdog elapses, in order to wait for the next one. If it is set in periodic mode, all timeout specs passed to the alarm system will be interpreted as ticks/jiffies, and if the underlying hw timer is a true PIT, then programming it for the desired frequency is done once and for all. Some (most?) archs like PPC or ia64 have hw timers which are inherently oneshot, so on those the periodic mode is just emulated using the aperiodic one.

It's not unusual for archs to have multiple hw timers, or multiple timing channels on a single hw timer (e.g. the 8254 PIT on x86). When the LAPIC is enabled on x86, for instance, fusion uses a timer interrupt vector different from Linux's, and since there is one LAPIC per CPU, fusion practically maintains a separate pending timer list for each CPU. Aside from its internal per-CPU interval timer, Itanium 2 also provides a slew of programmable counters which trigger interrupts upon overflow and can be programmed through the performance monitoring unit, etc. But those are not made directly accessible to the application.

IOW, fusion selects an appropriate hw time source depending on the underlying arch, then the app is expected to use the timer/alarm/watchdog abstraction which is built over it.

Finally, I note that Linux appears to be moving towards a uniform treatment of time in nanoseconds, rather than jiffies/ticks (I think this work, by John Stultz, is in -mm now).
Do you have a sense of how this might affect fusion going forward?


AFAICS, this won't affect fusion, since we don't mix both time bases. fusion will still propagate incoming timer interrupts at the proper pace to Linux, which will then proceed with its own housekeeping as usual. Wallclock values in fusion depend on the current timing mode: either jiffies, or a count of nanoseconds as converted from local TSC values (which is not that SMP-consistent, btw). Still, this would be an opportunity to provide some RT-safe service returning the normalized Linux time in nanoseconds too, because gettimeofday() is clearly unsafe in this context, especially in SMP mode.

thanks in advance,
Jim Cromie

_______________________________________________
Rtai-dev mailing list
[EMAIL PROTECTED]
https://mail.gna.org/listinfo/rtai-dev


--

Philippe.
