Philippe Gerum wrote:
> ...
> The issue that worries me - provided that we bound the adjustment
> offset to the duration of one tick after some jitter - is that any
> attempt to get intra-tick precision would open a window in which the
> elapsed time disagrees between two scales: the actual count of
> jiffies tracked by the timer ISR on the timekeeper CPU, and the
> corrected time value returned by rtdm_read_clock. This discrepancy
> would last for the total duration of the jitter. E.g., with a 100 us
> period, xnpod_get_time() could return 2 (ticks) while rtdm_read_clock
> reports 300 us instead of 200 us. Spuriously mixing both units in
> applications would lead to some funky chaos.
> 

Trying to pick up this thread again: I tried to understand your
concerns, but so far I have failed to imagine a concrete scenario.
Could you sketch such a "funky chaotic" situation from the
application's point of view? And what would prevent us from improving
the accuracy of other timestamping API functions beyond RTDM as well,
e.g. the tick-to-nanosecond conversion in rt_timer_ticks2ns()?
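
To make sure we are talking about the same thing, here is how I
currently read the scenario - a minimal, standalone sketch in plain C.
This is not actual Xenomai code; the names mirror the APIs mentioned
above, but the adjust_us mechanism and all values are made up purely
for illustration:

#include <stdio.h>

#define TICK_PERIOD_US 100	/* 100 us timer period, as in the example */

static unsigned long jiffies;	/* incremented by the (simulated) timer ISR */
static long adjust_us;		/* sub-tick correction, bounded to one tick */

/* What the timer ISR on the timekeeper CPU does per tick. */
static void timer_isr(void)
{
	jiffies++;
}

/* Tick-based scale, as xnpod_get_time() reports it. */
static unsigned long get_time_ticks(void)
{
	return jiffies;
}

/* Corrected, intra-tick-precise scale in microseconds. */
static long read_clock_us(void)
{
	return (long)jiffies * TICK_PERIOD_US + adjust_us;
}

int main(void)
{
	timer_isr();
	timer_isr();		/* two ticks elapsed: nominally 200 us */

	adjust_us = 100;	/* jitter drives the correction up to one tick */

	/* One scale now says 2 (ticks), the other 300 (us) instead of
	 * 200 - the discrepancy window described above. */
	printf("ticks=%lu, corrected=%ld us\n",
	       get_time_ticks(), read_clock_us());
	return 0;
}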

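Regarding the second question: improving accuracy there could e.g.
mean adding the intra-tick fraction on top of the plain
ticks-times-period multiplication. Again a purely hypothetical sketch
- read_subtick_offset_ns() and the period value are my assumptions,
not the actual implementation of rt_timer_ticks2ns():

#include <stdint.h>
#include <stdio.h>

static const uint64_t tick_period_ns = 100000;	/* assumed 100 us period */

/* Assumed helper: elapsed time within the current tick, e.g. derived
 * from a free-running hardware counter. Stubbed here. */
static uint64_t read_subtick_offset_ns(void)
{
	return 37000;	/* pretend we are 37 us into the current tick */
}

/* Plain conversion: quantized to whole tick periods. */
static uint64_t ticks2ns(uint64_t ticks)
{
	return ticks * tick_period_ns;
}

/* Refined readout of "now": whole ticks plus the intra-tick fraction,
 * buying sub-tick resolution at the price of the discrepancy window
 * discussed above. */
static uint64_t now_ns(uint64_t ticks)
{
	return ticks * tick_period_ns + read_subtick_offset_ns();
}

int main(void)
{
	printf("coarse=%llu ns, fine=%llu ns\n",
	       (unsigned long long)ticks2ns(2),
	       (unsigned long long)now_ns(2));
	return 0;
}
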
Jan
