Hi,

between some football half-times over the last days ;), I played a bit
with a hand-optimised xnarch_tsc_to_ns() for x86. Using scaled math, I
achieved conversions between 3 times (P-I 133 MHz) and 4 times
(P-M 1.3 GHz) faster than with the current variant. While this
optimisation only saves a few tens of nanoseconds on high-end hardware,
slow processors can gain several hundred nanoseconds per conversion
(my P-133: -600 ns).
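For illustration only, roughly the kind of scaled math I mean -- not
the actual patch, just a sketch; the constant names, the shift value
and the 32-bit scale factor are made up here and would of course depend
on the calibrated clock frequency:

#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL

/* Made-up example shift: pick it so that
 * scale = (NSEC_PER_SEC << SHIFT) / tsc_freq_hz still fits into 32 bits.
 */
#define TSC2NS_SHIFT	24

static uint32_t tsc2ns_scale;	/* ns per tick as a fixed-point value */

static void init_tsc2ns(uint64_t tsc_freq_hz)
{
	tsc2ns_scale = (uint32_t)((NSEC_PER_SEC << TSC2NS_SHIFT) / tsc_freq_hz);
}

/* ns ~= (tsc * scale) >> SHIFT, evaluated as two 32x32->64 partial
 * products, so no 64-bit division is needed on the hot path.  The
 * truncated scale factor is what makes very long intervals slightly
 * less accurate; the formula is valid as long as the result fits
 * into 64 bits of nanoseconds.
 */
static uint64_t fast_tsc_to_ns(uint64_t tsc)
{
	uint64_t lo = (uint32_t)tsc;
	uint64_t hi = tsc >> 32;

	return ((lo * tsc2ns_scale) >> TSC2NS_SHIFT) +
	       ((hi * tsc2ns_scale) << (32 - TSC2NS_SHIFT));
}

init_tsc2ns() would run once at calibration time; each conversion is
then just two multiplies, two shifts and an add.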

This does not come for free: conversion accuracy for very large values
is slightly worse, but that is likely negligible compared to the clock
accuracy of the TSCs themselves (does anyone have real numbers on the
latter, BTW?).

As we lose some bits on the way from TSC to ns, converting back still
requires a "real" division (i.e. the existing, slower xnarch_ns_to_tsc).
Otherwise, we would already get significant errors for small intervals.
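For the reverse direction, i.e. what I would keep as is: go through the
full multiply-then-divide against the real clock frequency instead of
trying to invert the truncated scale factor. Again only a plain C
stand-in, not the actual Xenomai helper:

/* A 32-bit x86 build would split this up instead of relying on the
 * compiler's 128-bit type assumed here.
 */
static uint64_t slow_ns_to_tsc(uint64_t ns, uint64_t tsc_freq_hz)
{
	/* keep ns * freq exact before the single truncating division */
	return (uint64_t)(((unsigned __int128)ns * tsc_freq_hz)
			  / 1000000000ULL /* NSEC_PER_SEC */);
}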

To avoid losing the optimisation again in ns_to_tsc, I thought about
basing the whole internal timer arithmetic on nanoseconds instead of
TSCs, as it is now. Although I have dug quite a lot into the current
timer subsystem over the last weeks, I may still be overlooking
aspects, and I'm x86-biased. Hence my question before thinking or even
patching further in this direction: what was the motivation for
choosing TSCs as the internal time base? Any pitfalls down the road
(besides introducing regressions)?

Jan


PS: All this would be 2.3-stuff, for sure.

