Gilles Chanteperdrix wrote:
On Fri, Apr 4, 2008 at 3:25 PM, Jan Kiszka <[EMAIL PROTECTED]> wrote:
Gilles Chanteperdrix wrote:

On Fri, Apr 4, 2008 at 12:45 PM, Jan Kiszka <[EMAIL PROTECTED]> wrote:

Sebastian Smolorz wrote:


Jan Kiszka wrote:


Sebastian Smolorz wrote:


Jan Kiszka wrote:


This patch may do the trick: it uses the inverted tsc-to-ns function instead of the frequency-based one. Be warned, it is totally untested inside Xenomai; I just ran it in a user-space test program. But it may give an idea.

Your patch needed two minor corrections (ns instead of ts in the function xnarch_ns_to_tsc()) in order to compile. A short run (30 minutes) of latency -t1 seems to confirm your bug fix: there appears to be no drift.

That's good to hear.



If I got your patch correctly, it doesn't make xnarch_tsc_to_ns more precise but introduces a new function xnarch_ns_to_tsc() which is also less precise than the generic xnarch_ns_to_tsc(), right?

Yes. It is now precisely the inverse imprecision, so to speak. :)



So isn't there still the danger of getting wrong values when calling xnarch_tsc_to_ns() not in combination with xnarch_ns_to_tsc()?

Only if the user decides to implement his own conversion. Xenomai with all its skins, both in kernel and user space, should always run through the xnarch_* path.

OK, would you commit the patch?


Will do unless someone else has concerns. Gilles, Philippe? ARM and Blackfin then need to be fixed similarly; full patch attached.

Well, I am sorry, but I do not like this solution:
- the aim of scaled math is to avoid divisions, and with this patch we
end up using divisions;

Please check again: no new division due to my patch, just different parameters for the existing one.

I just checked your patch quickly, but saw that xnarch_ns_to_tsc uses llimd, so it does use division. My fast_llimd could replace both llimd calls, in xnarch_tsc_to_ns and in xnarch_ns_to_tsc.



- with scaled math we do wrong calculations, and making a correspondingly
wrong xnarch_ns_to_tsc only works for values which are later passed back
to xnarch_tsc_to_ns.

IMHO, the error is within the range of the clock's precision, if not below it. So striving for a mathematically precise conversion of imprecise physical values makes no sense to me. That is why I originally proposed the scaled-math optimization.

Now that I have understood what really happens, I disagree with this
approach. Take the implementation of clock_gettime (or
rtdm_clock_read, for that matter). They basically do
xnarch_tsc_to_ns(ipipe_read_tsc()). The relative error may be small,
but in the very frequent use case of subtracting two results of
consecutive reads of ipipe_read_tsc, the result of the subtraction is
essentially garbage, because it may be of the same order as the
absolute error of the conversion. And I insist: this use case of
clock_gettime or rtdm_clock_read is a very realistic one.

This use case is valid, but I don't see the error scenario you sketch: the error of the conversion is only relevant for large absolute values; tsc_to_ns(B) - tsc_to_ns(A) matches B - A (up to the constant scale factor) for any small B - A. Cornelius' test nicely showed a constantly increasing deviation, not something jumping around. Essentially, we are just replacing

        xnarch_llimd(ts, 1000000000, RTHAL_CPU_FREQ);

with

        xnarch_llimd(ts, xnarch_tsc_scale, 1<<xnarch_tsc_shift);

which introduces a linearly increasing error in the _absolute_ results, not in the relative ones. But if you can prove me wrong, I will take everything back and agree to kicking out the scaled math immediately!

Jan


_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core