Harlan,

I'm uneasy about calling random() for every read of the clock, as it could be expensive. It seems a bit of overkill to fuzz the nanosecond bits when the caller has a timespec that is already fuzzed to 10 ms. Perhaps a test for precision greater than a microsecond is advised.
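
Roughly the sort of test I mean, as a sketch only (assuming sys_precision is kept as log2 seconds, as in ntpd; the names and the example value are illustrative, not the actual get_systime() code):

    #include <math.h>
    #include <stdlib.h>

    /* Normally measured at startup; an example value is used here. */
    static int sys_precision = -10;        /* log2 seconds, -10 ~ 1 ms */

    #define LOG2_ONE_USEC   (-20)          /* 2^-20 s is just under 1 us */

    /*
     * Return the fuzz, in seconds, to add to a raw clock reading.  On a
     * clock finer than a microsecond the random() call is skipped
     * entirely, so there is no per-read cost on high-resolution systems.
     */
    double
    clock_fuzz(void)
    {
            if (sys_precision <= LOG2_ONE_USEC)
                    return 0.0;
            return ldexp((double)random() / 2147483648.0, sys_precision);
    }

The read path then only needs to add the result of clock_fuzz() to the raw reading.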

Dave

Harlan Stenn wrote:
Dave,

Your patch handles the fuzz for gettimeofday() but not the case where some
OS implements getclock() or clock_gettime() badly.

What would be bad about moving the fuzz code after the #endif that closes
the "get time" routines and just fuzzing in all cases?  If that is really
overkill for high-res systems, change the test from:

   if (sys_precision != 0)

to

   if ((sys_precision != 0) && (sys_precision > -7))

(for example).
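
Roughly, the shape I have in mind is this sketch (the macro, the names, and the -7 threshold are just for illustration, not the actual get_systime() source):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <time.h>

    /* Normally measured at startup; an example value is used here. */
    static int sys_precision = -6;         /* log2 seconds, -6 ~ 16 ms */

    /* Read the clock by whatever interface the OS provides, then fuzz. */
    double
    get_time_fuzzed(void)
    {
            double t;

    #if defined(HAVE_CLOCK_GETTIME)
            struct timespec ts;

            clock_gettime(CLOCK_REALTIME, &ts);
            t = ts.tv_sec + 1e-9 * ts.tv_nsec;
    #else
            struct timeval tv;

            gettimeofday(&tv, NULL);
            t = tv.tv_sec + 1e-6 * tv.tv_usec;
    #endif
            /*
             * The fuzz lives after the #endif, so every "get time"
             * variant is treated alike; skip it on high-res systems.
             */
            if (sys_precision != 0 && sys_precision > -7)
                    t += ldexp((double)random() / 2147483648.0, sys_precision);

            return t;
    }

    int
    main(void)
    {
            printf("%.9f\n", get_time_fuzzed());
            return 0;
    }

That way a bad getclock() or clock_gettime() gets the same treatment as gettimeofday().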

H
--

In article <[EMAIL PROTECTED]>, "David L. Mills" <[EMAIL PROTECTED]> writes:


David> But, I finally punctured my skull about the precision measurement
David> method, which until now fuzzed the measurement. ...

David> So, the get_systime() routine in current ntp-dev has been changed to
David> fuzz the bits only after calibration and to fuzz all the
David> nonsignificant bits less than the measured precision.



