I've since realized that on this Vista laptop, the minimum step
observed from the OS clock is 1 msec, but only after starting ntpd
with -M to use timeBeginPeriod to request 1 ms timing service.  Run
without -M, my hacked ntpd observes the traditional 15.mumble msec clock
quantum.  That means the interpolation scheme should have a much
better shot at working on Vista without -M specified.

I've also added another fix to the mix, locking both the main thread
and the high-priority timer thread to the same (2nd) logical processor
on computers with more than one core or processor.  There is already a
call in the source to SetThreadAffinityMask that keeps the timing
thread locked to the first processor (not the best choice when there
is more than one, due to typical interrupt routing, but the easiest to
code).  That call alone is inadequate because the main thread compares
QueryPerformanceCounter results with the timing thread's, and those
results are known (and observed by me) to diverge across processors by
tens to more than 100 msecs' worth of ticks.

As noted in this ancient Microsoft KB article, some hardware
(including a 1998-ish Dell PowerEdge 2300 of mine) is known to jump
the performance counter values forward, out of step with the system
clock and GetTickCount:

http://support.microsoft.com/default.aspx/kb/274323

To observe and clamp QueryPerformanceCounter forward leaps (never
backwards, at least so far for me), whether due to changing processors
before the affinity fix or to hardware or Hardware Abstraction Layer
(HAL) bugs, I modified the interpolation scheme to add GetTickCount to
the mix alongside the system clock and QueryPerformanceCounter.
GetTickCount serves as a bounds check on QueryPerformanceCounter and
as a fallback, lower-resolution interpolation source when divergence
is detected.

I could use some help testing from anyone interested in source or .exe
files.

Cheers,
Dave Hart

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.org/mailman/listinfo/questions
