On Mon, Dec 05, 2016 at 02:35:22PM -0800, Denny Page wrote:
> > On Dec 05, 2016, at 01:05, Miroslav Lichvar <mlich...@redhat.com> wrote:
> > If I understand your change correctly, it increases the minimum
> > acceptable delay, which increases the number of combined readings and
> > makes the offset more stable. I'm worried that on a busy system bad
> > readings could be included and disrupt the mean value.
> On busier systems, I was actually seeing more error than on quiet systems.
> This brings the noise and spike level on the busy system down to the same
> level as the quiet system.
Have you tried it on a system where the network card is busy? Here is
how the distribution of the delay changes on my test machine:
What should be the maximum acceptable delay?
> The change does a couple of things. First, it ignores bad slots. On all
> the systems I’ve tested, slot one is always the worst slot, with long
> delays, resulting in a baseline skew.
The current code does that too.
> The second is avoiding the SysPrecision variable as a gate. Using this
> results in a tendency to select the wider slots for averaging, while ignoring
> the consistent slots. In my testing on busy systems, it discarded many slots,
> sometimes to the point of discarding all but two. This is obviously bad for
> averaging. My first attempt was simply to remove the averaging and use the
> best slot. This provided an improvement on the noise over the prior approach,
> but using the slots within 10% is smoother.
> FWIW, I haven’t dug into how SysPrecision is calculated, but in looking at
> several systems it appears to be inconsistent on identical hardware.
It's the minimum time it takes to read the system clock. It will
change if CPU frequency scaling is enabled.
I agree the current code may be dropping too many readings. The
assumption was that the precision and stability of a PHC is better
than that of the system clock and it is better to use just readings
with the delays very close to the minimum instead of averaging
multiple delayed readings as they may have a large error due to
asymmetry on the PCI-E bus, etc. The question is how much the limit
should be increased. You say 10% above the minimum delay works well for
you and I'm wondering if that will work universally.
> > Note that if you save the offset between PHC and system clock to a
> > double, you will lose precision when the offset is large as double has
> > a 53-bit precision.
> I agree. I think all of this should be done using 64-bit integers, but the
> code that was there was using floating point, so I stayed with that to fit in.
> Use of doubles seems rather pervasive throughout chrony. I was very surprised
> by this. I think it makes sense to use integer math for as much as possible.
> At least for anything to do with time intervals. But it seemed a bit much to
> rewrite everything to address this one issue. :)
Well, doubles have the advantage that you don't have to worry about
overflows as much, and I think the code is consistent in using doubles
only for values that are normally expected to have a reasonably small
upper bound. In the code you have changed, the first
reading was used as a reference to avoid the loss of precision.
Anyway, using doubles allowed chrony to support refclocks with
nanosecond (and even sub-nanosecond, if it were possible) resolution
before it started using timespec.