On Fri, Aug 4, 2017 at 4:30 AM, Miroslav Lichvar <mlich...@redhat.com> wrote:
> Do you need this to show compliance with the new MiFID directive?
> I think one possibility here might be to reduce root delay on clients
> by the minimum latency of switches along the path to the server. If a
> switch is guaranteed to add at least 5 microseconds in each direction,
> you could subtract 10 microseconds from the delay and get a more
> accurate estimate of the maximum error.

Yes, exactly.

The idea about some minimum amount of delay introduced by the switches
is interesting.  I'll have to think about that one a bit harder.
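The correction described in the quoted paragraph could be sketched
roughly as follows (a hypothetical illustration, not chrony code; the
function name, switch count, and latency figures are all made up for
the example):

```python
# Sketch: tightening the NTP maximum-error bound by crediting a
# guaranteed minimum per-switch latency.  Purely illustrative.

def max_error(root_delay, root_dispersion,
              n_switches=0, min_switch_latency=0.0):
    """Classic NTP bound: root dispersion plus half the root delay.

    If every switch on the path is guaranteed to add at least
    `min_switch_latency` seconds in each direction, that portion of
    the round-trip delay cannot be asymmetry, so it can be subtracted
    before halving.
    """
    effective_delay = root_delay - 2 * n_switches * min_switch_latency
    effective_delay = max(effective_delay, 0.0)
    return root_dispersion + effective_delay / 2

# e.g. 100 us round trip, 2 us dispersion, two switches each adding
# at least 5 us per direction:
uncorrected = max_error(100e-6, 2e-6)                    # 52 us
corrected = max_error(100e-6, 2e-6, n_switches=2,
                      min_switch_latency=5e-6)           # 42 us
```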

What I had been considering as a workaround was something like:

Record the root dispersion on the server on some regular interval (say
every few seconds or so).

When looking at a given client's stats (`chronyc tracking' output)
over some time interval, take the maximum root dispersion reported by
the server over that same interval, round it up to the next 15us
boundary, and subtract the server's actual root dispersion; the
difference can then be subtracted from the client's reported root
dispersion.

As an example, if over some interval the server reported a max root
dispersion of 2us (`chronyc tracking' on the server), then you could
subtract 13us from the reported root dispersion of any client that was
tracking this server over that same interval.
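If I understand the arithmetic, the correction could look something
like the sketch below. It assumes the ~15us granularity comes from
the 2^-16 s (~15.26us) resolution of the root dispersion field in NTP
packets; that assumption, and all the names here, are mine, not
something from chrony itself:

```python
import math

# Assumed wire-format resolution of the NTP root dispersion field:
# 2^-16 seconds, roughly 15.26 us.
QUANTUM = 2.0 ** -16

def dispersion_correction(server_root_dispersion):
    """Amount the server's dispersion is inflated by when it is
    rounded up to the next wire-format boundary."""
    quantized = math.ceil(server_root_dispersion / QUANTUM) * QUANTUM
    return quantized - server_root_dispersion

def corrected_client_dispersion(client_root_dispersion,
                                max_server_dispersion):
    """Client's reported root dispersion minus the rounding excess."""
    return client_root_dispersion - dispersion_correction(
        max_server_dispersion)

# Server reported at most 2 us over the interval, so roughly 13 us
# can be subtracted from the client's figure:
print(dispersion_correction(2e-6))  # ~1.33e-05 (about 13 us)
```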
