On Sun, Nov 20, 2016 at 05:18:56PM -0800, Denny Page wrote:
> From that article:
> 
>       • A preamble timestamp is struck as near to the start of the packet as 
> possible. The preferred point follows the last bit of the preamble and 
> start-of-frame (STF) octet and before the first octet of the data.
>       • A trailer timestamp is struck as near to the end of the packet as 
> possible. On transmit this follows the last octet of the data and before the 
> frame check sequence (FCS); on receive this follows the last octet of the FCS.

Ok, that does make sense. I was incorrectly assuming the rules were
similar to PTP.

> What this means for NTP is that hardware timestamps are off by a minimum of 
> 752 transmission bits with IPv4. This assumes no VLAN, and no IP option 
> headers. In a 100Mb network, this means a guaranteed minimum timestamp error 
> of 7.52 microseconds.
> 
> In order to generate a correct receive timestamp from the Ethernet hardware 
> timestamp, one needs to have the FCS timestamp, the current interface 
> speed, and the length of the packet at the Ethernet level. This is doable, 
> but requires use of raw sockets and is quite a bit of work. A simpler (and 
> safer) approach would be to use a combination of hardware timestamping for 
> send (SOF_TIMESTAMPING_TX_HARDWARE), and software timestamping for receive 
> (SOF_TIMESTAMPING_RX_SOFTWARE). This is probably the best available option.
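
For reference, that combination would be requested on Linux roughly
like the sketch below (assuming the NIC has already been switched to
HW timestamping mode with the SIOCSHWTSTAMP ioctl; the function name
is made up, this is not code from chrony):

  #include <linux/net_tstamp.h>   /* SOF_TIMESTAMPING_* flags */
  #include <sys/socket.h>

  /* Ask the kernel for HW timestamps on transmitted packets and SW
     timestamps on received packets, and have both the raw HW and the
     SW time reported in the SCM_TIMESTAMPING control message. */
  static int enable_timestamping(int sock)
  {
    int flags = SOF_TIMESTAMPING_TX_HARDWARE |
                SOF_TIMESTAMPING_RX_SOFTWARE |
                SOF_TIMESTAMPING_RAW_HARDWARE |
                SOF_TIMESTAMPING_SOFTWARE;

    return setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING,
                      &flags, sizeof (flags));
  }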

If the error in the software timestamp were generally smaller than the
error in the hardware timestamp, I'd agree. But in most cases I think
it's the opposite. There is interrupt coalescing, and the delivery of
the interrupt itself may have a significant delay. The variance of SW
timestamps alone is normally larger than the error in the HW timestamp,
at least at gigabit speeds.

In your case you will still need some correction for the extra delay
due to the switch in the 100mbit->1gbit direction, and I suspect
switches in general will be the biggest source of trouble when trying
to get the best accuracy.

At least by default, I think we should stick to the RX HW timestamp
and try to figure out the correction. We know the length of the
transmitted data at layer 2. Maybe we could assume the same header
length for received packets on the same interface? The link speed
should not be too difficult to get. Can you suggest a formula?
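
Something along these lines is what I have in mind (pseudo-C, the
names are made up, only the arithmetic matters): the RX HW timestamp
is struck after the last octet of the FCS, so subtracting the time
the whole L2 frame spends on the wire should move it back to the
preamble point:

  /* For a minimal IPv4 NTP packet the L2 frame is
     14 (Ethernet header) + 20 (IPv4) + 8 (UDP) + 48 (NTP) + 4 (FCS)
     = 94 octets = 752 bits, i.e. 7.52 microseconds at 100Mb/s,
     which matches your numbers above. */

  static double rx_correction(int l2_length, int link_speed_mbps)
  {
    /* l2_length: octets from the first octet of the destination MAC
       address up to and including the FCS */
    return l2_length * 8 / (link_speed_mbps * 1.0e6);
  }

  /* corrected RX timestamp = HW RX timestamp - rx_correction(...) */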

The hwtimestamp directive could have a new option to use SW RX
timestamps even if HW timestamps are available. There could also be an
option to override the automatic correction.
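
Purely to illustrate (the option names are made up, nothing is
decided), the configuration could look something like:

  hwtimestamp eth0                  # HW TX + corrected HW RX timestamps
  hwtimestamp eth1 rxswts           # HW TX, but SW RX timestamps
  hwtimestamp eth2 rxcomp 7.52e-6   # override the automatic RX correction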

What do you think?

-- 
Miroslav Lichvar
