On Wed, 28 Jul 2010, Yaakov Stein wrote:
> The problem is that you have to put in a timestamp that reflects the time the packet is placed on the wire. So you have to sign after timestamping, and unless this signature can be computed in zero time (or with completely deterministic latency that can be pre-added) the signing degrades the timing accuracy.
Since packets are in some cases (1588) timestamped on ingress in the PHY, perhaps the same methodology could be used here: the device adds a compensation factor that reflects how long the signing took. This adjustment value would of course not itself be signed, but it could have a defined maximum, so that at least for time the signed stated time couldn't be too far off (an attacker could only tamper with the adjustment value)?
Or perhaps this doesn't really help and it's still too big an attack vector? For server time setting it might be enough... Or is the recommendation to just run NTP over IPsec so NTP itself doesn't have to care?
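The scheme sketched above can be illustrated roughly as follows. This is a hypothetical sketch, not any standardized format: the MAC (HMAC-SHA256 here), the packet layout, and the 10 microsecond bound on the correction field are all assumptions for illustration. The key point is that the correction field sits outside the signed portion, and the receiver clamps it, so an attacker who alters it can shift the recovered time by at most the defined maximum.

```python
import hmac
import hashlib
import struct

# Assumed upper bound on signing latency (10 us); anything larger is clamped.
MAX_CORRECTION_NS = 10_000


def build_packet(key: bytes, payload: bytes, timestamp_ns: int,
                 signing_latency_ns: int) -> bytes:
    # The timestamp is part of the signed portion...
    signed_part = payload + struct.pack("!Q", timestamp_ns)
    mac = hmac.new(key, signed_part, hashlib.sha256).digest()
    # ...but the correction field is appended after signing and is
    # NOT covered by the MAC, so it can be written at the last moment.
    return signed_part + mac + struct.pack("!Q", signing_latency_ns)


def recover_time(key: bytes, packet: bytes):
    # Layout: signed_part | 32-byte MAC | 8-byte correction.
    signed_part, mac, corr_raw = packet[:-40], packet[-40:-8], packet[-8:]
    expected = hmac.new(key, signed_part, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None  # payload or signed timestamp was tampered with
    (timestamp_ns,) = struct.unpack("!Q", signed_part[-8:])
    (correction,) = struct.unpack("!Q", corr_raw)
    # Clamp the unsigned field: tampering moves the result by at most
    # MAX_CORRECTION_NS, bounding the damage an attacker can do.
    correction = min(correction, MAX_CORRECTION_NS)
    return timestamp_ns + correction
```

For example, a packet built with timestamp 1_000_000 ns and a measured signing latency of 3_000 ns recovers as 1_003_000 ns; rewriting the unsigned correction to a huge value only shifts the result up to the clamp, and touching the signed portion makes verification fail outright.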
> I think that this should be thoroughly tested. In systems that I have seen in the lab, the degradation rules out sub-microsecond accuracy.
I have little doubt of that, but I can imagine applications where sub-microsecond accuracy isn't needed, yet one still wants assurance that the time isn't off by more than some known bound?
-- 
Mikael Abrahamsson    email: [email protected]
_______________________________________________
TICTOC mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/tictoc
