Absolutely.  But if you allow, say, a one-second round-trip time, you have to 
assume that your time may be off from the master by up to half that amount.  In 
an environment without active attackers you would assume that the error is a 
fair amount smaller: basically the estimated difference between the two legs of 
the trip, plus some allowance for jitter.  If you introduce attackers, the 
underlying network might offer near-zero latency, with all the latency you're 
seeing due to an active attack on one leg or the other of the round trip.

                paul
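
Paul's point about asymmetric delay can be made concrete with the classic four-timestamp offset calculation (a minimal sketch with made-up numbers, not code from the thread): a delay injected on one leg shifts the offset estimate by half the injected amount, while the client can only bound the error by half the measured round trip.

```python
# Sketch of NTP-style offset estimation (all numbers hypothetical).
def ntp_offset(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock offset
    rtt = (t4 - t1) - (t3 - t2)              # measured round-trip delay
    return offset, rtt

TRUE_OFFSET = 0.050      # server clock runs 50 ms ahead of the client
D_FWD = D_BACK = 0.001   # near-zero, symmetric network latency

def measure(attack_fwd=0.0, attack_back=0.0):
    """One exchange; the attacker may hold packets on either leg."""
    t1 = 100.0
    t2 = t1 + TRUE_OFFSET + D_FWD + attack_fwd
    t3 = t2 + 0.0005                             # server processing time
    t4 = t3 - TRUE_OFFSET + D_BACK + attack_back
    return ntp_offset(t1, t2, t3, t4)

off_clean, rtt_clean = measure()
off_atk, rtt_atk = measure(attack_fwd=1.0)   # attacker delays one leg by 1 s
# off_clean recovers TRUE_OFFSET; off_atk is shifted by 0.5 s, and the
# client can only tell that the error lies somewhere within rtt_atk / 2.
```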

From: Kevin Gross [mailto:[email protected]]
Sent: Tuesday, October 18, 2011 1:45 PM
To: Koning, Paul
Cc: [email protected]; [email protected]; [email protected]; [email protected]; 
[email protected]; [email protected]
Subject: Re: [IPsec] [TICTOC] Review request for IPsec security for packet 
based synchronization (Yang Cui)

Nico's contention is that it should take a constant amount of time to decrypt a 
packet once it is received. I don't think this is exactly true, but compared to 
the other (variable) latencies in the system it may be a reasonable 
approximation.

If an attacker delays or drops synchronization packets, clock quality will 
suffer. In the extreme case, all useful clock communication is lost and nothing 
works. I don't think this situation is any different for clock traffic than it 
is for other traffic. Encryption cannot prevent denial of service.

Kevin Gross
On Tue, Oct 18, 2011 at 10:49 AM, <[email protected]> wrote:
But why would you assume that the delays are consistent?

In the non-encrypted case you can reasonably assume consistent delays, because 
the underlying assumption is that there are no malicious agents in the network.  
However, if you believe that encryption is needed because the network does 
contain malicious agents, you should assume that anything that's interesting to 
attack is in fact attacked.

In particular, if you assume that active attacks are taking place where time 
sync packets are selectively delayed, what does that do to your protocol?

                paul

From: [email protected] [mailto:[email protected]] On Behalf Of 
Kevin Gross
Sent: Tuesday, October 18, 2011 12:43 PM
To: Nico Williams
Cc: [email protected]; Danny Mayer; [email protected]; Cui Yang; David L. Mills
Subject: Re: [IPsec] [TICTOC] Review request for IPsec security for packet 
based synchronization (Yang Cui)

It does seem reasonable to model encryption and decryption as part of network 
latency. As long as the delays introduced are the same in each direction, the 
sync protocols will naturally subtract out this contribution.

Kevin Gross
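
Kevin's "subtract out" observation can be checked with the same four-timestamp arithmetic (a sketch with hypothetical numbers, not code from the thread): adding an equal crypto delay to both legs leaves the offset estimate untouched and only inflates the measured round trip.

```python
# Sketch: a fixed crypto cost added symmetrically to both legs cancels
# out of the offset estimate (all numbers hypothetical).
def ntp_offset(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    rtt = (t4 - t1) - (t3 - t2)
    return offset, rtt

def exchange(crypto_delay):
    theta, d = 0.020, 0.003              # true offset, one-way network latency
    t1 = 0.0
    t2 = t1 + theta + d + crypto_delay   # forward leg pays the crypto cost
    t3 = t2 + 0.001                      # server processing
    t4 = t3 - theta + d + crypto_delay   # return leg pays it again
    return ntp_offset(t1, t2, t3, t4)

off_plain, rtt_plain = exchange(0.0)
off_enc, rtt_enc = exchange(0.002)       # 2 ms of crypto per leg
# off_plain == off_enc; only rtt_enc grows, by 2 * crypto_delay
```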

On Fri, Oct 14, 2011 at 11:25 AM, Nico Williams <[email protected]> wrote:

The cost of crypto can be measured, and its performance is generally
deterministic (particularly when the crypto has no side channels, and
assuming no mid-crypto context switches), so it should be possible to
correct for the delays crypto introduces, just as it's possible to
measure and estimate network latency.  Indeed, crypto processing will
likely be more deterministic than network latency :)
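
Following Nico's point, one way to exploit that determinism is to calibrate the per-packet crypto cost once and subtract it from timestamps taken after decryption (a sketch only; HMAC-SHA256 stands in for the hypothetical decrypt step, and the names are invented):

```python
import hashlib
import hmac
import statistics
import time

KEY = b"k" * 32
PACKET = b"x" * 1024

def decrypt_stand_in(data):
    # HMAC-SHA256 as a stand-in for a roughly fixed-cost crypto operation
    return hmac.new(KEY, data, hashlib.sha256).digest()

# Calibrate: take the median per-packet cost over many runs, which
# discards outliers caused by context switches mid-measurement.
samples = []
for _ in range(1000):
    start = time.perf_counter()
    decrypt_stand_in(PACKET)
    samples.append(time.perf_counter() - start)
crypto_cost = statistics.median(samples)

# Apply: a timestamp captured only after decryption can be pulled back
# toward the true wire-arrival time.
t_after_decrypt = time.perf_counter()
t_arrival_estimate = t_after_decrypt - crypto_cost
```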

_______________________________________________
IPsec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ipsec
