Matthew Toseland wrote:
> I don't understand. We can indicate how long a packet had to wait for
> coalescing, and therefore record a value based on the network delay,
> with a bit of the CPU delay thrown in, no?

Right, we can get an accurate measure of the RTT by using timestamps, 
but we still have to delay retransmissions by an extra 100ms in case the 
ack is being held for coalescing (we won't know until it arrives). The 
extra lag makes congestion control harder - you can't really do things 
like fast retransmission if the acks for packets that were sent almost 
simultaneously can be separated by 100ms.
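To make the timer explicit, here's a rough Java sketch of what I mean; the class, constant and method names are mine, not anything from the node code:

    // Rough sketch only, not the actual node code.
    class RetransmitTimerSketch {
        // Assumed maximum time an ack can be held for coalescing
        // (the 0-100ms window discussed above).
        static final long MAX_COALESCING_DELAY_MS = 100;

        // Even with exact RTT samples from timestamps, the timeout has
        // to allow for an ack held for the full coalescing window -
        // this is the 4 * RTT + MAX_DELAY rule mentioned further down.
        static long retransmitTimeoutMs(long rttMs) {
            return 4 * rttMs + MAX_COALESCING_DELAY_MS;
        }
    }

The coalescing window also shows up as variance in the measured RTT, which is what makes duplicate-ack tricks like fast retransmission unreliable.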

Maybe I'm getting too caught up in the TCP way of doing things, but when 
I look at the amount of work that's gone into the design of TCP and the 
number of subtle problems they've found, I start to doubt that we can do 
better from scratch...

> 52 = 5% * 1000. The probability of a packet being dropped is less than
> 5% on most useful links.

Sorry, I don't see how the loss rate of the link is relevant - I'm 
talking about the overhead of sending an ack straight away (52 bytes) 
versus the overhead of retransmitting a packet unnecessarily (1000 
bytes). Unnecessary retransmissions happen when the RTT variance is 
high, which it is at the moment because acks are held for anywhere 
between 0 and 100ms. Currently we retransmit a packet if it hasn't been 
acked within 4 * RTT + MAX_DELAY, but we don't know what fraction of 
acks arrive later than that. If it's more than 5% then it would be 
cheaper to send the acks straight away.
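Spelling out the arithmetic (just a back-of-the-envelope sketch using the byte costs above, nothing measured):

    // Break-even point for always acking immediately vs. coalescing.
    class AckCostSketch {
        static final double ACK_BYTES = 52;      // ack sent straight away
        static final double PACKET_BYTES = 1000; // unnecessary retransmission

        // Fraction of acks arriving later than 4 * RTT + MAX_DELAY above
        // which always acking immediately becomes cheaper: 52 / 1000 ~= 5%.
        static double breakEvenLateAckFraction() {
            return ACK_BYTES / PACKET_BYTES;
        }

        public static void main(String[] args) {
            System.out.println(breakEvenLateAckFraction()); // 0.052
        }
    }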

Cheers,
Michael
