On 1 Dec 2006, at 12:38, Mark Handley wrote:
I agree that running a very small nofeedback timer is a bad idea.
But I think that 1 second is probably far too large. The purpose of
the nofeedback timer is to slow DCCP down when there is serious
network congestion. Waiting 1 second on a LAN would mean sending for
thousands of RTTs before starting to slow down. And on wide-area
links in places like the UK, it could be 100 RTTs before you slow
down, although this would be mitigated a little if the problem were
congestion and a queue had built up.
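As a back-of-the-envelope check of those RTT counts (the round-trip
times here are illustrative assumptions, not from the message: roughly
0.1ms for a LAN and 10ms for a UK wide-area path):

```python
# Rough check of the RTT counts above, using assumed (illustrative)
# round-trip times. Values in microseconds to keep the arithmetic exact.
NOFEEDBACK_TIMER_US = 1_000_000   # the 1 second value in question

lan_rtt_us = 100       # 0.1 ms, assumed LAN RTT
uk_rtt_us = 10_000     # 10 ms, assumed UK wide-area RTT

print(NOFEEDBACK_TIMER_US // lan_rtt_us)  # 10000 RTTs before slowing down
print(NOFEEDBACK_TIMER_US // uk_rtt_us)   # 100 RTTs
```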
My gut feeling is that there should be a lower bound on the nofeedback
timer, but that 100ms would be a more appropriate value. This is
motivated by an attempt to compromise between a large value for
efficient DCCP implementations, and a small value to avoid disrupting
the network for too long when bad stuff is happening. From a human
usability point of view, you can probably cope with 100ms dropouts in
audio without them being too bad, but 1 second is too long.
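One way such a lower bound might be expressed (a sketch only, not from
any spec or implementation: it assumes the RFC 3448 rule of restarting
the nofeedback timer at max(4*RTT, 2*s/X), and the function name and
floor parameter are hypothetical):

```python
# Sketch of a nofeedback-timer interval with a floor, assuming the
# RFC 3448 restart rule of max(4*RTT, 2*s/X). The function name and
# the floor default are hypothetical, for illustration only.
def nofeedback_interval(rtt, s, x_rate, floor=0.1):
    """Seconds until the nofeedback timer next expires.

    rtt    -- round-trip time estimate, seconds
    s      -- segment size, bytes
    x_rate -- current allowed sending rate, bytes/second
    floor  -- the lower bound under discussion (100 ms, not 1 s)
    """
    return max(floor, 4 * rtt, 2 * s / x_rate)

# On a LAN (0.1 ms RTT, fast sender) the 100 ms floor dominates:
print(nofeedback_interval(rtt=0.0001, s=1460, x_rate=1e7))  # 0.1
# On a slow, long path the 4*RTT term dominates instead:
print(nofeedback_interval(rtt=0.2, s=1460, x_rate=1e5))     # 0.8
```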
I'd actually suggest something on the order of 16-20ms. The rationale
would be to match the typical inter-frame interval for multimedia
applications, so that the kernel will likely be processing a sent
packet when the timer expires, and can amortise the cost of checking
the nofeedback timer into the send routine.
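A minimal sketch of that amortisation (names and structure hypothetical,
not from any DCCP stack): rather than arming a separate kernel timer,
the sender compares the nofeedback deadline against the clock on each
send, which costs one comparison per packet while packets are flowing
every 16-20ms.

```python
import time

class Sender:
    """Toy sender folding the nofeedback check into the send path.

    Hypothetical sketch: a real implementation would still need a
    backstop timer for when the application stops sending entirely.
    """
    NOFEEDBACK_INTERVAL = 0.016   # 16 ms, matching the inter-frame rate

    def __init__(self):
        self.deadline = time.monotonic() + self.NOFEEDBACK_INTERVAL
        self.rate = 1_000_000     # allowed sending rate, bytes/s

    def on_feedback(self):
        # Feedback arrived in time: push the deadline out again.
        self.deadline = time.monotonic() + self.NOFEEDBACK_INTERVAL

    def send(self, packet):
        # One comparison per packet amortises the timer check.
        if time.monotonic() >= self.deadline:
            self.rate //= 2       # no feedback for a full interval: back off
            self.deadline = time.monotonic() + self.NOFEEDBACK_INTERVAL
        # ... transmit packet at self.rate ...
```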
Colin