On Jun 24, 2014, at 1:33 PM, Daniel Havey <[email protected]> wrote:

> There may be scenarios where the interaction of the interval, the RTT and the 
> bandwidth cause this to happen recurringly, constantly underflowing the 
> bandwidth.

To be honest, the real concern is very long delay paths, and it applies to AQM 
algorithms generally. During TCP slow start (which is not particularly slow, 
but entails exponential growth), we have an initial burst, which with TCP 
Offload Engines can, I’m told, spit 65K bytes out at once. The burst travels 
somewhere and results in a set of acks, which presumably arrive at the sender 
at approximately the rate the burst went through the bottleneck, but elicit a 
new burst at roughly twice the bottleneck rate. That happens again and again 
until either a loss/mark event is detected or cwnd hits ssthresh, at which 
point the growth of cwnd becomes linear.
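The cwnd trajectory described above can be sketched in a few lines. This is a 
hypothetical illustration, not anything from the thread: the initial window of 
10 segments follows RFC 6928, and the ssthresh value is purely illustrative.

```python
# Sketch of cwnd growth per RTT: exponential during slow start,
# linear once cwnd reaches ssthresh (congestion avoidance).
# All parameter values are illustrative.

ssthresh = 64  # segments (illustrative)

def cwnd_per_rtt(rtts):
    """Yield cwnd (in segments) at the start of each RTT, from IW=10."""
    cwnd = 10  # RFC 6928 initial window, in segments
    for _ in range(rtts):
        yield cwnd
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: doubles every RTT
        else:
            cwnd += 1   # congestion avoidance: linear growth

print(list(cwnd_per_rtt(8)))
# [10, 20, 40, 80, 81, 82, 83, 84]
```

Note how few RTTs it takes for the doubling to overshoot: on a long delay path 
each of those doubling steps is a full burst in flight before any feedback 
arrives.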

If the burst is allowed to use the entire memory of the bottleneck system’s 
interface, it will very possibly approach the capacity of the bottleneck. 
However, with pretty much any AQM algorithm I’m aware of, the algorithm will 
sense an issue and drop or mark something, kicking the session into congestion 
avoidance relatively early.
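The difference between letting the burst fill the interface buffer and having 
an AQM intervene early can be sketched as follows. This is a toy RED-like 
model of my own, not any particular deployed algorithm; the capacity and 
threshold values are assumptions for illustration.

```python
import random

random.seed(1)  # deterministic for the example

CAPACITY = 100   # packets the interface buffer can hold (illustrative)
MIN_TH = 20      # occupancy above which the AQM starts marking (illustrative)

def tail_drop(queue_len):
    """Classic FIFO: the only signal is a drop when the buffer is full."""
    return queue_len >= CAPACITY

def simple_aqm(queue_len):
    """RED-like sketch: mark with a probability that rises with queue
    occupancy, so the sender is kicked into congestion avoidance well
    before the buffer fills."""
    if queue_len <= MIN_TH:
        return False
    return random.random() < (queue_len - MIN_TH) / (CAPACITY - MIN_TH)

# Find the first occupancy at which each scheme signals the sender.
first_aqm = next(q for q in range(CAPACITY + 1) if simple_aqm(q))
first_tail = next(q for q in range(CAPACITY + 1) if tail_drop(q))
print(first_aqm, first_tail)
```

Tail drop waits until the queue is completely full; the AQM signals somewhere 
between MIN_TH and CAPACITY, which is exactly the "sense an issue and drop or 
mark something" behavior above.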

This is well-known behavior, and something we have a couple of RFCs on.

But yes, it can happen on more nominal paths as well.


_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
