On 12/8/19 3:37 PM, Fred Baker wrote:


On Dec 5, 2019, at 9:03 PM, Stephen Satchell <l...@satchell.net> wrote:

For SP-grade routers, there isn't "code" that needs to be added to combat 
buffer bloat.  All an admin has to do is cut back on the number of packet buffers on each 
interface -- an interface setting, you see.
A common misconception, and one that disagrees with the research on the topic.

Let me describe this conceptually. Think of a file transfer (a streaming flow
can be thought of in those terms, as can web pages, etc.) as four groups of
packets:

 - those that have been delivered correctly, and therefore don't affect
   the window or flow rate;
 - those that have been delivered out of order, which reduce the window and
   might get retransmitted even though they need not be resent;
 - those that are sitting in a queue somewhere, and therefore add latency; and
 - those that haven't been transmitted yet.

If I have a large number of sessions transiting an interface, each one is
likely to have a packet or two near the head of the queue; after that, it
tends to thin out, with the sessions with the largest windows having packets
deep in the queue, and the sessions with smaller windows not so much.
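(A minimal Python sketch of those four groups - my illustration, not anything
from the post; the TCP-style names snd_una, snd_nxt, and the SACK set are
assumptions made for the example.)

    # Classify one segment, by starting sequence number, into the four
    # groups above. snd_una/snd_nxt/sacked are illustrative sender state.
    def classify(seq, snd_una, snd_nxt, sacked):
        if seq < snd_una:
            return "delivered correctly"     # cumulatively ACKed; no window effect
        if seq in sacked:
            return "delivered out of order"  # reduces the window; may be resent
        if seq < snd_nxt:
            return "sitting in a queue"      # in flight somewhere, adding latency
        return "not yet transmitted"

    # Toy flow of 10 unit segments: ACKed through 3, segment 5 SACKed.
    groups = {}
    for seq in range(10):
        label = classify(seq, snd_una=3, snd_nxt=8, sacked={5})
        groups.setdefault(label, []).append(seq)
    print(groups)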

If you reduce the queue depth, it does shrink that deep-in-the-queue group - 
there is no storage deep in the queue to hold it. What it also does, however, 
is increase any given packet's probability of loss (loss being the extreme 
case of delay, and the byproduct of reducing delay unintelligently), and 
therefore grow the second category of packets - the ones that managed to get 
through after a packet was lost, arrived out of order, and have some 
probability of being retransmitted and therefore delivered more than once.
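(To make that trade-off concrete, here is a toy M/M/1/K-style tail-drop
simulation - my sketch, with arbitrary load and buffer sizes, not anything
from the post. Shrinking the buffer caps queueing delay, but raises the
chance that any given packet is dropped.)

    import random

    def loss_rate(buffer_pkts, rho=0.95, events=200_000, seed=1):
        """Count arrivals dropped because the tail-drop buffer is full."""
        random.seed(seed)
        p_arrival = rho / (1 + rho)   # each event is an arrival or a departure
        queue = arrivals = drops = 0
        for _ in range(events):
            if random.random() < p_arrival:
                arrivals += 1
                if queue >= buffer_pkts:
                    drops += 1        # tail drop: no room deep in the queue
                else:
                    queue += 1
            elif queue:
                queue -= 1
        return drops / arrivals

    for depth in (8, 32, 128, 512):
        print("buffer=%4d pkts  loss=%.4f%%" % (depth, 100 * loss_rate(depth)))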

What AQM technologies attempt to do (we can argue about the relative degree of 
success of different technologies; I'm talking about them as a class) is 
identify sessions in that deep-in-the-queue category and cause them to 
temporarily reduce their windows, keeping most of their outstanding packets 
near the head of the queue. Reducing their windows moves packets out of the 
network buffers (the bufferbloat) and out of reordering queues in the 
receiving host, back into the "hasn't been sent yet" category in the sending 
host. That also reduces median latency, so the sessions with reduced windows 
don't generally "slow down" - they simply keep less of their data in the 
network, at a lower median latency.
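(A RED-flavored sketch of that idea - one classic member of the AQM class,
my illustration rather than any particular router's code; the thresholds and
EWMA weight are made-up parameters. Because the early drops are probabilistic,
the flows with the most packets in the queue - the largest windows - are the
most likely to see one, which is how the queue finds the deep-in-the-queue
sessions without keeping per-flow state.)

    import random

    class RedQueue:
        """Random Early Detection, simplified: drop with rising probability
        as the *average* queue depth grows, before the buffer fills."""
        def __init__(self, min_th=50, max_th=150, max_p=0.1, w=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.w = max_p, w
            self.avg = 0.0      # EWMA of depth, so short bursts are tolerated
            self.queue = []

        def enqueue(self, pkt):
            self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
            if self.avg < self.min_th:
                p = 0.0         # short queue: admit everything
            elif self.avg >= self.max_th:
                p = 1.0         # persistently long queue: drop everything
            else:               # in between: probability ramps up linearly
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False    # early drop: a signal to shrink the window
            self.queue.append(pkt)
            return True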


So are you saying, in effect, that the receiving host is essentially managing the sending host's queue by modulating the receiver's window? That seems really weird to me, and probably means I've got it wrong. How would the receiving host know when and why it should change the window, unless of course it observes loss or other measurable signals? If everything is chugging away, the receiver doesn't have any idea that the sender is starving other sessions, right?

It just seems to me that this is a sending host's queuing problem?

Mike
