On Thu, 8 Oct 2015, Joe Touch wrote:

On 10/8/2015 2:31 PM, David Lang wrote:
On Thu, 8 Oct 2015, Joe Touch wrote:

On 10/7/2015 12:42 AM, LAUTENSCHLAEGER, Wolfram (Wolfram) wrote:
...
Is this topic addressed in some RFC already?

It's a direct violation of RFC 2581, which expects one ACK for every two
segments:

4.2 Generating Acknowledgments

  The delayed ACK algorithm specified in [Bra89] SHOULD be used by a
  TCP receiver.  When used, a TCP receiver MUST NOT excessively delay
  acknowledgments.  Specifically, an ACK SHOULD be generated for at
  least every second full-sized segment, and MUST be generated within
  500 ms of the arrival of the first unacknowledged packet.
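The quoted rule can be sketched in a few lines. This is a hypothetical receiver-side illustration (the names and state layout are mine, not any real stack's code), showing the SHOULD (ACK every second full-sized segment) and the MUST (ACK within 500 ms of the first unacknowledged packet):

```python
# Sketch of the delayed-ACK rule quoted above. Hypothetical state and
# names, for illustration only; real stacks differ.

MSS = 1460            # "full-sized segment" in bytes (illustrative)
ACK_DELAY_MS = 500    # MUST-level upper bound from the RFC text

def on_segment_arrival(state, seg_len, now_ms):
    """Return True if an ACK should be sent immediately."""
    state["unacked_segments"] += 1
    if state["first_unacked_ms"] is None:
        state["first_unacked_ms"] = now_ms
    # SHOULD: an ACK for at least every second full-sized segment.
    if seg_len >= MSS and state["unacked_segments"] >= 2:
        return ack_now(state)
    # MUST: never delay past 500 ms of the first unacked packet.
    if now_ms - state["first_unacked_ms"] >= ACK_DELAY_MS:
        return ack_now(state)
    return False

def ack_now(state):
    state["unacked_segments"] = 0
    state["first_unacked_ms"] = None
    return True
```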

actually, this is only a violation of the SHOULD section, not the MUST
section.

When you violate a SHOULD, you need to have a good reason that applies
in a limited subset of cases.

"it benefits me" isn't one of them, otherwise the SHOULD would *always*
apply.

And if the Ack packets are going to arrive at wire-speed anyway (due to
other causes), is there really an advantage to having 32 ack packets
arriving one after the other instead of making it so that the first ack
packet (which arrives at the same time) can ack everything?

If the first ACK confirms everything, you're giving the endpoint a false
sense of how fast the data was received. This is valid only if the
*last* ACK is the only one you retain, but then you'll increase delay.

why does it give the server a false sense of how fast the data was received? The packets don't carry timestamps that the server can trust; they are just packets arriving. And if the server concludes something different from 32 packets arriving, each acking 2 packets, but all arriving one after the other at its wire speed (say it's a slow network, only Gig-E), than from a single packet arriving that acks 64 packets of data at once, it's doing something very strange and making assumptions about how the network works that are invalid.

Unless you know that the endpoint supports ABC and pacing, yes, there's
a very distinct advantage to getting 32 ACKs rather than 1. It also
helps with better accuracy on the RTT calculation, which is based on
sampling (and you've killed 97% of the samples).

the 97% of the samples that I've killed would be producing invalid data for your calculation, because they were delayed in returning. The one packet I'd deliver would be accurate for the last data it acks; the others would all give misleading info.

And if there is such an advantage, does it outweigh the disadvantages
that the extra ack packets cause by overloading highly asymmetric links
until they drop packets?

Why is it so bad to drop packets?

because forcing packets for other services to be dropped to make room for acks degrades those other services.

 Cumulative ACKs will still work
properly in the presence of the losses. If you make reasonable progress,
you have to determine whether the lost ACKs should make you slow down.

the problem isn't if acks are dropped, it's when the acks fill the available bandwidth so that other services get dropped.

AFAICT, this isn't horrible if you KNOW something specific about the
other end (ABC + pacing), but how can you know? In a closed deployment,
maybe. But this IMO is over-optimizing TCP for a very specific environment.

TCP isn't supposed to be the most efficient in EVERY corner case. It's
supposed to *always work* in EVERY corner case.

I don't see how it fails to work in this case. As people have pointed out, some cable routers have been doing this for 15 years and the Internet has not imploded from it yet, so the drawbacks of dropping these already-delayed and redundant ack packets cannot be the end-of-the-Internet you are painting them to be.

We are talking about only doing this in one specific case: the case where other things have already caused some of the acks to be delayed to the point where later acks have 'caught up' with them on the network, and both early and late acks are sitting in the same queue on the same device, waiting to be sent at the same time.

At this point there are three possibilities:

1. all the acks get sent back-to-back, wasting bandwidth with their redundancy

2. send only the newest ack, trashing all the ones that would be redundant

3. the total of the acks that are queued exceeds the next transmit window, so only some of the acks get sent; the newest one doesn't, and gets delayed further.


we know that #2 doesn't break the Internet, it's within the range of responses permitted by the RFC SHOULD. It decreases load on congested links.
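Possibility #2 is essentially the ACK filtering those cable modems do. A minimal sketch of the idea, under stated assumptions (a queue of packets modeled as dicts with hypothetical fields; pure cumulative ACKs only; a real implementation would have to exempt ACKs carrying data, SACK blocks, or window updates):

```python
# Sketch of possibility #2: when several pure ACKs for the same flow
# are sitting in the same queue, keep only the newest cumulative ACK.
# The dict-based packet model and field names are illustrative.

def filter_acks(queue):
    """Drop ACKs made redundant by a later cumulative ACK in the queue."""
    kept = []
    newest_ack = {}  # highest ack number seen per flow, scanning newest-first
    for pkt in reversed(queue):
        flow = pkt["flow"]
        if pkt["is_pure_ack"]:
            if flow in newest_ack and pkt["ack_no"] <= newest_ack[flow]:
                continue  # redundant: a later cumulative ACK covers it
            newest_ack[flow] = pkt["ack_no"]
        kept.append(pkt)
    kept.reverse()  # restore original queue order
    return kept
```

Note the filter only ever shrinks the queue, which is the point of #2: the congested link carries one up-to-date cumulative ACK instead of 32 stale ones.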

But you keep insisting that it's a horrible thing to consider doing.

David Lang

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
