Ok, so it turns out that what we are talking about here is addressed
explicitly in section 5.2.1 of RFC3449. It warns of increased sender burst
sizes, but I really question whether that's a valid concern in the cases we are
talking about here. The choice in these cases isn't between sending more acks
spaced out and sending fewer; it's between sending a lot of acks back to back
in one transmit slot vs one stretch ack per transmit slot.
RFC3449 doesn't address this directly, but it does address it indirectly by
proposing several mechanisms to recreate the missing acks. These all talk about
the need to space out the re-created ack messages to prevent large transmit
bursts. This implies that if the re-created acks were not spaced out, the
various problems would still happen. [1]
In the case we are talking about, the choice is between spending bandwidth on
sending the acks back to back (the equivalent of re-creating the acks without
spacing them) or sending just one ack. These look to me like they have
equivalent effects.
Section 5.3.3, which talks about ways to mitigate excessive transmit burst
sizes, sounds like a primitive version of running AQM on the downstream side.
It looks to me like any of the currently considered strong AQM options would
satisfy this requirement.
The final recommendations section says of 5.3.1: "Use in the Internet is
dependent on a scheme for preventing excessive TCP transmission bursts"
Since AQM on the downstream side does exactly that, it looks like thinning out
duplicate acks in the AQM queue is actually following the recommendations of
RFC3449, as long as AQM is also in place on the downstream side.
[1] A major problem with any of the options that try to reconstruct the lost
acks is that, in order to space out the acks appropriately, they can only do
so by delaying the ack that was actually received.
David Lang
On Fri, 9 Oct 2015, [email protected] wrote:
RFC3449 pointed to some of the reasons & problems with manipulating TCP
ACKs, and could provide a useful background if people needed to get up to
speed on the history...
Gorry
I'm not sure why this discussion is happening on aqm@ instead of tcpm@...
I have added tcpm@ to the cc line, and would recommend that anyone
responding to this thread do the same and remove aqm@.
On Oct 7, 2015, at 2:13 PM, David Lang <[email protected]> wrote:
So things that reduce the flow of acks can result in very real benefits
to users.
Dumb question of the month. What would it take to see wide deployment of
RFC 5690? That would result in the data/ack ratio being reduced, on
average, to whatever amount had been negotiated.
Summary for those that haven't read it - TCP implementations today
generally ack every other packet, with caveats for isolated or final data
packets. This proposal allows consenting adults to change that ratio,
acking every third or fourth packet, or every tenth.
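The receiver-side behaviour described above can be sketched very simply.
This is an illustrative toy, not code from any real TCP stack, assuming a
negotiated Ack Ratio R in the spirit of RFC 5690 (one cumulative ack per R
in-order full-sized data packets instead of the usual one per two):

```python
# Sketch of a receiver honoring a negotiated ack ratio. Caveats such as
# acking isolated or final packets, and acking every packet during
# loss recovery, are omitted for brevity.

def should_ack(packets_since_last_ack, ack_ratio):
    return packets_since_last_ack >= ack_ratio

acks_sent = 0
pending = 0
for _ in range(12):              # 12 data packets arrive in order
    pending += 1
    if should_ack(pending, 4):   # negotiated ratio: 1 ack per 4 packets
        acks_sent += 1
        pending = 0
# 12 packets at a ratio of 4 produce 3 acks instead of the usual 6
```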
If ack reductions are so very valuable, what's the chance of doing that on
an end to end basis instead of in the network?
On Oct 7, 2015, at 2:13 PM, David Lang <[email protected]> wrote:
On Wed, 7 Oct 2015, Jonathan Morton wrote:
On 7 Oct, 2015, at 23:40, Agarwal, Anil <[email protected]>
wrote:
Since the cable modem link will lead to clumped ACKs the difference
between sending 100 ACKs vs. 1 ACK is probably not that big...
(except w.r.t. reliability).
The difference may not be big in the spacing of new packets that a
sender will send, unless the sender implements some sort of pacing or
if the return link is very thin.
But with ABC, there will be a difference in the amount of cwnd
increase at the sender.
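A quick illustration of this point, assuming slow start under Appropriate
Byte Counting (RFC 3465), where each ack may grow cwnd by at most L*SMSS
bytes (L=2 recommended). The numbers are illustrative, not from any real
stack:

```python
# Why ack thinning changes cwnd growth under ABC: the per-ack cap means
# one stretch ack covering N segments credits less cwnd growth than N
# segments acknowledged across several normal acks.

SMSS = 1460
L = 2  # RFC 3465 recommended slow-start per-ack limit

def abc_slow_start_increase(bytes_acked):
    return min(bytes_acked, L * SMSS)

# 10 segments acknowledged by 5 normal delayed acks (2 segments each):
per_ack = sum(abc_slow_start_increase(2 * SMSS) for _ in range(5))
# the same 10 segments acknowledged by a single stretch ack:
stretch = abc_slow_start_increase(10 * SMSS)
# per_ack credits 10*SMSS of cwnd growth; stretch credits only 2*SMSS
```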
There is also a potential difference for detecting packet loss in the
forward direction. It's entirely possible that thinning would cause
a DupAck condition to be recognised only after three MAC grants in the
reverse direction have elapsed, rather than one. Receivers are
REQUIRED to send an ack for every received packet under these
conditions, but that would be subverted by the modem. AckCC would not
induce this effect, because the receiver would still produce the extra
acks as required.
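A toy illustration of the delay being described, with assumed numbers (the
dupack count per grant and the 3-dupack fast-retransmit threshold are the
only real constants here):

```python
# Fast retransmit fires after 3 duplicate acks. If many dupacks ride in
# one reverse-direction MAC grant, the sender sees the threshold after a
# single grant; if the modem thins them to one dupack per grant, three
# grants must elapse before retransmission starts.

DUPACK_THRESHOLD = 3

def grants_until_fast_retransmit(dupacks_per_grant):
    seen, grants = 0, 0
    while seen < DUPACK_THRESHOLD:
        grants += 1
        seen += dupacks_per_grant
    return grants

# untreated: e.g. 10 dupacks arrive in the first grant -> 1 grant
# thinned to one dupack per grant -> 3 grants, i.e. two extra
# MAC-grant delays before loss recovery begins
```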
Packet loss causes head-of-line blocking at the application level,
which is perceived as latency and jerkiness by the end-user, until the
lost packet is retransmitted and actually arrives. Hence the addition
of two MAC grant delays (60ms?) may make the difference between an
imperceptible problem and a noticeable one.
and excessive ack traffic causes congestion and results in packet loss
on real-world highly asymmetric links.
So things that reduce the flow of acks can result in very real benefits
to users.
David Lang
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
_______________________________________________
tcpm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/tcpm