On Tue, 13 Oct 2015, Joe Touch wrote:
> David,
>
> On 10/11/2015 3:49 PM, David Lang wrote:
>>> One of the reasons we use packets is to provide more timely,
>>> fine-grained feedback between endpoints. Aggregating them at the source
>>> or in the network (rather than at the receiver) can amplify reaction to
>>> packet loss (when the aggregate ACK is lost)
>> the issue of amplified reaction to packet loss is a valid one, but
>> packet loss is still pretty rare,
> But you claim to need to drop the ACKs to avoid dropping other packets.
> What's to say that 1) the ACKs won't end up getting dropped in your
> system or 2) the ACKs won't get dropped further downstream?
>
> Again, *you* don't know.
combining ACKs further downstream should not hurt anything (as long as the
RFC3449 criteria are covered)

_dropping_ ACKs without combining them is packet loss, pure and simple, with
all the implications of that. The aggregated ACK is more significant than one of
the other ACKs that were combined into it, but reducing the number of packets
reduces the chances of needing to drop a packet. And the suggestion that was
made to dup the aggregated ACK into the next transmit slot will greatly reduce
the impact of the aggregated ACK being lost.
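To make the combine-don't-drop distinction concrete, here is a minimal sketch of per-flow ACK aggregation in a queue: among queued pure cumulative ACKs for the same flow, only the one with the highest ACK number is kept, since it subsumes the others. All names here are illustrative, not from any real implementation, and a real queue would also have to preserve SACK blocks, window updates, and ECN bits per the RFC3449 considerations.

```python
# Illustrative sketch: combine queued pure ACKs per flow rather than
# dropping ACKs outright. The newest cumulative ACK subsumes older ones.
# (Hypothetical types; real code must also preserve SACK/window/ECN state.)
from collections import OrderedDict
from dataclasses import dataclass

@dataclass
class Ack:
    flow: tuple     # (src, dst, sport, dport) identifying the flow
    ack_no: int     # cumulative ACK number

def aggregate_acks(queue):
    """Return the queue with each flow's pure ACKs combined into one."""
    latest = OrderedDict()              # flow -> newest Ack, preserves order
    for pkt in queue:
        cur = latest.get(pkt.flow)
        if cur is None or pkt.ack_no > cur.ack_no:
            latest[pkt.flow] = pkt      # newer cumulative ACK supersedes older
    return list(latest.values())

q = [Ack(("a", "b", 1, 2), 1000),
     Ack(("a", "b", 1, 2), 2000),      # subsumes the 1000 ACK above
     Ack(("c", "d", 3, 4), 500)]
print([p.ack_no for p in aggregate_acks(q)])  # -> [2000, 500]
```

The point of the sketch is that no acknowledgment information is discarded, only redundant packets; the receiver's TCP learns everything it would have learned from the full ACK stream, just in fewer segments.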
>>> and increases
>>> discretization effects in the TCP algorithms.
>> This is where I disagree with you. The ACK packets are already collapsed
>> time-wise.
> Time is only one element of the discretization. The other is fate
> sharing - dropping one aggregate ACK has the effect of dropping the
> entire set of un-aggregated ones. The stream of unaggregated ACKs gives
> TCP a robustness to drops, but you're now forcing a large burst of drops
> for a specific packet type. That's the problem.
this is why the suggestion to dup the aggregated ACK into the next transmit slot
is such a great idea (I'd have to go back through this thread and dig up who
suggested it). It almost entirely mitigates the problem of the ACK packet
getting dropped.
> We've gone round this issue for a lot of messages, but the key point is
> that this isn't a new idea, nor are the numerous considerations and
> concerns.
>
> Ultimately, if you have too many packets from a protocol that reacts to
> congestion, the correct solution is to provide that feedback and let the
> ends figure out how to respond. That's the only solution that will work
> when (not if) new messages or encrypted headers enter the picture.
>
> These kinds of in-network hacks are what's really killing the ability of
> the Internet to evolve.
If people are faced with a choice between existing protocols not working well
and local modifications that make things work but limit the ability of the
"Internet to evolve", the local modifications are going to win every time. This
is the Internet evolving, just not in the way that you want.
David Lang
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm