On Fri, 9 Oct 2015, Christian Huitema wrote:
> All that discussion is fun, but we should be careful to stay within the
> respective group charters.
> As far as AQM is concerned, the question is whether the group wants to
> standardize some kind of special handling of TCP ACK as part of queue
> management. As far as the IETF rules are concerned, the answer is clearly NO
> -- we cannot create an IETF recommendation that breaks other IETF
> recommendations. Besides, such rules would not be very effective if the
> transport protocol is encrypted, as for example QUIC, or SCTP over DTLS. AQM
> should certainly not depend on end-systems not using encryption.
Agreed, AQM should not depend on end-systems not using encryption. But that
doesn't mean AQM can't take advantage of the additional data it can get from
unencrypted sessions.
For example, in fq_codel/cake development we're finding that some transports
bundle very large numbers of packets together to send at one time in order to
maximize transport bandwidth (for example, 4x4 wifi sends a LOT of data in a
single transmit timeslot). Treating such a large aggregate as a single packet
seriously hurts fairness and latency at the next hop. So 'pulling apart' the
aggregate into its individual packets/streams and making decisions based on
the pieces ends up being a serious win for both fairness and latency.
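To make the idea concrete, here's a toy sketch (this is NOT fq_codel/cake
code; packet fields and the flow key are made up for illustration) of pulling
an aggregate apart into per-flow queues and dequeuing round-robin, so one
flow's big wifi burst can't monopolize the next hop:

```python
# Hypothetical sketch: split a received aggregate into per-flow queues,
# then dequeue one packet per flow per round (a crude DRR-like pass).
from collections import defaultdict, deque

def flow_key(pkt):
    # illustrative 5-tuple; real classifiers hash the actual headers
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def dequeue_fairly(aggregate):
    queues = defaultdict(deque)
    for pkt in aggregate:              # pull the aggregate apart by flow
        queues[flow_key(pkt)].append(pkt)
    out = []
    while queues:                      # round-robin: one packet per flow
        for key in list(queues):
            out.append(queues[key].popleft())
            if not queues[key]:
                del queues[key]
    return out
```

With an aggregate of three packets from flow A and one from flow B, flow B's
packet comes out second instead of last -- that reordering across flows is
exactly the fairness win described above.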
> As for TCP and other protocols, the question is whether they should pay more
> attention to the volume of ACK and other control packets. The deployment of
> queue management systems like FQ-CODEL actually creates an incentive to do
> that, because a transport protocol that creates congestion on its uplink will
> be automatically penalized. But that discussion belongs in TCPM and other
> transport working groups, not AQM.
I can't say where the discussion needs to take place, but people need to
realize that a transport flooding its uplink with ACKs isn't just hurting
itself. In fact, it's probably not the one that gets hurt the most.
Most AQM systems tend to (at least slightly) prioritize ACKs, DNS responses,
and other small packets, because letting them get delayed has such a
disproportionate impact on overall throughput and user experience. As a
result, an overload of ACKs is probably going to end up impacting other uses
of the link instead.
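A minimal sketch of that implicit boost (hypothetical -- the two-band
structure and the 128-byte cutoff are made up for illustration, not taken
from any deployed AQM):

```python
# Hypothetical sketch: a two-band queue where small packets (ACKs, DNS
# replies) bypass bulk data. If ACKs flood the fast band, it's the OTHER
# small traffic sharing that band that pays the price.
from collections import deque

SMALL_PKT_BYTES = 128  # assumed cutoff, purely illustrative

class TwoBandQueue:
    def __init__(self):
        self.fast = deque()   # small/control packets
        self.bulk = deque()   # everything else

    def enqueue(self, pkt):
        band = self.fast if pkt["size"] <= SMALL_PKT_BYTES else self.bulk
        band.append(pkt)

    def dequeue(self):
        # strict priority: drain the fast band first
        if self.fast:
            return self.fast.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None
```

Enqueue a 1500-byte data packet and then a 64-byte ACK, and the ACK is
dequeued first -- which is exactly why a flood of ACKs lands on the same
fast path as everyone else's DNS replies.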
Suggesting that the queues that build up are a special enough case to
consider thinning out duplicate ACKs is a far cry from 'making a
recommendation that breaks other recommendations'.
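For what "thinning" could mean in a queue, here's a hypothetical sketch
(field names are invented; a real ACK filter, e.g. cake's, also has to
preserve SACK- and ECN-carrying ACKs, which this deliberately ignores):

```python
# Hypothetical sketch of ACK thinning: when several pure ACKs for the
# same flow sit in the queue, keep only the newest cumulative ACK, since
# it supersedes the older ones.
def thin_acks(queue):
    newest = {}   # flow key -> index in `keep` of the latest pure ACK
    keep = []
    for pkt in queue:
        if pkt.get("pure_ack"):
            key = (pkt["src"], pkt["dst"])
            if key in newest:
                keep[newest[key]] = None   # older ACK is now redundant
            newest[key] = len(keep)
        keep.append(pkt)
    return [p for p in keep if p is not None]
```

Given a queue of [ACK 100, data, ACK 200] for one flow, the older ACK is
dropped and the data packet plus ACK 200 survive -- the cumulative
acknowledgement the receiver cares about is unchanged.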
David Lang
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm