> On 29 Sep, 2021, at 2:17 am, Dave Taht <[email protected]> wrote:
>
> In today's rpm meeting I didn't quite manage to make a complicated point.
> This long-ago proposal of matt mathis's has often intrigued (inspired?
> frightened?) me:
>
> https://datatracker.ietf.org/doc/html/draft-mathis-iccrg-relentless-tcp-00
>
> where he proposed that a tcp variant have no response at all to loss or
> markings, merely replacing lost segments as they are requested, continually
> ramping up until the network basically explodes.
I think "no response at all" is overstating it.  Right in the abstract, it
is described as removing the lost segments from the cwnd; i.e. only acked
segments result in new segments being transmitted (modulo the 2-segment
minimum).  In this sense, Relentless TCP is an AIAD algorithm much like
DCTCP, to be classified distinctly from Reno (AIMD) and Scalable TCP
(MIMD).  From the draft's abstract:

    Relentless congestion control is a simple modification that can be
    applied to almost any AIMD style congestion control: instead of
    applying a multiplicative reduction to cwnd after a loss, cwnd is
    reduced by the number of lost segments.  It can be modeled as a
    strict implementation of van Jacobson's Packet Conservation
    Principle.  During recovery, new segments are injected into the
    network in exact accordance with the segments that are reported to
    have been delivered to the receiver by the returning ACKs.

Obviously, an AIAD congestion control would not coexist nicely with
AIMD-based traffic; we know this directly from experience with DCTCP.  It
therefore cannot be recommended for general use on the Internet, which is
acknowledged extensively in Mathis' draft.

> In the context of *testing* bidirectional network behaviors in particular,
> seeing tcp tested more than unicast udp has been, in more labs, has long been
> on my mind.

Yes, as a tool specifically for testing with, and distributed with copious
warnings against attempting to use it more generally, this might be
interesting.

 - Jonathan Morton

_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat
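The distinction discussed above (multiplicative vs additive loss response) can be sketched in a few lines.  This is a hedged illustration only, not code from the draft or any real stack; the function names and the clamping to the 2-segment minimum are my own framing of the behaviour described in the abstract.

```python
def reno_on_loss(cwnd: float, lost_segments: int) -> float:
    """AIMD-style response: halve cwnd on a loss event, regardless of
    how many segments were actually lost (illustrative, per Reno)."""
    return max(2.0, cwnd / 2)

def relentless_on_loss(cwnd: float, lost_segments: int) -> float:
    """Relentless-style response: reduce cwnd by exactly the number of
    lost segments, clamped at the 2-segment minimum mentioned above."""
    return max(2.0, cwnd - lost_segments)

# With cwnd = 100 segments and 3 segments lost:
#   Reno drops to 50, Relentless only to 97.
```

The point the sketch makes concrete: because Relentless only removes the lost segments themselves from cwnd, it concedes almost nothing to a competing AIMD flow that halves its window on the same loss event, which is why it cannot coexist fairly with Reno-style traffic.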
