Hi,

I was explaining the bufferbloat problem to some undergrad students, showing them the "Bufferbloat: Dark Buffers in the Internet" paper. I asked them to find a solution to the problem, and someone pointed at Fig. 1 and said: "That's easy. All you have to do is operate at the sweet spot where throughput is maximum and delay is minimum."
It seemed like a good idea to me, and I tried to think of a way to force TCP to operate close to that optimal point. The idea is to increase the congestion window until it is larger than the optimal one, and at that point start decreasing it until it is smaller than the optimal one, so the window keeps oscillating around the optimum. To be more specific, TCP would at any time be either increasing or decreasing the congestion window; in other words, it would be moving in one direction (right or left) along the x axis of Fig. 1 of Gettys's paper. Each RTT, performance is measured in terms of delay and throughput. If there is a performance improvement, we keep moving in the same direction; if there is a performance loss, we change direction.

I tried to explain the algorithm here: https://github.com/jbarcelo/sweet-tcp-paper/blob/master/document.pdf?raw=true

I am not an expert on TCP, so I decided to share it with this list to get some expert opinions.
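To make the idea concrete, here is a minimal, self-contained Python sketch of the control loop as I understand it. Everything in it is an assumption for illustration rather than something taken from the draft: a single fluid-model bottleneck of rate C with base RTT D, performance measured as Kleinrock's power metric (throughput/delay), and a fixed one-packet step per RTT:

```python
# Toy simulation of the hill-climbing idea described above.
# Assumptions (mine, not from the draft): a single bottleneck of
# rate C with base round-trip delay D, performance measured as
# throughput/delay (Kleinrock's "power"), and a +/-1 packet step
# per RTT.

C = 100.0    # bottleneck capacity, packets per second (assumed)
D = 0.1      # base round-trip propagation delay, seconds (assumed)
BDP = C * D  # bandwidth-delay product: the "sweet spot" window

def measure(cwnd):
    """Model throughput and delay for a given window (fluid model)."""
    if cwnd <= BDP:
        return cwnd / D, D   # link underused, no queueing delay
    return C, cwnd / C       # link saturated, queue builds up

def sweet_tcp(cwnd=1.0, step=1.0, rtts=40):
    direction = +1           # +1 = grow the window, -1 = shrink it
    prev_power = 0.0
    for _ in range(rtts):
        throughput, delay = measure(cwnd)
        power = throughput / delay   # performance metric (assumed)
        print(f"cwnd={cwnd:5.1f}  throughput={throughput:6.1f}  "
              f"delay={delay:.3f}s")
        if power < prev_power:
            direction = -direction   # performance dropped: reverse
        prev_power = power
        cwnd = max(1.0, cwnd + direction * step)

if __name__ == "__main__":
    sweet_tcp()
```

With these (made-up) numbers, the window climbs to the bandwidth-delay product of 10 packets and then oscillates one packet above and below it, which is the "moving right and left along the x axis" behaviour I was trying to describe.

Thanks,
Jaume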