Rainer Toebbicke wrote:
Looking at some of the Rx performance problems that we see, in particular with respect to single-stream throughput, I stumbled across the following: upon receipt of an ACK the sender retains all ACKed packets above tfirst in the transmission queue (i.e. until they are hard-ACKed).

The question is: why? The packet has been received by the Rx stack at the receiver, albeit not yet delivered to the "consumer". There is certainly a point in throttling transfer at the sender for congestion, hence the transmission and congestion windows, but why would the sender ever need to resend a packet the receiver already has? And if it did, what would be the mechanism? The only possibility I see is that the receiver explicitly NACKs the packet again later, but that cannot happen unless the receiver drops a packet it has already received, and I don't see the latter happening. Am I struck with blindness, or is this ancient history?

Given that long queues in conjunction with big window sizes are a drag, I plan to short-circuit this. Any comments?
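As a minimal sketch of the behaviour in question (hypothetical C, not the actual Rx transmit-queue code; struct tx_packet, release_hard_acked and release_all_acked are invented names), the first routine below frees only packets below the hard-ACK point tfirst, while the second also drops packets the receiver has reported in a soft ACK, which is the proposed short-circuit:

    #include <stdio.h>
    #include <stdlib.h>

    struct tx_packet {            /* stand-in for a packet on the transmit queue */
        unsigned int seq;         /* sequence number */
        int soft_acked;           /* receiver reported it in an ACK packet */
        struct tx_packet *next;
    };

    /* Current behaviour: only packets below the hard-ACK point are released. */
    static struct tx_packet *
    release_hard_acked(struct tx_packet *head, unsigned int tfirst)
    {
        while (head && head->seq < tfirst) {
            struct tx_packet *dead = head;
            head = head->next;
            free(dead);
        }
        return head;
    }

    /* Proposed short-circuit: also drop packets the receiver has soft-ACKed,
     * since its Rx stack already holds them and will not ask for a resend. */
    static struct tx_packet *
    release_all_acked(struct tx_packet *head, unsigned int tfirst)
    {
        struct tx_packet **pp = &head;
        while (*pp) {
            if ((*pp)->seq < tfirst || (*pp)->soft_acked) {
                struct tx_packet *dead = *pp;
                *pp = dead->next;
                free(dead);
            } else {
                pp = &(*pp)->next;
            }
        }
        return head;
    }

    static struct tx_packet *
    push(struct tx_packet *head, unsigned int seq, int soft_acked)
    {
        struct tx_packet *p = malloc(sizeof(*p));
        p->seq = seq;
        p->soft_acked = soft_acked;
        p->next = head;
        return p;
    }

    static size_t qlen(struct tx_packet *head)
    {
        size_t n = 0;
        for (; head; head = head->next)
            n++;
        return n;
    }

    int main(void)
    {
        /* seq 5..8 outstanding; tfirst = 6; seq 7 was covered by a soft ACK */
        struct tx_packet *q = NULL;
        q = push(q, 8, 0);
        q = push(q, 7, 1);
        q = push(q, 6, 0);
        q = push(q, 5, 0);

        q = release_hard_acked(q, 6);   /* frees seq 5 only: 3 packets kept */
        printf("after hard-ACK release: %zu packets retained\n", qlen(q));

        q = release_all_acked(q, 6);    /* also frees soft-ACKed seq 7: 2 kept */
        printf("after short-circuit:    %zu packets retained\n", qlen(q));
        return 0;
    }

With large windows the difference is the point: the retained tail grows with the window size under the current behaviour, while the short-circuit keeps only packets that could still need retransmission.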
I believe the existing behavior is most likely an artifact. The ACKed packet might have been held onto in order to ease debugging, or because analysis of the code at the time indicated that frequently obtaining the global free-packet-queue mutex was a greater expense than holding onto the packets until an explicit ACK permitted multiple packets to be freed at once. Now that we have per-thread free packet queues, reducing the lengths of the queues will be a win.
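To illustrate that cost model (a made-up allocator sketch, not the actual rx packet allocator; free_pkt_global and free_pkt_thread are hypothetical names): with a single global free-packet queue every returned packet contends on one mutex, which made batching frees until an explicit ACK attractive, whereas a per-thread free queue makes freeing packets one at a time essentially contention-free.

    #include <pthread.h>
    #include <stdlib.h>

    struct pkt { struct pkt *next; /* ... payload ... */ };

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct pkt *global_free_list;          /* shared pool, lock required */
    static __thread struct pkt *thread_free_list; /* per-thread pool, no lock   */

    /* Old cost model: every free takes the global lock, so callers batched. */
    void free_pkt_global(struct pkt *p)
    {
        pthread_mutex_lock(&global_lock);
        p->next = global_free_list;
        global_free_list = p;
        pthread_mutex_unlock(&global_lock);
    }

    /* With per-thread free queues, freeing one packet at a time is cheap. */
    void free_pkt_thread(struct pkt *p)
    {
        p->next = thread_free_list;
        thread_free_list = p;
    }

    int main(void)
    {
        free_pkt_global(calloc(1, sizeof(struct pkt)));
        free_pkt_thread(calloc(1, sizeof(struct pkt)));
        return 0;
    }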
Clearly any patch will have to be reviewed and tested. Profiling the code with and without the change would be useful to determine what benefits are achieved.
Jeffrey Altman
