Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
                    B     D     R     Y
--
mean TCPRecovLat    3s   -7%  +39%  +38%
mean TCPRecovLat2  52s   +1%  -11%  -11%

This is indeed very interesting and somewhat unexpected. Do you have any clue why Y is as bad as R and so much worse than B? By my understanding I would have expected Y to be similar to B. At least tests on the mean response waiting time of sender-limited flows show hardly any difference from B (as expected). Also, is a potentially longer time in TCPRecovLat such a bad thing, considering your information on HTTP response performance?
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
On Tue, Jun 21, 2016 at 10:53 PM, Yuchung Cheng wrote:
>
> On Fri, Jun 17, 2016 at 11:56 AM, Yuchung Cheng wrote:
> >
> > On Fri, Jun 17, 2016 at 11:32 AM, David Miller wrote:
> > >
> > > From: Daniel Metz
> > > Date: Wed, 15 Jun 2016 20:00:03 +0200
> > >
> > > > This patch adjusts Linux RTO calculation to be RFC6298 Standard
> > > > compliant. MinRTO is no longer added to the computed RTO, RTO damping
> > > > and overestimation are decreased.
> > > ...
> > >
> > > Yuchung, I assume I am waiting for you to do the testing you said
> > > you would do for this patch, right?
> > Yes, I spent the last two days resolving some unrelated glitches to
> > start my testing on Web servers. I should be able to get some results
> > over the weekend.
> >
> > I will test
> > 0) current Linux
> > 1) this patch
> > 2) RFC6298 with min_RTO=1sec
> > 3) RFC6298 with minimum RTTVAR of 200ms (so it is more like the current
> > Linux style of min RTO, which only applies to RTTVAR)
> >
> > and collect the TCP latency (how long to send an HTTP response) and
> > (spurious) timeout & retransmission stats.
>
> Thanks for the patience. I've collected data from some Google Web
> servers. They serve a mix of US and SouthAm users using
> HTTP1 and HTTP2. The traffic is Web browsing (e.g., search, maps,
> gmail, etc., but not YouTube videos). The mean RTT is about 100ms.
>
> The user connections were split into 4 groups with different TCP RTO
> configs. Each group has many millions of connections, but the
> size variation among groups is well under 1%.
>
> B: baseline Linux
> D: this patch
> R: change RTTVAR averaging as in D, but bound RTO to 1sec per RFC6298
> Y: change RTTVAR averaging as in D, but bound RTTVAR to 200ms instead (like B)
>
> For mean TCP latency of HTTP responses (first byte sent to last byte
> acked), B < R < Y < D. But the differences are insignificant (<1%).
> The median, 95pctl, and 99pctl show similar indifference. In summary,
> there is hardly any visible impact on latency.
> I also looked at only responses
> less than 4KB but did not see a different picture.
>
> The main difference is the retransmission rate, where R =~ Y < B =~ D.
> R and Y are ~20% lower than B and D. Parsing the SNMP stats reveals
> more interesting details. The table shows the deltas in percentage
> relative to the baseline B.
>
>                D     R     Y
> --
> Timeout      +12%  -16%  -16%
> TailLossProb +28%   -7%   -7%
> DSACK_rcvd   +37%   -7%   -7%
> Cwnd-undo    +16%  -29%  -29%
>
> The RTO change affects TLP because TLP uses the min of the RTO and the
> TLP timer value to arm the probe timer.
>
> The stats indicate that the main culprit of spurious timeouts / rtx is
> the RTO lower bound. But they also show that the RFC RTTVAR averaging is
> as good as the current Linux approach.
>
> Given that, I would recommend we revise this patch to use the RFC
> averaging but keep the existing lower bound (of RTTVAR to 200ms). We can
> further experiment with the lower bound and change that in a separate
> patch.

Hi, I have some updates. I instrumented the kernel to capture the time spent
in recovery (attached). The latency measurement starts when TCP goes into
recovery, triggered by either ACKs or RTOs. The start time is the (original)
sent time of the first unacked packet. The end time is when the ACK covers
the highest sequence sent when recovery started. The total latency in usec
and the count are recorded in MIB_TCPRECOVLAT and MIB_TCPRECOVCNT. If the
connection times out or closes while the sender is still in recovery, the
total latency and count are instead stored in MIB_TCPRECOVLAT2 and
MIB_TCPRECOVCNT2. This second bucket captures long recoveries that led to
eventual connection aborts. Since network stats usually follow a power-law
distribution, the mean of such a distribution is going to be dominated by
the tail, but the new metrics still show very interesting impacts of the
different RTOs. Using the same table format as my previous email, this table
shows the difference in percentage relative to the baseline.
                    B     D     R     Y
--
mean TCPRecovLat    3s   -7%  +39%  +38%
mean TCPRecovLat2  52s   +1%  -11%  -11%

The new metrics show that lower-bounding the RTO to 200ms (D) indeed lowers
the latency. But per my previous analysis, D has a lot more spurious rtx and
TLPs (whose collateral damage on latency is not captured by these metrics).
And note that the TLP timer uses the min of the RTO and the TLP timeout, so
TLP fires 28% more often in (D). Therefore the latency may mainly benefit
from a faster TLP timer. Nevertheless, the significant impacts on recovery
latency do not show up in the response latency we measured earlier. My
conjecture is that only a small fraction of flows experience losses, so even
a 40% increase on average in loss recovery does not move the needle, or the latency
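For readers following along, here is a minimal userspace sketch of the recovery-latency accounting described above. The MIB names come from this thread; the struct layout and function names are purely illustrative, not the attached kernel patch:

```c
#include <stdint.h>

/* Two buckets: normal recoveries that complete, and recoveries that
 * were still in progress when the connection timed out or closed. */
struct recov_stats {
    uint64_t lat_us,  cnt;    /* MIB_TCPRECOVLAT  / MIB_TCPRECOVCNT  */
    uint64_t lat2_us, cnt2;   /* MIB_TCPRECOVLAT2 / MIB_TCPRECOVCNT2 */
};

struct recovery {
    uint64_t start_us;   /* original send time of first unacked packet */
    uint32_t end_seq;    /* highest sequence sent when recovery began  */
    int      active;
};

/* Entering recovery (triggered by ACKs or an RTO). */
static void recov_enter(struct recovery *r, uint64_t first_sent_us,
                        uint32_t high_seq)
{
    r->start_us = first_sent_us;
    r->end_seq = high_seq;
    r->active = 1;
}

/* An ACK arrived; recovery ends once it covers end_seq. */
static void recov_ack(struct recovery *r, struct recov_stats *st,
                      uint32_t ack_seq, uint64_t now_us)
{
    if (r->active && ack_seq >= r->end_seq) {
        st->lat_us += now_us - r->start_us;
        st->cnt++;
        r->active = 0;
    }
}

/* Connection timed out or closed while still in recovery: second bucket. */
static void recov_abort(struct recovery *r, struct recov_stats *st,
                        uint64_t now_us)
{
    if (r->active) {
        st->lat2_us += now_us - r->start_us;
        st->cnt2++;
        r->active = 0;
    }
}
```

The second bucket matters precisely because of the tail-dominated distribution mentioned above: aborted recoveries are the longest ones, and folding them into the first bucket would skew its mean.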
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
On Wed, Jun 22, 2016 at 4:21 AM, Hagen Paul Pfeifer wrote:
>
> > On June 22, 2016 at 7:53 AM Yuchung Cheng wrote:
> >
> > Thanks for the patience. I've collected data from some Google Web
> > servers. They serve a mix of US and SouthAm users using
> > HTTP1 and HTTP2. The traffic is Web browsing (e.g., search, maps,
> > gmail, etc., but not YouTube videos). The mean RTT is about 100ms.
> >
> > The user connections were split into 4 groups with different TCP RTO
> > configs. Each group has many millions of connections, but the
> > size variation among groups is well under 1%.
> >
> > B: baseline Linux
> > D: this patch
> > R: change RTTVAR averaging as in D, but bound RTO to 1sec per RFC6298
> > Y: change RTTVAR averaging as in D, but bound RTTVAR to 200ms instead
> > (like B)
> >
> > For mean TCP latency of HTTP responses (first byte sent to last byte
> > acked), B < R < Y < D. But the differences are insignificant (<1%).
> > The median, 95pctl, and 99pctl show similar indifference. In summary,
> > there is hardly any visible impact on latency. I also looked at only
> > responses less than 4KB but did not see a different picture.
> >
> > The main difference is the retransmission rate, where R =~ Y < B =~ D.
> > R and Y are ~20% lower than B and D. Parsing the SNMP stats reveals
> > more interesting details. The table shows the deltas in percentage
> > relative to the baseline B.
> >
> >                D     R     Y
> > --
> > Timeout      +12%  -16%  -16%
> > TailLossProb +28%   -7%   -7%
> > DSACK_rcvd   +37%   -7%   -7%
> > Cwnd-undo    +16%  -29%  -29%
> >
> > The RTO change affects TLP because TLP uses the min of the RTO and the
> > TLP timer value to arm the probe timer.
> >
> > The stats indicate that the main culprit of spurious timeouts / rtx is
> > the RTO lower bound. But they also show that the RFC RTTVAR averaging is
> > as good as the current Linux approach.
> >
> > Given that, I would recommend we revise this patch to use the RFC
> > averaging but keep the existing lower bound (of RTTVAR to 200ms).
> > We can
> > further experiment with the lower bound and change that in a separate
> > patch.
>
> Great news Yuchung!
>
> Then Daniel will prepare v4 with a min-rto lower bound:
>
>   max(RTTVAR, tcp_rto_min_us(struct sock))
>
> Any further suggestions Yuchung, Eric? We will also feed this v4 into our
> test environment to check the behavior for sender-limited, non-continuous
> flows.

Yes, a small one: I think the patch should change __tcp_set_rto() instead of
tcp_set_rto() so it applies to recurring timeouts as well.

>
> Hagen
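To illustrate the bound Hagen proposes, here is a rough userspace sketch: RFC 6298 averaging kept as in the patch, but with the variance term lower-bounded by the existing per-socket min RTO (200ms by default) rather than bounding the whole RTO at 1 sec. The helper name and signature are hypothetical; the real change would live in __tcp_set_rto():

```c
#include <stdint.h>

/* Hypothetical sketch of the proposed v4 bound, not actual kernel code:
 * rto = SRTT + max(variance term, min-RTO). With srtt and the variance
 * term in microseconds, rto_min_us plays the role of tcp_rto_min_us(sk). */
static int64_t rto_v4_us(int64_t srtt_us, int64_t rttvar_term_us,
                         int64_t rto_min_us)
{
    int64_t var = rttvar_term_us > rto_min_us ? rttvar_term_us : rto_min_us;
    return srtt_us + var;
}
```

With a 100ms SRTT and a small variance term, the floor keeps the RTO at 300ms instead of letting it collapse toward the SRTT, which is the behavior the experiment data above favors.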
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
> On June 22, 2016 at 7:53 AM Yuchung Cheng wrote:
>
> Thanks for the patience. I've collected data from some Google Web
> servers. They serve a mix of US and SouthAm users using
> HTTP1 and HTTP2. The traffic is Web browsing (e.g., search, maps,
> gmail, etc., but not YouTube videos). The mean RTT is about 100ms.
>
> The user connections were split into 4 groups with different TCP RTO
> configs. Each group has many millions of connections, but the
> size variation among groups is well under 1%.
>
> B: baseline Linux
> D: this patch
> R: change RTTVAR averaging as in D, but bound RTO to 1sec per RFC6298
> Y: change RTTVAR averaging as in D, but bound RTTVAR to 200ms instead (like B)
>
> For mean TCP latency of HTTP responses (first byte sent to last byte
> acked), B < R < Y < D. But the differences are insignificant (<1%).
> The median, 95pctl, and 99pctl show similar indifference. In summary,
> there is hardly any visible impact on latency. I also looked at only
> responses less than 4KB but did not see a different picture.
>
> The main difference is the retransmission rate, where R =~ Y < B =~ D.
> R and Y are ~20% lower than B and D. Parsing the SNMP stats reveals
> more interesting details. The table shows the deltas in percentage
> relative to the baseline B.
>
>                D     R     Y
> --
> Timeout      +12%  -16%  -16%
> TailLossProb +28%   -7%   -7%
> DSACK_rcvd   +37%   -7%   -7%
> Cwnd-undo    +16%  -29%  -29%
>
> The RTO change affects TLP because TLP uses the min of the RTO and the
> TLP timer value to arm the probe timer.
>
> The stats indicate that the main culprit of spurious timeouts / rtx is
> the RTO lower bound. But they also show that the RFC RTTVAR averaging is
> as good as the current Linux approach.
>
> Given that, I would recommend we revise this patch to use the RFC
> averaging but keep the existing lower bound (of RTTVAR to 200ms). We can
> further experiment with the lower bound and change that in a separate
> patch.

Great news Yuchung!
Then Daniel will prepare v4 with a min-rto lower bound:

  max(RTTVAR, tcp_rto_min_us(struct sock))

Any further suggestions Yuchung, Eric? We will also feed this v4 into our
test environment to check the behavior for sender-limited, non-continuous
flows.

Hagen
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
On Fri, Jun 17, 2016 at 11:56 AM, Yuchung Cheng wrote:
>
> On Fri, Jun 17, 2016 at 11:32 AM, David Miller wrote:
> >
> > From: Daniel Metz
> > Date: Wed, 15 Jun 2016 20:00:03 +0200
> >
> > > This patch adjusts Linux RTO calculation to be RFC6298 Standard
> > > compliant. MinRTO is no longer added to the computed RTO, RTO damping
> > > and overestimation are decreased.
> > ...
> >
> > Yuchung, I assume I am waiting for you to do the testing you said
> > you would do for this patch, right?
> Yes, I spent the last two days resolving some unrelated glitches to
> start my testing on Web servers. I should be able to get some results
> over the weekend.
>
> I will test
> 0) current Linux
> 1) this patch
> 2) RFC6298 with min_RTO=1sec
> 3) RFC6298 with minimum RTTVAR of 200ms (so it is more like the current
> Linux style of min RTO, which only applies to RTTVAR)
>
> and collect the TCP latency (how long to send an HTTP response) and
> (spurious) timeout & retransmission stats.

Thanks for the patience. I've collected data from some Google Web servers.
They serve a mix of US and SouthAm users using HTTP1 and HTTP2. The traffic
is Web browsing (e.g., search, maps, gmail, etc., but not YouTube videos).
The mean RTT is about 100ms.

The user connections were split into 4 groups with different TCP RTO
configs. Each group has many millions of connections, but the size
variation among groups is well under 1%.

B: baseline Linux
D: this patch
R: change RTTVAR averaging as in D, but bound RTO to 1sec per RFC6298
Y: change RTTVAR averaging as in D, but bound RTTVAR to 200ms instead (like B)

For mean TCP latency of HTTP responses (first byte sent to last byte
acked), B < R < Y < D. But the differences are insignificant (<1%). The
median, 95pctl, and 99pctl show similar indifference. In summary, there is
hardly any visible impact on latency. I also looked at only responses less
than 4KB but did not see a different picture.

The main difference is the retransmission rate, where R =~ Y < B =~ D.
R and Y are ~20% lower than B and D. Parsing the SNMP stats reveals more
interesting details. The table shows the deltas in percentage relative to
the baseline B.

               D     R     Y
--
Timeout      +12%  -16%  -16%
TailLossProb +28%   -7%   -7%
DSACK_rcvd   +37%   -7%   -7%
Cwnd-undo    +16%  -29%  -29%

The RTO change affects TLP because TLP uses the min of the RTO and the TLP
timer value to arm the probe timer.

The stats indicate that the main culprit of spurious timeouts / rtx is the
RTO lower bound. But they also show that the RFC RTTVAR averaging is as
good as the current Linux approach.

Given that, I would recommend we revise this patch to use the RFC averaging
but keep the existing lower bound (of RTTVAR to 200ms). We can further
experiment with the lower bound and change that in a separate patch.
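To make the compared configurations concrete, here is a minimal userspace sketch of the RFC 6298 smoothing together with the two lower-bound placements discussed above (R bounds the final RTO; Y bounds only the variance term, which is roughly how current Linux applies its min RTO). Constants and names are illustrative; this is not the patch itself:

```c
#include <stdint.h>

#define RTO_MIN_US   200000   /* Linux-style 200ms floor (assumed)      */
#define RFC_MIN_US  1000000   /* RFC 6298 1 sec floor on the final RTO  */

struct rtt_est {
    int64_t srtt_us;    /* smoothed RTT                */
    int64_t rttvar_us;  /* smoothed RTT mean deviation */
    int     init;
};

/* RFC 6298 section 2 EWMA update: alpha = 1/8, beta = 1/4. */
static void rtt_update(struct rtt_est *e, int64_t m_us)
{
    if (!e->init) {
        e->srtt_us = m_us;
        e->rttvar_us = m_us / 2;
        e->init = 1;
        return;
    }
    int64_t err = e->srtt_us > m_us ? e->srtt_us - m_us : m_us - e->srtt_us;
    e->rttvar_us += (err - e->rttvar_us) / 4;
    e->srtt_us   += (m_us - e->srtt_us) / 8;
}

/* R: RTO = SRTT + 4*RTTVAR, bounded below by 1 sec per RFC 6298. */
static int64_t rto_r(const struct rtt_est *e)
{
    int64_t rto = e->srtt_us + 4 * e->rttvar_us;
    return rto > RFC_MIN_US ? rto : RFC_MIN_US;
}

/* Y: same averaging, but floor the variance term at 200ms instead. */
static int64_t rto_y(const struct rtt_est *e)
{
    int64_t term = 4 * e->rttvar_us;
    if (term < RTO_MIN_US)
        term = RTO_MIN_US;
    return e->srtt_us + term;
}
```

On a steady 100ms path the difference is stark: R pins the RTO at the 1 sec floor, while Y settles around 300ms, which is consistent with R's higher recovery latency despite its similar retransmission stats.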
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
On Fri, Jun 17, 2016 at 11:32 AM, David Miller wrote:
>
> From: Daniel Metz
> Date: Wed, 15 Jun 2016 20:00:03 +0200
>
> > This patch adjusts Linux RTO calculation to be RFC6298 Standard
> > compliant. MinRTO is no longer added to the computed RTO, RTO damping
> > and overestimation are decreased.
> ...
>
> Yuchung, I assume I am waiting for you to do the testing you said
> you would do for this patch, right?

Yes, I spent the last two days resolving some unrelated glitches to start
my testing on Web servers. I should be able to get some results over the
weekend.

I will test
0) current Linux
1) this patch
2) RFC6298 with min_RTO=1sec
3) RFC6298 with minimum RTTVAR of 200ms (so it is more like the current
Linux style of min RTO, which only applies to RTTVAR)

and collect the TCP latency (how long to send an HTTP response) and
(spurious) timeout & retransmission stats.

I didn't respond to Hagen's email yet b/c I thought data would help the
discussion better :-)
Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation
From: Daniel Metz
Date: Wed, 15 Jun 2016 20:00:03 +0200

> This patch adjusts Linux RTO calculation to be RFC6298 Standard
> compliant. MinRTO is no longer added to the computed RTO, RTO damping
> and overestimation are decreased.
...

Yuchung, I assume I am waiting for you to do the testing you said you
would do for this patch, right?