tcp_tso_should_defer()'s first heuristic is to not defer
if the last send is "old enough".

Its current implementation uses jiffies and inherits their low
granularity.

TSO autodefer performance should not rely on kernel HZ :/
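
For concreteness (numbers mine, not part of the original
changelog): one jiffy is 1000/HZ ms, so the old one-tick test
lets deferral run for up to a full tick before firing:

    HZ=1000 -> 1 jiffy =  1 ms
    HZ=250  -> 1 jiffy =  4 ms
    HZ=100  -> 1 jiffy = 10 ms

A HZ=100 kernel could thus defer roughly 10x longer than a
HZ=1000 one before this heuristic kicked in.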

After EDT conversion, we have state variables in nanoseconds
that allow us to properly implement the heuristic.
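
As a sketch of what the change below does (annotations mine,
paraphrased from the struct tcp_sock field comments; delta is
an s64 and NSEC_PER_MSEC is 1000000):

	/* tp->tcp_clock_cache: cached tcp_clock_ns() value, i.e. "now".
	 * tp->tcp_wstamp_ns:   departure time (EDT) of the last queued
	 *                      skb; it can be in the future while packets
	 *                      sit in a qdisc or device awaiting delivery.
	 * Allow defer only if the last write is less than 1 ms old:
	 */
	delta = tp->tcp_clock_cache - tp->tcp_wstamp_ns - NSEC_PER_MSEC;
	if (delta > 0)		/* older than 1 ms -> do not defer */
		goto send_now;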

This patch increases TSO chunk sizes on medium rate flows,
especially when receivers do not use GRO or similar aggregation.

It also reduces bursts for HZ=100 or HZ=250 kernels, making TCP
behavior more uniform.

Signed-off-by: Eric Dumazet <eduma...@google.com>
Acked-by: Soheil Hassas Yeganeh <soh...@google.com>
---
 net/ipv4/tcp_output.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 78a56cef7e397c65ad18897d550f3800e8fe8f41..75dcf4daca724a6819e9ecc9d0f3e6dc6df72e9b 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1920,9 +1920,12 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
                goto send_now;
 
        /* Avoid bursty behavior by allowing defer
-        * only if the last write was recent.
+        * only if the last write was recent (1 ms).
+        * Note that tp->tcp_wstamp_ns can be in the future if we have
+        * packets waiting in a qdisc or device for EDT delivery.
         */
-       if ((s32)(tcp_jiffies32 - tp->lsndtime) > 0)
+       delta = tp->tcp_clock_cache - tp->tcp_wstamp_ns - NSEC_PER_MSEC;
+       if (delta > 0)
                goto send_now;
 
        in_flight = tcp_packets_in_flight(tp);
-- 
2.19.1.930.g4563a0d9d0-goog
