Oleksandr Natalenko reported performance issues with BBR when the FQ
packet scheduler is not used; they were root-caused to the lack of SG
and GSO/TSO in his configuration.

In this mode, TCP internal pacing has to set up a high-resolution timer
for each MSS sent.
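
For reference, the internal pacing path looks roughly like this
(simplified from tcp_internal_pacing() in net/ipv4/tcp_output.c as of
this tree); without SG/GSO, skb->len is a single MSS, so one hrtimer
is armed per MSS:

    static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
    {
            u64 len_ns;
            u32 rate;

            if (!tcp_needs_internal_pacing(sk))
                    return;
            rate = sk->sk_pacing_rate;
            if (!rate || rate == ~0U)
                    return;

            /* Without SG/GSO, skb->len is one MSS, so this timer
             * fires for every MSS instead of every ~64KB chunk.
             */
            len_ns = (u64)skb->len * NSEC_PER_SEC;
            do_div(len_ns, rate);
            hrtimer_start(&tcp_sk(sk)->pacing_timer,
                          ktime_add_ns(ktime_get(), len_ns),
                          HRTIMER_MODE_ABS_PINNED);
    }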

We could implement in TCP a strategy similar to the one adopted
in commit fefa569a9d4b ("net_sched: sch_fq: account for schedule/timers drifts"),
or decide to finally switch the TCP stack to a GSO-only mode.
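
For illustration only (names are hypothetical; this is not the sch_fq
code), drift accounting keeps an absolute transmit schedule and charges
each packet's serialization delay against it, so a late-firing timer
does not shrink the effective rate:

    /* Hypothetical sketch of schedule-drift accounting. */
    static u64 next_tx_ns;  /* absolute time the next packet may leave */

    static void account_tx(u32 len, u64 now_ns, u64 rate_bytes_per_sec)
    {
            u64 delay_ns = div64_u64((u64)len * NSEC_PER_SEC,
                                     rate_bytes_per_sec);

            /* Advance the planned time rather than restarting from
             * 'now': if the timer fired late, the next packet is
             * released sooner and the drift is absorbed instead of
             * accumulated.
             */
            if (now_ns > next_tx_ns + delay_ns)
                    next_tx_ns = now_ns;  /* too far behind: cap the catch-up burst */
            next_tx_ns += delay_ns;
    }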

This has many benefits:

1) Most TCP developments are done with TSO in mind.
2) Fewer high-resolution timers need to be armed for TCP pacing.
3) GSO can benefit from the xmit_more hint.
4) Receiver GRO is more effective (as if TSO were really used on the sender),
   resulting in lower ACK traffic.
5) Write queues have less overhead (one skb can hold about 64KB of payload).
6) SACK coalescing just works.
7) The rtx rb-tree contains fewer packets, so SACK processing is cheaper.

This patch implements the minimal change; we can remove some legacy
code in follow-ups.
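
To see why the one-line sk_setup_caps() change below is enough:
forcing NETIF_F_GSO into sk_route_caps makes sk_setup_caps() also OR
in NETIF_F_GSO_SOFTWARE (the software TSO bits), so sk_can_gso() now
always returns true for TCP sockets, even on devices lacking SG/TSO:

    /* include/net/sock.h -- unchanged by this patch. */
    static inline bool sk_can_gso(const struct sock *sk)
    {
            return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type);
    }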

Tested:

On a 40Gbit link, one netperf -t TCP_STREAM flow:

BBR+fq:
sg on:  26 Gbits/sec
sg off: 15.7 Gbits/sec   (was 2.3 Gbit before patch)

BBR+pfifo_fast:
sg on:  24.2 Gbits/sec
sg off: 14.9 Gbits/sec  (was 0.66 Gbit before patch !!! )

BBR+fq_codel:
sg on:  24.4 Gbits/sec
sg off: 15 Gbits/sec  (was 0.66 Gbit before patch !!! )

Signed-off-by: Eric Dumazet <eduma...@google.com>
Reported-by: Oleksandr Natalenko <oleksa...@natalenko.name>
---
 include/net/sock.h | 1 +
 net/core/sock.c    | 2 +-
 net/ipv4/tcp.c     | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 3aa7b7d6e6c7faddf08a77f5c0844049b31d8442..f0f576ff5603eb0f282f37fbf76138a9bdd0f724 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -417,6 +417,7 @@ struct sock {
        struct page_frag        sk_frag;
        netdev_features_t       sk_route_caps;
        netdev_features_t       sk_route_nocaps;
+       netdev_features_t       sk_route_forced_caps;
        int                     sk_gso_type;
        unsigned int            sk_gso_max_size;
        gfp_t                   sk_allocation;
diff --git a/net/core/sock.c b/net/core/sock.c
index a1fa4a548f1be714c5b505b4f269ffbee5572321..507d8c6c431965242efa19f206a1eef28d0f2cff 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1777,7 +1777,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
        u32 max_segs = 1;
 
        sk_dst_set(sk, dst);
-       sk->sk_route_caps = dst->dev->features;
+       sk->sk_route_caps = dst->dev->features | sk->sk_route_forced_caps;
        if (sk->sk_route_caps & NETIF_F_GSO)
                sk->sk_route_caps |= NETIF_F_GSO_SOFTWARE;
        sk->sk_route_caps &= ~sk->sk_route_nocaps;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 48636aee23c31244494b7c7acbc911a7f1823691..4b46a2ae46e3ae89e26e7c0885347ab289f16814 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -453,6 +453,7 @@ void tcp_init_sock(struct sock *sk)
        sk->sk_rcvbuf = sock_net(sk)->ipv4.sysctl_tcp_rmem[1];
 
        sk_sockets_allocated_inc(sk);
+       sk->sk_route_forced_caps = NETIF_F_GSO;
 }
 EXPORT_SYMBOL(tcp_init_sock);
 
-- 
2.16.1.291.g4437f3f132-goog
