On Sat, Aug 27, 2016 at 07:37:54AM -0700, Eric Dumazet wrote:
> +bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> +{
> +	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;
	          ^^^
...
> +	if (!skb->data_len)
> +		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
> +
> +	if (unlikely(sk_add_backlog(sk, skb, limit))) {
...
> -	} else if (unlikely(sk_add_backlog(sk, skb,
> -					   sk->sk_rcvbuf + sk->sk_sndbuf))) {
	                                   ^---- [1]
> -		bh_unlock_sock(sk);
> -		__NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
> +	} else if (tcp_add_backlog(sk, skb)) {
Hi Eric,

After this patch, do you think we still need to add sk_sndbuf as a stretching factor to the backlog limit here? It was added by [1], with the justification that the (s)ack packets were simply too big for the rx buf size. Maybe this new patch alone is enough, as such packets will now have a very small truesize.

Marcelo

[1] da882c1f2eca ("tcp: sk_add_backlog() is too agressive for TCP")