On 3/14/26 9:13 PM, [email protected] wrote:
> From: Wesley Atwell <[email protected]>
>
> Teach TCP to grow sk_rcvbuf when scale rounding would otherwise expose
> more sender-visible window than the current hard receive-memory backing
> can cover.
>
> The new helper keeps backlog and memory-pressure limits in the same
> units as the rest of the receive path, while __tcp_select_window()
> backs any rounding slack before advertising it.
>
> Signed-off-by: Wesley Atwell <[email protected]>
> ---
> include/net/tcp.h | 12 ++++++++++++
> net/ipv4/tcp_input.c | 36 ++++++++++++++++++++++++++++++++++--
> net/ipv4/tcp_output.c | 15 +++++++++++++--
> 3 files changed, 59 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index fc22ab6b80d5..5b479ad44f89 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -397,6 +397,7 @@ int tcp_ioctl(struct sock *sk, int cmd, int *karg);
> enum skb_drop_reason tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb);
> void tcp_rcv_established(struct sock *sk, struct sk_buff *skb);
> void tcp_rcvbuf_grow(struct sock *sk, u32 newval);
> +bool tcp_try_grow_rcvbuf(struct sock *sk, int needed);
> void tcp_rcv_space_adjust(struct sock *sk);
> int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp);
> void tcp_twsk_destructor(struct sock *sk);
> @@ -1844,6 +1845,17 @@ static inline int tcp_rwnd_avail(const struct sock *sk)
> return tcp_rmem_avail(sk) - READ_ONCE(sk->sk_backlog.len);
> }
>
> +/* Passive children clone the listener's sk_socket until accept() grafts
> + * their own struct socket,
AFAICS, the above statement is false: sk_set_socket() in sk_clone() clears the
child's sk_socket, so a passive child has no struct socket at all until
accept() grafts one.
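From memory, a userspace mock of what sk_clone() does to the child's socket
pointer (kernel names reused purely for illustration; these are not the real
definitions, and error handling is elided):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-ins for the kernel's struct socket / struct sock;
 * illustration only, not the real layouts. */
struct sock;
struct socket { struct sock *sk; };
struct sock   { struct socket *sk_socket; };

/* Analogue of sk_set_socket(): point the sock at its owning socket. */
static void sk_set_socket(struct sock *sk, struct socket *sock)
{
	sk->sk_socket = sock;
}

/* Sketch of the relevant part of sk_clone()/sk_clone_lock(): the child
 * is copied from the parent, then its sk_socket is cleared; accept()
 * grafts a fresh struct socket later. */
static struct sock *sk_clone(const struct sock *sk)
{
	struct sock *newsk = malloc(sizeof(*newsk));

	if (newsk) {
		memcpy(newsk, sk, sizeof(*newsk)); /* sock_copy() analogue */
		sk_set_socket(newsk, NULL);        /* child does NOT inherit it */
	}
	return newsk;
}
```

So the child starts with sk_socket == NULL rather than pointing at the
listener's struct socket.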
> so only sockets that point back to themselves
> + * should autotune receive-buffer backing.
> + */
> +static inline bool tcp_rcvbuf_grow_allowed(const struct sock *sk)
> +{
> + struct socket *sock = READ_ONCE(sk->sk_socket);
You can just check `sk->sk_socket` for NULL; the back-pointer comparison is
unnecessary. Also, you could reuse this helper in tcp_data_queue_ofo().
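A minimal userspace sketch of the simplified check, with mock types standing
in for the kernel's (names mirror the kernel's for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel types; illustration only. */
struct sock;
struct socket { struct sock *sk; };
struct sock   { struct socket *sk_socket; };

/* Simplified per the review comment: a passive child has a NULL
 * sk_socket until accept() grafts one, so a plain NULL check suffices
 * and the sock->sk back-pointer comparison can go.  (READ_ONCE() and
 * the `static inline` kernel annotations are elided in this mock.) */
static bool tcp_rcvbuf_grow_allowed(const struct sock *sk)
{
	return sk->sk_socket != NULL;
}
```

The same predicate could then replace the open-coded check in
tcp_data_queue_ofo(), as suggested above.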
/P