On 3/14/26 9:13 PM, [email protected] wrote:
> +/* Sender-visible window rescue does not relax hard receive-memory admission.
> + * If growth did not make room, fall back to the established prune/collapse
> + * path.
> + */
>  static int tcp_try_rmem_schedule(struct sock *sk, const struct sk_buff *skb,
>                                unsigned int size)
>  {
> -     if (!tcp_can_ingest(sk, skb) ||
> -         !sk_rmem_schedule(sk, skb, size)) {
> +     bool can_ingest = tcp_can_ingest(sk, skb);
> +     bool scheduled = can_ingest && sk_rmem_schedule(sk, skb, size);
> +
> +     if (!scheduled) {
> +             int pruned = tcp_prune_queue(sk, skb);
>  
> -             if (tcp_prune_queue(sk, skb) < 0)
> +             if (pruned < 0)
>                       return -1;
>  
>               while (!sk_rmem_schedule(sk, skb, size)) {
> -                     if (!tcp_prune_ofo_queue(sk, skb))
> +                     bool pruned_ofo = tcp_prune_ofo_queue(sk, skb);
> +
> +                     if (!pruned_ofo)
>                               return -1;
>               }
>       }

The above chunk is, AFAICS, pure noise: it only hoists the existing
conditions into named booleans without changing behavior. Please give
this series a more careful local review before posting the next revision.

/P

