Eric noted that with UDP GRO and NAPI timeout, we could keep a single
UDP packet inside the GRO hash forever, if the related NAPI instance
calls napi_gro_complete() at a higher frequency than the NAPI timeout.
Willem noted that even TCP packets could be trapped there until the
next retransmission.
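
As an illustration only, here is a minimal user-space model of the trap
(not kernel code, and it ignores the per-packet age check done by the
real flush; the timeout value is made up): with gro_flush_timeout set
and every poll reporting work_done > 0, the pre-patch
napi_complete_done() always takes the hrtimer branch and never reaches
napi_gro_flush(), so a lone packet parked in the GRO hash is never
released, while flushing before arming the timer bounds its residence
time.

#include <stdbool.h>
#include <stdio.h>

static bool packet_held;	/* the lone UDP packet parked in the GRO hash */

/* Toy model of the tail of napi_complete_done(); 'patched' selects the
 * new behaviour (flush before arming the hrtimer).
 */
static void complete_done(int work_done, long gro_flush_timeout, bool patched)
{
	long timeout = work_done ? gro_flush_timeout : 0;

	if (patched)
		packet_held = false;	/* napi_gro_flush(n, !!timeout) */
	else if (!timeout)
		packet_held = false;	/* old code: flush only when no timeout */
	/* otherwise only hrtimer_start() runs and nothing is flushed */
}

int main(void)
{
	const long timeout_ns = 20000;	/* illustrative gro_flush_timeout */
	int i;

	/* Heavy traffic: every poll completes with work_done > 0. */
	packet_held = true;
	for (i = 0; i < 1000; i++)
		complete_done(1, timeout_ns, false);
	printf("pre-patch: packet still held after 1000 polls? %s\n",
	       packet_held ? "yes" : "no");

	packet_held = true;
	for (i = 0; i < 1000; i++)
		complete_done(1, timeout_ns, true);
	printf("patched:   packet still held after 1000 polls? %s\n",
	       packet_held ? "yes" : "no");
	return 0;
}
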
This patch addresses the issue by flushing the oldest packets before
scheduling the NAPI timeout. The rationale is that such a timeout should
be well below a jiffy, so we are not flushing packets still eligible for
sane GRO aggregation.
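
To put a number on "well below a jiffy" (the concrete timeout below is
an illustrative value, not taken from this patch): gro_flush_timeout is
expressed in nanoseconds via /sys/class/net/<dev>/gro_flush_timeout and
values on the order of tens of microseconds are commonly suggested,
while a jiffy is 1 ms at HZ=1000.

#include <stdio.h>

int main(void)
{
	const long hz = 1000;				/* assumed CONFIG_HZ */
	const long jiffy_ns = 1000000000L / hz;		/* 1 jiffy = 1 ms */
	const long gro_flush_timeout_ns = 20000;	/* illustrative sysfs value */

	printf("timeout is %.1f%% of a jiffy\n",
	       100.0 * gro_flush_timeout_ns / jiffy_ns);
	return 0;
}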

RFC -> v1:
 - added 'Fixes' tags, cleaned up the wording.

Reported-by: Eric Dumazet <eric.duma...@gmail.com>
Fixes: 3b47d30396ba ("net: gro: add a per device gro flush timer")
Fixes: e20cf8d3f1f7 ("udp: implement GRO for plain UDP sockets.")
Signed-off-by: Paolo Abeni <pab...@redhat.com>
---
Note: since one of the fixed commits is currently only in net-next,
the other one is really old, and the affected scenario without the
more recent commit is really a corner case, this is targeting net-next.
---
 net/core/dev.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 5927f6a7c301..b6eb4e0bfa91 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5975,11 +5975,14 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
                if (work_done)
                        timeout = n->dev->gro_flush_timeout;
 
+               /* When the NAPI instance uses a timeout, we still need to
+                * somehow bound the time packets are kept in the GRO layer
+                * under heavy traffic
+                */
+               napi_gro_flush(n, !!timeout);
                if (timeout)
                        hrtimer_start(&n->timer, ns_to_ktime(timeout),
                                      HRTIMER_MODE_REL_PINNED);
-               else
-                       napi_gro_flush(n, false);
        }
        if (unlikely(!list_empty(&n->poll_list))) {
                /* If n->poll_list is not empty, we need to mask irqs */
-- 
2.17.2
