On Wed, Dec 10, 2025 at 11:33 PM Bui Quang Minh
<[email protected]> wrote:
>
> On 12/10/25 12:45, Jason Wang wrote:
> > On Tue, Dec 9, 2025 at 11:23 PM Bui Quang Minh <[email protected]> wrote:
> >> On 12/9/25 11:30, Jason Wang wrote:
> >>> On Mon, Dec 8, 2025 at 11:35 PM Bui Quang Minh <[email protected]> wrote:
> >>>> Calling napi_disable() on an already disabled NAPI can cause a
> >>>> deadlock. In commit 4bc12818b363 ("virtio-net: disable delayed refill
> >>>> when pausing rx"), to avoid the deadlock, when pausing the RX in
> >>>> virtnet_rx_pause[_all](), we disable and cancel the delayed refill work.
> >>>> However, in virtnet_rx_resume_all(), we enable the delayed refill
> >>>> work too early, before enabling all the receive queue NAPIs.
> >>>>
> >>>> The deadlock can be reproduced by running
> >>>> selftests/drivers/net/hw/xsk_reconfig.py with a multiqueue virtio-net
> >>>> device and inserting a cond_resched() inside the for loop in
> >>>> virtnet_rx_resume_all() to increase the success rate. Because the worker
> >>>> processing the delayed refill work runs on the same CPU as
> >>>> virtnet_rx_resume_all(), a reschedule is needed to cause the deadlock.
> >>>> In a real scenario, contention on netdev_lock can cause the
> >>>> reschedule.
> >>>>
> >>>> This fixes the deadlock by ensuring all receive queues' NAPIs are
> >>>> enabled before we enable the delayed refill work in
> >>>> virtnet_rx_resume_all() and virtnet_open().
> >>>>
> >>>> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
> >>>> Reported-by: Paolo Abeni <[email protected]>
> >>>> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
> >>>> Signed-off-by: Bui Quang Minh <[email protected]>
> >>>> ---
> >>>>    drivers/net/virtio_net.c | 59 +++++++++++++++++++---------------------
> >>>>    1 file changed, 28 insertions(+), 31 deletions(-)
> >>>>
> >>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>> index 8e04adb57f52..f2b1ea65767d 100644
> >>>> --- a/drivers/net/virtio_net.c
> >>>> +++ b/drivers/net/virtio_net.c
> >>>> @@ -2858,6 +2858,20 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> >>>>           return err != -ENOMEM;
> >>>>    }
> >>>>
> >>>> +static void virtnet_rx_refill_all(struct virtnet_info *vi)
> >>>> +{
> >>>> +       bool schedule_refill = false;
> >>>> +       int i;
> >>>> +
> >>>> +       enable_delayed_refill(vi);
> >>> This still seems racy?
> >>>
> >>> For example, in virtnet_open() we had:
> >>>
> >>> static int virtnet_open(struct net_device *dev)
> >>> {
> >>>           struct virtnet_info *vi = netdev_priv(dev);
> >>>           int i, err;
> >>>
> >>>           for (i = 0; i < vi->max_queue_pairs; i++) {
> >>>                   err = virtnet_enable_queue_pair(vi, i);
> >>>                   if (err < 0)
> >>>                           goto err_enable_qp;
> >>>           }
> >>>
> >>>           virtnet_rx_refill_all(vi);
> >>>
> >>> So both the NAPI and the refill work are enabled in this case, so the
> >>> refill work could be scheduled and run at the same time?
> >> Yes, that's what we expect. We must ensure that refill work is scheduled
> >> only when all NAPIs are enabled. The deadlock happens when refill work
> >> is scheduled but there are still disabled RX NAPIs.
> > Just to make sure we are on the same page: I meant that after the
> > refill work is enabled and rq0's NAPI is enabled, the refill work
> > could be triggered by rq0's NAPI, so we may end up in the refill
> > work trying to disable rq1's NAPI while holding the netdev lock.
>
> I don't quite get your point. The current deadlock scenario is this:
>
> virtnet_rx_resume_all:
>     napi_enable(rq0)        (rq1's NAPI is still disabled)
>     enable_refill_work
>
> refill_work:
>     napi_disable(rq0)       -> still okay
>     napi_enable(rq0)        -> still okay
>     napi_disable(rq1)
>     -> holds netdev_lock
>        -> stuck inside the while loop in napi_disable_locked():
>               while (val & (NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC)) {
>                   usleep_range(20, 200);
>                   val = READ_ONCE(n->state);
>               }
>
> virtnet_rx_resume_all:
>     napi_enable(rq1)
>     -> stuck while trying to acquire the netdev_lock
>
> The problem is that we must not call napi_disable() on an already
> disabled NAPI (rq1's NAPI in the example).
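>
> For reference, refill_work() bounces every RX queue's NAPI, roughly like
> this (a paraphrased sketch of the driver's helper, details trimmed):
>
> static void refill_work(struct work_struct *work)
> {
>         struct virtnet_info *vi =
>                 container_of(work, struct virtnet_info, refill.work);
>         bool still_empty;
>         int i;
>
>         for (i = 0; i < vi->curr_queue_pairs; i++) {
>                 struct receive_queue *rq = &vi->rq[i];
>
>                 /* If rq's NAPI is already disabled, this spins forever:
>                  * NAPIF_STATE_SCHED never clears for a disabled NAPI,
>                  * and we wait on it while holding netdev_lock.
>                  */
>                 napi_disable(&rq->napi);
>                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
>                 napi_enable(&rq->napi);
>
>                 /* If the ring is still empty, retry later. */
>                 if (still_empty)
>                         schedule_delayed_work(&vi->refill, HZ / 2);
>         }
> }
>
> So refill_work must only ever run when every RX NAPI is enabled.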
>
> In the new virtnet_open
>
> static int virtnet_open(struct net_device *dev)
> {
>           struct virtnet_info *vi = netdev_priv(dev);
>           int i, err;
>
>           // Note that at this point, the refill work is still disabled
>           // (vi->refill_enabled == false), so even if virtnet_receive() is
>           // called, refill_work will not be scheduled.
>           for (i = 0; i < vi->max_queue_pairs; i++) {
>                   err = virtnet_enable_queue_pair(vi, i);
>                   if (err < 0)
>                           goto err_enable_qp;
>           }
>
>           // Here all RX NAPIs are enabled, so it's safe to enable the
>           // refill work again.
>           virtnet_rx_refill_all(vi);
>

I meant this part:

+static void virtnet_rx_refill_all(struct virtnet_info *vi)
+{
+       bool schedule_refill = false;
+       int i;
+
+       enable_delayed_refill(vi);

refill_work could run here.

+       for (i = 0; i < vi->curr_queue_pairs; i++)
+               if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
+                       schedule_refill = true;
+

I think it can be fixed by moving enable_delayed_refill() here.

+       if (schedule_refill)
+               schedule_delayed_work(&vi->refill, 0);
+}
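
I.e. something like this (untested sketch, reusing the helpers from your
patch):

static void virtnet_rx_refill_all(struct virtnet_info *vi)
{
        bool schedule_refill = false;
        int i;

        /* Fill the rings while the delayed refill work is still
         * disabled, so an already-running NAPI cannot schedule
         * refill_work behind our back.
         */
        for (i = 0; i < vi->curr_queue_pairs; i++)
                if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
                        schedule_refill = true;

        /* All RX NAPIs have been enabled by the caller at this
         * point, so it is safe to let the refill work run again.
         */
        enable_delayed_refill(vi);

        if (schedule_refill)
                schedule_delayed_work(&vi->refill, 0);
}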

Thanks

>
> Thanks,
> Quang Minh.
>

