On 1/4/26 22:12, Michael S. Tsirkin wrote:
> On Sun, Jan 04, 2026 at 09:54:30PM +0700, Bui Quang Minh wrote:
>> On 1/4/26 21:03, Michael S. Tsirkin wrote:
>>> On Sun, Jan 04, 2026 at 03:34:52PM +0700, Bui Quang Minh wrote:
>>>> On 1/4/26 13:09, Jason Wang wrote:
>>>>> On Fri, Jan 2, 2026 at 11:20 PM Bui Quang Minh <[email protected]> wrote:
>>>>>> When we fail to refill the receive buffers, we schedule a delayed
>>>>>> worker to retry later. However, this worker creates some concurrency
>>>>>> issues such as races and deadlocks. To simplify the logic and avoid
>>>>>> further problems, we will instead retry refilling in the next NAPI
>>>>>> poll.
>>>>>>
>>>>>> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
>>>>>> Reported-by: Paolo Abeni <[email protected]>
>>>>>> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
>>>>>> Cc: [email protected]
>>>>>> Suggested-by: Xuan Zhuo <[email protected]>
>>>>>> Signed-off-by: Bui Quang Minh <[email protected]>
>>>>>> ---
>>>>>>  drivers/net/virtio_net.c | 55 ++++++++++++++++++++++------------------
>>>>>>  1 file changed, 30 insertions(+), 25 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>>>> index 1bb3aeca66c6..ac514c9383ae 100644
>>>>>> --- a/drivers/net/virtio_net.c
>>>>>> +++ b/drivers/net/virtio_net.c
>>>>>> @@ -3035,7 +3035,7 @@ static int virtnet_receive_packets(struct virtnet_info *vi,
>>>>>>  }
>>>>>>
>>>>>>  static int virtnet_receive(struct receive_queue *rq, int budget,
>>>>>> -			   unsigned int *xdp_xmit)
>>>>>> +			   unsigned int *xdp_xmit, bool *retry_refill)
>>>>>>  {
>>>>>>  	struct virtnet_info *vi = rq->vq->vdev->priv;
>>>>>>  	struct virtnet_rq_stats stats = {};
>>>>>> @@ -3047,12 +3047,8 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
>>>>>>  	packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);
>>>>>>
>>>>>>  	if (rq->vq->num_free > min((unsigned int)budget,
>>>>>>  				   virtqueue_get_vring_size(rq->vq)) / 2) {
>>>>>> -		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
>>>>>> -			spin_lock(&vi->refill_lock);
>>>>>> -			if (vi->refill_enabled)
>>>>>> -				schedule_delayed_work(&vi->refill, 0);
>>>>>> -			spin_unlock(&vi->refill_lock);
>>>>>> -		}
>>>>>> +		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
>>>>>> +			*retry_refill = true;
>>>>>>  	}
>>>>>>
>>>>>>  	u64_stats_set(&stats.packets, packets);
>>>>>> @@ -3129,18 +3125,18 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>>>>>>  	struct send_queue *sq;
>>>>>>  	unsigned int received;
>>>>>>  	unsigned int xdp_xmit = 0;
>>>>>> -	bool napi_complete;
>>>>>> +	bool napi_complete, retry_refill = false;
>>>>>>
>>>>>>  	virtnet_poll_cleantx(rq, budget);
>>>>>>
>>>>>> -	received = virtnet_receive(rq, budget, &xdp_xmit);
>>>>>> +	received = virtnet_receive(rq, budget, &xdp_xmit, &retry_refill);
>>>>>
>>>>> I think we can simply let virtnet_receive() return the budget when the
>>>>> refill fails.
>>>>
>>>> That makes sense, I'll change it.
>>>>
>>>>>>  	rq->packets_in_napi += received;
>>>>>>
>>>>>>  	if (xdp_xmit & VIRTIO_XDP_REDIR)
>>>>>>  		xdp_do_flush();
>>>>>>
>>>>>>  	/* Out of packets? */
>>>>>> -	if (received < budget) {
>>>>>> +	if (received < budget && !retry_refill) {
>>>>>>  		napi_complete = virtqueue_napi_complete(napi, rq->vq, received);
>>>>>>  		/* Intentionally not taking dim_lock here. This may result in a
>>>>>>  		 * spurious net_dim call. But if that happens virtnet_rx_dim_work
>>>>>> @@ -3160,7 +3156,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>>>>>>  		virtnet_xdp_put_sq(vi, sq);
>>>>>>  	}
>>>>>>
>>>>>> -	return received;
>>>>>> +	return retry_refill ? budget : received;
>>>>>>  }
>>>>>>
>>>>>>  static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
>>>>>> @@ -3230,9 +3226,11 @@ static int virtnet_open(struct net_device *dev)
>>>>>>
>>>>>>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>>>>>>  		if (i < vi->curr_queue_pairs)
>>>>>> -			/* Make sure we have some buffers: if oom use wq. */
>>>>>> -			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
>>>>>> -				schedule_delayed_work(&vi->refill, 0);
>>>>>> +			/* If this fails, we will retry later in
>>>>>> +			 * NAPI poll, which is scheduled in the below
>>>>>> +			 * virtnet_enable_queue_pair
>>>>>> +			 */
>>>>>> +			try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
>>>>>
>>>>> Considering NAPI will eventually be scheduled, I wonder if it's still
>>>>> worth refilling here.
>>>>
>>>> With GFP_KERNEL here, I think it's more likely to succeed than
>>>> GFP_ATOMIC in NAPI poll. Another small benefit is that actual packet
>>>> reception can happen earlier. In case the receive buffer is empty and
>>>> we don't refill here, the #1 NAPI poll refills the buffer and the #2
>>>> NAPI poll can receive packets. The #2 NAPI poll is scheduled in the
>>>> interrupt handler because the #1 NAPI poll will deschedule the NAPI
>>>> and enable the device interrupt. In case we successfully refill here,
>>>> the #1 NAPI poll can receive packets right away.
>>>
>>> Right. But I think this is a part that needs elucidating, not error
>>> handling.
>>>
>>> /* Pre-fill rq aggressively, to make sure we are ready to get packets
>>>  * immediately.
>>>  */
>>>
>>>>>>  		err = virtnet_enable_queue_pair(vi, i);
>>>>>>  		if (err < 0)
>>>>>> @@ -3473,15 +3471,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
>>>>>>  				bool refill)
>>>>>>  {
>>>>>>  	bool running = netif_running(vi->dev);
>>>>>> -	bool schedule_refill = false;
>>>>>>
>>>>>> -	if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
>>>>>> -		schedule_refill = true;
>>>>>> +	if (refill)
>>>>>> +		/* If this fails, we will retry later in NAPI poll, which is
>>>>>> +		 * scheduled in the below virtnet_napi_enable
>>>>>> +		 */
>>>>>> +		try_fill_recv(vi, rq, GFP_KERNEL);
>>>
>>> and here.
>>>
>>>>>> +
>>>>>>  	if (running)
>>>>>>  		virtnet_napi_enable(rq);
>>>
>>> Here the part that isn't clear is why we are refilling if !running, and
>>> what handles failures in that case.
>>
>> You are right, we should not refill when !running. I'll move the
>> if (refill) inside the if (running).
>
> Sounds like a helper that does refill+virtnet_napi_enable would be in
> order then?
>
> fill_recv_aggressively(vi, rq) ?
I think the helper can make the code a little more complicated. In virtnet_open(), the RX NAPI is enabled in virtnet_enable_queue_pair(), so we would need to add a flag like enable_rx and then change virtnet_open() to:
	for (i = 0; i < vi->max_queue_pairs; i++) {
		if (i < vi->curr_queue_pairs) {
			fill_recv_aggressively(vi, rq);
			err = virtnet_enable_queue_pair(..., enable_rx = false);
			if (err < 0)
				goto err_enable_qp;
		} else {
			err = virtnet_enable_queue_pair(..., enable_rx = true);
			if (err < 0)
				goto err_enable_qp;
		}
	}
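
For reference, my understanding is that the helper itself would be something like the below (a rough, untested sketch; the exact body is my guess at what you have in mind):

static void fill_recv_aggressively(struct virtnet_info *vi,
				   struct receive_queue *rq)
{
	/* Pre-fill rq aggressively, to make sure we are ready to get
	 * packets immediately. If this fails, the NAPI poll scheduled
	 * by virtnet_napi_enable() below retries with GFP_ATOMIC.
	 */
	try_fill_recv(vi, rq, GFP_KERNEL);
	virtnet_napi_enable(rq);
}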
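
And in __virtnet_rx_resume(), moving the refill inside the running check as I said above would look roughly like this (again untested, and modulo the exact signature):

static void __virtnet_rx_resume(struct virtnet_info *vi,
				struct receive_queue *rq,
				bool refill)
{
	bool running = netif_running(vi->dev);

	if (running) {
		if (refill)
			/* If this fails, we will retry later in NAPI poll,
			 * which is scheduled in the below virtnet_napi_enable
			 */
			try_fill_recv(vi, rq, GFP_KERNEL);
		virtnet_napi_enable(rq);
	}
}

Note that the refill here is conditional on the refill argument, so fill_recv_aggressively() would not map cleanly onto this path either.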
>>>>>> -
>>>>>> -	if (schedule_refill)
>>>>>> -		schedule_delayed_work(&vi->refill, 0);
>>>>>>  }
>>>>>>
>>>>>>  static void virtnet_rx_resume_all(struct virtnet_info *vi)
>>>>>> @@ -3777,6 +3775,7 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>>>>  	struct virtio_net_rss_config_trailer old_rss_trailer;
>>>>>>  	struct net_device *dev = vi->dev;
>>>>>>  	struct scatterlist sg;
>>>>>> +	int i;
>>>>>>
>>>>>>  	if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
>>>>>>  		return 0;
>>>>>> @@ -3829,11 +3828,17 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>>>>  	}
>>>>>>  succ:
>>>>>>  	vi->curr_queue_pairs = queue_pairs;
>>>>>> -	/* virtnet_open() will refill when device is going to up. */
>>>>>> -	spin_lock_bh(&vi->refill_lock);
>>>>>> -	if (dev->flags & IFF_UP && vi->refill_enabled)
>>>>>> -		schedule_delayed_work(&vi->refill, 0);
>>>>>> -	spin_unlock_bh(&vi->refill_lock);
>>>>>> +	if (dev->flags & IFF_UP) {
>>>>>> +		/* Let the NAPI poll refill the receive buffer for us. We can't
>>>>>> +		 * safely call try_fill_recv() here because the NAPI might be
>>>>>> +		 * enabled already.
>>>>>> +		 */
>>>>>> +		local_bh_disable();
>>>>>> +		for (i = 0; i < vi->curr_queue_pairs; i++)
>>>>>> +			virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
>>>>>> +
>>>>>> +		local_bh_enable();
>>>>>> +	}
>>>>>>
>>>>>>  	return 0;
>>>>>>  }
>>>>>> --
>>>>>> 2.43.0
>>>>>
>>>>> Thanks

Thanks,
Quang Minh.

