On Thu, Feb 5, 2026 at 3:36 AM Vishwanath Seshagiri <[email protected]> wrote:
>
> Use page_pool for RX buffer allocation in mergeable and small buffer
> modes to enable page recycling and avoid repeated page allocator calls.
> skb_mark_for_recycle() enables page reuse in the network stack.
>
> Big packets mode is unchanged because it uses page->private for linked
> list chaining of multiple pages per buffer, which conflicts with
> page_pool's internal use of page->private.
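
For anyone skimming the series: the mergeable/small path described above
boils down to pulling RX pages from a per-queue pool and tagging the built
skb for recycling. A minimal sketch of that shape (rq->page_pool and the
helper names are mine, not taken from this patch):

#include <net/page_pool/helpers.h>

/* Illustrative helper: hand out a pool page for an RX buffer. */
static struct page *virtnet_pp_alloc_page(struct receive_queue *rq,
                                          gfp_t gfp)
{
        /* Returns a recycled page when one is cached in the pool,
         * otherwise falls back to the page allocator.
         */
        return page_pool_alloc_pages(rq->page_pool, gfp);
}

/* Illustrative helper: called once the skb is built from pool pages. */
static void virtnet_pp_mark_skb(struct sk_buff *skb)
{
        /* Pages go back to the pool on skb free instead of being
         * returned to the page allocator.
         */
        skb_mark_for_recycle(skb);
}

Keeping big mode on the old path makes sense given the page->private
conflict.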
>
> Implement conditional DMA premapping using virtqueue_dma_dev():
> - When non-NULL (vhost, virtio-pci): use PP_FLAG_DMA_MAP with page_pool
>   handling DMA mapping, submit via virtqueue_add_inbuf_premapped()
> - When NULL (VDUSE, direct physical): page_pool handles allocation only,
>   submit via virtqueue_add_inbuf_ctx()
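
The split between the two submission paths reads cleanly. For reviewers
following along, the dispatch amounts to roughly the fragment below; the
pp_params values and rq->page_pool are illustrative, and I am going from
memory on virtqueue_add_inbuf_premapped() taking the same argument list as
virtqueue_add_inbuf_ctx():

        struct page_pool_params pp_params = {
                .order     = 0,
                .pool_size = rq->vq->num_free,  /* illustrative sizing */
                .nid       = NUMA_NO_NODE,
                .dma_dir   = DMA_FROM_DEVICE,
        };
        struct device *dma_dev = virtqueue_dma_dev(rq->vq);

        if (dma_dev) {
                /* vhost / virtio-pci: let the pool own the DMA mapping. */
                pp_params.flags |= PP_FLAG_DMA_MAP;
                pp_params.dev = dma_dev;
        }
        /* else VDUSE / direct physical: the pool only allocates. */

        rq->page_pool = page_pool_create(&pp_params);

        /* Per-buffer submission follows the same test: */
        if (dma_dev)
                err = virtqueue_add_inbuf_premapped(rq->vq, rq->sg, 1,
                                                    buf, ctx, gfp);
        else
                err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1,
                                              buf, ctx, gfp);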
>
> This preserves the DMA premapping optimization from commit 31f3cd4e5756b
> ("virtio-net: rq submits premapped per-buffer") while adding page_pool
> support as a prerequisite for future zero-copy features (devmem TCP,
> io_uring ZCRX).
>
> Page pools are created in probe and destroyed in remove (not open/close),
> following existing driver behavior where RX buffers remain in virtqueues
> across interface state changes.
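
Agreed that tying the pools to probe/remove matches how the virtqueues keep
their buffers across ndo_open/ndo_close. In sketch form, reusing the
pp_params from the fragment above (loop placement and error handling are
illustrative; page_pool_destroy() tolerates a NULL pool):

        /* virtnet_probe(): one pool per RX queue. */
        for (i = 0; i < vi->max_queue_pairs; i++) {
                vi->rq[i].page_pool = page_pool_create(&pp_params);
                if (IS_ERR(vi->rq[i].page_pool)) {
                        err = PTR_ERR(vi->rq[i].page_pool);
                        vi->rq[i].page_pool = NULL;
                        goto err_free;  /* unwind with the loop below */
                }
        }

        /* virtnet_remove(), also used to unwind a failed probe: */
        for (i = 0; i < vi->max_queue_pairs; i++)
                page_pool_destroy(vi->rq[i].page_pool);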
>
> Signed-off-by: Vishwanath Seshagiri <[email protected]>
> ---
>  drivers/net/Kconfig      |   1 +
>  drivers/net/virtio_net.c | 351 ++++++++++++++++++++++-----------------
>  2 files changed, 201 insertions(+), 151 deletions(-)
>

Looks good overall, just one spot.

> -static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
> -{
> -       struct virtnet_info *vi = rq->vq->vdev->priv;
> -       struct virtnet_rq_dma *dma;
> -       dma_addr_t addr;
> -       u32 offset;
> -       void *head;
> -
> -       BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
> -
> -       head = page_address(rq->alloc_frag.page);
> -
> -       offset = buf - head;
> -
> -       dma = head;
> -
> -       addr = dma->addr - sizeof(*dma) + offset;
> -
> -       sg_init_table(rq->sg, 1);
> -       sg_fill_dma(rq->sg, addr, len);
> -}
> -
> -static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
> -{
> -       struct page_frag *alloc_frag = &rq->alloc_frag;
> -       struct virtnet_info *vi = rq->vq->vdev->priv;
> -       struct virtnet_rq_dma *dma;
> -       void *buf, *head;
> -       dma_addr_t addr;
>
>         BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
>
> -       head = page_address(alloc_frag->page);
> -
> -       dma = head;
> -
> -       /* new pages */
> -       if (!alloc_frag->offset) {
> -               if (rq->last_dma) {
> -                       /* Now, the new page is allocated, the last dma
> -                        * will not be used. So the dma can be unmapped
> -                        * if the ref is 0.
> -                        */
> -                       virtnet_rq_unmap(rq, rq->last_dma, 0);
> -                       rq->last_dma = NULL;
> -               }
> -
> -               dma->len = alloc_frag->size - sizeof(*dma);
> -
> -               addr = virtqueue_map_single_attrs(rq->vq, dma + 1,
> -                                                 dma->len, DMA_FROM_DEVICE, 0);
> -               if (virtqueue_map_mapping_error(rq->vq, addr))
> -                       return NULL;
> -
> -               dma->addr = addr;
> -               dma->need_sync = virtqueue_map_need_sync(rq->vq, addr);
> -
> -               /* Add a reference to dma to prevent the entire dma from
> -                * being released during error handling. This reference
> -                * will be freed after the pages are no longer used.
> -                */
> -               get_page(alloc_frag->page);
> -               dma->ref = 1;
> -               alloc_frag->offset = sizeof(*dma);
> -
> -               rq->last_dma = dma;
> -       }
> -
> -       ++dma->ref;

This patch still uses virtnet_rq_unmap() in free_receive_page_frags(),
which looks like a bug.

Thanks

