On 1/28/26 10:30 PM, Michael S. Tsirkin wrote:
On Wed, Jan 28, 2026 at 01:20:30PM -0800, Vishwanath Seshagiri wrote:
Use page_pool for RX buffer allocation in mergeable and small buffer
modes to enable page recycling and avoid repeated page allocator calls.
skb_mark_for_recycle() enables page reuse in the network stack.
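
For illustration, the receive-side change boils down to one call once
the skb has been built from a pool page (a sketch, not the exact hunk):

        /* skb's pages came from rq->page_pool; let the stack recycle
         * them into the pool instead of freeing to the page allocator.
         */
        skb_mark_for_recycle(skb);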

Big packets mode is unchanged because it uses page->private for linked
list chaining of multiple pages per buffer, which conflicts with
page_pool's internal use of page->private.

Implement conditional DMA premapping using virtqueue_dma_dev():
- When non-NULL (vhost, virtio-pci): use PP_FLAG_DMA_MAP with page_pool
  handling DMA mapping; submit via virtqueue_add_inbuf_premapped()
- When NULL (VDUSE, direct physical): page_pool handles allocation only;
  submit via virtqueue_add_inbuf_ctx() (both paths sketched below)
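
A minimal sketch of that split, assuming an invented helper name
virtnet_add_recvbuf_pp() (not a function from the patch):

        static int virtnet_add_recvbuf_pp(struct receive_queue *rq,
                                          void *buf, void *ctx, gfp_t gfp)
        {
                /* Set at probe time when virtqueue_dma_dev(rq->vq)
                 * returned non-NULL and the pool was created with
                 * PP_FLAG_DMA_MAP.
                 */
                if (rq->use_page_pool_dma)
                        /* page_pool already mapped the page, so hand
                         * the core a premapped buffer.
                         */
                        return virtqueue_add_inbuf_premapped(rq->vq,
                                        rq->sg, 1, buf, ctx, gfp);

                /* VDUSE / direct physical: virtio core handles it */
                return virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1,
                                               buf, ctx, gfp);
        }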

This preserves the DMA premapping optimization from commit 31f3cd4e5756b
("virtio-net: rq submits premapped per-buffer") while adding page_pool
support as a prerequisite for future zero-copy features (devmem TCP,
io_uring ZCRX).

Page pools are created in probe and destroyed in remove (not open/close),
following existing driver behavior where RX buffers remain in virtqueues
across interface state changes.
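
Per-queue pool setup at probe time then looks roughly like this (a
sketch; the pool_size value is illustrative and error unwinding is
trimmed):

        struct page_pool_params pp_params = {
                .order          = 0,
                .pool_size      = 256,  /* sized to the ring in practice */
                .nid            = NUMA_NO_NODE,
                .dev            = virtqueue_dma_dev(rq->vq),
                .dma_dir        = DMA_FROM_DEVICE,
        };

        rq->use_page_pool_dma = !!pp_params.dev;
        if (rq->use_page_pool_dma)
                pp_params.flags |= PP_FLAG_DMA_MAP;

        rq->page_pool = page_pool_create(&pp_params);
        if (IS_ERR(rq->page_pool))
                return PTR_ERR(rq->page_pool);

with a matching page_pool_destroy(rq->page_pool) in remove.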

The rx_mode_work_enabled flag prevents virtnet_rx_mode_work() from
sending control virtqueue commands while ndo_close is tearing down
device state, avoiding virtqueue corruption during concurrent operations.
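
The guard follows the usual flag-plus-flush shape (a sketch of the
idea, not the driver's exact locking):

        /* ndo_close / teardown side */
        disable_rx_mode_work(vi);       /* clear the flag */
        flush_work(&vi->rx_mode_work);  /* wait out an in-flight worker */

        /* worker side */
        static void virtnet_rx_mode_work(struct work_struct *work)
        {
                struct virtnet_info *vi =
                        container_of(work, struct virtnet_info,
                                     rx_mode_work);

                /* close already started; don't touch the cvq */
                if (!vi->rx_mode_work_enabled)
                        return;

                /* ... send VIRTIO_NET_CTRL_RX_* commands ... */
        }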

Signed-off-by: Vishwanath Seshagiri <[email protected]>
---
  drivers/net/Kconfig      |   1 +
  drivers/net/virtio_net.c | 353 ++++++++++++++++++++++-----------------
  2 files changed, 203 insertions(+), 151 deletions(-)

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index ac12eaf11755..f1e6b6b0a86f 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -450,6 +450,7 @@ config VIRTIO_NET
        depends on VIRTIO
        select NET_FAILOVER
        select DIMLIB
+       select PAGE_POOL
        help
          This is the virtual network driver for virtio.  It can be used with
          QEMU based VMMs (like KVM or Xen).  Say Y or M.
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index db88dcaefb20..df2a5fc5187e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -26,6 +26,7 @@
  #include <net/netdev_rx_queue.h>
  #include <net/netdev_queues.h>
  #include <net/xdp_sock_drv.h>
+#include <net/page_pool/helpers.h>

  static int napi_weight = NAPI_POLL_WEIGHT;
  module_param(napi_weight, int, 0444);
@@ -359,6 +360,11 @@ struct receive_queue {
        /* Page frag for packet buffer allocation. */
        struct page_frag alloc_frag;
+       struct page_pool *page_pool;
+
+       /* True if page_pool handles DMA mapping via PP_FLAG_DMA_MAP */
+       bool use_page_pool_dma;
+
        /* RX: fragments + linear part + virtio header */
        struct scatterlist sg[MAX_SKB_FRAGS + 2];
@@ -521,11 +527,13 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
                               struct virtnet_rq_stats *stats);
  static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq,
                                 struct sk_buff *skb, u8 flags);
-static struct sk_buff *virtnet_skb_append_frag(struct sk_buff *head_skb,
+static struct sk_buff *virtnet_skb_append_frag(struct receive_queue *rq,
+                                              struct sk_buff *head_skb,
                                               struct sk_buff *curr_skb,
                                               struct page *page, void *buf,
                                               int len, int truesize);
  static void virtnet_xsk_completed(struct send_queue *sq, int num);
+static void free_unused_bufs(struct virtnet_info *vi);

  enum virtnet_xmit_type {
        VIRTNET_XMIT_TYPE_SKB,
@@ -706,15 +714,21 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
        return p;
  }
+static void virtnet_put_page(struct receive_queue *rq, struct page *page,
+                            bool allow_direct)
+{
+       page_pool_put_page(rq->page_pool, page, -1, allow_direct);
+}
+
  static void virtnet_rq_free_buf(struct virtnet_info *vi,
                                struct receive_queue *rq, void *buf)
  {
        if (vi->mergeable_rx_bufs)
-               put_page(virt_to_head_page(buf));
+               virtnet_put_page(rq, virt_to_head_page(buf), false);
        else if (vi->big_packets)
                give_pages(rq, buf);
        else
-               put_page(virt_to_head_page(buf));
+               virtnet_put_page(rq, virt_to_head_page(buf), false);
  }


what I dislike here is how big_packets mode still pokes
at give_pages but other modes use the page pool.

Given all modes operate with struct page it's hard to
shake the feeling we could be trying to put a page
we did not get from the pool back into the pool,
or vice versa.

The allocation and free paths are symmetric for each mode, although I
did make an error in the give_pages() path for big buffers (more on
that below). That said, I agree the dual system is fragile. I excluded
big_packets because it chains pages via page->private, which collides
with page_pool's internal use of that field.

Would you prefer I extend page_pool to big_packets in v3 or in a
follow-up? That would require replacing the page->private chaining
with an alternate mechanism, since page_pool owns that field.
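
For what it's worth, one possible shape (purely hypothetical, the
names below are invented) keeps the chain outside struct page
entirely:

        /* Hypothetical: track a big-packets buffer's pages in a small
         * array handed to the vq as ctx, so page_pool keeps sole
         * ownership of page->private.
         */
        struct virtnet_big_chain {
                unsigned int nr;
                struct page *pages[];   /* one entry per chained page */
        };

The trade-off is a small per-buffer allocation (or a preallocated
per-ring array) in place of the free intrusive list.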




  static void enable_rx_mode_work(struct virtnet_info *vi)
@@ -877,9 +891,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
                if (unlikely(!skb))
                        return NULL;
-               page = (struct page *)page->private;
-               if (page)
-                       give_pages(rq, page);
                goto ok;
        }


For example above you did not touch give_pages, here
you are ripping out give_pages. Superficially, weird.

This was done in error; I will fix it in v3.



I ask myself whether page pool is not better than the
homegrown linked list that give_pages uses, anyway.

Extending page_pool support is not only about performance; it is also
needed for future zero-copy features (devmem TCP, io_uring ZCRX)
across RX modes.


Will need some perf testing though.

If you want me to run specific performance tests, I can include the
results with v3.



