On 11/22/25 10:03, Barry Song wrote:
From: Barry Song <[email protected]>

In many cases, the pages passed to vmap() may include
high-order pages; for example, the system heap often allocates
pages in descending order: order 8, then order 4, then order 0.
Currently, vmap() iterates over every page individually, so even
the pages inside a high-order block are handled one by one. This
patch detects high-order pages and maps them as a single
contiguous block whenever possible.

Another possibility is to implement a new API, vmap_sg().
However, that change seems to be quite large in scope.

When vmapping a 128MB dma-buf using the system heap,
this RFC appears to make system_heap_do_vmap() 16× faster:

W/ patch:
[   51.363682] system_heap_do_vmap took 2474000 ns
[   53.307044] system_heap_do_vmap took 2469008 ns
[   55.061985] system_heap_do_vmap took 2519008 ns
[   56.653810] system_heap_do_vmap took 2674000 ns

W/o patch:
[    8.260880] system_heap_do_vmap took 39490000 ns
[   32.513292] system_heap_do_vmap took 38784000 ns
[   82.673374] system_heap_do_vmap took 40711008 ns
[   84.579062] system_heap_do_vmap took 40236000 ns

Cc: Uladzislau Rezki <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Maxime Ripard <[email protected]>
Signed-off-by: Barry Song <[email protected]>
---
  mm/vmalloc.c | 49 +++++++++++++++++++++++++++++++++++++++++++------
  1 file changed, 43 insertions(+), 6 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0832f944544c..af2e3e8c052a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -642,6 +642,34 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
        return err;
  }
+static inline int get_vmap_batch_order(struct page **pages,
+               unsigned int stride,
+               int max_steps,
+               unsigned int idx)

These fit on fewer lines, ideally:

		unsigned int stride, int max_steps, unsigned int idx)

+{

The variable declarations belong at the top of the function:

	int order, nr_pages, i;
	struct page *base;

But I think you can just drop "base", and "order" as well.

+       /*
+        * Currently, batching is only supported in vmap_pages_range
+        * when page_shift == PAGE_SHIFT.
+        */
+       if (stride != 1)
+               return 0;
+
+       struct page *base = pages[idx];
+       if (!PageHead(base))
+               return 0;
+
+       int order = compound_order(base);
+       int nr_pages = 1 << order;


You can drop the head check etc. and simply do:

nr_pages = compound_nr(pages[idx]);
if (nr_pages == 1)
        return 0;

Which raises the question: are these things folios? I assume not.

+
+       if (max_steps < nr_pages)
+               return 0;
+
+       for (int i = 0; i < nr_pages; i++)
+               if (pages[idx + i] != base + i)
+                       return 0;

if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
        return compound_order(pages[idx]);
return 0;
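
For reference, a rough sketch of what the whole helper might then
look like with those changes folded in (assuming num_pages_contiguous()
is available in this tree; untested, just to illustrate the shape):

static inline int get_vmap_batch_order(struct page **pages,
		unsigned int stride, int max_steps, unsigned int idx)
{
	unsigned long nr_pages;

	/*
	 * Batching is only attempted in vmap_pages_range() when
	 * page_shift == PAGE_SHIFT, i.e. stride == 1.
	 */
	if (stride != 1)
		return 0;

	/* compound_nr() returns 1 for order-0 and tail pages. */
	nr_pages = compound_nr(pages[idx]);
	if (nr_pages == 1 || max_steps < nr_pages)
		return 0;

	/* Only batch when the run in the array is physically contiguous. */
	if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
		return compound_order(pages[idx]);

	return 0;
}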

--
Cheers

David
