On Mon, Jan 05, 2026 at 08:09:39AM -0800, Matthew Brost wrote:
> On Mon, Jan 05, 2026 at 12:18:28PM +0100, Francois Dugast wrote:
> > This enables support for Transparent Huge Pages (THP) for device pages by
> > using MIGRATE_VMA_SELECT_COMPOUND during migration. It removes the need to
> > split folios and loop multiple times over all pages to perform the required
> > operations at page level. Instead, we rely on the newly introduced support
> > for higher orders in drm_pagemap and on the folio-level API.
> >
> > In Xe, this drastically improves performance when using SVM. The GT stats
> > below, collected after a 2MB page fault, show that overall servicing is more
> > than 7 times faster, and thanks to the reduced CPU overhead the share of time
> > spent on the actual copy goes from 23% without THP to 80% with THP:
> >
> > Without THP:
> >
> > svm_2M_pagefault_us: 966
> > svm_2M_migrate_us: 942
> > svm_2M_device_copy_us: 223
> > svm_2M_get_pages_us: 9
> > svm_2M_bind_us: 10
> >
> > With THP:
> >
> > svm_2M_pagefault_us: 132
> > svm_2M_migrate_us: 128
> > svm_2M_device_copy_us: 106
> > svm_2M_get_pages_us: 1
> > svm_2M_bind_us: 2
> >
> > v2:
> > - Fix one occurrence of drm_pagemap_get_devmem_page() (Matthew Brost)
> >
> > Cc: Matthew Brost <[email protected]>
> > Cc: Thomas Hellström <[email protected]>
> > Cc: Michal Mrozek <[email protected]>
> > Signed-off-by: Francois Dugast <[email protected]>
> > ---
> > drivers/gpu/drm/drm_pagemap.c | 77 +++++++++++++++++++++++++++++++----
> > drivers/gpu/drm/xe/xe_svm.c | 4 ++
> > include/drm/drm_pagemap.h | 3 ++
> > 3 files changed, 76 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index 05e708730132..1ea8526ce946 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -200,16 +200,20 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> > /**
> > * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> > * @page: Pointer to the page
> > + * @order: Order
> > * @zdd: Pointer to the GPU SVM zone device data
> > *
> > * This function associates the given page with the specified GPU SVM zone
> > * device data and initializes it for zone device usage.
> > */
> > static void drm_pagemap_get_devmem_page(struct page *page,
> > + unsigned int order,
> > struct drm_pagemap_zdd *zdd)
> > {
> > - page->zone_device_data = drm_pagemap_zdd_get(zdd);
> > - zone_device_page_init(page, 0);
> > + struct folio *folio = page_folio(page);
> > +
> > + folio_set_zone_device_data(folio, drm_pagemap_zdd_get(zdd));
> > + zone_device_page_init(page, order);
> > }
> >
> > /**
> > @@ -522,7 +526,7 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> > .end = end,
> > .pgmap_owner = pagemap->owner,
> > .flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT |
> > - MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
> > + MIGRATE_VMA_SELECT_DEVICE_PRIVATE | MIGRATE_VMA_SELECT_COMPOUND,
> > };
> > unsigned long i, npages = npages_in_range(start, end);
> > unsigned long own_pages = 0, migrated_pages = 0;
> > @@ -628,10 +632,13 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> >
> > own_pages = 0;
> >
> > - for (i = 0; i < npages; ++i) {
> > + mutex_lock(&dpagemap->folio_split_lock);
> > + for (i = 0; i < npages;) {
> > + unsigned long j;
> > struct page *page = pfn_to_page(migrate.dst[i]);
> > struct page *src_page = migrate_pfn_to_page(migrate.src[i]);
> > cur.start = i;
> > + unsigned int order;
> >
> > pages[i] = NULL;
> > if (src_page && is_device_private_page(src_page)) {
> > @@ -658,7 +665,23 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> > pages[i] = page;
> > }
> > migrate.dst[i] = migrate_pfn(migrate.dst[i]);
> > - drm_pagemap_get_devmem_page(page, zdd);
> > +
> > + if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
> > + order = HPAGE_PMD_ORDER;
>
> Can we assert the folio order is HPAGE_PMD_ORDER? Eventually mTHP could
> be enabled on device folios, where the order could be different from
> HPAGE_PMD_ORDER. I suspect we will need some small updates to the GPU SVM /
> Xe code to support mTHP, and this assert would pop, telling us we need to
> fix GPU SVM.
Yes, good idea.
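Something along these lines maybe, assuming the check lives in the
MIGRATE_PFN_COMPOUND branch (WARN_ON_ONCE() and the exact placement are
just a sketch here, not necessarily the final form):

	if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
		/*
		 * Sketch only: if mTHP ever hands us a compound folio of a
		 * different order, warn so we know GPU SVM / Xe still assume
		 * PMD-sized folios and need fixing.
		 */
		if (src_page)
			WARN_ON_ONCE(folio_order(page_folio(src_page)) !=
				     HPAGE_PMD_ORDER);

		order = HPAGE_PMD_ORDER;
		migrate.dst[i] |= MIGRATE_PFN_COMPOUND;
		...
	}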
>
> > +
> > + migrate.dst[i] |= MIGRATE_PFN_COMPOUND;
> > +
> > + for (j = 1; j < NR_PAGES(order) && i + j < npages; j++)
> > + migrate.dst[i + j] = 0;
> > +
> > + } else {
> > + order = 0;
> > +
> > + if (folio_order(page_folio(page)))
> > + migrate_device_split_page(page);
> > + }
> > +
> > + drm_pagemap_get_devmem_page(page, order, zdd);
> >
> > /* If we switched the migrating drm_pagemap, migrate previous pages now */
> > err = drm_pagemap_migrate_range(devmem_allocation, migrate.src, migrate.dst,
> > @@ -666,7 +689,11 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> > mdetails);
> > if (err)
> > goto err_finalize;
> > +
> > + i += NR_PAGES(order);
> > }
> > + mutex_unlock(&dpagemap->folio_split_lock);
> > +
> > cur.start = npages;
> > cur.ops = NULL; /* Force migration */
> > err = drm_pagemap_migrate_range(devmem_allocation, migrate.src, migrate.dst,
> > @@ -775,6 +802,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> > page = folio_page(folio, 0);
> > mpfn[i] = migrate_pfn(page_to_pfn(page));
> >
> > + if (order)
> > + mpfn[i] |= MIGRATE_PFN_COMPOUND;
> > next:
> > if (page)
> > addr += page_size(page);
> > @@ -1030,8 +1059,15 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> > if (err)
> > goto err_finalize;
> >
> > - for (i = 0; i < npages; ++i)
> > + for (i = 0; i < npages;) {
> > + unsigned int order = 0;
> > +
> > pages[i] = migrate_pfn_to_page(src[i]);
> > + if (pages[i])
> > + order = folio_order(page_folio(pages[i]));
> > +
> > + i += NR_PAGES(order);
> > + }
> >
> > err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> > if (err)
> > @@ -1084,7 +1120,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > .vma = vas,
> > .pgmap_owner = page_pgmap(page)->owner,
> > .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
> > - MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > + MIGRATE_VMA_SELECT_DEVICE_COHERENT |
> > + MIGRATE_VMA_SELECT_COMPOUND,
>
> The alignment was off here - e.g. my vi settings make the original code
> look like this:
>
> .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
> MIGRATE_VMA_SELECT_DEVICE_COHERENT,
>
> Since you are changing this line, can you fix the alignment?
Sure.
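For reference, the touched initializer would then look roughly like this
(continuation lines aligned under the first flag; exact whitespace depends
on tab settings):

	.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
		 MIGRATE_VMA_SELECT_DEVICE_COHERENT |
		 MIGRATE_VMA_SELECT_COMPOUND,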
>
> > .fault_page = page,
> > };
> > struct drm_pagemap_migrate_details mdetails = {};
> > @@ -1150,8 +1187,15 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > if (err)
> > goto err_finalize;
> >
> > - for (i = 0; i < npages; ++i)
> > + for (i = 0; i < npages;) {
> > + unsigned int order = 0;
> > +
> > pages[i] = migrate_pfn_to_page(migrate.src[i]);
> > + if (pages[i])
> > + order = folio_order(page_folio(pages[i]));
> > +
> > + i += NR_PAGES(order);
> > + }
> >
> > err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> > if (err)
> > @@ -1209,9 +1253,26 @@ static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> > return err ? VM_FAULT_SIGBUS : 0;
> > }
> >
> > +static void drm_pagemap_folio_split(struct folio *orig_folio, struct folio *new_folio)
> > +{
> > + struct drm_pagemap_zdd *zdd;
> > +
> > + if (!new_folio)
> > + return;
> > +
> > + new_folio->pgmap = orig_folio->pgmap;
> > + zdd = folio_zone_device_data(orig_folio);
> > + if (folio_order(new_folio))
> > + folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
> > + else
> > + folio_page(new_folio, 0)->zone_device_data =
> > + drm_pagemap_zdd_get(zdd);
>
> Is this if statement needed? I believe folio_set_zone_device_data() can
> just be called unconditionally.
Yes, we can probably do so no matter what the order is.
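The helper would then collapse to something like this (sketch only,
assuming folio_set_zone_device_data() is fine for order-0 folios as well):

	static void drm_pagemap_folio_split(struct folio *orig_folio,
					    struct folio *new_folio)
	{
		struct drm_pagemap_zdd *zdd;

		if (!new_folio)
			return;

		new_folio->pgmap = orig_folio->pgmap;
		zdd = folio_zone_device_data(orig_folio);
		/* Take a zdd reference for the new folio regardless of its order */
		folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
	}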
>
> > +}
> > +
> > static const struct dev_pagemap_ops drm_pagemap_pagemap_ops = {
> > .folio_free = drm_pagemap_folio_free,
> > .migrate_to_ram = drm_pagemap_migrate_to_ram,
> > + .folio_split = drm_pagemap_folio_split,
> > };
> >
> > /**
> > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > index fa2ee2c08f31..05dba6abbcc8 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.c
> > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > @@ -1760,6 +1760,10 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
> > if (err)
> > goto out_no_dpagemap;
> >
> > + err = drmm_mutex_init(&xe->drm, &dpagemap->folio_split_lock);
> > + if (err)
> > + goto out_err;
> > +
> > res = devm_request_free_mem_region(dev, &iomem_resource,
> > vr->usable_size);
> > if (IS_ERR(res)) {
> > diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> > index 736fb6cb7b33..3c8bacfc79e6 100644
> > --- a/include/drm/drm_pagemap.h
> > +++ b/include/drm/drm_pagemap.h
> > @@ -161,6 +161,7 @@ struct drm_pagemap_ops {
> > * &struct drm_pagemap. May be NULL if no cache is used.
> > * @shrink_link: Link into the shrinker's list of drm_pagemaps. Only
> > * used if also using a pagemap cache.
> > + * @folio_split_lock: Lock to protect device folio splitting.
> > */
> > struct drm_pagemap {
> > const struct drm_pagemap_ops *ops;
> > @@ -170,6 +171,8 @@ struct drm_pagemap {
> > struct drm_pagemap_dev_hold *dev_hold;
> > struct drm_pagemap_cache *cache;
> > struct list_head shrink_link;
> > + /* Protect device folio splitting */
>
> I don't think you need this comment as we have kernel doc.
This is redundant, but it is there to avoid this checkpatch warning:
-:248: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#248: FILE: include/drm/drm_pagemap.h:174:
+ struct mutex folio_split_lock;
Francois
>
> Nits aside, patch looks good.
>
> Matt
>
> > + struct mutex folio_split_lock;
> > };
> >
> > struct drm_pagemap_devmem;
> > --
> > 2.43.0
> >