On 2025-10-03 18:16, Felix Kuehling wrote:
[+Linux MM and HMM maintainers]
Please see below my question about the safety of using
zone_device_page_init.
On 2025-10-03 18:02, Philip Yang wrote:
On 2025-10-03 17:46, Felix Kuehling wrote:
On 2025-10-03 17:18, Philip Yang wrote:
On 2025-10-03 17:05, Felix Kuehling wrote:
On 2025-09-26 17:03, Philip Yang wrote:
zone_device_page_init uses set_page_count to set the vram page
refcount to 1; there is a race if step 2 happens between steps 1 and 3:
1. CPU page fault handler gets the vram page and migrates it to a
system page
2. GPU page fault migrates back to the vram page and sets the page
refcount to 1
3. CPU page fault handler puts the vram page; the refcount drops to 0
and the vram_bo refcount is reduced
4. The vram_bo refcount is 1 off because the vram page is still in use.
Afterwards, this causes a use-after-free bug and a page refcount
warning.
This implies that migration to RAM and to VRAM of the same range
are happening at the same time. Isn't that a bigger problem? It
means someone doing a migration is not holding the
prange->migrate_mutex.
Migration holds prange->migrate_mutex, so we don't have migration to
RAM and to VRAM of the same range at the same time. The issue is in
step 3: the CPU page fault handler do_swap_page calls put_page after
pgmap->ops->migrate_to_ram() returns, which can land in the middle of
migrate_to_vram.
That's the part I don't understand. The CPU page fault handler
(svm_migrate_to_ram) is holding prange->migrate_mutex until the very
end. Where do we have a put_page for a zone_device page outside the
prange->migrate_mutex? Do you have a backtrace?
do_swap_page() {
	.......
	} else if (is_device_private_entry(entry)) {
		........
		/*
		 * Get a page reference while we know the page can't be
		 * freed.
		 */
		if (trylock_page(vmf->page)) {
			struct dev_pagemap *pgmap;

			get_page(vmf->page);
			pte_unmap_unlock(vmf->pte, vmf->ptl);
			pgmap = page_pgmap(vmf->page);
			ret = pgmap->ops->migrate_to_ram(vmf);
			unlock_page(vmf->page);
			put_page(vmf->page);
This put_page reduces the vram page refcount to zero if
migrate_to_vram -> svm_migrate_get_vram_page has already called
zone_device_page_init, which sets the page refcount to 1.
put_page must come after unlock_page because put_page may free the
page; svm_migrate_get_vram_page can lock the page, but the page
refcount then becomes 0.
OK. Then you must have hit the
WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref)) in that
function.
That warning is for the pgmap percpu refcount, not for the page
refcount; I didn't see this warning.
It sounds like zone_device_page_init is just unsafe to use in general.
It assumes that pages have a 0 refcount. But I don't see a good way
for drivers to guarantee that, because they are not in control of when
the page refcounts for their zone-device pages get decremented.
It seems this issue was introduced by commit 1afaeb8293c9
("mm/migrate: Trylock device page in do_swap_page"). I am not sure
whether the same fix is needed in the other drivers calling
zone_device_page_init.
Regards,
Philip
Regards,
Felix
Regards,
Philip
Regards,
Felix
Regards,
Philip
Regards,
Felix
zone_device_page_init should not be used in page migration; changing
it to get_page fixes the race.
Add a WARN_ONCE to report this issue early, because the refcount bug
is hard to investigate.
Signed-off-by: Philip Yang <[email protected]>
---
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index d10c6673f4de..15ab2db4af1d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -217,7 +217,8 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
 	page = pfn_to_page(pfn);
 	svm_range_bo_ref(prange->svm_bo);
 	page->zone_device_data = prange->svm_bo;
-	zone_device_page_init(page);
+	get_page(page);
+	lock_page(page);
 }
static void
@@ -552,6 +553,17 @@ svm_migrate_ram_to_vram(struct svm_range *prange, uint32_t best_loc,
 	if (mpages) {
 		prange->actual_loc = best_loc;
 		prange->vram_pages += mpages;
+		/*
+		 * To guarantee we hold the correct page refcount for all
+		 * prange vram pages and the svm_bo refcount:
+		 * after prange is migrated to VRAM, each vram page refcount
+		 * holds one svm_bo refcount, and the vram node holds one.
+		 * After a page is migrated to system memory, its vram page
+		 * refcount drops to 0 and svm_migrate_page_free drops one
+		 * svm_bo refcount; svm_range_vram_node_free frees the svm_bo.
+		 */
+		WARN_ONCE(prange->vram_pages == kref_read(&prange->svm_bo->kref),
+			  "svm_bo refcount leaking\n");
 	} else if (!prange->actual_loc) {
 		/* if no page migrated and all pages from prange are at
 		 * sys ram drop svm_bo got from svm_range_vram_node_new