This code was using get_user_pages*() in a "Case 2" scenario
(DMA/RDMA), following the categorization in [1]. That means it's time
to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.
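
A minimal sketch of the two pairings, for illustration only (it is
not part of this patch, and "addr" is an invented variable): the call
shape stays identical, only the pin/unpin pair changes:

    /* Hypothetical sketch; "addr" and the DMA step are invented. */
    struct page *page;

    /* Old pairing: gup-based, released with put_page(): */
    if (get_user_pages_fast(addr & PAGE_MASK, 1, FOLL_WRITE, &page) != 1)
            return -EFAULT;
    /* ... DMA to/from the page ... */
    put_page(page);

    /* New pairing: FOLL_PIN-based, released with unpin_user_page(): */
    if (pin_user_pages_fast(addr & PAGE_MASK, 1, FOLL_WRITE, &page) != 1)
            return -EFAULT;
    /* ... DMA to/from the page ... */
    unpin_user_page(page);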

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: Alex Williamson <alex.william...@redhat.com>
Cc: Cornelia Huck <coh...@redhat.com>
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard <jhubb...@nvidia.com>
---

Hi,

Changes since v1: rebased onto Linux-5.8-rc2.

thanks,
John Hubbard

 drivers/vfio/vfio_iommu_spapr_tce.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 16b3adc508db..fe888b5dcc00 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -383,7 +383,7 @@ static void tce_iommu_unuse_page(struct tce_container *container,
        struct page *page;
 
        page = pfn_to_page(hpa >> PAGE_SHIFT);
-       put_page(page);
+       unpin_user_page(page);
 }
 
 static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
@@ -486,7 +486,7 @@ static int tce_iommu_use_page(unsigned long tce, unsigned long *hpa)
        struct page *page = NULL;
        enum dma_data_direction direction = iommu_tce_direction(tce);
 
-       if (get_user_pages_fast(tce & PAGE_MASK, 1,
+       if (pin_user_pages_fast(tce & PAGE_MASK, 1,
                        direction != DMA_TO_DEVICE ? FOLL_WRITE : 0,
                        &page) != 1)
                return -EFAULT;
-- 
2.27.0
