This code was using get_user_pages*() in a "Case 2" scenario
(DMA/RDMA), as categorized in [1]. That means it is time to convert
the get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.
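
For illustration, the general shape of that conversion looks roughly
like the sketch below. This is not this driver's code; the helper
names, error handling, and includes are hypothetical:

    #include <linux/errno.h>
    #include <linux/mm.h>

    /* Pin a user buffer for DMA-style ("Case 2") access. */
    static int example_pin(unsigned long uaddr, int nr_pages, bool write,
                           struct page **pages)
    {
            int npinned;

            npinned = pin_user_pages_fast(uaddr, nr_pages,
                                          write ? FOLL_WRITE : 0, pages);
            if (npinned < nr_pages) {
                    /* Roll back a partial pin before failing. */
                    if (npinned > 0)
                            unpin_user_pages(pages, npinned);
                    return npinned < 0 ? npinned : -EFAULT;
            }
            return 0;
    }

    /* Release the pin; dirty the pages if they may have been written. */
    static void example_unpin(struct page **pages, int nr_pages, bool write)
    {
            unpin_user_pages_dirty_lock(pages, nr_pages, write);
    }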

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.

Note that this effectively changes the code's behavior as well: it now
ultimately calls set_page_dirty_lock(), instead of SetPageDirty(). This
is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [3]
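
As a rough sketch of that difference in the dirty-marking path (a
hypothetical single-page example, not this driver's loop):

    /* Before: dirty the page directly, then drop the gup reference. */
    if (writeable)
            SetPageDirty(page);
    put_page(page);

    /*
     * After: unpin_user_pages_dirty_lock() calls set_page_dirty_lock()
     * on pages that are not already dirty, then drops the pin.
     */
    unpin_user_pages_dirty_lock(&page, 1, writeable);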

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

[3] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Signed-off-by: John Hubbard <jhubb...@nvidia.com>
---

Hi,

Note that I have only compile-tested this patch, although that does
also include cross-compiling for a few other arches.

thanks,
John Hubbard
NVIDIA

 drivers/misc/mic/scif/scif_rma.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 01e27682ea30..406cd5abfa72 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -113,14 +113,17 @@ static int scif_destroy_pinned_pages(struct scif_pinned_pages *pin)
        int writeable = pin->prot & SCIF_PROT_WRITE;
        int kernel = SCIF_MAP_KERNEL & pin->map_flags;
 
-       for (j = 0; j < pin->nr_pages; j++) {
-               if (pin->pages[j] && !kernel) {
-                       if (writeable)
-                               SetPageDirty(pin->pages[j]);
-                       put_page(pin->pages[j]);
+       if (kernel) {
+               for (j = 0; j < pin->nr_pages; j++) {
+                       if (pin->pages[j] && !kernel) {
+                               if (writeable)
+                                       set_page_dirty_lock(pin->pages[j]);
+                               put_page(pin->pages[j]);
+                       }
                }
-       }
-
+       } else
+               unpin_user_pages_dirty_lock(pin->pages, pin->nr_pages,
+                                           writeable);
        scif_free(pin->pages,
                  pin->nr_pages * sizeof(*pin->pages));
        scif_free(pin, sizeof(*pin));
@@ -1375,7 +1378,7 @@ int __scif_pin_pages(void *addr, size_t len, int *out_prot,
                        }
                }
 
-               pinned_pages->nr_pages = get_user_pages_fast(
+               pinned_pages->nr_pages = pin_user_pages_fast(
                                (u64)addr,
                                nr_pages,
                                (prot & SCIF_PROT_WRITE) ? FOLL_WRITE : 0,
@@ -1385,11 +1388,8 @@ int __scif_pin_pages(void *addr, size_t len, int *out_prot,
                                if (ulimit)
                                        __scif_dec_pinned_vm_lock(mm, nr_pages);
                                /* Roll back any pinned pages */
-                               for (i = 0; i < pinned_pages->nr_pages; i++) {
-                                       if (pinned_pages->pages[i])
-                                               put_page(
-                                               pinned_pages->pages[i]);
-                               }
+                               unpin_user_pages(pinned_pages->pages,
+                                                pinned_pages->nr_pages);
                                prot &= ~SCIF_PROT_WRITE;
                                try_upgrade = false;
                                goto retry;
-- 
2.26.2
