Re: [PATCH mm-unstable v1 10/20] RDMA/umem: remove FOLL_FORCE usage

From: Jason Gunthorpe
Date: 2022-11-16
On Wed, Nov 16, 2022 at 11:26:49AM +0100, David Hildenbrand wrote:
> GUP now supports reliable R/O long-term pinning in COW mappings, such
> that we break COW early. MAP_SHARED VMAs only use the shared zeropage so
> far in one corner case (DAXFS file with holes), which can be ignored
> because GUP does not support long-term pinning in fsdax (see
> check_vma_flags()).
> 
> Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required
> for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop
> using FOLL_FORCE, which is really only for ptrace access.
> 
> Tested-by: Leon Romanovsky  # Over mlx4 and mlx5.
> Cc: Jason Gunthorpe 
> Cc: Leon Romanovsky 
> Signed-off-by: David Hildenbrand 
> ---
>  drivers/infiniband/core/umem.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Jason Gunthorpe 

Jason


[PATCH mm-unstable v1 10/20] RDMA/umem: remove FOLL_FORCE usage

From: David Hildenbrand
Date: 2022-11-16
GUP now supports reliable R/O long-term pinning in COW mappings, such
that we break COW early. MAP_SHARED VMAs only use the shared zeropage so
far in one corner case (DAXFS file with holes), which can be ignored
because GUP does not support long-term pinning in fsdax (see
check_vma_flags()).

Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required
for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop
using FOLL_FORCE, which is really only for ptrace access.
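
To illustrate that point, a read-only long-term pin now needs neither
FOLL_WRITE nor FOLL_FORCE. A minimal sketch (the pin_ro_longterm() helper is
hypothetical, not part of this patch or the kernel tree):

/*
 * Hypothetical example: FOLL_LONGTERM alone is enough for a reliable
 * read-only long-term pin now that GUP breaks COW early.
 */
static int pin_ro_longterm(unsigned long start, int nr_pages,
			   struct page **pages)
{
	unsigned int gup_flags = FOLL_LONGTERM;

	return pin_user_pages_fast(start, nr_pages, gup_flags, pages);
}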

Tested-by: Leon Romanovsky  # Over mlx4 and mlx5.
Cc: Jason Gunthorpe 
Cc: Leon Romanovsky 
Signed-off-by: David Hildenbrand 
---
 drivers/infiniband/core/umem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 86d479772fbc..755a9c57db6f 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -156,7 +156,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	struct mm_struct *mm;
 	unsigned long npages;
 	int pinned, ret;
-	unsigned int gup_flags = FOLL_WRITE;
+	unsigned int gup_flags = FOLL_LONGTERM;
 
 	/*
 	 * If the combination of the addr and size requested for this memory
@@ -210,8 +210,8 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 	cur_base = addr & PAGE_MASK;
 
-	if (!umem->writable)
-		gup_flags |= FOLL_FORCE;
+	if (umem->writable)
+		gup_flags |= FOLL_WRITE;
 
 	while (npages) {
 		cond_resched();
@@ -219,7 +219,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 					  min_t(unsigned long, npages,
 						PAGE_SIZE /
 						sizeof(struct page *)),
-					  gup_flags | FOLL_LONGTERM, page_list);
+					  gup_flags, page_list);
 		if (pinned < 0) {
 			ret = pinned;
 			goto umem_release;
-- 
2.38.1
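
For reference, after this patch the GUP flag setup in ib_umem_get() reads
roughly as follows (pieced together from the hunks above; the
pin_user_pages_fast() call site is paraphrased, not quoted verbatim):

	unsigned int gup_flags = FOLL_LONGTERM;
	...
	/* Request write access only when the umem itself is writable. */
	if (umem->writable)
		gup_flags |= FOLL_WRITE;
	...
	pinned = pin_user_pages_fast(cur_base,
				     min_t(unsigned long, npages,
					   PAGE_SIZE /
					   sizeof(struct page *)),
				     gup_flags, page_list);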