Re: [PATCH v2 3/3] infiniband/mm: convert to the new put_user_page[s]() calls

2018-10-05 Thread John Hubbard
On 10/5/18 8:20 AM, Jason Gunthorpe wrote:
> On Thu, Oct 04, 2018 at 09:02:25PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard 
>>
>> For code that retains pages via get_user_pages*(),
>> release those pages via the new put_user_page(),
>> instead of put_page().
>>
>> This prepares for eventually fixing the problem described
>> in [1], and is following a plan listed in [2], [3], [4].
>>
>> [1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
>>
>> [2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubb...@nvidia.com
>> Proposed steps for fixing get_user_pages() + DMA problems.
>>
>> [3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrc...@quack2.suse.cz
>> Bounce buffers (otherwise [2] is not really viable).
>>
>> [4] https://lkml.kernel.org/r/20181003162115.gg24...@quack2.suse.cz
>> Follow-up discussions.
>>
>> CC: Doug Ledford 
>> CC: Jason Gunthorpe 
>> CC: Mike Marciniszyn 
>> CC: Dennis Dalessandro 
>> CC: Christian Benvenuti 
>>
>> CC: linux-r...@vger.kernel.org
>> CC: linux-kernel@vger.kernel.org
>> CC: linux...@kvack.org
>> Signed-off-by: John Hubbard 
>>  drivers/infiniband/core/umem.c  |  2 +-
>>  drivers/infiniband/core/umem_odp.c  |  2 +-
>>  drivers/infiniband/hw/hfi1/user_pages.c | 11 ---
>>  drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
>>  drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ---
>>  drivers/infiniband/hw/qib/qib_user_sdma.c   |  8 
>>  drivers/infiniband/hw/usnic/usnic_uiom.c|  2 +-
>>  7 files changed, 18 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index a41792dbae1f..9430d697cb9f 100644
>> +++ b/drivers/infiniband/core/umem.c
>> @@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, 
>> struct ib_umem *umem, int d
>>  page = sg_page(sg);
>>  if (!PageDirty(page) && umem->writable && dirty)
>>  set_page_dirty_lock(page);
>> -put_page(page);
>> +put_user_page(page);
>>  }
> 
> How about ?
> 
> if (umem->writable && dirty)
>  put_user_pages_dirty_lock(&page, 1);
> else
>  put_user_page(page);
> 
> ?

OK, I'll make that change.

> 
>> diff --git a/drivers/infiniband/hw/hfi1/user_pages.c 
>> b/drivers/infiniband/hw/hfi1/user_pages.c
>> index e341e6dcc388..99ccc0483711 100644
>> +++ b/drivers/infiniband/hw/hfi1/user_pages.c
>> @@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, 
>> unsigned long vaddr, size_t np
>>  void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
>>   size_t npages, bool dirty)
>>  {
>> -size_t i;
>> -
>> -for (i = 0; i < npages; i++) {
>> -if (dirty)
>> -set_page_dirty_lock(p[i]);
>> -put_page(p[i]);
>> -}
>> +if (dirty)
>> +put_user_pages_dirty_lock(p, npages);
>> +else
>> +put_user_pages(p, npages);
> 
> And I know Jan gave the feedback to remove the bool argument, but just
> pointing out that quite possibly every caller will wrap it in an if
> like this...
> 

Yes, that attracted me, too. It's nice to write the "if" code once, instead of
many times. But doing it efficiently requires a bool argument (otherwise, you
end up with another "if" branch, to convert from the bool to an enum or flag
argument), and that's generally avoided because no one wants to see code of
the form:

   do_this(0, 1, 0, 1);
   do_this(1, 0, 0, 1);

which, although hilarious, is still evil. haha. Anyway, maybe I'll leave it
as-is for now, to inject some hysteresis into this aspect of the review?


thanks,
-- 
John Hubbard
NVIDIA


Re: [PATCH v2 3/3] infiniband/mm: convert to the new put_user_page[s]() calls

2018-10-05 Thread Jason Gunthorpe
On Thu, Oct 04, 2018 at 09:02:25PM -0700, john.hubb...@gmail.com wrote:
> From: John Hubbard 
> 
> For code that retains pages via get_user_pages*(),
> release those pages via the new put_user_page(),
> instead of put_page().
> 
> This prepares for eventually fixing the problem described
> in [1], and is following a plan listed in [2], [3], [4].
> 
> [1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
> 
> [2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubb...@nvidia.com
> Proposed steps for fixing get_user_pages() + DMA problems.
> 
> [3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrc...@quack2.suse.cz
> Bounce buffers (otherwise [2] is not really viable).
> 
> [4] https://lkml.kernel.org/r/20181003162115.gg24...@quack2.suse.cz
> Follow-up discussions.
> 
> CC: Doug Ledford 
> CC: Jason Gunthorpe 
> CC: Mike Marciniszyn 
> CC: Dennis Dalessandro 
> CC: Christian Benvenuti 
> 
> CC: linux-r...@vger.kernel.org
> CC: linux-kernel@vger.kernel.org
> CC: linux...@kvack.org
> Signed-off-by: John Hubbard 
>  drivers/infiniband/core/umem.c  |  2 +-
>  drivers/infiniband/core/umem_odp.c  |  2 +-
>  drivers/infiniband/hw/hfi1/user_pages.c | 11 ---
>  drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
>  drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ---
>  drivers/infiniband/hw/qib/qib_user_sdma.c   |  8 
>  drivers/infiniband/hw/usnic/usnic_uiom.c|  2 +-
>  7 files changed, 18 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index a41792dbae1f..9430d697cb9f 100644
> +++ b/drivers/infiniband/core/umem.c
> @@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, struct 
> ib_umem *umem, int d
>   page = sg_page(sg);
>   if (!PageDirty(page) && umem->writable && dirty)
>   set_page_dirty_lock(page);
> - put_page(page);
> + put_user_page(page);
>   }

How about ?

if (umem->writable && dirty)
 put_user_pages_dirty_lock(&page, 1);
else
 put_user_page(page);

?

> diff --git a/drivers/infiniband/hw/hfi1/user_pages.c 
> b/drivers/infiniband/hw/hfi1/user_pages.c
> index e341e6dcc388..99ccc0483711 100644
> +++ b/drivers/infiniband/hw/hfi1/user_pages.c
> @@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, 
> unsigned long vaddr, size_t np
>  void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
>size_t npages, bool dirty)
>  {
> - size_t i;
> -
> - for (i = 0; i < npages; i++) {
> - if (dirty)
> - set_page_dirty_lock(p[i]);
> - put_page(p[i]);
> - }
> + if (dirty)
> + put_user_pages_dirty_lock(p, npages);
> + else
> + put_user_pages(p, npages);

And I know Jan gave the feedback to remove the bool argument, but just
pointing out that quite possibly every caller will wrap it in an if
like this...

Jason


[PATCH v2 3/3] infiniband/mm: convert to the new put_user_page[s]() calls

2018-10-04 Thread john . hubbard
From: John Hubbard 

For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().

This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubb...@nvidia.com
Proposed steps for fixing get_user_pages() + DMA problems.

[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrc...@quack2.suse.cz
Bounce buffers (otherwise [2] is not really viable).

[4] https://lkml.kernel.org/r/20181003162115.gg24...@quack2.suse.cz
Follow-up discussions.

CC: Doug Ledford 
CC: Jason Gunthorpe 
CC: Mike Marciniszyn 
CC: Dennis Dalessandro 
CC: Christian Benvenuti 

CC: linux-r...@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux...@kvack.org
Signed-off-by: John Hubbard 
---
 drivers/infiniband/core/umem.c  |  2 +-
 drivers/infiniband/core/umem_odp.c  |  2 +-
 drivers/infiniband/hw/hfi1/user_pages.c | 11 ---
 drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ---
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  8 
 drivers/infiniband/hw/usnic/usnic_uiom.c|  2 +-
 7 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a41792dbae1f..9430d697cb9f 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -60,7 +60,7 @@ static void __ib_umem_release(struct ib_device *dev, struct 
ib_umem *umem, int d
page = sg_page(sg);
if (!PageDirty(page) && umem->writable && dirty)
set_page_dirty_lock(page);
-   put_page(page);
+   put_user_page(page);
}
 
	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/core/umem_odp.c 
b/drivers/infiniband/core/umem_odp.c
index 6ec748eccff7..6227b89cf05c 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -717,7 +717,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 
user_virt, u64 bcnt,
ret = -EFAULT;
break;
}
-   put_page(local_page_list[j]);
+   put_user_page(local_page_list[j]);
continue;
}
 
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c 
b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..99ccc0483711 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, 
unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 size_t npages, bool dirty)
 {
-   size_t i;
-
-   for (i = 0; i < npages; i++) {
-   if (dirty)
-   set_page_dirty_lock(p[i]);
-   put_page(p[i]);
-   }
+   if (dirty)
+   put_user_pages_dirty_lock(p, npages);
+   else
+   put_user_pages(p, npages);
 
if (mm) { /* during close after signal, mm can be NULL */
	down_write(&mm->mmap_sem);
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c 
b/drivers/infiniband/hw/mthca/mthca_memfree.c
index cc9c0c8ccba3..b8b12effd009 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct 
mthca_uar *uar,
 
	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
if (ret < 0) {
-   put_page(pages[0]);
+   put_user_page(pages[0]);
goto out;
}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct 
mthca_uar *uar,
 mthca_uarc_virt(dev, uar, i));
if (ret) {
	pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-	put_page(sg_page(&db_tab->page[i].mem));
+	put_user_page(sg_page(&db_tab->page[i].mem));
goto out;
}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, 
struct mthca_uar *uar,
if (db_tab->page[i].uvirt) {
mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
	pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-	put_page(sg_page(&db_tab->page[i].mem));
+	put_user_page(sg_page(&db_tab->page[i].mem));
}
}
 
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c 
b/drivers/infiniband/hw/qib/qib_user_pages.c
index 
