Re: [PATCH v3 38/41] powerpc: convert put_page() to put_user_page*()

2019-08-09 Thread Michael Ellerman
John Hubbard  writes:
> On 8/7/19 10:42 PM, Michael Ellerman wrote:
>> Hi John,
>> 
>> john.hubb...@gmail.com writes:
>>> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
>>> index b056cae3388b..e126193ba295 100644
>>> --- a/arch/powerpc/mm/book3s64/iommu_api.c
>>> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
>>> @@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>>>  {
>>> long i;
>>> struct page *page = NULL;
>>> +   bool dirty = false;
>> 
>> I don't think you need that initialisation do you?
>> 
>
> Nope, it can go. Fixed locally, thanks.

Thanks.

> Did you get a chance to look at enough of the other bits to feel comfortable 
> with the patch, overall?

Mostly :) It's not really my area, but all the conversions looked
correct to me as best as I could tell.

So I'm fine for it to go in as part of the series:

Acked-by: Michael Ellerman  (powerpc)

cheers


Re: [PATCH v3 38/41] powerpc: convert put_page() to put_user_page*()

2019-08-08 Thread John Hubbard
On 8/7/19 10:42 PM, Michael Ellerman wrote:
> Hi John,
> 
> john.hubb...@gmail.com writes:
>> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
>> index b056cae3388b..e126193ba295 100644
>> --- a/arch/powerpc/mm/book3s64/iommu_api.c
>> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
>> @@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>>  {
>>  long i;
>>  struct page *page = NULL;
>> +bool dirty = false;
> 
> I don't think you need that initialisation do you?
> 

Nope, it can go. Fixed locally, thanks.

Did you get a chance to look at enough of the other bits to feel comfortable 
with the patch, overall?

thanks,
-- 
John Hubbard
NVIDIA


Re: [PATCH v3 38/41] powerpc: convert put_page() to put_user_page*()

2019-08-07 Thread Michael Ellerman
Hi John,

john.hubb...@gmail.com writes:
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index b056cae3388b..e126193ba295 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  {
>   long i;
>   struct page *page = NULL;
> + bool dirty = false;

I don't think you need that initialisation do you?

>   if (!mem->hpas)
>   return;
> @@ -215,10 +215,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>   if (!page)
>   continue;
>  
> - if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> - SetPageDirty(page);
> + dirty = mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
> - put_page(page);
> + put_user_pages_dirty_lock(&page, 1, dirty);
>   mem->hpas[i] = 0;
>   }
>  }

cheers


[PATCH v3 38/41] powerpc: convert put_page() to put_user_page*()

2019-08-06 Thread john.hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
mm_iommu_unpin(): it now ultimately calls set_page_dirty_lock(),
instead of SetPageDirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Benjamin Herrenschmidt 
Cc: Christoph Hellwig 
Cc: Michael Ellerman 
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: John Hubbard 
---
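(Not part of the commit message: below is a minimal sketch of the before/after
calling pattern that this series converts. The helper name
demo_pin_and_release() and its use of get_user_pages_fast()/FOLL_WRITE are
illustrative assumptions only, not code taken from this patch.)

    static int demo_pin_and_release(unsigned long uaddr, bool wrote)
    {
            struct page *page;
            int ret;

            /* Pin exactly one user page for direct access. */
            ret = get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
            if (ret != 1)
                    return ret < 0 ? ret : -EFAULT;

            /* ... access the pinned page ... */

            /*
             * Old pattern being replaced throughout the tree:
             *
             *      if (wrote)
             *              SetPageDirty(page);
             *      put_page(page);
             *
             * New pattern: a single call that dirties the page via
             * set_page_dirty_lock() and drops the gup reference.
             */
            put_user_pages_dirty_lock(&page, 1, wrote);
            return 0;
    }

Where no dirtying is involved, put_user_page() and put_user_pages() are the
drop-in replacements for put_page() and the release loop, as in the hunks
below.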
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  4 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 19 ++++++++++++++-----
 arch/powerpc/kvm/e500_mmu.c            |  3 +--
 arch/powerpc/mm/book3s64/iommu_api.c   | 11 +++++------
 4 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 9a75f0e1933b..18646b738ce1 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -731,7 +731,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 * we have to drop the reference on the correct tail
 * page to match the get inside gup()
 */
-   put_page(pages[0]);
+   put_user_page(pages[0]);
}
return ret;
 
@@ -1206,7 +1206,7 @@ void kvmppc_unpin_guest_page(struct kvm *kvm, void *va, unsigned long gpa,
unsigned long gfn;
int srcu_idx;
 
-   put_page(page);
+   put_user_page(page);
 
if (!dirty)
return;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..f53273fbfa2d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -821,8 +821,12 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 */
if (!ptep) {
local_irq_enable();
-   if (page)
-   put_page(page);
+   if (page) {
+   if (upgrade_write)
+   put_user_page(page);
+   else
+   put_page(page);
+   }
return RESUME_GUEST;
}
pte = *ptep;
@@ -870,9 +874,14 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
*levelp = level;
 
if (page) {
-   if (!ret && (pte_val(pte) & _PAGE_WRITE))
-   set_page_dirty_lock(page);
-   put_page(page);
+   bool dirty = !ret && (pte_val(pte) & _PAGE_WRITE);
+   if (upgrade_write)
+   put_user_pages_dirty_lock(&page, 1, dirty);
+   else {
+   if (dirty)
+   set_page_dirty_lock(page);
+   put_page(page);
+   }
}
 
/* Increment number of large pages if we (successfully) inserted one */
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index 2d910b87e441..67bb8d59d4b1 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -850,8 +850,7 @@ int kvm_vcpu_ioctl_config_tlb(struct kvm_vcpu *vcpu,
  free_privs_first:
kfree(privs[0]);
  put_pages:
-   for (i = 0; i < num_pages; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, num_pages);
  free_pages:
kfree(pages);
return ret;
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index b056cae3388b..e126193ba295 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -170,9 +170,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
return 0;
 
 free_exit:
-   /* free the reference taken */
-   for (i = 0; i < pinned; i++)
-   put_page(mem->hpages[i]);
+   /* free the references taken */
+   put_user_pages(mem->hpages, pinned);
 
vfree(mem->hpas);
kfree(mem);
@@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 {
long i;
struct page *page = NULL;
+   bool dirty = false;
 
if (!mem->hpas)
return;
@@ -215,10 +215,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
if (!page)
continue;
 
-   if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
-   SetPageDirty(page);
+   dirty = mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;