On 3/12/20 12:33 PM, Jason Gunthorpe wrote:
pmd_to_hmm_pfn_flags() already checks pmd_protnone() and returns cpu flags of 0
in that case. If no fault is requested, the pfns should be returned without the
valid flag set.

It should not unconditionally fault if faulting is not requested.

Fixes: 2aee09d8c116 ("mm/hmm: change hmm_vma_fault() to allow write fault on page basis")
Signed-off-by: Jason Gunthorpe <j...@mellanox.com>

Looks good to me.
Reviewed-by: Ralph Campbell <rcampb...@nvidia.com>
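
For anyone reading along: pmd_to_hmm_pfn_flags() already folds the
pmd_protnone() check into the cpu flags it computes, which is why the extra
test in hmm_vma_handle_pmd() is redundant. From memory the helper looks
roughly like this (a paraphrase of mm/hmm.c from around this series, not an
exact quote of the tree):

static inline uint64_t pmd_to_hmm_pfn_flags(struct hmm_range *range, pmd_t pmd)
{
	/* protnone (NUMA hinting) pmds yield cpu_flags == 0, i.e. not valid */
	if (pmd_protnone(pmd))
		return 0;
	return pmd_write(pmd) ? range->flags[HMM_PFN_VALID] |
				range->flags[HMM_PFN_WRITE] :
				range->flags[HMM_PFN_VALID];
}

With cpu_flags == 0, hmm_range_need_fault() only requests a fault when the
caller actually asked for a valid (or writable) pfn, so dropping the
pmd_protnone() test stops the unconditional fault while keeping the
"return not-valid pfns" behaviour described above.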

---
  mm/hmm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

Bonus patch; this one was found after I made the series.

diff --git a/mm/hmm.c b/mm/hmm.c
index ca33d086bdc190..6d9da4b0f0a9f8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -226,7 +226,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 	hmm_range_need_fault(hmm_vma_walk, pfns, npages, cpu_flags,
 			     &fault, &write_fault);
 
-	if (pmd_protnone(pmd) || fault || write_fault)
+	if (fault || write_fault)
 		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
