Re: [PATCH 4/5] dax: fix missing writeprotect the pte entry

2022-01-23 Thread Christoph Hellwig
On Fri, Jan 21, 2022 at 03:55:14PM +0800, Muchun Song wrote:
> Reuse some of the page_mkclean_one() infrastructure so that DAX can
> handle a similar case and fix this issue.

Can you split out some of the infrastructure changes into proper
well-documented preparation patches?

> + pgoff_t pgoff_end = pgoff_start + npfn - 1;
>  
>   i_mmap_lock_read(mapping);
> - vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
> - struct mmu_notifier_range range;
> - unsigned long address;
> -
> + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) {

Please avoid the overly long lines here.  Just using start and end
might be an easy option.
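Something like this (an untested sketch, assuming the existing
pgoff_start/pgoff_end locals are simply renamed) keeps the foreach call
well within 80 columns:

	pgoff_t end = start + npfn - 1;

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
		/* per-VMA work unchanged */
	}
	i_mmap_unlock_read(mapping);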




Re: [PATCH 3/5] mm: page_vma_mapped: support checking if a pfn is mapped into a vma

2022-01-23 Thread Christoph Hellwig
On Fri, Jan 21, 2022 at 03:55:13PM +0800, Muchun Song wrote:
> + if (pvmw->pte && ((pvmw->flags & PVMW_PFN_WALK) || !PageHuge(pvmw->page)))

Please avoid the overly long line here and in a few other places.
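One option (a formatting sketch only, no functional change) is to break
the condition at the outer &&:

	if (pvmw->pte &&
	    ((pvmw->flags & PVMW_PFN_WALK) || !PageHuge(pvmw->page)))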

> +/*
> + * Then at what user virtual address will none of the page be found in vma?

Doesn't parse, what is this trying to say?



Re: [PATCH 2/5] dax: fix cache flush on PMD-mapped pages

2022-01-23 Thread Christoph Hellwig
On Fri, Jan 21, 2022 at 03:55:12PM +0800, Muchun Song wrote:
> flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
> For a THP it therefore only covers the head page, not the full range of
> the compound page.  Replace it with flush_cache_range() to fix this issue.
> 
> Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
> Signed-off-by: Muchun Song 
> ---
>  fs/dax.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 88be1c02a151..2955ec65eb65 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -857,7 +857,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
>   if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
>   goto unlock_pmd;
>  
> - flush_cache_page(vma, address, pfn);
> + flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

Same comment as for the previous one.
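For the line length here, wrapping the call after the second argument
(formatting sketch only) is probably enough:

	flush_cache_range(vma, address,
			  address + HPAGE_PMD_SIZE);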




Re: [PATCH 1/5] mm: rmap: fix cache flush on THP pages

2022-01-23 Thread Christoph Hellwig
On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>   if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>   continue;
>  
> - flush_cache_page(vma, address, page_to_pfn(page));
> + flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

Do we need a flush_cache_folio here, given that what we are dealing with
is effectively a folio?

Also please avoid the overly long line.
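
If a flush_cache_folio() helper turns out to be the right interface, a
minimal sketch of a generic fallback could look like this (hypothetical,
assuming no such helper exists in the tree yet; architectures with
virtually indexed caches would want their own version):

	/* Hypothetical helper: flush the cache for a whole folio mapping. */
	static inline void flush_cache_folio(struct vm_area_struct *vma,
					     unsigned long address,
					     struct folio *folio)
	{
		flush_cache_range(vma, address, address + folio_size(folio));
	}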