Re: [PATCH 4/5] dax: fix missing writeprotect the pte entry

2022-01-24 Thread Muchun Song
On Mon, Jan 24, 2022 at 3:41 PM Christoph Hellwig  wrote:
>
> On Fri, Jan 21, 2022 at 03:55:14PM +0800, Muchun Song wrote:
> > Reuse some of the page_mkclean_one() infrastructure so that DAX can
> > handle a similar case and fix this issue.
>
> Can you split out some of the infrastructure changes into proper
> well-documented preparation patches?

Will do. I'll introduce page_vma_mkclean_one in a preparation
patch and then fix the DAX issue in a separate patch. Thanks
for your suggestions.
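
To make that concrete, here is roughly the shape of the helper I have
in mind. This is only a sketch: the name page_vma_mkclean_one(), the
exact signature, and leaving the mmu_notifier invalidation to the
callers are assumptions here, not the final patch.

static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
{
	struct vm_area_struct *vma = pvmw->vma;
	unsigned long address = pvmw->address;
	int cleaned = 0;

	while (page_vma_mapped_walk(pvmw)) {
		int ret = 0;

		address = pvmw->address;
		if (pvmw->pte) {
			pte_t *pte = pvmw->pte;
			pte_t entry;

			if (!pte_dirty(*pte) && !pte_write(*pte))
				continue;

			/* Write-protect and clean a pte-mapped page. */
			flush_cache_page(vma, address, pte_pfn(*pte));
			entry = ptep_clear_flush(vma, address, pte);
			entry = pte_wrprotect(entry);
			entry = pte_mkclean(entry);
			set_pte_at(vma->vm_mm, address, pte, entry);
			ret = 1;
		} else {
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
			pmd_t *pmd = pvmw->pmd;
			pmd_t entry;

			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
				continue;

			/* Write-protect and clean a pmd-mapped THP. */
			flush_cache_range(vma, address,
					  address + HPAGE_PMD_SIZE);
			entry = pmdp_invalidate(vma, address, pmd);
			entry = pmd_wrprotect(entry);
			entry = pmd_mkclean(entry);
			set_pmd_at(vma->vm_mm, address, pmd, entry);
			ret = 1;
#else
			/* unexpected pmd-mapped page? */
			WARN_ON_ONCE(1);
#endif
		}

		if (ret)
			cleaned++;
	}

	return cleaned;
}

page_mkclean_one() and the DAX path would then both call this after
setting up their own page_vma_mapped_walk (and mmu_notifier range).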

>
> > + pgoff_t pgoff_end = pgoff_start + npfn - 1;
> >
> >   i_mmap_lock_read(mapping);
> > - vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
> > - struct mmu_notifier_range range;
> > - unsigned long address;
> > -
> > + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) {
>
> Please avoid the overly long lines here.  Just using start and end
> might be an easy option.
>

Will do.
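
For example, roughly (illustrative only, with the new offsets simply
named start and end as you suggest; the loop body is unchanged):

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
		/* write-protect the mappings of this pfn range, as before */
	}
	i_mmap_unlock_read(mapping);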

Thanks.



Re: [PATCH 3/5] mm: page_vma_mapped: support checking if a pfn is mapped into a vma

2022-01-24 Thread Muchun Song
On Mon, Jan 24, 2022 at 3:36 PM Christoph Hellwig  wrote:
>
> On Fri, Jan 21, 2022 at 03:55:13PM +0800, Muchun Song wrote:
> > + if (pvmw->pte && ((pvmw->flags & PVMW_PFN_WALK) || !PageHuge(pvmw->page)))
>
> Please avoid the overly long line here and in a few other places.

OK.
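
I'll wrap it, roughly like this (condition only, the body stays the
same):

	if (pvmw->pte &&
	    ((pvmw->flags & PVMW_PFN_WALK) || !PageHuge(pvmw->page))) {
		/* existing body unchanged */
	}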

>
> > +/*
> > + * Then at what user virtual address will none of the page be found in vma?
>
> Doesn't parse, what is this trying to say?

Well, I am also confused.

BTW, this line was not introduced by me; it comes from:

  commit 37ffe9f4d7ff ("mm/thp: fix vma_address() if virtual address
  below file offset")

If it is really confusing, I can replace this line with:

"Return the end user virtual address of a page within a vma"

Thanks.



Re: [PATCH 1/5] mm: rmap: fix cache flush on THP pages

2022-01-24 Thread Muchun Song
On Mon, Jan 24, 2022 at 3:34 PM Christoph Hellwig  wrote:
>
> On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index b0fd9dc19eba..65670cb805d6 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> >   if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> >   continue;
> >
> > - flush_cache_page(vma, address, page_to_pfn(page));
> > + flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>
> Do we need a flush_cache_folio here given that we must be dealing with
> what effectively is a folio here?

I think that can be a future improvement. Keeping this fix
minimal should also make it easy for anyone who wants to
backport it. If backporting is not a concern, I think it is
better to introduce flush_cache_folio in this patch. What do
you think?
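
If we did add it here, I would expect a generic fallback along these
lines. This is purely a sketch: the name and signature of
flush_cache_folio() are only what this thread is suggesting, not an
existing API I am relying on.

#ifndef flush_cache_folio
static inline void flush_cache_folio(struct vm_area_struct *vma,
				     unsigned long addr, struct folio *folio)
{
	/* No arch-specific helper: flush the whole folio as a range. */
	flush_cache_range(vma, addr, addr + folio_size(folio));
}
#endif

Architectures with a real implementation could override it, and
page_mkclean_one() would call flush_cache_folio() instead of
open-coding the range.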

>
> Also please avoid the overly long line.
>

OK.

Thanks.