Re: [PATCH] mm/mmu_notifier: avoid double notification when it is useless

2017-10-03 Thread Jerome Glisse
On Tue, Oct 03, 2017 at 05:43:47PM -0700, Nadav Amit wrote:
> Jerome Glisse  wrote:
> 
> > On Wed, Oct 04, 2017 at 01:42:15AM +0200, Andrea Arcangeli wrote:
> > 
> >> I'd like some more explanation about the inner working of "that new
> >> user" as per comment above.
> >> 
> >> It would be enough to drop mmu_notifier_invalidate_range from above
> >> without adding it to the filebacked case. The above gives higher prio
> >> to the hypothetical and uncertain future case, than to the current
> >> real filebacked case that doesn't need ->invalidate_range inside the
> >> PT lock, or do you see something that might already need such
> >> ->invalidate_range?
> > 
> > No, I don't see any new user today that might need such an invalidate,
> > but I was trying to be extra cautious, as I have a tendency to assume
> > that someone might do a patch that uses try_to_unmap() without going
> > through all the comments in the function, and thus possibly use it in
> > an unexpected way from the mmu_notifier callback point of view. I am
> > fine with putting the burden on new users to get it right and adding
> > an extra warning in the function description to try to warn people in
> > a sensible way.
> 
> I must be missing something. After the PTE is changed, but before the
> secondary TLB notification/invalidation, what prevents another thread
> from changing the mappings (e.g., using munmap/mmap) and setting a new
> page at that PTE?
> 
> Wouldn’t it end with the page being mapped without a secondary TLB flush in
> between?

munmap would call the mmu_notifier to invalidate the range too, so the
secondary TLB would be properly flushed before any new pte could be set
up for that particular virtual address range. Unlike a CPU TLB flush,
secondary TLB flushes are unconditional, so the current pte value does
not play any role.
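
For illustration, here is a minimal sketch of the ordering munmap relies
on (not the actual kernel unmap path; the helper name and structure are
simplified assumptions, using the mmu_notifier calls from
<linux/mmu_notifier.h>):

/* Sketch only: the unmap side always notifies before the range can be
 * reused by a new mapping. */
static void sketch_munmap_range(struct mm_struct *mm,
                                unsigned long start, unsigned long end)
{
        mmu_notifier_invalidate_range_start(mm, start, end);
        /* ... clear the ptes for [start, end) under the page table lock ... */
        mmu_notifier_invalidate_range_end(mm, start, end);
        /*
         * Only after _range_end() returns can a later mmap() install new
         * ptes in [start, end); by then every secondary TLB has been
         * flushed unconditionally, whatever the old pte values were.
         */
}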

Cheers,
Jérôme

Re: [PATCH] mm/mmu_notifier: avoid double notification when it is useless

2017-10-03 Thread Nadav Amit
Jerome Glisse  wrote:

> On Wed, Oct 04, 2017 at 01:42:15AM +0200, Andrea Arcangeli wrote:
> 
>> I'd like some more explanation about the inner working of "that new
>> user" as per comment above.
>> 
>> It would be enough to drop mmu_notifier_invalidate_range from above
>> without adding it to the filebacked case. The above gives higher prio
>> to the hypothetical and uncertain future case, than to the current
>> real filebacked case that doesn't need ->invalidate_range inside the
>> PT lock, or do you see something that might already need such
>> ->invalidate_range?
> 
> No, I don't see any new user today that might need such an invalidate,
> but I was trying to be extra cautious, as I have a tendency to assume
> that someone might do a patch that uses try_to_unmap() without going
> through all the comments in the function, and thus possibly use it in
> an unexpected way from the mmu_notifier callback point of view. I am
> fine with putting the burden on new users to get it right and adding
> an extra warning in the function description to try to warn people in
> a sensible way.

I must be missing something. After the PTE is changed, but before the
secondary TLB notification/invalidation, what prevents another thread
from changing the mappings (e.g., using munmap/mmap) and setting a new
page at that PTE?

Wouldn’t it end with the page being mapped without a secondary TLB flush in
between?

Nadav


Re: [PATCH] mm/mmu_notifier: avoid double notification when it is useless

2017-10-03 Thread Jerome Glisse
On Wed, Oct 04, 2017 at 01:42:15AM +0200, Andrea Arcangeli wrote:
> Hello Jerome,
> 
> On Fri, Sep 01, 2017 at 01:30:11PM -0400, Jerome Glisse wrote:
> > +Case A is obvious you do not want to take the risk for the device to write to
> > +a page that might now be use by some completely different task.
> 
> used
> 
> > +is true ven if the thread doing the page table update is preempted right after
> 
> even
> 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 90731e3b7e58..5706252b828a 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1167,8 +1167,15 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
> > goto out_free_pages;
> > VM_BUG_ON_PAGE(!PageHead(page), page);
> >  
> > +   /*
> > +* Leave pmd empty until pte is filled note we must notify here as
> > +* concurrent CPU thread might write to new page before the call to
> > +* mmu_notifier_invalidate_range_end() happen which can lead to a
> 
> happens
> 
> > +* device seeing memory write in different order than CPU.
> > +*
> > +* See Documentation/vm/mmu_notifier.txt
> > +*/
> > pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
> > -   /* leave pmd empty until pte is filled */
> >  
> 
> Here we can change the following mmu_notifier_invalidate_range_end to
> skip calling ->invalidate_range. It could be called
> mmu_notifier_invalidate_range_only_end, or other suggestions
> welcome. Otherwise we'll repeat the call for nothing.
> 
> We need it inside the PT lock for the ordering issue, but we don't
> need to run it twice.
> 
> Same in do_huge_pmd_wp_page, wp_page_copy and
> migrate_vma_insert_page. Every time *clear_flush_notify is used
> mmu_notifier_invalidate_range_only_end should be called after it,
> instead of mmu_notifier_invalidate_range_end.
> 
> I think optimizing that part too fits in the context of this patchset
> (if not in the same patch), because the objective is still the same:
> to remove unnecessary ->invalidate_range calls.

Yes, you are right, good idea. I will respin with that too (and with the
various typos you noted, thank you for that). I can do two patches or
one, I don't mind either way. I will probably do two at first; they can
be folded into one if people prefer just one.
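
For what it's worth, a sketch of what the suggested variant could look
like (an assumption of this sketch, not existing code:
__mmu_notifier_invalidate_range_end() grows a flag telling it to skip
the ->invalidate_range() callback):

/* Like mmu_notifier_invalidate_range_end(), but skip ->invalidate_range()
 * for callers that already notified under the page table lock via a
 * *_clear_flush_notify() helper. */
static inline void
mmu_notifier_invalidate_range_only_end(struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end)
{
        if (mm_has_notifiers(mm))
                __mmu_notifier_invalidate_range_end(mm, start, end,
                                                    true /* only_end */);
}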


> 
> > +* No need to notify as we downgrading page
> 
> are
> 
> > +* table protection not changing it to point
> > +* to a new page.
> > +*
> 
> > +* No need to notify as we downgrading page table to read only
> 
> are
> 
> > +* No need to notify as we replacing a read only page with another
> 
> are
> 
> > @@ -1510,13 +1515,43 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > if (pte_soft_dirty(pteval))
> > swp_pte = pte_swp_mksoft_dirty(swp_pte);
> > set_pte_at(mm, address, pvmw.pte, swp_pte);
> > -   } else
> > +   } else {
> > +   /*
> > +* We should not need to notify here as we reach this
> > +* case only from freeze_page() itself only call from
> > +* split_huge_page_to_list() so everything below must
> > +* be true:
> > +*   - page is not anonymous
> > +*   - page is locked
> > +*
> > +* So as it is a shared page and it is locked, it can
> > +* not be remove from the page cache and replace by
> > +* a new page before mmu_notifier_invalidate_range_end
> > +* so no concurrent thread might update its page table
> > +* to point at new page while a device still is using
> > +* this page.
> > +*
> > +* But we can not assume that new user of try_to_unmap
> > +* will have that in mind so just to be safe here call
> > +* mmu_notifier_invalidate_range()
> > +*
> > +* See Documentation/vm/mmu_notifier.txt
> > +*/
> > dec_mm_counter(mm, mm_counter_file(page));
> > +   mmu_notifier_invalidate_range(mm, address,
> > + address + PAGE_SIZE);
> > +   }
> >  discard:
> > +   /*
> > +* No need to call mmu_notifier_invalidate_range() as we are
> > +* either replacing a present pte with non present one (either
> > +* a swap or special one). We handling the clearing pte case
> > +* above.
> > +*
> > +* See Documentation/vm/mmu_notifier.txt
> > +*/
> > page_remove_rmap(subpage, PageHuge(page));
> > 

Re: [PATCH] mm/mmu_notifier: avoid double notification when it is useless

2017-10-03 Thread Andrea Arcangeli
Hello Jerome,

On Fri, Sep 01, 2017 at 01:30:11PM -0400, Jerome Glisse wrote:
> +Case A is obvious you do not want to take the risk for the device to write to
> +a page that might now be use by some completely different task.

used

> +is true ven if the thread doing the page table update is preempted right after

even

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 90731e3b7e58..5706252b828a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1167,8 +1167,15 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
>   goto out_free_pages;
>   VM_BUG_ON_PAGE(!PageHead(page), page);
>  
> + /*
> +  * Leave pmd empty until pte is filled note we must notify here as
> +  * concurrent CPU thread might write to new page before the call to
> +  * mmu_notifier_invalidate_range_end() happen which can lead to a

happens

> +  * device seeing memory write in different order than CPU.
> +  *
> +  * See Documentation/vm/mmu_notifier.txt
> +  */
>   pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
> - /* leave pmd empty until pte is filled */
>  

Here we can change the following mmu_notifier_invalidate_range_end to
skip calling ->invalidate_range. It could be called
mmu_notifier_invalidate_range_only_end, or other suggestions
welcome. Otherwise we'll repeat the call for nothing.

We need it inside the PT lock for the ordering issue, but we don't
need to run it twice.

Same in do_huge_pmd_wp_page, wp_page_copy and
migrate_vma_insert_page. Every time *clear_flush_notify is used
mmu_notifier_invalidate_range_only_end should be called after it,
instead of mmu_notifier_invalidate_range_end.

I think optimizing that part too fits in the context of this patchset
(if not in the same patch), because the objective is still the same:
to remove unnecessary ->invalidate_range calls.

> +  * No need to notify as we downgrading page

are

> +  * table protection not changing it to point
> +  * to a new page.
> +  *

> +  * No need to notify as we downgrading page table to read only

are

> +  * No need to notify as we replacing a read only page with another

are

> @@ -1510,13 +1515,43 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>   if (pte_soft_dirty(pteval))
>   swp_pte = pte_swp_mksoft_dirty(swp_pte);
>   set_pte_at(mm, address, pvmw.pte, swp_pte);
> - } else
> + } else {
> + /*
> +  * We should not need to notify here as we reach this
> +  * case only from freeze_page() itself only call from
> +  * split_huge_page_to_list() so everything below must
> +  * be true:
> +  *   - page is not anonymous
> +  *   - page is locked
> +  *
> +  * So as it is a shared page and it is locked, it can
> +  * not be remove from the page cache and replace by
> +  * a new page before mmu_notifier_invalidate_range_end
> +  * so no concurrent thread might update its page table
> +  * to point at new page while a device still is using
> +  * this page.
> +  *
> +  * But we can not assume that new user of try_to_unmap
> +  * will have that in mind so just to be safe here call
> +  * mmu_notifier_invalidate_range()
> +  *
> +  * See Documentation/vm/mmu_notifier.txt
> +  */
>   dec_mm_counter(mm, mm_counter_file(page));
> + mmu_notifier_invalidate_range(mm, address,
> +   address + PAGE_SIZE);
> + }
>  discard:
> + /*
> +  * No need to call mmu_notifier_invalidate_range() as we are
> +  * either replacing a present pte with non present one (either
> +  * a swap or special one). We handling the clearing pte case
> +  * above.
> +  *
> +  * See Documentation/vm/mmu_notifier.txt
> +  */
>   page_remove_rmap(subpage, PageHuge(page));
>   put_page(page);
> - mmu_notifier_invalidate_range(mm, address,
> -   address + PAGE_SIZE);
>   }
>  
>   mmu_notifier_invalidate_range_end(vma->vm_mm, start, end);

That is the path that unmaps filebacked pages (btw, not necessarily
shared, unlike the comment says; they can be private but still filebacked).

I'd like some more explanation about the inner working of "that new
user" as per comment above.

It would be enough to drop mmu_notifier_invalidate_range from above
without adding it to the filebacked case. The above gives higher prio
to the hypothetical and uncertain future case, than to the current
real filebacked case that doesn't need ->invalidate_range inside the
PT lock, or do you see something that might already need such
->invalidate_range?

[PATCH] mm/mmu_notifier: avoid double notification when it is useless

2017-09-01 Thread jglisse
From: Jérôme Glisse 

(Note that this is 4.15 material, or 4.14 if people are extra confident. I
 am posting now to get people to test. To that effect maybe it would be a
 good idea to have this patch sit in linux-next for a while for testing.

 Another motivation is that the problem is fresh in everyone's memory.

 Thanks to Andrea for thinking of a problematic scenario for COW.)

When clearing a pte/pmd we are given a choice to notify the event under
the page table lock (the notify version of *_clear_flush calls
mmu_notifier_invalidate_range()). But that notification is not necessary
in all cases.
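
(Roughly, a notify variant bundles the secondary TLB call with the CPU
TLB flush; a simplified sketch of what ptep_clear_flush_notify() expands
to, eliding the real macro plumbing:)

        pte = ptep_clear_flush(vma, address, ptep); /* clear pte + CPU TLB flush */
        mmu_notifier_invalidate_range(vma->vm_mm, address & PAGE_MASK,
                                      (address & PAGE_MASK) + PAGE_SIZE);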

This patch removes almost all of the cases where it is useless to have a
call to mmu_notifier_invalidate_range() before
mmu_notifier_invalidate_range_end(). It also adds documentation for all
those cases, explaining why.

Below is a more in-depth analysis of why it is fine to do this:

For a secondary TLB (non-CPU TLB), like an IOMMU TLB or a device TLB
(when a device uses something like ATS/PASID to get the IOMMU to walk the
CPU page tables to access a process virtual address space), there are
only 2 cases when you need to notify those secondary TLBs while holding
the page table lock when clearing a pte/pmd:

  A) the page backing the address is freed before mmu_notifier_invalidate_range_end()
  B) a page table entry is updated to point to a new page (COW, write fault
 on zero page, __replace_page(), ...)

Case A is obvious: you do not want to take the risk of the device writing
to a page that might now be used by some completely different task.

Case B is more subtle. For correctness it requires the following sequence to
happen:
  - take page table lock
  - clear page table entry and notify ([pmd/pte]p_huge_clear_flush_notify())
  - set page table entry to point to new page

If clearing the page table entry is not followed by a notify before
setting the new pte/pmd value, then you can break a memory model like
C11 or C++11 for the device.
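
In code, the required sequence looks roughly like this (a sketch in the
shape of a COW fault path such as wp_page_copy(); the helper name and
variables are illustrative, not a literal kernel excerpt):

static void sketch_cow_replace_page(struct vm_area_struct *vma,
                                    unsigned long addr, pmd_t *pmd,
                                    pte_t *ptep, pte_t new_pte)
{
        struct mm_struct *mm = vma->vm_mm;
        spinlock_t *ptl = pte_lockptr(mm, pmd);

        mmu_notifier_invalidate_range_start(mm, addr, addr + PAGE_SIZE);
        spin_lock(ptl);                           /* take page table lock */
        ptep_clear_flush_notify(vma, addr, ptep); /* clear + CPU TLB flush
                                                   * + ->invalidate_range() */
        set_pte_at(mm, addr, ptep, new_pte);      /* point pte at the new page */
        spin_unlock(ptl);
        mmu_notifier_invalidate_range_end(mm, addr, addr + PAGE_SIZE);
}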

Consider the following scenario (the device uses a feature similar to
ATS/PASID):

Take two addresses addrA and addrB such that |addrA - addrB| >= PAGE_SIZE;
we assume they are write protected for COW (other cases of B apply too).

[Time N] --
CPU-thread-0  {try to write to addrA}
CPU-thread-1  {try to write to addrB}
CPU-thread-2  {}
CPU-thread-3  {}
DEV-thread-0  {read addrA and populate device TLB}
DEV-thread-2  {read addrB and populate device TLB}
[Time N+1] --
CPU-thread-0  {COW_step0: {mmu_notifier_invalidate_range_start(addrA)}}
CPU-thread-1  {COW_step0: {mmu_notifier_invalidate_range_start(addrB)}}
CPU-thread-2  {}
CPU-thread-3  {}
DEV-thread-0  {}
DEV-thread-2  {}
[Time N+2] --
CPU-thread-0  {COW_step1: {update page table to point to new page for addrA}}
CPU-thread-1  {COW_step1: {update page table to point to new page for addrB}}
CPU-thread-2  {}
CPU-thread-3  {}
DEV-thread-0  {}
DEV-thread-2  {}
[Time N+3] --
CPU-thread-0  {preempted}
CPU-thread-1  {preempted}
CPU-thread-2  {write to addrA which is a write to new page}
CPU-thread-3  {}
DEV-thread-0  {}
DEV-thread-2  {}
[Time N+4] --
CPU-thread-0  {preempted}
CPU-thread-1  {preempted}
CPU-thread-2  {}
CPU-thread-3  {write to addrB which is a write to new page}
DEV-thread-0  {}
DEV-thread-2  {}
[Time N+5] --
CPU-thread-0  {preempted}
CPU-thread-1  {COW_step3: {mmu_notifier_invalidate_range_end(addrB)}}
CPU-thread-2  {}
CPU-thread-3  {}
DEV-thread-0  {}
DEV-thread-2  {}
[Time N+6] --
CPU-thread-0  {preempted}
CPU-thread-1  {}
CPU-thread-2  {}
CPU-thread-3  {}
DEV-thread-0  {read addrA from old page}
DEV-thread-2  {read addrB from new page}

So here, because at time N+2 the cleared page table entry was not paired
with a notification to invalidate the secondary TLB, the device sees the
new value for addrB before seeing the new value for addrA. This breaks
total memory ordering for the device.

When changing a pte to write protect, or to point to a new write
protected page with the same content (KSM), it is fine to delay the
mmu_notifier_invalidate_range call to mmu_notifier_invalidate_range_end()
outside the page table lock. This is true even if the thread doing the
page table update is preempted right after releasing the page table lock
but before calling mmu_notifier_invalidate_range_end().
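
For contrast, a sketch of that write-protect case, where the deferred
notification is enough (again illustrative, not a literal kernel path):

static void sketch_wrprotect_pte(struct vm_area_struct *vma,
                                 unsigned long addr, pmd_t *pmd, pte_t *ptep)
{
        struct mm_struct *mm = vma->vm_mm;
        spinlock_t *ptl = pte_lockptr(mm, pmd);
        pte_t entry;

        mmu_notifier_invalidate_range_start(mm, addr, addr + PAGE_SIZE);
        spin_lock(ptl);
        entry = ptep_clear_flush(vma, addr, ptep); /* no _notify variant */
        entry = pte_wrprotect(entry);
        set_pte_at(mm, addr, ptep, entry);         /* same page, now read only */
        spin_unlock(ptl);
        /*
         * Even if we are preempted here, the pte still points to the same
         * page with the same content, so deferring ->invalidate_range() to
         * mmu_notifier_invalidate_range_end() below is fine.
         */
        mmu_notifier_invalidate_range_end(mm, addr, addr + PAGE_SIZE);
}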

Signed-off-by: Jérôme Glisse 
Cc: Andrea Arcangeli 
Cc: Nadav Amit 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Cc: Joerg Roedel 
Cc: Suravee Suthikulpanit 
Cc: David Woodhouse