Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-26 Thread Peter Xu
On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect 
> > > > > > to explicit
> > > > > > call split_huge_pmd_address(), afaict.  Since both of them use 
> > > > > > __split_huge_pmd()
> > > > > > internally which will generate that unwanted CLEAR notify.
> > > > >
> > > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > > which will always CLEAR. However gup only calls 
> > > > > split_huge_pmd_address() if it
> > > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > > >
> > > > >   if (likely(!pmd_trans_huge(pmdval)))
> > > > >   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > > >
> > > > > So I don't think we have a problem here.
> > > >
> > > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, 
> > > > right?  I
> > > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should 
> > > > return
> > > > true above, so we'll skip follow_page_pte(); then we'll check 
> > > > FOLL_SPLIT_PMD
> > > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> > >
> > > That seems correct - if the thp is not mapped with a pmd we won't split 
> > > and we
> > > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that 
> > > case it
> > > is fine - we will retry, but the retry won't CLEAR because the pmd has
> > > already been split.
> > 
> > Aha!
> > 
> > >
> > > The issue with doing it unconditionally in the make device exclusive path
> > > is that
> > > you *always* CLEAR even if there is no thp pmd to split. Or at least 
> > > that's my
> > > understanding, please let me know if it doesn't make sense.
> > 
> > Exactly.  But if you see what I meant here, even if it can work like this,
> > it still sounds fragile, doesn't it?  I just feel something is slightly off
> > there..
> > 
> > IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> > performance, afaict, because if it's not a thp even without locking, then it
> > won't be, so further __split_huge_pmd() is not necessary.
> > 
> > IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> > __split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
> > one to be broken with that seems-to-be-irrelevant change I'm afraid..
> 
> Well I would argue the performance of memory notifiers is becoming 
> increasingly
> important, and a change that causes them to be called unnecessarily is
> therefore not very legal. Likely the correct fix here is to optimise
> __split_huge_pmd() to only call the notifier if it's actually going to split a
> pmd. As you said though that's a completely different story which I think 
> would
> be best done as a separate series.

Right, maybe I can look a bit more into that later; but my whole point was to
express that one functionality shouldn't depend on such a trivial detail of
implementation of other modules (thp split in this case).

> 
> > This lets me go back a step to think about why we need this notifier at
> > all to cover this whole range of make_device_exclusive() procedure..
> > 
> > What I am thinking is, we're afraid some CPU accesses this page so the pte 
> > got
> > quickly restored when device atomic operation is carrying on.  Then with 
> > this
> > notifier we'll be able to cancel it.  Makes perfect sense.
> > 
> > However do we really need to register this notifier so early?  The thing is 
> > the
> > GPU driver still has all the page locks, so even if there's a race to 
> > restore
> > the ptes, they'll block at taking the page lock until the driver releases 
> > it.
> > 
> > IOW, I'm wondering whether the "non-fragile" way to do this is not do
> > mmu_interval_notifier_insert() that early: what if we register that notifier
> > after make_device_exclusive_range() returns but before page_unlock() 
> > somehow?
> > So before page_unlock(), race is protected fully by the lock itself; after
> > that, it's done by mmu notifier.  Then maybe we don't need to worry about 
> > all
> > these notifications during marking exclusive (while we shouldn't)?
> 
> The notifier is needed to protect against races with pte changes. Once a page
> has been marked for exclusive access the driver will update its page tables 
> to
> allow atomic access to the page. However in the meantime the page could become
> unmapped entirely or write protected.
> 
> As I understand things the page lock won't protect against these kinds of pte
> changes, hence the need for mmu_interval_read_begin/retry which allows the
> driver to hold a mutex protecting against invalidations via blocking the
> notifier until the device page tables have been updated.

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-26 Thread Peter Xu
On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to 
> > explicit
> > call split_huge_pmd_address(), afaict.  Since both of them use 
> > __split_huge_pmd()
> > internally which will generate that unwanted CLEAR notify.
> 
> Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> which will always CLEAR. However gup only calls split_huge_pmd_address() if it
> finds a thp pmd. In follow_pmd_mask() we have:
> 
>   if (likely(!pmd_trans_huge(pmdval)))
>   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> 
> So I don't think we have a problem here.

Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right?  I
mean, if it's a thp for the current mm, afaict pmd_trans_huge() should return
true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
and do the split, then the CLEAR notify.  Hmm.. Did I miss something?

-- 
Peter Xu

___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau


Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-26 Thread Peter Xu
On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to 
> > > > explicit
> > > > call split_huge_pmd_address(), afaict.  Since both of them use 
> > > > __split_huge_pmd()
> > > > internally which will generate that unwanted CLEAR notify.
> > >
> > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > which will always CLEAR. However gup only calls split_huge_pmd_address() 
> > > if it
> > > finds a thp pmd. In follow_pmd_mask() we have:
> > >
> > >   if (likely(!pmd_trans_huge(pmdval)))
> > >   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > >
> > > So I don't think we have a problem here.
> > 
> > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right? 
> >  I
> > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should 
> > return
> > true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
> > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> 
> That seems correct - if the thp is not mapped with a pmd we won't split and we
> won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that case 
> it
> is fine - we will retry, but the retry won't CLEAR because the pmd has
> already been split.

Aha!

> 
> The issue with doing it unconditionally in the make device exclusive path is
> that
> you *always* CLEAR even if there is no thp pmd to split. Or at least that's my
> understanding, please let me know if it doesn't make sense.

Exactly.  But if you see what I meant here, even if it can work like this, it
still sounds fragile, doesn't it?  I just feel something is slightly off there..

IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
performance, afaict, because if it's not a thp even without locking, then it
won't be, so further __split_huge_pmd() is not necessary.

IOW, it's very legal if someday we'd like to let split_huge_pmd() call
__split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
one to be broken with that seems-to-be-irrelevant change I'm afraid..

This lets me go back a step to think about why we need this notifier at
all to cover this whole range of make_device_exclusive() procedure..

What I am thinking is, we're afraid some CPU accesses this page so the pte got
quickly restored when device atomic operation is carrying on.  Then with this
notifier we'll be able to cancel it.  Makes perfect sense.

However do we really need to register this notifier so early?  The thing is the
GPU driver still has all the page locks, so even if there's a race to restore
the ptes, they'll block at taking the page lock until the driver releases it.

IOW, I'm wondering whether the "non-fragile" way to do this is not do
mmu_interval_notifier_insert() that early: what if we register that notifier
after make_device_exclusive_range() returns but before page_unlock() somehow?
So before page_unlock(), race is protected fully by the lock itself; after
that, it's done by mmu notifier.  Then maybe we don't need to worry about all
these notifications during marking exclusive (while we shouldn't)?

Sorry in advance if I overlooked anything as I know little on device side (even
less than mm itself).  Also sorry to know that this series got marked
to-be-update in -mm; hopefully it'll still land soon even if it still needs
some rebase to other more important bugfixes - I definitely jumped in too late
even if to mess this all up. :-)

-- 
Peter Xu



Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-26 Thread Peter Xu
On Thu, Jun 10, 2021 at 10:18:25AM +1000, Alistair Popple wrote:
> > > The main problem is split_huge_pmd_address() unconditionally calls a mmu
> > > notifier so I would need to plumb in passing an owner everywhere which 
> > > could
> > > get messy.
> > 
> > Could I ask why?  split_huge_pmd_address() will notify with CLEAR, so I'm a 
> > bit
> > confused why we need to pass over the owner.
> 
> Sure, it is the same reason we need to pass it for the exclusive notifier.
> Any invalidation during the make exclusive operation will break the mmu read
> side critical section forcing a retry of the operation. The owner field is 
> what
> is used to filter out invalidations (such as the exclusive invalidation) that
> don't need to be retried.

Do you mean the mmu_interval_read_begin|retry() calls?

Hmm, the thing is.. to me FOLL_SPLIT_PMD should have a similar effect to an
explicit call of split_huge_pmd_address(), afaict.  Since both of them use
__split_huge_pmd() internally, which will generate that unwanted CLEAR notify.

If that's the case, I think it fails because split_huge_pmd_address() will
trigger that CLEAR notify unconditionally (even if it's not a thp; not sure
whether it should be optimized to not notify at all... definitely another
story), while FOLL_SPLIT_PMD will skip the notify as it calls split_huge_pmd()
instead, which checks the pmd before calling __split_huge_pmd().

Does it also mean that if there's a real THP it won't really work?  As then
FOLL_SPLIT_PMD will start to trigger that CLEAR notify too, I think..

-- 
Peter Xu



Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-15 Thread Alistair Popple
On Wednesday, 16 June 2021 2:25:09 AM AEST Peter Xu wrote:
> On Tue, Jun 15, 2021 at 01:08:11PM +1000, Alistair Popple wrote:
> > On Saturday, 12 June 2021 1:01:42 AM AEST Peter Xu wrote:
> > > On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:

[...]

> > > Do you think we can restore pte right before wr-protect or zap?  Then all
> > > things serializes with page lock (btw: it's already an insane userspace to
> > > either unmap a page or wr-protect a page if it knows the device is using 
> > > it!).
> > > If these are the only two cases, it still sounds a cleaner approach to me 
> > > than
> > > the current approach.
> >
> > Perhaps we could but it would make {zap|change}_pte_range() much more 
> > complex as
> > we can't sleep taking the page lock whilst holding the ptl, so we'd have to
> > implement a retry scheme similar to copy_pte_range() in both those 
> > functions as
> > well.
> 
> Yes, but shouldn't be hard to do so, imho. E.g., see when __tlb_remove_page()
> returns true in zap_pte_range(), so we already did something like that.  IMHO
> it's not uncommon to have such facilities as we do have requirements to sleep
> during a spinlock critical section for a lot of places in mm, so we release
> them when needed and retake.

Agreed, it's not hard to do and it's a common enough pattern. However we decided
that for such a specific application this (trying to take the lock or drop locks
and retry) was too complex for copy_pte_range() so it seems like the same should
apply here.

Admittedly copy_pte_range() already had several other retry paths so perhaps
it was adding yet another that made it relatively more complex. Overall I have
been trying to minimise the impact on core mm code for this feature, and adding
this pattern to zap_pte_range(), etc. would make it more complex for any future
addition that requires locks to be dropped and retried so I guess in that sense
it is no different.

> > Given mmu_interval_read_begin/retry was IMHO added to solve this type of
> > problem (freezing pte's to safely program device pte's) it seems like the
> > better option rather than adding more complex code to generic mm paths.
> >
> > It's also worth noting i915 seems to use mmu_interval_read_begin/retry() 
> > with
> > gup to sync mappings so this isn't an entirely new concept. I'm not an 
> > expert
> > in that driver but I imagine changing gup to generate unconditional mmu 
> > notifier
> > invalidates would also cause issues there. So I think overall this is the
> > cleanest solution as it reduces the amount of code (particularly in generic 
> > mm
> > paths).
> 
> I could be wrong somewhere, but to me depending on mmu notifiers being
> "accurate" in general is fragile..
> 
> Take an example of change_pte_range(), which will generate PROTECTION_VMA
> notifies.  Let's imagine a userspace calls mprotect() e.g. twice or even more
> times with the same PROT_* and upon the same region, we know very possibly the
> 2nd,3rd,... calls will generate those notifies with totally no change to the
> pgtable at all as they're all done on the 1st shot.  However we'll generate 
> mmu
> notifies anyways for the 2nd,3rd,... calls.  It means mmu notifiers should
> really be tolerant of false positives as it does happen, and such things can be
> triggered even from userspace system calls very easily like this.  That's why 
> I
> think any kernel facility that depends on mmu notifiers being accurate is
> probably not the right approach..

Argh, thanks. I was focused on the specifics of this series but I think I
understand your point better now - that as a more general principle we can't
assume notifiers are accurate.

> But yeah as you said I think it's working as is with the series (I think the
> follow_pmd_mask() checking pmd_trans_huge before calling split_huge_pmd is a
> double safety-net for it, so even if the GUP split_huge_pmd got replaced with
> __split_huge_pmd it should still work with the one-retry logic), not sure
> whether it matters a lot, as it's not a common mm path; I think I'll step
> back so Andrew could still pick it up as he wishes; I'm just still not fully
> convinced it's
> the best solution to have for a long term to depend on that..

Ok, thanks. I guess you have somewhat convinced me - depending on it for the
long term might be a bit fragile. However as you say the current implementation
does work and I am starting to look at support for PMD and file backed pages
which require changes here anyway. So I am hoping Andrew can still take this
(once rebased) as it would be easier for me to do those changes if the basic
support and clean ups were already in place.

> > > This also reminded me that right now the cpu pgtable recovery is lazy - it
> > > happens either from fork() or a cpu page fault.  Even after device 
> > > finished
> > > using it, swap ptes keep there.
> > >
> > > What if the device tries to do atomic op on the same page twice?  I am 
> > > not sure
> > > whether it means we may also want 

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-14 Thread Alistair Popple
On Saturday, 12 June 2021 1:01:42 AM AEST Peter Xu wrote:
> On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> > On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar 
> > > > > > > effect to explicit
> > > > > > > call split_huge_pmd_address(), afaict.  Since both of them use 
> > > > > > > __split_huge_pmd()
> > > > > > > internally which will generate that unwanted CLEAR notify.
> > > > > >
> > > > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > > > which will always CLEAR. However gup only calls 
> > > > > > split_huge_pmd_address() if it
> > > > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > > > >
> > > > > >   if (likely(!pmd_trans_huge(pmdval)))
> > > > > >   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > > > >
> > > > > > So I don't think we have a problem here.
> > > > >
> > > > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, 
> > > > > right?  I
> > > > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() 
> > > > > should return
> > > > > true above, so we'll skip follow_page_pte(); then we'll check 
> > > > > FOLL_SPLIT_PMD
> > > > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> > > >
> > > > That seems correct - if the thp is not mapped with a pmd we won't split 
> > > > and we
> > > > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that 
> > > > case it
> > > > > > is fine - we will retry, but the retry won't CLEAR because the pmd
> > > > > > has already been split.
> > >
> > > Aha!
> > >
> > > >
> > > > > The issue with doing it unconditionally in the make device exclusive
> > > > > path is that
> > > > you *always* CLEAR even if there is no thp pmd to split. Or at least 
> > > > that's my
> > > > understanding, please let me know if it doesn't make sense.
> > >
> > > Exactly.  But if you see what I meant here, even if it can work like 
> > > this, it
> > > still sounds fragile, doesn't it?  I just feel something is slightly off 
> > > there..
> > >
> > > IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> > > performance, afaict, because if it's not a thp even without locking, then 
> > > it
> > > won't be, so further __split_huge_pmd() is not necessary.
> > >
> > > IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> > > __split_huge_pmd() directly, then AFAIU device exclusive API will be the 
> > > 1st
> > > one to be broken with that seems-to-be-irrelevant change I'm afraid..
> >
> > Well I would argue the performance of memory notifiers is becoming 
> > increasingly
> > important, and a change that causes them to be called unnecessarily is
> > therefore not very legal. Likely the correct fix here is to optimise
> > __split_huge_pmd() to only call the notifier if it's actually going to 
> > split a
> > pmd. As you said though that's a completely different story which I think 
> > would
> > be best done as a separate series.
> 
> Right, maybe I can look a bit more into that later; but my whole point was to
> express that one functionality shouldn't depend on such a trivial detail of
> implementation of other modules (thp split in this case).
> 
> >
> > > This lets me go back a step to think about why we need this notifier 
> > > at
> > > all to cover this whole range of make_device_exclusive() procedure..
> > >
> > > What I am thinking is, we're afraid some CPU accesses this page so the 
> > > pte got
> > > quickly restored when device atomic operation is carrying on.  Then with 
> > > this
> > > notifier we'll be able to cancel it.  Makes perfect sense.
> > >
> > > However do we really need to register this notifier so early?  The thing 
> > > is the
> > > GPU driver still has all the page locks, so even if there's a race to 
> > > restore
> > > the ptes, they'll block at taking the page lock until the driver releases 
> > > it.
> > >
> > > IOW, I'm wondering whether the "non-fragile" way to do this is not do
> > > mmu_interval_notifier_insert() that early: what if we register that 
> > > notifier
> > > after make_device_exclusive_range() returns but before page_unlock() 
> > > somehow?
> > > So before page_unlock(), race is protected fully by the lock itself; after
> > > that, it's done by mmu notifier.  Then maybe we don't need to worry about 
> > > all
> > > these notifications during marking exclusive (while we shouldn't)?
> >
> > The notifier is needed to protect against races with pte changes. Once a 
> > page
> > has been marked for exclusive access the driver will update its page
> > tables to allow atomic access to the page. However in the meantime the
> > page could become unmapped entirely or write protected.

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-10 Thread Alistair Popple
On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect 
> > > > > to explicit
> > > > > call split_huge_pmd_address(), afaict.  Since both of them use 
> > > > > __split_huge_pmd()
> > > > > internally which will generate that unwanted CLEAR notify.
> > > >
> > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > which will always CLEAR. However gup only calls 
> > > > split_huge_pmd_address() if it
> > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > >
> > > >   if (likely(!pmd_trans_huge(pmdval)))
> > > >   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > >
> > > > So I don't think we have a problem here.
> > >
> > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, 
> > > right?  I
> > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should 
> > > return
> > > true above, so we'll skip follow_page_pte(); then we'll check 
> > > FOLL_SPLIT_PMD
> > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> >
> > That seems correct - if the thp is not mapped with a pmd we won't split and 
> > we
> > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that 
> > case it
> > is fine - we will retry, but the retry won't CLEAR because the pmd has
> > already been split.
> 
> Aha!
> 
> >
> > The issue with doing it unconditionally in the make device exclusive path
> > is that
> > you *always* CLEAR even if there is no thp pmd to split. Or at least that's 
> > my
> > understanding, please let me know if it doesn't make sense.
> 
> Exactly.  But if you see what I meant here, even if it can work like this, it
> still sounds fragile, doesn't it?  I just feel something is slightly off there..
> 
> IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> performance, afaict, because if it's not a thp even without locking, then it
> won't be, so further __split_huge_pmd() is not necessary.
> 
> IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> __split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
> one to be broken with that seems-to-be-irrelevant change I'm afraid..

Well I would argue the performance of memory notifiers is becoming increasingly
important, and a change that causes them to be called unnecessarily is
therefore not very legal. Likely the correct fix here is to optimise
__split_huge_pmd() to only call the notifier if it's actually going to split a
pmd. As you said though that's a completely different story which I think would
be best done as a separate series.

> This lets me go back a step to think about why we need this notifier at
> all to cover this whole range of make_device_exclusive() procedure..
> 
> What I am thinking is, we're afraid some CPU accesses this page so the pte got
> quickly restored when device atomic operation is carrying on.  Then with this
> notifier we'll be able to cancel it.  Makes perfect sense.
> 
> However do we really need to register this notifier so early?  The thing is 
> the
> GPU driver still has all the page locks, so even if there's a race to restore
> the ptes, they'll block at taking the page lock until the driver releases it.
> 
> IOW, I'm wondering whether the "non-fragile" way to do this is not do
> mmu_interval_notifier_insert() that early: what if we register that notifier
> after make_device_exclusive_range() returns but before page_unlock() somehow?
> So before page_unlock(), race is protected fully by the lock itself; after
> that, it's done by mmu notifier.  Then maybe we don't need to worry about all
> these notifications during marking exclusive (while we shouldn't)?

The notifier is needed to protect against races with pte changes. Once a page
has been marked for exclusive access the driver will update its page tables to
allow atomic access to the page. However in the meantime the page could become
unmapped entirely or write protected.

As I understand things the page lock won't protect against these kinds of pte
changes, hence the need for mmu_interval_read_begin/retry which allows the
driver to hold a mutex protecting against invalidations via blocking the
notifier until the device page tables have been updated.

> Sorry in advance if I overlooked anything as I know little on device side 
> (even
> less than mm itself).  Also sorry to know that this series got marked
> to-be-update in -mm; hopefully it'll still land soon even if it still needs
> some rebase to other more important bugfixes - I definitely jumped in too late
> even if to mess this all up. :-)

I was thinking that was probably coming anyway, but I'm still hoping it will be
just a rebase on Hugh's work 

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-10 Thread Alistair Popple
On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to 
> > > explicit
> > > call split_huge_pmd_address(), afaict.  Since both of them use 
> > > __split_huge_pmd()
> > > internally which will generate that unwanted CLEAR notify.
> >
> > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > which will always CLEAR. However gup only calls split_huge_pmd_address() if 
> > it
> > finds a thp pmd. In follow_pmd_mask() we have:
> >
> >   if (likely(!pmd_trans_huge(pmdval)))
> >   return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> >
> > So I don't think we have a problem here.
> 
> Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right?  I
> mean, if it's a thp for the current mm, afaict pmd_trans_huge() should return
> true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
> and do the split, then the CLEAR notify.  Hmm.. Did I miss something?

That seems correct - if the thp is not mapped with a pmd we won't split and we
won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that case it
is fine - we will retry, but the retry won't CLEAR because the pmd has
already been split.

The issue with doing it unconditionally in make device exclusive is that
you *always* CLEAR even if there is no thp pmd to split. Or at least that's my
understanding, please let me know if it doesn't make sense.

 - Alistair

> --
> Peter Xu
> 




___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau


Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-10 Thread Alistair Popple
On Friday, 11 June 2021 4:04:35 AM AEST Peter Xu wrote:
> On Thu, Jun 10, 2021 at 10:18:25AM +1000, Alistair Popple wrote:
> > > > The main problem is split_huge_pmd_address() unconditionally calls a mmu
> > > > notifier so I would need to plumb in passing an owner everywhere which 
> > > > could
> > > > get messy.
> > >
> > > Could I ask why?  split_huge_pmd_address() will notify with CLEAR, so I'm 
> > > a bit
> > > confused why we need to pass over the owner.
> >
> > Sure, it is the same reason we need to pass it for the exclusive notifier.
> > Any invalidation during the make exclusive operation will break the mmu read
> > side critical section forcing a retry of the operation. The owner field is 
> > what
> > is used to filter out invalidations (such as the exclusive invalidation) 
> > that
> > don't need to be retried.
> 
> Do you mean the mmu_interval_read_begin|retry() calls?

Yep.

> Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to 
> explicit
> call split_huge_pmd_address(), afaict.  Since both of them use 
> __split_huge_pmd()
> internally which will generate that unwanted CLEAR notify.

Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
which will always CLEAR. However gup only calls split_huge_pmd_address() if it
finds a thp pmd. In follow_pmd_mask() we have:

if (likely(!pmd_trans_huge(pmdval)))
return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);

So I don't think we have a problem here.

> If that's the case, I think it fails because split_huge_pmd_address() will
> trigger that CLEAR notify unconditionally (even if it's not a thp; not sure
> whether it should be optimized to not notify at all... definitely another
> story), while FOLL_SPLIT_PMD will skip the notify as it calls split_huge_pmd()
> instead, who checks the pmd before calling __split_huge_pmd().
> 
> Does it also mean that if there's a real THP it won't really work?  As then
> FOLL_SPLIT_PMD will start to trigger that CLEAR notify too, I think..
> 
> --
> Peter Xu
> 






Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-09 Thread Peter Xu
On Wed, Jun 09, 2021 at 07:38:04PM +1000, Alistair Popple wrote:
> On Wednesday, 9 June 2021 4:33:52 AM AEST Peter Xu wrote:
> > On Mon, Jun 07, 2021 at 05:58:52PM +1000, Alistair Popple wrote:
> > 
> > [...]
> > 
> > > +static bool page_make_device_exclusive_one(struct page *page,
> > > + struct vm_area_struct *vma, unsigned long address, void *priv)
> > > +{
> > > + struct mm_struct *mm = vma->vm_mm;
> > > + struct page_vma_mapped_walk pvmw = {
> > > + .page = page,
> > > + .vma = vma,
> > > + .address = address,
> > > + };
> > > + struct make_exclusive_args *args = priv;
> > > + pte_t pteval;
> > > + struct page *subpage;
> > > + bool ret = true;
> > > + struct mmu_notifier_range range;
> > > + swp_entry_t entry;
> > > + pte_t swp_pte;
> > > +
> > > + mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
> > > +   vma->vm_mm, address, min(vma->vm_end,
> > > +   address + page_size(page)), args->owner);
> > > + mmu_notifier_invalidate_range_start(&range);
> > > +
> > > + while (page_vma_mapped_walk(&pvmw)) {
> > > + /* Unexpected PMD-mapped THP? */
> > > + VM_BUG_ON_PAGE(!pvmw.pte, page);
> > 
> > [1]
> > 
> > > +
> > > + if (!pte_present(*pvmw.pte)) {
> > > + ret = false;
> > > + page_vma_mapped_walk_done(&pvmw);
> > > + break;
> > > + }
> > > +
> > > + subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> > > + address = pvmw.address;
> > 
> > I raised a question here previously and didn't get an answer...
> > 
> > https://lore.kernel.org/linux-mm/YLDr%2FRyAdUR4q0kk@t490s/
> 
> Sorry, I had overlooked that. Will continue the discussion here.

No problem.  I also didn't really express clearly last time, I'm happy we can
discuss this more thoroughly, even if it may be a corner case only.

> 
> > I think I get your point now and it does look possible that the split page 
> > can
> > still be mapped somewhere else as thp, then having some subpage maintenance
> > looks necessary.  The confusing part is above [1] you've also got that
> > VM_BUG_ON_PAGE() assuming it must not be a mapped pmd at all..
> 
> Going back I thought your original question was whether subpage != page is
> possible. My main point was it's possible if we get a thp head. In that case 
> we
> need to replace all pte's with exclusive entries because I haven't (yet)
> defined a pmd version of device exclusive entries and also rmap_walk won't 
> deal
> with tail pages (see below).
> 
> > Then I remembered these code majorly come from the try_to_unmap() so I 
> > looked
> > there.  I _think_ what's missing here is something like:
> > 
> > if (flags & TTU_SPLIT_HUGE_PMD)
> > split_huge_pmd_address(vma, address, false, page);
> > 
> > at the entry of page_make_device_exclusive_one()?
> >
> > That !pte assertion in try_to_unmap() makes sense to me as long as it has 
> > split
> > the thp page first always.  However seems not the case for FOLL_SPLIT_PMD as
> > you previously mentioned.
> 
> At present this is limited to PageAnon pages which have had CoW broken, which 
> I
> think means there shouldn't be other mappings so I expect the PMD will always
> have been split into small PTEs mapping subpages by GUP which is what that
> assertion [1] is checking. I could call split_huge_pmd_address() 
> unconditionally
> as suggested but see the discussion below.

Yes, I think calling that unconditionally should be enough.

> 
> > Meanwhile, I also started to wonder whether it's even right to call 
> > rmap_walk()
> > with tail pages...  Please see below.
> > 
> > > +
> > > + /* Nuke the page table entry. */
> > > + flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> > > + pteval = ptep_clear_flush(vma, address, pvmw.pte);
> > > +
> > > + /* Move the dirty bit to the page. Now the pte is gone. */
> > > + if (pte_dirty(pteval))
> > > + set_page_dirty(page);
> > > +
> > > + /*
> > > +  * Check that our target page is still mapped at the 
> > > expected
> > > +  * address.
> > > +  */
> > > + if (args->mm == mm && args->address == address &&
> > > + pte_write(pteval))
> > > + args->valid = true;
> > > +
> > > + /*
> > > +  * Store the pfn of the page in a special migration
> > > +  * pte. do_swap_page() will wait until the migration
> > > +  * pte is removed and then restart fault handling.
> > > +  */
> > > + if (pte_write(pteval))
> > > + entry = make_writable_device_exclusive_entry(
> > > + 
> > > 

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-09 Thread Alistair Popple
On Thursday, 10 June 2021 2:05:06 AM AEST Peter Xu wrote:
> On Wed, Jun 09, 2021 at 07:38:04PM +1000, Alistair Popple wrote:
> > On Wednesday, 9 June 2021 4:33:52 AM AEST Peter Xu wrote:
> > > On Mon, Jun 07, 2021 at 05:58:52PM +1000, Alistair Popple wrote:

[...]

> > For thp this means we could end up passing
> > tail pages to rmap_walk(), however it doesn't actually walk them.
> >
> > Based on the results of previous testing I had done I assumed rmap_walk()
> > filtered out tail pages. It does, and I didn't hit the BUG_ON above, but the
> > filtering was not as deliberate as assumed.
> >
> > I've gone back and looked at what was happening in my earlier tests and the
> > tail pages get filtered because the VMA is not getting locked in
> > page_lock_anon_vma_read() due to failing this check:
> >
> >   anon_mapping = (unsigned long)READ_ONCE(page->mapping);
> >   if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> >   goto out;
> >
> > And now I'm not sure it makes sense to read page->mapping of a tail page. So
> > it might be best if we explicitly ignore any tail pages returned from GUP, 
> > at
> > least for now (a future series will improve thp support such as adding a pmd
> > version for exclusive entries).
> 
> I feel like it's illegal to access page->mapping of tail pages; I looked at
> what happens if we call page_anon_vma() on a tail page:
> 
> struct anon_vma *page_anon_vma(struct page *page)
> {
> unsigned long mapping;
> 
> page = compound_head(page);
> mapping = (unsigned long)page->mapping;
> if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> return NULL;
> return __page_rmapping(page);
> }
> 
> It'll just take the head's mapping instead.  It makes sense since the tail 
> page
> shouldn't have a different value against the head page, afaiu.

Right, it makes no sense to look at ->mapping on a tail page because the field
is used for something else. On the 1st tail page it is ->compound_nr and on the
2nd tail page it is ->deferred_list. See the definitions of compound_nr() and
page_deferred_list() respectively. I suppose on the rest of the pages it could
be anything.

I think in practice it is probably ok - iiuc bit 0 won't be set for compound_nr
and certainly not for deferred_list->next (a pointer). But none of that seems
intentional, so it would be better to be explicit and not walk the tail pages.

> It would be great if thp experts could chime in.  Before that happens, I agree
> with you that a safer approach is to explicitly not walk a tail page for its
> rmap (and I think the rmap of a tail page will be the same of the head
> anyways.. since they seem to share the anon_vma as quoted).
> >
> > > So... for thp mappings, wondering whether we should do normal GUP (without
> > > SPLIT), pass in always normal or head pages into rmap_walk(), but then
> > > unconditionally split_huge_pmd_address() in 
> > > page_make_device_exclusive_one()?
> >
> > That could work (although I think GUP will still return tail pages - see
> > follow_trans_huge_pmd() which is called from follow_pmd_mask() in gup).
> 
> Agreed.
> 
> > The main problem is split_huge_pmd_address() unconditionally calls a mmu
> > notifier so I would need to plumb in passing an owner everywhere which could
> > get messy.
> 
> Could I ask why?  split_huge_pmd_address() will notify with CLEAR, so I'm a 
> bit
> confused why we need to pass over the owner.

Sure, it is the same reason we need to pass it for the exclusive notifier.
Any invalidation during the make exclusive operation will break the mmu read
side critical section forcing a retry of the operation. The owner field is what
is used to filter out invalidations (such as the exclusive invalidation) that
don't need to be retried.
 
> I thought plumbing it in right before your EXCLUSIVE notifier init would work?

I did try this just to double check and it doesn't work due to the unconditional
notifier.

> ---8<---
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a94d9aed9d95..360ce86f3822 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2042,6 +2042,12 @@ static bool page_make_device_exclusive_one(struct page *page,
> swp_entry_t entry;
> pte_t swp_pte;
> 
> +   /*
> +* Make sure thps split as device exclusive entries only support pte
> +* level for now.
> +*/
> +   split_huge_pmd_address(vma, address, false, page);
> +
> mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
>   vma->vm_mm, address, min(vma->vm_end,
>   address + page_size(page)), args->owner);
> ---8<---
> 
> Thanks,
> 
> --
> Peter Xu
> 






Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-09 Thread Alistair Popple
On Wednesday, 9 June 2021 4:33:52 AM AEST Peter Xu wrote:
> On Mon, Jun 07, 2021 at 05:58:52PM +1000, Alistair Popple wrote:
> 
> [...]
> 
> > +static bool page_make_device_exclusive_one(struct page *page,
> > + struct vm_area_struct *vma, unsigned long address, void *priv)
> > +{
> > + struct mm_struct *mm = vma->vm_mm;
> > + struct page_vma_mapped_walk pvmw = {
> > + .page = page,
> > + .vma = vma,
> > + .address = address,
> > + };
> > + struct make_exclusive_args *args = priv;
> > + pte_t pteval;
> > + struct page *subpage;
> > + bool ret = true;
> > + struct mmu_notifier_range range;
> > + swp_entry_t entry;
> > + pte_t swp_pte;
> > +
> > + mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
> > +   vma->vm_mm, address, min(vma->vm_end,
> > +   address + page_size(page)), args->owner);
> > + mmu_notifier_invalidate_range_start(&range);
> > +
> > + while (page_vma_mapped_walk(&pvmw)) {
> > + /* Unexpected PMD-mapped THP? */
> > + VM_BUG_ON_PAGE(!pvmw.pte, page);
> 
> [1]
> 
> > +
> > + if (!pte_present(*pvmw.pte)) {
> > + ret = false;
> > + page_vma_mapped_walk_done(&pvmw);
> > + break;
> > + }
> > +
> > + subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> > + address = pvmw.address;
> 
> I raised a question here previously and didn't get an answer...
> 
> https://lore.kernel.org/linux-mm/YLDr%2FRyAdUR4q0kk@t490s/

Sorry, I had overlooked that. Will continue the discussion here.

> I think I get your point now and it does look possible that the split page can
still be mapped somewhere else as thp, then having some subpage maintenance
> looks necessary.  The confusing part is above [1] you've also got that
> VM_BUG_ON_PAGE() assuming it must not be a mapped pmd at all..

Going back I thought your original question was whether subpage != page is
possible. My main point was it's possible if we get a thp head. In that case we
need to replace all pte's with exclusive entries because I haven't (yet)
defined a pmd version of device exclusive entries and also rmap_walk won't deal
with tail pages (see below).

> Then I remembered these code majorly come from the try_to_unmap() so I looked
> there.  I _think_ what's missing here is something like:
> 
> if (flags & TTU_SPLIT_HUGE_PMD)
> split_huge_pmd_address(vma, address, false, page);
> 
> at the entry of page_make_device_exclusive_one()?
>
> That !pte assertion in try_to_unmap() makes sense to me as long as it has 
> split
> the thp page first always.  However seems not the case for FOLL_SPLIT_PMD as
> you previously mentioned.

At present this is limited to PageAnon pages which have had CoW broken, which I
think means there shouldn't be other mappings so I expect the PMD will always
have been split into small PTEs mapping subpages by GUP which is what that
assertion [1] is checking. I could call split_huge_pmd_address() unconditionally
as suggested but see the discussion below.

> Meanwhile, I also started to wonder whether it's even right to call 
> rmap_walk()
> with tail pages...  Please see below.
> 
> > +
> > + /* Nuke the page table entry. */
> > + flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> > + pteval = ptep_clear_flush(vma, address, pvmw.pte);
> > +
> > + /* Move the dirty bit to the page. Now the pte is gone. */
> > + if (pte_dirty(pteval))
> > + set_page_dirty(page);
> > +
> > + /*
> > +  * Check that our target page is still mapped at the expected
> > +  * address.
> > +  */
> > + if (args->mm == mm && args->address == address &&
> > + pte_write(pteval))
> > + args->valid = true;
> > +
> > + /*
> > +  * Store the pfn of the page in a special migration
> > +  * pte. do_swap_page() will wait until the migration
> > +  * pte is removed and then restart fault handling.
> > +  */
> > + if (pte_write(pteval))
> > + entry = make_writable_device_exclusive_entry(
> > + page_to_pfn(subpage));
> > + else
> > + entry = make_readable_device_exclusive_entry(
> > + page_to_pfn(subpage));
> > + swp_pte = swp_entry_to_pte(entry);
> > + if (pte_soft_dirty(pteval))
> > + swp_pte = pte_swp_mksoft_dirty(swp_pte);
> > + if (pte_uffd_wp(pteval))
> > + swp_pte = pte_swp_mkuffd_wp(swp_pte);
> > +
> > + set_pte_at(mm, address, pvmw.pte, swp_pte);
> > +
> > 

Re: [Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-08 Thread Peter Xu
On Mon, Jun 07, 2021 at 05:58:52PM +1000, Alistair Popple wrote:

[...]

> +static bool page_make_device_exclusive_one(struct page *page,
> + struct vm_area_struct *vma, unsigned long address, void *priv)
> +{
> + struct mm_struct *mm = vma->vm_mm;
> + struct page_vma_mapped_walk pvmw = {
> + .page = page,
> + .vma = vma,
> + .address = address,
> + };
> + struct make_exclusive_args *args = priv;
> + pte_t pteval;
> + struct page *subpage;
> + bool ret = true;
> + struct mmu_notifier_range range;
> + swp_entry_t entry;
> + pte_t swp_pte;
> +
> + mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
> +   vma->vm_mm, address, min(vma->vm_end,
> +   address + page_size(page)), args->owner);
> + mmu_notifier_invalidate_range_start(&range);
> +
> + while (page_vma_mapped_walk(&pvmw)) {
> + /* Unexpected PMD-mapped THP? */
> + VM_BUG_ON_PAGE(!pvmw.pte, page);

[1]

> +
> + if (!pte_present(*pvmw.pte)) {
> + ret = false;
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
> +
> + subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> + address = pvmw.address;

I raised a question here previously and didn't get an answer...

https://lore.kernel.org/linux-mm/YLDr%2FRyAdUR4q0kk@t490s/

I think I get your point now and it does look possible that the split page can
still be mapped somewhere else as thp, then having some subpage maintenance
looks necessary.  The confusing part is above [1] you've also got that
VM_BUG_ON_PAGE() assuming it must not be a mapped pmd at all..

Then I remembered these code majorly come from the try_to_unmap() so I looked
there.  I _think_ what's missing here is something like:

if (flags & TTU_SPLIT_HUGE_PMD)
split_huge_pmd_address(vma, address, false, page);

at the entry of page_make_device_exclusive_one()?

That !pte assertion in try_to_unmap() makes sense to me as long as it has split
the thp page first always.  However seems not the case for FOLL_SPLIT_PMD as
you previously mentioned.

Meanwhile, I also started to wonder whether it's even right to call rmap_walk()
with tail pages...  Please see below.

> +
> + /* Nuke the page table entry. */
> + flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> + pteval = ptep_clear_flush(vma, address, pvmw.pte);
> +
> + /* Move the dirty bit to the page. Now the pte is gone. */
> + if (pte_dirty(pteval))
> + set_page_dirty(page);
> +
> + /*
> +  * Check that our target page is still mapped at the expected
> +  * address.
> +  */
> + if (args->mm == mm && args->address == address &&
> + pte_write(pteval))
> + args->valid = true;
> +
> + /*
> +  * Store the pfn of the page in a special migration
> +  * pte. do_swap_page() will wait until the migration
> +  * pte is removed and then restart fault handling.
> +  */
> + if (pte_write(pteval))
> + entry = make_writable_device_exclusive_entry(
> + page_to_pfn(subpage));
> + else
> + entry = make_readable_device_exclusive_entry(
> + page_to_pfn(subpage));
> + swp_pte = swp_entry_to_pte(entry);
> + if (pte_soft_dirty(pteval))
> + swp_pte = pte_swp_mksoft_dirty(swp_pte);
> + if (pte_uffd_wp(pteval))
> + swp_pte = pte_swp_mkuffd_wp(swp_pte);
> +
> + set_pte_at(mm, address, pvmw.pte, swp_pte);
> +
> + /*
> +  * There is a reference on the page for the swap entry which has
> +  * been removed, so shouldn't take another.
> +  */
> + page_remove_rmap(subpage, false);
> + }
> +
> + mmu_notifier_invalidate_range_end();
> +
> + return ret;
> +}
> +
> +/**
> + * page_make_device_exclusive - mark the page exclusively owned by a device
> + * @page: the page to replace page table entries for
> + * @mm: the mm_struct where the page is expected to be mapped
> + * @address: address where the page is expected to be mapped
> + * @owner: passed to MMU_NOTIFY_EXCLUSIVE range notifier callbacks
> + *
> + * Tries to remove all the page table entries which are mapping this page and
> + * replace them with special device exclusive swap entries to grant a device
> + * exclusive access to the page. Caller must hold the page lock.
> + *
> + * Returns false if the page is still mapped, or if it could not be unmapped
> + * from the expected address. Otherwise 

[Nouveau] [PATCH v10 07/10] mm: Device exclusive memory access

2021-06-07 Thread Alistair Popple
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.

In order to do this introduce a new swap entry
type (SWP_DEVICE_EXCLUSIVE). When a SVM range needs to be marked for
exclusive access by a device all page table mappings for the particular
range are replaced with device exclusive swap entries. This causes any
CPU access to the page to result in a fault.

Faults are resolved by replacing the faulting entry with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.

Walking of the page tables to find the target pages is handled by
get_user_pages() rather than a direct page table walk. A direct page
table walk similar to what migrate_vma_collect()/unmap() does could also
have been utilised. However this resulted in more code similar in
functionality to what get_user_pages() provides as page faulting is
required to make the PTEs present and to break COW.

Signed-off-by: Alistair Popple 
Reviewed-by: Christoph Hellwig 

---

v10:
* Make device exclusive code conditional on CONFIG_DEVICE_PRIVATE.
* Updates to code comments and more minor code cleanups.

v9:
* Split rename of migrate_pgmap_owner into a separate patch.
* Added comments explaining SWP_DEVICE_EXCLUSIVE_* entries.
* Renamed try_to_protect{_one} to page_make_device_exclusive{_one} based
  somewhat on a suggestion from Peter Xu. I was never particularly happy
  with try_to_protect() as a name so think this is better.
* Removed unnecessary code and reworded some comments based on feedback
  from Peter Xu.
* Removed the VMA walk when restoring PTEs for device-exclusive entries.
* Simplified implementation of copy_pte_range() to fail if the page
  cannot be locked. This might lead to occasional fork() failures but at
  this stage we don't think that will be an issue.

v8:
* Remove device exclusive entries on fork rather than copy them.

v7:
* Added Christoph's Reviewed-by.
* Minor cosmetic cleanups suggested by Christoph.
* Replace mmu_notifier_range_init_migrate/exclusive with
  mmu_notifier_range_init_owner as suggested by Christoph.
* Replaced lock_page() with lock_page_retry() when handling faults.
* Restrict to anonymous pages for now.

v6:
* Fixed a bisectability issue due to incorrectly applying the rename of
  migrate_pgmap_owner to the wrong patches for Nouveau and hmm_test.

v5:
* Renamed range->migrate_pgmap_owner to range->owner.
* Added MMU_NOTIFY_EXCLUSIVE to allow passing of a driver cookie which
  allows notifiers called as a result of make_device_exclusive_range() to
  be ignored.
* Added a check to try_to_protect_one() to detect if the pages originally
  returned from get_user_pages() have been unmapped or not.
* Removed check_device_exclusive_range() as it is no longer required with
  the other changes.
* Documentation update.

v4:
* Add function to check that mappings are still valid and exclusive.
* s/long/unsigned long/ in make_device_exclusive_entry().
---
 Documentation/vm/hmm.rst |  17 
 include/linux/mmu_notifier.h |   6 ++
 include/linux/rmap.h |   4 +
 include/linux/swap.h |   9 +-
 include/linux/swapops.h  |  44 -
 mm/hmm.c |   5 +
 mm/memory.c  | 127 +++-
 mm/mprotect.c|   8 ++
 mm/page_vma_mapped.c |   9 +-
 mm/rmap.c| 182 +++
 10 files changed, 401 insertions(+), 10 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 3df79307a797..a14c2938e7af 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -405,6 +405,23 @@ between device driver specific code and shared common code:
 
The lock can now be released.
 
+Exclusive access memory
+===
+
+Some devices have features such as atomic PTE bits that can be used to implement
+atomic access to system memory. To support atomic operations to a shared virtual
+memory page such a device needs access to that page which is exclusive of any
+userspace access from the CPU. The ``make_device_exclusive_range()`` function
+can be used to make a memory range inaccessible from userspace.
+
+This replaces all mappings for pages in the given range with special swap
+entries. Any attempt to access the swap entry results in a fault which is
+resolved by replacing the entry with the original mapping. A driver gets
+notified that the mapping has been changed by MMU notifiers, after which point
+it will no longer have exclusive access to the page. Exclusive access is
+guaranteed to last until the driver drops the page lock and page reference, at
+which point any CPU faults on the page may proceed as