"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Alistair Popple writes:
>>&g
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Alistair Popple writes:
>>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> Huang Ying writes:
>>>>
>>>>> Previously, a fixed abstract distance MEMTIER_
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Hi, Alistair,
>>>>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Hi, Alistair,
>>>
>>> Sorry for the late response. I just got back from vacation.
>>
>> Ditto for this response :-)
>>
>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> Previously, a fixed abstract distance MEMTIER_DEFAULT_DAX_ADISTANCE is
>>> used for slow memory type in kmem driver. This limits the usage of
>>> km
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> A memory tiering abstract distance calculation algorithm based on ACPI
>>> HMAT is implemented. The basic idea is as follows.
>>>
>>> The
"Huang, Ying" writes:
> Hi, Alistair,
>
> Sorry for the late response. I just got back from vacation.
Ditto for this response :-)
I see Andrew has taken this into mm-unstable though, so my bad for not
getting around to following all this up sooner.
> Alistair Popple wri
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>>>>> While other memory device drivers can use the general notifier chain
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>>>> And, I don't think that we are forced to use the general notifier
>>>>> chain interface in all memory device drivers. If the memory devic
"Huang, Ying" writes:
>>> The other way (suggested by this series) is to make dax/kmem call a
>>> notifier chain, then CXL CDAT or ACPI HMAT can identify the type of
>>> device and calculate the distance if the type is correct for them. I
>>> don't think that it's good to make dax/kmem know
"Huang, Ying" writes:
> Hi, Alistair,
>
> Thanks a lot for comments!
>
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> The abstract distance may be calculated by various drivers, such as
>>> ACPI HMAT, CXL CDAT, etc. Whi
put into the "kmem_memory_types" list and protected by
> kmem_memory_type_lock.
See below but I wonder if kmem_memory_types could be a common helper
rather than kdax specific?
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Pop
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Popple
> Cc: Dan Williams
> Cc: Dave Hansen
> Cc: Davidlohr Bueso
> Cc: Johannes Weiner
> Cc: Jonathan Cameron
> Cc: Michal Hocko
> Cc: Yang Shi
refactor looks good and I have run the whole series on a system with
some hmat data so:
Reviewed-by: Alistair Popple
Tested-by: Alistair Popple
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Popple
> Cc: Dan Williams
> Cc: Dave
algorithm implementations can be specified via
> priority (notifier_block.priority).
How/what decides the priority though? That seems like something better
decided by a device driver than the algorithm driver IMHO.
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei
Thanks for this Huang, I had been hoping to take a look at it this week
but have run out of time. I'm keen to do some testing with it as well.
Hopefully next week...
Huang Ying writes:
> We have the explicit memory tiers framework to manage systems with
> multiple types of memory, e.g., DRAM
On Friday, 16 April 2021 2:19:18 PM AEST Dan Williams wrote:
> The revoke_iomem() change seems like something that should be moved
> into a leaf helper and not called by __request_free_mem_region()
> directly.
Ok. I have split this up but left the call to revoke_iomem() in
Refactor the portion of __request_region() done whilst holding the
resource_lock into a separate function to allow callers to hold the
lock.
Signed-off-by: Alistair Popple
---
kernel/resource.c | 52 +--
1 file changed, 32 insertions(+), 20 deletions
arn("Unaddressable device %s %pR conflicts with %pR",
conflict->name, conflict, res);
These unexpected failures can be corrected by holding resource_lock across
the two calls. This also requires memory allocation to be performed prior
to taking the lock.
Signed-
Introduce a version of region_intersects() that can be called with the
resource_lock already held. This is used in a future fix to
__request_free_mem_region().
Signed-off-by: Alistair Popple
---
kernel/resource.c | 52 ---
1 file changed, 31
resource code so cannot be called with the resource lock held.
Therefore call it only after dropping the lock.
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
Acked-by: Balbir Singh
Reported-by: kernel test robot
---
Changes fo
rather than
overload try_to_unmap_one() with unrelated behaviour split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v8:
* Renamed try_to_munlock to page_mlock to better reflect what the
function
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 +++
lib/test_hmm_uapi.h| 2 +
tools/testing
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
which checks the results of atomic GPU operations on a
SVM buffer whilst also writing to the same buffer from the CPU.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry manipulation code
mm/rmap: Split try_to_munlock from try_to_unmap
mm/rmap: Split migration
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
to proceed.
Signed-off-by: Alistair Popple
---
v7:
* Removed magic values for fault access levels
* Improved readability of fault comparison code
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1
pfn_swap_entry_to_page(). Also open-code the various entry_to_pfn()
functions as this results in shorter code that is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Reworded commit message to include pfn_swap_entry_to_page
with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
On Thursday, 1 April 2021 3:56:05 PM AEDT Muchun Song wrote:
> External email: Use caution opening links or attachments
>
>
> On Fri, Mar 26, 2021 at 9:22 AM Alistair Popple wrote:
> >
> > request_free_mem_region() is used to find an empty range of physical
>
On Wednesday, 31 March 2021 10:57:46 PM AEDT Jason Gunthorpe wrote:
> On Wed, Mar 31, 2021 at 03:15:47PM +1100, Alistair Popple wrote:
> > On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> > > On 3/30/21 3:56 PM, Alistair Popple wrote:
> > > ...
> &
On Thursday, 1 April 2021 11:48:13 AM AEDT Jason Gunthorpe wrote:
> On Thu, Apr 01, 2021 at 11:45:57AM +1100, Alistair Popple wrote:
> > On Thursday, 1 April 2021 12:46:04 AM AEDT Jason Gunthorpe wrote:
> > > On Thu, Apr 01, 2021 at 12:27:52AM +1100, Alistair Popple wrote:
>
On Thursday, 1 April 2021 12:46:04 AM AEDT Jason Gunthorpe wrote:
> On Thu, Apr 01, 2021 at 12:27:52AM +1100, Alistair Popple wrote:
> > On Thursday, 1 April 2021 12:18:54 AM AEDT Jason Gunthorpe wrote:
> > > On Wed, Mar 31, 2021 at 11:59:28PM +1100, Alistair Popple wrote:
>
On Thursday, 1 April 2021 12:18:54 AM AEDT Jason Gunthorpe wrote:
> On Wed, Mar 31, 2021 at 11:59:28PM +1100, Alistair Popple wrote:
>
> > I guess that makes sense as the split could go either way at the
> > moment but I should add a check to make sure this isn't used with
>
On Wednesday, 31 March 2021 6:32:34 AM AEDT Jason Gunthorpe wrote:
> On Fri, Mar 26, 2021 at 11:08:02AM +1100, Alistair Popple wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 3a5705cfc891..33d11527ef77 100644
> > +++ b/mm/memory.c
> > @@ -781,6 +781,27 @@
On Tuesday, 30 March 2021 8:13:32 PM AEDT David Hildenbrand wrote:
> External email: Use caution opening links or attachments
>
>
> On 29.03.21 03:37, Alistair Popple wrote:
> > On Friday, 26 March 2021 7:57:51 PM AEDT David Hildenbrand wrote:
> >> On 26.03.21 0
On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> On 3/30/21 3:56 PM, Alistair Popple wrote:
> ...
> >> +1 for renaming "munlock*" items to "mlock*", where applicable. good
grief.
> >
> > At least the situation was weird enough to pr
On Wednesday, 31 March 2021 9:43:19 AM AEDT John Hubbard wrote:
> On 3/30/21 3:24 PM, Jason Gunthorpe wrote:
> ...
> >> As far as I can tell this has always been called try_to_munlock() even though
> >> it appears to do the opposite.
> >
> > Maybe we should change it then?
> >
> >>> /**
> >>>
On Wednesday, 31 March 2021 9:09:30 AM AEDT Alistair Popple wrote:
> On Wednesday, 31 March 2021 5:49:03 AM AEDT Jason Gunthorpe wrote:
> > On Fri, Mar 26, 2021 at 11:08:00AM +1100, Alistair Popple wrote:
> > So what clears PG_mlocked on this call path?
>
> See munloc
On Wednesday, 31 March 2021 5:49:03 AM AEDT Jason Gunthorpe wrote:
> On Fri, Mar 26, 2021 at 11:08:00AM +1100, Alistair Popple wrote:
>
> > +static bool try_to_munlock_one(struct page *page, struct vm_area_struct *vma,
> > +unsigned long
On Tuesday, 30 March 2021 2:42:34 PM AEDT John Hubbard wrote:
> On 3/29/21 5:38 PM, Alistair Popple wrote:
> > request_free_mem_region() is used to find an empty range of physical
> > addresses for hotplugging ZONE_DEVICE memory. It does this by iterating
> > over the range
ee_mem_region variant")
Fixes: 0092908d16c60 ("mm: factor out a devm_request_free_mem_region helper")
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
Acked-by: Balbir Singh
Reported-by: kernel test robot
---
https://github.com/0day-ci/linux/commits/Alistair-Popple/kernel-resource-Fix-locking-in-request_free_mem_region/20210326-092150
> base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
a74e6a014c9d4d4161061f770c9b4f98372ac778
>
> in testcase: boot
>
> on test machine: qemu
On Friday, 26 March 2021 4:15:36 PM AEDT Balbir Singh wrote:
> On Fri, Mar 26, 2021 at 12:20:35PM +1100, Alistair Popple wrote:
> > +static int __region_intersects(resource_size_t start, size_t size,
> > +unsigned long flags, unsigned long desc)
> >
On Friday, 26 March 2021 7:57:51 PM AEDT David Hildenbrand wrote:
> On 26.03.21 02:20, Alistair Popple wrote:
> > request_free_mem_region() is used to find an empty range of physical
> > addresses for hotplugging ZONE_DEVICE memory. It does this by iterating
> > over
ee_mem_region variant")
Fixes: 0092908d16c60 ("mm: factor out a devm_request_free_mem_region helper")
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
---
v2:
- Added Fixes tag
---
kernel/resource.c | 146
may be held over the required calls.
Instead of creating another version of devm_request_mem_region() that
doesn't take the lock open-code it to allow the caller to pre-allocate
the required memory prior to taking the lock.
Signed-off-by: Alistair Popple
---
kernel/
to proceed.
Signed-off-by: Alistair Popple
---
v7:
* Removed magic values for fault access levels
* Improved readability of fault comparison code
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 +++
lib/test_hmm_uapi.h| 2 +
tools/testing
with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
pfn_swap_entry_to_page(). Also open-code the various entry_to_pfn()
functions as this results in shorter code that is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Reworded commit message to include pfn_swap_entry_to_page
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
rather than
overload try_to_unmap_one() with unrelated behaviour split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Added Christoph's Reviewed-by
v4:
* Removed redundant check for VM_LOCKED
upstream Mesa userspace with a simple
OpenCL test program which checks the results of atomic GPU operations on a
SVM buffer whilst also writing to the same buffer from the CPU.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry manipulation code
mm/rma
On Tuesday, 23 March 2021 9:26:43 PM AEDT David Hildenbrand wrote:
> On 20.03.21 10:36, Miaohe Lin wrote:
> > If the zone device page does not belong to un-addressable device memory,
> > the variable entry will be uninitialized and lead to indeterminate pte
> > entry ultimately. Fix this
On Monday, 15 March 2021 6:42:45 PM AEDT Christoph Hellwig wrote:
> > +Not all devices support atomic access to system memory. To support atomic
> > +operations to a shared virtual memory page such a device needs access to that
> > +page which is exclusive of any userspace access from the CPU.
On Monday, 15 March 2021 6:51:13 PM AEDT Christoph Hellwig wrote:
> > - /*XXX: atomic? */
> > - return (fa->access == 0 || fa->access == 3) -
> > - (fb->access == 0 || fb->access == 3);
> > + /* Atomic access (2) has highest priority */
> > + return (-1*(fa->access == 2) +
On Monday, 15 March 2021 6:27:57 PM AEDT Christoph Hellwig wrote:
> On Fri, Mar 12, 2021 at 07:38:44PM +1100, Alistair Popple wrote:
> > Remove the migration and device private entry_to_page() and
> > entry_to_pfn() inline functions and instead open code them directly.
> > Th
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 ++
lib/test_hmm_uapi.h| 2 +
tools/testing
to proceed.
Signed-off-by: Alistair Popple
---
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 100 --
drivers/gpu/drm/nouveau/nvkm
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
---
v6:
* Fixed a bisectability
rather than
overload try_to_unmap_one() with unrelated behaviour split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
---
Christoph - I didn't add your Reviewed-by from v3 because removal of the
extra VM_LOCKED check in v4 changed
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
been tested using the latest upstream Mesa userspace with a simple
OpenCL test program which checks the results of atomic GPU operations on a
SVM buffer whilst also writing to the same buffer from the CPU.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry mani
Remove the migration and device private entry_to_page() and
entry_to_pfn() inline functions and instead open code them directly.
This results in shorter code which is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
---
v6:
* Removed redundant compound_page
On Tuesday, 9 March 2021 11:49:49 PM AEDT Matthew Wilcox wrote:
> On Tue, Mar 09, 2021 at 11:14:58PM +1100, Alistair Popple wrote:
> > -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> > -{
> > - struct page *p = pfn_to_page(swp_offset(entry));
>
to proceed.
Signed-off-by: Alistair Popple
---
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 102 --
drivers/gpu/drm/nouveau/nvkm
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 126 +-
lib/test_hmm_uapi.h| 2 +
tools/testing
rather than
overload try_to_unmap_one() with unrelated behaviour split this out into
it's own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
---
Christoph - I didn't add your Reviewed-by from v3 because removal of the
extra VM_LOCKED check in v4 changed
with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
---
v5:
* Renamed range
special swap entries instead of device
private pages.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry manipulation code
mm/rmap: Split try_to_munlock from try_to_unmap
mm/rmap: Split migration into its own function
mm: Device exclusive memory access
Remove the migration and device private entry_to_page() and
entry_to_pfn() inline functions and instead open code them directly.
This results in shorter code which is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
---
v4:
* Added pfn_swap_entry_to_page
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
On Tuesday, 9 March 2021 6:44:41 AM AEDT Ralph Campbell wrote:
>
> On 3/3/21 10:16 PM, Alistair Popple wrote:
> > Some devices require exclusive write access to shared virtual
> > memory (SVM) ranges to perform atomic operations on that memory. This
> > requires CPU p
On Tuesday, 9 March 2021 5:58:12 AM AEDT Ralph Campbell wrote:
>
> On 3/3/21 10:16 PM, Alistair Popple wrote:
> > Migration is currently implemented as a mode of operation for
> > try_to_unmap_one() generally specified by passing the TTU_MIGRATION flag
> > or in the
On Wednesday, 3 March 2021 9:08:15 AM AEDT Zi Yan wrote:
> On 26 Feb 2021, at 2:18, Alistair Popple wrote:
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index 7f1ee411bd7b..77fa17de51d7 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linu
with the original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
---
v4:
* Add function to check
to proceed.
Signed-off-by: Alistair Popple
---
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 107 --
drivers/gpu/drm/nouveau/nvkm
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 ++
lib/test_hmm_uapi.h| 2 +
tools/testing
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm
rather than
overload try_to_unmap_one() with unrelated behaviour split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
---
Christoph - I didn't add your Reviewed-by because removal of the extra
VM_LOCKED check changed things slightly. Let me know if you're still ok
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
include/linux/rmap.h | 4 +-
mm/huge_memory.c | 10 +-
mm/migrate.c | 9 +-
mm/rmap.c| 352 +++
4 files changed
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26 +--
mm/hugetlb.c
g device page tables.
v3:
* Refactored some existing functionality.
* Switched to using get_user_pages_remote() instead of open-coding it.
* Moved code out of hmm.
v2:
* Changed implementation to use special swap entries instead of device
private pages.
Alistair Popple (8):
mm: Remove special sw
Remove the migration and device private entry_to_page() and
entry_to_pfn() inline functions and instead open code them directly.
This results in shorter code which is easier to understand.
Signed-off-by: Alistair Popple
---
v4:
* Added pfn_swap_entry_to_page()
* Reinstated check that migration
On Tuesday, 2 March 2021 11:41:52 PM AEDT Jason Gunthorpe wrote:
> > However try_to_protect() scans the PTEs again under the PTL so checking that
> > the mapping of interest actually gets replaced during the rmap walk seems
> > like a reasonable solution. Thanks for the comments.
>
> It does seem
On Tuesday, 2 March 2021 3:10:49 AM AEDT Jason Gunthorpe wrote:
> > + while (page_vma_mapped_walk()) {
> > + /*
> > +* If the page is mlock()d, we cannot swap it out.
> > +* If it's recently referenced (perhaps page_referenced
> > +
On Tuesday, 2 March 2021 7:52:53 PM AEDT Alistair Popple wrote:
> On Saturday, 27 February 2021 2:59:09 AM AEDT Christoph Hellwig wrote:
> > > - struct page *page = migration_entry_to_page(entry);
> > > + struct page *page = pfn_to_page(swp_offset(entry
On Tuesday, 2 March 2021 10:14:56 AM AEDT Ralph Campbell wrote:
> > From: Alistair Popple
> > Sent: Thursday, February 25, 2021 11:19 PM
> > To: linux...@kvack.org; nouv...@lists.freedesktop.org;
> > bske...@redhat.com; a...@linux-foundation.org
> > Cc: linux-...
On Tuesday, 2 March 2021 11:05:59 AM AEDT Jason Gunthorpe wrote:
> On Fri, Feb 26, 2021 at 06:18:29PM +1100, Alistair Popple wrote:
>
> > +/**
> > + * make_device_exclusive_range() - Mark a range for exclusive use by a device
> > + * @mm: mm_struct of associated targe
On Saturday, 27 February 2021 2:59:09 AM AEDT Christoph Hellwig wrote:
> > - struct page *page = migration_entry_to_page(entry);
> > + struct page *page = pfn_to_page(swp_offset(entry));
>
> I wonder if keeping a single special_entry_to_page() helper would still
> be a useful.
On Tuesday, 2 March 2021 4:46:42 AM AEDT Jason Gunthorpe wrote:
>
> I wish you could come up with a more descriptive word that special
> here
>
> What I understand is this is true when the swap_offset is a pfn?
Correct, and that points to a better name. Maybe is_pfn_swap_entry()? In which
case