Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 15 ---
drivers/gpu/drm
device private pages owned by the caller of migrate_vma_setup().
Rename the src_owner field to pgmap_owner to reflect that it is now used
only to identify which device private pages to migrate.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 4 +++-
drivers/gpu/drm/nouveau
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 30 +++---
tools/testing
rcise the HMM test driver
invalidation changes.
Removed reviewed-by from Bharata B Rao since this version is moderately
changed.
Changes in v2:
Rebase to Jason Gunthorpe's HMM tree.
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggeste
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 3 +++
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c
On 7/20/20 4:16 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 01:49:09PM -0700, Ralph Campbell wrote:
On 7/20/20 12:59 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 12:54:53PM -0700, Ralph Campbell wrote:
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index
On 7/20/20 12:59 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 12:54:53PM -0700, Ralph Campbell wrote:
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e546cbf03dd..620f2235d7d4 100644
+++ b/include/linux/migrate.h
@@ -180,6 +180,11 @@ static inline unsigned long
On 7/20/20 11:41 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:44AM -0700, Ralph Campbell wrote:
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have
On 7/20/20 11:40 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:47AM -0700, Ralph Campbell wrote:
Currently migrate_vma_setup() calls mmu_notifier_invalidate_range_start()
which flushes all device private page mappings whether or not a page
is being migrated to/from device private
On 7/20/20 11:36 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:46AM -0700, Ralph Campbell wrote:
The src_owner field in struct migrate_vma is being used for two purposes:
it implies the direction of the migration and it identifies device private
pages owned by the caller. Split
On 7/13/20 5:13 PM, Kees Cook wrote:
On Mon, Jul 13, 2020 at 12:08:08PM -0700, Ralph Campbell wrote:
On 6/22/20 11:16 AM, Kees Cook wrote:
Plumb the old XFAIL result into a TAP SKIP.
Signed-off-by: Kees Cook
---
tools/testing/selftests/kselftest_harness.h | 64
On 6/22/20 11:16 AM, Kees Cook wrote:
Plumb the old XFAIL result into a TAP SKIP.
Signed-off-by: Kees Cook
---
tools/testing/selftests/kselftest_harness.h | 64 ++-
tools/testing/selftests/seccomp/seccomp_bpf.c | 8 +--
2 files changed, 52 insertions(+), 20
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 11 ---
drivers/gpu/drm/nouveau
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggested by Jason Gunthorpe.
Ralph Campbell (5):
nouveau: fix storing invalid ptes
mm/migrate: add a direction parameter to migrate_vma
mm/notifier: add migration invalidation type
nouv
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c | 8 +++-
2 files chang
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
by the caller of migrate_vma_setup().
Signed-off-by: Ralph Campbell
Reviewed-by: Bharata B Rao
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 ++
include/linux/migrate.h | 12 +---
lib/test_hmm.c | 2 ++
mm
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 31 ++-
1 file changed, 18 insertions
On 7/12/20 9:27 AM, Dan Williams wrote:
The 'struct resource' in 'struct dev_pagemap' is only used for holding
resource span information. The other fields, 'name', 'flags', 'desc',
'parent', 'sibling', and 'child' are all unused wasted space.
This is in preparation for introducing a
On 7/10/20 12:39 PM, Jason Gunthorpe wrote:
On Mon, Jul 06, 2020 at 03:23:45PM -0700, Ralph Campbell wrote:
Currently migrate_vma_setup() calls mmu_notifier_invalidate_range_start()
which flushes all device private page mappings whether or not a page
is being migrated to/from device private
On 7/10/20 12:27 PM, Jason Gunthorpe wrote:
On Wed, Jul 01, 2020 at 03:53:47PM -0700, Ralph Campbell wrote:
The goal for this series is to introduce the hmm_pfn_to_map_order()
function. This allows a device driver to know that a given 4K PFN is
actually mapped by the CPU using a larger sized
migrating.
Ralph Campbell (2):
mm/migrate: optimize migrate_vma_setup() for holes
mm/migrate: add migrate-shared test for migrate_vma_*()
mm/migrate.c | 16 ++--
tools/testing/selftests/vm/hmm-tests.c | 35 ++
2 files changed, 49
Add a migrate_vma_*() self test for mmap(MAP_SHARED) to verify that
!vma_anonymous() ranges won't be migrated.
Signed-off-by: Ralph Campbell
---
tools/testing/selftests/vm/hmm-tests.c | 35 ++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/vm/hmm
array entries as not migrating to
avoid this overhead.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..ec00b7a6ea2a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
On 7/9/20 11:35 PM, Bharata B Rao wrote:
On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
When migrating system memory to device private memory, if the source
address range is a valid VMA range and there is no memory or a zero page,
the source PFN array is marked as valid
A simple optimization for migrate_vma_*() when the source vma is not an
anonymous vma and a new test case to exercise it.
This is based on linux-mm and is for Andrew Morton's tree.
Ralph Campbell (2):
mm/migrate: optimize migrate_vma_setup() for holes
mm/migrate: add migrate-shared test
Add a migrate_vma_*() self test for mmap(MAP_SHARED) to verify that
!vma_anonymous() ranges won't be migrated.
Signed-off-by: Ralph Campbell
---
tools/testing/selftests/vm/hmm-tests.c | 35 ++
1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/vm/hmm
array entries as not migrating to
avoid this overhead.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..8aa434691577 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2204,9
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
ps://lore.kernel.org/linux-mm/2020062008.9971-1-rcampb...@nvidia.com
("nouveau: fix mixed normal and device private page migration")
https://lore.kernel.org/lkml/20200622233854.10889-3-rcampb...@nvidia.com
Ralph Campbell (5):
nouveau: fix storing invalid ptes
mm/migrate: add a dire
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 11 ---
drivers/gpu/drm/nouveau
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c | 8 +++-
2 files chang
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 31 ++-
1 file changed, 18 insertions
by the caller of migrate_vma_setup().
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 ++
include/linux/migrate.h | 12 +---
lib/test_hmm.c | 2 ++
mm/migrate.c
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
hmm_pfn_to_map_order() function to support mapping system memory pages
that are PMD_SIZE.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau
Add a sanity test for hmm_range_fault() returning the page mapping size
order.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 4 ++
lib/test_hmm_uapi.h | 4 ++
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files
The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware specific GPU page table entries for 2MB sized
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
a new function hmm_pfn_to_map_order() to return the mapping size
order so that callers know the pages are being mapped with consistent
permissions and a large device page table mapping can be used if one is
available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 24
.
Only add support for 2MB nouveau mappings initially since changing the
1:1 CPU/GPU page table size assumptions requires a bigger set of changes.
Rebase to 5.8.0-rc3.
Ralph Campbell (5):
nouveau/hmm: fault one page at a time
mm/hmm: add hmm_mapping order
nouveau: fix mapping 2MB sysmem pages
. In addition, use the hmm_range
default_flags to fix a corner case where the input hmm_pfns array
is not reinitialized after hmm_range_fault() returns -EBUSY and must
be called again.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +-
1 file
The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware specific GPU page table entries for 2MB sized
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
Add a sanity test for hmm_range_fault() returning the HMM_PFN_PMD
flag.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 4 ++
lib/test_hmm_uapi.h | 4 ++
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files changed, 84
two new output flags to indicate the mapping size (PMD or PUD sized)
so that callers know the pages are being mapped with consistent permissions
and a large device page table mapping can be used if one is available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 11 ++-
mm/hmm.c
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
HMM_PFN_PMD flag that hmm_range_fault() returns to support mapping
system memory pages that are PMD_SIZE.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm
. In addition, use the hmm_range
default_flags to fix a corner case where the input hmm_pfns array
is not reinitialized after hmm_range_fault() returns -EBUSY and must
be called again.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +-
1 file
:1 CPU/GPU page table size assumptions requires a bigger set of changes.
Rebase to 5.8.0-rc3.
Ralph Campbell (5):
nouveau/hmm: fault one page at a time
mm/hmm: add output flags for PMD/PUD page mapping
nouveau: fix mapping 2MB sysmem pages
nouveau/hmm: support mapping large sysmem pages
program stopping with a fatal fault.
Fix this by setting .dev_private_owner appropriately.
Fixes: 08da667b ("mm/hmm: check the device private page owner in
hmm_range_fault()")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell
Reviewed-by: Jason Gunthorpe
---
This is base
program stopping with a fatal fault.
Fix this by setting .dev_private_owner appropriately.
Fixes: 08da667b ("mm/hmm: check the device private page owner in
hmm_range_fault()")
Signed-off-by: Ralph Campbell
---
This is based on Linux-5.8.0-rc2 and is for Ben Skeggs' nouveau tree.
On 6/25/20 10:31 AM, Jason Gunthorpe wrote:
On Thu, Jun 25, 2020 at 10:25:38AM -0700, Ralph Campbell wrote:
Making sure to include linux-mm and Bharata B Rao for IBM's
use of migrate_vma*().
On 6/24/20 11:10 AM, Ralph Campbell wrote:
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon
Making sure to include linux-mm and Bharata B Rao for IBM's
use of migrate_vma*().
On 6/24/20 11:10 AM, Ralph Campbell wrote:
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been
On 6/23/20 4:40 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 03:20:08PM -0700, Ralph Campbell wrote:
The caller of migrate_vma_setup() does not know what type of page is
stored in the CPU's page tables. Pages within the specified range are
free to be swapped out, migrated, or freed
On 6/22/20 5:30 PM, John Hubbard wrote:
On 2020-06-22 16:38, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been migrated to device private
On 6/22/20 4:54 PM, Yang Shi wrote:
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard wrote:
On 2020-06-22 15:33, Yang Shi wrote:
On Mon, Jun 22, 2020 at 3:30 PM Yang Shi wrote:
On Mon, Jun 22, 2020 at 2:53 PM Zi Yan wrote:
On 22 Jun 2020, at 17:31, Ralph Campbell wrote:
On 6/22/20 1:10 PM
page and incorrectly computes the GPU's physical
address of local memory leading to data corruption.
Fix this by checking the source struct page and computing the correct
physical address.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 8
1 file changed, 8
uveau/nouveau/hmm: fix migrate zero page to GPU")
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..cc
The functions nvkm_vmm_ctor() and nvkm_mmu_ptp_get() are not called outside
of the file defining them so make them static.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 2 +-
drivers/gpu/drm/nouveau/nvkm
-1-rcampb...@nvidia.com/
Note that in order to exercise/test patch 2 here, you will need a
kernel with patch 1 from the original series (the fix to mm/migrate.c).
It is safe to apply these changes before the fix to mm/migrate.c
though.
Ralph Campbell (3):
nouveau: fix migrate page regression
On 6/22/20 4:18 PM, Jason Gunthorpe wrote:
On Mon, Jun 22, 2020 at 11:10:05AM -0700, Ralph Campbell wrote:
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how
and allow a range of normal and device private
pages to be migrated.
Fixes: 800bb1c8dc80 ("mm: handle multiple owners of device private pages in
migrate_vma")
Signed-off-by: Ralph Campbell
---
This is based on 5.8.0-rc2 for Andrew Morton's mm tree.
I believe it can be queued for 5.8-rcX a
On 6/21/20 5:15 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph
On 6/22/20 1:10 PM, Zi Yan wrote:
On 22 Jun 2020, at 15:36, Ralph Campbell wrote:
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
indicate the huge page was fully mapped by the CPU.
Export
On 6/22/20 10:22 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:41PM -0700, Ralph Campbell wrote:
The SVM page fault handler groups faults into a range of contiguous
virtual addresses and requests hmm_range_fault() to populate and
return the page frame number of system memory mapped
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 6/22/20 5:39 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:33PM -0700, Ralph Campbell wrote:
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patch 7-8 prepare nouveau
) and compound page migration to device private memory
(patches 12-16). Since these changes are split across mm core, nouveau,
and testing, I'm guessing Jason Gunthorpe's HMM tree would be appropriate.
Ralph Campbell (16):
mm: fix migrate_vma_setup() src_owner and normal pages
nouveau: fix
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c | 16
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 171 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
and allow a range of normal and device private
pages to be migrated.
Fixes: 800bb1c8dc80 ("mm: handle multiple owners of device private pages in
migrate_vma")
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migra
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
HMM_PFN_COMPOUND flag that hmm_range_fault() returns to support mapping
system memory pages larger than PAGE_SIZE.
Signed-off-by: Ralph Campbell
---
drivers
entries as not migrating to
avoid this overhead.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 24535281cea3..87c52e0ee580 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2178,9
that bit of extra code.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 14 --
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 28528285942c..f7c2b51a7a9d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1018,15 +1018,6 @@ static
uveau/nouveau/hmm: fix migrate zero page to GPU")
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..cc
memremap_pages().
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 1 +
include/linux/mm.h | 1 +
mm/huge_memory.c | 30 --
mm/internal.h | 1 -
mm/memory.c | 10 +-
mm/memremap.c | 9 +-
mm/migrate.c | 226
page and incorrectly computes the GPU's physical
address of local memory leading to data corruption.
Fix this by checking the source struct page and computing the correct
physical address.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 8
1 file changed, 8
The functions nvkm_vmm_ctor() and nvkm_mmu_ptp_get() are not called outside
of the file defining them so make them static.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 2 +-
drivers/gpu/drm/nouveau/nvkm
The HMM self test "migrate_multiple" can time out on slower machines.
Lower the number of loop iterations to fix this.
Signed-off-by: Ralph Campbell
---
tools/testing/selftests/vm/hmm-tests.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selfte
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 323 -
tools/testing/selftests/vm/hmm-tests.c | 292 ++
2 files changed
Add some sanity tests for hmm_range_fault() returning the HMM_PFN_COMPOUND
flag.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 2 +
lib/test_hmm_uapi.h| 2 +
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files
Add a test to check that, when migrating a range of addresses with mixed
device private pages and normal anonymous pages, all pages are migrated.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 22 +-
tools/testing/selftests/vm/hmm-tests.c | 18
is the same as the underlying compound page size.
Add a new output flag to indicate this so that callers know it is safe to
use a large device page table mapping if one is available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 4 +++-
mm/hmm.c | 10 +++---
2 files
. In addition, use the hmm_range
default_flags to fix a corner case where the input hmm_pfns array
is not reinitialized after hmm_range_fault() returns -EBUSY and must
be called again.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +-
1 file
is NULL but dereferenced.
lib/test_hmm.c:524:29-36: ERROR: devmem is NULL but dereferenced.
Fix these by using the local variable 'res' instead of devmem.
Signed-off-by: Randy Dunlap
Cc: Jérôme Glisse
Cc: linux...@kvack.org
Cc: Ralph Campbell
---
lib/test_hmm.c | 3 +--
1 file changed, 1
In zap_pte_range(), the check for non_swap_entry() and
is_device_private_entry() is unnecessary since the latter is sufficient
to determine if the page is a device private page. Remove the test for
non_swap_entry() to simplify the code and for clarity.
Signed-off-by: Ralph Campbell
Reviewed
On 6/12/20 12:42 PM, Jason Gunthorpe wrote:
On Fri, Jun 12, 2020 at 12:35:24PM -0700, Matthew Wilcox wrote:
On Fri, Jun 12, 2020 at 12:26:18PM -0700, Ralph Campbell wrote:
In zap_pte_range(), the check for non_swap_entry() and
is_device_private_entry() is redundant since the latter
On 6/12/20 12:33 PM, Jason Gunthorpe wrote:
On Fri, Jun 12, 2020 at 12:26:18PM -0700, Ralph Campbell wrote:
In zap_pte_range(), the check for non_swap_entry() and
is_device_private_entry() is redundant since the latter is a subset of the
former. Remove the redundant check to simplify the code
In zap_pte_range(), the check for non_swap_entry() and
is_device_private_entry() is redundant since the latter is a subset of the
former. Remove the redundant check to simplify the code and for clarity.
Signed-off-by: Ralph Campbell
---
This is based on the current linux tree and is intended
On 5/26/20 3:29 PM, Zi Yan wrote:
On 8 May 2020, at 16:06, Ralph Campbell wrote:
On 5/8/20 12:51 PM, Christoph Hellwig wrote:
On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped
On 5/25/20 6:41 AM, Jason Gunthorpe wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 5/20/20 12:20 PM, Jason Gunthorpe wrote:
On Wed, May 20, 2020 at 11:36:52AM -0700, Ralph Campbell wrote:
When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that
is backed by pte_none() or zero pages, migrate_vma_setup() will fill the
source PFN array with an entry
and zero filling it instead of failing to migrate the page.
Signed-off-by: Ralph Campbell
---
This patch applies cleanly to Jason Gunthorpe's hmm tree plus two
patches I posted earlier. The first is queued in Ben Skeggs' nouveau
tree and the second is still pending review/not queued.
[1] ("no
exits unexpectedly or is killed, the range can be
[0..ULONG_MAX] in which case calling xa_erase() for every possible PFN
results in CPU timeouts.
Use xa_for_each_range() to efficiently erase entries in the range.
Signed-off-by: Ralph Campbell
---
This patch is based on Jason Gunthorpe's hmm tree
On 5/15/20 4:15 PM, Jason Gunthorpe wrote:
On Wed, May 13, 2020 at 02:45:07PM -0700, Ralph Campbell wrote:
The test driver uses an XArray to store virtual to physical address
translations for a simulated hardware device. The MMU notifier
invalidation callback is used to keep the table
..ULONG_MAX] explicitly and just destroy the whole table.
Signed-off-by: Ralph Campbell
---
This patch is based on Jason Gunthorpe's hmm tree and should be folded
into the ("mm/hmm/test: add selftest driver for HMM") patch once this
patch is reviewed, etc.
lib/test_hmm.c | 6
Looks good, thanks!
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 00bca6116f93..30462193c4ff 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -647,8 +647,10 @@
ei Yongjun
Looks good, thanks!
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 00bca6116f93..b4d9434e49e7 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1119,8 +1119,10 @@ static
On 5/8/20 8:17 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 01:17:55PM -0700, Ralph Campbell wrote:
On 5/8/20 12:59 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how
On 5/8/20 12:59 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page