. In addition, use the hmm_range
default_flags to fix a corner case where the input hmm_pfns array
is not reinitialized after hmm_range_fault() returns -EBUSY and must
be called again.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +-
1 file
two new output flags to indicate the mapping size (PMD or PUD sized)
so that callers know the pages are being mapped with consistent permissions
and a large device page table mapping can be used if one is available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 11 ++-
mm/hmm.c
Add a sanity test for hmm_range_fault() returning the HMM_PFN_PMD
flag.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 4 ++
lib/test_hmm_uapi.h| 4 ++
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files changed, 84
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
HMM_PFN_PMD flag that hmm_range_fault() returns to support mapping
system memory pages that are PMD_SIZE.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm
program stopping with a fatal fault.
Fix this by setting .dev_private_owner appropriately.
Fixes: 08da667b ("mm/hmm: check the device private page owner in
hmm_range_fault()")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell
Reviewed-by: Jason Gunthorpe
---
This is based on Linux-5.8.0-rc2 and is for Ben Skeggs' nouveau tree.
On 6/25/20 10:31 AM, Jason Gunthorpe wrote:
On Thu, Jun 25, 2020 at 10:25:38AM -0700, Ralph Campbell wrote:
Making sure to include linux-mm and Bharata B Rao for IBM's
use of migrate_vma*().
On 6/24/20 11:10 AM, Ralph Campbell wrote:
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been
On 6/22/20 5:30 PM, John Hubbard wrote:
On 2020-06-22 16:38, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been migrated to device private
On 6/22/20 4:54 PM, Yang Shi wrote:
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard wrote:
On 2020-06-22 15:33, Yang Shi wrote:
On Mon, Jun 22, 2020 at 3:30 PM Yang Shi wrote:
On Mon, Jun 22, 2020 at 2:53 PM Zi Yan wrote:
On 22 Jun 2020, at 17:31, Ralph Campbell wrote:
On 6/22/20 1:10 PM
-1-rcampb...@nvidia.com/
Note that in order to exercise/test patch 2 here, you will need a
kernel with patch 1 from the original series (the fix to mm/migrate.c).
It is safe to apply these changes before the fix to mm/migrate.c
though.
Ralph Campbell (3):
nouveau: fix migrate page regression
page and incorrectly computes the GPU's physical
address of local memory leading to data corruption.
Fix this by checking the source struct page and computing the correct
physical address.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 8
1 file changed, 8
uveau/nouveau/hmm: fix migrate zero page to GPU")
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..cc
The functions nvkm_vmm_ctor() and nvkm_mmu_ptp_get() are not called outside
of the file defining them so make them static.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 2 +-
drivers/gpu/drm/nouveau/nvkm
On 6/22/20 4:18 PM, Jason Gunthorpe wrote:
On Mon, Jun 22, 2020 at 11:10:05AM -0700, Ralph Campbell wrote:
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how
On 6/21/20 5:15 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph
On 6/22/20 1:10 PM, Zi Yan wrote:
On 22 Jun 2020, at 15:36, Ralph Campbell wrote:
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
indicate the huge page was fully mapped by the CPU.
Export
On 6/22/20 10:22 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:41PM -0700, Ralph Campbell wrote:
The SVM page fault handler groups faults into a range of contiguous
virtual addresses and requests hmm_range_fault() to populate and
return the page frame number of system memory mapped
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 6/22/20 5:39 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:33PM -0700, Ralph Campbell wrote:
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patch 7-8 prepare nouveau
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 323 -
tools/testing/selftests/vm/hmm-tests.c | 292 ++
2 files changed
Add some sanity tests for hmm_range_fault() returning the HMM_PFN_COMPOUND
flag.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 2 +
lib/test_hmm_uapi.h| 2 +
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files
entries as not migrating to
avoid this overhead.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 24535281cea3..87c52e0ee580 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2178,9
The HMM self test "migrate_multiple" can time out on slower machines.
Lower the number of loop iterations to fix this.
Signed-off-by: Ralph Campbell
---
tools/testing/selftests/vm/hmm-tests.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selfte
that bit of extra code.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 14 --
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 28528285942c..f7c2b51a7a9d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1018,15 +1018,6 @@ static
and allow a range of normal and device private
pages to be migrated.
Fixes: 800bb1c8dc80 ("mm: handle multiple owners of device private pages in
migrate_vma")
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migra
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c| 16
memremap_pages().
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 1 +
include/linux/mm.h | 1 +
mm/huge_memory.c| 30 --
mm/internal.h | 1 -
mm/memory.c | 10 +-
mm/memremap.c | 9 +-
mm/migrate.c| 226
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 171 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
Add a test to check that migrating a range of addresses containing a mix of
device private pages and normal anonymous pages migrates them all.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 22 +-
tools/testing/selftests/vm/hmm-tests.c | 18
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
HMM_PFN_COMPOUND flag that hmm_range_fault() returns to support mapping
system memory pages larger than PAGE_SIZE.
Signed-off-by: Ralph Campbell
---
drivers
is the same as the underlying compound page size.
Add a new output flag to indicate this so that callers know it is safe to
use a large device page table mapping if one is available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 4 +++-
mm/hmm.c| 10 +++---
2 files
) and compound page migration to device private memory
(patches 12-16). Since these changes are split across mm core, nouveau,
and testing, I'm guessing Jason Gunthorpe's HMM tree would be appropriate.
Ralph Campbell (16):
mm: fix migrate_vma_setup() src_owner and normal pages
nouveau: fix
On 5/26/20 3:29 PM, Zi Yan wrote:
On 8 May 2020, at 16:06, Ralph Campbell wrote:
On 5/8/20 12:51 PM, Christoph Hellwig wrote:
On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped
On 5/25/20 6:41 AM, Jason Gunthorpe wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 5/20/20 12:20 PM, Jason Gunthorpe wrote:
On Wed, May 20, 2020 at 11:36:52AM -0700, Ralph Campbell wrote:
When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that
is backed by pte_none() or zero pages, migrate_vma_setup() will fill the
source PFN array with an entry
and zero filling it instead of failing to migrate the page.
Signed-off-by: Ralph Campbell
---
This patch applies cleanly to Jason Gunthorpe's hmm tree plus two
patches I posted earlier. The first is queued in Ben Skeggs' nouveau
tree and the second is still pending review/not queued.
[1] ("no
On 5/8/20 8:17 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 01:17:55PM -0700, Ralph Campbell wrote:
On 5/8/20 12:59 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how
On 5/8/20 12:59 PM, Matthew Wilcox wrote:
On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 5/8/20 12:51 PM, Christoph Hellwig wrote:
On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
The functions nvkm_vmm_ctor() and nvkm_mmu_ptp_get() are not called outside
of the file defining them so make them static.
Also, remove a useless semicolon after a {} statement block.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 2 +-
drivers/gpu/drm
in Ben Skeggs' nouveau
tree ("nouveau/hmm: map pages after migration") and the patches queued
in Jason's HMM tree.
There is also a patch outstanding ("nouveau/hmm: fix nouveau_dmem_chunk
allocations") that is independent of the above and could be applied
before or after.
-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: "Jérôme Glisse"
Cc: Ben Skeggs
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 46 +---
drivers/gpu/drm/nouveau/nouveau_dmem.h | 2 +
drivers/gpu/drm/nouveau/nouveau_s
gt_pfn
nvkm_vmm_iter()
REF_PTES == func == gp100_vmm_pgt_pfn()
dma_map_page()
Acked-by: Felix Kuehling
Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
Signed-off-by: Christoph Hellwig
---
Documentation/vm/hmm.rst
On 4/23/20 5:17 AM, Jason Gunthorpe wrote:
On Tue, Apr 21, 2020 at 04:11:07PM -0700, Ralph Campbell wrote:
In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated
and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(),
a nouveau_dmem_chunk is remo
of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two drivers, thanks!
For nouveau you can add:
Tested-by: Ralph Campbell
___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https
ice private pages and GPU memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 304 +
1 file changed, 112 insertions(+), 192 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
in
On 3/19/20 5:14 PM, Jason Gunthorpe wrote:
On Tue, Mar 17, 2020 at 04:14:31PM -0700, Ralph Campbell wrote:
+static int dmirror_fault(struct dmirror *dmirror, unsigned long start,
+unsigned long end, bool write)
+{
+ struct mm_struct *mm = dmirror->
Adding linux-kselft...@vger.kernel.org for the test config question.
On 3/19/20 11:17 AM, Jason Gunthorpe wrote:
On Tue, Mar 17, 2020 at 04:14:31PM -0700, Ralph Campbell wrote:
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
I've
00:00:00 2001
From: Ralph Campbell
Date: Tue, 17 Mar 2020 11:10:38 -0700
Subject: [PATCH] mm/hmm/test: add self tests for HMM
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Add some basic stand alone self tests for HMM.
Signed-off-by: Ralph Campb
On 3/17/20 4:56 AM, Jason Gunthorpe wrote:
On Mon, Mar 16, 2020 at 01:24:09PM -0700, Ralph Campbell wrote:
The reason for it being backwards is that "normally" a device doesn't want
the device private page to be faulted back to system memory, it wants to
get the device private stru
On 3/17/20 12:34 AM, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 03:49:51PM -0700, Ralph Campbell wrote:
On 3/16/20 12:32 PM, Christoph Hellwig wrote:
Remove the code to fault device private pages back into system memory
that has never been used by any driver. Also replace the usage
memory. Fix this by
passing in an expected pgmap owner in the hmm_range_fault structure.
Signed-off-by: Christoph Hellwig
Fixes: 4ef589dc9b10 ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Looks good.
Reviewed-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_d
On 3/16/20 12:32 PM, Christoph Hellwig wrote:
Remove the code to fault device private pages back into system memory
that has never been used by any driver. Also replace the usage of the
HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
is_device_private_page check in nouveau.
t
isn't, then it does make sense to not migrate whatever normal page is there.
nouveau_dmem_migrate_to_ram() sets src_owner so this case looks OK.
Just had to think this through.
Reviewed-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 1 +
drivers/gpu/drm/nouveau/nouveau_dm
This looks like a reasonable approach to take.
Reviewed-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 1 +
include/linux/memremap.h | 4
mm/memremap.c | 4
4 files changed, 11
On 3/16/20 1:09 PM, Jason Gunthorpe wrote:
On Mon, Mar 16, 2020 at 07:49:35PM +0100, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 11:42:19AM -0700, Ralph Campbell wrote:
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature
On 3/16/20 11:49 AM, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 11:42:19AM -0700, Ralph Campbell wrote:
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature.
There is various code related to it in nouveau, but as far as I
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature.
There is various code related to it in nouveau, but as far as I can tell
it never actually got turned on, and the only changes since the initial
commit are global cleanups.
When migrating system memory to GPU memory, check that SVM has been
enabled. Even though most errors can be ignored since migration is
a performance optimization, return an error because this is a violation
of the API.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 5
ter than
svmm->unmanaged.limit, which is greater than svmm->unmanaged.start, so the
start = max_t(u64, start, svmm->unmanaged.limit) will change nothing.
Just remove the useless lines of code.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 3 ---
1 file changed
n call migrate_vma_setup() with a starting address less than
vma->vm_start. This results in migrate_vma_setup() returning -EINVAL for
the range instead of nouveau skipping that part of the range and migrating
the rest.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 1 +
1 file c
patches 1-3 to fix some minor issues.
Eliminated nouveau_find_svmm() since it is easily found.
Applied Jason Gunthorpe's suggestions for nouveau_pfns_to_args().
Changes since v1:
Rebase to linux-5.6.0-rc4
Address Christoph Hellwig's comments
Ralph Campbell (4):
nouveau/hmm: fix vma range check
On 3/3/20 4:42 AM, Jason Gunthorpe wrote:
On Mon, Mar 02, 2020 at 05:00:23PM -0800, Ralph Campbell wrote:
When memory is migrated to the GPU, it is likely to be accessed by GPU
code soon afterwards. Instead of waiting for a GPU fault, map the
migrated memory into the GPU page tables
-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: "Jérôme Glisse"
Cc: Ben Skeggs
---
Originally this patch was targeted for Jason's rdma tree since other HMM
related changes were queued there. Now that those have been merged, this
patch just contains changes to
On 1/16/20 12:21 PM, Jason Gunthorpe wrote:
On Thu, Jan 16, 2020 at 12:16:30PM -0800, Ralph Campbell wrote:
Can you point me to the latest ODP code? Seems like my understanding is
quite off.
https://elixir.bootlin.com/linux/v5.5-rc6/source/drivers/infiniband/hw/mlx5/odp.c
Look for the word
On 1/16/20 8:00 AM, Jason Gunthorpe wrote:
On Wed, Jan 15, 2020 at 02:09:47PM -0800, Ralph Campbell wrote:
I don't understand the lifetime/membership issue. The driver is the only thing
that allocates, inserts, or removes struct mmu_interval_notifier and thus
completely controls the lifetime
On 1/14/20 5:00 AM, Jason Gunthorpe wrote:
On Mon, Jan 13, 2020 at 02:47:02PM -0800, Ralph Campbell wrote:
void
nouveau_svmm_fini(struct nouveau_svmm **psvmm)
{
struct nouveau_svmm *svmm = *psvmm;
+ struct mmu_interval_notifier *mni;
+
if (svmm
On 1/14/20 4:49 AM, Jason Gunthorpe wrote:
On Mon, Jan 13, 2020 at 02:47:01PM -0800, Ralph Campbell wrote:
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 47ad9cc89aab..4efecc0f13cb 100644
+++ b/mm/mmu_notifier.c
@@ -1171,6 +1171,39 @@ void mmu_interval_notifier_update(struct
ing the mmu interval notifier but more efficient.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 4 +++
mm/mmu_notifier.c| 69 ++--
2 files changed, 70 insertions(+), 3 deletions(-)
diff --git a/include/linux/mmu_notifier.h b/incl
Update nouveau to only use the mmu interval notifiers.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 313 +-
1 file changed, 201 insertions(+), 112 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c
b/drivers/gpu/drm/nouveau
Add some basic stand alone self tests for HMM.
Signed-off-by: Ralph Campbell
Signed-off-by: Jérôme Glisse
---
MAINTAINERS|3 +
lib/Kconfig.debug | 11 +
lib/Makefile |1 +
lib/test_hmm.c
to return the start and last address of the interval.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 15 +++
mm/mmu_notifier.c| 33 +
2 files changed, 48 insertions(+)
diff --git a/include/linux/mmu_notifier.h b/include
mmu interval notifier API
Changes v4 -> v5:
Added mmu interval notifier insert/remove/update callable from the
invalidate() callback
Updated HMM tests to use the new core interval notifier API
Changes v1 -> v4:
https://lore.kernel.org/linux-mm/20191104222141.5173-1-rcampb...@nvidia.c
is no longer needed.
Add a new function mmu_interval_notifier_put() which is safe to call from
the invalidate() callback. The ops->release() function will be called when
all callbacks are finished and no CPUs are accessing the
mmu_interval_notifier.
Signed-off-by: Ralph Campbell
---
incl
event
type MMU_NOTIFY_UNMAP) and the interval needs to be split in order to
continue receiving callbacks for the remaining left and right intervals.
Add a new function mmu_interval_notifier_insert_safe() which can be called
from the invalidate() callback.
Signed-off-by: Ralph Campbell
---
inc
I hit this while testing HMM with nouveau on linux-5.5-rc5.
I'm not a lockdep expert but my understanding of this is that an
invalidation callback could potentially call kzalloc(GFP_KERNEL)
which could cause another invalidation and recursively deadlock.
Looking at the
On 11/13/19 8:46 AM, Jason Gunthorpe wrote:
On Wed, Nov 13, 2019 at 05:59:52AM -0800, Christoph Hellwig wrote:
+int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
+ struct mm_struct *mm, unsigned long start,
+
my Tested-by for the mm and nouveau changes.
IOW, patches 1-4, 10-11, and 15.
Tested-by: Ralph Campbell
On 9/12/19 1:26 AM, Christoph Hellwig wrote:
+static int hmm_pfns_fill(unsigned long addr,
+unsigned long end,
+struct hmm_range *range,
+enum hmm_pfn_value_e value)
Nit: can we use the space a little more efficient,
On 9/12/19 1:26 AM, Christoph Hellwig wrote:
On Wed, Sep 11, 2019 at 03:28:27PM -0700, Ralph Campbell wrote:
Allow hmm_range_fault() to return success (0) when the CPU pagetable
entry points to the special shared zero page.
The caller can then handle the zero page by possibly clearing device
] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampb...@nvidia.com/
Ralph Campbell (4):
mm/hmm: make full use of walk_page_range()
mm/hmm: allow snapshot of the special zero page
mm/hmm: allow hmm_range_fault() of mmap(PROT_NONE)
mm/hmm/test: add self tests for HMM
MAINTAINERS
efore calling hmm_range_fault().
If the call to hmm_range_fault() is not a snapshot, the caller can still
check that pfns have the desired access permissions.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
---
mm/hmm.c | 4 +++-
1 file chan
hmm_range_fault() was not checking
start >= vma->vm_start before checking vma->vm_flags so hmm_range_fault()
could return an error based on the wrong vma for the requested range.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
Add self tests for HMM.
Signed-off-by: Ralph Campbell
---
MAINTAINERS|3 +
drivers/char/Kconfig | 11 +
drivers/char/Makefile |1 +
drivers/char/hmm_dmirror.c | 1504
include/Kbuild
Allow hmm_range_fault() to return success (0) when the CPU pagetable
entry points to the special shared zero page.
The caller can then handle the zero page by possibly clearing device
private memory instead of DMAing a zero page.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
On 8/27/19 11:41 AM, Jason Gunthorpe wrote:
On Fri, Aug 23, 2019 at 03:17:53PM -0700, Ralph Campbell wrote:
Signed-off-by: Ralph Campbell
mm/hmm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/hmm.c b/mm/hmm.c
index 29371485fe94..4882b83aeccb 100644
+++ b/mm/hmm.c
@@ -292,6
On 8/26/19 11:09 AM, Jason Gunthorpe wrote:
On Mon, Aug 26, 2019 at 11:02:12AM -0700, Ralph Campbell wrote:
On 8/24/19 3:37 PM, Christoph Hellwig wrote:
On Fri, Aug 23, 2019 at 03:17:52PM -0700, Ralph Campbell wrote:
Although hmm_range_fault() calls find_vma() to make sure that a vma exists
On 8/24/19 3:37 PM, Christoph Hellwig wrote:
On Fri, Aug 23, 2019 at 03:17:52PM -0700, Ralph Campbell wrote:
Although hmm_range_fault() calls find_vma() to make sure that a vma exists
before calling walk_page_range(), hmm_vma_walk_hole() can still be called
with walk->vma == NULL if the st