Reviewed-by: Ralph Campbell
a second attempt will succeed,
and the retry adds complexity. So clean this up by removing the retry
and MIGRATE_PFN_LOCKED flag.
Destination pages are also meant to have the MIGRATE_PFN_LOCKED flag
set, but nothing actually checks that.
Signed-off-by: Alistair Popple
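For illustration, the driver-visible effect of dropping MIGRATE_PFN_LOCKED is roughly the following (a sketch, not taken from the posted patch; dst, i and dpage are hypothetical driver state):

	/* Before: drivers locked the destination page and said so. */
	dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;

	/* After: the core migration code handles page locking itself,
	 * so drivers publish only the pfn. */
	dst[i] = migrate_pfn(page_to_pfn(dpage));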
You can add:
Reviewed-by: Ralph
On 5/13/21 6:15 AM, Matthew Wilcox wrote:
On Thu, Oct 01, 2020 at 11:17:13AM -0700, Ralph Campbell wrote:
This is still an RFC because after looking at the pmem/dax code some
more, I realized that the ZONE_DEVICE struct pages are being inserted
into the process' page tables
On 3/3/21 10:16 PM, Alistair Popple wrote:
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this, combinations of TTU_XXX flags are needed, in which
case a careful check of try_to_migrate() and try_to_unmap() will be
needed.
Reviewed-by: Ralph Campbell
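For background, a minimal sketch of how a driver might call the make_device_exclusive_range() API this series introduces; everything apart from that call (the wrapper name, the owner handling, the device-side steps) is illustrative:

#include <linux/mm.h>
#include <linux/rmap.h>

/* Make one page exclusive to the device before issuing a device atomic. */
static int my_drv_make_exclusive(struct mm_struct *mm, unsigned long addr,
				 void *owner)
{
	struct page *page = NULL;
	int ret;

	mmap_read_lock(mm);
	/* Replaces the CPU PTE with a device-exclusive swap entry. */
	ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
					  &page, owner);
	mmap_read_unlock(mm);
	if (ret < 0)
		return ret;
	if (ret != 1 || !page)
		return -EBUSY;

	/* ... map the page into the device MMU and run the atomic ... */

	/* Pages come back locked with a reference held. */
	unlock_page(page);
	put_page(page);
	return 0;
}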
() which specifies no other flags. Therefore, rather than
overload try_to_unmap_one() with unrelated behaviour, split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Looks good to me.
Reviewed-by: Ralph Campbell
for both read and write entry
creation.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Looks good to me too.
Reviewed-by: Ralph Campbell
-by: Ralph Campbell
---
v4:
* Added pfn_swap_entry_to_page()
* Reinstated check that migration entries point to locked pages
* Removed #define swapcache_prepare which isn't needed for CONFIG_SWAP=0
builds
---
arch/s390/mm/pgtable.c | 2 +-
fs/proc/task_mmu.c | 23
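For reference, the pfn_swap_entry_to_page() helper mentioned in the changelog above is used along these lines (a sketch; the wrapper function is illustrative):

#include <linux/swapops.h>

/* Given a non-present PTE holding a migration entry, return the page it
 * refers to. Migration entries must point to locked pages, which the
 * helper can assert. */
static struct page *migration_pte_to_page(pte_t pte)
{
	swp_entry_t entry = pte_to_swp_entry(pte);

	if (!is_migration_entry(entry))
		return NULL;
	return pfn_swap_entry_to_page(entry);
}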
Popple
One minor nit below, but you can add
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
> +static int dmirror_exclusive(struct dmirror *dmirror,
> + struct hmm_dmirror_cmd *cmd)
> +{
> + unsigned long start, end, addr;
> + unsigned long s
> From: Alistair Popple
> Sent: Thursday, February 25, 2021 11:18 PM
> To: linux...@kvack.org; nouveau@lists.freedesktop.org;
> bske...@redhat.com; a...@linux-foundation.org
> Cc: linux-...@vger.kernel.org; linux-ker...@vger.kernel.org; dri-
> de...@lists.freedesktop.org; Jo
.org; John Hubbard
> ; Ralph Campbell ;
> jgli...@redhat.com; h...@infradead.org; dan...@ffwll.ch
> Subject: Re: [PATCH v3 6/8] mm: Selftests for exclusive device memory
>
> On Fri, Feb 26, 2021 at 06:18:30PM +1100, Alistair Popple wrote:
> > Adds some selftests for excl
On 11/9/20 1:14 AM, Christoph Hellwig wrote:
On Fri, Nov 06, 2020 at 01:26:50PM -0800, Ralph Campbell wrote:
On 11/6/20 12:03 AM, Christoph Hellwig wrote:
I hate the extra pin count magic here. IMHO we really need to finish
off the series to get rid of the extra references
On 11/6/20 12:03 AM, Christoph Hellwig wrote:
I hate the extra pin count magic here. IMHO we really need to finish
off the series to get rid of the extra references on the ZONE_DEVICE
pages first.
First, thanks for the review comments.
I don't like the extra refcount either, that is why I
On 11/5/20 11:55 PM, Christoph Hellwig wrote:
On Thu, Nov 05, 2020 at 04:51:42PM -0800, Ralph Campbell wrote:
+extern void prep_transhuge_device_private_page(struct page *page);
No need for the extern.
Right, I was just copying the style.
Would you like to see a preparatory patch
On 11/6/20 4:14 AM, Matthew Wilcox wrote:
On Thu, Nov 05, 2020 at 04:51:42PM -0800, Ralph Campbell wrote:
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
I think you'd be better
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
Signed-off-by: Ralph Campbell
---
include/linux/huge_mm.h | 5 +
mm/huge_memory.c | 9 +
2 files changed, 14
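The posted helper was roughly the following shape (a reconstruction based on the declaration quoted elsewhere in this thread; the pgmap refcount adjustment is an assumption drawn from the discussion of the extra ZONE_DEVICE reference, and prep_compound_page() is assumed visible to this code):

#include <linux/huge_mm.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

void prep_transhuge_device_private_page(struct page *page)
{
	prep_compound_page(page, HPAGE_PMD_ORDER);
	prep_transhuge_page(page);
	/* Only the head page holds a reference on the pgmap. */
	percpu_ref_put_many(page->pgmap->ref, HPAGE_PMD_NR - 1);
}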
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 289 ++---
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c | 14
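Since the diffstat places the helper in mm/huge_memory.c, it can reuse the file's static gfp-mask policy helper. A plausible shape, hedged as a reconstruction rather than the posted diff:

#include <linux/gfp.h>
#include <linux/huge_mm.h>

struct page *alloc_transhugepage(struct vm_area_struct *vma,
				 unsigned long addr)
{
	gfp_t gfp = alloc_hugepage_direct_gfpmask(vma);
	struct page *page;

	page = alloc_hugepage_vma(gfp, vma, addr, HPAGE_PMD_ORDER);
	if (page)
		prep_transhuge_page(page);
	return page;
}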
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 437 +
lib/test_hmm_uapi.h | 3 +
tools/testing/selftests/vm/hmm-tests.c
Changes in v2:
Added splitting a THP midway in the migration process:
i.e., in migrate_vma_pages().
[1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampb...@nvidia.com
[2] https://lore.kernel.org/linux-mm/20200902165830.5367-1-rcampb...@nvidia.com
Ralph Campbell (6):
m
to indicate a huge page can be migrated. If the device driver can allocate
a huge page, it sets the MIGRATE_PFN_COMPOUND flag in the destination PFN
array. migrate_vma_pages() will fall back to PAGE_SIZE pages if
MIGRATE_PFN_COMPOUND is not set in both source and destination arrays.
Signed-off-by: Ralph
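A sketch of the driver side under the proposed flag (illustrative only; dpage and the THP-allocation decision are hypothetical driver state, and MIGRATE_PFN_LOCKED was still required at the time of this series):

#include <linux/migrate.h>

static unsigned long fill_dst_entry(struct page *dpage, bool is_thp)
{
	unsigned long mpfn = migrate_pfn(page_to_pfn(dpage)) |
			     MIGRATE_PFN_LOCKED;

	/* Advertise a compound destination only when a THP was actually
	 * allocated; otherwise migrate_vma_pages() falls back to
	 * PAGE_SIZE pages. */
	if (is_thp)
		mpfn |= MIGRATE_PFN_COMPOUND;
	return mpfn;
}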
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
I thought I sent this out
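The change amounts to doing the alignment in the driver, along these lines (a sketch; the field layout follows the nouveau SVM bind ioctl but is not copied from the patch):

#include <linux/kernel.h>
#include <linux/mm.h>

static void svm_bind_align(u64 *start, u64 *end)
{
	*start = ALIGN_DOWN(*start, PAGE_SIZE);
	*end = ALIGN(*end, PAGE_SIZE);
	/* npages is then redundant: (*end - *start) >> PAGE_SHIFT */
}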
to
be treated specially for device private pages, leaving DAX as the
remaining special case.
Signed-off-by: Ralph Campbell
---
I'm sending this as a separate patch since I think it is ready to
merge. Originally, this was part of an RFC:
https://lore.kernel.org/linux-mm/20201001181715.17416-1-rcampb
On 10/7/20 10:17 PM, Ram Pai wrote:
On Thu, Oct 01, 2020 at 11:17:15AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page
On 10/9/20 9:53 AM, Ira Weiny wrote:
On Thu, Oct 08, 2020 at 10:25:44AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page
On 10/1/20 10:59 PM, Christoph Hellwig wrote:
On Thu, Oct 01, 2020 at 11:17:15AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see
On 10/1/20 10:56 PM, Christoph Hellwig wrote:
On Thu, Oct 01, 2020 at 11:17:14AM -0700, Ralph Campbell wrote:
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide
to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
fs/dax.c | 4 +-
include/linux/dax.h | 2 +-
include/linux/memremap.h
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by: Ralph Campbell
---
fs/dax.c | 4 ++--
fs/ext4/inode.c | 5 +
fs/xfs
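The helper being described is essentially a one-liner (a sketch; different revisions of the series named it dax_layout_is_idle_page() and dax_page_unused()):

#include <linux/page_ref.h>

/* A ZONE_DEVICE page is idle and free when only the extra reference
 * taken at map time remains. */
static inline bool dax_page_unused(struct page *page)
{
	return page_ref_count(page) == 1;
}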
Rebased to Linux-5.9.0-rc6 to include pmem fixes.
I added patch 1 to introduce a page refcount helper for ext4 and xfs as
suggested by Christoph Hellwig.
I also applied Christoph Hellwig's other suggested changes for removing
the devmap_managed_key, etc.
Ralph Campbell (2):
ext4/xfs: add page re
On 9/25/20 11:41 PM, Christoph Hellwig wrote:
On Fri, Sep 25, 2020 at 01:44:42PM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see
On 9/25/20 11:35 PM, Christoph Hellwig wrote:
On Fri, Sep 25, 2020 at 01:44:41PM -0700, Ralph Campbell wrote:
error = ___wait_var_event(&page->_refcount,
- atomic_read(&page->_refcount) == 1,
+ dax_layout_is_idle_pag
to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
include/linux/dax.h | 2 +-
include/linux/memremap.h | 7 ++-
include/linux/mm.h
I also applied Christoph Hellwig's other suggested changes for removing
the devmap_managed_key, etc.
Ralph Campbell (2):
ext4/xfs: add page refcount helper
mm: remove extra ZONE_DEVICE struct page refcount
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by: Ralph Campbell
---
fs/dax.c | 8
fs/ext4/inode.c | 2 +-
fs/xfs
On 9/25/20 1:51 PM, Dan Williams wrote:
On Fri, Sep 25, 2020 at 1:45 PM Ralph Campbell wrote:
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail
On 9/15/20 10:36 PM, Christoph Hellwig wrote:
On Tue, Sep 15, 2020 at 09:39:47AM -0700, Ralph Campbell wrote:
I don't think any of the three ->page_free instances even cares about
the page refcount.
Not true. The page_free() callback records the page is free by setting
a bit or putt
On 9/15/20 11:10 PM, Christoph Hellwig wrote:
On Mon, Sep 14, 2020 at 04:10:38PM -0700, Dan Williams wrote:
You also need to fix up ext4_break_layouts() and
xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
also needs some fstests exposure.
While we're at it, can we add
On 9/15/20 11:09 PM, Christoph Hellwig wrote:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 517751310dd2..5a82037a4b26 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1093,34 +1093,6 @@ static inline bool is_zone_device_page(const struct page *page)
#ifdef
On 9/15/20 9:29 AM, Christoph Hellwig wrote:
On Mon, Sep 14, 2020 at 04:53:25PM -0700, Ralph Campbell wrote:
Since set_page_refcounted() is defined in mm/internal.h I would have to
move the definition to someplace like page_ref.h or have the drivers
call init_page_count() or set_page_count
to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE
struct page reference counting is ugly/broken. This is my attempt to
fix it and it works for the HMM migration self tests.
I'm only sending this out
use it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).
Fix these problems by checking for a PMD migration entry.
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
cc: sta...@vger.kernel.org # 4.14+
Signed-off-by: Ralph Campbell
Reviewed
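The core of the fix, simplified (a sketch mirroring the description above; the wrapper function is illustrative):

#include <linux/swapops.h>
#include <linux/huge_mm.h>

/* A non-present PMD may hold a migration entry; pmd_pfn() is only
 * valid for present PMDs. */
static unsigned long split_src_pfn(pmd_t pmd)
{
	if (is_pmd_migration_entry(pmd))
		return migration_entry_to_pfn(pmd_to_swp_entry(pmd));
	return pmd_pfn(pmd);
}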
On 9/2/20 2:47 PM, Zi Yan wrote:
On 2 Sep 2020, at 12:58, Ralph Campbell wrote:
A migrating transparent huge page has to already be unmapped. Otherwise,
the page could be modified while it is being copied to a new page and
data could be lost. The function __split_huge_pmd() checks for a PMD
to indicate a huge page can be migrated. If the device driver can allocate
a huge page, it sets the MIGRATE_PFN_COMPOUND flag in the destination PFN
array. migrate_vma_pages() will fall back to PAGE_SIZE pages if
MIGRATE_PFN_COMPOUND is not set in both source and destination arrays.
Signed-off-by: Ralph
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 439 +
lib/test_hmm_uapi.h | 3 +
tools/testing/selftests/vm/hmm-tests.c
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c | 14
use it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).
Fix these problems by checking for a PMD migration entry.
Signed-off-by: Ralph Campbell
---
mm/huge_memory.c | 42 +++---
1 file changed, 23 insertions(+), 19 deletions(-)
d
Move the definition of migrate_vma_collect_skip() to make it callable
by migrate_vma_collect_hole(). This helps make the next patch easier
to read.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 30 +++---
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git
like Ben Skeggs.
[1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampb...@nvidia.com
Ralph Campbell (7):
mm/thp: fix __split_huge_pmd_locked() for migration PMD
mm/migrate: move migrate_vma_collect_skip()
mm: support THP migration to device private memory
mm/thp: add
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
Signed-off-by: Ralph Campbell
---
include/linux/huge_mm.h | 5 +
mm/huge_memory.c | 8
2 files changed, 13
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 289 ++---
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
This is for Ben Skegg's
On 8/31/20 11:02 AM, Jason Gunthorpe wrote:
On Mon, Aug 31, 2020 at 10:21:41AM -0700, Ralph Campbell wrote:
On 8/31/20 4:51 AM, Jason Gunthorpe wrote:
On Thu, Aug 27, 2020 at 02:37:44PM -0700, Ralph Campbell wrote:
The user level OpenCL code shouldn't have to align start and end
addresses
On 8/31/20 4:51 AM, Jason Gunthorpe wrote:
On Thu, Aug 27, 2020 at 02:37:44PM -0700, Ralph Campbell wrote:
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
This is for Ben Skegg's
On 7/31/20 12:15 PM, Jason Gunthorpe wrote:
On Tue, Jul 28, 2020 at 03:04:07PM -0700, Ralph Campbell wrote:
On 7/28/20 12:19 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
When migrating the special zero page, migrate_vma_pages() calls
On 7/28/20 12:19 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
When migrating the special zero page, migrate_vma_pages() calls
mmu_notifier_invalidate_range_start() before replacing the zero page
PFN in the CPU page tables. This is unnecessary
On 7/28/20 12:15 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:01PM -0700, Ralph Campbell wrote:
static inline int mm_has_notifiers(struct mm_struct *mm)
@@ -513,6 +519,7 @@ static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
range->start = st
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 30 +++---
tools/testing
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 3 +++
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c
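In a driver, the new event type is checked from the interval-notifier callback, roughly as follows (a sketch; the my_svmm wrapper struct is hypothetical, and the owner pointer is whatever the driver passes as migrate_vma's owner field):

#include <linux/mmu_notifier.h>

/* Hypothetical driver state wrapping an interval notifier. */
struct my_svmm {
	struct mmu_interval_notifier notifier;
	void *owner;	/* what we pass as the migration's pgmap owner */
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_svmm *svmm = container_of(mni, struct my_svmm, notifier);

	/* Our own device-private migration: the driver updates the device
	 * MMU as part of the migration, so skip the invalidation here. */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == svmm->owner)
		return true;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... invalidate device mappings covering range->start..end ... */
	return true;
}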
device private pages owned by the caller of migrate_vma_setup().
Rename the src_owner field to pgmap_owner to reflect it is now used only
to identify which device private pages to migrate.
Signed-off-by: Ralph Campbell
Reviewed-by: Bharata B Rao
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 4
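For callers, the renamed field is filled in together with the selection flag added elsewhere in this series, along these lines (a sketch; the owner pointer is whatever the driver registered with its device-private pgmap):

#include <linux/migrate.h>

/* Hypothetical call site: migrate this driver's device-private pages
 * in [start, end) back to system memory. */
static int migrate_back(struct vm_area_struct *vma, unsigned long start,
			unsigned long end, unsigned long *src_pfns,
			unsigned long *dst_pfns, void *my_pgmap_owner)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src_pfns,
		.dst		= dst_pfns,
		.pgmap_owner	= my_pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
	};

	return migrate_vma_setup(&args);
}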
Rebase to Jason Gunthorpe's HMM tree.
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggested by Jason Gunthorpe.
Ralph Campbell (6):
nouveau: fix storing invalid ptes
mm/migrate: add a flags parameter to migrate_vma
mm/notifier: a
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 15 ---
drivers/gpu/drm
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 20
device private pages owned by the caller of migrate_vma_setup().
Rename the src_owner field to pgmap_owner to reflect it is now used only
to identify which device private pages to migrate.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 4 +++-
drivers/gpu/drm/nouveau
exercise the HMM test driver
invalidation changes.
Removed reviewed-by Bharata B Rao since this version is moderately
changed.
Changes in v2:
Rebase to Jason Gunthorpe's HMM tree.
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggeste
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 15 ---
drivers/gpu/drm
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 3 +++
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 30 +++---
tools/testing
On 7/20/20 4:16 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 01:49:09PM -0700, Ralph Campbell wrote:
On 7/20/20 12:59 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 12:54:53PM -0700, Ralph Campbell wrote:
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index
On 7/20/20 12:59 PM, Jason Gunthorpe wrote:
On Mon, Jul 20, 2020 at 12:54:53PM -0700, Ralph Campbell wrote:
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e546cbf03dd..620f2235d7d4 100644
+++ b/include/linux/migrate.h
@@ -180,6 +180,11 @@ static inline unsigned long
On 7/20/20 11:41 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:44AM -0700, Ralph Campbell wrote:
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have
On 7/20/20 11:40 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:47AM -0700, Ralph Campbell wrote:
Currently migrate_vma_setup() calls mmu_notifier_invalidate_range_start()
which flushes all device private page mappings whether or not a page
is being migrated to/from device private
On 7/20/20 11:36 AM, Jason Gunthorpe wrote:
On Mon, Jul 13, 2020 at 10:21:46AM -0700, Ralph Campbell wrote:
The src_owner field in struct migrate_vma is being used for two purposes,
it implies the direction of the migration and it identifies device private
pages owned by the caller. Split
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 11 ---
drivers/gpu/drm/nouveau
by the caller of migrate_vma_setup().
Signed-off-by: Ralph Campbell
Reviewed-by: Bharata B Rao
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 ++
include/linux/migrate.h | 12 +---
lib/test_hmm.c | 2 ++
mm
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 31 ++-
1 file changed, 18 insertions
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c | 8 +++-
2 files chang
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggested by Jason Gunthorpe.
Ralph Campbell (5):
nouveau: fix storing invalid ptes
mm/migrate: add a direction parameter to migrate_vma
mm/notifier: add migration invalidation type
nouv
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
On 7/10/20 12:39 PM, Jason Gunthorpe wrote:
On Mon, Jul 06, 2020 at 03:23:45PM -0700, Ralph Campbell wrote:
Currently migrate_vma_setup() calls mmu_notifier_invalidate_range_start()
which flushes all device private page mappings whether or not a page
is being migrated to/from device private
On 7/10/20 12:27 PM, Jason Gunthorpe wrote:
On Wed, Jul 01, 2020 at 03:53:47PM -0700, Ralph Campbell wrote:
The goal for this series is to introduce the hmm_pfn_to_map_order()
function. This allows a device driver to know that a given 4K PFN is
actually mapped by the CPU using a larger sized
value is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c | 8 +++-
2 files chang
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 11 ---
drivers/gpu/drm/nouveau
https://lore.kernel.org/linux-mm/2020062008.9971-1-rcampb...@nvidia.com
("nouveau: fix mixed normal and device private page migration")
https://lore.kernel.org/lkml/20200622233854.10889-3-rcampb...@nvidia.com
Ralph Campbell (5):
nouveau: fix storing invalid ptes
mm/migrate: add a dire
a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 31 ++-
1 file changed, 18 insertions
by the caller of migrate_vma_setup().
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 ++
include/linux/migrate.h | 12 +---
lib/test_hmm.c | 2 ++
mm/migrate.c
Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
hmm_pfn_to_map_order() function to support mapping system memory pages
that are PMD_SIZE.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau
Only add support for 2MB nouveau mappings initially since changing the
1:1 CPU/GPU page table size assumptions requires a bigger set of changes.
Rebase to 5.8.0-rc3.
Ralph Campbell (5):
nouveau/hmm: fault one page at a time
mm/hmm: add hmm_mapping order
nouveau: fix mapping 2MB sysmem pages
The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware specific GPU page table entries for 2MB sized
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
a new function hmm_pfn_to_map_order() to return the mapping size
order so that callers know the pages are being mapped with consistent
permissions and a large device page table mapping can be used if one is
available.
Signed-off-by: Ralph Campbell
---
include/linux/hmm.h | 24
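A sketch of how a driver consumes the new function (hmm_pfn_to_map_order() and hmm_pfn_to_page() are the HMM API being described; my_map_device_pte() is a hypothetical stand-in for the driver's page table writer):

#include <linux/hmm.h>

extern void my_map_device_pte(unsigned long addr, unsigned long pfn,
			      unsigned int order);	/* hypothetical */

static void map_fault_result(unsigned long hmm_pfn, unsigned long addr)
{
	unsigned int order = hmm_pfn_to_map_order(hmm_pfn);
	unsigned long pfn = page_to_pfn(hmm_pfn_to_page(hmm_pfn));

	/* order > 0 says the CPU maps a 2^order page range around this
	 * address with consistent permissions, so the device can use a
	 * matching large page table entry. */
	my_map_device_pte(addr & ~((PAGE_SIZE << order) - 1),
			  pfn & ~((1UL << order) - 1), order);
}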
. In addition, use the hmm_range
default_flags to fix a corner case where the input hmm_pfns array
is not reinitialized after hmm_range_fault() returns -EBUSY and must
be called again.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +-
1 file
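The resulting fault loop looks roughly like this (a simplified sketch of the driver pattern; with default_flags set, the hmm_pfns input array stays valid across -EBUSY retries):

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>

static int fault_range(struct hmm_range *range)
{
	struct mm_struct *mm = range->notifier->mm;
	int ret;

	range->default_flags = HMM_PFN_REQ_FAULT;
	do {
		range->notifier_seq = mmu_interval_read_begin(range->notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(range);
		mmap_read_unlock(mm);
		/* -EBUSY: collided with an invalidation; just retry. */
	} while (ret == -EBUSY);

	return ret;
}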
Add a sanity test for hmm_range_fault() returning the page mapping size
order.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 4 ++
lib/test_hmm_uapi.h | 4 ++
tools/testing/selftests/vm/hmm-tests.c | 76 ++
3 files
The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware specific GPU page table entries for 2MB sized
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
1:1 CPU/GPU page table size assumptions requires a bigger set of changes.
Rebase to 5.8.0-rc3.
Ralph Campbell (5):
nouveau/hmm: fault one page at a time
mm/hmm: add output flags for PMD/PUD page mapping
nouveau: fix mapping 2MB sysmem pages
nouveau/hmm: support mapping large sysmem pages