Hi,

On 9/8/23 10:44, Joonas Lahtinen wrote:
Quoting Thomas Hellström (2023-08-22 19:21:32)
This series adds a flag at VM_BIND time to pin the memory backing a VMA.
Initially this is needed for long-running workloads on hardware that
supports neither mid-thread preemption nor page faults, since without it
the userptr MMU notifier will wait for preemption until it times out.
 From a terminology perspective, many userspace and kernel developers
have come to understand pinned memory as memory that is locked in place
while a dependent context is active on the hardware. And that has been
related to the lack of page-fault support.

As the plan here is to go a step further and never move that memory, would
it be worthwhile to call such memory LOCKED (which would also align with the
CPU side)? And per my understanding, the aspiration is to keep supporting
locking memory in place (within sysadmin-configured limits) even if
page faults become the de facto usage.

So, in short, should we do s/pinned/locked/ to avoid terminology
confusion between new and old drivers, which userspace may have to deal
with from the same codebase?

This is mainly a problem for people used to i915 pinning, where we at some point used the terms "short-term pinning" and "long-term pinning".

There is some discussion of the terminology here:
https://lwn.net/Articles/600502/
I'm not sure what the outcome of that patchset was, but in this patchset we're at least using pin_user_pages() for these VMAs. TTM and dma-buf also use the term pinning.

The Linux distinction appears to be that locked pages are never paged out but may be migrated (allowed to cause minor but not major page faults), whereas pinned pages are neither swapped out nor migrated. We're using the latter.
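
Roughly, the kernel-side pinning looks like the minimal sketch below (illustrative only, not the actual xe code from this series). FOLL_LONGTERM is what makes GUP migrate the pages out of movable zones up front, so that once pinned they are neither swapped out nor migrated:

#include <linux/mm.h>

/*
 * Illustrative sketch, not the actual xe code: long-term pin of a
 * userptr range. FOLL_LONGTERM first migrates the pages out of
 * ZONE_MOVABLE/CMA, so that once pinned they can neither be swapped
 * out nor migrated.
 */
static int sketch_pin_userptr(unsigned long start, int npages,
			      struct page **pages)
{
	int pinned = pin_user_pages_fast(start, npages,
					 FOLL_WRITE | FOLL_LONGTERM, pages);

	if (pinned < 0)
		return pinned;
	if (pinned != npages) {
		/* Partial pin: release what we did get and fail. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}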

So I think pinning is the correct terminology?

(As a side note, Maarten spent considerable time trying to remove short-term pinning from upstream i915 before the xe work started.)

Anyway, this patch series is on hold until we've merged Xe and can follow up with a discussion on exactly how we can support pinning in drm.

/Thomas



Regards, Joonas

Moving forward, this could also be supported for bo-backed VMAs, given
that proper accounting takes place. A sysadmin could then optionally
configure a system to be optimized for dealing with a single GPU
application at a time.
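
(As an illustrative sketch only, not the series' actual code: such accounting could build on account_locked_vm(), which charges pinned pages against the owner's RLIMIT_MEMLOCK and fails once the sysadmin-configured limit would be exceeded:)

#include <linux/mm.h>

/*
 * Illustrative sketch, not the series' actual code: charge pinned
 * pages against RLIMIT_MEMLOCK before pinning, and uncharge again
 * when the pages are unpinned. account_locked_vm() returns -ENOMEM
 * if the limit would be exceeded.
 */
static int sketch_account_pin(struct mm_struct *mm, unsigned long npages)
{
	return account_locked_vm(mm, npages, true);
}

static void sketch_unaccount_pin(struct mm_struct *mm, unsigned long npages)
{
	account_locked_vm(mm, npages, false);
}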

The series will be followed up with an igt series to exercise the uAPI.
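
(For illustration only, here is roughly what such a pin flag could look like from userspace; every name below is a hypothetical placeholder, not the actual uAPI added by this series:)

#include <linux/types.h>

/* Hypothetical placeholders, not the series' actual uAPI. */
#define SKETCH_VM_BIND_OP_MAP_USERPTR	1
#define SKETCH_VM_BIND_FLAG_PIN		(1u << 0)

struct sketch_vm_bind {
	__u64 userptr;	/* CPU address of the user allocation */
	__u64 range;	/* size of the mapping, in bytes */
	__u64 addr;	/* GPU virtual address to bind at */
	__u32 op;	/* e.g. SKETCH_VM_BIND_OP_MAP_USERPTR */
	__u32 flags;	/* SKETCH_VM_BIND_FLAG_PIN requests pinned backing */
};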

v2:
- Address review comments by Matthew Brost.

Thomas Hellström (4):
   drm/xe/vm: Use onion unwind for xe_vma_userptr_pin_pages()
   drm/xe/vm: Implement userptr page pinning
   drm/xe/vm: Perform accounting of userptr pinned pages
   drm/xe/uapi: Support pinning of userptr vmas

  drivers/gpu/drm/xe/xe_vm.c       | 194 ++++++++++++++++++++++++-------
  drivers/gpu/drm/xe/xe_vm.h       |   9 ++
  drivers/gpu/drm/xe/xe_vm_types.h |  14 +++
  include/uapi/drm/xe_drm.h        |  18 +++
  4 files changed, 190 insertions(+), 45 deletions(-)

--
2.41.0
