Re: [Intel-gfx] [PATCH 2/2] gpu/i915: use HMM mirror for userptr buffer object.

2018-11-22 Thread Daniel Vetter
On Thu, Nov 22, 2018 at 03:59:50PM +0200, Joonas Lahtinen wrote:
> Hi Jerome,
> 
> Bit late reply, but here goes :)
> 
> We're working quite hard to avoid pinning any pages unless they're in
> the GPU page tables. And when they are in the GPU page tables, they must
> stay pinned for the whole of that duration, because our GPUs cannot
> take a fault. And to avoid thrashing the GPU page tables, we leave
> objects in the page tables with the expectation that smart userspace
> recycles buffers.
> 
> From what I understand of your proposal, it wouldn't really make a
> difference for us in the number of pinned pages (which, I agree,
> we'd love to see go down). When we're unable to take a fault,
> the first use effectively forces us to pin the pages and keep them
> pinned to avoid thrashing the GPU page tables.
> 
> So from the i915 perspective, it mostly seems to be an exchange of
> one API for another for getting the pages. You already mentioned
> that the fast path is being worked on, which is an obvious difference.
> But is there some other improvement one would expect, beyond
> the page pinning?
> 
> Also, is the requirement for a single non-file-backed VMA planned to
> be eliminated, or is that an inherent restriction of the HMM_MIRROR
> feature? We're currently not imposing such a limitation.

I think a clear plus for HMM would be if it helps us fix the deadlocks
and races we're seeing. But I have no idea whether this gets us any
closer or not.
-Daniel

> 
> Regards, Joonas
> 
> Quoting jgli...@redhat.com (2018-09-10 03:57:36)
> > From: Jérôme Glisse 
> > 
> > This replaces the existing code that relies on get_user_pages() aka
> > GUP with code that uses an HMM mirror to mirror a range of virtual
> > addresses as a buffer object accessible by the GPU. There are no
> > functional changes from the userspace point of view.
> > 
> > From the kernel point of view we no longer pin pages for the userptr
> > buffer object, which is a welcome change (I am assuming that everyone
> > dislikes page pinning as much as I do).
> > 
> > Another change, from the kernel point of view, is that there is no
> > longer a fast path using get_user_pages_fast(); this can eventually
> > be added back through HMM.
> > 
> > Signed-off-by: Jérôme Glisse 
> > Cc: dri-de...@lists.freedesktop.org
> > Cc: David Airlie 
> > Cc: Daniel Vetter 
> > Cc: Chris Wilson 
> > Cc: Lionel Landwerlin 
> > Cc: Jani Nikula 
> > Cc: Joonas Lahtinen 
> > Cc: Rodrigo Vivi 
> > Cc: intel-gfx@lists.freedesktop.org

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 2/2] gpu/i915: use HMM mirror for userptr buffer object.

2018-11-22 Thread Joonas Lahtinen
Hi Jerome,

Bit late reply, but here goes :)

We're working quite hard to avoid pinning any pages unless they're in
the GPU page tables. And when they are in the GPU page tables, they must
stay pinned for the whole of that duration, because our GPUs cannot
take a fault. And to avoid thrashing the GPU page tables, we leave
objects in the page tables with the expectation that smart userspace
recycles buffers.

From what I understand of your proposal, it wouldn't really make a
difference for us in the number of pinned pages (which, I agree,
we'd love to see go down). When we're unable to take a fault,
the first use effectively forces us to pin the pages and keep them
pinned to avoid thrashing the GPU page tables.

So from the i915 perspective, it mostly seems to be an exchange of
one API for another for getting the pages. You already mentioned
that the fast path is being worked on, which is an obvious difference.
But is there some other improvement one would expect, beyond
the page pinning?

Also, is the requirement for a single non-file-backed VMA planned to
be eliminated, or is that an inherent restriction of the HMM_MIRROR
feature? We're currently not imposing such a limitation.

Regards, Joonas

Quoting jgli...@redhat.com (2018-09-10 03:57:36)
> From: Jérôme Glisse 
> 
> This replaces the existing code that relies on get_user_pages() aka
> GUP with code that uses an HMM mirror to mirror a range of virtual
> addresses as a buffer object accessible by the GPU. There are no
> functional changes from the userspace point of view.
> 
> From the kernel point of view we no longer pin pages for the userptr
> buffer object, which is a welcome change (I am assuming that everyone
> dislikes page pinning as much as I do).
> 
> Another change, from the kernel point of view, is that there is no
> longer a fast path using get_user_pages_fast(); this can eventually
> be added back through HMM.
> 
> Signed-off-by: Jérôme Glisse 
> Cc: dri-de...@lists.freedesktop.org
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Chris Wilson 
> Cc: Lionel Landwerlin 
> Cc: Jani Nikula 
> Cc: Joonas Lahtinen 
> Cc: Rodrigo Vivi 
> Cc: intel-gfx@lists.freedesktop.org


[Intel-gfx] [PATCH 2/2] gpu/i915: use HMM mirror for userptr buffer object.

2018-09-09 Thread jglisse
From: Jérôme Glisse 

This replaces the existing code that relies on get_user_pages() aka
GUP with code that uses an HMM mirror to mirror a range of virtual
addresses as a buffer object accessible by the GPU. There are no
functional changes from the userspace point of view.

From the kernel point of view we no longer pin pages for the userptr
buffer object, which is a welcome change (I am assuming that everyone
dislikes page pinning as much as I do).

Another change, from the kernel point of view, is that there is no
longer a fast path using get_user_pages_fast(); this can eventually
be added back through HMM.

Signed-off-by: Jérôme Glisse 
Cc: dri-de...@lists.freedesktop.org
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Chris Wilson 
Cc: Lionel Landwerlin 
Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
Cc: intel-gfx@lists.freedesktop.org
---
 drivers/gpu/drm/i915/i915_gem_userptr.c | 206 
 1 file changed, 102 insertions(+), 104 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 5e09b654b5ad..378aab438ebd 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -464,7 +464,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 
 static int
 __i915_gem_userptr_set_active(struct drm_i915_gem_object *obj,
- bool value)
+ struct hmm_range *range)
 {
int ret = 0;
 
@@ -486,86 +486,120 @@ __i915_gem_userptr_set_active(struct drm_i915_gem_object *obj,
/* In order to serialise get_pages with an outstanding
 * cancel_userptr, we must drop the struct_mutex and try again.
 */
-   if (!value)
+   if (range) {
+   if (!hmm_vma_range_done(range))
+   ret = -EAGAIN;
+   else
+   add_object(obj->userptr.mmu_object);
+   } else
del_object(obj->userptr.mmu_object);
-   else if (!work_pending(&obj->userptr.mmu_object->work))
-   add_object(obj->userptr.mmu_object);
-   else
-   ret = -EAGAIN;
	spin_unlock(&obj->userptr.mmu_object->mirror->lock);
 #endif
 
return ret;
 }
 
-static void
-__i915_gem_userptr_get_pages_worker(struct work_struct *_work)
+static int
+i915_gem_userptr_map(struct drm_i915_gem_object *obj, bool try)
 {
-   struct get_pages_work *work = container_of(_work, typeof(*work), work);
-   struct drm_i915_gem_object *obj = work->obj;
-   const int npages = obj->base.size >> PAGE_SHIFT;
+#if defined(CONFIG_HMM_MIRROR)
+   static const uint64_t i915_range_flags[HMM_PFN_FLAG_MAX] = {
+   (1 << 0), /* HMM_PFN_VALID */
+   (1 << 1), /* HMM_PFN_WRITE */
+   0 /* HMM_PFN_DEVICE_PRIVATE */
+   };
+   static const uint64_t i915_range_values[HMM_PFN_VALUE_MAX] = {
+   0xfffeUL, /* HMM_PFN_ERROR */
+   0, /* HMM_PFN_NONE */
+   0xfffcUL /* HMM_PFN_SPECIAL */
+   };
+
+   const unsigned long npages = obj->base.size >> PAGE_SHIFT;
+   struct mm_struct *mm = obj->userptr.mm->mm;
+   struct sg_table *pages;
+   struct hmm_range range;
struct page **pvec;
-   int pinned, ret;
-
-   ret = -ENOMEM;
-   pinned = 0;
-
-   pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-   if (pvec != NULL) {
-   struct mm_struct *mm = obj->userptr.mm->mm;
-   unsigned int flags = 0;
-
-   if (!i915_gem_object_is_readonly(obj))
-   flags |= FOLL_WRITE;
-
-   ret = -EFAULT;
-   if (mmget_not_zero(mm)) {
-   down_read(&mm->mmap_sem);
-   while (pinned < npages) {
-   ret = get_user_pages_remote
-   (work->task, mm,
-obj->userptr.ptr + pinned * PAGE_SIZE,
-npages - pinned,
-flags,
-pvec + pinned, NULL, NULL);
-   if (ret < 0)
-   break;
-
-   pinned += ret;
-   }
-   up_read(&mm->mmap_sem);
-   mmput(mm);
-   }
+   unsigned long i;
+   bool write = !i915_gem_object_is_readonly(obj);
+   int err;
+
+   range.pfns = kvmalloc_array(npages, sizeof(uint64_t),
+   try ? GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN : GFP_KERNEL);
+   if (range.pfns == NULL)
+   return try ? -EAGAIN : -ENOMEM;
+
+   range.pfn_shift = 12;
+   range.start = obj->userptr.ptr;
+   range.flags = i915_range_flags;
+   range.values = i915_range_values;
+   range.end = range.start + obj->base.size;
+
+