On Mon, Oct 17, 2016 at 10:56:27AM +0100, Tvrtko Ursulin wrote:
> 
> On 14/10/2016 15:07, Chris Wilson wrote:
> >On Fri, Oct 14, 2016 at 02:32:03PM +0100, Tvrtko Ursulin wrote:
> >>On 14/10/2016 13:18, Chris Wilson wrote:
> >>>A while ago we switched from a contiguous array of pages into an sglist,
> >>>for that was both more convenient for mapping to hardware and avoided
> >>>the requirement for a vmalloc array of pages on every object. However,
> >>>certain GEM API calls (like pwrite, pread as well as performing
> >>>relocations) do desire access to individual struct pages. A quick hack
> >>>was to introduce a cache of the last access such that finding the
> >>>following page was quick - this works so long as the caller desired
> >>>sequential access. Walking backwards, or multiple callers, still hits a
> >>>slow linear search for each page. One solution is to store each
> >>>successful lookup in a radix tree.
> >>>
> >>>v2: Rewrite building the radixtree for clarity, hopefully.
> >>>
> >>>Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
> >>>---
> >>>  drivers/gpu/drm/i915/i915_drv.h         |  69 +++++-------
> >>>  drivers/gpu/drm/i915/i915_gem.c         | 179 +++++++++++++++++++++++++++++---
> >>>  drivers/gpu/drm/i915/i915_gem_stolen.c  |   4 +-
> >>>  drivers/gpu/drm/i915/i915_gem_userptr.c |   4 +-
> >>>  4 files changed, 193 insertions(+), 63 deletions(-)
> >>>
> >>>diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >>>index 38467dde1efe..53cf4b0e5359 100644
> >>>--- a/drivers/gpu/drm/i915/i915_drv.h
> >>>+++ b/drivers/gpu/drm/i915/i915_drv.h
> >>>@@ -2273,9 +2273,12 @@ struct drm_i915_gem_object {
> >>>   struct sg_table *pages;
> >>>   int pages_pin_count;
> >>>-  struct get_page {
> >>>-          struct scatterlist *sg;
> >>>-          int last;
> >>>+  struct i915_gem_object_page_iter {
> >>>+          struct scatterlist *sg_pos;
> >>>+          unsigned long sg_idx;
> >>We are not consistent in the code with the type used for the number of
> >>pages in an object. sg_alloc_table even takes an unsigned int for it,
> >>so for as long as we build them as we do, unsigned long is a waste.
> >I know. It's worrying; today there is a possibility that we overflow a
> >32-bit size. If this were counting in pages, we would have a few more
> >years of grace. All I can say is that we are fortunate that memory
> >remains expensive in the exabyte range.
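
As a back-of-the-envelope for the overflow point above (assuming 4KiB
pages; this is just arithmetic, not code from the patch): counting bytes,
a 32-bit size caps an object at 4GiB, while counting pages stretches the
same 32 bits to 16TiB.

#include <stdio.h>

int main(void)
{
	unsigned long long u32_span = 1ULL << 32;

	/* max object size if the 32-bit field counts bytes */
	printf("bytes: %llu GiB\n", u32_span >> 30);		/* 4 GiB */

	/* max object size if the same field counts 4KiB pages */
	printf("pages: %llu TiB\n", (u32_span << 12) >> 40);	/* 16 TiB */

	return 0;
}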
> >
> >>>@@ -4338,6 +4349,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
> >>>   obj->frontbuffer_ggtt_origin = ORIGIN_GTT;
> >>>   obj->madv = I915_MADV_WILLNEED;
> >>>+  INIT_RADIX_TREE(&obj->get_page.radix, GFP_ATOMIC | __GFP_NOWARN);
> >>Pros & cons of GFP_ATOMIC here? Versus perhaps going with the mutex?
> >>I don't know how much data the radix tree allocates with this, per
> >>node, but we can have a lot of pages. Would this create extra pressure
> >>on slab shrinking, and in turn on our objects?
> >The problem is that we require sg lookup on a !pagefault path, hence
> >mutexes and GFP_KERNEL turn out to be illegal. :|
> 
> Bummer. I don't know enough about the atomic reserve to judge how bad
> this might be then, because userspace could drain it easily after this
> work just by doing pread/pwrite on large objects.

Yes. The pathological case of touching the last page (and only the last
page) will cause a large amount of allocation (one page for every 512
pages in the object, and so on recursively - a 64MiB object will require
32 + 1 atomic page allocations, aiui). Not that bad, really.
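
To spell out the arithmetic, taking the one-allocation-per-512-pages
figure above at face value (the real radix tree fan-out depends on the
kernel config, so treat this as a sketch rather than an exact count):

#include <linux/kernel.h>

/* rough worst case for touching only the last page: one node per 512
 * pages, then the levels above that, recursively */
static unsigned long worst_case_atomic_allocs(unsigned long nr_pages)
{
	unsigned long allocs = 0;

	while (nr_pages > 1) {
		nr_pages = DIV_ROUND_UP(nr_pages, 512);
		allocs += nr_pages;
	}

	return allocs;
}

/* 64MiB / 4KiB = 16384 pages -> 32 leaf nodes + 1 above = 33 */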

> >>Hmm... I assume failures happen then, since you added this fallback.
> >>Due to GFP_ATOMIC?
> >No, this was always in the code, because malloc failures happen. Quite
> >often if you run igt ;)
> 
> But GFP_ATOMIC failures primarily, yes?

GFP_ATOMIC certainly increases the likelihood of failure (by removing
the direct reclaim and sleep paths); on the other hand, we do get a
reserve pool.
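
For reference, the shape of that trade-off on the insert side; only the
GFP_ATOMIC | __GFP_NOWARN flags come from the INIT_RADIX_TREE() hunk
quoted above, the helper itself is a sketch:

#include <linux/radix-tree.h>
#include <linux/scatterlist.h>

/* Sketch: opportunistically cache the sg chunk for page 'idx' from a
 * context that cannot sleep.  Because the tree was initialised with
 * GFP_ATOMIC | __GFP_NOWARN, the node allocation here skips direct
 * reclaim and only dips into the atomic reserve; a failed insert is
 * tolerated - the cache just stays sparse and later lookups fall back
 * to the linear walk. */
static void cache_sg_chunk(struct radix_tree_root *radix,
			   unsigned long idx, struct scatterlist *sg)
{
	if (radix_tree_insert(radix, idx, sg))
		return; /* likely -ENOMEM under pressure: not fatal */
}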
 
> It is a bit unfortunate that with this fancy lookup we can easily
> and silently fall back to linear lookup and lose the performance
> benefit. Do you know how often this happens? Should we have some
> stats collected and exported via debugfs to evaluate on
> realistically busy/loaded systems?

Hmm, in my head I had it wrongly sketched out as basically falling back
to the old behaviour after allocation failure. Ok, if I store the last
successful insertion (currently sg_idx-1) and the current scan position
separately, we can keep the old-style scheme in place after an allocation
failure and so not degrade performance in the worst case.
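
Roughly, a sketch of that degrade-gracefully scheme - sg_pos, sg_idx and
radix are the fields from the hunks quoted earlier, while the helper
name, the sg_start field and the handling of a page's offset within a
multi-page chunk are illustrative only, not the eventual patch:

#include <linux/radix-tree.h>
#include <linux/scatterlist.h>

/* Mirrors the quoted fields, plus a hypothetical sg_start so the sketch
 * is self-contained (the real iterator can restart from obj->pages->sgl). */
struct page_iter_sketch {
	struct scatterlist *sg_start;	/* first chunk (hypothetical) */
	struct scatterlist *sg_pos;	/* current linear scan position */
	unsigned long sg_idx;		/* first page index of sg_pos */
	struct radix_tree_root radix;	/* cache of successful inserts */
};

static struct scatterlist *
lookup_sg(struct page_iter_sketch *iter, unsigned long n)
{
	struct scatterlist *sg;
	unsigned long count;

	/* Fast path: this chunk was successfully cached earlier. */
	sg = radix_tree_lookup(&iter->radix, n);
	if (sg)
		return sg;

	/* Slow path: the old linear walk.  If n is behind the scan
	 * position, its cache entry must have been lost to an atomic
	 * allocation failure, so restart from the top - no worse than
	 * the old scheme's worst case. */
	if (n < iter->sg_idx) {
		iter->sg_pos = iter->sg_start;
		iter->sg_idx = 0;
	}

	while (iter->sg_pos) {
		count = iter->sg_pos->length >> PAGE_SHIFT;
		if (n < iter->sg_idx + count)
			break;

		iter->sg_idx += count;
		iter->sg_pos = sg_next(iter->sg_pos);
	}

	/* Opportunistically cache the result; -ENOMEM is tolerated. */
	if (iter->sg_pos)
		radix_tree_insert(&iter->radix, n, iter->sg_pos);

	return iter->sg_pos;
}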

I wasn't planning on seeing failures outside of igt/gem_shrink (with the
exception of people running firefox or chrome!)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
