Re: [Intel-gfx] [PATCH 24/38] drm/i915: Move GEM activity tracking into a common struct reservation_object

2016-09-26 Thread Joonas Lahtinen
On Tue, 2016-09-20 at 09:29 +0100, Chris Wilson wrote:
> +static struct intel_engine_cs *
> +last_write_engine(struct drm_i915_gem_object *obj)

This function is duplicated below; better to move it to a header as
__i915_gem_obj_last_write_engine?
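Something like this as a static inline, say in i915_drv.h, would match what
I have in mind (just a sketch of the suggestion, untested; name and placement
are up to you):

static inline struct intel_engine_cs *
__i915_gem_obj_last_write_engine(struct drm_i915_gem_object *obj)
{
	struct intel_engine_cs *engine = NULL;
	struct fence *fence;

	/* Peek at the exclusive (write) fence under RCU. */
	rcu_read_lock();
	fence = reservation_object_get_excl_rcu(obj->resv);
	rcu_read_unlock();

	/* Only a native, still unsignaled fence maps back to an engine. */
	if (fence && fence_is_i915(fence) && !fence_is_signaled(fence))
		engine = to_request(fence)->engine;
	fence_put(fence);

	return engine;
}

Then the two copies could simply share it.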


I'd say there is no ABI change now, as non-i915 fences were not
supported before either.

So with the duplicate code removed,

Reviewed-by: Joonas Lahtinen 

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation


[Intel-gfx] [PATCH 24/38] drm/i915: Move GEM activity tracking into a common struct reservation_object

2016-09-20 Thread Chris Wilson
In preparation to support many distinct timelines, we need to expand the
activity tracking on the GEM object to handle more than just a request
per engine. We already use the struct reservation_object on the dma-buf
to handle many fence contexts, so integrating that into the GEM object
itself is the preferred solution. (For example, we can now share the same
reservation_object between every consumer/producer using this buffer and
skip the manual import/export via dma-buf.)
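As a purely illustrative aside (not code from this patch), the payoff is that
waiting for all outstanding work on an object collapses into a single call on
the shared reservation object instead of a walk over per-engine request slots.
A hypothetical helper, assuming obj->resv as added by this patch:

#include <linux/reservation.h>

static int wait_for_obj_idle(struct drm_i915_gem_object *obj)
{
	long ret;

	/* Wait for every fence tracked on the object: all shared (read)
	 * fences plus the exclusive (write) fence.
	 */
	ret = reservation_object_wait_timeout_rcu(obj->resv,
						  true, /* wait_all */
						  true, /* interruptible */
						  MAX_SCHEDULE_TIMEOUT);
	return ret < 0 ? ret : 0;
}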

v2: Reimplement busy-ioctl (by walking the reservation object), postpone
the ABI change for another day. Similarly use the reservation object to
find the last_write request (if active and from i915) for choosing
display CS flips.
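Walking the reservation object can be pictured roughly as below. This is a
simplified, reference-taking sketch for illustration only (the ioctl in this
patch may do the walk differently); it reuses the fence_is_i915()/to_request()
helpers seen elsewhere in the patch, and the exec_id-based flag layout is only
indicative:

#include <linux/fence.h>
#include <linux/reservation.h>
#include <linux/slab.h>

static int busy_masks_from_resv(struct reservation_object *resv,
				unsigned int *read_mask,
				unsigned int *write_mask)
{
	struct fence *excl, **shared;
	unsigned int count, i;
	int ret;

	/* Snapshot the exclusive fence and the shared fence array,
	 * taking a reference on each fence.
	 */
	ret = reservation_object_get_fences_rcu(resv, &excl, &count, &shared);
	if (ret)
		return ret;

	*read_mask = 0;
	for (i = 0; i < count; i++) {
		/* Foreign or already signaled fences are invisible here. */
		if (fence_is_i915(shared[i]) && !fence_is_signaled(shared[i]))
			*read_mask |= 1 << to_request(shared[i])->engine->exec_id;
		fence_put(shared[i]);
	}
	kfree(shared);

	*write_mask = 0;
	if (excl) {
		if (fence_is_i915(excl) && !fence_is_signaled(excl))
			*write_mask = 1 << to_request(excl)->engine->exec_id;
		fence_put(excl);
	}

	return 0;
}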

Caveats:

 * busy-ioctl: busy-ioctl only reports on the native fences; it will not
warn of stalls (in set-domain-ioctl, pread/pwrite etc) if the object is
being rendered to by external fences. It also will not report the same
busy state as wait-ioctl (or polling on the dma-buf) in the same
circumstances (see the sketch after this list). On the plus side, it
does retain reporting of which *i915* engines are engaged with this
object.

 * non-blocking atomic modesets take a step backwards as the wait for
render completion blocks the ioctl. This is fixed in a subsequent
patch to use a fence instead for awaiting on the rendering, see
"drm/i915: Restore nonblocking awaits for modesetting"

 * dynamic array manipulation for shared-fences in reservation is slower
than the previous lockless static assignment (e.g. gem_exec_lut_handle
runtime on ivb goes from 42s to 72s). The runtime effect is far larger
than the overhead added to execbuf as indicated by perf - interesting
secondary effects?

 * loss of object-level retirement callbacks, emulated by VMA retirement
tracking.

 * minor loss of object-level last activity information from debugfs,
could be replaced with per-vma information if desired
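To make the wait-ioctl/dma-buf polling difference in the first caveat
concrete, here is a purely illustrative userspace check (the hypothetical
dmabuf_is_busy() helper is not part of this patch): polling the exported
dma-buf fd observes every fence on the reservation object, foreign ones
included, while I915_GEM_BUSY only ever sees i915's own requests.

#include <poll.h>

/* Illustrative only: a dma-buf fd polls as writable once all fences
 * (shared and exclusive) have signaled, so a zero-timeout poll acts as
 * a cross-driver busy query.
 */
static int dmabuf_is_busy(int dmabuf_fd)
{
	struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };

	return poll(&pfd, 1, 0) == 0; /* timed out => fences pending */
}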

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  32 ++--
 drivers/gpu/drm/i915/i915_drv.h            |  45 +
 drivers/gpu/drm/i915/i915_gem.c            | 277 +
 drivers/gpu/drm/i915/i915_gem_batch_pool.c |   3 +-
 drivers/gpu/drm/i915/i915_gem_dmabuf.c     |  46 +
 drivers/gpu/drm/i915/i915_gem_dmabuf.h     |  45 -
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  71 ++--
 drivers/gpu/drm/i915/i915_gem_gtt.c        |  32
 drivers/gpu/drm/i915/i915_gem_gtt.h        |   1 +
 drivers/gpu/drm/i915/i915_gem_request.c    |  48 +++--
 drivers/gpu/drm/i915/i915_gem_request.h    |  37 +---
 drivers/gpu/drm/i915/i915_gpu_error.c      |   6 +-
 drivers/gpu/drm/i915/intel_atomic_plane.c  |   2 -
 drivers/gpu/drm/i915/intel_display.c       | 131 --
 drivers/gpu/drm/i915/intel_drv.h           |   3 -
 15 files changed, 234 insertions(+), 545 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/i915_gem_dmabuf.h

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 355eec8f7cac..7ecdd5cc27dd 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -130,6 +130,23 @@ static u64 i915_gem_obj_total_ggtt_size(struct drm_i915_gem_object *obj)
return size;
 }
 
+static struct intel_engine_cs *
+last_write_engine(struct drm_i915_gem_object *obj)
+{
+   struct intel_engine_cs *engine = NULL;
+   struct fence *fence;
+
+   rcu_read_lock();
+   fence = reservation_object_get_excl_rcu(obj->resv);
+   rcu_read_unlock();
+
+   if (fence && fence_is_i915(fence) && !fence_is_signaled(fence))
+   engine = to_request(fence)->engine;
+   fence_put(fence);
+
+   return engine;
+}
+
 static void
 describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 {
@@ -138,11 +155,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
struct i915_vma *vma;
unsigned int frontbuffer_bits;
int pin_count = 0;
-   enum intel_engine_id id;
 
 lockdep_assert_held(&obj->base.dev->struct_mutex);
 
-   seq_printf(m, "%pK: %c%c%c%c%c %8zdKiB %02x %02x [ ",
+   seq_printf(m, "%pK: %c%c%c%c%c %8zdKiB %02x %02x %s%s%s",
   &obj->base,
   get_active_flag(obj),
   get_pin_flag(obj),
@@ -151,14 +167,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
   get_pin_mapped_flag(obj),
   obj->base.size / 1024,
   obj->base.read_domains,
-  obj->base.write_domain);
-   for_each_engine_id(engine, dev_priv, id)
-   seq_printf(m, "%x ",
-  i915_gem_active_get_seqno(&obj->last_read[id],
-  &obj->base.dev->struct_mutex));
-   seq_printf(m, "] %x %s%s%s",
-  i915_gem_active_get_seqno(&obj->last_write,
-  &obj->base.dev->struct_mutex),
+