[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-15 Thread Daniel Vetter
On Thu, Jul 14, 2016 at 04:24:41PM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 04:36:37PM +0200, Daniel Vetter wrote:
> > On Thu, Jul 14, 2016 at 02:39:54PM +0100, Chris Wilson wrote:
> > > On Thu, Jul 14, 2016 at 02:23:04PM +0100, Chris Wilson wrote:
> > > > The biggest reason I had against going the sw_sync only route was that
> > > > vgem should provide unprivileged fences and that through the bookkeeping
> > > > in vgem we can keep them safe, ensure that we don't leak random buffers
> > > > or fences. (And I need a source of foreign dma-buf with implicit fence
> > > > tracking with which I can try and break the driver.)
> > > 
> > > And for testing, passing around content + fences is more useful than
> > > passing fences alone.
> > 
> > Yup, agreed. But having fences free-standing isn't a real issue since
> > they're refcounted and the userspace parts (sync_file) will get cleaned
> > up on process exit at the latest. I'm not advocating for any behaviour
> > change at all, just for hiding these things in debugfs.
> 
> It's just a choice of api. We could equally hide it behind a separate
> config flag.
> 
> First question, are we happy that there is a legitimate usecase for fences
> on vgem?
> 
> If so, what enforced timeout on the fence should we use?
> 
> (I think that this ioctl api is correct, I don't foresee sw_sync being
> viable for unprivileged use.)
> 
> Then we can restrict this patch to add the safe interface, enable a bunch
> more tests and get on with discussing how to break the kernel "safely"!

I think the interface is sound. We could probably bikeshed the timeout
forever, but 10s is still reasonable imo.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Daniel Vetter
On Thu, Jul 14, 2016 at 02:39:54PM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 02:23:04PM +0100, Chris Wilson wrote:
> > The biggest reason I had against going the sw_sync only route was that
> > vgem should provide unprivileged fences and that through the bookkeeping
> > in vgem we can keep them safe, ensure that we don't leak random buffers
> > or fences. (And I need a source of foreign dma-buf with implicit fence
> > tracking with which I can try and break the driver.)
> 
> And for testing, passing around content + fences is more useful than
> passing fences alone.

Yup, agreed. But having fences free-standing isn't a real issue since
they're refcounted and the userspace parts (sync_file) will get cleaned up
on process exit at the latest. I'm not advocating for any behaviour change at
all, just for hiding these things in debugfs.
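
[For illustration, a minimal kernel-side sketch of the lifecycle argument
above, written against the 4.7-era fence/sync_file API (error handling
trimmed). The sync_file takes its own fence reference and is destroyed when
the last reference to its fd is dropped, i.e. on process exit at the latest:]

    #include <linux/errno.h>
    #include <linux/fcntl.h>
    #include <linux/fence.h>
    #include <linux/file.h>
    #include <linux/sync_file.h>

    /* Wrap a fence in a sync_file and hand it to userspace as an fd. */
    static int export_fence_to_fd(struct fence *fence)
    {
            struct sync_file *sync;
            int fd;

            fd = get_unused_fd_flags(O_CLOEXEC);
            if (fd < 0)
                    return fd;

            sync = sync_file_create(fence); /* grabs its own fence reference */
            if (!sync) {
                    put_unused_fd(fd);
                    return -ENOMEM;
            }

            /* Closing the fd (at process exit at the latest) releases the
             * sync_file, which in turn drops its fence reference. */
            fd_install(fd, sync->file);
            return fd;
    }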

Or maybe we could add a special (tainting) module option to vgem.ko which
enables this interface? That would be even less work, can easily be
integrated into igt (just set that knob at runtime, done), and with a
stern enough warning in dmesg + tainting the point should be clear. Of
course that switch would be off by default. Thoughts?
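
[For illustration, a minimal sketch of such a tainting module option; the
parameter name and plumbing here are hypothetical, not from any posted patch:]

    #include <linux/kernel.h>
    #include <linux/module.h>

    /* Hypothetical opt-in knob: off by default, and tainting when used. */
    static bool unsafe_fences;
    module_param(unsafe_fences, bool, 0400);
    MODULE_PARM_DESC(unsafe_fences,
                     "Allow fences that never signal (testing only, taints the kernel)");

    static int vgem_check_unsafe_fences(void)
    {
            if (!unsafe_fences)
                    return -EPERM;

            pr_warn("vgem: unsignalled fences enabled; this can hang other drivers\n");
            add_taint(TAINT_USER, LOCKDEP_STILL_OK);
            return 0;
    }
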
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Daniel Vetter
On Thu, Jul 14, 2016 at 02:23:04PM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 02:40:59PM +0200, Daniel Vetter wrote:
> > On Thu, Jul 14, 2016 at 11:11:02AM +0100, Chris Wilson wrote:
> > > On Thu, Jul 14, 2016 at 10:59:04AM +0100, Chris Wilson wrote:
> > > > On Thu, Jul 14, 2016 at 10:12:17AM +0200, Daniel Vetter wrote:
> > > > > On Thu, Jul 14, 2016 at 08:04:19AM +0100, Chris Wilson wrote:
> > > > > > vGEM buffers are useful for passing data between software clients
> > > > > > and hardware renderers. By allowing the user to create and attach
> > > > > > fences to the exported vGEM buffers (on the dma-buf), the user can
> > > > > > implement a deferred renderer and queue hardware operations like
> > > > > > flipping and then signal the buffer readiness (i.e. this allows the
> > > > > > user to schedule operations out-of-order, but have them complete
> > > > > > in-order).
> > > > > > 
> > > > > > This also makes it much easier to write tightly controlled
> > > > > > testcases for dma-buf fencing and signaling between hardware
> > > > > > drivers.
> > > > > > 
> > > > > > v2: Don't pretend the fences exist in an ordered timeline, but
> > > > > > allocate a separate fence-context for each fence so that the
> > > > > > fences are unordered.
> > > > > > v3: Make the debug output more interesting, and show the signaled
> > > > > > status.
> > > > > > 
> > > > > > Testcase: igt/vgem_basic/dmabuf-fence
> > > > > > Signed-off-by: Chris Wilson 
> > > > > > Cc: Sean Paul 
> > > > > > Cc: Zach Reizner 
> > > > > > Cc: Gustavo Padovan 
> > > > > > Cc: Daniel Vetter 
> > > > > > Acked-by: Zach Reizner 
> > > > > 
> > > > > One thing I completely forgot: This allows userspace to hang kernel
> > > > > drivers. i915 (and other gpu drivers) can recover using hangcheck,
> > > > > but dumber drivers (v4l, if that ever happens) probably never expect
> > > > > such a case. We've had a similar discussion with the userspace fences
> > > > > exposed in sw_fence, and decided to move all those ioctls into
> > > > > debugfs. I think we should do the same for this vgem-based debugging
> > > > > of implicit sync. Sorry for realizing this so late.
> > > > 
> > > > One of the very tests I make is to ensure that we recover from such a
> > > > hang. I don't see the difference between this and any of the other ways
> > > > userspace can shoot itself (and others) in the foot.
> > > 
> > > So one solution would be to make vgem fences automatically timeout (with
> > > a flag for root to override for the sake of testing hang detection).
> > 
> > The problem is other drivers. E.g. right now atomic helpers assume that
> > fences will signal, and can't recover if they don't. This is why drivers
> > where things might fail must have some recovery (hangcheck, timeout) to
> > make sure dma_fences always signal.
> 
> Urm, all the atomic helpers should cope with failures. The waits on dma-buf
> should be before any hardware is modified and so cancellation is trivial.
> Anyone using a foreign fence (or even a native one) must cope with the fact
> that it may not meet some deadline.
> 
> They have to. Anyone sharing an i915 dma-buf is susceptible to all kinds
> of (unprivileged) fun.
>  
> > Imo not even root should be allowed to break this, since it could put
> > drivers into a non-recoverable state. I think this must be restricted to
> > something known-unsafe-don't-enable-on-production like debugfs.
> 
> Providing fences is extremely useful, even for software buffers. (For
> the sake of argument, just imagine an asynchronous multithreaded llvmpipe
> wanting to support client fences for deferred rendering.) The only
> question in my mind is how much cotton wool to use.
> 
> > Other solutions which I don't like:
> > - Everyone needs to be able to recover. Given how much effort it is to
> >   just keep i915 hangcheck in working order I think it's totally
> >   illusory to assume that. At least once world+dog (atomic, v4l, ...) all
> >   consume/produce fences, including subsystems where the usual assumption
> >   holds that async ops complete.
> > 
> > - Really long timeouts are allowed for root in vgem. Could lead to even
> >   more fun in testing i915 hangchecks I think, so don't like that much
> >   either.
> 
> The whole point is in testing our handling before we become susceptible
> to real-world failure - because as you point out, not everyone guarantees
> that a fence will be signaled. I can't simply pass around an i915 dma-buf,
> because we may unwind them and in the process completely curtail being
> able to test a foreign fence that hangs.

I think that's where we differ in opinion: Right now we do have the
guarantee that every fence gets signalled in finite time. For drivers
where that isn't inherently guaranteed there must be a hangcheck to force
the completion.

The only exception thus far is the debugfs-only sw_fence interface.
-Daniel

> 
> > I think the best option is to just do the same as we've done for sw_fence,
> > and move it to debugfs. We could reuse the debugfs sw_fence interface to
> > create them (gives us more control as a bonus), and just have an ioctl to
> > attach fences to vgem (which could be unprivileged).

[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Chris Wilson
On Thu, Jul 14, 2016 at 04:36:37PM +0200, Daniel Vetter wrote:
> On Thu, Jul 14, 2016 at 02:39:54PM +0100, Chris Wilson wrote:
> > On Thu, Jul 14, 2016 at 02:23:04PM +0100, Chris Wilson wrote:
> > > The biggest reason I had against going the sw_sync only route was that
> > > vgem should provide unprivileged fences and that through the bookkeeping
> > > in vgem we can keep them safe, ensure that we don't leak random buffers
> > > or fences. (And I need a source of foreign dma-buf with implicit fence
> > > tracking with which I can try and break the driver.)
> > 
> > And for testing, passing around content + fences is more useful than
> > passing fences alone.
> 
> Yup, agreed. But having fences free-standing isn't a real issue since
> they're refcounted and the userspace parts (sync_file) will get cleaned
> up on process exit at the latest. I'm not advocating for any behaviour
> change at all, just for hiding these things in debugfs.

It's just a choice of api. We could equally hide it behind a separate
config flag.

First question, are we happy that there is a legitimate usecase for fences
on vgem?

If so, what enforced timeout on the fence should we use?

(I think that this ioctl api is correct, I don't foresee sw_sync being
viable for unprivileged use.)

Then we can restrict this patch to add the safe interface, enable a bunch
more tests and get on with discussing how to break the kernel "safely"!
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Daniel Vetter
On Thu, Jul 14, 2016 at 11:11:02AM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 10:59:04AM +0100, Chris Wilson wrote:
> > On Thu, Jul 14, 2016 at 10:12:17AM +0200, Daniel Vetter wrote:
> > > On Thu, Jul 14, 2016 at 08:04:19AM +0100, Chris Wilson wrote:
> > > > vGEM buffers are useful for passing data between software clients and
> > > > hardware renderers. By allowing the user to create and attach fences to
> > > > the exported vGEM buffers (on the dma-buf), the user can implement a
> > > > deferred renderer and queue hardware operations like flipping and then
> > > > signal the buffer readiness (i.e. this allows the user to schedule
> > > > operations out-of-order, but have them complete in-order).
> > > > 
> > > > This also makes it much easier to write tightly controlled testcases for
> > > > dma-buf fencing and signaling between hardware drivers.
> > > > 
> > > > v2: Don't pretend the fences exist in an ordered timeline, but allocate
> > > > a separate fence-context for each fence so that the fences are
> > > > unordered.
> > > > v3: Make the debug output more interesting, and show the signaled status.
> > > > 
> > > > Testcase: igt/vgem_basic/dmabuf-fence
> > > > Signed-off-by: Chris Wilson 
> > > > Cc: Sean Paul 
> > > > Cc: Zach Reizner 
> > > > Cc: Gustavo Padovan 
> > > > Cc: Daniel Vetter 
> > > > Acked-by: Zach Reizner 
> > > 
> > > One thing I completely forgot: This allows userspace to hang kernel
> > > drivers. i915 (and other gpu drivers) can recover using hangcheck, but
> > > dumber drivers (v4l, if that ever happens) probably never expect such a
> > > case. We've had a similar discussion with the userspace fences exposed in
> > > sw_fence, and decided to move all those ioctls into debugfs. I think we
> > > should do the same for this vgem-based debugging of implicit sync. Sorry
> > > for realizing this so late.
> > 
> > One of the very tests I make is to ensure that we recover from such a
> > hang. I don't see the difference between this and any of the other ways
> > userspace can shoot itself (and others) in the foot.
> 
> So one solution would be to make vgem fences automatically timeout (with
> a flag for root to override for the sake of testing hang detection).

The problem is other drivers. E.g. right now atomic helpers assume that
fences will signal, and can't recover if they don't. This is why drivers
where things might fail must have some recovery (hangcheck, timeout) to
make sure dma_fences always signal.

Imo not even root should be allowed to break this, since it could put
drivers into a non-recoverable state. I think this must be restricted to
something known-unsafe-don't-enable-on-production like debugfs.

Other solutions which I don't like:
- Everyone needs to be able to recover. Given how much effort it is to
  just keep i915 hangcheck in working order I think it's totally
  illusory to assume that. At least once world+dog (atomic, v4l, ...) all
  consume/produce fences, including subsystems where the usual assumption
  holds that async ops complete.

- Really long timeouts are allowed for root in vgem. Could lead to even
  more fun in testing i915 hangchecks I think, so don't like that much
  either.

I think the best option is to just do the same as we've done for sw_fence,
and move it to debugfs. We could reuse the debugfs sw_fence interface to
create them (gives us more control as a bonus), and just have an ioctl to
attach fences to vgem (which could be unprivileged).
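
[For illustration, roughly what the userspace side of that split could look
like. The sw_sync ioctls below are copied from the staging tree as it stood
around this time and may differ in the final debugfs interface; the vgem
attach step is the ioctl proposed in this thread:]

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>

    struct sw_sync_create_fence_data {
            __u32 value;
            char  name[32];
            __s32 fence;    /* out: fd of the new fence */
    };
    #define SW_SYNC_IOC_MAGIC         'W'
    #define SW_SYNC_IOC_CREATE_FENCE  _IOWR(SW_SYNC_IOC_MAGIC, 0, \
                                            struct sw_sync_create_fence_data)
    #define SW_SYNC_IOC_INC           _IOW(SW_SYNC_IOC_MAGIC, 1, __u32)

    /* Create an unsignalled fence fd on a debugfs sw_sync timeline. */
    static int create_sw_fence(int *timeline_out)
    {
            struct sw_sync_create_fence_data data = { .value = 1 };
            int timeline = open("/sys/kernel/debug/sync/sw_sync", O_RDWR);

            if (timeline < 0)
                    return -1;

            strcpy(data.name, "vgem-test");
            if (ioctl(timeline, SW_SYNC_IOC_CREATE_FENCE, &data) < 0) {
                    close(timeline);
                    return -1;
            }

            *timeline_out = timeline;
            return data.fence;  /* attach this to the vgem dma-buf, then... */
    }

    /* ...signal it later by advancing the timeline past the fence value. */
    static void signal_sw_fence(int timeline)
    {
            __u32 inc = 1;
            ioctl(timeline, SW_SYNC_IOC_INC, &inc);
    }
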
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Chris Wilson
On Thu, Jul 14, 2016 at 02:23:04PM +0100, Chris Wilson wrote:
> The biggest reason I had against going the sw_sync only route was that
> vgem should provide unprivileged fences and that through the bookkeeping
> in vgem we can keep them safe, ensure that we don't leak random buffers
> or fences. (And I need a source of foreign dma-buf with implicit fence
> tracking with which I can try and break the driver.)

And for testing, passing around content + fences is more useful than
passing fences alone.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Chris Wilson
On Thu, Jul 14, 2016 at 02:40:59PM +0200, Daniel Vetter wrote:
> On Thu, Jul 14, 2016 at 11:11:02AM +0100, Chris Wilson wrote:
> > On Thu, Jul 14, 2016 at 10:59:04AM +0100, Chris Wilson wrote:
> > > On Thu, Jul 14, 2016 at 10:12:17AM +0200, Daniel Vetter wrote:
> > > > On Thu, Jul 14, 2016 at 08:04:19AM +0100, Chris Wilson wrote:
> > > > > vGEM buffers are useful for passing data between software clients and
> > > > > hardware renderers. By allowing the user to create and attach fences to
> > > > > the exported vGEM buffers (on the dma-buf), the user can implement a
> > > > > deferred renderer and queue hardware operations like flipping and then
> > > > > signal the buffer readiness (i.e. this allows the user to schedule
> > > > > operations out-of-order, but have them complete in-order).
> > > > > 
> > > > > This also makes it much easier to write tightly controlled testcases
> > > > > for dma-buf fencing and signaling between hardware drivers.
> > > > > 
> > > > > v2: Don't pretend the fences exist in an ordered timeline, but
> > > > > allocate a separate fence-context for each fence so that the fences
> > > > > are unordered.
> > > > > v3: Make the debug output more interesting, and show the signaled
> > > > > status.
> > > > > 
> > > > > Testcase: igt/vgem_basic/dmabuf-fence
> > > > > Signed-off-by: Chris Wilson 
> > > > > Cc: Sean Paul 
> > > > > Cc: Zach Reizner 
> > > > > Cc: Gustavo Padovan 
> > > > > Cc: Daniel Vetter 
> > > > > Acked-by: Zach Reizner 
> > > > 
> > > > One thing I completely forgot: This allows userspace to hang kernel
> > > > drivers. i915 (and other gpu drivers) can recover using hangcheck, but
> > > > dumber drivers (v4l, if that ever happens) probably never expect such a
> > > > case. We've had a similar discussion with the userspace fences exposed in
> > > > sw_fence, and decided to move all those ioctls into debugfs. I think we
> > > > should do the same for this vgem-based debugging of implicit sync. Sorry
> > > > for realizing this so late.
> > > 
> > > One of the very tests I make is to ensure that we recover from such a
> > > hang. I don't see the difference between this and any of the other ways
> > > userspace can shoot itself (and others) in the foot.
> > 
> > So one solution would be to make vgem fences automatically timeout (with
> > a flag for root to override for the sake of testing hang detection).
> 
> The problem is other drivers. E.g. right now atomic helpers assume that
> fences will signal, and can't recover if they don't. This is why drivers
> where things might fail must have some recovery (hangcheck, timeout) to
> make sure dma_fences always signal.

Urm, all the atomic helpers should cope with failures. The waits on dma-buf
should be before any hardware is modified and so cancellation is trivial.
Anyone using a foreign fence (or even a native one) must cope with the fact
that it may not meet some deadline.

They have to. Anyone sharing an i915 dma-buf is susceptible to all kinds
of (unprivileged) fun.
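
[For illustration, the kind of bounded wait described above, sketched against
the 4.7-era reservation_object API: the wait happens with a timeout before
any hardware state is touched, so a stuck foreign fence degrades into an
error rather than a driver hang:]

    #include <linux/dma-buf.h>
    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/reservation.h>

    /* Wait, with a timeout, for all fences on an imported dma-buf before
     * committing anything to hardware; bail out cleanly on a stuck fence. */
    static int wait_before_commit(struct dma_buf *buf)
    {
            long ret;

            ret = reservation_object_wait_timeout_rcu(buf->resv,
                                                      true, /* wait_all */
                                                      true, /* interruptible */
                                                      10 * HZ);
            if (ret == 0)
                    return -ETIMEDOUT;  /* fence never signalled */
            if (ret < 0)
                    return ret;         /* e.g. -ERESTARTSYS */

            return 0;                   /* safe to touch the hardware */
    }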

> Imo not even root should be allowed to break this, since it could put
> drivers into a non-recoverable state. I think this must be restricted to
> something known-unsafe-don't-enable-on-production like debugfs.

Providing fences is extremely useful, even for software buffers. (For
the sake of argument, just imagine an asynchronous multithreaded llvmpipe
wanting to support client fences for deferred rendering.) The only
question in my mind is how much cotton wool to use.

> Other solutions which I don't like:
> - Everyone needs to be able to recover. Given how much effort it is to
>   just keep i915 hangcheck in working order I think it's totally
>   illusory to assume that. At least once world+dog (atomic, v4l, ...) all
>   consume/produce fences, including subsystems where the usual assumption
>   holds that async ops complete.
> 
> - Really long timeouts are allowed for root in vgem. Could lead to even
>   more fun in testing i915 hangchecks I think, so don't like that much
>   either.

The whole point is in testing our handling before we become susceptible
to real-world failure - because as you point out, not everyone guarantees
that a fence will be signaled. I can't simply pass around an i915 dma-buf,
because we may unwind them and in the process completely curtail being
able to test a foreign fence that hangs.

> I think the best option is to just do the same as we've done for sw_fence,
> and move it to debugfs. We could reuse the debugfs sw_fence interface to
> create them (gives us more control as a bonus), and just have an ioctl to
> attach fences to vgem (which could be unprivileged).

The biggest reason I had against going the sw_sync only route was that
vgem should provide unprivileged fences and that through the bookkeeping
in vgem we can keep them safe, ensure that we don't leak random buffers
or fences. (And I need a source of foreign dma-buf with implicit fence
tracking with which I can try and break the driver.)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Chris Wilson
On Thu, Jul 14, 2016 at 11:11:02AM +0100, Chris Wilson wrote:
> So one solution would be to make vgem fences automatically timeout (with
> a flag for root to override for the sake of testing hang detection).

diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
index b7da11419ad6..17c63c9a8ea0 100644
--- a/drivers/gpu/drm/vgem/vgem_fence.c
+++ b/drivers/gpu/drm/vgem/vgem_fence.c
@@ -28,6 +28,7 @@
 struct vgem_fence {
 	struct fence base;
 	struct spinlock lock;
+	struct timer_list timer;
 };
 
 static const char *vgem_fence_get_driver_name(struct fence *fence)
@@ -50,6 +51,14 @@ static bool vgem_fence_enable_signaling(struct fence *fence)
 	return true;
 }
 
+static void vgem_fence_release(struct fence *base)
+{
+	struct vgem_fence *fence = container_of(base, typeof(*fence), base);
+
+	del_timer_sync(&fence->timer);
+	fence_free(&fence->base);
+}
+
 static void vgem_fence_value_str(struct fence *fence, char *str, int size)
 {
 	snprintf(str, size, "%u", fence->seqno);
@@ -67,11 +76,21 @@ const struct fence_ops vgem_fence_ops = {
 	.enable_signaling = vgem_fence_enable_signaling,
 	.signaled = vgem_fence_signaled,
 	.wait = fence_default_wait,
+	.release = vgem_fence_release,
+
 	.fence_value_str = vgem_fence_value_str,
 	.timeline_value_str = vgem_fence_timeline_value_str,
 };
 
-static struct fence *vgem_fence_create(struct vgem_file *vfile)
+static void vgem_fence_timeout(unsigned long data)
+{
+	struct vgem_fence *fence = (struct vgem_fence *)data;
+
+	fence_signal(&fence->base);
+}
+
+static struct fence *vgem_fence_create(struct vgem_file *vfile,
+				       unsigned int flags)
 {
 	struct vgem_fence *fence;
 
@@ -83,6 +102,12 @@ static struct fence *vgem_fence_create(struct vgem_file *vfile)
 	fence_init(&fence->base, &vgem_fence_ops, &fence->lock,
 		   fence_context_alloc(1), 1);
 
+	setup_timer(&fence->timer, vgem_fence_timeout, (unsigned long)fence);
+
+	/* We force the fence to expire within 10s to prevent driver hangs */
+	if (!(flags & VGEM_FENCE_NOTIMEOUT))
+		mod_timer(&fence->timer, jiffies + 10*HZ);
+
 	return &fence->base;
 }
 
@@ -114,9 +139,12 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
 	struct fence *fence;
 	int ret;
 
-	if (arg->flags & ~VGEM_FENCE_WRITE)
+	if (arg->flags & ~(VGEM_FENCE_WRITE | VGEM_FENCE_NOTIMEOUT))
 		return -EINVAL;
 
+	if (arg->flags & VGEM_FENCE_NOTIMEOUT && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
 	if (arg->pad)
 		return -EINVAL;
 
@@ -128,7 +156,7 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
 	if (ret)
 		goto out;
 
-	fence = vgem_fence_create(vfile);
+	fence = vgem_fence_create(vfile, arg->flags);
 	if (!fence) {
 		ret = -ENOMEM;
 		goto out;
diff --git a/include/uapi/drm/vgem_drm.h b/include/uapi/drm/vgem_drm.h
index 352d2fae8de9..55fd08750773 100644
--- a/include/uapi/drm/vgem_drm.h
+++ b/include/uapi/drm/vgem_drm.h
@@ -45,7 +45,8 @@ extern "C" {
 struct drm_vgem_fence_attach {
 	__u32 handle;
 	__u32 flags;
-#define VGEM_FENCE_WRITE 0x1
+#define VGEM_FENCE_WRITE	0x1
+#define VGEM_FENCE_NOTIMEOUT	0x2
 	__u32 out_fence;
 	__u32 pad;
 };
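
[For illustration, the userspace side of the flag check above; this assumes
the DRM_IOCTL_VGEM_FENCE_ATTACH request number defined elsewhere in the
series, with error handling trimmed:]

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include "vgem_drm.h"

    static void attach_fences(int vgem_fd, uint32_t bo_handle)
    {
            struct drm_vgem_fence_attach attach = {
                    .handle = bo_handle,
                    .flags  = VGEM_FENCE_WRITE,
            };

            /* Unprivileged path: the fence auto-expires within 10s. */
            ioctl(vgem_fd, DRM_IOCTL_VGEM_FENCE_ATTACH, &attach);

            /* Root-only path: opt out of the timeout to exercise real hang
             * handling; unprivileged callers hit the capable() check above. */
            attach.flags = VGEM_FENCE_WRITE | VGEM_FENCE_NOTIMEOUT;
            if (ioctl(vgem_fd, DRM_IOCTL_VGEM_FENCE_ATTACH, &attach) < 0 &&
                errno == EPERM)
                    fprintf(stderr, "VGEM_FENCE_NOTIMEOUT needs CAP_SYS_ADMIN\n");
    }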


-- 
Chris Wilson, Intel Open Source Technology Centre


[Intel-gfx] [PATCH v3] drm/vgem: Attach sw fences to exported vGEM dma-buf (ioctl)

2016-07-14 Thread Chris Wilson
On Thu, Jul 14, 2016 at 10:59:04AM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 10:12:17AM +0200, Daniel Vetter wrote:
> > On Thu, Jul 14, 2016 at 08:04:19AM +0100, Chris Wilson wrote:
> > > vGEM buffers are useful for passing data between software clients and
> > > hardware renderers. By allowing the user to create and attach fences to
> > > the exported vGEM buffers (on the dma-buf), the user can implement a
> > > deferred renderer and queue hardware operations like flipping and then
> > > signal the buffer readiness (i.e. this allows the user to schedule
> > > operations out-of-order, but have them complete in-order).
> > > 
> > > This also makes it much easier to write tightly controlled testcases for
> > > dma-buf fencing and signaling between hardware drivers.
> > > 
> > > v2: Don't pretend the fences exist in an ordered timeline, but allocate
> > > a separate fence-context for each fence so that the fences are
> > > unordered.
> > > v3: Make the debug output more interesting, and show the signaled status.
> > > 
> > > Testcase: igt/vgem_basic/dmabuf-fence
> > > Signed-off-by: Chris Wilson 
> > > Cc: Sean Paul 
> > > Cc: Zach Reizner 
> > > Cc: Gustavo Padovan 
> > > Cc: Daniel Vetter 
> > > Acked-by: Zach Reizner 
> > 
> > One thing I completely forgot: This allows userspace to hang kernel
> > drivers. i915 (and other gpu drivers) can recover using hangcheck, but
> > dumber drivers (v4l, if that ever happens) probably never expect such a
> > case. We've had a similar discussion with the userspace fences exposed in
> > sw_fence, and decided to move all those ioctls into debugfs. I think we
> > should do the same for this vgem-based debugging of implicit sync. Sorry
> > for realizing this so late.
> 
> One of the very tests I make is to ensure that we recover from such a
> hang. I don't see the difference between this and any of the other ways
> userspace can shoot itself (and others) in the foot.

So one solution would be to make vgem fences automatically timeout (with
a flag for root to override for the sake of testing hang detection).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre