Re: [RFC PATCH] fence: dma-buf cross-device synchronization (v12)

2013-08-15 Thread Maarten Lankhorst
On 12-08-13 17:43, Rob Clark wrote:
 On Mon, Jul 29, 2013 at 10:05 AM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 A fence can be attached to a buffer which is being filled or consumed
 by hw, to allow userspace to pass the buffer to another device without
 waiting.  For example, userspace can call the page_flip ioctl to display the
 next frame of graphics after kicking the GPU but while the GPU is still
 rendering.  The display device sharing the buffer with the GPU would
 attach a callback to get notified when the GPU's rendering-complete IRQ
 fires, to update the scan-out address of the display, without having to
 wake up userspace.

 A driver must allocate a fence context for each execution ring that can
 run in parallel. The function for this takes the number of contexts to
 allocate as its argument:
   + fence_context_alloc()

 A fence is a transient, one-shot deal.  It is allocated and attached
 to one or more dma-bufs.  When the one that attached it is done with
 the pending operation, it can signal the fence:
   + fence_signal()

 To get a rough approximation of whether a fence has fired, call:
   + fence_is_signaled()

 The dma-buf-mgr handles tracking, and waiting on, the fences associated
 with a dma-buf.

 A waiter on the fence can add an async callback:
   + fence_add_callback()

 The callback can optionally be cancelled with:
   + fence_remove_callback()

 To wait synchronously, optionally with a timeout:
   + fence_wait()
   + fence_wait_timeout()
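 
 A minimal consumer-side sketch of this API (not part of the patch; it
 assumes fence_func_t is void (*)(struct fence *, struct fence_cb *,
 void *priv), and my_output / my_output_queue_flip are hypothetical
 driver names):
 
   struct my_output {
       struct fence_cb flip_cb;    /* callback storage, driver-owned */
       /* ... */
   };
 
   static void flip_done(struct fence *fence, struct fence_cb *cb, void *priv)
   {
       struct my_output *out = priv;
 
       my_output_queue_flip(out);  /* hypothetical: program the scanout */
   }
 
   static int queue_flip_after_render(struct my_output *out, struct fence *fence)
   {
       int ret;
 
       /* flip_done may run from irq context once the fence signals */
       ret = fence_add_callback(fence, &out->flip_cb, flip_done, out);
       if (ret == -ENOENT) {
           /* fence already signaled: flip immediately */
           my_output_queue_flip(out);
           ret = 0;
       }
       return ret;
   }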

 A default software-only implementation is provided, which can be used
 by drivers attaching a fence to a buffer when they have no other means
 for hw sync.  But a memory-backed fence is also envisioned, because it
 is common that GPUs can write to, or poll on, some memory location for
 synchronization.  For example:

   fence = custom_get_fence(...);
   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
       struct dma_buf *fence_buf = seqno_fence->sync_buf;
       get_dma_buf(fence_buf);

       ... tell the hw the memory location to wait on ...
       custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
   } else {
       /* fall-back to sw sync */
       fence_add_callback(fence, my_cb);
   }
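 
 For reference, to_seqno_fence() in the example above can be a plain
 container_of() downcast; a sketch, in which the struct layout and the
 seqno_fence_ops table are assumptions (the real definitions live in the
 seqno-fence patch, not quoted here):
 
   struct seqno_fence {
       struct fence base;
       struct dma_buf *sync_buf;   /* memory the hw writes/polls */
       uint32_t seqno_ofs;         /* offset of the seqno in sync_buf */
   };
 
   static inline struct seqno_fence *to_seqno_fence(struct fence *fence)
   {
       if (fence->ops != &seqno_fence_ops) /* assumed dedicated ops table */
           return NULL;
       return container_of(fence, struct seqno_fence, base);
   }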

 On SoC platforms, if some other hw mechanism is provided for synchronizing
 between IP blocks, it could be supported as an alternate implementation
 with its own fence ops in a similar way.

 The enable_signaling callback is used to provide sw signaling in case a cpu
 waiter is requested, or when no compatible hardware signaling could be used.
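 
 To illustrate that contract, a driver-side hook might look like the
 following sketch (my_fence and the ring helpers are hypothetical):
 return false if the fence already passed, so the core signals it
 directly; otherwise arm an irq whose handler ends in fence_signal() and
 return true.
 
   static bool my_enable_signaling(struct fence *fence)
   {
       struct my_fence *f = container_of(fence, struct my_fence, base);
 
       if (my_ring_seqno_passed(f->ring, f->seqno))
           return false;           /* already done, core signals it */
 
       /* irq handler calls fence_signal() when the seqno passes */
       my_ring_enable_completion_irq(f->ring);
       return true;
   }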

 The intention is to provide a userspace interface (presumably via eventfd)
 later, to be used in conjunction with dma-buf's mmap support for sw access
 to buffers (or for userspace apps that would prefer to do their own
 synchronization).

 v1: Original
 v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
 that dma-fence didn't need to care about the sw->hw signaling path
 (it can be handled same as the sw->sw case), and therefore the fence ops
 can be simplified and more handled in the core.  So remove the signal,
 add_callback, cancel_callback, and wait ops, and replace with a simple
 enable_signaling() op which can be used to inform a fence supporting
 hw->hw signaling that one or more devices which do not support hw
 signaling are waiting (and therefore it should enable an irq or do
 whatever is necessary in order that the CPU is notified when the
 fence is passed).
 v3: Fix locking fail in attach_fence() and get_fence()
 v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
 we decided that we need to be able to attach one fence to N dma-bufs,
 so using the list_head in the dma-fence struct would be problematic.
 v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
 v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some
 comments about checking if the fence fired or not; that is broken by
 design. waitqueue_active during destruction is now fatal, since the
 signaller should be holding a reference in enable_signalling until it
 signalled the fence. Pass the original dma_fence_cb along, and call
 __remove_wait in the dma_fence_callback handler, so that no cleanup
 needs to be performed.
 v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
 the fence wasn't signaled yet, for example for hardware fences that may
 choose to signal blindly.
 v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
 the header and fixed the include mess; dma-fence.h now includes dma-buf.h.
 All members are now initialized, so kmalloc can be used for
 allocating a dma-fence. More documentation added.
 v9: Change compiler bitfields to flags, change return type of
 enable_signaling to bool. Rework dma_fence_wait. Added
 dma_fence_is_signaled and dma_fence_wait_timeout.
 s/dma// and change exports to non-GPL. Added fence_is_signaled and
 

Re: [RFC PATCH] fence: dma-buf cross-device synchronization (v12)

2013-08-15 Thread Rob Clark
On Thu, Aug 15, 2013 at 7:16 AM, Maarten Lankhorst
maarten.lankho...@canonical.com wrote:
 On 12-08-13 17:43, Rob Clark wrote:
 On Mon, Jul 29, 2013 at 10:05 AM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 +
[snip]
 +/**
 + * fence_add_callback - add a callback to be called when the fence
 + * is signaled
 + * @fence: [in] the fence to wait on
 + * @cb:    [in] the callback to register
 + * @func:  [in] the function to call
 + * @priv:  [in] the argument to pass to the function
 + *
 + * cb will be initialized by fence_add_callback; no initialization
 + * by the caller is required. Any number of callbacks can be registered
 + * to a fence, but a callback can only be registered to one fence at a time.
 + *
 + * Note that the callback can be called from an atomic context.  If
 + * fence is already signaled, this function will return -ENOENT (and
 + * *not* call the callback).
 + *
 + * Add a software callback to the fence. The same refcount restrictions
 + * apply as for fence_wait; however, the caller doesn't need to keep a
 + * refcount to the fence afterwards: when software access is enabled,
 + * the creator of the fence is required to keep the fence alive until
 + * after it signals with fence_signal. The callback itself can be called
 + * from irq context.
 + */
 +int fence_add_callback(struct fence *fence, struct fence_cb *cb,
 +                     fence_func_t func, void *priv)
 +{
 +   unsigned long flags;
 +   int ret = 0;
 +   bool was_set;
 +
 +   if (WARN_ON(!fence || !func))
 +           return -EINVAL;
 +
 +   if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 +           return -ENOENT;
 +
 +   spin_lock_irqsave(fence->lock, flags);
 +
 +   was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
 +
 +   if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 +           ret = -ENOENT;
 +   else if (!was_set && !fence->ops->enable_signaling(fence)) {
 +           __fence_signal(fence);
 +           ret = -ENOENT;
 +   }
 +
 +   if (!ret) {
 +           cb->func = func;
 +           cb->priv = priv;
 +           list_add_tail(&cb->node, &fence->cb_list);
 since the user is providing the 'struct fence_cb', why not drop the
 priv and func args, and have some cb-initialize macro, ie.

 INIT_FENCE_CB(foo->fence, cbfxn);

 and I guess we can just drop priv and let the user embed the fence_cb in
 whatever structure they like.  Ie. make it look a bit like how
 work_struct works.
 I don't mind killing priv. But an INIT_FENCE_CB macro is silly when all
 it would do is set cb->func. So passing it as an argument to
 fence_add_callback is fine, unless you have a better reason to do
 otherwise.

 INIT_WORK seems to do a bit more initialization than we need; it seems
 work can be more complicated than callbacks, because callbacks can only
 be called once while work can be rescheduled multiple times.

yeah, INIT_WORK does more.. although maybe some day we want
INIT_FENCE_CB to do more (ie. if we add some debug features to help
catch misuse of fence/fence_cb's).  And if nothing else, having it
look a bit like other constructs that we have in the kernel seems
useful.  And with my point below, you'd want INIT_FENCE_CB to do a
INIT_LIST_HEAD(), so it is (very) slightly more than just setting the
fxn ptr.

 maybe also, if (!list_empty(&cb->node)) return -EBUSY?
 I think checking for list_empty(&cb->node) is a terrible idea. This is
 no different from any other list corruption, and it's a programming
 error, not a runtime error. :-)

I was thinking, for crtc and page-flip, to embed the fence_cb in the
crtc.  You should only use the cb once at a time, but in this case you
might want to re-use it for the next page flip.  Having something to
catch cb misuse in this sort of scenario seems useful.

maybe how I am thinking to use fence_cb is not quite what you had in
mind.  I'm not sure.  I was trying to think how I could just directly
use fence/fence_cb in msm for everything (imported dmabufs or just
regular ol' gem buffers).
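
Spelled out, the work_struct-style usage under discussion would look
roughly like this sketch (INIT_FENCE_CB and the priv-less callback
signature are the *proposed* forms, not what the quoted patch
implements; my_crtc_finish_flip is hypothetical):

  struct my_crtc {
      /* ... */
      struct fence_cb flip_cb;      /* embedded, re-used per flip */
  };

  static void flip_done(struct fence *fence, struct fence_cb *cb)
  {
      struct my_crtc *crtc = container_of(cb, struct my_crtc, flip_cb);

      my_crtc_finish_flip(crtc);    /* hypothetical helper */
  }

  /* at page-flip time: */
  INIT_FENCE_CB(&crtc->flip_cb, flip_done);  /* proposed macro */
  fence_add_callback(fence, &crtc->flip_cb); /* priv dropped */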

 cb->node.next/prev may be NULL, which would fail with this check. The
 contents of cb->node are undefined before fence_add_callback is called.
 Calling fence_remove_callback on a callback that hasn't been added is
 undefined too. Calling fence_remove_callback works, but I'm thinking of
 changing the list_del_init to list_del, which would make calling
 fence_remove_callback twice a fatal error if CONFIG_DEBUG_LIST is
 enabled, and a possible memory corruption otherwise.
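 
 A sketch of the list_del variant described above (the signature is
 assumed; fence->lock is the spinlock pointer used in the quoted code).
 With list_del() instead of list_del_init(), removing the same cb twice
 trips CONFIG_DEBUG_LIST rather than going unnoticed:
 
   bool fence_remove_callback(struct fence *fence, struct fence_cb *cb)
   {
       unsigned long flags;
       bool ret;
 
       spin_lock_irqsave(fence->lock, flags);
       /* only valid if cb was actually added to this fence */
       ret = !list_empty(&cb->node);
       if (ret)
           list_del(&cb->node);    /* was list_del_init() */
       spin_unlock_irqrestore(fence->lock, flags);
       return ret;
   }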
 ...
 +
[snip]
 +
 +/**
 + * fence context counter: each execution context should have its own
 + * fence context; this allows checking whether fences belong to the
 + * same context. One device can have multiple separate contexts,
 + * and they're used if some engine can run independently of another.
 + */
 +extern atomic_t fence_context_counter;
 context-alloc should not be in the critical path.. I'd think probably
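
For reference, an allocator over this counter amounts to a single atomic
op; a sketch of what fence_context_alloc() might look like (its body is
not shown in the quoted excerpt):

  /* returns the first of @num consecutive context ids */
  unsigned fence_context_alloc(unsigned num)
  {
      WARN_ON(!num);
      return atomic_add_return(num, &fence_context_counter) - num;
  }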

Re: [RFC PATCH] fence: dma-buf cross-device synchronization (v12)

2013-08-15 Thread Maarten Lankhorst
On 15-08-13 15:14, Rob Clark wrote:
 On Thu, Aug 15, 2013 at 7:16 AM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 On 12-08-13 17:43, Rob Clark wrote:
 On Mon, Jul 29, 2013 at 10:05 AM, Maarten Lankhorst
 maarten.lankho...@canonical.com wrote:
 +
 [snip]
 +/**
 + * fence_add_callback - add a callback to be called when the fence
 + * is signaled
 + * @fence: [in] the fence to wait on
 + * @cb:    [in] the callback to register
 + * @func:  [in] the function to call
 + * @priv:  [in] the argument to pass to the function
 + *
 + * cb will be initialized by fence_add_callback; no initialization
 + * by the caller is required. Any number of callbacks can be registered
 + * to a fence, but a callback can only be registered to one fence at a time.
 + *
 + * Note that the callback can be called from an atomic context.  If
 + * fence is already signaled, this function will return -ENOENT (and
 + * *not* call the callback).
 + *
 + * Add a software callback to the fence. The same refcount restrictions
 + * apply as for fence_wait; however, the caller doesn't need to keep a
 + * refcount to the fence afterwards: when software access is enabled,
 + * the creator of the fence is required to keep the fence alive until
 + * after it signals with fence_signal. The callback itself can be called
 + * from irq context.
 + */
 +int fence_add_callback(struct fence *fence, struct fence_cb *cb,
 +                     fence_func_t func, void *priv)
 +{
 +   unsigned long flags;
 +   int ret = 0;
 +   bool was_set;
 +
 +   if (WARN_ON(!fence || !func))
 +           return -EINVAL;
 +
 +   if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 +           return -ENOENT;
 +
 +   spin_lock_irqsave(fence->lock, flags);
 +
 +   was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
 +
 +   if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 +           ret = -ENOENT;
 +   else if (!was_set && !fence->ops->enable_signaling(fence)) {
 +           __fence_signal(fence);
 +           ret = -ENOENT;
 +   }
 +
 +   if (!ret) {
 +           cb->func = func;
 +           cb->priv = priv;
 +           list_add_tail(&cb->node, &fence->cb_list);
 since the user is providing the 'struct fence_cb', why not drop the
 priv and func args, and have some cb-initialize macro, ie.

 INIT_FENCE_CB(foo->fence, cbfxn);

 and I guess we can just drop priv and let the user embed the fence_cb in
 whatever structure they like.  Ie. make it look a bit like how
 work_struct works.
 I don't mind killing priv. But an INIT_FENCE_CB macro is silly when all
 it would do is set cb->func. So passing it as an argument to
 fence_add_callback is fine, unless you have a better reason to do
 otherwise.

 INIT_WORK seems to do a bit more initialization than we need; it seems
 work can be more complicated than callbacks, because callbacks can only
 be called once while work can be rescheduled multiple times.
 yeah, INIT_WORK does more.. although maybe some day we want
 INIT_FENCE_CB to do more (ie. if we add some debug features to help
 catch misuse of fence/fence_cb's).  And if nothing else, having it
 look a bit like other constructs that we have in the kernel seems
 useful.  And with my point below, you'd want INIT_FENCE_CB to do a
 INIT_LIST_HEAD(), so it is (very) slightly more than just setting the
 fxn ptr.
I don't think using the list for that is a good idea.
 maybe also, if (!list_empty(&cb->node)) return -EBUSY?
 I think checking for list_empty(&cb->node) is a terrible idea. This is
 no different from any other list corruption, and it's a programming
 error, not a runtime error. :-)
 I was thinking, for crtc and page-flip, to embed the fence_cb in the
 crtc.  You should only use the cb once at a time, but in this case you
 might want to re-use it for the next page flip.  Having something to
 catch cb misuse in this sort of scenario seems useful.

 maybe how I am thinking to use fence_cb is not quite what you had in
 mind.  I'm not sure.  I was trying to think how I could just directly
 use fence/fence_cb in msm for everything (imported dmabufs or just
 regular ol' gem buffers).


 cb->node.next/prev may be NULL, which would fail with this check. The
 contents of cb->node are undefined before fence_add_callback is called.
 Calling fence_remove_callback on a callback that hasn't been added is
 undefined too. Calling fence_remove_callback works, but I'm thinking of
 changing the list_del_init to list_del, which would make calling
 fence_remove_callback twice a fatal error if CONFIG_DEBUG_LIST is
 enabled, and a possible memory corruption otherwise.
 ...
 +
 [snip]
 +
 +/**
 + * fence context counter: each execution context should have its own
 + * fence context; this allows checking whether fences belong to the
 + * same context. One device can have multiple separate contexts,
 + * and they're used if some engine can run independently of another.
 + */
 +extern 

Re: [RFC PATCH] fence: dma-buf cross-device synchronization (v12)

2013-08-12 Thread Rob Clark
On Mon, Jul 29, 2013 at 10:05 AM, Maarten Lankhorst
maarten.lankho...@canonical.com wrote:
 A fence can be attached to a buffer which is being filled or consumed
 by hw, to allow userspace to pass the buffer to another device without
 waiting.  For example, userspace can call the page_flip ioctl to display the
 next frame of graphics after kicking the GPU but while the GPU is still
 rendering.  The display device sharing the buffer with the GPU would
 attach a callback to get notified when the GPU's rendering-complete IRQ
 fires, to update the scan-out address of the display, without having to
 wake up userspace.

 A driver must allocate a fence context for each execution ring that can
 run in parallel. The function for this takes the number of contexts to
 allocate as its argument:
   + fence_context_alloc()

 A fence is a transient, one-shot deal.  It is allocated and attached
 to one or more dma-bufs.  When the one that attached it is done with
 the pending operation, it can signal the fence:
   + fence_signal()

 To get a rough approximation of whether a fence has fired, call:
   + fence_is_signaled()

 The dma-buf-mgr handles tracking, and waiting on, the fences associated
 with a dma-buf.

 A waiter on the fence can add an async callback:
   + fence_add_callback()

 The callback can optionally be cancelled with:
   + fence_remove_callback()

 To wait synchronously, optionally with a timeout:
   + fence_wait()
   + fence_wait_timeout()
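 
 A blocking-path sketch to complement the async callback (signatures are
 assumed here: fence_wait_timeout() taking an interruptible flag and a
 jiffies timeout, returning remaining jiffies, 0 on timeout, or a
 negative error):
 
   static int wait_for_fence(struct fence *fence)
   {
       long ret;
 
       /* block, interruptibly, for up to one second */
       ret = fence_wait_timeout(fence, true, msecs_to_jiffies(1000));
       if (ret == 0)
           return -ETIMEDOUT;
       if (ret < 0)
           return ret;             /* e.g. -ERESTARTSYS */
       return 0;                   /* fence signaled */
   }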

 A default software-only implementation is provided, which can be used
 by drivers attaching a fence to a buffer when they have no other means
 for hw sync.  But a memory-backed fence is also envisioned, because it
 is common that GPUs can write to, or poll on, some memory location for
 synchronization.  For example:

   fence = custom_get_fence(...);
   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
       struct dma_buf *fence_buf = seqno_fence->sync_buf;
       get_dma_buf(fence_buf);

       ... tell the hw the memory location to wait on ...
       custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
   } else {
       /* fall-back to sw sync */
       fence_add_callback(fence, my_cb);
   }

 On SoC platforms, if some other hw mechanism is provided for synchronizing
 between IP blocks, it could be supported as an alternate implementation
 with its own fence ops in a similar way.

 The enable_signaling callback is used to provide sw signaling in case a cpu
 waiter is requested, or when no compatible hardware signaling could be used.

 The intention is to provide a userspace interface (presumably via eventfd)
 later, to be used in conjunction with dma-buf's mmap support for sw access
 to buffers (or for userspace apps that would prefer to do their own
 synchronization).

 v1: Original
 v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
 that dma-fence didn't need to care about the sw->hw signaling path
 (it can be handled same as the sw->sw case), and therefore the fence ops
 can be simplified and more handled in the core.  So remove the signal,
 add_callback, cancel_callback, and wait ops, and replace with a simple
 enable_signaling() op which can be used to inform a fence supporting
 hw->hw signaling that one or more devices which do not support hw
 signaling are waiting (and therefore it should enable an irq or do
 whatever is necessary in order that the CPU is notified when the
 fence is passed).
 v3: Fix locking fail in attach_fence() and get_fence()
 v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
 we decided that we need to be able to attach one fence to N dma-bufs,
 so using the list_head in the dma-fence struct would be problematic.
 v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
 v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some
 comments about checking if the fence fired or not; that is broken by
 design. waitqueue_active during destruction is now fatal, since the
 signaller should be holding a reference in enable_signalling until it
 signalled the fence. Pass the original dma_fence_cb along, and call
 __remove_wait in the dma_fence_callback handler, so that no cleanup
 needs to be performed.
 v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
 the fence wasn't signaled yet, for example for hardware fences that may
 choose to signal blindly.
 v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
 the header and fixed the include mess; dma-fence.h now includes dma-buf.h.
 All members are now initialized, so kmalloc can be used for
 allocating a dma-fence. More documentation added.
 v9: Change compiler bitfields to flags, change return type of
 enable_signaling to bool. Rework dma_fence_wait. Added
 dma_fence_is_signaled and dma_fence_wait_timeout.
 s/dma// and change exports to non-GPL. Added fence_is_signaled and
 fence_enable_sw_signaling calls, add ability to override 

[RFC PATCH] fence: dma-buf cross-device synchronization (v12)

2013-07-29 Thread Maarten Lankhorst
A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer to another device without
waiting.  For example, userspace can call the page_flip ioctl to display the
next frame of graphics after kicking the GPU but while the GPU is still
rendering.  The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.

A driver must allocate a fence context for each execution ring that can
run in parallel. The function for this takes the number of contexts to
allocate as its argument:
  + fence_context_alloc()

A fence is a transient, one-shot deal.  It is allocated and attached
to one or more dma-bufs.  When the one that attached it is done with
the pending operation, it can signal the fence:
  + fence_signal()

To get a rough approximation of whether a fence has fired, call:
  + fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.

A waiter on the fence can add an async callback:
  + fence_add_callback()

The callback can optionally be cancelled with:
  + fence_remove_callback()

To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()
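
On the producer side, the driver that attached the fence signals it from
its completion interrupt; a sketch (the ring helpers and fence_put are
assumptions, only fence_signal() is guaranteed by this patch):

  static irqreturn_t render_done_irq(int irq, void *data)
  {
      struct my_ring *ring = data;
      struct fence *fence;

      /* hypothetical: pop each fence whose seqno has now passed */
      while ((fence = my_ring_pop_completed(ring)) != NULL) {
          fence_signal(fence);
          fence_put(fence);       /* assumed refcounting helper */
      }
      return IRQ_HANDLED;
  }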

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync.  But a memory-backed fence is also envisioned, because it
is common that GPUs can write to, or poll on, some memory location for
synchronization.  For example:

  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
      struct dma_buf *fence_buf = seqno_fence->sync_buf;
      get_dma_buf(fence_buf);

      ... tell the hw the memory location to wait on ...
      custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
      /* fall-back to sw sync */
      fence_add_callback(fence, my_cb);
  }

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.

The enable_signaling callback is used to provide sw signaling in case a cpu
waiter is requested, or when no compatible hardware signaling could be used.

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).

v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
that dma-fence didn't need to care about the sw->hw signaling path
(it can be handled same as the sw->sw case), and therefore the fence ops
can be simplified and more handled in the core.  So remove the signal,
add_callback, cancel_callback, and wait ops, and replace with a simple
enable_signaling() op which can be used to inform a fence supporting
hw->hw signaling that one or more devices which do not support hw
signaling are waiting (and therefore it should enable an irq or do
whatever is necessary in order that the CPU is notified when the
fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
we decided that we need to be able to attach one fence to N dma-bufs,
so using the list_head in the dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some
comments about checking if the fence fired or not; that is broken by
design. waitqueue_active during destruction is now fatal, since the
signaller should be holding a reference in enable_signalling until it
signalled the fence. Pass the original dma_fence_cb along, and call
__remove_wait in the dma_fence_callback handler, so that no cleanup
needs to be performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
the fence wasn't signaled yet, for example for hardware fences that may
choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
the header and fixed the include mess; dma-fence.h now includes dma-buf.h.
All members are now initialized, so kmalloc can be used for
allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
enable_signaling to bool. Rework dma_fence_wait. Added
dma_fence_is_signaled and dma_fence_wait_timeout.
s/dma// and change exports to non-GPL. Added fence_is_signaled and
fence_enable_sw_signaling calls, add ability to override the default
wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
the scheduler. Remove the fence lock and use a global spinlock instead;
this should