[PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access

2016-06-21 Thread Daniel Vetter
On Tue, Jun 21, 2016 at 08:04:00AM +0100, Chris Wilson wrote:
> Rendering operations to the dma-buf are tracked implicitly via the
> reservation_object (dmabuf->resv). This is used to allow poll() to
> wait upon outstanding rendering (or just query the current status of
> rendering). The dma-buf sync ioctl allows userspace to prepare the
> dma-buf for CPU access, which should include waiting upon rendering.
> (Some drivers may need to do more work to ensure that the dma-buf mmap
> is coherent as well as complete.)
>
> Signed-off-by: Chris Wilson 
> Cc: Sumit Semwal 
> Cc: Daniel Vetter 
> Cc: linux-media@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: linux-kernel@vger.kernel.org
> ---
>
> I'm wondering whether it makes sense just to always do the wait first.
> It is one of the first operations every driver has to perform. A driver
> that wants to implement it differently (e.g. to special-case native
> waits) will still require a wait on the reservation object to finish
> external rendering.

Worst case (if the driver also uses reservation objects internally) we'll
end up calling this twice. It should be cheap enough to do that. I'll add
a few folks who might want to chip in with an opinion ...
-Daniel
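
For reference, "always do the wait first" would amount to something like
the following shape for dma_buf_begin_cpu_access(), reusing the
__dma_buf_begin_cpu_access() helper the patch adds below (a sketch only,
not part of the patch as posted):

int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
                             enum dma_data_direction direction)
{
        int ret;

        if (WARN_ON(!dmabuf))
                return -EINVAL;

        /* The core waits on the implicit fences unconditionally ... */
        ret = __dma_buf_begin_cpu_access(dmabuf, direction);
        if (ret)
                return ret;

        /* ... and the driver hook is left with the coherency work,
         * possibly repeating the wait if it also tracks rendering via
         * reservation objects internally (the "worst case" above).
         */
        if (dmabuf->ops->begin_cpu_access)
                ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);

        return ret;
}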

> -Chris
>
> ---
>  drivers/dma-buf/dma-buf.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ddaee60ae52a..123f14b8e882 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>
> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +                                      enum dma_data_direction direction)
> +{
> +        bool write = (direction == DMA_BIDIRECTIONAL ||
> +                      direction == DMA_TO_DEVICE);
> +        struct reservation_object *resv = dmabuf->resv;
> +        long ret;
> +
> +        /* Wait on any implicit rendering fences */
> +        ret = reservation_object_wait_timeout_rcu(resv, write, true,
> +                                                  MAX_SCHEDULE_TIMEOUT);
> +        if (ret < 0)
> +                return ret;
> +
> +        return 0;
> +}
>
>  /**
>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> @@ -607,6 +623,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>
>          if (dmabuf->ops->begin_cpu_access)
>                  ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
> +        else
> +                ret = __dma_buf_begin_cpu_access(dmabuf, direction);
>
>          return ret;
>  }
> --
> 2.8.1
>

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
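
To spell out the wait semantics the helper encodes (a restatement of the
patch for clarity; the function name below is illustrative, not from the
patch):

/* CPU access direction -> reservation object wait mode */
static bool cpu_access_needs_wait_all(enum dma_data_direction direction)
{
        /* A CPU write has to wait for every fence, shared readers
         * included, so wait_all == true for DMA_TO_DEVICE and
         * DMA_BIDIRECTIONAL. A CPU read (DMA_FROM_DEVICE) only has to
         * wait for the exclusive (write) fence, so wait_all == false.
         */
        return direction == DMA_BIDIRECTIONAL ||
               direction == DMA_TO_DEVICE;
}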


[PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access

2016-06-21 Thread Chris Wilson
Rendering operations to the dma-buf are tracked implicitly via the
reservation_object (dmabuf->resv). This is used to allow poll() to
wait upon outstanding rendering (or just query the current status of
rendering). The dma-buf sync ioctl allows userspace to prepare the
dma-buf for CPU access, which should include waiting upon rendering.
(Some drivers may need to do more work to ensure that the dma-buf mmap
is coherent as well as complete.)

Signed-off-by: Chris Wilson 
Cc: Sumit Semwal 
Cc: Daniel Vetter 
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-kernel@vger.kernel.org
---

I'm wondering whether it makes sense just to always do the wait first.
It is one of the first operations every driver has to perform. A driver
that wants to implement it differently (e.g. to special-case native
waits) will still require a wait on the reservation object to finish
external rendering.
-Chris

---
 drivers/dma-buf/dma-buf.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ddaee60ae52a..123f14b8e882 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);

+static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+                                      enum dma_data_direction direction)
+{
+        bool write = (direction == DMA_BIDIRECTIONAL ||
+                      direction == DMA_TO_DEVICE);
+        struct reservation_object *resv = dmabuf->resv;
+        long ret;
+
+        /* Wait on any implicit rendering fences */
+        ret = reservation_object_wait_timeout_rcu(resv, write, true,
+                                                  MAX_SCHEDULE_TIMEOUT);
+        if (ret < 0)
+                return ret;
+
+        return 0;
+}

 /**
  * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
@@ -607,6 +623,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,

         if (dmabuf->ops->begin_cpu_access)
                 ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
+        else
+                ret = __dma_buf_begin_cpu_access(dmabuf, direction);

         return ret;
 }
-- 
2.8.1
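
For context, the sync ioctl flow the commit message refers to looks
roughly like this from userspace: DMA_BUF_IOCTL_SYNC brackets CPU access
to the mmap'ed dma-buf, and with this patch the SYNC_START step also
waits for any implicit rendering fences in the reservation object. A
sketch only; cpu_fill(), dmabuf_fd and len are illustrative, and the fd
is assumed to come from an exporter that supports mmap:

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static int cpu_fill(int dmabuf_fd, size_t len)
{
        struct dma_buf_sync sync = { 0 };
        void *ptr;
        int ret;

        ptr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                   dmabuf_fd, 0);
        if (ptr == MAP_FAILED)
                return -1;

        /* Prepare for CPU writes: with this patch the kernel waits for
         * outstanding rendering before returning.
         */
        sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
        ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
        if (ret == 0) {
                memset(ptr, 0, len);

                /* End CPU access so the exporter can flush caches if needed. */
                sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
                ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
        }

        munmap(ptr, len);
        return ret;
}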