Re: [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-06 Thread Semwal, Sumit
Hi Daniel,
On Tue, Mar 6, 2012 at 12:27 AM, Daniel Vetter daniel.vet...@ffwll.ch wrote:
 On Fri, Mar 2, 2012 at 23:24, Rob Clark robdcl...@gmail.com wrote:
 Perhaps we should check somewhere for required dmabuf ops fxns (like
 kmap_atomic here), rather than just calling unconditionally what might
 be a null ptr.  At least put it in the WARN_ON(), but it might be
 nicer to catch missing required fxns at export time, rather than
 waiting for an importer to try and call them.  That way it's less
 likely that newly added required functions go unnoticed.

 (same comment applies below for the non-atomic variant.. and possibly
 some other existing dmabuf ops)

 Agreed, I'll rework the patch to do that when rebasing onto Sumit's latest tree.
In addition, you'd not need to check for !dmabuf->ops since the export
should already catch it.
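
To illustrate, a minimal sketch of such an export-time check (the
exact list of mandatory callbacks here is an assumption, not taken
from the posted patch):

/* Sketch: called from dma_buf_export() so that a missing mandatory
 * callback fails loudly at export time instead of oopsing when an
 * importer first calls it. */
static bool dma_buf_ops_complete(const struct dma_buf_ops *ops)
{
	return ops
		&& ops->map_dma_buf
		&& ops->unmap_dma_buf
		&& ops->release
		&& ops->kmap		/* cpu access ops from this patch */
		&& ops->kmap_atomic;
}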

As I sent in the other mail a while back, could you please rebase on
for-next at git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git

Best regards,
~Sumit.
 -Daniel


Re: [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-05 Thread Daniel Vetter
On Fri, Mar 2, 2012 at 23:24, Rob Clark robdcl...@gmail.com wrote:
 Perhaps we should check somewhere for required dmabuf ops fxns (like
 kmap_atomic here), rather than just calling unconditionally what might
 be a null ptr.  At least put it in the WARN_ON(), but it might be
 nicer to catch missing required fxns at export time, rather than
 waiting for an importer to try and call them.  That way it's less
 likely that newly added required functions go unnoticed.

 (same comment applies below for the non-atomic variant.. and possibly
 some other existing dmabuf ops)

Agreed, I'll rework the patch to do that when rebasing onto Sumit's latest tree.
-Daniel
-- 
Daniel Vetter
daniel.vet...@ffwll.ch - +41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: [Linaro-mm-sig] [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-05 Thread Daniel Vetter
On Fri, Mar 2, 2012 at 23:53, Rob Clark rob.cl...@linaro.org wrote:
 Initially the expectation was that userspace would not pass a buffer
 to multiple subsystems for writing (or that if it did, it would get
 the undefined results that one could expect)..  so dealing w/
 synchronization was punted.

Imo synchronization should not be part of the dma_buf core, i.e.
userspace needs to ensure that access is synchronized.
begin/end_cpu_access are the coherency brackets (like map/unmap for
device dma). And if userspace asks for a gun and some bullets, the
kernel should just deliver. Even in drm/i915 gem land we don't (and
simply can't) make any promises about concurrent reads/writes/ioctls.
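
To make the bracket semantics concrete, here is a sketch of how a
kernel importer would use the calls from this patch (error paths
trimmed; the range and direction are purely illustrative):

/* Sketch: read the first page of an imported dma_buf from the cpu. */
static int importer_read_page0(struct dma_buf *buf)
{
	void *vaddr;
	int ret;

	ret = dma_buf_begin_cpu_access(buf, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	vaddr = dma_buf_kmap(buf, 0);		/* non-atomic, may sleep */
	/* ... read the page at vaddr ... */
	dma_buf_kunmap(buf, 0, vaddr);

	dma_buf_end_cpu_access(buf, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	return 0;
}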

 I expect, though, that one of the next steps is some sort of
 sync-object mechanism to supplement dmabuf

Imo the only reason to add sync objects as explicit things is to make
device-to-device sync more efficient by using hw semaphores and
signalling lines. Or maybe a quick irq handler in the kernel that
kicks off the next device. I don't think we should design these to make
userspace simpler.

Cheers, Daniel
-- 
Daniel Vetter
daniel.vet...@ffwll.ch - +41 (0) 79 365 57 48 - http://blog.ffwll.ch


Re: [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-02 Thread Rob Clark
On Thu, Mar 1, 2012 at 9:36 AM, Daniel Vetter daniel.vet...@ffwll.ch wrote:
 A big difference from other contenders in the field (like ion) is
 that this also supports highmem, so we have to split up the cpu
 access from the kernel side into a prepare and a kmap step.

 Prepare is allowed to fail and should do everything required so that
 the kmap calls can succeed (like swapin/backing storage allocation,
 flushing, ...).

 More in-depth explanations will follow in the follow-up documentation
 patch.

 Changes in v2:

 - Clear up begin_cpu_access confusion noticed by Sumit Semwal.
 - Don't automatically fall back from the _atomic variants to the
  non-atomic variants. The _atomic callbacks are not allowed to
  sleep, so we want exporters to make this decision explicit. The
  function signatures are explicit, so simpler exporters can still
  use the same function for both.
 - Make the unmap functions optional. Simpler exporters with permanent
  mappings don't need to do anything at unmap time.

 Signed-off-by: Daniel Vetter daniel.vet...@ffwll.ch
 ---
  drivers/base/dma-buf.c  |  120 +++
  include/linux/dma-buf.h |   60 +++
  2 files changed, 180 insertions(+), 0 deletions(-)

 diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
 index 1b11192..bf54b89 100644
 --- a/drivers/base/dma-buf.c
 +++ b/drivers/base/dma-buf.c
 @@ -285,3 +285,123 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,

  }
  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 +
 +
 +/**
 + * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
 + * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
 + * preparations. Coherency is only guaranteed in the specified range for the
 + * specified access direction.
 + * @dma_buf:   [in]    buffer to prepare cpu access for.
 + * @start:     [in]    start of range for cpu access.
 + * @len:       [in]    length of range for cpu access.
 + * @direction: [in]    direction of cpu access.
 + *
 + * Can return negative error values, returns 0 on success.
 + */
 +int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
 +                            enum dma_data_direction direction)
 +{
 +       int ret = 0;
 +
 +       if (WARN_ON(!dmabuf || !dmabuf->ops))
 +               return -EINVAL;
 +
 +       if (dmabuf->ops->begin_cpu_access)
 +               ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
 +
 +       return ret;
 +}
 +EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
 +
 +/**
 + * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
 + * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
 + * actions. Coherency is only guaranteed in the specified range for the
 + * specified access direction.
 + * @dma_buf:   [in]    buffer to complete cpu access for.
 + * @start:     [in]    start of range for cpu access.
 + * @len:       [in]    length of range for cpu access.
 + * @direction: [in]    direction of cpu access.
 + *
 + * This call must always succeed.
 + */
 +void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
 +                           enum dma_data_direction direction)
 +{
 +       WARN_ON(!dmabuf || !dmabuf->ops);
 +
 +       if (dmabuf->ops->end_cpu_access)
 +               dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
 +}
 +EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
 +
 +/**
 + * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
 + * space. The same restrictions as for kmap_atomic and friends apply.
 + * @dma_buf:   [in]    buffer to map page from.
 + * @page_num:  [in]    page in PAGE_SIZE units to map.
 + *
 + * This call must always succeed, any necessary preparations that might fail
 + * need to be done in begin_cpu_access.
 + */
 +void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
 +{
 +       WARN_ON(!dmabuf || !dmabuf->ops);
 +
 +       return dmabuf->ops->kmap_atomic(dmabuf, page_num);


Perhaps we should check somewhere for required dmabuf ops fxns (like
kmap_atomic here), rather than just calling unconditionally what might
be a null ptr.  At least put it in the WARN_ON(), but it might be
nicer to catch missing required fxns at export time, rather than
waiting for an importer to try and call them.  That way it's less
likely that newly added required functions go unnoticed.

(same comment applies below for the non-atomic variant.. and possibly
some other existing dmabuf ops)

BR,
-R

 +}
 +EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
 +
 +/**
 + * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
 + * @dma_buf:   [in]    buffer to unmap page from.
 + * @page_num:  [in]    page in PAGE_SIZE units to unmap.
 + * @vaddr:     [in]    kernel space pointer obtained from dma_buf_kmap_atomic.
 + *
 + * This call must always succeed.
 + */
 +void 
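
As an aside, the changelog's point that simple exporters can reuse one
function for both kmap hooks and skip the unmap hooks is easy to
illustrate with a toy exporter (the private struct and all names below
are invented; only the ops wiring reflects the patch):

struct toy_buffer {
	void *vaddr;	/* permanently mapped, lowmem backing storage */
};

static void *toy_kmap(struct dma_buf *dmabuf, unsigned long page_num)
{
	struct toy_buffer *toy = dmabuf->priv;

	/* No highmem here, so atomic and non-atomic mapping coincide. */
	return toy->vaddr + page_num * PAGE_SIZE;
}

static const struct dma_buf_ops toy_dma_buf_ops = {
	/* .map_dma_buf, .unmap_dma_buf, .release omitted for brevity */
	.kmap		= toy_kmap,
	.kmap_atomic	= toy_kmap,	/* same fn: it never sleeps */
	/* .kunmap and .kunmap_atomic left NULL: unmap is optional */
};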

Re: [Linaro-mm-sig] [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-02 Thread Chris Wilson
On Thu,  1 Mar 2012 16:36:00 +0100, Daniel Vetter daniel.vet...@ffwll.ch wrote:
 A big difference from other contenders in the field (like ion) is
 that this also supports highmem, so we have to split up the cpu
 access from the kernel side into a prepare and a kmap step.
 
 Prepare is allowed to fail and should do everything required so that
 the kmap calls can succeed (like swapin/backing storage allocation,
 flushing, ...).
 
 More in-depth explanations will follow in the follow-up documentation
 patch.
 
 Changes in v2:
 
 - Clear up begin_cpu_access confusion noticed by Sumit Semwal.
 - Don't automatically fall back from the _atomic variants to the
   non-atomic variants. The _atomic callbacks are not allowed to
   sleep, so we want exporters to make this decision explicit. The
   function signatures are explicit, so simpler exporters can still
   use the same function for both.
 - Make the unmap functions optional. Simpler exporters with permanent
   mappings don't need to do anything at unmap time.

Are we going to have to have a dma_buf->ops->begin_async_access(me, dir)
variant for coherency control of rendering with an imported dma_buf?
There is also no concurrency control here between multiple importers
doing simultaneous begin_cpu_access(). I presume that is going to be a
common function across all exporters so the midlayer might offer a
semaphore as a library function and then the
dma_buf->ops->begin_cpu_access() becomes mandatory as at a minimum it
has to point to the default implementation.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
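
One possible shape for the default implementation described above,
with a per-buffer lock held across the whole access window (the lock
field and its placement are assumptions, not part of the patch):

/* Sketch: the core serialises importers by holding a hypothetical
 * per-buffer mutex from begin_cpu_access() until end_cpu_access();
 * the exporter callback still runs for flushing etc.  Assumes begin
 * and end are called from the same task. */
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start,
			     size_t len, enum dma_data_direction dir)
{
	int ret = 0;

	mutex_lock(&dmabuf->cpu_access_lock);	/* hypothetical field */
	if (dmabuf->ops->begin_cpu_access)
		ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, dir);
	if (ret)
		mutex_unlock(&dmabuf->cpu_access_lock);

	return ret;
}

void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start,
			    size_t len, enum dma_data_direction dir)
{
	if (dmabuf->ops->end_cpu_access)
		dmabuf->ops->end_cpu_access(dmabuf, start, len, dir);
	mutex_unlock(&dmabuf->cpu_access_lock);
}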


Re: [Linaro-mm-sig] [PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-02 Thread Rob Clark
On Fri, Mar 2, 2012 at 4:38 PM, Chris Wilson ch...@chris-wilson.co.uk wrote:
 On Thu,  1 Mar 2012 16:36:00 +0100, Daniel Vetter daniel.vet...@ffwll.ch wrote:
 A big difference from other contenders in the field (like ion) is
 that this also supports highmem, so we have to split up the cpu
 access from the kernel side into a prepare and a kmap step.

 Prepare is allowed to fail and should do everything required so that
 the kmap calls can succeed (like swapin/backing storage allocation,
 flushing, ...).

 More in-depth explanations will follow in the follow-up documentation
 patch.

 Changes in v2:

 - Clear up begin_cpu_access confusion noticed by Sumit Semwal.
 - Don't automatically fall back from the _atomic variants to the
   non-atomic variants. The _atomic callbacks are not allowed to
   sleep, so we want exporters to make this decision explicit. The
   function signatures are explicit, so simpler exporters can still
   use the same function for both.
 - Make the unmap functions optional. Simpler exporters with permanent
   mappings don't need to do anything at unmap time.

 Are we going to have to have a dma_buf->ops->begin_async_access(me, dir)
 variant for coherency control of rendering with an imported dma_buf?
 There is also no concurrency control here between multiple importers
 doing simultaneous begin_cpu_access(). I presume that is going to be a
 common function across all exporters so the midlayer might offer a
 semaphore as a library function and then the
 dma_buf->ops->begin_cpu_access() becomes mandatory as at a minimum it
 has to point to the default implementation.

Initially the expectation was that userspace would not pass a buffer
to multiple subsystems for writing (or that if it did, it would get
the undefined results that one could expect)..  so dealing w/
synchronization was punted.

I expect, though, that one of the next steps is some sort of
sync-object mechanism to supplement dmabuf

BR,
-R

 -Chris

 --
 Chris Wilson, Intel Open Source Technology Centre


[PATCH 2/3] dma-buf: add support for kernel cpu access

2012-03-01 Thread Daniel Vetter
A big difference from other contenders in the field (like ion) is
that this also supports highmem, so we have to split up the cpu
access from the kernel side into a prepare and a kmap step.

Prepare is allowed to fail and should do everything required so that
the kmap calls can succeed (like swapin/backing storage allocation,
flushing, ...).

More in-depth explanations will follow in the follow-up documentation
patch.

Changes in v2:

- Clear up begin_cpu_access confusion noticed by Sumit Semwal.
- Don't automatically fall back from the _atomic variants to the
  non-atomic variants. The _atomic callbacks are not allowed to
  sleep, so we want exporters to make this decision explicit. The
  function signatures are explicit, so simpler exporters can still
  use the same function for both.
- Make the unmap functions optional. Simpler exporters with permanent
  mappings don't need to do anything at unmap time.

Signed-off-by: Daniel Vetter daniel.vet...@ffwll.ch
---
 drivers/base/dma-buf.c  |  120 +++
 include/linux/dma-buf.h |   60 +++
 2 files changed, 180 insertions(+), 0 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 1b11192..bf54b89 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -285,3 +285,123 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
+
+
+/**
+ * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
+ * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
+ * preparations. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf:   [in]    buffer to prepare cpu access for.
+ * @start:     [in]    start of range for cpu access.
+ * @len:       [in]    length of range for cpu access.
+ * @direction: [in]    direction of cpu access.
+ *
+ * Can return negative error values, returns 0 on success.
+ */
+int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+                            enum dma_data_direction direction)
+{
+       int ret = 0;
+
+       if (WARN_ON(!dmabuf || !dmabuf->ops))
+               return -EINVAL;
+
+       if (dmabuf->ops->begin_cpu_access)
+               ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
+
+/**
+ * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
+ * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
+ * actions. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf:   [in]    buffer to complete cpu access for.
+ * @start:     [in]    start of range for cpu access.
+ * @len:       [in]    length of range for cpu access.
+ * @direction: [in]    direction of cpu access.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+                           enum dma_data_direction direction)
+{
+       WARN_ON(!dmabuf || !dmabuf->ops);
+
+       if (dmabuf->ops->end_cpu_access)
+               dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
+}
+EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
+
+/**
+ * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
+ * space. The same restrictions as for kmap_atomic and friends apply.
+ * @dma_buf:   [in]    buffer to map page from.
+ * @page_num:  [in]    page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
+{
+       WARN_ON(!dmabuf || !dmabuf->ops);
+
+       return dmabuf->ops->kmap_atomic(dmabuf, page_num);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
+
+/**
+ * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
+ * @dma_buf:   [in]    buffer to unmap page from.
+ * @page_num:  [in]    page in PAGE_SIZE units to unmap.
+ * @vaddr:     [in]    kernel space pointer obtained from dma_buf_kmap_atomic.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
+                          void *vaddr)
+{
+       WARN_ON(!dmabuf || !dmabuf->ops);
+
+       if (dmabuf->ops->kunmap_atomic)
+               dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
+
+/**
+ * dma_buf_kmap - Map a page of the buffer object into kernel address space. The
+ * same restrictions as for kmap and friends apply.
+ * @dma_buf:   [in]    buffer to map page from.
+ * @page_num:  [in]    page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap(struct