[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-12-01 Thread Semwal, Sumit
Hi Dave, Daniel, Rob,
>
> On Sun, Nov 27, 2011 at 12:29 PM, Rob Clark  wrote:
>>
>> On Sat, Nov 26, 2011 at 8:00 AM, Daniel Vetter  wrote:
>> > On Fri, Nov 25, 2011 at 17:28, Dave Airlie  wrote:
>> >> I've rebuilt my PRIME interface on top of dmabuf to see how it would
>> >> work,
>> >>
>> >> I've got primed gears running again on top, but I expect all my object
>> >> lifetime and memory ownership rules need fixing up (i.e. leaks like a
>> >> sieve).
>> >>
>> >> http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf
>> >>
>> >> has the i915/nouveau patches for the kernel to produce the prime
>> >> interface.
>> >
>> > I've noticed that your implementations for get_scatterlist (at least
>> > for the i915 driver) don't return the sg table mapped into the
>> > device address space. I've checked, and the documentation makes it
>> > clear that this should be the case (and we really need this to support
>> > certain insane hw), but the get/put_scatterlist names are a bit
>> > misleading. Proposal:
>> >
>> > - use struct sg_table instead of scatterlist like you've already done
>> > in your branch. Simply more consistent with the dma api.
>>
>> yup
>>
>> > - rename get/put_scatterlist into map/unmap for consistency with all
>> > the map/unmap dma api functions. The attachment would then serve as
>> > the abstract cookie to the backing storage, similar to how struct page
>> > * works as an abstract cookie for dma_map/unmap_page. The only special
>> > thing is the struct device * parameter, because that's already part of
>> > the attachment.
>>
>> yup
>>
>> > - add new wrapper functions dma_buf_map_attachment and
>> > dma_buf_unmap_attachment to hide all the pointer/vtable-chasing that
>> > we currently expose to users of this interface.
>>
>> I thought that was one of the earlier comments on the initial dmabuf
>> patch, but either way: yup
>
Thanks for your comments; I will incorporate all of these in the next
version I'll send out.
>>
>>
>> BR,
>> -R
>
BR,
Sumit.
>>
>>
>> > Comments?
>> >
>> > Cheers, Daniel
>> > --
>> > Daniel Vetter
>> > daniel.vetter at ffwll.ch - +41 (0) 79 364 57 48 - http://blog.ffwll.ch
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe linux-media"
>> > in
>> > the body of a message to majordomo at vger.kernel.org
>> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>> >
>
>


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-27 Thread Rob Clark
On Sat, Nov 26, 2011 at 8:00 AM, Daniel Vetter  wrote:
> On Fri, Nov 25, 2011 at 17:28, Dave Airlie  wrote:
>> I've rebuilt my PRIME interface on top of dmabuf to see how it would work,
>>
>> I've got primed gears running again on top, but I expect all my object
>> lifetime and memory ownership rules need fixing up (i.e. leaks like a
>> sieve).
>>
>> http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf
>>
>> has the i915/nouveau patches for the kernel to produce the prime interface.
>
> > I've noticed that your implementations for get_scatterlist (at least
> > for the i915 driver) don't return the sg table mapped into the
> > device address space. I've checked, and the documentation makes it
> > clear that this should be the case (and we really need this to support
> > certain insane hw), but the get/put_scatterlist names are a bit
> > misleading. Proposal:
> >
> > - use struct sg_table instead of scatterlist like you've already done
> > in your branch. Simply more consistent with the dma api.

yup

> - rename get/put_scatterlist into map/unmap for consistency with all
> the map/unmap dma api functions. The attachment would then serve as
> the abstract cookie to the backing storage, similar to how struct page
> * works as an abstract cookie for dma_map/unmap_page. The only special
> thing is the struct device * parameter, because that's already part of
> the attachment.

yup

> - add new wrapper functions dma_buf_map_attachment and
> dma_buf_unmap_attachment to hide all the pointer/vtable-chasing that
> we currently expose to users of this interface.

I thought that was one of the earlier comments on the initial dmabuf
patch, but either way: yup

BR,
-R

> Comments?
>
> Cheers, Daniel
> --
> Daniel Vetter
> daniel.vetter at ffwll.ch - +41 (0) 79 364 57 48 - http://blog.ffwll.ch
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-26 Thread Daniel Vetter
On Fri, Nov 25, 2011 at 17:28, Dave Airlie  wrote:
> I've rebuilt my PRIME interface on top of dmabuf to see how it would work,
>
> I've got primed gears running again on top, but I expect all my object
> lifetime and memory ownership rules need fixing up (i.e. leaks like a
> sieve).
>
> http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf
>
> has the i915/nouveau patches for the kernel to produce the prime interface.

I've noticed that your implementations for get_scatterlist (at least
for the i915 driver) don't return the sg table mapped into the
device address space. I've checked, and the documentation makes it
clear that this should be the case (and we really need this to support
certain insane hw), but the get/put_scatterlist names are a bit
misleading. Proposal:

- use struct sg_table instead of scatterlist like you've already done
in your branch. Simply more consistent with the dma api.

- rename get/put_scatterlist into map/unmap for consistency with all
the map/unmap dma api functions. The attachment would then serve as
the abstract cookie to the backing storage, similar to how struct page
* works as an abstract cookie for dma_map/unmap_page. The only special
thing is the struct device * parameter, because that's already part of
the attachment.

- add new wrapper functions dma_buf_map_attachment and
dma_buf_unmap_attachment to hide all the pointer/vtable-chasing that
we currently expose to users of this interface.

Comments?

Cheers, Daniel
-- 
Daniel Vetter
daniel.vetter at ffwll.ch - +41 (0) 79 364 57 48 - http://blog.ffwll.ch
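
For illustration, a minimal sketch of what the proposed wrappers could look
like -- the op names (map_dma_buf/unmap_dma_buf), the direction parameter and
the struct layout are assumptions layered on the proposal above, not code
from the RFC:

/* assumes the attachment carries a back-pointer to its dma_buf */
static inline struct sg_table *
dma_buf_map_attachment(struct dma_buf_attachment *attach,
                       enum dma_data_direction dir)
{
        /* hide the pointer/vtable chasing from importers */
        return attach->dmabuf->ops->map_dma_buf(attach, dir);
}

static inline void
dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
                         struct sg_table *sg)
{
        attach->dmabuf->ops->unmap_dma_buf(attach, sg);
}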


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-25 Thread Daniel Vetter
On Fri, Nov 25, 2011 at 02:13:22PM +, Dave Airlie wrote:
> On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal  wrote:
> > This is the first step in defining a dma buffer sharing mechanism.
> >
> > A new buffer object dma_buf is added, with operations and API to allow easy
> > sharing of this buffer object across devices.
> >
> > The framework allows:
> > - a new buffer-object to be created with fixed size.
> > - different devices to 'attach' themselves to this buffer, to facilitate
> >   backing storage negotiation, using dma_buf_attach() API.
> > - association of a file pointer with each user-buffer and associated
> >   allocator-defined operations on that buffer. This operation is called the
> >   'export' operation.
> > - this exported buffer-object to be shared with the other entity by asking
> >   for its 'file-descriptor (fd)', and sharing the fd across.
> > - a received fd to get the buffer object back, where it can be accessed
> >   using the associated exporter-defined operations.
> > - the exporter and user to share the scatterlist using get_scatterlist and
> >   put_scatterlist operations.
> >
> > At least one 'attach()' call is required to be made prior to calling the
> > get_scatterlist() operation.
> >
> > A couple of building blocks in get_scatterlist() are added to ease
> > introduction of sync'ing across exporter and users, and late allocation
> > by the exporter.
> >
> > mmap() file operation is provided for the associated 'fd', as a wrapper
> > over the optional allocator-defined mmap(), to be used by devices that
> > might need one.
> >
> > More details are there in the documentation patch.
> >
> 
> Some questions: I've started playing around with using this framework
> to do buffer sharing between DRM devices,
> 
> Why struct scatterlist and not struct sg_table? It seems like I really
> want to use an sg_table,

No reason at all besides that intel-gtt is using scatterlist internally
(and only kludges the sg_table together in an ad-hoc fashion) and so I
haven't noticed. sg_table for more consistency with the dma api sounds
good.

> I'm not convinced fd's are really useful over just some idr-allocated
> handle; so far I'm just returning the "fd" to userspace as a handle,
> and passing it back in on the other side, so I'm not really sure what an
> fd wins us here, apart from the mmap thing which I think shouldn't be
> here anyways.
> (if fd's do win us more we should probably record that in the docs patch).

Imo fds are nice because they're known and there's already all the
preexisting infrastructure for them around. And if we ever get fancy with
e.g. sync objects we can easily add poll support (or some insane ioctls).
But I agree that "we can mmap" is bust as a reason and should just die.
-Daniel
-- 
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48
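
A hedged sketch of the "preexisting infrastructure" point above: once a
buffer is an fd, userspace can hand it to another process over a UNIX
socket with SCM_RIGHTS, with no new kernel plumbing. This is standard
socket API usage, not code from the patch set:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass an exported dma-buf fd to another process over a UNIX socket. */
static ssize_t send_dmabuf_fd(int sock, int dmabuf_fd)
{
        char dummy = 'x';                       /* must send >= 1 byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))] = { 0 };
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = ctrl,
                .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;           /* fd-passing ancillary data */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));

        return sendmsg(sock, &msg, 0);
}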


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-25 Thread Dave Airlie
I've rebuilt my PRIME interface on top of dmabuf to see how it would work,

I've got primed gears running again on top, but I expect all my object
lifetime and memory ownership rules need fixing up (i.e. leaks like a
sieve).

http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf

has the i915/nouveau patches for the kernel to produce the prime interface.

Dave.


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-25 Thread Dave Airlie
> +struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
> +                                          struct device *dev)
> +{
> +       struct dma_buf_attachment *attach;
> +       int ret;
> +
> +       BUG_ON(!dmabuf || !dev);
> +
> +       mutex_lock(&dmabuf->lock);
> +
> +       attach = kzalloc(sizeof(struct dma_buf_attachment), GFP_KERNEL);
> +       if (attach == NULL)
> +               goto err_alloc;
> +
> +       attach->dev = dev;
> +       if (dmabuf->ops->attach) {
> +               ret = dmabuf->ops->attach(dmabuf, dev, attach);
> +               if (!ret)
> +                       goto err_attach;
> +       }
> +       list_add(&attach->node, &dmabuf->attachments);
> +

I would assume at some point this needs an
attach->dmabuf = dmabuf;
added.

Dave.
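
For concreteness, the suggested fix as a hypothetical hunk against the
attach path quoted above (not from a posted patch):

        attach->dev = dev;
+       attach->dmabuf = dmabuf;        /* back-pointer the review asks for */
        if (dmabuf->ops->attach) {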


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-11-25 Thread Dave Airlie
On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal  wrote:
> This is the first step in defining a dma buffer sharing mechanism.
>
> A new buffer object dma_buf is added, with operations and API to allow easy
> sharing of this buffer object across devices.
>
> The framework allows:
> - a new buffer-object to be created with fixed size.
> - different devices to 'attach' themselves to this buffer, to facilitate
>   backing storage negotiation, using dma_buf_attach() API.
> - association of a file pointer with each user-buffer and associated
>   allocator-defined operations on that buffer. This operation is called the
>   'export' operation.
> - this exported buffer-object to be shared with the other entity by asking for
>   its 'file-descriptor (fd)', and sharing the fd across.
> - a received fd to get the buffer object back, where it can be accessed using
>   the associated exporter-defined operations.
> - the exporter and user to share the scatterlist using get_scatterlist and
>   put_scatterlist operations.
>
> At least one 'attach()' call is required to be made prior to calling the
> get_scatterlist() operation.
>
> A couple of building blocks in get_scatterlist() are added to ease introduction
> of sync'ing across exporter and users, and late allocation by the exporter.
>
> mmap() file operation is provided for the associated 'fd', as a wrapper over the
> optional allocator-defined mmap(), to be used by devices that might need one.
>
> More details are there in the documentation patch.
>

Some questions: I've started playing around with using this framework
to do buffer sharing between DRM devices,

Why struct scatterlist and not struct sg_table? It seems like I really
want to use an sg_table,

I'm not convinced fd's are really useful over just some idr-allocated
handle; so far I'm just returning the "fd" to userspace as a handle,
and passing it back in on the other side, so I'm not really sure what an
fd wins us here, apart from the mmap thing which I think shouldn't be
here anyways.
(if fd's do win us more we should probably record that in the docs patch).

Dave.
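
To make the flow described above concrete, a hypothetical importer-side
sequence -- dma_buf_get/dma_buf_put, dma_buf_detach and the exact op
signatures are assumptions sketched from the description, not the RFC's
actual code:

/* Hypothetical importer-side helper: turn a received fd back into a
 * buffer object, attach, and fetch the scatterlist. */
static int import_and_map(int fd, struct device *dev)
{
        struct dma_buf *buf = dma_buf_get(fd);  /* fd shared by the exporter */
        struct dma_buf_attachment *attach;
        struct scatterlist *sgl;

        attach = dma_buf_attach(buf, dev);      /* at least one attach first */
        sgl = buf->ops->get_scatterlist(attach);/* fetch the backing storage */

        /* ... program the importing device using sgl ... */

        buf->ops->put_scatterlist(attach, sgl);
        dma_buf_detach(buf, attach);
        dma_buf_put(buf);
        return 0;
}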


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Daniel Vetter
On Wed, Oct 12, 2011 at 03:34:54PM +0100, Dave Airlie wrote:
> On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark  wrote:
> > On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie  wrote:
> >>> But then we'd need a different set of accessors for every different
> >>> drm/v4l/etc driver, wouldn't we?
> >>
> >> Not any more different than you need for this, you just have a new
> >> interface that you request a sw object from,
> >> then mmap that object, and underneath it knows who owns it in the kernel.
> >
> > oh, ok, so you are talking about a kernel level interface, rather than
> > userspace..
> >
> > but I guess in this case I don't quite see the difference. It amounts
> > to which fd you call mmap (or ioctl[*]) on.. If you use the dmabuf fd
> > directly then you don't have to pass around a 2nd fd.
> >
> > [*] there is nothing stopping defining some dmabuf ioctls (such as for
> > synchronization).. although the thinking was to keep it simple for
> > first version of dmabuf
> >
> 
> Yes a separate kernel level interface.
> 
> Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
> shoehorning in a sw mapping API isn't making it simpler.
> 
> The problem I have with implementing mmap on the sharing fd is that
> nothing says this should be purely optional and userspace shouldn't
> rely on it.
> 
> In the Intel GEM space alone you have two types of mapping, one direct
> to shmem, one via GTT; the GTT could even be a linear view. The
> intel guys initially did GEM mmaps direct to the shmem pages because
> it seemed simple, up until they
> had to do step two, which was to do mmaps on the GTT copy, and ended up
> having two separate mmap methods. I think the problem here is it seems
> deceptively simple to add this to the API now because the API is
> simple; however, I think in the future it'll become a burden that we'll
> have to work around.

Yeah, that's my feeling, too. Adding mmap sounds like a neat, simple idea
that could simplify things for simple devices like v4l. But as soon as
you're dealing with a real gpu, nothing is simple. Those who don't believe
this, just take a look at the data upload/download paths in the
open-source i915, nouveau and radeon drivers. Making this fast (and for gpus,
it needs to be fast) requires tons of tricks, special cases and jumping
through hoops.

You absolutely want the device-specific ioctls to do that. Adding a
generic mmap just makes matters worse, especially if userspace expects
this to work synchronized with everything else that is going on.

Cheers, Daniel
-- 
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark  wrote:
> On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie  wrote:
>>> But then we'd need a different set of accessors for every different
>>> drm/v4l/etc driver, wouldn't we?
>>
>> Not any more different than you need for this, you just have a new
>> interface that you request a sw object from,
>> then mmap that object, and underneath it knows who owns it in the kernel.
>
> oh, ok, so you are talking about a kernel level interface, rather than
> userspace..
>
> but I guess in this case I don't quite see the difference. It amounts
> to which fd you call mmap (or ioctl[*]) on.. If you use the dmabuf fd
> directly then you don't have to pass around a 2nd fd.
>
> [*] there is nothing stopping defining some dmabuf ioctls (such as for
> synchronization).. although the thinking was to keep it simple for
> first version of dmabuf
>

Yes a separate kernel level interface.

Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
shoehorning in a sw mapping API isn't making it simpler.

The problem I have with implementing mmap on the sharing fd is that
nothing says this should be purely optional and userspace shouldn't
rely on it.

In the Intel GEM space alone you have two types of mapping, one direct
to shmem, one via GTT; the GTT could even be a linear view. The
intel guys initially did GEM mmaps direct to the shmem pages because
it seemed simple, up until they
had to do step two, which was to do mmaps on the GTT copy, and ended up
having two separate mmap methods. I think the problem here is it seems
deceptively simple to add this to the API now because the API is
simple; however, I think in the future it'll become a burden that we'll
have to work around.

Dave.


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
> But then we'd need a different set of accessors for every different
> drm/v4l/etc driver, wouldn't we?

Not any more different than you need for this, you just have a new
interface that you request a sw object from,
then mmap that object, and underneath it knows who owns it in the kernel.

mmap just feels wrong in this API, which is a buffer sharing API not a
buffer mapping API.

> I guess if sharing a buffer between multiple drm devices, there is
> nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
> pass when the buffer is allocated, then you don't have to support
> dmabuf->mmap(), and instead mmap via device and use some sort of
> DRM_CPU_PREP/FINI ioctls for synchronization..

Or we could make a generic CPU accessor that we don't have to worry about.

Dave.


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
>
> well, the mmap is actually implemented by the buffer allocator
> (v4l/drm).. although not sure if this was the point

Then why not use the correct interface? Doing some sort of not-quite
generic interface isn't really helping anyone except adding an ABI
that we have to support.

If someone wants to bypass the current kernel APIs we should add a new
API for them not shove it into this generic buffer sharing layer.

> The intent was that this is for well defined formats.. ie. it would
> need to be a format that both v4l and drm understood in the first
> place for sharing to make sense at all..

How will you know the stride, to take a simple example? The userspace
had to create this buffer somehow and wants to share it with
"something"; you sound like
you really need another API, a simple accessor API that can
handle mmaps.

> Anyways, the basic reason is to handle random edge cases where you
> need sw access to the buffer. For example, you are decoding video and
> pull out a frame to generate a thumbnail w/ a sw jpeg encoder..

Again, doesn't sound like it should be part of this API, and also
sounds like the sw jpeg encoder will need more info about the buffer
anyways like stride and format.

> With this current scheme, synchronization could be handled in
> dmabufops->mmap() and vm_ops->close().. it is perhaps a bit heavy to
> require mmap/munmap for each sw access, but I suppose this isn't
> really for the high-performance use case. It is just so that some
> random bit of sw that gets passed a dmabuf handle without knowing who
> allocated it can have sw access if really needed.

So I think that's fine, write a sw accessor provider, don't go
overloading the buffer sharing code.

This API will limit what people can use this buffer sharing for with
pure hw accessors, you might say, oh but it's okay to fail the mmap
then, but the chances of sw handling that I'm not so sure of.

Dave.


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal  wrote:
> This is the first step in defining a dma buffer sharing mechanism.
>
> A new buffer object dma_buf is added, with operations and API to allow easy
> sharing of this buffer object across devices.
>
> The framework allows:
> - a new buffer-object to be created with fixed size.
> - different devices to 'attach' themselves to this buffer, to facilitate
>   backing storage negotiation, using dma_buf_attach() API.
> - association of a file pointer with each user-buffer and associated
>   allocator-defined operations on that buffer. This operation is called the
>   'export' operation.
> - this exported buffer-object to be shared with the other entity by asking for
>   its 'file-descriptor (fd)', and sharing the fd across.
> - a received fd to get the buffer object back, where it can be accessed using
>   the associated exporter-defined operations.
> - the exporter and user to share the scatterlist using get_scatterlist and
>   put_scatterlist operations.
>
> At least one 'attach()' call is required to be made prior to calling the
> get_scatterlist() operation.
>
> A couple of building blocks in get_scatterlist() are added to ease introduction
> of sync'ing across exporter and users, and late allocation by the exporter.
>
> mmap() file operation is provided for the associated 'fd', as a wrapper over the
> optional allocator-defined mmap(), to be used by devices that might need one.

Why is this needed? It really doesn't make sense to be mmaping objects
independent of some front-end like drm or v4l.

How will you know what contents are in them, how will you synchronise
access? Unless someone has a hard use-case for this I'd say we drop it
until someone does.

Dave.


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 9:34 AM, Dave Airlie  wrote:
> On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark  wrote:
>> On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie  wrote:
>>>> But then we'd need a different set of accessors for every different
>>>> drm/v4l/etc driver, wouldn't we?
>>>
>>> Not any more different than you need for this, you just have a new
>>> interface that you request a sw object from,
>>> then mmap that object, and underneath it knows who owns it in the kernel.
>>
>> oh, ok, so you are talking about a kernel level interface, rather than
>> userspace..
>>
>> but I guess in this case I don't quite see the difference. It amounts
>> to which fd you call mmap (or ioctl[*]) on.. If you use the dmabuf fd
>> directly then you don't have to pass around a 2nd fd.
>>
>> [*] there is nothing stopping defining some dmabuf ioctls (such as for
>> synchronization).. although the thinking was to keep it simple for
>> first version of dmabuf
>>
>
> Yes a separate kernel level interface.

I'm not against it, but if it is a device-independent interface, it
just seems like six of one, half-dozen of the other..

Ie. how does it differ if the dmabuf fd is the fd used for ioctl/mmap,
vs. some other /dev/buffer-sharer file that you open?

But I think maybe I'm misunderstanding what you have in mind?

BR,
-R

> Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
> shoehorning in a sw mapping API isn't making it simpler.
>
> The problem I have with implementing mmap on the sharing fd is that
> nothing says this should be purely optional and userspace shouldn't
> rely on it.
>
> In the Intel GEM space alone you have two types of mapping, one direct
> to shmem, one via GTT; the GTT could even be a linear view. The
> intel guys initially did GEM mmaps direct to the shmem pages because
> it seemed simple, up until they
> had to do step two, which was to do mmaps on the GTT copy, and ended up
> having two separate mmap methods. I think the problem here is it seems
> deceptively simple to add this to the API now because the API is
> simple; however, I think in the future it'll become a burden that we'll
> have to work around.
>
> Dave.
>


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie  wrote:
>> But then we'd need a different set of accessors for every different
>> drm/v4l/etc driver, wouldn't we?
>
> Not any more different than you need for this, you just have a new
> interface that you request a sw object from,
> then mmap that object, and underneath it knows who owns it in the kernel.

oh, ok, so you are talking about a kernel level interface, rather than
userspace..

but I guess in this case I don't quite see the difference.  It amounts
to which fd you call mmap (or ioctl[*]) on..  If you use the dmabuf fd
directly then you don't have to pass around a 2nd fd.

[*] there is nothing stopping defining some dmabuf ioctls (such as for
synchronization).. although the thinking was to keep it simple for
first version of dmabuf

BR,
-R

> mmap just feels wrong in this API, which is a buffer sharing API not a
> buffer mapping API.
>
>> I guess if sharing a buffer between multiple drm devices, there is
>> nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
>> pass when the buffer is allocated, then you don't have to support
>> dmabuf->mmap(), and instead mmap via device and use some sort of
>> DRM_CPU_PREP/FINI ioctls for synchronization..
>
> Or we could make a generic CPU accessor that we don't have to worry about.
>
> Dave.
>


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 8:35 AM, Dave Airlie  wrote:
>>
>> well, the mmap is actually implemented by the buffer allocator
>> (v4l/drm).. although not sure if this was the point
>
> Then why not use the correct interface? doing some sort of not-quite
> generic interface isn't really helping anyone except adding an ABI
> that we have to support.

But what if you don't know who allocated the buffer?  How do you know
what interface to use to mmap?

> If someone wants to bypass the current kernel APIs we should add a new
> API for them not shove it into this generic buffer sharing layer.
>
>> The intent was that this is for well defined formats.. ie. it would
>> need to be a format that both v4l and drm understood in the first
>> place for sharing to make sense at all..
>
> How will you know the stride, to take a simple example? The userspace
> had to create this buffer somehow and wants to share it with
> "something"; you sound like
> you really need another API, a simple accessor API that can
> handle mmaps.

Well, things like stride, width, height, color format, userspace needs
to know all this already, even for malloc()'d sw buffers.  The
assumption is userspace already has a way to pass this information
around so it was not required to be duplicated by dmabuf.

>> Anyways, the basic reason is to handle random edge cases where you
>> need sw access to the buffer. For example, you are decoding video and
>> pull out a frame to generate a thumbnail w/ a sw jpeg encoder..
>
> Again, doesn't sound like it should be part of this API, and also
> sounds like the sw jpeg encoder will need more info about the buffer
> anyways like stride and format.
>
>> With this current scheme, synchronization could be handled in
>> dmabufops->mmap() and vm_ops->close().. it is perhaps a bit heavy to
>> require mmap/munmap for each sw access, but I suppose this isn't
>> really for the high-performance use case. It is just so that some
>> random bit of sw that gets passed a dmabuf handle without knowing who
>> allocated it can have sw access if really needed.
>
> So I think that's fine, write a sw accessor provider, don't go
> overloading the buffer sharing code.

But then we'd need a different set of accessors for every different
drm/v4l/etc driver, wouldn't we?

> This API will limit what people can use this buffer sharing for with
> pure hw accessors, you might say, oh but it's okay to fail the mmap
> then, but the chances of sw handling that I'm not so sure of.

I'm not entirely sure which case you are worried about.. sharing buffers
between multiple GPUs that understand the same tiled formats?  I guess
that is a bit different from a case like a jpeg encoder that is passed
a dmabuf handle without any idea where it came from..

I guess if sharing a buffer between multiple drm devices, there is
nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
pass when the buffer is allocated, then you don't have to support
dmabuf->mmap(), and instead mmap via device and use some sort of
DRM_CPU_PREP/FINI ioctls for synchronization..

BR,
-R

> Dave.
>
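
For illustration, one shape the NOT_DMABUF_MMAPABLE / DRM_CPU_PREP-FINI
idea above could take -- these ioctls do not exist; the struct and the
numbers are invented for the sketch:

#include <linux/ioctl.h>
#include <linux/types.h>

/* Hypothetical per-device CPU-access bracket, as floated above. */
struct drm_cpu_access {
        __u32 handle;                   /* GEM handle of the shared buffer */
        __u32 flags;                    /* e.g. read-only vs. read-write */
};

/* Invented ioctl numbers, for illustration only. */
#define DRM_IOCTL_CPU_PREP _IOW('d', 0x50, struct drm_cpu_access)
#define DRM_IOCTL_CPU_FINI _IOW('d', 0x51, struct drm_cpu_access)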


[Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 7:41 AM, Dave Airlie  wrote:
> On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal  wrote:
>> This is the first step in defining a dma buffer sharing mechanism.
>>
>> A new buffer object dma_buf is added, with operations and API to allow easy
>> sharing of this buffer object across devices.
>>
>> The framework allows:
>> - a new buffer-object to be created with fixed size.
>> - different devices to 'attach' themselves to this buffer, to facilitate
>>   backing storage negotiation, using dma_buf_attach() API.
>> - association of a file pointer with each user-buffer and associated
>>   allocator-defined operations on that buffer. This operation is called the
>>   'export' operation.
>> - this exported buffer-object to be shared with the other entity by asking
>>   for its 'file-descriptor (fd)', and sharing the fd across.
>> - a received fd to get the buffer object back, where it can be accessed using
>>   the associated exporter-defined operations.
>> - the exporter and user to share the scatterlist using get_scatterlist and
>>   put_scatterlist operations.
>>
>> At least one 'attach()' call is required to be made prior to calling the
>> get_scatterlist() operation.
>>
>> A couple of building blocks in get_scatterlist() are added to ease
>> introduction of sync'ing across exporter and users, and late allocation
>> by the exporter.
>>
>> mmap() file operation is provided for the associated 'fd', as a wrapper
>> over the optional allocator-defined mmap(), to be used by devices that
>> might need one.
>
> Why is this needed? It really doesn't make sense to be mmaping objects
> independent of some front-end like drm or v4l.

well, the mmap is actually implemented by the buffer allocator
(v4l/drm).. although not sure if this was the point

> How will you know what contents are in them, how will you synchronise
> access? Unless someone has a hard use-case for this I'd say we drop it
> until someone does.

The intent was that this is for well defined formats.. ie. it would
need to be a format that both v4l and drm understood in the first
place for sharing to make sense at all..

Anyways, the basic reason is to handle random edge cases where you
need sw access to the buffer.  For example, you are decoding video and
pull out a frame to generate a thumbnail w/ a sw jpeg encoder..

On gstreamer 0.11 branch, for example, there is already a map/unmap
virtual method on the gst buffer for sw access (ie. same purpose as
PrepareAccess/FinishAccess in EXA).  The idea w/ dmabuf mmap() support
is that we could implement support to mmap()/munmap() before/after sw
access.

With this current scheme, synchronization could be handled in
dmabufops->mmap() and vm_ops->close()..  it is perhaps a bit heavy to
require mmap/munmap for each sw access, but I suppose this isn't
really for the high-performance use case.  It is just so that some
random bit of sw that gets passed a dmabuf handle without knowing who
allocated it can have sw access if really needed.

BR,
-R

> Dave.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
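
A hedged userspace sketch of the sw-access pattern described above:
bracketing a one-off CPU read with mmap()/munmap() on the dmabuf fd. The
buffer size and byte offset are assumed to be known out of band, as the
thread discusses; the helper name is invented for the sketch:

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

/* One-off CPU read of a shared buffer, e.g. for sw thumbnail encoding. */
static uint32_t read_pixel(int dmabuf_fd, size_t size, size_t byte_offset)
{
        uint32_t px;
        void *p = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);

        if (p == MAP_FAILED)
                return 0;
        px = *(uint32_t *)((char *)p + byte_offset);    /* sw access */
        munmap(p, size);        /* ends the access window (vm_ops->close) */
        return px;
}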


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal sumit.sem...@ti.com wrote:
 This is the first step in defining a dma buffer sharing mechanism.

 A new buffer object dma_buf is added, with operations and API to allow easy
 sharing of this buffer object across devices.

 The framework allows:
 - a new buffer-object to be created with fixed size.
 - different devices to 'attach' themselves to this buffer, to facilitate
  backing storage negotiation, using dma_buf_attach() API.
 - association of a file pointer with each user-buffer and associated
   allocator-defined operations on that buffer. This operation is called the
   'export' operation.
 - this exported buffer-object to be shared with the other entity by asking for
   its 'file-descriptor (fd)', and sharing the fd across.
 - a received fd to get the buffer object back, where it can be accessed using
   the associated exporter-defined operations.
 - the exporter and user to share the scatterlist using get_scatterlist and
   put_scatterlist operations.

 Atleast one 'attach()' call is required to be made prior to calling the
 get_scatterlist() operation.

 Couple of building blocks in get_scatterlist() are added to ease introduction
 of sync'ing across exporter and users, and late allocation by the exporter.

 mmap() file operation is provided for the associated 'fd', as wrapper over the
 optional allocator defined mmap(), to be used by devices that might need one.

Why is this needed? it really doesn't make sense to be mmaping objects
independent of some front-end like drm or v4l.

how will you know what contents are in them, how will you synchronise
access. Unless someone has a hard use-case for this I'd say we drop it
until someone does.

Dave.
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 7:41 AM, Dave Airlie airl...@gmail.com wrote:
 On Tue, Oct 11, 2011 at 10:23 AM, Sumit Semwal sumit.sem...@ti.com wrote:
 This is the first step in defining a dma buffer sharing mechanism.

 A new buffer object dma_buf is added, with operations and API to allow easy
 sharing of this buffer object across devices.

 The framework allows:
 - a new buffer-object to be created with fixed size.
 - different devices to 'attach' themselves to this buffer, to facilitate
  backing storage negotiation, using dma_buf_attach() API.
 - association of a file pointer with each user-buffer and associated
   allocator-defined operations on that buffer. This operation is called the
   'export' operation.
 - this exported buffer-object to be shared with the other entity by asking 
 for
   its 'file-descriptor (fd)', and sharing the fd across.
 - a received fd to get the buffer object back, where it can be accessed using
   the associated exporter-defined operations.
 - the exporter and user to share the scatterlist using get_scatterlist and
   put_scatterlist operations.

 Atleast one 'attach()' call is required to be made prior to calling the
 get_scatterlist() operation.

 Couple of building blocks in get_scatterlist() are added to ease introduction
 of sync'ing across exporter and users, and late allocation by the exporter.

 mmap() file operation is provided for the associated 'fd', as wrapper over 
 the
 optional allocator defined mmap(), to be used by devices that might need one.

 Why is this needed? it really doesn't make sense to be mmaping objects
 independent of some front-end like drm or v4l.

well, the mmap is actually implemented by the buffer allocator
(v4l/drm).. although not sure if this was the point

 how will you know what contents are in them, how will you synchronise
 access. Unless someone has a hard use-case for this I'd say we drop it
 until someone does.

The intent was that this is for well defined formats.. ie. it would
need to be a format that both v4l and drm understood in the first
place for sharing to make sense at all..

Anyways, the basic reason is to handle random edge cases where you
need sw access to the buffer.  For example, you are decoding video and
pull out a frame to generate a thumbnail w/ a sw jpeg encoder..

On gstreamer 0.11 branch, for example, there is already a map/unmap
virtual method on the gst buffer for sw access (ie. same purpose as
PrepareAccess/FinishAccess in EXA).  The idea w/ dmabuf mmap() support
is that we could implement support to mmap()/munmap() before/after sw
access.

With this current scheme, synchronization could be handled in
dmabufops-mmap() and vm_ops-close()..  it is perhaps a bit heavy to
require mmap/munmap for each sw access, but I suppose this isn't
really for the high-performance use case.  It is just so that some
random bit of sw that gets passed a dmabuf handle without knowing who
allocated it can have sw access if really needed.

BR,
-R

 Dave.
 --
 To unsubscribe from this list: send the line unsubscribe linux-media in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie

 well, the mmap is actually implemented by the buffer allocator
 (v4l/drm).. although not sure if this was the point

Then why not use the correct interface? doing some sort of not-quite
generic interface isn't really helping anyone except adding an ABI
that we have to support.

If someone wants to bypass the current kernel APIs we should add a new
API for them not shove it into this generic buffer sharing layer.

 The intent was that this is for well defined formats.. ie. it would
 need to be a format that both v4l and drm understood in the first
 place for sharing to make sense at all..

How will you know the stride, to take a simple example? The userspace
had to create this buffer somehow and wants to share it with
something; it sounds like
you really need another API, a simple accessor API that can
handle mmaps.

 Anyways, the basic reason is to handle random edge cases where you
 need sw access to the buffer.  For example, you are decoding video and
 pull out a frame to generate a thumbnail w/ a sw jpeg encoder..

Again, doesn't sound like it should be part of this API, and also
sounds like the sw jpeg encoder will need more info about the buffer
anyways, like stride and format.

 With this current scheme, synchronization could be handled in
 dmabuf->ops->mmap() and vm_ops->close()..  it is perhaps a bit heavy to
 require mmap/munmap for each sw access, but I suppose this isn't
 really for the high-performance use case.  It is just so that some
 random bit of sw that gets passed a dmabuf handle without knowing who
 allocated it can have sw access if really needed.

So I think that's fine; write a sw accessor provider, don't go
overloading the buffer sharing code.

This API will limit what people can use this buffer sharing for with
pure hw accessors. You might say it's okay to fail the mmap then, but
I'm not so sure sw would handle that.

Dave.


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 8:35 AM, Dave Airlie airl...@gmail.com wrote:

 well, the mmap is actually implemented by the buffer allocator
 (v4l/drm).. although not sure if this was the point

 Then why not use the correct interface? Doing some sort of not-quite-generic
 interface isn't really helping anyone except adding an ABI
 that we have to support.

But what if you don't know who allocated the buffer?  How do you know
what interface to use to mmap?

 If someone wants to bypass the current kernel APIs we should add a new
 API for them not shove it into this generic buffer sharing layer.

 The intent was that this is for well defined formats.. ie. it would
 need to be a format that both v4l and drm understood in the first
 place for sharing to make sense at all..

 How will you know the stride, to take a simple example? The userspace
 had to create this buffer somehow and wants to share it with
 something; it sounds like
 you really need another API, a simple accessor API that can
 handle mmaps.

Well, things like stride, width, height, color format: userspace needs
to know all of this already, even for malloc()'d sw buffers.  The
assumption is that userspace already has a way to pass this information
around, so it did not need to be duplicated by dmabuf.
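
For illustration, the sort of thing userspace would pass next to the fd; this
struct is entirely hypothetical, dmabuf itself defines nothing of the kind:

#include <stdint.h>

struct shared_image {		/* hypothetical userspace-side description */
	int	 dmabuf_fd;	/* the only part dmabuf itself provides */
	uint32_t width, height;	/* pixels */
	uint32_t stride;	/* bytes per row */
	uint32_t fourcc;	/* color format, e.g. 'NV12' */
};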

 Anyways, the basic reason is to handle random edge cases where you
 need sw access to the buffer.  For example, you are decoding video and
 pull out a frame to generate a thumbnail w/ a sw jpeg encoder..

 Again, doesn't sound like it should be part of this API, and also
 sounds like the sw jpeg encoder will need more info about the buffer
 anyways, like stride and format.

 With this current scheme, synchronization could be handled in
 dmabuf->ops->mmap() and vm_ops->close()..  it is perhaps a bit heavy to
 require mmap/munmap for each sw access, but I suppose this isn't
 really for the high-performance use case.  It is just so that some
 random bit of sw that gets passed a dmabuf handle without knowing who
 allocated it can have sw access if really needed.

 So I think that's fine; write a sw accessor provider, don't go
 overloading the buffer sharing code.

But then we'd need a different set of accessors for every different
drm/v4l/etc driver, wouldn't we?

 This API will limit what people can use this buffer sharing for with
 pure hw accessors. You might say it's okay to fail the mmap then, but
 I'm not so sure sw would handle that.

I'm not entirely sure which case you are worried about.. sharing buffers
between multiple GPUs that understand the same tiled formats?  I guess
that is a bit different from a case like a jpeg encoder that is passed
a dmabuf handle without any idea where it came from..

I guess if sharing a buffer between multiple drm devices, there is
nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
pass when the buffer is allocated, then you don't have to support
dmabuf->mmap(), and instead mmap via device and use some sort of
DRM_CPU_PREP/FINI ioctls for synchronization..
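
Purely to illustrate the bracketing; the FOO_CPU_PREP/FINI ioctls and their
argument struct below are made up, not from any real driver:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

struct foo_cpu_access {
	uint32_t handle;	/* hypothetical per-driver buffer handle */
};

/* hypothetical ioctl numbers, defined here only so the sketch is complete */
#define DRM_IOCTL_FOO_CPU_PREP _IOW('d', 0x40, struct foo_cpu_access)
#define DRM_IOCTL_FOO_CPU_FINI _IOW('d', 0x41, struct foo_cpu_access)

static void sw_touch(int drm_fd, uint32_t handle, void *map, size_t len)
{
	struct foo_cpu_access req = { .handle = handle };

	/* prep: wait for the device to finish, sync caches for the cpu */
	ioctl(drm_fd, DRM_IOCTL_FOO_CPU_PREP, &req);

	memset(map, 0, len);	/* stand-in for the actual sw access */

	/* fini: flush cpu writes, hand the buffer back to the device */
	ioctl(drm_fd, DRM_IOCTL_FOO_CPU_FINI, &req);
}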

BR,
-R

 Dave.



Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
 But then we'd need a different set of accessors for every different
 drm/v4l/etc driver, wouldn't we?

Not any more different than you need for this, you just have a new
interface that you request a sw object from,
then mmap that object, and underneath it knows who owns it in the kernel.
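
As a sketch of the shape of that proposal (entirely hypothetical; nothing
like this exists in the posted patches), the sw object could be a small ops
structure that whichever driver owns the buffer fills in:

/* hypothetical 'sw accessor' object, separate from the dmabuf itself */
struct sw_access_ops {
	int (*begin_cpu_access)(void *priv);	/* sync against the device */
	void *(*map)(void *priv);		/* cpu mapping of the backing storage */
	void (*unmap)(void *priv, void *vaddr);
	void (*end_cpu_access)(void *priv);
};

struct sw_access {
	const struct sw_access_ops *ops;
	void *priv;	/* owned by whichever kernel driver allocated the buffer */
};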

mmap just feels wrong in this API, which is a buffer sharing API not a
buffer mapping API.

 I guess if sharing a buffer between multiple drm devices, there is
 nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
 pass when the buffer is allocated, then you don't have to support
 dmabuf->mmap(), and instead mmap via device and use some sort of
 DRM_CPU_PREP/FINI ioctls for synchronization..

Or we could make a generic CPU accessor that we don't have to worry about.

Dave.


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie airl...@gmail.com wrote:
 But then we'd need a different set of accessors for every different
 drm/v4l/etc driver, wouldn't we?

 Not any more different than you need for this, you just have a new
 interface that you request a sw object from,
 then mmap that object, and underneath it knows who owns it in the kernel.

oh, ok, so you are talking about a kernel level interface, rather than
userspace..

but I guess in this case I don't quite see the difference.  It amounts
to which fd you call mmap (or ioctl[*]) on..  If you use the dmabuf fd
directly then you don't have to pass around a 2nd fd.

[*] there is nothing stopping defining some dmabuf ioctls (such as for
synchronization).. although the thinking was to keep it simple for
first version of dmabuf

BR,
-R

 mmap just feels wrong in this API, which is a buffer sharing API not a
 buffer mapping API.

 I guess if sharing a buffer between multiple drm devices, there is
 nothing stopping you from having some NOT_DMABUF_MMAPABLE flag you
 pass when the buffer is allocated, then you don't have to support
 dmabuf->mmap(), and instead mmap via device and use some sort of
 DRM_CPU_PREP/FINI ioctls for synchronization..

 Or we could make a generic CPU accessor that we don't have to worry about.

 Dave.



Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Dave Airlie
On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark robdcl...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie airl...@gmail.com wrote:
 But then we'd need a different set of accessors for every different
 drm/v4l/etc driver, wouldn't we?

 Not any more different than you need for this, you just have a new
 interface that you request a sw object from,
 then mmap that object, and underneath it knows who owns it in the kernel.

 oh, ok, so you are talking about a kernel level interface, rather than
 userspace..

 but I guess in this case I don't quite see the difference.  It amounts
 to which fd you call mmap (or ioctl[*]) on..  If you use the dmabuf fd
 directly then you don't have to pass around a 2nd fd.

 [*] there is nothing stopping defining some dmabuf ioctls (such as for
 synchronization).. although the thinking was to keep it simple for
 first version of dmabuf


Yes a separate kernel level interface.

Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
shoehorning in a sw mapping API isn't making it simpler.

The problem I have with implementing mmap on the sharing fd is that
nothing says this should be purely optional and that userspace shouldn't
rely on it.

In the Intel GEM space alone you have two types of mapping, one direct
to shmem and one via the GTT, and the GTT one could even be a linear view.
The intel guys initially did GEM mmaps direct to the shmem pages because
it seemed simple, up until they had to do step two, which was mmaps on
the GTT copy, and they ended up with two separate mmap methods. I think
the problem here is that it seems deceptively simple to add this to the
API now because the API is simple; however, I think in the future it'll
become a burden that we'll have to work around.

Dave.


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Daniel Vetter
On Wed, Oct 12, 2011 at 03:34:54PM +0100, Dave Airlie wrote:
 On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark robdcl...@gmail.com wrote:
  On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie airl...@gmail.com wrote:
  But then we'd need a different set of accessors for every different
  drm/v4l/etc driver, wouldn't we?
 
  Not any more different than you need for this, you just have a new
  interface that you request a sw object from,
  then mmap that object, and underneath it knows who owns it in the kernel.
 
  oh, ok, so you are talking about a kernel level interface, rather than
  userspace..
 
  but I guess in this case I don't quite see the difference.  It amounts
  to which fd you call mmap (or ioctl[*]) on..  If you use the dmabuf fd
  directly then you don't have to pass around a 2nd fd.
 
  [*] there is nothing stopping defining some dmabuf ioctls (such as for
  synchronization).. although the thinking was to keep it simple for
  first version of dmabuf
 
 
 Yes a separate kernel level interface.
 
 Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
 shoehorning in a sw mapping API isn't making it simpler.
 
 The problem I have with implementing mmap on the sharing fd is that
 nothing says this should be purely optional and that userspace shouldn't
 rely on it.
 
 In the Intel GEM space alone you have two types of mapping, one direct
 to shmem and one via the GTT, and the GTT one could even be a linear view.
 The intel guys initially did GEM mmaps direct to the shmem pages because
 it seemed simple, up until they had to do step two, which was mmaps on
 the GTT copy, and they ended up with two separate mmap methods. I think
 the problem here is that it seems deceptively simple to add this to the
 API now because the API is simple; however, I think in the future it'll
 become a burden that we'll have to work around.

Yeah, that's my feeling, too. Adding mmap sounds like a neat, simple idea,
that could simplify things for simple devices like v4l. But as soon as
you're dealing with a real gpu, nothing is simple. Those who don't believe
this should just take a look at the data upload/download paths in the
open-source i915, nouveau and radeon drivers. Making this fast (and for
gpus, it needs to be fast) requires tons of tricks, special cases and
jumping through hoops.

You absolutely want the device-specific ioctls to do that. Adding a
generic mmap just makes matters worse, especially if userspace expects
this to work synchronized with everything else that is going on.

Cheers, Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48


Re: [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism

2011-10-12 Thread Rob Clark
On Wed, Oct 12, 2011 at 9:34 AM, Dave Airlie airl...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 3:24 PM, Rob Clark robdcl...@gmail.com wrote:
 On Wed, Oct 12, 2011 at 9:01 AM, Dave Airlie airl...@gmail.com wrote:
 But then we'd need a different set of accessors for every different
 drm/v4l/etc driver, wouldn't we?

 Not any more different than you need for this, you just have a new
 interface that you request a sw object from,
 then mmap that object, and underneath it knows who owns it in the kernel.

 oh, ok, so you are talking about a kernel level interface, rather than
 userspace..

 but I guess in this case I don't quite see the difference.  It amounts
 to which fd you call mmap (or ioctl[*]) on..  If you use the dmabuf fd
 directly then you don't have to pass around a 2nd fd.

 [*] there is nothing stopping defining some dmabuf ioctls (such as for
 synchronization).. although the thinking was to keep it simple for
 first version of dmabuf


 Yes a separate kernel level interface.

I'm not against it, but if it is a device-independent interface, it
just seems like six of one, half-dozen of the other..

Ie. how does it differ if the dmabuf fd is the fd used for ioctl/mmap,
vs if some other /dev/buffer-sharer file that you open?

But I think maybe I'm misunderstanding what you have in mind?

BR,
-R

 Well I'd like to keep it even simpler. dmabuf is a buffer sharing API,
 shoehorning in a sw mapping API isn't making it simpler.

 The problem I have with implementing mmap on the sharing fd is that
 nothing says this should be purely optional and that userspace shouldn't
 rely on it.

 In the Intel GEM space alone you have two types of mapping, one direct
 to shmem and one via the GTT, and the GTT one could even be a linear view.
 The intel guys initially did GEM mmaps direct to the shmem pages because
 it seemed simple, up until they had to do step two, which was mmaps on
 the GTT copy, and they ended up with two separate mmap methods. I think
 the problem here is that it seems deceptively simple to add this to the
 API now because the API is simple; however, I think in the future it'll
 become a burden that we'll have to work around.

 Dave.
