Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-18 Thread 吴勇
On Tue, 2023-09-12 at 11:05 -0400, Nicolas Dufresne wrote:
>
>  Le mardi 12 septembre 2023 à 08:47 +, Yong Wu (吴勇) a écrit :
> > On Mon, 2023-09-11 at 12:12 -0400, Nicolas Dufresne wrote:
> > >   
> > >  Hi,
> > > 
> > > Le lundi 11 septembre 2023 à 10:30 +0800, Yong Wu a écrit :
> > > > From: John Stultz 
> > > > 
> > > > This allows drivers who don't want to create their own
> > > > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > > > from existing DMA-BUF Heaps.
> > > > 
> > > > There is some concern that the premise of DMA-BUF heaps is
> > > > that userland knows better about what type of heap memory
> > > > is needed for a pipeline, so it would likely be best for
> > > > drivers to import and fill DMA-BUFs allocated by userland
> > > > instead of allocating one themselves, but this is still
> > > > up for debate.
> > > 
> > > 
> > > It would be nice for the reviewers to provide information about the
> > > user of this new in-kernel API. I noticed it because I was CCed, but
> > > strangely it didn't make it to the mailing list yet, and it's not
> > > clear in the cover letter what this is used with.
> > > 
> > > I can explain in my own words though: my read is that this is used to
> > > allocate both user-visible and driver-internal memory segments in the
> > > MTK VCODEC driver.
> > > 
> > > I'm somewhat concerned that DMABuf objects are used to abstract secure
> > > memory allocation from the TEE. For framebuffers that are going to be
> > > exported and shared it's probably fair use, but it seems that internal
> > > shared memory and codec-specific reference buffers also end up with a
> > > dmabuf fd (often called a secure fd in the v4l2 patchset) for data
> > > that is not being shared, and that requires a 1:1 mapping to a TEE
> > > handle anyway. Is that the design we'd like to follow?
> > 
> > Yes, basically this is right.
> > 
> > > Can't we directly allocate from the TEE, adding the needed helpers to
> > > make this as simple as allocating from a heap?
> > 
> > If this happens, the memory will always be inside the TEE. Here we
> > create a new _CMA heap; it will cma_alloc/free dynamically. Reserve it
> > before SVP starts, and release it back to the kernel after SVP is done.
> 
> Ok, I see the benefit of having a common driver then. It would add to the
> complexity, but having a driver for the tee allocator and v4l2/heaps would
> be another option?

It's ok for v4l2. But our DRM driver also uses this new heap, and it will
be sent upstream in the next few days.

> 
> >   
> > Secondly, v4l2/drm has a mature driver control flow, like
> > drm_gem_prime_import_dev, which always uses dma_buf ops. So we can use
> > the current flow as much as possible without having to re-plan a flow
> > in the TEE.
> 
> From what I've read of Yunfei's series, this is only partially true for
> V4L2. The vb2 queue MMAP feature has dmabuf exportation as optional, but
> it's not a problem to always back it up with a dmabuf object. But for
> internal SHM buffers used for firmware communication, I've never seen any
> driver use a DMABuf.
> 
> The same applies to primary decode buffers when frame buffer compression
> or post-processing is used (or reconstruction buffers in encoders); these
> are not user visible and are usually not DMABufs.

If they aren't dmabuf, of course it is ok. I guess we haven't used
these. The SHM buffer is obtained via tee_shm_register_kernel_buf in this
case, and we just use the existing dmabuf ops to complete SVP.
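
(For reference, a minimal sketch of what registering such a kernel SHM buffer
with the TEE could look like; the tee_context, firmware buffer, and helper name
are illustrative placeholders, not taken from this series.)

#include <linux/err.h>
#include <linux/tee_drv.h>

/*
 * Sketch only: register a driver-owned firmware message buffer with the
 * TEE so it can be shared with the trusted application.
 */
static struct tee_shm *register_fw_shm(struct tee_context *ctx,
                                       void *fw_buf, size_t fw_size)
{
        struct tee_shm *shm;

        shm = tee_shm_register_kernel_buf(ctx, fw_buf, fw_size);
        if (IS_ERR(shm))
                return shm;

        /* Pass shm to the TA as an invoke parameter; free it later with tee_shm_free(). */
        return shm;
}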

In our case, the vcodec input/output/working buffers and the DRM input
buffers all use this new secure heap during secure video playback.

> 
> > 
> > > 
> > > Nicolas
> > > 
> > > > 
> > > > Signed-off-by: John Stultz 
> > > > Signed-off-by: T.J. Mercier 
> > > > Signed-off-by: Yong Wu 
> > > > [Yong: Fix the checkpatch alignment warning]
> > > > ---
> > > >  drivers/dma-buf/dma-heap.c | 60 --
> > > >  include/linux/dma-heap.h   | 25 
> > > >  2 files changed, 69 insertions(+), 16 deletions(-)
> > > > 
> > [snip]
> 


Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-13 Thread Christian König

Am 12.09.23 um 16:58 schrieb Nicolas Dufresne:

Le mardi 12 septembre 2023 à 16:46 +0200, Christian König a écrit :

Am 12.09.23 um 10:52 schrieb Yong Wu (吴勇):

[SNIP]

But what we should try to avoid is that newly merged drivers provide
both a driver specific UAPI and DMA-heaps. The justification that this
makes it easier to transit userspace to the new UAPI doesn't really
count.

That would be adding UAPI already with a plan to deprecate it and that
is most likely not helpful considering that UAPI must be supported
forever as soon as it is upstream.

Sorry, I didn't understand this. I think we have not changed the UAPI.
Which code are you referring to?

Well, what do you need this for if not a new UAPI?

My assumption here is that you need to export the DMA-heap allocation
function so that you can serve a UAPI in your new driver. Or what else
is that good for?

As far as I understand, you are trying to upstream your new vcodec driver. So
while this change here seems to be a good idea to clean up existing
drivers, it doesn't look like a good idea for a newly created driver.

MTK VCODEC has been upstream for quite some time now. The other patchset is
trying to add secure decoding/encoding support to that existing upstream driver.

Regarding the uAPI, it seems that this addition to the dmabuf heap internal API is
exactly the opposite. By making heaps available to drivers, the modification to the
V4L2 uAPI is reduced to adding "SECURE_MODE" + "SECURE_HEAP_ID" controls
(this approach has not been debated yet). The heap is being used internally as a
replacement for every allocation, user visible or not.


Thanks a lot for that explanation, I was really wondering what the use
case for this is if it's not to serve a new UAPI.


In this case I don't see any reason why we shouldn't do it. It's indeed 
much cleaner.


Christian.



Nicolas


Regards,
Christian.


So I think this patch is a little confusing in this series, as I don't
see much of it actually being used here (though forgive me if I'm
missing it).

Instead, it seems it gets used in a separate patch series here:

https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/

Please try to avoid stuff like that, it is really confusing and eats
reviewers' time.

My fault, I thought dma-buf and media belonged to different trees,
so I sent them separately. The cover letter just said "The consumers of
the new heap and new interface are our codecs and DRM, which will be
sent upstream soon", and there was no vcodec link at that time.

In the next version, we will put the first three patches into the
vcodec patchset.

Thanks.





Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread Nicolas Dufresne
Le mardi 12 septembre 2023 à 08:47 +, Yong Wu (吴勇) a écrit :
> On Mon, 2023-09-11 at 12:12 -0400, Nicolas Dufresne wrote:
> >  
> >  Hi,
> > 
> > Le lundi 11 septembre 2023 à 10:30 +0800, Yong Wu a écrit :
> > > From: John Stultz 
> > > 
> > > This allows drivers who don't want to create their own
> > > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > > from existing DMA-BUF Heaps.
> > > 
> > > There is some concern that the premise of DMA-BUF heaps is
> > > that userland knows better about what type of heap memory
> > > is needed for a pipeline, so it would likely be best for
> > > drivers to import and fill DMA-BUFs allocated by userland
> > > instead of allocating one themselves, but this is still
> > > up for debate.
> > 
> > 
> > It would be nice for the reviewers to provide information about the
> > user of this new in-kernel API. I noticed it because I was CCed, but
> > strangely it didn't make it to the mailing list yet, and it's not clear
> > in the cover letter what this is used with.
> > 
> > I can explain in my own words though: my read is that this is used to
> > allocate both user-visible and driver-internal memory segments in the
> > MTK VCODEC driver.
> > 
> > I'm somewhat concerned that DMABuf objects are used to abstract secure
> > memory allocation from the TEE. For framebuffers that are going to be
> > exported and shared it's probably fair use, but it seems that internal
> > shared memory and codec-specific reference buffers also end up with a
> > dmabuf fd (often called a secure fd in the v4l2 patchset) for data that
> > is not being shared, and that requires a 1:1 mapping to a TEE handle
> > anyway. Is that the design we'd like to follow?
> 
> Yes, basically this is right.
> 
> > Can't we directly allocate from the TEE, adding the needed helpers to
> > make this as simple as allocating from a heap?
> 
> If this happens, the memory will always be inside the TEE. Here we create a
> new _CMA heap; it will cma_alloc/free dynamically. Reserve it before
> SVP starts, and release it back to the kernel after SVP is done.

Ok, I see the benefit of having a common driver then. It would add to the
complexity, but having a driver for the tee allocator and v4l2/heaps would be
another option?

>   
> Secondly, v4l2/drm has a mature driver control flow, like
> drm_gem_prime_import_dev, which always uses dma_buf ops. So we can use the
> current flow as much as possible without having to re-plan a flow in
> the TEE.

From what I've read of Yunfei's series, this is only partially true for V4L2.
The vb2 queue MMAP feature has dmabuf exportation as optional, but it's not a
problem to always back it up with a dmabuf object. But for internal SHM buffers
used for firmware communication, I've never seen any driver use a DMABuf.

The same applies to primary decode buffers when frame buffer compression or post-
processing is used (or reconstruction buffers in encoders); these are not user
visible and are usually not DMABufs.

> 
> > 
> > Nicolas
> > 
> > > 
> > > Signed-off-by: John Stultz 
> > > Signed-off-by: T.J. Mercier 
> > > Signed-off-by: Yong Wu 
> > > [Yong: Fix the checkpatch alignment warning]
> > > ---
> > >  drivers/dma-buf/dma-heap.c | 60 --
> > >  include/linux/dma-heap.h   | 25 
> > >  2 files changed, 69 insertions(+), 16 deletions(-)
> > > 
> [snip]



Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread Nicolas Dufresne
Le mardi 12 septembre 2023 à 16:46 +0200, Christian König a écrit :
> Am 12.09.23 um 10:52 schrieb Yong Wu (吴勇):
> > [SNIP]
> > > But what we should try to avoid is that newly merged drivers provide
> > > both a driver specific UAPI and DMA-heaps. The justification that this
> > > makes it easier to transit userspace to the new UAPI doesn't really
> > > count.
> > > 
> > > That would be adding UAPI already with a plan to deprecate it and that
> > > is most likely not helpful considering that UAPI must be supported
> > > forever as soon as it is upstream.
> > Sorry, I didn't understand this. I think we have not changed the UAPI.
> > Which code are you referring to?
> 
> Well, what do you need this for if not a new UAPI?
> 
> My assumption here is that you need to export the DMA-heap allocation 
> function so that you can serve a UAPI in your new driver. Or what else 
> is that good for?
> 
> As far as I understand, you are trying to upstream your new vcodec driver. So 
> while this change here seems to be a good idea to clean up existing 
> drivers, it doesn't look like a good idea for a newly created driver.

MTK VCODEC has been upstream for quite some time now. The other patchset is
trying to add secure decoding/encoding support to that existing upstream driver.

Regarding the uAPI, it seems that this addition to the dmabuf heap internal API is
exactly the opposite. By making heaps available to drivers, the modification to the
V4L2 uAPI is reduced to adding "SECURE_MODE" + "SECURE_HEAP_ID" controls
(this approach has not been debated yet). The heap is being used internally as a
replacement for every allocation, user visible or not.

Nicolas

> 
> Regards,
> Christian.
> 
> > > > So I think this patch is a little confusing in this series, as I don't
> > > > see much of it actually being used here (though forgive me if I'm
> > > > missing it).
> > > > 
> > > > Instead, it seems it gets used in a separate patch series here:
> > > > 
> > > https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/
> > > 
> > > Please try to avoid stuff like that, it is really confusing and eats
> > > reviewers' time.
> > My fault, I thought dma-buf and media belonged to different trees,
> > so I sent them separately. The cover letter just said "The consumers of
> > the new heap and new interface are our codecs and DRM, which will be
> > sent upstream soon", and there was no vcodec link at that time.
> > 
> > In the next version, we will put the first three patches into the
> > vcodec patchset.
> > 
> > Thanks.
> > 
> 



Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread Nicolas Dufresne
Le lundi 11 septembre 2023 à 12:13 +0200, Christian König a écrit :
> Am 11.09.23 um 04:30 schrieb Yong Wu:
> > From: John Stultz 
> > 
> > This allows drivers who don't want to create their own
> > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > from existing DMA-BUF Heaps.
> > 
> > There is some concern that the premise of DMA-BUF heaps is
> > that userland knows better about what type of heap memory
> > is needed for a pipeline, so it would likely be best for
> > drivers to import and fill DMA-BUFs allocated by userland
> > instead of allocating one themselves, but this is still
> > up for debate.
> 
> The main design goal of having DMA-heaps in the first place is to avoid 
> per driver allocation and this is not necessary because userland knows 
> better what type of memory it wants.

If the memory is user visible, yes. When I look at the MTK VCODEC changes, this
seems to be used for internal codec state and SHM buffers used to communicate
with firmware.

> 
> The background is rather that we generally want to decouple allocation 
> from having a device driver connection so that we have better chance 
> that multiple devices can work with the same memory.
> 
> I once created a prototype which gives userspace a hint which DMA-heap to 
> use for which device: 
> https://patchwork.kernel.org/project/linux-media/patch/20230123123756.401692-2-christian.koe...@amd.com/
> 
> Problem is that I don't really have time to look into it and maintain 
> that stuff, but I think from the high level design that is rather the 
> general direction we should push at.
> 
> Regards,
> Christian.
> 
> > 
> > Signed-off-by: John Stultz 
> > Signed-off-by: T.J. Mercier 
> > Signed-off-by: Yong Wu 
> > [Yong: Fix the checkpatch alignment warning]
> > ---
> >   drivers/dma-buf/dma-heap.c | 60 --
> >   include/linux/dma-heap.h   | 25 
> >   2 files changed, 69 insertions(+), 16 deletions(-)
> > 
> > diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> > index dcc0e38c61fa..908bb30dc864 100644
> > --- a/drivers/dma-buf/dma-heap.c
> > +++ b/drivers/dma-buf/dma-heap.c
> > @@ -53,12 +53,15 @@ static dev_t dma_heap_devt;
> >   static struct class *dma_heap_class;
> >   static DEFINE_XARRAY_ALLOC(dma_heap_minors);
> >   
> > -static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> > -unsigned int fd_flags,
> > -unsigned int heap_flags)
> > +struct dma_buf *dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> > + unsigned int fd_flags,
> > + unsigned int heap_flags)
> >   {
> > -   struct dma_buf *dmabuf;
> > -   int fd;
> > +   if (fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
> > +   return ERR_PTR(-EINVAL);
> > +
> > +   if (heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
> > +   return ERR_PTR(-EINVAL);
> >   
> > /*
> >  * Allocations from all heaps have to begin
> > @@ -66,9 +69,20 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, 
> > size_t len,
> >  */
> > len = PAGE_ALIGN(len);
> > if (!len)
> > -   return -EINVAL;
> > +   return ERR_PTR(-EINVAL);
> >   
> > -   dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);
> > +   return heap->ops->allocate(heap, len, fd_flags, heap_flags);
> > +}
> > +EXPORT_SYMBOL_GPL(dma_heap_buffer_alloc);
> > +
> > +static int dma_heap_bufferfd_alloc(struct dma_heap *heap, size_t len,
> > +  unsigned int fd_flags,
> > +  unsigned int heap_flags)
> > +{
> > +   struct dma_buf *dmabuf;
> > +   int fd;
> > +
> > +   dmabuf = dma_heap_buffer_alloc(heap, len, fd_flags, heap_flags);
> > if (IS_ERR(dmabuf))
> > return PTR_ERR(dmabuf);
> >   
> > @@ -106,15 +120,9 @@ static long dma_heap_ioctl_allocate(struct file *file, 
> > void *data)
> > if (heap_allocation->fd)
> > return -EINVAL;
> >   
> > -   if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
> > -   return -EINVAL;
> > -
> > -   if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
> > -   return -EINVAL;
> > -
> > -   fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
> > -  heap_allocation->fd_flags,
> > -  heap_allocation->heap_flags);
> > +   fd = dma_heap_bufferfd_alloc(heap, heap_allocation->len,
> > +heap_allocation->fd_flags,
> > +heap_allocation->heap_flags);
> > if (fd < 0)
> > return fd;
> >   
> > @@ -205,6 +213,7 @@ const char *dma_heap_get_name(struct dma_heap *heap)
> >   {
> > return heap->name;
> >   }
> > +EXPORT_SYMBOL_GPL(dma_heap_get_name);
> >   
> >   struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
> >   {
> > @@ -290,6 +299,24 @@ struct dma_heap *dma_heap_add(const 

Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread Christian König

Am 12.09.23 um 10:52 schrieb Yong Wu (吴勇):

[SNIP]

But what we should try to avoid is that newly merged drivers provide
both a driver specific UAPI and DMA-heaps. The justification that this
makes it easier to transit userspace to the new UAPI doesn't really
count.

That would be adding UAPI already with a plan to deprecate it and that
is most likely not helpful considering that UAPI must be supported
forever as soon as it is upstream.

Sorry, I didn't understand this. I think we have not changed the UAPI.
Which code are you referring to?


Well, what do you need this for if not a new UAPI?

My assumption here is that you need to export the DMA-heap allocation 
function so that you can serve a UAPI in your new driver. Or what else 
is that good for?


As far as I understand, you are trying to upstream your new vcodec driver. So 
while this change here seems to be a good idea to clean up existing 
drivers, it doesn't look like a good idea for a newly created driver.


Regards,
Christian.


So I think this patch is a little confusing in this series, as I don't
see much of it actually being used here (though forgive me if I'm
missing it).

Instead, it seems it gets used in a separate patch series here:

https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/

Please try to avoid stuff like that, it is really confusing and eats
reviewers' time.

My fault, I thought dma-buf and media belonged to different trees,
so I sent them separately. The cover letter just said "The consumers of
the new heap and new interface are our codecs and DRM, which will be
sent upstream soon", and there was no vcodec link at that time.

In the next version, we will put the first three patches into the
vcodec patchset.

Thanks.





Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread 吴勇
On Tue, 2023-09-12 at 09:06 +0200, Christian König wrote:
>
>  Am 11.09.23 um 20:29 schrieb John Stultz:
> > On Mon, Sep 11, 2023 at 3:14 AM Christian König
> >  wrote:
> >> Am 11.09.23 um 04:30 schrieb Yong Wu:
> >>> From: John Stultz 
> >>>
> >>> This allows drivers who don't want to create their own
> >>> DMA-BUF exporter to be able to allocate DMA-BUFs directly
> >>> from existing DMA-BUF Heaps.
> >>>
> >>> There is some concern that the premise of DMA-BUF heaps is
> >>> that userland knows better about what type of heap memory
> >>> is needed for a pipeline, so it would likely be best for
> >>> drivers to import and fill DMA-BUFs allocated by userland
> >>> instead of allocating one themselves, but this is still
> >>> up for debate.
> >> The main design goal of having DMA-heaps in the first place is to avoid
> >> per driver allocation and this is not necessary because userland knows
> >> better what type of memory it wants.
> >>
> >> The background is rather that we generally want to decouple allocation
> >> from having a device driver connection so that we have better chance
> >> that multiple devices can work with the same memory.
> > Yep, very much agreed, and this is what the comment above is trying to
> > describe.
> >
> > Ideally user-allocated buffers would be used to ensure drivers don't
> > create buffers with constraints that limit which devices the buffers
> > might later be shared with.
> >
> > However, this patch was created as a hold-over from the old ION logic
> > to help vendors transition to dmabuf heaps, as vendors had situations
> > where they still wanted to export dmabufs that were not to be
> > generally shared and folks wanted to avoid duplication of logic
> > already in existing heaps.  At the time, I never pushed it upstream as
> > there were no upstream users.  But I think if there is now a potential
> > upstream user, it's worth having the discussion to better understand
> > the need.
> 
> Yeah, that indeed makes much more sense.
> 
> When existing drivers want to avoid their own handling and move their 
> memory management over to using DMA-heaps even for internal allocations 
> then no objections from my side. That is certainly something we should 
> aim for if possible.

Thanks.

> 
> But what we should try to avoid is that newly merged drivers provide 
> both a driver specific UAPI and DMA-heaps. The justification that this 
> makes it easier to transit userspace to the new UAPI doesn't really 
> count.
> 
> That would be adding UAPI already with a plan to deprecate it and that 
> is most likely not helpful considering that UAPI must be supported 
> forever as soon as it is upstream.

Sorry, I didn't understand this. I think we have not changed the UAPI.
Which code are you referring to?

> 
> > So I think this patch is a little confusing in this series, as I don't
> > see much of it actually being used here (though forgive me if I'm
> > missing it).
> >
> > Instead, it seems it gets used in a separate patch series here:
> >
> https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/
> 
> Please try to avoid stuff like that, it is really confusing and eats 
> reviewers' time.

My fault, I thought dma-buf and media belonged to different trees,
so I sent them separately. The cover letter just said "The consumers of
the new heap and new interface are our codecs and DRM, which will be
sent upstream soon", and there was no vcodec link at that time.

In the next version, we will put the first three patches into the
vcodec patchset.

Thanks.

> 
> Regards,
> Christian.
> 
> >
> > Yong, I appreciate you sending this out! But maybe if the secure heap
> > submission doesn't depend on this functionality, I might suggest
> > moving this patch (or at least the majority of it) to be part of the
> > vcodec series instead?  That way reviewers will have more context for
> > how the code being added is used?

Will do.
Thanks.

> >
> > thanks
> > -john
> 


Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread 吴勇
On Mon, 2023-09-11 at 12:12 -0400, Nicolas Dufresne wrote:
>
>  Hi,
> 
> Le lundi 11 septembre 2023 à 10:30 +0800, Yong Wu a écrit :
> > From: John Stultz 
> > 
> > This allows drivers who don't want to create their own
> > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > from existing DMA-BUF Heaps.
> > 
> > There is some concern that the premise of DMA-BUF heaps is
> > that userland knows better about what type of heap memory
> > is needed for a pipeline, so it would likely be best for
> > drivers to import and fill DMA-BUFs allocated by userland
> > instead of allocating one themselves, but this is still
> > up for debate.
> 
> 
> It would be nice for the reviewers to provide information about the user
> of this new in-kernel API. I noticed it because I was CCed, but strangely
> it didn't make it to the mailing list yet, and it's not clear in the cover
> letter what this is used with.
> 
> I can explain in my own words though: my read is that this is used to
> allocate both user-visible and driver-internal memory segments in the
> MTK VCODEC driver.
> 
> I'm somewhat concerned that DMABuf objects are used to abstract secure
> memory allocation from the TEE. For framebuffers that are going to be
> exported and shared it's probably fair use, but it seems that internal
> shared memory and codec-specific reference buffers also end up with a
> dmabuf fd (often called a secure fd in the v4l2 patchset) for data that
> is not being shared, and that requires a 1:1 mapping to a TEE handle
> anyway. Is that the design we'd like to follow?

Yes, basically this is right.

> Can't we directly allocate from the TEE, adding the needed helpers to
> make this as simple as allocating from a heap?

If this happens, the memory will always be inside the TEE. Here we create a
new _CMA heap; it will cma_alloc/free dynamically. Reserve it before
SVP starts, and release it back to the kernel after SVP is done.
  
Secondly, v4l2/drm has a mature driver control flow, like
drm_gem_prime_import_dev, which always uses dma_buf ops. So we can use the
current flow as much as possible without having to re-plan a flow in
the TEE.
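
(A rough sketch of that reserve/release flow: the struct, helper names, and the
alignment are placeholders, and the TEE calls that would actually protect and
unprotect the region are only marked as comments.)

#include <linux/cma.h>
#include <linux/errno.h>
#include <linux/mm.h>

/* Sketch of a secure CMA pool reserved while secure video playback runs. */
struct svp_cma_pool {
        struct cma *cma;        /* CMA area backing the secure heap */
        struct page *pages;     /* pages currently reserved for the pool */
        unsigned long count;    /* number of reserved pages */
};

/* Reserve the CMA region before secure video playback starts. */
static int svp_pool_reserve(struct svp_cma_pool *pool, unsigned long count)
{
        pool->pages = cma_alloc(pool->cma, count, 0, false);
        if (!pool->pages)
                return -ENOMEM;
        pool->count = count;
        /* A TEE call to protect the region would follow here. */
        return 0;
}

/* Release the region back to the kernel once secure playback is done. */
static void svp_pool_release(struct svp_cma_pool *pool)
{
        /* A TEE call to lift the protection would precede this. */
        cma_release(pool->cma, pool->pages, pool->count);
        pool->pages = NULL;
        pool->count = 0;
}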

> 
> Nicolas
> 
> > 
> > Signed-off-by: John Stultz 
> > Signed-off-by: T.J. Mercier 
> > Signed-off-by: Yong Wu 
> > [Yong: Fix the checkpatch alignment warning]
> > ---
> >  drivers/dma-buf/dma-heap.c | 60 --
> >  include/linux/dma-heap.h   | 25 
> >  2 files changed, 69 insertions(+), 16 deletions(-)
> > 
[snip]


Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-12 Thread Christian König

Am 11.09.23 um 20:29 schrieb John Stultz:

On Mon, Sep 11, 2023 at 3:14 AM Christian König
 wrote:

Am 11.09.23 um 04:30 schrieb Yong Wu:

From: John Stultz 

This allows drivers who don't want to create their own
DMA-BUF exporter to be able to allocate DMA-BUFs directly
from existing DMA-BUF Heaps.

There is some concern that the premise of DMA-BUF heaps is
that userland knows better about what type of heap memory
is needed for a pipeline, so it would likely be best for
drivers to import and fill DMA-BUFs allocated by userland
instead of allocating one themselves, but this is still
up for debate.

The main design goal of having DMA-heaps in the first place is to avoid
per driver allocation and this is not necessary because userland knows
better what type of memory it wants.

The background is rather that we generally want to decouple allocation
from having a device driver connection so that we have better chance
that multiple devices can work with the same memory.

Yep, very much agreed, and this is what the comment above is trying to describe.

Ideally user-allocated buffers would be used to ensure drivers don't
create buffers with constraints that limit which devices the buffers
might later be shared with.

However, this patch was created as a hold-over from the old ION logic
to help vendors transition to dmabuf heaps, as vendors had situations
where they still wanted to export dmabufs that were not to be
generally shared and folks wanted to avoid duplication of logic
already in existing heaps.  At the time, I never pushed it upstream as
there were no upstream users.  But I think if there is now a potential
upstream user, it's worth having the discussion to better understand
the need.


Yeah, that indeed makes much more sense.

When existing drivers want to avoid their own handling and move their 
memory management over to using DMA-heaps even for internal allocations 
then no objections from my side. That is certainly something we should 
aim for if possible.


But what we should try to avoid is that newly merged drivers provide 
both a driver specific UAPI and DMA-heaps. The justification that this 
makes it easier to transit userspace to the new UAPI doesn't really count.


That would be adding UAPI already with a plan to deprecate it and that 
is most likely not helpful considering that UAPI must be supported 
forever as soon as it is upstream.



So I think this patch is a little confusing in this series, as I don't
see much of it actually being used here (though forgive me if I'm
missing it).

Instead, it seems it gets used in a separate patch series here:
   https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/


Please try to avoid stuff like that, it is really confusing and eats 
reviewers' time.


Regards,
Christian.



Yong, I appreciate you sending this out! But maybe if the secure heap
submission doesn't depend on this functionality, I might suggest
moving this patch (or at least the majority of it) to be part of the
vcodec series instead?  That way reviewers will have more context for
how the code being added is used?

thanks
-john




Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-11 Thread John Stultz
On Mon, Sep 11, 2023 at 3:14 AM Christian König
 wrote:
> Am 11.09.23 um 04:30 schrieb Yong Wu:
> > From: John Stultz 
> >
> > This allows drivers who don't want to create their own
> > DMA-BUF exporter to be able to allocate DMA-BUFs directly
> > from existing DMA-BUF Heaps.
> >
> > There is some concern that the premise of DMA-BUF heaps is
> > that userland knows better about what type of heap memory
> > is needed for a pipeline, so it would likely be best for
> > drivers to import and fill DMA-BUFs allocated by userland
> > instead of allocating one themselves, but this is still
> > up for debate.
>
> The main design goal of having DMA-heaps in the first place is to avoid
> per driver allocation and this is not necessary because userland knows
> better what type of memory it wants.
>
> The background is rather that we generally want to decouple allocation
> from having a device driver connection so that we have better chance
> that multiple devices can work with the same memory.

Yep, very much agreed, and this is what the comment above is trying to describe.

Ideally user-allocated buffers would be used to ensure drivers don't
create buffers with constraints that limit which devices the buffers
might later be shared with.

However, this patch was created as a hold-over from the old ION logic
to help vendors transition to dmabuf heaps, as vendors had situations
where they still wanted to export dmabufs that were not to be
generally shared and folks wanted to avoid duplication of logic
already in existing heaps.  At the time, I never pushed it upstream as
there were no upstream users.  But I think if there is now a potential
upstream user, it's worth having the discussion to better understand
the need.

So I think this patch is a little confusing in this series, as I don't
see much of it actually being used here (though forgive me if I'm
missing it).

Instead, it seems it gets used in a separate patch series here:
  https://lore.kernel.org/all/20230911125936.10648-1-yunfei.d...@mediatek.com/

Yong, I appreciate you sending this out! But maybe if the secure heap
submission doesn't depend on this functionality, I might suggest
moving this patch (or at least the majority of it) to be part of the
vcodec series instead?  That way reviewers will have more context for
how the code being added is used?

thanks
-john


Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-11 Thread Nicolas Dufresne
Hi,

Le lundi 11 septembre 2023 à 10:30 +0800, Yong Wu a écrit :
> From: John Stultz 
> 
> This allows drivers who don't want to create their own
> DMA-BUF exporter to be able to allocate DMA-BUFs directly
> from existing DMA-BUF Heaps.
> 
> There is some concern that the premise of DMA-BUF heaps is
> that userland knows better about what type of heap memory
> is needed for a pipeline, so it would likely be best for
> drivers to import and fill DMA-BUFs allocated by userland
> instead of allocating one themselves, but this is still
> up for debate.


It would be nice for the reviewers to provide information about the user of
this new in-kernel API. I noticed it because I was CCed, but strangely it didn't
make it to the mailing list yet, and it's not clear in the cover letter what this
is used with.

I can explain in my own words though: my read is that this is used to allocate
both user-visible and driver-internal memory segments in the MTK VCODEC driver.

I'm somewhat concerned that DMABuf objects are used to abstract secure memory
allocation from the TEE. For framebuffers that are going to be exported and shared
it's probably fair use, but it seems that internal shared memory and codec-specific
reference buffers also end up with a dmabuf fd (often called a secure fd
in the v4l2 patchset) for data that is not being shared, and that requires a 1:1
mapping to a TEE handle anyway. Is that the design we'd like to follow? Can't
we directly allocate from the TEE, adding the needed helpers to make this as simple
as allocating from a heap?

Nicolas

> 
> Signed-off-by: John Stultz 
> Signed-off-by: T.J. Mercier 
> Signed-off-by: Yong Wu 
> [Yong: Fix the checkpatch alignment warning]
> ---
>  drivers/dma-buf/dma-heap.c | 60 --
>  include/linux/dma-heap.h   | 25 
>  2 files changed, 69 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> index dcc0e38c61fa..908bb30dc864 100644
> --- a/drivers/dma-buf/dma-heap.c
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -53,12 +53,15 @@ static dev_t dma_heap_devt;
>  static struct class *dma_heap_class;
>  static DEFINE_XARRAY_ALLOC(dma_heap_minors);
>  
> -static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> -  unsigned int fd_flags,
> -  unsigned int heap_flags)
> +struct dma_buf *dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> +   unsigned int fd_flags,
> +   unsigned int heap_flags)
>  {
> - struct dma_buf *dmabuf;
> - int fd;
> + if (fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
> + return ERR_PTR(-EINVAL);
> +
> + if (heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
> + return ERR_PTR(-EINVAL);
>  
>   /*
>* Allocations from all heaps have to begin
> @@ -66,9 +69,20 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, 
> size_t len,
>*/
>   len = PAGE_ALIGN(len);
>   if (!len)
> - return -EINVAL;
> + return ERR_PTR(-EINVAL);
>  
> - dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);
> + return heap->ops->allocate(heap, len, fd_flags, heap_flags);
> +}
> +EXPORT_SYMBOL_GPL(dma_heap_buffer_alloc);
> +
> +static int dma_heap_bufferfd_alloc(struct dma_heap *heap, size_t len,
> +unsigned int fd_flags,
> +unsigned int heap_flags)
> +{
> + struct dma_buf *dmabuf;
> + int fd;
> +
> + dmabuf = dma_heap_buffer_alloc(heap, len, fd_flags, heap_flags);
>   if (IS_ERR(dmabuf))
>   return PTR_ERR(dmabuf);
>  
> @@ -106,15 +120,9 @@ static long dma_heap_ioctl_allocate(struct file *file, 
> void *data)
>   if (heap_allocation->fd)
>   return -EINVAL;
>  
> - if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
> - return -EINVAL;
> -
> - if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
> - return -EINVAL;
> -
> - fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
> -heap_allocation->fd_flags,
> -heap_allocation->heap_flags);
> + fd = dma_heap_bufferfd_alloc(heap, heap_allocation->len,
> +  heap_allocation->fd_flags,
> +  heap_allocation->heap_flags);
>   if (fd < 0)
>   return fd;
>  
> @@ -205,6 +213,7 @@ const char *dma_heap_get_name(struct dma_heap *heap)
>  {
>   return heap->name;
>  }
> +EXPORT_SYMBOL_GPL(dma_heap_get_name);
>  
>  struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
>  {
> @@ -290,6 +299,24 @@ struct dma_heap *dma_heap_add(const struct 
> dma_heap_export_info *exp_info)
>   kfree(heap);
>   return err_ret;
>  }
> +EXPORT_SYMBOL_GPL(dma_heap_add);
> +
> +struct dma_heap *dma_heap_find(const char *name)
> +{
> 

Re: [PATCH 3/9] dma-heap: Provide accessors so that in-kernel drivers can allocate dmabufs from specific heaps

2023-09-11 Thread Christian König

Am 11.09.23 um 04:30 schrieb Yong Wu:

From: John Stultz 

This allows drivers who don't want to create their own
DMA-BUF exporter to be able to allocate DMA-BUFs directly
from existing DMA-BUF Heaps.

There is some concern that the premise of DMA-BUF heaps is
that userland knows better about what type of heap memory
is needed for a pipeline, so it would likely be best for
drivers to import and fill DMA-BUFs allocated by userland
instead of allocating one themselves, but this is still
up for debate.


The main design goal of having DMA-heaps in the first place is to avoid 
per driver allocation and this is not necessary because userland knows 
better what type of memory it wants.


The background is rather that we generally want to decouple allocation 
from having a device driver connection so that we have better chance 
that multiple devices can work with the same memory.


I once created a prototype which gives userspace a hint which DMA-heap to 
use for which device: 
https://patchwork.kernel.org/project/linux-media/patch/20230123123756.401692-2-christian.koe...@amd.com/


Problem is that I don't really have time to look into it and maintain 
that stuff, but I think from the high level design that is rather the 
general direction we should push at.


Regards,
Christian.



Signed-off-by: John Stultz 
Signed-off-by: T.J. Mercier 
Signed-off-by: Yong Wu 
[Yong: Fix the checkpatch alignment warning]
---
  drivers/dma-buf/dma-heap.c | 60 --
  include/linux/dma-heap.h   | 25 
  2 files changed, 69 insertions(+), 16 deletions(-)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index dcc0e38c61fa..908bb30dc864 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -53,12 +53,15 @@ static dev_t dma_heap_devt;
  static struct class *dma_heap_class;
  static DEFINE_XARRAY_ALLOC(dma_heap_minors);
  
-static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,

-unsigned int fd_flags,
-unsigned int heap_flags)
+struct dma_buf *dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+ unsigned int fd_flags,
+ unsigned int heap_flags)
  {
-   struct dma_buf *dmabuf;
-   int fd;
+   if (fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
+   return ERR_PTR(-EINVAL);
+
+   if (heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
+   return ERR_PTR(-EINVAL);
  
  	/*

 * Allocations from all heaps have to begin
@@ -66,9 +69,20 @@ static int dma_heap_buffer_alloc(struct dma_heap *heap, 
size_t len,
 */
len = PAGE_ALIGN(len);
if (!len)
-   return -EINVAL;
+   return ERR_PTR(-EINVAL);
  
-	dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);

+   return heap->ops->allocate(heap, len, fd_flags, heap_flags);
+}
+EXPORT_SYMBOL_GPL(dma_heap_buffer_alloc);
+
+static int dma_heap_bufferfd_alloc(struct dma_heap *heap, size_t len,
+  unsigned int fd_flags,
+  unsigned int heap_flags)
+{
+   struct dma_buf *dmabuf;
+   int fd;
+
+   dmabuf = dma_heap_buffer_alloc(heap, len, fd_flags, heap_flags);
if (IS_ERR(dmabuf))
return PTR_ERR(dmabuf);
  
@@ -106,15 +120,9 @@ static long dma_heap_ioctl_allocate(struct file *file, void *data)

if (heap_allocation->fd)
return -EINVAL;
  
-	if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)

-   return -EINVAL;
-
-   if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
-   return -EINVAL;
-
-   fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
-  heap_allocation->fd_flags,
-  heap_allocation->heap_flags);
+   fd = dma_heap_bufferfd_alloc(heap, heap_allocation->len,
+heap_allocation->fd_flags,
+heap_allocation->heap_flags);
if (fd < 0)
return fd;
  
@@ -205,6 +213,7 @@ const char *dma_heap_get_name(struct dma_heap *heap)

  {
return heap->name;
  }
+EXPORT_SYMBOL_GPL(dma_heap_get_name);
  
  struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)

  {
@@ -290,6 +299,24 @@ struct dma_heap *dma_heap_add(const struct 
dma_heap_export_info *exp_info)
kfree(heap);
return err_ret;
  }
+EXPORT_SYMBOL_GPL(dma_heap_add);
+
+struct dma_heap *dma_heap_find(const char *name)
+{
+   struct dma_heap *h;
+
+   mutex_lock(&heap_list_lock);
+   list_for_each_entry(h, &heap_list, list) {
+   if (!strcmp(h->name, name)) {
+   kref_get(&h->refcount);
+   mutex_unlock(&heap_list_lock);
+   return h;
+   }
+   }
+   mutex_unlock(&heap_list_lock);
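
(For context, a minimal sketch of how an in-kernel consumer might pair the
dma_heap_find() above with the exported dma_heap_buffer_alloc(); the heap name,
the zero fd/heap flags, and the error handling are illustrative assumptions,
and the matching "put" of the heap reference is not shown in the quoted hunk.)

#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/errno.h>

/* Sketch only: look up a heap by name and allocate a buffer from it. */
static struct dma_buf *example_alloc_from_heap(const char *heap_name, size_t len)
{
        struct dma_heap *heap;
        struct dma_buf *dmabuf;

        heap = dma_heap_find(heap_name);        /* takes a reference on the heap */
        if (!heap)
                return ERR_PTR(-ENODEV);

        /* 0 fd_flags/heap_flags: nothing special needed for a kernel-internal buffer. */
        dmabuf = dma_heap_buffer_alloc(heap, len, 0, 0);

        /* Dropping the heap reference is assumed to happen here or at teardown. */
        return dmabuf;
}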