Re: [Linaro-mm-sig] [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-02-28 Thread Liam Mark
+ Sumit

Hi Sumit,

Do you have any thoughts on this patch? 

It fixes a potential crash on older kernels, and I think limiting 
begin/end_cpu_access to apply cache maintenance only when the buffer is 
dma mapped makes sense from both a logical and a performance 
perspective.



On Wed, 6 Feb 2019, Ørjan Eide wrote:

> 
> I've run some testing, and this patch does indeed fix the crash in
> dma_sync_sg_for_cpu when it tried to use the 0 dma_address from the sg
> list.
> 
> Tested-by: Ørjan Eide 
> 
> I tested this on an older kernel, v4.14, since the dma-mapping code
> moved, in v4.19, to ignore the dma_address and instead use sg_phys() to
> get a valid address from the page, which is always valid in the ion sg
> lists. While this wouldn't crash on newer kernels, it's still good to
> avoid the unnecessary work when no CMO is needed.
> 

Isn't a fix like this also required from a stability perspective for 
future kernels? I understand from your analysis below that the crash has 
been fixed after 4.19 by using sg_phys() to get the address, but aren't we 
breaking the DMA API contract by calling dma_sync_* without first dma 
mapping the memory? If so, we have no guarantee that future 
implementations of functions like dma_direct_sync_sg_for_cpu will properly 
handle calls to dma_sync_* when the memory is not dma mapped.
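To make the ordering concern concrete, my understanding of the DMA API 
contract is roughly the below (just a sketch, not from the patch; dev and 
table are placeholders): the sync calls are only defined inside a 
map/unmap window.

	/* sketch: dma_sync_sg_* is only valid on a mapping made by dma_map_sg() */
	if (!dma_map_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;

	dma_sync_sg_for_cpu(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);
	/* ... CPU access ... */
	dma_sync_sg_for_device(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);

	dma_unmap_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);

Calling dma_sync_* outside that window is exactly what this patch stops 
ION from doing.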

> Is this patch a candidate for the relevant stable kernels, those that 
> have this bug exposed to user space via Ion and DMA_BUF_IOCTL_SYNC?
> 

My belief is that it is relevant for older kernels; otherwise an unprivileged 
malicious userspace application may be able to crash the system if it 
can call DMA_BUF_IOCTL_SYNC at the right time.
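For reference, the userspace side of that is just the standard sync ioctl 
on the dma-buf fd (sketch; buf_fd here stands for an exported ION buffer 
fd):

	#include <linux/dma-buf.h>
	#include <sys/ioctl.h>

	struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };

	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* -> begin_cpu_access */
	/* ... CPU access ... */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* -> end_cpu_access */

If this runs while the buffer is attached but not yet dma mapped, it 
reaches the dma_sync_sg_* calls the patch is guarding.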

BTW, thanks Ørjan for the testing and analysis you have carried out on this 
change.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

RE: [PATCH v3] staging: android: ion: Add implementation of dma_buf_vmap and dma_buf_vunmap

2019-01-29 Thread Liam Mark
On Fri, 4 Jan 2019, Skidanov, Alexey wrote:

> 
> 
> > -Original Message-
> > From: Liam Mark [mailto:lm...@codeaurora.org]
> > Sent: Friday, January 04, 2019 19:42
> > To: Skidanov, Alexey 
> > Cc: Laura Abbott ; Greg KH ;
> > de...@driverdev.osuosl.org; tk...@android.com; r...@android.com; linux-
> > ker...@vger.kernel.org; m...@android.com; sumit.sem...@linaro.org
> > Subject: Re: [PATCH v3] staging: android: ion: Add implementation of 
> > dma_buf_vmap and
> > dma_buf_vunmap
> > 
> > On Tue, 18 Dec 2018, Alexey Skidanov wrote:
> > 
> > > >>> I was wondering if we could re-open the discussion on adding support 
> > > >>> to
> > > >>> ION for dma_buf_vmap.
> > > >>> It seems like the patch was not taken as the reviewers wanted more
> > > >>> evidence of an upstream use case.
> > > >>>
> > > >>> Here would be my upstream usage argument for including dma_buf_vmap
> > > >>> support in ION.
> > > >>>
> > > >>> Currently all calls to ion_dma_buf_begin_cpu_access result in the 
> > > >>> creation
> > > >>> of a kernel mapping for the buffer, unfortunately the resulting call 
> > > >>> to
> > > >>> alloc_vmap_area can be quite expensive and this has caused a 
> > > >>> performance
> > > >>> regression for certain clients when they have moved to the new 
> > > >>> version of
> > > >>> ION.
> > > >>>
> > > >>> The kernel mapping is not actually needed in 
> > > >>> ion_dma_buf_begin_cpu_access,
> > > >>> and generally isn't needed by clients. So if we remove the creation 
> > > >>> of the
> > > >>> kernel mapping in ion_dma_buf_begin_cpu_access and only create it when
> > > >>> needed we can speed up the calls to ion_dma_buf_begin_cpu_access.
> > > >>>
> > > >>> An additional benefit of removing the creation of kernel mappings from
> > > >>> ion_dma_buf_begin_cpu_access is that it makes the ION code more 
> > > >>> secure.
> > > >>> Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL 
> > > >>> with
> > > >>> flags DMA_BUF_SYNC_END multiple times to cause the ION buffer 
> > > >>> kmap_cnt to
> > > >>> go negative which could lead to undesired behavior.
> > > >>>
> > > >>> One disadvantage of the above change is that a kernel mapping is not
> > > >>> already created when a client calls dma_buf_kmap. So the following
> > > >>> dma_buf_kmap contract can't be satisfied.
> > > >>>
> > > >>> /**
> > > >>> * dma_buf_kmap - Map a page of the buffer object into kernel address
> > > >>> space. The
> > > >>> * same restrictions as for kmap and friends apply.
> > > >>> * @dmabuf:[in]buffer to map page from.
> > > >>> * @page_num:  [in]page in PAGE_SIZE units to map.
> > > >>> *
> > > >>> * This call must always succeed, any necessary preparations that might
> > > >>> fail
> > > >>> * need to be done in begin_cpu_access.
> > > >>> */
> > > >>>
> > > >>> But hopefully we can work around this by moving clients to 
> > > >>> dma_buf_vmap.
> > > >> I think the problem is with the contract. We can't ensure that the call
> > > >> always succeeds regardless of the implementation - any mapping might
> > > >> fail. Probably this is why  *all* clients of dma_buf_kmap() check the
> > > >> return value (so it's safe to return NULL in case of failure).
> > > >>
> > > >
> > > > I think currently the call to dma_buf_kmap will always succeed since the
> > > > DMA-Buf contract requires that the client first successfully call
> > > > dma_buf_begin_cpu_access(), and if dma_buf_begin_cpu_access() succeeds
> > > > then dma_buf_kmap will succeed.
> > > >
> > > >> I would suggest to fix the contract and to keep the dma_buf_kmap()
> > > >> support in ION.
> > > >
> > > > I will leave it to the DMA-Buf maintainers as to whether they want to
> > > > change their contract.

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-29 Thread Liam Mark
On Fri, 18 Jan 2019, Liam Mark wrote:

> On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> 
> > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > The ION begin_cpu_access and end_cpu_access functions use the
> > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > maintenance.
> > > 
> > > Currently it is possible to apply cache maintenance, via the
> > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > dma mapped.
> > > 
> > > The dma sync sg APIs should not be called on sg lists which have not been
> > > dma mapped as this can result in cache maintenance being applied to the
> > > wrong address. If an sg list has not been dma mapped then its dma_address
> > > field has not been populated; some dma ops, such as swiotlb_dma_ops,
> > > use the dma_address field to calculate the address onto which to apply
> > > cache maintenance.
> > > 
> > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > dma mapped as the memory should already be coherent for access from the
> > > CPU. Any CMOs required for device access are taken care of in the
> > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > > apply CMOs if the buffer is dma mapped.
> > > 
> > > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > > cache maintenance to buffers which are dma mapped.
> > > 
> > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
> > > and mapping")
> > > Signed-off-by: Liam Mark 
> > > ---
> > >  drivers/staging/android/ion/ion.c | 26 +-
> > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/staging/android/ion/ion.c 
> > > b/drivers/staging/android/ion/ion.c
> > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > --- a/drivers/staging/android/ion/ion.c
> > > +++ b/drivers/staging/android/ion/ion.c
> > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > >   struct device *dev;
> > >   struct sg_table *table;
> > >   struct list_head list;
> > > + bool dma_mapped;
> > >  };
> > >  
> > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > >  
> > >   a->table = table;
> > >   a->dev = attachment->dev;
> > > + a->dma_mapped = false;
> > >   INIT_LIST_HEAD(&a->list);
> > >  
> > >   attachment->priv = a;
> > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > >  {
> > >   struct ion_dma_buf_attachment *a = attachment->priv;
> > >   struct sg_table *table;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > >  
> > >   table = a->table;
> > >  
> > > + mutex_lock(&buffer->lock);
> > >   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > - direction))
> > > + direction)) {
> > > + mutex_unlock(&buffer->lock);
> > >   return ERR_PTR(-ENOMEM);
> > > + }
> > > + a->dma_mapped = true;
> > > + mutex_unlock(&buffer->lock);
> > >  
> > >   return table;
> > >  }
> > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > > struct sg_table *table,
> > > enum dma_data_direction direction)
> > >  {
> > > + struct ion_dma_buf_attachment *a = attachment->priv;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > +
> > > + mutex_lock(&buffer->lock);
> > >   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > > + a->dma_mapped = false;
> > > + mutex_unlock(&buffer->lock);
> > >  }
> > >  
> > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > dma_buf *dmabuf,
> > >  
> > >   mutex_lock(&buffer->lock);
> > >   list_for_each_entry(a, &buffer->attachments, list) {
> > 
> > When no devices are attached then buffer->attachments is empty and th

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-22 Thread Liam Mark
On Mon, 21 Jan 2019, Brian Starkey wrote:

> Hi,
> 
> Sorry for being a bit sporadic on this. I was out travelling last week
> with little time for email.
> 
> On Fri, Jan 18, 2019 at 11:16:31AM -0600, Andrew F. Davis wrote:
> > On 1/17/19 7:11 PM, Liam Mark wrote:
> > > On Thu, 17 Jan 2019, Andrew F. Davis wrote:
> > > 
> > >> On 1/16/19 4:54 PM, Liam Mark wrote:
> > >>> On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> > >>>
> > >>>> On 1/16/19 9:19 AM, Brian Starkey wrote:
> > >>>>> Hi :-)
> > >>>>>
> > >>>>> On Tue, Jan 15, 2019 at 12:40:16PM -0600, Andrew F. Davis wrote:
> > >>>>>> On 1/15/19 12:38 PM, Andrew F. Davis wrote:
> > >>>>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> > >>>>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> > >>>>>>>>
> > >>>>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> > >>>>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> > >>>>>>>>>>
> > >>>>>>>>>>> Buffers may not be mapped from the CPU so skip cache 
> > >>>>>>>>>>> maintenance here.
> > >>>>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> > >>>>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be 
> > >>>>>>>>>>> needed anyway.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Signed-off-by: Andrew F. Davis 
> > >>>>>>>>>>> ---
> > >>>>>>>>>>>  drivers/staging/android/ion/ion.c | 7 ---
> > >>>>>>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
> > >>>>>>>>>>>
> > >>>>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c 
> > >>>>>>>>>>> b/drivers/staging/android/ion/ion.c
> > >>>>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> > >>>>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> > >>>>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> > >>>>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table 
> > >>>>>>>>>>> *ion_map_dma_buf(struct dma_buf_attachment *attachment,
> > >>>>>>>>>>>  
> > >>>>>>>>>>> table = a->table;
> > >>>>>>>>>>>  
> > >>>>>>>>>>> -   if (!dma_map_sg(attachment->dev, table->sgl, 
> > >>>>>>>>>>> table->nents,
> > >>>>>>>>>>> -   direction))
> > >>>>>>>>>>> +   if (!dma_map_sg_attrs(attachment->dev, table->sgl, 
> > >>>>>>>>>>> table->nents,
> > >>>>>>>>>>> + direction, 
> > >>>>>>>>>>> DMA_ATTR_SKIP_CPU_SYNC))
> > >>>>>>>>>>
> > >>>>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> > >>>>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache 
> > >>>>>>>>>> maintenance.
> > >>>>>>>>>> If the calls to {begin,end}_cpu_access were made before the call 
> > >>>>>>>>>> to 
> > >>>>>>>>>> dma_buf_attach then there won't have been a device attached so 
> > >>>>>>>>>> the calls 
> > >>>>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> > >>>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> That should be okay though, if you have no attachments (or all
> > >>>>>>>>> attachments are IO-coherent) then there is no need for cache
> > >>>>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent 

Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-22 Thread Liam Mark
On Tue, 22 Jan 2019, Andrew F. Davis wrote:

> On 1/21/19 4:12 PM, Liam Mark wrote:
> > On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> > 
> >> On Mon, Jan 21, 2019 at 11:44:10AM -0800, Liam Mark wrote:
> >>> The main use case is for allowing clients to pass in 
> >>> DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
> >>> which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In 
> >>> ION the buffers aren't usually accessed from the CPU so this allows 
> >>> clients to often avoid doing unnecessary cache maintenance.
> >>
> >> This can't work.  The cpu can still easily speculate into this area.
> > 
> > Can you provide more detail on your concern here.
> > The use case I am thinking about here is a cached buffer which is accessed 
> > by a non IO-coherent device (quite a common use case for ION).
> > 
> > Guessing on your concern:
> > The speculative access can be an issue if you are going to access the 
> > buffer from the CPU after the device has written to it, however if you 
> > know you aren't going to do any CPU access before the buffer is again 
> > returned to the device then I don't think the speculative access is a 
> > concern.
> > 
> >> Moreover in general these operations should be cheap if the addresses
> >> aren't cached.
> >>
> > 
> > I am thinking of use cases with cached buffers here, so CMO isn't cheap.
> > 
> 
> These buffers are cacheable, not cached, if you haven't written anything
> the data wont actually be in cache. 

That's true

> And in the case of speculative cache
> filling the lines are marked clean. In either case the only cost is the
> little 7 instruction loop calling the clean/invalidate instruction (dc
> civac for ARMv8) for the cache-lines. Unless that is the cost you are
> trying to avoid?
> 

This is the cost I am trying to avoid, and this comes back to our previous 
discussion. We have a coherent system cache, so if you are doing this for 
every cache line on a large buffer, the work, and the resulting traffic out 
to the bus, adds up.
For example I believe 1080P buffers are 8MB, and 4K buffers are even 
larger.
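To put a rough number on it (simple arithmetic, assuming 64-byte cache 
lines): an 8MB buffer is 8 * 1024 * 1024 / 64 = 131072 lines, so on the 
order of 131k clean/invalidate operations for a single map or unmap, and 
roughly double that when the buffer is mapped and unmapped every frame.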

I also still think you would want to solve this properly such that 
invalidates aren't being done unnecessarily.

> In that case if you are mapping and unmapping so much that the little
> CMO here is hurting performance then I would argue your usage is broken
> and needs to be re-worked a bit.
> 

I am not sure I would say it is broken; the large buffers (for example 1080P 
buffers) are mapped and unmapped on every frame. I don't think there is 
any clean way to avoid that in a pipelining framework: you could ask 
clients to keep the buffers dma mapped, but there isn't necessarily a good 
time to tell them to unmap. 

It would be unfortunate not to consider this something legitimate for 
userspace to do in a pipelining use case.
Requiring devices to stay attached doesn't seem very clean to me, as there 
isn't necessarily a nice place to tell them when to detach.


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-22 Thread Liam Mark
On Tue, 22 Jan 2019, Andrew F. Davis wrote:

> On 1/21/19 4:18 PM, Liam Mark wrote:
> > On Mon, 21 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/21/19 2:20 PM, Liam Mark wrote:
> >>> On Mon, 21 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/21/19 1:44 PM, Liam Mark wrote:
> >>>>> On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> >>>>>
> >>>>>> On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> >>>>>>>> And who is going to decide which ones to pass?  And who documents
> >>>>>>>> which ones are safe?
> >>>>>>>>
> >>>>>>>> I'd much rather have explicit, well documented dma-buf flags that
> >>>>>>>> might get translated to the DMA API flags, which are not error 
> >>>>>>>> checked,
> >>>>>>>> not very well documented and way to easy to get wrong.
> >>>>>>>>
> >>>>>>>
> >>>>>>> I'm not sure having flags in dma-buf really solves anything
> >>>>>>> given drivers can use the attributes directly with dma_map
> >>>>>>> anyway, which is what we're looking to do. The intention
> >>>>>>> is for the driver creating the dma_buf attachment to have
> >>>>>>> the knowledge of which flags to use.
> >>>>>>
> >>>>>> Well, there are very few flags that you can simply use for all calls of
> >>>>>> dma_map*.  And given how badly these flags are defined I just don't 
> >>>>>> want
> >>>>>> people to add more places where they indirectly use these flags, as
> >>>>>> it will be more than enough work to clean up the current mess.
> >>>>>>
> >>>>>> What flag(s) do you want to pass this way, btw?  Maybe that is where
> >>>>>> the problem is.
> >>>>>>
> >>>>>
> >>>>> The main use case is for allowing clients to pass in 
> >>>>> DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
> >>>>> which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. 
> >>>>> In 
> >>>>> ION the buffers aren't usually accessed from the CPU so this allows 
> >>>>> clients to often avoid doing unnecessary cache maintenance.
> >>>>>
> >>>>
> >>>> How can a client know that no CPU access has occurred that needs to be
> >>>> flushed out?
> >>>>
> >>>
> >>> I have left this to clients, but if they own the buffer they can have the 
> >>> knowledge as to whether CPU access is needed in that use case (example 
> >>> for 
> >>> post-processing).
> >>>
> >>> For example with the previous version of ION we left all decisions of 
> >>> whether cache maintenance was required up to the client, they would use 
> >>> the ION cache maintenance IOCTL to force cache maintenance only when it 
> >>> was required.
> >>> In these cases almost all of the access was being done by the device and 
> >>> in the rare cases CPU access was required clients would initiate the 
> >>> required cache maintenance before and after the CPU access.
> >>>
> >>
> >> I think we have different definitions of "client", I'm talking about the
> >> DMA-BUF client (the importer), that is who can set this flag. It seems
> >> you mean the userspace application, which has no control over this flag.
> >>
> > 
> > I am also talking about dma-buf clients, I am referring to both the 
> > userspace and kernel component of the client. For example our Camera ION 
> > client has both a userspace and kernel component and they have ION 
> > buffers, which they control the access to, which may or may not be 
> > accessed by the CPU in certain use cases.
> > 
> 
> I know they often work together, but for this discussion it would be
> good to keep kernel clients and usperspace clients separate. There are
> three types of actors at play here, userspace clients, kernel clients,
> and exporters.
> 
> DMA-BUF only provides the basic sync primitive + mmap directly to
> userspace, 

Well dma-buf does provide dma_buf_kmap/dma_buf_begin_cpu_access, which 
allows the same functionality in the kernel, but I don't think that

Re: [PATCH 4/4] staging: android: ion: Support for mapping with dma mapping attributes

2019-01-22 Thread Liam Mark
On Mon, 21 Jan 2019, Brian Starkey wrote:

> Hi Liam,
> 
> On Fri, Jan 18, 2019 at 10:37:47AM -0800, Liam Mark wrote:
> > Add support for configuring dma mapping attributes when mapping
> > and unmapping memory through dma_buf_map_attachment and
> > dma_buf_unmap_attachment.
> > 
> > For example this will allow ION clients to skip cache maintenance, by
> > using DMA_ATTR_SKIP_CPU_SYNC, for buffers which are clean and haven't been
> > accessed by the CPU.
> 
> How can a client know that the buffer won't be accessed by the CPU in
> the future though?
> 
Yes, for use cases where you don't know if the buffer will be accessed in 
the future you would only use it to optimize the dma map path, but as I 
mentioned in the other thread there are cases (such as in our Camera) 
where we have complete ownership of the buffers and do know whether they 
will be accessed in the future.

> I don't think we can push this decision to clients, because they are
> lacking information about what else is going on with the buffer. It
> needs to be done by the exporter, IMO.
> 

I do agree it would be better to handle this in the exporter, but in a 
pipelining use case where there might not be any devices attached that 
doesn't seem very doable.

> Thanks,
> -Brian
> 
> > 
> > Signed-off-by: Liam Mark 
> > ---
> >  drivers/staging/android/ion/ion.c | 7 ---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/staging/android/ion/ion.c 
> > b/drivers/staging/android/ion/ion.c
> > index 1fe633a7fdba..0aae845b20ba 100644
> > --- a/drivers/staging/android/ion/ion.c
> > +++ b/drivers/staging/android/ion/ion.c
> > @@ -268,8 +268,8 @@ static struct sg_table *ion_map_dma_buf(struct 
> > dma_buf_attachment *attachment,
> > table = a->table;
> >  
> > mutex_lock(&buffer->lock);
> > -   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > -   direction)) {
> > +   if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> > + direction, attachment->dma_map_attrs)) {
> > mutex_unlock(&buffer->lock);
> > return ERR_PTR(-ENOMEM);
> > }
> > @@ -287,7 +287,8 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment 
> > *attachment,
> > struct ion_buffer *buffer = attachment->dmabuf->priv;
> >  
> > mutex_lock(&buffer->lock);
> > -   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > +   dma_unmap_sg_attrs(attachment->dev, table->sgl, table->nents, direction,
> > +  attachment->dma_map_attrs);
> > a->dma_mapped = false;
> > mutex_unlock(&buffer->lock);
> >  }
> > -- 
> > Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
> > a Linux Foundation Collaborative Project
> > 
> > ___
> > dri-devel mailing list
> > dri-de...@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Andrew F. Davis wrote:

> On 1/21/19 2:20 PM, Liam Mark wrote:
> > On Mon, 21 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/21/19 1:44 PM, Liam Mark wrote:
> >>> On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> >>>
> >>>> On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> >>>>>> And who is going to decide which ones to pass?  And who documents
> >>>>>> which ones are safe?
> >>>>>>
> >>>>>> I'd much rather have explicit, well documented dma-buf flags that
> >>>>>> might get translated to the DMA API flags, which are not error checked,
> >>>>>> not very well documented and way to easy to get wrong.
> >>>>>>
> >>>>>
> >>>>> I'm not sure having flags in dma-buf really solves anything
> >>>>> given drivers can use the attributes directly with dma_map
> >>>>> anyway, which is what we're looking to do. The intention
> >>>>> is for the driver creating the dma_buf attachment to have
> >>>>> the knowledge of which flags to use.
> >>>>
> >>>> Well, there are very few flags that you can simply use for all calls of
> >>>> dma_map*.  And given how badly these flags are defined I just don't want
> >>>> people to add more places where they indirectly use these flags, as
> >>>> it will be more than enough work to clean up the current mess.
> >>>>
> >>>> What flag(s) do you want to pass this way, btw?  Maybe that is where
> >>>> the problem is.
> >>>>
> >>>
> >>> The main use case is for allowing clients to pass in 
> >>> DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
> >>> which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In 
> >>> ION the buffers aren't usually accessed from the CPU so this allows 
> >>> clients to often avoid doing unnecessary cache maintenance.
> >>>
> >>
> >> How can a client know that no CPU access has occurred that needs to be
> >> flushed out?
> >>
> > 
> > I have left this to clients, but if they own the buffer they can have the 
> > knowledge as to whether CPU access is needed in that use case (example for 
> > post-processing).
> > 
> > For example with the previous version of ION we left all decisions of 
> > whether cache maintenance was required up to the client, they would use 
> > the ION cache maintenance IOCTL to force cache maintenance only when it 
> > was required.
> > In these cases almost all of the access was being done by the device and 
> > in the rare cases CPU access was required clients would initiate the 
> > required cache maintenance before and after the CPU access.
> > 
> 
> I think we have different definitions of "client", I'm talking about the
> DMA-BUF client (the importer), that is who can set this flag. It seems
> you mean the userspace application, which has no control over this flag.
> 

I am also talking about dma-buf clients; I am referring to both the 
userspace and kernel component of the client. For example our Camera ION 
client has both a userspace and kernel component and they have ION 
buffers, which they control the access to, and which may or may not be 
accessed by the CPU in certain use cases.

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Christoph Hellwig wrote:

> On Mon, Jan 21, 2019 at 12:20:42PM -0800, Liam Mark wrote:
> > I have left this to clients, but if they own the buffer they can have the 
> > knowledge as to whether CPU access is needed in that use case (example for 
> > post-processing).
> 
> That is an API design which the user is more likely to get wrong than
> right and thus does not pass the smell test.
> 

With the previous version of ION, Android ION clients were successfully 
managing all their cache maintenance.



Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Christoph Hellwig wrote:

> On Mon, Jan 21, 2019 at 11:44:10AM -0800, Liam Mark wrote:
> > The main use case is for allowing clients to pass in 
> > DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
> > which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In 
> > ION the buffers aren't usually accessed from the CPU so this allows 
> > clients to often avoid doing unnecessary cache maintenance.
> 
> This can't work.  The cpu can still easily speculate into this area.

Can you provide more detail on your concern here?
The use case I am thinking about here is a cached buffer which is accessed 
by a non IO-coherent device (quite a common use case for ION).

Guessing on your concern:
The speculative access can be an issue if you are going to access the 
buffer from the CPU after the device has written to it; however, if you 
know you aren't going to do any CPU access before the buffer is again 
returned to the device then I don't think the speculative access is a 
concern.

> Moreover in general these operations should be cheap if the addresses
> aren't cached.
> 

I am thinking of use cases with cached buffers here, so CMO isn't cheap.


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Andrew F. Davis wrote:

> On 1/21/19 1:44 PM, Liam Mark wrote:
> > On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> > 
> >> On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> >>>> And who is going to decide which ones to pass?  And who documents
> >>>> which ones are safe?
> >>>>
> >>>> I'd much rather have explicit, well documented dma-buf flags that
> >>>> might get translated to the DMA API flags, which are not error checked,
> >>>> not very well documented and way to easy to get wrong.
> >>>>
> >>>
> >>> I'm not sure having flags in dma-buf really solves anything
> >>> given drivers can use the attributes directly with dma_map
> >>> anyway, which is what we're looking to do. The intention
> >>> is for the driver creating the dma_buf attachment to have
> >>> the knowledge of which flags to use.
> >>
> >> Well, there are very few flags that you can simply use for all calls of
> >> dma_map*.  And given how badly these flags are defined I just don't want
> >> people to add more places where they indirectly use these flags, as
> >> it will be more than enough work to clean up the current mess.
> >>
> >> What flag(s) do you want to pass this way, btw?  Maybe that is where
> >> the problem is.
> >>
> > 
> > The main use case is for allowing clients to pass in 
> > DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
> > which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In 
> > ION the buffers aren't usually accessed from the CPU so this allows 
> > clients to often avoid doing unnecessary cache maintenance.
> > 
> 
> How can a client know that no CPU access has occurred that needs to be
> flushed out?
> 

I have left this to clients, but if they own the buffer they can have the 
knowledge as to whether CPU access is needed in that use case (for example, 
post-processing).

For example with the previous version of ION we left all decisions of 
whether cache maintenance was required up to the client, they would use 
the ION cache maintenance IOCTL to force cache maintenance only when it 
was required.
In these cases almost all of the access was being done by the device and 
in the rare cases CPU access was required clients would initiate the 
required cache maintenance before and after the CPU access.

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-21 Thread Liam Mark
On Fri, 18 Jan 2019, Andrew F. Davis wrote:

> On 1/17/19 7:11 PM, Liam Mark wrote:
> > On Thu, 17 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/16/19 4:54 PM, Liam Mark wrote:
> >>> On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/16/19 9:19 AM, Brian Starkey wrote:
> >>>>> Hi :-)
> >>>>>
> >>>>> On Tue, Jan 15, 2019 at 12:40:16PM -0600, Andrew F. Davis wrote:
> >>>>>> On 1/15/19 12:38 PM, Andrew F. Davis wrote:
> >>>>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>
> >>>>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance 
> >>>>>>>>>>> here.
> >>>>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed 
> >>>>>>>>>>> anyway.
> >>>>>>>>>>>
> >>>>>>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>>>>>> ---
> >>>>>>>>>>>  drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>>>>
> >>>>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c 
> >>>>>>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table 
> >>>>>>>>>>> *ion_map_dma_buf(struct dma_buf_attachment *attachment,
> >>>>>>>>>>>  
> >>>>>>>>>>>   table = a->table;
> >>>>>>>>>>>  
> >>>>>>>>>>> - if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>>> - direction))
> >>>>>>>>>>> + if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>>> +   direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>>>>
> >>>>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache 
> >>>>>>>>>> maintenance.
> >>>>>>>>>> If the calls to {begin,end}_cpu_access were made before the call 
> >>>>>>>>>> to 
> >>>>>>>>>> dma_buf_attach then there won't have been a device attached so the 
> >>>>>>>>>> calls 
> >>>>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> That should be okay though, if you have no attachments (or all
> >>>>>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent 
> >>>>>>>>> device
> >>>>>>>>> is attached later after data has already been written. Does that
> >>>>>>>>> sequence need supporting? 
> >>>>>>>>
> >>>>>>>> Yes, but also I think there are cases where CPU access can happen 
> >>>>>>>> before 
> >>>>>>>> in Android, but I will focus on later for now.
> >>>>>>>>
> >>>>>>>>> DMA-BUF doesn't have to allocate the backing
> >>>>>>>>> memory until map_dma_buf() ti

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Andrew F. Davis wrote:

> On 1/18/19 3:43 PM, Liam Mark wrote:
> > On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/17/19 7:04 PM, Liam Mark wrote:
> >>> On Thu, 17 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/16/19 4:48 PM, Liam Mark wrote:
> >>>>> On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> >>>>>
> >>>>>> On 1/15/19 1:05 PM, Laura Abbott wrote:
> >>>>>>> On 1/15/19 10:38 AM, Andrew F. Davis wrote:
> >>>>>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>
> >>>>>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance
> >>>>>>>>>>>> here.
> >>>>>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed
> >>>>>>>>>>>> anyway.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>>>>>>> ---
> >>>>>>>>>>>>   drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>>>>>>   1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>>>>>
> >>>>>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table 
> >>>>>>>>>>>> *ion_map_dma_buf(struct
> >>>>>>>>>>>> dma_buf_attachment *attachment,
> >>>>>>>>>>>>     table = a->table;
> >>>>>>>>>>>>   -    if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>>>> -    direction))
> >>>>>>>>>>>> +    if (!dma_map_sg_attrs(attachment->dev, table->sgl, 
> >>>>>>>>>>>> table->nents,
> >>>>>>>>>>>> +  direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>>>>>
> >>>>>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache
> >>>>>>>>>>> maintenance.
> >>>>>>>>>>> If the calls to {begin,end}_cpu_access were made before the call 
> >>>>>>>>>>> to
> >>>>>>>>>>> dma_buf_attach then there won't have been a device attached so the
> >>>>>>>>>>> calls
> >>>>>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> That should be okay though, if you have no attachments (or all
> >>>>>>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent 
> >>>>>>>>>> device
> >>>>>>>>>> is attached later after data has already been written. Does that
> >>>>>>>>>> sequence need supporting?
> >>>>>>>>>
> >>>>>>>>> Yes, but also I think there are cases where CPU access can happen 
> >>>>>>>>> before
> >>>>>>>>> in Android, but I will 

Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-21 Thread Liam Mark
On Mon, 21 Jan 2019, Christoph Hellwig wrote:

> On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> > > And who is going to decide which ones to pass?  And who documents
> > > which ones are safe?
> > > 
> > > I'd much rather have explicit, well documented dma-buf flags that
> > > might get translated to the DMA API flags, which are not error checked,
> > > not very well documented and way to easy to get wrong.
> > > 
> > 
> > I'm not sure having flags in dma-buf really solves anything
> > given drivers can use the attributes directly with dma_map
> > anyway, which is what we're looking to do. The intention
> > is for the driver creating the dma_buf attachment to have
> > the knowledge of which flags to use.
> 
> Well, there are very few flags that you can simply use for all calls of
> dma_map*.  And given how badly these flags are defined I just don't want
> people to add more places where they indirectly use these flags, as
> it will be more than enough work to clean up the current mess.
> 
> What flag(s) do you want to pass this way, btw?  Maybe that is where
> the problem is.
> 

The main use case is for allowing clients to pass in 
DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance 
which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In 
ION the buffers aren't usually accessed from the CPU so this allows 
clients to often avoid doing unnecessary cache maintenance.


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-18 Thread Liam Mark
On Fri, 18 Jan 2019, Andrew F. Davis wrote:

> On 1/17/19 7:04 PM, Liam Mark wrote:
> > On Thu, 17 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/16/19 4:48 PM, Liam Mark wrote:
> >>> On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/15/19 1:05 PM, Laura Abbott wrote:
> >>>>> On 1/15/19 10:38 AM, Andrew F. Davis wrote:
> >>>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>
> >>>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>>
> >>>>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance
> >>>>>>>>>> here.
> >>>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed
> >>>>>>>>>> anyway.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>>>>> ---
> >>>>>>>>>>   drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>>>>   1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>>>
> >>>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c
> >>>>>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct
> >>>>>>>>>> dma_buf_attachment *attachment,
> >>>>>>>>>>     table = a->table;
> >>>>>>>>>>   -    if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>>>> -    direction))
> >>>>>>>>>> +    if (!dma_map_sg_attrs(attachment->dev, table->sgl, 
> >>>>>>>>>> table->nents,
> >>>>>>>>>> +  direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>>>
> >>>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache
> >>>>>>>>> maintenance.
> >>>>>>>>> If the calls to {begin,end}_cpu_access were made before the call to
> >>>>>>>>> dma_buf_attach then there won't have been a device attached so the
> >>>>>>>>> calls
> >>>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> That should be okay though, if you have no attachments (or all
> >>>>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent 
> >>>>>>>> device
> >>>>>>>> is attached later after data has already been written. Does that
> >>>>>>>> sequence need supporting?
> >>>>>>>
> >>>>>>> Yes, but also I think there are cases where CPU access can happen 
> >>>>>>> before
> >>>>>>> in Android, but I will focus on later for now.
> >>>>>>>
> >>>>>>>> DMA-BUF doesn't have to allocate the backing
> >>>>>>>> memory until map_dma_buf() time, and that should only happen after 
> >>>>>>>> all
> >>>>>>>> the devices have attached so it can know where to put the buffer. So 
> >>>>>>>> we
> >>>>>>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>>>>>> attached and mapped, right?
> >>>>>>>>

Re: [PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-18 Thread Liam Mark
On Fri, 18 Jan 2019, Laura Abbott wrote:

> On 1/18/19 10:37 AM, Liam Mark wrote:
> > Add support for configuring dma mapping attributes when mapping
> > and unmapping memory through dma_buf_map_attachment and
> > dma_buf_unmap_attachment.
> > 
> > Signed-off-by: Liam Mark 
> > ---
> >   include/linux/dma-buf.h | 3 +++
> >   1 file changed, 3 insertions(+)
> > 
> > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > index 58725f890b5b..59bf33e09e2d 100644
> > --- a/include/linux/dma-buf.h
> > +++ b/include/linux/dma-buf.h
> > @@ -308,6 +308,8 @@ struct dma_buf {
> >* @dev: device attached to the buffer.
> >* @node: list of dma_buf_attachment.
> >* @priv: exporter specific attachment data.
> > + * @dma_map_attrs: DMA mapping attributes to be used in
> > + *dma_buf_map_attachment() and dma_buf_unmap_attachment().
> >*
> >* This structure holds the attachment information between the dma_buf
> > buffer
> >* and its user device(s). The list contains one attachment struct per
> > device
> > @@ -323,6 +325,7 @@ struct dma_buf_attachment {
> > struct device *dev;
> > struct list_head node;
> > void *priv;
> > +   unsigned long dma_map_attrs;
> >   };
> > /**
> > 
> 
> Did you miss part of this patch? This only adds it to the structure but
> doesn't
> add it to any API. The same comment applies to the follow up patch,
> I don't quite see how it's being used.
> 

Were you asking for a cleaner DMA-buf API to set this field or were you 
asking for a change to an upstream client to make use of this field?

I have clients set the dma_map_attrs field directly on their
dma_buf_attachment struct before calling dma_buf_map_attachment (if they
need this functionality).
Of course this is all being used in Android for out of tree drivers, but
I assume it is just as useful to everyone else who has cached ION buffers
which aren't always accessed by the CPU.
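Concretely, a kernel client using this looks roughly like the below (a 
sketch only; the device, direction and error handling are placeholders, 
and dma_map_attrs is the field added in patch 3/4):

	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dmabuf, dev);
	/* buffer is known to be clean, so skip CMO on map/unmap */
	attach->dma_map_attrs = DMA_ATTR_SKIP_CPU_SYNC;

	sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
	/* ... device access ... */
	dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE);
	dma_buf_detach(dmabuf, attach);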

My understanding is that AOSP Android on HiKey 960 is also currently
suffering from too many CMOs due to dma_buf_map_attachment always applying
CMOs, so this support should help them avoid it.

> Thanks,
> Laura
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-18 Thread Liam Mark
On Fri, 18 Jan 2019, Andrew F. Davis wrote:

> On 1/18/19 12:37 PM, Liam Mark wrote:
> > The ION begin_cpu_access and end_cpu_access functions use the
> > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > maintenance.
> > 
> > Currently it is possible to apply cache maintenance, via the
> > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > dma mapped.
> > 
> > The dma sync sg APIs should not be called on sg lists which have not been
> > dma mapped as this can result in cache maintenance being applied to the
> > wrong address. If an sg list has not been dma mapped then its dma_address
> > field has not been populated; some dma ops, such as swiotlb_dma_ops,
> > use the dma_address field to calculate the address onto which to apply
> > cache maintenance.
> > 
> > Also I don’t think we want CMOs to be applied to a buffer which is not
> > dma mapped as the memory should already be coherent for access from the
> > CPU. Any CMOs required for device access are taken care of in the
> > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > apply CMOs if the buffer is dma mapped.
> > 
> > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > cache maintenance to buffers which are dma mapped.
> > 
> > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
> > and mapping")
> > Signed-off-by: Liam Mark 
> > ---
> >  drivers/staging/android/ion/ion.c | 26 +-
> >  1 file changed, 21 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/staging/android/ion/ion.c 
> > b/drivers/staging/android/ion/ion.c
> > index 6f5afab7c1a1..1fe633a7fdba 100644
> > --- a/drivers/staging/android/ion/ion.c
> > +++ b/drivers/staging/android/ion/ion.c
> > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > struct device *dev;
> > struct sg_table *table;
> > struct list_head list;
> > +   bool dma_mapped;
> >  };
> >  
> >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> >  
> > a->table = table;
> > a->dev = attachment->dev;
> > +   a->dma_mapped = false;
> > INIT_LIST_HEAD(&a->list);
> >  
> > attachment->priv = a;
> > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > dma_buf_attachment *attachment,
> >  {
> > struct ion_dma_buf_attachment *a = attachment->priv;
> > struct sg_table *table;
> > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> >  
> > table = a->table;
> >  
> > +   mutex_lock(&buffer->lock);
> > if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > -   direction))
> > +   direction)) {
> > +   mutex_unlock(&buffer->lock);
> > return ERR_PTR(-ENOMEM);
> > +   }
> > +   a->dma_mapped = true;
> > +   mutex_unlock(&buffer->lock);
> >  
> > return table;
> >  }
> > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > dma_buf_attachment *attachment,
> >   struct sg_table *table,
> >   enum dma_data_direction direction)
> >  {
> > +   struct ion_dma_buf_attachment *a = attachment->priv;
> > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > +
> > +   mutex_lock(&buffer->lock);
> > dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > +   a->dma_mapped = false;
> > +   mutex_unlock(&buffer->lock);
> >  }
> >  
> >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
> > *dmabuf,
> >  
> > mutex_lock(&buffer->lock);
> > list_for_each_entry(a, &buffer->attachments, list) {
> 
> When no devices are attached then buffer->attachments is empty and the
> below does not run, so if I understand this patch correctly then what
> you are protecting against is CPU access in the window after
> dma_buf_attach but before dma_buf_map.
> 

Yes

> This is the kind of thing that again makes me think a couple more
> ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> the backing memory to be allocated until map time, this is why the
> dma_address field would still be null

[PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-18 Thread Liam Mark
The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
field has not been populated; some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address onto which to apply
cache maintenance.

Also I don’t think we want CMOs to be applied to a buffer which is not
dma mapped as the memory should already be coherent for access from the
CPU. Any CMOs required for device access are taken care of in the
dma_buf_map_attachment and dma_buf_unmap_attachment calls.
So really it only makes sense for begin_cpu_access and end_cpu_access to
apply CMOs if the buffer is dma mapped.

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which are dma mapped.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and 
mapping")
Signed-off-by: Liam Mark 
---
 drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c 
b/drivers/staging/android/ion/ion.c
index 6f5afab7c1a1..1fe633a7fdba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
+   bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 
a->table = table;
a->dev = attachment->dev;
+   a->dma_mapped = false;
INIT_LIST_HEAD(&a->list);
 
attachment->priv = a;
@@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
dma_buf_attachment *attachment,
 {
struct ion_dma_buf_attachment *a = attachment->priv;
struct sg_table *table;
+   struct ion_buffer *buffer = attachment->dmabuf->priv;
 
table = a->table;
 
+   mutex_lock(&buffer->lock);
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-   direction))
+   direction)) {
+   mutex_unlock(&buffer->lock);
return ERR_PTR(-ENOMEM);
+   }
+   a->dma_mapped = true;
+   mutex_unlock(&buffer->lock);
 
return table;
 }
@@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment 
*attachment,
  struct sg_table *table,
  enum dma_data_direction direction)
 {
+   struct ion_dma_buf_attachment *a = attachment->priv;
+   struct ion_buffer *buffer = attachment->dmabuf->priv;
+
+   mutex_lock(&buffer->lock);
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+   a->dma_mapped = false;
+   mutex_unlock(&buffer->lock);
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
*dmabuf,
 
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
-   dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-   direction);
+   if (a->dma_mapped)
+   dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+   a->table->nents, direction);
}
 
 unlock:
@@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf 
*dmabuf,
 
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
-   dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-  direction);
+   if (a->dma_mapped)
+   dma_sync_sg_for_device(a->dev, a->table->sgl,
+  a->table->nents, direction);
}
mutex_unlock(&buffer->lock);
 
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project



[PATCH 4/4] staging: android: ion: Support for mapping with dma mapping attributes

2019-01-18 Thread Liam Mark
Add support for configuring dma mapping attributes when mapping
and unmapping memory through dma_buf_map_attachment and
dma_buf_unmap_attachment.

For example this will allow ION clients to skip cache maintenance, by
using DMA_ATTR_SKIP_CPU_SYNC, for buffers which are clean and haven't been
accessed by the CPU.

Signed-off-by: Liam Mark 
---
 drivers/staging/android/ion/ion.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c 
b/drivers/staging/android/ion/ion.c
index 1fe633a7fdba..0aae845b20ba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -268,8 +268,8 @@ static struct sg_table *ion_map_dma_buf(struct 
dma_buf_attachment *attachment,
table = a->table;
 
mutex_lock(&buffer->lock);
-   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-   direction)) {
+   if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
+ direction, attachment->dma_map_attrs)) {
mutex_unlock(&buffer->lock);
return ERR_PTR(-ENOMEM);
}
@@ -287,7 +287,8 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment 
*attachment,
struct ion_buffer *buffer = attachment->dmabuf->priv;
 
mutex_lock(&buffer->lock);
-   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+   dma_unmap_sg_attrs(attachment->dev, table->sgl, table->nents, direction,
+  attachment->dma_map_attrs);
a->dma_mapped = false;
mutex_unlock(&buffer->lock);
 }
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project



[PATCH 1/4] staging: android: ion: Support cpu access during dma_buf_detach

2019-01-18 Thread Liam Mark
Often userspace doesn't know when the kernel will be calling dma_buf_detach
on the buffer.
If userspace starts its CPU access at the same time as the sg list is being
freed, it could end up accessing the sg list after it has been freed.

Thread A                            Thread B
- DMA_BUF_IOCTL_SYNC IOCTL
 - ion_dma_buf_begin_cpu_access
  - list_for_each_entry
                                    - ion_dma_buf_detatch
                                     - free_duped_table
   - dma_sync_sg_for_cpu

Fix this by getting the ion_buffer lock before freeing the sg table memory.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and 
mapping")
Signed-off-by: Liam Mark 
---
 drivers/staging/android/ion/ion.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/android/ion/ion.c 
b/drivers/staging/android/ion/ion.c
index a0802de8c3a1..6f5afab7c1a1 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -248,10 +248,10 @@ static void ion_dma_buf_detatch(struct dma_buf *dmabuf,
struct ion_dma_buf_attachment *a = attachment->priv;
struct ion_buffer *buffer = dmabuf->priv;
 
-   free_duped_table(a->table);
mutex_lock(&buffer->lock);
list_del(&a->list);
mutex_unlock(&buffer->lock);
+   free_duped_table(a->table);
 
kfree(a);
 }
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project



[PATCH 3/4] dma-buf: add support for mapping with dma mapping attributes

2019-01-18 Thread Liam Mark
Add support for configuring dma mapping attributes when mapping
and unmapping memory through dma_buf_map_attachment and
dma_buf_unmap_attachment.

Signed-off-by: Liam Mark 
---
 include/linux/dma-buf.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 58725f890b5b..59bf33e09e2d 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -308,6 +308,8 @@ struct dma_buf {
  * @dev: device attached to the buffer.
  * @node: list of dma_buf_attachment.
  * @priv: exporter specific attachment data.
+ * @dma_map_attrs: DMA mapping attributes to be used in
+ *dma_buf_map_attachment() and dma_buf_unmap_attachment().
  *
  * This structure holds the attachment information between the dma_buf buffer
  * and its user device(s). The list contains one attachment struct per device
@@ -323,6 +325,7 @@ struct dma_buf_attachment {
struct device *dev;
struct list_head node;
void *priv;
+   unsigned long dma_map_attrs;
 };
 
 /**
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project



[PATCH 0/4] ION stability and perf changes

2019-01-18 Thread Liam Mark
Some stability changes to improve ION robustness and a perf related
change to make it easier for clients to avoid unnecessary cache
maintenance, such as when buffers are clean and haven't had any CPU
access.

Liam Mark (4):
  staging: android: ion: Support cpu access during dma_buf_detach
  staging: android: ion: Restrict cache maintenance to dma mapped memory
  dma-buf: add support for mapping with dma mapping attributes
  staging: android: ion: Support for mapping with dma mapping attributes

 drivers/staging/android/ion/ion.c | 33 +++++++++++++++++++++++++--------
 include/linux/dma-buf.h   |  3 +++
 2 files changed, 28 insertions(+), 8 deletions(-)

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project



Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-17 Thread Liam Mark
On Thu, 17 Jan 2019, Andrew F. Davis wrote:

> On 1/16/19 4:54 PM, Liam Mark wrote:
> > On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/16/19 9:19 AM, Brian Starkey wrote:
> >>> Hi :-)
> >>>
> >>> On Tue, Jan 15, 2019 at 12:40:16PM -0600, Andrew F. Davis wrote:
> >>>> On 1/15/19 12:38 PM, Andrew F. Davis wrote:
> >>>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>>
> >>>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>>
> >>>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance 
> >>>>>>>>> here.
> >>>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed 
> >>>>>>>>> anyway.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>>>> ---
> >>>>>>>>>  drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>>
> >>>>>>>>> diff --git a/drivers/staging/android/ion/ion.c 
> >>>>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct 
> >>>>>>>>> dma_buf_attachment *attachment,
> >>>>>>>>>  
> >>>>>>>>> table = a->table;
> >>>>>>>>>  
> >>>>>>>>> -   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>>> -   direction))
> >>>>>>>>> +   if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>>>>> + direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>>
> >>>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache 
> >>>>>>>> maintenance.
> >>>>>>>> If the calls to {begin,end}_cpu_access were made before the call to 
> >>>>>>>> dma_buf_attach then there won't have been a device attached so the 
> >>>>>>>> calls 
> >>>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>>
> >>>>>>>
> >>>>>>> That should be okay though, if you have no attachments (or all
> >>>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>>> maintenance. Unless you mean a sequence where a non-io-coherent device
> >>>>>>> is attached later after data has already been written. Does that
> >>>>>>> sequence need supporting? 
> >>>>>>
> >>>>>> Yes, but also I think there are cases where CPU access can happen 
> >>>>>> before 
> >>>>>> in Android, but I will focus on later for now.
> >>>>>>
> >>>>>>> DMA-BUF doesn't have to allocate the backing
> >>>>>>> memory until map_dma_buf() time, and that should only happen after all
> >>>>>>> the devices have attached so it can know where to put the buffer. So 
> >>>>>>> we
> >>>>>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>>>>> attached and mapped, right?
> >>>>>>>
> >>>>>>
> >>>>>> Here is an example where CPU access can happen later in Android.
> >>>>>>
> >>>>>> Camera device records video -> software post processing -> video 
> >>>>>>

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-17 Thread Liam Mark
On Thu, 17 Jan 2019, Andrew F. Davis wrote:

> On 1/16/19 4:48 PM, Liam Mark wrote:
> > On Wed, 16 Jan 2019, Andrew F. Davis wrote:
> > 
> >> On 1/15/19 1:05 PM, Laura Abbott wrote:
> >>> On 1/15/19 10:38 AM, Andrew F. Davis wrote:
> >>>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>>
> >>>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>>
> >>>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance
> >>>>>>>> here.
> >>>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed
> >>>>>>>> anyway.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>>> ---
> >>>>>>>>   drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>>   1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>>
> >>>>>>>> diff --git a/drivers/staging/android/ion/ion.c
> >>>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct
> >>>>>>>> dma_buf_attachment *attachment,
> >>>>>>>>     table = a->table;
> >>>>>>>>   -    if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>>> -    direction))
> >>>>>>>> +    if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>>>> +  direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>>
> >>>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>>> You can't rely on {begin,end}_cpu_access calls to do cache
> >>>>>>> maintenance.
> >>>>>>> If the calls to {begin,end}_cpu_access were made before the call to
> >>>>>>> dma_buf_attach then there won't have been a device attached so the
> >>>>>>> calls
> >>>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>>
> >>>>>>
> >>>>>> That should be okay though, if you have no attachments (or all
> >>>>>> attachments are IO-coherent) then there is no need for cache
> >>>>>> maintenance. Unless you mean a sequence where a non-io-coherent device
> >>>>>> is attached later after data has already been written. Does that
> >>>>>> sequence need supporting?
> >>>>>
> >>>>> Yes, but also I think there are cases where CPU access can happen before
> >>>>> in Android, but I will focus on later for now.
> >>>>>
> >>>>>> DMA-BUF doesn't have to allocate the backing
> >>>>>> memory until map_dma_buf() time, and that should only happen after all
> >>>>>> the devices have attached so it can know where to put the buffer. So we
> >>>>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>>>> attached and mapped, right?
> >>>>>>
> >>>>>
> >>>>> Here is an example where CPU access can happen later in Android.
> >>>>>
> >>>>> Camera device records video -> software post processing -> video device
> >>>>> (who does compression of raw data) and writes to a file
> >>>>>
> >>>>> In this example assume the buffer is cached and the devices are not
> >>>>> IO-coherent (quite common).
> >>>>>
> >>>>
> >>>> This is the start of the problem, having cached mappings of memory that
> >>>> is also being accessed non-coherently is going to cause issues one way
> >>>> or another. On top of the speculative cache f

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-16 Thread Liam Mark
On Wed, 16 Jan 2019, Andrew F. Davis wrote:

> On 1/16/19 9:19 AM, Brian Starkey wrote:
> > Hi :-)
> > 
> > On Tue, Jan 15, 2019 at 12:40:16PM -0600, Andrew F. Davis wrote:
> >> On 1/15/19 12:38 PM, Andrew F. Davis wrote:
> >>> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>>
> >>>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>>
> >>>>>>> Buffers may not be mapped from the CPU so skip cache maintenance here.
> >>>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>>> {begin,end}_cpu_access calls so maintenance should not be needed 
> >>>>>>> anyway.
> >>>>>>>
> >>>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>>> ---
> >>>>>>>  drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/drivers/staging/android/ion/ion.c 
> >>>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct 
> >>>>>>> dma_buf_attachment *attachment,
> >>>>>>>  
> >>>>>>>   table = a->table;
> >>>>>>>  
> >>>>>>> - if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>>> - direction))
> >>>>>>> + if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>>> +   direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>>
> >>>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>>> You can't rely on {begin,end}_cpu_access calls to do cache maintenance.
> >>>>>> If the calls to {begin,end}_cpu_access were made before the call to 
> >>>>>> dma_buf_attach then there won't have been a device attached so the 
> >>>>>> calls 
> >>>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>>
> >>>>>
> >>>>> That should be okay though, if you have no attachments (or all
> >>>>> attachments are IO-coherent) then there is no need for cache
> >>>>> maintenance. Unless you mean a sequence where a non-io-coherent device
> >>>>> is attached later after data has already been written. Does that
> >>>>> sequence need supporting? 
> >>>>
> >>>> Yes, but also I think there are cases where CPU access can happen before 
> >>>> in Android, but I will focus on later for now.
> >>>>
> >>>>> DMA-BUF doesn't have to allocate the backing
> >>>>> memory until map_dma_buf() time, and that should only happen after all
> >>>>> the devices have attached so it can know where to put the buffer. So we
> >>>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>>> attached and mapped, right?
> >>>>>
> >>>>
> >>>> Here is an example where CPU access can happen later in Android.
> >>>>
> >>>> Camera device records video -> software post processing -> video device 
> >>>> (who does compression of raw data) and writes to a file
> >>>>
> >>>> In this example assume the buffer is cached and the devices are not 
> >>>> IO-coherent (quite common).
> >>>>
> >>>
> >>> This is the start of the problem, having cached mappings of memory that
> >>> is also being accessed non-coherently is going to cause issues one way
> >>> or another. On top of the speculative cache fills that have to be
> >>> constantly fought back against with CMOs like below; some coherent
> >>> interconnects behave badly when you mix coherent and non-coherent access
> >>> (snoop filters get messed up).

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-16 Thread Liam Mark
On Wed, 16 Jan 2019, Andrew F. Davis wrote:

> On 1/15/19 1:05 PM, Laura Abbott wrote:
> > On 1/15/19 10:38 AM, Andrew F. Davis wrote:
> >> On 1/15/19 11:45 AM, Liam Mark wrote:
> >>> On Tue, 15 Jan 2019, Andrew F. Davis wrote:
> >>>
> >>>> On 1/14/19 11:13 AM, Liam Mark wrote:
> >>>>> On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> >>>>>
> >>>>>> Buffers may not be mapped from the CPU so skip cache maintenance
> >>>>>> here.
> >>>>>> Accesses from the CPU to a cached heap should be bracketed with
> >>>>>> {begin,end}_cpu_access calls so maintenance should not be needed
> >>>>>> anyway.
> >>>>>>
> >>>>>> Signed-off-by: Andrew F. Davis 
> >>>>>> ---
> >>>>>>   drivers/staging/android/ion/ion.c | 7 ---
> >>>>>>   1 file changed, 4 insertions(+), 3 deletions(-)
> >>>>>>
> >>>>>> diff --git a/drivers/staging/android/ion/ion.c
> >>>>>> b/drivers/staging/android/ion/ion.c
> >>>>>> index 14e48f6eb734..09cb5a8e2b09 100644
> >>>>>> --- a/drivers/staging/android/ion/ion.c
> >>>>>> +++ b/drivers/staging/android/ion/ion.c
> >>>>>> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct
> >>>>>> dma_buf_attachment *attachment,
> >>>>>>     table = a->table;
> >>>>>>   -    if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >>>>>> -    direction))
> >>>>>> +    if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >>>>>> +  direction, DMA_ATTR_SKIP_CPU_SYNC))
> >>>>>
> >>>>> Unfortunately I don't think you can do this for a couple reasons.
> >>>>> You can't rely on {begin,end}_cpu_access calls to do cache
> >>>>> maintenance.
> >>>>> If the calls to {begin,end}_cpu_access were made before the call to
> >>>>> dma_buf_attach then there won't have been a device attached so the
> >>>>> calls
> >>>>> to {begin,end}_cpu_access won't have done any cache maintenance.
> >>>>>
> >>>>
> >>>> That should be okay though, if you have no attachments (or all
> >>>> attachments are IO-coherent) then there is no need for cache
> >>>> maintenance. Unless you mean a sequence where a non-io-coherent device
> >>>> is attached later after data has already been written. Does that
> >>>> sequence need supporting?
> >>>
> >>> Yes, but also I think there are cases where CPU access can happen before
> >>> in Android, but I will focus on later for now.
> >>>
> >>>> DMA-BUF doesn't have to allocate the backing
> >>>> memory until map_dma_buf() time, and that should only happen after all
> >>>> the devices have attached so it can know where to put the buffer. So we
> >>>> shouldn't expect any CPU access to buffers before all the devices are
> >>>> attached and mapped, right?
> >>>>
> >>>
> >>> Here is an example where CPU access can happen later in Android.
> >>>
> >>> Camera device records video -> software post processing -> video device
> >>> (who does compression of raw data) and writes to a file
> >>>
> >>> In this example assume the buffer is cached and the devices are not
> >>> IO-coherent (quite common).
> >>>
> >>
> >> This is the start of the problem, having cached mappings of memory that
> >> is also being accessed non-coherently is going to cause issues one way
> >> or another. On top of the speculative cache fills that have to be
> >> constantly fought back against with CMOs like below; some coherent
> >> interconnects behave badly when you mix coherent and non-coherent access
> >> (snoop filters get messed up).
> >>
> >> The solution is to either always have the addresses marked non-coherent
> >> (like device memory, no-map carveouts), or if you really want to use
> >> regular system memory allocated at runtime, then all cached mappings of
> >> it need to be dropped, even the kernel logical address (area as painful
> >> as that would be).
> >>
> > 
> > I

Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-15 Thread Liam Mark
On Tue, 15 Jan 2019, Andrew F. Davis wrote:

> On 1/14/19 11:13 AM, Liam Mark wrote:
> > On Fri, 11 Jan 2019, Andrew F. Davis wrote:
> > 
> >> Buffers may not be mapped from the CPU so skip cache maintenance here.
> >> Accesses from the CPU to a cached heap should be bracketed with
> >> {begin,end}_cpu_access calls so maintenance should not be needed anyway.
> >>
> >> Signed-off-by: Andrew F. Davis 
> >> ---
> >>  drivers/staging/android/ion/ion.c | 7 ---
> >>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/staging/android/ion/ion.c 
> >> b/drivers/staging/android/ion/ion.c
> >> index 14e48f6eb734..09cb5a8e2b09 100644
> >> --- a/drivers/staging/android/ion/ion.c
> >> +++ b/drivers/staging/android/ion/ion.c
> >> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct 
> >> dma_buf_attachment *attachment,
> >>  
> >>table = a->table;
> >>  
> >> -  if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >> -  direction))
> >> +  if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> >> +direction, DMA_ATTR_SKIP_CPU_SYNC))
> > 
> > Unfortunately I don't think you can do this for a couple reasons.
> > You can't rely on {begin,end}_cpu_access calls to do cache maintenance.
> > If the calls to {begin,end}_cpu_access were made before the call to 
> > dma_buf_attach then there won't have been a device attached so the calls 
> > to {begin,end}_cpu_access won't have done any cache maintenance.
> > 
> 
> That should be okay though, if you have no attachments (or all
> attachments are IO-coherent) then there is no need for cache
> maintenance. Unless you mean a sequence where a non-io-coherent device
> is attached later after data has already been written. Does that
> sequence need supporting? 

Yes, but also I think there are cases where CPU access can happen before 
in Android, but I will focus on later for now.

> DMA-BUF doesn't have to allocate the backing
> memory until map_dma_buf() time, and that should only happen after all
> the devices have attached so it can know where to put the buffer. So we
> shouldn't expect any CPU access to buffers before all the devices are
> attached and mapped, right?
> 

Here is an example where CPU access can happen later in Android.

Camera device records video -> software post processing -> video device 
(who does compression of raw data) and writes to a file

In this example assume the buffer is cached and the devices are not 
IO-coherent (quite common).

ION buffer is allocated.

//Camera device records video
dma_buf_attach
dma_map_attachment (buffer needs to be cleaned)
[camera device writes to buffer]
dma_buf_unmap_attachment (buffer needs to be invalidated)
dma_buf_detach  (device cannot stay attached because it is being sent down 
the pipeline and Camera doesn't know the end of the use case)

//buffer is sent down the pipeline

// Userspace software post processing occurs
mmap buffer
DMA_BUF_IOCTL_SYNC IOCTL with flags DMA_BUF_SYNC_START // No CMO since no 
devices attached to buffer
[CPU reads/writes to the buffer]
DMA_BUF_IOCTL_SYNC IOCTL with flags DMA_BUF_SYNC_END // No CMO since no 
devices attached to buffer
munmap buffer

//buffer is sent down the pipeline
// Buffer is sent to video device (who does compression of raw data) and 
writes to a file
dma_buf_attach
dma_map_attachment (buffer needs to be cleaned)
[video device writes to buffer]
dma_buf_unmap_attachment 
dma_buf_detach  (device cannot stay attached because it is being sent down 
the pipeline and Video doesn't know the end of the use case)
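
Expressed as code, one stage of the pipeline above would look roughly like the
following sketch; the function and variable names are illustrative only:

	static int stage_use_buffer(struct device *dev, struct dma_buf *dmabuf)
	{
		struct dma_buf_attachment *att;
		struct sg_table *sgt;

		att = dma_buf_attach(dmabuf, dev);
		if (IS_ERR(att))
			return PTR_ERR(att);

		/* Cached buffer, non-IO-coherent device: the map must clean. */
		sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
		if (IS_ERR(sgt)) {
			dma_buf_detach(dmabuf, att);
			return PTR_ERR(sgt);
		}

		/* ... device DMA to/from the buffer ... */

		/* The unmap must invalidate before the CPU or next stage reads. */
		dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);
		dma_buf_detach(dmabuf, att);
		return 0;
	}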



> > Also ION no longer provides DMA ready memory, so if you are not doing CPU 
> > access then there is no requirement (that I am aware of) for you to call 
> > {begin,end}_cpu_access before passing the buffer to the device and if this 
> > buffer is cached and your device is not IO-coherent then the cache 
> > maintenance
> > in ion_map_dma_buf and ion_unmap_dma_buf is required.
> > 
> 
> If I am not doing any CPU access then why do I need CPU cache
> maintenance on the buffer?
> 

Because ION no longer provides DMA ready memory.
Take the above example.

ION allocates memory from buddy allocator and requests zeroing.
Zeros are written to the cache.

You pass the buffer to the camera device which is not IO-coherent.
The camera devices writes directly to the buffer in DDR.
Since you didn't clean the buffer a dirty cache line (one of the zeros) is 
evicted from the cache, this zero overwrites data the camera device has 
written which corrupts your data.
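
To make the contrast concrete, here is a sketch of the two exporter map paths
being compared; only the first form cleans the zero-filled dirty lines to DDR
before the non-IO-coherent camera writes:

	/* Current behaviour: dma_map_sg() performs the CMO for the device. */
	if (!dma_map_sg(attachment->dev, table->sgl, table->nents, direction))
		return ERR_PTR(-ENOMEM);

	/* Proposed behaviour: no CMO at map time, so a dirty zeroed line can
	 * later be evicted on top of the data the camera wrote.
	 */
	if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
			      direction, DMA_ATTR_SKIP_CPU_SYNC))
		return ERR_PTR(-ENOMEM);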

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH 13/14] staging: android: ion: Do not sync CPU cache on map/unmap

2019-01-14 Thread Liam Mark
On Fri, 11 Jan 2019, Andrew F. Davis wrote:

> Buffers may not be mapped from the CPU so skip cache maintenance here.
> Accesses from the CPU to a cached heap should be bracketed with
> {begin,end}_cpu_access calls so maintenance should not be needed anyway.
> 
> Signed-off-by: Andrew F. Davis 
> ---
>  drivers/staging/android/ion/ion.c | 7 ---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c 
> b/drivers/staging/android/ion/ion.c
> index 14e48f6eb734..09cb5a8e2b09 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -261,8 +261,8 @@ static struct sg_table *ion_map_dma_buf(struct 
> dma_buf_attachment *attachment,
>  
>   table = a->table;
>  
> - if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> - direction))
> + if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
> +   direction, DMA_ATTR_SKIP_CPU_SYNC))

Unfortunately I don't think you can do this for a couple reasons.
You can't rely on {begin,end}_cpu_access calls to do cache maintenance.
If the calls to {begin,end}_cpu_access were made before the call to 
dma_buf_attach then there won't have been a device attached so the calls 
to {begin,end}_cpu_access won't have done any cache maintenance.

Also ION no longer provides DMA ready memory, so if you are not doing CPU 
access then there is no requirement (that I am aware of) for you to call 
{begin,end}_cpu_access before passing the buffer to the device and if this 
buffer is cached and your device is not IO-coherent then the cache maintenance
in ion_map_dma_buf and ion_unmap_dma_buf is required.

>   return ERR_PTR(-ENOMEM);
>  
>   return table;
> @@ -272,7 +272,8 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment 
> *attachment,
> struct sg_table *table,
> enum dma_data_direction direction)
>  {
> - dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> + dma_unmap_sg_attrs(attachment->dev, table->sgl, table->nents,
> +direction, DMA_ATTR_SKIP_CPU_SYNC);
>  }
>  
>  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> -- 
> 2.19.1
> 
> ___
> devel mailing list
> de...@linuxdriverproject.org
> http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH v3] staging: android: ion: Add implementation of dma_buf_vmap and dma_buf_vunmap

2019-01-04 Thread Liam Mark
On Tue, 18 Dec 2018, Alexey Skidanov wrote:

> >>> I was wondering if we could re-open the discussion on adding support to 
> >>> ION for dma_buf_vmap.
> >>> It seems like the patch was not taken as the reviewers wanted more 
> >>> evidence of an upstream use case.
> >>>
> >>> Here would be my upstream usage argument for including dma_buf_vmap 
> >>> support in ION.
> >>>
> >>> Currently all calls to ion_dma_buf_begin_cpu_access result in the 
> >>> creation 
> >>> of a kernel mapping for the buffer, unfortunately the resulting call to 
> >>> alloc_vmap_area can be quite expensive and this has caused a performance 
> >>> regression for certain clients when they have moved to the new version of 
> >>> ION.
> >>>
> >>> The kernel mapping is not actually needed in 
> >>> ion_dma_buf_begin_cpu_access, 
> >>> and generally isn't needed by clients. So if we remove the creation of 
> >>> the 
> >>> kernel mapping in ion_dma_buf_begin_cpu_access and only create it when 
> >>> needed we can speed up the calls to ion_dma_buf_begin_cpu_access.
> >>>
> >>> An additional benefit of removing the creation of kernel mappings from 
> >>> ion_dma_buf_begin_cpu_access is that it makes the ION code more secure.
> >>> Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL with 
> >>> flags DMA_BUF_SYNC_END multiple times to cause the ION buffer kmap_cnt to 
> >>> go negative which could lead to undesired behavior.
> >>>
> >>> One disadvantage of the above change is that a kernel mapping is not 
> >>> already created when a client calls dma_buf_kmap. So the following 
> >>> dma_buf_kmap contract can't be satisfied.
> >>>
> >>> /**
> >>> * dma_buf_kmap - Map a page of the buffer object into kernel address 
> >>> space. The
> >>> * same restrictions as for kmap and friends apply.
> >>> * @dmabuf:[in]buffer to map page from.
> >>> * @page_num:  [in]page in PAGE_SIZE units to map.
> >>> *
> >>> * This call must always succeed, any necessary preparations that might 
> >>> fail
> >>> * need to be done in begin_cpu_access.
> >>> */
> >>>
> >>> But hopefully we can work around this by moving clients to dma_buf_vmap.
> >> I think the problem is with the contract. We can't ensure that the call
> >> is always succeeds regardless the implementation - any mapping might
> >> fail. Probably this is why  *all* clients of dma_buf_kmap() check the
> >> return value (so it's safe to return NULL in case of failure).
> >>
> > 
> > I think currently the call to dma_buf_kmap will always succeed since the 
> > DMA-Buf contract requires that the client first successfully call 
> > dma_buf_begin_cpu_access(), and if dma_buf_begin_cpu_access() succeeds 
> > then dma_buf_kmap will succeed.
> > 
> >> I would suggest to fix the contract and to keep the dma_buf_kmap()
> >> support in ION.
> > 
> > I will leave it to the DMA-Buf maintainers as to whether they want to 
> > change their contract.
> > 
> > Liam
> > 
> > Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> > a Linux Foundation Collaborative Project
> > 
> 
> Ok. We need the list of the clients using the ION in the mainline tree.
> 

Looks to me like the only functions which might be calling 
dma_buf_kmap/dma_buf_kunmap on ION buffers are 
tegra_bo_kmap/tegra_bo_kunmap, I assume Tegra is used in some Android 
automotive products.

Looks like these functions could be moved over to using 
dma_buf_vmap/dma_buf_vunmap but it wouldn't be very clean and would add a 
performance hit.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH v3] staging: android: ion: Add implementation of dma_buf_vmap and dma_buf_vunmap

2018-12-17 Thread Liam Mark
On Sun, 16 Dec 2018, Alexey Skidanov wrote:

> 
> 
> On 12/16/18 7:20 AM, Liam Mark wrote:
> > On Tue, 6 Feb 2018, Alexey Skidanov wrote:
> > 
> >>
> >>
> >> On 02/07/2018 01:56 AM, Laura Abbott wrote:
> >>> On 01/31/2018 10:10 PM, Alexey Skidanov wrote:
> >>>>
> >>>> On 01/31/2018 03:00 PM, Greg KH wrote:
> >>>>> On Wed, Jan 31, 2018 at 02:03:42PM +0200, Alexey Skidanov wrote:
> >>>>>> Any driver may access shared buffers, created by ion, using
> >>>>>> dma_buf_vmap and
> >>>>>> dma_buf_vunmap dma-buf API that maps/unmaps previosuly allocated
> >>>>>> buffers into
> >>>>>> the kernel virtual address space. The implementation of these API is
> >>>>>> missing in
> >>>>>> the current ion implementation.
> >>>>>>
> >>>>>> Signed-off-by: Alexey Skidanov 
> >>>>>> ---
> >>>>>
> >>>>> No review from any other Intel developers? :(
> >>>> Will add.
> >>>>>
> >>>>> Anyway, what in-tree driver needs access to these functions?
> >>>> I'm not sure that there are the in-tree drivers using these functions
> >>>> and ion as buffer exporter because they are not implemented in ion :)
> >>>> But there are some in-tree drivers using these APIs (gpu drivers) with
> >>>> other buffer exporters.
> >>>
> >>> It's still not clear why you need to implement these APIs.
> >> How the importing kernel module may access the content of the buffer? :)
> >> With the current ion implementation it's only possible by dma_buf_kmap,
> >> mapping one page at a time. For pretty large buffers, it might have some
> >> performance impact.
> >> (Probably, the page by page mapping is the only way to access large
> >> buffers on 32 bit systems, where the vmalloc range is very small. By the
> >> way, the current ion dma_map_kmap doesn't really map only 1 page at a
> >> time - it uses the result of vmap() that might fail on 32 bit systems.)
> >>
> >>> Are you planning to use Ion with GPU drivers? I'm especially
> >>> interested in this if you have a non-Android use case.
> >> Yes, my use case is the non-Android one. But not with GPU drivers.
> >>>
> >>> Thanks,
> >>> Laura
> >>
> >> Thanks,
> >> Alexey
> > 
> > I was wondering if we could re-open the discussion on adding support to 
> > ION for dma_buf_vmap.
> > It seems like the patch was not taken as the reviewers wanted more 
> > evidence of an upstream use case.
> > 
> > Here would be my upstream usage argument for including dma_buf_vmap 
> > support in ION.
> > 
> > Currently all calls to ion_dma_buf_begin_cpu_access result in the creation 
> > of a kernel mapping for the buffer, unfortunately the resulting call to 
> > alloc_vmap_area can be quite expensive and this has caused a performance 
> > regression for certain clients when they have moved to the new version of 
> > ION.
> > 
> > The kernel mapping is not actually needed in ion_dma_buf_begin_cpu_access, 
> > and generally isn't needed by clients. So if we remove the creation of the 
> > kernel mapping in ion_dma_buf_begin_cpu_access and only create it when 
> > needed we can speed up the calls to ion_dma_buf_begin_cpu_access.
> > 
> > An additional benefit of removing the creation of kernel mappings from 
> > ion_dma_buf_begin_cpu_access is that it makes the ION code more secure.
> > Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL with 
> > flags DMA_BUF_SYNC_END multiple times to cause the ION buffer kmap_cnt to 
> > go negative which could lead to undesired behavior.
> > 
> > One disadvantage of the above change is that a kernel mapping is not 
> > already created when a client calls dma_buf_kmap. So the following 
> > dma_buf_kmap contract can't be satisfied.
> > 
> > /**
> > * dma_buf_kmap - Map a page of the buffer object into kernel address 
> > space. The
> > * same restrictions as for kmap and friends apply.
> > * @dmabuf:  [in]buffer to map page from.
> > * @page_num:[in]page in PAGE_SIZE units to map.
> > *
> > * This call must always succeed, any necessary preparations that might 
> > fail
> > * need to be done in begin_cpu_access.
> > */
> > 
> > But hopefully we can work around this by moving clients to dma_buf_vmap.
> I think the problem is with the contract. We can't ensure that the call
> is always succeeds regardless the implementation - any mapping might
> fail. Probably this is why  *all* clients of dma_buf_kmap() check the
> return value (so it's safe to return NULL in case of failure).
> 

I think currently the call to dma_buf_kmap will always succeed since the 
DMA-Buf contract requires that the client first successfully call 
dma_buf_begin_cpu_access(), and if dma_buf_begin_cpu_access() succeeds 
then dma_buf_kmap will succeed.

> I would suggest to fix the contract and to keep the dma_buf_kmap()
> support in ION.

I will leave it to the DMA-Buf maintainers as to whether they want to 
change their contract.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH v3] staging: android: ion: Add implementation of dma_buf_vmap and dma_buf_vunmap

2018-12-15 Thread Liam Mark
On Tue, 6 Feb 2018, Alexey Skidanov wrote:

> 
> 
> On 02/07/2018 01:56 AM, Laura Abbott wrote:
> > On 01/31/2018 10:10 PM, Alexey Skidanov wrote:
> >>
> >> On 01/31/2018 03:00 PM, Greg KH wrote:
> >>> On Wed, Jan 31, 2018 at 02:03:42PM +0200, Alexey Skidanov wrote:
>  Any driver may access shared buffers, created by ion, using
>  dma_buf_vmap and
>  dma_buf_vunmap dma-buf API that maps/unmaps previosuly allocated
>  buffers into
>  the kernel virtual address space. The implementation of these API is
>  missing in
>  the current ion implementation.
> 
>  Signed-off-by: Alexey Skidanov 
>  ---
> >>>
> >>> No review from any other Intel developers? :(
> >> Will add.
> >>>
> >>> Anyway, what in-tree driver needs access to these functions?
> >> I'm not sure that there are the in-tree drivers using these functions
> >> and ion as buffer exporter because they are not implemented in ion :)
> >> But there are some in-tree drivers using these APIs (gpu drivers) with
> >> other buffer exporters.
> > 
> > It's still not clear why you need to implement these APIs.
> How the importing kernel module may access the content of the buffer? :)
> With the current ion implementation it's only possible by dma_buf_kmap,
> mapping one page at a time. For pretty large buffers, it might have some
> performance impact.
> (Probably, the page by page mapping is the only way to access large
> buffers on 32 bit systems, where the vmalloc range is very small. By the
> way, the current ion dma_map_kmap doesn't really map only 1 page at a
> time - it uses the result of vmap() that might fail on 32 bit systems.)
> 
> > Are you planning to use Ion with GPU drivers? I'm especially
> > interested in this if you have a non-Android use case.
> Yes, my use case is the non-Android one. But not with GPU drivers.
> > 
> > Thanks,
> > Laura
> 
> Thanks,
> Alexey

I was wondering if we could re-open the discussion on adding support to 
ION for dma_buf_vmap.
It seems like the patch was not taken as the reviewers wanted more 
evidence of an upstream use case.

Here would be my upstream usage argument for including dma_buf_vmap 
support in ION.

Currently all calls to ion_dma_buf_begin_cpu_access result in the creation 
of a kernel mapping for the buffer, unfortunately the resulting call to 
alloc_vmap_area can be quite expensive and this has caused a performance 
regression for certain clients when they have moved to the new version of 
ION.

The kernel mapping is not actually needed in ion_dma_buf_begin_cpu_access, 
and generally isn't needed by clients. So if we remove the creation of the 
kernel mapping in ion_dma_buf_begin_cpu_access and only create it when 
needed we can speed up the calls to ion_dma_buf_begin_cpu_access.

An additional benefit of removing the creation of kernel mappings from 
ion_dma_buf_begin_cpu_access is that it makes the ION code more secure.
Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL with 
flags DMA_BUF_SYNC_END multiple times to cause the ION buffer kmap_cnt to 
go negative which could lead to undesired behavior.

One disadvantage of the above change is that a kernel mapping is not 
already created when a client calls dma_buf_kmap. So the following 
dma_buf_kmap contract can't be satisfied.

/**
* dma_buf_kmap - Map a page of the buffer object into kernel address 
space. The
* same restrictions as for kmap and friends apply.
* @dmabuf:  [in]buffer to map page from.
* @page_num:[in]page in PAGE_SIZE units to map.
*
* This call must always succeed, any necessary preparations that might 
fail
* need to be done in begin_cpu_access.
*/

But hopefully we can work around this by moving clients to dma_buf_vmap.
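
For reference, a rough sketch of what moving an in-kernel client from
dma_buf_kmap to dma_buf_vmap could look like; the dmabuf pointer and the
memset are illustrative only:

	static int fill_buffer(struct dma_buf *dmabuf)
	{
		void *vaddr;
		int ret;

		ret = dma_buf_begin_cpu_access(dmabuf, DMA_TO_DEVICE);
		if (ret)
			return ret;

		/* One contiguous kernel mapping instead of a kmap per page. */
		vaddr = dma_buf_vmap(dmabuf);
		if (!vaddr) {
			dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);
			return -ENOMEM;
		}

		memset(vaddr, 0, dmabuf->size);

		dma_buf_vunmap(dmabuf, vaddr);
		return dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);
	}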

Based on discussions at LPC here is what was proposed:
- #1 Add support to ION for dma_buf_vmap and dma_buf_vunmap
- #2 Move any existing ION clients over from using dma_buf_kmap to 
dma_buf_vmap
- #3 Deprecate support in ION for dma_buf_kmap?
- #4 Make the above performance optimization to 
ion_dma_buf_begin_cpu_access to remove the creation of a kernel mapping.

Thoughts?

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [RFC] android: ion: How to properly clean caches for uncached allocations

2018-03-08 Thread Liam Mark
On Wed, 7 Mar 2018, Laura Abbott wrote:

> On 02/28/2018 09:18 PM, Liam Mark wrote:
> > The issue:
> > 
> > Currently in ION if you allocate uncached memory it is possible that there
> > are still dirty lines in the cache.  And often these dirty lines in the
> > cache are the zeros which were meant to clear out any sensitive kernel
> > data.
> > 
> > What this means is that if you allocate uncached memory from ION, and then
> > subsequently write to that buffer (using the uncached mapping you are
> > provided by ION) then the data you have written could be corrupted at some
> > point in the future if a dirty line is evicted from the cache.
> > 
> > Also this means there is a potential security issue.  If an un-privileged
> > userspace user allocated uncached memory (for example from the system heap)
> > and then if they were to read from that buffer (through the un-cached
> > mapping they are provided by ION), and if some of the zeros which were
> > written to that memory are still in the cache then this un-privileged
> > userspace user could read potentially sensitive kernel data.
> 
> For the use case you are describing we don't actually need the
> memory to be non-cached until it comes time to do the dma mapping.
> Here's a proposal to shoot holes in:
> 
> - Before any dma_buf attach happens, all mmap mappings are cached
> - At the time attach happens, we shoot down any existing userspace
> mappings, do the dma_map with appropriate flags to clean the pages
> and then allow remapping to userspace as uncached. Really this
> looks like a variation on the old Ion faulting code which I removed
> except it's for uncached buffers instead of cached buffers.
> 

Thanks Laura, I will take a look to see if I can think of any concerns.

Initial thoughts.
- What about any kernel mappings (kmap/vmap) the client has made?

- I guess it would be tempting to only do this behavior for memory that 
came from buddy (as opposed to the pool since it should be clean), but we 
would need to be careful that no pages sneak into the pool without being 
cleaned (example: client allocs then frees without ever calling 
dma_buf_attach).

> Potential problems:
> - I'm not 100% about the behavior here if the attaching device
> is already dma_coherent. I also consider uncached mappings
> enough of a device specific optimization that you shouldn't
> do them unless you know it's needed.

I don't believe we want to allow uncached memory to be dma mapped by an 
io-coherent device and this is something I would like to eventually block.

Since there is always a kernel cached mapping for ION uncached memory then 
speculative access could still be putting lines in the cache, so when an 
io-coherent device tries to read this uncached memory its snoop into the 
cache could find one of these 'stale' clean cache lines and end up using 
stale data. 
Agree?

> - The locking/sequencing with userspace could be tricky
> since userspace may not like us ripping mappings out from
> underneath if it's trying to access.

Perhaps delay this work to the dma_map_attachment call since when the data 
is dma mapped the CPU shouldn't be accessing it? 

Or worst case perhaps fail all map attempts to uncached memory until the 
memory has been dma mapped (and cleaned) at least once?
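
As a very rough illustration of that fallback, the exporter's mmap handler
could refuse CPU mappings of uncached buffers until a first dma map has
cleaned them; cleaned_for_uncached is a hypothetical flag, not an existing
ion_buffer field:

	static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
	{
		struct ion_buffer *buffer = dmabuf->priv;

		/* Hypothetical flag, set by the heap once dma_map_attachment
		 * has cleaned the pages of this uncached buffer.
		 */
		if (!ion_buffer_cached(buffer) && !buffer->cleaned_for_uncached)
			return -EBUSY;

		/* ... existing mapping code ... */
		return 0;
	}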

Thanks,
Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[RFC] android: ion: How to properly clean caches for uncached allocations

2018-02-28 Thread Liam Mark
The issue:

Currently in ION if you allocate uncached memory it is possible that there
are still dirty lines in the cache.  And often these dirty lines in the
cache are the zeros which were meant to clear out any sensitive kernel
data.

What this means is that if you allocate uncached memory from ION, and then
subsequently write to that buffer (using the uncached mapping you are
provided by ION) then the data you have written could be corrupted at some
point in the future if a dirty line is evicted from the cache.

Also this means there is a potential security issue.  If an un-privileged
userspace user allocated uncached memory (for example from the system heap)
and then if they were to read from that buffer (through the un-cached
mapping they are provided by ION), and if some of the zeros which were
written to that memory are still in the cache then this un-privileged
userspace user could read potentially sensitive kernel data.


An unacceptable fix:

I have included some sample code which should resolve this issue for the
system heap and the cma heap on some architectures, however this code
would not be acceptable for upstreaming since it uses hacks to clean
the cache.
Similar changes would need to be made to carveout heap and chunk heap.

I would appreciate some feedback on the proper way for ION to clean the
caches for memory it has allocated that is intended for uncached access.

I realize that it may be tempting, as a solution to this problem, to
simply strip uncached support from ION. I hope that we can try to find a
solution which preserves uncached memory support as ION uncached memory
is often used (though perhaps not in upstreamed code) in cases such as
multimedia use cases where there is no CPU access required, in secure
heap allocations, and in some cases where there is minimal CPU access
and therefore uncached memory performs better.

Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
 drivers/staging/android/ion/ion.c             | 16 ++++++++++++++++
 drivers/staging/android/ion/ion.h             |  5 ++++-
 drivers/staging/android/ion/ion_cma_heap.c    |  3 +++
 drivers/staging/android/ion/ion_page_pool.c   |  8 ++++++--
 drivers/staging/android/ion/ion_system_heap.c |  7 ++++++-
 5 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c 
b/drivers/staging/android/ion/ion.c
index 57e0d8035b2e..10e967b0a0f4 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -38,6 +38,22 @@ bool ion_buffer_cached(struct ion_buffer *buffer)
return !!(buffer->flags & ION_FLAG_CACHED);
 }
 
+void ion_pages_sync_for_device(struct page *page, size_t size,
+  enum dma_data_direction dir)
+{
+   struct scatterlist sg;
+   struct device dev = {0};
+
+   /* hack, use dummy device */
+   arch_setup_dma_ops(&dev, 0, 0, NULL, false);
+
+   sg_init_table(&sg, 1);
+   sg_set_page(&sg, page, size, 0);
+   /* hack, use phys address for dma address */
+   sg_dma_address(&sg) = page_to_phys(page);
+   dma_sync_sg_for_device(&dev, &sg, 1, dir);
+}
+
 /* this function should only be called while dev->lock is held */
 static void ion_buffer_add(struct ion_device *dev,
   struct ion_buffer *buffer)
diff --git a/drivers/staging/android/ion/ion.h 
b/drivers/staging/android/ion/ion.h
index a238f23c9116..227b9928d185 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -192,6 +192,9 @@ struct ion_heap {
  */
 bool ion_buffer_cached(struct ion_buffer *buffer);
 
+void ion_pages_sync_for_device(struct page *page, size_t size,
+  enum dma_data_direction dir);
+
 /**
  * ion_buffer_fault_user_mappings - fault in user mappings of this buffer
  * @buffer:buffer
@@ -333,7 +336,7 @@ struct ion_page_pool {
 struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
   bool cached);
 void ion_page_pool_destroy(struct ion_page_pool *pool);
-struct page *ion_page_pool_alloc(struct ion_page_pool *pool);
+struct page *ion_page_pool_alloc(struct ion_page_pool *pool, bool *from_pool);
 void ion_page_pool_free(struct ion_page_pool *pool, struct page *page);
 
 /** ion_page_pool_shrink - shrinks the size of the memory cached in the pool
diff --git a/drivers/staging/android/ion/ion_cma_heap.c 
b/drivers/staging/android/ion/ion_cma_heap.c
index 49718c96bf9e..82e80621d114 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -59,6 +59,9 @@ static int ion_cma_allocate(struct ion_heap *heap, struct 
ion_buffer *buffer,
memset(page_address(pages), 0, size);
}
 
+   if (!ion_buffer_cached(buffer))
+   ion_pages_sync_for_device(pages, size, DMA_BIDIRECTIONAL);
+
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;


[PATCH v2] staging: android: ion: Initialize dma_address of new sg list

2018-02-16 Thread Liam Mark
Fix the dup_sg_table function to initialize the dma_address of the new
sg list entries instead of the source dma_address entries.

Since ION duplicates the sg_list this issue does not appear to result in
an actual bug.

Signed-off-by: Liam Mark <lm...@codeaurora.org>
Acked-by: Laura Abbott <labb...@redhat.com>
---
Changes in v2:
  - Add to commit message that it doesn't cause an actual bug
  - Remove 'Fixes:' since it doesn't cause a bug
  - Add Acked-by from Laura Abbott

 drivers/staging/android/ion/ion.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/android/ion/ion.c 
b/drivers/staging/android/ion/ion.c
index 57e0d8035b2e..517d4f40d1b7 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -187,7 +187,7 @@ static struct sg_table *dup_sg_table(struct sg_table *table)
new_sg = new_table->sgl;
for_each_sg(table->sgl, sg, table->nents, i) {
memcpy(new_sg, sg, sizeof(*sg));
-   sg->dma_address = 0;
+   new_sg->dma_address = 0;
new_sg = sg_next(new_sg);
}
 
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH] staging: android: ion: Initialize dma_address of new sg list

2018-02-16 Thread Liam Mark
On Fri, 16 Feb 2018, Greg KH wrote:

> On Fri, Feb 09, 2018 at 10:16:56PM -0800, Liam Mark wrote:
> > Fix the dup_sg_table function to initialize the dma_address of the new
> > sg list entries instead of the source dma_address entries.
> > 
> > Fixes: 17fd283f3870 ("staging: android: ion: Duplicate sg_table")
> 
> So this should be sent to the stable trees as well, right?
> 
> Please add that when you resend...

My understanding was that I should only send this to the stable branch if 
it fixes a real bug.

Both myself and Laura can't see any actual problems that result from this 
issue, the change is more to help future proof the code.

My commit message wasn't clear about this and could have misled people 
into thinking there was a bug.
I will update the commit message to make it clear that this issue doesn't 
currently result in any problem.

Do you still want me to send it to the stable trees?

Thanks,
Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] staging: android: ion: Initialize dma_address of new sg list

2018-02-16 Thread Liam Mark
On Thu, 15 Feb 2018, Laura Abbott wrote:

> On 02/12/2018 01:25 PM, Liam Mark wrote:
> > 
> > On Mon, 12 Feb 2018, Dan Carpenter wrote:
> > 
> > > On Fri, Feb 09, 2018 at 10:16:56PM -0800, Liam Mark wrote:
> > > > Fix the dup_sg_table function to initialize the dma_address of the new
> > > > sg list entries instead of the source dma_address entries.
> > > > 
> > > > Fixes: 17fd283f3870 ("staging: android: ion: Duplicate sg_table")
> > > > Signed-off-by: Liam Mark <lm...@codeaurora.org>
> > > 
> > > How did you find this bug?  What are the user visible effects of this
> > > bug?  I'm probably going to ask you to send a v2 patch with a new
> > > changelog depending on the answers to these questions.
> > 
> > I noticed the bug when reading through the code, I haven't seen any
> > visible effects of this bug.
> > 
> > From looking at the code I don't see any obvious ways that this bug would
> > introduce any issues with the current code base.
> > 
> > Liam
> > 
> 
> Given the way we duplicate the sg_list this should be harmless but you
> are right it's cleaner if we initialize the new list. You can add my
> Ack when you update with a new change log clarifying this.
> 
> Thanks,
> Laura
> 

Will do.

Thanks,
Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory

2018-02-12 Thread Liam Mark
On Mon, 12 Feb 2018, Laura Abbott wrote:

> On 02/09/2018 10:21 PM, Liam Mark wrote:
> > The ION begin_cpu_access and end_cpu_access functions use the
> > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > maintenance.
> > 
> > Currently it is possible to apply cache maintenance, via the
> > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > dma mapped.
> > 
> > The dma sync sg APIs should not be called on sg lists which have not been
> > dma mapped as this can result in cache maintenance being applied to the
> > wrong address. If an sg list has not been dma mapped then its dma_address
> > field has not been populated, some dma ops such as the swiotlb_dma_ops ops
> > use the dma_address field to calculate the address onto which to apply
> > cache maintenance.
> > 
> > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > cache maintenance to buffers which have been dma mapped.
> > 
> I think this looks okay. I was initially concerned about concurrency and
> setting the dma_mapped flag but I think that should be handled by the
> caller synchronizing map/unmap/cpu_access calls (we might need to re-evaluate
> in the future)


I had convinced myself that concurrency wasn't a problem, but you are 
right that it does need to be re-evaluated. For example, the code could be 
at the point where the dma unmap call has completed but dma_mapped has not 
yet been set to false; if userspace slipped in a call to begin/end cpu 
access at that moment, cache maintenance would be applied to memory which 
is no longer dma mapped. So at least this would need to be addressed, 
maybe for this issue just by moving the setting of dma_mapped to the start 
of the ion_unmap_dma_buf function (rough sketch below).
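
Something along these lines is what I have in mind; this is only an
untested sketch of the reordering, and it still assumes the dma_mapped
update is serialized against the begin/end_cpu_access path:

	static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
				      struct sg_table *table,
				      enum dma_data_direction direction)
	{
		struct ion_dma_buf_attachment *a = attachment->priv;

		/* Clear the flag before unmapping so a racing
		 * begin/end_cpu_access skips this attachment instead of
		 * syncing memory that is about to be unmapped.
		 */
		a->dma_mapped = false;
		dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
	}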

I can clean this up and any other concurrency issues we can identify.

> 
> I would like to hold on queuing this for just a little bit until I
> finish working on the Ion unit test (doing this in the complete opposite
> order of course). I'm assuming this passed your internal tests Liam?

Yes it has passed my internal ION unit tests, though I haven't given 
the change to internal ION clients yet.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] staging: android: ion: Initialize dma_address of new sg list

2018-02-12 Thread Liam Mark

On Mon, 12 Feb 2018, Dan Carpenter wrote:

> On Fri, Feb 09, 2018 at 10:16:56PM -0800, Liam Mark wrote:
> > Fix the dup_sg_table function to initialize the dma_address of the new
> > sg list entries instead of the source dma_address entries.
> > 
> > Fixes: 17fd283f3870 ("staging: android: ion: Duplicate sg_table")
> > Signed-off-by: Liam Mark <lm...@codeaurora.org>
> 
> How did you find this bug?  What are the user visible effects of this
> bug?  I'm probably going to ask you to send a v2 patch with a new
> changelog depending on the answers to these questions.

I noticed the bug when reading through the code, I haven’t seen any 
visible effects of this bug.

From looking at the code I don't see any obvious ways that this bug would 
introduce any issues with the current code base.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory

2018-02-09 Thread Liam Mark
The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped, as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
field has not been populated, and some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address on which to apply
cache maintenance.

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which have been dma mapped.
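
For context, userspace can reach these begin_cpu_access and end_cpu_access
callbacks directly through the dma-buf sync ioctl, without any device ever
dma mapping the buffer. A minimal sketch, assuming fd is a dma-buf file
descriptor obtained from an ION allocation (the helper name is made up for
illustration):

	#include <sys/ioctl.h>
	#include <linux/dma-buf.h>

	/* Bracket CPU access to a dma-buf; each ioctl ends up in the
	 * exporter's begin_cpu_access / end_cpu_access callback.
	 */
	static int cpu_access_roundtrip(int fd)
	{
		struct dma_buf_sync sync = {
			.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
		};
		int ret;

		ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
		if (ret)
			return ret;

		/* ... CPU reads/writes through an mmap of the buffer ... */

		sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
		return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
	}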

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index f480885e346b..e5df5272823d 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -214,6 +214,7 @@ struct ion_dma_buf_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
+   bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
@@ -235,6 +236,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
 
 a->table = table;
 a->dev = dev;
+   a->dma_mapped = false;
 INIT_LIST_HEAD(&a->list);
 
 attachment->priv = a;
@@ -272,6 +274,7 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
direction))
return ERR_PTR(-ENOMEM);
 
+   a->dma_mapped = true;
return table;
 }
 
@@ -279,7 +282,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
  struct sg_table *table,
  enum dma_data_direction direction)
 {
+   struct ion_dma_buf_attachment *a = attachment->priv;
+
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+   a->dma_mapped = false;
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -345,8 +351,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 mutex_lock(&buffer->lock);
 list_for_each_entry(a, &buffer->attachments, list) {
-   dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-   direction);
+   if (a->dma_mapped)
+   dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+   a->table->nents, direction);
 }
 mutex_unlock(&buffer->lock);
 
@@ -367,8 +374,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 mutex_lock(&buffer->lock);
 list_for_each_entry(a, &buffer->attachments, list) {
-   dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-  direction);
+   if (a->dma_mapped)
+   dma_sync_sg_for_device(a->dev, a->table->sgl,
+  a->table->nents, direction);
 }
 mutex_unlock(&buffer->lock);
 
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[PATCH] staging: android: ion: Initialize dma_address of new sg list

2018-02-09 Thread Liam Mark
Fix the dup_sg_table function to initialize the dma_address of the new
sg list entries instead of the source dma_address entries.

Fixes: 17fd283f3870 ("staging: android: ion: Duplicate sg_table")
Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index f480885e346b..3ace3a0d9210 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -197,7 +197,7 @@ static struct sg_table *dup_sg_table(struct sg_table *table)
new_sg = new_table->sgl;
for_each_sg(table->sgl, sg, table->nents, i) {
memcpy(new_sg, sg, sizeof(*sg));
-   sg->dma_address = 0;
+   new_sg->dma_address = 0;
new_sg = sg_next(new_sg);
}
 
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[PATCH v3] staging: android: ion: Zero CMA allocated memory

2018-01-26 Thread Liam Mark
Since commit 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
the CMA API is now used directly and therefore the allocated memory is no
longer automatically zeroed.

Explicitly zero CMA allocated memory to ensure that no data is exposed to
userspace.

Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
Changes in v2:
  - Clean up the commit message.
  - Add 'Fixes:'

Changes in v3:
 - Add support for highmem pages

 drivers/staging/android/ion/ion_cma_heap.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 86196ffd2faf..fa3e4b7e0c9f 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -21,6 +21,7 @@
 #include <linux/err.h>
 #include <linux/cma.h>
 #include <linux/scatterlist.h>
+#include <linux/highmem.h>
 
 #include "ion.h"
 
@@ -51,6 +52,22 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
if (!pages)
return -ENOMEM;
 
+   if (PageHighMem(pages)) {
+   unsigned long nr_clear_pages = nr_pages;
+   struct page *page = pages;
+
+   while (nr_clear_pages > 0) {
+   void *vaddr = kmap_atomic(page);
+
+   memset(vaddr, 0, PAGE_SIZE);
+   kunmap_atomic(vaddr);
+   page++;
+   nr_clear_pages--;
+   }
+   } else {
+   memset(page_address(pages), 0, size);
+   }
+
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH v2] staging: android: ion: Zero CMA allocated memory

2018-01-24 Thread Liam Mark
On Wed, 24 Jan 2018, Laura Abbott wrote:

> On 01/22/2018 09:46 AM, Liam Mark wrote:
> > Since commit 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
> > the CMA API is now used directly and therefore the allocated memory is no
> > longer automatically zeroed.
> > 
> > Explicitly zero CMA allocated memory to ensure that no data is exposed to
> > userspace.
> > 
> > Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
> > Signed-off-by: Liam Mark <lm...@codeaurora.org>
> > ---
> > Changes in v2:
> >- Clean up the commit message.
> >- Add 'Fixes:'
> > 
> >   drivers/staging/android/ion/ion_cma_heap.c | 2 ++
> >   1 file changed, 2 insertions(+)
> > 
> > diff --git a/drivers/staging/android/ion/ion_cma_heap.c
> > b/drivers/staging/android/ion/ion_cma_heap.c
> > index 86196ffd2faf..91a98785607a 100644
> > --- a/drivers/staging/android/ion/ion_cma_heap.c
> > +++ b/drivers/staging/android/ion/ion_cma_heap.c
> > @@ -51,6 +51,8 @@ static int ion_cma_allocate(struct ion_heap *heap, struct
> > ion_buffer *buffer,
> > if (!pages)
> > return -ENOMEM;
> > 
> > +   memset(page_address(pages), 0, size);
> > +
> 
> This won't work for highmem pages. You need to zero by page via
> kmap_atomic in that case. You can use PageHighMem to separate the
> paths.

Thanks for catching that, I will update the patch.

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[PATCH v2] staging: android: ion: Zero CMA allocated memory

2018-01-22 Thread Liam Mark
Since commit 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
the CMA API is now used directly and therefore the allocated memory is no
longer automatically zeroed.

Explicitly zero CMA allocated memory to ensure that no data is exposed to
userspace.

Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
Changes in v2:
  - Clean up the commit message.
  - Add 'Fixes:'

 drivers/staging/android/ion/ion_cma_heap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 86196ffd2faf..91a98785607a 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -51,6 +51,8 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
if (!pages)
return -ENOMEM;
 
+   memset(page_address(pages), 0, size);
+
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] staging: android: ion: Zero CMA allocated memory

2018-01-22 Thread Liam Mark
On Sat, 20 Jan 2018, Greg KH wrote:

> On Fri, Jan 19, 2018 at 11:16:47AM -0800, Liam Mark wrote:
> > Since the CMA API is now used directly the allocated memory is no longer
> > automatically zeroed.
> > 
> > Explicitly zero CMA allocated memory to ensure that no data is exposed
> > to userspace.
> 
> How far back does this patch need to be ported?  What is the git commit
> id that caused this change to be needed?  Please add it as a "Fixes:"
> tag to the patch.
> 

It goes back to 204f672255c2 ("ion: Use CMA APIs directly"); I will update 
the change.

> > 
> > Change-Id: I08e143707a0d31610821a7f16826c262bf3c1999
> 
> I'm guessing you didn't run scripts/checkpatch.pl on this patch :(
> 
> Please fix up and resend.

Thanks, I will clean it up.

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] staging: android: ion: Zero CMA allocated memory

2018-01-22 Thread Liam Mark
On Fri, 19 Jan 2018, Dan Carpenter wrote:

> On Fri, Jan 19, 2018 at 11:16:47AM -0800, Liam Mark wrote:
> > Since the CMA API is now used directly the allocated memory is no longer
> > automatically zeroed.
> > 
> > Explicitly zero CMA allocated memory to ensure that no data is exposed
> > to userspace.
> > 
> > Change-Id: I08e143707a0d31610821a7f16826c262bf3c1999
> 
> How do I use this Gerrit tag?  I type it into
> https://android-review.googlesource.com/ somewhere?

Sorry, this tag won't be of any use to you; I will strip it out.

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


[PATCH] staging: android: ion: Zero CMA allocated memory

2018-01-19 Thread Liam Mark
Since the CMA API is now used directly the allocated memory is no longer
automatically zeroed.

Explicitly zero CMA allocated memory to ensure that no data is exposed
to userspace.

Change-Id: I08e143707a0d31610821a7f16826c262bf3c1999
Signed-off-by: Liam Mark <lm...@codeaurora.org>
---
 drivers/staging/android/ion/ion_cma_heap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index 86196ff..91a9878 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -51,6 +51,8 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
if (!pages)
return -ENOMEM;
 
+   memset(page_address(pages), 0, size);
+
table = kmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
goto err;
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

