On Wed, Jan 30, 2019 at 11:31:23AM +, Brian Starkey wrote:
>
> On Tue, Jan 29, 2019 at 03:44:53PM -0800, Liam Mark wrote:
> > On Fri, 18 Jan 2019, Liam Mark wrote:
> >
> > > On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> > >
> > > > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > > > The ION begin_cpu_access and end_cpu_access functions use the
> > > > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > > > maintenance.
> > > > >
> > > > > Currently it is possible to apply cache maintenance, via the
> > > > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > > > dma mapped.
> > > > >
> > > > > The dma sync sg APIs should not be called on sg lists which have
> > > > > not been dma mapped, as this can result in cache maintenance being
> > > > > applied to the wrong address. If an sg list has not been dma mapped
> > > > > then its dma_address field has not been populated, and some dma ops,
> > > > > such as the swiotlb_dma_ops ops, use the dma_address field to
> > > > > calculate the address onto which to apply cache maintenance.
> > > > >
> > > > > Also I don't think we want CMOs to be applied to a buffer which is
> > > > > not dma mapped, as the memory should already be coherent for access
> > > > > from the CPU. Any CMOs required for device access are taken care of
> > > > > in the dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > > > So really it only makes sense for begin_cpu_access and
> > > > > end_cpu_access to apply CMOs if the buffer is dma mapped.
> > > > >
> > > > > Fix the ION begin_cpu_access and end_cpu_access functions to only
> > > > > apply cache maintenance to buffers which are dma mapped.
> > > > >
> > > > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> > > > > Signed-off-by: Liam Mark
> > > > > ---
> > > > > drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
> > > > > 1 file changed, 21 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> > > > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > > > --- a/drivers/staging/android/ion/ion.c
> > > > > +++ b/drivers/staging/android/ion/ion.c
> > > > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > > > > struct device *dev;
> > > > > struct sg_table *table;
> > > > > struct list_head list;
> > > > > + bool dma_mapped;
> > > > > };
> > > > >
> > > > > static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > > >
> > > > > a->table = table;
> > > > > a->dev = attachment->dev;
> > > > > + a->dma_mapped = false;
> > > > > INIT_LIST_HEAD(&a->list);
> > > > >
> > > > > attachment->priv = a;
> > > > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
> > > > > {
> > > > > struct ion_dma_buf_attachment *a = attachment->priv;
> > > > > struct sg_table *table;
> > > > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > >
> > > > > table = a->table;
> > > > >
> > > > > + mutex_lock(&buffer->lock);
> > > > > if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > > > - direction))
> > > > > + direction)) {
> > > > > + mutex_unlock(&buffer->lock);
> > > > > return ERR_PTR(-ENOMEM);
> > > > > + }
> > > > > + a->dma_mapped = true;
> > > > > + mutex_unlock(&buffer->lock);
> > > > >
> > > > > return table;
> > > > > }
> > > > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
> > > > > struct sg_table *table,
> > > > > enum dma_data_direction direction)
> > > > > {
> > > > > + struct ion_dma_buf_attachment *a = attachment->priv;
> > > > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > > +
> > > > > + mutex_lock(&buffer->lock);
> > > > > dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > > > > + a->dma_mapped = false;
> > > > > + mutex_unlock(&buffer->lock);
> > > > > }
> > > > >
> > > > > static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > > >
> > > > > mutex_lock(&buffer->lock);
> > > > > list_for_each_entry(a, &buffer->attachments, list) {
> > > >
> > > > When no devices are attached then buffer->attachments is empty and the
> > > > below does not run, so if I understand this patch correctly then what
> > > > you are protecting against is CPU access in the window after
> > > > dma_buf_attach but before dma_buf_map.