On Mon, Oct 21, 2019 at 01:15:22PM +0200, Christian König wrote:
> The attachment list is now protected by the dma_resv object.
> So we can drop holding this lock to allow concurrent attach
> and detach operations.
> 
> Signed-off-by: Christian König <[email protected]>
> ---
>  drivers/dma-buf/dma-buf.c | 16 ----------------
>  1 file changed, 16 deletions(-)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 753be84b5fd6..c736e67ae1a1 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -685,8 +685,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
>       attach->dmabuf = dmabuf;
>       attach->dynamic_mapping = dynamic_mapping;
>  
> -     mutex_lock(&dmabuf->lock);
> -
>       if (dmabuf->ops->attach) {
>               ret = dmabuf->ops->attach(dmabuf, attach);
>               if (ret)
> @@ -696,8 +694,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
>       list_add(&attach->node, &dmabuf->attachments);
>       dma_resv_unlock(dmabuf->resv);
>  
> -     mutex_unlock(&dmabuf->lock);

This changes the locking rules: ->attach/->detach and the attachment list
manipulation are no longer done under the same lock. I don't think this
matters in practice, but imo it's worth mentioning in the commit message.
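Roughly what I mean, pieced together from the hunks above (untested sketch,
surrounding code assumed from the current tree): previously ops->attach() and
the list_add() were both serialized by dmabuf->lock, now only the list is
protected, and by dmabuf->resv instead:

	if (dmabuf->ops->attach) {
		ret = dmabuf->ops->attach(dmabuf, attach);	/* no lock held */
		if (ret)
			goto err_attach;
	}

	dma_resv_lock(dmabuf->resv, NULL);
	list_add(&attach->node, &dmabuf->attachments);		/* under resv */
	dma_resv_unlock(dmabuf->resv);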

> -
>       /* When either the importer or the exporter can't handle dynamic
>        * mappings we cache the mapping here to avoid issues with the
>        * reservation object lock.
> @@ -726,7 +722,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
>  
>  err_attach:
>       kfree(attach);
> -     mutex_unlock(&dmabuf->lock);
>       return ERR_PTR(ret);
>  
>  err_unlock:
> @@ -776,14 +771,12 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
>                       dma_resv_unlock(attach->dmabuf->resv);
>       }
>  
> -     mutex_lock(&dmabuf->lock);
>       dma_resv_lock(dmabuf->resv, NULL);
>       list_del(&attach->node);
>       dma_resv_unlock(dmabuf->resv);
>       if (dmabuf->ops->detach)
>               dmabuf->ops->detach(dmabuf, attach);
>  
> -     mutex_unlock(&dmabuf->lock);
>       kfree(attach);
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_detach);
> @@ -1247,14 +1240,6 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
>                  "size", "flags", "mode", "count", "ino");
>  
>       list_for_each_entry(buf_obj, &db_list.head, list_node) {
> -             ret = mutex_lock_interruptible(&buf_obj->lock);
> -
> -             if (ret) {
> -                     seq_puts(s,
> -                              "\tERROR locking buffer object: skipping\n");
> -                     continue;
> -             }
> -

This will mildly conflict with the revised version of patch 1, since the
dma_resv_lock needs to go here in place of the dropped mutex.
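Concretely, I'd expect the loop body to end up looking something like this
once rebased on the reworked patch 1 (untested sketch, just to illustrate):

	list_for_each_entry(buf_obj, &db_list.head, list_node) {
		ret = dma_resv_lock_interruptible(buf_obj->resv, NULL);
		if (ret) {
			seq_puts(s,
				 "\tERROR locking buffer object: skipping\n");
			continue;
		}

		/* ... dump the buffer, attachments and fences ... */

		dma_resv_unlock(buf_obj->resv);
	}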

With both nits addressed:

Reviewed-by: Daniel Vetter <[email protected]>

>               seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%s\t%08lu\t%s\n",
>                               buf_obj->size,
>                               buf_obj->file->f_flags, buf_obj->file->f_mode,
> @@ -1307,7 +1292,6 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
>  
>               count++;
>               size += buf_obj->size;
> -             mutex_unlock(&buf_obj->lock);
>       }
>  
>       seq_printf(s, "\nTotal %d objects, %zu bytes\n", count, size);
> -- 
> 2.17.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
