Re: [Freedreno] [PATCH] drm/msm: Separate locking of buffer resources from struct_mutex

2017-06-15 Thread Chris Wilson
Quoting Rob Clark (2017-06-14 17:49:02)
> On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
>  wrote:
> > Buffer object specific resources like pages, domains, sg list
> > need not be protected with struct_mutex. They can be protected
> > with a buffer object level lock. This simplifies locking and
> > makes it easier to avoid potential recursive locking scenarios
> > for SVM involving mmap_sem and struct_mutex. This also removes
> > unnecessary serialization when creating buffer objects, and also
> > between buffer object creation and GPU command submission.
> 
> I've rebased this on top of the other changes that are in the queue for 4.13:
> 
> https://github.com/freedreno/kernel-msm/commit/4e0c4e139a647914cfcf7c413da5c19f9f124885
> 
> I've done some light testing on a5xx.. although between this and
> moving gpu->hw_init() under struct_mutex, I need to double check on
> a3xx/a4xx.
> 
> I do think we probably need a bit more work for shrinker.  In
> particular msm_gem_purge() still assumes everything is protected by a
> single struct_mutex, which is no longer true.  The tricky thing here
> is that shrinker can be triggered by code-paths where we already hold
> msm_obj->lock.

Make sure that msm_obj->lock *only* covers the backing store (and its
operations). (e.g. I called it obj->mm.lock and moved everything that it
covers into obj->mm). Then you can ensure that you don't attempt to
shrink msm_obj itself whilst holding msm_obj->mm.lock (e.g. by tracking
an obj->mm.pages_pin_count and not attempting to shrink any object that
has its pages still pinned, and you then only allocate underneath the
msm_obj->mm.lock whilst holding a page reference). After all that you
can use mutex_lock_nested(.subclass = MSM_SHRINKER) for the special case
where the shrinker may be acquiring the lock on other objects whilst
holding msm_obj->mm.lock.
-Chris


Re: [Freedreno] [PATCH] drm/msm: Separate locking of buffer resources from struct_mutex

2017-06-14 Thread Rob Clark
I think the trick is figuring out the best way to synchronize access to
msm_obj->madv in the shrinker path, since we can't reliably just take
msm_obj->lock there (we might already hold it):
https://hastebin.com/ubafepahov.xml
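
One possible shape for the scan side (a sketch only; msm_gem_purge_locked()
and the exact list being walked are placeholders, not the code in the WIP
branch above): skip any object whose lock can't be taken right now, which
also covers the case where it is held by the thread that triggered reclaim:

static unsigned long
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
        struct msm_gem_object *msm_obj;
        unsigned long freed = 0;

        list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
                if (freed >= sc->nr_to_scan)
                        break;
                /* can't take the lock?  in use (maybe by us) - skip it */
                if (!mutex_trylock(&msm_obj->lock))
                        continue;
                if (msm_obj->madv == MSM_MADV_DONTNEED) {
                        msm_gem_purge_locked(msm_obj);
                        freed += msm_obj->base.size >> PAGE_SHIFT;
                }
                mutex_unlock(&msm_obj->lock);
        }

        return freed;
}

In this scheme msm_obj->madv is only ever read or written under
msm_obj->lock, so the shrinker sees a consistent value without needing
struct_mutex at all.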

fwiw, that was with my work-in-progress attempt to deal with shrinker:
https://hastebin.com/hucijugufo.cs

Anyways, I got distracted by other things and didn't get much further,
but I should hopefully have some time for another look tomorrow.
Suggestions welcome ;-)

BR,
-R

On Wed, Jun 14, 2017 at 7:08 PM, Susheelendra, Sushmita
 wrote:
> Hi Rob,
>
> Yes, I’d completely missed the shrinker path in this cleanup. But, yeah, I
> see how get_pages (which is called with msm_obj->lock held) ->
> drm_gem_get_pages could trigger shrinker_scan which calls msm_gem_purge.
> It makes sense to prevent any get_pages/vmap on objects that’ve been marked
> as DONTNEED. I’ll send you a patch soon for that.
>
> Thanks,
> Sushmita
>
>
> On Jun 14, 2017, at 10:49 AM, Rob Clark  wrote:
>
> On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
>  wrote:
>
> Buffer object specific resources like pages, domains, sg list
> need not be protected with struct_mutex. They can be protected
> with a buffer object level lock. This simplifies locking and
> makes it easier to avoid potential recursive locking scenarios
> for SVM involving mmap_sem and struct_mutex. This also removes
> unnecessary serialization when creating buffer objects, and also
> between buffer object creation and GPU command submission.
>
>
> I've rebased this on top of the other changes that are in the queue for
> 4.13:
>
> https://github.com/freedreno/kernel-msm/commit/4e0c4e139a647914cfcf7c413da5c19f9f124885
>
> I've done some light testing on a5xx.. although between this and
> moving gpu->hw_init() under struct_mutex, I need to double check on
> a3xx/a4xx.
>
> I do think we probably need a bit more work for shrinker.  In
> particular msm_gem_purge() still assumes everything is protected by a
> single struct_mutex, which is no longer true.  The tricky thing here
> is that shrinker can be triggered by code-paths where we already hold
> msm_obj->lock.
>
> I think we probably want anything that ends up in get_pages() or
> vmap() to have something along the lines of
>
>   if (WARN_ON(msm_obj->madv == DONTNEED)) {
>     mutex_unlock(&msm_obj->lock);
>     return -EINVAL;   // or -EBUSY, or ??
>   }
>
> it's really only something that a badly behaved userspace could
> trigger.. but otoh we probably don't want to let a badly behaved
> userspace deadlock things in the kernel.  I guess I probably should
> have written some test cases for this by now.
>
> Change-Id: I4fba9f8c38a6cd13f80f660639d1e74d4336e3fb
>
>
> jfyi, no need for the Change-Id.. I usually strip them out when I
> apply patches, but I appreciate if you can strip them out before
> sending (because sometimes I forget)
>
> BR,
> -R
>
> Signed-off-by: Sushmita Susheelendra 
> ---
> drivers/gpu/drm/msm/adreno/a5xx_gpu.c|   2 -
> drivers/gpu/drm/msm/adreno/a5xx_power.c  |   2 -
> drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   2 -
> drivers/gpu/drm/msm/dsi/dsi_host.c   |   5 +-
> drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c |   2 +-
> drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c  |   2 -
> drivers/gpu/drm/msm/msm_drv.c|   1 +
> drivers/gpu/drm/msm/msm_drv.h|   7 +-
> drivers/gpu/drm/msm/msm_fbdev.c  |   6 +-
> drivers/gpu/drm/msm/msm_gem.c| 190 +++
> drivers/gpu/drm/msm/msm_gem.h|   2 +
> drivers/gpu/drm/msm/msm_gem_submit.c |   6 +-
> drivers/gpu/drm/msm/msm_gem_vma.c|  10 +-
> drivers/gpu/drm/msm/msm_gpu.c|   4 +-
> drivers/gpu/drm/msm/msm_rd.c |   4 +-
> drivers/gpu/drm/msm/msm_ringbuffer.c |   2 +-
> 16 files changed, 120 insertions(+), 127 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> index 31a9bce..7893de1 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> @@ -235,9 +235,7 @@ static struct drm_gem_object *a5xx_ucode_load_bo(struct
> msm_gpu *gpu,
>struct drm_gem_object *bo;
>void *ptr;
>
> -   mutex_lock(&drm->struct_mutex);
>bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
>if (IS_ERR(bo))
>return bo;
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c
> b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> index 72d52c7..eb88f44 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> @@ -294,9 +294,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
> */
>bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
>
> -   mutex_lock(&drm->struct_mutex);
>a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
>if (

Re: [Freedreno] [PATCH] drm/msm: Separate locking of buffer resources from struct_mutex

2017-06-14 Thread Susheelendra, Sushmita
Hi Rob,

Yes, I’d completely missed the shrinker path in this cleanup. But, yeah, I see 
how get_pages (which is called with msm_obj->lock held) -> drm_gem_get_pages 
could trigger shrinker_scan which calls msm_gem_purge.
It makes sense to prevent any get_pages/vmap on objects that’ve been marked as 
DONTNEED. I’ll send you a patch soon for that.
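
Roughly what that check could look like, hoisted into get_pages() itself
(a sketch only; the surrounding allocation code is elided, and the exact
error code is still the open question from Rob's mail below):

static struct page **get_pages(struct drm_gem_object *obj)
{
        struct msm_gem_object *msm_obj = to_msm_bo(obj);

        WARN_ON(!mutex_is_locked(&msm_obj->lock));

        /* refuse to (re)populate an object userspace marked purgeable */
        if (WARN_ON(msm_obj->madv == MSM_MADV_DONTNEED))
                return ERR_PTR(-EBUSY);

        /* ... existing drm_gem_get_pages() / sg table setup ... */
        return msm_obj->pages;
}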

Thanks,
Sushmita


> On Jun 14, 2017, at 10:49 AM, Rob Clark  wrote:
> 
> On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
> <ssush...@codeaurora.org> wrote:
>> Buffer object specific resources like pages, domains, sg list
>> need not be protected with struct_mutex. They can be protected
>> with a buffer object level lock. This simplifies locking and
>> makes it easier to avoid potential recursive locking scenarios
>> for SVM involving mmap_sem and struct_mutex. This also removes
>> unnecessary serialization when creating buffer objects, and also
>> between buffer object creation and GPU command submission.
> 
> I've rebased this on top of the other changes that are in the queue for 4.13:
> 
> https://github.com/freedreno/kernel-msm/commit/4e0c4e139a647914cfcf7c413da5c19f9f124885
> 
> 
> I've done some light testing on a5xx.. although between this and
> moving gpu->hw_init() under struct_mutex, I need to double check on
> a3xx/a4xx.
> 
> I do think we probably need a bit more work for shrinker.  In
> particular msm_gem_purge() still assumes everything is protected by a
> single struct_mutex, which is no longer true.  The tricky thing here
> is that shrinker can be triggered by code-paths where we already hold
> msm_obj->lock.
> 
> I think we probably want anything that ends up in get_pages() or
> vmap() to have something along the lines of
> 
>   if (WARN_ON(msm_obj->madv == DONTNEED)) {
>     mutex_unlock(&msm_obj->lock);
>     return -EINVAL;   // or -EBUSY, or ??
>   }
> 
> it's really only something that a badly behaved userspace could
> trigger.. but otoh we probably don't want to let a badly behaved
> userspace deadlock things in the kernel.  I guess I probably should
> have written some test cases for this by now.
> 
>> Change-Id: I4fba9f8c38a6cd13f80f660639d1e74d4336e3fb
> 
> jfyi, no need for the Change-Id.. I usually strip them out when I
> apply patches, but I appreciate if you can strip them out before
> sending (because sometimes I forget)
> 
> BR,
> -R
> 
>> Signed-off-by: Sushmita Susheelendra 
>> ---
>> drivers/gpu/drm/msm/adreno/a5xx_gpu.c|   2 -
>> drivers/gpu/drm/msm/adreno/a5xx_power.c  |   2 -
>> drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   2 -
>> drivers/gpu/drm/msm/dsi/dsi_host.c   |   5 +-
>> drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c |   2 +-
>> drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c  |   2 -
>> drivers/gpu/drm/msm/msm_drv.c|   1 +
>> drivers/gpu/drm/msm/msm_drv.h|   7 +-
>> drivers/gpu/drm/msm/msm_fbdev.c  |   6 +-
>> drivers/gpu/drm/msm/msm_gem.c| 190 +++
>> drivers/gpu/drm/msm/msm_gem.h|   2 +
>> drivers/gpu/drm/msm/msm_gem_submit.c |   6 +-
>> drivers/gpu/drm/msm/msm_gem_vma.c|  10 +-
>> drivers/gpu/drm/msm/msm_gpu.c|   4 +-
>> drivers/gpu/drm/msm/msm_rd.c |   4 +-
>> drivers/gpu/drm/msm/msm_ringbuffer.c |   2 +-
>> 16 files changed, 120 insertions(+), 127 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
>> b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> index 31a9bce..7893de1 100644
>> --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> @@ -235,9 +235,7 @@ static struct drm_gem_object *a5xx_ucode_load_bo(struct 
>> msm_gpu *gpu,
>>struct drm_gem_object *bo;
>>void *ptr;
>> 
>> -   mutex_lock(&drm->struct_mutex);
>>bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
>> -   mutex_unlock(&drm->struct_mutex);
>> 
>>if (IS_ERR(bo))
>>return bo;
>> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c 
>> b/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> index 72d52c7..eb88f44 100644
>> --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> @@ -294,9 +294,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
>> */
>>bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
>> 
>> -   mutex_lock(&drm->struct_mutex);
>>a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
>> -   mutex_unlock(&drm->struct_mutex);
>> 
>>if (IS_ERR(a5xx_gpu->gpmu_bo))
>>goto err;
>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
>> b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> index 5b63fc6..1162c15 100644
>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> @@ -392,10 +392,8 @@ int adreno_gpu_init(struct drm_device *drm, struct 
>> platform_device *pdev,

Re: [Freedreno] [PATCH] drm/msm: Separate locking of buffer resources from struct_mutex

2017-06-14 Thread Rob Clark
On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
 wrote:
> Buffer object specific resources like pages, domains, sg list
> need not be protected with struct_mutex. They can be protected
> with a buffer object level lock. This simplifies locking and
> makes it easier to avoid potential recursive locking scenarios
> for SVM involving mmap_sem and struct_mutex. This also removes
> unnecessary serialization when creating buffer objects, and also
> between buffer object creation and GPU command submission.

I've rebased this on top of the other changes that are in the queue for 4.13:

https://github.com/freedreno/kernel-msm/commit/4e0c4e139a647914cfcf7c413da5c19f9f124885

I've done some light testing on a5xx.. although between this and
moving gpu->hw_init() under struct_mutex, I need to double check on
a3xx/a4xx.

I do think we probably need a bit more work for shrinker.  In
particular msm_gem_purge() still assumes everything is protected by a
single struct_mutex, which is no longer true.  The tricky thing here
is that shrinker can be triggered by code-paths where we already hold
msm_obj->lock.

I think we probably want anything that ends up in get_pages() or
vmap() to have something along the lines of

  if (WARN_ON(msm_obj->madv == DONTNEED)) {
    mutex_unlock(&msm_obj->lock);
    return -EINVAL;   // or -EBUSY, or ??
  }

it's really only something that a badly behaved userspace could
trigger.. but otoh we probably don't want to let a badly behaved
userspace deadlock things in the kernel.  I guess I probably should
have written some test cases for this by now.

> Change-Id: I4fba9f8c38a6cd13f80f660639d1e74d4336e3fb

jfyi, no need for the Change-Id.. I usually strip them out when I
apply patches, but I appreciate if you can strip them out before
sending (because sometimes I forget)

BR,
-R

> Signed-off-by: Sushmita Susheelendra 
> ---
>  drivers/gpu/drm/msm/adreno/a5xx_gpu.c|   2 -
>  drivers/gpu/drm/msm/adreno/a5xx_power.c  |   2 -
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   2 -
>  drivers/gpu/drm/msm/dsi/dsi_host.c   |   5 +-
>  drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c |   2 +-
>  drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c  |   2 -
>  drivers/gpu/drm/msm/msm_drv.c|   1 +
>  drivers/gpu/drm/msm/msm_drv.h|   7 +-
>  drivers/gpu/drm/msm/msm_fbdev.c  |   6 +-
>  drivers/gpu/drm/msm/msm_gem.c| 190 +++
>  drivers/gpu/drm/msm/msm_gem.h|   2 +
>  drivers/gpu/drm/msm/msm_gem_submit.c |   6 +-
>  drivers/gpu/drm/msm/msm_gem_vma.c|  10 +-
>  drivers/gpu/drm/msm/msm_gpu.c|   4 +-
>  drivers/gpu/drm/msm/msm_rd.c |   4 +-
>  drivers/gpu/drm/msm/msm_ringbuffer.c |   2 +-
>  16 files changed, 120 insertions(+), 127 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
> b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> index 31a9bce..7893de1 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> @@ -235,9 +235,7 @@ static struct drm_gem_object *a5xx_ucode_load_bo(struct 
> msm_gpu *gpu,
> struct drm_gem_object *bo;
> void *ptr;
>
> -   mutex_lock(&drm->struct_mutex);
> bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
> if (IS_ERR(bo))
> return bo;
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c 
> b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> index 72d52c7..eb88f44 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> @@ -294,9 +294,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
>  */
> bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
>
> -   mutex_lock(&drm->struct_mutex);
> a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
> if (IS_ERR(a5xx_gpu->gpmu_bo))
> goto err;
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
> b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 5b63fc6..1162c15 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -392,10 +392,8 @@ int adreno_gpu_init(struct drm_device *drm, struct 
> platform_device *pdev,
> return ret;
> }
>
> -   mutex_lock(&drm->struct_mutex);
> adreno_gpu->memptrs_bo = msm_gem_new(drm, 
> sizeof(*adreno_gpu->memptrs),
> MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
> if (IS_ERR(adreno_gpu->memptrs_bo)) {
> ret = PTR_ERR(adreno_gpu->memptrs_bo);
> adreno_gpu->memptrs_bo = NULL;
> diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
> b/drivers/gpu/drm/msm/dsi/dsi_host.c
> index 4f79b10..6a1b0da 100644
> --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
> +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
> @@ -980,19 +980,16 @@ static int dsi

Re: [Freedreno] [PATCH] drm/msm: Separate locking of buffer resources from struct_mutex

2017-06-13 Thread Rob Clark
On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
 wrote:
> Buffer object specific resources like pages, domains, sg list
> need not be protected with struct_mutex. They can be protected
> with a buffer object level lock. This simplifies locking and
> makes it easier to avoid potential recursive locking scenarios
> for SVM involving mmap_sem and struct_mutex. This also removes
> unnecessary serialization when creating buffer objects, and also
> between buffer object creation and GPU command submission.

oh, cool.. this might be a bit conflicty w/ "drm/msm: fix locking
inconsistency for gpu->hw_init()" and possibly the patchset to support
arbitrary # of vma's per gem bo.  But it is definitely the direction
we want to go.  It should also help us move away from the
mutex_trylock_recursive() hack in the shrinker code.

I'll have a go at rebasing it, and in the process taking a closer
look, tomorrow since this is something that has been on the TODO list
for quite a while now and is pretty badly needed.
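
For reference, the core of what the patch does is small; a sketch of the
idea (the field layout here is illustrative, the real fields are whatever
the diff below adds to msm_gem.h):

struct msm_gem_object {
        struct drm_gem_object base;
        struct mutex lock;      /* protects pages, sgt, domains/VMAs, vaddr, madv */
        struct page **pages;
        struct sg_table *sgt;
        void *vaddr;
        int madv;
        /* ... */
};

static void *msm_gem_vaddr_sketch(struct msm_gem_object *msm_obj)
{
        void *vaddr;

        /* was: mutex_lock(&dev->struct_mutex) */
        mutex_lock(&msm_obj->lock);
        vaddr = msm_obj->vaddr;   /* pages/sgt/VMAs are touched under the same lock */
        mutex_unlock(&msm_obj->lock);

        return vaddr;
}

Allocation paths (the msm_gem_new() callers in the diff) then no longer need
dev->struct_mutex at all, which is where the removed lock/unlock pairs below
come from.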

BR,
-R


> Change-Id: I4fba9f8c38a6cd13f80f660639d1e74d4336e3fb
> Signed-off-by: Sushmita Susheelendra 
> ---
>  drivers/gpu/drm/msm/adreno/a5xx_gpu.c|   2 -
>  drivers/gpu/drm/msm/adreno/a5xx_power.c  |   2 -
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   2 -
>  drivers/gpu/drm/msm/dsi/dsi_host.c   |   5 +-
>  drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c |   2 +-
>  drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c  |   2 -
>  drivers/gpu/drm/msm/msm_drv.c|   1 +
>  drivers/gpu/drm/msm/msm_drv.h|   7 +-
>  drivers/gpu/drm/msm/msm_fbdev.c  |   6 +-
>  drivers/gpu/drm/msm/msm_gem.c| 190 +++
>  drivers/gpu/drm/msm/msm_gem.h|   2 +
>  drivers/gpu/drm/msm/msm_gem_submit.c |   6 +-
>  drivers/gpu/drm/msm/msm_gem_vma.c|  10 +-
>  drivers/gpu/drm/msm/msm_gpu.c|   4 +-
>  drivers/gpu/drm/msm/msm_rd.c |   4 +-
>  drivers/gpu/drm/msm/msm_ringbuffer.c |   2 +-
>  16 files changed, 120 insertions(+), 127 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
> b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> index 31a9bce..7893de1 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
> @@ -235,9 +235,7 @@ static struct drm_gem_object *a5xx_ucode_load_bo(struct 
> msm_gpu *gpu,
> struct drm_gem_object *bo;
> void *ptr;
>
> -   mutex_lock(&drm->struct_mutex);
> bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
> if (IS_ERR(bo))
> return bo;
> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c 
> b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> index 72d52c7..eb88f44 100644
> --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
> +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
> @@ -294,9 +294,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
>  */
> bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
>
> -   mutex_lock(&drm->struct_mutex);
> a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
>
> if (IS_ERR(a5xx_gpu->gpmu_bo))
> goto err;
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
> b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 5b63fc6..1162c15 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -392,10 +392,8 @@ int adreno_gpu_init(struct drm_device *drm, struct 
> platform_device *pdev,
> return ret;
> }
>
> -   mutex_lock(&drm->struct_mutex);
> adreno_gpu->memptrs_bo = msm_gem_new(drm, 
> sizeof(*adreno_gpu->memptrs),
> MSM_BO_UNCACHED);
> -   mutex_unlock(&drm->struct_mutex);
> if (IS_ERR(adreno_gpu->memptrs_bo)) {
> ret = PTR_ERR(adreno_gpu->memptrs_bo);
> adreno_gpu->memptrs_bo = NULL;
> diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
> b/drivers/gpu/drm/msm/dsi/dsi_host.c
> index 4f79b10..6a1b0da 100644
> --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
> +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
> @@ -980,19 +980,16 @@ static int dsi_tx_buf_alloc(struct msm_dsi_host 
> *msm_host, int size)
> uint64_t iova;
>
> if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) {
> -   mutex_lock(&dev->struct_mutex);
> msm_host->tx_gem_obj = msm_gem_new(dev, size, 
> MSM_BO_UNCACHED);
> if (IS_ERR(msm_host->tx_gem_obj)) {
> ret = PTR_ERR(msm_host->tx_gem_obj);
> pr_err("%s: failed to allocate gem, %d\n",
> __func__, ret);
> msm_host->tx_gem_obj = NULL;
> -   mutex_unlock(&dev->struct_mutex);
> return ret;
> }
>
> -   ret = msm_gem_get_iova_locked(msm_hos