On 12/1/25 14:41, Natalie Vock wrote:
> Otherwise userspace may be fooled into believing it has a reserved VMID
> when in reality it doesn't, ultimately leading to GPU hangs when SPM is
> used.

Good catch!

> Fixes: 80e709ee6ecc ("drm/amdgpu: add option params to enforce process isolation between graphics and compute")
> Cc: [email protected]
> Signed-off-by: Natalie Vock <[email protected]>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 61820166efbf6..52f8038125530 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2913,6 +2913,7 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>       struct amdgpu_device *adev = drm_to_adev(dev);
>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>       struct amdgpu_vm *vm = &fpriv->vm;
> +     int r = 0;

Initializing local variables used as return codes is usually seen as bad coding style, but see below.

>  
>       /* No valid flags defined yet */
>       if (args->in.flags)
> @@ -2921,16 +2922,16 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>       switch (args->in.op) {
>       case AMDGPU_VM_OP_RESERVE_VMID:
>               /* We only have requirement to reserve vmid from gfxhub */
> -             amdgpu_vmid_alloc_reserved(adev, vm, AMDGPU_GFXHUB(0));
> +             r = amdgpu_vmid_alloc_reserved(adev, vm, AMDGPU_GFXHUB(0));
>               break;

You can just use return amdgpu_vmid_alloc_reserved(..) here, no need for the local variable.
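
Something like this is what I mean (just a rough, untested sketch against the quoted hunk, with the rest of the function left as is):

	switch (args->in.op) {
	case AMDGPU_VM_OP_RESERVE_VMID:
		/* We only have requirement to reserve vmid from gfxhub */
		/* Propagate the error directly instead of going through r */
		return amdgpu_vmid_alloc_reserved(adev, vm, AMDGPU_GFXHUB(0));
	case AMDGPU_VM_OP_UNRESERVE_VMID:
		amdgpu_vmid_free_reserved(adev, vm, AMDGPU_GFXHUB(0));
		break;
	default:
		return -EINVAL;
	}

	return 0;

That keeps the error propagation your patch adds while avoiding the extra local.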

Apart from that looks good to me.

Regards,
Christian.

>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>               amdgpu_vmid_free_reserved(adev, vm, AMDGPU_GFXHUB(0));
>               break;
>       default:
> -             return -EINVAL;
> +             r = -EINVAL;
>       }
>  
> -     return 0;
> +     return r;
>  }
>  
>  /**
