See, the SPM buffer address is set using CP commands as well, right? And
those execute asynchronously.
If we now update the SPM VMID synchronously, we risk switching from one
process to another while the new process is not yet done with its setup.
That could have quite a few unforeseen consequences, including
accidentally writing SPM data into the new process' address space at
whatever buffer address was used before.
This is something we should at least try to avoid.
Regards,
Christian.
On 03.03.20 at 16:28, He, Jacob wrote:
[AMD Official Use Only - Internal Distribution Only]
Thanks! Could you please give an example of the trouble in “This way we
avoid a bunch of trouble when one process drops the VMID reservation
and another one grabs it.”?
Thanks
Jacob
*From: *Koenig, Christian <mailto:christian.koe...@amd.com>
*Sent: *Tuesday, March 3, 2020 11:03 PM
*To: *He, Jacob <mailto:jacob...@amd.com>;
amd-gfx@lists.freedesktop.org <mailto:amd-gfx@lists.freedesktop.org>
*Subject: *Re: [PATCH] drm/amdgpu: Update SPM_VMID with the job's vmid
when application reserves the vmid
On 03.03.20 at 15:34, He, Jacob wrote:
/It would be better if we could do that asynchronously with a
register
write on the ring./
Sorry, I don’t get your point. Could you please elaborate?
You pass the ring from amdgpu_vm_flush() to the *_update_spm_vmid()
functions.
Then, instead of using WREG32(), you call amdgpu_ring_emit_wreg() to
make the write asynchronous, as a CP command on the ring buffer.
This way we avoid a bunch of trouble when one process drops the VMID
reservation and another one grabs it.
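To show the shape of that change, here is a self-contained toy model (the ring layout, register offset, and all toy_* names are invented stand-ins for illustration, not the real amdgpu code):

```c
#include <stdint.h>

/* Toy stand-in for a CP ring: writes are queued and executed in order
 * by the "CP", instead of taking effect immediately like WREG32(). */
#define TOY_SPM_VMID_REG 0x1f00u   /* invented offset, not the real one */

struct toy_ring {
    uint32_t reg[16];
    uint32_t val[16];
    unsigned count;
};

/* Analogue of amdgpu_ring_emit_wreg(): queue a register write on the ring. */
static void toy_ring_emit_wreg(struct toy_ring *ring, uint32_t reg, uint32_t val)
{
    ring->reg[ring->count] = reg;
    ring->val[ring->count] = val;
    ring->count++;
}

/* Analogue of an *_update_spm_vmid() that takes the ring passed down from
 * amdgpu_vm_flush(): the SPM_VMID update becomes just another CP command,
 * ordered after whatever commands are already on the ring. */
static void toy_update_spm_vmid(struct toy_ring *ring, uint32_t vmid)
{
    toy_ring_emit_wreg(ring, TOY_SPM_VMID_REG, vmid);
}
```

The point of the sketch is the ordering: because the VMID update is emitted on the ring, it cannot overtake the CP commands of the process that still owns the previous setup.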
Regards,
Christian.
Thanks
Jacob
*From: *Christian König <mailto:ckoenig.leichtzumer...@gmail.com>
*Sent: *Tuesday, March 3, 2020 10:16 PM
*To: *He, Jacob <mailto:jacob...@amd.com>;
amd-gfx@lists.freedesktop.org <mailto:amd-gfx@lists.freedesktop.org>
*Subject: *Re: [PATCH] drm/amdgpu: Update SPM_VMID with the job's
vmid when application reserves the vmid
On 02.03.20 at 06:35, Jacob He wrote:
> SPM accesses video memory according to SPM_VMID. It should be updated
> with the job's vmid right before the job is scheduled. SPM_VMID is a
> global resource
>
> Change-Id: Id3881908960398f87e7c95026a54ff83ff826700
> Signed-off-by: Jacob He <jacob...@amd.com> <mailto:jacob...@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index c00696f3017e..c761d3a0b6e8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -1080,8 +1080,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
>  	struct dma_fence *fence = NULL;
>  	bool pasid_mapping_needed = false;
>  	unsigned patch_offset = 0;
> +	bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
>  	int r;
>  
> +	if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid)
> +		adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
> +
It would be better if we could do that asynchronously with a register
write on the ring.
The alternative is to block until the VM is idle in
amdgpu_vm_ioctl() before unreserving the VMID.
In other words, lock the reservation object of the root PD and call
amdgpu_vm_wait_idle() before calling amdgpu_vmid_free_reserved().
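The ordering this alternative enforces can be sketched with a small self-contained model (all toy_* names and fields are invented; the real code would hold the root PD's reservation lock and wait on its fences):

```c
/* Toy model of the alternative: wait for the VM to go idle before the
 * reserved VMID is handed back, so no in-flight job can still be using
 * it when another process grabs the reservation. */
struct toy_vm {
    unsigned jobs_in_flight;  /* stand-in for unsignaled fences on the root PD */
    int vmid_reserved;
};

/* Analogue of amdgpu_vm_wait_idle(): block until all jobs have retired.
 * In this toy model the jobs simply retire immediately. */
static void toy_vm_wait_idle(struct toy_vm *vm)
{
    vm->jobs_in_flight = 0;
}

/* Analogue of amdgpu_vmid_free_reserved(): drop the reservation. */
static void toy_vmid_free_reserved(struct toy_vm *vm)
{
    vm->vmid_reserved = 0;
}

/* What the unreserve path would do: idle first, free second. */
static void toy_unreserve_vmid(struct toy_vm *vm)
{
    toy_vm_wait_idle(vm);
    toy_vmid_free_reserved(vm);
}
```

The design choice is simply that the wait happens before the free; reversing the two calls would reintroduce the race where another process grabs the VMID while jobs are still in flight.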
Regards,
Christian.
> if (amdgpu_vmid_had_gpu_reset(adev, id)) {
> gds_switch_needed = true;
> vm_flush_needed = true;