From: Srinivasan Shanmugam <[email protected]>

This commit applies isolation enforcement to the Compute rings in the
gfx_v9_4_3 module.

The commit hooks `amdgpu_gfx_enforce_isolation_ring_begin_use` and
`amdgpu_gfx_enforce_isolation_ring_end_use` into the ring's `begin_use`
and `end_use` callbacks, which are invoked when a ring begins and ends
its use, respectively.

`amdgpu_gfx_enforce_isolation_ring_begin_use` is called when a ring
begins its use. It cancels any scheduled `enforce_isolation_work` and,
if necessary, signals the Kernel Fusion Driver (KFD) to stop the
runqueue.

`amdgpu_gfx_enforce_isolation_ring_end_use` is called when a ring ends
its use. It schedules `enforce_isolation_work` to run after a delay.

These functions are part of the Enforce Isolation Handler, which
enforces shader isolation on AMD GPUs to prevent data leakage between
different processes.
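
For reference, a minimal sketch of the intended begin_use/end_use flow
(illustrative only: the `.work` member name, the `kfd_sch_ctrl()` helper
and the delay value are placeholders, not the in-tree API; the real
callbacks are implemented elsewhere in the driver):

  /*
   * Illustrative sketch only -- not the in-tree implementation.  The
   * .work member and the kfd_sch_ctrl() helper are placeholder names.
   */
  #include <linux/workqueue.h>
  #include "amdgpu.h"

  static void isolation_begin_use_sketch(struct amdgpu_ring *ring)
  {
          struct amdgpu_device *adev = ring->adev;
          u32 idx = ring->xcp_id;         /* partition this ring belongs to */

          /*
           * Keep isolation active while the ring is busy: make sure the
           * deferred "resume KFD" work cannot run underneath us.
           */
          cancel_delayed_work_sync(&adev->gfx.enforce_isolation[idx].work);

          /* Ask KFD to stop its runqueue for this partition. */
          kfd_sch_ctrl(adev, idx, false); /* placeholder helper */
  }

  static void isolation_end_use_sketch(struct amdgpu_ring *ring)
  {
          struct amdgpu_device *adev = ring->adev;
          u32 idx = ring->xcp_id;

          /*
           * Re-arm the deferred work; once the delay expires it signals
           * KFD to resume the runqueue (the delay value is illustrative).
           */
          schedule_delayed_work(&adev->gfx.enforce_isolation[idx].work,
                                msecs_to_jiffies(10));
  }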

The commit also checks the ring type in `aqua_vanjaram_set_xcp_id()`:
if the ring is a compute ring (`AMDGPU_RING_TYPE_COMPUTE`), the
`xcp_id` of the corresponding `enforce_isolation` entry in the
`amdgpu_device` `gfx` structure is set to the ring's `xcp_id`. This
ensures the correct `xcp_id` is used when enforcing isolation on
compute rings. The `xcp_id` identifies an XCP partition, and different
rings can be associated with different XCP partitions.
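
A minimal sketch of the per-partition bookkeeping this relies on (only
the `xcp_id` member is taken from this patch; the struct name and the
`work` member are illustrative placeholders):

  /* Illustrative layout only; names other than xcp_id are placeholders. */
  #include <linux/workqueue.h>

  struct enforce_isolation_entry {
          u32 xcp_id;               /* XCP partition this entry tracks */
          struct delayed_work work; /* deferred enforce_isolation_work */
  };

  /*
   * One entry per partition hangs off adev->gfx; a compute ring's xcp_id
   * selects the entry that its begin_use/end_use callbacks operate on.
   */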

Cc: Christian König <[email protected]>
Cc: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Srinivasan Shanmugam <[email protected]>
---
 drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c | 4 ++++
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c    | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
index 228fd4dd32f1..26e2188101e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
@@ -75,6 +75,8 @@ static void aqua_vanjaram_set_xcp_id(struct amdgpu_device *adev,
        uint32_t inst_mask;
 
        ring->xcp_id = AMDGPU_XCP_NO_PARTITION;
+       if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE)
+               adev->gfx.enforce_isolation[0].xcp_id = ring->xcp_id;
        if (adev->xcp_mgr->mode == AMDGPU_XCP_MODE_NONE)
                return;
 
@@ -103,6 +105,8 @@ static void aqua_vanjaram_set_xcp_id(struct amdgpu_device *adev,
        for (xcp_id = 0; xcp_id < adev->xcp_mgr->num_xcps; xcp_id++) {
                if (adev->xcp_mgr->xcp[xcp_id].ip[ip_blk].inst_mask & inst_mask) {
                        ring->xcp_id = xcp_id;
+                       if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE)
+                               adev->gfx.enforce_isolation[xcp_id].xcp_id = xcp_id;
                        break;
                }
        }
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index fa6752585a72..2067f26d3a9d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -4671,6 +4671,8 @@ static const struct amdgpu_ring_funcs gfx_v9_4_3_ring_funcs_compute = {
        .emit_wave_limit = gfx_v9_4_3_emit_wave_limit,
        .reset = gfx_v9_4_3_reset_kcq,
        .emit_cleaner_shader = gfx_v9_4_3_ring_emit_cleaner_shader,
+       .begin_use = amdgpu_gfx_enforce_isolation_ring_begin_use,
+       .end_use = amdgpu_gfx_enforce_isolation_ring_end_use,
 };
 
 static const struct amdgpu_ring_funcs gfx_v9_4_3_ring_funcs_kiq = {
-- 
2.46.0
