Re: [PATCH v5 1/4] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Nirmoy


On 3/2/20 6:16 PM, Christian König wrote:



or else

@@ -222,7 +229,8 @@ struct amdgpu_ring {
    struct mutex    priority_mutex;
    /* protected by priority_mutex */
    int priority;
-   bool    gfx_pipe_priority;
+
+   enum gfx_pipe_priority  pipe_priority;

doesn't work because of a compilation error: "field ‘pipe_priority’ has incomplete type".


Mhm, let me ask from the other direction: What is that good for in the 
first place?


As far as I can see this is just to communicate to the ctx handling 
what priority a hw ring has, right?


But what we actually need in the ctx handling is an array of rings with 
normal and high priority. So why don't we create that in the first place?


Do you mean to create two ring arrays, one for each priority? We still need a 
way to detect ring priority in ctx to populate those arrays in 
amdgpu_ctx_init_sched.


I think the previous approach of having a bool to indicate ring priority 
status should be good enough for ctx. Let me send the next version of 
the patch to explain what I mean.
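
For illustration, a rough sketch of how a per-ring bool could drive that split 
at ctx init; apart from gfx_pipe_priority, compute_ring and num_compute_rings, 
the names here are placeholders, not the posted code:

	/* Sketch only: partition the compute ring schedulers by the per-ring
	 * priority flag so ctx init can hand out the matching list. */
	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
		struct amdgpu_ring *ring = &adev->gfx.compute_ring[i];

		if (ring->gfx_pipe_priority)	/* high priority ring */
			high_prio_sched[num_high++] = &ring->sched;
		else
			normal_prio_sched[num_normal++] = &ring->sched;
	}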

Regards,

Nirmoy



Regards,
Christian.


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v5 1/4] drm/amdgpu: set compute queue priority at mqd_init

2020-03-02 Thread Christian König

On 02.03.20 at 15:20, Nirmoy wrote:

Hi Christian

On 3/2/20 2:10 PM, Christian König wrote:




diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

index 24caff085d00..201c6ac7bf9d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -222,6 +222,7 @@ struct amdgpu_ring {
  struct mutex    priority_mutex;
  /* protected by priority_mutex */
  int    priority;
+    bool    gfx_pipe_priority;


Didn't you want to make this an enum? Or was that another field?


Shall I move gfx_pipe_priority from amdgpu_gfx.h to amdgpu_ring.h?

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -35,6 +35,13 @@
 #define AMDGPU_MAX_VCE_RINGS   3
 #define AMDGPU_MAX_UVD_ENC_RINGS   2

+/* gfx ring's pipe priority */
+enum gfx_pipe_priority {
+   AMDGPU_GFX_PIPE_PRIO_NORMAL = 1,
+   AMDGPU_GFX_PIPE_PRIO_HIGH,
+   AMDGPU_GFX_PIPE_PRIO_MAX
+};

or else

@@ -222,7 +229,8 @@ struct amdgpu_ring {
    struct mutex    priority_mutex;
    /* protected by priority_mutex */
    int priority;
-   bool    gfx_pipe_priority;
+
+   enum gfx_pipe_priority  pipe_priority;

doesn't work because of a compilation error: "field ‘pipe_priority’ has 
incomplete type".


Mhm, let me ask from the other direction: What is that good for in the 
first place?


As far as I can see this is just to communicate to the ctx handling what 
priority a hw ring has, right?


But what we actually need in the ctx handling is an array of rings with 
normal and high priority. So why don't we create that in the first place?
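
For illustration, a minimal sketch of what such per-priority scheduler arrays 
could look like in amdgpu_gfx (the exact layout is an assumption; only 
compute_prio_sched and num_compute_sched appear in the v5 patch quoted below):

	/* Sketch: one scheduler list per hardware priority level, indexed by
	 * enum gfx_pipe_priority, so ctx handling can pick a list directly. */
	struct drm_gpu_scheduler	*compute_sched[AMDGPU_MAX_COMPUTE_RINGS];
	struct drm_gpu_scheduler	**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
	uint32_t			num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];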


Regards,
Christian.



Regards,

Nirmoy



Re: [PATCH v5 1/4] drm/amdgpu: set compute queue priority at mqd_init

2020-03-02 Thread Nirmoy

Hi Christian

On 3/2/20 2:10 PM, Christian König wrote:




diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

index 24caff085d00..201c6ac7bf9d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -222,6 +222,7 @@ struct amdgpu_ring {
  struct mutex    priority_mutex;
  /* protected by priority_mutex */
  int    priority;
+    bool    gfx_pipe_priority;


Didn't you want to make this an enum? Or was that another field?


Shall I move gfx_pipe_priority from amdgpu_gfx.h to amdgpu_ring.h?

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -35,6 +35,13 @@
 #define AMDGPU_MAX_VCE_RINGS   3
 #define AMDGPU_MAX_UVD_ENC_RINGS   2

+/* gfx ring's pipe priority */
+enum gfx_pipe_priority {
+   AMDGPU_GFX_PIPE_PRIO_NORMAL = 1,
+   AMDGPU_GFX_PIPE_PRIO_HIGH,
+   AMDGPU_GFX_PIPE_PRIO_MAX
+};

or else

@@ -222,7 +229,8 @@ struct amdgpu_ring {
    struct mutex    priority_mutex;
    /* protected by priority_mutex */
    int priority;
-   bool    gfx_pipe_priority;
+
+   enum gfx_pipe_priority  pipe_priority;

doesn't work because of a compilation error: "field ‘pipe_priority’ has 
incomplete type".


Regards,

Nirmoy



Re: [PATCH v5 1/4] drm/amdgpu: set compute queue priority at mqd_init

2020-03-02 Thread Christian König

On 02.03.20 at 13:58, Nirmoy Das wrote:

We were changing compute ring priority while the rings were in use,
before every job submission, which is not recommended. This patch
sets the compute queue priority at MQD initialization for gfx8, gfx9 and
gfx10.

Policy: make queue 0 of each pipe a high-priority compute queue.

High- and normal-priority compute sched lists are generated from the sets of
high- and normal-priority compute queues. At context creation, the entity of a
compute queue gets a sched list from the high- or normal-priority set,
depending on ctx->priority.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 61 +---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
  9 files changed, 136 insertions(+), 21 deletions(-)
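
To make the queue-0 policy from the commit message concrete, a hedged sketch of 
the helper the gfx_v8/9/10 MQD-init changes could use; the helper name and 
signature are assumptions, since those hunks are not quoted in full here:

	/* Sketch only: queue 0 of each compute pipe is treated as the high
	 * priority queue, everything else stays on normal priority. */
	static bool amdgpu_gfx_is_high_priority_compute_queue(struct amdgpu_device *adev,
							      int queue)
	{
		return queue == 0;
	}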

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index f397ff97b4e4..8304d0c87899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
-   struct amdgpu_ring *ring;
struct amdgpu_bo_list_entry *e;
struct amdgpu_job *job;
uint64_t seq;
@@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
priority = job->base.s_priority;
drm_sched_entity_push_job(&job->base, entity);

-   ring = to_amdgpu_ring(entity->rq->sched);
-   amdgpu_ring_priority_get(ring, priority);
-
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);

ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 94a6c42f29ea..8c52152e3a6e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
return -EACCES;
  }

+static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
+{
+   switch(prio) {
+   case DRM_SCHED_PRIORITY_MIN:
+   case DRM_SCHED_PRIORITY_NORMAL:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+   case DRM_SCHED_PRIORITY_HIGH_SW:


Probably better to keep HIGH_SW on the normal HW priority (see the sketch below).


+   case DRM_SCHED_PRIORITY_HIGH_HW:
+   case DRM_SCHED_PRIORITY_KERNEL:
+   return AMDGPU_GFX_PIPE_PRIO_HIGH;
+   default:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+   }
+
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+}
+
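
For illustration, a hedged sketch of the mapping with that suggestion applied 
(HIGH_SW kept on the normal HW priority); this is not the posted patch:

	static enum gfx_pipe_priority
	amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
	{
		switch (prio) {
		case DRM_SCHED_PRIORITY_HIGH_HW:
		case DRM_SCHED_PRIORITY_KERNEL:
			return AMDGPU_GFX_PIPE_PRIO_HIGH;
		default:
			/* MIN, NORMAL and HIGH_SW all stay on the normal HW priority. */
			return AMDGPU_GFX_PIPE_PRIO_NORMAL;
		}
	}
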
  static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, const u32 hw_ip, const u32 ring)
  {
struct amdgpu_device *adev = ctx->adev;
struct amdgpu_ctx_entity *entity;
struct drm_gpu_scheduler **scheds = NULL, *sched = NULL;
unsigned num_scheds = 0;
+   enum gfx_pipe_priority compute_priority;


I would call that hw_priority everywhere, since we are probably going to get 
the same thing for the GFX and VCE/VCN rings as well.



enum drm_sched_priority priority;
int r;

@@ -85,8 +103,10 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, const u32 hw_ip, const
num_scheds = 1;
break;
case AMDGPU_HW_IP_COMPUTE:
-   scheds = adev->gfx.compute_sched;
-   num_scheds = adev->gfx.num_compute_sched;
+   compute_priority =
+   amdgpu_ctx_sched_prio_to_compute_prio(priority);
+   scheds = adev->gfx.compute_prio_sched[compute_priority];
+   num_scheds = adev->gfx.num_compute_sched[compute_priority];
break;
case AMDGPU_HW_IP_DMA:
scheds = adev->sdma.sdma_sched;
@@ -628,20 +648,47 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr)
mutex_destroy(&mgr->lock);
  }

+
+static void amdgpu_ctx_init_compute_sched(struct amdgpu_device *adev)
+{
+   int num_compute_sched_normal = 0;
+   int num_compute_sched_high = AMDGPU_MAX_COMPUTE_RINGS - 1;
+   int i;
+
+   /* fill compute_sched array as: start from 0th index for normal priority scheds and
+    * start from (last_index - num_compute_sched_normal) for high priority
+    * scheds */
+   for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+   if (!adev->gfx.compute_ring[i].gfx_pipe_priority)
+