Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-05 Thread Luben Tuikov
On 2020-03-05 16:13, Nirmoy wrote:
> 
> On 3/5/20 7:42 PM, Luben Tuikov wrote:
>>
>>> A quick search leads me to amdgpu_sched_ioctl(), which uses
>>> DRM_SCHED_PRIORITY_INVALID to indicate an invalid value from userspace.
>>> I don't know enough about the DRM API to suggest any useful changes
>>> regarding this. But again, this isn't in the scope of this patch series.
>> I'm not asking you to eliminate "DRM_SCHED_PRIORITY_INVALID".
>> Oh wait! You forgot what I suggested in a previous review on
>> how to fix enum drm_sched_priority, did you? :-D Search
>> for that email.
> 
> Let me quote[1]:
> 
> "
> 
> Also consider changing to this:
> 
> 
> enum drm_sched_priority {
>     DRM_SCHED_PRIORITY_UNSET,
>     DRM_SCHED_PRIORITY_INVALID,  <--- remove
>     DRM_SCHED_PRIORITY_MIN,
>     DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
>     DRM_SCHED_PRIORITY_NORMAL,
>     DRM_SCHED_PRIORITY_HIGH_SW,
>     DRM_SCHED_PRIORITY_HIGH_HW,
>     DRM_SCHED_PRIORITY_KERNEL,
>     DRM_SCHED_PRIORITY_MAX,
> };
> 
> We should never have an "invalid priority", just ignored priority. :-)
> Second, having 0 as UNSET gives you easy priority when you set
> map[0] = normal_priority, as memory usually comes memset to 0.
> IOW, you'd not need to check against UNSET, and simply use
> the default [0] which you'd set to normal priority.
> 
> "
> 

Excellent! You found the email.

Yes, there is no reason to represent and carry around
an "invalid" priority--it simply means that the *input*
was invalid, but the priority would always be set to
something valid and meaningful: the input was invalid,
not the priority.

If you encounter an invalid input, simply set the priority
to unset, or even better, minimum. From then on,
any further map look-ups would be a trivial linear
transformation.

It is generally a good idea to map 0, the default software
memory initialization value, to "*_NONE" in enums. Then start the
meaningful labels at 1, so one can distinguish between
"this data has never been touched by software" and "this was
set to something by someone after memory initialization".
So you should rename "_UNSET" to "_NONE". It *was* set (by memset()),
just not meaningfully so.

To look like this:

enum drm_sched_priority {
	DRM_SCHED_PRIORITY_NONE,
	DRM_SCHED_PRIORITY_LOW,
	DRM_SCHED_PRIORITY_NORMAL,
	DRM_SCHED_PRIORITY_HIGH,
	DRM_SCHED_PRIORITY_HIGHER,
	DRM_SCHED_PRIORITY_KERNEL,

	DRM_SCHED_PRIORITY_SIZE  /* intentionally no comma */
};

Valid values are "*_NONE" to "*_KERNEL".
Meaningful (i.e. intentionally set) values are "*_LOW" to "*_KERNEL".
"*_SIZE" you use to allocate storage (an array, for instance)
whose size can index all valid priorities.

Then you can do:

enum drm_sched_priority some_map[XYZ_DOMAIN_PRIO_SIZE] = {
	[0 ... XYZ_DOMAIN_PRIO_SIZE - 1] = XYZ_DOMAIN_PRIO_DEFAULT,
	[A]  = XYZ_DOMAIN_PRIO_NORMAL,
	[B]  = XYZ_DOMAIN_PRIO_HIGH,
};

Where 0 <= A, B < XYZ_DOMAIN_PRIO_SIZE.

This also allows you to chain those transformations, similarly.
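
For example, a rough sketch of such chaining (the xyz_*/XYZ_* names are
made up purely for illustration, and the first map assumes the reworked
enum drm_sched_priority above; this is not code from the driver):

/* Made-up example enums; only the structure matters here. */
enum xyz_domain_prio {
	XYZ_DOMAIN_PRIO_NORMAL,
	XYZ_DOMAIN_PRIO_HIGH,
	XYZ_DOMAIN_PRIO_SIZE
};

enum xyz_hw_prio {
	XYZ_HW_PRIO_LOW,
	XYZ_HW_PRIO_HIGH,
	XYZ_HW_PRIO_SIZE
};

/* First map: DRM scheduling priority to domain priority.
 * Everything defaults to NORMAL; only KERNEL is promoted.
 */
static const enum xyz_domain_prio sched_to_domain[DRM_SCHED_PRIORITY_SIZE] = {
	[0 ... DRM_SCHED_PRIORITY_SIZE - 1] = XYZ_DOMAIN_PRIO_NORMAL,
	[DRM_SCHED_PRIORITY_KERNEL] = XYZ_DOMAIN_PRIO_HIGH,
};

/* Second map: domain priority to hardware queue priority. */
static const enum xyz_hw_prio domain_to_hw[XYZ_DOMAIN_PRIO_SIZE] = {
	[0 ... XYZ_DOMAIN_PRIO_SIZE - 1] = XYZ_HW_PRIO_LOW,
	[XYZ_DOMAIN_PRIO_HIGH] = XYZ_HW_PRIO_HIGH,
};

/* Chained transformation, y = g(f(x)): still just two array look-ups. */
static inline enum xyz_hw_prio xyz_sched_to_hw_prio(enum drm_sched_priority sp)
{
	return domain_to_hw[sched_to_domain[sp]];
}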

Regards,
Luben


> I guess I understood it wrong.
> 
> 
> [1]https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg45621.html 
> 
> 
> 
> Regards,
> 
> Nirmoy
> 



Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-05 Thread Nirmoy


On 3/5/20 7:42 PM, Luben Tuikov wrote:



A quick search leads me to amdgpu_sched_ioctl(), which uses
DRM_SCHED_PRIORITY_INVALID to indicate an invalid value from userspace.
I don't know enough about the DRM API to suggest any useful changes
regarding this. But again, this isn't in the scope of this patch series.

I'm not asking you to eliminate "DRM_SCHED_PRIORITY_INVALID".
Oh wait! You forgot what I suggested in a previous review on
how to fix enum drm_sched_priority, did you? :-D Search
for that email.


Let me quote[1]:

"

Also consider changing to this:


enum drm_sched_priority {
    DRM_SCHED_PRIORITY_UNSET,
    DRM_SCHED_PRIORITY_INVALID,  <--- remove
    DRM_SCHED_PRIORITY_MIN,
    DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
    DRM_SCHED_PRIORITY_NORMAL,
    DRM_SCHED_PRIORITY_HIGH_SW,
    DRM_SCHED_PRIORITY_HIGH_HW,
    DRM_SCHED_PRIORITY_KERNEL,
    DRM_SCHED_PRIORITY_MAX,
};

We should never have an "invalid priority", just ignored priority. :-)
Second, having 0 as UNSET gives you easy priority when you set
map[0] = normal_priority, as memory usually comes memset to 0.
IOW, you'd not need to check against UNSET, and simply use
the default [0] which you'd set to normal priority.

"

I guess I understood it wrong.


[1]https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg45621.html 




Regards,

Nirmoy



Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-05 Thread Luben Tuikov
It is very difficult to find your comment replies when
you do not add an empty line around them.

Do you not see how everyone responds and adds
an empty line around them?

Why don't you?

cont'd below

On 2020-03-05 01:21, Nirmoy wrote:
> 
> On 3/4/20 10:41 PM, Luben Tuikov wrote:
>> On 2020-03-03 7:50 a.m., Nirmoy Das wrote:
>>> We were changing compute ring priority while rings were being used
>>> before every job submission which is not recommended. This patch
>>> sets compute queue priority at mqd initialization for gfx8, gfx9 and
>>> gfx10.
>>>
>>> Policy: make queue 0 of each pipe as high priority compute queue
>>>
>>> High/normal priority compute sched lists are generated from set of 
>>> high/normal
>>> priority compute queues. At context creation, entity of compute queue
>>> get a sched list from high or normal priority depending on ctx->priority
>>>
>>> Signed-off-by: Nirmoy Das 
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 60 +---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
>>>   9 files changed, 135 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> index f397ff97b4e4..8304d0c87899 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> @@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser 
>>> *p,
>>> struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
>>> struct drm_sched_entity *entity = p->entity;
>>> enum drm_sched_priority priority;
>>> -   struct amdgpu_ring *ring;
>>> struct amdgpu_bo_list_entry *e;
>>> struct amdgpu_job *job;
>>> uint64_t seq;
>>> @@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser 
>>> *p,
>>> priority = job->base.s_priority;
>>> drm_sched_entity_push_job(&job->base, entity);
>>>
>>> -   ring = to_amdgpu_ring(entity->rq->sched);
>>> -   amdgpu_ring_priority_get(ring, priority);
>>> -
>>> amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
>>>
>>> ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> index 94a6c42f29ea..4ad944f85672 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> @@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file 
>>> *filp,
>>> return -EACCES;
>>>   }
>>>
>>> +static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum 
>>> drm_sched_priority prio)
>>> +{
>>> +   switch(prio) {
>> LKCS wants a space after a keyword ("switch") and before parenthesis "(".
>> https://www.kernel.org/doc/html/v4.10/process/coding-style.html#placing-braces-and-spaces
>>
>> Please read the LKCS in your local Linux source code:
>> Documentation/process/coding-style.rst
>> so we don't have to point that out anymore.
> Yes, this is happening more often than it should with me. I will be careful.
>>

See above? No empty lines around your reply.

Just read the LKCS document a few times.

>>> +   case DRM_SCHED_PRIORITY_MIN:
>>> +   case DRM_SCHED_PRIORITY_NORMAL:
>>> +   case DRM_SCHED_PRIORITY_HIGH_SW:
>>> +   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
>> This is taken care of by the "default:" label and
>> unnecessary here (can be removed completely).
> Thanks!
>>
>>> +   case DRM_SCHED_PRIORITY_HIGH_HW:
>>> +   case DRM_SCHED_PRIORITY_KERNEL:
>>> +   return AMDGPU_GFX_PIPE_PRIO_HIGH;
>>> +   default:
>>> +   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
>>> +   }
>> This can be a map. We're mapping from one integer
>> space to another. There is no reason for a jump switch.
>>
>> For instance,
>>
>> /* Map of the DRM scheduling priority to pipe
>>   * priority.
>>   */
>> const enum gfx_pipe_priority s2p_prio_map[] = {
>>  [0] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [1] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [2] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [3] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [4] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [5] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>>  [6] = AMDGPU_GFX_PIPE_PRIO_HIGH,
>>  [7] = AMDGPU_GFX_PIPE_PRIO_HIGH,
>>  [8] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>> };
>>
>> /* Map it!
>>   */
>> pipe_prio = s2p_prio_map[sched_prio + 2];   ## You can view this as y = f(x 
>> + 2).
> 
> I think that would be over-engineering for not much extra benefit.
> A switch statement here is more expressive and a bit more immune to
> changes that might happen to DRM_SCHED_PRIORITY_*. I would rather let
> the compiler do its job.

Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-04 Thread Nirmoy


On 3/4/20 10:41 PM, Luben Tuikov wrote:

On 2020-03-03 7:50 a.m., Nirmoy Das wrote:

We were changing compute ring priority while rings were being used
before every job submission which is not recommended. This patch
sets compute queue priority at mqd initialization for gfx8, gfx9 and
gfx10.

Policy: make queue 0 of each pipe as high priority compute queue

High/normal priority compute sched lists are generated from set of high/normal
priority compute queues. At context creation, entity of compute queue
get a sched list from high or normal priority depending on ctx->priority

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 60 +---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
  9 files changed, 135 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index f397ff97b4e4..8304d0c87899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
-   struct amdgpu_ring *ring;
struct amdgpu_bo_list_entry *e;
struct amdgpu_job *job;
uint64_t seq;
@@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
priority = job->base.s_priority;
drm_sched_entity_push_job(&job->base, entity);

-   ring = to_amdgpu_ring(entity->rq->sched);
-   amdgpu_ring_priority_get(ring, priority);
-
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);

ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 94a6c42f29ea..4ad944f85672 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
return -EACCES;
  }

+static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum 
drm_sched_priority prio)
+{
+   switch(prio) {

LKCS wants a space after a keyword ("switch") and before parenthesis "(".
https://www.kernel.org/doc/html/v4.10/process/coding-style.html#placing-braces-and-spaces

Please read the LKCS in your local Linux source code:
Documentation/process/coding-style.rst
so we don't have to point that out anymore.

Yes, this is happening more often than it should with me. I will be careful.



+   case DRM_SCHED_PRIORITY_MIN:
+   case DRM_SCHED_PRIORITY_NORMAL:
+   case DRM_SCHED_PRIORITY_HIGH_SW:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;

This is taken care of by the "default:" label and
unnecessary here (can be removed completely).

Thanks!



+   case DRM_SCHED_PRIORITY_HIGH_HW:
+   case DRM_SCHED_PRIORITY_KERNEL:
+   return AMDGPU_GFX_PIPE_PRIO_HIGH;
+   default:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+   }

This can be a map. We're mapping from one integer
space to another. There is no reason for a jump switch.

For instance,

/* Map of the DRM scheduling priority to pipe
  * priority.
  */
const enum gfx_pipe_priority s2p_prio_map[] = {
[0] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[1] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[2] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[3] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[4] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[5] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[6] = AMDGPU_GFX_PIPE_PRIO_HIGH,
[7] = AMDGPU_GFX_PIPE_PRIO_HIGH,
[8] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
};

/* Map it!
  */
pipe_prio = s2p_prio_map[sched_prio + 2];   ## You can view this as y = f(x + 
2).


I think that would be over-engineering for not much extra benefit.
A switch statement here is more expressive and a bit more immune to
changes that might happen to DRM_SCHED_PRIORITY_*. I would rather let
the compiler do its job.



Note that if you fix enum drm_sched_priority as I described
in an earlier review, you'd not need the additive factor of 2
in the above linear transformation.


A quick search leads me to amdgpu_sched_ioctl(), which uses
DRM_SCHED_PRIORITY_INVALID to indicate an invalid value from userspace.
I don't know enough about the DRM API to suggest any useful changes
regarding this. But again, this isn't in the scope of this patch series.





+
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;

You don't need a return here at all, when you have a "default:" label

Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-04 Thread Luben Tuikov
On 2020-03-04 4:41 p.m., Luben Tuikov wrote:
> On 2020-03-03 7:50 a.m., Nirmoy Das wrote:
[snip]
>> +case DRM_SCHED_PRIORITY_HIGH_HW:
>> +case DRM_SCHED_PRIORITY_KERNEL:
>> +return AMDGPU_GFX_PIPE_PRIO_HIGH;
>> +default:
>> +return AMDGPU_GFX_PIPE_PRIO_NORMAL;
>> +}
> 
> This can be a map. We're mapping from one integer
> space to another. There is no reason for a jump switch.
> 
> For instance,
> 
> /* Map of the DRM scheduling priority to pipe
>  * priority.
>  */
> const enum gfx_pipe_priority s2p_prio_map[] = {
>   [0] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [1] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [2] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [3] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [4] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [5] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
>   [6] = AMDGPU_GFX_PIPE_PRIO_HIGH,
>   [7] = AMDGPU_GFX_PIPE_PRIO_HIGH,
>   [8] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
> };
> 
> /* Map it!
>  */
> pipe_prio = s2p_prio_map[sched_prio + 2];   ## You can view this as y = f(x + 
> 2).

Note that you can make this into a function:

static inline enum gfx_pipe_priority s2p_prio_map(enum drm_sched_priority sp)
{
	return _s2p_prio_map[sp + 2];
}

Regards,
Luben

> 
> Note that if you fix enum drm_sched_priority as I described
> in an earlier review, you'd not need the additive factor of 2
> in the above linear transformation.


Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-04 Thread Luben Tuikov
On 2020-03-03 7:50 a.m., Nirmoy Das wrote:
> We were changing compute ring priority while rings were being used
> before every job submission which is not recommended. This patch
> sets compute queue priority at mqd initialization for gfx8, gfx9 and
> gfx10.
> 
> Policy: make queue 0 of each pipe as high priority compute queue
> 
> High/normal priority compute sched lists are generated from set of high/normal
> priority compute queues. At context creation, entity of compute queue
> get a sched list from high or normal priority depending on ctx->priority
> 
> Signed-off-by: Nirmoy Das 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 60 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
>  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
>  9 files changed, 135 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index f397ff97b4e4..8304d0c87899 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
>   struct drm_sched_entity *entity = p->entity;
>   enum drm_sched_priority priority;
> - struct amdgpu_ring *ring;
>   struct amdgpu_bo_list_entry *e;
>   struct amdgpu_job *job;
>   uint64_t seq;
> @@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   priority = job->base.s_priority;
> drm_sched_entity_push_job(&job->base, entity);
> 
> - ring = to_amdgpu_ring(entity->rq->sched);
> - amdgpu_ring_priority_get(ring, priority);
> -
>   amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
> 
>   ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index 94a6c42f29ea..4ad944f85672 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file 
> *filp,
>   return -EACCES;
>  }
> 
> +static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum 
> drm_sched_priority prio)
> +{
> + switch(prio) {

LKCS wants a space after a keyword ("switch") and before parenthesis "(".
https://www.kernel.org/doc/html/v4.10/process/coding-style.html#placing-braces-and-spaces

Please read the LKCS in your local Linux source code:
Documentation/process/coding-style.rst
so we don't have to point that out anymore.

> + case DRM_SCHED_PRIORITY_MIN:
> + case DRM_SCHED_PRIORITY_NORMAL:
> + case DRM_SCHED_PRIORITY_HIGH_SW:
> + return AMDGPU_GFX_PIPE_PRIO_NORMAL;

This is taken care of by the "default:" label and
unnecessary here (can be removed completely).

> + case DRM_SCHED_PRIORITY_HIGH_HW:
> + case DRM_SCHED_PRIORITY_KERNEL:
> + return AMDGPU_GFX_PIPE_PRIO_HIGH;
> + default:
> + return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> + }

This can be a map. We're mapping from one integer
space to another. There is no reason for a jump switch.

For instance,

/* Map of the DRM scheduling priority to pipe
 * priority.
 */
const enum gfx_pipe_priority s2p_prio_map[] = {
[0] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[1] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[2] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[3] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[4] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[5] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
[6] = AMDGPU_GFX_PIPE_PRIO_HIGH,
[7] = AMDGPU_GFX_PIPE_PRIO_HIGH,
[8] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
};

/* Map it!
 */
pipe_prio = s2p_prio_map[sched_prio + 2];   ## You can view this as y = f(x + 
2).

Note that if you fix enum drm_sched_priority as I described
in an earlier review, you'd not need the additive factor of 2
in the above linear transformation.
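
As a sketch of what that would look like (this assumes the reworked
DRM_SCHED_PRIORITY_* enum proposed in that review, where 0 is _NONE and
_SIZE is the count; it is not the code in this patch):

static const enum gfx_pipe_priority s2p_prio_map[DRM_SCHED_PRIORITY_SIZE] = {
	/* Everything defaults to the normal pipe priority... */
	[0 ... DRM_SCHED_PRIORITY_SIZE - 1] = AMDGPU_GFX_PIPE_PRIO_NORMAL,
	/* ...and only the top levels are promoted. */
	[DRM_SCHED_PRIORITY_HIGHER] = AMDGPU_GFX_PIPE_PRIO_HIGH,
	[DRM_SCHED_PRIORITY_KERNEL] = AMDGPU_GFX_PIPE_PRIO_HIGH,
};

/* y = f(x), no additive factor: */
hw_prio = s2p_prio_map[prio];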

> +
> + return AMDGPU_GFX_PIPE_PRIO_NORMAL;

You don't need a return here at all, when you have a "default:" label
up there.

> +}
> +
>  static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, const u32 hw_ip, 
> const u32 ring)
>  {
>   struct amdgpu_device *adev = ctx->adev;
>   struct amdgpu_ctx_entity *entity;
>   struct drm_gpu_scheduler **scheds = NULL, *sched = NULL;
>   unsigned num_scheds = 0;
> + enum gfx_pipe_priority hw_prio;
>   enum drm_sched_priority priority;
>   int r;
> 
> @@ -85,8 +103,9 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, 
> const u32 hw_ip, const
>   num_scheds = 1;
>  

Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-04 Thread Alex Deucher
On Tue, Mar 3, 2020 at 7:47 AM Nirmoy Das  wrote:
>
> We were changing compute ring priority while rings were being used
> before every job submission which is not recommended. This patch
> sets compute queue priority at mqd initialization for gfx8, gfx9 and
> gfx10.
>
> Policy: make queue 0 of each pipe as high priority compute queue
>
> High/normal priority compute sched lists are generated from set of high/normal
> priority compute queues. At context creation, entity of compute queue
> get a sched list from high or normal priority depending on ctx->priority
>
> Signed-off-by: Nirmoy Das 

I haven't had a chance to follow all of the scheduler discussions, but
the MQD setup looks good to me, so:
Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 60 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
>  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
>  9 files changed, 135 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index f397ff97b4e4..8304d0c87899 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> struct drm_sched_entity *entity = p->entity;
> enum drm_sched_priority priority;
> -   struct amdgpu_ring *ring;
> struct amdgpu_bo_list_entry *e;
> struct amdgpu_job *job;
> uint64_t seq;
> @@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> priority = job->base.s_priority;
> drm_sched_entity_push_job(&job->base, entity);
>
> -   ring = to_amdgpu_ring(entity->rq->sched);
> -   amdgpu_ring_priority_get(ring, priority);
> -
> amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
>
> ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index 94a6c42f29ea..4ad944f85672 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file 
> *filp,
> return -EACCES;
>  }
>
> +static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum 
> drm_sched_priority prio)
> +{
> +   switch(prio) {
> +   case DRM_SCHED_PRIORITY_MIN:
> +   case DRM_SCHED_PRIORITY_NORMAL:
> +   case DRM_SCHED_PRIORITY_HIGH_SW:
> +   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> +   case DRM_SCHED_PRIORITY_HIGH_HW:
> +   case DRM_SCHED_PRIORITY_KERNEL:
> +   return AMDGPU_GFX_PIPE_PRIO_HIGH;
> +   default:
> +   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> +   }
> +
> +   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
> +}
> +
>  static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, const u32 hw_ip, 
> const u32 ring)
>  {
> struct amdgpu_device *adev = ctx->adev;
> struct amdgpu_ctx_entity *entity;
> struct drm_gpu_scheduler **scheds = NULL, *sched = NULL;
> unsigned num_scheds = 0;
> +   enum gfx_pipe_priority hw_prio;
> enum drm_sched_priority priority;
> int r;
>
> @@ -85,8 +103,9 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, 
> const u32 hw_ip, const
> num_scheds = 1;
> break;
> case AMDGPU_HW_IP_COMPUTE:
> -   scheds = adev->gfx.compute_sched;
> -   num_scheds = adev->gfx.num_compute_sched;
> +   hw_prio = 
> amdgpu_ctx_sched_prio_to_compute_prio(priority);
> +   scheds = adev->gfx.compute_prio_sched[hw_prio];
> +   num_scheds = adev->gfx.num_compute_sched[hw_prio];
> break;
> case AMDGPU_HW_IP_DMA:
> scheds = adev->sdma.sdma_sched;
> @@ -628,20 +647,47 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr)
> mutex_destroy(&mgr->lock);
>  }
>
> +
> +static void amdgpu_ctx_init_compute_sched(struct amdgpu_device *adev)
> +{
> +   int num_compute_sched_normal = 0;
> +   int num_compute_sched_high = AMDGPU_MAX_COMPUTE_RINGS - 1;
> +   int i;
> +
> +   /* fill compute_sched array as: start from 0th index for normal 
> priority scheds and
> +* start from (last_index - num_compute_sched_normal) for high 
> priority
> +* scheds */
> +   for (i = 0; i < adev->gfx.num_compute_rings; 

Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-04 Thread Nirmoy


On 3/3/20 4:30 PM, Christian König wrote:

Am 03.03.20 um 16:22 schrieb Nirmoy:

Hi Christian,

On 3/3/20 4:14 PM, Christian König wrote:


I mean the drm_gpu_scheduler * array doesn't need to be
constructed by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
maybe in amdgpu_ring.c?


That's one possibility, yes. On the other hand feel free to go ahead 
with the boolean for now. \


Are you fine with struct drm_gpu_scheduler 
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX]; for now as well ?


Yeah, that should work anyway. My only concern was adding the boolean 
to the ring structure.


Thanks.

I will later work on the ctx code cleanup to move drm_gpu_scheduler 
array construction to a proper place.



Regards,

Nirmoy



regards,
Christian.




Regards,

Nirmoy






Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Christian König

Am 03.03.20 um 16:22 schrieb Nirmoy:

Hi Christian,

On 3/3/20 4:14 PM, Christian König wrote:


I mean the drm_gpu_scheduler * array doesn't need to be
constructed by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
maybe in amdgpu_ring.c?


That's one possibility, yes. On the other hand feel free to go ahead 
with the boolean for now. \


Are you fine with struct drm_gpu_scheduler 
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX]; for now as well ?


Yeah, that should work anyway. My only concern was adding the boolean to 
the ring structure.


regards,
Christian.




Regards,

Nirmoy





Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Nirmoy

Hi Christian,

On 3/3/20 4:14 PM, Christian König wrote:


I mean the drm_gpu_scheduler * array doesn't need to be
constructed by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
maybe in amdgpu_ring.c?


That's one possibility, yes. On the other hand feel free to go ahead 
with the boolean for now. \


Are you fine with struct drm_gpu_scheduler 
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX]; for now as well ?



Regards,

Nirmoy



Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Christian König

Am 03.03.20 um 15:29 schrieb Nirmoy:


On 3/3/20 3:03 PM, Christian König wrote:

Am 03.03.20 um 13:50 schrieb Nirmoy Das:

[SNIP]
  struct amdgpu_mec {
  struct amdgpu_bo    *hpd_eop_obj;
  u64    hpd_eop_gpu_addr;
@@ -280,8 +290,9 @@ struct amdgpu_gfx {
  uint32_t    num_gfx_sched;
  unsigned    num_gfx_rings;
  struct amdgpu_ring compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
+    struct drm_gpu_scheduler 
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
  struct drm_gpu_scheduler 
*compute_sched[AMDGPU_MAX_COMPUTE_RINGS];

-    uint32_t    num_compute_sched;
+    uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
  unsigned    num_compute_rings;
  struct amdgpu_irq_src    eop_irq;
  struct amdgpu_irq_src    priv_reg_irq;


Well my question is why we need compute_prio_sched here?


This one is so that I can leverage a single sched array,
compute_sched[AMDGPU_MAX_COMPUTE_RINGS], to store both priority sched
lists instead of

struct drm_gpu_scheduler
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_COMPUTE_RINGS];

I guess this makes the ctx code unnecessarily complex.


Well, it would be good if the ctx code didn't need to fill in 
compute_sched in the first place.


E.g. the hardware backends could provide to the ctx handling which
schedulers to use for a specific use case.
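
Purely as a sketch of that direction (the helper name and its placement
are hypothetical, not existing code; it only reuses the fields this
patch already adds):

/* Hypothetical gfx-side helper: the backend hands out the scheduler
 * list for a given pipe priority, so amdgpu_ctx.c never has to know
 * how that list was assembled.
 */
static void amdgpu_gfx_get_compute_scheds(struct amdgpu_device *adev,
					  enum gfx_pipe_priority hw_prio,
					  struct drm_gpu_scheduler ***scheds,
					  uint32_t *num_scheds)
{
	*scheds = adev->gfx.compute_prio_sched[hw_prio];
	*num_scheds = adev->gfx.num_compute_sched[hw_prio];
}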



Can't we just design that as:
struct drm_gpu_scheduler 
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_HI_COMPUTE_RINGS];

uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];

What should be the value of AMDGPU_MAX_HI_COMPUTE_RINGS ?


I mean the drm_gpu_scheduler * array doesn't need to be
constructed by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
maybe in amdgpu_ring.c?


That's one possibility, yes. On the other hand feel free to go ahead 
with the boolean for now.


This seems to be too complex and too wide a cleanup to do
as part of this patch set.


Regards,
Christian.



Or am I missing something?

Regards,
Christian.



Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Nirmoy


On 3/3/20 3:03 PM, Christian König wrote:

Am 03.03.20 um 13:50 schrieb Nirmoy Das:

[SNIP]
  struct amdgpu_mec {
  struct amdgpu_bo    *hpd_eop_obj;
  u64    hpd_eop_gpu_addr;
@@ -280,8 +290,9 @@ struct amdgpu_gfx {
  uint32_t    num_gfx_sched;
  unsigned    num_gfx_rings;
  struct amdgpu_ring compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
+    struct drm_gpu_scheduler 
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX];

  struct drm_gpu_scheduler *compute_sched[AMDGPU_MAX_COMPUTE_RINGS];
-    uint32_t    num_compute_sched;
+    uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
  unsigned    num_compute_rings;
  struct amdgpu_irq_src    eop_irq;
  struct amdgpu_irq_src    priv_reg_irq;


Well my question is why we need compute_prio_sched here?


This one is so that I can leverage a single sched array,
compute_sched[AMDGPU_MAX_COMPUTE_RINGS], to store both priority sched
lists instead of

struct drm_gpu_scheduler
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_COMPUTE_RINGS];

I guess this makes the ctx code unnecessarily complex.
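
To make the layout concrete, the idea is roughly this (a sketch only,
not the patch's exact code):

/* Normal-priority schedulers are packed from index 0 upward, the
 * high-priority ones from the end downward, and compute_prio_sched[]
 * just records where each partition of the single array begins.
 */
adev->gfx.compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_NORMAL] =
	&adev->gfx.compute_sched[0];
adev->gfx.compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_HIGH] =
	&adev->gfx.compute_sched[AMDGPU_MAX_COMPUTE_RINGS -
		adev->gfx.num_compute_sched[AMDGPU_GFX_PIPE_PRIO_HIGH]];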



Can't we just design that as:
struct drm_gpu_scheduler 
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_HI_COMPUTE_RINGS];

uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];

What should be the value of AMDGPU_MAX_HI_COMPUTE_RINGS ?


I mean the drm_gpu_scheduler * array doesn't need to be
constructed by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
maybe in amdgpu_ring.c?


Or am I missing something?

Regards,
Christian.



Re: [PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Christian König

Am 03.03.20 um 13:50 schrieb Nirmoy Das:

[SNIP]
  struct amdgpu_mec {
struct amdgpu_bo*hpd_eop_obj;
u64 hpd_eop_gpu_addr;
@@ -280,8 +290,9 @@ struct amdgpu_gfx {
uint32_tnum_gfx_sched;
unsignednum_gfx_rings;
struct amdgpu_ring  compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
+   struct drm_gpu_scheduler
**compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
struct drm_gpu_scheduler
*compute_sched[AMDGPU_MAX_COMPUTE_RINGS];
-   uint32_tnum_compute_sched;
+   uint32_t
num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
unsignednum_compute_rings;
struct amdgpu_irq_src   eop_irq;
struct amdgpu_irq_src   priv_reg_irq;


Well my question is why we need compute_prio_sched here?

Can't we just design that as:
struct drm_gpu_scheduler 
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_HI_COMPUTE_RINGS];

uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];

I mean the drm_gpu_scheduler * array doesn't need to be constructed by
the context code in the first place.


Or am I missing something?

Regards,
Christian.


[PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init

2020-03-03 Thread Nirmoy Das
We were changing compute ring priority while rings were being used
before every job submission which is not recommended. This patch
sets compute queue priority at mqd initialization for gfx8, gfx9 and
gfx10.

Policy: make queue 0 of each pipe as high priority compute queue

High/normal priority compute sched lists are generated from set of high/normal
priority compute queues. At context creation, entity of compute queue
get a sched list from high or normal priority depending on ctx->priority

Signed-off-by: Nirmoy Das 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 60 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c  |  8 
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h  | 15 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  6 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 19 
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 23 +++--
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 20 
 9 files changed, 135 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index f397ff97b4e4..8304d0c87899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1205,7 +1205,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
-   struct amdgpu_ring *ring;
struct amdgpu_bo_list_entry *e;
struct amdgpu_job *job;
uint64_t seq;
@@ -1258,9 +1257,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
priority = job->base.s_priority;
drm_sched_entity_push_job(&job->base, entity);

-   ring = to_amdgpu_ring(entity->rq->sched);
-   amdgpu_ring_priority_get(ring, priority);
-
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);

ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 94a6c42f29ea..4ad944f85672 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -61,12 +61,30 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
return -EACCES;
 }

+static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum 
drm_sched_priority prio)
+{
+   switch(prio) {
+   case DRM_SCHED_PRIORITY_MIN:
+   case DRM_SCHED_PRIORITY_NORMAL:
+   case DRM_SCHED_PRIORITY_HIGH_SW:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+   case DRM_SCHED_PRIORITY_HIGH_HW:
+   case DRM_SCHED_PRIORITY_KERNEL:
+   return AMDGPU_GFX_PIPE_PRIO_HIGH;
+   default:
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+   }
+
+   return AMDGPU_GFX_PIPE_PRIO_NORMAL;
+}
+
 static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, const u32 hw_ip, 
const u32 ring)
 {
struct amdgpu_device *adev = ctx->adev;
struct amdgpu_ctx_entity *entity;
struct drm_gpu_scheduler **scheds = NULL, *sched = NULL;
unsigned num_scheds = 0;
+   enum gfx_pipe_priority hw_prio;
enum drm_sched_priority priority;
int r;

@@ -85,8 +103,9 @@ static int amdgpu_ctx_init_entity(struct amdgpu_ctx *ctx, 
const u32 hw_ip, const
num_scheds = 1;
break;
case AMDGPU_HW_IP_COMPUTE:
-   scheds = adev->gfx.compute_sched;
-   num_scheds = adev->gfx.num_compute_sched;
+   hw_prio = 
amdgpu_ctx_sched_prio_to_compute_prio(priority);
+   scheds = adev->gfx.compute_prio_sched[hw_prio];
+   num_scheds = adev->gfx.num_compute_sched[hw_prio];
break;
case AMDGPU_HW_IP_DMA:
scheds = adev->sdma.sdma_sched;
@@ -628,20 +647,47 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr)
mutex_destroy(&mgr->lock);
 }

+
+static void amdgpu_ctx_init_compute_sched(struct amdgpu_device *adev)
+{
+   int num_compute_sched_normal = 0;
+   int num_compute_sched_high = AMDGPU_MAX_COMPUTE_RINGS - 1;
+   int i;
+
+   /* fill compute_sched array as: start from 0th index for normal 
priority scheds and
+* start from (last_index - num_compute_sched_normal) for high priority
+* scheds */
+   for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+   if (!adev->gfx.compute_ring[i].has_high_prio)
+   adev->gfx.compute_sched[num_compute_sched_normal++] =
+   &adev->gfx.compute_ring[i].sched;
+   else
+   adev->gfx.compute_sched[num_compute_sched_high--] =
+   &adev->gfx.compute_ring[i].sched;
+   }
+
+   /* compute ring only has two