On 2020-03-05 16:13, Nirmoy wrote:
> A quick search leads me to amdgpu_sched_ioctl(), which is using
> DRM_SCHED_PRIORITY_INVALID to indicate an invalid value from
> userspace. I don't know much about the drm api to suggest any
> useful changes regarding this. But again this isn't in the scope
> of this
It is very difficult to find your comment replies when
you do not add an empty line around them.
Do you not see how everyone responds and adds
an empty line around them?
Why don't you?
cont'd below
On 2020-03-05 01:21, Nirmoy wrote:
> On 3/4/20 10:41 PM, Luben Tuikov wrote:
>> On 2020-03-03 7:50 a.m., Nirmoy Das wrote:
We were changing compute ring priority while rings were being used
before every job submission which is not recommended. This patch
sets compute queue priority at mqd initialization for gfx8, gfx9 and
gfx10.
On 2020-03-04 4:41 p.m., Luben Tuikov wrote:
> On 2020-03-03 7:50 a.m., Nirmoy Das wrote:
[snip]
>> +	case DRM_SCHED_PRIORITY_HIGH_HW:
>> +	case DRM_SCHED_PRIORITY_KERNEL:
>> +		return AMDGPU_GFX_PIPE_PRIO_HIGH;
>> +	default:
>> +		return AMDGPU_GFX_PIPE_PRIO_NORMAL;
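The switch quoted above can be fleshed out as a small self-contained sketch. The helper name amdgpu_gfx_compute_priority and the enum definitions below are assumptions for illustration only; DRM_SCHED_PRIORITY_* and AMDGPU_GFX_PIPE_PRIO_* are the identifiers discussed in the thread, but their real kernel definitions differ.

```c
/* Sketch only: simplified stand-ins for the kernel enums. */
enum drm_sched_priority {
	DRM_SCHED_PRIORITY_MIN,
	DRM_SCHED_PRIORITY_NORMAL,
	DRM_SCHED_PRIORITY_HIGH_SW,
	DRM_SCHED_PRIORITY_HIGH_HW,
	DRM_SCHED_PRIORITY_KERNEL,
};

enum gfx_pipe_priority {
	AMDGPU_GFX_PIPE_PRIO_NORMAL = 1,
	AMDGPU_GFX_PIPE_PRIO_HIGH = 2,
};

/* Map a scheduler priority onto the two hardware pipe priorities:
 * only HIGH_HW and KERNEL jobs get the high-priority pipe. */
static enum gfx_pipe_priority
amdgpu_gfx_compute_priority(enum drm_sched_priority prio)
{
	switch (prio) {
	case DRM_SCHED_PRIORITY_HIGH_HW:
	case DRM_SCHED_PRIORITY_KERNEL:
		return AMDGPU_GFX_PIPE_PRIO_HIGH;
	default:
		return AMDGPU_GFX_PIPE_PRIO_NORMAL;
	}
}
```

Since the default case swallows everything else, any new DRM_SCHED_PRIORITY_* value added later silently maps to normal priority, which is the conservative choice.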
On 3/3/20 4:30 PM, Christian König wrote:
> On 03.03.20 16:22, Nirmoy wrote:
>> Hi Christian,
>>
>> On 3/3/20 4:14 PM, Christian König wrote:
>>> I mean the drm_gpu_scheduler * array doesn't need to be constructed
>>> by the context code in the first place.
>>
>> Do you mean amdgpu_ctx_init_sched() should belong somewhere else,
>> maybe in amdgpu_ring.c?
>
> That's one possibility, yes.
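The idea being discussed can be sketched in a few lines: let the ring code build the drm_gpu_scheduler * array once, instead of having the context code construct it per context. All types and names below are simplified stand-ins, not the real amdgpu structures.

```c
#include <stddef.h>

/* Simplified stand-in for struct drm_gpu_scheduler. */
struct drm_gpu_scheduler { int ready; };

/* Simplified stand-in for struct amdgpu_ring. */
struct ring {
	struct drm_gpu_scheduler sched;
	int is_compute;
};

/* Collect pointers to the schedulers of all ready compute rings into
 * @scheds; returns how many were collected. Ring init code could call
 * this once and hand contexts the finished array. */
static size_t collect_compute_scheds(struct ring *rings, size_t num_rings,
				     struct drm_gpu_scheduler **scheds)
{
	size_t n = 0, i;

	for (i = 0; i < num_rings; i++)
		if (rings[i].is_compute && rings[i].sched.ready)
			scheds[n++] = &rings[i].sched;
	return n;
}
```

Keeping this in the ring code means the scheduler arrays are built exactly once, next to the code that owns the rings, rather than being re-derived by every context.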
On 03.03.20 15:29, Nirmoy wrote:
> On 3/3/20 3:03 PM, Christian König wrote:
>> On 03.03.20 13:50, Nirmoy Das wrote:
>> [SNIP]
>>> struct amdgpu_mec {
>>> 	struct amdgpu_bo	*hpd_eop_obj;
>>> 	u64			hpd_eop_gpu_addr;
>>> @@ -280,8 +290,9 @@ struct amdgpu_gfx {
>>> 	uint32_t		num_gfx_sched;
>>> 	unsigned
We were changing compute ring priority while rings were being used
before every job submission which is not recommended. This patch
sets compute queue priority at mqd initialization for gfx8, gfx9 and
gfx10.
Policy: make queue 0 of each pipe a high-priority compute queue.
High/normal priority
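The stated policy is simple enough to express as a one-line predicate. The helper name below is a hypothetical stand-in, not the actual function the patch adds.

```c
/* Hypothetical helper expressing the stated policy: queue 0 of each
 * compute pipe is the high-priority queue, all others are normal. */
static int is_high_priority_compute_queue(unsigned int pipe, unsigned int queue)
{
	(void)pipe;	/* the policy is identical on every pipe */
	return queue == 0;
}
```

Because the decision depends only on the queue index, the priority can be fixed once when the queue's mqd is initialized, which is exactly what the patch does instead of reprogramming it around every job submission.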