On 04.08.2018 05:27, Dieter Nützel wrote:
On 03.08.2018 13:09, Christian König wrote:
On 03.08.2018 03:08, Dieter Nützel wrote:
Hello Christian, AMD guys,
this one _together_ with these series
[PATCH 1/7] drm/amdgpu: use new scheduler load balancing for VMs
On 03.08.2018 13:09, Christian König wrote:
On 03.08.2018 03:08, Dieter Nützel wrote:
Hello Christian, AMD guys,
this one _together_ with these series
[PATCH 1/7] drm/amdgpu: use new scheduler load balancing for VMs
https://lists.freedesktop.org/archives/amd-gfx/2018-August/024802.html
On Monday, 06.08.2018 at 14:57 +0200, Christian König wrote:
> On 03.08.2018 16:29, Lucas Stach wrote:
> > drm_sched_job_finish() is a work item scheduled for each finished job on
> > an unbound system workqueue. This means the workers can execute out of order
> > with regard to the real
On Fri, Aug 3, 2018 at 11:53 AM, Michel Dänzer wrote:
> From: Michel Dänzer
>
> Instead of the Xorg version. This should allow glamor backported from
> xserver >= 1.20 to work with older Xorg versions.
>
> Signed-off-by: Michel Dänzer
Reviewed-by: Alex Deucher
> ---
> src/amdgpu_glamor.c |
On 03.08.2018 17:26, Michel Dänzer wrote:
From: Michel Dänzer
The allocated size can be (at least?) as large as megabytes, and
there's no need for it to be physically contiguous.
This may avoid spurious failures to initialize or suspend the corresponding
block under memory pressure.
On 03.08.2018 16:29, Lucas Stach wrote:
drm_sched_job_finish() is a work item scheduled for each finished job on
an unbound system workqueue. This means the workers can execute out of order
with regard to the real hardware job completions.
If this happens queueing a timeout worker for the
From: Michel Dänzer
This is to avoid submitting more flips while we are waiting for pending
ones to complete.
Signed-off-by: Michel Dänzer
---
v2:
* Rebased on top of new patch 2.5
src/amdgpu_drm_queue.c | 41 +++--
src/amdgpu_drm_queue.h | 1 +
From: Michel Dänzer
Instead of processing DRM events directly from drmHandleEvent's
callbacks, there are three phases:
1. drmHandleEvent is called, and signalled events are re-queued to
_signalled lists from its callbacks.
2. Signalled page flip completion events are processed.
3. Signalled
From: Michel Dänzer
Instead of the Xorg version. This should allow glamor backported from
xserver >= 1.20 to work with older Xorg versions.
Signed-off-by: Michel Dänzer
---
src/amdgpu_glamor.c | 8
src/amdgpu_kms.c | 20
2 files changed, 16 insertions(+), 12
From: Michel Dänzer
The allocated size can be (at least?) as large as megabytes, and
there's no need for it to be physically contiguous.
This may avoid spurious failures to initialize or suspend the corresponding
block under memory pressure.
Bugzilla: https://bugs.freedesktop.org/107432
On Fri, Aug 3, 2018 at 4:41 AM, Rex Zhu wrote:
> Unlike ordinary Stoney, on Stoney Fanless the SMU firmware does not
> power the ACP tiles on/off, so the driver needs to power the ACP
> on/off itself.
>
> Partially revert
> 'commit f766dd23e5ce ("drm/amdgpu/acp: Powrgate acp via smu")'
>
> Signed-off-by: Rex Zhu
Ping ajax or daniels or anholt on IRC.
Alex
From: amd-gfx on behalf of Christian
König
Sent: Friday, August 3, 2018 9:54:42 AM
To: Michel Dänzer
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH libdrm 6/6] amdgpu: always add all BOs to lockup table
On
drm_sched_job_finish() is a work item scheduled for each finished job on
an unbound system workqueue. This means the workers can execute out of order
with regard to the real hardware job completions.
If this happens queueing a timeout worker for the first job on the ring
mirror list is wrong, as
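The invariant under discussion can be sketched in a few lines of userspace C: only the job at the head of the ring mirror list may own the timeout worker, so a finish worker running out of order for a later job must not arm it. The types and names below are simplified stand-ins, not the actual drm_sched code.

```c
#include <stddef.h>

/* Hypothetical, simplified stand-ins for drm_sched's job/ring types. */
struct job {
	struct job *next;        /* next job on the ring mirror list */
	int timeout_armed;       /* 1 if this job owns the timeout worker */
};

struct ring {
	struct job *mirror_head; /* first unfinished job on the ring */
};

/* Arm the timeout only for the current head of the mirror list; a
 * finish worker that runs out of order for a later job is a no-op. */
static void arm_timeout_if_head(struct ring *ring, struct job *job)
{
	if (ring->mirror_head == job)
		job->timeout_armed = 1;
}
```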
On 03.08.2018 16:09, Lucas Stach wrote:
Hi Christian,
On Friday, 03.08.2018 at 15:51 +0200, Christian König wrote:
Hi Lucas,
thanks a lot for taking care of that, but there is one thing you have
missed:
It is perfectly possible that the job is the last one on the list and
Hi Christian,
On Friday, 03.08.2018 at 15:51 +0200, Christian König wrote:
> Hi Lucas,
>
> thanks a lot for taking care of that, but there is one thing you have
> missed:
>
> It is perfectly possible that the job is the last one on the list and
> list_next_entry() doesn't test for that,
On 03.08.2018 15:52, Michel Dänzer wrote:
On 2018-08-03 01:34 PM, Christian König wrote:
This way we can always find a BO structure by its handle.
Signed-off-by: Christian König
In the shortlog, shouldn't it be "handle table" instead of "lockup table"?
With that fixed, the series is
On 2018-08-03 01:34 PM, Christian König wrote:
> This way we can always find a BO structure by its handle.
>
> Signed-off-by: Christian König
In the shortlog, shouldn't it be "handle table" instead of "lockup table"?
With that fixed, the series is
Reviewed-by: Michel Dänzer
Thanks for doing
Hi Lucas,
thanks a lot for taking care of that, but there is one thing you have
missed:
It is perfectly possible that the job is the last one on the list and
list_next_entry() doesn't test for that, i.e. it never returns NULL.
BTW: There are also quite a lot of other things we could
drm_sched_job_finish() is a work item scheduled for each finished job on
an unbound system workqueue. This means the workers can execute out of order
with regard to the real hardware job completions.
If this happens queueing a timeout worker for the first job on the ring
mirror list is wrong, as
Ah... you are correct. We will reschedule on the first job push. I didn't
take that into account. Let's drop this patch then.
Thanks,
Nayan
On Fri, Aug 3, 2018, 4:12 PM Christian König <
ckoenig.leichtzumer...@gmail.com> wrote:
> On 03.08.2018 09:06, Nayan Deshmukh wrote:
> > Instead of
Instead of the hash use the handle table.
Signed-off-by: Christian König
---
amdgpu/amdgpu_bo.c | 26 +-
amdgpu/amdgpu_device.c | 15 +--
amdgpu/amdgpu_internal.h | 2 +-
3 files changed, 15 insertions(+), 28 deletions(-)
diff --git
We have so few devices that just walking a linked list is probably
faster.
Signed-off-by: Christian König
---
amdgpu/amdgpu_device.c | 49
amdgpu/amdgpu_internal.h | 1 +
2 files changed, 17 insertions(+), 33 deletions(-)
diff --git
This way we can always find a BO structure by its handle.
Signed-off-by: Christian König
---
amdgpu/amdgpu_bo.c | 14 --
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
index 02592377..422c7c99 100644
--- a/amdgpu/amdgpu_bo.c
Instead of the hash use the handle table.
Signed-off-by: Christian König
---
amdgpu/amdgpu_bo.c | 19 ++-
amdgpu/amdgpu_device.c | 3 +--
amdgpu/amdgpu_internal.h | 3 ++-
3 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/amdgpu/amdgpu_bo.c
Not used any more.
Signed-off-by: Christian König
---
amdgpu/Makefile.sources | 4 -
amdgpu/util_hash.c | 383 ---
amdgpu/util_hash.h | 103 -
amdgpu/util_hash_table.c | 270 -
The kernel handles are dense and the kernel always tries to use the
lowest free id. Use this to implement a more efficient handle table
by using a resizeable array instead of a hash.
v2: add handle_table_fini function, extra key checks,
fix typo in function name
Signed-off-by: Christian
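Since the kernel allocates GEM handles densely from the lowest free id, the table can simply be a flat array indexed by handle. A rough sketch of that idea follows; the names (handle_table, handle_table_insert, ...) are illustrative and do not claim to match the actual libdrm implementation.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Resizable array mapping dense uint32_t handles to pointers. */
struct handle_table {
	uint32_t max_key;  /* current capacity of the values array */
	void **values;     /* values[handle], NULL if the slot is unused */
};

static int handle_table_insert(struct handle_table *table, uint32_t key,
			       void *value)
{
	if (key >= table->max_key) {
		/* Grow in 512-entry chunks so repeated inserts stay cheap. */
		uint32_t alloc = (key + 512) & ~(uint32_t)511;
		void **values = realloc(table->values,
					alloc * sizeof(void *));

		if (!values)
			return -ENOMEM;
		/* Zero only the newly added slots. */
		memset(values + table->max_key, 0,
		       (alloc - table->max_key) * sizeof(void *));
		table->values = values;
		table->max_key = alloc;
	}
	table->values[key] = value;
	return 0;
}

static void *handle_table_lookup(struct handle_table *table, uint32_t key)
{
	return key < table->max_key ? table->values[key] : NULL;
}
```

Lookup becomes an O(1) array index with no hashing; the trade-off is wasted slots for sparse keys, which the kernel's lowest-free-id allocation policy avoids.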
On 03.08.2018 13:11, Huang Rui wrote:
Demangle amdgpu.h.
Signed-off-by: Huang Rui
Acked-by: Christian König for the entire series.
Thanks a lot for taking care of this,
Christian.
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 23 ---
Demangle amdgpu.h.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 24
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 25 +
2 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
Demangle amdgpu.h.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 6 --
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h | 7 +++
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
Demangle amdgpu.h.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 15 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 1 +
Demangle amdgpu.h.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index a500466..67c8738 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++
Demangle amdgpu.h.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 23 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 24
2 files changed, 24 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
On 03.08.2018 03:08, Dieter Nützel wrote:
Hello Christian, AMD guys,
this one _together_ with these series
[PATCH 1/7] drm/amdgpu: use new scheduler load balancing for VMs
https://lists.freedesktop.org/archives/amd-gfx/2018-August/024802.html
on top of
amd-staging-drm-next 53d5f1e4a6d9
On 03.08.2018 09:06, Nayan Deshmukh wrote:
Instead of assigning entity to the first scheduler in the list
assign it to the least loaded scheduler.
I thought about that as well, but then abandoned the idea.
The reason is that we are going to reassign the rq when the first job is
pushed to
Unlike ordinary Stoney, on Stoney Fanless the SMU firmware does not
power the ACP tiles on/off, so the driver needs to power the ACP
on/off itself.
Partially revert
'commit f766dd23e5ce ("drm/amdgpu/acp: Powrgate acp via smu")'
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 118
When inserting or looking up a handle in the table, we need to check
whether the handle is valid. Otherwise a lookup may find a non-existing
BO in the table.
Signed-off-by: Junwei Zhang
---
amdgpu/handle_table.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git
Instead of assigning entity to the first scheduler in the list
assign it to the least loaded scheduler.
Signed-off-by: Nayan Deshmukh
---
drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c
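The "least loaded" selection can be sketched as a linear scan over the scheduler list for the lowest queued-job count. The types below are simplified stand-ins, not the actual drm_sched_entity code.

```c
#include <stddef.h>

/* Hypothetical stand-in for a scheduler with a per-ring job counter. */
struct sched {
	unsigned int num_jobs; /* jobs currently queued on this scheduler */
};

/* Return the scheduler carrying the fewest queued jobs, or NULL if the
 * list is empty. Ties go to the earliest entry, so an idle system still
 * behaves like "assign to the first scheduler in the list". */
static struct sched *pick_least_loaded(struct sched **list,
				       unsigned int count)
{
	struct sched *best = NULL;

	for (unsigned int i = 0; i < count; i++)
		if (!best || list[i]->num_jobs < best->num_jobs)
			best = list[i];
	return best;
}
```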