For the series:
Reviewed-by: Likun Gao
Thanks for all your efforts; I will apply these patches to the pco topic branch.
Regards,
Likun
-----Original Message-----
From: James Zhu
Sent: Friday, August 10, 2018 12:32 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, James; Deucher, Alexander; Gao, Likun; Hu
On 8/10/2018 12:02 PM, Zhu, Rex wrote:
I am OK with the check when calling vce_v3_0_hw_fini.
But we may still need to call amdgpu_vce_suspend/resume.
Done in V2. Have moved the check such that both are executed.
Regards,
Shirish S
and I am not sure whether we need to do a ring test when resuming.
On Fri, Aug 10, 2018 at 01:44:28PM +0800, Junwei Zhang wrote:
> code cleanup for amdgpu ttm structures
>
> Signed-off-by: Junwei Zhang
Acked-by: Huang Rui
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 20
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 17
On 2018 Jul 27, Michel Dänzer wrote:
> From: Michel Dänzer
>
> We were only storing the FB provided by the client, but on CRTCs with
> TearFree enabled, we use a separate FB. This could cause
> drmmode_flip_handler to fail to clear drmmode_crtc->flip_pending, which
> could result in a hang when w
On Thu, Aug 09, 2018 at 03:16:23PM -0500, Alex Deucher wrote:
> Compare the current vrefresh in addition to the number of displays
> when determining whether or not the smu needs updates when changing
> modes. The SMU needs to be updated if the vbi timeout changes due
> to a different refresh rate.
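As an aside, here is a minimal, self-contained sketch of the kind of check being described (the structure and field names are assumptions for illustration, not the actual hwmgr code): an SMU update is requested when either the number of active displays or the vertical refresh rate has changed, since the vbi timeout programmed into the SMU is derived from the refresh rate.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified view of the display state the SMU cares about. */
struct display_timing {
	unsigned int num_displays;  /* number of active displays */
	unsigned int vrefresh;      /* current vertical refresh rate in Hz */
};

/* Request an SMU update when either value changed; a mode change at the same
 * display count can still require an update because the vbi timeout depends
 * on the refresh rate. */
static bool smu_update_required(const struct display_timing *cached,
				const struct display_timing *current)
{
	return cached->num_displays != current->num_displays ||
	       cached->vrefresh != current->vrefresh;
}

int main(void)
{
	struct display_timing cached  = { .num_displays = 1, .vrefresh = 60 };
	struct display_timing current = { .num_displays = 1, .vrefresh = 144 };

	printf("update required: %d\n", smu_update_required(&cached, &current));
	return 0;
}
```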
On Thu, Aug 09, 2018 at 01:37:09PM +0200, Christian König wrote:
> Add a helper to access the shared fences in a reservation object.
>
> Signed-off-by: Christian König
Reviewed-by: Huang Rui
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 7 ++-
> drivers/gpu/drm/amd/amdgpu/a
Quoting Christian König (2018-08-09 15:54:31)
> Am 09.08.2018 um 16:22 schrieb Daniel Vetter:
> > On Thu, Aug 9, 2018 at 3:58 PM, Christian König
> > wrote:
> >> Am 09.08.2018 um 15:38 schrieb Daniel Vetter:
> >>> On Thu, Aug 09, 2018 at 01:37:07PM +0200, Christian König wrote:
> >>> [SNIP]
> >> S
Am 10.08.2018 um 09:51 schrieb Chris Wilson:
Quoting Christian König (2018-08-09 15:54:31)
Am 09.08.2018 um 16:22 schrieb Daniel Vetter:
On Thu, Aug 9, 2018 at 3:58 PM, Christian König
wrote:
Am 09.08.2018 um 15:38 schrieb Daniel Vetter:
On Thu, Aug 09, 2018 at 01:37:07PM +0200, Christian Kö
On Thu, Aug 9, 2018 at 4:54 PM, Christian König
wrote:
> Am 09.08.2018 um 16:22 schrieb Daniel Vetter:
>>
>> On Thu, Aug 9, 2018 at 3:58 PM, Christian König
>> wrote:
>>>
>>> Am 09.08.2018 um 15:38 schrieb Daniel Vetter:
On Thu, Aug 09, 2018 at 01:37:07PM +0200, Christian König wrote:
>
On Fri, Aug 10, 2018 at 10:24 AM, Christian König
wrote:
> Am 10.08.2018 um 09:51 schrieb Chris Wilson:
>>
>> Quoting Christian König (2018-08-09 15:54:31)
>>>
>>> Am 09.08.2018 um 16:22 schrieb Daniel Vetter:
On Thu, Aug 9, 2018 at 3:58 PM, Christian König
wrote:
>
> Am 09
Am 10.08.2018 um 10:32 schrieb Daniel Vetter:
On Fri, Aug 10, 2018 at 10:24 AM, Christian König
wrote:
Am 10.08.2018 um 09:51 schrieb Chris Wilson:
Quoting Christian König (2018-08-09 15:54:31)
Am 09.08.2018 um 16:22 schrieb Daniel Vetter:
On Thu, Aug 9, 2018 at 3:58 PM, Christian König
wro
On Thu, Aug 09, 2018 at 08:27:10PM +0800, Koenig, Christian wrote:
> Am 09.08.2018 um 14:25 schrieb Huang Rui:
> > On Thu, Aug 09, 2018 at 03:18:55PM +0800, Koenig, Christian wrote:
> >> Am 09.08.2018 um 08:18 schrieb Huang Rui:
> >>> On Wed, Aug 08, 2018 at 06:47:49PM +0800, Christian König wrote:
Am 10.08.2018 um 10:29 schrieb Daniel Vetter:
[SNIP]
I'm only interested in the case of shared buffers. And for those you
_do_ pessimistically assume that all access must be implicitly synced.
Iirc amdgpu doesn't support EGL_ANDROID_native_fence_sync, so this
makes sense that you don't bother wit
On Fri, Aug 10, 2018 at 11:14 AM, Christian König
wrote:
> Am 10.08.2018 um 10:29 schrieb Daniel Vetter:
>>
>> [SNIP]
>> I'm only interested in the case of shared buffers. And for those you
>> _do_ pessimistically assume that all access must be implicitly synced.
>> Iirc amdgpu doesn't support EGL
On Fri, Aug 10, 2018 at 10:51 AM, Christian König
wrote:
> Am 10.08.2018 um 10:32 schrieb Daniel Vetter:
>>
>> On Fri, Aug 10, 2018 at 10:24 AM, Christian König
>> wrote:
>>>
>>> Am 10.08.2018 um 09:51 schrieb Chris Wilson:
Quoting Christian König (2018-08-09 15:54:31)
>
> Am 09
We accidentally left out the size of the amdgpu_bo_list struct. It
could lead to memory corruption on 32 bit systems. You'd have to
pick the absolute maximum and set "num_entries == 59652323" then size
would wrap to 16 bytes.
Fixes: 920990cb080a ("drm/amdgpu: allocate the bo_list array after the
Reviewed-by: Bas Nieuwenhuizen
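To make the overflow concrete, here is a standalone sketch of the arithmetic (the sizes below are illustrative stand-ins chosen to reproduce the wrap described above, not the real amdgpu struct sizes): a bounds check that only considers the entry array passes, but adding the forgotten header size wraps a 32-bit size_t.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sizes only; the real 32-bit struct sizes may differ. */
#define HEADER_SIZE 56u   /* stands in for sizeof(struct amdgpu_bo_list)       */
#define ENTRY_SIZE  72u   /* stands in for sizeof(struct amdgpu_bo_list_entry) */

int main(void)
{
	/* On a 32-bit kernel size_t is 32 bits wide, so model it with uint32_t. */
	uint32_t max_entries = UINT32_MAX / ENTRY_SIZE;   /* 59652323 */
	uint32_t array_size  = max_entries * ENTRY_SIZE;  /* no wrap yet */
	uint32_t total       = HEADER_SIZE + array_size;  /* wraps around to 16 */

	printf("max_entries=%u array_size=%u total=%u\n",
	       max_entries, array_size, total);

	/* A safe bound has to account for the header as well:
	 * num_entries <= (UINT32_MAX - HEADER_SIZE) / ENTRY_SIZE */
	return 0;
}
```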
On Fri, Aug 10, 2018 at 12:50 PM Dan Carpenter wrote:
>
> We accidentally left out the size of the amdgpu_bo_list struct. It
> could lead to memory corruption on 32 bit systems. You'd have to
> pick the absolute maximum and set "num_entries == 59652323" then size
On Fri, Aug 10, 2018 at 06:50:32PM +0800, Dan Carpenter wrote:
> We accidentally left out the size of the amdgpu_bo_list struct. It
> could lead to memory corruption on 32 bit systems. You'd have to
> pick the absolute maximum and set "num_entries == 59652323" then size
> would wrap to 16 bytes.
The idea and proposal originally come from Christian, and I am continuing the
work to deliver it.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. This causes a large
number of BOs to be moved to the end of the LRU and impacts performance
seriously.
From: Christian König
Add a bulk move position to store pointers to the first and last buffer object.
The list range in between will be bulk moved on the LRU list.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
---
include/drm/ttm/ttm_bo_driver.h | 28
1 file changed, 28 i
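For readers, a rough sketch of the data structure being described (names and layout here are assumptions loosely following the commit message, not the exact contents of ttm_bo_driver.h): a bulk move position just remembers the first and last buffer object of a contiguous LRU range, so the whole range can later be spliced to the tail in one step.

```c
/* Sketch only; field names and the set of memory domains are assumptions. */
struct ttm_buffer_object;

struct ttm_lru_bulk_move_pos {
	struct ttm_buffer_object *first;  /* first BO of the contiguous LRU range */
	struct ttm_buffer_object *last;   /* last BO of the contiguous LRU range  */
};

/* One position per LRU that gets bulk-moved, e.g. system memory (TT) and
 * VRAM; the real code may split these further, e.g. by BO priority. */
struct ttm_lru_bulk_move {
	struct ttm_lru_bulk_move_pos tt;
	struct ttm_lru_bulk_move_pos vram;
};
```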
From: Christian König
When moving a BO to the end of the LRU, remember the BO's position.
Make sure all moved BOs lie in between "first" and "last", so that they can be
bulk moved together.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 -
I am continuing the work on bulk moving based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. This causes a large
number of BOs to be moved to the end of the LRU and impacts performance
seriously.
This function allows us to bulk move a group of BOs to the tail of their LRU.
The positions of the group of BOs are stored in the (first, last) bulk_move_pos
structure.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
---
drivers/gpu/drm/ttm/ttm_bo.c | 52 +
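To show the list operation at the heart of this, here is a self-contained illustration with a generic doubly linked list (hypothetical helper names; this is not the TTM implementation): given the (first, last) positions recorded above, the whole closed range is unlinked and re-linked at the tail of the LRU in constant time, instead of moving each BO individually.

```c
#include <stdio.h>

/* Minimal circular doubly linked list, standing in for the kernel's list_head. */
struct node {
	struct node *prev, *next;
	int id;
};

static void list_init(struct node *head) { head->prev = head->next = head; }

static void list_add_tail(struct node *head, struct node *n)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* Move the closed range [first, last] to the tail of the list in O(1);
 * this is the essence of bulk moving a group of BOs on their LRU. */
static void bulk_move_tail(struct node *head, struct node *first,
			   struct node *last)
{
	/* Unlink the range from its current position. */
	first->prev->next = last->next;
	last->next->prev = first->prev;

	/* Splice it back in just before the head, i.e. at the tail. */
	head->prev->next = first;
	first->prev = head->prev;
	last->next = head;
	head->prev = last;
}

int main(void)
{
	struct node head, n[5];
	struct node *it;
	int i;

	list_init(&head);
	for (i = 0; i < 5; i++) {
		n[i].id = i;
		list_add_tail(&head, &n[i]);
	}

	/* Bulk move nodes 1..3 to the tail: 0 1 2 3 4 -> 0 4 1 2 3 */
	bulk_move_tail(&head, &n[1], &n[3]);

	for (it = head.next; it != &head; it = it->next)
		printf("%d ", it->id);
	printf("\n");
	return 0;
}
```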
The new bulk moving functionality is ready and the overhead of moving PD/PT BOs
to the LRU is fixed, so move them onto the LRU again.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
Why should it? Adding the handle is now no more than setting an array
entry.
I've tested with allocating 250k BOs of 4k size each and there wasn't
any measurable performance difference.
Christian.
Am 09.08.2018 um 18:56 schrieb Marek Olšák:
I don't think this is a good idea. Can you pleas
Am 10.08.2018 um 13:55 schrieb Huang Rui:
I am continuing the work on bulk moving based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. This causes a large
number of BOs to be moved to the end
Am 10.08.2018 um 07:05 schrieb Junwei Zhang:
the flink bo is used to export
Why should we do this? That makes no sense, this way we would create a
memory leak.
Christian.
Signed-off-by: Junwei Zhang
---
amdgpu/amdgpu_bo.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/amdgpu
Well NAK, that is intentionally kept local to the amdgpu_ttm.c file.
This way we can make sure that we don't accidentally leak the structure
somewhere else.
What you could do is to move it to the beginning of the file.
Christian.
Am 10.08.2018 um 07:44 schrieb Junwei Zhang:
code cleanup for
Am 10.08.2018 um 11:21 schrieb Daniel Vetter:
[SNIP]
Then don't track _any_ of the amdgpu internal fences in the reservation object:
- 1 reservation object that you hand to ttm, for use internally within amdgpu
- 1 reservation object that you attach to the dma-buf (or get from the
imported dma-bu
OK. Thanks.
Marek
On Fri, Aug 10, 2018 at 9:06 AM, Christian König
wrote:
> Why should it? Adding the handle is now not more than setting an array
> entry.
>
> I've tested with allocating 250k BOs of 4k size each and there wasn't any
> measurable performance differences.
>
> Christian.
>
>
> Am
Allow the user to disable AVFS via ppfeaturemask for debugging.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c
b/drivers/gpu/drm/amd/power
Allow the user to disable AVFS via ppfeaturemask for debugging.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
b/drivers/gpu/drm/amd/power
Add a ppfeaturemask flag to disable AVFS control.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/include/amd_shared.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h
b/drivers/gpu/drm/amd/include/amd_shared.h
index 265621d8945c..86b167ec9863 100
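For illustration, a small sketch of how such a feature-mask bit is typically defined and consulted (the flag name, bit position, and default below are assumptions, not the values this patch uses): the module-level ppfeaturemask is checked before enabling AVFS control, so clearing the bit disables it for debugging.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical bit in the ppfeaturemask; the real patch defines its own. */
#define PP_AVFS_MASK (1u << 11)

/* Stand-in for the amdgpu.ppfeaturemask module parameter, here with the
 * hypothetical AVFS bit cleared by the user. */
static unsigned int ppfeaturemask = 0xffffffffu & ~PP_AVFS_MASK;

/* Only enable AVFS control when its feature bit is still set. */
static bool avfs_enabled(void)
{
	return (ppfeaturemask & PP_AVFS_MASK) != 0;
}

int main(void)
{
	printf("AVFS %s\n",
	       avfs_enabled() ? "enabled" : "disabled via ppfeaturemask");
	return 0;
}
```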
On Fri, Aug 10, 2018 at 10:18:24PM +0800, Koenig, Christian wrote:
> Am 10.08.2018 um 13:55 schrieb Huang Rui:
> > I am continuing the work on bulk moving based on the proposal by Christian.
> >
> > Background:
> > The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
> > moves all o