> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Wednesday, May 03, 2017 11:33 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex
> Subject: [PATCH] drm/amd/powerplay: disable engine spread spectrum
> feature on Vega10.
>
Reviewed-by: Ken Wang
Reviewed-by: Xiangliang Yu
Thanks!
Xiangliang Yu
> -Original Message-
> From: Trigger Huang [mailto:trigger.hu...@amd.com]
> Sent: Thursday, May 04, 2017 10:40 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk ; Yu, Xiangliang
>
Reviewed-by: Monk Liu
-Original Message-
From: Trigger Huang [mailto:trigger.hu...@amd.com]
Sent: Thursday, May 4, 2017 10:40 AM
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Monk ; Yu, Xiangliang ;
Huang, Trigger
Some hw finish operations should not be applied in the SR-IOV case.
This works as a workaround to fix multi-VF reboot/shutdown issues.
Signed-off-by: Trigger Huang
---
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 6 ++
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 8 +++-
From: Charlene Liu
Signed-off-by: Charlene Liu
Acked-by: Harry Wentland
Reviewed-by: Charlene Liu
---
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 21 -
From: Eric Cook
-check for min/max range in freesync calculation and handle it accordingly
Signed-off-by: Eric
Acked-by: Harry Wentland
Reviewed-by: Anthony Koo
---
From: Tony Cheng
to avoid a null access in case res_ctx is used to access res_pool before it's
fully constructed;
also make it clear which functions have a dependency on resource_pool
Signed-off-by: Tony Cheng
Reviewed-by: Harry Wentland
From: Dmytro Laktyushkin
Signed-off-by: Dmytro Laktyushkin
Reviewed-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/dce/dce_opp.c| 10 +-
From: Hersen Wu
Signed-off-by: Hersen Wu
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 40 +--
From: Dmytro Laktyushkin
Signed-off-by: Dmytro Laktyushkin
Reviewed-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/core/dc.c | 26 --
1 file changed, 12 insertions(+), 14
From: Zeyu Fan
Signed-off-by: Zeyu Fan
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
From: Tony Cheng
Signed-off-by: Tony Cheng
Acked-by: Harry Wentland
Reviewed-by: Dmytro Laktyushkin
---
drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h | 7 +++
From: Zeyu Fan
Signed-off-by: Zeyu Fan
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +-
drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
2
From: Tony Cheng
Signed-off-by: Tony Cheng
Reviewed-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/dc_hw_types.h | 12 ++--
drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h | 12
2
From: Sylvia Tsai
- Refactor PSR to follow the correct module pattern
- fix eDP only working on sink index 0.
Signed-off-by: Sylvia Tsai
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
From: Dmytro Laktyushkin
Signed-off-by: Dmytro Laktyushkin
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/dce/dce_ipp.c | 12 +-
From: Amy Zhang
Signed-off-by: Amy Zhang
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git
From: Dmytro Laktyushkin
Signed-off-by: Dmytro Laktyushkin
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/dce/Makefile| 2 +-
From: Ding Wang
Signed-off-by: Ding Wang
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/core/dc.c | 41
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
From: Dmytro Laktyushkin
This change will cache cursor attributes and reprogram them
when enabling cursor after power gating if the attributes were not
yet reprogrammed
Signed-off-by: Dmytro Laktyushkin
Acked-by: Harry Wentland
Signed-off-by: Harry Wentland
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/core/dc.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git
From: Yongqiang Sun
Signed-off-by: Yongqiang Sun
Acked-by: Harry Wentland
Reviewed-by: Tony Cheng
---
drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c | 4 ++--
On 2017-05-03 11:02 AM, Daniel Vetter wrote:
On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote:
Hi Harry,
while this looks more and more like it could work something which would
really help would be to have a set of patches squashed together and rebased
on drm-next.
The
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote:
> since we don't need hdp flush/inval for KIQ anymore
>
> Change-Id: I8518f479afebb73c68ef922880f92dae53b665b9
> Signed-off-by: Monk Liu
Reviewed-by: Alex Deucher
> ---
>
On Wed, May 3, 2017 at 5:10 AM, Liu, Monk wrote:
> It's correct and has already been working on vega10/tonga for days;
> in fact the guilty context already works on my side
Need to use ARRAY_SIZE for the loops rather than open coding it.
Beyond that, if it works for sr-iov, it's
OK, will change later by another amend patch, already submitted
-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com]
Sent: Wednesday, May 3, 2017 11:20 PM
To: Christian König
Cc: Liu, Monk ; amd-gfx list
On Wed, May 3, 2017 at 5:05 AM, Christian König wrote:
> On 03.05.2017 at 05:48, Monk Liu wrote:
>>
>> if sriov gpu reset is invoked by job timeout, it is run
>> in a global work-queue which is very slow and better not call
>> msleep, otherwise it takes a long time to get
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote:
> Change-Id: Ie8672e0c9358d9542810ce05c822d9367249bbd7
> Signed-off-by: Monk Liu
Reviewed-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 4 ++--
> 1 file changed,
>Need to dig through the TTM code as well to find that, but it is something
>very basic of TTM so I'm pretty sure it should work as expected.
That's what makes me feel a little confused:
if a BO is destroyed, then how does the TTM system track its resv pointer? Without this
resv pointer, how does TTM wait
Hi Harry,
while this looks more and more like it could work something which would
really help would be to have a set of patches squashed together and
rebased on drm-next.
The dc-drm-next-atomic-wip looks like a start, but we need more
something like:
drm/amdgpu: add base DC components
Hi all,
Over the last few months we (mostly Andrey and myself) have taken and
addressed some of the feedback received from December's DC RFC. A lot of
our work so far centers around atomic. We were able to take a whole
bunch of the areas where we rolled our own solution and use DRM atomic
If the kref of a BO goes down to 0, the BO will be destroyed (amdgpu_bo_destroy). I
don't see what code prevents this destroy from being invoked if the resv of this BO
still has unsignaled fences; can you share this trick?
IIRC the destroy itself is not prevented, but TTM prevents reusing of
the memory
That should be the case as you said, but I didn't see the logic to assure it.
Can we guarantee the pde/pte and PRT/CSA are all alive (BOs, mappings) when
resubmitting the timed-out job (assuming this timed-out job can signal after the
resubmit)?
Yes, that's why we add all fences of each command submission to the
PD/PT BOs.
Regards,
Christian.
Am 03.05.2017 um 15:31 schrieb
Even if we release the ctx in the usual way,
can we guarantee the pde/pte and PRT/CSA are all alive (BOs, mappings) when
resubmitting the timed-out job (assuming this timed-out job can signal after the
resubmit)?
You know an app can submit a command, release all BOs, free_ctx, close the FD/VM,
and exit very
On Wed, May 3, 2017 at 8:41 AM, Nicolai Hähnle wrote:
> On 02.05.2017 21:50, Alex Deucher wrote:
>>
>> We already have this info: max_gs_threads. Drop the duplicate.
>
>
> max_gs_waves_per_vgt seems to be the better name for this number though.
> Threads is usually what we
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote:
> otherwise we'll lose the high 32 bits of the pte, which leads
> to an incorrect MTYPE for vega10.
>
> Change-Id: I1b0c7b8df14e340a36d4d2a72c6c03f469fdc29c
> Signed-off-by: Monk Liu
> Reviewed-by: Christian König
and the ironic thing is I want them to stay alive as well (especially the CSA and PTR)
Yes, and exactly that is the danger I was talking about. We messed up
the teardown order with that and try to access resources which are
already freed when the job is then scheduled.
I would rather say we should get
On 02.05.2017 at 22:04, SF Markus Elfring wrote:
From: Markus Elfring
Date: Tue, 2 May 2017 22:00:02 +0200
Three update suggestions were taken into account
from static source code analysis.
Markus Elfring (3):
Use seq_putc() in radeon_sa_bo_dump_debug_info()
On 02.05.2017 21:50, Alex Deucher wrote:
We already have this info: max_gs_threads. Drop the duplicate.
max_gs_waves_per_vgt seems to be the better name for this number though.
Threads is usually what we call an item, of which each wave has 64.
Cheers,
Nicolai
Signed-off-by: Alex
Since I take one more kref on the ctx when creating jobs,
amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr) here won't actually wait ... because
"amdgpu_ctx_do_release"
won't run (kref > 0 before all jobs have signaled).
That way amdgpu_driver_postclose_kms() can continue to go on,
So actually " UVD and
Apart from this topic, it makes me think job->vm isn't safe either: the vm could
be freed while it is being used in amdgpu_ib_schedule. Do you have any
thoughts on how to solve it?
Regards,
David Zhou
On 2017-05-03 17:18, Christian König wrote:
I'm afraid not: CSA is gone with the VM, and VM is gone after app
Hi,
> > R600+ supports bigendian framebuffer formats, so no byteswapping on
> > access is needed. Not sure whenever that includes 16bpp formats or
> > whenever this is limited to the 8 bit-per-color formats [...]
>
> It includes 16bpp. Looking at
>
Please give me some detail on the dma_fence approach, thanks!
I think the idea of using the fence_context is even better than using
the fence_status. It has already been present for a while, and filtering the
jobs/entities by it needs something like 10 lines of code.
The fence status is new and only
I'm afraid not: the CSA is gone with the VM, and the VM is gone after the app closes our
FD. I don't see that amdgpu_vm_fini() depends on the context being alive or not ...
See the teardown order in amdgpu_driver_postclose_kms():
amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr);
amdgpu_uvd_free_handles(adev, file_priv);
I thought of that in the first place as well: keep an ID, and parse the
ID of every unsignaled job to see if the job
belongs to something guilty. But keeping the ctx alive is simpler, so I chose
this way.
But I admit it is doable as well, and I want to compare this method and the
> 1) fence_context has a chance to incorrectly represent the context
behind it, because the number can be used up and will wrap around
from the beginning
No, fence_context is globally unique in the kernel.
Yeah, I would go with that as well.
You note the fence_context of the job which caused the
It's correct and has already been working on vega10/tonga for days;
in fact the guilty context already works on my side
BR Monk
-Original Message-
From: Christian König [mailto:deathsim...@vodafone.de]
Sent: Wednesday, May 03, 2017 5:02 PM
To: Liu, Monk ;
1,My idea is that userspace should rather gather the feedback during the next
command submission. This has the advantage that you don't need to keep
userspace alive till all jobs are done.
> No, we need to clean the hw ring (cherry-pick out guilty entities' job in all
> rings) after gpu
On 03.05.2017 at 05:48, Monk Liu wrote:
Change-Id: Ie8672e0c9358d9542810ce05c822d9367249bbd7
Signed-off-by: Monk Liu
Acked-by: christian.koe...@amd.com
---
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
On 03.05.2017 at 05:48, Monk Liu wrote:
if sriov gpu reset is invoked by job timeout, it is run
in a global work-queue which is very slow, so better not call
msleep, otherwise it takes a long time to get back the CPU.
so make below changes:
1: Change msleep 1 to mdelay 5
2: Ignore the ack fail from
On 03.05.2017 at 05:48, Monk Liu wrote:
since we don't need hdp flush/inval for KIQ anymore
Change-Id: I8518f479afebb73c68ef922880f92dae53b665b9
Signed-off-by: Monk Liu
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c |
On 03.05.2017 at 05:48, Monk Liu wrote:
1, this way we make those routines compatible with the sequence
requirement for both Tonga and Vega10
2, ignore PSP hw init when doing TDR, because for an SR-IOV device
the ucode won't get lost after VF FLR, so there is no need to invoke PSP
to redo the ucode loading
1, This is necessary, otherwise how can I access the entity pointer after a job
has timed out?
No, that isn't necessary.
The problem with your idea is that you want to actively push the
feedback/status from the job execution back to userspace when an error
(timeout) happens.
My idea is that userspace
On 02.05.2017 at 22:17, Alex Deucher wrote:
rather than hardcoding it.
Signed-off-by: Alex Deucher
Acked-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff
On 2017-05-03 14:02, Liu, Monk wrote:
You can add ctx as a field of job, but not take a reference to it; when you
try to use the ctx, just check whether ctx == NULL.
> that doesn't work at all... job->ctx will always be non-NULL after
it is initialized, you just refer to a wild pointer after the CTX is released
Reviewed-by: Xiangliang Yu
Thanks!
Xiangliang Yu
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, May 03, 2017 11:48 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk
On 05/03/2017 01:44 PM, Chunming Zhou wrote:
v2: delete unused comments.
Change-Id: If533576eb8a65bd019a3480d6fe2a64f23e3c944
Signed-off-by: Chunming Zhou
Reviewed-by: Monk Liu
Reviewed-by: Junwei Zhang
---
amdgpu/amdgpu.h|
You can add ctx as a field of job, but not take a reference to it; when you try to
use the ctx, just check whether ctx == NULL.
> that doesn't work at all... job->ctx will always be non-NULL after it is
> initialized, you just refer to a wild pointer after the CTX is released
Another stupid method:
Use