Reviewed-by: Luben Tuikov
On 2020-08-07 04:48, Liu ChengZhe wrote:
> Some registers are not accessible to virtual function setup, so
> skip their initialization when in VF-SRIOV mode.
>
> v2: move SRIOV VF check into specific functions;
> modify commit description and comment.
>
>
Hi Dave, Daniel,
Fixes for 5.9.
The following changes since commit dc100bc8fae59aafd2ea2e1a1a43ef1f65f8a8bc:
Merge tag 'drm-msm-next-2020-07-30' of https://gitlab.freedesktop.org/drm/msm
into drm-next (2020-08-05 08:05:31 +1000)
are available in the Git repository at:
On Tue, Aug 4, 2020 at 5:32 PM Bas Nieuwenhuizen
wrote:
> This exposes modifier support on GFX9+.
>
> Only modifiers that can be rendered on the current GPU are
> added. This is to reduce the number of modifiers exposed.
>
> The HW could expose more, but the best mechanism to decide
> what to
[AMD Official Use Only - Internal Distribution Only]
On Fri, Aug 7, 2020 at 5:32 PM Li, Dennis wrote:
>
> [AMD Public Use]
>
> On Fri, Aug 7, 2020 at 1:59 PM Li, Dennis wrote:
> >
> > [AMD Public Use]
> >
> > Hi, Daniel,
> > Thanks for your review, and I also understand your concern. I guess
[AMD Public Use]
On Fri, Aug 7, 2020 at 1:59 PM Li, Dennis wrote:
>
> [AMD Public Use]
>
> Hi, Daniel,
> Thanks for your review, and I also understand your concern. I guess you
> missed the description of this issue, so I paste it again below and
> explain why this issue happens.
>
>
[AMD Official Use Only - Internal Distribution Only]
Acked-by: Alex Deucher
From: Quan, Evan
Sent: Friday, August 7, 2020 5:30 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Kuehling, Felix
; Quan, Evan
Subject: [PATCH] drm/amd/powerplay:
[AMD Official Use Only - Internal Distribution Only]
Acked-by: Alex Deucher
From: Quan, Evan
Sent: Friday, August 7, 2020 5:31 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Quan, Evan
Subject: [PATCH] drm/amd/powerplay: correct UVD/VCE PG
Am 2020-08-07 um 8:33 a.m. schrieb Christian König:
> This is not allocated any more for SG BOs.
Can you point me at the relevant TTM changes that require this change?
We'd need to test that the SG BO is still working as expected. A
doorbell self-ring test or a GPU HDP flush test in KFDTest
Am 2020-08-07 um 4:25 a.m. schrieb Huang Rui:
> We still have a few iommu issues which need to be addressed, so force raven
> as "dgpu" path for the moment.
>
> Will enable IOMMUv2 since the issues are fixed.
Do you mean "_when_ the issues are fixed"?
The current iommuv2 troubles aside, I think this
On 2020-08-07 4:52 a.m., dan...@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
@@ -440,7 +431,7 @@ struct dm_crtc_state {
#define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
struct dm_atomic_state {
- struct
On 2020-08-07 4:30 a.m., dan...@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:38PM -0400, Nicholas Kazlauskas wrote:
[Why]
We're racing with userspace as the flags could potentially change
from when we acquired and validated them in commit_check.
Uh ... I didn't know these could change. I
On 2020-08-07 4:34 a.m., dan...@ffwll.ch wrote:
On Thu, Jul 30, 2020 at 04:36:40PM -0400, Nicholas Kazlauskas wrote:
[Why]
MEDIUM or FULL updates can require global validation or affect
bandwidth. By treating these all simply as surface updates we aren't
actually passing this through DC global
Hi everybody,
in amdgpu we have the following issue which I'm seeking advice on how to cleanly
handle it.
We have a bunch of trace points which are related to the VM subsystem and
executed in either a work item, kthread or foreign process context.
Now tracing the pid of the context which we are
Trace something useful instead of the pid of a kernel thread here.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index
This is not allocated any more for SG BOs.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
[AMD Public Use]
Hi, Daniel,
Thanks for your review, and I also understand your concern. I guess you
missed the description of this issue, so I paste it again below and
explain why this issue happens.
For example, in a XGMI system with 2 GPU devices whose device entity is
On Fri, Aug 7, 2020 at 11:34 AM Christian König
wrote:
>
> Am 07.08.20 um 11:23 schrieb Daniel Vetter:
> > On Fri, Aug 7, 2020 at 11:20 AM Daniel Vetter wrote:
> >> On Fri, Aug 7, 2020 at 11:08 AM Christian König
> >> wrote:
> >>> [SNIP]
> >>> What we should do instead is to make sure we
Am 07.08.20 um 11:23 schrieb Daniel Vetter:
On Fri, Aug 7, 2020 at 11:20 AM Daniel Vetter wrote:
On Fri, Aug 7, 2020 at 11:08 AM Christian König
wrote:
[SNIP]
What we should do instead is to make sure we have only a single lock for the
complete hive instead.
[Dennis Li] If we use a single
The UVD/VCE PG state is managed by UVD and VCE IP. It's error-prone to
assume the bootup state in SMU based on the dpm status.
Change-Id: Ib88298ab9812d7d242592bcd55eea140bef6696a
Signed-off-by: Evan Quan
---
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 6 --
1 file changed, 6
Correct the cached smu feature state on pp_features sysfs
setting.
Change-Id: Icc4c3ce764876a0ffdc86ad4c8a8b9c9f0ed0e97
Signed-off-by: Evan Quan
---
.../drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 38 +--
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git
On Fri, Aug 7, 2020 at 11:20 AM Daniel Vetter wrote:
>
> On Fri, Aug 7, 2020 at 11:08 AM Christian König
> wrote:
> >
> > [SNIP]
> > What we should do instead is to make sure we have only a single lock
> > for the complete hive instead.
> > [Dennis Li] If we use a single lock,
On Fri, Aug 7, 2020 at 11:08 AM Christian König
wrote:
>
> [SNIP]
> What we should do instead is to make sure we have only a single lock for
> the complete hive instead.
> [Dennis Li] If we use a single lock, users would have to wait for all
> devices to resume successfully, but
On 2020-08-06 2:56 p.m., Christian König wrote:
> We need to allocate that manually now.
>
> Signed-off-by: Christian König
> Fixes: 2ddef17678bc ("drm/ttm: make TT creation purely optional v3")
> ---
> .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 +-
>
[SNIP]
What we should do instead is to make sure we have only a single lock for the
complete hive instead.
[Dennis Li] If we use a single lock, users would have to wait for all devices
to resume successfully, but in fact their tasks are only running on device a. It
is not friendly to users.
Well
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
> @@ -440,7 +431,7 @@ struct dm_crtc_state {
> #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
>
> struct dm_atomic_state {
> - struct drm_private_state base;
> + struct drm_atomic_state base;
Some registers are not accessible to virtual function setup, so
skip their initialization when in VF-SRIOV mode.
v2: move SRIOV VF check into specific functions;
modify commit description and comment.
Signed-off-by: Liu ChengZhe
---
drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c | 19
On Thu, Jul 30, 2020 at 04:36:42PM -0400, Nicholas Kazlauskas wrote:
> [Why]
> DM atomic check was structured in a way that we required old DC state
> in order to dynamically add and remove planes and streams from the
> context to build the DC state context for validation.
>
> DRM private objects
On Thu, Jul 30, 2020 at 04:36:40PM -0400, Nicholas Kazlauskas wrote:
> [Why]
> MEDIUM or FULL updates can require global validation or affect
> bandwidth. By treating these all simply as surface updates we aren't
> actually passing this through DC global validation.
>
> [How]
> There's currently
On Thu, Jul 30, 2020 at 04:36:38PM -0400, Nicholas Kazlauskas wrote:
> [Why]
> We're racing with userspace as the flags could potentially change
> from when we acquired and validated them in commit_check.
Uh ... I didn't know these could change. I think my comments on Bas'
series are even more
On Thu, Jul 30, 2020 at 04:36:35PM -0400, Nicholas Kazlauskas wrote:
> Based on the analysis of the bug from [1] the best course of action seems
> to be swapping off of DRM private objects back to subclassing DRM atomic
> state instead.
>
> This patch series implements this change, but not yet
We still have a few iommu issues which need to be addressed, so force raven
as "dgpu" path for the moment.
Will enable IOMMUv2 since the issues are fixed.
Signed-off-by: Huang Rui
---
drivers/gpu/drm/amd/amdkfd/kfd_crat.c | 6 ++
drivers/gpu/drm/amd/amdkfd/kfd_device.c | 4 ++--
2 files
On Thu, Jul 30, 2020 at 04:36:36PM -0400, Nicholas Kazlauskas wrote:
> [Why]
> Store these in advance so we can reuse them later in commit_tail without
> having to reserve the fbo again.
>
> These will also be used for checking for tiling changes when deciding
> to reset the plane or not.
I've
[ 584.110304]
[ 584.110590] WARNING: possible recursive locking detected
[ 584.110876] 5.6.0-deli-v5.6-2848-g3f3109b0e75f #1 Tainted: G OE
[ 584.64]
[ 584.111456] kworker/38:1/553 is trying
[AMD Public Use]
> [AMD Public Use]
>
>> [SNIP]
I think it is a limitation of init_rwsem.
>>> And exactly that's wrong, this is intentional and perfectly correct.
>>>
>>> [Dennis Li] I couldn't understand. Why is it perfectly correct?
>>> For example, we define two rw_sem: a and b. If we
Am 2020-08-07 um 2:57 a.m. schrieb Christian König:
[snip]
> That's a really good argument, but I still hesitate to merge this
> patch. How severe is the lockdep splat?
I argued before that any lockdep splat is bad, because it disables
further lockdep checking and can hide other lockdep problems
Am 07.08.20 um 04:22 schrieb Li, Dennis:
[AMD Public Use]
[SNIP]
I think it is a limitation of init_rwsem.
And exactly that's wrong, this is intentional and perfectly correct.
[Dennis Li] I couldn't understand. Why is it perfectly correct?
For example, we define two rw_sem: a and b. If we