Hi Monk,
oh, I've misinterpreted Hans' response. It indeed sounds like that
could work.
We don't even need full preemption under VF; it would also make things
easier if we just had the same CSA handling for both.
Regards,
Christian.
On 19.02.20 at 06:04, Liu, Monk wrote:
Hi Hans
For
On 18.02.20 at 22:46, Luben Tuikov wrote:
On 2020-02-17 10:08 a.m., Christian König wrote:
On 17.02.20 at 15:44, Alex Deucher wrote:
On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote:
Add an AMDGPU_GEM_CREATE_MASK and use it to check
for valid/invalid GEM create flags coming in from
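The flag-validation idiom that a mask like this enables can be sketched as follows. The flag bits and the mask below are illustrative placeholders, not the actual AMDGPU_GEM_CREATE_* UAPI values:

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical flag bits; the real AMDGPU_GEM_CREATE_* values live in the UAPI header. */
#define GEM_CREATE_CPU_ACCESS    (1u << 0)
#define GEM_CREATE_NO_CPU_ACCESS (1u << 1)
#define GEM_CREATE_VRAM_CLEARED  (1u << 2)

/* Union of all valid creation flags. */
#define GEM_CREATE_MASK \
	(GEM_CREATE_CPU_ACCESS | GEM_CREATE_NO_CPU_ACCESS | GEM_CREATE_VRAM_CLEARED)

/* Reject any request that sets a bit outside the known-valid mask. */
static int validate_gem_create_flags(uint64_t flags)
{
	if (flags & ~(uint64_t)GEM_CREATE_MASK)
		return -EINVAL;
	return 0;
}
```

The point of the single mask is that new flags only need to be added in one place, instead of updating every call-site check.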
On 19.02.20 at 02:54, Pan, Xinhui wrote:
On February 19, 2020 at 07:10, Kuehling, Felix wrote:
Hi Xinhui,
Two suggestions inline. Looks good to me otherwise.
On 2020-02-17 10:36 p.m., xinhui pan wrote:
No need to trigger eviction as the memory mapping will not be used
anymore.
All pt/pd BOs share the same
On 2/18/20 8:06 PM, Daniel Vetter wrote:
On Tue, Feb 18, 2020 at 07:37:44PM +0100, Christian König wrote:
On 18.02.20 at 19:28, Thomas Zimmermann wrote:
Hi
On 18.02.20 at 19:23, Christian König wrote:
On 18.02.20 at 19:16, Thomas Zimmermann wrote:
Hi
On 18.02.20 at 18:13,
Use the existence of the workqueue itself to determine when to
enable HDCP features rather than sprinkling asic checks all over
the code. Also add a check for the existence of the hdcp
workqueue in the irq handling on the off chance we get an HPD
RX interrupt with the CP bit set. This avoids a
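The pattern of gating a feature on the existence of its workqueue pointer, rather than sprinkling per-ASIC checks, can be sketched roughly like this; the structure and function names are illustrative stand-ins, not the actual DC code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for the driver's device and HDCP workqueue objects. */
struct hdcp_workqueue { int dummy; };
struct device_ctx {
	struct hdcp_workqueue *hdcp_workqueue; /* NULL when HDCP is unsupported */
};

/* One NULL check replaces scattered per-ASIC conditionals. */
static bool hdcp_enabled(const struct device_ctx *dev)
{
	return dev->hdcp_workqueue != NULL;
}

/* IRQ path: bail out early if there is no workqueue to hand the event to. */
static int handle_hpd_rx_irq(struct device_ctx *dev, bool cp_bit_set)
{
	if (cp_bit_set && !hdcp_enabled(dev))
		return 0; /* spurious CP interrupt on HDCP-less hardware */
	/* ... dispatch the event to the HDCP workqueue here ... */
	return 1;
}
```

The design benefit is that support is decided once, at the point where the workqueue is (or is not) created, and every consumer just tests the pointer.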
Doesn't build even with conflict resolved:
[root@raven linux]# make
CALL scripts/checksyscalls.sh
CALL scripts/atomic/check-atomics.sh
DESCEND objtool
CHK include/generated/compile.h
CC [M] drivers/gpu/drm/amd/amdgpu/amdgpu_vm.o
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c: In
This doesn't apply on top of 7fd3b632e17e55c5ffd008f9f025754e7daa1b66
which is the tip of drm-next
Tom
On 2020-02-19 9:20 a.m., Christian König wrote:
Add update fences to the root PD while mapping BOs.
Otherwise PDs freed during the mapping won't wait for
updates to finish and can cause
I get this conflict on top of drm-next
++<<< HEAD
+ r = vm->update_funcs->prepare(, resv, sync_mode);
++===
+ if (flags & AMDGPU_PTE_VALID) {
+ struct amdgpu_bo *root = vm->root.base.bo;
+
+ if (!dma_fence_is_signaled(vm->last_direct))
+
[AMD Official Use Only - Internal Distribution Only]
Sure, I'll send v2 soon.
BR,
Xiaojie
From: StDenis, Tom
Sent: Wednesday, February 19, 2020 7:51 PM
To: Yuan, Xiaojie; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH umr] fix field names for
Yup, my bad. We also need to fix the streaming version (line 432 of
src/lib/umr_pm4_decode_opcodes.c). Would you like to incorporate this
into this patch? Otherwise I can do it separately.
Thanks,
Tom
On 2020-02-19 6:26 a.m., Xiaojie Yuan wrote:
field names for INDIRECT_BUFFER_CONST/CIK
No need to trigger eviction as the memory mapping will not be used
anymore.
All pt/pd BOs share the same resv, hence the same shared eviction fence.
Every time a page table is freed, the fence will be signaled and that causes
unexpected KFD evictions.
CC: Christian König
CC: Felix Kuehling
CC: Alex
I got some messages after a while:
[ 741.788564] Failed to send Message 8.
[ 746.671509] Failed to send Message 8.
[ 748.749673] Failed to send Message 2b.
[ 759.245414] Failed to send Message 7.
[ 763.216902] Failed to send Message 2a.
Are there any additional locks that should be held?
field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same.
fields like OFFLOAD_POLLING and VALID are defined in mec's
INDIRECT_BUFFER packet, so not applicable here.
v2: fix umr_pm4_decode_opcodes.c as well
Signed-off-by: Xiaojie Yuan
---
src/lib/ring_decode.c| 23
Add update fences to the root PD while mapping BOs.
Otherwise PDs freed during the mapping won't wait for
updates to finish and can cause corruptions.
v2: rebased on drm-misc-next
Signed-off-by: Christian König
Fixes: 90b69cdc5f159 ("drm/amdgpu: stop adding VM updates fences to the resv obj")
---
On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis wrote:
>
> Signed-off-by: Tom St Denis
Please add a patch description. With that fixed:
Reviewed-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git
Ignore that; my brain wasn't engaged in the process. It's clear where
you wanted the prepare call.
Tom
On 2020-02-19 10:06 a.m., Tom St Denis wrote:
I get this conflict on top of drm-next
++<<< HEAD
+ r = vm->update_funcs->prepare(, resv, sync_mode);
++===
+ if (flags &
Thanks.
Reviewed-by: Bhawanpreet Lakha
On 2020-02-19 9:24 a.m., Alex Deucher wrote:
Use the existence of the workqueue itself to determine when to
enable HDCP features rather than sprinkling asic checks all over
the code. Also add a check for the existence of the hdcp
workqueue in the irq
Add update fences to the root PD while mapping BOs.
Otherwise PDs freed during the mapping won't wait for
updates to finish and can cause corruptions.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
Well what branch are you trying to merge that into?
The conflict resolution should be simple, just keep the
vm->update_funcs->prepare(...) line as it is in your branch.
When you get those errors, something went wrong in your rebase.
Christian.
On 19.02.20 at 16:14, Tom St Denis wrote:
The tip of origin/amd-staging-drm-next for me is:
commit 7fd3b632e17e55c5ffd008f9f025754e7daa1b66
Refs: {origin/amd-staging-drm-next}, v5.4-rc7-2847-g7fd3b632e17e
Author: Monk Liu
AuthorDate: Thu Feb 6 23:55:58 2020 +0800
Commit: Monk Liu
CommitDate: Wed Feb 19 13:33:02 2020 +0800
Hmm it doesn't apply on top of the tip of master. I'll just manually
apply it.
Tom
On 2020-02-19 6:56 a.m., Xiaojie Yuan wrote:
field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same.
fields like OFFLOAD_POLLING and VALID are defined in mec's
INDIRECT_BUFFER packet, so not
Thanks Tom.
BR,
Xiaojie
From: StDenis, Tom
Sent: Wednesday, February 19, 2020 8:01 PM
To: Yuan, Xiaojie; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH umr v2] fix field names for
Store ttm bo->offset in struct nouveau_bo instead.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/nouveau/dispnv04/crtc.c | 6 +++---
drivers/gpu/drm/nouveau/dispnv04/disp.c | 2 +-
drivers/gpu/drm/nouveau/dispnv04/overlay.c | 6 +++---
drivers/gpu/drm/nouveau/dispnv50/base507c.c |
Calculate GPU offset in radeon_bo_gpu_offset without depending on
bo->offset
Signed-off-by: Nirmoy Das
Reviewed-and-tested-by: Christian König
---
drivers/gpu/drm/radeon/radeon.h| 1 +
drivers/gpu/drm/radeon/radeon_object.h | 16 +++-
drivers/gpu/drm/radeon/radeon_ttm.c
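The usual way a driver computes a BO's GPU address without a generic bo->offset is to add the placement's page offset to a per-domain aperture base. A rough sketch, where the domain names and aperture addresses are made-up values rather than radeon's actual layout:

```c
#include <stdint.h>

#define PAGE_SHIFT 12

enum mem_domain { DOMAIN_VRAM, DOMAIN_GTT, DOMAIN_COUNT };

/* Illustrative per-domain GPU aperture start addresses. */
static const uint64_t aperture_base[DOMAIN_COUNT] = {
	[DOMAIN_VRAM] = 0x0000000000ULL,
	[DOMAIN_GTT]  = 0x8000000000ULL,
};

struct bo_placement {
	enum mem_domain domain;
	uint64_t start_page; /* first page of the BO inside its domain */
};

/* GPU offset = domain aperture base + byte offset inside the domain. */
static uint64_t bo_gpu_offset(const struct bo_placement *p)
{
	return aperture_base[p->domain] + (p->start_page << PAGE_SHIFT);
}
```

Because the aperture bases are driver knowledge, keeping this computation in the driver (instead of a cached offset in TTM) is what lets TTM drop bo->offset entirely.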
This patch removes slot->gpu_offset, which is not required as the
VRAM and PRIV slots are in separate PCI BARs.
This patch also removes the unused qxl_bo_gpu_offset().
Signed-off-by: Nirmoy Das
Acked-by: Christian König
Acked-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_drv.h| 6 ++
Calculate GEM VRAM bo's offset within vram-helper without depending on
bo->offset
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/drm_gem_vram_helper.c | 17 -
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c
GPU address handling is device-specific and should be handled by its device
driver.
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/ttm/ttm_bo.c| 7 ---
include/drm/ttm/ttm_bo_api.h| 2 --
include/drm/ttm/ttm_bo_driver.h | 1 -
3 files changed, 10 deletions(-)
diff --git
Calculate GPU offset within vmwgfx driver itself without depending on
bo->offset
Signed-off-by: Nirmoy Das
Acked-by: Christian König
---
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 4 ++--
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c| 2 +-
drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 2 +-
The GPU address should belong to the driver, not to memory management.
This patch moves the TTM bo.offset and gpu_offset calculation into the amdgpu driver.
Signed-off-by: Nirmoy Das
Acked-by: Huang Rui
Reviewed-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 22 ++--
Switch over to GEM VRAM's implementation to retrieve bo->offset
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm/bochs/bochs_kms.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
b/drivers/gpu/drm/bochs/bochs_kms.c
index
With this patch series I am trying to remove the GPU address dependency in
TTM and move GPU address calculation to the individual DRM drivers.
I tested this patch series on qxl, bochs and amdgpu. Christian tested it on
radeon HW.
It would be nice if someone could test this for nouveau and vmwgfx.
v2:
* set
Well it should apply on top of amd-staging-drm-next. But I haven't
fetched that today yet.
Give me a minute to rebase.
Christian.
On 19.02.20 at 15:27, Tom St Denis wrote:
This doesn't apply on top of 7fd3b632e17e55c5ffd008f9f025754e7daa1b66
which is the tip of drm-next
Tom
On
Signed-off-by: Tom St Denis
---
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 7379910790c9..66f763300c96 100644
---
On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote:
>
> Yes, I'd go with absolute units when it comes to memory, because it's
> not a renewable resource like CPU and IO, and so we do have cliff
> behavior around the edge where you transition from ok to not-enough.
>
> memory.low is a bit in
For amd-staging-drm-next you need the first version of the patch.
For drm-misc-next or drm-next you need the second version of the patch.
We probably need to merge the patch through drm-misc-next anyway, since
that is also where the patch which causes the problems is.
Christian.
On 19.02.20 at 16:47
On 2020-02-19 7:46, xinhui pan wrote:
No need to trigger eviction as the memory mapping will not be used
anymore.
All pt/pd BOs share the same resv, hence the same shared eviction fence.
Every time a page table is freed, the fence will be signaled and that causes
unexpected KFD evictions.
CC:
On 2020-02-19 9:44 a.m., Christian König wrote:
> Well it should apply on top of amd-staging-drm-next. But I haven't
> fetched that today yet.
>
> Give me a minute to rebase.
This patch seems to have fixed the regression we saw yesterday.
It applies to amd-staging-drm-next with a small jitter:
This adds a message lock to the smu_send_smc_msg* implementations to
protect against concurrent access to the MMIO registers used to
communicate with the SMU.
---
drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git
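In user-space terms, the locking this patch describes boils down to holding one mutex across the entire register sequence of a message send. A minimal pthread sketch with fake registers standing in for the real SMU MMIO interface (all names and the echo "protocol" are placeholders):

```c
#include <pthread.h>
#include <stdint.h>

/* Fake message/response registers standing in for the real MMIO interface. */
static uint32_t reg_msg, reg_resp;
static pthread_mutex_t msg_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pretend the firmware echoes the message id back as the response. */
static void hw_process(void)
{
	reg_resp = reg_msg;
}

/* Serialize the whole send sequence so concurrent callers cannot interleave. */
static uint32_t send_msg(uint32_t msg)
{
	uint32_t resp;

	pthread_mutex_lock(&msg_lock);
	reg_msg = msg;
	hw_process();	/* in real code: ring the doorbell, poll for completion */
	resp = reg_resp;
	pthread_mutex_unlock(&msg_lock);
	return resp;
}
```

Without the lock, two threads can each write reg_msg before either reads reg_resp, which is exactly the kind of interleaving that produces "Failed to send Message" errors under concurrent access.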
Hey Alex,
I took a crack at the implementation I was talking about here, where we
can protect the argument register reads as well. I only
transitioned the actual implementation for hardware that I have funds
for/access to, and left an '_unsafe' path for the other implementations
since I
Move the responsibility for reading argument registers into the
smu_send_smc_msg* implementations, so that adding a message-sending lock
to protect the SMU registers will result in the lock still being held
when the argument is read.
For code compatibility on hardware I don't have the funds to
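Moving the argument read inside the send helper means the lock is still held when the response argument register is sampled. A sketch with placeholder registers and a fake firmware step (all names hypothetical, not the real smu_send_smc_msg signatures):

```c
#include <pthread.h>
#include <stdint.h>

/* Placeholder message and argument registers. */
static uint32_t reg_msg, reg_arg;
static pthread_mutex_t smu_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fake firmware: writes the message's "result" into the argument register. */
static void fw_process(void)
{
	reg_arg = reg_msg * 2u;
}

/*
 * Send a message and read back the argument register before dropping the
 * lock, so another sender cannot clobber reg_arg in between.
 */
static int send_msg_with_resp(uint32_t msg, uint32_t *resp)
{
	pthread_mutex_lock(&smu_lock);
	reg_msg = msg;
	fw_process();
	*resp = reg_arg;	/* read while still holding the lock */
	pthread_mutex_unlock(&smu_lock);
	return 0;
}
```

If instead the caller read the argument register after the send helper returned, the lock would already be dropped and a concurrent message could have overwritten the register.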
On 2020-02-19 3:20 a.m., Christian König wrote:
> On 18.02.20 at 22:46, Luben Tuikov wrote:
>> On 2020-02-17 10:08 a.m., Christian König wrote:
>>> On 17.02.20 at 15:44, Alex Deucher wrote:
On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote:
> Add a AMDGPU_GEM_CREATE_MASK and use it to
Hi Dave, Daniel,
Fixes for 5.6.
The following changes since commit 6f4134b30b6ee33e2fd4d602099e6c5e60d0351a:
Merge tag 'drm-intel-next-fixes-2020-02-13' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes (2020-02-14 13:04:46
+1000)
are available in the Git repository at:
On Mon, Dec 16, 2019 at 12:18 PM Alex Deucher wrote:
>
> From: Andrey Grodzovsky
>
> CRTC in DPMS state off calls for low power state entry.
> Support both atomic mode setting and pre-atomic mode setting.
>
> v2: move comment
>
> Signed-off-by: Andrey Grodzovsky
> Signed-off-by: Alex Deucher
On Wed, Feb 19, 2020 at 11:28:48AM -0500, Kenny Ho wrote:
> On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote:
> >
> > Yes, I'd go with absolute units when it comes to memory, because it's
> > not a renewable resource like CPU and IO, and so we do have cliff
> > behavior around the edge
Make the IP block base output debug only.
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index
Christian is right here; that will cause many problems for simply using the VMID in
the kernel.
We already have a paired interface for RGP; I think you can use it instead of
involving an additional kernel change:
amdgpu_vm_reserve_vmid/
I was able to bisect it to this commit:
$git bisect good
6643ba1ff05d252e451bada9443759edb95eab3b is the first bad commit
commit 6643ba1ff05d252e451bada9443759edb95eab3b
Author: Luben Tuikov
Date: Mon Feb 10 18:16:45 2020 -0500
drm/amdgpu: Move to a per-IB secure flag (TMZ)
Move
New developments:
Running "amdgpu_test -s 1 -t 4" causes timeouts and kernel oopses. Attached
is the system log, tested on Navi 10:
[ 144.484547] [drm:amdgpu_dm_atomic_commit_tail [amdgpu]] *ERROR* Waiting for
fences timed out!
[ 149.604641] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0
Thanks. I went through that bug report. And it seems weird that the table lock works
but the message lock does not, if this was really caused by some race
condition.
Considering the issue was found on a multi-monitor setup, maybe mclk DPM is
related.
Is it possible to try with a single monitor only?
Well, offhand this patch looks like a clear NAK to me.
Returning without raising an error is certainly the wrong thing to do
here because we just drop the necessary page table updates.
How does entity->rq end up as NULL in the first place?
Regards,
Christian.
On 19.02.20 at 07:26
On 19.02.20 at 11:15, Jacob He wrote:
[WHY]
When SPM trace is enabled, SPM_VMID should be updated with the current
vmid.
[HOW]
Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
which job should update SPM_VMID.
Right before a job is submitted to GPU, set the SPM_VMID accordingly.
field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same.
fields like OFFLOAD_POLLING and VALID are defined in mec's
INDIRECT_BUFFER packet, so not applicable here.
Signed-off-by: Xiaojie Yuan
---
src/lib/ring_decode.c | 23 +++
1 file changed, 7 insertions(+),
On Tue, Feb 18, 2020 at 04:46:21PM -0500, Luben Tuikov wrote:
> On 2020-02-17 10:08 a.m., Christian König wrote:
> > On 17.02.20 at 15:44, Alex Deucher wrote:
> >> On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote:
> >>> Add a AMDGPU_GEM_CREATE_MASK and use it to check
> >>> for valid/invalid
> + if (!entity->rq)
> + return 0;
> +
Yes, supposedly we shouldn't get the 'entity->rq == NULL' case; that looks like the
true bug
-----Original Message-----
From: amd-gfx on behalf of Christian König
Sent: February 19, 2020 18:50
To: Zhang, Hawking; Li, Dennis;
amd-gfx@lists.freedesktop.org; Deucher,
On Fri, Feb 14, 2020 at 03:28:40PM -0500, Kenny Ho wrote:
> On Fri, Feb 14, 2020 at 2:17 PM Tejun Heo wrote:
> > Also, a rather trivial high level question. Is drm a good controller
> > name given that other controller names are like cpu, memory, io?
>
> There was a discussion about naming early
On Fri, Feb 14, 2020 at 02:17:54PM -0500, Tejun Heo wrote:
> Hello, Kenny, Daniel.
>
> (cc'ing Johannes)
>
> On Fri, Feb 14, 2020 at 01:51:32PM -0500, Kenny Ho wrote:
> > On Fri, Feb 14, 2020 at 1:34 PM Daniel Vetter wrote:
> > >
> > > I think guidance from Tejun in previos discussions was
[WHY]
When SPM trace is enabled, SPM_VMID should be updated with the current
vmid.
[HOW]
Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
which job should update SPM_VMID.
Right before a job is submitted to GPU, set the SPM_VMID accordingly.
[Limitation]
Running more than one SPM