Re: Re: Re: [PATCH 3/3] drm/amdgpu: fix colliding of preemption

2020-02-19 Thread Christian König
Hi Monk, oh, I've misinterpreted Hans' response. It indeed sounds like that could work. We don't even need full preemption under VF; it would also make things easier if we just have the same CSA handling for both. Regards, Christian. On 19.02.20 at 06:04, Liu, Monk wrote: Hi Hans For

Re: [PATCH] drm/amdgpu: Add a GEM_CREATE mask and bugfix

2020-02-19 Thread Christian König
On 18.02.20 at 22:46, Luben Tuikov wrote: On 2020-02-17 10:08 a.m., Christian König wrote: On 17.02.20 at 15:44, Alex Deucher wrote: On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote: Add an AMDGPU_GEM_CREATE_MASK and use it to check for valid/invalid GEM create flags coming in from

Re: [RFC PATCH v5] drm/amdgpu: Remove kfd eviction fence before release bo

2020-02-19 Thread Christian König
On 19.02.20 at 02:54, Pan, Xinhui wrote: On Feb 19, 2020 at 07:10, Kuehling, Felix wrote: Hi Xinhui, Two suggestions inline. Looks good to me otherwise. On 2020-02-17 10:36 p.m., xinhui pan wrote: No need to trigger eviction as the memory mapping will not be used anymore. All pt/pd bos share same

Re: [Nouveau] [PATCH 8/8] drm/ttm: do not keep GPU dependent addresses

2020-02-19 Thread Nirmoy
On 2/18/20 8:06 PM, Daniel Vetter wrote: On Tue, Feb 18, 2020 at 07:37:44PM +0100, Christian König wrote: On 18.02.20 at 19:28, Thomas Zimmermann wrote: Hi On 18.02.20 at 19:23, Christian König wrote: On 18.02.20 at 19:16, Thomas Zimmermann wrote: Hi On 18.02.20 at 18:13,

[PATCH] drm/amdgpu/display: clean up hdcp workqueue handling

2020-02-19 Thread Alex Deucher
Use the existence of the workqueue itself to determine when to enable HDCP features rather than sprinkling asic checks all over the code. Also add a check for the existence of the hdcp workqueue in the irq handling on the off chance we get an HPD RX interrupt with the CP bit set. This avoids a

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Tom St Denis
Doesn't build even with conflict resolved: [root@raven linux]# make   CALL    scripts/checksyscalls.sh   CALL    scripts/atomic/check-atomics.sh   DESCEND  objtool   CHK include/generated/compile.h   CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.o drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c: In

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Tom St Denis
This doesn't apply on top of 7fd3b632e17e55c5ffd008f9f025754e7daa1b66 which is the tip of drm-next Tom On 2020-02-19 9:20 a.m., Christian König wrote: Add update fences to the root PD while mapping BOs. Otherwise PDs freed during the mapping won't wait for updates to finish and can cause

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Tom St Denis
I get this conflict on top of drm-next ++<<<<<<< HEAD  +  r = vm->update_funcs->prepare(, resv, sync_mode); ++======= +   if (flags & AMDGPU_PTE_VALID) { +   struct amdgpu_bo *root = vm->root.base.bo; + +   if (!dma_fence_is_signaled(vm->last_direct)) +

Re: [PATCH umr] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Yuan, Xiaojie
[AMD Official Use Only - Internal Distribution Only] Sure, I'll send v2 soon. BR, Xiaojie From: StDenis, Tom Sent: Wednesday, February 19, 2020 7:51 PM To: Yuan, Xiaojie; amd-gfx@lists.freedesktop.org Subject: Re: [PATCH umr] fix field names for

Re: [PATCH umr] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Tom St Denis
Yup, my bad.  We also need to fix the streaming version (line 432 of src/lib/umr_pm4_decode_opcodes.c).  Would you like to incorporate this into this patch?  Otherwise I can do it separately. Thanks, Tom On 2020-02-19 6:26 a.m., Xiaojie Yuan wrote: field names for INDIRECT_BUFFER_CONST/CIK

[RFC PATCH v6] drm/amdgpu: Remove kfd eviction fence before release bo

2020-02-19 Thread xinhui pan
No need to trigger eviction as the memory mapping will not be used anymore. All pt/pd bos share same resv, hence the same shared eviction fence. Every time a page table is freed, the fence will be signaled and that causes unexpected kfd evictions. CC: Christian König CC: Felix Kuehling CC: Alex

Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-19 Thread Tom St Denis
I got some messages after a while: [  741.788564] Failed to send Message 8. [  746.671509] Failed to send Message 8. [  748.749673] Failed to send Message 2b. [  759.245414] Failed to send Message 7. [  763.216902] Failed to send Message 2a. Are there any additional locks that should be held? 

[PATCH umr v2] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Xiaojie Yuan
field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same. fields like OFFLOAD_POLLING and VALID are defined in mec's INDIRECT_BUFFER packet, so not applicable here. v2: fix umr_pm4_decode_opcodes.c as well Signed-off-by: Xiaojie Yuan --- src/lib/ring_decode.c| 23

[PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Christian König
Add update fences to the root PD while mapping BOs. Otherwise PDs freed during the mapping won't wait for updates to finish and can cause corruptions. v2: rebased on drm-misc-next Signed-off-by: Christian König Fixes: 90b69cdc5f159 drm/amdgpu: stop adding VM updates fences to the resv obj ---

Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-19 Thread Alex Deucher
On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis wrote: > > Signed-off-by: Tom St Denis Please add a patch description. With that fixed: Reviewed-by: Alex Deucher > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++ > 1 file changed, 3 insertions(+) > > diff --git

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Tom St Denis
Ignore that, my brain wasn't engaged in the process.  It's clear where you wanted the prepare call. Tom On 2020-02-19 10:06 a.m., Tom St Denis wrote: I get this conflict on top of drm-next ++<<<<<<< HEAD  +  r = vm->update_funcs->prepare(, resv, sync_mode); ++======= +   if (flags &

Re: [PATCH] drm/amdgpu/display: clean up hdcp workqueue handling

2020-02-19 Thread Bhawanpreet Lakha
Thanks. Reviewed-by: Bhawanpreet Lakha On 2020-02-19 9:24 a.m., Alex Deucher wrote: Use the existence of the workqueue itself to determine when to enable HDCP features rather than sprinkling asic checks all over the code. Also add a check for the existence of the hdcp workqueue in the irq

[PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Christian König
Add update fences to the root PD while mapping BOs. Otherwise PDs freed during the mapping won't wait for updates to finish and can cause corruptions. Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-)

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Christian König
Well, what branch are you trying to merge that into? The conflict resolution should be simple, just keep the vm->update_funcs->prepare(...) line as it is in your branch. When you get those errors then something went wrong in your rebase. Christian. On 19.02.20 at 16:14, Tom St Denis wrote:

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Tom St Denis
The tip of origin/amd-staging-drm-next for me is: commit 7fd3b632e17e55c5ffd008f9f025754e7daa1b66 Refs: {origin/amd-staging-drm-next}, v5.4-rc7-2847-g7fd3b632e17e Author: Monk Liu AuthorDate: Thu Feb 6 23:55:58 2020 +0800 Commit: Monk Liu CommitDate: Wed Feb 19 13:33:02 2020 +0800    

Re: [PATCH umr v2] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Tom St Denis
Hmm it doesn't apply on top of the tip of master.  I'll just manually apply it. Tom On 2020-02-19 6:56 a.m., Xiaojie Yuan wrote: field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same. fields like OFFLOAD_POLLING and VALID are defined in mec's INDIRECT_BUFFER packet, so not

Re: [PATCH umr v2] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Yuan, Xiaojie
[AMD Official Use Only - Internal Distribution Only] Thanks Tom. BR, Xiaojie From: StDenis, Tom Sent: Wednesday, February 19, 2020 8:01 PM To: Yuan, Xiaojie; amd-gfx@lists.freedesktop.org Subject: Re: [PATCH umr v2] fix field names for

[PATCH 4/8] drm/nouveau: don't use ttm bo->offset v3

2020-02-19 Thread Nirmoy Das
Store ttm bo->offset in struct nouveau_bo instead. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/nouveau/dispnv04/crtc.c | 6 +++--- drivers/gpu/drm/nouveau/dispnv04/disp.c | 2 +- drivers/gpu/drm/nouveau/dispnv04/overlay.c | 6 +++--- drivers/gpu/drm/nouveau/dispnv50/base507c.c |

[PATCH 2/8] drm/radeon: don't use ttm bo->offset

2020-02-19 Thread Nirmoy Das
Calculate GPU offset in radeon_bo_gpu_offset without depending on bo->offset Signed-off-by: Nirmoy Das Reviewed-and-tested-by: Christian König --- drivers/gpu/drm/radeon/radeon.h| 1 + drivers/gpu/drm/radeon/radeon_object.h | 16 +++- drivers/gpu/drm/radeon/radeon_ttm.c

[PATCH 5/8] drm/qxl: don't use ttm bo->offset

2020-02-19 Thread Nirmoy Das
This patch removes slot->gpu_offset, which is not required as the VRAM and PRIV slots are in separate PCI bars. This patch also removes the unused qxl_bo_gpu_offset() Signed-off-by: Nirmoy Das Acked-by: Christian König Acked-by: Gerd Hoffmann --- drivers/gpu/drm/qxl/qxl_drv.h| 6 ++

[PATCH 6/8] drm/vram-helper: don't use ttm bo->offset v2

2020-02-19 Thread Nirmoy Das
Calculate GEM VRAM bo's offset within vram-helper without depending on bo->offset Signed-off-by: Nirmoy Das --- drivers/gpu/drm/drm_gem_vram_helper.c | 17 - 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c

[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses

2020-02-19 Thread Nirmoy Das
GPU address handling is device specific and should be handled by its device driver. Signed-off-by: Nirmoy Das --- drivers/gpu/drm/ttm/ttm_bo.c| 7 --- include/drm/ttm/ttm_bo_api.h| 2 -- include/drm/ttm/ttm_bo_driver.h | 1 - 3 files changed, 10 deletions(-) diff --git

[PATCH 3/8] drm/vmwgfx: don't use ttm bo->offset

2020-02-19 Thread Nirmoy Das
Calculate GPU offset within vmwgfx driver itself without depending on bo->offset Signed-off-by: Nirmoy Das Acked-by: Christian König --- drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 4 ++-- drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c| 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 2 +-

[PATCH 1/8] drm/amdgpu: move ttm bo->offset to amdgpu_bo

2020-02-19 Thread Nirmoy Das
GPU address should belong to driver not in memory management. This patch moves ttm bo.offset and gpu_offset calculation to amdgpu driver. Signed-off-by: Nirmoy Das Acked-by: Huang Rui Reviewed-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 22 ++--

[PATCH 7/8] drm/bochs: use drm_gem_vram_offset to get bo offset v2

2020-02-19 Thread Nirmoy Das
Switch over to GEM VRAM's implementation to retrieve bo->offset Signed-off-by: Nirmoy Das --- drivers/gpu/drm/bochs/bochs_kms.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c index

[PATCH v3 0/8] do not store GPU address in TTM

2020-02-19 Thread Nirmoy Das
With this patch series I am trying to remove the GPU address dependency in TTM and move GPU address calculation to the individual drm drivers. I tested this patch series on qxl, bochs and amdgpu. Christian tested it on radeon HW. It would be nice if someone could test this for nouveau and vmwgfx. v2: * set

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Christian König
Well, it should apply on top of amd-staging-drm-next. But I haven't fetched that today yet. Give me a minute to rebase. Christian. On 19.02.20 at 15:27, Tom St Denis wrote: This doesn't apply on top of 7fd3b632e17e55c5ffd008f9f025754e7daa1b66 which is the tip of drm-next Tom On

[PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-19 Thread Tom St Denis
Signed-off-by: Tom St Denis --- drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c index 7379910790c9..66f763300c96 100644 ---

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-19 Thread Kenny Ho
On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote: > > Yes, I'd go with absolute units when it comes to memory, because it's > not a renewable resource like CPU and IO, and so we do have cliff > behavior around the edge where you transition from ok to not-enough. > > memory.low is a bit in

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD v2

2020-02-19 Thread Christian König
For amd-staging-drm-next you need the first version of the patch. For drm-misc-next or drm-next you need the second version of the patch. We probably need to merge the patch through drm-misc-next anyway since there is also the patch which causes the problems. Christian. On 19.02.20 at 16:47

Re: [RFC PATCH v6] drm/amdgpu: Remove kfd eviction fence before release bo

2020-02-19 Thread Felix Kuehling
On 2020-02-19 7:46, xinhui pan wrote: No need to trigger eviction as the memory mapping will not be used anymore. All pt/pd bos share same resv, hence the same shared eviction fence. Every time a page table is freed, the fence will be signaled and that causes unexpected kfd evictions. CC:

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Luben Tuikov
On 2020-02-19 9:44 a.m., Christian König wrote: > Well it should apply on top of amd-staging-drm-next. But I haven't > fetched that today yet. > > Give me a minute to rebase. This patch seems to have fixed the regression we saw yesterday. It applies to amd-staging-drm-next with a small jitter:

[PATCH 2/2] drm/amdgpu/smu: Add message sending lock

2020-02-19 Thread Matt Coffin
This adds a message lock to the smu_send_smc_msg* implementations to protect against concurrent access to the MMIO registers used to communicate with the SMU --- drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 12 +++- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git

[PATCH 0/2] Implement SMU message register protection

2020-02-19 Thread Matt Coffin
Hey Alex, I took a crack at the implementation I was talking about here, where we can protect the read argument register reads as well. I only transitioned the actual implementation for hardware that I have funds for/access to, and left an '_unsafe' path for the other implementations since I

[PATCH 1/2] drm/amdgpu/powerplay: Refactor SMU message handling for safety

2020-02-19 Thread Matt Coffin
Move the responsibility for reading argument registers into the smu_send_smc_msg* implementations, so that adding a message-sending lock to protect the SMU registers will result in the lock still being held when the argument is read. For code compatibility on hardware I don't have the funds to

Re: [PATCH] drm/amdgpu: Add a GEM_CREATE mask and bugfix

2020-02-19 Thread Luben Tuikov
On 2020-02-19 3:20 a.m., Christian König wrote: > On 18.02.20 at 22:46, Luben Tuikov wrote: >> On 2020-02-17 10:08 a.m., Christian König wrote: >>> On 17.02.20 at 15:44, Alex Deucher wrote: On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote: > Add an AMDGPU_GEM_CREATE_MASK and use it to

[pull] amdgpu 5.6 fixes

2020-02-19 Thread Alex Deucher
Hi Dave, Daniel, Fixes for 5.6. The following changes since commit 6f4134b30b6ee33e2fd4d602099e6c5e60d0351a: Merge tag 'drm-intel-next-fixes-2020-02-13' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes (2020-02-14 13:04:46 +1000) are available in the Git repository at:

Re: [PATCH 3/3] drm/amdgpu: Enter low power state if CRTC active.

2020-02-19 Thread Alex Deucher
On Mon, Dec 16, 2019 at 12:18 PM Alex Deucher wrote: > > From: Andrey Grodzovsky > > CRTC in DPMS state off calls for low power state entry. > Support both atomic mode setting and pre-atomic mode setting. > > v2: move comment > > Signed-off-by: Andrey Grodzovsky > Signed-off-by: Alex Deucher

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-19 Thread Johannes Weiner
On Wed, Feb 19, 2020 at 11:28:48AM -0500, Kenny Ho wrote: > On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote: > > > > Yes, I'd go with absolute units when it comes to memory, because it's > > not a renewable resource like CPU and IO, and so we do have cliff > > behavior around the edge

[PATCH] drm/amdgpu/discovery: make the discovery code less chatty

2020-02-19 Thread Alex Deucher
Make the IP block base output debug only. Signed-off-by: Alex Deucher --- drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c index

RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-19 Thread Zhou, David(ChunMing)
[AMD Official Use Only - Internal Distribution Only] Christian is right here, that will cause many problems for simply using VMID in kernel. We already have a pair of interfaces for RGP; I think you can use them instead of involving an additional kernel change. amdgpu_vm_reserve_vmid/

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Luben Tuikov
I was able to bisect it to this commit: $git bisect good 6643ba1ff05d252e451bada9443759edb95eab3b is the first bad commit commit 6643ba1ff05d252e451bada9443759edb95eab3b Author: Luben Tuikov Date: Mon Feb 10 18:16:45 2020 -0500 drm/amdgpu: Move to a per-IB secure flag (TMZ) Move

Re: [PATCH] drm/amdgpu: add VM update fences back to the root PD

2020-02-19 Thread Luben Tuikov
New developments: Running "amdgpu_test -s 1 -t 4" causes timeouts and koops. Attached is the system log, tested Navi 10: [ 144.484547] [drm:amdgpu_dm_atomic_commit_tail [amdgpu]] *ERROR* Waiting for fences timed out! [ 149.604641] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0

RE: [PATCH] drm/amdgpu/smu: add an update table lock

2020-02-19 Thread Quan, Evan
Thanks. I went through that bug report. And it seems weird that the table lock works but the msg lock does not, if it was really caused by some race condition. Since the issue was found on a multi-monitor setup, maybe mclk dpm is related. Is it possible to try with a single monitor only?

Re: [PATCH] drm/amdgpu: fix a bug NULL pointer dereference

2020-02-19 Thread Christian König
Well, offhand this patch looks like a clear NAK to me. Returning without raising an error is certainly the wrong thing to do here because we just drop the necessary page table updates. How does entity->rq end up as NULL in the first place? Regards, Christian. On 19.02.20 at 07:26

Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-19 Thread Christian König
On 19.02.20 at 11:15, Jacob He wrote: [WHY] When SPM trace enabled, SPM_VMID should be updated with the current vmid. [HOW] Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us which job should update SPM_VMID. Right before a job is submitted to GPU, set the SPM_VMID accordingly.

[PATCH umr] fix field names for INDIRECT_BUFFER_CONST/CIK for gfx9/gfx10

2020-02-19 Thread Xiaojie Yuan
field names for INDIRECT_BUFFER_CONST/CIK of gfx9/gfx10 are the same. fields like OFFLOAD_POLLING and VALID are defined in mec's INDIRECT_BUFFER packet, so not applicable here. Signed-off-by: Xiaojie Yuan --- src/lib/ring_decode.c | 23 +++ 1 file changed, 7 insertions(+),

Re: [PATCH] drm/amdgpu: Add a GEM_CREATE mask and bugfix

2020-02-19 Thread Huang Rui
On Tue, Feb 18, 2020 at 04:46:21PM -0500, Luben Tuikov wrote: > On 2020-02-17 10:08 a.m., Christian König wrote: > > On 17.02.20 at 15:44, Alex Deucher wrote: > >> On Fri, Feb 14, 2020 at 7:17 PM Luben Tuikov wrote: > >>> Add an AMDGPU_GEM_CREATE_MASK and use it to check > >>> for valid/invalid

Re: [PATCH] drm/amdgpu: fix a bug NULL pointer dereference

2020-02-19 Thread Liu, Monk
> + if (!entity->rq) > + return 0; > + Yes, supposedly we shouldn't get the 'entity->rq == NULL' case; that looks like the true bug -----Original Message----- From: amd-gfx On Behalf Of Christian König Sent: February 19, 2020 18:50 To: Zhang, Hawking ; Li, Dennis ; amd-gfx@lists.freedesktop.org; Deucher,

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-19 Thread Johannes Weiner
On Fri, Feb 14, 2020 at 03:28:40PM -0500, Kenny Ho wrote: > On Fri, Feb 14, 2020 at 2:17 PM Tejun Heo wrote: > > Also, a rather trivial high level question. Is drm a good controller > > name given that other controller names are like cpu, memory, io? > > There was a discussion about naming early

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-19 Thread Johannes Weiner
On Fri, Feb 14, 2020 at 02:17:54PM -0500, Tejun Heo wrote: > Hello, Kenny, Daniel. > > (cc'ing Johannes) > > On Fri, Feb 14, 2020 at 01:51:32PM -0500, Kenny Ho wrote: > > On Fri, Feb 14, 2020 at 1:34 PM Daniel Vetter wrote: > > > > > > I think guidance from Tejun in previos discussions was

[PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-19 Thread Jacob He
[WHY] When SPM trace enabled, SPM_VMID should be updated with the current vmid. [HOW] Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us which job should update SPM_VMID. Right before a job is submitted to GPU, set the SPM_VMID accordingly. [Limitation] Running more than one SPM