Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 06:41:32PM -0400, Kenny Ho wrote: > On Wed, Jun 26, 2019 at 5:41 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote: > > > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > > So what happens when you start a lot of threads all

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:25 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote: > > The bandwidth is measured by keeping track of the amount of bytes moved > > by ttm within a time period. We define two types of bandwidth: burst > > and average. Average

Re: [RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:12 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 11:05:18AM -0400, Kenny Ho wrote: > > drm.memory.stats > > A read-only nested-keyed file which exists on all cgroups. > > Each entry is keyed by the drm device's major:minor. The > >

Re: [pull] amdgpu, radeon, amdkfd drm-next-5.3

2019-06-26 Thread Dave Airlie
On Thu, 27 Jun 2019 at 13:07, Dave Airlie wrote: > > Thanks, > > I've pulled this, but it introduced one warning > > /home/airlied/devel/kernel/dim/src/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c: In > function ‘vcn_v2_0_start_dpg_mode’: >

RE: [PATCH] drm/amd/powerplay: warn on smu interface version mismatch

2019-06-26 Thread Quan, Evan
I do not think this is a good idea. There are still some cases where a version mismatch will cause unexpected issues, and they will be hard to debug. If this is for debug purposes only, I would suggest keeping this in your custom branch only. Regards, Evan > -Original Message- > From:

Re: [RFC PATCH v3 11/11] drm, cgroup: Allow more aggressive memory reclaim

2019-06-26 Thread Kenny Ho
Ok. I am not too familiar with shrinker but I will dig into it. Just so that I am looking into the right things, you are referring to things like struct shrinker and struct shrink_control? Regards, Kenny On Wed, Jun 26, 2019 at 12:44 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 5:41 PM Daniel Vetter wrote: > On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote: > > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > So what happens when you start a lot of threads all at the same time, > > > allocating gem bo? Also would be nice if we

[PATCH 2/2] drm/amdgpu: handle AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID on gfx10

2019-06-26 Thread Marek Olšák
From: Marek Olšák Signed-off-by: Marek Olšák --- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 17 + 1 file changed, 17 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c index 6baaa65a1daa..5b807a19bbbf 100644 ---

[PATCH 1/2] drm/amdgpu: fix transform feedback GDS hang on gfx10 (v2)

2019-06-26 Thread Marek Olšák
From: Marek Olšák v2: update emit_ib_size (though it's still wrong because it was wrong before) Signed-off-by: Marek Olšák --- drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h | 3 ++- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 14 +++--- 2 files changed, 13 insertions(+), 4 deletions(-) diff

Re: [PATCH] drm/amdgpu: Set queue_preemption_timeout_ms default value

2019-06-26 Thread Kuehling, Felix
On 2019-06-26 11:22 a.m., Zeng, Oak wrote: > Set default value of this kernel parameter to 9000 > > Change-Id: If91db4d2c2ac08e25d7728d49629cbaec0d6c773 > Signed-off-by: Oak Zeng Reviewed-by: Felix Kuehling > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +- > 1 file changed, 1

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 5:04 PM Daniel Vetter wrote: > On Wed, Jun 26, 2019 at 10:37 PM Kenny Ho wrote: > > (sending again, I keep missing the reply-all in gmail.) > You can make it the default somewhere in the gmail options. Um... interesting, my option was actually not set (neither reply or

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote: > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > > > drm.buffer.default > > > A read-only flat-keyed file which exists on the root cgroup. > > > Each entry is keyed by the drm device's major:minor. > > > > > >

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > drm.buffer.default > > A read-only flat-keyed file which exists on the root cgroup. > > Each entry is keyed by the drm device's major:minor. > > > > Default limits on the total GEM buffer allocation in bytes. > >

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 10:37 PM Kenny Ho wrote: > (sending again, I keep missing the reply-all in gmail.) You can make it the default somewhere in the gmail options. (also resending, I missed that you didn't group-replied). On Wed, Jun 26, 2019 at 10:25 PM Kenny Ho wrote: > > On Wed, Jun 26,

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
(sending again, I keep missing the reply-all in gmail.) On Wed, Jun 26, 2019 at 11:56 AM Daniel Vetter wrote: > > Why the separate, explicit registration step? I think a simpler design for > drivers would be that we set up cgroups if there's anything to be > controlled, and then for GEM drivers

Re: [RFC PATCH v3 01/11] cgroup: Introduce cgroup for drm subsystem

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 9:35 PM Kenny Ho wrote: > > On Wed, Jun 26, 2019 at 11:49 AM Daniel Vetter wrote: > > > > Bunch of naming bikesheds > > I appreciate the suggestions, naming is hard :). > > > > +#include > > > + > > > +struct drmcgrp { > > > > drm_cgroup for more consistency how we

Re: [RFC PATCH v3 01/11] cgroup: Introduce cgroup for drm subsystem

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 11:49 AM Daniel Vetter wrote: > > Bunch of naming bikesheds I appreciate the suggestions, naming is hard :). > > +#include > > + > > +struct drmcgrp { > > drm_cgroup for more consistency how we usually call these things. I was hoping to keep the symbol short if

Re: [PATCH v18 10/15] drm/radeon: untag user pointers in radeon_gem_userptr_ioctl

2019-06-26 Thread Khalid Aziz
On 6/24/19 8:32 AM, Andrey Konovalov wrote: > This patch is a part of a series that extends kernel ABI to allow to pass > tagged user pointers (with the top byte set to something else other than > 0x00) as syscall arguments. > > In radeon_gem_userptr_ioctl() an MMU notifier is set up with a

Re: [PATCH] drm/amdkfd: remove unnecessary warning message on gpu reset

2019-06-26 Thread Grodzovsky, Andrey
Reviewed-by: Andrey Grodzovsky Andrey On 6/26/19 1:50 PM, Liu, Shaoyun wrote: > In an XGMI configuration, more than one asic can be reset at the same time; > kfd is able to handle this, so there is no need to trigger the warning > > Change-Id: If339503860e86ee1dbeed294ba1c103fcce70b7b > Signed-off-by: shaoyunl

[PATCH] drm/amdkfd: remove unnecessary warning message on gpu reset

2019-06-26 Thread Liu, Shaoyun
In an XGMI configuration, more than one asic can be reset at the same time; kfd is able to handle this, so there is no need to trigger the warning. Change-Id: If339503860e86ee1dbeed294ba1c103fcce70b7b Signed-off-by: shaoyunl --- drivers/gpu/drm/amd/amdkfd/kfd_device.c | 1 - 1 file changed, 1 deletion(-) diff

Re: [PATCH v18 00/15] arm64: untag user pointers passed to the kernel

2019-06-26 Thread Catalin Marinas
Hi Andrew, On Mon, Jun 24, 2019 at 04:32:45PM +0200, Andrey Konovalov wrote: > Andrey Konovalov (14): > arm64: untag user pointers in access_ok and __uaccess_mask_ptr > lib: untag user pointers in strn*_user > mm: untag user pointers passed to memory syscalls > mm: untag user pointers in

Re: [RFC PATCH v3 11/11] drm, cgroup: Allow more aggressive memory reclaim

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:22AM -0400, Kenny Ho wrote: > Allow DRM TTM memory manager to register a work_struct, such that, when > a drmcgrp is under memory pressure, memory reclaiming can be triggered > immediately. > > Change-Id: I25ac04e2db9c19ff12652b88ebff18b44b2706d8 > Signed-off-by:

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote: > The bandwidth is measured by keeping track of the amount of bytes moved > by ttm within a time period. We define two types of bandwidth: burst > and average. Average bandwidth is calculated by dividing the total > amount of bytes moved

Re: [RFC PATCH v3 08/11] drm, cgroup: Add TTM buffer peak usage stats

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:19AM -0400, Kenny Ho wrote: > drm.memory.peak.stats > A read-only nested-keyed file which exists on all cgroups. > Each entry is keyed by the drm device's major:minor. The > following nested keys are defined. > > ==

Re: [RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:18AM -0400, Kenny Ho wrote: > The drm resource being measured is the TTM (Translation Table Manager) > buffers. TTM manages different types of memory that a GPU might access. > These memory types include dedicated Video RAM (VRAM) and host/system > memory accessible

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:15AM -0400, Kenny Ho wrote: > The drm resource being measured and limited here is the GEM buffer > objects. User applications allocate and free these buffers. In > addition, a process can allocate a buffer and share it with another > process. The consumer of a

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:13AM -0400, Kenny Ho wrote: > Change-Id: I908ee6975ea0585e4c30eafde4599f87094d8c65 > Signed-off-by: Kenny Ho Why the separate, explicit registration step? I think a simpler design for drivers would be that we set up cgroups if there's anything to be controlled, and

Re: [RFC PATCH v3 01/11] cgroup: Introduce cgroup for drm subsystem

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:12AM -0400, Kenny Ho wrote: Needs a bit more commit message here I think. > Change-Id: I6830d3990f63f0c13abeba29b1d330cf28882831 > Signed-off-by: Kenny Ho Bunch of naming bikesheds > --- > include/linux/cgroup_drm.h| 76 +++ >

Re: [PATCH] drm/amdgpu: Set queue_preemption_timeout_ms default value

2019-06-26 Thread Deucher, Alexander
Acked-by: Alex Deucher From: Zeng, Oak Sent: Wednesday, June 26, 2019 11:22 AM To: amd-gfx@lists.freedesktop.org Cc: Kuehling, Felix; Deucher, Alexander; Li, Candice; Zeng, Oak Subject: [PATCH] drm/amdgpu: Set queue_preemption_timeout_ms default value Set default

[PATCH] drm/amdgpu: Set queue_preemption_timeout_ms default value

2019-06-26 Thread Zeng, Oak
Set default value of this kernel parameter to 9000 Change-Id: If91db4d2c2ac08e25d7728d49629cbaec0d6c773 Signed-off-by: Oak Zeng --- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

[RFC PATCH v3 08/11] drm, cgroup: Add TTM buffer peak usage stats

2019-06-26 Thread Kenny Ho
drm.memory.peak.stats A read-only nested-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. The following nested keys are defined. == == system
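The peak-usage statistic described above amounts to keeping a running maximum of the memory charged to a cgroup. A minimal userspace sketch of that bookkeeping (all names here are hypothetical, not the patch's actual code):

```c
#include <stdint.h>

/* Running high-water mark for a peak-usage stat like
 * drm.memory.peak.stats. Hypothetical names, illustration only. */
struct drmcg_mem {
    uint64_t usage; /* bytes currently charged to the cgroup */
    uint64_t peak;  /* high-water mark */
};

static void drmcg_mem_charge(struct drmcg_mem *m, uint64_t size)
{
    m->usage += size;
    if (m->usage > m->peak)
        m->peak = m->usage; /* peak only ever moves up */
}

static void drmcg_mem_uncharge(struct drmcg_mem *m, uint64_t size)
{
    m->usage -= size; /* peak is deliberately left untouched */
}
```

Uncharging never lowers the peak, which is what makes the file useful for sizing limits after a workload has run.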

[RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-26 Thread Kenny Ho
The drm resources being measured are TTM (Translation Table Manager) buffers. TTM manages different types of memory that a GPU might access. These memory types include dedicated Video RAM (VRAM) and host/system memory accessible through IOMMU (GART/GTT). TTM is currently used by multiple drm

[RFC PATCH v3 10/11] drm, cgroup: Add soft VRAM limit

2019-06-26 Thread Kenny Ho
The drm resources being limited are TTM (Translation Table Manager) buffers. TTM manages different types of memory that a GPU might access. These memory types include dedicated Video RAM (VRAM) and host/system memory accessible through IOMMU (GART/GTT). TTM is currently used by multiple drm

[RFC PATCH v3 11/11] drm, cgroup: Allow more aggressive memory reclaim

2019-06-26 Thread Kenny Ho
Allow the DRM TTM memory manager to register a work_struct so that, when a drmcgrp is under memory pressure, memory reclaim can be triggered immediately. Change-Id: I25ac04e2db9c19ff12652b88ebff18b44b2706d8 Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c| 47

[RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
The bandwidth is measured by keeping track of the amount of bytes moved by ttm within a time period. We define two types of bandwidth: burst and average. Average bandwidth is calculated by dividing the total amount of bytes moved within a cgroup by the lifetime of the cgroup. Burst bandwidth is
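The average-bandwidth calculation stated in this commit message (total bytes moved by ttm divided by the cgroup's lifetime) can be sketched in userspace as follows; the function name and unit choice are hypothetical, not taken from the patch:

```c
#include <stdint.h>

/* Sketch of the average-bandwidth formula described above:
 * bytes moved for a cgroup divided by the cgroup's lifetime.
 * Hypothetical name; illustration only. */
static uint64_t drmcg_avg_bw(uint64_t bytes_moved, uint64_t lifetime_us)
{
    if (lifetime_us == 0)
        return 0; /* brand-new cgroup: avoid divide-by-zero */
    return bytes_moved / lifetime_us; /* bytes per microsecond */
}
```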

[RFC PATCH v3 03/11] drm/amdgpu: Register AMD devices for DRM cgroup

2019-06-26 Thread Kenny Ho
Change-Id: I3750fc657b956b52750a36cb303c54fa6a265b44 Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c index da7b4fe8ade3..2568fd730161

[RFC PATCH v3 06/11] drm, cgroup: Add GEM buffer allocation count stats

2019-06-26 Thread Kenny Ho
drm.buffer.count.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of GEM buffers allocated. Change-Id: Id3e1809d5fee8562e47a7d2b961688956d844ec6 Signed-off-by: Kenny Ho ---

[RFC PATCH v3 00/11] new cgroup controller for gpu/drm subsystem

2019-06-26 Thread Kenny Ho
This is a follow-up to the RFC I made previously to introduce a cgroup controller for the GPU/DRM subsystem [v1,v2]. The goal is to be able to provide resource management for GPU resources using things like containers. The cover letter from v1 is copied below for reference. [v1]:

[RFC PATCH v3 05/11] drm, cgroup: Add peak GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
drm.buffer.peak.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Largest GEM buffer allocated in bytes. drm.buffer.peak.default A read-only flat-keyed file which exists on the root cgroup.

[RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
The drm resources being measured and limited here are GEM buffer objects. User applications allocate and free these buffers. In addition, a process can allocate a buffer and share it with another process. The consumer of a shared buffer can also outlive the allocator of the buffer. For the
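A total-allocation limit of the kind this patch describes (charge the cgroup when a buffer is allocated, refuse when the total would exceed the configured cap) could be sketched like this; since the preview above is truncated, the names and exact charging policy here are illustrative assumptions, not the patch's code:

```c
#include <stdint.h>

/* Illustrative charge/uncharge accounting for a total GEM buffer
 * allocation limit. Hypothetical names; no sharing semantics shown. */
struct drmcg_bufs {
    uint64_t usage; /* bytes currently allocated */
    uint64_t limit; /* configured cap */
};

/* Returns 0 on success, -1 if the allocation would exceed the limit. */
static int drmcg_try_charge(struct drmcg_bufs *b, uint64_t size)
{
    if (b->usage + size > b->limit)
        return -1;
    b->usage += size;
    return 0;
}

static void drmcg_buf_uncharge(struct drmcg_bufs *b, uint64_t size)
{
    b->usage -= size;
}
```

The hard part the commit message hints at — shared buffers whose consumers outlive the allocator — is about deciding *which* cgroup to charge and uncharge, not about the arithmetic itself.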

[RFC PATCH v3 01/11] cgroup: Introduce cgroup for drm subsystem

2019-06-26 Thread Kenny Ho
Change-Id: I6830d3990f63f0c13abeba29b1d330cf28882831 Signed-off-by: Kenny Ho --- include/linux/cgroup_drm.h| 76 +++ include/linux/cgroup_subsys.h | 4 ++ init/Kconfig | 5 +++ kernel/cgroup/Makefile| 1 + kernel/cgroup/drm.c

[RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
Change-Id: I908ee6975ea0585e4c30eafde4599f87094d8c65 Signed-off-by: Kenny Ho --- include/drm/drm_cgroup.h | 24 include/linux/cgroup_drm.h | 10 kernel/cgroup/drm.c| 116 + 3 files changed, 150 insertions(+) create mode 100644

[PATCH] drm/amd/powerplay: force the trim of the mclk dpm_levels if OD is enabled

2019-06-26 Thread Sergei Lopatin
Should prevent flicker if PP_OVERDRIVE_MASK is set. bug: https://bugs.freedesktop.org/show_bug.cgi?id=102646 bug: https://bugs.freedesktop.org/show_bug.cgi?id=108941 Signed-off-by: Sergei Lopatin --- drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 5 - 1 file changed, 4 insertions(+), 1

Re: [PATCH][next[ drm/amd/display: fix a couple of spelling mistakes

2019-06-26 Thread Colin Ian King
On 26/06/2019 14:25, Daniel Stone wrote: > Hi Colin, > > On Wed, 26 Jun 2019 at 14:24, Colin King wrote: >> There are a couple of spelling mistakes in dm_error messages and >> a comment. Fix these. > > Whilst there, you might fix the '[next[' typo in the commit message. Ugh, fickle fingers.

Re: [PATCH][next[ drm/amd/display: fix a couple of spelling mistakes

2019-06-26 Thread Daniel Stone
Hi Colin, On Wed, 26 Jun 2019 at 14:24, Colin King wrote: > There are a couple of spelling mistakes in dm_error messages and > a comment. Fix these. Whilst there, you might fix the '[next[' typo in the commit message. Cheers, Daniel

[PATCH][next[ drm/amd/display: fix a couple of spelling mistakes

2019-06-26 Thread Colin King
From: Colin Ian King There are a couple of spelling mistakes in dm_error messages and a comment. Fix these. Signed-off-by: Colin Ian King --- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dsc.c | 2 +- drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c | 8 2 files changed, 5

Re: [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v13

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 02:23:05PM +0200, Christian König wrote: > On the exporter side we add optional explicit pinning callbacks. If those > callbacks are implemented the framework no longer caches sg tables and the > map/unmap callbacks are always called with the lock of the reservation object

[PATCH 6/6] drm/amdgpu: add independent DMA-buf import v7

2019-06-26 Thread Christian König
Instead of relying on the DRM functions just implement our own import functions. This prepares support for taking care of unpinned DMA-buf. v2: enable for all exporters, not just amdgpu, fix invalidation handling, lock reservation object while setting callback v3: change to new dma_buf attach

[PATCH 3/6] drm/ttm: use the parent resv for ghost objects v2

2019-06-26 Thread Christian König
This way we can even pipeline imported BO evictions. v2: Limit this to only cases when the parent object uses a separate reservation object as well. This fixes another OOM problem. Signed-off-by: Christian König --- drivers/gpu/drm/ttm/ttm_bo_util.c | 20 +++- 1 file

[PATCH 2/6] drm/ttm: remove the backing store if no placement is given

2019-06-26 Thread Christian König
Pipeline removal of the BOs backing store when no placement is given during validation. Signed-off-by: Christian König --- drivers/gpu/drm/ttm/ttm_bo.c | 12 1 file changed, 12 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index

[PATCH 1/6] dma-buf: add dynamic DMA-buf handling v13

2019-06-26 Thread Christian König
On the exporter side we add optional explicit pinning callbacks. If those callbacks are implemented the framework no longer caches sg tables and the map/unmap callbacks are always called with the lock of the reservation object held. On the importer side we add an optional invalidate callback.

[PATCH 4/6] drm/amdgpu: use allowed_domains for exported DMA-bufs

2019-06-26 Thread Christian König
Avoid that we ping/pong the buffers when we stop to pin DMA-buf exports by using the allowed domains for exported buffers. Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git

[PATCH 5/6] drm/amdgpu: add independent DMA-buf export v6

2019-06-26 Thread Christian König
The caching of SGT's is actually quite harmful and should probably be removed altogether when all drivers are audited. Start by providing a separate DMA-buf export implementation in amdgpu. This is also a prerequisite for unpinned DMA-buf handling. v2: fix unintended recursion, remove debugging

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 1:53 PM Christian König wrote: > > [SNIP] > > I'm confused here: Atm ->moving isn't in resv_obj, there's only one > >>> exclusive fence. And yes you need to set that every time you do a move > >>> (because a move needs to be pretty exclusive access). But I'm not seeing

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Christian König
[SNIP] I'm confused here: Atm ->moving isn't in resv_obj, there's only one exclusive fence. And yes you need to set that every time you do a move (because a move needs to be pretty exclusive access). But I'm not seeing a separate not_quite_exclusive fence slot for moves. Yeah, but shouldn't

Re: [PATCH v2] drm/amd/powerplay:clean up the residual mutex for smu_hw_init

2019-06-26 Thread Kevin Wang
Reviewed-by: Kevin Wang On 6/25/19 3:43 PM, Prike Liang wrote: > The mutex protecting SMU during hw_init was removed, as the system > will deadlock when smu_populate_umd_state_clk tries to get the SMU mutex. > Therefore we need to remove the residual mutex from the failed path. > > Change-Id:

Re: [PATCH 1/2] drm/amdgpu/powerplay: FEATURE_MASK is 64 bit so use ULL

2019-06-26 Thread Kevin Wang
Reviewed-by: Kevin Wang Best Regards, Kevin On 6/25/19 9:58 PM, Alex Deucher wrote: > ULL is needed for 32 bit arches. > > Signed-off-by: Alex Deucher > --- > drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 2 +- > drivers/gpu/drm/amd/powerplay/vega20_ppt.c | 2 +- > 2 files changed, 2

Re: [PATCH -next v4] drm/amdgpu: return 'ret' immediately if failed in amdgpu_pmu_init

2019-06-26 Thread maowenan
On 2019/6/25 1:42, Kim, Jonathan wrote: > Immediate return should be ok since perf registration isn't dependent on gpu > hw. > > Reviewed-by: Jonathan Kim Thanks for the review. > > -Original Message- > From: Mao Wenan > Sent: Monday, June 24, 2019 7:23 AM > To: airl...@linux.ie;

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:28 AM Koenig, Christian wrote: > > Am 26.06.19 um 10:17 schrieb Daniel Vetter: > > On Wed, Jun 26, 2019 at 09:49:03AM +0200, Christian König wrote: > >> Am 25.06.19 um 18:05 schrieb Daniel Vetter: > >>> On Tue, Jun 25, 2019 at 02:46:49PM +0200, Christian König wrote: >

Re: [PATCH 1/1] drm/ttm: return -EBUSY if waiting for busy BO fails

2019-06-26 Thread Michel Dänzer
On 2019-06-26 9:04 a.m., Kuehling, Felix wrote: > On 2019-06-26 2:54 a.m., Koenig, Christian wrote: >> Am 26.06.19 um 08:40 schrieb Kuehling, Felix: >>> Returning -EAGAIN prevents ttm_bo_mem_space from trying alternate >>> placements and can lead to live-locks in amdgpu_cs, retrying >>>

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Koenig, Christian
Am 26.06.19 um 10:17 schrieb Daniel Vetter: > On Wed, Jun 26, 2019 at 09:49:03AM +0200, Christian König wrote: >> Am 25.06.19 um 18:05 schrieb Daniel Vetter: >>> On Tue, Jun 25, 2019 at 02:46:49PM +0200, Christian König wrote: On the exporter side we add optional explicit pinning callbacks.

Re: [PATCH 4/7] drm/radeon: Fill out gem_object->resv

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 07:10:21AM +, Koenig, Christian wrote: > Those patches would become superfluous when merging Gerd's work. Not entirely, they still remove the gem_prime_res_obj. Setting up gem_bo.resv is only one half of what these do here. And yeah I think that single addition can be

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 09:49:03AM +0200, Christian König wrote: > Am 25.06.19 um 18:05 schrieb Daniel Vetter: > > On Tue, Jun 25, 2019 at 02:46:49PM +0200, Christian König wrote: > > > On the exporter side we add optional explicit pinning callbacks. If those > > > callbacks are implemented the

Re: [Intel-gfx] [PATCH 1/6] dma-buf: add dynamic DMA-buf handling v12

2019-06-26 Thread Christian König
Am 25.06.19 um 18:05 schrieb Daniel Vetter: On Tue, Jun 25, 2019 at 02:46:49PM +0200, Christian König wrote: On the exporter side we add optional explicit pinning callbacks. If those callbacks are implemented the framework no longer caches sg tables and the map/unmap callbacks are always called

Re: [PATCH 4/7] drm/radeon: Fill out gem_object->resv

2019-06-26 Thread Koenig, Christian
Those patches would become superfluous when merging Gerd's work. But I'm not sure if that is going to fly soon or not. Christian. Am 25.06.19 um 22:42 schrieb Daniel Vetter: > That way we can ditch our gem_prime_res_obj implementation. Since ttm > absolutely needs the right reservation object

Re: [PATCH 1/1] drm/ttm: return -EBUSY if waiting for busy BO fails

2019-06-26 Thread Kuehling, Felix
On 2019-06-26 2:54 a.m., Koenig, Christian wrote: > Am 26.06.19 um 08:40 schrieb Kuehling, Felix: >> Returning -EAGAIN prevents ttm_bo_mem_space from trying alternate >> placements and can lead to live-locks in amdgpu_cs, retrying >> indefinitely and never succeeding. >> >> Fixes: cfcc52e477e4

Re: [PATCH 1/1] drm/ttm: return -EBUSY if waiting for busy BO fails

2019-06-26 Thread Koenig, Christian
Am 26.06.19 um 08:40 schrieb Kuehling, Felix: > Returning -EAGAIN prevents ttm_bo_mem_space from trying alternate > placements and can lead to live-locks in amdgpu_cs, retrying > indefinitely and never succeeding. > > Fixes: cfcc52e477e4 ("drm/ttm: fix busy memory to fail other user v10") > CC:

[PATCH 1/1] drm/ttm: return -EBUSY if waiting for busy BO fails

2019-06-26 Thread Kuehling, Felix
Returning -EAGAIN prevents ttm_bo_mem_space from trying alternate placements and can lead to live-locks in amdgpu_cs, retrying indefinitely and never succeeding. Fixes: cfcc52e477e4 ("drm/ttm: fix busy memory to fail other user v10") CC: Christian Koenig Signed-off-by: Felix Kuehling ---
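The distinction this fix relies on can be illustrated with a simplified placement loop (a userspace sketch, not the actual ttm_bo_mem_space code): -EBUSY lets the loop fall through to the next allowed placement, while -EAGAIN propagates to the caller, which retries the same placement and can live-lock:

```c
#include <errno.h>

/* Simplified model of a two-placement loop: placement 0 is busy and
 * fails with 'busy_rc'; placement 1 succeeds. Returns the placement
 * index used, or a negative errno. Hypothetical, illustration only. */
static int try_placements(int busy_rc)
{
    for (int i = 0; i < 2; i++) {
        int rc = (i == 0) ? busy_rc : 0;
        if (rc == -EBUSY)
            continue;   /* fall through to the alternate placement */
        if (rc < 0)
            return rc;  /* -EAGAIN bubbles up: caller retries forever */
        return i;       /* placed successfully */
    }
    return -ENOMEM;     /* all placements exhausted */
}
```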

Re: [PATCH] drm/amd/powerplay: warn on smu interface version mismatch

2019-06-26 Thread Kevin Wang
Please add this message to the patch commit message. After that: Reviewed-by: Kevin Wang Best Regards, Kevin On 6/26/19 2:34 PM, Yuan, Xiaojie wrote: > The current SMU IF version check is too strict; a driver with an old smu11_driver_if.h > sometimes works fine with new SMU firmware. We prefer to see a warning >

Re: [PATCH 06/10] drm/ttm: fix busy memory to fail other user v10

2019-06-26 Thread Kuehling, Felix
I believe I found a live-lock due to this patch when running our KFD eviction test in a loop. It pretty reliably hangs on the second loop iteration. If I revert this patch, the problem disappears. With some added instrumentation, I see that amdgpu_cs_list_validate in amdgpu_cs_parser_bos

Re: [PATCH] drm/amd/powerplay: warn on smu interface version mismatch

2019-06-26 Thread Yuan, Xiaojie
The current SMU IF version check is too strict; a driver with an old smu11_driver_if.h sometimes works fine with new SMU firmware. We prefer to see a warning instead of an error for debug purposes. BR, Xiaojie From: Yuan, Xiaojie Sent: Wednesday, June 26, 2019

[PATCH] drm/amd/powerplay: warn on smu interface version mismatch

2019-06-26 Thread Yuan, Xiaojie
Signed-off-by: Xiaojie Yuan --- drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c index c3f48fae6f32..339d063e24ff 100644 ---