> -Original Message-
> From: Alex Deucher
> Sent: Saturday, August 18, 2018 1:26 AM
> To: Zhu, Rex
> Cc: amd-gfx list
> Subject: Re: [PATCH 2/2] drm/amdgpu: Refine gfx_v9_0_kcq_disable function
>
> On Fri, Aug 17, 2018 at 5:35 AM Rex Zhu wrote:
> >
> > Send all kcq unmap_queue packet
ROCm CQE is seeing what looks like hangs during amdgpu initialization on
Raven and Vega20. Amdgpu basically stops printing messages while trying
to load VCN firmware. It never completes initialization, but there is no
obvious error message. These are the last messages from amdgpu in the log:
[
On 08/16/2018 08:09 PM, Wen Yang wrote:
> Fix compile warnings like:
> CC [M] drivers/gpu/drm/i915/gvt/execlist.o
> CC [M] drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.o
> CC [M] drivers/gpu/drm/radeon/btc_dpm.o
> CC [M] drivers/isdn/hisax/avm_a1p.o
> CC [M] drivers/gpu/drm/amd/amd
On Fri, Aug 17, 2018 at 1:42 PM Christian König
wrote:
>
> That's the PID of the creator of the file (usually the X server) and not
> the end user of the file.
>
> Signed-off-by: Christian König
> CC: sta...@vger.kernel.org
Series is:
Acked-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/amdgpu/
Hi,
No, that isn't possible at the moment. It could probably be implemented
using cgroups.
Regards,
Christian.
On 17.08.2018 at 10:06, sunnanyong wrote:
Hi All ,
Is there a way to limit how much VRAM the application can use?
I want to allocate VRAM evenly across applications, how to real
That's the PID of the creator of the file (usually the X server) and not
the end user of the file.
Signed-off-by: Christian König
CC: sta...@vger.kernel.org
---
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 19 ---
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/driv
The usage isn't RCU protected.
Signed-off-by: Christian König
CC: sta...@vger.kernel.org
---
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
ind
On Fri, Aug 17, 2018 at 5:35 AM Rex Zhu wrote:
>
> Send all kcq unmap_queue packets and then wait for
> completion.
>
> Signed-off-by: Rex Zhu
This series and the gfx8 series are:
Reviewed-by: Alex Deucher
Longer term, we may want to dynamically map and umap compute queues as
part of scheduling
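The submit-all-then-wait-once change Rex describes can be sketched in miniature. This is a userspace toy model, not the real amdgpu KIQ API; all names here are made up for illustration:

```c
#include <assert.h>

/* Toy model: submit one unmap_queue packet per compute queue first,
 * then wait once for the whole batch, instead of a submit+wait round
 * trip per queue. */

struct toy_ring {
    int pending; /* packets submitted but not yet retired */
    int waits;   /* number of wait operations issued */
};

static void toy_submit_unmap(struct toy_ring *ring, int queue_id)
{
    (void)queue_id;
    ring->pending++;
}

static void toy_wait_idle(struct toy_ring *ring)
{
    ring->pending = 0; /* model: one wait retires everything queued */
    ring->waits++;
}

/* Old scheme: one wait per queue. */
static int disable_kcq_one_by_one(struct toy_ring *ring, int num_queues)
{
    for (int i = 0; i < num_queues; i++) {
        toy_submit_unmap(ring, i);
        toy_wait_idle(ring);
    }
    return ring->waits;
}

/* New scheme (what the patch does): batch all packets, wait once. */
static int disable_kcq_batched(struct toy_ring *ring, int num_queues)
{
    for (int i = 0; i < num_queues; i++)
        toy_submit_unmap(ring, i);
    toy_wait_idle(ring);
    return ring->waits;
}
```

With eight compute queues the batched scheme issues one wait instead of eight, which is the point of the refactor.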
Hi All ,
Is there a way to limit how much VRAM the application can use?
I want to allocate VRAM evenly across applications, how can I realize this?
OS: Ubuntu17.10
GPU driver: mesa17.2.8
GPU: Radeon pro wx5100
Please help, thanks!
On 2018-08-17 08:24 AM, Christian König wrote:
> We need to figure out the address after validating the BO, not before.
>
> Signed-off-by: Christian König
Reviewed-by: Felix Kuehling
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff
The flag prevents another thread from the same process from reinserting the
entity's queue into the scheduler's rq after it was already removed from
there by another thread during drm_sched_entity_flush.
Signed-off-by: Andrey Grodzovsky
---
drivers/gpu/drm/scheduler/sched_entity.c | 10 +-
include/
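The race Andrey's flag closes can be modeled in a few lines. This is a hedged userspace sketch with invented names; the real code lives in drivers/gpu/drm/scheduler/sched_entity.c:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model: once a flush marks the entity stopped and pulls it off
 * the run queue, a concurrent job push from another thread of the
 * same process must not re-insert it. */

struct toy_entity {
    atomic_bool stopped; /* set during flush, never cleared */
    bool in_rq;          /* currently on a scheduler run queue? */
};

static void toy_entity_flush(struct toy_entity *e)
{
    atomic_store(&e->stopped, true);
    e->in_rq = false; /* removed from the run queue */
}

/* Returns true if the job was queued, false if it was rejected. */
static bool toy_push_job(struct toy_entity *e)
{
    if (atomic_load(&e->stopped))
        return false; /* the flag wins: no re-insertion after flush */
    e->in_rq = true;
    return true;
}
```

Before the flush a push lands on the run queue; after the flush the stopped check rejects it, so the entity can never reappear on the rq.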
From: Hans Verkuil
Add DisplayPort CEC-Tunneling-over-AUX support to nouveau.
Signed-off-by: Hans Verkuil
---
drivers/gpu/drm/nouveau/nouveau_connector.c | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c
b/driver
From: Hans Verkuil
A big problem with DP CEC-Tunneling-over-AUX is that it is tricky
to find adapters with a chipset that supports this AND where the
manufacturer actually connected the HDMI CEC line to the chipset.
Add a mention of the MegaChips 2900 chipset which seems to support
this feature
From: Hans Verkuil
Add DisplayPort CEC-Tunneling-over-AUX support to amdgpu.
Signed-off-by: Hans Verkuil
Acked-by: Alex Deucher
---
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 13 +++--
.../drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 2 ++
2 files changed, 13 insertio
From: Hans Verkuil
When parsing the reply of a DP_REMOTE_DPCD_READ DPCD command the
result is wrong due to a missing idx increment.
This was never noticed since DP_REMOTE_DPCD_READ is currently not
used, but if you enable it, then it is all wrong.
Signed-off-by: Hans Verkuil
---
drivers/gpu/d
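The class of bug described here (a missing index increment while parsing a reply) is easy to show in a toy parser. The buffer layout below is invented for illustration and is not the real DP sideband message format:

```c
#include <assert.h>

/* Toy model: every field consumed from a reply buffer must advance
 * the read index, or all later fields are read from the wrong
 * offset. */

struct toy_reply {
    int port;
    int num_bytes;
    const unsigned char *bytes;
};

static void toy_parse_reply(const unsigned char *buf, struct toy_reply *out)
{
    int idx = 0;

    out->port = buf[idx] & 0x0f;
    idx++; /* the kind of increment that was missing in the bug */
    out->num_bytes = buf[idx];
    idx++;
    out->bytes = &buf[idx];
}
```

Drop the first `idx++` and the length field is read from the port byte and the payload pointer shifts by one, which is exactly how a missing increment corrupts everything after it.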
From: Hans Verkuil
If aux->transfer == NULL, then just return without doing
anything. In that case the function is likely called for
a non-(e)DP connector.
This never happened for the i915 driver, but the nouveau and amdgpu
drivers need this check.
The alternative would be to add this check in
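The early-out being proposed can be sketched as follows. This is a toy stand-in with invented names, not the exact drm_dp_cec_* signatures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy sketch: if the connector has no AUX transfer hook (e.g. a
 * non-(e)DP connector), CEC setup just returns without doing
 * anything. */

struct toy_aux {
    int (*transfer)(void *msg); /* NULL for non-(e)DP connectors */
    bool cec_registered;
};

static void toy_cec_register(struct toy_aux *aux)
{
    if (!aux->transfer)
        return; /* nothing to do without an AUX channel */
    aux->cec_registered = true;
}
```

The guard makes the helper safe to call unconditionally from drivers like nouveau and amdgpu that hand it connectors of every type.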
From: Hans Verkuil
Repost because I wasn't a member of the nouveau mailing list the
first time around and this series was blocked. I also updated this
cover letter for the part about the amdgpu patch after receiving
feedback from Alex Deucher. The patches are unchanged (except for
adding Alex' Ack
On 2018-08-17 03:16 AM, Christian König wrote:
On 16.08.2018 at 21:44, sunpeng...@amd.com wrote:
[SNIP]
+config DRM_AMD_DC_DCN1_0
+ bool "DCN 1.0 Raven family"
+ depends on DRM_AMD_DC && X86
+ default y
+ help
+ Choose this option if you want to have
+ RV family for disp
We need to figure out the address after validating the BO, not before.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
in
On 2018 Aug 16, Michel Dänzer wrote:
> On 2018-08-10 09:06 AM, Johannes Hirte wrote:
> > On 2018 Jul 27, Michel Dänzer wrote:
> >> From: Michel Dänzer
> >>
> >> We were only storing the FB provided by the client, but on CRTCs with
> >> TearFree enabled, we use a separate FB. This could cause
> >>
On 17.08.2018 at 12:08, Huang Rui wrote:
I continue to work on bulk moving, based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then
moves them to the end of the LRU list one by one. That causes many BOs to be
moved to the end of the LRU, and impacts performance seriously.
It would be really nice to have support for the automatic
extension-less fullscreen game scenario. Maybe you don't have to solve
everything in the first implementation...
So a friendly ping here!
Regards
//Ernst
On Tue, 24 Apr 2018 at 23:58, Daniel Vetter wrote:
>
> On Tue, Apr 24, 2018 at 4:28
I have tested the patches on Rv/Vega10/Vega12.
Series is:
Reviewed-and-Tested-by: Rex Zhu
Best Regards
Rex
> -Original Message-
> From: amd-gfx On Behalf Of
> Christian König
> Sent: Friday, August 17, 2018 3:23 PM
> To: Deng, Emily ; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH
I continue to work on bulk moving, based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then
moves them to the end of the LRU list one by one. That causes many BOs to be
moved to the end of the LRU, and impacts performance seriously.
The new bulk moving functionality is ready and the overhead of moving PD/PT
BOs to the LRU is fixed, so move them on the LRU again.
Signed-off-by: Huang Rui
Tested-by: Mike Lothian
Tested-by: Dieter Nützel
Acked-by: Chunming Zhou
Reviewed-by: Junwei Zhang
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2
This function allows us to bulk move a group of BOs to the tail of their LRU.
The positions of the group of BOs are stored in the (first, last)
bulk_move_pos structure.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
Tested-by: Mike Lothian
Tested-by: Dieter Nützel
Acked-by: Chunming Zhou
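The (first, last) bookkeeping this series introduces can be modeled with a small array-based stand-in for the kernel's linked LRU list. All names below are illustrative, not the actual TTM API:

```c
#include <assert.h>
#include <string.h>

/* Toy model: as BOs are validated they form one contiguous run on
 * the LRU, and only the run's endpoints are recorded; a single bulk
 * operation then moves the whole [first, last] range to the tail
 * instead of touching every BO on each submission. */

#define LRU_MAX 32

struct toy_lru {
    int ids[LRU_MAX];
    int count;
};

struct bulk_move_pos {
    int first; /* index of the first BO in the tracked run, -1 if none */
    int last;  /* index of the last BO in the tracked run */
};

/* Move the contiguous run [pos->first, pos->last] to the LRU tail. */
static void toy_lru_bulk_move_tail(struct toy_lru *lru,
                                   struct bulk_move_pos *pos)
{
    if (pos->first < 0)
        return;

    int run = pos->last - pos->first + 1;
    int tmp[LRU_MAX];

    memcpy(tmp, &lru->ids[pos->first], (size_t)run * sizeof(int));
    /* close the gap the run leaves behind */
    memmove(&lru->ids[pos->first], &lru->ids[pos->last + 1],
            (size_t)(lru->count - pos->last - 1) * sizeof(int));
    /* append the run at the tail */
    memcpy(&lru->ids[lru->count - run], tmp, (size_t)run * sizeof(int));
    pos->first = -1; /* run consumed */
}
```

In the real patches the splice is O(1) on a linked list; the array copy here only models the reordering, which is what removes the per-BO LRU churn the cover letter complains about.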
From: Christian König
When moving a BO to the end of the LRU, we need to remember the BO's
position, and make sure all moved BOs are in between "first" and "last" so
they will be bulk moved together.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
Tested-by: Mike Lothian
Tested-by: Dieter Nützel
Ack
From: Christian König
Add a bulk move position to store pointers to the first and last buffer
objects. The list in between will be bulk moved on the LRU list.
Signed-off-by: Christian König
Signed-off-by: Huang Rui
Tested-by: Mike Lothian
Tested-by: Dieter Nützel
Acked-by: Chunming Zhou
Reviewed-by: Jun
The idea and proposal are originally from Christian, and I continued the
work to deliver it.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then
moves them to the end of the LRU list one by one. That causes many BOs to be
moved to the end of the LRU, and impacts performance seriously.
On Thu, Aug 16, 2018 at 08:41:44AM +0800, Dieter Nützel wrote:
> For the series
>
> Tested-by: Dieter Nützel
>
> on RX580,
> amd-staging-drm-next
> #5024f8dfe478
>
Thank you so much, will add tested-by in next version.
Thanks,
Ray
> Dieter
>
> On 13.08.2018 at 11:58, Huang Rui wrote:
> > The
Send all kcq unmap_queue packets and then wait for
completion.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 33 -
1 file changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
b/drivers/gpu/drm/amd/amdgp
There are no logical changes here.
1. If kcq can be enabled via kiq, we don't need to
do the kiq ring test.
2. The amdgpu_ring_test_ring function can be used to
wait for the ring to complete; remove the duplicate code.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 72 ++-
>-Original Message-
>From: Christian König
>Sent: Friday, August 17, 2018 3:23 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: Re: [PATCH v2 1/2] drm/amdgpu: Remove the sriov checking and add
>firmware checking
>
>Am 17.08.2018 um 07:40 schrieb Emily Deng:
>> Refine the patch
On 17.08.2018 at 09:06, Rex Zhu wrote:
There are no logical changes here.
1. If kcq can be enabled via kiq, we don't need to
do the kiq ring test.
2. The amdgpu_ring_test_ring function can be used to
wait for the ring to complete; remove the duplicate code.
v2: alloc 6 (not 7) dws for unmap_queue
On 2018-08-16 09:44 PM, sunpeng...@amd.com wrote:
> From: "Leo (Sunpeng) Li"
>
> DCN1 contains code that utilizes fp math. When
> CONFIG_KCOV_INSTRUMENT_ALL and CONFIG_KCOV_ENABLE_COMPARISONS are
> enabled, build errors are found. See this earlier patch for details:
>
> https://lists.freedesktop
On 17.08.2018 at 07:40, Emily Deng wrote:
Refine patch 1, and the locking around invalidate_lock.
Unify bare metal and SR-IOV, and add firmware checking for the
reg write and reg wait unify command.
Signed-off-by: Emily Deng
Acked-by: Christian König for this one
because I can't verify the fir
On 16.08.2018 at 21:44, sunpeng...@amd.com wrote:
[SNIP]
+config DRM_AMD_DC_DCN1_0
+ bool "DCN 1.0 Raven family"
+ depends on DRM_AMD_DC && X86
+ default y
+ help
+ Choose this option if you want to have
+ RV family for display engine
+
Probably better t
Fix compile warnings like:
CC [M] drivers/gpu/drm/i915/gvt/execlist.o
CC [M] drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.o
CC [M] drivers/gpu/drm/radeon/btc_dpm.o
CC [M] drivers/isdn/hisax/avm_a1p.o
CC [M] drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_dpp.o
drivers/gpu/drm/
Send all kcq unmap_queue packets and then wait for
completion.
Signed-off-by: Rex Zhu
---
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 29 +++--
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
b/drivers/gpu/drm/amd/amdgpu/gf
There are no logical changes here.
1. If kcq can be enabled via kiq, we don't need to
do the kiq ring test.
2. The amdgpu_ring_test_ring function can be used to
wait for the ring to complete; remove the duplicate code.
v2: alloc 6 (not 7) dws for unmap_queues
Signed-off-by: Rex Zhu
---
drivers/gpu/