RE: [PATCH] drm/amd/powerplay: disable engine spread spectrum feature on Vega10.

2017-05-03 Thread Deucher, Alexander
> -Original Message- > From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf > Of Rex Zhu > Sent: Wednesday, May 03, 2017 11:33 PM > To: amd-gfx@lists.freedesktop.org > Cc: Zhu, Rex > Subject: [PATCH] drm/amd/powerplay: disable engine spread spectrum > feature on Vega10. >

Re: [PATCH] drm/amd/powerplay: disable engine spread spectrum feature on Vega10.

2017-05-03 Thread Wang, Ken
Reviewed-by: Ken Wang From: amd-gfx on behalf of Rex Zhu Sent: Thursday, May 4, 2017 11:33:24 AM To: amd-gfx@lists.freedesktop.org Cc: Zhu, Rex Subject: [PATCH] drm/amd/powerplay:

RE: [PATCH] drm/amdgpu: Bypass GMC, UVD and VCE in hw_fini

2017-05-03 Thread Yu, Xiangliang
Reviewed-by: Xiangliang Yu Thanks! Xiangliang Yu > -Original Message- > From: Trigger Huang [mailto:trigger.hu...@amd.com] > Sent: Thursday, May 04, 2017 10:40 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk ; Yu, Xiangliang >

RE: [PATCH] drm/amdgpu: Bypass GMC, UVD and VCE in hw_fini

2017-05-03 Thread Liu, Monk
Reviewed-by: Monk Liu -Original Message- From: Trigger Huang [mailto:trigger.hu...@amd.com] Sent: Thursday, May 4, 2017 10:40 AM To: amd-gfx@lists.freedesktop.org Cc: Liu, Monk ; Yu, Xiangliang ; Huang, Trigger

[PATCH] drm/amdgpu: Bypass GMC, UVD and VCE in hw_fini

2017-05-03 Thread Trigger Huang
Some hw finish operations should not be applied in the SR-IOV case. This works as a workaround to fix multi-VF reboot/shutdown issues. Signed-off-by: Trigger Huang --- drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 6 ++ drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 8 +++-
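The bypass described above boils down to an early return when the device is an SR-IOV virtual function, since the host owns the real hardware teardown. A minimal userspace sketch of that pattern (names and the `dev_state` struct are illustrative, not the driver's API; the real check is the `amdgpu_sriov_vf()` helper):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the relevant amdgpu device state. */
struct dev_state {
    bool sriov_vf;      /* running as an SR-IOV virtual function? */
    bool hw_torn_down;  /* did hw_fini actually touch the hardware? */
};

/* Sketch of the bypass: skip the hardware teardown entirely on a VF. */
static int hw_fini(struct dev_state *dev)
{
    if (dev->sriov_vf)
        return 0;               /* bypass: host owns the hardware */
    dev->hw_torn_down = true;   /* bare-metal teardown path */
    return 0;
}
```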

[PATCH 09/28] drm/amd/display: USB-c DP-HDMI dongle shows garbage on Sony TV

2017-05-03 Thread Harry Wentland
From: Charlene Liu Signed-off-by: Charlene Liu Acked-by: Harry Wentland Reviewed-by: Charlene Liu --- drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 21 -

[PATCH 22/28] drm/amd/display: Check for Zero Range in FreeSync Calc

2017-05-03 Thread Harry Wentland
From: Eric Cook -check for min/max range in freesync calculation and handle it accordingly Signed-off-by: Eric Acked-by: Harry Wentland Reviewed-by: Anthony Koo ---

[PATCH 19/28] drm/amd/display: decouple resource_pool from resource_context

2017-05-03 Thread Harry Wentland
From: Tony Cheng to avoid null access in case res_ctx is used to access res_pool before it's fully constructed; also make it clear which functions have a dependency on resource_pool Signed-off-by: Tony Cheng Reviewed-by: Harry Wentland

[PATCH 04/28] drm/amd/display: remove unnecessary allocation for regamma_params inside opp

2017-05-03 Thread Harry Wentland
From: Dmytro Laktyushkin Signed-off-by: Dmytro Laktyushkin Reviewed-by: Harry Wentland --- drivers/gpu/drm/amd/display/dc/dce/dce_opp.c| 10 +-

[PATCH 27/28] drm/amd/display: Get dprefclk ss percentage from vbios

2017-05-03 Thread Harry Wentland
From: Hersen Wu Signed-off-by: Hersen Wu Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 40 +--

[PATCH 06/28] drm/amd/display: fix memory leak

2017-05-03 Thread Harry Wentland
From: Dmytro Laktyushkin Signed-off-by: Dmytro Laktyushkin Reviewed-by: Harry Wentland --- drivers/gpu/drm/amd/display/dc/core/dc.c | 26 -- 1 file changed, 12 insertions(+), 14

[PATCH 02/28] drm/amd/display: Block YCbCr formats for eDP. Revert previous change.

2017-05-03 Thread Harry Wentland
From: Zeyu Fan Signed-off-by: Zeyu Fan Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-)

[PATCH 26/28] drm/amd/display: move drr_params definition to TG

2017-05-03 Thread Harry Wentland
From: Tony Cheng Signed-off-by: Tony Cheng Acked-by: Harry Wentland Reviewed-by: Dmytro Laktyushkin --- drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h | 7 +++

[PATCH 11/28] drm/amd/display: Make dc_link param const in set_drive_settings

2017-05-03 Thread Harry Wentland
From: Zeyu Fan Signed-off-by: Zeyu Fan Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +- drivers/gpu/drm/amd/display/dc/dc.h | 2 +- 2

[PATCH 13/28] drm/amd/display: move tg_color to dc_hw_types

2017-05-03 Thread Harry Wentland
From: Tony Cheng Signed-off-by: Tony Cheng Reviewed-by: Harry Wentland --- drivers/gpu/drm/amd/display/dc/dc_hw_types.h | 12 ++-- drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h | 12 2

[PATCH 12/28] drm/amd/display: PSR Refactor

2017-05-03 Thread Harry Wentland
From: Sylvia Tsai - Refactor PSR to follow the correct module pattern - fix eDP only working on sink index 0. Signed-off-by: Sylvia Tsai Acked-by: Harry Wentland Reviewed-by: Tony Cheng ---

[PATCH 16/28] drm/amd/display: dce80, 100, 110 and 112 to dce ipp refactor

2017-05-03 Thread Harry Wentland
From: Dmytro Laktyushkin Signed-off-by: Dmytro Laktyushkin Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/dce/dce_ipp.c | 12 +-

[PATCH 07/28] drm/amd/display: always retrieve PSR cap

2017-05-03 Thread Harry Wentland
From: Amy Zhang Signed-off-by: Amy Zhang Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git

[PATCH 15/28] drm/amd/display: dce120 to dce ipp refactor

2017-05-03 Thread Harry Wentland
From: Dmytro Laktyushkin Signed-off-by: Dmytro Laktyushkin Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/dce/Makefile| 2 +-

[PATCH 23/28] drm/amd/display: Add function to set dither option

2017-05-03 Thread Harry Wentland
From: Ding Wang Signed-off-by: Ding Wang Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/core/dc.c | 41 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c

[PATCH 10/28] drm/amd/display: improve cursor programming reliability

2017-05-03 Thread Harry Wentland
From: Dmytro Laktyushkin This change will cache cursor attributes and reprogram them when enabling cursor after power gating if the attributes were not yet reprogrammed Signed-off-by: Dmytro Laktyushkin Acked-by: Harry Wentland

[PATCH 03/28] drm/amd/display: Fix memory leak in post_update_surfaces

2017-05-03 Thread Harry Wentland
Signed-off-by: Harry Wentland Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/core/dc.c | 7 +-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git

[PATCH 05/28] drm/amd/display: set correct v_total_min and v_total_max for dce.

2017-05-03 Thread Harry Wentland
From: Yongqiang Sun Signed-off-by: Yongqiang Sun Acked-by: Harry Wentland Reviewed-by: Tony Cheng --- drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.c | 4 ++--

Re: Soliciting DRM feedback on latest DC rework

2017-05-03 Thread Harry Wentland
On 2017-05-03 11:02 AM, Daniel Vetter wrote: On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote: Hi Harry, while this looks more and more like it could work something which would really help would be to have a set of patches squashed together and rebased on drm-next. The

Re: [PATCH 4/6] drm/amdgpu:cleanups KIQ ring_funcs emit_frame_size

2017-05-03 Thread Alex Deucher
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote: > since we don't need hdp flush/inval for KIQ anymore > > Change-Id: I8518f479afebb73c68ef922880f92dae53b665b9 > Signed-off-by: Monk Liu Reviewed-by: Alex Deucher > --- >

Re: [PATCH 3/6] drm/amdgpu:re-write sriov_reinit_early/late

2017-05-03 Thread Alex Deucher
On Wed, May 3, 2017 at 5:10 AM, Liu, Monk wrote: > It's correct and already working on vega10/tonga for days, > In fact the guilty context already works at my side Need to use ARRAY_SIZE for the loops rather than open coding it. Beyond that, if it works for sr-iov, it's
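The ARRAY_SIZE remark refers to the standard kernel idiom: derive a loop bound from the array itself instead of open-coding the element count, so the loop stays correct when the table grows. A plain-C sketch (the ring tables here are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Kernel-style ARRAY_SIZE: element count of a true array (not a pointer). */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

/* Hypothetical tables standing in for the driver's ring lists. */
static const int sdma_instances[] = { 0, 1 };
static const int compute_rings[]  = { 0, 1, 2, 3, 4, 5, 6, 7 };

/* Iterate to ARRAY_SIZE rather than a hard-coded count. */
static size_t count_compute_rings(void)
{
    size_t i, n = 0;
    for (i = 0; i < ARRAY_SIZE(compute_rings); i++)
        n++;
    return n;
}
```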

RE: [PATCH 2/6] drm/amdgpu:need som change on vega10 mailbox

2017-05-03 Thread Liu, Monk
OK, will change later by another amend patch, already submitted -Original Message- From: Alex Deucher [mailto:alexdeuc...@gmail.com] Sent: Wednesday, May 3, 2017 11:20 PM To: Christian König Cc: Liu, Monk ; amd-gfx list

Re: [PATCH 2/6] drm/amdgpu:need som change on vega10 mailbox

2017-05-03 Thread Alex Deucher
On Wed, May 3, 2017 at 5:05 AM, Christian König wrote: > On 03.05.2017 at 05:48, Monk Liu wrote: >> >> if sriov gpu reset is invoked by job timeout, it is run >> in a global work-queue which is very slow, and better not to call >> msleep, otherwise it takes a long time to get

Re: [PATCH 1/6] drm/amdgpu:fix cannot receive rcv/ack irq bug

2017-05-03 Thread Alex Deucher
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote: > Change-Id: Ie8672e0c9358d9542810ce05c822d9367249bbd7 > Signed-off-by: Monk Liu Reviewed-by: Alex Deucher > --- > drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 4 ++-- > 1 file changed,

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
>Need to dig through the TTM code as well to find that, but it is something >very basic of TTM so I'm pretty sure it should work as expected. That's what makes me feel a little confused. If a BO is destroyed, then how does the TTM system track its resv pointer? Without this resv pointer, how does TTM wait

Re: Soliciting DRM feedback on latest DC rework

2017-05-03 Thread Daniel Vetter
On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote: > Hi Harry, > > while this looks more and more like it could work something which would > really help would be to have a set of patches squashed together and rebased > on drm-next. > > The dc-drm-next-atomic-wip looks like a start,

Re: Soliciting DRM feedback on latest DC rework

2017-05-03 Thread Christian König
Hi Harry, while this looks more and more like it could work, something which would really help would be to have a set of patches squashed together and rebased on drm-next. The dc-drm-next-atomic-wip looks like a start, but we need something more like: drm/amdgpu: add base DC components

Soliciting DRM feedback on latest DC rework

2017-05-03 Thread Harry Wentland
Hi all, Over the last few months we (mostly Andrey and myself) have taken and addressed some of the feedback received from December's DC RFC. A lot of our work so far centers around atomic. We were able to take a whole bunch of the areas where we rolled our own solution and use DRM atomic

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
If the kref of a BO goes down to 0, the BO will be destroyed (amdgpu_bo_destroy). I don't see what code prevents this destroy from being invoked if the resv of this BO still has unsignaled fences; can you share this trick? IIRC the destroy itself is not prevented, but TTM prevents reusing of the memory

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
That should be the way you said, but I didn't see the logic to assure that. If the kref of a BO goes down to 0, the BO will be destroyed (amdgpu_bo_destroy). I don't see what code prevents this destroy from being invoked if the resv of this BO still has unsignaled fences; can you share this trick?

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
Can we guarantee the pde/pte and PRT/CSA are all alive (BOs, mappings) when resubmitting the timed-out job (assume this timed-out job can signal after the resubmit)? Yes, that's why we add all fences of each command submission to the PD/PT BOs. Regards, Christian. On 2017-05-03 at 15:31,

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
Even if we release the ctx in the usual way, can we guarantee the pde/pte and PRT/CSA are all alive (BOs, mappings) when resubmitting the timed-out job (assume this timed-out job can signal after the resubmit)? You know an app can submit a command, release all BOs and free the ctx, close the FD/VM, and exit very

Re: [PATCH] drm/amdgpu/gfx: drop max_gs_waves_per_vgt

2017-05-03 Thread Alex Deucher
On Wed, May 3, 2017 at 8:41 AM, Nicolai Hähnle wrote: > On 02.05.2017 21:50, Alex Deucher wrote: >> >> We already have this info: max_gs_threads. Drop the duplicate. > > > max_gs_waves_per_vgt seems to be the better name for this number though. > Threads is usually what we

Re: [PATCH 6/6] drm/amdgpu:PTE flag should be 64 bit width

2017-05-03 Thread Alex Deucher
On Tue, May 2, 2017 at 11:48 PM, Monk Liu wrote: > otherwise we'll lose the high 32 bits of the pte, which leads > to incorrect MTYPE for vega10. > > Change-Id: I1b0c7b8df14e340a36d4d2a72c6c03f469fdc29c > Signed-off-by: Monk Liu > Reviewed-by: Christian König
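The bug being fixed here is a classic width truncation: on Vega10 the MTYPE bits of a PTE live above bit 31, so if the flags pass through a 32-bit variable anywhere, those bits are silently dropped. A self-contained demonstration (the shift value 57 mirrors where the Vega10 MTYPE field sits, but treat the exact layout here as an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative PTE bits: MTYPE modeled above bit 31, as on Vega10. */
#define MTYPE_SHIFT  57
#define PTE_MTYPE_CC ((uint64_t)3 << MTYPE_SHIFT)
#define PTE_VALID    ((uint64_t)1 << 0)

/* Passing flags through a 32-bit type truncates everything >= bit 32. */
static uint64_t build_pte_32(uint32_t flags) { return flags; }

/* The fix: keep the flags 64 bits wide end to end. */
static uint64_t build_pte_64(uint64_t flags) { return flags; }
```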

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
and the ironic thing is I want it to be alive as well (especially CSA, PTR) Yes, and exactly that is the danger I was talking about. We messed up the teardown order with that and try to access resources which are already freed when the job is now scheduled. I would rather say we should get

Re: [PATCH 0/3] GPU-DRM-Radeon: Fine-tuning for three function implementations

2017-05-03 Thread Christian König
On 02.05.2017 at 22:04, SF Markus Elfring wrote: From: Markus Elfring Date: Tue, 2 May 2017 22:00:02 +0200 Three update suggestions were taken into account from static source code analysis. Markus Elfring (3): Use seq_putc() in radeon_sa_bo_dump_debug_info()

Re: [PATCH] drm/amdgpu/gfx: drop max_gs_waves_per_vgt

2017-05-03 Thread Nicolai Hähnle
On 02.05.2017 21:50, Alex Deucher wrote: We already have this info: max_gs_threads. Drop the duplicate. max_gs_waves_per_vgt seems to be the better name for this number though. Threads is usually what we call an item, of which each wave has 64. Cheers, Nicolai Signed-off-by: Alex

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
Since I take one more kref on the ctx when creating jobs, amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr) here won't actually wait ... because amdgpu_ctx_do_release won't run (kref > 0 before all jobs have signaled). That way amdgpu_driver_postclose_kms() can continue, so actually "UVD and
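The kref scheme debated in this thread can be modeled in a few lines: each in-flight job takes a reference on its context, so the release callback cannot fire until the last job drops its reference. This is a userspace sketch of the idea only; the names and the plain-int refcount are illustrative, not the driver's kref API:

```c
#include <assert.h>

/* Toy model of a context whose release is gated by a reference count. */
struct ctx {
    int refcount;
    int released;   /* set once the "do_release" path has run */
};

static void ctx_get(struct ctx *c) { c->refcount++; }

static void ctx_put(struct ctx *c)
{
    if (--c->refcount == 0)
        c->released = 1;    /* release runs only at the final put */
}
```

With this model, closing the fd drops the creator's reference, but the ctx stays alive until the last job's put, which is exactly the "won't actually wait" behavior Monk describes above.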

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread zhoucm1
Aside from this title, it makes me think job->vm isn't safe as well; the vm could be freed while it is being used in amdgpu_ib_schedule. Do you have any thoughts on how to solve it? Regards, David Zhou On 2017-05-03 17:18, Christian König wrote: I'm afraid not: CSA is gone with the VM, and VM is gone after app

Re: [PATCH 1/3] drm: fourcc byteorder: drop DRM_FORMAT_BIG_ENDIAN

2017-05-03 Thread Gerd Hoffmann
Hi, > > R600+ supports bigendian framebuffer formats, so no byteswapping on > > access is needed. Not sure whenever that includes 16bpp formats or > > whenever this is limited to the 8 bit-per-color formats [...] > > It includes 16bpp. Looking at >

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
Please give me some detailed approach on dma_fence, thanks! I think the idea of using the fence_context is even better than using the fence_status. It has been present for a while, and filtering the jobs/entities by it needs something like 10 lines of code. The fence status is new and only

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
I'm afraid not: CSA is gone with the VM, and VM is gone after the app closes our FD. I don't see that amdgpu_vm_fini() depends on the context living or not ... See the teardown order in amdgpu_driver_postclose_kms(): amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr); amdgpu_uvd_free_handles(adev, file_priv);

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
I thought of that in the first place as well: keep an ID and parse the ID of every unsignaled job to see if the job belongs to something guilty. But keeping the ctx alive is simpler, so I chose this way. But I admit it is doable as well, and I want to compare this method and the

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
> 1) fence_context has a chance to incorrectly represent the context behind it, because the numbers can be used up and will wrap around from the beginning No, fence_context is globally unique in the kernel. Yeah, I would go with that as well. You note the fence_context of the job which caused the
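The uniqueness claim rests on how the kernel hands out fence contexts: dma_fence_context_alloc() draws from a single global 64-bit counter, so two entities never share a context and a 64-bit counter will not wrap in practice. A userspace model of that allocation scheme (the kernel's actual implementation uses atomic64_t; this mirrors the behavior, not the exact code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Global monotonically increasing context counter, as in the kernel. */
static atomic_uint_least64_t ctx_counter = 1;

/* Reserve 'num' consecutive context numbers; return the first one. */
static uint64_t fence_context_alloc(unsigned num)
{
    return atomic_fetch_add(&ctx_counter, num);
}
```

Filtering jobs by the guilty entity then reduces to comparing each job's stored context number against the noted one, which is the "10 lines of code" Christian mentions.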

RE: [PATCH 3/6] drm/amdgpu:re-write sriov_reinit_early/late

2017-05-03 Thread Liu, Monk
It's correct and already working on vega10/tonga for days, In fact the guilty context already works at my side BR Monk -Original Message- From: Christian König [mailto:deathsim...@vodafone.de] Sent: Wednesday, May 03, 2017 5:02 PM To: Liu, Monk ;

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
1, My idea is that userspace should rather gather the feedback during the next command submission. This has the advantage that you don't need to keep userspace alive till all jobs are done. > No, we need to clean the hw ring (cherry-pick out guilty entities' jobs in all > rings) after gpu

Re: [PATCH 1/6] drm/amdgpu:fix cannot receive rcv/ack irq bug

2017-05-03 Thread Christian König
On 03.05.2017 at 05:48, Monk Liu wrote: Change-Id: Ie8672e0c9358d9542810ce05c822d9367249bbd7 Signed-off-by: Monk Liu Acked-by: christian.koe...@amd.com --- drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git

Re: [PATCH 2/6] drm/amdgpu:need som change on vega10 mailbox

2017-05-03 Thread Christian König
On 03.05.2017 at 05:48, Monk Liu wrote: if sriov gpu reset is invoked by job timeout, it is run in a global work-queue which is very slow, and better not to call msleep, otherwise it takes a long time to get back the CPU. So make the changes below: 1: Change msleep(1) to mdelay(5) 2: Ignore the ack fail from

Re: [PATCH 4/6] drm/amdgpu:cleanups KIQ ring_funcs emit_frame_size

2017-05-03 Thread Christian König
On 03.05.2017 at 05:48, Monk Liu wrote: since we don't need hdp flush/inval for KIQ anymore Change-Id: I8518f479afebb73c68ef922880f92dae53b665b9 Signed-off-by: Monk Liu Reviewed-by: Christian König --- drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c |

Re: [PATCH 3/6] drm/amdgpu:re-write sriov_reinit_early/late

2017-05-03 Thread Christian König
On 03.05.2017 at 05:48, Monk Liu wrote: 1, this way we make those routines compatible with the sequence requirement for both Tonga and Vega10 2, ignore PSP hw init when doing TDR, because for SR-IOV devices the ucode won't get lost after VF FLR, so there is no need to invoke PSP to do the ucode reloading

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Christian König
1, This is necessary, otherwise how can I access the entity pointer after a job timed out? No, that isn't necessary. The problem with your idea is that you want to actively push the feedback/status from the job execution back to userspace when an error (timeout) happens. My idea is that userspace

Re: [PATCH] drm/amdgpu/gfx9: derive tile pipes from golden settings

2017-05-03 Thread Christian König
On 02.05.2017 at 22:17, Alex Deucher wrote: rather than hardcoding it. Signed-off-by: Alex Deucher Acked-by: Christian König --- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff

Re: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread zhoucm1
On 2017-05-03 14:02, Liu, Monk wrote: You can add ctx as a field of job, but not take a reference on it; when you try to use ctx, just check if ctx == NULL. > that doesn't work at all... job->ctx will always be non-NULL after it is initialized, you just refer to a wild pointer after the CTX is released

RE: [PATCH 3/6] drm/amdgpu:re-write sriov_reinit_early/late

2017-05-03 Thread Yu, Xiangliang
Reviewed-by: Xiangliang Yu Thanks! Xiangliang Yu > -Original Message- > From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, May 03, 2017 11:48 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk

RE: [PATCH 5/6] drm/amdgpu:kiq reg access need timeout(v2)

2017-05-03 Thread Yu, Xiangliang
Reviewed-by: Xiangliang Yu Thanks! Xiangliang Yu > -Original Message- > From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, May 03, 2017 11:48 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk

RE: [PATCH 4/6] drm/amdgpu:cleanups KIQ ring_funcs emit_frame_size

2017-05-03 Thread Yu, Xiangliang
Reviewed-by: Xiangliang Yu Thanks! Xiangliang Yu > -Original Message- > From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, May 03, 2017 11:48 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk

RE: [PATCH 2/6] drm/amdgpu:need som change on vega10 mailbox

2017-05-03 Thread Yu, Xiangliang
Reviewed-by: Xiangliang Yu Thanks! Xiangliang Yu > -Original Message- > From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, May 03, 2017 11:48 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk

Re: [PATCH] amdgpu: add interface for reserve/unserve vmid v2

2017-05-03 Thread Zhang, Jerry (Junwei)
On 05/03/2017 01:44 PM, Chunming Zhou wrote: v2: delete unused comments. Change-Id: If533576eb8a65bd019a3480d6fe2a64f23e3c944 Signed-off-by: Chunming Zhou Reviewed-by: Monk Liu Reviewed-by: Junwei Zhang --- amdgpu/amdgpu.h|

RE: [PATCH 1/5] drm/amdgpu:keep ctx alive till all job finished

2017-05-03 Thread Liu, Monk
You can add ctx as a field of job, but not take a reference on it; when you try to use ctx, just check if ctx == NULL. > that doesn't work at all... job->ctx will always be non-NULL after it is > initialized, you just refer to a wild pointer after the CTX is released Another stupid method: Use