[pull] amdgpu, amdkfd, radeon 5.6 fixes

2020-02-05 Thread Alex Deucher
Hi Dave, Daniel,

A bit bigger than normal, but this is several weeks of fixes.

The following changes since commit d7ca2d19c751b6715e9cb899a6b94f47b3499d02:

  Merge tag 'drm-msm-next-2020-01-14' of https://gitlab.freedesktop.org/drm/msm into drm-next (2020-01-20 14:09:43 +1000)

are available in the Git repository at:

  git://people.freedesktop.org/~agd5f/linux tags/amd-drm-next-5.6-2020-02-05

for you to fetch changes up to 58fe03d6dec908a1bec07eea7e94907af5c07eec:

  drm/amd/dm/mst: Ignore payload update failures (2020-02-04 23:30:39 -0500)


amd-drm-next-5.6-2020-02-05:

amdgpu:
- EDC fixes for Arcturus
- GDDR6 memory training fixes
- Fix for reading gfx clockgating registers while in GFXOFF state
- i2c freq fixes
- Misc display fixes
- TLB invalidation fix when using semaphores
- VCN 2.5 instancing fixes
- Switch raven1 gfxoff to a blacklist
- Coreboot workaround for KV/KB
- Root cause dongle fixes for display and revert workaround
- Enable GPU reset for renoir and navi
- Navi overclocking fixes
- Fix up confusing warnings in display clock validation on raven

amdkfd:
- SDMA fix

radeon:
- Misc LUT fixes


Alex Deucher (12):
  drm/amdgpu: attempt to enable gfxoff on more raven1 boards (v2)
  drm/amdgpu: original raven doesn't support full asic reset
  drm/amdgpu: enable GPU reset by default on Navi
  drm/amdgpu: enable GPU reset by default on renoir
  drm/amdgpu/navi10: add mclk to navi10_get_clock_by_type_with_latency
  drm/amdgpu/navi: fix index for OD MCLK
  drm/amdgpu/navi10: add OD_RANGE for navi overclocking
  drm/amdgpu: fetch default VDDC curve voltages (v2)
  drm/amdgpu/display: handle multiple numbers of fclks in dcn_calcs.c (v2)
  drm/amdgpu/smu10: fix smu10_get_clock_by_type_with_latency
  drm/amdgpu/smu10: fix smu10_get_clock_by_type_with_voltage
  drm/amdgpu: update default voltage for boot od table for navi1x

Alex Sierra (1):
  drm/amdgpu: modify packet size for pm4 flush tlbs

Anthony Koo (1):
  drm/amd/display: Refactor to remove diags specific rgam func

Aric Cyr (1):
  drm/amd/display: 3.2.69

Bhawanpreet Lakha (1):
  drm/amd/display: Fix HW/SW state mismatch

Brandon Syu (1):
  drm/amd/display: fix rotation_angle to use enum values

Christian König (1):
  drm/amdgpu: add coreboot workaround for KV/KB

Colin Ian King (4):
  drm/amd/amdgpu: fix spelling mistake "to" -> "too"
  drm/amd/display: fix for-loop with incorrectly sized loop counter (v2)
  drm/amd/powerplay: fix spelling mistake "Attemp" -> "Attempt"
  drm/amd/display: fix spelling mistake link_integiry_check -> link_integrity_check

Daniel Vetter (2):
  radeon: insert 10ms sleep in dce5_crtc_load_lut
  radeon: completely remove lut leftovers

Dennis Li (6):
  drm/amdgpu: update mmhub 9.4.1 header files for Arcturus
  drm/amdgpu: enable RAS feature for more mmhub sub-blocks of Arcturus
  drm/amdgpu: refine the security check for RAS functions
  drm/amdgpu: abstract EDC counter clear to a separated function
  drm/amdgpu: add EDC counter registers of gc for Arcturus
  drm/amdgpu: add RAS support for the gfx block of Arcturus

Dor Askayo (1):
  drm/amd/display: do not allocate display_mode_lib unnecessarily

Evan Quan (1):
  drm/amd/powerplay: fix navi10 system intermittent reboot issue V2

Felix Kuehling (2):
  drm/amdgpu: Fix TLB invalidation request when using semaphore
  drm/amdgpu: Use the correct flush_type in flush_gpu_tlb_pasid

Haiyi Zhou (1):
  drm/amd/display: Fixed comment styling

Harry Wentland (2):
  drm/amd/display: Retrain dongles when SINK_COUNT becomes non-zero
  Revert "drm/amd/display: Don't skip link training for empty dongle"

Isabel Zhang (1):
  drm/amd/display: changed max_downscale_src_width to 4096.

James Zhu (5):
  drm/amdgpu/vcn: Share vcn_v2_0_dec_ring_test_ring to vcn2.5
  drm/amdgpu/vcn2.5: fix a bug for the 2nd vcn instance (v2)
  drm/amdgpu/vcn: fix vcn2.5 instance issue
  drm/amdgpu/vcn: fix typo error
  drm/amdgpu/vcn: use inst_idx replacing inst

Jerry (Fangzhi) Zuo (1):
  drm/amd/display: Fix DML dummyinteger types mismatch

John Clements (1):
  drm/amdgpu: added support to get mGPU DRAM base

Joseph Greathouse (1):
  drm/amdgpu: Enable DISABLE_BARRIER_WAITCNT for Arcturus

Lewis Huang (2):
  drm/amd/display: Refine i2c frequency calculating sequence
  drm/amd/display: init hw i2c speed

Lyude Paul (1):
  drm/amd/dm/mst: Ignore payload update failures

Matt Coffin (1):
  drm/amdgpu/smu_v11_0: Correct behavior of restoring default tables (v2)

Mikita Lipski (1):
  drm/amd/display: Fix a typo when computing dsc configuration

Nathan Chancellor (1):
  drm/amdgpu: Fix implicit enum conversion in gfx_v9_4_ras_error_inject

Nicholas Kazlauskas (8):
  

RE: [PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

2020-02-05 Thread Quan, Evan
Acked-by: Evan Quan 

-Original Message-
From: amd-gfx  On Behalf Of Jack Zhang
Sent: Wednesday, February 5, 2020 5:18 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Jack (Jian) 
Subject: [PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

For sriov and pp_onevf_mode, do not send message to set smu
status, becasue smu doesn't support these messages under VF.

Besides, it should skip smu_suspend when pp_onevf_mode is disabled.

Signed-off-by: Jack Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 15 ---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 21 +
 2 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4ff7ce3..2d1f8d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2353,15 +2353,16 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
 		}
 		adev->ip_blocks[i].status.hw = false;
 		/* handle putting the SMC in the appropriate state */
-		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
-			r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
-			if (r) {
-				DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
-					  adev->mp1_state, r);
-				return r;
+		if(!amdgpu_sriov_vf(adev)){
+			if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+				r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
+				if (r) {
+					DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
+						  adev->mp1_state, r);
+					return r;
+				}
 			}
 		}
-
 		adev->ip_blocks[i].status.hw = false;
 	}
 
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 99ad4dd..a6d7b5f 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1461,21 +1461,26 @@ static int smu_suspend(void *handle)
 	struct smu_context *smu = &adev->smu;
 	bool baco_feature_is_enabled = false;
 
+	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
+		return 0;
+
 	if (!smu->pm_enabled)
 		return 0;
 
 	if(!smu->is_apu)
 		baco_feature_is_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT);
 
-	ret = smu_system_features_control(smu, false);
-	if (ret)
-		return ret;
-
-	if (baco_feature_is_enabled) {
-		ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
-		if (ret) {
-			pr_warn("set BACO feature enabled failed, return %d\n", ret);
+	if(!amdgpu_sriov_vf(adev)) {
+		ret = smu_system_features_control(smu, false);
+		if (ret)
 			return ret;
+
+		if (baco_feature_is_enabled) {
+			ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
+			if (ret) {
+				pr_warn("set BACO feature enabled failed, return %d\n", ret);
+				return ret;
+			}
 		}
 	}
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

2020-02-05 Thread Alex Deucher
On Wed, Feb 5, 2020 at 4:18 AM Jack Zhang  wrote:
>
> For sriov and pp_onevf_mode, do not send message to set smu
> status, becasue smu doesn't support these messages under VF.

Typo: becasue -> because
With that fixed:
Acked-by: Alex Deucher 



Re: [PATCH 4/4] drm/amdgpu: use amdgpu_device_vram_access in amdgpu_ttm_access_memory

2020-02-05 Thread Felix Kuehling

On 2020-02-05 10:22 a.m., Christian König wrote:

Make use of the better performance here as well.

This patch is only compile tested!

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 38 +++--
  1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 58d143b24ba0..538c3b52b712 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1565,7 +1565,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
 
 	while (len && pos < adev->gmc.mc_vram_size) {
 		uint64_t aligned_pos = pos & ~(uint64_t)3;
-		uint32_t bytes = 4 - (pos & 3);
+		uint64_t bytes = 4 - (pos & 3);
 		uint32_t shift = (pos & 3) * 8;
 		uint32_t mask = 0xffffffff << shift;
 
@@ -1574,20 +1574,28 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
 			bytes = len;
 		}
 
-		spin_lock_irqsave(&adev->mmio_idx_lock, flags);
-		WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
-		WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
-		if (!write || mask != 0xffffffff)
-			value = RREG32_NO_KIQ(mmMM_DATA);
-		if (write) {
-			value &= ~mask;
-			value |= (*(uint32_t *)buf << shift) & mask;
-			WREG32_NO_KIQ(mmMM_DATA, value);
-		}
-		spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
-		if (!write) {
-			value = (value & mask) >> shift;
-			memcpy(buf, &value, bytes);
+		if (mask != 0xffffffff) {
+			spin_lock_irqsave(&adev->mmio_idx_lock, flags);
+			WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
+			WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
+			if (!write || mask != 0xffffffff)
+				value = RREG32_NO_KIQ(mmMM_DATA);
+			if (write) {
+				value &= ~mask;
+				value |= (*(uint32_t *)buf << shift) & mask;
+				WREG32_NO_KIQ(mmMM_DATA, value);
+			}
+			spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
+			if (!write) {
+				value = (value & mask) >> shift;
+				memcpy(buf, &value, bytes);
+			}
+		} else {
+			bytes = (nodes->start + nodes->size) << PAGE_SHIFT;
+			bytes = min(pos - bytes, (uint64_t)len & ~0x3ull);


I think this is incorrect. The following should be true: pos < 
((nodes->start + nodes->size) << PAGE_SHIFT). Consequently pos - bytes 
is always negative here. But because you're doing unsigned math it will 
underflow to a big positive number, which is never the minimum. 
Therefore the min will always be len & ~0x3ull.


I believe this should be min(bytes - pos, (uint64_t)len & ~0x3ull).

Jon, to catch this bug, you'd need a test that first fragments VRAM 
(allocates lots of 2MB buffers and frees every other buffer), then 
allocates a large non-contiguous buffer. Then you need one 4KB or 
smaller access that crosses a boundary between 2MB VRAM buffer chunks.


Christian, your optimized VRAM allocator that tries to get large 
contiguous chunks is nice for performance, but it probably has a 
tendency to hide this kind of bug. I wonder if we should have a debug 
mode that forces non-contiguous buffers to be actually non-contiguous.


Regards,
  Felix


+
+   amdgpu_device_vram_access(adev, pos, (uint32_t *)buf,
+ bytes, write);
}
  
  		ret += bytes;



Re: [PATCH 2/4] drm/amdgpu: use the BAR if possible in amdgpu_device_vram_access

2020-02-05 Thread Felix Kuehling
If we're using the BAR, we should probably flush HDP cache/buffers 
before reading or after writing.


Regards,
  Felix


On 2020-02-05 10:22 a.m., Christian König wrote:

This should speed up debugging VRAM access a lot.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +
  1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d39630edda01..7d65c9aedecd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -188,6 +188,27 @@ void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
uint32_t hi = ~0;
uint64_t last;
  
+

+#ifdef CONFIG_64BIT
+   last = min(pos + size, adev->gmc.visible_vram_size);
+   if (last > pos) {
+   void __iomem *addr = adev->mman.aper_base_kaddr + pos;
+   size_t count = last - pos;
+
+   if (write)
+   memcpy_toio(addr, buf, count);
+   else
+   memcpy_fromio(buf, addr, count);
+
+   if (count == size)
+   return;
+
+   pos += count;
+   buf += count / 4;
+   size -= count;
+   }
+#endif
+
	spin_lock_irqsave(&adev->mmio_idx_lock, flags);
for (last = pos + size; pos < last; pos += 4) {
uint32_t tmp = pos >> 31;



Re: [PATCH 5/6] drm/amdkfd: Only count active sdma queues

2020-02-05 Thread Yong Zhao

Please disregard patches 5 and 6, as I have new versions for them.

Yong

On 2020-02-05 6:39 p.m., Yong Zhao wrote:

One minor fix added.

Yong

On 2020-02-05 6:28 p.m., Yong Zhao wrote:

The sdma_queue_count was only used for inferring whether we should
unmap SDMA queues under HWS mode. In contrast, we mapped active queues
rather than all in map_queues_cpsch(). In order to match the map and unmap
for SDMA queues, we should just count active SDMA queues. Meanwhile,
rename sdma_queue_count to active_sdma_queue_count to reflect the new
usage.

Change-Id: I9f1c3305dad044a3c779ec0730fcf7554050de8b
Signed-off-by: Yong Zhao 
---
  .../drm/amd/amdkfd/kfd_device_queue_manager.c | 54 ---
  .../drm/amd/amdkfd/kfd_device_queue_manager.h |  5 +-
  .../amd/amdkfd/kfd_process_queue_manager.c    | 16 +++---
  3 files changed, 31 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c

index 064108cf493b..cf77b866054a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -138,6 +138,10 @@ void increment_queue_count(struct 
device_queue_manager *dqm,

  dqm->active_queue_count++;
  if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
  dqm->active_cp_queue_count++;
+    else if (type == KFD_QUEUE_TYPE_SDMA)
+    dqm->active_sdma_queue_count++;
+    else if (type == KFD_QUEUE_TYPE_SDMA_XGMI)
+    dqm->active_xgmi_sdma_queue_count++;
  }
    void decrement_queue_count(struct device_queue_manager *dqm,
@@ -146,6 +150,10 @@ void decrement_queue_count(struct 
device_queue_manager *dqm,

  dqm->active_queue_count--;
  if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
  dqm->active_cp_queue_count--;
+    else if (type == KFD_QUEUE_TYPE_SDMA)
+    dqm->active_sdma_queue_count--;
+    else if (type == KFD_QUEUE_TYPE_SDMA_XGMI)
+    dqm->active_xgmi_sdma_queue_count--;
  }
    static int allocate_doorbell(struct qcm_process_device *qpd, 
struct queue *q)
@@ -377,11 +385,6 @@ static int create_queue_nocpsch(struct 
device_queue_manager *dqm,

  if (q->properties.is_active)
  increment_queue_count(dqm, q->properties.type);
  -    if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
-    dqm->sdma_queue_count++;
-    else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
-    dqm->xgmi_sdma_queue_count++;
-
  /*
   * Unconditionally increment this counter, regardless of the 
queue's

   * type or whether the queue is active.
@@ -462,15 +465,13 @@ static int destroy_queue_nocpsch_locked(struct 
device_queue_manager *dqm,

  mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
  q->properties.type)];
  -    if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE) {
+    if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE)
  deallocate_hqd(dqm, q);
-    } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) {
-    dqm->sdma_queue_count--;
+    else if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
  deallocate_sdma_queue(dqm, q);
-    } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI) {
-    dqm->xgmi_sdma_queue_count--;
+    else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
  deallocate_sdma_queue(dqm, q);
-    } else {
+    else {
  pr_debug("q->properties.type %d is invalid\n",
  q->properties.type);
  return -EINVAL;
@@ -916,8 +917,8 @@ static int initialize_nocpsch(struct 
device_queue_manager *dqm)

  mutex_init(&dqm->lock_hidden);
  INIT_LIST_HEAD(&dqm->queues);
  dqm->active_queue_count = dqm->next_pipe_to_allocate = 0;
-    dqm->sdma_queue_count = 0;
-    dqm->xgmi_sdma_queue_count = 0;
+    dqm->active_sdma_queue_count = 0;
+    dqm->active_xgmi_sdma_queue_count = 0;
    for (pipe = 0; pipe < get_pipes_per_mec(dqm); pipe++) {
  int pipe_offset = pipe * get_queues_per_pipe(dqm);
@@ -1081,8 +1082,8 @@ static int initialize_cpsch(struct 
device_queue_manager *dqm)

  mutex_init(&dqm->lock_hidden);
  INIT_LIST_HEAD(&dqm->queues);
  dqm->active_queue_count = dqm->processes_count = 0;
-    dqm->sdma_queue_count = 0;
-    dqm->xgmi_sdma_queue_count = 0;
+    dqm->active_sdma_queue_count = 0;
+    dqm->active_xgmi_sdma_queue_count = 0;
  dqm->active_runlist = false;
  dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
  dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - 
get_num_xgmi_sdma_queues(dqm));
@@ -1254,11 +1255,6 @@ static int create_queue_cpsch(struct 
device_queue_manager *dqm, struct queue *q,

  list_add(&q->list, &qpd->queues_list);
  qpd->queue_count++;
  -    if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
-    dqm->sdma_queue_count++;
-    else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
-    dqm->xgmi_sdma_queue_count++;
-
  if (q->properties.is_active) {
  increment_queue_count(dqm, 

[PATCH 1/3] drm/amdkfd: Delete excessive printings

2020-02-05 Thread Yong Zhao
Those printings are duplicated or useless.

Change-Id: I88fbe8f5748bbd0a20bcf1f6ca67b9dde99733fe
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c  | 2 --
 drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c | 4 +---
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index a3c44d88314b..958275db3f55 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -297,8 +297,6 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
struct mqd_manager *mqd_mgr;
int retval;
 
-   print_queue(q);
-
dqm_lock(dqm);
 
if (dqm->total_queue_count >= max_num_of_queues_per_device) {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index c604a2ede3f5..3bfa5c8d9654 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -257,7 +257,6 @@ int pqm_create_queue(struct process_queue_manager *pqm,
pqn->q = q;
pqn->kq = NULL;
	retval = dev->dqm->ops.create_queue(dev->dqm, q, &pdd->qpd);
-   pr_debug("DQM returned %d for create_queue\n", retval);
print_queue(q);
break;
 
@@ -278,7 +277,6 @@ int pqm_create_queue(struct process_queue_manager *pqm,
pqn->q = q;
pqn->kq = NULL;
	retval = dev->dqm->ops.create_queue(dev->dqm, q, &pdd->qpd);
-   pr_debug("DQM returned %d for create_queue\n", retval);
print_queue(q);
break;
case KFD_QUEUE_TYPE_DIQ:
@@ -299,7 +297,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
}
 
if (retval != 0) {
-   pr_err("Pasid 0x%x DQM create queue %d failed. ret %d\n",
+   pr_err("Pasid 0x%x DQM create queue type %d failed. ret %d\n",
pqm->process->pasid, type, retval);
goto err_create_queue;
}
-- 
2.17.1



[PATCH 3/3] drm/amdkfd: Fix bugs in SDMA queues mapping in HWS mode

2020-02-05 Thread Yong Zhao
The sdma_queue_count was only used for inferring whether we should
unmap SDMA queues under HWS mode. In contrast, we only mapped active
queues rather than all in map_queues_cpsch(). In order to match the
map and unmap for SDMA queues, we should just count active SDMA
queues.

Moreover, previously in execute_queues_cpsch(), we determined whether
to unmap SDMA queues based on active_sdma_queue_count. However, its
value only reflected the "to be mapped" SDMA queue count, rather than
the "mapped" count, which actually should be used. For example, if
there is a SDMA queue mapped and the application is destroying it,
when the driver reaches unmap_queues_cpsch(), active_sdma_queue_count
is already 0, so unmap_sdma_queues() won't be triggered, which is a bug.
Fix the issue by recording whether we should call unmap_sdma_queues()
in next execute_queues_cpsch() before mapping all queues.

An optimization is also made. Previously, whenever unmapping SDMA queues,
the code would send one unmapping packet for each SDMA engine to CP
firmware regardless of whether there are SDMA queues mapped on that engine.
By introducing used_sdma_engines_bitmap, which is calculated during
mapping, we can send packets only to the necessary engines during unmapping.

Change-Id: I84fd2f7e63d6b7f664580b425a78d3e995ce9abc
Signed-off-by: Yong Zhao 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 131 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |   4 +-
 .../amd/amdkfd/kfd_process_queue_manager.c|  16 +--
 3 files changed, 71 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 958275db3f55..3ca660acaa1d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -109,6 +109,11 @@ static unsigned int get_num_xgmi_sdma_engines(struct 
device_queue_manager *dqm)
return dqm->dev->device_info->num_xgmi_sdma_engines;
 }
 
+static unsigned int get_num_all_sdma_engines(struct device_queue_manager *dqm)
+{
+   return get_num_sdma_engines(dqm) + get_num_xgmi_sdma_engines(dqm);
+}
+
 unsigned int get_num_sdma_queues(struct device_queue_manager *dqm)
 {
return dqm->dev->device_info->num_sdma_engines
@@ -133,19 +138,27 @@ void program_sh_mem_settings(struct device_queue_manager 
*dqm,
 }
 
 void increment_queue_count(struct device_queue_manager *dqm,
-   enum kfd_queue_type type)
+   struct queue *q)
 {
+   enum kfd_queue_type type = q->properties.type;
+
dqm->active_queue_count++;
if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
dqm->active_cp_queue_count++;
+   else
+   dqm->used_queues_on_sdma[q->properties.sdma_engine_id]++;
 }
 
 void decrement_queue_count(struct device_queue_manager *dqm,
-   enum kfd_queue_type type)
+   struct queue *q)
 {
+   enum kfd_queue_type type = q->properties.type;
+
dqm->active_queue_count--;
if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
dqm->active_cp_queue_count--;
+   else
+   dqm->used_queues_on_sdma[q->properties.sdma_engine_id]--;
 }
 
 static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
@@ -373,12 +386,7 @@ static int create_queue_nocpsch(struct 
device_queue_manager *dqm,
	list_add(&q->list, &qpd->queues_list);
qpd->queue_count++;
if (q->properties.is_active)
-   increment_queue_count(dqm, q->properties.type);
-
-   if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
-   dqm->sdma_queue_count++;
-   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
-   dqm->xgmi_sdma_queue_count++;
+   increment_queue_count(dqm, q);
 
/*
 * Unconditionally increment this counter, regardless of the queue's
@@ -460,15 +468,13 @@ static int destroy_queue_nocpsch_locked(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
 
-   if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE) {
+   if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE)
deallocate_hqd(dqm, q);
-   } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) {
-   dqm->sdma_queue_count--;
+   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
deallocate_sdma_queue(dqm, q);
-   } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI) {
-   dqm->xgmi_sdma_queue_count--;
+   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
deallocate_sdma_queue(dqm, q);
-   } else {
+   else {
pr_debug("q->properties.type %d is invalid\n",
q->properties.type);
return -EINVAL;
@@ 

[PATCH 2/3] drm/amdgpu: Use MAX_SDMA_ENGINE_NUM instead of a number

2020-02-05 Thread Yong Zhao
MAX_SDMA_ENGINE_NUM will be used in more than one place.

Change-Id: I99c84086ee62612b373c547a9d29bc4a69e7c72e
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h| 2 +-
 drivers/gpu/drm/amd/include/kgd_kfd_interface.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
index 3fa18003d4d6..9d41d983a40f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
@@ -52,7 +52,7 @@ struct amdgpu_doorbell_index {
uint32_t userqueue_end;
uint32_t gfx_ring0;
uint32_t gfx_ring1;
-   uint32_t sdma_engine[8];
+   uint32_t sdma_engine[MAX_SDMA_ENGINE_NUM];
uint32_t ih;
union {
struct {
diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h 
b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
index 55750890b73f..3709d3603fb0 100644
--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
@@ -35,6 +35,7 @@
 struct pci_dev;
 
 #define KGD_MAX_QUEUES 128
+#define MAX_SDMA_ENGINE_NUM 8
 
 struct kfd_dev;
 struct kgd_dev;
-- 
2.17.1



Re: [PATCH 5/6] drm/amdkfd: Only count active sdma queues

2020-02-05 Thread Yong Zhao

One minor fix added.

Yong

On 2020-02-05 6:28 p.m., Yong Zhao wrote:

The sdma_queue_count was only used for inferring whether we should
unmap SDMA queues under HWS mode. In contrast, we mapped active queues
rather than all in map_queues_cpsch(). In order to match the map and unmap
for SDMA queues, we should just count active SDMA queues. Meanwhile,
rename sdma_queue_count to active_sdma_queue_count to reflect the new
usage.


[PATCH 6/6] drm/amdkfd: Delete excessive printings

2020-02-05 Thread Yong Zhao
These print statements are either duplicated or not useful.

Change-Id: I88fbe8f5748bbd0a20bcf1f6ca67b9dde99733fe
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c  | 2 --
 drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c | 4 +---
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index cf77b866054a..3bfdc9b251b3 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -305,8 +305,6 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
struct mqd_manager *mqd_mgr;
int retval;
 
-   print_queue(q);
-
dqm_lock(dqm);
 
if (dqm->total_queue_count >= max_num_of_queues_per_device) {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index 941b5876f19f..cf11f4dce98a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -253,7 +253,6 @@ int pqm_create_queue(struct process_queue_manager *pqm,
pqn->q = q;
pqn->kq = NULL;
retval = dev->dqm->ops.create_queue(dev->dqm, q, >qpd);
-   pr_debug("DQM returned %d for create_queue\n", retval);
print_queue(q);
break;
 
@@ -274,7 +273,6 @@ int pqm_create_queue(struct process_queue_manager *pqm,
pqn->q = q;
pqn->kq = NULL;
retval = dev->dqm->ops.create_queue(dev->dqm, q, >qpd);
-   pr_debug("DQM returned %d for create_queue\n", retval);
print_queue(q);
break;
case KFD_QUEUE_TYPE_DIQ:
@@ -295,7 +293,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
}
 
if (retval != 0) {
-   pr_err("Pasid 0x%x DQM create queue %d failed. ret %d\n",
+   pr_err("Pasid 0x%x DQM create queue type %d failed. ret %d\n",
pqm->process->pasid, type, retval);
goto err_create_queue;
}
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 4/6] drm/amdkfd: Fix a memory leak in queue creation error handling

2020-02-05 Thread Yong Zhao
When queue creation fails, some resources are not freed. Fix that.

Change-Id: Ia24b6ad31528dceddfd4d1c58bb1d22c35d3eabf
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index b62ee2e3344a..c604a2ede3f5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -329,6 +329,9 @@ int pqm_create_queue(struct process_queue_manager *pqm,
return retval;
 
 err_create_queue:
+   uninit_queue(q);
+   if (kq)
+   kernel_queue_uninit(kq, false);
kfree(pqn);
 err_allocate_pqn:
/* check if queues list is empty unregister process from device */
-- 
2.17.1
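The fix follows the usual kernel error-unwind idiom: on failure, jump to a label and free everything allocated so far, in reverse order. A standalone sketch of that idiom in plain C (hypothetical names, `malloc`/`free` standing in for the kernel allocators):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the queue objects in the patch. */
struct queue { int id; };
struct pqn { struct queue *q; };

/* Goto-unwind sketch: on failure, fall through the error label so
 * everything allocated so far is freed exactly once. */
struct pqn *create_pqn(int fail_at_queue)
{
	struct pqn *pqn = malloc(sizeof(*pqn));
	struct queue *q;

	if (!pqn)
		return NULL;

	q = fail_at_queue ? NULL : malloc(sizeof(*q));
	if (!q)
		goto err_create_queue;

	pqn->q = q;
	return pqn;

err_create_queue:
	free(q);	/* free(NULL) is a no-op, so this is safe */
	free(pqn);	/* unwind in reverse allocation order */
	return NULL;
}
```

The patch applies the same shape: `uninit_queue(q)` and `kernel_queue_uninit(kq, false)` are added at the `err_create_queue` label so the partially built objects are released before `kfree(pqn)`.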

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/6] drm/amdkfd: Avoid ambiguity by indicating it's cp queue

2020-02-05 Thread Yong Zhao
The queues represented in queue_bitmap are only CP queues.

Change-Id: I7e6a75de39718d7c4da608166b85b9377d06d1b3
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c   |  4 ++--
 .../gpu/drm/amd/amdkfd/kfd_device_queue_manager.c| 12 ++--
 .../gpu/drm/amd/amdkfd/kfd_device_queue_manager.h|  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c  |  2 +-
 .../gpu/drm/amd/amdkfd/kfd_process_queue_manager.c   |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c|  2 +-
 drivers/gpu/drm/amd/include/kgd_kfd_interface.h  |  2 +-
 7 files changed, 13 insertions(+), 13 deletions(-)
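The renamed helper `get_cp_queues_num()` reduces to a population count over the shared bitmap. A userspace approximation of what the kernel's `bitmap_weight()` computes, using a plain `uint64_t` word rather than the kernel bitmap type:

```c
#include <assert.h>
#include <stdint.h>

/* Count the set bits in one 64-bit word -- roughly what
 * bitmap_weight() does per word of the cp_queue_bitmap, where each
 * set bit marks a CP queue available for KFD use. */
static unsigned int word_weight(uint64_t w)
{
	unsigned int n = 0;

	while (w) {
		w &= w - 1;	/* clear the lowest set bit */
		n++;
	}
	return n;
}
```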

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 8609287620ea..ebe4b8f88e79 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -126,7 +126,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
/* this is going to have a few of the MSBs set that we need to
 * clear
 */
-   bitmap_complement(gpu_resources.queue_bitmap,
+   bitmap_complement(gpu_resources.cp_queue_bitmap,
  adev->gfx.mec.queue_bitmap,
  KGD_MAX_QUEUES);
 
@@ -137,7 +137,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
* adev->gfx.mec.num_pipe_per_mec
* adev->gfx.mec.num_queue_per_pipe;
for (i = last_valid_bit; i < KGD_MAX_QUEUES; ++i)
-   clear_bit(i, gpu_resources.queue_bitmap);
+   clear_bit(i, gpu_resources.cp_queue_bitmap);
 
amdgpu_doorbell_get_kfd_info(adev,
_resources.doorbell_physical_address,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 7ef9b89f5c70..973581c2b401 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -78,14 +78,14 @@ static bool is_pipe_enabled(struct device_queue_manager 
*dqm, int mec, int pipe)
/* queue is available for KFD usage if bit is 1 */
for (i = 0; i <  dqm->dev->shared_resources.num_queue_per_pipe; ++i)
if (test_bit(pipe_offset + i,
- dqm->dev->shared_resources.queue_bitmap))
+ dqm->dev->shared_resources.cp_queue_bitmap))
return true;
return false;
 }
 
-unsigned int get_queues_num(struct device_queue_manager *dqm)
+unsigned int get_cp_queues_num(struct device_queue_manager *dqm)
 {
-   return bitmap_weight(dqm->dev->shared_resources.queue_bitmap,
+   return bitmap_weight(dqm->dev->shared_resources.cp_queue_bitmap,
KGD_MAX_QUEUES);
 }
 
@@ -908,7 +908,7 @@ static int initialize_nocpsch(struct device_queue_manager 
*dqm)
 
for (queue = 0; queue < get_queues_per_pipe(dqm); queue++)
if (test_bit(pipe_offset + queue,
-dqm->dev->shared_resources.queue_bitmap))
+			     dqm->dev->shared_resources.cp_queue_bitmap))
dqm->allocated_queues[pipe] |= 1 << queue;
}
 
@@ -1029,7 +1029,7 @@ static int set_sched_resources(struct 
device_queue_manager *dqm)
mec = (i / dqm->dev->shared_resources.num_queue_per_pipe)
/ dqm->dev->shared_resources.num_pipe_per_mec;
 
-   if (!test_bit(i, dqm->dev->shared_resources.queue_bitmap))
+   if (!test_bit(i, dqm->dev->shared_resources.cp_queue_bitmap))
continue;
 
/* only acquire queues from the first MEC */
@@ -1979,7 +1979,7 @@ int dqm_debugfs_hqds(struct seq_file *m, void *data)
 
for (queue = 0; queue < get_queues_per_pipe(dqm); queue++) {
if (!test_bit(pipe_offset + queue,
- dqm->dev->shared_resources.queue_bitmap))
+			  dqm->dev->shared_resources.cp_queue_bitmap))
continue;
 
r = dqm->dev->kfd2kgd->hqd_dump(
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
index ee3400e92c30..3f0fb0d28c01 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
@@ -219,7 +219,7 @@ void device_queue_manager_init_v10_navi10(
struct device_queue_manager_asic_ops *asic_ops);
 void program_sh_mem_settings(struct device_queue_manager *dqm,
struct qcm_process_device *qpd);
-unsigned int get_queues_num(struct 

[PATCH 5/6] drm/amdkfd: Only count active sdma queues

2020-02-05 Thread Yong Zhao
The sdma_queue_count was only used to infer whether we should unmap
SDMA queues under HWS mode. However, in map_queues_cpsch() we mapped
only the active queues, not all of them. To match the map and unmap
operations for SDMA queues, we should count only active SDMA queues.
Meanwhile, rename sdma_queue_count to active_sdma_queue_count to
reflect the new usage.

Change-Id: I9f1c3305dad044a3c779ec0730fcf7554050de8b
Signed-off-by: Yong Zhao 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 54 ---
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  5 +-
 .../amd/amdkfd/kfd_process_queue_manager.c| 16 +++---
 3 files changed, 31 insertions(+), 44 deletions(-)
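The counting scheme this series converges on can be sketched outside the driver: bump a per-type active counter only where a queue actually transitions to active, and decrement on the mirrored inactive path, so the counters always agree with what map_queues_cpsch() would map. Simplified stand-ins below, not the real KFD structures:

```c
#include <assert.h>

enum queue_type { Q_COMPUTE, Q_SDMA, Q_SDMA_XGMI };

/* Simplified stand-in for the device_queue_manager counters. */
struct dqm {
	int active_queue_count;
	int active_cp_queue_count;
	int active_sdma_queue_count;
	int active_xgmi_sdma_queue_count;
};

/* Called only when a queue becomes active. */
void increment_queue_count(struct dqm *dqm, enum queue_type type)
{
	dqm->active_queue_count++;
	if (type == Q_COMPUTE)
		dqm->active_cp_queue_count++;
	else if (type == Q_SDMA)
		dqm->active_sdma_queue_count++;
	else if (type == Q_SDMA_XGMI)
		dqm->active_xgmi_sdma_queue_count++;
}

/* Mirror path: called only when an active queue goes away. */
void decrement_queue_count(struct dqm *dqm, enum queue_type type)
{
	dqm->active_queue_count--;
	if (type == Q_COMPUTE)
		dqm->active_cp_queue_count--;
	else if (type == Q_SDMA)
		dqm->active_sdma_queue_count--;
	else if (type == Q_SDMA_XGMI)
		dqm->active_xgmi_sdma_queue_count--;
}
```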

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 064108cf493b..cf77b866054a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -138,6 +138,10 @@ void increment_queue_count(struct device_queue_manager 
*dqm,
dqm->active_queue_count++;
if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
dqm->active_cp_queue_count++;
+   else if (type == KFD_QUEUE_TYPE_SDMA)
+   dqm->active_sdma_queue_count++;
+   else if (type == KFD_QUEUE_TYPE_SDMA_XGMI)
+   dqm->active_xgmi_sdma_queue_count++;
 }
 
 void decrement_queue_count(struct device_queue_manager *dqm,
@@ -146,6 +150,10 @@ void decrement_queue_count(struct device_queue_manager 
*dqm,
dqm->active_queue_count--;
if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
dqm->active_cp_queue_count--;
+   else if (type == KFD_QUEUE_TYPE_SDMA)
+   dqm->active_sdma_queue_count--;
+   else if (type == KFD_QUEUE_TYPE_SDMA_XGMI)
+   dqm->active_xgmi_sdma_queue_count--;
 }
 
 static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
@@ -377,11 +385,6 @@ static int create_queue_nocpsch(struct 
device_queue_manager *dqm,
if (q->properties.is_active)
increment_queue_count(dqm, q->properties.type);
 
-   if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
-   dqm->sdma_queue_count++;
-   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
-   dqm->xgmi_sdma_queue_count++;
-
/*
 * Unconditionally increment this counter, regardless of the queue's
 * type or whether the queue is active.
@@ -462,15 +465,13 @@ static int destroy_queue_nocpsch_locked(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
 
-   if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE) {
+   if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE)
deallocate_hqd(dqm, q);
-   } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) {
-   dqm->sdma_queue_count--;
+   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
deallocate_sdma_queue(dqm, q);
-   } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI) {
-   dqm->xgmi_sdma_queue_count--;
+   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
deallocate_sdma_queue(dqm, q);
-   } else {
+   else {
pr_debug("q->properties.type %d is invalid\n",
q->properties.type);
return -EINVAL;
@@ -916,8 +917,8 @@ static int initialize_nocpsch(struct device_queue_manager 
*dqm)
mutex_init(&dqm->lock_hidden);
INIT_LIST_HEAD(&dqm->queues);
dqm->active_queue_count = dqm->next_pipe_to_allocate = 0;
-   dqm->sdma_queue_count = 0;
-   dqm->xgmi_sdma_queue_count = 0;
+   dqm->active_sdma_queue_count = 0;
+   dqm->active_xgmi_sdma_queue_count = 0;
 
for (pipe = 0; pipe < get_pipes_per_mec(dqm); pipe++) {
int pipe_offset = pipe * get_queues_per_pipe(dqm);
@@ -1081,8 +1082,8 @@ static int initialize_cpsch(struct device_queue_manager 
*dqm)
mutex_init(&dqm->lock_hidden);
INIT_LIST_HEAD(&dqm->queues);
dqm->active_queue_count = dqm->processes_count = 0;
-   dqm->sdma_queue_count = 0;
-   dqm->xgmi_sdma_queue_count = 0;
+   dqm->active_sdma_queue_count = 0;
+   dqm->active_xgmi_sdma_queue_count = 0;
dqm->active_runlist = false;
dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - get_num_xgmi_sdma_queues(dqm));
@@ -1254,11 +1255,6 @@ static int create_queue_cpsch(struct 
device_queue_manager *dqm, struct queue *q,
list_add(&q->list, &qpd->queues_list);
qpd->queue_count++;
 
-   if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
-   dqm->sdma_queue_count++;
-   else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
-   dqm->xgmi_sdma_queue_count++;
-
if 

[PATCH 3/6] drm/amdkfd: Count active CP queues directly

2020-02-05 Thread Yong Zhao
The previous way of calculating the number of active CP queues is
problematic if some SDMA queues are inactive. Fix that by counting
active CP queues directly.

Change-Id: I5ffaa75a95cbebc984558199ba2f3db6909c52a9
Signed-off-by: Yong Zhao 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 45 +--
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  1 +
 .../gpu/drm/amd/amdkfd/kfd_packet_manager.c   |  3 +-
 3 files changed, 33 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 973581c2b401..064108cf493b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -132,6 +132,22 @@ void program_sh_mem_settings(struct device_queue_manager 
*dqm,
qpd->sh_mem_bases);
 }
 
+void increment_queue_count(struct device_queue_manager *dqm,
+   enum kfd_queue_type type)
+{
+   dqm->active_queue_count++;
+   if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
+   dqm->active_cp_queue_count++;
+}
+
+void decrement_queue_count(struct device_queue_manager *dqm,
+   enum kfd_queue_type type)
+{
+   dqm->active_queue_count--;
+   if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
+   dqm->active_cp_queue_count--;
+}
+
 static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
 {
struct kfd_dev *dev = qpd->dqm->dev;
@@ -359,7 +375,7 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
list_add(&q->list, &qpd->queues_list);
qpd->queue_count++;
if (q->properties.is_active)
-   dqm->active_queue_count++;
+   increment_queue_count(dqm, q->properties.type);
 
if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
dqm->sdma_queue_count++;
@@ -494,7 +510,7 @@ static int destroy_queue_nocpsch_locked(struct 
device_queue_manager *dqm,
}
qpd->queue_count--;
if (q->properties.is_active)
-   dqm->active_queue_count--;
+   decrement_queue_count(dqm, q->properties.type);
 
return retval;
 }
@@ -567,9 +583,9 @@ static int update_queue(struct device_queue_manager *dqm, 
struct queue *q)
 * uploaded.
 */
if (q->properties.is_active && !prev_active)
-   dqm->active_queue_count++;
+   increment_queue_count(dqm, q->properties.type);
else if (!q->properties.is_active && prev_active)
-   dqm->active_queue_count--;
+   decrement_queue_count(dqm, q->properties.type);
 
if (dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS)
retval = map_queues_cpsch(dqm);
@@ -618,7 +634,7 @@ static int evict_process_queues_nocpsch(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
q->properties.is_active = false;
-   dqm->active_queue_count--;
+   decrement_queue_count(dqm, q->properties.type);
 
if (WARN_ONCE(!dqm->sched_running, "Evict when stopped\n"))
continue;
@@ -662,7 +678,7 @@ static int evict_process_queues_cpsch(struct 
device_queue_manager *dqm,
continue;
 
q->properties.is_active = false;
-   dqm->active_queue_count--;
+   decrement_queue_count(dqm, q->properties.type);
}
retval = execute_queues_cpsch(dqm,
qpd->is_debug ?
@@ -731,7 +747,7 @@ static int restore_process_queues_nocpsch(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
q->properties.is_active = true;
-   dqm->active_queue_count++;
+   increment_queue_count(dqm, q->properties.type);
 
if (WARN_ONCE(!dqm->sched_running, "Restore when stopped\n"))
continue;
@@ -786,7 +802,7 @@ static int restore_process_queues_cpsch(struct 
device_queue_manager *dqm,
continue;
 
q->properties.is_active = true;
-   dqm->active_queue_count++;
+   increment_queue_count(dqm, q->properties.type);
}
retval = execute_queues_cpsch(dqm,
KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
@@ -1158,7 +1174,7 @@ static int create_kernel_queue_cpsch(struct 
device_queue_manager *dqm,
dqm->total_queue_count);
 
list_add(&kq->list, &qpd->priv_queue_list);
-   dqm->active_queue_count++;
+   increment_queue_count(dqm, kq->queue->properties.type);
qpd->is_debug = true;
execute_queues_cpsch(dqm, 

[PATCH 1/6] drm/amdkfd: Rename queue_count to active_queue_count

2020-02-05 Thread Yong Zhao
The new name makes the code easier to understand.

Change-Id: I9064dab1d022e02780023131f940fff578a06b72
Signed-off-by: Yong Zhao 
---
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 38 +--
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |  2 +-
 .../gpu/drm/amd/amdkfd/kfd_packet_manager.c   |  4 +-
 .../amd/amdkfd/kfd_process_queue_manager.c|  2 +-
 4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 80d22bf702e8..7ef9b89f5c70 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -359,7 +359,7 @@ static int create_queue_nocpsch(struct device_queue_manager 
*dqm,
list_add(&q->list, &qpd->queues_list);
qpd->queue_count++;
if (q->properties.is_active)
-   dqm->queue_count++;
+   dqm->active_queue_count++;
 
if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
dqm->sdma_queue_count++;
@@ -494,7 +494,7 @@ static int destroy_queue_nocpsch_locked(struct 
device_queue_manager *dqm,
}
qpd->queue_count--;
if (q->properties.is_active)
-   dqm->queue_count--;
+   dqm->active_queue_count--;
 
return retval;
 }
@@ -563,13 +563,13 @@ static int update_queue(struct device_queue_manager *dqm, 
struct queue *q)
/*
 * check active state vs. the previous state and modify
 * counter accordingly. map_queues_cpsch uses the
-* dqm->queue_count to determine whether a new runlist must be
+* dqm->active_queue_count to determine whether a new runlist must be
 * uploaded.
 */
if (q->properties.is_active && !prev_active)
-   dqm->queue_count++;
+   dqm->active_queue_count++;
else if (!q->properties.is_active && prev_active)
-   dqm->queue_count--;
+   dqm->active_queue_count--;
 
if (dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS)
retval = map_queues_cpsch(dqm);
@@ -618,7 +618,7 @@ static int evict_process_queues_nocpsch(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
q->properties.is_active = false;
-   dqm->queue_count--;
+   dqm->active_queue_count--;
 
if (WARN_ONCE(!dqm->sched_running, "Evict when stopped\n"))
continue;
@@ -662,7 +662,7 @@ static int evict_process_queues_cpsch(struct 
device_queue_manager *dqm,
continue;
 
q->properties.is_active = false;
-   dqm->queue_count--;
+   dqm->active_queue_count--;
}
retval = execute_queues_cpsch(dqm,
qpd->is_debug ?
@@ -731,7 +731,7 @@ static int restore_process_queues_nocpsch(struct 
device_queue_manager *dqm,
mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
q->properties.type)];
q->properties.is_active = true;
-   dqm->queue_count++;
+   dqm->active_queue_count++;
 
if (WARN_ONCE(!dqm->sched_running, "Restore when stopped\n"))
continue;
@@ -786,7 +786,7 @@ static int restore_process_queues_cpsch(struct 
device_queue_manager *dqm,
continue;
 
q->properties.is_active = true;
-   dqm->queue_count++;
+   dqm->active_queue_count++;
}
retval = execute_queues_cpsch(dqm,
KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
@@ -899,7 +899,7 @@ static int initialize_nocpsch(struct device_queue_manager 
*dqm)
 
mutex_init(&dqm->lock_hidden);
INIT_LIST_HEAD(&dqm->queues);
-   dqm->queue_count = dqm->next_pipe_to_allocate = 0;
+   dqm->active_queue_count = dqm->next_pipe_to_allocate = 0;
dqm->sdma_queue_count = 0;
dqm->xgmi_sdma_queue_count = 0;
 
@@ -924,7 +924,7 @@ static void uninitialize(struct device_queue_manager *dqm)
 {
int i;
 
-   WARN_ON(dqm->queue_count > 0 || dqm->processes_count > 0);
+   WARN_ON(dqm->active_queue_count > 0 || dqm->processes_count > 0);
 
kfree(dqm->allocated_queues);
for (i = 0 ; i < KFD_MQD_TYPE_MAX ; i++)
@@ -1064,7 +1064,7 @@ static int initialize_cpsch(struct device_queue_manager 
*dqm)
 
mutex_init(&dqm->lock_hidden);
INIT_LIST_HEAD(&dqm->queues);
-   dqm->queue_count = dqm->processes_count = 0;
+   dqm->active_queue_count = dqm->processes_count = 0;
dqm->sdma_queue_count = 0;
dqm->xgmi_sdma_queue_count = 0;
dqm->active_runlist = false;
@@ -1158,7 +1158,7 @@ static int create_kernel_queue_cpsch(struct 
device_queue_manager *dqm,

Re: [PATCH 4/4] drm/amdgpu: use amdgpu_device_vram_access in amdgpu_ttm_access_memory

2020-02-05 Thread Alex Deucher
Series is:
Reviewed-by: Alex Deucher 

On Wed, Feb 5, 2020 at 10:22 AM Christian König
 wrote:
>
> Make use of the better performance here as well.
>
> This patch is only compile tested!
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 38 +++--
>  1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 58d143b24ba0..538c3b52b712 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1565,7 +1565,7 @@ static int amdgpu_ttm_access_memory(struct 
> ttm_buffer_object *bo,
>
> while (len && pos < adev->gmc.mc_vram_size) {
> uint64_t aligned_pos = pos & ~(uint64_t)3;
> -   uint32_t bytes = 4 - (pos & 3);
> +   uint64_t bytes = 4 - (pos & 3);
> uint32_t shift = (pos & 3) * 8;
> uint32_t mask = 0xffffffff << shift;
>
> @@ -1574,20 +1574,28 @@ static int amdgpu_ttm_access_memory(struct 
> ttm_buffer_object *bo,
> bytes = len;
> }
>
> -   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
> -   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
> -   WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
> -   if (!write || mask != 0xffffffff)
> -   value = RREG32_NO_KIQ(mmMM_DATA);
> -   if (write) {
> -   value &= ~mask;
> -   value |= (*(uint32_t *)buf << shift) & mask;
> -   WREG32_NO_KIQ(mmMM_DATA, value);
> -   }
> -   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
> -   if (!write) {
> -   value = (value & mask) >> shift;
> -   memcpy(buf, &value, bytes);
> +   if (mask != 0xffffffff) {
> +   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
> +   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
> +   WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
> +   if (!write || mask != 0xffffffff)
> +   value = RREG32_NO_KIQ(mmMM_DATA);
> +   if (write) {
> +   value &= ~mask;
> +   value |= (*(uint32_t *)buf << shift) & mask;
> +   WREG32_NO_KIQ(mmMM_DATA, value);
> +   }
> +   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
> +   if (!write) {
> +   value = (value & mask) >> shift;
> +   memcpy(buf, &value, bytes);
> +   }
> +   } else {
> +   bytes = (nodes->start + nodes->size) << PAGE_SHIFT;
> +   bytes = min(pos - bytes, (uint64_t)len & ~0x3ull);
> +
> +   amdgpu_device_vram_access(adev, pos, (uint32_t *)buf,
> + bytes, write);
> }
>
> ret += bytes;
> --
> 2.17.1
>
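The slow path retained by the patch handles accesses that do not start or end on a 4-byte boundary: it reads the containing 32-bit word, merges the affected bytes through a shifted mask, and writes the word back. A self-contained sketch of that merge, operating on an in-memory word instead of the MM_INDEX/MM_DATA registers, and assuming a little-endian host:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read-modify-write 'bytes' (1..4, within one word) bytes of 'buf'
 * into the 32-bit word at vram[aligned_pos/4], starting at byte
 * offset (pos & 3) -- the same merge the MMIO slow path performs. */
static void rmw_unaligned(uint32_t *vram, uint64_t pos,
			  const uint8_t *buf, uint32_t bytes)
{
	uint64_t aligned_pos = pos & ~(uint64_t)3;
	uint32_t shift = (pos & 3) * 8;
	uint32_t mask = 0xffffffffu << shift;
	uint32_t src = 0;
	uint32_t value;

	/* Narrow the mask when the access ends before the word does. */
	if (bytes < 4 - (pos & 3))
		mask &= (1u << (shift + bytes * 8)) - 1;

	memcpy(&src, buf, bytes);          /* little-endian assumption */
	value = vram[aligned_pos / 4];     /* read the whole word */
	value &= ~mask;                    /* clear the affected bytes */
	value |= (src << shift) & mask;    /* merge in the new bytes */
	vram[aligned_pos / 4] = value;     /* write the word back */
}
```

The patch's point is that this word-at-a-time dance is only needed for the unaligned head and tail; the aligned middle of the access can go through the faster amdgpu_device_vram_access() path in bulk.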


Re: [Dali] Raven 2 detection Patch

2020-02-05 Thread Alex Deucher
just a couple of typos in the patch title:
drm\amdgpu: [DALI] Dali Varient Detection
It should be:
drm/amdgpu: [DALI] Dali Variant Detection
With that fixed, patch is:
Reviewed-by: Alex Deucher 

On Wed, Feb 5, 2020 at 9:59 AM Tawfik, Aly  wrote:
>
> Hi,
>
>
>
> Dali is a raven2-based ASIC that runs at a lower (6W) TDP than other raven2
> chips. Currently the fused internal id is the same on all raven2 boards,
> which means that detection must be done through the PCIE REV ID.
>
> Unfortunately, the PCIE REV ID is not defined inside the scope of display.
> I created a patch that alters the fused value of the internal rev_id when
> the chip is detected as Dali through the PCIE REV ID, so that the chip can
> be detected inside the display core.
>
>
>
> Can you kindly provide feedback on this workaround.
>
>
>
> Best Regards,
>
> Aly Tawfik
>
>
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: Fix some use after free bugs

2020-02-05 Thread Alex Deucher
Applied.  Thanks!

Alex

On Wed, Feb 5, 2020 at 1:44 PM Bhawanpreet Lakha
 wrote:
>
> Reviewed-by: Bhawanpreet Lakha 
>
> On 2020-02-05 1:38 p.m., Dan Carpenter wrote:
> > These frees need to be re-ordered so that we don't dereference "hdcp_work"
> > right after it's freed.  Also in hdcp_create_workqueue() there is a
> > problem that "hdcp_work" can be NULL if the allocation fails so it would
> > lead to a NULL dereference in the cleanup code.
> >
> > Fixes: 9aeb8a134a0a ("drm/amd/display: Add sysfs interface for set/get srm")
> > Signed-off-by: Dan Carpenter 
> > ---
> >   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c | 9 +
> >   1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
> > index 1768a33b1dc3..f3330df782a4 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
> > @@ -380,9 +380,9 @@ void hdcp_destroy(struct hdcp_workqueue *hdcp_work)
> >   cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork);
> >   }
> >
> > - kfree(hdcp_work);
> >   kfree(hdcp_work->srm);
> >   kfree(hdcp_work->srm_temp);
> > + kfree(hdcp_work);
> >   }
> >
> >   static void update_config(void *handle, struct cp_psp_stream_config 
> > *config)
> > @@ -555,11 +555,12 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct 
> > amdgpu_device *adev, struct
> >   {
> >
> >   int max_caps = dc->caps.max_links;
> > - struct hdcp_workqueue *hdcp_work = 
> > kzalloc(max_caps*sizeof(*hdcp_work), GFP_KERNEL);
> > + struct hdcp_workqueue *hdcp_work;
> >   int i = 0;
> >
> > + hdcp_work = kcalloc(max_caps, sizeof(*hdcp_work), GFP_KERNEL);
> >   if (hdcp_work == NULL)
> > - goto fail_alloc_context;
> > + return NULL;
> >
> >   hdcp_work->srm = kcalloc(PSP_HDCP_SRM_FIRST_GEN_MAX_SIZE, 
> > sizeof(*hdcp_work->srm), GFP_KERNEL);
> >
> > @@ -602,9 +603,9 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct 
> > amdgpu_device *adev, struct
> >   return hdcp_work;
> >
> >   fail_alloc_context:
> > - kfree(hdcp_work);
> >   kfree(hdcp_work->srm);
> >   kfree(hdcp_work->srm_temp);
> > + kfree(hdcp_work);
> >
> >   return NULL;
> >


Re: [PATCH] drm/amd/display: Fix wrongly passed static prefix

2020-02-05 Thread Alex Deucher
On Wed, Feb 5, 2020 at 11:55 AM Takashi Iwai  wrote:
>
> On Thu, 28 Nov 2019 15:35:23 +0100,
> Harry Wentland wrote:
> >
> > On 2019-11-28 3:27 a.m., Takashi Iwai wrote:
> > > Currently, gcc spews a warning as:
> > >   drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hubbub.c: In 
> > > function ‘hubbub1_verify_allow_pstate_change_high’:
> > >   ./include/drm/drm_print.h:316:2: warning: ‘debug_data’ may be used 
> > > uninitialized in this function [-Wmaybe-uninitialized]
> > >
> > > This is because the code checks against a static value although it's
> > > basically a constant and guaranteed to be set.
> > >
> > > This patch changes the type prefix from static to const for addressing
> > > the compile warning above and also for letting the compiler optimize
> > > better.
> > >
> > > Fixes: 62d591a8e00c ("drm/amd/display: create new files for hubbub 
> > > functions")
> > > Signed-off-by: Takashi Iwai 
> >
> > Reviewed-by: Harry Wentland 
> >
> > Harry
>
> This patch seems forgotten?  The compile warning is still present in
> the latest for-next.
>

Sorry, totally missed this one.  Applied.

Alex

>
> thanks,
>
> Takashi
>
> >
> > > ---
> > >  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
> > > b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > > index a02c10e23e0d..b5c44c3bdb98 100644
> > > --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > > +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > > @@ -128,8 +128,8 @@ bool hubbub1_verify_allow_pstate_change_high(
> > >  * pstate takes around ~100us on linux. Unknown currently as to
> > >  * why it takes that long on linux
> > >  */
> > > -   static unsigned int pstate_wait_timeout_us = 200;
> > > -   static unsigned int pstate_wait_expected_timeout_us = 40;
> > > +   const unsigned int pstate_wait_timeout_us = 200;
> > > +   const unsigned int pstate_wait_expected_timeout_us = 40;
> > > static unsigned int max_sampled_pstate_wait_us; /* data collection */
> > > static bool forced_pstate_allow; /* help with revert wa */
> > >
> > >
> >
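The distinction the patch leans on: a static local persists across calls (and, being mutable, defeats the compiler's value tracking, hence the -Wmaybe-uninitialized noise), while a const local is a per-call constant the optimizer can fold. A minimal illustration, unrelated to the actual hubbub code:

```c
#include <assert.h>

/* A static local keeps its value across calls... */
static int bump_static(void)
{
	static int calls;	/* zero-initialized once, then persists */

	calls++;
	return calls;
}

/* ...while a const local is just a per-call constant, which the
 * compiler can fold and track through later branches. */
static int timeout_us(void)
{
	const int pstate_wait_timeout_us = 200;

	return pstate_wait_timeout_us;
}
```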


Re: [PATCH] drm/amd/display: Fix some use after free bugs

2020-02-05 Thread Bhawanpreet Lakha

Reviewed-by: Bhawanpreet Lakha 

On 2020-02-05 1:38 p.m., Dan Carpenter wrote:

These frees need to be re-ordered so that we don't dereference "hdcp_work"
right after it's freed.  Also in hdcp_create_workqueue() there is a
problem that "hdcp_work" can be NULL if the allocation fails so it would
lead to a NULL dereference in the cleanup code.

Fixes: 9aeb8a134a0a ("drm/amd/display: Add sysfs interface for set/get srm")
Signed-off-by: Dan Carpenter 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c | 9 +
  1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
index 1768a33b1dc3..f3330df782a4 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
@@ -380,9 +380,9 @@ void hdcp_destroy(struct hdcp_workqueue *hdcp_work)
cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork);
}
  
-	kfree(hdcp_work);
kfree(hdcp_work->srm);
kfree(hdcp_work->srm_temp);
+   kfree(hdcp_work);
  }
  
  static void update_config(void *handle, struct cp_psp_stream_config *config)

@@ -555,11 +555,12 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct 
amdgpu_device *adev, struct
  {
  
  	int max_caps = dc->caps.max_links;

-   struct hdcp_workqueue *hdcp_work = kzalloc(max_caps*sizeof(*hdcp_work), 
GFP_KERNEL);
+   struct hdcp_workqueue *hdcp_work;
int i = 0;
  
+	hdcp_work = kcalloc(max_caps, sizeof(*hdcp_work), GFP_KERNEL);

if (hdcp_work == NULL)
-   goto fail_alloc_context;
+   return NULL;
  
  	hdcp_work->srm = kcalloc(PSP_HDCP_SRM_FIRST_GEN_MAX_SIZE, sizeof(*hdcp_work->srm), GFP_KERNEL);
  
@@ -602,9 +603,9 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct

return hdcp_work;
  
  fail_alloc_context:

-   kfree(hdcp_work);
kfree(hdcp_work->srm);
kfree(hdcp_work->srm_temp);
+   kfree(hdcp_work);
  
  	return NULL;
  



Re: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.

2020-02-05 Thread Lucas Stach
Hi Andrey,

This commit breaks all drivers that may bail out of timeout
processing because they wish to extend the timeout (etnaviv, v3d).

Those drivers currently just return from the timeout handler before
calling drm_sched_stop(), which means with this commit applied we are
removing the first job from the ring_mirror_list, but never put it
back. This leads to jobs getting lost from the ring mirror, which then
causes quite a bit of fallout like unsignaled fences.

Not sure yet what to do about it, we can either add a function to add
the job back to the ring_mirror if the driver wants to extend the
timeout, or we could look for another way to stop
drm_sched_cleanup_jobs from freeing jobs that are currently in timeout
processing.

Regards,
Lucas

On Mo, 2019-11-25 at 15:51 -0500, Andrey Grodzovsky wrote:
> Problem:
> Due to a race between drm_sched_cleanup_jobs in the sched thread and
> drm_sched_job_timedout in the timeout work there is a possibility that
> the bad job is freed while still being accessed from the
> timeout thread.
> 
> Fix:
> Instead of just peeking at the bad job in the mirror list,
> remove it from the list under lock and then put it back later, when
> we are guaranteed that no race with the main sched thread is possible,
> which is after the thread is parked.
> 
> v2: Lock around processing ring_mirror_list in drm_sched_cleanup_jobs.
> 
> v3: Rebase on top of drm-misc-next. v2 is not needed anymore as
> drm_sched_get_cleanup_job already has a lock there.
> 
> v4: Fix comments to reflect latest code in drm-misc.
> 
> Signed-off-by: Andrey Grodzovsky 
> Reviewed-by: Christian König 
> Tested-by: Emily Deng 
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 27 +++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> b/drivers/gpu/drm/scheduler/sched_main.c
> index 6774955..1bf9c40 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -284,10 +284,21 @@ static void drm_sched_job_timedout(struct work_struct *work)
>   unsigned long flags;
>  
>   sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
> +
> + /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
> + spin_lock_irqsave(&sched->job_list_lock, flags);
>   job = list_first_entry_or_null(&sched->ring_mirror_list,
>  struct drm_sched_job, node);
>  
>   if (job) {
> + /*
> +  * Remove the bad job so it cannot be freed by concurrent
> +  * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
> +  * is parked at which point it's safe.
> +  */
> + list_del_init(&job->node);
> + spin_unlock_irqrestore(&sched->job_list_lock, flags);
> +
>   job->sched->ops->timedout_job(job);
>  
>   /*
> @@ -298,6 +309,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
>   job->sched->ops->free_job(job);
>   sched->free_guilty = false;
>   }
> + } else {
> + spin_unlock_irqrestore(&sched->job_list_lock, flags);
>   }
>  
>   spin_lock_irqsave(&sched->job_list_lock, flags);
> @@ -370,6 +383,20 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
>   kthread_park(sched->thread);
>  
>   /*
> +  * Reinsert back the bad job here - now it's safe as
> +  * drm_sched_get_cleanup_job cannot race against us and release the
> +  * bad job at this point - we parked (waited for) any in progress
> +  * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> +  * now until the scheduler thread is unparked.
> +  */
> + if (bad && bad->sched == sched)
> + /*
> +  * Add at the head of the queue to reflect it was the earliest
> +  * job extracted.
> +  */
> + list_add(&bad->node, &sched->ring_mirror_list);
> +
> + /*
>* Iterate the job list from later to earlier one and either deactivate
>* their HW callbacks or remove them from mirror list if they already
>* signaled.



[PATCH] drm/amd/display: Fix some use after free bugs

2020-02-05 Thread Dan Carpenter
These frees need to be re-ordered so that we don't dereference "hdcp_work"
right after it's freed.  Also in hdcp_create_workqueue() there is a
problem that "hdcp_work" can be NULL if the allocation fails, so it would
lead to a NULL dereference in the cleanup code.

Fixes: 9aeb8a134a0a ("drm/amd/display: Add sysfs interface for set/get srm")
Signed-off-by: Dan Carpenter 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
index 1768a33b1dc3..f3330df782a4 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
@@ -380,9 +380,9 @@ void hdcp_destroy(struct hdcp_workqueue *hdcp_work)
cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork);
}
 
-   kfree(hdcp_work);
kfree(hdcp_work->srm);
kfree(hdcp_work->srm_temp);
+   kfree(hdcp_work);
 }
 
 static void update_config(void *handle, struct cp_psp_stream_config *config)
@@ -555,11 +555,12 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
 {
 
int max_caps = dc->caps.max_links;
-   struct hdcp_workqueue *hdcp_work = kzalloc(max_caps*sizeof(*hdcp_work), GFP_KERNEL);
+   struct hdcp_workqueue *hdcp_work;
int i = 0;
 
+   hdcp_work = kcalloc(max_caps, sizeof(*hdcp_work), GFP_KERNEL);
if (hdcp_work == NULL)
-   goto fail_alloc_context;
+   return NULL;
 
hdcp_work->srm = kcalloc(PSP_HDCP_SRM_FIRST_GEN_MAX_SIZE, sizeof(*hdcp_work->srm), GFP_KERNEL);
 
@@ -602,9 +603,9 @@ struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct
return hdcp_work;
 
 fail_alloc_context:
-   kfree(hdcp_work);
kfree(hdcp_work->srm);
kfree(hdcp_work->srm_temp);
+   kfree(hdcp_work);
 
return NULL;
 
-- 
2.11.0



RE: [PATCH 4/4] drm/amdgpu: use amdgpu_device_vram_access in amdgpu_ttm_access_memory

2020-02-05 Thread Kim, Jonathan

Tested on Vega20 via proc mem op reads.

Old MMIO ~2.7MB/s, Improved MMIO ~3.2MB/s, BAR ~44MB/s

Acked-by: Jonathan Kim 

-Original Message-
From: Christian König  
Sent: Wednesday, February 5, 2020 10:23 AM
To: amd-gfx@lists.freedesktop.org; Kuehling, Felix ; 
Kim, Jonathan 
Subject: [PATCH 4/4] drm/amdgpu: use amdgpu_device_vram_access in 
amdgpu_ttm_access_memory


Make use of the better performance here as well.

This patch is only compile tested!

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 38 +++--
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 58d143b24ba0..538c3b52b712 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1565,7 +1565,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,

while (len && pos < adev->gmc.mc_vram_size) {
uint64_t aligned_pos = pos & ~(uint64_t)3;
-   uint32_t bytes = 4 - (pos & 3);
+   uint64_t bytes = 4 - (pos & 3);
uint32_t shift = (pos & 3) * 8;
uint32_t mask = 0xffffffff << shift;

@@ -1574,20 +1574,28 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
bytes = len;
}

-   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
-   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
-   WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
-   if (!write || mask != 0xffffffff)
-   value = RREG32_NO_KIQ(mmMM_DATA);
-   if (write) {
-   value &= ~mask;
-   value |= (*(uint32_t *)buf << shift) & mask;
-   WREG32_NO_KIQ(mmMM_DATA, value);
-   }
-   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
-   if (!write) {
-   value = (value & mask) >> shift;
-   memcpy(buf, &value, bytes);
+   if (mask != 0xffffffff) {
+   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
+   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
+   WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
+   if (!write || mask != 0xffffffff)
+   value = RREG32_NO_KIQ(mmMM_DATA);
+   if (write) {
+   value &= ~mask;
+   value |= (*(uint32_t *)buf << shift) & mask;
+   WREG32_NO_KIQ(mmMM_DATA, value);
+   }
+   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
+   if (!write) {
+   value = (value & mask) >> shift;
+   memcpy(buf, &value, bytes);
+   }
+   } else {
+   bytes = (nodes->start + nodes->size) << PAGE_SHIFT;
+   bytes = min(pos - bytes, (uint64_t)len & ~0x3ull);
+
+   amdgpu_device_vram_access(adev, pos, (uint32_t *)buf,
+ bytes, write);
}

ret += bytes;
--
2.17.1


Re: [PATCH] drm/amd/display: Fix wrongly passed static prefix

2020-02-05 Thread Takashi Iwai
On Thu, 28 Nov 2019 15:35:23 +0100,
Harry Wentland wrote:
> 
> On 2019-11-28 3:27 a.m., Takashi Iwai wrote:
> > Currently, gcc spews a warning as:
> >   drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hubbub.c: In 
> > function ‘hubbub1_verify_allow_pstate_change_high’:
> >   ./include/drm/drm_print.h:316:2: warning: ‘debug_data’ may be used 
> > uninitialized in this function [-Wmaybe-uninitialized]
> > 
> > This is because the code checks against a static value although it's
> > basically a constant and guaranteed to be set.
> > 
> > This patch changes the type prefix from static to const for addressing
> > the compile warning above and also for letting the compiler optimize
> > better.
> > 
> > Fixes: 62d591a8e00c ("drm/amd/display: create new files for hubbub 
> > functions")
> > Signed-off-by: Takashi Iwai 
> 
> Reviewed-by: Harry Wentland 
> 
> Harry

Has this patch been forgotten?  The compile warning is still present in
the latest for-next.


thanks,

Takashi

> 
> > ---
> >  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
> > b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > index a02c10e23e0d..b5c44c3bdb98 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
> > @@ -128,8 +128,8 @@ bool hubbub1_verify_allow_pstate_change_high(
> >  * pstate takes around ~100us on linux. Unknown currently as to
> >  * why it takes that long on linux
> >  */
> > -   static unsigned int pstate_wait_timeout_us = 200;
> > -   static unsigned int pstate_wait_expected_timeout_us = 40;
> > +   const unsigned int pstate_wait_timeout_us = 200;
> > +   const unsigned int pstate_wait_expected_timeout_us = 40;
> > static unsigned int max_sampled_pstate_wait_us; /* data collection */
> > static bool forced_pstate_allow; /* help with revert wa */
> >  
> > 
> 


Re: [PATCH V6] drm: Add support for DP 1.4 Compliance edid corruption test

2020-02-05 Thread Harry Wentland



On 2020-02-05 10:22 a.m., Jerry (Fangzhi) Zuo wrote:
> Unlike the DP 1.2 edid corruption test, DP 1.4 requires calculating
> the real CRC value of the last edid data block and writing it back.
> The current edid CRC routine adds up every byte, including the last
> CRC byte, and checks that the result is non-zero.
> 
> This behavior is not accurate; we actually need to return
> the real CRC value when corruption is detected.
> This commit fixes the issue by returning the calculated CRC
> and initiating the required sequence.
> 
> Change since v6
> - Add return check
> 
> Change since v5
> - Obtain real CRC value before dumping bad edid
> 
> Change since v4
> - Fix for CI.CHECKPATCH
> 
> Change since v3
> - Fix a minor typo.
> 
> Change since v2
> - Rewrite checksum computation routine to avoid duplicated code.
> - Rename to avoid confusion.
> 
> Change since v1
> - Have separate routine for returning real CRC.
> 
> Signed-off-by: Jerry (Fangzhi) Zuo 

Please make sure to add the Reviewed-bys you've received on previous
versions. I've already reviewed v5 and an earlier one. Please add my
Reviewed-by.

Harry

> ---
>  drivers/gpu/drm/drm_dp_helper.c | 51 +
>  drivers/gpu/drm/drm_edid.c  | 23 ---
>  include/drm/drm_connector.h |  6 
>  include/drm/drm_dp_helper.h |  3 ++
>  4 files changed, 79 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
> index f629fc5494a4..1efd609df402 100644
> --- a/drivers/gpu/drm/drm_dp_helper.c
> +++ b/drivers/gpu/drm/drm_dp_helper.c
> @@ -351,6 +351,57 @@ int drm_dp_dpcd_read_link_status(struct drm_dp_aux *aux,
>  }
>  EXPORT_SYMBOL(drm_dp_dpcd_read_link_status);
>  
> +/**
> + * drm_dp_send_real_edid_checksum() - send back real edid checksum value
> + * @aux: DisplayPort AUX channel
> + * @real_edid_checksum: real edid checksum for the last block
> + *
> + * Returns:
> + * True on success
> + */
> +bool drm_dp_send_real_edid_checksum(struct drm_dp_aux *aux,
> + u8 real_edid_checksum)
> +{
> + u8 link_edid_read = 0, auto_test_req = 0, test_resp = 0;
> +
> + if (drm_dp_dpcd_read(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, &auto_test_req, 1) < 1) {
> + DRM_ERROR("DPCD failed read at register 0x%x\n", DP_DEVICE_SERVICE_IRQ_VECTOR);
> + return false;
> + }
> + auto_test_req &= DP_AUTOMATED_TEST_REQUEST;
> +
> + if (drm_dp_dpcd_read(aux, DP_TEST_REQUEST, &link_edid_read, 1) < 1) {
> + DRM_ERROR("DPCD failed read at register 0x%x\n", DP_TEST_REQUEST);
> + return false;
> + }
> + link_edid_read &= DP_TEST_LINK_EDID_READ;
> +
> + if (!auto_test_req || !link_edid_read) {
> + DRM_DEBUG_KMS("Source DUT does not support TEST_EDID_READ\n");
> + return false;
> + }
> +
> + if (drm_dp_dpcd_write(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, &auto_test_req, 1) < 1) {
> + DRM_ERROR("DPCD failed write at register 0x%x\n", DP_DEVICE_SERVICE_IRQ_VECTOR);
> + return false;
> + }
> +
> + /* send back checksum for the last edid extension block data */
> + if (drm_dp_dpcd_write(aux, DP_TEST_EDID_CHECKSUM, &real_edid_checksum, 1) < 1) {
> + DRM_ERROR("DPCD failed write at register 0x%x\n", DP_TEST_EDID_CHECKSUM);
> + return false;
> + }
> +
> + test_resp |= DP_TEST_EDID_CHECKSUM_WRITE;
> + if (drm_dp_dpcd_write(aux, DP_TEST_RESPONSE, &test_resp, 1) < 1) {
> + DRM_ERROR("DPCD failed write at register 0x%x\n", DP_TEST_RESPONSE);
> + return false;
> + }
> +
> + return true;
> +}
> +EXPORT_SYMBOL(drm_dp_send_real_edid_checksum);
> +
>  /**
>   * drm_dp_downstream_max_clock() - extract branch device max
>   * pixel rate for legacy VGA
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 99769d6c9f84..f064e75fb4c5 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -1590,11 +1590,22 @@ static int validate_displayid(u8 *displayid, int length, int idx);
>  static int drm_edid_block_checksum(const u8 *raw_edid)
>  {
>   int i;
> - u8 csum = 0;
> - for (i = 0; i < EDID_LENGTH; i++)
> + u8 csum = 0, crc = 0;
> +
> + for (i = 0; i < EDID_LENGTH - 1; i++)
>   csum += raw_edid[i];
>  
> - return csum;
> + crc = 0x100 - csum;
> +
> + return crc;
> +}
> +
> +static bool drm_edid_block_checksum_diff(const u8 *raw_edid, u8 real_checksum)
> +{
> + if (raw_edid[EDID_LENGTH - 1] != real_checksum)
> + return true;
> + else
> + return false;
>  }
>  
>  static bool drm_edid_is_zero(const u8 *in_edid, int length)
> @@ -1652,7 +1663,7 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid,
>   }
>  
>   csum = drm_edid_block_checksum(raw_edid);
> - if (csum) {
> + if (drm_edid_block_checksum_diff(raw_edid, csum)) {

[PATCH 06/15] drm/amdgpu/gem: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for gem.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 4 
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index bcd10daa6e39..cb7db7edfc3f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1248,6 +1248,10 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
if (amdgpu_debugfs_fence_init(adev))
dev_err(adev->dev, "fence debugfs file creation failed\n");
 
+   r = amdgpu_debugfs_gem_init(adev);
+   if (r)
+   DRM_ERROR("registering gem debugfs failed (%d).\n", r);
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 3b09897eb1e9..64a275664c72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3091,10 +3091,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
} else
adev->ucode_sysfs_en = true;
 
-   r = amdgpu_debugfs_gem_init(adev);
-   if (r)
-   DRM_ERROR("registering gem debugfs failed (%d).\n", r);
-
r = amdgpu_debugfs_regs_init(adev);
if (r)
DRM_ERROR("registering register debugfs failed (%d).\n", r);
-- 
2.24.1



[PATCH 12/15] drm/amdgpu/display: add a late register connector callback

2020-02-05 Thread Alex Deucher
To handle debugfs setup on non DP MST connectors.

Reviewed-by: Harry Wentland 
Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index af8155708569..b6190079ed3f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4486,6 +4486,19 @@ amdgpu_dm_connector_atomic_duplicate_state(struct drm_connector *connector)
return &new_state->base;
 }
 
+static int
+amdgpu_dm_connector_late_register(struct drm_connector *connector)
+{
+   struct amdgpu_dm_connector *amdgpu_dm_connector =
+   to_amdgpu_dm_connector(connector);
+
+#if defined(CONFIG_DEBUG_FS)
+   connector_debugfs_init(amdgpu_dm_connector);
+#endif
+
+   return 0;
+}
+
 static const struct drm_connector_funcs amdgpu_dm_connector_funcs = {
.reset = amdgpu_dm_connector_funcs_reset,
.detect = amdgpu_dm_connector_detect,
@@ -4495,6 +4508,7 @@ static const struct drm_connector_funcs amdgpu_dm_connector_funcs = {
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
.atomic_set_property = amdgpu_dm_connector_atomic_set_property,
.atomic_get_property = amdgpu_dm_connector_atomic_get_property,
+   .late_register = amdgpu_dm_connector_late_register,
.early_unregister = amdgpu_dm_connector_unregister
 };
 
@@ -5834,10 +5848,6 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm,
drm_connector_attach_encoder(
&aconnector->base, &aencoder->base);
 
-#if defined(CONFIG_DEBUG_FS)
-   connector_debugfs_init(aconnector);
-#endif
-
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort
|| connector_type == DRM_MODE_CONNECTOR_eDP)
amdgpu_dm_initialize_dp_connector(dm, aconnector);
-- 
2.24.1



[PATCH 09/15] drm/amdgpu: don't call drm_connector_register for non-MST ports

2020-02-05 Thread Alex Deucher
The core does this for us now.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c| 1 -
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c  | 1 -
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 -
 3 files changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
index a62cbc8199de..ec1501e3a63a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -1931,7 +1931,6 @@ amdgpu_connector_add(struct amdgpu_device *adev,
connector->polled = DRM_CONNECTOR_POLL_HPD;
 
connector->display_info.subpixel_order = subpixel_order;
-   drm_connector_register(connector);
 
if (has_aux)
amdgpu_atombios_dp_aux_init(amdgpu_connector);
diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c 
b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index e4f94863332c..3c9f2d2490a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -609,7 +609,6 @@ static int dce_virtual_connector_encoder_init(struct 
amdgpu_device *adev,
connector->display_info.subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = false;
connector->doublescan_allowed = false;
-   drm_connector_register(connector);
 
/* link them */
drm_connector_attach_encoder(connector, encoder);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index bd798b6bdf0f..50137df9cdad 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5839,7 +5839,6 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm,
drm_connector_attach_encoder(
&aconnector->base, &aencoder->base);
 
-   drm_connector_register(&aconnector->base);
 #if defined(CONFIG_DEBUG_FS)
connector_debugfs_init(aconnector);
aconnector->debugfs_dpcd_address = 0;
-- 
2.24.1



[PATCH 14/15] drm/amdgpu/ring: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for rings.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 23 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c| 15 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h|  4 
 3 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 82d30bae2ba0..a7e6b5de2c62 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1218,7 +1218,7 @@ DEFINE_SIMPLE_ATTRIBUTE(fops_ib_preempt, NULL,
 
 int amdgpu_debugfs_init(struct amdgpu_device *adev)
 {
-   int r;
+   int r, i;
 
adev->debugfs_preempt =
debugfs_create_file("amdgpu_preempt_ib", 0600,
@@ -1266,12 +1266,33 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
DRM_ERROR("amdgpu: failed initialize dtn debugfs support.\n");
}
 
+   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+   struct amdgpu_ring *ring = adev->rings[i];
+
+   if (!ring)
+   continue;
+
+   if (amdgpu_debugfs_ring_init(adev, ring)) {
+   DRM_ERROR("Failed to register debugfs file for rings !\n");
+   }
+   }
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
 
 void amdgpu_debugfs_fini(struct amdgpu_device *adev)
 {
+   int i;
+
+   for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+   struct amdgpu_ring *ring = adev->rings[i];
+
+   if (!ring)
+   continue;
+
+   amdgpu_debugfs_ring_fini(ring);
+   }
amdgpu_ttm_debugfs_fini(adev);
debugfs_remove(adev->debugfs_preempt);
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index e5c83e164d82..539be138260e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -48,9 +48,6 @@
  * wptr.  The GPU then starts fetching commands and executes
  * them until the pointers are equal again.
  */
-static int amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
-   struct amdgpu_ring *ring);
-static void amdgpu_debugfs_ring_fini(struct amdgpu_ring *ring);
 
 /**
  * amdgpu_ring_alloc - allocate space on the ring buffer
@@ -334,10 +331,6 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct 
amdgpu_ring *ring,
for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
atomic_set(>num_jobs[i], 0);
 
-   if (amdgpu_debugfs_ring_init(adev, ring)) {
-   DRM_ERROR("Failed to register debugfs file for rings !\n");
-   }
-
return 0;
 }
 
@@ -367,8 +360,6 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
  &ring->gpu_addr,
  (void **)&ring->ring);
 
-   amdgpu_debugfs_ring_fini(ring);
-
dma_fence_put(ring->vmid_wait);
ring->vmid_wait = NULL;
ring->me = 0;
@@ -485,8 +476,8 @@ static const struct file_operations 
amdgpu_debugfs_ring_fops = {
 
 #endif
 
-static int amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
-   struct amdgpu_ring *ring)
+int amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
+struct amdgpu_ring *ring)
 {
 #if defined(CONFIG_DEBUG_FS)
struct drm_minor *minor = adev->ddev->primary;
@@ -507,7 +498,7 @@ static int amdgpu_debugfs_ring_init(struct amdgpu_device 
*adev,
return 0;
 }
 
-static void amdgpu_debugfs_ring_fini(struct amdgpu_ring *ring)
+void amdgpu_debugfs_ring_fini(struct amdgpu_ring *ring)
 {
 #if defined(CONFIG_DEBUG_FS)
debugfs_remove(ring->ent);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 5134d0dd6dc2..0d098dafd23c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -329,4 +329,8 @@ static inline void amdgpu_ring_write_multiple(struct 
amdgpu_ring *ring,
 
 int amdgpu_ring_test_helper(struct amdgpu_ring *ring);
 
+int amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
+struct amdgpu_ring *ring);
+void amdgpu_debugfs_ring_fini(struct amdgpu_ring *ring);
+
 #endif
-- 
2.24.1



[PATCH 13/15] drm/amdgpu/display: split dp connector registration

2020-02-05 Thread Alex Deucher
Split into init and register functions to avoid a segfault
in some configs when the load/unload callbacks are removed.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c| 10 ++
 drivers/gpu/drm/amd/amdgpu/atombios_dp.c  |  8 +---
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c   | 11 ---
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
index ec1501e3a63a..635f6c9f625c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -1461,6 +1461,14 @@ static enum drm_mode_status 
amdgpu_connector_dp_mode_valid(struct drm_connector
return MODE_OK;
 }
 
+static int
+amdgpu_connector_late_register(struct drm_connector *connector)
+{
+   struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
+
+   return drm_dp_aux_register(&amdgpu_connector->ddc_bus->aux);
+}
+
 static const struct drm_connector_helper_funcs 
amdgpu_connector_dp_helper_funcs = {
.get_modes = amdgpu_connector_dp_get_modes,
.mode_valid = amdgpu_connector_dp_mode_valid,
@@ -1475,6 +1483,7 @@ static const struct drm_connector_funcs 
amdgpu_connector_dp_funcs = {
.early_unregister = amdgpu_connector_unregister,
.destroy = amdgpu_connector_destroy,
.force = amdgpu_connector_dvi_force,
+   .late_register = amdgpu_connector_late_register,
 };
 
 static const struct drm_connector_funcs amdgpu_connector_edp_funcs = {
@@ -1485,6 +1494,7 @@ static const struct drm_connector_funcs 
amdgpu_connector_edp_funcs = {
.early_unregister = amdgpu_connector_unregister,
.destroy = amdgpu_connector_destroy,
.force = amdgpu_connector_dvi_force,
+   .late_register = amdgpu_connector_late_register,
 };
 
 void
diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c 
b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
index ea702a64f807..dd1e3530399d 100644
--- a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
+++ b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
@@ -186,16 +186,10 @@ amdgpu_atombios_dp_aux_transfer(struct drm_dp_aux *aux, 
struct drm_dp_aux_msg *m
 
 void amdgpu_atombios_dp_aux_init(struct amdgpu_connector *amdgpu_connector)
 {
-   int ret;
-
amdgpu_connector->ddc_bus->rec.hpd = amdgpu_connector->hpd.hpd;
amdgpu_connector->ddc_bus->aux.dev = amdgpu_connector->base.kdev;
amdgpu_connector->ddc_bus->aux.transfer = amdgpu_atombios_dp_aux_transfer;
-   ret = drm_dp_aux_register(&amdgpu_connector->ddc_bus->aux);
-   if (!ret)
-   amdgpu_connector->ddc_bus->has_aux = true;
-
-   WARN(ret, "drm_dp_aux_register_i2c_bus() failed with error %d\n", ret);
+   drm_dp_aux_init(&amdgpu_connector->ddc_bus->aux);
 }
 
 /* general DP utility functions */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 3959c942c88b..a4e6f9d39e12 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -155,6 +155,13 @@ amdgpu_dm_mst_connector_late_register(struct drm_connector *connector)
struct amdgpu_dm_connector *amdgpu_dm_connector =
to_amdgpu_dm_connector(connector);
struct drm_dp_mst_port *port = amdgpu_dm_connector->port;
+   int r;
+
+   r = drm_dp_aux_register(&amdgpu_dm_connector->dm_dp_aux.aux);
+   if (r)
+   return r;
+   drm_dp_cec_register_connector(&amdgpu_dm_connector->dm_dp_aux.aux,
+ connector);
 
 #if defined(CONFIG_DEBUG_FS)
connector_debugfs_init(amdgpu_dm_connector);
@@ -484,9 +491,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
aconnector->dm_dp_aux.aux.transfer = dm_dp_aux_transfer;
aconnector->dm_dp_aux.ddc_service = aconnector->dc_link->ddc;
 
-   drm_dp_aux_register(&aconnector->dm_dp_aux.aux);
-   drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
- &aconnector->base);
+   drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
 
if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
return;
-- 
2.24.1



[PATCH 15/15] drm/amdgpu: drop legacy drm load and unload callbacks

2020-02-05 Thread Alex Deucher
We've moved the debugfs handling into a centralized place
so we can remove the legacy load and unload callbacks.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  5 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c| 13 +++--
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4dc7145368fc..12aab522f459 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3091,10 +3091,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
} else
adev->ucode_sysfs_en = true;
 
-   r = amdgpu_debugfs_init(adev);
-   if (r)
-   DRM_ERROR("Creating debugfs files failed (%d).\n", r);
-
if ((amdgpu_testing & 1)) {
if (adev->accel_working)
amdgpu_test_moves(adev);
@@ -3216,7 +3212,6 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
amdgpu_ucode_sysfs_fini(adev);
if (IS_ENABLED(CONFIG_PERF_EVENTS))
amdgpu_pmu_fini(adev);
-   amdgpu_debugfs_fini(adev);
if (amdgpu_discovery && adev->asic_type >= CHIP_NAVI10)
amdgpu_discovery_fini(adev);
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index f26532998781..9753c55b317d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1031,6 +1031,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
 {
struct drm_device *dev;
+   struct amdgpu_device *adev;
unsigned long flags = ent->driver_data;
int ret, retry = 0;
bool supports_atomic = false;
@@ -1100,6 +1101,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 
pci_set_drvdata(pdev, dev);
 
+   amdgpu_driver_load_kms(dev, ent->driver_data);
+
 retry_init:
ret = drm_dev_register(dev, ent->driver_data);
if (ret == -EAGAIN && ++retry <= 3) {
@@ -1110,6 +1113,11 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
} else if (ret)
goto err_pci;
 
+   adev = dev->dev_private;
+   ret = amdgpu_debugfs_init(adev);
+   if (ret)
+   DRM_ERROR("Creating debugfs files failed (%d).\n", ret);
+
return 0;
 
 err_pci:
@@ -1123,6 +1131,7 @@ static void
 amdgpu_pci_remove(struct pci_dev *pdev)
 {
struct drm_device *dev = pci_get_drvdata(pdev);
+   struct amdgpu_device *adev = dev->dev_private;
 
 #ifdef MODULE
if (THIS_MODULE->state != MODULE_STATE_GOING)
@@ -1130,6 +1139,8 @@ amdgpu_pci_remove(struct pci_dev *pdev)
DRM_ERROR("Hotplug removal is not supported\n");
drm_dev_unplug(dev);
drm_dev_put(dev);
+   amdgpu_debugfs_fini(adev);
+   amdgpu_driver_unload_kms(dev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
 }
@@ -1434,11 +1445,9 @@ static struct drm_driver kms_driver = {
DRIVER_GEM |
DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ |
DRIVER_SYNCOBJ_TIMELINE,
-   .load = amdgpu_driver_load_kms,
.open = amdgpu_driver_open_kms,
.postclose = amdgpu_driver_postclose_kms,
.lastclose = amdgpu_driver_lastclose_kms,
-   .unload = amdgpu_driver_unload_kms,
.get_vblank_counter = amdgpu_get_vblank_counter_kms,
.enable_vblank = amdgpu_enable_vblank_kms,
.disable_vblank = amdgpu_disable_vblank_kms,
-- 
2.24.1



[PATCH 07/15] drm/amdgpu/regs: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for register access files.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 4 
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index cb7db7edfc3f..7721f1416cdb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1252,6 +1252,10 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
if (r)
DRM_ERROR("registering gem debugfs failed (%d).\n", r);
 
+   r = amdgpu_debugfs_regs_init(adev);
+   if (r)
+   DRM_ERROR("registering register debugfs failed (%d).\n", r);
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 64a275664c72..d84a61e18bf8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3091,10 +3091,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
} else
adev->ucode_sysfs_en = true;
 
-   r = amdgpu_debugfs_regs_init(adev);
-   if (r)
-   DRM_ERROR("registering register debugfs failed (%d).\n", r);
-
r = amdgpu_debugfs_firmware_init(adev);
if (r)
DRM_ERROR("registering firmware debugfs failed (%d).\n", r);
-- 
2.24.1



[PATCH 11/15] drm/amd/display: move dpcd debugfs members setup

2020-02-05 Thread Alex Deucher
Into the function that creates the debugfs files rather
than setting them explicitly in the callers.

Reviewed-by: Harry Wentland 
Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c   | 2 --
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c   | 3 +++
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 2 --
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 81c8d8c61d62..af8155708569 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5836,8 +5836,6 @@ static int amdgpu_dm_connector_init(struct 
amdgpu_display_manager *dm,
 
 #if defined(CONFIG_DEBUG_FS)
connector_debugfs_init(aconnector);
-   aconnector->debugfs_dpcd_address = 0;
-   aconnector->debugfs_dpcd_size = 0;
 #endif
 
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index ead5c05eec92..6bc0bdc8835c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -1066,6 +1066,9 @@ void connector_debugfs_init(struct amdgpu_dm_connector 
*connector)
debugfs_create_file_unsafe("force_yuv420_output", 0644, dir, connector,
   &force_yuv420_output_fops);
 
+   connector->debugfs_dpcd_address = 0;
+   connector->debugfs_dpcd_size = 0;
+
 }
 
 /*
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 5672f7765919..3959c942c88b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -158,8 +158,6 @@ amdgpu_dm_mst_connector_late_register(struct drm_connector 
*connector)
 
 #if defined(CONFIG_DEBUG_FS)
connector_debugfs_init(amdgpu_dm_connector);
-   amdgpu_dm_connector->debugfs_dpcd_address = 0;
-   amdgpu_dm_connector->debugfs_dpcd_size = 0;
 #endif
 
return drm_dp_mst_connector_late_register(connector, port);
-- 
2.24.1



[PATCH 05/15] drm/amdgpu/fence: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for fence handling.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 84c5e9f23c76..bcd10daa6e39 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1245,6 +1245,9 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
dev_err(adev->dev, "failed to register debugfs file for SA\n");
}
 
+   if (amdgpu_debugfs_fence_init(adev))
+   dev_err(adev->dev, "fence debugfs file creation failed\n");
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 3c01252b1e0e..7531527067df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -503,9 +503,6 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
  */
 int amdgpu_fence_driver_init(struct amdgpu_device *adev)
 {
-   if (amdgpu_debugfs_fence_init(adev))
-   dev_err(adev->dev, "fence debugfs file creation failed\n");
-
return 0;
 }
 
-- 
2.24.1



[PATCH 03/15] drm/amdgpu/pm: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for pm.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 7 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c  | 9 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h  | 2 ++
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index f49604c0d0b8..c1d66cc6e6d8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -31,6 +31,7 @@
 #include 
 
 #include "amdgpu.h"
+#include "amdgpu_pm.h"
 
 /**
  * amdgpu_debugfs_add_files - Add simple debugfs entries
@@ -1234,6 +1235,12 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
return r;
}
 
+   r = amdgpu_debugfs_pm_init(adev);
+   if (r) {
+   DRM_ERROR("Failed to register debugfs file for dpm!\n");
+   return r;
+   }
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index b03b1eb7ba04..bc3cf04a1a94 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -41,8 +41,6 @@
 #include "hwmgr.h"
 #define WIDTH_4K 3840
 
-static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
-
 static const struct cg_flag_name clocks[] = {
{AMD_CG_SUPPORT_GFX_MGCG, "Graphics Medium Grain Clock Gating"},
{AMD_CG_SUPPORT_GFX_MGLS, "Graphics Medium Grain memory Light Sleep"},
@@ -3398,11 +3396,6 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
DRM_ERROR("failed to create device file unique_id\n");
return ret;
}
-   ret = amdgpu_debugfs_pm_init(adev);
-   if (ret) {
-   DRM_ERROR("Failed to register debugfs file for dpm!\n");
-   return ret;
-   }
 
if ((adev->asic_type >= CHIP_VEGA10) &&
!(adev->flags & AMD_IS_APU)) {
@@ -3669,7 +3662,7 @@ static const struct drm_info_list amdgpu_pm_info_list[] = 
{
 };
 #endif
 
-static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev)
+int amdgpu_debugfs_pm_init(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
return amdgpu_debugfs_add_files(adev, amdgpu_pm_info_list, 
ARRAY_SIZE(amdgpu_pm_info_list));
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h
index 3da1da277805..5db0ef86e84c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h
@@ -43,4 +43,6 @@ void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool 
enable);
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
 
+int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
+
 #endif
-- 
2.24.1



[PATCH 10/15] drm/amdgpu/display: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for display.

Reviewed-by: Harry Wentland 
Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 6 ++
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 -
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 5bf43f20ec30..82d30bae2ba0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -32,6 +32,7 @@
 
 #include "amdgpu.h"
 #include "amdgpu_pm.h"
+#include "amdgpu_dm_debugfs.h"
 
 /**
  * amdgpu_debugfs_add_files - Add simple debugfs entries
@@ -1260,6 +1261,11 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
if (r)
DRM_ERROR("registering firmware debugfs failed (%d).\n", r);
 
+   if (amdgpu_device_has_dc_support(adev)) {
+   if (dtn_debugfs_init(adev))
+   DRM_ERROR("amdgpu: failed initialize dtn debugfs support.\n");
+   }
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 50137df9cdad..81c8d8c61d62 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -991,11 +991,6 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
 
-#if defined(CONFIG_DEBUG_FS)
-   if (dtn_debugfs_init(adev))
-   DRM_ERROR("amdgpu: failed initialize dtn debugfs support.\n");
-#endif
-
DRM_DEBUG_DRIVER("KMS initialized.\n");
 
return 0;
-- 
2.24.1



[PATCH 02/15] drm/amdgpu/ttm: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for ttm.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 10 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 14 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  3 +++
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 58b5e1b4f814..f49604c0d0b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1216,6 +1216,8 @@ DEFINE_SIMPLE_ATTRIBUTE(fops_ib_preempt, NULL,
 
 int amdgpu_debugfs_init(struct amdgpu_device *adev)
 {
+   int r;
+
adev->debugfs_preempt =
debugfs_create_file("amdgpu_preempt_ib", 0600,
adev->ddev->primary->debugfs_root, adev,
@@ -1225,12 +1227,20 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
return -EIO;
}
 
+   /* Register debugfs entries for amdgpu_ttm */
+   r = amdgpu_ttm_debugfs_init(adev);
+   if (r) {
+   DRM_ERROR("Failed to init debugfs\n");
+   return r;
+   }
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
 
 void amdgpu_debugfs_fini(struct amdgpu_device *adev)
 {
+   amdgpu_ttm_debugfs_fini(adev);
debugfs_remove(adev->debugfs_preempt);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 56f743698868..0c35978626d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -66,9 +66,6 @@ static int amdgpu_map_buffer(struct ttm_buffer_object *bo,
 struct amdgpu_ring *ring,
 uint64_t *addr);
 
-static int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev);
-static void amdgpu_ttm_debugfs_fini(struct amdgpu_device *adev);
-
 static int amdgpu_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags)
 {
return 0;
@@ -1911,12 +1908,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
return r;
}
 
-   /* Register debugfs entries for amdgpu_ttm */
-   r = amdgpu_ttm_debugfs_init(adev);
-   if (r) {
-   DRM_ERROR("Failed to init debugfs\n");
-   return r;
-   }
return 0;
 }
 
@@ -1938,7 +1929,6 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
if (!adev->mman.initialized)
return;
 
-   amdgpu_ttm_debugfs_fini(adev);
amdgpu_ttm_training_reserve_vram_fini(adev);
/* return the IP Discovery TMR memory back to VRAM */
amdgpu_bo_free_kernel(&adev->discovery_memory, NULL, NULL);
@@ -2545,7 +2535,7 @@ static const struct {
 
 #endif
 
-static int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev)
+int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
unsigned count;
@@ -2581,7 +2571,7 @@ static int amdgpu_ttm_debugfs_init(struct amdgpu_device 
*adev)
 #endif
 }
 
-static void amdgpu_ttm_debugfs_fini(struct amdgpu_device *adev)
+void amdgpu_ttm_debugfs_fini(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
unsigned i;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index f1ebd424510c..2c4ad5b589d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -133,4 +133,7 @@ uint64_t amdgpu_ttm_tt_pde_flags(struct ttm_tt *ttm, struct 
ttm_mem_reg *mem);
 uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt 
*ttm,
 struct ttm_mem_reg *mem);
 
+int amdgpu_ttm_debugfs_init(struct amdgpu_device *adev);
+void amdgpu_ttm_debugfs_fini(struct amdgpu_device *adev);
+
 #endif
-- 
2.24.1



[PATCH 04/15] drm/amdgpu/sa: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for SA (sub allocator).

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  | 7 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  | 1 +
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index c1d66cc6e6d8..84c5e9f23c76 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1241,6 +1241,10 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
return r;
}
 
+   if (amdgpu_debugfs_sa_init(adev)) {
+   dev_err(adev->dev, "failed to register debugfs file for SA\n");
+   }
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 6e0f97afb030..abf286f2bc5e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -48,7 +48,6 @@
  * produce command buffers which are send to the kernel and
  * put in IBs for execution by the requested ring.
  */
-static int amdgpu_debugfs_sa_init(struct amdgpu_device *adev);
 
 /**
  * amdgpu_ib_get - request an IB (Indirect Buffer)
@@ -295,9 +294,7 @@ int amdgpu_ib_pool_init(struct amdgpu_device *adev)
}
 
adev->ib_pool_ready = true;
-   if (amdgpu_debugfs_sa_init(adev)) {
-   dev_err(adev->dev, "failed to register debugfs file for SA\n");
-   }
+
return 0;
 }
 
@@ -421,7 +418,7 @@ static const struct drm_info_list amdgpu_debugfs_sa_list[] 
= {
 
 #endif
 
-static int amdgpu_debugfs_sa_init(struct amdgpu_device *adev)
+int amdgpu_debugfs_sa_init(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_sa_list, 1);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 26a654cbd530..7d41f7b9a340 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -330,6 +330,7 @@ void amdgpu_sa_bo_free(struct amdgpu_device *adev,
 void amdgpu_sa_bo_dump_debug_info(struct amdgpu_sa_manager *sa_manager,
 struct seq_file *m);
 #endif
+int amdgpu_debugfs_sa_init(struct amdgpu_device *adev);
 
 bool amdgpu_bo_support_uswc(u64 bo_flags);
 
-- 
2.24.1



[PATCH 08/15] drm/amdgpu/firmware: move debugfs init into core amdgpu debugfs

2020-02-05 Thread Alex Deucher
In order to remove the load and unload drm callbacks,
we need to reorder the init sequence to move all the drm
debugfs file handling.  Do this for firmware.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 4 
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 7721f1416cdb..5bf43f20ec30 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1256,6 +1256,10 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
if (r)
DRM_ERROR("registering register debugfs failed (%d).\n", r);
 
+   r = amdgpu_debugfs_firmware_init(adev);
+   if (r)
+   DRM_ERROR("registering firmware debugfs failed (%d).\n", r);
+
return amdgpu_debugfs_add_files(adev, amdgpu_debugfs_list,
ARRAY_SIZE(amdgpu_debugfs_list));
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d84a61e18bf8..4dc7145368fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3091,10 +3091,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
} else
adev->ucode_sysfs_en = true;
 
-   r = amdgpu_debugfs_firmware_init(adev);
-   if (r)
-   DRM_ERROR("registering firmware debugfs failed (%d).\n", r);
-
r = amdgpu_debugfs_init(adev);
if (r)
DRM_ERROR("Creating debugfs files failed (%d).\n", r);
-- 
2.24.1



[PATCH 01/15] drm/amdgpu: rename amdgpu_debugfs_preempt_cleanup

2020-02-05 Thread Alex Deucher
to amdgpu_debugfs_fini.  It will be used for other things in
the future.

Acked-by: Christian König 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index f24ed9a1a3e5..58b5e1b4f814 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1229,7 +1229,7 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
ARRAY_SIZE(amdgpu_debugfs_list));
 }
 
-void amdgpu_debugfs_preempt_cleanup(struct amdgpu_device *adev)
+void amdgpu_debugfs_fini(struct amdgpu_device *adev)
 {
debugfs_remove(adev->debugfs_preempt);
 }
@@ -1239,7 +1239,7 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
 {
return 0;
 }
-void amdgpu_debugfs_preempt_cleanup(struct amdgpu_device *adev) { }
+void amdgpu_debugfs_fini(struct amdgpu_device *adev) { }
 int amdgpu_debugfs_regs_init(struct amdgpu_device *adev)
 {
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
index f289d28ad6b2..b382527e359a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
@@ -34,7 +34,7 @@ struct amdgpu_debugfs {
 int amdgpu_debugfs_regs_init(struct amdgpu_device *adev);
 void amdgpu_debugfs_regs_cleanup(struct amdgpu_device *adev);
 int amdgpu_debugfs_init(struct amdgpu_device *adev);
-void amdgpu_debugfs_preempt_cleanup(struct amdgpu_device *adev);
+void amdgpu_debugfs_fini(struct amdgpu_device *adev);
 int amdgpu_debugfs_add_files(struct amdgpu_device *adev,
 const struct drm_info_list *files,
 unsigned nfiles);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8df7727815cb..3b09897eb1e9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3228,7 +3228,7 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
amdgpu_ucode_sysfs_fini(adev);
if (IS_ENABLED(CONFIG_PERF_EVENTS))
amdgpu_pmu_fini(adev);
-   amdgpu_debugfs_preempt_cleanup(adev);
+   amdgpu_debugfs_fini(adev);
if (amdgpu_discovery && adev->asic_type >= CHIP_NAVI10)
amdgpu_discovery_fini(adev);
 }
-- 
2.24.1



[PATCH 00/15] amdgpu: remove load and unload callbacks (v2)

2020-02-05 Thread Alex Deucher
These are deprecated and the drm will soon start warning when drivers still
use them.  It was a long and twisty road, but seems to work.

v2: Add additional patch (13/15) which should fix the crash reported by
Thomas Zimmermann.

Alex Deucher (15):
  drm/amdgpu: rename amdgpu_debugfs_preempt_cleanup
  drm/amdgpu/ttm: move debugfs init into core amdgpu debugfs
  drm/amdgpu/pm: move debugfs init into core amdgpu debugfs
  drm/amdgpu/sa: move debugfs init into core amdgpu debugfs
  drm/amdgpu/fence: move debugfs init into core amdgpu debugfs
  drm/amdgpu/gem: move debugfs init into core amdgpu debugfs
  drm/amdgpu/regs: move debugfs init into core amdgpu debugfs
  drm/amdgpu/firmware: move debugfs init into core amdgpu debugfs
  drm/amdgpu: don't call drm_connector_register for non-MST ports
  drm/amdgpu/display: move debugfs init into core amdgpu debugfs
  drm/amd/display: move dpcd debugfs members setup
  drm/amdgpu/display: add a late register connector callback
  drm/amdgpu/display: split dp connector registration
  drm/amdgpu/ring: move debugfs init into core amdgpu debugfs
  drm/amdgpu: drop legacy drm load and unload callbacks

 .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 11 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 67 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 17 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 13 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  3 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c|  7 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  9 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h|  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 15 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  4 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   | 14 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |  3 +
 drivers/gpu/drm/amd/amdgpu/atombios_dp.c  |  8 +--
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 -
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 26 +++
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c |  3 +
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 13 ++--
 19 files changed, 131 insertions(+), 88 deletions(-)

-- 
2.24.1



Re: [PATCH] drm/amdgpu/vcn2.5: fix DPG mode power off issue on instance 1

2020-02-05 Thread Leo Liu



On 2020-02-05 9:45 a.m., James Zhu wrote:

Support pause_state for multiple instances; this fixes the vcn2.5 DPG mode
power off issue on instance 1.

Signed-off-by: James Zhu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h |  3 +--
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c   | 14 --
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c   |  6 +++---
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c   |  6 +++---
  4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index d6deb0e..fb3dfe3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -179,6 +179,7 @@ struct amdgpu_vcn_inst {
struct amdgpu_irq_src   irq;
struct amdgpu_vcn_reg   external;
struct amdgpu_bo*dpg_sram_bo;
+   struct dpg_pause_state pause_state;


Can this variable be aligned with other variables in the structure? With 
that fixed, the patch is


Reviewed-by: Leo Liu 



void*dpg_sram_cpu_addr;
uint64_tdpg_sram_gpu_addr;
uint32_t*dpg_sram_curr_addr;
@@ -190,8 +191,6 @@ struct amdgpu_vcn {
const struct firmware   *fw;/* VCN firmware */
unsignednum_enc_rings;
enum amd_powergating_state cur_state;
-   struct dpg_pause_state pause_state;
-
boolindirect_sram;
  
  	uint8_t	num_vcn_inst;

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 1a24fad..71f61af 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1207,9 +1207,10 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device 
*adev,
struct amdgpu_ring *ring;
  
  	/* pause/unpause if state is changed */

-   if (adev->vcn.pause_state.fw_based != new_state->fw_based) {
+   if (adev->vcn.inst[inst_idx].pause_state.fw_based != 
new_state->fw_based) {
DRM_DEBUG("dpg pause state changed %d:%d -> %d:%d",
-   adev->vcn.pause_state.fw_based, 
adev->vcn.pause_state.jpeg,
+   adev->vcn.inst[inst_idx].pause_state.fw_based,
+   adev->vcn.inst[inst_idx].pause_state.jpeg,
new_state->fw_based, new_state->jpeg);
  
  		reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &

@@ -1258,13 +1259,14 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device 
*adev,
reg_data &= ~UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK;
WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, reg_data);
}
-   adev->vcn.pause_state.fw_based = new_state->fw_based;
+   adev->vcn.inst[inst_idx].pause_state.fw_based = 
new_state->fw_based;
}
  
  	/* pause/unpause if state is changed */

-   if (adev->vcn.pause_state.jpeg != new_state->jpeg) {
+   if (adev->vcn.inst[inst_idx].pause_state.jpeg != new_state->jpeg) {
DRM_DEBUG("dpg pause state changed %d:%d -> %d:%d",
-   adev->vcn.pause_state.fw_based, 
adev->vcn.pause_state.jpeg,
+   adev->vcn.inst[inst_idx].pause_state.fw_based,
+   adev->vcn.inst[inst_idx].pause_state.jpeg,
new_state->fw_based, new_state->jpeg);
  
  		reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &

@@ -1318,7 +1320,7 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device 
*adev,
reg_data &= ~UVD_DPG_PAUSE__JPEG_PAUSE_DPG_REQ_MASK;
WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, reg_data);
}
-   adev->vcn.pause_state.jpeg = new_state->jpeg;
+   adev->vcn.inst[inst_idx].pause_state.jpeg = new_state->jpeg;
}
  
  	return 0;

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index 4f72167..c387c81 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -1137,9 +1137,9 @@ static int vcn_v2_0_pause_dpg_mode(struct amdgpu_device 
*adev,
int ret_code;
  
  	/* pause/unpause if state is changed */

-   if (adev->vcn.pause_state.fw_based != new_state->fw_based) {
+   if (adev->vcn.inst[inst_idx].pause_state.fw_based != 
new_state->fw_based) {
DRM_DEBUG("dpg pause state changed %d -> %d",
-   adev->vcn.pause_state.fw_based,  
new_state->fw_based);
+   adev->vcn.inst[inst_idx].pause_state.fw_based,   
new_state->fw_based);
reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &
(~UVD_DPG_PAUSE__NJ_PAUSE_DPG_ACK_MASK);
  
@@ -1185,7 +1185,7 @@ static int vcn_v2_0_pause_dpg_mode(struct amdgpu_device *adev,

reg_data &= ~UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK;
WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, 

[PATCH V6] drm: Add support for DP 1.4 Compliance edid corruption test

2020-02-05 Thread Jerry (Fangzhi) Zuo
Unlike the DP 1.2 edid corruption test, the DP 1.4 test requires
calculating the real CRC value of the last edid data block and writing
it back. The current edid CRC routine adds in the last CRC byte and
only checks whether the result is non-zero.

That behavior is not sufficient here; we need to return the actual CRC
value when corruption is detected. This commit fixes the issue by
returning the calculated CRC and initiating the required test sequence.

Change since v6
- Add return check

Change since v5
- Obtain real CRC value before dumping bad edid

Change since v4
- Fix for CI.CHECKPATCH

Change since v3
- Fix a minor typo.

Change since v2
- Rewrite checksum computation routine to avoid duplicated code.
- Rename to avoid confusion.

Change since v1
- Have separate routine for returning real CRC.

Signed-off-by: Jerry (Fangzhi) Zuo 
---
 drivers/gpu/drm/drm_dp_helper.c | 51 +
 drivers/gpu/drm/drm_edid.c  | 23 ---
 include/drm/drm_connector.h |  6 
 include/drm/drm_dp_helper.h |  3 ++
 4 files changed, 79 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
index f629fc5494a4..1efd609df402 100644
--- a/drivers/gpu/drm/drm_dp_helper.c
+++ b/drivers/gpu/drm/drm_dp_helper.c
@@ -351,6 +351,57 @@ int drm_dp_dpcd_read_link_status(struct drm_dp_aux *aux,
 }
 EXPORT_SYMBOL(drm_dp_dpcd_read_link_status);
 
+/**
+ * drm_dp_send_real_edid_checksum() - send back real edid checksum value
+ * @aux: DisplayPort AUX channel
+ * @real_edid_checksum: real edid checksum for the last block
+ *
+ * Returns:
+ * True on success
+ */
+bool drm_dp_send_real_edid_checksum(struct drm_dp_aux *aux,
+   u8 real_edid_checksum)
+{
+   u8 link_edid_read = 0, auto_test_req = 0, test_resp = 0;
+
+	if (drm_dp_dpcd_read(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, &auto_test_req, 1) < 1) {
+		DRM_ERROR("DPCD failed read at register 0x%x\n", DP_DEVICE_SERVICE_IRQ_VECTOR);
+   return false;
+   }
+   auto_test_req &= DP_AUTOMATED_TEST_REQUEST;
+
+	if (drm_dp_dpcd_read(aux, DP_TEST_REQUEST, &link_edid_read, 1) < 1) {
+		DRM_ERROR("DPCD failed read at register 0x%x\n", DP_TEST_REQUEST);
+   return false;
+   }
+   link_edid_read &= DP_TEST_LINK_EDID_READ;
+
+   if (!auto_test_req || !link_edid_read) {
+   DRM_DEBUG_KMS("Source DUT does not support TEST_EDID_READ\n");
+   return false;
+   }
+
+	if (drm_dp_dpcd_write(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, &auto_test_req, 1) < 1) {
+		DRM_ERROR("DPCD failed write at register 0x%x\n", DP_DEVICE_SERVICE_IRQ_VECTOR);
+   return false;
+   }
+
+   /* send back checksum for the last edid extension block data */
+	if (drm_dp_dpcd_write(aux, DP_TEST_EDID_CHECKSUM, &real_edid_checksum, 1) < 1) {
+		DRM_ERROR("DPCD failed write at register 0x%x\n", DP_TEST_EDID_CHECKSUM);
+   return false;
+   }
+
+   test_resp |= DP_TEST_EDID_CHECKSUM_WRITE;
+	if (drm_dp_dpcd_write(aux, DP_TEST_RESPONSE, &test_resp, 1) < 1) {
+		DRM_ERROR("DPCD failed write at register 0x%x\n", DP_TEST_RESPONSE);
+   return false;
+   }
+
+   return true;
+}
+EXPORT_SYMBOL(drm_dp_send_real_edid_checksum);
+
 /**
  * drm_dp_downstream_max_clock() - extract branch device max
  * pixel rate for legacy VGA
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 99769d6c9f84..f064e75fb4c5 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -1590,11 +1590,22 @@ static int validate_displayid(u8 *displayid, int length, int idx);
 static int drm_edid_block_checksum(const u8 *raw_edid)
 {
int i;
-   u8 csum = 0;
-   for (i = 0; i < EDID_LENGTH; i++)
+   u8 csum = 0, crc = 0;
+
+   for (i = 0; i < EDID_LENGTH - 1; i++)
csum += raw_edid[i];
 
-   return csum;
+   crc = 0x100 - csum;
+
+   return crc;
+}
+
+static bool drm_edid_block_checksum_diff(const u8 *raw_edid, u8 real_checksum)
+{
+   if (raw_edid[EDID_LENGTH - 1] != real_checksum)
+   return true;
+   else
+   return false;
 }
 
 static bool drm_edid_is_zero(const u8 *in_edid, int length)
@@ -1652,7 +1663,7 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid,
}
 
csum = drm_edid_block_checksum(raw_edid);
-   if (csum) {
+   if (drm_edid_block_checksum_diff(raw_edid, csum)) {
if (edid_corrupt)
*edid_corrupt = true;
 
@@ -1793,6 +1804,10 @@ static void connector_bad_edid(struct drm_connector *connector,
   u8 *edid, int num_blocks)
 {
int i;
+   u8 num_of_ext = edid[0x7e];
+
+   /* Calculate real checksum for the last edid extension block data */
+   connector->real_edid_checksum = 

[PATCH 4/4] drm/amdgpu: use amdgpu_device_vram_access in amdgpu_ttm_access_memory

2020-02-05 Thread Christian König
Make use of the better performance here as well.

This patch is only compile tested!

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 38 +++--
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 58d143b24ba0..538c3b52b712 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1565,7 +1565,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
 
while (len && pos < adev->gmc.mc_vram_size) {
uint64_t aligned_pos = pos & ~(uint64_t)3;
-   uint32_t bytes = 4 - (pos & 3);
+   uint64_t bytes = 4 - (pos & 3);
uint32_t shift = (pos & 3) * 8;
		uint32_t mask = 0xffffffff << shift;
 
@@ -1574,20 +1574,28 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
			bytes = len;
		}

-		spin_lock_irqsave(&adev->mmio_idx_lock, flags);
-		WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
-		WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
-		if (!write || mask != 0xffffffff)
-			value = RREG32_NO_KIQ(mmMM_DATA);
-		if (write) {
-			value &= ~mask;
-			value |= (*(uint32_t *)buf << shift) & mask;
-			WREG32_NO_KIQ(mmMM_DATA, value);
-		}
-		spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
-		if (!write) {
-			value = (value & mask) >> shift;
-			memcpy(buf, &value, bytes);
+		if (mask != 0xffffffff) {
+			spin_lock_irqsave(&adev->mmio_idx_lock, flags);
+			WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)aligned_pos) | 0x80000000);
+			WREG32_NO_KIQ(mmMM_INDEX_HI, aligned_pos >> 31);
+			if (!write || mask != 0xffffffff)
+				value = RREG32_NO_KIQ(mmMM_DATA);
+			if (write) {
+				value &= ~mask;
+				value |= (*(uint32_t *)buf << shift) & mask;
+				WREG32_NO_KIQ(mmMM_DATA, value);
+			}
+			spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
+			if (!write) {
+				value = (value & mask) >> shift;
+				memcpy(buf, &value, bytes);
+			}
+		} else {
+			bytes = (nodes->start + nodes->size) << PAGE_SHIFT;
+			bytes = min(pos - bytes, (uint64_t)len & ~0x3ull);
+
+			amdgpu_device_vram_access(adev, pos, (uint32_t *)buf,
+						  bytes, write);
		}
 
ret += bytes;
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/4] drm/amdgpu: use the BAR if possible in amdgpu_device_vram_access

2020-02-05 Thread Christian König
This should speed up debugging VRAM access a lot.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +
 1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d39630edda01..7d65c9aedecd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -188,6 +188,27 @@ void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
uint32_t hi = ~0;
uint64_t last;
 
+
+#ifdef CONFIG_64BIT
+   last = min(pos + size, adev->gmc.visible_vram_size);
+   if (last > pos) {
+   void __iomem *addr = adev->mman.aper_base_kaddr + pos;
+   size_t count = last - pos;
+
+   if (write)
+   memcpy_toio(addr, buf, count);
+   else
+   memcpy_fromio(buf, addr, count);
+
+   if (count == size)
+   return;
+
+   pos += count;
+   buf += count / 4;
+   size -= count;
+   }
+#endif
+
	spin_lock_irqsave(&adev->mmio_idx_lock, flags);
for (last = pos + size; pos < last; pos += 4) {
uint32_t tmp = pos >> 31;
-- 
2.17.1



[PATCH 1/4] drm/amdgpu: optimize amdgpu_device_vram_access a bit.

2020-02-05 Thread Christian König
Only write the _HI register when necessary.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 5030a09babb8..d39630edda01 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -184,20 +184,25 @@ bool amdgpu_device_supports_baco(struct drm_device *dev)
 void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
   uint32_t *buf, size_t size, bool write)
 {
-   uint64_t last;
unsigned long flags;
+   uint32_t hi = ~0;
+   uint64_t last;
+
+	spin_lock_irqsave(&adev->mmio_idx_lock, flags);
+   for (last = pos + size; pos < last; pos += 4) {
+   uint32_t tmp = pos >> 31;
 
-   last = size - 4;
-   for (last += pos; pos <= last; pos += 4) {
-		spin_lock_irqsave(&adev->mmio_idx_lock, flags);
		WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)pos) | 0x80000000);
-   WREG32_NO_KIQ(mmMM_INDEX_HI, pos >> 31);
+   if (tmp != hi) {
+   WREG32_NO_KIQ(mmMM_INDEX_HI, tmp);
+   hi = tmp;
+   }
if (write)
WREG32_NO_KIQ(mmMM_DATA, *buf++);
else
*buf++ = RREG32_NO_KIQ(mmMM_DATA);
-		spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
}
+	spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
 }
 
 /*
-- 
2.17.1



[PATCH 3/4] drm/amdgpu: use amdgpu_device_vram_access in amdgpu_ttm_vram_read

2020-02-05 Thread Christian König
This speeds up the access quite a bit: from 2.2 MB/s to
2.9 MB/s on 32-bit and 12.8 MB/s on 64-bit.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 27 ++---
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index ae1b00def5d8..58d143b24ba0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -59,6 +59,8 @@
 #include "amdgpu_ras.h"
 #include "bif/bif_4_1_d.h"
 
 #define AMDGPU_TTM_VRAM_MAX_DW_READ	(size_t)128
+
 static int amdgpu_map_buffer(struct ttm_buffer_object *bo,
 struct ttm_mem_reg *mem, unsigned num_pages,
 uint64_t offset, unsigned window,
@@ -2255,27 +2257,20 @@ static ssize_t amdgpu_ttm_vram_read(struct file *f, char __user *buf,
if (*pos >= adev->gmc.mc_vram_size)
return -ENXIO;
 
+   size = min(size, (size_t)(adev->gmc.mc_vram_size - *pos));
while (size) {
-   unsigned long flags;
-   uint32_t value;
-
-   if (*pos >= adev->gmc.mc_vram_size)
-   return result;
-
-		spin_lock_irqsave(&adev->mmio_idx_lock, flags);
-		WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)*pos) | 0x80000000);
-		WREG32_NO_KIQ(mmMM_INDEX_HI, *pos >> 31);
-		value = RREG32_NO_KIQ(mmMM_DATA);
-		spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
+   size_t bytes = min(size, AMDGPU_TTM_VRAM_MAX_DW_READ * 4);
+   uint32_t value[AMDGPU_TTM_VRAM_MAX_DW_READ];
 
-   r = put_user(value, (uint32_t *)buf);
+   amdgpu_device_vram_access(adev, *pos, value, bytes, false);
+   r = copy_to_user(buf, value, bytes);
if (r)
return r;
 
-   result += 4;
-   buf += 4;
-   *pos += 4;
-   size -= 4;
+   result += bytes;
+   buf += bytes;
+   *pos += bytes;
+   size -= bytes;
}
 
return result;
-- 
2.17.1



[Dali] Raven 2 detection Patch

2020-02-05 Thread Tawfik, Aly
Hi,



Dali is a Raven 2 based ASIC that runs at a lower (6W) TDP than other Raven 2
chips. Currently the fused internal rev id is the same on all Raven 2 boards,
which means the detection has to be done through the PCIE REV ID.

Unfortunately the PCIE REV ID is not visible inside the scope of display. I
created a patch that alters the fused internal rev_id value when the chip is
detected as Dali through the PCIE REV ID, so that the chip can be detected
inside the display core.



Can you kindly provide feedback on this workaround?



Best Regards,

Aly Tawfik



0001-drm-amdgpu-DALI-Dali-Varient-Detection.patch
Description: 0001-drm-amdgpu-DALI-Dali-Varient-Detection.patch


[PATCH] drm/amdgpu/vcn2.5: fix DPG mode power off issue on instance 1

2020-02-05 Thread James Zhu
Support pause_state for multiple instances; this fixes the VCN 2.5 DPG mode
power off issue on instance 1.

Signed-off-by: James Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h |  3 +--
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c   | 14 --
 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c   |  6 +++---
 drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c   |  6 +++---
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index d6deb0e..fb3dfe3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -179,6 +179,7 @@ struct amdgpu_vcn_inst {
struct amdgpu_irq_src   irq;
struct amdgpu_vcn_reg   external;
struct amdgpu_bo*dpg_sram_bo;
+   struct dpg_pause_state pause_state;
void*dpg_sram_cpu_addr;
uint64_tdpg_sram_gpu_addr;
uint32_t*dpg_sram_curr_addr;
@@ -190,8 +191,6 @@ struct amdgpu_vcn {
const struct firmware   *fw;/* VCN firmware */
unsignednum_enc_rings;
enum amd_powergating_state cur_state;
-   struct dpg_pause_state pause_state;
-
boolindirect_sram;
 
uint8_t num_vcn_inst;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 1a24fad..71f61af 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1207,9 +1207,10 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device *adev,
	struct amdgpu_ring *ring;

	/* pause/unpause if state is changed */
-	if (adev->vcn.pause_state.fw_based != new_state->fw_based) {
+	if (adev->vcn.inst[inst_idx].pause_state.fw_based != new_state->fw_based) {
		DRM_DEBUG("dpg pause state changed %d:%d -> %d:%d",
-			adev->vcn.pause_state.fw_based, adev->vcn.pause_state.jpeg,
+			adev->vcn.inst[inst_idx].pause_state.fw_based,
+			adev->vcn.inst[inst_idx].pause_state.jpeg,
			new_state->fw_based, new_state->jpeg);
 
reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &
@@ -1258,13 +1259,14 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device *adev,
		reg_data &= ~UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK;
		WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, reg_data);
		}
-		adev->vcn.pause_state.fw_based = new_state->fw_based;
+		adev->vcn.inst[inst_idx].pause_state.fw_based = new_state->fw_based;
	}

	/* pause/unpause if state is changed */
-	if (adev->vcn.pause_state.jpeg != new_state->jpeg) {
+	if (adev->vcn.inst[inst_idx].pause_state.jpeg != new_state->jpeg) {
		DRM_DEBUG("dpg pause state changed %d:%d -> %d:%d",
-			adev->vcn.pause_state.fw_based, adev->vcn.pause_state.jpeg,
+			adev->vcn.inst[inst_idx].pause_state.fw_based,
+			adev->vcn.inst[inst_idx].pause_state.jpeg,
			new_state->fw_based, new_state->jpeg);
 
reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &
@@ -1318,7 +1320,7 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device *adev,
reg_data &= ~UVD_DPG_PAUSE__JPEG_PAUSE_DPG_REQ_MASK;
WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, reg_data);
}
-   adev->vcn.pause_state.jpeg = new_state->jpeg;
+   adev->vcn.inst[inst_idx].pause_state.jpeg = new_state->jpeg;
}
 
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index 4f72167..c387c81 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -1137,9 +1137,9 @@ static int vcn_v2_0_pause_dpg_mode(struct amdgpu_device *adev,
	int ret_code;

	/* pause/unpause if state is changed */
-	if (adev->vcn.pause_state.fw_based != new_state->fw_based) {
+	if (adev->vcn.inst[inst_idx].pause_state.fw_based != new_state->fw_based) {
		DRM_DEBUG("dpg pause state changed %d -> %d",
-			adev->vcn.pause_state.fw_based, new_state->fw_based);
+			adev->vcn.inst[inst_idx].pause_state.fw_based, new_state->fw_based);
reg_data = RREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE) &
(~UVD_DPG_PAUSE__NJ_PAUSE_DPG_ACK_MASK);
 
@@ -1185,7 +1185,7 @@ static int vcn_v2_0_pause_dpg_mode(struct amdgpu_device *adev,
reg_data &= ~UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK;
WREG32_SOC15(UVD, 0, mmUVD_DPG_PAUSE, reg_data);
}
-   adev->vcn.pause_state.fw_based = new_state->fw_based;
+		adev->vcn.inst[inst_idx].pause_state.fw_based = new_state->fw_based;

Re: [PATCH 00/14] amdgpu: remove load and unload callbacks

2020-02-05 Thread Harry Wentland
Patches 10-12 are
Reviewed-by: Harry Wentland 

Harry

On 2020-02-04 10:48 p.m., Alex Deucher wrote:
> These are deprecated and the drm will soon start warning when drivers still
> use them.  It was a long and twisty road, but seems to work.
> 
> Alex Deucher (14):
>   drm/amdgpu: rename amdgpu_debugfs_preempt_cleanup
>   drm/amdgpu/ttm: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/pm: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/sa: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/fence: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/gem: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/regs: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/firmware: move debugfs init into core amdgpu debugfs
>   drm/amdgpu: don't call drm_connector_register for non-MST ports
>   drm/amdgpu/display: move debugfs init into core amdgpu debugfs
>   drm/amd/display: move dpcd debugfs members setup
>   drm/amdgpu/display: add a late register connector callback
>   drm/amdgpu/ring: move debugfs init into core amdgpu debugfs
>   drm/amdgpu: drop legacy drm load and unload callbacks
> 
>  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c|  1 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 67 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h   |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 17 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 13 +++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  3 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c|  7 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  9 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h|  2 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 15 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  4 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   | 14 +---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |  3 +
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 -
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 26 +++
>  .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c |  3 +
>  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  2 -
>  18 files changed, 112 insertions(+), 78 deletions(-)
> 


Re: [RFC PATCH] drm/amdgpu: Remove eviction fence before release bo

2020-02-05 Thread Christian König

On 05.02.20 at 13:56, Pan, Xinhui wrote:

No need to trigger eviction as the memory mapping will not be used anymore.

All pt/pd bos share the same resv, hence the same shared eviction fence.
Every time a page table is freed, the fence is signaled, which causes
unexpected kfd evictions.

kfd bos use their own resv, so they are not affected.

Signed-off-by: xinhui pan 
---

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index 47b0f29..265b1ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -96,6 +96,7 @@
   struct mm_struct *mm);
  bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
  struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
  
  struct amdkfd_process_info {

/* List head of all VMs that belong to a KFD process */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index ef721cb..a3c55ad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -276,6 +276,26 @@
return 0;
  }
  
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)

+{
+   struct amdgpu_vm *vm;
+   int ret = 0;
+
+   if (bo->vm_bo && bo->vm_bo->vm) {
+   vm = bo->vm_bo->vm;
+   if (vm->process_info && vm->process_info->eviction_fence) {


Better write that as checking of prerequisites, e.g. if (!...) return;


+			BUG_ON(!dma_resv_trylock(&bo->tbo.base._resv));
+			if (bo->tbo.base.resv != &bo->tbo.base._resv) {
+				dma_resv_copy_fences(&bo->tbo.base._resv, bo->tbo.base.resv);
+				bo->tbo.base.resv = &bo->tbo.base._resv;


That doesn't work correctly and could crash really really badly. We need 
to rework how deleted BOs are handled in TTM first for this.


Roughly a month or two ago I sent out a patch set which does that, but I
never got around to finishing it up.


Regards,
Christian.


+   }
+			ret = amdgpu_amdkfd_remove_eviction_fence(bo, vm->process_info->eviction_fence);
+			dma_resv_unlock(bo->tbo.base.resv);
+   }
+   }
+   return ret;
+}
+
  static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
 bool wait)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 6f60a58..4b5bee0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1307,6 +1307,9 @@
if (abo->kfd_bo)
amdgpu_amdkfd_unreserve_memory_limit(abo);
  
+	amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);

+   abo->vm_bo = NULL;
+
if (bo->mem.mem_type != TTM_PL_VRAM || !bo->mem.mm_node ||
!(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
return;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index cc56eab..187cdb3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -945,7 +945,6 @@
  static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
  {
if (entry->base.bo) {
-		entry->base.bo->vm_bo = NULL;
		list_del(&entry->base.vm_status);
		amdgpu_bo_unref(&entry->base.bo->shadow);
		amdgpu_bo_unref(&entry->base.bo);




[RFC PATCH] drm/amdgpu: Remove eviction fence before release bo

2020-02-05 Thread Pan, Xinhui


No need to trigger eviction as the memory mapping will not be used anymore.

All pt/pd bos share the same resv, hence the same shared eviction fence.
Every time a page table is freed, the fence is signaled, which causes
unexpected kfd evictions.

kfd bos use their own resv, so they are not affected.

Signed-off-by: xinhui pan 
---

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index 47b0f29..265b1ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -96,6 +96,7 @@
   struct mm_struct *mm);
 bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
 struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
 
 struct amdkfd_process_info {
/* List head of all VMs that belong to a KFD process */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index ef721cb..a3c55ad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -276,6 +276,26 @@
return 0;
 }
 
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
+{
+   struct amdgpu_vm *vm;
+   int ret = 0;
+
+   if (bo->vm_bo && bo->vm_bo->vm) {
+   vm = bo->vm_bo->vm;
+   if (vm->process_info && vm->process_info->eviction_fence) {
+			BUG_ON(!dma_resv_trylock(&bo->tbo.base._resv));
+			if (bo->tbo.base.resv != &bo->tbo.base._resv) {
+				dma_resv_copy_fences(&bo->tbo.base._resv, bo->tbo.base.resv);
+				bo->tbo.base.resv = &bo->tbo.base._resv;
+   }
+			ret = amdgpu_amdkfd_remove_eviction_fence(bo, vm->process_info->eviction_fence);
+			dma_resv_unlock(bo->tbo.base.resv);
+   }
+   }
+   return ret;
+}
+
 static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
 bool wait)
 {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 6f60a58..4b5bee0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1307,6 +1307,9 @@
if (abo->kfd_bo)
amdgpu_amdkfd_unreserve_memory_limit(abo);
 
+   amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
+   abo->vm_bo = NULL;
+
if (bo->mem.mem_type != TTM_PL_VRAM || !bo->mem.mm_node ||
!(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
return;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index cc56eab..187cdb3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -945,7 +945,6 @@
 static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
 {
if (entry->base.bo) {
-		entry->base.bo->vm_bo = NULL;
		list_del(&entry->base.vm_status);
		amdgpu_bo_unref(&entry->base.bo->shadow);
		amdgpu_bo_unref(&entry->base.bo);


RE: [PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

2020-02-05 Thread Zhang, Jack (Jian)
Hi, Team,

Would you please help to take a look at this patch?

Regards,
Jack

-Original Message-
From: amd-gfx  On Behalf Of Jack Zhang
Sent: Wednesday, February 5, 2020 5:18 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Jack (Jian) 
Subject: [PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

For sriov and pp_onevf_mode, do not send messages to set the smu status,
because the smu doesn't support these messages under VF.

Besides, smu_suspend should be skipped when pp_onevf_mode is disabled.

Signed-off-by: Jack Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 15 ---  
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 21 +
 2 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4ff7ce3..2d1f8d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2353,15 +2353,16 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
}
adev->ip_blocks[i].status.hw = false;
/* handle putting the SMC in the appropriate state */
-   if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
-   r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
-   if (r) {
-			DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
-				  adev->mp1_state, r);
-   return r;
+   if(!amdgpu_sriov_vf(adev)){
+			if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+				r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
+   if (r) {
+				DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
+					adev->mp1_state, r);
+   return r;
+   }
}
}
-
adev->ip_blocks[i].status.hw = false;
}
 
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 99ad4dd..a6d7b5f 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1461,21 +1461,26 @@ static int smu_suspend(void *handle)
	struct smu_context *smu = &adev->smu;
bool baco_feature_is_enabled = false;
 
+	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+   return 0;
+
if (!smu->pm_enabled)
return 0;
 
if(!smu->is_apu)
		baco_feature_is_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT);
 
-   ret = smu_system_features_control(smu, false);
-   if (ret)
-   return ret;
-
-   if (baco_feature_is_enabled) {
-   ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
-   if (ret) {
-			pr_warn("set BACO feature enabled failed, return %d\n", ret);
+   if(!amdgpu_sriov_vf(adev)) {
+   ret = smu_system_features_control(smu, false);
+   if (ret)
return ret;
+
+   if (baco_feature_is_enabled) {
+			ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
+			if (ret) {
+				pr_warn("set BACO feature enabled failed, return %d\n", ret);
+   return ret;
+   }
}
}
 
--
2.7.4



[PATCH] drm/amdgpu/sriov Don't send msg when smu suspend

2020-02-05 Thread Jack Zhang
For sriov and pp_onevf_mode, do not send messages to set the smu
status, because the smu doesn't support these messages under VF.

Besides, smu_suspend should be skipped when pp_onevf_mode is disabled.

Signed-off-by: Jack Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 15 ---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 21 +
 2 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4ff7ce3..2d1f8d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2353,15 +2353,16 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
}
adev->ip_blocks[i].status.hw = false;
/* handle putting the SMC in the appropriate state */
-   if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
-   r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
-   if (r) {
-			DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
-				  adev->mp1_state, r);
-   return r;
+   if(!amdgpu_sriov_vf(adev)){
+			if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+				r = amdgpu_dpm_set_mp1_state(adev, adev->mp1_state);
+   if (r) {
+				DRM_ERROR("SMC failed to set mp1 state %d, %d\n",
+					adev->mp1_state, r);
+   return r;
+   }
}
}
-
adev->ip_blocks[i].status.hw = false;
}
 
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 99ad4dd..a6d7b5f 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1461,21 +1461,26 @@ static int smu_suspend(void *handle)
	struct smu_context *smu = &adev->smu;
bool baco_feature_is_enabled = false;
 
+	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+   return 0;
+
if (!smu->pm_enabled)
return 0;
 
if(!smu->is_apu)
		baco_feature_is_enabled = smu_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT);
 
-   ret = smu_system_features_control(smu, false);
-   if (ret)
-   return ret;
-
-   if (baco_feature_is_enabled) {
-   ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
-   if (ret) {
-			pr_warn("set BACO feature enabled failed, return %d\n", ret);
+   if(!amdgpu_sriov_vf(adev)) {
+   ret = smu_system_features_control(smu, false);
+   if (ret)
return ret;
+
+   if (baco_feature_is_enabled) {
+			ret = smu_feature_set_enabled(smu, SMU_FEATURE_BACO_BIT, true);
+			if (ret) {
+				pr_warn("set BACO feature enabled failed, return %d\n", ret);
+   return ret;
+   }
}
}
 
-- 
2.7.4



Re: [PATCH 00/14] amdgpu: remove load and unload callbacks

2020-02-05 Thread Thomas Zimmermann
Hi Alex

On 05.02.20 at 04:48, Alex Deucher wrote:
> These are deprecated and the drm will soon start warning when drivers still
> use them.  It was a long and twisty road, but seems to work.
> 
> Alex Deucher (14):
>   drm/amdgpu: rename amdgpu_debugfs_preempt_cleanup
>   drm/amdgpu/ttm: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/pm: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/sa: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/fence: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/gem: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/regs: move debugfs init into core amdgpu debugfs
>   drm/amdgpu/firmware: move debugfs init into core amdgpu debugfs
>   drm/amdgpu: don't call drm_connector_register for non-MST ports
>   drm/amdgpu/display: move debugfs init into core amdgpu debugfs
>   drm/amd/display: move dpcd debugfs members setup
>   drm/amdgpu/display: add a late register connector callback
>   drm/amdgpu/ring: move debugfs init into core amdgpu debugfs
>   drm/amdgpu: drop legacy drm load and unload callbacks

Trying out the patches (on drm-tip) resulted in a NULL-pointer access
during startup.

[   10.059945]
==
[   10.067285] BUG: KASAN: null-ptr-deref in drm_dp_aux_register+0xcc/0xf0
[   10.073941] Read of size 8 at addr 0000000000000050 by task systemd-udevd/379
[   10.081118]
[   10.089117]
==
[   10.096675] BUG: kernel NULL pointer dereference, address:
0050
[   10.103674] #PF: supervisor read access in kernel mode
[   10.108840] #PF: error_code(0x) - not-present page
[   10.114004] PGD 0 P4D 0
[   10.116557] Oops:  [#1] SMP KASAN PTI
[   10.120586] CPU: 2 PID: 379 Comm: systemd-udevd Tainted: GB
 E 5.5.0-1-default+ #235
[   10.129500] Hardware name: Dell Inc. OptiPlex 9020/0N4YC8, BIOS A24
10/24/2018
[   10.136775] RIP: 0010:drm_dp_aux_register+0xcc/0xf0
[   10.136778] Code: 41 5c c3 4c 89 e7 e8 53 b4 28 00 85 c0 74 eb 48 89
ef 89 44 24 04 e8 23 1d 02 00 8b 44 24 04 eb d9 48 8d 7b 50 e8 d4 98 8f
ff <48> 8b 73 50 48 85 f6 75 aa 48 89 df e8 c3 98 8f ff 48 8b 33 eb 9d
[   10.136779] RSP: 0018:c9a3f0b0 EFLAGS: 00010286
[   10.165792] RAX: 8886dc5505c0 RBX:  RCX:
dc00
[   10.165793] RDX: 0007 RSI: 0004 RDI:
0297
[   10.165805] RBP: 8886c1f1c900 R08: 9f19ec71 R09:
fbfff4246b5d
[   10.165806] R10: fbfff4246b5c R11: a1235ae3 R12:
8886c1f1c908
[   10.165807] R13:  R14: 0001 R15:
8886bbd64000
[   10.165809] FS:  7f094629adc0() GS:8886fb60()
knlGS:
[   10.165810] CS:  0010 DS:  ES:  CR0: 80050033
[   10.165812] CR2: 0050 CR3: 0006dc558006 CR4:
001606e0
[   10.165813] Call Trace:
[   10.165995]  amdgpu_atombios_dp_aux_init+0xa2/0xf0 [amdgpu]
[   10.230952]  amdgpu_connector_add+0x8c3/0x1320 [amdgpu]
[   10.231113]  ? amdgpu_connector_is_dp12_capable+0xf0/0xf0 [amdgpu]
[  OK 10.242611]  ? amdgpu_connector_is_dp12_capable+0xf0/0xf0 [amdgpu]
[   10.242783]
amdgpu_atombios_get_connector_info_from_object_table+0x9fd/0xd60 [amdgpu]
0m] Started10.258253]  ?
amdgpu_atombios_has_dce_engine_info+0x110/0x110 [amdgpu]
[   10.258259]  ? match_held_lock+0x1b/0x240
1;39mForward Pas[   10.270175]  ? __lock_acquire+0x543/0xaf0
[   10.270188]  ? collect_percpu_times+0x3bb/0x400
[   10.270198]  ? rcu_read_lock_bh_held+0xa0/0xa0
[   10.284629]  ? debug_lockdep_rcu_enabled.part.0+0x16/0x30
[   10.284634]  ? __kmalloc+0x310/0x6e0
sword R�…s to P[   10.293648]  ? drm_mode_crtc_set_gamma_size+0x44/0xf0
[   10.293834]  ? dce_v6_0_sw_init+0x3c4/0x720 [amdgpu]
[   10.293986]  dce_v6_0_sw_init+0x3c4/0x720 [amdgpu]
lymouth Director[   10.310201]  ? si_dma_sw_init+0x8b/0x120 [amdgpu]
[   10.310409]  amdgpu_device_ip_init+0xbd/0x64e [amdgpu]
y Watch.
[[   10.321430]  amdgpu_device_init.cold+0xb92/0xf26 [amdgpu]
[   10.321577]  ? amdgpu_driver_load_kms+0x7a/0x370 [amdgpu]
[   10.333506]  ? rcu_read_lock_sched_held+0x85/0x90
[   10.333689]  ? amdgpu_device_has_dc_support+0x30/0x30 [amdgpu]
  OK 10.344105]  ? debug_lockdep_rcu_enabled.part.0+0x16/0x30
[   10.344108]  ? kmem_cache_alloc_trace+0x51e/0x6b0
[   10.344112]  ? kstrdup+0x44/0x60
[   10.344269]  amdgpu_driver_load_kms+0xc7/0x370 [amdgpu]
[   10.344414]  ? amdgpu_register_gpu_instance+0xd0/0xd0 [amdgpu]
[   10.344581]  ? amdgpu_dm_initialize_drm_device+0xe88/0x1544 [amdgpu]
m] Reached targe[   10.376553]  amdgpu_pci_probe+0x12e/0x1d0 [amdgpu]
[   10.376698]  ? amdgpu_pmops_runtime_idle+0xf0/0xf0 [amdgpu]
[   10.388229]  local_pci_probe+0x74/0xc0
t Paths[   10.388233]  pci_device_probe+0x1c9/0x2d0
[   10.388252]  ? pci_device_remove+0x180/0x180
[   10.388258]  ? sysfs_do_create_link_sd.isra.0+0x74/0xd0
.
[   10.406977]  really_probe+0x184/0x530

Re: [PATCH 00/14] amdgpu: remove load and unload callbacks

2020-02-05 Thread Christian König

Am 05.02.20 um 04:48 schrieb Alex Deucher:

These are deprecated and the drm will soon start warning when drivers still
use them.  It was a long and twisty road, but seems to work.


Acked-by: Christian König  for the whole series.



Alex Deucher (14):
   drm/amdgpu: rename amdgpu_debugfs_preempt_cleanup
   drm/amdgpu/ttm: move debugfs init into core amdgpu debugfs
   drm/amdgpu/pm: move debugfs init into core amdgpu debugfs
   drm/amdgpu/sa: move debugfs init into core amdgpu debugfs
   drm/amdgpu/fence: move debugfs init into core amdgpu debugfs
   drm/amdgpu/gem: move debugfs init into core amdgpu debugfs
   drm/amdgpu/regs: move debugfs init into core amdgpu debugfs
   drm/amdgpu/firmware: move debugfs init into core amdgpu debugfs
   drm/amdgpu: don't call drm_connector_register for non-MST ports
   drm/amdgpu/display: move debugfs init into core amdgpu debugfs
   drm/amd/display: move dpcd debugfs members setup
   drm/amdgpu/display: add a late register connector callback
   drm/amdgpu/ring: move debugfs init into core amdgpu debugfs
   drm/amdgpu: drop legacy drm load and unload callbacks

  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c|  1 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 67 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h   |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 17 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 13 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  3 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c|  7 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h|  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  9 +--
  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h|  2 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 15 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  4 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   | 14 +---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |  3 +
  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 -
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 26 +++
  .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c |  3 +
  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  2 -
  18 files changed, 112 insertions(+), 78 deletions(-)


