RE: [PATCH] drm/amdgpu: add error handle to avoid out-of-bounds

2024-04-24 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

Thanks for catching this. Please propagate the fix to sdma_v4_4_2 as well if 
necessary.
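
As a minimal sketch, the equivalent guard in sdma_v4_4_2_process_trap_irq()
could look like the following (assuming its structure mirrors the v4_0 handler
quoted below; names not verified against sdma_v4_4_2.c):

  instance = sdma_v4_4_2_irq_id_to_seq(entry->client_id);
  /* Stop before indexing adev->sdma.instance[] with a negative value. */
  if (instance < 0)
          return instance;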

> -Original Message-
> From: Bob Zhou 
> Sent: Tuesday, April 23, 2024 5:15 PM
> To: amd-gfx@lists.freedesktop.org; Ma, Le 
> Cc: Deucher, Alexander ; Koenig, Christian
> ; Zhou, Bob 
> Subject: [PATCH] drm/amdgpu: add error handle to avoid out-of-bounds
>
> If sdma_v4_0_irq_id_to_seq() returns -EINVAL, processing should stop to
> avoid an out-of-bounds read, so directly return -EINVAL.
>
> Signed-off-by: Bob Zhou 
> ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index e2e3856938ed..101038395c3b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -2021,6 +2021,9 @@ static int sdma_v4_0_process_trap_irq(struct
> amdgpu_device *adev,
>
>   DRM_DEBUG("IH: SDMA trap\n");
>   instance = sdma_v4_0_irq_id_to_seq(entry->client_id);
> + if (instance < 0)
> + return instance;
> +
>   switch (entry->ring_id) {
>   case 0:
>   amdgpu_fence_process(&adev->sdma.instance[instance].ring);
> --
> 2.34.1



RE: [PATCH 1/1] drm/amdgpu: drop setting buffer funcs in sdma442

2024-03-15 Thread Ma, Le
[AMD Official Use Only - General]


> -Original Message-
> From: Lazar, Lijo mailto:lijo.la...@amd.com>>
> Sent: Friday, March 15, 2024 6:14 PM
> To: Ma, Le mailto:le...@amd.com>>; 
> amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>
> Cc: Zhang, Hawking mailto:hawking.zh...@amd.com>>; 
> Song, Asher
> mailto:asher.s...@amd.com>>; Deucher, Alexander 
> mailto:alexander.deuc...@amd.com>>
> Subject: Re: [PATCH 1/1] drm/amdgpu: drop setting buffer funcs in sdma442
>
>
>
> On 3/15/2024 2:46 PM, Le Ma wrote:
> > To fix the entity rq NULL issue. This setting has been moved to the upper level.
> >
>
> Need to call amdgpu_ttm_set_buffer_funcs_status(adev, true/false) in
> mode-2 reset handlers as well.

Thanks for pointing this out. I think we can make a separate patch to handle 
it for mode2, since this patch is for alignment purposes. Actually, the 
set_buffer_funcs will not be unset/set in the reset case because of the conditions below:

  void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool 
enable)
  {
  struct ttm_resource_manager *man = 
ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
  uint64_t size;
  int r;

  if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
  adev->mman.buffer_funcs_enabled == enable || 
adev->gmc.is_app_apu)
  return;
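
The early return above means the toggle is skipped entirely while
amdgpu_in_reset(adev) is true. As a minimal illustration (not a proposed
change), a mode2 handler calling the helper mid-reset simply falls through:

  /* While amdgpu_in_reset(adev) is true this returns immediately and
   * adev->mman.buffer_funcs_enabled is left unchanged. */
  amdgpu_ttm_set_buffer_funcs_status(adev, false);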


>
> Thanks,
> Lijo
>
> > Fixes: b70438004a14 ("drm/amdgpu: move buffer funcs setting up a
> > level")
> >
> > Signed-off-by: Le Ma mailto:le...@amd.com>>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 20 +---
> >  1 file changed, 1 insertion(+), 19 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > index eaa4f5f49949..589a734982a7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > @@ -431,16 +431,11 @@ static void sdma_v4_4_2_inst_gfx_stop(struct
> amdgpu_device *adev,
> > struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
> > u32 doorbell_offset, doorbell;
> > u32 rb_cntl, ib_cntl;
> > -   int i, unset = 0;
> > +   int i;
> >
> > for_each_inst(i, inst_mask) {
> > sdma[i] = &adev->sdma.instance[i].ring;
> >
> > -   if ((adev->mman.buffer_funcs_ring == sdma[i]) && unset != 1) {
> > -   amdgpu_ttm_set_buffer_funcs_status(adev, false);
> > -   unset = 1;
> > -   }
> > -
> > rb_cntl = RREG32_SDMA(i, regSDMA_GFX_RB_CNTL);
> > rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_GFX_RB_CNTL,
> RB_ENABLE, 0);
> > WREG32_SDMA(i, regSDMA_GFX_RB_CNTL, rb_cntl); @@ -
> 490,17 +485,10 @@
> > static void sdma_v4_4_2_inst_page_stop(struct amdgpu_device *adev,
> > struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
> > u32 rb_cntl, ib_cntl;
> > int i;
> > -   bool unset = false;
> >
> > for_each_inst(i, inst_mask) {
> > sdma[i] = &adev->sdma.instance[i].page;
> >
> > -   if ((adev->mman.buffer_funcs_ring == sdma[i]) &&
> > -   (!unset)) {
> > -   amdgpu_ttm_set_buffer_funcs_status(adev, false);
> > -   unset = true;
> > -   }
> > -
> > rb_cntl = RREG32_SDMA(i, regSDMA_PAGE_RB_CNTL);
> > rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_PAGE_RB_CNTL,
> > RB_ENABLE, 0);
> > @@ -950,13 +938,7 @@ static int sdma_v4_4_2_inst_start(struct
> amdgpu_device *adev,
> > r = amdgpu_ring_test_helper(page);
> > if (r)
> > return r;
> > -
> > -   if (adev->mman.buffer_funcs_ring == page)
> > -   amdgpu_ttm_set_buffer_funcs_status(adev,
> true);
> > }
> > -
> > -   if (adev->mman.buffer_funcs_ring == ring)
> > -   amdgpu_ttm_set_buffer_funcs_status(adev, true);
> > }
> >
> > return r;


RE: [PATCH] drm/amd/pm: Fix esm reg mask use to get pcie speed

2024-02-28 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: Kamal, Asad 
> Sent: Wednesday, February 28, 2024 2:52 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Lazar, Lijo ; Zhang, Hawking
> ; Ma, Le ; Zhang, Morris
> ; Kamal, Asad 
> Subject: [PATCH] drm/amd/pm: Fix esm reg mask use to get pcie speed
>
> Fix the mask used for the ESM ctrl register to get the PCIe link speed on
> smu_v11_0_3, smu_v13_0_2 & smu_v13_0_6.
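
For context, a small illustration of why the mask width matters, on the
assumption that the link-speed field occupies bits [14:8] (seven bits): a 0x3F
mask silently drops the top bit of the value.

  u32 esm_ctrl = (1 << 15) | (0x41 << 8);   /* example register value   */
  ((esm_ctrl >> 8) & 0x3F) + 128;           /* -> 129, top bit lost      */
  ((esm_ctrl >> 8) & 0x7F) + 128;           /* -> 193, full 7-bit value  */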
>
> Fixes: 511a95552ec8 ("drm/amd/pm: Add SMU 13.0.6 support")
> Fixes: c05d1c401572 ("drm/amd/swsmu: add aldebaran smu13 ip support (v3)")
> Fixes: f1c378593153 ("drm/amd/powerplay: add Arcturus support for gpu
> metrics export")
> Signed-off-by: Asad Kamal 
> Reviewed-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c| 4 ++--
>  drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c   | 4 ++--
>  drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c | 4 ++--
>  3 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> index bcad42534da4..1d96eb274d72 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> @@ -2272,8 +2272,8 @@ static uint16_t
> arcturus_get_current_pcie_link_speed(struct smu_context *smu)
>
>   /* TODO: confirm this on real target */
>   esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
> - if ((esm_ctrl >> 15) & 0x1)
> - return (uint16_t)(((esm_ctrl >> 8) & 0x3F) + 128);
> + if ((esm_ctrl >> 15) & 0x1)
> + return (uint16_t)(((esm_ctrl >> 8) & 0x7F) + 128);
>
>   return smu_v11_0_get_current_pcie_link_speed(smu);
>  }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> index f122ef49106c..0467864a1aa8 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> @@ -1683,8 +1683,8 @@ static int
> aldebaran_get_current_pcie_link_speed(struct smu_context *smu)
>
>   /* TODO: confirm this on real target */
>   esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
> - if ((esm_ctrl >> 15) & 0x1)
> - return (((esm_ctrl >> 8) & 0x3F) + 128);
> + if ((esm_ctrl >> 15) & 0x1)
> + return (((esm_ctrl >> 8) & 0x7F) + 128);
>
>   return smu_v13_0_get_current_pcie_link_speed(smu);
>  }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> index 69c64bc6e2dc..744c84f3029f 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> @@ -2148,8 +2148,8 @@ static int
> smu_v13_0_6_get_current_pcie_link_speed(struct smu_context *smu)
>
>   /* TODO: confirm this on real target */
>   esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
> - if ((esm_ctrl >> 15) & 0x1)
> - return (((esm_ctrl >> 8) & 0x3F) + 128);
> + if ((esm_ctrl >> 15) & 0x1)
> + return (((esm_ctrl >> 8) & 0x7F) + 128);
>
>   speed_level = (RREG32_PCIE(smnPCIE_LC_SPEED_CNTL) &
>   PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK)
> --
> 2.42.0



RE: [PATCH v2 v2 3/5] drm/amdgpu: Add ras helper to query boot errors v2

2024-01-08 Thread Ma, Le
[AMD Official Use Only - General]

The patch series is Reviewed-by: Le Ma  .

If the pattern is changed on a future ASIC, we may consider using a macro or an 
ASIC function callback as well.
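
A rough sketch of the kind of per-ASIC hook being suggested (hypothetical
names, only to show the shape; no such interface exists in the quoted patch):

  /* Hypothetical callback so each ASIC family encodes its own SMN address. */
  struct amdgpu_ext_smn_funcs {
          u64 (*encode_ext_smn_addressing)(int ext_id);
  };

  reg_addr = (mmMP0_SMN_C2PMSG_92 << 2) +
             adev->ext_smn_funcs->encode_ext_smn_addressing(instance);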

> -Original Message-
> From: Hawking Zhang 
> Sent: Sunday, January 7, 2024 11:40 PM
> To: amd-gfx@lists.freedesktop.org; Zhou1, Tao ; Yang,
> Stanley ; Wang, Yang(Kevin)
> ; Chai, Thomas ; Li,
> Candice 
> Cc: Zhang, Hawking ; Deucher, Alexander
> ; Lazar, Lijo ; Ma, Le
> 
> Subject: [PATCH v2 v2 3/5] drm/amdgpu: Add ras helper to query boot errors v2
>
> Add ras helper function to query boot time gpu errors.
> v2: use aqua_vanjaram smn addressing pattern
>
> Signed-off-by: Hawking Zhang 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h |  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 95
> +  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h |
> 15 +++-
>  3 files changed, 110 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 9da14436a373..df3aa69be425 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1330,6 +1330,7 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
>  #define WREG32_FIELD_OFFSET(reg, offset, field, val) \
>   WREG32(mm##reg + offset, (RREG32(mm##reg + offset) &
> ~REG_FIELD_MASK(reg, field)) | (val) << REG_FIELD_SHIFT(reg, field))
>
> +#define AMDGPU_GET_REG_FIELD(x, h, l) (((x) & GENMASK_ULL(h, l)) >>
> +(l))
>  /*
>   * BIOS helpers.
>   */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index fc42fb6ee191..a901b00d4949 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -3763,3 +3763,98 @@ int amdgpu_ras_error_statistic_ce_count(struct
> ras_err_data *err_data,
>
>   return 0;
>  }
> +
> +#define mmMP0_SMN_C2PMSG_92  0x1609C
> +#define mmMP0_SMN_C2PMSG_126 0x160BE
> +static void amdgpu_ras_boot_time_error_reporting(struct amdgpu_device
> *adev,
> +  u32 instance, u32 boot_error)
> +{
> + u32 socket_id, aid_id, hbm_id;
> + u32 reg_data;
> + u64 reg_addr;
> +
> + socket_id = AMDGPU_RAS_GPU_ERR_SOCKET_ID(boot_error);
> + aid_id = AMDGPU_RAS_GPU_ERR_AID_ID(boot_error);
> + hbm_id = AMDGPU_RAS_GPU_ERR_HBM_ID(boot_error);
> +
> + /* The pattern for smn addressing in other SOC could be different from
> +  * the one for aqua_vanjaram. We should revisit the code if the pattern
> +  * is changed. In such case, replace the aqua_vanjaram implementation
> +  * with more common helper */
> + reg_addr = (mmMP0_SMN_C2PMSG_92 << 2) +
> +aqua_vanjaram_encode_ext_smn_addressing(instance);
> +
> + reg_data = amdgpu_device_indirect_rreg_ext(adev, reg_addr);
> + dev_err(adev->dev, "socket: %d, aid: %d, firmware boot failed, fw
> status is 0x%x\n",
> + socket_id, aid_id, reg_data);
> +
> + if (AMDGPU_RAS_GPU_ERR_MEM_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, hbm: %d, memory
> training failed\n",
> +  socket_id, aid_id, hbm_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_FW_LOAD(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, firmware load failed
> at boot time\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_WAFL_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, wafl link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_XGMI_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, xgmi link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_USR_CP_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, usr cp link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_USR_DP_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, usr dp link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_HBM_MEM_TEST(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, hbm: %d, hbm
> memory test failed\n",
> +  socket_id, aid_id, hbm_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_HBM_BIST_TEST(boot_error))
> +  

RE: [PATCH 3/5] drm/amdgpu: Add ras helper to query boot errors

2024-01-01 Thread Ma, Le
[AMD Official Use Only - General]

> -Original Message-
> From: Hawking Zhang 
> Sent: Tuesday, January 2, 2024 11:44 AM
> To: amd-gfx@lists.freedesktop.org; Zhou1, Tao ; Yang,
> Stanley ; Wang, Yang(Kevin)
> ; Chai, Thomas ; Li,
> Candice 
> Cc: Zhang, Hawking ; Deucher, Alexander
> ; Lazar, Lijo ; Ma, Le
> 
> Subject: [PATCH 3/5] drm/amdgpu: Add ras helper to query boot errors
>
> Add ras helper function to query boot time gpu errors.
>
> Signed-off-by: Hawking Zhang 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h |  3 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 95
> +  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.h |
> 15 +++-
>  3 files changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 616b6c911767..db44ec857a31 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1328,6 +1328,9 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
>  #define WREG32_FIELD_OFFSET(reg, offset, field, val) \
>   WREG32(mm##reg + offset, (RREG32(mm##reg + offset) &
> ~REG_FIELD_MASK(reg, field)) | (val) << REG_FIELD_SHIFT(reg, field))
>
> +#define AMDGPU_SMN_TARGET_AID(x) ((u64)(x) << 32)
> +#define AMDGPU_SMN_CROSS_AID (1ULL << 34)
> +#define AMDGPU_GET_REG_FIELD(x, h, l) (((x) & GENMASK_ULL(h, l)) >> (l))
>  /*
>   * BIOS helpers.
>   */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index 39399d0f2ce5..5f302b7693b3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -3764,3 +3764,98 @@ int amdgpu_ras_error_statistic_ce_count(struct
> ras_err_data *err_data,
>
>   return 0;
>  }
> +
> +#define mmMP0_SMN_C2PMSG_92  0x1609C
> +#define mmMP0_SMN_C2PMSG_126 0x160BE
> +static void amdgpu_ras_boot_time_error_reporting(struct amdgpu_device
> *adev,
> +  u32 instance, u32 boot_error)
> +{
> + u32 socket_id, aid_id, hbm_id;
> + u32 reg_data;
> + u64 reg_addr;
> +
> + socket_id = AMDGPU_RAS_GPU_ERR_SOCKET_ID(boot_error);
> + aid_id = AMDGPU_RAS_GPU_ERR_AID_ID(boot_error);
> + hbm_id = AMDGPU_RAS_GPU_ERR_HBM_ID(boot_error);
> +
> + if (instance)
> + reg_addr = (mmMP0_SMN_C2PMSG_92 << 2) +
> +AMDGPU_SMN_TARGET_AID(instance) +
> +AMDGPU_SMN_CROSS_AID;
Hi Hawking,

We have the ASIC function "aqua_vanjaram_encode_ext_smn_addressing" for this; 
maybe it could be reused here as well.

Thanks.
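
Concretely, the suggestion is to replace the open-coded if/else with the
existing helper, roughly as follows (this is what v2 of the patch, quoted
earlier in this archive, ends up doing):

  reg_addr = (mmMP0_SMN_C2PMSG_92 << 2) +
             aqua_vanjaram_encode_ext_smn_addressing(instance);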
> + else
> + reg_addr = (mmMP0_SMN_C2PMSG_92 << 2);
> +
> + reg_data = amdgpu_device_indirect_rreg_ext(adev, reg_addr);
> + dev_err(adev->dev, "socket: %d, aid: %d, firmware boot failed, fw
> status is 0x%x\n",
> + socket_id, aid_id, reg_data);
> +
> + if (AMDGPU_RAS_GPU_ERR_MEM_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, hbm: %d, memory
> training failed\n",
> +  socket_id, aid_id, hbm_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_FW_LOAD(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, firmware load failed
> at boot time\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_WAFL_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, wafl link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_XGMI_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, xgmi link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_USR_CP_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, usr cp link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_USR_DP_LINK_TRAINING(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, usr dp link training
> failed\n",
> +  socket_id, aid_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_HBM_MEM_TEST(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, hbm: %d, hbm
> memory test failed\n",
> +  socket_id, aid_id, hbm_id);
> +
> + if (AMDGPU_RAS_GPU_ERR_HBM_BIST_TEST(boot_error))
> + dev_info(adev->dev, "socket: %d, aid: %d, hbm: %d, hbm bist
&g

RE: [PATCH 1/4] drm/amd/pm: Use separate metric table for APU

2023-12-25 Thread Ma, Le
[AMD Official Use Only - General]

Series is Reviewed-by: Le Ma 

> -Original Message-
> From: Kamal, Asad 
> Sent: Friday, December 22, 2023 11:27 PM
> To: amd-gfx@lists.freedesktop.org; Lazar, Lijo 
> Cc: Zhang, Hawking ; Ma, Le ;
> Zhang, Morris ; Oliveira, Daniel
> ; Cheung, Donald ;
> Khatir, Sepehr ; Kamal, Asad 
> Subject: [PATCH 1/4] drm/amd/pm: Use separate metric table for APU
>
> Use separate metric tables for APU and non-APU systems on smu_v13_0_6 to
> get the metrics data.
>
> Signed-off-by: Asad Kamal 
> Reviewed-by: Lijo Lazar 
> ---
>  .../pm/swsmu/inc/pmfw_if/smu_v13_0_6_pmfw.h   |  90 -
>  .../drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c  | 124 ++
>  2 files changed, 156 insertions(+), 58 deletions(-)
>
> diff --git
> a/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_6_pmfw.h
> b/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_6_pmfw.h
> index fef2d290f3f2..8f166aa3043c 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_6_pmfw.h
> +++ b/drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu_v13_0_6_pmfw.h
> @@ -219,7 +219,95 @@ typedef struct __attribute__((packed, aligned(4))) {
>uint32_t PCIenReplayARolloverCountAcc;  // The Pcie counter itself is
> accumulated
>uint32_t PCIeNAKSentCountAcc;   // The Pcie counter itself is 
> accumulated
>uint32_t PCIeNAKReceivedCountAcc;   // The Pcie counter itself is
> accumulated
> -} MetricsTable_t;
> +} MetricsTableX_t;
> +
> +typedef struct __attribute__((packed, aligned(4))) {
> +  uint32_t AccumulationCounter;
> +
> +  //TEMPERATURE
> +  uint32_t MaxSocketTemperature;
> +  uint32_t MaxVrTemperature;
> +  uint32_t MaxHbmTemperature;
> +  uint64_t MaxSocketTemperatureAcc;
> +  uint64_t MaxVrTemperatureAcc;
> +  uint64_t MaxHbmTemperatureAcc;
> +
> +  //POWER
> +  uint32_t SocketPowerLimit;
> +  uint32_t MaxSocketPowerLimit;
> +  uint32_t SocketPower;
> +
> +  //ENERGY
> +  uint64_t Timestamp;
> +  uint64_t SocketEnergyAcc;
> +  uint64_t CcdEnergyAcc;
> +  uint64_t XcdEnergyAcc;
> +  uint64_t AidEnergyAcc;
> +  uint64_t HbmEnergyAcc;
> +
> +  //FREQUENCY
> +  uint32_t CclkFrequencyLimit;
> +  uint32_t GfxclkFrequencyLimit;
> +  uint32_t FclkFrequency;
> +  uint32_t UclkFrequency;
> +  uint32_t SocclkFrequency[4];
> +  uint32_t VclkFrequency[4];
> +  uint32_t DclkFrequency[4];
> +  uint32_t LclkFrequency[4];
> +  uint64_t GfxclkFrequencyAcc[8];
> +  uint64_t CclkFrequencyAcc[96];
> +
> +  //FREQUENCY RANGE
> +  uint32_t MaxCclkFrequency;
> +  uint32_t MinCclkFrequency;
> +  uint32_t MaxGfxclkFrequency;
> +  uint32_t MinGfxclkFrequency;
> +  uint32_t FclkFrequencyTable[4];
> +  uint32_t UclkFrequencyTable[4];
> +  uint32_t SocclkFrequencyTable[4];
> +  uint32_t VclkFrequencyTable[4];
> +  uint32_t DclkFrequencyTable[4];
> +  uint32_t LclkFrequencyTable[4];
> +  uint32_t MaxLclkDpmRange;
> +  uint32_t MinLclkDpmRange;
> +
> +  //XGMI
> +  uint32_t XgmiWidth;
> +  uint32_t XgmiBitrate;
> +  uint64_t XgmiReadBandwidthAcc[8];
> +  uint64_t XgmiWriteBandwidthAcc[8];
> +
> +  //ACTIVITY
> +  uint32_t SocketC0Residency;
> +  uint32_t SocketGfxBusy;
> +  uint32_t DramBandwidthUtilization;
> +  uint64_t SocketC0ResidencyAcc;
> +  uint64_t SocketGfxBusyAcc;
> +  uint64_t DramBandwidthAcc;
> +  uint32_t MaxDramBandwidth;
> +  uint64_t DramBandwidthUtilizationAcc;
> +  uint64_t PcieBandwidthAcc[4];
> +
> +  //THROTTLERS
> +  uint32_t ProchotResidencyAcc;
> +  uint32_t PptResidencyAcc;
> +  uint32_t SocketThmResidencyAcc;
> +  uint32_t VrThmResidencyAcc;
> +  uint32_t HbmThmResidencyAcc;
> +  uint32_t GfxLockXCDMak;
> +
> +  // New Items at end to maintain driver compatibility
> +  uint32_t GfxclkFrequency[8];
> +
> +  //PSNs
> +  uint64_t PublicSerialNumber_AID[4];
> +  uint64_t PublicSerialNumber_XCD[8];
> +  uint64_t PublicSerialNumber_CCD[12];
> +
> +  //XGMI Data tranfser size
> +  uint64_t XgmiReadDataSizeAcc[8];//in KByte
> +  uint64_t XgmiWriteDataSizeAcc[8];//in KByte
> +} MetricsTableA_t;
>
>  #define SMU_VF_METRICS_TABLE_VERSION 0x3
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> index 81b217bbdebb..96777a365133 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> @@ -248,6 +248,8 @@ struct PPTable_t {
>  #define SMUQ10_TO_UINT(x) ((x) >> 10)
>  #define SMUQ10_FRAC(x) ((x) & 0x3ff)
>  #define SMUQ10_ROUND(x) ((SMUQ10_TO_UINT(x)) + ((SMUQ10_FRAC(x)) >=
> 0x200))
> +#define GET_METRIC_FIELD(field) ((adev->flags & AMD
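
The quoted hunk is cut off by the archive. A plausible shape for the macro it
introduces, assuming it simply selects between the non-APU (MetricsTableX_t)
and APU (MetricsTableA_t) layouts (a reconstruction, not verified against the
applied patch):

  #define GET_METRIC_FIELD(field) ((adev->flags & AMD_IS_APU) ? \
          (metrics_a->field) : (metrics_x->field))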

RE: [PATCH] drm/amdgpu: Fix sdma 4.4.2 doorbell rptr/wptr init

2023-11-05 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: Lazar, Lijo 
> Sent: Monday, November 6, 2023 12:21 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhang, Hawking ; Deucher, Alexander
> ; Kamal, Asad ; Ma,
> Le 
> Subject: [PATCH] drm/amdgpu: Fix sdma 4.4.2 doorbell rptr/wptr init
>
> Doorbell rptr/wptr can be set through multiple ways including direct register
> initialization. Disable doorbell during hw_fini once the ring is disabled so 
> that
> during next module reload direct initialization takes effect. Also, move the 
> direct
> initialization after minor update is set to 1 since rptr/wptr are 
> reinitialized back
> to 0 which could be lower than the previous doorbell value (ex: cases like
> module reload).
>
> Signed-off-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 25 ++--
>  1 file changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index c46bc6aa4f48..bd65a62f8903 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -427,6 +427,7 @@ static void sdma_v4_4_2_inst_gfx_stop(struct
> amdgpu_device *adev,
> uint32_t inst_mask)
>  {
>   struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
> + u32 doorbell_offset, doorbell;
>   u32 rb_cntl, ib_cntl;
>   int i, unset = 0;
>
> @@ -444,6 +445,18 @@ static void sdma_v4_4_2_inst_gfx_stop(struct
> amdgpu_device *adev,
>   ib_cntl = RREG32_SDMA(i, regSDMA_GFX_IB_CNTL);
>   ib_cntl = REG_SET_FIELD(ib_cntl, SDMA_GFX_IB_CNTL,
> IB_ENABLE, 0);
>   WREG32_SDMA(i, regSDMA_GFX_IB_CNTL, ib_cntl);
> +
> + if (sdma[i]->use_doorbell) {
> + doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL);
> + doorbell_offset = RREG32_SDMA(i,
> regSDMA_GFX_DOORBELL_OFFSET);
> +
> + doorbell = REG_SET_FIELD(doorbell,
> SDMA_GFX_DOORBELL, ENABLE, 0);
> + doorbell_offset = REG_SET_FIELD(doorbell_offset,
> + SDMA_GFX_DOORBELL_OFFSET,
> + OFFSET, 0);
> + WREG32_SDMA(i, regSDMA_GFX_DOORBELL, doorbell);
> + WREG32_SDMA(i, regSDMA_GFX_DOORBELL_OFFSET,
> doorbell_offset);
> + }
>   }
>  }
>
> @@ -631,12 +644,6 @@ static void sdma_v4_4_2_gfx_resume(struct
> amdgpu_device *adev, unsigned int i)
>   rb_cntl = sdma_v4_4_2_rb_cntl(ring, rb_cntl);
>   WREG32_SDMA(i, regSDMA_GFX_RB_CNTL, rb_cntl);
>
> - /* Initialize the ring buffer's read and write pointers */
> - WREG32_SDMA(i, regSDMA_GFX_RB_RPTR, 0);
> - WREG32_SDMA(i, regSDMA_GFX_RB_RPTR_HI, 0);
> - WREG32_SDMA(i, regSDMA_GFX_RB_WPTR, 0);
> - WREG32_SDMA(i, regSDMA_GFX_RB_WPTR_HI, 0);
> -
>   /* set the wb address whether it's enabled or not */
>   WREG32_SDMA(i, regSDMA_GFX_RB_RPTR_ADDR_HI,
>  upper_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFFFFFF); @@
> -654,6 +661,12 @@ static void sdma_v4_4_2_gfx_resume(struct
> amdgpu_device *adev, unsigned int i)
>   /* before programing wptr to a less value, need set minor_ptr_update
> first */
>   WREG32_SDMA(i, regSDMA_GFX_MINOR_PTR_UPDATE, 1);
>
> + /* Initialize the ring buffer's read and write pointers */
> + WREG32_SDMA(i, regSDMA_GFX_RB_RPTR, 0);
> + WREG32_SDMA(i, regSDMA_GFX_RB_RPTR_HI, 0);
> + WREG32_SDMA(i, regSDMA_GFX_RB_WPTR, 0);
> + WREG32_SDMA(i, regSDMA_GFX_RB_WPTR_HI, 0);
> +
>   doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL);
>   doorbell_offset = RREG32_SDMA(i, regSDMA_GFX_DOORBELL_OFFSET);
>
> --
> 2.25.1



RE: [PATCH] drm/amdgpu: Use READ_ONCE() when reading the values in 'sdma_v4_4_2_ring_get_rptr'

2023-08-18 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: SHANMUGAM, SRINIVASAN 
> Sent: Friday, August 4, 2023 1:47 PM
> To: Koenig, Christian ; Deucher, Alexander
> ; Chen, Guchun ;
> Pan, Xinhui 
> Cc: amd-gfx@lists.freedesktop.org; SHANMUGAM, SRINIVASAN
> ; Ma, Le ; Zhang,
> Hawking 
> Subject: [PATCH] drm/amdgpu: Use READ_ONCE() when reading the values in
> 'sdma_v4_4_2_ring_get_rptr'
>
> Instead of declaring pointers, use READ_ONCE() when accessing those values to
> make sure that the compiler doesn't violate any cache-coherence assumptions.
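
As a general illustration (not part of the patch): a plain dereference of the
write-back slot may be re-read or torn by the compiler, while READ_ONCE()
forces a single load of the shared word:

  u64 *p = (u64 *)&ring->adev->wb.wb[ring->rptr_offs];
  u64 v;

  v = *p;            /* compiler is free to reload *p later          */
  v = READ_ONCE(*p); /* one snapshot; no reload, no compiler tearing */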
>
> Cc: Guchun Chen 
> Cc: Christian König 
> Cc: Alex Deucher 
> Cc: "Pan, Xinhui" 
> Cc: Le Ma 
> Cc: Hawking Zhang 
> Signed-off-by: Srinivasan Shanmugam 
> ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index f413898dda37..267c1b7b8dcd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -154,13 +154,13 @@ static int sdma_v4_4_2_init_microcode(struct
> amdgpu_device *adev)
>   */
>  static uint64_t sdma_v4_4_2_ring_get_rptr(struct amdgpu_ring *ring)  {
> - u64 *rptr;
> + u64 rptr;
>
>   /* XXX check if swapping is necessary on BE */
> - rptr = ((u64 *)&ring->adev->wb.wb[ring->rptr_offs]);
> + rptr = READ_ONCE(*((u64 *)&ring->adev->wb.wb[ring->rptr_offs]));
>
> - DRM_DEBUG("rptr before shift == 0x%016llx\n", *rptr);
> - return ((*rptr) >> 2);
> + DRM_DEBUG("rptr before shift == 0x%016llx\n", rptr);
> + return rptr >> 2;
>  }
>
>  /**
> --
> 2.25.1



RE: [PATCH] drm/amdgpu: Keep reset handlers shared

2023-08-16 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: amd-gfx  On Behalf Of Lazar,
> Lijo
> Sent: Wednesday, August 16, 2023 1:38 PM
> To: Lazar, Lijo ; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Kamal, Asad
> ; Zhang, Hawking 
> Subject: RE: [PATCH] drm/amdgpu: Keep reset handlers shared
>
> [AMD Official Use Only - General]
>
> [AMD Official Use Only - General]
>
> 
>
> Thanks,
> Lijo
>
> -Original Message-
> From: amd-gfx  On Behalf Of Lijo
> Lazar
> Sent: Thursday, August 10, 2023 5:14 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Kamal, Asad
> ; Zhang, Hawking 
> Subject: [PATCH] drm/amdgpu: Keep reset handlers shared
>
> Instead of maintaining a list per device, keep the reset handlers common per
> ASIC family. A pointer to the list of handlers is maintained in reset control.
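
The for_each_handler() iterator used in the hunks below is defined in
amdgpu_reset.h (not quoted here); it is assumed to walk the fixed-size,
NULL-terminated handler array along these lines:

  #define for_each_handler(i, handler, reset_ctl)              \
          for (i = 0;                                          \
               i < AMDGPU_RESET_MAX_HANDLERS &&                \
               (handler = (*(reset_ctl)->reset_handlers)[i]);  \
               ++i)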
>
> Signed-off-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/aldebaran.c  | 19 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c   |  8 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h   | 16 
>  drivers/gpu/drm/amd/amdgpu/sienna_cichlid.c | 20 +++-
>  drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.c   | 19 +++
>  5 files changed, 45 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> index 2b97b8a96fb4..82e1c83a7ccc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> @@ -48,20 +48,19 @@ aldebaran_get_reset_handler(struct
> amdgpu_reset_control *reset_ctl,  {
> struct amdgpu_reset_handler *handler;
> struct amdgpu_device *adev = (struct amdgpu_device 
> *)reset_ctl->handle;
> +   int i;
>
> if (reset_context->method != AMD_RESET_METHOD_NONE) {
> dev_dbg(adev->dev, "Getting reset handler for method %d\n",
> reset_context->method);
> -   list_for_each_entry(handler, &reset_ctl->reset_handlers,
> -handler_list) {
> +   for_each_handler(i, handler, reset_ctl) {
> if (handler->reset_method == reset_context->method)
> return handler;
> }
> }
>
> if (aldebaran_is_mode2_default(reset_ctl)) {
> -   list_for_each_entry(handler, &reset_ctl->reset_handlers,
> -handler_list) {
> +   for_each_handler(i, handler, reset_ctl) {
> if (handler->reset_method == AMD_RESET_METHOD_MODE2) {
> reset_context->method = 
> AMD_RESET_METHOD_MODE2;
> return handler; @@ -124,9 +123,9 @@ static 
> void
> aldebaran_async_reset(struct work_struct *work)
> struct amdgpu_reset_control *reset_ctl =
> container_of(work, struct amdgpu_reset_control, reset_work);
> struct amdgpu_device *adev = (struct amdgpu_device 
> *)reset_ctl->handle;
> +   int i;
>
> -   list_for_each_entry(handler, &reset_ctl->reset_handlers,
> -handler_list) {
> +   for_each_handler(i, handler, reset_ctl) {
> if (handler->reset_method == reset_ctl->active_reset) {
> dev_dbg(adev->dev, "Resetting device\n");
> handler->do_reset(adev); @@ -395,6 +394,11 @@ static 
> struct
> amdgpu_reset_handler aldebaran_mode2_handler = {
> .do_reset   = aldebaran_mode2_reset,
>  };
>
> +static struct amdgpu_reset_handler
> +   *aldebaran_rst_handlers[AMDGPU_RESET_MAX_HANDLERS] = {
> +   &aldebaran_mode2_handler,
> +   };
> +
>  int aldebaran_reset_init(struct amdgpu_device *adev)  {
> struct amdgpu_reset_control *reset_ctl; @@ -408,10 +412,9 @@ int
> aldebaran_reset_init(struct amdgpu_device *adev)
> reset_ctl->active_reset = AMD_RESET_METHOD_NONE;
> reset_ctl->get_reset_handler = aldebaran_get_reset_handler;
>
> -   INIT_LIST_HEAD(&reset_ctl->reset_handlers);
> INIT_WORK(&reset_ctl->reset_work, reset_ctl->async_reset);
> /* Only mode2 is handled through reset control now */
> -   amdgpu_reset_add_handler(reset_ctl, &aldebaran_mode2_handler);
> +   reset_ctl->reset_handlers = &aldebaran_rst_handlers;
>
> adev->reset_cntl = reset_ctl;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
> index 5fed06ffcc6b..02d874799c16 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
> @@ -26,14 +26,6 @@
>  #include "sienna_cichlid.h"
>  #include "smu_v13_0_10.h"
>
> -int amdgpu_reset_add_handler(struct amdgpu_reset_control *reset_ctl,
> -struct amdgpu_reset_handler *handler)
> -{
> -   /* TODO: Check if handler exists? */
> -   list_add_tail(&handler->handler_list, 

RE: [PATCH] drm/amdgpu: Remove redundant GFX v9.4.3 sequence

2023-07-05 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: Lazar, Lijo 
> Sent: Wednesday, July 5, 2023 1:31 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhang, Hawking ; Deucher, Alexander
> ; Kamal, Asad ; Ma,
> Le ; Gadre, Mangesh 
> Subject: [PATCH] drm/amdgpu: Remove redundant GFX v9.4.3 sequence
>
> Programming of XCC id is already taken care with partition mode change.
>
> Signed-off-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 29 -
>  1 file changed, 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> index 51532d0dd7a7..548b1123f7c6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> @@ -1034,32 +1034,6 @@ static void
> gfx_v9_4_3_xcc_disable_gpa_mode(struct amdgpu_device *adev, int xcc_
>   WREG32_SOC15(GC, GET_INST(GC, xcc_id), regCPC_PSP_DEBUG,
> data);  }
>
> -static void gfx_v9_4_3_xcc_program_xcc_id(struct amdgpu_device *adev,
> -   int xcc_id)
> -{
> - uint32_t tmp = 0;
> - int num_xcc;
> -
> - num_xcc = NUM_XCC(adev->gfx.xcc_mask);
> - switch (num_xcc) {
> - /* directly config VIRTUAL_XCC_ID to 0 for 1-XCC */
> - case 1:
> - WREG32_SOC15(GC, GET_INST(GC, xcc_id),
> regCP_HYP_XCP_CTL, 0x8);
> - break;
> - case 2:
> - case 4:
> - case 6:
> - case 8:
> - tmp = (xcc_id % adev->gfx.num_xcc_per_xcp) <<
> REG_FIELD_SHIFT(CP_HYP_XCP_CTL, VIRTUAL_XCC_ID);
> - tmp = tmp | (adev->gfx.num_xcc_per_xcp <<
> REG_FIELD_SHIFT(CP_HYP_XCP_CTL, NUM_XCC_IN_XCP));
> - WREG32_SOC15(GC, GET_INST(GC, xcc_id),
> regCP_HYP_XCP_CTL, tmp);
> -
> - break;
> - default:
> - break;
> - }
> -}
> -
>  static bool gfx_v9_4_3_is_rlc_enabled(struct amdgpu_device *adev)  {
>   uint32_t rlc_setting;
> @@ -1917,9 +1891,6 @@ static int gfx_v9_4_3_xcc_cp_resume(struct
> amdgpu_device *adev, int xcc_id)
>   return r;
>   }
>
> - /* set the virtual and physical id based on partition_mode */
> - gfx_v9_4_3_xcc_program_xcc_id(adev, xcc_id);
> -
>   r = gfx_v9_4_3_xcc_kiq_resume(adev, xcc_id);
>   if (r)
>   return r;
> --
> 2.25.1



RE: [PATCH] drm/amdgpu: Add vbios attribute only if supported

2023-06-15 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: amd-gfx  On Behalf Of Lijo
> Lazar
> Sent: Thursday, June 15, 2023 4:56 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Ma, Le
> ; Kamal, Asad ; Zhang, Hawking
> 
> Subject: [PATCH] drm/amdgpu: Add vbios attribute only if supported
>
> Not all devices carry VBIOS version information. Add the device attribute 
> only if
> supported.
>
> Signed-off-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c | 9 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h | 1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c   | 5 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c  | 2 --
>  4 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> index 9ba4817a9148..f4e3c133a16c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> @@ -1791,6 +1791,15 @@ const struct attribute_group
> amdgpu_vbios_version_attr_group = {
>   .attrs = amdgpu_vbios_version_attrs
>  };
>
> +int amdgpu_atombios_sysfs_init(struct amdgpu_device *adev) {
> + if (adev->mode_info.atom_context)
> + return devm_device_add_group(adev->dev,
> +
> &amdgpu_vbios_version_attr_group);
> +
> + return 0;
> +}
> +
>  /**
>   * amdgpu_atombios_fini - free the driver info and callbacks for atombios
>   *
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> index 4153d520e2a3..b639a80ee3fc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> @@ -217,5 +217,6 @@ int amdgpu_atombios_get_data_table(struct
> amdgpu_device *adev,
>
>  void amdgpu_atombios_fini(struct amdgpu_device *adev);  int
> amdgpu_atombios_init(struct amdgpu_device *adev);
> +int amdgpu_atombios_sysfs_init(struct amdgpu_device *adev);
>
>  #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index e25f085ee886..eda0a598722e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -4018,6 +4018,11 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   /* Get a log2 for easy divisions. */
>   adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
>
> + r = amdgpu_atombios_sysfs_init(adev);
> + if (r)
> + drm_err(&adev->ddev,
> + "registering atombios sysfs failed (%d).\n", r);
> +
>   r = amdgpu_pm_sysfs_init(adev);
>   if (r)
>   DRM_ERROR("registering pm sysfs failed (%d).\n", r); diff --git
> a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 999d008b6b48..70455b00c36e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -2896,12 +2896,10 @@ static struct pci_error_handlers
> amdgpu_pci_err_handler = {
>
>  extern const struct attribute_group amdgpu_vram_mgr_attr_group;  extern
> const struct attribute_group amdgpu_gtt_mgr_attr_group; -extern const struct
> attribute_group amdgpu_vbios_version_attr_group;
>
>  static const struct attribute_group *amdgpu_sysfs_groups[] = {
>   &amdgpu_vram_mgr_attr_group,
>   &amdgpu_gtt_mgr_attr_group,
> - &amdgpu_vbios_version_attr_group,
>   NULL,
>  };
>
> --
> 2.25.1



RE: [PATCH 3/3] drm/amdgpu: Remove unused NBIO interface

2023-06-13 Thread Ma, Le
[AMD Official Use Only - General]

Series is Reviewed-by: Le Ma 

> -Original Message-
> From: Lazar, Lijo 
> Sent: Tuesday, June 13, 2023 6:54 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhang, Hawking ; Deucher, Alexander
> ; Kamal, Asad ; Ma,
> Le 
> Subject: [PATCH 3/3] drm/amdgpu: Remove unused NBIO interface
>
> Set compute partition mode interface in NBIO is no longer used. Remove the
> only implementation from NBIO v7.9
>
> Signed-off-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h |  2 --
>  drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c   | 14 --
>  2 files changed, 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
> index 095aecfb201e..8ab8ae01f87c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
> @@ -99,8 +99,6 @@ struct amdgpu_nbio_funcs {
>   int (*get_compute_partition_mode)(struct amdgpu_device *adev);
>   u32 (*get_memory_partition_mode)(struct amdgpu_device *adev,
>u32 *supp_modes);
> - void (*set_compute_partition_mode)(struct amdgpu_device *adev,
> -enum amdgpu_gfx_partition mode);
>  };
>
>  struct amdgpu_nbio {
> diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
> b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
> index b033935d6749..cd1a02d30420 100644
> --- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
> +++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
> @@ -393,19 +393,6 @@ static int
> nbio_v7_9_get_compute_partition_mode(struct amdgpu_device *adev)
>   return px;
>  }
>
> -static void nbio_v7_9_set_compute_partition_mode(struct amdgpu_device
> *adev,
> - enum amdgpu_gfx_partition mode)
> -{
> - u32 tmp;
> -
> - /* SPX=0, DPX=1, TPX=2, QPX=3, CPX=4 */
> - tmp = RREG32_SOC15(NBIO, 0,
> regBIF_BX_PF0_PARTITION_COMPUTE_STATUS);
> - tmp = REG_SET_FIELD(tmp,
> BIF_BX_PF0_PARTITION_COMPUTE_STATUS,
> - PARTITION_MODE, mode);
> -
> - WREG32_SOC15(NBIO, 0,
> regBIF_BX_PF0_PARTITION_COMPUTE_STATUS, tmp);
> -}
> -
>  static u32 nbio_v7_9_get_memory_partition_mode(struct amdgpu_device
> *adev,
>  u32 *supp_modes)
>  {
> @@ -461,7 +448,6 @@ const struct amdgpu_nbio_funcs nbio_v7_9_funcs = {
>   .ih_control = nbio_v7_9_ih_control,
>   .remap_hdp_registers = nbio_v7_9_remap_hdp_registers,
>   .get_compute_partition_mode =
> nbio_v7_9_get_compute_partition_mode,
> - .set_compute_partition_mode =
> nbio_v7_9_set_compute_partition_mode,
>   .get_memory_partition_mode =
> nbio_v7_9_get_memory_partition_mode,
>   .init_registers = nbio_v7_9_init_registers,  };
> --
> 2.25.1



RE: [PATCH] drm/amd/pm: Fill metrics data for SMUv13.0.6

2023-06-02 Thread Ma, Le
[AMD Official Use Only - General]

> -Original Message-
> From: amd-gfx  On Behalf Of Lijo
> Lazar
> Sent: Friday, June 2, 2023 12:00 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Zhang, Hawking
> 
> Subject: [PATCH] drm/amd/pm: Fill metrics data for SMUv13.0.6
>
> Populate metrics data table for SMU v13.0.6. Add PCIe link speed/width
> information also.
>
> Signed-off-by: Lijo Lazar 
> ---
>  .../drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c  | 108 +++---
>  1 file changed, 67 insertions(+), 41 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> index 75255e0baf91..4ff5a66d446a 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> @@ -80,7 +80,10 @@
>  /* possible frequency drift (1Mhz) */
>  #define EPSILON 1
>
> -#define smnPCIE_ESM_CTRL 0x111003D0
> +#define smnPCIE_ESM_CTRL 0x193D0
> +#define smnPCIE_LC_LINK_WIDTH_CNTL 0x1ab40288
> +#define PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK 0x00000070L
> +#define PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT 0x4
>
>  static const struct cmn2asic_msg_mapping
> smu_v13_0_6_message_map[SMU_MSG_MAX_COUNT] = {
>   MSG_MAP(TestMessage,
> PPSMC_MSG_TestMessage,0),
> @@ -197,6 +200,7 @@ struct PPTable_t {
>  };
>
>  #define SMUQ10_TO_UINT(x) ((x) >> 10)
> +#define SMUQ16_TO_UINT(x) ((x) >> 16)
>
>  struct smu_v13_0_6_dpm_map {
>   enum smu_clk_type clk_type;
> @@ -1935,6 +1939,16 @@ static void
> smu_v13_0_6_log_thermal_throttling_event(struct smu_context *smu)
>
> smu_v13_0_6_throttler_map));
>  }
>
> +static int
> +smu_v13_0_6_get_current_pcie_link_width_level(struct smu_context *smu)
> +{
> + struct amdgpu_device *adev = smu->adev;
> +
> + return (RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL) &
> + PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK) >>
> +PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT;
> +}

Here it can be wrapped like 
REG_GET_FIELD(RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL), PCIE_LC_LINK_WIDTH_CNTL, 
LC_LINK_WIDTH_RD)

It's optional and patch is Reviewed-by: Le Ma  either way.
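
Spelled out, the suggested form keeps the same behaviour but uses the existing
field-accessor macro:

  static int
  smu_v13_0_6_get_current_pcie_link_width_level(struct smu_context *smu)
  {
          struct amdgpu_device *adev = smu->adev;

          return REG_GET_FIELD(RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL),
                               PCIE_LC_LINK_WIDTH_CNTL, LC_LINK_WIDTH_RD);
  }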

> +
>  static int smu_v13_0_6_get_current_pcie_link_speed(struct smu_context *smu)
> {
>   struct amdgpu_device *adev = smu->adev; @@ -1953,8 +1967,12 @@
> static ssize_t smu_v13_0_6_get_gpu_metrics(struct smu_context *smu, void
> **table
>   struct smu_table_context *smu_table = &smu->smu_table;
>   struct gpu_metrics_v1_3 *gpu_metrics =
>   (struct gpu_metrics_v1_3 *)smu_table->gpu_metrics_table;
> + struct amdgpu_device *adev = smu->adev;
> + int ret = 0, inst0, xcc0;
>   MetricsTable_t *metrics;
> - int i, ret = 0;
> +
> + inst0 = adev->sdma.instance[0].aid_id;
> + xcc0 = GET_INST(GC, 0);
>
>   metrics = kzalloc(sizeof(MetricsTable_t), GFP_KERNEL);
>   ret = smu_v13_0_6_get_metrics_table(smu, metrics, true); @@ -
> 1963,51 +1981,59 @@ static ssize_t smu_v13_0_6_get_gpu_metrics(struct
> smu_context *smu, void **table
>
>   smu_cmn_init_soft_gpu_metrics(gpu_metrics, 1, 3);
>
> - /* TODO: Decide on how to fill in zero value fields */
> - gpu_metrics->temperature_edge = 0;
> - gpu_metrics->temperature_hotspot = 0;
> - gpu_metrics->temperature_mem = 0;
> - gpu_metrics->temperature_vrgfx = 0;
> - gpu_metrics->temperature_vrsoc = 0;
> - gpu_metrics->temperature_vrmem = 0;
> -
> - gpu_metrics->average_gfx_activity = 0;
> - gpu_metrics->average_umc_activity = 0;
> - gpu_metrics->average_mm_activity = 0;
> -
> - gpu_metrics->average_socket_power = 0;
> - gpu_metrics->energy_accumulator = 0;
> -
> - gpu_metrics->average_gfxclk_frequency = 0;
> - gpu_metrics->average_socclk_frequency = 0;
> - gpu_metrics->average_uclk_frequency = 0;
> - gpu_metrics->average_vclk0_frequency = 0;
> - gpu_metrics->average_dclk0_frequency = 0;
> -
> - gpu_metrics->current_gfxclk = 0;
> - gpu_metrics->current_socclk = 0;
> - gpu_metrics->current_uclk = 0;
> - gpu_metrics->current_vclk0 = 0;
> - gpu_metrics->current_dclk0 = 0;
> -
> + gpu_metrics->temperature_hotspot =
> + SMUQ10_TO_UINT(metrics->MaxSocketTemperature);
> + /* Individual HBM stack temperature is not reported */
> + gpu_metrics->temperature_mem =
> + SMUQ10_TO_UINT(metrics->MaxHbmTemperature);
> + /* Reports max temperature of all voltage rails */
> + gpu_metrics->temperature_vrsoc =
> + SMUQ10_TO_UINT(metrics->MaxVrTemperature);
> +
> + gpu_metrics->average_gfx_activity =
> + SMUQ10_TO_UINT(metrics->SocketGfxBusy);
> + gpu_metrics->average_umc_activity =
> + SMUQ10_TO_UINT(metrics->DramBandwidthUtilization);
> +
> + gpu_metrics->average_socket_power =
> + SMUQ10_TO_UINT(metrics->SocketPower);
> + gpu_metrics->energy_accumulator =

RE: [PATCH 3/3] drm/amd/pm: Fix SMUv13.0.6 throttle status report

2023-06-01 Thread Ma, Le
[AMD Official Use Only - General]

Series is Reviewed-by: Le Ma 

> -Original Message-
> From: Kamal, Asad 
> Sent: Thursday, June 1, 2023 3:28 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhang, Hawking ; Lazar, Lijo
> ; Ma, Le ; Zhang, Morris
> 
> Subject: [PATCH 3/3] drm/amd/pm: Fix SMUv13.0.6 throttle status report
>
> From: Lijo Lazar 
>
> Instead of accumulated counters, PMFW will pass the throttle reason along
> with throttle interrupt. Use that context information to report the exact
> reason for throttling.
>
> v2: Removed Dummy definition
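
The producing side is not visible in this quote: presumably the throttling
interrupt handler stores the reason so the getter can read it later, along the
lines of (hypothetical sketch; only power_context->throttle_status is taken
from the hunks below):

  /* In the SMU IRQ handler, on a throttling interrupt: */
  atomic_set(&power_context->throttle_status, throttle_reason_from_fw);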
>
> Signed-off-by: Asad Kamal 
> Signed-off-by: Lijo Lazar 
> Reviewed-by: Hawking Zhang 
> ---
>  .../drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c  | 95 +
> --
>  1 file changed, 45 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> index 27fd71afc73f..b9f32e0364db 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
> @@ -82,8 +82,6 @@
>
>  #define smnPCIE_ESM_CTRL 0x111003D0
>
> -#define THROTTLER_TEMP_HBM_BIT 2
> -
>  static const struct cmn2asic_msg_mapping
> smu_v13_0_6_message_map[SMU_MSG_MAX_COUNT] = {
>   MSG_MAP(TestMessage,
> PPSMC_MSG_TestMessage,0),
>   MSG_MAP(GetSmuVersion,
> PPSMC_MSG_GetSmuVersion,  1),
> @@ -174,17 +172,12 @@ static const struct cmn2asic_mapping
> smu_v13_0_6_table_map[SMU_TABLE_COUNT] = {
>   TAB_MAP(I2C_COMMANDS),
>  };
>
> -#define THROTTLER_PROCHOT_GFX_BIT  0
> -#define THROTTLER_PPT_BIT 1
> -#define THROTTLER_TEMP_SOC_BIT 2
> -#define THROTTLER_TEMP_VR_GFX_BIT 3
> -
>  static const uint8_t smu_v13_0_6_throttler_map[] = {
>   [THROTTLER_PPT_BIT] = (SMU_THROTTLER_PPT0_BIT),
> - [THROTTLER_TEMP_SOC_BIT]= (SMU_THROTTLER_TEMP_GPU_BIT),
> - [THROTTLER_TEMP_HBM_BIT]=
> (SMU_THROTTLER_TEMP_MEM_BIT),
> - [THROTTLER_TEMP_VR_GFX_BIT] =
> (SMU_THROTTLER_TEMP_VR_GFX_BIT),
> - [THROTTLER_PROCHOT_GFX_BIT] =
> (SMU_THROTTLER_PROCHOT_GFX_BIT),
> + [THROTTLER_THERMAL_SOCKET_BIT]  =
> (SMU_THROTTLER_TEMP_GPU_BIT),
> + [THROTTLER_THERMAL_HBM_BIT] =
> (SMU_THROTTLER_TEMP_MEM_BIT),
> + [THROTTLER_THERMAL_VR_BIT]  =
> (SMU_THROTTLER_TEMP_VR_GFX_BIT),
> + [THROTTLER_PROCHOT_BIT] =
> (SMU_THROTTLER_PROCHOT_GFX_BIT),
>  };
>
>  struct PPTable_t {
> @@ -642,16 +635,14 @@ static int
> smu_v13_0_6_freqs_in_same_level(int32_t frequency1,
>   return (abs(frequency1 - frequency2) <= EPSILON);  }
>
> -static uint32_t smu_v13_0_6_get_throttler_status(struct smu_context *smu,
> -  MetricsTable_t *metrics)
> +static uint32_t smu_v13_0_6_get_throttler_status(struct smu_context
> +*smu)
>  {
> + struct smu_power_context *smu_power = &smu->smu_power;
> + struct smu_13_0_power_context *power_context =
> +smu_power->power_context;
>   uint32_t  throttler_status = 0;
>
> - throttler_status |= metrics->ProchotResidencyAcc > 0 ? 1U <<
> THROTTLER_PROCHOT_GFX_BIT : 0;
> - throttler_status |= metrics->PptResidencyAcc > 0 ? 1U <<
> THROTTLER_PPT_BIT : 0;
> - throttler_status |= metrics->SocketThmResidencyAcc > 0 ?  1U <<
> THROTTLER_TEMP_SOC_BIT : 0;
> - throttler_status |= metrics->VrThmResidencyAcc > 0 ? 1U <<
> THROTTLER_TEMP_VR_GFX_BIT : 0;
> - throttler_status |= metrics->HbmThmResidencyAcc > 0 ? 1U <<
> THROTTLER_TEMP_HBM_BIT : 0;
> + throttler_status = atomic_read(&power_context->throttle_status);
> + dev_dbg(smu->adev->dev, "SMU Throttler status: %u",
> throttler_status);
>
>   return throttler_status;
>  }
> @@ -721,9 +712,6 @@ static int smu_v13_0_6_get_smu_metrics_data(struct
> smu_context *smu,
>   case METRICS_TEMPERATURE_VRSOC:
>   *value = SMUQ10_TO_UINT(metrics->MaxVrTemperature);
>   break;
> - case METRICS_THROTTLER_STATUS:
> - *value = smu_v13_0_6_get_throttler_status(smu, metrics);
> - break;
>   default:
>   *value = UINT_MAX;
>   break;
> @@ -1290,13 +1278,11 @@ static int smu_v13_0_6_irq_process(struct
> amdgpu_device *adev,
>  struct amdgpu_iv_entry *entry)
>  {
>   struct smu_context *smu = adev->powerplay.pp_handle;
> + struct smu_power_context *smu_power = &smu->smu_power;
> + struct smu_13_0_power_context *power_context =
> +smu_power->po

RE: [PATCH] drm/amdgpu: check correct allocated mqd_backup object after alloc

2023-04-26 Thread Ma, Le
[AMD Official Use Only - General]

Thanks for catching these. I double-checked that the two places are good in the 
topic branch. The patch is Reviewed-by: Le Ma 

> -Original Message-
> From: Chen, Guchun 
> Sent: Wednesday, April 26, 2023 11:31 AM
> To: amd-gfx@lists.freedesktop.org; Deucher, Alexander
> ; Zhang, Hawking ;
> Ma, Le 
> Cc: Chen, Guchun 
> Subject: [PATCH] drm/amdgpu: check correct allocated mqd_backup object
> after alloc
>
> Instead of the default one, check the right mqd_backup object.
>
> Signed-off-by: Guchun Chen 
> Cc: Le Ma 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> index 2cf1f88fde48..66b9740ec376 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> @@ -379,7 +379,7 @@ int amdgpu_gfx_kiq_init(struct amdgpu_device *adev,
> int amdgpu_gfx_mqd_sw_init(struct amdgpu_device *adev,
>  unsigned mqd_size, int xcc_id)
>  {
> - int r, i;
> + int r, i, j;
>   struct amdgpu_kiq *kiq = &adev->gfx.kiq[xcc_id];
>   struct amdgpu_ring *ring = &kiq->ring;
>
> @@ -431,7 +431,8 @@ int amdgpu_gfx_mqd_sw_init(struct amdgpu_device
> *adev,
>
>   /* create MQD for each KCQ */
>   for (i = 0; i < adev->gfx.num_compute_rings; i++) {
> - ring = &adev->gfx.compute_ring[i + xcc_id * adev-
> >gfx.num_compute_rings];
> + j = i + xcc_id * adev->gfx.num_compute_rings;
> + ring = &adev->gfx.compute_ring[j];
>   if (!ring->mqd_obj) {
>   r = amdgpu_bo_create_kernel(adev, mqd_size,
> PAGE_SIZE,
>
> AMDGPU_GEM_DOMAIN_GTT, &ring->mqd_obj, @@ -443,8 +444,8 @@ int
> amdgpu_gfx_mqd_sw_init(struct amdgpu_device *adev,
>
>   ring->mqd_size = mqd_size;
>   /* prepare MQD backup */
> - adev->gfx.mec.mqd_backup[i + xcc_id * adev-
> >gfx.num_compute_rings] = kmalloc(mqd_size, GFP_KERNEL);
> - if (!adev->gfx.mec.mqd_backup[i])
> + adev->gfx.mec.mqd_backup[j] = kmalloc(mqd_size,
> GFP_KERNEL);
> + if (!adev->gfx.mec.mqd_backup[j])
>   dev_warn(adev->dev, "no memory to create
> MQD backup for ring %s\n", ring->name);
>   }
>   }
> --
> 2.25.1



RE: [PATCH] drm/amdgpu: fix a build warning by a typo in amdgpu_gfx.c

2023-04-25 Thread Ma, Le
[AMD Official Use Only - General]

Reviewed-by: Le Ma 

> -Original Message-
> From: Chen, Guchun 
> Sent: Wednesday, April 26, 2023 11:12 AM
> To: amd-gfx@lists.freedesktop.org; Deucher, Alexander
> ; Zhang, Hawking ;
> Ma, Le 
> Cc: Chen, Guchun ; kernel test robot 
> Subject: [PATCH] drm/amdgpu: fix a build warning by a typo in amdgpu_gfx.c
>
> This appears to be a typo introduced when adding multi-xx support.
>
> Reported-by: kernel test robot 
> Signed-off-by: Guchun Chen 
> Cc: Le Ma 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> index 60bb4bba1994..2cf1f88fde48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> @@ -470,8 +470,8 @@ void amdgpu_gfx_mqd_sw_fini(struct amdgpu_device
> *adev, int xcc_id)
>
>   for (i = 0; i < adev->gfx.num_compute_rings; i++) {
>   j = i + xcc_id * adev->gfx.num_compute_rings;
> - ring = &adev->gfx.compute_ring[i];
> - kfree(adev->gfx.mec.mqd_backup[i]);
> + ring = &adev->gfx.compute_ring[j];
> + kfree(adev->gfx.mec.mqd_backup[j]);
>   amdgpu_bo_free_kernel(&ring->mqd_obj,
> &ring->mqd_gpu_addr,
> &ring->mqd_ptr);
> --
> 2.25.1



RE: [PATCH] drm/amdgpu: Refactor mode2 reset logic for v13.0.2

2022-03-02 Thread Ma, Le
Reviewed-by: Le Ma 

> -Original Message-
> From: amd-gfx  On Behalf Of Lazar,
> Lijo
> Sent: Thursday, March 3, 2022 11:33 AM
> To: Lazar, Lijo ; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Zhang, Hawking
> 
> Subject: RE: [PATCH] drm/amdgpu: Refactor mode2 reset logic for v13.0.2
> 
> [Public]
> 
> 
> 
> Thanks,
> Lijo
> 
> -Original Message-
> From: amd-gfx  On Behalf Of Lijo
> Lazar
> Sent: Monday, February 28, 2022 3:27 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander ; Zhang, Hawking
> 
> Subject: [PATCH] drm/amdgpu: Refactor mode2 reset logic for v13.0.2
> 
> Use IP version and refactor reset logic to apply to a list of devices.
> 
> Signed-off-by: Lijo Lazar 
> Reviewed-by: Hawking Zhang 
> ---
>  drivers/gpu/drm/amd/amdgpu/aldebaran.c| 66 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c |  8 +--
>  2 files changed, 54 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> index a545df4efce1..c6cc493a5486 100644
> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> @@ -31,6 +31,17 @@
>  #include "amdgpu_psp.h"
>  #include "amdgpu_xgmi.h"
> 
> +static bool aldebaran_is_mode2_default(struct amdgpu_reset_control
> +*reset_ctl) {
> + struct amdgpu_device *adev = (struct amdgpu_device
> +*)reset_ctl->handle;
> +
> + if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
> +  adev->gmc.xgmi.connected_to_cpu))
> + return true;
> +
> + return false;
> +}
> +
>  static struct amdgpu_reset_handler *
>  aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
>   struct amdgpu_reset_context *reset_context) @@ -
> 48,7 +59,7 @@ aldebaran_get_reset_handler(struct amdgpu_reset_control
> *reset_ctl,
>   }
>   }
> 
> - if (adev->gmc.xgmi.connected_to_cpu) {
> + if (aldebaran_is_mode2_default(reset_ctl)) {
>   list_for_each_entry(handler, &reset_ctl->reset_handlers,
>handler_list) {
>   if (handler->reset_method ==
> AMD_RESET_METHOD_MODE2) { @@ -136,18 +147,31 @@ static int
> aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
> struct amdgpu_reset_context *reset_context)  {
> - struct amdgpu_device *tmp_adev = NULL;
>   struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl-
> >handle;
> + struct amdgpu_device *tmp_adev = NULL;
> + struct list_head reset_device_list;
>   int r = 0;
> 
>   dev_dbg(adev->dev, "aldebaran perform hw reset\n");
> - if (reset_context->hive == NULL) {
> + if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
> + reset_context->hive == NULL) {
>   /* Wrong context, return error */
>   return -EINVAL;
>   }
> 
> - list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
> -  gmc.xgmi.head) {
> + INIT_LIST_HEAD(&reset_device_list);
> + if (reset_context->hive) {
> + list_for_each_entry (tmp_adev,
> +  &reset_context->hive->device_list,
> +  gmc.xgmi.head)
> + list_add_tail(&tmp_adev->reset_list,
> +   &reset_device_list);
> + } else {
> + list_add_tail(&reset_context->reset_req_dev->reset_list,
> +   &reset_device_list);
> + }
> +
> + list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
>   mutex_lock(&tmp_adev->reset_cntl->reset_lock);
>   tmp_adev->reset_cntl->active_reset =
> AMD_RESET_METHOD_MODE2;
>   }
> @@ -155,8 +179,7 @@ aldebaran_mode2_perform_reset(struct
> amdgpu_reset_control *reset_ctl,
>* Mode2 reset doesn't need any sync between nodes in XGMI hive,
> instead launch
>* them together so that they can be completed asynchronously on
> multiple nodes
>*/
> - list_for_each_entry(tmp_adev, &reset_context->hive->device_list,
> -  gmc.xgmi.head) {
> + list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
>   /* For XGMI run all resets in parallel to speed up the process 
> */
>   if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
>   if (!queue_work(system_unbound_wq,
> @@ -174,9 +197,7 @@ aldebaran_mode2_perform_reset(struct
> amdgpu_reset_control *reset_ctl,
> 
>   /* For XGMI wait for all resets to complete before proceed */
>   if (!r) {
> - list_for_each_entry(tmp_adev,
> -  &reset_context->hive->device_list,
> -  gmc.xgmi.head) {
> + list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
>   if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
>   

RE: [PATCH] drm/amdgpu: correct initial cp_hqd_quantum for gfx9

2021-09-27 Thread Ma, Le
[AMD Official Use Only]

Reviewed-by: Le Ma 

-Original Message-
From: Hawking Zhang 
Sent: Sunday, September 26, 2021 10:29 PM
To: amd-gfx@lists.freedesktop.org; Ma, Le ; Deucher, Alexander 
; Zhang, Morris 
Cc: Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: correct initial cp_hqd_quantum for gfx9

The value of mmCP_HQD_QUANTUM was not read from the correct register offset.
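
For background, RREG32_SOC15() resolves the register through the per-IP
base-offset tables rather than treating the symbolic offset as an absolute
MMIO address; conceptually it expands to something like (simplified, not the
literal macro):

  RREG32(adev->reg_offset[GC_HWIP][0][mmCP_HQD_QUANTUM_BASE_IDX] +
         mmCP_HQD_QUANTUM);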

Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 603c259..025184a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3599,7 +3599,7 @@ static int gfx_v9_0_mqd_init(struct amdgpu_ring *ring)

/* set static priority for a queue/ring */
gfx_v9_0_mqd_set_priority(ring, mqd);
-   mqd->cp_hqd_quantum = RREG32(mmCP_HQD_QUANTUM);
+   mqd->cp_hqd_quantum = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);

/* map_queues packet doesn't need activate the queue,
 * so only kiq need set this field.
--
2.7.4



RE: [PATCH 4/4] drm/amdgpu: assign the cpu/gpu address of fence from ring

2020-07-28 Thread Ma, Le
[AMD Public Use]

Series is Reviewed-by: Le Ma 

Regards,
Ma Le

-Original Message-
From: Xiao, Jack  
Sent: Tuesday, July 28, 2020 6:22 PM
To: amd-gfx@lists.freedesktop.org; Deucher, Alexander 
; Zhang, Hawking ; Koenig, 
Christian ; Ma, Le 
Cc: Xiao, Jack ; Koenig, Christian 
Subject: [PATCH 4/4] drm/amdgpu: assign the cpu/gpu address of fence from ring

assign the cpu/gpu address of fence for the normal or mes ring from ring 
structure.

Signed-off-by: Jack Xiao 
Reviewed-by: Hawking Zhang 
Acked-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 58d4c219178a..0be3e2007387 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -407,8 +407,8 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
uint64_t index;
 
if (ring->funcs->type != AMDGPU_RING_TYPE_UVD) {
-   ring->fence_drv.cpu_addr = >wb.wb[ring->fence_offs];
-   ring->fence_drv.gpu_addr = adev->wb.gpu_addr + 
(ring->fence_offs * 4);
+   ring->fence_drv.cpu_addr = ring->fence_cpu_addr;
+   ring->fence_drv.gpu_addr = ring->fence_gpu_addr;
} else {
/* put fence directly behind firmware */
index = ALIGN(adev->uvd.fw->size, 8);
--
2.26.2
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: check sdma ras funcs pointer before accessing

2020-01-09 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Le Ma 

-Original Message-
From: amd-gfx  On Behalf Of Hawking Zhang
Sent: Thursday, January 9, 2020 7:42 PM
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: check sdma ras funcs pointer before accessing

sdma ras funcs are not supported by ASIC prior to vega20

Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index f4107f9b75f3..c4b4caaf56fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -1810,7 +1810,10 @@ static int sdma_v4_0_late_init(void *handle)
RREG32_SDMA(i, mmSDMA0_EDC_COUNTER);
}
 
-   return adev->sdma.funcs->ras_late_init(adev, _info);
+   if (adev->sdma.funcs && adev->sdma.funcs->ras_late_init)
+   return adev->sdma.funcs->ras_late_init(adev, _info);
+   else
+   return 0;
 }
 
 static int sdma_v4_0_sw_init(void *handle)
@@ -1882,7 +1885,8 @@ static int sdma_v4_0_sw_fini(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int i;
 
-   adev->sdma.funcs->ras_fini(adev);
+   if (adev->sdma.funcs && adev->sdma.funcs->ras_fini)
+   adev->sdma.funcs->ras_fini(adev);
 
for (i = 0; i < adev->sdma.num_instances; i++) {
amdgpu_ring_fini(>sdma.instance[i].ring);
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: simplify function return logic

2019-12-23 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Le Ma 

-Original Message-
From: Chen, Guchun  
Sent: Tuesday, December 24, 2019 2:33 PM
To: Zhou1, Tao ; Zhang, Hawking ; Ma, 
Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun 
Subject: [PATCH] drm/amdgpu: simplify function return logic

Former return logic is redundant.

Signed-off-by: Guchun Chen 
---
 drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
index bb701dbfd472..41af6d0801d9 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
@@ -456,10 +456,8 @@ static int nbio_v7_4_init_ras_controller_interrupt (struct 
amdgpu_device *adev)
r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_BIF,
  NBIF_7_4__SRCID__RAS_CONTROLLER_INTERRUPT,
  >nbio.ras_controller_irq);
-   if (r)
-   return r;
 
-   return 0;
+   return r;
 }
 
 static int nbio_v7_4_init_ras_err_event_athub_interrupt (struct amdgpu_device 
*adev) @@ -476,10 +474,8 @@ static int 
nbio_v7_4_init_ras_err_event_athub_interrupt (struct amdgpu_device *a
r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_BIF,
  NBIF_7_4__SRCID__ERREVENT_ATHUB_INTERRUPT,
  >nbio.ras_err_event_athub_irq);
-   if (r)
-   return r;
 
-   return 0;
+   return r;
 }
 
 #define smnPARITY_ERROR_STATUS_UNCORR_GRP2 0x13a20030
--
2.17.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH v2 1/5] drm/amdgpu: reverts commit b01245ff54db66073b104ac9d9fbefb7b264b36d.

2019-12-17 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]


Hi Andry



Please check the 3 minor comments in this patch. With those addressed, the v2 
series is Reviewed-by: Le Ma mailto:le...@amd.com>



Regards,

Ma Le



-Original Message-
From: Andrey Grodzovsky 
Sent: Saturday, December 14, 2019 12:54 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [PATCH v2 1/5] drm/amdgpu: reverts commit 
b01245ff54db66073b104ac9d9fbefb7b264b36d.



In preparation for doing XGMI reset synchronization using task barrier.



Signed-off-by: Andrey Grodzovsky 
mailto:andrey.grodzov...@amd.com>>

Reviewed-by: Le Ma mailto:le...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 -

drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 76 +-

2 files changed, 12 insertions(+), 66 deletions(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

index a78a363..50bab33 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

@@ -1001,8 +1001,6 @@ struct amdgpu_device {



boolpm_sysfs_en;

   boolucode_sysfs_en;

-

-   boolin_baco;

};



 static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device 
*bdev) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

index 7324a5f..1d19edfa 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

@@ -2667,7 +2667,7 @@ static void amdgpu_device_xgmi_reset_func(struct 
work_struct *__work)

   if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)

   adev->asic_reset_res = (adev->in_baco == false) ?

   
amdgpu_device_baco_enter(adev->ddev) :

-   
amdgpu_device_baco_exit(adev->ddev);

+  
qamdgpu_device_baco_exit(adev->ddev);

[Le] 1/3: Still unnecessary typo here, although it will be removed in patch #4.

   else

   adev->asic_reset_res = amdgpu_asic_reset(adev);



@@ -3796,18 +3796,13 @@ static int amdgpu_device_pre_asic_reset(struct 
amdgpu_device *adev,

   return r;

}



-static int amdgpu_do_asic_reset(struct amdgpu_device *adev,

-  struct amdgpu_hive_info *hive,

+static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,

  struct list_head *device_list_handle,

  bool *need_full_reset_arg)

{

   struct amdgpu_device *tmp_adev = NULL;

   bool need_full_reset = *need_full_reset_arg, vram_lost = false;

   int r = 0;

-   int cpu = smp_processor_id();

-   bool use_baco =

-   (amdgpu_asic_reset_method(adev) == 
AMD_RESET_METHOD_BACO) ?

-   true : false;



/*

* ASIC reset has to be done on all HGMI hive nodes ASAP @@ -3815,62 
+3810,22 @@ static int amdgpu_do_asic_reset(struct amdgpu_device *adev,

*/

   if (need_full_reset) {

   list_for_each_entry(tmp_adev, device_list_handle, 
gmc.xgmi.head) {

-   /*

-   * For XGMI run all resets in parallel to 
speed up the

-   * process by scheduling the highpri wq on 
different

-   * cpus. For XGMI with baco reset, all nodes 
must enter

-   * baco within close proximity before anyone 
exit.

-   */

+  /* For XGMI run all resets in parallel to 
speed up the process */

   if (tmp_adev->gmc.xgmi.num_physical_nodes > 
1) {

-   if (!queue_work_on(cpu, 
system_highpri_wq,

-  
_adev->xgmi_reset_work))

+  if 
(!queue_work(system_highpri_wq, _adev->xgmi_reset_work))

   r = -EALREADY;

-   cpu = cpumask_next(cpu, 
cpu_online_mask);

   } else

   r = amdgpu_asic_reset(tmp_adev);

-   if (r)

-   break;

-   }

-

-   /* For XGMI wait for all work to complete be

RE: [RESEND PATCH 5/5] drm/amdgpu: Switch from system_highpri_wq to system_unbound_wq

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Le Ma 

Regards,
Ma Le

-Original Message-
From: Andrey Grodzovsky  
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [RESEND PATCH 5/5] drm/amdgpu: Switch from system_highpri_wq to 
system_unbound_wq

This is to avoid queueing jobs to same CPU during XGMI hive reset because there 
is a strict timeline for when the reset commands must reach all the GPUs in the 
hive.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index e4089a0..1518565 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3842,7 +3842,7 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info 
*hive,
list_for_each_entry(tmp_adev, device_list_handle, 
gmc.xgmi.head) {
/* For XGMI run all resets in parallel to speed up the 
process */
if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
-   if (!queue_work(system_highpri_wq, 
_adev->xgmi_reset_work))
+   if (!queue_work(system_unbound_wq, 
_adev->xgmi_reset_work))
r = -EALREADY;
} else
r = amdgpu_asic_reset(tmp_adev);
--
2.7.4
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [RESEND PATCH 4/5] Subject: drm/amdgpu: Redo XGMI reset synchronization.

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]






-Original Message-
From: Andrey Grodzovsky 
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [RESEND PATCH 4/5] Subject: drm/amdgpu: Redo XGMI reset 
synchronization.



Use task barrier in XGMI hive to synchronize ASIC resets across devices in XGMI 
hive.



Signed-off-by: Andrey Grodzovsky 
mailto:andrey.grodzov...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 42 +-

1 file changed, 36 insertions(+), 6 deletions(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

index 1d19edfa..e4089a0 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

@@ -67,6 +67,7 @@

#include "amdgpu_tmz.h"



 #include 

+#include 



 MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");

MODULE_FIRMWARE("amdgpu/vega12_gpu_info.bin");

@@ -2663,14 +2664,43 @@ static void amdgpu_device_xgmi_reset_func(struct 
work_struct *__work)  {

   struct amdgpu_device *adev =

   container_of(__work, struct amdgpu_device, 
xgmi_reset_work);

+  struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, 0);



-   if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)

-   adev->asic_reset_res = (adev->in_baco == false) ?

-   
amdgpu_device_baco_enter(adev->ddev) :

-   
qamdgpu_device_baco_exit(adev->ddev);

-   else

-   adev->asic_reset_res = amdgpu_asic_reset(adev);

+  /*

+  * Use task barrier to synchronize all xgmi reset works across the

+  * hive.

+  * task_barrier_enter and task_barrier_exit will block untill all the

+  * threads running the xgmi reset works reach those points. I assume

+  * guarantee of progress here for all the threads as the workqueue 
code

+  * creates new worker threads as needed by amount of work items in 
queue

+  * (see worker_thread) and also each thread sleeps in the barrir and 
by

+  * this yielding the CPU for other work threads to make progress.

+  */

[Le]: This comment can be adjusted since we switch to system_unbound_wq in 
patch #5.

+  if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {

+

+  if (hive)

+  task_barrier_enter(>tb);

[Le]: The multiple hive condition can be checked only once and moved to the 
location right after the assignment.

+

+  adev->asic_reset_res = 
amdgpu_device_baco_enter(adev->ddev);

+

+  if (adev->asic_reset_res)

+  goto fail;

+

+  if (hive)

+  task_barrier_exit(>tb);

[Le]: Same as above.

+

+  adev->asic_reset_res = 
amdgpu_device_baco_exit(adev->ddev);

+

+  if (adev->asic_reset_res)

+  goto fail;

+  } else {

+  if (hive)

+  task_barrier_full(>tb);

[Le]: Same as above.



With above addressed, Reviewed-by: Le Ma mailto:le...@amd.com>>



Regards,

Ma Le

+

+  adev->asic_reset_res =  amdgpu_asic_reset(adev);

+  }



+fail:

   if (adev->asic_reset_res)

   DRM_WARN("ASIC reset failed with error, %d for drm dev, 
%s",

adev->asic_reset_res, adev->ddev->unique);

--

2.7.4


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
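
For illustration, a behavior-preserving sketch of how the hive lookup and the
repeated "if (hive)" checks in the worker above could be consolidated, roughly
along the lines of the review comments; this is a hypothetical restructuring,
not the posted patch:

static void amdgpu_device_xgmi_reset_func(struct work_struct *__work)
{
	struct amdgpu_device *adev =
		container_of(__work, struct amdgpu_device, xgmi_reset_work);
	struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, 0);
	/* Check the hive once, right after the assignment. */
	struct task_barrier *tb = hive ? &hive->tb : NULL;

	if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
		if (tb)
			task_barrier_enter(tb);
		adev->asic_reset_res = amdgpu_device_baco_enter(adev->ddev);
		if (adev->asic_reset_res)
			goto fail;

		if (tb)
			task_barrier_exit(tb);
		adev->asic_reset_res = amdgpu_device_baco_exit(adev->ddev);
	} else {
		if (tb)
			task_barrier_full(tb);
		adev->asic_reset_res = amdgpu_asic_reset(adev);
	}

fail:
	if (adev->asic_reset_res)
		DRM_WARN("ASIC reset failed with error, %d for drm dev, %s",
			 adev->asic_reset_res, adev->ddev->unique);
}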


RE: [RESEND PATCH 1/5] drm/amdgpu: reverts commit b01245ff54db66073b104ac9d9fbefb7b264b36d.

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]




-Original Message-
From: Andrey Grodzovsky 
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [RESEND PATCH 1/5] drm/amdgpu: reverts commit 
b01245ff54db66073b104ac9d9fbefb7b264b36d.



In preparation for doing XGMI reset synchronization using task barrier.



Signed-off-by: Andrey Grodzovsky 

---

drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 -

drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 76 +-

2 files changed, 12 insertions(+), 66 deletions(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

index a78a363..50bab33 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

@@ -1001,8 +1001,6 @@ struct amdgpu_device {



boolpm_sysfs_en;

   boolucode_sysfs_en;

-

-   boolin_baco;

};



 static inline struct amdgpu_device *amdgpu_ttm_adev(struct ttm_bo_device 
*bdev) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

index 7324a5f..1d19edfa 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

@@ -2667,7 +2667,7 @@ static void amdgpu_device_xgmi_reset_func(struct 
work_struct *__work)

   if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)

   adev->asic_reset_res = (adev->in_baco == false) ?

   
amdgpu_device_baco_enter(adev->ddev) :

-   
amdgpu_device_baco_exit(adev->ddev);

+  
qamdgpu_device_baco_exit(adev->ddev);

[Le]: Typo here. With it fixed, Reviewed-by: Le Ma 
mailto:le...@amd.com>>



Regards,

Ma Le

   else

   adev->asic_reset_res = amdgpu_asic_reset(adev);



@@ -3796,18 +3796,13 @@ static int amdgpu_device_pre_asic_reset(struct 
amdgpu_device *adev,

   return r;

}



-static int amdgpu_do_asic_reset(struct amdgpu_device *adev,

-  struct amdgpu_hive_info *hive,

+static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,

  struct list_head *device_list_handle,

  bool *need_full_reset_arg)

{

   struct amdgpu_device *tmp_adev = NULL;

   bool need_full_reset = *need_full_reset_arg, vram_lost = false;

   int r = 0;

-   int cpu = smp_processor_id();

-   bool use_baco =

-   (amdgpu_asic_reset_method(adev) == 
AMD_RESET_METHOD_BACO) ?

-   true : false;



/*

* ASIC reset has to be done on all HGMI hive nodes ASAP @@ -3815,62 
+3810,22 @@ static int amdgpu_do_asic_reset(struct amdgpu_device *adev,

*/

   if (need_full_reset) {

   list_for_each_entry(tmp_adev, device_list_handle, 
gmc.xgmi.head) {

-   /*

-   * For XGMI run all resets in parallel to 
speed up the

-   * process by scheduling the highpri wq on 
different

-   * cpus. For XGMI with baco reset, all nodes 
must enter

-   * baco within close proximity before anyone 
exit.

-   */

+  /* For XGMI run all resets in parallel to 
speed up the process */

   if (tmp_adev->gmc.xgmi.num_physical_nodes > 
1) {

-   if (!queue_work_on(cpu, 
system_highpri_wq,

-  
_adev->xgmi_reset_work))

+  if 
(!queue_work(system_highpri_wq, _adev->xgmi_reset_work))

   r = -EALREADY;

-   cpu = cpumask_next(cpu, 
cpu_online_mask);

   } else

   r = amdgpu_asic_reset(tmp_adev);

-   if (r)

-   break;

-   }

-

-   /* For XGMI wait for all work to complete before 
proceed */

-   if (!r) {

-   list_for_each_entry(tmp_adev, 
device_list_handle,

-

RE: [RESEND PATCH 2/5] drm: Add Reusable task barrier.

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]






-Original Message-
From: Andrey Grodzovsky 
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [RESEND PATCH 2/5] drm: Add Reusable task barrier.



It is used to synchronize N threads at a rendevouz point before execution of 
critical code that has to be started by all the threads at approximatly the 
same time.



Signed-off-by: Andrey Grodzovsky 
mailto:andrey.grodzov...@amd.com>>

---

include/drm/task_barrier.h | 106 +

1 file changed, 106 insertions(+)

create mode 100644 include/drm/task_barrier.h



diff --git a/include/drm/task_barrier.h b/include/drm/task_barrier.h new file 
mode 100644 index 000..81fb0f7

--- /dev/null

+++ b/include/drm/task_barrier.h

@@ -0,0 +1,106 @@

+/*

+ * Copyright 2019 Advanced Micro Devices, Inc.

+ *

+ * Permission is hereby granted, free of charge, to any person

+obtaining a

+ * copy of this software and associated documentation files (the

+"Software"),

+ * to deal in the Software without restriction, including without

+limitation

+ * the rights to use, copy, modify, merge, publish, distribute,

+sublicense,

+ * and/or sell copies of the Software, and to permit persons to whom

+the

+ * Software is furnished to do so, subject to the following conditions:

+ *

+ * The above copyright notice and this permission notice shall be

+included in

+ * all copies or substantial portions of the Software.

+ *

+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,

+EXPRESS OR

+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF

+MERCHANTABILITY,

+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT

+SHALL

+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,

+DAMAGES OR

+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR

+OTHERWISE,

+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE

+OR

+ * OTHER DEALINGS IN THE SOFTWARE.

+ *

+ */

+#include 

+#include 

+

+/*

+ * Reusable 2 PHASE task barrier (randevouz point) implementation for N tasks.

+ * Based on the Little book of sempahores -

+https://greenteapress.com/wp/semaphores/

+ */

+

+

+

+#ifndef DRM_TASK_BARRIER_H_

+#define DRM_TASK_BARRIER_H_

+



[Le]: It might be better to prefix "drm_" to the functions and structure below, 
even this header file name.



+/*

+ * Represents an instance of a task barrier.

+ */

+struct task_barrier {

+  unsigned int n;

[Le]: We can define it as signed type here for more common use.

+  atomic_t count;

+  struct semaphore enter_turnstile;

+  struct semaphore exit_turnstile;

+};

+

+static inline void task_barrier_signal_turnstile(struct semaphore *turnstile,

+  unsigned 
int n)

+{

+  int i;

+

+  for (i = 0 ; i < n; i++)

+  up(turnstile);

+}

+

+static inline void task_barrier_init(struct task_barrier *tb) {

+  tb->n = 0;

+  atomic_set(>count, 0);

+  sema_init(>enter_turnstile, 0);

+  sema_init(>exit_turnstile, 0);

+}

+

+static inline void task_barrier_add_task(struct task_barrier *tb) {

+  tb->n++;

+}

+

+static inline void task_barrier_rem_task(struct task_barrier *tb) {

+  tb->n--;

+}

+

+/*

+ * Lines up all the threads BEFORE the critical point.

+ *

+ * When all thread passed this code the entry barrier is back to locked state.

+ */

+static inline void task_barrier_enter(struct task_barrier *tb) {

+  if (atomic_inc_return(>count) == tb->n)

+  task_barrier_signal_turnstile(>enter_turnstile, 
tb->n);

+

+  down(>enter_turnstile);

+}

+

+/*

+ * Lines up all the threads AFTER the critical point.

+ *

+ * This function is used to avoid any one thread running ahead of the

+reset if

[Le]: No need to mention "reset" here.



With the above addressed, Acked-by: Le Ma le...@amd.com<mailto:le...@amd.com>



Regards,

Ma Le

+ * the barrier is used in a loop (repeatedly) .

+ */

+static inline void task_barrier_exit(struct task_barrier *tb) {

+  if (atomic_dec_return(>count) == 0)

+  task_barrier_signal_turnstile(>exit_turnstile, 
tb->n);

+

+  down(>exit_turnstile);

+}

+

+static inline void task_barrier_full(struct task_barrier *tb) {

+  task_barrier_enter(tb);

+  task_barrier_exit(tb);

+}

+

+#endif

--

2.7.4


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
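
For reference, a minimal usage sketch of the two-phase barrier posted above,
assuming the helpers land as-is; the worker and setup function here are
illustrative only, not actual driver code:

#include <linux/workqueue.h>
#include <drm/task_barrier.h>

/* One barrier shared by all participating workers (e.g. one per XGMI node). */
static struct task_barrier tb;

static void node_work_func(struct work_struct *work)
{
	/* Phase 1: block until every registered task has arrived, so the
	 * critical code starts on all of them at approximately the same time. */
	task_barrier_enter(&tb);

	/* ... critical section (e.g. send the per-node reset message) ... */

	/* Phase 2: block again so no task runs ahead into the next use of the
	 * barrier before the others have left the critical section. */
	task_barrier_exit(&tb);
}

static void example_setup(struct work_struct *works, int num_nodes)
{
	int i;

	task_barrier_init(&tb);

	/* Register all participants before any of them may hit the barrier. */
	for (i = 0; i < num_nodes; i++)
		task_barrier_add_task(&tb);

	for (i = 0; i < num_nodes; i++) {
		INIT_WORK(&works[i], node_work_func);
		queue_work(system_unbound_wq, &works[i]);
	}
}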


RE: [RESEND PATCH 3/5] drm/amdgpu: Add task barrier to XGMI hive.

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Le Ma 

Regards,
Ma Le

-Original Message-
From: Andrey Grodzovsky  
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Ma, Le ; 
Zhang, Hawking ; Quan, Evan ; 
Grodzovsky, Andrey 
Subject: [RESEND PATCH 3/5] drm/amdgpu: Add task barrier to XGMI hive.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c | 4   
drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
index 61d13d8..5cf920d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
@@ -261,6 +261,7 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct 
amdgpu_device *adev, int lo
INIT_LIST_HEAD(>device_list);
mutex_init(>hive_lock);
mutex_init(>reset_lock);
+   task_barrier_init(>tb);
 
if (lock)
mutex_lock(>hive_lock);
@@ -408,6 +409,8 @@ int amdgpu_xgmi_add_device(struct amdgpu_device *adev)
top_info->num_nodes = count;
hive->number_devices = count;
 
+   task_barrier_add_task(>tb);
+
if (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_PSP)) {
list_for_each_entry(tmp_adev, >device_list, 
gmc.xgmi.head) {
/* update node list for other device in the hive */ @@ 
-470,6 +473,7 @@ void amdgpu_xgmi_remove_device(struct amdgpu_device *adev)
mutex_destroy(>hive_lock);
mutex_destroy(>reset_lock);
} else {
+   task_barrier_rem_task(>tb);
amdgpu_xgmi_sysfs_rem_dev_info(adev, hive);
mutex_unlock(>hive_lock);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
index bbf504f..74011fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
@@ -22,6 +22,7 @@
 #ifndef __AMDGPU_XGMI_H__
 #define __AMDGPU_XGMI_H__
 
+#include 
 #include "amdgpu_psp.h"
 
 struct amdgpu_hive_info {
@@ -33,6 +34,7 @@ struct amdgpu_hive_info {
struct device_attribute dev_attr;
struct amdgpu_device *adev;
int pstate; /*0 -- low , 1 -- high , -1 unknown*/
+   struct task_barrier tb;
 };
 
 struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev, int 
lock);
--
2.7.4
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-11 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

I tried your new patches to run BACO for about 10 loops and the result looks 
positive, without observing enter/exit baco message failure again.

The time interval between BACO entries or exits in my environment was almost 
always less than 10 us: max 36 us, min 2 us. I think it's safe enough according 
to the sample data we collected on both sides.

And it looks unnecessary to continue using system_highpri_wq any more, because 
we require all the nodes to enter or exit at the same time, while we do not 
mind how long the time interval between enter and exit is. The 
system_unbound_wq can satisfy our requirement here since it wakes different 
CPUs up to work at the same time.
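
For illustration, the scheduling loop in question looks roughly like the sketch
below (based on the hunks posted in this series, not new code): each XGMI node
gets its own work item on system_unbound_wq, whose workers are not bound to the
submitting CPU.

	/* Schedule one reset work per XGMI node so they can run on different
	 * CPUs concurrently; fall back to a direct reset for a single node. */
	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
		if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
			if (!queue_work(system_unbound_wq,
					&tmp_adev->xgmi_reset_work))
				r = -EALREADY;
		} else {
			r = amdgpu_asic_reset(tmp_adev);
		}
	}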

Regards,
Ma Le

From: Grodzovsky, Andrey 
Sent: Wednesday, December 11, 2019 3:56 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


I switched the workqueue we were using for xgmi_reset_work from 
system_highpri_wq to system_unbound_wq - the difference is that workers 
servicing system_unbound_wq are not bound to a specific CPU, so the reset jobs 
for the XGMI nodes get scheduled to different CPUs, while system_highpri_wq is 
a per-CPU (bound) workqueue. I traced it as below for 10 consecutive runs and 
didn't see errors any more. Also the time diff between BACO entries or exits 
was never more than around 2 us.

Please give this updated patchset a try

   kworker/u16:2-57[004] ...1   243.276312: trace_code: func: 
vega20_baco_set_state, line 91 <- - Before BEACO enter
   <...>-60[007] ...1   243.276312: trace_code: func: 
vega20_baco_set_state, line 91 <- - Before BEACO enter
   kworker/u16:2-57[004] ...1   243.276384: trace_code: func: 
vega20_baco_set_state, line 105 <- - After BEACO enter done
   <...>-60[007] ...1   243.276392: trace_code: func: 
vega20_baco_set_state, line 105 <- - After BEACO enter done
   kworker/u16:3-60[007] ...1   243.276397: trace_code: func: 
vega20_baco_set_state, line 108 <- - Before BEACO exit
   kworker/u16:2-57[004] ...1   243.276399: trace_code: func: 
vega20_baco_set_state, line 108 <- - Before BEACO exit
   kworker/u16:3-60[007] ...1   243.288067: trace_code: func: 
vega20_baco_set_state, line 114 <- - After BEACO exit done
   kworker/u16:2-57[004] ...1   243.295624: trace_code: func: 
vega20_baco_set_state, line 114 <- - After BEACO exit done

Andrey
On 12/9/19 9:45 PM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]

I'm fine with your solution if synchronization time interval satisfies BACO 
requirements and loop test can pass on XGMI system.

Regards,
Ma Le

From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Monday, December 9, 2019 11:52 PM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Thanks a lot Ma for trying - I think I have to have my own system to debug 
this, so I will keep trying to enable XGMI - I still think this is the right 
and generic solution for multi-node reset synchronization, and in fact the 
barrier should also be used for synchronizing PSP mode 1 XGMI reset too.

Andrey
On 12/9/19 6:34 AM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]

Hi Andrey,

I tried your patches on my 2P XGMI platform. The baco can work most of the 
time, but we randomly got the following error:
[ 1701.542298] amdgpu: [powerplay] Failed to send message 0x25, response 0x0

This error usually means some sync issue exists for the xgmi baco case. Feel 
free to debug your patches on my XGMI platform.

Regards,
Ma Le

From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Saturday, December 7, 2019 5:51 AM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Hey Ma, attached a solution - it's just compiled as I still can't make my XGMI 
setup work (with bridge connected only one device is visible to the system 
while the other is not). Please try it on your system if you have a chance.

Andrey
On 

RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-09 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Not sure it's the same issue as I observed.

If you have an XGMI setup, use the latest drm-next and the PMFW I used on my 
XGMI system (I just sent you the vega20_smc.bin through mail), then give it 
another attempt.

About the strict time interval, I remember the XGMI node EnterBaco message will 
fail when the interval is around a millisecond.

Regards,
Ma Le

From: Grodzovsky, Andrey 
Sent: Tuesday, December 10, 2019 6:01 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


I reproduced the issue on my side - I consistently observe "amdgpu: 
[powerplay] Failed to send message 0x58, response 0x0" - a Baco exit failure. 
Do you know the strict time interval within which all the Baco enter/exit 
messages need to be sent to all the nodes in the hive?

Andrey
On 12/9/19 6:34 AM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]

Hi Andrey,

I tried your patches on my 2P XGMI platform. The baco can work most of the 
time, but we randomly got the following error:
[ 1701.542298] amdgpu: [powerplay] Failed to send message 0x25, response 0x0

This error usually means some sync issue exists for the xgmi baco case. Feel 
free to debug your patches on my XGMI platform.

Regards,
Ma Le

From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Saturday, December 7, 2019 5:51 AM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Hey Ma, attached a solution - it's just compiled as I still can't make my XGMI 
setup work (with bridge connected only one device is visible to the system 
while the other is not). Please try it on your system if you have a chance.

Andrey
On 12/4/19 10:14 PM, Ma, Le wrote:

AFAIK it's enough for even a single node in the hive to fail to enter the BACO 
state on time to fail the entire hive reset procedure, no?
[Le]: Yeah, agreed. I've been thinking that making all nodes enter baco 
simultaneously can reduce the risk of a node failing to enter/exit BACO. For 
example, in an XGMI hive with 8 nodes, the total time interval of 8 nodes 
entering/exiting BACO on 8 CPUs is less than the interval when the 8 nodes 
enter BACO serially and then exit BACO serially on one CPU with yield 
capability. This interval is usually strict for the BACO feature itself. 
Anyway, we need more looping tests later on whichever method we choose.

Anyway - I see our discussion blocks your entire patch set - I think you can go 
ahead and commit your way (I think you got an RB from Hawking), and I will then 
look and see if I can implement my method, and if it works will just revert 
your patch.

[Le]: OK, fine.

Andrey
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-09 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

I'm fine with your solution if synchronization time interval satisfies BACO 
requirements and loop test can pass on XGMI system.

Regards,
Ma Le

From: Grodzovsky, Andrey 
Sent: Monday, December 9, 2019 11:52 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Thanks a lot Ma for trying - I think I have to have my own system to debug 
this, so I will keep trying to enable XGMI - I still think this is the right 
and generic solution for multi-node reset synchronization, and in fact the 
barrier should also be used for synchronizing PSP mode 1 XGMI reset too.

Andrey
On 12/9/19 6:34 AM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]

Hi Andrey,

I tried your patches on my 2P XGMI platform. The baco can work most of the 
time, but we randomly got the following error:
[ 1701.542298] amdgpu: [powerplay] Failed to send message 0x25, response 0x0

This error usually means some sync issue exists for the xgmi baco case. Feel 
free to debug your patches on my XGMI platform.

Regards,
Ma Le

From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Saturday, December 7, 2019 5:51 AM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Hey Ma, attached a solution - it's just compiled as I still can't make my XGMI 
setup work (with bridge connected only one device is visible to the system 
while the other is not). Please try it on your system if you have a chance.

Andrey
On 12/4/19 10:14 PM, Ma, Le wrote:

AFAIK it's enough for even a single node in the hive to fail to enter the BACO 
state on time to fail the entire hive reset procedure, no?
[Le]: Yeah, agreed. I've been thinking that making all nodes enter baco 
simultaneously can reduce the risk of a node failing to enter/exit BACO. For 
example, in an XGMI hive with 8 nodes, the total time interval of 8 nodes 
entering/exiting BACO on 8 CPUs is less than the interval when the 8 nodes 
enter BACO serially and then exit BACO serially on one CPU with yield 
capability. This interval is usually strict for the BACO feature itself. 
Anyway, we need more looping tests later on whichever method we choose.

Anyway - I see our discussion blocks your entire patch set - I think you can go 
ahead and commit your way (I think you got an RB from Hawking), and I will then 
look and see if I can implement my method, and if it works will just revert 
your patch.

[Le]: OK, fine.

Andrey
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-09 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Hi Andrey,

I tried your patches on my 2P XGMI platform. The baco can work most of the 
time, but we randomly got the following error:
[ 1701.542298] amdgpu: [powerplay] Failed to send message 0x25, response 0x0

This error usually means some sync issue exists for the xgmi baco case. Feel 
free to debug your patches on my XGMI platform.

Regards,
Ma Le

From: Grodzovsky, Andrey 
Sent: Saturday, December 7, 2019 5:51 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Hey Ma, attached a solution - it's just compiled as I still can't make my XGMI 
setup work (with bridge connected only one device is visible to the system 
while the other is not). Please try it on your system if you have a chance.

Andrey
On 12/4/19 10:14 PM, Ma, Le wrote:

AFAIK it's enough for even a single node in the hive to fail to enter the BACO 
state on time to fail the entire hive reset procedure, no?
[Le]: Yeah, agreed. I've been thinking that making all nodes enter baco 
simultaneously can reduce the risk of a node failing to enter/exit BACO. For 
example, in an XGMI hive with 8 nodes, the total time interval of 8 nodes 
entering/exiting BACO on 8 CPUs is less than the interval when the 8 nodes 
enter BACO serially and then exit BACO serially on one CPU with yield 
capability. This interval is usually strict for the BACO feature itself. 
Anyway, we need more looping tests later on whichever method we choose.

Anyway - I see our discussion blocks your entire patch set - I think you can go 
ahead and commit your way (I think you got an RB from Hawking), and I will then 
look and see if I can implement my method, and if it works will just revert 
your patch.

[Le]: OK, fine.

Andrey
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amdgpu: fix resume failures due to psp fw loading sequence change (v3)

2019-12-06 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Le Ma 

Regards,
Ma Le
-Original Message-
From: Hawking Zhang  
Sent: Friday, December 6, 2019 6:12 PM
To: amd-gfx@lists.freedesktop.org; Ma, Le ; Clements, John 
; Deucher, Alexander ; Chen, 
Guchun 
Cc: Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: fix resume failures due to psp fw loading sequence 
change (v3)

this fix the regression caused by asd/ta loading sequence adjustment recently. 
asd/ta loading was move out from hw_start and should also be applied to 
psp_resume.
otherwise those fw loading will be ignored in resume phase.

v2: add the mutex unlock for asd loading failure case
v3: merge the error handling to failed tag

Change-Id: I20d3651f325e793e1ea7e73df1c76219eaa0b5ab
Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 33 +
 1 file changed, 33 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index ceea8314d88d..2dfda5590e77 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -1702,6 +1702,39 @@ static int psp_resume(void *handle)
if (ret)
goto failed;
 
+   ret = psp_asd_load(psp);
+   if (ret) {
+   DRM_ERROR("PSP load asd failed!\n");
+   goto failed;
+   }
+
+   if (adev->gmc.xgmi.num_physical_nodes > 1) {
+   ret = psp_xgmi_initialize(psp);
+   /* Warning the XGMI seesion initialize failure
+* Instead of stop driver initialization
+*/
+   if (ret)
+   dev_err(psp->adev->dev,
+   "XGMI: Failed to initialize XGMI session\n");
+   }
+
+   if (psp->adev->psp.ta_fw) {
+   ret = psp_ras_initialize(psp);
+   if (ret)
+   dev_err(psp->adev->dev,
+   "RAS: Failed to initialize RAS\n");
+
+   ret = psp_hdcp_initialize(psp);
+   if (ret)
+   dev_err(psp->adev->dev,
+   "HDCP: Failed to initialize HDCP\n");
+
+   ret = psp_dtm_initialize(psp);
+   if (ret)
+   dev_err(psp->adev->dev,
+   "DTM: Failed to initialize DTM\n");
+   }
+
mutex_unlock(>firmware.mutex);
 
return 0;
--
2.17.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-04 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]



From: Grodzovsky, Andrey 
Sent: Thursday, December 5, 2019 12:06 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI



On 12/4/19 2:09 AM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]


From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Wednesday, December 4, 2019 2:44 AM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Thanks Ma, this was very helpful as I am still not able to set up an XGMI hive 
with the latest FW and VBIOS.

I traced the workqueue subsystem (full log attached). Specifically, here is the 
life cycle of our 2 work items executing amdgpu_device_xgmi_reset_func below.

[Le]: Thanks Andrey for the deep debugging. Your feedback gave me a more 
profound understanding of this case. My comments are split out below.

You were right to note they both run on the same CPU (32), but they are 
executed by different threads. Also, as you see by the 
workqueue_execute_start/end timestamps, they actually ran in parallel and not 
one after another even while being assigned to the same CPU, and that is 
because of thread preemption (there is at least 
psp_v11_0_mode1_reset->msleep(500)) which yields the CPU and hence allows the 
second work to run + I am sure that on a preemptive kernel one reset work would 
be preempted at some point anyway and let the other run.

[Le]: Yes, from the trace log, the xgmi_reset_func items are assigned to 
different work threads bound to one same CPU. And you are right that CPU 
preemption will happen when msleep is called, which yields the CPU to allow the 
second work to run. That's a great finding. But it's not a real parallel run to 
me, because the second work can only preempt to run when the first work goes to 
sleep. I made an experiment here to change this unique msleep to udelay, and 
then the second work item ran only after the first item finished, in a serial 
execution.



I would expect that on a kernel compiled with preemption support a running 
thread would be interrupted to let others run even when it does not voluntarily 
yield the CPU, so this is strange.



Now you had issues with BACO reset while the test I ran on your system is mode1 
reset and so I assumed that maybe BACO has some non preempt-able busy wait 
which doesn't give a chance to second work item's thread to run on that CPU 
before the first finished - but from looking in the code I see 
smu_v11_0_baco_enter->msleep(10) so even in that case the first reset work item 
was supposed to yield CPU after BACO ENTER sent to SMU and let the other reset 
work do the same to the second card and so i don't see how even in this case 
there is a serial execution ?

[Le]: VG20 uses the old powerplay framework (the 
smu_v11_0_baco_enter->msleep(10) you found is in the swSMU framework), so there 
is no msleep and no CPU preemption. BACO reset has 2 phases, Enter/Exit. We 
expect all the XGMI nodes to enter BACO simultaneously instead of one after 
another as a serial execution, then exit BACO simultaneously.



Well, we can always add something like below to force each XGMI reset work to 
let others run before going into BACO exit. We can also guarantee that all of 
the reset works will execute BACO ENTER before proceeding to BACO EXIT by using 
some kind of semaphore barrier along the lines of this - 
https://stackoverflow.com/questions/47522174/reusable-barrier-implementation-using-posix-semaphores.
 This will also solve the #XGMI_NODES > #CPUs use case.

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 48649f5..3e91e54 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -531,6 +531,8 @@ static int soc15_asic_baco_reset(struct amdgpu_device *adev)
if (pp_funcs->set_asic_baco_state(pp_handle, 1))
return -EIO;

+   yield();
+
/* exit BACO state */
if (pp_funcs->set_asic_baco_state(pp_handle, 0))
return -EIO;



P.S. How does your solution solve the case where the XGMI hive is bigger than 
the number of CPUs on the system? Assuming that what you say is correct and 
there is serial execution when on the same CPU, if the hive is bigger than the 
number of CPUs you will eventually get back to sending reset work to a CPU 
already executing BACO ENTER (or EXIT) for another device and will get the 
serialization problem anyway.

[Le]: Yeah, I also considered the sit

RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-03 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]


From: Grodzovsky, Andrey 
Sent: Wednesday, December 4, 2019 2:44 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org; Zhou1, Tao 
; Deucher, Alexander ; Li, Dennis 
; Zhang, Hawking 
Cc: Chen, Guchun 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI


Thanks Ma, this was very helpful as I am still not able to set up an XGMI hive 
with the latest FW and VBIOS.

I traced the workqueue subsystem (full log attached). Specifically, here is the 
life cycle of our 2 work items executing amdgpu_device_xgmi_reset_func below.

[Le]: Thanks Andrey for the deep debugging. Your feedback gave me a more 
profound understanding of this case. My comments are split out below.

You were right to note they both run on the same CPU (32), but they are 
executed by different threads. Also, as you see by the 
workqueue_execute_start/end timestamps, they actually ran in parallel and not 
one after another even while being assigned to the same CPU, and that is 
because of thread preemption (there is at least 
psp_v11_0_mode1_reset->msleep(500)) which yields the CPU and hence allows the 
second work to run + I am sure that on a preemptive kernel one reset work would 
be preempted at some point anyway and let the other run.

[Le]: Yes, from the trace log, the xgmi_reset_func items are assigned to 
different work threads bound to one same CPU. And you are right that CPU 
preemption will happen when msleep is called, which yields the CPU to allow the 
second work to run. That's a great finding. But it's not a real parallel run to 
me, because the second work can only preempt to run when the first work goes to 
sleep. I made an experiment here to change this unique msleep to udelay, and 
then the second work item ran only after the first item finished, in a serial 
execution.

Now you had issues with BACO reset while the test I ran on your system is mode1 
reset and so I assumed that maybe BACO has some non preempt-able busy wait 
which doesn't give a chance to second work item's thread to run on that CPU 
before the first finished - but from looking in the code I see 
smu_v11_0_baco_enter->msleep(10) so even in that case the first reset work item 
was supposed to yield CPU after BACO ENTER sent to SMU and let the other reset 
work do the same to the second card and so i don't see how even in this case 
there is a serial execution ?

[Le]: VG20 uses the old powerplay framework (the 
smu_v11_0_baco_enter->msleep(10) you found is in the swSMU framework), so there 
is no msleep and no CPU preemption. BACO reset has 2 phases, Enter/Exit. We 
expect all the XGMI nodes to enter BACO simultaneously instead of one after 
another as a serial execution, then exit BACO simultaneously.

P.S. How does your solution solve the case where the XGMI hive is bigger than 
the number of CPUs on the system? Assuming that what you say is correct and 
there is serial execution when on the same CPU, if the hive is bigger than the 
number of CPUs you will eventually get back to sending reset work to a CPU 
already executing BACO ENTER (or EXIT) for another device and will get the 
serialization problem anyway.

[Le]: Yeah, I also considered the situation where the XGMI hive is bigger than 
the CPU count. I think it's an extreme situation and should not exist. However, 
assuming it exists, many work items scattered across several CPUs will still be 
executed faster than all of them bound to one same CPU, won't they?

 cat-3002  [032] d... 33153.791829: workqueue_queue_work: work 
struct=e43c1ebb function=amdgpu_device_xgmi_reset_func [amdgpu] 
workqueue=80331d91 req_cpu=8192 cpu=32
 cat-3002  [032] d... 33153.791829: workqueue_activate_work: work 
struct e43c1ebb
 cat-3002  [032] dN.. 33153.791831: workqueue_queue_work: work 
struct=e67113aa function=amdgpu_device_xgmi_reset_func [amdgpu] 
workqueue=80331d91 req_cpu=8192 cpu=32
 cat-3002  [032] dN.. 33153.791832: workqueue_activate_work: work 
struct e67113aa
   kworker/32:1H-551   [032]  33153.791834: workqueue_execute_start: work 
struct e43c1ebb: function amdgpu_device_xgmi_reset_func [amdgpu]
   kworker/32:0H-175   [032]  33153.792087: workqueue_execute_start: work 
struct e67113aa: function amdgpu_device_xgmi_reset_func [amdgpu]
   kworker/32:1H-551   [032]  33154.310948: workqueue_execute_end: work 
struct e43c1ebb
   kworker/32:0H-175   [032]  33154.311043: workqueue_execute_end: work 
struct e67113aa

Andrey




On 12/3/19 5:06 AM, Ma, Le wrote:

[AMD Official Use Only - Internal Distribution Only]

Hi Andrey,

You can try the XGMI system below:
  IP: 10.67.69.53
  U/P: jenkins/0

The original drm-next kernel is installed.

Regards,
Ma Le

From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Tuesday, December 3, 2019 6:05 AM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>
Cc: Chen, Guchun <mailto:guchun.c...@amd.co

RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-12-02 Thread Ma, Le
[AMD Official Use Only - Internal Distribution Only]



From: Grodzovsky, Andrey 
Sent: Saturday, November 30, 2019 12:22 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; 
Deucher, Alexander ; Li, Dennis ; 
Zhang, Hawking 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI



On 11/28/19 4:00 AM, Ma, Le wrote:





-Original Message-
From: Grodzovsky, Andrey 
<mailto:andrey.grodzov...@amd.com>
Sent: Wednesday, November 27, 2019 11:46 PM
To: Ma, Le <mailto:le...@amd.com>; 
amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>
Cc: Chen, Guchun <mailto:guchun.c...@amd.com>; Zhou1, Tao 
<mailto:tao.zh...@amd.com>; Deucher, Alexander 
<mailto:alexander.deuc...@amd.com>; Li, Dennis 
<mailto:dennis...@amd.com>; Zhang, Hawking 
<mailto:hawking.zh...@amd.com>
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI





On 11/27/19 4:15 AM, Le Ma wrote:

> Currently each XGMI node reset wq does not run in parrallel because

> same work item bound to same cpu runs in sequence. So change to bound

> the xgmi_reset_work item to different cpus.



It's not the same work item, see more bellow





>

> XGMI requires all nodes enter into baco within very close proximity

> before any node exit baco. So schedule the xgmi_reset_work wq twice

> for enter/exit baco respectively.

>

> The default reset code path and methods do not change for vega20 production:

>- baco reset without xgmi/ras

>- psp reset with xgmi/ras

>

> To enable baco for XGMI/RAS case, both 2 conditions below are needed:

>- amdgpu_ras_enable=2

>- baco-supported smu firmware

>

> The case that PSP reset and baco reset coexist within an XGMI hive is

> not in the consideration.

>

> Change-Id: I9c08cf90134f940b42e20d2129ff87fba761c532

> Signed-off-by: Le Ma mailto:le...@amd.com>>

> ---

>   drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 +

>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 78 
> ++

>   2 files changed, 70 insertions(+), 10 deletions(-)

>

> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> index d120fe5..08929e6 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> @@ -998,6 +998,8 @@ struct amdgpu_device {

>  int   pstate;

>  /* enable runtime pm on the device */

>  boolrunpm;

> +

> +  boolin_baco;

>   };

>

>   static inline struct amdgpu_device *amdgpu_ttm_adev(struct

> ttm_bo_device *bdev) diff --git

> a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> index bd387bb..71abfe9 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> @@ -2654,7 +2654,13 @@ static void amdgpu_device_xgmi_reset_func(struct 
> work_struct *__work)

>  struct amdgpu_device *adev =

>  container_of(__work, struct amdgpu_device, 
> xgmi_reset_work);

>

> -   adev->asic_reset_res =  amdgpu_asic_reset(adev);

> +  if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)

> +  adev->asic_reset_res = (adev->in_baco == false) ?

> +  
> amdgpu_device_baco_enter(adev->ddev) :

> +  
> amdgpu_device_baco_exit(adev->ddev);

> +  else

> +  adev->asic_reset_res = amdgpu_asic_reset(adev);

> +

>  if (adev->asic_reset_res)

>  DRM_WARN("ASIC reset failed with error, %d for drm dev, 
> %s",

>   adev->asic_reset_res, adev->ddev->unique); 
> @@ -3796,6 +3802,7 @@

> static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,

>  struct amdgpu_device *tmp_adev = NULL;

>  bool need_full_reset = *need_full_reset_arg, vram_lost = false;

>  int r = 0;

> +  int cpu = smp_processor_id();

>

>  /*

>   * ASIC reset has to be done on all HGMI hive nodes ASAP @@

> -3803,21 +3810,24 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info 
> *hive,

>   */

>  if (need_full_reset) {

>  list_for_each_entry(tmp_adev, device_list_handle, 
> gmc.xgmi.head) {

> -   /* For XGMI run all resets in parallel to 
> speed up the process */

> +  /*

> + 

RE: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for XGMI

2019-11-28 Thread Ma, Le




-Original Message-
From: Grodzovsky, Andrey 
Sent: Wednesday, November 27, 2019 11:46 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; 
Deucher, Alexander ; Li, Dennis ; 
Zhang, Hawking 
Subject: Re: [PATCH 07/10] drm/amdgpu: add concurrent baco reset support for 
XGMI





On 11/27/19 4:15 AM, Le Ma wrote:

> Currently each XGMI node reset wq does not run in parrallel because

> same work item bound to same cpu runs in sequence. So change to bound

> the xgmi_reset_work item to different cpus.



It's not the same work item, see more bellow





>

> XGMI requires all nodes enter into baco within very close proximity

> before any node exit baco. So schedule the xgmi_reset_work wq twice

> for enter/exit baco respectively.

>

> The default reset code path and methods do not change for vega20 production:

>- baco reset without xgmi/ras

>- psp reset with xgmi/ras

>

> To enable baco for XGMI/RAS case, both 2 conditions below are needed:

>- amdgpu_ras_enable=2

>- baco-supported smu firmware

>

> The case that PSP reset and baco reset coexist within an XGMI hive is

> not in the consideration.

>

> Change-Id: I9c08cf90134f940b42e20d2129ff87fba761c532

> Signed-off-by: Le Ma mailto:le...@amd.com>>

> ---

>   drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 +

>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 78 
> ++

>   2 files changed, 70 insertions(+), 10 deletions(-)

>

> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> index d120fe5..08929e6 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

> @@ -998,6 +998,8 @@ struct amdgpu_device {

>  int   pstate;

>  /* enable runtime pm on the device */

>  boolrunpm;

> +

> +  boolin_baco;

>   };

>

>   static inline struct amdgpu_device *amdgpu_ttm_adev(struct

> ttm_bo_device *bdev) diff --git

> a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> index bd387bb..71abfe9 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> @@ -2654,7 +2654,13 @@ static void amdgpu_device_xgmi_reset_func(struct 
> work_struct *__work)

>  struct amdgpu_device *adev =

>  container_of(__work, struct amdgpu_device, 
> xgmi_reset_work);

>

> -   adev->asic_reset_res =  amdgpu_asic_reset(adev);

> +  if (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)

> +  adev->asic_reset_res = (adev->in_baco == false) ?

> +  
> amdgpu_device_baco_enter(adev->ddev) :

> +  
> amdgpu_device_baco_exit(adev->ddev);

> +  else

> +  adev->asic_reset_res = amdgpu_asic_reset(adev);

> +

>  if (adev->asic_reset_res)

>  DRM_WARN("ASIC reset failed with error, %d for drm dev, 
> %s",

>   adev->asic_reset_res, adev->ddev->unique); 
> @@ -3796,6 +3802,7 @@

> static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,

>  struct amdgpu_device *tmp_adev = NULL;

>  bool need_full_reset = *need_full_reset_arg, vram_lost = false;

>  int r = 0;

> +  int cpu = smp_processor_id();

>

>  /*

>   * ASIC reset has to be done on all HGMI hive nodes ASAP @@

> -3803,21 +3810,24 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info 
> *hive,

>   */

>  if (need_full_reset) {

>  list_for_each_entry(tmp_adev, device_list_handle, 
> gmc.xgmi.head) {

> -   /* For XGMI run all resets in parallel to 
> speed up the process */

> +  /*

> +  * For XGMI run all resets in parallel to speed 
> up the

> +  * process by scheduling the highpri wq on 
> different

> +  * cpus. For XGMI with baco reset, all nodes 
> must enter

> +  * baco within close proximity before anyone 
> exit.

> +  */

>  if (tmp_adev->gmc.xgmi.num_physical_nodes > 
> 1) {

> -   if 
> (!queue_work(system_highpri_wq, &tmp_adev->xgmi_reset_work))





Note that tmp_adev->xgmi_reset_work (the work item) is per device in 
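For context, here is a minimal sketch of the scheduling approach described in the
commit message above: pin each node's xgmi_reset_work to a distinct CPU so the
per-device work items actually run in parallel. This is illustrative only; the
cpumask bookkeeping and the error value are assumptions, not the exact hunk under
review.

	/* Sketch: spread the per-device reset work items across online CPUs. */
	int cpu = cpumask_first(cpu_online_mask);

	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
		if (!queue_work_on(cpu, system_highpri_wq,
				   &tmp_adev->xgmi_reset_work))
			r = -EALREADY;
		cpu = cpumask_next(cpu, cpu_online_mask);
	}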

RE: [PATCH 06/10] drm/amdgpu: add condition to enable baco for xgmi/ras case

2019-11-27 Thread Ma, Le
Hi Hawking,

Please check this v2 patch, which was just sent out. As discussed, we decided to 
keep leveraging the current reset_method() function, balancing functionality, 
change scale and code maintainability. Thanks.

Regards,
Ma Le

-Original Message-
From: Zhang, Hawking  
Sent: Wednesday, November 27, 2019 7:39 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; Li, 
Dennis ; Deucher, Alexander ; Ma, 
Le 
Subject: RE: [PATCH 06/10] drm/amdgpu: add condition to enable baco for 
xgmi/ras case

[AMD Public Use]

And it is still necessary to put all the condition checks in a function, i.e. a 
function that decides whether to go with RAS recovery or legacy fatal_error 
handling. The PMFW version that supports RAS recovery differs among ASICs, and 
the current version check only works for VG20. In fact, once ras->supported is 
set and a proper PMFW is detected, RAS recovery is the best choice no matter 
whether it is sGPU or mGPU.

Regards,
Hawking

-Original Message-
From: Le Ma  
Sent: 2019年11月27日 17:15
To: amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Chen, Guchun ; 
Zhou1, Tao ; Li, Dennis ; Deucher, 
Alexander ; Ma, Le 
Subject: [PATCH 06/10] drm/amdgpu: add condition to enable baco for xgmi/ras 
case

Avoid changing the default reset behavior for production cards by checking that 
amdgpu_ras_enable equals 2. Also, only new enough smu ucode can support baco 
for the xgmi/ras case.

Change-Id: I07c3e6862be03e068745c73db8ea71f428ecba6b
Signed-off-by: Le Ma 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 951327f..6202333 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -577,7 +577,9 @@ soc15_asic_reset_method(struct amdgpu_device *adev)
struct amdgpu_hive_info *hive = 
amdgpu_get_xgmi_hive(adev, 0);
struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
 
-   if (hive || (ras && ras->supported))
+   if ((hive || (ras && ras->supported)) &&
+   (amdgpu_ras_enable != 2 ||
+   adev->pm.fw_version <= 0x283400))
baco_reset = false;
}
break;
--
2.7.4
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 06/10] drm/amdgpu: add condition to enable baco for xgmi/ras case

2019-11-27 Thread Ma, Le
I agree with your thought that we drop the amdgpu_ras_enable=2 condition. My 
only concern is that, besides fatal_error, another possible outcome is an 
atombios_init timeout on xgmi caused by baco (not sure whether psp mode1 reset 
causes this as well).



Assuming no amdgpu_ras_enable=2 check, if PMFW > 40.52, the use cases as I 
understand them include:

  1.  sGPU without RAS:
     *   new: baco
     *   old: baco
  2.  sGPU with RAS:
     *   new: baco
     *   old: psp mode1 chain reset and legacy fatal_error handling
  3.  XGMI with RAS:
     *   new: baco
     *   old: psp mode1 chain reset and legacy fatal_error handling
  4.  XGMI without RAS:
     *   new: baco
     *   old: psp mode1 chain reset



That is to say, all use cases go down the baco road when PMFW > 40.52.
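For illustration, the direction above boils down to a single PMFW version check; 
a minimal sketch, assuming a hypothetical helper name and mirroring the 0x283400 
(PMFW 40.52) threshold used in the quoted patch below:

	static bool soc15_supports_baco_ras_recovery(struct amdgpu_device *adev)
	{
		/* BACO-based RAS recovery only when PMFW is beyond the 40.52 threshold */
		return adev->pm.fw_version > 0x283400;
	}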



Regards,

Ma Le



-Original Message-
From: Zhang, Hawking 
Sent: Wednesday, November 27, 2019 7:28 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; Li, 
Dennis ; Deucher, Alexander ; Ma, 
Le 
Subject: RE: [PATCH 06/10] drm/amdgpu: add condition to enable baco for 
xgmi/ras case



[AMD Public Use]



After thinking about it a bit, I think we can just rely on the PMFW version to 
decide between RAS recovery and legacy fatal_error handling for the platforms 
that support RAS. Leveraging amdgpu_ras_enable as a temporary solution seems 
unnecessary. Even if baco RAS recovery is not stable, the result is the same as 
legacy fatal_error handling: the user has to reboot the node manually.



So the new soc reset use cases are:

  *   XGMI (without RAS): use PSP mode1 based chain reset
  *   RAS enabled (with PMFW 40.52 and onwards): use BACO based RAS recovery
  *   RAS enabled (with PMFW prior to 40.52): use legacy fatal_error handling

Anything else?



Regards,

Hawking

-Original Message-

From: Le Ma mailto:le...@amd.com>>

Sent: 2019年11月27日 17:15

To: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

Cc: Zhang, Hawking mailto:hawking.zh...@amd.com>>; Chen, 
Guchun mailto:guchun.c...@amd.com>>; Zhou1, Tao 
mailto:tao.zh...@amd.com>>; Li, Dennis 
mailto:dennis...@amd.com>>; Deucher, Alexander 
mailto:alexander.deuc...@amd.com>>; Ma, Le 
mailto:le...@amd.com>>

Subject: [PATCH 06/10] drm/amdgpu: add condition to enable baco for xgmi/ras 
case



Avoid changing the default reset behavior for production cards by checking that 
amdgpu_ras_enable equals 2. Also, only new enough smu ucode can support baco 
for the xgmi/ras case.



Change-Id: I07c3e6862be03e068745c73db8ea71f428ecba6b

Signed-off-by: Le Ma mailto:le...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/soc15.c | 4 +++-

1 file changed, 3 insertions(+), 1 deletion(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c

index 951327f..6202333 100644

--- a/drivers/gpu/drm/amd/amdgpu/soc15.c

+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c

@@ -577,7 +577,9 @@ soc15_asic_reset_method(struct amdgpu_device *adev)

   struct amdgpu_hive_info *hive = 
amdgpu_get_xgmi_hive(adev, 0);

   struct amdgpu_ras *ras = 
amdgpu_ras_get_context(adev);

-   if (hive || (ras && ras->supported))

+  if ((hive || (ras && ras->supported)) &&

+  (amdgpu_ras_enable != 2 ||

+  adev->pm.fw_version <= 0x283400))

   baco_reset = false;

   }

   break;

--

2.7.4
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 05/10] drm/amdgpu: enable/disable doorbell interrupt in baco entry/exit helper

2019-11-27 Thread Ma, Le


From: Zhang, Hawking 
Sent: Wednesday, November 27, 2019 8:04 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; Li, 
Dennis ; Deucher, Alexander ; Ma, 
Le 
Subject: RE: [PATCH 05/10] drm/amdgpu: enable/disable doorbell interrupt in 
baco entry/exit helper


Please check my comments inline



Regards,
Hawking



-Original Message-
From: Le Ma mailto:le...@amd.com>>
Sent: 2019年11月27日 17:15
To: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>
Cc: Zhang, Hawking mailto:hawking.zh...@amd.com>>; Chen, 
Guchun mailto:guchun.c...@amd.com>>; Zhou1, Tao 
mailto:tao.zh...@amd.com>>; Li, Dennis 
mailto:dennis...@amd.com>>; Deucher, Alexander 
mailto:alexander.deuc...@amd.com>>; Ma, Le 
mailto:le...@amd.com>>
Subject: [PATCH 05/10] drm/amdgpu: enable/disable doorbell interrupt in baco 
entry/exit helper



This operation is needed when baco entry/exit for ras recovery



Change-Id: I535c7231693f3138a8e3d5acd55672e2ac68232f

Signed-off-by: Le Ma mailto:le...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 19 ---

1 file changed, 12 insertions(+), 7 deletions(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

index b1408c5..bd387bb 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

@@ -4308,10 +4308,14 @@ static void amdgpu_device_get_pcie_info(struct 
amdgpu_device *adev)  int amdgpu_device_baco_enter(struct drm_device *dev)  {

   struct amdgpu_device *adev = dev->dev_private;

+ struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);



if (!amdgpu_device_supports_baco(adev->ddev))

   return -ENOTSUPP;



+ if (ras && ras->supported)

+ adev->nbio.funcs->enable_doorbell_interrupt(adev, 
false);

+

   if (is_support_sw_smu(adev)) {

   struct smu_context *smu = &adev->smu;

   int ret;

@@ -4319,8 +4323,6 @@ int amdgpu_device_baco_enter(struct drm_device *dev)

   ret = smu_baco_enter(smu);

   if (ret)

   return ret;

-

-  return 0;

   } else {

   void *pp_handle = adev->powerplay.pp_handle;

   const struct amd_pm_funcs *pp_funcs = 
adev->powerplay.pp_funcs; @@ -4331,14 +4333,15 @@ int 
amdgpu_device_baco_enter(struct drm_device *dev)

   /* enter BACO state */

   if (pp_funcs->set_asic_baco_state(pp_handle, 1))

   return -EIO;

-

-  return 0;

   }

+

+ return 0;

}



 int amdgpu_device_baco_exit(struct drm_device *dev)  {

   struct amdgpu_device *adev = dev->dev_private;

+ struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);



if (!amdgpu_device_supports_baco(adev->ddev))

   return -ENOTSUPP;

@@ -4351,7 +4354,6 @@ int amdgpu_device_baco_exit(struct drm_device *dev)

   if (ret)

   return ret;



-  return 0;

   } else {

   void *pp_handle = adev->powerplay.pp_handle;

   const struct amd_pm_funcs *pp_funcs = 
adev->powerplay.pp_funcs; @@ -4362,7 +4364,10 @@ int 
amdgpu_device_baco_exit(struct drm_device *dev)

   /* exit BACO state */

   if (pp_funcs->set_asic_baco_state(pp_handle, 0))

   return -EIO;

-

-  return 0;

   }

+

+ if (ras && ras->supported)

+ adev->nbio.funcs->enable_doorbell_interrupt(adev, 
false);

+





[Hawking] Shouldn't the doorbell interrupt be enabled after exiting baco? Or do 
I miss something?



[Le]: Yes, the argument should be true. I made a typo here.



+ return 0;

}

--

2.7.4
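Per the exchange embedded in the quoted patch above, the exit path should 
re-enable doorbell interrupts; a sketch of the corrected hunk, not the posted 
follow-up:

	if (ras && ras->supported)
		adev->nbio.funcs->enable_doorbell_interrupt(adev, true); /* true: re-enable after BACO exit */

	return 0;
}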


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 10/10 v2] drm/amdgpu: reduce redundant uvd context lost warning message

2019-11-27 Thread Ma, Le




-Original Message-
From: Christian König 
Sent: Wednesday, November 27, 2019 6:08 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Chen, Guchun ; Zhou1, Tao ; 
Deucher, Alexander ; Li, Dennis ; 
Zhang, Hawking 
Subject: Re: [PATCH 10/10 v2] drm/amdgpu: reduce redundant uvd context lost 
warning message



Am 27.11.19 um 11:02 schrieb Le Ma:

> Move the print out of uvd instance loop in amdgpu_uvd_suspend

>

> v2: drop unnecessary brackets

>

> Change-Id: Ifad997debd84763e1b55d668e144b729598f115e

> Signed-off-by: Le Ma mailto:le...@amd.com>>

> ---

>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 10 ++

>   1 file changed, 6 insertions(+), 4 deletions(-)

>

> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

> b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

> index e324bfe..69248ecb 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

> @@ -376,13 +376,15 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)

>  return -ENOMEM;

>

>  /* re-write 0 since err_event_athub will corrupt VCPU 
> buffer */

> -   if (amdgpu_ras_intr_triggered()) {

> -   DRM_WARN("UVD VCPU state may lost due to RAS 
> ERREVENT_ATHUB_INTERRUPT\n");

> +  if (amdgpu_ras_intr_triggered())



Can the state change while doing the loop? If yes, then I would rather grab it 
once and use it multiple times.



Christian.



[Le]: Got your meaning, and the state will not change here. Will update this in 
v3.



>  memset(adev->uvd.inst[j].saved_bo, 0, size);

> -   } else {

> +  else

>  memcpy_fromio(adev->uvd.inst[j].saved_bo, 
> ptr, size);

> -   }

>  }

> +

> +  if (amdgpu_ras_intr_triggered())

> +  DRM_WARN("UVD VCPU state may lost due to RAS

> +ERREVENT_ATHUB_INTERRUPT\n");

> +

>  return 0;

>   }

>
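A sketch of the direction Christian suggests above, i.e. how a v3 could look; 
this is an assumption, not the patch that was actually posted:

	bool in_ras_intr = amdgpu_ras_intr_triggered(); /* read once; it cannot change during the loop */

	/* inside the per-instance loop: */
	if (in_ras_intr)
		memset(adev->uvd.inst[j].saved_bo, 0, size);
	else
		memcpy_fromio(adev->uvd.inst[j].saved_bo, ptr, size);

	/* after the loop: */
	if (in_ras_intr)
		DRM_WARN("UVD VCPU state may lost due to RAS ERREVENT_ATHUB_INTERRUPT\n");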


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 10/10] drm/amdgpu: reduce redundant uvd context lost warning message

2019-11-27 Thread Ma, Le




-Original Message-
From: Chen, Guchun 
Sent: Wednesday, November 27, 2019 5:50 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Zhang, Hawking ; Zhou1, Tao ; Li, 
Dennis ; Deucher, Alexander ; Ma, 
Le 
Subject: RE: [PATCH 10/10] drm/amdgpu: reduce redundant uvd context lost 
warning message



[AMD Official Use Only - Internal Distribution Only]









-Original Message-

From: Le Ma mailto:le...@amd.com>>

Sent: Wednesday, November 27, 2019 5:15 PM

To: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

Cc: Zhang, Hawking mailto:hawking.zh...@amd.com>>; Chen, 
Guchun mailto:guchun.c...@amd.com>>; Zhou1, Tao 
mailto:tao.zh...@amd.com>>; Li, Dennis 
mailto:dennis...@amd.com>>; Deucher, Alexander 
mailto:alexander.deuc...@amd.com>>; Ma, Le 
mailto:le...@amd.com>>

Subject: [PATCH 10/10] drm/amdgpu: reduce redundant uvd context lost warning 
message



Move the print out of uvd instance loop in amdgpu_uvd_suspend



Change-Id: Ifad997debd84763e1b55d668e144b729598f115e

Signed-off-by: Le Ma mailto:le...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 5 -

1 file changed, 4 insertions(+), 1 deletion(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

index e324bfe..ac7c7795 100644

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c

@@ -377,12 +377,15 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)

/* re-write 0 since err_event_athub will corrupt VCPU 
buffer */

   if (amdgpu_ras_intr_triggered()) {

-   DRM_WARN("UVD VCPU state may lost due to 
RAS ERREVENT_ATHUB_INTERRUPT\n");

   memset(adev->uvd.inst[j].saved_bo, 0, size);

   } else {

   memcpy_fromio(adev->uvd.inst[j].saved_bo, 
ptr, size);

   }

   }

+

+  if (amdgpu_ras_intr_triggered()) {

+  DRM_WARN("UVD VCPU state may lost due to RAS 
ERREVENT_ATHUB_INTERRUPT\n");

+

[Guchun] The "{" after the if condition needs to be removed?

[Le] Yes, sent it too quickly and made a mistake here.

   return 0;

}

--

2.7.4
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amdgpu: avoid upload corrupted ta ucode to psp

2019-11-10 Thread Ma, Le
Reviewed-by: Le Ma 

-Original Message-
From: Hawking Zhang  
Sent: Monday, November 11, 2019 12:44 PM
To: amd-gfx@lists.freedesktop.org; Deucher, Alexander 
; Clements, John ; Ma, Le 

Cc: Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: avoid upload corrupted ta ucode to psp

xgmi, ras, hdcp and dtm ta are actually separate ucodes and need to be handled 
case by case when uploading to psp.

We support the case where the ta binary has one or multiple of them built in. 
As a result, the driver should check each ta binary's availability before 
deciding to upload it to psp.

In the terminate (unload) case, the driver will check the context readiness 
before performing the unload activity. It's fine to keep it as is.

Change-Id: I493116970ffb557f33c06de10f786684fdcef85b
Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 22 +-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 456ac04b246c..9621e207a9ca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -558,7 +558,9 @@ static int psp_xgmi_initialize(struct psp_context *psp)
struct ta_xgmi_shared_memory *xgmi_cmd;
int ret;
 
-   if (!psp->adev->psp.ta_fw)
+   if (!psp->adev->psp.ta_fw ||
+   !psp->adev->psp.ta_xgmi_ucode_size ||
+   !psp->adev->psp.ta_xgmi_start_addr)
return -ENOENT;
 
if (!psp->xgmi_context.initialized) {
@@ -768,6 +770,12 @@ static int psp_ras_initialize(struct psp_context *psp)  {
int ret;
 
+   if (!psp->adev->psp.ta_ras_ucode_size ||
+   !psp->adev->psp.ta_ras_start_addr) {
+   dev_warn(psp->adev->dev, "RAS: ras ta ucode is not 
available\n");
+   return 0;
+   }
+
if (!psp->ras.ras_initialized) {
ret = psp_ras_init_shared_buf(psp);
if (ret)
@@ -857,6 +865,12 @@ static int psp_hdcp_initialize(struct psp_context *psp)  {
int ret;
 
+   if (!psp->adev->psp.ta_hdcp_ucode_size ||
+   !psp->adev->psp.ta_hdcp_start_addr) {
+   dev_warn(psp->adev->dev, "HDCP: hdcp ta ucode is not 
available\n");
+   return 0;
+   }
+
if (!psp->hdcp_context.hdcp_initialized) {
ret = psp_hdcp_init_shared_buf(psp);
if (ret)
@@ -1030,6 +1044,12 @@ static int psp_dtm_initialize(struct psp_context *psp)  {
int ret;
 
+   if (!psp->adev->psp.ta_dtm_ucode_size ||
+   !psp->adev->psp.ta_dtm_start_addr) {
+   dev_warn(psp->adev->dev, "DTM: dtm ta ucode is not 
available\n");
+   return 0;
+   }
+
if (!psp->dtm_context.dtm_initialized) {
ret = psp_dtm_init_shared_buf(psp);
if (ret)
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amd/powerplay: update Arcturus driver-smu interface header

2019-11-05 Thread Ma, Le
Reviewed-by: Le Ma 

-Original Message-
From: amd-gfx  On Behalf Of Quan, Evan
Sent: Tuesday, November 5, 2019 4:23 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan 
Subject: [PATCH] drm/amd/powerplay: update Arcturus driver-smu interface header

To fit the latest SMU firmware.

Change-Id: Ib197e6186127121b4ae276639fa66677094a7d01
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h | 2 +-
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h 
b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
index 886b9a21ebd8..a886f0644d24 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
@@ -159,7 +159,7 @@
 //FIXME need updating
 // Debug Overrides Bitmask
 #define DPM_OVERRIDE_DISABLE_UCLK_PID   0x0001
-#define DPM_OVERRIDE_ENABLE_VOLT_LINK_VCN_FCLK  0x0002
+#define DPM_OVERRIDE_DISABLE_VOLT_LINK_VCN_FCLK 0x0002
 
 // I2C Config Bit Defines
 #define I2C_CONTROLLER_ENABLED   1
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h 
b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index 88ee66683271..36028e9d1011 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -27,7 +27,7 @@
 
 #define SMU11_DRIVER_IF_VERSION_INV 0x
 #define SMU11_DRIVER_IF_VERSION_VG20 0x13
-#define SMU11_DRIVER_IF_VERSION_ARCT 0x0F
+#define SMU11_DRIVER_IF_VERSION_ARCT 0x10
 #define SMU11_DRIVER_IF_VERSION_NV10 0x33
 #define SMU11_DRIVER_IF_VERSION_NV12 0x33
 #define SMU11_DRIVER_IF_VERSION_NV14 0x34
--
2.23.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 4/4] drm/amdgpu: remove ras global recovery handling from ras_controller_int handler

2019-10-29 Thread Ma, Le




-Original Message-
From: Chen, Guchun 
Sent: Tuesday, October 29, 2019 9:37 AM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Ma, Le 
Subject: RE: [PATCH 4/4] drm/amdgpu: remove ras global recovery handling from 
ras_controller_int handler









Regards,

Guchun



-Original Message-

From: amd-gfx 
mailto:amd-gfx-boun...@lists.freedesktop.org>>
 On Behalf Of Le Ma

Sent: Monday, October 28, 2019 7:31 PM

To: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

Cc: Ma, Le mailto:le...@amd.com>>

Subject: [PATCH 4/4] drm/amdgpu: remove ras global recovery handling from 
ras_controller_int handler



From: Le Ma mailto:le...@amd.com>>



Change-Id: Ia8a61a4b3bd529f0f691e43e69b299d7d151c0c2

Signed-off-by: Le Ma mailto:le...@amd.com>>

---

drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c | 6 +-

1 file changed, 5 insertions(+), 1 deletion(-)



diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c 
b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c

index 0db458f..876690a 100644

--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c

+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c

@@ -324,7 +324,11 @@ static void 
nbio_v7_4_handle_ras_controller_intr_no_bifring(struct amdgpu_device

   
RAS_CNTLR_INTERRUPT_CLEAR, 1);

   WREG32_SOC15(NBIO, 0, mmBIF_DOORBELL_INT_CNTL, 
bif_doorbell_intr_cntl);

-   amdgpu_ras_global_ras_isr(adev);

+  /*

+  * ras_controller_int is dedicated for nbif ras error,

+  * not the global interrupt for sync flood

+  */

+  amdgpu_ras_reset_gpu(adev, true);

[Guchun] We need to add a print here to tell the audience who resets the gpu 
and why. Moreover, in the removed global ras isr handler 
amdgpu_ras_global_ras_isr, we call amdgpu_ras_reset_gpu with the is_baco 
parameter set to "false", but now we use "true" here?

[Le] We may consider adding a print here to indicate that it is a ras 
controller interrupt issue (see the sketch after the quoted patch below). The 
is_baco parameter is unused and has no effect. Anyway, I will revise and hold 
patches #2 and #4 until baco based RAS recovery fully works, per Hawking's 
comment.

   }

}

--

2.7.4



___

amd-gfx mailing list

amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 2/4] drm/amdgpu: reset err_event_athub flag if gpu recovery succeeded

2019-10-29 Thread Ma, Le




> -Original Message-

> From: Chen, Guchun 

> Sent: Tuesday, October 29, 2019 9:28 AM

> To: Ma, Le ; amd-gfx@lists.freedesktop.org

> Cc: Ma, Le 

> Subject: RE: [PATCH 2/4] drm/amdgpu: reset err_event_athub flag if gpu

> recovery succeeded

>

>

>

> Regards,

> Guchun

>

> -Original Message-

> From: amd-gfx 
> mailto:amd-gfx-boun...@lists.freedesktop.org>>
>  On Behalf Of Le Ma

> Sent: Monday, October 28, 2019 7:31 PM

> To: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

> Cc: Ma, Le mailto:le...@amd.com>>

> Subject: [PATCH 2/4] drm/amdgpu: reset err_event_athub flag if gpu recovery

> succeeded

>

> Otherwise next err_event_athub error cannot call gpu reset.

>

> Change-Id: I5cd293f30f23876bf2a1860681bcb50f47713ecd

> Signed-off-by: Le Ma mailto:le...@amd.com>>

> ---

>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 +++

>  1 file changed, 3 insertions(+)

>

> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> index 676cad1..51d74bb 100644

> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

> @@ -4089,6 +4089,9 @@ int amdgpu_device_gpu_recover(struct

> amdgpu_device *adev,

>  }

>  }

>

> +  if (!r && in_ras_intr)

> +  atomic_set(&amdgpu_ras_in_intr, 0);

> [Guchun] To access this atomic variable, maybe it's better to first create a

> new function, like a reset or clear helper, in amdgpu_ras.h or .c, and then

> call that function here, like we do with amdgpu_ras_intr_triggered in this

> same function. This will assist the modularity of the ras driver.

> [Le] Agree with you. We could make it paired with amdgpu_ras_intr_triggered

> (a sketch of such a helper follows the quoted patch below).



>  skip_sched_resume:

>  list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {

>  /*unlock kfd: SRIOV would do it separately */

> --

> 2.7.4

>
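A sketch of the paired helper discussed above; the helper name and its placement 
in amdgpu_ras.h are assumptions:

	/* Counterpart to amdgpu_ras_intr_triggered(): clear the interrupt flag
	 * once recovery has succeeded.
	 */
	static inline void amdgpu_ras_intr_cleared(void)
	{
		atomic_set(&amdgpu_ras_in_intr, 0);
	}

	/* then, in amdgpu_device_gpu_recover(): */
	if (!r && in_ras_intr)
		amdgpu_ras_intr_cleared();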

> ___

> amd-gfx mailing list

> amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org>

> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amd/powerplay: update arcturus smu-driver interaction header

2019-09-25 Thread Ma, Le
Reviewed-by: Le Ma 

-Original Message-
From: amd-gfx  On Behalf Of Quan, Evan
Sent: Tuesday, September 24, 2019 12:50 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan 
Subject: [PATCH] drm/amd/powerplay: update arcturus smu-driver interaction 
header

To pair with the latest SMU firmware.

Change-Id: I376b8c9d0c5a56a343d477a945d63ba894b984d3
Signed-off-by: Evan Quan 
---
 .../amd/powerplay/inc/smu11_driver_if_arcturus.h  | 15 ---
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  2 +-
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h 
b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
index 40a51a141336..2248d682c462 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
@@ -137,23 +137,23 @@
 #define FEATURE_DS_SOCCLK_MASK(1 << FEATURE_DS_SOCCLK_BIT  
  )
 #define FEATURE_DS_LCLK_MASK  (1 << FEATURE_DS_LCLK_BIT
  )
 #define FEATURE_DS_FCLK_MASK  (1 << FEATURE_DS_FCLK_BIT
  )
-#define FEATURE_DS_LCLK_MASK  (1 << FEATURE_DS_LCLK_BIT
  )
+#define FEATURE_DS_UCLK_MASK  (1 << FEATURE_DS_UCLK_BIT
  )
 #define FEATURE_GFX_ULV_MASK  (1 << FEATURE_GFX_ULV_BIT
  )
-#define FEATURE_VCN_PG_MASK   (1 << FEATURE_VCN_PG_BIT 
  )
+#define FEATURE_DPM_VCN_MASK  (1 << FEATURE_DPM_VCN_BIT
  )
 #define FEATURE_RSMU_SMN_CG_MASK  (1 << FEATURE_RSMU_SMN_CG_BIT
  )
 #define FEATURE_WAFL_CG_MASK  (1 << FEATURE_WAFL_CG_BIT
  )
 
 #define FEATURE_PPT_MASK  (1 << FEATURE_PPT_BIT
  )
 #define FEATURE_TDC_MASK  (1 << FEATURE_TDC_BIT
  )
-#define FEATURE_APCC_MASK (1 << FEATURE_APCC_BIT   
  )
+#define FEATURE_APCC_PLUS_MASK(1 << FEATURE_APCC_PLUS_BIT  
  )
 #define FEATURE_VR0HOT_MASK   (1 << FEATURE_VR0HOT_BIT 
  )
 #define FEATURE_VR1HOT_MASK   (1 << FEATURE_VR1HOT_BIT 
  )
 #define FEATURE_FW_CTF_MASK   (1 << FEATURE_FW_CTF_BIT 
  )
 #define FEATURE_FAN_CONTROL_MASK  (1 << FEATURE_FAN_CONTROL_BIT
  )
 #define FEATURE_THERMAL_MASK  (1 << FEATURE_THERMAL_BIT
  )
 
-#define FEATURE_OUT_OF_BAND_MONITOR_MASK  (1 << EATURE_OUT_OF_BAND_MONITOR_BIT 
  )
-#define FEATURE_TEMP_DEPENDENT_VMIN_MASK  (1 << 
FEATURE_TEMP_DEPENDENT_VMIN_MASK )
+#define FEATURE_OUT_OF_BAND_MONITOR_MASK  (1 << 
FEATURE_OUT_OF_BAND_MONITOR_BIT   )
+#define FEATURE_TEMP_DEPENDENT_VMIN_MASK  (1 << 
+FEATURE_TEMP_DEPENDENT_VMIN_BIT )
 
 
 //FIXME need updating
@@ -806,7 +806,7 @@ typedef struct {
 
   uint32_t P2VCharzFreq[AVFS_VOLTAGE_COUNT]; // in 10KHz units
 
-  uint32_t EnabledAvfsModules[2];
+  uint32_t EnabledAvfsModules[3];
 
   uint32_t MmHubPadding[8]; // SMU internal use  } AvfsFuseOverride_t; @@ 
-869,7 +869,8 @@ typedef struct {  //#define TABLE_ACTIVITY_MONITOR_COEFF  7
 #define TABLE_OVERDRIVE   7
 #define TABLE_WAFL_XGMI_TOPOLOGY  8
-#define TABLE_COUNT   9
+#define TABLE_I2C_COMMANDS9
+#define TABLE_COUNT   10
 
 // These defines are used with the SMC_MSG_SetUclkFastSwitch message.
 typedef enum {
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h 
b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index af1add570153..e71f6fedf3c6 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -27,7 +27,7 @@
 
 #define SMU11_DRIVER_IF_VERSION_INV 0x
 #define SMU11_DRIVER_IF_VERSION_VG20 0x13
-#define SMU11_DRIVER_IF_VERSION_ARCT 0x0A
+#define SMU11_DRIVER_IF_VERSION_ARCT 0x0D
 #define SMU11_DRIVER_IF_VERSION_NV10 0x33
 #define SMU11_DRIVER_IF_VERSION_NV12 0x33
 #define SMU11_DRIVER_IF_VERSION_NV14 0x34
--
2.23.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 2/2] drm/amdgpu: correct condition check for psp rlc autoload

2019-09-23 Thread Ma, Le
Sorry that I missed adding the Reviewed-by when pushing this patch.

Regards,
Ma Le

-Original Message-
From: Zhang, Hawking  
Sent: Monday, September 23, 2019 9:58 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Ma, Le 
Subject: RE: [PATCH 2/2] drm/amdgpu: correct condition check for psp rlc 
autoload

Please help to add a simple description for both patches. With that fixed,

Series is Reviewed-by: Hawking Zhang 

Regards,
Hawking

-Original Message-
From: amd-gfx  On Behalf Of Le Ma
Sent: 2019年9月23日 21:31
To: amd-gfx@lists.freedesktop.org
Cc: Ma, Le 
Subject: [PATCH 2/2] drm/amdgpu: correct condition check for psp rlc autoload

Change-Id: Ia91d0fb7179f6944214e892f370d7ef3d6b7d30e
Signed-off-by: Le Ma 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index d359f1d..2aa1ae6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -1080,7 +1080,8 @@ static int psp_np_fw_load(struct psp_context *psp)
return ret;
 
/* Start rlc autoload after psp recieved all the gfx firmware */
-   if (ucode->ucode_id == 
AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM) {
+   if (psp->autoload_supported && ucode->ucode_id ==
+   AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM) {
ret = psp_rlc_autoload(psp);
if (ret) {
DRM_ERROR("Failed to start rlc autoload\n");
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: libdrm patch merge request

2019-09-18 Thread Ma, Le
Thanks Alex.

Regards,
Ma Le

From: Deucher, Alexander 
Sent: Wednesday, September 18, 2019 8:55 PM
To: Ma, Le 
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: libdrm patch merge request

Done.

Alex

From: Ma, Le mailto:le...@amd.com>>
Sent: Wednesday, September 18, 2019 5:40 AM
To: Deucher, Alexander 
mailto:alexander.deuc...@amd.com>>
Cc: amd-gfx@lists.freedesktop.org<mailto:amd-gfx@lists.freedesktop.org> 
mailto:amd-gfx@lists.freedesktop.org>>
Subject: libdrm patch merge request


Hi Alex,



Could you help to merge patch 
https://gitlab.freedesktop.org/lema1/drm/commit/51f3e80716578d0bf1590286e32f00f4c09c726d
 into drm master branch ?



Thanks.



Regards,

Ma Le
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

libdrm patch merge request

2019-09-18 Thread Ma, Le
Hi Alex,

Could you help to merge patch 
https://gitlab.freedesktop.org/lema1/drm/commit/51f3e80716578d0bf1590286e32f00f4c09c726d
 into drm master branch ?

Thanks.

Regards,
Ma Le
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH 7/9] drm/amdgpu: enable sdma clock gating for Arcturus

2019-08-08 Thread Ma, Le
This series only contains 7 patches. Sorry for the confusion.

Regards,
Ma Le

-Original Message-
From: amd-gfx  On Behalf Of Le Ma
Sent: Thursday, August 08, 2019 6:22 PM
To: amd-gfx@lists.freedesktop.org
Cc: Ma, Le 
Subject: [PATCH 7/9] drm/amdgpu: enable sdma clock gating for Arcturus

Init sdma MGCG/LS flag

Change-Id: I600b8c67b1dfa74240269f2f028960b2c93a0ec2
Signed-off-by: Le Ma 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c 
b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 6038dce..ad64975 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -1019,7 +1019,9 @@ static int soc15_common_early_init(void *handle)
AMD_CG_SUPPORT_GFX_CGLS |
AMD_CG_SUPPORT_GFX_CP_LS |
AMD_CG_SUPPORT_HDP_MGCG |
-   AMD_CG_SUPPORT_HDP_LS;
+   AMD_CG_SUPPORT_HDP_LS |
+   AMD_CG_SUPPORT_SDMA_MGCG |
+   AMD_CG_SUPPORT_SDMA_LS;
adev->pg_flags = 0;
adev->external_rev_id = adev->rev_id + 0x32;
break;
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] Revert "drm/amdgpu: fix transform feedback GDS hang on gfx10 (v2)"

2019-08-06 Thread Ma, Le
Reviewed-by: Le Ma 

-Original Message-
From: amd-gfx  On Behalf Of Marek Olšák
Sent: Saturday, August 03, 2019 6:27 AM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] Revert "drm/amdgpu: fix transform feedback GDS hang on gfx10 
(v2)"

From: Marek Olšák 

This reverts commit b41335c6c0303d100abe89c843e52645d1974cd9.

SET_CONFIG_REG writes to memory if register shadowing is enabled, causing a VM 
fault.

NGG streamout is unstable anyway, so all UMDs should use legacy streamout. I 
think Mesa is the only driver using NGG streamout.

Signed-off-by: Marek Olšák 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h |  1 -  
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c  | 12 +---
 2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h
index df8a23554831..f6ac1e9548f2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gds.h
@@ -25,21 +25,20 @@
 #define __AMDGPU_GDS_H__
 
 struct amdgpu_ring;
 struct amdgpu_bo;
 
 struct amdgpu_gds {
uint32_t gds_size;
uint32_t gws_size;
uint32_t oa_size;
uint32_t gds_compute_max_wave_id;
-   uint32_t vgt_gs_max_wave_id;
 };
 
 struct amdgpu_gds_reg_offset {
uint32_tmem_base;
uint32_tmem_size;
uint32_tgws;
uint32_toa;
 };
 
 #endif /* __AMDGPU_GDS_H__ */
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 618291df659b..e3823c8e9850 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -4269,29 +4269,20 @@ static void gfx_v10_0_ring_emit_hdp_flush(struct 
amdgpu_ring *ring)  }
 
 static void gfx_v10_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
   struct amdgpu_job *job,
   struct amdgpu_ib *ib,
   uint32_t flags)
 {
unsigned vmid = AMDGPU_JOB_GET_VMID(job);
u32 header, control = 0;
 
-   /* Prevent a hw deadlock due to a wave ID mismatch between ME and GDS.
-* This resets the wave ID counters. (needed by transform feedback)
-* TODO: This might only be needed on a VMID switch when we change
-*   the GDS OA mapping, not sure.
-*/
-   amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
-   amdgpu_ring_write(ring, mmVGT_GS_MAX_WAVE_ID);
-   amdgpu_ring_write(ring, ring->adev->gds.vgt_gs_max_wave_id);
-
if (ib->flags & AMDGPU_IB_FLAG_CE)
header = PACKET3(PACKET3_INDIRECT_BUFFER_CNST, 2);
else
header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
 
control |= ib->length_dw | (vmid << 24);
 
if (amdgpu_mcbp && (ib->flags & AMDGPU_IB_FLAG_PREEMPT)) {
control |= INDIRECT_BUFFER_PRE_ENB(1);
 
@@ -5023,21 +5014,21 @@ static const struct amdgpu_ring_funcs 
gfx_v10_0_ring_funcs_gfx = {
 */
5 + /* COND_EXEC */
7 + /* HDP_flush */
4 + /* VGT_flush */
14 + /* CE_META */
31 + /* DE_META */
3 + /* CNTX_CTRL */
5 + /* HDP_INVL */
8 + 8 + /* FENCE x2 */
2, /* SWITCH_BUFFER */
-   .emit_ib_size = 7, /* gfx_v10_0_ring_emit_ib_gfx */
+   .emit_ib_size = 4, /* gfx_v10_0_ring_emit_ib_gfx */
.emit_ib = gfx_v10_0_ring_emit_ib_gfx,
.emit_fence = gfx_v10_0_ring_emit_fence,
.emit_pipeline_sync = gfx_v10_0_ring_emit_pipeline_sync,
.emit_vm_flush = gfx_v10_0_ring_emit_vm_flush,
.emit_gds_switch = gfx_v10_0_ring_emit_gds_switch,
.emit_hdp_flush = gfx_v10_0_ring_emit_hdp_flush,
.test_ring = gfx_v10_0_ring_test_ring,
.test_ib = gfx_v10_0_ring_test_ib,
.insert_nop = amdgpu_ring_insert_nop,
.pad_ib = amdgpu_ring_generic_pad_ib,
@@ -5175,21 +5166,20 @@ static void gfx_v10_0_set_rlc_funcs(struct 
amdgpu_device *adev)  }
 
 static void gfx_v10_0_set_gds_init(struct amdgpu_device *adev)  {
/* init asic gds info */
switch (adev->asic_type) {
case CHIP_NAVI10:
default:
adev->gds.gds_size = 0x1;
adev->gds.gds_compute_max_wave_id = 0x4ff;
-   adev->gds.vgt_gs_max_wave_id = 0x3ff;
break;
}
 
adev->gds.gws_size = 64;
adev->gds.oa_size = 16;
 }
 
 static void gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(struct amdgpu_device 
*adev,
  u32 bitmap)
 {
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org

RE: [PATCH] drm/amd/powerplay: skip pcie params override on Arcturus

2019-08-05 Thread Ma, Le
Please combine the two adjacent "if (adev->asic_type != CHIP_ARCTURUS) {" 
blocks (see the sketch after the quoted patch below).

With that fixed: Reviewed-by: Le Ma 

Regards,
Ma Le

-Original Message-
From: amd-gfx  On Behalf Of Evan Quan
Sent: Monday, August 05, 2019 3:13 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan 
Subject: [PATCH] drm/amd/powerplay: skip pcie params override on Arcturus

This is not supported on Arcturus.

Affected ASIC: Arcturus

Change-Id: I62a8bce17a070ce4eda5fa22f4b12a7ffa1201cd
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 5ba038260091..7c2c24a291b0 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1109,9 +1109,11 @@ static int smu_smc_table_hw_init(struct smu_context *smu,
if (ret)
return ret;
 
-   ret = smu_override_pcie_parameters(smu);
-   if (ret)
-   return ret;
+   if (adev->asic_type != CHIP_ARCTURUS) {
+   ret = smu_override_pcie_parameters(smu);
+   if (ret)
+   return ret;
+   }
 
if (adev->asic_type != CHIP_ARCTURUS) {
ret = smu_notify_display_change(smu);
-- 
2.22.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

libdrm patch merge request

2019-07-30 Thread Ma, Le
Hi Alex,

Could you help to merge following 2 reviewed patches on 
https://gitlab.freedesktop.org/lema1/drm/commits/lema1/drm into drm master 
branch ?

  1.  tests/amdgpu: disable reset test for 
now<https://gitlab.freedesktop.org/lema1/drm/commit/97c8dca664c00864778a042ba2f69d41405e63a3>
  2.  tests/amdgpu: divide dispatch test into compute and 
gfx<https://gitlab.freedesktop.org/lema1/drm/commit/c02cb80241ba041485837488925f3e0fc864cf1f>

Thanks.

Regards,
Ma Le
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH libdrm 1/1] tests/amdgpu: divide dispatch test into compute and gfx

2019-07-22 Thread Ma, Le
Thanks Flora. No change is needed for the draw test since it only runs the gfx 
ring test.

Regards,
Ma Le

-Original Message-
From: Cui, Flora  
Sent: Monday, July 22, 2019 4:07 PM
To: Ma, Le ; amd-gfx@lists.freedesktop.org
Cc: Ma, Le 
Subject: RE: [PATCH libdrm 1/1] tests/amdgpu: divide dispatch test into compute 
and gfx

Patch is Reviewed-by: Flora Cui  Could you apply the similar 
change to draw test?

-Original Message-
From: amd-gfx  On Behalf Of Le Ma
Sent: Monday, July 22, 2019 4:01 PM
To: amd-gfx@lists.freedesktop.org
Cc: Ma, Le ; Cui, Flora 
Subject: [PATCH libdrm 1/1] tests/amdgpu: divide dispatch test into compute and 
gfx

for better clarification

Change-Id: I245d760d5f9d64eb10b137d5ce375ef52a4d873a
Signed-off-by: Le Ma 
---
 tests/amdgpu/basic_tests.c | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/tests/amdgpu/basic_tests.c b/tests/amdgpu/basic_tests.c index 
938106e..fa0f568 100644
--- a/tests/amdgpu/basic_tests.c
+++ b/tests/amdgpu/basic_tests.c
@@ -55,7 +55,8 @@ static void amdgpu_userptr_test(void);  static void 
amdgpu_semaphore_test(void);  static void amdgpu_sync_dependency_test(void);
 static void amdgpu_bo_eviction_test(void); -static void 
amdgpu_dispatch_test(void);
+static void amdgpu_compute_dispatch_test(void);
+static void amdgpu_gfx_dispatch_test(void);
 static void amdgpu_draw_test(void);
 static void amdgpu_gpu_reset_test(void);
 
@@ -79,7 +80,8 @@ CU_TestInfo basic_tests[] = {
{ "Command submission Test (SDMA)", amdgpu_command_submission_sdma },
{ "SW semaphore Test",  amdgpu_semaphore_test },
{ "Sync dependency Test",  amdgpu_sync_dependency_test },
-   { "Dispatch Test",  amdgpu_dispatch_test },
+   { "Dispatch Test (Compute)",  amdgpu_compute_dispatch_test },
+   { "Dispatch Test (GFX)",  amdgpu_gfx_dispatch_test },
{ "Draw Test",  amdgpu_draw_test },
{ "GPU reset Test", amdgpu_gpu_reset_test },
CU_TEST_INFO_NULL,
@@ -2448,7 +2450,8 @@ static void 
amdgpu_memcpy_dispatch_test(amdgpu_device_handle device_handle,
r = amdgpu_cs_ctx_free(context_handle);
CU_ASSERT_EQUAL(r, 0);
 }
-static void amdgpu_dispatch_test(void)
+
+static void amdgpu_compute_dispatch_test(void)
 {
int r;
struct drm_amdgpu_info_hw_ip info;
@@ -2463,6 +2466,13 @@ static void amdgpu_dispatch_test(void)
amdgpu_memset_dispatch_test(device_handle, 
AMDGPU_HW_IP_COMPUTE, ring_id);
amdgpu_memcpy_dispatch_test(device_handle, 
AMDGPU_HW_IP_COMPUTE, ring_id);
}
+}
+
+static void amdgpu_gfx_dispatch_test(void) {
+   int r;
+   struct drm_amdgpu_info_hw_ip info;
+   uint32_t ring_id;
 
	r = amdgpu_query_hw_ip_info(device_handle, AMDGPU_HW_IP_GFX, 0, &info);
CU_ASSERT_EQUAL(r, 0);
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amdgpu: set sdma irq src num according to sdma instances

2019-07-19 Thread Ma, Le
Reviewed-by: Le Ma 

Regards,
Ma Le

-Original Message-
From: amd-gfx  On Behalf Of Hawking Zhang
Sent: Friday, July 19, 2019 7:18 PM
To: amd-gfx@lists.freedesktop.org; Ma, Le 
Cc: Zhang, Hawking 
Subject: [PATCH] drm/amdgpu: set sdma irq src num according to sdma instances

Otherwise, it will cause the driver to access non-existent sdma registers in 
the gpu reset code path.

Signed-off-by: Hawking Zhang 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 17 +++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index c21b247..a1c2d22 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2421,10 +2421,23 @@ static const struct amdgpu_irq_src_funcs 
sdma_v4_0_ecc_irq_funcs = {
 
 static void sdma_v4_0_set_irq_funcs(struct amdgpu_device *adev)  {
-   adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
+   switch (adev->sdma.num_instances) {
+   case 1:
+   adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_INSTANCE1;
+   adev->sdma.ecc_irq.num_types = AMDGPU_SDMA_IRQ_INSTANCE1;
+   break;
+   case 8:
+   adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
+   adev->sdma.ecc_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
+   break;
+   case 2:
+   default:
+   adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_INSTANCE2;
+   adev->sdma.ecc_irq.num_types = AMDGPU_SDMA_IRQ_INSTANCE2;
+   break;
+   }
	adev->sdma.trap_irq.funcs = &sdma_v4_0_trap_irq_funcs;
	adev->sdma.illegal_inst_irq.funcs = &sdma_v4_0_illegal_inst_irq_funcs;
-	adev->sdma.ecc_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
	adev->sdma.ecc_irq.funcs = &sdma_v4_0_ecc_irq_funcs;
}
 
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx