RE: [PATCH] drm/amdgpu/powerplay: fix typo in mvdd table setup

2019-10-08 Thread Quan, Evan
Reviewed-by: Evan Quan 

-Original Message-
From: amd-gfx  On Behalf Of Alex Deucher
Sent: Thursday, October 3, 2019 10:26 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Robert Strube 

Subject: [PATCH] drm/amdgpu/powerplay: fix typo in mvdd table setup

Polaris and vegam use count for the value rather than level.  This looks like a
copy-paste typo from when the code was adapted from previous ASICs.

I'm not sure that the SMU actually uses this value, so I don't know that it
actually is a bug per se.
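The effect of the typo can be modeled in isolation. The sketch below is illustrative only, not the driver code: SMU_MAX_SMIO_LEVELS is shrunk to 4, the table is made one entry larger so the buggy read stays in bounds, and VOLTAGE_SCALE is assumed to be 4 as in the powerplay headers.

```c
#include <stddef.h>

enum { SMIO_LEVELS = 4 };	/* stands in for SMU_MAX_SMIO_LEVELS */
#define VOLTAGE_SCALE 4		/* assumed, as in powerplay's hwmgr headers */

/* Buggy variant: indexes the table with the loop bound "count", so every
 * pattern gets the same entry -- one past the last valid level. */
static void populate_buggy(unsigned out[SMIO_LEVELS],
			   const unsigned entries[SMIO_LEVELS + 1])
{
	size_t count = SMIO_LEVELS, level;

	for (level = 0; level < count; level++)
		out[level] = entries[count] * VOLTAGE_SCALE;
}

/* Fixed variant: indexes with the loop variable "level", as the patch does. */
static void populate_fixed(unsigned out[SMIO_LEVELS],
			   const unsigned entries[SMIO_LEVELS + 1])
{
	size_t count = SMIO_LEVELS, level;

	for (level = 0; level < count; level++)
		out[level] = entries[level] * VOLTAGE_SCALE;
}
```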

Bug: https://bugs.freedesktop.org/show_bug.cgi?id=108609
Reported-by: Robert Strube 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c | 2 +-
 drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
index dc754447f0dd..23c12018dbc1 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
@@ -655,7 +655,7 @@ static int polaris10_populate_smc_mvdd_table(struct pp_hwmgr *hwmgr,
 	count = SMU_MAX_SMIO_LEVELS;
 	for (level = 0; level < count; level++) {
 		table->SmioTable2.Pattern[level].Voltage =
-			PP_HOST_TO_SMC_US(data->mvdd_voltage_table.entries[count].value * VOLTAGE_SCALE);
+			PP_HOST_TO_SMC_US(data->mvdd_voltage_table.entries[level].value * VOLTAGE_SCALE);
 		/* Index into DpmTable.Smio. Drive bits from Smio entry to get this voltage level.*/
 		table->SmioTable2.Pattern[level].Smio =
 			(uint8_t) level;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
index 7c960b07746f..ae18fbcb26fb 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
@@ -456,7 +456,7 @@ static int vegam_populate_smc_mvdd_table(struct pp_hwmgr *hwmgr,
 	count = SMU_MAX_SMIO_LEVELS;
 	for (level = 0; level < count; level++) {
 		table->SmioTable2.Pattern[level].Voltage = PP_HOST_TO_SMC_US(
-				data->mvdd_voltage_table.entries[count].value * VOLTAGE_SCALE);
+				data->mvdd_voltage_table.entries[level].value * VOLTAGE_SCALE);
 		/* Index into DpmTable.Smio. Drive bits from Smio entry to get this voltage level.*/
 		table->SmioTable2.Pattern[level].Smio =
 			(uint8_t) level;
--
2.20.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 8/8] drm/amdgpu/psp: add psp memory training implementation

2019-10-08 Thread Tuikov, Luben
On 2019-10-08 3:29 p.m., Alex Deucher wrote:
> From: "Tianci.Yin" 
> 
> add memory training implementation code to save resume time.
> 
> Change-Id: I625794a780b11d824ab57ef39cc33b872c6dc6c9
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h |   1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |   9 ++
>  drivers/gpu/drm/amd/amdgpu/psp_v11_0.c  | 171 
>  3 files changed, 181 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index eeb6b6282fce..a3fd498bbba4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -150,6 +150,7 @@ extern uint amdgpu_sdma_phase_quantum;
>  extern char *amdgpu_disable_cu;
>  extern char *amdgpu_virtual_display;
>  extern uint amdgpu_pp_feature_mask;
> +extern uint amdgpu_force_long_training;
>  extern int amdgpu_job_hang_limit;
>  extern int amdgpu_lbpw;
>  extern int amdgpu_compute_multipipe;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index d892b7481d00..a88ea74ca222 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -127,6 +127,7 @@ char *amdgpu_disable_cu = NULL;
>  char *amdgpu_virtual_display = NULL;
>  /* OverDrive(bit 14) disabled by default*/
>  uint amdgpu_pp_feature_mask = 0xbfff;
> +uint amdgpu_force_long_training = 0;
>  int amdgpu_job_hang_limit = 0;
>  int amdgpu_lbpw = -1;
>  int amdgpu_compute_multipipe = -1;
> @@ -389,6 +390,14 @@ module_param_named(sched_hw_submission, 
> amdgpu_sched_hw_submission, int, 0444);
>  MODULE_PARM_DESC(ppfeaturemask, "all power features enabled (default))");
>  module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, uint, 0444);
>  
> +/**
> + * DOC: forcelongtraining (uint)
> + * Force long memory training in resume.
> + * The default is zero, indicates short training in resume.
> + */
> +MODULE_PARM_DESC(forcelongtraining, "force memory long training");
> +module_param_named(forcelongtraining, amdgpu_force_long_training, uint, 
> 0444);
> +
>  /**
>   * DOC: pcie_gen_cap (uint)
>   * Override PCIE gen speed capabilities. See the CAIL flags in 
> drivers/gpu/drm/amd/include/amd_pcie.h.
> diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c 
> b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> index e74a952a3f7c..23915653a278 100644
> --- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> @@ -902,6 +902,174 @@ static int psp_v11_0_rlc_autoload_start(struct 
> psp_context *psp)
>   return psp_rlc_autoload_start(psp);
>  }
>  
> +static int psp_v11_0_memory_training_send_msg(struct psp_context *psp, int 
> msg)
> +{
> + int ret = 0;
> + int i = 0;
> + uint32_t data_32 = 0;

NAK.
Do not initialize any of those integer variables.
They are unconditionally set below.

> + struct amdgpu_device *adev = psp->adev;
> +
> + data_32 = (psp->mem_train_ctx.c2p_train_data_offset >> 20);
> + WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_36, data_32);
> + WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_35, msg);
> +
> + /*max 5s*/
> + while (i < 50) {
> + ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, 
> mmMP0_SMN_C2PMSG_35),
> +0x8000, 0x8000, false);
> + if (ret == 0)
> + break;
> + i++;
> + }

NAK!
Use a for-loop so that all the loop control is collected in one place, the loop definition:

for (i = 0; i < 50; i++) {
ret = ...;
if (ret == 0)
break;
}

This also makes the body of the loop relocatable with ease.
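A runnable sketch of the suggested shape, with a stub standing in for psp_wait_for(); the 50-try bound mirrors the patch's 5 s cap, everything else (names, stub behavior) is illustrative:

```c
/* Stub for psp_wait_for(): succeeds once the fake status counter has
 * been polled "target" times, returns nonzero otherwise. */
static int fake_wait(int *status, int target)
{
	++*status;
	return *status >= target ? 0 : -1;
}

/* All loop control sits in the for statement, as suggested, so the body
 * can be relocated without dragging "i = 0" / "i++" along with it. */
static int poll_training_done(int *status, int target, int *polls)
{
	int i;

	for (i = 0; i < 50; i++) {	/* bounded, like the patch's 5 s cap */
		if (fake_wait(status, target) == 0) {
			*polls = i;
			return 0;
		}
	}
	*polls = i;
	return -1;	/* timed out */
}
```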

> + DRM_DEBUG("%s training %s, cost %d * %dms.\n",
> +   (msg == PSP_BL__DRAM_SHORT_TRAIN) ? "short" : "long",
> +   (ret == 0) ? "succeed" : "failed",
> +   i, adev->usec_timeout/1000);
> + return ret;
> +}
> +
> +static int psp_v11_0_memory_training_fini(struct psp_context *psp)
> +{
> + int ret = 0;
> + struct psp_memory_training_context *ctx = &psp->mem_train_ctx;
> +
> + ctx->init = PSP_MEM_TRAIN_NOT_SUPPORT;
> + if(ctx->sys_cache) {

Space after keyword: "if (".

> + kfree(ctx->sys_cache);
> + ctx->sys_cache = NULL;
> + }
> +
> + return ret;
> +}

Get rid of "ret" as it is not used.

> +
> +static int psp_v11_0_memory_training_init(struct psp_context *psp)
> +{
> + int ret = 0;

NAK!
Do not preinitialize. "int ret;" is fine.
"ret" exists below only to capture a non-zero (error) code.
It should never be 0.
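A minimal illustration of the convention being asked for; the names are made up and malloc() stands in for kzalloc():

```c
#include <stdlib.h>
#include <string.h>

struct train_cache {
	int supported;
	void *sys_cache;
	size_t size;
};

/* "ret" is declared without an initializer; every path that reaches a
 * return has explicitly assigned the value being returned, so an
 * uninitialized-use warning would flag a genuinely missed path rather
 * than being silenced up front. */
static int train_cache_init(struct train_cache *ctx)
{
	int ret;

	if (!ctx->supported)
		return 0;	/* nothing to do */

	ctx->sys_cache = malloc(ctx->size);	/* stand-in for kzalloc() */
	if (ctx->sys_cache == NULL) {
		ret = -1;	/* stands in for -ENOMEM */
		return ret;
	}
	memset(ctx->sys_cache, 0, ctx->size);
	return 0;
}
```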

> + struct psp_memory_training_context *ctx = &psp->mem_train_ctx;
> +
> + if(ctx->init != PSP_MEM_TRAIN_RESERVE_SUCCESS) {
> + DRM_DEBUG("memory training does not support!\n");
> + return 0;
> + }
> +
> + ctx->sys_cache = kzalloc(ctx->train_data_size, GFP_KERNEL);
> + if(ctx->sys_cache == NULL) {

Space after keyword: "if (".

> + 

Re: [PATCH 7/8] drm/amdgpu: reserve vram for memory training

2019-10-08 Thread Tuikov, Luben
On 2019-10-08 3:29 p.m., Alex Deucher wrote:
> From: "Tianci.Yin" 
> 
> memory training using specific fixed vram segment, reserve these
> segments before anyone may allocate it.
> 
> Change-Id: I1436755813a565608a2857a683f535377620a637
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 96 +
>  1 file changed, 96 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 74a9bd94f10c..0819721af16e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1667,6 +1667,93 @@ static int amdgpu_ttm_fw_reserve_vram_init(struct 
> amdgpu_device *adev)
> &adev->fw_vram_usage.va);
>  }
>  
> +/*
> + * Memoy training reservation functions
> + */

Include an empty line between the two comments, just as you would
a paragraph in written text.

> +/**
> + * amdgpu_ttm_training_reserve_vram_fini - free memory training reserved vram
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * free memory training reserved vram if it has been reserved.
> + */
> +static int amdgpu_ttm_training_reserve_vram_fini(struct amdgpu_device *adev)
> +{
> + int ret = 0;
> + struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx;
> +
> + ctx->init = PSP_MEM_TRAIN_NOT_SUPPORT;
> + if(ctx->c2p_bo) {

Space after keywords, according to LKCS:
if (...)

> + amdgpu_bo_free_kernel(&ctx->c2p_bo, NULL, NULL);
> + ctx->c2p_bo = NULL;
> + }
> + if(ctx->p2c_bo) {

Space after keywords, according to LKCS:
if (...)

> + amdgpu_bo_free_kernel(&ctx->p2c_bo, NULL, NULL);
> + ctx->p2c_bo = NULL;
> + }
> +
> + return ret;
> +}

Get rid of "ret" variable altogether as it is not used in this function.
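With the unused "ret" dropped, the fini path collapses to the shape below; free() stands in for amdgpu_bo_free_kernel() and the names/values are illustrative, not the real amdgpu API:

```c
#include <stdlib.h>

struct train_ctx {
	int init;	/* 0 stands in for PSP_MEM_TRAIN_NOT_SUPPORT */
	void *c2p_bo;
	void *p2c_bo;
};

/* No local "ret": the function always succeeds, so it returns a
 * constant 0 directly. */
static int training_reserve_vram_fini(struct train_ctx *ctx)
{
	ctx->init = 0;
	if (ctx->c2p_bo) {
		free(ctx->c2p_bo);	/* stand-in for amdgpu_bo_free_kernel() */
		ctx->c2p_bo = NULL;
	}
	if (ctx->p2c_bo) {
		free(ctx->p2c_bo);
		ctx->p2c_bo = NULL;
	}
	return 0;
}
```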

> +
> +/**
> + * amdgpu_ttm_training_reserve_vram_init - create bo vram reservation from 
> memory training
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * create bo vram reservation from memory training.
> + */
> +static int amdgpu_ttm_training_reserve_vram_init(struct amdgpu_device *adev)
> +{
> + int ret = 0;

DO NOT preinitialize ret.

> + struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx;
> +
> + memset(ctx, 0, sizeof(*ctx));
> + if(!adev->fw_vram_usage.mem_train_support) {

Space after keywords: "if (".

> + DRM_DEBUG("memory training does not support!\n");
> + return 0;
> + }
> +
> + ctx->c2p_train_data_offset = adev->fw_vram_usage.mem_train_fb_loc;
> + ctx->p2c_train_data_offset = (adev->gmc.mc_vram_size - 
> GDDR6_MEM_TRAINING_OFFSET);
> + ctx->train_data_size = GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES;
> +
> + 
> DRM_DEBUG("train_data_size:%llx,p2c_train_data_offset:%llx,c2p_train_data_offset:%llx.\n",
> +   ctx->train_data_size,
> +   ctx->p2c_train_data_offset,
> +   ctx->c2p_train_data_offset);
> +
> + ret = amdgpu_bo_create_kernel_at(adev,

Here is where you definitively set "ret", so DO NOT preinitialize it to 0
just to avoid "pesky compiler uninitialized variable warnings". Those
warnings are helpful and make the code more secure: a variable should be
intentionally initialized in all paths.

> +  ctx->p2c_train_data_offset,
> +  ctx->train_data_size,
> +  AMDGPU_GEM_DOMAIN_VRAM,
> +  &ctx->p2c_bo,
> +  NULL);
> + if(ret) {

Space after keywords "if (".

> + DRM_ERROR("alloc p2c_bo failed(%d)!\n", ret);
> + ret = -ENOMEM;
> + goto err_out;
> + }
> +
> + ret = amdgpu_bo_create_kernel_at(adev,
> +  ctx->c2p_train_data_offset,
> +  ctx->train_data_size,
> +  AMDGPU_GEM_DOMAIN_VRAM,
> +  &ctx->c2p_bo,
> +  NULL);
> + if(ret) {

Space after keywords: "if (", according to LKCS.

Regards,
Luben

> + DRM_ERROR("alloc c2p_bo failed(%d)!\n", ret);
> + ret = -ENOMEM;
> + goto err_out;
> + }
> +
> + ctx->init = PSP_MEM_TRAIN_RESERVE_SUCCESS;
> + return 0;
> +
> +err_out:
> + amdgpu_ttm_training_reserve_vram_fini(adev);
> + return ret;
> +}
> +
>  /**
>   * amdgpu_ttm_init - Init the memory management (ttm) as well as various
>   * gtt/vram related fields.
> @@ -1740,6 +1827,14 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>   return r;
>   }
>  
> + /*
> +  *The reserved vram for memory training must be pinned to the specified
> +  *place on the VRAM, so reserve it early.
> +  */
> + r = amdgpu_ttm_training_reserve_vram_init(adev);
> + if (r)
> + 

Re: [PATCH 5/8] drm/amdgpu/atomfirmware: add memory training related helper functions

2019-10-08 Thread Tuikov, Luben
On 2019-10-08 3:29 p.m., Alex Deucher wrote:
> From: "Tianci.Yin" 
> 
> parse firmware to get memory training capability and fb location.
> 
> Change-Id: I147c1d48e255e0191be4beb1ad6b637da607bf75
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |   7 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |   5 +
>  .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c  | 130 ++
>  .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h  |   1 +
>  4 files changed, 143 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 0d60c2e6c592..eeb6b6282fce 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -288,6 +288,9 @@ struct amdgpu_ip_block_version {
>   const struct amd_ip_funcs *funcs;
>  };
>  
> +#define hw_revision(major, minor, revision) \
> + ((((uint32_t) major) << 16) | ((uint32_t) minor << 8) | ((uint32_t) revision))
> +
>  struct amdgpu_ip_block {
>   struct amdgpu_ip_block_status status;
>   const struct amdgpu_ip_block_version *version;
> @@ -630,6 +633,10 @@ struct amdgpu_fw_vram_usage {
>   u64 size;
>   struct amdgpu_bo *reserved_bo;
>   void *va;
> +
> + /*offset on the top of vram, used as c2p write buffer*/
> + u64 mem_train_fb_loc;
> + bool mem_train_support;
>  };
>  
>  /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> index 1c9d40f97a9b..5f5a2d3fff9b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> @@ -2038,6 +2038,11 @@ int amdgpu_atombios_init(struct amdgpu_device *adev)
>   if (adev->is_atom_fw) {
>   amdgpu_atomfirmware_scratch_regs_init(adev);
>   amdgpu_atomfirmware_allocate_fb_scratch(adev);
> + ret = amdgpu_atomfirmware_get_mem_train_fb_loc(adev);
> + if(ret) {

Space after a keyword: "if (ret)" according to LKCS.

> + DRM_ERROR("Failed to get mem train fb location.\n");
> + return ret;
> + }
>   } else {
>   amdgpu_atombios_scratch_regs_init(adev);
>   amdgpu_atombios_allocate_fb_scratch(adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> index 39fd8ae5a822..dfaebd929332 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
> @@ -27,6 +27,7 @@
>  #include "amdgpu_atomfirmware.h"
>  #include "atom.h"
>  #include "atombios.h"
> +#include "soc15_hw_ip.h"
>  
>  bool amdgpu_atomfirmware_gpu_supports_virtualization(struct amdgpu_device 
> *adev)
>  {
> @@ -462,3 +463,132 @@ int amdgpu_atomfirmware_get_gfx_info(struct 
> amdgpu_device *adev)
>   }
>   return -EINVAL;
>  }
> +
> +/*
> + * Check if VBIOS supports GDDR6 training data save/restore
> + */
> +static bool gddr6_mem_train_vbios_support(struct amdgpu_device *adev)
> +{
> + uint16_t data_offset;
> + int index;
> +
> + index = 
> get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
> + firmwareinfo);
> + if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, 
> NULL,
> +   NULL, NULL, &data_offset)) {
> + struct atom_firmware_info_v3_1 *firmware_info =
> + (struct atom_firmware_info_v3_1 
> *)(adev->mode_info.atom_context->bios +
> +data_offset);
> +
> + DRM_DEBUG("atom firmware capability:0x%08x.\n",
> +   le32_to_cpu(firmware_info->firmware_capability));
> +
> + if (le32_to_cpu(firmware_info->firmware_capability) &
> + ATOM_FIRMWARE_CAP_ENABLE_2STAGE_BIST_TRAINING)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static int gddr6_mem_train_support(struct amdgpu_device *adev)
> +{
> + int ret = 0;

int ret;
Don't preinitialize. Instead, explicitly set on all contingencies.
This makes the code more secure. See below.

> + bool vbios_support = false;
> + uint32_t major, minor, revision, hw_v;
> +
> + if (!amdgpu_sriov_vf(adev) &&
> + gddr6_mem_train_vbios_support(adev)) {
> + vbios_support = true;
> + }
> +
> + amdgpu_discovery_get_ip_version(adev, MP0_HWID, &major, &minor, &revision);
> + hw_v = hw_revision(major, minor, revision);
> + /*
> +  * treat 0 revision as a special case since register for MP0 and MMHUB 
> is missing
> +  * for some Navi10 A0, preventing driver from discovering the hwip 
> information since
> +  * none of the functions will be initialized, it should not cause any 
> problems
> +  */
> + switch(hw_v) {

Space after keyword: "switch (hw_v) {" 

Re: [PATCH 4/8] drm/amdgpu: update atomfirmware header with memory training related members

2019-10-08 Thread Tuikov, Luben
On 2019-10-08 3:29 p.m., Alex Deucher wrote:
> From: "Tianci.Yin" 
> 
> add new vram_reserve_block structure and atomfirmware_internal_constants 
> enumeration
> 
> Change-Id: I6ba642ecd7ad94250162ae5c322ed8d85de9c35a
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>  drivers/gpu/drm/amd/include/atomfirmware.h | 15 +++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h 
> b/drivers/gpu/drm/amd/include/atomfirmware.h
> index e88541d67aa0..5196b94097f5 100644
> --- a/drivers/gpu/drm/amd/include/atomfirmware.h
> +++ b/drivers/gpu/drm/amd/include/atomfirmware.h
> @@ -498,6 +498,7 @@ enum atombios_firmware_capability
>ATOM_FIRMWARE_CAP_HWEMU_ENABLE  = 0x0080,
>ATOM_FIRMWARE_CAP_HWEMU_UMC_CFG = 0x0100,
>ATOM_FIRMWARE_CAP_SRAM_ECC  = 0x0200,
> +  ATOM_FIRMWARE_CAP_ENABLE_2STAGE_BIST_TRAINING  = 0x0400,
>  };

Indentation seems off in the added line.

>  
>  enum atom_cooling_solution_id{
> @@ -671,6 +672,20 @@ struct vram_usagebyfirmware_v2_1
>uint16_t  used_by_driver_in_kb; 
>  };
>  
> +/* This is part of vram_usagebyfirmware_v2_1 */
> +struct vram_reserve_block
> +{
> +uint32_t  start_address_in_kb;
> +uint16_t  used_by_firmware_in_kb;
> +uint16_t  used_by_driver_in_kb;
> +};
> +
> +/* Definitions for constance */
> +enum atomfirmware_internal_constants {
> +ONE_K= 0x400,
> +ONE_MEG  = 0x100000,
> +ONE_G= 0x40000000,
> +};

Indent according to LKCS: leading tabs only (tabs are not allowed anywhere
else), and use hard tabs (indent with the TAB character).
(Emacs naturally does this for you: the TAB key indents according to the mode,
and if the line is already correctly indented, nothing is inserted.)

Regards,
Luben

>  
>  /* 
>***
> 


Re: [PATCH 3/8] drm/amdgpu: introduce psp_v11_0_is_sos_alive interface

2019-10-08 Thread Tuikov, Luben
On 2019-10-08 3:29 p.m., Alex Deucher wrote:
> From: "Tianci.Yin" 
> 
> introduce psp_v11_0_is_sos_alive func for common use.
> 
> Change-Id: Iee0a6dd924d6a4b164eb751c0bec49fcb7d79483
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>  drivers/gpu/drm/amd/amdgpu/psp_v11_0.c | 22 +-
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c 
> b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> index 04318cfd50a8..e74a952a3f7c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
> @@ -206,18 +206,26 @@ static int psp_v11_0_init_microcode(struct psp_context 
> *psp)
>   return err;
>  }
>  
> +static bool psp_v11_0_is_sos_alive(struct psp_context *psp)
> +{
> + struct amdgpu_device *adev = psp->adev;
> + uint32_t sol_reg;
> +
> + sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
> +
> + return (sol_reg != 0x0);

Parentheses are unnecessary in the return statement and are not LKCS style.

Regards,
Luben

> +}
> +
>  static int psp_v11_0_bootloader_load_kdb(struct psp_context *psp)
>  {
>   int ret;
>   uint32_t psp_gfxdrv_command_reg = 0;
>   struct amdgpu_device *adev = psp->adev;
> - uint32_t sol_reg;
>  
>   /* Check tOS sign of life register to confirm sys driver and sOS
>* are already been loaded.
>*/
> - sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
> - if (sol_reg) {
> + if (psp_v11_0_is_sos_alive(psp)) {
>   psp->sos_fw_version = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_58);
>   dev_info(adev->dev, "sos fw version = 0x%x.\n", 
> psp->sos_fw_version);
>   return 0;
> @@ -253,13 +261,11 @@ static int psp_v11_0_bootloader_load_sysdrv(struct 
> psp_context *psp)
>   int ret;
>   uint32_t psp_gfxdrv_command_reg = 0;
>   struct amdgpu_device *adev = psp->adev;
> - uint32_t sol_reg;
>  
>   /* Check sOS sign of life register to confirm sys driver and sOS
>* are already been loaded.
>*/
> - sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
> - if (sol_reg) {
> + if (psp_v11_0_is_sos_alive(psp)) {
>   psp->sos_fw_version = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_58);
>   dev_info(adev->dev, "sos fw version = 0x%x.\n", 
> psp->sos_fw_version);
>   return 0;
> @@ -297,13 +303,11 @@ static int psp_v11_0_bootloader_load_sos(struct 
> psp_context *psp)
>   int ret;
>   unsigned int psp_gfxdrv_command_reg = 0;
>   struct amdgpu_device *adev = psp->adev;
> - uint32_t sol_reg;
>  
>   /* Check sOS sign of life register to confirm sys driver and sOS
>* are already been loaded.
>*/
> - sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
> - if (sol_reg)
> + if (psp_v11_0_is_sos_alive(psp))
>   return 0;
>  
>   /* Wait for bootloader to signify that is ready having bit 31 of 
> C2PMSG_35 set to 1 */
> 


Re: [PATCH 13/14] drm/amd/display: Recalculate VCPI slots for new DSC connectors

2019-10-08 Thread Mikita Lipski


On 08.10.2019 12:24, Lyude Paul wrote:
> ...
> yikes
> I need to apologize because I was going through my email and I realized you
> _did_ respond to me earlier regarding some of these questions, it just appears
> the reply fell through the cracks and somehow I didn't notice it until just
> now. Sorry about that!
> 
No worries, the patches got messy and are easily lost in here. I'll
clean up on the next batch so it will be clearer what's happening.

> I'm still not sure I understand why this is a future enhancement? Everything
> we need to implement these helpers should already be here, it's just a matter
> of keeping track who has dsc enabled where in the mst atomic state

As I've mentioned earlier, our policy for DSC is to keep it disabled if
not needed, because it is a lossy compression. Besides the fact that we
don't want an imperfect image on the screen, enabling DSC would also
increase our power consumption. If we need it, we use the greedy
algorithm inside compute_mst_dsc_configs_for_link to only enable DSC on
streams that can't fit into the allowed bandwidth without compression.

I'm not exactly sure how it can be moved into a DRM helper.
We keep track of which stream has DSC enabled by raising the DSC flag in
the dc_stream_state flags structure.
That is why the PBN recalculation was moved to a separate function, so only
streams that have the DSC flag raised get their VCPI recalculated.

I guess we could have a dsc_desired flag on drm_connector_state that
would then trigger recalculation of PBN and VCPI for the connector.
Would that be something applicable as a DRM helper?


> On Mon, 2019-10-07 at 17:52 -0400, Lyude Paul wrote:
>> Ok, let's stop and slow down for a minute here since I've repeated myself a
>> few times, and I'd like to figure out what's actually happening here and
>> whether I'm not being clear enough with my explanations, or if there's just
>> some misunderstanding here.
>>
>> I'll start off by trying my best to explain my hesitance in accepting these
>> patches. To be clear, MST with DSC is a big thing in terms of impact.
>> There's
>> a lot of things to consider:
>>   * What should the default DSC policy for MST be? Should we keep it on by
>> default so that we don't need to trigger a large number of PBN
>> recalculations and enable DSC on a per-case basis, or are there
>> legitimate
>> reasons to keep DSC off by default (such as potential issues with
>> lossiness
>> down the line).
>>   * We have other drivers in the tree that are planning on implementing MST
>> DSC
>> quite soon. Chances are they'll be implementing helpers to do this so
>> this
>> can be supported across the DRM tree, and chances are we'll just have to
>> rewrite and remove most of this code in that case once they do.
>>   * amdgpu already has a _lot_ of code that doesn't need to, and shouldn't be
>> there. This ranges from various DC specific helpers that are essentially
>> the same as the helpers that we already implement in the rest of DRM, to
>> misuses of various kernel APIs, and a complete lack of any kind of code
>> style (at least the last time that I checked) in or out of DC. This makes
>> the driver more difficult for people who don't work at AMD but are well
>> accustomed to working on the rest of the kernel, e.g. myself and others,
>> and it's a problem that seriously needs to be solved. Adding more
>> redundant
>> DP helpers that will inevitably get removed is just making the problem
>> worse.
>>
>> With all of this being said: I've been asking the same questions about this
>> patch throughout the whole patch series so far (since it got passed off to
>> you
>> from David's internship ending):
>>
>>   * Can we keep DSC enabled by default, so that these patches aren't needed?
>>   * If we can't, can we move this into common code so that other drivers can
>> use it?
>> The problem is I haven't received any responses to these questions, other
>> than
>> being told by David a month or two ago that it would be expedient to just
>> accept the patches and move on. Honestly, if discussion had been had about
>> the
>> changes I requested, I would have given my r-b a long time ago. In fact, I
>> really want to give it now! There's multiple patches in this series that
>> would
>> be very useful for other things that are being worked on at the moment, such
>> as the changes to drm_dp_dpcd_read/drm_dp_dpcd_write(). But I really think
>> this needs to be discussed first, and I'm worried that just going ahead and
>> giving my R-B before they have been discussed will further encourage rushing
>> changes like this in the future and continually building more tech debt for
>> ourselves.
>>
>> Please note as well: if anything I've asked for is confusing, or you don't
>> understand what I'm asking or looking for, I am more than willing to help
>> explain things and help out as best as I can. I understand that a lot of the
>> developers working on DRM at AMD may have 

Re: [PATCH 6/8] drm/amdgpu: add psp memory training callbacks and macro

2019-10-08 Thread William Lewis

On 10/8/19 2:29 PM, Alex Deucher wrote:
> From: "Tianci.Yin" 
>
> add interface for memory training.
>
> Change-Id: Ibb6d1d24eb651df796bc2bb3419a44937af60242
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 18 
>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h | 55 +
>   2 files changed, 73 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> index 37ffed5e2171..b996b5bc5804 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> @@ -88,6 +88,17 @@ static int psp_sw_init(void *handle)
>   return ret;
>   }
>   
> + ret = psp_mem_training_init(psp);
> + if (ret) {
> + DRM_ERROR("Failed to initliaze memory training!\n");
Typo.  s/initliaze/initialize/
> + return ret;
> + }
> + ret = psp_mem_training(psp, PSP_MEM_TRAIN_COLD_BOOT);
> + if (ret) {
> + DRM_ERROR("Failed to process memory training!\n");
> + return ret;
> + }
> +
>   return 0;
>   }
>   
> @@ -95,6 +106,7 @@ static int psp_sw_fini(void *handle)
>   {
>   struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>   
> + psp_mem_training_fini(&adev->psp);
>   release_firmware(adev->psp.sos_fw);
>   adev->psp.sos_fw = NULL;
>   release_firmware(adev->psp.asd_fw);
> @@ -1608,6 +1620,12 @@ static int psp_resume(void *handle)
>   
>   DRM_INFO("PSP is resuming...\n");
>   
> + ret = psp_mem_training(psp, PSP_MEM_TRAIN_RESUME);
> + if (ret) {
> + DRM_ERROR("Failed to process memory training!\n");
> + return ret;
> + }
> +
> + mutex_lock(&adev->firmware.mutex);
>   
>   ret = psp_hw_start(psp);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> index 7dd9ae7dbbe4..c6f17d6310d6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
> @@ -49,6 +49,8 @@ enum psp_bootloader_cmd {
>   PSP_BL__LOAD_SYSDRV = 0x1,
>   PSP_BL__LOAD_SOSDRV = 0x2,
>   PSP_BL__LOAD_KEY_DATABASE   = 0x8,
> + PSP_BL__DRAM_LONG_TRAIN = 0x10,
> + PSP_BL__DRAM_SHORT_TRAIN= 0x20,
>   };
>   
>   enum psp_ring_type
> @@ -111,6 +113,9 @@ struct psp_funcs
>   struct ta_ras_trigger_error_input *info);
>   int (*ras_cure_posion)(struct psp_context *psp, uint64_t *mode_ptr);
>   int (*rlc_autoload_start)(struct psp_context *psp);
> + int (*mem_training_init)(struct psp_context *psp);
> + int (*mem_training_fini)(struct psp_context *psp);
> + int (*mem_training)(struct psp_context *psp, uint32_t ops);
>   };
>   
>   #define AMDGPU_XGMI_MAX_CONNECTED_NODES 64
> @@ -161,6 +166,49 @@ struct psp_dtm_context {
>   void*dtm_shared_buf;
>   };
>   
> +#define MEM_TRAIN_SYSTEM_SIGNATURE   0x54534942
> +#define GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES0x1000
> +#define GDDR6_MEM_TRAINING_OFFSET0x8000
> +
> +enum psp_memory_training_init_flag {
> + PSP_MEM_TRAIN_NOT_SUPPORT   = 0x0,
> + PSP_MEM_TRAIN_SUPPORT   = 0x1,
> + PSP_MEM_TRAIN_INIT_FAILED   = 0x2,
> + PSP_MEM_TRAIN_RESERVE_SUCCESS   = 0x4,
> + PSP_MEM_TRAIN_INIT_SUCCESS  = 0x8,
> +};
> +
> +enum psp_memory_training_ops {
> + PSP_MEM_TRAIN_SEND_LONG_MSG = 0x1,
> + PSP_MEM_TRAIN_SAVE  = 0x2,
> + PSP_MEM_TRAIN_RESTORE   = 0x4,
> + PSP_MEM_TRAIN_SEND_SHORT_MSG= 0x8,
> + PSP_MEM_TRAIN_COLD_BOOT = PSP_MEM_TRAIN_SEND_LONG_MSG,
> + PSP_MEM_TRAIN_RESUME= PSP_MEM_TRAIN_SEND_SHORT_MSG,
> +};
> +
> +struct psp_memory_training_context {
> + /*training data size*/
> + u64 train_data_size;
> + /*
> +  * sys_cache
> +  * cpu virtual address
> +  * system memory buffer that used to store the training data.
> +  */
> + void *sys_cache;
> +
> + /*vram offset of the p2c training data*/
> + u64 p2c_train_data_offset;
> + struct amdgpu_bo *p2c_bo;
> +
> + /*vram offset of the c2p training data*/
> + u64 c2p_train_data_offset;
> + struct amdgpu_bo *c2p_bo;
> +
> + enum psp_memory_training_init_flag init;
> + u32 training_cnt;
> +};
> +
>   struct psp_context
>   {
>   struct amdgpu_device*adev;
> @@ -239,6 +287,7 @@ struct psp_context
>   struct psp_hdcp_context hdcp_context;
>   struct psp_dtm_context  dtm_context;
>   struct mutexmutex;
> + struct psp_memory_training_context mem_train_ctx;
>   };
>   
>   struct amdgpu_psp_funcs {
> @@ -281,6 +330,12 @@ struct amdgpu_psp_funcs {
>   (psp)->funcs->xgmi_set_topology_info((psp), (num_device), 
> (topology)) : -EINVAL)
>   #define 

Re: [PATCH 4/8] drm/amdgpu: update atomfirmware header with memory training related members

2019-10-08 Thread William Lewis

On 10/8/19 2:29 PM, Alex Deucher wrote:
> From: "Tianci.Yin" 
>
> add new vram_reserve_block structure and atomfirmware_internal_constants 
> enumeration
>
> Change-Id: I6ba642ecd7ad94250162ae5c322ed8d85de9c35a
> Reviewed-by: Alex Deucher 
> Signed-off-by: Tianci.Yin 
> ---
>   drivers/gpu/drm/amd/include/atomfirmware.h | 15 +++
>   1 file changed, 15 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h 
> b/drivers/gpu/drm/amd/include/atomfirmware.h
> index e88541d67aa0..5196b94097f5 100644
> --- a/drivers/gpu/drm/amd/include/atomfirmware.h
> +++ b/drivers/gpu/drm/amd/include/atomfirmware.h
> @@ -498,6 +498,7 @@ enum atombios_firmware_capability
> ATOM_FIRMWARE_CAP_HWEMU_ENABLE  = 0x0080,
> ATOM_FIRMWARE_CAP_HWEMU_UMC_CFG = 0x0100,
> ATOM_FIRMWARE_CAP_SRAM_ECC  = 0x0200,
> +  ATOM_FIRMWARE_CAP_ENABLE_2STAGE_BIST_TRAINING  = 0x0400,
>   };
>   
>   enum atom_cooling_solution_id{
> @@ -671,6 +672,20 @@ struct vram_usagebyfirmware_v2_1
> uint16_t  used_by_driver_in_kb;
>   };
>   
> +/* This is part of vram_usagebyfirmware_v2_1 */
> +struct vram_reserve_block
> +{
> +uint32_t  start_address_in_kb;
> +uint16_t  used_by_firmware_in_kb;
> +uint16_t  used_by_driver_in_kb;
> +};
> +
> +/* Definitions for constance */

s/constance/constants/

Would it not be better also to widen the enum constants below explicitly 
for legibility, i.e.

ONE_K   = 0x00000400,
ONE_MEG = 0x00100000,
ONE_G   = 0x40000000

> +enum atomfirmware_internal_constants {
> +ONE_K= 0x400,
> +ONE_MEG  = 0x100000,
> +ONE_G= 0x40000000,
> +};
>   
>   /*
> 
> ***
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 8/8] drm/amdgpu/psp: add psp memory training implementation

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

add memory training implementation code to reduce resume time.

Change-Id: I625794a780b11d824ab57ef39cc33b872c6dc6c9
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |   9 ++
 drivers/gpu/drm/amd/amdgpu/psp_v11_0.c  | 171 
 3 files changed, 181 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index eeb6b6282fce..a3fd498bbba4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -150,6 +150,7 @@ extern uint amdgpu_sdma_phase_quantum;
 extern char *amdgpu_disable_cu;
 extern char *amdgpu_virtual_display;
 extern uint amdgpu_pp_feature_mask;
+extern uint amdgpu_force_long_training;
 extern int amdgpu_job_hang_limit;
 extern int amdgpu_lbpw;
 extern int amdgpu_compute_multipipe;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index d892b7481d00..a88ea74ca222 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -127,6 +127,7 @@ char *amdgpu_disable_cu = NULL;
 char *amdgpu_virtual_display = NULL;
 /* OverDrive(bit 14) disabled by default*/
 uint amdgpu_pp_feature_mask = 0xbfff;
+uint amdgpu_force_long_training = 0;
 int amdgpu_job_hang_limit = 0;
 int amdgpu_lbpw = -1;
 int amdgpu_compute_multipipe = -1;
@@ -389,6 +390,14 @@ module_param_named(sched_hw_submission, 
amdgpu_sched_hw_submission, int, 0444);
 MODULE_PARM_DESC(ppfeaturemask, "all power features enabled (default))");
 module_param_named(ppfeaturemask, amdgpu_pp_feature_mask, uint, 0444);
 
+/**
+ * DOC: forcelongtraining (uint)
+ * Force long memory training in resume.
+ * The default is zero, which indicates short training on resume.
+ */
+MODULE_PARM_DESC(forcelongtraining, "force memory long training");
+module_param_named(forcelongtraining, amdgpu_force_long_training, uint, 0444);
+
 /**
  * DOC: pcie_gen_cap (uint)
  * Override PCIE gen speed capabilities. See the CAIL flags in 
drivers/gpu/drm/amd/include/amd_pcie.h.
diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c 
b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
index e74a952a3f7c..23915653a278 100644
--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
@@ -902,6 +902,174 @@ static int psp_v11_0_rlc_autoload_start(struct 
psp_context *psp)
return psp_rlc_autoload_start(psp);
 }
 
+static int psp_v11_0_memory_training_send_msg(struct psp_context *psp, int msg)
+{
+   int ret = 0;
+   int i = 0;
+   uint32_t data_32 = 0;
+   struct amdgpu_device *adev = psp->adev;
+
+   data_32 = (psp->mem_train_ctx.c2p_train_data_offset >> 20);
+   WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_36, data_32);
+   WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_35, msg);
+
+   /*max 5s*/
+   while (i < 50) {
+   ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, 
mmMP0_SMN_C2PMSG_35),
  0x80000000, 0x80000000, false);
+   if (ret == 0)
+   break;
+   i++;
+   }
+   DRM_DEBUG("%s training %s, cost %d * %dms.\n",
+ (msg == PSP_BL__DRAM_SHORT_TRAIN) ? "short" : "long",
+ (ret == 0) ? "succeed" : "failed",
+ i, adev->usec_timeout/1000);
+   return ret;
+}
+
+static int psp_v11_0_memory_training_fini(struct psp_context *psp)
+{
+   int ret = 0;
+   struct psp_memory_training_context *ctx = &psp->mem_train_ctx;
+
+   ctx->init = PSP_MEM_TRAIN_NOT_SUPPORT;
+   if(ctx->sys_cache) {
+   kfree(ctx->sys_cache);
+   ctx->sys_cache = NULL;
+   }
+
+   return ret;
+}
+
+static int psp_v11_0_memory_training_init(struct psp_context *psp)
+{
+   int ret = 0;
+   struct psp_memory_training_context *ctx = &psp->mem_train_ctx;
+
+   if(ctx->init != PSP_MEM_TRAIN_RESERVE_SUCCESS) {
+   DRM_DEBUG("memory training does not support!\n");
+   return 0;
+   }
+
+   ctx->sys_cache = kzalloc(ctx->train_data_size, GFP_KERNEL);
+   if(ctx->sys_cache == NULL) {
+   DRM_ERROR("alloc mem_train_ctx.sys_cache failed(%d)!\n", ret);
+   ret = -ENOMEM;
+   goto err_out;
+   }
+
+   
DRM_DEBUG("train_data_size:%llx,p2c_train_data_offset:%llx,c2p_train_data_offset:%llx.\n",
+ ctx->train_data_size,
+ ctx->p2c_train_data_offset,
+ ctx->c2p_train_data_offset);
+   ctx->init = PSP_MEM_TRAIN_INIT_SUCCESS;
+   return 0;
+
+err_out:
+   psp_v11_0_memory_training_fini(psp);
+   return ret;
+}
+
+/*
+ * save and restore process
+ */
+static int psp_v11_0_memory_training(struct psp_context *psp, uint32_t ops)
+{
+   int ret = 0;
+   uint32_t p2c_header[4];
+   struct psp_memory_training_context *ctx = &psp->mem_train_ctx;
+   uint32_t 

[PATCH 7/8] drm/amdgpu: reserve vram for memory training

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

memory training uses specific fixed vram segments; reserve these
segments before anyone may allocate them.

Change-Id: I1436755813a565608a2857a683f535377620a637
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 96 +
 1 file changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 74a9bd94f10c..0819721af16e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1667,6 +1667,93 @@ static int amdgpu_ttm_fw_reserve_vram_init(struct 
amdgpu_device *adev)
   &adev->fw_vram_usage.va);
 }
 
+/*
+ * Memory training reservation functions
+ */
+/**
+ * amdgpu_ttm_training_reserve_vram_fini - free memory training reserved vram
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * free memory training reserved vram if it has been reserved.
+ */
+static int amdgpu_ttm_training_reserve_vram_fini(struct amdgpu_device *adev)
+{
+   int ret = 0;
+   struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx;
+
+   ctx->init = PSP_MEM_TRAIN_NOT_SUPPORT;
+   if(ctx->c2p_bo) {
+   amdgpu_bo_free_kernel(&ctx->c2p_bo, NULL, NULL);
+   ctx->c2p_bo = NULL;
+   }
+   if(ctx->p2c_bo) {
+   amdgpu_bo_free_kernel(&ctx->p2c_bo, NULL, NULL);
+   ctx->p2c_bo = NULL;
+   }
+
+   return ret;
+}
+
+/**
+ * amdgpu_ttm_training_reserve_vram_init - create bo vram reservation for 
memory training
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * create bo vram reservation for memory training.
+ */
+static int amdgpu_ttm_training_reserve_vram_init(struct amdgpu_device *adev)
+{
+   int ret = 0;
+   struct psp_memory_training_context *ctx = &adev->psp.mem_train_ctx;
+
+   memset(ctx, 0, sizeof(*ctx));
+   if(!adev->fw_vram_usage.mem_train_support) {
+   DRM_DEBUG("memory training does not support!\n");
+   return 0;
+   }
+
+   ctx->c2p_train_data_offset = adev->fw_vram_usage.mem_train_fb_loc;
+   ctx->p2c_train_data_offset = (adev->gmc.mc_vram_size - 
GDDR6_MEM_TRAINING_OFFSET);
+   ctx->train_data_size = GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES;
+
+   
DRM_DEBUG("train_data_size:%llx,p2c_train_data_offset:%llx,c2p_train_data_offset:%llx.\n",
+ ctx->train_data_size,
+ ctx->p2c_train_data_offset,
+ ctx->c2p_train_data_offset);
+
+   ret = amdgpu_bo_create_kernel_at(adev,
+ctx->p2c_train_data_offset,
+ctx->train_data_size,
+AMDGPU_GEM_DOMAIN_VRAM,
+&ctx->p2c_bo,
+NULL);
+   if(ret) {
+   DRM_ERROR("alloc p2c_bo failed(%d)!\n", ret);
+   ret = -ENOMEM;
+   goto err_out;
+   }
+
+   ret = amdgpu_bo_create_kernel_at(adev,
+ctx->c2p_train_data_offset,
+ctx->train_data_size,
+AMDGPU_GEM_DOMAIN_VRAM,
+&ctx->c2p_bo,
+NULL);
+   if(ret) {
+   DRM_ERROR("alloc c2p_bo failed(%d)!\n", ret);
+   ret = -ENOMEM;
+   goto err_out;
+   }
+
+   ctx->init = PSP_MEM_TRAIN_RESERVE_SUCCESS;
+   return 0;
+
+err_out:
+   amdgpu_ttm_training_reserve_vram_fini(adev);
+   return ret;
+}
+
 /**
  * amdgpu_ttm_init - Init the memory management (ttm) as well as various
  * gtt/vram related fields.
@@ -1740,6 +1827,14 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
return r;
}
 
+   /*
+*The reserved vram for memory training must be pinned to the specified
+*place on the VRAM, so reserve it early.
+*/
+   r = amdgpu_ttm_training_reserve_vram_init(adev);
+   if (r)
+   return r;
+
/* allocate memory as required for VGA
 * This is used for VGA emulation and pre-OS scanout buffers to
 * avoid display artifacts while transitioning between pre-OS
@@ -1825,6 +1920,7 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
return;
 
amdgpu_ttm_debugfs_fini(adev);
+   amdgpu_ttm_training_reserve_vram_fini(adev);
amdgpu_ttm_fw_reserve_vram_fini(adev);
if (adev->mman.aper_base_kaddr)
iounmap(adev->mman.aper_base_kaddr);
-- 
2.20.1


[PATCH 2/8] drm/amdgpu: add a generic fb accessing helper function

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

add a generic helper function for accessing framebuffer via MMIO

Change-Id: I4baa0aa53c93a94c2eff98c6211a61f369239982
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 42 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 13 +-
 3 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 17ccb9015141..0d60c2e6c592 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -985,6 +985,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 void amdgpu_device_fini(struct amdgpu_device *adev);
 int amdgpu_gpu_wait_for_idle(struct amdgpu_device *adev);
 
+int amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
+  uint32_t *buf, size_t size, bool write);
 uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
uint32_t acc_flags);
 void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index f25275abf408..53ce227f759c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -153,6 +153,48 @@ bool amdgpu_device_is_px(struct drm_device *dev)
return false;
 }
 
+/**
+ * VRAM access helper functions.
+ *
+ * amdgpu_device_vram_access - read/write a buffer in vram
+ *
+ * @adev: amdgpu_device pointer
+ * @pos: offset of the buffer in vram
+ * @buf: virtual address of the buffer in system memory
+ * @size: read/write size; the buffer at @buf must be at least @size bytes
+ * @write: true - write to vram, otherwise - read from vram
+ *
+ * Returns 0 on success or an -error on failure.
+ */
+int amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
+  uint32_t *buf, size_t size, bool write)
+{
+   uint64_t end = pos + size;
+   unsigned long flags;
+
+   if (IS_ERR_OR_NULL(buf) ||
+   (adev->gmc.mc_vram_size > 0 &&
+end > adev->gmc.mc_vram_size)) {
+   DRM_ERROR("parameter error! pos:%llx, buf:%llx, size:%zx.\n",
+ pos, (u64)buf, size);
+   return -EINVAL;
+   }
+
+   while (pos < end) {
+   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
+   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)pos) | 0x80000000);
+   WREG32_NO_KIQ(mmMM_INDEX_HI, pos >> 31);
+   if (write)
+   WREG32_NO_KIQ(mmMM_DATA, *buf++);
+   else
+   *buf++ = RREG32_NO_KIQ(mmMM_DATA);
+   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
+   pos += 4;
+   }
+
+   return 0;
+}
+
 /*
  * MMIO register access helper functions.
  */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index db2dab3a6dff..324c2d605f54 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -134,21 +134,10 @@ static int hw_id_map[MAX_HWIP] = {
 
 static int amdgpu_discovery_read_binary(struct amdgpu_device *adev, uint8_t 
*binary)
 {
-   uint32_t *p = (uint32_t *)binary;
uint64_t vram_size = (uint64_t)RREG32(mmRCC_CONFIG_MEMSIZE) << 20;
uint64_t pos = vram_size - BINARY_MAX_SIZE;
-   unsigned long flags;
-
-   while (pos < vram_size) {
-   spin_lock_irqsave(&adev->mmio_idx_lock, flags);
-   WREG32_NO_KIQ(mmMM_INDEX, ((uint32_t)pos) | 0x80000000);
-   WREG32_NO_KIQ(mmMM_INDEX_HI, pos >> 31);
-   *p++ = RREG32_NO_KIQ(mmMM_DATA);
-   spin_unlock_irqrestore(&adev->mmio_idx_lock, flags);
-   pos += 4;
-   }
 
-   return 0;
+   return amdgpu_device_vram_access(adev, pos, (uint32_t *)binary, 
BINARY_MAX_SIZE, false);
 }
 
 static uint16_t amdgpu_discovery_calculate_checksum(uint8_t *data, uint32_t 
size)
-- 
2.20.1


[PATCH 4/8] drm/amdgpu: update atomfirmware header with memory training related members

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

add new vram_reserve_block structure and atomfirmware_internal_constants 
enumeration

Change-Id: I6ba642ecd7ad94250162ae5c322ed8d85de9c35a
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/include/atomfirmware.h | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h 
b/drivers/gpu/drm/amd/include/atomfirmware.h
index e88541d67aa0..5196b94097f5 100644
--- a/drivers/gpu/drm/amd/include/atomfirmware.h
+++ b/drivers/gpu/drm/amd/include/atomfirmware.h
@@ -498,6 +498,7 @@ enum atombios_firmware_capability
   ATOM_FIRMWARE_CAP_HWEMU_ENABLE  = 0x0080,
   ATOM_FIRMWARE_CAP_HWEMU_UMC_CFG = 0x0100,
   ATOM_FIRMWARE_CAP_SRAM_ECC  = 0x0200,
+  ATOM_FIRMWARE_CAP_ENABLE_2STAGE_BIST_TRAINING  = 0x0400,
 };
 
 enum atom_cooling_solution_id{
@@ -671,6 +672,20 @@ struct vram_usagebyfirmware_v2_1
   uint16_t  used_by_driver_in_kb; 
 };
 
+/* This is part of vram_usagebyfirmware_v2_1 */
+struct vram_reserve_block
+{
+uint32_t  start_address_in_kb;
+uint16_t  used_by_firmware_in_kb;
+uint16_t  used_by_driver_in_kb;
+};
+
+/* Definitions for constants */
+enum atomfirmware_internal_constants {
+ONE_K  = 0x400,
+ONE_MEG= 0x100000,
+ONE_G  = 0x40000000,
+};
 
 /* 
   ***
-- 
2.20.1


[PATCH 5/8] drm/amdgpu/atomfirmware: add memory training related helper functions

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

parse firmware to get memory training capability and fb location.

Change-Id: I147c1d48e255e0191be4beb1ad6b637da607bf75
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |   7 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |   5 +
 .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c  | 130 ++
 .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h  |   1 +
 4 files changed, 143 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 0d60c2e6c592..eeb6b6282fce 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -288,6 +288,9 @@ struct amdgpu_ip_block_version {
const struct amd_ip_funcs *funcs;
 };
 
+#define hw_revision(major, minor, revision) \
+   ((((uint32_t) major) << 16) | ((uint32_t) minor << 8) | ((uint32_t) revision))
+
 struct amdgpu_ip_block {
struct amdgpu_ip_block_status status;
const struct amdgpu_ip_block_version *version;
@@ -630,6 +633,10 @@ struct amdgpu_fw_vram_usage {
u64 size;
struct amdgpu_bo *reserved_bo;
void *va;
+
+   /*offset on the top of vram, used as c2p write buffer*/
+   u64 mem_train_fb_loc;
+   bool mem_train_support;
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
index 1c9d40f97a9b..5f5a2d3fff9b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
@@ -2038,6 +2038,11 @@ int amdgpu_atombios_init(struct amdgpu_device *adev)
if (adev->is_atom_fw) {
amdgpu_atomfirmware_scratch_regs_init(adev);
amdgpu_atomfirmware_allocate_fb_scratch(adev);
+   ret = amdgpu_atomfirmware_get_mem_train_fb_loc(adev);
+   if(ret) {
+   DRM_ERROR("Failed to get mem train fb location.\n");
+   return ret;
+   }
} else {
amdgpu_atombios_scratch_regs_init(adev);
amdgpu_atombios_allocate_fb_scratch(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
index 39fd8ae5a822..dfaebd929332 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
@@ -27,6 +27,7 @@
 #include "amdgpu_atomfirmware.h"
 #include "atom.h"
 #include "atombios.h"
+#include "soc15_hw_ip.h"
 
 bool amdgpu_atomfirmware_gpu_supports_virtualization(struct amdgpu_device 
*adev)
 {
@@ -462,3 +463,132 @@ int amdgpu_atomfirmware_get_gfx_info(struct amdgpu_device 
*adev)
}
return -EINVAL;
 }
+
+/*
+ * Check if VBIOS supports GDDR6 training data save/restore
+ */
+static bool gddr6_mem_train_vbios_support(struct amdgpu_device *adev)
+{
+   uint16_t data_offset;
+   int index;
+
+   index = 
get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+   firmwareinfo);
+   if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, 
NULL,
+ NULL, NULL, &data_offset)) {
+   struct atom_firmware_info_v3_1 *firmware_info =
+   (struct atom_firmware_info_v3_1 
*)(adev->mode_info.atom_context->bios +
+  data_offset);
+
+   DRM_DEBUG("atom firmware capability:0x%08x.\n",
+ le32_to_cpu(firmware_info->firmware_capability));
+
+   if (le32_to_cpu(firmware_info->firmware_capability) &
+   ATOM_FIRMWARE_CAP_ENABLE_2STAGE_BIST_TRAINING)
+   return true;
+   }
+
+   return false;
+}
+
+static int gddr6_mem_train_support(struct amdgpu_device *adev)
+{
+   int ret = 0;
+   bool vbios_support = false;
+   uint32_t major, minor, revision, hw_v;
+
+   if (!amdgpu_sriov_vf(adev) &&
+   gddr6_mem_train_vbios_support(adev)) {
+   vbios_support = true;
+   }
+
+   amdgpu_discovery_get_ip_version(adev, MP0_HWID, &major, &minor, &revision);
+   hw_v = hw_revision(major, minor, revision);
+   /*
+* treat 0 revision as a special case since register for MP0 and MMHUB 
is missing
+* for some Navi10 A0, preventing driver from discovering the hwip 
information since
+* none of the functions will be initialized, it should not cause any 
problems
+*/
+   switch(hw_v) {
+   case hw_revision(11, 0, 0):
+   case hw_revision(11, 0, 5):
+   ret = vbios_support;
+   break;
+   default:
+   if (vbios_support) {
+   DRM_ERROR("memory training vbios supports but psp 
hw(%08x)"
+ " doesn't support!\n", hw_v);
+   ret = -1;
+   }
+   break;
+   

[PATCH 6/8] drm/amdgpu: add psp memory training callbacks and macro

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

add interface for memory training.

Change-Id: Ibb6d1d24eb651df796bc2bb3419a44937af60242
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 18 
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h | 55 +
 2 files changed, 73 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 37ffed5e2171..b996b5bc5804 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -88,6 +88,17 @@ static int psp_sw_init(void *handle)
return ret;
}
 
+   ret = psp_mem_training_init(psp);
+   if (ret) {
+   DRM_ERROR("Failed to initliaze memory training!\n");
+   return ret;
+   }
+   ret = psp_mem_training(psp, PSP_MEM_TRAIN_COLD_BOOT);
+   if (ret) {
+   DRM_ERROR("Failed to process memory training!\n");
+   return ret;
+   }
+
return 0;
 }
 
@@ -95,6 +106,7 @@ static int psp_sw_fini(void *handle)
 {
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+   psp_mem_training_fini(&adev->psp);
release_firmware(adev->psp.sos_fw);
adev->psp.sos_fw = NULL;
release_firmware(adev->psp.asd_fw);
@@ -1608,6 +1620,12 @@ static int psp_resume(void *handle)
 
DRM_INFO("PSP is resuming...\n");
 
+   ret = psp_mem_training(psp, PSP_MEM_TRAIN_RESUME);
+   if (ret) {
+   DRM_ERROR("Failed to process memory training!\n");
+   return ret;
+   }
+
	mutex_lock(&adev->firmware.mutex);
 
ret = psp_hw_start(psp);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
index 7dd9ae7dbbe4..c6f17d6310d6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
@@ -49,6 +49,8 @@ enum psp_bootloader_cmd {
PSP_BL__LOAD_SYSDRV = 0x1,
PSP_BL__LOAD_SOSDRV = 0x2,
PSP_BL__LOAD_KEY_DATABASE   = 0x8,
+   PSP_BL__DRAM_LONG_TRAIN = 0x10,
+   PSP_BL__DRAM_SHORT_TRAIN= 0x20,
 };
 
 enum psp_ring_type
@@ -111,6 +113,9 @@ struct psp_funcs
struct ta_ras_trigger_error_input *info);
int (*ras_cure_posion)(struct psp_context *psp, uint64_t *mode_ptr);
int (*rlc_autoload_start)(struct psp_context *psp);
+   int (*mem_training_init)(struct psp_context *psp);
+   int (*mem_training_fini)(struct psp_context *psp);
+   int (*mem_training)(struct psp_context *psp, uint32_t ops);
 };
 
 #define AMDGPU_XGMI_MAX_CONNECTED_NODES64
@@ -161,6 +166,49 @@ struct psp_dtm_context {
void*dtm_shared_buf;
 };
 
+#define MEM_TRAIN_SYSTEM_SIGNATURE 0x54534942
+#define GDDR6_MEM_TRAINING_DATA_SIZE_IN_BYTES  0x1000
+#define GDDR6_MEM_TRAINING_OFFSET  0x8000
+
+enum psp_memory_training_init_flag {
+   PSP_MEM_TRAIN_NOT_SUPPORT   = 0x0,
+   PSP_MEM_TRAIN_SUPPORT   = 0x1,
+   PSP_MEM_TRAIN_INIT_FAILED   = 0x2,
+   PSP_MEM_TRAIN_RESERVE_SUCCESS   = 0x4,
+   PSP_MEM_TRAIN_INIT_SUCCESS  = 0x8,
+};
+
+enum psp_memory_training_ops {
+   PSP_MEM_TRAIN_SEND_LONG_MSG = 0x1,
+   PSP_MEM_TRAIN_SAVE  = 0x2,
+   PSP_MEM_TRAIN_RESTORE   = 0x4,
+   PSP_MEM_TRAIN_SEND_SHORT_MSG= 0x8,
+   PSP_MEM_TRAIN_COLD_BOOT = PSP_MEM_TRAIN_SEND_LONG_MSG,
+   PSP_MEM_TRAIN_RESUME= PSP_MEM_TRAIN_SEND_SHORT_MSG,
+};
+
+struct psp_memory_training_context {
+   /*training data size*/
+   u64 train_data_size;
+   /*
+* sys_cache
+* cpu virtual address
+* system memory buffer that used to store the training data.
+*/
+   void *sys_cache;
+
+   /*vram offset of the p2c training data*/
+   u64 p2c_train_data_offset;
+   struct amdgpu_bo *p2c_bo;
+
+   /*vram offset of the c2p training data*/
+   u64 c2p_train_data_offset;
+   struct amdgpu_bo *c2p_bo;
+
+   enum psp_memory_training_init_flag init;
+   u32 training_cnt;
+};
+
 struct psp_context
 {
struct amdgpu_device*adev;
@@ -239,6 +287,7 @@ struct psp_context
struct psp_hdcp_context hdcp_context;
struct psp_dtm_context  dtm_context;
struct mutexmutex;
+   struct psp_memory_training_context mem_train_ctx;
 };
 
 struct amdgpu_psp_funcs {
@@ -281,6 +330,12 @@ struct amdgpu_psp_funcs {
(psp)->funcs->xgmi_set_topology_info((psp), (num_device), 
(topology)) : -EINVAL)
 #define psp_rlc_autoload(psp) \
((psp)->funcs->rlc_autoload_start ? 
(psp)->funcs->rlc_autoload_start((psp)) : 0)
+#define psp_mem_training_init(psp) \
+   ((psp)->funcs->mem_training_init ? 
(psp)->funcs->mem_training_init((psp)) : 0)

[PATCH 1/8] drm/amdgpu: update amdgpu_discovery to handle revision

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

update amdgpu_discovery to get IP revision.

Change-Id: If8152103d03b58e1dc0f32db63625e290f5f08a0
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 4 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 1481899f86c1..db2dab3a6dff 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -333,7 +333,7 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device 
*adev)
 }
 
 int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id,
-   int *major, int *minor)
+   int *major, int *minor, int *revision)
 {
struct binary_header *bhdr;
struct ip_discovery_header *ihdr;
@@ -369,6 +369,8 @@ int amdgpu_discovery_get_ip_version(struct amdgpu_device 
*adev, int hw_id,
*major = ip->major;
if (minor)
*minor = ip->minor;
+   if (revision)
+   *revision = ip->revision;
return 0;
}
ip_offset += sizeof(*ip) + 4 * (ip->num_base_address - 
1);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h
index 85b8c4d4d576..ada30cfd9d35 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h
@@ -28,7 +28,7 @@ int amdgpu_discovery_init(struct amdgpu_device *adev);
 void amdgpu_discovery_fini(struct amdgpu_device *adev);
 int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev);
 int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id,
-int *major, int *minor);
+int *major, int *minor, int *revision);
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev);
 
 #endif /* __AMDGPU_DISCOVERY__ */
-- 
2.20.1


[PATCH 3/8] drm/amdgpu: introduce psp_v11_0_is_sos_alive interface

2019-10-08 Thread Alex Deucher
From: "Tianci.Yin" 

introduce psp_v11_0_is_sos_alive func for common use.

Change-Id: Iee0a6dd924d6a4b164eb751c0bec49fcb7d79483
Reviewed-by: Alex Deucher 
Signed-off-by: Tianci.Yin 
---
 drivers/gpu/drm/amd/amdgpu/psp_v11_0.c | 22 +-
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c 
b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
index 04318cfd50a8..e74a952a3f7c 100644
--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
@@ -206,18 +206,26 @@ static int psp_v11_0_init_microcode(struct psp_context 
*psp)
return err;
 }
 
+static bool psp_v11_0_is_sos_alive(struct psp_context *psp)
+{
+   struct amdgpu_device *adev = psp->adev;
+   uint32_t sol_reg;
+
+   sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
+
+   return (sol_reg != 0x0);
+}
+
 static int psp_v11_0_bootloader_load_kdb(struct psp_context *psp)
 {
int ret;
uint32_t psp_gfxdrv_command_reg = 0;
struct amdgpu_device *adev = psp->adev;
-   uint32_t sol_reg;
 
/* Check tOS sign of life register to confirm sys driver and sOS
 * are already been loaded.
 */
-   sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
-   if (sol_reg) {
+   if (psp_v11_0_is_sos_alive(psp)) {
psp->sos_fw_version = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_58);
dev_info(adev->dev, "sos fw version = 0x%x.\n", 
psp->sos_fw_version);
return 0;
@@ -253,13 +261,11 @@ static int psp_v11_0_bootloader_load_sysdrv(struct 
psp_context *psp)
int ret;
uint32_t psp_gfxdrv_command_reg = 0;
struct amdgpu_device *adev = psp->adev;
-   uint32_t sol_reg;
 
/* Check sOS sign of life register to confirm sys driver and sOS
 * are already been loaded.
 */
-   sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
-   if (sol_reg) {
+   if (psp_v11_0_is_sos_alive(psp)) {
psp->sos_fw_version = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_58);
dev_info(adev->dev, "sos fw version = 0x%x.\n", 
psp->sos_fw_version);
return 0;
@@ -297,13 +303,11 @@ static int psp_v11_0_bootloader_load_sos(struct 
psp_context *psp)
int ret;
unsigned int psp_gfxdrv_command_reg = 0;
struct amdgpu_device *adev = psp->adev;
-   uint32_t sol_reg;
 
/* Check sOS sign of life register to confirm sys driver and sOS
 * are already been loaded.
 */
-   sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
-   if (sol_reg)
+   if (psp_v11_0_is_sos_alive(psp))
return 0;
 
/* Wait for bootloader to signify that is ready having bit 31 of 
C2PMSG_35 set to 1 */
-- 
2.20.1


[PATCH 0/8] low latency memory training for navi

2019-10-08 Thread Alex Deucher
This patch set improves the latency for memory training
on navi parts.

Tianci.Yin (8):
  drm/amdgpu: update amdgpu_discovery to handle revision
  drm/amdgpu: add a generic fb accessing helper function
  drm/amdgpu: introduce psp_v11_0_is_sos_alive interface
  drm/amdgpu: update atomfirmware header with memory training related
members
  drm/amdgpu/atomfirmware: add memory training related helper functions
  drm/amdgpu: add psp memory training callbacks and macro
  drm/amdgpu: reserve vram for memory training
  drm/amdgpu/psp: add psp memory training implementation

 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  10 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |   5 +
 .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c  | 130 
 .../gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h  |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c|  42 
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c |  17 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |   9 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   |  18 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h   |  55 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  96 +
 drivers/gpu/drm/amd/amdgpu/psp_v11_0.c| 193 +-
 drivers/gpu/drm/amd/include/atomfirmware.h|  15 ++
 13 files changed, 570 insertions(+), 23 deletions(-)

-- 
2.20.1


[PATCH] drm/amdgpu/display/dcn2.1: drop unused function

2019-10-08 Thread Alex Deucher
It's the same as the dcn2.0 function which it already uses.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index d2fc61490052..9a6701698faa 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -1352,14 +1352,6 @@ static struct pp_smu_funcs *dcn21_pp_smu_create(struct 
dc_context *ctx)
return pp_smu;
 }
 
-static void dcn21_pp_smu_destroy(struct pp_smu_funcs **pp_smu)
-{
-   if (pp_smu && *pp_smu) {
-   kfree(*pp_smu);
-   *pp_smu = NULL;
-   }
-}
-
 static struct audio *dcn21_create_audio(
struct dc_context *ctx, unsigned int inst)
 {
-- 
2.20.1


Re: [PATCH RFC v4 16/16] drm/amdgpu: Integrate with DRM cgroup

2019-10-08 Thread Kuehling, Felix
On 2019-08-29 2:05 a.m., Kenny Ho wrote:
> The number of logical gpu (lgpu) is defined to be the number of compute
> unit (CU) for a device.  The lgpu allocation limit only applies to
> compute workload for the moment (enforced via kfd queue creation.)  Any
> cu_mask update is validated against the availability of the compute unit
> as defined by the drmcg the kfd process belongs to.

There is something missing here. There is an API for the application to 
specify a CU mask. Right now it looks like the application-specified and 
CGroup-specified CU masks would clobber each other. Instead the two 
should be merged.

The CGroup-specified mask should specify a subset of CUs available for 
application-specified CU masks. When the cgroup CU mask changes, you'd 
need to take any application-specified CU masks into account before 
updating the hardware.

The KFD topology APIs report the number of available CUs to the 
application. CGroups would change that number at runtime and 
applications would not expect that. I think the best way to deal with 
that would be to have multiple bits in the application-specified CU mask 
map to the same CU. How to do that in a fair way is not obvious. I guess 
a more coarse-grain division of the GPU into LGPUs would make this 
somewhat easier.

How is this problem handled for CPU cores and the interaction with CPU 
pthread_setaffinity_np?

Regards,
   Felix


>
> Change-Id: I69a57452c549173a1cd623c30dc57195b3b6563e
> Signed-off-by: Kenny Ho 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h|   4 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  21 +++
>   drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  |   6 +
>   drivers/gpu/drm/amd/amdkfd/kfd_priv.h |   3 +
>   .../amd/amdkfd/kfd_process_queue_manager.c| 140 ++
>   5 files changed, 174 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> index 55cb1b2094fd..369915337213 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
> @@ -198,6 +198,10 @@ uint8_t amdgpu_amdkfd_get_xgmi_hops_count(struct kgd_dev 
> *dst, struct kgd_dev *s
>   valid;  \
>   })
>   
> +int amdgpu_amdkfd_update_cu_mask_for_process(struct task_struct *task,
> + struct amdgpu_device *adev, unsigned long *lgpu_bitmap,
> + unsigned int nbits);
> +
>   /* GPUVM API */
>   int amdgpu_amdkfd_gpuvm_create_process_vm(struct kgd_dev *kgd, unsigned int 
> pasid,
>   void **vm, void **process_info,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 163a4fbf0611..8abeffdd2e5b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1398,9 +1398,29 @@ amdgpu_get_crtc_scanout_position(struct drm_device 
> *dev, unsigned int pipe,
>   static void amdgpu_drmcg_custom_init(struct drm_device *dev,
>   struct drmcg_props *props)
>   {
> + struct amdgpu_device *adev = dev->dev_private;
> +
> + props->lgpu_capacity = adev->gfx.cu_info.number;
> +
>   props->limit_enforced = true;
>   }
>   
> +static void amdgpu_drmcg_limit_updated(struct drm_device *dev,
> + struct task_struct *task, struct drmcg_device_resource *ddr,
> + enum drmcg_res_type res_type)
> +{
> + struct amdgpu_device *adev = dev->dev_private;
> +
> + switch (res_type) {
> + case DRMCG_TYPE_LGPU:
> + amdgpu_amdkfd_update_cu_mask_for_process(task, adev,
> +ddr->lgpu_allocated, dev->drmcg_props.lgpu_capacity);
> + break;
> + default:
> + break;
> + }
> +}
> +
>   static struct drm_driver kms_driver = {
>   .driver_features =
>   DRIVER_USE_AGP | DRIVER_ATOMIC |
> @@ -1438,6 +1458,7 @@ static struct drm_driver kms_driver = {
>   .gem_prime_mmap = amdgpu_gem_prime_mmap,
>   
>   .drmcg_custom_init = amdgpu_drmcg_custom_init,
> + .drmcg_limit_updated = amdgpu_drmcg_limit_updated,
>   
>   .name = DRIVER_NAME,
>   .desc = DRIVER_DESC,
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> index 138c70454e2b..fa765b803f97 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> @@ -450,6 +450,12 @@ static int kfd_ioctl_set_cu_mask(struct file *filp, 
> struct kfd_process *p,
>   return -EFAULT;
>   }
>   
> + if (!pqm_drmcg_lgpu_validate(p, args->queue_id, properties.cu_mask, 
> cu_mask_size)) {
> + pr_debug("CU mask not permitted by DRM Cgroup");
> + kfree(properties.cu_mask);
> + return -EACCES;
> + }
> +
>   mutex_lock(&p->mutex);
>   
>   retval = pqm_set_cu_mask(&p->pqm, args->queue_id, &properties);
> diff --git 

Re: [PATCH RFC v4 14/16] drm, cgroup: Introduce lgpu as DRM cgroup resource

2019-10-08 Thread Kuehling, Felix
On 2019-08-29 2:05 a.m., Kenny Ho wrote:
> drm.lgpu
>  A read-write nested-keyed file which exists on all cgroups.
>  Each entry is keyed by the DRM device's major:minor.
>
>  lgpu stands for logical GPU, it is an abstraction used to
>  subdivide a physical DRM device for the purpose of resource
>  management.
>
>  The lgpu is a discrete quantity that is device specific (i.e.
>  some DRM devices may have 64 lgpus while others may have 100
>  lgpus.)  The lgpu is a single quantity with two representations
>  denoted by the following nested keys.
>
>  =====  =======================================
>  count  Representing lgpu as anonymous resource
>  list   Representing lgpu as named resource
>  =====  =======================================
>
>  For example:
>  226:0 count=256 list=0-255
>  226:1 count=4 list=0,2,4,6
>  226:2 count=32 list=32-63
>
>  lgpu is represented by a bitmap and uses the bitmap_parselist
>  kernel function so the list key input format is a
>  comma-separated list of decimal numbers and ranges.
>
>  Consecutively set bits are shown as two hyphen-separated decimal
>  numbers, the smallest and largest bit numbers set in the range.
>  Optionally each range can be postfixed to denote that only parts
>  of it should be set.  The range will be divided into groups of a
>  specific size.
>  Syntax: range:used_size/group_size
>  Example: 0-1023:2/256 ==> 0,1,256,257,512,513,768,769
>
>  The count key is the hamming weight / hweight of the bitmap.
>
>  Both count and list accept the max and default keywords.
>
>  Some DRM devices may only support lgpu as anonymous resources.
>  In such case, the significance of the position of the set bits
>  in list will be ignored.
>
>  This lgpu resource supports the 'allocation' resource
>  distribution model.
>
> Change-Id: I1afcacf356770930c7f925df043e51ad06ceb98e
> Signed-off-by: Kenny Ho 
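
The quoted `range:used_size/group_size` postfix syntax can be sketched in C as follows (a simplified mirror of the `bitmap_parselist()` case described above; plain ranges and comma-separated lists are omitted, and the helper names are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NBITS 2048
static unsigned char bm[NBITS];

/* Expand "A-B:U/G": within A..B, set the first U bits of every
 * G-sized group. */
static int expand_range(const char *s)
{
	unsigned a, b, used, grp;

	if (sscanf(s, "%u-%u:%u/%u", &a, &b, &used, &grp) != 4)
		return -1;
	memset(bm, 0, sizeof(bm));
	for (unsigned base = a; base <= b && base < NBITS; base += grp)
		for (unsigned i = 0; i < used && base + i <= b; i++)
			bm[base + i] = 1;
	return 0;
}

/* The "count" key is the hamming weight of the bitmap. */
static unsigned hweight(void)
{
	unsigned n = 0;

	for (unsigned i = 0; i < NBITS; i++)
		n += bm[i];
	return n;
}
```

For "0-1023:2/256" this sets bits 0,1,256,257,512,513,768,769, matching the example in the documentation, with count=8.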

The description sounds reasonable to me and maps well to the CU masking 
feature in our GPUs.

It would also allow us to do more coarse-grained masking for example to 
guarantee balanced allocation of CUs across shader engines or 
partitioning of memory bandwidth or CP pipes (if that is supported by 
the hardware/firmware).

I can't comment on the code as I'm unfamiliar with the details of the 
cgroup code.

Acked-by: Felix Kuehling 


> ---
>   Documentation/admin-guide/cgroup-v2.rst |  46 
>   include/drm/drm_cgroup.h|   4 +
>   include/linux/cgroup_drm.h  |   6 ++
>   kernel/cgroup/drm.c | 135 
>   4 files changed, 191 insertions(+)
>
> diff --git a/Documentation/admin-guide/cgroup-v2.rst 
> b/Documentation/admin-guide/cgroup-v2.rst
> index 87a195133eaa..57f18469bd76 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1958,6 +1958,52 @@ DRM Interface Files
>   Set largest allocation for /dev/dri/card1 to 4MB
>   echo "226:1 4m" > drm.buffer.peak.max
>   
> +  drm.lgpu
> + A read-write nested-keyed file which exists on all cgroups.
> + Each entry is keyed by the DRM device's major:minor.
> +
> + lgpu stands for logical GPU, it is an abstraction used to
> + subdivide a physical DRM device for the purpose of resource
> + management.
> +
> + The lgpu is a discrete quantity that is device specific (i.e.
> + some DRM devices may have 64 lgpus while others may have 100
> + lgpus.)  The lgpu is a single quantity with two representations
> + denoted by the following nested keys.
> +
> +     =====  =======================================
> +     count  Representing lgpu as anonymous resource
> +     list   Representing lgpu as named resource
> +     =====  =======================================
> +
> + For example:
> + 226:0 count=256 list=0-255
> + 226:1 count=4 list=0,2,4,6
> + 226:2 count=32 list=32-63
> +
> + lgpu is represented by a bitmap and uses the bitmap_parselist
> + kernel function so the list key input format is a
> + comma-separated list of decimal numbers and ranges.
> +
> + Consecutively set bits are shown as two hyphen-separated decimal
> + numbers, the smallest and largest bit numbers set in the range.
> + Optionally each range can be postfixed to denote that only parts
> + of it should be set.  The range will be divided into groups of a
> + specific size.
> + Syntax: range:used_size/group_size
> + Example: 0-1023:2/256 ==> 0,1,256,257,512,513,768,769
> +
> + The count key is the hamming weight / hweight of the bitmap.
> +
> + Both count and list accept the max and default keywords.
> +
> + Some DRM devices may only 

Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c to hdmi

2019-10-08 Thread Julien Isorce
Hi Harry,

I can reproduce on LG, Samsung and NEC monitors.

"Have you checked whether the driver picks RGB or YCBCR420 without your
patch?" -> it was selecting RGB.

For example, on https://commons.wikimedia.org/wiki/File:Gray_scale.jpg, the
second band from the left will be entirely pinkish.
The issue also happens without a dongle, i.e. with a direct cable from the
miniDP on the graphics card to the DisplayPort input on the screen, so I
think there is a more serious issue with RGB output in amdgpu. But it is not
easy to reproduce; you should try with the image above.

In any case, the goal of the patch is just to get the same output when
using 2 screens at the same time, one connected to the HDMI output of the
graphics card and one connected to its USB-C output (HDMI cable with
dongle). So prior to this patch, the first one would use YCbCr 444 and the
second would use RGB. After this patch, both will use YCbCr 444 (both are
HDMI). The patch does not change the miniDP to DisplayPort case; the driver
will still use RGB, since the RGB issue may also be specific to that
graphics card, which is VEGA"M". That is why the patch only tries to match
the HDMI cases together, whether the connection is direct or through USB-C.
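
Condensed into a self-contained sketch, the encoding choice after the patch looks like this (enum values and the helper name are illustrative, not the driver's; the logic mirrors the diff in `fill_stream_properties_from_drm_display_mode`):

```c
#include <assert.h>
#include <stdbool.h>

enum pixel_encoding { ENC_RGB, ENC_YCBCR444 };
enum signal_type    { SIG_HDMI, SIG_DP };
enum dongle_type    { DONGLE_NONE, DONGLE_DP_HDMI };

static enum pixel_encoding pick_encoding(bool sink_has_ycbcr444,
					 enum signal_type sig,
					 enum dongle_type dongle)
{
	/* Direct HDMI: behavior unchanged from before the patch. */
	if (sink_has_ycbcr444 && sig == SIG_HDMI)
		return ENC_YCBCR444;
	/* New case: DP signal reaching an HDMI sink via a converter dongle. */
	if (sink_has_ycbcr444 && sig == SIG_DP && dongle == DONGLE_DP_HDMI)
		return ENC_YCBCR444;
	/* miniDP-to-DisplayPort and everything else stays RGB. */
	return ENC_RGB;
}
```

This makes explicit that only the dongle-to-HDMI path changes; a plain DP sink still gets RGB.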

-
Julien



On Tue, Oct 8, 2019 at 10:44 AM Harry Wentland  wrote:

> Hi Julien,
>
> curious which monitor you're using.
>
> Have you checked whether the driver picks RGB or YCBCR420 without your
> patch?
>
> I'm not sure I understand how the pinkish color issue looks. Do you see
> a pinkish color at the transition from grey to another color? Or is the
> entire grey area pinkish?
>
> Thanks,
> Harry
>
> On 2019-10-08 12:06 p.m., Julien Isorce wrote:
> > Hi,
> >
> > Gentle ping ?
> >
> > Thx
> > Julien
> >
> > On Tue, Oct 1, 2019 at 3:21 PM Julien Isorce  > > wrote:
> >
> > Fix pinkish color issue around grey areas. This also happens
> > when not using any dongle so directly with a usb-c to Display
> > Port cable. Meaning there is something wrong when using pixel
> > encoding RGB with amd driver in the general case. In the meantime
> > just use the same pixel encoding as when using HDMI without dongle.
> > This way users will see the same thing on 2 identical screens when
> > one is connected with hdmi-to-hdmi and the other is connected with
> > usb-c-to-hdmi.
> >
> > Signed-off-by: Julien Isorce  > >
> > ---
> >  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index d3f404f097eb..8139dcc0bfba 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -3313,6 +3313,7 @@ static void
> > fill_stream_properties_from_drm_display_mode(
> >  {
> > struct dc_crtc_timing *timing_out = &stream->timing;
> > const struct drm_display_info *info = &connector->display_info;
> > +   const struct dc_link *link = stream->sink->link;
> >
> > memset(timing_out, 0, sizeof(struct dc_crtc_timing));
> >
> > @@ -3327,6 +3328,10 @@ static void
> > fill_stream_properties_from_drm_display_mode(
> > else if ((connector->display_info.color_formats &
> > DRM_COLOR_FORMAT_YCRCB444)
> > && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
> > timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
> > +   else if ((connector->display_info.color_formats &
> > DRM_COLOR_FORMAT_YCRCB444)
> > +   && stream->sink->sink_signal ==
> > SIGNAL_TYPE_DISPLAY_PORT
> > +   && link->dpcd_caps.dongle_type ==
> > DISPLAY_DONGLE_DP_HDMI_CONVERTER)
> > +   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
> > else
> > timing_out->pixel_encoding = PIXEL_ENCODING_RGB;
> >
> > --
> > 2.17.1
> >
> >
> >
>

Re: [PATCH] drm/amdgpu/ras: fix typos in documentation

2019-10-08 Thread Grodzovsky, Andrey
Reviewed-by: Andrey Grodzovsky 

Andrey

On 10/8/19 2:05 PM, Alex Deucher wrote:
> Fix a couple of spelling typos.
>
> Signed-off-by: Alex Deucher 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index 0e2ee5869b5f..c0d3edf77901 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -228,13 +228,13 @@ static struct ras_manager *amdgpu_ras_find_obj(struct 
> amdgpu_device *adev,
>*
>* .. code-block:: bash
>*
> - *   echo op block [error [sub_blcok address value]] > .../ras/ras_ctrl
> + *   echo op block [error [sub_block address value]] > .../ras/ras_ctrl
>*
>* op: disable, enable, inject
>*  disable: only block is needed
>*  enable: block and error are needed
>*  inject: error, address, value are needed
> - * block: umc, smda, gfx, .
> + * block: umc, sdma, gfx, .
>*  see ras_block_string[] for details
>* error: ue, ce
>*  ue: multi_uncorrectable
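
The ras_ctrl interface documented above can be exercised as below. This is a dry run that only prints the commands; the debugfs path is an assumption (typically /sys/kernel/debug/dri/<card>/ras/ras_ctrl) and nothing is actually written:

```shell
# Dry run of the three documented ras_ctrl operations (disable/enable/inject).
RAS_CTRL=/sys/kernel/debug/dri/0/ras/ras_ctrl
for cmd in "disable umc" "enable umc ue" "inject umc ue 0x0 0x0"; do
    printf 'would run: echo "%s" > %s\n' "$cmd" "$RAS_CTRL"
done
```

Dropping the dry run means redirecting the echo into $RAS_CTRL as root; the inject address/value arguments here are placeholders.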

Re: [PATCH] drm/amdgpu/ras: document the reboot ras option

2019-10-08 Thread Grodzovsky, Andrey
Reviewed-by: Andrey Grodzovsky 

Andrey

On 10/8/19 2:09 PM, Alex Deucher wrote:
> We recently added it, but never documented it.
>
> Signed-off-by: Alex Deucher 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index c0d3edf77901..84d8c3342a81 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -215,11 +215,12 @@ static struct ras_manager *amdgpu_ras_find_obj(struct 
> amdgpu_device *adev,
>* value to the address.
>*
>* Second member: struct ras_debug_if::op.
> - * It has three kinds of operations.
> + * It has four kinds of operations.
>*
>* - 0: disable RAS on the block. Take ::head as its data.
>* - 1: enable RAS on the block. Take ::head as its data.
>* - 2: inject errors on the block. Take ::inject as its data.
> + * - 3: reboot on unrecoverable error
>*
>* How to use the interface?
>* programs:

[PATCH] drm/amdgpu/ras: document the reboot ras option

2019-10-08 Thread Alex Deucher
We recently added it, but never documented it.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index c0d3edf77901..84d8c3342a81 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -215,11 +215,12 @@ static struct ras_manager *amdgpu_ras_find_obj(struct 
amdgpu_device *adev,
  * value to the address.
  *
  * Second member: struct ras_debug_if::op.
- * It has three kinds of operations.
+ * It has four kinds of operations.
  *
  * - 0: disable RAS on the block. Take ::head as its data.
  * - 1: enable RAS on the block. Take ::head as its data.
  * - 2: inject errors on the block. Take ::inject as its data.
+ * - 3: reboot on unrecoverable error
  *
  * How to use the interface?
  * programs:
-- 
2.20.1


[PATCH] drm/amdgpu/ras: fix typos in documentation

2019-10-08 Thread Alex Deucher
Fix a couple of spelling typos.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 0e2ee5869b5f..c0d3edf77901 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -228,13 +228,13 @@ static struct ras_manager *amdgpu_ras_find_obj(struct 
amdgpu_device *adev,
  *
  * .. code-block:: bash
  *
- * echo op block [error [sub_blcok address value]] > .../ras/ras_ctrl
+ * echo op block [error [sub_block address value]] > .../ras/ras_ctrl
  *
  * op: disable, enable, inject
  * disable: only block is needed
  * enable: block and error are needed
  * inject: error, address, value are needed
- * block: umc, smda, gfx, .
+ * block: umc, sdma, gfx, .
  * see ras_block_string[] for details
  * error: ue, ce
  * ue: multi_uncorrectable
-- 
2.20.1


Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c to hdmi

2019-10-08 Thread Harry Wentland
Thanks for bumping.

I replied on the mailing list.

On 2019-10-08 1:28 p.m., Deucher, Alexander wrote:
+ some display folks.

From: amd-gfx on behalf of Julien Isorce
Sent: Tuesday, October 8, 2019 12:06 PM
To: amd-gfx list

Subject: Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c 
to hdmi

Hi,

Gentle ping ?

Thx
Julien

On Tue, Oct 1, 2019 at 3:21 PM Julien Isorce 
mailto:julien.iso...@gmail.com>> wrote:
Fix pinkish color issue around grey areas. This also happens
when not using any dongle so directly with a usb-c to Display
Port cable. Meaning there is something wrong when using pixel
encoding RGB with amd driver in the general case. In the meantime
just use the same pixel encoding as when using HDMI without dongle.
This way users will see the same thing on 2 identical screens when
one is connected with hdmi-to-hdmi and the other is connected with
usb-c-to-hdmi.

Signed-off-by: Julien Isorce mailto:jiso...@oblong.com>>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d3f404f097eb..8139dcc0bfba 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3313,6 +3313,7 @@ static void fill_stream_properties_from_drm_display_mode(
 {
struct dc_crtc_timing *timing_out = &stream->timing;
const struct drm_display_info *info = &connector->display_info;
+   const struct dc_link *link = stream->sink->link;

memset(timing_out, 0, sizeof(struct dc_crtc_timing));

@@ -3327,6 +3328,10 @@ static void fill_stream_properties_from_drm_display_mode(
else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
&& stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
+   else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
+   && stream->sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT
+   && link->dpcd_caps.dongle_type == 
DISPLAY_DONGLE_DP_HDMI_CONVERTER)
+   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
else
timing_out->pixel_encoding = PIXEL_ENCODING_RGB;

--
2.17.1



Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c to hdmi

2019-10-08 Thread Harry Wentland
Hi Julien,

curious which monitor you're using.

Have you checked whether the driver picks RGB or YCBCR420 without your
patch?

I'm not sure I understand how the pinkish color issue looks. Do you see
a pinkish color at the transition from grey to another color? Or is the
entire grey area pinkish?

Thanks,
Harry

On 2019-10-08 12:06 p.m., Julien Isorce wrote:
> Hi,
> 
> Gentle ping ?
> 
> Thx
> Julien
> 
> On Tue, Oct 1, 2019 at 3:21 PM Julien Isorce  > wrote:
> 
> Fix pinkish color issue around grey areas. This also happens
> when not using any dongle so directly with a usb-c to Display
> Port cable. Meaning there is something wrong when using pixel
> encoding RGB with amd driver in the general case. In the meantime
> just use the same pixel encoding as when using HDMI without dongle.
> This way users will see the same thing on 2 identical screens when
> one is connected with hdmi-to-hdmi and the other is connected with
> usb-c-to-hdmi.
> 
> Signed-off-by: Julien Isorce  >
> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index d3f404f097eb..8139dcc0bfba 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3313,6 +3313,7 @@ static void
> fill_stream_properties_from_drm_display_mode(
>  {
>         struct dc_crtc_timing *timing_out = &stream->timing;
>         const struct drm_display_info *info = &connector->display_info;
> +       const struct dc_link *link = stream->sink->link;
> 
>         memset(timing_out, 0, sizeof(struct dc_crtc_timing));
> 
> @@ -3327,6 +3328,10 @@ static void
> fill_stream_properties_from_drm_display_mode(
>         else if ((connector->display_info.color_formats &
> DRM_COLOR_FORMAT_YCRCB444)
>                         && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
>                 timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
> +       else if ((connector->display_info.color_formats &
> DRM_COLOR_FORMAT_YCRCB444)
> +                       && stream->sink->sink_signal ==
> SIGNAL_TYPE_DISPLAY_PORT
> +                       && link->dpcd_caps.dongle_type ==
> DISPLAY_DONGLE_DP_HDMI_CONVERTER)
> +               timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
>         else
>                 timing_out->pixel_encoding = PIXEL_ENCODING_RGB;
> 
> -- 
> 2.17.1
> 
> 
> 

Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c to hdmi

2019-10-08 Thread Deucher, Alexander
+ some display folks.

From: amd-gfx  on behalf of Julien 
Isorce 
Sent: Tuesday, October 8, 2019 12:06 PM
To: amd-gfx list 
Subject: Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c 
to hdmi

Hi,

Gentle ping ?

Thx
Julien

On Tue, Oct 1, 2019 at 3:21 PM Julien Isorce 
mailto:julien.iso...@gmail.com>> wrote:
Fix pinkish color issue around grey areas. This also happens
when not using any dongle so directly with a usb-c to Display
Port cable. Meaning there is something wrong when using pixel
encoding RGB with amd driver in the general case. In the meantime
just use the same pixel encoding as when using HDMI without dongle.
This way users will see the same thing on 2 identical screens when
one is connected with hdmi-to-hdmi and the other is connected with
usb-c-to-hdmi.

Signed-off-by: Julien Isorce mailto:jiso...@oblong.com>>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d3f404f097eb..8139dcc0bfba 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3313,6 +3313,7 @@ static void fill_stream_properties_from_drm_display_mode(
 {
struct dc_crtc_timing *timing_out = &stream->timing;
const struct drm_display_info *info = &connector->display_info;
+   const struct dc_link *link = stream->sink->link;

memset(timing_out, 0, sizeof(struct dc_crtc_timing));

@@ -3327,6 +3328,10 @@ static void fill_stream_properties_from_drm_display_mode(
else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
&& stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
+   else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
+   && stream->sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT
+   && link->dpcd_caps.dongle_type == 
DISPLAY_DONGLE_DP_HDMI_CONVERTER)
+   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
else
timing_out->pixel_encoding = PIXEL_ENCODING_RGB;

--
2.17.1


Re: [PATCH 13/14] drm/amd/display: Recalculate VCPI slots for new DSC connectors

2019-10-08 Thread Lyude Paul
On Tue, 2019-10-08 at 12:24 -0400, Lyude Paul wrote:
> ...
> yikes
> I need to apologize because I was going through my email and I realized you
> _did_ respond to me earlier regarding some of these questions, it just
> appears
> the reply fell through the cracks and somehow I didn't notice it until just
> now. Sorry about that!

Referring to this response, jfyi:
https://lists.freedesktop.org/archives/amd-gfx/2019-October/040683.html

(sorry again for replying before reading that!)
> 
> I'm still not sure I understand why this is a future enhancement? Everything
> we need to implement these helpers should already be here, it's just a
> matter
> of keeping track of who has DSC enabled where in the MST atomic state
> 
> On Mon, 2019-10-07 at 17:52 -0400, Lyude Paul wrote:
> > Ok, let's stop and slow down for a minute here since I've repeated myself
> > a
> > few times, and I'd like to figure out what's actually happening here and
> > whether I'm not being clear enough with my explanations, or if there's
> > just
> > some misunderstanding here.
> > 
> > I'll start off by trying my best to explain my hesitance in accepting
> > these
> > patches. To be clear, MST with DSC is a big thing in terms of impact.
> > There's
> > a lot of things to consider:
> >  * What should the default DSC policy for MST be? Should we keep it on by
> >default so that we don't need to trigger a large number of PBN
> >recalculations and enable DSC on a per-case basis, or are there
> > legitimate
> >reasons to keep DSC off by default (such as potential issues with
> > lossiness
> >down the line).
> >  * We have other drivers in the tree that are planning on implementing MST
> > DSC
> >quite soon. Chances are they'll be implementing helpers to do this so
> > this
> >can be supported across the DRM tree, and chances are we'll just have
> > to
> >rewrite and remove most of this code in that case once they do.
> >  * amdgpu already has a _lot_ of code that doesn't need to, and shouldn't
> > be
> >there. This ranges from various DC specific helpers that are
> > essentially
> >the same as the helpers that we already implement in the rest of DRM,
> > to
> >misuses of various kernel APIs, and a complete lack of any kind of code
> >style (at least the last time that I checked) in or out of DC. This
> > makes
> >the driver more difficult for people who don't work at AMD but are well
> >accustomed to working on the rest of the kernel, e.g. myself and
> > others,
> >and it's a problem that seriously needs to be solved. Adding more
> > redundant
> >DP helpers that will inevitably get removed is just making the problem
> >worse.
> > 
> > With all of this being said: I've been asking the same questions about
> > this
> > patch throughout the whole patch series so far (since it got passed off to
> > you
> > from David's internship ending):
> > 
> >  * Can we keep DSC enabled by default, so that these patches aren't
> > needed?
> >  * If we can't, can we move this into common code so that other drivers
> > can
> >use it?
> > The problem is I haven't received any responses to these questions, other
> > than
> > being told by David a month or two ago that it would be expedient to just
> > accept the patches and move on. Honestly, if discussion had been had about
> > the
> > changes I requested, I would have given my r-b a long time ago. In fact-I
> > really want to give it now! There's multiple patches in this series that
> > would
> > be very useful for other things that are being worked on at the moment,
> > such
> > as the changes to drm_dp_dpcd_read/drm_dp_dpcd_write(). But I really think
> > this needs to be discussed first, and I'm worried that just going ahead
> > and
> > giving my R-B before they have been discussed will further encourage
> > rushing
> > changes like this in the future and continually building more tech debt
> > for
> > ourselves.
> > 
> > Please note as well: if anything I've asked for is confusing, or you don't
> > understand what I'm asking or looking for, I am more than willing to help
> > explain things and help out as best as I can. I understand that a lot of
> > the
> > developers working on DRM at AMD may have more experience in Windows and
> > Mac
> > land and as a result, trying to get used to the way that we do things in
> > the
> > Linux kernel can be a confusing endeavor. I'm more than happy to help out
> > with
> > this wherever I can, all you need to do is ask. Asking a few questions
> > about
> > something you aren't sure you understand can save both of us a lot of
> > time,
> > and avoid having to go through this many patch respins.
> > 
> > In the mean time, I'd be willing to look at what patches from this series
> > that
> > have already been reviewed which could be pushed to drm-misc or friends in
> > the
> > mean time to speed things up a bit.
> > 
> > On Tue, 2019-10-01 at 12:17 -0400, mikita.lip...@amd.com wrote:
> > > From: Mikita Lipski 
> > 

Re: [PATCH 13/14] drm/amd/display: Recalculate VCPI slots for new DSC connectors

2019-10-08 Thread Lyude Paul
...
yikes
I need to apologize because I was going through my email and I realized you
_did_ respond to me earlier regarding some of these questions, it just appears
the reply fell through the cracks and somehow I didn't notice it until just
now. Sorry about that!

I'm still not sure I understand why this is a future enhancement? Everything
we need to implement these helpers should already be here, it's just a matter
of keeping track of who has DSC enabled where in the MST atomic state

On Mon, 2019-10-07 at 17:52 -0400, Lyude Paul wrote:
> Ok, let's stop and slow down for a minute here since I've repeated myself a
> few times, and I'd like to figure out what's actually happening here and
> whether I'm not being clear enough with my explanations, or if there's just
> some misunderstanding here.
> 
> I'll start off by trying my best to explain my hesitance in accepting these
> patches. To be clear, MST with DSC is a big thing in terms of impact.
> There's
> a lot of things to consider:
>  * What should the default DSC policy for MST be? Should we keep it on by
>default so that we don't need to trigger a large number of PBN
>recalculations and enable DSC on a per-case basis, or are there
> legitimate
>reasons to keep DSC off by default (such as potential issues with
> lossiness
>down the line).
>  * We have other drivers in the tree that are planning on implementing MST
> DSC
>quite soon. Chances are they'll be implementing helpers to do this so
> this
>can be supported across the DRM tree, and chances are we'll just have to
>rewrite and remove most of this code in that case once they do.
>  * amdgpu already has a _lot_ of code that doesn't need to, and shouldn't be
>there. This ranges from various DC specific helpers that are essentially
>the same as the helpers that we already implement in the rest of DRM, to
>misuses of various kernel APIs, and a complete lack of any kind of code
>style (at least the last time that I checked) in or out of DC. This makes
>the driver more difficult for people who don't work at AMD but are well
>accustomed to working on the rest of the kernel, e.g. myself and others,
>and it's a problem that seriously needs to be solved. Adding more
> redundant
>DP helpers that will inevitably get removed is just making the problem
>worse.
> 
> With all of this being said: I've been asking the same questions about this
> patch throughout the whole patch series so far (since it got passed off to
> you
> from David's internship ending):
> 
>  * Can we keep DSC enabled by default, so that these patches aren't needed?
>  * If we can't, can we move this into common code so that other drivers can
>use it?
> The problem is I haven't received any responses to these questions, other
> than
> being told by David a month or two ago that it would be expedient to just
> accept the patches and move on. Honestly, if discussion had been had about
> the
> changes I requested, I would have given my r-b a long time ago. In fact-I
> really want to give it now! There's multiple patches in this series that
> would
> be very useful for other things that are being worked on at the moment, such
> as the changes to drm_dp_dpcd_read/drm_dp_dpcd_write(). But I really think
> this needs to be discussed first, and I'm worried that just going ahead and
> giving my R-B before they have been discussed will further encourage rushing
> changes like this in the future and continually building more tech debt for
> ourselves.
> 
> Please note as well: if anything I've asked for is confusing, or you don't
> understand what I'm asking or looking for, I am more than willing to help
> explain things as best as I can. I understand that a lot of the developers
> working on DRM at AMD may have more experience in Windows and Mac land,
> and as a result getting used to the way we do things in the Linux kernel
> can be a confusing endeavor. I'm more than happy to help out with this
> wherever I can; all you need to do is ask. Asking a few questions about
> something you aren't sure you understand can save both of us a lot of
> time, and avoid having to go through this many patch respins.
> 
> In the meantime, I'd be willing to look at which patches from this series
> have already been reviewed and could be pushed to drm-misc or friends to
> speed things up a bit.
> 
> On Tue, 2019-10-01 at 12:17 -0400, mikita.lip...@amd.com wrote:
> > From: Mikita Lipski 
> > 
> > Since a DSC MST connector's PBN is calculated differently
> > due to compression, we have to recalculate both PBN and
> > VCPI slots for that connector.
> > 
> > This patch proposes to use similar logic as in
> > dm_encoder_helper_atomic_check, but since we do not know which
> > connectors will have DSC enabled we have to recalculate PBN only
> > after that's determined, which is done in
> > compute_mst_dsc_configs_for_state.
> > 
> > Cc: Jerry Zuo 
> > Cc: 
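For context on why compression changes the numbers: DP MST bandwidth is accounted in PBN, which the kernel helper drm_dp_calc_pbn_mode() derives from pixel clock and bpp using a 64/54 unit conversion plus a 1.006 spread-spectrum margin. A minimal sketch of that arithmetic (the mode and the 8 bpp DSC target below are illustrative assumptions, not values from the patch):

```python
# Sketch of the PBN arithmetic behind the recalculation discussed above.
# Mirrors the DIV_ROUND_UP fixed-point math of drm_dp_calc_pbn_mode();
# the mode and DSC bpp below are illustrative, not from the patch.
def calc_pbn(clock_khz: int, bpp: int) -> int:
    num = clock_khz * bpp * 64 * 1006   # 64/54 unit conversion, 1.006 margin
    den = 8 * 54 * 1000 * 1000
    return -(-num // den)               # ceiling division (DIV_ROUND_UP)

uncompressed = calc_pbn(533250, 24)     # ~4K@60 stream at 24 bpp
compressed = calc_pbn(533250, 8)        # same mode with DSC at 8 bpp
print(uncompressed, compressed)         # prints "1908 636"
```

With DSC the same mode needs roughly a third of the PBN, which is why both PBN and VCPI slot counts have to be recomputed once it is known which connectors actually enable compression.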

Re: [PATCH] drm/amd/display: Use pixel encoding 444 for dongle usb-c to hdmi

2019-10-08 Thread Julien Isorce
Hi,

Gentle ping?

Thx
Julien

On Tue, Oct 1, 2019 at 3:21 PM Julien Isorce 
wrote:

> Fix a pinkish color issue around grey areas. This also happens
> when not using any dongle, i.e. directly with a USB-C to DisplayPort
> cable, meaning there is something wrong when using pixel encoding
> RGB with the amd driver in the general case. In the meantime,
> just use the same pixel encoding as when using HDMI without a dongle.
> This way users will see the same thing on two identical screens when
> one is connected with hdmi-to-hdmi and the other is connected with
> usb-c-to-hdmi.
>
> Signed-off-by: Julien Isorce 
> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index d3f404f097eb..8139dcc0bfba 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3313,6 +3313,7 @@ static void
> fill_stream_properties_from_drm_display_mode(
>  {
> struct dc_crtc_timing *timing_out = &stream->timing;
> const struct drm_display_info *info = &connector->display_info;
> +   const struct dc_link *link = stream->sink->link;
>
> memset(timing_out, 0, sizeof(struct dc_crtc_timing));
>
> @@ -3327,6 +3328,10 @@ static void
> fill_stream_properties_from_drm_display_mode(
> else if ((connector->display_info.color_formats &
> DRM_COLOR_FORMAT_YCRCB444)
> && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
> timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
> +   else if ((connector->display_info.color_formats &
> DRM_COLOR_FORMAT_YCRCB444)
> +   && stream->sink->sink_signal ==
> SIGNAL_TYPE_DISPLAY_PORT
> +   && link->dpcd_caps.dongle_type ==
> DISPLAY_DONGLE_DP_HDMI_CONVERTER)
> +   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
> else
> timing_out->pixel_encoding = PIXEL_ENCODING_RGB;
>
> --
> 2.17.1
>
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amdgpu/display: make various arrays static, makes object smaller

2019-10-08 Thread Harry Wentland
On 2019-10-08 10:00 a.m., Joe Perches wrote:
> On Tue, 2019-10-08 at 13:56 +, Harry Wentland wrote:
> []
>>> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c 
>>> b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
>> []
>>> @@ -2745,7 +2745,7 @@ static enum bp_result bios_get_board_layout_info(
>>> struct bios_parser *bp;
>>> enum bp_result record_result;
>>>  
>>> -   const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
>>> +   static const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
>>
>> Won't this break the multi-GPU case where you'll have multiple driver
>> instances with different layout?
> 
> As the array is read-only, how could that happen?
> 

You're right.

Patch is
Reviewed-by: Harry Wentland 

Harry

> 


Re: [PATCH] drm/amdgpu/display: make various arrays static, makes object smaller

2019-10-08 Thread Joe Perches
On Tue, 2019-10-08 at 13:56 +, Harry Wentland wrote:
[]
> > diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c 
> > b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> []
> > @@ -2745,7 +2745,7 @@ static enum bp_result bios_get_board_layout_info(
> > struct bios_parser *bp;
> > enum bp_result record_result;
> >  
> > -   const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
> > +   static const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
> 
> Won't this break the multi-GPU case where you'll have multiple driver
> instances with different layout?

As the array is read-only, how could that happen?




Re: [PATCH] drm/amdgpu/display: make various arrays static, makes object smaller

2019-10-08 Thread Harry Wentland
On 2019-10-07 5:58 p.m., Colin King wrote:
> From: Colin Ian King 
> 
> Don't populate the arrays on the stack but instead make them
> static. Makes the object code smaller by 158 bytes.
> 
> Before:
>    text    data  bss    dec   hex filename
>   32468    2072    0  34540  86ec display/dc/bios/bios_parser.o
>   22198    1088    0  23286  5af6 display/dc/bios/bios_parser2.o
>   22278    1076    0  23354  5b3a display/dc/dce/dce_mem_input.o
> 
> 81180
> After:
>    text    data  bss    dec   hex filename
>   32341    2136    0  34477  86ad display/dc/bios/bios_parser.o
>   22070    1184    0  23254  5ad6 display/dc/bios/bios_parser2.o
>   22119    1172    0  23291  5afb display/dc/dce/dce_mem_input.o
> 
> (gcc version 9.2.1, amd64)
> 
> Signed-off-by: Colin Ian King 
> ---
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser.c  | 2 +-
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 2 +-
>  drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c 
> b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> index 221e0f56389f..65ab225cf542 100644
> --- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> @@ -2745,7 +2745,7 @@ static enum bp_result bios_get_board_layout_info(
>   struct bios_parser *bp;
>   enum bp_result record_result;
>  
> - const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
> + static const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {

Won't this break the multi-GPU case where you'll have multiple driver
instances with different layout?

Harry

>   GENERICOBJECT_BRACKET_LAYOUT_ENUM_ID1,
>   GENERICOBJECT_BRACKET_LAYOUT_ENUM_ID2,
>   0, 0
> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c 
> b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> index dff65c0fe82f..809c4a89b899 100644
> --- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> @@ -1832,7 +1832,7 @@ static enum bp_result bios_get_board_layout_info(
>   struct bios_parser *bp;
>   enum bp_result record_result;
>  
> - const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
> + static const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
>   GENERICOBJECT_BRACKET_LAYOUT_ENUM_ID1,
>   GENERICOBJECT_BRACKET_LAYOUT_ENUM_ID2,
>   0, 0
> diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c 
> b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
> index 8aa937f496c4..ed0031d5e021 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
> @@ -395,7 +395,7 @@ static void program_size_and_rotation(
>  {
>   const struct rect *in_rect = &plane_size->surface_size;
>   struct rect hw_rect = plane_size->surface_size;
> - const uint32_t rotation_angles[ROTATION_ANGLE_COUNT] = {
> + static const uint32_t rotation_angles[ROTATION_ANGLE_COUNT] = {
>   [ROTATION_ANGLE_0] = 0,
>   [ROTATION_ANGLE_90] = 1,
>   [ROTATION_ANGLE_180] = 2,
> 

Re: [PATCH v2 03/27] drm/dp_mst: Destroy MSTBs asynchronously

2019-10-08 Thread Daniel Vetter
On Fri, Sep 27, 2019 at 09:31:07AM -0400, Sean Paul wrote:
> On Wed, Sep 25, 2019 at 04:08:22PM -0400, Lyude Paul wrote:
> > On Wed, 2019-09-25 at 14:16 -0400, Sean Paul wrote:
> > > On Tue, Sep 03, 2019 at 04:45:41PM -0400, Lyude Paul wrote:
> > > > When reprobing an MST topology during resume, we have to account for the
> > > > fact that while we were suspended it's possible that mstbs may have been
> > > > removed from any ports in the topology. Since iterating downwards in the
> > > > topology requires that we hold &mgr->lock, destroying MSTBs from this
> > > > context would result in attempting to lock &mgr->lock a second time and
> > > > deadlocking.
> > > > 
> > > > So, fix this by first moving destruction of MSTBs into
> > > > destroy_connector_work, then rename destroy_connector_work and friends
> > > > to reflect that they now destroy both ports and mstbs.
> > > > 
> > > > Changes since v1:
> > > > * s/destroy_connector_list/destroy_port_list/
> > > >   s/connector_destroy_lock/delayed_destroy_lock/
> > > >   s/connector_destroy_work/delayed_destroy_work/
> > > >   s/drm_dp_finish_destroy_branch_device/drm_dp_delayed_destroy_mstb/
> > > >   s/drm_dp_finish_destroy_port/drm_dp_delayed_destroy_port/
> > > >   - danvet
> > > > * Use two loops in drm_dp_delayed_destroy_work() - danvet
> > > > * Better explain why we need to do this - danvet
> > > > * Use cancel_work_sync() instead of flush_work() - flush_work() doesn't
> > > >   account for work requeing
> > > > 
> > > > Cc: Juston Li 
> > > > Cc: Imre Deak 
> > > > Cc: Ville Syrjälä 
> > > > Cc: Harry Wentland 
> > > > Cc: Daniel Vetter 
> > > > Signed-off-by: Lyude Paul 
> > > 
> > > Took me a while to grok this, and I'm still not 100% confident my mental
> > > model
> > > is correct, so please bear with me while I ask silly questions :)
> > > 
> > > Now that the destroy is delayed, and the port remains in the topology, is 
> > > it
> > > possible we will underflow the topology kref by calling put_mstb multiple
> > > times?
> > > It looks like that would result in a WARN from refcount.c, and wouldn't 
> > > call
> > > the
> > > destroy function multiple times, so that's nice :)
> > > 
> > > Similarly, is there any defense against calling get_mstb() between 
> > > destroy()
> > > and
> > > the delayed destroy worker running?
> > > 
> > Good question! There's only one instance where we unconditionally grab an 
> > MSTB
> > ref, drm_dp_mst_topology_mgr_set_mst(), and in that location we're 
> > guaranteed
> > to be the only one with access to that mstb until we drop &mgr->lock.
> > Everywhere else we use drm_dp_mst_topology_try_get_mstb(), which uses
> > kref_get_unless_zero() to protect against that kind of situation (and forces
> > the caller to check with __must_check)

Imo would be good to add this to the commit message when merging as a Q,
just for the record. At least I like to do that with the
silly-but-no-so-silly questions that come up in review.
-Daniel

> 
> Awesome, thanks for the breakdown!
> 
> 
> Reviewed-by: Sean Paul 
> 
> 
> > 
> > > Sean
> > > 
> > > > ---
> > > >  drivers/gpu/drm/drm_dp_mst_topology.c | 162 +-
> > > >  include/drm/drm_dp_mst_helper.h   |  26 +++--
> > > >  2 files changed, 127 insertions(+), 61 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > index 3054ec622506..738f260d4b15 100644
> > > > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > @@ -1113,34 +1113,17 @@ static void
> > > > drm_dp_destroy_mst_branch_device(struct kref *kref)
> > > > struct drm_dp_mst_branch *mstb =
> > > > container_of(kref, struct drm_dp_mst_branch, 
> > > > topology_kref);
> > > > struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
> > > > -   struct drm_dp_mst_port *port, *tmp;
> > > > -   bool wake_tx = false;
> > > >  
> > > > -   mutex_lock(&mgr->lock);
> > > > -   list_for_each_entry_safe(port, tmp, &mstb->ports, next) {
> > > > -   list_del(&port->next);
> > > > -   drm_dp_mst_topology_put_port(port);
> > > > -   }
> > > > -   mutex_unlock(&mgr->lock);
> > > > -
> > > > -   /* drop any tx slots msg */
> > > > -   mutex_lock(&mstb->mgr->qlock);
> > > > -   if (mstb->tx_slots[0]) {
> > > > -   mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> > > > -   mstb->tx_slots[0] = NULL;
> > > > -   wake_tx = true;
> > > > -   }
> > > > -   if (mstb->tx_slots[1]) {
> > > > -   mstb->tx_slots[1]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> > > > -   mstb->tx_slots[1] = NULL;
> > > > -   wake_tx = true;
> > > > -   }
> > > > -   mutex_unlock(&mstb->mgr->qlock);
> > > > +   INIT_LIST_HEAD(&mstb->destroy_next);
> > > >  
> > > > -   if (wake_tx)
> > > > -   wake_up_all(&mstb->mgr->tx_waitq);
> > > > -
> > > > -   

Re: [PATCH 2/2] drm/amdgpu: Enable gfx cache probing on HDP write for arcturus

2019-10-08 Thread Christian König

On 07.10.19 at 22:34, Zeng, Oak wrote:

This allows the gfx cache to be probed and invalidated (for non-dirty cache lines)
on an HDP write (from either another GPU or the CPU). This should work only for the
memory mapped as the RW memory type newly added for Arcturus, to achieve some cache
coherence between multiple memory clients.

Change-Id: I5c9a6a25d88cd75c71c88822123e0d4c067aa3f8
Signed-off-by: Oak Zeng 


Acked-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index c7e07f1..6e45ebb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -1192,6 +1192,9 @@ static int gmc_v9_0_hw_init(void *handle)
/* TODO for renoir */
mmhub_v1_0_update_power_gating(adev, true);
break;
+   case CHIP_ARCTURUS:
+   WREG32_FIELD15(HDP, 0, HDP_MMHUB_CNTL, HDP_MMHUB_GCC, 1);
+   break;
default:
break;
}



Re: [PATCH 1/2] drm/amdgpu: Clean up gmc_v9_0_gart_enable

2019-10-08 Thread Christian König

On 07.10.19 at 22:34, Zeng, Oak wrote:

Much of the logic in this function is HDP setup,
not GART setup. Move that logic to gmc_v9_0_hw_init.
No functional change.

Change-Id: Ib00cc1ffd1e486e77571796dce53aa7506c0c55f
Signed-off-by: Oak Zeng 


One minor note on the coding style below, apart from that the patch is 
Acked-by: Christian König .



---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 82 +--
  1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 4b11f7e..c7e07f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -1135,13 +1135,7 @@ static void gmc_v9_0_init_golden_registers(struct 
amdgpu_device *adev)
   */
  static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
  {
-   int r, i;
-   bool value;
-   u32 tmp;
-
-   amdgpu_device_program_register_sequence(adev,
-   golden_settings_vega10_hdp,
-   
ARRAY_SIZE(golden_settings_vega10_hdp));
+   int r;
  
  	if (adev->gart.bo == NULL) {

dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
@@ -1151,15 +1145,6 @@ static int gmc_v9_0_gart_enable(struct amdgpu_device 
*adev)
if (r)
return r;
  
-	switch (adev->asic_type) {

-   case CHIP_RAVEN:
-   /* TODO for renoir */
-   mmhub_v1_0_update_power_gating(adev, true);
-   break;
-   default:
-   break;
-   }
-
r = gfxhub_v1_0_gart_enable(adev);
if (r)
return r;
@@ -1171,6 +1156,46 @@ static int gmc_v9_0_gart_enable(struct amdgpu_device 
*adev)
if (r)
return r;
  
+	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",

+(unsigned)(adev->gmc.gart_size >> 20),
+(unsigned long long)amdgpu_bo_gpu_offset(adev->gart.bo));
+   adev->gart.ready = true;
+   return 0;
+}
+
+static int gmc_v9_0_hw_init(void *handle)
+{
+   int r, i;
+   bool value;
+   u32 tmp;
+   struct amdgpu_device *adev = (struct amdgpu_device *)handle;


We usually try to have short variables like r and i declared last and long 
structure definitions first.


Regards,
Christian.


+
+   /* The sequence of these two function calls matters.*/
+   gmc_v9_0_init_golden_registers(adev);
+
+   if (adev->mode_info.num_crtc) {
+   if (adev->asic_type != CHIP_ARCTURUS) {
+   /* Lockout access through VGA aperture*/
+   WREG32_FIELD15(DCE, 0, VGA_HDP_CONTROL, 
VGA_MEMORY_DISABLE, 1);
+
+   /* disable VGA render */
+   WREG32_FIELD15(DCE, 0, VGA_RENDER_CONTROL, 
VGA_VSTATUS_CNTL, 0);
+   }
+   }
+
+   amdgpu_device_program_register_sequence(adev,
+   golden_settings_vega10_hdp,
+   
ARRAY_SIZE(golden_settings_vega10_hdp));
+
+   switch (adev->asic_type) {
+   case CHIP_RAVEN:
+   /* TODO for renoir */
+   mmhub_v1_0_update_power_gating(adev, true);
+   break;
+   default:
+   break;
+   }
+
WREG32_FIELD15(HDP, 0, HDP_MISC_CNTL, FLUSH_INVALIDATE_CACHE, 1);
  
  	tmp = RREG32_SOC15(HDP, 0, mmHDP_HOST_PATH_CNTL);

@@ -1199,31 +1224,6 @@ static int gmc_v9_0_gart_enable(struct amdgpu_device 
*adev)
if (adev->umc.funcs && adev->umc.funcs->init_registers)
adev->umc.funcs->init_registers(adev);
  
-	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",

-(unsigned)(adev->gmc.gart_size >> 20),
-(unsigned long long)amdgpu_bo_gpu_offset(adev->gart.bo));
-   adev->gart.ready = true;
-   return 0;
-}
-
-static int gmc_v9_0_hw_init(void *handle)
-{
-   int r;
-   struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
-   /* The sequence of these two function calls matters.*/
-   gmc_v9_0_init_golden_registers(adev);
-
-   if (adev->mode_info.num_crtc) {
-   if (adev->asic_type != CHIP_ARCTURUS) {
-   /* Lockout access through VGA aperture*/
-   WREG32_FIELD15(DCE, 0, VGA_HDP_CONTROL, 
VGA_MEMORY_DISABLE, 1);
-
-   /* disable VGA render */
-   WREG32_FIELD15(DCE, 0, VGA_RENDER_CONTROL, 
VGA_VSTATUS_CNTL, 0);
-   }
-   }
-
r = gmc_v9_0_gart_enable(adev);
  
  	return r;



Re: [PATCH TRIVIAL v2] gpu: Fix Kconfig indentation

2019-10-08 Thread Jani Nikula
On Mon, 07 Oct 2019, Krzysztof Kozlowski  wrote:
> On Mon, 7 Oct 2019 at 18:09, Alex Deucher  wrote:
>>
>> On Mon, Oct 7, 2019 at 7:39 AM Jani Nikula  
>> wrote:
>> >
>> > On Fri, 04 Oct 2019, Krzysztof Kozlowski  wrote:
>> > >  drivers/gpu/drm/i915/Kconfig |  12 +-
>> > >  drivers/gpu/drm/i915/Kconfig.debug   | 144 +++
>> >
>> > Please split these out to a separate patch. Can't speak for others, but
>> > the patch looks like it'll be conflicts galore and a problem to manage
>> > if merged in one big lump.
>>
>> Yes, it would be nice to have the amd patch separate as well.
>
> I'll split the AMD and i915 patches, although I am not sure it makes sense
> to split such a trivial patch per driver.

Thanks.

See MAINTAINERS, many of the drivers are maintained in the same drm-misc
repo, and it makes no difference to split those.

In general it's, well, trivial to split up patches like this per driver
or repo, but not splitting it up generates extra busywork in managing
conflicts until some common merge/backmerge happens. We just want to
apply the patch and forget about it, instead of dealing with a trivial
whitespace cleanup many times over.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Graphics Center