Re: [PATCH 12/13] drm/amd/powerplay: correct vega12 thermal support as true

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> Thermal support is enabled on vega12.
>
> Change-Id: I7069a65c6b289dbfe4a12f81ff96e943e878e6fa
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index 1fadb71..de61f86 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -81,6 +81,7 @@ static void vega12_set_default_registry_data(struct 
> pp_hwmgr *hwmgr)
>
> data->registry_data.disallowed_features = 0x0;
> data->registry_data.od_state_in_dc_support = 0;
> +   data->registry_data.thermal_support = 1;
> data->registry_data.skip_baco_hardware = 0;
>
> data->registry_data.log_avfs_param = 0;
> --
> 2.7.4
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/5] dma-buf: remove kmap_atomic interface

2018-06-19 Thread Daniel Vetter
On Tue, Jun 19, 2018 at 4:47 PM, Christian König wrote:
> Am 18.06.2018 um 10:18 schrieb Daniel Vetter:
>>
>> On Fri, Jun 01, 2018 at 02:00:17PM +0200, Christian König wrote:
>>>
>>> Neither used nor correctly implemented anywhere. Just completely remove
>>> the interface.
>>>
>>> Signed-off-by: Christian König 
>>
>> I wonder whether we can nuke the normal kmap stuff too ... everyone seems
>> to want/use the vmap stuff for kernel-internal mapping needs.
>>
>> Anyway, this looks good.
>>>
>>> ---
>>>   drivers/dma-buf/dma-buf.c  | 44
>>> --
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c  |  2 -
>>>   drivers/gpu/drm/armada/armada_gem.c|  2 -
>>>   drivers/gpu/drm/drm_prime.c| 26 -
>>>   drivers/gpu/drm/i915/i915_gem_dmabuf.c | 11 --
>>>   drivers/gpu/drm/i915/selftests/mock_dmabuf.c   |  2 -
>>>   drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c  |  2 -
>>>   drivers/gpu/drm/tegra/gem.c| 14 ---
>>>   drivers/gpu/drm/udl/udl_dmabuf.c   | 17 -
>>>   drivers/gpu/drm/vmwgfx/vmwgfx_prime.c  | 13 ---
>>>   .../media/common/videobuf2/videobuf2-dma-contig.c  |  1 -
>>>   drivers/media/common/videobuf2/videobuf2-dma-sg.c  |  1 -
>>>   drivers/media/common/videobuf2/videobuf2-vmalloc.c |  1 -
>>>   drivers/staging/android/ion/ion.c  |  2 -
>>>   drivers/tee/tee_shm.c  |  6 ---
>>>   include/drm/drm_prime.h|  4 --
>>>   include/linux/dma-buf.h|  4 --
>>>   17 files changed, 152 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index e99a8d19991b..e4c657d9fad7 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -405,7 +405,6 @@ struct dma_buf *dma_buf_export(const struct
>>> dma_buf_export_info *exp_info)
>>>   || !exp_info->ops->map_dma_buf
>>>   || !exp_info->ops->unmap_dma_buf
>>>   || !exp_info->ops->release
>>> - || !exp_info->ops->map_atomic
>>>   || !exp_info->ops->map
>>>   || !exp_info->ops->mmap)) {
>>> return ERR_PTR(-EINVAL);
>>> @@ -687,14 +686,6 @@ EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>>>*  void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
>>>*  void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
>>>*
>>> - *   There are also atomic variants of these interfaces. Like for kmap
>>> they
>>> - *   facilitate non-blocking fast-paths. Neither the importer nor the
>>> exporter
>>> - *   (in the callback) is allowed to block when using these.
>>> - *
>>> - *   Interfaces::
>>> - *  void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long);
>>> - *  void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long,
>>> void \*);
>>> - *
>>>*   For importers all the restrictions of using kmap apply, like the
>>> limited
>>>*   supply of kmap_atomic slots. Hence an importer shall only hold
>>> onto at
>>>*   max 2 atomic dma_buf kmaps at the same time (in any given process
>>> context).
>>
>> This is also about atomic kmap ...
>>
>> And the subsequent language around "Note that these calls need to always
>> succeed." is also not true, might be good to update that stating that kmap
>> is optional (like we say already for vmap).
>>
>> With those docs nits addressed:
>>
>> Reviewed-by: Daniel Vetter 
>
>
> I've fixed up patch #1 and #2 and added your Reviewed-by tag.
>
> Since I finally had time to install dim do you have any objections that I
> now run "dim push drm-misc-next" with those two applied?

Go ahead, that's the point of commit rights. dim might complain if you
cherry-picked them and didn't pick them up using dim apply though ...
-Daniel


> Regards,
> Christian.
>
>
>>
>>> @@ -859,41 +850,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>>   }
>>>   EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
>>>   -/**
>>> - * dma_buf_kmap_atomic - Map a page of the buffer object into kernel
>>> address
>>> - * space. The same restrictions as for kmap_atomic and friends apply.
>>> - * @dmabuf:[in]buffer to map page from.
>>> - * @page_num:  [in]page in PAGE_SIZE units to map.
>>> - *
>>> - * This call must always succeed, any necessary preparations that might
>>> fail
>>> - * need to be done in begin_cpu_access.
>>> - */
>>> -void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long
>>> page_num)
>>> -{
>>> -   WARN_ON(!dmabuf);
>>> -
>>> -   return dmabuf->ops->map_atomic(dmabuf, page_num);
>>> -}
>>> -EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
>>> -
>>> -/**
>>> - * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
>>> - * @dmabuf:[in]buffer to unmap page from.
>>> - * @page_num: 

Re: [PATCH 10/13] drm/amd/powerplay: apply clocks adjust rules on power state change

2018-06-19 Thread Zhu, Rex
Hi Evan,

Do we need to check the following flags on vega12? Will the driver set those
flags when the user selects the umd_pstate?

PHM_PlatformCaps_UMDPState/PHM_PlatformCaps_PState.

Best Regards
Rex


Get Outlook for Android


From: amd-gfx  on behalf of Alex Deucher 

Sent: Tuesday, June 19, 2018 11:16:44 PM
To: Quan, Evan
Cc: amd-gfx list
Subject: Re: [PATCH 10/13] drm/amd/powerplay: apply clocks adjust rules on 
power state change

On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> The clocks hard/soft min/max clock levels will be adjusted
> correspondingly.


Also note that this adds the apply_clocks_adjust_rules callback, which
is used to validate the clock settings on a power state change.  One
other comment below.

>
> Change-Id: I2c4b6cd6756d40a28933f0c26b9e1a3d5078bab8
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 162 
> +
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |   2 +
>  2 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index a227ace..26bdfff 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -1950,6 +1950,166 @@ static int vega12_print_clock_levels(struct pp_hwmgr 
> *hwmgr,
> return size;
>  }
>
> +static int vega12_apply_clocks_adjust_rules(struct pp_hwmgr *hwmgr)
> +{
> +   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> +   struct vega12_single_dpm_table *dpm_table;
> +   bool vblank_too_short = false;
> +   bool disable_mclk_switching;
> +   uint32_t i, latency;
> +
> +   disable_mclk_switching = ((1 < hwmgr->display_config->num_display) &&
> + 
> !hwmgr->display_config->multi_monitor_in_sync) ||
> + vblank_too_short;
> +   latency = hwmgr->display_config->dce_tolerable_mclk_in_active_latency;
> +
> +   /* gfxclk */
> +   dpm_table = &(data->dpm_table.gfx_table);
> +   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.hard_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +
> +   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
> +   if (VEGA12_UMD_PSTATE_GFXCLK_LEVEL < dpm_table->count) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
> +   }
> +
> +   if (hwmgr->dpm_level == 
> AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[0].value;
> +   }
> +
> +   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   }
> +   }
> +
> +   /* memclk */
> +   dpm_table = &(data->dpm_table.mem_table);
> +   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.hard_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +
> +   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
> +   if (VEGA12_UMD_PSTATE_MCLK_LEVEL < dpm_table->count) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
> +   }
> +
> +   if (hwmgr->dpm_level == 
> AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[0].value;
> +   }
> +
> +   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   

Re: [PATCH 10/13] drm/amd/powerplay: apply clocks adjust rules on power state change

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> The clocks hard/soft min/max clock levels will be adjusted
> correspondingly.


Also note that this adds the apply_clocks_adjust_rules callback, which
is used to validate the clock settings on a power state change.  One
other comment below.

>
> Change-Id: I2c4b6cd6756d40a28933f0c26b9e1a3d5078bab8
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 162 
> +
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |   2 +
>  2 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index a227ace..26bdfff 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -1950,6 +1950,166 @@ static int vega12_print_clock_levels(struct pp_hwmgr 
> *hwmgr,
> return size;
>  }
>
> +static int vega12_apply_clocks_adjust_rules(struct pp_hwmgr *hwmgr)
> +{
> +   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> +   struct vega12_single_dpm_table *dpm_table;
> +   bool vblank_too_short = false;
> +   bool disable_mclk_switching;
> +   uint32_t i, latency;
> +
> +   disable_mclk_switching = ((1 < hwmgr->display_config->num_display) &&
> + 
> !hwmgr->display_config->multi_monitor_in_sync) ||
> + vblank_too_short;
> +   latency = hwmgr->display_config->dce_tolerable_mclk_in_active_latency;
> +
> +   /* gfxclk */
> +   dpm_table = &(data->dpm_table.gfx_table);
> +   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.hard_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +
> +   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
> +   if (VEGA12_UMD_PSTATE_GFXCLK_LEVEL < dpm_table->count) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
> +   }
> +
> +   if (hwmgr->dpm_level == 
> AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[0].value;
> +   }
> +
> +   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   }
> +   }
> +
> +   /* memclk */
> +   dpm_table = &(data->dpm_table.mem_table);
> +   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.hard_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +
> +   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
> +   if (VEGA12_UMD_PSTATE_MCLK_LEVEL < dpm_table->count) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
> +   }
> +
> +   if (hwmgr->dpm_level == 
> AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[0].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[0].value;
> +   }
> +
> +   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
> +   dpm_table->dpm_state.soft_min_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   dpm_table->dpm_state.soft_max_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   }
> +   }
> +
> +   /* honour DAL's UCLK Hardmin */
> +   if (dpm_table->dpm_state.hard_min_level < 
> (hwmgr->display_config->min_mem_set_clock / 100))
> +   dpm_table->dpm_state.hard_min_level = 
> hwmgr->display_config->min_mem_set_clock / 100;
> +

Didn't you just remove the uclk hard min setting in a previous patch?



> +   /* Hardmin is dependent on displayconfig */
> +

Re: [PATCH 13/13] drm/amd/powerplay: cosmetic fix

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> Fix coding style and drop unused variable.
>
> Change-Id: I9630f39154ec6bc30115e75924b35bcbe028a1a4
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 10 +++---
>  .../gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h  | 18 
> +-
>  2 files changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index de61f86..a699416 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -811,9 +811,6 @@ static int vega12_enable_all_smu_features(struct pp_hwmgr 
> *hwmgr)
> enabled = (features_enabled & 
> data->smu_features[i].smu_feature_bitmap) ? true : false;
> data->smu_features[i].enabled = enabled;
> data->smu_features[i].supported = enabled;
> -   PP_ASSERT(
> -   !data->smu_features[i].allowed || enabled,
> -   "[EnableAllSMUFeatures] Enabled feature is 
> different from allowed, expected disabled!");
> }
> }
>
> @@ -1230,8 +1227,8 @@ static int vega12_get_current_gfx_clk_freq(struct 
> pp_hwmgr *hwmgr, uint32_t *gfx
>
> *gfx_freq = 0;
>
> -   PP_ASSERT_WITH_CODE(
> -   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetDpmClockFreq, (PPCLK_GFXCLK << 16)) == 0,
> +   PP_ASSERT_WITH_CODE(smum_send_msg_to_smc_with_parameter(hwmgr,
> +   PPSMC_MSG_GetDpmClockFreq, (PPCLK_GFXCLK << 16)) == 0,
> "[GetCurrentGfxClkFreq] Attempt to get Current GFXCLK 
> Frequency Failed!",
> return -1);
> PP_ASSERT_WITH_CODE(
> @@ -1790,7 +1787,6 @@ static int 
> vega12_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
>  {
> struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> Watermarks_t *table = &(data->smc_state_table.water_marks_table);
> -   int result = 0;
> uint32_t i;
>
> if (!data->registry_data.disable_water_mark &&
> @@ -1841,7 +1837,7 @@ static int 
> vega12_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
> data->water_marks_bitmap &= ~WaterMarksLoaded;
> }
>
> -   return result;
> +   return 0;
>  }
>
>  static int vega12_force_clock_level(struct pp_hwmgr *hwmgr,
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h 
> b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> index b08526f..b6ffd08 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> @@ -412,10 +412,10 @@ typedef struct {
>QuadraticInt_tReservedEquation2;
>QuadraticInt_tReservedEquation3;
>
> -   uint16_t MinVoltageUlvGfx;
> -   uint16_t MinVoltageUlvSoc;
> +  uint16_t MinVoltageUlvGfx;
> +  uint16_t MinVoltageUlvSoc;
>
> -   uint32_t Reserved[14];
> +  uint32_t Reserved[14];
>
>
>
> @@ -483,9 +483,9 @@ typedef struct {
>uint8_t  padding8_4;
>
>
> -   uint8_t  PllGfxclkSpreadEnabled;
> -   uint8_t  PllGfxclkSpreadPercent;
> -   uint16_t PllGfxclkSpreadFreq;
> +  uint8_t  PllGfxclkSpreadEnabled;
> +  uint8_t  PllGfxclkSpreadPercent;
> +  uint16_t PllGfxclkSpreadFreq;
>
>uint8_t  UclkSpreadEnabled;
>uint8_t  UclkSpreadPercent;
> @@ -495,9 +495,9 @@ typedef struct {
>uint8_t  SocclkSpreadPercent;
>uint16_t SocclkSpreadFreq;
>
> -   uint8_t  AcgGfxclkSpreadEnabled;
> -   uint8_t  AcgGfxclkSpreadPercent;
> -   uint16_t AcgGfxclkSpreadFreq;
> +  uint8_t  AcgGfxclkSpreadEnabled;
> +  uint8_t  AcgGfxclkSpreadPercent;
> +  uint16_t AcgGfxclkSpreadFreq;
>
>uint8_t  Vr2_I2C_address;
>uint8_t  padding_vr2[3];
> --
> 2.7.4
>


[PATCH 1/2] drm/amdgpu: Polish SQ IH.

2018-06-19 Thread Andrey Grodzovsky
Switch to using register field defines instead of magic values.
Add SH_ID and PRIV field reads for the instruction and error cases.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 36 +++
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 15e61e1..93904a7 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -6958,10 +6958,11 @@ static int gfx_v8_0_sq_irq(struct amdgpu_device *adev,
 {
u8 enc, se_id;
char type[20];
+   unsigned ih_data = entry->src_data[0];
 
-   /* Parse all fields according to SQ_INTERRUPT* registers */
-   enc = (entry->src_data[0] >> 26) & 0x3;
-   se_id = (entry->src_data[0] >> 24) & 0x3;
+
+   enc = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_CMN, ENCODING);
+   se_id = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_CMN, SE_ID);
 
switch (enc) {
case 0:
@@ -6971,14 +6972,14 @@ static int gfx_v8_0_sq_irq(struct amdgpu_device *adev,
"reg_timestamp %d, 
thread_trace_buff_full %d,"
"wlt %d, thread_trace %d.\n",
se_id,
-   (entry->src_data[0] >> 7) & 0x1,
-   (entry->src_data[0] >> 6) & 0x1,
-   (entry->src_data[0] >> 5) & 0x1,
-   (entry->src_data[0] >> 4) & 0x1,
-   (entry->src_data[0] >> 3) & 0x1,
-   (entry->src_data[0] >> 2) & 0x1,
-   (entry->src_data[0] >> 1) & 0x1,
-   entry->src_data[0] & 0x1
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, IMMED_OVERFLOW),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, HOST_REG_OVERFLOW),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, HOST_CMD_OVERFLOW),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, CMD_TIMESTAMP),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, REG_TIMESTAMP),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, THREAD_TRACE_BUF_FULL),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, WLT),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_AUTO, THREAD_TRACE)
);
break;
case 1:
@@ -6991,12 +6992,15 @@ static int gfx_v8_0_sq_irq(struct amdgpu_device *adev,
 
DRM_INFO(
"SQ %s detected: "
-   "se_id %d, cu_id %d, simd_id %d, 
wave_id %d, vm_id %d\n",
+   "se_id %d, cu_id %d, simd_id %d, 
wave_id %d, vm_id %d\n"
+   "trap %s, sh_id %d. ",
type, se_id,
-   (entry->src_data[0] >> 20) & 0xf,
-   (entry->src_data[0] >> 18) & 0x3,
-   (entry->src_data[0] >> 14) & 0xf,
-   (entry->src_data[0] >> 10) & 0xf
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, CU_ID),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, SIMD_ID),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, WAVE_ID),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, VM_ID),
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, PRIV) ? "true" : "false",
+   REG_GET_FIELD(ih_data, 
SQ_INTERRUPT_WORD_WAVE, SH_ID)
);
break;
default:
-- 
2.7.4



[PATCH 2/2] drm/amdgpu: Add parsing SQ_EDC_INFO to SQ IH.

2018-06-19 Thread Andrey Grodzovsky
Access to SQ_EDC_INFO requires selecting the register instance and
hence taking a mutex when accessing GRBM_GFX_INDEX, for which a work
item is scheduled from the IH. But an SQ interrupt can be raised on many
instances at once, which means queuing the work will usually succeed for
the first one but fail for the rest since the work takes time to process.
To avoid losing info about the other interrupt instances, call the
parsing function directly from the high IRQ when the current work hasn't
finished, and skip accessing SQ_EDC_INFO in that case.

Signed-off-by: Andrey Grodzovsky 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  7 +++
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 97 ++-
 2 files changed, 91 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index e8c6cc1..a7b9ef5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -930,6 +930,11 @@ struct amdgpu_ngg {
boolinit;
 };
 
+struct sq_work {
+   struct work_struct  work;
+   unsigned ih_data;
+};
+
 struct amdgpu_gfx {
struct mutexgpu_clock_mutex;
struct amdgpu_gfx_configconfig;
@@ -970,6 +975,8 @@ struct amdgpu_gfx {
struct amdgpu_irq_src   priv_inst_irq;
struct amdgpu_irq_src   cp_ecc_error_irq;
struct amdgpu_irq_src   sq_irq;
+   struct sq_work  sq_work;
+
/* gfx status */
uint32_tgfx_current_status;
/* ce ram size*/
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 93904a7..0add7fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -704,6 +704,17 @@ static const u32 stoney_mgcg_cgcg_init[] =
mmCGTS_SM_CTRL_REG, 0x, 0x96940200,
 };
 
+
+static const char * const sq_edc_source_names[] = {
+   "SQ_EDC_INFO_SOURCE_INVALID: No EDC error has occurred",
+   "SQ_EDC_INFO_SOURCE_INST: EDC source is Instruction Fetch",
+   "SQ_EDC_INFO_SOURCE_SGPR: EDC source is SGPR or SQC data return",
+   "SQ_EDC_INFO_SOURCE_VGPR: EDC source is VGPR",
+   "SQ_EDC_INFO_SOURCE_LDS: EDC source is LDS",
+   "SQ_EDC_INFO_SOURCE_GDS: EDC source is GDS",
+   "SQ_EDC_INFO_SOURCE_TA: EDC source is TA",
+};
+
 static void gfx_v8_0_set_ring_funcs(struct amdgpu_device *adev);
 static void gfx_v8_0_set_irq_funcs(struct amdgpu_device *adev);
 static void gfx_v8_0_set_gds_init(struct amdgpu_device *adev);
@@ -2003,6 +2014,8 @@ static int gfx_v8_0_compute_ring_init(struct 
amdgpu_device *adev, int ring_id,
return 0;
 }
 
+static void gfx_v8_0_sq_irq_work_func(struct work_struct *work);
+
 static int gfx_v8_0_sw_init(void *handle)
 {
int i, j, k, r, ring_id;
@@ -2066,6 +2079,8 @@ static int gfx_v8_0_sw_init(void *handle)
return r;
}
 
+   INIT_WORK(>gfx.sq_work.work, gfx_v8_0_sq_irq_work_func);
+
adev->gfx.gfx_current_status = AMDGPU_GFX_NORMAL_MODE;
 
gfx_v8_0_scratch_init(adev);
@@ -6952,14 +6967,11 @@ static int gfx_v8_0_cp_ecc_error_irq(struct 
amdgpu_device *adev,
return 0;
 }
 
-static int gfx_v8_0_sq_irq(struct amdgpu_device *adev,
-  struct amdgpu_irq_src *source,
-  struct amdgpu_iv_entry *entry)
+static void gfx_v8_0_parse_sq_irq(struct amdgpu_device *adev, unsigned ih_data)
 {
-   u8 enc, se_id;
+   u32 enc, se_id, sh_id, cu_id;
char type[20];
-   unsigned ih_data = entry->src_data[0];
-
+   int sq_edc_source = -1;
 
enc = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_CMN, ENCODING);
se_id = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_CMN, SE_ID);
@@ -6985,6 +6997,24 @@ static int gfx_v8_0_sq_irq(struct amdgpu_device *adev,
case 1:
case 2:
 
+   cu_id = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_WAVE, 
CU_ID);
+   sh_id = REG_GET_FIELD(ih_data, SQ_INTERRUPT_WORD_WAVE, 
SH_ID);
+
+   /*
+* This function can be called either directly from ISR
+* or from BH in which case we can access SQ_EDC_INFO
+* instance
+*/
+   if (in_task()) {
+   mutex_lock(>grbm_idx_mutex);
+   gfx_v8_0_select_se_sh(adev, se_id, sh_id, 
cu_id);
+
+   sq_edc_source = 
REG_GET_FIELD(RREG32(mmSQ_EDC_INFO), SQ_EDC_INFO, SOURCE);
+
+   gfx_v8_0_select_se_sh(adev, 0x, 
0x, 0x);
+   mutex_unlock(>grbm_idx_mutex);
+   }
+
if (enc == 1)
sprintf(type, "instruction intr");
else

Re: [PATCH 11/13] drm/amd/powerplay: set vega12 pre display configurations

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> PPSMC_MSG_NumOfDisplays is set to 0 and uclk is forced to the
> highest level.

Adjust the commit message to make it clear that you set num_displays
to 0 and force uclk high as part of the mode set sequence.

With that fixed:
Acked-by: Alex Deucher 

>
> Change-Id: I2400279d3c979d99f4dd4b8d53f051cd8f8e0c33
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 41 
> ++
>  1 file changed, 41 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index 26bdfff..1fadb71 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -2110,6 +2110,45 @@ static int vega12_apply_clocks_adjust_rules(struct 
> pp_hwmgr *hwmgr)
> return 0;
>  }
>
> +static int vega12_set_uclk_to_highest_dpm_level(struct pp_hwmgr *hwmgr,
> +   struct vega12_single_dpm_table *dpm_table)
> +{
> +   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> +   int ret = 0;
> +
> +   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
> +   PP_ASSERT_WITH_CODE(dpm_table->count > 0,
> +   "[SetUclkToHightestDpmLevel] Dpm table has no 
> entry!",
> +   return -EINVAL);
> +   PP_ASSERT_WITH_CODE(dpm_table->count <= NUM_UCLK_DPM_LEVELS,
> +   "[SetUclkToHightestDpmLevel] Dpm table has 
> too many entries!",
> +   return -EINVAL);
> +
> +   dpm_table->dpm_state.hard_min_level = 
> dpm_table->dpm_levels[dpm_table->count - 1].value;
> +   PP_ASSERT_WITH_CODE(!(ret = 
> smum_send_msg_to_smc_with_parameter(hwmgr,
> +   PPSMC_MSG_SetHardMinByFreq,
> +   (PPCLK_UCLK << 16 ) | 
> dpm_table->dpm_state.hard_min_level)),
> +   "[SetUclkToHightestDpmLevel] Set hard min 
> uclk failed!",
> +   return ret);
> +   }
> +
> +   return ret;
> +}
> +
> +static int vega12_pre_display_configuration_changed_task(struct pp_hwmgr 
> *hwmgr)
> +{
> +   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> +   int ret = 0;
> +
> +   smum_send_msg_to_smc_with_parameter(hwmgr,
> +   PPSMC_MSG_NumOfDisplays, 0);
> +
> +   ret = vega12_set_uclk_to_highest_dpm_level(hwmgr,
> +   >dpm_table.mem_table);
> +
> +   return ret;
> +}
> +
>  static int vega12_display_configuration_changed_task(struct pp_hwmgr *hwmgr)
>  {
> struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
> @@ -2358,6 +2397,8 @@ static const struct pp_hwmgr_func vega12_hwmgr_funcs = {
> .print_clock_levels = vega12_print_clock_levels,
> .apply_clocks_adjust_rules =
> vega12_apply_clocks_adjust_rules,
> +   .pre_display_config_changed =
> +   vega12_pre_display_configuration_changed_task,
> .display_config_changed = vega12_display_configuration_changed_task,
> .powergate_uvd = vega12_power_gate_uvd,
> .powergate_vce = vega12_power_gate_vce,
> --
> 2.7.4
>


Re: [PATCH] drm/amdgpu: Use correct enum to set powergating state

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 5:16 AM, Stefan Agner  wrote:
> Use enum amd_powergating_state instead of enum amd_clockgating_state.
> The underlying value stays the same, so there is no functional change
> in practice. This fixes a warning seen with clang:
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:1930:14: warning: implicit
>   conversion from enumeration type 'enum amd_clockgating_state' to
>   different enumeration type 'enum amd_powergating_state'
>   [-Wenum-conversion]
>AMD_CG_STATE_UNGATE);
>^~~
>
> Signed-off-by: Stefan Agner 

Applied.  thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index fe76ec1f9737..2a1d19c31922 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1927,7 +1927,7 @@ int amdgpu_device_ip_suspend(struct amdgpu_device *adev)
> if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
> amdgpu_device_ip_set_powergating_state(adev,
>AMD_IP_BLOCK_TYPE_SMC,
> -  AMD_CG_STATE_UNGATE);
> +  AMD_PG_STATE_UNGATE);
>
> /* ungate SMC block first */
> r = amdgpu_device_ip_set_clockgating_state(adev, 
> AMD_IP_BLOCK_TYPE_SMC,
> --
> 2.17.1
>


Re: [PATCH] drm/amdgpu:All UVD instances share one idle_work handle

2018-06-19 Thread Stefan Agner
On 18.06.2018 20:00, James Zhu wrote:
> All UVD instances have only one dpm control, so it is better
> to share one idle_work handle.

Compiles fine with clang here.

Tested-by: Stefan Agner 

> 
> Signed-off-by: James Zhu 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 14 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h |  2 +-
>  2 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index 04d77f1..cc15d32 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -130,7 +130,7 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
>   unsigned family_id;
>   int i, j, r;
>  
> - INIT_DELAYED_WORK(>uvd.inst->idle_work, 
> amdgpu_uvd_idle_work_handler);
> + INIT_DELAYED_WORK(>uvd.idle_work, amdgpu_uvd_idle_work_handler);
>  
>   switch (adev->asic_type) {
>  #ifdef CONFIG_DRM_AMDGPU_CIK
> @@ -331,12 +331,12 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
>   void *ptr;
>   int i, j;
>  
> + cancel_delayed_work_sync(>uvd.idle_work);
> +
>   for (j = 0; j < adev->uvd.num_uvd_inst; ++j) {
>   if (adev->uvd.inst[j].vcpu_bo == NULL)
>   continue;
>  
> - cancel_delayed_work_sync(>uvd.inst[j].idle_work);
> -
>   /* only valid for physical mode */
>   if (adev->asic_type < CHIP_POLARIS10) {
>   for (i = 0; i < adev->uvd.max_handles; ++i)
> @@ -1162,7 +1162,7 @@ int amdgpu_uvd_get_destroy_msg(struct
> amdgpu_ring *ring, uint32_t handle,
>  static void amdgpu_uvd_idle_work_handler(struct work_struct *work)
>  {
>   struct amdgpu_device *adev =
> - container_of(work, struct amdgpu_device, 
> uvd.inst->idle_work.work);
> + container_of(work, struct amdgpu_device, uvd.idle_work.work);
>   unsigned fences = 0, i, j;
>  
>   for (i = 0; i < adev->uvd.num_uvd_inst; ++i) {
> @@ -1184,7 +1184,7 @@ static void amdgpu_uvd_idle_work_handler(struct
> work_struct *work)
>  
> AMD_CG_STATE_GATE);
>   }
>   } else {
> - schedule_delayed_work(>uvd.inst->idle_work, 
> UVD_IDLE_TIMEOUT);
> + schedule_delayed_work(>uvd.idle_work, UVD_IDLE_TIMEOUT);
>   }
>  }
>  
> @@ -1196,7 +1196,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)
>   if (amdgpu_sriov_vf(adev))
>   return;
>  
> - set_clocks = !cancel_delayed_work_sync(>uvd.inst->idle_work);
> + set_clocks = !cancel_delayed_work_sync(>uvd.idle_work);
>   if (set_clocks) {
>   if (adev->pm.dpm_enabled) {
>   amdgpu_dpm_enable_uvd(adev, true);
> @@ -1213,7 +1213,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)
>  void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring)
>  {
>   if (!amdgpu_sriov_vf(ring->adev))
> - schedule_delayed_work(>adev->uvd.inst->idle_work, 
> UVD_IDLE_TIMEOUT);
> + schedule_delayed_work(>adev->uvd.idle_work, 
> UVD_IDLE_TIMEOUT);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> index b1579fb..8b23a1b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
> @@ -44,7 +44,6 @@ struct amdgpu_uvd_inst {
>   void*saved_bo;
>   atomic_thandles[AMDGPU_MAX_UVD_HANDLES];
>   struct drm_file *filp[AMDGPU_MAX_UVD_HANDLES];
> - struct delayed_work idle_work;
>   struct amdgpu_ring  ring;
>   struct amdgpu_ring  ring_enc[AMDGPU_MAX_UVD_ENC_RINGS];
>   struct amdgpu_irq_src   irq;
> @@ -62,6 +61,7 @@ struct amdgpu_uvd {
>   booladdress_64_bit;
>   booluse_ctx_buf;
>   struct amdgpu_uvd_inst  inst[AMDGPU_MAX_UVD_INSTANCES];
> + struct delayed_work idle_work;
>  };
>  
>  int amdgpu_uvd_sw_init(struct amdgpu_device *adev);


[PATCH 14/51] drm/amd/display: get rid of cur_clks from dcn_bw_output

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Clean up dcn_bw_output to contain only calculated info; actual
programmed values will now be stored in their respective blocks.

Change-Id: I8d5139ba4bea9e6738bd6d8bd8e45ec82477c276
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Nikola Cornij 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 28 +++---
 .../gpu/drm/amd/display/dc/core/dc_debug.c| 24 +++---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  2 +-
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   |  4 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 85 +--
 .../gpu/drm/amd/display/dc/inc/core_types.h   |  3 +-
 6 files changed, 72 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 9ce329e8f287..b8195e5a0676 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -977,42 +977,42 @@ bool dcn_validate_bandwidth(
 
display_pipe_configuration(v);
calc_wm_sets_and_perf_params(context, v);
-   context->bw.dcn.calc_clk.fclk_khz = (int)(bw_consumed * 100 
/
+   context->bw.dcn.clk.fclk_khz = (int)(bw_consumed * 100 /
(ddr4_dram_factor_single_Channel * 
v->number_of_channels));
if (bw_consumed == v->fabric_and_dram_bandwidth_vmin0p65) {
-   context->bw.dcn.calc_clk.fclk_khz = (int)(bw_consumed * 
100 / 32);
+   context->bw.dcn.clk.fclk_khz = (int)(bw_consumed * 
100 / 32);
}
 
-   context->bw.dcn.calc_clk.dcfclk_deep_sleep_khz = 
(int)(v->dcf_clk_deep_sleep * 1000);
-   context->bw.dcn.calc_clk.dcfclk_khz = (int)(v->dcfclk * 1000);
+   context->bw.dcn.clk.dcfclk_deep_sleep_khz = 
(int)(v->dcf_clk_deep_sleep * 1000);
+   context->bw.dcn.clk.dcfclk_khz = (int)(v->dcfclk * 1000);
 
-   context->bw.dcn.calc_clk.dispclk_khz = (int)(v->dispclk * 1000);
+   context->bw.dcn.clk.dispclk_khz = (int)(v->dispclk * 1000);
if (dc->debug.max_disp_clk == true)
-   context->bw.dcn.calc_clk.dispclk_khz = 
(int)(dc->dcn_soc->max_dispclk_vmax0p9 * 1000);
+   context->bw.dcn.clk.dispclk_khz = 
(int)(dc->dcn_soc->max_dispclk_vmax0p9 * 1000);
 
-   if (context->bw.dcn.calc_clk.dispclk_khz <
+   if (context->bw.dcn.clk.dispclk_khz <
dc->debug.min_disp_clk_khz) {
-   context->bw.dcn.calc_clk.dispclk_khz =
+   context->bw.dcn.clk.dispclk_khz =
dc->debug.min_disp_clk_khz;
}
 
-   context->bw.dcn.calc_clk.dppclk_khz = 
context->bw.dcn.calc_clk.dispclk_khz / v->dispclk_dppclk_ratio;
-   context->bw.dcn.calc_clk.phyclk_khz = 
v->phyclk_per_state[v->voltage_level];
+   context->bw.dcn.clk.dppclk_khz = 
context->bw.dcn.clk.dispclk_khz / v->dispclk_dppclk_ratio;
+   context->bw.dcn.clk.phyclk_khz = 
v->phyclk_per_state[v->voltage_level];
switch (v->voltage_level) {
case 0:
-   context->bw.dcn.calc_clk.max_supported_dppclk_khz =
+   context->bw.dcn.clk.max_supported_dppclk_khz =
(int)(dc->dcn_soc->max_dppclk_vmin0p65 
* 1000);
break;
case 1:
-   context->bw.dcn.calc_clk.max_supported_dppclk_khz =
+   context->bw.dcn.clk.max_supported_dppclk_khz =
(int)(dc->dcn_soc->max_dppclk_vmid0p72 
* 1000);
break;
case 2:
-   context->bw.dcn.calc_clk.max_supported_dppclk_khz =
+   context->bw.dcn.clk.max_supported_dppclk_khz =
(int)(dc->dcn_soc->max_dppclk_vnom0p8 * 
1000);
break;
default:
-   context->bw.dcn.calc_clk.max_supported_dppclk_khz =
+   context->bw.dcn.clk.max_supported_dppclk_khz =
(int)(dc->dcn_soc->max_dppclk_vmax0p9 * 
1000);
break;
}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
index 267c76766dea..e1ebdf7b5eaf 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
@@ -352,19 +352,19 @@ void context_clock_trace(
DC_LOGGER_INIT(dc->ctx->logger);
CLOCK_TRACE("Current: dispclk_khz:%d  max_dppclk_khz:%d  
dcfclk_khz:%d\n"
"dcfclk_deep_sleep_khz:%d  fclk_khz:%d  
socclk_khz:%d\n",
-  

[PATCH 01/51] drm/amd/display: replace clocks_value struct with dc_clocks

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

This avoids structs with duplicate information. It also removes the
pixel clock voltage request, which has no effect since pixel clock
does not affect DCN voltage and this function only matters for DCN.

Change-Id: I62e8906e8f250602ccc8e8a61e9585df13a00a0f
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 34 +++---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  8 ++--
 drivers/gpu/drm/amd/display/dc/dc.h   |  5 ++
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 46 +++
 .../display/dc/dce110/dce110_hw_sequencer.c   | 18 +---
 .../gpu/drm/amd/display/dc/inc/dcn_calcs.h|  2 +-
 .../drm/amd/display/dc/inc/hw/display_clock.h | 22 ++---
 7 files changed, 49 insertions(+), 86 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 49a4ea45466d..d8a31650e856 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -1145,10 +1145,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
 
switch (clocks_type) {
case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
-   if (clocks_in_khz > dc->dcn_soc->max_dispclk_vmax0p9*1000) {
+   /*if (clocks_in_khz > dc->dcn_soc->max_dispclk_vmax0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
-   BREAK_TO_DEBUGGER();
-   } else if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vnom0p8*1000) {
+   //BREAK_TO_DEBUGGER();
+   } else*/ if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vnom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vmid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1158,10 +1158,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
vdd_level = dcn_bw_v_min0p65;
break;
case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
-   if (clocks_in_khz > dc->dcn_soc->phyclkv_max0p9*1000) {
+   /*if (clocks_in_khz > dc->dcn_soc->phyclkv_max0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else if (clocks_in_khz > dc->dcn_soc->phyclkv_nom0p8*1000) {
+   } else*/ if (clocks_in_khz > dc->dcn_soc->phyclkv_nom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > dc->dcn_soc->phyclkv_mid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1172,10 +1172,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
break;
 
case DM_PP_CLOCK_TYPE_DPPCLK:
-   if (clocks_in_khz > dc->dcn_soc->max_dppclk_vmax0p9*1000) {
+   /*if (clocks_in_khz > dc->dcn_soc->max_dppclk_vmax0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vnom0p8*1000) {
+   } else*/ if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vnom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vmid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1189,10 +1189,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
{
unsigned factor = (ddr4_dram_factor_single_Channel * 
dc->dcn_soc->number_of_channels);
 
-   if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9*100/factor) {
+   /*if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9*100/factor) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8*100/factor) {
+   } else */if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8*100/factor) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72*100/factor) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1204,10 +1204,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
break;
 
case DM_PP_CLOCK_TYPE_DCFCLK:
-   if (clocks_in_khz > dc->dcn_soc->dcfclkv_max0p9*1000) {
+   /*if (clocks_in_khz > dc->dcn_soc->dcfclkv_max0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else if (clocks_in_khz > dc->dcn_soc->dcfclkv_nom0p8*1000) {
+ 

[PATCH 13/51] drm/amd/display: Add clock types to applying clk for voltage

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Add DCF and FCLK clock case statements for changing Raven's
clocks on a voltage request.
Also maintain the DCEF clock for DCE120 calls.

Change-Id: I33018a752485b107c4127d4655d5e976855b6917
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c| 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 67f1245e70ef..025f37f62c4b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -441,10 +441,18 @@ bool dm_pp_apply_clock_for_voltage_request(
pp_clock_request.clock_type = amd_pp_dcef_clock;
break;
 
+   case DM_PP_CLOCK_TYPE_DCFCLK:
+   pp_clock_request.clock_type = amd_pp_dcf_clock;
+   break;
+
case DM_PP_CLOCK_TYPE_PIXELCLK:
pp_clock_request.clock_type = amd_pp_pixel_clock;
break;
 
+   case DM_PP_CLOCK_TYPE_FCLK:
+   pp_clock_request.clock_type = amd_pp_f_clock;
+   break;
+
default:
return false;
}
-- 
2.17.1



[PATCH 07/51] drm/amd/display: Adding Get static clocks for dm_pp interface

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Add a call to powerplay to get the system clocks and translate them into the dm structure.

Change-Id: Ic6892857a16cce24d9ac9485b5507d97a67fca37
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../amd/display/amdgpu_dm/amdgpu_dm_services.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 0dc7a791c216..3dcd5f9af6e5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -459,8 +459,22 @@ bool dm_pp_get_static_clocks(
const struct dc_context *ctx,
struct dm_pp_static_clock_info *static_clk_info)
 {
-   /* TODO: to be implemented */
-   return false;
+   struct amdgpu_device *adev = ctx->driver_context;
+   struct amd_pp_clock_info *pp_clk_info = {0};
+   int ret = 0;
+
+   if (adev->powerplay.pp_funcs->get_current_clocks)
+   ret = adev->powerplay.pp_funcs->get_current_clocks(
+   adev->powerplay.pp_handle,
+   pp_clk_info);
+   if (ret)
+   return false;
+
+   static_clk_info->max_clocks_state = pp_clk_info->max_clocks_state;
+   static_clk_info->max_mclk_khz = pp_clk_info->max_memory_clock;
+   static_clk_info->max_sclk_khz = pp_clk_info->max_engine_clock;
+
+   return true;
 }
 
 void dm_pp_get_funcs_rv(
-- 
2.17.1



[PATCH 11/51] drm/amd/display: Use tg count for opp init.

2018-06-19 Thread Harry Wentland
From: Yongqiang Sun 

If the tg count is not equal to the FE pipe count, iterating over the
tgs using the pipe count will cause a BSOD.

Change-Id: I83a3e1b5a1fdbf004fa43cdc6943f69e436ac49d
Signed-off-by: Yongqiang Sun 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 66ecb861f2f3..eae2fd7692da 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1025,7 +1025,7 @@ static void dcn10_init_hw(struct dc *dc)
/* Reset all MPCC muxes */
dc->res_pool->mpc->funcs->mpc_init(dc->res_pool->mpc);
 
-   for (i = 0; i < dc->res_pool->pipe_count; i++) {
+   for (i = 0; i < dc->res_pool->timing_generator_count; i++) {
struct timing_generator *tg = 
dc->res_pool->timing_generators[i];
struct pipe_ctx *pipe_ctx = >res_ctx.pipe_ctx[i];
struct hubp *hubp = dc->res_pool->hubps[i];
-- 
2.17.1



[PATCH 12/51] drm/amd/display: Use local structs instead of struct pointers

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Change struct pointers to structs created on the stack.
This fixes a mistake in a previous patch that introduced the dm_pplib
functions.

Change-Id: Ibd7960f5ccfcc8a9377ad8dbc8bd2f81e75d30d7
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../display/amdgpu_dm/amdgpu_dm_services.c| 22 +--
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 0c720e50ea43..67f1245e70ef 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -430,31 +430,31 @@ bool dm_pp_apply_clock_for_voltage_request(
struct dm_pp_clock_for_voltage_req *clock_for_voltage_req)
 {
struct amdgpu_device *adev = ctx->driver_context;
-   struct pp_display_clock_request *pp_clock_request = {0};
+   struct pp_display_clock_request pp_clock_request = {0};
int ret = 0;
switch (clock_for_voltage_req->clk_type) {
case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
-   pp_clock_request->clock_type = amd_pp_disp_clock;
+   pp_clock_request.clock_type = amd_pp_disp_clock;
break;
 
case DM_PP_CLOCK_TYPE_DCEFCLK:
-   pp_clock_request->clock_type = amd_pp_dcef_clock;
+   pp_clock_request.clock_type = amd_pp_dcef_clock;
break;
 
case DM_PP_CLOCK_TYPE_PIXELCLK:
-   pp_clock_request->clock_type = amd_pp_pixel_clock;
+   pp_clock_request.clock_type = amd_pp_pixel_clock;
break;
 
default:
return false;
}
 
-   pp_clock_request->clock_freq_in_khz = 
clock_for_voltage_req->clocks_in_khz;
+   pp_clock_request.clock_freq_in_khz = 
clock_for_voltage_req->clocks_in_khz;
 
if (adev->powerplay.pp_funcs->display_clock_voltage_request)
ret = adev->powerplay.pp_funcs->display_clock_voltage_request(
adev->powerplay.pp_handle,
-   pp_clock_request);
+   _clock_request);
if (ret)
return false;
return true;
@@ -465,19 +465,19 @@ bool dm_pp_get_static_clocks(
struct dm_pp_static_clock_info *static_clk_info)
 {
struct amdgpu_device *adev = ctx->driver_context;
-   struct amd_pp_clock_info *pp_clk_info = {0};
+   struct amd_pp_clock_info pp_clk_info = {0};
int ret = 0;
 
if (adev->powerplay.pp_funcs->get_current_clocks)
ret = adev->powerplay.pp_funcs->get_current_clocks(
adev->powerplay.pp_handle,
-   pp_clk_info);
+   _clk_info);
if (ret)
return false;
 
-   static_clk_info->max_clocks_state = pp_clk_info->max_clocks_state;
-   static_clk_info->max_mclk_khz = pp_clk_info->max_memory_clock;
-   static_clk_info->max_sclk_khz = pp_clk_info->max_engine_clock;
+   static_clk_info->max_clocks_state = pp_clk_info.max_clocks_state;
+   static_clk_info->max_mclk_khz = pp_clk_info.max_memory_clock;
+   static_clk_info->max_sclk_khz = pp_clk_info.max_engine_clock;
 
return true;
 }
-- 
2.17.1



[PATCH 15/51] drm/amd/display: move dcn1 dispclk programming to dccg

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

No functional change.

Change-Id: Ia774240b5a2410df6bee9c6c912b5b1315d8989f
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Nikola Cornij 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   |  95 ++--
 .../gpu/drm/amd/display/dc/dce/dce_clocks.h   |   2 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 106 --
 .../drm/amd/display/dc/dcn10/dcn10_resource.c |   2 +-
 4 files changed, 90 insertions(+), 115 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index 6b6570ea998d..55f533cf55ba 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -552,7 +552,85 @@ static void dce12_update_clocks(struct dccg *dccg,
}
 }
 
-static void dcn_update_clocks(struct dccg *dccg,
+static int dcn1_determine_dppclk_threshold(struct dccg *dccg, struct dc_clocks 
*new_clocks)
+{
+   bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
+   bool dispclk_increase = new_clocks->dispclk_khz > 
dccg->clks.dispclk_khz;
+   int disp_clk_threshold = new_clocks->max_supported_dppclk_khz;
+   bool cur_dpp_div = dccg->clks.dispclk_khz > dccg->clks.dppclk_khz;
+
+   /* increase clock, looking for div is 0 for current, request div is 1*/
+   if (dispclk_increase) {
+   /* already divided by 2, no need to reach target clk with 2 
steps*/
+   if (cur_dpp_div)
+   return new_clocks->dispclk_khz;
+
+   /* request disp clk is lower than maximum supported dpp clk,
+* no need to reach target clk with two steps.
+*/
+   if (new_clocks->dispclk_khz <= disp_clk_threshold)
+   return new_clocks->dispclk_khz;
+
+   /* target dpp clk not request divided by 2, still within 
threshold */
+   if (!request_dpp_div)
+   return new_clocks->dispclk_khz;
+
+   } else {
+   /* decrease clock, looking for current dppclk divided by 2,
+* request dppclk not divided by 2.
+*/
+
+   /* current dpp clk not divided by 2, no need to ramp*/
+   if (!cur_dpp_div)
+   return new_clocks->dispclk_khz;
+
+   /* current disp clk is lower than current maximum dpp clk,
+* no need to ramp
+*/
+   if (dccg->clks.dispclk_khz <= disp_clk_threshold)
+   return new_clocks->dispclk_khz;
+
+   /* request dpp clk need to be divided by 2 */
+   if (request_dpp_div)
+   return new_clocks->dispclk_khz;
+   }
+
+   return disp_clk_threshold;
+}
+
+static void dcn1_ramp_up_dispclk_with_dpp(struct dccg *dccg, struct dc_clocks 
*new_clocks)
+{
+   struct dc *dc = dccg->ctx->dc;
+   int dispclk_to_dpp_threshold = dcn1_determine_dppclk_threshold(dccg, 
new_clocks);
+   bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
+   int i;
+
+   /* set disp clk to dpp clk threshold */
+   dccg->funcs->set_dispclk(dccg, dispclk_to_dpp_threshold);
+
+   /* update request dpp clk division option */
+   for (i = 0; i < dc->res_pool->pipe_count; i++) {
+   struct pipe_ctx *pipe_ctx = 
>current_state->res_ctx.pipe_ctx[i];
+
+   if (!pipe_ctx->plane_state)
+   continue;
+
+   pipe_ctx->plane_res.dpp->funcs->dpp_dppclk_control(
+   pipe_ctx->plane_res.dpp,
+   request_dpp_div,
+   true);
+   }
+
+   /* If target clk not same as dppclk threshold, set to target clock */
+   if (dispclk_to_dpp_threshold != new_clocks->dispclk_khz)
+   dccg->funcs->set_dispclk(dccg, new_clocks->dispclk_khz);
+
+   dccg->clks.dispclk_khz = new_clocks->dispclk_khz;
+   dccg->clks.dppclk_khz = new_clocks->dppclk_khz;
+   dccg->clks.max_supported_dppclk_khz = 
new_clocks->max_supported_dppclk_khz;
+}
+
+static void dcn1_update_clocks(struct dccg *dccg,
struct dc_clocks *new_clocks,
bool safe_to_lower)
 {
@@ -572,6 +650,9 @@ static void dcn_update_clocks(struct dccg *dccg,
send_request_to_increase = true;
 
 #ifdef CONFIG_DRM_AMD_DC_DCN1_0
+   /* make sure dcf clk is before dpp clk to
+* make sure we have enough voltage to run dpp clk
+*/
if (send_request_to_increase
) {
/*use dcfclk to request voltage*/
@@ -584,8 +665,8 @@ static void dcn_update_clocks(struct dccg *dccg,
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAY_CLK;
   

[PATCH 08/51] drm/amd/display: dal 3.1.48

2018-06-19 Thread Harry Wentland
From: Tony Cheng 

Change-Id: I74a0ddcba313f0e51d38c6cad058063b9586d936
Signed-off-by: Tony Cheng 
Reviewed-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 8ba90a75d128..82e0f55bc3e4 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.47"
+#define DC_VER "3.1.48"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.17.1



[PATCH 09/51] drm/amd/display: Introduce pp-smu raven functions

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Add DM powerplay calls for DCN10, allowing DC to bypass PPLib
and call the SMU functions directly.

Change-Id: I55952aca19299c7c61747ec15695c026f29dbbc8
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../display/amdgpu_dm/amdgpu_dm_services.c| 88 ++-
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  |  4 +-
 drivers/gpu/drm/amd/display/dc/dm_pp_smu.h|  6 +-
 3 files changed, 92 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 3dcd5f9af6e5..0c720e50ea43 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -34,6 +34,11 @@
 #include "amdgpu_dm.h"
 #include "amdgpu_dm_irq.h"
 #include "amdgpu_pm.h"
+#include "dm_pp_smu.h"
+#include "../../powerplay/inc/hwmgr.h"
+#include "../../powerplay/hwmgr/smu10_hwmgr.h"
+
+
 
 unsigned long long dm_get_timestamp(struct dc_context *ctx)
 {
@@ -477,9 +482,90 @@ bool dm_pp_get_static_clocks(
return true;
 }
 
+void pp_rv_set_display_requirement(struct pp_smu *pp,
+   struct pp_smu_display_requirement_rv *req)
+{
+   struct amdgpu_device *adev = pp->ctx->driver_context;
+   struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+   int ret = 0;
+   if (hwmgr->hwmgr_func->set_deep_sleep_dcefclk)
+   ret = hwmgr->hwmgr_func->set_deep_sleep_dcefclk(hwmgr, 
req->hard_min_dcefclk_khz/10);
+   if (hwmgr->hwmgr_func->set_active_display_count)
+   ret = hwmgr->hwmgr_func->set_active_display_count(hwmgr, 
req->display_count);
+
+   //store_cc6 is not yet implemented in SMU level
+}
+
+void pp_rv_set_wm_ranges(struct pp_smu *pp,
+   struct pp_smu_wm_range_sets *ranges)
+{
+   struct amdgpu_device *adev = pp->ctx->driver_context;
+   struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+   struct pp_wm_sets_with_clock_ranges_soc15 ranges_soc15 = {0};
+   int i = 0;
+
+   if (!hwmgr->hwmgr_func->set_watermarks_for_clocks_ranges ||
+   !pp || !ranges)
+   return;
+
+   //not entirely sure if thats a correct assignment
+   ranges_soc15.num_wm_sets_dmif = ranges->num_reader_wm_sets;
+   ranges_soc15.num_wm_sets_mcif = ranges->num_writer_wm_sets;
+
+   for (i = 0; i < ranges_soc15.num_wm_sets_dmif; i++) {
+   if (ranges->reader_wm_sets[i].wm_inst > 3)
+   ranges_soc15.wm_sets_dmif[i].wm_set_id = DC_WM_SET_A;
+   else
+   ranges_soc15.wm_sets_dmif[i].wm_set_id =
+   ranges->reader_wm_sets[i].wm_inst;
+   ranges_soc15.wm_sets_dmif[i].wm_max_dcefclk_in_khz =
+   ranges->reader_wm_sets[i].max_drain_clk_khz;
+   ranges_soc15.wm_sets_dmif[i].wm_min_dcefclk_in_khz =
+   ranges->reader_wm_sets[i].min_drain_clk_khz;
+   ranges_soc15.wm_sets_dmif[i].wm_max_memclk_in_khz =
+   ranges->reader_wm_sets[i].max_fill_clk_khz;
+   ranges_soc15.wm_sets_dmif[i].wm_min_memclk_in_khz =
+   ranges->reader_wm_sets[i].min_fill_clk_khz;
+   }
+
+   for (i = 0; i < ranges_soc15.num_wm_sets_mcif; i++) {
+   if (ranges->writer_wm_sets[i].wm_inst > 3)
+   ranges_soc15.wm_sets_dmif[i].wm_set_id = DC_WM_SET_A;
+   else
+   ranges_soc15.wm_sets_mcif[i].wm_set_id =
+   ranges->writer_wm_sets[i].wm_inst;
+   ranges_soc15.wm_sets_mcif[i].wm_max_socclk_in_khz =
+   ranges->writer_wm_sets[i].max_fill_clk_khz;
+   ranges_soc15.wm_sets_mcif[i].wm_min_socclk_in_khz =
+   ranges->writer_wm_sets[i].min_fill_clk_khz;
+   ranges_soc15.wm_sets_mcif[i].wm_max_memclk_in_khz =
+   ranges->writer_wm_sets[i].max_fill_clk_khz;
+   ranges_soc15.wm_sets_mcif[i].wm_min_memclk_in_khz =
+   ranges->writer_wm_sets[i].min_fill_clk_khz;
+   }
+
+   hwmgr->hwmgr_func->set_watermarks_for_clocks_ranges(hwmgr, 
_soc15);
+
+}
+
+void pp_rv_set_pme_wa_enable(struct pp_smu *pp)
+{
+   struct amdgpu_device *adev = pp->ctx->driver_context;
+   struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+
+   if (hwmgr->hwmgr_func->smus_notify_pwe)
+   hwmgr->hwmgr_func->smus_notify_pwe(hwmgr);
+}
+
 void dm_pp_get_funcs_rv(
struct dc_context *ctx,
struct pp_smu_funcs_rv *funcs)
-{}
+{
+   funcs->pp_smu.ctx = ctx;
+   funcs->set_display_requirement = pp_rv_set_display_requirement;
+   funcs->set_wm_ranges = pp_rv_set_wm_ranges;
+   

[PATCH 23/51] drm/amd/display: fix pplib voltage request

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

This fixes incorrect clock caching and by extension fixes
the clock reporting.

Change-Id: I47ec8d78da4f7938ed79c86a9ab475a8e1e47819
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Eric Yang 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 59 ++-
 1 file changed, 32 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index e62a21f55064..0a4ae0f49f99 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -570,37 +570,25 @@ static void dcn1_update_clocks(struct dccg *dccg,
bool send_request_to_increase = false;
bool send_request_to_lower = false;
 
+   if (new_clocks->phyclk_khz)
+   smu_req.display_count = 1;
+   else
+   smu_req.display_count = 0;
+
if (new_clocks->dispclk_khz > dccg->clks.dispclk_khz
|| new_clocks->phyclk_khz > dccg->clks.phyclk_khz
|| new_clocks->fclk_khz > dccg->clks.fclk_khz
|| new_clocks->dcfclk_khz > dccg->clks.dcfclk_khz)
send_request_to_increase = true;
 
-   /* make sure dcf clk is before dpp clk to
-* make sure we have enough voltage to run dpp clk
-*/
-   if (send_request_to_increase) {
-   /*use dcfclk to request voltage*/
-   clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
-   clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
-   dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
_voltage_req);
-   }
-
-   if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
-   dcn1_ramp_up_dispclk_with_dpp(dccg, new_clocks);
-   dccg->clks.dispclk_khz = new_clocks->dispclk_khz;
-
-   send_request_to_lower = true;
-   }
-
if (should_set_clock(safe_to_lower, new_clocks->phyclk_khz, 
dccg->clks.phyclk_khz)) {
-   clock_voltage_req.clocks_in_khz = new_clocks->phyclk_khz;
+   dccg->clks.phyclk_khz = new_clocks->phyclk_khz;
 
send_request_to_lower = true;
}
 
if (should_set_clock(safe_to_lower, new_clocks->fclk_khz, 
dccg->clks.fclk_khz)) {
-   dccg->clks.phyclk_khz = new_clocks->fclk_khz;
+   dccg->clks.fclk_khz = new_clocks->fclk_khz;
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_FCLK;
clock_voltage_req.clocks_in_khz = new_clocks->fclk_khz;
smu_req.hard_min_fclk_khz = new_clocks->fclk_khz;
@@ -610,7 +598,7 @@ static void dcn1_update_clocks(struct dccg *dccg,
}
 
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, 
dccg->clks.dcfclk_khz)) {
-   dccg->clks.phyclk_khz = new_clocks->dcfclk_khz;
+   dccg->clks.dcfclk_khz = new_clocks->dcfclk_khz;
smu_req.hard_min_dcefclk_khz = new_clocks->dcfclk_khz;
 
send_request_to_lower = true;
@@ -620,22 +608,39 @@ static void dcn1_update_clocks(struct dccg *dccg,
new_clocks->dcfclk_deep_sleep_khz, 
dccg->clks.dcfclk_deep_sleep_khz)) {
dccg->clks.dcfclk_deep_sleep_khz = 
new_clocks->dcfclk_deep_sleep_khz;
smu_req.min_deep_sleep_dcefclk_mhz = 
new_clocks->dcfclk_deep_sleep_khz;
+
+   send_request_to_lower = true;
}
 
-   if (!send_request_to_increase && send_request_to_lower) {
+   /* make sure dcf clk is before dpp clk to
+* make sure we have enough voltage to run dpp clk
+*/
+   if (send_request_to_increase) {
/*use dcfclk to request voltage*/
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
_voltage_req);
+   if (pp_smu->set_display_requirement)
+   pp_smu->set_display_requirement(_smu->pp_smu, 
_req);
}
 
-   if (new_clocks->phyclk_khz)
-   smu_req.display_count = 1;
-   else
-   smu_req.display_count = 0;
+   /* dcn1 dppclk is tied to dispclk */
+   if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
+   dcn1_ramp_up_dispclk_with_dpp(dccg, new_clocks);
+   dccg->clks.dispclk_khz = new_clocks->dispclk_khz;
+
+   send_request_to_lower = true;
+   }
+
+   if (!send_request_to_increase && send_request_to_lower) {
+   /*use dcfclk to request voltage*/
+   clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
+   clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
+   

[PATCH 19/51] drm/amd/display: remove unnecessary pplib voltage requests that are asserting

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: Ifb3b0175987d6e5d2d203896818b82742f125fe4
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index 242e8ae56025..df6a37b7b769 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -588,21 +588,15 @@ static void dcn1_update_clocks(struct dccg *dccg,
 #endif
 
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
-   clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAY_CLK;
-   clock_voltage_req.clocks_in_khz = new_clocks->dispclk_khz;
dcn1_ramp_up_dispclk_with_dpp(dccg, new_clocks);
dccg->clks.dispclk_khz = new_clocks->dispclk_khz;
 
-   dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
_voltage_req);
send_request_to_lower = true;
}
 
if (should_set_clock(safe_to_lower, new_clocks->phyclk_khz, 
dccg->clks.phyclk_khz)) {
-   dccg->clks.phyclk_khz = new_clocks->phyclk_khz;
-   clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAYPHYCLK;
clock_voltage_req.clocks_in_khz = new_clocks->phyclk_khz;
 
-   dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
_voltage_req);
send_request_to_lower = true;
}
 
@@ -618,8 +612,6 @@ static void dcn1_update_clocks(struct dccg *dccg,
 
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, 
dccg->clks.dcfclk_khz)) {
dccg->clks.phyclk_khz = new_clocks->dcfclk_khz;
-   clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
-   clock_voltage_req.clocks_in_khz = new_clocks->dcfclk_khz;
smu_req.hard_min_dcefclk_khz = new_clocks->dcfclk_khz;
 
send_request_to_lower = true;
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 03/51] drm/amd/display: rename display clock block to dccg

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I73b4dd52de0b5988fcb9013f01c65512070cc26f
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  2 +-
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 78 +--
 .../gpu/drm/amd/display/dc/dce/dce_clocks.h   | 16 ++--
 .../display/dc/dce100/dce100_hw_sequencer.c   |  4 +-
 .../amd/display/dc/dce100/dce100_resource.c   | 10 +--
 .../display/dc/dce110/dce110_hw_sequencer.c   |  4 +-
 .../amd/display/dc/dce110/dce110_resource.c   | 10 +--
 .../amd/display/dc/dce112/dce112_resource.c   | 10 +--
 .../amd/display/dc/dce120/dce120_resource.c   | 12 +--
 .../drm/amd/display/dc/dce80/dce80_resource.c | 22 +++---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 12 +--
 .../drm/amd/display/dc/dcn10/dcn10_resource.c |  8 +-
 .../gpu/drm/amd/display/dc/inc/core_types.h   |  4 +-
 .../drm/amd/display/dc/inc/hw/display_clock.h |  8 +-
 14 files changed, 100 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 45f9ecbb3d47..72f233963748 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1951,7 +1951,7 @@ void dc_resource_state_construct(
const struct dc *dc,
struct dc_state *dst_ctx)
 {
-   dst_ctx->dis_clk = dc->res_pool->display_clock;
+   dst_ctx->dis_clk = dc->res_pool->dccg;
 }
 
 enum dc_status dc_validate_global_state(
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index d3bbac85dff1..890a3ec68d49 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -38,7 +38,7 @@
 #include "dal_asic_id.h"
 
 #define TO_DCE_CLOCKS(clocks)\
-   container_of(clocks, struct dce_disp_clk, base)
+   container_of(clocks, struct dce_dccg, base)
 
 #define REG(reg) \
(clk_dce->regs->reg)
@@ -187,9 +187,9 @@ static int dce_divider_range_get_divider(
return div;
 }
 
-static int dce_clocks_get_dp_ref_freq(struct display_clock *clk)
+static int dce_clocks_get_dp_ref_freq(struct dccg *clk)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
int dprefclk_wdivider;
int dprefclk_src_sel;
int dp_ref_clk_khz = 60;
@@ -250,9 +250,9 @@ static int dce_clocks_get_dp_ref_freq(struct display_clock 
*clk)
  * or CLK0_CLK11 by SMU. For DCE120, it is wlays 600Mhz. Will re-visit
  * clock implementation
  */
-static int dce_clocks_get_dp_ref_freq_wrkaround(struct display_clock *clk)
+static int dce_clocks_get_dp_ref_freq_wrkaround(struct dccg *clk)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
int dp_ref_clk_khz = 60;
 
if (clk_dce->ss_on_dprefclk && clk_dce->dprefclk_ss_divider != 0) {
@@ -274,10 +274,10 @@ static int dce_clocks_get_dp_ref_freq_wrkaround(struct 
display_clock *clk)
return dp_ref_clk_khz;
 }
 static enum dm_pp_clocks_state dce_get_required_clocks_state(
-   struct display_clock *clk,
+   struct dccg *clk,
struct dc_clocks *req_clocks)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
int i;
enum dm_pp_clocks_state low_req_clk;
 
@@ -306,10 +306,10 @@ static enum dm_pp_clocks_state 
dce_get_required_clocks_state(
 }
 
 static int dce_set_clock(
-   struct display_clock *clk,
+   struct dccg *clk,
int requested_clk_khz)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
struct bp_pixel_clock_parameters pxl_clk_params = { 0 };
struct dc_bios *bp = clk->ctx->dc_bios;
int actual_clock = requested_clk_khz;
@@ -341,10 +341,10 @@ static int dce_set_clock(
 }
 
 static int dce_psr_set_clock(
-   struct display_clock *clk,
+   struct dccg *clk,
int requested_clk_khz)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
struct dc_context *ctx = clk_dce->base.ctx;
struct dc *core_dc = ctx->dc;
struct dmcu *dmcu = core_dc->res_pool->dmcu;
@@ -357,10 +357,10 @@ static int dce_psr_set_clock(
 }
 
 static int dce112_set_clock(
-   struct display_clock *clk,
+   struct dccg *clk,
int requested_clk_khz)
 {
-   struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(clk);
+   struct dce_dccg *clk_dce = TO_DCE_CLOCKS(clk);
struct bp_set_dce_clock_parameters dce_clk_params;
struct dc_bios *bp = clk->ctx->dc_bios;
struct dc *core_dc = clk->ctx->dc;
@@ -409,7 +409,7 @@ static int dce112_set_clock(
return actual_clock;
 }
 
-static 

[PATCH 10/51] drm/amd/display: remove invalid assert when no max_pixel_clk is found

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: If7c3c7a1dc754eef3fa60cc9d4f56a4779a10ece
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index d8a0741180b3..f723a29e8065 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -1761,9 +1761,6 @@ static uint32_t get_max_pixel_clock_for_all_paths(

pipe_ctx->stream_res.pix_clk_params.requested_pix_clk;
}
 
-   if (max_pix_clk == 0)
-   ASSERT(0);
-
return max_pix_clk;
 }
 
-- 
2.17.1



[PATCH 27/51] drm/amd/display: support ACrYCb2101010

2018-06-19 Thread Harry Wentland
From: "Zheng, XueLai(Eric)" 

Change-Id: I291b94fa4db12f42ae38bb9b9e3a78c24a63c423
Signed-off-by: XueLai(Eric), Zheng 
Reviewed-by: Charlene Liu 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h  | 1 +
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h 
b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
index 7e5a41fc8adc..f285d3754221 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
@@ -199,6 +199,7 @@ enum surface_pixel_format {
SURFACE_PIXEL_FORMAT_VIDEO_420_YCrCb,
SURFACE_PIXEL_FORMAT_VIDEO_420_10bpc_YCbCr,
SURFACE_PIXEL_FORMAT_VIDEO_420_10bpc_YCrCb,
+   SURFACE_PIXEL_FORMAT_SUBSAMPLE_END,
SURFACE_PIXEL_FORMAT_INVALID
 
/*grow 444 video here if necessary */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
index c28085be39ff..93f52c58bc69 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
@@ -166,7 +166,7 @@ void hubp1_program_size_and_rotation(
/* Program data and meta surface pitch (calculation from addrlib)
 * 444 or 420 luma
 */
-   if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) {
+   if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN && format < 
SURFACE_PIXEL_FORMAT_SUBSAMPLE_END) {
ASSERT(plane_size->video.chroma_pitch != 0);
/* Chroma pitch zero can cause system hang! */
 
-- 
2.17.1



[PATCH 43/51] drm/amd/display: separate out wm change request dcn workaround

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I4546eaa268498b5ca0a4b5f18c5f4c4446be8d76
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c   | 11 ++-
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h   |  2 ++
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c |  3 +++
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c |  1 +
 drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h |  1 +
 5 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
index 63b75ac4a1d5..623db09389b5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
@@ -190,6 +190,12 @@ static uint32_t convert_and_clamp(
 }
 
 
+void hubbub1_wm_change_req_wa(struct hubbub *hubbub)
+{
+   REG_UPDATE_SEQ(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL,
+   DCHUBBUB_ARB_WATERMARK_CHANGE_REQUEST, 0, 1);
+}
+
 void hubbub1_program_watermarks(
struct hubbub *hubbub,
struct dcn_watermark_set *watermarks,
@@ -203,8 +209,6 @@ void hubbub1_program_watermarks(
 */
uint32_t prog_wm_value;
 
-   REG_UPDATE(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL,
-   DCHUBBUB_ARB_WATERMARK_CHANGE_REQUEST, 0);
 
/* Repeat for water mark set A, B, C and D. */
/* clock state A */
@@ -459,9 +463,6 @@ void hubbub1_program_watermarks(
watermarks->d.cstate_pstate.pstate_change_ns, 
prog_wm_value);
}
 
-   REG_UPDATE(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL,
-   DCHUBBUB_ARB_WATERMARK_CHANGE_REQUEST, 1);
-
REG_UPDATE(DCHUBBUB_ARB_SAT_LEVEL,
DCHUBBUB_ARB_SAT_LEVEL, 60 * refclk_mhz);
REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
index 0ca39cb71968..d6e596eef4c5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
@@ -195,6 +195,8 @@ void hubbub1_update_dchub(
 bool hubbub1_verify_allow_pstate_change_high(
struct hubbub *hubbub);
 
+void hubbub1_wm_change_req_wa(struct hubbub *hubbub);
+
 void hubbub1_program_watermarks(
struct hubbub *hubbub,
struct dcn_watermark_set *watermarks,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index eaa8b0a3fb3a..d78802e751c8 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2320,6 +2320,9 @@ static void dcn10_apply_ctx_for_surface(
hubbub1_program_watermarks(dc->res_pool->hubbub,
>bw.dcn.watermarks, ref_clk_mhz, true);
 
+   if (dc->hwseq->wa.DEGVIDCN10_254)
+   hubbub1_wm_change_req_wa(dc->res_pool->hubbub);
+
if (dc->debug.sanity_checks) {
/* pstate stuck check after watermark update */
dcn10_verify_allow_pstate_change_high(dc);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
index 2db08b99db56..68be66eabc40 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
@@ -743,6 +743,7 @@ static struct dce_hwseq *dcn10_hwseq_create(
hws->masks = _mask;
hws->wa.DEGVIDCN10_253 = true;
hws->wa.false_optc_underflow = true;
+   hws->wa.DEGVIDCN10_254 = true;
}
return hws;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h 
b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
index a71770ed4b9f..1c94dae6bbde 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
@@ -44,6 +44,7 @@ struct dce_hwseq_wa {
bool blnd_crtc_trigger;
bool DEGVIDCN10_253;
bool false_optc_underflow;
+   bool DEGVIDCN10_254;
 };
 
 struct hwseq_wa_state {
-- 
2.17.1



[PATCH 32/51] drm/amd/display: clean rq/dlg/ttu reg structs before calculations

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I7b6c3f223cbb383c9c041a18a7f7b00e0a5be89b
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 4 
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 8 ++--
 .../gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c | 2 --
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index b8195e5a0676..ac4451adeec9 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -423,6 +423,10 @@ static void dcn_bw_calc_rq_dlg_ttu(
int total_flip_bytes = 0;
int i;
 
+   memset(dlg_regs, 0, sizeof(*dlg_regs));
+   memset(ttu_regs, 0, sizeof(*ttu_regs));
+   memset(rq_regs, 0, sizeof(*rq_regs));
+
for (i = 0; i < number_of_planes; i++) {
total_active_bw += v->read_bandwidth[i];
total_prefetch_bw += v->prefetch_bandwidth[i];
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 1170ea0472e1..eaa8b0a3fb3a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -151,19 +151,23 @@ static void dcn10_log_hubp_states(struct dc *dc)
 
DTN_INFO("\n=RQ\n");
DTN_INFO("HUBP:  drq_exp_m  prq_exp_m  mrq_exp_m  crq_exp_m  plane1_ba  
L:chunk_s  min_chu_s  meta_ch_s"
+   "  min_m_c_s  dpte_gr_s  mpte_gr_s  swath_hei  pte_row_h  
C:chunk_s  min_chu_s  meta_ch_s"
"  min_m_c_s  dpte_gr_s  mpte_gr_s  swath_hei  pte_row_h\n");
for (i = 0; i < pool->pipe_count; i++) {
struct dcn_hubp_state *s = 
&(TO_DCN10_HUBP(pool->hubps[i])->state);
struct _vcs_dpi_display_rq_regs_st *rq_regs = >rq_regs;
 
if (!s->blank_en)
-   DTN_INFO("[%2d]:  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh\n",
+   DTN_INFO("[%2d]:  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh\n",
pool->hubps[i]->inst, 
rq_regs->drq_expansion_mode, rq_regs->prq_expansion_mode, 
rq_regs->mrq_expansion_mode,
rq_regs->crq_expansion_mode, 
rq_regs->plane1_base_address, rq_regs->rq_regs_l.chunk_size,
rq_regs->rq_regs_l.min_chunk_size, 
rq_regs->rq_regs_l.meta_chunk_size,
rq_regs->rq_regs_l.min_meta_chunk_size, 
rq_regs->rq_regs_l.dpte_group_size,
rq_regs->rq_regs_l.mpte_group_size, 
rq_regs->rq_regs_l.swath_height,
-   rq_regs->rq_regs_l.pte_row_height_linear);
+   rq_regs->rq_regs_l.pte_row_height_linear, 
rq_regs->rq_regs_c.chunk_size, rq_regs->rq_regs_c.min_chunk_size,
+   rq_regs->rq_regs_c.meta_chunk_size, 
rq_regs->rq_regs_c.min_meta_chunk_size,
+   rq_regs->rq_regs_c.dpte_group_size, 
rq_regs->rq_regs_c.mpte_group_size,
+   rq_regs->rq_regs_c.swath_height, 
rq_regs->rq_regs_c.pte_row_height_linear);
}
 
DTN_INFO("DLG\n");
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c 
b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
index 0efbf411667a..c2037daa8e66 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
@@ -239,8 +239,6 @@ void dml1_extract_rq_regs(
extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), 
rq_param.sizing.rq_l);
if (rq_param.yuv420)
extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), 
rq_param.sizing.rq_c);
-   else
-   memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
 
rq_regs->rq_regs_l.swath_height = 
dml_log2(rq_param.dlg.rq_l.swath_height);
rq_regs->rq_regs_c.swath_height = 
dml_log2(rq_param.dlg.rq_c.swath_height);
-- 
2.17.1



[PATCH 31/51] drm/amd/display: dal 3.1.50

2018-06-19 Thread Harry Wentland
From: Tony Cheng 

Change-Id: I5a44c0d4289122f7738efa674dc079cfedb258b9
Signed-off-by: Tony Cheng 
Reviewed-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index d1ac9676a539..cdab7b0453c5 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.49"
+#define DC_VER "3.1.50"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.17.1



[PATCH 36/51] drm/amd/display: Add dmpp clks types for conversion

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Add more cases to the dm_pp clock translator so we can pass
the right pp clock structures to powerplay.
Use the clock translator instead of a massive switch statement.

Change-Id: Ic757734d9e708f81bd72cfe00c55a627a51f8d7a
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../display/amdgpu_dm/amdgpu_dm_services.c| 41 ---
 1 file changed, 18 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 025f37f62c4b..a3b8b295aa27 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -203,6 +203,21 @@ static enum amd_pp_clock_type dc_to_pp_clock_type(
case DM_PP_CLOCK_TYPE_MEMORY_CLK:
amd_pp_clk_type = amd_pp_mem_clock;
break;
+   case DM_PP_CLOCK_TYPE_DCEFCLK:
+   amd_pp_clk_type  = amd_pp_dcef_clock;
+   break;
+   case DM_PP_CLOCK_TYPE_DCFCLK:
+   amd_pp_clk_type = amd_pp_dcf_clock;
+   break;
+   case DM_PP_CLOCK_TYPE_PIXELCLK:
+   amd_pp_clk_type = amd_pp_pixel_clock;
+   break;
+   case DM_PP_CLOCK_TYPE_FCLK:
+   amd_pp_clk_type = amd_pp_f_clock;
+   break;
+   case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
+   amd_pp_clk_type = amd_pp_dpp_clock;
+   break;
default:
DRM_ERROR("DM_PPLIB: invalid clock type: %d!\n",
dm_pp_clk_type);
@@ -432,32 +447,12 @@ bool dm_pp_apply_clock_for_voltage_request(
struct amdgpu_device *adev = ctx->driver_context;
struct pp_display_clock_request pp_clock_request = {0};
int ret = 0;
-   switch (clock_for_voltage_req->clk_type) {
-   case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
-   pp_clock_request.clock_type = amd_pp_disp_clock;
-   break;
-
-   case DM_PP_CLOCK_TYPE_DCEFCLK:
-   pp_clock_request.clock_type = amd_pp_dcef_clock;
-   break;
 
-   case DM_PP_CLOCK_TYPE_DCFCLK:
-   pp_clock_request.clock_type = amd_pp_dcf_clock;
-   break;
-
-   case DM_PP_CLOCK_TYPE_PIXELCLK:
-   pp_clock_request.clock_type = amd_pp_pixel_clock;
-   break;
-
-   case DM_PP_CLOCK_TYPE_FCLK:
-   pp_clock_request.clock_type = amd_pp_f_clock;
-   break;
+   pp_clock_request.clock_type = 
dc_to_pp_clock_type(clock_for_voltage_req->clk_type);
+   pp_clock_request.clock_freq_in_khz = 
clock_for_voltage_req->clocks_in_khz;
 
-   default:
+   if (!pp_clock_request.clock_type)
return false;
-   }
-
-   pp_clock_request.clock_freq_in_khz = 
clock_for_voltage_req->clocks_in_khz;
 
if (adev->powerplay.pp_funcs->display_clock_voltage_request)
ret = adev->powerplay.pp_funcs->display_clock_voltage_request(
-- 
2.17.1



[PATCH 30/51] drm/amd/display: Add front end for dp debugfs files

2018-06-19 Thread Harry Wentland
From: David Francis 

As part of hardware certification, read-write access to
the link rate, lane count, voltage swing, pre-emphasis,
and PHY test pattern of DP connectors is required.  This commit
adds debugfs files corresponding to these values.
The file operations are not yet implemented: currently,
reading or writing them does nothing.

Change-Id: Ide5499df3f9c3d0d01741e0acd8700cd973a36b0
Signed-off-by: David Francis 
Reviewed-by: Harry Wentland 
---
 .../gpu/drm/amd/display/amdgpu_dm/Makefile|   2 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  10 ++
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 170 ++
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.h |  34 
 4 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
 create mode 100644 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.h

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile 
b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
index af16973f2c41..589c60ec59bd 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
@@ -32,7 +32,7 @@ AMDGPUDM += amdgpu_dm_services.o amdgpu_dm_helpers.o
 endif
 
 ifneq ($(CONFIG_DEBUG_FS),)
-AMDGPUDM += amdgpu_dm_crc.o
+AMDGPUDM += amdgpu_dm_crc.o amdgpu_dm_debugfs.o
 endif
 
 subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/dc
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 90d7b9225b90..d2f89e267ad6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -39,6 +39,9 @@
 #include "dm_helpers.h"
 #include "dm_services_types.h"
 #include "amdgpu_dm_mst_types.h"
+#if defined(CONFIG_DEBUG_FS)
+#include "amdgpu_dm_debugfs.h"
+#endif
 
 #include "ivsrcid/ivsrcid_vislands30.h"
 
@@ -3686,6 +3689,13 @@ static int amdgpu_dm_connector_init(struct 
amdgpu_display_manager *dm,
>base, >base);
 
drm_connector_register(>base);
+#if defined(CONFIG_DEBUG_FS)
+   res = connector_debugfs_init(aconnector);
+   if (res) {
+   DRM_ERROR("Failed to create debugfs for connector");
+   goto out_free;
+   }
+#endif
 
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort
|| connector_type == DRM_MODE_CONNECTOR_eDP)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
new file mode 100644
index ..cf5ea69e46ad
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include 
+
+#include "dc.h"
+#include "dc_link.h"
+
+#include "amdgpu.h"
+#include "amdgpu_dm.h"
+#include "amdgpu_dm_debugfs.h"
+
+static ssize_t dp_link_rate_debugfs_read(struct file *f, char __user *buf,
+size_t size, loff_t *pos)
+{
+   /* TODO: create method to read link rate */
+   return 1;
+}
+
+static ssize_t dp_link_rate_debugfs_write(struct file *f, const char __user 
*buf,
+size_t size, loff_t *pos)
+{
+   /* TODO: create method to write link rate */
+   return 1;
+}
+
+static ssize_t dp_lane_count_debugfs_read(struct file *f, char __user *buf,
+size_t size, loff_t *pos)
+{
+   /* TODO: create method to read lane count */
+   return 1;
+}
+
+static ssize_t dp_lane_count_debugfs_write(struct file *f, const char __user 
*buf,
+size_t size, loff_t *pos)
+{
+   /* TODO: create method to write lane count */
+   return 1;
+}
+
+static ssize_t dp_voltage_swing_debugfs_read(struct file *f, char __user *buf,
+ 

[PATCH 37/51] drm/amd/display: Convert 10kHz clks from PPLib into kHz

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

The driver expects clock frequencies in kHz, while the SMU returns
values in units of 10 kHz, which causes bandwidth validation to fail.
Change-Id: I7b79af18d200fd2157193ee9041c675fe66c391c
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../amd/display/amdgpu_dm/amdgpu_dm_services.c| 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index a3b8b295aa27..3b7b74d3cdcf 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -275,8 +275,9 @@ static void pp_to_dc_clock_levels_with_latency(
DC_DECODE_PP_CLOCK_TYPE(dc_clk_type));
 
for (i = 0; i < clk_level_info->num_levels; i++) {
-   DRM_DEBUG("DM_PPLIB:\t %d\n", pp_clks->data[i].clocks_in_khz);
-   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz;
+   DRM_DEBUG("DM_PPLIB:\t %d in 10kHz\n", 
pp_clks->data[i].clocks_in_khz);
+   /* translate 10kHz to kHz */
+   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz * 10;
clk_level_info->data[i].latency_in_us = 
pp_clks->data[i].latency_in_us;
}
 }
@@ -302,8 +303,9 @@ static void pp_to_dc_clock_levels_with_voltage(
DC_DECODE_PP_CLOCK_TYPE(dc_clk_type));
 
for (i = 0; i < clk_level_info->num_levels; i++) {
-   DRM_INFO("DM_PPLIB:\t %d\n", pp_clks->data[i].clocks_in_khz);
-   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz;
+   DRM_INFO("DM_PPLIB:\t %d in 10kHz\n", 
pp_clks->data[i].clocks_in_khz);
+   /* translate 10kHz to kHz */
+   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz * 10;
clk_level_info->data[i].voltage_in_mv = 
pp_clks->data[i].voltage_in_mv;
}
 }
@@ -479,8 +481,9 @@ bool dm_pp_get_static_clocks(
return false;
 
static_clk_info->max_clocks_state = pp_clk_info.max_clocks_state;
-   static_clk_info->max_mclk_khz = pp_clk_info.max_memory_clock;
-   static_clk_info->max_sclk_khz = pp_clk_info.max_engine_clock;
+   /* translate 10kHz to kHz */
+   static_clk_info->max_mclk_khz = pp_clk_info.max_memory_clock * 10;
+   static_clk_info->max_sclk_khz = pp_clk_info.max_engine_clock * 10;
 
return true;
 }
-- 
2.17.1



[PATCH 48/51] drm/amd/display: add valid regoffset and NULL pointer check

2018-06-19 Thread Harry Wentland
From: Charlene Liu 

Change-Id: I31c377a4ab1c428d260cf311b7e28c6137159cf7
Signed-off-by: Charlene Liu 
Reviewed-by: Dmytro Laktyushkin 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  8 +++---
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |  9 ---
 .../display/dc/dce110/dce110_hw_sequencer.c   |  7 ++---
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.c   |  5 
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 26 ++-
 5 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 2f9c23d94b50..053ef83ea60f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -33,6 +33,7 @@
 #include "dc_link_dp.h"
 #include "dc_link_ddc.h"
 #include "link_hwss.h"
+#include "opp.h"
 
 #include "link_encoder.h"
 #include "hw_sequencer.h"
@@ -2416,9 +2417,10 @@ void core_link_enable_stream(
core_dc->hwss.enable_audio_stream(pipe_ctx);
 
/* turn off otg test pattern if enable */
-   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
-   CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
-   COLOR_DEPTH_UNDEFINED);
+   if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
+   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
+   CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
+   COLOR_DEPTH_UNDEFINED);
 
core_dc->hwss.enable_stream(pipe_ctx);
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 72a8a55565c8..aa27a301653f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -3,6 +3,7 @@
 #include "dc.h"
 #include "dc_link_dp.h"
 #include "dm_helpers.h"
+#include "opp.h"
 
 #include "inc/core_types.h"
 #include "link_hwss.h"
@@ -2514,8 +2515,8 @@ static void set_crtc_test_pattern(struct dc_link *link,
pipe_ctx->stream->bit_depth_params = params;
pipe_ctx->stream_res.opp->funcs->

opp_program_bit_depth_reduction(pipe_ctx->stream_res.opp, );
-
-   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
+   if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
+   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
controller_test_pattern, color_depth);
}
break;
@@ -2527,8 +2528,8 @@ static void set_crtc_test_pattern(struct dc_link *link,
pipe_ctx->stream->bit_depth_params = params;
pipe_ctx->stream_res.opp->funcs->

opp_program_bit_depth_reduction(pipe_ctx->stream_res.opp, );
-
-   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
+   if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
+   
pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
color_depth);
}
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 9ea576cbbefa..b96525036f94 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -1449,7 +1449,7 @@ static void power_down_controllers(struct dc *dc)
 {
int i;
 
-   for (i = 0; i < dc->res_pool->pipe_count; i++) {
+   for (i = 0; i < dc->res_pool->timing_generator_count; i++) {
dc->res_pool->timing_generators[i]->funcs->disable_crtc(
dc->res_pool->timing_generators[i]);
}
@@ -1489,12 +1489,13 @@ static void disable_vga_and_power_gate_all_controllers(
struct timing_generator *tg;
struct dc_context *ctx = dc->ctx;
 
-   for (i = 0; i < dc->res_pool->pipe_count; i++) {
+   for (i = 0; i < dc->res_pool->timing_generator_count; i++) {
tg = dc->res_pool->timing_generators[i];
 
if (tg->funcs->disable_vga)
tg->funcs->disable_vga(tg);
-
+   }
+   for (i = 0; i < dc->res_pool->pipe_count; i++) {
/* Enable CLOCK gating for each pipe BEFORE controller
 * powergating. */
enable_display_pipe_clock_gating(ctx,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
index 623db09389b5..1ea91e153d3a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
@@ -483,6 +483,11 @@ void 

[PATCH 41/51] drm/amd/display: fix dcn1 watermark range reporting

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I984632d3eda21200e3911be5ec197801c3b855fb
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 102 --
 1 file changed, 18 insertions(+), 84 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index ac4451adeec9..8dc0773b285e 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -1335,21 +1335,14 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
 {
struct pp_smu_funcs_rv *pp = dc->res_pool->pp_smu;
struct pp_smu_wm_range_sets ranges = {0};
-   int max_fclk_khz, nom_fclk_khz, mid_fclk_khz, min_fclk_khz;
-   int max_dcfclk_khz, min_dcfclk_khz;
-   int socclk_khz;
+   int min_fclk_khz, min_dcfclk_khz, socclk_khz;
const int overdrive = 500; /* 5 GHz to cover Overdrive */
-   unsigned factor = (ddr4_dram_factor_single_Channel * 
dc->dcn_soc->number_of_channels);
 
if (!pp->set_wm_ranges)
return;
 
kernel_fpu_begin();
-   max_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 * 100 
/ factor;
-   nom_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8 * 100 
/ factor;
-   mid_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72 * 
100 / factor;
min_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 * 
100 / 32;
-   max_dcfclk_khz = dc->dcn_soc->dcfclkv_max0p9 * 1000;
min_dcfclk_khz = dc->dcn_soc->dcfclkv_min0p65 * 1000;
socclk_khz = dc->dcn_soc->socclk * 1000;
kernel_fpu_end();
@@ -1357,7 +1350,7 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
/* Now notify PPLib/SMU about which Watermarks sets they should select
 * depending on DPM state they are in. And update BW MGR GFX Engine and
 * Memory clock member variables for Watermarks calculations for each
-* Watermark Set
+* Watermark Set. Only one watermark set for dcn1 due to hw bug 
DEGVIDCN10-254.
 */
/* SOCCLK does not affect anytihng but writeback for DCN so for now we 
dont
 * care what the value is, hence min to overdrive level
@@ -1366,96 +1359,37 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
ranges.num_writer_wm_sets = WM_SET_COUNT;
ranges.reader_wm_sets[0].wm_inst = WM_A;
ranges.reader_wm_sets[0].min_drain_clk_khz = min_dcfclk_khz;
-   ranges.reader_wm_sets[0].max_drain_clk_khz = max_dcfclk_khz;
+   ranges.reader_wm_sets[0].max_drain_clk_khz = overdrive;
ranges.reader_wm_sets[0].min_fill_clk_khz = min_fclk_khz;
-   ranges.reader_wm_sets[0].max_fill_clk_khz = min_fclk_khz;
+   ranges.reader_wm_sets[0].max_fill_clk_khz = overdrive;
ranges.writer_wm_sets[0].wm_inst = WM_A;
ranges.writer_wm_sets[0].min_fill_clk_khz = socclk_khz;
ranges.writer_wm_sets[0].max_fill_clk_khz = overdrive;
ranges.writer_wm_sets[0].min_drain_clk_khz = min_fclk_khz;
-   ranges.writer_wm_sets[0].max_drain_clk_khz = min_fclk_khz;
-
-   ranges.reader_wm_sets[1].wm_inst = WM_B;
-   ranges.reader_wm_sets[1].min_drain_clk_khz = min_fclk_khz;
-   ranges.reader_wm_sets[1].max_drain_clk_khz = max_dcfclk_khz;
-   ranges.reader_wm_sets[1].min_fill_clk_khz = mid_fclk_khz;
-   ranges.reader_wm_sets[1].max_fill_clk_khz = mid_fclk_khz;
-   ranges.writer_wm_sets[1].wm_inst = WM_B;
-   ranges.writer_wm_sets[1].min_fill_clk_khz = socclk_khz;
-   ranges.writer_wm_sets[1].max_fill_clk_khz = overdrive;
-   ranges.writer_wm_sets[1].min_drain_clk_khz = mid_fclk_khz;
-   ranges.writer_wm_sets[1].max_drain_clk_khz = mid_fclk_khz;
-
-
-   ranges.reader_wm_sets[2].wm_inst = WM_C;
-   ranges.reader_wm_sets[2].min_drain_clk_khz = min_fclk_khz;
-   ranges.reader_wm_sets[2].max_drain_clk_khz = max_dcfclk_khz;
-   ranges.reader_wm_sets[2].min_fill_clk_khz = nom_fclk_khz;
-   ranges.reader_wm_sets[2].max_fill_clk_khz = nom_fclk_khz;
-   ranges.writer_wm_sets[2].wm_inst = WM_C;
-   ranges.writer_wm_sets[2].min_fill_clk_khz = socclk_khz;
-   ranges.writer_wm_sets[2].max_fill_clk_khz = overdrive;
-   ranges.writer_wm_sets[2].min_drain_clk_khz = nom_fclk_khz;
-   ranges.writer_wm_sets[2].max_drain_clk_khz = nom_fclk_khz;
-
-   ranges.reader_wm_sets[3].wm_inst = WM_D;
-   ranges.reader_wm_sets[3].min_drain_clk_khz = min_fclk_khz;
-   ranges.reader_wm_sets[3].max_drain_clk_khz = max_dcfclk_khz;
-   ranges.reader_wm_sets[3].min_fill_clk_khz = max_fclk_khz;
-   ranges.reader_wm_sets[3].max_fill_clk_khz = max_fclk_khz;
-   ranges.writer_wm_sets[3].wm_inst = WM_D;
-   ranges.writer_wm_sets[3].min_fill_clk_khz = socclk_khz;
-   ranges.writer_wm_sets[3].max_fill_clk_khz = overdrive;
-   

[PATCH 28/51] drm/amd/display: fix use of uninitialized memory

2018-06-19 Thread Harry Wentland
From: Wesley Chalmers 

DML does not calculate chroma values for RQ when the surface is not YUV, but DC
will unconditionally use the uninitialized values for HW programming.
This does not cause visual corruption, since HW ignores garbage chroma
values when the surface is not YUV, but it causes presubmission tests to fail
the golden-value comparison.
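The pattern in the one-line fix below (zero the chroma registers whenever the chroma path is skipped) can be shown in isolation. This is a hypothetical, heavily simplified mirror of the RQ register pair, not the real DML structures:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for the luma/chroma RQ sizing registers. */
struct rq_sizing { unsigned chunk_size; unsigned min_chunk_size; };
struct rq_regs   { struct rq_sizing rq_regs_l, rq_regs_c; };

/* Pattern from the patch: the luma side is always filled, the chroma side
 * only for YUV surfaces.  When chroma is not computed, memset it to zero
 * instead of leaving whatever garbage was on the stack, so register dumps
 * compare deterministically against golden values. */
static void extract_rq_regs(struct rq_regs *regs, int yuv420)
{
	regs->rq_regs_l.chunk_size = 8;          /* always computed */
	regs->rq_regs_l.min_chunk_size = 2;
	if (yuv420) {
		regs->rq_regs_c.chunk_size = 4;      /* chroma path */
		regs->rq_regs_c.min_chunk_size = 1;
	} else {
		memset(&regs->rq_regs_c, 0, sizeof(regs->rq_regs_c));
	}
}
```

Without the `else` branch, a caller that pre-fills the struct with garbage would see nondeterministic chroma fields for RGB surfaces, which is exactly the test failure described above.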

Change-Id: I44aa094f7bae9c26fc27e108b3eb284db5b00a56
Signed-off-by: Wesley Chalmers 
Signed-off-by: Eryk Brol 
Reviewed-by: Wenjing Liu 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c 
b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
index c2037daa8e66..0efbf411667a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
@@ -239,6 +239,8 @@ void dml1_extract_rq_regs(
extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), 
rq_param.sizing.rq_l);
if (rq_param.yuv420)
extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), 
rq_param.sizing.rq_c);
+   else
+   memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
 
rq_regs->rq_regs_l.swath_height = 
dml_log2(rq_param.dlg.rq_l.swath_height);
rq_regs->rq_regs_c.swath_height = 
dml_log2(rq_param.dlg.rq_c.swath_height);
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 51/51] drm/amd/display: Program vsc_infopacket in commit_planes_for_stream

2018-06-19 Thread Harry Wentland
From: Alvin lee 

Change-Id: I0867e7e8d85a64cf35d9de2c319699ae027091d2
Signed-off-by: Alvin lee 
Reviewed-by: Jun Lei 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc.c   | 3 ++-
 drivers/gpu/drm/amd/display/dc/dc_stream.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index e855424ee575..ffd4a423d8ff 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1321,7 +1321,8 @@ static void commit_planes_do_stream_update(struct dc *dc,
}
 
if (stream_update->hdr_static_metadata ||
-   stream_update->vrr_infopacket) {
+   stream_update->vrr_infopacket ||
+   stream_update->vsc_infopacket) {
resource_build_info_frame(pipe_ctx);
dc->hwss.update_info_frame(pipe_ctx);
}
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h 
b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index bc496906b695..f0e8d19efa4a 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -128,6 +128,7 @@ struct dc_stream_update {
unsigned long long *periodic_fn_vsync_delta;
struct dc_crtc_timing_adjust *adjust;
struct dc_info_packet *vrr_infopacket;
+   struct dc_info_packet *vsc_infopacket;
 
bool *dpms_off;
 };
-- 
2.17.1



[PATCH 50/51] drm/amd/display: Allow option to use worst-case watermark

2018-06-19 Thread Harry Wentland
From: Tony Cheng 

Use worst-case watermarks (considering both DCC and VM)
to keep golden values consistent regardless of DCC state.
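The trade-off this patch encodes can be sketched as a small predicate. This is a hypothetical simplification of the logic in `dcn_validate_bandwidth` below, with plain ints standing in for the driver's types:

```c
#include <assert.h>

enum bw_bool { BW_NO = 0, BW_YES = 1 };

/* Sketch of the choice the patch adds: with optimized_watermark, track the
 * real per-flip DCC state (exact, but watermarks must be recalculated when
 * DCC toggles between flips); otherwise assume DCC whenever the pixel
 * format *could* be compressed, taking a slightly higher watermark so DCC
 * can be disabled on the fly without recalculation. */
static enum bw_bool dcc_for_watermark(int optimized_watermark,
				      int dcc_enabled_now,
				      int format_supports_dcc)
{
	if (optimized_watermark)
		return dcc_enabled_now ? BW_YES : BW_NO;   /* exact */
	return format_supports_dcc ? BW_YES : BW_NO;       /* worst case */
}
```

Per the comment in the hunk, the worst-case cost is small: for 1080p the urgent watermark without DCC is only about 0.417us lower.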

Change-Id: Ibc7ef72099dcd0a514a600ceae2e3ddc19f47a48
Signed-off-by: Tony Cheng 
Reviewed-by: Aric Cyr 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 23 ++-
 drivers/gpu/drm/amd/display/dc/dc.h   |  1 +
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 12261fbc25e0..e44b8d3d6891 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -31,6 +31,8 @@
 
 #include "resource.h"
 #include "dcn10/dcn10_resource.h"
+#include "dcn10/dcn10_hubbub.h"
+
 #include "dcn_calc_math.h"
 
 #define DC_LOGGER \
@@ -889,7 +891,26 @@ bool dcn_validate_bandwidth(

ASSERT(pipe->plane_res.scl_data.ratios.vert.value != dc_fixpt_one.value
|| v->scaler_rec_out_width[input_idx] 
== v->viewport_height[input_idx]);
}
-   v->dcc_enable[input_idx] = 
pipe->plane_state->dcc.enable ? dcn_bw_yes : dcn_bw_no;
+
+   if (dc->debug.optimized_watermark) {
+   /*
+* this method requires us to always re-calculate
+* watermarks when dcc changes between flips.
+*/
+   v->dcc_enable[input_idx] = 
pipe->plane_state->dcc.enable ? dcn_bw_yes : dcn_bw_no;
+   } else {
+   /*
+* allow us to disable dcc on the fly without re-calculating WM
+*
+* the extra overhead for DCC is quite small: for 1080p the WM
+* without DCC is only 0.417us lower (urgent goes from 6.979us
+* to 6.562us)
+*/
+   unsigned int bpe;
+
+   v->dcc_enable[input_idx] = 
dc->res_pool->hubbub->funcs->dcc_support_pixel_format(
+   pipe->plane_state->format, 
) ? dcn_bw_yes : dcn_bw_no;
+   }
+
v->source_pixel_format[input_idx] = 
tl_pixel_format_to_bw_defs(
pipe->plane_state->format);
v->source_surface_mode[input_idx] = 
tl_sw_mode_to_bw_defs(
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 74e6653b9852..0cb7e10d2505 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -233,6 +233,7 @@ struct dc_debug {
int urgent_latency_ns;
int percent_of_ideal_drambw;
int dram_clock_change_latency_ns;
+   bool optimized_watermark;
int always_scale;
bool disable_pplib_clock_request;
bool disable_clock_gate;
-- 
2.17.1



[PATCH 38/51] drm/amd/display: move dml defaults to respective dcn resource files

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I77cb3bca09bd2939e66c538bf1c317d0639711c0
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/dcn10/dcn10_resource.c | 62 ++
 .../drm/amd/display/dc/dml/display_mode_lib.c | 63 +--
 2 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
index 1761e1a40dad..2db08b99db56 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
@@ -65,6 +65,68 @@
 #include "dce/dce_abm.h"
 #include "dce/dce_dmcu.h"
 
+const struct _vcs_dpi_ip_params_st dcn1_0_ip = {
+   .rob_buffer_size_kbytes = 64,
+   .det_buffer_size_kbytes = 164,
+   .dpte_buffer_size_in_pte_reqs = 42,
+   .dpp_output_buffer_pixels = 2560,
+   .opp_output_buffer_lines = 1,
+   .pixel_chunk_size_kbytes = 8,
+   .pte_enable = 1,
+   .pte_chunk_size_kbytes = 2,
+   .meta_chunk_size_kbytes = 2,
+   .writeback_chunk_size_kbytes = 2,
+   .line_buffer_size_bits = 589824,
+   .max_line_buffer_lines = 12,
+   .IsLineBufferBppFixed = 0,
+   .LineBufferFixedBpp = -1,
+   .writeback_luma_buffer_size_kbytes = 12,
+   .writeback_chroma_buffer_size_kbytes = 8,
+   .max_num_dpp = 4,
+   .max_num_wb = 2,
+   .max_dchub_pscl_bw_pix_per_clk = 4,
+   .max_pscl_lb_bw_pix_per_clk = 2,
+   .max_lb_vscl_bw_pix_per_clk = 4,
+   .max_vscl_hscl_bw_pix_per_clk = 4,
+   .max_hscl_ratio = 4,
+   .max_vscl_ratio = 4,
+   .hscl_mults = 4,
+   .vscl_mults = 4,
+   .max_hscl_taps = 8,
+   .max_vscl_taps = 8,
+   .dispclk_ramp_margin_percent = 1,
+   .underscan_factor = 1.10,
+   .min_vblank_lines = 14,
+   .dppclk_delay_subtotal = 90,
+   .dispclk_delay_subtotal = 42,
+   .dcfclk_cstate_latency = 10,
+   .max_inter_dcn_tile_repeaters = 8,
+   .can_vstartup_lines_exceed_vsync_plus_back_porch_lines_minus_one = 0,
+   .bug_forcing_LC_req_same_size_fixed = 0,
+};
+
+const struct _vcs_dpi_soc_bounding_box_st dcn1_0_soc = {
+   .sr_exit_time_us = 9.0,
+   .sr_enter_plus_exit_time_us = 11.0,
+   .urgent_latency_us = 4.0,
+   .writeback_latency_us = 12.0,
+   .ideal_dram_bw_after_urgent_percent = 80.0,
+   .max_request_size_bytes = 256,
+   .downspread_percent = 0.5,
+   .dram_page_open_time_ns = 50.0,
+   .dram_rw_turnaround_time_ns = 17.5,
+   .dram_return_buffer_per_channel_bytes = 8192,
+   .round_trip_ping_latency_dcfclk_cycles = 128,
+   .urgent_out_of_order_return_per_channel_bytes = 256,
+   .channel_interleave_bytes = 256,
+   .num_banks = 8,
+   .num_chans = 2,
+   .vmm_page_size_bytes = 4096,
+   .dram_clock_change_latency_us = 17.0,
+   .writeback_dram_clock_change_latency_us = 23.0,
+   .return_bus_width_bytes = 64,
+};
+
 #ifndef mmDP0_DP_DPHY_INTERNAL_CTRL
#define mmDP0_DP_DPHY_INTERNAL_CTRL 0x210f
#define mmDP0_DP_DPHY_INTERNAL_CTRL_BASE_IDX2
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
index fd9d97aab071..dddeb0d4db8f 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
@@ -26,67 +26,8 @@
 #include "display_mode_lib.h"
 #include "dc_features.h"
 
-static const struct _vcs_dpi_ip_params_st dcn1_0_ip = {
-   .rob_buffer_size_kbytes = 64,
-   .det_buffer_size_kbytes = 164,
-   .dpte_buffer_size_in_pte_reqs = 42,
-   .dpp_output_buffer_pixels = 2560,
-   .opp_output_buffer_lines = 1,
-   .pixel_chunk_size_kbytes = 8,
-   .pte_enable = 1,
-   .pte_chunk_size_kbytes = 2,
-   .meta_chunk_size_kbytes = 2,
-   .writeback_chunk_size_kbytes = 2,
-   .line_buffer_size_bits = 589824,
-   .max_line_buffer_lines = 12,
-   .IsLineBufferBppFixed = 0,
-   .LineBufferFixedBpp = -1,
-   .writeback_luma_buffer_size_kbytes = 12,
-   .writeback_chroma_buffer_size_kbytes = 8,
-   .max_num_dpp = 4,
-   .max_num_wb = 2,
-   .max_dchub_pscl_bw_pix_per_clk = 4,
-   .max_pscl_lb_bw_pix_per_clk = 2,
-   .max_lb_vscl_bw_pix_per_clk = 4,
-   .max_vscl_hscl_bw_pix_per_clk = 4,
-   .max_hscl_ratio = 4,
-   .max_vscl_ratio = 4,
-   .hscl_mults = 4,
-   .vscl_mults = 4,
-   .max_hscl_taps = 8,
-   .max_vscl_taps = 8,
-   .dispclk_ramp_margin_percent = 1,
-   .underscan_factor = 1.10,
-   .min_vblank_lines = 14,
-   .dppclk_delay_subtotal = 90,
-   .dispclk_delay_subtotal = 42,
-   .dcfclk_cstate_latency = 10,
-   .max_inter_dcn_tile_repeaters = 8,
-   .can_vstartup_lines_exceed_vsync_plus_back_porch_lines_minus_one = 0,
-   

[PATCH 49/51] drm/amd/display: get board layout for edid emulation

2018-06-19 Thread Harry Wentland
From: Samson Tam 

Change-Id: I74999e1a164c027d74fafb8be32321377fcadb9e
Signed-off-by: Samson Tam 
Reviewed-by: Aric Cyr 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/bios/bios_parser.c | 196 
 .../drm/amd/display/dc/bios/bios_parser2.c| 218 +-
 .../gpu/drm/amd/display/dc/dc_bios_types.h|   4 +
 .../amd/display/include/grph_object_defs.h|  46 
 .../drm/amd/display/include/grph_object_id.h  |  11 +
 5 files changed, 474 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c 
b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
index c7f0b27e457e..be8a2494355a 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
@@ -3762,6 +3762,200 @@ static struct integrated_info 
*bios_parser_create_integrated_info(
return NULL;
 }
 
+enum bp_result update_slot_layout_info(
+   struct dc_bios *dcb,
+   unsigned int i,
+   struct slot_layout_info *slot_layout_info,
+   unsigned int record_offset)
+{
+   unsigned int j;
+   struct bios_parser *bp;
+   ATOM_BRACKET_LAYOUT_RECORD *record;
+   ATOM_COMMON_RECORD_HEADER *record_header;
+   enum bp_result result = BP_RESULT_NORECORD;
+
+   bp = BP_FROM_DCB(dcb);
+   record = NULL;
+   record_header = NULL;
+
+   for (;;) {
+
+   record_header = (ATOM_COMMON_RECORD_HEADER *)
+   GET_IMAGE(ATOM_COMMON_RECORD_HEADER, record_offset);
+   if (record_header == NULL) {
+   result = BP_RESULT_BADBIOSTABLE;
+   break;
+   }
+
+   /* the end of the list */
+   if (record_header->ucRecordType == 0xff ||
+   record_header->ucRecordSize == 0)   {
+   break;
+   }
+
+   if (record_header->ucRecordType ==
+   ATOM_BRACKET_LAYOUT_RECORD_TYPE &&
+   sizeof(ATOM_BRACKET_LAYOUT_RECORD)
+   <= record_header->ucRecordSize) {
+   record = (ATOM_BRACKET_LAYOUT_RECORD *)
+   (record_header);
+   result = BP_RESULT_OK;
+   break;
+   }
+
+   record_offset += record_header->ucRecordSize;
+   }
+
+   /* return if the record not found */
+   if (result != BP_RESULT_OK)
+   return result;
+
+   /* get slot sizes */
+   slot_layout_info->length = record->ucLength;
+   slot_layout_info->width = record->ucWidth;
+
+   /* get info for each connector in the slot */
+   slot_layout_info->num_of_connectors = record->ucConnNum;
+   for (j = 0; j < slot_layout_info->num_of_connectors; ++j) {
+   slot_layout_info->connectors[j].connector_type =
+   (enum connector_layout_type)
+   (record->asConnInfo[j].ucConnectorType);
+   switch (record->asConnInfo[j].ucConnectorType) {
+   case CONNECTOR_TYPE_DVI_D:
+   slot_layout_info->connectors[j].connector_type =
+   CONNECTOR_LAYOUT_TYPE_DVI_D;
+   slot_layout_info->connectors[j].length =
+   CONNECTOR_SIZE_DVI;
+   break;
+
+   case CONNECTOR_TYPE_HDMI:
+   slot_layout_info->connectors[j].connector_type =
+   CONNECTOR_LAYOUT_TYPE_HDMI;
+   slot_layout_info->connectors[j].length =
+   CONNECTOR_SIZE_HDMI;
+   break;
+
+   case CONNECTOR_TYPE_DISPLAY_PORT:
+   slot_layout_info->connectors[j].connector_type =
+   CONNECTOR_LAYOUT_TYPE_DP;
+   slot_layout_info->connectors[j].length =
+   CONNECTOR_SIZE_DP;
+   break;
+
+   case CONNECTOR_TYPE_MINI_DISPLAY_PORT:
+   slot_layout_info->connectors[j].connector_type =
+   CONNECTOR_LAYOUT_TYPE_MINI_DP;
+   slot_layout_info->connectors[j].length =
+   CONNECTOR_SIZE_MINI_DP;
+   break;
+
+   default:
+   slot_layout_info->connectors[j].connector_type =
+   CONNECTOR_LAYOUT_TYPE_UNKNOWN;
+   slot_layout_info->connectors[j].length =
+   CONNECTOR_SIZE_UNKNOWN;
+   }
+
+   slot_layout_info->connectors[j].position =
+   record->asConnInfo[j].ucPosition;
+   slot_layout_info->connectors[j].connector_id =
+   object_id_from_bios_object_id(
+   

[PATCH 47/51] drm/amd/display: dal 3.1.52

2018-06-19 Thread Harry Wentland
From: Tony Cheng 

Change-Id: I247a15ddaa6ed8b94f8ebe23bdebd5a475549f4f
Signed-off-by: Tony Cheng 
Reviewed-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index cbcdbd9b9910..74e6653b9852 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.51"
+#define DC_VER "3.1.52"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.17.1



[PATCH 44/51] drm/amd/display: move dcn watermark programming to set_bandwidth

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: If70a4124cf898c7561b8f3c8aa7960901f8b6dcc
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 107 +++---
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |   2 +-
 2 files changed, 19 insertions(+), 90 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index d78802e751c8..da82c6a2d4e0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2234,8 +2234,6 @@ static void dcn10_apply_ctx_for_surface(
int i;
struct timing_generator *tg;
bool removed_pipe[4] = { false };
-   unsigned int ref_clk_mhz = dc->res_pool->ref_clock_inKhz/1000;
-   bool program_water_mark = false;
struct pipe_ctx *top_pipe_to_program =
find_top_pipe_for_stream(dc, context, stream);
DC_LOGGER_INIT(dc->ctx->logger);
@@ -2296,107 +2294,38 @@ static void dcn10_apply_ctx_for_surface(
if (num_planes == 0)
false_optc_underflow_wa(dc, stream, tg);
 
-   for (i = 0; i < dc->res_pool->pipe_count; i++) {
-   struct pipe_ctx *old_pipe_ctx =
-   >current_state->res_ctx.pipe_ctx[i];
-   struct pipe_ctx *pipe_ctx = >res_ctx.pipe_ctx[i];
-
-   if (pipe_ctx->stream == stream &&
-   pipe_ctx->plane_state &&
-   pipe_ctx->plane_state->update_flags.bits.full_update)
-   program_water_mark = true;
-
+   for (i = 0; i < dc->res_pool->pipe_count; i++)
if (removed_pipe[i])
-   dcn10_disable_plane(dc, old_pipe_ctx);
-   }
-
-   if (program_water_mark) {
-   if (dc->debug.sanity_checks) {
-   /* pstate stuck check after watermark update */
-   dcn10_verify_allow_pstate_change_high(dc);
-   }
-
-   /* watermark is for all pipes */
-   hubbub1_program_watermarks(dc->res_pool->hubbub,
-   >bw.dcn.watermarks, ref_clk_mhz, true);
-
-   if (dc->hwseq->wa.DEGVIDCN10_254)
-   hubbub1_wm_change_req_wa(dc->res_pool->hubbub);
+   dcn10_disable_plane(dc, 
>current_state->res_ctx.pipe_ctx[i]);
 
-   if (dc->debug.sanity_checks) {
-   /* pstate stuck check after watermark update */
-   dcn10_verify_allow_pstate_change_high(dc);
-   }
-   }
-/* DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
-   "\n== Watermark parameters ==\n"
-   "a.urgent_ns: %d \n"
-   "a.cstate_enter_plus_exit: %d \n"
-   "a.cstate_exit: %d \n"
-   "a.pstate_change: %d \n"
-   "a.pte_meta_urgent: %d \n"
-   "b.urgent_ns: %d \n"
-   "b.cstate_enter_plus_exit: %d \n"
-   "b.cstate_exit: %d \n"
-   "b.pstate_change: %d \n"
-   "b.pte_meta_urgent: %d \n",
-   context->bw.dcn.watermarks.a.urgent_ns,
-   
context->bw.dcn.watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns,
-   
context->bw.dcn.watermarks.a.cstate_pstate.cstate_exit_ns,
-   
context->bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns,
-   context->bw.dcn.watermarks.a.pte_meta_urgent_ns,
-   context->bw.dcn.watermarks.b.urgent_ns,
-   
context->bw.dcn.watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns,
-   
context->bw.dcn.watermarks.b.cstate_pstate.cstate_exit_ns,
-   
context->bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns,
-   context->bw.dcn.watermarks.b.pte_meta_urgent_ns
-   );
-   DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
-   "\nc.urgent_ns: %d \n"
-   "c.cstate_enter_plus_exit: %d \n"
-   "c.cstate_exit: %d \n"
-   "c.pstate_change: %d \n"
-   "c.pte_meta_urgent: %d \n"
-   "d.urgent_ns: %d \n"
-   "d.cstate_enter_plus_exit: %d \n"
-   "d.cstate_exit: %d \n"
-   "d.pstate_change: %d \n"
-   "d.pte_meta_urgent: %d \n"
-   
"\n",
-   context->bw.dcn.watermarks.c.urgent_ns,
-   
context->bw.dcn.watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns,
- 

[PATCH 21/51] drm/amd/display: Define dp_alt_mode

2018-06-19 Thread Harry Wentland
From: Charlene Liu 

Also clean up command_table2.c; there is no need for a lot of
forward declarations.

Change-Id: Ia8668e9142556d9f7d865480afc64bc00444a578
Signed-off-by: Charlene Liu 
Reviewed-by: Krunoslav Kovac 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/bios/command_table2.c  | 46 +++
 .../amd/display/dc/dcn10/dcn10_link_encoder.c |  2 +
 .../drm/amd/display/include/grph_object_id.h  |  5 ++
 3 files changed, 24 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c 
b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
index 752b08a42d3e..2b5dc499a35e 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -59,36 +59,7 @@
bios_cmd_table_para_revision(bp->base.ctx->driver_context, \
GET_INDEX_INTO_MASTER_TABLE(command, fname))
 
-static void init_dig_encoder_control(struct bios_parser *bp);
-static void init_transmitter_control(struct bios_parser *bp);
-static void init_set_pixel_clock(struct bios_parser *bp);
 
-static void init_set_crtc_timing(struct bios_parser *bp);
-
-static void init_select_crtc_source(struct bios_parser *bp);
-static void init_enable_crtc(struct bios_parser *bp);
-
-static void init_external_encoder_control(struct bios_parser *bp);
-static void init_enable_disp_power_gating(struct bios_parser *bp);
-static void init_set_dce_clock(struct bios_parser *bp);
-static void init_get_smu_clock_info(struct bios_parser *bp);
-
-void dal_firmware_parser_init_cmd_tbl(struct bios_parser *bp)
-{
-   init_dig_encoder_control(bp);
-   init_transmitter_control(bp);
-   init_set_pixel_clock(bp);
-
-   init_set_crtc_timing(bp);
-
-   init_select_crtc_source(bp);
-   init_enable_crtc(bp);
-
-   init_external_encoder_control(bp);
-   init_enable_disp_power_gating(bp);
-   init_set_dce_clock(bp);
-   init_get_smu_clock_info(bp);
-}
 
 static uint32_t bios_cmd_table_para_revision(void *dev,
 uint32_t index)
@@ -829,3 +800,20 @@ static unsigned int get_smu_clock_info_v3_1(struct 
bios_parser *bp, uint8_t id)
return 0;
 }
 
+void dal_firmware_parser_init_cmd_tbl(struct bios_parser *bp)
+{
+   init_dig_encoder_control(bp);
+   init_transmitter_control(bp);
+   init_set_pixel_clock(bp);
+
+   init_set_crtc_timing(bp);
+
+   init_select_crtc_source(bp);
+   init_enable_crtc(bp);
+
+   init_external_encoder_control(bp);
+   init_enable_disp_power_gating(bp);
+   init_set_dce_clock(bp);
+   init_get_smu_clock_info(bp);
+
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
index 21fa40ac0786..fd9dc70190a8 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c
@@ -995,6 +995,8 @@ void dcn10_link_encoder_disable_output(
 
if (!dcn10_is_dig_enabled(enc)) {
/* OF_SKIP_POWER_DOWN_INACTIVE_ENCODER */
+   /* In the DP_Alt_No_Connect case the DIG is already turned off;
+    * after executing the PHY w/a sequence, do not touch the PHY any more. */
return;
}
/* Power-down RX and disable GPU PHY should be paired.
diff --git a/drivers/gpu/drm/amd/display/include/grph_object_id.h 
b/drivers/gpu/drm/amd/display/include/grph_object_id.h
index c4197432eb7c..92cc6c112ea6 100644
--- a/drivers/gpu/drm/amd/display/include/grph_object_id.h
+++ b/drivers/gpu/drm/amd/display/include/grph_object_id.h
@@ -197,6 +197,11 @@ enum transmitter_color_depth {
TRANSMITTER_COLOR_DEPTH_48   /* 16 bits */
 };
 
+enum dp_alt_mode {
+   DP_Alt_mode__Unknown = 0,
+   DP_Alt_mode__Connect,
+   DP_Alt_mode__NoConnect,
+};
 /*
  *
  * graphics_object_id struct
-- 
2.17.1



[PATCH 24/51] drm/amd/display: add CHG_DONE mash/sh defines for dentist

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: Ib6de55162c46b5f3858c528ec80905fdef1371c6
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Charlene Liu 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
index 1f1899ef773a..7ce0a54e548f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
@@ -45,13 +45,17 @@
 
 #define CLK_COMMON_MASK_SH_LIST_DCN_COMMON_BASE(mask_sh) \
CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_WDIVIDER, mask_sh),\
-   CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, mask_sh)
+   CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, mask_sh),\
+   CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, mask_sh),\
+   CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_CHG_DONE, mask_sh)
 
 #define CLK_REG_FIELD_LIST(type) \
type DPREFCLK_SRC_SEL; \
type DENTIST_DPREFCLK_WDIVIDER; \
type DENTIST_DISPCLK_WDIVIDER; \
-   type DENTIST_DPPCLK_WDIVIDER;
+   type DENTIST_DPPCLK_WDIVIDER; \
+   type DENTIST_DISPCLK_CHG_DONE; \
+   type DENTIST_DPPCLK_CHG_DONE;
 
 struct dccg_shift {
CLK_REG_FIELD_LIST(uint8_t)
-- 
2.17.1



[PATCH 06/51] drm/amd/display: Apply clock for voltage request

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Translate the dm_pp structure to the pp type.
Call the PP lib to apply the clock voltage request for the display.
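The core of the translation below is a switch mapping DM clock types onto PP clock types, falling out with a failure when no mapping exists. A minimal standalone sketch, with hypothetical simplified enums standing in for the real `dm_pp_clock_type` and `amd_pp_clock_type` values:

```c
#include <assert.h>

/* Hypothetical stand-ins for the DM-side and PP-side clock-type enums. */
enum dm_pp_clock_type {
	DM_PP_CLOCK_TYPE_DISPLAY_CLK,
	DM_PP_CLOCK_TYPE_DCEFCLK,
	DM_PP_CLOCK_TYPE_PIXELCLK,
	DM_PP_CLOCK_TYPE_OTHER,
};
enum amd_pp_clock_type {
	amd_pp_disp_clock,
	amd_pp_dcef_clock,
	amd_pp_pixel_clock,
};

/* Translate a DM clock type to its PP equivalent; returns 0 on success,
 * -1 when there is no mapping (the patch returns false in that case). */
static int translate_clock_type(enum dm_pp_clock_type in,
				enum amd_pp_clock_type *out)
{
	switch (in) {
	case DM_PP_CLOCK_TYPE_DISPLAY_CLK: *out = amd_pp_disp_clock;  return 0;
	case DM_PP_CLOCK_TYPE_DCEFCLK:     *out = amd_pp_dcef_clock;  return 0;
	case DM_PP_CLOCK_TYPE_PIXELCLK:    *out = amd_pp_pixel_clock; return 0;
	default:                           return -1;
	}
}
```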

Change-Id: I9aa732073fda7b2d7f911849a35572ef5db6ae48
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../display/amdgpu_dm/amdgpu_dm_services.c| 31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 5d20a7d1d0d5..0dc7a791c216 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -424,8 +424,35 @@ bool dm_pp_apply_clock_for_voltage_request(
const struct dc_context *ctx,
struct dm_pp_clock_for_voltage_req *clock_for_voltage_req)
 {
-   /* TODO: to be implemented */
-   return false;
+   struct amdgpu_device *adev = ctx->driver_context;
+   struct pp_display_clock_request pp_clock_request = {0};
+   int ret = 0;
+   switch (clock_for_voltage_req->clk_type) {
+   case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
+   pp_clock_request.clock_type = amd_pp_disp_clock;
+   break;
+
+   case DM_PP_CLOCK_TYPE_DCEFCLK:
+   pp_clock_request.clock_type = amd_pp_dcef_clock;
+   break;
+
+   case DM_PP_CLOCK_TYPE_PIXELCLK:
+   pp_clock_request.clock_type = amd_pp_pixel_clock;
+   break;
+
+   default:
+   return false;
+   }
+
+   pp_clock_request.clock_freq_in_khz = clock_for_voltage_req->clocks_in_khz;
+
+   if (adev->powerplay.pp_funcs->display_clock_voltage_request)
+   ret = adev->powerplay.pp_funcs->display_clock_voltage_request(
+   adev->powerplay.pp_handle,
+   &pp_clock_request);
+   if (ret)
+   return false;
+   return true;
 }
 
 bool dm_pp_get_static_clocks(
-- 
2.17.1



[PATCH 04/51] drm/amd/display: move clock programming from set_bandwidth to dccg

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

This change moves dcn clock programming (with the exception of dispclk)
into dccg. This should have no functional effect.
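The `should_set_clock` helper this patch introduces in dce_clocks.c can be exercised on its own. A minimal sketch (the kernel version is `static inline bool`; plain C ints are used here for portability):

```c
#include <assert.h>

/* Predicate from the patch: raise a clock immediately when the calculated
 * value is higher than the current one, but only lower it when the caller
 * says it is safe to do so (e.g. after the new state is committed). */
static inline int should_set_clock(int safe_to_lower, int calc_clk, int cur_clk)
{
	return ((safe_to_lower && calc_clk < cur_clk) || calc_clk > cur_clk);
}
```

This replaces the repeated open-coded `(new < cur && safe_to_lower) || new > cur` checks in `dce12_update_clocks`, so each clock comparison reads the same way.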

Change-Id: I5f857f620e6649cdafcf519569e07c36716bf47b
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  |  2 +-
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 57 +++--
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 62 ---
 3 files changed, 51 insertions(+), 70 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 2b70ac67e6c2..9acdd9da740e 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -997,7 +997,7 @@ bool dcn_validate_bandwidth(
}
 
context->bw.dcn.calc_clk.dppclk_khz = 
context->bw.dcn.calc_clk.dispclk_khz / v->dispclk_dppclk_ratio;
-
+   context->bw.dcn.calc_clk.phyclk_khz = 
v->phyclk_per_state[v->voltage_level];
switch (v->voltage_level) {
case 0:
context->bw.dcn.calc_clk.max_supported_dppclk_khz =
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index 890a3ec68d49..93e6063c4b97 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -523,14 +523,18 @@ static void dce_clock_read_ss_info(struct dce_dccg 
*clk_dce)
}
 }
 
+static inline bool should_set_clock(bool safe_to_lower, int calc_clk, int 
cur_clk)
+{
+   return ((safe_to_lower && calc_clk < cur_clk) || calc_clk > cur_clk);
+}
+
 static void dce12_update_clocks(struct dccg *dccg,
struct dc_clocks *new_clocks,
bool safe_to_lower)
 {
struct dm_pp_clock_for_voltage_req clock_voltage_req = {0};
 
-   if ((new_clocks->dispclk_khz < dccg->clks.dispclk_khz && safe_to_lower)
-   || new_clocks->dispclk_khz > dccg->clks.dispclk_khz) {
+   if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAY_CLK;
clock_voltage_req.clocks_in_khz = new_clocks->dispclk_khz;
dccg->funcs->set_dispclk(dccg, new_clocks->dispclk_khz);
@@ -539,8 +543,7 @@ static void dce12_update_clocks(struct dccg *dccg,
dm_pp_apply_clock_for_voltage_request(dccg->ctx, &clock_voltage_req);
}
 
-   if ((new_clocks->phyclk_khz < dccg->clks.phyclk_khz && safe_to_lower)
-   || new_clocks->phyclk_khz > dccg->clks.phyclk_khz) {
+   if (should_set_clock(safe_to_lower, new_clocks->phyclk_khz, 
dccg->clks.phyclk_khz)) {
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAYPHYCLK;
clock_voltage_req.clocks_in_khz = new_clocks->phyclk_khz;
dccg->clks.phyclk_khz = new_clocks->phyclk_khz;
@@ -553,6 +556,11 @@ static void dcn_update_clocks(struct dccg *dccg,
struct dc_clocks *new_clocks,
bool safe_to_lower)
 {
+   struct dc *dc = dccg->ctx->dc;
+   struct pp_smu_display_requirement_rv *smu_req_cur =
+   &dc->res_pool->pp_smu_req;
+   struct pp_smu_display_requirement_rv smu_req = *smu_req_cur;
+   struct pp_smu_funcs_rv *pp_smu = dc->res_pool->pp_smu;
struct dm_pp_clock_for_voltage_req clock_voltage_req = {0};
bool send_request_to_increase = false;
bool send_request_to_lower = false;
@@ -566,17 +574,14 @@ static void dcn_update_clocks(struct dccg *dccg,
 #ifdef CONFIG_DRM_AMD_DC_DCN1_0
if (send_request_to_increase) {
-   struct dc *core_dc = dccg->ctx->dc;
-
/*use dcfclk to request voltage*/
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
-   clock_voltage_req.clocks_in_khz = 
dcn_find_dcfclk_suits_all(core_dc, new_clocks);
+   clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
dm_pp_apply_clock_for_voltage_request(dccg->ctx, &clock_voltage_req);
}
 #endif
 
-   if ((new_clocks->dispclk_khz < dccg->clks.dispclk_khz && safe_to_lower)
-   || new_clocks->dispclk_khz > dccg->clks.dispclk_khz) {
+   if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAY_CLK;
clock_voltage_req.clocks_in_khz = new_clocks->dispclk_khz;
/* TODO: ramp up - dccg->funcs->set_dispclk(dccg, 
new_clocks->dispclk_khz);*/
@@ -586,8 +591,7 @@ static void dcn_update_clocks(struct dccg *dccg,
send_request_to_lower = true;
}
 
-   if ((new_clocks->phyclk_khz < 
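The repeated "lower only when safe, raise always" conditions this patch collapses into should_set_clock() can be exercised in isolation. A minimal sketch mirroring the helper from the hunk above:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirror of the should_set_clock() helper introduced in this patch:
 * a clock is raised immediately, but only lowered once the caller
 * reports the transition is safe (safe_to_lower). */
static inline bool should_set_clock(bool safe_to_lower, int calc_clk, int cur_clk)
{
	return (safe_to_lower && calc_clk < cur_clk) || calc_clk > cur_clk;
}
```

Raising unconditionally avoids underrunning bandwidth mid-transition; lowering is deferred until the new state is committed, which is why the two update paths pass safe_to_lower through.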

[PATCH 18/51] drm/amd/display: clean up set_bandwidth usage

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

This removes redundant set_bandwidth calls and fixes a bug in
post_set_address_update where dcn1 would never get to lower
clocks.

Change-Id: I6ad9fe8179b7281b5bd3b58665a1efdf1ec5b8a0
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Nikola Cornij 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc.c  |  5 -
 .../drm/amd/display/dc/dce110/dce110_hw_sequencer.c   |  5 -
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 11 +++
 3 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index caa8c908b63f..e855424ee575 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -942,12 +942,7 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
 
dc->optimized_required = false;
 
-   /* 3rd param should be true, temp w/a for RV*/
-#if defined(CONFIG_DRM_AMD_DC_DCN1_0)
-   dc->hwss.set_bandwidth(dc, context, dc->ctx->dce_version < 
DCN_VERSION_1_0);
-#else
dc->hwss.set_bandwidth(dc, context, true);
-#endif
return true;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index f723a29e8065..9ea576cbbefa 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -2018,8 +2018,6 @@ enum dc_status dce110_apply_ctx_to_hw(
if (dc->fbc_compressor)
dc->fbc_compressor->funcs->disable_fbc(dc->fbc_compressor);
 
-   dc->hwss.set_bandwidth(dc, context, false);
-
dce110_setup_audio_dto(dc, context);
 
for (i = 0; i < dc->res_pool->pipe_count; i++) {
@@ -2048,9 +2046,6 @@ enum dc_status dce110_apply_ctx_to_hw(
return status;
}
 
-   /* to save power */
-   dc->hwss.set_bandwidth(dc, context, true);
-
dcb->funcs->set_scratch_critical_state(dcb, false);
 
if (dc->fbc_compressor)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 5d47b282aef2..ef3969fd372c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2283,8 +2283,7 @@ static void dcn10_apply_ctx_for_surface(
hwss1_plane_atomic_disconnect(dc, old_pipe_ctx);
removed_pipe[i] = true;
 
-   DC_LOG_DC(
-   "Reset mpcc for pipe %d\n",
+   DC_LOG_DC("Reset mpcc for pipe %d\n",
old_pipe_ctx->pipe_idx);
}
}
@@ -2380,9 +2379,8 @@ static void dcn10_set_bandwidth(
struct dc_state *context,
bool decrease_allowed)
 {
-   if (dc->debug.sanity_checks) {
+   if (dc->debug.sanity_checks)
dcn10_verify_allow_pstate_change_high(dc);
-   }
 
if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
return;
@@ -2397,11 +2395,8 @@ static void dcn10_set_bandwidth(
 
dcn10_pplib_apply_display_requirements(dc, context);
 
-   if (dc->debug.sanity_checks) {
+   if (dc->debug.sanity_checks)
dcn10_verify_allow_pstate_change_high(dc);
-   }
-
-   /* need to fix this function.  not doing the right thing here */
 }
 
 static void set_drr(struct pipe_ctx **pipe_ctx,
-- 
2.17.1



[PATCH 00/51] DC Patches Jun 19, 2018

2018-06-19 Thread Harry Wentland
Been pretty busy and haven't been able to get these out as regularly as I'd 
like. We're working on improving our process. I hope to be able to give an 
update next week.

Changes here are:
 * RV powerplay integration
 * Retry link-training if it fails initially
 * Stubbing out a DP debugfs to allow changing PHY settings (more to come)
 * whole bunch of miscellaneous work on DCN

Alvin lee (2):
  drm/amd/display: Enable Stereo in Dal3
  drm/amd/display: Program vsc_infopacket in commit_planes_for_stream

Charlene Liu (2):
  drm/amd/display: Define dp_alt_mode
  drm/amd/display: add valid regoffset and NULL pointer check

David Francis (1):
  drm/amd/display: Add front end for dp debugfs files

Dmytro Laktyushkin (23):
  drm/amd/display: replace clocks_value struct with dc_clocks
  drm/amd/display: redesign dce/dcn clock voltage update request
  drm/amd/display: rename display clock block to dccg
  drm/amd/display: move clock programming from set_bandwidth to dccg
  drm/amd/display: remove invalid assert when no max_pixel_clk is found
  drm/amd/display: get rid of cur_clks from dcn_bw_output
  drm/amd/display: move dcn1 dispclk programming to dccg
  drm/amd/display: clean up dccg divider calc and dcn constructor
  drm/amd/display: rename dce_disp_clk to dccg
  drm/amd/display: clean up set_bandwidth usage
  drm/amd/display: remove unnecessary pplib volage requests that are
asserting
  drm/amd/display: fix dccg dcn1 ifdef
  drm/amd/display: fix pplib voltage request
  drm/amd/display: add CHG_DONE mash/sh defines for dentist
  drm/amd/display: change dentist DID enum values to uppercase
  drm/amd/display: add safe_to_lower support to dcn wm programming
  drm/amd/display: clean rq/dlg/ttu reg structs before calculations
  drm/amd/display: move dml defaults to respective dcn resource files
  drm/amd/display: fix dcn1 watermark range reporting
  drm/amd/display: remove dcn1 watermark sets b, c and d
  drm/amd/display: separate out wm change request dcn workaround
  drm/amd/display: move dcn watermark programming to set_bandwidth
  drm/amd/display: remove soc_bounding_box.c

Mikita Lipski (10):
  drm/amd/display: Adding dm-pp clocks getting by voltage
  drm/amd/display: Apply clock for voltage request
  drm/amd/display: Adding Get static clocks for dm_pp interface
  drm/amd/display: Introduce pp-smu raven functions
  drm/amd/display: Use local structs instead of struct pointers
  drm/amd/display: Add clock types to applying clk for voltage
  drm/amd/display: Enable PPLib calls from DC on linux
  drm/amd/display: Add dmpp clks types for conversion
  drm/amd/display: Convert 10kHz clks from PPLib into kHz
  drm/amd/display: Moving powerplay functions to a separate class

Roman Li (1):
  drm/amd/display: fix potential infinite loop in fbc path

Samson Tam (1):
  drm/amd/display: get board layout for edid emulation

Tony Cheng (6):
  drm/amd/display: dal 3.1.48
  drm/amd/display: dal 3.1.49
  drm/amd/display: dal 3.1.50
  drm/amd/display: dal 3.1.51
  drm/amd/display: dal 3.1.52
  drm/amd/display: Allow option to use worst-case watermark

Wesley Chalmers (2):
  drm/amd/display: Temporarily remove Chroma logs
  drm/amd/display: fix use of uninitialized memory

Yongqiang Sun (2):
  drm/amd/display: Use tg count for opp init.
  drm/amd/display: Check scaling ration not viewports params.

Zheng, XueLai(Eric) (1):
  drm/amd/display: support ACrYCb2101010

 drivers/gpu/drm/amd/display/Makefile  |   3 +-
 .../gpu/drm/amd/display/amdgpu_dm/Makefile|   4 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  10 +
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 170 +
 .../amdgpu_dm_debugfs.h}  |  13 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  | 525 +
 .../display/amdgpu_dm/amdgpu_dm_services.c| 187 -
 .../gpu/drm/amd/display/dc/bios/bios_parser.c | 196 +
 .../drm/amd/display/dc/bios/bios_parser2.c| 218 +-
 .../drm/amd/display/dc/bios/command_table2.c  |  46 +-
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 194 +++--
 drivers/gpu/drm/amd/display/dc/core/dc.c  |   8 +-
 .../gpu/drm/amd/display/dc/core/dc_debug.c|  24 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  40 +-
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |   9 +-
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  59 +-
 drivers/gpu/drm/amd/display/dc/dc.h   |   8 +-
 .../gpu/drm/amd/display/dc/dc_bios_types.h|   4 +
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h  |   1 +
 drivers/gpu/drm/amd/display/dc/dc_stream.h|   2 +
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 718 +-
 .../gpu/drm/amd/display/dc/dce/dce_clocks.h   |  93 +--
 .../gpu/drm/amd/display/dc/dce/dce_hwseq.h|   3 -
 .../display/dc/dce100/dce100_hw_sequencer.c   |  49 +-
 .../amd/display/dc/dce100/dce100_resource.c   |  16 +-
 .../amd/display/dc/dce110/dce110_compressor.c |   4 +-
 .../display/dc/dce110/dce110_hw_sequencer.c   | 175 +
 

[PATCH 05/51] drm/amd/display: Adding dm-pp clocks getting by voltage

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Add a function to get clock levels by voltage from PPLib.

Change-Id: I0723e59ac21e69a90497382fce348be95bea4ed9
Signed-off-by: Mikita Lipski 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../display/amdgpu_dm/amdgpu_dm_services.c| 43 ++-
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 5a3346124a01..5d20a7d1d0d5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -261,6 +261,34 @@ static void pp_to_dc_clock_levels_with_latency(
}
 }
 
+static void pp_to_dc_clock_levels_with_voltage(
+   const struct pp_clock_levels_with_voltage *pp_clks,
+   struct dm_pp_clock_levels_with_voltage *clk_level_info,
+   enum dm_pp_clock_type dc_clk_type)
+{
+   uint32_t i;
+
+   if (pp_clks->num_levels > DM_PP_MAX_CLOCK_LEVELS) {
+   DRM_INFO("DM_PPLIB: Warning: %s clock: number of levels %d 
exceeds maximum of %d!\n",
+   DC_DECODE_PP_CLOCK_TYPE(dc_clk_type),
+   pp_clks->num_levels,
+   DM_PP_MAX_CLOCK_LEVELS);
+
+   clk_level_info->num_levels = DM_PP_MAX_CLOCK_LEVELS;
+   } else
+   clk_level_info->num_levels = pp_clks->num_levels;
+
+   DRM_INFO("DM_PPLIB: values for %s clock\n",
+   DC_DECODE_PP_CLOCK_TYPE(dc_clk_type));
+
+   for (i = 0; i < clk_level_info->num_levels; i++) {
+   DRM_INFO("DM_PPLIB:\t %d\n", pp_clks->data[i].clocks_in_khz);
+   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz;
+   clk_level_info->data[i].voltage_in_mv = 
pp_clks->data[i].voltage_in_mv;
+   }
+}
+
+
 bool dm_pp_get_clock_levels_by_type(
const struct dc_context *ctx,
enum dm_pp_clock_type clk_type,
@@ -361,8 +389,19 @@ bool dm_pp_get_clock_levels_by_type_with_voltage(
enum dm_pp_clock_type clk_type,
struct dm_pp_clock_levels_with_voltage *clk_level_info)
 {
-   /* TODO: to be implemented */
-   return false;
+   struct amdgpu_device *adev = ctx->driver_context;
+   void *pp_handle = adev->powerplay.pp_handle;
+   struct pp_clock_levels_with_voltage pp_clk_info = {0};
+   const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+   if (pp_funcs->get_clock_by_type_with_voltage(pp_handle,
+   dc_to_pp_clock_type(clk_type),
+   &pp_clk_info))
+   return false;
+
+   pp_to_dc_clock_levels_with_voltage(&pp_clk_info, clk_level_info, clk_type);
+
+   return true;
 }
 
 bool dm_pp_notify_wm_clock_changes(
-- 
2.17.1

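The conversion helper in this patch clamps the level count before copying, so an over-long PPLib table can never overrun the DM-side array. A small sketch of that guard (the limit value here is an illustrative stand-in, not the real amdgpu constant):

```c
#include <assert.h>

#define DM_PP_MAX_CLOCK_LEVELS 8 /* stand-in for the real DM-side limit */

/* Clamp the PPLib-reported level count to what the DM table can hold;
 * the driver additionally logs a warning when truncation happens. */
static unsigned int clamp_num_levels(unsigned int pp_num_levels)
{
	if (pp_num_levels > DM_PP_MAX_CLOCK_LEVELS)
		return DM_PP_MAX_CLOCK_LEVELS;
	return pp_num_levels;
}
```

Only the clamped count is then used as the copy-loop bound, which is the property that makes the conversion safe regardless of what PPLib returns.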


[PATCH 16/51] drm/amd/display: clean up dccg divider calc and dcn constructor

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

No functional change.

Change-Id: Ia108aff1f2e1def6d69cbda13d90b9a818fbecc4
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 197 ++
 .../gpu/drm/amd/display/dc/dce/dce_clocks.h   |  26 ---
 2 files changed, 68 insertions(+), 155 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index 55f533cf55ba..6e3bfdf8a9e7 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -101,90 +101,42 @@ static const struct state_dependent_clocks 
dce120_max_clks_by_state[] = {
 /*ClocksStatePerformance*/
 { .display_clk_khz = 1133000, .pixel_clk_khz = 60 } };
 
-/* Starting point for each divider range.*/
-enum dce_divider_range_start {
-   DIVIDER_RANGE_01_START = 200, /* 2.00*/
-   DIVIDER_RANGE_02_START = 1600, /* 16.00*/
-   DIVIDER_RANGE_03_START = 3200, /* 32.00*/
-   DIVIDER_RANGE_SCALE_FACTOR = 100 /* Results are scaled up by 100.*/
+/* Starting DID for each range */
+enum dentist_base_divider_id {
+   dentist_base_divider_id_1 = 0x08,
+   dentist_base_divider_id_2 = 0x40,
+   dentist_base_divider_id_3 = 0x60,
dentist_max_divider_id = 0x80
 };
 
-/* Ranges for divider identifiers (Divider ID or DID)
- mmDENTIST_DISPCLK_CNTL.DENTIST_DISPCLK_WDIVIDER*/
-enum dce_divider_id_register_setting {
-   DIVIDER_RANGE_01_BASE_DIVIDER_ID = 0X08,
-   DIVIDER_RANGE_02_BASE_DIVIDER_ID = 0X40,
-   DIVIDER_RANGE_03_BASE_DIVIDER_ID = 0X60,
-   DIVIDER_RANGE_MAX_DIVIDER_ID = 0X80
+/* Starting point and step size for each divider range.*/
+enum dentist_divider_range {
+   dentist_divider_range_1_start = 8,   /* 2.00  */
+   dentist_divider_range_1_step  = 1,   /* 0.25  */
+   dentist_divider_range_2_start = 64,  /* 16.00 */
+   dentist_divider_range_2_step  = 2,   /* 0.50  */
+   dentist_divider_range_3_start = 128, /* 32.00 */
+   dentist_divider_range_3_step  = 4,   /* 1.00  */
+   dentist_divider_range_scale_factor = 4
 };
 
-/* Step size between each divider within a range.
- Incrementing the DENTIST_DISPCLK_WDIVIDER by one
- will increment the divider by this much.*/
-enum dce_divider_range_step_size {
-   DIVIDER_RANGE_01_STEP_SIZE = 25, /* 0.25*/
-   DIVIDER_RANGE_02_STEP_SIZE = 50, /* 0.50*/
-   DIVIDER_RANGE_03_STEP_SIZE = 100 /* 1.00 */
-};
-
-static bool dce_divider_range_construct(
-   struct dce_divider_range *div_range,
-   int range_start,
-   int range_step,
-   int did_min,
-   int did_max)
-{
-   div_range->div_range_start = range_start;
-   div_range->div_range_step = range_step;
-   div_range->did_min = did_min;
-   div_range->did_max = did_max;
-
-   if (div_range->div_range_step == 0) {
-   div_range->div_range_step = 1;
-   /*div_range_step cannot be zero*/
-   BREAK_TO_DEBUGGER();
-   }
-   /* Calculate this based on the other inputs.*/
-   /* See DividerRange.h for explanation of */
-   /* the relationship between divider id (DID) and a divider.*/
-   /* Number of Divider IDs = (Maximum Divider ID - Minimum Divider ID)*/
-   /* Maximum divider identified in this range =
-* (Number of Divider IDs)*Step size between dividers
-*  + The start of this range.*/
-   div_range->div_range_end = (did_max - did_min) * range_step
-   + range_start;
-   return true;
-}
-
-static int dce_divider_range_calc_divider(
-   struct dce_divider_range *div_range,
-   int did)
-{
-   /* Is this DID within our range?*/
-   if ((did < div_range->did_min) || (did >= div_range->did_max))
-   return INVALID_DIVIDER;
-
-   return ((did - div_range->did_min) * div_range->div_range_step)
-   + div_range->div_range_start;
-
-}
-
-static int dce_divider_range_get_divider(
-   struct dce_divider_range *div_range,
-   int ranges_num,
-   int did)
+static int dentist_get_divider_from_did(int did)
 {
-   int div = INVALID_DIVIDER;
-   int i;
-
-   for (i = 0; i < ranges_num; i++) {
-   /* Calculate divider with given divider ID*/
-   div = dce_divider_range_calc_divider(_range[i], did);
-   /* Found a valid return divider*/
-   if (div != INVALID_DIVIDER)
-   break;
+   if (did < dentist_base_divider_id_1)
+   did = dentist_base_divider_id_1;
+   if (did > dentist_max_divider_id)
+   did = dentist_max_divider_id;
+
+   if (did < dentist_base_divider_id_2) {
+   return dentist_divider_range_1_start + 
dentist_divider_range_1_step
+   * (did - 
dentist_base_divider_id_1);
+   } else if (did < 
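The hunk is cut off here in the archive. Under the enum values shown in this patch, the DID-to-divider mapping it replaces the old table code with can be sketched as the following standalone function; dividers are returned scaled by dentist_divider_range_scale_factor (4), so 48 means a 12.00 divider. The kernel's final version may differ in detail:

```c
#include <assert.h>

/* Stand-ins for the enum values defined in the patch above. */
enum {
	base_id_1 = 0x08, base_id_2 = 0x40, base_id_3 = 0x60, max_id = 0x80,
	range_1_start = 8,   range_1_step = 1, /* 2.00 start, 0.25 step */
	range_2_start = 64,  range_2_step = 2, /* 16.00 start, 0.50 step */
	range_3_start = 128, range_3_step = 4, /* 32.00 start, 1.00 step */
};

/* Clamp the DID into the valid window, then apply the linear mapping
 * of whichever of the three ranges it falls in. */
static int dentist_get_divider_from_did(int did)
{
	if (did < base_id_1)
		did = base_id_1;
	if (did > max_id)
		did = max_id;

	if (did < base_id_2)
		return range_1_start + range_1_step * (did - base_id_1);
	else if (did < base_id_3)
		return range_2_start + range_2_step * (did - base_id_2);
	else
		return range_3_start + range_3_step * (did - base_id_3);
}
```

For example, DID 0x50 lands in range 2: 64 + 2 * (0x50 - 0x40) = 96, i.e. a 24.00 divider.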

[PATCH 02/51] drm/amd/display: redesign dce/dcn clock voltage update request

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

The goal of this change is to move clock programming and voltage
requests to a single function. As of this change only dce is affected.

Change-Id: If2fdb76f2760533fbd701db9855a611d9569fd54
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  |  22 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  30 +-
 .../gpu/drm/amd/display/dc/dce/dce_clocks.c   | 279 ++
 .../gpu/drm/amd/display/dc/dce/dce_clocks.h   |   6 +-
 .../display/dc/dce100/dce100_hw_sequencer.c   |  49 ++-
 .../display/dc/dce110/dce110_hw_sequencer.c   | 150 ++
 .../display/dc/dce110/dce110_hw_sequencer.h   |   4 +
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |   9 +-
 .../drm/amd/display/dc/dcn10/dcn10_resource.c |   2 +-
 .../drm/amd/display/dc/inc/hw/display_clock.h |  21 +-
 10 files changed, 250 insertions(+), 322 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index d8a31650e856..2b70ac67e6c2 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -1145,10 +1145,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
 
switch (clocks_type) {
case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
-   /*if (clocks_in_khz > dc->dcn_soc->max_dispclk_vmax0p9*1000) {
+   if (clocks_in_khz > dc->dcn_soc->max_dispclk_vmax0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
-   //BREAK_TO_DEBUGGER();
-   } else*/ if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vnom0p8*1000) {
+   BREAK_TO_DEBUGGER();
+   } else if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vnom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->max_dispclk_vmid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1158,10 +1158,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
vdd_level = dcn_bw_v_min0p65;
break;
case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
-   /*if (clocks_in_khz > dc->dcn_soc->phyclkv_max0p9*1000) {
+   if (clocks_in_khz > dc->dcn_soc->phyclkv_max0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else*/ if (clocks_in_khz > dc->dcn_soc->phyclkv_nom0p8*1000) {
+   } else if (clocks_in_khz > dc->dcn_soc->phyclkv_nom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > dc->dcn_soc->phyclkv_mid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1172,10 +1172,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
break;
 
case DM_PP_CLOCK_TYPE_DPPCLK:
-   /*if (clocks_in_khz > dc->dcn_soc->max_dppclk_vmax0p9*1000) {
+   if (clocks_in_khz > dc->dcn_soc->max_dppclk_vmax0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else*/ if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vnom0p8*1000) {
+   } else if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vnom0p8*1000) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->max_dppclk_vmid0p72*1000) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1189,10 +1189,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
{
unsigned factor = (ddr4_dram_factor_single_Channel * 
dc->dcn_soc->number_of_channels);
 
-   /*if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9*100/factor) {
+   if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9*100/factor) {
vdd_level = dcn_bw_v_max0p91;
BREAK_TO_DEBUGGER();
-   } else */if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8*100/factor) {
+   } else if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8*100/factor) {
vdd_level = dcn_bw_v_max0p9;
} else if (clocks_in_khz > 
dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72*100/factor) {
vdd_level = dcn_bw_v_nom0p8;
@@ -1204,10 +1204,10 @@ static unsigned int dcn_find_normalized_clock_vdd_Level(
break;
 
case DM_PP_CLOCK_TYPE_DCFCLK:
-   /*if (clocks_in_khz > dc->dcn_soc->dcfclkv_max0p9*1000) {
+   if (clocks_in_khz > dc->dcn_soc->dcfclkv_max0p9*1000) {
vdd_level = dcn_bw_v_max0p91;
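Each case above walks the same threshold ladder this patch re-enables: pick the lowest voltage level whose maximum clock still covers the request, and flag (BREAK_TO_DEBUGGER) anything beyond the top range. A simplified sketch of that selection, with illustrative threshold parameters rather than real dcn_soc values:

```c
#include <assert.h>

/* Stand-in voltage levels, lowest to highest. */
enum vdd_level { V_MIN0P65, V_MID0P72, V_NOM0P8, V_MAX0P9, V_MAX0P91 };

/* Return the lowest level whose max clock covers clock_khz; anything
 * above max_vmax0p9 is out of range (the driver also breaks to the
 * debugger in that case). */
static enum vdd_level find_vdd_level(int clock_khz, int max_vmin0p65,
		int max_vmid0p72, int max_vnom0p8, int max_vmax0p9)
{
	if (clock_khz > max_vmax0p9)
		return V_MAX0P91;
	if (clock_khz > max_vnom0p8)
		return V_MAX0P9;
	if (clock_khz > max_vmid0p72)
		return V_NOM0P8;
	if (clock_khz > max_vmin0p65)
		return V_MID0P72;
	return V_MIN0P65;
}
```

Checking from the top threshold downward means the first failing comparison immediately yields the cheapest sufficient level.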

[PATCH 20/51] drm/amd/display: Temporarily remove Chroma logs

2018-06-19 Thread Harry Wentland
From: Wesley Chalmers 

To ensure tests continue to pass

Change-Id: I03e78245b679889786730897306a6e572c5e89d0
Signed-off-by: Wesley Chalmers 
Reviewed-by: Shahin Khayyer 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index ef3969fd372c..db72b4d7b89a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -151,23 +151,19 @@ static void dcn10_log_hubp_states(struct dc *dc)
 
DTN_INFO("\n=RQ\n");
DTN_INFO("HUBP:  drq_exp_m  prq_exp_m  mrq_exp_m  crq_exp_m  plane1_ba  
L:chunk_s  min_chu_s  meta_ch_s"
-   "  min_m_c_s  dpte_gr_s  mpte_gr_s  swath_hei  pte_row_h  
C:chunk_s  min_chu_s  meta_ch_s"
"  min_m_c_s  dpte_gr_s  mpte_gr_s  swath_hei  pte_row_h\n");
for (i = 0; i < pool->pipe_count; i++) {
struct dcn_hubp_state *s = 
&(TO_DCN10_HUBP(pool->hubps[i])->state);
struct _vcs_dpi_display_rq_regs_st *rq_regs = >rq_regs;
 
if (!s->blank_en)
-   DTN_INFO("[%2d]:  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh\n",
+   DTN_INFO("[%2d]:  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh  
%8xh  %8xh  %8xh  %8xh  %8xh  %8xh  %8xh\n",
pool->hubps[i]->inst, 
rq_regs->drq_expansion_mode, rq_regs->prq_expansion_mode, 
rq_regs->mrq_expansion_mode,
rq_regs->crq_expansion_mode, 
rq_regs->plane1_base_address, rq_regs->rq_regs_l.chunk_size,
rq_regs->rq_regs_l.min_chunk_size, 
rq_regs->rq_regs_l.meta_chunk_size,
rq_regs->rq_regs_l.min_meta_chunk_size, 
rq_regs->rq_regs_l.dpte_group_size,
rq_regs->rq_regs_l.mpte_group_size, 
rq_regs->rq_regs_l.swath_height,
-   rq_regs->rq_regs_l.pte_row_height_linear, 
rq_regs->rq_regs_c.chunk_size, rq_regs->rq_regs_c.min_chunk_size,
-   rq_regs->rq_regs_c.meta_chunk_size, 
rq_regs->rq_regs_c.min_meta_chunk_size,
-   rq_regs->rq_regs_c.dpte_group_size, 
rq_regs->rq_regs_c.mpte_group_size,
-   rq_regs->rq_regs_c.swath_height, 
rq_regs->rq_regs_c.pte_row_height_linear);
+   rq_regs->rq_regs_l.pte_row_height_linear);
}
 
DTN_INFO("DLG\n");
-- 
2.17.1



[PATCH 42/51] drm/amd/display: remove dcn1 watermark sets b, c and d

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Currently dcn1 will not switch between watermark sets, so we can
save time by not calculating the 3 extra sets.

Change-Id: If0282c383df30d3698ab90cc41b4a8c52624ceb8
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 21 ++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 8dc0773b285e..12261fbc25e0 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -505,6 +505,7 @@ static void split_stream_across_pipes(
resource_build_scaling_params(secondary_pipe);
 }
 
+#if 0
 static void calc_wm_sets_and_perf_params(
struct dc_state *context,
struct dcn_bw_internal_vars *v)
@@ -586,6 +587,7 @@ static void calc_wm_sets_and_perf_params(
if (v->voltage_level >= 3)
context->bw.dcn.watermarks.d = context->bw.dcn.watermarks.a;
 }
+#endif
 
 static bool dcn_bw_apply_registry_override(struct dc *dc)
 {
@@ -980,7 +982,24 @@ bool dcn_validate_bandwidth(
bw_consumed = v->fabric_and_dram_bandwidth;
 
display_pipe_configuration(v);
-   calc_wm_sets_and_perf_params(context, v);
+   /*calc_wm_sets_and_perf_params(context, v);*/
+   /* Only 1 set is used by dcn since no noticeable
+* performance improvement was measured and due to hw bug 
DEGVIDCN10-254
+*/
+   
dispclkdppclkdcfclk_deep_sleep_prefetch_parameters_watermarks_and_performance_calculation(v);
+
+   context->bw.dcn.watermarks.a.cstate_pstate.cstate_exit_ns =
+   v->stutter_exit_watermark * 1000;
+   
context->bw.dcn.watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns =
+   v->stutter_enter_plus_exit_watermark * 1000;
+   context->bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns =
+   v->dram_clock_change_watermark * 1000;
+   context->bw.dcn.watermarks.a.pte_meta_urgent_ns = 
v->ptemeta_urgent_watermark * 1000;
+   context->bw.dcn.watermarks.a.urgent_ns = v->urgent_watermark * 
1000;
+   context->bw.dcn.watermarks.b = context->bw.dcn.watermarks.a;
+   context->bw.dcn.watermarks.c = context->bw.dcn.watermarks.a;
+   context->bw.dcn.watermarks.d = context->bw.dcn.watermarks.a;
+
context->bw.dcn.clk.fclk_khz = (int)(bw_consumed * 100 /
(ddr4_dram_factor_single_Channel * 
v->number_of_channels));
if (bw_consumed == v->fabric_and_dram_bandwidth_vmin0p65) {
-- 
2.17.1

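The pattern the patch adopts — compute watermark set A once, then replicate it into B, C and D (working around hw bug DEGVIDCN10-254) — is simple struct assignment. A sketch with a reduced stand-in watermark struct:

```c
#include <assert.h>

/* Reduced stand-ins for the dcn watermark structs. */
struct wm_set {
	int urgent_ns;
	int pstate_change_ns;
};

struct watermarks {
	struct wm_set a, b, c, d;
};

/* Copy set A into B, C and D instead of recomputing each set, as the
 * patch does after the single watermark calculation pass. */
static void replicate_wm_set_a(struct watermarks *wm)
{
	wm->b = wm->a;
	wm->c = wm->a;
	wm->d = wm->a;
}
```

Since dcn1 never switches sets at runtime, the three copies are only there to keep the register-programming paths uniform.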


[PATCH 40/51] drm/amd/display: Enable Stereo in Dal3

2018-06-19 Thread Harry Wentland
From: Alvin lee 

- program infoframe for Stereo
- program stereo flip control registers properly

Change-Id: If547e2677b72709359b3a8602357b80961f1bfce
Signed-off-by: Alvin lee 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/Makefile  |  3 +-
 .../gpu/drm/amd/display/dc/core/dc_resource.c | 57 ++
 drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c | 18 -
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h |  4 +
 .../amd/display/modules/inc/mod_info_packet.h | 15 
 .../amd/display/modules/info_packet/Makefile  | 31 
 .../display/modules/info_packet/info_packet.c | 74 +++
 8 files changed, 165 insertions(+), 38 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/modules/inc/mod_info_packet.h
 create mode 100644 drivers/gpu/drm/amd/display/modules/info_packet/Makefile
 create mode 100644 
drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c

diff --git a/drivers/gpu/drm/amd/display/Makefile 
b/drivers/gpu/drm/amd/display/Makefile
index a2c5be493555..c97dc9613325 100644
--- a/drivers/gpu/drm/amd/display/Makefile
+++ b/drivers/gpu/drm/amd/display/Makefile
@@ -31,11 +31,12 @@ subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/dc/inc/hw
 subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/inc
 subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/freesync
 subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/color
+subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/info_packet
 
 #TODO: remove when Timing Sync feature is complete
 subdir-ccflags-y += -DBUILD_FEATURE_TIMING_SYNC=0
 
-DAL_LIBS = amdgpu_dm dc modules/freesync modules/color
+DAL_LIBS = amdgpu_dm dc modules/freesync modules/color modules/info_packet
 
 AMD_DAL = $(addsuffix /Makefile, $(addprefix 
$(FULL_AMD_DISPLAY_PATH)/,$(DAL_LIBS)))
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 72f233963748..41562ffa1c62 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1488,6 +1488,20 @@ static bool is_hdr_static_meta_changed(struct 
dc_stream_state *cur_stream,
return false;
 }
 
+static bool is_vsc_info_packet_changed(struct dc_stream_state *cur_stream,
+   struct dc_stream_state *new_stream)
+{
+   if (cur_stream == NULL)
+   return true;
+
+   if (memcmp(&cur_stream->vsc_infopacket,
+   &new_stream->vsc_infopacket,
+   sizeof(struct dc_info_packet)) != 0)
+   return true;
+
+   return false;
+}
+
 static bool is_timing_changed(struct dc_stream_state *cur_stream,
struct dc_stream_state *new_stream)
 {
@@ -1528,6 +1542,9 @@ static bool are_stream_backends_same(
if (stream_a->dpms_off != stream_b->dpms_off)
return false;
 
+   if (is_vsc_info_packet_changed(stream_a, stream_b))
+   return false;
+
return true;
 }
 
@@ -2414,43 +2431,10 @@ static void set_vsc_info_packet(
struct dc_info_packet *info_packet,
struct dc_stream_state *stream)
 {
-   unsigned int vscPacketRevision = 0;
-   unsigned int i;
-
-   /*VSC packet set to 2 when DP revision >= 1.2*/
-   if (stream->psr_version != 0) {
-   vscPacketRevision = 2;
-   }
-
-   /* VSC packet not needed based on the features
-* supported by this DP display
-*/
-   if (vscPacketRevision == 0)
+   if (!stream->vsc_infopacket.valid)
return;
 
-   if (vscPacketRevision == 0x2) {
-   /* Secondary-data Packet ID = 0*/
-   info_packet->hb0 = 0x00;
-   /* 07h - Packet Type Value indicating Video
-* Stream Configuration packet
-*/
-   info_packet->hb1 = 0x07;
-   /* 02h = VSC SDP supporting 3D stereo and PSR
-* (applies to eDP v1.3 or higher).
-*/
-   info_packet->hb2 = 0x02;
-   /* 08h = VSC packet supporting 3D stereo + PSR
-* (HB2 = 02h).
-*/
-   info_packet->hb3 = 0x08;
-
-   for (i = 0; i < 28; i++)
-   info_packet->sb[i] = 0;
-
-   info_packet->valid = true;
-   }
-
-   /*TODO: stereo 3D support and extend pixel encoding colorimetry*/
+   *info_packet = stream->vsc_infopacket;
 }
 
 void dc_resource_state_destruct(struct dc_state *context)
@@ -2632,6 +2616,9 @@ bool pipe_need_reprogram(
if (pipe_ctx_old->stream->dpms_off != pipe_ctx->stream->dpms_off)
return true;
 
+   if (is_vsc_info_packet_changed(pipe_ctx_old->stream, pipe_ctx->stream))
+   return true;
+
return false;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h 

[PATCH 26/51] drm/amd/display: add safe_to_lower support to dcn wm programming

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

This will prevent watermarks from lowering when unsafe to do so.

Change-Id: I848dcf3dfbdab9c64e365857c98a7efdc65410cb
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Charlene Liu 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.c   | 346 +++---
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |   4 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |   2 +-
 3 files changed, 214 insertions(+), 138 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
index 943143efbb82..63b75ac4a1d5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
@@ -193,7 +193,8 @@ static uint32_t convert_and_clamp(
 void hubbub1_program_watermarks(
struct hubbub *hubbub,
struct dcn_watermark_set *watermarks,
-   unsigned int refclk_mhz)
+   unsigned int refclk_mhz,
+   bool safe_to_lower)
 {
uint32_t force_en = hubbub->ctx->dc->debug.disable_stutter ? 1 : 0;
/*
@@ -207,184 +208,257 @@ void hubbub1_program_watermarks(
 
/* Repeat for water mark set A, B, C and D. */
/* clock state A */
-   prog_wm_value = convert_and_clamp(watermarks->a.urgent_ns,
-   refclk_mhz, 0x1f);
-   REG_WRITE(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, prog_wm_value);
-
-   DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_A calculated =%d\n"
-   "HW register value = 0x%x\n",
-   watermarks->a.urgent_ns, prog_wm_value);
+   if (safe_to_lower || watermarks->a.urgent_ns > 
hubbub->watermarks.a.urgent_ns) {
+   hubbub->watermarks.a.urgent_ns = watermarks->a.urgent_ns;
+   prog_wm_value = convert_and_clamp(watermarks->a.urgent_ns,
+   refclk_mhz, 0x1f);
+   REG_WRITE(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, prog_wm_value);
 
-   prog_wm_value = convert_and_clamp(watermarks->a.pte_meta_urgent_ns,
-   refclk_mhz, 0x1f);
-   REG_WRITE(DCHUBBUB_ARB_PTE_META_URGENCY_WATERMARK_A, prog_wm_value);
-   DC_LOG_BANDWIDTH_CALCS("PTE_META_URGENCY_WATERMARK_A calculated =%d\n"
-   "HW register value = 0x%x\n",
-   watermarks->a.pte_meta_urgent_ns, prog_wm_value);
+   DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_A calculated =%d\n"
+   "HW register value = 0x%x\n",
+   watermarks->a.urgent_ns, prog_wm_value);
+   }
 
-   if (REG(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A)) {
-   prog_wm_value = convert_and_clamp(
-   
watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns,
+   if (safe_to_lower || watermarks->a.pte_meta_urgent_ns > 
hubbub->watermarks.a.pte_meta_urgent_ns) {
+   hubbub->watermarks.a.pte_meta_urgent_ns = 
watermarks->a.pte_meta_urgent_ns;
+   prog_wm_value = 
convert_and_clamp(watermarks->a.pte_meta_urgent_ns,
refclk_mhz, 0x1f);
-   REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, 
prog_wm_value);
-   DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_A calculated 
=%d\n"
+   REG_WRITE(DCHUBBUB_ARB_PTE_META_URGENCY_WATERMARK_A, 
prog_wm_value);
+   DC_LOG_BANDWIDTH_CALCS("PTE_META_URGENCY_WATERMARK_A calculated 
=%d\n"
"HW register value = 0x%x\n",
-   watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns, 
prog_wm_value);
+   watermarks->a.pte_meta_urgent_ns, prog_wm_value);
+   }
 
+   if (REG(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A)) {
+   if (safe_to_lower || 
watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns
+   > 
hubbub->watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns) {
+   
hubbub->watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns =
+   
watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns;
+   prog_wm_value = convert_and_clamp(
+   
watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns,
+   refclk_mhz, 0x1f);
+   REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, 
prog_wm_value);
+   DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_A 
calculated =%d\n"
+   "HW register value = 0x%x\n",
+   
watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns, prog_wm_value);
+   }
+
+   if (safe_to_lower || watermarks->a.cstate_pstate.cstate_exit_ns
+   > 
hubbub->watermarks.a.cstate_pstate.cstate_exit_ns) {
+   

[PATCH 22/51] drm/amd/display: fix dccg dcn1 ifdef

2018-06-19 Thread Harry Wentland
From: Dmytro Laktyushkin 

Change-Id: I49bedf2d8bad4d8405375007270e3702bdd6523c
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Eric Yang 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c | 10 ++
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h |  2 ++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index df6a37b7b769..e62a21f55064 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -478,6 +478,7 @@ static void dce12_update_clocks(struct dccg *dccg,
}
 }
 
+#ifdef CONFIG_DRM_AMD_DC_DCN1_0
 static int dcn1_determine_dppclk_threshold(struct dccg *dccg, struct dc_clocks 
*new_clocks)
 {
bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz;
@@ -575,7 +576,6 @@ static void dcn1_update_clocks(struct dccg *dccg,
|| new_clocks->dcfclk_khz > dccg->clks.dcfclk_khz)
send_request_to_increase = true;
 
-#ifdef CONFIG_DRM_AMD_DC_DCN1_0
/* make sure dcf clk is before dpp clk to
 * make sure we have enough voltage to run dpp clk
 */
@@ -585,7 +585,6 @@ static void dcn1_update_clocks(struct dccg *dccg,
clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
&clock_voltage_req);
}
-#endif
 
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, 
dccg->clks.dispclk_khz)) {
dcn1_ramp_up_dispclk_with_dpp(dccg, new_clocks);
@@ -623,14 +622,12 @@ static void dcn1_update_clocks(struct dccg *dccg,
smu_req.min_deep_sleep_dcefclk_mhz = 
new_clocks->dcfclk_deep_sleep_khz;
}
 
-#ifdef CONFIG_DRM_AMD_DC_DCN1_0
if (!send_request_to_increase && send_request_to_lower) {
/*use dcfclk to request voltage*/
clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DCFCLK;
clock_voltage_req.clocks_in_khz = dcn_find_dcfclk_suits_all(dc, 
new_clocks);
dm_pp_apply_clock_for_voltage_request(dccg->ctx, 
&clock_voltage_req);
}
-#endif
 
if (new_clocks->phyclk_khz)
smu_req.display_count = 1;
@@ -642,6 +639,7 @@ static void dcn1_update_clocks(struct dccg *dccg,
 
*smu_req_cur = smu_req;
 }
+#endif
 
 static void dce_update_clocks(struct dccg *dccg,
struct dc_clocks *new_clocks,
@@ -663,11 +661,13 @@ static void dce_update_clocks(struct dccg *dccg,
}
 }
 
+#ifdef CONFIG_DRM_AMD_DC_DCN1_0
 static const struct display_clock_funcs dcn1_funcs = {
.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
.set_dispclk = dce112_set_clock,
.update_clocks = dcn1_update_clocks
 };
+#endif
 
 static const struct display_clock_funcs dce120_funcs = {
.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
@@ -816,6 +816,7 @@ struct dccg *dce120_dccg_create(struct dc_context *ctx)
return &clk_dce->base;
 }
 
+#ifdef CONFIG_DRM_AMD_DC_DCN1_0
 struct dccg *dcn1_dccg_create(struct dc_context *ctx)
 {
struct dc_debug *debug = &ctx->dc->debug;
@@ -854,6 +855,7 @@ struct dccg *dcn1_dccg_create(struct dc_context *ctx)
 
return &clk_dce->base;
 }
+#endif
 
 void dce_dccg_destroy(struct dccg **dccg)
 {
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
index be5b68d7c3c0..1f1899ef773a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
@@ -111,7 +111,9 @@ struct dccg *dce112_dccg_create(
 
 struct dccg *dce120_dccg_create(struct dc_context *ctx);
 
+#ifdef CONFIG_DRM_AMD_DC_DCN1_0
 struct dccg *dcn1_dccg_create(struct dc_context *ctx);
+#endif
 
 void dce_dccg_destroy(struct dccg **dccg);
 
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 29/51] drm/amd/display: dal 3.1.49

2018-06-19 Thread Harry Wentland
From: Tony Cheng 

Change-Id: I411ec7317d612caff8ffb7c830a9857d0e81d586
Signed-off-by: Tony Cheng 
Reviewed-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 82e0f55bc3e4..d1ac9676a539 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.48"
+#define DC_VER "3.1.49"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.17.1



[PATCH 34/51] drm/amd/display: fix potential infinite loop in fbc path

2018-06-19 Thread Harry Wentland
From: Roman Li 

- Fixing integer overflow bug in wait_for_fbc_state_changed()
- Correct the max value of retries for the corresponding warning

Change-Id: I67e3e5dd7762d3a2d73711840fc1754f8e8a4dfb
Signed-off-by: Roman Li 
Reviewed-by: Tony Cheng 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
index df027013e50c..1f7f25013217 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
@@ -143,7 +143,7 @@ static void wait_for_fbc_state_changed(
struct dce110_compressor *cp110,
bool enabled)
 {
-   uint16_t counter = 0;
+   uint32_t counter = 0;
uint32_t addr = mmFBC_STATUS;
uint32_t value;
 
@@ -158,7 +158,7 @@ static void wait_for_fbc_state_changed(
counter++;
}
 
-   if (counter == 10) {
+   if (counter == 1000) {
DC_LOG_WARNING("%s: wait counter exceeded, changes to HW not 
applied",
__func__);
} else {
-- 
2.17.1



[PATCH 35/51] drm/amd/display: Enable PPLib calls from DC on linux

2018-06-19 Thread Harry Wentland
From: Mikita Lipski 

Set the powerplay debug flag to false for both Windows and Linux
to allow the calls to pplib, so we can retrieve the clock values
from powerplay instead of using default hardcoded values.

Change-Id: I529f6fab41a88d88a5447d24e857656049e061ec
Signed-off-by: Mikita Lipski 
Reviewed-by: Charlene Liu 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
index b5a727f7e880..1761e1a40dad 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
@@ -437,7 +437,7 @@ static const struct dc_debug debug_defaults_drv = {
 */
.min_disp_clk_khz = 10,
 
-   .disable_pplib_clock_request = true,
+   .disable_pplib_clock_request = false,
.disable_pplib_wm_range = false,
.pplib_wm_report_mode = WM_REPORT_DEFAULT,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
-- 
2.17.1



[PATCH 3/5] drm/amd/pp: Memory Latency is always 25us on Vega10

2018-06-19 Thread Harry Wentland
From: Rex Zhu 

Also use the tolerable latency defined in Display to find the
lowest MCLK frequency when mclk switching is disabled

Signed-off-by: Rex Zhu 
Acked-by: Alex Deucher 
---
 .../drm/amd/powerplay/hwmgr/vega10_hwmgr.c| 24 ++-
 1 file changed, 2 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index e9a8b527d481..3e54de061496 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -55,12 +55,6 @@
 
 static const uint32_t channel_number[] = {1, 2, 0, 4, 0, 8, 0, 16, 2};
 
-#define MEM_FREQ_LOW_LATENCY25000
-#define MEM_FREQ_HIGH_LATENCY   8
-#define MEM_LATENCY_HIGH245
-#define MEM_LATENCY_LOW 35
-#define MEM_LATENCY_ERR 0x
-
 #define mmDF_CS_AON0_DramBaseAddress0  
0x0044
 #define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX 
0
 
@@ -3223,7 +3217,7 @@ static int vega10_apply_state_adjust_rules(struct 
pp_hwmgr *hwmgr,
/* Find the lowest MCLK frequency that is within
 * the tolerable latency defined in DAL
 */
-   latency = 0;
+   latency = 
hwmgr->display_config->dce_tolerable_mclk_in_active_latency;
for (i = 0; i < data->mclk_latency_table.count; i++) {
if ((data->mclk_latency_table.entries[i].latency <= 
latency) &&
(data->mclk_latency_table.entries[i].frequency 
>=
@@ -4074,18 +4068,6 @@ static void vega10_get_sclks(struct pp_hwmgr *hwmgr,
 
 }
 
-static uint32_t vega10_get_mem_latency(struct pp_hwmgr *hwmgr,
-   uint32_t clock)
-{
-   if (clock >= MEM_FREQ_LOW_LATENCY &&
-   clock < MEM_FREQ_HIGH_LATENCY)
-   return MEM_LATENCY_HIGH;
-   else if (clock >= MEM_FREQ_HIGH_LATENCY)
-   return MEM_LATENCY_LOW;
-   else
-   return MEM_LATENCY_ERR;
-}
-
 static void vega10_get_memclocks(struct pp_hwmgr *hwmgr,
struct pp_clock_levels_with_latency *clocks)
 {
@@ -4107,9 +4089,7 @@ static void vega10_get_memclocks(struct pp_hwmgr *hwmgr,
dep_table->entries[i].clk * 10;
clocks->data[clocks->num_levels].latency_in_us =
data->mclk_latency_table.entries
-   [data->mclk_latency_table.count].latency =
-   vega10_get_mem_latency(hwmgr,
-   dep_table->entries[i].clk);
+   [data->mclk_latency_table.count].latency = 25;
clocks->num_levels++;
data->mclk_latency_table.count++;
}
-- 
2.17.1



[PATCH 1/5] drm/amd/display: Implement dm_pp_get_clock_levels_by_type_with_latency

2018-06-19 Thread Harry Wentland
From: Rex Zhu 

Display component can get the true max_displ_clk_in_khz instead of a
hardcoded value

Signed-off-by: Rex Zhu 
Acked-by: Alex Deucher 
---
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
index c2576b235c52..35fe97a7bc24 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
@@ -251,13 +251,12 @@ static void pp_to_dc_clock_levels_with_voltage(
} else
clk_level_info->num_levels = pp_clks->num_levels;
 
-   DRM_INFO("DM_PPLIB: values for %s clock\n",
+   DRM_DEBUG("DM_PPLIB: values for %s clock\n",
DC_DECODE_PP_CLOCK_TYPE(dc_clk_type));
 
for (i = 0; i < clk_level_info->num_levels; i++) {
-   DRM_INFO("DM_PPLIB:\t %d in 10kHz\n", 
pp_clks->data[i].clocks_in_khz);
-   /* translate 10kHz to kHz */
-   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz * 10;
+   DRM_DEBUG("DM_PPLIB:\t %d\n", pp_clks->data[i].clocks_in_khz);
+   clk_level_info->data[i].clocks_in_khz = 
pp_clks->data[i].clocks_in_khz;
clk_level_info->data[i].voltage_in_mv = 
pp_clks->data[i].voltage_in_mv;
}
 }
@@ -364,15 +363,18 @@ bool dm_pp_get_clock_levels_by_type_with_voltage(
 {
struct amdgpu_device *adev = ctx->driver_context;
void *pp_handle = adev->powerplay.pp_handle;
-   struct pp_clock_levels_with_voltage pp_clk_info = {0};
+   struct pp_clock_levels_with_voltage pp_clks = { 0 };
const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
+   if (!pp_funcs || !pp_funcs->get_clock_by_type_with_voltage)
+   return false;
+
if (pp_funcs->get_clock_by_type_with_voltage(pp_handle,
 
dc_to_pp_clock_type(clk_type),
-&pp_clk_info))
+&pp_clks))
return false;
 
-   pp_to_dc_clock_levels_with_voltage(&pp_clk_info, clk_level_info, 
clk_type);
+   pp_to_dc_clock_levels_with_voltage(&pp_clks, clk_level_info, clk_type);
 
return true;
 }
-- 
2.17.1



[PATCH 4/5] drm/amd/display: Delete old implementation of bw_calcs_data_update_from_pplib

2018-06-19 Thread Harry Wentland
From: Rex Zhu 

This function was copied from dce112 and does not apply to AI/RV;
the driver needs to re-implement it.

Signed-off-by: Rex Zhu 
---
 .../amd/display/dc/dce120/dce120_resource.c   | 123 +-
 1 file changed, 1 insertion(+), 122 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
index 13c388a608c4..1126dc56e407 100644
--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
@@ -26,7 +26,6 @@
 
 #include "dm_services.h"
 
-
 #include "stream_encoder.h"
 #include "resource.h"
 #include "include/irq_service_interface.h"
@@ -691,127 +690,7 @@ static const struct resource_funcs dce120_res_pool_funcs 
= {
 
 static void bw_calcs_data_update_from_pplib(struct dc *dc)
 {
-   struct dm_pp_clock_levels_with_latency eng_clks = {0};
-   struct dm_pp_clock_levels_with_latency mem_clks = {0};
-   struct dm_pp_wm_sets_with_clock_ranges clk_ranges = {0};
-   int i;
-   unsigned int clk;
-   unsigned int latency;
-
-   /*do system clock*/
-   if (!dm_pp_get_clock_levels_by_type_with_latency(
-   dc->ctx,
-   DM_PP_CLOCK_TYPE_ENGINE_CLK,
-   &eng_clks) || eng_clks.num_levels == 0) {
-
-   eng_clks.num_levels = 8;
-   clk = 30;
-
-   for (i = 0; i < eng_clks.num_levels; i++) {
-   eng_clks.data[i].clocks_in_khz = clk;
-   clk += 10;
-   }
-   }
-
-   /* convert all the clock fro kHz to fix point mHz  TODO: wloop data */
-   dc->bw_vbios->high_sclk = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels-1].clocks_in_khz, 1000);
-   dc->bw_vbios->mid1_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels/8].clocks_in_khz, 1000);
-   dc->bw_vbios->mid2_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels*2/8].clocks_in_khz, 1000);
-   dc->bw_vbios->mid3_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz, 1000);
-   dc->bw_vbios->mid4_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels*4/8].clocks_in_khz, 1000);
-   dc->bw_vbios->mid5_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels*5/8].clocks_in_khz, 1000);
-   dc->bw_vbios->mid6_sclk  = bw_frc_to_fixed(
-   eng_clks.data[eng_clks.num_levels*6/8].clocks_in_khz, 1000);
-   dc->bw_vbios->low_sclk  = bw_frc_to_fixed(
-   eng_clks.data[0].clocks_in_khz, 1000);
-
-   /*do memory clock*/
-   if (!dm_pp_get_clock_levels_by_type_with_latency(
-   dc->ctx,
-   DM_PP_CLOCK_TYPE_MEMORY_CLK,
-   &mem_clks) || mem_clks.num_levels == 0) {
-
-   mem_clks.num_levels = 3;
-   clk = 25;
-   latency = 45;
-
-   for (i = 0; i < eng_clks.num_levels; i++) {
-   mem_clks.data[i].clocks_in_khz = clk;
-   mem_clks.data[i].latency_in_us = latency;
-   clk += 50;
-   latency -= 5;
-   }
-
-   }
-
-   /* we don't need to call PPLIB for validation clock since they
-* also give us the highest sclk and highest mclk (UMA clock).
-* ALSO always convert UMA clock (from PPLIB)  to YCLK (HW formula):
-* YCLK = UMACLK*m_memoryTypeMultiplier
-*/
-   dc->bw_vbios->low_yclk = bw_frc_to_fixed(
-   mem_clks.data[0].clocks_in_khz * MEMORY_TYPE_MULTIPLIER, 1000);
-   dc->bw_vbios->mid_yclk = bw_frc_to_fixed(
-   mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * 
MEMORY_TYPE_MULTIPLIER,
-   1000);
-   dc->bw_vbios->high_yclk = bw_frc_to_fixed(
-   mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * 
MEMORY_TYPE_MULTIPLIER,
-   1000);
-
-   /* Now notify PPLib/SMU about which Watermarks sets they should select
-* depending on DPM state they are in. And update BW MGR GFX Engine and
-* Memory clock member variables for Watermarks calculations for each
-* Watermark Set
-*/
-   clk_ranges.num_wm_sets = 4;
-   clk_ranges.wm_clk_ranges[0].wm_set_id = WM_SET_A;
-   clk_ranges.wm_clk_ranges[0].wm_min_eng_clk_in_khz =
-   eng_clks.data[0].clocks_in_khz;
-   clk_ranges.wm_clk_ranges[0].wm_max_eng_clk_in_khz =
-   eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 
1;
-   clk_ranges.wm_clk_ranges[0].wm_min_memg_clk_in_khz =
-   mem_clks.data[0].clocks_in_khz;
-   clk_ranges.wm_clk_ranges[0].wm_max_mem_clk_in_khz =
-   

[PATCH 5/5] drm/amd/display: Refine the interface dm_pp_notify_wm_clock_changes

2018-06-19 Thread Harry Wentland
From: Rex Zhu 

Change the function parameter type from dm_pp_wm_sets_with_clock_ranges *
to void * so this interface can be supported on AI/RV.

Signed-off-by: Rex Zhu 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c | 2 +-
 drivers/gpu/drm/amd/display/dc/dm_services.h   | 2 +-
 drivers/gpu/drm/amd/include/kgd_pp_interface.h | 2 +-
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c  | 6 +++---
 drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  | 4 ++--
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c  | 3 ++-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 3 ++-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 3 ++-
 drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h| 2 +-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h  | 3 +--
 10 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 6b005209fe5a..ad4bd4a0e1aa 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -425,7 +425,7 @@ bool dm_pp_get_clock_levels_by_type_with_voltage(
 
 bool dm_pp_notify_wm_clock_changes(
const struct dc_context *ctx,
-   struct dm_pp_wm_sets_with_clock_ranges *wm_with_clock_ranges)
+   void *clock_ranges)
 {
/* TODO: to be implemented */
return false;
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services.h 
b/drivers/gpu/drm/amd/display/dc/dm_services.h
index 4ff9b2bba178..535b415386b6 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services.h
@@ -217,7 +217,7 @@ bool dm_pp_get_clock_levels_by_type_with_voltage(
 
 bool dm_pp_notify_wm_clock_changes(
const struct dc_context *ctx,
-   struct dm_pp_wm_sets_with_clock_ranges *wm_with_clock_ranges);
+   void *clock_ranges);
 
 void dm_pp_get_funcs_rv(struct dc_context *ctx,
struct pp_smu_funcs_rv *funcs);
diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h 
b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index 06f08f34a110..7250fb8804f5 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -261,7 +261,7 @@ struct amd_pm_funcs {
enum amd_pp_clock_type type,
struct pp_clock_levels_with_voltage *clocks);
int (*set_watermarks_for_clocks_ranges)(void *handle,
-   struct pp_wm_sets_with_clock_ranges_soc15 
*wm_with_clock_ranges);
+   void *clock_ranges);
int (*display_clock_voltage_request)(void *handle,
struct pp_display_clock_request *clock);
int (*get_display_mode_validation_clocks)(void *handle,
diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c 
b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index d567be49c31b..fdbd5667c901 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -1118,17 +1118,17 @@ static int pp_get_clock_by_type_with_voltage(void 
*handle,
 }
 
 static int pp_set_watermarks_for_clocks_ranges(void *handle,
-   struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+   void *clock_ranges)
 {
struct pp_hwmgr *hwmgr = handle;
int ret = 0;
 
-   if (!hwmgr || !hwmgr->pm_en ||!wm_with_clock_ranges)
+   if (!hwmgr || !hwmgr->pm_en || !clock_ranges)
return -EINVAL;
 
mutex_lock(&hwmgr->smu_lock);
ret = phm_set_watermarks_for_clocks_ranges(hwmgr,
-   wm_with_clock_ranges);
+   clock_ranges);
mutex_unlock(&hwmgr->smu_lock);
 
return ret;
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
index a0bb921fac22..53207e76b0f3 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
@@ -435,7 +435,7 @@ int phm_get_clock_by_type_with_voltage(struct pp_hwmgr 
*hwmgr,
 }
 
 int phm_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
-   struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+   void *clock_ranges)
 {
PHM_FUNC_CHECK(hwmgr);
 
@@ -443,7 +443,7 @@ int phm_set_watermarks_for_clocks_ranges(struct pp_hwmgr 
*hwmgr,
return -EINVAL;
 
return hwmgr->hwmgr_func->set_watermarks_for_clocks_ranges(hwmgr,
-   wm_with_clock_ranges);
+   clock_ranges);
 }
 
 int phm_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c 

[PATCH 0/5] Rex's pplib/dc changes rebased on latest DC

2018-06-19 Thread Harry Wentland
Sending Rex's pplib changes rebased on the latest DC as that had some work 
Mikita did in the same area.

Patch 1 is
Reviewed-by: Harry Wentland 

Patches 2-3 are
Acked-by: Harry Wentland 

Not sure yet about 4-5. Will need to get someone with more expertise to eyeball 
those.

Harry

Rex Zhu (5):
  drm/amd/display: Implement dm_pp_get_clock_levels_by_type_with_latency
  drm/amd/pp: Fix wrong clock-unit exported to Display
  drm/amd/pp: Memory Latency is always 25us on Vega10
  drm/amd/display: Delete old implementation of
bw_calcs_data_update_from_pplib
  drm/amd/display: Refine the interface dm_pp_notify_wm_clock_changes

 .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  16 ++-
 .../display/amdgpu_dm/amdgpu_dm_services.c|   2 +-
 .../amd/display/dc/dce120/dce120_resource.c   | 123 +-
 drivers/gpu/drm/amd/display/dc/dm_services.h  |   2 +-
 .../gpu/drm/amd/include/kgd_pp_interface.h|   2 +-
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c |   6 +-
 .../drm/amd/powerplay/hwmgr/hardwaremanager.c |   4 +-
 .../gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c |   7 +-
 .../drm/amd/powerplay/hwmgr/vega10_hwmgr.c|  37 ++
 .../drm/amd/powerplay/hwmgr/vega12_hwmgr.c|  13 +-
 .../drm/amd/powerplay/inc/hardwaremanager.h   |   2 +-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h |   3 +-
 12 files changed, 40 insertions(+), 177 deletions(-)

-- 
2.17.1



[PATCH 2/5] drm/amd/pp: Fix wrong clock-unit exported to Display

2018-06-19 Thread Harry Wentland
From: Rex Zhu 

Convert clock values from 10KHz units (as reported by the SMU) to the
KHz units needed by the Display component.

This can fix the issue 4k Monitor can't be lit up on Vega/Raven.

Signed-off-by: Rex Zhu 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c  |  4 ++--
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 10 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 10 +-
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
index d4bc83e81389..c905df42adc5 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
@@ -993,7 +993,7 @@ static int smu10_get_clock_by_type_with_latency(struct 
pp_hwmgr *hwmgr,
 
clocks->num_levels = 0;
for (i = 0; i < pclk_vol_table->count; i++) {
-   clocks->data[i].clocks_in_khz = pclk_vol_table->entries[i].clk;
+   clocks->data[i].clocks_in_khz = pclk_vol_table->entries[i].clk 
* 10;
clocks->data[i].latency_in_us = latency_required ?
smu10_get_mem_latency(hwmgr,
pclk_vol_table->entries[i].clk) 
:
@@ -1044,7 +1044,7 @@ static int smu10_get_clock_by_type_with_voltage(struct 
pp_hwmgr *hwmgr,
 
clocks->num_levels = 0;
for (i = 0; i < pclk_vol_table->count; i++) {
-   clocks->data[i].clocks_in_khz = pclk_vol_table->entries[i].clk;
+   clocks->data[i].clocks_in_khz = pclk_vol_table->entries[i].clk  
* 10;
clocks->data[i].voltage_in_mv = pclk_vol_table->entries[i].vol;
clocks->num_levels++;
}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index 3b8d36df52e9..e9a8b527d481 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -4067,7 +4067,7 @@ static void vega10_get_sclks(struct pp_hwmgr *hwmgr,
for (i = 0; i < dep_table->count; i++) {
if (dep_table->entries[i].clk) {
clocks->data[clocks->num_levels].clocks_in_khz =
-   dep_table->entries[i].clk;
+   dep_table->entries[i].clk * 10;
clocks->num_levels++;
}
}
@@ -4104,7 +4104,7 @@ static void vega10_get_memclocks(struct pp_hwmgr *hwmgr,
clocks->data[clocks->num_levels].clocks_in_khz =
data->mclk_latency_table.entries
[data->mclk_latency_table.count].frequency =
-   dep_table->entries[i].clk;
+   dep_table->entries[i].clk * 10;
clocks->data[clocks->num_levels].latency_in_us =
data->mclk_latency_table.entries
[data->mclk_latency_table.count].latency =
@@ -4126,7 +4126,7 @@ static void vega10_get_dcefclocks(struct pp_hwmgr *hwmgr,
uint32_t i;
 
for (i = 0; i < dep_table->count; i++) {
-   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk * 10;
clocks->data[i].latency_in_us = 0;
clocks->num_levels++;
}
@@ -4142,7 +4142,7 @@ static void vega10_get_socclocks(struct pp_hwmgr *hwmgr,
uint32_t i;
 
for (i = 0; i < dep_table->count; i++) {
-   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk * 10;
clocks->data[i].latency_in_us = 0;
clocks->num_levels++;
}
@@ -4202,7 +4202,7 @@ static int vega10_get_clock_by_type_with_voltage(struct 
pp_hwmgr *hwmgr,
}
 
for (i = 0; i < dep_table->count; i++) {
-   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk  * 10;
clocks->data[i].voltage_in_mv = 
(uint32_t)(table_info->vddc_lookup_table->
entries[dep_table->entries[i].vddInd].us_vdd);
clocks->num_levels++;
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index 782e2098824d..d685ce7f88cc 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -1576,7 +1576,7 @@ static int vega12_get_sclks(struct pp_hwmgr *hwmgr,
 
for (i = 0; i < ucount; i++) {
clocks->data[i].clocks_in_khz =
-   dpm_table->dpm_levels[i].value * 100;
+   

Re: [PATCH 05/13] drm/amd/powerplay: retrieve all clock ranges on startup

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> So that we do not need to use PPSMC_MSG_GetMin/MaxDpmFreq to
> get the clock ranges on runtime. Since that causes some problems.
>
> Change-Id: Ia0d6390c976749538b35c8ffde5d1e661b4944c0
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 69 
> +-
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |  8 +++
>  2 files changed, 61 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index bc976e1..ea530af 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -856,6 +856,48 @@ static int vega12_power_control_set_level(struct 
> pp_hwmgr *hwmgr)
> return result;
>  }
>
> +static int vega12_get_all_clock_ranges_helper(struct pp_hwmgr *hwmgr,
> +   PPCLK_e clkid, struct vega12_clock_range *clock)
> +{
> +   /* AC Max */
> +   PP_ASSERT_WITH_CODE(
> +   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetMaxDpmFreq, (clkid << 16)) == 0,
> +   "[GetClockRanges] Failed to get max ac clock from SMC!",
> +   return -1);


Please use a proper error code here (e.g., -EINVAL) rather than -1.

> +   vega12_read_arg_from_smc(hwmgr, &(clock->ACMax));
> +
> +   /* AC Min */
> +   PP_ASSERT_WITH_CODE(
> +   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetMinDpmFreq, (clkid << 16)) == 0,
> +   "[GetClockRanges] Failed to get min ac clock from SMC!",
> +   return -1);

Same here.

> +   vega12_read_arg_from_smc(hwmgr, &(clock->ACMin));
> +
> +   /* DC Max */
> +   PP_ASSERT_WITH_CODE(
> +   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetDcModeMaxDpmFreq, (clkid << 16)) == 0,
> +   "[GetClockRanges] Failed to get max dc clock from SMC!",
> +   return -1);

and here.

> +   vega12_read_arg_from_smc(hwmgr, &(clock->DCMax));
> +
> +   return 0;
> +}
> +
> +static int vega12_get_all_clock_ranges(struct pp_hwmgr *hwmgr)
> +{
> +   struct vega12_hwmgr *data =
> +   (struct vega12_hwmgr *)(hwmgr->backend);
> +   uint32_t i;
> +
> +   for (i = 0; i < PPCLK_COUNT; i++)
> +   PP_ASSERT_WITH_CODE(!vega12_get_all_clock_ranges_helper(hwmgr,
> +   i, &(data->clk_range[i])),
> +   "Failed to get clk range from SMC!",
> +   return -1);


And here.  With those fixed:
Acked-by: Alex Deucher 

> +
> +   return 0;
> +}
> +
>  static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
>  {
> int tmp_result, result = 0;
> @@ -883,6 +925,11 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr 
> *hwmgr)
> "Failed to power control set level!",
> result = tmp_result);
>
> +   result = vega12_get_all_clock_ranges(hwmgr);
> +   PP_ASSERT_WITH_CODE(!result,
> +   "Failed to get all clock ranges!",
> +   return result);
> +
> result = vega12_odn_initialize_default_settings(hwmgr);
> PP_ASSERT_WITH_CODE(!result,
> "Failed to power control set level!",
> @@ -1472,24 +1519,14 @@ static int vega12_get_clock_ranges(struct pp_hwmgr 
> *hwmgr,
> PPCLK_e clock_select,
> bool max)
>  {
> -   int result;
> -   *clock = 0;
> +   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
>
> -   if (max) {
> -PP_ASSERT_WITH_CODE(
> -   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetMaxDpmFreq, (clock_select << 16)) == 0,
> -   "[GetClockRanges] Failed to get max clock from SMC!",
> -   return -1);
> -   result = vega12_read_arg_from_smc(hwmgr, clock);
> -   } else {
> -   PP_ASSERT_WITH_CODE(
> -   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_GetMinDpmFreq, (clock_select << 16)) == 0,
> -   "[GetClockRanges] Failed to get min clock from SMC!",
> -   return -1);
> -   result = vega12_read_arg_from_smc(hwmgr, clock);
> -   }
> +   if (max)
> +   *clock = data->clk_range[clock_select].ACMax;
> +   else
> +   *clock = data->clk_range[clock_select].ACMin;
>
> -   return result;
> +   return 0;
>  }
>
>  static int vega12_get_sclks(struct pp_hwmgr *hwmgr,
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
> index 49b38df..e18c083 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
> +++ 

Re: [PATCH 08/13] drm/amd/powerplay: correct smc display config setting

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> The multi-monitor situation should be taken into consideration.
> Also, there is no need to set up a UCLK hard min clock level.

This looks like it should be two patches since there are two distinct
changes.  Also please extend the commit messages a bit (e.g., "need to
take into account multi-head with synced displays" and "we don't need
to set a uclk hard min because...").  With that fixed:
Acked-by: Alex Deucher 

>
> Change-Id: Icf1bc9b420a4048d9071e386308d30999491
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 13 ++---
>  1 file changed, 2 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index cb0589e..4732179 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -1399,9 +1399,9 @@ static int 
> vega12_notify_smc_display_config_after_ps_adjustment(
> (struct vega12_hwmgr *)(hwmgr->backend);
> struct PP_Clocks min_clocks = {0};
> struct pp_display_clock_request clock_req;
> -   uint32_t clk_request;
>
> -   if (hwmgr->display_config->num_display > 1)
> +   if ((hwmgr->display_config->num_display > 1) &&
> +   !hwmgr->display_config->multi_monitor_in_sync)
> vega12_notify_smc_display_change(hwmgr, false);
> else
> vega12_notify_smc_display_change(hwmgr, true);
> @@ -1426,15 +1426,6 @@ static int 
> vega12_notify_smc_display_config_after_ps_adjustment(
> }
> }
>
> -   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
> -   clk_request = (PPCLK_UCLK << 16) | (min_clocks.memoryClock) / 
> 100;
> -   PP_ASSERT_WITH_CODE(
> -   smum_send_msg_to_smc_with_parameter(hwmgr, 
> PPSMC_MSG_SetHardMinByFreq, clk_request) == 0,
> -   
> "[PhwVega12_NotifySMCDisplayConfigAfterPowerStateAdjustment] Attempt to set 
> UCLK HardMin Failed!",
> -   return -1);
> -   data->dpm_table.mem_table.dpm_state.hard_min_level = 
> min_clocks.memoryClock;
> -   }
> -
> return 0;
>  }
>
> --
> 2.7.4
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 09/13] drm/amd/powerplay: correct vega12 max num of dpm level

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:39 AM, Evan Quan  wrote:
> Use MAX_NUM_CLOCKS instead of VG12_PSUEDO* macros for
> the max number of dpm levels.
>
> Change-Id: Ida49f51777663a8d68d05ddcd41f4df0d8e61481
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 17 +
>  1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index 4732179..a227ace 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -1642,8 +1642,8 @@ static int vega12_get_sclks(struct pp_hwmgr *hwmgr,
> return -1;
>
> dpm_table = &(data->dpm_table.gfx_table);
> -   ucount = (dpm_table->count > VG12_PSUEDO_NUM_GFXCLK_DPM_LEVELS) ?
> -   VG12_PSUEDO_NUM_GFXCLK_DPM_LEVELS : dpm_table->count;
> +   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
> +   MAX_NUM_CLOCKS : dpm_table->count;
>
> for (i = 0; i < ucount; i++) {
> clocks->data[i].clocks_in_khz =
> @@ -1674,11 +1674,12 @@ static int vega12_get_memclocks(struct pp_hwmgr 
> *hwmgr,
> return -1;
>
> dpm_table = &(data->dpm_table.mem_table);
> -   ucount = (dpm_table->count > VG12_PSUEDO_NUM_UCLK_DPM_LEVELS) ?
> -   VG12_PSUEDO_NUM_UCLK_DPM_LEVELS : dpm_table->count;
> +   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
> +   MAX_NUM_CLOCKS : dpm_table->count;
>
> for (i = 0; i < ucount; i++) {
> clocks->data[i].clocks_in_khz =
> +   data->mclk_latency_table.entries[i].frequency =
> dpm_table->dpm_levels[i].value * 100;
>
> clocks->data[i].latency_in_us =
> @@ -1704,8 +1705,8 @@ static int vega12_get_dcefclocks(struct pp_hwmgr *hwmgr,
>
>
> dpm_table = &(data->dpm_table.dcef_table);
> -   ucount = (dpm_table->count > VG12_PSUEDO_NUM_DCEFCLK_DPM_LEVELS) ?
> -   VG12_PSUEDO_NUM_DCEFCLK_DPM_LEVELS : dpm_table->count;
> +   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
> +   MAX_NUM_CLOCKS : dpm_table->count;
>
> for (i = 0; i < ucount; i++) {
> clocks->data[i].clocks_in_khz =
> @@ -1732,8 +1733,8 @@ static int vega12_get_socclocks(struct pp_hwmgr *hwmgr,
>
>
> dpm_table = &(data->dpm_table.soc_table);
> -   ucount = (dpm_table->count > VG12_PSUEDO_NUM_SOCCLK_DPM_LEVELS) ?
> -   VG12_PSUEDO_NUM_SOCCLK_DPM_LEVELS : dpm_table->count;
> +   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
> +   MAX_NUM_CLOCKS : dpm_table->count;
>
> for (i = 0; i < ucount; i++) {
> clocks->data[i].clocks_in_khz =
> --
> 2.7.4
>


Re: [PATCH 04/13] drm/amd/powerplay: revise default dpm tables setup

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> Initialize the soft/hard min/max levels correctly and
> handle the DPM-disabled situation.
>
> Change-Id: I9a1d303ee54ac4c9687f72c86097b008ae398c05
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 334 
> -
>  1 file changed, 132 insertions(+), 202 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index e81661cc..bc976e1 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -453,37 +453,30 @@ static int vega12_setup_asic_task(struct pp_hwmgr 
> *hwmgr)
>   */
>  static void vega12_init_dpm_state(struct vega12_dpm_state *dpm_state)
>  {
> -   dpm_state->soft_min_level = 0xff;
> -   dpm_state->soft_max_level = 0xff;
> -   dpm_state->hard_min_level = 0xff;
> -   dpm_state->hard_max_level = 0xff;
> +   dpm_state->soft_min_level = 0x0;
> +   dpm_state->soft_max_level = 0x;
> +   dpm_state->hard_min_level = 0x0;
> +   dpm_state->hard_max_level = 0x;
>  }
>
> -static int vega12_get_number_dpm_level(struct pp_hwmgr *hwmgr,
> -   PPCLK_e clkID, uint32_t *num_dpm_level)
> +static int vega12_get_number_of_dpm_level(struct pp_hwmgr *hwmgr,
> +   PPCLK_e clk_id, uint32_t *num_of_levels)
>  {
> -   int result;
> -   /*
> -* SMU expects the Clock ID to be in the top 16 bits.
> -* Lower 16 bits specify the level however 0xFF is a
> -* special argument the returns the total number of levels
> -*/
> -   PP_ASSERT_WITH_CODE(smum_send_msg_to_smc_with_parameter(hwmgr,
> -   PPSMC_MSG_GetDpmFreqByIndex, (clkID << 16 | 0xFF)) == 0,
> -   "[GetNumberDpmLevel] Failed to get DPM levels from SMU for 
> CLKID!",
> -   return -EINVAL);
> -
> -   result = vega12_read_arg_from_smc(hwmgr, num_dpm_level);
> +   int ret = 0;
>
> -   PP_ASSERT_WITH_CODE(*num_dpm_level < MAX_REGULAR_DPM_NUMBER,
> -   "[GetNumberDPMLevel] Number of DPM levels is greater than 
> limit",
> -   return -EINVAL);
> +   ret = smum_send_msg_to_smc_with_parameter(hwmgr,
> +   PPSMC_MSG_GetDpmFreqByIndex,
> +   (clk_id << 16 | 0xFF));
> +   PP_ASSERT_WITH_CODE(!ret,
> +   "[GetNumOfDpmLevel] failed to get dpm levels!",
> +   return ret);
>
> -   PP_ASSERT_WITH_CODE(*num_dpm_level != 0,
> -   "[GetNumberDPMLevel] Number of CLK Levels is zero!",
> -   return -EINVAL);
> +   vega12_read_arg_from_smc(hwmgr, num_of_levels);
> +   PP_ASSERT_WITH_CODE(*num_of_levels > 0,
> +   "[GetNumOfDpmLevel] number of clk levels is invalid!",
> +   return -EINVAL);
>
> -   return result;
> +   return ret;
>  }
>
>  static int vega12_get_dpm_frequency_by_index(struct pp_hwmgr *hwmgr,
> @@ -509,6 +502,31 @@ static int vega12_get_dpm_frequency_by_index(struct 
> pp_hwmgr *hwmgr,
> return result;
>  }
>
> +static int vega12_setup_single_dpm_table(struct pp_hwmgr *hwmgr,
> +   struct vega12_single_dpm_table *dpm_table, PPCLK_e clk_id)
> +{
> +   int ret = 0;
> +   uint32_t i, num_of_levels, clk;
> +
> +   ret = vega12_get_number_of_dpm_level(hwmgr, clk_id, &num_of_levels);
> +   PP_ASSERT_WITH_CODE(!ret,
> +   "[SetupSingleDpmTable] failed to get clk levels!",
> +   return ret);
> +
> +   dpm_table->count = num_of_levels;
> +
> +   for (i = 0; i < num_of_levels; i++) {
> +   ret = vega12_get_dpm_frequency_by_index(hwmgr, clk_id, i, &clk);
> +   PP_ASSERT_WITH_CODE(!ret,
> +   "[SetupSingleDpmTable] failed to get clk of specific 
> level!",
> +   return ret);
> +   dpm_table->dpm_levels[i].value = clk;
> +   dpm_table->dpm_levels[i].enabled = true;
> +   }
> +
> +   return ret;
> +}
> +
>  /*
>   * This function is to initialize all DPM state tables
>   * for SMU based on the dependency table.
> @@ -519,224 +537,136 @@ static int vega12_get_dpm_frequency_by_index(struct 
> pp_hwmgr *hwmgr,
>   */
>  static int vega12_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
>  {
> -   uint32_t num_levels, i, clock;
>
> struct vega12_hwmgr *data =
> (struct vega12_hwmgr *)(hwmgr->backend);
> -
> struct vega12_single_dpm_table *dpm_table;
> +   int ret = 0;
>
> memset(&data->dpm_table, 0, sizeof(data->dpm_table));
>
> -   /* Initialize Sclk DPM and SOC DPM table based on allow Sclk values */
> +   /* socclk */
> dpm_table = &(data->dpm_table.soc_table);
> -
> -   PP_ASSERT_WITH_CODE(vega12_get_number_dpm_level(hwmgr, 

Re: [PATCH 07/13] drm/amd/powerplay: initialize uvd/vce powergate status

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> When UVD/VCE DPM is disabled, the powergate status should be
> set to true.

Can you explain this patch a bit?  Why is power gate state set to true
when dpm is disabled?

Alex

>
> Change-Id: I569a5aa216b5e7d64a2b504f2ff98cc83ca802d5
> Signed-off-by: Evan Quan 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 17 +
>  1 file changed, 17 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index a124b81..cb0589e 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -777,6 +777,21 @@ static int vega12_set_allowed_featuresmask(struct 
> pp_hwmgr *hwmgr)
> return 0;
>  }
>
> +static void vega12_init_powergate_state(struct pp_hwmgr *hwmgr)
> +{
> +   struct vega12_hwmgr *data =
> +   (struct vega12_hwmgr *)(hwmgr->backend);
> +
> +   data->uvd_power_gated = true;
> +   data->vce_power_gated = true;
> +
> +   if (data->smu_features[GNLD_DPM_UVD].enabled)
> +   data->uvd_power_gated = false;
> +
> +   if (data->smu_features[GNLD_DPM_VCE].enabled)
> +   data->vce_power_gated = false;
> +}
> +
>  static int vega12_enable_all_smu_features(struct pp_hwmgr *hwmgr)
>  {
> struct vega12_hwmgr *data =
> @@ -801,6 +816,8 @@ static int vega12_enable_all_smu_features(struct pp_hwmgr 
> *hwmgr)
> }
> }
>
> +   vega12_init_powergate_state(hwmgr);
> +
> return 0;
>  }
>
> --
> 2.7.4
>


[PATCH] drm/amdgpu: band aid validating VM PTs

2018-06-19 Thread Christian König
Always validating the VM PTs takes too much time. Only always validate
the per-VM BOs for now.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 819949418495..7c30451ba897 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1082,7 +1082,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
   struct amdgpu_vm_bo_base,
   vm_status);
bo_base->moved = false;
-   list_move(&bo_base->vm_status, &vm->idle);
+   list_del_init(&bo_base->vm_status);
 
bo = bo_base->bo->parent;
if (!bo)
-- 
2.14.1



Re: [PATCH 2/5] dma-buf: remove kmap_atomic interface

2018-06-19 Thread Christian König

On 18.06.2018 at 10:18, Daniel Vetter wrote:

On Fri, Jun 01, 2018 at 02:00:17PM +0200, Christian König wrote:

Neither used nor correctly implemented anywhere. Just completely remove
the interface.

Signed-off-by: Christian König 

I wonder whether we can nuke the normal kmap stuff too ... everyone seems
to want/use the vmap stuff for kernel-internal mapping needs.

Anyway, this looks good.

---
  drivers/dma-buf/dma-buf.c  | 44 --
  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c  |  2 -
  drivers/gpu/drm/armada/armada_gem.c|  2 -
  drivers/gpu/drm/drm_prime.c| 26 -
  drivers/gpu/drm/i915/i915_gem_dmabuf.c | 11 --
  drivers/gpu/drm/i915/selftests/mock_dmabuf.c   |  2 -
  drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c  |  2 -
  drivers/gpu/drm/tegra/gem.c| 14 ---
  drivers/gpu/drm/udl/udl_dmabuf.c   | 17 -
  drivers/gpu/drm/vmwgfx/vmwgfx_prime.c  | 13 ---
  .../media/common/videobuf2/videobuf2-dma-contig.c  |  1 -
  drivers/media/common/videobuf2/videobuf2-dma-sg.c  |  1 -
  drivers/media/common/videobuf2/videobuf2-vmalloc.c |  1 -
  drivers/staging/android/ion/ion.c  |  2 -
  drivers/tee/tee_shm.c  |  6 ---
  include/drm/drm_prime.h|  4 --
  include/linux/dma-buf.h|  4 --
  17 files changed, 152 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e99a8d19991b..e4c657d9fad7 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -405,7 +405,6 @@ struct dma_buf *dma_buf_export(const struct 
dma_buf_export_info *exp_info)
  || !exp_info->ops->map_dma_buf
  || !exp_info->ops->unmap_dma_buf
  || !exp_info->ops->release
- || !exp_info->ops->map_atomic
  || !exp_info->ops->map
  || !exp_info->ops->mmap)) {
return ERR_PTR(-EINVAL);
@@ -687,14 +686,6 @@ EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
   *  void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
   *  void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
   *
- *   There are also atomic variants of these interfaces. Like for kmap they
- *   facilitate non-blocking fast-paths. Neither the importer nor the exporter
- *   (in the callback) is allowed to block when using these.
- *
- *   Interfaces::
- *  void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long);
- *  void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*);
- *
   *   For importers all the restrictions of using kmap apply, like the limited
   *   supply of kmap_atomic slots. Hence an importer shall only hold onto at
   *   max 2 atomic dma_buf kmaps at the same time (in any given process 
context).

This is also about atomic kmap ...

And the subsequent language around "Note that these calls need to always
succeed." is also not true, might be good to update that stating that kmap
is optional (like we say already for vmap).

With those docs nits addressed:

Reviewed-by: Daniel Vetter 


I've fixed up patch #1 and #2 and added your Reviewed-by tag.

Since I finally had time to install dim, do you have any objections if
I now run "dim push drm-misc-next" with those two applied?


Regards,
Christian.




@@ -859,41 +850,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
  }
  EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
  
-/**

- * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
- * space. The same restrictions as for kmap_atomic and friends apply.
- * @dmabuf:[in]buffer to map page from.
- * @page_num:  [in]page in PAGE_SIZE units to map.
- *
- * This call must always succeed, any necessary preparations that might fail
- * need to be done in begin_cpu_access.
- */
-void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
-{
-   WARN_ON(!dmabuf);
-
-   return dmabuf->ops->map_atomic(dmabuf, page_num);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
-
-/**
- * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
- * @dmabuf:[in]buffer to unmap page from.
- * @page_num:  [in]page in PAGE_SIZE units to unmap.
- * @vaddr: [in]kernel space pointer obtained from dma_buf_kmap_atomic.
- *
- * This call must always succeed.
- */
-void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
-  void *vaddr)
-{
-   WARN_ON(!dmabuf);
-
-   if (dmabuf->ops->unmap_atomic)
-   dmabuf->ops->unmap_atomic(dmabuf, page_num, vaddr);
-}
-EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
-
  /**
   * dma_buf_kmap - Map a page of the buffer object into kernel address space. 
The
   * same restrictions as for kmap and friends 

Re: [PATCH 02/13] drm/amd/powerplay: smc_dpm_info structure change

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> A new member Vr2_I2C_address is added.
>
> Change-Id: I9821365721c9d73e1d2df2f65dfa97f39f0425c6
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/include/atomfirmware.h   | 5 -
>  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c   | 2 ++
>  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h   | 2 ++
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c | 2 ++
>  drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h| 5 -
>  5 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h 
> b/drivers/gpu/drm/amd/include/atomfirmware.h
> index 092d800..33b4de4 100644
> --- a/drivers/gpu/drm/amd/include/atomfirmware.h
> +++ b/drivers/gpu/drm/amd/include/atomfirmware.h
> @@ -1433,7 +1433,10 @@ struct atom_smc_dpm_info_v4_1
> uint8_t  acggfxclkspreadpercent;
> uint16_t acggfxclkspreadfreq;
>
> -   uint32_t boardreserved[10];
> +   uint8_t Vr2_I2C_address;
> +   uint8_t padding_vr2[3];
> +
> +   uint32_t boardreserved[9];
>  };
>
>  /*
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
> index aa2faff..d27c1c9 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
> @@ -699,5 +699,7 @@ int pp_atomfwctrl_get_smc_dpm_information(struct pp_hwmgr 
> *hwmgr,
> param->acggfxclkspreadpercent = info->acggfxclkspreadpercent;
> param->acggfxclkspreadfreq = info->acggfxclkspreadfreq;
>
> +   param->Vr2_I2C_address = info->Vr2_I2C_address;
> +
> return 0;
>  }
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
> index 745bd38..22e2166 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
> @@ -210,6 +210,8 @@ struct pp_atomfwctrl_smc_dpm_parameters
> uint8_t  acggfxclkspreadenabled;
> uint8_t  acggfxclkspreadpercent;
> uint16_t acggfxclkspreadfreq;
> +
> +   uint8_t Vr2_I2C_address;
>  };
>
>  int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr,
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> index 888ddca..2991470 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> @@ -230,6 +230,8 @@ static int append_vbios_pptable(struct pp_hwmgr *hwmgr, 
> PPTable_t *ppsmc_pptable
> ppsmc_pptable->AcgThresholdFreqLow = 0x;
> }
>
> +   ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address;
> +
> return 0;
>  }
>
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h 
> b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> index 2f8a3b9..b08526f 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
> @@ -499,7 +499,10 @@ typedef struct {
> uint8_t  AcgGfxclkSpreadPercent;
> uint16_t AcgGfxclkSpreadFreq;
>
> -   uint32_t BoardReserved[10];
> +  uint8_t  Vr2_I2C_address;
> +  uint8_t  padding_vr2[3];
> +
> +  uint32_t BoardReserved[9];
>
>
>uint32_t MmHubPadding[7];
> --
> 2.7.4
>


Re: [PATCH 03/13] drm/amd/powerplay: drop the acg fix

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> This workaround is not needed any more.
>
> Change-Id: I81cb20ecd52d242af26ca32860baacdb5ec126c9
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c | 6 --
>  1 file changed, 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> index 2991470..f4f366b 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
> @@ -224,12 +224,6 @@ static int append_vbios_pptable(struct pp_hwmgr *hwmgr, 
> PPTable_t *ppsmc_pptable
> ppsmc_pptable->AcgGfxclkSpreadPercent = 
> smc_dpm_table.acggfxclkspreadpercent;
> ppsmc_pptable->AcgGfxclkSpreadFreq = 
> smc_dpm_table.acggfxclkspreadfreq;
>
> -   /* 0x will disable the ACG feature */
> -   if (!(hwmgr->feature_mask & PP_ACG_MASK)) {
> -   ppsmc_pptable->AcgThresholdFreqHigh = 0x;
> -   ppsmc_pptable->AcgThresholdFreqLow = 0x;
> -   }
> -
> ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address;
>
> return 0;
> --
> 2.7.4
>


Re: [PATCH 06/13] drm/amd/powerplay: revise clock level setup

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 3:38 AM, Evan Quan  wrote:
> Make sure the clock levels are set only when DPM is enabled. The
> uvd/vce/soc clocks are also changed correspondingly.
>
> Change-Id: I1db2e2ac355fd5aea1c0a25c2b140d039a590089
> Signed-off-by: Evan Quan 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 318 
> ++---
>  1 file changed, 211 insertions(+), 107 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index ea530af..a124b81 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> @@ -958,76 +958,172 @@ static uint32_t vega12_find_lowest_dpm_level(
> break;
> }
>
> +   if (i >= table->count) {
> +   i = 0;
> +   table->dpm_levels[i].enabled = true;
> +   }
> +
> return i;
>  }
>
>  static uint32_t vega12_find_highest_dpm_level(
> struct vega12_single_dpm_table *table)
>  {
> -   uint32_t i = 0;
> +   int32_t i = 0;
> +   PP_ASSERT_WITH_CODE(table->count <= MAX_REGULAR_DPM_NUMBER,
> +   "[FindHighestDPMLevel] DPM Table has too many 
> entries!",
> +   return MAX_REGULAR_DPM_NUMBER - 1);
>
> -   if (table->count <= MAX_REGULAR_DPM_NUMBER) {
> -   for (i = table->count; i > 0; i--) {
> -   if (table->dpm_levels[i - 1].enabled)
> -   return i - 1;
> -   }
> -   } else {
> -   pr_info("DPM Table Has Too Many Entries!");
> -   return MAX_REGULAR_DPM_NUMBER - 1;
> +   for (i = table->count - 1; i >= 0; i--) {
> +   if (table->dpm_levels[i].enabled)
> +   break;
> }
>
> -   return i;
> +   if (i < 0) {
> +   i = 0;
> +   table->dpm_levels[i].enabled = true;
> +   }
> +
> +   return (uint32_t)i;
>  }
>
>  static int vega12_upload_dpm_min_level(struct pp_hwmgr *hwmgr)
>  {
> struct vega12_hwmgr *data = hwmgr->backend;
> -   if (data->smc_state_table.gfx_boot_level !=
> -   data->dpm_table.gfx_table.dpm_state.soft_min_level) {
> -   smum_send_msg_to_smc_with_parameter(hwmgr,
> -   PPSMC_MSG_SetSoftMinByFreq,
> -   PPCLK_GFXCLK<<16 | 
> data->dpm_table.gfx_table.dpm_levels[data->smc_state_table.gfx_boot_level].value);
> -   data->dpm_table.gfx_table.dpm_state.soft_min_level =
> -   data->smc_state_table.gfx_boot_level;
> +   uint32_t min_freq;
> +   int ret = 0;
> +
> +   if (data->smu_features[GNLD_DPM_GFXCLK].enabled) {
> +   min_freq = data->dpm_table.gfx_table.dpm_state.soft_min_level;
> +   PP_ASSERT_WITH_CODE(!(ret = 
> smum_send_msg_to_smc_with_parameter(
> +   hwmgr, PPSMC_MSG_SetSoftMinByFreq,
> +   (PPCLK_GFXCLK << 16) | (min_freq & 
> 0x))),
> +   "Failed to set soft min gfxclk !",
> +   return ret);
> }
>
> -   if (data->smc_state_table.mem_boot_level !=
> -   data->dpm_table.mem_table.dpm_state.soft_min_level) {
> -   smum_send_msg_to_smc_with_parameter(hwmgr,
> -   PPSMC_MSG_SetSoftMinByFreq,
> -   PPCLK_UCLK<<16 | 
> data->dpm_table.mem_table.dpm_levels[data->smc_state_table.mem_boot_level].value);
> -   data->dpm_table.mem_table.dpm_state.soft_min_level =
> -   data->smc_state_table.mem_boot_level;
> +   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
> +   min_freq = data->dpm_table.mem_table.dpm_state.soft_min_level;
> +   PP_ASSERT_WITH_CODE(!(ret = 
> smum_send_msg_to_smc_with_parameter(
> +   hwmgr, PPSMC_MSG_SetSoftMinByFreq,
> +   (PPCLK_UCLK << 16) | (min_freq & 
> 0x))),
> +   "Failed to set soft min memclk !",
> +   return ret);
> +
> +   min_freq = data->dpm_table.mem_table.dpm_state.hard_min_level;
> +   PP_ASSERT_WITH_CODE(!(ret = 
> smum_send_msg_to_smc_with_parameter(
> +   hwmgr, PPSMC_MSG_SetHardMinByFreq,
> +   (PPCLK_UCLK << 16) | (min_freq & 
> 0x))),
> +   "Failed to set hard min memclk !",
> +   return ret);
> }
>
> -   return 0;
> +   if (data->smu_features[GNLD_DPM_UVD].enabled) {
> +   min_freq = 
> data->dpm_table.vclk_table.dpm_state.soft_min_level;
> +
> +  

[PATCH 1/2] drm/amd/pp: Remove duplicate code in vega12_hwmgr.c

2018-06-19 Thread Rex Zhu
Use the smu helper function smu_set_watermarks_for_clocks_ranges()
in vega12_set_watermarks_for_clocks_ranges().

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 43 +-
 1 file changed, 1 insertion(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index bcb64cd..81b20d1 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -1719,52 +1719,11 @@ static int 
vega12_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
Watermarks_t *table = &(data->smc_state_table.water_marks_table);
struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges = 
clock_ranges;
int result = 0;
-   uint32_t i;
 
if (!data->registry_data.disable_water_mark &&
data->smu_features[GNLD_DPM_DCEFCLK].supported &&
data->smu_features[GNLD_DPM_SOCCLK].supported) {
-   for (i = 0; i < wm_with_clock_ranges->num_wm_sets_dmif; i++) {
-   table->WatermarkRow[WM_DCEFCLK][i].MinClock =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_dmif[i].wm_min_dcefclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_DCEFCLK][i].MaxClock =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_dmif[i].wm_max_dcefclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_DCEFCLK][i].MinUclk =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_dmif[i].wm_min_memclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_DCEFCLK][i].MaxUclk =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_dmif[i].wm_max_memclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_DCEFCLK][i].WmSetting = (uint8_t)
-   
wm_with_clock_ranges->wm_sets_dmif[i].wm_set_id;
-   }
-
-   for (i = 0; i < wm_with_clock_ranges->num_wm_sets_mcif; i++) {
-   table->WatermarkRow[WM_SOCCLK][i].MinClock =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_mcif[i].wm_min_socclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_SOCCLK][i].MaxClock =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_mcif[i].wm_max_socclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_SOCCLK][i].MinUclk =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_mcif[i].wm_min_memclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_SOCCLK][i].MaxUclk =
-   cpu_to_le16((uint16_t)
-   
(wm_with_clock_ranges->wm_sets_mcif[i].wm_max_memclk_in_khz) /
-   100);
-   table->WatermarkRow[WM_SOCCLK][i].WmSetting = (uint8_t)
-   
wm_with_clock_ranges->wm_sets_mcif[i].wm_set_id;
-   }
+   smu_set_watermarks_for_clocks_ranges(table, 
wm_with_clock_ranges);
data->water_marks_bitmap |= WaterMarksExist;
data->water_marks_bitmap &= ~WaterMarksLoaded;
}
-- 
1.9.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/7] drm/amdgpu: Rename set_mmhub_powergating_by_smu to powergate_mmhub

2018-06-19 Thread Quan, Evan
Reviewed-by: Evan Quan 

> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Wednesday, June 13, 2018 7:18 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex 
> Subject: [PATCH 1/7] drm/amdgpu: Rename
> set_mmhub_powergating_by_smu to powergate_mmhub
> 
> In order to keep consistent with powergate_uvd/vce.
> 
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h   | 4 ++--
>  drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c   | 4 ++--
>  drivers/gpu/drm/amd/include/kgd_pp_interface.h| 2 +-
>  drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 8 
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c | 4 ++--
>  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h | 2 +-
>  6 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
> index 9acfbee..c6d6926 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
> @@ -359,8 +359,8 @@ enum amdgpu_pcie_gen {
>   ((adev)->powerplay.pp_funcs->odn_edit_dpm_table(\
>   (adev)->powerplay.pp_handle, type, parameter,
> size))
> 
> -#define amdgpu_dpm_set_mmhub_powergating_by_smu(adev) \
> - ((adev)->powerplay.pp_funcs-
> >set_mmhub_powergating_by_smu( \
> +#define amdgpu_dpm_powergate_mmhub(adev) \
> + ((adev)->powerplay.pp_funcs->powergate_mmhub( \
>   (adev)->powerplay.pp_handle))
> 
>  struct amdgpu_dpm {
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> index 3d53c44..377f536 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> @@ -471,8 +471,8 @@ void mmhub_v1_0_update_power_gating(struct
> amdgpu_device *adev,
> 
>   RENG_EXECUTE_ON_REG_UPDATE, 1);
>   WREG32_SOC15(MMHUB, 0, mmPCTL1_RENG_EXECUTE,
> pctl1_reng_execute);
> 
> - if (adev->powerplay.pp_funcs-
> >set_mmhub_powergating_by_smu)
> -
>   amdgpu_dpm_set_mmhub_powergating_by_smu(adev);
> + if (adev->powerplay.pp_funcs->powergate_mmhub)
> + amdgpu_dpm_powergate_mmhub(adev);
> 
>   } else {
>   pctl0_reng_execute = REG_SET_FIELD(pctl0_reng_execute,
> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> index 06f08f3..0f98862 100644
> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> @@ -269,7 +269,7 @@ struct amd_pm_funcs {
>   int (*get_power_profile_mode)(void *handle, char *buf);
>   int (*set_power_profile_mode)(void *handle, long *input, uint32_t
> size);
>   int (*odn_edit_dpm_table)(void *handle, uint32_t type, long *input,
> uint32_t size);
> - int (*set_mmhub_powergating_by_smu)(void *handle);
> + int (*powergate_mmhub)(void *handle);
>  };
> 
>  #endif
> diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> index d567be4..da98208 100644
> --- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> +++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> @@ -1168,19 +1168,19 @@ static int
> pp_get_display_mode_validation_clocks(void *handle,
>   return ret;
>  }
> 
> -static int pp_set_mmhub_powergating_by_smu(void *handle)
> +static int pp_dpm_powergate_mmhub(void *handle)
>  {
>   struct pp_hwmgr *hwmgr = handle;
> 
>   if (!hwmgr || !hwmgr->pm_en)
>   return -EINVAL;
> 
> - if (hwmgr->hwmgr_func->set_mmhub_powergating_by_smu ==
> NULL) {
> + if (hwmgr->hwmgr_func->powergate_mmhub == NULL) {
>   pr_info("%s was not implemented.\n", __func__);
>   return 0;
>   }
> 
> - return hwmgr->hwmgr_func-
> >set_mmhub_powergating_by_smu(hwmgr);
> + return hwmgr->hwmgr_func->powergate_mmhub(hwmgr);
>  }
> 
>  static const struct amd_pm_funcs pp_dpm_funcs = { @@ -1227,5 +1227,5
> @@ static int pp_set_mmhub_powergating_by_smu(void *handle)
>   .set_watermarks_for_clocks_ranges =
> pp_set_watermarks_for_clocks_ranges,
>   .display_clock_voltage_request = pp_display_clock_voltage_request,
>   .get_display_mode_validation_clocks =
> pp_get_display_mode_validation_clocks,
> - .set_mmhub_powergating_by_smu =
> pp_set_mmhub_powergating_by_smu,
> + .powergate_mmhub = pp_dpm_powergate_mmhub,
>  };
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> index d4bc83e..0f8352c 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> @@ -1126,7 +1126,7 @@ static int smu10_smus_notify_pwe(struct
> pp_hwmgr *hwmgr)
>   return smum_send_msg_to_smc(hwmgr,
> PPSMC_MSG_SetRccPfcPmeRestoreRegister);
>  }
> 
> -static int 

RE: [PATCH 2/7] drm/amd/pp: Rename enable_per_cu_power_gating to powergate_gfx

2018-06-19 Thread Quan, Evan
Reviewed-by: Evan Quan 

> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Wednesday, June 13, 2018 7:18 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex 
> Subject: [PATCH 2/7] drm/amd/pp: Rename enable_per_cu_power_gating
> to powergate_gfx
> 
> keep consistent with powergate_uvd/vce/mmhub
> 
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/powerplay/amd_powerplay.c   | 6 +++---
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.c | 2 +-
> drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.h | 2 +-
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c| 2 +-
>  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h   | 2 +-
>  5 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> index da98208..b69da11 100644
> --- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> +++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> @@ -236,13 +236,13 @@ static int pp_set_powergating_state(void *handle,
>   pr_err("gfx off control failed!\n");
>   }
> 
> - if (hwmgr->hwmgr_func->enable_per_cu_power_gating == NULL) {
> - pr_debug("%s was not implemented.\n", __func__);
> + if (hwmgr->hwmgr_func->powergate_gfx == NULL) {
> + pr_info("%s was not implemented.\n", __func__);
>   return 0;
>   }
> 
>   /* Enable/disable GFX per cu powergating through SMU */
> - return hwmgr->hwmgr_func-
> >enable_per_cu_power_gating(hwmgr,
> + return hwmgr->hwmgr_func->powergate_gfx(hwmgr,
>   state == AMD_PG_STATE_GATE);
>  }
> 
> diff --git
> a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.c
> index 4149562..683b29a 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.c
> @@ -416,7 +416,7 @@ int smu7_update_clock_gatings(struct pp_hwmgr
> *hwmgr,
>   * Powerplay will only control the static per CU Power Gating.
>   * Dynamic per CU Power Gating will be done in gfx.
>   */
> -int smu7_enable_per_cu_power_gating(struct pp_hwmgr *hwmgr, bool
> enable)
> +int smu7_powergate_gfx(struct pp_hwmgr *hwmgr, bool enable)
>  {
>   struct amdgpu_device *adev = hwmgr->adev;
> 
> diff --git
> a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.h
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.h
> index be7f66d..fc8f8a6 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.h
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_clockpowergating.h
> @@ -33,6 +33,6 @@
>  int smu7_disable_clock_power_gating(struct pp_hwmgr *hwmgr);  int
> smu7_update_clock_gatings(struct pp_hwmgr *hwmgr,
>   const uint32_t *msg_id);
> -int smu7_enable_per_cu_power_gating(struct pp_hwmgr *hwmgr, bool
> enable);
> +int smu7_powergate_gfx(struct pp_hwmgr *hwmgr, bool enable);
> 
>  #endif
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> index b73e200..b4c93a9 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> @@ -5044,7 +5044,7 @@ static int smu7_set_power_profile_mode(struct
> pp_hwmgr *hwmgr, long *input, uint
>   .get_fan_control_mode = smu7_get_fan_control_mode,
>   .force_clock_level = smu7_force_clock_level,
>   .print_clock_levels = smu7_print_clock_levels,
> - .enable_per_cu_power_gating =
> smu7_enable_per_cu_power_gating,
> + .powergate_gfx = smu7_powergate_gfx,
>   .get_sclk_od = smu7_get_sclk_od,
>   .set_sclk_od = smu7_set_sclk_od,
>   .get_mclk_od = smu7_get_mclk_od,
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> index 9b07d6e..95e29a2 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> @@ -302,7 +302,7 @@ struct pp_hwmgr_func {
>   int (*power_off_asic)(struct pp_hwmgr *hwmgr);
>   int (*force_clock_level)(struct pp_hwmgr *hwmgr, enum
> pp_clock_type type, uint32_t mask);
>   int (*print_clock_levels)(struct pp_hwmgr *hwmgr, enum
> pp_clock_type type, char *buf);
> - int (*enable_per_cu_power_gating)(struct pp_hwmgr *hwmgr, bool
> enable);
> + int (*powergate_gfx)(struct pp_hwmgr *hwmgr, bool enable);
>   int (*get_sclk_od)(struct pp_hwmgr *hwmgr);
>   int (*set_sclk_od)(struct pp_hwmgr *hwmgr, uint32_t value);
>   int (*get_mclk_od)(struct pp_hwmgr *hwmgr);
> --
> 1.9.1
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 40/51] drm/amd/display: Enable Stereo in Dal3

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 5:10 PM, Harry Wentland  wrote:
> From: Alvin lee 
>
> - program infoframe for Stereo
> - program stereo flip control registers properly
>
> Change-Id: If547e2677b72709359b3a8602357b80961f1bfce
> Signed-off-by: Alvin lee 
> Reviewed-by: Tony Cheng 
> Acked-by: Harry Wentland 
> ---
>  drivers/gpu/drm/amd/display/Makefile  |  3 +-
>  .../gpu/drm/amd/display/dc/core/dc_resource.c | 57 ++
>  drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
>  .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c | 18 -
>  .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h |  4 +
>  .../amd/display/modules/inc/mod_info_packet.h | 15 
>  .../amd/display/modules/info_packet/Makefile  | 31 
>  .../display/modules/info_packet/info_packet.c | 74 +++
>  8 files changed, 165 insertions(+), 38 deletions(-)
>  create mode 100644 drivers/gpu/drm/amd/display/modules/inc/mod_info_packet.h
>  create mode 100644 drivers/gpu/drm/amd/display/modules/info_packet/Makefile
>  create mode 100644 
> drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
>
> diff --git a/drivers/gpu/drm/amd/display/Makefile 
> b/drivers/gpu/drm/amd/display/Makefile
> index a2c5be493555..c97dc9613325 100644
> --- a/drivers/gpu/drm/amd/display/Makefile
> +++ b/drivers/gpu/drm/amd/display/Makefile
> @@ -31,11 +31,12 @@ subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/dc/inc/hw
>  subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/inc
>  subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/freesync
>  subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/color
> +subdir-ccflags-y += -I$(FULL_AMD_DISPLAY_PATH)/modules/info_packet
>
>  #TODO: remove when Timing Sync feature is complete
>  subdir-ccflags-y += -DBUILD_FEATURE_TIMING_SYNC=0
>
> -DAL_LIBS = amdgpu_dm dc modules/freesync modules/color
> +DAL_LIBS = amdgpu_dm dc modules/freesync modules/color modules/info_packet
>
>  AMD_DAL = $(addsuffix /Makefile, $(addprefix 
> $(FULL_AMD_DISPLAY_PATH)/,$(DAL_LIBS)))
>
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> index 72f233963748..41562ffa1c62 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
> @@ -1488,6 +1488,20 @@ static bool is_hdr_static_meta_changed(struct 
> dc_stream_state *cur_stream,
> return false;
>  }
>
> +static bool is_vsc_info_packet_changed(struct dc_stream_state *cur_stream,
> +   struct dc_stream_state *new_stream)
> +{
> +   if (cur_stream == NULL)
> +   return true;
> +
> +   if (memcmp(&cur_stream->vsc_infopacket,
> +   &new_stream->vsc_infopacket,
> +   sizeof(struct dc_info_packet)) != 0)
> +   return true;
> +
> +   return false;
> +}
> +
>  static bool is_timing_changed(struct dc_stream_state *cur_stream,
> struct dc_stream_state *new_stream)
>  {
> @@ -1528,6 +1542,9 @@ static bool are_stream_backends_same(
> if (stream_a->dpms_off != stream_b->dpms_off)
> return false;
>
> +   if (is_vsc_info_packet_changed(stream_a, stream_b))
> +   return false;
> +
> return true;
>  }
>
> @@ -2414,43 +2431,10 @@ static void set_vsc_info_packet(
> struct dc_info_packet *info_packet,
> struct dc_stream_state *stream)
>  {
> -   unsigned int vscPacketRevision = 0;
> -   unsigned int i;
> -
> -   /*VSC packet set to 2 when DP revision >= 1.2*/
> -   if (stream->psr_version != 0) {
> -   vscPacketRevision = 2;
> -   }
> -
> -   /* VSC packet not needed based on the features
> -* supported by this DP display
> -*/
> -   if (vscPacketRevision == 0)
> +   if (!stream->vsc_infopacket.valid)
> return;
>
> -   if (vscPacketRevision == 0x2) {
> -   /* Secondary-data Packet ID = 0*/
> -   info_packet->hb0 = 0x00;
> -   /* 07h - Packet Type Value indicating Video
> -* Stream Configuration packet
> -*/
> -   info_packet->hb1 = 0x07;
> -   /* 02h = VSC SDP supporting 3D stereo and PSR
> -* (applies to eDP v1.3 or higher).
> -*/
> -   info_packet->hb2 = 0x02;
> -   /* 08h = VSC packet supporting 3D stereo + PSR
> -* (HB2 = 02h).
> -*/
> -   info_packet->hb3 = 0x08;
> -
> -   for (i = 0; i < 28; i++)
> -   info_packet->sb[i] = 0;
> -
> -   info_packet->valid = true;
> -   }
> -
> -   /*TODO: stereo 3D support and extend pixel encoding colorimetry*/
> +   *info_packet = stream->vsc_infopacket;
>  }
>
>  void dc_resource_state_destruct(struct dc_state *context)
> @@ -2632,6 +2616,9 @@ bool pipe_need_reprogram(
> if 

Re: [PATCH 2/5] drm/amd/pp: Fix wrong clock-unit exported to Display

2018-06-19 Thread Alex Deucher
On Tue, Jun 19, 2018 at 5:17 PM, Harry Wentland  wrote:
> From: Rex Zhu 
>
> Transfer 10KHz (requested by smu) to KHz needed by Display
> component.
>
> This can fix the issue 4k Monitor can't be lit up on Vega/Raven.
>
> Signed-off-by: Rex Zhu 
> Acked-by: Alex Deucher 

Need to make sure we drop Mikita's patch if we apply this one,
otherwise the clocks will be wrong again.

Alex

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c  |  4 ++--
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 10 +-
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 10 +-
>  3 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> index d4bc83e81389..c905df42adc5 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
> @@ -993,7 +993,7 @@ static int smu10_get_clock_by_type_with_latency(struct 
> pp_hwmgr *hwmgr,
>
> clocks->num_levels = 0;
> for (i = 0; i < pclk_vol_table->count; i++) {
> -   clocks->data[i].clocks_in_khz = 
> pclk_vol_table->entries[i].clk;
> +   clocks->data[i].clocks_in_khz = 
> pclk_vol_table->entries[i].clk * 10;
> clocks->data[i].latency_in_us = latency_required ?
> smu10_get_mem_latency(hwmgr,
> 
> pclk_vol_table->entries[i].clk) :
> @@ -1044,7 +1044,7 @@ static int smu10_get_clock_by_type_with_voltage(struct 
> pp_hwmgr *hwmgr,
>
> clocks->num_levels = 0;
> for (i = 0; i < pclk_vol_table->count; i++) {
> -   clocks->data[i].clocks_in_khz = 
> pclk_vol_table->entries[i].clk;
> +   clocks->data[i].clocks_in_khz = 
> pclk_vol_table->entries[i].clk  * 10;
> clocks->data[i].voltage_in_mv = 
> pclk_vol_table->entries[i].vol;
> clocks->num_levels++;
> }
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> index 3b8d36df52e9..e9a8b527d481 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> @@ -4067,7 +4067,7 @@ static void vega10_get_sclks(struct pp_hwmgr *hwmgr,
> for (i = 0; i < dep_table->count; i++) {
> if (dep_table->entries[i].clk) {
> clocks->data[clocks->num_levels].clocks_in_khz =
> -   dep_table->entries[i].clk;
> +   dep_table->entries[i].clk * 10;
> clocks->num_levels++;
> }
> }
> @@ -4104,7 +4104,7 @@ static void vega10_get_memclocks(struct pp_hwmgr *hwmgr,
> clocks->data[clocks->num_levels].clocks_in_khz =
> data->mclk_latency_table.entries
> [data->mclk_latency_table.count].frequency =
> -   dep_table->entries[i].clk;
> +   dep_table->entries[i].clk * 10;
> clocks->data[clocks->num_levels].latency_in_us =
> data->mclk_latency_table.entries
> [data->mclk_latency_table.count].latency =
> @@ -4126,7 +4126,7 @@ static void vega10_get_dcefclocks(struct pp_hwmgr 
> *hwmgr,
> uint32_t i;
>
> for (i = 0; i < dep_table->count; i++) {
> -   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
> +   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk * 
> 10;
> clocks->data[i].latency_in_us = 0;
> clocks->num_levels++;
> }
> @@ -4142,7 +4142,7 @@ static void vega10_get_socclocks(struct pp_hwmgr *hwmgr,
> uint32_t i;
>
> for (i = 0; i < dep_table->count; i++) {
> -   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
> +   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk * 
> 10;
> clocks->data[i].latency_in_us = 0;
> clocks->num_levels++;
> }
> @@ -4202,7 +4202,7 @@ static int vega10_get_clock_by_type_with_voltage(struct 
> pp_hwmgr *hwmgr,
> }
>
> for (i = 0; i < dep_table->count; i++) {
> -   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
> +   clocks->data[i].clocks_in_khz = dep_table->entries[i].clk  * 
> 10;
> clocks->data[i].voltage_in_mv = 
> (uint32_t)(table_info->vddc_lookup_table->
> entries[dep_table->entries[i].vddInd].us_vdd);
> clocks->num_levels++;
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
> b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
> index 782e2098824d..d685ce7f88cc 100644
> --- 

RE: [PATCH v2 6/7] drm/amdgpu: Make gfx_off control by GFX ip

2018-06-19 Thread Quan, Evan
Reviewed-by: Evan Quan 

> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Wednesday, June 13, 2018 7:37 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex 
> Subject: [PATCH v2 6/7] drm/amdgpu: Make gfx_off control by GFX ip
> 
> gfx off should be controlled by the GFX IP.
> Powerplay only exports an interface to the gfx ip.
> This logic is the same as uvd/vce cg/pg.
> 
> 1. Delete the gfx pg/off ctrl code in pp_set_powergating_state
>this ip function is for smu pg enablement.
> 2. call set_powergating_by_smu to enable/disable the power off feature.
> 
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 19 +++
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c |  4 
>  drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 25 +-
> ---
>  3 files changed, 12 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 3adef57..caf588d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1732,16 +1732,11 @@ static int
> amdgpu_device_ip_late_set_cg_state(struct amdgpu_device *adev)
>   }
>   }
> 
> - if (adev->powerplay.pp_feature & PP_GFXOFF_MASK) {
> + if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
>   /* enable gfx powergating */
>   amdgpu_device_ip_set_powergating_state(adev,
> 
> AMD_IP_BLOCK_TYPE_GFX,
>  AMD_PG_STATE_GATE);
> - /* enable gfxoff */
> - amdgpu_device_ip_set_powergating_state(adev,
> -
> AMD_IP_BLOCK_TYPE_SMC,
> -AMD_PG_STATE_GATE);
> - }
> 
>   return 0;
>  }
> @@ -1814,6 +1809,8 @@ static int amdgpu_device_ip_fini(struct
> amdgpu_device *adev)
> adev->ip_blocks[i].version->funcs-
> >name, r);
>   return r;
>   }
> + if (adev->powerplay.pp_funcs-
> >set_powergating_by_smu)
> +
>   amdgpu_dpm_set_powergating_by_smu(adev,
> AMD_IP_BLOCK_TYPE_GFX,
> +false);
>   r = adev->ip_blocks[i].version->funcs->hw_fini((void
> *)adev);
>   /* XXX handle errors */
>   if (r) {
> @@ -1923,12 +1920,6 @@ int amdgpu_device_ip_suspend(struct
> amdgpu_device *adev)
>   if (amdgpu_sriov_vf(adev))
>   amdgpu_virt_request_full_gpu(adev, false);
> 
> - /* ungate SMC block powergating */
> - if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
> - amdgpu_device_ip_set_powergating_state(adev,
> -
> AMD_IP_BLOCK_TYPE_SMC,
> -
> AMD_CG_STATE_UNGATE);
> -
>   /* ungate SMC block first */
>   r = amdgpu_device_ip_set_clockgating_state(adev,
> AMD_IP_BLOCK_TYPE_SMC,
>  AMD_CG_STATE_UNGATE);
> @@ -1936,6 +1927,10 @@ int amdgpu_device_ip_suspend(struct
> amdgpu_device *adev)
>   DRM_ERROR("set_clockgating_state(ungate) SMC
> failed %d\n", r);
>   }
> 
> + /* call smu to disable gfx off feature first when suspend */
> + if (adev->powerplay.pp_funcs->set_powergating_by_smu)
> + amdgpu_dpm_set_powergating_by_smu(adev,
> AMD_IP_BLOCK_TYPE_GFX,
> +false);
> +
>   for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>   if (!adev->ip_blocks[i].status.valid)
>   continue;
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index ae35bbe..bec5592 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -3715,6 +3715,10 @@ static int gfx_v9_0_set_powergating_state(void
> *handle,
> 
>   /* update mgcg state */
>   gfx_v9_0_update_gfx_mg_power_gating(adev, enable);
> +
> + /* set gfx off through smu */
> + if (enable && adev->powerplay.pp_funcs-
> >set_powergating_by_smu)
> + amdgpu_dpm_set_powergating_by_smu(adev,
> AMD_IP_BLOCK_TYPE_GFX,
> +true);
>   break;
>   default:
>   break;
> diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> index cb2dd7c..387a1eb 100644
> --- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> +++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
> @@ -221,30 +221,7 @@ static int pp_sw_reset(void *handle)  static int
> pp_set_powergating_state(void *handle,
>   enum amd_powergating_state state)  {
> - struct amdgpu_device *adev = handle;
> - struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
> - int ret;
> -
> - if (!hwmgr || !hwmgr->pm_en)
> - return 0;
> -
> - if (hwmgr->hwmgr_func->gfx_off_control) {
> - /* 

RE: [PATCH v2 7/7] drm/amdgpu: Change PG enable sequence

2018-06-19 Thread Quan, Evan
Reviewed-by: Evan Quan 

> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Wednesday, June 13, 2018 7:39 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex 
> Subject: [PATCH v2 7/7] drm/amdgpu: Change PG enable sequence
> 
> Enable PG state after CG enabled.
> 
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 33
> +-
>  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c  |  4 
>  2 files changed, 28 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index caf588d..9647f54 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1732,12 +1732,34 @@ static int
> amdgpu_device_ip_late_set_cg_state(struct amdgpu_device *adev)
>   }
>   }
> 
> - if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
> - /* enable gfx powergating */
> - amdgpu_device_ip_set_powergating_state(adev,
> -
> AMD_IP_BLOCK_TYPE_GFX,
> -AMD_PG_STATE_GATE);
> + return 0;
> +}
> +
> +static int amdgpu_device_ip_late_set_pg_state(struct amdgpu_device
> +*adev) {
> + int i = 0, r;
> 
> + if (amdgpu_emu_mode == 1)
> + return 0;
> +
> + for (i = 0; i < adev->num_ip_blocks; i++) {
> + if (!adev->ip_blocks[i].status.valid)
> + continue;
> + /* skip CG for VCE/UVD, it's handled specially */
> + if (adev->ip_blocks[i].version->type !=
> AMD_IP_BLOCK_TYPE_UVD &&
> + adev->ip_blocks[i].version->type !=
> AMD_IP_BLOCK_TYPE_VCE &&
> + adev->ip_blocks[i].version->type !=
> AMD_IP_BLOCK_TYPE_VCN &&
> + adev->ip_blocks[i].version->funcs-
> >set_powergating_state) {
> + /* enable powergating to save power */
> + r = adev->ip_blocks[i].version->funcs-
> >set_powergating_state((void *)adev,
> +
>AMD_PG_STATE_GATE);
> + if (r) {
> + DRM_ERROR("set_powergating_state(gate)
> of IP block <%s> failed %d\n",
> +   adev->ip_blocks[i].version->funcs-
> >name, r);
> + return r;
> + }
> + }
> + }
>   return 0;
>  }
> 
> @@ -1900,6 +1922,7 @@ static void
> amdgpu_device_ip_late_init_func_handler(struct work_struct *work)
>   struct amdgpu_device *adev =
>   container_of(work, struct amdgpu_device,
> late_init_work.work);
>   amdgpu_device_ip_late_set_cg_state(adev);
> + amdgpu_device_ip_late_set_pg_state(adev);
>  }
> 
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> index 916776a..2a860ef 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> @@ -5581,10 +5581,6 @@ static int gfx_v8_0_late_init(void *handle)
>   return r;
>   }
> 
> - amdgpu_device_ip_set_powergating_state(adev,
> -AMD_IP_BLOCK_TYPE_GFX,
> -AMD_PG_STATE_GATE);
> -
>   return 0;
>  }
> 
> --
> 1.9.1
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/display: Fix a typo

2018-06-19 Thread Rex Zhu
change wm_min_memg_clk_in_khz -> wm_min_mem_clk_in_khz

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c | 8 
 drivers/gpu/drm/amd/display/dc/dm_services_types.h  | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
index 00c0a1e..943d74d 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
@@ -1000,7 +1000,7 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
eng_clks.data[0].clocks_in_khz;
clk_ranges.wm_clk_ranges[0].wm_max_eng_clk_in_khz =
eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 
1;
-   clk_ranges.wm_clk_ranges[0].wm_min_memg_clk_in_khz =
+   clk_ranges.wm_clk_ranges[0].wm_min_mem_clk_in_khz =
mem_clks.data[0].clocks_in_khz;
clk_ranges.wm_clk_ranges[0].wm_max_mem_clk_in_khz =
mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
@@ -1010,7 +1010,7 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
clk_ranges.wm_clk_ranges[1].wm_max_eng_clk_in_khz = 500;
-   clk_ranges.wm_clk_ranges[1].wm_min_memg_clk_in_khz =
+   clk_ranges.wm_clk_ranges[1].wm_min_mem_clk_in_khz =
mem_clks.data[0].clocks_in_khz;
clk_ranges.wm_clk_ranges[1].wm_max_mem_clk_in_khz =
mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
@@ -1020,7 +1020,7 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
eng_clks.data[0].clocks_in_khz;
clk_ranges.wm_clk_ranges[2].wm_max_eng_clk_in_khz =
eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 
1;
-   clk_ranges.wm_clk_ranges[2].wm_min_memg_clk_in_khz =
+   clk_ranges.wm_clk_ranges[2].wm_min_mem_clk_in_khz =
mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
clk_ranges.wm_clk_ranges[2].wm_max_mem_clk_in_khz = 500;
@@ -1030,7 +1030,7 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
clk_ranges.wm_clk_ranges[3].wm_max_eng_clk_in_khz = 500;
-   clk_ranges.wm_clk_ranges[3].wm_min_memg_clk_in_khz =
+   clk_ranges.wm_clk_ranges[3].wm_min_mem_clk_in_khz =
mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
clk_ranges.wm_clk_ranges[3].wm_max_mem_clk_in_khz = 500;
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services_types.h 
b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
index ab8c77d..2b83f92 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
@@ -137,7 +137,7 @@ struct dm_pp_clock_range_for_wm_set {
enum dm_pp_wm_set_id wm_set_id;
uint32_t wm_min_eng_clk_in_khz;
uint32_t wm_max_eng_clk_in_khz;
-   uint32_t wm_min_memg_clk_in_khz;
+   uint32_t wm_min_mem_clk_in_khz;
uint32_t wm_max_mem_clk_in_khz;
 };
 
@@ -150,7 +150,7 @@ struct dm_pp_clock_range_for_dmif_wm_set_soc15 {
enum dm_pp_wm_set_id wm_set_id;
uint32_t wm_min_dcfclk_clk_in_khz;
uint32_t wm_max_dcfclk_clk_in_khz;
-   uint32_t wm_min_memg_clk_in_khz;
+   uint32_t wm_min_mem_clk_in_khz;
uint32_t wm_max_mem_clk_in_khz;
 };
 
@@ -158,7 +158,7 @@ struct dm_pp_clock_range_for_mcif_wm_set_soc15 {
enum dm_pp_wm_set_id wm_set_id;
uint32_t wm_min_socclk_clk_in_khz;
uint32_t wm_max_socclk_clk_in_khz;
-   uint32_t wm_min_memg_clk_in_khz;
+   uint32_t wm_min_mem_clk_in_khz;
uint32_t wm_max_mem_clk_in_khz;
 };
 
-- 
1.9.1



Re: [PATCH] drm/amdgpu: correct GART location info

2018-06-19 Thread Christian König
We need a commit message, something like "Avoid confusing the GART with 
the GTT domain.".


Am 19.06.2018 um 06:41 schrieb Junwei Zhang:

Signed-off-by: Junwei Zhang 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 14 +++---
  1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index f77b07b..f9fe8d3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -676,17 +676,17 @@ void amdgpu_device_vram_location(struct amdgpu_device 
*adev,
  }
  
  /**

- * amdgpu_device_gart_location - try to find GTT location
+ * amdgpu_device_gart_location - try to find GART location
   *
   * @adev: amdgpu device structure holding all necessary informations
   * @mc: memory controller structure holding memory informations
   *
- * Function will place try to place GTT before or after VRAM.
+ * Function will place try to place GART before or after VRAM.
   *
- * If GTT size is bigger than space left then we ajust GTT size.
+ * If GART size is bigger than space left then we ajust GART size.
   * Thus function will never fails.
   *
- * FIXME: when reducing GTT size align new size on power of 2.
+ * FIXME: when reducing GART size align new size on power of 2.


Please just drop this line. IIRC we actually don't align the gartsize 
parameter to power of two any more either.


With that fixed the patch is Reviewed-by: Christian König 
.


Thanks,
Christian.


   */
  void amdgpu_device_gart_location(struct amdgpu_device *adev,
 struct amdgpu_gmc *mc)
@@ -699,13 +699,13 @@ void amdgpu_device_gart_location(struct amdgpu_device 
*adev,
size_bf = mc->vram_start;
if (size_bf > size_af) {
if (mc->gart_size > size_bf) {
-   dev_warn(adev->dev, "limiting GTT\n");
+   dev_warn(adev->dev, "limiting GART\n");
mc->gart_size = size_bf;
}
mc->gart_start = 0;
} else {
if (mc->gart_size > size_af) {
-   dev_warn(adev->dev, "limiting GTT\n");
+   dev_warn(adev->dev, "limiting GART\n");
mc->gart_size = size_af;
}
/* VCE doesn't like it when BOs cross a 4GB segment, so align
@@ -714,7 +714,7 @@ void amdgpu_device_gart_location(struct amdgpu_device *adev,
mc->gart_start = ALIGN(mc->vram_end + 1, 0x1ULL);
}
mc->gart_end = mc->gart_start + mc->gart_size - 1;
-   dev_info(adev->dev, "GTT: %lluM 0x%016llX - 0x%016llX\n",
+   dev_info(adev->dev, "GART: %lluM 0x%016llX - 0x%016llX\n",
mc->gart_size >> 20, mc->gart_start, mc->gart_end);
  }
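
For reference, the placement logic being renamed here can be sketched as: put the GART in the larger hole before or after VRAM, shrinking it if it does not fit. This is a simplified, hypothetical model of amdgpu_device_gart_location(); the 4GB alignment for VCE is noted but omitted.

```c
#include <assert.h>
#include <stdint.h>

/* Returns the chosen gart_start and clamps *gart_size to the hole. */
uint64_t place_gart(uint64_t vram_start, uint64_t vram_end,
		    uint64_t address_space_top, uint64_t *gart_size)
{
	uint64_t size_af = address_space_top - vram_end; /* space after VRAM */
	uint64_t size_bf = vram_start;                   /* space before VRAM */

	if (size_bf > size_af) {
		if (*gart_size > size_bf)
			*gart_size = size_bf;
		return 0;
	}
	if (*gart_size > size_af)
		*gart_size = size_af;
	return vram_end + 1; /* real code aligns this up to a 4GB boundary */
}
```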
  




[PATCH 08/13] drm/amd/powerplay: correct smc display config setting

2018-06-19 Thread Evan Quan
The multi-monitor situation should be taken into consideration.
Also, there is no need to set up the UCLK hard min clock level.
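
A minimal sketch of the condition this patch introduces (the standalone helper form is an assumption; the real code reads hwmgr->display_config and calls vega12_notify_smc_display_change() with the result):

```c
#include <assert.h>
#include <stdbool.h>

/* The SMC is told the display configuration is "safe" (mclk switching
 * allowed) unless there are multiple displays that are not in sync. */
bool displays_in_safe_config(unsigned int num_display,
			     bool multi_monitor_in_sync)
{
	return !(num_display > 1 && !multi_monitor_in_sync);
}
```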

Change-Id: Icf1bc9b420a4048d9071e386308d30999491
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 13 ++---
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index cb0589e..4732179 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -1399,9 +1399,9 @@ static int 
vega12_notify_smc_display_config_after_ps_adjustment(
(struct vega12_hwmgr *)(hwmgr->backend);
struct PP_Clocks min_clocks = {0};
struct pp_display_clock_request clock_req;
-   uint32_t clk_request;
 
-   if (hwmgr->display_config->num_display > 1)
+   if ((hwmgr->display_config->num_display > 1) &&
+   !hwmgr->display_config->multi_monitor_in_sync)
vega12_notify_smc_display_change(hwmgr, false);
else
vega12_notify_smc_display_change(hwmgr, true);
@@ -1426,15 +1426,6 @@ static int 
vega12_notify_smc_display_config_after_ps_adjustment(
}
}
 
-   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
-   clk_request = (PPCLK_UCLK << 16) | (min_clocks.memoryClock) / 
100;
-   PP_ASSERT_WITH_CODE(
-   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_SetHardMinByFreq, clk_request) == 0,
-   
"[PhwVega12_NotifySMCDisplayConfigAfterPowerStateAdjustment] Attempt to set 
UCLK HardMin Failed!",
-   return -1);
-   data->dpm_table.mem_table.dpm_state.hard_min_level = 
min_clocks.memoryClock;
-   }
-
return 0;
 }
 
-- 
2.7.4



[PATCH 13/13] drm/amd/powerplay: cosmetic fix

2018-06-19 Thread Evan Quan
Fix coding style and drop unused variable.

Change-Id: I9630f39154ec6bc30115e75924b35bcbe028a1a4
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 10 +++---
 .../gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h  | 18 +-
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index de61f86..a699416 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -811,9 +811,6 @@ static int vega12_enable_all_smu_features(struct pp_hwmgr 
*hwmgr)
enabled = (features_enabled & 
data->smu_features[i].smu_feature_bitmap) ? true : false;
data->smu_features[i].enabled = enabled;
data->smu_features[i].supported = enabled;
-   PP_ASSERT(
-   !data->smu_features[i].allowed || enabled,
-   "[EnableAllSMUFeatures] Enabled feature is 
different from allowed, expected disabled!");
}
}
 
@@ -1230,8 +1227,8 @@ static int vega12_get_current_gfx_clk_freq(struct 
pp_hwmgr *hwmgr, uint32_t *gfx
 
*gfx_freq = 0;
 
-   PP_ASSERT_WITH_CODE(
-   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetDpmClockFreq, (PPCLK_GFXCLK << 16)) == 0,
+   PP_ASSERT_WITH_CODE(smum_send_msg_to_smc_with_parameter(hwmgr,
+   PPSMC_MSG_GetDpmClockFreq, (PPCLK_GFXCLK << 16)) == 0,
"[GetCurrentGfxClkFreq] Attempt to get Current GFXCLK 
Frequency Failed!",
return -1);
PP_ASSERT_WITH_CODE(
@@ -1790,7 +1787,6 @@ static int vega12_set_watermarks_for_clocks_ranges(struct 
pp_hwmgr *hwmgr,
 {
struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
Watermarks_t *table = &(data->smc_state_table.water_marks_table);
-   int result = 0;
uint32_t i;
 
if (!data->registry_data.disable_water_mark &&
@@ -1841,7 +1837,7 @@ static int vega12_set_watermarks_for_clocks_ranges(struct 
pp_hwmgr *hwmgr,
data->water_marks_bitmap &= ~WaterMarksLoaded;
}
 
-   return result;
+   return 0;
 }
 
 static int vega12_force_clock_level(struct pp_hwmgr *hwmgr,
diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h 
b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
index b08526f..b6ffd08 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
@@ -412,10 +412,10 @@ typedef struct {
   QuadraticInt_tReservedEquation2;
   QuadraticInt_tReservedEquation3;
 
-   uint16_t MinVoltageUlvGfx;
-   uint16_t MinVoltageUlvSoc;
+  uint16_t MinVoltageUlvGfx;
+  uint16_t MinVoltageUlvSoc;
 
-   uint32_t Reserved[14];
+  uint32_t Reserved[14];
 
 
 
@@ -483,9 +483,9 @@ typedef struct {
   uint8_t  padding8_4;
 
 
-   uint8_t  PllGfxclkSpreadEnabled;
-   uint8_t  PllGfxclkSpreadPercent;
-   uint16_t PllGfxclkSpreadFreq;
+  uint8_t  PllGfxclkSpreadEnabled;
+  uint8_t  PllGfxclkSpreadPercent;
+  uint16_t PllGfxclkSpreadFreq;
 
   uint8_t  UclkSpreadEnabled;
   uint8_t  UclkSpreadPercent;
@@ -495,9 +495,9 @@ typedef struct {
   uint8_t  SocclkSpreadPercent;
   uint16_t SocclkSpreadFreq;
 
-   uint8_t  AcgGfxclkSpreadEnabled;
-   uint8_t  AcgGfxclkSpreadPercent;
-   uint16_t AcgGfxclkSpreadFreq;
+  uint8_t  AcgGfxclkSpreadEnabled;
+  uint8_t  AcgGfxclkSpreadPercent;
+  uint16_t AcgGfxclkSpreadFreq;
 
   uint8_t  Vr2_I2C_address;
   uint8_t  padding_vr2[3];
-- 
2.7.4



[PATCH 12/13] drm/amd/powerplay: correct vega12 thermal support as true

2018-06-19 Thread Evan Quan
Thermal support is enabled on vega12.

Change-Id: I7069a65c6b289dbfe4a12f81ff96e943e878e6fa
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index 1fadb71..de61f86 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -81,6 +81,7 @@ static void vega12_set_default_registry_data(struct pp_hwmgr 
*hwmgr)
 
data->registry_data.disallowed_features = 0x0;
data->registry_data.od_state_in_dc_support = 0;
+   data->registry_data.thermal_support = 1;
data->registry_data.skip_baco_hardware = 0;
 
data->registry_data.log_avfs_param = 0;
-- 
2.7.4



[PATCH 11/13] drm/amd/powerplay: set vega12 pre display configurations

2018-06-19 Thread Evan Quan
PPSMC_MSG_NumOfDisplays is set to 0 and uclk is forced to the
highest level.
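
The "force uclk to highest" step can be sketched as picking the last DPM table entry as the hard min; this is a simplified, hypothetical standalone form of vega12_set_uclk_to_highest_dpm_level() below:

```c
#include <assert.h>
#include <stdint.h>

/* Pick the value of the highest (last) DPM level as the hard min,
 * mirroring what the patch does with dpm_table->dpm_levels.
 * Returns 0 for an empty table, where the real code bails out. */
uint32_t highest_dpm_level_value(const uint32_t *levels, uint32_t count)
{
	return count ? levels[count - 1] : 0;
}
```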

Change-Id: I2400279d3c979d99f4dd4b8d53f051cd8f8e0c33
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 41 ++
 1 file changed, 41 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index 26bdfff..1fadb71 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -2110,6 +2110,45 @@ static int vega12_apply_clocks_adjust_rules(struct 
pp_hwmgr *hwmgr)
return 0;
 }
 
+static int vega12_set_uclk_to_highest_dpm_level(struct pp_hwmgr *hwmgr,
+   struct vega12_single_dpm_table *dpm_table)
+{
+   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
+   int ret = 0;
+
+   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
+   PP_ASSERT_WITH_CODE(dpm_table->count > 0,
+   "[SetUclkToHightestDpmLevel] Dpm table has no 
entry!",
+   return -EINVAL);
+   PP_ASSERT_WITH_CODE(dpm_table->count <= NUM_UCLK_DPM_LEVELS,
+   "[SetUclkToHightestDpmLevel] Dpm table has too 
many entries!",
+   return -EINVAL);
+
+   dpm_table->dpm_state.hard_min_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   PP_ASSERT_WITH_CODE(!(ret = 
smum_send_msg_to_smc_with_parameter(hwmgr,
+   PPSMC_MSG_SetHardMinByFreq,
+   (PPCLK_UCLK << 16 ) | 
dpm_table->dpm_state.hard_min_level)),
+   "[SetUclkToHightestDpmLevel] Set hard min uclk 
failed!",
+   return ret);
+   }
+
+   return ret;
+}
+
+static int vega12_pre_display_configuration_changed_task(struct pp_hwmgr 
*hwmgr)
+{
+   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
+   int ret = 0;
+
+   smum_send_msg_to_smc_with_parameter(hwmgr,
+   PPSMC_MSG_NumOfDisplays, 0);
+
+   ret = vega12_set_uclk_to_highest_dpm_level(hwmgr,
+   &data->dpm_table.mem_table);
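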
+
+   return ret;
+}
+
 static int vega12_display_configuration_changed_task(struct pp_hwmgr *hwmgr)
 {
struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
@@ -2358,6 +2397,8 @@ static const struct pp_hwmgr_func vega12_hwmgr_funcs = {
.print_clock_levels = vega12_print_clock_levels,
.apply_clocks_adjust_rules =
vega12_apply_clocks_adjust_rules,
+   .pre_display_config_changed =
+   vega12_pre_display_configuration_changed_task,
.display_config_changed = vega12_display_configuration_changed_task,
.powergate_uvd = vega12_power_gate_uvd,
.powergate_vce = vega12_power_gate_vce,
-- 
2.7.4



Re: [PATCH] drm/amdgpu: correct GART location info

2018-06-19 Thread Zhang, Jerry (Junwei)

On 06/19/2018 03:04 PM, Christian König wrote:

We need a commit message, something like "Avoid confusing the GART with the GTT
domain.".


Yeah, will add such kind of info.



Am 19.06.2018 um 06:41 schrieb Junwei Zhang:

Signed-off-by: Junwei Zhang 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 14 +++---
  1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index f77b07b..f9fe8d3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -676,17 +676,17 @@ void amdgpu_device_vram_location(struct amdgpu_device
*adev,
  }
  /**
- * amdgpu_device_gart_location - try to find GTT location
+ * amdgpu_device_gart_location - try to find GART location
   *
   * @adev: amdgpu device structure holding all necessary informations
   * @mc: memory controller structure holding memory informations
   *
- * Function will place try to place GTT before or after VRAM.
+ * Function will place try to place GART before or after VRAM.
   *
- * If GTT size is bigger than space left then we ajust GTT size.
+ * If GART size is bigger than space left then we ajust GART size.
   * Thus function will never fails.
   *
- * FIXME: when reducing GTT size align new size on power of 2.
+ * FIXME: when reducing GART size align new size on power of 2.


Please just drop this line. IIRC we actually don't align the gartsize parameter
to power of two any more either.


Got it.
Thanks.

Jerry


With that fixed the patch is Reviewed-by: Christian König
.

Thanks,
Christian.


   */
  void amdgpu_device_gart_location(struct amdgpu_device *adev,
   struct amdgpu_gmc *mc)
@@ -699,13 +699,13 @@ void amdgpu_device_gart_location(struct amdgpu_device
*adev,
  size_bf = mc->vram_start;
  if (size_bf > size_af) {
  if (mc->gart_size > size_bf) {
-dev_warn(adev->dev, "limiting GTT\n");
+dev_warn(adev->dev, "limiting GART\n");
  mc->gart_size = size_bf;
  }
  mc->gart_start = 0;
  } else {
  if (mc->gart_size > size_af) {
-dev_warn(adev->dev, "limiting GTT\n");
+dev_warn(adev->dev, "limiting GART\n");
  mc->gart_size = size_af;
  }
  /* VCE doesn't like it when BOs cross a 4GB segment, so align
@@ -714,7 +714,7 @@ void amdgpu_device_gart_location(struct amdgpu_device *adev,
  mc->gart_start = ALIGN(mc->vram_end + 1, 0x1ULL);
  }
  mc->gart_end = mc->gart_start + mc->gart_size - 1;
-dev_info(adev->dev, "GTT: %lluM 0x%016llX - 0x%016llX\n",
+dev_info(adev->dev, "GART: %lluM 0x%016llX - 0x%016llX\n",
  mc->gart_size >> 20, mc->gart_start, mc->gart_end);
  }





[PATCH 04/13] drm/amd/powerplay: revise default dpm tables setup

2018-06-19 Thread Evan Quan
Initialize the soft/hard min/max levels correctly and
handle the DPM-disabled situation.
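
The SMU message argument packing that this code relies on (clock ID in the top 16 bits; level index, or the special 0xFF "count" query, in the low 16 bits) can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper showing the (clk_id << 16 | level) encoding used
 * with PPSMC_MSG_GetDpmFreqByIndex; 0xFF in the low bits asks the SMU
 * for the total number of levels instead of a specific one. */
uint32_t pack_dpm_freq_arg(uint32_t clk_id, uint32_t level)
{
	return (clk_id << 16) | (level & 0xFFFF);
}
```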

Change-Id: I9a1d303ee54ac4c9687f72c86097b008ae398c05
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 334 -
 1 file changed, 132 insertions(+), 202 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index e81661cc..bc976e1 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -453,37 +453,30 @@ static int vega12_setup_asic_task(struct pp_hwmgr *hwmgr)
  */
 static void vega12_init_dpm_state(struct vega12_dpm_state *dpm_state)
 {
-   dpm_state->soft_min_level = 0xff;
-   dpm_state->soft_max_level = 0xff;
-   dpm_state->hard_min_level = 0xff;
-   dpm_state->hard_max_level = 0xff;
+   dpm_state->soft_min_level = 0x0;
+   dpm_state->soft_max_level = 0x;
+   dpm_state->hard_min_level = 0x0;
+   dpm_state->hard_max_level = 0x;
 }
 
-static int vega12_get_number_dpm_level(struct pp_hwmgr *hwmgr,
-   PPCLK_e clkID, uint32_t *num_dpm_level)
+static int vega12_get_number_of_dpm_level(struct pp_hwmgr *hwmgr,
+   PPCLK_e clk_id, uint32_t *num_of_levels)
 {
-   int result;
-   /*
-* SMU expects the Clock ID to be in the top 16 bits.
-* Lower 16 bits specify the level however 0xFF is a
-* special argument the returns the total number of levels
-*/
-   PP_ASSERT_WITH_CODE(smum_send_msg_to_smc_with_parameter(hwmgr,
-   PPSMC_MSG_GetDpmFreqByIndex, (clkID << 16 | 0xFF)) == 0,
-   "[GetNumberDpmLevel] Failed to get DPM levels from SMU for 
CLKID!",
-   return -EINVAL);
-
-   result = vega12_read_arg_from_smc(hwmgr, num_dpm_level);
+   int ret = 0;
 
-   PP_ASSERT_WITH_CODE(*num_dpm_level < MAX_REGULAR_DPM_NUMBER,
-   "[GetNumberDPMLevel] Number of DPM levels is greater than 
limit",
-   return -EINVAL);
+   ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+   PPSMC_MSG_GetDpmFreqByIndex,
+   (clk_id << 16 | 0xFF));
+   PP_ASSERT_WITH_CODE(!ret,
+   "[GetNumOfDpmLevel] failed to get dpm levels!",
+   return ret);
 
-   PP_ASSERT_WITH_CODE(*num_dpm_level != 0,
-   "[GetNumberDPMLevel] Number of CLK Levels is zero!",
-   return -EINVAL);
+   vega12_read_arg_from_smc(hwmgr, num_of_levels);
+   PP_ASSERT_WITH_CODE(*num_of_levels > 0,
+   "[GetNumOfDpmLevel] number of clk levels is invalid!",
+   return -EINVAL);
 
-   return result;
+   return ret;
 }
 
 static int vega12_get_dpm_frequency_by_index(struct pp_hwmgr *hwmgr,
@@ -509,6 +502,31 @@ static int vega12_get_dpm_frequency_by_index(struct 
pp_hwmgr *hwmgr,
return result;
 }
 
+static int vega12_setup_single_dpm_table(struct pp_hwmgr *hwmgr,
+   struct vega12_single_dpm_table *dpm_table, PPCLK_e clk_id)
+{
+   int ret = 0;
+   uint32_t i, num_of_levels, clk;
+
+   ret = vega12_get_number_of_dpm_level(hwmgr, clk_id, &num_of_levels);
+   PP_ASSERT_WITH_CODE(!ret,
+   "[SetupSingleDpmTable] failed to get clk levels!",
+   return ret);
+
+   dpm_table->count = num_of_levels;
+
+   for (i = 0; i < num_of_levels; i++) {
+   ret = vega12_get_dpm_frequency_by_index(hwmgr, clk_id, i, &clk);
+   PP_ASSERT_WITH_CODE(!ret,
+   "[SetupSingleDpmTable] failed to get clk of specific 
level!",
+   return ret);
+   dpm_table->dpm_levels[i].value = clk;
+   dpm_table->dpm_levels[i].enabled = true;
+   }
+
+   return ret;
+}
+
 /*
  * This function is to initialize all DPM state tables
  * for SMU based on the dependency table.
@@ -519,224 +537,136 @@ static int vega12_get_dpm_frequency_by_index(struct 
pp_hwmgr *hwmgr,
  */
 static int vega12_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
 {
-   uint32_t num_levels, i, clock;
 
struct vega12_hwmgr *data =
(struct vega12_hwmgr *)(hwmgr->backend);
-
struct vega12_single_dpm_table *dpm_table;
+   int ret = 0;
 
	memset(&data->dpm_table, 0, sizeof(data->dpm_table));
 
-   /* Initialize Sclk DPM and SOC DPM table based on allow Sclk values */
+   /* socclk */
dpm_table = &(data->dpm_table.soc_table);
-
-   PP_ASSERT_WITH_CODE(vega12_get_number_dpm_level(hwmgr, PPCLK_SOCCLK,
-   &num_levels) == 0,
-   "[SetupDefaultDPMTables] Failed to get DPM levels from SMU for 
SOCCLK!",
-   return -EINVAL);
-
-   dpm_table->count = num_levels;
-
-   for (i = 0; i < num_levels; i++) {
-   

[PATCH 05/13] drm/amd/powerplay: retrieve all clock ranges on startup

2018-06-19 Thread Evan Quan
So that we do not need to use PPSMC_MSG_GetMin/MaxDpmFreq to
get the clock ranges at runtime, since that causes some problems.
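
With the ranges cached at startup, the runtime query reduces to a table lookup; a minimal sketch (the struct layout mirrors the vega12_clock_range added below, the lookup helper is hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct clock_range {
	uint32_t ac_max;
	uint32_t ac_min;
	uint32_t dc_max;
};

/* Runtime lookup replacing the PPSMC_MSG_GetMin/MaxDpmFreq round trip */
uint32_t get_clock_limit(const struct clock_range *r, bool max)
{
	return max ? r->ac_max : r->ac_min;
}
```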

Change-Id: Ia0d6390c976749538b35c8ffde5d1e661b4944c0
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 69 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |  8 +++
 2 files changed, 61 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index bc976e1..ea530af 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -856,6 +856,48 @@ static int vega12_power_control_set_level(struct pp_hwmgr 
*hwmgr)
return result;
 }
 
+static int vega12_get_all_clock_ranges_helper(struct pp_hwmgr *hwmgr,
+   PPCLK_e clkid, struct vega12_clock_range *clock)
+{
+   /* AC Max */
+   PP_ASSERT_WITH_CODE(
+   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetMaxDpmFreq, (clkid << 16)) == 0,
+   "[GetClockRanges] Failed to get max ac clock from SMC!",
+   return -1);
+   vega12_read_arg_from_smc(hwmgr, &(clock->ACMax));
+
+   /* AC Min */
+   PP_ASSERT_WITH_CODE(
+   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetMinDpmFreq, (clkid << 16)) == 0,
+   "[GetClockRanges] Failed to get min ac clock from SMC!",
+   return -1);
+   vega12_read_arg_from_smc(hwmgr, &(clock->ACMin));
+
+   /* DC Max */
+   PP_ASSERT_WITH_CODE(
+   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetDcModeMaxDpmFreq, (clkid << 16)) == 0,
+   "[GetClockRanges] Failed to get max dc clock from SMC!",
+   return -1);
+   vega12_read_arg_from_smc(hwmgr, &(clock->DCMax));
+
+   return 0;
+}
+
+static int vega12_get_all_clock_ranges(struct pp_hwmgr *hwmgr)
+{
+   struct vega12_hwmgr *data =
+   (struct vega12_hwmgr *)(hwmgr->backend);
+   uint32_t i;
+
+   for (i = 0; i < PPCLK_COUNT; i++)
+   PP_ASSERT_WITH_CODE(!vega12_get_all_clock_ranges_helper(hwmgr,
+   i, &(data->clk_range[i])),
+   "Failed to get clk range from SMC!",
+   return -1);
+
+   return 0;
+}
+
 static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
 {
int tmp_result, result = 0;
@@ -883,6 +925,11 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
"Failed to power control set level!",
result = tmp_result);
 
+   result = vega12_get_all_clock_ranges(hwmgr);
+   PP_ASSERT_WITH_CODE(!result,
+   "Failed to get all clock ranges!",
+   return result);
+
result = vega12_odn_initialize_default_settings(hwmgr);
PP_ASSERT_WITH_CODE(!result,
"Failed to power control set level!",
@@ -1472,24 +1519,14 @@ static int vega12_get_clock_ranges(struct pp_hwmgr 
*hwmgr,
PPCLK_e clock_select,
bool max)
 {
-   int result;
-   *clock = 0;
+   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
 
-   if (max) {
-PP_ASSERT_WITH_CODE(
-   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetMaxDpmFreq, (clock_select << 16)) == 0,
-   "[GetClockRanges] Failed to get max clock from SMC!",
-   return -1);
-   result = vega12_read_arg_from_smc(hwmgr, clock);
-   } else {
-   PP_ASSERT_WITH_CODE(
-   smum_send_msg_to_smc_with_parameter(hwmgr, 
PPSMC_MSG_GetMinDpmFreq, (clock_select << 16)) == 0,
-   "[GetClockRanges] Failed to get min clock from SMC!",
-   return -1);
-   result = vega12_read_arg_from_smc(hwmgr, clock);
-   }
+   if (max)
+   *clock = data->clk_range[clock_select].ACMax;
+   else
+   *clock = data->clk_range[clock_select].ACMin;
 
-   return result;
+   return 0;
 }
 
 static int vega12_get_sclks(struct pp_hwmgr *hwmgr,
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
index 49b38df..e18c083 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
@@ -304,6 +304,12 @@ struct vega12_odn_fan_table {
boolforce_fan_pwm;
 };
 
+struct vega12_clock_range {
+   uint32_tACMax;
+   uint32_tACMin;
+   uint32_tDCMax;
+};
+
 struct vega12_hwmgr {
struct vega12_dpm_table  dpm_table;
struct vega12_dpm_table  golden_dpm_table;
@@ -385,6 +391,8 @@ struct vega12_hwmgr {
uint32_t  

[PATCH 06/13] drm/amd/powerplay: revise clock level setup

2018-06-19 Thread Evan Quan
Make sure the clock levels are set only when DPM is enabled. The
uvd/vce/soc clocks are changed correspondingly as well.
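
The highest-level search with its new fallback can be sketched as follows (a hypothetical standalone form of vega12_find_highest_dpm_level()):

```c
#include <assert.h>
#include <stdbool.h>

/* Scan from the top for the last enabled level; fall back to level 0
 * (and enable it) when nothing is enabled, as the patch does. */
int find_highest_enabled_level(bool *enabled, int count)
{
	int i;

	for (i = count - 1; i >= 0; i--)
		if (enabled[i])
			return i;
	enabled[0] = true;
	return 0;
}
```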

Change-Id: I1db2e2ac355fd5aea1c0a25c2b140d039a590089
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 318 ++---
 1 file changed, 211 insertions(+), 107 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index ea530af..a124b81 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -958,76 +958,172 @@ static uint32_t vega12_find_lowest_dpm_level(
break;
}
 
+   if (i >= table->count) {
+   i = 0;
+   table->dpm_levels[i].enabled = true;
+   }
+
return i;
 }
 
 static uint32_t vega12_find_highest_dpm_level(
struct vega12_single_dpm_table *table)
 {
-   uint32_t i = 0;
+   int32_t i = 0;
+   PP_ASSERT_WITH_CODE(table->count <= MAX_REGULAR_DPM_NUMBER,
+   "[FindHighestDPMLevel] DPM Table has too many entries!",
+   return MAX_REGULAR_DPM_NUMBER - 1);
 
-   if (table->count <= MAX_REGULAR_DPM_NUMBER) {
-   for (i = table->count; i > 0; i--) {
-   if (table->dpm_levels[i - 1].enabled)
-   return i - 1;
-   }
-   } else {
-   pr_info("DPM Table Has Too Many Entries!");
-   return MAX_REGULAR_DPM_NUMBER - 1;
+   for (i = table->count - 1; i >= 0; i--) {
+   if (table->dpm_levels[i].enabled)
+   break;
}
 
-   return i;
+   if (i < 0) {
+   i = 0;
+   table->dpm_levels[i].enabled = true;
+   }
+
+   return (uint32_t)i;
 }
 
 static int vega12_upload_dpm_min_level(struct pp_hwmgr *hwmgr)
 {
struct vega12_hwmgr *data = hwmgr->backend;
-   if (data->smc_state_table.gfx_boot_level !=
-   data->dpm_table.gfx_table.dpm_state.soft_min_level) {
-   smum_send_msg_to_smc_with_parameter(hwmgr,
-   PPSMC_MSG_SetSoftMinByFreq,
-   PPCLK_GFXCLK<<16 | 
data->dpm_table.gfx_table.dpm_levels[data->smc_state_table.gfx_boot_level].value);
-   data->dpm_table.gfx_table.dpm_state.soft_min_level =
-   data->smc_state_table.gfx_boot_level;
+   uint32_t min_freq;
+   int ret = 0;
+
+   if (data->smu_features[GNLD_DPM_GFXCLK].enabled) {
+   min_freq = data->dpm_table.gfx_table.dpm_state.soft_min_level;
+   PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter(
+   hwmgr, PPSMC_MSG_SetSoftMinByFreq,
+   (PPCLK_GFXCLK << 16) | (min_freq & 
0x))),
+   "Failed to set soft min gfxclk !",
+   return ret);
}
 
-   if (data->smc_state_table.mem_boot_level !=
-   data->dpm_table.mem_table.dpm_state.soft_min_level) {
-   smum_send_msg_to_smc_with_parameter(hwmgr,
-   PPSMC_MSG_SetSoftMinByFreq,
-   PPCLK_UCLK<<16 | 
data->dpm_table.mem_table.dpm_levels[data->smc_state_table.mem_boot_level].value);
-   data->dpm_table.mem_table.dpm_state.soft_min_level =
-   data->smc_state_table.mem_boot_level;
+   if (data->smu_features[GNLD_DPM_UCLK].enabled) {
+   min_freq = data->dpm_table.mem_table.dpm_state.soft_min_level;
+   PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter(
+   hwmgr, PPSMC_MSG_SetSoftMinByFreq,
+   (PPCLK_UCLK << 16) | (min_freq & 
0x))),
+   "Failed to set soft min memclk !",
+   return ret);
+
+   min_freq = data->dpm_table.mem_table.dpm_state.hard_min_level;
+   PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter(
+   hwmgr, PPSMC_MSG_SetHardMinByFreq,
+   (PPCLK_UCLK << 16) | (min_freq & 
0x))),
+   "Failed to set hard min memclk !",
+   return ret);
}
 
-   return 0;
+   if (data->smu_features[GNLD_DPM_UVD].enabled) {
+   min_freq = data->dpm_table.vclk_table.dpm_state.soft_min_level;
+
+   PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter(
+   hwmgr, PPSMC_MSG_SetSoftMinByFreq,
+   (PPCLK_VCLK << 16) | (min_freq & 
0x))),
+   "Failed to set soft min 

[PATCH 10/13] drm/amd/powerplay: apply clocks adjust rules on power state change

2018-06-19 Thread Evan Quan
The hard/soft min/max clock levels will be adjusted
correspondingly.
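
When mclk switching must be disabled, the patch walks the latency table to find the lowest acceptable hard min; a simplified sketch (struct and helper names are assumptions):

```c
#include <assert.h>
#include <stdint.h>

struct mclk_entry {
	uint32_t frequency; /* same units as the dpm table values */
	uint32_t latency;   /* in us */
};

/* Pick the lowest level whose latency is tolerable and whose frequency
 * still honours DAL's minimum; default to the highest level otherwise. */
uint32_t pick_mclk_hard_min(const struct mclk_entry *e, uint32_t count,
			    uint32_t tolerable_latency, uint32_t dal_min)
{
	uint32_t i, hard_min = e[count - 1].frequency;

	for (i = 0; i < count - 1; i++) {
		if (e[i].latency <= tolerable_latency &&
		    e[i].frequency >= dal_min) {
			hard_min = e[i].frequency;
			break;
		}
	}
	return hard_min;
}
```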

Change-Id: I2c4b6cd6756d40a28933f0c26b9e1a3d5078bab8
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 162 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |   2 +
 2 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index a227ace..26bdfff 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -1950,6 +1950,166 @@ static int vega12_print_clock_levels(struct pp_hwmgr 
*hwmgr,
return size;
 }
 
+static int vega12_apply_clocks_adjust_rules(struct pp_hwmgr *hwmgr)
+{
+   struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
+   struct vega12_single_dpm_table *dpm_table;
+   bool vblank_too_short = false;
+   bool disable_mclk_switching;
+   uint32_t i, latency;
+
+   disable_mclk_switching = ((1 < hwmgr->display_config->num_display) &&
+ 
!hwmgr->display_config->multi_monitor_in_sync) ||
+ vblank_too_short;
+   latency = hwmgr->display_config->dce_tolerable_mclk_in_active_latency;
+
+   /* gfxclk */
+   dpm_table = &(data->dpm_table.gfx_table);
+   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.hard_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+
+   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
+   if (VEGA12_UMD_PSTATE_GFXCLK_LEVEL < dpm_table->count) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
+   }
+
+   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[0].value;
+   }
+
+   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   }
+   }
+
+   /* memclk */
+   dpm_table = &(data->dpm_table.mem_table);
+   dpm_table->dpm_state.soft_min_level = dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   dpm_table->dpm_state.hard_min_level = dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.hard_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+
+   if (PP_CAP(PHM_PlatformCaps_UMDPState)) {
+   if (VEGA12_UMD_PSTATE_MCLK_LEVEL < dpm_table->count) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
+   }
+
+   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[0].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[0].value;
+   }
+
+   if (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
+   dpm_table->dpm_state.soft_min_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   dpm_table->dpm_state.soft_max_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   }
+   }
+
+   /* honour DAL's UCLK Hardmin */
+   if (dpm_table->dpm_state.hard_min_level < 
(hwmgr->display_config->min_mem_set_clock / 100))
+   dpm_table->dpm_state.hard_min_level = 
hwmgr->display_config->min_mem_set_clock / 100;
+
+   /* Hardmin is dependent on displayconfig */
+   if (disable_mclk_switching) {
+   dpm_table->dpm_state.hard_min_level = 
dpm_table->dpm_levels[dpm_table->count - 1].value;
+   for (i = 0; i < data->mclk_latency_table.count - 1; i++) {
+   if (data->mclk_latency_table.entries[i].latency <= 
latency) {
+   if (dpm_table->dpm_levels[i].value >= 
(hwmgr->display_config->min_mem_set_clock / 100)) {
+   dpm_table->dpm_state.hard_min_level = 

[PATCH 09/13] drm/amd/powerplay: correct vega12 max num of dpm level

2018-06-19 Thread Evan Quan
Use MAX_NUM_CLOCKS instead of the VG12_PSUEDO* macros for
the max number of dpm levels.
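
The clamping pattern repeated in each hunk can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp the reported DPM level count to the output array capacity,
 * as each hunk below does with MAX_NUM_CLOCKS. */
uint32_t clamp_level_count(uint32_t count, uint32_t capacity)
{
	return count > capacity ? capacity : count;
}
```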

Change-Id: Ida49f51777663a8d68d05ddcd41f4df0d8e61481
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index 4732179..a227ace 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -1642,8 +1642,8 @@ static int vega12_get_sclks(struct pp_hwmgr *hwmgr,
return -1;
 
dpm_table = &(data->dpm_table.gfx_table);
-   ucount = (dpm_table->count > VG12_PSUEDO_NUM_GFXCLK_DPM_LEVELS) ?
-   VG12_PSUEDO_NUM_GFXCLK_DPM_LEVELS : dpm_table->count;
+   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
+   MAX_NUM_CLOCKS : dpm_table->count;
 
for (i = 0; i < ucount; i++) {
clocks->data[i].clocks_in_khz =
@@ -1674,11 +1674,12 @@ static int vega12_get_memclocks(struct pp_hwmgr *hwmgr,
return -1;
 
dpm_table = &(data->dpm_table.mem_table);
-   ucount = (dpm_table->count > VG12_PSUEDO_NUM_UCLK_DPM_LEVELS) ?
-   VG12_PSUEDO_NUM_UCLK_DPM_LEVELS : dpm_table->count;
+   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
+   MAX_NUM_CLOCKS : dpm_table->count;
 
for (i = 0; i < ucount; i++) {
clocks->data[i].clocks_in_khz =
+   data->mclk_latency_table.entries[i].frequency =
dpm_table->dpm_levels[i].value * 100;
 
clocks->data[i].latency_in_us =
@@ -1704,8 +1705,8 @@ static int vega12_get_dcefclocks(struct pp_hwmgr *hwmgr,
 
 
dpm_table = &(data->dpm_table.dcef_table);
-   ucount = (dpm_table->count > VG12_PSUEDO_NUM_DCEFCLK_DPM_LEVELS) ?
-   VG12_PSUEDO_NUM_DCEFCLK_DPM_LEVELS : dpm_table->count;
+   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
+   MAX_NUM_CLOCKS : dpm_table->count;
 
for (i = 0; i < ucount; i++) {
clocks->data[i].clocks_in_khz =
@@ -1732,8 +1733,8 @@ static int vega12_get_socclocks(struct pp_hwmgr *hwmgr,
 
 
dpm_table = &(data->dpm_table.soc_table);
-   ucount = (dpm_table->count > VG12_PSUEDO_NUM_SOCCLK_DPM_LEVELS) ?
-   VG12_PSUEDO_NUM_SOCCLK_DPM_LEVELS : dpm_table->count;
+   ucount = (dpm_table->count > MAX_NUM_CLOCKS) ?
+   MAX_NUM_CLOCKS : dpm_table->count;
 
for (i = 0; i < ucount; i++) {
clocks->data[i].clocks_in_khz =
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 01/13] drm/amd/powerplay: correct vega12 bootup values settings

2018-06-19 Thread Evan Quan
The vbios firmware structure changed between v3_1 and v3_2, so
the code that sets up the bootup values needs to take different
paths based on the header version.
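
The dispatch idea can be sketched as follows — simplified stand-ins, not the real atomfirmware.h definitions; field names and the helper are illustrative only:

```c
#include <stdint.h>

/* Hypothetical, simplified mirror of the table layout: every
 * firmware_info table begins with a common header carrying the
 * format/content revision, so the parser can branch on the
 * revision before casting to the v3_1 or v3_2 structure. */
struct atom_common_table_header {
	uint16_t structuresize;
	uint8_t  format_revision;
	uint8_t  content_revision;
};

/* Returns which copy helper would handle the table: 1 for v3_1,
 * 2 for v3_2, -1 for an unsupported revision. */
static int bootup_values_path(const struct atom_common_table_header *hdr)
{
	if (hdr->format_revision != 3)
		return -1;
	switch (hdr->content_revision) {
	case 1:
		return 1; /* copy_vbios_bootup_values_3_1() path */
	case 2:
		return 2; /* copy_vbios_bootup_values_3_2() path */
	default:
		return -1;
	}
}
```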

Change-Id: I15140c4d80a91022f66a5052f4b9303fdab4ed9d
Signed-off-by: Evan Quan 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c | 94 +++---
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |  3 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c |  3 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h |  3 +
 4 files changed, 91 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
index 5325661..aa2faff 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
@@ -512,14 +512,82 @@ int pp_atomfwctrl_get_clk_information_by_clkid(struct pp_hwmgr *hwmgr, BIOS_CLKI
return 0;
 }
 
+static void pp_atomfwctrl_copy_vbios_bootup_values_3_2(struct pp_hwmgr *hwmgr,
+   struct pp_atomfwctrl_bios_boot_up_values *boot_values,
+   struct atom_firmware_info_v3_2 *fw_info)
+{
+   uint32_t frequency = 0;
+
+   boot_values->ulRevision = fw_info->firmware_revision;
+   boot_values->ulGfxClk   = fw_info->bootup_sclk_in10khz;
+   boot_values->ulUClk = fw_info->bootup_mclk_in10khz;
+   boot_values->usVddc = fw_info->bootup_vddc_mv;
+   boot_values->usVddci= fw_info->bootup_vddci_mv;
+   boot_values->usMvddc= fw_info->bootup_mvddc_mv;
+   boot_values->usVddGfx   = fw_info->bootup_vddgfx_mv;
+   boot_values->ucCoolingID = fw_info->coolingsolution_id;
+   boot_values->ulSocClk   = 0;
+   boot_values->ulDCEFClk   = 0;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_SOCCLK_ID, &frequency))
+   boot_values->ulSocClk   = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCEFCLK_ID, &frequency))
+   boot_values->ulDCEFClk  = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_ECLK_ID, &frequency))
+   boot_values->ulEClk = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_VCLK_ID, &frequency))
+   boot_values->ulVClk = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCLK_ID, &frequency))
+   boot_values->ulDClk = frequency;
+}
+
+static void pp_atomfwctrl_copy_vbios_bootup_values_3_1(struct pp_hwmgr *hwmgr,
+   struct pp_atomfwctrl_bios_boot_up_values *boot_values,
+   struct atom_firmware_info_v3_1 *fw_info)
+{
+   uint32_t frequency = 0;
+
+   boot_values->ulRevision = fw_info->firmware_revision;
+   boot_values->ulGfxClk   = fw_info->bootup_sclk_in10khz;
+   boot_values->ulUClk = fw_info->bootup_mclk_in10khz;
+   boot_values->usVddc = fw_info->bootup_vddc_mv;
+   boot_values->usVddci= fw_info->bootup_vddci_mv;
+   boot_values->usMvddc= fw_info->bootup_mvddc_mv;
+   boot_values->usVddGfx   = fw_info->bootup_vddgfx_mv;
+   boot_values->ucCoolingID = fw_info->coolingsolution_id;
+   boot_values->ulSocClk   = 0;
+   boot_values->ulDCEFClk   = 0;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_SOCCLK_ID, &frequency))
+   boot_values->ulSocClk   = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCEFCLK_ID, &frequency))
+   boot_values->ulDCEFClk  = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_ECLK_ID, &frequency))
+   boot_values->ulEClk = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_VCLK_ID, &frequency))
+   boot_values->ulVClk = frequency;
+
+   if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCLK_ID, &frequency))
+   boot_values->ulDClk = frequency;
+}
+
 int pp_atomfwctrl_get_vbios_bootup_values(struct pp_hwmgr *hwmgr,
struct pp_atomfwctrl_bios_boot_up_values *boot_values)
 {
-   struct atom_firmware_info_v3_1 *info = NULL;
+   struct atom_firmware_info_v3_2 *fwinfo_3_2;
+   struct atom_firmware_info_v3_1 *fwinfo_3_1;
+   struct atom_common_table_header *info = NULL;
uint16_t ix;
 
ix = GetIndexIntoMasterDataTable(firmwareinfo);
-   info = (struct atom_firmware_info_v3_1 *)
+   info = (struct atom_common_table_header *)
smu_atom_get_data_table(hwmgr->adev,
ix, NULL, NULL, NULL);
 
@@ -528,16 +596,18 @@ int pp_atomfwctrl_get_vbios_bootup_values(struct pp_hwmgr *hwmgr,
return -EINVAL;
}
 
-   boot_values->ulRevision = info->firmware_revision;
-   boot_values->ulGfxClk   = info->bootup_sclk_in10khz;
-   

[PATCH 02/13] drm/amd/powerplay: smc_dpm_info structure change

2018-06-19 Thread Evan Quan
A new member Vr2_I2C_address is added.
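
The struct-size invariant behind this change can be sketched like so — hypothetical stand-in structs, not the real driver headers: a one-byte member plus three padding bytes replaces exactly one `uint32_t` of the reserved tail, so the table size the SMC firmware expects is unchanged:

```c
#include <stdint.h>

/* Tail of the board-parameters table before the change. */
struct board_tail_v0 {
	uint32_t boardreserved[10];
};

/* Tail after the change: new byte + explicit padding, and the
 * reserved array shrinks from 10 to 9 entries to compensate. */
struct board_tail_v1 {
	uint8_t  vr2_i2c_address;
	uint8_t  padding_vr2[3];
	uint32_t boardreserved[9];
};

static int tail_size_unchanged(void)
{
	return sizeof(struct board_tail_v0) == sizeof(struct board_tail_v1);
}
```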

Change-Id: I9821365721c9d73e1d2df2f65dfa97f39f0425c6
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/include/atomfirmware.h   | 5 -
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c   | 2 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h   | 2 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c | 2 ++
 drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h| 5 -
 5 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h
index 092d800..33b4de4 100644
--- a/drivers/gpu/drm/amd/include/atomfirmware.h
+++ b/drivers/gpu/drm/amd/include/atomfirmware.h
@@ -1433,7 +1433,10 @@ struct atom_smc_dpm_info_v4_1
uint8_t  acggfxclkspreadpercent;
uint16_t acggfxclkspreadfreq;
 
-   uint32_t boardreserved[10];
+   uint8_t Vr2_I2C_address;
+   uint8_t padding_vr2[3];
+
+   uint32_t boardreserved[9];
 };
 
 /* 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
index aa2faff..d27c1c9 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
@@ -699,5 +699,7 @@ int pp_atomfwctrl_get_smc_dpm_information(struct pp_hwmgr *hwmgr,
param->acggfxclkspreadpercent = info->acggfxclkspreadpercent;
param->acggfxclkspreadfreq = info->acggfxclkspreadfreq;
 
+   param->Vr2_I2C_address = info->Vr2_I2C_address;
+
return 0;
 }
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
index 745bd38..22e2166 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
@@ -210,6 +210,8 @@ struct pp_atomfwctrl_smc_dpm_parameters
uint8_t  acggfxclkspreadenabled;
uint8_t  acggfxclkspreadpercent;
uint16_t acggfxclkspreadfreq;
+
+   uint8_t Vr2_I2C_address;
 };
 
 int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr,
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
index 888ddca..2991470 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
@@ -230,6 +230,8 @@ static int append_vbios_pptable(struct pp_hwmgr *hwmgr, PPTable_t *ppsmc_pptable
ppsmc_pptable->AcgThresholdFreqLow = 0x;
}
 
+   ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address;
+
return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
index 2f8a3b9..b08526f 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
@@ -499,7 +499,10 @@ typedef struct {
uint8_t  AcgGfxclkSpreadPercent;
uint16_t AcgGfxclkSpreadFreq;
 
-   uint32_t BoardReserved[10];
+  uint8_t  Vr2_I2C_address;
+  uint8_t  padding_vr2[3];
+
+  uint32_t BoardReserved[9];
 
 
   uint32_t MmHubPadding[7];
-- 
2.7.4



[PATCH 03/13] drm/amd/powerplay: drop the acg fix

2018-06-19 Thread Evan Quan
This workaround is not needed any more.

Change-Id: I81cb20ecd52d242af26ca32860baacdb5ec126c9
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
index 2991470..f4f366b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
@@ -224,12 +224,6 @@ static int append_vbios_pptable(struct pp_hwmgr *hwmgr, PPTable_t *ppsmc_pptable
ppsmc_pptable->AcgGfxclkSpreadPercent = smc_dpm_table.acggfxclkspreadpercent;
ppsmc_pptable->AcgGfxclkSpreadFreq = smc_dpm_table.acggfxclkspreadfreq;
 
-   /* 0x will disable the ACG feature */
-   if (!(hwmgr->feature_mask & PP_ACG_MASK)) {
-   ppsmc_pptable->AcgThresholdFreqHigh = 0x;
-   ppsmc_pptable->AcgThresholdFreqLow = 0x;
-   }
-
ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address;
 
return 0;
-- 
2.7.4



[PATCH 07/13] drm/amd/powerplay: initialize uvd/vce powergate status

2018-06-19 Thread Evan Quan
When UVD/VCE DPM is disabled, the powergate status should be
set to true.
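
The rule the new vega12_init_powergate_state() encodes can be sketched as a tiny predicate — an illustrative stand-in, not the driver function itself: a block starts out treated as power-gated unless its DPM feature came up enabled.

```c
#include <stdbool.h>

/* Initial powergate state derived from whether the block's DPM
 * feature was enabled at SMU feature bring-up. */
static bool init_power_gated(bool dpm_feature_enabled)
{
	return !dpm_feature_enabled;
}
```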

Change-Id: I569a5aa216b5e7d64a2b504f2ff98cc83ca802d5
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
index a124b81..cb0589e 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
@@ -777,6 +777,21 @@ static int vega12_set_allowed_featuresmask(struct pp_hwmgr *hwmgr)
return 0;
 }
 
+static void vega12_init_powergate_state(struct pp_hwmgr *hwmgr)
+{
+   struct vega12_hwmgr *data =
+   (struct vega12_hwmgr *)(hwmgr->backend);
+
+   data->uvd_power_gated = true;
+   data->vce_power_gated = true;
+
+   if (data->smu_features[GNLD_DPM_UVD].enabled)
+   data->uvd_power_gated = false;
+
+   if (data->smu_features[GNLD_DPM_VCE].enabled)
+   data->vce_power_gated = false;
+}
+
 static int vega12_enable_all_smu_features(struct pp_hwmgr *hwmgr)
 {
struct vega12_hwmgr *data =
@@ -801,6 +816,8 @@ static int vega12_enable_all_smu_features(struct pp_hwmgr *hwmgr)
}
}
 
+   vega12_init_powergate_state(hwmgr);
+
return 0;
 }
 
-- 
2.7.4



Re: [PATCH] drm/amdgpu:All UVD instances share one idle_work handle

2018-06-19 Thread Christian König

On 18.06.2018 at 20:00, James Zhu wrote:

All UVD instances have only one DPM control, so it is better
to share one idle_work handle.
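
The begin_use/end_use handshake on the now-shared handle can be modeled like this — a toy model, not kernel code; names are illustrative: begin_use cancels any pending idle work and reports whether clocks must be re-enabled, end_use re-arms it, and with a single handle activity on any UVD instance keeps the hardware awake.

```c
/* Toy stand-in for struct delayed_work. */
struct fake_delayed_work {
	int scheduled;
};

/* One shared handle for all instances, as after the patch. */
struct fake_uvd {
	struct fake_delayed_work idle_work;
};

/* Mirrors set_clocks = !cancel_delayed_work_sync(...): returns 1
 * only when no idle work was pending, i.e. the block had already
 * idled and clocks must be brought back up. */
static int uvd_begin_use(struct fake_uvd *uvd)
{
	int was_pending = uvd->idle_work.scheduled;

	uvd->idle_work.scheduled = 0;
	return !was_pending;
}

/* Mirrors schedule_delayed_work(&uvd->idle_work, ...). */
static void uvd_end_use(struct fake_uvd *uvd)
{
	uvd->idle_work.scheduled = 1;
}
```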

Signed-off-by: James Zhu 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 14 +++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h |  2 +-
  2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 04d77f1..cc15d32 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -130,7 +130,7 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
unsigned family_id;
int i, j, r;
  
-	INIT_DELAYED_WORK(&adev->uvd.inst->idle_work, amdgpu_uvd_idle_work_handler);
+   INIT_DELAYED_WORK(&adev->uvd.idle_work, amdgpu_uvd_idle_work_handler);
  
  	switch (adev->asic_type) {

  #ifdef CONFIG_DRM_AMDGPU_CIK
@@ -331,12 +331,12 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
void *ptr;
int i, j;
  
+	cancel_delayed_work_sync(&adev->uvd.idle_work);
+
for (j = 0; j < adev->uvd.num_uvd_inst; ++j) {
if (adev->uvd.inst[j].vcpu_bo == NULL)
continue;
  
-		cancel_delayed_work_sync(&adev->uvd.inst[j].idle_work);
-
/* only valid for physical mode */
if (adev->asic_type < CHIP_POLARIS10) {
for (i = 0; i < adev->uvd.max_handles; ++i)
@@ -1162,7 +1162,7 @@ int amdgpu_uvd_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
  static void amdgpu_uvd_idle_work_handler(struct work_struct *work)
  {
struct amdgpu_device *adev =
-   container_of(work, struct amdgpu_device, uvd.inst->idle_work.work);
+   container_of(work, struct amdgpu_device, uvd.idle_work.work);
unsigned fences = 0, i, j;
  
  	for (i = 0; i < adev->uvd.num_uvd_inst; ++i) {

@@ -1184,7 +1184,7 @@ static void amdgpu_uvd_idle_work_handler(struct work_struct *work)
   
AMD_CG_STATE_GATE);
}
} else {
-   schedule_delayed_work(&adev->uvd.inst->idle_work, UVD_IDLE_TIMEOUT);
+   schedule_delayed_work(&adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
}
  }
  
@@ -1196,7 +1196,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)

if (amdgpu_sriov_vf(adev))
return;
  
-	set_clocks = !cancel_delayed_work_sync(&adev->uvd.inst->idle_work);
+   set_clocks = !cancel_delayed_work_sync(&adev->uvd.idle_work);
if (set_clocks) {
if (adev->pm.dpm_enabled) {
amdgpu_dpm_enable_uvd(adev, true);
@@ -1213,7 +1213,7 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)
  void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring)
  {
if (!amdgpu_sriov_vf(ring->adev))
-   schedule_delayed_work(&ring->adev->uvd.inst->idle_work, UVD_IDLE_TIMEOUT);
+   schedule_delayed_work(&ring->adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
  }
  
  /**

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
index b1579fb..8b23a1b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
@@ -44,7 +44,6 @@ struct amdgpu_uvd_inst {
void*saved_bo;
atomic_thandles[AMDGPU_MAX_UVD_HANDLES];
struct drm_file *filp[AMDGPU_MAX_UVD_HANDLES];
-   struct delayed_work idle_work;
struct amdgpu_ring  ring;
struct amdgpu_ring  ring_enc[AMDGPU_MAX_UVD_ENC_RINGS];
struct amdgpu_irq_src   irq;
@@ -62,6 +61,7 @@ struct amdgpu_uvd {
booladdress_64_bit;
booluse_ctx_buf;
struct amdgpu_uvd_inst  inst[AMDGPU_MAX_UVD_INSTANCES];
+   struct delayed_work idle_work;
  };
  
  int amdgpu_uvd_sw_init(struct amdgpu_device *adev);




[PATCH] drm/amdgpu: Use correct enum to set powergating state

2018-06-19 Thread Stefan Agner
Use enum amd_powergating_state instead of enum amd_clockgating_state.
The underlying value stays the same, so there is no functional change
in practice. This fixes a warning seen with clang:
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:1930:14: warning: implicit
  conversion from enumeration type 'enum amd_clockgating_state' to
  different enumeration type 'enum amd_powergating_state'
  [-Wenum-conversion]
   AMD_CG_STATE_UNGATE);
   ^~~
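
The warning is easy to reproduce in isolation — hypothetical enums mirroring the amd_*gating types, not the real amdgpu headers: the enumerator values coincide, so behavior was already correct, but the wrong enum type was being passed.

```c
/* Two distinct enum types whose values happen to line up, as with
 * amd_clockgating_state and amd_powergating_state. Passing a
 * CG_STATE_* value to a function expecting the PG enum triggers
 * clang's -Wenum-conversion even though the integer value matches. */
enum fake_clockgating_state { CG_STATE_GATE = 0, CG_STATE_UNGATE = 1 };
enum fake_powergating_state { PG_STATE_GATE = 0, PG_STATE_UNGATE = 1 };

static int set_powergating_state(enum fake_powergating_state state)
{
	return (int)state;
}
```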

Signed-off-by: Stefan Agner 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index fe76ec1f9737..2a1d19c31922 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1927,7 +1927,7 @@ int amdgpu_device_ip_suspend(struct amdgpu_device *adev)
if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
amdgpu_device_ip_set_powergating_state(adev,
   AMD_IP_BLOCK_TYPE_SMC,
-  AMD_CG_STATE_UNGATE);
+  AMD_PG_STATE_UNGATE);
 
/* ungate SMC block first */
r = amdgpu_device_ip_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_SMC,
-- 
2.17.1
