RE: [PATCH] drm/amd/pp: Fix NULL pointer check error in smu_set_watermarks_for_clocks_ranges

2018-04-18 Thread Quan, Evan
Reviewed-by: Evan Quan 

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Thursday, April 19, 2018 12:48 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex 
> Subject: [PATCH] drm/amd/pp: Fix NULL pointer check error in
> smu_set_watermarks_for_clocks_ranges
> 
> This was introduced by commit d6c9a7dc86cd ("drm/amd/pp: Move common code
> to smu_helper.c").
> 
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
> index 7c23741..93a3d02 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
> @@ -657,7 +657,7 @@ int smu_set_watermarks_for_clocks_ranges(void *wt_table,
>   uint32_t i;
>   struct watermarks *table = wt_table;
> 
> - if (!table || wm_with_clock_ranges)
> + if (!table || !wm_with_clock_ranges)
>   return -EINVAL;
> 
>   if (wm_with_clock_ranges->num_wm_sets_dmif > 4 ||
> wm_with_clock_ranges->num_wm_sets_mcif > 4)
> --
> 1.9.1
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/pp: Fix NULL pointer check error in smu_set_watermarks_for_clocks_ranges

2018-04-18 Thread Rex Zhu
This was introduced by commit d6c9a7dc86cd ("drm/amd/pp: Move common code to smu_helper.c").

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
index 7c23741..93a3d02 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
@@ -657,7 +657,7 @@ int smu_set_watermarks_for_clocks_ranges(void *wt_table,
uint32_t i;
struct watermarks *table = wt_table;
 
-   if (!table || wm_with_clock_ranges)
+   if (!table || !wm_with_clock_ranges)
return -EINVAL;
 
if (wm_with_clock_ranges->num_wm_sets_dmif > 4 || 
wm_with_clock_ranges->num_wm_sets_mcif > 4)
-- 
1.9.1



[PATCH] drm/amd/pp: Fix NULL pointer check error in smu_set_watermarks_for_clocks_ranges

2018-04-18 Thread Rex Zhu
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
index 7c23741..93a3d02 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu_helper.c
@@ -657,7 +657,7 @@ int smu_set_watermarks_for_clocks_ranges(void *wt_table,
uint32_t i;
struct watermarks *table = wt_table;
 
-   if (!table || wm_with_clock_ranges)
+   if (!table || !wm_with_clock_ranges)
return -EINVAL;
 
if (wm_with_clock_ranges->num_wm_sets_dmif > 4 || 
wm_with_clock_ranges->num_wm_sets_mcif > 4)
-- 
1.9.1



Re: [PATCH] drm/amdgpu: fix list not initialized

2018-04-18 Thread Zhang, Jerry (Junwei)

On 04/19/2018 10:30 AM, zhoucm1 wrote:



On 04/19/2018 09:48, Zhang, Jerry (Junwei) wrote:

On 04/18/2018 06:37 PM, Chunming Zhou wrote:

Otherwise, the CPU gets stuck for 22 seconds with a kernel panic.

Change-Id: I5b87cde662a4658c9ab253ba88d009c9628a44ca
Signed-off-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f0fbc331aa30..7131ad13c5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1563,10 +1563,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
   * the evicted list so that it gets validated again on the
   * next command submission.
   */
+	list_del_init(&bo_va->base.vm_status);
 	if (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type)))
 		list_add_tail(&bo_va->base.vm_status, &vm->evicted);
-	else
-		list_del_init(&bo_va->base.vm_status);
 	} else {
 		list_del_init(&bo_va->base.vm_status);
 	}

We could simplify the logic as below.
What do you think?

list_del_init(&bo_va->base.vm_status);
unsigned mem_type = bo->tbo.mem.mem_type;
/* If the BO is not in its preferred location add it back to
 * the evicted list so that it gets validated again on the
 * next command submission.
 */
if ((bo && bo->tbo.resv == vm->root.base.bo->tbo.resv) &&
    (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type))))
	list_add_tail(&bo_va->base.vm_status, &vm->evicted);

Looks good, but I already pushed that patch just now. If you like, you can send
a simplified patch with your idea.


Sure, I will.

Jerry



Regards,
David Zhou


Jerry






[PATCH] drm/amd/pp: Print out voltage/clock range in sysfs

2018-04-18 Thread Rex Zhu
When the user cats pp_od_clk_voltage, also display the sclk/mclk/vddc ranges
that can be overdriven, e.g.:
OD_SCLK:
0:  300MHz  900 mV
1:  400MHz  912 mV
2:  500MHz  925 mV
3:  600MHz  937 mV
4:  700MHz  950 mV
5:  800MHz  975 mV
6:  900MHz  987 mV
7: 1000MHz 1000 mV
OD_MCLK:
0:  300MHz  900 mV
1: 1500MHz  912 mV
OD_RANGE:
SCLK: 300MHz  1200MHz
MCLK: 300MHz  1500MHz
VDDC: 700mV   1200mV

Also:
1. Remove unnecessary whitespace before a quoted newline.
2. Change the frequency unit from Mhz to MHz.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c   |  1 +
 drivers/gpu/drm/amd/include/kgd_pp_interface.h   |  1 +
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 22 ++
 3 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index 744f105..8f968bc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -437,6 +437,7 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
 	if (adev->powerplay.pp_funcs->print_clock_levels) {
 		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
 		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
+		size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf+size);
 		return size;
 	} else {
 		return snprintf(buf, PAGE_SIZE, "\n");
diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index 01969b1..06f08f3 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -94,6 +94,7 @@ enum pp_clock_type {
 	PP_PCIE,
 	OD_SCLK,
 	OD_MCLK,
+	OD_RANGE,
 };
 
 enum amd_pp_sensors {
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 966b5b1..df8fa99 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -4335,22 +4335,36 @@ static int smu7_print_clock_levels(struct pp_hwmgr *hwmgr,
 		break;
 	case OD_SCLK:
 		if (hwmgr->od_enabled) {
-			size = sprintf(buf, "%s: \n", "OD_SCLK");
+			size = sprintf(buf, "%s:\n", "OD_SCLK");
 			for (i = 0; i < odn_sclk_table->num_of_pl; i++)
-				size += sprintf(buf + size, "%d: %10uMhz %10u mV\n",
+				size += sprintf(buf + size, "%d: %10uMHz %10u mV\n",
 					i, odn_sclk_table->entries[i].clock / 100,
 					odn_sclk_table->entries[i].vddc);
 		}
 		break;
 	case OD_MCLK:
 		if (hwmgr->od_enabled) {
-			size = sprintf(buf, "%s: \n", "OD_MCLK");
+			size = sprintf(buf, "%s:\n", "OD_MCLK");
 			for (i = 0; i < odn_mclk_table->num_of_pl; i++)
-				size += sprintf(buf + size, "%d: %10uMhz %10u mV\n",
+				size += sprintf(buf + size, "%d: %10uMHz %10u mV\n",
 					i, odn_mclk_table->entries[i].clock / 100,
 					odn_mclk_table->entries[i].vddc);
 		}
 		break;
+	case OD_RANGE:
+		if (hwmgr->od_enabled) {
+			size = sprintf(buf, "%s:\n", "OD_RANGE");
+			size += sprintf(buf + size, "SCLK: %7uMHz %10uMHz\n",
+				data->golden_dpm_table.sclk_table.dpm_levels[0].value / 100,
+				hwmgr->platform_descriptor.overdriveLimit.engineClock / 100);
+			size += sprintf(buf + size, "MCLK: %7uMHz %10uMHz\n",
+				data->golden_dpm_table.mclk_table.dpm_levels[0].value / 100,
+				hwmgr->platform_descriptor.overdriveLimit.memoryClock / 100);
+			size += sprintf(buf + size, "VDDC: %7umV %11umV\n",
+				data->odn_dpm_table.min_vddc,
+				data->odn_dpm_table.max_vddc);
+		}
+		break;
 	default:
 		break;
 	}
-- 
1.9.1



Re: [PATCH] drm/amdgpu: fix list not initialized

2018-04-18 Thread zhoucm1



On 04/19/2018 09:48, Zhang, Jerry (Junwei) wrote:

On 04/18/2018 06:37 PM, Chunming Zhou wrote:

Otherwise, the CPU gets stuck for 22 seconds with a kernel panic.

Change-Id: I5b87cde662a4658c9ab253ba88d009c9628a44ca
Signed-off-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f0fbc331aa30..7131ad13c5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1563,10 +1563,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 	 * the evicted list so that it gets validated again on the
 	 * next command submission.
 	 */
+	list_del_init(&bo_va->base.vm_status);
 	if (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type)))
 		list_add_tail(&bo_va->base.vm_status, &vm->evicted);
-	else
-		list_del_init(&bo_va->base.vm_status);
 	} else {
 		list_del_init(&bo_va->base.vm_status);
 	}

We could simplify the logic as below.
What do you think?

    list_del_init(&bo_va->base.vm_status);
    unsigned mem_type = bo->tbo.mem.mem_type;
    /* If the BO is not in its preferred location add it back to
     * the evicted list so that it gets validated again on the
     * next command submission.
     */
    if ((bo && bo->tbo.resv == vm->root.base.bo->tbo.resv) &&
        (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type))))
        list_add_tail(&bo_va->base.vm_status, &vm->evicted);
Looks good, but I already pushed that patch just now. If you like, you
can send a simplified patch with your idea.


Regards,
David Zhou


Jerry






Re: [PATCH 3/3] drm/amd/pp: Add OVERDRIVE support on Vega10

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 9:13 AM, Rex Zhu  wrote:
> Signed-off-by: Rex Zhu 

Please include a patch description.  More comments below.

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 705 +++--
>  drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |  25 +-
>  .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |   3 +-
>  3 files changed, 376 insertions(+), 357 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> index 384aa07..b85fedd 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
> @@ -285,6 +285,48 @@ static int vega10_set_features_platform_caps(struct pp_hwmgr *hwmgr)
> return 0;
>  }
>
> +static int vega10_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
> +{
> +   struct vega10_hwmgr *data = hwmgr->backend;
> +   struct phm_ppt_v2_information *table_info =
> +   (struct phm_ppt_v2_information *)(hwmgr->pptable);
> +   struct vega10_odn_dpm_table *odn_table = &(data->odn_dpm_table);
> +   struct vega10_odn_vddc_lookup_table *od_lookup_table;
> +   struct phm_ppt_v1_voltage_lookup_table *vddc_lookup_table;
> +   struct phm_ppt_v1_clock_voltage_dependency_table *dep_table[3];
> +   struct phm_ppt_v1_clock_voltage_dependency_table *od_table[3];
> +   uint32_t i;
> +
> +   od_lookup_table = &odn_table->vddc_lookup_table;
> +   vddc_lookup_table = table_info->vddc_lookup_table;
> +
> +   for (i = 0; i < vddc_lookup_table->count; i++)
> +   od_lookup_table->entries[i].us_vdd = vddc_lookup_table->entries[i].us_vdd;
> +
> +   od_lookup_table->count = vddc_lookup_table->count;
> +
> +   dep_table[0] = table_info->vdd_dep_on_sclk;
> +   dep_table[1] = table_info->vdd_dep_on_mclk;
> +   dep_table[2] = table_info->vdd_dep_on_socclk;
> +   od_table[0] = (struct phm_ppt_v1_clock_voltage_dependency_table *)&odn_table->vdd_dep_on_sclk;
> +   od_table[1] = (struct phm_ppt_v1_clock_voltage_dependency_table *)&odn_table->vdd_dep_on_mclk;
> +   od_table[2] = (struct phm_ppt_v1_clock_voltage_dependency_table *)&odn_table->vdd_dep_on_socclk;
> +
> +   for (i = 0; i < 3; i++)
> +   smu_get_voltage_dependency_table_ppt_v1(dep_table[i], od_table[i]);
> +
> +   if (odn_table->max_vddc == 0 || odn_table->max_vddc > 2000)
> +   odn_table->max_vddc = dep_table[0]->entries[dep_table[0]->count - 1].vddc;
> +   if (odn_table->min_vddc == 0 || odn_table->min_vddc > 2000)
> +   odn_table->min_vddc = dep_table[0]->entries[0].vddc;
> +
> +   i = od_table[2]->count - 1;
> +   od_table[2]->entries[i].clk = hwmgr->platform_descriptor.overdriveLimit.memoryClock;
> +   od_table[2]->entries[i].vddc = odn_table->max_vddc;
> +
> +   return 0;
> +}
> +
>  static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
>  {
> struct vega10_hwmgr *data = hwmgr->backend;
> @@ -421,7 +463,6 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
> /* ACG firmware has major version 5 */
> if ((hwmgr->smu_version & 0xff00) == 0x500)
> data->smu_features[GNLD_ACG].supported = true;
> -
> if (data->registry_data.didt_support)
> data->smu_features[GNLD_DIDT].supported = true;
>
> @@ -1360,48 +1401,6 @@ static int vega10_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
> memcpy(&(data->golden_dpm_table), &(data->dpm_table),
> sizeof(struct vega10_dpm_table));
>
> -   if (PP_CAP(PHM_PlatformCaps_ODNinACSupport) ||
> -   PP_CAP(PHM_PlatformCaps_ODNinDCSupport)) {
> -   data->odn_dpm_table.odn_core_clock_dpm_levels.num_of_pl = data->dpm_table.gfx_table.count;
> -   for (i = 0; i < data->dpm_table.gfx_table.count; i++) {
> -   data->odn_dpm_table.odn_core_clock_dpm_levels.entries[i].clock = data->dpm_table.gfx_table.dpm_levels[i].value;
> -   data->odn_dpm_table.odn_core_clock_dpm_levels.entries[i].enabled = true;
> -   }
> -
> -   data->odn_dpm_table.vdd_dependency_on_sclk.count = dep_gfx_table->count;
> -   for (i = 0; i < dep_gfx_table->count; i++) {
> -   data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].clk = dep_gfx_table->entries[i].clk;
> -   data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].vddInd = dep_gfx_table->entries[i].vddInd;
> -   data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].cks_enable =
> -

Re: [PATCH 1/3] drm/amd/pp: Remove reduplicate code in smu7_check_dpm_table_updated

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 9:13 AM, Rex Zhu  wrote:
> Signed-off-by: Rex Zhu 

Please include a patch description.  With that fixed:
Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 8 ++--
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> index 720ac47..9654593 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> @@ -4683,10 +4683,6 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr *hwmgr)
> return;
> }
> }
> -   if (i == dep_table->count && data->need_update_smu7_dpm_table & DPMTABLE_OD_UPDATE_VDDC) {
> -   data->need_update_smu7_dpm_table &= ~DPMTABLE_OD_UPDATE_VDDC;
> -   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK;
> -   }
>
> dep_table = table_info->vdd_dep_on_sclk;
> odn_dep_table = (struct phm_ppt_v1_clock_voltage_dependency_table *)&(odn_table->vdd_dependency_on_sclk);
> @@ -4696,9 +4692,9 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr *hwmgr)
> return;
> }
> }
> -   if (i == dep_table->count && data->need_update_smu7_dpm_table & DPMTABLE_OD_UPDATE_VDDC) {
> +   if (data->need_update_smu7_dpm_table & DPMTABLE_OD_UPDATE_VDDC) {
> data->need_update_smu7_dpm_table &= ~DPMTABLE_OD_UPDATE_VDDC;
> -   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK;
> +   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK | DPMTABLE_OD_UPDATE_MCLK;
> }
>  }
>
> --
> 1.9.1
>


Re: [PATCH 2/3] drm/amd/pp: Change voltage/clk range for OD feature on VI

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 9:13 AM, Rex Zhu  wrote:
Read the vddc range from the vbios.
>
> Signed-off-by: Rex Zhu 
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c | 28 
>  drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h |  3 ++
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 56 
>  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.h |  2 +
>  4 files changed, 71 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
> index 971fb5d..afd7ecf 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
> @@ -1505,3 +1505,31 @@ int atomctrl_get_leakage_vddc_base_on_leakage(struct pp_hwmgr *hwmgr,
>
> return 0;
>  }
> +
> +void atomctrl_get_voltage_range(struct pp_hwmgr *hwmgr, uint32_t *max_vddc,
> +   uint32_t *min_vddc)
> +{
> +   void *profile;
> +
> +   profile = smu_atom_get_data_table(hwmgr->adev,
> +   GetIndexIntoMasterTable(DATA, ASIC_ProfilingInfo),
> +   NULL, NULL, NULL);
> +
> +   if (profile) {
> +   switch (hwmgr->chip_id) {
> +   case CHIP_TONGA:
> +   case CHIP_FIJI:
> +   *max_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_3 *)profile)->ulMaxVddc/4;
> +   *min_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_3 *)profile)->ulMinVddc/4;
> +   break;
> +   case CHIP_POLARIS11:
> +   case CHIP_POLARIS10:
> +   case CHIP_POLARIS12:
> +   *max_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_6 *)profile)->ulMaxVddc/100;
> +   *min_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_6 *)profile)->ulMinVddc/100;

Need le32_to_cpu() to properly swap the vbios data.  With that fixed:
Reviewed-by: Alex Deucher 

> +   break;
> +   default:
> +   return;
> +   }
> +   }
> +}
> \ No newline at end of file
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
> index c672a50..e1b5d6b 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
> @@ -320,5 +320,8 @@ extern int atomctrl_get_leakage_vddc_base_on_leakage(struct pp_hwmgr *hwmgr,
> uint16_t virtual_voltage_id,
> uint16_t efuse_voltage_id);
>  extern int atomctrl_get_leakage_id_from_efuse(struct pp_hwmgr *hwmgr, uint16_t *virtual_voltage_id);
> +
> +extern void atomctrl_get_voltage_range(struct pp_hwmgr *hwmgr, uint32_t *max_vddc,
> +   uint32_t *min_vddc);
>  #endif
>
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> index 9654593..966b5b1 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> @@ -838,6 +838,33 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
> return 0;
>  }
>
> +static void smu7_setup_voltage_range_from_vbios(struct pp_hwmgr *hwmgr)
> +{
> +   struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
> +   struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table;
> +   struct phm_ppt_v1_information *table_info =
> +   (struct phm_ppt_v1_information *)(hwmgr->pptable);
> +   uint32_t min_vddc, max_vddc;
> +
> +   if (table_info == NULL)
> +   return;
> +
> +   dep_sclk_table = table_info->vdd_dep_on_sclk;
> +
> +   atomctrl_get_voltage_range(hwmgr, &max_vddc, &min_vddc);
> +
> +   if (min_vddc == 0 || min_vddc > 2000
> +   || min_vddc > dep_sclk_table->entries[0].vddc)
> +   min_vddc = dep_sclk_table->entries[0].vddc;
> +
> +   if (max_vddc == 0 || max_vddc > 2000
> +   || max_vddc < dep_sclk_table->entries[dep_sclk_table->count - 1].vddc)
> +   max_vddc = dep_sclk_table->entries[dep_sclk_table->count - 1].vddc;
> +
> +   data->odn_dpm_table.min_vddc = min_vddc;
> +   data->odn_dpm_table.max_vddc = max_vddc;
> +}
> +
>  static int smu7_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
>  {
> struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
> @@ -856,8 +883,10 @@ static int smu7_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
> sizeof(struct smu7_dpm_table));
>
> /* initialize ODN table */
> -   if (hwmgr->od_enabled)
> +   if (hwmgr->od_enabled) {
> +   

Re: [PATCH] drm/amdgpu: fix list not initialized

2018-04-18 Thread Zhang, Jerry (Junwei)

On 04/18/2018 06:37 PM, Chunming Zhou wrote:

Otherwise, the CPU gets stuck for 22 seconds with a kernel panic.

Change-Id: I5b87cde662a4658c9ab253ba88d009c9628a44ca
Signed-off-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f0fbc331aa30..7131ad13c5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1563,10 +1563,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 	 * the evicted list so that it gets validated again on the
 	 * next command submission.
 	 */
+	list_del_init(&bo_va->base.vm_status);
 	if (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type)))
 		list_add_tail(&bo_va->base.vm_status, &vm->evicted);
-	else
-		list_del_init(&bo_va->base.vm_status);
 	} else {
 		list_del_init(&bo_va->base.vm_status);
 	}

We could simplify the logic as below.
What do you think?

list_del_init(&bo_va->base.vm_status);
unsigned mem_type = bo->tbo.mem.mem_type;
/* If the BO is not in its preferred location add it back to
 * the evicted list so that it gets validated again on the
 * next command submission.
 */
if ((bo && bo->tbo.resv == vm->root.base.bo->tbo.resv) &&
    (!(bo->preferred_domains & amdgpu_mem_type_to_domain(mem_type))))
	list_add_tail(&bo_va->base.vm_status, &vm->evicted);

Jerry






[PATCH 16/20] drm/amd/powerplay: add control gfxoff enabling in late init

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index 246f8e9..b493369 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -180,6 +180,7 @@ static int pp_late_init(void *handle)
 {
struct amdgpu_device *adev = handle;
struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+   int ret;
 
if (hwmgr && hwmgr->pm_en) {
mutex_lock(&hwmgr->smu_lock);
@@ -189,6 +190,14 @@ static int pp_late_init(void *handle)
}
if (adev->pm.smu_prv_buffer_size != 0)
pp_reserve_vram_for_smu(adev);
+
+   if (hwmgr->hwmgr_func->gfx_off_control &&
+   (hwmgr->feature_mask & PP_GFXOFF_MASK)) {
+   ret = hwmgr->hwmgr_func->gfx_off_control(hwmgr, true);
+   if (ret)
+   pr_err("gfx off enabling failed!\n");
+   }
+
return 0;
 }
 
-- 
2.7.4



[PATCH 17/20] drm/amdgpu: disable gfxoff when the system is going to suspend

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8e63832..f509d32 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1902,6 +1902,12 @@ int amdgpu_device_ip_suspend(struct amdgpu_device *adev)
if (amdgpu_sriov_vf(adev))
amdgpu_virt_request_full_gpu(adev, false);
 
+   /* ungate SMC block powergating */
+   if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
+   amdgpu_device_ip_set_powergating_state(adev,
+  AMD_IP_BLOCK_TYPE_SMC,
+  AMD_CG_STATE_UNGATE);
+
/* ungate SMC block first */
r = amdgpu_device_ip_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_SMC,
   AMD_CG_STATE_UNGATE);
-- 
2.7.4



[PATCH 18/20] drm/amdgpu: fix to disable powergating in hw_fini

2018-04-18 Thread Huang Rui
We need to enable CGPG and GFXOFF together. If only one of them is enabled, the
system hangs after startx (on the first draw command). So when gfxoff is
disabled, CGPG also needs to be disabled afterwards.

Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 20c57ac..d4da610 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3134,6 +3134,9 @@ static int gfx_v9_0_hw_fini(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int i;
 
+   amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
+  AMD_PG_STATE_UNGATE);
+
	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
 
-- 
2.7.4



[PATCH 19/20] drm/amdgpu: set CGPG if gfxoff is enabled for raven

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 65e781f..9006576 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -682,6 +682,11 @@ static int soc15_common_early_init(void *handle)
AMD_CG_SUPPORT_SDMA_LS;
adev->pg_flags = AMD_PG_SUPPORT_SDMA;
 
+   if (adev->powerplay.pp_feature & PP_GFXOFF_MASK)
+   adev->pg_flags |= AMD_PG_SUPPORT_GFX_PG |
+   AMD_PG_SUPPORT_CP |
+   AMD_PG_SUPPORT_RLC_SMU_HS;
+
adev->external_rev_id = 0x1;
break;
default:
-- 
2.7.4



[PATCH 20/20] drm/amd/powerplay: use the flag to decide whether send gfxoff smc message

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
index fde1e5c..7712eb6 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
@@ -81,11 +81,15 @@ static int smu10_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
smu10_data->thermal_auto_throttling_treshold = 0;
smu10_data->is_nb_dpm_enabled = 1;
smu10_data->dpm_flags = 1;
-   smu10_data->gfx_off_controled_by_driver = false;
smu10_data->need_min_deep_sleep_dcefclk = true;
smu10_data->num_active_display = 0;
smu10_data->deep_sleep_dcefclk = 0;
 
+   if (hwmgr->feature_mask & PP_GFXOFF_MASK)
+   smu10_data->gfx_off_controled_by_driver = true;
+   else
+   smu10_data->gfx_off_controled_by_driver = false;
+
phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_SclkDeepSleep);
 
-- 
2.7.4



[PATCH 14/20] drm/amdgpu: use pp_feature member to store the mask

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   | 1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 2 ++
 drivers/gpu/drm/amd/amdgpu/ci_dpm.c   | 2 +-
 drivers/gpu/drm/amd/amdgpu/kv_dpm.c   | 2 +-
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 2 +-
 5 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index bed1f5d..59df4b7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1433,6 +1433,7 @@ enum amd_hw_ip_block_type {
 struct amd_powerplay {
void *pp_handle;
const struct amd_pm_funcs *pp_funcs;
+   uint32_t pp_feature;
 };
 
 #define AMDGPU_RESET_MAGIC_NUM 64
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 6f1a8b7..8e63832 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1545,6 +1545,8 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
return -EAGAIN;
}
 
+   adev->powerplay.pp_feature = amdgpu_pp_feature_mask;
+
for (i = 0; i < adev->num_ip_blocks; i++) {
if ((amdgpu_ip_block_mask & (1 << i)) == 0) {
DRM_ERROR("disabled ip block: %d <%s>\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
index f48168f..a266dcf 100644
--- a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
@@ -5903,7 +5903,7 @@ static int ci_dpm_init(struct amdgpu_device *adev)
pi->pcie_dpm_key_disabled = 0;
pi->thermal_sclk_dpm_enabled = 0;
 
-   if (amdgpu_pp_feature_mask & PP_SCLK_DEEP_SLEEP_MASK)
+   if (adev->powerplay.pp_feature & PP_SCLK_DEEP_SLEEP_MASK)
pi->caps_sclk_ds = true;
else
pi->caps_sclk_ds = false;
diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
index ef668a3..17f7f07 100644
--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
@@ -2817,7 +2817,7 @@ static int kv_dpm_init(struct amdgpu_device *adev)
pi->caps_tcp_ramping = true;
}
 
-   if (amdgpu_pp_feature_mask & PP_SCLK_DEEP_SLEEP_MASK)
+   if (adev->powerplay.pp_feature & PP_SCLK_DEEP_SLEEP_MASK)
pi->caps_sclk_ds = true;
else
pi->caps_sclk_ds = false;
diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index 6976596..246f8e9 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -53,7 +53,7 @@ static int amd_powerplay_create(struct amdgpu_device *adev)
 	mutex_init(&hwmgr->smu_lock);
 	hwmgr->chip_family = adev->family;
 	hwmgr->chip_id = adev->asic_type;
-	hwmgr->feature_mask = amdgpu_pp_feature_mask;
+	hwmgr->feature_mask = adev->powerplay.pp_feature;
 	hwmgr->display_config = &adev->pm.pm_display_cfg;
 	adev->powerplay.pp_handle = hwmgr;
 	adev->powerplay.pp_funcs = &pp_dpm_funcs;
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 15/20] drm/amdgpu: clear gfxoff feature mask if the asic is not raven

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
index bca67df..d1052b5 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
@@ -95,7 +95,8 @@ int hwmgr_early_init(struct pp_hwmgr *hwmgr)
hwmgr->smumgr_funcs = _smu_funcs;
ci_set_asic_special_caps(hwmgr);
hwmgr->feature_mask &= ~(PP_VBI_TIME_SUPPORT_MASK |
-   PP_ENABLE_GFX_CG_THRU_SMU);
+PP_ENABLE_GFX_CG_THRU_SMU |
+PP_GFXOFF_MASK);
hwmgr->pp_table_version = PP_TABLE_V0;
hwmgr->od_enabled = false;
smu7_init_function_pointers(hwmgr);
@@ -103,9 +104,11 @@ int hwmgr_early_init(struct pp_hwmgr *hwmgr)
case AMDGPU_FAMILY_CZ:
hwmgr->od_enabled = false;
hwmgr->smumgr_funcs = _smu_funcs;
+   hwmgr->feature_mask &= ~PP_GFXOFF_MASK;
smu8_init_function_pointers(hwmgr);
break;
case AMDGPU_FAMILY_VI:
+   hwmgr->feature_mask &= ~PP_GFXOFF_MASK;
switch (hwmgr->chip_id) {
case CHIP_TOPAZ:
hwmgr->smumgr_funcs = _smu_funcs;
@@ -139,6 +142,7 @@ int hwmgr_early_init(struct pp_hwmgr *hwmgr)
smu7_init_function_pointers(hwmgr);
break;
case AMDGPU_FAMILY_AI:
+   hwmgr->feature_mask &= ~PP_GFXOFF_MASK;
switch (hwmgr->chip_id) {
case CHIP_VEGA10:
hwmgr->smumgr_funcs = _smu_funcs;
-- 
2.7.4



[PATCH 10/20] drm/amdgpu: add gfxoff feature mask

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/include/amd_shared.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index a63e8da..850e8ef 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -125,6 +125,7 @@ enum PP_FEATURE_MASK {
PP_SOCCLK_DPM_MASK = 0x1000,
PP_DCEFCLK_DPM_MASK = 0x2000,
PP_OVERDRIVE_MASK = 0x4000,
+   PP_GFXOFF_MASK = 0x8000,
 };
 
 struct amd_ip_funcs {
-- 
2.7.4



[PATCH 12/20] drm/amd/powerplay: add gfx off control function

2018-04-18 Thread Huang Rui
gfx_off_control is called to send the SMC messages that enable/disable
gfxoff.
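For reference, the "gfx is on" test that the new smu10_is_gfx_on() performs is a plain mask-and-compare on the two-bit PWR_GFXOFF_STATUS field. A minimal standalone sketch (a plain integer argument instead of a real MMIO read; names shortened from the driver defines added by this patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Field layout taken from the defines added by this patch. */
#define PWR_GFXOFF_STATUS__SHIFT 0x1
#define PWR_GFXOFF_STATUS_MASK   0x0006u

/* GFX is "on" when the two-bit status field reads 2. */
static bool is_gfx_on(uint32_t pwr_misc_cntl_status)
{
	return (pwr_misc_cntl_status & PWR_GFXOFF_STATUS_MASK) ==
	       (0x2u << PWR_GFXOFF_STATUS__SHIFT);
}
```

The driver busy-waits on this condition after sending PPSMC_MSG_DisableGfxOff, so the poll only terminates once the SMU reports the engine back on.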

Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c | 36 ++-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h |  1 +
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
index f0727b4..fde1e5c 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
@@ -42,6 +42,13 @@
 #define SMU10_DISPCLK_BYPASS_THRESHOLD 1 /* 100Mhz */
 #define SMC_RAM_END 0x4
 
+#define mmPWR_MISC_CNTL_STATUS 0x0183
+#define mmPWR_MISC_CNTL_STATUS_BASE_IDX0
+#define PWR_MISC_CNTL_STATUS__PWR_GFX_RLC_CGPG_EN__SHIFT   0x0
+#define PWR_MISC_CNTL_STATUS__PWR_GFXOFF_STATUS__SHIFT 0x1
+#define PWR_MISC_CNTL_STATUS__PWR_GFX_RLC_CGPG_EN_MASK 0x0001L
+#define PWR_MISC_CNTL_STATUS__PWR_GFXOFF_STATUS_MASK   0x0006L
+
 static const unsigned long SMU10_Magic = (unsigned long) PHM_Rv_Magic;
 
 
@@ -243,13 +250,31 @@ static int smu10_power_off_asic(struct pp_hwmgr *hwmgr)
return smu10_reset_cc6_data(hwmgr);
 }
 
+static bool smu10_is_gfx_on(struct pp_hwmgr *hwmgr)
+{
+   uint32_t reg;
+   struct amdgpu_device *adev = hwmgr->adev;
+
+   reg = RREG32_SOC15(PWR, 0, mmPWR_MISC_CNTL_STATUS);
+   if ((reg & PWR_MISC_CNTL_STATUS__PWR_GFXOFF_STATUS_MASK) ==
+   (0x2 << PWR_MISC_CNTL_STATUS__PWR_GFXOFF_STATUS__SHIFT))
+   return true;
+
+   return false;
+}
+
 static int smu10_disable_gfx_off(struct pp_hwmgr *hwmgr)
 {
struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend);
 
-   if (smu10_data->gfx_off_controled_by_driver)
+   if (smu10_data->gfx_off_controled_by_driver) {
smum_send_msg_to_smc(hwmgr, PPSMC_MSG_DisableGfxOff);
 
+   /* confirm gfx is back to "on" state */
+   while (!smu10_is_gfx_on(hwmgr))
+   msleep(1);
+   }
+
return 0;
 }
 
@@ -273,6 +298,14 @@ static int smu10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
return smu10_enable_gfx_off(hwmgr);
 }
 
+static int smu10_gfx_off_control(struct pp_hwmgr *hwmgr, bool enable)
+{
+   if (enable)
+   return smu10_enable_gfx_off(hwmgr);
+   else
+   return smu10_disable_gfx_off(hwmgr);
+}
+
 static int smu10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
struct pp_power_state  *prequest_ps,
const struct pp_power_state *pcurrent_ps)
@@ -1060,6 +1093,7 @@ static const struct pp_hwmgr_func smu10_hwmgr_funcs = {
.power_state_set = smu10_set_power_state_tasks,
.dynamic_state_management_disable = smu10_disable_dpm_tasks,
.set_mmhub_powergating_by_smu = smu10_set_mmhub_powergating_by_smu,
+   .gfx_off_control = smu10_gfx_off_control,
 };
 
 int smu10_init_function_pointers(struct pp_hwmgr *hwmgr)
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 0d2b3ce..3d9743f 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -296,6 +296,7 @@ struct pp_hwmgr_func {
int (*display_clock_voltage_request)(struct pp_hwmgr *hwmgr,
struct pp_display_clock_request *clock);
	int (*get_max_high_clocks)(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks);
+   int (*gfx_off_control)(struct pp_hwmgr *hwmgr, bool enable);
int (*power_off_asic)(struct pp_hwmgr *hwmgr);
	int (*force_clock_level)(struct pp_hwmgr *hwmgr, enum pp_clock_type type, uint32_t mask);
	int (*print_clock_levels)(struct pp_hwmgr *hwmgr, enum pp_clock_type type, char *buf);
-- 
2.7.4



[PATCH 13/20] drm/amd/powerplay: enable/disable gfxoff through smu

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index bd0d387..6976596 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -222,10 +222,19 @@ static int pp_set_powergating_state(void *handle,
 {
struct amdgpu_device *adev = handle;
struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+   int ret;
 
if (!hwmgr || !hwmgr->pm_en)
return 0;
 
+   if (hwmgr->hwmgr_func->gfx_off_control) {
+   /* Enable/disable GFX off through SMU */
+   ret = hwmgr->hwmgr_func->gfx_off_control(hwmgr,
+						 state == AMD_PG_STATE_GATE);
+   if (ret)
+   pr_err("gfx off control failed!\n");
+   }
+
if (hwmgr->hwmgr_func->enable_per_cu_power_gating == NULL) {
pr_info("%s was not implemented.\n", __func__);
return 0;
-- 
2.7.4



[PATCH 11/20] drm/amdgpu: set gfxoff disabled by default

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 5c0567a..3e07bd4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -121,7 +121,7 @@ uint amdgpu_pg_mask = 0x;
 uint amdgpu_sdma_phase_quantum = 32;
 char *amdgpu_disable_cu = NULL;
 char *amdgpu_virtual_display = NULL;
-uint amdgpu_pp_feature_mask = 0xbfff;
+uint amdgpu_pp_feature_mask = 0x3fff; /* gfxoff (bit 15) disabled by default */
 int amdgpu_ngg = 0;
 int amdgpu_prim_buf_per_se = 0;
 int amdgpu_pos_buf_per_se = 0;
-- 
2.7.4



[PATCH 09/20] drm/amdgpu: move PP_FEATURE_MASK to amd_shared header

2018-04-18 Thread Huang Rui
It will be used not only by powerplay but also by the amdgpu side in
future patches, so move it into the amd_shared header file.
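As a usage sketch, code outside powerplay can then test bits of the mask directly; the enum below is an abridged copy for illustration, not the shared header itself:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Abridged copy of PP_FEATURE_MASK, for illustration only. */
enum PP_FEATURE_MASK {
	PP_SCLK_DPM_MASK = 0x1,
	PP_SCLK_DEEP_SLEEP_MASK = 0x8,
	PP_OVERDRIVE_MASK = 0x4000,
};

/* The same kind of test ci_dpm.c/kv_dpm.c perform against the mask. */
static bool pp_feature_enabled(uint32_t mask, enum PP_FEATURE_MASK feature)
{
	return (mask & feature) != 0;
}
```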

Signed-off-by: Huang Rui 
Reviewed-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h   |  2 --
 drivers/gpu/drm/amd/amdgpu/ci_dpm.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/kv_dpm.c   |  2 +-
 drivers/gpu/drm/amd/include/amd_shared.h  | 18 ++
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h | 18 --
 5 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
index 354c6dc..dd6203a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
@@ -52,8 +52,6 @@ enum amdgpu_dpm_event_src {
AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
 };
 
-#define SCLK_DEEP_SLEEP_MASK 0x8
-
 struct amdgpu_ps {
u32 caps; /* vbios flags */
u32 class; /* vbios flags */
diff --git a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
index be6b199..f48168f 100644
--- a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
@@ -5903,7 +5903,7 @@ static int ci_dpm_init(struct amdgpu_device *adev)
pi->pcie_dpm_key_disabled = 0;
pi->thermal_sclk_dpm_enabled = 0;
 
-   if (amdgpu_pp_feature_mask & SCLK_DEEP_SLEEP_MASK)
+   if (amdgpu_pp_feature_mask & PP_SCLK_DEEP_SLEEP_MASK)
pi->caps_sclk_ds = true;
else
pi->caps_sclk_ds = false;
diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
index bc1720e..ef668a3 100644
--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
@@ -2817,7 +2817,7 @@ static int kv_dpm_init(struct amdgpu_device *adev)
pi->caps_tcp_ramping = true;
}
 
-   if (amdgpu_pp_feature_mask & SCLK_DEEP_SLEEP_MASK)
+   if (amdgpu_pp_feature_mask & PP_SCLK_DEEP_SLEEP_MASK)
pi->caps_sclk_ds = true;
else
pi->caps_sclk_ds = false;
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 9fa3aae..a63e8da 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -109,6 +109,24 @@ enum amd_powergating_state {
 #define AMD_PG_SUPPORT_GFX_PIPELINE(1 << 12)
 #define AMD_PG_SUPPORT_MMHUB   (1 << 13)
 
+enum PP_FEATURE_MASK {
+   PP_SCLK_DPM_MASK = 0x1,
+   PP_MCLK_DPM_MASK = 0x2,
+   PP_PCIE_DPM_MASK = 0x4,
+   PP_SCLK_DEEP_SLEEP_MASK = 0x8,
+   PP_POWER_CONTAINMENT_MASK = 0x10,
+   PP_UVD_HANDSHAKE_MASK = 0x20,
+   PP_SMC_VOLTAGE_CONTROL_MASK = 0x40,
+   PP_VBI_TIME_SUPPORT_MASK = 0x80,
+   PP_ULV_MASK = 0x100,
+   PP_ENABLE_GFX_CG_THRU_SMU = 0x200,
+   PP_CLOCK_STRETCH_MASK = 0x400,
+   PP_OD_FUZZY_FAN_CONTROL_MASK = 0x800,
+   PP_SOCCLK_DPM_MASK = 0x1000,
+   PP_DCEFCLK_DPM_MASK = 0x2000,
+   PP_OVERDRIVE_MASK = 0x4000,
+};
+
 struct amd_ip_funcs {
/* Name of IP block */
char *name;
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 9b3dd7d..0d2b3ce 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -66,24 +66,6 @@ struct vi_dpm_table {
 #define PCIE_PERF_REQ_GEN2 3
 #define PCIE_PERF_REQ_GEN3 4
 
-enum PP_FEATURE_MASK {
-   PP_SCLK_DPM_MASK = 0x1,
-   PP_MCLK_DPM_MASK = 0x2,
-   PP_PCIE_DPM_MASK = 0x4,
-   PP_SCLK_DEEP_SLEEP_MASK = 0x8,
-   PP_POWER_CONTAINMENT_MASK = 0x10,
-   PP_UVD_HANDSHAKE_MASK = 0x20,
-   PP_SMC_VOLTAGE_CONTROL_MASK = 0x40,
-   PP_VBI_TIME_SUPPORT_MASK = 0x80,
-   PP_ULV_MASK = 0x100,
-   PP_ENABLE_GFX_CG_THRU_SMU = 0x200,
-   PP_CLOCK_STRETCH_MASK = 0x400,
-   PP_OD_FUZZY_FAN_CONTROL_MASK = 0x800,
-   PP_SOCCLK_DPM_MASK = 0x1000,
-   PP_DCEFCLK_DPM_MASK = 0x2000,
-   PP_OVERDRIVE_MASK = 0x4000,
-};
-
 enum PHM_BackEnd_Magic {
PHM_Dummy_Magic   = 0xAAAA,
PHM_RV770_Magic   = 0xDCBAABCD,
-- 
2.7.4



[PATCH 08/20] drm/amd/powerplay: send CGPG smc message if PG is enabled for raven

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c | 8 +++-
 drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h  | 1 +
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
index 0f25226..f0727b4 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
@@ -206,12 +206,18 @@ static int smu10_set_power_state_tasks(struct pp_hwmgr *hwmgr, const void *input
 static int smu10_init_power_gate_state(struct pp_hwmgr *hwmgr)
 {
struct smu10_hwmgr *smu10_data = (struct smu10_hwmgr *)(hwmgr->backend);
+   struct amdgpu_device *adev = hwmgr->adev;
 
smu10_data->vcn_power_gated = true;
smu10_data->isp_tileA_power_gated = true;
smu10_data->isp_tileB_power_gated = true;
 
-   return 0;
+   if (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)
+   return smum_send_msg_to_smc_with_parameter(hwmgr,
+  PPSMC_MSG_SetGfxCGPG,
+  true);
+   else
+   return 0;
 }
 
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h b/drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h
index 426bff2..5d07b6e 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h
@@ -75,6 +75,7 @@
 #define PPSMC_MSG_GetMinGfxclkFrequency 0x2C
 #define PPSMC_MSG_GetMaxGfxclkFrequency 0x2D
 #define PPSMC_MSG_SoftReset 0x2E
+#define PPSMC_MSG_SetGfxCGPG   0x2F
 #define PPSMC_MSG_SetSoftMaxGfxClk  0x30
 #define PPSMC_MSG_SetHardMinGfxClk  0x31
 #define PPSMC_MSG_SetSoftMaxSocclkByFreq0x32
-- 
2.7.4



[PATCH 07/20] drm/amdgpu: add setting powergating method for gfx9

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 5e3ddd5..20c57ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3352,6 +3352,11 @@ static int gfx_v9_0_late_init(void *handle)
if (r)
return r;
 
+   r = amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
+  AMD_PG_STATE_GATE);
+   if (r)
+   return r;
+
return 0;
 }
 
-- 
2.7.4



[PATCH 05/20] drm/amdgpu: cleanup init power gating function

2018-04-18 Thread Huang Rui
Remove the gfx_v9_0_enable_sck_slow_down_on_power_up/down and CP power
gating enable calls from init, because they only need to run when the
power gating behavior is set. They are kept in the set_powergating
callback so PG can be enabled/disabled from late_init.

Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 24 ++--
 1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 537d624..6387fda 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -2062,6 +2062,9 @@ static void gfx_v9_0_enable_gfx_dynamic_mg_power_gating(struct amdgpu_device *ad
 
 static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
 {
+   if (!adev->gfx.rlc.is_rlc_v2_1)
+   return;
+
if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_PG |
  AMD_PG_SUPPORT_GFX_SMG |
  AMD_PG_SUPPORT_GFX_DMG |
@@ -2072,24 +2075,9 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
gfx_v9_0_init_rlc_save_restore_list(adev);
gfx_v9_0_enable_save_restore_machine(adev);
 
-   if (adev->asic_type == CHIP_RAVEN) {
-   WREG32(mmRLC_JUMP_TABLE_RESTORE,
-   adev->gfx.rlc.cp_table_gpu_addr >> 8);
-   gfx_v9_0_init_gfx_power_gating(adev);
-
-   if (adev->pg_flags & AMD_PG_SUPPORT_RLC_SMU_HS) {
-			gfx_v9_0_enable_sck_slow_down_on_power_up(adev, true);
-			gfx_v9_0_enable_sck_slow_down_on_power_down(adev, true);
-		} else {
-			gfx_v9_0_enable_sck_slow_down_on_power_up(adev, false);
-			gfx_v9_0_enable_sck_slow_down_on_power_down(adev, false);
-   }
-
-   if (adev->pg_flags & AMD_PG_SUPPORT_CP)
-   gfx_v9_0_enable_cp_power_gating(adev, true);
-   else
-   gfx_v9_0_enable_cp_power_gating(adev, false);
-   }
+   WREG32(mmRLC_JUMP_TABLE_RESTORE,
+  adev->gfx.rlc.cp_table_gpu_addr >> 8);
+   gfx_v9_0_init_gfx_power_gating(adev);
}
 }
 
-- 
2.7.4



[PATCH 06/20] drm/amdgpu: revise init_rlc_save_restore_list behavior to support latest register_list_format/register_restore table

2018-04-18 Thread Huang Rui
The RLC save/restore list is used by the CGPG and GFXOFF functions; it
loads the two binary tables (register_list_format/register_restore)
carried in the RLC firmware.
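The register_list_format table that the rewritten parser walks is a sequence of entry blocks, each a run of (index, value) pairs closed by a sentinel. A standalone sketch of that walk (hypothetical helper, not the driver function; the sentinel value, truncated to "0x" in this archive, is assumed here to be 0xFFFFFFFF):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define LIST_END 0xFFFFFFFFu	/* assumed sentinel value */

/* Record where each entry block starts, mirroring how
 * indirect_start_offsets[] is filled in the rewritten parser. */
static size_t count_entry_blocks(const uint32_t *list, size_t len,
				 size_t *start_offsets, size_t max_offsets)
{
	size_t n = 0, i = 0;

	while (i < len && n < max_offsets) {
		start_offsets[n++] = i;		/* a new block starts here */
		while (i < len && list[i] != LIST_END)
			i += 2;			/* skip one (index, value) pair */
		i++;				/* skip the sentinel */
	}
	return n;
}
```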

Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 141 +-
 1 file changed, 87 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 6387fda..5e3ddd5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -184,6 +184,30 @@ static const struct soc15_reg_golden golden_settings_gc_9_2_1_vg12[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, mmTD_CNTL, 0x01bd9f33, 0x0100)
 };
 
+static const u32 GFX_RLC_SRM_INDEX_CNTL_ADDR_OFFSETS[] =
+{
+   mmRLC_SRM_INDEX_CNTL_ADDR_0 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_1 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_2 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_3 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_4 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_5 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_6 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+   mmRLC_SRM_INDEX_CNTL_ADDR_7 - mmRLC_SRM_INDEX_CNTL_ADDR_0,
+};
+
+static const u32 GFX_RLC_SRM_INDEX_CNTL_DATA_OFFSETS[] =
+{
+   mmRLC_SRM_INDEX_CNTL_DATA_0 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_1 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_2 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_3 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_4 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_5 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_6 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+   mmRLC_SRM_INDEX_CNTL_DATA_7 - mmRLC_SRM_INDEX_CNTL_DATA_0,
+};
+
 #define VEGA10_GB_ADDR_CONFIG_GOLDEN 0x2a114042
 #define VEGA12_GB_ADDR_CONFIG_GOLDEN 0x24104041
 #define RAVEN_GB_ADDR_CONFIG_GOLDEN 0x2442
@@ -1760,55 +1784,42 @@ static void gfx_v9_0_init_csb(struct amdgpu_device *adev)
adev->gfx.rlc.clear_state_size);
 }
 
-static void gfx_v9_0_parse_ind_reg_list(int *register_list_format,
+static void gfx_v9_1_parse_ind_reg_list(int *register_list_format,
int indirect_offset,
int list_size,
int *unique_indirect_regs,
int *unique_indirect_reg_count,
-   int max_indirect_reg_count,
int *indirect_start_offsets,
-   int *indirect_start_offsets_count,
-   int max_indirect_start_offsets_count)
+   int *indirect_start_offsets_count)
 {
int idx;
-   bool new_entry = true;
 
for (; indirect_offset < list_size; indirect_offset++) {
+		indirect_start_offsets[*indirect_start_offsets_count] = indirect_offset;
+		*indirect_start_offsets_count = *indirect_start_offsets_count + 1;
 
-   if (new_entry) {
-   new_entry = false;
-			indirect_start_offsets[*indirect_start_offsets_count] = indirect_offset;
-			*indirect_start_offsets_count = *indirect_start_offsets_count + 1;
-			BUG_ON(*indirect_start_offsets_count >= max_indirect_start_offsets_count);
-   }
+   while (register_list_format[indirect_offset] != 0x) {
+   indirect_offset += 2;
 
-   if (register_list_format[indirect_offset] == 0x) {
-   new_entry = true;
-   continue;
-   }
+   /* look for the matching indice */
+   for (idx = 0; idx < *unique_indirect_reg_count; idx++) {
+   if (unique_indirect_regs[idx] ==
+   register_list_format[indirect_offset] ||
+   !unique_indirect_regs[idx])
+   break;
+   }
 
-   indirect_offset += 2;
+   BUG_ON(idx >= *unique_indirect_reg_count);
 
-   /* look for the matching indice */
-   for (idx = 0; idx < *unique_indirect_reg_count; idx++) {
-   if (unique_indirect_regs[idx] ==
-   register_list_format[indirect_offset])
-   break;
-   }
+   if (!unique_indirect_regs[idx])
+				unique_indirect_regs[idx] = register_list_format[indirect_offset];
 
-   if (idx >= 

[PATCH 04/20] drm/amdgpu: enter rlc safe mode before set cgpg

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index f0ff604..537d624 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3391,8 +3391,7 @@ static void gfx_v9_0_exit_rlc_safe_mode(struct amdgpu_device *adev)
 static void gfx_v9_0_update_gfx_cg_power_gating(struct amdgpu_device *adev,
bool enable)
 {
-   /* TODO: double check if we need to perform under safe mdoe */
-   /* gfx_v9_0_enter_rlc_safe_mode(adev); */
+   gfx_v9_0_enter_rlc_safe_mode(adev);
 
if ((adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) && enable) {
gfx_v9_0_enable_gfx_cg_power_gating(adev, true);
@@ -3403,7 +3402,7 @@ static void gfx_v9_0_update_gfx_cg_power_gating(struct amdgpu_device *adev,
gfx_v9_0_enable_gfx_pipeline_powergating(adev, false);
}
 
-   /* gfx_v9_0_exit_rlc_safe_mode(adev); */
+   gfx_v9_0_exit_rlc_safe_mode(adev);
 }
 
 static void gfx_v9_0_update_gfx_mg_power_gating(struct amdgpu_device *adev,
@@ -3794,7 +3793,7 @@ static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
}
 
amdgpu_ring_write(ring, header);
-BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+   BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
amdgpu_ring_write(ring,
 #ifdef __BIG_ENDIAN
(2 << 0) |
-- 
2.7.4



[PATCH 00/20] drm/amdgpu: gfx off support

2018-04-18 Thread Huang Rui
GFXOFF is a new GPU feature that saves power. It uses the RLC to power
off the gfx engine dynamically when there is no workload on the gfx
pipe, putting gfx into an "idle" state.
1. Add three additional RLC ucodes, and use psp to load them.
2. Revise RLC save restore list.
3. Enable CGPG (GFX power gating).
4. Enable gfxoff.
5. Revise suspend/resume sequence.

Currently, only raven supports gfxoff. CQE has run several rounds of
testing, and so far no regression introduced by the gfxoff feature has
been found.

We support two types of gfxoff, and users can build them manually from
the firmware repo:
1. Real CGPG
$ make clean
$ make REAL_CGPG=1
2. Faked CGPG: (by default)
$ make clean
$ make  

Then enable gfxoff by loading amdgpu with ppfeaturemask=0xbfff.
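The mask arithmetic behind that ppfeaturemask setting is a single-bit test; a sketch, assuming the PP_GFXOFF_MASK value (0x8000, bit 15) introduced in patch 10 and treating the mask values shown in this archive at face value:

```c
#include <assert.h>
#include <stdint.h>

#define PP_GFXOFF_MASK 0x8000u	/* bit 15, per patch 10 */

/* Nonzero when a given ppfeaturemask enables gfxoff. */
static int gfxoff_enabled(uint32_t ppfeaturemask)
{
	return (ppfeaturemask & PP_GFXOFF_MASK) != 0;
}
```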

Thanks,
Ray

Huang Rui (20):
  drm/amdgpu: update psp gfx if header
  drm/amdgpu: add new rlc firmware header format v2.1
  drm/amdgpu: add save restore list cntl gpm and srm firmware support
  drm/amdgpu: enter rlc safe mode before set cgpg
  drm/amdgpu: cleanup init power gating function
  drm/amdgpu: revise init_rlc_save_restore_list behavior to support
latest register_list_format/register_restore table
  drm/amdgpu: add setting powergating method for gfx9
  drm/amd/powerplay: send CGPG smc message if PG is enabled for raven
  drm/amdgpu: move PP_FEATURE_MASK to amd_shared header
  drm/amdgpu: add gfxoff feature mask
  drm/amdgpu: set gfxoff disabled by default
  drm/amd/powerplay: add gfx off control function
  drm/amd/powerplay: enable/disable gfxoff through smu
  drm/amdgpu: use pp_feature member to store the mask
  drm/amdgpu: clear gfxoff feature mask if the asic is not raven
  drm/amd/powerplay: add control gfxoff enabling in late init
  drm/amdgpu: it should disable gfxoff when system is going to suspend
  drm/amdgpu: fix to disable powergating in hw_fini
  drm/amdgpu: set CGPG if gfxoff is enabled for raven
  drm/amd/powerplay: use the flag to decide whether send gfxoff smc
message

 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  16 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c|   8 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h   |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   |  36 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c |  51 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h |  22 ++
 drivers/gpu/drm/amd/amdgpu/ci_dpm.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 233 +++---
 drivers/gpu/drm/amd/amdgpu/kv_dpm.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h   |  67 +--
 drivers/gpu/drm/amd/amdgpu/psp_v10_0.c|   9 +
 drivers/gpu/drm/amd/amdgpu/soc15.c|   5 +
 drivers/gpu/drm/amd/include/amd_shared.h  |  19 ++
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c |  20 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c   |   6 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c |  50 -
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h |  19 +-
 drivers/gpu/drm/amd/powerplay/inc/rv_ppsmc.h  |   1 +
 include/uapi/drm/amdgpu_drm.h |   6 +
 20 files changed, 447 insertions(+), 129 deletions(-)

-- 
2.7.4



[PATCH 03/20] drm/amdgpu: add save restore list cntl gpm and srm firmware support

2018-04-18 Thread Huang Rui
The RLC save/restore list cntl/gpm_mem/srm_mem ucodes are used by the
CGPG and gfxoff functions.
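The query side added by this patch is a straight dispatch from firmware type to the cached version fields; sketched here with trimmed-down, hypothetical types (not the driver's own structs):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the three new query types. */
enum rlc_sr_fw { FW_RLC_SRLC, FW_RLC_SRLG, FW_RLC_SRLS };

struct rlc_sr_versions {
	uint32_t srlc_ver, srlg_ver, srls_ver;
};

/* Analogous to the new cases added to amdgpu_firmware_info(). */
static uint32_t query_fw_version(const struct rlc_sr_versions *v,
				 enum rlc_sr_fw type)
{
	switch (type) {
	case FW_RLC_SRLC: return v->srlc_ver;
	case FW_RLC_SRLG: return v->srlg_ver;
	case FW_RLC_SRLS: return v->srls_ver;
	}
	return 0;
}
```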

Signed-off-by: Huang Rui 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   | 15 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 36 
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 17 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h |  3 ++
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 55 +--
 drivers/gpu/drm/amd/amdgpu/psp_v10_0.c|  9 +
 include/uapi/drm/amdgpu_drm.h |  6 
 7 files changed, 138 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index f5b2ec2..bed1f5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -774,9 +774,18 @@ struct amdgpu_rlc {
u32 starting_offsets_start;
u32 reg_list_format_size_bytes;
u32 reg_list_size_bytes;
+   u32 reg_list_format_direct_reg_list_length;
+   u32 save_restore_list_cntl_size_bytes;
+   u32 save_restore_list_gpm_size_bytes;
+   u32 save_restore_list_srm_size_bytes;
 
u32 *register_list_format;
u32 *register_restore;
+   u8 *save_restore_list_cntl;
+   u8 *save_restore_list_gpm;
+   u8 *save_restore_list_srm;
+
+   bool is_rlc_v2_1;
 };
 
 #define AMDGPU_MAX_COMPUTE_QUEUES KGD_MAX_QUEUES
@@ -943,6 +952,12 @@ struct amdgpu_gfx {
uint32_tce_feature_version;
uint32_tpfp_feature_version;
uint32_trlc_feature_version;
+   uint32_trlc_srlc_fw_version;
+   uint32_trlc_srlc_feature_version;
+   uint32_trlc_srlg_fw_version;
+   uint32_trlc_srlg_feature_version;
+   uint32_trlc_srls_fw_version;
+   uint32_trlc_srls_feature_version;
uint32_tmec_feature_version;
uint32_tmec2_feature_version;
struct amdgpu_ring  gfx_ring[AMDGPU_MAX_GFX_RINGS];
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index f059d8e..6d55cae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -215,6 +215,18 @@ static int amdgpu_firmware_info(struct drm_amdgpu_info_firmware *fw_info,
fw_info->ver = adev->gfx.rlc_fw_version;
fw_info->feature = adev->gfx.rlc_feature_version;
break;
+   case AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_CNTL:
+   fw_info->ver = adev->gfx.rlc_srlc_fw_version;
+   fw_info->feature = adev->gfx.rlc_srlc_feature_version;
+   break;
+   case AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_GPM_MEM:
+   fw_info->ver = adev->gfx.rlc_srlg_fw_version;
+   fw_info->feature = adev->gfx.rlc_srlg_feature_version;
+   break;
+   case AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_SRM_MEM:
+   fw_info->ver = adev->gfx.rlc_srls_fw_version;
+   fw_info->feature = adev->gfx.rlc_srls_feature_version;
+   break;
case AMDGPU_INFO_FW_GFX_MEC:
if (query_fw->index == 0) {
fw_info->ver = adev->gfx.mec_fw_version;
@@ -1150,6 +1162,30 @@ static int amdgpu_debugfs_firmware_info(struct seq_file *m, void *data)
seq_printf(m, "RLC feature version: %u, firmware version: 0x%08x\n",
   fw_info.feature, fw_info.ver);
 
+   /* RLC SAVE RESTORE LIST CNTL */
+   query_fw.fw_type = AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_CNTL;
+   ret = amdgpu_firmware_info(_info, _fw, adev);
+   if (ret)
+   return ret;
+	seq_printf(m, "RLC SRLC feature version: %u, firmware version: 0x%08x\n",
+  fw_info.feature, fw_info.ver);
+
+   /* RLC SAVE RESTORE LIST GPM MEM */
+   query_fw.fw_type = AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_GPM_MEM;
+   ret = amdgpu_firmware_info(_info, _fw, adev);
+   if (ret)
+   return ret;
+	seq_printf(m, "RLC SRLG feature version: %u, firmware version: 0x%08x\n",
+  fw_info.feature, fw_info.ver);
+
+   /* RLC SAVE RESTORE LIST SRM MEM */
+   query_fw.fw_type = AMDGPU_INFO_FW_GFX_RLC_RESTORE_LIST_SRM_MEM;
+   ret = amdgpu_firmware_info(_info, _fw, adev);
+   if (ret)
+   return ret;
+	seq_printf(m, "RLC SRLS feature version: %u, firmware version: 0x%08x\n",
+  fw_info.feature, fw_info.ver);
+
/* MEC */
query_fw.fw_type = AMDGPU_INFO_FW_GFX_MEC;
query_fw.index = 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 

[PATCH 01/20] drm/amdgpu: update psp gfx if header

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Acked-by: Hawking Zhang 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h | 67 ++---
 1 file changed, 46 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
index 8da6da9..0cf48d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
+++ b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
@@ -40,11 +40,20 @@ enum psp_gfx_crtl_cmd_id
 GFX_CTRL_CMD_ID_INIT_GPCOM_RING = 0x0002,   /* initialize GPCOM ring */
 GFX_CTRL_CMD_ID_DESTROY_RINGS   = 0x0003,   /* destroy rings */
 GFX_CTRL_CMD_ID_CAN_INIT_RINGS  = 0x0004,   /* is it allowed to initialized the rings */
+GFX_CTRL_CMD_ID_ENABLE_INT  = 0x0005,   /* enable PSP-to-Gfx interrupt */
+GFX_CTRL_CMD_ID_DISABLE_INT = 0x0006,   /* disable PSP-to-Gfx interrupt */
+GFX_CTRL_CMD_ID_MODE1_RST   = 0x0007,   /* trigger the Mode 1 reset */
 
 GFX_CTRL_CMD_ID_MAX = 0x000F,   /* max command ID */
 };
 
 
+/*-
+NOTE:   All physical addresses used in this interface are actually
+GPU Virtual Addresses.
+*/
+
+
 /* Control registers of the TEE Gfx interface. These are located in
 *  SRBM-to-PSP mailbox registers (total 8 registers).
 */
@@ -55,8 +64,8 @@ struct psp_gfx_ctrl
 volatile uint32_t   rbi_rptr; /* +8   Read pointer (index) of RBI ring */
 volatile uint32_t   gpcom_wptr;   /* +12  Write pointer (index) of GPCOM ring */
 volatile uint32_t   gpcom_rptr;   /* +16  Read pointer (index) of GPCOM ring */
-volatile uint32_t   ring_addr_lo; /* +20  bits [31:0] of physical address of ring buffer */
-volatile uint32_t   ring_addr_hi; /* +24  bits [63:32] of physical address of ring buffer */
+volatile uint32_t   ring_addr_lo; /* +20  bits [31:0] of GPU Virtual of ring buffer (VMID=0)*/
+volatile uint32_t   ring_addr_hi; /* +24  bits [63:32] of GPU Virtual of ring buffer (VMID=0) */
 volatile uint32_t   ring_buf_size;/* +28  Ring buffer size (in bytes) */
 
 };
@@ -78,6 +87,8 @@ enum psp_gfx_cmd_id
 GFX_CMD_ID_LOAD_ASD = 0x0004,   /* load ASD Driver */
 GFX_CMD_ID_SETUP_TMR= 0x0005,   /* setup TMR region */
 GFX_CMD_ID_LOAD_IP_FW   = 0x0006,   /* load HW IP FW */
+GFX_CMD_ID_DESTROY_TMR  = 0x0007,   /* destroy TMR region */
+GFX_CMD_ID_SAVE_RESTORE = 0x0008,   /* save/restore HW IP FW */
 
 };
 
@@ -85,11 +96,11 @@ enum psp_gfx_cmd_id
 /* Command to load Trusted Application binary into PSP OS. */
 struct psp_gfx_cmd_load_ta
 {
-uint32_tapp_phy_addr_lo;/* bits [31:0] of the physical 
address of the TA binary (must be 4 KB aligned) */
-uint32_tapp_phy_addr_hi;/* bits [63:32] of the physical 
address of the TA binary */
+uint32_tapp_phy_addr_lo;/* bits [31:0] of the GPU Virtual 
address of the TA binary (must be 4 KB aligned) */
+uint32_tapp_phy_addr_hi;/* bits [63:32] of the GPU Virtual 
address of the TA binary */
 uint32_tapp_len;/* length of the TA binary in 
bytes */
-uint32_tcmd_buf_phy_addr_lo;/* bits [31:0] of the physical 
address of CMD buffer (must be 4 KB aligned) */
-uint32_tcmd_buf_phy_addr_hi;/* bits [63:32] of the physical 
address of CMD buffer */
+uint32_tcmd_buf_phy_addr_lo;/* bits [31:0] of the GPU Virtual 
address of CMD buffer (must be 4 KB aligned) */
+uint32_tcmd_buf_phy_addr_hi;/* bits [63:32] of the GPU Virtual 
address of CMD buffer */
 uint32_tcmd_buf_len;/* length of the CMD buffer in 
bytes; must be multiple of 4 KB */
 
 /* Note: CmdBufLen can be set to 0. In this case no persistent CMD buffer 
is provided
@@ -111,8 +122,8 @@ struct psp_gfx_cmd_unload_ta
 */
 struct psp_gfx_buf_desc
 {
-uint32_tbuf_phy_addr_lo;   /* bits [31:0] of physical address 
of the buffer (must be 4 KB aligned) */
-uint32_tbuf_phy_addr_hi;   /* bits [63:32] of physical address 
of the buffer */
+uint32_tbuf_phy_addr_lo;   /* bits [31:0] of GPU Virtual 
address of the buffer (must be 4 KB aligned) */
+uint32_tbuf_phy_addr_hi;   /* bits [63:32] of GPU Virtual 
address of the buffer */
 uint32_tbuf_size;  /* buffer size in bytes (must be 
multiple of 4 KB and no bigger than 64 MB) */
 
 };
@@ -145,8 +156,8 @@ struct psp_gfx_cmd_invoke_cmd
 /* Command to setup TMR region. */
 struct psp_gfx_cmd_setup_tmr
 {
-uint32_tbuf_phy_addr_lo;   /* bits [31:0] of physical address 
of TMR buffer (must be 4 KB aligned) */
-uint32_tbuf_phy_addr_hi;   /* bits [63:32] of physical 

[PATCH 02/20] drm/amdgpu: add new rlc firmware header format v2.1

2018-04-18 Thread Huang Rui
Signed-off-by: Huang Rui 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 34 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 19 +
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index dd6f989..84d6525 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -161,8 +161,38 @@ void amdgpu_ucode_print_rlc_hdr(const struct 
common_firmware_header *hdr)
  
le32_to_cpu(rlc_hdr->reg_list_format_separate_array_offset_bytes));
DRM_DEBUG("reg_list_separate_size_bytes: %u\n",
  le32_to_cpu(rlc_hdr->reg_list_separate_size_bytes));
-   DRM_DEBUG("reg_list_separate_size_bytes: %u\n",
- le32_to_cpu(rlc_hdr->reg_list_separate_size_bytes));
+   DRM_DEBUG("reg_list_separate_array_offset_bytes: %u\n",
+ 
le32_to_cpu(rlc_hdr->reg_list_separate_array_offset_bytes));
+   if (version_minor == 1) {
+   const struct rlc_firmware_header_v2_1 *v2_1 =
+   container_of(rlc_hdr, struct 
rlc_firmware_header_v2_1, v2_0);
+   DRM_DEBUG("reg_list_format_direct_reg_list_length: 
%u\n",
+ 
le32_to_cpu(v2_1->reg_list_format_direct_reg_list_length));
+   DRM_DEBUG("save_restore_list_cntl_ucode_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_cntl_ucode_ver));
+   DRM_DEBUG("save_restore_list_cntl_feature_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_cntl_feature_ver));
+   DRM_DEBUG("save_restore_list_cntl_size_bytes %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_cntl_size_bytes));
+   DRM_DEBUG("save_restore_list_cntl_offset_bytes: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_cntl_offset_bytes));
+   DRM_DEBUG("save_restore_list_gpm_ucode_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_gpm_ucode_ver));
+   DRM_DEBUG("save_restore_list_gpm_feature_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_gpm_feature_ver));
+   DRM_DEBUG("save_restore_list_gpm_size_bytes %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_gpm_size_bytes));
+   DRM_DEBUG("save_restore_list_gpm_offset_bytes: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_gpm_offset_bytes));
+   DRM_DEBUG("save_restore_list_srm_ucode_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_srm_ucode_ver));
+   DRM_DEBUG("save_restore_list_srm_feature_ver: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_srm_feature_ver));
+   DRM_DEBUG("save_restore_list_srm_size_bytes %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_srm_size_bytes));
+   DRM_DEBUG("save_restore_list_srm_offset_bytes: %u\n",
+ 
le32_to_cpu(v2_1->save_restore_list_srm_offset_bytes));
+   }
} else {
DRM_ERROR("Unknown RLC ucode version: %u.%u\n", version_major, 
version_minor);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 30b5500..0b262f4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -98,6 +98,24 @@ struct rlc_firmware_header_v2_0 {
uint32_t reg_list_separate_array_offset_bytes; /* payload offset from 
the start of the header */
 };
 
+/* version_major=2, version_minor=1 */
+struct rlc_firmware_header_v2_1 {
+   struct rlc_firmware_header_v2_0 v2_0;
+   uint32_t reg_list_format_direct_reg_list_length; /* length of direct 
reg list format array */
+   uint32_t save_restore_list_cntl_ucode_ver;
+   uint32_t save_restore_list_cntl_feature_ver;
+   uint32_t save_restore_list_cntl_size_bytes;
+   uint32_t save_restore_list_cntl_offset_bytes;
+   uint32_t save_restore_list_gpm_ucode_ver;
+   uint32_t save_restore_list_gpm_feature_ver;
+   uint32_t save_restore_list_gpm_size_bytes;
+   uint32_t save_restore_list_gpm_offset_bytes;
+   uint32_t save_restore_list_srm_ucode_ver;
+   uint32_t save_restore_list_srm_feature_ver;
+   uint32_t save_restore_list_srm_size_bytes;
+   uint32_t save_restore_list_srm_offset_bytes;
+};
+
 /* version_major=1, version_minor=0 */
 struct 

Re: [PATCH 3/3] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 5:51 PM, Samuel Li  wrote:
> It's auto by default. For CZ/ST, auto setting enables sg display
> when vram size is small; otherwise still uses vram.
> This patch fixes some potential issues introduced by the change
> "allow framebuffer in GART memory as well" due to CZ/ST hardware
> limitation.
>
> v2: Change default setting to auto.
> v3: Move some logic from amdgpu_display_framebuffer_domains()
> to pin function, suggested by Christian.
> v4: Split into several patches.
>
> Signed-off-by: Samuel Li 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 11 +++
>  3 files changed, 17 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index b3d047d..26429de 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -129,6 +129,7 @@ extern int amdgpu_lbpw;
>  extern int amdgpu_compute_multipipe;
>  extern int amdgpu_gpu_recovery;
>  extern int amdgpu_emu_mode;
> +extern int amdgpu_sg_display;
>
>  #ifdef CONFIG_DRM_AMDGPU_SI
>  extern int amdgpu_si_support;
> @@ -137,6 +138,7 @@ extern int amdgpu_si_support;
>  extern int amdgpu_cik_support;
>  #endif
>
> +#define AMDGPU_SG_THRESHOLD(256*1024*1024)
>  #define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
>  #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
>  #define AMDGPU_MAX_USEC_TIMEOUT10  /* 100 ms */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 0b19482..85dcd1c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -132,6 +132,7 @@ int amdgpu_lbpw = -1;
>  int amdgpu_compute_multipipe = -1;
>  int amdgpu_gpu_recovery = -1; /* auto */
>  int amdgpu_emu_mode = 0;
> +int amdgpu_sg_display = -1;
>
>  MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
>  module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
> @@ -290,6 +291,9 @@ module_param_named(gpu_recovery, amdgpu_gpu_recovery, 
> int, 0444);
>  MODULE_PARM_DESC(emu_mode, "Emulation mode, (1 = enable, 0 = disable)");
>  module_param_named(emu_mode, amdgpu_emu_mode, int, 0444);
>
> +MODULE_PARM_DESC(sg_display, "Enable scatter gather display, (1 = enable, 0 
> = disable, -1 = auto");
> +module_param_named(sg_display, amdgpu_sg_display, int, 0444);
> +
>  #ifdef CONFIG_DRM_AMDGPU_SI
>
>  #if defined(CONFIG_DRM_RADEON) || defined(CONFIG_DRM_RADEON_MODULE)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 8dc782a..cb0807c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -696,6 +696,17 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
> domain,
> return -EINVAL;
> }
>
> +   /* This assumes only apu display buffers pin with (VRAM|GTT) */
> +   if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
> +   domain = AMDGPU_GEM_DOMAIN_VRAM;
> +   if (amdgpu_sg_display == 1)
> +   domain = AMDGPU_GEM_DOMAIN_GTT;
> +   else if (amdgpu_sg_display == -1) {
> +   if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
> +   domain = AMDGPU_GEM_DOMAIN_GTT;
> +   }
> +   }

Please drop the module parameter.  I can't see any reason for it.  We
are ending up with too many driver parameters of questionable value.
If the user would prefer GTT over VRAM or vice versa, we should take
those preferences (from the UMDs) into account at pinning time as I
said earlier rather than globally at as a driver option.  E.g.,

if (bo->preferred_domains == AMDGPU_GEM_DOMAIN_VRAM)
domain = AMDGPU_GEM_DOMAIN_VRAM; /* if user really wants vram, respect it */
else if (bo->preferred_domains == AMDGPU_GEM_DOMAIN_GTT)
domain = AMDGPU_GEM_DOMAIN_GTT; /* if user really wants gtt, respect it */
else if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
domain = AMDGPU_GEM_DOMAIN_GTT; /* if vram is limited, use gtt */
else
domain = AMDGPU_GEM_DOMAIN_VRAM;

Alex

> +
> if (bo->pin_count) {
> uint32_t mem_type = bo->tbo.mem.mem_type;
>
> --
> 2.7.4
>


Re: [PATCH 2/3] drm/amdgpu: Remove VRAM from shared bo domains.

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 5:51 PM, Samuel Li  wrote:
> Signed-off-by: Samuel Li 

Please add a commit message.  E.g.,

This fixes a potential regression introduced when SG display support
was initially added which could lead to a shared
buffer ending up pinned in vram.  Check if GTT is allowed in the
domain and use that if so, otherwise return an error.

With that fixed:
Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 24f582c..8dc782a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -689,8 +689,12 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
> domain,
> return -EINVAL;
>
> /* A shared bo cannot be migrated to VRAM */
> -   if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
> -   return -EINVAL;
> +   if (bo->prime_shared_count) {
> +   if (domain & AMDGPU_GEM_DOMAIN_GTT)
> +   domain = AMDGPU_GEM_DOMAIN_GTT;
> +   else
> +   return -EINVAL;
> +   }
>
> if (bo->pin_count) {
> uint32_t mem_type = bo->tbo.mem.mem_type;
> --
> 2.7.4
>


Re: [PATCH 1/3] drm/amdgpu: Rename amdgpu_display_framebuffer_domains()

2018-04-18 Thread Alex Deucher
On Wed, Apr 18, 2018 at 5:50 PM, Samuel Li  wrote:
> It returns supported domains, and domains actually used are to be
> decided later.

maybe clarify the commit message a bit.  E.g.,
It returns supported domains for display, and domains actually used
are to be decided later when we pin them.

Either way:
Reviewed-by: Alex Deucher 

>
> Signed-off-by: Samuel Li 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 4 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.h   | 2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c| 2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 2 +-
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 +--
>  5 files changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 50f98df..0caa3d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -189,7 +189,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc 
> *crtc,
> goto cleanup;
> }
>
> -   r = amdgpu_bo_pin(new_abo, amdgpu_display_framebuffer_domains(adev), 
> );
> +   r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev), 
> );
> if (unlikely(r != 0)) {
> DRM_ERROR("failed to pin new abo buffer before flip\n");
> goto unreserve;
> @@ -484,7 +484,7 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs 
> = {
> .create_handle = drm_gem_fb_create_handle,
>  };
>
> -uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev)
> +uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev)
>  {
> uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
> index 2b11d80..f66e3e3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
> @@ -23,7 +23,7 @@
>  #ifndef __AMDGPU_DISPLAY_H__
>  #define __AMDGPU_DISPLAY_H__
>
> -uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev);
> +uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev);
>  struct drm_framebuffer *
>  amdgpu_display_user_framebuffer_create(struct drm_device *dev,
>struct drm_file *file_priv,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index ff89e84..bc5fd8e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -137,7 +137,7 @@ static int amdgpufb_create_pinned_object(struct 
> amdgpu_fbdev *rfbdev,
> /* need to align pitch with crtc limits */
> mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,
>   fb_tiled);
> -   domain = amdgpu_display_framebuffer_domains(adev);
> +   domain = amdgpu_display_supported_domains(adev);
>
> height = ALIGN(mode_cmd->height, 8);
> size = mode_cmd->pitches[0] * height;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> index 4b584cb7..cf0749f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
> @@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
> *dma_buf,
> struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
> struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> struct ttm_operation_ctx ctx = { true, false };
> -   u32 domain = amdgpu_display_framebuffer_domains(adev);
> +   u32 domain = amdgpu_display_supported_domains(adev);
> int ret;
> bool reads = (direction == DMA_BIDIRECTIONAL ||
>   direction == DMA_FROM_DEVICE);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 6f92a19..1f5603a 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct 
> drm_plane *plane,
> return r;
>
> if (plane->type != DRM_PLANE_TYPE_CURSOR)
> -   domain = amdgpu_display_framebuffer_domains(adev);
> +   domain = amdgpu_display_supported_domains(adev);
> else
> domain = AMDGPU_GEM_DOMAIN_VRAM;
>
> r = amdgpu_bo_pin(rbo, domain, >address);
> -
> amdgpu_bo_unreserve(rbo);
>
> if (unlikely(r != 0)) {
> --
> 2.7.4
>

[PATCH 2/3] drm/amdgpu: Remove VRAM from shared bo domains.

2018-04-18 Thread Samuel Li
Signed-off-by: Samuel Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 24f582c..8dc782a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -689,8 +689,12 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
domain,
return -EINVAL;
 
/* A shared bo cannot be migrated to VRAM */
-   if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
-   return -EINVAL;
+   if (bo->prime_shared_count) {
+   if (domain & AMDGPU_GEM_DOMAIN_GTT)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   else
+   return -EINVAL;
+   }
 
if (bo->pin_count) {
uint32_t mem_type = bo->tbo.mem.mem_type;
-- 
2.7.4



[PATCH 3/3] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Samuel Li
It's auto by default. For CZ/ST, auto setting enables sg display
when vram size is small; otherwise still uses vram.
This patch fixes some potential issues introduced by the change
"allow framebuffer in GART memory as well" due to CZ/ST hardware
limitation.

v2: Change default setting to auto.
v3: Move some logic from amdgpu_display_framebuffer_domains()
to pin function, suggested by Christian.
v4: Split into several patches.

Signed-off-by: Samuel Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 11 +++
 3 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b3d047d..26429de 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -129,6 +129,7 @@ extern int amdgpu_lbpw;
 extern int amdgpu_compute_multipipe;
 extern int amdgpu_gpu_recovery;
 extern int amdgpu_emu_mode;
+extern int amdgpu_sg_display;
 
 #ifdef CONFIG_DRM_AMDGPU_SI
 extern int amdgpu_si_support;
@@ -137,6 +138,7 @@ extern int amdgpu_si_support;
 extern int amdgpu_cik_support;
 #endif
 
+#define AMDGPU_SG_THRESHOLD(256*1024*1024)
 #define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
 #define AMDGPU_MAX_USEC_TIMEOUT10  /* 100 ms */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0b19482..85dcd1c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -132,6 +132,7 @@ int amdgpu_lbpw = -1;
 int amdgpu_compute_multipipe = -1;
 int amdgpu_gpu_recovery = -1; /* auto */
 int amdgpu_emu_mode = 0;
+int amdgpu_sg_display = -1;
 
 MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
 module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -290,6 +291,9 @@ module_param_named(gpu_recovery, amdgpu_gpu_recovery, int, 
0444);
 MODULE_PARM_DESC(emu_mode, "Emulation mode, (1 = enable, 0 = disable)");
 module_param_named(emu_mode, amdgpu_emu_mode, int, 0444);
 
+MODULE_PARM_DESC(sg_display, "Enable scatter gather display, (1 = enable, 0 = 
disable, -1 = auto");
+module_param_named(sg_display, amdgpu_sg_display, int, 0444);
+
 #ifdef CONFIG_DRM_AMDGPU_SI
 
 #if defined(CONFIG_DRM_RADEON) || defined(CONFIG_DRM_RADEON_MODULE)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 8dc782a..cb0807c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -696,6 +696,17 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
domain,
return -EINVAL;
}
 
+   /* This assumes only apu display buffers pin with (VRAM|GTT) */
+   if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
+   domain = AMDGPU_GEM_DOMAIN_VRAM;
+   if (amdgpu_sg_display == 1)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   else if (amdgpu_sg_display == -1) {
+   if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   }
+   }
+
if (bo->pin_count) {
uint32_t mem_type = bo->tbo.mem.mem_type;
 
-- 
2.7.4



[PATCH 1/3] drm/amdgpu: Rename amdgpu_display_framebuffer_domains()

2018-04-18 Thread Samuel Li
It returns supported domains, and domains actually used are to be
decided later.

Signed-off-by: Samuel Li 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.h   | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c| 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 2 +-
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 +--
 5 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 50f98df..0caa3d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -189,7 +189,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc 
*crtc,
goto cleanup;
}
 
-   r = amdgpu_bo_pin(new_abo, amdgpu_display_framebuffer_domains(adev), 
);
+   r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev), 
);
if (unlikely(r != 0)) {
DRM_ERROR("failed to pin new abo buffer before flip\n");
goto unreserve;
@@ -484,7 +484,7 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = 
{
.create_handle = drm_gem_fb_create_handle,
 };
 
-uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev)
+uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev)
 {
uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
index 2b11d80..f66e3e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
@@ -23,7 +23,7 @@
 #ifndef __AMDGPU_DISPLAY_H__
 #define __AMDGPU_DISPLAY_H__
 
-uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev);
+uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev);
 struct drm_framebuffer *
 amdgpu_display_user_framebuffer_create(struct drm_device *dev,
   struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index ff89e84..bc5fd8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -137,7 +137,7 @@ static int amdgpufb_create_pinned_object(struct 
amdgpu_fbdev *rfbdev,
/* need to align pitch with crtc limits */
mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,
  fb_tiled);
-   domain = amdgpu_display_framebuffer_domains(adev);
+   domain = amdgpu_display_supported_domains(adev);
 
height = ALIGN(mode_cmd->height, 8);
size = mode_cmd->pitches[0] * height;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 4b584cb7..cf0749f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
*dma_buf,
struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct ttm_operation_ctx ctx = { true, false };
-   u32 domain = amdgpu_display_framebuffer_domains(adev);
+   u32 domain = amdgpu_display_supported_domains(adev);
int ret;
bool reads = (direction == DMA_BIDIRECTIONAL ||
  direction == DMA_FROM_DEVICE);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 6f92a19..1f5603a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct drm_plane 
*plane,
return r;
 
if (plane->type != DRM_PLANE_TYPE_CURSOR)
-   domain = amdgpu_display_framebuffer_domains(adev);
+   domain = amdgpu_display_supported_domains(adev);
else
domain = AMDGPU_GEM_DOMAIN_VRAM;
 
r = amdgpu_bo_pin(rbo, domain, >address);
-
amdgpu_bo_unreserve(rbo);
 
if (unlikely(r != 0)) {
-- 
2.7.4



Re: Raven Ridge Ryzen 2500U hang reproduced

2018-04-18 Thread Bráulio Bhavamitra
Hi Nicolai,

It makes sense.

The easiest way to reproduce the real freeze is to:
- Run vblank_mode=0 glxgears
- Open https://hangouts.google.com/start
- Activate KDE compositing

It will freeze within a few minutes.

Cheers,
Bráulio

On Mon, Apr 16, 2018 at 4:44 AM Nicolai Hähnle  wrote:

> On 14.04.2018 00:24, Bráulio Bhavamitra wrote:
> > It ALWAYS crashes on shader15 of
> > http://www.graphicsfuzz.com/benchmark/android-v1.html.
>
> This is very likely an unrelated issue to any kind of desktop hang
> you're seeing. The graphics fuzz shaders are the result of fuzzing to
> intentionally generate unusual control flow structures which are likely
> to trigger shader compiler bugs. Typical desktop workloads don't have
> such shaders, so any generic desktop hang you're seeing is almost
> certainly unrelated.
>
> Cheers,
> Nicolai
>
>
> >
> > Also reported at https://bugzilla.redhat.com/show_bug.cgi?id=1562530
> >
> > Using kernel 4.16 with options rcu_nocb=0-15 and amdgpu.dpm=0
> >
> > Cheers,
> > Bráulio
> >
> > On Mon, Mar 26, 2018 at 8:30 PM Bráulio Bhavamitra  > > wrote:
> >
> > Hi all,
> >
> > Following the random crashes happening with many users (e.g.
> >
> https://www.phoronix.com/scan.php?page=news_item=Raven-Ridge-March-Update
> ),
> > not only on Linux but also Windows, I've been struggling to
> > reproduce and generate any error log.
> >
> > After discovering that the error only happened with KDE and games
> > (at least for me, see https://bugs.kde.org/show_bug.cgi?id=392378),
> > I could reproduce after a failing suspend.
> >
> > The crash most of the time allows the mouse to keep moving, but
> > nothing else works. Except this time the keyboard worked so I
> > could switch the tty and save the dmesg messages. After this I had
> > to force reboot as it got stuck trying to kill the lightdm service
> > (gpu hanged?).
> >
> > The errors are, see attached the full dmesg:
> > [ 2899.525650] amdgpu :03:00.0: couldn't schedule ib on ring
> 
> > [ 2899.525769] [drm:amdgpu_job_run [amdgpu]] *ERROR* Error
> > scheduling IBs (-22)
> > [ 2909.125047] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx
> > timeout, last signaled seq=174624, last emitted seq=174627
> > [ 2909.125060] [drm] IP block:psp is hung!
> > [ 2909.125063] [drm] GPU recovery disabled.
> > [ 2914.756931] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
> > amdgpu_cs_list_validate(validated) failed.
> > [ 2914.756997] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to
> > process the buffer list -16!
> > [ 2914.997372] amdgpu :03:00.0: couldn't schedule ib on ring
> 
> > [ 2914.997498] [drm:amdgpu_job_run [amdgpu]] *ERROR* Error
> > scheduling IBs (-22)
> > [ 2930.117275] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
> > amdgpu_cs_list_validate(validated) failed.
> > [ 2930.117405] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to
> > process the buffer list -16!
> > [ 2930.152015] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to
> > clear memory with ring turned off.
> > [ 2930.157940] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to
> > clear memory with ring turned off.
> > [ 2930.180535] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to
> > clear memory with ring turned off.
> > [ 2933.781692] IPv6: ADDRCONF(NETDEV_CHANGE): wlp2s0: link becomes
> ready
> > [ 2945.477205] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
> > amdgpu_cs_list_validate(validated) failed.
> > [ 2945.477348] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to
> > process the buffer list -16!
> >
> > System details:
> > HP Envy x360 Ryzen 2500U
> > ArchLinux, kernel 4.16rc6 and 4.15.12
> >
> > Cheers,
> > bráulio
> >
> >
> >
> >
>
>
> --
> Lerne, wie die Welt wirklich ist,
> Aber vergiss niemals, wie sie sein sollte.


Re: Which branch to test latest support for Raven Ridge?

2018-04-18 Thread Bráulio Bhavamitra
FYI, getting a blank screen and system freeze when lightdm is loaded with
both the drm-next-4.18-wip and amd-staging-drm-next branches.

On Fri, Apr 13, 2018 at 10:05 PM Bráulio Bhavamitra 
wrote:

> Hi all,
>
> I've been testing and mitigating the Raven Ridge crashes. I've just
> compiled the kernel https://cgit.freedesktop.org/~agd5f/linux/ at branch
> drm-next-4.18-wip to see if it would have the necessary fix for video
> hangs/freezes. Unfortunetely, it still crashes in the same situations.
>
> Which branch should I compile for the latest Raven Ridge support?
>
> Cheers,
> Bráulio
>
> -- Forwarded message -
> From: Bráulio Bhavamitra 
> Date: Fri, Apr 13, 2018 at 7:24 PM
> Subject: Re: Raven Ridge Ryzen 2500U hang reproduced
> To: 
>
>
> It ALWAYS crashes on shader15 of
> http://www.graphicsfuzz.com/benchmark/android-v1.html.
>
> Also reported at https://bugzilla.redhat.com/show_bug.cgi?id=1562530
>
> Using kernel 4.16 with options rcu_nocb=0-15 and amdgpu.dpm=0
>
> Cheers,
> Bráulio
>
>
> On Mon, Mar 26, 2018 at 8:30 PM Bráulio Bhavamitra 
> wrote:
>
>> Hi all,
>>
>> Following the random crashes happening with many users (e.g.
>> https://www.phoronix.com/scan.php?page=news_item=Raven-Ridge-March-Update),
>> not only on Linux but also Windows, I've been struggling to reproduce and
>> generate any error log.
>>
>> After discovering that the error only happened with KDE and games (at
>> least for me, see https://bugs.kde.org/show_bug.cgi?id=392378), I could
>> reproduce after a failing suspend.
>>
>> The crash most of the time allows the mouse to keep moving, but nothing
>> else works. Except this time the keyboard worked so I could switch the
>> tty and save the dmesg messages. After this I had to force reboot as it got
>> stuck trying to kill the lightdm service (gpu hanged?).
>>
>> The errors are, see attached the full dmesg:
>> [ 2899.525650] amdgpu :03:00.0: couldn't schedule ib on ring 
>> [ 2899.525769] [drm:amdgpu_job_run [amdgpu]] *ERROR* Error scheduling IBs
>> (-22)
>> [ 2909.125047] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx
>> timeout, last signaled seq=174624, last emitted seq=174627
>> [ 2909.125060] [drm] IP block:psp is hung!
>> [ 2909.125063] [drm] GPU recovery disabled.
>> [ 2914.756931] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
>> amdgpu_cs_list_validate(validated) failed.
>> [ 2914.756997] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to process
>> the buffer list -16!
>> [ 2914.997372] amdgpu :03:00.0: couldn't schedule ib on ring 
>> [ 2914.997498] [drm:amdgpu_job_run [amdgpu]] *ERROR* Error scheduling IBs
>> (-22)
>> [ 2930.117275] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
>> amdgpu_cs_list_validate(validated) failed.
>> [ 2930.117405] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to process
>> the buffer list -16!
>> [ 2930.152015] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear
>> memory with ring turned off.
>> [ 2930.157940] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear
>> memory with ring turned off.
>> [ 2930.180535] [drm:amdgpu_fill_buffer [amdgpu]] *ERROR* Trying to clear
>> memory with ring turned off.
>> [ 2933.781692] IPv6: ADDRCONF(NETDEV_CHANGE): wlp2s0: link becomes ready
>> [ 2945.477205] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR*
>> amdgpu_cs_list_validate(validated) failed.
>> [ 2945.477348] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to process
>> the buffer list -16!
>>
>> System details:
>> HP Envy x360 Ryzen 2500U
>> ArchLinux, kernel 4.16rc6 and 4.15.12
>>
>> Cheers,
>> bráulio
>>
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/display: Disallow enabling CRTC without primary plane with FB

2018-04-18 Thread Harry Wentland
The below commit

"drm/atomic: Try to preserve the crtc enabled state in 
drm_atomic_remove_fb, v2"

introduces a slight behavioral change to rmfb. Instead of disabling a crtc
when the primary plane is disabled, it now preserves it.

Since DC is currently not equipped to handle this we need to fail such
a commit, otherwise we might see a corrupted screen.

This is based on Shirish's previous approach but avoids adding all
planes to the new atomic state which leads to a full update in DC for
any commit, and is not what we intend.

Theoretically DM should be able to deal with states with fully populated planes,
even for simple updates, such as cursor updates. This should still be
addressed in the future.

Signed-off-by: Harry Wentland 
Cc: sta...@vger.kernel.org
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 6f92a19bebd6..0bdc6b484bad 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4683,6 +4683,7 @@ static int dm_update_crtcs_state(struct 
amdgpu_display_manager *dm,
struct amdgpu_dm_connector *aconnector = NULL;
struct drm_connector_state *new_con_state = NULL;
struct dm_connector_state *dm_conn_state = NULL;
+   struct drm_plane_state *new_plane_state = NULL;
 
new_stream = NULL;
 
@@ -4690,6 +4691,13 @@ static int dm_update_crtcs_state(struct 
amdgpu_display_manager *dm,
dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
acrtc = to_amdgpu_crtc(crtc);
 
+   new_plane_state = drm_atomic_get_new_plane_state(state, 
new_crtc_state->crtc->primary);
+
+   if (new_crtc_state->enable && new_plane_state && 
!new_plane_state->fb) {
+   ret = -EINVAL;
+   goto fail;
+   }
+
aconnector = 
amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
 
/* TODO This hack should go away */
@@ -4894,7 +4902,7 @@ static int dm_update_planes_state(struct dc *dc,
if (!dm_old_crtc_state->stream)
continue;
 
-   DRM_DEBUG_DRIVER("Disabling DRM plane: %d on DRM crtc 
%d\n",
+   DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc 
%d\n",
plane->base.id, 
old_plane_crtc->base.id);
 
if (!dc_remove_plane_from_context(
-- 
2.17.0



[PATCH umr] Allow specifying a zero size for --vm-disasm to have it size the shader automatically

2018-04-18 Thread Tom St Denis
Signed-off-by: Tom St Denis 
---
 doc/umr.1  | 3 ++-
 src/app/main.c | 9 -
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/doc/umr.1 b/doc/umr.1
index 83706b887894..f1f5fec55946 100644
--- a/doc/umr.1
+++ b/doc/umr.1
@@ -115,7 +115,8 @@ Write 'size' bytes (in hex) to the address specified (in 
hexadecimal) to VRAM
 from stdin.
 
 .IP "--vm-disasm, -vdis [@] "
-Disassemble 'size' bytes (in hex) from a given address (in hex).
+Disassemble 'size' bytes (in hex) from a given address (in hex).  The size can 
be
+specified as zero to have umr try and compute the shader size.
 
 .IP "--update, -u" 
 Specify update file to add, change, or delete registers from the register
diff --git a/src/app/main.c b/src/app/main.c
index 81ebc4c5bf42..600f3ca02988 100644
--- a/src/app/main.c
+++ b/src/app/main.c
@@ -489,6 +489,12 @@ int main(int argc, char **argv)
vmid |= UMR_USER_HUB;
 
sscanf(argv[i+2], "%"SCNx32, );
+   if (!size) {
+   struct umr_shaders_pgm shader;
+   shader.vmid = vmid;
+   shader.addr = address;
+   size = umr_compute_shader_size(asic, 
);
+   }
umr_vm_disasm(asic, vmid, address, 0, size);
 
i += 2;
@@ -573,7 +579,8 @@ int main(int argc, char **argv)
 "\n\t--vm-write, -vw [@] "
"\n\t\tWrite 'size' bytes (in hex) to a given address (in hex) from 
stdin.\n"
 "\n\t--vm-disasm, -vdis [@] "
-   "\n\t\tDisassemble 'size' bytes (in hex) from a given address (in 
hex).\n"
+   "\n\t\tDisassemble 'size' bytes (in hex) from a given address (in hex). 
 The size can"
+   "\n\t\tbe specified as zero to have umr try and compute the shader 
size.\n"
 "\n\t--option -O [,,...]\n\t\tEnable various flags: bits, 
bitsfull, empty_log, follow, no_follow_ib, named, many,"
"\n\t\tuse_pci, use_colour, read_smc, quiet, no_kernel, verbose, 
halt_waves, disasm_early_term.\n"
 "\n\n", UMR_BUILD_VER, UMR_BUILD_REV);
-- 
2.14.3



Re: [PATCH] drm: Print unadorned pointers

2018-04-18 Thread Greg Kroah-Hartman
On Wed, Apr 18, 2018 at 12:24:50PM +0300, Alexey Brodkin wrote:
> After commit ad67b74 ("printk: hash addresses printed with %p")
> pointers are being hashed when printed. However, this makes
> debug output completely useless. Switch to %px in order to see the
> unadorned kernel pointers.
> 
> This was done with the following one-liner:
>  find drivers/gpu/drm -type f -name "*.c" -exec sed -r -i 
> '/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g' {} +
> 
> Signed-off-by: Alexey Brodkin 
> Cc: Borislav Petkov 
> Cc: Tobin C. Harding 
> Cc: Alex Deucher 
> Cc: Andrey Grodzovsky 
> Cc: Arnd Bergmann 
> Cc: Benjamin Gaignard 
> Cc: Chen-Yu Tsai 
> Cc: Christian Gmeiner 
> Cc: "Christian König" 
> Cc: Cihangir Akturk 
> Cc: CK Hu 
> Cc: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: David Airlie 
> Cc: "David (ChunMing) Zhou" 
> Cc: Gerd Hoffmann 
> Cc: Greg Kroah-Hartman 
> Cc: Gustavo Padovan 
> Cc: Harry Wentland 
> Cc: "Heiko Stübner" 
> Cc: Ingo Molnar 
> Cc: Jani Nikula 
> Cc: "Jerry (Fangzhi) Zuo" 
> Cc: Joonas Lahtinen 
> Cc: Krzysztof Kozlowski 
> Cc: "Leo (Sunpeng) Li" 
> Cc: Lucas Stach 
> Cc: Maarten Lankhorst 
> Cc: Matthias Brugger 
> Cc: Maxime Ripard 
> Cc: "Michel Dänzer" 
> Cc: Oded Gabbay 
> Cc: Philipp Zabel 
> Cc: Rob Clark 
> Cc: Rodrigo Vivi 
> Cc: Roger He 
> Cc: Roman Li 
> Cc: Russell King 
> Cc: Samuel Li 
> Cc: Sandy Huang 
> Cc: Sean Paul 
> Cc: Shirish S 
> Cc: Sinclair Yeh 
> Cc: Thomas Hellstrom 
> Cc: Tom Lendacky 
> Cc: Tony Cheng 
> Cc: Vincent Abriou 
> Cc: VMware Graphics 
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-arm-...@vger.kernel.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-media...@lists.infradead.org
> Cc: linux-rockc...@lists.infradead.org
> Cc: etna...@lists.freedesktop.org
> Cc: freedr...@lists.freedesktop.org
> Cc: amd-gfx@lists.freedesktop.org
> Cc: intel-...@lists.freedesktop.org
> Cc: virtualizat...@lists.linux-foundation.org
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 14 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c|  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_device.c| 10 ++---
>  drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c  |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_events.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c  |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 18 -
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 14 +++
>  .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  2 +-
>  drivers/gpu/drm/armada/armada_gem.c| 12 +++---
>  drivers/gpu/drm/drm_atomic.c   | 44 
> +++---
>  drivers/gpu/drm/drm_bufs.c |  8 ++--
>  drivers/gpu/drm/drm_dp_mst_topology.c  |  4 +-
>  drivers/gpu/drm/drm_lease.c|  6 +--
>  drivers/gpu/drm/drm_lock.c |  2 +-
>  drivers/gpu/drm/drm_scatter.c  |  4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_drv.c  |  6 +--
>  drivers/gpu/drm/i810/i810_dma.c|  2 +-
>  drivers/gpu/drm/i915/i915_perf.c   |  2 +-
>  drivers/gpu/drm/i915/intel_display.c   |  2 +-
>  drivers/gpu/drm/i915/intel_guc_ct.c|  4 +-
>  drivers/gpu/drm/i915/intel_guc_submission.c|  2 +-
>  drivers/gpu/drm/i915/intel_uc_fw.c |  2 +-
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c |  2 +-
>  drivers/gpu/drm/mga/mga_warp.c |  2 +-
>  drivers/gpu/drm/msm/msm_drv.c  |  4 +-
>  drivers/gpu/drm/qxl/qxl_cmd.c  |  4 +-
>  drivers/gpu/drm/qxl/qxl_fb.c   |  2 +-
>  

Re: [PATCH 1/1] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Samuel Li


On 2018-04-18 12:16 PM, Christian König wrote:
> Am 18.04.2018 um 17:29 schrieb Samuel Li:
>>
>> On 2018-04-18 12:14 AM, Alex Deucher wrote:
>>> On Tue, Apr 17, 2018 at 8:40 PM, Samuel Li  wrote:
 It's auto by default. For CZ/ST, auto setting enables sg display
 when vram size is small; otherwise still uses vram.
 This patch fixed some potential hang issue introduced by change
 "allow framebuffer in GART memory as well" due to CZ/ST hardware
 limitation.

>>>
>> OK.
>>

[...]

> 
> Mhm, for developer testing we can easily modify 
> amdgpu_display_supported_domains().
> 
> The real question is should we give an end user the ability to modify the 
> behavior? I currently can't think of a reason for that.
Yes, we do. For example, there are cases where the user prefers GTT (VRAM can run 
out), and also cases where the user prefers VRAM for performance.

Regards,
Sam


> 
> Regards,
> Christian.
> 
>>
>> Regards,
>> Samuel Li
>>
>>> Alex
>>>
 +   }
 +#endif

  if (bo->pin_count) {
  uint32_t mem_type = bo->tbo.mem.mem_type;
 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
 b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
 index 4b584cb7..cf0749f 100644
 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
 +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
 @@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
 *dma_buf,
  struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
  struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
  struct ttm_operation_ctx ctx = { true, false };
 -   u32 domain = amdgpu_display_framebuffer_domains(adev);
 +   u32 domain = amdgpu_display_supported_domains(adev);
  int ret;
  bool reads = (direction == DMA_BIDIRECTIONAL ||
    direction == DMA_FROM_DEVICE);
 diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
 b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
 index 6f92a19..1f5603a 100644
 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
 +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
 @@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct 
 drm_plane *plane,
  return r;

  if (plane->type != DRM_PLANE_TYPE_CURSOR)
 -   domain = amdgpu_display_framebuffer_domains(adev);
 +   domain = amdgpu_display_supported_domains(adev);
  else
  domain = AMDGPU_GEM_DOMAIN_VRAM;

  r = amdgpu_bo_pin(rbo, domain, >address);
 -
  amdgpu_bo_unreserve(rbo);

  if (unlikely(r != 0)) {
 -- 
 2.7.4

> 


Re: [PATCH 1/1] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Christian König

Am 18.04.2018 um 17:29 schrieb Samuel Li:


On 2018-04-18 12:14 AM, Alex Deucher wrote:

On Tue, Apr 17, 2018 at 8:40 PM, Samuel Li  wrote:

It's auto by default. For CZ/ST, auto setting enables sg display
when vram size is small; otherwise still uses vram.
This patch fixed some potential hang issue introduced by change
"allow framebuffer in GART memory as well" due to CZ/ST hardware
limitation.

v2: Change default setting to auto.
v3: Move some logic from amdgpu_display_framebuffer_domains()
 to pin function, suggested by Christian.
Signed-off-by: Samuel Li 
[...]
@@ -484,7 +484,7 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = 
{
 .create_handle = drm_gem_fb_create_handle,
  };

-uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev)
+uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev)

This change should be a separate patch,


OK.

[...]


diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 24f582c..f0f1f8a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -689,8 +689,29 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
domain,
 return -EINVAL;

 /* A shared bo cannot be migrated to VRAM */
-   if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
-   return -EINVAL;
+   if (bo->prime_shared_count) {
+   if (domain & AMDGPU_GEM_DOMAIN_GTT)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   else
+   return -EINVAL;
+   }

This is a bug fix and should be split out into a separate patch.


OK.


+
+   /* display buffer */
+#if defined(CONFIG_DRM_AMD_DC)
+   if (adev->asic_type >= CHIP_CARRIZO && adev->asic_type < CHIP_RAVEN &&
+   adev->flags & AMD_IS_APU &&
+   amdgpu_device_asic_has_dc_support(adev->asic_type) &&
+   domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
+   if (amdgpu_sg_display == 1)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   else if (amdgpu_sg_display == -1) {
+   if (adev->gmc.real_vram_size < AMDGPU_SG_THRESHOLD)
+   domain = AMDGPU_GEM_DOMAIN_GTT;
+   else
+   domain = AMDGPU_GEM_DOMAIN_VRAM;
+   }

I thought we were dropping the module parameter.  Also, we had talked
about taking preferred domains into account here as well, but that can
be a follow on patch.

As per the documents, the SG display feature can affect other features. The option 
has been used for debugging on Windows by development, testing and support 
teams, so I prefer to keep it.


Mhm, for developer testing we can easily modify 
amdgpu_display_supported_domains().


The real question is should we give an end user the ability to modify 
the behavior? I currently can't think of a reason for that.


Regards,
Christian.



Regards,
Samuel Li


Alex


+   }
+#endif

 if (bo->pin_count) {
 uint32_t mem_type = bo->tbo.mem.mem_type;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 4b584cb7..cf0749f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
*dma_buf,
 struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 struct ttm_operation_ctx ctx = { true, false };
-   u32 domain = amdgpu_display_framebuffer_domains(adev);
+   u32 domain = amdgpu_display_supported_domains(adev);
 int ret;
 bool reads = (direction == DMA_BIDIRECTIONAL ||
   direction == DMA_FROM_DEVICE);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 6f92a19..1f5603a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct drm_plane 
*plane,
 return r;

 if (plane->type != DRM_PLANE_TYPE_CURSOR)
-   domain = amdgpu_display_framebuffer_domains(adev);
+   domain = amdgpu_display_supported_domains(adev);
 else
 domain = AMDGPU_GEM_DOMAIN_VRAM;

 r = amdgpu_bo_pin(rbo, domain, >address);
-
 amdgpu_bo_unreserve(rbo);

 if (unlikely(r != 0)) {
--
2.7.4


Re: [PATCH 1/1] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Samuel Li


On 2018-04-18 04:34 AM, Christian König wrote:
> Am 18.04.2018 um 06:14 schrieb Alex Deucher:
>> On Tue, Apr 17, 2018 at 8:40 PM, Samuel Li  wrote:
>>> It's auto by default. For CZ/ST, auto setting enables sg display
>>> when vram size is small; otherwise still uses vram.
>>> This patch fixed some potential hang issue introduced by change
>>> "allow framebuffer in GART memory as well" due to CZ/ST hardware
>>> limitation.
>>>
>>> v2: Change default setting to auto.
>>> v3: Move some logic from amdgpu_display_framebuffer_domains()
>>>  to pin function, suggested by Christian.
>>> Signed-off-by: Samuel Li 
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  2 ++
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  4 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_display.h   |  2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c    |  2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    | 25 
>>> +--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  2 +-
>>>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  3 +--
>>>   8 files changed, 35 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h

[...]

>> This is a bug fix and should be split out into a separate patch.
>>
>>> +
>>> +   /* display buffer */
>>> +#if defined(CONFIG_DRM_AMD_DC)
> 
> Please drop that #if, we certainly don't want this to depend on any define.
OK.


> 
>>> +   if (adev->asic_type >= CHIP_CARRIZO && adev->asic_type < CHIP_RAVEN 
>>> &&
>>> +   adev->flags & AMD_IS_APU &&
>>> +   amdgpu_device_asic_has_dc_support(adev->asic_type) &&
> 
> Those checks are static and don't depend on the BO. So they should be in 
> amdgpu_display_supported_domains().
OK. That is going to be risky for the future though.


Regards,
Samuel Li

> 
> Regards,
> Christian.
> 
>>> +   domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
>>> +   if (amdgpu_sg_display == 1)
>>> +   domain = AMDGPU_GEM_DOMAIN_GTT;
>>> +   else if (amdgpu_sg_display == -1) {
>>> +   if (adev->gmc.real_vram_size < AMDGPU_SG_THRESHOLD)
>>> +   domain = AMDGPU_GEM_DOMAIN_GTT;
>>> +   else
>>> +   domain = AMDGPU_GEM_DOMAIN_VRAM;
>>> +   }
>> I thought we were dropping the module parameter.  Also, we had talked
>> about taking preferred domains into account here as well, but that can
>> be a follow on patch.
>>
>> Alex
>>
>>> +   }
>>> +#endif
>>>
>>>  if (bo->pin_count) {
>>>  uint32_t mem_type = bo->tbo.mem.mem_type;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>> index 4b584cb7..cf0749f 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>> @@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
>>> *dma_buf,
>>>  struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
>>>  struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>>  struct ttm_operation_ctx ctx = { true, false };
>>> -   u32 domain = amdgpu_display_framebuffer_domains(adev);
>>> +   u32 domain = amdgpu_display_supported_domains(adev);
>>>  int ret;
>>>  bool reads = (direction == DMA_BIDIRECTIONAL ||
>>>    direction == DMA_FROM_DEVICE);
>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
>>> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> index 6f92a19..1f5603a 100644
>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>>> @@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct 
>>> drm_plane *plane,
>>>  return r;
>>>
>>>  if (plane->type != DRM_PLANE_TYPE_CURSOR)
>>> -   domain = amdgpu_display_framebuffer_domains(adev);
>>> +   domain = amdgpu_display_supported_domains(adev);
>>>  else
>>>  domain = AMDGPU_GEM_DOMAIN_VRAM;
>>>
>>>  r = amdgpu_bo_pin(rbo, domain, >address);
>>> -
>>>  amdgpu_bo_unreserve(rbo);
>>>
>>>  if (unlikely(r != 0)) {
>>> -- 
>>> 2.7.4
>>>

Re: [PATCH] drm: Print unadorned pointers

2018-04-18 Thread Felix Kuehling
On 2018-04-18 05:24 AM, Alexey Brodkin wrote:
> After commit ad67b74 ("printk: hash addresses printed with %p")
> pointers are being hashed when printed. However, this makes
> debug output completely useless. Switch to %px in order to see the
> unadorned kernel pointers.
My understanding of the printk pointer hashing change was to force
people to think more carefully when they really need to print a kernel
pointer. When it is only used to identify an object, then a hash works
just fine. Most of the changes I see in amdgpu/amdkfd fall into this
category.

As I see it, changing all %p to %px by a script takes the thought
process out of it and subverts the intention of the original pointer
hashing change.

Regards,
  Felix


>
> This was done with the following one-liner:
>  find drivers/gpu/drm -type f -name "*.c" -exec sed -r -i 
> '/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g' {} +
>
> Signed-off-by: Alexey Brodkin 
> Cc: Borislav Petkov 
> Cc: Tobin C. Harding 
> Cc: Alex Deucher 
> Cc: Andrey Grodzovsky 
> Cc: Arnd Bergmann 
> Cc: Benjamin Gaignard 
> Cc: Chen-Yu Tsai 
> Cc: Christian Gmeiner 
> Cc: "Christian König" 
> Cc: Cihangir Akturk 
> Cc: CK Hu 
> Cc: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: David Airlie 
> Cc: "David (ChunMing) Zhou" 
> Cc: Gerd Hoffmann 
> Cc: Greg Kroah-Hartman 
> Cc: Gustavo Padovan 
> Cc: Harry Wentland 
> Cc: "Heiko Stübner" 
> Cc: Ingo Molnar 
> Cc: Jani Nikula 
> Cc: "Jerry (Fangzhi) Zuo" 
> Cc: Joonas Lahtinen 
> Cc: Krzysztof Kozlowski 
> Cc: "Leo (Sunpeng) Li" 
> Cc: Lucas Stach 
> Cc: Maarten Lankhorst 
> Cc: Matthias Brugger 
> Cc: Maxime Ripard 
> Cc: "Michel Dänzer" 
> Cc: Oded Gabbay 
> Cc: Philipp Zabel 
> Cc: Rob Clark 
> Cc: Rodrigo Vivi 
> Cc: Roger He 
> Cc: Roman Li 
> Cc: Russell King 
> Cc: Samuel Li 
> Cc: Sandy Huang 
> Cc: Sean Paul 
> Cc: Shirish S 
> Cc: Sinclair Yeh 
> Cc: Thomas Hellstrom 
> Cc: Tom Lendacky 
> Cc: Tony Cheng 
> Cc: Vincent Abriou 
> Cc: VMware Graphics 
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-arm-...@vger.kernel.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-media...@lists.infradead.org
> Cc: linux-rockc...@lists.infradead.org
> Cc: etna...@lists.freedesktop.org
> Cc: freedr...@lists.freedesktop.org
> Cc: amd-gfx@lists.freedesktop.org
> Cc: intel-...@lists.freedesktop.org
> Cc: virtualizat...@lists.linux-foundation.org
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 14 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c|  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_device.c| 10 ++---
>  drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c  |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_events.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c  |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 18 -
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 14 +++
>  .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  2 +-
>  drivers/gpu/drm/armada/armada_gem.c| 12 +++---
>  drivers/gpu/drm/drm_atomic.c   | 44 
> +++---
>  drivers/gpu/drm/drm_bufs.c |  8 ++--
>  drivers/gpu/drm/drm_dp_mst_topology.c  |  4 +-
>  drivers/gpu/drm/drm_lease.c|  6 +--
>  drivers/gpu/drm/drm_lock.c |  2 +-
>  drivers/gpu/drm/drm_scatter.c  |  4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_drv.c  |  6 +--
>  drivers/gpu/drm/i810/i810_dma.c|  2 +-
>  drivers/gpu/drm/i915/i915_perf.c   |  2 +-
>  drivers/gpu/drm/i915/intel_display.c   |  2 +-
>  drivers/gpu/drm/i915/intel_guc_ct.c|  4 

Re: [PATCH 1/1] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Samuel Li


On 2018-04-18 12:14 AM, Alex Deucher wrote:
> On Tue, Apr 17, 2018 at 8:40 PM, Samuel Li  wrote:
>> It's auto by default. For CZ/ST, auto setting enables sg display
>> when vram size is small; otherwise still uses vram.
>> This patch fixed some potential hang issue introduced by change
>> "allow framebuffer in GART memory as well" due to CZ/ST hardware
>> limitation.
>>
>> v2: Change default setting to auto.
>> v3: Move some logic from amdgpu_display_framebuffer_domains()
>> to pin function, suggested by Christian.
>> Signed-off-by: Samuel Li 
>> [...]
>> @@ -484,7 +484,7 @@ static const struct drm_framebuffer_funcs 
>> amdgpu_fb_funcs = {
>> .create_handle = drm_gem_fb_create_handle,
>>  };
>>
>> -uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev)
>> +uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev)
> 
> This change should be a separate patch,
> 
OK.

[...]

>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> index 24f582c..f0f1f8a 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> @@ -689,8 +689,29 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 
>> domain,
>> return -EINVAL;
>>
>> /* A shared bo cannot be migrated to VRAM */
>> -   if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
>> -   return -EINVAL;
>> +   if (bo->prime_shared_count) {
>> +   if (domain & AMDGPU_GEM_DOMAIN_GTT)
>> +   domain = AMDGPU_GEM_DOMAIN_GTT;
>> +   else
>> +   return -EINVAL;
>> +   }
> 
> This is a bug fix and should be split out into a separate patch.
> 
OK.

>> +
>> +   /* display buffer */
>> +#if defined(CONFIG_DRM_AMD_DC)
>> +   if (adev->asic_type >= CHIP_CARRIZO && adev->asic_type < CHIP_RAVEN 
>> &&
>> +   adev->flags & AMD_IS_APU &&
>> +   amdgpu_device_asic_has_dc_support(adev->asic_type) &&
>> +   domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
>> +   if (amdgpu_sg_display == 1)
>> +   domain = AMDGPU_GEM_DOMAIN_GTT;
>> +   else if (amdgpu_sg_display == -1) {
>> +   if (adev->gmc.real_vram_size < AMDGPU_SG_THRESHOLD)
>> +   domain = AMDGPU_GEM_DOMAIN_GTT;
>> +   else
>> +   domain = AMDGPU_GEM_DOMAIN_VRAM;
>> +   }
> 
> I thought we were dropping the module parameter.  Also, we had talked
> about taking preferred domains into account here as well, but that can
> be a follow on patch.
As per the documents, the SG display feature can affect other features. The option 
has been used for debugging on Windows by development, testing and support 
teams, so I prefer to keep it.

Regards,
Samuel Li

> 
> Alex
> 
>> +   }
>> +#endif
>>
>> if (bo->pin_count) {
>> uint32_t mem_type = bo->tbo.mem.mem_type;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>> index 4b584cb7..cf0749f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>> @@ -209,7 +209,7 @@ static int amdgpu_gem_begin_cpu_access(struct dma_buf 
>> *dma_buf,
>> struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
>> struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>> struct ttm_operation_ctx ctx = { true, false };
>> -   u32 domain = amdgpu_display_framebuffer_domains(adev);
>> +   u32 domain = amdgpu_display_supported_domains(adev);
>> int ret;
>> bool reads = (direction == DMA_BIDIRECTIONAL ||
>>   direction == DMA_FROM_DEVICE);
>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
>> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> index 6f92a19..1f5603a 100644
>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> @@ -3109,12 +3109,11 @@ static int dm_plane_helper_prepare_fb(struct 
>> drm_plane *plane,
>> return r;
>>
>> if (plane->type != DRM_PLANE_TYPE_CURSOR)
>> -   domain = amdgpu_display_framebuffer_domains(adev);
>> +   domain = amdgpu_display_supported_domains(adev);
>> else
>> domain = AMDGPU_GEM_DOMAIN_VRAM;
>>
>> r = amdgpu_bo_pin(rbo, domain, >address);
>> -
>> amdgpu_bo_unreserve(rbo);
>>
>> if (unlikely(r != 0)) {
>> --
>> 2.7.4
>>

Re: PROBLEM: linux-firmware provided firmware files do not support AMDVLK driver

2018-04-18 Thread Andrew Stone
Thanks. I wasn't aware of that issue. The updated firmware seems to work
fine with Radeon drivers on my machine. (After updating the firmware, I
booted with radeon.si_support=1 radeon.cik_support=1 amdgpu.si_support=0
amdgpu.cik_support=0) though I've messed with my system a lot to get AMDVLK
working so I'm not sure it's really using radeon. It should be. And the GL
version is 1.4 now. I obviously can't speak for other cards and I haven't
done extensive testing by any means, but it seems to work.

On Tue, Apr 17, 2018, 21:58 Alex Deucher  wrote:

> On Tue, Apr 17, 2018 at 11:16 PM, boomboom psh 
> wrote:
> > [1.] linux-firmware provided firmware files do not support AMDVLK driver
> > [2.] Full description of the problem/report: Vulkan instance fails to
> load
> > on the AMDVLK driver, using a radeon HD7770 card. It throws a
> > VK_ERROR_OUT_OF_HOST_MEMORY. This appears to be due to an outdated
> > firmware, as replacing the firmware with the firmware provided by the
> > amdgpu-pro driver fixes the issue (
> > https://github.com/GPUOpen-Drivers/AMDVLK/issues/17). The issue appears
> to
> > also be present on pitcairn cards. (
> > https://github.com/GPUOpen-Drivers/AMDVLK/issues/25)
> > [3.] firmware, AMDVLK, vulkan, SI, verde, pitcairn
> > [4.1.] Linux version 4.15.15-1-ARCH (builduser@heftig-4572) (gcc version
> > 7.3.1 20180312 (GCC)) #1 SMP PREEMPT Sat Mar 31 23:59:25 UTC 2018
> > [7.] vulkaninfo from
> > https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers
> > demonstrates the problem
> > [8.] AMD Ryzen 3 1200, 8GB DDR4, Radeon HD7770 1GB
> > [X.] Workaround: copy firmware from amdgpupro driver, however this must
> be
> > redone every time there is an update to linux-firmware.
>
> The newer SI firmware only works with amdgpu right now.  At the moment
> the upstream SI firmware is shared with radeon.  We'd need to use
> separate firmwares for radeon and amdgpu for SI.  No one has gotten
> around to switching it yet.
>
> Alex
>


Re: [PATCH] drm/amdgpu: print DMA-buf status in debugfs

2018-04-18 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 


From: amd-gfx  on behalf of Christian 
König 
Sent: Wednesday, April 18, 2018 6:13:09 AM
To: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: print DMA-buf status in debugfs

Ping? Could anybody review that?

Thanks,
Christian.

Am 11.04.2018 um 15:09 schrieb Christian König:
> Just note if a BO was imported/exported.
>
> Signed-off-by: Christian König 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 11 +++
>   1 file changed, 11 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 28c2706e48d7..93d3f333444b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -765,6 +765,8 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, 
> void *data)
>struct amdgpu_bo *bo = gem_to_amdgpu_bo(gobj);
>struct seq_file *m = data;
>
> + struct dma_buf_attachment *attachment;
> + struct dma_buf *dma_buf;
>unsigned domain;
>const char *placement;
>unsigned pin_count;
> @@ -793,6 +795,15 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, 
> void *data)
>pin_count = READ_ONCE(bo->pin_count);
>if (pin_count)
>seq_printf(m, " pin count %d", pin_count);
> +
> + dma_buf = READ_ONCE(bo->gem_base.dma_buf);
> + attachment = READ_ONCE(bo->gem_base.import_attach);
> +
> + if (attachment)
> + seq_printf(m, " imported from %p", dma_buf);
> + else if (dma_buf)
> + seq_printf(m, " exported as %p", dma_buf);
> +
>seq_printf(m, "\n");
>
>return 0;



Re: [PATCH v2] drm/amd/amdgpu: passing i2s instance value as platform data

2018-04-18 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 


From: Vijendar Mukunda 
Sent: Wednesday, April 18, 2018 4:56:32 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander; Agrawal, Akshu; Mukunda, Vijendar
Subject: [PATCH v2] drm/amd/amdgpu: passing i2s instance value as platform data

The i2s instance value is passed as platform data to the dwc driver.
This parameter is useful for distinguishing the current i2s instance
when multiple i2s controller instances are created.

Signed-off-by: Vijendar Mukunda 
---
v1->v2: moved I2S instance macros from dwc driver header file
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
index 6cca4d1..c8c7583 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
@@ -83,6 +83,8 @@
 #define ACP_TIMEOUT_LOOP 0x000000FF
 #define ACP_DEVS 4
 #define ACP_SRC_ID 162
+#define I2S_SP_INSTANCE 0x01
+#define I2S_BT_INSTANCE 0x02

 enum {
 ACP_TILE_P1 = 0,
@@ -347,6 +349,7 @@ static int acp_hw_init(void *handle)
 i2s_pdata[0].snd_rates = SNDRV_PCM_RATE_8000_96000;
 i2s_pdata[0].i2s_reg_comp1 = ACP_I2S_COMP1_PLAY_REG_OFFSET;
 i2s_pdata[0].i2s_reg_comp2 = ACP_I2S_COMP2_PLAY_REG_OFFSET;
+   i2s_pdata[0].i2s_instance = I2S_SP_INSTANCE;
 switch (adev->asic_type) {
 case CHIP_STONEY:
 i2s_pdata[1].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
@@ -362,6 +365,7 @@ static int acp_hw_init(void *handle)
 i2s_pdata[1].snd_rates = SNDRV_PCM_RATE_8000_96000;
 i2s_pdata[1].i2s_reg_comp1 = ACP_I2S_COMP1_CAP_REG_OFFSET;
 i2s_pdata[1].i2s_reg_comp2 = ACP_I2S_COMP2_CAP_REG_OFFSET;
+   i2s_pdata[1].i2s_instance = I2S_SP_INSTANCE;

 i2s_pdata[2].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET;
 switch (adev->asic_type) {
@@ -376,6 +380,7 @@ static int acp_hw_init(void *handle)
 i2s_pdata[2].snd_rates = SNDRV_PCM_RATE_8000_96000;
 i2s_pdata[2].i2s_reg_comp1 = ACP_BT_COMP1_REG_OFFSET;
 i2s_pdata[2].i2s_reg_comp2 = ACP_BT_COMP2_REG_OFFSET;
+   i2s_pdata[2].i2s_instance = I2S_BT_INSTANCE;

 adev->acp.acp_res[0].name = "acp2x_dma";
 adev->acp.acp_res[0].flags = IORESOURCE_MEM;
--
2.7.4



[PATCH] drm: Print unadorned pointers

2018-04-18 Thread Alexey Brodkin
After commit ad67b74 ("printk: hash addresses printed with %p")
pointers are being hashed when printed. However, this makes
debug output completely useless. Switch to %px in order to see the
unadorned kernel pointers.

This was done with the following one-liner:
 find drivers/gpu/drm -type f -name "*.c" -exec sed -r -i 
'/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g' {} +

Signed-off-by: Alexey Brodkin 
Cc: Borislav Petkov 
Cc: Tobin C. Harding 
Cc: Alex Deucher 
Cc: Andrey Grodzovsky 
Cc: Arnd Bergmann 
Cc: Benjamin Gaignard 
Cc: Chen-Yu Tsai 
Cc: Christian Gmeiner 
Cc: "Christian König" 
Cc: Cihangir Akturk 
Cc: CK Hu 
Cc: Daniel Vetter 
Cc: Dave Airlie 
Cc: David Airlie 
Cc: "David (ChunMing) Zhou" 
Cc: Gerd Hoffmann 
Cc: Greg Kroah-Hartman 
Cc: Gustavo Padovan 
Cc: Harry Wentland 
Cc: "Heiko Stübner" 
Cc: Ingo Molnar 
Cc: Jani Nikula 
Cc: "Jerry (Fangzhi) Zuo" 
Cc: Joonas Lahtinen 
Cc: Krzysztof Kozlowski 
Cc: "Leo (Sunpeng) Li" 
Cc: Lucas Stach 
Cc: Maarten Lankhorst 
Cc: Matthias Brugger 
Cc: Maxime Ripard 
Cc: "Michel Dänzer" 
Cc: Oded Gabbay 
Cc: Philipp Zabel 
Cc: Rob Clark 
Cc: Rodrigo Vivi 
Cc: Roger He 
Cc: Roman Li 
Cc: Russell King 
Cc: Samuel Li 
Cc: Sandy Huang 
Cc: Sean Paul 
Cc: Shirish S 
Cc: Sinclair Yeh 
Cc: Thomas Hellstrom 
Cc: Tom Lendacky 
Cc: Tony Cheng 
Cc: Vincent Abriou 
Cc: VMware Graphics 
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-arm-...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-media...@lists.infradead.org
Cc: linux-rockc...@lists.infradead.org
Cc: etna...@lists.freedesktop.org
Cc: freedr...@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-...@lists.freedesktop.org
Cc: virtualizat...@lists.linux-foundation.org
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 14 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c|  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c   |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c|  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_device.c| 10 ++---
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c  |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c|  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c  |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c   |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 18 -
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 14 +++
 .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  2 +-
 drivers/gpu/drm/armada/armada_gem.c| 12 +++---
 drivers/gpu/drm/drm_atomic.c   | 44 +++---
 drivers/gpu/drm/drm_bufs.c |  8 ++--
 drivers/gpu/drm/drm_dp_mst_topology.c  |  4 +-
 drivers/gpu/drm/drm_lease.c|  6 +--
 drivers/gpu/drm/drm_lock.c |  2 +-
 drivers/gpu/drm/drm_scatter.c  |  4 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.c  |  6 +--
 drivers/gpu/drm/i810/i810_dma.c|  2 +-
 drivers/gpu/drm/i915/i915_perf.c   |  2 +-
 drivers/gpu/drm/i915/intel_display.c   |  2 +-
 drivers/gpu/drm/i915/intel_guc_ct.c|  4 +-
 drivers/gpu/drm/i915/intel_guc_submission.c|  2 +-
 drivers/gpu/drm/i915/intel_uc_fw.c |  2 +-
 drivers/gpu/drm/mediatek/mtk_drm_gem.c |  2 +-
 drivers/gpu/drm/mga/mga_warp.c |  2 +-
 drivers/gpu/drm/msm/msm_drv.c  |  4 +-
 drivers/gpu/drm/qxl/qxl_cmd.c  |  4 +-
 drivers/gpu/drm/qxl/qxl_fb.c   |  2 +-
 drivers/gpu/drm/qxl/qxl_ttm.c  |  2 +-
 drivers/gpu/drm/radeon/radeon_display.c|  2 +-
 drivers/gpu/drm/radeon/radeon_dp_mst.c | 12 +++---
 drivers/gpu/drm/radeon/radeon_object.c |  2 +-
 
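
The sed one-liner above can be sanity-checked on sample lines before running it over the tree. A quick illustration (GNU sed, since `-r` and `\b` are GNU extensions); the sample strings are made up:

```shell
# %p followed by a non-word character is rewritten to %px:
echo 'pr_debug("vaddr %p of bo %p", vaddr, bo);' \
    | sed -r '/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g'

# Extended specifiers such as %pM are left alone, since \b requires a
# word boundary right after the "p":
echo 'DRM_DEBUG("mac %pM ptr %p end", m, p);' \
    | sed -r '/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g'
```

Lines that do not match the address filter (plain printk, dev_err, ...) are not touched at all, which is why the substitution is guarded by the `/DRM_DEBUG|KERN_DEBUG|pr_debug/` address.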

[PATCH] drm/amd/amdgpu: Add missing DCN CMx_TEST_DEBUG bitfields

2018-04-18 Thread Tom St Denis
Only the bitfields for CM0 were initially added.  This adds the rest
for CM1..CM3 so that umr can pick up the bitfields for all of the
debug registers.

Signed-off-by: Tom St Denis 
---
 .../gpu/drm/amd/include/asic_reg/dcn/dcn_1_0_sh_mask.h| 15 +++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_1_0_sh_mask.h 
b/drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_1_0_sh_mask.h
index e7c0cad41081..9d860e4fff0b 100644
--- a/drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_1_0_sh_mask.h
+++ b/drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_1_0_sh_mask.h
@@ -14054,6 +14054,21 @@
 #define CM0_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN__SHIFT 
   0x8
 #define CM0_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX_MASK  
   0x00FFL
 #define CM0_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN_MASK   
   0x0100L
+
+#define CM1_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX__SHIFT
   0x0
+#define CM1_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN__SHIFT 
   0x8
+#define CM1_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX_MASK  
   0x00FFL
+#define CM1_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN_MASK   
   0x0100L
+
+#define CM2_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX__SHIFT
   0x0
+#define CM2_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN__SHIFT 
   0x8
+#define CM2_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX_MASK  
   0x00FFL
+#define CM2_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN_MASK   
   0x0100L
+
+#define CM3_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX__SHIFT
   0x0
+#define CM3_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN__SHIFT 
   0x8
+#define CM3_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_INDEX_MASK  
   0x00FFL
+#define CM3_CM_TEST_DEBUG_INDEX__CM_TEST_DEBUG_WRITE_EN_MASK   
   0x0100L
 //CM0_CM_TEST_DEBUG_DATA
 #define CM0_CM_TEST_DEBUG_DATA__CM_TEST_DEBUG_DATA__SHIFT  
   0x0
 #define CM0_CM_TEST_DEBUG_DATA__CM_TEST_DEBUG_DATA_MASK
   0xFFFFFFFFL
-- 
2.14.3



[PATCH 3/3] drm/amd/pp: Add OVERDRIVE support on Vega10

2018-04-18 Thread Rex Zhu
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 705 +++--
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |  25 +-
 .../gpu/drm/amd/powerplay/inc/hardwaremanager.h|   3 +-
 3 files changed, 376 insertions(+), 357 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index 384aa07..b85fedd 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -285,6 +285,48 @@ static int vega10_set_features_platform_caps(struct 
pp_hwmgr *hwmgr)
return 0;
 }
 
+static int vega10_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
+{
+   struct vega10_hwmgr *data = hwmgr->backend;
+   struct phm_ppt_v2_information *table_info =
+   (struct phm_ppt_v2_information *)(hwmgr->pptable);
+   struct vega10_odn_dpm_table *odn_table = &(data->odn_dpm_table);
+   struct vega10_odn_vddc_lookup_table *od_lookup_table;
+   struct phm_ppt_v1_voltage_lookup_table *vddc_lookup_table;
+   struct phm_ppt_v1_clock_voltage_dependency_table *dep_table[3];
+   struct phm_ppt_v1_clock_voltage_dependency_table *od_table[3];
+   uint32_t i;
+
+   od_lookup_table = &odn_table->vddc_lookup_table;
+   vddc_lookup_table = table_info->vddc_lookup_table;
+
+   for (i = 0; i < vddc_lookup_table->count; i++)
+   od_lookup_table->entries[i].us_vdd = 
vddc_lookup_table->entries[i].us_vdd;
+
+   od_lookup_table->count = vddc_lookup_table->count;
+
+   dep_table[0] = table_info->vdd_dep_on_sclk;
+   dep_table[1] = table_info->vdd_dep_on_mclk;
+   dep_table[2] = table_info->vdd_dep_on_socclk;
+   od_table[0] = (struct phm_ppt_v1_clock_voltage_dependency_table 
*)&odn_table->vdd_dep_on_sclk;
+   od_table[1] = (struct phm_ppt_v1_clock_voltage_dependency_table 
*)&odn_table->vdd_dep_on_mclk;
+   od_table[2] = (struct phm_ppt_v1_clock_voltage_dependency_table 
*)&odn_table->vdd_dep_on_socclk;
+
+   for (i = 0; i < 3; i++)
+   smu_get_voltage_dependency_table_ppt_v1(dep_table[i], 
od_table[i]);
+
+   if (odn_table->max_vddc == 0 || odn_table->max_vddc > 2000)
+   odn_table->max_vddc = dep_table[0]->entries[dep_table[0]->count 
- 1].vddc;
+   if (odn_table->min_vddc == 0 || odn_table->min_vddc > 2000)
+   odn_table->min_vddc = dep_table[0]->entries[0].vddc;
+
+   i = od_table[2]->count -1;
+   od_table[2]->entries[i].clk = 
hwmgr->platform_descriptor.overdriveLimit.memoryClock;
+   od_table[2]->entries[i].vddc = odn_table->max_vddc;
+
+   return 0;
+}
+
 static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
 {
struct vega10_hwmgr *data = hwmgr->backend;
@@ -421,7 +463,6 @@ static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
/* ACG firmware has major version 5 */
if ((hwmgr->smu_version & 0xff00) == 0x500)
data->smu_features[GNLD_ACG].supported = true;
-
if (data->registry_data.didt_support)
data->smu_features[GNLD_DIDT].supported = true;
 
@@ -1360,48 +1401,6 @@ static int vega10_setup_default_dpm_tables(struct 
pp_hwmgr *hwmgr)
memcpy(&(data->golden_dpm_table), &(data->dpm_table),
sizeof(struct vega10_dpm_table));
 
-   if (PP_CAP(PHM_PlatformCaps_ODNinACSupport) ||
-   PP_CAP(PHM_PlatformCaps_ODNinDCSupport)) {
-   data->odn_dpm_table.odn_core_clock_dpm_levels.num_of_pl =
-   data->dpm_table.gfx_table.count;
-   for (i = 0; i < data->dpm_table.gfx_table.count; i++) {
-   
data->odn_dpm_table.odn_core_clock_dpm_levels.entries[i].clock =
-   
data->dpm_table.gfx_table.dpm_levels[i].value;
-   
data->odn_dpm_table.odn_core_clock_dpm_levels.entries[i].enabled = true;
-   }
-
-   data->odn_dpm_table.vdd_dependency_on_sclk.count =
-   dep_gfx_table->count;
-   for (i = 0; i < dep_gfx_table->count; i++) {
-   
data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].clk =
-   dep_gfx_table->entries[i].clk;
-   
data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].vddInd =
-   dep_gfx_table->entries[i].vddInd;
-   
data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].cks_enable =
-   dep_gfx_table->entries[i].cks_enable;
-   
data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].cks_voffset =
-   dep_gfx_table->entries[i].cks_voffset;
-   }
-
-   data->odn_dpm_table.odn_memory_clock_dpm_levels.num_of_pl =
-

[PATCH 2/3] drm/amd/pp: Change voltage/clk range for OD feature on VI

2018-04-18 Thread Rex Zhu
Read the vddc range from the VBIOS.

Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c | 28 
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h |  3 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 56 
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.h |  2 +
 4 files changed, 71 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
index 971fb5d..afd7ecf 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
@@ -1505,3 +1505,31 @@ int atomctrl_get_leakage_vddc_base_on_leakage(struct 
pp_hwmgr *hwmgr,
 
return 0;
 }
+
+void atomctrl_get_voltage_range(struct pp_hwmgr *hwmgr, uint32_t *max_vddc,
+   uint32_t *min_vddc)
+{
+   void *profile;
+
+   profile = smu_atom_get_data_table(hwmgr->adev,
+   GetIndexIntoMasterTable(DATA, 
ASIC_ProfilingInfo),
+   NULL, NULL, NULL);
+
+   if (profile) {
+   switch (hwmgr->chip_id) {
+   case CHIP_TONGA:
+   case CHIP_FIJI:
+   *max_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_3 
*)profile)->ulMaxVddc/4;
+   *min_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_3 
*)profile)->ulMinVddc/4;
+   break;
+   case CHIP_POLARIS11:
+   case CHIP_POLARIS10:
+   case CHIP_POLARIS12:
+   *max_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_6 
*)profile)->ulMaxVddc/100;
+   *min_vddc = ((ATOM_ASIC_PROFILING_INFO_V3_6 
*)profile)->ulMinVddc/100;
+   break;
+   default:
+   return;
+   }
+   }
+}
\ No newline at end of file
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h 
b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
index c672a50..e1b5d6b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
@@ -320,5 +320,8 @@ extern int atomctrl_get_leakage_vddc_base_on_leakage(struct 
pp_hwmgr *hwmgr,
uint16_t virtual_voltage_id,
uint16_t efuse_voltage_id);
 extern int atomctrl_get_leakage_id_from_efuse(struct pp_hwmgr *hwmgr, uint16_t 
*virtual_voltage_id);
+
+extern void atomctrl_get_voltage_range(struct pp_hwmgr *hwmgr, uint32_t 
*max_vddc,
+   uint32_t *min_vddc);
 #endif
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 9654593..966b5b1 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -838,6 +838,33 @@ static int smu7_odn_initial_default_setting(struct 
pp_hwmgr *hwmgr)
return 0;
 }
 
+static void smu7_setup_voltage_range_from_vbios(struct pp_hwmgr *hwmgr)
+{
+   struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+   struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table;
+   struct phm_ppt_v1_information *table_info =
+   (struct phm_ppt_v1_information *)(hwmgr->pptable);
+   uint32_t min_vddc, max_vddc;
+
+   if (table_info == NULL)
+   return;
+
+   dep_sclk_table = table_info->vdd_dep_on_sclk;
+
+   atomctrl_get_voltage_range(hwmgr, &min_vddc, &max_vddc);
+
+   if (min_vddc == 0 || min_vddc > 2000
+   || min_vddc > dep_sclk_table->entries[0].vddc)
+   min_vddc = dep_sclk_table->entries[0].vddc;
+
+   if (max_vddc == 0 || max_vddc > 2000
+   || max_vddc < dep_sclk_table->entries[dep_sclk_table->count - 
1].vddc)
+   max_vddc = dep_sclk_table->entries[dep_sclk_table->count - 
1].vddc;
+
+   data->odn_dpm_table.min_vddc = min_vddc;
+   data->odn_dpm_table.max_vddc = max_vddc;
+}
+
 static int smu7_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
 {
struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
@@ -856,8 +883,10 @@ static int smu7_setup_default_dpm_tables(struct pp_hwmgr 
*hwmgr)
sizeof(struct smu7_dpm_table));
 
/* initialize ODN table */
-   if (hwmgr->od_enabled)
+   if (hwmgr->od_enabled) {
+   smu7_setup_voltage_range_from_vbios(hwmgr);
smu7_odn_initial_default_setting(hwmgr);
+   }
 
return 0;
 }
@@ -4605,35 +4634,26 @@ static bool smu7_check_clk_voltage_valid(struct 
pp_hwmgr *hwmgr,
 {
struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
 
-   struct phm_ppt_v1_information *table_info =
-   (struct phm_ppt_v1_information *)(hwmgr->pptable);
-   uint32_t min_vddc;
- 

[PATCH 1/3] drm/amd/pp: Remove reduplicate code in smu7_check_dpm_table_updated

2018-04-18 Thread Rex Zhu
Signed-off-by: Rex Zhu 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index 720ac47..9654593 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -4683,10 +4683,6 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr 
*hwmgr)
return;
}
}
-   if (i == dep_table->count && data->need_update_smu7_dpm_table & 
DPMTABLE_OD_UPDATE_VDDC) {
-   data->need_update_smu7_dpm_table &= ~DPMTABLE_OD_UPDATE_VDDC;
-   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK;
-   }
 
dep_table = table_info->vdd_dep_on_sclk;
odn_dep_table = (struct phm_ppt_v1_clock_voltage_dependency_table 
*)&(odn_table->vdd_dependency_on_sclk);
@@ -4696,9 +4692,9 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr 
*hwmgr)
return;
}
}
-   if (i == dep_table->count && data->need_update_smu7_dpm_table & 
DPMTABLE_OD_UPDATE_VDDC) {
+   if (data->need_update_smu7_dpm_table & DPMTABLE_OD_UPDATE_VDDC) {
data->need_update_smu7_dpm_table &= ~DPMTABLE_OD_UPDATE_VDDC;
-   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK;
+   data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_SCLK | 
DPMTABLE_OD_UPDATE_MCLK;
}
 }
 
-- 
1.9.1



Re: [PATCH] drm/amdgpu: fix list not initialized

2018-04-18 Thread Christian König

Am 18.04.2018 um 12:37 schrieb Chunming Zhou:

Otherwise, the CPU gets stuck for 22s, ending in a kernel panic.

Change-Id: I5b87cde662a4658c9ab253ba88d009c9628a44ca
Signed-off-by: Chunming Zhou 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f0fbc331aa30..7131ad13c5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1563,10 +1563,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 * the evicted list so that it gets validated again on the
 * next command submission.
 */
+   list_del_init(&bo_va->base.vm_status);
if (!(bo->preferred_domains & 
amdgpu_mem_type_to_domain(mem_type)))
list_add_tail(&bo_va->base.vm_status, &vm->evicted);
-   else
-   list_del_init(&bo_va->base.vm_status);


Good catch, but I think I would prefer to replace list_add_tail() with 
list_move_tail() instead of moving the list_del_init().


But just a nit pick. Either way the patch is Reviewed-by: Christian 
König .


Regards,
Christian.


} else {
list_del_init(&bo_va->base.vm_status);
}




Re: [PATCH] drm/scheduler: fix build broken by "move last_sched fence updating prior to job popping"

2018-04-18 Thread Huang Rui
On Wed, Apr 18, 2018 at 12:06:27PM +0200, Christian König wrote:
> We don't have s_fence as local variable here.
> 
> Signed-off-by: Christian König 

I also just hit this issue and wrote the same fix, then found you had
already fixed it.

Acked-by: Huang Rui 

> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 5de79bbb12c8..f4b862503710 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -402,7 +402,7 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>   dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
>  
>   dma_fence_put(entity->last_scheduled);
> - entity->last_scheduled = dma_fence_get(&s_fence->finished);
> + entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
>  
>   spsc_queue_pop(&entity->job_queue);
>   return sched_job;
> -- 
> 2.14.1
> 


Re: [PATCH] drm/scheduler: fix build broken by "move last_sched fence updating prior to job popping"

2018-04-18 Thread Michel Dänzer
On 2018-04-18 12:06 PM, Christian König wrote:
> We don't have s_fence as local variable here.
> 
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 5de79bbb12c8..f4b862503710 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -402,7 +402,7 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>   dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
>  
>   dma_fence_put(entity->last_scheduled);
> - entity->last_scheduled = dma_fence_get(&s_fence->finished);
> + entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
>  
>   spsc_queue_pop(&entity->job_queue);
>   return sched_job;
> 

There's no need to wait for a review before pushing such an obvious and
trivial fix for a compile failure.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


[PATCH xf86-video-amdgpu 2/3] Track DRM event queue sequence number in scanout_update_pending

2018-04-18 Thread Michel Dänzer
From: Michel Dänzer 

Preparation for next change, no behaviour change intended.

Signed-off-by: Michel Dänzer 
---
 src/amdgpu_kms.c  | 16 
 src/drmmode_display.h |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/amdgpu_kms.c b/src/amdgpu_kms.c
index 7ec610f5d..454fa1860 100644
--- a/src/amdgpu_kms.c
+++ b/src/amdgpu_kms.c
@@ -424,7 +424,7 @@ amdgpu_scanout_flip_abort(xf86CrtcPtr crtc, void 
*event_data)
AMDGPUEntPtr pAMDGPUEnt = AMDGPUEntPriv(crtc->scrn);
drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
 
-   drmmode_crtc->scanout_update_pending = FALSE;
+   drmmode_crtc->scanout_update_pending = 0;
	drmmode_fb_reference(pAMDGPUEnt->fd, &drmmode_crtc->flip_pending,
 NULL);
 }
@@ -509,7 +509,7 @@ amdgpu_prime_scanout_update_abort(xf86CrtcPtr crtc, void 
*event_data)
 {
drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
 
-   drmmode_crtc->scanout_update_pending = FALSE;
+   drmmode_crtc->scanout_update_pending = 0;
 }
 
 void
@@ -650,7 +650,7 @@ amdgpu_prime_scanout_update_handler(xf86CrtcPtr crtc, 
uint32_t frame, uint64_t u
drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
 
amdgpu_prime_scanout_do_update(crtc, 0);
-   drmmode_crtc->scanout_update_pending = FALSE;
+   drmmode_crtc->scanout_update_pending = 0;
 }
 
 static void
@@ -691,7 +691,7 @@ amdgpu_prime_scanout_update(PixmapDirtyUpdatePtr dirty)
return;
}
 
-   drmmode_crtc->scanout_update_pending = TRUE;
+   drmmode_crtc->scanout_update_pending = drm_queue_seq;
 }
 
 static void
@@ -749,7 +749,7 @@ amdgpu_prime_scanout_flip(PixmapDirtyUpdatePtr ent)
}
 
drmmode_crtc->scanout_id = scanout_id;
-   drmmode_crtc->scanout_update_pending = TRUE;
+   drmmode_crtc->scanout_update_pending = drm_queue_seq;
 }
 
 static void
@@ -892,7 +892,7 @@ amdgpu_scanout_update_abort(xf86CrtcPtr crtc, void 
*event_data)
 {
drmmode_crtc_private_ptr drmmode_crtc = event_data;
 
-   drmmode_crtc->scanout_update_pending = FALSE;
+   drmmode_crtc->scanout_update_pending = 0;
 }
 
 static void
@@ -967,7 +967,7 @@ amdgpu_scanout_update(xf86CrtcPtr xf86_crtc)
return;
}
 
-   drmmode_crtc->scanout_update_pending = TRUE;
+   drmmode_crtc->scanout_update_pending = drm_queue_seq;
 }
 
 static void
@@ -1032,7 +1032,7 @@ amdgpu_scanout_flip(ScreenPtr pScreen, AMDGPUInfoPtr info,
}
 
drmmode_crtc->scanout_id = scanout_id;
-   drmmode_crtc->scanout_update_pending = TRUE;
+   drmmode_crtc->scanout_update_pending = drm_queue_seq;
 }
 
 static void AMDGPUBlockHandler_KMS(BLOCKHANDLER_ARGS_DECL)
diff --git a/src/drmmode_display.h b/src/drmmode_display.h
index 2aa56723d..25ae9f8c0 100644
--- a/src/drmmode_display.h
+++ b/src/drmmode_display.h
@@ -84,7 +84,7 @@ typedef struct {
Bool ignore_damage;
RegionRec scanout_last_region;
unsigned scanout_id;
-   Bool scanout_update_pending;
+   uintptr_t scanout_update_pending;
Bool tear_free;
 
PixmapPtr prime_scanout_pixmap;
-- 
2.17.0



[PATCH xf86-video-amdgpu 1/3] Ignore AMDGPU_DRM_QUEUE_ERROR (0) in amdgpu_drm_abort_entry

2018-04-18 Thread Michel Dänzer
From: Michel Dänzer 

This allows a following change to be slightly simpler.

Signed-off-by: Michel Dänzer 
---
 src/amdgpu_drm_queue.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/amdgpu_drm_queue.c b/src/amdgpu_drm_queue.c
index 2aa21e04d..d1456ca84 100644
--- a/src/amdgpu_drm_queue.c
+++ b/src/amdgpu_drm_queue.c
@@ -150,6 +150,9 @@ amdgpu_drm_abort_entry(uintptr_t seq)
 {
struct amdgpu_drm_queue_entry *e, *tmp;
 
+   if (seq == AMDGPU_DRM_QUEUE_ERROR)
+   return;
+
	xorg_list_for_each_entry_safe(e, tmp, &amdgpu_drm_queue, list) {
if (e->seq == seq) {
amdgpu_drm_abort_one(e);
-- 
2.17.0



[PATCH] drm/amdgpu: fix list not initialized

2018-04-18 Thread Chunming Zhou
Otherwise, the CPU gets stuck for 22s, ending in a kernel panic.

Change-Id: I5b87cde662a4658c9ab253ba88d009c9628a44ca
Signed-off-by: Chunming Zhou 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f0fbc331aa30..7131ad13c5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1563,10 +1563,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 * the evicted list so that it gets validated again on the
 * next command submission.
 */
+   list_del_init(&bo_va->base.vm_status);
if (!(bo->preferred_domains & 
amdgpu_mem_type_to_domain(mem_type)))
list_add_tail(&bo_va->base.vm_status, &vm->evicted);
-   else
-   list_del_init(&bo_va->base.vm_status);
} else {
list_del_init(_va->base.vm_status);
}
-- 
2.14.1



[PATCH xf86-video-amdgpu 3/3] Abort scanout_update_pending event when possible

2018-04-18 Thread Michel Dänzer
From: Michel Dänzer 

We don't need to wait for a non-TearFree scanout update before scanning
out from the screen pixmap or before flipping, as the scanout update
won't be visible anyway. Instead, just abort it.

Signed-off-by: Michel Dänzer 
---
 src/drmmode_display.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/src/drmmode_display.c b/src/drmmode_display.c
index 2d1540d7b..dcfc9937e 100644
--- a/src/drmmode_display.c
+++ b/src/drmmode_display.c
@@ -953,8 +953,8 @@ done:
if (drmmode_crtc->scanout[scanout_id].pixmap &&
fb != amdgpu_pixmap_get_fb(drmmode_crtc->
   scanout[scanout_id].pixmap)) {
-   drmmode_crtc_wait_pending_event(drmmode_crtc, 
pAMDGPUEnt->fd,
-   
drmmode_crtc->scanout_update_pending);
+   
amdgpu_drm_abort_entry(drmmode_crtc->scanout_update_pending);
+   drmmode_crtc->scanout_update_pending = 0;
drmmode_crtc_scanout_free(drmmode_crtc);
} else if (!drmmode_crtc->tear_free) {
drmmode_crtc_scanout_destroy(drmmode,
@@ -3083,8 +3083,12 @@ Bool amdgpu_do_pageflip(ScrnInfoPtr scrn, ClientPtr 
client,
amdgpu_scanout_do_update(crtc, scanout_id, new_front,
 extents);
 
-   drmmode_crtc_wait_pending_event(drmmode_crtc, 
pAMDGPUEnt->fd,
-   
drmmode_crtc->scanout_update_pending);
+   if (drmmode_crtc->scanout_update_pending) {
+   drmmode_crtc_wait_pending_event(drmmode_crtc, 
pAMDGPUEnt->fd,
+   
drmmode_crtc->flip_pending);
+   
amdgpu_drm_abort_entry(drmmode_crtc->scanout_update_pending);
+   drmmode_crtc->scanout_update_pending = 0;
+   }
}
 
if (crtc == ref_crtc) {
-- 
2.17.0



RE: [PATCH] drm/scheduler: fix build broken by "move last_sched fence updating prior to job popping"

2018-04-18 Thread Zhou, David(ChunMing)
Reviewed-by: Chunming Zhou 

-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of 
Christian K?nig
Sent: Wednesday, April 18, 2018 6:06 PM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm/scheduler: fix build broken by "move last_sched fence 
updating prior to job popping"

We don't have s_fence as local variable here.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 5de79bbb12c8..f4b862503710 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -402,7 +402,7 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
	dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
 
	dma_fence_put(entity->last_scheduled);
-	entity->last_scheduled = dma_fence_get(&s_fence->finished);
+	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
 
	spsc_queue_pop(&entity->job_queue);
return sched_job;
-- 
2.14.1



Re: [PATCH] drm/amdgpu: print DMA-buf status in debugfs

2018-04-18 Thread Christian König

Ping? Could anybody review that?

Thanks,
Christian.

Am 11.04.2018 um 15:09 schrieb Christian König:

Just note if a BO was imported/exported.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 28c2706e48d7..93d3f333444b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -765,6 +765,8 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
struct amdgpu_bo *bo = gem_to_amdgpu_bo(gobj);
struct seq_file *m = data;
  
+	struct dma_buf_attachment *attachment;
+	struct dma_buf *dma_buf;
unsigned domain;
const char *placement;
unsigned pin_count;
@@ -793,6 +795,15 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
pin_count = READ_ONCE(bo->pin_count);
if (pin_count)
seq_printf(m, " pin count %d", pin_count);
+
+   dma_buf = READ_ONCE(bo->gem_base.dma_buf);
+   attachment = READ_ONCE(bo->gem_base.import_attach);
+
+   if (attachment)
+   seq_printf(m, " imported from %p", dma_buf);
+   else if (dma_buf)
+   seq_printf(m, " exported as %p", dma_buf);
+
seq_printf(m, "\n");
  
  	return 0;


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Michel Dänzer
On 2018-04-18 11:57 AM, Qu, Jim wrote:
> OK, Please push your patch ASAP.

Done:

commit 9f6a8905611b5b1d8fcd31bebbc9af7ca1355cc3
Author: Jim Qu 
Date:   Tue Apr 17 19:11:16 2018 +0800

Wait for pending scanout update before calling drmmode_crtc_scanout_free


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/scheduler: fix build broken by "move last_sched fence updating prior to job popping"

2018-04-18 Thread Christian König
We don't have s_fence as local variable here.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 5de79bbb12c8..f4b862503710 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -402,7 +402,7 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
 
	dma_fence_put(entity->last_scheduled);
-	entity->last_scheduled = dma_fence_get(&s_fence->finished);
+	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
 
	spsc_queue_pop(&entity->job_queue);
return sched_job;
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: RE: RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Qu, Jim
OK, Please push your patch ASAP.

Thanks
JimQu


From: Michel Dänzer 
Sent: April 18, 2018 17:54
To: Qu, Jim
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: RE: RE: [PATCH] sync amdgpu scanout update event before mode setting

On 2018-04-18 11:44 AM, Qu, Jim wrote:
> Yeah, I realize that it should use || . I will check it again with your
> modification.

I've verified that it fixes the crash.


> and then push it immediately. The issue has delayed a long time.

Really? I haven't seen anything about this before you posted your patch
yesterday. (I wonder if
https://bugs.freedesktop.org/show_bug.cgi?id=105736 might be the same
issue, but it's not clear yet)


> May I get your RB?

I'd prefer pushing it myself.


--
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: RE: RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Michel Dänzer
On 2018-04-18 11:44 AM, Qu, Jim wrote:
> Yeah, I realize that it should use || . I will check it again with your
> modification.

I've verified that it fixes the crash.


> and then push it immediately. The issue has delayed a long time.

Really? I haven't seen anything about this before you posted your patch
yesterday. (I wonder if
https://bugs.freedesktop.org/show_bug.cgi?id=105736 might be the same
issue, but it's not clear yet)


> May I get your RB?

I'd prefer pushing it myself.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Qu, Jim
Yeah, I realize that it should use || . I will check it again with your 
modification. and then push it immediately. The issue has delayed a long time.

May I get your RB?

Thanks
JimQu


From: Michel Dänzer 
Sent: April 18, 2018 17:29
To: Qu, Jim
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: RE: [PATCH] sync amdgpu scanout update event before mode setting

On 2018-04-18 11:12 AM, Qu, Jim wrote:
> Hi Michel,
>
> drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
> drmmode_crtc->flip_pending ||
> drmmode_crtc->scanout_update_pending);
>
> Here, should not use && for this condition?

No; that would only wait as long as both drmmode_crtc->flip_pending and
drmmode_crtc->scanout_update_pending are non-zero, i.e. while a TearFree
flip is pending. But it needs to wait while a non-TearFree flip is
pending as well (as the existing code did), and while a non-TearFree
scanout update is pending (the case your patch fixes).


Anyway, I've come to realize this isn't the right place to fix the
problem, it should only be done when drmmode_crtc_scanout_free is
called:

if (drmmode_crtc->scanout[scanout_id].pixmap &&
fb != amdgpu_pixmap_get_fb(drmmode_crtc->
   scanout[scanout_id].pixmap)) {
		drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
						drmmode_crtc->scanout_update_pending);
		drmmode_crtc_scanout_free(drmmode_crtc);
	} ...

Do you prefer if I make this modification to your patch before pushing
it, or submit my own patch instead?


--
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm: Print unadorned pointers

2018-04-18 Thread Maarten Lankhorst
Op 18-04-18 om 11:24 schreef Alexey Brodkin:
> After commit ad67b74 ("printk: hash addresses printed with %p")
> pointers are being hashed when printed. However, this makes
> debug output completely useless. Switch to %px in order to see the
> unadorned kernel pointers.
>
> This was done with the following one-liner:
>  find drivers/gpu/drm -type f -name "*.c" -exec sed -r -i 
> '/DRM_DEBUG|KERN_DEBUG|pr_debug/ s/%p\b/%px/g' {} +
So first we plug a kernel information leak hole, then we introduce it again? 
Seems like a terrible idea..
> Signed-off-by: Alexey Brodkin 
> Cc: Borislav Petkov 
> Cc: Tobin C. Harding 
> Cc: Alex Deucher 
> Cc: Andrey Grodzovsky 
> Cc: Arnd Bergmann 
> Cc: Benjamin Gaignard 
> Cc: Chen-Yu Tsai 
> Cc: Christian Gmeiner 
> Cc: "Christian König" 
> Cc: Cihangir Akturk 
> Cc: CK Hu 
> Cc: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: David Airlie 
> Cc: "David (ChunMing) Zhou" 
> Cc: Gerd Hoffmann 
> Cc: Greg Kroah-Hartman 
> Cc: Gustavo Padovan 
> Cc: Harry Wentland 
> Cc: "Heiko Stübner" 
> Cc: Ingo Molnar 
> Cc: Jani Nikula 
> Cc: "Jerry (Fangzhi) Zuo" 
> Cc: Joonas Lahtinen 
> Cc: Krzysztof Kozlowski 
> Cc: "Leo (Sunpeng) Li" 
> Cc: Lucas Stach 
> Cc: Maarten Lankhorst 
> Cc: Matthias Brugger 
> Cc: Maxime Ripard 
> Cc: "Michel Dänzer" 
> Cc: Oded Gabbay 
> Cc: Philipp Zabel 
> Cc: Rob Clark 
> Cc: Rodrigo Vivi 
> Cc: Roger He 
> Cc: Roman Li 
> Cc: Russell King 
> Cc: Samuel Li 
> Cc: Sandy Huang 
> Cc: Sean Paul 
> Cc: Shirish S 
> Cc: Sinclair Yeh 
> Cc: Thomas Hellstrom 
> Cc: Tom Lendacky 
> Cc: Tony Cheng 
> Cc: Vincent Abriou 
> Cc: VMware Graphics 
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-arm-...@vger.kernel.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-media...@lists.infradead.org
> Cc: linux-rockc...@lists.infradead.org
> Cc: etna...@lists.freedesktop.org
> Cc: freedr...@lists.freedesktop.org
> Cc: amd-gfx@lists.freedesktop.org
> Cc: intel-...@lists.freedesktop.org
> Cc: virtualizat...@lists.linux-foundation.org
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c   | 14 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c|  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_device.c| 10 ++---
>  drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c  |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_events.c|  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c  |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c   |  4 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_queue.c | 18 -
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 14 +++
>  .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  2 +-
>  drivers/gpu/drm/armada/armada_gem.c| 12 +++---
>  drivers/gpu/drm/drm_atomic.c   | 44 
> +++---
>  drivers/gpu/drm/drm_bufs.c |  8 ++--
>  drivers/gpu/drm/drm_dp_mst_topology.c  |  4 +-
>  drivers/gpu/drm/drm_lease.c|  6 +--
>  drivers/gpu/drm/drm_lock.c |  2 +-
>  drivers/gpu/drm/drm_scatter.c  |  4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_drv.c  |  6 +--
>  drivers/gpu/drm/i810/i810_dma.c|  2 +-
>  drivers/gpu/drm/i915/i915_perf.c   |  2 +-
>  drivers/gpu/drm/i915/intel_display.c   |  2 +-
>  drivers/gpu/drm/i915/intel_guc_ct.c|  4 +-
>  drivers/gpu/drm/i915/intel_guc_submission.c|  2 +-
>  drivers/gpu/drm/i915/intel_uc_fw.c |  2 +-
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c |  2 +-
>  drivers/gpu/drm/mga/mga_warp.c |  2 +-
>  drivers/gpu/drm/msm/msm_drv.c  |  4 +-
>  drivers/gpu/drm/qxl/qxl_cmd.c 

Re: RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Michel Dänzer
On 2018-04-18 11:12 AM, Qu, Jim wrote:
> Hi Michel,
> 
> drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
> drmmode_crtc->flip_pending ||
> drmmode_crtc->scanout_update_pending);
> 
> Here, should not use && for this condition?

No; that would only wait as long as both drmmode_crtc->flip_pending and
drmmode_crtc->scanout_update_pending are non-zero, i.e. while a TearFree
flip is pending. But it needs to wait while a non-TearFree flip is
pending as well (as the existing code did), and while a non-TearFree
scanout update is pending (the case your patch fixes).


Anyway, I've come to realize this isn't the right place to fix the
problem, it should only be done when drmmode_crtc_scanout_free is
called:

if (drmmode_crtc->scanout[scanout_id].pixmap &&
fb != amdgpu_pixmap_get_fb(drmmode_crtc->
   scanout[scanout_id].pixmap)) {
		drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
						drmmode_crtc->scanout_update_pending);
		drmmode_crtc_scanout_free(drmmode_crtc);
	} ...

Do you prefer if I make this modification to your patch before pushing
it, or submit my own patch instead?


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Qu, Jim
Hi Michel,

drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
                                drmmode_crtc->flip_pending ||
                                drmmode_crtc->scanout_update_pending);

Here, should not use && for this condition?

Thanks
JimQu


From: amd-gfx  on behalf of Qu, Jim 
Sent: April 18, 2018 17:00
To: Michel Dänzer
Cc: amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] sync amdgpu scanout update event before mode setting

Okay if I make that modification before pushing?

A: Yes , of course :p)

Thanks
JimQu


From: Michel Dänzer 
Sent: April 18, 2018 16:55
To: Qu, Jim
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] sync amdgpu scanout update event before mode setting

On 2018-04-17 01:11 PM, Jim Qu wrote:
> There is a case that when set screen from reverse to normal, the old
> scanout damage is freed in modesetting before scanout update handler,
> so it causes segment fault issue.

Good catch, thanks.


> diff --git a/src/drmmode_display.c b/src/drmmode_display.c
> index 85970d1..ea38e29 100644
> --- a/src/drmmode_display.c
> +++ b/src/drmmode_display.c
> @@ -902,6 +902,9 @@ drmmode_set_mode_major(xf86CrtcPtr crtc, DisplayModePtr 
> mode,
>   drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
>   drmmode_crtc->flip_pending);
>
> +	drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
> +					drmmode_crtc->scanout_update_pending);
> +
>   if (!drmmode_set_mode(crtc, fb, mode, x, y))
>   goto done;
>
>

The two drmmode_crtc_wait_pending_event invocations can be combined like
this:

drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
                                drmmode_crtc->flip_pending ||
                                drmmode_crtc->scanout_update_pending);

Okay if I make that modification before pushing?


--
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: force app kill patch

2018-04-18 Thread Christian König

Am 18.04.2018 um 11:00 schrieb Liu, Monk:


1.Drm_sched_entity_fini(): it exit right after entity->job_queue 
empty, [ but that time scheduler is not fast enough to deal with this 
entity now ]


That should never happen.

The last job from the entity->job_queue is only removed after the 
scheduler is done with the entity (at least that was the original 
idea, not sure if that still works as expected).


[ML] no that’s not true and we already catch the kernel NULL pointer 
issue with a entity->last_scheduled fence get double put , exactly 
like the way I described in the scenario …


Pixel already fixed it by moving the put/get pair on 
entity->last_scheduled prior to spsc_queue_pop() and the race issue is 
therefore avoided




Yeah, already seen and reviewed that. That's a good catch, please make 
sure that this gets pushed to amd-staging-drm-next ASAP.


Christian.


/Monk

*From:* Christian König [mailto:ckoenig.leichtzumer...@gmail.com]
*Sent:* April 18, 2018 16:36
*To:* Liu, Monk ; Koenig, Christian 
; Deng, Emily 

*Cc:* amd-gfx@lists.freedesktop.org
*Subject:* Re: force app kill patch

See that in “sched_entity_fini”, we only call
dma_fence_put(entity->last_scheduled” under the condition of “If
(entity->fini_status)”, so

This way there is memory leak for the case of “entity->fini_stats ==0”

Good catch, we indeed should fix that.


1.Drm_sched_entity_fini(): it exit right after entity->job_queue
empty, [ but that time scheduler is not fast enough to deal with
this entity now ]

That should never happen.

The last job from the entity->job_queue is only removed after the 
scheduler is done with the entity (at least that was the original 
idea, not sure if that still works as expected).


Regards,
Christian.

Am 18.04.2018 um 09:20 schrieb Liu, Monk:

*Correction for the scenario*

After we move fence_put(entity->last_sched) out of the fini_status
check:

A potential race issue for the scenario:

1.Drm_sched_entity_fini(): it exit right after entity->job_queue
empty, [ but that time scheduler is not fast enough to deal with
this entity now ]

2.Drm_sched_entity_cleanup() : it call
dma_fence_put(entity->last_scheduled)  [ but this time
entity->last_scheduled  actually points to the fence prior to the
real last one ]

3.Scheduler_main() now dealing with this entity: it call
dma_fence_put(entity->last_scheduled)    [   Now this fence get
double put !!!  ]

4.Scheduler_main() now call dma_fence_get() on the *real* last one !

So eventually the real last one fence triggers memory leak and
more critical the double put fence cause NULL pointer access

/Monk

*From:* Liu, Monk
*Sent:* April 18, 2018 15:11
*To:* Koenig, Christian 
; Deng, Emily
 
*Cc:* amd-gfx@lists.freedesktop.org

*Subject:* force app kill patch

Hi Christian & Emily

I think the v4 fix for “fix force app kill hang” is still not good
enough:

First:

See that in “sched_entity_fini”, we only call
dma_fence_put(entity->last_scheduled)” under the condition of “If
(entity->fini_status)”, so

This way there is memory leak for the case of “entity->fini_stats ==0”

Second:

If we move dma_fence_put(entity->last_scheduled) out of the
condition of “if (entity->fini_status)”, the memory leak issue can
be fixed

But there will be kernel NULL pointer access, I think the time you
call dma_fence_put(entity->last_scheduled) may actually be executed
**not**

On the last scheduled fence of this entity, because it is run
without “thread_park/unpark” pair which to make sure scheduler not
dealing this entity

So with certain race issue, here is the scenario:

1.scheduler is doing the dma_fence_put() on the 1st fence,

2.scheduler set entity->last_scheduled to 1st fence

3.now sched_entity_fini() run, and it call dma_fence_put() on
entity->last_scheduled

4.now this 1st fence is actually put double time and the real
last fence won’t get put by expected

any idea?

/Monk




___

amd-gfx mailing list

amd-gfx@lists.freedesktop.org 

https://lists.freedesktop.org/mailman/listinfo/amd-gfx



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




RE: force app kill patch

2018-04-18 Thread Liu, Monk
1.Drm_sched_entity_fini(): it exit right after entity->job_queue empty, 
[ but that time scheduler is not fast enough to deal with this entity now ]
That should never happen.

The last job from the entity->job_queue is only removed after the scheduler is 
done with the entity (at least that was the original idea, not sure if that 
still works as expected).

[ML] no that’s not true and we already catch the kernel NULL pointer issue with 
a entity->last_scheduled fence get double put , exactly like the way I 
described in the scenario …

Pixel already fixed it by moving the put/get pair on entity->last_scheduled 
prior to spsc_queue_pop() and the race issue is therefore avoided

/Monk

From: Christian König [mailto:ckoenig.leichtzumer...@gmail.com]
Sent: April 18, 2018 16:36
To: Liu, Monk ; Koenig, Christian ; 
Deng, Emily 
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: force app kill patch

See that in “sched_entity_fini”, we only call 
dma_fence_put(entity->last_scheduled)” under the condition of “If 
(entity->fini_status)”, so
This way there is memory leak for the case of “entity->fini_stats ==0”
Good catch, we indeed should fix that.


1.Drm_sched_entity_fini(): it exit right after entity->job_queue empty, 
[ but that time scheduler is not fast enough to deal with this entity now ]
That should never happen.

The last job from the entity->job_queue is only removed after the scheduler is 
done with the entity (at least that was the original idea, not sure if that 
still works as expected).

Regards,
Christian.

Am 18.04.2018 um 09:20 schrieb Liu, Monk:
*Correction for the scenario*

After we move fence_put(entity->last_sched) out of the fini_status check:


A potential race issue for the scenario:


1.Drm_sched_entity_fini(): it exit right after entity->job_queue empty, 
[ but that time scheduler is not fast enough to deal with this entity now ]

2.Drm_sched_entity_cleanup() : it call 
dma_fence_put(entity->last_scheduled)  [ but this time entity->last_scheduled  
actually points to the fence prior to the real last one ]

3.Scheduler_main() now dealing with this entity: it call 
dma_fence_put(entity->last_scheduled)[   Now this fence get double put !!!  
]

4.Scheduler_main() now call dma_fence_get() on the *real* last one !

So eventually the real last one fence triggers memory leak and more critical 
the double put fence cause NULL pointer access

/Monk

From: Liu, Monk
Sent: April 18, 2018 15:11
To: Koenig, Christian 
; Deng, Emily 

Cc: amd-gfx@lists.freedesktop.org
Subject: force app kill patch

Hi Christian & Emily

I think the v4 fix for “fix force app kill hang” is still not good enough:


First:
See that in “sched_entity_fini”, we only call 
dma_fence_put(entity->last_scheduled)” under the condition of “If 
(entity->fini_status)”, so
This way there is memory leak for the case of “entity->fini_stats ==0”


Second:
If we move dma_fence_put(entity->last_scheduled) out of the condition of “if 
(entity->fini_status)”, the memory leak issue can be fixed
But there will be kernel NULL pointer access, I think the time you call 
dma_fence_put(entity->last_scheduled) may actually be executed *not*
On the last scheduled fence of this entity, because it is run without 
“thread_park/unpark” pair which to make sure scheduler not dealing this entity

So with certain race issue, here is the scenario:


1.scheduler is doing the dma_fence_put() on the 1st fence,

2.scheduler set entity->last_scheduled to 1st fence

3.now sched_entity_fini() run, and it call dma_fence_put() on 
entity->last_scheduled

4.now this 1st fence is actually put double time and the real last 
fence won’t get put by expected


any idea?


/Monk




___

amd-gfx mailing list

amd-gfx@lists.freedesktop.org

https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Qu, Jim
Okay if I make that modification before pushing?

A: Yes , of course :p)

Thanks
JimQu


From: Michel Dänzer 
Sent: April 18, 2018 16:55
To: Qu, Jim
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] sync amdgpu scanout update event before mode setting

On 2018-04-17 01:11 PM, Jim Qu wrote:
> There is a case that when set screen from reverse to normal, the old
> scanout damage is freed in modesetting before scanout update handler,
> so it causes segment fault issue.

Good catch, thanks.


> diff --git a/src/drmmode_display.c b/src/drmmode_display.c
> index 85970d1..ea38e29 100644
> --- a/src/drmmode_display.c
> +++ b/src/drmmode_display.c
> @@ -902,6 +902,9 @@ drmmode_set_mode_major(xf86CrtcPtr crtc, DisplayModePtr 
> mode,
>   drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
>   drmmode_crtc->flip_pending);
>
> +	drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
> +					drmmode_crtc->scanout_update_pending);
> +
>   if (!drmmode_set_mode(crtc, fb, mode, x, y))
>   goto done;
>
>

The two drmmode_crtc_wait_pending_event invocations can be combined like
this:

drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
                                drmmode_crtc->flip_pending ||
                                drmmode_crtc->scanout_update_pending);

Okay if I make that modification before pushing?


--
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/2] drm/scheduler: always put last_sched fence in entity_fini

2018-04-18 Thread Christian König

Am 18.04.2018 um 10:45 schrieb Pixel Ding:

Fix the potential memleak since scheduler main thread always
hold one last_sched fence.

Signed-off-by: Pixel Ding 


Reviewed-by: Christian König  for the whole 
series.



---
  drivers/gpu/drm/scheduler/gpu_scheduler.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 44d21981bf3b..4968867da7a6 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -276,10 +276,10 @@ void drm_sched_entity_cleanup(struct drm_gpu_scheduler *sched,
		else if (r)
			DRM_ERROR("fence add callback failed (%d)\n", r);
}
-
-   dma_fence_put(entity->last_scheduled);
-   entity->last_scheduled = NULL;
}
+
+   dma_fence_put(entity->last_scheduled);
+   entity->last_scheduled = NULL;
  }
  EXPORT_SYMBOL(drm_sched_entity_cleanup);
  


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] sync amdgpu scanout update event before mode setting

2018-04-18 Thread Michel Dänzer
On 2018-04-17 01:11 PM, Jim Qu wrote:
> There is a case that when set screen from reverse to normal, the old
> scanout damage is freed in modesetting before scanout update handler,
> so it causes segment fault issue.

Good catch, thanks.


> diff --git a/src/drmmode_display.c b/src/drmmode_display.c
> index 85970d1..ea38e29 100644
> --- a/src/drmmode_display.c
> +++ b/src/drmmode_display.c
> @@ -902,6 +902,9 @@ drmmode_set_mode_major(xf86CrtcPtr crtc, DisplayModePtr 
> mode,
>   drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
>   drmmode_crtc->flip_pending);
>  
> +	drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
> +					drmmode_crtc->scanout_update_pending);
> +
>   if (!drmmode_set_mode(crtc, fb, mode, x, y))
>   goto done;
>  
> 

The two drmmode_crtc_wait_pending_event invocations can be combined like
this:

drmmode_crtc_wait_pending_event(drmmode_crtc, pAMDGPUEnt->fd,
                                drmmode_crtc->flip_pending ||
                                drmmode_crtc->scanout_update_pending);

Okay if I make that modification before pushing?


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v2] drm/amd/amdgpu: passing i2s instance value as platform data

2018-04-18 Thread Vijendar Mukunda
i2s instance value is passed as platform data to dwc driver.
this parameter will be useful to distinguish current i2s
instance value when multiple i2s controller instances are created.

Signed-off-by: Vijendar Mukunda 
---
v1->v2: moved I2S instance macros from dwc driver header file
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
index 6cca4d1..c8c7583 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
@@ -83,6 +83,8 @@
 #define ACP_TIMEOUT_LOOP   0x00FF
 #define ACP_DEVS   4
 #define ACP_SRC_ID 162
+#define I2S_SP_INSTANCE0x01
+#define I2S_BT_INSTANCE0x02
 
 enum {
ACP_TILE_P1 = 0,
@@ -347,6 +349,7 @@ static int acp_hw_init(void *handle)
i2s_pdata[0].snd_rates = SNDRV_PCM_RATE_8000_96000;
i2s_pdata[0].i2s_reg_comp1 = ACP_I2S_COMP1_PLAY_REG_OFFSET;
i2s_pdata[0].i2s_reg_comp2 = ACP_I2S_COMP2_PLAY_REG_OFFSET;
+   i2s_pdata[0].i2s_instance = I2S_SP_INSTANCE;
switch (adev->asic_type) {
case CHIP_STONEY:
i2s_pdata[1].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET |
@@ -362,6 +365,7 @@ static int acp_hw_init(void *handle)
i2s_pdata[1].snd_rates = SNDRV_PCM_RATE_8000_96000;
i2s_pdata[1].i2s_reg_comp1 = ACP_I2S_COMP1_CAP_REG_OFFSET;
i2s_pdata[1].i2s_reg_comp2 = ACP_I2S_COMP2_CAP_REG_OFFSET;
+   i2s_pdata[1].i2s_instance = I2S_SP_INSTANCE;
 
i2s_pdata[2].quirks = DW_I2S_QUIRK_COMP_REG_OFFSET;
switch (adev->asic_type) {
@@ -376,6 +380,7 @@ static int acp_hw_init(void *handle)
i2s_pdata[2].snd_rates = SNDRV_PCM_RATE_8000_96000;
i2s_pdata[2].i2s_reg_comp1 = ACP_BT_COMP1_REG_OFFSET;
i2s_pdata[2].i2s_reg_comp2 = ACP_BT_COMP2_REG_OFFSET;
+   i2s_pdata[2].i2s_instance = I2S_BT_INSTANCE;
 
adev->acp.acp_res[0].name = "acp2x_dma";
adev->acp.acp_res[0].flags = IORESOURCE_MEM;
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 2/2] drm/scheduler: move last_sched fence updating prior to job popping

2018-04-18 Thread Pixel Ding
Make sure main thread won't update last_sched fence when entity
is cleanup.

Fix a racing issue which is caused by putting last_sched fence
twice. Running vulkaninfo in tight loop can produce this issue
as seeing wild fence pointer.

Signed-off-by: Pixel Ding 
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 4968867da7a6..b937b6dc00a9 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -402,6 +402,9 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
if (entity->guilty && atomic_read(entity->guilty))
		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
 
+   dma_fence_put(entity->last_scheduled);
+	entity->last_scheduled = dma_fence_get(&s_fence->finished);
+
	spsc_queue_pop(&entity->job_queue);
return sched_job;
 }
@@ -715,9 +718,6 @@ static int drm_sched_main(void *param)
fence = sched->ops->run_job(sched_job);
drm_sched_fence_scheduled(s_fence);
 
-   dma_fence_put(entity->last_scheduled);
-	entity->last_scheduled = dma_fence_get(&s_fence->finished);
-
if (fence) {
s_fence->parent = dma_fence_get(fence);
r = dma_fence_add_callback(fence, _fence->cb,
-- 
2.11.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 1/2] drm/scheduler: always put last_sched fence in entity_fini

2018-04-18 Thread Pixel Ding
Fix the potential memleak since scheduler main thread always
hold one last_sched fence.

Signed-off-by: Pixel Ding 
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 44d21981bf3b..4968867da7a6 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -276,10 +276,10 @@ void drm_sched_entity_cleanup(struct drm_gpu_scheduler *sched,
		else if (r)
			DRM_ERROR("fence add callback failed (%d)\n", r);
}
-
-   dma_fence_put(entity->last_scheduled);
-   entity->last_scheduled = NULL;
}
+
+   dma_fence_put(entity->last_scheduled);
+   entity->last_scheduled = NULL;
 }
 EXPORT_SYMBOL(drm_sched_entity_cleanup);
 
-- 
2.11.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: force app kill patch

2018-04-18 Thread Christian König
See that in “sched_entity_fini”, we only call 
dma_fence_put(entity->last_scheduled)” under the condition of “If 
(entity->fini_status)”, so


This way there is memory leak for the case of “entity->fini_stats ==0”


Good catch, we indeed should fix that.

1.Drm_sched_entity_fini(): it exit right after entity->job_queue 
empty, [ but that time scheduler is not fast enough to deal with this 
entity now ] 

That should never happen.

The last job from the entity->job_queue is only removed after the 
scheduler is done with the entity (at least that was the original idea, 
not sure if that still works as expected).


Regards,
Christian.

Am 18.04.2018 um 09:20 schrieb Liu, Monk:


*Correction for the scenario*

After we move fence_put(entity->last_sched) out of the fini_status check:

A potential race issue for the scenario:

1. drm_sched_entity_fini(): it exits right after entity->job_queue is
empty [ but at that time the scheduler is not fast enough to deal with this
entity yet ]

2. drm_sched_entity_cleanup(): it calls
dma_fence_put(entity->last_scheduled) [ but at this time
entity->last_scheduled actually points to the fence prior to the real
last one ]

3. scheduler_main() now dealing with this entity: it calls
dma_fence_put(entity->last_scheduled) [ now this fence gets put twice! ]

4. scheduler_main() now calls dma_fence_get() on the *real* last one!

So eventually the real last fence triggers a memory leak and, more
critically, the double-put fence causes a NULL pointer access.


/Monk

*From:*Liu, Monk
*Sent:* April 18, 2018 15:11
*To:* Koenig, Christian ; Deng, Emily 


*Cc:* amd-gfx@lists.freedesktop.org
*Subject:* force app kill patch

Hi Christian & Emily

I think the v4 fix for “fix force app kill hang” is still not good enough:

First:

See that in “sched_entity_fini”, we only call
dma_fence_put(entity->last_scheduled) under the condition of “if
(entity->fini_status)”, so this way there is a memory leak for the case of
“entity->fini_status == 0”.

Second:

If we move dma_fence_put(entity->last_scheduled) out of the condition
of “if (entity->fini_status)”, the memory leak issue can be fixed.

But there will be a kernel NULL pointer access: I think the
dma_fence_put(entity->last_scheduled) may actually be executed *not*
on the last scheduled fence of this entity, because it runs without the
“thread_park/unpark” pair which makes sure the scheduler is not dealing
with this entity.


So with a certain race, here is the scenario:

1. scheduler is doing the dma_fence_put() on the 1st fence,

2. scheduler sets entity->last_scheduled to the 1st fence

3. now sched_entity_fini() runs, and it calls dma_fence_put() on
entity->last_scheduled

4. now this 1st fence is actually put twice and the real last
fence won’t get put as expected

Any ideas?

/Monk





Re: [PATCH 1/1] drm/amdgpu: Enable scatter gather display support

2018-04-18 Thread Christian König

On 18.04.2018 at 06:14, Alex Deucher wrote:

On Tue, Apr 17, 2018 at 8:40 PM, Samuel Li  wrote:

It's auto by default. For CZ/ST, the auto setting enables sg display
when the vram size is small; otherwise vram is still used.
This patch fixes a potential hang issue introduced by the change
"allow framebuffer in GART memory as well", due to a CZ/ST hardware
limitation.

v2: Change default setting to auto.
v3: Move some logic from amdgpu_display_framebuffer_domains()
 to pin function, suggested by Christian.
Signed-off-by: Samuel Li 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  2 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  4 ++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_display.h   |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 
  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 25 +--
  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c |  2 +-
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  3 +--
  8 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b3d047d..26429de 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -129,6 +129,7 @@ extern int amdgpu_lbpw;
  extern int amdgpu_compute_multipipe;
  extern int amdgpu_gpu_recovery;
  extern int amdgpu_emu_mode;
+extern int amdgpu_sg_display;

  #ifdef CONFIG_DRM_AMDGPU_SI
  extern int amdgpu_si_support;
@@ -137,6 +138,7 @@ extern int amdgpu_si_support;
  extern int amdgpu_cik_support;
  #endif

+#define AMDGPU_SG_THRESHOLD (256*1024*1024)
   #define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
   #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
   #define AMDGPU_MAX_USEC_TIMEOUT 100000 /* 100 ms */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 50f98df..0caa3d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -189,7 +189,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 goto cleanup;
 }

-   r = amdgpu_bo_pin(new_abo, amdgpu_display_framebuffer_domains(adev), &base);
+   r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev), &base);
 if (unlikely(r != 0)) {
 DRM_ERROR("failed to pin new abo buffer before flip\n");
 goto unreserve;
@@ -484,7 +484,7 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
 .create_handle = drm_gem_fb_create_handle,
  };

-uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev)
+uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev)

This change should be a separate patch,


  {
 uint32_t domain = AMDGPU_GEM_DOMAIN_VRAM;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
index 2b11d80..f66e3e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.h
@@ -23,7 +23,7 @@
  #ifndef __AMDGPU_DISPLAY_H__
  #define __AMDGPU_DISPLAY_H__

-uint32_t amdgpu_display_framebuffer_domains(struct amdgpu_device *adev);
+uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev);
  struct drm_framebuffer *
  amdgpu_display_user_framebuffer_create(struct drm_device *dev,
struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0b19482..85dcd1c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -132,6 +132,7 @@ int amdgpu_lbpw = -1;
  int amdgpu_compute_multipipe = -1;
  int amdgpu_gpu_recovery = -1; /* auto */
  int amdgpu_emu_mode = 0;
+int amdgpu_sg_display = -1;

  MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
  module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -290,6 +291,9 @@ module_param_named(gpu_recovery, amdgpu_gpu_recovery, int, 0444);
  MODULE_PARM_DESC(emu_mode, "Emulation mode, (1 = enable, 0 = disable)");
  module_param_named(emu_mode, amdgpu_emu_mode, int, 0444);

+MODULE_PARM_DESC(sg_display, "Enable scatter gather display, (1 = enable, 0 = disable, -1 = auto)");
+module_param_named(sg_display, amdgpu_sg_display, int, 0444);
+
  #ifdef CONFIG_DRM_AMDGPU_SI

  #if defined(CONFIG_DRM_RADEON) || defined(CONFIG_DRM_RADEON_MODULE)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index ff89e84..bc5fd8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -137,7 +137,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
 /* need to align pitch with crtc limits */
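For reference, the auto behaviour described in the commit message (sg display on CZ/ST only when VRAM is small) boils down to a decision like the following sketch. The helper name `sg_display_enabled` is hypothetical; in the real patch the logic is spread across the pin path, and only the threshold constant comes from the diff above:

```c
#include <stdbool.h>
#include <stdint.h>

/* 256 MiB cutoff, as added by the patch in amdgpu.h. */
#define AMDGPU_SG_THRESHOLD (256ULL * 1024 * 1024)

/* amdgpu_sg_display module param: 1 = force on, 0 = force off, -1 = auto. */
static bool sg_display_enabled(int amdgpu_sg_display, uint64_t vram_size)
{
	if (amdgpu_sg_display == 1)
		return true;
	if (amdgpu_sg_display == 0)
		return false;
	/* auto: scatter/gather display only when VRAM is small */
	return vram_size <= AMDGPU_SG_THRESHOLD;
}
```

So a CZ/ST part with, say, 8 GiB of carve-out VRAM keeps scanning out of VRAM under auto, avoiding the hang the commit message mentions.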
 

Re: [PATCH 1/2] drm/amdgpu: set preferred_domain independent of fallback handling

2018-04-18 Thread Christian König

On 18.04.2018 at 04:19, Chunming Zhou wrote:

When GEM needs to fall back to GTT for VRAM BOs we still want the
preferred domain to be untouched so that the BO has a chance to move back
to VRAM in the future.

Change-Id: I8cfdf3f30532f7e5d80b8e4266b7800211de2f0b
Signed-off-by: Chunming Zhou 


Reviewed-by: Christian König  for the whole 
series.



---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c|  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 15 +--
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  1 +
  3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 1200c5ba37da..ff606ce88837 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -62,6 +62,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
bp.byte_align = alignment;
bp.type = type;
bp.resv = resv;
+   bp.preferred_domain = initial_domain;
  retry:
bp.flags = flags;
bp.domain = initial_domain;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index cac65e32a0b9..9258f0694922 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -360,6 +360,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
};
struct amdgpu_bo *bo;
unsigned long page_align, size = bp->size;
+   u32 preferred_domains;
size_t acc_size;
int r;
  
@@ -380,12 +381,14 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,

drm_gem_private_object_init(adev->ddev, >gem_base, size);
INIT_LIST_HEAD(>shadow_list);
INIT_LIST_HEAD(>va);
-   bo->preferred_domains = bp->domain & (AMDGPU_GEM_DOMAIN_VRAM |
- AMDGPU_GEM_DOMAIN_GTT |
- AMDGPU_GEM_DOMAIN_CPU |
- AMDGPU_GEM_DOMAIN_GDS |
- AMDGPU_GEM_DOMAIN_GWS |
- AMDGPU_GEM_DOMAIN_OA);
+   preferred_domains = bp->preferred_domain ? bp->preferred_domain :
+   bp->domain;
+   bo->preferred_domains = preferred_domains & (AMDGPU_GEM_DOMAIN_VRAM |
+AMDGPU_GEM_DOMAIN_GTT |
+AMDGPU_GEM_DOMAIN_CPU |
+AMDGPU_GEM_DOMAIN_GDS |
+AMDGPU_GEM_DOMAIN_GWS |
+AMDGPU_GEM_DOMAIN_OA);
bo->allowed_domains = bo->preferred_domains;
if (bp->type != ttm_bo_type_kernel &&
bo->allowed_domains == AMDGPU_GEM_DOMAIN_VRAM)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index e9a21d991e77..540e03fa159f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -37,6 +37,7 @@ struct amdgpu_bo_param {
unsigned long   size;
int byte_align;
u32 domain;
+   u32 preferred_domain;
u64 flags;
enum ttm_bo_typetype;
struct reservation_object   *resv;
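The effect of the series — fall back to GTT when a VRAM allocation fails while keeping preferred_domains pointing at VRAM — can be sketched like this. The domain flags, `try_alloc()`, and `bo_create()` here are simplified stand-ins, not the real TTM/GEM path:

```c
#include <stdbool.h>
#include <stdint.h>

#define DOMAIN_VRAM 0x1u
#define DOMAIN_GTT  0x2u

struct bo {
	uint32_t preferred_domains; /* where the BO should live long-term */
	uint32_t placed_domain;     /* where it actually ended up */
};

/* Hypothetical allocator: pretend VRAM is exhausted when vram_full is set. */
static bool try_alloc(uint32_t domain, bool vram_full)
{
	return !(domain == DOMAIN_VRAM && vram_full);
}

/* GEM-style create with fallback: retry in GTT on failure, but record the
 * original preference so validation can migrate the BO back to VRAM later. */
static bool bo_create(struct bo *bo, uint32_t initial_domain, bool vram_full)
{
	bo->preferred_domains = initial_domain;  /* untouched by the fallback */
	if (try_alloc(initial_domain, vram_full)) {
		bo->placed_domain = initial_domain;
		return true;
	}
	if (initial_domain == DOMAIN_VRAM && try_alloc(DOMAIN_GTT, vram_full)) {
		bo->placed_domain = DOMAIN_GTT;  /* fallback placement only */
		return true;
	}
	return false;
}
```

Before the patch, the retry overwrote the preference with GTT as well, so a BO that fell back once could never be considered for a move back to VRAM.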




Re: RFC for a render API to support adaptive sync and VRR

2018-04-18 Thread Daniel Vetter
On Wed, Apr 18, 2018 at 5:58 AM, Keith Packard  wrote:
> Michel Dänzer  writes:
>> Time-based presentation seems to be the right approach for preventing
>> micro-stutter in games as well, Croteam developers have been researching
>> this.
>
> Both the Vulkan GOOGLE_display_timing extension and X11 Present
> extension offer the ability to specify the desired display time in
> seconds.
>
> Similarly, I'd suggest that the min/max display refresh rate values be
> advertised as time between frames rather than frames per second.
>
> I'd also encourage using a single unit for all of these values,
> preferably nanoseconds. Absolute times should all be referenced to
> CLOCK_MONOTONIC.

+1 on everything Keith said. I somehow got dragged into Khronos Vulkan
discussions around preventing micro-stuttering, and the consensus seems to
be that timestamps for scheduling frames are the way to go, most likely
absolute ones (not everything is running Linux unfortunately, so we can't
go outright and claim it's guaranteed to be CLOCK_MONOTONIC).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


RE: force app kill patch

2018-04-18 Thread Liu, Monk
*Correction for the scenario*

After we move fence_put(entity->last_sched) out of the fini_status check:


A potential race issue for the scenario:


1. drm_sched_entity_fini(): it exits right after entity->job_queue is empty
[ but at that time the scheduler is not fast enough to deal with this entity yet ]

2. drm_sched_entity_cleanup(): it calls
dma_fence_put(entity->last_scheduled) [ but at this time entity->last_scheduled
actually points to the fence prior to the real last one ]

3. scheduler_main() now dealing with this entity: it calls
dma_fence_put(entity->last_scheduled) [ now this fence gets put twice! ]

4. scheduler_main() now calls dma_fence_get() on the *real* last one!

So eventually the real last fence triggers a memory leak and, more critically,
the double-put fence causes a NULL pointer access.

/Monk

From: Liu, Monk
Sent: April 18, 2018 15:11
To: Koenig, Christian ; Deng, Emily 

Cc: amd-gfx@lists.freedesktop.org
Subject: force app kill patch

Hi Christian & Emily

I think the v4 fix for “fix force app kill hang” is still not good enough:


First:
See that in “sched_entity_fini”, we only call
dma_fence_put(entity->last_scheduled) under the condition of “if
(entity->fini_status)”, so this way there is a memory leak for the case of
“entity->fini_status == 0”.


Second:
If we move dma_fence_put(entity->last_scheduled) out of the condition of “if
(entity->fini_status)”, the memory leak issue can be fixed.
But there will be a kernel NULL pointer access: I think the
dma_fence_put(entity->last_scheduled) may actually be executed *not*
on the last scheduled fence of this entity, because it runs without the
“thread_park/unpark” pair which makes sure the scheduler is not dealing with
this entity.

So with a certain race, here is the scenario:


1. scheduler is doing the dma_fence_put() on the 1st fence,

2. scheduler sets entity->last_scheduled to the 1st fence

3. now sched_entity_fini() runs, and it calls dma_fence_put() on
entity->last_scheduled

4. now this 1st fence is actually put twice and the real last
fence won’t get put as expected


Any ideas?


/Monk
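The interleaving Monk describes can be written down as a toy refcount trace. This is a sequential model of the race, not kernel code; the point is that cleanup's put must run only after the scheduler thread is parked, so it sees the final value of last_scheduled:

```c
/* Refcounts of the 1st fence and the real last fence under both orderings. */
struct counts { int first; int last; };

/* Broken: cleanup puts last_scheduled while the scheduler may still swap it. */
static struct counts broken_order(void)
{
	struct counts c = { 1, 0 };  /* entity holds one ref on the 1st fence */
	c.first--;                   /* scheduler: dma_fence_put(1st) on swap */
	c.first--;                   /* cleanup raced in: puts the stale pointer too */
	c.last++;                    /* scheduler: dma_fence_get(real last) */
	return c;                    /* first == -1 (double put), last == 1 (leaked) */
}

/* Fixed: cleanup runs only after the scheduler is parked, so it puts the
 * real last fence exactly once. */
static struct counts fixed_order(void)
{
	struct counts c = { 1, 0 };
	c.first--;                   /* scheduler: dma_fence_put(1st) on swap */
	c.last++;                    /* scheduler: dma_fence_get(real last) */
	c.last--;                    /* cleanup after park: puts the real last once */
	return c;                    /* both reach 0: no leak, no double put */
}
```

In the broken ordering the 1st fence's refcount goes negative (the NULL pointer access Monk predicts) and the real last fence never reaches zero (the leak); a thread_park/unpark pair around cleanup serializes the two actors and restores both counts to zero.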


force app kill patch

2018-04-18 Thread Liu, Monk
Hi Christian & Emily

I think the v4 fix for "fix force app kill hang" is still not good enough:


First:
See that in "sched_entity_fini", we only call
dma_fence_put(entity->last_scheduled) under the condition of "if
(entity->fini_status)", so this way there is a memory leak for the case of
"entity->fini_status == 0".


Second:
If we move dma_fence_put(entity->last_scheduled) out of the condition of "if
(entity->fini_status)", the memory leak issue can be fixed.
But there will be a kernel NULL pointer access: I think the
dma_fence_put(entity->last_scheduled) may actually be executed *not*
on the last scheduled fence of this entity, because it runs without the
"thread_park/unpark" pair which makes sure the scheduler is not dealing with
this entity.

So with a certain race, here is the scenario:


1. scheduler is doing the dma_fence_put() on the 1st fence,

2. scheduler sets entity->last_scheduled to the 1st fence

3. now sched_entity_fini() runs, and it calls dma_fence_put() on
entity->last_scheduled

4. now this 1st fence is actually put twice and the real last
fence won't get put as expected


Any ideas?


/Monk