Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Liu, Monk
>>> If we are trying to debug a reproducible hang, probably best to just 
>>> disable gfxoff before messing with it to remove that as a factor.
Agree

>> Otherwise, the method included in this patch is the proper way to 
>> disable/enable GFXOFF dynamically.
That doesn't sound workable, because we cannot disable GFXOFF every time we use debugfs 
(and re-enable GFXOFF again once the debugfs access is done …)

Thanks
From: Deucher, Alexander 
Sent: February 21, 2020 23:40
To: Christian König ; Huang, Ray 
; Liu, Monk 
Cc: StDenis, Tom ; Alex Deucher ; 
amd-gfx list 
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO


[AMD Public Use]

If we are trying to debug a reproducible hang, probably best to just disable 
gfxoff before messing with it to remove that as a factor.  Otherwise, the 
method included in this patch is the proper way to disable/enable GFXOFF 
dynamically.

Alex


From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> on behalf of Christian König <ckoenig.leichtzumer...@gmail.com>
Sent: Friday, February 21, 2020 10:27 AM
To: Huang, Ray <ray.hu...@amd.com>; Liu, Monk <monk@amd.com>
Cc: StDenis, Tom <tom.stde...@amd.com>; Alex Deucher <alexdeuc...@gmail.com>; amd-gfx list <amd-gfx@lists.freedesktop.org>
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a register you 
>> usually hit a hang, and in that case KIQ has probably already died
> If CP is busy, the gfx should be in "on" state at that time, we needn't use 
> KIQ.

Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Because the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> On Behalf Of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom <tom.stde...@amd.com>
>> Cc: Alex Deucher <alexdeuc...@gmail.com>; amd-gfx list <amd-gfx@lists.freedesktop.org>
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about using RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis <tom.stde...@amd.com> wrote:
> Signed-off-by: Tom St Denis <tom.stde...@amd.com>
 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher <alexander.deuc...@amd.com>

> ---
>drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   if (pm_pg_lock)
>   mutex_lock(&adev->pm.mutex);
>
> +   amdgpu_gfx_off_ctrl(adev, false);
>   while (size) {
>   uint32_t value;
>
> @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   }
>
>end:
> +   amdgpu_gfx_off_ctrl(adev, true);
> +
>   if (use_bank) {
>   amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> 0x);
>   mutex_unlock(&adev->grbm_idx_mutex);
> --
> 2.24.1
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Liu, Monk
>>>RREG32_KIQ and WREG32_KIQ

If you use RREG32_KIQ, it always goes through KIQ, no matter whether GFX is in the "on" 
state or not 



-----Original Message-----
From: Huang, Ray 
Sent: February 21, 2020 23:23
To: Liu, Monk 
Cc: StDenis, Tom ; Alex Deucher ; 
amd-gfx list 
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
> Better not use KIQ, because when you use debugfs to read a register you 
> usually hit a hang, and in that case KIQ has probably already died

If CP is busy, the gfx should be in "on" state at that time, we needn't use KIQ.

Thanks,
Ray

> 
> -----Original Message-----
> From: amd-gfx  On Behalf Of Huang Rui
> Sent: February 21, 2020 22:34
> To: StDenis, Tom 
> Cc: Alex Deucher ; amd-gfx list 
> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access 
> to MMIO
> 
> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> > I got some messages after a while:
> > 
> > [  741.788564] Failed to send Message 8.
> > [  746.671509] Failed to send Message 8.
> > [  748.749673] Failed to send Message 2b.
> > [  759.245414] Failed to send Message 7.
> > [  763.216902] Failed to send Message 2a.
> > 
> > Are there any additional locks that should be held?  Because some 
> > commands like --top or --waves can do a lot of distinct read 
> > operations (causing a lot of enable/disable calls).
> > 
> > I'm going to sit on this a bit since I don't think the patch is 
> > ready for pushing out.
> > 
> 
> How about using RREG32_KIQ and WREG32_KIQ?
> 
> Thanks,
> Ray
> 
> > 
> > Tom
> > 
> > On 2020-02-19 10:07 a.m., Alex Deucher wrote:
> > > On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:
> > > > Signed-off-by: Tom St Denis 
> > > Please add a patch description.  With that fixed:
> > > Reviewed-by: Alex Deucher 
> > > 
> > > > ---
> > > >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> > > >   1 file changed, 3 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > index 7379910790c9..66f763300c96 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > > > read, struct file *f,
> > > >  if (pm_pg_lock)
> > > >  mutex_lock(&adev->pm.mutex);
> > > > 
> > > > +   amdgpu_gfx_off_ctrl(adev, false);
> > > >  while (size) {
> > > >  uint32_t value;
> > > > 
> > > > @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > > > read, struct file *f,
> > > >  }
> > > > 
> > > >   end:
> > > > +   amdgpu_gfx_off_ctrl(adev, true);
> > > > +
> > > >  if (use_bank) {
> > > >  amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> > > > 0x);
> > > >  mutex_unlock(&adev->grbm_idx_mutex);
> > > > --
> > > > 2.24.1
> > > > 
> > > > ___
> > > > amd-gfx mailing list
> > > > amd-gfx@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: Add aconnector condition check for dpcd read

2020-02-21 Thread Gravenor, Joseph
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: Joseph Gravenor 


From: Liu, Zhan 
Sent: Monday, February 10, 2020 4:08 PM
To: amd-gfx@lists.freedesktop.org ; Liu, Zhan 
; Gravenor, Joseph 
Subject: [PATCH] drm/amd/display: Add aconnector condition check for dpcd read

[Why]
core_link_read_dpcd() will invoke dm_helpers_dp_read_dpcd(),
which needs to read dpcd info with the help of aconnector.
If aconnector (dc->links[i]->priv) is NULL, then dpcd status
cannot be read.

As a result, the dpcd read fails and an error line is
printed in dmesg.log:
"*ERROR* Failed to found connector for link!"

[How]
Make sure that aconnector (dc->links[i]->priv) is not NULL
before reading the dpcd status.

Signed-off-by: Zhan Liu 
---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 42fcfee2c31b..92e1574073fd 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1331,11 +1331,20 @@ void dcn10_init_hw(struct dc *dc)
 if (dc->links[i]->connector_signal != 
SIGNAL_TYPE_DISPLAY_PORT) {
 continue;
 }
-   /* if any of the displays are lit up turn them off */
-   status = core_link_read_dpcd(dc->links[i], DP_SET_POWER,
-&dpcd_power_state, 
sizeof(dpcd_power_state));
-   if (status == DC_OK && dpcd_power_state == 
DP_POWER_STATE_D0) {
-   dp_receiver_power_ctrl(dc->links[i], false);
+
+   /*
+* core_link_read_dpcd() will invoke dm_helpers_dp_read_dpcd(),
+* which needs to read dpcd info with the help of aconnector.
+* If aconnector (dc->links[i]->priv) is NULL, then dpcd status
+* cannot be read.
+*/
+   if (dc->links[i]->priv) {
+   /* if any of the displays are lit up turn them off */
+   status = core_link_read_dpcd(dc->links[i], DP_SET_POWER,
+   &dpcd_power_state, sizeof(dpcd_power_state));
+   if (status == DC_OK && dpcd_power_state == DP_POWER_STATE_D0) {
+   dp_receiver_power_ctrl(dc->links[i], false);
+   }
 }
 }
 }
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] SWDEV-220585 - Navi12 L1 policy GC regs WAR #1

2020-02-21 Thread Rohit Khaire
This change disables programming of GCVM_L2_CNTL* regs on VF.

Signed-off-by: Rohit Khaire 
---
 drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
index b70c7b483c24..e0654a216ab5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
@@ -135,6 +135,10 @@ static void gfxhub_v2_0_init_cache_regs(struct 
amdgpu_device *adev)
 {
uint32_t tmp;
 
+   /* These regs are not accessible for VF, PF will program these in SRIOV */
+   if (amdgpu_sriov_vf(adev))
+   return;
+
/* Setup L2 cache */
tmp = RREG32_SOC15(GC, 0, mmGCVM_L2_CNTL);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, ENABLE_L2_CACHE, 1);
@@ -298,9 +302,11 @@ void gfxhub_v2_0_gart_disable(struct amdgpu_device *adev)
ENABLE_ADVANCED_DRIVER_MODEL, 0);
WREG32_SOC15(GC, 0, mmGCMC_VM_MX_L1_TLB_CNTL, tmp);
 
-   /* Setup L2 cache */
-   WREG32_FIELD15(GC, 0, GCVM_L2_CNTL, ENABLE_L2_CACHE, 0);
-   WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL3, 0);
+   if (!amdgpu_sriov_vf(adev)) {
+   /* Setup L2 cache */
+   WREG32_FIELD15(GC, 0, GCVM_L2_CNTL, ENABLE_L2_CACHE, 0);
+   WREG32_SOC15(GC, 0, mmGCVM_L2_CNTL3, 0);
+   }
 }
 
 /**
-- 
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/amdgpu: Add gfxoff debugfs entry

2020-02-21 Thread Alex Deucher
On Fri, Feb 21, 2020 at 1:45 PM Tom St Denis  wrote:
>
> Write a 32-bit value of zero to disable GFXOFF and write a 32-bit
> value of non-zero to enable GFXOFF.
>
> Signed-off-by: Tom St Denis 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 56 +
>  1 file changed, 56 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..3bb74056b9d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -842,6 +842,55 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, 
> char __user *buf,
> return result;
>  }
>
> +/**
> + * amdgpu_debugfs_regs_gfxoff_write - Enable/disable GFXOFF
> + *
> + * @f: open file handle
> + * @buf: User buffer to write data from
> + * @size: Number of bytes to write
> + * @pos:  Offset to seek to
> + *
> + * Write a 32-bit zero to disable or a 32-bit non-zero to enable
> + */
> +static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user 
> *buf,
> +size_t size, loff_t *pos)
> +{
> +   struct amdgpu_device *adev = file_inode(f)->i_private;
> +   ssize_t result = 0;
> +   int r;
> +
> +   if (size & 0x3 || *pos & 0x3)
> +   return -EINVAL;
> +
> +   r = pm_runtime_get_sync(adev->ddev->dev);

Not really directly related to this patch, but if you are using umr
for debugging, we should probably disable runtime pm, otherwise the
entire GPU may be powered down between accesses.  There is already an
interface to do that via the core kernel power subsystem in sysfs.
E.g.,
/sys/class/drm/card0/device/power/control
/sys/class/drm/card0/device/power/runtime_status
Something else to look at for umr.
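A minimal sketch of what a tool like umr could do with that sysfs interface. The card index and file paths are assumptions (they vary per system), and writing these files requires root:

```python
from pathlib import Path

def set_runtime_pm(enabled: bool,
                   control: str = "/sys/class/drm/card0/device/power/control") -> str:
    """Toggle runtime PM for a DRM device via sysfs.

    Writing "auto" lets the kernel suspend the device when idle;
    writing "on" keeps it powered while we poke registers.
    Returns the value the control file holds afterwards.
    """
    path = Path(control)
    path.write_text("auto" if enabled else "on")
    return path.read_text().strip()

def runtime_status(status: str = "/sys/class/drm/card0/device/power/runtime_status") -> str:
    """Read the current runtime PM state, e.g. "active" or "suspended"."""
    return Path(status).read_text().strip()
```

A debugger would call `set_runtime_pm(False)` before a register dump and `set_runtime_pm(True)` afterwards to restore autosuspend.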

We don't store the state for when we dynamically turn it off like this, 
so if we get a GPU reset or a power management event (runtime pm or 
S3), GFXOFF will be re-enabled at that point.  This is just for
debugging though so:
Acked-by: Alex Deucher 

Alex
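As a usage sketch of the interface this patch adds: the write handler consumes one uint32 per 4 bytes via get_user(), so userspace writes a packed native-endian 32-bit word. The debugfs node path below is an assumption based on where amdgpu's other register files live, and writing it requires root:

```python
import struct

def gfxoff_payload(enable: bool) -> bytes:
    """Pack the 32-bit native-endian word the write handler expects.

    0 -> amdgpu_gfx_off_ctrl(adev, false), i.e. GFXOFF disabled;
    any nonzero value re-enables GFXOFF.
    """
    return struct.pack("=I", 1 if enable else 0)

def set_gfxoff(enable: bool,
               node: str = "/sys/kernel/debug/dri/0/amdgpu_gfxoff") -> int:
    """Write the packed word to the (assumed) debugfs node.

    Returns the number of bytes written (4 on success).
    """
    with open(node, "wb") as f:
        return f.write(gfxoff_payload(enable))
```

The handler rejects writes whose size or offset is not 4-byte aligned (`size & 0x3 || *pos & 0x3`), which is why the payload is exactly one packed word per toggle.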


> +   if (r < 0)
> +   return r;
> +
> +   while (size) {
> +   uint32_t value;
> +
> +   r = get_user(value, (uint32_t *)buf);
> +   if (r) {
> +   pm_runtime_mark_last_busy(adev->ddev->dev);
> +   pm_runtime_put_autosuspend(adev->ddev->dev);
> +   return r;
> +   }
> +
> +   amdgpu_gfx_off_ctrl(adev, value ? true : false);
> +
> +   result += 4;
> +   buf += 4;
> +   *pos += 4;
> +   size -= 4;
> +   }
> +
> +   pm_runtime_mark_last_busy(adev->ddev->dev);
> +   pm_runtime_put_autosuspend(adev->ddev->dev);
> +
> +   return result;
> +}
> +
> +
>  static const struct file_operations amdgpu_debugfs_regs_fops = {
> .owner = THIS_MODULE,
> .read = amdgpu_debugfs_regs_read,
> @@ -890,6 +939,11 @@ static const struct file_operations 
> amdgpu_debugfs_gpr_fops = {
> .llseek = default_llseek
>  };
>
> +static const struct file_operations amdgpu_debugfs_gfxoff_fops = {
> +   .owner = THIS_MODULE,
> +   .write = amdgpu_debugfs_gfxoff_write,
> +};
> +
>  static const struct file_operations *debugfs_regs[] = {
> &amdgpu_debugfs_regs_fops,
> &amdgpu_debugfs_regs_didt_fops,
> @@ -899,6 +953,7 @@ static const struct file_operations *debugfs_regs[] = {
> &amdgpu_debugfs_sensors_fops,
> &amdgpu_debugfs_wave_fops,
> &amdgpu_debugfs_gpr_fops,
> +   &amdgpu_debugfs_gfxoff_fops,
>  };
>
>  static const char *debugfs_regs_names[] = {
> @@ -910,6 +965,7 @@ static const char *debugfs_regs_names[] = {
> "amdgpu_sensors",
> "amdgpu_wave",
> "amdgpu_gpr",
> +   "amdgpu_gfxoff",
>  };
>
>  /**
> --
> 2.24.1
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH] drm/amd/amdgpu: Add gfxoff debugfs entry

2020-02-21 Thread Tom St Denis
Write a 32-bit value of zero to disable GFXOFF and write a 32-bit
value of non-zero to enable GFXOFF.

Signed-off-by: Tom St Denis 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 56 +
 1 file changed, 56 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 7379910790c9..3bb74056b9d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -842,6 +842,55 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, 
char __user *buf,
return result;
 }
 
+/**
+ * amdgpu_debugfs_regs_gfxoff_write - Enable/disable GFXOFF
+ *
+ * @f: open file handle
+ * @buf: User buffer to write data from
+ * @size: Number of bytes to write
+ * @pos:  Offset to seek to
+ *
+ * Write a 32-bit zero to disable or a 32-bit non-zero to enable
+ */
+static ssize_t amdgpu_debugfs_gfxoff_write(struct file *f, const char __user 
*buf,
+size_t size, loff_t *pos)
+{
+   struct amdgpu_device *adev = file_inode(f)->i_private;
+   ssize_t result = 0;
+   int r;
+
+   if (size & 0x3 || *pos & 0x3)
+   return -EINVAL;
+
+   r = pm_runtime_get_sync(adev->ddev->dev);
+   if (r < 0)
+   return r;
+
+   while (size) {
+   uint32_t value;
+
+   r = get_user(value, (uint32_t *)buf);
+   if (r) {
+   pm_runtime_mark_last_busy(adev->ddev->dev);
+   pm_runtime_put_autosuspend(adev->ddev->dev);
+   return r;
+   }
+
+   amdgpu_gfx_off_ctrl(adev, value ? true : false);
+
+   result += 4;
+   buf += 4;
+   *pos += 4;
+   size -= 4;
+   }
+
+   pm_runtime_mark_last_busy(adev->ddev->dev);
+   pm_runtime_put_autosuspend(adev->ddev->dev);
+
+   return result;
+}
+
+
 static const struct file_operations amdgpu_debugfs_regs_fops = {
.owner = THIS_MODULE,
.read = amdgpu_debugfs_regs_read,
@@ -890,6 +939,11 @@ static const struct file_operations 
amdgpu_debugfs_gpr_fops = {
.llseek = default_llseek
 };
 
+static const struct file_operations amdgpu_debugfs_gfxoff_fops = {
+   .owner = THIS_MODULE,
+   .write = amdgpu_debugfs_gfxoff_write,
+};
+
 static const struct file_operations *debugfs_regs[] = {
&amdgpu_debugfs_regs_fops,
&amdgpu_debugfs_regs_didt_fops,
@@ -899,6 +953,7 @@ static const struct file_operations *debugfs_regs[] = {
&amdgpu_debugfs_sensors_fops,
&amdgpu_debugfs_wave_fops,
&amdgpu_debugfs_gpr_fops,
+   &amdgpu_debugfs_gfxoff_fops,
 };
 
 static const char *debugfs_regs_names[] = {
@@ -910,6 +965,7 @@ static const char *debugfs_regs_names[] = {
"amdgpu_sensors",
"amdgpu_wave",
"amdgpu_gpr",
+   "amdgpu_gfxoff",
 };
 
 /**
-- 
2.24.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-21 Thread Zhou, David(ChunMing)
[AMD Official Use Only - Internal Distribution Only]

That's fine to me.

-David

From: Koenig, Christian 
Sent: Friday, February 21, 2020 11:33 PM
To: Deucher, Alexander ; Christian König 
; Zhou, David(ChunMing) 
; He, Jacob ; 
amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

I would just do this as part of the vm_flush() callback on the ring.

E.g. check if the VMID you want to flush is reserved and if yes enable SPM.

Maybe pass along a flag or something in the job to make things easier.

Christian.

On 21.02.20 at 16:31, Deucher, Alexander wrote:

[AMD Public Use]

We already have the RESERVE_VMID ioctl interface, can't we just use that 
internally in the kernel to update the rlc register via the ring when we 
schedule the relevant IB?  E.g., add a new ring callback to set SPM state and 
then set it to the reserved vmid before we schedule the ib, and then reset it 
to 0 after the IB in amdgpu_ib_schedule().

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 4b2342d11520..e0db9362c6ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -185,6 +185,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
if (ring->funcs->insert_start)
ring->funcs->insert_start(ring);

+   if (ring->funcs->setup_spm)
+   ring->funcs->setup_spm(ring, job);
+
if (job) {
r = amdgpu_vm_flush(ring, job, need_pipe_sync);
if (r) {
@@ -273,6 +276,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
return r;
}

+   if (ring->funcs->setup_spm)
+   ring->funcs->setup_spm(ring, NULL);
+
if (ring->funcs->insert_end)
ring->funcs->insert_end(ring);



Alex

From: amd-gfx 

 on behalf of Christian König 

Sent: Friday, February 21, 2020 5:28 AM
To: Zhou, David(ChunMing) ; 
He, Jacob ; Koenig, Christian 
; 
amd-gfx@lists.freedesktop.org 

Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

That would probably be a no-go, but we could enhance the kernel driver to 
update the RLC_SPM_VMID register with the reserved VMID.

Handling that in userspace is most likely not going to work anyway, since the RLC 
registers are usually not accessible by userspace.

Regards,
Christian.

On 20.02.20 at 16:15, Zhou, David(ChunMing) wrote:

[AMD Official Use Only - Internal Distribution Only]



You can enhance amdgpu_vm_ioctl in amdgpu_vm.c to return the vmid to userspace.



-David





From: He, Jacob 
Sent: Thursday, February 20, 2020 10:46 PM
To: Zhou, David(ChunMing) ; 
Koenig, Christian ; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



amdgpu_vm_reserve_vmid doesn't return the reserved vmid back to user space, 
so there is no way for the user-mode driver to update RLC_SPM_VMID.



Thanks

Jacob



From: He, Jacob
Sent: Thursday, February 20, 2020 6:20 PM
To: Zhou, David(ChunMing); Koenig, 
Christian; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



Looks like amdgpu_vm_reserve_vmid could work; let me try updating 
RLC_SPM_VMID with PM4 packets in the UMD.



Thanks

Jacob



From: Zhou, David(ChunMing)
Sent: Thursday, February 20, 2020 10:13 AM
To: Koenig, Christian; He, 
Jacob; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



[AMD Official Use Only - Internal Distribution Only]

Christian is right here; that will cause many problems if we simply use the VMID in 
the kernel.
We already have a pair of interfaces for RGP; I think you can use them instead of 
involving an additional kernel change:
amdgpu_vm_reserve_vmid / amdgpu_vm_unreserve_vmid.

-David

-----Original Message-----
From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> On Behalf Of Christian König
Sent: Wednesday, February 19, 2020 7:03 PM
To: He, Jacob <jacob...@amd.com>; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

On 19.02.20 at 11:15, Jacob He wrote:
> [WHY]
> When SPM trace enabled, SPM_VMID should be updated with the current
> vmid.
>
> [HOW]
> Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Christian König
> Probably simpler just to do on/off and let userspace determine timing 
> but other than that ya sounds good.


Works for me as long as we only expose it through debugfs for root. 
Otherwise there is always the risk of userspace forgetting to turn it on 
again.


Christian.

On 21.02.20 at 17:06, Tom St Denis wrote:
Probably simpler just to do on/off and let userspace determine timing 
but other than that ya sounds good.



For things like umr's --top, which runs indefinitely, having a timer 
wouldn't work.  Similarly, --waves can take a long time depending on 
activity and the asic.



Tom



On 2020-02-21 11:04 a.m., Christian König wrote:

Ok how about this:

We add a debugfs file which, when read, returns the GFXOFF status and, 
when written with a number, disables GFXOFF for N seconds, with 0 
meaning forever.


Umr gets a new commandline option to write to that file before 
reading registers.


This way the user can still disable it if it causes any problems. 
Does that sound like a plan?


Regards,
Christian.
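The umr side of that plan could look like the following minimal sketch. Everything here is hypothetical: the node name and the text format follow the proposal above, and no such file existed at the time of this thread:

```python
from pathlib import Path

def disable_gfxoff_for(seconds: int,
                       node: str = "/sys/kernel/debug/dri/0/amdgpu_gfxoff_timer") -> None:
    """Proposed semantics: writing N disables GFXOFF for N seconds,
    and 0 means "until re-enabled"."""
    Path(node).write_text(str(seconds))

def gfxoff_status(node: str = "/sys/kernel/debug/dri/0/amdgpu_gfxoff_timer") -> str:
    """Reading the file would report the current GFXOFF status."""
    return Path(node).read_text().strip()
```

Before a register dump, umr would call `disable_gfxoff_for(0)` and read the status back to confirm GFXOFF is actually off.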

On 21.02.20 at 16:56, Deucher, Alexander wrote:


[AMD Public Use]


Not at the moment.  But we could add a debugfs file which just wraps 
amdgpu_gfx_off_ctrl(). That said, maybe we just add a delay here or 
use a timer to delay turning gfxoff back on again so that we 
aren't turning it on and off so rapidly.


Alex

 


*From:* Christian König 
*Sent:* Friday, February 21, 2020 10:43 AM
*To:* Deucher, Alexander ; Huang, Ray 
; Liu, Monk 
*Cc:* StDenis, Tom ; Alex Deucher 
; amd-gfx list 
*Subject:* Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

Do we have a way to disable GFXOFF on the fly?

If not maybe it would be a good idea to add a separate debugfs file 
to do this.


Christian.

On 21.02.20 at 16:39, Deucher, Alexander wrote:


[AMD Public Use]


If we are trying to debug a reproducible hang, probably best to 
just disable gfxoff before messing with it to remove that as a 
factor.  Otherwise, the method included in this patch is the proper 
way to disable/enable GFXOFF dynamically.


Alex

 

*From:* amd-gfx  
 on behalf of 
Christian König  


*Sent:* Friday, February 21, 2020 10:27 AM
*To:* Huang, Ray  ; 
Liu, Monk  
*Cc:* StDenis, Tom  
; Alex Deucher  
; amd-gfx list 
 
*Subject:* Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a 
register you usually hit a hang, and in that case KIQ has probably 
already died
> If CP is busy, the gfx should be in "on" state at that time, we 
needn't use KIQ.


Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Because the register debug interface is meant to be used when the 
ASIC is completely locked up. Sending messages to the SMU is not 
really going to work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx  On Behalf Of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom 
>> Cc: Alex Deucher ; amd-gfx list 
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is 
ready

>>> for pushing out.
>>>
>> How about using RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis 
  wrote:
> Signed-off-by: Tom St Denis  


 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher  



> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>    1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Tom St Denis
Probably simpler just to do on/off and let userspace determine timing 
but other than that ya sounds good.



For things like umr's --top, which runs indefinitely, having a timer 
wouldn't work.  Similarly, --waves can take a long time depending on 
activity and the asic.



Tom



On 2020-02-21 11:04 a.m., Christian König wrote:

Ok how about this:

We add a debugfs file which, when read, returns the GFXOFF status and, 
when written with a number, disables GFXOFF for N seconds, with 0 
meaning forever.


Umr gets a new commandline option to write to that file before reading 
registers.


This way the user can still disable it if it causes any problems. Does 
that sound like a plan?


Regards,
Christian.

On 21.02.20 at 16:56, Deucher, Alexander wrote:


[AMD Public Use]


Not at the moment.  But we could add a debugfs file which just wraps 
amdgpu_gfx_off_ctrl(). That said, maybe we just add a delay here or 
use a timer to delay turning gfxoff back on again so that we aren't 
turning it on and off so rapidly.


Alex


*From:* Christian König 
*Sent:* Friday, February 21, 2020 10:43 AM
*To:* Deucher, Alexander ; Huang, Ray 
; Liu, Monk 
*Cc:* StDenis, Tom ; Alex Deucher 
; amd-gfx list 
*Subject:* Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

Do we have a way to disable GFXOFF on the fly?

If not maybe it would be a good idea to add a separate debugfs file 
to do this.


Christian.

On 21.02.20 at 16:39, Deucher, Alexander wrote:


[AMD Public Use]


If we are trying to debug a reproducible hang, probably best to just 
disable gfxoff before messing with it to remove that as a 
factor.  Otherwise, the method included in this patch is the proper 
way to disable/enable GFXOFF dynamically.


Alex


*From:* amd-gfx  
 on behalf of 
Christian König  


*Sent:* Friday, February 21, 2020 10:27 AM
*To:* Huang, Ray  ; 
Liu, Monk  
*Cc:* StDenis, Tom  
; Alex Deucher  
; amd-gfx list 
 
*Subject:* Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a register 
you usually hit a hang, and in that case KIQ has probably already died
> If CP is busy, the gfx should be in "on" state at that time, we 
needn't use KIQ.


Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Because the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx  On Behalf Of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom 
>> Cc: Alex Deucher ; amd-gfx list 
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is 
ready

>>> for pushing out.
>>>
>> How about using RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis 
  wrote:
> Signed-off-by: Tom St Denis  


 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher  



> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>    1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int  
amdgpu_debugfs_process_reg_op(bool read, struct file *f,

>   if (pm_pg_lock)
> mut

[PATCH 31/35] drm/amd/display: optimize program wm and clks

2020-02-21 Thread Rodrigo Siqueira
From: Yongqiang Sun 

[Why]
In some display configurations, such as a 1080p monitor playing a 1080p
video, if the user presses ALT+F4 to exit the Movies & TV app, there is a
chance the clocks stay the same and only the watermark changes. The
current clock optimization mechanism then leaves the watermark high after
exiting the app.

[How]
When programming watermarks, return whether they still need to be
optimized, and perform the optimization afterwards.

Signed-off-by: Yongqiang Sun 
Reviewed-by: Tony Cheng 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc.c  |  10 +-
 drivers/gpu/drm/amd/display/dc/dc.h   |   3 +-
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.c   | 101 +
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |   8 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  33 +++--
 .../drm/amd/display/dc/dcn20/dcn20_hubbub.c   |  11 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c|  26 ++--
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.c   | 137 +-
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.h   |   8 +-
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |   2 +-
 10 files changed, 237 insertions(+), 102 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 7513aa71da38..6dece1ee30bf 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1365,7 +1365,7 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
int i;
struct dc_state *context = dc->current_state;
 
-   if (!dc->optimized_required || dc->optimize_seamless_boot_streams > 0)
+   if ((!dc->clk_optimized_required && !dc->wm_optimized_required) || 
dc->optimize_seamless_boot_streams > 0)
return true;
 
post_surface_trace(dc);
@@ -1377,8 +1377,6 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
dc->hwss.disable_plane(dc, 
&context->res_ctx.pipe_ctx[i]);
}
 
-   dc->optimized_required = false;
-
dc->hwss.optimize_bandwidth(dc, context);
return true;
 }
@@ -1826,10 +1824,10 @@ enum surface_update_type 
dc_check_update_surfaces_for_stream(
// If there's an available clock comparator, we use that.
if (dc->clk_mgr->funcs->are_clock_states_equal) {
if 
(!dc->clk_mgr->funcs->are_clock_states_equal(&dc->clk_mgr->clks, 
&dc->current_state->bw_ctx.bw.dcn.clk))
-   dc->optimized_required = true;
+   dc->clk_optimized_required = true;
// Else we fallback to mem compare.
} else if (memcmp(&dc->current_state->bw_ctx.bw.dcn.clk, 
&dc->clk_mgr->clks, offsetof(struct dc_clocks, prev_p_state_change_support)) != 
0) {
-   dc->optimized_required = true;
+   dc->clk_optimized_required = true;
}
}
 
@@ -2200,7 +2198,7 @@ static void commit_planes_for_stream(struct dc *dc,
dc->optimize_seamless_boot_streams--;
 
if (dc->optimize_seamless_boot_streams == 0)
-   dc->optimized_required = true;
+   dc->clk_optimized_required = true;
}
}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index e10d5a7d0cb8..bc1220dce3b1 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -520,7 +520,8 @@ struct dc {
struct dce_hwseq *hwseq;
 
/* Require to optimize clocks and bandwidth for added/removed planes */
-   bool optimized_required;
+   bool clk_optimized_required;
+   bool wm_optimized_required;
 
/* Require to maintain clocks and bandwidth for UEFI enabled HW */
int optimize_seamless_boot_streams;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
index 3e851713cf8d..e441c149ff40 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
@@ -243,7 +243,7 @@ void hubbub1_wm_change_req_wa(struct hubbub *hubbub)
DCHUBBUB_ARB_WATERMARK_CHANGE_REQUEST, 1);
 }
 
-void hubbub1_program_urgent_watermarks(
+bool hubbub1_program_urgent_watermarks(
struct hubbub *hubbub,
struct dcn_watermark_set *watermarks,
unsigned int refclk_mhz,
@@ -251,6 +251,7 @@ void hubbub1_program_urgent_watermarks(
 {
struct dcn10_hubbub *hubbub1 = TO_DCN10_HUBBUB(hubbub);
uint32_t prog_wm_value;
+   bool wm_pending = false;
 
/* Repeat for water mark set A, B, C and D. */
/* clock state A */
@@ -264,7 +265,8 @@ void hubbub1_program_urgent_watermarks(
DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_A calculated =%d\n"
"HW register va

[PATCH 33/35] drm/amd/display: Temporarily disable stutter on MPO transition

2020-02-21 Thread Rodrigo Siqueira
From: George Shen 

[Why]
Underflow sometimes occurs during transition into MPO with stutter
enabled.

[How]
When transitioning into MPO, disable stutter. Re-enable stutter within
one frame.

Signed-off-by: George Shen 
Signed-off-by: Tony Cheng 
Reviewed-by: Eric Yang 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 14 ++
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c| 19 ++-
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.c   |  1 +
 .../drm/amd/display/dc/dcn21/dcn21_resource.c |  1 +
 .../amd/display/dc/inc/hw_sequencer_private.h |  3 +++
 5 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index c381d347208f..c31ea11f10fc 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2925,6 +2925,7 @@ void dcn10_update_pending_status(struct pipe_ctx 
*pipe_ctx)
struct dc_plane_state *plane_state = pipe_ctx->plane_state;
struct timing_generator *tg = pipe_ctx->stream_res.tg;
bool flip_pending;
+   struct dc *dc = plane_state->ctx->dc;
 
if (plane_state == NULL)
return;
@@ -2942,6 +2943,19 @@ void dcn10_update_pending_status(struct pipe_ctx 
*pipe_ctx)
plane_state->status.is_right_eye =

!tg->funcs->is_stereo_left_eye(pipe_ctx->stream_res.tg);
}
+
+   if 
(dc->hwseq->wa_state.disallow_self_refresh_during_multi_plane_transition_applied)
 {
+   struct dce_hwseq *hwseq = dc->hwseq;
+   struct timing_generator *tg = 
dc->res_pool->timing_generators[0];
+   unsigned int cur_frame = tg->funcs->get_frame_count(tg);
+
+   if (cur_frame != 
hwseq->wa_state.disallow_self_refresh_during_multi_plane_transition_applied_on_frame)
 {
+   struct hubbub *hubbub = dc->res_pool->hubbub;
+
+   hubbub->funcs->allow_self_refresh_control(hubbub, 
!dc->debug.disable_stutter);
+   
hwseq->wa_state.disallow_self_refresh_during_multi_plane_transition_applied = 
false;
+   }
+   }
 }
 
 void dcn10_update_dchub(struct dce_hwseq *hws, struct dchub_init_data *dh_data)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index cf13b1db1025..97c0c8ced8e5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1584,6 +1584,7 @@ void dcn20_post_unlock_program_front_end(
 {
int i;
const unsigned int TIMEOUT_FOR_PIPE_ENABLE_MS = 100;
+   struct dce_hwseq *hwseq = dc->hwseq;
 
DC_LOGGER_INIT(dc->ctx->logger);
 
@@ -1611,8 +1612,24 @@ void dcn20_post_unlock_program_front_end(
}
 
/* WA to apply WM setting*/
-   if (dc->hwseq->wa.DEGVIDCN21)
+   if (hwseq->wa.DEGVIDCN21)

dc->res_pool->hubbub->funcs->apply_DEDCN21_147_wa(dc->res_pool->hubbub);
+
+
+   /* WA for stutter underflow during MPO transitions when adding 2nd 
plane */
+   if (hwseq->wa.disallow_self_refresh_during_multi_plane_transition) {
+
+   if (dc->current_state->stream_status[0].plane_count == 1 &&
+   context->stream_status[0].plane_count > 1) {
+
+   struct timing_generator *tg = 
dc->res_pool->timing_generators[0];
+
+   
dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub, 
false);
+
+   
hwseq->wa_state.disallow_self_refresh_during_multi_plane_transition_applied = 
true;
+   
hwseq->wa_state.disallow_self_refresh_during_multi_plane_transition_applied_on_frame
 = tg->funcs->get_frame_count(tg);
+   }
+   }
 }
 
 void dcn20_prepare_bandwidth(
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
index 8440975206e0..5e2d14b897af 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
@@ -702,6 +702,7 @@ static const struct hubbub_funcs hubbub21_funcs = {
.wm_read_state = hubbub21_wm_read_state,
.get_dchub_ref_freq = hubbub2_get_dchub_ref_freq,
.program_watermarks = hubbub21_program_watermarks,
+   .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
.apply_DEDCN21_147_wa = hubbub21_apply_DEDCN21_147_wa,
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index f453de10aa2d..aa73025c1747 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -1564,6 +1564,7 @@ static st

[PATCH 34/35] drm/amd/display: Access patches from stream for ignore MSA monitor patch

2020-02-21 Thread Rodrigo Siqueira
From: Jaehyun Chung 

[Why]
The system will crash when trying to access the local sink in
core_link_enable_stream in the MST case.

[How]
Access patches directly from stream.

Signed-off-by: Jaehyun Chung 
Reviewed-by: Aric Cyr 
Reviewed-by: Ashley Thomas 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 2ccc2db93f5d..02e1ad318203 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -3095,8 +3095,8 @@ void core_link_enable_stream(
dc->hwss.unblank_stream(pipe_ctx,
&pipe_ctx->stream->link->cur_link_settings);
 
-   if 
(stream->link->local_sink->edid_caps.panel_patch.delay_ignore_msa > 0)
-   
msleep(stream->link->local_sink->edid_caps.panel_patch.delay_ignore_msa);
+   if (stream->sink_patches.delay_ignore_msa > 0)
+   msleep(stream->sink_patches.delay_ignore_msa);
 
if (dc_is_dp_signal(pipe_ctx->stream->signal))
enable_stream_features(pipe_ctx);
-- 
2.25.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 30/35] drm/amd/display: correct dml surface size assignment

2020-02-21 Thread Rodrigo Siqueira
From: Dmytro Laktyushkin 

We need to assign the surface size rather than the viewport size to the
surface size dml variables.

Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Eric Bernstein 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index 193cc9c6b180..6b525c52124c 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -393,11 +393,11 @@ static void fetch_pipe_params(struct display_mode_lib 
*mode_lib)

mode_lib->vba.ViewportYStartC[mode_lib->vba.NumberOfActivePlanes] =
src->viewport_y_c;
mode_lib->vba.PitchY[mode_lib->vba.NumberOfActivePlanes] = 
src->data_pitch;
-   
mode_lib->vba.SurfaceHeightY[mode_lib->vba.NumberOfActivePlanes] = 
src->viewport_height;
-   mode_lib->vba.SurfaceWidthY[mode_lib->vba.NumberOfActivePlanes] 
= src->viewport_width;
+   mode_lib->vba.SurfaceWidthY[mode_lib->vba.NumberOfActivePlanes] 
= src->surface_width_y;
+   
mode_lib->vba.SurfaceHeightY[mode_lib->vba.NumberOfActivePlanes] = 
src->surface_height_y;
mode_lib->vba.PitchC[mode_lib->vba.NumberOfActivePlanes] = 
src->data_pitch_c;
-   
mode_lib->vba.SurfaceHeightC[mode_lib->vba.NumberOfActivePlanes] = 
src->viewport_height_c;
-   mode_lib->vba.SurfaceWidthC[mode_lib->vba.NumberOfActivePlanes] 
= src->viewport_width_c;
+   
mode_lib->vba.SurfaceHeightC[mode_lib->vba.NumberOfActivePlanes] = 
src->surface_height_c;
+   mode_lib->vba.SurfaceWidthC[mode_lib->vba.NumberOfActivePlanes] 
= src->surface_width_c;
mode_lib->vba.DCCMetaPitchY[mode_lib->vba.NumberOfActivePlanes] 
= src->meta_pitch;
mode_lib->vba.DCCMetaPitchC[mode_lib->vba.NumberOfActivePlanes] 
= src->meta_pitch_c;
mode_lib->vba.HRatio[mode_lib->vba.NumberOfActivePlanes] = 
scl->hscl_ratio;
-- 
2.25.0



[PATCH 32/35] drm/amd/display: Make clock table struct more accessible

2020-02-21 Thread Rodrigo Siqueira
From: Sung Lee 

[WHY]
Currently the clock table struct is very far down in the bounding box
struct, making it hard to find while debugging, especially when using
dal3dbgext.

[HOW]
Move it up so it is the first struct defined, and therefore much easier
to find and access.

Signed-off-by: Sung Lee 
Reviewed-by: Eric Yang 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 114f861f7f3e..a56b611db15e 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
@@ -68,6 +68,7 @@ struct _vcs_dpi_voltage_scaling_st {
 };
 
 struct _vcs_dpi_soc_bounding_box_st {
+   struct _vcs_dpi_voltage_scaling_st clock_limits[MAX_CLOCK_LIMIT_STATES];
double sr_exit_time_us;
double sr_enter_plus_exit_time_us;
double urgent_latency_us;
@@ -111,7 +112,6 @@ struct _vcs_dpi_soc_bounding_box_st {
double xfc_xbuf_latency_tolerance_us;
int use_urgent_burst_bw;
unsigned int num_states;
-   struct _vcs_dpi_voltage_scaling_st clock_limits[MAX_CLOCK_LIMIT_STATES];
double min_dcfclk;
bool do_urgent_latency_adjustment;
double urgent_latency_adjustment_fabric_clock_component_us;
-- 
2.25.0



[PATCH 35/35] drm/amd/display: limit display clock to 100MHz to avoid FIFO error

2020-02-21 Thread Rodrigo Siqueira
From: Yu-ting Shen 

[Why]
When changing the display clock, the SMU needs to power up the DFS and
use DENTIST to ramp the DFS DID to the target frequency before switching
back to bypass.

[How]
Fix the minimum display clock to 100MHz; this is the same workaround as on PCO.

Signed-off-by: Yu-ting Shen 
Reviewed-by: Tony Cheng 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 3 +++
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 883ecd2ed4c8..78971b6b195c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -2786,6 +2786,9 @@ void dcn20_calculate_dlg_params(
!= 
dm_dram_clock_change_unsupported;
context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;
 
+   if (context->bw_ctx.bw.dcn.clk.dispclk_khz < dc->debug.min_disp_clk_khz)
+   context->bw_ctx.bw.dcn.clk.dispclk_khz = 
dc->debug.min_disp_clk_khz;
+
/*
 * An artifact of dml pipe split/odm is that pipes get merged back 
together for
 * calculation. Therefore we need to only extract for first pipe in 
ascending index order
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index aa73025c1747..dce4966eca20 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -859,6 +859,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
+   .min_disp_clk_khz = 10,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
-- 
2.25.0



[PATCH 28/35] drm/amd/display: Fix RV2 Variant Detection

2020-02-21 Thread Rodrigo Siqueira
From: Michael Strauss 

[WHY]
RV2 and variants are indistinguishable by hw internal rev alone, need to
be distinguishable in order to correctly set max vlevel.  Previous
detection change incorrectly checked for hw internal rev.

[HOW]
Use the PCI revision ID to check whether the chip is RV2 or a low-power
variant. Correct a few overlapping ASICREV range checks.

Signed-off-by: Michael Strauss 
Reviewed-by: Michael Strauss 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 20 ++-
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |  7 ---
 .../gpu/drm/amd/display/include/dal_asic_id.h | 12 +--
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c 
b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 1a37550731de..f0f07b160152 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -703,11 +703,19 @@ static void hack_bounding_box(struct dcn_bw_internal_vars 
*v,
 }
 
 
-unsigned int get_highest_allowed_voltage_level(uint32_t hw_internal_rev)
+unsigned int get_highest_allowed_voltage_level(uint32_t hw_internal_rev, 
uint32_t pci_revision_id)
 {
-   /* for dali & pollock, the highest voltage level we want is 0 */
-   if (ASICREV_IS_POLLOCK(hw_internal_rev) || 
ASICREV_IS_DALI(hw_internal_rev))
-   return 0;
+   /* for low power RV2 variants, the highest voltage level we want is 0 */
+   if (ASICREV_IS_RAVEN2(hw_internal_rev))
+   switch (pci_revision_id) {
+   case PRID_DALI_DE:
+   case PRID_DALI_DF:
+   case PRID_DALI_E3:
+   case PRID_DALI_E4:
+   return 0;
+   default:
+   break;
+   }
 
/* we are ok with all levels */
return 4;
@@ -1277,7 +1285,9 @@ bool dcn_validate_bandwidth(
PERFORMANCE_TRACE_END();
BW_VAL_TRACE_FINISH();
 
-   if (bw_limit_pass && v->voltage_level <= 
get_highest_allowed_voltage_level(dc->ctx->asic_id.hw_internal_rev))
+   if (bw_limit_pass && v->voltage_level <= 
get_highest_allowed_voltage_level(
+   
dc->ctx->asic_id.hw_internal_rev,
+   
dc->ctx->asic_id.pci_revision_id))
return true;
else
return false;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
index 2f43f1618db8..8ec2dfe45d40 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
@@ -153,13 +153,6 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, 
struct pp_smu_funcs *p
 
 #if defined(CONFIG_DRM_AMD_DC_DCN)
case FAMILY_RV:
-   if (ASICREV_IS_DALI(asic_id.hw_internal_rev) ||
-   ASICREV_IS_POLLOCK(asic_id.hw_internal_rev)) {
-   /* TEMP: this check has to come before 
ASICREV_IS_RENOIR */
-   /* which also incorrectly returns true for 
Dali/Pollock*/
-   rv2_clk_mgr_construct(ctx, clk_mgr, pp_smu);
-   break;
-   }
if (ASICREV_IS_RENOIR(asic_id.hw_internal_rev)) {
rn_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
break;
diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h 
b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
index a2903985b9e8..ea7015f869c9 100644
--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
@@ -136,8 +136,6 @@
 #define RAVEN2_A0 0x81
 #define RAVEN2_15D8_REV_94 0x94
 #define RAVEN2_15D8_REV_95 0x95
-#define RAVEN2_15D8_REV_E3 0xE3
-#define RAVEN2_15D8_REV_E4 0xE4
 #define RAVEN2_15D8_REV_E9 0xE9
 #define RAVEN2_15D8_REV_EA 0xEA
 #define RAVEN2_15D8_REV_EB 0xEB
@@ -146,14 +144,16 @@
 #ifndef ASICREV_IS_RAVEN
 #define ASICREV_IS_RAVEN(eChipRev) ((eChipRev >= RAVEN_A0) && eChipRev < 
RAVEN_UNKNOWN)
 #endif
+#define PRID_DALI_DE 0xDE
+#define PRID_DALI_DF 0xDF
+#define PRID_DALI_E3 0xE3
+#define PRID_DALI_E4 0xE4
 
 #define ASICREV_IS_PICASSO(eChipRev) ((eChipRev >= PICASSO_A0) && (eChipRev < 
RAVEN2_A0))
 #ifndef ASICREV_IS_RAVEN2
-#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < 
RAVEN1_F0))
+#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < 
RENOIR_A0))
 #endif
 #define ASICREV_IS_RV1_F0(eChipRev) ((eChipRev >= RAVEN1_F0) && (eChipRev < 
RAVEN_UNKNOWN))
-#define ASICREV_IS_DALI(eChipRev) ((eChipRev == RAVEN2_15D8_REV_E3) \
-   || (eChipRev == RAVEN2_15D8_REV_E4))
 #define ASICREV_IS_POLLOCK(eChipRev) (eChipRev == RAVEN2_15D8_REV_94 \
|| eChipRev == RAVEN2_15D8_REV_95 \
|| eChipRev == RA

[PATCH 19/35] drm/amd/display: Revert "DCN2.x Do not program DPPCLK if same value"

2020-02-21 Thread Rodrigo Siqueira
From: Sung Lee 

[WHY]
Not programming the DTO with the same values causes test failures in the
DCN2 diags DPP tests.

[HOW]
This reverts commit 6f4c8c3022bcdad362b89953a43644e943608f9f.

Signed-off-by: Sung Lee 
Reviewed-by: Yongqiang Sun 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
index 68a1120ff674..368d497bc64b 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
@@ -117,7 +117,7 @@ void dcn20_update_clocks_update_dpp_dto(struct 
clk_mgr_internal *clk_mgr,
 
prev_dppclk_khz = 
clk_mgr->base.ctx->dc->current_state->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz;
 
-   if ((prev_dppclk_khz > dppclk_khz && safe_to_lower) || 
prev_dppclk_khz < dppclk_khz) {
+   if (safe_to_lower || prev_dppclk_khz < dppclk_khz) {
clk_mgr->dccg->funcs->update_dpp_dto(
clk_mgr->dccg, 
dpp_inst, dppclk_khz);
}
-- 
2.25.0



[PATCH 24/35] drm/amd/display: update dml input population function

2020-02-21 Thread Rodrigo Siqueira
From: Dmytro Laktyushkin 

Update dcn20_populate_dml_pipes_from_context to correctly handle odm
when no surface is provided.

Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/dcn20/dcn20_resource.c | 26 ---
 .../amd/display/dc/dml/display_mode_structs.h |  1 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |  2 --
 .../drm/amd/display/dc/dml/display_mode_vba.h |  3 ---
 4 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 080d4581a93d..883ecd2ed4c8 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1949,9 +1949,14 @@ int dcn20_populate_dml_pipes_from_context(
}
pipes[pipe_cnt].pipe.src.hsplit_grp = 
res_ctx->pipe_ctx[i].pipe_idx;
if (res_ctx->pipe_ctx[i].top_pipe && 
res_ctx->pipe_ctx[i].top_pipe->plane_state
-   == res_ctx->pipe_ctx[i].plane_state)
-   pipes[pipe_cnt].pipe.src.hsplit_grp = 
res_ctx->pipe_ctx[i].top_pipe->pipe_idx;
-   else if (res_ctx->pipe_ctx[i].prev_odm_pipe) {
+   == res_ctx->pipe_ctx[i].plane_state) {
+   struct pipe_ctx *first_pipe = 
res_ctx->pipe_ctx[i].top_pipe;
+
+   while (first_pipe->top_pipe && 
first_pipe->top_pipe->plane_state
+   == res_ctx->pipe_ctx[i].plane_state)
+   first_pipe = first_pipe->top_pipe;
+   pipes[pipe_cnt].pipe.src.hsplit_grp = 
first_pipe->pipe_idx;
+   } else if (res_ctx->pipe_ctx[i].prev_odm_pipe) {
struct pipe_ctx *first_pipe = 
res_ctx->pipe_ctx[i].prev_odm_pipe;
 
while (first_pipe->prev_odm_pipe)
@@ -2046,6 +2051,7 @@ int dcn20_populate_dml_pipes_from_context(
pipes[pipe_cnt].pipe.src.cur1_bpp = dm_cur_32bit;
 
if (!res_ctx->pipe_ctx[i].plane_state) {
+   pipes[pipe_cnt].pipe.src.is_hsplit = 
pipes[pipe_cnt].pipe.dest.odm_combine != dm_odm_combine_mode_disabled;
pipes[pipe_cnt].pipe.src.source_scan = dm_horz;
pipes[pipe_cnt].pipe.src.sw_mode = dm_sw_linear;
pipes[pipe_cnt].pipe.src.macro_tile_size = dm_64k_tile;
@@ -2071,19 +2077,21 @@ int dcn20_populate_dml_pipes_from_context(
pipes[pipe_cnt].pipe.scale_ratio_depth.scl_enable = 0; 
/*Lb only or Full scl*/
pipes[pipe_cnt].pipe.scale_taps.htaps = 1;
pipes[pipe_cnt].pipe.scale_taps.vtaps = 1;
-   pipes[pipe_cnt].pipe.src.is_hsplit = 0;
-   pipes[pipe_cnt].pipe.dest.odm_combine = 0;
pipes[pipe_cnt].pipe.dest.vtotal_min = v_total;
pipes[pipe_cnt].pipe.dest.vtotal_max = v_total;
+
+   if (pipes[pipe_cnt].pipe.dest.odm_combine == 
dm_odm_combine_mode_2to1) {
+   pipes[pipe_cnt].pipe.src.viewport_width /= 2;
+   pipes[pipe_cnt].pipe.dest.recout_width /= 2;
+   }
} else {
struct dc_plane_state *pln = 
res_ctx->pipe_ctx[i].plane_state;
struct scaler_data *scl = 
&res_ctx->pipe_ctx[i].plane_res.scl_data;
 
pipes[pipe_cnt].pipe.src.immediate_flip = 
pln->flip_immediate;
-   pipes[pipe_cnt].pipe.src.is_hsplit = 
(res_ctx->pipe_ctx[i].bottom_pipe
-   && 
res_ctx->pipe_ctx[i].bottom_pipe->plane_state == pln)
-   || (res_ctx->pipe_ctx[i].top_pipe
-   && 
res_ctx->pipe_ctx[i].top_pipe->plane_state == pln);
+   pipes[pipe_cnt].pipe.src.is_hsplit = 
(res_ctx->pipe_ctx[i].bottom_pipe && 
res_ctx->pipe_ctx[i].bottom_pipe->plane_state == pln)
+   || (res_ctx->pipe_ctx[i].top_pipe && 
res_ctx->pipe_ctx[i].top_pipe->plane_state == pln)
+   || 
pipes[pipe_cnt].pipe.dest.odm_combine != dm_odm_combine_mode_disabled;
pipes[pipe_cnt].pipe.src.source_scan = pln->rotation == 
ROTATION_ANGLE_90
|| pln->rotation == ROTATION_ANGLE_270 
? dm_vert : dm_horz;
pipes[pipe_cnt].pipe.src.viewport_y_y = scl->viewport.y;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 9bb8bff4cdd9..35fe3c640330 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++

[PATCH 23/35] drm/amd/display: Add visual confirm support for FreeSync 2 ARGB2101010

2020-02-21 Thread Rodrigo Siqueira
From: Peikang Zhang 

[Why]
DalMPVisualConfirm does not support FreeSync 2 ARGB2101010, which causes
a black visual confirm bar when playing HDR video on a FreeSync 2 display
in full screen mode.

[How]
Added a pink color for DalMPVisualConfirm on the FreeSync 2 ARGB2101010
surface.

Signed-off-by: Peikang Zhang 
Reviewed-by: Anthony Koo 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 113ff6731902..77396a08ad29 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2114,6 +2114,10 @@ void dcn10_get_hdr_visual_confirm_color(
if (top_pipe_ctx->stream->out_transfer_func->tf == 
TRANSFER_FUNCTION_PQ) {
/* HDR10, ARGB2101010 - set boarder color to red */
color->color_r_cr = color_value;
+   } else if (top_pipe_ctx->stream->out_transfer_func->tf == 
TRANSFER_FUNCTION_GAMMA22) {
+   /* FreeSync 2 ARGB2101010 - set boarder color to pink */
+   color->color_r_cr = color_value;
+   color->color_b_cb = color_value;
}
break;
case PIXEL_FORMAT_FP16:
-- 
2.25.0



[PATCH 25/35] drm/amd/display: remove unused dml variable

2020-02-21 Thread Rodrigo Siqueira
From: Dmytro Laktyushkin 

Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Eric Bernstein 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h | 1 -
 drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c | 1 -
 drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h | 1 -
 3 files changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 35fe3c640330..114f861f7f3e 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
@@ -327,7 +327,6 @@ struct _vcs_dpi_display_pipe_dest_params_st {
unsigned int vupdate_width;
unsigned int vready_offset;
unsigned char interlaced;
-   unsigned char embedded;
double pixel_rate_mhz;
unsigned char synchronized_vblank_all_planes;
unsigned char otg_inst;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index e23fa0f05f06..193cc9c6b180 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -377,7 +377,6 @@ static void fetch_pipe_params(struct display_mode_lib 
*mode_lib)
 
mode_lib->vba.pipe_plane[j] = 
mode_lib->vba.NumberOfActivePlanes;
 
-   mode_lib->vba.EmbeddedPanel[mode_lib->vba.NumberOfActivePlanes] 
= dst->embedded;
mode_lib->vba.DPPPerPlane[mode_lib->vba.NumberOfActivePlanes] = 
1;
mode_lib->vba.SourceScan[mode_lib->vba.NumberOfActivePlanes] =
(enum scan_direction_class) (src->source_scan);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h 
b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
index cb563a429590..5d82fc5a7ed7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
@@ -389,7 +389,6 @@ struct vba_vars_st {
 
/* vba mode support */
/*inputs*/
-   bool EmbeddedPanel[DC__NUM_DPP__MAX];
bool SupportGFX7CompatibleTilingIn32bppAnd64bpp;
double MaxHSCLRatio;
double MaxVSCLRatio;
-- 
2.25.0



[PATCH 29/35] drm/amd/display: Update TTU properly

2020-02-21 Thread Rodrigo Siqueira
From: Alvin Lee 

[Why]
We need to update the TTU properly if the DRAMClockChangeWatermark
changes. If TTU < DRAMClockChangeWatermark, p-state switching won't be
allowed and we will hang in some PSR cases.

[How]
Update the TTU if the DRAMClockChangeWatermark value increases (but only
if the TTU depended on the DRAMClockChangeWatermark value in the first
place).

Signed-off-by: Alvin Lee 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c  | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c 
b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
index 485a9c62ec58..5bbbafacc720 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
@@ -2614,6 +2614,14 @@ static void 
dml20v2_DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndP
 
if (mode_lib->vba.DRAMClockChangeSupportsVActive &&
mode_lib->vba.MinActiveDRAMClockChangeMargin > 60) {
+
+   for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
+   if 
(mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb]
 == 0) {
+   if (mode_lib->vba.DRAMClockChangeWatermark >
+   
dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, 
mode_lib->vba.UrgentWatermark))
+   mode_lib->vba.MinTTUVBlank[k] += 25;
+   }
+   }
mode_lib->vba.DRAMClockChangeWatermark += 25;
mode_lib->vba.DRAMClockChangeSupport[0][0] = 
dm_dram_clock_change_vactive;
} else if (mode_lib->vba.DummyPStateCheck &&
-- 
2.25.0
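The condition added by the hunk above can be sketched as a small standalone model. The names loosely mirror the vba_vars_st fields and the 25 us constant matches the watermark bump in the patch; this is an illustration of the rule, not the real DML code:

```c
#include <assert.h>

/* Simplified model of the dml20v2 hunk: when a plane runs with prefetch
 * mode 0 and DRAMClockChangeWatermark dominates the other watermarks,
 * MinTTUVBlank must grow by the same 25 us margin that is added to
 * DRAMClockChangeWatermark, otherwise TTU < watermark would forbid
 * p-state changes (the hang described in the commit message).
 */
static double max2(double a, double b) { return a > b ? a : b; }

static double adjust_min_ttu_vblank(double min_ttu_vblank,
                                    int prefetch_mode,
                                    double dram_clock_change_wm,
                                    double stutter_enter_plus_exit_wm,
                                    double urgent_wm)
{
    if (prefetch_mode == 0 &&
        dram_clock_change_wm > max2(stutter_enter_plus_exit_wm, urgent_wm))
        min_ttu_vblank += 25.0;  /* keep TTU ahead of the bumped watermark */
    return min_ttu_vblank;
}
```

The bump is skipped both when another watermark already dominates and when the prefetch mode never depended on the DRAM clock change watermark.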



Re: 回复: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Christian König

Ok how about this:

We add a debugfs file which, when read, returns the GFXOFF status and, when 
written with a number N, disables GFXOFF for N seconds, with 0 meaning forever.


Umr gets a new commandline option to write to that file before reading 
registers.


This way the user can still disable it if it causes any problems. Does 
that sound like a plan?


Regards,
Christian.
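The semantics Christian proposes can be modeled in a few lines. All names here are hypothetical; a real implementation would live in amdgpu_debugfs.c, call amdgpu_gfx_off_ctrl() on the transitions, and use a delayed work item instead of an explicit tick:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Userspace model of the proposed debugfs file: a write of N disables
 * GFXOFF for N seconds (0 = forever), a read reports whether GFXOFF is
 * currently allowed. */
struct gfxoff_state {
    bool disabled;        /* user asked us to keep GFXOFF off         */
    uint64_t reenable_at; /* time (s) to turn it back on; 0 = forever */
};

/* "write" handler: N seconds of GFXOFF disablement, 0 meaning forever */
static void gfxoff_debugfs_write(struct gfxoff_state *s, uint64_t now,
                                 uint64_t seconds)
{
    s->disabled = true;
    s->reenable_at = seconds ? now + seconds : 0;
}

/* delayed-work callback: re-enable once the timeout has expired */
static void gfxoff_tick(struct gfxoff_state *s, uint64_t now)
{
    if (s->disabled && s->reenable_at && now >= s->reenable_at)
        s->disabled = false;
}

/* "read" handler: 1 = GFXOFF allowed, 0 = held off */
static int gfxoff_debugfs_read(const struct gfxoff_state *s)
{
    return s->disabled ? 0 : 1;
}
```

A tool like umr would then write the file once before a batch of register reads, instead of toggling GFXOFF around every access.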

Am 21.02.20 um 16:56 schrieb Deucher, Alexander:


[AMD Public Use]


Not at the moment.  But we could add a debugfs file which just wraps 
amdgpu_gfx_off_ctrl(). That said, maybe we just add a delay here or 
use a timer to delay turning gfxoff back on again so that we aren't 
turning it on and off so rapidly.


Alex


*From:* Christian König 
*Sent:* Friday, February 21, 2020 10:43 AM
*To:* Deucher, Alexander ; Huang, Ray 
; Liu, Monk 
*Cc:* StDenis, Tom ; Alex Deucher 
; amd-gfx list 
*Subject:* Re: 回复: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

Do we have a way to disable GFXOFF on the fly?

If not maybe it would be a good idea to add a separate debugfs file to 
do this.


Christian.

Am 21.02.20 um 16:39 schrieb Deucher, Alexander:


[AMD Public Use]


If we are trying to debug a reproducible hang, probably best to just 
to disable gfxoff before messing with it to remove that as a factor.  
Otherwise, the method included in this patch is the proper way to 
disable/enable GFXOFF dynamically.


Alex


*From:* amd-gfx  
 on behalf of Christian 
König  


*Sent:* Friday, February 21, 2020 10:27 AM
*To:* Huang, Ray  ; Liu, 
Monk  
*Cc:* StDenis, Tom  
; Alex Deucher  
; amd-gfx list 
 
*Subject:* Re: 回复: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

Am 21.02.20 um 16:23 schrieb Huang Rui:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read register 
you usually hit a hang, and by that case KIQ probably already die
> If CP is busy, the gfx should be in "on" state at that time, we 
needn't use KIQ.


Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Cause the register debug interface is meant to be used when the ASIC is
completed locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -邮件原件-
>> 发件人: amd-gfx  
 代表 Huang Rui

>> 发送时间: 2020年2月21日 22:34
>> 收件人: StDenis, Tom  

>> 抄送: Alex Deucher  
; amd-gfx list 
 
>> 主题: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs 
access to MMIO

>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about use RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis 
  wrote:
> Signed-off-by: Tom St Denis  


 Please add a patch description. With that fixed:
 Reviewed-by: Alex Deucher  



> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>    1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int  
amdgpu_debugfs_process_reg_op(bool read, struct file *f,

>   if (pm_pg_lock)
> mutex_lock(&adev->pm.mutex);
>
> + amdgpu_gfx_off_ctrl(adev, false);
>   while (size) {
>   uint32_t value;
>
> @@ -192,6 +193,8 @@ static int  
amdgpu_debugfs_process_reg_op(bool read, struct file *f,

>   }
>
>    end:
> + amdgpu_gfx_off_ctrl(adev, true);
> +

[PATCH 00/35] DC Patches February 21, 2020

2020-02-21 Thread Rodrigo Siqueira
This DC patchset brings improvements in multiple areas. In summary, we
highlight:

* Fixes and improvements on:
  - DML
  - ddc
  - i2c
  - tx mask
  - link training
* DMCUB improvements
* Clks optimizations

Alvin Lee (3):
  drm/amd/display: Update TX masks correctly
  drm/amd/display: Disable PG on NV12
  drm/amd/display: Update TTU properly

Anthony Koo (2):
  drm/amd/display: Add function pointers for panel related hw functions
  drm/amd/display: make some rn_clk_mgr structs and funcs static

Aric Cyr (4):
  drm/amd/display: dal_ddc_i2c_payloads_create can fail causing panic
  drm/amd/display: Only round InfoFrame refresh rates
  drm/amd/display: 3.2.73
  drm/amd/display: 3.2.74

Bhawanpreet Lakha (1):
  drm/amd/display: Fix HDMI repeater authentication

David Galiffi (1):
  drm/amd/display: Workaround required for link training reliability

Dmytro Laktyushkin (4):
  drm/amd/display: update scaling filters
  drm/amd/display: update dml input population function
  drm/amd/display: remove unused dml variable
  drm/amd/display: correct dml surface size assignment

George Shen (1):
  drm/amd/display: Temporarily disable stutter on MPO transition

Hersen Wu (2):
  drm/amd/display: dmub back door load
  drm/amd/display: DMUB Firmware Load by PSP

Jaehyun Chung (2):
  drm/amd/display: Monitor patch to delay setting ignore MSA bit
  drm/amd/display: Access patches from stream for ignore MSA monitor
patch

Martin Leung (1):
  drm/amd/display: Link training TPS1 workaround

Michael Strauss (1):
  drm/amd/display: Fix RV2 Variant Detection

Nicholas Kazlauskas (3):
  drm/amd/display: Wait for DMCUB to finish loading before executing
commands
  drm/amd/display: Don't ask PSP to load DMCUB for backdoor load
  drm/amd/display: Add DMUB firmware state debugfs

Peikang Zhang (2):
  drm/amd/display: System crashes when add_ptb_to_table() gets called
  drm/amd/display: Add visual confirm support for FreeSync 2 ARGB2101010

Roman Li (1):
  drm/amd/display: Add dmcu f/w loading for NV12

Samson Tam (1):
  drm/amd/display: do not force UCLK DPM to stay at highest state during
display off in DCN2

Sung Lee (2):
  drm/amd/display: Revert "DCN2.x Do not program DPPCLK if same value"
  drm/amd/display: Make clock table struct more accessible

Vladimir Stempen (1):
  drm/amd/display: programming last delta in output transfer function
LUT to a correct value

Wyatt Wood (1):
  drm/amd/display: Add driver support for enabling PSR on DMCUB

Yongqiang Sun (1):
  drm/amd/display: optimize prgoram wm and clks

Yu-ting Shen (1):
  drm/amd/display: limit display clock to 100MHz to avoid FIFO error

 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   50 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c |   27 +
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  |   20 +-
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |   26 +-
 .../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c  |   10 +-
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c |8 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c  |   10 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |7 +-
 .../gpu/drm/amd/display/dc/core/dc_link_ddc.c |   52 +-
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |   25 +-
 drivers/gpu/drm/amd/display/dc/dc.h   |7 +-
 drivers/gpu/drm/amd/display/dc/dc_link.h  |1 +
 drivers/gpu/drm/amd/display/dc/dc_types.h |1 +
 drivers/gpu/drm/amd/display/dc/dce/Makefile   |2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c |   16 +
 .../drm/amd/display/dc/dce/dce_scl_filters.c  | 2204 ++---
 .../amd/display/dc/dce/dce_scl_filters_old.c  |   25 +
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c |5 +-
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h |3 +-
 .../display/dc/dce110/dce110_hw_sequencer.c   |   15 +-
 .../display/dc/dce110/dce110_hw_sequencer.h   |4 +
 .../amd/display/dc/dcn10/dcn10_cm_common.c|   13 +
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.c   |  101 +-
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |8 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |   58 +-
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |2 +
 .../drm/amd/display/dc/dcn20/dcn20_hubbub.c   |   11 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c|   45 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |2 +
 .../drm/amd/display/dc/dcn20/dcn20_resource.c |   38 +-
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.c   |  138 +-
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.h   |8 +-
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |2 +
 .../drm/amd/display/dc/dcn21/dcn21_resource.c |   19 +-
 .../dc/dml/dcn20/display_mode_vba_20v2.c  |8 +
 .../amd/display/dc/dml/display_mode_structs.h |4 +-
 .../drm/amd/display/dc/dml/display_mode_vba.c |   11 +-
 .../drm/amd/display/dc/dml/display_mode_vba.h |4 -
 .../amd/display/dc/inc/hw/clk_mgr_internal.h  |4 +
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |2 +-
 .../amd/display/dc/inc/hw_sequencer_private.h |5 +
 .../drm/amd/display/dmub/inc/

[PATCH 02/35] drm/amd/display: update scaling filters

2020-02-21 Thread Rodrigo Siqueira
From: Dmytro Laktyushkin 

Currently there is a minor error in scaling filter coefficients
caused by truncation to fit the HW registers. This error accumulates
with increased taps, but has gone unnoticed because the vast majority
of scaling is done with only 4 taps.

Scaling filters are now updated using HW team's filter generator
which has quantization error minimization built in.

Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../drm/amd/display/dc/dce/dce_scl_filters.c  | 2204 ++---
 .../amd/display/dc/dce/dce_scl_filters_old.c  |   25 +
 2 files changed, 1290 insertions(+), 939 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce/dce_scl_filters_old.c

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_scl_filters.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_scl_filters.c
index 48862bebf29e..7311f312369f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_scl_filters.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_scl_filters.c
@@ -22,1004 +22,1330 @@
  * Authors: AMD
  *
  */
-
 #include "transform.h"
 
+//=========================================
+// <num_taps>    = 2
+// <num_phases>  = 16
+// <scale_ratio> = 0.83 (input/output)
+// <sharpness>   = 0
+// <CoefType>    = ModifiedLanczos
+// <CoefQuant>   = s1.10
+// <CoefOut>     = s1.12
+//=========================================
 static const uint16_t filter_2tap_16p[18] = {
-   4096, 0,
-   3840, 256,
-   3584, 512,
-   3328, 768,
-   3072, 1024,
-   2816, 1280,
-   2560, 1536,
-   2304, 1792,
-   2048, 2048
+   0x1000, 0x,
+   0x0FF0, 0x0010,
+   0x0FB0, 0x0050,
+   0x0F34, 0x00CC,
+   0x0E68, 0x0198,
+   0x0D44, 0x02BC,
+   0x0BC4, 0x043C,
+   0x09FC, 0x0604,
+   0x0800, 0x0800
 };
 
+//=========================================
+// <num_taps>    = 3
+// <num_phases>  = 16
+// <scale_ratio> = 0.8 (input/output)
+// <sharpness>   = 0
+// <CoefType>    = ModifiedLanczos
+// <CoefQuant>   = 1.10
+// <CoefOut>     = 1.12
+//=========================================
 static const uint16_t filter_3tap_16p_upscale[27] = {
-   2048, 2048, 0,
-   1708, 2424, 16348,
-   1372, 2796, 16308,
-   1056, 3148, 16272,
-   768, 3464, 16244,
-   512, 3728, 16236,
-   296, 3928, 16252,
-   124, 4052, 16296,
-   0, 4096, 0
+   0x0804, 0x07FC, 0x,
+   0x06AC, 0x0978, 0x3FDC,
+   0x055C, 0x0AF0, 0x3FB4,
+   0x0420, 0x0C50, 0x3F90,
+   0x0300, 0x0D88, 0x3F78,
+   0x0200, 0x0E90, 0x3F70,
+   0x0128, 0x0F5C, 0x3F7C,
+   0x007C, 0x0FD8, 0x3FAC,
+   0x, 0x1000, 0x
 };
 
-static const uint16_t filter_3tap_16p_117[27] = {
-   2048, 2048, 0,
-   1824, 2276, 16376,
-   1600, 2496, 16380,
-   1376, 2700, 16,
-   1156, 2880, 52,
-   948, 3032, 108,
-   756, 3144, 192,
-   580, 3212, 296,
-   428, 3236, 428
+//=========================================
+// <num_taps>    = 3
+// <num_phases>  = 16
+// <scale_ratio> = 1.1 (input/output)
+// <sharpness>   = 0
+// <CoefType>    = ModifiedLanczos
+// <CoefQuant>   = 1.10
+// <CoefOut>     = 1.12
+//=========================================
+static const uint16_t filter_3tap_16p_116[27] = {
+   0x0804, 0x07FC, 0x,
+   0x0700, 0x0914, 0x3FEC,
+   0x0604, 0x0A1C, 0x3FE0,
+   0x050C, 0x0B14, 0x3FE0,
+   0x041C, 0x0BF4, 0x3FF0,
+   0x0340, 0x0CB0, 0x0010,
+   0x0274, 0x0D3C, 0x0050,
+   0x01C0, 0x0D94, 0x00AC,
+   0x0128, 0x0DB4, 0x0124
 };
 
-static const uint16_t filter_3tap_16p_150[27] = {
-   2048, 2048, 0,
-   1872, 2184, 36,
-   1692, 2308, 88,
-   1516, 2420, 156,
-   1340, 2516, 236,
-   1168, 2592, 328,
-   1004, 2648, 440,
-   844, 2684, 560,
-   696, 2696, 696
+//=========================================
+// <num_taps>    = 3
+// <num_phases>  = 16
+// <scale_ratio> = 1.4 (input/output)
+// <sharpness>   = 0
+// <CoefType>    = ModifiedLanczos
+// <CoefQuant>   = 1.10
+// <CoefOut>     = 1.12
+//=========================================
+static const uint16_t filter_3tap_16p_149[27] = {
+   0x0804, 0x07FC, 0x,
+   0x0730, 0x08CC, 0x0004,
+   0x0660, 0x098C, 0x0014,
+   0x0590, 0x0A3C, 0x0034,
+   0x04C4, 0x0AD4, 0x0068,
+   0x0400, 0x0B54, 0x00AC,
+   0x0348, 0x0BB0, 0x0108,
+   0x029C, 0x0BEC, 0x0178,
+   0x0200, 0x0C00, 0x0200
 };
 
+//=========================================
+// <num_taps>    = 3
+// <num_phases>  = 16
+// <scale_ratio> = 1.83332 (input/output)
+// <sharpness>   = 0
+// <CoefType>    = ModifiedLanczos
+// <CoefQuant>   = 1.10
+// <CoefOut>     = 1.12
+//=========================================
 static const uint16_t filter_3tap_16p_183[27] = {
-   2048, 2048, 0,
-   1892, 2104, 92,
-   1744, 2152, 196,
-   1592, 2196, 300,
-   1448, 2232, 412,
-   1304, 2256, 528,
-   1168, 2276, 648,
-   1032, 2288, 772,
-   900, 2292, 900
+   0x0804, 0
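The quantization issue this patch fixes can be illustrated in a few lines. The coefficients above are stored in 1.12 fixed point (0x1000 == 4096 represents 1.0); truncating instead of rounding biases every tap toward zero, and the per-phase bias compounds as tap count grows. This sketch is illustrative only; the HW team's generator referenced in the commit message additionally rebalances taps so each phase sums to exactly 1.0:

```c
#include <stdint.h>

/* Quantize a filter coefficient to 1.12 fixed point (4096 == 1.0).
 * Truncation always errs toward zero, so the sum of quantized taps in a
 * phase drifts away from 4096 as the number of taps increases;
 * round-to-nearest keeps the per-tap error centred around zero. */
static int32_t quant_trunc(double c)
{
    return (int32_t)(c * 4096.0);          /* C cast truncates toward 0 */
}

static int32_t quant_round(double c)
{
    return (int32_t)(c * 4096.0 + (c >= 0 ? 0.5 : -0.5));
}
```

For example, 0.62495 scales to 2559.795 register units: truncation drops almost a full LSB, while rounding lands on the nearest representable value.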

[PATCH 26/35] drm/amd/display: 3.2.74

2020-02-21 Thread Rodrigo Siqueira
From: Aric Cyr 

Signed-off-by: Aric Cyr 
Reviewed-by: Aric Cyr 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 72298520a303..f8ee2b75d2b8 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -39,7 +39,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.73"
+#define DC_VER "3.2.74"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
-- 
2.25.0



[PATCH 27/35] drm/amd/display: Add driver support for enabling PSR on DMCUB

2020-02-21 Thread Rodrigo Siqueira
From: Wyatt Wood 

[Why]
We want to be able to enable PSR on DMCUB, and fallback to
DMCU when necessary.

[How]
Add infrastructure to enable and disable PSR on DMCUB.

Signed-off-by: Wyatt Wood 
Reviewed-by: Nicholas Kazlauskas 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c   |  4 ++--
 drivers/gpu/drm/amd/display/dc/dc.h |  1 +
 drivers/gpu/drm/amd/display/dc/dc_link.h|  1 +
 drivers/gpu/drm/amd/display/dc/dce/Makefile |  2 +-
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c   |  5 +++--
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h   |  3 ++-
 .../drm/amd/display/dc/dcn21/dcn21_resource.c   | 17 +
 .../drm/amd/display/dmub/inc/dmub_gpint_cmd.h   |  1 +
 8 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 3420d098d771..2ccc2db93f5d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -45,7 +45,7 @@
 #include "dpcd_defs.h"
 #include "dmcu.h"
 #include "hw/clk_mgr.h"
-#include "../dce/dmub_psr.h"
+#include "dce/dmub_psr.h"
 
 #define DC_LOGGER_INIT(logger)
 
@@ -2433,7 +2433,7 @@ bool dc_link_set_psr_allow_active(struct dc_link *link, 
bool allow_active, bool
struct dmcu *dmcu = dc->res_pool->dmcu;
struct dmub_psr *psr = dc->res_pool->psr;
 
-   if ((psr != NULL) && link->psr_feature_enabled)
+   if (psr != NULL && link->psr_feature_enabled)
psr->funcs->psr_enable(psr, allow_active);
else if ((dmcu != NULL && dmcu->funcs->is_dmcu_initialized(dmcu)) && 
link->psr_feature_enabled)
dmcu->funcs->set_psr_enable(dmcu, allow_active, wait);
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index f8ee2b75d2b8..e10d5a7d0cb8 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -410,6 +410,7 @@ struct dc_debug_options {
bool dmub_offload_enabled;
bool dmcub_emulation;
bool dmub_command_table; /* for testing only */
+   bool psr_on_dmub;
struct dc_bw_validation_profile bw_val_profile;
bool disable_fec;
bool disable_48mhz_pwrdwn;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h 
b/drivers/gpu/drm/amd/display/dc/dc_link.h
index 5f341e960506..c45c7680fa58 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
@@ -26,6 +26,7 @@
 #ifndef DC_LINK_H_
 #define DC_LINK_H_
 
+#include "dc.h"
 #include "dc_types.h"
 #include "grph_object_defs.h"
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce/Makefile 
b/drivers/gpu/drm/amd/display/dc/dce/Makefile
index fdf3d8f87eee..fbfcff700971 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/dce/Makefile
@@ -29,7 +29,7 @@
 DCE = dce_audio.o dce_stream_encoder.o dce_link_encoder.o dce_hwseq.o \
 dce_mem_input.o dce_clock_source.o dce_scl_filters.o dce_transform.o \
 dce_opp.o dce_dmcu.o dce_abm.o dce_ipp.o dce_aux.o \
-dce_i2c.o dce_i2c_hw.o dce_i2c_sw.o
+dce_i2c.o dce_i2c_hw.o dce_i2c_sw.o dmub_psr.o
 
 AMD_DAL_DCE = $(addprefix $(AMDDALPATH)/dc/dce/,$(DCE))
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c 
b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
index 22cd68f7dca0..2c932c29f1f9 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
@@ -27,7 +27,7 @@
 #include "dc.h"
 #include "dc_dmub_srv.h"
 #include "../../dmub/inc/dmub_srv.h"
-#include "dmub_fw_state.h"
+#include "../../dmub/inc/dmub_gpint_cmd.h"
 #include "core_types.h"
 
 #define MAX_PIPES 6
@@ -131,8 +131,9 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
= &cmd.psr_copy_settings.psr_copy_settings_data;
struct pipe_ctx *pipe_ctx = NULL;
struct resource_context *res_ctx = 
&link->ctx->dc->current_state->res_ctx;
+   int i = 0;
 
-   for (int i = 0; i < MAX_PIPES; i++) {
+   for (i = 0; i < MAX_PIPES; i++) {
if (res_ctx &&
res_ctx->pipe_ctx[i].stream &&
res_ctx->pipe_ctx[i].stream->link &&
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h 
b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
index 3de7b9439f42..f404fecd6410 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.h
@@ -27,6 +27,7 @@
 #define _DMUB_PSR_H_
 
 #include "os_types.h"
+#include "dc_link.h"
 
 struct dmub_psr {
struct dc_context *ctx;
@@ -44,4 +45,4 @@ struct dmub_psr *dmub_psr_create(struct dc_context *ctx);
 void dmub_psr_destroy(struct dmub_psr **dmub);
 
 
-#endif /* _DCE_DMUB_H_ */
+#endif /* _DMUB_PSR_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index e7076b0d7afb

[PATCH 08/35] drm/amd/display: Don't ask PSP to load DMCUB for backdoor load

2020-02-21 Thread Rodrigo Siqueira
From: Nicholas Kazlauskas 

[Why]
If we're doing a backdoor load, then do it entirely ourselves without
invoking any of the frontdoor path, to avoid potential issues with an
outdated tOS.

[How]
Check the load type and don't pass it to base if we don't want it
loaded.

Signed-off-by: Nicholas Kazlauskas 
Reviewed-by: Hersen Wu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 20 +++
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 8bb022c91b8b..933bbe8350bb 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1202,16 +1202,20 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
}
 
hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
-   adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
-   AMDGPU_UCODE_ID_DMCUB;
-   adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].fw = adev->dm.dmub_fw;
-   adev->firmware.fw_size +=
-   ALIGN(le32_to_cpu(hdr->inst_const_bytes), PAGE_SIZE);
 
-   adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
+   if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+   adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
+   AMDGPU_UCODE_ID_DMCUB;
+   adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].fw =
+   adev->dm.dmub_fw;
+   adev->firmware.fw_size +=
+   ALIGN(le32_to_cpu(hdr->inst_const_bytes), PAGE_SIZE);
 
-   DRM_INFO("Loading DMUB firmware via PSP: version=0x%08X\n",
-adev->dm.dmcub_fw_version);
+   DRM_INFO("Loading DMUB firmware via PSP: version=0x%08X\n",
+adev->dm.dmcub_fw_version);
+   }
+
+   adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
 
adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
dmub_srv = adev->dm.dmub_srv;
-- 
2.25.0



[PATCH 14/35] drm/amd/display: Fix HDMI repeater authentication

2020-02-21 Thread Rodrigo Siqueira
From: Bhawanpreet Lakha 

When the rxstatus split was done, the index was incorrect. This
led to HDMI repeater authentication failure for HDCP 2.x, so fix it.
Fixes: 302169003733 ("drm/amd/display: split rxstatus for hdmi and dp")
Signed-off-by: Bhawanpreet Lakha 
Reviewed-by: Wenjing Liu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_execution.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_execution.c 
b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_execution.c
index 340df6d406f9..491c00f48026 100644
--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_execution.c
+++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp2_execution.c
@@ -34,7 +34,7 @@ static inline enum mod_hdcp_status 
check_receiver_id_list_ready(struct mod_hdcp
if (is_dp_hdcp(hdcp))
is_ready = 
HDCP_2_2_DP_RXSTATUS_READY(hdcp->auth.msg.hdcp2.rxstatus_dp) ? 1 : 0;
else
-   is_ready = 
(HDCP_2_2_HDMI_RXSTATUS_READY(hdcp->auth.msg.hdcp2.rxstatus[0]) &&
+   is_ready = 
(HDCP_2_2_HDMI_RXSTATUS_READY(hdcp->auth.msg.hdcp2.rxstatus[1]) &&

(HDCP_2_2_HDMI_RXSTATUS_MSG_SZ_HI(hdcp->auth.msg.hdcp2.rxstatus[1]) << 8 |

hdcp->auth.msg.hdcp2.rxstatus[0])) ? 1 : 0;
return is_ready ? MOD_HDCP_STATUS_SUCCESS :
@@ -67,7 +67,7 @@ static inline enum mod_hdcp_status 
check_reauthentication_request(
MOD_HDCP_STATUS_HDCP2_REAUTH_REQUEST :
MOD_HDCP_STATUS_SUCCESS;
else
-   ret = 
HDCP_2_2_HDMI_RXSTATUS_REAUTH_REQ(hdcp->auth.msg.hdcp2.rxstatus[0]) ?
+   ret = 
HDCP_2_2_HDMI_RXSTATUS_REAUTH_REQ(hdcp->auth.msg.hdcp2.rxstatus[1]) ?
MOD_HDCP_STATUS_HDCP2_REAUTH_REQUEST :
MOD_HDCP_STATUS_SUCCESS;
return ret;
-- 
2.25.0
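The layout behind this one-byte index fix: for HDMI, HDCP 2.2 RxStatus is a two-byte register where byte 0 holds the low 8 bits of the message size and byte 1 holds the 2 high size bits plus the READY (bit 2) and REAUTH_REQ (bit 3) flags, mirroring the kernel's HDCP_2_2_HDMI_RXSTATUS_* macros in drm_hdcp.h. The flag macros therefore must be applied to rxstatus[1], which is what the patch corrects (the driver's readiness check additionally requires a nonzero message size). Helper names below are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* Decode the 2-byte HDMI HDCP 2.2 RxStatus register.  Bit positions
 * follow the kernel's drm_hdcp.h: READY = byte1 bit 2, REAUTH_REQ =
 * byte1 bit 3, message size = (byte1 & 0x3) << 8 | byte0. */
static bool hdmi_rxstatus_ready(const uint8_t rxstatus[2])
{
    return rxstatus[1] & (1u << 2);
}

static bool hdmi_rxstatus_reauth_req(const uint8_t rxstatus[2])
{
    return rxstatus[1] & (1u << 3);
}

static uint16_t hdmi_rxstatus_msg_size(const uint8_t rxstatus[2])
{
    return (uint16_t)((rxstatus[1] & 0x3) << 8) | rxstatus[0];
}
```

With the pre-fix code testing bit 2 of rxstatus[0] (a size bit) instead, a repeater's receiver-ID list was never seen as ready, failing authentication.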



[PATCH 22/35] drm/amd/display: Link training TPS1 workaround

2020-02-21 Thread Rodrigo Siqueira
From: Martin Leung 

[Why]
The previously implemented early_cr_pattern was link-level, but the whole
ASIC should be affected.

[How]
 - change old link flag to dc level
 - new bit in dc->work_arounds set by DM

Signed-off-by: Martin Leung 
Reviewed-by: Joshua Aberback 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c   | 18 +-
 drivers/gpu/drm/amd/display/dc/dc.h|  1 +
 drivers/gpu/drm/amd/display/dc/dc_link.h   |  1 -
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 8de9d6f9a477..93127bc90f3c 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -973,7 +973,7 @@ static enum link_training_result 
perform_clock_recovery_sequence(
retries_cr = 0;
retry_count = 0;
 
-   if (!link->wa_flags.dp_early_cr_pattern)
+   if (!link->ctx->dc->work_arounds.lt_early_cr_pattern)
dp_set_hw_training_pattern(link, tr_pattern, offset);
 
/* najeeb - The synaptics MST hub can put the LT in
@@ -1446,11 +1446,11 @@ enum link_training_result 
dc_link_dp_perform_link_training(
&link->preferred_training_settings,
<_settings);
 
-   if (link->wa_flags.dp_early_cr_pattern)
-   start_clock_recovery_pattern_early(link, <_settings, DPRX);
-
/* 1. set link rate, lane count and spread. */
-   dpcd_set_link_settings(link, <_settings);
+   if (link->ctx->dc->work_arounds.lt_early_cr_pattern)
+   start_clock_recovery_pattern_early(link, <_settings, DPRX);
+   else
+   dpcd_set_link_settings(link, <_settings);
 
if (link->preferred_training_settings.fec_enable != NULL)
fec_enable = *link->preferred_training_settings.fec_enable;
@@ -1669,11 +1669,11 @@ enum link_training_result dc_link_dp_sync_lt_attempt(
dp_set_panel_mode(link, panel_mode);
 
/* Attempt to train with given link training settings */
-   if (link->wa_flags.dp_early_cr_pattern)
-   start_clock_recovery_pattern_early(link, <_settings, DPRX);
-
/* Set link rate, lane count and spread. */
-   dpcd_set_link_settings(link, <_settings);
+   if (link->ctx->dc->work_arounds.lt_early_cr_pattern)
+   start_clock_recovery_pattern_early(link, <_settings, DPRX);
+   else
+   dpcd_set_link_settings(link, <_settings);
 
/* 2. perform link training (set link training done
 *  to false is done as well)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index b3f6311d3564..72298520a303 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -126,6 +126,7 @@ struct dc_bug_wa {
bool no_connect_phy_config;
bool dedcn20_305_wa;
bool skip_clock_update;
+   bool lt_early_cr_pattern;
 };
 
 struct dc_dcc_surface_param {
diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h 
b/drivers/gpu/drm/amd/display/dc/dc_link.h
index 6344de3ca979..5f341e960506 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
@@ -135,7 +135,6 @@ struct dc_link {
bool dp_keep_receiver_powered;
bool dp_skip_DID2;
bool dp_skip_reset_segment;
-   bool dp_early_cr_pattern;
} wa_flags;
struct link_mst_stream_allocation_table mst_stream_alloc_table;
 
-- 
2.25.0



[PATCH 05/35] drm/amd/display: DMUB Firmware Load by PSP

2020-02-21 Thread Rodrigo Siqueira
From: Hersen Wu 

Signed-off-by: Hersen Wu 
Signed-off-by: Jerry (Fangzhi) Zuo 
Reviewed-by: Hersen Wu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index aeca0ada2484..8bb022c91b8b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -801,10 +801,20 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
 
fw_bss_data_size = le32_to_cpu(hdr->bss_data_bytes);
 
-   memcpy(fb_info->fb[DMUB_WINDOW_0_INST_CONST].cpu_addr, fw_inst_const,
-  fw_inst_const_size);
+   /* if adev->firmware.load_type == AMDGPU_FW_LOAD_PSP,
+* amdgpu_ucode_init_single_fw will load dmub firmware
+* fw_inst_const part to cw0; otherwise, the firmware back door load
+* will be done by dm_dmub_hw_init
+*/
+   if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+   memcpy(fb_info->fb[DMUB_WINDOW_0_INST_CONST].cpu_addr, 
fw_inst_const,
+   fw_inst_const_size);
+   }
+
memcpy(fb_info->fb[DMUB_WINDOW_2_BSS_DATA].cpu_addr, fw_bss_data,
   fw_bss_data_size);
+
+   /* Copy firmware bios info into FB memory. */
memcpy(fb_info->fb[DMUB_WINDOW_3_VBIOS].cpu_addr, adev->bios,
   adev->bios_size);
 
-- 
2.25.0



[PATCH 16/35] drm/amd/display: make some rn_clk_mgr structs and funcs static

2020-02-21 Thread Rodrigo Siqueira
From: Anthony Koo 

[Why]
There are some structures and functions meant to be used only within the
scope of the single rn_clk_mgr C file.

[How]
Make structs and funcs static if they are only meant to be used within
rn_clk_mgr.

Signed-off-by: Anthony Koo 
Reviewed-by: Aric Cyr 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 5d82ec1f1ce5..64cbd5462c79 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -405,7 +405,7 @@ void rn_init_clocks(struct clk_mgr *clk_mgr)
clk_mgr->clks.pwr_state = DCN_PWR_STATE_UNKNOWN;
 }
 
-void build_watermark_ranges(struct clk_bw_params *bw_params, struct 
pp_smu_wm_range_sets *ranges)
+static void build_watermark_ranges(struct clk_bw_params *bw_params, struct 
pp_smu_wm_range_sets *ranges)
 {
int i, num_valid_sets;
 
@@ -503,7 +503,7 @@ static struct clk_mgr_funcs dcn21_funcs = {
.notify_wm_ranges = rn_notify_wm_ranges
 };
 
-struct clk_bw_params rn_bw_params = {
+static struct clk_bw_params rn_bw_params = {
.vram_type = Ddr4MemType,
.num_channels = 1,
.clk_table = {
@@ -543,7 +543,7 @@ struct clk_bw_params rn_bw_params = {
 
 };
 
-struct wm_table ddr4_wm_table = {
+static struct wm_table ddr4_wm_table = {
.entries = {
{
.wm_inst = WM_A,
@@ -580,7 +580,7 @@ struct wm_table ddr4_wm_table = {
}
 };
 
-struct wm_table lpddr4_wm_table = {
+static struct wm_table lpddr4_wm_table = {
.entries = {
{
.wm_inst = WM_A,
-- 
2.25.0



[PATCH 11/35] drm/amd/display: System crashes when add_ptb_to_table() gets called

2020-02-21 Thread Rodrigo Siqueira
From: Peikang Zhang 

[Why]
Unused VMIDs were not evicted correctly

[How]
1. evict_vmids() logic was fixed;
2. Added bounds checks to add_ptb_to_table() and
   clear_entry_from_vmid_table() to avoid crashes caused by out-of-bounds
   array access;
3. For mod_vmid_get_for_ptb(), vmid is changed from unsigned to signed,
   since vmid can be negative.

Signed-off-by: Peikang Zhang 
Reviewed-by: Aric Cyr 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/modules/vmid/vmid.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/vmid/vmid.c 
b/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
index f0a153704f6e..00f132f8ad55 100644
--- a/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
+++ b/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
@@ -40,14 +40,18 @@ struct core_vmid {
 
 static void add_ptb_to_table(struct core_vmid *core_vmid, unsigned int vmid, 
uint64_t ptb)
 {
-   core_vmid->ptb_assigned_to_vmid[vmid] = ptb;
-   core_vmid->num_vmids_available--;
+   if (vmid < MAX_VMID) {
+   core_vmid->ptb_assigned_to_vmid[vmid] = ptb;
+   core_vmid->num_vmids_available--;
+   }
 }
 
 static void clear_entry_from_vmid_table(struct core_vmid *core_vmid, unsigned 
int vmid)
 {
-   core_vmid->ptb_assigned_to_vmid[vmid] = 0;
-   core_vmid->num_vmids_available++;
+   if (vmid < MAX_VMID) {
+   core_vmid->ptb_assigned_to_vmid[vmid] = 0;
+   core_vmid->num_vmids_available++;
+   }
 }
 
 static void evict_vmids(struct core_vmid *core_vmid)
@@ -57,7 +61,7 @@ static void evict_vmids(struct core_vmid *core_vmid)
 
// At this point any positions with value 0 are unused vmids, evict them
for (i = 1; i < core_vmid->num_vmid; i++) {
-   if (ord & (1u << i))
+   if (!(ord & (1u << i)))
clear_entry_from_vmid_table(core_vmid, i);
}
 }
@@ -91,7 +95,7 @@ static int get_next_available_vmid(struct core_vmid *core_vmid)
 uint8_t mod_vmid_get_for_ptb(struct mod_vmid *mod_vmid, uint64_t ptb)
 {
struct core_vmid *core_vmid = MOD_VMID_TO_CORE(mod_vmid);
-   unsigned int vmid = 0;
+   int vmid = 0;
 
// Physical address gets vmid 0
if (ptb == 0)
-- 
2.25.0



[PATCH 09/35] drm/amd/display: Add dmcu f/w loading for NV12

2020-02-21 Thread Rodrigo Siqueira
From: Roman Li 

[Why]
We need DMCU for features like PSR and ABM.

[How]
Add path to dmcu firmware binary and load it for Navi12.

Signed-off-by: Roman Li 
Reviewed-by: Hersen Wu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 933bbe8350bb..bd88396a8469 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -98,6 +98,9 @@ MODULE_FIRMWARE(FIRMWARE_RENOIR_DMUB);
 #define FIRMWARE_RAVEN_DMCU"amdgpu/raven_dmcu.bin"
 MODULE_FIRMWARE(FIRMWARE_RAVEN_DMCU);
 
+#define FIRMWARE_NAVI12_DMCU"amdgpu/navi12_dmcu.bin"
+MODULE_FIRMWARE(FIRMWARE_NAVI12_DMCU);
+
 /* Number of bytes in PSP header for firmware. */
 #define PSP_HEADER_BYTES 0x100
 
@@ -1088,9 +1091,11 @@ static int load_dmcu_fw(struct amdgpu_device *adev)
case CHIP_VEGA20:
case CHIP_NAVI10:
case CHIP_NAVI14:
-   case CHIP_NAVI12:
case CHIP_RENOIR:
return 0;
+   case CHIP_NAVI12:
+   fw_name_dmcu = FIRMWARE_NAVI12_DMCU;
+   break;
case CHIP_RAVEN:
if (ASICREV_IS_PICASSO(adev->external_rev_id))
fw_name_dmcu = FIRMWARE_RAVEN_DMCU;
-- 
2.25.0



[PATCH 01/35] drm/amd/display: dal_ddc_i2c_payloads_create can fail causing panic

2020-02-21 Thread Rodrigo Siqueira
From: Aric Cyr 

[Why]
Since the i2c payload allocation can fail, we need to check return codes

[How]
Clean up i2c payload allocations and check for errors

Signed-off-by: Aric Cyr 
Reviewed-by: Joshua Aberback 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/core/dc_link_ddc.c | 52 +--
 1 file changed, 25 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
index a5586f68b4da..256889eed93e 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
@@ -126,22 +126,16 @@ struct aux_payloads {
struct vector payloads;
 };
 
-static struct i2c_payloads *dal_ddc_i2c_payloads_create(struct dc_context *ctx, uint32_t count)
+static bool dal_ddc_i2c_payloads_create(
+   struct dc_context *ctx,
+   struct i2c_payloads *payloads,
+   uint32_t count)
 {
-   struct i2c_payloads *payloads;
-
-   payloads = kzalloc(sizeof(struct i2c_payloads), GFP_KERNEL);
-
-   if (!payloads)
-   return NULL;
-
if (dal_vector_construct(
&payloads->payloads, ctx, count, sizeof(struct i2c_payload)))
-   return payloads;
-
-   kfree(payloads);
-   return NULL;
+   return true;
 
+   return false;
 }
 
 static struct i2c_payload *dal_ddc_i2c_payloads_get(struct i2c_payloads *p)
@@ -154,14 +148,12 @@ static uint32_t dal_ddc_i2c_payloads_get_count(struct i2c_payloads *p)
return p->payloads.count;
 }
 
-static void dal_ddc_i2c_payloads_destroy(struct i2c_payloads **p)
+static void dal_ddc_i2c_payloads_destroy(struct i2c_payloads *p)
 {
-   if (!p || !*p)
+   if (!p)
return;
-   dal_vector_destruct(&(*p)->payloads);
-   kfree(*p);
-   *p = NULL;
 
+   dal_vector_destruct(&p->payloads);
 }
 
 #define DDC_MIN(a, b) (((a) < (b)) ? (a) : (b))
@@ -524,9 +516,13 @@ bool dal_ddc_service_query_ddc_data(
 
uint32_t payloads_num = write_payloads + read_payloads;
 
+
if (write_size > EDID_SEGMENT_SIZE || read_size > EDID_SEGMENT_SIZE)
return false;
 
+   if (!payloads_num)
+   return false;
+
/*TODO: len of payload data for i2c and aux is uint8,
 *  but we want to read 256 over i2c*/
if (dal_ddc_service_is_in_aux_transaction_mode(ddc)) {
@@ -557,23 +553,25 @@ bool dal_ddc_service_query_ddc_data(
ret = dal_ddc_submit_aux_command(ddc, &payload);
}
} else {
-   struct i2c_payloads *payloads =
-   dal_ddc_i2c_payloads_create(ddc->ctx, payloads_num);
+   struct i2c_command command = {0};
+   struct i2c_payloads payloads;
+
+   if (!dal_ddc_i2c_payloads_create(ddc->ctx, &payloads, payloads_num))
+   return false;
 
-   struct i2c_command command = {
-   .payloads = dal_ddc_i2c_payloads_get(payloads),
-   .number_of_payloads = 0,
-   .engine = DDC_I2C_COMMAND_ENGINE,
-   .speed = ddc->ctx->dc->caps.i2c_speed_in_khz };
+   command.payloads = dal_ddc_i2c_payloads_get(&payloads);
+   command.number_of_payloads = 0;
+   command.engine = DDC_I2C_COMMAND_ENGINE;
+   command.speed = ddc->ctx->dc->caps.i2c_speed_in_khz;
 
dal_ddc_i2c_payloads_add(
-   payloads, address, write_size, write_buf, true);
+   &payloads, address, write_size, write_buf, true);
 
dal_ddc_i2c_payloads_add(
-   payloads, address, read_size, read_buf, false);
+   &payloads, address, read_size, read_buf, false);
 
command.number_of_payloads =
-   dal_ddc_i2c_payloads_get_count(payloads);
+   dal_ddc_i2c_payloads_get_count(&payloads);
 
ret = dm_helpers_submit_i2c(
ddc->ctx,
-- 
2.25.0



[PATCH 17/35] drm/amd/display: programming last delta in output transfer function LUT to a correct value

2020-02-21 Thread Rodrigo Siqueira
From: Vladimir Stempen 

[Why]
Currently DAL programs negative slope for the last point of output
transfer function curve.

[How]
Apply a check that the RGB values of the last PWL point are not
smaller than those of the previous point. If they are smaller,
initialize the last point's values to the sum of the previous PWL
value and the previous PWL delta.
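The clamp described above can be sketched in isolation; plain doubles stand in here for the driver's dc_fixpt fixed-point type:

```c
#include <assert.h>

/* If the last curve point dips below its predecessor (which would
 * program a negative slope), replace it with the previous value
 * extended by the previous segment's delta, keeping the LUT monotonic. */
static double clamp_last_pwl_point(double prev, double prev_delta, double last)
{
	if (last < prev)
		last = prev + prev_delta;
	return last;
}
```

In the patch itself this is done per channel (red, green, blue) just before the deltas are computed, so the final delta can never come out negative.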

Signed-off-by: Vladimir Stempen 
Reviewed-by: Tony Cheng 
Acked-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c  | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
index bbd6e01b3eca..47a39eb9400b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
@@ -316,6 +316,7 @@ bool cm_helper_translate_curve_to_hw_format(
struct pwl_result_data *rgb_resulted;
struct pwl_result_data *rgb;
struct pwl_result_data *rgb_plus_1;
+   struct pwl_result_data *rgb_minus_1;
 
int32_t region_start, region_end;
int32_t i;
@@ -465,9 +466,20 @@ bool cm_helper_translate_curve_to_hw_format(
 
rgb = rgb_resulted;
rgb_plus_1 = rgb_resulted + 1;
+   rgb_minus_1 = rgb;
 
i = 1;
while (i != hw_points + 1) {
+
+   if (i >= hw_points - 1) {
+   if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
+   rgb_plus_1->red = dc_fixpt_add(rgb->red, rgb_minus_1->delta_red);
+   if (dc_fixpt_lt(rgb_plus_1->green, rgb->green))
+   rgb_plus_1->green = dc_fixpt_add(rgb->green, rgb_minus_1->delta_green);
+   if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue))
+   rgb_plus_1->blue = dc_fixpt_add(rgb->blue, rgb_minus_1->delta_blue);
+   }
+
rgb->delta_red   = dc_fixpt_sub(rgb_plus_1->red,   rgb->red);
rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green);
rgb->delta_blue  = dc_fixpt_sub(rgb_plus_1->blue,  rgb->blue);
@@ -482,6 +494,7 @@ bool cm_helper_translate_curve_to_hw_format(
}
 
++rgb_plus_1;
+   rgb_minus_1 = rgb;
++rgb;
++i;
}
-- 
2.25.0



[PATCH 03/35] drm/amd/display: Update TX masks correctly

2020-02-21 Thread Rodrigo Siqueira
From: Alvin Lee 

[Why]
Bugs occur when a TX interrupt arrives while there is no USB-C port on the board.

[How]
Check PHY for USB-C before enabling TX interrupt in DMCU FW.
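The mask built in the diff is the usual one-bit-per-transmitter pattern (bit 0 is PHY A, bit 5 is PHY F). A standalone sketch, with illustrative PHY ids in place of the real TRANSMITTER_UNIPHY_* enum:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative transmitter ids: bit 0 is PHY A, bit 5 is PHY F. */
enum phy_id { PHY_A, PHY_B, PHY_C, PHY_D, PHY_E, PHY_F };

/* Set one bit per USB-C capable PHY, ignoring ids outside A..F. */
static uint32_t build_tx_interrupt_mask(const int *usb_c_phys, int count)
{
	uint32_t mask = 0;
	int i;

	for (i = 0; i < count; i++)
		if (usb_c_phys[i] >= PHY_A && usb_c_phys[i] <= PHY_F)
			mask |= 1u << usb_c_phys[i];
	return mask;
}
```

The firmware then only enables TX interrupts for bits set in this mask, so boards without USB-C never see the interrupt at all.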

Signed-off-by: Alvin Lee 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
index 30d953acd016..f0cebe721bcc 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
@@ -378,6 +378,11 @@ static bool dcn10_dmcu_init(struct dmcu *dmcu)
struct dce_dmcu *dmcu_dce = TO_DCE_DMCU(dmcu);
const struct dc_config *config = &dmcu->ctx->dc->config;
bool status = false;
+   struct dc_context *ctx = dmcu->ctx;
+   unsigned int i;
+   //  5 4 3 2 1 0
+   //  F E D C B A - bit 0 is A, bit 5 is F
+   unsigned int tx_interrupt_mask = 0;
 
PERF_TRACE();
/*  Definition of DC_DMCU_SCRATCH
@@ -387,6 +392,15 @@ static bool dcn10_dmcu_init(struct dmcu *dmcu)
 */
dmcu->dmcu_state = REG_READ(DC_DMCU_SCRATCH);
 
+   for (i = 0; i < ctx->dc->link_count; i++) {
+   if (ctx->dc->links[i]->link_enc->features.flags.bits.DP_IS_USB_C) {
+   if (ctx->dc->links[i]->link_enc->transmitter >= TRANSMITTER_UNIPHY_A &&
+   ctx->dc->links[i]->link_enc->transmitter <= TRANSMITTER_UNIPHY_F) {
+   tx_interrupt_mask |= 1 << ctx->dc->links[i]->link_enc->transmitter;
+   }
+   }
+   }
+
switch (dmcu->dmcu_state) {
case DMCU_UNLOADED:
status = false;
@@ -401,6 +415,8 @@ static bool dcn10_dmcu_init(struct dmcu *dmcu)
/* Set backlight ramping stepsize */
REG_WRITE(MASTER_COMM_DATA_REG2, abm_gain_stepsize);
 
+   REG_WRITE(MASTER_COMM_DATA_REG3, tx_interrupt_mask);
+
/* Set command to initialize microcontroller */
REG_UPDATE(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0,
MCP_INIT_DMCU);
-- 
2.25.0



[PATCH 12/35] drm/amd/display: Only round InfoFrame refresh rates

2020-02-21 Thread Rodrigo Siqueira
From: Aric Cyr 

[Why]
When calculating nominal refresh rates, don't round.
Only the VSIF needs to be rounded.

[How]
Revert rounding change for nominal and just round when forming the
FreeSync VSIF.
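The rounding applied when forming the VSIF is the standard add-half-then-divide trick. A sketch, assuming micro-Hz units as the `_in_uhz` field names suggest (the exact constants in the archived diff appear truncated, so the divisor here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Round to the nearest Hz by adding half the divisor before the
 * truncating integer division. */
static uint32_t uhz_to_hz_rounded(uint64_t rate_in_uhz)
{
	return (uint32_t)((rate_in_uhz + 500000ULL) / 1000000ULL);
}
```

The nominal rate keeps full micro-Hz precision for internal calculations; only the byte-sized VSIF fields get this rounded value.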

Signed-off-by: Aric Cyr 
Reviewed-by: Anthony Koo 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/modules/freesync/freesync.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
index b9992ebf77a6..4e542826cd26 100644
--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
@@ -524,12 +524,12 @@ static void build_vrr_infopacket_data(const struct mod_vrr_params *vrr,
infopacket->sb[6] |= 0x04;
 
/* PB7 = FreeSync Minimum refresh rate (Hz) */
-   infopacket->sb[7] = (unsigned char)(vrr->min_refresh_in_uhz / 100);
+   infopacket->sb[7] = (unsigned char)((vrr->min_refresh_in_uhz + 50) / 100);
 
/* PB8 = FreeSync Maximum refresh rate (Hz)
 * Note: We should never go above the field rate of the mode timing set.
 */
-   infopacket->sb[8] = (unsigned char)(vrr->max_refresh_in_uhz / 100);
+   infopacket->sb[8] = (unsigned char)((vrr->max_refresh_in_uhz + 50) / 100);
 
 
//FreeSync HDR
@@ -747,10 +747,6 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
nominal_field_rate_in_uhz =
mod_freesync_calc_nominal_field_rate(stream);
 
-   /* Rounded to the nearest Hz */
-   nominal_field_rate_in_uhz = 100ULL *
-   div_u64(nominal_field_rate_in_uhz + 50, 100);
-
min_refresh_in_uhz = in_config->min_refresh_in_uhz;
max_refresh_in_uhz = in_config->max_refresh_in_uhz;
 
-- 
2.25.0



[PATCH 13/35] drm/amd/display: 3.2.73

2020-02-21 Thread Rodrigo Siqueira
From: Aric Cyr 

Signed-off-by: Aric Cyr 
Reviewed-by: Aric Cyr 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index f77b3acfeb06..b3f6311d3564 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -39,7 +39,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.72"
+#define DC_VER "3.2.73"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
-- 
2.25.0



[PATCH 18/35] drm/amd/display: Add DMUB firmware state debugfs

2020-02-21 Thread Rodrigo Siqueira
From: Nicholas Kazlauskas 

[Why]
Firmware state helps to debug sequence issues and hangs for DMCUB
commands, but we don't have an easy mechanism to dump it from the driver.

[How]
Add a debugfs entry to dump the current firmware state.
Example usage:

cat /sys/kernel/debug/dri/0/amdgpu_dm_dmub_fw_state

Signed-off-by: Nicholas Kazlauskas 
Reviewed-by: Hersen Wu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 27 +++
 1 file changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 6bc0bdc8835c..0461fecd68db 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -732,6 +732,29 @@ static int dmub_tracebuffer_show(struct seq_file *m, void *data)
return 0;
 }
 
+/**
+ * Returns the DMCUB firmware state contents.
+ * Example usage: cat /sys/kernel/debug/dri/0/amdgpu_dm_dmub_fw_state
+ */
+static int dmub_fw_state_show(struct seq_file *m, void *data)
+{
+   struct amdgpu_device *adev = m->private;
+   struct dmub_srv_fb_info *fb_info = adev->dm.dmub_fb_info;
+   uint8_t *state_base;
+   uint32_t state_size;
+
+   if (!fb_info)
+   return 0;
+
+   state_base = (uint8_t *)fb_info->fb[DMUB_WINDOW_6_FW_STATE].cpu_addr;
+   if (!state_base)
+   return 0;
+
+   state_size = fb_info->fb[DMUB_WINDOW_6_FW_STATE].size;
+
+   return seq_write(m, state_base, state_size);
+}
+
 /*
  * Returns the current and maximum output bpc for the connector.
  * Example usage: cat /sys/kernel/debug/dri/0/DP-1/output_bpc
@@ -937,6 +960,7 @@ static ssize_t dp_dpcd_data_read(struct file *f, char __user *buf,
return read_size - r;
 }
 
+DEFINE_SHOW_ATTRIBUTE(dmub_fw_state);
 DEFINE_SHOW_ATTRIBUTE(dmub_tracebuffer);
 DEFINE_SHOW_ATTRIBUTE(output_bpc);
 DEFINE_SHOW_ATTRIBUTE(vrr_range);
@@ -1252,5 +1276,8 @@ int dtn_debugfs_init(struct amdgpu_device *adev)
debugfs_create_file_unsafe("amdgpu_dm_dmub_tracebuffer", 0644, root,
   adev, &dmub_tracebuffer_fops);
 
+   debugfs_create_file_unsafe("amdgpu_dm_dmub_fw_state", 0644, root,
+  adev, &dmub_fw_state_fops);
+
return 0;
 }
-- 
2.25.0



[PATCH 20/35] drm/amd/display: Workaround required for link training reliability

2020-02-21 Thread Rodrigo Siqueira
From: David Galiffi 

[Why]
A software workaround is required for all vendor-built cards on platform.

[How]
When performing DP link training, we must send TPS1 before DPCD:100h is
written with the proper bit rate value. This change must be applied in
all cases where link training happens.

Signed-off-by: David Galiffi 
Reviewed-by: Tony Cheng 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 19 ++-
 drivers/gpu/drm/amd/display/dc/dc_link.h  |  1 +
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index c0fcee4b5b69..8de9d6f9a477 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -945,6 +945,17 @@ static enum link_training_result perform_channel_equalization_sequence(
 }
 #define TRAINING_AUX_RD_INTERVAL 100 //us
 
+static void start_clock_recovery_pattern_early(struct dc_link *link,
+   struct link_training_settings *lt_settings,
+   uint32_t offset)
+{
+   DC_LOG_HW_LINK_TRAINING("%s\n GPU sends TPS1. Wait 400us.\n",
+   __func__);
+   dp_set_hw_training_pattern(link, DP_TRAINING_PATTERN_SEQUENCE_1, offset);
+   dp_set_hw_lane_settings(link, lt_settings, offset);
+   udelay(400);
+}
+
 static enum link_training_result perform_clock_recovery_sequence(
struct dc_link *link,
struct link_training_settings *lt_settings,
@@ -962,7 +973,8 @@ static enum link_training_result perform_clock_recovery_sequence(
retries_cr = 0;
retry_count = 0;
 
-   dp_set_hw_training_pattern(link, tr_pattern, offset);
+   if (!link->wa_flags.dp_early_cr_pattern)
+   dp_set_hw_training_pattern(link, tr_pattern, offset);
 
/* najeeb - The synaptics MST hub can put the LT in
* infinite loop by switching the VS
@@ -1434,6 +1446,9 @@ enum link_training_result dc_link_dp_perform_link_training(
&link->preferred_training_settings,
<_settings);
 
+   if (link->wa_flags.dp_early_cr_pattern)
+   start_clock_recovery_pattern_early(link, <_settings, DPRX);
+
/* 1. set link rate, lane count and spread. */
dpcd_set_link_settings(link, <_settings);
 
@@ -1654,6 +1669,8 @@ enum link_training_result dc_link_dp_sync_lt_attempt(
dp_set_panel_mode(link, panel_mode);
 
/* Attempt to train with given link training settings */
+   if (link->wa_flags.dp_early_cr_pattern)
+   start_clock_recovery_pattern_early(link, <_settings, DPRX);
 
/* Set link rate, lane count and spread. */
dpcd_set_link_settings(link, <_settings);
diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h
index 5f341e960506..6344de3ca979 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
@@ -135,6 +135,7 @@ struct dc_link {
bool dp_keep_receiver_powered;
bool dp_skip_DID2;
bool dp_skip_reset_segment;
+   bool dp_early_cr_pattern;
} wa_flags;
struct link_mst_stream_allocation_table mst_stream_alloc_table;
 
-- 
2.25.0



[PATCH 21/35] drm/amd/display: Monitor patch to delay setting ignore MSA bit

2020-02-21 Thread Rodrigo Siqueira
From: Jaehyun Chung 

[Why]
Some displays clear the ignore MSA bit on a mode change, which causes a
black screen when programming variable vtotals. Programming of the
ignore MSA bit needs to be delayed, or the bit must be set again, for
it to be retained.

[How]
Create patch to delay programming ignore MSA bit after unblanking
stream.

Signed-off-by: Jaehyun Chung 
Reviewed-by: Aric Cyr 
Acked-by: Anthony Koo 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 3 +++
 drivers/gpu/drm/amd/display/dc/dc_types.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index a3bfa05c545e..3420d098d771 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -3095,6 +3095,9 @@ void core_link_enable_stream(
dc->hwss.unblank_stream(pipe_ctx,
&pipe_ctx->stream->link->cur_link_settings);
 
+   if (stream->link->local_sink->edid_caps.panel_patch.delay_ignore_msa > 0)
+   msleep(stream->link->local_sink->edid_caps.panel_patch.delay_ignore_msa);
+
if (dc_is_dp_signal(pipe_ctx->stream->signal))
enable_stream_features(pipe_ctx);
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
index 1490732a4b44..299f6e00f576 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
@@ -230,6 +230,7 @@ struct dc_panel_patch {
unsigned int extra_delay_backlight_off;
unsigned int extra_t7_ms;
unsigned int skip_scdc_overwrite;
+   unsigned int delay_ignore_msa;
 };
 
 struct dc_edid_caps {
-- 
2.25.0



[PATCH 15/35] drm/amd/display: Add function pointers for panel related hw functions

2020-02-21 Thread Rodrigo Siqueira
From: Anthony Koo 

[Why]
Make the panel backlight and power on/off functions into
hardware-specific function pointers

[How]
Add function pointers for panel related hw functions
 - is_panel_powered_on
 - is_panel_backlight_on

Signed-off-by: Anthony Koo 
Reviewed-by: Aric Cyr 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../amd/display/dc/dce110/dce110_hw_sequencer.c   | 15 ++-
 .../amd/display/dc/dce110/dce110_hw_sequencer.h   |  4 
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c |  2 ++
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c |  2 ++
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c |  2 ++
 .../drm/amd/display/dc/inc/hw_sequencer_private.h |  2 ++
 6 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 28b681b33f7a..0976e378659f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -698,8 +698,10 @@ void dce110_enable_stream(struct pipe_ctx *pipe_ctx)
 }
 
 /*todo: cloned in stream enc, fix*/
-static bool is_panel_backlight_on(struct dce_hwseq *hws)
+bool dce110_is_panel_backlight_on(struct dc_link *link)
 {
+   struct dc_context *ctx = link->ctx;
+   struct dce_hwseq *hws = ctx->dc->hwseq;
uint32_t value;
 
REG_GET(LVTMA_PWRSEQ_CNTL, LVTMA_BLON, &value);
@@ -707,11 +709,12 @@ static bool is_panel_backlight_on(struct dce_hwseq *hws)
return value;
 }
 
-static bool is_panel_powered_on(struct dce_hwseq *hws)
+bool dce110_is_panel_powered_on(struct dc_link *link)
 {
+   struct dc_context *ctx = link->ctx;
+   struct dce_hwseq *hws = ctx->dc->hwseq;
uint32_t pwr_seq_state, dig_on, dig_on_ovrd;
 
-
	REG_GET(LVTMA_PWRSEQ_STATE, LVTMA_PWRSEQ_TARGET_STATE_R, &pwr_seq_state);
 
	REG_GET_2(LVTMA_PWRSEQ_CNTL, LVTMA_DIGON, &dig_on, LVTMA_DIGON_OVRD, &dig_on_ovrd);
@@ -818,7 +821,7 @@ void dce110_edp_power_control(
return;
}
 
-   if (power_up != is_panel_powered_on(hwseq)) {
+   if (power_up != hwseq->funcs.is_panel_powered_on(link)) {
/* Send VBIOS command to prompt eDP panel power */
if (power_up) {
unsigned long long current_ts = dm_get_timestamp(ctx);
@@ -898,7 +901,7 @@ void dce110_edp_backlight_control(
return;
}
 
-   if (enable && is_panel_backlight_on(hws)) {
+   if (enable && hws->funcs.is_panel_backlight_on(link)) {
DC_LOG_HW_RESUME_S3(
"%s: panel already powered up. Do nothing.\n",
__func__);
@@ -2764,6 +2767,8 @@ static const struct hwseq_private_funcs dce110_private_funcs = {
.disable_stream_gating = NULL,
.enable_stream_gating = NULL,
.edp_backlight_control = dce110_edp_backlight_control,
+   .is_panel_backlight_on = dce110_is_panel_backlight_on,
+   .is_panel_powered_on = dce110_is_panel_powered_on,
 };
 
 void dce110_hw_sequencer_construct(struct dc *dc)
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
index 26a9c14a58b1..34be166e8ff0 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
@@ -85,5 +85,9 @@ void dce110_edp_wait_for_hpd_ready(
struct dc_link *link,
bool power_up);
 
+bool dce110_is_panel_backlight_on(struct dc_link *link);
+
+bool dce110_is_panel_powered_on(struct dc_link *link);
+
 #endif /* __DC_HWSS_DCE110_H__ */
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
index b88ef9703b2b..dd02d3983695 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
@@ -87,6 +87,8 @@ static const struct hwseq_private_funcs dcn10_private_funcs = {
.reset_hw_ctx_wrap = dcn10_reset_hw_ctx_wrap,
.enable_stream_timing = dcn10_enable_stream_timing,
.edp_backlight_control = dce110_edp_backlight_control,
+   .is_panel_backlight_on = dce110_is_panel_backlight_on,
+   .is_panel_powered_on = dce110_is_panel_powered_on,
.disable_stream_gating = NULL,
.enable_stream_gating = NULL,
.setup_vupdate_interrupt = dcn10_setup_vupdate_interrupt,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
index 44ec5f5f9fd2..6c4f90f58656 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
@@ -97,6 +97,8 @@ static const struct hwseq_private_funcs dcn20_private_funcs = {
.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
.enable_stream_timing = dcn20

[PATCH 04/35] drm/amd/display: dmub back door load

2020-02-21 Thread Rodrigo Siqueira
From: Hersen Wu 

Signed-off-by: Hersen Wu 
Signed-off-by: Jerry (Fangzhi) Zuo 
Reviewed-by: Hersen Wu 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 0e7d75436d59..aeca0ada2484 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -823,6 +823,10 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
hw_params.fb_base = adev->gmc.fb_start;
hw_params.fb_offset = adev->gmc.aper_base;
 
+   /* backdoor load firmware and trigger dmub running */
+   if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+   hw_params.load_inst_const = true;
+
if (dmcu)
hw_params.psp_version = dmcu->psp_version;
 
@@ -1187,11 +1191,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
return 0;
}
 
-   if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
-   DRM_WARN("Only PSP firmware loading is supported for DMUB\n");
-   return 0;
-   }
-
hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
AMDGPU_UCODE_ID_DMCUB;
-- 
2.25.0



[PATCH 07/35] drm/amd/display: Wait for DMCUB to finish loading before executing commands

2020-02-21 Thread Rodrigo Siqueira
From: Nicholas Kazlauskas 

[Why]
When we execute the first command for ASIC_INIT for command table
offloading we can hit a timing scenario such that the interrupts
for the inbox wptr haven't been enabled yet and the first command
is ignored until the second command is sent.

[How]
This happens when either SCRATCH0 already holds the correct status
code or the autoload check is unsupported.

Clear SCRATCH0 during reset.

Also ensure that we don't accidentally reset the ASIC again in case
of a hang by clearing GPINT while we're at it.

Signed-off-by: Nicholas Kazlauskas 
Reviewed-by: Chris Park 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index 993e47e99fbe..63bb9e2c81de 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -116,6 +116,10 @@ void dmub_dcn20_reset(struct dmub_srv *dmub)
break;
}
 
+   /* Clear the GPINT command manually so we don't reset again. */
+   cmd.all = 0;
+   dmub->hw_funcs.set_gpint(dmub, cmd);
+
/* Force reset in case we timed out, DMCUB is likely hung. */
}
 
@@ -124,6 +128,7 @@ void dmub_dcn20_reset(struct dmub_srv *dmub)
REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
REG_WRITE(DMCUB_INBOX1_RPTR, 0);
REG_WRITE(DMCUB_INBOX1_WPTR, 0);
+   REG_WRITE(DMCUB_SCRATCH0, 0);
 }
 
 void dmub_dcn20_reset_release(struct dmub_srv *dmub)
-- 
2.25.0



[PATCH 10/35] drm/amd/display: do not force UCLK DPM to stay at highest state during display off in DCN2

2020-02-21 Thread Rodrigo Siqueira
From: Samson Tam 

[Why]
Add optimization to allow pstate change support when all displays
are off in DCN2.

[How]
Add clk_mgr_helper_get_active_plane_cnt() to sum plane_count for all
valid stream_status[].  If plane_count is 0, then there are no active
or virtual streams present. Use plane_count == 0 as extra condition to
enable p_state_change_support in dcn2_update_clocks().
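The helper's logic reduces to a plain sum over the stream statuses; a standalone sketch with simplified stand-in types (the real structs live in the DC headers):

```c
#include <assert.h>

#define MAX_STREAMS_SKETCH 6 /* illustrative bound */

struct stream_status_sketch { int plane_count; };

struct state_sketch {
	int stream_count;
	struct stream_status_sketch stream_status[MAX_STREAMS_SKETCH];
};

/* Sum plane_count over every stream (active and virtual). A total of
 * zero means nothing is being composed, so p-state change support can
 * stay enabled even though streams are present. */
static int active_plane_cnt(const struct state_sketch *ctx)
{
	int i, total = 0;

	for (i = 0; i < ctx->stream_count; i++)
		total += ctx->stream_status[i].plane_count;
	return total;
}
```

dcn2_update_clocks() then ORs `total_plane_count == 0` into the requested p_state_change_support instead of forcing UCLK to stay at the highest DPM state.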

Signed-off-by: Samson Tam 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  | 19 +++
 .../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c  |  8 ++--
 .../amd/display/dc/inc/hw/clk_mgr_internal.h  |  4 
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
index a78e5c74c79c..2f43f1618db8 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
@@ -63,6 +63,25 @@ int clk_mgr_helper_get_active_display_cnt(
return display_count;
 }
 
+int clk_mgr_helper_get_active_plane_cnt(
+   struct dc *dc,
+   struct dc_state *context)
+{
+   int i, total_plane_count;
+
+   total_plane_count = 0;
+   for (i = 0; i < context->stream_count; i++) {
+   const struct dc_stream_status stream_status = context->stream_status[i];
+
+   /*
+* Sum up plane_count for all streams ( active and virtual ).
+*/
+   total_plane_count += stream_status.plane_count;
+   }
+
+   return total_plane_count;
+}
+
 void clk_mgr_exit_optimized_pwr_state(const struct dc *dc, struct clk_mgr *clk_mgr)
 {
struct dc_link *edp_link = get_edp_link(dc);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
index 49ce46b543ea..68a1120ff674 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
@@ -158,6 +158,8 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
bool dpp_clock_lowered = false;
struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu;
bool force_reset = false;
+   bool p_state_change_support;
+   int total_plane_count;
 
if (dc->work_arounds.skip_clock_update)
return;
@@ -213,9 +215,11 @@ void dcn2_update_clocks(struct clk_mgr *clk_mgr_base,
		pp_smu->set_hard_min_socclk_by_freq(&pp_smu->pp_smu, clk_mgr_base->clks.socclk_khz / 1000);
}
 
-   if (should_update_pstate_support(safe_to_lower, new_clocks->p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
+   total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+   p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
+   if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
		clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
-   clk_mgr_base->clks.p_state_change_support = new_clocks->p_state_change_support;
+   clk_mgr_base->clks.p_state_change_support = p_state_change_support;
if (pp_smu && pp_smu->set_pstate_handshake_support)
			pp_smu->set_pstate_handshake_support(&pp_smu->pp_smu, clk_mgr_base->clks.p_state_change_support);
}
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h 
b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
index 862952c0286a..9311d0de377f 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
@@ -296,6 +296,10 @@ int clk_mgr_helper_get_active_display_cnt(
struct dc *dc,
struct dc_state *context);
 
+int clk_mgr_helper_get_active_plane_cnt(
+   struct dc *dc,
+   struct dc_state *context);
+
 
 
 #endif //__DAL_CLK_MGR_INTERNAL_H__
-- 
2.25.0
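The hunk above makes P-state switching legal whenever the new clock state allows it or no planes are active at all. Stripped of the clk_mgr plumbing, the decision reduces to a tiny predicate; the following is a stand-alone sketch of that logic, not the driver code itself:

```c
#include <stdbool.h>

/* Models the p_state_change_support decision in dcn2_update_clocks():
 * allow P-state changes when the new clock state requests support, or
 * when no planes are active (an idle display cannot underflow). */
bool pstate_change_support(bool new_clocks_support, int total_plane_count)
{
    return new_clocks_support || total_plane_count == 0;
}
```

With zero active planes the memory clock can always be switched safely, which is why the patch sums plane_count across all streams before making the decision.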

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 06/35] drm/amd/display: Disable PG on NV12

2020-02-21 Thread Rodrigo Siqueira
From: Alvin Lee 

[Why]
According to HW team, PG is dropped for NV12, but programming
the registers will still cause power to be consumed, so don't
program for NV12.

[How]
Set function pointer to NULL if NV12

Signed-off-by: Alvin Lee 
Reviewed-by: Jun Lei 
Acked-by: Rodrigo Siqueira 
Acked-by: Harry Wentland 
---
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c| 7 ---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c| 9 +
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 5f56cc13d6dc..113ff6731902 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1268,7 +1268,8 @@ void dcn10_init_hw(struct dc *dc)
}
 
//Enable ability to power gate / don't force power on 
permanently
-   hws->funcs.enable_power_gating_plane(hws, true);
+   if (hws->funcs.enable_power_gating_plane)
+   hws->funcs.enable_power_gating_plane(hws, true);
 
return;
}
@@ -1378,8 +1379,8 @@ void dcn10_init_hw(struct dc *dc)
 
REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
}
-
-   hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+   if (hws->funcs.enable_power_gating_plane)
+   hws->funcs.enable_power_gating_plane(dc->hwseq, true);
 
if (dc->clk_mgr->funcs->notify_wm_ranges)
dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 1061faccec9c..080d4581a93d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -3760,6 +3760,15 @@ static bool dcn20_resource_construct(
 
dcn20_hw_sequencer_construct(dc);
 
+   // IF NV12, set PG function pointer to NULL. It's not that
+   // PG isn't supported for NV12, it's that we don't want to
+   // program the registers because that will cause more power
+   // to be consumed. We could have created dcn20_init_hw to get
+   // the same effect by checking ASIC rev, but there was a
+   // request at some point to not check ASIC rev on hw sequencer.
+   if (ASICREV_IS_NAVI12_P(dc->ctx->asic_id.hw_internal_rev))
+   dc->hwseq->funcs.enable_power_gating_plane = NULL;
+
dc->caps.max_planes =  pool->base.pipe_count;
 
for (i = 0; i < dc->caps.max_planes; ++i)
-- 
2.25.0



Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Deucher, Alexander
[AMD Public Use]

Not at the moment.  But we could add a debugfs file which just wraps 
amdgpu_gfx_off_ctrl().  That said, maybe we just add a delay here or use a 
timer to delay turning gfxoff back on again so that we aren't turning it on and 
off so rapidly.

Alex
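One way to model that delay idea in isolation: keep a disable-request count, and instead of re-enabling GFXOFF the moment the count drops to zero, mark a deferred re-enable that only takes effect after a quiet period. This is only a sketch of the debouncing scheme; the struct and the names (gfxoff_get(), gfxoff_put_delayed(), gfxoff_timer_fire()) are made up for illustration, not amdgpu's gfx_off_ctrl internals:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a debounced GFXOFF toggle: disable_count plays the
 * role of a per-device request counter, and pending_enable stands in for a
 * delayed work item that re-allows GFXOFF only after a quiet period. */
struct gfxoff_state {
    int  disable_count;   /* >0 means GFXOFF must stay off */
    bool pending_enable;  /* a deferred re-enable is queued */
    bool hw_gfxoff;       /* whether the hardware may enter GFXOFF */
};

void gfxoff_get(struct gfxoff_state *s)
{
    s->pending_enable = false;       /* cancel any queued re-enable */
    if (s->disable_count++ == 0)
        s->hw_gfxoff = false;        /* first requester turns GFXOFF off */
}

void gfxoff_put_delayed(struct gfxoff_state *s)
{
    assert(s->disable_count > 0);
    if (--s->disable_count == 0)
        s->pending_enable = true;    /* queue re-enable rather than doing it now */
}

/* Stands in for the delayed-work timer firing after the quiet period. */
void gfxoff_timer_fire(struct gfxoff_state *s)
{
    if (s->pending_enable && s->disable_count == 0) {
        s->hw_gfxoff = true;
        s->pending_enable = false;
    }
}
```

A burst of debugfs reads then bumps the count up and down without ever flipping the hardware state in between; only the timer firing after the last access re-allows GFXOFF.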


From: Christian König 
Sent: Friday, February 21, 2020 10:43 AM
To: Deucher, Alexander ; Huang, Ray 
; Liu, Monk 
Cc: StDenis, Tom ; Alex Deucher ; 
amd-gfx list 
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access 
to MMIO

Do we have a way to disable GFXOFF on the fly?

If not maybe it would be a good idea to add a separate debugfs file to do this.

Christian.

Am 21.02.20 um 16:39 schrieb Deucher, Alexander:

[AMD Public Use]

If we are trying to debug a reproducible hang, probably best to just disable 
gfxoff before messing with it to remove that as a factor.  Otherwise, the 
method included in this patch is the proper way to disable/enable GFXOFF 
dynamically.

Alex


From: amd-gfx 

 on behalf of Christian König 

Sent: Friday, February 21, 2020 10:27 AM
To: Huang, Ray ; Liu, Monk 

Cc: StDenis, Tom ; Alex 
Deucher ; amd-gfx list 

Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access 
to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a register you 
>> usually hit a hang, and in that case KIQ has probably already died
> If CP is busy, the gfx should be in "on" state at that time, we needn't use 
> KIQ.

Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Cause the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx 
>> 
>>  on behalf of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom 
>> Cc: Alex Deucher ; 
>> amd-gfx list 
>> 
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about use RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis 
  wrote:
> Signed-off-by: Tom St Denis 
> 
 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher 
 

> ---
>drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   if (pm_pg_lock)
>   mutex_lock(&adev->pm.mutex);
>
> +   amdgpu_gfx_off_ctrl(adev, false);
>   while (size) {
>   uint32_t value;
>
> @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   }
>
>end:
> +   amdgpu_gfx_off_ctrl(adev, true);
> +
>   if (use_bank) {
> amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 
> 0xffffffff);
>   mutex_unlock(&adev->grbm_idx_mutex);
> --
> 2.24.1
>

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Huang Rui
On Fri, Feb 21, 2020 at 11:27:10PM +0800, Christian König wrote:
> On 21.02.20 at 16:23, Huang Rui wrote:
> > On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
> >> Better not use KIQ, because when you use debugfs to read a register you 
> >> usually hit a hang, and in that case KIQ has probably already died
> > If CP is busy, the gfx should be in "on" state at that time, we needn't use 
> > KIQ.
> 
> Yeah, but how do you detect that?

I remember there is a bit in SMU or PWR register which is not exposed yet.
And we need to double-confirm with the SMU or RLC guys.

> Do we have a way to wake up the CP without asking power management to do
> so?

Use doorbell interrupt. RLC will detect the doorbell interrupt to tell SMU
to wake up gfx at runtime. So I suggest KIQ here.

> 
> Cause the register debug interface is meant to be used when the ASIC is 
> completely locked up. Sending messages to the SMU is not really going to 
> work in that situation.
> 

Agreed. Actually, we tried this kind of message a long time ago, and
sometimes got failures just like Tom sees here.

Thanks,
Ray

> Regards,
> Christian.
> 
> >
> > Thanks,
> > Ray
> >
> >> -----Original Message-----
> >> From: amd-gfx  on behalf of Huang Rui
> >> Sent: February 21, 2020 22:34
> >> To: StDenis, Tom 
> >> Cc: Alex Deucher ; amd-gfx list 
> >> 
> >> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to 
> >> MMIO
> >>
> >> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> >>> I got some messages after a while:
> >>>
> >>> [  741.788564] Failed to send Message 8.
> >>> [  746.671509] Failed to send Message 8.
> >>> [  748.749673] Failed to send Message 2b.
> >>> [  759.245414] Failed to send Message 7.
> >>> [  763.216902] Failed to send Message 2a.
> >>>
> >>> Are there any additional locks that should be held?  Because some
> >>> commands like --top or --waves can do a lot of distinct read
> >>> operations (causing a lot of enable/disable calls).
> >>>
> >>> I'm going to sit on this a bit since I don't think the patch is ready
> >>> for pushing out.
> >>>
> >> How about use RREG32_KIQ and WREG32_KIQ?
> >>
> >> Thanks,
> >> Ray
> >>
> >>> Tom
> >>>
> >>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
>  On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  
>  wrote:
> > Signed-off-by: Tom St Denis 
>  Please add a patch description.  With that fixed:
>  Reviewed-by: Alex Deucher 
> 
> > ---
> >drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> >1 file changed, 3 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > index 7379910790c9..66f763300c96 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > read, struct file *f,
> >   if (pm_pg_lock)
> >   mutex_lock(&adev->pm.mutex);
> >
> > +   amdgpu_gfx_off_ctrl(adev, false);
> >   while (size) {
> >   uint32_t value;
> >
> > @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > read, struct file *f,
> >   }
> >
> >end:
> > +   amdgpu_gfx_off_ctrl(adev, true);
> > +
> >   if (use_bank) {
> >   amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> > 0x);
> >   mutex_unlock(&adev->grbm_idx_mutex);
> > --
> > 2.24.1
> >

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Christian König

Do we have a way to disable GFXOFF on the fly?

If not maybe it would be a good idea to add a separate debugfs file to 
do this.


Christian.

On 21.02.20 at 16:39, Deucher, Alexander wrote:


[AMD Public Use]


If we are trying to debug a reproducible hang, probably best to just 
disable gfxoff before messing with it to remove that as a factor.  
Otherwise, the method included in this patch is the proper way to 
disable/enable GFXOFF dynamically.


Alex


*From:* amd-gfx  on behalf of 
Christian König 

*Sent:* Friday, February 21, 2020 10:27 AM
*To:* Huang, Ray ; Liu, Monk 
*Cc:* StDenis, Tom ; Alex Deucher 
; amd-gfx list 
*Subject:* Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around 
debugfs access to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a register 
you usually hit a hang, and in that case KIQ has probably already died
> If CP is busy, the gfx should be in "on" state at that time, we 
needn't use KIQ.


Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Cause the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx  on behalf of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom 
>> Cc: Alex Deucher ; amd-gfx list 

>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs 
access to MMIO

>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about use RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis 
 wrote:

> Signed-off-by: Tom St Denis 
 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher 

> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>    1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int 
amdgpu_debugfs_process_reg_op(bool read, struct file *f,

>   if (pm_pg_lock)
> mutex_lock(&adev->pm.mutex);
>
> +   amdgpu_gfx_off_ctrl(adev, false);
>   while (size) {
>   uint32_t value;
>
> @@ -192,6 +193,8 @@ static int 
amdgpu_debugfs_process_reg_op(bool read, struct file *f,

>   }
>
>    end:
> +   amdgpu_gfx_off_ctrl(adev, true);
> +
>   if (use_bank) {
> amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
> mutex_unlock(&adev->grbm_idx_mutex);
> --
> 2.24.1
>

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Deucher, Alexander
[AMD Public Use]

If we are trying to debug a reproducible hang, probably best to just disable 
gfxoff before messing with it to remove that as a factor.  Otherwise, the 
method included in this patch is the proper way to disable/enable GFXOFF 
dynamically.

Alex


From: amd-gfx  on behalf of Christian 
König 
Sent: Friday, February 21, 2020 10:27 AM
To: Huang, Ray ; Liu, Monk 
Cc: StDenis, Tom ; Alex Deucher ; 
amd-gfx list 
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access 
to MMIO

On 21.02.20 at 16:23, Huang Rui wrote:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not use KIQ, because when you use debugfs to read a register you 
>> usually hit a hang, and in that case KIQ has probably already died
> If CP is busy, the gfx should be in "on" state at that time, we needn't use 
> KIQ.

Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?

Cause the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.

Regards,
Christian.

>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx  on behalf of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom 
>> Cc: Alex Deucher ; amd-gfx list 
>> 
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [  741.788564] Failed to send Message 8.
>>> [  746.671509] Failed to send Message 8.
>>> [  748.749673] Failed to send Message 2b.
>>> [  759.245414] Failed to send Message 7.
>>> [  763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held?  Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about use RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
 On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:
> Signed-off-by: Tom St Denis 
 Please add a patch description.  With that fixed:
 Reviewed-by: Alex Deucher 

> ---
>drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 7379910790c9..66f763300c96 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   if (pm_pg_lock)
>   mutex_lock(&adev->pm.mutex);
>
> +   amdgpu_gfx_off_ctrl(adev, false);
>   while (size) {
>   uint32_t value;
>
> @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> struct file *f,
>   }
>
>end:
> +   amdgpu_gfx_off_ctrl(adev, true);
> +
>   if (use_bank) {
>   amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 
> 0xffffffff);
>   mutex_unlock(&adev->grbm_idx_mutex);
> --
> 2.24.1
>

[PATCH -next] drm/amd/powerplay: Use bitwise instead of arithmetic operator for flags

2020-02-21 Thread Chen Zhou
This silences the following coccinelle warning:

"WARNING: sum of probable bitmasks, consider |"

Signed-off-by: Chen Zhou 
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index 92a65e3d..f29f95b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -3382,7 +3382,7 @@ static int 
vega10_populate_and_upload_sclk_mclk_dpm_levels(
}
 
if (data->need_update_dpm_table &
-   (DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK + 
DPMTABLE_UPDATE_SOCCLK)) {
+   (DPMTABLE_OD_UPDATE_SCLK | DPMTABLE_UPDATE_SCLK | 
DPMTABLE_UPDATE_SOCCLK)) {
result = vega10_populate_all_graphic_levels(hwmgr);
PP_ASSERT_WITH_CODE((0 == result),
"Failed to populate SCLK during 
PopulateNewDPMClocksStates Function!",
@@ -3390,7 +3390,7 @@ static int 
vega10_populate_and_upload_sclk_mclk_dpm_levels(
}
 
if (data->need_update_dpm_table &
-   (DPMTABLE_OD_UPDATE_MCLK + DPMTABLE_UPDATE_MCLK)) {
+   (DPMTABLE_OD_UPDATE_MCLK | DPMTABLE_UPDATE_MCLK)) {
result = vega10_populate_all_memory_levels(hwmgr);
PP_ASSERT_WITH_CODE((0 == result),
"Failed to populate MCLK during 
PopulateNewDPMClocksStates Function!",
-- 
2.7.4
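The warning is not cosmetic: `+` only behaves like `|` on flag masks while every bit is contributed at most once. If a flag ever appears twice, or two masks overlap, addition carries into an unrelated bit. A minimal illustration with generic flag values (not the actual DPMTABLE_* definitions):

```c
#include <assert.h>
#include <stdint.h>

enum {
    FLAG_SCLK = 0x1,
    FLAG_MCLK = 0x2,
};

/* '|' is idempotent: combining a flag with itself changes nothing. */
uint32_t combine_or(uint32_t a, uint32_t b)  { return a | b; }

/* '+' carries: FLAG_SCLK + FLAG_SCLK == 0x2, which is FLAG_MCLK. */
uint32_t combine_add(uint32_t a, uint32_t b) { return a + b; }
```

Assuming the DPMTABLE_* values are disjoint single bits, the sums in the old code were accidentally correct; the patch makes the intent explicit and keeps the test safe against future overlapping definitions.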



[PATCH -next] drm/amd/display: remove set but not used variable 'mc_vm_apt_default'

2020-02-21 Thread YueHaibing
drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_hubp.c:
 In function hubp21_set_vm_system_aperture_settings:
drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_hubp.c:343:23:
 warning: variable mc_vm_apt_default set but not used 
[-Wunused-but-set-variable]

It is never used, so remove it.

Reported-by: Hulk Robot 
Signed-off-by: YueHaibing 
---
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c 
b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
index aa7b0e7..d285ba6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
@@ -340,13 +340,9 @@ void hubp21_set_vm_system_aperture_settings(struct hubp 
*hubp,
 {
struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
 
-   PHYSICAL_ADDRESS_LOC mc_vm_apt_default;
PHYSICAL_ADDRESS_LOC mc_vm_apt_low;
PHYSICAL_ADDRESS_LOC mc_vm_apt_high;
 
-   // The format of default addr is 48:12 of the 48 bit addr
-   mc_vm_apt_default.quad_part = apt->sys_default.quad_part >> 12;
-
// The format of high/low are 48:18 of the 48 bit addr
mc_vm_apt_low.quad_part = apt->sys_low.quad_part >> 18;
mc_vm_apt_high.quad_part = apt->sys_high.quad_part >> 18;
-- 
2.7.4




Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-21 Thread Christian König

I would just do this as part of the vm_flush() callback on the ring.

E.g. check if the VMID you want to flush is reserved and if yes enable SPM.

Maybe pass along a flag or something in the job to make things easier.

Christian.

On 21.02.20 at 16:31, Deucher, Alexander wrote:


[AMD Public Use]


We already have the RESERVE_VMID ioctl interface, can't we just use 
that internally in the kernel to update the rlc register via the ring 
when we schedule the relevant IB?  E.g., add a new ring callback to 
set SPM state and then set it to the reserved vmid before we schedule 
the ib, and then reset it to 0 after the IB in amdgpu_ib_schedule().


diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c

index 4b2342d11520..e0db9362c6ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -185,6 +185,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, 
unsigned num_ibs,

        if (ring->funcs->insert_start)
                ring->funcs->insert_start(ring);

+       if (ring->funcs->setup_spm)
+               ring->funcs->setup_spm(ring, job);
+
        if (job) {
                r = amdgpu_vm_flush(ring, job, need_pipe_sync);
                if (r) {
@@ -273,6 +276,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, 
unsigned num_ibs,

                return r;
        }

+       if (ring->funcs->setup_spm)
+               ring->funcs->setup_spm(ring, NULL);
+
        if (ring->funcs->insert_end)
                ring->funcs->insert_end(ring);



Alex

*From:* amd-gfx  on behalf of 
Christian König 

*Sent:* Friday, February 21, 2020 5:28 AM
*To:* Zhou, David(ChunMing) ; He, Jacob 
; Koenig, Christian ; 
amd-gfx@lists.freedesktop.org 

*Subject:* Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace
That would probably be a no-go, but we could enhance the kernel driver 
to update the RLC_SPM_VMID register with the reserved VMID.


Handling that in userspace is most likely not working anyway, since 
the RLC registers are usually not accessible by userspace.


Regards,
Christian.

On 20.02.20 at 16:15, Zhou, David(ChunMing) wrote:


[AMD Official Use Only - Internal Distribution Only]

You can enhance amdgpu_vm_ioctl In amdgpu_vm.c to return vmid to 
userspace.


-David

*From:* He, Jacob  
*Sent:* Thursday, February 20, 2020 10:46 PM
*To:* Zhou, David(ChunMing)  
; Koenig, Christian 
 ; 
amd-gfx@lists.freedesktop.org 

*Subject:* RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

amdgpu_vm_reserve_vmid doesn’t return the reserved vmid back to user 
space. There is no chance for user mode driver to update RLC_SPM_VMID.


Thanks

Jacob

*From: *He, Jacob 
*Sent: *Thursday, February 20, 2020 6:20 PM
*To: *Zhou, David(ChunMing) ; Koenig, 
Christian ; 
amd-gfx@lists.freedesktop.org 

*Subject: *RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

Looks like amdgpu_vm_reserve_vmid could work, let me have a try to 
update the RLC_SPM_VMID with pm4 packets in UMD.


Thanks

Jacob

*From: *Zhou, David(ChunMing) 
*Sent: *Thursday, February 20, 2020 10:13 AM
*To: *Koenig, Christian ; He, Jacob 
; amd-gfx@lists.freedesktop.org 


*Subject: *RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

[AMD Official Use Only - Internal Distribution Only]

Christian is right here, that will cause many problems for simply 
using VMID in kernel.
We already have a pair of interfaces for RGP, I think you can use them
instead of involving an additional kernel change.

amdgpu_vm_reserve_vmid/ amdgpu_vm_unreserve_vmid.

-David

-Original Message-
From: amd-gfx > On Behalf Of 
Christian König

Sent: Wednesday, February 19, 2020 7:03 PM
To: He, Jacob mailto:jacob...@amd.com>>; 
amd-gfx@lists.freedesktop.org 

Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

On 19.02.20 at 11:15, Jacob He wrote:
> [WHY]
> When SPM trace enabled, SPM_VMID should be updated with the current
> vmid.
>
> [HOW]
> Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
> which job should update SPM_VMID.
> Right before a job is submitted to GPU, set the SPM_VMID accordingly.
>
> [Limitation]
> Running more than one SPM trace enabled processes simultaneously is
> not supported.

Well there are multiple problems with that patch.

First of all you need to better describe what SPM tracing is in the 
commit message.


Then the updating of mmRLC_SPM_MC_CNTL must be executed 
asynchronously on the ring. Otherwise we might corrupt an already executing SPM trace.

Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-21 Thread Deucher, Alexander
[AMD Public Use]

We already have the RESERVE_VMID ioctl interface, can't we just use that 
internally in the kernel to update the rlc register via the ring when we 
schedule the relevant IB?  E.g., add a new ring callback to set SPM state and 
then set it to the reserved vmid before we schedule the ib, and then reset it 
to 0 after the IB in amdgpu_ib_schedule().

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 4b2342d11520..e0db9362c6ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -185,6 +185,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
if (ring->funcs->insert_start)
ring->funcs->insert_start(ring);

+   if (ring->funcs->setup_spm)
+   ring->funcs->setup_spm(ring, job);
+
if (job) {
r = amdgpu_vm_flush(ring, job, need_pipe_sync);
if (r) {
@@ -273,6 +276,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
return r;
}

+   if (ring->funcs->setup_spm)
+   ring->funcs->setup_spm(ring, NULL);
+
if (ring->funcs->insert_end)
ring->funcs->insert_end(ring);



Alex

From: amd-gfx  on behalf of Christian 
König 
Sent: Friday, February 21, 2020 5:28 AM
To: Zhou, David(ChunMing) ; He, Jacob ; 
Koenig, Christian ; amd-gfx@lists.freedesktop.org 

Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

That would probably be a no-go, but we could enhance the kernel driver to 
update the RLC_SPM_VMID register with the reserved VMID.

Handling that in userspace is most likely not working anyway, since the RLC 
registers are usually not accessible by userspace.

Regards,
Christian.

On 20.02.20 at 16:15, Zhou, David(ChunMing) wrote:

[AMD Official Use Only - Internal Distribution Only]



You can enhance amdgpu_vm_ioctl In amdgpu_vm.c to return vmid to userspace.



-David





From: He, Jacob 
Sent: Thursday, February 20, 2020 10:46 PM
To: Zhou, David(ChunMing) ; 
Koenig, Christian ; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



amdgpu_vm_reserve_vmid doesn’t return the reserved vmid back to user space. 
There is no chance for user mode driver to update RLC_SPM_VMID.



Thanks

Jacob



From: He, Jacob
Sent: Thursday, February 20, 2020 6:20 PM
To: Zhou, David(ChunMing); Koenig, 
Christian; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



Looks like amdgpu_vm_reserve_vmid could work, let me have a try to update the 
RLC_SPM_VMID with pm4 packets in UMD.



Thanks

Jacob



From: Zhou, David(ChunMing)
Sent: Thursday, February 20, 2020 10:13 AM
To: Koenig, Christian; He, 
Jacob; 
amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace



[AMD Official Use Only - Internal Distribution Only]

Christian is right here, that will cause many problems for simply using VMID in 
kernel.
We already have a pair of interfaces for RGP, I think you can use them instead of 
involving an additional kernel change.
amdgpu_vm_reserve_vmid/ amdgpu_vm_unreserve_vmid.

-David

-Original Message-
From: amd-gfx 
mailto:amd-gfx-boun...@lists.freedesktop.org>>
 On Behalf Of Christian König
Sent: Wednesday, February 19, 2020 7:03 PM
To: He, Jacob mailto:jacob...@amd.com>>; 
amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

Am 19.02.20 um 11:15 schrieb Jacob He:
> [WHY]
> When SPM trace enabled, SPM_VMID should be updated with the current
> vmid.
>
> [HOW]
> Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
> which job should update SPM_VMID.
> Right before a job is submitted to GPU, set the SPM_VMID accordingly.
>
> [Limitation]
> Running more than one SPM trace enabled processes simultaneously is
> not supported.

Well there are multiple problems with that patch.

First of all you need to better describe what SPM tracing is in the commit 
message.

Then the updating of mmRLC_SPM_MC_CNTL must be executed asynchronously on the 
ring. Otherwise we might corrupt an already executing SPM trace.

And you also need to make sure to disable the tracing again or otherwise we run 
into a bunch of trouble when the VMID is reused.

You also need to make sure that IBs using the SPM trace are serialized with 
each other, e.g. hack into amdgpu_ids.c file and make sure that only one VMID 
at a time can have that attribute.

Regards,
Christian.

Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Christian König

Am 21.02.20 um 16:23 schrieb Huang Rui:

On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:

Better not to use KIQ, because when you use debugfs to read a register you usually 
hit a hang, and in that case KIQ has probably already died.

If the CP is busy, the gfx should be in the "on" state at that time, so we needn't use KIQ.


Yeah, but how do you detect that? Do we have a way to wake up the CP 
without asking power management to do so?


Because the register debug interface is meant to be used when the ASIC is 
completely locked up. Sending messages to the SMU is not really going to 
work in that situation.


Regards,
Christian.



Thanks,
Ray


-Original Message-
From: amd-gfx  On Behalf Of Huang Rui
Sent: Friday, February 21, 2020 22:34
To: StDenis, Tom 
Cc: Alex Deucher ; amd-gfx list 

Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:

I got some messages after a while:

[  741.788564] Failed to send Message 8.
[  746.671509] Failed to send Message 8.
[  748.749673] Failed to send Message 2b.
[  759.245414] Failed to send Message 7.
[  763.216902] Failed to send Message 2a.

Are there any additional locks that should be held?  Because some
commands like --top or --waves can do a lot of distinct read
operations (causing a lot of enable/disable calls).

I'm going to sit on this a bit since I don't think the patch is ready
for pushing out.


How about use RREG32_KIQ and WREG32_KIQ?

Thanks,
Ray


Tom

On 2020-02-19 10:07 a.m., Alex Deucher wrote:

On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:

Signed-off-by: Tom St Denis 

Please add a patch description.  With that fixed:
Reviewed-by: Alex Deucher 


---
   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
   1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 7379910790c9..66f763300c96 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, struct 
file *f,
  if (pm_pg_lock)
  mutex_lock(&adev->pm.mutex);

+   amdgpu_gfx_off_ctrl(adev, false);
  while (size) {
  uint32_t value;

@@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, struct 
file *f,
  }

   end:
+   amdgpu_gfx_off_ctrl(adev, true);
+
  if (use_bank) {
  amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
0x);
  mutex_unlock(&adev->grbm_idx_mutex);
--
2.24.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Huang Rui
On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
> Better not to use KIQ, because when you use debugfs to read a register you 
> usually hit a hang, and in that case KIQ has probably already died.

If the CP is busy, the gfx should be in the "on" state at that time, so we needn't use KIQ.

Thanks,
Ray

> 
> -Original Message-
> From: amd-gfx  On Behalf Of Huang Rui
> Sent: Friday, February 21, 2020 22:34
> To: StDenis, Tom 
> Cc: Alex Deucher ; amd-gfx list 
> 
> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
> 
> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> > I got some messages after a while:
> > 
> > [  741.788564] Failed to send Message 8.
> > [  746.671509] Failed to send Message 8.
> > [  748.749673] Failed to send Message 2b.
> > [  759.245414] Failed to send Message 7.
> > [  763.216902] Failed to send Message 2a.
> > 
> > Are there any additional locks that should be held?  Because some 
> > commands like --top or --waves can do a lot of distinct read 
> > operations (causing a lot of enable/disable calls).
> > 
> > I'm going to sit on this a bit since I don't think the patch is ready 
> > for pushing out.
> > 
> 
> How about use RREG32_KIQ and WREG32_KIQ?
> 
> Thanks,
> Ray
> 
> > 
> > Tom
> > 
> > On 2020-02-19 10:07 a.m., Alex Deucher wrote:
> > > On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:
> > > > Signed-off-by: Tom St Denis 
> > > Please add a patch description.  With that fixed:
> > > Reviewed-by: Alex Deucher 
> > > 
> > > > ---
> > > >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> > > >   1 file changed, 3 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > index 7379910790c9..66f763300c96 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > > > read, struct file *f,
> > > >  if (pm_pg_lock)
> > > >  mutex_lock(&adev->pm.mutex);
> > > > 
> > > > +   amdgpu_gfx_off_ctrl(adev, false);
> > > >  while (size) {
> > > >  uint32_t value;
> > > > 
> > > > @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool 
> > > > read, struct file *f,
> > > >  }
> > > > 
> > > >   end:
> > > > +   amdgpu_gfx_off_ctrl(adev, true);
> > > > +
> > > >  if (use_bank) {
> > > >  amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> > > > 0x);
> > > >  mutex_unlock(&adev->grbm_idx_mutex);
> > > > --
> > > > 2.24.1
> > > > 
> > > > ___
> > > > amd-gfx mailing list
> > > > amd-gfx@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > ___
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Liu, Monk
Better not to use KIQ, because when you use debugfs to read a register you usually 
hit a hang, and in that case KIQ has probably already died.

-Original Message-
From: amd-gfx  On Behalf Of Huang Rui
Sent: Friday, February 21, 2020 22:34
To: StDenis, Tom 
Cc: Alex Deucher ; amd-gfx list 

Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> I got some messages after a while:
> 
> [  741.788564] Failed to send Message 8.
> [  746.671509] Failed to send Message 8.
> [  748.749673] Failed to send Message 2b.
> [  759.245414] Failed to send Message 7.
> [  763.216902] Failed to send Message 2a.
> 
> Are there any additional locks that should be held?  Because some 
> commands like --top or --waves can do a lot of distinct read 
> operations (causing a lot of enable/disable calls).
> 
> I'm going to sit on this a bit since I don't think the patch is ready 
> for pushing out.
> 

How about use RREG32_KIQ and WREG32_KIQ?

Thanks,
Ray

> 
> Tom
> 
> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
> > On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:
> > > Signed-off-by: Tom St Denis 
> > Please add a patch description.  With that fixed:
> > Reviewed-by: Alex Deucher 
> > 
> > > ---
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> > >   1 file changed, 3 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > index 7379910790c9..66f763300c96 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> > > struct file *f,
> > >  if (pm_pg_lock)
> > >  mutex_lock(&adev->pm.mutex);
> > > 
> > > +   amdgpu_gfx_off_ctrl(adev, false);
> > >  while (size) {
> > >  uint32_t value;
> > > 
> > > @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> > > struct file *f,
> > >  }
> > > 
> > >   end:
> > > +   amdgpu_gfx_off_ctrl(adev, true);
> > > +
> > >  if (use_bank) {
> > >  amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> > > 0x);
> > >  mutex_unlock(&adev->grbm_idx_mutex);
> > > --
> > > 2.24.1
> > > 
> > > ___
> > > amd-gfx mailing list
> > > amd-gfx@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Huang Rui
On Fri, Feb 21, 2020 at 10:35:33PM +0800, StDenis, Tom wrote:
> 
> On 2020-02-21 9:34 a.m., Huang Rui wrote:
> > On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> >> I got some messages after a while:
> >>
> >> [  741.788564] Failed to send Message 8.
> >> [  746.671509] Failed to send Message 8.
> >> [  748.749673] Failed to send Message 2b.
> >> [  759.245414] Failed to send Message 7.
> >> [  763.216902] Failed to send Message 2a.
> >>
> >> Are there any additional locks that should be held?  Because some commands
> >> like --top or --waves can do a lot of distinct read operations (causing a
> >> lot of enable/disable calls).
> >>
> >> I'm going to sit on this a bit since I don't think the patch is ready for
> >> pushing out.
> >>
> > How about use RREG32_KIQ and WREG32_KIQ?
> 
> 
> For all register accesses (in the debugfs read/write method)? Can we use 
> those on all ASICs?

It can be used for all register access, but using KIQ is not as fast as MMIO.

So we can check whether GFXOFF is enabled and, if so, go with the KIQ path, because
KIQ can wake the GFX up to the "on" state at runtime.

Thanks,
Ray

> 
> 
> Tom
> 
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Tom St Denis


On 2020-02-21 9:34 a.m., Huang Rui wrote:

On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:

I got some messages after a while:

[  741.788564] Failed to send Message 8.
[  746.671509] Failed to send Message 8.
[  748.749673] Failed to send Message 2b.
[  759.245414] Failed to send Message 7.
[  763.216902] Failed to send Message 2a.

Are there any additional locks that should be held?  Because some commands
like --top or --waves can do a lot of distinct read operations (causing a
lot of enable/disable calls).

I'm going to sit on this a bit since I don't think the patch is ready for
pushing out.


How about use RREG32_KIQ and WREG32_KIQ?



For all register accesses (in the debugfs read/write method)? Can we use 
those on all ASICs?



Tom

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO

2020-02-21 Thread Huang Rui
On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
> I got some messages after a while:
> 
> [  741.788564] Failed to send Message 8.
> [  746.671509] Failed to send Message 8.
> [  748.749673] Failed to send Message 2b.
> [  759.245414] Failed to send Message 7.
> [  763.216902] Failed to send Message 2a.
> 
> Are there any additional locks that should be held?  Because some commands
> like --top or --waves can do a lot of distinct read operations (causing a
> lot of enable/disable calls).
> 
> I'm going to sit on this a bit since I don't think the patch is ready for
> pushing out.
> 

How about use RREG32_KIQ and WREG32_KIQ?

Thanks,
Ray

> 
> Tom
> 
> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
> > On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis  wrote:
> > > Signed-off-by: Tom St Denis 
> > Please add a patch description.  With that fixed:
> > Reviewed-by: Alex Deucher 
> > 
> > > ---
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
> > >   1 file changed, 3 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > index 7379910790c9..66f763300c96 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > @@ -169,6 +169,7 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> > > struct file *f,
> > >  if (pm_pg_lock)
> > >  mutex_lock(&adev->pm.mutex);
> > > 
> > > +   amdgpu_gfx_off_ctrl(adev, false);
> > >  while (size) {
> > >  uint32_t value;
> > > 
> > > @@ -192,6 +193,8 @@ static int  amdgpu_debugfs_process_reg_op(bool read, 
> > > struct file *f,
> > >  }
> > > 
> > >   end:
> > > +   amdgpu_gfx_off_ctrl(adev, true);
> > > +
> > >  if (use_bank) {
> > >  amdgpu_gfx_select_se_sh(adev, 0x, 0x, 
> > > 0x);
> > >  mutex_unlock(&adev->grbm_idx_mutex);
> > > --
> > > 2.24.1
> > > 
> > > ___
> > > amd-gfx mailing list
> > > amd-gfx@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 6/8] drm/vram-helper: don't use ttm bo->offset v2

2020-02-21 Thread Thomas Zimmermann
Hi

Am 19.02.20 um 14:53 schrieb Nirmoy Das:
> Calculate GEM VRAM bo's offset within vram-helper without depending on
> bo->offset
> 
> Signed-off-by: Nirmoy Das 
> ---
>  drivers/gpu/drm/drm_gem_vram_helper.c | 17 -
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c 
> b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 92a11bb42365..3edf5f241c15 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -198,6 +198,21 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object 
> *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
> 
> +/**
> + * drm_gem_vram_pg_offset() - Returns a GEM VRAM object's page offset
> + * @gbo: the GEM VRAM object
> + *
> + * Returns:
> + * The buffer object's page offset, or
> + * 0 with a warning when memory manager node of the buffer object is NULL
> + * */
> +static s64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
> +{
> + if (WARN_ON_ONCE(!gbo->bo.mem.mm_node))
> + return 0;
> + return gbo->bo.mem.start;
> +}

As Daniel said, you don't heve to document this function. Otherwise

Reviewed-by: Thomas Zimmermann 

> +
>  /**
>   * drm_gem_vram_offset() - \
>   Returns a GEM VRAM object's offset in video memory
> @@ -214,7 +229,7 @@ s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
>  {
>   if (WARN_ON_ONCE(!gbo->pin_count))
>   return (s64)-ENODEV;
> - return gbo->bo.offset;
> + return drm_gem_vram_pg_offset(gbo) << PAGE_SHIFT;
>  }
>  EXPORT_SYMBOL(drm_gem_vram_offset);
> 
> --
> 2.25.0
> 
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



signature.asc
Description: OpenPGP digital signature
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

2020-02-21 Thread Christian König
That would probably be a no-go, but we could enhance the kernel driver 
to update the RLC_SPM_VMID register with the reserved VMID.


Handling that in userspace is most likely not going to work anyway, since the 
RLC registers are usually not accessible by userspace.


Regards,
Christian.

Am 20.02.20 um 16:15 schrieb Zhou, David(ChunMing):


[AMD Official Use Only - Internal Distribution Only]

You can enhance amdgpu_vm_ioctl In amdgpu_vm.c to return vmid to 
userspace.


-David

*From:* He, Jacob 
*Sent:* Thursday, February 20, 2020 10:46 PM
*To:* Zhou, David(ChunMing) ; Koenig, Christian 
; amd-gfx@lists.freedesktop.org

*Subject:* RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

amdgpu_vm_reserve_vmid doesn’t return the reserved vmid back to user 
space. There is no chance for user mode driver to update RLC_SPM_VMID.


Thanks

Jacob

*From: *He, Jacob 
*Sent: *Thursday, February 20, 2020 6:20 PM
*To: *Zhou, David(ChunMing) ; Koenig, 
Christian ; 
amd-gfx@lists.freedesktop.org 

*Subject: *RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

Looks like amdgpu_vm_reserve_vmid could work, let me have a try to 
update the RLC_SPM_VMID with pm4 packets in UMD.


Thanks

Jacob

*From: *Zhou, David(ChunMing) 
*Sent: *Thursday, February 20, 2020 10:13 AM
*To: *Koenig, Christian ; He, Jacob 
; amd-gfx@lists.freedesktop.org 


*Subject: *RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

[AMD Official Use Only - Internal Distribution Only]

Christian is right here; that will cause many problems for simply 
using the VMID in the kernel.
We already have a pair of interfaces for RGP; I think you can use them 
instead of involving an additional kernel change.

amdgpu_vm_reserve_vmid/ amdgpu_vm_unreserve_vmid.

-David

-Original Message-
From: amd-gfx > On Behalf Of Christian 
König

Sent: Wednesday, February 19, 2020 7:03 PM
To: He, Jacob mailto:jacob...@amd.com>>; 
amd-gfx@lists.freedesktop.org 

Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

Am 19.02.20 um 11:15 schrieb Jacob He:
> [WHY]
> When SPM trace enabled, SPM_VMID should be updated with the current
> vmid.
>
> [HOW]
> Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
> which job should update SPM_VMID.
> Right before a job is submitted to GPU, set the SPM_VMID accordingly.
>
> [Limitation]
> Running more than one SPM trace enabled processes simultaneously is
> not supported.

Well there are multiple problems with that patch.

First of all you need to better describe what SPM tracing is in the 
commit message.


Then the updating of mmRLC_SPM_MC_CNTL must be executed asynchronously 
on the ring. Otherwise we might corrupt an already executing SPM trace.


And you also need to make sure to disable the tracing again or 
otherwise we run into a bunch of trouble when the VMID is reused.


You also need to make sure that IBs using the SPM trace are serialized 
with each other, e.g. hack into amdgpu_ids.c file and make sure that 
only one VMID at a time can have that attribute.


Regards,
Christian.

>
> Change-Id: Ic932ef6ac9dbf244f03aaee90550e8ff3a675666
> Signed-off-by: Jacob He mailto:jacob...@amd.com>>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  |  7 +++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  | 10 +++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h |  1 +
>   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c  | 15 ++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c   |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c   |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c   | 15 ++-
>   8 files changed, 48 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index f9fa6e104fef..3f32c4db5232 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -113,6 +113,7 @@ static int amdgpu_cs_parser_init(struct 
amdgpu_cs_parser *p, union drm_amdgpu_cs

>    uint32_t uf_offset = 0;
>    int i;
>    int ret;
> + bool update_spm_vmid = false;
>
>    if (cs->in.num_chunks == 0)
>    return 0;
> @@ -221,6 +222,10 @@ static int amdgpu_cs_parser_init(struct 
amdgpu_cs_parser *p, union drm_amdgpu_cs

>    case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
>    break;
>
> + case AMDGPU_CHUNK_ID_SPM_TRACE:
> + update_spm_vmid = true;
> + break;
> +
>    default:
>    ret = -EINVAL;
>    goto free_partial_kdata;
> @@ -231,6 +236,8 @@ static int amdgpu_cs_parser_init(s

Re: [PATCH 3/8] drm/vmwgfx: don't use ttm bo->offset

2020-02-21 Thread VMware

On 2/19/20 2:53 PM, Nirmoy Das wrote:

Calculate GPU offset within vmwgfx driver itself without depending on
bo->offset

Signed-off-by: Nirmoy Das 
Acked-by: Christian König 


Tested-by: Thomas Hellstrom 
Acked-by: Thomas Hellstrom 


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v3 0/8] do not store GPU address in TTM

2020-02-21 Thread VMware

Hi,

On 2/19/20 2:53 PM, Nirmoy Das wrote:

With this patch series I am trying to remove GPU address dependency in
TTM and moving GPU address calculation to individual drm drivers.


For future reference, could you please add a motivation for the series?
For example: cleanup, needed because..., simplifies..., etc.

Thanks,

Thomas


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx