Re: [Intel-gfx] [RFC 0/1] drm/i915/display: Expose HDMI properties to userspace

2021-05-06 Thread Ville Syrjälä
On Thu, May 06, 2021 at 06:17:18AM +0530, Nischal Varide wrote:
> Right now the HDMI properties like vendor and product ids are hardcoded
> in the function "intel_hdmi_compute_spd_infoframe()".
> 
> ret = hdmi_spd_infoframe_init(frame, "Intel", "Integrated gfx").
> 
> This patch makes it possible to set the vendor and product fields of
> the SPD infoframe structure from userspace, instead of hardcoding them
> in the kernel.
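
For illustration only, a minimal sketch of the described change to
intel_hdmi_compute_spd_infoframe(), assuming hypothetical spd_vendor and
spd_product strings stored on the i915 connector state by the new
properties; this is not the posted patch:

	#include <linux/hdmi.h>	/* hdmi_spd_infoframe_init() */

	/* conn_state is i915's intel_digital_connector_state; the
	 * spd_vendor/spd_product members are assumed, not upstream. */
	static int compute_spd_infoframe(struct hdmi_spd_infoframe *frame,
					 const struct intel_digital_connector_state *conn_state)
	{
		/* use the userspace-provided strings when set, otherwise
		 * keep the currently hardcoded defaults */
		const char *vendor = conn_state->spd_vendor ?: "Intel";
		const char *product = conn_state->spd_product ?: "Integrated gfx";

		return hdmi_spd_infoframe_init(frame, vendor, product);
	}
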
> 
> The changes have been tested with an IGT test case, which will be
> posted in a few hours.
> 
> 
> Nischal Varide (1):
>   drm/i915/display: Expose HDMI properties to userspace

That subject is quite misleading/vague.

Any uapi additions must be posted to dri-devel.

> 
>  drivers/gpu/drm/i915/display/intel_atomic.c   | 14 +
>  .../gpu/drm/i915/display/intel_connector.c| 20 +++
>  .../gpu/drm/i915/display/intel_connector.h|  1 +
>  .../drm/i915/display/intel_display_types.h|  5 +
>  drivers/gpu/drm/i915/display/intel_hdmi.c | 14 -
>  drivers/gpu/drm/i915/display/intel_hdmi.h |  5 +
>  drivers/gpu/drm/i915/i915_drv.h   |  1 +
>  7 files changed, 59 insertions(+), 1 deletion(-)
> 
> -- 
> 2.29.2
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)

2021-05-06 Thread Patchwork
== Series Details ==

Series: drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON 
(rev2)
URL   : https://patchwork.freedesktop.org/series/89639/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10047_full -> Patchwork_20065_full


Summary
---

  **SUCCESS**

  No regressions found.

  


Changes
---

  No changes found


Participating hosts (11 -> 9)
--

  Missing(2): pig-skl-6260u pig-glk-j5005 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_10047 -> Patchwork_20065

  CI-20190529: 20190529
  CI_DRM_10047: 6bc6aeb4870cfb28f24523f42157cf9a86be80d7 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6077: 126a3f6fc0e97786e2819085efc84e741093aed5 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_20065: 9065f5d90df031d6e8a262a19ee067456b09d263 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ 
git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)

2021-05-06 Thread Vudum, Lakshminarayana
Re-reported.

From: Nautiyal, Ankit K 
Sent: Thursday, May 6, 2021 9:28 PM
To: intel-gfx@lists.freedesktop.org; Vudum, Lakshminarayana 

Subject: RE: ✗ Fi.CI.BAT: failure for drm/i915: Use correct downstream caps for 
check Src-Ctl mode for PCON (rev2)

Hi Lakshmi,

The following failure is due to an existing issue: 
https://gitlab.freedesktop.org/drm/intel/-/issues/541
Possible regressions

  *   igt@i915_selftest@live@gt_heartbeat:
 *   fi-tgl-y: PASS -> DMESG-FAIL
Thanks & Regards,
Ankit

From: Patchwork 
Sent: Wednesday, May 5, 2021 2:32 PM
To: Nautiyal, Ankit K 
Cc: intel-gfx@lists.freedesktop.org
Subject: ✗ Fi.CI.BAT: failure for drm/i915: Use correct downstream caps for 
check Src-Ctl mode for PCON (rev2)

Patch Details
Series:  drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)
URL:     https://patchwork.freedesktop.org/series/89639/
State:   failure
Details: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html

CI Bug Log - changes from CI_DRM_10047 -> Patchwork_20065
Summary

FAILURE

Serious unknown changes coming with Patchwork_20065 absolutely need to be
verified manually.

If you think the reported changes have nothing to do with the changes
introduced in Patchwork_20065, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.

External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html

Possible new issues

Here are the unknown changes that may have been introduced in Patchwork_20065:

IGT changes
Possible regressions

  *   igt@i915_selftest@live@gt_heartbeat:
 *   fi-tgl-y: PASS -> DMESG-FAIL

Known issues

Here are the changes found in Patchwork_20065 that come from known issues:

IGT changes
Issues hit

  *   igt@amdgpu/amd_basic@semaphore:
 *   fi-bsw-nick: NOTRUN -> SKIP (fdo#109271) +17 similar issues
  *   igt@amdgpu/amd_cs_nop@fork-gfx0:
 *   fi-tgl-y: NOTRUN -> SKIP (fdo#109315 / i915#2575) +13 similar issues
  *   igt@gem_exec_gttfill@basic:
 *   fi-bsw-n3050: NOTRUN -> SKIP (fdo#109271)
  *   igt@gem_exec_suspend@basic-s3:
 *   fi-bsw-n3050: NOTRUN -> INCOMPLETE (i915#3159)

Possible fixes

  *   igt@i915_pm_rpm@basic-rte:
 *   {fi-tgl-1115g4}: DMESG-WARN (i915#402) -> PASS
  *   igt@i915_pm_rpm@module-reload:
 *   {fi-tgl-1115g4}: DMESG-WARN (k.org#205379) -> PASS
  *   igt@i915_selftest@live@late_gt_pm:
 *   fi-bsw-nick: DMESG-FAIL (i915#2927) -> PASS

{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).

Participating hosts (43 -> 40)

Additional (1): fi-bsw-n3050
Missing (4): fi-ctg-p8600 fi-ilk-m540 fi-bdw-samus fi-hsw-4200u

Build changes

  *   Linux: CI_DRM_10047 -> Patchwork_20065

CI-20190529: 20190529
CI_DRM_10047: 6bc6aeb4870cfb28f24523f42157cf9a86be80d7 @ 
git://anongit.freedesktop.org/gfx-ci/linux
IGT_6077: 126a3f6fc0e97786e2819085efc84e741093aed5 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_20065: 9065f5d90df031d6e

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)

2021-05-06 Thread Patchwork
== Series Details ==

Series: drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON 
(rev2)
URL   : https://patchwork.freedesktop.org/series/89639/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10047 -> Patchwork_20065


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html

Known issues


  Here are the changes found in Patchwork_20065 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@amdgpu/amd_basic@semaphore:
- fi-bsw-nick:NOTRUN -> [SKIP][1] ([fdo#109271]) +17 similar issues
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-bsw-nick/igt@amdgpu/amd_ba...@semaphore.html

  * igt@amdgpu/amd_cs_nop@fork-gfx0:
- fi-tgl-y:   NOTRUN -> [SKIP][2] ([fdo#109315] / [i915#2575]) +13 
similar issues
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-tgl-y/igt@amdgpu/amd_cs_...@fork-gfx0.html

  * igt@gem_exec_gttfill@basic:
- fi-bsw-n3050:   NOTRUN -> [SKIP][3] ([fdo#109271])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-bsw-n3050/igt@gem_exec_gttf...@basic.html

  * igt@gem_exec_suspend@basic-s3:
- fi-bsw-n3050:   NOTRUN -> [INCOMPLETE][4] ([i915#3159])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-bsw-n3050/igt@gem_exec_susp...@basic-s3.html

  * igt@i915_selftest@live@gt_heartbeat:
- fi-tgl-y:   [PASS][5] -> [DMESG-FAIL][6] ([i915#541])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10047/fi-tgl-y/igt@i915_selftest@live@gt_heartbeat.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-tgl-y/igt@i915_selftest@live@gt_heartbeat.html

  
 Possible fixes 

  * igt@i915_pm_rpm@basic-rte:
- {fi-tgl-1115g4}:[DMESG-WARN][7] ([i915#402]) -> [PASS][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10047/fi-tgl-1115g4/igt@i915_pm_...@basic-rte.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-tgl-1115g4/igt@i915_pm_...@basic-rte.html

  * igt@i915_pm_rpm@module-reload:
- {fi-tgl-1115g4}:[DMESG-WARN][9] ([k.org#205379]) -> [PASS][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10047/fi-tgl-1115g4/igt@i915_pm_...@module-reload.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-tgl-1115g4/igt@i915_pm_...@module-reload.html

  * igt@i915_selftest@live@late_gt_pm:
- fi-bsw-nick:[DMESG-FAIL][11] ([i915#2927]) -> [PASS][12]
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10047/fi-bsw-nick/igt@i915_selftest@live@late_gt_pm.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/fi-bsw-nick/igt@i915_selftest@live@late_gt_pm.html

  
  {name}: This element is suppressed. This means it is ignored when computing
  the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2927]: https://gitlab.freedesktop.org/drm/intel/issues/2927
  [i915#3159]: https://gitlab.freedesktop.org/drm/intel/issues/3159
  [i915#3277]: https://gitlab.freedesktop.org/drm/intel/issues/3277
  [i915#3283]: https://gitlab.freedesktop.org/drm/intel/issues/3283
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#541]: https://gitlab.freedesktop.org/drm/intel/issues/541
  [k.org#205379]: https://bugzilla.kernel.org/show_bug.cgi?id=205379


Participating hosts (43 -> 40)
--

  Additional (1): fi-bsw-n3050 
  Missing(4): fi-ctg-p8600 fi-ilk-m540 fi-bdw-samus fi-hsw-4200u 


Build changes
-

  * Linux: CI_DRM_10047 -> Patchwork_20065

  CI-20190529: 20190529
  CI_DRM_10047: 6bc6aeb4870cfb28f24523f42157cf9a86be80d7 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6077: 126a3f6fc0e97786e2819085efc84e741093aed5 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_20065: 9065f5d90df031d6e8a262a19ee067456b09d263 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

9065f5d90df0 drm/i915: Use correct downstream caps for check Src-Ctl mode for 
PCON

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)

2021-05-06 Thread Nautiyal, Ankit K
Hi Lakshmi,

The following failure is due to an existing issue: 
https://gitlab.freedesktop.org/drm/intel/-/issues/541
Possible regressions

  *   igt@i915_selftest@live@gt_heartbeat:
 *   fi-tgl-y: PASS -> DMESG-FAIL
Thanks & Regards,
Ankit

From: Patchwork 
Sent: Wednesday, May 5, 2021 2:32 PM
To: Nautiyal, Ankit K 
Cc: intel-gfx@lists.freedesktop.org
Subject: ✗ Fi.CI.BAT: failure for drm/i915: Use correct downstream caps for 
check Src-Ctl mode for PCON (rev2)

Patch Details
Series:  drm/i915: Use correct downstream caps for check Src-Ctl mode for PCON (rev2)
URL:     https://patchwork.freedesktop.org/series/89639/
State:   failure
Details: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html

CI Bug Log - changes from CI_DRM_10047 -> Patchwork_20065
Summary

FAILURE

Serious unknown changes coming with Patchwork_20065 absolutely need to be
verified manually.

If you think the reported changes have nothing to do with the changes
introduced in Patchwork_20065, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.

External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20065/index.html

Possible new issues

Here are the unknown changes that may have been introduced in Patchwork_20065:

IGT changes
Possible regressions

  *   igt@i915_selftest@live@gt_heartbeat:
 *   fi-tgl-y: PASS -> DMESG-FAIL

Known issues

Here are the changes found in Patchwork_20065 that come from known issues:

IGT changes
Issues hit

  *   igt@amdgpu/amd_basic@semaphore:
 *   fi-bsw-nick: NOTRUN -> SKIP (fdo#109271) +17 similar issues
  *   igt@amdgpu/amd_cs_nop@fork-gfx0:
 *   fi-tgl-y: NOTRUN -> SKIP (fdo#109315 / i915#2575) +13 similar issues
  *   igt@gem_exec_gttfill@basic:
 *   fi-bsw-n3050: NOTRUN -> SKIP (fdo#109271)
  *   igt@gem_exec_suspend@basic-s3:
 *   fi-bsw-n3050: NOTRUN -> INCOMPLETE (i915#3159)

Possible fixes

  *   igt@i915_pm_rpm@basic-rte:
 *   {fi-tgl-1115g4}: DMESG-WARN (i915#402) -> PASS
  *   igt@i915_pm_rpm@module-reload:
 *   {fi-tgl-1115g4}: DMESG-WARN (k.org#205379) -> PASS
  *   igt@i915_selftest@live@late_gt_pm:
 *   fi-bsw-nick: DMESG-FAIL (i915#2927) -> PASS

{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).

Participating hosts (43 -> 40)

Additional (1): fi-bsw-n3050
Missing (4): fi-ctg-p8600 fi-ilk-m540 fi-bdw-samus fi-hsw-4200u

Build changes

  *   Linux: CI_DRM_10047 -> Patchwork_20065

CI-20190529: 20190529
CI_DRM_10047: 6bc6aeb4870cfb28f24523f42157cf9a86be80d7 @ 
git://anongit.freedesktop.org/gfx-ci/linux
IGT_6077: 126a3f6fc0e97786e2819085efc84e741093aed5 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_20065: 9065f5d90df031d6e8a262a19ee067456b09d263 @ 
git://anongit.freedesktop.org/gfx-ci/linux

== Linux commits ==

9065f5d90df0 drm/i915: Use correct downstream caps for check Src-Ctl mode for 
PCON
___
Intel-gfx mailing list
Intel-g

Re: [Intel-gfx] [PATCH v2 07/10] drm/i915/adl_p: Add stride restriction when using DPT

2021-05-06 Thread Clint Taylor


On 5/6/21 9:19 AM, Imre Deak wrote:

From: José Roberto de Souza 

Alderlake-P has a new stride restriction when using DPT, which is used
by non-linear framebuffers. The stride needs to be a power of two to
take full DPT rows, but the stride is a parameter set by userspace.

What we could do is use a fake stride when doing the DPT allocation so
the HW requirements are met and userspace doesn't need to be changed to
meet this power-of-two restriction, but that change will take a while
to implement, so for now add this restriction in the driver to reject
atomic commits that would cause visual corruption.

BSpec: 53393
Acked-by: Matt Roper 
Cc: Matt Roper 
Cc: Ville Syrjälä 
Cc: Stanislav Lisovskiy 
Signed-off-by: José Roberto de Souza 
Signed-off-by: Imre Deak 
---
  drivers/gpu/drm/i915/display/intel_display.c | 9 +
  1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 292396058e75d..70ac197746b1f 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -11566,6 +11566,15 @@ static int intel_framebuffer_init(struct 
intel_framebuffer *intel_fb,
}
}
  
+		if (IS_ALDERLAKE_P(dev_priv) &&

+   mode_cmd->modifier[i] != DRM_FORMAT_MOD_LINEAR &&
+   !is_power_of_2(mode_cmd->pitches[i])) {
+   drm_dbg_kms(&dev_priv->drm,
+   "plane %d pitch (%d) must be power of two for 
tiled buffers\n",
+   i, mode_cmd->pitches[i]);
+   goto err;
+   }
+

Reviewed-by: Clint Taylor 

fb->obj[i] = &obj->base;
}
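
For reference, a minimal sketch (not part of this patch) of the "fake
stride" idea mentioned in the commit message above: a future DPT
allocation path could pad the userspace-supplied pitch up to the next
power of two so the HW restriction is met without any userspace change.
The helper name is made up.

	#include <linux/log2.h>

	/* Pad a pitch to the power-of-two DPT row size required on ADL-P;
	 * returns it unchanged if it already satisfies the restriction
	 * checked in intel_framebuffer_init() above. */
	static unsigned int dpt_padded_pitch(unsigned int pitch)
	{
		return is_power_of_2(pitch) ? pitch : roundup_pow_of_two(pitch);
	}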
  

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/adl_p: Add support for Display Page Tables (rev2)

2021-05-06 Thread Patchwork
== Series Details ==

Series: drm/i915/adl_p: Add support for Display Page Tables (rev2)
URL   : https://patchwork.freedesktop.org/series/89078/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10053_full -> Patchwork_20077_full


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_20077_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20077_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_20077_full:

### Piglit changes ###

 Possible regressions 

  * spec@ext_transform_feedback@builtin-varyings gl_clipvertex (NEW):
- pig-glk-j5005:  NOTRUN -> [INCOMPLETE][1]
   [1]: None

  
New tests
-

  New tests have been introduced between CI_DRM_10053_full and 
Patchwork_20077_full:

### New Piglit tests (1) ###

  * spec@ext_transform_feedback@builtin-varyings gl_clipvertex:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s

  

Known issues


  Here are the changes found in Patchwork_20077_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@feature_discovery@display-2x:
- shard-iclb: NOTRUN -> [SKIP][2] ([i915#1839])
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-iclb2/igt@feature_discov...@display-2x.html

  * igt@gem_ctx_persistence@clone:
- shard-snb:  NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#1099]) +5 
similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-snb6/igt@gem_ctx_persiste...@clone.html

  * igt@gem_eio@unwedge-stress:
- shard-snb:  NOTRUN -> [FAIL][4] ([i915#3354])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-snb6/igt@gem_...@unwedge-stress.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
- shard-tglb: [PASS][5] -> [FAIL][6] ([i915#2842])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-tglb3/igt@gem_exec_fair@basic-pace-sh...@rcs0.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-tglb5/igt@gem_exec_fair@basic-pace-sh...@rcs0.html
- shard-glk:  [PASS][7] -> [FAIL][8] ([i915#2842]) +1 similar issue
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-glk2/igt@gem_exec_fair@basic-pace-sh...@rcs0.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-glk5/igt@gem_exec_fair@basic-pace-sh...@rcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-iclb: [PASS][9] -> [FAIL][10] ([i915#2842])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-iclb3/igt@gem_exec_fair@basic-pace-s...@rcs0.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-iclb7/igt@gem_exec_fair@basic-pace-s...@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
- shard-kbl:  [PASS][11] -> [FAIL][12] ([i915#2842])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-kbl1/igt@gem_exec_fair@basic-p...@rcs0.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-kbl3/igt@gem_exec_fair@basic-p...@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
- shard-kbl:  [PASS][13] -> [SKIP][14] ([fdo#109271])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-kbl1/igt@gem_exec_fair@basic-p...@vcs1.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-kbl3/igt@gem_exec_fair@basic-p...@vcs1.html

  * igt@gem_pwrite@basic-exhaustion:
- shard-apl:  NOTRUN -> [WARN][15] ([i915#2658])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-apl1/igt@gem_pwr...@basic-exhaustion.html

  * igt@gem_render_copy@x-tiled-to-vebox-yf-tiled:
- shard-kbl:  NOTRUN -> [SKIP][16] ([fdo#109271]) +38 similar issues
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-kbl4/igt@gem_render_c...@x-tiled-to-vebox-yf-tiled.html

  * igt@gem_render_copy@y-tiled-ccs-to-yf-tiled-mc-ccs:
- shard-iclb: NOTRUN -> [SKIP][17] ([i915#768])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-iclb2/igt@gem_render_c...@y-tiled-ccs-to-yf-tiled-mc-ccs.html

  * igt@gem_userptr_blits@input-checking:
- shard-apl:  NOTRUN -> [DMESG-WARN][18] ([i915#3002])
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-apl8/igt@gem_userptr_bl...@input-checking.html

  * igt@gen7_exec_parse@batch-without-end:
- shard-iclb: NOTRUN -> [SKIP][19] ([fdo#109289])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/shard-iclb2/igt@gen7_exec_pa...@batch-without-end.html

  * igt@gen9_exec_parse@allowed-single:
   

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/display: Try YCbCr420 color when RGB fails

2021-05-06 Thread Patchwork
== Series Details ==

Series: drm/i915/display: Try YCbCr420 color when RGB fails
URL   : https://patchwork.freedesktop.org/series/89842/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10053_full -> Patchwork_20079_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_20079_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@feature_discovery@display-2x:
- shard-iclb: NOTRUN -> [SKIP][1] ([i915#1839])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb5/igt@feature_discov...@display-2x.html

  * igt@feature_discovery@psr2:
- shard-iclb: NOTRUN -> [SKIP][2] ([i915#658])
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb5/igt@feature_discov...@psr2.html

  * igt@gem_create@create-clear:
- shard-glk:  [PASS][3] -> [FAIL][4] ([i915#1888] / [i915#3160])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-glk5/igt@gem_cre...@create-clear.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-glk7/igt@gem_cre...@create-clear.html

  * igt@gem_ctx_persistence@engines-mixed-process:
- shard-snb:  NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#1099]) +3 
similar issues
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-snb5/igt@gem_ctx_persiste...@engines-mixed-process.html

  * igt@gem_eio@unwedge-stress:
- shard-tglb: [PASS][6] -> [TIMEOUT][7] ([i915#2369] / [i915#3063])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-tglb1/igt@gem_...@unwedge-stress.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-tglb3/igt@gem_...@unwedge-stress.html
- shard-iclb: [PASS][8] -> [TIMEOUT][9] ([i915#2369] / [i915#2481] 
/ [i915#3070])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-iclb6/igt@gem_...@unwedge-stress.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb4/igt@gem_...@unwedge-stress.html
- shard-snb:  NOTRUN -> [FAIL][10] ([i915#3354])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-snb5/igt@gem_...@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
- shard-tglb: [PASS][11] -> [FAIL][12] ([i915#2846])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-tglb3/igt@gem_exec_f...@basic-deadline.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-tglb2/igt@gem_exec_f...@basic-deadline.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
- shard-kbl:  NOTRUN -> [FAIL][13] ([i915#2842])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-kbl6/igt@gem_exec_fair@basic-none-...@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs1:
- shard-iclb: NOTRUN -> [FAIL][14] ([i915#2842])
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb2/igt@gem_exec_fair@basic-n...@vcs1.html

  * igt@gem_exec_fair@basic-none@vecs0:
- shard-apl:  NOTRUN -> [FAIL][15] ([i915#2842])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-apl1/igt@gem_exec_fair@basic-n...@vecs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-iclb: [PASS][16] -> [FAIL][17] ([i915#2842])
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-iclb3/igt@gem_exec_fair@basic-pace-s...@rcs0.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb3/igt@gem_exec_fair@basic-pace-s...@rcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
- shard-kbl:  [PASS][18] -> [SKIP][19] ([fdo#109271])
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-kbl1/igt@gem_exec_fair@basic-p...@vecs0.html
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-kbl1/igt@gem_exec_fair@basic-p...@vecs0.html

  * igt@gem_exec_reloc@basic-wide-active@vcs1:
- shard-iclb: NOTRUN -> [FAIL][20] ([i915#2389])
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb2/igt@gem_exec_reloc@basic-wide-act...@vcs1.html

  * igt@gem_mmap_gtt@big-copy-xy:
- shard-skl:  [PASS][21] -> [FAIL][22] ([i915#307])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/shard-skl9/igt@gem_mmap_...@big-copy-xy.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-skl9/igt@gem_mmap_...@big-copy-xy.html

  * igt@gem_pwrite@basic-exhaustion:
- shard-apl:  NOTRUN -> [WARN][23] ([i915#2658])
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-apl2/igt@gem_pwr...@basic-exhaustion.html

  * igt@gem_render_copy@y-tiled-ccs-to-yf-tiled-mc-ccs:
- shard-iclb: NOTRUN -> [SKIP][24] ([i915#768])
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20079/shard-iclb5/igt

Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/adl_p: Add support for Display Page Tables (rev2)

2021-05-06 Thread Vudum, Lakshminarayana
Re-reported.

-Original Message-
From: Deak, Imre  
Sent: Thursday, May 6, 2021 10:58 AM
To: intel-gfx@lists.freedesktop.org; Vudum, Lakshminarayana 

Subject: Re: ✗ Fi.CI.BAT: failure for drm/i915/adl_p: Add support for Display 
Page Tables (rev2)

On Thu, May 06, 2021 at 05:03:29PM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915/adl_p: Add support for Display Page Tables (rev2)
> URL   : https://patchwork.freedesktop.org/series/89078/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_10053 -> Patchwork_20077 
> 
> 
> Summary
> ---
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_20077 absolutely need to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_20077, please notify your bug team to allow them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   External URL: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/index.html
> 
> Possible new issues
> ---
> 
>   Here are the unknown changes that may have been introduced in 
> Patchwork_20077:
> 
> ### IGT changes ###
> 
>  Possible regressions 
> 
>   * igt@kms_chamelium@common-hpd-after-suspend:
> - fi-kbl-7500u:   [PASS][1] -> [FAIL][2]
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/fi-kbl-7500u/
> igt@kms_chamel...@common-hpd-after-suspend.html

The Chamelium doesn't disconnect as expected after manually deasserting its
HPD signal. No idea how that would be related to the ADL_P-specific changes
in this patchset. I found a few previous instances of the same problem on
the same machine:

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10045/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20026/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20021/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html
https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5766/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html
https://intel-gfx-ci.01.org/tree/drm-tip/Trybot_7697/fi-kbl-7500u/igt@kms_chamel...@common-hpd-after-suspend.html

Lakshmi, could we open a ticket for this?

> Known issues
> 
> 
>   Here are the changes found in Patchwork_20077 that come from known issues:
> 
> ### IGT changes ###
> 
>  Issues hit 
> 
>   * igt@amdgpu/amd_prime@amd-to-i915:
> - fi-tgl-y:   NOTRUN -> [SKIP][3] ([fdo#109315] / [i915#2575])
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/fi-tgl-y/igt@
> amdgpu/amd_pr...@amd-to-i915.html
> 
>   * igt@i915_selftest@live@hangcheck:
> - fi-snb-2600:[PASS][4] -> [INCOMPLETE][5] ([i915#2782])
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10053/fi-snb-2600/igt@i915_selftest@l...@hangcheck.html
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20077/fi-snb-2600/i
> gt@i915_selftest@l...@hangcheck.html
> 
>   
>   {name}: This element is suppressed. This means it is ignored when computing
>   the status of the difference (SUCCESS, WARNING, or FAILURE).
> 
>   [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
>   [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
>   [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
>   [i915#2782]: https://gitlab.freedesktop.org/drm/intel/issues/2782
>   [i915#3180]: https://gitlab.freedesktop.org/drm/intel/issues/3180
>   [i915#3277]: https://gitlab.freedesktop.org/drm/intel/issues/3277
>   [i915#3283]: https://gitlab.freedesktop.org/drm/intel/issues/3283
> 
> 
> Participating hosts (44 -> 40)
> --
> 
>   Missing(4): fi-ctg-p8600 fi-ilk-m540 fi-bdw-samus fi-hsw-4200u 
> 
> 
> Build changes
> -
> 
>   * Linux: CI_DRM_10053 -> Patchwork_20077
> 
>   CI-20190529: 20190529
>   CI_DRM_10053: 3e000bbf311ad04f734843e1ba6396b28ba44399 @ 
> git://anongit.freedesktop.org/gfx-ci/linux
>   IGT_6080: 1c450c3d4df19cf1087b8ccff3b62cb51addacae @ 
> git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
>   Patchwork_20077: e83dba92cd47bd2b5841fc8e7f66bbd7d376e7bd @ 
> git://anongit.freedesktop.org/gfx-ci/linux
> 
> 
> == Linux commits ==
> 
> e83dba92cd47 drm/i915/adl_p: Enable remapping to pad DPT FB strides to 
> POT 6cc7df9cf93e drm/i915/adl_p: Require a minimum of 8 tiles stride 
> for DPT FBs
> f86259ff5f81 drm/i915/adl_p: Disable support for 90/270 FB rotation
> c55b96cff231 drm/i915/adl_p: Add stride restriction when using DPT
> c14a9051e424 drm/i915/xelpd: Support 128k plane stride
> c248682ab7f3 drm/i915/xelpd: Fallback to plane stride limitations when 
> usi

[Intel-gfx] ✗ Fi.CI.BUILD: failure for Basic GuC submission support in the i915

2021-05-06 Thread Patchwork
== Series Details ==

Series: Basic GuC submission support in the i915
URL   : https://patchwork.freedesktop.org/series/89844/
State : failure

== Summary ==

  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  DESCEND  objtool
  CHK include/generated/compile.h
  LD [M]  drivers/gpu/drm/i915/i915.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_requests.h
In file included from :
./drivers/gpu/drm/i915/gt/intel_gt_requests.h: In function 
‘intel_gt_retire_requests’:
./drivers/gpu/drm/i915/gt/intel_gt_requests.h:17:42: error: ‘NULL’ undeclared 
(first use in this function)
  intel_gt_retire_requests_timeout(gt, 0, NULL);
  ^~~~
./drivers/gpu/drm/i915/gt/intel_gt_requests.h:17:42: note: ‘NULL’ is defined in 
header ‘’; did you forget to ‘#include ’?
./drivers/gpu/drm/i915/gt/intel_gt_requests.h:1:1:
+#include 
 /* SPDX-License-Identifier: MIT */
./drivers/gpu/drm/i915/gt/intel_gt_requests.h:17:42:
  intel_gt_retire_requests_timeout(gt, 0, NULL);
  ^~~~
./drivers/gpu/drm/i915/gt/intel_gt_requests.h:17:42: note: each undeclared 
identifier is reported only once for each function it appears in
drivers/gpu/drm/i915/Makefile:316: recipe for target 
'drivers/gpu/drm/i915/gt/intel_gt_requests.hdrtest' failed
make[4]: *** [drivers/gpu/drm/i915/gt/intel_gt_requests.hdrtest] Error 1
scripts/Makefile.build:514: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:514: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:514: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
Makefile:1851: recipe for target 'drivers' failed
make: *** [drivers] Error 2
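
The failure above is a header self-test (HDRTEST) break: the inline helper
in intel_gt_requests.h now passes NULL without including anything that
defines it. A minimal sketch of the kind of fix the diagnostic points at,
assuming <linux/types.h> is an acceptable way to pull in NULL here (the
series may pick a different header); struct intel_gt and
intel_gt_retire_requests_timeout() are declared elsewhere in that header:

	/* drivers/gpu/drm/i915/gt/intel_gt_requests.h -- sketch of the fix */
	#include <linux/types.h>	/* NULL, used by the inline helper below */

	static inline void intel_gt_retire_requests(struct intel_gt *gt)
	{
		/* same call the HDRTEST log shows; only the include is new */
		intel_gt_retire_requests_timeout(gt, 0, NULL);
	}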


___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2 01/10] drm/i915/xelpd: add XE_LPD display characteristics

2021-05-06 Thread Souza, Jose
On Thu, 2021-05-06 at 19:19 +0300, Imre Deak wrote:
> From: Matt Roper 
> 
> Let's start preparing for upcoming platforms that will use an XE_LPD
> design.
> 
> v2:
>  - Use the now-preferred "XE_LPD" term to refer to this design
>  - Utilize DISPLAY_VER() rather than a feature flag
>  - Drop unused mbus_size field (Lucas)
> v3:
>  - Adjust for dbuf.{size,slice_mask} (Ville)
> 

Reviewed-by: José Roberto de Souza 

> Signed-off-by: Matt Roper 
> Reviewed-by: José Roberto de Souza  (v2)
> Signed-off-by: Imre Deak 
> ---
>  drivers/gpu/drm/i915/display/intel_display_power.h |  2 ++
>  drivers/gpu/drm/i915/i915_pci.c| 10 ++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h 
> b/drivers/gpu/drm/i915/display/intel_display_power.h
> index f3ca5d5c97781..acf47252d9e75 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> @@ -380,6 +380,8 @@ intel_display_power_put_all_in_set(struct 
> drm_i915_private *i915,
>  enum dbuf_slice {
>   DBUF_S1,
>   DBUF_S2,
> + DBUF_S3,
> + DBUF_S4,
>   I915_MAX_DBUF_SLICES
>  };
>  
> diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
> index c678e0663d808..00e15fe00f4f0 100644
> --- a/drivers/gpu/drm/i915/i915_pci.c
> +++ b/drivers/gpu/drm/i915/i915_pci.c
> @@ -939,6 +939,16 @@ static const struct intel_device_info adl_s_info = {
>   .dma_mask_size = 46,
>  };
>  
> +#define XE_LPD_FEATURES \
> + .display.ver = 13,  \
> + .display.has_psr_hw_tracking = 0,   \
> + .abox_mask = GENMASK(1, 0), \
> + .pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D), \
> + .cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |  \
> + BIT(TRANSCODER_C) | BIT(TRANSCODER_D),  \
> + .dbuf.size = 4096,  \
> + .dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2) | BIT(DBUF_S3) | 
> BIT(DBUF_S4)
> +
>  #undef GEN
>  #undef PLATFORM
>  

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 88/97] drm/i915/guc: Support request cancellation

2021-05-06 Thread Matthew Brost
This adds GuC backend support for i915_request_cancel(), which in turn
makes CONFIG_DRM_I915_REQUEST_TIMEOUT work.

Signed-off-by: Matthew Brost 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |   9 +
 drivers/gpu/drm/i915/gt/intel_context.h   |   7 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |   7 +
 .../drm/i915/gt/intel_execlists_submission.c  |  18 ++
 drivers/gpu/drm/i915/gt/intel_gt_requests.c   |   1 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 168 ++
 drivers/gpu/drm/i915/i915_request.c   |  14 +-
 7 files changed, 211 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 3fe7794b2bfd..b633fea684d4 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -366,6 +366,12 @@ static int __intel_context_active(struct i915_active 
*active)
return 0;
 }
 
+static int sw_fence_dummy_notify(struct i915_sw_fence *sf,
+enum i915_sw_fence_notify state)
+{
+   return NOTIFY_DONE;
+}
+
 void
 intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine)
 {
@@ -398,6 +404,9 @@ intel_context_init(struct intel_context *ce, struct 
intel_engine_cs *engine)
ce->guc_id = GUC_INVALID_LRC_ID;
INIT_LIST_HEAD(&ce->guc_id_link);
 
+   i915_sw_fence_init(&ce->guc_blocked, sw_fence_dummy_notify);
+   i915_sw_fence_commit(&ce->guc_blocked);
+
i915_active_init(&ce->active,
 __intel_context_active, __intel_context_retire, 0);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index 11fa7700dc9e..1b208daee72b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -71,6 +71,13 @@ intel_context_is_pinned(struct intel_context *ce)
return atomic_read(&ce->pin_count);
 }
 
+static inline void intel_context_cancel_request(struct intel_context *ce,
+   struct i915_request *rq)
+{
+   GEM_BUG_ON(!ce->ops->cancel_request);
+   return ce->ops->cancel_request(ce, rq);
+}
+
 /**
  * intel_context_unlock_pinned - Releases the earlier locking of 'pinned' 
status
  * @ce - the context
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 217761b27b6c..cd2ea5b98fc3 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -13,6 +13,7 @@
 #include 
 
 #include "i915_active_types.h"
+#include "i915_sw_fence.h"
 #include "i915_utils.h"
 #include "intel_engine_types.h"
 #include "intel_sseu.h"
@@ -43,6 +44,9 @@ struct intel_context_ops {
void (*unpin)(struct intel_context *ce);
void (*post_unpin)(struct intel_context *ce);
 
+   void (*cancel_request)(struct intel_context *ce,
+  struct i915_request *rq);
+
void (*enter)(struct intel_context *ce);
void (*exit)(struct intel_context *ce);
 
@@ -200,6 +204,9 @@ struct intel_context {
 */
u8 guc_prio;
u32 guc_prio_count[GUC_CLIENT_PRIORITY_NUM];
+
+   /* GuC context blocked fence */
+   struct i915_sw_fence guc_blocked;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 54518b64bdbd..16606cdfc2f5 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -114,6 +114,7 @@
 #include "gen8_engine_cs.h"
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
+#include "intel_engine_heartbeat.h"
 #include "intel_engine_pm.h"
 #include "intel_engine_stats.h"
 #include "intel_execlists_submission.h"
@@ -2545,11 +2546,26 @@ static int execlists_context_alloc(struct intel_context 
*ce)
return lrc_alloc(ce, ce->engine);
 }
 
+static void execlists_context_cancel_request(struct intel_context *ce,
+struct i915_request *rq)
+{
+   struct intel_engine_cs *engine = NULL;
+
+   i915_request_active_engine(rq, &engine);
+
+   if (engine && intel_engine_pulse(engine))
+   intel_gt_handle_error(engine->gt, engine->mask, 0,
+ "request cancellation by %s",
+ current->comm);
+}
+
 static const struct intel_context_ops execlists_context_ops = {
.flags = COPS_HAS_INFLIGHT,
 
.alloc = execlists_context_alloc,
 
+   .cancel_request = execlists_context_cancel_request,
+
.pre_pin = execlists_context_pre_pin,
.pin = execlists_context_pin,
.unpin = lrc_unpin,
@@ -3649,6 +3665,8 @@ static const struct intel_context_ops virtual_context_ops 
= {
 
.alloc = virtual_context_alloc,
 
+   .ca

[Intel-gfx] [RFC PATCH 72/97] drm/i915/guc: Don't complain about reset races

2021-05-06 Thread Matthew Brost
From: John Harrison 

It is impossible to seal all race conditions between resets and other
concurrent operations. At least, not without introducing excessive
mutex locking. Instead, don't complain if one occurs. In particular,
don't complain when trying to send an H2G during a reset; whatever the
H2G was about should get redone once the reset is over.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 5 -
 drivers/gpu/drm/i915/gt/uc/intel_uc.c | 4 
 drivers/gpu/drm/i915/gt/uc/intel_uc.h | 2 ++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index d5b326d4e250..1c240ff8dec9 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -718,7 +718,10 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 
*action, u32 len,
int ret;
 
if (unlikely(!ct->enabled)) {
-   WARN(1, "Unexpected send: action=%#x\n", *action);
+   struct intel_guc *guc = ct_to_guc(ct);
+   struct intel_uc *uc = container_of(guc, struct intel_uc, guc);
+
+   WARN(!uc->reset_in_progress, "Unexpected send: action=%#x\n", 
*action);
return -ENODEV;
}
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7035aa727e04..8c681fc49638 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -550,6 +550,8 @@ void intel_uc_reset_prepare(struct intel_uc *uc)
 {
struct intel_guc *guc = &uc->guc;
 
+   uc->reset_in_progress = true;
+
/* Firmware expected to be running when this function is called */
if (!intel_guc_is_ready(guc))
goto sanitize;
@@ -574,6 +576,8 @@ void intel_uc_reset_finish(struct intel_uc *uc)
 {
struct intel_guc *guc = &uc->guc;
 
+   uc->reset_in_progress = false;
+
/* Firmware expected to be running when this function is called */
if (intel_guc_is_fw_running(guc) && intel_uc_uses_guc_submission(uc))
intel_guc_submission_reset_finish(guc);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
index eaa3202192ac..91315e3f1c58 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
@@ -30,6 +30,8 @@ struct intel_uc {
 
/* Snapshot of GuC log from last failed load */
struct drm_i915_gem_object *load_err_log;
+
+   bool reset_in_progress;
 };
 
 void intel_uc_init_early(struct intel_uc *uc);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 78/97] drm/i915/guc: Include scheduling policies in the debugfs state dump

2021-05-06 Thread Matthew Brost
From: John Harrison 

Added the scheduling policy parameters to the 'guc_info' debugfs state
dump.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c | 13 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h |  2 ++
 drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c |  2 ++
 3 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index bb20513f40f6..bc2745f73a06 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -95,6 +95,19 @@ static void guc_policies_init(struct intel_guc *guc, struct 
guc_policies *polici
policies->is_valid = 1;
 }
 
+void intel_guc_log_policy_info(struct intel_guc *guc, struct drm_printer *dp)
+{
+   struct __guc_ads_blob *blob = guc->ads_blob;
+
+   if (unlikely(!blob))
+   return;
+
+   drm_printf(dp, "Global scheduling policies:\n");
+   drm_printf(dp, "  DPC promote time   = %u\n", 
blob->policies.dpc_promote_time);
+   drm_printf(dp, "  Max num work items = %u\n", 
blob->policies.max_num_work_items);
+   drm_printf(dp, "  Flags  = %u\n", 
blob->policies.global_flags);
+}
+
 static int guc_action_policies_update(struct intel_guc *guc, u32 policy_offset)
 {
u32 action[] = {
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
index b00d3ae1113a..0fdcb3583601 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
@@ -7,9 +7,11 @@
 #define _INTEL_GUC_ADS_H_
 
 struct intel_guc;
+struct drm_printer;
 
 int intel_guc_ads_create(struct intel_guc *guc);
 void intel_guc_ads_destroy(struct intel_guc *guc);
 void intel_guc_ads_reset(struct intel_guc *guc);
+void intel_guc_log_policy_info(struct intel_guc *guc, struct drm_printer *p);
 
 #endif
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
index 62b9ce0fafaa..9a03ff56e654 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
@@ -10,6 +10,7 @@
 #include "intel_guc_debugfs.h"
 #include "intel_guc_log_debugfs.h"
 #include "gt/uc/intel_guc_ct.h"
+#include "gt/uc/intel_guc_ads.h"
 #include "gt/uc/intel_guc_submission.h"
 
 static int guc_info_show(struct seq_file *m, void *data)
@@ -29,6 +30,7 @@ static int guc_info_show(struct seq_file *m, void *data)
 
intel_guc_log_ct_info(&guc->ct, &p);
intel_guc_log_submission_info(guc, &p);
+   intel_guc_log_policy_info(guc, &p);
 
return 0;
 }
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 56/97] drm/i915/guc: Update GuC debugfs to support new GuC

2021-05-06 Thread Matthew Brost
Update GuC debugfs to support the new GuC structures.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 22 
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  3 ++
 .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c| 23 +++-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 52 +++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  4 ++
 drivers/gpu/drm/i915/i915_debugfs.c   |  1 +
 6 files changed, 104 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index cf701056fa14..b3194d753b13 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -1131,3 +1131,25 @@ void intel_guc_ct_event_handler(struct intel_guc_ct *ct)
 
ct_try_receive_message(ct);
 }
+
+void intel_guc_log_ct_info(struct intel_guc_ct *ct,
+  struct drm_printer *p)
+{
+   if (!ct->enabled) {
+   drm_puts(p, "CT disabled\n");
+   return;
+   }
+
+   drm_printf(p, "H2G Space: %u\n",
+  atomic_read(&ct->ctbs.send.space) * 4);
+   drm_printf(p, "Head: %u\n",
+  ct->ctbs.send.desc->head);
+   drm_printf(p, "Tail: %u\n",
+  ct->ctbs.send.desc->tail);
+   drm_printf(p, "G2H Space: %u\n",
+  atomic_read(&ct->ctbs.recv.space) * 4);
+   drm_printf(p, "Head: %u\n",
+  ct->ctbs.recv.desc->head);
+   drm_printf(p, "Tail: %u\n",
+  ct->ctbs.recv.desc->tail);
+}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
index ab1b79ab960b..f62eb06b32fc 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
@@ -16,6 +16,7 @@
 
 struct i915_vma;
 struct intel_guc;
+struct drm_printer;
 
 /**
  * DOC: Command Transport (CT).
@@ -106,4 +107,6 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 
*action, u32 len,
  u32 *response_buf, u32 response_buf_size, u32 flags);
 void intel_guc_ct_event_handler(struct intel_guc_ct *ct);
 
+void intel_guc_log_ct_info(struct intel_guc_ct *ct, struct drm_printer *p);
+
 #endif /* _INTEL_GUC_CT_H_ */
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
index fe7cb7b29a1e..62b9ce0fafaa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
@@ -9,6 +9,8 @@
 #include "intel_guc.h"
 #include "intel_guc_debugfs.h"
 #include "intel_guc_log_debugfs.h"
+#include "gt/uc/intel_guc_ct.h"
+#include "gt/uc/intel_guc_submission.h"
 
 static int guc_info_show(struct seq_file *m, void *data)
 {
@@ -22,16 +24,35 @@ static int guc_info_show(struct seq_file *m, void *data)
drm_puts(&p, "\n");
intel_guc_log_info(&guc->log, &p);
 
-   /* Add more as required ... */
+   if (!intel_guc_submission_is_used(guc))
+   return 0;
+
+   intel_guc_log_ct_info(&guc->ct, &p);
+   intel_guc_log_submission_info(guc, &p);
 
return 0;
 }
 DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_info);
 
+static int guc_registered_contexts_show(struct seq_file *m, void *data)
+{
+   struct intel_guc *guc = m->private;
+   struct drm_printer p = drm_seq_file_printer(m);
+
+   if (!intel_guc_submission_is_used(guc))
+   return -ENODEV;
+
+   intel_guc_log_context_info(guc, &p);
+
+   return 0;
+}
+DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
+
 void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
 {
static const struct debugfs_gt_file files[] = {
{ "guc_info", &guc_info_fops, NULL },
+   { "guc_registered_contexts", &guc_registered_contexts_fops, 
NULL },
};
 
if (!intel_guc_is_supported(guc))
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 0ff7dd6d337d..c7a8968f22c5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1607,3 +1607,55 @@ int intel_guc_sched_done_process_msg(struct intel_guc 
*guc,
 
return 0;
 }
+
+void intel_guc_log_submission_info(struct intel_guc *guc,
+  struct drm_printer *p)
+{
+   struct i915_sched_engine *sched_engine = guc->sched_engine;
+   struct rb_node *rb;
+   unsigned long flags;
+
+   drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n",
+  atomic_read(&guc->outstanding_submission_g2h));
+   drm_printf(p, "GuC tasklet count: %u\n\n",
+  atomic_read(&sched_engine->tasklet.count));
+
+   spin_lock_irqsave(&sched_engine->lock, flags);
+   drm_printf(p, "Requests in GuC submit tasklet:\n");
+   for (rb = rb_first_cached(&sc

[Intel-gfx] [RFC PATCH 30/97] drm/i915/uc: turn on GuC/HuC auto mode by default

2021-05-06 Thread Matthew Brost
From: Daniele Ceraolo Spurio 

This will enable HuC loading for Gen11+ by default if the binaries
are available on the system. GuC submission still requires explicit
enabling by the user.

Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
Cc: Michal Wajdeczko 
Cc: John Harrison 
---
 drivers/gpu/drm/i915/i915_params.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_params.h 
b/drivers/gpu/drm/i915/i915_params.h
index 14cd64cc61d0..a0575948ab61 100644
--- a/drivers/gpu/drm/i915/i915_params.h
+++ b/drivers/gpu/drm/i915/i915_params.h
@@ -59,7 +59,7 @@ struct drm_printer;
param(int, disable_power_well, -1, 0400) \
param(int, enable_ips, 1, 0600) \
param(int, invert_brightness, 0, 0600) \
-   param(int, enable_guc, 0, 0400) \
+   param(int, enable_guc, -1, 0400) \
param(int, guc_log_level, -1, 0400) \
param(char *, guc_firmware_path, NULL, 0400) \
param(char *, huc_firmware_path, NULL, 0400) \
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 40/97] drm/i915/guc: Module load failure test for CT buffer creation

2021-05-06 Thread Matthew Brost
From: John Harrison 

Add several module load failure injection points in the CT buffer creation
code path.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index d6895d29ed2d..586e6efc3558 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -177,6 +177,10 @@ static int ct_register_buffer(struct intel_guc_ct *ct, u32 
type,
 {
int err;
 
+   err = i915_inject_probe_error(guc_to_gt(ct_to_guc(ct))->i915, -ENXIO);
+   if (unlikely(err))
+   return err;
+
err = guc_action_register_ct_buffer(ct_to_guc(ct), type,
desc_addr, buff_addr, size);
if (unlikely(err))
@@ -228,6 +232,10 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
u32 *cmds;
int err;
 
+   err = i915_inject_probe_error(guc_to_gt(guc)->i915, -ENXIO);
+   if (err)
+   return err;
+
GEM_BUG_ON(ct->vma);
 
blob_size = 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE + 
CTB_G2H_BUFFER_SIZE;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 85/97] drm/i915/guc: Introduce guc_submit_engine object

2021-05-06 Thread Matthew Brost
Move the fields that control the GuC submission state machine into a
dedicated object (guc_submit_engine) rather than keeping them in the
global GuC state (intel_guc). This encapsulation allows multiple
submission objects to operate in parallel: one instance can block if
needed while another makes forward progress. This is analogous to how
execlists mode assigns a scheduling object per physical engine, except
that in GuC mode the scheduling object is assigned based on blocking
dependencies.

The guc_submit_engine object also encapsulates the i915_sched_engine
object.

Lots of find-replace.

Currently only one guc_submit_engine is instantiated; future patches
will instantiate more.
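
For reference, a rough sketch of the new container, pieced together from
the diff below; the field list is illustrative rather than the exact
layout added in intel_guc_submission_types.h:

	struct guc_submit_engine {
		/* embedded so ce_to_gse() can use container_of() */
		struct i915_sched_engine sched_engine;

		/* submission flow-control state, moved out of intel_guc */
		struct i915_request *stalled_rq;
		struct intel_context *stalled_context;
		struct work_struct retire_worker;
		unsigned long flags;
		int total_num_rq_with_no_guc_id;
		int submission_stall_reason;	/* STALL_NONE, STALL_GUC_ID_*, ... */
	};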

Signed-off-by: Matthew Brost 
Cc: John Harrison 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  33 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 544 +++---
 .../i915/gt/uc/intel_guc_submission_types.h   |  53 ++
 drivers/gpu/drm/i915/i915_scheduler.c |  25 +-
 drivers/gpu/drm/i915/i915_scheduler.h |   5 +-
 drivers/gpu/drm/i915/i915_scheduler_types.h   |   3 +
 6 files changed, 411 insertions(+), 252 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 26a0225f45e9..904f3a941832 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -20,6 +20,11 @@
 
 struct __guc_ads_blob;
 
+enum {
+   GUC_SUBMIT_ENGINE_SINGLE_LRC,
+   GUC_SUBMIT_ENGINE_MAX
+};
+
 /*
  * Top level structure of GuC. It handles firmware loading and manages client
  * pool. intel_guc owns a intel_guc_client to replace the legacy ExecList
@@ -30,31 +35,6 @@ struct intel_guc {
struct intel_guc_log log;
struct intel_guc_ct ct;
 
-   /* Global engine used to submit requests to GuC */
-   struct i915_sched_engine *sched_engine;
-
-   /* Global state related to submission tasklet */
-   struct i915_request *stalled_rq;
-   struct intel_context *stalled_context;
-   struct work_struct retire_worker;
-   unsigned long flags;
-   int total_num_rq_with_no_guc_id;
-
-   /*
-* Submisson stall reason. See intel_guc_submission.c for detailed
-* description.
-*/
-   enum {
-   STALL_NONE,
-   STALL_GUC_ID_WORKQUEUE,
-   STALL_GUC_ID_TASKLET,
-   STALL_SCHED_DISABLE,
-   STALL_REGISTER_CONTEXT,
-   STALL_DEREGISTER_CONTEXT,
-   STALL_MOVE_LRC_TAIL,
-   STALL_ADD_REQUEST,
-   } submission_stall_reason;
-
/* intel_guc_recv interrupt related state */
spinlock_t irq_lock;
unsigned int msg_enabled_mask;
@@ -68,6 +48,8 @@ struct intel_guc {
void (*disable)(struct intel_guc *guc);
} interrupts;
 
+   struct guc_submit_engine *gse[GUC_SUBMIT_ENGINE_MAX];
+
/*
 * contexts_lock protects the pool of free guc ids and a linked list of
 * guc ids available to be stolden
@@ -76,7 +58,6 @@ struct intel_guc {
struct ida guc_ids;
u32 num_guc_ids;
u32 max_guc_ids;
-   atomic_t num_guc_ids_not_ready;
struct list_head guc_id_list_no_ref;
struct list_head guc_id_list_unpinned;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index aa5e608deed5..9dc0ffc07cd7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -21,6 +21,7 @@
 #include "gt/intel_ring.h"
 
 #include "intel_guc_submission.h"
+#include "intel_guc_submission_types.h"
 
 #include "i915_drv.h"
 #include "i915_trace.h"
@@ -57,7 +58,7 @@
  * WQ_TYPE_INORDER is needed to support legacy submission via GuC, which
  * represents in-order queue. The kernel driver packs ring tail pointer and an
  * ELSP context descriptor dword into Work Item.
- * See guc_add_request()
+ * See gse_add_request()
  *
  * GuC flow control state machine:
  * The tasklet, workqueue (retire_worker), and the G2H handlers together more 
or
@@ -80,57 +81,57 @@
  * context)
  */
 
-/* GuC Virtual Engine */
-struct guc_virtual_engine {
-   struct intel_engine_cs base;
-   struct intel_context context;
-};
-
 static struct intel_context *
 guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count);
 
 #define GUC_REQUEST_SIZE 64 /* bytes */
 
+static inline struct guc_submit_engine *ce_to_gse(struct intel_context *ce)
+{
+   return container_of(ce->engine->sched_engine, struct guc_submit_engine,
+   sched_engine);
+}
+
 /*
  * Global GuC flags helper functions
  */
 enum {
-   GUC_STATE_TASKLET_BLOCKED,
-   GUC_STATE_GUC_IDS_EXHAUSTED,
+   GSE_STATE_TASKLET_BLOCKED,
+   GSE_STATE_GUC_IDS_EXHAUSTED,
 };
 
-static

[Intel-gfx] [RFC PATCH 86/97] drm/i915/guc: Add golden context to GuC ADS

2021-05-06 Thread Matthew Brost
From: John Harrison 

The media watchdog mechanism involves the GuC doing a silent reset and
then continuing the hung context. This requires the i915 driver to
provide a golden context to the GuC in the ADS.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_gt.c |   2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.c |   5 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h |   2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c | 213 ++---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h |   1 +
 drivers/gpu/drm/i915/gt/uc/intel_uc.c  |   5 +
 drivers/gpu/drm/i915/gt/uc/intel_uc.h  |   1 +
 7 files changed, 199 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c 
b/drivers/gpu/drm/i915/gt/intel_gt.c
index 1742a8561f69..0e4a5c4c883f 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -641,6 +641,8 @@ int intel_gt_init(struct intel_gt *gt)
if (err)
goto err_gt;
 
+   intel_uc_init_late(&gt->uc);
+
err = i915_inject_probe_error(gt->i915, -EIO);
if (err)
goto err_gt;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index f3240037fb7c..918802712460 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -192,6 +192,11 @@ void intel_guc_init_early(struct intel_guc *guc)
}
 }
 
+void intel_guc_init_late(struct intel_guc *guc)
+{
+   intel_guc_ads_init_late(guc);
+}
+
 static u32 guc_ctl_debug_flags(struct intel_guc *guc)
 {
u32 level = intel_guc_log_get_level(&guc->log);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 904f3a941832..96849a256be8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -66,6 +66,7 @@ struct intel_guc {
struct i915_vma *ads_vma;
struct __guc_ads_blob *ads_blob;
u32 ads_regset_size;
+   u32 ads_golden_ctxt_size;
 
struct i915_vma *lrc_desc_pool;
void *lrc_desc_pool_vaddr;
@@ -183,6 +184,7 @@ static inline u32 intel_guc_ggtt_offset(struct intel_guc 
*guc,
 }
 
 void intel_guc_init_early(struct intel_guc *guc);
+void intel_guc_init_late(struct intel_guc *guc);
 void intel_guc_init_send_regs(struct intel_guc *guc);
 void intel_guc_write_params(struct intel_guc *guc);
 int intel_guc_init(struct intel_guc *guc);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index bc2745f73a06..299aa580d90a 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -7,6 +7,7 @@
 
 #include "gt/intel_gt.h"
 #include "gt/intel_lrc.h"
+#include "gt/shmem_utils.h"
 #include "intel_guc_ads.h"
 #include "intel_guc_fwif.h"
 #include "intel_uc.h"
@@ -35,6 +36,10 @@
  *  +---+ <== dynamic
  *  | padding   |
  *  +---+ <== 4K aligned
+ *  | golden contexts   |
+ *  +---+
+ *  | padding   |
+ *  +---+ <== 4K aligned
  *  | private data  |
  *  +---+
  *  | padding   |
@@ -55,6 +60,11 @@ static u32 guc_ads_regset_size(struct intel_guc *guc)
return guc->ads_regset_size;
 }
 
+static u32 guc_ads_golden_ctxt_size(struct intel_guc *guc)
+{
+   return PAGE_ALIGN(guc->ads_golden_ctxt_size);
+}
+
 static u32 guc_ads_private_data_size(struct intel_guc *guc)
 {
return PAGE_ALIGN(guc->fw.private_data_size);
@@ -65,12 +75,23 @@ static u32 guc_ads_regset_offset(struct intel_guc *guc)
return offsetof(struct __guc_ads_blob, regset);
 }
 
-static u32 guc_ads_private_data_offset(struct intel_guc *guc)
+static u32 guc_ads_golden_ctxt_offset(struct intel_guc *guc)
 {
u32 offset;
 
offset = guc_ads_regset_offset(guc) +
 guc_ads_regset_size(guc);
+
+   return PAGE_ALIGN(offset);
+}
+
+static u32 guc_ads_private_data_offset(struct intel_guc *guc)
+{
+   u32 offset;
+
+   offset = guc_ads_golden_ctxt_offset(guc) +
+guc_ads_golden_ctxt_size(guc);
+
return PAGE_ALIGN(offset);
 }
 
@@ -321,53 +342,163 @@ static void guc_mmio_reg_state_init(struct intel_guc 
*guc,
GEM_BUG_ON(temp_set.size);
 }
 
-/*
- * The first 80 dwords of the register state context, containing the
- * execlists and ppgtt registers.
- */
-#define LR_HW_CONTEXT_SIZE (80 * sizeof(u32))
+static void fill_engine_enable_masks(struct intel_gt *gt,
+struct guc_gt_system_info *info)
+{
+   info->engine_enabled_masks[GUC_RENDER_CLASS] = 1;
+   info->engine_enabled_masks[GUC_BLIT

[Intel-gfx] [RFC PATCH 96/97] drm/i915/guc: Update GuC documentation

2021-05-06 Thread Matthew Brost
Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 99 ++-
 1 file changed, 77 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 337ddc0dab6b..594a99ea4f5c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -29,21 +29,6 @@
 /**
  * DOC: GuC-based command submission
  *
- * IMPORTANT NOTE: GuC submission is currently not supported in i915. The GuC
- * firmware is moving to an updated submission interface and we plan to
- * turn submission back on when that lands. The below documentation (and 
related
- * code) matches the old submission model and will be updated as part of the
- * upgrade to the new flow.
- *
- * GuC stage descriptor:
- * During initialization, the driver allocates a static pool of 1024 such
- * descriptors, and shares them with the GuC. Currently, we only use one
- * descriptor. This stage descriptor lets the GuC know about the workqueue and
- * process descriptor. Theoretically, it also lets the GuC know about our HW
- * contexts (context ID, etc...), but we actually employ a kind of submission
- * where the GuC uses the LRCA sent via the work item instead. This is called
- * a "proxy" submission.
- *
  * The Scratch registers:
  * There are 16 MMIO-based registers start from 0xC180. The kernel driver 
writes
  * a value to the action register (SOFT_SCRATCH_0) along with any data. It then
@@ -52,13 +37,45 @@
  * processes the request. The kernel driver polls waiting for this update and
  * then proceeds.
  *
- * Work Items:
- * There are several types of work items that the host may place into a
- * workqueue, each with its own requirements and limitations. Currently only
- * WQ_TYPE_INORDER is needed to support legacy submission via GuC, which
- * represents in-order queue. The kernel driver packs ring tail pointer and an
- * ELSP context descriptor dword into Work Item.
- * See gse_add_request()
+ * Command Transport buffers (CTBs):
+ * Covered in detail in other sections but CTBs (host-to-guc, H2G, guc-to-host
+ * G2H) are how the i915 controls submissions.
+ *
+ * Context registration:
+ * Before a context can be submitted it must be registered with the GuC via a
+ * H2G. A unique guc_id associated with each context. The context is either
+ * registered at request creation time (no flow control) or at submission time
+ * (flow control). It will stay registered until the context is destroyed or a
+ * flow control condition is met (e.g. pressure on guc_ids).
+ *
+ * Context submission:
+ * The i915 updates the LRC tail value in memory. Either a schedule enable H2G
+ * or context submit H2G is used to submit a context.
+ *
+ * Context unpin:
+ * To unpin a context a H2G is used to disable scheduling and when the
+ * corresponding G2H returns indicating the scheduling disable operation has
+ * completed it is safe to unpin the context. While a disable is in flight it
+ * isn't safe to resubmit the context so a fence is used to stall all future
+ * requests until the G2H is returned.
+ *
+ * Context deregistration:
+ * Before a context can be destroyed or we steal its guc_id we must deregister
+ * the context with the GuC via H2G. If stealing the guc_id it isn't safe to
+ * submit anything to this guc_id until the deregister completes so a fence is
+ * used to stall all requests associated with this guc_ids until the
+ * corresponding G2H returns indicating the guc_id has been deregistered.
+ *
+ * guc_ids:
+ * Unique number associated with private GuC context data passed in during
+ * context registration / submission / deregistration. 64k available. Simple 
ida
+ * is used for allocation.
+ *
+ * Stealing guc_ids:
+ * If no guc_ids are available they can be stolen from another context at
+ * request creation time if that context is unpinned. If nothing can be found 
at
+ * request creation time, flow control is triggered (serializing all submission
+ * until flow control exits) and guc_ids are stolen at submission time.
  *
  * GuC flow control state machine:
  * The tasklet, workqueue (retire_worker), and the G2H handlers together more 
or
@@ -79,6 +96,44 @@
  * STALL_MOVE_LRC_TAIL Tasklet will try to move LRC tail
  * STALL_ADD_REQUEST   Tasklet will try to add the request (submit
  * context)
+ *
+ * Locking:
+ * In the GuC submission code we have 4 basic spin locks which protect
+ * everything. Details about each below.
+ *
+ * gse->sched_engine->lock
+ * This is the submission lock for all contexts that share a GuC submit engine
+ * (gse), thus only one context that shares a gse can be submitting at a time.
+ *
+ * guc->contexts_lock
+ * Protects guc_id allocation. Global lock i.e. Only 1 context that uses GuC
+ * submission can hold this at a time.
+ *
+ * ce->guc_state.lock
+ * Protects everything under ce->guc_state. En

[Intel-gfx] [RFC PATCH 83/97] drm/i915/guc: Don't return -EAGAIN to user when guc_ids exhausted

2021-05-06 Thread Matthew Brost
Rather than returning -EAGAIN to the user when no guc_ids are available,
implement a fair sharing algorithm in the kernel which blocks submissions
until guc_ids become available. Submissions are released one at a time,
based on priority, until the guc_id pressure is released to ensure fair
sharing of the guc_ids. Once the pressure is fully released, the normal
guc_id allocation (at request creation time in guc_request_alloc) can
resume as this allocation path should be significantly faster and a fair
sharing algorithm isn't needed when guc_ids are plentiful.

The fair sharing algorithm is implemented by forcing all submissions to
the tasklet which serializes submissions, dequeuing one at a time.

If the submission doesn't have a guc_id and new guc_id can't be found,
two lists are searched, one list with contexts that are not pinned but
still registered with the guc (searched first) and another list with
contexts that are pinned but do not have any submissions actively in
inflight (scheduling enabled + registered, searched second). If no
guc_ids can be found we kick a workqueue which will retire requests
hopefully freeing a guc_id. The workqueue + tasklet ping / pong back and
forth until a guc_id can be found.

Once a guc_id is found, we may have to disable context scheduling
depending on which list the context is stolen from. When we disable
scheduling, we block the tasklet from executing until the completion G2H
returns. The disable scheduling must be issued from the workqueue
because of the locking structure. When we deregister a context, we also
do the same thing (waiting on the G2H) but we can safely issue the
deregister H2G from the tasklet.

Once all the G2H have returned we can trigger a submission on the
context.

Signed-off-by: Matthew Brost 
---
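As an illustration of the steal order described above, a standalone C sketch
that approximates the two lists with two passes over a toy array; struct ctx
and find_stealable_id are invented for the example and are not driver code:

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for a context that owns a guc_id. */
struct ctx {
	int guc_id;
	bool pinned;	/* context still pinned by a user */
	bool inflight;	/* has submissions actively in flight */
};

/*
 * Search order mirrors the description above: prefer contexts that are not
 * pinned but still registered, then pinned contexts with nothing in flight.
 * Return -1 to mean "nothing stealable, kick the retire worker".
 */
static int find_stealable_id(struct ctx *list, int n)
{
	int i;

	for (i = 0; i < n; i++)		/* pass 1: unpinned but registered */
		if (!list[i].pinned)
			return list[i].guc_id;

	for (i = 0; i < n; i++)		/* pass 2: pinned but idle */
		if (!list[i].inflight)
			return list[i].guc_id;

	return -1;
}

int main(void)
{
	struct ctx contexts[] = {
		{ .guc_id = 1, .pinned = true,  .inflight = true  },
		{ .guc_id = 2, .pinned = true,  .inflight = false },
		{ .guc_id = 3, .pinned = false, .inflight = false },
	};
	int id = find_stealable_id(contexts, 3);

	if (id < 0)
		printf("no guc_id available, kick retire worker\n");
	else
		printf("steal guc_id %d\n", id);
	return 0;
}
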
 drivers/gpu/drm/i915/gt/intel_context_types.h |   3 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  26 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 806 --
 drivers/gpu/drm/i915/i915_request.h   |   6 +
 4 files changed, 754 insertions(+), 87 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 591dcba7bfde..a25ea8fe2029 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -180,6 +180,9 @@ struct intel_context {
/* GuC lrc descriptor ID */
u16 guc_id;
 
+   /* Number of rq submitted without a guc_id */
+   u16 guc_num_rq_submit_no_id;
+
/* GuC lrc descriptor reference count */
atomic_t guc_id_ref;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 9b1a89530844..bd477209839b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -32,7 +32,28 @@ struct intel_guc {
 
/* Global engine used to submit requests to GuC */
struct i915_sched_engine *sched_engine;
-   struct i915_request *stalled_request;
+
+   /* Global state related to submission tasklet */
+   struct i915_request *stalled_rq;
+   struct intel_context *stalled_context;
+   struct work_struct retire_worker;
+   unsigned long flags;
+   int total_num_rq_with_no_guc_id;
+
+   /*
+* Submission stall reason. See intel_guc_submission.c for detailed
+* description.
+*/
+   enum {
+   STALL_NONE,
+   STALL_GUC_ID_WORKQUEUE,
+   STALL_GUC_ID_TASKLET,
+   STALL_SCHED_DISABLE,
+   STALL_REGISTER_CONTEXT,
+   STALL_DEREGISTER_CONTEXT,
+   STALL_MOVE_LRC_TAIL,
+   STALL_ADD_REQUEST,
+   } submission_stall_reason;
 
/* intel_guc_recv interrupt related state */
spinlock_t irq_lock;
@@ -55,7 +76,8 @@ struct intel_guc {
struct ida guc_ids;
u32 num_guc_ids;
u32 max_guc_ids;
-   struct list_head guc_id_list;
+   struct list_head guc_id_list_no_ref;
+   struct list_head guc_id_list_unpinned;
 
bool submission_selected;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 3c73c2ca668e..037a7ee4971b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -59,6 +59,25 @@
  * ELSP context descriptor dword into Work Item.
  * See guc_add_request()
  *
+ * GuC flow control state machine:
+ * The tasklet, workqueue (retire_worker), and the G2H handlers together more 
or
+ * less form a state machine which is used to submit requests + flow control
+ * requests, while waiting on resources / actions, if necessary. The enum,
+ * submission_stall_reason, controls the handoff of stalls between these
+ * entities with stalled_rq & stalled_context being the arguments. Each state
+ * described below.
+ *
+ * STALL_NONE  No stall condition
+ * STALL_G

[Intel-gfx] [RFC PATCH 92/97] drm/i915: Add GT PM delayed worker

2021-05-06 Thread Matthew Brost
Sometimes it is desirable to queue work up for later if the GT PM isn't
held, and run that work on the next GT PM unpark.

This is implemented with a list in the GT of all pending work, a
callback to add a work item to the list, and finally a wakeref post_get
callback that iterates / drains the list and queues the work items.

The first user of this is deregistration of GuC contexts.
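Roughly, the pattern is the following (a standalone C sketch with invented
names; gt_awake, pending[] and run_now() stand in for the wakeref state, the
GT list and queue_work(); this is not the driver code):

#include <stdio.h>
#include <stdbool.h>

#define MAX_PENDING 8

struct delayed_work { const char *name; };

static bool gt_awake;				/* stand-in for the GT wakeref */
static struct delayed_work *pending[MAX_PENDING];	/* stand-in for the GT list */
static int npending;

static void run_now(struct delayed_work *w)	/* stand-in for queue_work() */
{
	printf("running %s\n", w->name);
}

/* Queue immediately if the GT is awake, otherwise defer until unpark. */
static void add_delayed_work(struct delayed_work *w)
{
	if (gt_awake)
		run_now(w);
	else if (npending < MAX_PENDING)
		pending[npending++] = w;
}

/* Called on unpark: drain everything that was deferred while parked. */
static void on_unpark(void)
{
	int i;

	gt_awake = true;
	for (i = 0; i < npending; i++)
		run_now(pending[i]);
	npending = 0;
}

int main(void)
{
	struct delayed_work destroy = { "context destroy" };

	add_delayed_work(&destroy);	/* GT parked: deferred */
	on_unpark();			/* drained on the next unpark */
	return 0;
}
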

Signed-off-by: Matthew Brost 
	spin_lock_init(&gt->irq_lock);
 
+   spin_lock_init(&gt->pm_delayed_work_lock);
+   INIT_LIST_HEAD(&gt->pm_delayed_work_list);
+
INIT_LIST_HEAD(&gt->closed_vma);
spin_lock_init(&gt->closed_lock);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 463a6ae605a0..9f5485be156e 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -93,6 +93,13 @@ static int __gt_unpark(struct intel_wakeref *wf)
return 0;
 }
 
+static void __gt_queue_delayed_work(struct intel_wakeref *wf)
+{
+   struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref);
+
+   intel_gt_pm_queue_delayed_work(gt);
+}
+
 static int __gt_park(struct intel_wakeref *wf)
 {
struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref);
@@ -123,6 +130,7 @@ static int __gt_park(struct intel_wakeref *wf)
 
 static const struct intel_wakeref_ops wf_ops = {
.get = __gt_unpark,
+   .post_get = __gt_queue_delayed_work,
.put = __gt_park,
 };
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.c 
b/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.c
new file mode 100644
index ..fc97a37b9ca1
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "intel_runtime_pm.h"
+#include "intel_gt_pm.h"
+
+void intel_gt_pm_queue_delayed_work(struct intel_gt *gt)
+{
+   struct intel_gt_pm_delayed_work *work, *next;
+   unsigned long flags;
+
+   spin_lock_irqsave(&gt->pm_delayed_work_lock, flags);
+   list_for_each_entry_safe(work, next,
+&gt->pm_delayed_work_list, link) {
+   list_del_init(&work->link);
+   queue_work(system_unbound_wq, &work->worker);
+   }
+   spin_unlock_irqrestore(&gt->pm_delayed_work_lock, flags);
+}
+
+void intel_gt_pm_add_delayed_work(struct intel_gt *gt,
+ struct intel_gt_pm_delayed_work *work)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&gt->pm_delayed_work_lock, flags);
+   if (intel_gt_pm_is_awake(gt))
+   queue_work(system_unbound_wq, &work->worker);
+   else if (list_empty(&work->link))
+   list_add_tail(&work->link, &gt->pm_delayed_work_list);
+   spin_unlock_irqrestore(&gt->pm_delayed_work_lock, flags);
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.h 
b/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.h
new file mode 100644
index ..7e91a9432f7f
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_delayed_work.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef INTEL_GT_PM_DELAYED_WORK_H
+#define INTEL_GT_PM_DELAYED_WORK_H
+
+#include 
+#include 
+
+struct intel_gt;
+
+struct intel_gt_pm_delayed_work {
+   struct list_head link;
+   struct work_struct worker;
+};
+
+void intel_gt_pm_queue_delayed_work(struct intel_gt *gt);
+
+void intel_gt_pm_add_delayed_work(struct intel_gt *gt,
+ struct intel_gt_pm_delayed_work *work);
+
+#endif /* INTEL_GT_PM_DELAYED_WORK_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h 
b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index fecfacf551d5..60ed7af94dba 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -68,6 +68,9 @@ struct intel_gt {
struct intel_wakeref wakeref;
atomic_t user_wakeref;
 
+   struct list_head pm_delayed_work_list;
+   spinlock_t pm_delayed_work_lock;
+
struct list_head closed_vma;
spinlock_t closed_lock; /* guards the list of closed_vma */
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index f6c40f6fb7ac..10dcfd790aa2 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -17,6 +17,7 @@
 #include "intel_uc_fw.h"
 #include "i915_utils.h"
 #include "i915_vma.h"
+#include "gt/intel_gt_pm_delayed_work.h"
 
 struct __guc_ads_blob;
 
@@ -63,7 +64,7 @@ struct intel_guc {
 
spinlock_t destroy_lock;
struct list_head destroyed_contexts;
-   struct work_struct destroy_worker;
+   struct intel_gt_pm_delayed_work destroy_worker;
 
bool submission_selected;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 6fd5414296c

[Intel-gfx] [RFC PATCH 95/97] drm/i915/guc: Selftest for GuC flow control

2021-05-06 Thread Matthew Brost
Add five selftests for flow control conditions that are hard to recreate
from user space. The tests are listed below:

1. A test to verify that the number of guc_ids can be exhausted and all
submissions still complete.

2. A test to verify that the flow control state machine can recover from
a full GPU reset.

3. A test to verify that the lrcd registration slots can be exhausted
and all submissions still complete.

4. A test to verify that the H2G channel can deadlock and a full GPU
reset recovers the system.

5. A test to stress the CTB channel by submitting to lots of contexts
and then immediately destroying the contexts.

Tests 1, 2, and 3 also ensure that when flow control is triggered by
unready requests, those unready requests do not DoS ready requests.

Signed-off-by: Matthew Brost 
---
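These conditions are forced with the selftest-only injection flags added
below (inject_corrupt_h2g and friends). A standalone C sketch of that
fault-injection pattern, with invented names, not the driver code:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Selftest-only knob: when set, the next message is deliberately corrupted. */
static bool inject_corrupt;

static int write_message(uint32_t *header, uint32_t len)
{
	if (inject_corrupt) {
		*header |= 0xdead0000;	/* corrupt on purpose, exactly once */
		inject_corrupt = false;
	}
	printf("sending header 0x%08x, len %u\n", *header, len);
	return 0;
}

int main(void)
{
	uint32_t header = 0x1;

	inject_corrupt = true;		/* the test arms the injection... */
	write_message(&header, 4);	/* ...then exercises the normal path */
	return 0;
}
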
 drivers/gpu/drm/i915/Makefile |   1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   6 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |  40 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |   9 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  16 +
 .../i915/gt/uc/intel_guc_submission_types.h   |   2 +
 .../i915/gt/uc/selftest_guc_flow_control.c| 589 ++
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 .../i915/selftests/intel_scheduler_helpers.c  | 101 +++
 .../i915/selftests/intel_scheduler_helpers.h  |  37 ++
 10 files changed, 793 insertions(+), 9 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/selftest_guc_flow_control.c
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index c80ec163a7d1..eba5c1e9eceb 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -285,6 +285,7 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
selftests/igt_mmap.o \
selftests/igt_reset.o \
selftests/igt_spinner.o \
+   selftests/intel_scheduler_helpers.o \
selftests/librapl.o
 
 # virtual gpu code
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 10dcfd790aa2..169daaf8a189 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -102,6 +102,12 @@ struct intel_guc {
 
/* To serialize the intel_guc_send actions */
struct mutex send_mutex;
+
+   I915_SELFTEST_DECLARE(bool gse_hang_expected;)
+   I915_SELFTEST_DECLARE(bool deadlock_expected;)
+   I915_SELFTEST_DECLARE(bool bad_desc_expected;)
+   I915_SELFTEST_DECLARE(bool inject_bad_sched_disable;)
+   I915_SELFTEST_DECLARE(bool inject_corrupt_h2g;)
 };
 
 static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 1c240ff8dec9..03b8a359bfcb 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -3,7 +3,6 @@
  * Copyright © 2016-2019 Intel Corporation
  */
 
-#include 
 #include 
 #include 
 #include 
@@ -404,11 +403,13 @@ static int ct_write(struct intel_guc_ct *ct,
u32 *cmds = ctb->cmds;
unsigned int i;
 
-   if (unlikely(ctb->broken))
-   return -EDEADLK;
+   if (!I915_SELFTEST_ONLY(ct_to_guc(ct)->deadlock_expected)) {
+   if (unlikely(ctb->broken))
+   return -EDEADLK;
 
-   if (unlikely(desc->status))
-   goto corrupted;
+   if (unlikely(desc->status))
+   goto corrupted;
+   }
 
 #ifdef CONFIG_DRM_I915_DEBUG_GUC
if (unlikely((desc->tail | desc->head) >= size)) {
@@ -427,6 +428,15 @@ static int ct_write(struct intel_guc_ct *ct,
 FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len) |
 FIELD_PREP(GUC_CTB_MSG_0_FENCE, fence);
 
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+   if (ct_to_guc(ct)->inject_corrupt_h2g) {
+   header = FIELD_PREP(GUC_CTB_MSG_0_FORMAT, 3) |
+FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len + 5) |
+FIELD_PREP(GUC_CTB_MSG_0_FENCE, 0xdead);
+   ct_to_guc(ct)->inject_corrupt_h2g = false;
+   }
+#endif
+
hxg = (flags & INTEL_GUC_SEND_NB) ?
(FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_EVENT) |
 FIELD_PREP(GUC_HXG_EVENT_MSG_0_ACTION |
@@ -464,8 +474,12 @@ static int ct_write(struct intel_guc_ct *ct,
return 0;
 
 corrupted:
-   CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n",
-desc->head, desc->tail, desc->status);
+   if (I915_SELFTEST_ONLY(ct_to_guc(ct)->bad_desc_expected))
+   CT_DEBUG(ct, "Corrupted descriptor head=%u tail=%u 
status=%#x\n",
+desc->head, desc->tail, desc->status);
+   else
+   CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u 
status=%#x\n",
+

[Intel-gfx] [RFC PATCH 76/97] drm/i915/guc: Hook GuC scheduling policies up

2021-05-06 Thread Matthew Brost
From: John Harrison 

Use the official driver default scheduling policies for configuring
the GuC scheduler rather than a bunch of hardcoded values.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
Cc: Jose Souza 
---
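For context, i915.reset means: 0 no resets, 1 full GPU reset only, 2 engine
reset allowed; so anything below 2 must also disable GuC-based engine resets.
A standalone C sketch of that mapping (the flag bit value is made up for
illustration, not the real GuC interface value):

#include <stdio.h>
#include <stdint.h>

#define POLICY_DISABLE_ENGINE_RESET (1u << 0)	/* illustrative bit only */

static uint32_t policy_flags_for(int reset_param)
{
	uint32_t flags = 0;

	/* reset < 2 means engine resets are not allowed */
	if (reset_param < 2)
		flags |= POLICY_DISABLE_ENGINE_RESET;
	return flags;
}

int main(void)
{
	printf("reset=1 -> flags 0x%x\n", policy_flags_for(1));
	printf("reset=2 -> flags 0x%x\n", policy_flags_for(2));
	return 0;
}
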
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c| 44 ++-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 11 +++--
 4 files changed, 53 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index bba53e3b39b9..16cc8453b01c 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -461,6 +461,7 @@ struct intel_engine_cs {
 #define I915_ENGINE_IS_VIRTUAL   BIT(5)
 #define I915_ENGINE_HAS_RELATIVE_MMIO BIT(6)
 #define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7)
+#define I915_ENGINE_WANT_FORCED_PREEMPTION BIT(8)
unsigned int flags;
 
/*
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 10b48b9f7603..266358d04bfc 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -271,6 +271,8 @@ int intel_guc_engine_failure_process_msg(struct intel_guc 
*guc,
 
 void intel_guc_find_hung_context(struct intel_engine_cs *engine);
 
+int intel_guc_global_policies_update(struct intel_guc *guc);
+
 void intel_guc_submission_reset_prepare(struct intel_guc *guc);
 void intel_guc_submission_reset(struct intel_guc *guc, bool stalled);
 void intel_guc_submission_reset_finish(struct intel_guc *guc);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index 179ab658d2b5..b37473bc8fff 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -80,14 +80,54 @@ static u32 guc_ads_blob_size(struct intel_guc *guc)
   guc_ads_private_data_size(guc);
 }
 
-static void guc_policies_init(struct guc_policies *policies)
+static void guc_policies_init(struct intel_guc *guc, struct guc_policies 
*policies)
 {
+   struct intel_gt *gt = guc_to_gt(guc);
+   struct drm_i915_private *i915 = gt->i915;
+
policies->dpc_promote_time = GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US;
policies->max_num_work_items = GLOBAL_POLICY_MAX_NUM_WI;
+
policies->global_flags = 0;
+   if (i915->params.reset < 2)
+   policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
+
policies->is_valid = 1;
 }
 
+static int guc_action_policies_update(struct intel_guc *guc, u32 policy_offset)
+{
+   u32 action[] = {
+   INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE,
+   policy_offset
+   };
+
+   return intel_guc_send(guc, action, ARRAY_SIZE(action));
+}
+
+int intel_guc_global_policies_update(struct intel_guc *guc)
+{
+   struct __guc_ads_blob *blob = guc->ads_blob;
+   struct intel_gt *gt = guc_to_gt(guc);
+   intel_wakeref_t wakeref;
+   int ret;
+
+   if (!blob)
+   return -ENOTSUPP;
+
+   GEM_BUG_ON(!blob->ads.scheduler_policies);
+
+   guc_policies_init(guc, &blob->policies);
+
+   if (!intel_guc_is_ready(guc))
+   return 0;
+
+   with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
+   ret = guc_action_policies_update(guc, 
blob->ads.scheduler_policies);
+
+   return ret;
+}
+
 static void guc_mapping_table_init(struct intel_gt *gt,
   struct guc_gt_system_info *system_info)
 {
@@ -284,7 +324,7 @@ static void __guc_ads_init(struct intel_guc *guc)
u8 engine_class, guc_class;
 
/* GuC scheduling policies */
-   guc_policies_init(&blob->policies);
+   guc_policies_init(guc, &blob->policies);
 
/*
 * GuC expects a per-engine-class context image and size
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index ad3d2326a81d..a9fb31370c61 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -872,6 +872,7 @@ void intel_guc_submission_reset_finish(struct intel_guc 
*guc)
GEM_WARN_ON(atomic_read(&guc->outstanding_submission_g2h));
atomic_set(&guc->outstanding_submission_g2h, 0);
 
+   intel_guc_global_policies_update(guc);
enable_submission(guc);
intel_gt_unpark_heartbeats(guc_to_gt(guc));
 }
@@ -1160,8 +1161,12 @@ static void guc_context_policy_init(struct 
intel_engine_cs *engine,
 {
desc->policy_flags = 0;
 
-   desc->execution_quantum = CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US;
-   desc->preemption_timeout = CONTEXT_POLICY_DEFAULT_PREEMPTION_TIME_US;
+   if (engine->flags & I915_ENGINE_WANT_FORCED_PREEMPTION)
+   desc->policy_flags |= CONTEXT_POLICY_

[Intel-gfx] [RFC PATCH 69/97] drm/i915/guc: Handle engine reset failure notification

2021-05-06 Thread Matthew Brost
GuC will notify the driver, via G2H, if it fails to
reset an engine. We recover by resorting to a full GPU
reset.

Signed-off-by: Matthew Brost 
Signed-off-by: Fernando Pacheco 
---
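For reference, the handler amounts to a three-dword parse guarded by a length
check; a standalone C sketch (invented names, not the driver function):

#include <stdio.h>
#include <stdint.h>

/* Toy parser for a 3-dword engine-failure notification payload. */
static int process_engine_failure(const uint32_t *msg, uint32_t len)
{
	uint32_t guc_class, instance, reason;

	if (len != 3)
		return -1;	/* malformed message, like -EPROTO above */

	guc_class = msg[0];
	instance = msg[1];
	reason = msg[2];

	printf("engine %u:%u failed reset, reason=0x%08x -> full GPU reset\n",
	       guc_class, instance, reason);
	return 0;
}

int main(void)
{
	uint32_t payload[3] = { 1, 0, 0xdeadbeef };

	return process_engine_failure(payload, 3);
}
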
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |  6 +++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 43 +++
 3 files changed, 51 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index a2abe1c422e3..e118d8217e77 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -265,6 +265,8 @@ int intel_guc_sched_done_process_msg(struct intel_guc *guc,
 const u32 *msg, u32 len);
 int intel_guc_context_reset_process_msg(struct intel_guc *guc,
const u32 *msg, u32 len);
+int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
+const u32 *msg, u32 len);
 
 void intel_guc_submission_reset_prepare(struct intel_guc *guc);
 void intel_guc_submission_reset(struct intel_guc *guc, bool stalled);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 9c84b2ba63a8..d5b326d4e250 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -947,6 +947,12 @@ static int ct_process_request(struct intel_guc_ct *ct, 
struct ct_incoming_msg *r
CT_ERROR(ct, "context reset notification failed %x 
%*ph\n",
  action, 4 * len, payload);
break;
+   case INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION:
+   ret = intel_guc_engine_failure_process_msg(guc, payload, len);
+   if (unlikely(ret))
+   CT_ERROR(ct, "engine failure handler failed %x %*ph\n",
+ action, 4 * len, payload);
+   break;
default:
ret = -EOPNOTSUPP;
break;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 940017495731..22f17a055b21 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2227,6 +2227,49 @@ int intel_guc_context_reset_process_msg(struct intel_guc 
*guc,
return 0;
 }
 
+static struct intel_engine_cs *
+guc_lookup_engine(struct intel_guc *guc, u8 guc_class, u8 instance)
+{
+   struct intel_gt *gt = guc_to_gt(guc);
+   u8 engine_class = guc_class_to_engine_class(guc_class);
+
+   /* Class index is checked in class converter */
+   GEM_BUG_ON(instance > MAX_ENGINE_INSTANCE);
+
+   return gt->engine_class[engine_class][instance];
+}
+
+int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
+const u32 *msg, u32 len)
+{
+   struct intel_engine_cs *engine;
+   u8 guc_class, instance;
+   u32 reason;
+
+   if (unlikely(len != 3)) {
+   drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+   return -EPROTO;
+   }
+
+   guc_class = msg[0];
+   instance = msg[1];
+   reason = msg[2];
+
+   engine = guc_lookup_engine(guc, guc_class, instance);
+   if (unlikely(!engine)) {
+   drm_dbg(&guc_to_gt(guc)->i915->drm,
+   "Invalid engine %d:%d", guc_class, instance);
+   return -EPROTO;
+   }
+
+   intel_gt_handle_error(guc_to_gt(guc), engine->mask,
+ I915_ERROR_CAPTURE,
+ "GuC failed to reset %s (reason=0x%08x)\n",
+ engine->name, reason);
+
+   return 0;
+}
+
 void intel_guc_log_submission_info(struct intel_guc *guc,
   struct drm_printer *p)
 {
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 97/97] drm/i915/guc: Unblock GuC submission on Gen11+

2021-05-06 Thread Matthew Brost
From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
 drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
 4 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 169daaf8a189..ac7ece2f4c8c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -66,6 +66,7 @@ struct intel_guc {
struct list_head destroyed_contexts;
struct intel_gt_pm_delayed_work destroy_worker;
 
+   bool submission_supported;
bool submission_selected;
 
struct i915_vma *ads_vma;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 594a99ea4f5c..b9c86e0f02b2 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -3477,6 +3477,13 @@ void intel_guc_submission_disable(struct intel_guc *guc)
/* Note: By the time we're here, GuC may have already been reset */
 }
 
+static bool __guc_submission_supported(struct intel_guc *guc)
+{
+   /* GuC submission is unavailable for pre-Gen11 */
+   return intel_guc_is_supported(guc) &&
+  INTEL_GEN(guc_to_gt(guc)->i915) >= 11;
+}
+
 static bool __guc_submission_selected(struct intel_guc *guc)
 {
struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
@@ -3491,6 +3498,7 @@ void intel_guc_submission_init_early(struct intel_guc 
*guc)
 {
guc->max_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
guc->num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
+   guc->submission_supported = __guc_submission_supported(guc);
guc->submission_selected = __guc_submission_selected(guc);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
index 60c8b9aaad6e..9431ec52a6c4 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -37,8 +37,7 @@ int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
 
 static inline bool intel_guc_submission_is_supported(struct intel_guc *guc)
 {
-   /* XXX: GuC submission is unavailable for now */
-   return false;
+   return guc->submission_supported;
 }
 
 static inline bool intel_guc_submission_is_wanted(struct intel_guc *guc)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 4a79db4a739f..8cfb226da62e 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
return;
}
 
-   /* Default: enable HuC authentication only */
-   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   /* Intermediate platforms are HuC authentication only */
+   if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+   drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");
+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   return;
+   }
+
+   /* Default: enable HuC authentication and GuC submission */
+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION;
 }
 
 /* Reset GuC providing us with fresh state for both GuC and HuC.
@@ -313,9 +320,6 @@ static int __uc_init(struct intel_uc *uc)
if (i915_inject_probe_failure(uc_to_gt(uc)->i915))
return -ENOMEM;
 
-   /* XXX: GuC submission is unavailable for now */
-   GEM_BUG_ON(intel_uc_uses_guc_submission(uc));
-
ret = intel_guc_init(guc);
if (ret)
return ret;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 81/97] drm/i915/guc: Allow flexible number of context ids

2021-05-06 Thread Matthew Brost
The number of available GuC context ids might be limited.
Stop referring to the macro in code and use a variable instead.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h   |  2 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c| 16 +---
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 306d6857d683..9b1a89530844 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -53,6 +53,8 @@ struct intel_guc {
 */
spinlock_t contexts_lock;
struct ida guc_ids;
+   u32 num_guc_ids;
+   u32 max_guc_ids;
struct list_head guc_id_list;
 
bool submission_selected;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index a20d7205895a..8f40e534bc81 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -228,7 +228,7 @@ static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc 
*guc, u32 index)
 {
struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr;
 
-   GEM_BUG_ON(index >= GUC_MAX_LRC_DESCRIPTORS);
+   GEM_BUG_ON(index >= guc->max_guc_ids);
 
return &base[index];
 }
@@ -237,7 +237,7 @@ static inline struct intel_context *__get_context(struct 
intel_guc *guc, u32 id)
 {
struct intel_context *ce = xa_load(&guc->context_lookup, id);
 
-   GEM_BUG_ON(id >= GUC_MAX_LRC_DESCRIPTORS);
+   GEM_BUG_ON(id >= guc->max_guc_ids);
 
return ce;
 }
@@ -247,8 +247,7 @@ static int guc_lrc_desc_pool_create(struct intel_guc *guc)
u32 size;
int ret;
 
-   size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) *
- GUC_MAX_LRC_DESCRIPTORS);
+   size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * guc->max_guc_ids);
ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool,
 (void 
**)&guc->lrc_desc_pool_vaddr);
if (ret)
@@ -1008,7 +1007,7 @@ static void guc_submit_request(struct i915_request *rq)
 static int new_guc_id(struct intel_guc *guc)
 {
return ida_simple_get(&guc->guc_ids, GUC_ID_START,
- GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL |
+ guc->num_guc_ids, GFP_KERNEL |
  __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
 }
 
@@ -2142,6 +2141,8 @@ static bool __guc_submission_selected(struct intel_guc 
*guc)
 
 void intel_guc_submission_init_early(struct intel_guc *guc)
 {
+   guc->max_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
+   guc->num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
guc->submission_selected = __guc_submission_selected(guc);
 }
 
@@ -2150,7 +2151,7 @@ g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
 {
struct intel_context *ce;
 
-   if (unlikely(desc_idx >= GUC_MAX_LRC_DESCRIPTORS)) {
+   if (unlikely(desc_idx >= guc->max_guc_ids)) {
drm_dbg(&guc_to_gt(guc)->i915->drm,
"Invalid desc_idx %u", desc_idx);
return NULL;
@@ -2451,6 +2452,8 @@ void intel_guc_log_submission_info(struct intel_guc *guc,
 
drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n",
   atomic_read(&guc->outstanding_submission_g2h));
+   drm_printf(p, "GuC Number GuC IDs: %u\n", guc->num_guc_ids);
+   drm_printf(p, "GuC Max GuC IDs: %u\n", guc->max_guc_ids);
drm_printf(p, "GuC tasklet count: %u\n\n",
   atomic_read(&sched_engine->tasklet.count));
 
@@ -2474,7 +2477,6 @@ void intel_guc_log_context_info(struct intel_guc *guc,
 {
struct intel_context *ce;
unsigned long index;
-
xa_for_each(&guc->context_lookup, index, ce) {
drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id);
drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 91/97] drm/i915/guc: Take GT PM ref when deregistering context

2021-05-06 Thread Matthew Brost
Take a GT PM reference to prevent intel_gt_wait_for_idle from
short-circuiting while a context deregister H2G is in flight.

Signed-off-by: Matthew Brost 
---
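The new with_intel_gt_pm*() helpers use the usual single-iteration for-loop
scope-guard trick. A standalone C sketch of that macro pattern with
hypothetical pm_get()/pm_put() functions; note that, as with such macros in
general, a break inside the body would skip the put:

#include <stdio.h>

static void pm_get(void) { printf("get PM ref\n"); }
static void pm_put(void) { printf("put PM ref\n"); }

/*
 * Single-iteration for loop: take the reference before the body runs and
 * release it when the body falls out of scope.
 */
#define with_pm(tmp) \
	for ((tmp) = 1, pm_get(); (tmp); pm_put(), (tmp) = 0)

int main(void)
{
	int tmp;

	with_pm(tmp)
		printf("do work while the reference is held\n");

	return 0;
}
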
 drivers/gpu/drm/i915/gt/intel_engine_pm.h |  5 +
 drivers/gpu/drm/i915/gt/intel_gt_pm.h | 13 +++
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  4 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 98 +++
 4 files changed, 101 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
index 70ea46d6cfb0..17a5028ea177 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
@@ -16,6 +16,11 @@ intel_engine_pm_is_awake(const struct intel_engine_cs 
*engine)
return intel_wakeref_is_active(&engine->wakeref);
 }
 
+static inline void __intel_engine_pm_get(struct intel_engine_cs *engine)
+{
+   __intel_wakeref_get(&engine->wakeref);
+}
+
 static inline void intel_engine_pm_get(struct intel_engine_cs *engine)
 {
intel_wakeref_get(&engine->wakeref);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
index d0588d8aaa44..a17bf0d4592b 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
@@ -41,6 +41,19 @@ static inline void intel_gt_pm_put_async(struct intel_gt *gt)
intel_wakeref_put_async(&gt->wakeref);
 }
 
+#define with_intel_gt_pm(gt, tmp) \
+   for (tmp = 1, intel_gt_pm_get(gt); tmp; \
+intel_gt_pm_put(gt), tmp = 0)
+#define with_intel_gt_pm_async(gt, tmp) \
+   for (tmp = 1, intel_gt_pm_get(gt); tmp; \
+intel_gt_pm_put_async(gt), tmp = 0)
+#define with_intel_gt_pm_if_awake(gt, tmp) \
+   for (tmp = intel_gt_pm_get_if_awake(gt); tmp; \
+intel_gt_pm_put(gt), tmp = 0)
+#define with_intel_gt_pm_if_awake_async(gt, tmp) \
+   for (tmp = intel_gt_pm_get_if_awake(gt); tmp; \
+intel_gt_pm_put_async(gt), tmp = 0)
+
 static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
 {
return intel_wakeref_wait_for_idle(&gt->wakeref);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 97bb262f8a13..f6c40f6fb7ac 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -61,6 +61,10 @@ struct intel_guc {
struct list_head guc_id_list_no_ref;
struct list_head guc_id_list_unpinned;
 
+   spinlock_t destroy_lock;
+   struct list_head destroyed_contexts;
+   struct work_struct destroy_worker;
+
bool submission_selected;
 
struct i915_vma *ads_vma;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 79caf9596084..6fd5414296cd 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -909,6 +909,7 @@ static void scrub_guc_desc_for_outstanding_g2h(struct 
intel_guc *guc)
if (deregister)
guc_signal_context_fence(ce);
if (destroyed) {
+   intel_gt_pm_put_async(guc_to_gt(guc));
release_guc_id(guc, ce);
__guc_context_destroy(ce);
}
@@ -1023,6 +1024,8 @@ static void guc_flush_submissions(struct intel_guc *guc)
gse_flush_submissions(guc->gse[i]);
 }
 
+static void guc_flush_destroyed_contexts(struct intel_guc *guc);
+
 void intel_guc_submission_reset_prepare(struct intel_guc *guc)
 {
int i;
@@ -1040,6 +1043,7 @@ void intel_guc_submission_reset_prepare(struct intel_guc 
*guc)
spin_unlock_irq(&guc_to_gt(guc)->irq_lock);
 
guc_flush_submissions(guc);
+   guc_flush_destroyed_contexts(guc);
 
/*
 * Handle any outstanding G2Hs before reset. Call IRQ handler directly
@@ -1365,6 +1369,8 @@ static void retire_worker_func(struct work_struct *w)
 static int guc_lrcd_reg_init(struct intel_guc *guc);
 static void guc_lrcd_reg_fini(struct intel_guc *guc);
 
+static void destroy_worker_func(struct work_struct *w);
+
 /*
  * Set up the memory resources to be shared with the GuC (via the GGTT)
  * at firmware loading time.
@@ -1387,6 +1393,10 @@ int intel_guc_submission_init(struct intel_guc *guc)
INIT_LIST_HEAD(&guc->guc_id_list_unpinned);
ida_init(&guc->guc_ids);
 
+   spin_lock_init(&guc->destroy_lock);
+   INIT_LIST_HEAD(&guc->destroyed_contexts);
+   INIT_WORK(&guc->destroy_worker, destroy_worker_func);
+
return 0;
 }
 
@@ -1397,6 +1407,7 @@ void intel_guc_submission_fini(struct intel_guc *guc)
if (!guc_submission_initialized(guc))
return;
 
+   guc_flush_destroyed_contexts(guc);
guc_lrcd_reg_fini(guc);
 
for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) {
@@ -2280,11 +2291,29 @@ static 

[Intel-gfx] [RFC PATCH 50/97] drm/i915/guc: Extend deregistration fence to schedule disable

2021-05-06 Thread Matthew Brost
Extend the deregistration context fence to also fence when a GuC context
has a scheduling disable pending.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
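A tiny standalone C sketch of the fence idea: new submissions are parked on a
list while a pending-disable flag is set and are released when the completion
arrives. All names are invented for the example; the real code uses
i915_sw_fence and the ce->guc_state.fences list:

#include <stdio.h>
#include <stdbool.h>

#define MAX_BLOCKED 4

static bool pending_disable;			/* schedule-disable G2H in flight */
static const char *blocked[MAX_BLOCKED];	/* requests waiting on the fence */
static int nblocked;

/* At request submit time: block if a disable (or deregister) G2H is pending. */
static void submit_request(const char *name)
{
	if (pending_disable && nblocked < MAX_BLOCKED) {
		blocked[nblocked++] = name;
		printf("%s blocked on context fence\n", name);
		return;
	}
	printf("%s submitted\n", name);
}

/* When the G2H completion arrives, release everything that was blocked. */
static void sched_disable_done(void)
{
	int i;

	pending_disable = false;
	for (i = 0; i < nblocked; i++)
		printf("%s released, submitted\n", blocked[i]);
	nblocked = 0;
}

int main(void)
{
	pending_disable = true;
	submit_request("rq1");
	sched_disable_done();
	submit_request("rq2");
	return 0;
}
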
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 37 +++
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 2afc49caf462..885f14bfe3b9 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -921,7 +921,19 @@ static void guc_context_sched_disable(struct intel_context 
*ce)
goto unpin;
 
spin_lock_irqsave(&ce->guc_state.lock, flags);
+
+   /*
+* We have to check if the context has been pinned again as another pin
+* operation is allowed to pass this function. Checking the pin count
+* here synchronizes this function with guc_request_alloc ensuring a
+* request doesn't slip through the 'context_pending_disable' fence.
+*/
+   if (unlikely(atomic_add_unless(&ce->pin_count, -2, 2))) {
+   spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+   return;
+   }
guc_id = prep_context_pending_disable(ce);
+
spin_unlock_irqrestore(&ce->guc_state.lock, flags);
 
with_intel_runtime_pm(runtime_pm, wakeref)
@@ -1127,19 +1139,22 @@ static int guc_request_alloc(struct i915_request *rq)
 out:
/*
 * We block all requests on this context if a G2H is pending for a
-* context deregistration as the GuC will fail a context registration
-* while this G2H is pending. Once a G2H returns, the fence is released
-* that is blocking these requests (see guc_signal_context_fence).
+* schedule disable or context deregistration as the GuC will fail a
+* schedule enable or context registration if either G2H is pending
+* respectively. Once a G2H returns, the fence is released that is
+* blocking these requests (see guc_signal_context_fence).
 *
-* We can safely check the below field outside of the lock as it isn't
-* possible for this field to transition from being clear to set but
+* We can safely check the below fields outside of the lock as it isn't
+* possible for these fields to transition from being clear to set but
 * converse is possible, hence the need for the check within the lock.
 */
-   if (likely(!context_wait_for_deregister_to_register(ce)))
+   if (likely(!context_wait_for_deregister_to_register(ce) &&
+  !context_pending_disable(ce)))
return 0;
 
spin_lock_irqsave(&ce->guc_state.lock, flags);
-   if (context_wait_for_deregister_to_register(ce)) {
+   if (context_wait_for_deregister_to_register(ce) ||
+   context_pending_disable(ce)) {
i915_sw_fence_await(&rq->submit);
 
list_add_tail(&rq->guc_fence_link, &ce->guc_state.fences);
@@ -1488,10 +1503,18 @@ int intel_guc_sched_done_process_msg(struct intel_guc 
*guc,
if (context_pending_enable(ce)) {
clr_context_pending_enable(ce);
} else if (context_pending_disable(ce)) {
+   /*
+* Unpin must be done before __guc_signal_context_fence,
+* otherwise a race exists between the requests getting
+* submitted + retired before this unpin completes resulting in
+* the pin_count going to zero and the context still being
+* enabled.
+*/
intel_context_sched_disable_unpin(ce);
 
spin_lock_irqsave(&ce->guc_state.lock, flags);
clr_context_pending_disable(ce);
+   __guc_signal_context_fence(ce);
spin_unlock_irqrestore(&ce->guc_state.lock, flags);
}
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 94/97] drm/i915/guc: Don't call switch_to_kernel_context with GuC submission

2021-05-06 Thread Matthew Brost
Calling switch_to_kernel_context isn't needed if the engine PM reference
is taken while all contexts are pinned. By not calling
switch_to_kernel_context we save on issuing a request to the engine.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index ba6a9931c4e8..f8fab316e33d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -162,6 +162,10 @@ static bool switch_to_kernel_context(struct 
intel_engine_cs *engine)
unsigned long flags;
bool result = true;
 
+   /* No need to switch_to_kernel_context if GuC submission */
+   if (intel_engine_uses_guc(engine))
+   return true;
+
/* GPU is pointing to the void, as good as in the kernel context. */
if (intel_gt_is_wedged(engine->gt))
return true;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 71/97] drm/i915/guc: Provide mmio list to be saved/restored on engine reset

2021-05-06 Thread Matthew Brost
From: John Harrison 

The driver must provide GuC with a list of mmio registers
that should be saved/restored during a GuC-based engine reset.
Unfortunately, the list must be dynamically allocated as its size is
variable. That means the driver must generate the list twice - once to
work out the size and a second time to actually save it.

Signed-off-by: John Harrison 
Signed-off-by: Fernando Pacheco 
Signed-off-by: Matthew Brost 
Cc: Daniele Ceraolo Spurio 
Cc: Tvrtko Ursulin 
---
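Generating the list twice is the classic two-pass pattern: run once with no
output buffer just to count entries, allocate, then run again to fill the
allocation. A standalone C sketch with a toy reg type and invented names
(not the i915 workaround code):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct reg { uint32_t offset; uint32_t flags; };

/*
 * Emit the save/restore list into 'out' if it is non-NULL; either way
 * return the number of entries, so one function serves both passes.
 */
static int build_reg_list(struct reg *out)
{
	static const uint32_t offsets[] = { 0x2000, 0x2080, 0x20d0 };	/* made up */
	int i, count = 0;

	for (i = 0; i < 3; i++) {
		if (out) {
			out[count].offset = offsets[i];
			out[count].flags = 0;
		}
		count++;
	}
	return count;
}

int main(void)
{
	int count = build_reg_list(NULL);		/* pass 1: size only */
	struct reg *list = calloc(count, sizeof(*list));

	if (!list)
		return 1;
	build_reg_list(list);				/* pass 2: fill it in */
	printf("saved %d registers, first at 0x%x\n", count, list[0].offset);
	free(list);
	return 0;
}
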
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |  46 ++--
 .../gpu/drm/i915/gt/intel_workarounds_types.h |   1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c| 199 +-
 drivers/gpu/drm/i915/i915_reg.h   |   1 +
 5 files changed, 222 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c 
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 5a03a76bb9e2..05d21476d140 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -150,13 +150,14 @@ static void _wa_add(struct i915_wa_list *wal, const 
struct i915_wa *wa)
 }
 
 static void wa_add(struct i915_wa_list *wal, i915_reg_t reg,
-  u32 clear, u32 set, u32 read_mask)
+  u32 clear, u32 set, u32 read_mask, bool masked_reg)
 {
struct i915_wa wa = {
.reg  = reg,
.clr  = clear,
.set  = set,
.read = read_mask,
+   .masked_reg = masked_reg,
};
 
_wa_add(wal, &wa);
@@ -165,7 +166,7 @@ static void wa_add(struct i915_wa_list *wal, i915_reg_t reg,
 static void
 wa_write_clr_set(struct i915_wa_list *wal, i915_reg_t reg, u32 clear, u32 set)
 {
-   wa_add(wal, reg, clear, set, clear);
+   wa_add(wal, reg, clear, set, clear, false);
 }
 
 static void
@@ -200,20 +201,20 @@ wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, 
u32 clr)
 static void
 wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
 {
-   wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val);
+   wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val, true);
 }
 
 static void
 wa_masked_dis(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
 {
-   wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val);
+   wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val, true);
 }
 
 static void
 wa_masked_field_set(struct i915_wa_list *wal, i915_reg_t reg,
u32 mask, u32 val)
 {
-   wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask);
+   wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask, true);
 }
 
 static void gen6_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -583,10 +584,10 @@ static void icl_ctx_workarounds_init(struct 
intel_engine_cs *engine,
 GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC);
 
/* WaEnableFloatBlendOptimization:icl */
-   wa_write_clr_set(wal,
-GEN10_CACHE_MODE_SS,
-0, /* write-only, so skip validation */
-_MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE));
+   wa_add(wal, GEN10_CACHE_MODE_SS, 0,
+  _MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE),
+  0 /* write-only, so skip validation */,
+  true);
 
/* WaDisableGPGPUMidThreadPreemption:icl */
wa_masked_field_set(wal, GEN8_CS_CHICKEN1,
@@ -631,7 +632,7 @@ static void gen12_ctx_gt_tuning_init(struct intel_engine_cs 
*engine,
   FF_MODE2,
   FF_MODE2_TDS_TIMER_MASK,
   FF_MODE2_TDS_TIMER_128,
-  0);
+  0, false);
 }
 
 static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -668,7 +669,7 @@ static void gen12_ctx_workarounds_init(struct 
intel_engine_cs *engine,
   FF_MODE2,
   FF_MODE2_GS_TIMER_MASK,
   FF_MODE2_GS_TIMER_224,
-  0);
+  0, false);
 }
 
 static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
@@ -839,7 +840,7 @@ hsw_gt_workarounds_init(struct drm_i915_private *i915, 
struct i915_wa_list *wal)
wa_add(wal,
   HSW_ROW_CHICKEN3, 0,
   _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE),
-   0 /* XXX does this reg exist? */);
+  0 /* XXX does this reg exist? */, true);
 
/* WaVSRefCountFullforceMissDisable:hsw */
wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME);
@@ -1950,10 +1951,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, 
struct i915_wa_list *wal)
 * disable bit, which we don't touch here, but it's good
 * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
 */
-   wa_add(wal, GEN7_GT_MODE, 0,
-  _MASKED_FIELD(GEN6_WIZ_HASHING_MASK,
-GEN6_WIZ_HASHING_16x4),
- 

[Intel-gfx] [RFC PATCH 89/97] drm/i915/guc: Check return of __xa_store when registering a context

2021-05-06 Thread Matthew Brost
Check the return of __xa_store when registering a context as this can
fail in a rare case if memory cannot be allocated. If this occurs, fall
back on the tasklet flow control and try again in the future.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index b3157eeb2599..608b30907f4c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -503,18 +503,24 @@ static inline bool lrc_desc_registered(struct intel_guc 
*guc, u32 id)
return __get_context(guc, id);
 }
 
-static inline void set_lrc_desc_registered(struct intel_guc *guc, u32 id,
+static inline int set_lrc_desc_registered(struct intel_guc *guc, u32 id,
   struct intel_context *ce)
 {
unsigned long flags;
+   void *ret;
 
/*
 * xarray API doesn't have xa_save_irqsave wrapper, so calling the
 * lower level functions directly.
 */
xa_lock_irqsave(&guc->context_lookup, flags);
-   __xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC);
+   ret = __xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC);
xa_unlock_irqrestore(&guc->context_lookup, flags);
+
+   if (unlikely(xa_is_err(ret)))
+   return -EBUSY;  /* Try again in future */
+
+   return 0;
 }
 
 static int guc_submission_busy_loop(struct intel_guc* guc,
@@ -1831,7 +1837,9 @@ static int guc_lrc_desc_pin(struct intel_context *ce, 
bool loop)
rcu_read_unlock();
 
reset_lrc_desc(guc, desc_idx);
-   set_lrc_desc_registered(guc, desc_idx, ce);
+   ret = set_lrc_desc_registered(guc, desc_idx, ce);
+   if (unlikely(ret))
+   return ret;
 
desc = __get_lrc_desc(guc, desc_idx);
desc->engine_class = engine_class_to_guc_class(engine->class);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 35/97] drm/i915/guc: Improve error message for unsolicited CT response

2021-05-06 Thread Matthew Brost
Improve the error message when an unsolicited CT response is received by
printing the fence that couldn't be found, the last fence, and all
requests with a response outstanding.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 217ab3ebd1af..a76603537fa8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -703,12 +703,16 @@ static int ct_handle_response(struct intel_guc_ct *ct, 
struct ct_incoming_msg *r
found = true;
break;
}
-   spin_unlock_irqrestore(&ct->requests.lock, flags);
-
if (!found) {
CT_ERROR(ct, "Unsolicited response (fence %u)\n", fence);
-   return -ENOKEY;
+   CT_ERROR(ct, "Could not find fence=%u, last_fence=%u\n", fence,
+ct->requests.last_fence);
+   list_for_each_entry(req, &ct->requests.pending, link)
+   CT_ERROR(ct, "request %u awaits response\n",
+req->fence);
+   err = -ENOKEY;
}
+   spin_unlock_irqrestore(&ct->requests.lock, flags);
 
if (unlikely(err))
return err;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 79/97] drm/i915/guc: Don't call ring_is_idle in GuC submission

2021-05-06 Thread Matthew Brost
The engine registers really shouldn't be touched during GuC submission
as the GuC owns the registers. Don't call ring_is_idle and tie
intel_engine_is_idle strictly to the engine PM.

Because intel_engine_is_idle is now tied to the engine PM, retire requests
before checking intel_engines_are_idle in gt_drop_caches, and lastly
increase the timeout in gt_drop_caches for the intel_engines_are_idle
check.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 13 +
 drivers/gpu/drm/i915/i915_debugfs.c   |  6 +++---
 drivers/gpu/drm/i915/i915_drv.h   |  2 +-
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index e34a61600c8c..591226b96201 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1226,6 +1226,9 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
 {
bool idle = true;
 
+   /* GuC submission shouldn't access HEAD & TAIL via MMIO */
+   GEM_BUG_ON(intel_engine_uses_guc(engine));
+
if (I915_SELFTEST_ONLY(!engine->mmio_base))
return true;
 
@@ -1292,6 +1295,16 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
if (!i915_sched_engine_is_empty(engine->sched_engine))
return false;
 
+   /*
+* We shouldn't touch engine registers with GuC submission as the GuC
+* owns the registers. Let's tie the idle to engine pm, at worst this
+* function sometimes will falsely report non-idle when idle during the
+* delay to retire requests or with virtual engines and a request
+* running on another instance within the same class / submit mask.
+*/
+   if (intel_engine_uses_guc(engine))
+   return false;
+
/* Ring stopped? */
return ring_is_idle(engine);
 }
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
b/drivers/gpu/drm/i915/i915_debugfs.c
index d540dd8029d0..2639961504b5 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -867,13 +867,13 @@ gt_drop_caches(struct intel_gt *gt, u64 val)
 {
int ret;
 
+   if (val & DROP_RETIRE || val & DROP_RESET_ACTIVE)
+   intel_gt_retire_requests(gt);
+
if (val & DROP_RESET_ACTIVE &&
wait_for(intel_engines_are_idle(gt), I915_IDLE_ENGINES_TIMEOUT))
intel_gt_set_wedged(gt);
 
-   if (val & DROP_RETIRE)
-   intel_gt_retire_requests(gt);
-
if (val & (DROP_IDLE | DROP_ACTIVE)) {
ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
if (ret)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 3cfa6effbb5f..aa359b8480cd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -576,7 +576,7 @@ struct i915_gem_mm {
u32 shrink_count;
 };
 
-#define I915_IDLE_ENGINES_TIMEOUT (200) /* in ms */
+#define I915_IDLE_ENGINES_TIMEOUT (500) /* in ms */
 
 unsigned long i915_fence_context_timeout(const struct drm_i915_private *i915,
 u64 context);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 36/97] drm/i915/guc: Add non blocking CTB send function

2021-05-06 Thread Matthew Brost
Add a non-blocking CTB send function, intel_guc_send_nb. In order to
support a non-blocking CTB send, a spin lock is needed to protect the
CTB descriptor fields. Also, the non-blocking call must not update the
fence value as that value is owned by the blocking call
(intel_guc_send).

The blocking CTB send now needs a flow control mechanism to ensure the
buffer isn't overrun. A lazy spin wait is used, as we believe the flow
control condition should be rare with a properly sized buffer.

The function intel_guc_send_nb is exported by this patch but unused.
Several later patches in the series make use of it.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
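As an illustration of the intended split (the caller below is made up;
only intel_guc_send_nb() itself comes from this patch), a fire-and-forget
H2G message that can tolerate being dropped or retried later would look
roughly like

	static int notify_guc_of_foo(struct intel_guc *guc, u32 action_code,
				     u32 foo_id)
	{
		/* action_code is whatever H2G action the caller is issuing */
		u32 action[] = { action_code, foo_id };

		/* Returns -EBUSY if the send buffer currently has no space */
		return intel_guc_send_nb(guc, action, ARRAY_SIZE(action));
	}

whereas callers that need a response (or cannot drop the message) keep
using intel_guc_send(), which now lazily spins for buffer space before
writing.
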
 drivers/gpu/drm/i915/gt/uc/intel_guc.h| 12 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 96 +--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  7 +-
 3 files changed, 105 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index c20f3839de12..4c0a367e41d8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -75,7 +75,15 @@ static inline struct intel_guc *log_to_guc(struct 
intel_guc_log *log)
 static
 inline int intel_guc_send(struct intel_guc *guc, const u32 *action, u32 len)
 {
-   return intel_guc_ct_send(&guc->ct, action, len, NULL, 0);
+   return intel_guc_ct_send(&guc->ct, action, len, NULL, 0, 0);
+}
+
+#define INTEL_GUC_SEND_NB  BIT(31)
+static
+inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len)
+{
+   return intel_guc_ct_send(&guc->ct, action, len, NULL, 0,
+INTEL_GUC_SEND_NB);
 }
 
 static inline int
@@ -83,7 +91,7 @@ intel_guc_send_and_receive(struct intel_guc *guc, const u32 
*action, u32 len,
   u32 *response_buf, u32 response_buf_size)
 {
return intel_guc_ct_send(&guc->ct, action, len,
-response_buf, response_buf_size);
+response_buf, response_buf_size, 0);
 }
 
 static inline void intel_guc_to_host_event_handler(struct intel_guc *guc)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index a76603537fa8..af7314d45a78 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -3,6 +3,11 @@
  * Copyright © 2016-2019 Intel Corporation
  */
 
+#include 
+#include 
+#include 
+#include 
+
 #include "i915_drv.h"
 #include "intel_guc_ct.h"
 #include "gt/intel_gt.h"
@@ -308,6 +313,7 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
if (unlikely(err))
goto err_deregister;
 
+   ct->requests.last_fence = 1;
ct->enabled = true;
 
return 0;
@@ -343,10 +349,22 @@ static u32 ct_get_next_fence(struct intel_guc_ct *ct)
return ++ct->requests.last_fence;
 }
 
+static void write_barrier(struct intel_guc_ct *ct) {
+   struct intel_guc *guc = ct_to_guc(ct);
+   struct intel_gt *gt = guc_to_gt(guc);
+
+   if (i915_gem_object_is_lmem(guc->ct.vma->obj)) {
+   GEM_BUG_ON(guc->send_regs.fw_domains);
+   intel_uncore_write_fw(gt->uncore, GEN11_SOFT_SCRATCH(0), 0);
+   } else {
+   wmb();
+   }
+}
+
 static int ct_write(struct intel_guc_ct *ct,
const u32 *action,
u32 len /* in dwords */,
-   u32 fence)
+   u32 fence, u32 flags)
 {
struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
struct guc_ct_buffer_desc *desc = ctb->desc;
@@ -393,9 +411,13 @@ static int ct_write(struct intel_guc_ct *ct,
 FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len) |
 FIELD_PREP(GUC_CTB_MSG_0_FENCE, fence);
 
-   hxg = FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
- FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION |
-GUC_HXG_REQUEST_MSG_0_DATA0, action[0]);
+   hxg = (flags & INTEL_GUC_SEND_NB) ?
+   (FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_EVENT) |
+FIELD_PREP(GUC_HXG_EVENT_MSG_0_ACTION |
+   GUC_HXG_EVENT_MSG_0_DATA0, action[0])) :
+   (FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
+FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION |
+   GUC_HXG_REQUEST_MSG_0_DATA0, action[0]));
 
CT_DEBUG(ct, "writing (tail %u) %*ph %*ph %*ph\n",
 tail, 4, &header, 4, &hxg, 4 * (len - 1), &action[1]);
@@ -412,6 +434,12 @@ static int ct_write(struct intel_guc_ct *ct,
}
GEM_BUG_ON(tail > size);
 
+   /*
+* make sure H2G buffer update and LRC tail update (if this is triggering
+* a submission) are visible before updating the descriptor tail
+*/
+   write_barrier(ct);
+
/* now update descriptor */
WRITE_ONCE(desc->tail,

[Intel-gfx] [RFC PATCH 90/97] drm/i915/guc: Non-static lrc descriptor registration buffer

2021-05-06 Thread Matthew Brost
Dynamically allocate space for lrc descriptor registration with the GuC
rather than using a large static buffer indexed by the guc_id. If no
space is available to register a context, fall back to the tasklet flow
control mechanism. Only allow half of the space to be allocated outside
the tasklet, to prevent unready requests/contexts from consuming all of
the registration space.

Signed-off-by: Matthew Brost 
---
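A rough sketch of the allocation policy described above (the helper name
is illustrative and error handling is omitted): allocations made outside
the tasklet are capped at half of the registration space, while the
tasklet itself may use the whole range:

	static int alloc_lrcd_reg_idx(struct intel_guc *guc, bool from_tasklet)
	{
		int limit = from_tasklet ?
			guc->lrcd_reg.max_idx : guc->lrcd_reg.max_idx / 2;

		/* GFP_NOWAIT: may be called from the submission path */
		return ida_simple_get(&guc->lrcd_reg.ida, 0, limit, GFP_NOWAIT);
	}
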
 drivers/gpu/drm/i915/gt/intel_context_types.h |   3 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   9 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 198 +-
 3 files changed, 150 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index cd2ea5b98fc3..0d7173d3eabd 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -182,6 +182,9 @@ struct intel_context {
/* GuC scheduling state that does not require a lock. */
atomic_t guc_sched_state_no_lock;
 
+   /* GuC lrc descriptor registration buffer */
+   unsigned int guc_lrcd_reg_idx;
+
/* GuC lrc descriptor ID */
u16 guc_id;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 96849a256be8..97bb262f8a13 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -68,8 +68,13 @@ struct intel_guc {
u32 ads_regset_size;
u32 ads_golden_ctxt_size;
 
-   struct i915_vma *lrc_desc_pool;
-   void *lrc_desc_pool_vaddr;
+   /* GuC LRC descriptor registration */
+   struct {
+   struct i915_vma *vma;
+   void *vaddr;
+   struct ida ida;
+   unsigned int max_idx;
+   } lrcd_reg;
 
/* guc_id to intel_context lookup */
struct xarray context_lookup;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 608b30907f4c..79caf9596084 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -437,65 +437,54 @@ static inline struct i915_priolist *to_priolist(struct 
rb_node *rb)
return rb_entry(rb, struct i915_priolist, node);
 }
 
-static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index)
+static u32 __get_lrc_desc_offset(struct intel_guc *guc, int index)
 {
-   struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr;
-
+   GEM_BUG_ON(index >= guc->lrcd_reg.max_idx);
GEM_BUG_ON(index >= guc->max_guc_ids);
 
-   return &base[index];
+   return intel_guc_ggtt_offset(guc, guc->lrcd_reg.vma) +
+   (index * sizeof(struct guc_lrc_desc));
 }
 
-static inline struct intel_context *__get_context(struct intel_guc *guc, u32 
id)
+static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, int index)
 {
-   struct intel_context *ce = xa_load(&guc->context_lookup, id);
+   struct guc_lrc_desc *desc;
 
-   GEM_BUG_ON(id >= guc->max_guc_ids);
+   GEM_BUG_ON(index >= guc->lrcd_reg.max_idx);
+   GEM_BUG_ON(index >= guc->max_guc_ids);
 
-   return ce;
+   desc = guc->lrcd_reg.vaddr;
+   desc = &desc[index];
+   memset(desc, 0, sizeof(*desc));
+
+   return desc;
 }
 
-static int guc_lrc_desc_pool_create(struct intel_guc *guc)
+static inline struct intel_context *__get_context(struct intel_guc *guc, u32 
id)
 {
-   u32 size;
-   int ret;
-
-   size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * guc->max_guc_ids);
-   ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool,
-(void 
**)&guc->lrc_desc_pool_vaddr);
-   if (ret)
-   return ret;
+   struct intel_context *ce = xa_load(&guc->context_lookup, id);
 
-   return 0;
-}
+   GEM_BUG_ON(id >= guc->max_guc_ids);
 
-static void guc_lrc_desc_pool_destroy(struct intel_guc *guc)
-{
-   guc->lrc_desc_pool_vaddr = NULL;
-   i915_vma_unpin_and_release(&guc->lrc_desc_pool, I915_VMA_RELEASE_MAP);
+   return ce;
 }
 
 static inline bool guc_submission_initialized(struct intel_guc *guc)
 {
-   return guc->lrc_desc_pool_vaddr != NULL;
+   return guc->lrcd_reg.max_idx != 0;
 }
 
-static inline void reset_lrc_desc(struct intel_guc *guc, u32 id)
+static inline void clr_lrc_desc_registered(struct intel_guc *guc, u32 id)
 {
-   if (likely(guc_submission_initialized(guc))) {
-   struct guc_lrc_desc *desc = __get_lrc_desc(guc, id);
-   unsigned long flags;
-
-   memset(desc, 0, sizeof(*desc));
+   unsigned long flags;
 
-   /*
-* xarray API doesn't have xa_erase_irqsave wrapper, so calling
-* the lower level functions directly.
-*/
-   xa_lock_irqsave(&guc->context_lookup, flags);
-   __xa_erase(&guc->conte

[Intel-gfx] [RFC PATCH 77/97] drm/i915/guc: Connect reset modparam updates to GuC policy flags

2021-05-06 Thread Matthew Brost
From: John Harrison 

Changing the reset module parameter has no effect on a running GuC.
The corresponding entry in the ADS must be updated and then the GuC
informed via a Host2GuC message.

The new debugfs interface to module parameters allows this to happen.
However, connecting the parameter data address back to anything useful
is messy. One option would be to pass a new private data structure
address through instead of just the parameter pointer. However, that
means having a new (and different) data structure for each parameter
and a new (and different) write function for each parameter. This
method keeps everything generic by instead using a string lookup on
the directory entry name.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c |  2 +-
 drivers/gpu/drm/i915/i915_debugfs_params.c | 31 ++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index b37473bc8fff..bb20513f40f6 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -102,7 +102,7 @@ static int guc_action_policies_update(struct intel_guc 
*guc, u32 policy_offset)
policy_offset
};
 
-   return intel_guc_send(guc, action, ARRAY_SIZE(action));
+   return intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, 
true);
 }
 
 int intel_guc_global_policies_update(struct intel_guc *guc)
diff --git a/drivers/gpu/drm/i915/i915_debugfs_params.c 
b/drivers/gpu/drm/i915/i915_debugfs_params.c
index 4e2b077692cb..8ecd8b42f048 100644
--- a/drivers/gpu/drm/i915/i915_debugfs_params.c
+++ b/drivers/gpu/drm/i915/i915_debugfs_params.c
@@ -6,9 +6,20 @@
 #include 
 
 #include "i915_debugfs_params.h"
+#include "gt/intel_gt.h"
+#include "gt/uc/intel_guc.h"
 #include "i915_drv.h"
 #include "i915_params.h"
 
+#define MATCH_DEBUGFS_NODE_NAME(_file, _name)  
(strcmp((_file)->f_path.dentry->d_name.name, (_name)) == 0)
+
+#define GET_I915(i915, name, ptr)  \
+   do {\
+   struct i915_params *params; \
+   params = container_of(((void *) (ptr)), typeof(*params), name); 
\
+   (i915) = container_of(params, typeof(*(i915)), params); \
+   } while(0)
+
 /* int param */
 static int i915_param_int_show(struct seq_file *m, void *data)
 {
@@ -24,6 +35,16 @@ static int i915_param_int_open(struct inode *inode, struct 
file *file)
return single_open(file, i915_param_int_show, inode->i_private);
 }
 
+static int notify_guc(struct drm_i915_private *i915)
+{
+   int ret = 0;
+
+   if (intel_uc_uses_guc_submission(&i915->gt.uc))
+   ret = intel_guc_global_policies_update(&i915->gt.uc.guc);
+
+   return ret;
+}
+
 static ssize_t i915_param_int_write(struct file *file,
const char __user *ubuf, size_t len,
loff_t *offp)
@@ -81,8 +102,10 @@ static ssize_t i915_param_uint_write(struct file *file,
 const char __user *ubuf, size_t len,
 loff_t *offp)
 {
+   struct drm_i915_private *i915;
struct seq_file *m = file->private_data;
unsigned int *value = m->private;
+   unsigned int old = *value;
int ret;
 
ret = kstrtouint_from_user(ubuf, len, 0, value);
@@ -95,6 +118,14 @@ static ssize_t i915_param_uint_write(struct file *file,
*value = b;
}
 
+   if (!ret && MATCH_DEBUGFS_NODE_NAME(file, "reset")) {
+   GET_I915(i915, reset, value);
+
+   ret = notify_guc(i915);
+   if (ret)
+   *value = old;
+   }
+
return ret ?: len;
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 87/97] drm/i915/guc: Implement GuC priority management

2021-05-06 Thread Matthew Brost
Implement a simple static mapping of the i915 priority levels (an int,
with -1k to 1k exposed to user space) to the 4 GuC levels. The mapping
is as follows:

i915 level < 0        -> GuC low level     (3)
i915 level == 0       -> GuC normal level  (2)
i915 level < INT_MAX  -> GuC high level    (1)
i915 level == INT_MAX -> GuC highest level (0)

We believe this mapping should cover the UMD use cases (3 distinct user
levels + 1 kernel level).

In addition to the static mapping, a simple counter system is attached
to each context, tracking the number of requests in flight on the
context at each level. This is needed because the GuC levels are per
context while the i915 levels are per request.

Signed-off-by: Matthew Brost 
Cc: Daniele Ceraolo Spurio 
---
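Expressed as code, the table above amounts to a small static helper
along these lines (the name and the raw 0-3 return values are
illustrative; the patch itself uses the GuC firmware's
GUC_CLIENT_PRIORITY_* defines):

	static inline u8 i915_prio_to_guc_prio(int prio)
	{
		if (prio < 0)
			return 3;	/* GuC low */
		else if (prio == 0)
			return 2;	/* GuC normal (default) */
		else if (prio < INT_MAX)
			return 1;	/* GuC high */
		else
			return 0;	/* GuC highest, kernel only */
	}

The per-context counters then record how many in-flight requests sit at
each of the four levels, so a single per-context GuC level can be derived
from the per-request i915 levels.
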
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |   3 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |   9 +-
 drivers/gpu/drm/i915/gt/intel_engine_user.c   |   4 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 205 +-
 drivers/gpu/drm/i915/i915_request.c   |   5 +
 drivers/gpu/drm/i915/i915_request.h   |   8 +
 drivers/gpu/drm/i915/i915_scheduler.c |   7 +
 drivers/gpu/drm/i915/i915_scheduler_types.h   |   5 +
 drivers/gpu/drm/i915/i915_trace.h |  16 +-
 include/uapi/drm/i915_drm.h   |   9 +
 10 files changed, 266 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c 
b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 2007dc6f6b99..209cf265bf74 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -245,6 +245,9 @@ static void signal_irq_work(struct irq_work *work)
llist_entry(signal, typeof(*rq), signal_node);
struct list_head cb_list;
 
+   if (rq->engine->sched_engine->retire_inflight_request_prio)
+   
rq->engine->sched_engine->retire_inflight_request_prio(rq);
+
spin_lock(&rq->lock);
list_replace(&rq->fence.cb_list, &cb_list);
__dma_fence_signal__timestamp(&rq->fence, timestamp);
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 998f3839411a..217761b27b6c 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -17,8 +17,9 @@
 #include "intel_engine_types.h"
 #include "intel_sseu.h"
 
-#define CONTEXT_REDZONE POISON_INUSE
+#include "uc/intel_guc_fwif.h"
 
+#define CONTEXT_REDZONE POISON_INUSE
 DECLARE_EWMA(runtime, 3, 8);
 
 struct i915_gem_context;
@@ -193,6 +194,12 @@ struct intel_context {
 * GuC ID link - in list when unpinned but guc_id still valid in GuC
 */
struct list_head guc_id_link;
+
+   /*
+* GuC priority management
+*/
+   u8 guc_prio;
+   u32 guc_prio_count[GUC_CLIENT_PRIORITY_NUM];
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_user.c 
b/drivers/gpu/drm/i915/gt/intel_engine_user.c
index d6dcdeace174..7cb16b6cf2ef 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_user.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_user.c
@@ -11,6 +11,7 @@
 #include "intel_engine.h"
 #include "intel_engine_user.h"
 #include "intel_gt.h"
+#include "uc/intel_guc_submission.h"
 
 struct intel_engine_cs *
 intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance)
@@ -114,6 +115,9 @@ static void set_scheduler_caps(struct drm_i915_private 
*i915)
disabled |= (I915_SCHEDULER_CAP_ENABLED |
 I915_SCHEDULER_CAP_PRIORITY);
 
+   if (intel_uc_uses_guc_submission(&i915->gt.uc))
+   enabled |= I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP;
+
for (i = 0; i < ARRAY_SIZE(map); i++) {
if (engine->flags & BIT(map[i].engine))
enabled |= BIT(map[i].sched);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 9dc0ffc07cd7..6d2ae6390299 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -180,6 +180,7 @@ static void clr_guc_ids_exhausted(struct guc_submit_engine 
*gse)
 #define SCHED_STATE_NO_LOCK_BLOCK_TASKLET  BIT(2)
 #define SCHED_STATE_NO_LOCK_GUC_ID_STOLEN  BIT(3)
 #define SCHED_STATE_NO_LOCK_NEEDS_REGISTER BIT(4)
+#define SCHED_STATE_NO_LOCK_REGISTERED BIT(5)
 static inline bool context_enabled(struct intel_context *ce)
 {
return (atomic_read(&ce->guc_sched_state_no_lock) &
@@ -269,6 +270,24 @@ static inline void clr_context_needs_register(struct 
intel_context *ce)
   &ce->guc_sched_state_no_lock);
 }
 
+static inline bool context_registered(struct intel_context *ce)
+{
+   return (atomic_read(&ce->gu

[Intel-gfx] [RFC PATCH 80/97] drm/i915/guc: Implement banned contexts for GuC submission

2021-05-06 Thread Matthew Brost
When using GuC submission, if a context gets banned, disable scheduling
of it and mark all of its in-flight requests as complete.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |   2 +-
 drivers/gpu/drm/i915/gt/intel_context.h   |  13 ++
 drivers/gpu/drm/i915/gt/intel_context_types.h |   2 +
 drivers/gpu/drm/i915/gt/intel_reset.c |  32 ++---
 .../gpu/drm/i915/gt/intel_ring_submission.c   |  20 +++
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   2 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 129 --
 drivers/gpu/drm/i915/i915_trace.h |  10 ++
 8 files changed, 172 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index bb827bb99250..5dcab5536433 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -441,7 +441,7 @@ static void kill_engines(struct i915_gem_engines *engines, 
bool ban)
for_each_gem_engine(ce, engines, it) {
struct intel_engine_cs *engine;
 
-   if (ban && intel_context_set_banned(ce))
+   if (ban && intel_context_ban(ce, NULL))
continue;
 
/*
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index d2b499ed8a05..11fa7700dc9e 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -17,6 +17,7 @@
 #include "intel_ring_types.h"
 #include "intel_timeline_types.h"
 #include "uc/intel_guc_submission.h"
+#include "i915_trace.h"
 
 #define CE_TRACE(ce, fmt, ...) do {\
const struct intel_context *ce__ = (ce);\
@@ -243,6 +244,18 @@ static inline bool intel_context_set_banned(struct 
intel_context *ce)
return test_and_set_bit(CONTEXT_BANNED, &ce->flags);
 }
 
+static inline bool intel_context_ban(struct intel_context *ce,
+struct i915_request *rq)
+{
+   bool ret = intel_context_set_banned(ce);
+
+   trace_intel_context_ban(ce);
+   if (ce->ops->ban)
+   ce->ops->ban(ce, rq);
+
+   return ret;
+}
+
 static inline bool
 intel_context_force_single_submission(const struct intel_context *ce)
 {
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index b63c8cf7823b..591dcba7bfde 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -35,6 +35,8 @@ struct intel_context_ops {
 
int (*alloc)(struct intel_context *ce);
 
+   void (*ban)(struct intel_context *ce, struct i915_request *rq);
+
int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww, 
void **vaddr);
int (*pin)(struct intel_context *ce, void *vaddr);
void (*unpin)(struct intel_context *ce);
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
b/drivers/gpu/drm/i915/gt/intel_reset.c
index c35c4b529ce5..4347cc2dcea0 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -22,7 +22,6 @@
 #include "intel_reset.h"
 
 #include "uc/intel_guc.h"
-#include "uc/intel_guc_submission.h"
 
 #define RESET_MAX_RETRIES 3
 
@@ -39,21 +38,6 @@ static void rmw_clear_fw(struct intel_uncore *uncore, 
i915_reg_t reg, u32 clr)
intel_uncore_rmw_fw(uncore, reg, clr, 0);
 }
 
-static void skip_context(struct i915_request *rq)
-{
-   struct intel_context *hung_ctx = rq->context;
-
-   list_for_each_entry_from_rcu(rq, &hung_ctx->timeline->requests, link) {
-   if (!i915_request_is_active(rq))
-   return;
-
-   if (rq->context == hung_ctx) {
-   i915_request_set_error_once(rq, -EIO);
-   __i915_request_skip(rq);
-   }
-   }
-}
-
 static void client_mark_guilty(struct i915_gem_context *ctx, bool banned)
 {
struct drm_i915_file_private *file_priv = ctx->file_priv;
@@ -88,10 +72,8 @@ static bool mark_guilty(struct i915_request *rq)
bool banned;
int i;
 
-   if (intel_context_is_closed(rq->context)) {
-   intel_context_set_banned(rq->context);
+   if (intel_context_is_closed(rq->context))
return true;
-   }
 
rcu_read_lock();
ctx = rcu_dereference(rq->context->gem_context);
@@ -123,11 +105,9 @@ static bool mark_guilty(struct i915_request *rq)
banned = !i915_gem_context_is_recoverable(ctx);
if (time_before(jiffies, prev_hang + CONTEXT_FAST_HANG_JIFFIES))
banned = true;
-   if (banned) {
+   if (banned)
drm_dbg(&ctx->i915->drm, "context %s: guilty %d, banned\n",
ctx->name, atomic_read(&ctx->guilty_count));
-   intel_context_set_banned(rq->context);
-   }
 
 

[Intel-gfx] [RFC PATCH 74/97] drm/i915/guc: Capture error state on context reset

2021-05-06 Thread Matthew Brost
We receive notification of an engine reset from GuC at its
completion, meaning GuC has potentially already cleared any HW state
we may have been interested in capturing. GuC also resumes scheduling
on the engine post-reset, as the resets are meant to be transparent,
further muddling our error state.

There is ongoing work to define an API for a GuC debug state dump. The
suggestion for now is to manually disable FW initiated resets in cases
where debug state is needed.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   | 20 +++
 drivers/gpu/drm/i915/gt/intel_context.h   |  3 ++
 drivers/gpu/drm/i915/gt/intel_engine.h| 21 ++-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 11 --
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  2 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 35 +--
 drivers/gpu/drm/i915/i915_gpu_error.c | 25 ++---
 7 files changed, 91 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 2f01437056a8..3fe7794b2bfd 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -514,6 +514,26 @@ struct i915_request *intel_context_create_request(struct 
intel_context *ce)
return rq;
 }
 
+struct i915_request *intel_context_find_active_request(struct intel_context 
*ce)
+{
+   struct i915_request *rq, *active = NULL;
+   unsigned long flags;
+
+   GEM_BUG_ON(!intel_engine_uses_guc(ce->engine));
+
+   spin_lock_irqsave(&ce->guc_active.lock, flags);
+   list_for_each_entry_reverse(rq, &ce->guc_active.requests,
+   sched.link) {
+   if (i915_request_completed(rq))
+   break;
+
+   active = rq;
+   }
+   spin_unlock_irqrestore(&ce->guc_active.lock, flags);
+
+   return active;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index 9b211ca5ecc7..d2b499ed8a05 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -195,6 +195,9 @@ int intel_context_prepare_remote_request(struct 
intel_context *ce,
 
 struct i915_request *intel_context_create_request(struct intel_context *ce);
 
+struct i915_request *
+intel_context_find_active_request(struct intel_context *ce);
+
 static inline struct intel_ring *__intel_context_ring_size(u64 sz)
 {
return u64_to_ptr(struct intel_ring, sz);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
b/drivers/gpu/drm/i915/gt/intel_engine.h
index 3321d0917a99..bb94963a9fa2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -242,7 +242,7 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs 
*engine,
   ktime_t *now);
 
 struct i915_request *
-intel_engine_find_active_request(struct intel_engine_cs *engine);
+intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine);
 
 u32 intel_engine_context_size(struct intel_gt *gt, u8 class);
 
@@ -316,4 +316,23 @@ intel_engine_get_sibling(struct intel_engine_cs *engine, 
unsigned int sibling)
return engine->cops->get_sibling(engine, sibling);
 }
 
+static inline void
+intel_engine_set_hung_context(struct intel_engine_cs *engine,
+ struct intel_context *ce)
+{
+   engine->hung_ce = ce;
+}
+
+static inline void
+intel_engine_clear_hung_context(struct intel_engine_cs *engine)
+{
+   intel_engine_set_hung_context(engine, NULL);
+}
+
+static inline struct intel_context *
+intel_engine_get_hung_context(struct intel_engine_cs *engine)
+{
+   return engine->hung_ce;
+}
+
 #endif /* _INTEL_RINGBUFFER_H_ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 10300db1c9a6..ad3987289f09 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1727,7 +1727,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
drm_printf(m, "\tRequests:\n");
 
spin_lock_irqsave(&engine->sched_engine->lock, flags);
-   rq = intel_engine_find_active_request(engine);
+   rq = intel_engine_execlist_find_hung_request(engine);
if (rq) {
struct intel_timeline *tl = get_timeline(rq);
 
@@ -1838,10 +1838,17 @@ static bool match_ring(struct i915_request *rq)
 }
 
 struct i915_request *
-intel_engine_find_active_request(struct intel_engine_cs *engine)
+intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine)
 {
struct i915_request *request, *active = NULL;
 
+   /*
+* This search does not work in GuC submission mode. However, the GuC
+* will report the hanging context directly to the driver itself. So
+* the driver should nev

[Intel-gfx] [RFC PATCH 84/97] drm/i915/guc: Don't allow requests not ready to consume all guc_ids

2021-05-06 Thread Matthew Brost
Add a heuristic which checks whether over half of the available guc_ids
are currently consumed by requests not ready to be submitted. If the
heuristic holds at request creation time (the normal guc_id allocation
point), force all submissions and guc_id allocations through the
tasklet.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context_types.h |  3 ++
 drivers/gpu/drm/i915/gt/intel_reset.c |  9 
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  1 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 53 +--
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 +
 5 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index a25ea8fe2029..998f3839411a 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -186,6 +186,9 @@ struct intel_context {
/* GuC lrc descriptor reference count */
atomic_t guc_id_ref;
 
+   /* GuC number of requests not ready */
+   atomic_t guc_num_rq_not_ready;
+
/*
 * GuC ID link - in list when unpinned but guc_id still valid in GuC
 */
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
b/drivers/gpu/drm/i915/gt/intel_reset.c
index 4347cc2dcea0..be25e39f0dd8 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -22,6 +22,7 @@
 #include "intel_reset.h"
 
 #include "uc/intel_guc.h"
+#include "uc/intel_guc_submission.h"
 
 #define RESET_MAX_RETRIES 3
 
@@ -776,6 +777,14 @@ static void nop_submit_request(struct i915_request 
*request)
 {
RQ_TRACE(request, "-EIO\n");
 
+   /*
+* XXX: Kinda ugly to check for GuC submission here but this function is
+* going away once we switch to the DRM scheduler so we can live with
+* this for now.
+*/
+   if (intel_engine_uses_guc(request->engine))
+   intel_guc_decr_num_rq_not_ready(request->context);
+
request = i915_request_mark_eio(request);
if (request) {
i915_request_submit(request);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index bd477209839b..26a0225f45e9 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -76,6 +76,7 @@ struct intel_guc {
struct ida guc_ids;
u32 num_guc_ids;
u32 max_guc_ids;
+   atomic_t num_guc_ids_not_ready;
struct list_head guc_id_list_no_ref;
struct list_head guc_id_list_unpinned;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 037a7ee4971b..aa5e608deed5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1323,6 +1323,41 @@ static inline void queue_request(struct 
i915_sched_engine *sched_engine,
kick_tasklet(&rq->engine->gt->uc.guc);
 }
 
+/* Macro to tweak heuristic, using a simple over 50% not ready for now */
+#define TOO_MANY_GUC_IDS_NOT_READY(avail, consumed) \
+   (consumed > avail / 2)
+static bool too_many_guc_ids_not_ready(struct intel_guc *guc,
+  struct intel_context *ce)
+{
+   u32 available_guc_ids, guc_ids_consumed;
+
+   available_guc_ids = guc->num_guc_ids;
+   guc_ids_consumed = atomic_read(&guc->num_guc_ids_not_ready);
+
+   if (TOO_MANY_GUC_IDS_NOT_READY(available_guc_ids, guc_ids_consumed)) {
+   set_and_update_guc_ids_exhausted(guc);
+   return true;
+   }
+
+   return false;
+}
+
+static void incr_num_rq_not_ready(struct intel_context *ce)
+{
+   struct intel_guc *guc = ce_to_guc(ce);
+
+   if (!atomic_fetch_add(1, &ce->guc_num_rq_not_ready))
+   atomic_inc(&guc->num_guc_ids_not_ready);
+}
+
+void intel_guc_decr_num_rq_not_ready(struct intel_context *ce)
+{
+   struct intel_guc *guc = ce_to_guc(ce);
+
+   if (atomic_fetch_add(-1, &ce->guc_num_rq_not_ready) == 1)
+   atomic_dec(&guc->num_guc_ids_not_ready);
+}
+
 static bool need_tasklet(struct intel_guc *guc, struct intel_context *ce)
 {
struct i915_sched_engine * const sched_engine =
@@ -1369,6 +1404,8 @@ static void guc_submit_request(struct i915_request *rq)
kick_tasklet(guc);
 
spin_unlock_irqrestore(&sched_engine->lock, flags);
+
+   intel_guc_decr_num_rq_not_ready(rq->context);
 }
 
 #define GUC_ID_START   64  /* First 64 guc_ids reserved */
@@ -2240,10 +2277,13 @@ static int guc_request_alloc(struct i915_request *rq)
GEM_BUG_ON(!intel_context_is_pinned(rq->context));
 
/*
-* guc_ids are exhausted, don't allocate one here, defer to submission
-* in the tasklet.
+* guc_ids are exhausted or a heuristic is met indicating too many
+* guc_ids are waiting on requests w

[Intel-gfx] [RFC PATCH 70/97] drm/i915/guc: Enable the timer expired interrupt for GuC

2021-05-06 Thread Matthew Brost
The GuC can implement execution quantums, detect hung contexts and
other such things, but it requires the timer expired interrupt to do so.

Signed-off-by: Matthew Brost 
CC: John Harrison 
---
 drivers/gpu/drm/i915/gt/intel_rps.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c 
b/drivers/gpu/drm/i915/gt/intel_rps.c
index 97cab1b99871..0bf86d54adb6 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1877,6 +1877,10 @@ void intel_rps_init(struct intel_rps *rps)
 
if (INTEL_GEN(i915) >= 8 && INTEL_GEN(i915) < 11)
rps->pm_intrmsk_mbz |= GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC;
+
+   /* GuC needs ARAT expired interrupt unmasked */
+   if (intel_uc_uses_guc_submission(&rps_to_gt(rps)->uc))
+   rps->pm_intrmsk_mbz |= ARAT_EXPIRED_INTRMSK;
 }
 
 void intel_rps_sanitize(struct intel_rps *rps)
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 46/97] drm/i915/guc: Implement GuC context operations for new interface

2021-05-06 Thread Matthew Brost
Implement GuC context operations, which include the GuC-specific pin,
unpin, and destroy operations.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |   5 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  22 +-
 drivers/gpu/drm/i915/gt/intel_lrc_reg.h   |   1 -
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  34 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |   7 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 663 --
 drivers/gpu/drm/i915/i915_reg.h   |   1 +
 drivers/gpu/drm/i915/i915_request.c   |   1 +
 8 files changed, 680 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 4033184f13b9..2b68af16222c 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -383,6 +383,11 @@ intel_context_init(struct intel_context *ce, struct 
intel_engine_cs *engine)
 
mutex_init(&ce->pin_mutex);
 
+   spin_lock_init(&ce->guc_state.lock);
+
+   ce->guc_id = GUC_INVALID_LRC_ID;
+   INIT_LIST_HEAD(&ce->guc_id_link);
+
i915_active_init(&ce->active,
 __intel_context_active, __intel_context_retire, 0);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index bb6fef7eae52..ce7c69b34cd1 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -95,6 +95,7 @@ struct intel_context {
 #define CONTEXT_BANNED 6
 #define CONTEXT_FORCE_SINGLE_SUBMISSION7
 #define CONTEXT_NOPREEMPT  8
+#define CONTEXT_LRCA_DIRTY 9
 
struct {
u64 timeout_us;
@@ -137,14 +138,29 @@ struct intel_context {
 
u8 wa_bb_page; /* if set, page num reserved for context workarounds */
 
+   struct {
+   /** lock: protects everything in guc_state */
+   spinlock_t lock;
+   /**
+* sched_state: scheduling state of this context using GuC
+* submission
+*/
+   u8 sched_state;
+   } guc_state;
+
/* GuC scheduling state that does not require a lock. */
atomic_t guc_sched_state_no_lock;
 
+   /* GuC lrc descriptor ID */
+   u16 guc_id;
+
+   /* GuC lrc descriptor reference count */
+   atomic_t guc_id_ref;
+
/*
-* GuC lrc descriptor ID - Not assigned in this patch but future patches
-* in the series will.
+* GuC ID link - in list when unpinned but guc_id still valid in GuC
 */
-   u16 guc_id;
+   struct list_head guc_id_link;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h 
b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
index 41e5350a7a05..49d4857ad9b7 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
+++ b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h
@@ -87,7 +87,6 @@
 #define GEN11_CSB_WRITE_PTR_MASK   (GEN11_CSB_PTR_MASK << 0)
 
 #define MAX_CONTEXT_HW_ID  (1 << 21) /* exclusive */
-#define MAX_GUC_CONTEXT_HW_ID  (1 << 20) /* exclusive */
 #define GEN11_MAX_CONTEXT_HW_ID(1 << 11) /* exclusive */
 /* in Gen12 ID 0x7FF is reserved to indicate idle */
 #define GEN12_MAX_CONTEXT_HW_ID(GEN11_MAX_CONTEXT_HW_ID - 1)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index d32866fe90ad..85ff32bfd074 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -45,6 +45,14 @@ struct intel_guc {
void (*disable)(struct intel_guc *guc);
} interrupts;
 
+   /*
+* contexts_lock protects the pool of free guc ids and a linked list of
+* guc ids available to be stolen
+*/
+   spinlock_t contexts_lock;
+   struct ida guc_ids;
+   struct list_head guc_id_list;
+
bool submission_selected;
 
struct i915_vma *ads_vma;
@@ -103,6 +111,29 @@ intel_guc_send_and_receive(struct intel_guc *guc, const 
u32 *action, u32 len,
 response_buf, response_buf_size, 0);
 }
 
+static inline int intel_guc_send_busy_loop(struct intel_guc* guc,
+  const u32 *action,
+  u32 len,
+  bool loop)
+{
+   int err;
+
+   /* No sleeping with spin locks, just busy loop */
+   might_sleep_if(loop && (!in_atomic() && !irqs_disabled()));
+
+retry:
+   err = intel_guc_send_nb(guc, action, len);
+   if (unlikely(err == -EBUSY && loop)) {
+   if (likely(!in_atomic() && !irqs_disabled()))
+   cond_resched();
+   else
+   cpu_relax();
+   goto retry;
+   }
+
+   return err;
+}
+
 static inline v

[Intel-gfx] [RFC PATCH 75/97] drm/i915/guc: Fix for error capture after full GPU reset with GuC

2021-05-06 Thread Matthew Brost
From: John Harrison 

In the case of a full GPU reset (e.g. because GuC has died or because
GuC's hang detection has been disabled), the driver can't rely on GuC
reporting the guilty context. Instead, the driver needs to scan all
active contexts and find one that is currently executing, as per the
execlist mode behaviour. In GuC mode, this scan is different to
execlist mode as the active request list is handled very differently.

Similarly, the request state dump in debugfs needs to be handled
differently when in GuC submission mode.

Also refactored some of the request scanning code to avoid duplication
across the multiple code paths that are now replicating it.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
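Under GuC submission the scan reduces to walking the contexts registered
with the GuC rather than the execlist ports. A simplified sketch of that
walk (the helper name is illustrative, and locking details are omitted),
built on the helpers added earlier in the series:

	static void guc_find_hung_context(struct intel_engine_cs *engine)
	{
		struct intel_guc *guc = &engine->gt->uc.guc;
		struct intel_context *ce;
		unsigned long index;

		xa_for_each(&guc->context_lookup, index, ce) {
			struct i915_request *rq;

			if (!intel_context_is_pinned(ce))
				continue;

			/* Only contexts that can run on this engine matter */
			if (!(ce->engine->mask & engine->mask))
				continue;

			rq = intel_context_find_active_request(ce);
			if (rq) {
				intel_engine_set_hung_context(engine, ce);
				break;
			}
		}
	}
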
 drivers/gpu/drm/i915/gt/intel_engine.h|   3 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 139 --
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   8 +
 drivers/gpu/drm/i915/gt/intel_reset.c |   2 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   2 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  67 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |   3 +
 drivers/gpu/drm/i915/i915_request.c   |  41 ++
 drivers/gpu/drm/i915/i915_request.h   |  11 ++
 9 files changed, 229 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
b/drivers/gpu/drm/i915/gt/intel_engine.h
index bb94963a9fa2..2e69be3bb1cf 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -237,6 +237,9 @@ __printf(3, 4)
 void intel_engine_dump(struct intel_engine_cs *engine,
   struct drm_printer *m,
   const char *header, ...);
+void intel_engine_dump_active_requests(struct list_head *requests,
+  struct i915_request *hung_rq,
+  struct drm_printer *m);
 
 ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine,
   ktime_t *now);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index ad3987289f09..e34a61600c8c 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1680,6 +1680,97 @@ static void print_properties(struct intel_engine_cs 
*engine,
   read_ul(&engine->defaults, p->offset));
 }
 
+static void engine_dump_request(struct i915_request *rq, struct drm_printer 
*m, const char *msg)
+{
+   struct intel_timeline *tl = get_timeline(rq);
+
+   i915_request_show(m, rq, msg, 0);
+
+   drm_printf(m, "\t\tring->start:  0x%08x\n",
+  i915_ggtt_offset(rq->ring->vma));
+   drm_printf(m, "\t\tring->head:   0x%08x\n",
+  rq->ring->head);
+   drm_printf(m, "\t\tring->tail:   0x%08x\n",
+  rq->ring->tail);
+   drm_printf(m, "\t\tring->emit:   0x%08x\n",
+  rq->ring->emit);
+   drm_printf(m, "\t\tring->space:  0x%08x\n",
+  rq->ring->space);
+
+   if (tl) {
+   drm_printf(m, "\t\tring->hwsp:   0x%08x\n",
+  tl->hwsp_offset);
+   intel_timeline_put(tl);
+   }
+
+   print_request_ring(m, rq);
+
+   if (rq->context->lrc_reg_state) {
+   drm_printf(m, "Logical Ring Context:\n");
+   hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE);
+   }
+}
+
+void intel_engine_dump_active_requests(struct list_head *requests,
+  struct i915_request *hung_rq,
+  struct drm_printer *m)
+{
+   struct i915_request *rq;
+   const char *msg;
+   enum i915_request_state state;
+
+   list_for_each_entry(rq, requests, sched.link) {
+   if (rq == hung_rq)
+   continue;
+
+   state = i915_test_request_state(rq);
+   if (state < I915_REQUEST_QUEUED)
+   continue;
+
+   if (state == I915_REQUEST_ACTIVE)
+   msg = "\t\tactive on engine";
+   else
+   msg = "\t\tactive in queue";
+
+   engine_dump_request(rq, m, msg);
+   }
+}
+
+static void engine_dump_active_requests(struct intel_engine_cs *engine, struct 
drm_printer *m)
+{
+   struct i915_request *hung_rq = NULL;
+   struct intel_context *ce;
+   bool guc;
+
+   /*
+* No need for an engine->irq_seqno_barrier() before the seqno reads.
+* The GPU is still running so requests are still executing and any
+* hardware reads will be out of date by the time they are reported.
+* But the intention here is just to report an instantaneous snapshot
+* so that's fine.
+*/
+   lockdep_assert_held(&engine->sched_engine->lock);
+
+   drm_printf(m, "\tRequests:\n");
+
+   guc = intel_uc_uses_guc_

[Intel-gfx] [RFC PATCH 93/97] drm/i915/guc: Take engine PM when a context is pinned with GuC submission

2021-05-06 Thread Matthew Brost
Take a PM reference to prevent intel_gt_wait_for_idle from
short-circuiting while scheduling of a user context could still be
enabled.

Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 36 +--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 25c77084c3a0..dd4baaad679f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2026,7 +2026,12 @@ static int guc_context_pre_pin(struct intel_context *ce,
 
 static int guc_context_pin(struct intel_context *ce, void *vaddr)
 {
-   return __guc_context_pin(ce, ce->engine, vaddr);
+   int ret = __guc_context_pin(ce, ce->engine, vaddr);
+
+   if (likely(!ret && !intel_context_is_barrier(ce)))
+   intel_engine_pm_get(ce->engine);
+
+   return ret;
 }
 
 static void guc_context_unpin(struct intel_context *ce)
@@ -2037,6 +2042,9 @@ static void guc_context_unpin(struct intel_context *ce)
 
unpin_guc_id(guc, ce, true);
lrc_unpin(ce);
+
+   if (likely(!intel_context_is_barrier(ce)))
+   intel_engine_pm_put(ce->engine);
 }
 
 static void guc_context_post_unpin(struct intel_context *ce)
@@ -2922,8 +2930,30 @@ static int guc_virtual_context_pre_pin(struct 
intel_context *ce,
 static int guc_virtual_context_pin(struct intel_context *ce, void *vaddr)
 {
struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0);
+   int ret = __guc_context_pin(ce, engine, vaddr);
+   intel_engine_mask_t tmp, mask = ce->engine->mask;
+
+   if (likely(!ret))
+   for_each_engine_masked(engine, ce->engine->gt, mask, tmp)
+   intel_engine_pm_get(engine);
+
+   return ret;
+}
+
+static void guc_virtual_context_unpin(struct intel_context *ce)
+{
+   intel_engine_mask_t tmp, mask = ce->engine->mask;
+   struct intel_engine_cs *engine;
+   struct intel_guc *guc = ce_to_guc(ce);
 
-   return __guc_context_pin(ce, engine, vaddr);
+   GEM_BUG_ON(context_enabled(ce));
+   GEM_BUG_ON(intel_context_is_barrier(ce));
+
+   unpin_guc_id(guc, ce, true);
+   lrc_unpin(ce);
+
+   for_each_engine_masked(engine, ce->engine->gt, mask, tmp)
+   intel_engine_pm_put(engine);
 }
 
 static void guc_virtual_context_enter(struct intel_context *ce)
@@ -2972,7 +3002,7 @@ static const struct intel_context_ops 
virtual_guc_context_ops = {
 
.pre_pin = guc_virtual_context_pre_pin,
.pin = guc_virtual_context_pin,
-   .unpin = guc_context_unpin,
+   .unpin = guc_virtual_context_unpin,
.post_unpin = guc_context_post_unpin,
 
.ban = guc_context_ban,
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 45/97] drm/i915/guc: Add bypass tasklet submission path to GuC

2021-05-06 Thread Matthew Brost
Add a bypass tasklet submission path to GuC. The tasklet is only used
if the H2G channel has backpressure.

Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 37 +++
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 0955a8b00ee8..2fd83562c1d1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -171,6 +171,12 @@ static int guc_add_request(struct intel_guc *guc, struct 
i915_request *rq)
return err;
 }
 
+static inline void guc_set_lrc_tail(struct i915_request *rq)
+{
+   rq->context->lrc_reg_state[CTX_RING_TAIL] =
+   intel_ring_set_tail(rq->ring, rq->tail);
+}
+
 static inline int rq_prio(const struct i915_request *rq)
 {
return rq->sched.attr.priority;
@@ -214,8 +220,7 @@ static int guc_dequeue_one_context(struct intel_guc *guc)
}
 done:
if (submit) {
-   last->context->lrc_reg_state[CTX_RING_TAIL] =
-   intel_ring_set_tail(last->ring, last->tail);
+   guc_set_lrc_tail(last);
 resubmit:
/*
 * We only check for -EBUSY here even though it is possible for
@@ -499,20 +504,36 @@ static inline void queue_request(struct i915_sched_engine 
*sched_engine,
set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
 }
 
+static int guc_bypass_tasklet_submit(struct intel_guc *guc,
+struct i915_request *rq)
+{
+   int ret;
+
+   __i915_request_submit(rq);
+
+   trace_i915_request_in(rq, 0);
+
+   guc_set_lrc_tail(rq);
+   ret = guc_add_request(guc, rq);
+   if (ret == -EBUSY)
+   guc->stalled_request = rq;
+
+   return ret;
+}
+
 static void guc_submit_request(struct i915_request *rq)
 {
struct i915_sched_engine *sched_engine = rq->engine->sched_engine;
+   struct intel_guc *guc = &rq->engine->gt->uc.guc;
unsigned long flags;
 
/* Will be called from irq-context when using foreign fences. */
spin_lock_irqsave(&sched_engine->lock, flags);
 
-   queue_request(sched_engine, rq, rq_prio(rq));
-
-   GEM_BUG_ON(i915_sched_engine_is_empty(sched_engine));
-   GEM_BUG_ON(list_empty(&rq->sched.link));
-
-   i915_sched_engine_hi_kick(sched_engine);
+   if (guc->stalled_request || !i915_sched_engine_is_empty(sched_engine))
+   queue_request(sched_engine, rq, rq_prio(rq));
+   else if (guc_bypass_tasklet_submit(guc, rq) == -EBUSY)
+   i915_sched_engine_hi_kick(sched_engine);
 
spin_unlock_irqrestore(&sched_engine->lock, flags);
 }
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 82/97] drm/i915/guc: Connect the number of guc_ids to debugfs

2021-05-06 Thread Matthew Brost
For testing purposes it may make sense to reduce the number of guc_ids
available to be allocated. Add debugfs support for setting the number of
guc_ids.

Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c| 31 +++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  3 +-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
index 9a03ff56e654..474c96fc16ef 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c
@@ -50,11 +50,42 @@ static int guc_registered_contexts_show(struct seq_file *m, 
void *data)
 }
 DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts);
 
+static int guc_num_id_get(void *data, u64 *val)
+{
+   struct intel_guc *guc = data;
+
+   if (!intel_guc_submission_is_used(guc))
+   return -ENODEV;
+
+   *val = guc->num_guc_ids;
+
+   return 0;
+}
+
+static int guc_num_id_set(void *data, u64 val)
+{
+   struct intel_guc *guc = data;
+
+   if (!intel_guc_submission_is_used(guc))
+   return -ENODEV;
+
+   if (val > guc->max_guc_ids)
+   val = guc->max_guc_ids;
+   else if (val < 256)
+   val = 256;
+
+   guc->num_guc_ids = val;
+
+   return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(guc_num_id_fops, guc_num_id_get, guc_num_id_set, 
"%lld\n");
+
 void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root)
 {
static const struct debugfs_gt_file files[] = {
{ "guc_info", &guc_info_fops, NULL },
{ "guc_registered_contexts", &guc_registered_contexts_fops, 
NULL },
+   { "guc_num_id", &guc_num_id_fops, NULL },
};
 
if (!intel_guc_is_supported(guc))
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 8f40e534bc81..3c73c2ca668e 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2153,7 +2153,8 @@ g2h_context_lookup(struct intel_guc *guc, u32 desc_idx)
 
if (unlikely(desc_idx >= guc->max_guc_ids)) {
drm_dbg(&guc_to_gt(guc)->i915->drm,
-   "Invalid desc_idx %u", desc_idx);
+   "Invalid desc_idx %u, max %u",
+   desc_idx, guc->max_guc_ids);
return NULL;
}
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 33/97] drm/i915: Engine relative MMIO

2021-05-06 Thread Matthew Brost
From: John Harrison 

With virtual engines, it is no longer possible to know which specific
physical engine a given request will be executed on at the time that
request is generated. This means that the request itself must be engine
agnostic - any direct register writes must be relative to the engine
and not absolute addresses.

The LRI command has support for engine relative addressing. However,
the mechanism is not transparent to the driver. The scheme for Gen11
(MI_LRI_ADD_CS_MMIO_START) requires the LRI address to have no
absolute engine base component. The hardware then adds on the correct
engine offset at execution time.

Due to the non-trivial and differing schemes on different hardware, it
is not possible to simply update the code that creates the LRI
commands to set a remap flag and let the hardware get on with it.
Instead, this patch adds function wrappers for generating the LRI
command itself and then for constructing the correct address to use
with the LRI.

Bspec: 45606
Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
CC: Rodrigo Vivi 
CC: Tvrtko Ursulin 
CC: Chris P Wilson 
CC: Daniele Ceraolo Spurio 
---
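As a reduced example of what the wrappers buy us (the helper name is
hypothetical; the register is the same PDP write used in the ppgtt
update below):

	static u32 *emit_pdp_udw(struct intel_engine_cs *engine, u32 *cs,
				 dma_addr_t pd_daddr)
	{
		u32 base = engine->lri_mmio_base; /* 0 when LRI is engine-relative */

		*cs++ = MI_LOAD_REGISTER_IMM(1) | engine->lri_cmd_mode;
		*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_UDW(base, 0));
		*cs++ = upper_32_bits(pd_daddr);

		return cs;
	}

On Gen11+ (other than the copy engine) lri_cmd_mode is MI_LRI_LRM_CS_MMIO
and lri_mmio_base is 0, so the hardware adds the executing engine's base
at run time; on older parts the absolute base is folded back in and the
emitted command is unchanged from today.
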
 drivers/gpu/drm/i915/gem/i915_gem_context.c  |  7 +++---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c| 25 
 drivers/gpu/drm/i915/gt/intel_engine_types.h |  3 +++
 drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  5 
 drivers/gpu/drm/i915/i915_perf.c |  6 +
 5 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 188dee13e017..993faa213b41 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -1211,7 +1211,7 @@ static int emit_ppgtt_update(struct i915_request *rq, 
void *data)
 {
struct i915_address_space *vm = rq->context->vm;
struct intel_engine_cs *engine = rq->engine;
-   u32 base = engine->mmio_base;
+   u32 base = engine->lri_mmio_base;
u32 *cs;
int i;
 
@@ -1223,7 +1223,7 @@ static int emit_ppgtt_update(struct i915_request *rq, 
void *data)
if (IS_ERR(cs))
return PTR_ERR(cs);
 
-   *cs++ = MI_LOAD_REGISTER_IMM(2);
+   *cs++ = MI_LOAD_REGISTER_IMM(2) | engine->lri_cmd_mode;
 
*cs++ = i915_mmio_reg_offset(GEN8_RING_PDP_UDW(base, 0));
*cs++ = upper_32_bits(pd_daddr);
@@ -1245,7 +1245,8 @@ static int emit_ppgtt_update(struct i915_request *rq, 
void *data)
if (IS_ERR(cs))
return PTR_ERR(cs);
 
-   *cs++ = MI_LOAD_REGISTER_IMM(2 * GEN8_3LVL_PDPES) | 
MI_LRI_FORCE_POSTED;
+   *cs++ = MI_LOAD_REGISTER_IMM(2 * GEN8_3LVL_PDPES) |
+   MI_LRI_FORCE_POSTED | engine->lri_cmd_mode;
for (i = GEN8_3LVL_PDPES; i--; ) {
const dma_addr_t pd_daddr = 
i915_page_dir_dma_addr(ppgtt, i);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index ec82a7ec0c8d..c88b792c1ab5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -16,6 +16,7 @@
 #include "intel_engine_pm.h"
 #include "intel_engine_user.h"
 #include "intel_execlists_submission.h"
+#include "intel_gpu_commands.h"
 #include "intel_gt.h"
 #include "intel_gt_requests.h"
 #include "intel_gt_pm.h"
@@ -223,6 +224,28 @@ static u32 __engine_mmio_base(struct drm_i915_private 
*i915,
return bases[i].base;
 }
 
+static bool i915_engine_has_relative_lri(const struct intel_engine_cs *engine)
+{
+   if (INTEL_GEN(engine->i915) < 11)
+   return false;
+
+   if (engine->class == COPY_ENGINE_CLASS)
+   return false;
+
+   return true;
+}
+
+static void lri_init(struct intel_engine_cs *engine)
+{
+   if (i915_engine_has_relative_lri(engine)) {
+   engine->lri_cmd_mode = MI_LRI_LRM_CS_MMIO;
+   engine->lri_mmio_base = 0;
+   } else {
+   engine->lri_cmd_mode = 0;
+   engine->lri_mmio_base = engine->mmio_base;
+   }
+}
+
 static void __sprint_engine_name(struct intel_engine_cs *engine)
 {
/*
@@ -327,6 +350,8 @@ static int intel_engine_setup(struct intel_gt *gt, enum 
intel_engine_id id)
if (engine->context_size)
DRIVER_CAPS(i915)->has_logical_contexts = true;
 
+   lri_init(engine);
+
ewma__engine_latency_init(&engine->latency);
seqcount_init(&engine->stats.lock);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 93aa22680db0..86302e6d86b2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -281,6 +281,9 @@ struct intel_engine_cs {
u32 context_size;
u32 mmio_base;
 
+   u32 lri_mmio_base;
+   u32 lri_cmd_mode

[Intel-gfx] [RFC PATCH 60/97] drm/i915: Track 'serial' counts for virtual engines

2021-05-06 Thread Matthew Brost
From: John Harrison 

The serial number tracking of engines happens at the backend of
request submission and was expecting to only be given physical
engines. However, in GuC submission mode, the decomposition of virtual
to physical engines does not happen in i915. Instead, requests are
submitted to their virtual engine mask all the way through to the
hardware (i.e. to GuC). This would mean that the heart beat code
thinks the physical engines are idle due to the serial number not
incrementing.

This patch updates the tracking to decompose virtual engines into
their physical constituents and tracks the request against each. This
is not entirely accurate as the GuC will only be issuing the request
to one physical engine. However, it is the best that i915 can do given
that it has no knowledge of the GuC's scheduling decisions.
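
For illustration only, the per-physical-engine bump could look roughly
like the sketch below; guc_bump_serial() is a hypothetical name here and
the real hook is the bump_serial vfunc added by this patch:

  /* Sketch: walk every physical engine backing a (possibly virtual) mask */
  static void guc_bump_serial(struct intel_engine_cs *engine)
  {
          struct intel_engine_cs *e;
          intel_engine_mask_t tmp;

          for_each_engine_masked(e, engine->gt, engine->mask, tmp)
                  e->serial++;
  }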

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_engine_types.h |  2 ++
 .../gpu/drm/i915/gt/intel_execlists_submission.c |  6 ++
 drivers/gpu/drm/i915/gt/intel_ring_submission.c  |  6 ++
 drivers/gpu/drm/i915/gt/mock_engine.c|  6 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c| 16 
 drivers/gpu/drm/i915/i915_request.c  |  4 +++-
 6 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 86302e6d86b2..e2b5cda6dbc4 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -389,6 +389,8 @@ struct intel_engine_cs {
void(*park)(struct intel_engine_cs *engine);
void(*unpark)(struct intel_engine_cs *engine);
 
+   void(*bump_serial)(struct intel_engine_cs *engine);
+
void(*set_default_submission)(struct intel_engine_cs 
*engine);
 
const struct intel_context_ops *cops;
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index ae12d7f19ecd..02880ea5d693 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -3199,6 +3199,11 @@ static void execlists_release(struct intel_engine_cs 
*engine)
lrc_fini_wa_ctx(engine);
 }
 
+static void execlist_bump_serial(struct intel_engine_cs *engine)
+{
+   engine->serial++;
+}
+
 static void
 logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 {
@@ -3208,6 +3213,7 @@ logical_ring_default_vfuncs(struct intel_engine_cs 
*engine)
 
engine->cops = &execlists_context_ops;
engine->request_alloc = execlists_request_alloc;
+   engine->bump_serial = execlist_bump_serial;
 
engine->reset.prepare = execlists_reset_prepare;
engine->reset.rewind = execlists_reset_rewind;
diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c 
b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 14aa31879a37..39dd7c4ed0a9 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -1045,6 +1045,11 @@ static void setup_irq(struct intel_engine_cs *engine)
}
 }
 
+static void ring_bump_serial(struct intel_engine_cs *engine)
+{
+   engine->serial++;
+}
+
 static void setup_common(struct intel_engine_cs *engine)
 {
struct drm_i915_private *i915 = engine->i915;
@@ -1064,6 +1069,7 @@ static void setup_common(struct intel_engine_cs *engine)
 
engine->cops = &ring_context_ops;
engine->request_alloc = ring_request_alloc;
+   engine->bump_serial = ring_bump_serial;
 
/*
 * Using a global execution timeline; the previous final breadcrumb is
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c 
b/drivers/gpu/drm/i915/gt/mock_engine.c
index bd005c1b6fd5..97b10fd60b55 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -292,6 +292,11 @@ static void mock_engine_release(struct intel_engine_cs 
*engine)
intel_engine_fini_retire(engine);
 }
 
+static void mock_bump_serial(struct intel_engine_cs *engine)
+{
+   engine->serial++;
+}
+
 struct intel_engine_cs *mock_engine(struct drm_i915_private *i915,
const char *name,
int id)
@@ -318,6 +323,7 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private 
*i915,
 
engine->base.cops = &mock_context_ops;
engine->base.request_alloc = mock_request_alloc;
+   engine->base.bump_serial = mock_bump_serial;
engine->base.emit_flush = mock_emit_flush;
engine->base.emit_fini_breadcrumb = mock_emit_breadcrumb;
engine->base.submit_request = mock_submit_request;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index dc79d287c50a..f0e5731bcef6 100644
--- a/drivers/gpu/drm/i915/gt/u

[Intel-gfx] [RFC PATCH 44/97] drm/i915/guc: Implement GuC submission tasklet

2021-05-06 Thread Matthew Brost
Implement the GuC submission tasklet for the new interface. The new GuC
interface uses H2G messages to submit contexts to the GuC. Since H2G uses
a single channel, a single tasklet is used for the submission path. As
such, a global i915_sched_engine has been added to the GuC structure to
leverage the existing scheduling code.

Also the per-engine interrupt handler has been updated to disable
rescheduling of the physical engine tasklet when using GuC scheduling,
as the physical engine tasklet is no longer used.

In this patch the guc_id field has been added to intel_context but is not
yet assigned. Patches later in the series will assign this value.
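
For illustration, the single-channel submit path has roughly the shape
below (a sketch only: the tasklet prototype is simplified and
guc_dequeue_one_context() is assumed from later patches in the series):

  /* Sketch: one tasklet drains the scheduler, each dequeue emits one H2G */
  static void guc_submission_tasklet(unsigned long data)
  {
          struct intel_guc *guc = (struct intel_guc *)data;

          while (guc_dequeue_one_context(guc))
                  ;
  }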

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context_types.h |   9 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   4 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 233 +-
 3 files changed, 127 insertions(+), 119 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index ed8c447a7346..bb6fef7eae52 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -136,6 +136,15 @@ struct intel_context {
struct intel_sseu sseu;
 
u8 wa_bb_page; /* if set, page num reserved for context workarounds */
+
+   /* GuC scheduling state that does not require a lock. */
+   atomic_t guc_sched_state_no_lock;
+
+   /*
+* GuC lrc descriptor ID - Not assigned in this patch but future patches
+* in the series will.
+*/
+   u16 guc_id;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 2eb6c497e43c..d32866fe90ad 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -30,6 +30,10 @@ struct intel_guc {
struct intel_guc_log log;
struct intel_guc_ct ct;
 
+   /* Global engine used to submit requests to GuC */
+   struct i915_sched_engine *sched_engine;
+   struct i915_request *stalled_request;
+
/* intel_guc_recv interrupt related state */
spinlock_t irq_lock;
unsigned int msg_enabled_mask;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index c2b6d27404b7..0955a8b00ee8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -60,6 +60,30 @@
 
 #define GUC_REQUEST_SIZE 64 /* bytes */
 
+/*
+ * Below is a set of functions which control the GuC scheduling state which do
+ * not require a lock as all state transitions are mutually exclusive. i.e. It
+ * is not possible for the context pinning code and submission, for the same
+ * context, to be executing simultaneously.
+ */
+#define SCHED_STATE_NO_LOCK_ENABLEDBIT(0)
+static inline bool context_enabled(struct intel_context *ce)
+{
+   return (atomic_read(&ce->guc_sched_state_no_lock) &
+   SCHED_STATE_NO_LOCK_ENABLED);
+}
+
+static inline void set_context_enabled(struct intel_context *ce)
+{
+   atomic_or(SCHED_STATE_NO_LOCK_ENABLED, &ce->guc_sched_state_no_lock);
+}
+
+static inline void clr_context_enabled(struct intel_context *ce)
+{
+   atomic_and((u32)~SCHED_STATE_NO_LOCK_ENABLED,
+  &ce->guc_sched_state_no_lock);
+}
+
 static inline struct i915_priolist *to_priolist(struct rb_node *rb)
 {
return rb_entry(rb, struct i915_priolist, node);
@@ -122,37 +146,29 @@ static inline void set_lrc_desc_registered(struct 
intel_guc *guc, u32 id,
xa_store_irq(&guc->context_lookup, id, ce, GFP_ATOMIC);
 }
 
-static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
+static int guc_add_request(struct intel_guc *guc, struct i915_request *rq)
 {
-   /* Leaving stub as this function will be used in future patches */
-}
+   int err;
+   struct intel_context *ce = rq->context;
+   u32 action[3];
+   int len = 0;
+   bool enabled = context_enabled(ce);
 
-/*
- * When we're doing submissions using regular execlists backend, writing to
- * ELSP from CPU side is enough to make sure that writes to ringbuffer pages
- * pinned in mappable aperture portion of GGTT are visible to command streamer.
- * Writes done by GuC on our behalf are not guaranteeing such ordering,
- * therefore, to ensure the flush, we're issuing a POSTING READ.
- */
-static void flush_ggtt_writes(struct i915_vma *vma)
-{
-   if (i915_vma_is_map_and_fenceable(vma))
-   intel_uncore_posting_read_fw(vma->vm->gt->uncore,
-GUC_STATUS);
-}
+   if (!enabled) {
+   action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET;
+   action[len++] = ce->guc_id;
+   action[len++] = GUC_CONTEXT_ENABLE;
+   } else {
+   action[len++] = INTEL_GUC_ACTION

[Intel-gfx] [RFC PATCH 22/97] drm/i915/guc: Update CTB response status

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

The format of the STATUS dword in the CTB response message now follows
the definition of the HXG header. Update our code accordingly and remove
any obsolete legacy definitions.

GuC: 55.0.0
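
For reference, with the HXG layout a success check reduces to something
like this sketch (ct_status_is_success() itself is hypothetical; the
field names match the diff below):

  static bool ct_status_is_success(u32 status)
  {
          /* ORIGIN must be GuC and TYPE must be RESPONSE_SUCCESS */
          return FIELD_GET(GUC_HXG_MSG_0_ORIGIN, status) == GUC_HXG_ORIGIN_GUC &&
                 FIELD_GET(GUC_HXG_MSG_0_TYPE, status) ==
                 GUC_HXG_TYPE_RESPONSE_SUCCESS;
  }
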
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Piotr Piórkowski 
---
 drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h |  1 -
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c   | 12 ++--
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 17 -
 3 files changed, 6 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h
index 488b6061ee89..2030896857d5 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h
@@ -7,7 +7,6 @@
 #define _ABI_GUC_ERRORS_ABI_H
 
 enum intel_guc_response_status {
-   INTEL_GUC_RESPONSE_STATUS_SUCCESS = 0x0,
INTEL_GUC_RESPONSE_STATUS_GENERIC_FAIL = 0xF000,
 };
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index a174978c6a27..1afdeac683b5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -455,7 +455,7 @@ static int wait_for_ct_request_update(struct ct_request 
*req, u32 *status)
 */
timeout = max(10, CONFIG_DRM_I915_HEARTBEAT_INTERVAL);
 
-#define done INTEL_GUC_MSG_IS_RESPONSE(READ_ONCE(req->status))
+#define done (FIELD_GET(GUC_HXG_MSG_0_ORIGIN, READ_ONCE(req->status)) == 
GUC_HXG_ORIGIN_GUC)
err = wait_for_us(done, 10);
if (err)
err = wait_for(done, timeout);
@@ -510,21 +510,21 @@ static int ct_send(struct intel_guc_ct *ct,
if (unlikely(err))
goto unlink;
 
-   if (!INTEL_GUC_MSG_IS_RESPONSE_SUCCESS(*status)) {
+   if (FIELD_GET(GUC_HXG_MSG_0_TYPE, *status) != 
GUC_HXG_TYPE_RESPONSE_SUCCESS) {
err = -EIO;
goto unlink;
}
 
if (response_buf) {
/* There shall be no data in the status */
-   WARN_ON(INTEL_GUC_MSG_TO_DATA(request.status));
+   WARN_ON(FIELD_GET(GUC_HXG_RESPONSE_MSG_0_DATA0, 
request.status));
/* Return actual response len */
err = request.response_len;
} else {
/* There shall be no response payload */
WARN_ON(request.response_len);
/* Return data decoded from the status dword */
-   err = INTEL_GUC_MSG_TO_DATA(*status);
+   err = FIELD_GET(GUC_HXG_RESPONSE_MSG_0_DATA0, *status);
}
 
 unlink:
@@ -719,8 +719,8 @@ static int ct_handle_response(struct intel_guc_ct *ct, 
struct ct_incoming_msg *r
status = response->msg[2];
datalen = len - 2;
 
-   /* Format of the status follows RESPONSE message */
-   if (unlikely(!INTEL_GUC_MSG_IS_RESPONSE(status))) {
+   /* Format of the status dword follows HXG header */
+   if (unlikely(FIELD_GET(GUC_HXG_MSG_0_ORIGIN, status) != 
GUC_HXG_ORIGIN_GUC)) {
CT_ERROR(ct, "Corrupted response (status %#x)\n", status);
return -EPROTO;
}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 9bf35240e723..d445f6b77db4 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -388,23 +388,6 @@ struct guc_shared_ctx_data {
struct guc_ctx_report preempt_ctx_report[GUC_MAX_ENGINES_NUM];
 } __packed;
 
-#define __INTEL_GUC_MSG_GET(T, m) \
-   (((m) & INTEL_GUC_MSG_ ## T ## _MASK) >> INTEL_GUC_MSG_ ## T ## _SHIFT)
-#define INTEL_GUC_MSG_TO_TYPE(m)   __INTEL_GUC_MSG_GET(TYPE, m)
-#define INTEL_GUC_MSG_TO_DATA(m)   __INTEL_GUC_MSG_GET(DATA, m)
-#define INTEL_GUC_MSG_TO_CODE(m)   __INTEL_GUC_MSG_GET(CODE, m)
-
-#define __INTEL_GUC_MSG_TYPE_IS(T, m) \
-   (INTEL_GUC_MSG_TO_TYPE(m) == INTEL_GUC_MSG_TYPE_ ## T)
-#define INTEL_GUC_MSG_IS_REQUEST(m)__INTEL_GUC_MSG_TYPE_IS(REQUEST, m)
-#define INTEL_GUC_MSG_IS_RESPONSE(m)   __INTEL_GUC_MSG_TYPE_IS(RESPONSE, m)
-
-#define INTEL_GUC_MSG_IS_RESPONSE_SUCCESS(m) \
-(typecheck(u32, (m)) && \
- ((m) & (INTEL_GUC_MSG_TYPE_MASK | INTEL_GUC_MSG_CODE_MASK)) == \
- ((INTEL_GUC_MSG_TYPE_RESPONSE << INTEL_GUC_MSG_TYPE_SHIFT) | \
-  (INTEL_GUC_RESPONSE_STATUS_SUCCESS << INTEL_GUC_MSG_CODE_SHIFT)))
-
 /* This action will be programmed in C1BC - SOFT_SCRATCH_15_REG */
 enum intel_guc_recv_message {
INTEL_GUC_RECV_MSG_CRASH_DUMP_POSTED = BIT(1),
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 20/97] drm/i915/guc: Introduce unified HXG messages

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

New GuC firmware will unify the format of MMIO and CTB H2G messages.
Introduce their definitions now to allow a gradual transition of
our code to the new format.
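
As a usage sketch, a host-originated request header would be composed
along these lines (FIELD_PREP from linux/bitfield.h; the REQUEST_MSG
field names are assumptions based on the tables below):

  static u32 hxg_request_header(u32 data0, u32 action)
  {
          return FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
                 FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
                 FIELD_PREP(GUC_HXG_REQUEST_MSG_0_DATA0, data0) |
                 FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION, action);
  }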

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Michał Winiarski 
---
 .../gpu/drm/i915/gt/uc/abi/guc_messages_abi.h | 226 ++
 1 file changed, 226 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h
index 775e21f3058c..1c264819aa03 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h
@@ -6,6 +6,232 @@
 #ifndef _ABI_GUC_MESSAGES_ABI_H
 #define _ABI_GUC_MESSAGES_ABI_H
 
+/**
+ * DOC: HXG Message
+ *
+ * All messages exchanged with GuC are defined using 32 bit dwords.
+ * First dword is treated as a message header. Remaining dwords are optional.
+ *
+ * .. _HXG Message:
+ *
+ *  
+---+---+--+
+ *  |   | Bits  | Description  
|
+ *  
+===+===+==+
+ *  |   |   |  
|
+ *  | 0 |31 | **ORIGIN** - originator of the message   
|
+ *  |   |   |   - _`GUC_HXG_ORIGIN_HOST` = 0   
|
+ *  |   |   |   - _`GUC_HXG_ORIGIN_GUC` = 1
|
+ *  |   |   |  
|
+ *  |   
+---+--+
+ *  |   | 30:28 | **TYPE** - message type  
|
+ *  |   |   |   - _`GUC_HXG_TYPE_REQUEST` = 0  
|
+ *  |   |   |   - _`GUC_HXG_TYPE_EVENT` = 1
|
+ *  |   |   |   - _`GUC_HXG_TYPE_NO_RESPONSE_BUSY` = 3 
|
+ *  |   |   |   - _`GUC_HXG_TYPE_NO_RESPONSE_RETRY` = 5
|
+ *  |   |   |   - _`GUC_HXG_TYPE_RESPONSE_FAILURE` = 6 
|
+ *  |   |   |   - _`GUC_HXG_TYPE_RESPONSE_SUCCESS` = 7 
|
+ *  |   
+---+--+
+ *  |   |  27:0 | **AUX** - auxiliary data (depends TYPE)  
|
+ *  
+---+---+--+
+ *  | 1 |  31:0 | optional payload (depends on TYPE)   
|
+ *  +---+---+  
|
+ *  |...|   |  
|
+ *  +---+---+  
|
+ *  | n |  31:0 |  
|
+ *  
+---+---+--+
+ */
+
+#define GUC_HXG_MSG_MIN_LEN1u
+#define GUC_HXG_MSG_0_ORIGIN   (0x1 << 31)
+#define   GUC_HXG_ORIGIN_HOST  0u
+#define   GUC_HXG_ORIGIN_GUC   1u
+#define GUC_HXG_MSG_0_TYPE (0x7 << 28)
+#define   GUC_HXG_TYPE_REQUEST 0u
+#define   GUC_HXG_TYPE_EVENT   1u
+#define   GUC_HXG_TYPE_NO_RESPONSE_BUSY3u
+#define   GUC_HXG_TYPE_NO_RESPONSE_RETRY   5u
+#define   GUC_HXG_TYPE_RESPONSE_FAILURE6u
+#define   GUC_HXG_TYPE_RESPONSE_SUCCESS7u
+#define GUC_HXG_MSG_0_AUX  (0xfff << 0)
+
+/**
+ * DOC: HXG Request
+ *
+ * The `HXG Request`_ message should be used to initiate synchronous activity
+ * for which confirmation or return data is expected.
+ *
+ * The recipient of this message shall use `HXG Response`_, `HXG Failure`_
+ * or `HXG Retry`_ message as a definite reply, and may use `HXG Busy`_
+ * message as a intermediate reply.
+ *
+ * Format of @DATA0 and all @DATAn fields depends on the @ACTION code.
+ *
+ * _HXG Request:
+ *
+ *  
+---+---+--+
+ *  |   | Bits  | Description  
|
+ *  
+===+===+==+
+ *  | 0 |31 | ORIGIN   
|
+ *  |   
+---+--+
+ *  |   | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_ 
|
+ *  |   
+---+--+
+ *  |   | 27:16 | **DATA0** - request data (depends on ACTION) 
|
+ *  |   
+---+--+
+ *  |   |  15:0 | **ACTION** - requested action code   
|
+ *  
+---+---

[Intel-gfx] [RFC PATCH 48/97] drm/i915/guc: Defer context unpin until scheduling is disabled

2021-05-06 Thread Matthew Brost
With GuC scheduling, it isn't safe to unpin a context while scheduling
is enabled for that context, as the GuC may touch some of the pinned
state (e.g. the LRC). To ensure scheduling isn't enabled when an unpin is
done, a callback is added to intel_context_unpin when the pin count == 1
to disable scheduling for that context. When the response CTB message is
received, it is safe to do the final unpin.

Future patches may add a heuristic / delay before scheduling the disable
callback to avoid thrashing on schedule enable / disable.
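
For illustration, the pin_count flow is: the final intel_context_unpin()
bumps the count from 1 to 2 and issues the schedule-disable H2G; the G2H
completion then drops both references. A rough sketch of the completion
side (not the literal implementation; g2h_context_lookup() is a
hypothetical helper):

  int intel_guc_sched_done_process_msg(struct intel_guc *guc,
                                       const u32 *msg, u32 len)
  {
          struct intel_context *ce;

          if (unlikely(len < 1))
                  return -EPROTO;

          ce = g2h_context_lookup(guc, msg[0]);   /* hypothetical helper */
          if (unlikely(!ce))
                  return -EPROTO;

          clr_context_enabled(ce);
          /* drop the 1 -> 2 bump taken by intel_context_unpin() */
          intel_context_sched_disable_unpin(ce);
          return 0;
  }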

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |   4 +-
 drivers/gpu/drm/i915/gt/intel_context.h   |  21 ++-
 drivers/gpu/drm/i915/gt/intel_context_types.h |   2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   2 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |   6 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 145 +-
 6 files changed, 176 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index f750c826e19d..1499b8aace2a 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -306,9 +306,9 @@ int __intel_context_do_pin(struct intel_context *ce)
return err;
 }
 
-void intel_context_unpin(struct intel_context *ce)
+void __intel_context_do_unpin(struct intel_context *ce, int sub)
 {
-   if (!atomic_dec_and_test(&ce->pin_count))
+   if (!atomic_sub_and_test(sub, &ce->pin_count))
return;
 
CE_TRACE(ce, "unpin\n");
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index f83a73a2b39f..92ecbab8c1cd 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -113,7 +113,26 @@ static inline void __intel_context_pin(struct 
intel_context *ce)
atomic_inc(&ce->pin_count);
 }
 
-void intel_context_unpin(struct intel_context *ce);
+void __intel_context_do_unpin(struct intel_context *ce, int sub);
+
+static inline void intel_context_sched_disable_unpin(struct intel_context *ce)
+{
+   __intel_context_do_unpin(ce, 2);
+}
+
+static inline void intel_context_unpin(struct intel_context *ce)
+{
+   if (!ce->ops->sched_disable) {
+   __intel_context_do_unpin(ce, 1);
+   } else {
+   while (!atomic_add_unless(&ce->pin_count, -1, 1)) {
+   if (atomic_cmpxchg(&ce->pin_count, 1, 2) == 1) {
+   ce->ops->sched_disable(ce);
+   break;
+   }
+   }
+   }
+}
 
 void intel_context_enter_engine(struct intel_context *ce);
 void intel_context_exit_engine(struct intel_context *ce);
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index beafe55a9101..e7af6a2368f8 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -43,6 +43,8 @@ struct intel_context_ops {
void (*enter)(struct intel_context *ce);
void (*exit)(struct intel_context *ce);
 
+   void (*sched_disable)(struct intel_context *ce);
+
void (*reset)(struct intel_context *ce);
void (*destroy)(struct kref *kref);
 };
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 85ff32bfd074..55f02dd1598d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -237,6 +237,8 @@ int intel_guc_reset_engine(struct intel_guc *guc,
 
 int intel_guc_deregister_done_process_msg(struct intel_guc *guc,
  const u32 *msg, u32 len);
+int intel_guc_sched_done_process_msg(struct intel_guc *guc,
+const u32 *msg, u32 len);
 
 void intel_guc_load_status(struct intel_guc *guc, struct drm_printer *p);
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 51c5efdf543a..8e48bf260eab 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -900,6 +900,12 @@ static int ct_process_request(struct intel_guc_ct *ct, 
struct ct_incoming_msg *r
CT_ERROR(ct, "deregister context failed %x %*ph\n",
  action, 4 * len, payload);
break;
+   case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE:
+   ret = intel_guc_sched_done_process_msg(guc, payload, len);
+   if (unlikely(ret))
+   CT_ERROR(ct, "schedule context failed %x %*ph\n",
+ action, 4 * len, payload);
+   break;
default:
ret = -EOPNOTSUPP;
break;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index b4c439025a5f..

[Intel-gfx] [RFC PATCH 65/97] drm/i915: Reset GPU immediately if submission is disabled

2021-05-06 Thread Matthew Brost
If submission is disabled by the backend for any reason, reset the GPU
immediately in the heartbeat code.

Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 63 +++
 .../gpu/drm/i915/gt/intel_engine_heartbeat.h  |  4 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  9 +++
 drivers/gpu/drm/i915/i915_scheduler.c |  6 ++
 drivers/gpu/drm/i915/i915_scheduler.h |  6 ++
 drivers/gpu/drm/i915/i915_scheduler_types.h   |  3 +
 6 files changed, 78 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index b6a305e6a974..a8495364d906 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -70,12 +70,30 @@ static void show_heartbeat(const struct i915_request *rq,
 {
struct drm_printer p = drm_debug_printer("heartbeat");
 
-   intel_engine_dump(engine, &p,
- "%s heartbeat {seqno:%llx:%lld, prio:%d} not 
ticking\n",
- engine->name,
- rq->fence.context,
- rq->fence.seqno,
- rq->sched.attr.priority);
+   if (!rq) {
+   intel_engine_dump(engine, &p,
+ "%s heartbeat not ticking\n",
+ engine->name);
+   } else {
+   intel_engine_dump(engine, &p,
+ "%s heartbeat {seqno:%llx:%lld, prio:%d} not 
ticking\n",
+ engine->name,
+ rq->fence.context,
+ rq->fence.seqno,
+ rq->sched.attr.priority);
+   }
+}
+
+static void
+reset_engine(struct intel_engine_cs *engine, struct i915_request *rq)
+{
+   if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+   show_heartbeat(rq, engine);
+
+   intel_gt_handle_error(engine->gt, engine->mask,
+ I915_ERROR_CAPTURE,
+ "stopped heartbeat on %s",
+ engine->name);
 }
 
 static void heartbeat(struct work_struct *wrk)
@@ -102,6 +120,11 @@ static void heartbeat(struct work_struct *wrk)
if (intel_gt_is_wedged(engine->gt))
goto out;
 
+   if (i915_sched_engine_disabled(engine->sched_engine)) {
+   reset_engine(engine, engine->heartbeat.systole);
+   goto out;
+   }
+
if (engine->heartbeat.systole) {
long delay = READ_ONCE(engine->props.heartbeat_interval_ms);
 
@@ -139,13 +162,7 @@ static void heartbeat(struct work_struct *wrk)
engine->sched_engine->schedule(rq, &attr);
local_bh_enable();
} else {
-   if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
-   show_heartbeat(rq, engine);
-
-   intel_gt_handle_error(engine->gt, engine->mask,
- I915_ERROR_CAPTURE,
- "stopped heartbeat on %s",
- engine->name);
+   reset_engine(engine, rq);
}
 
rq->emitted_jiffies = jiffies;
@@ -194,6 +211,26 @@ void intel_engine_park_heartbeat(struct intel_engine_cs 
*engine)
i915_request_put(fetch_and_zero(&engine->heartbeat.systole));
 }
 
+void intel_gt_unpark_heartbeats(struct intel_gt *gt)
+{
+   struct intel_engine_cs *engine;
+   enum intel_engine_id id;
+
+   for_each_engine(engine, gt, id)
+   if (intel_engine_pm_is_awake(engine))
+   intel_engine_unpark_heartbeat(engine);
+
+}
+
+void intel_gt_park_heartbeats(struct intel_gt *gt)
+{
+   struct intel_engine_cs *engine;
+   enum intel_engine_id id;
+
+   for_each_engine(engine, gt, id)
+   intel_engine_park_heartbeat(engine);
+}
+
 void intel_engine_init_heartbeat(struct intel_engine_cs *engine)
 {
INIT_DELAYED_WORK(&engine->heartbeat.work, heartbeat);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
index a488ea3e84a3..5da6d809a87a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
@@ -7,6 +7,7 @@
 #define INTEL_ENGINE_HEARTBEAT_H
 
 struct intel_engine_cs;
+struct intel_gt;
 
 void intel_engine_init_heartbeat(struct intel_engine_cs *engine);
 
@@ -16,6 +17,9 @@ int intel_engine_set_heartbeat(struct intel_engine_cs *engine,
 void intel_engine_park_heartbeat(struct intel_engine_cs *engine);
 void intel_engine_unpark_heartbeat(struct intel_engine_cs *engine);
 
+void intel_gt_park_heartbeats(struct intel_gt *gt);
+void intel_gt_unpark_heartbeats(struct intel_gt *gt);
+
 int int

[Intel-gfx] [RFC PATCH 37/97] drm/i915/guc: Add stall timer to non blocking CTB send function

2021-05-06 Thread Matthew Brost
To prevent deadlock, implement a stall timer which fails H2G CTB sends
once a period of time with no forward progress has elapsed.

Also update ct_write to return -EDEADLK rather than -EPIPE on a
corrupted descriptor.
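
For illustration, a caller of the non-blocking send now has to tell a
transiently full buffer apart from a dead channel. A sketch (the function
name and the wedge-on-deadlock policy are illustrative only, and the
trailing flags argument of ct_send_nb() is assumed):

  static void guc_submit_one(struct intel_guc *guc, struct i915_request *rq,
                             const u32 *action, u32 len)
  {
          int ret = ct_send_nb(&guc->ct, action, len, 0);

          if (ret == -EBUSY)
                  guc->stalled_request = rq; /* CTB full: retry once space frees up */
          else if (ret == -EDEADLK)
                  /* no forward progress for MAX_US_STALL_CTB: give up */
                  intel_gt_set_wedged(guc_to_gt(guc));
  }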

Signed-off-by: John Harrison 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 48 +--
 1 file changed, 45 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index af7314d45a78..4eab319d61be 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -69,6 +69,8 @@ static inline struct drm_device *ct_to_drm(struct 
intel_guc_ct *ct)
 #define CTB_H2G_BUFFER_SIZE(SZ_4K)
 #define CTB_G2H_BUFFER_SIZE(SZ_4K)
 
+#define MAX_US_STALL_CTB   100
+
 struct ct_request {
struct list_head link;
u32 fence;
@@ -315,6 +317,7 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 
ct->requests.last_fence = 1;
ct->enabled = true;
+   ct->stall_time = KTIME_MAX;
 
return 0;
 
@@ -378,7 +381,7 @@ static int ct_write(struct intel_guc_ct *ct,
unsigned int i;
 
if (unlikely(ctb->broken))
-   return -EPIPE;
+   return -EDEADLK;
 
if (unlikely(desc->status))
goto corrupted;
@@ -449,7 +452,7 @@ static int ct_write(struct intel_guc_ct *ct,
CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n",
 desc->head, desc->tail, desc->status);
ctb->broken = true;
-   return -EPIPE;
+   return -EDEADLK;
 }
 
 /**
@@ -494,6 +497,17 @@ static int wait_for_ct_request_update(struct ct_request 
*req, u32 *status)
return err;
 }
 
+static inline bool ct_deadlocked(struct intel_guc_ct *ct)
+{
+   bool ret = ktime_us_delta(ktime_get(), ct->stall_time) >
+   MAX_US_STALL_CTB;
+
+   if (unlikely(ret))
+   CT_ERROR(ct, "CT deadlocked\n");
+
+   return ret;
+}
+
 static inline bool ctb_has_room(struct intel_guc_ct_buffer *ctb, u32 len_dw)
 {
struct guc_ct_buffer_desc *desc = ctb->desc;
@@ -505,6 +519,26 @@ static inline bool ctb_has_room(struct intel_guc_ct_buffer 
*ctb, u32 len_dw)
return space >= len_dw;
 }
 
+static int has_room_nb(struct intel_guc_ct *ct, u32 len_dw)
+{
+   struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
+
+   lockdep_assert_held(&ct->ctbs.send.lock);
+
+   if (unlikely(!ctb_has_room(ctb, len_dw))) {
+   if (ct->stall_time == KTIME_MAX)
+   ct->stall_time = ktime_get();
+
+   if (unlikely(ct_deadlocked(ct)))
+   return -EDEADLK;
+   else
+   return -EBUSY;
+   }
+
+   ct->stall_time = KTIME_MAX;
+   return 0;
+}
+
 static int ct_send_nb(struct intel_guc_ct *ct,
  const u32 *action,
  u32 len,
@@ -517,7 +551,7 @@ static int ct_send_nb(struct intel_guc_ct *ct,
 
spin_lock_irqsave(&ctb->lock, spin_flags);
 
-   ret = ctb_has_room(ctb, len + 1);
+   ret = has_room_nb(ct, len + 1);
if (unlikely(ret))
goto out;
 
@@ -561,11 +595,19 @@ static int ct_send(struct intel_guc_ct *ct,
 retry:
spin_lock_irqsave(&ct->ctbs.send.lock, flags);
if (unlikely(!ctb_has_room(ctb, len + 1))) {
+   if (ct->stall_time == KTIME_MAX)
+   ct->stall_time = ktime_get();
spin_unlock_irqrestore(&ct->ctbs.send.lock, flags);
+
+   if (unlikely(ct_deadlocked(ct)))
+   return -EDEADLK;
+
cond_resched();
goto retry;
}
 
+   ct->stall_time = KTIME_MAX;
+
fence = ct_get_next_fence(ct);
request.fence = fence;
request.status = 0;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 18/97] drm/i915/guc: Don't receive all G2H messages in irq handler

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

In the irq handler, try to receive just a single G2H message and let any
remaining messages be received from the tasklet.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 67 ---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  3 +
 2 files changed, 50 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index cb58fa7f970c..d630ec32decf 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -81,6 +81,7 @@ enum { CTB_SEND = 0, CTB_RECV = 1 };
 
 enum { CTB_OWNER_HOST = 0 };
 
+static void ct_receive_tasklet_func(unsigned long data);
 static void ct_incoming_request_worker_func(struct work_struct *w);
 
 /**
@@ -95,6 +96,7 @@ void intel_guc_ct_init_early(struct intel_guc_ct *ct)
INIT_LIST_HEAD(&ct->requests.pending);
INIT_LIST_HEAD(&ct->requests.incoming);
INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func);
+   tasklet_init(&ct->receive_tasklet, ct_receive_tasklet_func, (unsigned 
long)ct);
 }
 
 static inline const char *guc_ct_buffer_type_to_str(u32 type)
@@ -244,6 +246,7 @@ void intel_guc_ct_fini(struct intel_guc_ct *ct)
 {
GEM_BUG_ON(ct->enabled);
 
+   tasklet_kill(&ct->receive_tasklet);
i915_vma_unpin_and_release(&ct->vma, I915_VMA_RELEASE_MAP);
memset(ct, 0, sizeof(*ct));
 }
@@ -629,7 +632,7 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
CT_DEBUG(ct, "received %*ph\n", 4 * len, data);
 
desc->head = head * 4;
-   return 0;
+   return available - len;
 
 corrupted:
CT_ERROR(ct, "Corrupted descriptor addr=%#x head=%u tail=%u size=%u\n",
@@ -665,10 +668,10 @@ static int ct_handle_response(struct intel_guc_ct *ct, 
const u32 *msg)
u32 status;
u32 datalen;
struct ct_request *req;
+   unsigned long flags;
bool found = false;
 
GEM_BUG_ON(!ct_header_is_response(header));
-   GEM_BUG_ON(!in_irq());
 
/* Response payload shall at least include fence and status */
if (unlikely(len < 2)) {
@@ -688,7 +691,7 @@ static int ct_handle_response(struct intel_guc_ct *ct, 
const u32 *msg)
 
CT_DEBUG(ct, "response fence %u status %#x\n", fence, status);
 
-   spin_lock(&ct->requests.lock);
+   spin_lock_irqsave(&ct->requests.lock, flags);
list_for_each_entry(req, &ct->requests.pending, link) {
if (unlikely(fence != req->fence)) {
CT_DEBUG(ct, "request %u awaits response\n",
@@ -707,7 +710,7 @@ static int ct_handle_response(struct intel_guc_ct *ct, 
const u32 *msg)
found = true;
break;
}
-   spin_unlock(&ct->requests.lock);
+   spin_unlock_irqrestore(&ct->requests.lock, flags);
 
if (!found)
CT_ERROR(ct, "Unsolicited response %*ph\n", msgsize, msg);
@@ -821,31 +824,55 @@ static int ct_handle_request(struct intel_guc_ct *ct, 
const u32 *msg)
return 0;
 }
 
+static int ct_receive(struct intel_guc_ct *ct)
+{
+   u32 msg[GUC_CT_MSG_LEN_MASK + 1]; /* one extra dw for the header */
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&ct->ctbs.recv.lock, flags);
+   ret = ct_read(ct, msg);
+   spin_unlock_irqrestore(&ct->ctbs.recv.lock, flags);
+   if (ret < 0)
+   return ret;
+
+   if (ct_header_is_response(msg[0]))
+   ct_handle_response(ct, msg);
+   else
+   ct_handle_request(ct, msg);
+
+   return ret;
+}
+
+static void ct_try_receive_message(struct intel_guc_ct *ct)
+{
+   int ret;
+
+   if (GEM_WARN_ON(!ct->enabled))
+   return;
+
+   ret = ct_receive(ct);
+   if (ret > 0)
+   tasklet_hi_schedule(&ct->receive_tasklet);
+}
+
+static void ct_receive_tasklet_func(unsigned long data)
+{
+   struct intel_guc_ct *ct = (struct intel_guc_ct *)data;
+
+   ct_try_receive_message(ct);
+}
+
 /*
  * When we're communicating with the GuC over CT, GuC uses events
  * to notify us about new messages being posted on the RECV buffer.
  */
 void intel_guc_ct_event_handler(struct intel_guc_ct *ct)
 {
-   u32 msg[GUC_CT_MSG_LEN_MASK + 1]; /* one extra dw for the header */
-   unsigned long flags;
-   int err = 0;
-
if (unlikely(!ct->enabled)) {
WARN(1, "Unexpected GuC event received while CT disabled!\n");
return;
}
 
-   do {
-   spin_lock_irqsave(&ct->ctbs.recv.lock, flags);
-   err = ct_read(ct, msg);
-   spin_unlock_irqrestore(&ct->ctbs.recv.lock, flags);
-   if (err)
-   break;
-
-   if (ct_header_is_response(msg[0]))
-   err = ct_handle_response(ct, msg);
-   else
-   err = ct_handle_req

[Intel-gfx] [RFC PATCH 49/97] drm/i915/guc: Disable engine barriers with GuC during unpin

2021-05-06 Thread Matthew Brost
Disable engine barriers for unpinning with the GuC. This feature isn't
needed with the GuC, as it disables context scheduling before unpinning,
which guarantees the HW will not reference the context. Hence it is
not necessary to defer unpinning until a kernel context request
completes on each engine in the context's engine mask.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
Signed-off-by: Daniele Ceraolo Spurio 
---
 drivers/gpu/drm/i915/gt/intel_context.c|  2 +-
 drivers/gpu/drm/i915/gt/intel_context.h|  1 +
 drivers/gpu/drm/i915/gt/selftest_context.c | 10 ++
 drivers/gpu/drm/i915/i915_active.c |  3 +++
 4 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 1499b8aace2a..7f97753ab164 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -80,7 +80,7 @@ static int intel_context_active_acquire(struct intel_context 
*ce)
 
__i915_active_acquire(&ce->active);
 
-   if (intel_context_is_barrier(ce))
+   if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine))
return 0;
 
/* Preallocate tracking nodes */
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index 92ecbab8c1cd..9b211ca5ecc7 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -16,6 +16,7 @@
 #include "intel_engine_types.h"
 #include "intel_ring_types.h"
 #include "intel_timeline_types.h"
+#include "uc/intel_guc_submission.h"
 
 #define CE_TRACE(ce, fmt, ...) do {\
const struct intel_context *ce__ = (ce);\
diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c 
b/drivers/gpu/drm/i915/gt/selftest_context.c
index 26685b927169..fa7b99a671dd 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -209,7 +209,13 @@ static int __live_active_context(struct intel_engine_cs 
*engine)
 * This test makes sure that the context is kept alive until a
 * subsequent idle-barrier (emitted when the engine wakeref hits 0
 * with no more outstanding requests).
+*
+* In GuC submission mode we don't use idle barriers and we instead
+* get a message from the GuC to signal that it is safe to unpin the
+* context from memory.
 */
+   if (intel_engine_uses_guc(engine))
+   return 0;
 
if (intel_engine_pm_is_awake(engine)) {
pr_err("%s is awake before starting %s!\n",
@@ -357,7 +363,11 @@ static int __live_remote_context(struct intel_engine_cs 
*engine)
 * on the context image remotely (intel_context_prepare_remote_request),
 * which inserts foreign fences into intel_context.active, does not
 * clobber the idle-barrier.
+*
+* In GuC submission mode we don't use idle barriers.
 */
+   if (intel_engine_uses_guc(engine))
+   return 0;
 
if (intel_engine_pm_is_awake(engine)) {
pr_err("%s is awake before starting %s!\n",
diff --git a/drivers/gpu/drm/i915/i915_active.c 
b/drivers/gpu/drm/i915/i915_active.c
index b1aa1c482c32..9a264898bb91 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -968,6 +968,9 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 
GEM_BUG_ON(i915_active_is_idle(ref));
 
+   if (llist_empty(&ref->preallocated_barriers))
+   return;
+
/*
 * Transfer the list of preallocated barriers into the
 * i915_active rbtree, but only as proto-nodes. They will be
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 57/97] drm/i915/guc: Add several request trace points

2021-05-06 Thread Matthew Brost
Add trace points for request dependencies and GuC submission. Extend the
existing request trace points to include the submit fence value, guc_id,
and ring tail value.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  3 ++
 drivers/gpu/drm/i915/i915_request.c   |  3 ++
 drivers/gpu/drm/i915/i915_trace.h | 39 ++-
 3 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index c7a8968f22c5..87ed00f272e7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -421,6 +421,7 @@ static int guc_dequeue_one_context(struct intel_guc *guc)
guc->stalled_request = last;
return false;
}
+   trace_i915_request_guc_submit(last);
}
 
guc->stalled_request = NULL;
@@ -645,6 +646,8 @@ static int guc_bypass_tasklet_submit(struct intel_guc *guc,
ret = guc_add_request(guc, rq);
if (ret == -EBUSY)
guc->stalled_request = rq;
+   else
+   trace_i915_request_guc_submit(rq);
 
return ret;
 }
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 3a8f6ec0c32d..9542a5baa45a 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1344,6 +1344,9 @@ __i915_request_await_execution(struct i915_request *to,
return err;
}
 
+   trace_i915_request_dep_to(to);
+   trace_i915_request_dep_from(from);
+
/* Couple the dependency tree for PI on this exposed to->fence */
if (to->engine->sched_engine->schedule) {
err = i915_sched_node_add_dependency(&to->sched,
diff --git a/drivers/gpu/drm/i915/i915_trace.h 
b/drivers/gpu/drm/i915/i915_trace.h
index 6778ad2a14a4..b02d04b6c8f6 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -794,22 +794,27 @@ DECLARE_EVENT_CLASS(i915_request,
TP_STRUCT__entry(
 __field(u32, dev)
 __field(u64, ctx)
+__field(u32, guc_id)
 __field(u16, class)
 __field(u16, instance)
 __field(u32, seqno)
+__field(u32, tail)
 ),
 
TP_fast_assign(
   __entry->dev = rq->engine->i915->drm.primary->index;
   __entry->class = rq->engine->uabi_class;
   __entry->instance = rq->engine->uabi_instance;
+  __entry->guc_id = rq->context->guc_id;
   __entry->ctx = rq->fence.context;
   __entry->seqno = rq->fence.seqno;
+  __entry->tail = rq->tail;
   ),
 
-   TP_printk("dev=%u, engine=%u:%u, ctx=%llu, seqno=%u",
+   TP_printk("dev=%u, engine=%u:%u, guc_id=%u, ctx=%llu, seqno=%u, 
tail=%u",
  __entry->dev, __entry->class, __entry->instance,
- __entry->ctx, __entry->seqno)
+ __entry->guc_id, __entry->ctx, __entry->seqno,
+ __entry->tail)
 );
 
 DEFINE_EVENT(i915_request, i915_request_add,
@@ -818,6 +823,21 @@ DEFINE_EVENT(i915_request, i915_request_add,
 );
 
 #if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS)
+DEFINE_EVENT(i915_request, i915_request_dep_to,
+TP_PROTO(struct i915_request *rq),
+TP_ARGS(rq)
+);
+
+DEFINE_EVENT(i915_request, i915_request_dep_from,
+TP_PROTO(struct i915_request *rq),
+TP_ARGS(rq)
+);
+
+DEFINE_EVENT(i915_request, i915_request_guc_submit,
+TP_PROTO(struct i915_request *rq),
+TP_ARGS(rq)
+);
+
 DEFINE_EVENT(i915_request, i915_request_submit,
 TP_PROTO(struct i915_request *rq),
 TP_ARGS(rq)
@@ -887,6 +907,21 @@ TRACE_EVENT(i915_request_out,
 
 #else
 #if !defined(TRACE_HEADER_MULTI_READ)
+static inline void
+trace_i915_request_dep_to(struct i915_request *rq)
+{
+}
+
+static inline void
+trace_i915_request_dep_from(struct i915_request *rq)
+{
+}
+
+static inline void
+trace_i915_request_guc_submit(struct i915_request *rq)
+{
+}
+
 static inline void
 trace_i915_request_submit(struct i915_request *rq)
 {
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 61/97] drm/i915: Hold reference to intel_context over life of i915_request

2021-05-06 Thread Matthew Brost
Hold a reference to the intel_context over the life of an i915_request.
Without this an i915_request can exist after the context has been
destroyed (e.g. the request is retired and the context closed, but user
space holds a reference to the request via an out fence). In the case of
GuC submission + virtual engine, the engine that the request references
is also destroyed, which can trigger a bad pointer dereference in fence
ops (e.g. i915_fence_get_driver_name). We could likely change
i915_fence_get_driver_name to avoid touching the engine, but let's just
be safe and hold the intel_context reference.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/i915_request.c | 54 -
 1 file changed, 22 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 127d60b36422..0b96b824ea06 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -126,39 +126,17 @@ static void i915_fence_release(struct dma_fence *fence)
i915_sw_fence_fini(&rq->semaphore);
 
/*
-* Keep one request on each engine for reserved use under mempressure
-*
-* We do not hold a reference to the engine here and so have to be
-* very careful in what rq->engine we poke. The virtual engine is
-* referenced via the rq->context and we released that ref during
-* i915_request_retire(), ergo we must not dereference a virtual
-* engine here. Not that we would want to, as the only consumer of
-* the reserved engine->request_pool is the power management parking,
-* which must-not-fail, and that is only run on the physical engines.
-*
-* Since the request must have been executed to be have completed,
-* we know that it will have been processed by the HW and will
-* not be unsubmitted again, so rq->engine and rq->execution_mask
-* at this point is stable. rq->execution_mask will be a single
-* bit if the last and _only_ engine it could execution on was a
-* physical engine, if it's multiple bits then it started on and
-* could still be on a virtual engine. Thus if the mask is not a
-* power-of-two we assume that rq->engine may still be a virtual
-* engine and so a dangling invalid pointer that we cannot dereference
-*
-* For example, consider the flow of a bonded request through a virtual
-* engine. The request is created with a wide engine mask (all engines
-* that we might execute on). On processing the bond, the request mask
-* is reduced to one or more engines. If the request is subsequently
-* bound to a single engine, it will then be constrained to only
-* execute on that engine and never returned to the virtual engine
-* after timeslicing away, see __unwind_incomplete_requests(). Thus we
-* know that if the rq->execution_mask is a single bit, rq->engine
-* can be a physical engine with the exact corresponding mask.
+* Keep one request on each engine for reserved use under mempressure,
+* do not use with virtual engines as this really is only needed for
+* kernel contexts.
 */
-   if (is_power_of_2(rq->execution_mask) &&
-   !cmpxchg(&rq->engine->request_pool, NULL, rq))
+   if (!intel_engine_is_virtual(rq->engine) &&
+   !cmpxchg(&rq->engine->request_pool, NULL, rq)) {
+   intel_context_put(rq->context);
return;
+   }
+
+   intel_context_put(rq->context);
 
kmem_cache_free(global.slab_requests, rq);
 }
@@ -977,7 +955,18 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
}
}
 
-   rq->context = ce;
+   /*
+* Hold a reference to the intel_context over life of an i915_request.
+* Without this an i915_request can exist after the context has been
+* destroyed (e.g. request retired, context closed, but user space holds
+* a reference to the request from an out fence). In the case of GuC
+* submission + virtual engine, the engine that the request references
+* is also destroyed which can trigger bad pointer dref in fence ops
+* (e.g. i915_fence_get_driver_name). We could likely change these
+* functions to avoid touching the engine but let's just be safe and
+* hold the intel_context reference.
+*/
+   rq->context = intel_context_get(ce);
rq->engine = ce->engine;
rq->ring = ce->ring;
rq->execution_mask = ce->engine->mask;
@@ -1054,6 +1043,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
GEM_BUG_ON(!list_empty(&rq->sched.waiters_list));
 
 err_free:
+   intel_context_put(ce);
kmem_cache_free(global.slab_requests, rq);
 err_unreserve:
intel_context_unpin(ce);
-- 
2.28.0


[Intel-gfx] [RFC PATCH 73/97] drm/i915/guc: Enable GuC engine reset

2021-05-06 Thread Matthew Brost
From: John Harrison 

Clear the 'disable resets' flag to allow GuC to reset hung contexts
(detected via pre-emption timeout).

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index cd65ff42657d..179ab658d2b5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -84,8 +84,7 @@ static void guc_policies_init(struct guc_policies *policies)
 {
policies->dpc_promote_time = GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US;
policies->max_num_work_items = GLOBAL_POLICY_MAX_NUM_WI;
-   /* Disable automatic resets as not yet supported. */
-   policies->global_flags = GLOBAL_POLICY_DISABLE_ENGINE_RESET;
+   policies->global_flags = 0;
policies->is_valid = 1;
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 53/97] drm/i915/guc: Disable semaphores when using GuC scheduling

2021-05-06 Thread Matthew Brost
Disable semaphores when using GuC scheduling, as semaphores are broken in
the current GuC firmware.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 993faa213b41..d30260ffe2a7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -230,7 +230,8 @@ static void intel_context_set_gem(struct intel_context *ce,
ce->timeline = intel_timeline_get(ctx->timeline);
 
if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
-   intel_engine_has_timeslices(ce->engine))
+   intel_engine_has_timeslices(ce->engine) &&
+   intel_engine_has_semaphores(ce->engine))
__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
 
intel_context_set_watchdog_us(ce, ctx->watchdog.timeout_us);
@@ -1939,7 +1940,8 @@ static int __apply_priority(struct intel_context *ce, 
void *arg)
if (!intel_engine_has_timeslices(ce->engine))
return 0;
 
-   if (ctx->sched.priority >= I915_PRIORITY_NORMAL)
+   if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
+   intel_engine_has_semaphores(ce->engine))
intel_context_set_use_semaphores(ce);
else
intel_context_clear_use_semaphores(ce);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 42/97] drm/i915/guc: Remove GuC stage descriptor, add lrc descriptor

2021-05-06 Thread Matthew Brost
Remove the old GuC stage descriptor and add the lrc descriptor which will
be used by the new GuC interface implemented in this patch series.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  4 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   | 65 -
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 72 ++-
 3 files changed, 25 insertions(+), 116 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 4c0a367e41d8..d84f37afb9d8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -44,8 +44,8 @@ struct intel_guc {
struct i915_vma *ads_vma;
struct __guc_ads_blob *ads_blob;
 
-   struct i915_vma *stage_desc_pool;
-   void *stage_desc_pool_vaddr;
+   struct i915_vma *lrc_desc_pool;
+   void *lrc_desc_pool_vaddr;
 
/* Control params for fw initialization */
u32 params[GUC_CTL_MAX_DWORDS];
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index cae8649a8147..1dd2f04c2762 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -26,9 +26,6 @@
 #define GUC_CLIENT_PRIORITY_NORMAL 3
 #define GUC_CLIENT_PRIORITY_NUM4
 
-#define GUC_MAX_STAGE_DESCRIPTORS  1024
-#defineGUC_INVALID_STAGE_IDGUC_MAX_STAGE_DESCRIPTORS
-
 #define GUC_MAX_LRC_DESCRIPTORS65535
 #defineGUC_INVALID_LRC_ID  GUC_MAX_LRC_DESCRIPTORS
 
@@ -183,68 +180,6 @@ struct guc_process_desc {
u32 reserved[30];
 } __packed;
 
-/* engine id and context id is packed into guc_execlist_context.context_id*/
-#define GUC_ELC_CTXID_OFFSET   0
-#define GUC_ELC_ENGINE_OFFSET  29
-
-/* The execlist context including software and HW information */
-struct guc_execlist_context {
-   u32 context_desc;
-   u32 context_id;
-   u32 ring_status;
-   u32 ring_lrca;
-   u32 ring_begin;
-   u32 ring_end;
-   u32 ring_next_free_location;
-   u32 ring_current_tail_pointer_value;
-   u8 engine_state_submit_value;
-   u8 engine_state_wait_value;
-   u16 pagefault_count;
-   u16 engine_submit_queue_count;
-} __packed;
-
-/*
- * This structure describes a stage set arranged for a particular communication
- * between uKernel (GuC) and Driver (KMD). Technically, this is known as a
- * "GuC Context descriptor" in the specs, but we use the term "stage 
descriptor"
- * to avoid confusion with all the other things already named "context" in the
- * driver. A static pool of these descriptors are stored inside a GEM object
- * (stage_desc_pool) which is held for the entire lifetime of our interaction
- * with the GuC, being allocated before the GuC is loaded with its firmware.
- */
-struct guc_stage_desc {
-   u32 sched_common_area;
-   u32 stage_id;
-   u32 pas_id;
-   u8 engines_used;
-   u64 db_trigger_cpu;
-   u32 db_trigger_uk;
-   u64 db_trigger_phy;
-   u16 db_id;
-
-   struct guc_execlist_context lrc[GUC_MAX_ENGINES_NUM];
-
-   u8 attribute;
-
-   u32 priority;
-
-   u32 wq_sampled_tail_offset;
-   u32 wq_total_submit_enqueues;
-
-   u32 process_desc;
-   u32 wq_addr;
-   u32 wq_size;
-
-   u32 engine_presence;
-
-   u8 engine_suspended;
-
-   u8 reserved0[3];
-   u64 reserved1[1];
-
-   u64 desc_private;
-} __packed;
-
 #define CONTEXT_REGISTRATION_FLAG_KMD  BIT(0)
 
 #define CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US 100
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index b8f9c71af13e..6acc1ef34f92 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -65,57 +65,35 @@ static inline struct i915_priolist *to_priolist(struct 
rb_node *rb)
return rb_entry(rb, struct i915_priolist, node);
 }
 
-static struct guc_stage_desc *__get_stage_desc(struct intel_guc *guc, u32 id)
+/* Future patches will use this function */
+__attribute__ ((unused))
+static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index)
 {
-   struct guc_stage_desc *base = guc->stage_desc_pool_vaddr;
+   struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr;
 
-   return &base[id];
-}
-
-static int guc_stage_desc_pool_create(struct intel_guc *guc)
-{
-   u32 size = PAGE_ALIGN(sizeof(struct guc_stage_desc) *
- GUC_MAX_STAGE_DESCRIPTORS);
+   GEM_BUG_ON(index >= GUC_MAX_LRC_DESCRIPTORS);
 
-   return intel_guc_allocate_and_map_vma(guc, size, &guc->stage_desc_pool,
- &guc->stage_desc_pool_vaddr);
+   return &base[index];
 }
 
-static void guc_stage_desc_pool_destroy(struct intel_guc *guc)
-{
-   i915_vma_

[Intel-gfx] [RFC PATCH 55/97] drm/i915/guc: Update intel_gt_wait_for_idle to work with GuC

2021-05-06 Thread Matthew Brost
When running with the GuC, the GPU can't be considered idle if the GuC
still has contexts pinned. As such, a call has been added in
intel_gt_wait_for_idle to idle the uC, and in turn the GuC, by waiting for
the number of contexts still awaiting unpin to go to zero.
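
For illustration, the uC-level wait boils down to waiting for the GuC's
count of contexts that still have an unpin pending to reach zero, roughly
as below (the wait queue and counter names are assumptions, not the
literal patch):

  static long intel_guc_wait_for_idle(struct intel_guc *guc, long timeout)
  {
          /* outstanding_g2h: contexts with a schedule-disable/unpin pending */
          return wait_event_timeout(guc->ct.wq,
                                    !atomic_read(&guc->outstanding_g2h),
                                    timeout) ? 0 : -ETIME;
  }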

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c  |  3 +-
 drivers/gpu/drm/i915/gt/intel_gt.c| 18 
 drivers/gpu/drm/i915/gt/intel_gt.h|  2 +
 drivers/gpu/drm/i915/gt/intel_gt_requests.c   | 22 ++---
 drivers/gpu/drm/i915/gt/intel_gt_requests.h   |  7 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  4 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  4 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 91 ++-
 drivers/gpu/drm/i915/gt/uc/intel_uc.h |  5 +
 drivers/gpu/drm/i915/i915_debugfs.c   |  1 +
 drivers/gpu/drm/i915/i915_gem_evict.c |  1 +
 .../gpu/drm/i915/selftests/igt_live_test.c|  2 +-
 .../gpu/drm/i915/selftests/mock_gem_device.c  |  3 +-
 14 files changed, 137 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c 
b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 8598a1c78a4c..2f5295c9408d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -634,7 +634,8 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
goto insert;
 
/* Attempt to reap some mmap space from dead objects */
-   err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT);
+   err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT,
+  NULL);
if (err)
goto err;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c 
b/drivers/gpu/drm/i915/gt/intel_gt.c
index 8d77dcbad059..1742a8561f69 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -574,6 +574,24 @@ static void __intel_gt_disable(struct intel_gt *gt)
GEM_BUG_ON(intel_gt_pm_is_awake(gt));
 }
 
+int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
+{
+   long rtimeout;
+
+   /* If the device is asleep, we have no requests outstanding */
+   if (!intel_gt_pm_is_awake(gt))
+   return 0;
+
+   while ((timeout = intel_gt_retire_requests_timeout(gt, timeout,
+  &rtimeout)) > 0) {
+   cond_resched();
+   if (signal_pending(current))
+   return -EINTR;
+   }
+
+   return timeout ? timeout : intel_uc_wait_for_idle(>->uc, rtimeout);
+}
+
 int intel_gt_init(struct intel_gt *gt)
 {
int err;
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h 
b/drivers/gpu/drm/i915/gt/intel_gt.h
index 7ec395cace69..c775043334bf 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -48,6 +48,8 @@ void intel_gt_driver_release(struct intel_gt *gt);
 
 void intel_gt_driver_late_release(struct intel_gt *gt);
 
+int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout);
+
 void intel_gt_check_and_clear_faults(struct intel_gt *gt);
 void intel_gt_clear_error_registers(struct intel_gt *gt,
intel_engine_mask_t engine_mask);
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c 
b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
index 647eca9d867a..c6c702f236fa 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
@@ -13,6 +13,7 @@
 #include "intel_gt_pm.h"
 #include "intel_gt_requests.h"
 #include "intel_timeline.h"
+#include "uc/intel_uc.h"
 
 static bool retire_requests(struct intel_timeline *tl)
 {
@@ -130,7 +131,8 @@ void intel_engine_fini_retire(struct intel_engine_cs 
*engine)
GEM_BUG_ON(engine->retire);
 }
 
-long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
+long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout,
+ long *rtimeout)
 {
struct intel_gt_timelines *timelines = >->timelines;
struct intel_timeline *tl, *tn;
@@ -195,22 +197,10 @@ out_active:   spin_lock(&timelines->lock);
if (flush_submission(gt, timeout)) /* Wait, there's more! */
active_count++;
 
-   return active_count ? timeout : 0;
-}
-
-int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
-{
-   /* If the device is asleep, we have no requests outstanding */
-   if (!intel_gt_pm_is_awake(gt))
-   return 0;
-
-   while ((timeout = intel_gt_retire_requests_timeout(gt, timeout)) > 0) {
-   cond_resched();
-   if (signal_pending(current))
-   return -EINTR;
-   }
+   if (rtimeout)
+   *rtimeout = timeout;
 
-   return timeout;
+   return active_count ? timeout : 0;
 }
 

[Intel-gfx] [RFC PATCH 67/97] drm/i915/guc: Suspend/resume implementation for new interface

2021-05-06 Thread Matthew Brost
The new GuC interface introduces an MMIO H2G command,
INTEL_GUC_ACTION_RESET_CLIENT, which is used to implement suspend. This
MMIO command tears down any active contexts, generating a context reset
G2H CTB message for each. Once that step completes, the GuC tears down
the CTB channels. It is safe to suspend once this MMIO H2G command
completes and all G2H CTBs have been processed. In practice the i915
will likely never receive a G2H as suspend should only be called after
the GPU is idle.

Resume is implemented in the same manner as before - simply reload the
GuC firmware and reinitialize everything (e.g. CTB channels, contexts,
etc..).
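
Condensed, the new suspend path looks roughly like this (a sketch of the
flow described above, not the literal diff):

    /* intel_guc_suspend() with the new interface */
    if (intel_guc_submission_is_used(guc)) {
            /*
             * MMIO H2G: the GuC resets any active contexts (emitting a
             * G2H per context) and then tears down the CTB channels.
             */
            ret = intel_guc_send_mmio(guc, action, ARRAY_SIZE(action),
                                      NULL, 0);
            if (ret)
                    DRM_ERROR("GuC suspend: RESET_CLIENT failed %d\n", ret);
    }
    intel_guc_sanitize(guc);        /* signal that the GuC isn't running */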

Cc: John Harrison 
Signed-off-by: Matthew Brost 
Signed-off-by: Michal Wajdeczko 
---
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |  1 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.c| 64 ---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++--
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  5 ++
 drivers/gpu/drm/i915/gt/uc/intel_uc.c | 28 +---
 5 files changed, 59 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
index c0a715ec7276..c9e87de3af49 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -146,6 +146,7 @@ enum intel_guc_action {
INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600,
+   INTEL_GUC_ACTION_RESET_CLIENT = 0x5B01,
INTEL_GUC_ACTION_LIMIT
 };
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 864b14e313a3..f3240037fb7c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -534,51 +534,34 @@ int intel_guc_auth_huc(struct intel_guc *guc, u32 
rsa_offset)
  */
 int intel_guc_suspend(struct intel_guc *guc)
 {
-   struct intel_uncore *uncore = guc_to_gt(guc)->uncore;
int ret;
-   u32 status;
u32 action[] = {
-   INTEL_GUC_ACTION_ENTER_S_STATE,
-   GUC_POWER_D1, /* any value greater than GUC_POWER_D0 */
+   INTEL_GUC_ACTION_RESET_CLIENT,
};
 
-   /*
-* If GuC communication is enabled but submission is not supported,
-* we do not need to suspend the GuC.
-*/
-   if (!intel_guc_submission_is_used(guc) || !intel_guc_is_ready(guc))
+   if (!intel_guc_is_ready(guc))
return 0;
 
-   /*
-* The ENTER_S_STATE action queues the save/restore operation in GuC FW
-* and then returns, so waiting on the H2G is not enough to guarantee
-* GuC is done. When all the processing is done, GuC writes
-* INTEL_GUC_SLEEP_STATE_SUCCESS to scratch register 14, so we can poll
-* on that. Note that GuC does not ensure that the value in the register
-* is different from INTEL_GUC_SLEEP_STATE_SUCCESS while the action is
-* in progress so we need to take care of that ourselves as well.
-*/
-
-   intel_uncore_write(uncore, SOFT_SCRATCH(14),
-  INTEL_GUC_SLEEP_STATE_INVALID_MASK);
-
-   ret = intel_guc_send(guc, action, ARRAY_SIZE(action));
-   if (ret)
-   return ret;
-
-   ret = __intel_wait_for_register(uncore, SOFT_SCRATCH(14),
-   INTEL_GUC_SLEEP_STATE_INVALID_MASK,
-   0, 0, 10, &status);
-   if (ret)
-   return ret;
-
-   if (status != INTEL_GUC_SLEEP_STATE_SUCCESS) {
-   DRM_ERROR("GuC failed to change sleep state. "
- "action=0x%x, err=%u\n",
- action[0], status);
-   return -EIO;
+   if (intel_guc_submission_is_used(guc)) {
+   /*
+* This H2G MMIO command tears down the GuC in two steps. First 
it will
+* generate a G2H CTB for every active context indicating a 
reset. In
+* practice the i915 shouldn't ever get a G2H as suspend should 
only be
+* called when the GPU is idle. Next, it tears down the CTBs 
and this
+* H2G MMIO command completes.
+*
+* Don't abort on a failure code from the GuC. Keep going and 
do the
+* clean up in santize() and re-initialisation on resume and 
hopefully
+* the error here won't be problematic.
+*/
+   ret = intel_guc_send_mmio(guc, action, ARRAY_SIZE(action), 
NULL, 0);
+   if (ret)
+   DRM_ERROR("GuC suspend: RESET_CLIENT action failed with 
error %d!\n", ret);
}
 
+   /* Signal that the GuC isn't running. */
+   intel_guc_sanitize(guc);
+
return 0;
 }
 
@@ -588,7 +571,12 @@ int intel_guc

[Intel-gfx] [RFC PATCH 66/97] drm/i915/guc: Add disable interrupts to guc sanitize

2021-05-06 Thread Matthew Brost
Disable GuC interrupts in intel_guc_sanitize(). Part of this requires
moving the guc_*_interrupt wrapper functions into the header file
intel_guc.h.

Signed-off-by: Matthew Brost 
Cc: Daniele Ceraolo Spurio 
ct);
 }
 
+static inline void intel_guc_reset_interrupts(struct intel_guc *guc)
+{
+   guc->interrupts.reset(guc);
+}
+
+static inline void intel_guc_enable_interrupts(struct intel_guc *guc)
+{
+   guc->interrupts.enable(guc);
+}
+
+static inline void intel_guc_disable_interrupts(struct intel_guc *guc)
+{
+   guc->interrupts.disable(guc);
+}
+
 static inline int intel_guc_sanitize(struct intel_guc *guc)
 {
intel_uc_fw_sanitize(&guc->fw);
+   intel_guc_disable_interrupts(guc);
intel_guc_ct_sanitize(&guc->ct);
guc->mmio_msg = 0;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index d5ccffbb89ae..67c1e15845aa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -207,21 +207,6 @@ static void guc_handle_mmio_msg(struct intel_guc *guc)
spin_unlock_irq(&guc->irq_lock);
 }
 
-static void guc_reset_interrupts(struct intel_guc *guc)
-{
-   guc->interrupts.reset(guc);
-}
-
-static void guc_enable_interrupts(struct intel_guc *guc)
-{
-   guc->interrupts.enable(guc);
-}
-
-static void guc_disable_interrupts(struct intel_guc *guc)
-{
-   guc->interrupts.disable(guc);
-}
-
 static int guc_enable_communication(struct intel_guc *guc)
 {
struct intel_gt *gt = guc_to_gt(guc);
@@ -242,7 +227,7 @@ static int guc_enable_communication(struct intel_guc *guc)
guc_get_mmio_msg(guc);
guc_handle_mmio_msg(guc);
 
-   guc_enable_interrupts(guc);
+   intel_guc_enable_interrupts(guc);
 
/* check for CT messages received before we enabled interrupts */
spin_lock_irq(>->irq_lock);
@@ -265,7 +250,7 @@ static void guc_disable_communication(struct intel_guc *guc)
 */
guc_clear_mmio_msg(guc);
 
-   guc_disable_interrupts(guc);
+   intel_guc_disable_interrupts(guc);
 
intel_guc_ct_disable(&guc->ct);
 
@@ -463,7 +448,7 @@ static int __uc_init_hw(struct intel_uc *uc)
if (ret)
goto err_out;
 
-   guc_reset_interrupts(guc);
+   intel_guc_reset_interrupts(guc);
 
/* WaEnableuKernelHeaderValidFix:skl */
/* WaEnableGuCBootHashCheckNotSet:skl,bxt,kbl */
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 32/97] drm/i915: Introduce i915_sched_engine object

2021-05-06 Thread Matthew Brost
Introduce the i915_sched_engine object, a lower-level data structure
that i915_scheduler / generic code can operate on without touching
execlists-specific structures. This allows additional submission
backends to be added without breaking the layering.
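
The pattern for callers throughout the patch is simply to reach
scheduler state through the new object, e.g. (from the
fence_set_priority() hunk below):

    /* before */
    if (engine->schedule)
            engine->schedule(rq, attr);

    /* after */
    if (engine->sched_engine->schedule)
            engine->sched_engine->schedule(rq, attr);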

Cc: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_wait.c  |   4 +-
 drivers/gpu/drm/i915/gt/intel_engine.h|  16 -
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  77 ++--
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   4 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  10 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  42 +--
 drivers/gpu/drm/i915/gt/intel_engine_user.c   |   2 +-
 .../drm/i915/gt/intel_execlists_submission.c  | 350 +++---
 .../gpu/drm/i915/gt/intel_ring_submission.c   |  13 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |  17 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  36 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   6 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c|   6 +-
 drivers/gpu/drm/i915/gt/selftest_reset.c  |   2 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  75 ++--
 drivers/gpu/drm/i915/i915_gpu_error.c |   7 +-
 drivers/gpu/drm/i915/i915_request.c   |  50 +--
 drivers/gpu/drm/i915/i915_request.h   |   2 +-
 drivers/gpu/drm/i915/i915_scheduler.c | 168 -
 drivers/gpu/drm/i915/i915_scheduler.h |  65 +++-
 drivers/gpu/drm/i915/i915_scheduler_types.h   |  63 
 21 files changed, 575 insertions(+), 440 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c 
b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 4b9856d5ba14..af1fbf8e2a9a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -104,8 +104,8 @@ static void fence_set_priority(struct dma_fence *fence,
engine = rq->engine;
 
rcu_read_lock(); /* RCU serialisation for set-wedged protection */
-   if (engine->schedule)
-   engine->schedule(rq, attr);
+   if (engine->sched_engine->schedule)
+   engine->sched_engine->schedule(rq, attr);
rcu_read_unlock();
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
b/drivers/gpu/drm/i915/gt/intel_engine.h
index 8d9184920c51..988d9688ae4d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -123,20 +123,6 @@ execlists_active(const struct intel_engine_execlists 
*execlists)
return active;
 }
 
-static inline void
-execlists_active_lock_bh(struct intel_engine_execlists *execlists)
-{
-   local_bh_disable(); /* prevent local softirq and lock recursion */
-   tasklet_lock(&execlists->tasklet);
-}
-
-static inline void
-execlists_active_unlock_bh(struct intel_engine_execlists *execlists)
-{
-   tasklet_unlock(&execlists->tasklet);
-   local_bh_enable(); /* restore softirq, and kick ksoftirqd! */
-}
-
 struct i915_request *
 execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
 
@@ -257,8 +243,6 @@ intel_engine_find_active_request(struct intel_engine_cs 
*engine);
 
 u32 intel_engine_context_size(struct intel_gt *gt, u8 class);
 
-void intel_engine_init_active(struct intel_engine_cs *engine,
- unsigned int subclass);
 #define ENGINE_PHYSICAL0
 #define ENGINE_MOCK1
 #define ENGINE_VIRTUAL 2
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 828e1669f92c..ec82a7ec0c8d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -8,6 +8,7 @@
 #include "gem/i915_gem_context.h"
 
 #include "i915_drv.h"
+#include "i915_scheduler.h"
 
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
@@ -326,9 +327,6 @@ static int intel_engine_setup(struct intel_gt *gt, enum 
intel_engine_id id)
if (engine->context_size)
DRIVER_CAPS(i915)->has_logical_contexts = true;
 
-   /* Nothing to do here, execute in order of dependencies */
-   engine->schedule = NULL;
-
ewma__engine_latency_init(&engine->latency);
seqcount_init(&engine->stats.lock);
 
@@ -583,9 +581,6 @@ void intel_engine_init_execlists(struct intel_engine_cs 
*engine)
memset(execlists->pending, 0, sizeof(execlists->pending));
execlists->active =
memset(execlists->inflight, 0, sizeof(execlists->inflight));
-
-   execlists->queue_priority_hint = INT_MIN;
-   execlists->queue = RB_ROOT_CACHED;
 }
 
 static void cleanup_status_page(struct intel_engine_cs *engine)
@@ -712,11 +707,17 @@ static int engine_setup_common(struct intel_engine_cs 
*engine)
goto err_status;
}
 
+   engine->sched_engine = i915_sched_engine_create(ENGINE_PHYSICAL);
+   if (!engine->sched_engine) {
+   err = -ENOMEM;
+   goto err_sched_engine;
+   }
+   engine->sched_en

[Intel-gfx] [RFC PATCH 23/97] drm/i915/guc: Support per context scheduling policies

2021-05-06 Thread Matthew Brost
From: John Harrison 

GuC firmware v53.0.0 introduced per context scheduling policies. This
includes changes to some of the ADS structures which are required to
load the firmware even if not using GuC submission.

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c  | 26 +++--
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 31 +
 2 files changed, 11 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index 17526717368c..648e1767b17a 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -58,30 +58,12 @@ static u32 guc_ads_blob_size(struct intel_guc *guc)
   guc_ads_private_data_size(guc);
 }
 
-static void guc_policy_init(struct guc_policy *policy)
-{
-   policy->execution_quantum = POLICY_DEFAULT_EXECUTION_QUANTUM_US;
-   policy->preemption_time = POLICY_DEFAULT_PREEMPTION_TIME_US;
-   policy->fault_time = POLICY_DEFAULT_FAULT_TIME_US;
-   policy->policy_flags = 0;
-}
-
 static void guc_policies_init(struct guc_policies *policies)
 {
-   struct guc_policy *policy;
-   u32 p, i;
-
-   policies->dpc_promote_time = POLICY_DEFAULT_DPC_PROMOTE_TIME_US;
-   policies->max_num_work_items = POLICY_MAX_NUM_WI;
-
-   for (p = 0; p < GUC_CLIENT_PRIORITY_NUM; p++) {
-   for (i = 0; i < GUC_MAX_ENGINE_CLASSES; i++) {
-   policy = &policies->policy[p][i];
-
-   guc_policy_init(policy);
-   }
-   }
-
+   policies->dpc_promote_time = GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US;
+   policies->max_num_work_items = GLOBAL_POLICY_MAX_NUM_WI;
+   /* Disable automatic resets as not yet supported. */
+   policies->global_flags = GLOBAL_POLICY_DISABLE_ENGINE_RESET;
policies->is_valid = 1;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index d445f6b77db4..95db4a7d3f4d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -221,32 +221,14 @@ struct guc_stage_desc {
 
 /* Scheduling policy settings */
 
-/* Reset engine upon preempt failure */
-#define POLICY_RESET_ENGINE(1<<0)
-/* Preempt to idle on quantum expiry */
-#define POLICY_PREEMPT_TO_IDLE (1<<1)
-
-#define POLICY_MAX_NUM_WI 15
-#define POLICY_DEFAULT_DPC_PROMOTE_TIME_US 50
-#define POLICY_DEFAULT_EXECUTION_QUANTUM_US 100
-#define POLICY_DEFAULT_PREEMPTION_TIME_US 50
-#define POLICY_DEFAULT_FAULT_TIME_US 25
-
-struct guc_policy {
-   /* Time for one workload to execute. (in micro seconds) */
-   u32 execution_quantum;
-   /* Time to wait for a preemption request to completed before issuing a
-* reset. (in micro seconds). */
-   u32 preemption_time;
-   /* How much time to allow to run after the first fault is observed.
-* Then preempt afterwards. (in micro seconds) */
-   u32 fault_time;
-   u32 policy_flags;
-   u32 reserved[8];
-} __packed;
+#define GLOBAL_POLICY_MAX_NUM_WI 15
+
+/* Don't reset an engine upon preemption failure */
+#define GLOBAL_POLICY_DISABLE_ENGINE_RESET BIT(0)
+
+#define GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US 50
 
 struct guc_policies {
-   struct guc_policy 
policy[GUC_CLIENT_PRIORITY_NUM][GUC_MAX_ENGINE_CLASSES];
u32 submission_queue_depth[GUC_MAX_ENGINE_CLASSES];
/* In micro seconds. How much time to allow before DPC processing is
 * called back via interrupt (to prevent DPC queue drain starving).
@@ -260,6 +242,7 @@ struct guc_policies {
 * idle. */
u32 max_num_work_items;
 
+   u32 global_flags;
u32 reserved[4];
 } __packed;
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 29/97] drm/i915/guc: Update firmware to v60.1.2

2021-05-06 Thread Matthew Brost
From: John Harrison 

Signed-off-by: John Harrison 
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 25 
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index df647c9a8d56..81f5fad84906 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -48,19 +48,18 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
  * firmware as TGL.
  */
 #define INTEL_UC_FIRMWARE_DEFS(fw_def, guc_def, huc_def) \
-   fw_def(ALDERLAKE_S, 0, guc_def(tgl, 49, 0, 1), huc_def(tgl,  7, 5, 0)) \
-   fw_def(ROCKETLAKE,  0, guc_def(tgl, 49, 0, 1), huc_def(tgl,  7, 5, 0)) \
-   fw_def(TIGERLAKE,   0, guc_def(tgl, 49, 0, 1), huc_def(tgl,  7, 5, 0)) \
-   fw_def(JASPERLAKE,  0, guc_def(ehl, 49, 0, 1), huc_def(ehl,  9, 0, 0)) \
-   fw_def(ELKHARTLAKE, 0, guc_def(ehl, 49, 0, 1), huc_def(ehl,  9, 0, 0)) \
-   fw_def(ICELAKE, 0, guc_def(icl, 49, 0, 1), huc_def(icl,  9, 0, 0)) \
-   fw_def(COMETLAKE,   5, guc_def(cml, 49, 0, 1), huc_def(cml,  4, 0, 0)) \
-   fw_def(COMETLAKE,   0, guc_def(kbl, 49, 0, 1), huc_def(kbl,  4, 0, 0)) \
-   fw_def(COFFEELAKE,  0, guc_def(kbl, 49, 0, 1), huc_def(kbl,  4, 0, 0)) \
-   fw_def(GEMINILAKE,  0, guc_def(glk, 49, 0, 1), huc_def(glk,  4, 0, 0)) \
-   fw_def(KABYLAKE,0, guc_def(kbl, 49, 0, 1), huc_def(kbl,  4, 0, 0)) \
-   fw_def(BROXTON, 0, guc_def(bxt, 49, 0, 1), huc_def(bxt,  2, 0, 0)) \
-   fw_def(SKYLAKE, 0, guc_def(skl, 49, 0, 1), huc_def(skl,  2, 0, 0))
+   fw_def(ALDERLAKE_S, 0, guc_def(tgl, 60, 1, 2), huc_def(tgl,  7, 5, 0)) \
+   fw_def(ROCKETLAKE,  0, guc_def(tgl, 60, 1, 2), huc_def(tgl,  7, 5, 0)) \
+   fw_def(TIGERLAKE,   0, guc_def(tgl, 60, 1, 2), huc_def(tgl,  7, 5, 0)) \
+   fw_def(JASPERLAKE,  0, guc_def(ehl, 60, 1, 2), huc_def(ehl,  9, 0, 0)) \
+   fw_def(ELKHARTLAKE, 0, guc_def(ehl, 60, 1, 2), huc_def(ehl,  9, 0, 0)) \
+   fw_def(ICELAKE, 0, guc_def(icl, 60, 1, 2), huc_def(icl,  9, 0, 0)) \
+   fw_def(COMETLAKE,   5, guc_def(cml, 60, 1, 2), huc_def(cml,  4, 0, 0)) \
+   fw_def(COFFEELAKE,  0, guc_def(kbl, 60, 1, 2), huc_def(kbl,  4, 0, 0)) \
+   fw_def(GEMINILAKE,  0, guc_def(glk, 60, 1, 2), huc_def(glk,  4, 0, 0)) \
+   fw_def(KABYLAKE,0, guc_def(kbl, 60, 1, 2), huc_def(kbl,  4, 0, 0)) \
+   fw_def(BROXTON, 0, guc_def(bxt, 60, 1, 2), huc_def(bxt,  2, 0, 0)) \
+   fw_def(SKYLAKE, 0, guc_def(skl, 60, 1, 2), huc_def(skl,  2, 0, 0))
 
 #define __MAKE_UC_FW_PATH(prefix_, name_, major_, minor_, patch_) \
"i915/" \
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 58/97] drm/i915: Add intel_context tracing

2021-05-06 Thread Matthew Brost
Add intel_context tracing. These trace points are particularly helpful
when debugging the GuC firmware and can be enabled via the
CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS kernel config option.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |   6 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  14 ++
 drivers/gpu/drm/i915/i915_trace.h | 148 +-
 3 files changed, 166 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 7f97753ab164..b24a1b7a3f88 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -8,6 +8,7 @@
 
 #include "i915_drv.h"
 #include "i915_globals.h"
+#include "i915_trace.h"
 
 #include "intel_context.h"
 #include "intel_engine.h"
@@ -28,6 +29,7 @@ static void rcu_context_free(struct rcu_head *rcu)
 {
struct intel_context *ce = container_of(rcu, typeof(*ce), rcu);
 
+   trace_intel_context_free(ce);
kmem_cache_free(global.slab_ce, ce);
 }
 
@@ -46,6 +48,7 @@ intel_context_create(struct intel_engine_cs *engine)
return ERR_PTR(-ENOMEM);
 
intel_context_init(ce, engine);
+   trace_intel_context_create(ce);
return ce;
 }
 
@@ -268,6 +271,8 @@ int __intel_context_do_pin_ww(struct intel_context *ce,
 
GEM_BUG_ON(!intel_context_is_pinned(ce)); /* no overflow! */
 
+   trace_intel_context_do_pin(ce);
+
 err_unlock:
mutex_unlock(&ce->pin_mutex);
 err_post_unpin:
@@ -323,6 +328,7 @@ void __intel_context_do_unpin(struct intel_context *ce, int 
sub)
 */
intel_context_get(ce);
intel_context_active_release(ce);
+   trace_intel_context_do_unpin(ce);
intel_context_put(ce);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 87ed00f272e7..a789994d6de7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -347,6 +347,7 @@ static int guc_add_request(struct intel_guc *guc, struct 
i915_request *rq)
 
err = intel_guc_send_nb(guc, action, len, g2h_len_dw);
if (!enabled && !err) {
+   trace_intel_context_sched_enable(ce);
atomic_inc(&guc->outstanding_submission_g2h);
set_context_enabled(ce);
} else if (!enabled) {
@@ -815,6 +816,8 @@ static int register_context(struct intel_context *ce)
u32 offset = intel_guc_ggtt_offset(guc, guc->lrc_desc_pool) +
ce->guc_id * sizeof(struct guc_lrc_desc);
 
+   trace_intel_context_register(ce);
+
return __guc_action_register_context(guc, ce->guc_id, offset);
 }
 
@@ -834,6 +837,8 @@ static int deregister_context(struct intel_context *ce, u32 
guc_id)
 {
struct intel_guc *guc = ce_to_guc(ce);
 
+   trace_intel_context_deregister(ce);
+
return __guc_action_deregister_context(guc, guc_id);
 }
 
@@ -908,6 +913,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce)
 * GuC before registering this context.
 */
if (context_registered) {
+   trace_intel_context_steal_guc_id(ce);
set_context_wait_for_deregister_to_register(ce);
intel_context_get(ce);
 
@@ -966,6 +972,7 @@ static void __guc_context_sched_disable(struct intel_guc 
*guc,
 
GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID);
 
+   trace_intel_context_sched_disable(ce);
intel_context_get(ce);
 
guc_submission_busy_loop(guc, action, ARRAY_SIZE(action),
@@ -1122,6 +1129,9 @@ static void __guc_signal_context_fence(struct 
intel_context *ce)
 
lockdep_assert_held(&ce->guc_state.lock);
 
+   if (!list_empty(&ce->guc_state.fences))
+   trace_intel_context_fence_release(ce);
+
list_for_each_entry(rq, &ce->guc_state.fences, guc_fence_link)
i915_sw_fence_complete(&rq->submit);
 
@@ -1536,6 +1546,8 @@ int intel_guc_deregister_done_process_msg(struct 
intel_guc *guc,
if (unlikely(!ce))
return -EPROTO;
 
+   trace_intel_context_deregister_done(ce);
+
if (context_wait_for_deregister_to_register(ce)) {
struct intel_runtime_pm *runtime_pm =
&ce->engine->gt->i915->runtime_pm;
@@ -1587,6 +1599,8 @@ int intel_guc_sched_done_process_msg(struct intel_guc 
*guc,
return -EPROTO;
}
 
+   trace_intel_context_sched_done(ce);
+
if (context_pending_enable(ce)) {
clr_context_pending_enable(ce);
} else if (context_pending_disable(ce)) {
diff --git a/drivers/gpu/drm/i915/i915_trace.h 
b/drivers/gpu/drm/i915/i915_trace.h
index b02d04b6c8f6..97c2e83984ed 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -818,8 +818,8 @@ DECLARE_EVENT_CLASS(i915_request,
 );
 
 DEFINE_EVENT(i915_request,

[Intel-gfx] [RFC PATCH 68/97] drm/i915/guc: Handle context reset notification

2021-05-06 Thread Matthew Brost
GuC will issue a reset on detecting an engine hang and will notify
the driver via a G2H message. The driver will service the notification
by resetting the guilty context to a simple state or banning it
completely.

Cc: Matthew Brost 
Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  2 ++
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |  6 
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 35 +++
 drivers/gpu/drm/i915/i915_trace.h | 10 ++
 4 files changed, 53 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 277b4496a20e..a2abe1c422e3 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -263,6 +263,8 @@ int intel_guc_deregister_done_process_msg(struct intel_guc 
*guc,
  const u32 *msg, u32 len);
 int intel_guc_sched_done_process_msg(struct intel_guc *guc,
 const u32 *msg, u32 len);
+int intel_guc_context_reset_process_msg(struct intel_guc *guc,
+   const u32 *msg, u32 len);
 
 void intel_guc_submission_reset_prepare(struct intel_guc *guc);
 void intel_guc_submission_reset(struct intel_guc *guc, bool stalled);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index b3194d753b13..9c84b2ba63a8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -941,6 +941,12 @@ static int ct_process_request(struct intel_guc_ct *ct, 
struct ct_incoming_msg *r
CT_ERROR(ct, "schedule context failed %x %*ph\n",
  action, 4 * len, payload);
break;
+   case INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION:
+   ret = intel_guc_context_reset_process_msg(guc, payload, len);
+   if (unlikely(ret))
+   CT_ERROR(ct, "context reset notification failed %x 
%*ph\n",
+ action, 4 * len, payload);
+   break;
default:
ret = -EOPNOTSUPP;
break;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 2c3791fc24b7..940017495731 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2192,6 +2192,41 @@ int intel_guc_sched_done_process_msg(struct intel_guc 
*guc,
return 0;
 }
 
+static void guc_context_replay(struct intel_context *ce)
+{
+   struct i915_sched_engine *sched_engine = ce->engine->sched_engine;
+
+   __guc_reset_context(ce, true);
+   i915_sched_engine_hi_kick(sched_engine);
+}
+
+static void guc_handle_context_reset(struct intel_guc *guc,
+struct intel_context *ce)
+{
+   trace_intel_context_reset(ce);
+   guc_context_replay(ce);
+}
+
+int intel_guc_context_reset_process_msg(struct intel_guc *guc,
+   const u32 *msg, u32 len)
+{
+   struct intel_context *ce;
+   int desc_idx = msg[0];
+
+   if (unlikely(len != 1)) {
+   drm_dbg(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
+   return -EPROTO;
+   }
+
+   ce = g2h_context_lookup(guc, desc_idx);
+   if (unlikely(!ce))
+   return -EPROTO;
+
+   guc_handle_context_reset(guc, ce);
+
+   return 0;
+}
+
 void intel_guc_log_submission_info(struct intel_guc *guc,
   struct drm_printer *p)
 {
diff --git a/drivers/gpu/drm/i915/i915_trace.h 
b/drivers/gpu/drm/i915/i915_trace.h
index 97c2e83984ed..c095c4d39456 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -929,6 +929,11 @@ DECLARE_EVENT_CLASS(intel_context,
  __entry->guc_sched_state_no_lock)
 );
 
+DEFINE_EVENT(intel_context, intel_context_reset,
+TP_PROTO(struct intel_context *ce),
+TP_ARGS(ce)
+);
+
 DEFINE_EVENT(intel_context, intel_context_register,
 TP_PROTO(struct intel_context *ce),
 TP_ARGS(ce)
@@ -1026,6 +1031,11 @@ trace_i915_request_out(struct i915_request *rq)
 {
 }
 
+static inline void
+trace_intel_context_reset(struct intel_context *ce)
+{
+}
+
 static inline void
 trace_intel_context_register(struct intel_context *ce)
 {
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 54/97] drm/i915/guc: Ensure G2H response has space in buffer

2021-05-06 Thread Matthew Brost
Ensure a G2H response has space in the buffer before sending an H2G CTB
message, as the GuC can't handle any backpressure on the G2H interface.
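
Conceptually (a simplified sketch; the helper and field names below are
abbreviations of what the patch adds in intel_guc_ct.c), a send must now
fit in the H2G buffer *and* leave reserved room in the G2H buffer for
its reply:

    /* sketch only, not the exact code */
    if (!h2g_has_room(ct, len_dw))
            return -EBUSY;                          /* retry later */
    if (atomic_read(&ct->ctbs.recv.space) < g2h_len_dw)
            return -EBUSY;                          /* reply would not fit */
    atomic_sub(g2h_len_dw, &ct->ctbs.recv.space);   /* reserve G2H space */
    /* ... write the H2G message; the reservation is returned when the
     * matching G2H is processed ... */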

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h| 13 +++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 74 +++
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  4 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   |  4 +
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 13 ++--
 5 files changed, 85 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 55f02dd1598d..485e98f3f304 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -96,11 +96,17 @@ inline int intel_guc_send(struct intel_guc *guc, const u32 
*action, u32 len)
 }
 
 #define INTEL_GUC_SEND_NB  BIT(31)
+#define INTEL_GUC_SEND_G2H_DW_SHIFT0
+#define INTEL_GUC_SEND_G2H_DW_MASK (0xff << INTEL_GUC_SEND_G2H_DW_SHIFT)
+#define MAKE_SEND_FLAGS(len) \
+   ({GEM_BUG_ON(!FIELD_FIT(INTEL_GUC_SEND_G2H_DW_MASK, len)); \
+   (FIELD_PREP(INTEL_GUC_SEND_G2H_DW_MASK, len) | INTEL_GUC_SEND_NB);})
 static
-inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len)
+inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len,
+u32 g2h_len_dw)
 {
return intel_guc_ct_send(&guc->ct, action, len, NULL, 0,
-INTEL_GUC_SEND_NB);
+MAKE_SEND_FLAGS(g2h_len_dw));
 }
 
 static inline int
@@ -114,6 +120,7 @@ intel_guc_send_and_receive(struct intel_guc *guc, const u32 
*action, u32 len,
 static inline int intel_guc_send_busy_loop(struct intel_guc* guc,
   const u32 *action,
   u32 len,
+  u32 g2h_len_dw,
   bool loop)
 {
int err;
@@ -122,7 +129,7 @@ static inline int intel_guc_send_busy_loop(struct 
intel_guc* guc,
might_sleep_if(loop && (!in_atomic() && !irqs_disabled()));
 
 retry:
-   err = intel_guc_send_nb(guc, action, len);
+   err = intel_guc_send_nb(guc, action, len, g2h_len_dw);
if (unlikely(err == -EBUSY && loop)) {
if (likely(!in_atomic() && !irqs_disabled()))
cond_resched();
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 8e48bf260eab..f1893030ca88 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -73,6 +73,7 @@ static inline struct drm_device *ct_to_drm(struct 
intel_guc_ct *ct)
 #define CTB_DESC_SIZE  ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE(SZ_4K)
 #define CTB_G2H_BUFFER_SIZE(4 * CTB_H2G_BUFFER_SIZE)
+#define G2H_ROOM_BUFFER_SIZE   (PAGE_SIZE)
 
 #define MAX_US_STALL_CTB   100
 
@@ -131,23 +132,27 @@ static void guc_ct_buffer_desc_init(struct 
guc_ct_buffer_desc *desc)
 
 static void guc_ct_buffer_reset(struct intel_guc_ct_buffer *ctb)
 {
+   u32 space;
+
ctb->broken = false;
ctb->tail = 0;
ctb->head = 0;
-   ctb->space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size);
+   space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size) - ctb->resv_space;
+   atomic_set(&ctb->space, space);
 
guc_ct_buffer_desc_init(ctb->desc);
 }
 
 static void guc_ct_buffer_init(struct intel_guc_ct_buffer *ctb,
   struct guc_ct_buffer_desc *desc,
-  u32 *cmds, u32 size_in_bytes)
+  u32 *cmds, u32 size_in_bytes, u32 resv_space)
 {
GEM_BUG_ON(size_in_bytes % 4);
 
ctb->desc = desc;
ctb->cmds = cmds;
ctb->size = size_in_bytes / 4;
+   ctb->resv_space = resv_space / 4;
 
guc_ct_buffer_reset(ctb);
 }
@@ -228,6 +233,7 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
struct guc_ct_buffer_desc *desc;
u32 blob_size;
u32 cmds_size;
+   u32 resv_space;
void *blob;
u32 *cmds;
int err;
@@ -252,19 +258,21 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
desc = blob;
cmds = blob + 2 * CTB_DESC_SIZE;
cmds_size = CTB_H2G_BUFFER_SIZE;
-   CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u\n", "send",
-ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
+   resv_space = 0;
+   CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u/%u\n", "send",
+ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size, 
resv_space);
 
-   guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size);
+   guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size, resv_space);
 
/* store pointers to desc and cmds for recv ctb */
desc = blob + CTB

[Intel-gfx] [RFC PATCH 38/97] drm/i915/guc: Optimize CTB writes and reads

2021-05-06 Thread Matthew Brost
CTB writes are now in the path of command submission and should be
optimized for performance. Rather than reading CTB descriptor values
(e.g. head, tail, size), which could result in accesses across the PCIe
bus, keep local shadow copies and only read/write the descriptor
values when absolutely necessary.
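
The fast path, as a condensed sketch of the new h2g_has_room() below
(the corruption check is omitted), only touches driver memory and falls
back to an uncached read of the descriptor when the cached space looks
insufficient:

    if (ctb->space >= len_dw)               /* cached, no PCIe access */
            return true;

    head = READ_ONCE(ctb->desc->head);      /* refresh from the shared desc */
    ctb->space = CIRC_SPACE(ctb->tail, head, ctb->size);
    return ctb->space >= len_dw;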

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 78 +--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  6 ++
 2 files changed, 52 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 4eab319d61be..77dfbc94dcc3 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -127,6 +127,10 @@ static void guc_ct_buffer_desc_init(struct 
guc_ct_buffer_desc *desc)
 static void guc_ct_buffer_reset(struct intel_guc_ct_buffer *ctb)
 {
ctb->broken = false;
+   ctb->tail = 0;
+   ctb->head = 0;
+   ctb->space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size);
+
guc_ct_buffer_desc_init(ctb->desc);
 }
 
@@ -371,10 +375,8 @@ static int ct_write(struct intel_guc_ct *ct,
 {
struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
struct guc_ct_buffer_desc *desc = ctb->desc;
-   u32 head = desc->head;
-   u32 tail = desc->tail;
+   u32 tail = ctb->tail;
u32 size = ctb->size;
-   u32 used;
u32 header;
u32 hxg;
u32 *cmds = ctb->cmds;
@@ -386,25 +388,14 @@ static int ct_write(struct intel_guc_ct *ct,
if (unlikely(desc->status))
goto corrupted;
 
-   if (unlikely((tail | head) >= size)) {
+#ifdef CONFIG_DRM_I915_DEBUG_GUC
+   if (unlikely((desc->tail | desc->head) >= size)) {
CT_ERROR(ct, "Invalid offsets head=%u tail=%u (size=%u)\n",
-head, tail, size);
+desc->head, desc->tail, size);
desc->status |= GUC_CTB_STATUS_OVERFLOW;
goto corrupted;
}
-
-   /*
-* tail == head condition indicates empty. GuC FW does not support
-* using up the entire buffer to get tail == head meaning full.
-*/
-   if (tail < head)
-   used = (size - head) + tail;
-   else
-   used = tail - head;
-
-   /* make sure there is a space including extra dw for the fence */
-   if (unlikely(used + len + 1 >= size))
-   return -ENOSPC;
+#endif
 
/*
 * dw0: CT header (including fence)
@@ -444,7 +435,9 @@ static int ct_write(struct intel_guc_ct *ct,
write_barrier(ct);
 
/* now update descriptor */
+   ctb->tail = tail;
WRITE_ONCE(desc->tail, tail);
+   ctb->space -= len + 1;
 
return 0;
 
@@ -460,7 +453,7 @@ static int ct_write(struct intel_guc_ct *ct,
  * @req:   pointer to pending request
  * @status:placeholder for status
  *
- * For each sent request, Guc shall send bac CT response message.
+ * For each sent request, GuC shall send back CT response message.
  * Our message handler will update status of tracked request once
  * response message with given fence is received. Wait here and
  * check for valid response status value.
@@ -508,24 +501,35 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct)
return ret;
 }
 
-static inline bool ctb_has_room(struct intel_guc_ct_buffer *ctb, u32 len_dw)
+static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw)
 {
-   struct guc_ct_buffer_desc *desc = ctb->desc;
-   u32 head = READ_ONCE(desc->head);
+   struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
+   u32 head;
u32 space;
 
-   space = CIRC_SPACE(desc->tail, head, ctb->size);
+   if (ctb->space >= len_dw)
+   return true;
+
+   head = READ_ONCE(ctb->desc->head);
+   if (unlikely(head > ctb->size)) {
+   CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u size=%u\n",
+ ctb->desc->head, ctb->desc->tail, ctb->size);
+   ctb->desc->status |= GUC_CTB_STATUS_OVERFLOW;
+   ctb->broken = true;
+   return false;
+   }
+
+   space = CIRC_SPACE(ctb->tail, head, ctb->size);
+   ctb->space = space;
 
return space >= len_dw;
 }
 
 static int has_room_nb(struct intel_guc_ct *ct, u32 len_dw)
 {
-   struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
-
lockdep_assert_held(&ct->ctbs.send.lock);
 
-   if (unlikely(!ctb_has_room(ctb, len_dw))) {
+   if (unlikely(!h2g_has_room(ct, len_dw))) {
if (ct->stall_time == KTIME_MAX)
ct->stall_time = ktime_get();
 
@@ -593,11 +597,11 @@ static int ct_send(struct intel_guc_ct *ct,
 * rare.
 */
 retry:
-   spin_lock_irqsave(&ct->ctbs.send.lock, flags);
-   if (unlikely(!ctb_has_room(ctb, len + 1))) {
+   spin_lock_irqsave(&ctb->lock, fla

[Intel-gfx] [RFC PATCH 62/97] drm/i915/guc: Disable bonding extension with GuC submission

2021-05-06 Thread Matthew Brost
Update the bonding extension to return -ENODEV when using GuC submission
as this extension fundamentally will not work with the GuC submission
interface.

Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index e6bc5c666f93..bb827bb99250 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -1675,6 +1675,11 @@ set_engines__bond(struct i915_user_extension __user 
*base, void *data)
}
virtual = set->engines->engines[idx]->engine;
 
+   if (intel_engine_uses_guc(virtual)) {
+   DRM_DEBUG("bonding extension not supported with GuC 
submission");
+   return -ENODEV;
+   }
+
err = check_user_mbz(&ext->flags);
if (err)
return err;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 41/97] drm/i915/guc: Add new GuC interface defines and structures

2021-05-06 Thread Matthew Brost
Add new GuC interface defines and structures while maintaining old ones
in parallel.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  | 18 
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   | 41 +++
 2 files changed, 59 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
index 6cb0d3eb9b72..c0a715ec7276 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -121,13 +121,31 @@ enum intel_guc_action {
INTEL_GUC_ACTION_DEALLOCATE_DOORBELL = 0x20,
INTEL_GUC_ACTION_LOG_BUFFER_FILE_FLUSH_COMPLETE = 0x30,
INTEL_GUC_ACTION_UK_LOG_ENABLE_LOGGING = 0x40,
+   INTEL_GUC_ACTION_LOG_CACHE_CRASH_DUMP = 0x200,
+   INTEL_GUC_ACTION_GLOBAL_DEBUG_ACTIONS = 0x301,
INTEL_GUC_ACTION_FORCE_LOG_BUFFER_FLUSH = 0x302,
+   INTEL_GUC_ACTION_LOG_VERBOSITY_SELECT = 0x400,
INTEL_GUC_ACTION_ENTER_S_STATE = 0x501,
INTEL_GUC_ACTION_EXIT_S_STATE = 0x502,
+   INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
+   INTEL_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+   INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+   INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+   INTEL_GUC_ACTION_SCHED_ENGINE_MODE_SET = 0x1003,
+   INTEL_GUC_ACTION_SCHED_ENGINE_MODE_DONE = 0x1004,
+   INTEL_GUC_ACTION_SET_CONTEXT_PRIORITY = 0x1005,
+   INTEL_GUC_ACTION_SET_CONTEXT_EXECUTION_QUANTUM = 0x1006,
+   INTEL_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT = 0x1007,
+   INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008,
+   INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009,
INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
+   INTEL_GUC_ACTION_SETUP_GUCRC = 0x3004,
INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
+   INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502,
+   INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503,
INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
+   INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600,
INTEL_GUC_ACTION_LIMIT
 };
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 558cfe168cb7..cae8649a8147 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -17,6 +17,9 @@
 #include "abi/guc_messages_abi.h"
 #include "gt/intel_engine_types.h"
 
+#define GUC_CONTEXT_DISABLE0
+#define GUC_CONTEXT_ENABLE 1
+
 #define GUC_CLIENT_PRIORITY_KMD_HIGH   0
 #define GUC_CLIENT_PRIORITY_HIGH   1
 #define GUC_CLIENT_PRIORITY_KMD_NORMAL 2
@@ -26,6 +29,9 @@
 #define GUC_MAX_STAGE_DESCRIPTORS  1024
 #defineGUC_INVALID_STAGE_IDGUC_MAX_STAGE_DESCRIPTORS
 
+#define GUC_MAX_LRC_DESCRIPTORS65535
+#defineGUC_INVALID_LRC_ID  GUC_MAX_LRC_DESCRIPTORS
+
 #define GUC_RENDER_ENGINE  0
 #define GUC_VIDEO_ENGINE   1
 #define GUC_BLITTER_ENGINE 2
@@ -239,6 +245,41 @@ struct guc_stage_desc {
u64 desc_private;
 } __packed;
 
+#define CONTEXT_REGISTRATION_FLAG_KMD  BIT(0)
+
+#define CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US 100
+#define CONTEXT_POLICY_DEFAULT_PREEMPTION_TIME_US 50
+
+/* Preempt to idle on quantum expiry */
+#define CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLEBIT(0)
+
+/*
+ * GuC Context registration descriptor.
+ * FIXME: This is only required to exist during context registration.
+ * The current 1:1 between guc_lrc_desc and LRCs for the lifetime of the LRC
+ * is not required.
+ */
+struct guc_lrc_desc {
+   u32 hw_context_desc;
+   u32 slpm_perf_mode_hint;/* SPLC v1 only */
+   u32 slpm_freq_hint;
+   u32 engine_submit_mask; /* In logical space */
+   u8 engine_class;
+   u8 reserved0[3];
+   u32 priority;
+   u32 process_desc;
+   u32 wq_addr;
+   u32 wq_size;
+   u32 context_flags;  /* CONTEXT_REGISTRATION_* */
+   /* Time for one workload to execute. (in micro seconds) */
+   u32 execution_quantum;
+   /* Time to wait for a preemption request to complete before issuing a
+* reset. (in micro seconds). */
+   u32 preemption_timeout;
+   u32 policy_flags;   /* CONTEXT_POLICY_* */
+   u32 reserved1[19];
+} __packed;
+
 #define GUC_POWER_UNSPECIFIED  0
 #define GUC_POWER_D0   1
 #define GUC_POWER_D1   2
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 31/97] drm/i915/guc: Early initialization of GuC send registers

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Base offset and count of the GuC scratch registers, used for
sending MMIO messages to GuC, can be initialized earlier with
other GuC members that also depend on the platform.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Daniele Ceraolo Spurio 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 454c8d886499..235c1997f32d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -60,15 +60,8 @@ void intel_guc_init_send_regs(struct intel_guc *guc)
enum forcewake_domains fw_domains = 0;
unsigned int i;
 
-   if (INTEL_GEN(gt->i915) >= 11) {
-   guc->send_regs.base =
-   i915_mmio_reg_offset(GEN11_SOFT_SCRATCH(0));
-   guc->send_regs.count = GEN11_SOFT_SCRATCH_COUNT;
-   } else {
-   guc->send_regs.base = i915_mmio_reg_offset(SOFT_SCRATCH(0));
-   guc->send_regs.count = GUC_MAX_MMIO_MSG_LEN;
-   BUILD_BUG_ON(GUC_MAX_MMIO_MSG_LEN > SOFT_SCRATCH_COUNT);
-   }
+   GEM_BUG_ON(!guc->send_regs.base);
+   GEM_BUG_ON(!guc->send_regs.count);
 
for (i = 0; i < guc->send_regs.count; i++) {
fw_domains |= intel_uncore_forcewake_for_reg(gt->uncore,
@@ -181,11 +174,18 @@ void intel_guc_init_early(struct intel_guc *guc)
guc->interrupts.reset = gen11_reset_guc_interrupts;
guc->interrupts.enable = gen11_enable_guc_interrupts;
guc->interrupts.disable = gen11_disable_guc_interrupts;
+   guc->send_regs.base =
+   i915_mmio_reg_offset(GEN11_SOFT_SCRATCH(0));
+   guc->send_regs.count = GEN11_SOFT_SCRATCH_COUNT;
+
} else {
guc->notify_reg = GUC_SEND_INTERRUPT;
guc->interrupts.reset = gen9_reset_guc_interrupts;
guc->interrupts.enable = gen9_enable_guc_interrupts;
guc->interrupts.disable = gen9_disable_guc_interrupts;
+   guc->send_regs.base = i915_mmio_reg_offset(SOFT_SCRATCH(0));
+   guc->send_regs.count = GUC_MAX_MMIO_MSG_LEN;
+   BUILD_BUG_ON(GUC_MAX_MMIO_MSG_LEN > SOFT_SCRATCH_COUNT);
}
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 51/97] drm/i915: Disable preempt busywait when using GuC scheduling

2021-05-06 Thread Matthew Brost
Disable preempt busywait when using GuC scheduling. This isn't needed
as the GuC controls preemption when scheduling.

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c 
b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 732c2ed1d933..47500ee955d4 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -506,7 +506,8 @@ gen8_emit_fini_breadcrumb_tail(struct i915_request *rq, u32 
*cs)
*cs++ = MI_USER_INTERRUPT;
 
*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
-   if (intel_engine_has_semaphores(rq->engine))
+   if (intel_engine_has_semaphores(rq->engine) &&
+   !intel_uc_uses_guc_submission(&rq->engine->gt->uc))
cs = emit_preempt_busywait(rq, cs);
 
rq->tail = intel_ring_offset(rq, cs);
@@ -598,7 +599,8 @@ gen12_emit_fini_breadcrumb_tail(struct i915_request *rq, 
u32 *cs)
*cs++ = MI_USER_INTERRUPT;
 
*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
-   if (intel_engine_has_semaphores(rq->engine))
+   if (intel_engine_has_semaphores(rq->engine) &&
+   !intel_uc_uses_guc_submission(&rq->engine->gt->uc))
cs = gen12_emit_preempt_busywait(rq, cs);
 
rq->tail = intel_ring_offset(rq, cs);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 43/97] drm/i915/guc: Add lrc descriptor context lookup array

2021-05-06 Thread Matthew Brost
Add an lrc descriptor context lookup array which can resolve the
intel_context from the lrc descriptor index. In addition to the lookup,
it can determine whether the context for an lrc descriptor is currently
registered with the GuC by checking if an entry for a descriptor index
is present. Future patches in the series will make use of this array.
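
As a usage sketch (the helpers are added below; the caller shape is what
later patches in the series use), a G2H handler resolves its context as:

    struct intel_context *ce;

    ce = __get_context(guc, desc_idx);      /* xa_load() under the hood */
    if (unlikely(!ce))                      /* not registered with the GuC */
            return -EPROTO;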

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|  5 +++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 32 +--
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index d84f37afb9d8..2eb6c497e43c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -6,6 +6,8 @@
 #ifndef _INTEL_GUC_H_
 #define _INTEL_GUC_H_
 
+#include "linux/xarray.h"
+
 #include "intel_uncore.h"
 #include "intel_guc_fw.h"
 #include "intel_guc_fwif.h"
@@ -47,6 +49,9 @@ struct intel_guc {
struct i915_vma *lrc_desc_pool;
void *lrc_desc_pool_vaddr;
 
+   /* guc_id to intel_context lookup */
+   struct xarray context_lookup;
+
/* Control params for fw initialization */
u32 params[GUC_CTL_MAX_DWORDS];
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 6acc1ef34f92..c2b6d27404b7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -65,8 +65,6 @@ static inline struct i915_priolist *to_priolist(struct 
rb_node *rb)
return rb_entry(rb, struct i915_priolist, node);
 }
 
-/* Future patches will use this function */
-__attribute__ ((unused))
 static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index)
 {
struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr;
@@ -76,6 +74,15 @@ static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc 
*guc, u32 index)
return &base[index];
 }
 
+static inline struct intel_context *__get_context(struct intel_guc *guc, u32 
id)
+{
+   struct intel_context *ce = xa_load(&guc->context_lookup, id);
+
+   GEM_BUG_ON(id >= GUC_MAX_LRC_DESCRIPTORS);
+
+   return ce;
+}
+
 static int guc_lrc_desc_pool_create(struct intel_guc *guc)
 {
u32 size;
@@ -96,6 +103,25 @@ static void guc_lrc_desc_pool_destroy(struct intel_guc *guc)
i915_vma_unpin_and_release(&guc->lrc_desc_pool, I915_VMA_RELEASE_MAP);
 }
 
+static inline void reset_lrc_desc(struct intel_guc *guc, u32 id)
+{
+   struct guc_lrc_desc *desc = __get_lrc_desc(guc, id);
+
+   memset(desc, 0, sizeof(*desc));
+   xa_erase_irq(&guc->context_lookup, id);
+}
+
+static inline bool lrc_desc_registered(struct intel_guc *guc, u32 id)
+{
+   return __get_context(guc, id);
+}
+
+static inline void set_lrc_desc_registered(struct intel_guc *guc, u32 id,
+  struct intel_context *ce)
+{
+   xa_store_irq(&guc->context_lookup, id, ce, GFP_ATOMIC);
+}
+
 static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
 {
/* Leaving stub as this function will be used in future patches */
@@ -404,6 +430,8 @@ int intel_guc_submission_init(struct intel_guc *guc)
 */
GEM_BUG_ON(!guc->lrc_desc_pool);
 
+   xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ);
+
return 0;
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 59/97] drm/i915/guc: GuC virtual engines

2021-05-06 Thread Matthew Brost
Implement GuC virtual engines. This is a rather simple implementation:
basically just allocate an engine, set the context enter / exit
functions to virtual engine specific functions, set all other variables
/ functions to the GuC versions, and set the engine mask to that of all
the siblings.
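
The uapi paths now go through an engine-agnostic helper which dispatches
to the backend via the new intel_context_ops hooks; roughly (assumed
shape only, the actual helper lives in intel_engine_cs.c):

    struct intel_context *
    intel_engine_create_virtual(struct intel_engine_cs **siblings,
                                unsigned int count)
    {
            if (!count)
                    return ERR_PTR(-EINVAL);

            /* both execlists and GuC backends provide ->create_virtual */
            return siblings[0]->cops->create_virtual(siblings, count);
    }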

Cc: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  19 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.h   |   1 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  10 +
 drivers/gpu/drm/i915/gt/intel_engine.h|  45 +++-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  14 +
 .../drm/i915/gt/intel_execlists_submission.c  | 186 +++--
 .../drm/i915/gt/intel_execlists_submission.h  |  11 -
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  20 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 253 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |   2 +
 10 files changed, 429 insertions(+), 132 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index d30260ffe2a7..e6bc5c666f93 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -72,7 +72,6 @@
 #include "gt/intel_context_param.h"
 #include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_user.h"
-#include "gt/intel_execlists_submission.h" /* virtual_engine */
 #include "gt/intel_gpu_commands.h"
 #include "gt/intel_ring.h"
 
@@ -1569,9 +1568,6 @@ set_engines__load_balance(struct i915_user_extension 
__user *base, void *data)
if (!HAS_EXECLISTS(i915))
return -ENODEV;
 
-   if (intel_uc_uses_guc_submission(&i915->gt.uc))
-   return -ENODEV; /* not implement yet */
-
if (get_user(idx, &ext->engine_index))
return -EFAULT;
 
@@ -1628,7 +1624,7 @@ set_engines__load_balance(struct i915_user_extension 
__user *base, void *data)
}
}
 
-   ce = intel_execlists_create_virtual(siblings, n);
+   ce = intel_engine_create_virtual(siblings, n);
if (IS_ERR(ce)) {
err = PTR_ERR(ce);
goto out_siblings;
@@ -1724,13 +1720,9 @@ set_engines__bond(struct i915_user_extension __user 
*base, void *data)
 * A non-virtual engine has no siblings to choose between; and
 * a submit fence will always be directed to the one engine.
 */
-   if (intel_engine_is_virtual(virtual)) {
-   err = intel_virtual_engine_attach_bond(virtual,
-  master,
-  bond);
-   if (err)
-   return err;
-   }
+   err = intel_engine_attach_bond(virtual, master, bond);
+   if (err)
+   return err;
}
 
return 0;
@@ -2117,8 +2109,7 @@ static int clone_engines(struct i915_gem_context *dst,
 * the virtual engine instead.
 */
if (intel_engine_is_virtual(engine))
-   clone->engines[n] =
-   intel_execlists_clone_virtual(engine);
+   clone->engines[n] = intel_engine_clone_virtual(engine);
else
clone->engines[n] = intel_context_create(engine);
if (IS_ERR_OR_NULL(clone->engines[n])) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h 
b/drivers/gpu/drm/i915/gem/i915_gem_context.h
index b5c908f3f4f2..ba772762f7b9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
@@ -10,6 +10,7 @@
 #include "i915_gem_context_types.h"
 
 #include "gt/intel_context.h"
+#include "gt/intel_engine.h"
 
 #include "i915_drv.h"
 #include "i915_gem.h"
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index e7af6a2368f8..6945963a31ba 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -47,6 +47,16 @@ struct intel_context_ops {
 
void (*reset)(struct intel_context *ce);
void (*destroy)(struct kref *kref);
+
+   /* virtual engine/context interface */
+   struct intel_context *(*create_virtual)(struct intel_engine_cs **engine,
+   unsigned int count);
+   struct intel_context *(*clone_virtual)(struct intel_engine_cs *engine);
+   struct intel_engine_cs *(*get_sibling)(struct intel_engine_cs *engine,
+  unsigned int sibling);
+   int (*attach_bond)(struct intel_engine_cs *engine,
+  const struct intel_engine_cs *master,
+  const struct intel_engine_cs *sibling);
 };
 
 struct intel_context {
diff --git a/dri

[Intel-gfx] [RFC PATCH 28/97] drm/i915/guc: Kill guc_clients.ct_pool

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

CTB pool is now maintained internally by the GuC as part of its
"private data". No need to allocate a separate buffer and pass it
to the GuC as yet another ADS.

GuC: 57.0.0
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Janusz Krzysztofik 
Cc: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c  | 12 
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 12 +---
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index 648e1767b17a..775f00d706fa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -25,8 +25,6 @@
  *  +---+
  *  | guc_clients_info  |
  *  +---+
- *  | guc_ct_pool_entry[size]   |
- *  +---+
  *  | padding   |
  *  +---+ <== 4K aligned
  *  | private data  |
@@ -39,7 +37,6 @@ struct __guc_ads_blob {
struct guc_policies policies;
struct guc_gt_system_info system_info;
struct guc_clients_info clients_info;
-   struct guc_ct_pool_entry ct_pool[GUC_CT_POOL_SIZE];
 } __packed;
 
 static u32 guc_ads_private_data_size(struct intel_guc *guc)
@@ -67,11 +64,6 @@ static void guc_policies_init(struct guc_policies *policies)
policies->is_valid = 1;
 }
 
-static void guc_ct_pool_entries_init(struct guc_ct_pool_entry *pool, u32 num)
-{
-   memset(pool, 0, num * sizeof(*pool));
-}
-
 static void guc_mapping_table_init(struct intel_gt *gt,
   struct guc_gt_system_info *system_info)
 {
@@ -157,11 +149,7 @@ static void __guc_ads_init(struct intel_guc *guc)
base = intel_guc_ggtt_offset(guc, guc->ads_vma);
 
/* Clients info  */
-   guc_ct_pool_entries_init(blob->ct_pool, ARRAY_SIZE(blob->ct_pool));
-
blob->clients_info.clients_num = 1;
-   blob->clients_info.ct_pool_addr = base + ptr_offset(blob, ct_pool);
-   blob->clients_info.ct_pool_count = ARRAY_SIZE(blob->ct_pool);
 
/* ADS */
blob->ads.scheduler_policies = base + ptr_offset(blob, policies);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 95db4a7d3f4d..301b173a26bc 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -269,19 +269,9 @@ struct guc_gt_system_info {
 } __packed;
 
 /* Clients info */
-struct guc_ct_pool_entry {
-   struct guc_ct_buffer_desc desc;
-   u32 reserved[7];
-} __packed;
-
-#define GUC_CT_POOL_SIZE   2
-
 struct guc_clients_info {
u32 clients_num;
-   u32 reserved0[13];
-   u32 ct_pool_addr;
-   u32 ct_pool_count;
-   u32 reserved[4];
+   u32 reserved[19];
 } __packed;
 
 /* GuC Additional Data Struct */
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 39/97] drm/i915/guc: Increase size of CTB buffers

2021-05-06 Thread Matthew Brost
With the introduction of non-blocking CTBs more than one CTB can be in
flight at a time. Increasing the size of the CTBs should reduce how
often software hits the case where no space is available in the CTB
buffer.
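
As a quick sanity check on the numbers (taken from the comment and
defines added below, purely illustrative):

    /*
     * CTB_H2G_BUFFER_SIZE = SZ_4K and a minimal H2G request is ~16 bytes,
     * so roughly 4096 / 16 = 256 requests can be queued before the send
     * buffer fills.  CTB_G2H_BUFFER_SIZE = 4 * CTB_H2G_BUFFER_SIZE gives
     * the receive side 4x that space for the matching G2H responses.
     */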

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 77dfbc94dcc3..d6895d29ed2d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -63,11 +63,16 @@ static inline struct drm_device *ct_to_drm(struct 
intel_guc_ct *ct)
  *  ++---+--+
  *
  * Size of each `CT Buffer`_ must be multiple of 4K.
- * As we don't expect too many messages, for now use minimum sizes.
+ * We don't expect too many messages in flight at any time, unless we are
+ * using the GuC submission. In that case each request requires a minimum
+ * 16 bytes which gives us a maximum 256 queue'd requests. Hopefully this
+ * enough space to avoid backpressure on the driver. We increase the size
+ * of the receive buffer (relative to the send) to ensure a G2H response
+ * CTB has a landing spot.
  */
 #define CTB_DESC_SIZE  ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE(SZ_4K)
-#define CTB_G2H_BUFFER_SIZE(SZ_4K)
+#define CTB_G2H_BUFFER_SIZE(4 * CTB_H2G_BUFFER_SIZE)
 
 #define MAX_US_STALL_CTB   100
 
@@ -753,7 +758,7 @@ static int ct_read(struct intel_guc_ct *ct, struct 
ct_incoming_msg **msg)
/* beware of buffer wrap case */
if (unlikely(available < 0))
available += size;
-   CT_DEBUG(ct, "available %d (%u:%u)\n", available, head, tail);
+   CT_DEBUG(ct, "available %d (%u:%u:%u)\n", available, head, tail, size);
GEM_BUG_ON(available < 0);
 
header = cmds[head];
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 63/97] drm/i915/guc: Direct all breadcrumbs for a class to single breadcrumbs

2021-05-06 Thread Matthew Brost
With GuC virtual engines, the physical engine on which a request executes
and completes isn't known to the i915. Therefore we can't attach a
request to a physical engine's breadcrumbs. To work around this we create
a single breadcrumbs object per engine class when using GuC submission and
direct all physical engine interrupts to that breadcrumbs object.
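
A minimal sketch of the indirection this adds (illustrative types only, not
the i915 structures): the breadcrumbs object owns its own enable/disable
hooks, so GuC submission can point a whole engine class at one shared
instance.

#include <stdbool.h>
#include <stdio.h>

struct breadcrumbs {
        int refcount;   /* stands in for the kref the patch introduces */
        bool (*irq_enable)(struct breadcrumbs *b);
        void (*irq_disable)(struct breadcrumbs *b);
};

static bool class_irq_enable(struct breadcrumbs *b)
{
        /* A real implementation would unmask the user interrupt on every
         * physical engine of the class; here we only log the intent. */
        printf("enable class-wide user interrupts\n");
        return true;
}

static void class_irq_disable(struct breadcrumbs *b)
{
        printf("disable class-wide user interrupts\n");
}

int main(void)
{
        struct breadcrumbs b = {
                .refcount    = 1,
                .irq_enable  = class_irq_enable,
                .irq_disable = class_irq_disable,
        };

        if (b.irq_enable(&b))           /* arm */
                b.irq_disable(&b);      /* disarm */
        return 0;
}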

Signed-off-by: Matthew Brost 
CC: John Harrison 
---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   | 41 +---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h   | 14 +++-
 .../gpu/drm/i915/gt/intel_breadcrumbs_types.h |  7 ++
 drivers/gpu/drm/i915/gt/intel_engine.h|  3 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 28 +++-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  1 -
 .../drm/i915/gt/intel_execlists_submission.c  |  4 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |  4 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 67 +--
 9 files changed, 133 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c 
b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 38cc42783dfb..2007dc6f6b99 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -15,28 +15,14 @@
 #include "intel_gt_pm.h"
 #include "intel_gt_requests.h"
 
-static bool irq_enable(struct intel_engine_cs *engine)
+static bool irq_enable(struct intel_breadcrumbs *b)
 {
-   if (!engine->irq_enable)
-   return false;
-
-   /* Caller disables interrupts */
-   spin_lock(&engine->gt->irq_lock);
-   engine->irq_enable(engine);
-   spin_unlock(&engine->gt->irq_lock);
-
-   return true;
+   return intel_engine_irq_enable(b->irq_engine);
 }
 
-static void irq_disable(struct intel_engine_cs *engine)
+static void irq_disable(struct intel_breadcrumbs *b)
 {
-   if (!engine->irq_disable)
-   return;
-
-   /* Caller disables interrupts */
-   spin_lock(&engine->gt->irq_lock);
-   engine->irq_disable(engine);
-   spin_unlock(&engine->gt->irq_lock);
+   intel_engine_irq_disable(b->irq_engine);
 }
 
 static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
@@ -57,7 +43,7 @@ static void __intel_breadcrumbs_arm_irq(struct 
intel_breadcrumbs *b)
WRITE_ONCE(b->irq_armed, true);
 
/* Requests may have completed before we could enable the interrupt. */
-   if (!b->irq_enabled++ && irq_enable(b->irq_engine))
+   if (!b->irq_enabled++ && b->irq_enable(b))
irq_work_queue(&b->irq_work);
 }
 
@@ -76,7 +62,7 @@ static void __intel_breadcrumbs_disarm_irq(struct 
intel_breadcrumbs *b)
 {
GEM_BUG_ON(!b->irq_enabled);
if (!--b->irq_enabled)
-   irq_disable(b->irq_engine);
+   b->irq_disable(b);
 
WRITE_ONCE(b->irq_armed, false);
intel_gt_pm_put_async(b->irq_engine->gt);
@@ -281,7 +267,7 @@ intel_breadcrumbs_create(struct intel_engine_cs *irq_engine)
if (!b)
return NULL;
 
-   b->irq_engine = irq_engine;
+   kref_init(&b->ref);
 
spin_lock_init(&b->signalers_lock);
INIT_LIST_HEAD(&b->signalers);
@@ -290,6 +276,10 @@ intel_breadcrumbs_create(struct intel_engine_cs 
*irq_engine)
spin_lock_init(&b->irq_lock);
init_irq_work(&b->irq_work, signal_irq_work);
 
+   b->irq_engine = irq_engine;
+   b->irq_enable = irq_enable;
+   b->irq_disable = irq_disable;
+
return b;
 }
 
@@ -303,9 +293,9 @@ void intel_breadcrumbs_reset(struct intel_breadcrumbs *b)
spin_lock_irqsave(&b->irq_lock, flags);
 
if (b->irq_enabled)
-   irq_enable(b->irq_engine);
+   b->irq_enable(b);
else
-   irq_disable(b->irq_engine);
+   b->irq_disable(b);
 
spin_unlock_irqrestore(&b->irq_lock, flags);
 }
@@ -325,11 +315,14 @@ void __intel_breadcrumbs_park(struct intel_breadcrumbs *b)
}
 }
 
-void intel_breadcrumbs_free(struct intel_breadcrumbs *b)
+void intel_breadcrumbs_free(struct kref *kref)
 {
+   struct intel_breadcrumbs *b = container_of(kref, typeof(*b), ref);
+
irq_work_sync(&b->irq_work);
GEM_BUG_ON(!list_empty(&b->signalers));
GEM_BUG_ON(b->irq_armed);
+
kfree(b);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h 
b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
index 3ce5ce270b04..72105b74663d 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
@@ -17,7 +17,7 @@ struct intel_breadcrumbs;
 
 struct intel_breadcrumbs *
 intel_breadcrumbs_create(struct intel_engine_cs *irq_engine);
-void intel_breadcrumbs_free(struct intel_breadcrumbs *b);
+void intel_breadcrumbs_free(struct kref *kref);
 
 void intel_breadcrumbs_reset(struct intel_breadcrumbs *b);
 void __intel_breadcrumbs_park(struct intel_breadcrumbs *b);
@@ -48,4 +48,16 @@ void i915_request_cancel_breadcrumb(struct i915_request 
*request);
 void intel

[Intel-gfx] [RFC PATCH 25/97] drm/i915/guc: New definition of the CTB descriptor

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Definition of the CTB descriptor has changed, leaving only
minimal shared fields like HEAD/TAIL/STATUS.

Both HEAD and TAIL are now in dwords.

Add some ABI documentation and implement required changes.
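
As a standalone illustration (field names follow the table documented below,
the struct name is made up), keeping HEAD and TAIL in dwords lets the buffer
be treated as a plain dword array and pins the descriptor at 64 bytes:

#include <assert.h>
#include <stdint.h>
#include <string.h>

struct ct_desc_sketch {                 /* mirrors the new minimal layout */
        uint32_t head;                  /* dword offset, updated by receiver */
        uint32_t tail;                  /* dword offset, updated by sender */
        uint32_t status;
        uint32_t reserved[13];
};

int main(void)
{
        uint32_t cmds[1024];            /* CT buffer viewed as dwords */
        struct ct_desc_sketch desc;

        assert(sizeof(desc) == 64);     /* same invariant as the static_assert */

        memset(&desc, 0, sizeof(desc));
        memset(cmds, 0, sizeof(cmds));

        /* With offsets in dwords, writing the next message header is a
         * direct index - no byte/dword conversion needed anywhere. */
        cmds[desc.tail] = 0xdeadbeef;
        desc.tail = (desc.tail + 1) % 1024;
        return 0;
}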

GuC: 57.0.0
GuC: 60.0.0
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 .../gt/uc/abi/guc_communication_ctb_abi.h | 70 ++-
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 70 +--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  2 +-
 3 files changed, 85 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
index d38935f47ecf..c2a069a78e01 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
@@ -7,6 +7,58 @@
 #define _ABI_GUC_COMMUNICATION_CTB_ABI_H
 
 #include 
+#include 
+
+#include "guc_messages_abi.h"
+
+/**
+ * DOC: CT Buffer
+ *
+ * TBD
+ */
+
+/**
+ * DOC: CTB Descriptor
+ *
+ *  
+---+---+--+
+ *  |   | Bits  | Description  
|
+ *  
+===+===+==+
+ *  | 0 |  31:0 | **HEAD** - offset (in dwords) to the last dword that was 
|
+ *  |   |   | read from the `CT Buffer`_.  
|
+ *  |   |   | It can only be updated by the receiver.  
|
+ *  
+---+---+--+
+ *  | 1 |  31:0 | **TAIL** - offset (in dwords) to the last dword that was 
|
+ *  |   |   | written to the `CT Buffer`_. 
|
+ *  |   |   | It can only be updated by the sender.
|
+ *  
+---+---+--+
+ *  | 2 |  31:0 | **STATUS** - status of the CTB   
|
+ *  |   |   |  
|
+ *  |   |   |   - _`GUC_CTB_STATUS_NO_ERROR` = 0 (normal operation)
|
+ *  |   |   |   - _`GUC_CTB_STATUS_OVERFLOW` = 1 (head/tail too large) 
|
+ *  |   |   |   - _`GUC_CTB_STATUS_UNDERFLOW` = 2 (truncated message)  
|
+ *  |   |   |   - _`GUC_CTB_STATUS_MISMATCH` = 4 (head/tail modified)  
|
+ *  |   |   |   - _`GUC_CTB_STATUS_NO_BACKCHANNEL` = 8 
|
+ *  |   |   |   - _`GUC_CTB_STATUS_MALFORMED_MSG` = 16 
|
+ *  
+---+---+--+
+ *  |...|   | RESERVED = MBZ   
|
+ *  
+---+---+--+
+ *  | 15|  31:0 | RESERVED = MBZ   
|
+ *  
+---+---+--+
+ */
+
+struct guc_ct_buffer_desc {
+   u32 head;
+   u32 tail;
+   u32 status;
+#define GUC_CTB_STATUS_NO_ERROR0
+#define GUC_CTB_STATUS_OVERFLOW(1 << 0)
+#define GUC_CTB_STATUS_UNDERFLOW   (1 << 1)
+#define GUC_CTB_STATUS_MISMATCH(1 << 2)
+#define GUC_CTB_STATUS_NO_BACKCHANNEL  (1 << 3)
+#define GUC_CTB_STATUS_MALFORMED_MSG   (1 << 4)
+   u32 reserved[13];
+} __packed;
+static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
 
 /**
  * DOC: CTB based communication
@@ -60,24 +112,6 @@
  * - **flags**, holds various bits to control message handling
  */
 
-/*
- * Describes single command transport buffer.
- * Used by both guc-master and clients.
- */
-struct guc_ct_buffer_desc {
-   u32 addr;   /* gfx address */
-   u64 host_private;   /* host private data */
-   u32 size;   /* size in bytes */
-   u32 head;   /* offset updated by GuC*/
-   u32 tail;   /* offset updated by owner */
-   u32 is_in_error;/* error indicator */
-   u32 reserved1;
-   u32 reserved2;
-   u32 owner;  /* id of the channel owner */
-   u32 owner_sub_id;   /* owner-defined field for extra tracking */
-   u32 reserved[5];
-} __packed;
-
 /* Type of command transport buffer */
 #define INTEL_GUC_CT_BUFFER_TYPE_SEND  0x0u
 #define INTEL_GUC_CT_BUFFER_TYPE_RECV  0x1u
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 178f73ab2c96..282df9706912 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -112,32 +112,28 @@ static inline const char *guc_ct_buffer_type_to_str(u32 
type)
}
 }
 
-static void guc_ct_buffer_desc_init(struct guc_ct_buffer_des

[Intel-gfx] [RFC PATCH 15/97] drm/i915/guc: Relax CTB response timeout

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

In an upcoming patch we will allow more CTB requests to be sent in
parallel to the GuC for processing, so we shouldn't assume any more
that the GuC will always reply within 10ms.

Use a bigger value based on CONFIG_DRM_I915_HEARTBEAT_INTERVAL instead.
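
A self-contained sketch of the timeout choice (HEARTBEAT_INTERVAL_MS below is
a stand-in value for the Kconfig option):

#include <stdio.h>

#define HEARTBEAT_INTERVAL_MS 2500      /* stand-in for the Kconfig value */

static long ct_response_timeout_ms(void)
{
        long timeout = 10;              /* legacy "no command takes >10ms" bound */

        /* Other CT requests may be queued ahead of ours, so fall back to
         * the heartbeat interval when that is larger: max(10, heartbeat). */
        if (HEARTBEAT_INTERVAL_MS > timeout)
                timeout = HEARTBEAT_INTERVAL_MS;
        return timeout;
}

int main(void)
{
        printf("CT response timeout: %ld ms\n", ct_response_timeout_ms());
        return 0;
}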

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index c87a0a8bef26..a4b2e7fe318b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -436,17 +436,23 @@ static int ct_write(struct intel_guc_ct *ct,
  */
 static int wait_for_ct_request_update(struct ct_request *req, u32 *status)
 {
+   long timeout;
int err;
 
/*
 * Fast commands should complete in less than 10us, so sample quickly
 * up to that length of time, then switch to a slower sleep-wait loop.
 * No GuC command should ever take longer than 10ms.
+*
+* However, there might be other CT requests in flight before this one,
+* so use @CONFIG_DRM_I915_HEARTBEAT_INTERVAL as backup timeout value.
 */
+   timeout = max(10, CONFIG_DRM_I915_HEARTBEAT_INTERVAL);
+
 #define done INTEL_GUC_MSG_IS_RESPONSE(READ_ONCE(req->status))
err = wait_for_us(done, 10);
if (err)
-   err = wait_for(done, 10);
+   err = wait_for(done, timeout);
 #undef done
 
if (unlikely(err))
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 34/97] drm/i915/guc: Use guc_class instead of engine_class in fw interface

2021-05-06 Thread Matthew Brost
From: Daniele Ceraolo Spurio 

GuC has its own defines for the engine classes. They currently map 1:1
to the defines used by the driver, but there is no guarantee this will
continue in the future. Given that we've been caught off-guard in the
past by similar divergences, we can prepare for such changes by
introducing helper functions to convert from engine class to GuC class and
back again.
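
A minimal sketch of the conversion helper (the class IDs below are made up
for illustration; the real values come from the i915 and GuC interface
headers):

#include <stdint.h>
#include <stdio.h>

enum engine_class { RENDER, COPY, VIDEO, VIDEO_ENHANCE };       /* illustrative */
enum guc_class { GUC_RENDER, GUC_VIDEO, GUC_VIDEOENHANCE, GUC_BLITTER };

static uint8_t engine_class_to_guc_class(uint8_t class)
{
        static const uint8_t map[] = {
                [RENDER]        = GUC_RENDER,
                [COPY]          = GUC_BLITTER,
                [VIDEO]         = GUC_VIDEO,
                [VIDEO_ENHANCE] = GUC_VIDEOENHANCE,
        };
        return map[class];
}

int main(void)
{
        /* The two namespaces happen to diverge for the copy engine in this
         * made-up mapping - exactly the drift the helpers guard against. */
        printf("COPY -> GuC class %u\n", engine_class_to_guc_class(COPY));
        return 0;
}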

Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
Cc: John Harrison 
Cc: Michal Wajdeczko 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c   |  6 +++--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c  | 20 +---
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 26 +
 3 files changed, 42 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index c88b792c1ab5..7866ff0c2673 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -289,6 +289,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum 
intel_engine_id id)
const struct engine_info *info = &intel_engines[id];
struct drm_i915_private *i915 = gt->i915;
struct intel_engine_cs *engine;
+   u8 guc_class;
 
BUILD_BUG_ON(MAX_ENGINE_CLASS >= BIT(GEN11_ENGINE_CLASS_WIDTH));
BUILD_BUG_ON(MAX_ENGINE_INSTANCE >= BIT(GEN11_ENGINE_INSTANCE_WIDTH));
@@ -317,9 +318,10 @@ static int intel_engine_setup(struct intel_gt *gt, enum 
intel_engine_id id)
engine->i915 = i915;
engine->gt = gt;
engine->uncore = gt->uncore;
-   engine->mmio_base = __engine_mmio_base(i915, info->mmio_bases);
engine->hw_id = info->hw_id;
-   engine->guc_id = MAKE_GUC_ID(info->class, info->instance);
+   guc_class = engine_class_to_guc_class(info->class);
+   engine->guc_id = MAKE_GUC_ID(guc_class, info->instance);
+   engine->mmio_base = __engine_mmio_base(i915, info->mmio_bases);
 
engine->irq_handler = nop_irq_handler;
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
index 775f00d706fa..ecd18531b40a 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c
@@ -6,6 +6,7 @@
 #include "gt/intel_gt.h"
 #include "gt/intel_lrc.h"
 #include "intel_guc_ads.h"
+#include "intel_guc_fwif.h"
 #include "intel_uc.h"
 #include "i915_drv.h"
 
@@ -78,7 +79,7 @@ static void guc_mapping_table_init(struct intel_gt *gt,
GUC_MAX_INSTANCES_PER_CLASS;
 
for_each_engine(engine, gt, id) {
-   u8 guc_class = engine->class;
+   u8 guc_class = engine_class_to_guc_class(engine->class);
 
system_info->mapping_table[guc_class][engine->instance] =
engine->instance;
@@ -98,7 +99,7 @@ static void __guc_ads_init(struct intel_guc *guc)
struct __guc_ads_blob *blob = guc->ads_blob;
const u32 skipped_size = LRC_PPHWSP_SZ * PAGE_SIZE + LR_HW_CONTEXT_SIZE;
u32 base;
-   u8 engine_class;
+   u8 engine_class, guc_class;
 
/* GuC scheduling policies */
guc_policies_init(&blob->policies);
@@ -114,22 +115,25 @@ static void __guc_ads_init(struct intel_guc *guc)
for (engine_class = 0; engine_class <= MAX_ENGINE_CLASS; 
++engine_class) {
if (engine_class == OTHER_CLASS)
continue;
+
+   guc_class = engine_class_to_guc_class(engine_class);
+
/*
 * TODO: Set context pointer to default state to allow
 * GuC to re-init guilty contexts after internal reset.
 */
-   blob->ads.golden_context_lrca[engine_class] = 0;
-   blob->ads.eng_state_size[engine_class] =
+   blob->ads.golden_context_lrca[guc_class] = 0;
+   blob->ads.eng_state_size[guc_class] =
intel_engine_context_size(guc_to_gt(guc),
  engine_class) -
skipped_size;
}
 
/* System info */
-   blob->system_info.engine_enabled_masks[RENDER_CLASS] = 1;
-   blob->system_info.engine_enabled_masks[COPY_ENGINE_CLASS] = 1;
-   blob->system_info.engine_enabled_masks[VIDEO_DECODE_CLASS] = 
VDBOX_MASK(gt);
-   blob->system_info.engine_enabled_masks[VIDEO_ENHANCEMENT_CLASS] = 
VEBOX_MASK(gt);
+   blob->system_info.engine_enabled_masks[GUC_RENDER_CLASS] = 1;
+   blob->system_info.engine_enabled_masks[GUC_BLITTER_CLASS] = 1;
+   blob->system_info.engine_enabled_masks[GUC_VIDEO_CLASS] = 
VDBOX_MASK(gt);
+   blob->system_info.engine_enabled_masks[GUC_VIDEOENHANCE_CLASS] = 
VEBOX_MASK(gt);
 

blob->system_info.generic_gt_sysinfo[GUC_GENERIC_GT_SYSINFO_SLICE_ENABLED] =
hweight8(gt->info.sseu.slice_mask);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/

[Intel-gfx] [RFC PATCH 27/97] drm/i915/guc: New CTB based communication

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

The format of CTB messages has changed:
 - support for multiple formats
 - message fence is now part of the header
 - reuse of unified HXG message formats
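
A standalone sketch of packing the new header, following the bit layout
documented in the patch (FENCE 31:16, FORMAT 15:12, NUM_DWORDS 7:0); the
macro names here are simplified, not the driver's:

#include <stdint.h>
#include <stdio.h>

#define CTB_MSG_FENCE(f)        (((uint32_t)(f) & 0xffff) << 16)
#define CTB_MSG_FORMAT_HXG      (0u << 12)
#define CTB_MSG_NUM_DWORDS(n)   ((uint32_t)(n) & 0xff)

int main(void)
{
        uint16_t fence = 0x1234;        /* per-message identifier */
        uint32_t payload_dwords = 3;    /* embedded HXG message length */

        uint32_t header = CTB_MSG_FENCE(fence) |
                          CTB_MSG_FORMAT_HXG |
                          CTB_MSG_NUM_DWORDS(payload_dwords);

        printf("CTB header: 0x%08x\n", header);         /* 0x12340003 */
        return 0;
}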

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Piotr Piórkowski 
---
 .../gt/uc/abi/guc_communication_ctb_abi.h |  56 +
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 193 +++---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |   2 +-
 3 files changed, 134 insertions(+), 117 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
index 127b256a662c..92660726c094 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
@@ -60,6 +60,62 @@ struct guc_ct_buffer_desc {
 } __packed;
 static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
 
+/**
+ * DOC: CTB Message
+ *
+ *  
+---+---+--+
+ *  |   | Bits  | Description  
|
+ *  
+===+===+==+
+ *  | 0 | 31:16 | **FENCE** - message identifier   
|
+ *  |   
+---+--+
+ *  |   | 15:12 | **FORMAT** - format of the CTB message   
|
+ *  |   |   |  - _`GUC_CTB_FORMAT_HXG` = 0 - see `CTB HXG Message`_
|
+ *  |   
+---+--+
+ *  |   |  11:8 | **RESERVED** 
|
+ *  |   
+---+--+
+ *  |   |   7:0 | **NUM_DWORDS** - length of the CTB message (w/o header)  
|
+ *  
+---+---+--+
+ *  | 1 |  31:0 | optional (depends on FORMAT) 
|
+ *  +---+---+  
|
+ *  |...|   |  
|
+ *  +---+---+  
|
+ *  | n |  31:0 |  
|
+ *  
+---+---+--+
+ */
+
+#define GUC_CTB_MSG_MIN_LEN1u
+#define GUC_CTB_MSG_MAX_LEN256u
+#define GUC_CTB_MSG_0_FENCE(0x << 16)
+#define GUC_CTB_MSG_0_FORMAT   (0xf << 12)
+#define   GUC_CTB_FORMAT_HXG   0u
+#define GUC_CTB_MSG_0_RESERVED (0xf << 8)
+#define GUC_CTB_MSG_0_NUM_DWORDS   (0xff << 0)
+
+/**
+ * DOC: CTB HXG Message
+ *
+ *  
+---+---+--+
+ *  |   | Bits  | Description  
|
+ *  
+===+===+==+
+ *  | 0 | 31:16 | FENCE
|
+ *  |   
+---+--+
+ *  |   | 15:12 | FORMAT = GUC_CTB_FORMAT_HXG_ 
|
+ *  |   
+---+--+
+ *  |   |  11:8 | RESERVED = MBZ   
|
+ *  |   
+---+--+
+ *  |   |   7:0 | NUM_DWORDS = length (in dwords) of the embedded HXG message  
|
+ *  
+---+---+--+
+ *  | 1 |  31:0 |  ++  
|
+ *  +---+---+  ||  
|
+ *  |...|   |  |  Embedded `HXG Message`_   |  
|
+ *  +---+---+  ||  
|
+ *  | n |  31:0 |  ++  
|
+ *  
+---+---+--+
+ */
+
+#define GUC_CTB_HXG_MSG_MIN_LEN(GUC_CTB_MSG_MIN_LEN + 
GUC_HXG_MSG_MIN_LEN)
+#define GUC_CTB_HXG_MSG_MAX_LENGUC_CTB_MSG_MAX_LEN
+
 /**
  * DOC: CTB based communication
  *
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index e25b49a45107..217ab3ebd1af 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -343,24 +343,6 @@ static u32 ct_get_next_fence(struct intel_guc_ct *ct)
return ++ct->requests.last_fence;
 }
 
-/**
- * DOC: CTB Host to GuC request
- *
- * Format of the CTB Host to GuC request message is as follows::
- *
- *

[Intel-gfx] [RFC PATCH 64/97] drm/i915/guc: Reset implementation for new GuC interface

2021-05-06 Thread Matthew Brost
Reset implementation for the new GuC interface. This is the legacy reset
implementation, which is called when the i915 owns the engine hang check.
Future patches will offload the engine hang check to the GuC, but we will
continue to maintain this legacy path as a fallback; it is also required
if the GuC dies.

With the new GuC interface it is not possible to reset individual
engines - it is only possible to reset the GPU entirely. This patch
forces an entire chip reset if any engine hangs.
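
An illustrative-only sketch of the coarser reset granularity (the engine
masks and helper below are made up, not the driver's reset code):

#include <stdint.h>
#include <stdio.h>

#define ALL_ENGINES 0xffffffffu

/* With GuC submission the finest reset granularity is the whole GT, so any
 * per-engine hang mask is widened to a full chip reset. */
static uint32_t reset_mask(uint32_t hung_engines, int using_guc)
{
        return using_guc && hung_engines ? ALL_ENGINES : hung_engines;
}

int main(void)
{
        uint32_t rcs0 = 1u << 0;        /* hypothetical engine bit */

        printf("execlists: reset 0x%08x\n", reset_mask(rcs0, 0));
        printf("GuC:       reset 0x%08x\n", reset_mask(rcs0, 1));
        return 0;
}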

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |   3 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |   7 +
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   6 +
 .../drm/i915/gt/intel_execlists_submission.c  |  40 ++
 drivers/gpu/drm/i915/gt/intel_gt_pm.c |   6 +-
 drivers/gpu/drm/i915/gt/intel_reset.c |  18 +-
 .../gpu/drm/i915/gt/intel_ring_submission.c   |  22 +
 drivers/gpu/drm/i915/gt/mock_engine.c |  31 +
 drivers/gpu/drm/i915/gt/uc/intel_guc.c|  16 +-
 drivers/gpu/drm/i915/gt/uc/intel_guc.h|   8 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 580 ++
 drivers/gpu/drm/i915/gt/uc/intel_uc.c |  34 +-
 drivers/gpu/drm/i915/gt/uc/intel_uc.h |   3 +
 drivers/gpu/drm/i915/i915_request.c   |  41 +-
 drivers/gpu/drm/i915/i915_request.h   |   2 +
 15 files changed, 643 insertions(+), 174 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index b24a1b7a3f88..2f01437056a8 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -392,6 +392,9 @@ intel_context_init(struct intel_context *ce, struct 
intel_engine_cs *engine)
spin_lock_init(&ce->guc_state.lock);
INIT_LIST_HEAD(&ce->guc_state.fences);
 
+   spin_lock_init(&ce->guc_active.lock);
+   INIT_LIST_HEAD(&ce->guc_active.requests);
+
ce->guc_id = GUC_INVALID_LRC_ID;
INIT_LIST_HEAD(&ce->guc_id_link);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 6945963a31ba..b63c8cf7823b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -165,6 +165,13 @@ struct intel_context {
struct list_head fences;
} guc_state;
 
+   struct {
+   /** lock: protects everything in guc_active */
+   spinlock_t lock;
+   /** requests: active requests on this context */
+   struct list_head requests;
+   } guc_active;
+
/* GuC scheduling state that does not require a lock. */
atomic_t guc_sched_state_no_lock;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index f7b6eed586ce..b84562b2708b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -432,6 +432,12 @@ struct intel_engine_cs {
 */
void(*release)(struct intel_engine_cs *engine);
 
+   /*
+* Add / remove request from engine active tracking
+*/
+   void(*add_active_request)(struct i915_request *rq);
+   void(*remove_active_request)(struct i915_request *rq);
+
struct intel_engine_execlists execlists;
 
/*
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 396b1356ea3e..54518b64bdbd 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -3117,6 +3117,42 @@ static void execlists_park(struct intel_engine_cs 
*engine)
cancel_timer(&engine->execlists.preempt);
 }
 
+static void add_to_engine(struct i915_request *rq)
+{
+   lockdep_assert_held(&rq->engine->sched_engine->lock);
+   list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests);
+}
+
+static void remove_from_engine(struct i915_request *rq)
+{
+   struct intel_engine_cs *engine, *locked;
+
+   /*
+* Virtual engines complicate acquiring the engine timeline lock,
+* as their rq->engine pointer is not stable until under that
+* engine lock. The simple ploy we use is to take the lock then
+* check that the rq still belongs to the newly locked engine.
+*/
+   locked = READ_ONCE(rq->engine);
+   spin_lock_irq(&locked->sched_engine->lock);
+   while (unlikely(locked != (engine = READ_ONCE(rq->engine {
+   spin_unlock(&locked->sched_engine->lock);
+   spin_lock(&engine->sched_engine->lock);
+   locked = engine;
+   }
+   list_del_init(&rq->sched.link);
+
+   clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+   clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
+
+   /* Prevent further __a

[Intel-gfx] [RFC PATCH 24/97] drm/i915/guc: Add flag for mark broken CTB

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Once the CTB descriptor is found in an error state, whether set by the
GuC or by us, there is no need to continue checking the descriptor any
more; we can rely on our internal flag.
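
Minimal sketch of the idea, assuming a simplified buffer struct: the first
error latches a driver-private flag so later calls fail fast without
re-reading the shared descriptor.

#include <stdbool.h>
#include <stdio.h>

struct ctb {
        bool broken;             /* driver-private, set once */
        unsigned int desc_error; /* stand-in for desc->is_in_error */
};

static int ctb_write(struct ctb *ctb)
{
        if (ctb->broken)
                return -32;      /* -EPIPE: fail fast, no descriptor access */

        if (ctb->desc_error) {
                ctb->broken = true;
                return -32;
        }
        return 0;
}

int main(void)
{
        struct ctb ctb = { .desc_error = 1 };

        printf("first write:  %d\n", ctb_write(&ctb)); /* detects the error */
        printf("second write: %d\n", ctb_write(&ctb)); /* short-circuits   */
        return 0;
}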

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Piotr Piórkowski 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 13 +++--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  2 ++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 1afdeac683b5..178f73ab2c96 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -123,6 +123,7 @@ static void guc_ct_buffer_desc_init(struct 
guc_ct_buffer_desc *desc,
 
 static void guc_ct_buffer_reset(struct intel_guc_ct_buffer *ctb, u32 cmds_addr)
 {
+   ctb->broken = false;
guc_ct_buffer_desc_init(ctb->desc, cmds_addr, ctb->size);
 }
 
@@ -365,9 +366,12 @@ static int ct_write(struct intel_guc_ct *ct,
u32 *cmds = ctb->cmds;
unsigned int i;
 
-   if (unlikely(desc->is_in_error))
+   if (unlikely(ctb->broken))
return -EPIPE;
 
+   if (unlikely(desc->is_in_error))
+   goto corrupted;
+
if (unlikely(!IS_ALIGNED(head | tail, 4) ||
 (tail | head) >= size))
goto corrupted;
@@ -423,6 +427,7 @@ static int ct_write(struct intel_guc_ct *ct,
CT_ERROR(ct, "Corrupted descriptor addr=%#x head=%u tail=%u size=%u\n",
 desc->addr, desc->head, desc->tail, desc->size);
desc->is_in_error = 1;
+   ctb->broken = true;
return -EPIPE;
 }
 
@@ -608,9 +613,12 @@ static int ct_read(struct intel_guc_ct *ct, struct 
ct_incoming_msg **msg)
unsigned int i;
u32 header;
 
-   if (unlikely(desc->is_in_error))
+   if (unlikely(ctb->broken))
return -EPIPE;
 
+   if (unlikely(desc->is_in_error))
+   goto corrupted;
+
if (unlikely(!IS_ALIGNED(head | tail, 4) ||
 (tail | head) >= size))
goto corrupted;
@@ -674,6 +682,7 @@ static int ct_read(struct intel_guc_ct *ct, struct 
ct_incoming_msg **msg)
CT_ERROR(ct, "Corrupted descriptor addr=%#x head=%u tail=%u size=%u\n",
 desc->addr, desc->head, desc->tail, desc->size);
desc->is_in_error = 1;
+   ctb->broken = true;
return -EPIPE;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
index cb222f202301..7d3cd375d6a7 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
@@ -32,12 +32,14 @@ struct intel_guc;
  * @desc: pointer to the buffer descriptor
  * @cmds: pointer to the commands buffer
  * @size: size of the commands buffer
+ * @broken: flag to indicate if descriptor data is broken
  */
 struct intel_guc_ct_buffer {
spinlock_t lock;
struct guc_ct_buffer_desc *desc;
u32 *cmds;
u32 size;
+   bool broken;
 };
 
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 47/97] drm/i915/guc: Insert fence on context when deregistering

2021-05-06 Thread Matthew Brost
Sometimes during context pinning, a context with the same guc_id is
still registered with the GuC. In this case a deregister must be issued
before the new context can be registered. A fence is inserted on all
requests while the deregister is in flight. Once the G2H is received
indicating the deregistration is complete, the context is registered and
the fence is released.
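
A toy model of the fence flow (not the driver's types): while the
deregistration G2H is outstanding, each new request takes an extra hold
that the completion handler later releases.

#include <stdbool.h>
#include <stdio.h>

struct ctx {
        bool dereg_pending;
        int held_requests;
};

static void request_alloc(struct ctx *c)
{
        if (c->dereg_pending)
                c->held_requests++;     /* i915_sw_fence_await() equivalent */
        else
                printf("request submitted immediately\n");
}

static void dereg_done_g2h(struct ctx *c)
{
        printf("releasing %d held request(s)\n", c->held_requests);
        c->held_requests = 0;           /* i915_sw_fence_complete() per request */
        c->dereg_pending = false;
}

int main(void)
{
        struct ctx c = { .dereg_pending = true };

        request_alloc(&c);
        request_alloc(&c);
        dereg_done_g2h(&c);
        request_alloc(&c);
        return 0;
}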

Cc: John Harrison 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |  1 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  5 ++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 51 ++-
 drivers/gpu/drm/i915/i915_request.h   |  8 +++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 2b68af16222c..f750c826e19d 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -384,6 +384,7 @@ intel_context_init(struct intel_context *ce, struct 
intel_engine_cs *engine)
mutex_init(&ce->pin_mutex);
 
spin_lock_init(&ce->guc_state.lock);
+   INIT_LIST_HEAD(&ce->guc_state.fences);
 
ce->guc_id = GUC_INVALID_LRC_ID;
INIT_LIST_HEAD(&ce->guc_id_link);
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index ce7c69b34cd1..beafe55a9101 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -146,6 +146,11 @@ struct intel_context {
 * submission
 */
u8 sched_state;
+   /*
+* fences: maintains a list of requests that have a submit
+* fence related to GuC submission
+*/
+   struct list_head fences;
} guc_state;
 
/* GuC scheduling state that does not require a lock. */
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index eada9ffc1a54..b4c439025a5f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -927,6 +927,30 @@ static const struct intel_context_ops guc_context_ops = {
.destroy = guc_context_destroy,
 };
 
+static void __guc_signal_context_fence(struct intel_context *ce)
+{
+   struct i915_request *rq;
+
+   lockdep_assert_held(&ce->guc_state.lock);
+
+   list_for_each_entry(rq, &ce->guc_state.fences, guc_fence_link)
+   i915_sw_fence_complete(&rq->submit);
+
+   INIT_LIST_HEAD(&ce->guc_state.fences);
+}
+
+static void guc_signal_context_fence(struct intel_context *ce)
+{
+   unsigned long flags;
+
+   GEM_BUG_ON(!context_wait_for_deregister_to_register(ce));
+
+   spin_lock_irqsave(&ce->guc_state.lock, flags);
+   clr_context_wait_for_deregister_to_register(ce);
+   __guc_signal_context_fence(ce);
+   spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+}
+
 static bool context_needs_register(struct intel_context *ce, bool new_guc_id)
 {
return new_guc_id || test_bit(CONTEXT_LRCA_DIRTY, &ce->flags) ||
@@ -937,6 +961,7 @@ static int guc_request_alloc(struct i915_request *rq)
 {
struct intel_context *ce = rq->context;
struct intel_guc *guc = ce_to_guc(ce);
+   unsigned long flags;
int ret;
 
GEM_BUG_ON(!intel_context_is_pinned(rq->context));
@@ -981,7 +1006,7 @@ static int guc_request_alloc(struct i915_request *rq)
 * increment (in pin_guc_id) is needed to seal a race with unpin_guc_id.
 */
if (atomic_add_unless(&ce->guc_id_ref, 1, 0))
-   return 0;
+   goto out;
 
ret = pin_guc_id(guc, ce);  /* returns 1 if new guc_id assigned */
if (unlikely(ret < 0))
@@ -998,6 +1023,28 @@ static int guc_request_alloc(struct i915_request *rq)
 
clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags);
 
+out:
+   /*
+* We block all requests on this context if a G2H is pending for a
+* context deregistration as the GuC will fail a context registration
+* while this G2H is pending. Once a G2H returns, the fence is released
+* that is blocking these requests (see guc_signal_context_fence).
+*
+* We can safely check the below field outside of the lock as it isn't
+* possible for this field to transition from being clear to set but
+* converse is possible, hence the need for the check within the lock.
+*/
+   if (likely(!context_wait_for_deregister_to_register(ce)))
+   return 0;
+
+   spin_lock_irqsave(&ce->guc_state.lock, flags);
+   if (context_wait_for_deregister_to_register(ce)) {
+   i915_sw_fence_await(&rq->submit);
+
+   list_add_tail(&rq->guc_fence_link, &ce->guc_state.fences);
+   }
+   spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+
return 0;
 }
 
@@ -129

[Intel-gfx] [RFC PATCH 08/97] drm/i915/guc: Keep strict GuC ABI definitions

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Our fwif.h file is now a mix of strict firmware ABI definitions and
a set of our own helpers. In anticipation of upcoming changes to the GuC
interface, try to keep them separate in smaller, maintainable files.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Michał Winiarski 
---
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  |  51 +
 .../gt/uc/abi/guc_communication_ctb_abi.h | 106 +
 .../gt/uc/abi/guc_communication_mmio_abi.h|  52 +
 .../gpu/drm/i915/gt/uc/abi/guc_errors_abi.h   |  14 ++
 .../gpu/drm/i915/gt/uc/abi/guc_messages_abi.h |  21 ++
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h   | 203 +-
 6 files changed, 250 insertions(+), 197 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h
 create mode 100644 drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
new file mode 100644
index ..90efef8a73e4
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2014-2021 Intel Corporation
+ */
+
+#ifndef _ABI_GUC_ACTIONS_ABI_H
+#define _ABI_GUC_ACTIONS_ABI_H
+
+enum intel_guc_action {
+   INTEL_GUC_ACTION_DEFAULT = 0x0,
+   INTEL_GUC_ACTION_REQUEST_PREEMPTION = 0x2,
+   INTEL_GUC_ACTION_REQUEST_ENGINE_RESET = 0x3,
+   INTEL_GUC_ACTION_ALLOCATE_DOORBELL = 0x10,
+   INTEL_GUC_ACTION_DEALLOCATE_DOORBELL = 0x20,
+   INTEL_GUC_ACTION_LOG_BUFFER_FILE_FLUSH_COMPLETE = 0x30,
+   INTEL_GUC_ACTION_UK_LOG_ENABLE_LOGGING = 0x40,
+   INTEL_GUC_ACTION_FORCE_LOG_BUFFER_FLUSH = 0x302,
+   INTEL_GUC_ACTION_ENTER_S_STATE = 0x501,
+   INTEL_GUC_ACTION_EXIT_S_STATE = 0x502,
+   INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
+   INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
+   INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
+   INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
+   INTEL_GUC_ACTION_LIMIT
+};
+
+enum intel_guc_preempt_options {
+   INTEL_GUC_PREEMPT_OPTION_DROP_WORK_Q = 0x4,
+   INTEL_GUC_PREEMPT_OPTION_DROP_SUBMIT_Q = 0x8,
+};
+
+enum intel_guc_report_status {
+   INTEL_GUC_REPORT_STATUS_UNKNOWN = 0x0,
+   INTEL_GUC_REPORT_STATUS_ACKED = 0x1,
+   INTEL_GUC_REPORT_STATUS_ERROR = 0x2,
+   INTEL_GUC_REPORT_STATUS_COMPLETE = 0x4,
+};
+
+enum intel_guc_sleep_state_status {
+   INTEL_GUC_SLEEP_STATE_SUCCESS = 0x1,
+   INTEL_GUC_SLEEP_STATE_PREEMPT_TO_IDLE_FAILED = 0x2,
+   INTEL_GUC_SLEEP_STATE_ENGINE_RESET_FAILED = 0x3
+#define INTEL_GUC_SLEEP_STATE_INVALID_MASK 0x8000
+};
+
+#define GUC_LOG_CONTROL_LOGGING_ENABLED(1 << 0)
+#define GUC_LOG_CONTROL_VERBOSITY_SHIFT4
+#define GUC_LOG_CONTROL_VERBOSITY_MASK (0xF << GUC_LOG_CONTROL_VERBOSITY_SHIFT)
+#define GUC_LOG_CONTROL_DEFAULT_LOGGING(1 << 8)
+
+#endif /* _ABI_GUC_ACTIONS_ABI_H */
diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
new file mode 100644
index ..ebd8c3e0e4bb
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2014-2021 Intel Corporation
+ */
+
+#ifndef _ABI_GUC_COMMUNICATION_CTB_ABI_H
+#define _ABI_GUC_COMMUNICATION_CTB_ABI_H
+
+#include 
+
+/**
+ * DOC: CTB based communication
+ *
+ * The CTB (command transport buffer) communication between Host and GuC
+ * is based on u32 data stream written to the shared buffer. One buffer can
+ * be used to transmit data only in one direction (one-directional channel).
+ *
+ * Current status of the each buffer is stored in the buffer descriptor.
+ * Buffer descriptor holds tail and head fields that represents active data
+ * stream. The tail field is updated by the data producer (sender), and head
+ * field is updated by the data consumer (receiver)::
+ *
+ *  ++
+ *  | DESCRIPTOR |  +=+++
+ *  ++  | | MESSAGE(s) ||
+ *  | address|->+=+++
+ *  ++
+ *  | head   |  ^-head^
+ *  ++
+ *  | tail   |  ^-tail-^
+ *  ++
+ *  | size   |  ^---size^
+ *  ++
+ *
+ * Each message in data stream starts with the single u32 treated as a header,
+ * followed by optional set of u32 data that makes message specific payload::
+ *
+ *  +

[Intel-gfx] [RFC PATCH 52/97] drm/i915/guc: Ensure request ordering via completion fences

2021-05-06 Thread Matthew Brost
If two requests are on the same ring, they are explicitly ordered by the
HW, so a submission fence is sufficient to ensure ordering when using
the new GuC submission interface. Conversely, if two requests share a
timeline and are on the same physical engine but different contexts, this
doesn't ensure ordering on the new GuC submission interface. So, a
completion fence needs to be used to ensure ordering.
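
A simplified, standalone sketch of the resulting ordering decision (the
booleans stand in for the engine/context checks in the real code, and the
return strings are purely illustrative):

#include <stdbool.h>
#include <stdio.h>

/* With GuC submission only requests on the same context (same ring) get
 * ordered by a submit fence; everything else must wait for completion. */
static const char *ordering(bool uses_guc, bool same_context, bool same_engine)
{
        if (uses_guc)
                return same_context ? "submit fence" : "completion wait";
        return same_engine ? "submit fence" : "completion wait";
}

int main(void)
{
        printf("GuC, shared timeline, different context: %s\n",
               ordering(true, false, true));
        printf("execlists, same engine:                  %s\n",
               ordering(false, false, true));
        return 0;
}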

Signed-off-by: John Harrison 
Signed-off-by: Matthew Brost 
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c   |  1 -
 drivers/gpu/drm/i915/i915_request.c | 17 +
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 885f14bfe3b9..580535b02eb1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -929,7 +929,6 @@ static void guc_context_sched_disable(struct intel_context 
*ce)
 * request doesn't slip through the 'context_pending_disable' fence.
 */
if (unlikely(atomic_add_unless(&ce->pin_count, -2, 2))) {
-   spin_unlock_irqrestore(&ce->guc_state.lock, flags);
return;
}
guc_id = prep_context_pending_disable(ce);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 56860b7d065b..3a8f6ec0c32d 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -444,6 +444,7 @@ void i915_request_retire_upto(struct i915_request *rq)
 
do {
tmp = list_first_entry(&tl->requests, typeof(*tmp), link);
+   GEM_BUG_ON(!i915_request_completed(tmp));
} while (i915_request_retire(tmp) && tmp != rq);
 }
 
@@ -1405,6 +1406,9 @@ i915_request_await_external(struct i915_request *rq, 
struct dma_fence *fence)
return err;
 }
 
+static int
+i915_request_await_request(struct i915_request *to, struct i915_request *from);
+
 int
 i915_request_await_execution(struct i915_request *rq,
 struct dma_fence *fence,
@@ -1464,12 +1468,13 @@ await_request_submit(struct i915_request *to, struct 
i915_request *from)
 * the waiter to be submitted immediately to the physical engine
 * as it may then bypass the virtual request.
 */
-   if (to->engine == READ_ONCE(from->engine))
+   if (to->engine == READ_ONCE(from->engine)) {
return i915_sw_fence_await_sw_fence_gfp(&to->submit,
&from->submit,
I915_FENCE_GFP);
-   else
+   } else {
return __i915_request_await_execution(to, from, NULL);
+   }
 }
 
 static int
@@ -1493,7 +1498,8 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return ret;
}
 
-   if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
+   if (!intel_engine_uses_guc(to->engine) &&
+   is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
ret = await_request_submit(to, from);
else
ret = emit_semaphore_wait(to, from, I915_FENCE_GFP);
@@ -1654,6 +1660,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
prev = to_request(__i915_active_fence_set(&timeline->last_request,
  &rq->fence));
if (prev && !__i915_request_is_complete(prev)) {
+   bool uses_guc = intel_engine_uses_guc(rq->engine);
+
/*
 * The requests are supposed to be kept in order. However,
 * we need to be wary in case the timeline->last_request
@@ -1664,7 +1672,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
   i915_seqno_passed(prev->fence.seqno,
 rq->fence.seqno));
 
-   if (is_power_of_2(READ_ONCE(prev->engine)->mask | 
rq->engine->mask))
+   if ((!uses_guc && is_power_of_2(READ_ONCE(prev->engine)->mask | 
rq->engine->mask)) ||
+   (uses_guc && prev->context == rq->context))
i915_sw_fence_await_sw_fence(&rq->submit,
 &prev->submit,
 &rq->submitq);
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 07/97] drm/i915/guc: Remove sample_forcewake h2g action

2021-05-06 Thread Matthew Brost
From: Rodrigo Vivi 

This action has been a no-op on the GuC side for a few versions already
and is getting removed entirely in an upcoming version.

Time to remove it before we face communication issues.

Cc:  Vinay Belgaumkar 
Signed-off-by: Rodrigo Vivi 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc.c  | 16 
 drivers/gpu/drm/i915/gt/uc/intel_guc.h  |  1 -
 drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h |  4 
 drivers/gpu/drm/i915/gt/uc/intel_uc.c   |  4 
 4 files changed, 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index adae04c47aab..ab2c8fe8cdfa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -469,22 +469,6 @@ int intel_guc_to_host_process_recv_msg(struct intel_guc 
*guc,
return 0;
 }
 
-int intel_guc_sample_forcewake(struct intel_guc *guc)
-{
-   struct drm_i915_private *dev_priv = guc_to_gt(guc)->i915;
-   u32 action[2];
-
-   action[0] = INTEL_GUC_ACTION_SAMPLE_FORCEWAKE;
-   /* WaRsDisableCoarsePowerGating:skl,cnl */
-   if (!HAS_RC6(dev_priv) || NEEDS_WaRsDisableCoarsePowerGating(dev_priv))
-   action[1] = 0;
-   else
-   /* bit 0 and 1 are for Render and Media domain separately */
-   action[1] = GUC_FORCEWAKE_RENDER | GUC_FORCEWAKE_MEDIA;
-
-   return intel_guc_send(guc, action, ARRAY_SIZE(action));
-}
-
 /**
  * intel_guc_auth_huc() - Send action to GuC to authenticate HuC ucode
  * @guc: intel_guc structure
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index bc2ba7d0626c..c20f3839de12 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -128,7 +128,6 @@ int intel_guc_send_mmio(struct intel_guc *guc, const u32 
*action, u32 len,
u32 *response_buf, u32 response_buf_size);
 int intel_guc_to_host_process_recv_msg(struct intel_guc *guc,
   const u32 *payload, u32 len);
-int intel_guc_sample_forcewake(struct intel_guc *guc);
 int intel_guc_auth_huc(struct intel_guc *guc, u32 rsa_offset);
 int intel_guc_suspend(struct intel_guc *guc);
 int intel_guc_resume(struct intel_guc *guc);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
index 79c560d9c0b6..0f9afcde1d0b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
@@ -302,9 +302,6 @@ struct guc_ct_buffer_desc {
 #define GUC_CT_MSG_ACTION_SHIFT16
 #define GUC_CT_MSG_ACTION_MASK 0x
 
-#define GUC_FORCEWAKE_RENDER   (1 << 0)
-#define GUC_FORCEWAKE_MEDIA(1 << 1)
-
 #define GUC_POWER_UNSPECIFIED  0
 #define GUC_POWER_D0   1
 #define GUC_POWER_D1   2
@@ -558,7 +555,6 @@ enum intel_guc_action {
INTEL_GUC_ACTION_ENTER_S_STATE = 0x501,
INTEL_GUC_ACTION_EXIT_S_STATE = 0x502,
INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003,
-   INTEL_GUC_ACTION_SAMPLE_FORCEWAKE = 0x3005,
INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000,
INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505,
INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506,
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 892c1315ce49..ab0789d66e06 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -502,10 +502,6 @@ static int __uc_init_hw(struct intel_uc *uc)
 
intel_huc_auth(huc);
 
-   ret = intel_guc_sample_forcewake(guc);
-   if (ret)
-   goto err_log_capture;
-
if (intel_uc_uses_guc_submission(uc))
intel_guc_submission_enable(guc);
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 21/97] drm/i915/guc: Update MMIO based communication

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

The MMIO based Host-to-GuC communication protocol has been
updated to use unified HXG messages.

Update our intel_guc_send_mmio() function to correctly handle
BUSY, RETRY and FAILURE replies. Also update our documentation.
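
An illustrative reply-handling sketch; the enum names are stand-ins for the
real GUC_HXG_* definitions and the return-code convention is simplified:

#include <stdio.h>

enum hxg_type { HXG_RESPONSE_SUCCESS, HXG_NO_RESPONSE_BUSY,
                HXG_NO_RESPONSE_RETRY, HXG_RESPONSE_FAILURE };

/* BUSY means keep waiting, RETRY means resend the request, SUCCESS and
 * FAILURE terminate the wait. */
static int handle_reply(enum hxg_type type, int *resend)
{
        *resend = 0;
        switch (type) {
        case HXG_RESPONSE_SUCCESS:      return 0;
        case HXG_NO_RESPONSE_BUSY:      return 1;               /* poll again */
        case HXG_NO_RESPONSE_RETRY:     *resend = 1; return 1;
        default:                        return -5;              /* -EIO */
        }
}

int main(void)
{
        int resend;

        printf("busy  -> %d\n", handle_reply(HXG_NO_RESPONSE_BUSY, &resend));
        printf("retry -> %d (resend=%d)\n",
               handle_reply(HXG_NO_RESPONSE_RETRY, &resend), resend);
        printf("done  -> %d\n", handle_reply(HXG_RESPONSE_SUCCESS, &resend));
        return 0;
}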

GuC: 55.0.0
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Piotr Piórkowski 
Cc: Michal Winiarski 
---
 .../gt/uc/abi/guc_communication_mmio_abi.h| 47 
 drivers/gpu/drm/i915/gt/uc/intel_guc.c| 75 ++-
 2 files changed, 70 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h 
b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
index be066a62e9e0..fef51499386b 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
@@ -7,44 +7,27 @@
 #define _ABI_GUC_COMMUNICATION_MMIO_ABI_H
 
 /**
- * DOC: MMIO based communication
+ * DOC: GuC MMIO based communication
  *
- * The MMIO based communication between Host and GuC uses software scratch
- * registers, where first register holds data treated as message header,
- * and other registers are used to hold message payload.
+ * The MMIO based communication between Host and GuC relies on special
+ * hardware registers which format could be defined by the software
+ * (so called scratch registers).
  *
- * For Gen9+, GuC uses software scratch registers 0xC180-0xC1B8,
- * but no H2G command takes more than 8 parameters and the GuC FW
- * itself uses an 8-element array to store the H2G message.
- *
- *  +---+-+-+-+
- *  |  MMIO[0]  | MMIO[1] |   ...   | MMIO[n] |
- *  +---+-+-+-+
- *  | header|  optional payload   |
- *  +==++=+=+=+
- *  | 31:28|type| | | |
- *  +--++ | | |
- *  | 27:16|data| | | |
- *  +--++ | | |
- *  |  15:0|code| | | |
- *  +--++-+-+-+
+ * Each MMIO based message, both Host to GuC (H2G) and GuC to Host (G2H)
+ * messages, whose maximum length depends on the number of available scratch
+ * registers, is directly written into those scratch registers.
  *
- * The message header consists of:
- *
- * - **type**, indicates message type
- * - **code**, indicates message code, is specific for **type**
- * - **data**, indicates message data, optional, depends on **code**
+ * For Gen9+, there are 16 software scratch registers 0xC180-0xC1B8,
+ * but no H2G command takes more than 8 parameters and the GuC firmware
+ * itself uses an 8-element array to store the H2G message.
  *
- * The following message **types** are supported:
+ * For Gen11+, there are an additional 4 registers 0x190240-0x19024C, which
+ * are, regardless of the lower count, preferred over the legacy ones.
  *
- * - **REQUEST**, indicates Host-to-GuC request, requested GuC action code
- *   must be priovided in **code** field. Optional action specific parameters
- *   can be provided in remaining payload registers or **data** field.
+ * The MMIO based communication is mainly used during driver initialization
+ * phase to setup the CTB based communication that will be used afterwards.
  *
- * - **RESPONSE**, indicates GuC-to-Host response from earlier GuC request,
- *   action response status will be provided in **code** field. Optional
- *   response data can be returned in remaining payload registers or **data**
- *   field.
+ * Format of the MMIO messages follows definitions of `HXG Message`_.
  */
 
 #define GUC_MAX_MMIO_MSG_LEN   8
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index ab2c8fe8cdfa..454c8d886499 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -385,29 +385,27 @@ void intel_guc_fini(struct intel_guc *guc)
 /*
  * This function implements the MMIO based host to GuC interface.
  */
-int intel_guc_send_mmio(struct intel_guc *guc, const u32 *action, u32 len,
+int intel_guc_send_mmio(struct intel_guc *guc, const u32 *request, u32 len,
u32 *response_buf, u32 response_buf_size)
 {
+   struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
struct intel_uncore *uncore = guc_to_gt(guc)->uncore;
-   u32 status;
+   u32 header;
int i;
int ret;
 
GEM_BUG_ON(!len);
GEM_BUG_ON(len > guc->send_regs.count);
 
-   /* We expect only action code */
-   GEM_BUG_ON(*action & ~INTEL_GUC_MSG_CODE_MASK);
-
-   /* If CT is available, we expect to use MMIO only during init/fini */
-   GEM_BUG_ON(*action != 
INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER &&
-  *action != 
INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER);
+   GEM_BUG_ON(FIELD_GE

[Intel-gfx] [RFC PATCH 12/97] drm/i915/guc: Don't repeat CTB layout calculations

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

We can retrieve the offsets of the cmds buffers and descriptors from the
actual pointers that we already keep locally.
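
A small standalone sketch of deriving buffer addresses from the pointers
themselves (the layout constants and base address below are made up for
illustration):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static ptrdiff_t ptrdiff_bytes(const void *a, const void *b)
{
        return (const char *)a - (const char *)b;
}

int main(void)
{
        /* Rough model of the CT blob layout: descriptors in the first half
         * of the page, command buffers in the second. */
        uint8_t blob[4096];
        void *send_cmds = blob + 4096 / 2;

        /* GGTT offset of the cmds = GGTT base of the blob + local offset,
         * derived from pointers we already keep instead of re-deriving the
         * layout constants. */
        uint32_t ggtt_base = 0x100000;  /* made-up base address */
        printf("send cmds at %#lx\n",
               (unsigned long)(ggtt_base + ptrdiff_bytes(send_cmds, blob)));
        return 0;
}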

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index dbece569fbe4..fbd6bd20f588 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -244,6 +244,7 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 {
struct intel_guc *guc = ct_to_guc(ct);
u32 base, cmds;
+   void *blob;
int err;
int i;
 
@@ -251,15 +252,18 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 
/* vma should be already allocated and map'ed */
GEM_BUG_ON(!ct->vma);
+   GEM_BUG_ON(!i915_gem_object_has_pinned_pages(ct->vma->obj));
base = intel_guc_ggtt_offset(guc, ct->vma);
 
-   /* (re)initialize descriptors
-* cmds buffers are in the second half of the blob page
-*/
+   /* blob should start with send descriptor */
+   blob = __px_vaddr(ct->vma->obj);
+   GEM_BUG_ON(blob != ct->ctbs[CTB_SEND].desc);
+
+   /* (re)initialize descriptors */
for (i = 0; i < ARRAY_SIZE(ct->ctbs); i++) {
GEM_BUG_ON((i != CTB_SEND) && (i != CTB_RECV));
 
-   cmds = base + PAGE_SIZE / 4 * i + PAGE_SIZE / 2;
+   cmds = base + ptrdiff(ct->ctbs[i].cmds, blob);
CT_DEBUG(ct, "%d: cmds addr=%#x\n", i, cmds);
 
guc_ct_buffer_reset(&ct->ctbs[i], cmds);
@@ -269,12 +273,12 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 * Register both CT buffers starting with RECV buffer.
 * Descriptors are in first half of the blob.
 */
-   err = ct_register_buffer(ct, base + PAGE_SIZE / 4 * CTB_RECV,
+   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs[CTB_RECV].desc, 
blob),
 INTEL_GUC_CT_BUFFER_TYPE_RECV);
if (unlikely(err))
goto err_out;
 
-   err = ct_register_buffer(ct, base + PAGE_SIZE / 4 * CTB_SEND,
+   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs[CTB_SEND].desc, 
blob),
 INTEL_GUC_CT_BUFFER_TYPE_SEND);
if (unlikely(err))
goto err_deregister;
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 10/97] drm/i915: Promote ptrdiff() to i915_utils.h

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Generic helpers should be placed in i915_utils.h.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/i915_utils.h | 5 +
 drivers/gpu/drm/i915/i915_vma.h   | 5 -
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_utils.h 
b/drivers/gpu/drm/i915/i915_utils.h
index f02f52ab5070..5259edacde38 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -201,6 +201,11 @@ __check_struct_size(size_t base, size_t arr, size_t count, 
size_t *size)
__T;\
 })
 
+static __always_inline ptrdiff_t ptrdiff(const void *a, const void *b)
+{
+   return a - b;
+}
+
 /*
  * container_of_user: Extract the superclass from a pointer to a member.
  *
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 8df784a026d2..a29a158990c6 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -146,11 +146,6 @@ static inline void i915_vma_put(struct i915_vma *vma)
i915_gem_object_put(vma->obj);
 }
 
-static __always_inline ptrdiff_t ptrdiff(const void *a, const void *b)
-{
-   return a - b;
-}
-
 static inline long
 i915_vma_compare(struct i915_vma *vma,
 struct i915_address_space *vm,
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 13/97] drm/i915/guc: Replace CTB array with explicit members

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Upcoming GuC firmware will always require just two CTBs, and we also
plan to configure them with different sizes, so defining them as an
array is no longer suitable.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 46 ---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  7 +++-
 2 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index fbd6bd20f588..c54a29176862 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -168,10 +168,10 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
struct intel_guc *guc = ct_to_guc(ct);
struct guc_ct_buffer_desc *desc;
u32 blob_size;
+   u32 cmds_size;
void *blob;
u32 *cmds;
int err;
-   int i;
 
GEM_BUG_ON(ct->vma);
 
@@ -207,15 +207,23 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
 
CT_DEBUG(ct, "base=%#x size=%u\n", intel_guc_ggtt_offset(guc, ct->vma), 
blob_size);
 
-   /* store pointers to desc and cmds */
-   for (i = 0; i < ARRAY_SIZE(ct->ctbs); i++) {
-   GEM_BUG_ON((i !=  CTB_SEND) && (i != CTB_RECV));
+   /* store pointers to desc and cmds for send ctb */
+   desc = blob;
+   cmds = blob + PAGE_SIZE / 2;
+   cmds_size = PAGE_SIZE / 4;
+   CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u\n", "send",
+ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
 
-   desc = blob + PAGE_SIZE / 4 * i;
-   cmds = blob + PAGE_SIZE / 4 * i + PAGE_SIZE / 2;
+   guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size);
 
-   guc_ct_buffer_init(&ct->ctbs[i], desc, cmds, PAGE_SIZE / 4);
-   }
+   /* store pointers to desc and cmds for recv ctb */
+   desc = blob + PAGE_SIZE / 4;
+   cmds = blob + PAGE_SIZE / 4 + PAGE_SIZE / 2;
+   cmds_size = PAGE_SIZE / 4;
+   CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u\n", "recv",
+ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
+
+   guc_ct_buffer_init(&ct->ctbs.recv, desc, cmds, cmds_size);
 
return 0;
 }
@@ -246,7 +254,6 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
u32 base, cmds;
void *blob;
int err;
-   int i;
 
GEM_BUG_ON(ct->enabled);
 
@@ -257,28 +264,25 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 
/* blob should start with send descriptor */
blob = __px_vaddr(ct->vma->obj);
-   GEM_BUG_ON(blob != ct->ctbs[CTB_SEND].desc);
+   GEM_BUG_ON(blob != ct->ctbs.send.desc);
 
/* (re)initialize descriptors */
-   for (i = 0; i < ARRAY_SIZE(ct->ctbs); i++) {
-   GEM_BUG_ON((i != CTB_SEND) && (i != CTB_RECV));
+   cmds = base + ptrdiff(ct->ctbs.send.cmds, blob);
+   guc_ct_buffer_reset(&ct->ctbs.send, cmds);
 
-   cmds = base + ptrdiff(ct->ctbs[i].cmds, blob);
-   CT_DEBUG(ct, "%d: cmds addr=%#x\n", i, cmds);
-
-   guc_ct_buffer_reset(&ct->ctbs[i], cmds);
-   }
+   cmds = base + ptrdiff(ct->ctbs.recv.cmds, blob);
+   guc_ct_buffer_reset(&ct->ctbs.recv, cmds);
 
/*
 * Register both CT buffers starting with RECV buffer.
 * Descriptors are in first half of the blob.
 */
-   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs[CTB_RECV].desc, 
blob),
+   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs.recv.desc, blob),
 INTEL_GUC_CT_BUFFER_TYPE_RECV);
if (unlikely(err))
goto err_out;
 
-   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs[CTB_SEND].desc, 
blob),
+   err = ct_register_buffer(ct, base + ptrdiff(ct->ctbs.send.desc, blob),
 INTEL_GUC_CT_BUFFER_TYPE_SEND);
if (unlikely(err))
goto err_deregister;
@@ -341,7 +345,7 @@ static int ct_write(struct intel_guc_ct *ct,
u32 len /* in dwords */,
u32 fence)
 {
-   struct intel_guc_ct_buffer *ctb = &ct->ctbs[CTB_SEND];
+   struct intel_guc_ct_buffer *ctb = &ct->ctbs.send;
struct guc_ct_buffer_desc *desc = ctb->desc;
u32 head = desc->head;
u32 tail = desc->tail;
@@ -557,7 +561,7 @@ static inline bool ct_header_is_response(u32 header)
 
 static int ct_read(struct intel_guc_ct *ct, u32 *data)
 {
-   struct intel_guc_ct_buffer *ctb = &ct->ctbs[CTB_RECV];
+   struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv;
struct guc_ct_buffer_desc *desc = ctb->desc;
u32 head = desc->head;
u32 tail = desc->tail;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
index 4009e2dd0de4..fc9486779e87 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/int

[Intel-gfx] [RFC PATCH 26/97] drm/i915/guc: New definition of the CTB registration action

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

The definition of the CTB registration action has changed.
Add the corresponding ABI documentation and implement the required changes.

GuC: 57.0.0
Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
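For readers unfamiliar with the new HXG layout, below is a minimal standalone
userspace sketch (not part of the patch and not kernel code) of how a
HOST2GUC_REGISTER_CTB request could be packed according to the message layout
documented in this patch. The GUC_HXG_ORIGIN_HOST / GUC_HXG_TYPE_REQUEST
values are assumed from the HXG definitions introduced earlier in the series;
note the patch itself still defines the action as 0x4505 with a FIXME, while
the documentation table already uses 0x5200.

#include <stdint.h>
#include <stdio.h>

/* Illustration only, not the i915 implementation; HXG values are assumed. */
#define GUC_HXG_ORIGIN_HOST    0u
#define GUC_HXG_TYPE_REQUEST   0u

static void pack_register_ctb(uint32_t msg[4], uint32_t action, uint32_t type,
                              uint32_t size_in_4k_minus_1,
                              uint32_t desc_addr, uint32_t buff_addr)
{
        msg[0] = (GUC_HXG_ORIGIN_HOST << 31) |   /* dw0[31]    ORIGIN */
                 (GUC_HXG_TYPE_REQUEST << 28) |  /* dw0[30:28] TYPE */
                 (action & 0xffff);              /* dw0[15:0]  ACTION */
        msg[1] = ((type & 0xf) << 8) |           /* dw1[11:8]  CTB type */
                 (size_in_4k_minus_1 & 0xff);    /* dw1[7:0]   SIZE, 31:12 MBZ */
        msg[2] = desc_addr;                      /* dw2 GGTT address of CT descriptor */
        msg[3] = buff_addr;                      /* dw3 GGTT address of CT buffer */
}

int main(void)
{
        uint32_t msg[4];

        /* hypothetical GGTT addresses; a single 4K H2G buffer means SIZE = 0 */
        pack_register_ctb(msg, 0x5200, 0 /* HOST2GUC */, 0, 0x1000, 0x2000);
        printf("%08x %08x %08x %08x\n", msg[0], msg[1], msg[2], msg[3]);
        return 0;
}
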
 .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h  | 107 ++
 .../gt/uc/abi/guc_communication_ctb_abi.h |   4 -
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c |  76 -
 3 files changed, 152 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
index 90efef8a73e4..6cb0d3eb9b72 100644
--- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
@@ -6,6 +6,113 @@
 #ifndef _ABI_GUC_ACTIONS_ABI_H
 #define _ABI_GUC_ACTIONS_ABI_H
 
+/**
+ * DOC: HOST2GUC_REGISTER_CTB
+ *
+ * This message is used as part of the `CTB based communication`_ setup.
+ *
+ * This message must be sent as `MMIO H2G Message`_.
+ *
+ *  +---+-------+----------------------------------------------------------+
+ *  |   | Bits  | Description                                              |
+ *  +===+=======+==========================================================+
+ *  | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_HOST_                            |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_                             |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   | 27:16 | DATA0 = MBZ                                              |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   |  15:0 | ACTION = _`GUC_ACTION_HOST2GUC_REGISTER_CTB` = 0x5200    |
+ *  +---+-------+----------------------------------------------------------+
+ *  | 1 | 31:12 | RESERVED = MBZ                                           |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   |  11:8 | **TYPE** - type for the CT buffer                        |
+ *  |   |       |                                                          |
+ *  |   |       |   - _`GUC_CTB_TYPE_HOST2GUC` = 0                         |
+ *  |   |       |   - _`GUC_CTB_TYPE_GUC2HOST` = 1                         |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   |   7:0 | **SIZE** - size of the `CT Buffer`_ in 4K units minus 1  |
+ *  +---+-------+----------------------------------------------------------+
+ *  | 2 |  31:0 | **DESC_ADDR** - GGTT address of the `CT Descriptor`_     |
+ *  +---+-------+----------------------------------------------------------+
+ *  | 3 |  31:0 | **BUFF_ADDF** - GGTT address of the `CT Buffer`_         |
+ *  +---+-------+----------------------------------------------------------+
+ *
+ *  +---+-------+----------------------------------------------------------+
+ *  |   | Bits  | Description                                              |
+ *  +===+=======+==========================================================+
+ *  | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_GUC_                             |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_                    |
+ *  |   +-------+----------------------------------------------------------+
+ *  |   |  27:0 | DATA0 = MBZ                                              |
+ *  +---+-------+----------------------------------------------------------+
+ */
+#define GUC_ACTION_HOST2GUC_REGISTER_CTB   0x4505 // FIXME 0x5200
+
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_LEN          (GUC_HXG_REQUEST_MSG_MIN_LEN + 3u)
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_0_MBZ        GUC_HXG_REQUEST_MSG_0_DATA0
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_1_MBZ        (0xf << 12)
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_1_TYPE       (0xf << 8)
+#define   GUC_CTB_TYPE_HOST2GUC                        0u
+#define   GUC_CTB_TYPE_GUC2HOST                        1u
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_1_SIZE       (0xff << 0)
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_2_DESC_ADDR  GUC_HXG_REQUEST_MSG_n_DATAn
+#define HOST2GUC_REGISTER_CTB_REQUEST_MSG_3_BUFF_ADDR  GUC_HXG_REQUEST_MSG_n_DATAn
+
+#define HOST2GUC_REGISTER_CTB_RESPONSE_MSG_LEN         GUC_HXG_RESPONSE_MSG_MIN_LEN
+#define HOST2GUC_REGISTER_CTB_RESPONSE_MSG_0_MBZ       GUC_HXG_RESPONSE_MSG_0_DATA0
+
+/**
+ * DOC: HOST2GUC_DEREGISTER_CTB
+ *
+ * This message is used as part of the `CTB based communication`_ teardown.
+ *
+ * This message must be sent as `MMIO H2G Message`_.
+ *
+ *  +---+-------+----------------------------------------------------------+
+ *  |   | Bits  | Description                                              |

[Intel-gfx] [RFC PATCH 14/97] drm/i915/guc: Update sizes of CTB buffers

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Future GuC firmware will require the CTB buffer sizes to be a multiple of 4K.
Make these changes now, as they shouldn't impact us too much.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: John Harrison 
---
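To make the new layout easier to follow, here is the offset arithmetic worked
out for the minimum sizes used in this patch, as a standalone sketch rather
than kernel code. It assumes CTB_DESC_SIZE evaluates to 2K, which is what
ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K) comes out to today.

#include <stdio.h>

/* Illustration only: mirrors the defines added by this patch. */
#define SZ_2K                0x800u
#define SZ_4K                0x1000u
#define CTB_DESC_SIZE        SZ_2K   /* assumed result of the ALIGN() */
#define CTB_H2G_BUFFER_SIZE  SZ_4K
#define CTB_G2H_BUFFER_SIZE  SZ_4K

int main(void)
{
        unsigned int send_desc = 0;
        unsigned int recv_desc = CTB_DESC_SIZE;
        unsigned int send_cmds = 2 * CTB_DESC_SIZE;
        unsigned int recv_cmds = 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE;
        unsigned int blob_size = 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE +
                                 CTB_G2H_BUFFER_SIZE;

        /* Matches the DOC table: 0x0, 0x800, 0x1000, 0x2000, 12288 bytes. */
        printf("send desc %#x, recv desc %#x\n", send_desc, recv_desc);
        printf("send cmds %#x, recv cmds %#x, blob %u bytes\n",
               send_cmds, recv_cmds, blob_size);
        return 0;
}
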
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 60 ---
 1 file changed, 32 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index c54a29176862..c87a0a8bef26 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -38,6 +38,32 @@ static inline struct drm_device *ct_to_drm(struct intel_guc_ct *ct)
 #define CT_PROBE_ERROR(_ct, _fmt, ...) \
i915_probe_error(ct_to_i915(ct), "CT: " _fmt, ##__VA_ARGS__);
 
+/**
+ * DOC: CTB Blob
+ *
+ * We allocate single blob to hold both CTB descriptors and buffers:
+ *
+ *  ++---+--+
+ *  | offset | contents  | size |
+ *  ++===+==+
+ *  | 0x | H2G `CTB Descriptor`_ (send)  |  |
+ *  ++---+  4K  |
+ *  | 0x0800 | G2H `CTB Descriptor`_ (recv)  |  |
+ *  ++---+--+
+ *  | 0x1000 | H2G `CT Buffer`_ (send)   | n*4K |
+ *  ||   |  |
+ *  ++---+--+
+ *  | 0x1000 | G2H `CT Buffer`_ (recv)   | m*4K |
+ *  | + n*4K |   |  |
+ *  ++---+--+
+ *
+ * Size of each `CT Buffer`_ must be multiple of 4K.
+ * As we don't expect too many messages, for now use minimum sizes.
+ */
+#define CTB_DESC_SIZE  ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
+#define CTB_H2G_BUFFER_SIZE(SZ_4K)
+#define CTB_G2H_BUFFER_SIZE(SZ_4K)
+
 struct ct_request {
struct list_head link;
u32 fence;
@@ -175,29 +201,7 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
 
GEM_BUG_ON(ct->vma);
 
-   /* We allocate 1 page to hold both descriptors and both buffers.
-*   ___.
-*  |desc (SEND)|   :
-*  |___|   PAGE/4
-*  :___:
-*  |desc (RECV)|   :
-*  |___|   PAGE/4
-*  :___:
-*  |cmds (SEND)|
-*  |   PAGE/4
-*  |___|
-*  |cmds (RECV)|
-*  |   PAGE/4
-*  |___|
-*
-* Each message can use a maximum of 32 dwords and we don't expect to
-* have more than 1 in flight at any time, so we have enough space.
-* Some logic further ahead will rely on the fact that there is only 1
-* page and that it is always mapped, so if the size is changed the
-* other code will need updating as well.
-*/
-
-   blob_size = PAGE_SIZE;
+   blob_size = 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE + 
CTB_G2H_BUFFER_SIZE;
err = intel_guc_allocate_and_map_vma(guc, blob_size, &ct->vma, &blob);
if (unlikely(err)) {
CT_PROBE_ERROR(ct, "Failed to allocate %u for CTB data (%pe)\n",
@@ -209,17 +213,17 @@ int intel_guc_ct_init(struct intel_guc_ct *ct)
 
/* store pointers to desc and cmds for send ctb */
desc = blob;
-   cmds = blob + PAGE_SIZE / 2;
-   cmds_size = PAGE_SIZE / 4;
+   cmds = blob + 2 * CTB_DESC_SIZE;
+   cmds_size = CTB_H2G_BUFFER_SIZE;
CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u\n", "send",
 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
 
guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size);
 
/* store pointers to desc and cmds for recv ctb */
-   desc = blob + PAGE_SIZE / 4;
-   cmds = blob + PAGE_SIZE / 4 + PAGE_SIZE / 2;
-   cmds_size = PAGE_SIZE / 4;
+   desc = blob + CTB_DESC_SIZE;
+   cmds = blob + 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE;
+   cmds_size = CTB_G2H_BUFFER_SIZE;
CT_DEBUG(ct, "%s desc %#lx cmds %#lx size %u\n", "recv",
 ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size);
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 05/97] drm/i915/guc: use probe_error log for CT enablement failure

2021-05-06 Thread Matthew Brost
From: Daniele Ceraolo Spurio 

We have a couple of failure injection points in the CT enablement path,
so we need to use i915_probe_error() to select the appropriate log level.
A new macro (CT_PROBE_ERROR) has been added to the set of CT logging
macros to be used in this scenario and upcoming ones.

While adding the new macro, fix the underlying logging mechanics used
by the existing ones (DRM_DEV_* -> drm_*) and move the inline helpers
above the macros that use them.

Signed-off-by: Matthew Brost 
Signed-off-by: Daniele Ceraolo Spurio 
---
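For context, the idea behind i915_probe_error() is to demote what would
normally be an error message to a debug message while a fault-injection probe
is in progress, so deliberately injected failures don't fill CI logs with
scary errors. Below is a rough userspace mock of that selection logic, purely
illustrative and not the i915 implementation; error_injection_armed stands in
for the kernel's error-injection check.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the kernel's "was this failure injected?" check */
static bool error_injection_armed;

#define probe_error(fmt, ...) \
        printf("[%s] " fmt, error_injection_armed ? "debug" : "error", ##__VA_ARGS__)

int main(void)
{
        probe_error("CT: Failed to open channel (err=%d)\n", -5);  /* real failure -> error */
        error_injection_armed = true;
        probe_error("CT: Failed to open channel (err=%d)\n", -5);  /* injected -> debug */
        return 0;
}
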
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 48 ---
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index fa9e048cc65f..25618649048f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -7,14 +7,36 @@
 #include "intel_guc_ct.h"
 #include "gt/intel_gt.h"
 
+static inline struct intel_guc *ct_to_guc(struct intel_guc_ct *ct)
+{
+   return container_of(ct, struct intel_guc, ct);
+}
+
+static inline struct intel_gt *ct_to_gt(struct intel_guc_ct *ct)
+{
+   return guc_to_gt(ct_to_guc(ct));
+}
+
+static inline struct drm_i915_private *ct_to_i915(struct intel_guc_ct *ct)
+{
+   return ct_to_gt(ct)->i915;
+}
+
+static inline struct drm_device *ct_to_drm(struct intel_guc_ct *ct)
+{
+   return &ct_to_i915(ct)->drm;
+}
+
 #define CT_ERROR(_ct, _fmt, ...) \
-   DRM_DEV_ERROR(ct_to_dev(_ct), "CT: " _fmt, ##__VA_ARGS__)
+   drm_err(ct_to_drm(_ct), "CT: " _fmt, ##__VA_ARGS__)
 #ifdef CONFIG_DRM_I915_DEBUG_GUC
 #define CT_DEBUG(_ct, _fmt, ...) \
-   DRM_DEV_DEBUG_DRIVER(ct_to_dev(_ct), "CT: " _fmt, ##__VA_ARGS__)
+   drm_dbg(ct_to_drm(_ct), "CT: " _fmt, ##__VA_ARGS__)
 #else
 #define CT_DEBUG(...)  do { } while (0)
 #endif
+#define CT_PROBE_ERROR(_ct, _fmt, ...) \
+   i915_probe_error(ct_to_i915(ct), "CT: " _fmt, ##__VA_ARGS__);
 
 struct ct_request {
struct list_head link;
@@ -47,26 +69,6 @@ void intel_guc_ct_init_early(struct intel_guc_ct *ct)
INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func);
 }
 
-static inline struct intel_guc *ct_to_guc(struct intel_guc_ct *ct)
-{
-   return container_of(ct, struct intel_guc, ct);
-}
-
-static inline struct intel_gt *ct_to_gt(struct intel_guc_ct *ct)
-{
-   return guc_to_gt(ct_to_guc(ct));
-}
-
-static inline struct drm_i915_private *ct_to_i915(struct intel_guc_ct *ct)
-{
-   return ct_to_gt(ct)->i915;
-}
-
-static inline struct device *ct_to_dev(struct intel_guc_ct *ct)
-{
-   return ct_to_i915(ct)->drm.dev;
-}
-
 static inline const char *guc_ct_buffer_type_to_str(u32 type)
 {
switch (type) {
@@ -264,7 +266,7 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
 err_deregister:
ct_deregister_buffer(ct, INTEL_GUC_CT_BUFFER_TYPE_RECV);
 err_out:
-   CT_ERROR(ct, "Failed to open open CT channel (err=%d)\n", err);
+   CT_PROBE_ERROR(ct, "Failed to open channel (err=%d)\n", err);
return err;
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH 19/97] drm/i915/guc: Always copy CT message to new allocation

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

Since most future CT traffic will be based on G2H requests, stop
copying the incoming CT message into a static buffer and then creating
a new allocation for the request; instead, always copy the incoming CT
message into a new allocation. By doing this while reading the CT
header, we can also safely fall back if that atomic allocation fails.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
Cc: Piotr Piórkowski 
---
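The core of the change is allocating a per-message buffer (header plus
payload) while the header is being read, so an allocation failure can be
handled before the read pointer is committed. A simplified standalone sketch
of that pattern follows; it is illustrative only, using plain malloc() in
place of kmalloc(..., GFP_ATOMIC) and not modelling the real CT header format.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ct_incoming_msg {
        uint32_t size;   /* message length in dwords, including the header */
        uint32_t msg[];  /* header + payload copied out of the CT buffer */
};

static struct ct_incoming_msg *ct_alloc_msg(uint32_t num_dwords)
{
        struct ct_incoming_msg *m;

        m = malloc(sizeof(*m) + sizeof(uint32_t) * num_dwords);
        if (m)
                m->size = num_dwords;
        return m;
}

int main(void)
{
        /* pretend these dwords were read from the G2H circular buffer */
        uint32_t cmds[] = { 0x0 /* header (format not modelled here) */, 1, 2, 3 };
        uint32_t len = 4;
        struct ct_incoming_msg *m = ct_alloc_msg(len);

        if (!m) {
                /* the real code has not advanced desc->head yet at this
                 * point, so the message is not lost */
                return 1;
        }
        memcpy(m->msg, cmds, len * sizeof(uint32_t));
        printf("copied %u dwords, header %#x\n", m->size, m->msg[0]);
        free(m);
        return 0;
}
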
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 180 ++
 1 file changed, 120 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index d630ec32decf..a174978c6a27 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -72,8 +72,9 @@ struct ct_request {
u32 *response_buf;
 };
 
-struct ct_incoming_request {
+struct ct_incoming_msg {
struct list_head link;
+   u32 size;
u32 msg[];
 };
 
@@ -575,7 +576,26 @@ static inline bool ct_header_is_response(u32 header)
return !!(header & GUC_CT_MSG_IS_RESPONSE);
 }
 
-static int ct_read(struct intel_guc_ct *ct, u32 *data)
+static struct ct_incoming_msg *ct_alloc_msg(u32 num_dwords)
+{
+   struct ct_incoming_msg *msg;
+
+   msg = kmalloc(sizeof(*msg) + sizeof(u32) * num_dwords, GFP_ATOMIC);
+   if (msg)
+   msg->size = num_dwords;
+   return msg;
+}
+
+static void ct_free_msg(struct ct_incoming_msg *msg)
+{
+   kfree(msg);
+}
+
+/*
+ * Return: number available remaining dwords to read (0 if empty)
+ * or a negative error code on failure
+ */
+static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
 {
struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv;
struct guc_ct_buffer_desc *desc = ctb->desc;
@@ -586,6 +606,7 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
s32 available;
unsigned int len;
unsigned int i;
+   u32 header;
 
if (unlikely(desc->is_in_error))
return -EPIPE;
@@ -601,8 +622,10 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
 
/* tail == head condition indicates empty */
available = tail - head;
-   if (unlikely(available == 0))
-   return -ENODATA;
+   if (unlikely(available == 0)) {
+   *msg = NULL;
+   return 0;
+   }
 
/* beware of buffer wrap case */
if (unlikely(available < 0))
@@ -610,14 +633,14 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
CT_DEBUG(ct, "available %d (%u:%u)\n", available, head, tail);
GEM_BUG_ON(available < 0);
 
-   data[0] = cmds[head];
+   header = cmds[head];
head = (head + 1) % size;
 
/* message len with header */
-   len = ct_header_get_len(data[0]) + 1;
+   len = ct_header_get_len(header) + 1;
if (unlikely(len > (u32)available)) {
CT_ERROR(ct, "Incomplete message %*ph %*ph %*ph\n",
-4, data,
+4, &header,
 4 * (head + available - 1 > size ?
  size - head : available - 1), &cmds[head],
 4 * (head + available - 1 > size ?
@@ -625,11 +648,24 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
goto corrupted;
}
 
+   *msg = ct_alloc_msg(len);
+   if (!*msg) {
+   CT_ERROR(ct, "No memory for message %*ph %*ph %*ph\n",
+4, &header,
+4 * (head + available - 1 > size ?
+ size - head : available - 1), &cmds[head],
+4 * (head + available - 1 > size ?
+ available - 1 - size + head : 0), &cmds[0]);
+   return available;
+   }
+
+   (*msg)->msg[0] = header;
+
for (i = 1; i < len; i++) {
-   data[i] = cmds[head];
+   (*msg)->msg[i] = cmds[head];
head = (head + 1) % size;
}
-   CT_DEBUG(ct, "received %*ph\n", 4 * len, data);
+   CT_DEBUG(ct, "received %*ph\n", 4 * len, (*msg)->msg);
 
desc->head = head * 4;
return available - len;
@@ -659,33 +695,33 @@ static int ct_read(struct intel_guc_ct *ct, u32 *data)
  *   ^---len---^
  */
 
-static int ct_handle_response(struct intel_guc_ct *ct, const u32 *msg)
+static int ct_handle_response(struct intel_guc_ct *ct, struct ct_incoming_msg *response)
 {
-   u32 header = msg[0];
+   u32 header = response->msg[0];
u32 len = ct_header_get_len(header);
-   u32 msgsize = (len + 1) * sizeof(u32); /* msg size in bytes w/header */
u32 fence;
u32 status;
u32 datalen;
struct ct_request *req;
unsigned long flags;
bool found = false;
+   int err = 0;
 
GEM_BUG_ON(!ct_header_is_res

[Intel-gfx] [RFC PATCH 17/97] drm/i915/guc: Stop using mutex while sending CTB messages

2021-05-06 Thread Matthew Brost
From: Michal Wajdeczko 

We are no longer using the descriptor to hold G2H replies, and access
to the descriptor and command buffer is now protected by a separate
spinlock, so we can stop taking the send mutex.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Matthew Brost 
---
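The short critical section that actually touches the descriptor and command
buffer is already serialised by the separate spinlock referred to above, so
holding guc->send_mutex around the whole call adds nothing. A tiny standalone
sketch of the resulting locking shape, illustrative only and using pthreads
rather than the kernel primitives:

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t ctb_lock;   /* protects descriptor + command buffer */
static unsigned int tail;             /* stands in for the CTB tail pointer */

/* After this patch, only the short inner critical section remains. */
static void ct_send_sketch(unsigned int dwords)
{
        pthread_spin_lock(&ctb_lock);
        tail = (tail + dwords) % 1024;  /* write message, move tail */
        pthread_spin_unlock(&ctb_lock);
}

int main(void)
{
        pthread_spin_init(&ctb_lock, PTHREAD_PROCESS_PRIVATE);
        ct_send_sketch(4);
        printf("tail now %u\n", tail);
        pthread_spin_destroy(&ctb_lock);
        return 0;
}
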
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index bee0958d8bae..cb58fa7f970c 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -537,7 +537,6 @@ static int ct_send(struct intel_guc_ct *ct,
 int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len,
  u32 *response_buf, u32 response_buf_size)
 {
-   struct intel_guc *guc = ct_to_guc(ct);
u32 status = ~0; /* undefined */
int ret;
 
@@ -546,8 +545,6 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len,
return -ENODEV;
}
 
-   mutex_lock(&guc->send_mutex);
-
ret = ct_send(ct, action, len, response_buf, response_buf_size, &status);
if (unlikely(ret < 0)) {
CT_ERROR(ct, "Sending action %#x failed (err=%d status=%#X)\n",
@@ -557,7 +554,6 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len,
 action[0], ret, ret);
}
 
-   mutex_unlock(&guc->send_mutex);
return ret;
 }
 
-- 
2.28.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

