[PATCH v2 05/11] drm/mediatek: Add mtk_ddp_sec_write to config secure buffer info

2023-10-22 Thread Jason-JH . Lin
Add mtk_ddp_sec_write() to configure secure buffer information in the
cmdq secure packet data.
The secure cmdq driver will then use this information to configure the
corresponding secure DRAM address for the HW overlay in the secure world.
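
For reference, the expected call pattern in a component driver looks roughly
like this (a sketch; the argument values are illustrative and follow the
mtk_disp_ovl change later in this series):

  if (state->pending.is_sec)
          mtk_ddp_sec_write(cmdq_pkt,
                            ovl->regs_pa + DISP_REG_OVL_ADDR(ovl, idx),
                            pending->addr, CMDQ_IWC_H_2_MVA,
                            0, buf_size, 0);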

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c | 12 ++++++++++++
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h |  4 ++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c 
b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
index 771f4e173353..3dca936b9143 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
@@ -111,6 +111,18 @@ void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, 
unsigned int value,
 #endif
 }
 
+void mtk_ddp_sec_write(struct cmdq_pkt *cmdq_pkt, u32 addr, u64 base,
+  const enum cmdq_iwc_addr_metadata_type type,
+  const u32 offset, const u32 size, const u32 port)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+   /* secure buffer will be 4K alignment */
+   if (cmdq_pkt)
+   cmdq_sec_pkt_write(cmdq_pkt, addr, base, type,
+  offset, ALIGN(size, PAGE_SIZE), port);
+#endif
+}
+
 static int mtk_ddp_clk_enable(struct device *dev)
 {
struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev);
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h 
b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
index febcaeef16a1..239a65140352 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
@@ -7,6 +7,7 @@
 #define MTK_DRM_DDP_COMP_H
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -291,4 +292,7 @@ void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, 
unsigned int value,
 void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value,
struct cmdq_client_reg *cmdq_reg, void __iomem *regs,
unsigned int offset, unsigned int mask);
+void mtk_ddp_sec_write(struct cmdq_pkt *cmdq_pkt, u32 addr, u64 base,
+  const enum cmdq_iwc_addr_metadata_type type,
+  const u32 offset, const u32 size, const u32 port);
 #endif /* MTK_DRM_DDP_COMP_H */
-- 
2.18.0



[PATCH v2 00/11] Add mediate-drm secure flow for SVP

2023-10-22 Thread Jason-JH . Lin
The patch series provides drm driver support for enabling secure video
path (SVP) playback on MediaTek hardware in the Linux kernel.

Memory Definitions:
secure memory - Memory allocated in the TEE (Trusted Execution
Environment) which is inaccessible in the REE (Rich Execution
Environment, i.e. linux kernel/userspace).
secure handle - Integer value which acts as reference to 'secure
memory'. Used in communication between TEE and REE to reference
'secure memory'.
secure buffer - 'secure memory' that is used to store decrypted,
compressed video or for other general purposes in the TEE.
secure surface - 'secure memory' that is used to store graphic buffers.

Memory Usage in SVP:
The overall flow of SVP starts with encrypted video coming in from an
outside source into the REE. The REE will then allocate a 'secure
buffer' and send the corresponding 'secure handle' along with the
encrypted, compressed video data to the TEE. The TEE will then decrypt
the video and store the result in the 'secure buffer'. The REE will
then allocate a 'secure surface'. The REE will pass the 'secure
handles' for both the 'secure buffer' and 'secure surface' into the
TEE for video decoding. The video decoder HW will then decode the
contents of the 'secure buffer' and place the result in the 'secure
surface'. The REE will then attach the 'secure surface' to the overlay
plane for rendering of the video.

Everything relating to ensuring security of the actual contents of the
'secure buffer' and 'secure surface' is out of scope for the REE and
is the responsibility of the TEE.

The DRM driver handles allocation of GEM objects that are backed by a 'secure
surface' and displaying a 'secure surface' on the overlay plane.
This introduces a new flag for object creation called
DRM_MTK_GEM_CREATE_ENCRYPTED which indicates it should be a 'secure
surface'. All changes here are in MediaTek specific code.
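
For example, user space could allocate a 'secure surface' roughly as follows
(a sketch only, assuming libdrm's drmIoctl(); the struct field names are
assumed to match the new include/uapi/drm/mediatek_drm.h added by this series):

  struct drm_mtk_gem_create arg = {
          .size  = surface_size,                 /* illustrative size */
          .flags = DRM_MTK_GEM_CREATE_ENCRYPTED,
  };

  if (drmIoctl(fd, DRM_IOCTL_MTK_GEM_CREATE, &arg) == 0)
          /* arg.handle now names a GEM object backed by secure memory */
          use_secure_surface(arg.handle);

use_secure_surface() is a placeholder for whatever the client does next, e.g.
wrapping the handle with drmModeAddFB2() and attaching it to the overlay plane.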

---
Based on 3 series and 1 patch:
[1] dma-buf: heaps: Add MediaTek secure heap
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=782776

[2] add driver to support secure video decoder
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=782922

[3] soc: mediatek: Add register definitions for GCE
- 
https://patchwork.kernel.org/project/linux-mediatek/patch/20231017064717.21616-2-shawn.s...@mediatek.com/

[4] Add CMDQ secure driver for SVP
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=795502
---
Changes in v2:

1. remove the DRIVER_RENDER flag for mtk_drm_ioctl
2. move cmdq_insert_backup_cookie into client driver
3. move secure gce node define from mt8195-cherry.dtsi to mt8195.dtsi
---

CK Hu (1):
  drm/mediatek: Add interface to allocate MediaTek GEM buffer.

Jason-JH.Lin (10):
  drm/mediatek/uapi: Add DRM_MTK_GEM_CREATE_ENCRYPTED flag
  drm/mediatek: Add secure buffer control flow to mtk_drm_gem
  drm/mediatek: Add secure identify flag and function to mtk_drm_plane
  drm/mediatek: Add mtk_ddp_sec_write to config secure buffer info
  drm/mediatek: Add get_sec_port interface to mtk_ddp_comp
  drm/mediatek: Add secure layer config support for ovl
  drm/mediatek: Add secure layer config support for ovl_adaptor
  drm/mediatek: Add secure flow support to mediatek-drm
  drm/mediatek: Add cmdq_insert_backup_cookie before secure pkt finalize
  arm64: dts: mt8195: Add secure mbox settings for vdosys

 arch/arm64/boot/dts/mediatek/mt8195.dtsi  |   6 +-
 drivers/gpu/drm/mediatek/mtk_disp_drv.h   |   3 +
 drivers/gpu/drm/mediatek/mtk_disp_ovl.c   |  31 +-
 .../gpu/drm/mediatek/mtk_disp_ovl_adaptor.c   |  15 +
 drivers/gpu/drm/mediatek/mtk_drm_crtc.c   | 274 +-
 drivers/gpu/drm/mediatek/mtk_drm_crtc.h   |   1 +
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c   |  14 +
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h   |  13 +
 drivers/gpu/drm/mediatek/mtk_drm_drv.c|  13 +
 drivers/gpu/drm/mediatek/mtk_drm_gem.c| 121 
 drivers/gpu/drm/mediatek/mtk_drm_gem.h|  16 +
 drivers/gpu/drm/mediatek/mtk_drm_plane.c  |   7 +
 drivers/gpu/drm/mediatek/mtk_drm_plane.h  |   2 +
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.c   |  11 +-
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.h   |   2 +
 include/uapi/drm/mediatek_drm.h   |  59 
 16 files changed, 570 insertions(+), 18 deletions(-)
 create mode 100644 include/uapi/drm/mediatek_drm.h

-- 
2.18.0



[PATCH v2 02/11] drm/mediatek/uapi: Add DRM_MTK_GEM_CREATE_ENCRYPTED flag

2023-10-22 Thread Jason-JH . Lin
Add the DRM_MTK_GEM_CREATE_ENCRYPTED flag to allocate a secure buffer
to support the secure video path feature.

Signed-off-by: Jason-JH.Lin 
---
 include/uapi/drm/mediatek_drm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/drm/mediatek_drm.h b/include/uapi/drm/mediatek_drm.h
index c050de320a84..93f25e0c21d7 100644
--- a/include/uapi/drm/mediatek_drm.h
+++ b/include/uapi/drm/mediatek_drm.h
@@ -48,6 +48,7 @@ struct drm_mtk_gem_map_off {
 
 #define DRM_MTK_GEM_CREATE 0x00
 #define DRM_MTK_GEM_MAP_OFFSET 0x01
+#define DRM_MTK_GEM_CREATE_ENCRYPTED   0x02
 
 #define DRM_IOCTL_MTK_GEM_CREATE   DRM_IOWR(DRM_COMMAND_BASE + \
DRM_MTK_GEM_CREATE, struct drm_mtk_gem_create)
-- 
2.18.0



[PATCH v2 11/11] arm64: dts: mt8195: Add secure mbox settings for vdosys

2023-10-22 Thread Jason-JH . Lin
Add a secure mailbox channel to support secure video path on
vdosys0 and vdosys1.

Signed-off-by: Jason-JH.Lin 
---
 arch/arm64/boot/dts/mediatek/mt8195.dtsi | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi 
b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
index 42f9fdf696fe..d14113c89a9b 100644
--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
@@ -2619,7 +2619,8 @@
vdosys0: syscon@1c01a000 {
compatible = "mediatek,mt8195-vdosys0", 
"mediatek,mt8195-mmsys", "syscon";
reg = <0 0x1c01a000 0 0x1000>;
-   mboxes = < 0 CMDQ_THR_PRIO_4>;
+   mboxes = < 0 CMDQ_THR_PRIO_4>,
+< 8 CMDQ_THR_PRIO_4>; /* secure mbox */
#clock-cells = <1>;
};
 
@@ -2804,7 +2805,8 @@
vdosys1: syscon@1c10 {
compatible = "mediatek,mt8195-vdosys1", "syscon";
reg = <0 0x1c10 0 0x1000>;
-   mboxes = < 1 CMDQ_THR_PRIO_4>;
+   mboxes = < 1 CMDQ_THR_PRIO_4>,
+< 9 CMDQ_THR_PRIO_4>; /* secure mbox */;
mediatek,gce-client-reg = < SUBSYS_1c10 0x 
0x1000>;
#clock-cells = <1>;
#reset-cells = <1>;
-- 
2.18.0



[PATCH v2 10/11] drm/mediatek: Add cmdq_insert_backup_cookie before secure pkt finalize

2023-10-22 Thread Jason-JH . Lin
Add cmdq_sec_insert_backup_cookie to append some commands before EOC:
1. Get the GCE HW thread execute count from the GCE HW register.
2. Add 1 to the execute count and store it in shared memory.
3. Set a software event signal as the secure irq to the GCE HW.

Since the value of execute count + 1 is stored in shared memory, the
CMDQ driver in the normal world can use it to handle task completion in
its irq handler, and the CMDQ driver in the secure world will use it to
schedule the task slot for each secure thread.
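
In the client driver the call sits right before packet finalization; the
ordering is essentially (sketch matching the mtk_drm_crtc.c hunk below):

  if (cmdq_handle->sec_data)
          cmdq_sec_insert_backup_cookie(cmdq_handle);
  cmdq_pkt_finalize(cmdq_handle);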

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c 
b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
index 6c2cf339b923..399aa6bb2f8d 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
@@ -177,7 +177,7 @@ void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)
sec_scn = CMDQ_SEC_SUB_DISP_DISABLE;
 
cmdq_sec_pkt_set_data(_crtc->sec_cmdq_handle, sec_engine, 
sec_engine, sec_scn);
-
+   cmdq_sec_insert_backup_cookie(_crtc->sec_cmdq_handle);
cmdq_pkt_finalize(_crtc->sec_cmdq_handle);
dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
   mtk_crtc->sec_cmdq_handle.pa_base,
@@ -786,6 +786,8 @@ static void mtk_drm_crtc_update_config(struct mtk_drm_crtc 
*mtk_crtc,
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
+   if (cmdq_handle->sec_data)
+   cmdq_sec_insert_backup_cookie(cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
dma_sync_single_for_device(cmdq_client.chan->mbox->dev,
   cmdq_handle->pa_base,
-- 
2.18.0



[PATCH v2 01/11] drm/mediatek: Add interface to allocate MediaTek GEM buffer.

2023-10-22 Thread Jason-JH . Lin
From: CK Hu 

Add an interface to allocate MediaTek GEM buffers and allow the IOCTLs
to be used by render nodes.
This patch also sets the RENDER driver feature.
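
For reference, a render-node client would use the new IOCTLs roughly like
this (a sketch; the struct and ioctl names are assumed to match the new UAPI
header, and error handling is omitted):

  struct drm_mtk_gem_create create = { .size = len };
  struct drm_mtk_gem_map_off map = { 0 };
  void *ptr;

  drmIoctl(fd, DRM_IOCTL_MTK_GEM_CREATE, &create);
  map.handle = create.handle;
  drmIoctl(fd, DRM_IOCTL_MTK_GEM_MAP_OFFSET, &map);
  ptr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, map.offset);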

Signed-off-by: CK Hu 
Signed-off-by: Nicolas Boichat 
Signed-off-by: Philipp Zabel 
Signed-off-by: Jason-JH.Lin 
Reviewed-by: Daniel Kurtz 
Reviewed-by: Nicolas Boichat 
Tested-by: Daniel Kurtz 
Tested-by: Pi-Hsun Shih 

---
 drivers/gpu/drm/mediatek/mtk_drm_drv.c | 13 ++
 drivers/gpu/drm/mediatek/mtk_drm_gem.c | 39 +
 drivers/gpu/drm/mediatek/mtk_drm_gem.h | 12 ++
 include/uapi/drm/mediatek_drm.h| 58 ++
 4 files changed, 122 insertions(+)
 create mode 100644 include/uapi/drm/mediatek_drm.h

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c 
b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index 93552d76b6e7..e3e9dbdf265b 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "mtk_drm_crtc.h"
 #include "mtk_drm_ddp_comp.h"
@@ -541,6 +542,14 @@ static void mtk_drm_kms_deinit(struct drm_device *drm)
component_unbind_all(drm->dev, drm);
 }
 
+static const struct drm_ioctl_desc mtk_ioctls[] = {
+   DRM_IOCTL_DEF_DRV(MTK_GEM_CREATE, mtk_gem_create_ioctl,
+ DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
+   DRM_IOCTL_DEF_DRV(MTK_GEM_MAP_OFFSET,
+ mtk_gem_map_offset_ioctl,
+ DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
+};
+
 DEFINE_DRM_GEM_FOPS(mtk_drm_fops);
 
 /*
@@ -562,6 +571,10 @@ static const struct drm_driver mtk_drm_driver = {
 
.gem_prime_import = mtk_drm_gem_prime_import,
.gem_prime_import_sg_table = mtk_gem_prime_import_sg_table,
+
+   .ioctls = mtk_ioctls,
+   .num_ioctls = ARRAY_SIZE(mtk_ioctls),
+
.fops = _drm_fops,
 
.name = DRIVER_NAME,
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c 
b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index 9f364df52478..bcce723f257d 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -4,6 +4,7 @@
  */
 
 #include 
+#include 
 
 #include 
 #include 
@@ -272,3 +273,41 @@ void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj,
mtk_gem->kvaddr = NULL;
kfree(mtk_gem->pages);
 }
+
+int mtk_gem_map_offset_ioctl(struct drm_device *drm, void *data,
+struct drm_file *file_priv)
+{
+   struct drm_mtk_gem_map_off *args = data;
+
+   return drm_gem_dumb_map_offset(file_priv, drm, args->handle,
+  >offset);
+}
+
+int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
+struct drm_file *file_priv)
+{
+   struct mtk_drm_gem_obj *mtk_gem;
+   struct drm_mtk_gem_create *args = data;
+   int ret;
+
+   mtk_gem = mtk_drm_gem_create(dev, args->size, false);
+   if (IS_ERR(mtk_gem))
+   return PTR_ERR(mtk_gem);
+
+   /*
+* allocate a id of idr table where the obj is registered
+* and handle has the id what user can see.
+*/
+   ret = drm_gem_handle_create(file_priv, _gem->base, >handle);
+   if (ret)
+   goto err_handle_create;
+
+   /* drop reference from allocate - handle holds it now. */
+   drm_gem_object_put(_gem->base);
+
+   return 0;
+
+err_handle_create:
+   mtk_drm_gem_free_object(_gem->base);
+   return ret;
+}
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h 
b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
index 78f23b07a02e..90f3d2916ec5 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
@@ -46,4 +46,16 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, 
struct iosys_map *map);
 void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj,
  struct iosys_map *map);
 
+/*
+ * request gem object creation and buffer allocation as the size
+ * that it is calculated with framebuffer information such as width,
+ * height and bpp.
+ */
+int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
+struct drm_file *file_priv);
+
+/* get buffer offset to map to user space. */
+int mtk_gem_map_offset_ioctl(struct drm_device *dev, void *data,
+struct drm_file *file_priv);
+
 #endif
diff --git a/include/uapi/drm/mediatek_drm.h b/include/uapi/drm/mediatek_drm.h
new file mode 100644
index ..c050de320a84
--- /dev/null
+++ b/include/uapi/drm/mediatek_drm.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2015 MediaTek Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT 

[PATCH v2 07/11] drm/mediatek: Add secure layer config support for ovl

2023-10-22 Thread Jason-JH . Lin
Add secure layer config support for ovl.

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_disp_drv.h   |  3 ++
 drivers/gpu/drm/mediatek/mtk_disp_ovl.c   | 31 +--
 .../gpu/drm/mediatek/mtk_disp_ovl_adaptor.c   | 12 +++
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c   |  2 ++
 4 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_disp_drv.h 
b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
index 2254038519e1..dec937b183a8 100644
--- a/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+++ b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include "mtk_drm_ddp_comp.h"
 #include "mtk_drm_plane.h"
 #include "mtk_mdp_rdma.h"
 
@@ -79,6 +80,7 @@ void mtk_ovl_clk_disable(struct device *dev);
 void mtk_ovl_config(struct device *dev, unsigned int w,
unsigned int h, unsigned int vrefresh,
unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
+u64 mtk_ovl_get_sec_port(struct mtk_ddp_comp *comp, unsigned int idx);
 int mtk_ovl_layer_check(struct device *dev, unsigned int idx,
struct mtk_plane_state *mtk_state);
 void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
@@ -112,6 +114,7 @@ void mtk_ovl_adaptor_clk_disable(struct device *dev);
 void mtk_ovl_adaptor_config(struct device *dev, unsigned int w,
unsigned int h, unsigned int vrefresh,
unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
+u64 mtk_ovl_adaptor_get_sec_port(struct mtk_ddp_comp *comp, unsigned int idx);
 void mtk_ovl_adaptor_layer_config(struct device *dev, unsigned int idx,
  struct mtk_plane_state *state,
  struct cmdq_pkt *cmdq_pkt);
diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c 
b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
index 2bffe4245466..76e832e4875a 100644
--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
@@ -46,6 +46,7 @@
 #define DISP_REG_OVL_ADDR(ovl, n)  ((ovl)->data->addr + 0x20 * (n))
 #define DISP_REG_OVL_HDR_ADDR(ovl, n)  ((ovl)->data->addr + 0x20 * (n) 
+ 0x04)
 #define DISP_REG_OVL_HDR_PITCH(ovl, n) ((ovl)->data->addr + 0x20 * (n) 
+ 0x08)
+#define DISP_REG_OVL_SECURE0x0fc0
 
 #define GMC_THRESHOLD_BITS 16
 #define GMC_THRESHOLD_HIGH ((1 << GMC_THRESHOLD_BITS) / 4)
@@ -126,8 +127,19 @@ struct mtk_disp_ovl {
const struct mtk_disp_ovl_data  *data;
void(*vblank_cb)(void *data);
void*vblank_cb_data;
+   resource_size_t regs_pa;
 };
 
+u64 mtk_ovl_get_sec_port(struct mtk_ddp_comp *comp, unsigned int idx)
+{
+   if (comp->id == DDP_COMPONENT_OVL0)
+   return 1ULL << CMDQ_SEC_DISP_OVL0;
+   else if (comp->id == DDP_COMPONENT_OVL1)
+   return 1ULL << CMDQ_SEC_DISP_OVL1;
+
+   return 0;
+}
+
 static irqreturn_t mtk_disp_ovl_irq_handler(int irq, void *dev_id)
 {
struct mtk_disp_ovl *priv = dev_id;
@@ -449,8 +461,22 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int 
idx,
  DISP_REG_OVL_SRC_SIZE(idx));
mtk_ddp_write_relaxed(cmdq_pkt, offset, >cmdq_reg, ovl->regs,
  DISP_REG_OVL_OFFSET(idx));
-   mtk_ddp_write_relaxed(cmdq_pkt, addr, >cmdq_reg, ovl->regs,
- DISP_REG_OVL_ADDR(ovl, idx));
+
+   if (state->pending.is_sec) {
+   const struct drm_format_info *fmt_info = drm_format_info(fmt);
+   unsigned int buf_size = (pending->height - 1) * pending->pitch +
+   pending->width * fmt_info->cpp[0];
+
+   mtk_ddp_write_mask(cmdq_pkt, BIT(idx), >cmdq_reg, 
ovl->regs,
+  DISP_REG_OVL_SECURE, BIT(idx));
+   mtk_ddp_sec_write(cmdq_pkt, ovl->regs_pa + 
DISP_REG_OVL_ADDR(ovl, idx),
+ pending->addr, CMDQ_IWC_H_2_MVA, 0, buf_size, 
0);
+   } else {
+   mtk_ddp_write_mask(cmdq_pkt, 0, >cmdq_reg, ovl->regs,
+  DISP_REG_OVL_SECURE, BIT(idx));
+   mtk_ddp_write_relaxed(cmdq_pkt, addr, >cmdq_reg, ovl->regs,
+ DISP_REG_OVL_ADDR(ovl, idx));
+   }
 
if (is_afbc) {
mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, >cmdq_reg, 
ovl->regs,
@@ -529,6 +555,7 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev)
}
 
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+   priv->regs_pa = res->start;
priv->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->regs)) {
dev_err(dev, "failed to ioremap ovl\n");
diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c 
b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c

[PATCH v2 09/11] drm/mediatek: Add secure flow support to mediatek-drm

2023-10-22 Thread Jason-JH . Lin
To add secure flow support to mediatek-drm, each crtc has to
create a secure cmdq mailbox channel. Cmdq packets carrying the
display HW configuration will then be sent to the secure cmdq
mailbox channel and configured in the secure world.

Each crtc has to use the secure cmdq interface to configure some secure
settings for the display HW before sending cmdq packets to the secure
cmdq mailbox channel.

If any fb obtained from the current drm_atomic_state is secure, the crtc
will switch to the secure flow to configure the display HW.
If no fb in the current drm_atomic_state is secure, the crtc will
switch back to the normal flow.
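
The switching decision itself is conceptually simple (a sketch; the actual
iteration lives in the crtc update path below):

  int i;
  bool sec_on = false;

  for (i = 0; i < mtk_crtc->layer_nr; i++)
          sec_on |= mtk_plane_fb_is_secure(mtk_crtc->planes[i].state->fb);

  mtk_crtc->sec_on = sec_on;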

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_crtc.c  | 272 ++-
 drivers/gpu/drm/mediatek/mtk_drm_crtc.h  |   1 +
 drivers/gpu/drm/mediatek/mtk_drm_plane.c |   7 +
 3 files changed, 269 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c 
b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
index b6fa4ad2f94d..6c2cf339b923 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
@@ -56,6 +56,11 @@ struct mtk_drm_crtc {
u32 cmdq_event;
u32 cmdq_vblank_cnt;
wait_queue_head_t   cb_blocking_queue;
+
+   struct cmdq_client  sec_cmdq_client;
+   struct cmdq_pkt sec_cmdq_handle;
+   boolsec_cmdq_working;
+   wait_queue_head_t   sec_cb_blocking_queue;
 #endif
 
struct device   *mmsys_dev;
@@ -67,6 +72,7 @@ struct mtk_drm_crtc {
/* lock for display hardware access */
struct mutexhw_lock;
boolconfig_updating;
+   boolsec_on;
 };
 
 struct mtk_crtc_state {
@@ -109,6 +115,154 @@ static void mtk_drm_finish_page_flip(struct mtk_drm_crtc 
*mtk_crtc)
}
 }
 
+void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+   enum cmdq_sec_scenario sec_scn = CMDQ_MAX_SEC_COUNT;
+   int i;
+   struct mtk_ddp_comp *ddp_first_comp;
+   struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+   u64 sec_engine = 0; /* for hw engine write output secure fb */
+   u64 sec_port = 0; /* for larb port read input secure fb */
+
+   mutex_lock(_crtc->hw_lock);
+
+   if (!mtk_crtc->sec_cmdq_client.chan) {
+   pr_err("crtc-%d secure mbox channel is NULL\n", 
drm_crtc_index(crtc));
+   goto err;
+   }
+
+   if (!mtk_crtc->sec_on) {
+   pr_debug("crtc-%d is already disabled!\n", 
drm_crtc_index(crtc));
+   goto err;
+   }
+
+   mbox_flush(mtk_crtc->sec_cmdq_client.chan, 0);
+   mtk_crtc->sec_cmdq_handle.cmd_buf_size = 0;
+
+   if (mtk_crtc->sec_cmdq_handle.sec_data) {
+   struct cmdq_sec_data *sec_data;
+
+   sec_data = mtk_crtc->sec_cmdq_handle.sec_data;
+   sec_data->addrMetadataCount = 0;
+   sec_data->addrMetadatas = (uintptr_t)NULL;
+   }
+
+   /*
+* Secure path only support DL mode, so we just wait
+* the first path frame done here
+*/
+   cmdq_pkt_wfe(_crtc->sec_cmdq_handle, mtk_crtc->cmdq_event, false);
+
+   ddp_first_comp = mtk_crtc->ddp_comp[0];
+   for (i = 0; i < mtk_crtc->layer_nr; i++) {
+   struct drm_plane *plane = _crtc->planes[i];
+
+   sec_port |= mtk_ddp_comp_layer_get_sec_port(ddp_first_comp, i);
+
+   /* make sure secure layer off before switching secure state */
+   if (!mtk_plane_fb_is_secure(plane->state->fb)) {
+   struct mtk_plane_state *plane_state = 
to_mtk_plane_state(plane->state);
+
+   plane_state->pending.enable = false;
+   mtk_ddp_comp_layer_config(ddp_first_comp, i, 
plane_state,
+ _crtc->sec_cmdq_handle);
+   }
+   }
+
+   /* Disable secure path */
+   if (drm_crtc_index(crtc) == 0)
+   sec_scn = CMDQ_SEC_PRIMARY_DISP_DISABLE;
+   else if (drm_crtc_index(crtc) == 1)
+   sec_scn = CMDQ_SEC_SUB_DISP_DISABLE;
+
+   cmdq_sec_pkt_set_data(_crtc->sec_cmdq_handle, sec_engine, 
sec_engine, sec_scn);
+
+   cmdq_pkt_finalize(_crtc->sec_cmdq_handle);
+   dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
+  mtk_crtc->sec_cmdq_handle.pa_base,
+  mtk_crtc->sec_cmdq_handle.cmd_buf_size,
+  DMA_TO_DEVICE);
+
+   mtk_crtc->sec_cmdq_working = true;
+   mbox_send_message(mtk_crtc->sec_cmdq_client.chan, 
_crtc->sec_cmdq_handle);
+   mbox_client_txdone(mtk_crtc->sec_cmdq_client.chan, 0);
+
+   // Wait for sec state to be disabled by cmdq
+   

[PATCH v2 03/11] drm/mediatek: Add secure buffer control flow to mtk_drm_gem

2023-10-22 Thread Jason-JH . Lin
Add secure buffer control flow to mtk_drm_gem.

When user space passes the DRM_MTK_GEM_CREATE_ENCRYPTED flag and a size
to create a mtk_drm_gem object, mtk_drm_gem will find a matching-size
dma buffer from the secure dma-heap and bind it to the mtk_drm_gem object.

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_gem.c | 84 +-
 drivers/gpu/drm/mediatek/mtk_drm_gem.h |  4 ++
 2 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c 
b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index bcce723f257d..2064ccd7dde0 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -4,6 +4,8 @@
  */
 
 #include 
+#include 
+#include 
 #include 
 
 #include 
@@ -55,6 +57,80 @@ static struct mtk_drm_gem_obj *mtk_drm_gem_init(struct 
drm_device *dev,
return mtk_gem_obj;
 }
 
+struct mtk_drm_gem_obj *mtk_drm_gem_create_from_heap(struct drm_device *dev,
+const char *heap, size_t 
size)
+{
+   struct mtk_drm_private *priv = dev->dev_private;
+   struct mtk_drm_gem_obj *mtk_gem;
+   struct drm_gem_object *obj;
+   struct dma_heap *dma_heap;
+   struct dma_buf *dma_buf;
+   struct dma_buf_attachment *attach;
+   struct sg_table *sgt;
+   struct iosys_map map = {};
+   int ret;
+
+   mtk_gem = mtk_drm_gem_init(dev, size);
+   if (IS_ERR(mtk_gem))
+   return ERR_CAST(mtk_gem);
+
+   obj = _gem->base;
+
+   dma_heap = dma_heap_find(heap);
+   if (!dma_heap) {
+   DRM_ERROR("heap find fail\n");
+   goto err_gem_free;
+   }
+   dma_buf = dma_heap_buffer_alloc(dma_heap, size,
+   O_RDWR | O_CLOEXEC, 
DMA_HEAP_VALID_HEAP_FLAGS);
+   if (IS_ERR(dma_buf)) {
+   DRM_ERROR("buffer alloc fail\n");
+   dma_heap_put(dma_heap);
+   goto err_gem_free;
+   }
+   dma_heap_put(dma_heap);
+
+   attach = dma_buf_attach(dma_buf, priv->dma_dev);
+   if (IS_ERR(attach)) {
+   DRM_ERROR("attach fail, return\n");
+   dma_buf_put(dma_buf);
+   goto err_gem_free;
+   }
+   sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+   if (IS_ERR(sgt)) {
+   DRM_ERROR("map failed, detach and return\n");
+   dma_buf_detach(dma_buf, attach);
+   dma_buf_put(dma_buf);
+   goto err_gem_free;
+   }
+   obj->import_attach = attach;
+   mtk_gem->dma_addr = sg_dma_address(sgt->sgl);
+   mtk_gem->sg = sgt;
+   mtk_gem->size = dma_buf->size;
+
+   if (!strcmp(heap, "mtk_svp") || !strcmp(heap, "mtk_svp_cma")) {
+   /* secure buffer can not be mapped */
+   mtk_gem->sec = true;
+   } else {
+   ret = dma_buf_vmap(dma_buf, );
+   mtk_gem->kvaddr = map.vaddr;
+   if (ret) {
+   DRM_ERROR("map failed, ret=%d\n", ret);
+   dma_buf_unmap_attachment(attach, sgt, 
DMA_BIDIRECTIONAL);
+   dma_buf_detach(dma_buf, attach);
+   dma_buf_put(dma_buf);
+   mtk_gem->kvaddr = NULL;
+   }
+   }
+
+   return mtk_gem;
+
+err_gem_free:
+   drm_gem_object_release(obj);
+   kfree(mtk_gem);
+   return ERR_PTR(-ENOMEM);
+}
+
 struct mtk_drm_gem_obj *mtk_drm_gem_create(struct drm_device *dev,
   size_t size, bool alloc_kmap)
 {
@@ -218,7 +294,9 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct 
drm_device *dev,
if (IS_ERR(mtk_gem))
return ERR_CAST(mtk_gem);
 
+   mtk_gem->sec = !sg_page(sg->sgl);
mtk_gem->dma_addr = sg_dma_address(sg->sgl);
+   mtk_gem->size = attach->dmabuf->size;
mtk_gem->sg = sg;
 
return _gem->base;
@@ -290,7 +368,11 @@ int mtk_gem_create_ioctl(struct drm_device *dev, void 
*data,
struct drm_mtk_gem_create *args = data;
int ret;
 
-   mtk_gem = mtk_drm_gem_create(dev, args->size, false);
+   if (args->flags & DRM_MTK_GEM_CREATE_ENCRYPTED)
+   mtk_gem = mtk_drm_gem_create_from_heap(dev, "mtk_svp_cma", 
args->size);
+   else
+   mtk_gem = mtk_drm_gem_create(dev, args->size, false);
+
if (IS_ERR(mtk_gem))
return PTR_ERR(mtk_gem);
 
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h 
b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
index 90f3d2916ec5..ed4d23e252e9 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
@@ -27,9 +27,11 @@ struct mtk_drm_gem_obj {
void*cookie;
void*kvaddr;
dma_addr_t  dma_addr;
+   size_t  size;
unsigned long   dma_attrs;
struct sg_table *sg;
  

[PATCH v2 04/11] drm/mediatek: Add secure identify flag and function to mtk_drm_plane

2023-10-22 Thread Jason-JH . Lin
Add an is_sec flag to identify whether the current mtk_drm_plane is secure.
Add mtk_plane_fb_is_secure() to check whether the current drm_framebuffer
is secure.
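
A minimal sketch of the check, assuming the 'sec' flag added to
mtk_drm_gem_obj earlier in this series (the exact implementation may differ):

  bool mtk_plane_fb_is_secure(struct drm_framebuffer *fb)
  {
          struct drm_gem_object *gem = fb ? drm_gem_fb_get_obj(fb, 0) : NULL;

          return gem && to_mtk_gem_obj(gem)->sec;
  }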

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_plane.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.h 
b/drivers/gpu/drm/mediatek/mtk_drm_plane.h
index 99aff7da0831..fe60e20a6e1c 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.h
@@ -33,6 +33,7 @@ struct mtk_plane_pending_state {
boolasync_dirty;
boolasync_config;
enum drm_color_encoding color_encoding;
+   boolis_sec;
 };
 
 struct mtk_plane_state {
@@ -46,6 +47,7 @@ to_mtk_plane_state(struct drm_plane_state *state)
return container_of(state, struct mtk_plane_state, base);
 }
 
+bool mtk_plane_fb_is_secure(struct drm_framebuffer *fb);
 int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
   unsigned long possible_crtcs, enum drm_plane_type type,
   unsigned int supported_rotations, const u32 *formats,
-- 
2.18.0



[PATCH v2 08/11] drm/mediatek: Add secure layer config support for ovl_adaptor

2023-10-22 Thread Jason-JH . Lin
Add secure layer config support for ovl_adaptor and its sub-driver mdp_rdma.

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c |  3 +++
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.c | 11 ---
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.h |  2 ++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c 
b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
index 28a0bccfb0b9..274961222672 100644
--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
@@ -153,6 +153,9 @@ void mtk_ovl_adaptor_layer_config(struct device *dev, 
unsigned int idx,
rdma_config.pitch = pending->pitch;
rdma_config.fmt = pending->format;
rdma_config.color_encoding = pending->color_encoding;
+   rdma_config.source_size = (pending->height - 1) * pending->pitch +
+ pending->width * fmt_info->cpp[0];
+   rdma_config.is_sec = state->pending.is_sec;
mtk_mdp_rdma_config(rdma_l, _config, cmdq_pkt);
 
if (use_dual_pipe) {
diff --git a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c 
b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
index c3adaeefd551..1c4798e3bbc3 100644
--- a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
+++ b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
@@ -94,6 +94,7 @@ struct mtk_mdp_rdma {
void __iomem*regs;
struct clk  *clk;
struct cmdq_client_reg  cmdq_reg;
+   resource_size_t regs_pa;
 };
 
 static unsigned int rdma_fmt_convert(unsigned int fmt)
@@ -198,9 +199,12 @@ void mtk_mdp_rdma_config(struct device *dev, struct 
mtk_mdp_rdma_cfg *cfg,
else
mtk_ddp_write_mask(cmdq_pkt, 0, >cmdq_reg, priv->regs,
   MDP_RDMA_SRC_CON, FLD_OUTPUT_ARGB);
-
-   mtk_ddp_write_mask(cmdq_pkt, cfg->addr0, >cmdq_reg, priv->regs,
-  MDP_RDMA_SRC_BASE_0, FLD_SRC_BASE_0);
+   if (cfg->is_sec)
+   mtk_ddp_sec_write(cmdq_pkt, priv->regs_pa + MDP_RDMA_SRC_BASE_0,
+ cfg->addr0, CMDQ_IWC_H_2_MVA, 0, 
cfg->source_size, 0);
+   else
+   mtk_ddp_write_mask(cmdq_pkt, cfg->addr0, >cmdq_reg, 
priv->regs,
+  MDP_RDMA_SRC_BASE_0, FLD_SRC_BASE_0);
 
mtk_ddp_write_mask(cmdq_pkt, src_pitch_y, >cmdq_reg, priv->regs,
   MDP_RDMA_MF_BKGD_SIZE_IN_BYTE, FLD_MF_BKGD_WB);
@@ -285,6 +289,7 @@ static int mtk_mdp_rdma_probe(struct platform_device *pdev)
return -ENOMEM;
 
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+   priv->regs_pa = res->start;
priv->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->regs)) {
dev_err(dev, "failed to ioremap rdma\n");
diff --git a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h 
b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
index 9943ee3aac31..9add18e96319 100644
--- a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
+++ b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
@@ -15,6 +15,8 @@ struct mtk_mdp_rdma_cfg {
unsigned inty_top;
int fmt;
int color_encoding;
+   unsigned intsource_size;
+   unsigned intis_sec;
 };
 
 #endif // __MTK_MDP_RDMA_H__
-- 
2.18.0



[PATCH v2 06/11] drm/mediatek: Add get_sec_port interface to mtk_ddp_comp

2023-10-22 Thread Jason-JH . Lin
Add a get_sec_port interface to mtk_ddp_comp to get the secure port settings
from ovl and ovl_adaptor.
mediatek-drm will then use the secure cmdq driver to configure DRAM access
permission in the secure world according to these secure port settings.
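
The crtc simply ORs the ports of all its layers together before handing them
to the secure cmdq driver; roughly (sketch taken from the crtc changes later
in this series):

  int i;
  u64 sec_port = 0;

  for (i = 0; i < mtk_crtc->layer_nr; i++)
          sec_port |= mtk_ddp_comp_layer_get_sec_port(ddp_first_comp, i);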

Signed-off-by: Jason-JH.Lin 
---
 drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h 
b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
index 239a65140352..5831ad343e4f 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
@@ -81,6 +81,7 @@ struct mtk_ddp_comp_funcs {
void (*disconnect)(struct device *dev, struct device *mmsys_dev, 
unsigned int next);
void (*add)(struct device *dev, struct mtk_mutex *mutex);
void (*remove)(struct device *dev, struct mtk_mutex *mutex);
+   u64 (*get_sec_port)(struct mtk_ddp_comp *comp, unsigned int idx);
 };
 
 struct mtk_ddp_comp {
@@ -187,6 +188,14 @@ static inline void mtk_ddp_comp_layer_config(struct 
mtk_ddp_comp *comp,
comp->funcs->layer_config(comp->dev, idx, state, cmdq_pkt);
 }
 
+static inline u64 mtk_ddp_comp_layer_get_sec_port(struct mtk_ddp_comp *comp,
+ unsigned int idx)
+{
+   if (comp->funcs && comp->funcs->get_sec_port)
+   return comp->funcs->get_sec_port(comp, idx);
+   return 0;
+}
+
 static inline void mtk_ddp_gamma_set(struct mtk_ddp_comp *comp,
 struct drm_crtc_state *state)
 {
-- 
2.18.0



[PATCH] drm/sched: Convert the GPU scheduler to variable number of run-queues

2023-10-22 Thread Luben Tuikov
The GPU scheduler now has a variable number of run-queues, which are set up at
drm_sched_init() time. This way, each driver announces how many run-queues it
requires (supports) for each GPU scheduler it creates. Note that run-queues
correspond to scheduler "priorities", thus if the number of run-queues is set
to 1 at drm_sched_init(), then that scheduler supports a single run-queue,
i.e. a single "priority". If a driver further sets a single entity per
run-queue, then this creates a 1-to-1 correspondence between a scheduler and
a scheduled entity.
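
For example, a driver wanting a single run-queue (and, with a single entity,
a 1-to-1 scheduler/entity mapping) would now initialize its scheduler along
these lines (a sketch; my_sched_ops, hw_submission_limit and timeout_ms are
illustrative names):

  ret = drm_sched_init(&sched, &my_sched_ops,
                       1,                       /* num_rqs: one run-queue */
                       hw_submission_limit, 0,
                       msecs_to_jiffies(timeout_ms), NULL,
                       NULL, "my-ring", dev);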

Cc: Lucas Stach 
Cc: Russell King 
Cc: Qiang Yu 
Cc: Rob Clark 
Cc: Abhinav Kumar 
Cc: Dmitry Baryshkov 
Cc: Danilo Krummrich 
Cc: Matthew Brost 
Cc: Boris Brezillon 
Cc: Alex Deucher 
Cc: Christian König 
Cc: Emma Anholt 
Cc: etna...@lists.freedesktop.org
Cc: l...@lists.freedesktop.org
Cc: linux-arm-...@vger.kernel.org
Cc: freedr...@lists.freedesktop.org
Cc: nouv...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  4 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c|  1 +
 drivers/gpu/drm/lima/lima_sched.c  |  4 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c   |  5 +-
 drivers/gpu/drm/nouveau/nouveau_sched.c|  1 +
 drivers/gpu/drm/panfrost/panfrost_job.c|  1 +
 drivers/gpu/drm/scheduler/sched_entity.c   | 18 +-
 drivers/gpu/drm/scheduler/sched_main.c | 74 ++
 drivers/gpu/drm/v3d/v3d_sched.c|  5 ++
 include/drm/gpu_scheduler.h|  9 ++-
 11 files changed, 98 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2b8356699f235d..251995a90bbe69 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2280,6 +2280,7 @@ static int amdgpu_device_init_schedulers(struct 
amdgpu_device *adev)
}
 
r = drm_sched_init(>sched, _sched_ops,
+  DRM_SCHED_PRIORITY_COUNT,
   ring->num_hw_submission, 0,
   timeout, adev->reset_domain->wq,
   ring->sched_score, ring->name,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 78476bc75b4e1d..1f357198533f3e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -325,8 +325,8 @@ void amdgpu_job_stop_all_jobs_on_sched(struct 
drm_gpu_scheduler *sched)
int i;
 
/* Signal all jobs not yet scheduled */
-   for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; 
i--) {
-   struct drm_sched_rq *rq = >sched_rq[i];
+   for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+   struct drm_sched_rq *rq = sched->sched_rq[i];
spin_lock(>lock);
list_for_each_entry(s_entity, >entities, list) {
while ((s_job = 
to_drm_sched_job(spsc_queue_pop(_entity->job_queue {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c 
b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 345fec6cb1a4c1..9b79f218e21afc 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -135,6 +135,7 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
int ret;
 
ret = drm_sched_init(>sched, _sched_ops,
+DRM_SCHED_PRIORITY_COUNT,
 etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
 msecs_to_jiffies(500), NULL, NULL,
 dev_name(gpu->dev), gpu->dev);
diff --git a/drivers/gpu/drm/lima/lima_sched.c 
b/drivers/gpu/drm/lima/lima_sched.c
index ffd91a5ee29901..295f0353a02e58 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -488,7 +488,9 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, 
const char *name)
 
INIT_WORK(>recover_work, lima_sched_recover_work);
 
-   return drm_sched_init(>base, _sched_ops, 1,
+   return drm_sched_init(>base, _sched_ops,
+ DRM_SCHED_PRIORITY_COUNT,
+ 1,
  lima_job_hang_limit,
  msecs_to_jiffies(timeout), NULL,
  NULL, name, pipe->ldev->dev);
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c 
b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 40c0bc35a44cee..95257ab0185dc4 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -95,8 +95,9 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu 
*gpu, int id,
sched_timeout = MAX_SCHEDULE_TIMEOUT;
 
ret = drm_sched_init(>sched, _sched_ops,
-   num_hw_submissions, 0, 

Re: [PATCH v3 2/2] drm/uapi: add explicit virtgpu context debug name

2023-10-22 Thread Dmitry Osipenko
On 10/18/23 21:17, Gurchetan Singh wrote:
> There are two problems with the current method of determining the
> virtio-gpu debug name.
> 
> 1) TASK_COMM_LEN is defined to be 16 bytes only, and this is a
>Linux kernel idiom (see PR_SET_NAME + PR_GET_NAME). Though,
>Android/FreeBSD get around this via setprogname(..)/getprogname(..)
>in libc.
> 
>On Android, names longer than 16 bytes are common.  For example,
>one often encounters a program like "com.android.systemui".
> 
>The virtio-gpu spec allows the debug name to be up to 64 bytes, so
>ideally userspace should be able to set debug names up to 64 bytes.
> 
> 2) The current implementation determines the debug name using whatever
>task initiated virtgpu.  This is could be a "RenderThread" of a
>larger program, when we actually want to propagate the debug name
>of the program.
> 
> To fix these issues, add a new CONTEXT_INIT param that allows userspace
> to set the debug name when creating a context.
> 
> It takes a null-terminated C-string as the param value. The length of the
> string (excluding the terminator) **should** be <= 64 bytes.  Otherwise,
> the debug_name will be truncated to 64 bytes.
> 
> Link to open-source userspace:
> https://android-review.googlesource.com/c/platform/hardware/google/gfxstream/+/2787176
> 
> Signed-off-by: Gurchetan Singh 
> Reviewed-by: Josh Simonot 
> ---
> Fixes suggested by Dmitry Osipenko
> v2:
> - Squash implementation and UAPI change into one commit
> - Avoid unnecessary casts
> - Use bool when necessary
> v3:
> - Use DEBUG_NAME_MAX_LEN - 1 when copying string
> 
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  5 
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 39 ++
>  include/uapi/drm/virtgpu_drm.h |  2 ++
>  3 files changed, 40 insertions(+), 6 deletions(-)

Gerd, do you have objections to this UAPI change?

-- 
Best regards,
Dmitry



Re: [PATCH] dt-bindings: display: ssd132x: Remove '-' before compatible enum

2023-10-22 Thread Rob Herring


On Sat, 21 Oct 2023 00:30:17 +0200, Javier Martinez Canillas wrote:
> This is a leftover from when the binding schema had the compatible string
> property enum as a 'oneOf' child and the '-' was not removed when 'oneOf'
> got dropped during the binding review process.
> 
> Reported-by: Rob Herring 
> Closes: 
> https://lore.kernel.org/dri-devel/cal_jsq+h8dcnpkqhokqoodcc8+qi3m0prxrfkz_y4v37ymj...@mail.gmail.com/
> Signed-off-by: Javier Martinez Canillas 
> ---
> 
>  .../devicetree/bindings/display/solomon,ssd132x.yaml  | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 

Reviewed-by: Rob Herring 



[PATCH v3] Documentation/gpu: VM_BIND locking document

2023-10-22 Thread Thomas Hellström
Add the first version of the VM_BIND locking document which is
intended to be part of the xe driver upstreaming agreement.

The document describes and discusses the locking used during exec
functions, eviction and for userptr gpu-vmas. The intention is to use the
same nomenclature as drm-vm-bind-async.rst.

v2:
- s/gvm/gpu_vm/g (Rodrigo Vivi)
- Clarify the userptr seqlock with a pointer to mm/mmu_notifier.c
  (Rodrigo Vivi)
- Adjust commit message accordingly.
- Add SPDX license header.

v3:
- Large update to align with the drm_gpuvm manager locking
- Add "Efficient userptr gpu_vma exec function iteration" section
- Add "Locking at bind- and unbind time" section.

Cc: Rodrigo Vivi 
Signed-off-by: Thomas Hellström 
---
 Documentation/gpu/drm-vm-bind-locking.rst | 494 ++
 1 file changed, 494 insertions(+)
 create mode 100644 Documentation/gpu/drm-vm-bind-locking.rst

diff --git a/Documentation/gpu/drm-vm-bind-locking.rst 
b/Documentation/gpu/drm-vm-bind-locking.rst
new file mode 100644
index ..c290ff4287fb
--- /dev/null
+++ b/Documentation/gpu/drm-vm-bind-locking.rst
@@ -0,0 +1,494 @@
+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+
+===
+VM_BIND locking
+===
+
+This document attempts to describe what's needed to get VM_BIND locking right,
+including the userptr mmu_notifier locking and it will also discuss some
+optimizations to get rid of the looping through of all userptr mappings and
+external / shared object mappings that is needed in the simplest
+implementation. It will also discuss some implications for faulting gpu_vms.
+
+Nomenclature
+
+
+* ``Context``: GPU execution context.
+* ``gpu_vm``: Abstraction of a virtual GPU address space with
+  meta-data. Typically one per client (DRM file-private), or one per
+  context.
+* ``gpu_vma``: Abstraction of a GPU address range within a gpu_vm with
+  associated meta-data. The backing storage of a gpu_vma can either be
+  a GEM object or anonymous pages mapped also into the CPU
+  address space for the process.
+* gpu_vm_bo: Abstracts the association of a GEM object and
+  a VM. Note that if only one gpu_vma per vm and buffer object were
+  allowed, the state stored with a gpu_vm_bo could just as well have
+  been stored with the gpu_vma. For the purpose of this document, each
+  GEM object maintains a list of gpu_vm_bos, and each gpu_vm_bo
+  maintains a list of gpu_vmas.
+* ``userptr gpu_vma or just userptr``: A gpu_vma, the backing store of
+  which is anonymous pages as described above.
+* ``revalidating``: Revalidating a gpu_vma means making the latest version
+  of the backing store resident and making sure the gpu_vma's
+  page-table entries point to that backing store.
+* ``dma_fence``: A struct dma_fence that is similar to a struct completion
+  and which tracks GPU activity. When the GPU activity is finished,
+  the dma_fence signals.
+* ``dma_resv``: A struct dma_resv (AKA reservation object) that is used
+  to track GPU activity in the form of multiple dma_fences on a
+  gpu_vm or a GEM object. The dma_resv contains an array / list
+  of dma_fences and a lock that needs to be held when adding
+  additional dma_fences to the dma_resv. The lock is of a type that
+  allows deadlock-safe locking of multiple dma_resvs in arbitrary order.
+* ``exec function``: An exec function is a function that revalidates all
+  affected gpu_vmas, submits a GPU command batch and registers the
+  dma_fence representing the GPU command's activity with all affected
+  dma_resvs. For completeness, although not covered by this document,
+  it's worth mentioning that an exec function may also be the
+  revalidation worker that is used by some drivers in compute /
+  long-running mode.
+* ``local object``: A GEM object which is local to a gpu_vm. Shared gem
+  objects also share the gpu_vm's dma_resv.
+* ``shared object``: AKA external object: A GEM object which may be shared
+  by multiple gpu_vms and whose backing storage may be shared with
+  other drivers.
+
+
+Locks used and locking orders
+=
+
+One of the benefits of VM_BIND is that local GEM objects share the gpu_vm's
+dma_resv object and hence the dma_resv lock. So even with a huge
+number of local GEM objects, only one lock is needed to make the exec
+sequence atomic.
+
+The following locks and locking orders are used:
+
+* The ``gpu_vm->lock`` (optionally an rwsem). Protects how the gpu_vm is
+  partitioned into gpu_vmas. It can also protect the gpu_vm's list of
+  userptr gpu_vmas. With a CPU mm analogy this would correspond to the
+  mmap_lock.
+* The ``userptr_seqlock``. This lock is taken in read mode for each
+  userptr gpu_vma on the gpu_vm's userptr list, and in write mode during mmu
+  notifier invalidation. This is not a real seqlock but described in
+  ``mm/mmu_notifier.c`` as a "Collision-retry read-side/write-side
+  'lock' a lot like a seqcount, however this allows multiple
+  write-sides to hold it at 

[Bug 201957] amdgpu: ring gfx timeout

2023-10-22 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=201957

--- Comment #98 from jeremy boyd (j...@jerdboyd.com) ---
(In reply to jeremy boyd from comment #97)
> Hello, I'm having this same issue with my thinkpad z16 laptop, Ryzen 6850H
> and Radeon RX 6500M graphics card.
> 
> I do not use the laptop for gaming but for audio and video editing. I have
> not had trouble with any video editing software but I can easily reproduce
> the issue by loading up Ardour or Mixbus32C and either leaving it alone or
> working. After 15 minutes the screen freezes although audio will continue
> for a time. At this point Ardour or Mixbus will close and I can continue
> using the machine. If I load up either program again it will fail again,
> usually within a couple minutes and the whole laptop will freeze up until I
> ctrl-alt-F2 to get to a terminal prompt.
> 
> The issue always happens when Im recording audio with an HDMI device
> attached and 90% of the time without HDMI
> 
> I will attempt to set this kernel parameter amdgpu.mcbp=0 and report back.

I can confirm that this did not solve my problem. I tested my system out for
several hours with no issue and thought that perhaps it had been solved but
while doing a libreoffice presentation with my audio software running it
happened again. here is the error from journalctl

Oct 22 09:40:01 fedora kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring
gfx_0.0.0 timeout, signaled seq=433823, emitted seq=433825
Oct 22 09:40:01 fedora kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR*
Process information: process Xorg pid 2189 thread Xorg:cs0 pid 2319
Oct 22 09:40:01 fedora kernel: amdgpu :67:00.0: amdgpu: GPU reset begin!
Oct 22 09:40:02 fedora kernel: amdgpu :67:00.0: amdgpu: MODE2 reset
Oct 22 09:40:02 fedora kernel: amdgpu :67:00.0: amdgpu: GPU reset
succeeded, trying to resume


Re: [PATCH v2 2/2] accel/qaic: Support MHI QAIC_TIMESYNC channel

2023-10-22 Thread Stanislaw Gruszka
On Mon, Oct 16, 2023 at 11:01:14AM -0600, Jeffrey Hugo wrote:
> From: Pranjal Ramajor Asha Kanojiya 
> 
> Use QAIC_TIMESYNC MHI channel to send UTC time to device in SBL
> environment. Remove support for QAIC_TIMESYNC MHI channel in AMSS
> environment as it is not used in that environment.
> 
> Signed-off-by: Pranjal Ramajor Asha Kanojiya 
> Reviewed-by: Jeffrey Hugo 
> Reviewed-by: Carl Vanderlip 
> Signed-off-by: Jeffrey Hugo 
Reviewed-by: Stanislaw Gruszka 


Re: [PATCH v2 1/2] accel/qaic: Add support for periodic timesync

2023-10-22 Thread Stanislaw Gruszka
On Mon, Oct 16, 2023 at 11:01:13AM -0600, Jeffrey Hugo wrote:
> From: Ajit Pal Singh 
> 
> Device and Host have a time synchronization mechanism that happens once
> during boot when device is in SBL mode. After that, in mission-mode there
> is no timesync. In an experiment after continuous operation, device time
> drifted w.r.t. host by approximately 3 seconds per day. This drift leads
> to mismatch in timestamp of device and Host logs. To correct this
> implement periodic timesync in driver. This timesync is carried out via
> QAIC_TIMESYNC_PERIODIC MHI channel.
> 
> Signed-off-by: Ajit Pal Singh 
> Signed-off-by: Pranjal Ramajor Asha Kanojiya 
> Reviewed-by: Jeffrey Hugo 
> Reviewed-by: Carl Vanderlip 
> Reviewed-by: Pranjal Ramajor Asha Kanojiya 
> Signed-off-by: Jeffrey Hugo 
Reviewed-by: Stanislaw Gruszka 

> @@ -586,8 +587,16 @@ static int __init qaic_init(void)
>   goto free_pci;
>   }
>  
> + ret = qaic_timesync_init();
> + if (ret) {
> + pr_debug("qaic: qaic_timesync_init failed %d\n", ret);
> + goto free_mhi;
I would print at error level here. Or if timesync is optional do not error 
exit. 

> +#ifdef readq
> +static u64 read_qtimer(const volatile void __iomem *addr)
> +{
> + return readq(addr);
> +}
> +#else
> +static u64 read_qtimer(const volatile void __iomem *addr)
> +{
> + u64 low, high;
> +
> + low = readl(addr);
> + high = readl(addr + sizeof(u32));
> + return low | (high << 32);
> +}
If that's only for compile on 32-bit PowerPC, I think would be better
to limit supported architectures on Kconfig. 

Regards
Stanislaw


Re: [RFC PATCH 03/10] drm/mipi-dsi: add API for manual control over the DSI link power state

2023-10-22 Thread Dmitry Baryshkov
On Thu, 19 Oct 2023 at 14:42, Alexander Stein
 wrote:
>
> Hi,
>
> On Thursday, 19 October 2023 at 13:19:51 CEST, Dmitry Baryshkov wrote:
> > On Thu, 19 Oct 2023 at 12:26, Maxime Ripard  wrote:
> > > On Mon, Oct 16, 2023 at 07:53:48PM +0300, Dmitry Baryshkov wrote:
> > > > The MIPI DSI links do not fully fall into the DRM callbacks model.
> > >
> > > Explaining why would help
> >
> > A kind of explanation comes afterwards, but probably I should change
> > the order of the phrases and expand it:
> >
> > The atomic_pre_enable / atomic_enable and correspondingly
> > atomic_disable / atomic_post_disable expect that the bridge links
> > follow a simple paradigm: either it is off, or it is on and streaming
> > video. Thus, it is fine to just enable the link at the enable time,
> > doing some preparations during the pre_enable.
> >
> > But then it causes several issues with DSI. First, some of the DSI
> > bridges and most of the DSI panels would like to send commands over
> > the DSI link to setup the device. Next, some of the DSI hosts have
> > limitations on sending the commands. The proverbial sunxi DSI host can
> > not send DSI commands after the video stream has started. Thus most of
> > the panels have opted to send all DSI commands from pre_enable (or
> > prepare) callback (before the video stream has started).
> >
> > However this leaves no good place for the DSI host to power up the DSI
> > link. By default the host's pre_enable callback is called after the
> > DSI bridge's pre_enable. For quite some time we were powering up the
> > DSI link from mode_set. This doesn't look fully correct. And also we
> > got into the issue with ps8640 bridge, which requires for the DSI link
> > to be quiet / unpowered at the bridge's reset time.
>
> There are also bridges (e.g. tc358767) which require DSI-LP11 upon bridge
> reset. And additionally this DSI-(e)DP bridge requires LP11 while accessing
> DP-AUX channel, e.g. reading EDID. So bridges need at least some control over
> DSI line state.

For sending commands in LP11 it is typical to toggle the
MIPI_DSI_MODE_LPM flag, for example see panel-=jdi-lt070me05000.c or
some other drives. It might be a good idea to make that more explicit.
All suggestions here would be appreciated.
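
For reference, the usual pattern in a panel driver looks roughly like this
(a sketch; the DCS command is just an example):

  dsi->mode_flags |= MIPI_DSI_MODE_LPM;
  ret = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_ON, NULL, 0);
  dsi->mode_flags &= ~MIPI_DSI_MODE_LPM;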

>
> > Dave has come with the idea of pre_enable_prev_first /
> > prepare_prev_first flags, which attempt to solve the issue by
> > reversing the order of pre_enable callbacks. This mostly solves the
> > issue. However during this cycle it became obvious that this approach
> > is not ideal too. There is no way for the DSI host to know whether the
> > DSI panel / bridge has been updated to use this flag or not, see the
> > discussion at [1].
> >
> > Thus comes this proposal. It allows for the panels to explicitly bring
> > the link up and down at the correct time, it supports automatic use
> > case, where no special handling is required. And last, but not least,
> > it allows the DSI host to note that the bridge / panel were not
> > updated to follow new protocol and thus the link should be powered on
> > at the mode_set time. This leaves us with the possibility of dropping
> > support for this workaround once all in-kernel drivers are updated.
>
> I want to use this series to support tc358767 properly, because
> pre_enable_prev_first is not enough, see AUX channel above. I hope I'll find
> any time soon.
>
> Best regards,
> Alexander
>
> >
> > > > The drm_bridge_funcs abstraction.
> > >
> > > Is there a typo or missing words?
> >
> > missing comma in front of The
> >
> > > > Instead of having just two states (off and on) the DSI hosts have
> > > > separate LP-11 state. In this state the host is on, but the video
> > > > stream is not yet enabled.
> > > >
> > > > Introduce API that allows DSI bridges / panels to control the DSI host
> > > > power up state.
> >
> > [1]
> > https://lore.kernel.org/dri-devel/6c0dd9fd-5d8e-537c-804f-7a03d5899a07@lina
> > ro.org/
> > > > Signed-off-by: Dmitry Baryshkov 
> > > > ---
> > > >
> > > >  drivers/gpu/drm/drm_mipi_dsi.c | 31 +++
> > > >  include/drm/drm_mipi_dsi.h | 29 +
> > > >  2 files changed, 56 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_mipi_dsi.c
> > > > b/drivers/gpu/drm/drm_mipi_dsi.c index 14201f73aab1..c467162cb7d8
> > > > 100644
> > > > --- a/drivers/gpu/drm/drm_mipi_dsi.c
> > > > +++ b/drivers/gpu/drm/drm_mipi_dsi.c
> > > > @@ -428,6 +428,37 @@ int devm_mipi_dsi_attach(struct device *dev,
> > > >
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(devm_mipi_dsi_attach);
> > > >
> > > > +bool mipi_dsi_host_power_control_available(struct mipi_dsi_host *host)
> > > > +{
> > > > + const struct mipi_dsi_host_ops *ops = host->ops;
> > > > +
> > > > + return ops && ops->power_up;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(mipi_dsi_host_power_control_available);
> > > > +
> > > > +int mipi_dsi_host_power_up(struct mipi_dsi_host *host)
> > > > +{
> > > > + const 

Re: [PATCH v2] accel/qaic: Enable 1 MSI fallback mode

2023-10-22 Thread Stanislaw Gruszka
On Mon, Oct 16, 2023 at 11:00:36AM -0600, Jeffrey Hugo wrote:
> From: Carl Vanderlip 
> 
> Several virtualization use-cases either don't support 32 MultiMSIs
> (Xen/VMware) or have significant drawbacks to their use (KVM's vIOMMU,
> which is required to support 32 MSI, needs to allocate an alternate
> system memory space for each device using vIOMMU (e.g. 8GB VM mem and
> 2 cards => 8 + 2 * 8 = 24GB host memory required)). Support these
> cases by enabling a 1 MSI fallback mode.
> 
> Whenever all 32 MSIs requested are not available, a second request for
> a single MSI is made. Its success is the initiator of single MSI mode.
> This mode causes all interrupts generated by the device to be directed
> to the 0th MSI (firmware >=v1.10 will do this as a response to the PCIe
> MSI capability configuration). Likewise, all interrupt handlers for the
> device are registered to the 0th MSI.
> 
> Since the DBC interrupt handler checks if the DBC is in use or if
> there is any pending changes, the 'spurious' interrupts are
> disregarded. If there is work to be done, the standard threaded IRQ
> handler is dispatched.
> 
> On every interrupt, the MHI handler wakes up its threaded interrupt
> handler, and attempts to wake any waiters for MHI state events.
> 
> Performance is within +-0.6% for test cases that typify real world
> use. Larger differences ([-4,+132]%, avg +47%) exist for very simple
> tasks (e.g. addition) compiled for single NSPs. It is assumed that the
> small work and many interrupts typically cause contention (e.g. 16 NSPs
> vs 4 CPUs), as evidenced by the standard deviation between runs also
> decreasing (r=-0.48 between delta(Performace_test) and
> delta(StdDev_test/Avg_test))
> 
> Signed-off-by: Carl Vanderlip 
> Reviewed-by: Pranjal Ramajor Asha Kanojiya 
> Reviewed-by: Jeffrey Hugo 
> Signed-off-by: Jeffrey Hugo 
Reviewed-by: Stanislaw Gruszka 


Re: [PATCH v3 1/3] pwm: make it possible to apply pwm changes in atomic context

2023-10-22 Thread Sean Young
Hi Hans,

On Sat, Oct 21, 2023 at 11:08:22AM +0200, Hans de Goede wrote:
> On 10/19/23 12:51, Uwe Kleine-König wrote:
> > On Wed, Oct 18, 2023 at 03:57:48PM +0200, Hans de Goede wrote:
> >> On 10/17/23 11:17, Sean Young wrote:
> >>> Some drivers require sleeping, for example if the pwm device is connected
> >>> over i2c. The pwm-ir-tx requires precise timing, and sleeping causes havoc
> >>> with the generated IR signal when sleeping occurs.
> >>>
> >>> This patch makes it possible to use pwm when the driver does not sleep,
> >>> by introducing the pwm_can_sleep() function.
> >>>
> >>> Signed-off-by: Sean Young 
> >>
> >> I have no objection to this patch by itself, but it seems a bit
> >> of unnecessary churn to change all current callers of pwm_apply_state()
> >> to a new API.
> > 
> > The idea is to improve the semantic of the function name, see
> > https://lore.kernel.org/linux-pwm/20231013180449.mcdmklbsz2rly...@pengutronix.de
> > for more context.
> 
> Hmm, so the argument here is that the GPIO API has this, but GPIOs
> generally speaking can be set atomically, so there not being able
> to set it atomically is special.
> 
> OTOH we have many many many other kernel functions which may sleep
> and we don't all postfix them with _can_sleep.
> 
> And for PWM controllers pwm_apply_state is IMHO sorta expected to
> sleep. Many of these are attached over I2C so things will sleep,
> others have a handshake to wait for the current dutycycle to
> end before you can apply a second change on top of an earlier
> change during the current dutycycle which often also involves
> sleeping.
> 
> So the natural/expeected thing for pwm_apply_state() is to sleep
> and thus it does not need a postfix for this IMHO.

Most pwm drivers look like they can be made to work in atomic context,
I think. Like you say this is not the case for all of them. Whatever
we choose to be the default for pwm_apply_state(), we should have a
clear function name for the alternative. This is essentially why
pwm_apply_cansleep() was picked.

The alternative to pwm_apply_cansleep() is to have a function name
which implies it can be used from atomic context. However, 
pwm_apply_atomic() is not great because the "atomic" could be
confused with the PWM atomic API, not the kernel process/atomic
context.

So what should the non-sleeping function be called then? 
 - pwm_apply_cannotsleep() 
 - pwm_apply_nosleep()
 - pwm_apply_nonsleeping()
 - pwm_apply_atomic_context()

> > I think it's very subjective if you consider this
> > churn or not.
> 
> I consider it churn because I don't think adding a postfix
> for what is the default/expected behavior is a good idea
> (with GPIOs not sleeping is the expected behavior).
> 
> I agree that this is very subjective and very much goes
> into the territory of bikeshedding. So please consider
> the above my 2 cents on this and lets leave it at that.

You have a valid point. Let's focus on having descriptive function names.

> > While it's nice to have every caller converted in a single
> > step, I'd go for
> > 
> > #define pwm_apply_state(pwm, state) pwm_apply_cansleep(pwm, state)
> > 
> > , keep that macro for a while and convert all users step by step. This
> > way we don't needlessly break oot code and the changes to convert to the
> > new API can go via their usual trees without time pressure.
> 
> I don't think there are enough users of pwm_apply_state() to warrant
> such an exercise.
> 
> So if people want to move ahead with the _can_sleep postfix addition
> (still not a fan) here is my acked-by for the drivers/platform/x86
> changes, for merging this through the PWM tree in a single commit:
> 
> Acked-by: Hans de Goede 

Thanks,

Sean


Re: [PATCH v7 4/6] drm: Refuse to async flip with atomic prop changes

2023-10-22 Thread Michel Dänzer
On 10/17/23 14:16, Simon Ser wrote:
> After discussing with André it seems like we missed a plane type check
> here. We need to make sure FB_ID changes are only allowed on primary
> planes.

Can you elaborate why that's needed?


-- 
Earthling Michel Dänzer|  https://redhat.com
Libre software enthusiast  | Mesa and Xwayland developer



Re: [PATCH] dt-bindings: display: ssd132x: Remove '-' before compatible enum

2023-10-22 Thread Conor Dooley
On Sat, Oct 21, 2023 at 12:30:17AM +0200, Javier Martinez Canillas wrote:
> This is a leftover from when the binding schema had the compatible string
> property enum as a 'oneOf' child and the '-' was not removed when 'oneOf'
> got dropped during the binding review process.
> 
> Reported-by: Rob Herring 
> Closes: 
> https://lore.kernel.org/dri-devel/cal_jsq+h8dcnpkqhokqoodcc8+qi3m0prxrfkz_y4v37ymj...@mail.gmail.com/
> Signed-off-by: Javier Martinez Canillas 

Acked-by: Conor Dooley 

Thanks,
Conor.

> ---
> 
>  .../devicetree/bindings/display/solomon,ssd132x.yaml  | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/display/solomon,ssd132x.yaml 
> b/Documentation/devicetree/bindings/display/solomon,ssd132x.yaml
> index 0aa41bd9ddca..37975ee61c5a 100644
> --- a/Documentation/devicetree/bindings/display/solomon,ssd132x.yaml
> +++ b/Documentation/devicetree/bindings/display/solomon,ssd132x.yaml
> @@ -11,10 +11,10 @@ maintainers:
>  
>  properties:
>compatible:
> -- enum:
> -- solomon,ssd1322
> -- solomon,ssd1325
> -- solomon,ssd1327
> +enum:
> +  - solomon,ssd1322
> +  - solomon,ssd1325
> +  - solomon,ssd1327
>  
>  required:
>- compatible
> -- 
> 2.41.0
> 




[Bug 218015] amdgpu powerplay: Spontaneous changes to power_dpm_force_performance_level

2023-10-22 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=218015

Roman Žilka (roman.zi...@gmail.com) changed:

   What|Removed |Added

URL||https://gitlab.freedesktop.
   ||org/drm/amd/-/issues/2931


[Bug 218015] amdgpu powerplay: Spontaneous changes to power_dpm_force_performance_level

2023-10-22 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=218015

--- Comment #5 from Roman Žilka (roman.zi...@gmail.com) ---
Thanks, I passed the info on.
https://gitlab.freedesktop.org/drm/amd/-/issues/2931
