[Freedreno] [PATCH v7 2/8] drm/display/dsc: add helper to set semi-const parameters

2023-05-09 Thread Jessica Zhang
From: Dmitry Baryshkov 

Add a helper to set the config values which are typically constant across
operating modes (table E-4 of the standard), as well as mux_word_size
(which is constant according to section 3.5.2).

Signed-off-by: Dmitry Baryshkov 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/display/drm_dsc_helper.c | 22 ++++++++++++++++++++++
 include/drm/display/drm_dsc_helper.h |  1 +
 2 files changed, 23 insertions(+)

diff --git a/drivers/gpu/drm/display/drm_dsc_helper.c b/drivers/gpu/drm/display/drm_dsc_helper.c
index 65e810a54257..b9c4e10ced41 100644
--- a/drivers/gpu/drm/display/drm_dsc_helper.c
+++ b/drivers/gpu/drm/display/drm_dsc_helper.c
@@ -270,6 +270,28 @@ void drm_dsc_pps_payload_pack(struct drm_dsc_picture_parameter_set *pps_payload,
 }
 EXPORT_SYMBOL(drm_dsc_pps_payload_pack);
 
+/**
+ * drm_dsc_set_const_params() - Set DSC parameters considered typically
+ * constant across operation modes
+ *
+ * @vdsc_cfg:
+ * DSC Configuration data partially filled by driver
+ */
+void drm_dsc_set_const_params(struct drm_dsc_config *vdsc_cfg)
+{
+   if (!vdsc_cfg->rc_model_size)
+   vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
+   vdsc_cfg->rc_edge_factor = DSC_RC_EDGE_FACTOR_CONST;
+   vdsc_cfg->rc_tgt_offset_high = DSC_RC_TGT_OFFSET_HI_CONST;
+   vdsc_cfg->rc_tgt_offset_low = DSC_RC_TGT_OFFSET_LO_CONST;
+
+   if (vdsc_cfg->bits_per_component <= 10)
+   vdsc_cfg->mux_word_size = DSC_MUX_WORD_SIZE_8_10_BPC;
+   else
+   vdsc_cfg->mux_word_size = DSC_MUX_WORD_SIZE_12_BPC;
+}
+EXPORT_SYMBOL(drm_dsc_set_const_params);
+
 /* From DSC_v1.11 spec, rc_parameter_Set syntax element typically constant */
 static const u16 drm_dsc_rc_buf_thresh[] = {
896, 1792, 2688, 3584, 4480, 5376, 6272, 6720, 7168, 7616,
diff --git a/include/drm/display/drm_dsc_helper.h b/include/drm/display/drm_dsc_helper.h
index 422135a33d65..bfa7f3acafcb 100644
--- a/include/drm/display/drm_dsc_helper.h
+++ b/include/drm/display/drm_dsc_helper.h
@@ -21,6 +21,7 @@ void drm_dsc_dp_pps_header_init(struct dp_sdp_header *pps_header);
 int drm_dsc_dp_rc_buffer_size(u8 rc_buffer_block_size, u8 rc_buffer_size);
 void drm_dsc_pps_payload_pack(struct drm_dsc_picture_parameter_set *pps_sdp,
  const struct drm_dsc_config *dsc_cfg);
+void drm_dsc_set_const_params(struct drm_dsc_config *vdsc_cfg);
 void drm_dsc_set_rc_buf_thresh(struct drm_dsc_config *vdsc_cfg);
int drm_dsc_setup_rc_params(struct drm_dsc_config *vdsc_cfg, enum drm_dsc_params_kind kind);
 int drm_dsc_compute_rc_parameters(struct drm_dsc_config *vdsc_cfg);

-- 
2.40.1



[Freedreno] [PATCH v7 7/8] drm/msm/dsi: Use MSM and DRM DSC helper methods

2023-05-09 Thread Jessica Zhang
Use MSM and DRM DSC helper methods to configure DSC for DSI.

Signed-off-by: Jessica Zhang 
Reviewed-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/dsi/dsi_host.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 74d38f90398a..508577c596ff 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -28,6 +28,7 @@
 #include "dsi.xml.h"
 #include "sfpb.xml.h"
 #include "dsi_cfg.h"
+#include "msm_dsc_helper.h"
 #include "msm_kms.h"
 #include "msm_gem.h"
 #include "phy/dsi_phy.h"
@@ -848,7 +849,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
/* first calculate dsc parameters and then program
 * compress mode registers
 */
-   slice_per_intf = DIV_ROUND_UP(hdisplay, dsc->slice_width);
+   slice_per_intf = msm_dsc_get_slice_per_intf(dsc, hdisplay);
 
/*
 * If slice_count is greater than slice_per_intf
@@ -858,7 +859,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
if (dsc->slice_count > slice_per_intf)
dsc->slice_count = 1;
 
-   total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
+   total_bytes_per_intf = msm_dsc_get_bytes_per_intf(dsc, hdisplay);
 
eol_byte_num = total_bytes_per_intf % 3;
pkt_per_line = slice_per_intf / dsc->slice_count;
@@ -1759,7 +1760,7 @@ static int dsi_populate_dsc_params(struct msm_dsi_host *msm_host, struct drm_dsc
return ret;
}
 
-   dsc->initial_scale_value = 32;
+   drm_dsc_set_initial_scale_value(dsc);
dsc->line_buf_depth = dsc->bits_per_component + 1;
 
return drm_dsc_compute_rc_parameters(dsc);

-- 
2.40.1



[Freedreno] [PATCH v7 5/8] drm/msm/dpu: Use DRM DSC helper for det_thresh_flatness

2023-05-09 Thread Jessica Zhang
The current dpu_hw_dsc calculation for det_thresh_flatness does not
match the downstream calculation or the DSC spec.

Use the DRM DSC helper for det_thresh_flatness to match downstream
implementation and the DSC spec.

Fixes: c110cfd1753e ("drm/msm/disp/dpu1: Add support for DSC")
Reviewed-by: Dmitry Baryshkov 
Reviewed-by: Marijn Suijten 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
index 4e1396575e6a..1e11c0fb0545 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
@@ -3,6 +3,8 @@
  * Copyright (c) 2020-2022, Linaro Limited
  */
 
+#include 
+
 #include "dpu_kms.h"
 #include "dpu_hw_catalog.h"
 #include "dpu_hwio.h"
@@ -102,7 +104,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
data |= dsc->final_offset;
DPU_REG_WRITE(c, DSC_DSC_OFFSET, data);
 
-   det_thresh_flatness = 7 + 2 * (dsc->bits_per_component - 8);
+   det_thresh_flatness = drm_dsc_calculate_flatness_det_thresh(dsc);
data = det_thresh_flatness << 10;
data |= dsc->flatness_max_qp << 5;
data |= dsc->flatness_min_qp;

-- 
2.40.1



[Freedreno] [PATCH v7 6/8] drm/msm/dpu: Fix slice_last_group_size calculation

2023-05-09 Thread Jessica Zhang
Correct the math for slice_last_group_size so that it matches the
calculations downstream.

Fixes: c110cfd1753e ("drm/msm/disp/dpu1: Add support for DSC")
Reviewed-by: Dmitry Baryshkov 
Reviewed-by: Marijn Suijten 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
index 1e11c0fb0545..ddaec05151cd 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
@@ -56,9 +56,10 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
if (is_cmd_mode)
initial_lines += 1;
 
-   slice_last_group_size = 3 - (dsc->slice_width % 3);
+   slice_last_group_size = (dsc->slice_width + 2) % 3;
+
data = (initial_lines << 20);
-   data |= ((slice_last_group_size - 1) << 18);
+   data |= (slice_last_group_size << 18);
/* bpp is 6.4 format, 4 LSBs bits are for fractional part */
data |= (dsc->bits_per_pixel << 8);
data |= (dsc->block_pred_enable << 7);

-- 
2.40.1



[Freedreno] [PATCH v7 8/8] drm/msm/dsi: update hdisplay calculation for dsi_timing_setup

2023-05-09 Thread Jessica Zhang
hdisplay for compressed images should be calculated as bytes_per_slice *
slice_count. Thus, use MSM DSC helper to calculate hdisplay for
dsi_timing_setup instead of directly using mode->hdisplay.

Reviewed-by: Dmitry Baryshkov 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/msm/dsi/dsi_host.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 508577c596ff..d60403372514 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -952,7 +952,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
 * pulse width same
 */
h_total -= hdisplay;
-   hdisplay /= 3;
+   hdisplay = msm_dsc_get_bytes_per_line(msm_host->dsc) / 3;
h_total += hdisplay;
ha_end = ha_start + hdisplay;
}

-- 
2.40.1



[Freedreno] [PATCH v7 4/8] drm/msm: Add MSM-specific DSC helper methods

2023-05-09 Thread Jessica Zhang
Introduce MSM-specific DSC helper methods, as some calculations are
common between DP and DSI.

Reviewed-by: Dmitry Baryshkov 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/msm/Makefile |  1 +
 drivers/gpu/drm/msm/msm_dsc_helper.c | 26 ++
 drivers/gpu/drm/msm/msm_dsc_helper.h | 69 
 3 files changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 7274c41228ed..b814fc80e2d5 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -94,6 +94,7 @@ msm-y += \
msm_atomic_tracepoints.o \
msm_debugfs.o \
msm_drv.o \
+   msm_dsc_helper.o \
msm_fb.o \
msm_fence.o \
msm_gem.o \
diff --git a/drivers/gpu/drm/msm/msm_dsc_helper.c b/drivers/gpu/drm/msm/msm_dsc_helper.c
new file mode 100644
index ..29feb3e3b5a4
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_dsc_helper.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved
+ */
+
+#include 
+#include 
+
+#include "msm_dsc_helper.h"
+
+s64 msm_dsc_get_bytes_per_soft_slice(struct drm_dsc_config *dsc)
+{
+   return drm_fixp_from_fraction(dsc->slice_width * msm_dsc_get_bpp_int(dsc), 8);
+}
+
+u32 msm_dsc_get_bytes_per_intf(struct drm_dsc_config *dsc, int intf_width)
+{
+   u32 bytes_per_soft_slice;
+   s64 bytes_per_soft_slice_fp;
+   int slice_per_intf = msm_dsc_get_slice_per_intf(dsc, intf_width);
+
+   bytes_per_soft_slice_fp = msm_dsc_get_bytes_per_soft_slice(dsc);
+   bytes_per_soft_slice = drm_fixp2int_ceil(bytes_per_soft_slice_fp);
+
+   return bytes_per_soft_slice * slice_per_intf;
+}
diff --git a/drivers/gpu/drm/msm/msm_dsc_helper.h b/drivers/gpu/drm/msm/msm_dsc_helper.h
new file mode 100644
index ..38f3651d0b79
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_dsc_helper.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved
+ */
+
+#ifndef MSM_DSC_HELPER_H_
+#define MSM_DSC_HELPER_H_
+
+#include 
+#include 
+#include 
+
+/*
+ * Helper methods for MSM specific DSC calculations that are common between timing engine,
+ * DSI, and DP.
+ */
+
+/**
+ * msm_dsc_get_bpp_int - get bits per pixel integer value
+ * @dsc: Pointer to drm dsc config struct
+ * Returns: BPP integer value
+ */
+static inline int msm_dsc_get_bpp_int(struct drm_dsc_config *dsc)
+{
+   WARN_ON_ONCE(dsc->bits_per_pixel & 0xf);
+   return dsc->bits_per_pixel >> 4;
+}
+
+/**
+ * msm_dsc_get_slice_per_intf - get number of slices per interface
+ * @dsc: Pointer to drm dsc config struct
+ * @intf_width: interface width
+ * Returns: Integer representing the slice per interface
+ */
+static inline int msm_dsc_get_slice_per_intf(struct drm_dsc_config *dsc, int intf_width)
+{
+   return DIV_ROUND_UP(intf_width, dsc->slice_width);
+}
+
+/**
+ * msm_dsc_get_bytes_per_line - Calculate bytes per line
+ * @dsc: Pointer to drm dsc config struct
+ * Returns: Integer value representing pclk per interface
+ *
+ * Note: This value will then be passed along to DSI and DP for some more
+ * calculations. This is because DSI and DP divide the pclk_per_intf value
+ * by different values depending on if widebus is enabled.
+ */
+static inline int msm_dsc_get_bytes_per_line(struct drm_dsc_config *dsc)
+{
+   return DIV_ROUND_UP(dsc->slice_width * dsc->slice_count * msm_dsc_get_bpp_int(dsc), 8);
+}
+
+/**
+ * msm_dsc_get_bytes_per_soft_slice - get size of each soft slice for dsc
+ * @dsc: Pointer to drm dsc config struct
+ * Returns: s31.32 fixed point value representing bytes per soft slice
+ */
+s64 msm_dsc_get_bytes_per_soft_slice(struct drm_dsc_config *dsc);
+
+/**
+ * msm_dsc_get_bytes_per_intf - get total bytes per interface
+ * @dsc: Pointer to drm dsc config struct
+ * @intf_width: interface width
+ * Returns: u32 value representing bytes per interface
+ */
+u32 msm_dsc_get_bytes_per_intf(struct drm_dsc_config *dsc, int intf_width);
+
+#endif /* MSM_DSC_HELPER_H_ */

-- 
2.40.1



[Freedreno] [PATCH v7 3/8] drm/msm/dsi: use DRM DSC helpers for DSC setup

2023-05-09 Thread Jessica Zhang
From: Dmitry Baryshkov 

Use new DRM DSC helpers to setup DSI DSC configuration. The
initial_scale_value needs to be adjusted according to the standard, but
this is a separate change.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Signed-off-by: Jessica Zhang 
---
 drivers/gpu/drm/msm/dsi/dsi_host.c | 61 +-
 1 file changed, 8 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 961689a255c4..74d38f90398a 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -1731,28 +1731,9 @@ static int dsi_host_parse_lane_data(struct msm_dsi_host *msm_host,
return -EINVAL;
 }
 
-static u32 dsi_dsc_rc_buf_thresh[DSC_NUM_BUF_RANGES - 1] = {
-   0x0e, 0x1c, 0x2a, 0x38, 0x46, 0x54, 0x62,
-   0x69, 0x70, 0x77, 0x79, 0x7b, 0x7d, 0x7e
-};
-
-/* only 8bpc, 8bpp added */
-static char min_qp[DSC_NUM_BUF_RANGES] = {
-   0, 0, 1, 1, 3, 3, 3, 3, 3, 3, 5, 5, 5, 7, 13
-};
-
-static char max_qp[DSC_NUM_BUF_RANGES] = {
-   4, 4, 5, 6, 7, 7, 7, 8, 9, 10, 11, 12, 13, 13, 15
-};
-
-static char bpg_offset[DSC_NUM_BUF_RANGES] = {
-   2, 0, 0, -2, -4, -6, -8, -8, -8, -10, -10, -12, -12, -12, -12
-};
-
static int dsi_populate_dsc_params(struct msm_dsi_host *msm_host, struct drm_dsc_config *dsc)
 {
-   int i;
-   u16 bpp = dsc->bits_per_pixel >> 4;
+   int ret;
 
if (dsc->bits_per_pixel & 0xf) {
	DRM_DEV_ERROR(&msm_host->pdev->dev, "DSI does not support fractional bits_per_pixel\n");
@@ -1764,49 +1745,23 @@ static int dsi_populate_dsc_params(struct msm_dsi_host *msm_host, struct drm_dsc
return -EOPNOTSUPP;
}
 
-   dsc->rc_model_size = 8192;
-   dsc->first_line_bpg_offset = 12;
-   dsc->rc_edge_factor = 6;
-   dsc->rc_tgt_offset_high = 3;
-   dsc->rc_tgt_offset_low = 3;
dsc->simple_422 = 0;
dsc->convert_rgb = 1;
dsc->vbr_enable = 0;
 
-   /* handle only bpp = bpc = 8 */
-   for (i = 0; i < DSC_NUM_BUF_RANGES - 1 ; i++)
-   dsc->rc_buf_thresh[i] = dsi_dsc_rc_buf_thresh[i];
+   drm_dsc_set_const_params(dsc);
+   drm_dsc_set_rc_buf_thresh(dsc);
 
-   for (i = 0; i < DSC_NUM_BUF_RANGES; i++) {
-   dsc->rc_range_params[i].range_min_qp = min_qp[i];
-   dsc->rc_range_params[i].range_max_qp = max_qp[i];
-   /*
-* Range BPG Offset contains two's-complement signed values 
that fill
-* 8 bits, yet the registers and DCS PPS field are only 6 bits 
wide.
-*/
-   dsc->rc_range_params[i].range_bpg_offset = bpg_offset[i] & 
DSC_RANGE_BPG_OFFSET_MASK;
+   /* handle only bpp = bpc = 8, pre-SCR panels */
+   ret = drm_dsc_setup_rc_params(dsc, DRM_DSC_1_1_PRE_SCR);
+   if (ret) {
+   DRM_DEV_ERROR(&msm_host->pdev->dev, "could not find DSC RC parameters\n");
+   return ret;
}
 
-   dsc->initial_offset = 6144; /* Not bpp 12 */
-   if (bpp != 8)
-   dsc->initial_offset = 2048; /* bpp = 12 */
-
-   if (dsc->bits_per_component <= 10)
-   dsc->mux_word_size = DSC_MUX_WORD_SIZE_8_10_BPC;
-   else
-   dsc->mux_word_size = DSC_MUX_WORD_SIZE_12_BPC;
-
-   dsc->initial_xmit_delay = 512;
dsc->initial_scale_value = 32;
-   dsc->first_line_bpg_offset = 12;
dsc->line_buf_depth = dsc->bits_per_component + 1;
 
-   /* bpc 8 */
-   dsc->flatness_min_qp = 3;
-   dsc->flatness_max_qp = 12;
-   dsc->rc_quant_incr_limit0 = 11;
-   dsc->rc_quant_incr_limit1 = 11;
-
return drm_dsc_compute_rc_parameters(dsc);
 }
 

-- 
2.40.1



[Freedreno] [PATCH v7 0/8] Introduce MSM-specific DSC helpers

2023-05-09 Thread Jessica Zhang
There is some overlap in the calculations for MSM-specific DSC variables
between DP and DSI. In addition, the calculations for initial_scale_value
and det_thresh_flatness are defined within the DSC 1.2 specification, but
aren't yet included in drm_dsc_helper.c.

This series moves these calculations to a shared msm_dsc_helper.c file and
defines drm_dsc_helper methods for initial_scale_value and
det_thresh_flatness.

Note: For now, the MSM specific helper methods are only called for the DSI
path, but will be called for DP once DSC 1.2 support for DP has been added.

Depends on: "drm/i915: move DSC RC tables to drm_dsc_helper.c" [1]

[1] https://patchwork.freedesktop.org/series/114472/

---
Changes in v7:
- Renamed msm_dsc_get_pclk_per_intf to msm_dsc_get_bytes_per_line
- Removed duplicate msm_dsc_get_dce_bytes_per_line
- Reworded commit message for "drm/msm/dpu: Use DRM DSC helper for
  det_thresh_flatness"
- Dropped slice_per_pkt change (it will be included in the later
  "Add DSC v1.2 Support for DSI" series)
- Picked up "drm/display/dsc: Add flatness and initial scale value
  calculations" and "drm/display/dsc: add helper to set semi-const
  parameters", which were dropped from "drm/i915: move DSC RC tables to
  drm_dsc_helper.c" series
- Picked up "Reviewed-by" tags
- Removed changelog in individual patches
- Link to v6: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v6-0-cb7f59f0f...@quicinc.com

Changes in v6:
- Documented return values for MSM DSC helpers
- Fixed dependency issue in msm_dsc_helper.c
- Link to v5: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v5-0-0108401d7...@quicinc.com

Changes in v5:
- Added extra line at end of msm_dsc_helper.h
- Simplified msm_dsc_get_bytes_per_soft_slice() math
- Simplified and inlined msm_dsc_get_pclk_per_intf() math
- "Fix calculations pkt_per_line" --> "... Fix calculation for pkt_per_line"
- Split dsc->slice_width check into a separate patch
- Picked up Dmitry's msm/dsi patch ("drm/msm/dsi: use new helpers for
  DSC setup")
- Removed unused headers in MSM DSC helper files
- Picked up Reviewed-by tags
- Link to v4: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v4-0-1b79c78b3...@quicinc.com

Changes in v4:
- Changed msm_dsc_get_uncompressed_pclk_per_intf to msm_dsc_get_pclk_per_intf
- Moved pclk_per_intf calculation for dsi_timing_setup to `if
  (msm_host->dsc)` block
- Link to v3: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v3-0-6bec0d277...@quicinc.com

Changes in v3:
- Dropped src_bpp parameter from all methods -- src_bpp can be
  calculated as dsc->bits_per_component * 3
- Cleaned up unused parameters
- Dropped intf_width parameter from get_bytes_per_soft_slice()
- Moved dsc->bits_per_component to numerator calculation in
  get_bytes_per_soft_slice()
- Made get_bytes_per_soft_slice() a public method (this will be called
  later to help calculate DP pclk params)
- Added comment documentation to MSM DSC helpers
- Renamed msm_dsc_get_uncompressed_pclk_per_line to
  *_get_uncompressed_pclk_per_intf()
- Removed dsc->slice_width check from msm_dsc_get_uncompressed_pclk_per_intf()
- Added documentation in comments
- Moved extra_eol_bytes math out of msm_dsc_get_eol_byte_num()
- Renamed msm_dsc_get_eol_byte_num to *_get_bytes_per_intf.
- Reworded slice_last_group_size calculation to `(dsc->slice_width + 2) % 3`
- Used MSM DSC helper to calculate total_bytes_per_intf
- Initialized hdisplay as uncompressed pclk per line at the beginning of
  dsi_timing_setup as to not break dual DSI calculations
- Added slice_width check to dsi_timing_setup
- Dropped 78c8b81d57d8 ("drm/display/dsc: Add flatness and initial scale
  value calculations") patch as it was absorbed in Dmitry's DSC series [1]
- Split dsi_timing_setup() hdisplay calculation to a separate patch
- Link to v2: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v2-0-3c13ced53...@quicinc.com

Changes in v2:
- Changed det_thresh_flatness to flatness_det_thresh
- Set initial_scale_value directly in helper
- Moved msm_dsc_helper files to msm/ directory
- Dropped get_comp_ratio() helper
- Used drm_int2fixp() to convert to integers to fp
- Fixed type mismatch issues in MSM DSC helpers
- Changed DSC_BPP macro to drm_dsc_get_bpp_int() helper method
- Style changes to improve readability
- Dropped last division step of msm_dsc_get_pclk_per_line() and changed
  method name accordingly
- Dropped unused bpp variable in msm_dsc_get_dce_bytes_per_line()
- Changed msm_dsc_get_slice_per_intf() to a static inline method
- Split eol_byte_num and pkt_per_line calculation into a separate patch
- Moved pclk_per_line calculation into `if (dsc)` block in
  dsi_timing_setup()
- *_calculate_initial_scale_value --> *_set_initial_scale_value
- Picked up Fixes tags for patches 3/5 and 4/5
- Picked up Reviewed-by for patch 4/5
- Link to v1: 
https://lore.kernel.org/r/20230329-rfc-msm-dsc-helper-v1-0-f3e479f59...@quicinc.com

---
Dmitry Baryshkov (2):
  drm/display/dsc: add helper to set 

[Freedreno] [PATCH v7 1/8] drm/display/dsc: Add flatness and initial scale value calculations

2023-05-09 Thread Jessica Zhang
Add helpers to calculate det_thresh_flatness and initial_scale_value as
these calculations are defined within the DSC spec.

Signed-off-by: Dmitry Baryshkov 
Signed-off-by: Jessica Zhang 
---
 include/drm/display/drm_dsc_helper.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/drm/display/drm_dsc_helper.h b/include/drm/display/drm_dsc_helper.h
index 0bb0c3afd740..422135a33d65 100644
--- a/include/drm/display/drm_dsc_helper.h
+++ b/include/drm/display/drm_dsc_helper.h
@@ -25,5 +25,16 @@ void drm_dsc_set_rc_buf_thresh(struct drm_dsc_config *vdsc_cfg);
 int drm_dsc_setup_rc_params(struct drm_dsc_config *vdsc_cfg, enum drm_dsc_params_kind kind);
 int drm_dsc_compute_rc_parameters(struct drm_dsc_config *vdsc_cfg);
 
+static inline void drm_dsc_set_initial_scale_value(struct drm_dsc_config *dsc)
+{
+   dsc->initial_scale_value = 8 * dsc->rc_model_size /
+   (dsc->rc_model_size - dsc->initial_offset);
+}
+
+static inline int drm_dsc_calculate_flatness_det_thresh(struct drm_dsc_config *dsc)
+{
+   return 2 << (dsc->bits_per_component - 8);
+}
+
 #endif /* _DRM_DSC_HELPER_H_ */
 

-- 
2.40.1



[Freedreno] [PATCH v4] drm/msm: Fix submit error-path leaks

2023-05-09 Thread Rob Clark
From: Rob Clark 

For errors after msm_submitqueue_get(), we need to drop the submitqueue
reference.  Additionally after get_unused_fd() we need to drop the fd.
The ordering for dropping the queue lock and put_unused_fd() is not
important, so just move this all into out_post_unlock.

v2: Only drop queue ref if submit doesn't take it
v3: Fix uninitialized submit ref in error path
v4: IS_ERR_OR_NULL()

Reported-by: pinkperfect2...@gmail.com
Fixes: f0de40a131d9 ("drm/msm: Reorder lock vs submit alloc")
Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 6c6aefaa72be..8a3c9246ebf7 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -720,21 +720,21 @@ static void msm_process_post_deps(struct msm_submit_post_dep *post_deps,
}
}
 }
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
struct drm_file *file)
 {
struct msm_drm_private *priv = dev->dev_private;
struct drm_msm_gem_submit *args = data;
struct msm_file_private *ctx = file->driver_priv;
-   struct msm_gem_submit *submit;
+   struct msm_gem_submit *submit = NULL;
struct msm_gpu *gpu = priv->gpu;
struct msm_gpu_submitqueue *queue;
struct msm_ringbuffer *ring;
struct msm_submit_post_dep *post_deps = NULL;
struct drm_syncobj **syncobjs_to_reset = NULL;
int out_fence_fd = -1;
bool has_ww_ticket = false;
unsigned i;
int ret;
 
@@ -767,27 +767,29 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
queue = msm_submitqueue_get(ctx, args->queueid);
if (!queue)
return -ENOENT;
 
ring = gpu->rb[queue->ring_nr];
 
if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
if (out_fence_fd < 0) {
ret = out_fence_fd;
-   return ret;
+   goto out_post_unlock;
}
}
 
submit = submit_create(dev, gpu, queue, args->nr_bos, args->nr_cmds);
-   if (IS_ERR(submit))
-   return PTR_ERR(submit);
+   if (IS_ERR(submit)) {
+   ret = PTR_ERR(submit);
+   goto out_post_unlock;
+   }
 
trace_msm_gpu_submit(pid_nr(submit->pid), ring->id, submit->ident,
args->nr_bos, args->nr_cmds);
 
	ret = mutex_lock_interruptible(&queue->lock);
if (ret)
goto out_post_unlock;
 
if (args->flags & MSM_SUBMIT_SUDO)
submit->in_rb = true;
@@ -962,25 +964,34 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
msm_process_post_deps(post_deps, args->nr_out_syncobjs,
  submit->user_fence);
 
 
 out:
submit_cleanup(submit, !!ret);
if (has_ww_ticket)
		ww_acquire_fini(&submit->ticket);
 out_unlock:
-   if (ret && (out_fence_fd >= 0))
-   put_unused_fd(out_fence_fd);
	mutex_unlock(&queue->lock);
 out_post_unlock:
-   msm_gem_submit_put(submit);
+   if (ret && (out_fence_fd >= 0))
+   put_unused_fd(out_fence_fd);
+
+   if (!IS_ERR_OR_NULL(submit)) {
+   msm_gem_submit_put(submit);
+   } else {
+   /*
+* If the submit hasn't yet taken ownership of the queue
+* then we need to drop the reference ourself:
+*/
+   msm_submitqueue_put(queue);
+   }
if (!IS_ERR_OR_NULL(post_deps)) {
for (i = 0; i < args->nr_out_syncobjs; ++i) {
kfree(post_deps[i].chain);
drm_syncobj_put(post_deps[i].syncobj);
}
kfree(post_deps);
}
 
if (!IS_ERR_OR_NULL(syncobjs_to_reset)) {
for (i = 0; i < args->nr_in_syncobjs; ++i) {
-- 
2.40.1



Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Jessica Zhang




On 5/9/2023 11:02 AM, Abhinav Kumar wrote:



On 5/9/2023 4:42 AM, Dmitry Baryshkov wrote:

On 09/05/2023 11:54, Konrad Dybcio wrote:



On 9.05.2023 10:23, Neil Armstrong wrote:

On 09/05/2023 01:27, Dmitry Baryshkov wrote:

On 08/05/2023 23:09, Abhinav Kumar wrote:



On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:

On 03/05/2023 04:19, Jessica Zhang wrote:
Currently, word count is calculated using slice_count. This is incorrect
as downstream uses slice per packet, which is different from slice_count.

Slice count represents the number of soft slices per interface, and its
value will not always match that of slice per packet. For example, it is
possible to have cases where there are multiple soft slices per interface
but the panel specifies only one slice per packet.

Thus, use the default value of one slice per packet and remove slice_count
from the word count calculation.

Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to compute word count")

Signed-off-by: Jessica Zhang 
---
   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 ++++++++-
   1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c

index 35c69dbe5f6f..b0d448ffb078 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)

   if (!msm_host->dsc)
    wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;

   else
-    wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;

+    /*
+ * When DSC is enabled, WC = slice_chunk_size * slice_per_packet + 1.
+ * Currently, the driver only supports default value of slice_per_packet = 1
+ *
+ * TODO: Expand drm_panel struct to hold slice_per_packet info
+ *   and adjust DSC math to account for slice_per_packet.


slice_per_packet is not a part of the standard DSC, so I'm not 
sure how that can be implemented. And definitely we should not 
care about the drm_panel here. It should be either a part of 
drm_dsc_config, or mipi_dsi_device.




This is not correct.

It is part of the DSI standard (not DSC standard). Please refer to 
Figure 40 "One Line Containing One Packet with Data from One or 
More Compressed Slices" and Figure 41 "One Line Containing More 
than One Compressed Pixel Stream Packet".


I have reviewed section 8.8.24 and Annex D of the DSI standard.

It is not clear to me, if we can get away with always using 
slice_per_packet = 1. What is the DSI sink's difference between 
Fig. 40.(b) and Fig 41?


Are there are known panels that require slice_per_packet != 1? If 
so, we will have to implement support for such configurations.


This has details about this. So I still stand by my point that 
this should be in the drm_panel.


Note, the driver doesn't use drm_panel directly. So 
slices_per_packet should go to mipi_dsi_device instead (which in 
turn can be filled from e.g. drm_panel or from any other source).


This is a big question, where should we set those parameters ?

It's an even bigger questions for panels optionally supporting DSC 
in Video or Command mode (like the vtdr6130),

how to select DSC or not ? DT is not an option.

Compressed vs uncompressed modes, maybe? Would be nice to make this
togglable from userspace.. But then it may not scale for panels with 
e.g.

10 resolutions, all cmd/vid/dsc/nodsc


Currently the panel/panel-bridge make decision on command vs video 
mode. We have no way to influence that decision. If you want to make 
that negotiable, I'd start with adding 
'cmd_supported/video_supported/dsc_supported' flags to struct 
mipi_dsi_hosts.




Right. Isn't that issue there even today that if a panel supports DSC in 
only one of the modes, we have no way to tell that as we have not 
encountered such a panel in upstream yet.


Also, fundamental question to folks who had panels requiring 
slice_per_pkt as 2,


if you had some panels which need a slice_per_pkt as 2, this support 
could have been added even earlier by someone who had these panels even 
in DSC 1.1.


If these panels are not yet upstreamed, may I please know why this is 
considered as a "breakage"? If they were working "somehow" due to 
incorrect math / panel settings /  DPU calculations, unfortunately we 
have to work towards bringing up these panels properly and upstreaming 
them rather than saying "oh, these panels were working somehow and now 
we need to keep it working".


I also want to add on top of Abhinav's point:

Currently, the panel I'm testing this series on only uses 
slice_per_pkt=1, so I have no way to testing slice_per_pkt > 1.


So in the case where we add support for slice_per_pkt > 1, I would 
prefer that be as a separate series as I would not be able to validate 
those changes on my current setup.


Thanks,

Jessica Zhang







Re: [Freedreno] [PATCH] Revert "drm/msm/dp: Remove INIT_SETUP delay"

2023-05-09 Thread Abhinav Kumar




On 5/8/2023 4:30 AM, Dmitry Baryshkov wrote:

On 08/05/2023 14:02, Leonard Lausen wrote:

Abhinav Kumar  writes:

On 5/7/2023 7:15 PM, Bjorn Andersson wrote:
When booting with the cable connected on my X13s, 100 is long enough 
for

my display to time out and require me to disconnect and reconnect the
cable again.

Do we have any idea of why the reduction to 0 is causing an issue when
using the internal HPD?

Regards,
Bjorn

Yes, we do know why this is causing an issue. The cleaner patch for this
will be posted this week.


Great!


There is no need to add the 100ms delay back yet.

thanks for posting this but NAK on this patch till we post the fix this
week.

Appreciate a bit of patience till then.


This regression is already part of the 6.3 stable release series. Will
the new patch qualify for inclusion in 6.3.y? Or will it be part of 6.4
and this revert should go into 6.3.y?


This is a tough situation, as landing a revert will break x13s, as noted 
by Bjorn. Given that the workaround is known at this moment, I would 
like to wait for the patch from Abhinav to appear, then we can decide 
which of the fixes should go to the stable kernel.




Even with this revert, there are additional regressions in 6.3 causing
dpu errors and blank external display upon suspending and resuming the
system while an external display is connected. Will your new patch also
fix these regressions?

[  275.025497] [drm:dpu_encoder_phys_vid_wait_for_commit_done:488] [dpu error]vblank timeout
[  275.025514] [drm:dpu_kms_wait_for_commit_done:510] [dpu error]wait for commit done returned -110
[  275.064141] [drm:dpu_encoder_frame_done_timeout:2382] [dpu 
error]enc33 frame done timeout


followed by a kernel panic if any modification to the display settings
is done, such as disabling the external display:


Interesting crash, thank you for the report.



This is a different crash, but the root cause of both issues is the 
bridge hpd_enable/disable series.


https://patchwork.freedesktop.org/patch/514414/

This is breaking the sequence and logic of internal hpd as per my 
discussion with kuogee.


We are analyzing the issue and the fix internally first; once we 
figure out all the details, we will post it.




[  341.631287] Hardware name: Google Lazor (rev3 - 8) (DT)
[  341.631290] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS 
BTYPE=--)

[  341.631296] pc : do_raw_spin_unlock+0xb8/0xc4
[  341.631310] lr : do_raw_spin_unlock+0x78/0xc4
[  341.631315] sp : ffc01100b880
[  341.631317] x29: ffc01100b880 x28: 0028 x27: 
0038
[  341.631326] x26: ff808c89e180 x25: ffef33e39920 x24: 

[  341.631333] x23: ffef33e3ca0c x22: 0002 x21: 
ff808345ded8
[  341.631339] x20: ff808345ded0 x19: 001e x18: 

[  341.631345] x17: 00480460 x16: 0441043b04600438 x15: 
0438089807d0
[  341.631351] x14: 07b0089807800780 x13: 0068 x12: 
0001
[  341.631357] x11: ffef3413bb76 x10: 0bb0 x9 : 
ffef33e3d6bc
[  341.631363] x8 : ff808c89ed90 x7 : ff80b1c9f738 x6 : 
0001
[  341.631370] x5 :  x4 :  x3 : 
ff808345def0
[  341.631375] x2 : dead4ead x1 : 0003 x0 : 


[  341.631383] Kernel panic - not syncing: Asynchronous SError Interrupt
[  341.631386] CPU: 3 PID: 1520 Comm: kwin_wayland Not tainted 
6.3.0-stb-cbq+ #2

[  341.631390] Hardware name: Google Lazor (rev3 - 8) (DT)
[  341.631393] Call trace:
[  341.631395]  dump_backtrace+0xc8/0x104
[  341.631402]  show_stack+0x20/0x30
[  341.631407]  dump_stack_lvl+0x48/0x60
[  341.631414]  dump_stack+0x18/0x24
[  341.631419]  panic+0x130/0x2fc
[  341.631425]  nmi_panic+0x54/0x78
[  341.631428]  arm64_serror_panic+0x74/0x80
[  341.631434]  arm64_is_fatal_ras_serror+0x6c/0x8c
[  341.631439]  do_serror+0x48/0x60
[  341.631444]  el1h_64_error_handler+0x30/0x48
[  341.631450]  el1h_64_error+0x68/0x6c
[  341.631455]  do_raw_spin_unlock+0xb8/0xc4
[  341.631460]  _raw_spin_unlock_irq+0x18/0x38
[  341.631466]  __wait_for_common+0xb8/0x154
[  341.631472]  wait_for_completion_timeout+0x28/0x34
[  341.631477]  dp_ctrl_push_idle+0x3c/0x88
[  341.631483]  dp_bridge_disable+0x20/0x2c
[  341.631488]  drm_atomic_bridge_chain_disable+0x8c/0xb8
[  341.631495]  drm_atomic_helper_commit_modeset_disables+0x198/0x450
[  341.631501]  msm_atomic_commit_tail+0x1c8/0x36c
[  341.631507]  commit_tail+0x80/0x108
[  341.631512]  drm_atomic_helper_commit+0x114/0x118
[  341.631516]  drm_atomic_commit+0xb4/0xe0
[  341.631522]  drm_mode_atomic_ioctl+0x6b0/0x890
[  341.631527]  drm_ioctl_kernel+0xe4/0x164
[  341.631534]  drm_ioctl+0x35c/0x3bc
[  341.631539]  vfs_ioctl+0x30/0x50
[  341.631547]  __arm64_sys_ioctl+0x80/0xb4
[  341.631552]  invoke_syscall+0x84/0x11c
[  341.631558]  el0_svc_common.constprop.0+0xc0/0xec
[  341.631563]  do_el0_svc+0x94/0xa4
[  341.631567]  el0_svc+0x2c/0x54

Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Abhinav Kumar




On 5/9/2023 4:42 AM, Dmitry Baryshkov wrote:

On 09/05/2023 11:54, Konrad Dybcio wrote:



On 9.05.2023 10:23, Neil Armstrong wrote:

On 09/05/2023 01:27, Dmitry Baryshkov wrote:

On 08/05/2023 23:09, Abhinav Kumar wrote:



On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:

On 03/05/2023 04:19, Jessica Zhang wrote:
Currently, word count is calculated using slice_count. This is 
incorrect

as downstream uses slice per packet, which is different from
slice_count.

Slice count represents the number of soft slices per interface, 
and its
value will not always match that of slice per packet. For 
example, it is
possible to have cases where there are multiple soft slices per 
interface

but the panel specifies only one slice per packet.

Thus, use the default value of one slice per packet and remove 
slice_count

from the word count calculation.

Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size 
to compute word count")

Signed-off-by: Jessica Zhang 
---
   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 -
   1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
b/drivers/gpu/drm/msm/dsi/dsi_host.c

index 35c69dbe5f6f..b0d448ffb078 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -996,7 +996,14 @@ static void dsi_timing_setup(struct 
msm_dsi_host *msm_host, bool is_bonded_dsi)

   if (!msm_host->dsc)
   wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 
+ 1;

   else
-    wc = msm_host->dsc->slice_chunk_size * 
msm_host->dsc->slice_count + 1;

+    /*
+ * When DSC is enabled, WC = slice_chunk_size * 
slice_per_packet + 1.
+ * Currently, the driver only supports default value 
of slice_per_packet = 1

+ *
+ * TODO: Expand drm_panel struct to hold 
slice_per_packet info
+ *   and adjust DSC math to account for 
slice_per_packet.


slice_per_packet is not a part of the standard DSC, so I'm not 
sure how that can be implemented. And definitely we should not 
care about the drm_panel here. It should be either a part of 
drm_dsc_config, or mipi_dsi_device.




This is not correct.

It is part of the DSI standard (not DSC standard). Please refer to 
Figure 40 "One Line Containing One Packet with Data from One or 
More Compressed Slices" and Figure 41 "One Line Containing More 
than One Compressed Pixel Stream Packet".


I have reviewed section 8.8.24 and Annex D of the DSI standard.

It is not clear to me if we can get away with always using 
slice_per_packet = 1. What is the DSI sink's difference between Fig. 
40.(b) and Fig. 41?


Are there any known panels that require slice_per_packet != 1? If 
so, we will have to implement support for such configurations.


This has details about this. So I still stand by my point that this 
should be in the drm_panel.


Note, the driver doesn't use drm_panel directly. So 
slices_per_packet should go to mipi_dsi_device instead (which in 
turn can be filled from e.g. drm_panel or from any other source).


This is a big question, where should we set those parameters ?

It's an even bigger question for panels optionally supporting DSC in 
Video or Command mode (like the vtdr6130): how to select DSC or not?
DT is not an option.

Compressed vs uncompressed modes, maybe? Would be nice to make this
togglable from userspace.. But then it may not scale for panels with e.g.
10 resolutions, all cmd/vid/dsc/nodsc


Currently the panel/panel-bridge makes the decision on command vs video mode. 
We have no way to influence that decision. If you want to make that 
negotiable, I'd start with adding 
'cmd_supported/video_supported/dsc_supported' flags to struct 
mipi_dsi_hosts.




Right. Isn't that issue present even today? If a panel supports DSC in 
only one of the modes, we have no way to tell, as we have not 
encountered such a panel upstream yet.


Also, a fundamental question to folks who have panels requiring 
slice_per_pkt = 2:


If you had panels which need slice_per_pkt = 2, this support 
could have been added even earlier by someone who had these panels, 
even in DSC 1.1.


If these panels are not yet upstreamed, may I please know why this is 
considered a "breakage"? If they were working "somehow" due to 
incorrect math / panel settings / DPU calculations, unfortunately we 
have to work towards bringing up these panels properly and upstreaming 
them rather than saying "oh, these panels were working somehow and now 
we need to keep them working".





Konrad


Those should be tied to a panel+controller tuple.

Neil






+ */
+    wc = msm_host->dsc->slice_chunk_size + 1;
   dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
   DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |











Re: [Freedreno] [PATCH 1/3] dt-bindings: display: hdmi-connector: add hdmi-pwr supply

2023-05-09 Thread Krzysztof Kozlowski
On 07/05/2023 22:12, Dmitry Baryshkov wrote:
> Follow the dp-connector example and add hdmi-pwr supply to drive the 5V
> pin of the HDMI connector (together with some simple glue logic possibly
> attached to the connector).
> 
> Signed-off-by: Dmitry Baryshkov 
> ---

Acked-by: Krzysztof Kozlowski 

Best regards,
Krzysztof



Re: [Freedreno] [PATCH 1/3] dt-bindings: display: hdmi-connector: add hdmi-pwr supply

2023-05-09 Thread Krzysztof Kozlowski
On 08/05/2023 04:56, Laurent Pinchart wrote:
> Hi Rob,
> 
> On Sun, May 07, 2023 at 04:25:44PM -0500, Rob Herring wrote:
>> On Sun, 07 May 2023 23:12:16 +0300, Dmitry Baryshkov wrote:
>>> Follow the dp-connector example and add hdmi-pwr supply to drive the 5V
>>> pin of the HDMI connector (together with some simple glue logic possibly
>>> attached to the connector).
>>>
>>> Signed-off-by: Dmitry Baryshkov 
>>> ---
>>>  .../devicetree/bindings/display/connector/hdmi-connector.yaml  | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>
>> My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
>> on your patch (DT_CHECKER_FLAGS is new in v5.13):
> 
> The issues below don't seem to be related to Dmitry's patch, are they ?

No, can be ignored.

Best regards,
Krzysztof



[Freedreno] [PATCH v3] drm/msm: Fix submit error-path leaks

2023-05-09 Thread Rob Clark
From: Rob Clark 

For errors after msm_submitqueue_get(), we need to drop the submitqueue
reference.  Additionally after get_unused_fd() we need to drop the fd.
The ordering for dropping the queue lock and put_unused_fd() is not
important, so just move this all into out_post_unlock.

v2: Only drop queue ref if submit doesn't take it
v3: Fix uninitialized submit ref in error path

Reported-by: pinkperfect2...@gmail.com
Fixes: f0de40a131d9 ("drm/msm: Reorder lock vs submit alloc")
Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 24 +---
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c 
b/drivers/gpu/drm/msm/msm_gem_submit.c
index 6c6aefaa72be..c994d4a13580 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -720,21 +720,21 @@ static void msm_process_post_deps(struct 
msm_submit_post_dep *post_deps,
}
}
 }
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
struct drm_file *file)
 {
struct msm_drm_private *priv = dev->dev_private;
struct drm_msm_gem_submit *args = data;
struct msm_file_private *ctx = file->driver_priv;
-   struct msm_gem_submit *submit;
+   struct msm_gem_submit *submit = NULL;
struct msm_gpu *gpu = priv->gpu;
struct msm_gpu_submitqueue *queue;
struct msm_ringbuffer *ring;
struct msm_submit_post_dep *post_deps = NULL;
struct drm_syncobj **syncobjs_to_reset = NULL;
int out_fence_fd = -1;
bool has_ww_ticket = false;
unsigned i;
int ret;
 
@@ -767,27 +767,29 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void 
*data,
queue = msm_submitqueue_get(ctx, args->queueid);
if (!queue)
return -ENOENT;
 
ring = gpu->rb[queue->ring_nr];
 
if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
if (out_fence_fd < 0) {
ret = out_fence_fd;
-   return ret;
+   goto out_post_unlock;
}
}
 
submit = submit_create(dev, gpu, queue, args->nr_bos, args->nr_cmds);
-   if (IS_ERR(submit))
-   return PTR_ERR(submit);
+   if (IS_ERR(submit)) {
+   ret = PTR_ERR(submit);
+   goto out_post_unlock;
+   }
 
trace_msm_gpu_submit(pid_nr(submit->pid), ring->id, submit->ident,
args->nr_bos, args->nr_cmds);
 
	ret = mutex_lock_interruptible(&queue->lock);
if (ret)
goto out_post_unlock;
 
if (args->flags & MSM_SUBMIT_SUDO)
submit->in_rb = true;
@@ -962,25 +964,33 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void 
*data,
msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
msm_process_post_deps(post_deps, args->nr_out_syncobjs,
  submit->user_fence);
 
 
 out:
submit_cleanup(submit, !!ret);
if (has_ww_ticket)
		ww_acquire_fini(&submit->ticket);
 out_unlock:
-   if (ret && (out_fence_fd >= 0))
-   put_unused_fd(out_fence_fd);
	mutex_unlock(&queue->lock);
 out_post_unlock:
-   msm_gem_submit_put(submit);
+   if (ret && (out_fence_fd >= 0))
+   put_unused_fd(out_fence_fd);
+   if (submit) {
+   msm_gem_submit_put(submit);
+   } else {
+   /*
+* If the submit hasn't yet taken ownership of the queue
+* then we need to drop the reference ourself:
+*/
+   msm_submitqueue_put(queue);
+   }
if (!IS_ERR_OR_NULL(post_deps)) {
for (i = 0; i < args->nr_out_syncobjs; ++i) {
kfree(post_deps[i].chain);
drm_syncobj_put(post_deps[i].syncobj);
}
kfree(post_deps);
}
 
if (!IS_ERR_OR_NULL(syncobjs_to_reset)) {
for (i = 0; i < args->nr_in_syncobjs; ++i) {
-- 
2.40.1



[Freedreno] [PATCH 2/2] drm/msm: Be more shouty if per-process pgtables aren't working

2023-05-09 Thread Rob Clark
From: Rob Clark 

Otherwise it is not always obvious if a dt or iommu change is causing us
to fall back to the global pgtable.

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_iommu.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 418e1e06cdde..9b19124c9bd0 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -227,21 +227,26 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu 
*parent)
struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
struct msm_iommu *iommu = to_msm_iommu(parent);
struct msm_iommu_pagetable *pagetable;
const struct io_pgtable_cfg *ttbr1_cfg = NULL;
struct io_pgtable_cfg ttbr0_cfg;
int ret;
 
/* Get the pagetable configuration from the domain */
if (adreno_smmu->cookie)
ttbr1_cfg = adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
-   if (!ttbr1_cfg)
+
+   /*
+* If you hit this WARN_ONCE() you are probably missing an entry in
+* qcom_smmu_impl_of_match[] in arm-smmu-qcom.c
+*/
+   if (WARN_ONCE(!ttbr1_cfg, "No per-process page tables"))
return ERR_PTR(-ENODEV);
 
pagetable = kzalloc(sizeof(*pagetable), GFP_KERNEL);
if (!pagetable)
return ERR_PTR(-ENOMEM);
 
	msm_mmu_init(&pagetable->base, parent->dev, &pagetable_funcs,
		MSM_MMU_IOMMU_PAGETABLE);
 
/* Clone the TTBR1 cfg as starting point for TTBR0 cfg: */
-- 
2.40.1



[Freedreno] [PATCH 1/2] iommu/arm-smmu-qcom: Fix missing adreno_smmu's

2023-05-09 Thread Rob Clark
From: Rob Clark 

When the special handling of qcom,adreno-smmu was moved into
qcom_smmu_create(), it was overlooked that we didn't have all the
required entries in qcom_smmu_impl_of_match.  So we stopped getting
adreno_smmu_priv on sc7180, breaking per-process pgtables.

Fixes: 30b912a03d91 ("iommu/arm-smmu-qcom: Move the qcom,adreno-smmu check into 
qcom_smmu_create")
Suggested-by: Lepton Wu 
Signed-off-by: Rob Clark 
---
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index d1b296b95c86..760d9c43dbd2 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -496,20 +496,21 @@ static const struct qcom_smmu_match_data 
qcom_smmu_500_impl0_data = {
 /*
  * Do not add any more qcom,SOC-smmu-500 entries to this list, unless they need
  * special handling and can not be covered by the qcom,smmu-500 entry.
  */
 static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
	{ .compatible = "qcom,msm8996-smmu-v2", .data = &msm8996_smmu_data },
	{ .compatible = "qcom,msm8998-smmu-v2", .data = &qcom_smmu_v2_data },
	{ .compatible = "qcom,qcm2290-smmu-500", .data = &qcom_smmu_500_impl0_data },
	{ .compatible = "qcom,qdu1000-smmu-500", .data = &qcom_smmu_500_impl0_data  },
	{ .compatible = "qcom,sc7180-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sc7180-smmu-v2", .data = &qcom_smmu_v2_data },
	{ .compatible = "qcom,sc7280-smmu-500", .data = &qcom_smmu_500_impl0_data },
	{ .compatible = "qcom,sc8180x-smmu-500", .data = &qcom_smmu_500_impl0_data },
	{ .compatible = "qcom,sc8280xp-smmu-500", .data = &qcom_smmu_500_impl0_data },
	{ .compatible = "qcom,sdm630-smmu-v2", .data = &qcom_smmu_v2_data },
	{ .compatible = "qcom,sdm845-smmu-v2", .data = &qcom_smmu_v2_data },
	{ .compatible = "qcom,sdm845-smmu-500", .data = &sdm845_smmu_500_data },
	{ .compatible = "qcom,sm6115-smmu-500", .data = &qcom_smmu_500_impl0_data},
	{ .compatible = "qcom,sm6125-smmu-500", .data = &qcom_smmu_500_impl0_data },
	{ .compatible = "qcom,sm6350-smmu-v2", .data = &qcom_smmu_v2_data },
	{ .compatible = "qcom,sm6350-smmu-500", .data = &qcom_smmu_500_impl0_data },
@@ -540,12 +541,14 @@ struct arm_smmu_device *qcom_smmu_impl_init(struct 
arm_smmu_device *smmu)
/* Match platform for ACPI boot */
if (acpi_match_platform_list(qcom_acpi_platlist) >= 0)
			return qcom_smmu_create(smmu, &qcom_smmu_500_impl0_data);
}
 #endif
 
match = of_match_node(qcom_smmu_impl_of_match, np);
if (match)
return qcom_smmu_create(smmu, match->data);
 
+   WARN_ON(of_device_is_compatible(np, "qcom,adreno-smmu"));
+
return smmu;
 }
-- 
2.40.1



Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Dmitry Baryshkov

On 09/05/2023 11:54, Konrad Dybcio wrote:



On 9.05.2023 10:23, Neil Armstrong wrote:

On 09/05/2023 01:27, Dmitry Baryshkov wrote:

On 08/05/2023 23:09, Abhinav Kumar wrote:



On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:

On 03/05/2023 04:19, Jessica Zhang wrote:

Currently, word count is calculated using slice_count. This is incorrect
as downstream uses slice per packet, which is different from
slice_count.

Slice count represents the number of soft slices per interface, and its
value will not always match that of slice per packet. For example, it is
possible to have cases where there are multiple soft slices per interface
but the panel specifies only one slice per packet.

Thus, use the default value of one slice per packet and remove slice_count
from the word count calculation.

Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to compute word 
count")
Signed-off-by: Jessica Zhang 
---
   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 -
   1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 35c69dbe5f6f..b0d448ffb078 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host 
*msm_host, bool is_bonded_dsi)
   if (!msm_host->dsc)
   wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
   else
-    wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count 
+ 1;
+    /*
+ * When DSC is enabled, WC = slice_chunk_size * slice_per_packet + 
1.
+ * Currently, the driver only supports default value of 
slice_per_packet = 1
+ *
+ * TODO: Expand drm_panel struct to hold slice_per_packet info
+ *   and adjust DSC math to account for slice_per_packet.


slice_per_packet is not a part of the standard DSC, so I'm not sure how that 
can be implemented. And definitely we should not care about the drm_panel here. 
It should be either a part of drm_dsc_config, or mipi_dsi_device.



This is not correct.

It is part of the DSI standard (not DSC standard). Please refer to Figure 40 "One Line 
Containing One Packet with Data from One or More Compressed Slices" and Figure 41 "One 
Line Containing More than One Compressed Pixel Stream Packet".


I have reviewed section 8.8.24 and Annex D of the DSI standard.

It is not clear to me, if we can get away with always using slice_per_packet = 
1. What is the DSI sink's difference between Fig. 40.(b) and Fig 41?

Are there any known panels that require slice_per_packet != 1? If so, we will 
have to implement support for such configurations.


This has details about this. So I still stand by my point that this should be 
in the drm_panel.


Note, the driver doesn't use drm_panel directly. So slices_per_packet should go 
to mipi_dsi_device instead (which in turn can be filled from e.g. drm_panel or 
from any other source).


This is a big question, where should we set those parameters ?

It's an even bigger question for panels optionally supporting DSC in Video or 
Command mode (like the vtdr6130): how to select DSC or not? DT is not an option.

Compressed vs uncompressed modes, maybe? Would be nice to make this
togglable from userspace.. But then it may not scale for panels with e.g.
10 resolutions, all cmd/vid/dsc/nodsc


Currently the panel/panel-bridge makes the decision on command vs video mode. 
We have no way to influence that decision. If you want to make that 
negotiable, I'd start with adding 
'cmd_supported/video_supported/dsc_supported' flags to struct 
mipi_dsi_hosts.





Konrad


Those should be tied to a panel+controller tuple.

Neil






+ */
+    wc = msm_host->dsc->slice_chunk_size + 1;
   dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
   DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |









--
With best wishes
Dmitry



Re: [Freedreno] [PATCH v2 3/4] drm/msm/dpu: Add DPU_INTF_DATA_COMPRESS feature flag

2023-05-09 Thread Dmitry Baryshkov
On Tue, 9 May 2023 at 10:00, Marijn Suijten
 wrote:
>
> On 2023-05-09 02:08:52, Dmitry Baryshkov wrote:
> > On 09/05/2023 00:46, Jessica Zhang wrote:
> > >
> > >
> > > On 5/7/2023 9:00 AM, Marijn Suijten wrote:
> > >> On 2023-05-05 14:23:50, Jessica Zhang wrote:
> > >>> Add DATA_COMPRESS feature flag to DPU INTF block.
> > >>>
> > >>> In DPU 7.x and later, DSC/DCE enablement registers have been moved from
> > >>> PINGPONG to INTF.
> > >>>
> > >>> As core_rev (and related macros) was removed from the dpu_kms struct,
> > >>> the
> > >>> most straightforward way to indicate the presence of this register
> > >>> would be
> > >>> to have a feature flag.
> > >>
> > >> Irrelevant.  Even though core_rev was still in mainline until recently,
> > >> we always hardcoded the features in the catalog and only used core_rev
> > >> to select a dpu_mdss_cfg catalog entry.  There is no "if version >= X
> > >> then enable feature Y" logic, this manually-enabled feature flag is the
> > >> only, correct way to do it.
> > >
> > > Hi Marijn,
> > >
> > > Understood. FWIW, if we do find more register bit-level differences
> > > between HW versions in the future, it might make more sense to keep the
> > > HW catalog small and bring core_rev back, rather than keep adding these
> > > kinds of small differences to caps.
> >
> > Let's see how it goes. Abhinav suggested that there might be feature
> > differences inside the DPU generations (and even inside the single DPU
> > major/minor combo). So I'm not sure what core_rev will bring us.
> >
> > Let's land the platforms which are ready (or if there is anything close
> > to be submitted).
>
> You mean waiting for catalog changes on the list specifically, so the
> DSC series as well as SM6350/SM6375?  I do intend to send SM6125 now
> that the INTF TE series (required for it, as well as for SM63**) seems
> to be generally accepted, but have been quite busy with the DSC series
> on the list as we're now unblocking many Xperias to finally have
> display!

Yes, please send it, as you find time. No time pressure.

>
> The catalog addition is "pretty much ready", let me know if you'd like
> it to be sent in prior to your cleanup.

-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Konrad Dybcio



On 9.05.2023 10:23, Neil Armstrong wrote:
> On 09/05/2023 01:27, Dmitry Baryshkov wrote:
>> On 08/05/2023 23:09, Abhinav Kumar wrote:
>>>
>>>
>>> On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:
 On 03/05/2023 04:19, Jessica Zhang wrote:
> Currently, word count is calculated using slice_count. This is incorrect
> as downstream uses slice per packet, which is different from
> slice_count.
>
> Slice count represents the number of soft slices per interface, and its
> value will not always match that of slice per packet. For example, it is
> possible to have cases where there are multiple soft slices per interface
> but the panel specifies only one slice per packet.
>
> Thus, use the default value of one slice per packet and remove slice_count
> from the word count calculation.
>
> Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to 
> compute word count")
> Signed-off-by: Jessica Zhang 
> ---
>   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 -
>   1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
> b/drivers/gpu/drm/msm/dsi/dsi_host.c
> index 35c69dbe5f6f..b0d448ffb078 100644
> --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
> +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
> @@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host 
> *msm_host, bool is_bonded_dsi)
>   if (!msm_host->dsc)
>   wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
>   else
> -    wc = msm_host->dsc->slice_chunk_size * 
> msm_host->dsc->slice_count + 1;
> +    /*
> + * When DSC is enabled, WC = slice_chunk_size * 
> slice_per_packet + 1.
> + * Currently, the driver only supports default value of 
> slice_per_packet = 1
> + *
> + * TODO: Expand drm_panel struct to hold slice_per_packet 
> info
> + *   and adjust DSC math to account for slice_per_packet.

 slice_per_packet is not a part of the standard DSC, so I'm not sure how 
 that can be implemented. And definitely we should not care about the 
 drm_panel here. It should be either a part of drm_dsc_config, or 
 mipi_dsi_device.

>>>
>>> This is not correct.
>>>
>>> It is part of the DSI standard (not DSC standard). Please refer to Figure 
>>> 40 "One Line Containing One Packet with Data from One or More Compressed 
>>> Slices" and Figure 41 "One Line Containing More than One Compressed Pixel 
>>> Stream Packet".
>>
>> I have reviewed section 8.8.24 and Annex D of the DSI standard.
>>
>> It is not clear to me, if we can get away with always using slice_per_packet 
>> = 1. What is the DSI sink's difference between Fig. 40.(b) and Fig 41?
>>
>> Are there any known panels that require slice_per_packet != 1? If so, we 
>> will have to implement support for such configurations.
>>
>>> This has details about this. So I still stand by my point that this should 
>>> be in the drm_panel.
>>
>> Note, the driver doesn't use drm_panel directly. So slices_per_packet should 
>> go to mipi_dsi_device instead (which in turn can be filled from e.g. 
>> drm_panel or from any other source).
> 
> This is a big question, where should we set those parameters ?
> 
> It's an even bigger question for panels optionally supporting DSC in Video 
> or Command mode (like the vtdr6130): how to select DSC or not?
> DT is not an option.
Compressed vs uncompressed modes, maybe? Would be nice to make this
togglable from userspace.. But then it may not scale for panels with e.g.
10 resolutions, all cmd/vid/dsc/nodsc


Konrad
> 
> Those should be tied to a panel+controller tuple.
> 
> Neil
> 
>>
>>>
> + */
> +    wc = msm_host->dsc->slice_chunk_size + 1;
>   dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
>   DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |
>

>>
> 


Re: [Freedreno] [PATCH v6 06/15] drm/msm/a6xx: Introduce GMU wrapper support

2023-05-09 Thread Konrad Dybcio



On 8.05.2023 23:15, Akhil P Oommen wrote:
> On Mon, May 08, 2023 at 10:59:24AM +0200, Konrad Dybcio wrote:
>>
>>
>> On 6.05.2023 16:46, Akhil P Oommen wrote:
>>> On Fri, May 05, 2023 at 12:35:18PM +0200, Konrad Dybcio wrote:


 On 5.05.2023 10:46, Akhil P Oommen wrote:
> On Thu, May 04, 2023 at 08:34:07AM +0200, Konrad Dybcio wrote:
>>
>>
>> On 3.05.2023 22:32, Akhil P Oommen wrote:
>>> On Tue, May 02, 2023 at 11:40:26AM +0200, Konrad Dybcio wrote:


 On 2.05.2023 09:49, Akhil P Oommen wrote:
> On Sat, Apr 01, 2023 at 01:54:43PM +0200, Konrad Dybcio wrote:
>> Some (particularly SMD_RPM, a.k.a non-RPMh) SoCs implement A6XX GPUs
>> but don't implement the associated GMUs. This is due to the fact that
>> the GMU directly pokes at RPMh. Sadly, this means we have to take 
>> care
>> of enabling & scaling power rails, clocks and bandwidth ourselves.
>>
>> Reuse existing Adreno-common code and modify the deeply-GMU-infused
>> A6XX code to facilitate these GPUs. This involves if-ing out lots
>> of GMU callbacks and introducing a new type of GMU - GMU wrapper 
>> (it's
>> the actual name that Qualcomm uses in their downstream kernels).
>>
>> This is essentially a register region which is convenient to model
>> as a device. We'll use it for managing the GDSCs. The register
>> layout matches the actual GMU_CX/GX regions on the "real GMU" devices
>> and lets us reuse quite a bit of gmu_read/write/rmw calls.
> << I sent a reply to this patch earlier, but not sure where it went.
> Still figuring out Mutt... >>
 Answered it here:

 https://lore.kernel.org/linux-arm-msm/4d3000c1-c3f9-0bfd-3eb3-23393f9a8...@linaro.org/
>>>
>>> Thanks. Will check and respond there if needed.
>>>

 I don't think I see any new comments in this "reply revision" (heh), 
 so please
 check that one out.

>
> Only convenience I found is that we can reuse gmu register ops in a 
> few
> places (< 10 I think). If we just model this as another gpu memory
> region, I think it will help to keep gmu vs gmu-wrapper/no-gmu
> architecture code with clean separation. Also, it looks like we need 
> to
> keep a dummy gmu platform device in the devicetree with the current
> approach. That doesn't sound right.
 That's correct, but.. if we switch away from that, VDD_GX/VDD_CX will
 need additional, gmuwrapper-configuration specific code anyway, as
 OPP & genpd will no longer make use of the default behavior which
 only gets triggered if there's a single power-domains=<> entry, afaicu.
>>> Can you please tell me which specific *default behaviour* you mean 
>>> here?
>>> I am curious to know what I am overlooking here. We can always get a 
>>> cxpd/gxpd device
>>> and vote for the gdscs directly from the driver. Anything related to
>>> OPP?
>> I *believe* this is true:
>>
>> if (ARRAY_SIZE(power-domains) == 1) {
>>  of generic code will enable the power domain at .probe time
> we need to handle the voting directly. I recently shared a patch to
> vote cx gdsc from gpu driver. Maybe we can ignore this when gpu has
> only cx rail due to this logic you quoted here.
>
> I see that you have handled it mostly correctly from the gpu driver in 
> the updated
> a6xx_pm_suspend() callback. Just the power domain device ptrs should be 
> moved to
> gpu from gmu.
>
>>
>>  opp APIs will default to scaling that domain with required-opps
>
>> }
>>
>> and we do need to put GX/CX (with an MX parent to match) there, as the
>> AP is responsible for voting in this configuration
>
> We should vote to turn ON gx/cx headswitches through genpd from gpu 
> driver. When you vote for
> core clk frequency, *clock driver is supposed to scale* all the necessary
> regulators. At least that is how downstream works. You can refer the 
> downstream
> gpucc clk driver of these SoCs. I am not sure how much of that can be 
> easily converted to
> upstream.
>
> Also, how does having a gmu dt node help in this regard? Feel free to
> elaborate, I am not very familiar with clk/regulator implementations.
 Okay so I think we have a bit of a confusion here.

 Currently, with this patchset we manage things like this:

 1. GPU has a VDD_GX (or equivalent[1]) line passed in power-domains=<>, 
 which
is then used with OPP APIs to ensure it's being scaled on freq change 
 [2].
The VDD_lines coming from RPM(h) are described as power domains upstream
*unlike downstream*, which represents them as regulators with preset 
 

Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Neil Armstrong

On 09/05/2023 01:27, Dmitry Baryshkov wrote:

On 08/05/2023 23:09, Abhinav Kumar wrote:



On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:

On 03/05/2023 04:19, Jessica Zhang wrote:

Currently, word count is calculated using slice_count. This is incorrect
as downstream uses slice per packet, which is different from
slice_count.

Slice count represents the number of soft slices per interface, and its
value will not always match that of slice per packet. For example, it is
possible to have cases where there are multiple soft slices per interface
but the panel specifies only one slice per packet.

Thus, use the default value of one slice per packet and remove slice_count
from the word count calculation.

Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to compute word count")
Signed-off-by: Jessica Zhang 
---
  drivers/gpu/drm/msm/dsi/dsi_host.c | 9 ++++++++-
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 35c69dbe5f6f..b0d448ffb078 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
  if (!msm_host->dsc)
  wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
  else
-    wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;
+    /*
+ * When DSC is enabled, WC = slice_chunk_size * slice_per_packet + 1.
+ * Currently, the driver only supports the default value of slice_per_packet = 1.
+ *
+ * TODO: Expand drm_panel struct to hold slice_per_packet info
+ *   and adjust DSC math to account for slice_per_packet.


slice_per_packet is not a part of the standard DSC, so I'm not sure how that 
can be implemented. And definitely we should not care about the drm_panel here. 
It should be either a part of drm_dsc_config, or mipi_dsi_device.



This is not correct.

It is part of the DSI standard (not DSC standard). Please refer to Figure 40 "One Line 
Containing One Packet with Data from One or More Compressed Slices" and Figure 41 "One 
Line Containing More than One Compressed Pixel Stream Packet".


I have reviewed section 8.8.24 and Annex D of the DSI standard.

It is not clear to me if we can get away with always using slice_per_packet =
1. From the DSI sink's point of view, what is the difference between Fig.
40(b) and Fig. 41?

Are there any known panels that require slice_per_packet != 1? If so, we will
have to implement support for such configurations.


This has details about this. So I still stand by my point that this should be 
in the drm_panel.


Note, the driver doesn't use drm_panel directly. So slices_per_packet should go 
to mipi_dsi_device instead (which in turn can be filled from e.g. drm_panel or 
from any other source).


This is a big question: where should we set those parameters?

It's an even bigger question for panels optionally supporting DSC in Video or
Command mode (like the vtdr6130): how do we select DSC or not? DT is not an
option.

Those should be tied to a panel+controller tuple.

Neil






+ */
+    wc = msm_host->dsc->slice_chunk_size + 1;
  dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
  DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |









Re: [Freedreno] [PATCH 3/4] ARM: dts: qcom: apq8074-dragonboard: enable adsp and MSS

2023-05-09 Thread Konrad Dybcio



On 9.05.2023 01:38, Dmitry Baryshkov wrote:
> On Mon, 8 May 2023 at 21:01, Dmitry Baryshkov
>  wrote:
>>
>> On 08/05/2023 13:49, Dmitry Baryshkov wrote:
>>> On 08/05/2023 13:38, Konrad Dybcio wrote:


 On 8.05.2023 12:33, Dmitry Baryshkov wrote:
> On 08/05/2023 11:33, Konrad Dybcio wrote:
>>
>>
>> On 7.05.2023 21:07, Dmitry Baryshkov wrote:
>>> Enable the ADSP and Modem DSPs on the APQ8074 dragonboard. The MSS
>>> region differs from the one defined in msm8974, so it is overridden
>>> locally.
>>>
>>> The modem is specified to use mba.mbn instead of mba.b00 (for the sake
>>> of similarity with other platforms). This requires a patch for
>>> remoteproc to be applied [1].
>>>
>>> [1]
>>> https://lore.kernel.org/all/20230507172041.2320279-1-dmitry.barysh...@linaro.org/
>>>
>>> Signed-off-by: Dmitry Baryshkov 
>>> ---
>>>.../arm/boot/dts/qcom-apq8074-dragonboard.dts | 28
>>> +++
>>>1 file changed, 28 insertions(+)
>>>
>>> diff --git a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
>>> b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
>>> index 6b047c679370..c893afc00eb4 100644
>>> --- a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
>>> +++ b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
>>> @@ -4,6 +4,8 @@
>>>#include "qcom-pm8841.dtsi"
>>>#include "qcom-pm8941.dtsi"
>>> +/delete-node/ &mpss_region;
>>> +
>>>/ {
>>>model = "Qualcomm APQ8074 Dragonboard";
>>>compatible = "qcom,apq8074-dragonboard", "qcom,apq8074";
>>> @@ -17,6 +19,13 @@ aliases {
>>>chosen {
>>>stdout-path = "serial0:115200n8";
>>>};
>>> +
>>> +reserved-memory {
>>> +mpss_region: mpss@ac0 {
>>> +reg = <0x0ac0 0x250>;
>>> +no-map;
>>> +};
>>> +};
>>>};
>>>  _uart2 {
>>> @@ -39,6 +48,25 @@ eeprom: eeprom@52 {
>>>};
>>>};
>>>+_adsp {
>>> +cx-supply = <_s2>;
>>> +
>>> +firmware-name = "qcom/apq8074/adsp.mbn";
>>> +
>>> +status = "okay";
>>> +};
>>> +
>>> +_mss {
>>> +cx-supply = <_s2>;
>>> +mss-supply = <_s3>;
>>> +mx-supply = <_s1>;
>>> +pll-supply = <_l12>;
>> High time to move this to rpmpd!
>> I won't object to adding this though, as it obviously works
>> and is already used on other boards..
>
> I think the problem is that they are not level-voted on this
> platform, so they are regulators, not PDs.
 They're corner-voted.
>>>
>>> Hmm. Indeed. In msm8974-regulators I see both voltage and corner entries
>>> for these regulators.
>>
>> Checked. Only CX and GFX (yes, MX not included) are enabled as corners
>> in vendor dtsi. So this probably doesn't gain us a lot.
> 
> I did a check. Implementing CX as a power domain here makes things
> worse for now. We should first teach mss/pas/etc drivers to properly
> handle the case when there is a single power-domain, which should be
> unbound after starting the DSP. Consider e.g. ADSP and CDSP on sm8150 or
> CDSP on sm8250. But that's definitely a topic for a different patch
> series. After that we can consider landing msm8974_genpd.
Ack, thanks for looking into it.

Konrad
> 
>>
>>>

 Konrad
>
>>
>>> +
>>> +firmware-name = "qcom/apq8074/mba.mbn", "qcom/apq8074/modem.mbn";
>> Could you please keep it one entry per line?
>
> Sure.
>
>>
>> Otherwise,
>>
>> Reviewed-by: Konrad Dybcio 
>>
>> Konrad
>>> +
>>> +status = "okay";
>>> +};
>>> +
>>>_requests {
>>>regulators-0 {
>>>compatible = "qcom,rpm-pm8841-regulators";
>
>>>
>>
>> --
>> With best wishes
>> Dmitry
>>
> 
> 


Re: [Freedreno] [PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

2023-05-09 Thread Konrad Dybcio



On 9.05.2023 01:27, Dmitry Baryshkov wrote:
> On 08/05/2023 23:09, Abhinav Kumar wrote:
>>
>>
>> On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:
>>> On 03/05/2023 04:19, Jessica Zhang wrote:
 Currently, word count is calculated using slice_count. This is incorrect
 as downstream uses slice per packet, which is different from
 slice_count.

 Slice count represents the number of soft slices per interface, and its
 value will not always match that of slice per packet. For example, it is
 possible to have cases where there are multiple soft slices per interface
 but the panel specifies only one slice per packet.

 Thus, use the default value of one slice per packet and remove slice_count
 from the word count calculation.

 Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to compute word count")
 Signed-off-by: Jessica Zhang 
 ---
   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 ++++++++-
   1 file changed, 8 insertions(+), 1 deletion(-)

 diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
 index 35c69dbe5f6f..b0d448ffb078 100644
 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
 +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
 @@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
   if (!msm_host->dsc)
   wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
   else
 -    wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;
 +    /*
 + * When DSC is enabled, WC = slice_chunk_size * slice_per_packet + 1.
 + * Currently, the driver only supports the default value of slice_per_packet = 1.
 + *
 + * TODO: Expand drm_panel struct to hold slice_per_packet info
 + *   and adjust DSC math to account for slice_per_packet.
>>>
>>> slice_per_packet is not a part of the standard DSC, so I'm not sure how 
>>> that can be implemented. And definitely we should not care about the 
>>> drm_panel here. It should be either a part of drm_dsc_config, or 
>>> mipi_dsi_device.
>>>
>>
>> This is not correct.
>>
>> It is part of the DSI standard (not DSC standard). Please refer to Figure 40 
>> "One Line Containing One Packet with Data from One or More Compressed 
>> Slices" and Figure 41 "One Line Containing More than One Compressed Pixel 
>> Stream Packet".
> 
> I have reviewed section 8.8.24 and Annex D of the DSI standard.
> 
> It is not clear to me if we can get away with always using slice_per_packet
> = 1. From the DSI sink's point of view, what is the difference between
> Fig. 40(b) and Fig. 41?
> 
> Are there any known panels that require slice_per_packet != 1? If so, we
> will have to implement support for such configurations.
At least two different ones on expensive Xperias (souxp00_a+amb650wh07 and
sofef03_m)

Konrad
> 
>> This has details about this. So I still stand by my point that this should 
>> be in the drm_panel.
> 
> Note, the driver doesn't use drm_panel directly. So slices_per_packet should 
> go to mipi_dsi_device instead (which in turn can be filled from e.g. 
> drm_panel or from any other source).
> 
>>
 + */
 +    wc = msm_host->dsc->slice_chunk_size + 1;
   dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
   DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |

>>>
> 


Re: [Freedreno] [PATCH v2 3/4] drm/msm/dpu: Add DPU_INTF_DATA_COMPRESS feature flag

2023-05-09 Thread Marijn Suijten
On 2023-05-09 02:08:52, Dmitry Baryshkov wrote:
> On 09/05/2023 00:46, Jessica Zhang wrote:
> > 
> > 
> > On 5/7/2023 9:00 AM, Marijn Suijten wrote:
> >> On 2023-05-05 14:23:50, Jessica Zhang wrote:
> >>> Add DATA_COMPRESS feature flag to DPU INTF block.
> >>>
> >>> In DPU 7.x and later, DSC/DCE enablement registers have been moved from
> >>> PINGPONG to INTF.
> >>>
> >>> As core_rev (and related macros) was removed from the dpu_kms struct, 
> >>> the
> >>> most straightforward way to indicate the presence of this register 
> >>> would be
> >>> to have a feature flag.
> >>
> >> Irrelevant.  Even though core_rev was still in mainline until recently,
> >> we always hardcoded the features in the catalog and only used core_rev
> >> to select a dpu_mdss_cfg catalog entry.  There is no "if version >= X
> >> then enable feature Y" logic, this manually-enabled feature flag is the
> >> only, correct way to do it.
> > 
> > Hi Marijn,
> > 
> > Understood. FWIW, if we do find more register bit-level differences 
> > between HW versions in the future, it might make more sense to keep the 
> > HW catalog small and bring core_rev back, rather than keep adding these 
> > kinds of small differences to caps.
> 
> Let's see how it goes. Abhinav suggested that there might be feature 
> differences inside the DPU generations (and even inside the single DPU 
> major/minor combo). So I'm not sure what core_rev will bring us.
> 
> Let's land the platforms which are ready (or anything that is close to
> being submitted).

You mean waiting for catalog changes on the list specifically, so the
DSC series as well as SM6350/SM6375?  I do intend to send SM6125 now
that the INTF TE series (required for it, as well as for SM63**) seems
to be generally accepted, but I have been quite busy with the DSC series
on the list as we're now unblocking many Xperias to finally have
display!

The catalog addition is "pretty much ready", let me know if you'd like
it to be sent in prior to your cleanup.

> I'll post the next proposal for the catalog cleanups close to -rc4.
> When the dust settles, we can then have one or two weeks for discussion
> and polishing.
> 
> I'd like to consider:
> - inlining foo_BLK macros, if that makes adding new features easier

Sounds like a good idea.

> - reformat of clk_ctrls
> - maybe reintroduction of per-generation feature masks instead of 
> keeping them named after the random SoC

Yes, that would make things clearer.  Or we can inline them entirely,
though that might accidentally diverge SoCs and revisions.

> - maybe a rework of mdss_irqs / INTFn_INTR. We already have this info in 
> hw catalog.

Sounds good as well, that should get rid of some "duplication".

- Marijn


Re: [Freedreno] [PATCH v6 6/7] drm/msm/dsi: update hdisplay calculation for dsi_timing_setup

2023-05-09 Thread Marijn Suijten
On 2023-05-08 17:51:08, Jessica Zhang wrote:
> 
> 
> On 5/7/2023 11:34 AM, Marijn Suijten wrote:
> > On 2023-05-07 17:27:33, Marijn Suijten wrote:
> >> On 2023-05-04 15:05:15, Abhinav Kumar wrote:
> >>>
> >>>
> >>> On 5/4/2023 2:56 PM, Marijn Suijten wrote:
>  On 2023-04-12 16:25:20, Jessica Zhang wrote:
> > hdisplay for compressed images should be calculated as bytes_per_slice *
> > slice_count. Thus, use MSM DSC helper to calculate hdisplay for
> > dsi_timing_setup instead of directly using mode->hdisplay.
> >
> > Changes in v3:
> > - Split from previous patch
> > - Initialized hdisplay as uncompressed pclk per line at the beginning of
> > dsi_timing_setup as to not break dual DSI calculations
> >
> > Changes in v4:
> > - Moved pclk_per_intf calculations to DSC hdisplay adjustments
> >
> > Signed-off-by: Jessica Zhang 
> > Reviewed-by: Dmitry Baryshkov 
> > ---
> >drivers/gpu/drm/msm/dsi/dsi_host.c | 2 +-
> >1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
> > index 508577c596ff..ae966d4e349d 100644
> > --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
> > +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
> > @@ -952,7 +952,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
> >  * pulse width same
> >  */
> > h_total -= hdisplay;
> > -   hdisplay /= 3;
> > +   hdisplay = msm_dsc_get_pclk_per_intf(msm_host->dsc) / 3;
> 
>  This patch is unfortunately regressing the Sony Xperia XZ3 (sdm845,
>  single DSI), which will only show garbage when it is applied.
> 
>  Are you sure this is correct, and the helper is returning the right
>  values?  I'll see if I can help review and validate those later, and
>  debug if necessary.
> 
>  - Marijn
> >>>
> >>> To help us debug these kind of issues, can you pls point us to your
> >>> panel driver?
> >>
> >> https://github.com/SoMainline/linux/commit/b154ea72e6c2ca0d4a33a28cc24e3a762dba4948
> > 
> > I found the fix myself after piecing together the hints provided across
> > the many different patch series.  This panel driver assigns
> > slice_count=1 based on downstream's qcom,mdss-dsc-slice-per-pkt = <1>,
> > but as per the many slice_count-related fixes the latter DT parameter is
> > a QCOM-specific hardware feature, whereas slice_count is simply the
> > number of slices per line.
> > 
> > Since a line is a scanline, and that panel has a width of hdisplay=1440
> > and a slice_width of 720, the number of slices spanning a line is simply
> > slice_count=hdisplay/slice_width=2.  This makes the panel work again
> > atop the four-or-so-series without a revert of this patch.
> > 
> > Is it a big ask to request a single, coherent series fixing all uses of
> > slice_count - and implementing support for slice-per-pkt - instead of
> > having the patches spread across multiple series?  That makes it much
> > easier to cover ground here and review this series, as slice_count seems
> > to be used everywhere where downstream used slice_per_pkt - even I
> > mistakenly used it after assuming it was the same based on the original
> > patches.
> 
> Hi Marijn,
> 
> Just want to document the changes we agreed on regarding the slice_count 
> fixes:
> 
> I will move "drm/msm/dsi: Fix calculation for pkt_per_line" to the "Add 
> DSC v1.2 Support for DSI" series so that all the 
> slice_count/slice_per_pkt fixes are consolidated.
> 
> In addition I will also add a patch in "Add DSC v1.2 Support for DSI" to 
> remove the incorrect `dsc->slice_count = 1` line in dsi_update_dsc_timing().

That sounds good, but be sure to check two things:

- Can we squash some of the patches?
- Can we make the wording and title consistent?

That goes a long way in understanding that the patches in question are
addressing the same "wrong use of dsc->slice_count because it was
thought to be equal to slice_per_pkt" issue.

- Marijn


Re: [Freedreno] [PATCH v2 4/4] drm/msm/dpu: Set DATA_COMPRESS for command mode

2023-05-09 Thread Marijn Suijten
On 2023-05-08 17:00:12, Jessica Zhang wrote:
> On 5/8/2023 4:17 PM, Jessica Zhang wrote:
> > On 5/7/2023 9:06 AM, Marijn Suijten wrote:
> >> On 2023-05-05 14:23:51, Jessica Zhang wrote:
> >>> Add a DPU INTF op to set DATA_COMPRESS register for command mode 
> >>> panels if
> >>> the DPU_INTF_DATA_COMPRESS feature flag is set. This flag needs to be
> >>> enabled in order for DSC v1.2 to work.
> >>>
> >>> Note: These changes are for command mode only. Video mode changes will
> >>> be posted along with the DSC v1.2 support for DP.
> >>
> >> Nit: the "command mode" parts of both paragraphs only apply to the call
> >> in dpu_encoder_phys_cmd, right?  If so, and the INTF op remains the same
> >> for video mode (but only the call needs to be added to the
> >> dpu_encoder_phy_vid), make this a bit more clear in your commit message.
> 
> (Sorry, forgot to address this comment in my initial reply)
> 
> The op will be available for video mode to use, but most likely video 
> mode will set DATA_COMPRESS (or call dpu_hw_intf_enable_compression()) 
> directly in dpu_hw_intf_setup_timing_engine().

Sounds good. Please clarify something along those lines, so that it is
clear that the call is valid in video mode too, and that the callback is
also available there.

e.g. 
- Drop "for command mode panels" from "op to set DATA_COMPRESS
  register";
- "Note: the op is currently called for command-mode encoders only,
  video mode changes..."

Thanks!

- Marijn


Re: [Freedreno] [PATCH v2 4/4] drm/msm/dpu: Set DATA_COMPRESS for command mode

2023-05-09 Thread Marijn Suijten
On 2023-05-08 16:17:54, Jessica Zhang wrote:

> >> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h 
> >> b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
> >> index 73b0885918f8..a8def68a5ec2 100644
> >> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
> >> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
> >> @@ -70,6 +70,7 @@ struct intf_status {
> >>* @get_autorefresh:Retrieve autorefresh config from hardware
> >>*  Return: 0 on success, -ETIMEDOUT on timeout
> >>* @vsync_sel:  Select vsync signal for tear-effect configuration
> >> + * @enable_compression: Enable data compression
> > 
> > Indent to match above.
> 
> Sure, is the plan to correct the whitespace in the first half of the 
> comment block in the future?

I couldn't see the first part of the block in the diff context here, but
indeed that's a broken disaster so we will have to fix that up :(

I think it is fine to leave the latter ones as it is, as long as it is
consistent:

- Only using spaces;
- Colon directly after the word (and an @ before it, see kerneldoc
  specification);
- Aligned to existing entries.

- Marijn