Re: [PATCH v17 0/9] Enable Adaptive Sync SDP Support for DP
On 4/19/2024 6:05 PM, Jani Nikula wrote:
> On Thu, 04 Apr 2024, "Nautiyal, Ankit K" wrote:
>> On 3/19/2024 3:16 PM, Maxime Ripard wrote:
>>> On Mon, Mar 18, 2024 at 04:37:58PM +0200, Jani Nikula wrote:
>>>> On Mon, 11 Mar 2024, Mitul Golani wrote:
>>>>> An Adaptive-Sync-capable DP protocol converter indicates its support
>>>>> by setting the related bit in the DPCD register. This is valid for
>>>>> both DP and eDP. Compute the AS SDP values based on the display
>>>>> configuration, ensuring proper handling of Variable Refresh Rate
>>>>> (VRR) in the context of Adaptive Sync.
>>>>>
>>>>> [snip]
>>>>>
>>>>> Mitul Golani (9):
>>>>>   drm/dp: Add support to indicate if sink supports AS SDP
>>>>>   drm: Add Adaptive Sync SDP logging
>>>>
>>>> Maarten, Maxime, Thomas, ack for merging these two patches via
>>>> drm-intel-next?
>>>
>>> Ack
>>>
>>> Maxime
>>
>> Thanks for the patch, ack and reviews, pushed to drm-intel-next.
>
> This came up again today [1]. The acks absolutely must be recorded in
> the commit messages when pushing the patches.

I apologize for the oversight. Moving forward, I will ensure to
consistently include the "Acked-by" tag when pushing such changes.

Regards,
Ankit

> dim should complain about applying non-i915 patches without acks.
>
> BR,
> Jani.
>
> [1] https://lore.kernel.org/r/zh_q72gykmmbg...@intel.com
Re: [PATCH 0/4] drm/amd/display: Update Display Core unit tests
On Sat, 20 Apr 2024 at 15:50, Joao Paulo Pereira da Silva wrote:
>
> Hey, I'm interested in contributing to the display tests with this
> patch-set. I've noticed potential updates related to both refactoring
> and optimization. This patch-set applies these suggestions.
>

Hi,

It's great to see this moving forward! Overall the suggested changes make
sense to me, and honestly I already don't remember the discussions that
went behind some of them.

The only thing that I would like to raise for you, and anyone else
reviewing this, is that apparently there are now stronger feelings
towards the "preferred way"[1] to handle tests in static functions,
using EXPORT_SYMBOL_IF_KUNIT (or EXPORT_SYMBOL_FOR_TESTS_ONLY in the
case of DRM), so those might be more adequate to use when refactoring
this code.

[1]: https://lore.kernel.org/all/5z66ivuhfrzrnuzt6lwjfm5fuozxlgqsco3qb5rfzyf6mil5ms@2svqtlcncyjj/

Kind regards,
Tales

> [WHY]
>
> 1. The single test suite in the file
>    test/kunit/dc/dml/calcs/bw_fixed_test.c, which tests some static
>    functions defined in dc/basics/bw_fixed.c, is not being run.
>    According to the KUnit documentation
>    (https://www.kernel.org/doc/html/latest/dev-tools/kunit/usage.html#testing-static-functions),
>    there are two strategies for testing static functions, but neither
>    of them seems to be configured. Additionally, it appears that the
>    config DCE_KUNIT_TEST should be associated with this test, since it
>    was introduced in the same patch as the test
>    (https://lore.kernel.org/amd-gfx/20240222155811.44096-3-rodrigo.sique...@amd.com/),
>    but it is not being used anywhere in the display driver.
>
> 2. Also, according to the documentation, "The display/tests folder
>    replicates the folder hierarchy of the display folder". However,
>    note that this test file (test/kunit/dc/dml/calcs/bw_fixed_test.c)
>    has a conflicting path with the file that is being tested
>    (dc/basics/bw_fixed.c).
>
> 3. Config names and help texts are a bit misleading and don't follow a
>    strict pattern. For example, the config DML_KUNIT_TEST indicates
>    that it is used to activate tests for the Display Core Engine, but
>    instead activates tests for the Display Core Next. Also, note the
>    different name patterns in DML_KUNIT_TEST and
>    AMD_DC_BASICS_KUNIT_TEST.
>
> 4. The test suite dcn21_update_bw_bounding_box_test_suite configures
>    an init function that doesn't need to be executed before every test,
>    but only once before the suite runs.
>
> 5. There is some outdated info in the Documentation, such as the
>    recommended command to run the tests:
>    $ ./tools/testing/kunit/kunit.py run --arch=x86_64 \
>        --kunitconfig=drivers/gpu/drm/amd/display/tests
>    (it doesn't work since there is no .kunitconfig in
>    drivers/gpu/drm/amd/display/tests)
>
>
> [HOW]
>
> 1. Revise config names and help blocks.
>
> 2. Change the path of the test file bw_fixed_test from
>    test/kunit/dc/dml/calcs/bw_fixed_test.c to
>    test/kunit/dc/basics/bw_fixed_test.c to make it consistent with the
>    Documentation and the other display driver tests. Make this same
>    test file run by importing it conditionally in the file
>    dc/basics/bw_fixed.c.
>
> 3. Turn the test init function of the suite
>    dcn21_update_bw_bounding_box_test_suite into a suite init.
>
> 4. Update Documentation.
>
> Joao Paulo Pereira da Silva (4):
>   drm/amd/display: Refactor AMD display KUnit tests configs
>   drm/amd/display/test: Fix kunit test that is not running
>   drm/amd/display/test: Optimize kunit test suite
>     dml_dcn20_fpu_dcn21_update_bw_bounding_box_test
>   Documentation/gpu: Update AMD Display Core Unit Test documentation
>
>  .../gpu/amdgpu/display/display-test.rst       | 20 ++--
>  drivers/gpu/drm/amd/display/Kconfig           | 31 ++-
>  .../gpu/drm/amd/display/dc/basics/bw_fixed.c  |  3 ++
>  drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c  |  2 +-
>  .../dc/dml/dcn20/display_mode_vba_20.c        |  2 +-
>  .../dc/dml/dcn20/display_rq_dlg_calc_20.c     |  2 +-
>  .../drm/amd/display/test/kunit/.kunitconfig   |  7 ++---
>  .../gpu/drm/amd/display/test/kunit/Makefile   |  4 +--
>  .../dc/{dml/calcs => basics}/bw_fixed_test.c  |  0
>  .../test/kunit/dc/dml/dcn20/dcn20_fpu_test.c  |  6 ++--
>  10 files changed, 32 insertions(+), 45 deletions(-)
>  rename drivers/gpu/drm/amd/display/test/kunit/dc/{dml/calcs =>
>  basics}/bw_fixed_test.c (100%)
>
> --
> 2.44.0
>
[PATCH v8 3/4] MAINTAINERS: add SAM9X7 SoC's LVDS controller
Add a MAINTAINERS entry for the newly added LVDS controller of the
SAM9X7 SoC.

Signed-off-by: Dharma Balasubiramani
Reviewed-by: Neil Armstrong
Acked-by: Nicolas Ferre
---
Changelog
v7 -> v8
v6 -> v7
- No changes.
v5 -> v6
- Correct the file name sam9x7-lvds.yaml -> sam9x75-lvds.yaml.
v4 -> v5
v3 -> v4
- No changes.
v2 -> v3
- Move the entry before "MICROCHIP SAMA5D2-COMPATIBLE ADC DRIVER".
v1 -> v2
- No Changes.
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c23fda1aa1f0..e49347eac596 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14563,6 +14563,14 @@ S:	Supported
 F:	Documentation/devicetree/bindings/pwm/atmel,at91sam-pwm.yaml
 F:	drivers/pwm/pwm-atmel.c
 
+MICROCHIP SAM9x7-COMPATIBLE LVDS CONTROLLER
+M:	Manikandan Muralidharan
+M:	Dharma Balasubiramani
+L:	dri-devel@lists.freedesktop.org
+S:	Supported
+F:	Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml
+F:	drivers/gpu/drm/bridge/microchip-lvds.c
+
 MICROCHIP SAMA5D2-COMPATIBLE ADC DRIVER
 M:	Eugen Hristev
 L:	linux-...@vger.kernel.org
--
2.25.1
[PATCH v8 4/4] ARM: configs: at91: Enable LVDS serializer support
Enable LVDS serializer support for the display pipeline.

Signed-off-by: Dharma Balasubiramani
Acked-by: Hari Prasath Gujulan Elango
Acked-by: Nicolas Ferre
---
Changelog
v7 -> v8
v6 -> v7
v5 -> v6
v4 -> v5
v3 -> v4
v2 -> v3
- No Changes.
---
 arch/arm/configs/at91_dt_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/configs/at91_dt_defconfig b/arch/arm/configs/at91_dt_defconfig
index 1d53aec4c836..6eabe2313c9a 100644
--- a/arch/arm/configs/at91_dt_defconfig
+++ b/arch/arm/configs/at91_dt_defconfig
@@ -143,6 +143,7 @@ CONFIG_VIDEO_OV2640=m
 CONFIG_VIDEO_OV7740=m
 CONFIG_DRM=y
 CONFIG_DRM_ATMEL_HLCDC=y
+CONFIG_DRM_MICROCHIP_LVDS_SERIALIZER=y
 CONFIG_DRM_PANEL_SIMPLE=y
 CONFIG_DRM_PANEL_EDP=y
 CONFIG_FB_ATMEL=y
--
2.25.1
[PATCH v8 2/4] drm/bridge: add lvds controller support for sam9x7
Add a new LVDS controller driver for sam9x7 which does the following:
- Prepares and enables the LVDS Peripheral clock.
- Defines its connector type as DRM_MODE_CONNECTOR_LVDS and adds itself
  to the global bridge list.
- Identifies its output endpoint as a panel and adds it to the encoder
  display pipeline.
- Enables the LVDS serializer.

Signed-off-by: Manikandan Muralidharan
Signed-off-by: Dharma Balasubiramani
Acked-by: Hari Prasath Gujulan Elango
---
Changelog
v7 -> v8
- Assign the ret variable properly before checking it for an error.
v6 -> v7
- Remove setting the encoder type from the bridge driver.
- Drop clk_disable() from the pm_runtime_get_sync() error handling.
- Use devm_clk_get() instead of the prepared version.
- Hence use clk_prepare_enable() and clk_disable_unprepare().
- Use devm_drm_of_get_bridge() instead of devm_drm_panel_bridge_add().
- Add an error check for devm_pm_runtime_enable().
- Use dev_err() instead of DRM_DEV_ERROR() as it is deprecated.
- Add the missing Acked-by tag.
v5 -> v6
- No Changes.
v4 -> v5
- Drop the unused variable 'format'.
- Use the DRM wrapper for dev_err() to maintain uniformity.
- Return -ENODEV instead of -EINVAL to maintain consistency with other
  DRM bridge drivers.
v3 -> v4
- No changes.
v2 -> v3
- Correct the typo in "serializer".
- Consolidate the get() and prepare() functions and use
  devm_clk_get_prepared().
- Remove the unused variable 'ret' in probe().
- Use devm_pm_runtime_enable() and drop mchp_lvds_remove().
v1 -> v2
- Drop the 'res' variable and combine two lines into one.
- Handle deferred probe properly, use dev_err_probe().
- Don't print anything on deferred probe. Dropped the print.
- Remove the MODULE_ALIAS and add MODULE_DEVICE_TABLE().
- The symbol 'mchp_lvds_driver' was not declared; it should be static.
--- drivers/gpu/drm/bridge/Kconfig | 7 + drivers/gpu/drm/bridge/Makefile | 1 + drivers/gpu/drm/bridge/microchip-lvds.c | 229 3 files changed, 237 insertions(+) create mode 100644 drivers/gpu/drm/bridge/microchip-lvds.c diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig index efd996f6c138..889098e2d65f 100644 --- a/drivers/gpu/drm/bridge/Kconfig +++ b/drivers/gpu/drm/bridge/Kconfig @@ -190,6 +190,13 @@ config DRM_MEGACHIPS_STDP_GE_B850V3_FW to DP++. This is used with the i.MX6 imx-ldb driver. You are likely to say N here. +config DRM_MICROCHIP_LVDS_SERIALIZER + tristate "Microchip LVDS serializer support" + depends on OF + depends on DRM_ATMEL_HLCDC + help + Support for Microchip's LVDS serializer. + config DRM_NWL_MIPI_DSI tristate "Northwest Logic MIPI DSI Host controller" depends on DRM diff --git a/drivers/gpu/drm/bridge/Makefile b/drivers/gpu/drm/bridge/Makefile index 017b5832733b..7df87b582dca 100644 --- a/drivers/gpu/drm/bridge/Makefile +++ b/drivers/gpu/drm/bridge/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_DRM_LONTIUM_LT9611) += lontium-lt9611.o obj-$(CONFIG_DRM_LONTIUM_LT9611UXC) += lontium-lt9611uxc.o obj-$(CONFIG_DRM_LVDS_CODEC) += lvds-codec.o obj-$(CONFIG_DRM_MEGACHIPS_STDP_GE_B850V3_FW) += megachips-stdp-ge-b850v3-fw.o +obj-$(CONFIG_DRM_MICROCHIP_LVDS_SERIALIZER) += microchip-lvds.o obj-$(CONFIG_DRM_NXP_PTN3460) += nxp-ptn3460.o obj-$(CONFIG_DRM_PARADE_PS8622) += parade-ps8622.o obj-$(CONFIG_DRM_PARADE_PS8640) += parade-ps8640.o diff --git a/drivers/gpu/drm/bridge/microchip-lvds.c b/drivers/gpu/drm/bridge/microchip-lvds.c new file mode 100644 index ..b8313dad6072 --- /dev/null +++ b/drivers/gpu/drm/bridge/microchip-lvds.c @@ -0,0 +1,229 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 Microchip Technology Inc. 
and its subsidiaries
+ *
+ * Author: Manikandan Muralidharan
+ * Author: Dharma Balasubiramani
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define LVDS_POLL_TIMEOUT_MS 1000
+
+/* LVDSC register offsets */
+#define LVDSC_CR	0x00
+#define LVDSC_CFGR	0x04
+#define LVDSC_SR	0x0C
+#define LVDSC_WPMR	0xE4
+
+/* Bitfields in LVDSC_CR (Control Register) */
+#define LVDSC_CR_SER_EN	BIT(0)
+
+/* Bitfields in LVDSC_CFGR (Configuration Register) */
+#define LVDSC_CFGR_PIXSIZE_24BITS	0
+#define LVDSC_CFGR_DEN_POL_HIGH	0
+#define LVDSC_CFGR_DC_UNBALANCED	0
+#define LVDSC_CFGR_MAPPING_JEIDA	BIT(6)
+
+/* Bitfields in LVDSC_SR */
+#define LVDSC_SR_CS	BIT(0)
+
+/* Bitfields in LVDSC_WPMR (Write Protection Mode Register) */
+#define LVDSC_WPMR_WPKEY_MASK	GENMASK(31, 8)
+#define LVDSC_WPMR_WPKEY_PSSWD	0x4C5644
+
+struct mchp_lvds {
+	struct device *dev;
+	void __iomem *regs;
+	struct clk *pclk;
+	struct drm_panel *panel;
+	struct drm_bridge bridge;
+	struct drm_bridge *panel_bridge;
+};
+
+static inline struct mchp_lvds
[PATCH v8 1/4] dt-bindings: display: bridge: add sam9x75-lvds binding
Add the 'sam9x75-lvds' compatible binding, which describes the Low
Voltage Differential Signaling (LVDS) Controller found on some of
Microchip's sam9x7 series System-on-Chip (SoC) devices. This binding
will be used to define the properties and configuration for the LVDS
Controller in DT.

Signed-off-by: Dharma Balasubiramani
Reviewed-by: Rob Herring
---
Changelog
v7 -> v8
v6 -> v7
v5 -> v6
v4 -> v5
- No changes.
v3 -> v4
- Rephrase the commit subject.
v2 -> v3
- No changes.
v1 -> v2
- Remove '|' in description, as there is no formatting to preserve.
- Remove 'gclk' from clock-names as there is only one clock (pclk).
- Remove the unused headers and include only the used ones.
- Change the compatible name to be specific to the SoC (sam9x75)
  instead of the entire series.
- Change the file name to match the compatible name.
---
 .../bridge/microchip,sam9x75-lvds.yaml        | 55 +++
 1 file changed, 55 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml

diff --git a/Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml b/Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml
new file mode 100644
index 000000000000..862ef441ac9f
--- /dev/null
+++ b/Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml
@@ -0,0 +1,55 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/display/bridge/microchip,sam9x75-lvds.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Microchip SAM9X75 LVDS Controller
+
+maintainers:
+  - Dharma Balasubiramani
+
+description:
+  The Low Voltage Differential Signaling Controller (LVDSC) manages data
+  format conversion from the LCD Controller internal DPI bus to OpenLDI
+  LVDS output signals. LVDSC functions include bit mapping, balanced mode
+  management, and serializer.
+
+properties:
+  compatible:
+    const: microchip,sam9x75-lvds
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: Peripheral Bus Clock
+
+  clock-names:
+    items:
+      - const: pclk
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+
+additionalProperties: false
+
+examples:
+  - |
+    #include
+    #include
+    lvds-controller@f806 {
+      compatible = "microchip,sam9x75-lvds";
+      reg = <0xf806 0x100>;
+      interrupts = <56 IRQ_TYPE_LEVEL_HIGH 0>;
+      clocks = <&pmc PMC_TYPE_PERIPHERAL 56>;
+      clock-names = "pclk";
+    };
--
2.25.1
[PATCH v8 0/4] LVDS Controller Support for SAM9X75 SoC
This patch series introduces LVDS controller support for the SAM9X75 SoC. The LVDS controller is designed to work with Microchip's sam9x7 series System-on-Chip (SoC) devices, providing Low Voltage Differential Signaling capabilities. Patch series Changelog: - Include configs: at91: Enable LVDS serializer - include all necessary To/Cc entries. The Individual Changelogs are available on the respective patches. Dharma Balasubiramani (4): dt-bindings: display: bridge: add sam9x75-lvds binding drm/bridge: add lvds controller support for sam9x7 MAINTAINERS: add SAM9X7 SoC's LVDS controller ARM: configs: at91: Enable LVDS serializer support .../bridge/microchip,sam9x75-lvds.yaml| 55 + MAINTAINERS | 8 + arch/arm/configs/at91_dt_defconfig| 1 + drivers/gpu/drm/bridge/Kconfig| 7 + drivers/gpu/drm/bridge/Makefile | 1 + drivers/gpu/drm/bridge/microchip-lvds.c | 229 ++ 6 files changed, 301 insertions(+) create mode 100644 Documentation/devicetree/bindings/display/bridge/microchip,sam9x75-lvds.yaml create mode 100644 drivers/gpu/drm/bridge/microchip-lvds.c -- 2.25.1
[RFC][PATCH] drm: bridge: dw-mipi-dsi: Call modeset in modeset callback
Doing modeset in .atomic_pre_enable callback instead of dedicated .mode_set callback does not seem right. Undo this change, which was added as part of commit 05aa61334592 ("drm: bridge: dw-mipi-dsi: Fix enable/disable of DSI controller") as it breaks STM32MP15xx LTDC scanout (DSI)->TC358762 DSI-to-DPI bridge->PT800480 DPI panel pipeline. The original fix for HX8394 panel likely requires HX8394 panel side fix instead. Fixes: 05aa61334592 ("drm: bridge: dw-mipi-dsi: Fix enable/disable of DSI controller") Signed-off-by: Marek Vasut --- Cc: Andrzej Hajda Cc: Biju Das Cc: Daniel Vetter Cc: David Airlie Cc: Douglas Anderson Cc: Jernej Skrabec Cc: Jonas Karlman Cc: Laurent Pinchart Cc: Liu Ying Cc: Maarten Lankhorst Cc: Maxime Ripard Cc: Neil Armstrong Cc: Ondrej Jirman Cc: Rob Herring Cc: Robert Foss Cc: Sam Ravnborg Cc: Thomas Zimmermann Cc: dri-devel@lists.freedesktop.org Cc: linux-ker...@vger.kernel.org --- drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c | 18 +++--- 1 file changed, 3 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c index 824fb3c65742e..ca5894393dba4 100644 --- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c +++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c @@ -268,7 +268,6 @@ struct dw_mipi_dsi { struct dw_mipi_dsi *master; /* dual-dsi master ptr */ struct dw_mipi_dsi *slave; /* dual-dsi slave ptr */ - struct drm_display_mode mode; const struct dw_mipi_dsi_plat_data *plat_data; }; @@ -1016,25 +1015,15 @@ static void dw_mipi_dsi_mode_set(struct dw_mipi_dsi *dsi, phy_ops->power_on(dsi->plat_data->priv_data); } -static void dw_mipi_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge, -struct drm_bridge_state *old_bridge_state) -{ - struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge); - - /* Power up the dsi ctl into a command mode */ - dw_mipi_dsi_mode_set(dsi, >mode); - if (dsi->slave) - dw_mipi_dsi_mode_set(dsi->slave, >mode); -} - static void 
dw_mipi_dsi_bridge_mode_set(struct drm_bridge *bridge, const struct drm_display_mode *mode, const struct drm_display_mode *adjusted_mode) { struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge); - /* Store the display mode for later use in pre_enable callback */ - drm_mode_copy(>mode, adjusted_mode); + dw_mipi_dsi_mode_set(dsi, adjusted_mode); + if (dsi->slave) + dw_mipi_dsi_mode_set(dsi->slave, adjusted_mode); } static void dw_mipi_dsi_bridge_atomic_enable(struct drm_bridge *bridge, @@ -1090,7 +1079,6 @@ static const struct drm_bridge_funcs dw_mipi_dsi_bridge_funcs = { .atomic_get_input_bus_fmts = dw_mipi_dsi_bridge_atomic_get_input_bus_fmts, .atomic_check = dw_mipi_dsi_bridge_atomic_check, .atomic_reset = drm_atomic_helper_bridge_reset, - .atomic_pre_enable = dw_mipi_dsi_bridge_atomic_pre_enable, .atomic_enable = dw_mipi_dsi_bridge_atomic_enable, .atomic_post_disable= dw_mipi_dsi_bridge_post_atomic_disable, .mode_set = dw_mipi_dsi_bridge_mode_set, -- 2.43.0
Re: [PATCH 3/3] drm/msm/mdp4: correct LCDC regulator name
On 4/19/2024 7:33 PM, Dmitry Baryshkov wrote:
> Correct a copy-paste error from the conversion of LCDC regulators to
> the bulk API.
>
> Fixes: 54f1fbcb47d4 ("drm/msm/mdp4: use bulk regulators API for LCDC encoder")
> Signed-off-by: Dmitry Baryshkov
> ---
>  drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Indeed! I should have caught this during review :(

Reviewed-by: Abhinav Kumar
Re: [PATCH 1/3] drm/msm: don't clean up priv->kms prematurely
On 4/19/2024 7:33 PM, Dmitry Baryshkov wrote:
> MSM display drivers provide a kms structure allocated during probe().
> Don't clean up the priv->kms field in case of an error. Otherwise
> probe functions might fail after KMS probe deferral.

So, just to understand this better: this will happen when the master
component probe (dpu) succeeded but another sub-component probe (dsi)
deferred? Because if the master component probe itself deferred, it
will allocate priv->kms again, won't it, and we will not even hit here.

> Fixes: a2ab5d5bb6b1 ("drm/msm: allow passing struct msm_kms to msm_drv_probe()")
> Signed-off-by: Dmitry Baryshkov
> ---
>  drivers/gpu/drm/msm/msm_kms.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
> index af6a6fcb1173..6749f0fbca96 100644
> --- a/drivers/gpu/drm/msm/msm_kms.c
> +++ b/drivers/gpu/drm/msm/msm_kms.c
> @@ -244,7 +244,6 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
> 	ret = priv->kms_init(ddev);
> 	if (ret) {
> 		DRM_DEV_ERROR(dev, "failed to load kms\n");
> -		priv->kms = NULL;
> 		return ret;
> 	}
Re: [PATCH v2 8/9] drm/msm: merge dpu format database to MDP formats
On 4/19/2024 9:01 PM, Dmitry Baryshkov wrote:
> Finally remove the duplication between the DPU and generic MDP code by
> merging the DPU format lists into the MDP format database.
>
> Signed-off-by: Dmitry Baryshkov
> ---
>  .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c |   4 +-
>  .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c  |   7 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c      | 602
>  drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h      |  23 -
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h      |  10 -
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c          |   2 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c        |   3 +-
>  drivers/gpu/drm/msm/disp/mdp_format.c            | 614 ++---
>  drivers/gpu/drm/msm/disp/mdp_format.h            |  10 +
>  drivers/gpu/drm/msm/disp/mdp_kms.h               |   2 -
>  drivers/gpu/drm/msm/msm_drv.h                    |   2 +
>  11 files changed, 571 insertions(+), 708 deletions(-)

Reviewed-by: Abhinav Kumar
Re: [PATCH v2 4/9] drm/msm/dpu: pull format flag definitions to mdp_format.h
On 4/19/2024 9:01 PM, Dmitry Baryshkov wrote:
> In preparation for the merger of the format databases, pull the format
> flag definitions into the mdp_format.h header, so that they are
> visible to both the dpu and mdp drivers.
>
> Signed-off-by: Dmitry Baryshkov
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 98 ++---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h | 31 +++--
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c |  4 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   |  4 +-
>  drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c  |  8 +--
>  drivers/gpu/drm/msm/disp/mdp_format.c       |  6 +-
>  drivers/gpu/drm/msm/disp/mdp_format.h       | 39
>  drivers/gpu/drm/msm/disp/mdp_kms.h          |  4 +-
>  drivers/gpu/drm/msm/msm_drv.h               |  4 --
>  9 files changed, 109 insertions(+), 89 deletions(-)

Reviewed-by: Abhinav Kumar
[PATCH v3 4/5] drm/v3d: Decouple stats calculation from printing
Create a function to decouple the stats calculation from the printing. This will be useful in the next step when we add a seqcount to protect the stats. Signed-off-by: Maíra Canal --- drivers/gpu/drm/v3d/v3d_drv.c | 18 ++ drivers/gpu/drm/v3d/v3d_drv.h | 4 drivers/gpu/drm/v3d/v3d_sysfs.c | 11 +++ 3 files changed, 21 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c index 52e3ba9df46f..2ec359ed2def 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.c +++ b/drivers/gpu/drm/v3d/v3d_drv.c @@ -142,6 +142,15 @@ v3d_postclose(struct drm_device *dev, struct drm_file *file) kfree(v3d_priv); } +void v3d_get_stats(const struct v3d_stats *stats, u64 timestamp, + u64 *active_runtime, u64 *jobs_completed) +{ + *active_runtime = stats->enabled_ns; + if (stats->start_ns) + *active_runtime += timestamp - stats->start_ns; + *jobs_completed = stats->jobs_completed; +} + static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file) { struct v3d_file_priv *file_priv = file->driver_priv; @@ -150,20 +159,21 @@ static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file) for (queue = 0; queue < V3D_MAX_QUEUES; queue++) { struct v3d_stats *stats = _priv->stats[queue]; + u64 active_runtime, jobs_completed; + + v3d_get_stats(stats, timestamp, _runtime, _completed); /* Note that, in case of a GPU reset, the time spent during an * attempt of executing the job is not computed in the runtime. */ drm_printf(p, "drm-engine-%s: \t%llu ns\n", - v3d_queue_to_string(queue), - stats->start_ns ? stats->enabled_ns + timestamp - stats->start_ns - : stats->enabled_ns); + v3d_queue_to_string(queue), active_runtime); /* Note that we only count jobs that completed. Therefore, jobs * that were resubmitted due to a GPU reset are not computed. 
*/ drm_printf(p, "v3d-jobs-%s: \t%llu jobs\n", - v3d_queue_to_string(queue), stats->jobs_completed); + v3d_queue_to_string(queue), jobs_completed); } } diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index 5a198924d568..ff06dc1cc078 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -510,6 +510,10 @@ struct drm_gem_object *v3d_prime_import_sg_table(struct drm_device *dev, /* v3d_debugfs.c */ void v3d_debugfs_init(struct drm_minor *minor); +/* v3d_drv.c */ +void v3d_get_stats(const struct v3d_stats *stats, u64 timestamp, + u64 *active_runtime, u64 *jobs_completed); + /* v3d_fence.c */ extern const struct dma_fence_ops v3d_fence_ops; struct dma_fence *v3d_fence_create(struct v3d_dev *v3d, enum v3d_queue queue); diff --git a/drivers/gpu/drm/v3d/v3d_sysfs.c b/drivers/gpu/drm/v3d/v3d_sysfs.c index 6a8e7acc8b82..d610e355964f 100644 --- a/drivers/gpu/drm/v3d/v3d_sysfs.c +++ b/drivers/gpu/drm/v3d/v3d_sysfs.c @@ -15,18 +15,15 @@ gpu_stats_show(struct device *dev, struct device_attribute *attr, char *buf) struct v3d_dev *v3d = to_v3d_dev(drm); enum v3d_queue queue; u64 timestamp = local_clock(); - u64 active_runtime; ssize_t len = 0; len += sysfs_emit(buf, "queue\ttimestamp\tjobs\truntime\n"); for (queue = 0; queue < V3D_MAX_QUEUES; queue++) { struct v3d_stats *stats = >queue[queue].stats; + u64 active_runtime, jobs_completed; - if (stats->start_ns) - active_runtime = timestamp - stats->start_ns; - else - active_runtime = 0; + v3d_get_stats(stats, timestamp, _runtime, _completed); /* Each line will display the queue name, timestamp, the number * of jobs sent to that queue and the runtime, as can be seem here: @@ -40,9 +37,7 @@ gpu_stats_show(struct device *dev, struct device_attribute *attr, char *buf) */ len += sysfs_emit_at(buf, len, "%s\t%llu\t%llu\t%llu\n", v3d_queue_to_string(queue), -timestamp, -stats->jobs_completed, -stats->enabled_ns + active_runtime); +timestamp, jobs_completed, active_runtime); } return 
len; -- 2.44.0
[PATCH v3 5/5] drm/v3d: Fix race-condition between sysfs/fdinfo and interrupt handler
In V3D, the conclusion of a job is indicated by a IRQ. When a job finishes, then we update the local and the global GPU stats of that queue. But, while the GPU stats are being updated, a user might be reading the stats from sysfs or fdinfo. For example, on `gpu_stats_show()`, we could think about a scenario where `v3d->queue[queue].start_ns != 0`, then an interrupt happens, we update the value of `v3d->queue[queue].start_ns` to 0, we come back to `gpu_stats_show()` to calculate `active_runtime` and now, `active_runtime = timestamp`. In this simple example, the user would see a spike in the queue usage, that didn't match reality. In order to address this issue properly, use a seqcount to protect read and write sections of the code. Fixes: 09a93cc4f7d1 ("drm/v3d: Implement show_fdinfo() callback for GPU usage stats") Reported-by: Tvrtko Ursulin Signed-off-by: Maíra Canal Reviewed-by: Tvrtko Ursulin --- drivers/gpu/drm/v3d/v3d_drv.c | 14 ++ drivers/gpu/drm/v3d/v3d_drv.h | 7 +++ drivers/gpu/drm/v3d/v3d_gem.c | 1 + drivers/gpu/drm/v3d/v3d_sched.c | 7 +++ 4 files changed, 25 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c index 2ec359ed2def..28b7ddce7747 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.c +++ b/drivers/gpu/drm/v3d/v3d_drv.c @@ -121,6 +121,7 @@ v3d_open(struct drm_device *dev, struct drm_file *file) 1, NULL); memset(_priv->stats[i], 0, sizeof(v3d_priv->stats[i])); + seqcount_init(_priv->stats[i].lock); } v3d_perfmon_open_file(v3d_priv); @@ -145,10 +146,15 @@ v3d_postclose(struct drm_device *dev, struct drm_file *file) void v3d_get_stats(const struct v3d_stats *stats, u64 timestamp, u64 *active_runtime, u64 *jobs_completed) { - *active_runtime = stats->enabled_ns; - if (stats->start_ns) - *active_runtime += timestamp - stats->start_ns; - *jobs_completed = stats->jobs_completed; + unsigned int seq; + + do { + seq = read_seqcount_begin(>lock); + *active_runtime = stats->enabled_ns; + if (stats->start_ns) + 
*active_runtime += timestamp - stats->start_ns; + *jobs_completed = stats->jobs_completed; + } while (read_seqcount_retry(>lock, seq)); } static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file) diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index ff06dc1cc078..a2c516fe6d79 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -40,6 +40,13 @@ struct v3d_stats { u64 start_ns; u64 enabled_ns; u64 jobs_completed; + + /* +* This seqcount is used to protect the access to the GPU stats +* variables. It must be used as, while we are reading the stats, +* IRQs can happen and the stats can be updated. +*/ + seqcount_t lock; }; struct v3d_queue_state { diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 0086081a9261..da8faf3b9011 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -251,6 +251,7 @@ v3d_gem_init(struct drm_device *dev) queue->fence_context = dma_fence_context_alloc(1); memset(>stats, 0, sizeof(queue->stats)); + seqcount_init(>stats.lock); } spin_lock_init(>mm_lock); diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index b9614944931c..7cd8c335cd9b 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -114,16 +114,23 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue) struct v3d_stats *local_stats = >stats[queue]; u64 now = local_clock(); + write_seqcount_begin(_stats->lock); local_stats->start_ns = now; + write_seqcount_end(_stats->lock); + + write_seqcount_begin(_stats->lock); global_stats->start_ns = now; + write_seqcount_end(_stats->lock); } static void v3d_stats_update(struct v3d_stats *stats, u64 now) { + write_seqcount_begin(>lock); stats->enabled_ns += now - stats->start_ns; stats->jobs_completed++; stats->start_ns = 0; + write_seqcount_end(>lock); } void -- 2.44.0
[PATCH v3 3/5] drm/v3d: Create function to update a set of GPU stats
Given a set of GPU stats, that is, a `struct v3d_stats` related to a queue in a given context, create a function that can update this set of GPU stats. Signed-off-by: Maíra Canal Reviewed-by: Tvrtko Ursulin Reviewed-by: Jose Maria Casanova Crespo --- drivers/gpu/drm/v3d/v3d_sched.c | 17 ++--- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index b6b5542c3fcf..b9614944931c 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -118,6 +118,14 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue) global_stats->start_ns = now; } +static void +v3d_stats_update(struct v3d_stats *stats, u64 now) +{ + stats->enabled_ns += now - stats->start_ns; + stats->jobs_completed++; + stats->start_ns = 0; +} + void v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue) { @@ -127,13 +135,8 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue) struct v3d_stats *local_stats = >stats[queue]; u64 now = local_clock(); - local_stats->enabled_ns += now - local_stats->start_ns; - local_stats->jobs_completed++; - local_stats->start_ns = 0; - - global_stats->enabled_ns += now - global_stats->start_ns; - global_stats->jobs_completed++; - global_stats->start_ns = 0; + v3d_stats_update(local_stats, now); + v3d_stats_update(global_stats, now); } static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job) -- 2.44.0
[PATCH v3 1/5] drm/v3d: Create two functions to update all GPU stats variables
Currently, we manually perform all operations to update the GPU stats variables. Apart from the code repetition, this is very prone to errors, as we can see on commit 35f4f8c9fc97 ("drm/v3d: Don't increment `enabled_ns` twice"). Therefore, create two functions to manage updating all GPU stats variables. Now, the jobs only need to call for `v3d_job_update_stats()` when the job is done and `v3d_job_start_stats()` when starting the job. Co-developed-by: Tvrtko Ursulin Signed-off-by: Tvrtko Ursulin Signed-off-by: Maíra Canal Reviewed-by: Jose Maria Casanova Crespo --- drivers/gpu/drm/v3d/v3d_drv.h | 1 + drivers/gpu/drm/v3d/v3d_irq.c | 48 ++-- drivers/gpu/drm/v3d/v3d_sched.c | 80 +++-- 3 files changed, 40 insertions(+), 89 deletions(-) diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index 1950c723dde1..ee3545226d7f 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -543,6 +543,7 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo); void v3d_mmu_remove_ptes(struct v3d_bo *bo); /* v3d_sched.c */ +void v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue); int v3d_sched_init(struct v3d_dev *v3d); void v3d_sched_fini(struct v3d_dev *v3d); diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c index ce6b2fb341d1..d469bda52c1a 100644 --- a/drivers/gpu/drm/v3d/v3d_irq.c +++ b/drivers/gpu/drm/v3d/v3d_irq.c @@ -102,18 +102,8 @@ v3d_irq(int irq, void *arg) if (intsts & V3D_INT_FLDONE) { struct v3d_fence *fence = to_v3d_fence(v3d->bin_job->base.irq_fence); - struct v3d_file_priv *file = v3d->bin_job->base.file->driver_priv; - u64 runtime = local_clock() - file->start_ns[V3D_BIN]; - - file->jobs_sent[V3D_BIN]++; - v3d->queue[V3D_BIN].jobs_sent++; - - file->start_ns[V3D_BIN] = 0; - v3d->queue[V3D_BIN].start_ns = 0; - - file->enabled_ns[V3D_BIN] += runtime; - v3d->queue[V3D_BIN].enabled_ns += runtime; + v3d_job_update_stats(>bin_job->base, V3D_BIN); trace_v3d_bcl_irq(>drm, fence->seqno); 
dma_fence_signal(&fence->base); status = IRQ_HANDLED; @@ -122,18 +112,8 @@ v3d_irq(int irq, void *arg) if (intsts & V3D_INT_FRDONE) { struct v3d_fence *fence = to_v3d_fence(v3d->render_job->base.irq_fence); - struct v3d_file_priv *file = v3d->render_job->base.file->driver_priv; - u64 runtime = local_clock() - file->start_ns[V3D_RENDER]; - - file->jobs_sent[V3D_RENDER]++; - v3d->queue[V3D_RENDER].jobs_sent++; - - file->start_ns[V3D_RENDER] = 0; - v3d->queue[V3D_RENDER].start_ns = 0; - - file->enabled_ns[V3D_RENDER] += runtime; - v3d->queue[V3D_RENDER].enabled_ns += runtime; + v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER); trace_v3d_rcl_irq(&v3d->drm, fence->seqno); dma_fence_signal(&fence->base); status = IRQ_HANDLED; @@ -142,18 +122,8 @@ v3d_irq(int irq, void *arg) if (intsts & V3D_INT_CSDDONE(v3d->ver)) { struct v3d_fence *fence = to_v3d_fence(v3d->csd_job->base.irq_fence); - struct v3d_file_priv *file = v3d->csd_job->base.file->driver_priv; - u64 runtime = local_clock() - file->start_ns[V3D_CSD]; - - file->jobs_sent[V3D_CSD]++; - v3d->queue[V3D_CSD].jobs_sent++; - - file->start_ns[V3D_CSD] = 0; - v3d->queue[V3D_CSD].start_ns = 0; - - file->enabled_ns[V3D_CSD] += runtime; - v3d->queue[V3D_CSD].enabled_ns += runtime; + v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD); trace_v3d_csd_irq(&v3d->drm, fence->seqno); dma_fence_signal(&fence->base); status = IRQ_HANDLED; @@ -189,18 +159,8 @@ v3d_hub_irq(int irq, void *arg) if (intsts & V3D_HUB_INT_TFUC) { struct v3d_fence *fence = to_v3d_fence(v3d->tfu_job->base.irq_fence); - struct v3d_file_priv *file = v3d->tfu_job->base.file->driver_priv; - u64 runtime = local_clock() - file->start_ns[V3D_TFU]; - - file->jobs_sent[V3D_TFU]++; - v3d->queue[V3D_TFU].jobs_sent++; - - file->start_ns[V3D_TFU] = 0; - v3d->queue[V3D_TFU].start_ns = 0; - - file->enabled_ns[V3D_TFU] += runtime; - v3d->queue[V3D_TFU].enabled_ns += runtime; + v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU); trace_v3d_tfu_irq(&v3d->drm, fence->seqno); dma_fence_signal(&fence->base); status = IRQ_HANDLED;
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
[PATCH v3 0/5] drm/v3d: Fix GPU stats inconsistencies and race-condition
The first version of this series had the intention to fix two major issues with the GPU stats: 1. We were incrementing `enabled_ns` twice by the end of each job. 2. There is a race-condition between the IRQ handler and the users. The first of the issues was already addressed and the fix was applied to drm-misc-fixes. Now, what is left addresses the second issue. Apart from addressing this issue, this series improves the GPU stats code as a whole. We reduced code repetition, creating functions to start and update the GPU stats. This will likely reduce the odds of issue #1 happening again. v1 -> v2: https://lore.kernel.org/dri-devel/20240403203517.731876-1-mca...@igalia.com/T/ - As the first patch was a bugfix, it was pushed to drm-misc-fixes. - [1/4] Add Chema Casanova's R-b - [2/4] s/jobs_sent/jobs_completed and add the reasoning in the commit message (Chema Casanova) - [2/4] Add Chema Casanova's and Tvrtko Ursulin's R-b - [3/4] Call `local_clock()` only once, by adding a new parameter to the `v3d_stats_update` function (Chema Casanova) - [4/4] Move new line to the correct patch [2/4] (Tvrtko Ursulin) - [4/4] Use `seqcount_t` as locking primitive instead of a `rw_lock` (Tvrtko Ursulin) v2 -> v3: https://lore.kernel.org/dri-devel/20240417011021.600889-1-mca...@igalia.com/T/ - [4/5] New patch: separates the code refactor from the race-condition fix (Tvrtko Ursulin) - [5/5] s/interruption/interrupt (Tvrtko Ursulin) - [5/5] s/matches/match (Tvrtko Ursulin) - [5/5] Add Tvrtko Ursulin's R-b Best Regards, - Maíra Maíra Canal (5): drm/v3d: Create two functions to update all GPU stats variables drm/v3d: Create a struct to store the GPU stats drm/v3d: Create function to update a set of GPU stats drm/v3d: Decouple stats calculation from printing drm/v3d: Fix race-condition between sysfs/fdinfo and interrupt handler drivers/gpu/drm/v3d/v3d_drv.c | 33 drivers/gpu/drm/v3d/v3d_drv.h | 30 --- drivers/gpu/drm/v3d/v3d_gem.c | 9 ++-- drivers/gpu/drm/v3d/v3d_irq.c | 48 ++---
drivers/gpu/drm/v3d/v3d_sched.c | 94 + drivers/gpu/drm/v3d/v3d_sysfs.c | 13 ++--- 6 files changed, 109 insertions(+), 118 deletions(-) -- 2.44.0
[PATCH v3 2/5] drm/v3d: Create a struct to store the GPU stats
This will make it easier to instantiate the GPU stats variables and it will create a structure where we can store all the variables that refer to GPU stats. Note that, when we created the struct `v3d_stats`, we renamed `jobs_sent` to `jobs_completed`. This better expresses the semantics of the variable, as we are only accounting jobs that have been completed. Signed-off-by: Maíra Canal Reviewed-by: Tvrtko Ursulin Reviewed-by: Jose Maria Casanova Crespo --- drivers/gpu/drm/v3d/v3d_drv.c | 15 +++ drivers/gpu/drm/v3d/v3d_drv.h | 18 ++ drivers/gpu/drm/v3d/v3d_gem.c | 8 drivers/gpu/drm/v3d/v3d_sched.c | 20 drivers/gpu/drm/v3d/v3d_sysfs.c | 10 ++ 5 files changed, 39 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c index 3debf37e7d9b..52e3ba9df46f 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.c +++ b/drivers/gpu/drm/v3d/v3d_drv.c @@ -115,14 +115,12 @@ v3d_open(struct drm_device *dev, struct drm_file *file) v3d_priv->v3d = v3d; for (i = 0; i < V3D_MAX_QUEUES; i++) { - v3d_priv->enabled_ns[i] = 0; - v3d_priv->start_ns[i] = 0; - v3d_priv->jobs_sent[i] = 0; - sched = &v3d->queue[i].sched; drm_sched_entity_init(&v3d_priv->sched_entity[i], DRM_SCHED_PRIORITY_NORMAL, &sched, 1, NULL); + + memset(&v3d_priv->stats[i], 0, sizeof(v3d_priv->stats[i])); } v3d_perfmon_open_file(v3d_priv); @@ -151,20 +149,21 @@ static void v3d_show_fdinfo(struct drm_printer *p, struct drm_file *file) enum v3d_queue queue; for (queue = 0; queue < V3D_MAX_QUEUES; queue++) { + struct v3d_stats *stats = &file_priv->stats[queue]; + /* Note that, in case of a GPU reset, the time spent during an * attempt of executing the job is not computed in the runtime. */ drm_printf(p, "drm-engine-%s: \t%llu ns\n", v3d_queue_to_string(queue), - file_priv->start_ns[queue] ? file_priv->enabled_ns[queue] - + timestamp - file_priv->start_ns[queue] - : file_priv->enabled_ns[queue]); + stats->start_ns ?
stats->enabled_ns + timestamp - stats->start_ns + : stats->enabled_ns); /* Note that we only count jobs that completed. Therefore, jobs * that were resubmitted due to a GPU reset are not computed. */ drm_printf(p, "v3d-jobs-%s: \t%llu jobs\n", - v3d_queue_to_string(queue), file_priv->jobs_sent[queue]); + v3d_queue_to_string(queue), stats->jobs_completed); } } diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index ee3545226d7f..5a198924d568 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -36,15 +36,20 @@ static inline char *v3d_queue_to_string(enum v3d_queue queue) return "UNKNOWN"; } +struct v3d_stats { + u64 start_ns; + u64 enabled_ns; + u64 jobs_completed; +}; + struct v3d_queue_state { struct drm_gpu_scheduler sched; u64 fence_context; u64 emit_seqno; - u64 start_ns; - u64 enabled_ns; - u64 jobs_sent; + /* Stores the GPU stats for this queue in the global context. */ + struct v3d_stats stats; }; /* Performance monitor object. The perform lifetime is controlled by userspace @@ -188,11 +193,8 @@ struct v3d_file_priv { struct drm_sched_entity sched_entity[V3D_MAX_QUEUES]; - u64 start_ns[V3D_MAX_QUEUES]; - - u64 enabled_ns[V3D_MAX_QUEUES]; - - u64 jobs_sent[V3D_MAX_QUEUES]; + /* Stores the GPU stats for a specific queue for this fd. 
*/ + struct v3d_stats stats[V3D_MAX_QUEUES]; }; struct v3d_bo { diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index afc565078c78..0086081a9261 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -247,10 +247,10 @@ v3d_gem_init(struct drm_device *dev) int ret, i; for (i = 0; i < V3D_MAX_QUEUES; i++) { - v3d->queue[i].fence_context = dma_fence_context_alloc(1); - v3d->queue[i].start_ns = 0; - v3d->queue[i].enabled_ns = 0; - v3d->queue[i].jobs_sent = 0; + struct v3d_queue_state *queue = &v3d->queue[i]; + + queue->fence_context = dma_fence_context_alloc(1); + memset(&queue->stats, 0, sizeof(queue->stats)); } spin_lock_init(&v3d->mm_lock); diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index
Re: [PATCH v7 2/4] drm/bridge: add lvds controller support for sam9x7
Hi Dharma, kernel test robot noticed the following build warnings: [auto build test WARNING on drm-misc/drm-misc-next] [also build test WARNING on linus/master v6.9-rc4 next-20240419] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Dharma-Balasubiramani/dt-bindings-display-bridge-add-sam9x75-lvds-binding/20240418-170157 base: git://anongit.freedesktop.org/drm/drm-misc drm-misc-next patch link: https://lore.kernel.org/r/20240418085725.373797-3-dharma.b%40microchip.com patch subject: [PATCH v7 2/4] drm/bridge: add lvds controller support for sam9x7 config: arm-at91_dt_defconfig (https://download.01.org/0day-ci/archive/20240421/202404210232.ji4lxq3k-...@intel.com/config) compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project 7089c359a3845323f6f30c44a47dd901f2edfe63) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240421/202404210232.ji4lxq3k-...@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot | Closes: https://lore.kernel.org/oe-kbuild-all/202404210232.ji4lxq3k-...@intel.com/ All warnings (new ones prefixed by >>): In file included from drivers/gpu/drm/bridge/microchip-lvds.c:17: In file included from include/linux/phy/phy.h:17: In file included from include/linux/regulator/consumer.h:35: In file included from include/linux/suspend.h:5: In file included from include/linux/swap.h:9: In file included from include/linux/memcontrol.h:21: In file included from include/linux/mm.h:2208: include/linux/vmstat.h:522:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion] 522 | return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_" | ~~~ ^ ~~~ >> drivers/gpu/drm/bridge/microchip-lvds.c:199:6: warning: variable 'ret' is uninitialized when used here [-Wuninitialized] 199 | if (ret < 0) { | ^~~ drivers/gpu/drm/bridge/microchip-lvds.c:154:9: note: initialize the variable 'ret' to silence this warning 154 | int ret; | ^ | = 0 2 warnings generated.
vim +/ret +199 drivers/gpu/drm/bridge/microchip-lvds.c 148 149 static int mchp_lvds_probe(struct platform_device *pdev) 150 { 151 struct device *dev = &pdev->dev; 152 struct mchp_lvds *lvds; 153 struct device_node *port; 154 int ret; 155 156 if (!dev->of_node) 157 return -ENODEV; 158 159 lvds = devm_kzalloc(&pdev->dev, sizeof(*lvds), GFP_KERNEL); 160 if (!lvds) 161 return -ENOMEM; 162 163 lvds->dev = dev; 164 165 lvds->regs = devm_ioremap_resource(lvds->dev, 166 platform_get_resource(pdev, IORESOURCE_MEM, 0)); 167 if (IS_ERR(lvds->regs)) 168 return PTR_ERR(lvds->regs); 169 170 lvds->pclk = devm_clk_get(lvds->dev, "pclk"); 171 if (IS_ERR(lvds->pclk)) 172 return dev_err_probe(lvds->dev, PTR_ERR(lvds->pclk), 173 "could not get pclk_lvds\n"); 174 175 port = of_graph_get_remote_node(dev->of_node, 1, 0); 176 if (!port) { 177 dev_err(dev, 178 "can't find port point, please init lvds panel port!\n"); 179 return -ENODEV; 180 } 181 182 lvds->panel = of_drm_find_panel(port); 183 of_node_put(port); 184 185 if (IS_ERR(lvds->panel)) 186 return -EPROBE_DEFER; 187 188 lvds->panel_bridge = devm_drm_of_get_bridge(dev, dev->of_node, 1, 0); 189 190 if (IS_ERR(lvds->panel_bridge)) 191 return PTR_ERR(lvds->panel_bridge); 192 193 lvds->bridge.of_node = dev->of_node; 194 lvds->bridge.type = DRM_MODE_CONNECTOR_LVDS; 195 lvds->bridge.funcs = &mchp_lvds_bridge_funcs; 196 197 dev_set_drvdata(dev, lvds); 198 devm_pm_runtime_enable(dev); > 199 if (ret < 0) { 200 dev_err(lvds->dev, "failed to enable pm runtime: %d\n", ret); 201 return ret; 202 } 203 204 drm_bridge_add(&lvds->bridge); 205 206 return 0; 207 } 208 -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki
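The warning points at a dropped assignment: `ret` is declared at line 154 and tested at line 199, but nothing in between sets it. Assuming the intent was to check the return of `devm_pm_runtime_enable()` (which does return an int), the likely fix, subject to the author's confirmation, is a one-liner:

```diff
-	devm_pm_runtime_enable(dev);
+	ret = devm_pm_runtime_enable(dev);
 	if (ret < 0) {
```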
[PATCH 4/4] Documentation/gpu: Update AMD Display Core Unit Test documentation
Display Core unit tests documentation is a bit outdated; update it to follow the current configuration. Signed-off-by: Joao Paulo Pereira da Silva --- .../gpu/amdgpu/display/display-test.rst | 20 +-- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/Documentation/gpu/amdgpu/display/display-test.rst b/Documentation/gpu/amdgpu/display/display-test.rst index a8c136ce87b7..a9fddf0adae7 100644 --- a/Documentation/gpu/amdgpu/display/display-test.rst +++ b/Documentation/gpu/amdgpu/display/display-test.rst @@ -15,14 +15,14 @@ How to run the tests? = In order to facilitate running the test suite, a configuration file is present -in ``drivers/gpu/drm/amd/display/tests/dc/.kunitconfig``. This configuration file +in ``drivers/gpu/drm/amd/display/test/kunit/.kunitconfig``. This configuration file can be used to run the kunit_tool, a Python script (``tools/testing/kunit/kunit.py``) used to configure, build, exec, parse and run tests. .. code-block:: bash - $ ./tools/testing/kunit/kunit.py run --arch=x86_64 \ - --kunitconfig=drivers/gpu/drm/amd/display/tests +$ ./tools/testing/kunit/kunit.py run --arch=x86_64 \ +--kunitconfig=drivers/gpu/drm/amd/display/test/kunit Currently, the Display Core Unit Tests are only supported on x86_64. @@ -34,10 +34,9 @@ you might add the following config options to your ``.config``: CONFIG_KUNIT=y CONFIG_AMDGPU=m - CONFIG_AMD_DC_BASICS_KUNIT_TEST=y - CONFIG_AMD_DC_KUNIT_TEST=y - CONFIG_DCE_KUNIT_TEST=y - CONFIG_DML_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_BASICS_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_DML_KUNIT_TEST=y Once the kernel is built and installed, you can load the ``amdgpu`` module to run all tests available.
@@ -49,10 +48,9 @@ following config options to your ``.config``: CONFIG_KUNIT=y CONFIG_AMDGPU=y - CONFIG_AMD_DC_BASICS_KUNIT_TEST=y - CONFIG_AMD_DC_KUNIT_TEST=y - CONFIG_DCE_KUNIT_TEST=y - CONFIG_DML_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_BASICS_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_KUNIT_TEST=y + CONFIG_DRM_AMD_DC_DML_KUNIT_TEST=y In order to run specific tests, you can check the filter options from KUnit on Documentation/dev-tools/kunit/kunit-tool.rst. -- 2.44.0
[PATCH 3/4] drm/amd/display/test: Optimize kunit test suite dml_dcn20_fpu_dcn21_update_bw_bounding_box_test
The KUnit init function of the suite dml_dcn20_fpu_dcn21_update_bw_bounding_box_test does not need to be executed before every test, but only once before the test suite, since it's just used to store backup copies of DCN global structures. So, turn it into a suite_init. Signed-off-by: Joao Paulo Pereira da Silva --- .../amd/display/test/kunit/dc/dml/dcn20/dcn20_fpu_test.c| 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/amd/display/test/kunit/dc/dml/dcn20/dcn20_fpu_test.c b/drivers/gpu/drm/amd/display/test/kunit/dc/dml/dcn20/dcn20_fpu_test.c index c51a0afbe518..b13a952e0227 100644 --- a/drivers/gpu/drm/amd/display/test/kunit/dc/dml/dcn20/dcn20_fpu_test.c +++ b/drivers/gpu/drm/amd/display/test/kunit/dc/dml/dcn20/dcn20_fpu_test.c @@ -449,10 +449,10 @@ static struct _vcs_dpi_soc_bounding_box_st original_dcn2_1_soc; static struct _vcs_dpi_ip_params_st original_dcn2_1_ip; /** - * dcn20_fpu_dcn21_update_bw_bounding_box_test_init - Store backup copies of DCN global structures + * dcn20_fpu_dcn21_update_bw_bounding_box_test_suite_init - Store backup copies of DCN global structures * @test: represents a running instance of a test. */ -static int dcn20_fpu_dcn21_update_bw_bounding_box_test_init(struct kunit *test) +static int dcn20_fpu_dcn21_update_bw_bounding_box_test_suite_init(struct kunit_suite *suite) { memcpy(_dcn2_1_soc, _1_soc, sizeof(struct _vcs_dpi_soc_bounding_box_st)); memcpy(_dcn2_1_ip, _1_ip, sizeof(struct _vcs_dpi_ip_params_st)); @@ -553,7 +553,7 @@ static struct kunit_case dcn20_fpu_dcn21_update_bw_bounding_box_test_cases[] = { static struct kunit_suite dcn21_update_bw_bounding_box_test_suite = { .name = "dml_dcn20_fpu_dcn21_update_bw_bounding_box_test", - .init = dcn20_fpu_dcn21_update_bw_bounding_box_test_init, + .suite_init = dcn20_fpu_dcn21_update_bw_bounding_box_test_suite_init, .exit = dcn20_fpu_dcn21_update_bw_bounding_box_test_exit, .test_cases = dcn20_fpu_dcn21_update_bw_bounding_box_test_cases, }; -- 2.44.0
[PATCH 2/4] drm/amd/display/test: Fix kunit test that is not running
The KUnit test file test/kunit/dc/dml/calcs/bw_fixed_test.c does not have the correct path relative to the file being tested, dc/basics/bw_fixed.c. Also, it is neither compiling nor running. Therefore, change the test file path and import it conditionally in the file dc/basics/bw_fixed.c to make it runnable. Signed-off-by: Joao Paulo Pereira da Silva --- drivers/gpu/drm/amd/display/dc/basics/bw_fixed.c | 3 +++ .../test/kunit/dc/{dml/calcs => basics}/bw_fixed_test.c| 0 2 files changed, 3 insertions(+) rename drivers/gpu/drm/amd/display/test/kunit/dc/{dml/calcs => basics}/bw_fixed_test.c (100%) diff --git a/drivers/gpu/drm/amd/display/dc/basics/bw_fixed.c b/drivers/gpu/drm/amd/display/dc/basics/bw_fixed.c index c8cb89e0d4d0..f18945fc84b9 100644 --- a/drivers/gpu/drm/amd/display/dc/basics/bw_fixed.c +++ b/drivers/gpu/drm/amd/display/dc/basics/bw_fixed.c @@ -186,3 +186,6 @@ struct bw_fixed bw_mul(const struct bw_fixed arg1, const struct bw_fixed arg2) return res; } +#if IS_ENABLED(CONFIG_DRM_AMD_DC_BASICS_KUNIT_TEST) +#include "../../test/kunit/dc/basics/bw_fixed_test.c" +#endif diff --git a/drivers/gpu/drm/amd/display/test/kunit/dc/dml/calcs/bw_fixed_test.c b/drivers/gpu/drm/amd/display/test/kunit/dc/basics/bw_fixed_test.c similarity index 100% rename from drivers/gpu/drm/amd/display/test/kunit/dc/dml/calcs/bw_fixed_test.c rename to drivers/gpu/drm/amd/display/test/kunit/dc/basics/bw_fixed_test.c -- 2.44.0
[PATCH 1/4] drm/amd/display: Refactor AMD display KUnit tests configs
Configs in AMD display KUnit tests can be clarified. Remove unnecessary configs, rename configs to follow a pattern, and update config Help blocks. Signed-off-by: Joao Paulo Pereira da Silva --- drivers/gpu/drm/amd/display/Kconfig | 31 ++- drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 2 +- .../dc/dml/dcn20/display_mode_vba_20.c| 2 +- .../dc/dml/dcn20/display_rq_dlg_calc_20.c | 2 +- .../drm/amd/display/test/kunit/.kunitconfig | 7 ++--- .../gpu/drm/amd/display/test/kunit/Makefile | 4 +-- 6 files changed, 17 insertions(+), 31 deletions(-) diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig index 11b0e54262f3..b2760adb3da9 100644 --- a/drivers/gpu/drm/amd/display/Kconfig +++ b/drivers/gpu/drm/amd/display/Kconfig @@ -51,25 +51,25 @@ config DRM_AMD_SECURE_DISPLAY This option enables the calculation of crc of specific region via debugfs. Cooperate with specific DMCU FW. -config DCE_KUNIT_TEST - bool "Run all KUnit tests for DCE" if !KUNIT_ALL_TESTS +config DRM_AMD_DC_KUNIT_TEST + bool "Enable KUnit tests for the root of DC" if !KUNIT_ALL_TESTS depends on DRM_AMD_DC && KUNIT default KUNIT_ALL_TESTS help - Enables unit tests for the Display Controller Engine. Only useful for kernel - devs running KUnit. + Enables unit tests for files in the root of the Display Core directory. + Only useful for kernel devs running KUnit. For more information on KUnit and unit tests in general please refer to the KUnit documentation in Documentation/dev-tools/kunit/. If unsure, say N. -config DML_KUNIT_TEST +config DRM_AMD_DC_DML_KUNIT_TEST bool "Run all KUnit tests for DML" if !KUNIT_ALL_TESTS depends on DRM_AMD_DC_FP && KUNIT default KUNIT_ALL_TESTS help - Enables unit tests for the Display Controller Engine. Only useful for kernel + Enables unit tests for the Display Controller Next. Only useful for kernel devs running KUnit. 
For more information on KUnit and unit tests in general please refer to @@ -77,26 +77,13 @@ config DML_KUNIT_TEST If unsure, say N. -config AMD_DC_BASICS_KUNIT_TEST +config DRM_AMD_DC_BASICS_KUNIT_TEST bool "Enable KUnit tests for the 'basics' sub-component of DAL" if !KUNIT_ALL_TESTS depends on DRM_AMD_DC && KUNIT default KUNIT_ALL_TESTS help - Enables unit tests for the Display Core. Only useful for kernel - devs running KUnit. - - For more information on KUnit and unit tests in general please refer to - the KUnit documentation in Documentation/dev-tools/kunit/. - - If unsure, say N. - -config AMD_DC_KUNIT_TEST - bool "Enable KUnit tests for the 'utils' sub-component of DAL" if !KUNIT_ALL_TESTS - depends on DRM_AMD_DC && KUNIT - default KUNIT_ALL_TESTS - help - Enables unit tests for the basics folder of Display Core. Only useful for - kernel devs running KUnit. + Enables unit tests for the basics folder of the Display Core. Only useful + for kernel devs running KUnit. For more information on KUnit and unit tests in general please refer to the KUnit documentation in Documentation/dev-tools/kunit/. 
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c index 7aafdfeac60e..7efd4768b0d7 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c +++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c @@ -1439,6 +1439,6 @@ bool dc_wake_and_execute_gpint(const struct dc_context *ctx, enum dmub_gpint_com return result; } -#if IS_ENABLED(CONFIG_AMD_DC_KUNIT_TEST) +#if IS_ENABLED(CONFIG_DRM_AMD_DC_KUNIT_TEST) #include "../test/kunit/dc/dc_dmub_srv_test.c" #endif diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c index aea6e29fd6e5..5c5be75c08e0 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c @@ -5117,6 +5117,6 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l } } -#if IS_ENABLED(CONFIG_DML_KUNIT_TEST) +#if IS_ENABLED(CONFIG_DRM_AMD_DC_DML_KUNIT_TEST) #include "../../test/kunit/dc/dml/dcn20/display_mode_vba_20_test.c" #endif diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c index 45f75a7f84c7..aab34156e9ae 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c +++
[PATCH 0/4] drm/amd/display: Update Display Core unit tests
Hey, I'm interested in contributing to the display tests through this patch-set. I've noticed potential updates related to both refactoring and optimization. This patch-set applies these suggestions. [WHY] 1. The single test suite in the file test/kunit/dc/dml/calcs/bw_fixed_test.c, which tests some static functions defined in dc/basics/bw_fixed.c, is not being run. According to the KUnit documentation (https://www.kernel.org/doc/html/latest/dev-tools/kunit/usage.html#testing-static-functions), there are two strategies for testing static functions, but none of them seem to be configured. Additionally, it appears that the config DCE_KUNIT_TEST should be associated with this test, since it was introduced in the same patch as the test (https://lore.kernel.org/amd-gfx/20240222155811.44096-3-rodrigo.sique...@amd.com/), but it is not being used anywhere in the display driver. 2. Also, according to the documentation, "The display/tests folder replicates the folder hierarchy of the display folder". However, note that this test file (test/kunit/dc/dml/calcs/bw_fixed_test.c) has a conflicting path with the file that is being tested (dc/basics/bw_fixed.c). 3. Config names and help texts are a bit misleading and don't follow a strict pattern. For example, the config DML_KUNIT_TEST indicates that it is used to activate tests for the Display Core Engine, but instead activates tests for the Display Core Next. Also, note the different name patterns in DML_KUNIT_TEST and AMD_DC_BASICS_KUNIT_TEST. 4. The test suite dcn21_update_bw_bounding_box_test_suite configures an init function that doesn't need to be executed before every test, but only once before the suite runs. 5. There is some outdated info in the Documentation, such as the recommended command to run the tests: $ ./tools/testing/kunit/kunit.py run --arch=x86_64 \ --kunitconfig=drivers/gpu/drm/amd/display/tests (it doesn't work since there is no .kunitconfig in drivers/gpu/drm/amd/display/tests) [HOW] 1.
Revise config names and help blocks. 2. Change the path of the test file bw_fixed_test from test/kunit/dc/dml/calcs/bw_fixed_test.c to test/kunit/dc/basics/bw_fixed_test.c to make it consistent with the Documentation and the other display driver tests. Make this same test file run by importing it conditionally in the file dc/basics/bw_fixed.c. 3. Turn the test init function of the suite dcn21_update_bw_bounding_box_test_suite into a suite init. 4. Update the Documentation. Joao Paulo Pereira da Silva (4): drm/amd/display: Refactor AMD display KUnit tests configs drm/amd/display/test: Fix kunit test that is not running drm/amd/display/test: Optimize kunit test suite dml_dcn20_fpu_dcn21_update_bw_bounding_box_test Documentation/gpu: Update AMD Display Core Unit Test documentation .../gpu/amdgpu/display/display-test.rst | 20 ++-- drivers/gpu/drm/amd/display/Kconfig | 31 ++- .../gpu/drm/amd/display/dc/basics/bw_fixed.c | 3 ++ drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 2 +- .../dc/dml/dcn20/display_mode_vba_20.c| 2 +- .../dc/dml/dcn20/display_rq_dlg_calc_20.c | 2 +- .../drm/amd/display/test/kunit/.kunitconfig | 7 ++--- .../gpu/drm/amd/display/test/kunit/Makefile | 4 +-- .../dc/{dml/calcs => basics}/bw_fixed_test.c | 0 .../test/kunit/dc/dml/dcn20/dcn20_fpu_test.c | 6 ++-- 10 files changed, 32 insertions(+), 45 deletions(-) rename drivers/gpu/drm/amd/display/test/kunit/dc/{dml/calcs => basics}/bw_fixed_test.c (100%) -- 2.44.0
[PATCH] Revert "drm/etnaviv: Expose a few more chipspecs to userspace"
From: Christian Gmeiner This reverts commit 1dccdba084897443d116508a8ed71e0ac8a031a4. In userspace a different approach was chosen: hwdb. As a result, there is no need for these values. Signed-off-by: Christian Gmeiner --- drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 20 --- drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 12 - drivers/gpu/drm/etnaviv/etnaviv_hwdb.c | 34 -- include/uapi/drm/etnaviv_drm.h | 5 4 files changed, 71 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c index 734412aae94d..e47e5562535a 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c @@ -164,26 +164,6 @@ int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, u32 param, u64 *value) *value = gpu->identity.eco_id; break; - case ETNAVIV_PARAM_GPU_NN_CORE_COUNT: - *value = gpu->identity.nn_core_count; - break; - - case ETNAVIV_PARAM_GPU_NN_MAD_PER_CORE: - *value = gpu->identity.nn_mad_per_core; - break; - - case ETNAVIV_PARAM_GPU_TP_CORE_COUNT: - *value = gpu->identity.tp_core_count; - break; - - case ETNAVIV_PARAM_GPU_ON_CHIP_SRAM_SIZE: - *value = gpu->identity.on_chip_sram_size; - break; - - case ETNAVIV_PARAM_GPU_AXI_SRAM_SIZE: - *value = gpu->identity.axi_sram_size; - break; - default: DBG("%s: invalid param: %u", dev_name(gpu->dev), param); return -EINVAL; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h index 7d5e9158e13c..197e0037732e 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h @@ -54,18 +54,6 @@ struct etnaviv_chip_identity { /* Number of Neural Network cores. */ u32 nn_core_count; - /* Number of MAD units per Neural Network core. */ - u32 nn_mad_per_core; - - /* Number of Tensor Processing cores. */ - u32 tp_core_count; - - /* Size in bytes of the SRAM inside the NPU. */ - u32 on_chip_sram_size; - - /* Size in bytes of the SRAM across the AXI bus. */ - u32 axi_sram_size; - /* Size of the vertex cache.
*/ u32 vertex_cache_size; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c index d8e7334de8ce..8665f2658d51 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c @@ -17,10 +17,6 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = { .thread_count = 128, .shader_core_count = 1, .nn_core_count = 0, - .nn_mad_per_core = 0, - .tp_core_count = 0, - .on_chip_sram_size = 0, - .axi_sram_size = 0, .vertex_cache_size = 8, .vertex_output_buffer_size = 1024, .pixel_pipes = 1, @@ -52,11 +48,6 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = { .register_max = 64, .thread_count = 256, .shader_core_count = 1, - .nn_core_count = 0, - .nn_mad_per_core = 0, - .tp_core_count = 0, - .on_chip_sram_size = 0, - .axi_sram_size = 0, .vertex_cache_size = 8, .vertex_output_buffer_size = 512, .pixel_pipes = 1, @@ -89,10 +80,6 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = { .thread_count = 512, .shader_core_count = 2, .nn_core_count = 0, - .nn_mad_per_core = 0, - .tp_core_count = 0, - .on_chip_sram_size = 0, - .axi_sram_size = 0, .vertex_cache_size = 16, .vertex_output_buffer_size = 1024, .pixel_pipes = 1, @@ -125,10 +112,6 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = { .thread_count = 512, .shader_core_count = 2, .nn_core_count = 0, - .nn_mad_per_core = 0, - .tp_core_count = 0, - .on_chip_sram_size = 0, - .axi_sram_size = 0, .vertex_cache_size = 16, .vertex_output_buffer_size = 1024, .pixel_pipes = 1, @@ -160,11 +143,6 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = { .register_max = 64, .thread_count = 512, .shader_core_count = 2, - .nn_core_count = 0, - .nn_mad_per_core = 0, - .tp_core_count = 0, - .on_chip_sram_size = 0, - .axi_sram_size = 0,
[PATCH] drm/bridge: imx: Fix unmet depenency for PHY_FSL_SAMSUNG_HDMI_PHY
When enabling the i.MX8MP DWC HDMI driver, it automatically selects PHY_FSL_SAMSUNG_HDMI_PHY, since it won't work without the PHY. This may cause some Kconfig warnings during various build tests. Fix this by implying the PHY instead of selecting it. Fixes: 1f36d634670d ("drm/bridge: imx: add bridge wrapper driver for i.MX8MP DWC HDMI") Reported-by: kernel test robot Closes: https://lore.kernel.org/oe-kbuild-all/202404190103.llm8ltup-...@intel.com/ Signed-off-by: Adam Ford diff --git a/drivers/gpu/drm/bridge/imx/Kconfig b/drivers/gpu/drm/bridge/imx/Kconfig index 7687ed652df5..8f125c75050d 100644 --- a/drivers/gpu/drm/bridge/imx/Kconfig +++ b/drivers/gpu/drm/bridge/imx/Kconfig @@ -9,7 +9,7 @@ config DRM_IMX8MP_DW_HDMI_BRIDGE depends on DRM_DW_HDMI depends on OF select DRM_IMX8MP_HDMI_PVI - select PHY_FSL_SAMSUNG_HDMI_PHY + imply PHY_FSL_SAMSUNG_HDMI_PHY help Choose this to enable support for the internal HDMI encoder found on the i.MX8MP SoC. -- 2.43.0
Re: [PATCH v2 1/4] drm: add devm release action
On 19/04/24 14:41, Maxime Ripard wrote: > On Fri, Apr 19, 2024 at 02:28:23PM +0530, Aravind Iddamsetty wrote: >> In scenarios where drm_dev_put is directly called by the driver we want to >> release the devm_drm_dev_init_release action associated with struct >> drm_device. >> >> v2: Directly expose the original function, instead of introducing a >> helper (Rodrigo) >> >> Cc: Thomas Hellström >> Cc: Rodrigo Vivi >> >> Reviewed-by: Rodrigo Vivi >> Signed-off-by: Aravind Iddamsetty >> --- >> drivers/gpu/drm/drm_drv.c | 6 ++ >> include/drm/drm_drv.h | 2 ++ >> 2 files changed, 8 insertions(+) >> >> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c >> index 243cacb3575c..ba60cbb0725f 100644 >> --- a/drivers/gpu/drm/drm_drv.c >> +++ b/drivers/gpu/drm/drm_drv.c >> @@ -714,6 +714,12 @@ static int devm_drm_dev_init(struct device *parent, >> devm_drm_dev_init_release, dev); >> } >> >> +void devm_drm_dev_release_action(struct drm_device *dev) >> +{ >> +devm_release_action(dev->dev, devm_drm_dev_init_release, dev); >> +} >> +EXPORT_SYMBOL(devm_drm_dev_release_action); > Again, this needs to be documented. sorry I missed your earlier email, will address this. Thanks, Aravind. > > Maxime
Re: [PATCH][next] drivers: video: Simplify device_node cleanup using __free
On 20 Apr 2024 1:13:42 am, Dmitry Baryshkov wrote:
> On Sat, Apr 20, 2024 at 12:22:41AM +0530, Shresth Prasad wrote:
>>
>>> Please fix the subject line to be "backlight: : ...". I came
>>> very close to deleting this patch without reading it ;-) .
>>
>> Really sorry about that, I'll fix it.
>>
>>> Do we need to get dev->of_node at all? The device, which we are
>>> borrowing, already owns a reference to the node, so I don't see
>>> any point in this function taking an extra one.
>>>
>>> So why not simply make this:
>>>
>>>     struct device_node *np = dev->of_node;
>>
>> Looking at it again, I'm not sure why the call to `of_node_put` is there
>> in the first place. I think removing it will be fine.
>>
>> I'll fix both of these issues and send a patch v2.
>
> Just a stupid question: on which platform was this patch tested?
>
> --
> With best wishes
> Dmitry

I tested the patch on an x86_64 qemu virtual machine.

Regards,
Shresth
Re: [PATCH v3 2/2] misc: sram: Add DMA-BUF Heap exporting of SRAM areas
On Fri, Apr 19, 2024 at 06:57:47PM +0200, Christian Gmeiner wrote:
> On Tue, Apr 9, 2024 at 14:14, Greg Kroah-Hartman wrote:
> >
> > On Tue, Apr 09, 2024 at 02:06:05PM +0200, Pascal FONTAIN wrote:
> > > From: Andrew Davis
> > >
> > > This new export type exposes the SRAM area to userspace as a DMA-BUF
> > > Heap; this allows allocations of DMA-BUFs that can be consumed by
> > > various DMA-BUF-supporting devices.
> > >
> > > Signed-off-by: Andrew Davis
> > > Tested-by: Pascal Fontain
> >
> > When sending on a patch from someone else, you too must sign off on it
> > as per our documentation. Please read it and understand what you are
> > agreeing to when you do that for a new version, please.
> >
> > > ---
> > >  drivers/misc/Kconfig         |   7 +
> > >  drivers/misc/Makefile        |   1 +
> > >  drivers/misc/sram-dma-heap.c | 246 +++
> > >  drivers/misc/sram.c          |   6 +
> > >  drivers/misc/sram.h          |  16 +++
> > >  5 files changed, 276 insertions(+)
> > >  create mode 100644 drivers/misc/sram-dma-heap.c
> > >
> > > diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
> > > index 9e4ad4d61f06..e6674e913168 100644
> > > --- a/drivers/misc/Kconfig
> > > +++ b/drivers/misc/Kconfig
> > > @@ -448,6 +448,13 @@ config SRAM
> > >  config SRAM_EXEC
> > >  	bool
> > >
> > > +config SRAM_DMA_HEAP
> > > +	bool "Export on-chip SRAM pools using DMA-Heaps"
> > > +	depends on DMABUF_HEAPS && SRAM
> > > +	help
> > > +	  This driver allows the export of on-chip SRAM marked as both pool
> > > +	  and exportable to userspace using the DMA-Heaps interface.
> >
> > What will use this in userspace?
>
> I could imagine a way it might be used.

This implies it is not needed, because no one has actually used it for anything, so why make this change?

> Imagine a SoC like TI's AM64x series, where some cores (A53) run Linux
> and others (R5F) are managed by remoteproc to execute an RTOS.
> When it comes to efficiently exchanging large data sets, the
> conventional method involves using rpmsg, which has limitations due to
> message size and could potentially slow down data transfer. However, an
> alternative approach could be to allocate a sizable chunk of SRAM memory
> in userspace. By using memcpy() to copy data into this memory, followed
> by a single rpmsg signal to notify the RTOS that the data is ready, we
> can leverage the faster access speed the remote processor has to SRAM
> compared to DDR.

Why is virtio not used instead?

thanks,

greg k-h