On Sat, Nov 01, 2025 at 06:15:17PM -0500, Aaron Kling via B4 Relay wrote:
> From: Aaron Kling <[email protected]>
>
> Without the cmu, nvdisplay will display colors that are notably darker
> than intended. The vendor bootloader and the downstream display driver
> enable the cmu and set an sRGB table. Loading that table here results in
> the intended colors.
>
> Signed-off-by: Aaron Kling <[email protected]>
> ---
>  drivers/gpu/drm/tegra/dc.h  |  13 +++
>  drivers/gpu/drm/tegra/sor.c | 206 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 219 insertions(+)
What does "darker than intended" mean? Who defines the intention? How do
we know what the intention is? What this patch ultimately seems to be
doing is to make sRGB the default colorspace. Is that always the right
default choice? What if people want to specify a different colorspace?
Looking at the enum dp_colorimetry it seems like sRGB is the default for
DP at least. But then it says the default is sRGB or ITU-R BT.601,
whereas the Wikipedia article for sRGB says that sRGB is closer to ITU-R
BT.709 than to BT.601. Can we narrow down what exactly the LUT in this
patch corresponds to?
> diff --git a/drivers/gpu/drm/tegra/dc.h b/drivers/gpu/drm/tegra/dc.h
> index 0559fa6b1bf70416e51d5067cc04a6ae6572de23..286eddd89a28f7ea9e64c00f03af76f6c68ae9d8 100644
> --- a/drivers/gpu/drm/tegra/dc.h
> +++ b/drivers/gpu/drm/tegra/dc.h
> @@ -447,11 +447,24 @@ int tegra_dc_rgb_exit(struct tegra_dc *dc);
> #define BASE_COLOR_SIZE_888 ( 8 << 0)
> #define BASE_COLOR_SIZE_101010 ( 10 << 0)
> #define BASE_COLOR_SIZE_121212 ( 12 << 0)
> +#define CMU_ENABLE_MASK (1 << 20)
> +#define CMU_ENABLE_DISABLE (0 << 20)
> +#define CMU_ENABLE_ENABLE (1 << 20)
_MASK and _DISABLE are unused (and also quite useless in this case).
>
> #define DC_DISP_SHIFT_CLOCK_OPTIONS 0x431
> #define SC1_H_QUALIFIER_NONE (1 << 16)
> #define SC0_H_QUALIFIER_NONE (1 << 0)
>
> +/* Nvdisplay */
> +#define DC_DISP_CORE_HEAD_SET_CONTROL_OUTPUT_LUT 0x431
> +#define OUTPUT_LUT_MODE_MASK (3 << 5)
> +#define OUTPUT_LUT_MODE_INTERPOLATE (1 << 5)
> +#define OUTPUT_LUT_SIZE_MASK (3 << 1)
> +#define OUTPUT_LUT_SIZE_SIZE_1025 (2 << 1)
> +
> +#define DC_DISP_COREPVT_HEAD_SET_OUTPUT_LUT_BASE 0x432
> +#define DC_DISP_COREPVT_HEAD_SET_OUTPUT_LUT_BASE_HI 0x433
> +
There's a section in this header titled "Tegra186 and later", where
these new definitions should go. Anything in this section is part of the
old registers (or the emulated ones for backwards compatibility).
> #define DC_DISP_DATA_ENABLE_OPTIONS 0x432
> #define DE_SELECT_ACTIVE_BLANK (0 << 0)
> #define DE_SELECT_ACTIVE (1 << 0)
> diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
> index 21f3dfdcc5c9576580b9aa9990dd1bedcdeb4482..a381cb35113c0f3191d7204302f4024f33141622 100644
> --- a/drivers/gpu/drm/tegra/sor.c
> +++ b/drivers/gpu/drm/tegra/sor.c
> @@ -443,6 +443,9 @@ struct tegra_sor {
> bool scdc_enabled;
>
> struct tegra_hda_format format;
> +
> + u64 *cmu_output_lut;
> + dma_addr_t cmu_output_phys;
> };
>
> struct tegra_sor_state {
> @@ -483,6 +486,180 @@ static inline struct tegra_sor *to_sor(struct tegra_output *output)
> return container_of(output, struct tegra_sor, output);
> }
>
> +static u16 default_srgb_lut[] = {
> + 0x6000, 0x60CE, 0x619D, 0x626C, 0x632D, 0x63D4,
[...]
> + 0x9FE2, 0x9FE9, 0x9FF0, 0x9FF7, 0x9FFF,
> +};
I take it there's no way to generate this table? And these are not
standard values that could be shared among different drivers?
You could probably make this a bit more compact by indenting the data
with a single tab and squeezing in 2 or 3 more values per line.
> static inline u32 tegra_sor_readl(struct tegra_sor *sor, unsigned int offset)
> {
> u32 value = readl(sor->regs + (offset << 2));
> @@ -2241,6 +2418,13 @@ static void tegra_sor_hdmi_disable(struct drm_encoder *encoder)
> dev_err(sor->dev, "failed to power off I/O pad: %d\n", err);
>
> host1x_client_suspend(&sor->client);
> +
> + if (sor->soc->has_nvdisplay) {
> + dma_free_coherent(dc->dev, ARRAY_SIZE(default_srgb_lut) * sizeof(u64),
> + sor->cmu_output_lut, sor->cmu_output_phys);
> + sor->cmu_output_lut = NULL;
> + sor->cmu_output_phys = 0;
> + }
> }
>
> static void tegra_sor_hdmi_enable(struct drm_encoder *encoder)
> @@ -2255,6 +2439,7 @@ static void tegra_sor_hdmi_enable(struct drm_encoder *encoder)
> unsigned long rate, pclk;
> unsigned int div, i;
> u32 value;
> + u64 r;
This can be moved into the branch to narrow the scope.
> int err;
>
> state = to_sor_state(output->connector.state);
> @@ -2557,6 +2742,27 @@ static void tegra_sor_hdmi_enable(struct drm_encoder *encoder)
> value &= ~DITHER_CONTROL_MASK;
> value &= ~BASE_COLOR_SIZE_MASK;
>
> + if (dc->soc->has_nvdisplay) {
> + sor->cmu_output_lut =
> + dma_alloc_coherent(dc->dev, ARRAY_SIZE(default_srgb_lut) * sizeof(u64),
> + &sor->cmu_output_phys, GFP_KERNEL);
You need to check for failure here, otherwise you might dereference a
NULL pointer below. But then it's probably even better to allocate this
at probe time so that we can guarantee that the LUT can always be set.
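Something along these lines, moved into the probe path (untested sketch;
the surrounding variable names and error label are my guesses, not actual
code from sor.c):

```c
	/*
	 * Hypothetical probe-time allocation so the enable path cannot
	 * fail: allocate once, bail out with -ENOMEM on failure.
	 */
	sor->cmu_output_lut =
		dma_alloc_coherent(sor->dev,
				   ARRAY_SIZE(default_srgb_lut) * sizeof(u64),
				   &sor->cmu_output_phys, GFP_KERNEL);
	if (!sor->cmu_output_lut) {
		err = -ENOMEM;
		goto remove;
	}
```

The enable path would then only fill in and program the LUT, and the
matching dma_free_coherent() would move to the remove path.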
> +
> + for (i = 0; i < ARRAY_SIZE(default_srgb_lut); i++) {
> + r = default_srgb_lut[i];
> + sor->cmu_output_lut[i] = (r << 32) | (r << 16) | r;
> + }
Given that this was taken from the downstream driver, this is probably
correct, but I'm not sure I understand why the same value is written to
the LUT thrice. Do you happen to know?
> + tegra_dc_writel(dc, (u32)(sor->cmu_output_phys & 0xffffffff),
> + DC_DISP_COREPVT_HEAD_SET_OUTPUT_LUT_BASE);
> + tegra_dc_writel(dc, (u32)(sor->cmu_output_phys >> 32),
> + DC_DISP_COREPVT_HEAD_SET_OUTPUT_LUT_BASE_HI);
You'll want to use the lower_32_bits() and upper_32_bits() functions
like we do for other address values (see hub.c, for example).
Thierry