On Thu, 22 May 2025 11:33:00 +0000
"Shankar, Uma" <uma.shan...@intel.com> wrote:

> One request though: can we enhance the LUT samples from the existing 16 bits
> to 32 bits, as LUT precision is going to exceed 16 bits on certain hardware?
> While adding the new UAPI, let's extend this to 32 bits to make it
> future-proof.
> Reference: https://patchwork.freedesktop.org/patch/642592/?series=129811&rev=4
> 
> +/**
> + * struct drm_color_lut_32 - Represents high precision lut values
> + *
> + * Creating 32-bit palette entries for better data
> + * precision. This will be required for HDR and
> + * similar color processing use cases.
> + */
> +struct drm_color_lut_32 {
> +     /*
> +      * Data for high precision LUTs
> +      */
> +     __u32 red;
> +     __u32 green;
> +     __u32 blue;
> +     __u32 reserved;
> +};

Hi,

I suppose you need this much precision for optical data? If so,
floating-point would be much more appropriate and we could probably
keep 16-bit storage.

What does the "more than 16-bit" hardware actually use? ISTR at least
AMD having some sort of float-ish internal pipeline?

This sounds like the same issue as non-uniformly distributed taps in a
LUT: those mimic floating-point input, while this feels like
floating-point output of a LUT.

I've recently decided for myself (and Weston) that I will never store
optical data in an integer format, because it is far too wasteful. That's
why electrical encodings like power-2.2 are so useful, not just for
emulating a CRT.


Thanks,
pq
