[FFmpeg-devel] SDR->HDR tone mapping algorithm?

2019-02-08 Thread Harish Krupo
Hello,

We are in the process of implementing HDR rendering support in the
Weston display compositor [1] (HDR discussion here [2]). When HDR
and SDR surfaces like a video buffer and a subtitle buffer are presented
together, the composition would take place as follows:
- If the display does not support HDR metadata:
  incoming HDR surfaces would be tone mapped to SDR using OpenGL and
  blended with the other SDR surfaces. We are currently using the Hable
  operator for tone mapping (a minimal sketch of it follows below).
- If the display supports setting HDR metadata:
  SDR surfaces would be tone mapped to HDR and blended with HDR surfaces.
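
For reference, a minimal C sketch of the Hable (Uncharted 2) curve as it
is commonly published; the constants and the 11.2 white point are the
usual published values, and our actual GLSL may differ:

/* Hable (Uncharted 2) filmic curve on a single linear-light channel,
 * using the commonly published constants. */
static float hable(float x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;

    return ((x * (A * x + C * B) + D * E) /
            (x * (A * x + B) + D * F)) - E / F;
}

/* Map an HDR linear value (here assumed normalized so 1.0 = SDR white)
 * into a display-referred [0, 1] range. */
static float tonemap_hable(float x)
{
    const float W = 11.2f;   /* assumed linear white point */

    return hable(x) / hable(W);
}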

The literature available for SDR->HDR tone mapping ranges from simple
linear expansion of luminance to CNN-based approaches. We wanted to know
your recommendations for an acceptable algorithm for SDR->HDR tone mapping.
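
As an illustration of the simplest end of that spectrum, here is a hedged
C sketch of linear luminance expansion: decode the SDR signal to linear
light, place SDR reference white at an assumed 203 cd/m^2, and re-encode
with the PQ (SMPTE ST 2084) inverse EOTF. The 2.2 decode gamma and the
203-nit reference white are assumptions for illustration, not a
recommendation:

#include <math.h>

/* "Linear expansion" sketch: SDR code value -> linear light -> absolute
 * luminance (assumed 203-nit SDR white) -> PQ code value. */
static float sdr_to_pq(float sdr /* non-linear SDR code value, 0..1 */)
{
    const float sdr_white_nits = 203.0f;    /* assumed SDR reference white */
    const float pq_peak_nits   = 10000.0f;  /* PQ signal peak */

    /* PQ constants from SMPTE ST 2084 */
    const float m1 = 2610.0f / 16384.0f;
    const float m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f;
    const float c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;

    float linear = powf(sdr, 2.2f);                   /* assumed decode EOTF */
    float Y      = linear * sdr_white_nits / pq_peak_nits;
    float Ym1    = powf(Y, m1);

    return powf((c1 + c2 * Ym1) / (1.0f + c3 * Ym1), m2);
}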

Any help is greatly appreciated!

[1] https://gitlab.freedesktop.org/wayland/weston
[2] 
https://lists.freedesktop.org/archives/wayland-devel/2019-January/039809.html

Thank you
Regards
Harish Krupo


Re: [FFmpeg-devel] SDR->HDR tone mapping algorithm?

2019-02-08 Thread Harish Krupo
Hi Vittorio,

Vittorio Giovara  writes:

> On Fri, Feb 8, 2019 at 3:22 AM Harish Krupo 
> wrote:
>
>> Hello,
>>
>> We are in the process of implementing HDR rendering support in the
>> Weston display compositor [1] (HDR discussion here [2]). When HDR
>> and SDR surfaces like a video buffer and a subtitle buffer are presented
>> together, the composition would take place as follows:
>> - If the display does not support HDR metadata:
>>   in-coming HDR surfaces would be tone mapped using opengl to SDR and
>>   blended with the other SDR surfaces. We are currently using the Hable
>>   operator for tone mapping.
>> - If the display supports setting HDR metadata:
>>   SDR surfaces would be tone mapped to HDR and blended with HDR surfaces.
>>
>> The literature available for SDR->HDR tone mapping varies from simple
>> linear expansion of luminance to CNN based approaches. We wanted to know
>> your recommendations for an acceptable algorithm for SDR->HDR tone mapping.
>>
>> Any help is greatly appreciated!
>>
>> [1] https://gitlab.freedesktop.org/wayland/weston
>> [2]
>> https://lists.freedesktop.org/archives/wayland-devel/2019-January/039809.html
>>
>> Thank you
>> Regards
>> Harish Krupo
>>
>
> In *theory* the tonemapping functions should be reversible, so if you use
> vf_tonemap or vf_tonemap_opencl and properly expand the range via zimg
> (vf_zscale) before compression it should work fine. However I have never
> tried it myself, so I cannot guarantee that those filters will work as is.
> Of course haasn from the libplacebo project might have better suggestions,
> so you should really reach out to him.

Thanks, we will try reversing the algorithms, and we will reach out to
haasn.
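
For the record, by "reversing" we mean something along these lines (a
hedged sketch; the curve is assumed to be monotonically increasing, and
none of this is an existing FFmpeg or libplacebo API):

/* Numerically invert a monotonically increasing tone curve by bisection
 * on [0, x_max]. */
static float inverse_tonemap(float (*curve)(float), float y, float x_max)
{
    float lo = 0.0f, hi = x_max;

    for (int i = 0; i < 64; i++) {   /* plenty for float precision */
        float mid = 0.5f * (lo + hi);

        if (curve(mid) < y)
            lo = mid;
        else
            hi = mid;
    }

    return 0.5f * (lo + hi);
}

/* e.g. x = inverse_tonemap(tonemap_hable, y, 11.2f) with the curve
 * sketched in the first mail */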

Regards
Harish Krupo


Re: [FFmpeg-devel] SDR->HDR tone mapping algorithm?

2019-02-08 Thread Harish Krupo
Hi Jean,

Jean-Baptiste Kempf  writes:

> Hello,
>
> On Fri, 8 Feb 2019, at 09:17, Harish Krupo wrote:
>> The literature available for SDR->HDR tone mapping varies from simple
>> linear expansion of luminance to CNN based approaches. We wanted to know
>> your recommendations for an acceptable algorithm for SDR->HDR tone mapping.
>
> You really need to talk to haasn from the libplacebo project.

Sure, will do so. Thanks!

Regards
Harish Krupo



Re: [FFmpeg-devel] SDR->HDR tone mapping algorithm?

2019-02-12 Thread Harish Krupo
nd without meaningful latency.
>

We agree, and that's why, while deciding the target colorspace in the
compositor, we consider the display's supported colorspaces. This means
we apply only one gamut conversion step per buffer, converting it
directly to the target colorspace (a sketch of such a step is below).
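
For illustration, a hedged C sketch of what that single step could look
like in linear light, using the commonly cited BT.709 -> BT.2020 matrix
from ITU-R BT.2087; the values and the helper are an example, not actual
Weston code:

/* Commonly cited BT.709 -> BT.2020 conversion matrix (ITU-R BT.2087),
 * applied to linear-light RGB. */
static const float bt709_to_bt2020[3][3] = {
    { 0.6274f, 0.3293f, 0.0433f },
    { 0.0691f, 0.9195f, 0.0114f },
    { 0.0164f, 0.0880f, 0.8956f },
};

/* Single gamut-conversion step: multiply a linear RGB triple by a 3x3
 * matrix selected for the buffer's source and target primaries. */
static void convert_gamut(float rgb[3], const float m[3][3])
{
    float out[3];

    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * rgb[0] + m[i][1] * rgb[1] + m[i][2] * rgb[2];

    rgb[0] = out[0];
    rgb[1] = out[1];
    rgb[2] = out[2];
}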

> 2. Rec 2020 is not (inherently) HDR. Also, the choice of color gamut has
>nothing to do with the choice of transfer function. I might have Rec
>709 HDR content. In general, when ingesting a buffer, the user should
>be responsible for tagging both its color primaries and its transfer
>function.
>

We are adding a few protocol extensions to provide exactly this
information.
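
Purely as a sketch of the kind of information those extensions would
carry (the names below are hypothetical and do not correspond to any
existing Wayland protocol):

/* Hypothetical per-surface colorimetry tags; illustrative only. */
enum surface_primaries {
    PRIMARIES_BT709,
    PRIMARIES_DCI_P3,
    PRIMARIES_BT2020,
};

enum surface_transfer {
    TRANSFER_SRGB,
    TRANSFER_BT1886,
    TRANSFER_PQ,    /* SMPTE ST 2084 */
    TRANSFER_HLG,   /* ARIB STD-B67 / BT.2100 HLG */
};

struct surface_color_info {
    enum surface_primaries primaries;  /* gamut, tagged independently... */
    enum surface_transfer transfer;    /* ...of the transfer function */
    /* plus optional HDR metadata such as mastering display and MaxCLL */
};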

> 3. If you're compositing in linear light, then you most likely want to
>be using at least 16-bit per channel floating point buffers, with 1.0
>mapping to "SDR white", and HDR values being treated as above 1.0.
>
>This is also a good color space to use for ingesting buffers, since
>it allows treating SDR and HDR inputs "identically", but extreme
>caution must be applied due to the fact that with floating point
>buffers, we're left at the mercy of what the client wants to put into
>them (10^20? NaN? Negative values?). Extra metadata must still be
>communicated between the client and the compositor to ensure both
>sides agree on the signal range of the floating point buffer
>contents.
>
> 4. Applications need a way to bypass the color pipeline in the
>compositor, i.e. applications need a way to tag their buffers as
>"this buffer is in display N's native (SDR|HDR) color space". This of
>course only makes sense if applications both have a way of knowing
>what display N's native SDR/HDR color space is, as well as which
>display N they're being displayed (more) on. Such buffers should be
>preserved as much as possible end-to-end, ideally being just directly
>scanned out as-is.
>

The compositor has a good enough view of the system state: it considers
all the buffers from all the applications together and comes up with a
target colorspace and HDR metadata. This means that applications need
not bypass, or even be concerned about, the output colorspace.
Applications should just send their buffers' original colorspace and
metadata and trust that the compositor will take care of the rest. :)
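
To make point 3 above concrete, here is a hedged C sketch of the ingest
convention we have in mind for a linear-light float compositing buffer,
where 1.0 is SDR white and HDR values lie above it; the clamp ceiling and
the policy for non-finite or negative client values are assumptions, not
Weston code:

#include <math.h>

/* Sanitize a client-provided sample before writing it into a linear-light
 * float compositing buffer (1.0 == SDR white, HDR values above 1.0). */
static float ingest_linear_sample(float v, float max_scene_value)
{
    if (!isfinite(v) || v < 0.0f)   /* NaN, +/-Inf, negative input */
        return 0.0f;
    if (v > max_scene_value)        /* e.g. derived from the buffer's HDR metadata */
        return max_scene_value;
    return v;
}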

> 5. Implementing a "good" HDR-to-SDR tone mapping operator; and even the
>question of whether to use the display's HDR or SDR mode, requires
>knowledge of what brightness range your composited buffer contains.
>Crucially, I think applications should be allowed to tag their
>buffers with the brightest value that they "can" contain. If they
>fail to do so, we should assume the highest possible value permitted
>by the transfer function specified (e.g. 10,000 nits for PQ). Putting
>this metadata into the protocol early would allow us to explore
>better tone mapping functions later on.
>
> Some final words of advice,
>
> 1. The protocol suggestions for color management in Wayland have all
>seemed terribly over-engineered compared to the problem they are
>trying to solve. I have had some short discussions with Link Mauve on
>the topic of how to design a protocol that's as simple as possible
>while still fulfilling its purpose, and we started drafting our own
>protocol for this, but it's sitting in a WIP state somewhere.
>
> 2. I see that Graeme Gill has posted a bit in at least some of these
>threads. I recommend listening to his advice as much as possible.
>
> On Fri, 08 Feb 2019 22:01:49 +0530, Harish Krupo  
> wrote:
>> Hi Vittorio,
>> 
>> Vittorio Giovara  writes:
>> 
>> > On Fri, Feb 8, 2019 at 3:22 AM Harish Krupo 
>> > wrote:
>> >
>> >> Hello,
>> >>
>> >> We are in the process of implementing HDR rendering support in the
>> >> Weston display compositor [1] (HDR discussion here [2]). When HDR
>> >> and SDR surfaces like a video buffer and a subtitle buffer are presented
>> >> together, the composition would take place as follows:
>> >> - If the display does not support HDR metadata:
>> >>   in-coming HDR surfaces would be tone mapped using opengl to SDR and
>> >>   blended with the other SDR surfaces. We are currently using the Hable
>> >>   operator for tone mapping.
>> >> - If the display supports setting HDR metadata:
>> >>   SDR surfaces would be tone mapped to HDR and blended with HDR surfaces.
>> >>
>> >> The literature available for SDR->HDR tone mapping varies from simple
>> >> linear expansion of lum