Phil Rhodes <phil_rhodes <at> rocketmail.com> writes:

> 
> > It should be added that both AVC and HEVC support 12-bit
> > encoding...
> It's a bit more complicated than just a twelve-bit video signal. Vision
> uses the 10-bit image as a base and then trims it using the additional
> data stream, taking into account the capabilities of the display.
> Correct display would require knowledge of the capabilities of the
> display hardware itself, not to mention significant reverse-engineering
> of Dolby's engineering approach. Most Vision decoders will be built
> into displays and therefore have that knowledge.
> Decoding of Vision to the point where it could be transcoded into other
> HDR formats is possibly more feasible based on the standards for those
> formats.
> P

I agree. A Dolby Vision decoder built into FFmpeg would somehow need to
auto-detect the capabilities of the display it is driving.
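
To make the problem concrete, here is a purely illustrative C sketch of
what "trimming the base image for the display" might look like. The
structure, names and the soft-knee curve are all invented for the
example; this is not Dolby's actual (proprietary) algorithm and not any
existing FFmpeg API.

/* Purely illustrative -- not Dolby's reconstruction or trim math, which
 * would need the reverse-engineering Phil mentions.  It only shows the
 * shape of the problem: a per-frame trim record from the enhancement
 * stream plus the display's peak brightness decide how the decoded
 * signal is remapped.  All names here are made up. */

#include <stdio.h>

/* Hypothetical per-frame trim record carried alongside the base layer. */
typedef struct {
    double content_peak_nits;   /* e.g. 4000 nit mastering peak */
    double trim_gain;           /* simple multiplicative trim   */
} dv_trim_meta;

/* Map one linear-light sample (in nits) to what the display can show,
 * using a trivial soft-knee highlight compression as a stand-in for the
 * vendor-defined curves. */
static double map_to_display(double sample_nits, const dv_trim_meta *meta,
                             double display_peak_nits)
{
    double v = sample_nits * meta->trim_gain;

    if (meta->content_peak_nits <= display_peak_nits)
        return v;                       /* display can show it all */

    double knee = 0.75 * display_peak_nits;
    if (v <= knee)
        return v;

    /* Compress everything above the knee into the remaining headroom. */
    double excess = (v - knee) / (meta->content_peak_nits - knee);
    return knee + excess * (display_peak_nits - knee);
}

int main(void)
{
    dv_trim_meta meta = { 4000.0, 1.0 };

    /* Same decoded sample, two different displays. */
    printf("1000 nit sample on a 600 nit panel : %.1f nits\n",
           map_to_display(1000.0, &meta, 600.0));
    printf("1000 nit sample on a 4000 nit panel: %.1f nits\n",
           map_to_display(1000.0, &meta, 4000.0));
    return 0;
}

The point of the toy example is only that the same decoded frame ends up
with different pixel values depending on what the decoder knows about the
attached panel, which is exactly the knowledge a TV has and FFmpeg does not.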

It would also have to be tested for conformance against the Dolby Vision
decoders built into TVs.

Andrew Sun