On 24/03/14 10:22, Rémi Denis-Courmont wrote:
> On Mon, 24 Mar 2014 10:09:45 +0100, Luca Barbato <[email protected]>
>> In case of VDA and VT it could be anything from UYVY422
> 
> Last I checked, libavcodec did not support hardware acceleration for
> anything other than 4:2:0 - just looking at the pixel format lists...
> Anyway, is there even a *real* hardware slice-level decoder that supports
> 4:2:2? As far as I know, hardware video acceleration frameworks support
> 4:2:2 for post-processing and rendering, not (really yet) for decoding.

I'm playing with abstractions that use the bitstream and not the slice.

>> to YUV420Planar to ...
> 
> There are several cases whereby the application needs to know the
> underlying type of a surface:
> 1) A function in the acceleration framework requests it as a parameter.
> 2) The application wants to copy the surface content back to main memory.
> 3) The application wants to show the chroma sampling as meta-data.
> 4) The application exports the surface via interoperability (e.g. to
> OpenGL).
> 
> Hence, it seems more reasonable to allocate separate pixel formats for
> separate samplings and depths... if the situation ever arises.

The application will know by querying the opaque data through the means
provided by the underlying protocol/whatnot.

>>> Presumably new pixel formats would be required to
>>> unambiguously convey another chroma sampling or depth to the
>>> get_format() implementation.
>>
>> Nope, opaque is opaque: you know nothing of what's inside, and the
>> opaque must be able to describe itself.
> 
> Sorry but that is crap. The application typically knows (and needs to
> know) the surface size. And sometimes it also needs to know the colour
> space, chroma sampling and component depth. It is one thing that the
> surface content is not directly accessible. It is a completely different
> thing to know nothing of the surface properties.

The bitstream/slice is fed to a black box that outputs opaque data in an
unknown format.

The application can then render/manipulate the opaque data by the means
provided by the black box, outside Libav.

hwaccel2 will optionally provide generic abstractions to manipulate the
opaque data (render it, probe it for the format, and so on).
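To make "the opaque must be able to describe itself" concrete, here is a
hypothetical sketch of such a probing abstraction. hwaccel2 as discussed
here is only a proposal, so every name below is invented for
illustration: the black box hands back an opaque surface plus a vtable,
and a generic probe fills in the properties the application may need
(size, sampling, depth) without ever exposing the surface contents.

```c
#include <string.h>

/* All names hypothetical; nothing here exists in Libav. */
typedef struct opaque_surface opaque_surface;

typedef struct {
    int width, height;
    const char *sampling;   /* e.g. "yuv420", "yuv422" */
    int bit_depth;
} opaque_desc;

typedef struct {
    /* blackbox-provided probe: fills in the surface properties */
    void (*describe)(const opaque_surface *s, opaque_desc *d);
} opaque_ops;

struct opaque_surface {
    const opaque_ops *ops;  /* vtable supplied by the black box */
    void *priv;             /* driver-private handle, never dereferenced here */
};

/* Generic abstraction: ask the surface to describe itself. */
static void opaque_probe(const opaque_surface *s, opaque_desc *d)
{
    s->ops->describe(s, d);
}

/* --- toy blackbox implementation, for the sake of the sketch --- */
static void toy_describe(const opaque_surface *s, opaque_desc *d)
{
    (void)s;
    d->width = 1920; d->height = 1080;
    d->sampling = "yuv420"; d->bit_depth = 8;
}
static const opaque_ops toy_ops = { toy_describe };
```

This is exactly the shape of answer to Rémi's cases 1-4: the application
gets the surface size, sampling and depth from the surface itself,
without any new pixel format per sampling/depth combination.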

All we do currently is wrap the opaque data in an AVFrame and make sure
it behaves as expected (e.g. refcounting works across seeks and such).
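The refcounting contract is the subtle part, so here is a toy model of
it (not the AVFrame API; the real calls are av_frame_ref()/
av_frame_unref()): the opaque surface is wrapped with a release callback
supplied by the black box, and that callback must fire exactly once when
the last reference goes away, no matter how many references a seek has
to flush.

```c
#include <stdlib.h>

/* Toy refcounted wrapper around an opaque hardware surface.
 * Names are illustrative only. */
typedef struct {
    void (*release)(void *opaque);  /* blackbox-provided destructor */
    void *opaque;                   /* the hardware surface handle */
    int refcount;
} frame;

static frame *frame_wrap(void *opaque, void (*release)(void *))
{
    frame *f = malloc(sizeof(*f));
    f->release  = release;
    f->opaque   = opaque;
    f->refcount = 1;
    return f;
}

static frame *frame_ref(frame *f) { f->refcount++; return f; }

static void frame_unref(frame *f)
{
    if (--f->refcount == 0) {       /* last reference dropped: hand the */
        f->release(f->opaque);      /* surface back to the black box */
        free(f);
    }
}

/* toy release counter so the once-and-only-once property is observable */
static int released;
static void toy_release(void *opaque) { (void)opaque; released++; }
```

As long as every ref is balanced by an unref, the surface is returned to
the black box exactly once, which is all the wrapper has to guarantee.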

lu
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
