On 24/03/14 22:34, wm4 wrote:
> On Mon, 24 Mar 2014 22:25:35 +0100
> Luca Barbato <[email protected]> wrote:
>
>> On 24/03/14 22:09, Rémi Denis-Courmont wrote:
>>> On Monday, 24 March 2014, 21:53:35, Luca Barbato wrote:
>>>> First friendly warning: please tone down two notches.
>>>
>>> I suggest you check your facts before you flame me, then taunt me.
>>
>> Nobody is flaming you; it's just that this kind of tone is not welcome.
>>
>>>>> If you keep a single pixel format regardless, the only way for the
>>>>> application to know the correct parameters is to look at the last pixel
>>>>> format (i.e. software) in the list and second-guess libavcodec that this
>>>>> indicates the correct chroma type and pixel depth. This would be
>>>>> vomitively ugly, and it would forever lock libavcodec into offering only
>>>>> a single software output pixel format ever. Is that really what you want?
>>>>
>>>> It is actually what normally happens, as explained by Hendrik, Anton and
>>>> wm4.
>>>
>>> This is patently false. Look at get_pixel_format() from h264.c for yourself!
>>
>> The code in avconv_vdpau and mpv does the surface mapping based on the
>> global context (set by the very same application).
>>
>> VDA does the same.
>
> I think the problem at hand is that there's no way for the hwaccel API
> to signal to the application which surface format should be used.
>
Mostly because it was assumed it doesn't need to know at all. You configure the decoder and you feed it the surfaces. All the hwaccel was doing was feeding the decoder the slices and handing the opaque frames back to you.

lu

_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
