On Mon, 24 Mar 2014 22:40:30 +0200
Rémi Denis-Courmont <[email protected]> wrote:

> Le lundi 24 mars 2014, 21:24:47 Hendrik Leppkes a écrit :
> > > The point is that currently those surface types are used by 4:2:0 surfaces
> > > only. If support for 4:2:2 or Hi10p or whatever gets added to libavcodec,
> > > the newly used surface types should be assigned new pixel formats,
> > > consistently with the underlying acceleration APIs as well as with
> > > libavcodec software pixel formats.
> > > 
> > > It would suck for applications to have to second-guess libavcodec.
> > 
> > IMHO, it would suck for applications to need yet another mapping table
> > to map surface types to pixel formats, when one pixel format that says
> > "there is a hardware surface in there" is all that's needed.
> 
> I don't think it would suck. But most importantly, what you propose would 
> *break* existing applications, both at source and binary level. This is not 
> acceptable.

How? You create the decoder and surfaces. It's entirely under your control.

> > For most HWAccels, avcodec just doesn't need to know, nor should it know.
> > 
> > If I take DXVA2 as an example, since that's what I know best, the
> > application negotiates with the GPU driver which format is used, which
> > can in theory be anything. In practice, most GPUs support NV12, some
> > support YV12 additionally.
> 
> DXVA2 negotiates the format while instantiating the decoder, but I somewhat 
> doubt that the drivers will actually behave correctly if the application 
> does not match the chroma sampling manually. In practice, applications just 
> work because so far, device drivers have only ever exposed 4:2:0 8-bit, so 
> there is no way to hit a mismatch. But I would hardly be surprised if DXVA2 
> applications started breaking if/when 4:2:2- or Hi10p-capable hardware comes 
> out.
> 
> > avcodec does not care, nor should it care, which format I negotiated
> > for with the driver. It just passes the opaque surfaces around, from
> > the user app to the hardware decoder, and back again.
> 
> avcodec *has* to care because the application needs to know this. This is 
> needed for VA-API's vaCreateSurfaces() and VDPAU's VdpVideoSurfaceCreate() 
> to work, period.
> 
> Not only that, but it obviously affects semantics when dumping the frame to 
> main memory, or when exporting it to 
> OpenGL/EGL/OpenCL/CUDA/NVENC/QSV/whatever via interoperability functions.

Then just carry the real format as part of your own frame structure
(your equivalent of AVFrame), and negotiate it using your own pixel
formats. I don't think VLC uses the libav ones natively. So the status
quo is that you can do all these things just fine.
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
