On Mon, 24 Mar 2014 23:09:21 +0100, wm4 <[email protected]> wrote:
> Not sure about this. Different decoders might support different pixel
> formats for the same thing.

I never said that the pixel formats should expose the internal packing.

> For example, it seems VDA always prefers packed 422, even for 420.

I would say that is irrelevant. If VDA always uses 422, then you might as
well define AV_PIX_FMT_VDA to mean 422. If VDA completely hides the chroma
sampling from the application, then I agree that a single AV_PIX_FMT_VDA
for all uses is good. However, the presence of cv_pix_fmt_type in the AV
VDA context makes me suspect that VDA does in fact care about the chroma
context makes me suspect that VDA does in fact care about the chroma
sampling just like the other hardware accelerators do.

In any case, VA and VDPAU require the application to know the chroma type
(and, so far, they assume 8 bits per component) for the purposes of
surface allocation, interoperability and copy to memory. Hence libavcodec
needs to supply that information. The pixel format seems like the obvious
way to do so. And even if another new mechanism were defined, the
_current_ VA and VDPAU pixel formats would not be compatible with it.

On the other hand, VA and VDPAU do not require the application to know or
care about the actual packing (say NV12 vs YV12, YUYV vs UYVY), so I agree
that there is no point in exposing the packing through the pixel format,
or through any other means. I never asked for _that_.

DxVA2 is another matter altogether as it does not seem to define a fixed
set of surface types.

> Likewise, there could be decoders which
> support hi10p decoding, but pretend the surfaces are normal 420 8 bit
> surfaces.

All I'm saying is that libavcodec should match libva and libvdpau (and
consequently VLC). If those libraries reuse the currently implicitly
8-bit chroma types for 10 bits, then I am totally fine with reusing the
existing libav pixel formats accordingly.

In practice, this is extremely unlikely though, because the memory
allocation would be too small to fit the extra bits.

> (There was something about adding 10 bit support for vdpau
> some months ago. I'm not sure how that was supposed to work or if it
> was just a misunderstanding, but it looked like it was pretending to
> return 8 bit surfaces for 10 bit decoding.)

If I understand correctly, that would have used new video surface chroma
types. But it never came to fruition, as the Gallium guys never posted any
patches of any sort. VDPAU already defines separate 8-bit and 10-bit types
for RGB, that is to say output surfaces in VDPAU parlance.

> Is it really that straight-forward? Maybe separating this would be
> better. We don't store the codec profile in the pixel format either.

The codec profile is stored in the AVCodecContext. I am fine with storing
the sampling there, but that does not seem to fit well with the
get_format() pattern.

> And when we did that, it was ugly and we eventually removed it (like
> AV_PIX_FMT_VDPAU_H264).

Indeed, the input codec is irrelevant to the VDPAU video surface. But the
chroma type is very relevant to the VDPAU video surface.

> IMO it's best if AV_PIX_FMT_VDPAU means that it's a VdpVideoSurface,
> instead of attempting to encode more into it.

The point is, there is not one type of VDPAU video surface. There are
three types of them, of which two are actively used. To preserve
compatibility, a new pixel format would be required to support 422 anyway.

With that in mind, is it better to have simply:

   AV_PIX_FMT_VDPAU_420,
#define AV_PIX_FMT_VDPAU AV_PIX_FMT_VDPAU_420
   /*...*/
   AV_PIX_FMT_VDPAU_422,

or

   AV_PIX_FMT_VDPAU,
   /* ... */
   AV_PIX_FMT_VDPAU2,
   /* ... */

VdpChromaType av_vdpau2_get_chroma_type(const AVPixelFormat *pix_fmts)
{
    /* Lot of code here */
}

Either way, you end up with two pixel formats. I think the first option
is cleaner, saner and simpler.

-- 
Rémi Denis-Courmont
Sent from my collocated server
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel