On 24/01/16 14:52, wm4 wrote:
> On Sun, 24 Jan 2016 14:20:54 +0000
> Mark Thompson <[email protected]> wrote:
>
> ...
>
>>>> +
>>>> + new_frame = av_frame_alloc();
>>>> + if(!new_frame)
>>>> + return AVERROR(ENOMEM);
>>>> +
>>>> + av_vaapi_surface_pool_get(&ctx->pool, new_frame);
>>>> +
>>>> + av_log(ctx, AV_LOG_DEBUG, "Decoder given reference to surface %#x.\n",
>>>> + (VASurfaceID)new_frame->data[3]);
>>>> +
>>>> + av_frame_copy_props(new_frame, frame);
>>>> + av_frame_unref(frame);
>>>> + av_frame_move_ref(frame, new_frame);
>>>
>>> What are these acrobatics for? Why not use frame directly?
>>
>> The AVFrame supplied by the decoder already has a lot of the metadata filled
>> in, but the surface pool needs to make a new reference to its own AVFrame.
>>
>> Without this, you get display order == decode order (that was fun to track
>> down).
>
> I still don't understand - AVFrames aren't referenced, AVBufferRefs
> are.
av_vaapi_surface_pool_get() wants a whole new AVFrame so that it can call
av_frame_ref() to make that frame's buffers new references to the AVFrames it
holds internally in its pool.

I could change it to fill in only buf[0], buf[1] and data[3] (and more?) so
that it could be called directly on the AVFrame given here, but that seems
like it would introduce inconvenient subtlety for other users.
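For clarity, this is roughly the shape of what the pool get does (sketch with
dummy types only, not the real implementation - see the patch for that):

#include "libavutil/frame.h"

// Illustrative only: stands in for the real pool struct from the patch.
typedef struct DummyPool {
    AVFrame *frames[16];
    int      nb_frames;
    int      next;
} DummyPool;

static int pool_get_sketch(DummyPool *pool, AVFrame *dst)
{
    AVFrame *src = pool->frames[pool->next++ % pool->nb_frames];

    // av_frame_ref() makes dst->buf[]/data[] new references to src's buffers,
    // but it also copies src's frame properties, which would clobber the
    // decoder's metadata if it were called on the decoder's frame directly.
    return av_frame_ref(dst, src);
}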
>> ...
>>>> + // If there isn't an exact match, we will choose the last (highest)
>>>> + // profile in the mapping table. This is unfortunate because it
>>>> + // could well be wrong, but it improves compatibility because streams
>>>> + // and decoders do not necessarily conform to the requirements.
>>>> + // (Without this, a lot of streams in FATE are discarded when they
>>>> + // could have worked - H.264 extended profile which actually decodes
>>>> + // with main, for example.)
>>>
>>> I still think this should not be done by default, and instead a user
>>> option should enable it.
>>
>> I guess that works if the error message suggests the option they could try.
>> "-lax-codec-profile-constraints"? Other hwaccels might want it too.
>
> Yes, something like this. At least vdpau already checks it pretty
> strictly (even verifies the level).
Ok, I'll look into it.
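Something along these lines is what I have in mind (sketch only - the option
name and context layout are placeholders, nothing here is final):

#include <stddef.h>
#include "libavutil/opt.h"

typedef struct VAAPIDecodeContext {
    const AVClass *class;
    int lax_profile_check;   // 0 = strict matching (default), 1 = best effort
} VAAPIDecodeContext;

#define OFFSET(x) offsetof(VAAPIDecodeContext, x)
static const AVOption vaapi_decode_options[] = {
    { "lax_profile_check",
      "allow decoding with a codec profile the hardware does not advertise",
      OFFSET(lax_profile_check), AV_OPT_TYPE_INT,
      { .i64 = 0 }, 0, 1, AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM },
    { NULL },
};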
>> ...
>>>> +
>>>> + // Decide on the internal chroma format.
>>>> + {
>>>> + VAConfigAttrib attr;
>>>> +
>>>> + // Currently the software only supports YUV420, so just make sure
>>>> + // that the hardware we have does too.
>>>> +
>>>> + memset(&attr, 0, sizeof(attr));
>>>> + attr.type = VAConfigAttribRTFormat;
>>>> + vas = vaGetConfigAttributes(ctx->hardware_context->display, config->profile,
>>>> + VAEntrypointVLD, &attr, 1);
>>>> + if(vas != VA_STATUS_SUCCESS) {
>>>> + av_log(ctx, AV_LOG_ERROR, "Failed to fetch config attributes: "
>>>> + "%d (%s).\n", vas, vaErrorStr(vas));
>>>> + return AVERROR(EINVAL);
>>>> + }
>>>> + if(!(attr.value & VA_RT_FORMAT_YUV420)) {
>>>> + av_log(ctx, AV_LOG_ERROR, "Hardware does not support required "
>>>> + "chroma format (%#x).\n", attr.value);
>>>> + return AVERROR(EINVAL);
>>>> + }
>>>> +
>>>> + output->rt_format = VA_RT_FORMAT_YUV420;
>>>
>>> I'm confused, shouldn't this set it to something retrieved by the
>>> function above?
>>
>> Read again - it's just a check, the code errors out on all other
>> possibilities.
>>
>
> Oh, ok.
>
>> This section is only present because more could be added. Currently the
>> only other possibility in the Intel driver is greyscale for H.264 decoding
>> only, so it didn't seem worth it now.
>>
>
> What about conversion to RGB? I think 10 bit will also have a different
> RT format, not sure.
This only decides the format of the surfaces which the decoder actually
operates on, and that is never going to be RGB.

10-bit YUV does have a different RT format - implementing that will need
support here to select VA_RT_FORMAT_YUV420_10BIT.
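For reference, the extension here would look something like this (sketch only
- the exact name of the 10-bit constant depends on the libva version, hence
the guard, so treat it as a placeholder):

#include <va/va.h>

static unsigned int rt_format_for_depth(int bit_depth)
{
    switch (bit_depth) {
    case 8:
        return VA_RT_FORMAT_YUV420;
#ifdef VA_RT_FORMAT_YUV420_10BPP
    case 10:
        return VA_RT_FORMAT_YUV420_10BPP;
#endif
    default:
        return 0;   // caller treats this as "unsupported"
    }
}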
>>>> + }
>>>> +
>>>> + // Decide on the image format.
>>>> + if(avctx->pix_fmt == AV_PIX_FMT_VAAPI) {
>>>> + // We are going to be passing through a VAAPI surface directly:
>>>> + // they will stay as whatever opaque internal format for that time,
>>>> + // and we never need to make VAImages from them.
>>>> +
>>>> + av_log(ctx, AV_LOG_DEBUG, "Using VAAPI opaque output format.\n");
>>>> +
>>>> + output->av_format = AV_PIX_FMT_VAAPI;
>>>> + memset(&output->image_format, 0, sizeof(output->image_format));
>>>> +
>>>> + } else {
>>>> + int image_format_count;
>>>> + VAImageFormat *image_format_list;
>>>> + int pix_fmt;
>>>> +
>>>> + // We might want to force a change to the output format here
>>>> + // if we are intending to use VADeriveImage?
>>>> +
>>>> + image_format_count = vaMaxNumImageFormats(ctx->hardware_context->display);
>>>> + image_format_list = av_calloc(image_format_count,
>>>> + sizeof(VAImageFormat));
>>>> + if(!image_format_list)
>>>> + return AVERROR(ENOMEM);
>>>> +
>>>> + vas = vaQueryImageFormats(ctx->hardware_context->display, image_format_list,
>>>> + &image_format_count);
>>>> + if(vas != VA_STATUS_SUCCESS) {
>>>> + av_log(ctx, AV_LOG_ERROR, "Failed to query image formats: "
>>>> + "%d (%s).\n", vas, vaErrorStr(vas));
>>>> + return AVERROR(EINVAL);
>>>> + }
>>>> +
>>>> + for(i = 0; i < image_format_count; i++) {
>>>> + pix_fmt = vaapi_get_pix_fmt(image_format_list[i].fourcc);
>>>> + if(pix_fmt == AV_PIX_FMT_NONE)
>>>> + continue;
>>>> + if(pix_fmt == avctx->pix_fmt)
>>>> + break;
>>>> + }
>>>> + if(i < image_format_count) {
>>>> + av_log(ctx, AV_LOG_DEBUG, "Using desired output format %s "
>>>> + "(%#x).\n", av_get_pix_fmt_name(pix_fmt),
>>>> + image_format_list[i].fourcc);
>>>> + } else {
>>>> + for(i = 0; i < image_format_count; i++) {
>>>> + pix_fmt = vaapi_get_pix_fmt(image_format_list[i].fourcc);
>>>> + if(pix_fmt != AV_PIX_FMT_NONE)
>>>> + break;
>>>> + }
>>>> + if(i >= image_format_count) {
>>>> + av_log(ctx, AV_LOG_ERROR, "No supported output format found.\n");
>>>> + av_free(image_format_list);
>>>> + return AVERROR(EINVAL);
>>>> + }
>>>> + av_log(ctx, AV_LOG_DEBUG, "Using alternate output format %s "
>>>> + "(%#x).\n", av_get_pix_fmt_name(pix_fmt),
>>>> + image_format_list[i].fourcc);
>>>> + }
>>>> +
>>>> + output->av_format = pix_fmt;
>>>> + memcpy(&output->image_format, &image_format_list[i],
>>>> + sizeof(VAImageFormat));
>>>> +
>>>> + av_free(image_format_list);
>>>> + }
>>>
>>> Seems to pick a format randomly (from the unordered image_format_list).
>>> Maybe it'd just be better to force NV12, and if that doesn't work,
>>> retry with yuv420p, or so.
>>
>> It picks the format you asked for (avctx->pix_fmt), or if not present takes
>> the first usable one on the list. In practice you ask for YUV420P and
>> probably get it, with NV12 otherwise.
>>
>
> The first from the list returned by libva is still arbitrary.
>
> What's the format "you asked for"? Does this refer to the -pix_fmt
> command line option, or whatever it was?
>
I meant the one supplied as AVCodecContext.pix_fmt. Looking again, I'm not
sure that is actually meaningful here.

It is also the cause of the problem with the FATE test
h264-reinit-large_420_8-to-small_420_8, so maybe the fixed-list suggestion is
actually the way to go. I'll think further on it.
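If we do go with a fixed list, the selection would reduce to something like
this (sketch only - the preference order is up for discussion):

#include <va/va.h>

// Scan an ordered preference list (NV12 first, YV12 as a planar 4:2:0
// fallback) and return the index of the first entry the driver offers, or -1
// if none of them are available.
static int pick_preferred_format(const VAImageFormat *list, int count)
{
    static const unsigned int preference[] = { VA_FOURCC_NV12, VA_FOURCC_YV12 };
    int i, j;

    for (j = 0; j < (int)(sizeof(preference) / sizeof(preference[0])); j++) {
        for (i = 0; i < count; i++) {
            if (list[i].fourcc == preference[j])
                return i;
        }
    }
    return -1;   // nothing usable: error out as before
}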
> ...
Thanks,
- Mark
_______________________________________________
ffmpeg-devel mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel