>>>> Hello,
>>>> I've been experimenting with encoding on a CUDA card, and I noticed 
>>>> that setting the pict_type member of the AVFrame structure to 
>>>> AV_PICTURE_TYPE_I does not trigger it to encode the next frame as an IDR 
>>>> frame the way libx264 does. I looked at the NVIDIA docs, and it appears 
>>>> there is a mechanism for this behavior.
>>> Is that behaviour documented anywhere?
>> I found documentation on the nvidia site 
>> (https://developer.nvidia.com/nvencode), here is the section of the document 
>> that I referred to in my previous post...
>I'm talking about ffmpeg AVFrames.
>I was not aware libx264 even did that, and I don't see any documentation that 
>it's possible to force an IDR/I frame that way.
>It should be trivial to add the same behaviour to nvenc.

It's not documented completely, but I found out about it when I looked at the 
libx264 options output by the command "ffmpeg -h encoder=libx264", which lists 
a -forced_idr option. After seeing that, I looked into libx264.c and found the 
place where it switches on frame->pict_type and sets the corresponding 
X264Context members.

Here is the code from libx264.c (the X264_frame function). It's indispensable 
when dealing with live RTP streams and recovering from packet loss.

        switch (frame->pict_type) {
        case AV_PICTURE_TYPE_I:
            x4->pic.i_type = x4->forced_idr >= 0 ? X264_TYPE_IDR
                                                 : X264_TYPE_KEYFRAME;
            break;
        case AV_PICTURE_TYPE_P:
            x4->pic.i_type = X264_TYPE_P;
            break;
        case AV_PICTURE_TYPE_B:
            x4->pic.i_type = X264_TYPE_B;
            break;
        default:
            x4->pic.i_type = X264_TYPE_AUTO;
            break;
        }
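To illustrate the mapping in isolation, here is a minimal, self-contained sketch of the same logic. The enums are illustrative stand-ins for FFmpeg's AVPictureType and x264's frame-type constants (the values are not the real library definitions), so the forced_idr semantics can be shown without the FFmpeg headers:

```c
#include <assert.h>

/* Stand-in enums mirroring a subset of FFmpeg's AVPictureType and
 * x264's frame-type constants; illustrative values only. */
typedef enum { PIC_TYPE_NONE, PIC_TYPE_I, PIC_TYPE_P, PIC_TYPE_B } PicType;
typedef enum { T_AUTO, T_IDR, T_KEYFRAME, T_P, T_B } X264Type;

/* Same mapping as the libx264.c switch above: a requested I frame
 * becomes a full IDR when forced_idr is set, otherwise a plain
 * (recovery-point) keyframe; P and B pass through; anything else
 * leaves the decision to the encoder. */
static X264Type map_pict_type(PicType pt, int forced_idr)
{
    switch (pt) {
    case PIC_TYPE_I: return forced_idr >= 0 ? T_IDR : T_KEYFRAME;
    case PIC_TYPE_P: return T_P;
    case PIC_TYPE_B: return T_B;
    default:         return T_AUTO;
    }
}
```

An nvenc patch would presumably do the analogous thing: switch on frame->pict_type and set the corresponding force-IDR picture parameter in the NVENC picture struct.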

I'm trying to put together a patch, but I'm struggling a bit getting set up to 
submit it to the ffmpeg-devel mailing list.
