The default value of CuvidContext::nb_surfaces was reduced from 25 to 5, i.e. (CUVID_MAX_DISPLAY_DELAY + 1), in commit 402d98c9d467dff6931d906ebb732b9a00334e0b.
In cuvid_is_buffer_full(), delay can be 2 * CUVID_MAX_DISPLAY_DELAY with
double-rate deinterlacing. ctx->nb_surfaces defaults to
CUVID_DEFAULT_NUM_SURFACES = (CUVID_MAX_DISPLAY_DELAY + 1), in which case
cuvid_is_buffer_full() always returns true and cuvid_output_frame() never
reads any data, since it never calls ff_decode_get_packet().

I think part of the problem may be that cuvid_is_buffer_full() does not
know how many frames are actually in the driver's queue and assumes it is
the maximum, even if none have yet been added.

This was preventing any frames from being decoded using NVDEC with MythTV
for some streams. See https://github.com/MythTV/mythtv/issues/1039
---
 libavcodec/cuviddec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavcodec/cuviddec.c b/libavcodec/cuviddec.c
index 67076a1752..05dcafab6e 100644
--- a/libavcodec/cuviddec.c
+++ b/libavcodec/cuviddec.c
@@ -120,7 +120,7 @@ typedef struct CuvidParsedFrame
 #define CUVID_MAX_DISPLAY_DELAY (4)
 
 // Actual pool size will be determined by parser.
-#define CUVID_DEFAULT_NUM_SURFACES (CUVID_MAX_DISPLAY_DELAY + 1)
+#define CUVID_DEFAULT_NUM_SURFACES ((2 * CUVID_MAX_DISPLAY_DELAY) + 1)
 
 static int CUDAAPI cuvid_handle_video_sequence(void *opaque, CUVIDEOFORMAT* format)
 {
-- 
2.43.0