On 8/24/2019 3:18 PM, Michael Niedermayer wrote:
> Testcase: 14843/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_NUV_fuzzer-5661969614372864
> Testcase: 16257/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_NUV_fuzzer-5769175464673280
>
> Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> Signed-off-by: Michael Niedermayer <mich...@niedermayer.cc>
> ---
>  libavcodec/nuv.c          | 25 +++++++++++++++++++++----
>  tests/ref/fate/nuv-rtjpeg |  1 -
>  2 files changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/libavcodec/nuv.c b/libavcodec/nuv.c
> index 75b14bce5b..39479d2389 100644
> --- a/libavcodec/nuv.c
> +++ b/libavcodec/nuv.c
> @@ -42,6 +42,7 @@ typedef struct NuvContext {
>      unsigned char *decomp_buf;
>      uint32_t lq[64], cq[64];
>      RTJpegContext rtj;
> +    AVPacket flush_pkt;
>  } NuvContext;
>
>  static const uint8_t fallback_lquant[] = {
> @@ -172,6 +173,20 @@ static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
>          NUV_COPY_LAST  = 'L'
>      } comptype;
>
> +    if (!avpkt->data) {
> +        if (avctx->internal->need_flush) {
> +            avctx->internal->need_flush = 0;
> +            ret = ff_setup_buffered_frame_for_return(avctx, data, c->pic, &c->flush_pkt);
> +            if (ret < 0)
> +                return ret;
> +            *got_frame = 1;
> +        }
> +        return 0;
> +    }
> +    c->flush_pkt = *avpkt;
> +    c->pic->pkt_dts = c->flush_pkt.dts;
> +
> +
>      if (buf_size < 12) {
>          av_log(avctx, AV_LOG_ERROR, "coded frame too small\n");
>          return AVERROR_INVALIDDATA;
> @@ -204,8 +219,8 @@ static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
>          }
>          break;
>      case NUV_COPY_LAST:
> -        keyframe = 0;
> -        break;
> +        avctx->internal->need_flush = 1;
> +        return buf_size;
>      default:
>          keyframe = 1;
>          break;
> @@ -313,6 +328,7 @@ retry:
>      if ((result = av_frame_ref(picture, c->pic)) < 0)
>          return result;
>
> +    avctx->internal->need_flush = 0;
>      *got_frame = 1;
>      return orig_size;
>  }
> @@ -364,6 +380,7 @@ AVCodec ff_nuv_decoder = {
>      .init           = decode_init,
>      .close          = decode_end,
>      .decode         = decode_frame,
> -    .capabilities   = AV_CODEC_CAP_DR1,
> -    .caps_internal  = FF_CODEC_CAP_INIT_CLEANUP,
> +    .capabilities   = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY,
> +    .caps_internal  = FF_CODEC_CAP_SETS_PKT_DTS | FF_CODEC_CAP_SETS_PKT_POS |
> +                      FF_CODEC_CAP_INIT_CLEANUP,
>  };
> diff --git a/tests/ref/fate/nuv-rtjpeg b/tests/ref/fate/nuv-rtjpeg
> index b6f3b080dc..0914b985ec 100644
> --- a/tests/ref/fate/nuv-rtjpeg
> +++ b/tests/ref/fate/nuv-rtjpeg
> @@ -6,7 +6,6 @@
>  0,        118,        118,        0,   460800, 0x54aedafe
>  0,        152,        152,        0,   460800, 0xb7aa8b56
>  0,        177,        177,        0,   460800, 0x283ea3b5
> -0,        202,        202,        0,   460800, 0x283ea3b5
I haven't been following these cfr -> vfr patches too closely, but why remove the duplicate frames (thus changing the expected, compliant behavior) instead of ensuring the duplicates are emitted with no performance penalty, by reusing the buffer reference from the previous frame?

> 0,        235,        235,        0,   460800, 0x10e577de
> 0,        269,        269,        0,   460800, 0x4e091ee2
> 0,        302,        302,        0,   460800, 0x2ea88828