Re: [FFmpeg-devel] [PATCH 1/2] Revert "avcodec/qtrle: Do not output duplicated frames on insufficient input"
> > If you don't return 3 fields you break the normative specification. This
> > speaks about the "output of the decoding process", not how to interpret
> > the output.
> >
> > I bring MPEG2 up here because we don't do what the normative spec says,
> > because it doesn't make sense for us. It does make sense if you output on
> > an analogue interlaced PAL/NTSC screen. It is fundamentally the same as
> > the CFR case: on one side an interlaced display as factor, on the other an
> > output only capable of handling fixed-duration (CFR) frames.
> >
> > About CFR codecs, the cases this is about are generally input packets
> > that code the equivalent of "nothing changed". In frameworks that allow
> > only CFR, an implementation would produce a duplicated frame. In
> > frameworks that allow VFR, an implementation can omit the duplicated
> > frame.

I don't really care about qtrle here, but this is not comparable. If you are
a CFR output device, you *know* that you have to duplicate a field because
of the flag that's exported, and this duration is clearly defined. But with
the qtrle patch, you don't know the duration of the frame just returned.

Kieran

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] avfilter/af_atempo: Make ffplay display correct timestamps when seeking
On 5/8/19 1:13 AM, Paul B Mahol wrote:
> On 5/8/19, Pavel Koshevoy wrote:
>> NOTE: this is a refinement of the patch from Paul B Mahol.
>>
>> Offset all output timestamps by the first input timestamp.
>> ---
>>  libavfilter/af_atempo.c | 11 ++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/libavfilter/af_atempo.c b/libavfilter/af_atempo.c
>> index bfdad7d76b..688dac5464 100644
>> --- a/libavfilter/af_atempo.c
>> +++ b/libavfilter/af_atempo.c
>> @@ -103,6 +103,9 @@ typedef struct ATempoContext {
>>      // 1: output sample position
>>      int64_t position[2];
>>
>> +    // first input timestamp, all other timestamps are offset by this one
>> +    int64_t start_pts;
>> +
>>      // sample format:
>>      enum AVSampleFormat format;
>>
>> @@ -186,6 +189,7 @@ static void yae_clear(ATempoContext *atempo)
>>      atempo->nfrag = 0;
>>      atempo->state = YAE_LOAD_FRAGMENT;
>> +    atempo->start_pts = AV_NOPTS_VALUE;
>>
>>      atempo->position[0] = 0;
>>      atempo->position[1] = 0;
>>
>> @@ -1068,7 +1072,7 @@ static int push_samples(ATempoContext *atempo,
>>      atempo->dst_buffer->nb_samples = n_out;
>>
>>      // adjust the PTS:
>> -    atempo->dst_buffer->pts =
>> +    atempo->dst_buffer->pts = atempo->start_pts +
>>          av_rescale_q(atempo->nsamples_out,
>>                       (AVRational){ 1, outlink->sample_rate },
>>                       outlink->time_base);
>>
>> @@ -1097,6 +1101,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *src_buffer)
>>      const uint8_t *src = src_buffer->data[0];
>>      const uint8_t *src_end = src + n_in * atempo->stride;
>>
>> +    if (atempo->start_pts == AV_NOPTS_VALUE)
>> +        atempo->start_pts = av_rescale_q(src_buffer->pts,
>> +                                         inlink->time_base,
>> +                                         outlink->time_base);
>> +
>>      while (src < src_end) {
>>          if (!atempo->dst_buffer) {
>>              atempo->dst_buffer = ff_get_audio_buffer(outlink, n_out);
>> --
>> 2.16.4
>
> Should be fine.

Pushed, thank you.
Pavel.
Re: [FFmpeg-devel] [PATCH V1 0/2] Use avctx->framerate first for frame rate setting
On Tue, May 7, 2019 at 9:54 AM myp...@gmail.com wrote:
> On Sun, May 5, 2019 at 11:23 AM myp...@gmail.com wrote:
> > On Sun, May 5, 2019 at 9:31 AM Carl Eugen Hoyos wrote:
> > > Am So., 5. Mai 2019 um 03:23 Uhr schrieb myp...@gmail.com:
> > > > On Sun, Apr 28, 2019 at 7:02 PM myp...@gmail.com wrote:
> > > > > On Sun, Apr 28, 2019 at 5:30 PM Gyan wrote:
> > > > > > On 28-04-2019 07:19 AM, myp...@gmail.com wrote:
> > > > > > > On Sat, Apr 27, 2019 at 8:22 PM Gyan wrote:
> > > > > > >> On 27-04-2019 05:25 PM, Carl Eugen Hoyos wrote:
> > > > > > >>> 2019-04-27 13:17 GMT+02:00, Jun Zhao:
> > > > > > >>>> Prefer avctx->framerate over avctx->time_base when
> > > > > > >>>> setting the frame rate for the encoder. 1/time_base is
> > > > > > >>>> not the average frame rate if the frame rate is not
> > > > > > >>>> constant.
> > > > > > >>>
> > > > > > >>> But why would the average framerate be a good choice to
> > > > > > >>> set the encoder timebase?
> > > > > > >>
> > > > > > >> Also, note that x264/5 RC looks at the framerate. See
> > > > > > >> https://code.videolan.org/videolan/x264/commit/c583687fab832ba7eaf8626048f05ad1f861a855
> > > > > > >>
> > > > > > >> I can generate a difference with x264 by setting
> > > > > > >> -enc_time_base to different values (with vsync vfr).
> > > > > > >> Maybe check that this change does not lead to a significant
> > > > > > >> change in output. Although I think this would still be an
> > > > > > >> improvement for those cases where r_frame_rate >>
> > > > > > >> avg_frame_rate.
> > > > > > >>
> > > > > > >> Gyan
> > > > > > >
> > > > > > > Yes, framerate and time_base are not closely correlated in
> > > > > > > the VFR case. E.g. I can set framerate = 60 fps but
> > > > > > > time_base = 1/1000 s, then set PTS like:
> > > > > > >
> > > > > > > time_base = 1/1000 s = 1 millisecond
> > > > > > > framerate = 60 fps
> > > > > > > PTS:       0 16 33 50 66 83 100 ...
> > > > > > > PTS delta: 16 17 17 16 17 17 ...
> > > > > > >
> > > > > > > We will get 16 ms * 20 frames + 17 ms * 40 frames = 1000 ms.
> > > > > >
> > > > > > I'm aware of the relationship between TB and PTS. My point is
> > > > > > that x264's RC adjusts its quantizer based on fps. You're
> > > > > > changing that value, so the output bitrate will change for the
> > > > > > same input with the same encoder config if (avg_frame_rate) !=
> > > > > > (ticks * 1/TB).
> > > > > >
> > > > > > Gyan
> > > > >
> > > > > In fact, this is the purpose of this patch: we used the FFmpeg
> > > > > API to set time_base/pts/framerate like above to tune the PTS.
> > > >
> > > > Any other comments?
> > >
> > > Please explain why this patch is a good idea / what gets fixed.
> >
> > With this patch, we can set time_base and framerate without tight
> > coupling at the encoding API level; it's needed for a VFR case like
> > the sample above.
>
> More comments for this patchset:
>
> 1. This patchset is for when the FFmpeg codec API is used to set
> avctx->time_base/framerate with PTS as in the following example:
>
> framerate = 60 fps, but time_base = 1/1000 s = 1 millisecond, with PTS:
>
> PTS:       0 16 33 50 66 83 100 ...
> PTS delta: 16 17 17 16 17 17 ...
>
> giving 16 ms * 20 frames + 17 ms * 40 frames = 1000 ms in one second.
>
> 2. For the ffmpeg command-line tool, it will support both the
> "enc_time_base" and "r" options for the encoder.
>
> In fact, time_base/framerate is not tightly coupled as in the current
> libx264 wrapper.

Ping?
Re: [FFmpeg-devel] [PATCH] vaapi_encode: Refactor encode misc parameter buffer creation
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Mark Thompson
> Sent: Monday, May 6, 2019 23:21
> To: FFmpeg development discussions and patches de...@ffmpeg.org>
> Subject: [FFmpeg-devel] [PATCH] vaapi_encode: Refactor encode misc
> parameter buffer creation
>
> This removes the use of the nonstandard combined structures, which
> generated some warnings with clang and will cause alignment problems
> with some parameter buffer types.
> ---
> On 27/03/2019 14:18, Carl Eugen Hoyos wrote:
> > Attached patch fixes many warnings when compiling vaapi with clang.
> > Also tested with clang-3.4.
> > ...
>
> How about this approach instead? I think something like it is going to be
> needed anyway because of alignment problems with parameter structures
> which aren't yet used.
>
> - Mark
>
>  libavcodec/vaapi_encode.c      | 71 --
>  libavcodec/vaapi_encode.h      | 23 +++
>  libavcodec/vaapi_encode_h264.c |  8 ++--
>  3 files changed, 60 insertions(+), 42 deletions(-)
>
> diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
> index 2dda451882..95031187df 100644
> --- a/libavcodec/vaapi_encode.c
> +++ b/libavcodec/vaapi_encode.c
> @@ -103,6 +103,29 @@ static int vaapi_encode_make_param_buffer(AVCodecContext *avctx,
>      return 0;
>  }
>
> +static int vaapi_encode_make_misc_param_buffer(AVCodecContext *avctx,
> +                                               VAAPIEncodePicture *pic,
> +                                               int type,
> +                                               const void *data, size_t len)
> +{
> +    // Construct the buffer on the stack - 1KB is much larger than any
> +    // current misc parameter buffer type (the largest is EncQuality at
> +    // 224 bytes).
> +    uint8_t buffer[1024];
> +    VAEncMiscParameterBuffer header = {
> +        .type = type,
> +    };
> +    size_t buffer_size = sizeof(header) + len;
> +    av_assert0(buffer_size <= sizeof(buffer));
> +
> +    memcpy(buffer, &header, sizeof(header));
> +    memcpy(buffer + sizeof(header), data, len);
> +
> +    return vaapi_encode_make_param_buffer(avctx, pic,
> +                                          VAEncMiscParameterBufferType,
> +                                          buffer, buffer_size);
> +}
> +
>  static int vaapi_encode_wait(AVCodecContext *avctx,
>                               VAAPIEncodePicture *pic)
>  {
> @@ -212,10 +235,10 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>
>      if (pic->type == PICTURE_TYPE_IDR) {
>          for (i = 0; i < ctx->nb_global_params; i++) {
> -            err = vaapi_encode_make_param_buffer(avctx, pic,
> -                                                 VAEncMiscParameterBufferType,
> -                                                 (char*)ctx->global_params[i],
> -                                                 ctx->global_params_size[i]);
> +            err = vaapi_encode_make_misc_param_buffer(avctx, pic,
> +                                                      ctx->global_params_type[i],
> +                                                      ctx->global_params[i],
> +                                                      ctx->global_params_size[i]);
>              if (err < 0)
>                  goto fail;
>          }
> @@ -1034,14 +1057,14 @@ int ff_vaapi_encode2(AVCodecContext *avctx, AVPacket *pkt,
>          return AVERROR(ENOSYS);
>  }
>
> -static av_cold void vaapi_encode_add_global_param(AVCodecContext *avctx,
> -                                                  VAEncMiscParameterBuffer *buffer,
> -                                                  size_t size)
> +static av_cold void vaapi_encode_add_global_param(AVCodecContext *avctx, int type,
> +                                                  void *buffer, size_t size)
>  {
>      VAAPIEncodeContext *ctx = avctx->priv_data;
>
>      av_assert0(ctx->nb_global_params < MAX_GLOBAL_PARAMS);
>
> +    ctx->global_params_type[ctx->nb_global_params] = type;
>      ctx->global_params     [ctx->nb_global_params] = buffer;
>      ctx->global_params_size[ctx->nb_global_params] = size;
>
> @@ -1575,8 +1598,7 @@ rc_mode_found:
>                 rc_bits_per_second, rc_window_size);
>      }
>
> -    ctx->rc_params.misc.type = VAEncMiscParameterTypeRateControl;
> -    ctx->rc_params.rc = (VAEncMiscParameterRateControl) {
> +    ctx->rc_params = (VAEncMiscParameterRateControl) {
>          .bits_per_second    = rc_bits_per_second,
>          .target_percentage  = rc_target_percentage,
>          .window_size        = rc_window_size,
> @@ -1591,7 +1613,9 @@ rc_mode_found:
>          .quality_factor     = rc_quality,
>  #endif
>      };
> -    vaapi_encode_add_global_param(avctx, &ctx->rc_params.misc,
> +    vaapi_encode_add_global_param(avctx,
> +                                  VAEncMiscParameterTypeRateControl,
> +
Re: [FFmpeg-devel] [PATCH v2 2/6] lavu/frame: Expand ROI documentation
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> Guo, Yejun
> Sent: Thursday, April 04, 2019 2:45 PM
> To: FFmpeg development discussions and patches
> Subject: Re: [FFmpeg-devel] [PATCH v2 2/6] lavu/frame: Expand ROI
> documentation
>
> > -----Original Message-----
> > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> > Of Mark Thompson
> > Sent: Wednesday, March 13, 2019 8:18 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: [FFmpeg-devel] [PATCH v2 2/6] lavu/frame: Expand ROI
> > documentation
> >
> > Clarify and add examples for the behaviour of the quantisation offset,
> > and define how multiple ranges should be handled.
> > ---
> >  libavutil/frame.h | 46 ++
> >  1 file changed, 34 insertions(+), 12 deletions(-)
>
> Maybe we can first refine and push the first two patches?

Hello, just a kind reminder: since this is an interface change, it would be
better to handle the first two patches earlier.
Re: [FFmpeg-devel] [PATCH v3] lavf/h264: add support for h264 video from Arecont camera, fixes ticket #5154
On 09-05-2019 02:16, Michael Niedermayer wrote:
> On Tue, May 07, 2019 at 10:05:12AM +0530, Shivam Goyal wrote:
>
>> The patch is for ticket #5154.
>>
>> I have improved the patch as suggested.
>>
>> Please review.
>>
>> Thank you,
>>
>> Shivam Goyal
>
>>  Changelog                |    1
>>  libavformat/Makefile     |    1
>>  libavformat/allformats.c |    1
>>  libavformat/h264dec.c    |  121 +++
>>  libavformat/version.h    |    4 -
>>  5 files changed, 126 insertions(+), 2 deletions(-)
>> 34932cf36d17537b8fc34642e92cc1fff6ad481e add_arecont_h264_support_v3.patch
>> From 2aa843626f939218179d3ec252f76f9991c33ed6 Mon Sep 17 00:00:00 2001
>> From: Shivam Goyal
>> Date: Tue, 7 May 2019 10:01:15 +0530
>> Subject: [PATCH] lavf/h264: Add support for h264 video from Arecont camera,
>> fixes ticket #5154
> [...]
>
>> @@ -117,4 +120,122 @@ static int h264_probe(const AVProbeData *p)
>>      return 0;
>>  }
>>
>> +static int arecont_h264_probe(const AVProbeData *p)
>> +{
>> +    int i, j, k, o = 0;
>> +    int ret = h264_probe(p);
>> +    const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72, 0x0D, 0x0A};
>> +
>> +    if (!ret)
>> +        return 0;
>> +    for (i = 0; i + 7 < p->buf_size; i++){
>> +        if (p->buf[i] == id[0] && !memcmp(id, p->buf + i, 8))
>> +            o++;
>> +    }
>> +    if (o >= 1)
>> +        return ret + 1;
>> +    return 0;
>> +}
>> +
>> +static int read_raw_arecont_h264_packet(AVFormatContext *s, AVPacket *pkt)
>> +{
>> +    int ret, size, start, end, new_size = 0, i, j, k, w = 0;
>> +    const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72};
>> +    uint8_t *data;
>> +    int64_t pos;
>> +
>> +    //Extra to find the http header
>> +    size = 2 * ARECONT_H264_MIME_SIZE + RAW_PACKET_SIZE;
>> +    data = av_malloc(size);
>> +
>> +    if (av_new_packet(pkt, size) < 0)
>> +        return AVERROR(ENOMEM);
>
> memleak on error

I have tested this on the file attached to the ticket, and the error did not
occur. Please, could you tell me how to fix this leak and why it happens?

Thanks for the review,
Shivam Goyal
Re: [FFmpeg-devel] [PATCH] libavfilter: Add multiple padding methods in FFmpeg dnn native mode.
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> xwm...@pku.edu.cn
> Sent: Wednesday, May 08, 2019 5:34 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfilter: Add multiple padding methods in
> FFmpeg dnn native mode.
>
> This patch is for the support of the derain filter project in GSoC. It adds
> support for the following operations:

It is a general patch, not specific to derain, so I think we don't need to
mention derain here; just explain it in general terms.

> (1) Conv padding method: "SAME", "VALID" and "SAME_CLAMP_TO_EDGE"
>
> These operations are all needed in the derain filter. As we discussed
> before, the "SAME_CLAMP_TO_EDGE" method is the same as the dnn native
> padding method in the current implementation. And the sr model generation
> code should be updated accordingly, so I sent a PR
> (https://github.com/HighVoltageRocknRoll/sr/pull/4) to the original sr
> repo (https://github.com/HighVoltageRocknRoll/sr).
>
> From c0724bb304a6f4c3ca935cccda5b810e5c4eceb1 Mon Sep 17 00:00:00 2001
> From: Xuewei Meng
> Date: Wed, 8 May 2019 17:32:30 +0800
> Subject: [PATCH] Add multiple padding method in dnn native
>
> Signed-off-by: Xuewei Meng
> ---
>  libavfilter/dnn_backend_native.c | 52
>  libavfilter/dnn_backend_native.h |  3 ++
>  2 files changed, 43 insertions(+), 12 deletions(-)
>
> diff --git a/libavfilter/dnn_backend_native.c b/libavfilter/dnn_backend_native.c
> index 70d857f5f2..b7c0508d91 100644
> --- a/libavfilter/dnn_backend_native.c
> +++ b/libavfilter/dnn_backend_native.c
> @@ -59,6 +59,12 @@ static DNNReturnType set_input_output_native(void *model, DNNData *input, DNNDat
>              return DNN_ERROR;
>          }
>          cur_channels = conv_params->output_num;
> +
> +        if (conv_params->padding_method == VALID){
> +            int pad_size = conv_params->kernel_size - 1;
> +            cur_height -= pad_size;
> +            cur_width -= pad_size;
> +        }
>          break;
>      case DEPTH_TO_SPACE:
>          depth_to_space_params = (DepthToSpaceParams *)network->layers[layer].params;
> @@ -75,6 +81,10 @@ static DNNReturnType set_input_output_native(void *model, DNNData *input, DNNDat
>          if (network->layers[layer].output){
>              av_freep(&network->layers[layer].output);
>          }
> +
> +        if (cur_height <= 0 || cur_width <= 0)
> +            return DNN_ERROR;
> +
>          network->layers[layer].output = av_malloc(cur_height * cur_width * cur_channels * sizeof(float));
>          if (!network->layers[layer].output){
>              return DNN_ERROR;
>          }
> @@ -157,13 +167,14 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename)
>                  ff_dnn_free_model_native(&model);
>                  return NULL;
>              }
> +            conv_params->padding_method = (int32_t)avio_rl32(model_file_context);
>              conv_params->activation = (int32_t)avio_rl32(model_file_context);
>              conv_params->input_num = (int32_t)avio_rl32(model_file_context);
>              conv_params->output_num = (int32_t)avio_rl32(model_file_context);
>              conv_params->kernel_size = (int32_t)avio_rl32(model_file_context);
>              kernel_size = conv_params->input_num * conv_params->output_num *
>                            conv_params->kernel_size * conv_params->kernel_size;
> -            dnn_size += 16 + (kernel_size + conv_params->output_num << 2);
> +            dnn_size += 20 + (kernel_size + conv_params->output_num << 2);
>              if (dnn_size > file_size || conv_params->input_num <= 0 ||
>                  conv_params->output_num <= 0 || conv_params->kernel_size <= 0){
>                  avio_closep(&model_file_context);
> @@ -221,23 +232,35 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename)
>
>  static void convolve(const float *input, float *output, const ConvolutionalParams *conv_params, int width, int height)
>  {
> -    int y, x, n_filter, ch, kernel_y, kernel_x;
>      int radius = conv_params->kernel_size >> 1;
>      int src_linesize = width * conv_params->input_num;
>      int filter_linesize = conv_params->kernel_size * conv_params->input_num;
>      int filter_size = conv_params->kernel_size * filter_linesize;
> +    int pad_size = (conv_params->padding_method == VALID) ? (conv_params->kernel_size - 1) / 2 : 0;
>
> -    for (y = 0; y < height; ++y){
> -        for (x = 0; x < width; ++x){
> -            for (n_filter = 0; n_filter < conv_params->output_num; ++n_filter){
> +    for (int y = pad_size; y < height - pad_size; ++y){
> +        for (int x = pad_size; x < width - pad_size; ++x){
> +            for (int n_filter = 0; n_filter < conv_params->output_num; ++n_filter){
>                  output[n_filter] = conv_params->biases[n_filter];
> -                for (ch = 0; ch <
Re: [FFmpeg-devel] [PATCH v3] lavf/h264: add support for h264 video from Arecont camera, fixes ticket #5154
On 09-05-2019 01:15, Reimar Döffinger wrote:
> On Tue, May 07, 2019 at 10:05:12AM +0530, Shivam Goyal wrote:
>
>> +static int arecont_h264_probe(const AVProbeData *p)
>> +{
>> +    int i, j, k, o = 0;
>> +    int ret = h264_probe(p);
>> +    const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72, 0x0D, 0x0A};
>
> Should be "static const" instead of just "const".
> Also this seems to be just "--fbdr\r\n"?

Okay, I will change it in the next version. Yes, it is "--fbdr\r\n"; a file
from an Arecont h264 camera contains an HTTP header starting with this,
many times.

>> +        if (p->buf[i] == id[0] && !memcmp(id, p->buf + i, 8))
>> +            o++;
>
> Is that optimization really helping much?
> If you want to speed it up, I suspect it would be more
> useful to either go for AV_RL64 or memchr() or
> a combination.
> Also, sizeof() instead of hard-coding the 8.
>
>> +    if (o >= 1)
>> +        return ret + 1;
>
> Since you only check for >= 1 you could have aborted the
> scanning loop on the first match...

Yeah, that would be a great idea. I will change this.

>> +static int read_raw_arecont_h264_packet(AVFormatContext *s, AVPacket *pkt)
>> +{
>> +    int ret, size, start, end, new_size = 0, i, j, k, w = 0;
>> +    const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72};
>
> "static", however this seems the same data (though 2 shorter).
> I'd suggest defining the signature just once.
>
>> +    uint8_t *data;
>> +    int64_t pos;
>> +
>> +    //Extra to find the http header
>> +    size = 2 * ARECONT_H264_MIME_SIZE + RAW_PACKET_SIZE;
>> +    data = av_malloc(size);
>> +
>> +    if (av_new_packet(pkt, size) < 0)
>> +        return AVERROR(ENOMEM);
>> +
>> +    pkt->pos = avio_tell(s->pb);
>> +    pkt->stream_index = 0;
>> +    pos = avio_tell(s->pb);
>> +    ret = avio_read_partial(s->pb, data, size);
>> +    if (ret < 0) {
>> +        av_packet_unref(pkt);
>> +        return ret;
>> +    }
>> +    if (pos <= ARECONT_H264_MIME_SIZE) {
>> +        avio_seek(s->pb, 0, SEEK_SET);
>> +        start = pos;
>> +    } else {
>> +        avio_seek(s->pb, pos - ARECONT_H264_MIME_SIZE, SEEK_SET);
>> +        start = ARECONT_H264_MIME_SIZE;
>> +    }
>> +    ret = avio_read_partial(s->pb, data, size);
>> +    if (ret < 0 || start >= ret) {
>> +        av_packet_unref(pkt);
>> +        return ret;
>> +    }
>
> You need to document more what you are doing here.
> And even more importantly why you are using avio_read_partial.
> And why you allocate both a packet and a separate "data"
> with the same size.
> And why not use av_get_packet.

I looked at the raw video demuxer; it was using avio_read_partial there,
because it is allowed to read less data than specified.

>> +        if (i >= start && j + 1 > start && j + 1 <= end) {
>> +            memcpy(pkt->data + new_size, data + start, i - start);
>> +            new_size += i - start;
>> +            memcpy(pkt->data + new_size, data + j + 2, end - j - 1);
>> +            new_size += end - j - 1;
>> +            w = 1;
>> +            break;
>> +        } else if (i < start && j + 1 >= start && j + 1 < end) {
>> +            memcpy(pkt->data + new_size, data + j + 2, end - j - 1);
>> +            new_size += end - j - 1;
>> +            w = 1;
>> +            break;
>> +        } else if (j + 1 > end && i > start && i <= end) {
>> +            memcpy(pkt->data + new_size, data + start, i - start);
>> +            new_size += i - start;
>> +            w = 1;
>> +            break;
>> +        }
>
> With some comments I might be able to review without
> spending a lot of time reverse-engineering this...

Okay, I will add comments to this as well. I also have another idea for
finding and removing the HTTP header, but it would require changing
RAW_PACKET_SIZE. So I just want to know: can I change this? If yes, by how
much?

Thanks for the review,
Shivam Goyal
[FFmpeg-devel] movie Filter reload Option
Please see this (very short) thread for background; it is incorporated here:
http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/243721.html

The drawtext filter has a reload option, and when I use overlay with a PNG
image, like so:

-f image2 -loop 1 -i overlay.png

I can manipulate the overlay by changing out the PNG file while the ffmpeg
command is running. (It works with RTMP sending to YouTube.) I can make it
"disappear" by copying a 1-pixel alpha PNG image into the "overlay.png"
file. I can manipulate the displayed drawtext in real time by changing the
contents of the "textfile" the drawtext filter uses when the "reload" option
is set. So far, so good.

What about doing the same with a video? That would mean the movie filter
reloads (or at least checks) the file specified as its input on each run of
the loop, if a reload option were set. ffmpeg appears to read that file
once, at inception.

Is there currently a way to accomplish this, or would one have to add it to
the code?

Thanks.
Re: [FFmpeg-devel] [PATCH 1/2] Revert "avcodec/qtrle: Do not output duplicated frames on insufficient input"
On Tue, May 07, 2019 at 01:39:44AM +0200, Hendrik Leppkes wrote:
> On Tue, May 7, 2019 at 12:34 AM Michael Niedermayer wrote:
> >
> > On Sun, May 05, 2019 at 08:51:08PM +0200, Marton Balint wrote:
> > > This reverts commit a9dacdeea6168787a142209bd19fdd74aefc9dd6.
> > >
> > > I don't think it is a good idea to drop frames from CFR input just
> > > because they are duplicated; that can cause issues for API users
> > > expecting CFR input. Also it can cause issues at the end of file,
> > > if the last frame is a duplicated frame.
> > >
> > > Fixes ticket #7880.
> > >
> > > Signed-off-by: Marton Balint
> > > ---
> > >  libavcodec/qtrle.c        |  12 ++---
> > >  tests/ref/fate/qtrle-8bit | 109 ++
> > >  2 files changed, 115 insertions(+), 6 deletions(-)
> >
> > This change would make the decoder quite a bit slower. It also would
> > make encoding the output harder. For example, motion estimation would
> > be run over unchanged frames even when no CFR is wanted.
>
> This is simple:
> There are X input packets; any other decoder outputs X output frames.
> FFmpeg outputs Y output frames (where Y < X). How can this be correct
> decoding?
>
> If you want to lessen the burden of static frames, a filter to detect
> duplicates and make a stream VFR is what you should suggest, not
> making decoders act "unexpectedly".
>
> > Also, if one for consistency wants every decoder to not drop duplicated
> > things, that will cause some major problems in other decoders.
> > I am thinking of MPEG2 here, where the duplication is at a field level:
> > perfectly progressive material would be turned into some mess with
> > field repetition in that case. Again, undoing that in a subsequent
> > stage would be quite a bit harder and wasteful.
>
> There is quite a fundamental difference between CFR codecs where we
> end up not generating output for an input packet just because we feel
> like it, and the thought of somehow interpreting field repeat
> metadata. That just smells like deflection, let's not go there.

In ISO/IEC 13818-2:1995 (E), repeat_first_field is part of the picture
coding extension; if it's removed, the stream will no longer decode. I am
not sure calling this metadata is accurate.

Also, the description of the field is:

    [...] If progressive_sequence is equal to 0 and progressive_frame is
    equal to 1: If this flag is set to 0, the output of the decoding
    process corresponding to this reconstructed frame consists of two
    fields. The first field (top or bottom field as identified by
    top_field_first) is followed by the other field. If it is set to 1,
    the output of the decoding process corresponding to this
    reconstructed frame consists of three fields. The first field (top
    or bottom field as identified by top_field_first) is followed by the
    other field, then the first field is repeated. [...]

If you don't return 3 fields you break the normative specification. This
speaks about the "output of the decoding process", not how to interpret
the output.

I bring MPEG2 up here because we don't do what the normative spec says,
because it doesn't make sense for us. It does make sense if you output on
an analogue interlaced PAL/NTSC screen. It is fundamentally the same as
the CFR case: on one side an interlaced display as factor, on the other an
output only capable of handling fixed-duration (CFR) frames.

About CFR codecs, the cases this is about are generally input packets that
code the equivalent of "nothing changed". In frameworks that allow only
CFR, an implementation would produce a duplicated frame. In frameworks
that allow VFR, an implementation can omit the duplicated frame.

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Avoid a single point of failure, be that a person or equipment.
Re: [FFmpeg-devel] [DECISION] scaletempo filter
On 5/7/19, Paul B Mahol wrote:
> On 5/6/19, Marton Balint wrote:
>> On Mon, 6 May 2019, Marton Balint wrote:
>>> On Mon, 6 May 2019, Paul B Mahol wrote:
>>>> On 5/6/19, Marton Balint wrote:
>>>>> On Sat, 4 May 2019, John Warburton wrote:
>>>>>> On Sat, May 4, 2019 at 3:34 PM Nicolas George wrote:
>>>>>>> John Warburton (12019-05-04):
>>>>>>>> Is there a patch I can use to test scaletempo to compare it
>>>>>>>> against atempo? It'll be no trouble to do that with the normal
>>>>>>>> audio that is time-adjusted on that radio station. It may be
>>>>>>>> that its increased quality is most
>>>>>>>
>>>>>>> John, we would appreciate your input about whether this new
>>>>>>> implementation of atempo is superior or equal to the existing
>>>>>>> one with regard to your needs.
>>>>>
>>>>> I tested scaletempo (with default settings) and it is definitely
>>>>> worse than atempo for small scaling factors like 25/24.
>>>>
>>>> Have you tried other scaling factors? How did you do the testing?
>>>
>>> Simple hearing tests. I hear audible artifacts (a kind of audio
>>> stuttering) with scaletempo. Here are some files where it is
>>> noticeable:
>>>
>>> fate-suite/delphine-cin/LOGO-partial.CIN
>>> fate-suite/real/spygames-2MB.rmvb
>>> fate-suite/wmapro/Beethovens_9th-1_small.wma
>>> fate-suite/dts/dts.ts
>>>
>>> The last one is noticeable even with 2x scaling.
>>
>> Another one, audible with any (0.5, ~1, 2) scaling:
>>
>> fate-suite/paf/hod1-partial.paf
>
> OK, vote and patch dropped.
>
> Even using basically the same algorithm, it is very slow to get the
> same quality, because it does not use an FFT for cross-correlation.

I have come into possession of code that is better than atempo for very
small scale factors (0.5). So I am going to write a new filter which will
also be able to change both tempo and pitch at the same time.
Re: [FFmpeg-devel] [SOLVED] loop Video Filter Not Looping
On Wed, May 08, 2019 at 11:48:38PM +0200, Paul B Mahol wrote: > On 5/8/19, talkvi...@talkvideo.net wrote: > > The commands below all produce an output with a smaller video overlaid in > > the upper left. But, it will not loop. It plays > > the overlay until its end, and stays on the last frame. > > > > The overlay must of course be shorter in time than the main video, but in an > > RTMP stream, I would like to bring overlays in and out, in order to play > > clips during the stream. When the clip is done, It should disappear. > > > > Your loop filter incarnation is invalid. Size must be size of looped > frames and not unset. > > > The Warning I get is: > > "[Parsed_movie_1 @ 0x3ccb380] EOF timestamp not reliable" > > > > The overlay video was produced with the following command: > > > > ffmpeg -y -ss 23 -i -qmin 1 -qmax 2 -s 640x360 -t 2 > > overlay.flv > > > > > > My FFMPEG Is: > > ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers > > built with gcc 7 (GCC) > > configuration: --extra-libs=-lpthread --extra-libs=-lm --enable-gpl > > --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame > > --enable-libx264 --enable-nonfree --enable-fontconfig --enable-libfribidi > > libavutil 56. 22.100 / 56. 22.100 > > libavcodec 58. 35.100 / 58. 35.100 > > libavformat58. 20.100 / 58. 20.100 > > libavdevice58. 5.100 / 58. 5.100 > > libavfilter 7. 40.101 / 7. 40.101 > > libswscale 5. 3.100 / 5. 3.100 > > libswresample 3. 3.100 / 3. 3.100 > > libpostproc55. 3.100 / 55. 3.100 > > > > Thanks. 
> > > > > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > > "nal-hrd=vbr" -filter_complex "copy[in2]; > > movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > > [in2][tmp2]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k > > -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r > > 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > > "nal-hrd=vbr" -filter_complex "copy[in2]; > > movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > > ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > > "nal-hrd=vbr" -filter_complex "copy[in2]; > > movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > > ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > > "nal-hrd=vbr" -filter_complex "copy[in2]; > > movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > > ultrafast -s 1920x1080 -r 30 -f flv > > rtmp://a.rtmp.youtube.com/live2/
That worked. This page: https://video.stackexchange.com/questions/12905/repeat-loop-input-video-with-ffmpeg/16933#16933 Has some info regarding that. Thanks. I changed the loop filter to: [tmp2]loop=2:60:1[tmp3]; Added number of frames to loop and skip. Worked. Thanks.
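Putting the working filter back into one of the thread's commands gives something like the following. This is a sketch only, using the thread's file names with the encoding options trimmed for brevity; the positional `loop=2:60:1` is equivalent to the named form `loop=loop=2:size=60:start=1` used here, and the loop counts are just example values.

```sh
# loop needs an explicit size (number of frames to buffer and repeat);
# loop=0 with the default size=0 repeats nothing, which is why the
# original commands froze on the overlay's last frame.
ffmpeg -re -y -i Input.flv -filter_complex \
  "copy[in2]; \
   movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; \
   [tmp2]loop=loop=2:size=60:start=1[tmp3]; \
   [in2][tmp3]overlay=70:100:shortest=0" \
  -c:v libx264 -c:a aac -t 20 out.mp4
```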
Re: [FFmpeg-devel] loop Video Filter Not Looping
On 5/8/19, talkvi...@talkvideo.net wrote: > The commands below all produce an output with a smaller video overlaid in > the upper left. But, it will not loop. It plays > the overlay until its end, and stays on the last frame. > > The overlay must of course be shorter in time than the main video, but in an > RTMP stream, I would like to bring overlays in and out, in order to play > clips during the stream. When the clip is done, It should disappear. > Your loop filter incarnation is invalid. Size must be size of looped frames and not unset. > The Warning I get is: > "[Parsed_movie_1 @ 0x3ccb380] EOF timestamp not reliable" > > The overlay video was produced with the following command: > > ffmpeg -y -ss 23 -i -qmin 1 -qmax 2 -s 640x360 -t 2 > overlay.flv > > > My FFMPEG Is: > ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers > built with gcc 7 (GCC) > configuration: --extra-libs=-lpthread --extra-libs=-lm --enable-gpl > --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame > --enable-libx264 --enable-nonfree --enable-fontconfig --enable-libfribidi > libavutil 56. 22.100 / 56. 22.100 > libavcodec 58. 35.100 / 58. 35.100 > libavformat58. 20.100 / 58. 20.100 > libavdevice58. 5.100 / 58. 5.100 > libavfilter 7. 40.101 / 7. 40.101 > libswscale 5. 3.100 / 5. 3.100 > libswresample 3. 3.100 / 3. 3.100 > libpostproc55. 3.100 / 55. 3.100 > > Thanks. 
> > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > "nal-hrd=vbr" -filter_complex "copy[in2]; > movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > [in2][tmp2]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k > -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r > 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > "nal-hrd=vbr" -filter_complex "copy[in2]; > movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > "nal-hrd=vbr" -filter_complex "copy[in2]; > movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 > > /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params > "nal-hrd=vbr" -filter_complex "copy[in2]; > movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; > [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 > -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset > ultrafast -s 1920x1080 -r 30 -f flv > rtmp://a.rtmp.youtube.com/live2/
[FFmpeg-devel] [PATCH] libavfilter/vf_scale_cuda: fix frame dimensions
AVHWFramesContext has aligned width and height. When initializing a new AVFrame, it receives these aligned values (in av_hwframe_get_buffer), which leads to incorrect scaling: the resulting frames are cropped either horizontally or vertically. As a fix we can overwrite the dimensions with the original values right after av_hwframe_get_buffer. More info, samples and reproduction steps are here: https://github.com/Svechnikov/ffmpeg-scale-cuda-problem
---
 libavfilter/vf_scale_cuda.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/libavfilter/vf_scale_cuda.c b/libavfilter/vf_scale_cuda.c
index c97a802..ef1bd82 100644
--- a/libavfilter/vf_scale_cuda.c
+++ b/libavfilter/vf_scale_cuda.c
@@ -463,6 +463,9 @@ static int cudascale_scale(AVFilterContext *ctx, AVFrame *out, AVFrame *in)
     if (ret < 0)
         return ret;

+    s->tmp_frame->width  = s->planes_out[0].width;
+    s->tmp_frame->height = s->planes_out[0].height;
+
     av_frame_move_ref(out, s->frame);
     av_frame_move_ref(s->frame, s->tmp_frame);
--
2.7.4
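The cropping mechanism described above is easy to reproduce with the FFALIGN-style round-up that libavutil applies to hardware frame pools; a self-contained sketch (the alignment of 32 is an assumed example value, not what any particular driver uses):

```c
#include <assert.h>

// Hardware frame pools store width/height already rounded up to the
// driver's alignment, so av_hwframe_get_buffer() can hand back e.g.
// 1088 rows for a 1080-row request. Scaling into that padded frame
// without resetting the dimensions is what crops the visible output.
static int align_up(int x, int a)
{
    return (x + a - 1) / a * a;
}
```

With an alignment of 32, a 1920x1080 request becomes a 1920x1088 buffer, which is exactly the mismatch the patch papers over by restoring the requested dimensions.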
[FFmpeg-devel] loop Video Filter Not Looping
The commands below all produce an output with a smaller video overlaid in the upper left. But, it will not loop. It plays the overlay until its end, and stays on the last frame. The overlay must of course be shorter in time than the main video, but in an RTMP stream, I would like to bring overlays in and out, in order to play clips during the stream. When the clip is done, It should disappear. The Warning I get is: "[Parsed_movie_1 @ 0x3ccb380] EOF timestamp not reliable" The overlay video was produced with the following command: ffmpeg -y -ss 23 -i -qmin 1 -qmax 2 -s 640x360 -t 2 overlay.flv My FFMPEG Is: ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7 (GCC) configuration: --extra-libs=-lpthread --extra-libs=-lm --enable-gpl --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libx264 --enable-nonfree --enable-fontconfig --enable-libfribidi libavutil 56. 22.100 / 56. 22.100 libavcodec 58. 35.100 / 58. 35.100 libavformat58. 20.100 / 58. 20.100 libavdevice58. 5.100 / 58. 5.100 libavfilter 7. 40.101 / 7. 40.101 libswscale 5. 3.100 / 5. 3.100 libswresample 3. 3.100 / 3. 3.100 libpostproc55. 3.100 / 55. 3.100 Thanks. 
/usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params "nal-hrd=vbr" -filter_complex "copy[in2]; movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; [in2][tmp2]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params "nal-hrd=vbr" -filter_complex "copy[in2]; movie=overlay.flv,loop=0,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params "nal-hrd=vbr" -filter_complex "copy[in2]; movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r 30 -pix_fmt yuv420p -f mp4 -t 20 out.mp4 /usr/local/bin/ffmpeg -re -y -i Input.flv -c:v libx264 -x264-params "nal-hrd=vbr" -filter_complex "copy[in2]; movie=overlay.flv,setpts=N/FRAME_RATE/TB,scale=640:360[tmp2]; [tmp2]loop=0[tmp3]; [in2][tmp3]overlay=70:100:shortest=0" -qmin 1 -qmax 15 -c:a aac -b:a 128k -b:v 8M -maxrate 80M -bufsize 80M -g 15 -preset ultrafast -s 1920x1080 -r 30 -f flv rtmp://a.rtmp.youtube.com/live2/
Re: [FFmpeg-devel] [DECISION] Project policy on closed source components
On Sun, 28 Apr 2019, Marton Balint wrote: Hi All, There has been discussion on the mailing list several times about the inclusion of support for closed source components (codecs, formats, filters, etc) in the main ffmpeg codebase. Also the removal of libNDI happened without general consensus, so a vote is necessary to justify the removal. So here is a call to the voting committee [1] to decide on the following two questions: 1) Should libNDI support be removed from the ffmpeg codebase? No. Regards, Marton
Re: [FFmpeg-devel] [PATCH] avutil: Add NV24 and NV42 pixel formats
On Tue, May 07, 2019 at 03:19:55PM -0700, Philip Langdale wrote: > On 2019-05-07 14:43, Carl Eugen Hoyos wrote: > >Am Di., 7. Mai 2019 um 06:33 Uhr schrieb Philip Langdale > >: > >> > >>These are the 4:4:4 variants of the semi-planar NV12/NV21 formats. > >> > >>I'm surprised we've not had a reason to add them until now, but > >>they are the format that VDPAU uses when doing interop for 4:4:4 > >>surfaces. > > > >Is there already a (libswscale) patch that actually uses the new > >formats? > > No. I haven't written any swscale code for this yet, although I could, > but there's no specific requirement for it. being able to convert to and from a format has advantages. So i too think it would be "nice to have" some support for that thanks [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB If you think the mosad wants you dead since a long time then you are either wrong or dead since a long time.
Re: [FFmpeg-devel] [PATCH v3] lavf/h264: add support for h264 video from Arecont camera, fixes ticket #5154
On Tue, May 07, 2019 at 10:05:12AM +0530, Shivam Goyal wrote: > The patch is for ticket #5154. > > I have improved the patch as suggested. > > Please review. > > Thank you, > > Shivam Goyal > Changelog|1 > libavformat/Makefile |1 > libavformat/allformats.c |1 > libavformat/h264dec.c| 121 > +++ > libavformat/version.h|4 - > 5 files changed, 126 insertions(+), 2 deletions(-) > 34932cf36d17537b8fc34642e92cc1fff6ad481e add_arecont_h264_support_v3.patch > From 2aa843626f939218179d3ec252f76f9991c33ed6 Mon Sep 17 00:00:00 2001 > From: Shivam Goyal > Date: Tue, 7 May 2019 10:01:15 +0530 > Subject: [PATCH] lavf/h264: Add support for h264 video from Arecont camera, > fixes ticket #5154 [...] > @@ -117,4 +120,122 @@ static int h264_probe(const AVProbeData *p) > return 0; > } > > +static int arecont_h264_probe(const AVProbeData *p) > +{ > +int i, j, k, o = 0; > +int ret = h264_probe(p); > +const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72, 0x0D, 0x0A}; > + > +if (!ret) > +return 0; > +for (i = 0; i + 7 < p->buf_size; i++){ > +if (p->buf[i] == id[0] && !memcmp(id, p->buf + i, 8)) > +o++; > +} > +if (o >= 1) > +return ret + 1; > +return 0; > +} > + > +static int read_raw_arecont_h264_packet(AVFormatContext *s, AVPacket *pkt) > +{ > +int ret, size, start, end, new_size = 0, i, j, k, w = 0; > +const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72}; > +uint8_t *data; > +int64_t pos; > + > +//Extra to find the http header > +size = 2 * ARECONT_H264_MIME_SIZE + RAW_PACKET_SIZE; > +data = av_malloc(size); > + > +if (av_new_packet(pkt, size) < 0) > +return AVERROR(ENOMEM); memleak on error [...] 
-- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Asymptotically faster algorithms should always be preferred if you have asymptotical amounts of data
Re: [FFmpeg-devel] [PATCH v2] avformat/ifv: added support for ifv cctv files
On Wed, May 08, 2019 at 09:28:33PM +0200, Reimar Döffinger wrote: > On Wed, May 08, 2019 at 03:06:37PM +0530, Swaraj Hota wrote: > > On Wed, May 08, 2019 at 12:52:01AM +0200, Reimar Döffinger wrote: > > > First, seeking should be handled specially, by resetting the state. > > > You should not make the get_packet less efficient because of that. > > > That should enable the "remember last position and start from there". > > > > > > As to the corruption case, well the question is what to do about that, > > > and I don't have the answer. > > > But if the solution were to e.g. ensure the frame offset is monotonous > > > then binary search could be used. > > > However there is also the possibility that the format does in fact allow > > > a completely arbitrary order of frames in the file, maybe even re-using > > > an earlier frame_offset if the same frame appears multiple times. > > > In that case this whole offset-based positioning code would simply be > > > wrong, and you'd have to store the current index position in your demuxer > > > instead of relying on avio_tell. > > > Maybe you chose this solution because you did not know that seeking > > > should be implemented via special functions? > > > > By "special functions" do you mean those "read_seek" functions that > > are present in many demuxers(Cuz I have not implemented that)? > > Yes. > > > If yes then am I mistaken that FFmpeg can also handle seeking > > automatically? (Carl suggesting something like that, iirc) > > It has some functionality, but I think in practice I don't think > you can get a proper fully working solution with it. > > > Keeping the corruption case aside, how do you suggest I implement this? > > Is it not required to skip bytes till the next frame boundary in the > > read_packet function (as done in the current patch) ? > > If not then I guess I can go with a binary search, or better yet > > remember the last position. 
> > It's a bit more complex (in some aspect, simpler in others), but > gxf.c might be an example. > > I will describe what I think would be a correct full implementation, > but I will also note that maybe this is asking a bit much. > You would start with changing your index reading function to > use av_add_index_entry instead of storing your own index. > In read_packet you simply remember your position in that index and > use that information to know where to read from the next time. > This should work for linear playback. > > Once that is done, read_seek would need implementing > (it seems read_seek2 is currently only implemented by subtitle > formats?). > There you would seek the index for the right position > (av_index_search_timestamp), and set the variables you > use in read_packet accordingly so you will read the proper > frames next. > If the desired position is not in the index, you > will also need to read the next index entries from > the file (to start with you could do that by calling > read_packet, even though it is a bit overkill). > > Unfortunately this is all a bit complicated and a > lot of things that need to be implemented right. > The "quick fix" would be to implement a read_seek > function that only returns -1, with that you > continue to use the current seeking code, but > it would allow you to tell the read_packet > function to use the "search index position based > on file position" code, whereas for normal > playback you just continue reading whatever > you stored as "next index position to read". > > I'm sure far from everything I wrote is clear, > but hopefully it gives you some starting points. > > Best regards, > Reimar Döffinger That was really helpful! Thanks a lot! It is more or less clear to me now. Okay then I will surely try the "complicated" one first, and fallback to the "quick fix" otherwise. 
Really sorry if I am asking a lot of questions here ':D Swaraj
Re: [FFmpeg-devel] [DECISION] Project policy on closed source components
On Sun, Apr 28, 2019 at 11:02 PM Marton Balint wrote: > > Hi All, > > There has been discussion on the mailing list several times about the > inclusion of support for closed source components (codecs, formats, > filters, etc) in the main ffmpeg codebase. > > Also the removal of libNDI happened without general consensus, so a vote > is necessary to justify the removal. > > So here is a call to the voting committee [1] to decide on the following > two questions: > > 1) Should libNDI support be removed from the ffmpeg codebase? > Yes. To give a reasoning for this, I have taken a quick look at the history of enable-nonfree (first appearing in 3fe142e2555ec8b527f2ff4cc517c015a114e91a in January of 2008), and it seems like its reasoning was to enable linking of (open source) components that were already in the code base that were then found out to not have technical limitations in their build which would follow their incompatibility with LGPL, GPL or both (starting with the 3GPP AMR-NB decoder wrapper added in 891f64b33972bb35f64d0b7ae0928004ff278f5b in May of 2003). Later on, it would be utilized when similar incompatibilities were found, of which at this point in time the Google published Fraunhofer AAC decoder/encoder suite is probably the best known example of. Before that there was the case where people happened to look into what libfaac actually contained, and it was noticed that the library actually contained the code from the reference implementation, making it incompatible with its own license. Following that quick check of things, I looked at the licenses of what FFmpeg can be redistributably built under: LGPL versions 2.1 and 3, as well as GPLv2 and 3. Thus, effectively the most limiting license of these would be one of the versions of the GPL. Thus, in my opinion as much of the code in FFmpeg should be compatible with LGPLv2.1+ and GPLv2+. 
And thus we gain an understanding of what sort of closed source software we can utilize within FFmpeg without limiting the option in binary licenses for the user (basically, the poorly worded things regarding things that come with the system - which is why generally having schannel/dxva2|d3d11va/videotoolbox support is seen as "alright"). Then, depending on the amount of working alternatives that are under licenses which do not require additional limitations to available configurations, and acceptance of the dependencies utilized, we have components which require specific sets of configuration. Examples of such would be: - libaribb24 requires LGPLv3+ in its current git form, and GPLv3+ in its current latest release form. There is an alternative, but it requires porting of a custom glibc iconv module into the code base first, so usage of the alternative is not realistic right away. Thus one could argue that it might still be worth the while to have support for this library in the code base. - libfdk-aac is open source, and to my (admittedly hazy) understanding the patent-related clause only affects GPL and not LGPL configurations (due to the "no additional limitations" clause in the former?). There are no open source alternatives providing similar level of quality for HE-AAC, thus making it arguable that fdk-aac a worthwhile thing to keep around. Now, back to libNDI. Let's start with the side that in my opinion looks more positive to it, and then move to the things where I see it in a less positive way: - libNDI does not have open source alternatives, which would be under a license that could be used for a re-distributable binary. - libNDI does have a use case and it could be argued that there is a need for it in the community. Just looking at these first two lines, I could still argue that it might be worth it to have in the code base. But only if the basic requirement regarding the dependency's licensing passes. 
libNDI's state in my opinion is as follows: - libNDI is closed source, and even according to the more generous readings of the most strict license you can configure your FFmpeg binary under (GPL) cannot be considered acceptable as it is not a hardware driver interface, but rather just a network protocol implementation. Thus the general "is this OK" check for me does not get passed. Decklink for example seems to pass since I see people being OK with the hardware interface driver interpretation, and if I understand it correctly the reason why it is under nonfree currently is due to the SDK's poor licensing (?). I would have loved to have had this discussion happen in 2017, before libNDI support getting merged. That way this would not look like a knee-jerk reaction to certain events during the last year or so. Alas, that was not what happened. I am one of those guilty as charged for one reason or another to have not replied into that thread. Maybe we could have persuaded people to work on an open source alternative implementation earlier, instead of now. Also this would have less inconvenienced users who did have this wrapper in their FFmpeg code
Re: [FFmpeg-devel] [PATCH 01/15] avformat/matroskaenc: Fix relative timestamp check
On 5/6/2019 9:19 PM, Andreas Rheinhardt wrote: > Andreas Rheinhardt: >> At this point, ts already includes the ts_offset so that the relative >> time written with the cluster is already given by ts - mkv->cluster_pts. >> It is this number that needs to fit into an int16_t. >> >> Signed-off-by: Andreas Rheinhardt >> --- >> The only difference between this version and the earlier version is the >> authorship information. My earlier emails were munged and in order to >> make the committer's life easier, I'll resend the whole patchset. >> I would appreciate reviews and comments. >> libavformat/matroskaenc.c | 2 +- >> 1 file changed, 1 insertion(+), 1 deletion(-) >> >> diff --git a/libavformat/matroskaenc.c b/libavformat/matroskaenc.c >> index 1c98c0dceb..c006cbf35c 100644 >> --- a/libavformat/matroskaenc.c >> +++ b/libavformat/matroskaenc.c >> @@ -2404,7 +2404,7 @@ static int mkv_write_packet_internal(AVFormatContext >> *s, AVPacket *pkt, int add_ >> ts += mkv->tracks[pkt->stream_index].ts_offset; >> >> if (mkv->cluster_pos != -1) { >> -int64_t cluster_time = ts - mkv->cluster_pts + >> mkv->tracks[pkt->stream_index].ts_offset; >> +int64_t cluster_time = ts - mkv->cluster_pts; >> if ((int16_t)cluster_time != cluster_time) { >> av_log(s, AV_LOG_WARNING, "Starting new cluster due to >> timestamp\n"); >> mkv_start_new_cluster(s, pkt); >> > Ping for the whole patchset. > > - Andreas Set pushed. Thanks!
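The check the patch simplifies relies on the cast round-trip idiom: truncating a wide value to int16_t and comparing it back detects anything outside the signed 16-bit range a Matroska cluster-relative timestamp must fit in. A standalone sketch of that idiom:

```c
#include <assert.h>
#include <stdint.h>

// Same test as the muxer's "(int16_t)cluster_time != cluster_time":
// the cast keeps only the low 16 bits, so the round-trip preserves the
// value exactly when it was already in [-32768, 32767].
static int fits_int16(int64_t v)
{
    return (int16_t)v == v;
}
```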
Re: [FFmpeg-devel] [PATCHv2] lavfi: add gblur_opencl filter
> -Original Message- > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf > Of Dylan Fernando > Sent: Tuesday, May 7, 2019 8:27 AM > To: ffmpeg-devel@ffmpeg.org > Subject: Re: [FFmpeg-devel] [PATCHv2] lavfi: add gblur_opencl filter > > Anyone have any comments/feedback? I think unsharp_opencl with a negative amount should do similar thing as this one. What's the difference? Better quality? or better speed? Thanks! Ruiling
Re: [FFmpeg-devel] [PATCH v3] lavf/h264: add support for h264 video from Arecont camera, fixes ticket #5154
On Tue, May 07, 2019 at 10:05:12AM +0530, Shivam Goyal wrote: > +static int arecont_h264_probe(const AVProbeData *p) > +{ > +int i, j, k, o = 0; > +int ret = h264_probe(p); > +const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72, 0x0D, 0x0A}; Should be "static const" instead of just "const". Also this seems to be just "--fbdr\r\n"? > +if (p->buf[i] == id[0] && !memcmp(id, p->buf + i, 8)) > +o++; Is that optimization really helping much? If you want to speed it up, I suspect it would be more useful to either go for AV_RL64 or memchr() or a combination. Also, sizeof() instead of hard-coding the 8. > +if (o >= 1) > +return ret + 1; Since you only check for >= 1 you could have aborted the scanning loop in the first match... > +static int read_raw_arecont_h264_packet(AVFormatContext *s, AVPacket *pkt) > +{ > +int ret, size, start, end, new_size = 0, i, j, k, w = 0; > +const uint8_t id[] = {0x2D, 0x2D, 0x66, 0x62, 0x64, 0x72}; "static", however this seems the same data (though 2 shorter). I'd suggest defining the signature just once. > +uint8_t *data; > +int64_t pos; > + > +//Extra to find the http header > +size = 2 * ARECONT_H264_MIME_SIZE + RAW_PACKET_SIZE; > +data = av_malloc(size); > + > +if (av_new_packet(pkt, size) < 0) > +return AVERROR(ENOMEM); > + > +pkt->pos = avio_tell(s->pb); > +pkt->stream_index = 0; > +pos = avio_tell(s->pb); > +ret = avio_read_partial(s->pb, data, size); > +if (ret < 0) { > +av_packet_unref(pkt); > +return ret; > +} > +if (pos <= ARECONT_H264_MIME_SIZE) { > +avio_seek(s->pb, 0, SEEK_SET); > +start = pos; > +} else { > +avio_seek(s->pb, pos - ARECONT_H264_MIME_SIZE, SEEK_SET); > +start = ARECONT_H264_MIME_SIZE; > +} > +ret = avio_read_partial(s->pb, data, size); > +if (ret < 0 || start >= ret) { > +av_packet_unref(pkt); > +return ret; > +} You need to document more what you are doing here. And even more importantly why you are using avio_read_partial. And why you allocate both a packet and a separate "data" with the same size. 
And why not use av_get_packet. > +if (i >= start && j + 1 > start && j + 1 <= end) { > +memcpy(pkt->data + new_size, data + start, i - start); > +new_size += i - start; > +memcpy(pkt->data + new_size, data + j + 2, end - j - 1); > +new_size += end - j - 1; > +w = 1; > +break; > +} else if (i < start && j + 1 >= start && j + 1 < end) { > +memcpy(pkt->data + new_size, data + j + 2, end - j -1); > +new_size += end - j - 1; > +w = 1; > +break; > +} else if (j + 1 > end && i > start && i <= end) { > +memcpy(pkt->data + new_size, data + start, i - start); > +new_size += i - start; > +w = 1; > +break; > +} With some comments I might be able to review without spending a lot of time reverse-engineering this... Best regards, Reimar Döffinger
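To make the review points concrete, here is a hedged sketch of a probe helper along the lines suggested: a static const signature, sizeof instead of a hard-coded 8, memchr to skip non-candidate bytes, and an early return on the first hit since the caller only tests o >= 1. `find_arecont_marker` is a hypothetical name for illustration, not code from the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// The marker being searched for is literally "--fbdr\r\n".
static const uint8_t arecont_sig[] = { '-', '-', 'f', 'b', 'd', 'r', '\r', '\n' };

static int find_arecont_marker(const uint8_t *buf, size_t size)
{
    const uint8_t *p   = buf;
    const uint8_t *end = buf + size;

    while (p + sizeof(arecont_sig) <= end) {
        // Jump straight to the next '-' instead of testing every offset;
        // the length argument keeps any hit within range of a full match.
        p = memchr(p, arecont_sig[0], end - p - (sizeof(arecont_sig) - 1));
        if (!p)
            return 0;
        if (!memcmp(p, arecont_sig, sizeof(arecont_sig)))
            return 1; // first hit is enough for the probe
        p++;
    }
    return 0;
}
```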
Re: [FFmpeg-devel] [FFmpeg-cvslog] avcodec/cuviddec: add capability check for maximum macroblock count
On Wed., 8 May 2019 at 12:08, Ruta Gadkari wrote: > diff --git a/libavcodec/cuviddec.c b/libavcodec/cuviddec.c > index d59d1faf9e..acee78cf2c 100644 > --- a/libavcodec/cuviddec.c > +++ b/libavcodec/cuviddec.c > @@ -805,6 +805,12 @@ static int cuvid_test_capabilities(AVCodecContext *avctx, > return AVERROR(EINVAL); > } > > +if ((probed_width * probed_height) / 256 > caps->nMaxMBCount) { > +av_log(avctx, AV_LOG_ERROR, "Video macroblock count %d exceeds > maximum of %d\n", > + (int)(probed_width * probed_height) / 256, caps->nMaxMBCount); Why is this cast necessary? Carl Eugen
Re: [FFmpeg-devel] [PATCH v2] avformat/ifv: added support for ifv cctv files
On Wed, May 08, 2019 at 03:06:37PM +0530, Swaraj Hota wrote: > On Wed, May 08, 2019 at 12:52:01AM +0200, Reimar Döffinger wrote: > > First, seeking should be handled specially, by resetting the state. > > You should not make the get_packet less efficient because of that. > > That should enable the "remember last position and start from there". > > > > As to the corruption case, well the question is what to do about that, and > > I don't have the answer. > > But if the solution were to e.g. ensure the frame offset is monotonous then > > binary search could be used. > > However there is also the possibility that the format does in fact allow a > > completely arbitrary order of frames in the file, maybe even re-using an > > earlier frame_offset if the same frame appears multiple times. > > In that case this whole offset-based positioning code would simply be > > wrong, and you'd have to store the current index position in your demuxer > > instead of relying on avio_tell. > > Maybe you chose this solution because you did not know that seeking should > > be implemented via special functions? > > By "special functions" do you mean those "read_seek" functions that > are present in many demuxers(Cuz I have not implemented that)? Yes. > If yes then am I mistaken that FFmpeg can also handle seeking > automatically? (Carl suggesting something like that, iirc) It has some functionality, but I think in practice I don't think you can get a proper fully working solution with it. > Keeping the corruption case aside, how do you suggest I implement this? > Is it not required to skip bytes till the next frame boundary in the > read_packet function (as done in the current patch) ? > If not then I guess I can go with a binary search, or better yet > remember the last position. It's a bit more complex (in some aspect, simpler in others), but gxf.c might be an example. 
I will describe what I think would be a correct full implementation, but I will also note that maybe this is asking a bit much.

You would start with changing your index reading function to use av_add_index_entry instead of storing your own index. In read_packet you simply remember your position in that index and use that information to know where to read from the next time. This should work for linear playback.

Once that is done, read_seek would need implementing (it seems read_seek2 is currently only implemented by subtitle formats?). There you would search the index for the right position (av_index_search_timestamp), and set the variables you use in read_packet accordingly so you will read the proper frames next. If the desired position is not in the index, you will also need to read the next index entries from the file (to start with you could do that by calling read_packet, even though it is a bit overkill).

Unfortunately this is all a bit complicated and a lot of things need to be implemented right.

The "quick fix" would be to implement a read_seek function that only returns -1; with that you continue to use the current seeking code, but it would allow you to tell the read_packet function to use the "search index position based on file position" code, whereas for normal playback you just continue reading whatever you stored as "next index position to read".

I'm sure far from everything I wrote is clear, but hopefully it gives you some starting points.

Best regards,
Reimar Döffinger
[FFmpeg-devel] [PATCH 2/3] tools/crypto_bench: check malloc fail before using it
From: Jun Zhao

Check for malloc failure before using the buffer; move the allocation to the appropriate location in the code.

Signed-off-by: Jun Zhao
---
 tools/crypto_bench.c | 8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/tools/crypto_bench.c b/tools/crypto_bench.c
index aca8bbb..ac9fcc4 100644
--- a/tools/crypto_bench.c
+++ b/tools/crypto_bench.c
@@ -665,8 +665,8 @@ struct hash_impl implementations[] = {

 int main(int argc, char **argv)
 {
-    uint8_t *input = av_malloc(MAX_INPUT_SIZE * 2);
-    uint8_t *output = input + MAX_INPUT_SIZE;
+    uint8_t *input;
+    uint8_t *output;
     unsigned i, impl, size;
     int opt;

@@ -702,12 +702,14 @@ int main(int argc, char **argv)
             exit(opt != 'h');
         }
     }
-
+    input = av_malloc(MAX_INPUT_SIZE * 2);
     if (!input)
         fatal_error("out of memory");
     for (i = 0; i < MAX_INPUT_SIZE; i += 4)
         AV_WB32(input + i, i);
+    output = input + MAX_INPUT_SIZE;
+
     size = MAX_INPUT_SIZE;
     for (impl = 0; impl < FF_ARRAY_ELEMS(implementations); impl++)
         run_implementation(input, output, &implementations[impl], size);
-- 
1.7.1
[FFmpeg-devel] [PATCH 3/3] tools/crypto_bench: update the comment about build command
From: Jun Zhao

Commit cd62f9d557f missed updating the comment about the build command.

Signed-off-by: Jun Zhao
---
 tools/crypto_bench.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/tools/crypto_bench.c b/tools/crypto_bench.c
index ac9fcc4..0aff4ea 100644
--- a/tools/crypto_bench.c
+++ b/tools/crypto_bench.c
@@ -19,7 +19,7 @@
  */

 /* Optional external libraries; can be enabled using:
- * make VERSUS=crypto+gcrypt+tomcrypt tools/crypto_bench */
+ * make VERSUS=crypto+gcrypt+tomcrypt+mbedcrypto tools/crypto_bench */

 #define USE_crypto    0x01    /* OpenSSL's libcrypto */
 #define USE_gcrypt    0x02    /* GnuTLS's libgcrypt */
 #define USE_tomcrypt  0x04    /* LibTomCrypt */
-- 
1.7.1
[FFmpeg-devel] [PATCH 1/3] lavf/cover_rect: Fix logic check issue
From: Jun Zhao

Fix logic check issue #6741

Signed-off-by: Jun Zhao
---
 libavfilter/vf_cover_rect.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/libavfilter/vf_cover_rect.c b/libavfilter/vf_cover_rect.c
index f7f6103..41cd1a1 100644
--- a/libavfilter/vf_cover_rect.c
+++ b/libavfilter/vf_cover_rect.c
@@ -152,7 +152,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     }

     if (!xendptr || *xendptr || !yendptr || *yendptr ||
-        !wendptr || *wendptr || !hendptr || !hendptr
+        !wendptr || *wendptr || !hendptr || *hendptr
     ) {
         return ff_filter_frame(ctx->outputs[0], in);
     }
-- 
1.7.1
Re: [FFmpeg-devel] [PATCH] avformat/mpegts: index only keyframes to ensure accurate seeks
On Mon, May 06, 2019 at 08:26:23PM -0700, Aman Gupta wrote:
> From: Aman Gupta
>
> Signed-off-by: Aman Gupta
> ---
>  libavformat/mpegts.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/libavformat/mpegts.c b/libavformat/mpegts.c
> index 8a84e5cc19..49e282903c 100644
> --- a/libavformat/mpegts.c
> +++ b/libavformat/mpegts.c
> @@ -3198,9 +3198,9 @@ static int64_t mpegts_get_dts(AVFormatContext *s, int stream_index,
>          ret = av_read_frame(s, &pkt);
>          if (ret < 0)
>              return AV_NOPTS_VALUE;
> -        if (pkt.dts != AV_NOPTS_VALUE && pkt.pos >= 0) {
> +        if (pkt.dts != AV_NOPTS_VALUE && pkt.pos >= 0 && (pkt.flags & AV_PKT_FLAG_KEY)) {
>              ff_reduce_index(s, pkt.stream_index);
> -            av_add_index_entry(s->streams[pkt.stream_index], pkt.pos, pkt.dts, 0, 0, AVINDEX_KEYFRAME /* FIXME keyframe? */);
> +            av_add_index_entry(s->streams[pkt.stream_index], pkt.pos, pkt.dts, 0, 0, AVINDEX_KEYFRAME);
>              if (pkt.stream_index == stream_index && pkt.pos >= *ppos) {

What happens with streams that have no keyframes but use more dispersed intra refresh?
also breaks fate-seek-lavf-ts:

--- ./tests/ref/seek/lavf-ts	2019-04-22 01:06:36.162990906 +0200
+++ tests/data/fate/seek-lavf-ts	2019-05-08 18:07:40.290602734 +0200
@@ -16,7 +16,7 @@
 ret: 0 st:-1 flags:1 ts:-0.740831
 ret: 0 st: 0 flags:1 dts: 1.40 pts: 1.44 pos:564 size: 24801
 ret: 0 st: 0 flags:0 ts: 2.15
-ret: 0 st: 1 flags:1 dts: 1.794811 pts: 1.794811 pos: 322608 size: 209
+ret: 0 st: 1 flags:1 dts: 2.160522 pts: 2.160522 pos: 404576 size: 209
 ret: 0 st: 0 flags:1 ts: 1.047500
 ret: 0 st: 0 flags:1 dts: 1.40 pts: 1.44 pos:564 size: 24801
 ret: 0 st: 1 flags:0 ts:-0.058333
@@ -24,7 +24,7 @@
 ret: 0 st: 1 flags:1 ts: 2.835833
 ret: 0 st: 1 flags:1 dts: 2.160522 pts: 2.160522 pos: 404576 size: 209
 ret: 0 st:-1 flags:0 ts: 1.730004
-ret: 0 st: 1 flags:1 dts: 1.429089 pts: 1.429089 pos: 159988 size: 208
+ret: 0 st: 0 flags:1 dts: 1.88 pts: 1.92 pos: 189692 size: 24786
 ret: 0 st:-1 flags:1 ts: 0.624171
 ret: 0 st: 0 flags:1 dts: 1.40 pts: 1.44 pos:564 size: 24801
 ret: 0 st: 0 flags:0 ts:-0.481667
@@ -38,7 +38,7 @@
 ret: 0 st:-1 flags:0 ts:-0.904994
 ret: 0 st: 0 flags:1 dts: 1.40 pts: 1.44 pos:564 size: 24801
 ret: 0 st:-1 flags:1 ts: 1.989173
-ret: 0 st: 0 flags:0 dts: 1.96 pts: 2.00 pos: 235000 size: 15019
+ret: 0 st: 0 flags:1 dts: 1.88 pts: 1.92 pos: 189692 size: 24786
 ret: 0 st: 0 flags:0 ts: 0.883344
 ret: 0 st: 0 flags:1 dts: 1.40 pts: 1.44 pos:564 size: 24801
 ret: 0 st: 0 flags:1 ts:-0.222489

Test seek-lavf-ts failed. Look at tests/data/fate/seek-lavf-ts.err for details.
make: *** [fate-seek-lavf-ts] Error 1

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not
approve, and what they approve I do not know. -- Epicurus
Re: [FFmpeg-devel] [PATCH V2 7/7] libavfilter/dnn: add more data type support for dnn model input
On Wed, May 8, 2019 at 05:28, Guo, Yejun wrote:
>
> > -----Original Message-----
> > From: Pedro Arthur [mailto:bygran...@gmail.com]
> > Sent: Tuesday, April 30, 2019 1:47 AM
> > To: FFmpeg development discussions and patches
> > Cc: Guo, Yejun
> > Subject: Re: [FFmpeg-devel] [PATCH V2 7/7] libavfilter/dnn: add more data type
> > support for dnn model input
> > > +    sr_context->input.dt = DNN_FLOAT;
> > >      sr_context->sws_contexts[0] = NULL;
> > >      sr_context->sws_contexts[1] = NULL;
> > >      sr_context->sws_contexts[2] = NULL;
> > > --
> > > 2.7.4
> >
> > LGTM.
> >
> > I think it would be valuable to add a few tests covering the features
> > added by this patch series.
>
> I tried a bit to add FATE for the dnn module, see basic code in the attached file.
> We can only test the native mode because FATE does not allow external dependencies.
>
> The native mode is still in an early stage, and I plan to add support to
> import a TF model as a native model with ffmpeg C code (as discussed in another thread).
> That might be a better time to add FATE, after the import is finished.
> We can add unit tests for all the native ops at that time.
>
> As for this patch series, it mainly focuses on the TF mode, so it might not be
> suitable to add FATE for it.
>
> So, how about pushing this patch set, and adding FATE when the native mode is a
> little more mature? Thanks.

Patch set pushed, sorry for the delay.

Later I'll properly review the unit test patch.
Re: [FFmpeg-devel] [PATCH] libavfilter: Add multiple padding methods in FFmpeg dnn native mode.
> On May 8, 2019, at 5:33 PM, xwm...@pku.edu.cn wrote: > > > > > This patch is for the support of the derain filter project in GSoC. It adds > support for the following operations: > > > > > (1) Conv padding method: "SAME", "VALID" and "SAME_CLAMP_TO_EDGE" > > > > > These operations are all needed in the derain filter. As we discussed before, the > "SAME_CLAMP_TO_EDGE" method is the same as the dnn native padding method in the > current implementation. And the sr model generation code should be changed if > multiple padding method support is added. So I sent a PR > (https://github.com/HighVoltageRocknRoll/sr/pull/4) to the original sr > repo (https://github.com/HighVoltageRocknRoll/sr). I'm not sure Sergey Lavrushkin is still maintaining that repo; maybe Pedro Arthur can ping him. > > > > From c0724bb304a6f4c3ca935cccda5b810e5c4eceb1 Mon Sep 17 00:00:00 2001 > From: Xuewei Meng > Date: Wed, 8 May 2019 17:32:30 +0800 > Subject: [PATCH] Add multiple padding method in dnn native > > Signed-off-by: Xuewei Meng > --- > libavfilter/dnn_backend_native.c | 52 > libavfilter/dnn_backend_native.h | 3 ++ > 2 files changed, 43 insertions(+), 12 deletions(-) > > diff --git a/libavfilter/dnn_backend_native.c > b/libavfilter/dnn_backend_native.c > index 70d857f5f2..b7c0508d91 100644 > --- a/libavfilter/dnn_backend_native.c > +++ b/libavfilter/dnn_backend_native.c > @@ -59,6 +59,12 @@ static DNNReturnType set_input_output_native(void *model, > DNNData *input, DNNDat > return DNN_ERROR; > } > cur_channels = conv_params->output_num; > + > +if(conv_params->padding_method == VALID){ > +int pad_size = conv_params->kernel_size - 1; > +cur_height -= pad_size; > +cur_width -= pad_size; > +} > break; > case DEPTH_TO_SPACE: > depth_to_space_params = (DepthToSpaceParams > *)network->layers[layer].params; > @@ -75,6 +81,10 @@ static DNNReturnType set_input_output_native(void *model, > DNNData *input, DNNDat > if (network->layers[layer].output){ > av_freep(&network->layers[layer].output); > } > + > +if(cur_height <= 0 || cur_width <= 0) > 
+return DNN_ERROR; > + > network->layers[layer].output = av_malloc(cur_height * cur_width * > cur_channels * sizeof(float)); > if (!network->layers[layer].output){ > return DNN_ERROR; > @@ -157,13 +167,14 @@ DNNModel *ff_dnn_load_model_native(const char > *model_filename) > ff_dnn_free_model_native(); > return NULL; > } > +conv_params->padding_method = > (int32_t)avio_rl32(model_file_context); > conv_params->activation = (int32_t)avio_rl32(model_file_context); > conv_params->input_num = (int32_t)avio_rl32(model_file_context); > conv_params->output_num = (int32_t)avio_rl32(model_file_context); > conv_params->kernel_size = (int32_t)avio_rl32(model_file_context); > kernel_size = conv_params->input_num * conv_params->output_num * > conv_params->kernel_size * conv_params->kernel_size; > -dnn_size += 16 + (kernel_size + conv_params->output_num << 2); > +dnn_size += 20 + (kernel_size + conv_params->output_num << 2); > if (dnn_size > file_size || conv_params->input_num <= 0 || > conv_params->output_num <= 0 || conv_params->kernel_size <= > 0){ > avio_closep(_file_context); > @@ -221,23 +232,35 @@ DNNModel *ff_dnn_load_model_native(const char > *model_filename) > > static void convolve(const float *input, float *output, const > ConvolutionalParams *conv_params, int width, int height) > { > -int y, x, n_filter, ch, kernel_y, kernel_x; > int radius = conv_params->kernel_size >> 1; > int src_linesize = width * conv_params->input_num; > int filter_linesize = conv_params->kernel_size * conv_params->input_num; > int filter_size = conv_params->kernel_size * filter_linesize; > +int pad_size = (conv_params->padding_method == VALID) ? 
> (conv_params->kernel_size - 1) / 2 : 0; > > -for (y = 0; y < height; ++y){ > -for (x = 0; x < width; ++x){ > -for (n_filter = 0; n_filter < conv_params->output_num; > ++n_filter){ > +for (int y = pad_size; y < height - pad_size; ++y){ > +for (int x = pad_size; x < width - pad_size; ++x){ > +for (int n_filter = 0; n_filter < conv_params->output_num; > ++n_filter){ > output[n_filter] = conv_params->biases[n_filter]; > -for (ch = 0; ch < conv_params->input_num; ++ch){ > -for (kernel_y = 0; kernel_y < conv_params->kernel_size; > ++kernel_y){ > -for (kernel_x = 0; kernel_x < > conv_params->kernel_size; ++kernel_x){ > -output[n_filter] +=
Re: [FFmpeg-devel] [PATCH 1/1] cuviddec: Add capability check for maximum macroblock count
applied
Re: [FFmpeg-devel] [PATCH] configure: enable ffnvcodec, nvenc, nvdec for ppc64
applied
Re: [FFmpeg-devel] [PATCH v2] avformat/ifv: added support for ifv cctv files
On Wed, May 08, 2019 at 12:52:01AM +0200, Reimar Döffinger wrote:
> On 07.05.2019, at 12:00, Swaraj Hota wrote:
>
> > On Sun, May 05, 2019 at 09:59:01PM +0200, Reimar Döffinger wrote:
> >>
> >>> +    /*read video index*/
> >>> +    avio_seek(s->pb, 0xf8, SEEK_SET);
> >> [...]
> >>> +    avio_skip(s->pb, ifv->vid_index->frame_offset - avio_tell(s->pb));
> >>
> >> Why use avio_seek in one place and avio_skip in the other?
> >
> > No particular reason. Essentially all are just skips. There is no
> > backward seek. I left two seeks because they seemed more readable.
> > Someone could know at a glance at what offset the first video and audio
> > index are assumed/found to be. Should I change them to skips as well?
>
> Not quite sure how things work nowadays, but I'd suggest to use whichever
> gives the most readable code.
> Which would mean using avio_seek in this case.
>
> >>> +    pos = avio_tell(s->pb);
> >>> +
> >>> +    for (i = 0; i < ifv->total_vframes; i++) {
> >>> +        e = &ifv->vid_index[i];
> >>> +        if (e->frame_offset >= pos)
> >>> +            break;
> >>> +    }
> >>
> >> This looks rather inefficient.
> >> Wouldn't it make more sense to either
> >> use a binary search or at least to
> >> remember the position from the last read?
> >> This also does not seem very robust either,
> >> if a single frame_offset gets corrupted
> >> to a very large value, this code will
> >> never be able to find the "correct" position.
> >> It seems to assume the frame_offset
> >> is ordered increasingly (as would be needed for
> >> binary search), but that property isn't
> >> really checked/enforced.
> >
> > Yeah it is indeed inefficient. But it also seems like the "correct" one.
> > Because in case of seeking we might not be at the boundary of a frame
> > and hence might need to skip to the boundary of the next frame we can find.
> > I guess this rules out binary search, and maybe also saving the last
> > read.
> >
> > Regarding the frame_offset corruption, well that rules out binary search
> > as well because then the order of the index will be disturbed.
> >
> > Or maybe I misunderstood? Please do mention if this can be done more
> > efficiently by some method. I really need some ideas on this if it can
> > be done.
>
> First, seeking should be handled specially, by resetting the state.
> You should not make the get_packet less efficient because of that.
> That should enable the "remember last position and start from there".
>
> As to the corruption case, well the question is what to do about that, and I
> don't have the answer.
> But if the solution were to e.g. ensure the frame offset is monotonous then
> binary search could be used.
> However there is also the possibility that the format does in fact allow a
> completely arbitrary order of frames in the file, maybe even re-using an
> earlier frame_offset if the same frame appears multiple times.
> In that case this whole offset-based positioning code would simply be wrong,
> and you'd have to store the current index position in your demuxer instead of
> relying on avio_tell.
> Maybe you chose this solution because you did not know that seeking should be
> implemented via special functions?

By "special functions" do you mean those "read_seek" functions that
are present in many demuxers (Cuz I have not implemented that)?
If yes then am I mistaken that FFmpeg can also handle seeking
automatically? (Carl suggesting something like that, iirc)

Keeping the corruption case aside, how do you suggest I implement this?
Is it not required to skip bytes till the next frame boundary in the
read_packet function (as done in the current patch)?
If not then I guess I can go with a binary search, or better yet
remember the last position.

Thank you.
[FFmpeg-devel] [PATCH] libavfilter: Add multiple padding methods in FFmpeg dnn native mode.
This patch is for the support of the derain filter project in GSoC. It adds support for the following operations:

(1) Conv padding method: "SAME", "VALID" and "SAME_CLAMP_TO_EDGE"

These operations are all needed in the derain filter. As we discussed before, the "SAME_CLAMP_TO_EDGE" method is the same as the dnn native padding method in the current implementation. And the sr model generation code should be changed if multiple padding method support is added. So I sent a PR (https://github.com/HighVoltageRocknRoll/sr/pull/4) to the original sr repo (https://github.com/HighVoltageRocknRoll/sr).

From c0724bb304a6f4c3ca935cccda5b810e5c4eceb1 Mon Sep 17 00:00:00 2001
From: Xuewei Meng
Date: Wed, 8 May 2019 17:32:30 +0800
Subject: [PATCH] Add multiple padding method in dnn native

Signed-off-by: Xuewei Meng
---
 libavfilter/dnn_backend_native.c | 52
 libavfilter/dnn_backend_native.h |  3 ++
 2 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/libavfilter/dnn_backend_native.c b/libavfilter/dnn_backend_native.c
index 70d857f5f2..b7c0508d91 100644
--- a/libavfilter/dnn_backend_native.c
+++ b/libavfilter/dnn_backend_native.c
@@ -59,6 +59,12 @@ static DNNReturnType set_input_output_native(void *model, DNNData *input, DNNDat
             return DNN_ERROR;
         }
         cur_channels = conv_params->output_num;
+
+        if(conv_params->padding_method == VALID){
+            int pad_size = conv_params->kernel_size - 1;
+            cur_height -= pad_size;
+            cur_width -= pad_size;
+        }
         break;
     case DEPTH_TO_SPACE:
         depth_to_space_params = (DepthToSpaceParams *)network->layers[layer].params;
@@ -75,6 +81,10 @@ static DNNReturnType set_input_output_native(void *model, DNNData *input, DNNDat
         if (network->layers[layer].output){
             av_freep(&network->layers[layer].output);
         }
+
+        if(cur_height <= 0 || cur_width <= 0)
+            return DNN_ERROR;
+
         network->layers[layer].output = av_malloc(cur_height * cur_width * cur_channels * sizeof(float));
         if (!network->layers[layer].output){
             return DNN_ERROR;
@@ -157,13 +167,14 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename)
             ff_dnn_free_model_native(&model);
             return NULL;
         }
+        conv_params->padding_method = (int32_t)avio_rl32(model_file_context);
         conv_params->activation = (int32_t)avio_rl32(model_file_context);
         conv_params->input_num = (int32_t)avio_rl32(model_file_context);
         conv_params->output_num = (int32_t)avio_rl32(model_file_context);
         conv_params->kernel_size = (int32_t)avio_rl32(model_file_context);
         kernel_size = conv_params->input_num * conv_params->output_num * conv_params->kernel_size * conv_params->kernel_size;
-        dnn_size += 16 + (kernel_size + conv_params->output_num << 2);
+        dnn_size += 20 + (kernel_size + conv_params->output_num << 2);
         if (dnn_size > file_size || conv_params->input_num <= 0 ||
             conv_params->output_num <= 0 || conv_params->kernel_size <= 0){
             avio_closep(&model_file_context);
@@ -221,23 +232,35 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename)

 static void convolve(const float *input, float *output, const ConvolutionalParams *conv_params, int width, int height)
 {
-    int y, x, n_filter, ch, kernel_y, kernel_x;
     int radius = conv_params->kernel_size >> 1;
     int src_linesize = width * conv_params->input_num;
     int filter_linesize = conv_params->kernel_size * conv_params->input_num;
     int filter_size = conv_params->kernel_size * filter_linesize;
+    int pad_size = (conv_params->padding_method == VALID) ? (conv_params->kernel_size - 1) / 2 : 0;

-    for (y = 0; y < height; ++y){
-        for (x = 0; x < width; ++x){
-            for (n_filter = 0; n_filter < conv_params->output_num; ++n_filter){
+    for (int y = pad_size; y < height - pad_size; ++y){
+        for (int x = pad_size; x < width - pad_size; ++x){
+            for (int n_filter = 0; n_filter < conv_params->output_num; ++n_filter){
                 output[n_filter] = conv_params->biases[n_filter];
-                for (ch = 0; ch < conv_params->input_num; ++ch){
-                    for (kernel_y = 0; kernel_y < conv_params->kernel_size; ++kernel_y){
-                        for (kernel_x = 0; kernel_x < conv_params->kernel_size; ++kernel_x){
-                            output[n_filter] += input[CLAMP_TO_EDGE(y + kernel_y - radius, height) * src_linesize +
-                                                      CLAMP_TO_EDGE(x + kernel_x - radius, width) * conv_params->input_num + ch] *
-                                                conv_params->kernel[n_filter * filter_size + kernel_y * filter_linesize +
-
Re: [FFmpeg-devel] [PATCH V2 7/7] libavfilter/dnn: add more data type support for dnn model input
> -----Original Message-----
> From: Pedro Arthur [mailto:bygran...@gmail.com]
> Sent: Tuesday, April 30, 2019 1:47 AM
> To: FFmpeg development discussions and patches
> Cc: Guo, Yejun
> Subject: Re: [FFmpeg-devel] [PATCH V2 7/7] libavfilter/dnn: add more data type
> support for dnn model input
> > +    sr_context->input.dt = DNN_FLOAT;
> >      sr_context->sws_contexts[0] = NULL;
> >      sr_context->sws_contexts[1] = NULL;
> >      sr_context->sws_contexts[2] = NULL;
> > --
> > 2.7.4
>
> LGTM.
>
> I think it would be valuable to add a few tests covering the features
> added by this patch series.

I tried a bit to add FATE for the dnn module, see basic code in the attached file.
We can only test the native mode because FATE does not allow external dependencies.

The native mode is still in an early stage, and I plan to add support to
import a TF model as a native model with ffmpeg C code (as discussed in another thread).
That might be a better time to add FATE, after the import is finished.
We can add unit tests for all the native ops at that time.

As for this patch series, it mainly focuses on the TF mode, so it might not be
suitable to add FATE for it.

So, how about pushing this patch set, and adding FATE when the native mode is a
little more mature? Thanks.
Re: [FFmpeg-devel] [PATCH v3] lavf/h264: add support for h264 video from Arecont camera, fixes ticket #5154
On 07-05-2019 10:05, Shivam Goyal wrote:
> The patch is for ticket #5154.
>
> I have improved the patch as suggested.
>
> Please review.
>
> Thank you,
> Shivam Goyal

Ping, please review.

Thank you
Shivam Goyal
Re: [FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: Add stream metadata from first frame's metadata
On Wed, May 8, 2019 at 12:09 AM Gyan wrote:
>
> On 08-05-2019 12:25 PM, Jun Li wrote:
> > On Tue, May 7, 2019 at 11:40 PM Gyan wrote:
> >
> >> Also, is there a chance that there may be multiple sources for
> >> orientation data available for a given stream? If yes, what's the
> >> interaction? It looks like you append a new SD element.
> >
> > Thanks Gyan for review !
> > Nicolas George asked the same question before. :)
> >
> > Yes, this patch can't handle the case where every frame has its own orientation.
> > From a technical perspective, it is absolutely possible, for example a
> > motion jpeg stream with a different orientation value on every frame.
> > I think an ideal solution for this case is a filter doing a
> > transformation per orientation every frame.
>
> I'm not referring to dynamic per-frame orientation.
>
> I'm wondering about a scenario where the container has orientation
> metadata and so does the packet payload (which can only be accessed by
> the decoder). Is there any possibility of that happening? What if they
> are different?
>
> Gyan

Good point! I do not know the answer. Let's keep it open for a while to see
if anyone in the community has the answer. If no one replies to the question,
I am going to add some logic to skip the sd whenever the side_data is already
set by the demuxer at the container level.

Thanks Gyan !

Best Regards,
Jun
Re: [FFmpeg-devel] [PATCH] avfilter/af_atempo: Make ffplay display correct timestamps when seeking
On 5/8/19, Pavel Koshevoy wrote: > NOTE: this is a refinement of the patch from Paul B Mahol > offset all output timestamps by same amount of first input timestamp > --- > libavfilter/af_atempo.c | 11 ++- > 1 file changed, 10 insertions(+), 1 deletion(-) > > diff --git a/libavfilter/af_atempo.c b/libavfilter/af_atempo.c > index bfdad7d76b..688dac5464 100644 > --- a/libavfilter/af_atempo.c > +++ b/libavfilter/af_atempo.c > @@ -103,6 +103,9 @@ typedef struct ATempoContext { > // 1: output sample position > int64_t position[2]; > > +// first input timestamp, all other timestamps are offset by this one > +int64_t start_pts; > + > // sample format: > enum AVSampleFormat format; > > @@ -186,6 +189,7 @@ static void yae_clear(ATempoContext *atempo) > > atempo->nfrag = 0; > atempo->state = YAE_LOAD_FRAGMENT; > +atempo->start_pts = AV_NOPTS_VALUE; > > atempo->position[0] = 0; > atempo->position[1] = 0; > @@ -1068,7 +1072,7 @@ static int push_samples(ATempoContext *atempo, > atempo->dst_buffer->nb_samples = n_out; > > // adjust the PTS: > -atempo->dst_buffer->pts = > +atempo->dst_buffer->pts = atempo->start_pts + > av_rescale_q(atempo->nsamples_out, > (AVRational){ 1, outlink->sample_rate }, > outlink->time_base); > @@ -1097,6 +1101,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame > *src_buffer) > const uint8_t *src = src_buffer->data[0]; > const uint8_t *src_end = src + n_in * atempo->stride; > > +if (atempo->start_pts == AV_NOPTS_VALUE) > +atempo->start_pts = av_rescale_q(src_buffer->pts, > + inlink->time_base, > + outlink->time_base); > + > while (src < src_end) { > if (!atempo->dst_buffer) { > atempo->dst_buffer = ff_get_audio_buffer(outlink, n_out); > -- > 2.16.4 > > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". Should be fine. 
Re: [FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: Add stream metadata from first frame's metadata
On 08-05-2019 12:25 PM, Jun Li wrote:
> On Tue, May 7, 2019 at 11:40 PM Gyan wrote:
>
>> Also, is there a chance that there may be multiple sources for
>> orientation data available for a given stream? If yes, what's the
>> interaction? It looks like you append a new SD element.
>
> Thanks Gyan for review !
> Nicolas George asked the same question before. :)
>
> Yes, this patch can't handle the case every frame has its own orientation.
> From a technical perspective, it is absolutely possible, for example a
> motion jpeg stream with different orientation value on every frame.
> I think an ideal solution for this case is a filter doing
> transformation per orientation every frame.

I'm not referring to dynamic per-frame orientation.

I'm wondering about a scenario where the container has orientation metadata
and so does the packet payload (which can only be accessed by the decoder).
Is there any possibility of that happening? What if they are different?

Gyan
Re: [FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: Add stream metadata from first frame's metadata
On Tue, May 7, 2019 at 11:40 PM Gyan wrote: > > > On 08-05-2019 11:54 AM, Jun Li wrote: > > Fix #6945 > > Exif extension has 'Orientation' field for image flip and rotation. > > This change is to add the first frame's exif into stream so that > > autorotation would use the info to adjust the frames. > > Suggest commit msg should be > > " > > 'Orientation' field from EXIF tags in first decoded frame is extracted > as stream side data so that ffmpeg can apply auto-rotation. > > " > Sure, will address that in next iteration. > > > --- > > fftools/ffmpeg.c | 57 +++- > > 1 file changed, 56 insertions(+), 1 deletion(-) > > > > diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c > > index 01f04103cf..98ccaf0732 100644 > > --- a/fftools/ffmpeg.c > > +++ b/fftools/ffmpeg.c > > @@ -2341,6 +2341,58 @@ static int decode_audio(InputStream *ist, > AVPacket *pkt, int *got_output, > > return err < 0 ? err : ret; > > } > > > > +static int set_metadata_from_1stframe(InputStream* ist, AVFrame* frame) > > +{ > > +// read exif Orientation data > > +AVDictionaryEntry *entry = av_dict_get(frame->metadata, > "Orientation", NULL, 0); > > +int orientation = 0; > > +int32_t* sd = NULL; > > +if (entry) { > > +orientation = atoi(entry->value); > > +sd = (int32_t*)av_stream_new_side_data(ist->st, > AV_PKT_DATA_DISPLAYMATRIX, 4 * 9); > > +if (!sd) > > +return AVERROR(ENOMEM); > > +memset(sd, 0, 4 * 9); > > +switch (orientation) > > +{ > > +case 1: // horizontal (normal) > > +av_display_rotation_set(sd, 0.0); > > +break; > > +case 2: // mirror horizontal > > +av_display_rotation_set(sd, 0.0); > > +av_display_matrix_flip(sd, 1, 0); > > +break; > > +case 3: // rotate 180 > > +av_display_rotation_set(sd, 180.0); > > +break; > > +case 4: // mirror vertical > > +av_display_rotation_set(sd, 0.0); > > +av_display_matrix_flip(sd, 0, 1); > > +break; > > +case 5: // mirror horizontal and rotate 270 CW > > +av_display_rotation_set(sd, 270.0); > > +av_display_matrix_flip(sd, 0, 1); > > +break; > > +case 6: //
rotate 90 CW > > +av_display_rotation_set(sd, 90.0); > > +break; > > +case 7: // mirror horizontal and rotate 90 CW > > +av_display_rotation_set(sd, 90.0); > > +av_display_matrix_flip(sd, 0, 1); > > +break; > > +case 8: // rotate 270 CW > > +av_display_rotation_set(sd, 270.0); > > +break; > > +default: > > +av_display_rotation_set(sd, 0.0); > > +av_log(ist->dec_ctx, AV_LOG_WARNING, > > +"Exif orientation data out of range: %i. Set to > default value: horizontal(normal).\n", orientation); > > +break; > > +} > > +} > > +return 0; > > +} > > + > > static int decode_video(InputStream *ist, AVPacket *pkt, int > *got_output, int64_t *duration_pts, int eof, > > int *decode_failed) > > { > > @@ -2423,7 +2475,10 @@ static int decode_video(InputStream *ist, > AVPacket *pkt, int *got_output, int64_ > > decoded_frame->top_field_first = ist->top_field_first; > > > > ist->frames_decoded++; > > - > > +if (ist->frames_decoded == 1 && > > +((err = set_metadata_from_1stframe(ist, decoded_frame)) < 0)) > > +goto fail; > > + > > if (ist->hwaccel_retrieve_data && decoded_frame->format == > ist->hwaccel_pix_fmt) { > > err = ist->hwaccel_retrieve_data(ist->dec_ctx, decoded_frame); > > if (err < 0) > > Also, is there a chance that there may be multiple sources for > orientation data available for a given stream? If yes, what's the > interaction? It looks like you append a new SD element. > Thanks Gyan for the review! Nicolas George asked the same question before. :) Yes, this patch can't handle the case where every frame has its own orientation. From a technical perspective, that is absolutely possible, for example a motion JPEG stream with a different orientation value on every frame. I think an ideal solution for that case is a filter that applies the transformation per frame, following each frame's orientation. But based on my limited experience, this kind of content is rare. What do you think? Has anyone in the community seen this kind of content before?
Gyan > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: Add stream metadata from first frame's metadata
On 08-05-2019 11:54 AM, Jun Li wrote: Fix #6945 Exif extension has 'Orientation' field for image flip and rotation. This change is to add the first frame's exif into stream so that autorotation would use the info to adjust the frames. Suggest commit msg should be " 'Orientation' field from EXIF tags in first decoded frame is extracted as stream side data so that ffmpeg can apply auto-rotation. " --- fftools/ffmpeg.c | 57 +++- 1 file changed, 56 insertions(+), 1 deletion(-) diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c index 01f04103cf..98ccaf0732 100644 --- a/fftools/ffmpeg.c +++ b/fftools/ffmpeg.c @@ -2341,6 +2341,58 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output, return err < 0 ? err : ret; } +static int set_metadata_from_1stframe(InputStream* ist, AVFrame* frame) +{ +// read exif Orientation data +AVDictionaryEntry *entry = av_dict_get(frame->metadata, "Orientation", NULL, 0); +int orientation = 0; +int32_t* sd = NULL; +if (entry) { +orientation = atoi(entry->value); +sd = (int32_t*)av_stream_new_side_data(ist->st, AV_PKT_DATA_DISPLAYMATRIX, 4 * 9); +if (!sd) +return AVERROR(ENOMEM); +memset(sd, 0, 4 * 9); +switch (orientation) +{ +case 1: // horizontal (normal) +av_display_rotation_set(sd, 0.0); +break; +case 2: // mirror horizontal +av_display_rotation_set(sd, 0.0); +av_display_matrix_flip(sd, 1, 0); +break; +case 3: // rotate 180 +av_display_rotation_set(sd, 180.0); +break; +case 4: // mirror vertical +av_display_rotation_set(sd, 0.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 5: // mirror horizontal and rotate 270 CW +av_display_rotation_set(sd, 270.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 6: // rotate 90 CW +av_display_rotation_set(sd, 90.0); +break; +case 7: // mirror horizontal and rotate 90 CW +av_display_rotation_set(sd, 90.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 8: // rotate 270 CW +av_display_rotation_set(sd, 270.0); +break; +default: +av_display_rotation_set(sd, 0.0); 
+av_log(ist->dec_ctx, AV_LOG_WARNING, +"Exif orientation data out of range: %i. Set to default value: horizontal(normal).\n", orientation); +break; +} +} +return 0; +} + static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_t *duration_pts, int eof, int *decode_failed) { @@ -2423,7 +2475,10 @@ static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_ decoded_frame->top_field_first = ist->top_field_first; ist->frames_decoded++; - +if (ist->frames_decoded == 1 && +((err = set_metadata_from_1stframe(ist, decoded_frame)) < 0)) +goto fail; + if (ist->hwaccel_retrieve_data && decoded_frame->format == ist->hwaccel_pix_fmt) { err = ist->hwaccel_retrieve_data(ist->dec_ctx, decoded_frame); if (err < 0) Also, is there a chance that there may be multiple sources for orientation data available for a given stream? If yes, what's the interaction? It looks like you append a new SD element. Gyan ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v3 1/2] fftools/ffmpeg_filter, ffplay: Add flip support to rotation
On Tue, May 7, 2019 at 2:04 AM Moritz Barsnick wrote: > On Mon, May 06, 2019 at 22:36:41 -0700, Jun Li wrote: > > +double av_display_rotation_hflip_get(const int32_t matrix[9], int > *hflip) > > +{ > > +int32_t m[9]; > > +*hflip = 0; > > +memcpy(m, matrix, sizeof(int32_t) * 9); > > You were asked to avoid this (at another code location though). > BTW, sizeof(m) would be a better use of sizeof() here. (Or a constant > for the triple but related use of "9".) > > > +return av_display_rotation_get(m); > > +} > > \ No newline at end of file > > Cosmetic, but please add a line feed in the last line of the file. > > Moritz > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". Thanks Moritz for review. To address the two issues of line feed and "sizeof(m)", I updated the iteration here: https://patchwork.ffmpeg.org/patch/13029/ ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v4 1/2] fftools/ffmpeg_filter, ffplay: Add flip support to rotation
Current implementation for autorotation does not support flip. That is, if the matrix contains flip info, the API get_rotation only reflects partial information. This change is for adding support for hflip (vflip can be achieved by rotation+hflip). --- fftools/cmdutils.c| 4 ++-- fftools/cmdutils.h| 2 +- fftools/ffmpeg_filter.c | 31 ++- fftools/ffplay.c | 22 ++ libavutil/display.c | 14 ++ libavutil/display.h | 14 ++ libavutil/tests/display.c | 8 tests/ref/fate/display| 4 8 files changed, 87 insertions(+), 12 deletions(-) diff --git a/fftools/cmdutils.c b/fftools/cmdutils.c index 9cfbc45c2b..1235a3dd7b 100644 --- a/fftools/cmdutils.c +++ b/fftools/cmdutils.c @@ -2172,13 +2172,13 @@ void *grow_array(void *array, int elem_size, int *size, int new_size) return array; } -double get_rotation(AVStream *st) +double get_rotation_hflip(AVStream *st, int* hflip) { uint8_t* displaymatrix = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL); double theta = 0; if (displaymatrix) -theta = -av_display_rotation_get((int32_t*) displaymatrix); +theta = -av_display_rotation_hflip_get((int32_t*) displaymatrix, hflip); theta -= 360*floor(theta/360 + 0.9/360); diff --git a/fftools/cmdutils.h b/fftools/cmdutils.h index 6e2e0a2acb..0349d1bea7 100644 --- a/fftools/cmdutils.h +++ b/fftools/cmdutils.h @@ -643,6 +643,6 @@ void *grow_array(void *array, int elem_size, int *size, int new_size); char name[128];\ av_get_channel_layout_string(name, sizeof(name), 0, ch_layout); -double get_rotation(AVStream *st); +double get_rotation_hflip(AVStream *st, int* hflip); #endif /* FFTOOLS_CMDUTILS_H */ diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c index 72838de1e2..b000958015 100644 --- a/fftools/ffmpeg_filter.c +++ b/fftools/ffmpeg_filter.c @@ -807,22 +807,43 @@ static int configure_input_video_filter(FilterGraph *fg, InputFilter *ifilter, last_filter = ifilter->filter; if (ist->autorotate) { -double theta = get_rotation(ist->st); +int hflip = 0; +double theta = 
get_rotation_hflip(ist->st, &hflip); -if (fabs(theta - 90) < 1.0) { +if (fabs(theta) < 1.0) { +if (hflip) +ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); +} else if (fabs(theta - 90) < 1.0) { ret = insert_filter(&last_filter, &pad_idx, "transpose", "clock"); -} else if (fabs(theta - 180) < 1.0) { -ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); if (ret < 0) return ret; -ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL); +if (hflip) +ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); +} else if (fabs(theta - 180) < 1.0) { +if (hflip) { // rotate 180 and hflip equals vflip +ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL); +} else { +ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); +if (ret < 0) +return ret; +ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL); +} } else if (fabs(theta - 270) < 1.0) { ret = insert_filter(&last_filter, &pad_idx, "transpose", "cclock"); +if (ret < 0) +return ret; +if (hflip) +ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); } else if (fabs(theta) > 1.0) { char rotate_buf[64]; snprintf(rotate_buf, sizeof(rotate_buf), "%f*PI/180", theta); ret = insert_filter(&last_filter, &pad_idx, "rotate", rotate_buf); +if (ret < 0) +return ret; +if (hflip) +ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); } + if (ret < 0) return ret; } diff --git a/fftools/ffplay.c b/fftools/ffplay.c index 8f050e16e6..2c77612193 100644 --- a/fftools/ffplay.c +++ b/fftools/ffplay.c @@ -1914,19 +1914,33 @@ static int configure_video_filters(AVFilterGraph *graph, VideoState *is, const c } while (0) if (autorotate) { -double theta = get_rotation(is->video_st); +int hflip; +double theta = get_rotation_hflip(is->video_st, &hflip); -if (fabs(theta - 90) < 1.0) { +if (fabs(theta) < 1.0) { +if (hflip) +INSERT_FILT("hflip", NULL); +} else if (fabs(theta - 90) < 1.0) { INSERT_FILT("transpose", "clock"); +if (hflip) +INSERT_FILT("hflip", NULL); } else if (fabs(theta - 180) < 1.0) { -INSERT_FILT("hflip", NULL); -INSERT_FILT("vflip", NULL); +if (hflip) { // rotate 180 and hflip equals vflip 
+INSERT_FILT("vflip", NULL); +
[FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: Add stream metadata from first frame's metadata
Fix #6945 Exif extension has 'Orientation' field for image flip and rotation. This change is to add the first frame's exif into stream so that autorotation would use the info to adjust the frames. --- fftools/ffmpeg.c | 57 +++- 1 file changed, 56 insertions(+), 1 deletion(-) diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c index 01f04103cf..98ccaf0732 100644 --- a/fftools/ffmpeg.c +++ b/fftools/ffmpeg.c @@ -2341,6 +2341,58 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output, return err < 0 ? err : ret; } +static int set_metadata_from_1stframe(InputStream* ist, AVFrame* frame) +{ +// read exif Orientation data +AVDictionaryEntry *entry = av_dict_get(frame->metadata, "Orientation", NULL, 0); +int orientation = 0; +int32_t* sd = NULL; +if (entry) { +orientation = atoi(entry->value); +sd = (int32_t*)av_stream_new_side_data(ist->st, AV_PKT_DATA_DISPLAYMATRIX, 4 * 9); +if (!sd) +return AVERROR(ENOMEM); +memset(sd, 0, 4 * 9); +switch (orientation) +{ +case 1: // horizontal (normal) +av_display_rotation_set(sd, 0.0); +break; +case 2: // mirror horizontal +av_display_rotation_set(sd, 0.0); +av_display_matrix_flip(sd, 1, 0); +break; +case 3: // rotate 180 +av_display_rotation_set(sd, 180.0); +break; +case 4: // mirror vertical +av_display_rotation_set(sd, 0.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 5: // mirror horizontal and rotate 270 CW +av_display_rotation_set(sd, 270.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 6: // rotate 90 CW +av_display_rotation_set(sd, 90.0); +break; +case 7: // mirror horizontal and rotate 90 CW +av_display_rotation_set(sd, 90.0); +av_display_matrix_flip(sd, 0, 1); +break; +case 8: // rotate 270 CW +av_display_rotation_set(sd, 270.0); +break; +default: +av_display_rotation_set(sd, 0.0); +av_log(ist->dec_ctx, AV_LOG_WARNING, +"Exif orientation data out of range: %i. 
Set to default value: horizontal(normal).\n", orientation); +break; +} +} +return 0; +} + static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_t *duration_pts, int eof, int *decode_failed) { @@ -2423,7 +2475,10 @@ static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_ decoded_frame->top_field_first = ist->top_field_first; ist->frames_decoded++; - +if (ist->frames_decoded == 1 && +((err = set_metadata_from_1stframe(ist, decoded_frame)) < 0)) +goto fail; + if (ist->hwaccel_retrieve_data && decoded_frame->format == ist->hwaccel_pix_fmt) { err = ist->hwaccel_retrieve_data(ist->dec_ctx, decoded_frame); if (err < 0) -- 2.17.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2] avformat/ifv: added support for ifv cctv files
On 08.05.2019, at 08:01, Carl Eugen Hoyos wrote: > On Wed, 8 May 2019 at 00:52, Reimar Döffinger > wrote: >> >> On 07.05.2019, at 12:00, Swaraj Hota wrote: >> >>> On Sun, May 05, 2019 at 09:59:01PM +0200, Reimar Döffinger wrote: > +/*read video index*/ > +avio_seek(s->pb, 0xf8, SEEK_SET); [...] > +avio_skip(s->pb, ifv->vid_index->frame_offset - avio_tell(s->pb)); Why use avio_seek in one place and avio_skip in the other? >>> >>> No particular reason. Essentially all are just skips. There is no >>> backward seek. I left two seeks because they seemed more readable. >>> Someone could know at a glance at what offset the first video and audio >>> index are assumed/found to be. Should I change them to skips as well? >> >> Not quite sure how things work nowadays, but I'd suggest to use whichever >> gives the most readable code. >> Which would mean using avio_seek in this case. > > I originally suggested using avio_skip() instead of avio_seek() to clarify > that no back-seeking is involved. > I realize it may not have been the best approach... Well, it is good advice in principle, but in this case it seems to lead to ugly code (and it was only done in some cases). That kind of thing tends to be hard to know before making the change. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] avutil: Add NV24 and NV42 pixel formats
On Wed, 8 May 2019 at 00:20, Philip Langdale wrote: > > On 2019-05-07 14:43, Carl Eugen Hoyos wrote: > > On Tue, 7 May 2019 at 06:33, Philip Langdale > > wrote: > >> > >> These are the 4:4:4 variants of the semi-planar NV12/NV21 formats. > >> > >> I'm surprised we've not had a reason to add them until now, but > >> they are the format that VDPAU uses when doing interop for 4:4:4 > >> surfaces. > > > > Is there already a (libswscale) patch that actually uses the new > > formats? > > No. I haven't written any swscale code for this yet, although I could, > but there's no specific requirement for it. > > ffmpeg doesn't implement any of the (opengl) interop, but I've got an > mpv patch that does it and it needs the pixfmt to be able to work. How is ffmpeg (or any other application using libav*) handling the vdpau output if there is no conversion available? Sorry if I misunderstand, Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2] avformat/ifv: added support for ifv cctv files
On Wed, 8 May 2019 at 00:52, Reimar Döffinger wrote: > > On 07.05.2019, at 12:00, Swaraj Hota wrote: > > > On Sun, May 05, 2019 at 09:59:01PM +0200, Reimar Döffinger wrote: > >> > >> > >>> +/*read video index*/ > >>> +avio_seek(s->pb, 0xf8, SEEK_SET); > >> [...] > >>> +avio_skip(s->pb, ifv->vid_index->frame_offset - avio_tell(s->pb)); > >> > >> Why use avio_seek in one place and avio_skip in the other? > >> > > > > No particular reason. Essentially all are just skips. There is no > > backward seek. I left two seeks because they seemed more readable. > > Someone could know at a glance at what offset the first video and audio > > index are assumed/found to be. Should I change them to skips as well? > > Not quite sure how things work nowadays, but I'd suggest to use whichever > gives the most readable code. > Which would mean using avio_seek in this case. I originally suggested using avio_skip() instead of avio_seek() to clarify that no back-seeking is involved. I realize it may not have been the best approach... Thank you for your comments, Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".