Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
Hi,

On Mon, Oct 8, 2018 at 23:59, Liu Steven wrote:
>
>> On Aug 15, 2018, at 2:37 AM, Pedro Arthur wrote:
>>
>> Patch pushed.
>
> How should I test it?

If you already performed the training (train_srcnn.sh / train_espcn.sh), you
can generate the model files using the script 'generate_header_and_model.py'
provided in the repo. If not, I'm attaching my generated models.

Then

./ffmpeg -i img -vf sr=model=model_file_name output

or, if you have TF,

./ffmpeg -i img -vf sr=model=model_file_name:dnn_backend=tensorflow output

> bash generate_datasets.sh
> (py3k) [root@onvideo sr]# ls logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100*
> logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.data-0-of-1
> logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.index
> logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.meta
> (py3k) [root@onvideo sr]#
>
> [root@onvideo nvidia]# ./ffmpeg
> ffmpeg version N-91943-g1b98bfb Copyright (c) 2000-2018 the FFmpeg developers
>   built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-28)
>   configuration: --enable-ffnvcodec --enable-libtensorflow --extra-ldflags=-L/data/liuqi/tensorflow/bazel-bin/tensorflow/
>   libavutil      56. 19.101 / 56. 19.101
>   libavcodec     58. 30.100 / 58. 30.100
>   libavformat    58. 18.100 / 58. 18.100
>   libavdevice    58.  4.103 / 58.  4.103
>   libavfilter     7. 31.100 /  7. 31.100
>   libswscale      5.  2.100 /  5.  2.100
>   libswresample   3.  2.100 /  3.  2.100
> Hyper fast Audio and Video encoder
> usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
>
> Use -h to get full help or, even better, run 'man ffmpeg'
> [root@onvideo nvidia]# pwd
> /data/liuqi/ffmpeg/nvidia
> [root@onvideo nvidia]#
>
> BTW, the GitHub link looks like nobody is maintaining it.
> https://github.com/HighVoltageRocknRoll/sr

Is there anything that is not working?
> Thanks

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
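For quick reference, the test workflow Pedro describes above can be sketched as a small shell script. The model and image file names below (srcnn.model, input.png, out.png) are placeholder assumptions, not names from the thread:

```shell
# Sketch of the sr-filter test workflow discussed above.
# File names are placeholders.

# 1) Train (train_srcnn.sh / train_espcn.sh), then export the checkpoint
#    to a model file with the repo's generate_header_and_model.py script.

# 2) Filter argument for the native DNN backend:
SR_NATIVE="sr=model=srcnn.model"
# ./ffmpeg -i input.png -vf "$SR_NATIVE" out.png

# 3) Or, if FFmpeg was configured with --enable-libtensorflow:
SR_TF="sr=model=srcnn.model:dnn_backend=tensorflow"
# ./ffmpeg -i input.png -vf "$SR_TF" out.png

echo "$SR_NATIVE"
echo "$SR_TF"
```

The ffmpeg invocations themselves are left commented out since they need a built ffmpeg binary and a trained model file.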
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
> On Aug 15, 2018, at 2:37 AM, Pedro Arthur wrote:
>
> Patch pushed.

How should I test it?

bash generate_datasets.sh
(py3k) [root@onvideo sr]# ls logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100*
logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.data-0-of-1
logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.index
logdir/srcnn_batch_32_lr_1e-3_decay_adam/train/model_100.ckpt.meta
(py3k) [root@onvideo sr]#

[root@onvideo nvidia]# ./ffmpeg
ffmpeg version N-91943-g1b98bfb Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-28)
  configuration: --enable-ffnvcodec --enable-libtensorflow --extra-ldflags=-L/data/liuqi/tensorflow/bazel-bin/tensorflow/
  libavutil      56. 19.101 / 56. 19.101
  libavcodec     58. 30.100 / 58. 30.100
  libavformat    58. 18.100 / 58. 18.100
  libavdevice    58.  4.103 / 58.  4.103
  libavfilter     7. 31.100 /  7. 31.100
  libswscale      5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

Use -h to get full help or, even better, run 'man ffmpeg'
[root@onvideo nvidia]# pwd
/data/liuqi/ffmpeg/nvidia
[root@onvideo nvidia]#

BTW, the GitHub link looks like nobody is maintaining it.
https://github.com/HighVoltageRocknRoll/sr

Thanks
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
2018-08-15 1:49 GMT+03:00 Marton Balint:
> On Tue, 14 Aug 2018, Pedro Arthur wrote:
>> 2018-08-14 15:45 GMT-03:00 Rostislav Pehlivanov:
>>> On Thu, 2 Aug 2018 at 20:00, Sergey Lavrushkin wrote:
>>>
>>> This patch removes conversions, declared inside the sr filter, and uses
>>> libswscale inside the filter to perform them for only the Y channel of the
>>> input. The sr filter still has uint formats as input, as it does not use
>>> chroma channels in models and these channels are upscaled using libswscale;
>>> float formats for input would cause unnecessary conversions during scaling
>>> for these channels.
>
> [...]
>
>>> You are planning to remove *all* conversion still, right? It's still
>>> unacceptable that there *are* conversions.
>>
>> They are here because it is the most efficient way to do it. The filter
>> works only on the luminance channel, therefore we only apply the
>> conversion to the Y channel, and a bicubic upscale to chrominance.
>> I can't see how one can achieve the same result, without doing useless
>> computations, if not in this way.
>
> Is there a reason why only the luminance channel is scaled this way? Can't
> you also train scaling chroma planes the same way? This way you could
> really eliminate the internal calls to swscale. If the user prefers to
> scale only one channel, he can always split the planes and scale them
> separately (using different filters) and then merge them.

If it is possible, I can then change the sr filter to work only for the Y
channel. Can you give me some examples of how to split the planes, filter
them separately and merge them back?
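One possible answer to Sergey's question, sketched as an ffmpeg filtergraph using the standard extractplanes, scale and mergeplanes filters. The 2x factor, the yuv420p output format, and the file names are assumptions for illustration, not anything prescribed in the thread:

```shell
# Hypothetical graph: split Y/U/V, scale each plane on its own,
# then merge the planes back into a single yuv420p stream.
# mergeplanes=0x001020 maps output plane N to plane 0 of input N.
GRAPH="extractplanes=y+u+v[y][u][v];\
[y]scale=iw*2:ih*2[ys];\
[u]scale=iw*2:ih*2[us];\
[v]scale=iw*2:ih*2[vs];\
[ys][us][vs]mergeplanes=0x001020:yuv420p"
# ./ffmpeg -i input.mp4 -filter_complex "$GRAPH" output.mp4
echo "$GRAPH"
```

In Marton's proposal, the per-plane `scale` stand-ins would be replaced by whichever filter the user prefers for each plane (e.g. `sr` on Y only).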
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
2018-08-14 19:49 GMT-03:00 Marton Balint:
> On Tue, 14 Aug 2018, Pedro Arthur wrote:
>> 2018-08-14 15:45 GMT-03:00 Rostislav Pehlivanov:
>>> On Thu, 2 Aug 2018 at 20:00, Sergey Lavrushkin wrote:
>>>
>>> This patch removes conversions, declared inside the sr filter, and uses
>>> libswscale inside the filter to perform them for only the Y channel of the
>>> input. The sr filter still has uint formats as input, as it does not use
>>> chroma channels in models and these channels are upscaled using libswscale;
>>> float formats for input would cause unnecessary conversions during scaling
>>> for these channels.
>
> [...]
>
>>> You are planning to remove *all* conversion still, right? It's still
>>> unacceptable that there *are* conversions.
>>
>> They are here because it is the most efficient way to do it. The filter
>> works only on the luminance channel, therefore we only apply the
>> conversion to the Y channel, and a bicubic upscale to chrominance.
>> I can't see how one can achieve the same result, without doing useless
>> computations, if not in this way.
>
> Is there a reason why only the luminance channel is scaled this way? Can't
> you also train scaling chroma planes the same way? This way you could
> really eliminate the internal calls to swscale. If the user prefers to
> scale only one channel, he can always split the planes and scale them
> separately (using different filters) and then merge them.

The sr CNN tries to restore high frequencies, therefore it is applied only
to luminance: applying it to chrominance does not improve quality much and
would be much slower. The most efficient way to do it is to convert only
the Y channel to float, apply the CNN to it, convert it back to int, and
just upscale the UV planes with the swscale bicubic filter.

> Thanks,
> Marton
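Pedro's Y-only round-trip corresponds to the GRAY8/GRAYF32 conversion pair in the patch. A small illustrative sketch (everything here is illustration, not the filter's API; GRAY8 and GRAYF32 are real FFmpeg pixel formats) of the pipeline and the linesize arithmetic that follows from GRAYF32 being 4 bytes per sample:

```shell
# Illustrative only: stages of the sr filter's strategy as described above,
# plus the float linesize computed as in the patch, where
# sws_input_linesize = width << 2 (GRAYF32 is 4 bytes per sample).
WIDTH=1920
FLOAT_LINESIZE=$((WIDTH * 4))   # same value as "WIDTH << 2" in the patch
echo "Y:   GRAY8 -> GRAYF32 -> CNN -> GRAYF32 -> GRAY8"
echo "U,V: bicubic upscale via swscale, no float round-trip"
echo "float linesize for width $WIDTH: $FLOAT_LINESIZE"
```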
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
On Tue, 14 Aug 2018, Pedro Arthur wrote:
> 2018-08-14 15:45 GMT-03:00 Rostislav Pehlivanov:
>> On Thu, 2 Aug 2018 at 20:00, Sergey Lavrushkin wrote:
>>
>> This patch removes conversions, declared inside the sr filter, and uses
>> libswscale inside the filter to perform them for only the Y channel of the
>> input. The sr filter still has uint formats as input, as it does not use
>> chroma channels in models and these channels are upscaled using libswscale;
>> float formats for input would cause unnecessary conversions during scaling
>> for these channels.

[...]

>> You are planning to remove *all* conversion still, right? It's still
>> unacceptable that there *are* conversions.
>
> They are here because it is the most efficient way to do it. The filter
> works only on the luminance channel, therefore we only apply the
> conversion to the Y channel, and a bicubic upscale to chrominance.
> I can't see how one can achieve the same result, without doing useless
> computations, if not in this way.

Is there a reason why only the luminance channel is scaled this way? Can't
you also train scaling chroma planes the same way? This way you could
really eliminate the internal calls to swscale. If the user prefers to
scale only one channel, he can always split the planes and scale them
separately (using different filters) and then merge them.

Thanks,
Marton
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
2018-08-14 15:45 GMT-03:00 Rostislav Pehlivanov:
> On Thu, 2 Aug 2018 at 20:00, Sergey Lavrushkin wrote:
>
>> This patch removes conversions, declared inside the sr filter, and uses
>> libswscale inside the filter to perform them for only the Y channel of the
>> input. The sr filter still has uint formats as input, as it does not use
>> chroma channels in models and these channels are upscaled using libswscale;
>> float formats for input would cause unnecessary conversions during scaling
>> for these channels.
>>
>> ---
>>  libavfilter/vf_sr.c | 134 +++-
>>  1 file changed, 48 insertions(+), 86 deletions(-)
>>
>> diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
>> index 944a0e28e7..5ad1baa4c0 100644
>> --- a/libavfilter/vf_sr.c
>> +++ b/libavfilter/vf_sr.c
>> @@ -45,8 +45,8 @@ typedef struct SRContext {
>>      DNNModel *model;
>>      DNNData input, output;
>>      int scale_factor;
>> -    struct SwsContext *sws_context;
>> -    int sws_slice_h;
>> +    struct SwsContext *sws_contexts[3];
>> +    int sws_slice_h, sws_input_linesize, sws_output_linesize;
>>  } SRContext;
>>
>>  #define OFFSET(x) offsetof(SRContext, x)
>> @@ -95,6 +95,10 @@ static av_cold int init(AVFilterContext *context)
>>          return AVERROR(EIO);
>>      }
>>
>> +    sr_context->sws_contexts[0] = NULL;
>> +    sr_context->sws_contexts[1] = NULL;
>> +    sr_context->sws_contexts[2] = NULL;
>> +
>>      return 0;
>>  }
>>
>> @@ -110,6 +114,7 @@ static int query_formats(AVFilterContext *context)
>>          av_log(context, AV_LOG_ERROR, "could not create formats list\n");
>>          return AVERROR(ENOMEM);
>>      }
>> +
>>      return ff_set_common_formats(context, formats_list);
>>  }
>>
>> @@ -140,21 +145,31 @@ static int config_props(AVFilterLink *inlink)
>>      else{
>>          outlink->h = sr_context->output.height;
>>          outlink->w = sr_context->output.width;
>> +        sr_context->sws_contexts[1] = sws_getContext(sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAY8,
>> +                                                     sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAYF32,
>> +                                                     0, NULL, NULL, NULL);
>> +        sr_context->sws_input_linesize = sr_context->input.width << 2;
>> +        sr_context->sws_contexts[2] = sws_getContext(sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAYF32,
>> +                                                     sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAY8,
>> +                                                     0, NULL, NULL, NULL);
>> +        sr_context->sws_output_linesize = sr_context->output.width << 2;
>> +        if (!sr_context->sws_contexts[1] || !sr_context->sws_contexts[2]){
>> +            av_log(context, AV_LOG_ERROR, "could not create SwsContext for conversions\n");
>> +            return AVERROR(ENOMEM);
>> +        }
>>          switch (sr_context->model_type){
>>          case SRCNN:
>> -            sr_context->sws_context = sws_getContext(inlink->w, inlink->h, inlink->format,
>> -                                                     outlink->w, outlink->h, outlink->format, SWS_BICUBIC, NULL, NULL, NULL);
>> -            if (!sr_context->sws_context){
>> -                av_log(context, AV_LOG_ERROR, "could not create SwsContext\n");
>> +            sr_context->sws_contexts[0] = sws_getContext(inlink->w, inlink->h, inlink->format,
>> +                                                         outlink->w, outlink->h, outlink->format,
>> +                                                         SWS_BICUBIC, NULL, NULL, NULL);
>> +            if (!sr_context->sws_contexts[0]){
>> +                av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling\n");
>>                  return AVERROR(ENOMEM);
>>              }
>>              sr_context->sws_slice_h = inlink->h;
>>              break;
>>          case ESPCN:
>> -            if (inlink->format == AV_PIX_FMT_GRAY8){
>> -                sr_context->sws_context = NULL;
>> -            }
>> -            else{
>> +            if (inlink->format != AV_PIX_FMT_GRAY8){
>>                  sws_src_h = sr_context->input.height;
>>                  sws_src_w = sr_context->input.width;
>>                  sws_dst_h = sr_context->output.height;
>> @@ -184,13 +199,14 @@ static int config_props(AVFilterLink *inlink)
>>                      sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
>>                      break;
>>                  default:
>> -                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for input pixel format");
>> +                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling for given input pixel format");
>>                      return AVERROR(EIO);
>>                  }
>> -                sr_context->sws_context = sws_getContext(sws_src_w, sws_src_h, AV_PIX_FMT_GRAY8,
>> -
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
On Thu, 2 Aug 2018 at 20:00, Sergey Lavrushkin wrote:

> This patch removes conversions, declared inside the sr filter, and uses
> libswscale inside the filter to perform them for only the Y channel of the
> input. The sr filter still has uint formats as input, as it does not use
> chroma channels in models and these channels are upscaled using libswscale;
> float formats for input would cause unnecessary conversions during scaling
> for these channels.
>
> ---
>  libavfilter/vf_sr.c | 134 +++-
>  1 file changed, 48 insertions(+), 86 deletions(-)
>
> diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
> index 944a0e28e7..5ad1baa4c0 100644
> --- a/libavfilter/vf_sr.c
> +++ b/libavfilter/vf_sr.c
> @@ -45,8 +45,8 @@ typedef struct SRContext {
>      DNNModel *model;
>      DNNData input, output;
>      int scale_factor;
> -    struct SwsContext *sws_context;
> -    int sws_slice_h;
> +    struct SwsContext *sws_contexts[3];
> +    int sws_slice_h, sws_input_linesize, sws_output_linesize;
>  } SRContext;
>
>  #define OFFSET(x) offsetof(SRContext, x)
> @@ -95,6 +95,10 @@ static av_cold int init(AVFilterContext *context)
>          return AVERROR(EIO);
>      }
>
> +    sr_context->sws_contexts[0] = NULL;
> +    sr_context->sws_contexts[1] = NULL;
> +    sr_context->sws_contexts[2] = NULL;
> +
>      return 0;
>  }
>
> @@ -110,6 +114,7 @@ static int query_formats(AVFilterContext *context)
>          av_log(context, AV_LOG_ERROR, "could not create formats list\n");
>          return AVERROR(ENOMEM);
>      }
> +
>      return ff_set_common_formats(context, formats_list);
>  }
>
> @@ -140,21 +145,31 @@ static int config_props(AVFilterLink *inlink)
>      else{
>          outlink->h = sr_context->output.height;
>          outlink->w = sr_context->output.width;
> +        sr_context->sws_contexts[1] = sws_getContext(sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAY8,
> +                                                     sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAYF32,
> +                                                     0, NULL, NULL, NULL);
> +        sr_context->sws_input_linesize = sr_context->input.width << 2;
> +        sr_context->sws_contexts[2] = sws_getContext(sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAYF32,
> +                                                     sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAY8,
> +                                                     0, NULL, NULL, NULL);
> +        sr_context->sws_output_linesize = sr_context->output.width << 2;
> +        if (!sr_context->sws_contexts[1] || !sr_context->sws_contexts[2]){
> +            av_log(context, AV_LOG_ERROR, "could not create SwsContext for conversions\n");
> +            return AVERROR(ENOMEM);
> +        }
>          switch (sr_context->model_type){
>          case SRCNN:
> -            sr_context->sws_context = sws_getContext(inlink->w, inlink->h, inlink->format,
> -                                                     outlink->w, outlink->h, outlink->format, SWS_BICUBIC, NULL, NULL, NULL);
> -            if (!sr_context->sws_context){
> -                av_log(context, AV_LOG_ERROR, "could not create SwsContext\n");
> +            sr_context->sws_contexts[0] = sws_getContext(inlink->w, inlink->h, inlink->format,
> +                                                         outlink->w, outlink->h, outlink->format,
> +                                                         SWS_BICUBIC, NULL, NULL, NULL);
> +            if (!sr_context->sws_contexts[0]){
> +                av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling\n");
>                  return AVERROR(ENOMEM);
>              }
>              sr_context->sws_slice_h = inlink->h;
>              break;
>          case ESPCN:
> -            if (inlink->format == AV_PIX_FMT_GRAY8){
> -                sr_context->sws_context = NULL;
> -            }
> -            else{
> +            if (inlink->format != AV_PIX_FMT_GRAY8){
>                  sws_src_h = sr_context->input.height;
>                  sws_src_w = sr_context->input.width;
>                  sws_dst_h = sr_context->output.height;
> @@ -184,13 +199,14 @@ static int config_props(AVFilterLink *inlink)
>                      sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
>                      break;
>                  default:
> -                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for input pixel format");
> +                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling for given input pixel format");
>                      return AVERROR(EIO);
>                  }
> -                sr_context->sws_context = sws_getContext(sws_src_w, sws_src_h, AV_PIX_FMT_GRAY8,
> -                                                         sws_dst_w, sws_dst_h, AV_PIX_FMT_GRAY8, SWS_BICUBIC, NULL, NULL, NULL);
> -                if (!sr_context->sws_context){
> -
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
Patch pushed.
Re: [FFmpeg-devel] [PATCH 6/7] libavfilter/vf_sr.c: Removes uint8 -> float and float -> uint8 conversions.
Updated patch.

2018-08-02 21:52 GMT+03:00 Sergey Lavrushkin:

> This patch removes conversions, declared inside the sr filter, and uses
> libswscale inside the filter to perform them for only the Y channel of the
> input. The sr filter still has uint formats as input, as it does not use
> chroma channels in models and these channels are upscaled using libswscale;
> float formats for input would cause unnecessary conversions during scaling
> for these channels.
>
> ---
>  libavfilter/vf_sr.c | 134 +++-----
>  1 file changed, 48 insertions(+), 86 deletions(-)
>
> diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
> index 944a0e28e7..5ad1baa4c0 100644
> --- a/libavfilter/vf_sr.c
> +++ b/libavfilter/vf_sr.c
> @@ -45,8 +45,8 @@ typedef struct SRContext {
>      DNNModel *model;
>      DNNData input, output;
>      int scale_factor;
> -    struct SwsContext *sws_context;
> -    int sws_slice_h;
> +    struct SwsContext *sws_contexts[3];
> +    int sws_slice_h, sws_input_linesize, sws_output_linesize;
>  } SRContext;
>
>  #define OFFSET(x) offsetof(SRContext, x)
> @@ -95,6 +95,10 @@ static av_cold int init(AVFilterContext *context)
>          return AVERROR(EIO);
>      }
>
> +    sr_context->sws_contexts[0] = NULL;
> +    sr_context->sws_contexts[1] = NULL;
> +    sr_context->sws_contexts[2] = NULL;
> +
>      return 0;
>  }
>
> @@ -110,6 +114,7 @@ static int query_formats(AVFilterContext *context)
>          av_log(context, AV_LOG_ERROR, "could not create formats list\n");
>          return AVERROR(ENOMEM);
>      }
> +
>      return ff_set_common_formats(context, formats_list);
>  }
>
> @@ -140,21 +145,31 @@ static int config_props(AVFilterLink *inlink)
>      else{
>          outlink->h = sr_context->output.height;
>          outlink->w = sr_context->output.width;
> +        sr_context->sws_contexts[1] = sws_getContext(sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAY8,
> +                                                     sr_context->input.width, sr_context->input.height, AV_PIX_FMT_GRAYF32,
> +                                                     0, NULL, NULL, NULL);
> +        sr_context->sws_input_linesize = sr_context->input.width << 2;
> +        sr_context->sws_contexts[2] = sws_getContext(sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAYF32,
> +                                                     sr_context->output.width, sr_context->output.height, AV_PIX_FMT_GRAY8,
> +                                                     0, NULL, NULL, NULL);
> +        sr_context->sws_output_linesize = sr_context->output.width << 2;
> +        if (!sr_context->sws_contexts[1] || !sr_context->sws_contexts[2]){
> +            av_log(context, AV_LOG_ERROR, "could not create SwsContext for conversions\n");
> +            return AVERROR(ENOMEM);
> +        }
>          switch (sr_context->model_type){
>          case SRCNN:
> -            sr_context->sws_context = sws_getContext(inlink->w, inlink->h, inlink->format,
> -                                                     outlink->w, outlink->h, outlink->format, SWS_BICUBIC, NULL, NULL, NULL);
> -            if (!sr_context->sws_context){
> -                av_log(context, AV_LOG_ERROR, "could not create SwsContext\n");
> +            sr_context->sws_contexts[0] = sws_getContext(inlink->w, inlink->h, inlink->format,
> +                                                         outlink->w, outlink->h, outlink->format,
> +                                                         SWS_BICUBIC, NULL, NULL, NULL);
> +            if (!sr_context->sws_contexts[0]){
> +                av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling\n");
>                  return AVERROR(ENOMEM);
>              }
>              sr_context->sws_slice_h = inlink->h;
>              break;
>          case ESPCN:
> -            if (inlink->format == AV_PIX_FMT_GRAY8){
> -                sr_context->sws_context = NULL;
> -            }
> -            else{
> +            if (inlink->format != AV_PIX_FMT_GRAY8){
>                  sws_src_h = sr_context->input.height;
>                  sws_src_w = sr_context->input.width;
>                  sws_dst_h = sr_context->output.height;
> @@ -184,13 +199,14 @@ static int config_props(AVFilterLink *inlink)
>                      sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
>                      break;
>                  default:
> -                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for input pixel format");
> +                    av_log(context, AV_LOG_ERROR, "could not create SwsContext for scaling for given input pixel format");
>                      return AVERROR(EIO);
>                  }
> -                sr_context->sws_context = sws_getContext(sws_src_w, sws_src_h, AV_PIX_FMT_GRAY8,
> -                                                         sws_dst_w, sws_dst_h, AV_PIX_FMT_GRAY8, SWS_BICUBIC, NULL, NULL, NULL);
> -                if (!sr_context->sws_context){
> -