Re: [FFmpeg-devel] [PATCH v4 11/11] avfilter/vf_dnn_detect: Fix null pointer dereference

2024-05-21 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Andreas Rheinhardt
> Sent: Tuesday, May 21, 2024 3:12 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v4 11/11] avfilter/vf_dnn_detect: Fix null
> pointer dereference
> 
> Zhao Zhili:
> > From: Zhao Zhili 
> >
> > Signed-off-by: Zhao Zhili 
> > ---
> >  libavfilter/vf_dnn_detect.c | 10 ++
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
> > index b4eee06fe7..2a277d4169 100644
> > --- a/libavfilter/vf_dnn_detect.c
> > +++ b/libavfilter/vf_dnn_detect.c
> > @@ -807,11 +807,13 @@ static av_cold void dnn_detect_uninit(AVFilterContext *context)
> >      DnnDetectContext *ctx = context->priv;
> >      AVDetectionBBox *bbox;
> >      ff_dnn_uninit(&ctx->dnnctx);
> > -    while(av_fifo_can_read(ctx->bboxes_fifo)) {
> > -        av_fifo_read(ctx->bboxes_fifo, &bbox, 1);
> > -        av_freep(&bbox);
> > +    if (ctx->bboxes_fifo) {
> > +        while (av_fifo_can_read(ctx->bboxes_fifo)) {
> > +            av_fifo_read(ctx->bboxes_fifo, &bbox, 1);
> > +            av_freep(&bbox);
> > +        }
> > +        av_fifo_freep2(&ctx->bboxes_fifo);
> >      }
> > -    av_fifo_freep2(&ctx->bboxes_fifo);
> >      av_freep(&ctx->anchors);
> >      free_detect_labels(ctx);
> >  }
> 
> Please apply this patch soon; there is no need to wait for the other patches.
> (I independently stumbled upon this and sent a patch of my own.)
> 
> - Andreas
> 
This patch 11 pushed, thanks.
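
For context, a minimal sketch of the failure mode the fix guards against: if the
filter's init path fails before the FIFO is allocated, uninit still runs and must
not dereference a NULL fifo. Names below are illustrative, not the exact filter
code.

    #include <libavutil/fifo.h>
    #include <libavutil/mem.h>

    typedef struct ExampleCtx {
        AVFifo *bboxes_fifo;   /* may still be NULL if init failed early */
    } ExampleCtx;

    static void example_uninit(ExampleCtx *ctx)
    {
        if (!ctx->bboxes_fifo)
            return;                        /* nothing was ever allocated */
        while (av_fifo_can_read(ctx->bboxes_fifo)) {
            void *entry;
            av_fifo_read(ctx->bboxes_fifo, &entry, 1);
            av_freep(&entry);              /* each element owns its storage */
        }
        av_fifo_freep2(&ctx->bboxes_fifo); /* frees the fifo and NULLs the pointer */
    }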


Re: [FFmpeg-devel] [PATCH v4 01/11] avfilter/dnn: Refactor DNN parameter configuration system

2024-05-18 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Zhao
> Zhili
> Sent: Wednesday, May 8, 2024 12:08 AM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Zhao Zhili 
> Subject: [FFmpeg-devel] [PATCH v4 01/11] avfilter/dnn: Refactor DNN
> parameter configuration system
> 
> From: Zhao Zhili 
> 
> This patch tries to resolve multiple issues related to parameter
> configuration:
> 
> First, each DNN filter duplicates DNN_COMMON_OPTIONS, which should be
> the common options of the backend.
> 
> Second, backend options are hidden behind the scenes: the user only sees an
> AV_OPT_TYPE_STRING backend_configs option that each backend parses itself,
> so the help message does not tell us which options each backend supports.
> 
> Third, the DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.
> 
> Last but not least, passing backend options via AV_OPT_TYPE_STRING makes it
> hard, if not impossible, to pass AV_OPT_TYPE_BINARY values to a backend.
> 
> This patch puts the backend-common options and each backend's options inside
> DnnContext to reduce code duplication, make the options user friendly, and
> make them easy to extend for future use cases.
> 
This patch 01 LGTM, will push soon, thanks.
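
To illustrate the direction described above: a rough sketch, with assumed and
simplified field names rather than the actual definitions from the patch, of
grouping the backend-common options and each backend's options inside one
DnnContext. Each nested struct corresponds to one of the option groups the
filter help output can then print.

    #include <libavutil/opt.h>

    typedef struct DnnBaseOptions {      /* "dnn_base" options, common to all backends */
        char *model_filename;
        char *input_name;
        char *output_name;
        int   nireq;                     /* number of inference requests */
        int   async;
        char *device;
    } DnnBaseOptions;

    typedef struct DnnOpenvinoOptions {  /* "dnn_openvino" options */
        int   batch_size;
        int   input_resizable;
        int   layout;
        float scale;
        float mean;
    } DnnOpenvinoOptions;

    typedef struct DnnContext {
        const AVClass     *av_class;
        DnnBaseOptions     base;
        DnnOpenvinoOptions ov;
        /* tensorflow and torch option structs follow the same pattern */
    } DnnContext;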


Re: [FFmpeg-devel] [PATCH v3 01/10] avfilter/dnn: Refactor DNN parameter configuration system

2024-05-06 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Zhao
> Zhili
> Sent: Tuesday, April 30, 2024 3:12 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Zhao Zhili 
> Subject: [FFmpeg-devel] [PATCH v3 01/10] avfilter/dnn: Refactor DNN
> parameter configuration system
> 
> From: Zhao Zhili 
> 
> This patch tries to resolve multiple issues related to parameter
> configuration:
> 
> First, each DNN filter duplicates DNN_COMMON_OPTIONS, which should be
> the common options of the backend.
> 
> Second, backend options are hidden behind the scenes: the user only sees an
> AV_OPT_TYPE_STRING backend_configs option that each backend parses itself,
> so the help message does not tell us which options each backend supports.
> 
> Third, the DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.
> 
> Last but not least, passing backend options via AV_OPT_TYPE_STRING makes it
> hard, if not impossible, to pass AV_OPT_TYPE_BINARY values to a backend.
> 
> This patch puts the backend-common options and each backend's options inside
> DnnContext to reduce code duplication, make the options user friendly, and
> make them easy to extend for future use cases.
> 
> There is a known issue: for a filter which only supports one or two of the
> backends, the help message still shows the options of all three backends.
> Each DNN filter should be able to run on any backend; the current issue is
> mostly due to incomplete implementation (e.g., libtorch only supports
> DFT_PROCESS_FRAME) and lack of maintenance on the filters.

This patch 01 basically looks good. Two comments:
- It is possible that we add a dnn filter with support for one backend first,
  and then add the other backends one by one some time later. So please adjust
  the help message accordingly to show only the supported backends.

- Is it possible to split this patch into smaller patches for an easier, more
  detailed review?

> 
> For example,
> 
> ./ffmpeg -h filter=dnn_processing
> 
> dnn_processing AVOptions:
>dnn_backend   ..FV... DNN backend (from INT_MIN to
> INT_MAX) (default tensorflow)
>  tensorflow  1..FV... tensorflow backend flag
>  openvino2..FV... openvino backend flag
>  torch   3..FV... torch backend flag
> 
> dnn_base AVOptions:
>model  ..F path to model file
>input  ..F input name of the model
>output ..F output name of the model
>backend_configs..F...P backend configs (deprecated)
>options..F...P backend configs (deprecated)
>nireq ..F number of request (from 0 to 
> INT_MAX)
> (default 0)
>async ..F use DNN async inference 
> (default true)
>device ..F device to run model
> 
> dnn_tensorflow AVOptions:
>sess_config..F config for SessionOptions
> 
> dnn_openvino AVOptions:
>batch_size..F batch size per request (from 1 
> to 1000)
> (default 1)
>input_resizable   ..F can input be resizable or not 
> (default
> false)
>layout..F input layout of model (from 0 
> to 2) (default
> none)
>  none0..F none
>  nchw1..F nchw
>  nhwc2..F nhwc
>scale   ..F Add scale preprocess operation. 
> Divide each
> element of input by specified value. (from INT_MIN to INT_MAX) (default 0)
>mean..F Add mean preprocess operation. 
> Subtract
> specified value from each element of input. (from INT_MIN to INT_MAX)
> (default 0)
> 
> dnn_th AVOptions:
>optimize  ..F turn on graph executor 
> optimization (from 0
> to 1) (default 0)
> ---
>  libavfilter/dnn/dnn_backend_common.h   |  13 ++-
>  libavfilter/dnn/dnn_backend_openvino.c | 146 ++---
>  libavfilter/dnn/dnn_backend_tf.c   |  82 +-
>  libavfilter/dnn/dnn_backend_torch.cpp  |  67 
>  libavfilter/dnn/dnn_interface.c|  89 +++
>  libavfilter/dnn_filter_common.c|  38 ++-
>  libavfilter/dnn_filter_common.h|  39 +++
>  libavfilter/dnn_interface.h|  67 +++-
>  libavfilter/vf_derain.c|   6 +-
>  libavfilter/vf_dnn_classify.c  |   4 +-
>  libavfilter/vf_dnn_detect.c|   4 +-
>  libavfilter/vf_dnn_processing.c|   4 +-
>  libavfilter/vf_sr.c|   6 +-
>  13 files changed, 336 insertions(+), 229 deletions(-)
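
As a side note on the help output quoted above: the backend selector there is an
AVOption of type AV_OPT_TYPE_INT with named AV_OPT_TYPE_CONST values. A
self-contained sketch of such a table is shown below; the struct, offsets and
flags are illustrative assumptions, not the table from the patch.

    #include <stddef.h>
    #include <limits.h>
    #include <libavutil/opt.h>

    typedef struct ExampleDnnOptions {
        const AVClass *class;
        int backend;                     /* selected DNN backend */
    } ExampleDnnOptions;

    #define OFFSET(x) offsetof(ExampleDnnOptions, x)
    #define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)

    static const AVOption example_dnn_options[] = {
        { "dnn_backend", "DNN backend", OFFSET(backend), AV_OPT_TYPE_INT,
          { .i64 = 1 }, INT_MIN, INT_MAX, FLAGS, .unit = "backend" },
            { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST,
              { .i64 = 1 }, 0, 0, FLAGS, .unit = "backend" },
            { "openvino",   "openvino backend flag",   0, AV_OPT_TYPE_CONST,
              { .i64 = 2 }, 0, 0, FLAGS, .unit = "backend" },
            { "torch",      "torch backend flag",      0, AV_OPT_TYPE_CONST,
              { .i64 = 3 }, 0, 0, FLAGS, .unit = "backend" },
        { NULL }
    };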


Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN

2024-04-30 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Chen,
> Wenbin
> Sent: Tuesday, April 30, 2024 10:55 AM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> 
> > > On Apr 29, 2024, at 18:29, Guo, Yejun
> > > 
> > wrote:
> > >
> > >
> > >
> > >> -Original Message-
> > >> From: ffmpeg-devel  On Behalf Of
> > Zhao
> > >> Zhili
> > >> Sent: Sunday, April 28, 2024 6:55 PM
> > >> To: FFmpeg development discussions and patches  > >> de...@ffmpeg.org>
> > >> Subject: Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> > >>
> > >>
> > >>
> > >>> On Apr 28, 2024, at 18:34, Guo, Yejun  > >> intel@ffmpeg.org> wrote:
> > >>>
> > >>>> -Original Message-
> > >>>> From: ffmpeg-devel  > >>>> <mailto:ffmpeg-devel-boun...@ffmpeg.org>> On Behalf Of Zhao Zhili
> > >>>> Sent: Sunday, April 28, 2024 12:42 AM
> > >>>> To: ffmpeg-devel@ffmpeg.org <mailto:ffmpeg-devel@ffmpeg.org>
> > >>>> Cc: Zhao Zhili  > <mailto:zhiliz...@tencent.com>>
> > >>>> Subject: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> > >>>>
> > >>>> From: Zhao Zhili 
> > >>>>
> > >>>> During the refactor progress, I have found some serious issues,
> > >>>> which is not resolved by the patchset:
> > >>>>
> > >>>> 1. Tensorflow backend is broken.
> > >>>>
> > >>>> I think it doesn't work since 2021 at least. For example, it
> > >>>> destroy a thread and create a new thread for each frame, and it
> > >>>> destroy an invalid thread at the first
> > >>>> frame:
> > >>>
> > >>> It works from the day that code is merged, till today. It is by
> > >>> design to keep the code simplicity by using the feature that
> > >>> pthread_join accepts a parameter that is not a joinable thread.
> > >>>
> > >>> Please share more info if you experienced a real case that it does
> > >>> not
> > work.
> > >>
> > >> It will abort if ASSERT_LEVEL > 1.
> > >>
> > >> #define ASSERT_PTHREAD_ABORT(func, ret) do {\
> > >>char errbuf[AV_ERROR_MAX_STRING_SIZE] = ""; \
> > >>av_log(NULL, AV_LOG_FATAL, AV_STRINGIFY(func)   \
> > >>   " failed with error: %s\n",  \
> > >>   av_make_error_string(errbuf, AV_ERROR_MAX_STRING_SIZE,   \
> > >>AVERROR(ret))); \
> > >>abort();\
> > >> } while (0)
> > >>
> > >> I think the check is there just to prevent call pthread_join(0,
> > >> ) by
> > accident,
> > >> so we shouldn’t do that on purpose.
> > >>
> > > Nice catch with configure assert level > 1, will fix, and patch also
> > > welcome,
> > thanks.
> >
> > If I read the code correctly, it destroys a thread and creates a new
> > thread for each frame. I think this “async” mode isn’t common in
> > ffmpeg’s design. Creating a new thread for each frame can be heavy on some
> > platforms. We use slice threading to improve parallelism, and a thread with
> > a command queue to improve throughput. In this case, with tensorflow doing
> > the heavy lifting, if it doesn’t support async operation, a simple
> > synchronous operation with the tensorflow backend should be fine. The
> > “async” mode is unnecessary and uses more resources than the benefit it
> > provides.
> 
> I think we need to keep async support.
> 1. Some models cannot make full use of the resources. This may be caused by
> the tensorflow implementation or by the model design. Async is beneficial in
> this situation.
> 2. Async helps to build a pipeline. You don't need to wait for the output. If
> a "synchronous" filter is followed by another "synchronous" filter, it can be
> the bottleneck of the whole pipeline.
> 
> The benefit in these two situations will be more obvious if the model is
> running on a GPU. (Tensorflow has not added device support yet.)

Yes, the async mode (even with the current vanilla implementation) helps
performance through the overlap of CPU and GPU work. By offloading the dnn
filter to the GPU, the CPU can do other things at the same time.

For the tensorflow backend, to run on GPU, just download and use the GPU
version of the tensorflow C lib; no option needs to be set.
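
A toy illustration of that overlap argument, written from scratch (this is not
FFmpeg code; the worker model and all names are assumptions for the sketch): a
persistent worker standing in for the backend consumes frames while the main
thread keeps preparing the next one, so neither side waits on every frame.

    #include <pthread.h>
    #include <stdio.h>

    #define NB_FRAMES 8

    /* Toy data: frame "handles" handed from the main thread to the worker. */
    static int frames[NB_FRAMES];
    static int nb_ready = 0;   /* frames handed over so far */
    static int done = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    /* Stands in for the DNN backend: consumes frames as they become ready. */
    static void *infer_worker(void *arg)
    {
        int next = 0;
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (next == nb_ready && !done)
                pthread_cond_wait(&cond, &lock);
            if (next == nb_ready && done) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            pthread_mutex_unlock(&lock);
            printf("inference on frame %d\n", frames[next]);  /* the "GPU" work */
            next++;
        }
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, infer_worker, NULL);
        for (int i = 0; i < NB_FRAMES; i++) {
            /* "CPU" work for frame i (decode, preprocess, ...) would happen
             * here, overlapping with the worker's inference on earlier frames. */
            pthread_mutex_lock(&lock);
            frames[nb_ready++] = i;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        pthread_join(worker, NULL);
        return 0;
    }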


Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN

2024-04-29 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Zhao
> Zhili
> Sent: Sunday, April 28, 2024 6:55 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> 
> 
> 
> > On Apr 28, 2024, at 18:34, Guo, Yejun  intel@ffmpeg.org> wrote:
> >
> >> -Original Message-
> >> From: ffmpeg-devel  >> <mailto:ffmpeg-devel-boun...@ffmpeg.org>> On Behalf Of Zhao Zhili
> >> Sent: Sunday, April 28, 2024 12:42 AM
> >> To: ffmpeg-devel@ffmpeg.org <mailto:ffmpeg-devel@ffmpeg.org>
> >> Cc: Zhao Zhili mailto:zhiliz...@tencent.com>>
> >> Subject: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> >>
> >> From: Zhao Zhili 
> >>
> >> During the refactor progress, I have found some serious issues, which
> >> is not resolved by the patchset:
> >>
> >> 1. Tensorflow backend is broken.
> >>
> >> I think it doesn't work since 2021 at least. For example, it destroy
> >> a thread and create a new thread for each frame, and it destroy an
> >> invalid thread at the first
> >> frame:
> >
> > It works from the day that code is merged, till today. It is by design
> > to keep the code simplicity by using the feature that pthread_join
> > accepts a parameter that is not a joinable thread.
> >
> > Please share more info if you experienced a real case that it does not work.
> 
> It will abort if ASSERT_LEVEL > 1.
> 
> #define ASSERT_PTHREAD_ABORT(func, ret) do {\
> char errbuf[AV_ERROR_MAX_STRING_SIZE] = ""; \
> av_log(NULL, AV_LOG_FATAL, AV_STRINGIFY(func)   \
>" failed with error: %s\n",  \
>av_make_error_string(errbuf, AV_ERROR_MAX_STRING_SIZE,   \
> AVERROR(ret))); \
> abort();\
> } while (0)
> 
> I think the check is there just to prevent calling pthread_join(0, &status)
> by accident, so we shouldn’t do that on purpose.
> 
Nice catch with configure assert level > 1, will fix; a patch is also welcome,
thanks.
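
One possible way to avoid joining a thread id that was never created, as a
sketch only: the thread_started flag and the struct layout are assumptions,
not fields of the current code.

    #include <pthread.h>

    typedef struct AsyncModule {
        pthread_t thread_id;
        int       thread_started;   /* set to 1 right after pthread_create() succeeds */
    } AsyncModule;

    static void join_previous_inference(AsyncModule *m)
    {
        if (!m->thread_started)
            return;                 /* never call pthread_join() on an invalid id */
        pthread_join(m->thread_id, NULL);
        m->thread_started = 0;
    }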


Re: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN

2024-04-28 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Zhao
> Zhili
> Sent: Sunday, April 28, 2024 12:42 AM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Zhao Zhili 
> Subject: [FFmpeg-devel] [PATCH WIP 0/9] Refactor DNN
> 
> From: Zhao Zhili 
> 
> During the refactoring, I have found some serious issues which are not
> resolved by this patchset:
> 
> 1. The Tensorflow backend is broken.
> 
> I think it hasn't worked since 2021 at least. For example, it destroys a
> thread and creates a new thread for each frame, and it destroys an invalid
> thread for the first frame:

It has worked from the day that code was merged until today. It is by design,
to keep the code simple, relying on the fact that pthread_join accepts a
parameter that is not a joinable thread.

Please share more info if you experienced a real case where it does not work.
> 
> 
> pthread_join(async_module->thread_id, &status);
> if (status == DNN_ASYNC_FAIL) {
>     av_log(ctx, AV_LOG_ERROR, "Unable to start inference as previous inference failed.\n");
>     return DNN_GENERIC_ERROR;
> }
> ret = pthread_create(&async_module->thread_id, NULL, async_thread_routine, async_module);
> 
> 
> 2. Openvino V1 doesn't compile. It doesn't compile and no one complains; I
> think that's a hint to just keep the code for V2.

This is planned; a patch is welcome.

> 
> 3. Error handling. It's easy to crash with incorrect command line arguments.

Thanks, I will review your patchset one by one; it may take some time.

> 
> I don't have enough test cases. Please share your test cases and help with testing.
> 
> Zhao Zhili (9):
>   avfilter/dnn: Refactor DNN parameter configuration system
>   avfilter/dnn_backend_openvino: Fix free context at random place
>   avfilter/dnn_backend_openvino: simplify memory allocation
>   avfilter/dnn_backend_tf: Remove one level of indentation
>   avfilter/dnn_backend_tf: Fix free context at random place
>   avfilter/dnn_backend_tf: Simplify memory allocation
>   avfilter/dnn_backend_torch: Simplify memory allocation
>   avfilter/dnn: Remove a level of dereference
>   avfilter/dnn: Use dnn_backend_info_list to search for dnn module
> 
>  libavfilter/dnn/dnn_backend_common.h   |  13 +-
>  libavfilter/dnn/dnn_backend_openvino.c | 210 ++---
>  libavfilter/dnn/dnn_backend_tf.c   | 194 ++-
>  libavfilter/dnn/dnn_backend_torch.cpp  | 112 +
>  libavfilter/dnn/dnn_interface.c| 107 ++---
>  libavfilter/dnn_filter_common.c|  38 -
>  libavfilter/dnn_filter_common.h|  37 ++---
>  libavfilter/dnn_interface.h|  73 +++--
>  libavfilter/vf_derain.c|   5 +-
>  libavfilter/vf_dnn_classify.c  |   3 +-
>  libavfilter/vf_dnn_detect.c|   3 +-
>  libavfilter/vf_dnn_processing.c|   3 +-
>  libavfilter/vf_sr.c|   5 +-
>  13 files changed, 428 insertions(+), 375 deletions(-)
> 
> --
> 2.34.1
> 


Re: [FFmpeg-devel] [PATCH v1] lavfi/dnn_backend_torch: Include mem.h

2024-04-10 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> fei.w.wang-at-intel@ffmpeg.org
> Sent: Tuesday, April 2, 2024 11:02 AM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Wang, Fei W 
> Subject: [FFmpeg-devel] [PATCH v1] lavfi/dnn_backend_torch: Include mem.h
> 
> From: Fei Wang 
> 
> Fix build failure since 790f793844.
> 
> Signed-off-by: Fei Wang 
> ---
>  libavfilter/dnn/dnn_backend_torch.cpp | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_torch.cpp
> b/libavfilter/dnn/dnn_backend_torch.cpp
> index fa9a2e6d99..ae55893a50 100644
> --- a/libavfilter/dnn/dnn_backend_torch.cpp
> +++ b/libavfilter/dnn/dnn_backend_torch.cpp
> @@ -31,6 +31,7 @@ extern "C" {
>  #include "dnn_io_proc.h"
>  #include "dnn_backend_common.h"
>  #include "libavutil/opt.h"
> +#include "libavutil/mem.h"
>  #include "queue.h"
>  #include "safe_queue.h"
>  }
LGTM, will push soon, thanks.


Re: [FFmpeg-devel] [PATCH 2/2] libavfilter/dnn_io_proc: Take step into consideration when crop frame

2024-04-04 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Tuesday, April 2, 2024 4:13 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 2/2] libavfilter/dnn_io_proc: Take step into
> consideration when crop frame
> 
> From: Wenbin Chen 
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/dnn/dnn_io_proc.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
> index e5d6edb301..d2ec9f63f5 100644
> --- a/libavfilter/dnn/dnn_io_proc.c
> +++ b/libavfilter/dnn/dnn_io_proc.c
> @@ -350,6 +350,7 @@ int ff_frame_to_dnn_classify(AVFrame *frame,
> DNNData *input, uint32_t bbox_index
>  const AVDetectionBBoxHeader *header;
>  const AVDetectionBBox *bbox;
>  AVFrameSideData *sd = av_frame_get_side_data(frame,
> AV_FRAME_DATA_DETECTION_BBOXES);
> +int max_step[4] = { 0 };
>  av_assert0(sd);
> 
>  /* (scale != 1 and scale != 0) or mean != 0 */ @@ -405,8 +406,9 @@ int
> ff_frame_to_dnn_classify(AVFrame *frame, DNNData *input, uint32_t
> bbox_index
>  offsety[1] = offsety[2] = AV_CEIL_RSHIFT(top, desc->log2_chroma_h);
>  offsety[0] = offsety[3] = top;
> 
> +av_image_fill_max_pixsteps(max_step, NULL, desc);
>  for (int k = 0; frame->data[k]; k++)
> -bbox_data[k] = frame->data[k] + offsety[k] * frame->linesize[k] + offsetx[k];
> +bbox_data[k] = frame->data[k] + offsety[k] * frame->linesize[k] + offsetx[k] * max_step[k];
> 
>  sws_scale(sws_ctx, (const uint8_t *const *)&bbox_data, frame->linesize, 0, height,

Thanks for the catch, will push soon.
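
The point of the fix, in isolation: for packed pixel formats the horizontal
crop offset has to be scaled by the per-plane pixel step in bytes, which
av_image_fill_max_pixsteps() provides. A small self-contained sketch follows;
the helper name crop_origin is ours, not from the patch.

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/pixdesc.h>

    /* Return the address of pixel (x, y) in the given plane, honouring the
     * per-plane pixel step so packed formats (e.g. RGB24) are handled too. */
    static uint8_t *crop_origin(const AVFrame *frame, int x, int y, int plane)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
        int max_step[4] = { 0 };

        av_image_fill_max_pixsteps(max_step, NULL, desc);
        return frame->data[plane] + y * frame->linesize[plane] + x * max_step[plane];
    }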


Re: [FFmpeg-devel] [PATCH v2] doc: Add libtoch backend option to dnn_processing

2024-03-25 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Monday, March 25, 2024 10:15 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v2] doc: Add libtoch backend option to
> dnn_processing
> 
> From: Wenbin Chen 
> 
> Signed-off-by: Wenbin Chen 
> ---
>  doc/filters.texi | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 18f0d1c5a7..bfa8ccec8b 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -12073,11 +12073,21 @@ need to build and install the OpenVINO for C
> library (see
>  @code{--enable-libopenvino} (--extra-cflags=-I... --extra-ldflags=-L... might
>  be needed if the header files and libraries are not installed into system 
> path)
> 
> +@item torch
> +Libtorch backend. To enable this backend you need to build and install
> Libtroch
> +for C++ library. Please download cxx11 ABI version (see
> +@url{https://pytorch.org/get-started/locally})
> +and configure FFmpeg with @code{--enable-libtorch
> +--extra-cflags=-I/libtorch_root/libtorch/include
> +--extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include
> +--extra-ldflags=-L/libtorch_root/libtorch/lib/}
> +
>  @end table
> 
>  @item model
>  Set path to model file specifying network architecture and its parameters.
> -Note that different backends use different file formats. TensorFlow,
> OpenVINO backend can load files for only its format.
> +Note that different backends use different file formats. TensorFlow,
> OpenVINO
> +and Libtorch backend can load files for only its format.
> 
LGTM, will push soon, thanks.


Re: [FFmpeg-devel] [PATCH] doc: Add libtoch backend option to dnn_processing

2024-03-21 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Thursday, March 21, 2024 2:51 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] doc: Add libtoch backend option to
> dnn_processing
> 
> From: Wenbin Chen 
> 
> Signed-off-by: Wenbin Chen 
> ---
>  doc/filters.texi | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi index 913365671d..20605e72b2
> 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -12069,11 +12069,21 @@ need to build and install the OpenVINO for C
> library (see  @code{--enable-libopenvino} (--extra-cflags=-I... 
> --extra-ldflags=-
> L... might  be needed if the header files and libraries are not installed into
> system path)
> 
> +@item torch
> +Libtorch backend. To enable this backend you need to build and install
> +Libtroch for C++ library. Please download cxx11 ABI version (see
> +@url{https://pytorch.org/get-started/locally})
> +and configure FFmpeg with @code{--enable-libtorch
> +--extra-cflag=-I/libtorch_root/libtorch/include
> +--extra-cflag=-I/libtorch_root/libtorch/include/torch/csrc/api/include

"s" is missed in extra-cflags


Re: [FFmpeg-devel] [PATCH v6] libavfi/dnn: add LibTorch as one of DNN backend

2024-03-19 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Guo,
> Yejun
> Sent: Friday, March 15, 2024 3:56 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>; wenbin.chen-at-intel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v6] libavfi/dnn: add LibTorch as one of
> DNN backend
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > Jean- Baptiste Kempf
> > Sent: Friday, March 15, 2024 3:05 PM
> > To: wenbin.chen-at-intel@ffmpeg.org; ffmpeg-devel  > de...@ffmpeg.org>
> > Subject: Re: [FFmpeg-devel] [PATCH v6] libavfi/dnn: add LibTorch as
> > one of DNN backend
> >
> > On Fri, 15 Mar 2024, at 05:42, wenbin.chen-at-intel@ffmpeg.org
> wrote:
> > > From: Wenbin Chen 
> > >
> > > PyTorch is an open source machine learning framework that
> > > accelerates the path from research prototyping to production
> > > deployment. Official
> > > website: https://pytorch.org/. We call the C++ library of PyTorch as
> > > LibTorch, the same below.
> >
> > LGTM. Please apply.
> 
> Cool, I plan to have a clean try next week and then apply the patch, hope not
> too late.

Pushed, thanks.



Re: [FFmpeg-devel] [PATCH v6] libavfi/dnn: add LibTorch as one of DNN backend

2024-03-15 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Jean-
> Baptiste Kempf
> Sent: Friday, March 15, 2024 3:05 PM
> To: wenbin.chen-at-intel@ffmpeg.org; ffmpeg-devel  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v6] libavfi/dnn: add LibTorch as one of
> DNN backend
> 
> On Fri, 15 Mar 2024, at 05:42, wenbin.chen-at-intel@ffmpeg.org wrote:
> > From: Wenbin Chen 
> >
> > PyTorch is an open source machine learning framework that accelerates
> > the path from research prototyping to production deployment. Official
> > website: https://pytorch.org/. We call the C++ library of PyTorch as
> > LibTorch, the same below.
> 
> LGTM. Please apply.

Cool, I plan to have a clean try next week and then apply the patch, hope not 
too late.



Re: [FFmpeg-devel] [PATCH v5] libavfi/dnn: add LibTorch as one of DNN backend

2024-03-14 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Monday, March 11, 2024 1:02 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v5] libavfi/dnn: add LibTorch as one of DNN
> backend
> 
> From: Wenbin Chen 
> 
> PyTorch is an open source machine learning framework that accelerates
> the path from research prototyping to production deployment. Official
> website: https://pytorch.org/. We call the C++ library of PyTorch as
> LibTorch, the same below.
> 
> To build FFmpeg with LibTorch, please take following steps as reference:
> 1. download LibTorch C++ library in https://pytorch.org/get-started/locally/,
> please select C++/Java for language, and other options as your need.
> Please download cxx11 ABI version (libtorch-cxx11-abi-shared-with-deps-
> *.zip).
> 2. unzip the file to your own dir, with command
> unzip libtorch-shared-with-deps-latest.zip -d your_dir
> 3. export libtorch_root/libtorch/include and
> libtorch_root/libtorch/include/torch/csrc/api/include to $PATH
> export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH
> 4. config FFmpeg with ../configure --enable-libtorch --extra-cflag=-
> I/libtorch_root/libtorch/include --extra-cflag=-
> I/libtorch_root/libtorch/include/torch/csrc/api/include --extra-ldflags=-
> L/libtorch_root/libtorch/lib/
> 5. make
> 
> To run FFmpeg DNN inference with LibTorch backend:
> ./ffmpeg -i input.jpg -vf
> dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg
> The LibTorch_model.pt can be generated by Python with torch.jit.script() api.
> Please note, torch.jit.trace() is not recommended, since it does not support
> ambiguous input sizes.

Can you provide more detail (maybe a link from pytorch) about the
LibTorch_model.pt generation, so we can have a try?

> 
> Signed-off-by: Ting Fu 
> Signed-off-by: Wenbin Chen 
> ---
>  configure |   5 +-
>  libavfilter/dnn/Makefile  |   1 +
>  libavfilter/dnn/dnn_backend_torch.cpp | 597
> ++
>  libavfilter/dnn/dnn_interface.c   |   5 +
>  libavfilter/dnn_filter_common.c   |  15 +-
>  libavfilter/dnn_interface.h   |   2 +-
>  libavfilter/vf_dnn_processing.c   |   3 +
>  7 files changed, 624 insertions(+), 4 deletions(-)
>  create mode 100644 libavfilter/dnn/dnn_backend_torch.cpp
> 
> +static int fill_model_input_th(THModel *th_model, THRequestItem *request)
> +{
> +LastLevelTaskItem *lltask = NULL;
> +TaskItem *task = NULL;
> +THInferRequest *infer_request = NULL;
> +DNNData input = { 0 };
> +THContext *ctx = &th_model->ctx;
> +int ret, width_idx, height_idx, channel_idx;
> +
> +lltask = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue);
> +if (!lltask) {
> +ret = AVERROR(EINVAL);
> +goto err;
> +}
> +request->lltask = lltask;
> +task = lltask->task;
> +infer_request = request->infer_request;
> +
> +ret = get_input_th(th_model, &input, NULL);
> +if ( ret != 0) {
> +goto err;
> +}
> +width_idx = dnn_get_width_idx_by_layout(input.layout);
> +height_idx = dnn_get_height_idx_by_layout(input.layout);
> +channel_idx = dnn_get_channel_idx_by_layout(input.layout);
> +input.dims[height_idx] = task->in_frame->height;
> +input.dims[width_idx] = task->in_frame->width;
> +input.data = av_malloc(input.dims[height_idx] * input.dims[width_idx] *
> +   input.dims[channel_idx] * sizeof(float));
> +if (!input.data)
> +return AVERROR(ENOMEM);
> +infer_request->input_tensor = new torch::Tensor();
> +infer_request->output = new torch::Tensor();
> +
> +switch (th_model->model->func_type) {
> +case DFT_PROCESS_FRAME:
> +input.scale = 255;
> +if (task->do_ioproc) {
> +if (th_model->model->frame_pre_proc != NULL) {
> +th_model->model->frame_pre_proc(task->in_frame, &input, th_model->model->filter_ctx);
> +} else {
> +ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
> +}
> +}
> +break;
> +default:
> +avpriv_report_missing_feature(NULL, "model function type %d",
> th_model->model->func_type);
> +break;
> +}
> +*infer_request->input_tensor = torch::from_blob(input.data,
> +{1, 1, input.dims[channel_idx], input.dims[height_idx],
> input.dims[width_idx]},

An extra dimension is added to support multiple frames for algorithms 
such as VideoSuperResolution, besides batch size, channel, height and width.

Let's first support the regular dimension for NCHW/NHWC,  and then
add support for multiple frames.



Re: [FFmpeg-devel] [PATCH] libavfi/dnn: add LibTorch as one of DNN backend

2024-01-27 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Monday, January 22, 2024 2:11 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfi/dnn: add LibTorch as one of DNN
> backend
> 
> From: Wenbin Chen 
> 
> PyTorch is an open source machine learning framework that accelerates the
> path from research prototyping to production deployment. Official
> website: https://pytorch.org/. We call the C++ library of PyTorch LibTorch,
> the same below.
> 
> To build FFmpeg with LibTorch, please take following steps as reference:
> 1. download LibTorch C++ library in https://pytorch.org/get-started/locally/,
> please select C++/Java for language, and other options as your need.
> 2. unzip the file to your own dir, with command unzip libtorch-shared-with-
> deps-latest.zip -d your_dir 3. export libtorch_root/libtorch/include and
> libtorch_root/libtorch/include/torch/csrc/api/include to $PATH export
> libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH 4. config FFmpeg
> with ../configure --enable-libtorch --extra-cflag=-
> I/libtorch_root/libtorch/include --extra-cflag=-
> I/libtorch_root/libtorch/include/torch/csrc/api/include --extra-ldflags=-
> L/libtorch_root/libtorch/lib/
> 5. make
> 
> To run FFmpeg DNN inference with LibTorch backend:
> ./ffmpeg -i input.jpg -vf
> dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg
> The LibTorch_model.pt can be generated by Python with torch.jit.script() api.
> Please note, torch.jit.trace() is not recommended, since it does not support
> ambiguous input sizes.
> 
> Signed-off-by: Ting Fu 
> Signed-off-by: Wenbin Chen 
> ---
>  configure |   5 +-
>  libavfilter/dnn/Makefile  |   1 +
>  libavfilter/dnn/dnn_backend_torch.cpp | 585 ++
>  libavfilter/dnn/dnn_interface.c   |   5 +
>  libavfilter/dnn_filter_common.c   |  31 +-
>  libavfilter/dnn_interface.h   |   2 +-
>  libavfilter/vf_dnn_processing.c   |   3 +
>  7 files changed, 621 insertions(+), 11 deletions(-)  create mode 100644
> libavfilter/dnn/dnn_backend_torch.cpp
> 

Personally, I'm glad to see libtorch as a new dnn backend, due to the fact that
more and more deep learning models are trained with PyTorch. PyTorch is a
necessity in the AI domain, including analysis/processing of image, video,
audio and subtitle (text), and even putting them together.



Re: [FFmpeg-devel] [PATCH 3/3] libavfilter/vf_dnn_detect: Use class confidence to filt boxes

2024-01-27 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Wednesday, January 17, 2024 3:22 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 3/3] libavfilter/vf_dnn_detect: Use class
> confidence to filt boxes
> 
> From: Wenbin Chen 
> 
> Use class confidence instead of box_score to filter boxes, which is more
> accurate. Class confidence is obtained by multiplying the class probability
> distribution by box_score.
> 
> Signed-off-by: Wenbin Chen 
> ---
Looks good to me, will push soon.
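
To make the criterion concrete, a small sketch (names and signature are ours,
not the patch's code) of scoring one candidate box with
confidence = box_score * class probability:

    /* Returns the index of the best class, or -1 if no class confidence
     * reaches the threshold (i.e. the candidate box is filtered out). */
    static int best_class_for_box(const float *class_prob, int nb_classes,
                                  float box_score, float conf_threshold,
                                  float *out_conf)
    {
        int best_class = -1;
        float best = 0.0f;

        for (int c = 0; c < nb_classes; c++) {
            float conf = box_score * class_prob[c];  /* class confidence */
            if (conf >= conf_threshold && conf > best) {
                best = conf;
                best_class = c;
            }
        }
        *out_conf = best;
        return best_class;
    }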


Re: [FFmpeg-devel] [PATCH 1/2] libavfilter/dnn_backend_openvino: Add dynamic output support

2023-12-29 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Wednesday, December 27, 2023 12:17 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 1/2] libavfilter/dnn_backend_openvino: Add
> dynamic output support
> 
> From: Wenbin Chen 
> 
> Add dynamic outputs support. Some models don't have fixed output size.
> Its size changes according to result. Now openvino can run these kinds of
> models.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 134 +++--
>  1 file changed, 59 insertions(+), 75 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 671a995c70..e207d44584 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -219,31 +219,26 @@ static int fill_model_input_ov(OVModel
> *ov_model, OVRequestItem *request)
>  task = lltask->task;
> 
LGTM, will push tomorrow, thanks.


Re: [FFmpeg-devel] [PATCH v2 1/4] libavfiter/dnn_backend_openvino: Add multiple output support

2023-12-15 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Tuesday, December 12, 2023 10:34 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v2 1/4] libavfiter/dnn_backend_openvino:
> Add multiple output support
> 
> From: Wenbin Chen 
> 
> Add multiple output support to openvino backend. You can use '&' to split
> different output when you set output name using command line.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/dnn/dnn_backend_common.c   |   7 -
>  libavfilter/dnn/dnn_backend_openvino.c | 216 +
>  libavfilter/vf_dnn_detect.c|  11 +-
>  3 files changed, 150 insertions(+), 84 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_common.c
> b/libavfilter/dnn/dnn_backend_common.c
> index 91a4a3c4bf..632832ec36 100644
> --- a/libavfilter/dnn/dnn_backend_common.c
> +++ b/libavfilter/dnn/dnn_backend_common.c

LGTM, will push tomorrow.


Re: [FFmpeg-devel] [PATCH 4/4] libavfilter/vf_dnn_detect: Set used pointer to NULL

2023-12-15 Thread Guo, Yejun
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Thursday, December 14, 2023 10:49 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 4/4] libavfilter/vf_dnn_detect: Set used
> pointer to NULL
> 
> From: Wenbin Chen 
> 
> Set used pointer to NULL in case it leaks the storage.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/vf_dnn_detect.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index
> 5668b8b017..3464af86c8 100644
> --- a/libavfilter/vf_dnn_detect.c
> +++ b/libavfilter/vf_dnn_detect.c
> @@ -223,6 +223,7 @@ static int dnn_detect_parse_yolo_output(AVFrame
> *frame, DNNData *output, int out
>  av_freep(&bbox);
>  return AVERROR(ENOMEM);
>  }
> +bbox = NULL;
>  }
>  }
>  return 0;
> --
LGTM, will push soon.
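
The general pattern behind this one-liner, sketched with illustrative names
(not the patch's surrounding code): once ownership of the allocation has been
handed to the FIFO, the local pointer is cleared so later error paths cannot
free or reuse it.

    #include <libavutil/detection_bbox.h>
    #include <libavutil/error.h>
    #include <libavutil/fifo.h>
    #include <libavutil/mem.h>

    static int queue_one_bbox(AVFifo *fifo)
    {
        AVDetectionBBox *bbox = av_mallocz(sizeof(*bbox));
        if (!bbox)
            return AVERROR(ENOMEM);
        /* ... fill in the candidate box ... */
        if (av_fifo_write(fifo, &bbox, 1) < 0) {
            av_freep(&bbox);        /* still ours on failure: free it */
            return AVERROR(ENOMEM);
        }
        bbox = NULL;                /* owned by the fifo now; don't touch it again */
        return 0;
    }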


Re: [FFmpeg-devel] [PATCH 2/2] libavfilter/vf_dnn_detect: Add yolo support

2023-11-24 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Tuesday, November 21, 2023 10:20 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 2/2] libavfilter/vf_dnn_detect: Add yolo
> support
> 
> From: Wenbin Chen 
> 
> Add yolo support. A yolo model doesn't output the final result. It outputs
> candidate boxes, so we need post-processing to remove overlapping boxes and
> get the final results. Also, the box coordinates are relative to the cell and
> anchors, so we need this information to calculate the boxes as well.
> 
> Model detail please refer to:
> https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/
> public/yolo-v2-tf
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c |   6 +-
>  libavfilter/vf_dnn_detect.c| 242 -
>  2 files changed, 244 insertions(+), 4 deletions(-)


Looks good to me, will push soon, thanks
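
For readers unfamiliar with the post-processing mentioned above, a compact
sketch of greedy non-maximum suppression (IoU-based overlap removal); it is
illustrative only, not the code added by the patch.

    /* One candidate detection: coordinates and confidence only. */
    typedef struct Box {
        float x0, y0, x1, y1;
        float conf;
        int keep;
    } Box;

    static float iou(const Box *a, const Box *b)
    {
        float ix0 = a->x0 > b->x0 ? a->x0 : b->x0;
        float iy0 = a->y0 > b->y0 ? a->y0 : b->y0;
        float ix1 = a->x1 < b->x1 ? a->x1 : b->x1;
        float iy1 = a->y1 < b->y1 ? a->y1 : b->y1;
        float iw = ix1 > ix0 ? ix1 - ix0 : 0.0f;
        float ih = iy1 > iy0 ? iy1 - iy0 : 0.0f;
        float inter = iw * ih;
        float uni = (a->x1 - a->x0) * (a->y1 - a->y0) +
                    (b->x1 - b->x0) * (b->y1 - b->y0) - inter;
        return uni > 0.0f ? inter / uni : 0.0f;
    }

    /* boxes must already be sorted by descending confidence */
    static void nms(Box *boxes, int n, float iou_threshold)
    {
        for (int i = 0; i < n; i++)
            boxes[i].keep = 1;
        for (int i = 0; i < n; i++) {
            if (!boxes[i].keep)
                continue;
            for (int j = i + 1; j < n; j++)
                if (boxes[j].keep && iou(&boxes[i], &boxes[j]) > iou_threshold)
                    boxes[j].keep = 0;  /* overlaps a better box too much */
        }
    }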


Re: [FFmpeg-devel] [PATCH] libavfilter/dnn/openvino: Reduce redundant memory allocation

2023-11-10 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Thursday, November 9, 2023 4:13 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfilter/dnn/openvino: Reduce
> redundant memory allocation
> 
> From: Wenbin Chen 
> 
> We can directly get the data pointer from the tensor, so the extra memory
> allocation can be removed.
LGTM, will push tomorrow, thanks.


Re: [FFmpeg-devel] [PATCH v2 3/3] libavfilter/dnn: Initialze DNNData variables

2023-09-26 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Thursday, September 21, 2023 9:27 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v2 3/3] libavfilter/dnn: Initialze DNNData
> variables
> 
> From: Wenbin Chen 
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/dnn/dnn_backend_tf.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index b521de7fbe..25046b58d9 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -622,7 +622,7 @@ err:

LGTM, thanks, will push tomorrow


Re: [FFmpeg-devel] [PATCH v2 8/8] avfilter/dnn_backend_openvino: fix wild pointer on error path

2023-09-08 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Zhao Zhili
> Sent: Saturday, September 2, 2023 4:24 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Zhao Zhili 
> Subject: [FFmpeg-devel] [PATCH v2 8/8] avfilter/dnn_backend_openvino: fix
> wild pointer on error path
> 
> From: Zhao Zhili 
> 
> When
> ov_model_const_input_by_name/ov_model_const_output_by_name
> failed, input_port/output_port can be wild pointer.
> 
> Signed-off-by: Zhao Zhili 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
The patch set looks good to me, thanks.


Re: [FFmpeg-devel] [PATCH 3/3] avfilter/dnn_backend_openvino: reduce indentation in free_model_ov

2023-08-30 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Zhao Zhili
> Sent: Saturday, August 19, 2023 1:53 AM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Zhao Zhili 
> Subject: [FFmpeg-devel] [PATCH 3/3] avfilter/dnn_backend_openvino:
> reduce indentation in free_model_ov
> 
> From: Zhao Zhili 
> 
> No functional changes.
> 
> Signed-off-by: Zhao Zhili 
> ---

Looks good, but could you re-send v2 against latest code? Thanks.


Re: [FFmpeg-devel] [PATCH v2] lavfi/dnn: Add OpenVINO API 2.0 support

2023-08-24 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Tuesday, August 15, 2023 4:27 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v2] lavfi/dnn: Add OpenVINO API 2.0
> support
> 
> From: Wenbin Chen 
> 
> OpenVINO API 2.0 was released in March 2022, which introduced new
> features.
> This commit implements current OpenVINO features with new 2.0 APIs. And
> will add other features in API 2.0.
> Please add installation path, which include openvino.pc, to
> PKG_CONFIG_PATH mannually for new OpenVINO libs config.
> 
> Signed-off-by: Ting Fu 
> Signed-off-by: Wenbin Chen 
> ---

LGTM, will push tomorrow, thanks.


Re: [FFmpeg-devel] [PATCH] libavfilter/vf_dnn_detect: bbox index may bigger than bbox number

2023-07-18 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> wenbin.chen-at-intel@ffmpeg.org
> Sent: Monday, July 17, 2023 1:33 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfilter/vf_dnn_detect: bbox index may
> bigger than bbox number
> 
> From: Wenbin Chen 
> 
> Fix a bug where the queried bbox index may be bigger than the total number of bboxes.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavfilter/vf_dnn_detect.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
> index 06efce02a6..6ef04e0958 100644
> --- a/libavfilter/vf_dnn_detect.c
> +++ b/libavfilter/vf_dnn_detect.c
> @@ -106,12 +106,11 @@ static int dnn_detect_post_proc_ov(AVFrame
> *frame, DNNData *output, AVFilterCont
>  float x1 =  detections[i * detect_size + 5];
>  float y1 =  detections[i * detect_size + 6];
> 
> -bbox = av_get_detection_bbox(header, i);
> -
>  if (conf < conf_threshold) {
>  continue;
>  }
> 
> +bbox = av_get_detection_bbox(header, header->nb_bboxes - nb_bboxes);


Good catch, LGTM, will push soon, thanks.


Re: [FFmpeg-devel] [PATCH v1] libavfi/dnn: add Paddle Inference as one of DNN backend

2023-05-10 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> "zhilizhao(赵志立)"
> Sent: Wednesday, May 10, 2023 12:09 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v1] libavfi/dnn: add Paddle Inference as
> one of DNN backend
> 
> 
> 
> > On May 10, 2023, at 10:25, WenzheWang  wrote:
> >
> > Dear Madam or Sir,
> >
> >
> > Hope this email finds you well.
> >
> >
> > I am writing this email since I recently found FFmpeg removed the DNN native
> > backend, and I will be really grateful if you let me know whether there is
> > any new plan for libavfilter/dnn.
> >
> >
> > I would like to explain to you again about the addition of dnn paddle
> backend.
> >
> > At  present, ffmpeg only supports openvino and tensorflow backend.
> Among  the current deep learning frameworks, TensorFlow is the most active
> in  development. TensorFlow has 174k stars and pytorch has 66.5k. openvino
> is 4.2k, and the models that openvino can implement are relatively few.  But
> in terms of attention on GitHub, there's no doubt that TensorFlow  and
> pytorch are more promising. Currently, the paddle framework has  reached
> 20.2k stars on github, which is much more widely used and active  than
> frameworks such as mxnet and caffe.
> 
> Stars don't matter much here.
> 
> Just for reference, there is a thread before:
> 
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20220523092918.9548-
> 2-ting...@intel.com/
> 
> >
> > TensorFlow has a very rich ecosystem. The TensorFlow models library updates
> > very quickly and has existing examples of deep learning applications for
> > image classification, object detection, image generation, text generation,
> > and generative adversarial network models. The dnn libavfilter module is
> > undoubtedly very necessary for the tensorflow backend to support. But the
> > complexity of the TensorFlow API and of the training is almost prohibitive,
> > making it a love-hate framework.
> >
> > PyTorch framework tends to be applied to academic  fast implementation,
> and its industrial application performance is not  good. For example, Pytorch
> framework makes a model to run on a server,  Android phone or embedded
> system, and its performance is poor compared  with other deep learning
> frameworks.
> >
> >
> > PaddlePadddle  is an open source framework of Baidu, which is also used
> by many people  in China. It is very consistent with the usage habits of
> developers,  but the practicability of the API still needs to be further
> strengthened. However, Paddle is the only deep learning framework I  have
> ever used, which does not configure any third-party libraries and  can be
> used directly by cloning make. Besides, Paddle occupies a small  amount of
> memory and is fast. It also serves a considerable number of  projects inside
> Baidu, which is very strong in industrial application.  And PaddlePaddle
> supports multiple machine and multiple card training.

Imo, my idea is that we can add 1 or 2 dnn backends as discussed at 
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html

The background is that we see good models from different deep learning
frameworks, and most frameworks do not support models developed with other
frameworks due to the different model formats. IMO, we should support several
popular frameworks.




Re: [FFmpeg-devel] [PATCH V7 3/3] lavfi/dnn: Remove DNN native backend

2023-04-27 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Ting Fu
> Sent: Thursday, April 27, 2023 5:44 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V7 3/3] lavfi/dnn: Remove DNN native
> backend
> 
> According to discussion in
> https://etherpad.mit.edu/p/FF_dev_meeting_20221202 and the proposal in
> http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html,
> the DNN native backend should be removed at first step.
> All the DNN native backend related codes are deleted.
> 
> Signed-off-by: Ting Fu 

LGTM, with mixed emotions, will push soon.


Re: [FFmpeg-devel] [PATCH V6 3/3] lavfi/dnn: Remove DNN native backend

2023-04-26 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Ting Fu
> Sent: Monday, March 6, 2023 9:56 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V6 3/3] lavfi/dnn: Remove DNN native
> backend

This commit LGTM, thanks.


Re: [FFmpeg-devel] [PATCH V6 2/3] lavfi/dnn: Modified DNN native backend related tools and docs.

2023-04-26 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Ting Fu
> Sent: Monday, March 6, 2023 9:56 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V6 2/3] lavfi/dnn: Modified DNN native
> backend related tools and docs.
> 
> Delete the native backend related files in the 'tools' dir. Modify the
> docs, and the code mentioned in those docs.

We'll remove the native backend, so change the default backend in the filters,
and also remove the python scripts which generate the native model file.



Re: [FFmpeg-devel] [PATCH V6 1/3] lavfi/dnn: Mark native backend as unsupported

2023-04-26 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Ting Fu
> Sent: Monday, March 6, 2023 9:56 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V6 1/3] lavfi/dnn: Mark native backend as
> unsupported
> 
> Native is a deprecated value for the backend_type option. Modify the related error

The native backend will be removed, so change the interface first.

> message.
> 
> Signed-off-by: Ting Fu 
> ---
>  libavfilter/dnn/dnn_interface.c | 10 +-
>  1 file changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c
> index 554a36b0dc..5b1695a1dd 100644
> --- a/libavfilter/dnn/dnn_interface.c
> +++ b/libavfilter/dnn/dnn_interface.c
> @@ -24,7 +24,6 @@
>   */
> 
>  #include "../dnn_interface.h"
> -#include "dnn_backend_native.h"
>  #include "dnn_backend_tf.h"
>  #include "dnn_backend_openvino.h"
>  #include "libavutil/mem.h"
> @@ -39,13 +38,6 @@ DNNModule *ff_get_dnn_module(DNNBackendType
> backend_type)
>  }
> 
>  switch(backend_type){
> -case DNN_NATIVE:
> -dnn_module->load_model = &ff_dnn_load_model_native;
> -dnn_module->execute_model = &ff_dnn_execute_model_native;
> -dnn_module->get_result = &ff_dnn_get_result_native;
> -dnn_module->flush = &ff_dnn_flush_native;
> -dnn_module->free_model = &ff_dnn_free_model_native;
> -break;
>  case DNN_TF:
>  #if (CONFIG_LIBTENSORFLOW == 1)
>  dnn_module->load_model = &ff_dnn_load_model_tf;
> @@ -71,7 +63,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type)
>  #endif
>  break;
>  default:
> -av_log(NULL, AV_LOG_ERROR, "Module backend_type is not native or tensorflow\n");
> +av_log(NULL, AV_LOG_ERROR, "Module backend_type is not supported or enabled.\n");
>  av_freep(&dnn_module);
>  return NULL;
>  }
> --
> 2.17.1
> 


Re: [FFmpeg-devel] [PATCH V4 2/3] lavfi/dnn: Delete DNN native backend releated tools and docs.

2023-01-15 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Ting
> Fu
> Sent: Friday, January 6, 2023 5:19 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V4 2/3] lavfi/dnn: Delete DNN native
> backend releated tools and docs.
> 
> Signed-off-by: Ting Fu 
> ---
>  doc/filters.texi|  43 +-
>  tools/python/convert.py |  56 ---
>  tools/python/convert_from_tensorflow.py | 607 
>  tools/python/convert_header.py  |  26 -
>  4 files changed, 4 insertions(+), 728 deletions(-)  delete mode 100644
> tools/python/convert.py  delete mode 100644
> tools/python/convert_from_tensorflow.py
>  delete mode 100644 tools/python/convert_header.py
> 
> diff --git a/doc/filters.texi b/doc/filters.texi index 9c32339141..797d1c9fe2
> 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -11222,9 +11222,6 @@ See
> @url{http://openaccess.thecvf.com/content_ECCV_2018/papers/Xia_Li_Recu
> rrent_
>  Training as well as model generation scripts are provided in  the repository 
> at
> @url{https://github.com/XueweiMeng/derain_filter.git}.
> 
> -Native model files (.model) can be generated from TensorFlow model -files
> (.pb) by using tools/python/convert.py
> -
>  The filter accepts the following options:
> 
>  @table @option
> @@ -11245,21 +11242,16 @@ Specify which DNN backend to use for model
> loading and execution. This option ac  the following values:
> 
>  @table @samp
> -@item native
> -Native implementation of DNN loading and execution.
> -
>  @item tensorflow
>  TensorFlow backend. To enable this backend you  need to install the
> TensorFlow for C library (see
>  @url{https://www.tensorflow.org/install/lang_c}) and configure FFmpeg with
> @code{--enable-libtensorflow}  @end table -Default value is @samp{native}.
> 
>  @item model
>  Set path to model file specifying network architecture and its parameters.
> -Note that different backends use different file formats. TensorFlow and
> native -backend can load files for only its format.
> +Note that different backends use different file formats. TensorFlow can load
> files for only its format.
>  @end table
> 
>  To get full functionality (such as async execution), please use the
> @ref{dnn_processing} filter.
> @@ -11583,9 +11575,6 @@ Specify which DNN backend to use for model
> loading and execution. This option ac  the following values:
> 
>  @table @samp
> -@item native
> -Native implementation of DNN loading and execution.
> -
>  @item tensorflow
>  TensorFlow backend. To enable this backend you  need to install the
> TensorFlow for C library (see @@ -11601,14 +11590,9 @@ be needed if the
> header files and libraries are not installed into system path)
> 
>  @end table
> 
> -Default value is @samp{native}.

Which will be the default value in all these filters? And it looks like we
need to also update the code in the filters.


Re: [FFmpeg-devel] [PATCH V2 1/2] lavfi/dnn: Modify error message for incorrect backend_type

2023-01-04 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Ting Fu
> Sent: Monday, January 2, 2023 11:50 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V2 1/2] lavfi/dnn: Modify error message for
> incorrect backend_type
> 
> Signed-off-by: Ting Fu 
> ---
>  libavfilter/dnn/dnn_interface.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c
> index 554a36b0dc..fa484c0905 100644
> --- a/libavfilter/dnn/dnn_interface.c
> +++ b/libavfilter/dnn/dnn_interface.c
> @@ -71,7 +71,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType
> backend_type)
>  #endif
>  break;
>  default:
> -av_log(NULL, AV_LOG_ERROR, "Module backend_type is not native or
> tensorflow\n");
> +av_log(NULL, AV_LOG_ERROR, "Module backend_type is not
> + supported or enabled.\n");

We need to remove "case DNN_NATIVE:" in this patch as well, so that the native
backend falls into the 'default:' branch.

>  av_freep(&dnn_module);
>  return NULL;
>  }

Please also update doc/filters.texi in this commit, thanks.
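
Just to illustrate the point above, a rough sketch (not the submitted patch) of
how the switch in ff_get_dnn_module() could end up once the DNN_NATIVE case is
removed, so that a request for the native backend falls into the default error
path; the case bodies here are placeholders:

DNNModule *ff_get_dnn_module(DNNBackendType backend_type)
{
    DNNModule *dnn_module = av_mallocz(sizeof(DNNModule));
    if (!dnn_module)
        return NULL;

    switch (backend_type) {
    case DNN_TF:
        /* fill in the TensorFlow entry points (guarded by CONFIG_LIBTENSORFLOW) */
        break;
    case DNN_OV:
        /* fill in the OpenVINO entry points (guarded by CONFIG_LIBOPENVINO) */
        break;
    default:
        av_log(NULL, AV_LOG_ERROR, "Module backend_type is not supported or enabled.\n");
        av_freep(&dnn_module);
        return NULL;
    }
    return dnn_module;
}
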
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] libavfilter/dnn: fix openvino async mode

2022-12-16 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Saliev, Rafik F
> Sent: Friday, December 16, 2022 6:35 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v2] libavfilter/dnn: fix openvino async mode
> 
> Bugfix: The OpenVino DNN backend in the 'async' mode sets 'task-
> >inference_done' to 'complete' prior to data copy from OpenVino output
> buffer to task's output frame.
> This order causes task destroy in ff_dnn_get_result_common() prior to model
> output processing.
> 
> Signed-off-by: Rafik Saliev 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index b494f26f55..b67f288336 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -244,7 +244,6 @@ static void infer_completion_callback(void *args)
>  av_assert0(request->lltask_count >= 1);
>  for (int i = 0; i < request->lltask_count; ++i) {
>  task = request->lltasks[i]->task;
> -task->inference_done++;
> 
>  switch (ov_model->model->func_type) {
>  case DFT_PROCESS_FRAME:
> @@ -278,6 +277,7 @@ static void infer_completion_callback(void *args)
>  break;
>  }
> 
> +task->inference_done++;
>  av_freep(>lltasks[i]);
>  output.data = (uint8_t *)output.data
>+ output.width * output.height * output.channels *
> get_datatype_size(output.dt);
> --
LGTM, will push soon, thanks.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] libavfilter/dnn/dnn_backend_openvino.c: fix openvino async mode

2022-12-15 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Saliev, Rafik F
> Sent: Monday, December 12, 2022 6:31 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfilter/dnn/dnn_backend_openvino.c:
> fix openvino async mode
> 
> Bugfix: The OpenVino DNN backend in the 'async' mode sets 'task-
> >inference_done' to 'complete' prior to data copy from OpenVino output
> buffer to task's output frame.
> This order causes task destroy in ff_dnn_get_result_common() prior to
> model output processing.
> 
> Signed-off-by: Rafik Saliev 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>

There's a warning at
https://patchwork.ffmpeg.org/project/ffmpeg/patch/ph7pr11mb5887f1a68c19249217a6c09eda...@ph7pr11mb5887.namprd11.prod.outlook.com/,
please fix it and send a v2 patch, thanks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] backends of libavfilter/dnn module

2022-12-03 Thread Guo, Yejun
Hi,

There are discussions about dnn module at 
https://etherpad.mit.edu/p/FF_dev_meeting_20221202.

1) Regarding "Delete the native backend? Yes",
I agree, since DNN inference is performance sensitive, and I can take this effort.

I would also like advice on how to delete it. Usually we'd mark it as
deprecated and then delete it after a long time. For this case, my idea is
(a rough sketch of the filter-side check follows below):
Step 1: delete the native backend code under libavfilter/dnn, and also add an error
message in the filters so the end user gets a clear message that it is not supported.
Step 2: after a long time (maybe the next major release), delete the 'error
message' (all code related to the native backend) in the filters.
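
A minimal sketch of the Step 1 filter-side check, only to illustrate the idea;
the context member names and the exact message are assumptions, not a finished
patch:

static av_cold int dnn_filter_init(AVFilterContext *context)
{
    DnnProcessingContext *ctx = context->priv; /* assumed filter context type */

    /* Hypothetical Step 1 check: the native value may still be parsed from the
     * command line, but we reject it with a clear message instead of failing
     * later somewhere inside libavfilter/dnn. */
    if (ctx->backend_type == DNN_NATIVE) {     /* assumed option field */
        av_log(context, AV_LOG_ERROR,
               "The native backend is no longer supported, please use the "
               "tensorflow or openvino backend instead.\n");
        return AVERROR(ENOSYS);
    }
    return 0;
}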

2) Regarding "Move to a separate library? Delete? Move to somewhere else?",
@Pedro Arthur  any comment?

And I have two other opens about the backend.

3) Adding libtorch as a new backend after the native backend is deleted.
Deep learning models are usually developed on different frameworks, so the
model files come in different formats. The most popular framework at the moment is
probably pytorch, and we see lots of new model files in pytorch format. My idea
is to embrace libtorch as a new backend. @Fu, Ting has finished the code
and is willing to upstream it, together with a new feature in
vf_dnn_processing.c: basic VSR (video super resolution), whose model file is
currently only available in pytorch format.

4) What about the many other deep learning frameworks?
There are also many other promising dnn inference frameworks, so how do we
support them? One method is that they provide a glue layer implementing the
interface in libavfilter/dnn_interface.h, and we (ffmpeg) dlopen the glue
layer library. The implementation behind that glue layer can be another framework,
or even a service. Anyway, this is a long term option and can be discussed in
more detail when the requirement appears; a rough sketch of the dlopen idea is below.
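
To make 4) a bit more concrete, a rough sketch of the dlopen idea, assuming each
glue library exports one well-known symbol returning the function table from
libavfilter/dnn_interface.h (the symbol name and the loader itself are made up
for illustration, they do not exist in FFmpeg today):

#include <dlfcn.h>

typedef DNNModule *(*ext_get_module_fn)(void);

static DNNModule *load_external_dnn_backend(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    ext_get_module_fn get_module;

    if (!handle) {
        av_log(NULL, AV_LOG_ERROR, "cannot open DNN glue library %s: %s\n",
               path, dlerror());
        return NULL;
    }
    /* assumed entry point name; the glue library fills in the dnn_interface.h
     * function pointers, backed by another framework or even a remote service */
    get_module = (ext_get_module_fn)dlsym(handle, "ff_external_dnn_get_module");
    if (!get_module) {
        av_log(NULL, AV_LOG_ERROR, "no DNN entry point found in %s\n", path);
        dlclose(handle);
        return NULL;
    }
    /* the handle must stay open for as long as the returned module is in use */
    return get_module();
}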

Thanks
Yejun
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH V2] lavf/dnn: dump OpenVINO model input/output names to OVMdel struct.

2022-07-23 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of Ting Fu
Sent: 2022年7月21日 17:41
To: ffmpeg-devel@ffmpeg.org
Subject: [FFmpeg-devel] [PATCH V2] lavf/dnn: dump OpenVINO model input/output 
names to OVMdel struct.

Dump all input/output names to OVModel struct. In case other funcs use them for 
reporting errors or locating issues.

Signed-off-by: Ting Fu 
---
 libavfilter/dnn/dnn_backend_openvino.c | 66 +++---
 1 file changed, 48 insertions(+), 18 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index cf012aca4c..b494f26f55 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -58,6 +58,8 @@ typedef struct OVModel{
 SafeQueue *request_queue;   // holds OVRequestItem
 Queue *task_queue;  // holds TaskItem
 Queue *lltask_queue; // holds LastLevelTaskItem
+const char *all_input_names;
+const char *all_output_names;
 } OVModel;
 
 // one request for one call to openvino @@ -211,19 +213,9 @@ static void 
infer_completion_callback(void *args)
 
 status = ie_infer_request_get_blob(request->infer_request, 
task->output_names[0], &output_blob);
 if (status != OK) {
-//incorrect output name
-char *model_output_name = NULL;
-char *all_output_names = NULL;
-size_t model_output_count = 0;
-av_log(ctx, AV_LOG_ERROR, "Failed to get model output data\n");
-status = ie_network_get_outputs_number(ov_model->network, 
&model_output_count);
-for (size_t i = 0; i < model_output_count; i++) {
-status = ie_network_get_output_name(ov_model->network, i, 
&model_output_name);
-APPEND_STRING(all_output_names, model_output_name)
-}
 av_log(ctx, AV_LOG_ERROR,
"output \"%s\" may not correct, all output(s) are: \"%s\"\n",
-   task->output_names[0], all_output_names);
+   task->output_names[0], ov_model->all_output_names);
 return;
 }
 
@@ -336,13 +328,23 @@ static int init_model_ov(OVModel *ov_model, const char 
*input_name, const char *
 // while we pass NHWC data from FFmpeg to openvino
 status = ie_network_set_input_layout(ov_model->network, input_name, NHWC);
 if (status != OK) {
-av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for input 
%s\n", input_name);
+if (status == NOT_FOUND) {
+av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model, failed 
to set input layout as NHWC, "\
+  "all input(s) are: \"%s\"\n", 
input_name, ov_model->all_input_names);
+} else{
+av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for input 
%s\n", input_name);
+}
 ret = DNN_GENERIC_ERROR;
 goto err;
 }
 status = ie_network_set_output_layout(ov_model->network, output_name, 
NHWC);
 if (status != OK) {
-av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for output 
%s\n", output_name);
+if (status == NOT_FOUND) {
+av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model, failed 
to set output layout as NHWC, "\
+  "all output(s) are: \"%s\"\n", 
input_name, ov_model->all_output_names);
+} else{
+av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for output 
%s\n", output_name);
+}
 ret = DNN_GENERIC_ERROR;
 goto err;
 }
@@ -505,7 +507,6 @@ static int get_input_ov(void *model, DNNData *input, const 
char *input_name)
 OVModel *ov_model = model;
 OVContext *ctx = &ov_model->ctx;
 char *model_input_name = NULL;
-char *all_input_names = NULL;
 IEStatusCode status;
 size_t model_input_count = 0;
 dimensions_t dims;
@@ -538,15 +539,12 @@ static int get_input_ov(void *model, DNNData *input, 
const char *input_name)
 input->width= input_resizable ? -1 : dims.dims[3];
 input->dt   = precision_to_datatype(precision);
 return 0;
-} else {
-//incorrect input name
-APPEND_STRING(all_input_names, model_input_name)
 }
 
 ie_network_name_free(&model_input_name);
 }
 
-av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model, all input(s) 
are: \"%s\"\n", input_name, all_input_names);
+av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model, all 
+ input(s) are: \"%s\"\n", input_name, ov_model->all_input_names);
 return AVERROR(EINVAL);
 }
 
@@ -729,6 +727,8 @@ DNNModel *ff_dnn_load_model_ov(const char *model_filename, 
DNNFunctionType func_
 OVModel *ov_model = NULL;
 OVContext *ctx = NULL;
 IEStatusCode status;
+size_t node_count = 0;
+char *node_name = NULL;
 
 model = av_mallocz(sizeof(DNNModel));
 if (!model){
@@ -744,6 +744,8 @@ DNNModel *ff_dnn_load_model_ov(const char *model_filename, 
DNNFunctionType func_
 ov_model->model = model;
 

Re: [FFmpeg-devel] [PATCH] lavf/sr: fix the segmentation fault caused by incorrect input frame free.

2022-07-21 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of Fu, Ting
Sent: 2022年7月21日 17:57
To: FFmpeg development discussions and patches 
Subject: Re: [FFmpeg-devel] [PATCH] lavf/sr: fix the segmentation fault caused 
by incorrect input frame free.

Kindly ping

> -Original Message-
> From: ffmpeg-devel  On Behalf Of Paul 
> B Mahol
> Sent: Monday, June 27, 2022 07:03 PM
> To: FFmpeg development discussions and patches 
> 
> Subject: Re: [FFmpeg-devel] [PATCH] lavf/sr: fix the segmentation 
> fault caused by incorrect input frame free.
> 
> lgtm

Thanks, and lgtm, will push soon.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH V2 8/8] libavfilter: Remove DNNReturnType from DNN Module

2022-03-08 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of Shubhanshu 
Saxena
Sent: 2022年3月3日 2:06
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena 
Subject: [FFmpeg-devel] [PATCH V2 8/8] libavfilter: Remove DNNReturnType from 
DNN Module

This patch removes all occurences of DNNReturnType from the DNN module.
This commit replaces DNN_SUCCESS by 0 (essentially the same), so the functions 
with DNNReturnType now return 0 in case of success, the negative values 
otherwise.

Signed-off-by: Shubhanshu Saxena 
---
 libavfilter/dnn/dnn_backend_common.c  | 10 ++--
 libavfilter/dnn/dnn_backend_common.h  |  8 +--
 libavfilter/dnn/dnn_backend_native.c  | 16 +++---
 .../dnn/dnn_backend_native_layer_avgpool.c|  2 +-
 .../dnn/dnn_backend_native_layer_conv2d.c |  4 +-
 .../dnn/dnn_backend_native_layer_dense.c  |  2 +-
 .../dnn_backend_native_layer_depth2space.c|  2 +-
 libavfilter/dnn/dnn_backend_openvino.c| 48 
 libavfilter/dnn/dnn_backend_tf.c  | 56 +--
 libavfilter/dnn/dnn_io_proc.c | 14 ++---
 libavfilter/dnn_interface.h   |  2 -
 libavfilter/vf_derain.c   |  2 +-
 libavfilter/vf_dnn_classify.c |  4 +-
 libavfilter/vf_dnn_detect.c   |  4 +-
 libavfilter/vf_dnn_processing.c   |  8 +--
 libavfilter/vf_sr.c   |  4 +-
 16 files changed, 92 insertions(+), 94 deletions(-)

Thanks, LGTM, will push soon.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 8/8] libavfilter: Remove DNNReturnType from DNN Module

2022-03-02 Thread Guo, Yejun



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: Thursday, February 24, 2022 4:23 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH 8/8] libavfilter: Remove DNNReturnType from
> DNN Module
> 
> This patch removes all occurences of DNNReturnType from the DNN module.
> This commit replaces DNN_SUCCESS by 0 (essentially the same), so the
> functions with DNNReturnType now return 0 in case of success, the negative
> values otherwise.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_common.c  | 10 ++--
>  libavfilter/dnn/dnn_backend_common.h  |  8 +--
>  libavfilter/dnn/dnn_backend_native.c  | 16 +++---
>  .../dnn/dnn_backend_native_layer_avgpool.c|  2 +-
>  .../dnn/dnn_backend_native_layer_conv2d.c |  4 +-
>  .../dnn/dnn_backend_native_layer_dense.c  |  2 +-
>  .../dnn_backend_native_layer_depth2space.c|  2 +-
>  libavfilter/dnn/dnn_backend_openvino.c| 48 
>  libavfilter/dnn/dnn_backend_tf.c  | 56 +--
>  libavfilter/dnn/dnn_io_proc.c | 14 ++---
>  libavfilter/dnn_interface.h   |  2 -
>  libavfilter/vf_derain.c   |  2 +-
>  libavfilter/vf_dnn_classify.c |  4 +-
>  libavfilter/vf_dnn_detect.c   |  4 +-
>  libavfilter/vf_dnn_processing.c   |  8 +--
>  libavfilter/vf_sr.c   |  4 +-
>  16 files changed, 92 insertions(+), 94 deletions(-)
> 


LGTM, will push soon, thanks.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] DNN data layout CHW or HWC and data normalization or denormalization?

2021-12-31 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of Alex
Sent: 2022年1月1日 3:16
To: FFmpeg development discussions and patches 
Subject: [FFmpeg-devel] DNN data layout CHW or HWC and data normalization or 
denormalization?

Hi!
Can anyone tell me whether the layout of DNNData->data is CHW, HWC, NCHW or
NHWC?
And how do I perform the conversion from HWC to CHW and back?
How do I normalize and denormalize data (-1..0..1 <=> 0..255)?

Just NHWC is supported now, and patches are welcome.
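
To make that concrete, a small standalone sketch (this is not code from FFmpeg;
the [-1, 1] float range is just one common convention) of an HWC-to-CHW repack
plus normalization/denormalization, which is what a caller would need to add on
top of the NHWC data the DNN module currently provides:

#include <stdint.h>

/* packed HWC floats -> planar CHW floats; the reverse swaps the two indices */
static void hwc_to_chw(const float *hwc, float *chw, int h, int w, int c)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            for (int k = 0; k < c; k++)
                chw[k * h * w + y * w + x] = hwc[(y * w + x) * c + k];
}

static void normalize_u8(const uint8_t *src, float *dst, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] / 127.5f - 1.0f;        /* 0..255 -> -1..1 */
}

static void denormalize_u8(const float *src, uint8_t *dst, int n)
{
    for (int i = 0; i < n; i++) {
        float v = (src[i] + 1.0f) * 127.5f;     /* -1..1 -> 0..255 */
        dst[i] = v < 0.f ? 0 : v > 255.f ? 255 : (uint8_t)(v + 0.5f);
    }
}
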
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] avfilter/dnn: fix incompatible integer to pointer conversion warning

2021-12-10 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of 
lance.lmw...@gmail.com
Sent: 2021年12月9日 22:37
To: ffmpeg-devel@ffmpeg.org
Cc: Limin Wang 
Subject: [FFmpeg-devel] [PATCH v2] avfilter/dnn: fix incompatible integer to 
pointer conversion warning

From: Limin Wang 

---
 libavfilter/dnn/dnn_backend_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c 
b/libavfilter/dnn/dnn_backend_common.c
index 6a9c4cc..dd7bdf4 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -83,10 +83,10 @@ static void *async_thread_routine(void *args)
 void *request = async_module->args;
 
 if (async_module->start_inference(request) != DNN_SUCCESS) {
-return DNN_ASYNC_FAIL;
+return (void*)DNN_ASYNC_FAIL;

there is already (void*) in DNN_ASYNC_FAIL

 }
 async_module->callback(request);
-return DNN_ASYNC_SUCCESS;
+return (void*)DNN_ASYNC_SUCCESS;
 }
 
 DNNReturnType ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
--
1.8.3.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with 
subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 3/5] avfilter/dnn: fix the return value of async_thread_routine

2021-12-09 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of 
lance.lmw...@gmail.com
Sent: 2021年12月9日 9:20
To: ffmpeg-devel@ffmpeg.org
Cc: Limin Wang 
Subject: [FFmpeg-devel] [PATCH 3/5] avfilter/dnn: fix the return value of 
async_thread_routine

From: Limin Wang 

Signed-off-by: Limin Wang 
---
 libavfilter/dnn/dnn_backend_common.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c 
b/libavfilter/dnn/dnn_backend_common.c
index 6a9c4cc..8c020e5 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -83,10 +83,13 @@ static void *async_thread_routine(void *args)
 void *request = async_module->args;
 
 if (async_module->start_inference(request) != DNN_SUCCESS) {
-return DNN_ASYNC_FAIL;
+pthread_exit((void*)DNN_ASYNC_FAIL);
+return NULL;

Could you share the reason for this change?
From man pthread_exit:
Performing a return from the start function of any thread other than the main
thread results in an implicit call to
pthread_exit(), using the function's return value as the thread's exit status.

 }
 async_module->callback(request);
-return DNN_ASYNC_SUCCESS;
+
+pthread_exit((void*)DNN_ASYNC_SUCCESS);
+return NULL;
 }
 
 DNNReturnType ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
-- 
1.8.3.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avfilter/dnn/dnn_backend_common: check thread create status before join thread

2021-11-24 Thread Guo, Yejun


-Original Message-
From: ffmpeg-devel  On Behalf Of Steven Liu
Sent: 2021年11月19日 20:48
To: ffmpeg-devel@ffmpeg.org
Cc: Steven Liu ; Yu Yang 
Subject: [FFmpeg-devel] [PATCH] avfilter/dnn/dnn_backend_common: check thread 
create status before join thread

From: Steven Liu 

fix SIGSEGV problem, check the thread create status before join thread.
set the init status to 0 when create DNNAsyncExecModule, and set status
to 1 after pthread_create success.

coredump backtrace info:
[Thread 0x7fff4778e700 (LWP 323218) exited]

Program received signal SIGSEGV, Segmentation fault.
0x7fffed71af81 in pthread_join () from /lib64/libpthread.so.0
(gdb) bt
0  0x7fffed71af81 in pthread_join () from /lib64/libpthread.so.0
1  0x00872e3a in ff_dnn_start_inference_async (ctx=0x30cbe80, 
async_module=0x4848c58) at libavfilter/dnn/dnn_backend_common.c:122
2  0x00870f70 in execute_model_tf (request=0x4848c40, 
lltask_queue=0x484c7c0) at libavfilter/dnn/dnn_backend_tf.c:
3  0x00871195 in ff_dnn_execute_model_tf (model=0x30c9700, 
exec_params=0x7fffafb0) at libavfilter/dnn/dnn_backend_tf.c:1168
4  0x0084a475 in ff_dnn_execute_model (ctx=0x30f8388, 
in_frame=0x4890fc0, out_frame=0x485f780) at libavfilter/dnn_filter_common.c:129
5  0x00524d69 in activate (filter_ctx=0x3100a40) at 
libavfilter/vf_dnn_processing.c:299
6  0x0046bc68 in ff_filter_activate (filter=0x3100a40) at 
libavfilter/avfilter.c:1364
7  0x004701fd in ff_filter_graph_run_once (graph=0x3114cc0) at 
libavfilter/avfiltergraph.c:1341
8  0x00471331 in push_frame (graph=0x3114cc0) at 
libavfilter/buffersrc.c:156
9  0x00471861 in av_buffersrc_add_frame_flags (ctx=0x484ce00, 
frame=0x41670c0, flags=4) at libavfilter/buffersrc.c:224
10 0x0042d415 in ifilter_send_frame (ifilter=0x314e300, 
frame=0x41670c0) at fftools/ffmpeg.c:2249
11 0x0042d682 in send_frame_to_filters (ist=0x30ff1c0, 
decoded_frame=0x41670c0) at fftools/ffmpeg.c:2323
12 0x0042e3b5 in decode_video (ist=0x30ff1c0, pkt=0x30b0b40, 
got_output=0x7fffb524, duration_pts=0x7fffb528, eof=0, 
decode_failed=0x7fffb520)
   at fftools/ffmpeg.c:2525
13 0x0042ecd4 in process_input_packet (ist=0x30ff1c0, pkt=0x3148cc0, 
no_eof=0) at fftools/ffmpeg.c:2681
14 0x00435b2d in process_input (file_index=0) at fftools/ffmpeg.c:4579
15 0x00435fe8 in transcode_step () at fftools/ffmpeg.c:4719
16 0x0043610b in transcode () at fftools/ffmpeg.c:4773
17 0x004368a7 in main (argc=8, argv=0x7fffbd68) at 
fftools/ffmpeg.c:4977

Reported-by: Yu Yang 
Signed-off-by: Steven Liu 
---
 libavfilter/dnn/dnn_backend_common.c | 23 +++
 libavfilter/dnn/dnn_backend_common.h |  1 +
 libavfilter/dnn/dnn_backend_tf.c |  4 +++-
 3 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c 
b/libavfilter/dnn/dnn_backend_common.c
index 6a9c4cc87f..a25d0eded1 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -96,10 +96,13 @@ DNNReturnType 
ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
 return DNN_ERROR;
 }
 #if HAVE_PTHREAD_CANCEL
-pthread_join(async_module->thread_id, &status);
-if (status == DNN_ASYNC_FAIL) {
-av_log(NULL, AV_LOG_ERROR, "Last Inference Failed.\n");
-return DNN_ERROR;
+if (async_module->thread_created) {
+pthread_join(async_module->thread_id, &status);
+if (status == DNN_ASYNC_FAIL) {
+av_log(NULL, AV_LOG_ERROR, "Last Inference Failed.\n");
+return DNN_ERROR;
+}
+async_module->thread_created = 0;
 }
 #endif
 async_module->start_inference = NULL;
@@ -119,16 +122,20 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, 
DNNAsyncExecModule *async_
 }
 
 #if HAVE_PTHREAD_CANCEL
-pthread_join(async_module->thread_id, &status);
-if (status == DNN_ASYNC_FAIL) {
-av_log(ctx, AV_LOG_ERROR, "Unable to start inference as previous 
inference failed.\n");
-return DNN_ERROR;
+if (async_module->thread_created) {
+pthread_join(async_module->thread_id, &status);
+if (status == DNN_ASYNC_FAIL) {
+av_log(ctx, AV_LOG_ERROR, "Unable to start inference as previous 
inference failed.\n");
+return DNN_ERROR;
+}
+async_module->thread_created = 0;
 }
 ret = pthread_create(&async_module->thread_id, NULL, async_thread_routine, 
async_module);
 if (ret != 0) {
 av_log(ctx, AV_LOG_ERROR, "Unable to start async inference.\n");
 return DNN_ERROR;
 }
+async_module->thread_created = 1;
 #else
 if (async_module->start_inference(async_module->args) != DNN_SUCCESS) {
 return DNN_ERROR;
diff --git a/libavfilter/dnn/dnn_backend_common.h 
b/libavfilter/dnn/dnn_backend_common.h
index 6b6a5e21ae..6c4909a489 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ 

Re: [FFmpeg-devel] [PATCH v5 1/6] lavfi/dnn: Task-based Inference in Native Backend

2021-08-27 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年8月26日 5:08
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH v5 1/6] lavfi/dnn: Task-based Inference in
> Native Backend
> 
> This commit rearranges the code in Native Backend to use the TaskItem for
> inference.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_native.c | 176 ++-
>  libavfilter/dnn/dnn_backend_native.h |   2 +
>  2 files changed, 121 insertions(+), 57 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native.c
> b/libavfilter/dnn/dnn_backend_native.c
> index a6be27f1fd..3b2a3aa55d 100644
> --- a/libavfilter/dnn/dnn_backend_native.c
> +++ b/libavfilter/dnn/dnn_backend_native.c

LGTM, will push this patch set soon, thanks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution Modes in DNN Filters

2021-08-21 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: Saturday, August 21, 2021 3:05 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution Modes
> in DNN Filters
> 
> On Sat, Aug 21, 2021 at 8:41 AM Guo, Yejun  wrote:
> 
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > Shubhanshu Saxena
> > > Sent: 2021年8月20日 22:21
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Shubhanshu Saxena 
> > > Subject: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution
> > > Modes in DNN Filters
> > >
> > > This commit unifies the async and sync mode from the DNN filters'
> > > perspective. As of this commit, the Native backend only supports
> > > synchronous execution mode.
> > >
> > > Now the user can switch between async and sync mode by using the
> 'async'
> > > option in the backend_configs. The values can be 1 for async and 0
> > > for
> > sync
> > > mode of execution.
> > >
> > > This commit affects the following filters:
> > > 1. vf_dnn_classify
> > > 2. vf_dnn_detect
> > > 3. vf_dnn_processing
> > > 4. vf_sr
> > > 5. vf_derain
> > >
> > > Signed-off-by: Shubhanshu Saxena 
> > > ---
> > >  libavfilter/dnn/dnn_backend_common.c   |  2 +-
> > >  libavfilter/dnn/dnn_backend_common.h   |  5 +-
> > >  libavfilter/dnn/dnn_backend_native.c   | 59 +++-
> > >  libavfilter/dnn/dnn_backend_native.h   |  6 ++
> > >  libavfilter/dnn/dnn_backend_openvino.c | 94
> > > ++ libavfilter/dnn/dnn_backend_openvino.h |  3
> +-
> > >  libavfilter/dnn/dnn_backend_tf.c   | 35 ++
> > >  libavfilter/dnn/dnn_backend_tf.h   |  3 +-
> > >  libavfilter/dnn/dnn_interface.c|  8 +--
> > >  libavfilter/dnn_filter_common.c| 23 +--
> > >  libavfilter/dnn_filter_common.h|  3 +-
> > >  libavfilter/dnn_interface.h|  4 +-
> > >  libavfilter/vf_derain.c|  7 ++
> > >  libavfilter/vf_dnn_classify.c  |  4 +-
> > >  libavfilter/vf_dnn_detect.c|  8 +--
> > >  libavfilter/vf_dnn_processing.c|  8 +--
> > >  libavfilter/vf_sr.c|  8 +++
> > >  17 files changed, 140 insertions(+), 140 deletions(-)
> > >
> >
> > https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=4638 caught a
> > warning:
> > CC  libavfilter/vf_dnn_detect.o
> > src/libavfilter/vf_dnn_detect.c:499:12: warning: ‘dnn_detect_activate’
> > defined but not used [-Wunused-function]  static int
> > dnn_detect_activate(AVFilterContext *filter_ctx)
> > ^~~
> > CC  libavfilter/vf_dnn_processing.o
> > src/libavfilter/vf_dnn_processing.c:413:12: warning: ‘activate’
> > defined but not used [-Wunused-function]  static int
> > activate(AVFilterContext *filter_ctx)
> > ^~~~
> >
> > I know it is fixed by the next patch, and the reason to separate these
> > patches is for better change tracking.
> >
> > So, we can add 'av_unused' for these unused functions in this patch.
> >
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >
> > To unsubscribe, visit link above, or email
> > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
> >
> 
> If I understood correctly, we need to mark these unused activate
> functions as av_unused in this patch and then fix this in the next patch as
> already done. Please correct me if I am wrong.

Just add the following change in this patch 2.

$ git diff
diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
index 2666abfcdc..11de376753 100644
--- a/libavfilter/vf_dnn_detect.c
+++ b/libavfilter/vf_dnn_detect.c
@@ -496,7 +496,7 @@ static int dnn_detect_activate_async(AVFilterContext 
*filter_ctx)
 return 0;
 }

-static int dnn_detect_activate(AVFilterContext *filter_ctx)
+static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx)
 {
 DnnDetectContext *ctx = filter_ctx->priv;

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 410cc887dc..7435dd4959 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -410,7 +410,7 @@ static int activate_async(AVFilterContext *filter_ctx)
 return 0;
 }

-static int activate(AVFilterContext *filter_ctx)
+static av_unused int activate(AVFilterContext *filter_ctx)
 {
 DnnProcessingContext *ctx = filter_ctx->priv;

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution Modes in DNN Filters

2021-08-20 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年8月20日 22:21
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution Modes in
> DNN Filters
> 
> This commit unifies the async and sync mode from the DNN filters'
> perspective. As of this commit, the Native backend only supports
> synchronous execution mode.
> 
> Now the user can switch between async and sync mode by using the 'async'
> option in the backend_configs. The values can be 1 for async and 0 for sync
> mode of execution.
> 
> This commit affects the following filters:
> 1. vf_dnn_classify
> 2. vf_dnn_detect
> 3. vf_dnn_processing
> 4. vf_sr
> 5. vf_derain
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_common.c   |  2 +-
>  libavfilter/dnn/dnn_backend_common.h   |  5 +-
>  libavfilter/dnn/dnn_backend_native.c   | 59 +++-
>  libavfilter/dnn/dnn_backend_native.h   |  6 ++
>  libavfilter/dnn/dnn_backend_openvino.c | 94 ++
> libavfilter/dnn/dnn_backend_openvino.h |  3 +-
>  libavfilter/dnn/dnn_backend_tf.c   | 35 ++
>  libavfilter/dnn/dnn_backend_tf.h   |  3 +-
>  libavfilter/dnn/dnn_interface.c|  8 +--
>  libavfilter/dnn_filter_common.c| 23 +--
>  libavfilter/dnn_filter_common.h|  3 +-
>  libavfilter/dnn_interface.h|  4 +-
>  libavfilter/vf_derain.c|  7 ++
>  libavfilter/vf_dnn_classify.c  |  4 +-
>  libavfilter/vf_dnn_detect.c|  8 +--
>  libavfilter/vf_dnn_processing.c|  8 +--
>  libavfilter/vf_sr.c|  8 +++
>  17 files changed, 140 insertions(+), 140 deletions(-)
> 

https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=4638 caught a warning:
CC  libavfilter/vf_dnn_detect.o
src/libavfilter/vf_dnn_detect.c:499:12: warning: ‘dnn_detect_activate’ defined 
but not used [-Wunused-function]
 static int dnn_detect_activate(AVFilterContext *filter_ctx)
^~~
CC  libavfilter/vf_dnn_processing.o
src/libavfilter/vf_dnn_processing.c:413:12: warning: ‘activate’ defined but not 
used [-Wunused-function]
 static int activate(AVFilterContext *filter_ctx)
^~~~

I know it is fixed by the next patch, and the reason to separate these patches 
is for better change tracking.

So, we can add 'av_unused' for these unused functions in this patch.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v3 9/9] [GSoC] lavfi/dnn: DNNAsyncExecModule Execution Failure Handling

2021-08-09 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Fu,
> Ting
> Sent: 2021年8月9日 18:13
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v3 9/9] [GSoC] lavfi/dnn:
> DNNAsyncExecModule Execution Failure Handling
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > Shubhanshu Saxena
> > Sent: 2021年8月8日 18:56
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Shubhanshu Saxena 
> > Subject: [FFmpeg-devel] [PATCH v3 9/9] [GSoC] lavfi/dnn:
> > DNNAsyncExecModule Execution Failure Handling
> >
> > This commit adds the case handling if the asynchronous execution of a
> > request fails by checking the exit status of the thread when joining
> > before starting another execution. On failure, it does the cleanup as well.
> >
> > Signed-off-by: Shubhanshu Saxena 
> > ---
> >  libavfilter/dnn/dnn_backend_common.c | 23 +++
> >  libavfilter/dnn/dnn_backend_tf.c | 10 +-
> >  2 files changed, 28 insertions(+), 5 deletions(-)
> >
> > diff --git a/libavfilter/dnn/dnn_backend_common.c
> > b/libavfilter/dnn/dnn_backend_common.c
> > index 470fffa2ae..426683b73d 100644
> > --- a/libavfilter/dnn/dnn_backend_common.c
> > +++ b/libavfilter/dnn/dnn_backend_common.c
> > @@ -23,6 +23,9 @@
> >
> >  #include "dnn_backend_common.h"
> >
> > +#define DNN_ASYNC_SUCCESS (void *)0
> > +#define DNN_ASYNC_FAIL (void *)-1
> > +
> >  int ff_check_exec_params(void *ctx, DNNBackendType backend,
> > DNNFunctionType func_type, DNNExecBaseParams *exec_params)  {
> >  if (!exec_params) {
> > @@ -79,18 +82,25 @@ static void *async_thread_routine(void *args)
> >  DNNAsyncExecModule *async_module = args;
> >  void *request = async_module->args;
> >
> > -async_module->start_inference(request);
> > +if (async_module->start_inference(request) != DNN_SUCCESS) {
> > +return DNN_ASYNC_FAIL;
> > +}
> >  async_module->callback(request);
> > -return NULL;
> > +return DNN_ASYNC_SUCCESS;
> >  }
> >
> >  DNNReturnType ff_dnn_async_module_cleanup(DNNAsyncExecModule
> > *async_module)  {
> > +void *status = 0;
> >  if (!async_module) {
> >  return DNN_ERROR;
> >  }
> >  #if HAVE_PTHREAD_CANCEL
> > -pthread_join(async_module->thread_id, NULL);
> > +pthread_join(async_module->thread_id, &status);
> > +if (status == DNN_ASYNC_FAIL) {
> > +av_log(NULL, AV_LOG_ERROR, "Last Inference Failed.\n");
> > +return DNN_ERROR;
> > +}
> >  #endif
> >  async_module->start_inference = NULL;
> >  async_module->callback = NULL;
> > @@ -101,6 +111,7 @@ DNNReturnType
> > ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
> > DNNReturnType ff_dnn_start_inference_async(void *ctx,
> > DNNAsyncExecModule *async_module)  {
> >  int ret;
> > +void *status = 0;
> >
> >  if (!async_module) {
> >  av_log(ctx, AV_LOG_ERROR, "async_module is null when starting
> > async inference.\n"); @@ -108,7 +119,11 @@ DNNReturnType
> > ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
> >  }
> >
> >  #if HAVE_PTHREAD_CANCEL
> > -pthread_join(async_module->thread_id, NULL);
> > +pthread_join(async_module->thread_id, &status);
> > +if (status == DNN_ASYNC_FAIL) {
> > +av_log(ctx, AV_LOG_ERROR, "Unable to start inference as
> > + previous
> > inference failed.\n");
> > +return DNN_ERROR;
> > +}
> >  ret = pthread_create(&async_module->thread_id, NULL,
> > async_thread_routine, async_module);
> >  if (ret != 0) {
> >  av_log(ctx, AV_LOG_ERROR, "Unable to start async
> > inference.\n"); diff --git a/libavfilter/dnn/dnn_backend_tf.c
> > b/libavfilter/dnn/dnn_backend_tf.c
> > index fb3f6f5ea6..ffec1b1328 100644
> > --- a/libavfilter/dnn/dnn_backend_tf.c
> > +++ b/libavfilter/dnn/dnn_backend_tf.c
> > @@ -91,6 +91,7 @@ AVFILTER_DEFINE_CLASS(dnn_tensorflow);
> >
> >  static DNNReturnType execute_model_tf(TFRequestItem *request,
> Queue
> > *inference_queue);  static void infer_completion_callback(void *args);
> > +static inline void destroy_request_item(TFRequestItem **arg);
> >
> >  static void free_buffer(void *data, size_t length)  { @@ -172,6
> > +173,10 @@ static DNNReturnType tf_start_inference(void *args)
> >request->status);
> >  if (TF_GetCode(request->status) != TF_OK) {
> >  av_log(&tf_model->ctx, AV_LOG_ERROR, "%s",
> > TF_Message(request-
> > >status));
> > +tf_free_request(infer_request);
> > +if (ff_safe_queue_push_back(tf_model->request_queue, request) <
> 0) {
> > +destroy_request_item(&request);
> > +}
> >  return DNN_ERROR;
> >  }
> >  return DNN_SUCCESS;
> > @@ -1095,7 +1100,10 @@ static DNNReturnType
> > execute_model_tf(TFRequestItem *request, Queue *inference_q
> >  }
> >
> >  if (task->async) {
> > -return ff_dnn_start_inference_async(ctx, &request->exec_module);
> > +if (ff_dnn_start_inference_async(ctx, 

Re: [FFmpeg-devel] [PATCH 2/2] lavfi/dnn_backend_ov: Rename RequestItem to OVRequestItem

2021-07-21 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年7月12日 0:15
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH 2/2] lavfi/dnn_backend_ov: Rename
> RequestItem to OVRequestItem
> 
> Rename RequestItem to OVRequestItem in the OpenVINO backend to avoid
> confusion.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 24 
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index b340859c12..f8d548feaf 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -54,18 +54,18 @@ typedef struct OVModel{
>  ie_core_t *core;
>  ie_network_t *network;
>  ie_executable_network_t *exe_network;
> -SafeQueue *request_queue;   // holds RequestItem
> +SafeQueue *request_queue;   // holds OVRequestItem
>  Queue *task_queue;  // holds TaskItem
>  Queue *inference_queue; // holds InferenceItem
>  } OVModel;
> 
>  // one request for one call to openvino -typedef struct RequestItem {
> +typedef struct OVRequestItem {
>  ie_infer_request_t *infer_request;
>  InferenceItem **inferences;
>  uint32_t inference_count;
>  ie_complete_call_back_t callback;
> -} RequestItem;
> +} OVRequestItem;
> 
>  #define APPEND_STRING(generated_string, iterate_string)
> \
>  generated_string = generated_string ? av_asprintf("%s %s",
> generated_string, iterate_string) : \ @@ -111,7 +111,7 @@ static int
> get_datatype_size(DNNDataType dt)
>  }
>  }
> 
> -static DNNReturnType fill_model_input_ov(OVModel *ov_model,
> RequestItem *request)
> +static DNNReturnType fill_model_input_ov(OVModel *ov_model,
> +OVRequestItem *request)
>  {
>  dimensions_t dims;
>  precision_e precision;
> @@ -198,7 +198,7 @@ static void infer_completion_callback(void *args)
>  dimensions_t dims;
>  precision_e precision;
>  IEStatusCode status;
> -RequestItem *request = args;
> +OVRequestItem *request = args;
>  InferenceItem *inference = request->inferences[0];
>  TaskItem *task = inference->task;
>  OVModel *ov_model = task->model;
> @@ -381,7 +381,7 @@ static DNNReturnType init_model_ov(OVModel
> *ov_model, const char *input_name, co
>  }
> 
>  for (int i = 0; i < ctx->options.nireq; i++) {
> -RequestItem *item = av_mallocz(sizeof(*item));
> +OVRequestItem *item = av_mallocz(sizeof(*item));
>  if (!item) {
>  goto err;
>  }
> @@ -422,7 +422,7 @@ err:
>  return DNN_ERROR;
>  }
> 
> -static DNNReturnType execute_model_ov(RequestItem *request, Queue
> *inferenceq)
> +static DNNReturnType execute_model_ov(OVRequestItem *request,
> Queue
> +*inferenceq)
>  {
>  IEStatusCode status;
>  DNNReturnType ret;
> @@ -639,7 +639,7 @@ static DNNReturnType get_output_ov(void *model,
> const char *input_name, int inpu
>  OVModel *ov_model = model;
>  OVContext *ctx = &ov_model->ctx;
>  TaskItem task;
> -RequestItem *request;
> +OVRequestItem *request;
>  AVFrame *in_frame = NULL;
>  AVFrame *out_frame = NULL;
>  IEStatusCode status;
> @@ -779,7 +779,7 @@ DNNReturnType ff_dnn_execute_model_ov(const
> DNNModel *model, DNNExecBaseParams *
>  OVModel *ov_model = model->model;
>  OVContext *ctx = &ov_model->ctx;
>  TaskItem task;
> -RequestItem *request;
> +OVRequestItem *request;
> 
>  if (ff_check_exec_params(ctx, DNN_OV, model->func_type,
> exec_params) != 0) {
>  return DNN_ERROR;
> @@ -827,7 +827,7 @@ DNNReturnType
> ff_dnn_execute_model_async_ov(const DNNModel *model,
> DNNExecBasePa  {
>  OVModel *ov_model = model->model;
>  OVContext *ctx = &ov_model->ctx;
> -RequestItem *request;
> +OVRequestItem *request;
>  TaskItem *task;
>  DNNReturnType ret;
> 
> @@ -904,7 +904,7 @@ DNNReturnType ff_dnn_flush_ov(const DNNModel
> *model)  {
>  OVModel *ov_model = model->model;
>  OVContext *ctx = &ov_model->ctx;
> -RequestItem *request;
> +OVRequestItem *request;
>  IEStatusCode status;
>  DNNReturnType ret;
> 
> @@ -943,7 +943,7 @@ void ff_dnn_free_model_ov(DNNModel **model)
>  if (*model){
>  OVModel *ov_model = (*model)->model;
>  while (ff_safe_queue_size(ov_model->request_queue) != 0) {
> -RequestItem *item = ff_safe_queue_pop_front(ov_model-
> >request_queue);
> +OVRequestItem *item =
> + ff_safe_queue_pop_front(ov_model->request_queue);
>  if (item && item->infer_request) {
>  ie_infer_request_free(&item->infer_request);
>  }
> --
Thanks, will push soon.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email

Re: [FFmpeg-devel] [PATCH V2 3/6] lavfi/dnn_backend_tf: Request-based Execution

2021-07-11 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年7月5日 18:31
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH V2 3/6] lavfi/dnn_backend_tf: Request-
> based Execution
> 
> This commit uses TFRequestItem and the existing sync execution mechanism
> to use request-based execution. It will help in adding async functionality to
> the TensorFlow backend later.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_common.h   |   3 +
>  libavfilter/dnn/dnn_backend_openvino.c |   2 +-
>  libavfilter/dnn/dnn_backend_tf.c   | 156 ++---
>  3 files changed, 91 insertions(+), 70 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_common.h
> b/libavfilter/dnn/dnn_backend_common.h
> index df59615f40..5281fdfed1 100644
> --- a/libavfilter/dnn/dnn_backend_common.h
> +++ b/libavfilter/dnn/dnn_backend_common.h
> @@ -26,6 +26,9 @@
> 
>  #include "../dnn_interface.h"
> 
> +#define DNN_BACKEND_COMMON_OPTIONS \
> +{ "nireq",   "number of request", 
> OFFSET(options.nireq),
> AV_OPT_TYPE_INT,{ .i64 = 0 }, 0, INT_MAX, FLAGS },
> +
>  // one task for one function call from dnn interface  typedef struct TaskItem
> {
>  void *model; // model for the backend diff --git
> a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 3295fc79d3..f34b8150f5 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -75,7 +75,7 @@ typedef struct RequestItem {  #define FLAGS
> AV_OPT_FLAG_FILTERING_PARAM  static const AVOption
> dnn_openvino_options[] = {
>  { "device", "device to run model", OFFSET(options.device_type),
> AV_OPT_TYPE_STRING, { .str = "CPU" }, 0, 0, FLAGS },
> -{ "nireq",  "number of request",   OFFSET(options.nireq),
> AV_OPT_TYPE_INT,{ .i64 = 0 }, 0, INT_MAX, FLAGS },
> +DNN_BACKEND_COMMON_OPTIONS
>  { "batch_size",  "batch size per request", OFFSET(options.batch_size),
> AV_OPT_TYPE_INT,{ .i64 = 1 }, 1, 1000, FLAGS},
>  { "input_resizable", "can input be resizable or not",
> OFFSET(options.input_resizable), AV_OPT_TYPE_BOOL,   { .i64 = 0 }, 0, 1,
> FLAGS },
>  { NULL }
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index 578748eb35..e8007406c8 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -35,11 +35,13 @@
>  #include "dnn_backend_native_layer_maximum.h"
>  #include "dnn_io_proc.h"
>  #include "dnn_backend_common.h"
> +#include "safe_queue.h"
>  #include "queue.h"
>  #include 
> 
>  typedef struct TFOptions{
>  char *sess_config;
> +uint32_t nireq;
>  } TFOptions;
> 
>  typedef struct TFContext {
> @@ -53,6 +55,7 @@ typedef struct TFModel{
>  TF_Graph *graph;
>  TF_Session *session;
>  TF_Status *status;
> +SafeQueue *request_queue;
>  Queue *inference_queue;
>  } TFModel;
> 
> @@ -77,12 +80,13 @@ typedef struct TFRequestItem {  #define FLAGS
> AV_OPT_FLAG_FILTERING_PARAM  static const AVOption
> dnn_tensorflow_options[] = {
>  { "sess_config", "config for SessionOptions", 
> OFFSET(options.sess_config),
> AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
> +DNN_BACKEND_COMMON_OPTIONS
>  { NULL }
>  };
> 
>  AVFILTER_DEFINE_CLASS(dnn_tensorflow);
> 
> -static DNNReturnType execute_model_tf(Queue *inference_queue);
> +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue
> +*inference_queue);
> 
>  static void free_buffer(void *data, size_t length)  { @@ -237,6 +241,7 @@
> static DNNReturnType get_output_tf(void *model, const char *input_name,
> int inpu
>  AVFrame *in_frame = av_frame_alloc();
>  AVFrame *out_frame = NULL;
>  TaskItem task;
> +TFRequestItem *request;
> 
>  if (!in_frame) {
>  av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for input
> frame\n"); @@ -267,7 +272,13 @@ static DNNReturnType
> get_output_tf(void *model, const char *input_name, int inpu
>  return DNN_ERROR;
>  }
> 
> -ret = execute_model_tf(tf_model->inference_queue);
> +request = ff_safe_queue_pop_front(tf_model->request_queue);
> +if (!request) {
> +av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
> +return DNN_ERROR;
> +}
> +
> +ret = execute_model_tf(request, tf_model->inference_queue);
>  *output_width = out_frame->width;
>  *output_height = out_frame->height;
> 
> @@ -771,6 +782,7 @@ DNNModel *ff_dnn_load_model_tf(const char
> *model_filename, DNNFunctionType func_  {
>  DNNModel *model = NULL;
>  TFModel *tf_model = NULL;
> +TFContext *ctx = NULL;
> 
>  model = av_mallocz(sizeof(DNNModel));
>  if (!model){
> @@ -782,13 +794,14 @@ DNNModel *ff_dnn_load_model_tf(const char
> *model_filename, DNNFunctionType func_
>  av_freep();
>  return NULL;
>  }
> -

Re: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Fix Memory Leak in execute_model_ov

2021-07-04 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年6月19日 0:23
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Fix
> Memory Leak in execute_model_ov
> 
> In cases where the execution inside the function execute_model_ov fails,
> push the RequestItem back to the request_queue before returning the error.
> In case pushing back fails, release the allocated memory.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 12 +---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 702c4fb9ee..29ec8f6a8f 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -448,12 +448,12 @@ static DNNReturnType
> execute_model_ov(RequestItem *request, Queue *inferenceq)
>  status = ie_infer_set_completion_callback(request->infer_request,
> &request->callback);
>  if (status != OK) {
>  av_log(ctx, AV_LOG_ERROR, "Failed to set completion callback for
> inference\n");
> -return DNN_ERROR;
> +goto err;
>  }
>  status = ie_infer_request_infer_async(request->infer_request);
>  if (status != OK) {
>  av_log(ctx, AV_LOG_ERROR, "Failed to start async inference\n");
> -return DNN_ERROR;
> +goto err;
>  }
>  return DNN_SUCCESS;
>  } else {
> @@ -464,11 +464,17 @@ static DNNReturnType
> execute_model_ov(RequestItem *request, Queue *inferenceq)
>  status = ie_infer_request_infer(request->infer_request);
>  if (status != OK) {
>  av_log(ctx, AV_LOG_ERROR, "Failed to start synchronous model
> inference\n");
> -return DNN_ERROR;
> +goto err;
>  }
>  infer_completion_callback(request);
>  return (task->inference_done == task->inference_todo) ?
> DNN_SUCCESS : DNN_ERROR;
>  }
> +err:
> +if (ff_safe_queue_push_back(ov_model->request_queue, request) < 0)
> {
> +ie_infer_request_free(&request->infer_request);
> +av_freep(&request);
> +}
> +return DNN_ERROR;
>  }
> 
>  static DNNReturnType get_input_ov(void *model, DNNData *input, const
> char *input_name)
> --

LGTM, will push soon, thanks.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: fix crash when target is not specified

2021-06-18 Thread Guo, Yejun


> -Original Message-
> From: Guo, Yejun 
> Sent: 2021年6月13日 22:43
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun 
> Subject: [PATCH] lavfi/dnn_backend_openvino.c: fix crash when target is not
> specified
> 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 709a772a4d..dee0a8e047 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -596,8 +596,10 @@ static DNNReturnType
> extract_inference_from_task(DNNFunctionType func_type, Task
>  InferenceItem *inference;
>  const AVDetectionBBox *bbox = av_get_detection_bbox(header, i);
> 
> -if (av_strncasecmp(bbox->detect_label, params->target, 
> sizeof(bbox-
> >detect_label)) != 0) {
> -continue;
> +if (params->target) {
> +if (av_strncasecmp(bbox->detect_label, params->target,
> sizeof(bbox->detect_label)) != 0) {
> +continue;
> +}
>  }
> 
>  inference = av_malloc(sizeof(*inference));

Will push tomorrow if there's no objection, thanks.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Fix Memory Leak for RequestItem

2021-06-18 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年6月15日 1:56
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Fix
> Memory Leak for RequestItem
> 
> Fix memory leak for RequestItem upon error while pushing to the
> request_queue in the completion callback.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 709a772a4d..702c4fb9ee 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -293,6 +293,8 @@ static void infer_completion_callback(void *args)
> 
>  request->inference_count = 0;
>  if (ff_safe_queue_push_back(requestq, request) < 0) {
> +ie_infer_request_free(&request->infer_request);
> +av_freep(&request);
>  av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
>  return;
>  }
LGTM, will push soon, thanks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: fix crash when target is not specified

2021-06-13 Thread Guo Yejun
---
 libavfilter/dnn/dnn_backend_openvino.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index 709a772a4d..dee0a8e047 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -596,8 +596,10 @@ static DNNReturnType 
extract_inference_from_task(DNNFunctionType func_type, Task
 InferenceItem *inference;
 const AVDetectionBBox *bbox = av_get_detection_bbox(header, i);
 
-if (av_strncasecmp(bbox->detect_label, params->target, 
sizeof(bbox->detect_label)) != 0) {
-continue;
+if (params->target) {
+if (av_strncasecmp(bbox->detect_label, params->target, 
sizeof(bbox->detect_label)) != 0) {
+continue;
+}
 }
 
 inference = av_malloc(sizeof(*inference));
-- 
2.17.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH V2 5/5] lavfi/dnn: Fill Task using Common Function

2021-06-10 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年6月6日 2:08
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH V2 5/5] lavfi/dnn: Fill Task using Common
> Function
> 
> This commit adds a common function for filling the TaskItems
> in all three backends.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_common.c   | 20 
>  libavfilter/dnn/dnn_backend_common.h   | 15 +++
>  libavfilter/dnn/dnn_backend_openvino.c | 23 +++
>  3 files changed, 42 insertions(+), 16 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_common.c
> b/libavfilter/dnn/dnn_backend_common.c
> index a522ab5650..4d9d3f79b1 100644
> --- a/libavfilter/dnn/dnn_backend_common.c
> +++ b/libavfilter/dnn/dnn_backend_common.c
> @@ -49,3 +49,23 @@ int ff_check_exec_params(void *ctx, DNNBackendType
> backend, DNNFunctionType func
> 
>  return 0;
>  }
> +
> +DNNReturnType ff_dnn_fill_task(TaskItem *task, DNNExecBaseParams
> *exec_params, void *backend_model, int async, int do_ioproc) {
> +if (task == NULL || exec_params == NULL || backend_model == NULL)
> +return DNN_ERROR;
> +if (do_ioproc != 0 && do_ioproc != 1)
> +return DNN_ERROR;
> +if (async != 0 && async != 1)
> +return DNN_ERROR;
> +
> +task->do_ioproc = do_ioproc;
> +task->async = async;
> +task->input_name = exec_params->input_name;
> +task->in_frame = exec_params->in_frame;
> +task->out_frame = exec_params->out_frame;
> +task->model = backend_model;
> +task->nb_output = exec_params->nb_output;
> +task->output_names = exec_params->output_names;
> +
> +return DNN_SUCCESS;
> +}
> diff --git a/libavfilter/dnn/dnn_backend_common.h
> b/libavfilter/dnn/dnn_backend_common.h
> index d962312c16..df59615f40 100644
> --- a/libavfilter/dnn/dnn_backend_common.h
> +++ b/libavfilter/dnn/dnn_backend_common.h
> @@ -48,4 +48,19 @@ typedef struct InferenceItem {
> 
>  int ff_check_exec_params(void *ctx, DNNBackendType backend,
> DNNFunctionType func_type, DNNExecBaseParams *exec_params);
> 
> +/**
> + * Fill the Task for Backend Execution. It should be called after
> + * checking execution parameters using ff_check_exec_params.
> + *
> + * @param task pointer to the allocated task
> + * @param exec_param pointer to execution parameters
> + * @param backend_model void pointer to the backend model
> + * @param async flag for async execution. Must be 0 or 1
> + * @param do_ioproc flag for IO processing. Must be 0 or 1
> + *
> + * @retval DNN_SUCCESS if successful
> + * @retval DNN_ERROR if flags are invalid or any parameter is NULL
> + */
> +DNNReturnType ff_dnn_fill_task(TaskItem *task, DNNExecBaseParams
> *exec_params, void *backend_model, int async, int do_ioproc);
> +
>  #endif
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index c2487c35be..709a772a4d 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -793,14 +793,9 @@ DNNReturnType ff_dnn_execute_model_ov(const
> DNNModel *model, DNNExecBaseParams *
>  }
>  }
> 
> -task.do_ioproc = 1;
> -task.async = 0;
> -task.input_name = exec_params->input_name;
> -task.in_frame = exec_params->in_frame;
> -task.output_names = &exec_params->output_names[0];
> -task.out_frame = exec_params->out_frame ? exec_params->out_frame :
> exec_params->in_frame;
> -task.nb_output = exec_params->nb_output;
> -task.model = ov_model;
> +if (ff_dnn_fill_task(, exec_params, ov_model, 0, 1) !=
> DNN_SUCCESS) {
> +return DNN_ERROR;
> +}
> 
>  if (extract_inference_from_task(ov_model->model->func_type, ,
> ov_model->inference_queue, exec_params) != DNN_SUCCESS) {
>  av_log(ctx, AV_LOG_ERROR, "unable to extract inference from
> task.\n");
> @@ -841,14 +836,10 @@ DNNReturnType
> ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
>  return DNN_ERROR;
>  }
> 
> -task->do_ioproc = 1;
> -task->async = 1;
> -task->input_name = exec_params->input_name;
> -task->in_frame = exec_params->in_frame;
> -task->output_names = &exec_params->output_names[0];
> -task->out_frame = exec_params->out_frame ?
> exec_params->out_frame : exec_params->in_frame;
> -task->nb_output = exec_params->nb_output;
> -task->model = ov_model;
> +if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS)
> {
> +return DNN_ERROR;
> +}
> +
>  if (ff_queue_push_back(ov_model->task_queue, task) < 0) {
>  av_freep(&task);
>  av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");

will push tomorrow if there's no objection, thanks.
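
For reference, a minimal sketch (not from the patch itself) of how a backend would call the new helper following the documentation above; the function name and error handling are illustrative only:

static DNNReturnType execute_model_sketch(void *backend_model,
                                          DNNExecBaseParams *exec_params)
{
    TaskItem task;

    /* synchronous execution (async = 0) with io processing (do_ioproc = 1) */
    if (ff_dnn_fill_task(&task, exec_params, backend_model, 0, 1) != DNN_SUCCESS)
        return DNN_ERROR;

    /* ... hand the filled task to the backend's inference queue as usual ... */
    return DNN_SUCCESS;
}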

Re: [FFmpeg-devel] [PATCH 2/2] lavfi/vf_drawtext.c: fix CID 1485003

2021-06-08 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Ting
> Fu
> Sent: June 4, 2021 10:23
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 2/2] lavfi/vf_drawtext.c: fix CID 1485003
> 
> CID 1485003: Memory - illegal accesses (UNINIT)
> Using uninitialized value "sd".
> 
> Signed-off-by: Ting Fu 
> ---
>  libavfilter/vf_drawtext.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/vf_drawtext.c b/libavfilter/vf_drawtext.c
> index 382d589e26..c4c09894e4 100644
> --- a/libavfilter/vf_drawtext.c
> +++ b/libavfilter/vf_drawtext.c
> @@ -1554,7 +1554,7 @@ static int filter_frame(AVFilterLink *inlink,
> AVFrame *frame)
>  AVFrameSideData *sd;
>  int loop = 1;
> 
> -if (s->text_source == AV_FRAME_DATA_DETECTION_BBOXES && sd) {
> +if (s->text_source == AV_FRAME_DATA_DETECTION_BBOXES) {
>  sd = av_frame_get_side_data(frame,
> AV_FRAME_DATA_DETECTION_BBOXES);
>  if (sd) {
>  header = (AVDetectionBBoxHeader *)sd->data;
> --
thanks, will push soon.
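
The point of the change is the order of operations: fetch the side data first, then test the result, so 'sd' is never read before it is assigned. A simplified sketch of the corrected flow in filter_frame():

    AVFrameSideData *sd = NULL;

    if (s->text_source == AV_FRAME_DATA_DETECTION_BBOXES) {
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
        if (sd) {
            AVDetectionBBoxHeader *header = (AVDetectionBBoxHeader *)sd->data;
            /* ... iterate over header->nb_bboxes entries ... */
        }
    }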


Re: [FFmpeg-devel] [PATCH] lavfi/dnn/dnn_io_proc.c: fix CID 1484955

2021-06-04 Thread Guo, Yejun


> -Original Message-
> From: Guo, Yejun 
> Sent: May 29, 2021 21:24
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun 
> Subject: [PATCH] lavfi/dnn/dnn_io_proc.c: fix CID 1484955
> 
> CID 1484955:  Memory - corruptions  (ARRAY_VS_SINGLETON)
> ---
>  libavfilter/dnn/dnn_io_proc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
> index 021d004e1d..f55424d97c 100644
> --- a/libavfilter/dnn/dnn_io_proc.c
> +++ b/libavfilter/dnn/dnn_io_proc.c
> @@ -128,7 +128,7 @@ DNNReturnType
> ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData *input, void *lo
>  }
>  sws_scale(sws_ctx, (const uint8_t **)frame->data,
> frame->linesize, 0, frame->height,
> -   (uint8_t * const*)(&input->data),
> +   (uint8_t * const [4]){input->data, 0, 0, 0},
> (const int [4]){frame->width * 3 * sizeof(float),
> 0, 0, 0});
>  sws_freeContext(sws_ctx);
>  break;
will push tomorrow if there's no objection.
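
Background on the warning: sws_scale() takes an array of up to four plane pointers, and casting the address of the single 'data' pointer makes Coverity assume only one array element exists (ARRAY_VS_SINGLETON). The compound literal builds a real 4-element array with NULL for the unused planes. A self-contained sketch of the resulting call shape (the packed-RGB float sizes are illustrative):

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

static void scale_to_packed_float(struct SwsContext *sws_ctx,
                                  const AVFrame *frame, float *dst)
{
    /* one real destination plane, remaining plane pointers passed as NULL */
    sws_scale(sws_ctx, (const uint8_t **)frame->data, frame->linesize,
              0, frame->height,
              (uint8_t *const [4]){ (uint8_t *)dst, NULL, NULL, NULL },
              (const int [4]){ frame->width * 3 * (int)sizeof(float), 0, 0, 0 });
}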



[FFmpeg-devel] [PATCH] lavfi/dnn/dnn_io_proc.c: fix CID 1484955

2021-05-29 Thread Guo Yejun
CID 1484955:  Memory - corruptions  (ARRAY_VS_SINGLETON)
---
 libavfilter/dnn/dnn_io_proc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
index 021d004e1d..f55424d97c 100644
--- a/libavfilter/dnn/dnn_io_proc.c
+++ b/libavfilter/dnn/dnn_io_proc.c
@@ -128,7 +128,7 @@ DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, 
DNNData *input, void *lo
 }
 sws_scale(sws_ctx, (const uint8_t **)frame->data,
frame->linesize, 0, frame->height,
-   (uint8_t * const*)(&input->data),
+   (uint8_t * const [4]){input->data, 0, 0, 0},
(const int [4]){frame->width * 3 * sizeof(float), 
0, 0, 0});
 sws_freeContext(sws_ctx);
 break;
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Correct Pointer Type while Freeing

2021-05-27 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: May 28, 2021 2:06
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_openvino.c: Correct
> Pointer Type while Freeing
> 
> This commit corrects the type of pointer of elements from the
> inference queue in ff_dnn_free_model_ov.
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_openvino.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index e0781e854a..58c4ec9c9b 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -963,7 +963,7 @@ void ff_dnn_free_model_ov(DNNModel **model)
>  ff_safe_queue_destroy(ov_model->request_queue);
> 
>  while (ff_queue_size(ov_model->inference_queue) != 0) {
> -TaskItem *item =
> ff_queue_pop_front(ov_model->inference_queue);
> +InferenceItem *item =
> ff_queue_pop_front(ov_model->inference_queue);
>  av_freep(&item);
>  }
>  ff_queue_destroy(ov_model->inference_queue);

thanks for the catch, will push soon.
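
Note that av_freep() itself behaves the same either way, since it just frees and NULLs whatever pointer variable it is given; the fix matters because the inference queue stores InferenceItem entries, so that is the type the popped pointer should be declared as. The drained loop then reads:

    while (ff_queue_size(ov_model->inference_queue) != 0) {
        InferenceItem *item = ff_queue_pop_front(ov_model->inference_queue);
        av_freep(&item);
    }
    ff_queue_destroy(ov_model->inference_queue);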


Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter support draw text with detection bounding boxes in side_data

2021-05-25 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Guo,
> Yejun
> Sent: May 25, 2021 9:08
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter
> support draw text with detection bounding boxes in side_data
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> Guo,
> > Yejun
> > Sent: May 20, 2021 11:04
> > To: FFmpeg development discussions and patches
> > 
> > Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter
> > support draw text with detection bounding boxes in side_data
> >
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> Ting
> > > Fu
> > > Sent: May 14, 2021 16:47
> > > To: ffmpeg-devel@ffmpeg.org
> > > Subject: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter 
> > > support
> > > draw text with detection bounding boxes in side_data
> > >
> > > This feature can be used with dnn detection by setting vf_drawtext's
> option
> > > text_source=side_data_detection_bboxes, for example:
> > > ./ffmpeg -i face.jpeg -vf
> > >
> >
> dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
> > >
> >
> input=data:output=detection_out:labels=face-detection-adas-0001.label,dra
> > > wbox=box_source=
> > >
> >
> side_data_detection_bboxes,drawtext=text_source=side_data_detection_bbo
> > > xes:fontcolor=green:\
> > > fontsize=40, -y face_detect.jpeg
> > > Please note, the default fontsize of vf_drawtext is 12, which may be too
> > > small to be seen clearly.
> > >
> > > Signed-off-by: Ting Fu 
> > > ---
> > >  doc/filters.texi  |  8 
> > >  libavfilter/vf_drawtext.c | 77
> > > ---
> > >  2 files changed, 79 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/doc/filters.texi b/doc/filters.texi
> > > index f2ac8c4cc8..d10e6de03d 100644
> > > --- a/doc/filters.texi
> > > +++ b/doc/filters.texi
> > > @@ -10788,6 +10788,14 @@ parameter @var{text}.
> > >
> > >  If both @var{text} and @var{textfile} are specified, an error is thrown.
> > >
> > > +@item text_source
> > > +Text source should be set as side_data_detection_bboxes if you want to
> > use
> > > text data in
> > > +detection bboxes of side data.
> > > +
> > > +If text source is set, @var{text} and @var{textfile} will be ignored and 
> > > still
> > > use
> > > +text data in detection bboxes of side data. So please do not use this
> > > parameter
> > > +if you are not sure about the text source.
> > > +
> > >  @item reload
> > >  If set to 1, the @var{textfile} will be reloaded before each frame.
> > >  Be sure to update it atomically, or it may be read partially, or even 
> > > fail.
> > > diff --git a/libavfilter/vf_drawtext.c b/libavfilter/vf_drawtext.c
> > > index 7ea057b812..382d589e26 100644
> > > --- a/libavfilter/vf_drawtext.c
> > > +++ b/libavfilter/vf_drawtext.c
> > > @@ -55,6 +55,7 @@
> > >  #include "libavutil/time_internal.h"
> > >  #include "libavutil/tree.h"
> > >  #include "libavutil/lfg.h"
> > > +#include "libavutil/detection_bbox.h"
> > >  #include "avfilter.h"
> > >  #include "drawutils.h"
> > >  #include "formats.h"
> > > @@ -199,6 +200,8 @@ typedef struct DrawTextContext {
> > >  int tc24hmax;   ///< 1 if timecode is wrapped
> to
> > 24
> > > hours, 0 otherwise
> > >  int reload; ///< reload text file for each
> frame
> > >  int start_number;   ///< starting frame number for
> > > n/frame_num var
> > > +char *text_source_string;   ///< the string to specify text data
> > > source
> > > +enum AVFrameSideDataType text_source;
> > >  #if CONFIG_LIBFRIBIDI
> > >  int text_shaping;   ///< 1 to shape the text before
> > > drawing it
> > >  #endif
> > > @@ -246,6 +249,7 @@ static const AVOption drawtext_options[]= {
> > >  { "alpha",   "apply alpha while rendering", OFFSET(a_expr),
> > > AV_OPT_TYPE_STRING, { .str = "1" },  .flags = FLAGS },
&

Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter support draw text with detection bounding boxes in side_data

2021-05-24 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Guo,
> Yejun
> Sent: May 20, 2021 11:04
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter
> support draw text with detection bounding boxes in side_data
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of Ting
> > Fu
> > Sent: May 14, 2021 16:47
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter support
> > draw text with detection bounding boxes in side_data
> >
> > This feature can be used with dnn detection by setting vf_drawtext's option
> > text_source=side_data_detection_bboxes, for example:
> > ./ffmpeg -i face.jpeg -vf
> >
> dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
> >
> input=data:output=detection_out:labels=face-detection-adas-0001.label,dra
> > wbox=box_source=
> >
> side_data_detection_bboxes,drawtext=text_source=side_data_detection_bbo
> > xes:fontcolor=green:\
> > fontsize=40, -y face_detect.jpeg
> > Please note, the default fontsize of vf_drawtext is 12, which may be too
> > small to be seen clearly.
> >
> > Signed-off-by: Ting Fu 
> > ---
> >  doc/filters.texi  |  8 
> >  libavfilter/vf_drawtext.c | 77
> > ---
> >  2 files changed, 79 insertions(+), 6 deletions(-)
> >
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index f2ac8c4cc8..d10e6de03d 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -10788,6 +10788,14 @@ parameter @var{text}.
> >
> >  If both @var{text} and @var{textfile} are specified, an error is thrown.
> >
> > +@item text_source
> > +Text source should be set as side_data_detection_bboxes if you want to
> use
> > text data in
> > +detection bboxes of side data.
> > +
> > +If text source is set, @var{text} and @var{textfile} will be ignored and 
> > still
> > use
> > +text data in detection bboxes of side data. So please do not use this
> > parameter
> > +if you are not sure about the text source.
> > +
> >  @item reload
> >  If set to 1, the @var{textfile} will be reloaded before each frame.
> >  Be sure to update it atomically, or it may be read partially, or even fail.
> > diff --git a/libavfilter/vf_drawtext.c b/libavfilter/vf_drawtext.c
> > index 7ea057b812..382d589e26 100644
> > --- a/libavfilter/vf_drawtext.c
> > +++ b/libavfilter/vf_drawtext.c
> > @@ -55,6 +55,7 @@
> >  #include "libavutil/time_internal.h"
> >  #include "libavutil/tree.h"
> >  #include "libavutil/lfg.h"
> > +#include "libavutil/detection_bbox.h"
> >  #include "avfilter.h"
> >  #include "drawutils.h"
> >  #include "formats.h"
> > @@ -199,6 +200,8 @@ typedef struct DrawTextContext {
> >  int tc24hmax;   ///< 1 if timecode is wrapped to
> 24
> > hours, 0 otherwise
> >  int reload; ///< reload text file for each frame
> >  int start_number;   ///< starting frame number for
> > n/frame_num var
> > +char *text_source_string;   ///< the string to specify text data
> > source
> > +enum AVFrameSideDataType text_source;
> >  #if CONFIG_LIBFRIBIDI
> >  int text_shaping;   ///< 1 to shape the text before
> > drawing it
> >  #endif
> > @@ -246,6 +249,7 @@ static const AVOption drawtext_options[]= {
> >  { "alpha",   "apply alpha while rendering", OFFSET(a_expr),
> > AV_OPT_TYPE_STRING, { .str = "1" },  .flags = FLAGS },
> >  {"fix_bounds", "check and fix text coords to avoid clipping",
> > OFFSET(fix_bounds), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS},
> >  {"start_number", "start frame number for n/frame_num variable",
> > OFFSET(start_number), AV_OPT_TYPE_INT, {.i64=0}, 0, INT_MAX, FLAGS},
> > +{"text_source", "the source of text", OFFSET(text_source_string),
> > AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS },
> >
> >  #if CONFIG_LIBFRIBIDI
> >  {"text_shaping", "attempt to shape text before drawing",
> > OFFSET(text_shaping), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1, FLAGS},
> > @@ -690,6 +694,16 @@ out:
> >  }
> >  #endif
> >
> > +static enum AVFrameSideDataType text_source_st

Re: [FFmpeg-devel] [PATCH 2/2] lavfi/dnn: refine code to separate processing and detection in backends

2021-05-21 Thread Guo, Yejun



> -Original Message-
> From: Guo, Yejun 
> Sent: Monday, May 17, 2021 1:54 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun 
> Subject: [PATCH 2/2] lavfi/dnn: refine code to separate processing and
> detection in backends
> 
> ---
>  libavfilter/dnn/dnn_backend_native.c   |  2 +-
>  libavfilter/dnn/dnn_backend_openvino.c |  6 --
>  libavfilter/dnn/dnn_backend_tf.c   | 20 +++-
>  libavfilter/dnn/dnn_io_proc.c  | 18 ++
>  libavfilter/dnn/dnn_io_proc.h  |  3 ++-
>  5 files changed, 24 insertions(+), 25 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native.c
> b/libavfilter/dnn/dnn_backend_native.c
> index b5f1c16538..a6be27f1fd 100644
> --- a/libavfilter/dnn/dnn_backend_native.c
> +++ b/libavfilter/dnn/dnn_backend_native.c
> @@ -314,7 +314,7 @@ static DNNReturnType execute_model_native(const
> DNNModel *model, const char *inp
>  if (native_model->model->frame_pre_proc != NULL) {
>  native_model->model->frame_pre_proc(in_frame, &input,
> native_model->model->filter_ctx);
>  } else {
> -ff_proc_from_frame_to_dnn(in_frame, &input, native_model-
> >model->func_type, ctx);
> +ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
>  }
>  }
> 
> diff --git a/libavfilter/dnn/dnn_backend_openvino.c
> b/libavfilter/dnn/dnn_backend_openvino.c
> index 1ff8a720b9..e0781e854a 100644
> --- a/libavfilter/dnn/dnn_backend_openvino.c
> +++ b/libavfilter/dnn/dnn_backend_openvino.c
> @@ -186,15 +186,17 @@ static DNNReturnType
> fill_model_input_ov(OVModel *ov_model, RequestItem *request
>  task = inference->task;
>  switch (task->ov_model->model->func_type) {
>  case DFT_PROCESS_FRAME:
> -case DFT_ANALYTICS_DETECT:
>  if (task->do_ioproc) {
>  if (ov_model->model->frame_pre_proc != NULL) {
>  ov_model->model->frame_pre_proc(task->in_frame, &input,
> ov_model->model->filter_ctx);
>  } else {
> -ff_proc_from_frame_to_dnn(task->in_frame, &input, 
> ov_model-
> >model->func_type, ctx);
> +ff_proc_from_frame_to_dnn(task->in_frame, &input,
> + ctx);
>  }
>  }
>  break;
> +case DFT_ANALYTICS_DETECT:
> +ff_frame_to_dnn_detect(task->in_frame, &input, ctx);
> +break;
>  case DFT_ANALYTICS_CLASSIFY:
>  ff_frame_to_dnn_classify(task->in_frame, &input, inference-
> >bbox_index, ctx);
>  break;
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index 5908aeb359..4c16c2bdb0 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -763,12 +763,22 @@ static DNNReturnType execute_model_tf(const
> DNNModel *model, const char *input_n
>  }
>  input.data = (float *)TF_TensorData(input_tensor);
> 
> -if (do_ioproc) {
> -if (tf_model->model->frame_pre_proc != NULL) {
> -tf_model->model->frame_pre_proc(in_frame, &input, tf_model-
> >model->filter_ctx);
> -} else {
> -ff_proc_from_frame_to_dnn(in_frame, &input, tf_model->model-
> >func_type, ctx);
> +switch (tf_model->model->func_type) {
> +case DFT_PROCESS_FRAME:
> +if (do_ioproc) {
> +if (tf_model->model->frame_pre_proc != NULL) {
> +tf_model->model->frame_pre_proc(in_frame, &input, tf_model-
> >model->filter_ctx);
> +} else {
> +ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
> +}
>  }
> +break;
> +case DFT_ANALYTICS_DETECT:
> +ff_frame_to_dnn_detect(in_frame, &input, ctx);
> +break;
> +default:
> +avpriv_report_missing_feature(ctx, "model function type %d",
> tf_model->model->func_type);
> +break;
>  }
> 
>  tf_outputs = av_malloc_array(nb_output, sizeof(*tf_outputs)); diff --git
> a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c index
> 1e2bef3f9a..e01661103b 100644
> --- a/libavfilter/dnn/dnn_io_proc.c
> +++ b/libavfilter/dnn/dnn_io_proc.c
> @@ -94,7 +94,7 @@ DNNReturnType
> ff_proc_from_dnn_to_frame(AVFrame *frame, DNNData *output, void *l
>  return DNN_SUCCESS;
>  }
> 
> -static DNNReturnType
> proc_from_frame_to_dnn_frameprocessing(AVFrame *frame, DNNData
> *input, void *log_ctx)
> +DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData
> *input,
> +void *log_ctx)
>  {
>  struct SwsContext *sws_ct

Re: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter support draw text with detection bounding boxes in side_data

2021-05-19 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Ting
> Fu
> Sent: May 14, 2021 16:47
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 3/3] libavfilter: vf_drawtext filter support
> draw text with detection bounding boxes in side_data
> 
> This feature can be used with dnn detection by setting vf_drawtext's option
> text_source=side_data_detection_bboxes, for example:
> ./ffmpeg -i face.jpeg -vf
> dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:\
> input=data:output=detection_out:labels=face-detection-adas-0001.label,dra
> wbox=box_source=
> side_data_detection_bboxes,drawtext=text_source=side_data_detection_bbo
> xes:fontcolor=green:\
> fontsize=40, -y face_detect.jpeg
> Please note, the default fontsize of vf_drawtext is 12, which may be too
> small to be seen clearly.
> 
> Signed-off-by: Ting Fu 
> ---
>  doc/filters.texi  |  8 
>  libavfilter/vf_drawtext.c | 77
> ---
>  2 files changed, 79 insertions(+), 6 deletions(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index f2ac8c4cc8..d10e6de03d 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -10788,6 +10788,14 @@ parameter @var{text}.
> 
>  If both @var{text} and @var{textfile} are specified, an error is thrown.
> 
> +@item text_source
> +Text source should be set as side_data_detection_bboxes if you want to use
> text data in
> +detection bboxes of side data.
> +
> +If text source is set, @var{text} and @var{textfile} will be ignored and 
> still
> use
> +text data in detection bboxes of side data. So please do not use this
> parameter
> +if you are not sure about the text source.
> +
>  @item reload
>  If set to 1, the @var{textfile} will be reloaded before each frame.
>  Be sure to update it atomically, or it may be read partially, or even fail.
> diff --git a/libavfilter/vf_drawtext.c b/libavfilter/vf_drawtext.c
> index 7ea057b812..382d589e26 100644
> --- a/libavfilter/vf_drawtext.c
> +++ b/libavfilter/vf_drawtext.c
> @@ -55,6 +55,7 @@
>  #include "libavutil/time_internal.h"
>  #include "libavutil/tree.h"
>  #include "libavutil/lfg.h"
> +#include "libavutil/detection_bbox.h"
>  #include "avfilter.h"
>  #include "drawutils.h"
>  #include "formats.h"
> @@ -199,6 +200,8 @@ typedef struct DrawTextContext {
>  int tc24hmax;   ///< 1 if timecode is wrapped to 24
> hours, 0 otherwise
>  int reload; ///< reload text file for each frame
>  int start_number;   ///< starting frame number for
> n/frame_num var
> +char *text_source_string;   ///< the string to specify text data
> source
> +enum AVFrameSideDataType text_source;
>  #if CONFIG_LIBFRIBIDI
>  int text_shaping;   ///< 1 to shape the text before
> drawing it
>  #endif
> @@ -246,6 +249,7 @@ static const AVOption drawtext_options[]= {
>  { "alpha",   "apply alpha while rendering", OFFSET(a_expr),
> AV_OPT_TYPE_STRING, { .str = "1" },  .flags = FLAGS },
>  {"fix_bounds", "check and fix text coords to avoid clipping",
> OFFSET(fix_bounds), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS},
>  {"start_number", "start frame number for n/frame_num variable",
> OFFSET(start_number), AV_OPT_TYPE_INT, {.i64=0}, 0, INT_MAX, FLAGS},
> +{"text_source", "the source of text", OFFSET(text_source_string),
> AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS },
> 
>  #if CONFIG_LIBFRIBIDI
>  {"text_shaping", "attempt to shape text before drawing",
> OFFSET(text_shaping), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1, FLAGS},
> @@ -690,6 +694,16 @@ out:
>  }
>  #endif
> 
> +static enum AVFrameSideDataType text_source_string_parse(const char
> *text_source_string)
> +{
> +av_assert0(text_source_string);
> +if (!strcmp(text_source_string, "side_data_detection_bboxes")) {
> +return AV_FRAME_DATA_DETECTION_BBOXES;
> +} else {
> +return AVERROR(EINVAL);
> +}
> +}
> +
>  static av_cold int init(AVFilterContext *ctx)
>  {
>  int err;
> @@ -731,9 +745,28 @@ static av_cold int init(AVFilterContext *ctx)
>  s->text = av_strdup("");
>  }
> 
> +if (s->text_source_string) {
> +s->text_source = text_source_string_parse(s->text_source_string);
> +if ((int)s->text_source < 0) {
> +av_log(ctx, AV_LOG_ERROR, "Error text source: %s\n",
> s->text_source_string);
> +return AVERROR(EINVAL);
> +}
> +}
> +
> +if (s->text_source == AV_FRAME_DATA_DETECTION_BBOXES) {
> +if (s->text) {
> +av_log(ctx, AV_LOG_WARNING, "Multiple texts provided, will
> use text_source only\n");
> +av_free(s->text);
> +}
> +s->text =
> av_mallocz(AV_DETECTION_BBOX_LABEL_NAME_MAX_SIZE *
> + (AV_NUM_DETECTION_BBOX_CLASSIFY +
> 1));
> +if (!s->text)
> +return AVERROR(ENOMEM);
> +}
> +
>  if (!s->text) {
>  av_log(ctx, AV_LOG_ERROR,
> -   

[FFmpeg-devel] [PATCH V2] lavfi/dnn_filter_common.h: make filter option 'options' as deprecated

2021-05-17 Thread Guo, Yejun
we'd use 'backend_configs' to avoid confusion.
---
 libavfilter/dnn_filter_common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index 09ddd8a5ca..36319bfef8 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -45,7 +45,7 @@ typedef struct DnnContext {
 { "input",  "input name of the model",
OFFSET(model_inputname),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
 { "output", "output name of the model",   
OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, 
FLAGS },\
 { "backend_configs","backend configs",
OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
-{ "options","backend configs",
OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
+{ "options", "backend configs (deprecated, use backend_configs)", 
OFFSET(backend_options),  AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | 
AV_OPT_FLAG_DEPRECATED},\
 { "async",  "use DNN async inference",OFFSET(async),   
 AV_OPT_TYPE_BOOL,  { .i64 = 1}, 0, 1, FLAGS},
 
 
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH 1/2] lavfi/dnn_filter_common.h: remove filter option 'options'

2021-05-17 Thread Guo, Yejun


> -Original Message-
> From: Steven Liu 
> Sent: May 17, 2021 15:46
> To: FFmpeg development discussions and patches
> 
> Cc: Steven Liu ; Guo, Yejun 
> Subject: Re: [FFmpeg-devel] [PATCH 1/2] lavfi/dnn_filter_common.h: remove
> filter option 'options'
> 
> 
> 
> > On May 17, 2021, at 1:54 PM, Guo, Yejun  wrote:
> >
> > we'd use 'backend_configs' to avoid confusion
> > ---
> > libavfilter/dnn_filter_common.h | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/libavfilter/dnn_filter_common.h
> b/libavfilter/dnn_filter_common.h
> > index 09ddd8a5ca..51caa71c24 100644
> > --- a/libavfilter/dnn_filter_common.h
> > +++ b/libavfilter/dnn_filter_common.h
> > @@ -45,7 +45,6 @@ typedef struct DnnContext {
> > { "input",  "input name of the model",
> OFFSET(model_inputname),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0,
> FLAGS },\
> > { "output", "output name of the model",
> OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0,
> 0, FLAGS },\
> > { "backend_configs","backend configs",
> OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0,
> FLAGS },\
> > -{ "options","backend configs",
> OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0,
> FLAGS },\
> Not sure if there are users still using this option,
> what about marking it as deprecated?
yes, will add AV_OPT_FLAG_DEPRECATED into FLAGS, thanks.


[FFmpeg-devel] [PATCH 2/2] lavfi/dnn: refine code to separate processing and detection in backends

2021-05-17 Thread Guo, Yejun
---
 libavfilter/dnn/dnn_backend_native.c   |  2 +-
 libavfilter/dnn/dnn_backend_openvino.c |  6 --
 libavfilter/dnn/dnn_backend_tf.c   | 20 +++-
 libavfilter/dnn/dnn_io_proc.c  | 18 ++
 libavfilter/dnn/dnn_io_proc.h  |  3 ++-
 5 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_native.c 
b/libavfilter/dnn/dnn_backend_native.c
index b5f1c16538..a6be27f1fd 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -314,7 +314,7 @@ static DNNReturnType execute_model_native(const DNNModel 
*model, const char *inp
 if (native_model->model->frame_pre_proc != NULL) {
 native_model->model->frame_pre_proc(in_frame, &input, 
native_model->model->filter_ctx);
 } else {
-ff_proc_from_frame_to_dnn(in_frame, &input, 
native_model->model->func_type, ctx);
+ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
 }
 }
 
diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index 1ff8a720b9..e0781e854a 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -186,15 +186,17 @@ static DNNReturnType fill_model_input_ov(OVModel 
*ov_model, RequestItem *request
 task = inference->task;
 switch (task->ov_model->model->func_type) {
 case DFT_PROCESS_FRAME:
-case DFT_ANALYTICS_DETECT:
 if (task->do_ioproc) {
 if (ov_model->model->frame_pre_proc != NULL) {
 ov_model->model->frame_pre_proc(task->in_frame, &input, 
ov_model->model->filter_ctx);
 } else {
-ff_proc_from_frame_to_dnn(task->in_frame, &input, 
ov_model->model->func_type, ctx);
+ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
 }
 }
 break;
+case DFT_ANALYTICS_DETECT:
+ff_frame_to_dnn_detect(task->in_frame, &input, ctx);
+break;
 case DFT_ANALYTICS_CLASSIFY:
 ff_frame_to_dnn_classify(task->in_frame, &input, 
inference->bbox_index, ctx);
 break;
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 5908aeb359..4c16c2bdb0 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -763,12 +763,22 @@ static DNNReturnType execute_model_tf(const DNNModel 
*model, const char *input_n
 }
 input.data = (float *)TF_TensorData(input_tensor);
 
-if (do_ioproc) {
-if (tf_model->model->frame_pre_proc != NULL) {
-tf_model->model->frame_pre_proc(in_frame, &input, 
tf_model->model->filter_ctx);
-} else {
-ff_proc_from_frame_to_dnn(in_frame, &input, 
tf_model->model->func_type, ctx);
+switch (tf_model->model->func_type) {
+case DFT_PROCESS_FRAME:
+if (do_ioproc) {
+if (tf_model->model->frame_pre_proc != NULL) {
+tf_model->model->frame_pre_proc(in_frame, &input, 
tf_model->model->filter_ctx);
+} else {
+ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
+}
 }
+break;
+case DFT_ANALYTICS_DETECT:
+ff_frame_to_dnn_detect(in_frame, &input, ctx);
+break;
+default:
+avpriv_report_missing_feature(ctx, "model function type %d", 
tf_model->model->func_type);
+break;
 }
 
 tf_outputs = av_malloc_array(nb_output, sizeof(*tf_outputs));
diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
index 1e2bef3f9a..e01661103b 100644
--- a/libavfilter/dnn/dnn_io_proc.c
+++ b/libavfilter/dnn/dnn_io_proc.c
@@ -94,7 +94,7 @@ DNNReturnType ff_proc_from_dnn_to_frame(AVFrame *frame, 
DNNData *output, void *l
 return DNN_SUCCESS;
 }
 
-static DNNReturnType proc_from_frame_to_dnn_frameprocessing(AVFrame *frame, 
DNNData *input, void *log_ctx)
+DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData *input, void 
*log_ctx)
 {
 struct SwsContext *sws_ctx;
 int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
@@ -243,7 +243,7 @@ DNNReturnType ff_frame_to_dnn_classify(AVFrame *frame, 
DNNData *input, uint32_t
 return DNN_SUCCESS;
 }
 
-static DNNReturnType proc_from_frame_to_dnn_analytics(AVFrame *frame, DNNData 
*input, void *log_ctx)
+DNNReturnType ff_frame_to_dnn_detect(AVFrame *frame, DNNData *input, void 
*log_ctx)
 {
 struct SwsContext *sws_ctx;
 int linesizes[4];
@@ -271,17 +271,3 @@ static DNNReturnType 
proc_from_frame_to_dnn_analytics(AVFrame *frame, DNNData *i
 sws_freeContext(sws_ctx);
 return DNN_SUCCESS;
 }
-
-DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData *input, 
DNNFunctionType func_type, void *log_ctx)
-{
-switch (func_type)
-{
-case DFT_PROCESS_FRAME:
-return proc_from_frame_to_dnn_frameprocessing(frame, input, log_ctx);
-case DFT_ANALYTICS_DETECT:
-return 

[FFmpeg-devel] [PATCH 1/2] lavfi/dnn_filter_common.h: remove filter option 'options'

2021-05-17 Thread Guo, Yejun
we'd use 'backend_configs' to avoid confusion
---
 libavfilter/dnn_filter_common.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index 09ddd8a5ca..51caa71c24 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -45,7 +45,6 @@ typedef struct DnnContext {
 { "input",  "input name of the model",
OFFSET(model_inputname),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
 { "output", "output name of the model",   
OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, 
FLAGS },\
 { "backend_configs","backend configs",
OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
-{ "options","backend configs",
OFFSET(backend_options),  AV_OPT_TYPE_STRING,{ .str = NULL }, 0, 0, FLAGS 
},\
 { "async",  "use DNN async inference",OFFSET(async),   
 AV_OPT_TYPE_BOOL,  { .i64 = 1}, 0, 1, FLAGS},
 
 
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH 3/3] lavfi/vf_dnn_processing.c: fix CID 1460603

2021-05-16 Thread Guo, Yejun


> -Original Message-
> From: Guo, Yejun 
> Sent: May 11, 2021 15:41
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun 
> Subject: [PATCH 3/3] lavfi/vf_dnn_processing.c: fix CID 1460603
> 
> CID 1460603 (#1 of 1): Improper use of negative value (NEGATIVE_RETURNS)
> ---
>  libavfilter/vf_dnn_processing.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
> index e05d59a649..e1d9d24683 100644
> --- a/libavfilter/vf_dnn_processing.c
> +++ b/libavfilter/vf_dnn_processing.c
> @@ -225,6 +225,9 @@ static int copy_uv_planes(DnnProcessingContext *ctx,
> AVFrame *out, const AVFrame
>  uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
>  for (int i = 1; i < 3; ++i) {
>  int bytewidth = av_image_get_linesize(in->format, in->width,
> i);
> +if (bytewidth < 0) {
> +return AVERROR(EINVAL);
> +}
>  av_image_copy_plane(out->data[i], out->linesize[i],
>  in->data[i], in->linesize[i],
>  bytewidth, uv_height);
> --
will push tomorrow if there's no objection, thanks.
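
A small self-contained sketch of the pattern this introduces (illustrative helper, not the actual filter code): av_image_get_linesize() returns a negative AVERROR code for unsupported formats, so the value must be checked before it is used as a byte count.

#include <libavutil/error.h>
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>

static int copy_plane_checked(AVFrame *out, const AVFrame *in, int plane, int height)
{
    int bytewidth = av_image_get_linesize(in->format, in->width, plane);
    if (bytewidth < 0)
        return AVERROR(EINVAL);
    av_image_copy_plane(out->data[plane], out->linesize[plane],
                        in->data[plane], in->linesize[plane],
                        bytewidth, height);
    return 0;
}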



Re: [FFmpeg-devel] [PATCH V5 5/5] lavfi/dnn_backend_native_layer_mathunary.h: Documentation

2021-05-16 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: May 14, 2021 15:11
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH V5 5/5]
> lavfi/dnn_backend_native_layer_mathunary.h: Documentation
> 
> Add documentation for Unary Math Layer
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  .../dnn/dnn_backend_native_layer_mathunary.h  | 30
> +++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native_layer_mathunary.h
> b/libavfilter/dnn/dnn_backend_native_layer_mathunary.h
> index 151a73200a..ed79947896 100644
> --- a/libavfilter/dnn/dnn_backend_native_layer_mathunary.h
> +++ b/libavfilter/dnn/dnn_backend_native_layer_mathunary.h
> @@ -54,7 +54,37 @@ typedef struct DnnLayerMathUnaryParams{
>  DNNMathUnaryOperation un_op;
>  } DnnLayerMathUnaryParams;
> 
> +/**
> + * @brief Load the Unary Math Layer.
> + *
> + * It assigns the unary math layer with DnnLayerMathUnaryParams
> + * after parsing from the model file context.
> + *
> + * @param layer pointer to the DNN layer instance
> + * @param model_file_context pointer to model file context
> + * @param file_size model file size to check if data is read
> + * correctly from the model file
> + * @param operands_num operand count of the whole model to
> + * check if data is read correctly from the model file
> + * @return number of bytes read from the model file
> + * @retval 0 if out of memory or an error occurs
> + */
>  int ff_dnn_load_layer_math_unary(Layer *layer, AVIOContext
> *model_file_context, int file_size, int operands_num);
> +
> +/**
> + * @brief Execute the Unary Math Layer.
> + *
> + * It applies the unary operator parsed while
> + * loading to the given input operands.
> + *
> + * @param operands all operands for the model
> + * @param input_operand_indexes input operand indexes for this layer
> + * @param output_operand_index output operand index for this layer
> + * @param parameters unary math layer parameters
> + * @param ctx pointer to Native model context for logging
> + * @retval 0 if the execution succeeds
> + * @retval DNN_ERROR if the execution fails
> + */
>  int ff_dnn_execute_layer_math_unary(DnnOperand *operands, const
> int32_t *input_operand_indexes,
>  int32_t output_operand_index,
> const void *parameters, NativeContext *ctx);
> 
LGTM, will push soon, thanks.
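
For readers of the docs, the native backend drives the pair roughly as below. This is only a sketch with illustrative local names: 'layer', 'model_file_context', 'file_size', 'operands_num', 'operands', the operand indexes and 'params' are assumed to come from the surrounding loader, with 'params' being the DnnLayerMathUnaryParams the load call attaches to the layer.

    int bytes_read = ff_dnn_load_layer_math_unary(&layer, model_file_context,
                                                  file_size, operands_num);
    if (bytes_read == 0)
        return DNN_ERROR;   /* malformed model file or out of memory */

    if (ff_dnn_execute_layer_math_unary(operands, input_operand_indexes,
                                        output_operand_index, params, ctx) != 0)
        return DNN_ERROR;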


Re: [FFmpeg-devel] [PATCH V4 2/5] lavfi/dnn_backend_native_layer_conv2d.h: Documentation

2021-05-13 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: May 13, 2021 15:04
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH V4 2/5]
> lavfi/dnn_backend_native_layer_conv2d.h: Documentation
> 
> Add documentation for 2D Convolution Layer
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  .../dnn/dnn_backend_native_layer_conv2d.h | 27
> +++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
> b/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
> index 03ca795c61..f951f8d80e 100644
> --- a/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
> +++ b/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
> @@ -34,7 +34,34 @@ typedef struct ConvolutionalParams{
>  float *biases;
>  } ConvolutionalParams;
> 
> +/**
> + * @brief Load the 2D Convolution Layer.
> + *
> + * It assigns the 2D convolution layer with ConvolutionalParams
> + * after parsing from the model file context.
> + *
> + * @param layer pointer to the DNN layer instance
> + * @param model_file_context pointer to model file context
> + * @param file_size model file size to check if data is read
> + * correctly from the model file
> + * @param operands_num operand count of the whole model to
> + * check if data is read correctly from the model file
> + * @return number of bytes read from the model file
> + * @retval 0 if an error occurs or out of memory
> + */
>  int ff_dnn_load_layer_conv2d(Layer *layer, AVIOContext
> *model_file_context, int file_size, int operands_num);
> +
> +/**
> + * @brief Execute the 2D Convolution Layer.
> + *
> + * @param operands all operands for the model
> + * @param input_operand_indexes input operand indexes for this layer
> + * @param output_operand_index output operand index for this layer
> + * @param parameters average pooling parameters

typo: 'average pooling parameters' has been copied into patches 2 to 5; each layer's doc should name its own parameters.

will push patch 1 soon.



Re: [FFmpeg-devel] [PATCH 3/5 v2] lavfi/dnn_backend_native_layer_dense.h: Documentation

2021-05-12 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: May 13, 2021 2:45
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH 3/5 v2]
> lavfi/dnn_backend_native_layer_dense.h: Documentation
> 
> Add documentation for Dense Layer
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  .../dnn/dnn_backend_native_layer_dense.h  | 28
> +++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native_layer_dense.h
> b/libavfilter/dnn/dnn_backend_native_layer_dense.h
> index 86248856ae..83fcb18831 100644
> --- a/libavfilter/dnn/dnn_backend_native_layer_dense.h
> +++ b/libavfilter/dnn/dnn_backend_native_layer_dense.h
> @@ -31,7 +31,35 @@ typedef struct DenseParams{
>  float *biases;
>  } DenseParams;
> 
> +/**
> + * @brief Load the Densely-Connnected Layer.

Connnected -> Connected

> + *
> + * It assigns the layer parameters to the hyperparameters
> + * like activation, bias, and kernel size after parsing
> + * from the model file context.

the dense layer does not have a kernel size parameter; it is derived
from the layer's input number and output number.

it is time-consuming to list all the correct parameters here,
so we might simplify the doc as below for all the patches.

It assigns the densely-connected layer with DenseParams
after parsing from the model file context.

> + *
> + * @param layer pointer to the DNN layer instance
> + * @param model_file_context pointer to model file context
> + * @param file_size model file size to check if data is read
> + * correctly from the model file
> + * @param operands_num operand count of the whole model to
> + * check if data is read correctly from the model file
> + * @return number of bytes read from the model file
> + * @retval 0 if out of memory or an error occurs
> + */
>  int ff_dnn_load_layer_dense(Layer *layer, AVIOContext *model_file_context,
> int file_size, int operands_num);
> +
> +/**
> + * @brief Execute the Densely-Connnected Layer.

typo: Connnected




Re: [FFmpeg-devel] [PATCH 1/5] lavfi/dnn_backend_native_layer_avgpool.h: Documentation

2021-05-12 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: Wednesday, May 12, 2021 5:02 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 1/5]
> lavfi/dnn_backend_native_layer_avgpool.h: Documentation
> 
> On Wed, May 12, 2021 at 7:52 AM Guo, Yejun  wrote:
> 
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > Shubhanshu Saxena
> > > Sent: May 8, 2021 20:10
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Shubhanshu Saxena 
> > > Subject: [FFmpeg-devel] [PATCH 1/5]
> >
> 
> Okay, I'll remove the spaces and correct these lines. Thank you.
> 
> Also, since the parameters for loading and execution functions are the same
> in other layers, I need to correct them as well. Right?

Yes, right.


Re: [FFmpeg-devel] [PATCH 1/5] lavfi/dnn_backend_native_layer_avgpool.h: Documentation

2021-05-11 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: May 8, 2021 20:10
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH 1/5]
> lavfi/dnn_backend_native_layer_avgpool.h: Documentation
> 
> Add documentation for Average Pool Layer
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  .../dnn/dnn_backend_native_layer_avgpool.h| 27
> +++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native_layer_avgpool.h
> b/libavfilter/dnn/dnn_backend_native_layer_avgpool.h
> index 75d9eb187b..0f629b9165 100644
> --- a/libavfilter/dnn/dnn_backend_native_layer_avgpool.h
> +++ b/libavfilter/dnn/dnn_backend_native_layer_avgpool.h
> @@ -33,7 +33,34 @@ typedef struct AvgPoolParams{
>  DNNPaddingParam padding_method;
>  } AvgPoolParams;
> 
> +/**
> + * @brief Load Average Pooling Layer.
> + *
> + * It assigns the layer parameters to the hyperparameters
> + * like strides, padding method, and kernel size after
> + * parsing from the model file context.
> + *

please run 'git show' for every patch to make sure there are no
trailing spaces in the change.

> + * @param layer pointer to the DNN layer instance
> + * @param model_file_context pointer to model file context
> + * @param file_size model file size
> + * @param operands_num number of operands for the layer

operands_num is the operand count of the whole model;
it is used to check that the data read from the model file is correct,
just like file_size.

> + * @return Size of DNN Layer
Size -> size.
return the number of bytes read from model file.

> + * @retval 0 if model file context contains invalid hyperparameters.
return 0 for error.

there is another case that returns 0: out of memory.

> + */
>  int ff_dnn_load_layer_avg_pool(Layer *layer, AVIOContext
> *model_file_context, int file_size, int operands_num);
> +
> +/**
> + * @brief Execute the Average Pooling Layer.
> + * Padding in channel dimensions is currently not supported.
> + *
> + * @param operands input operands

operands contain all the operands of the model

> + * @param input_operand_indexes input operand indexes

input operand indexes for this layer.

> + * @param output_operand_index output operand index

output operand index for this layer.

> + * @param parameters average pooling parameters
> + * @param ctx pointer to Native model context

and its usage is for logging only.

> + * @retval 0 if the execution succeeds
> + * @retval DNN_ERROR if the execution fails
> + */
>  int ff_dnn_execute_layer_avg_pool(DnnOperand *operands, const int32_t
> *input_operand_indexes,
>int32_t output_operand_index,
> const void *parameters, NativeContext *ctx);
> 
> --
> 2.27.0
> 
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
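
Putting the comments above together, the revised average pooling documentation could read roughly as follows (a suggestion only, not the final wording of the patch):

/**
 * @brief Load the Average Pooling Layer.
 *
 * It assigns the average pooling layer with AvgPoolParams
 * after parsing from the model file context.
 *
 * @param layer pointer to the DNN layer instance
 * @param model_file_context pointer to model file context
 * @param file_size model file size to check if data is read
 * correctly from the model file
 * @param operands_num operand count of the whole model to
 * check if data is read correctly from the model file
 * @return number of bytes read from the model file
 * @retval 0 if out of memory or an error occurs
 */
int ff_dnn_load_layer_avg_pool(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num);

/**
 * @brief Execute the Average Pooling Layer.
 * Padding in channel dimensions is currently not supported.
 *
 * @param operands all operands for the model
 * @param input_operand_indexes input operand indexes for this layer
 * @param output_operand_index output operand index for this layer
 * @param parameters average pooling parameters
 * @param ctx pointer to Native model context for logging
 * @retval 0 if the execution succeeds
 * @retval DNN_ERROR if the execution fails
 */
int ff_dnn_execute_layer_avg_pool(DnnOperand *operands, const int32_t *input_operand_indexes,
                                  int32_t output_operand_index, const void *parameters, NativeContext *ctx);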


[FFmpeg-devel] [PATCH 3/3] lavfi/vf_dnn_processing.c: fix CID 1460603

2021-05-11 Thread Guo, Yejun
CID 1460603 (#1 of 1): Improper use of negative value (NEGATIVE_RETURNS)
---
 libavfilter/vf_dnn_processing.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index e05d59a649..e1d9d24683 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -225,6 +225,9 @@ static int copy_uv_planes(DnnProcessingContext *ctx, 
AVFrame *out, const AVFrame
 uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
 for (int i = 1; i < 3; ++i) {
 int bytewidth = av_image_get_linesize(in->format, in->width, i);
+if (bytewidth < 0) {
+return AVERROR(EINVAL);
+}
 av_image_copy_plane(out->data[i], out->linesize[i],
 in->data[i], in->linesize[i],
 bytewidth, uv_height);
-- 
2.17.1



[FFmpeg-devel] [PATCH 2/3] lavfi/dnn/dnn_io_proc.c: fix Improper use of negative value (NEGATIVE_RETURNS)

2021-05-11 Thread Guo, Yejun
fix coverity CID 1473511 and 1473566
---
 libavfilter/dnn/dnn_io_proc.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
index d5d2654162..02c8e13ed7 100644
--- a/libavfilter/dnn/dnn_io_proc.c
+++ b/libavfilter/dnn/dnn_io_proc.c
@@ -28,6 +28,9 @@ DNNReturnType ff_proc_from_dnn_to_frame(AVFrame *frame, 
DNNData *output, void *l
 {
 struct SwsContext *sws_ctx;
 int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+if (bytewidth < 0) {
+return DNN_ERROR;
+}
 if (output->dt != DNN_FLOAT) {
 avpriv_report_missing_feature(log_ctx, "data type rather than 
DNN_FLOAT");
 return DNN_ERROR;
@@ -98,6 +101,9 @@ static DNNReturnType 
proc_from_frame_to_dnn_frameprocessing(AVFrame *frame, DNND
 {
 struct SwsContext *sws_ctx;
 int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+if (bytewidth < 0) {
+return DNN_ERROR;
+}
 if (input->dt != DNN_FLOAT) {
 avpriv_report_missing_feature(log_ctx, "data type rather than 
DNN_FLOAT");
 return DNN_ERROR;
-- 
2.17.1



[FFmpeg-devel] [PATCH 1/3] lavfi/dnn/dnn_io_proc.c: Fix Out-of-bounds access (ARRAY_VS_SINGLETON)

2021-05-11 Thread Guo, Yejun
fix coverity CID 1473571, 1473577 and 1482089
---
 libavfilter/dnn/dnn_io_proc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/libavfilter/dnn/dnn_io_proc.c b/libavfilter/dnn/dnn_io_proc.c
index 1e2bef3f9a..d5d2654162 100644
--- a/libavfilter/dnn/dnn_io_proc.c
+++ b/libavfilter/dnn/dnn_io_proc.c
@@ -154,7 +154,7 @@ static DNNReturnType 
proc_from_frame_to_dnn_frameprocessing(AVFrame *frame, DNND
 }
 sws_scale(sws_ctx, (const uint8_t **)frame->data,
frame->linesize, 0, frame->height,
-   (uint8_t * const*)(&input->data),
+   (uint8_t * const [4]){input->data, 0, 0, 0},
(const int [4]){frame->width * sizeof(float), 0, 0, 
0});
 sws_freeContext(sws_ctx);
 break;
@@ -236,7 +236,7 @@ DNNReturnType ff_frame_to_dnn_classify(AVFrame *frame, 
DNNData *input, uint32_t
 
 sws_scale(sws_ctx, (const uint8_t *const *)&bbox_data, frame->linesize,
0, height,
-   (uint8_t *const *)(&input->data), linesizes);
+   (uint8_t *const [4]){input->data, 0, 0, 0}, linesizes);
 
 sws_freeContext(sws_ctx);
 
@@ -266,7 +266,7 @@ static DNNReturnType 
proc_from_frame_to_dnn_analytics(AVFrame *frame, DNNData *i
 }
 
 sws_scale(sws_ctx, (const uint8_t *const *)frame->data, frame->linesize, 
0, frame->height,
-   (uint8_t *const *)(&input->data), linesizes);
+   (uint8_t *const [4]){input->data, 0, 0, 0}, linesizes);
 
 sws_freeContext(sws_ctx);
 return DNN_SUCCESS;
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result check for av_frame_get_side_data

2021-05-10 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Guo,
> Yejun
> Sent: May 7, 2021 16:54
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result
> check for av_frame_get_side_data
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > Steven Liu
> > Sent: May 7, 2021 16:30
> > To: FFmpeg development discussions and patches
> > 
> > Subject: Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result
> > check for av_frame_get_side_data
> >
> > On Friday, May 7, 2021 at 4:11 PM, Guo, Yejun  wrote:
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: ffmpeg-devel  On Behalf Of
> > > > Steven Liu
> > > > Sent: 2021年5月7日 14:43
> > > > To: ffmpeg-devel@ffmpeg.org
> > > > Cc: Steven Liu 
> > > > Subject: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result
> check
> > for
> > > > av_frame_get_side_data
> > > >
> > > > CID: 1482090
> > >
> > > thanks for the patch, what does CID mean?
> > It means Coverity ID :D
> > Have sent a invite to you yet.
> 
> waiting for the invitation, I even checked the Junk folder, still nothing
> received.
> 
> > >
> > > > av_frame_get_side_data can return null, and sd->data is used after
> > > > the call, so the null return value should be checked.
> > > >
> > > > Signed-off-by: Steven Liu 
> > > > ---
> > > >  libavfilter/vf_dnn_classify.c | 4 
> > > >  1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/libavfilter/vf_dnn_classify.c 
> > > > b/libavfilter/vf_dnn_classify.c
> > > > index 18fcd452d0..7900255cfe 100644
> > > > --- a/libavfilter/vf_dnn_classify.c
> > > > +++ b/libavfilter/vf_dnn_classify.c
> > > > @@ -77,6 +77,10 @@ static int dnn_classify_post_proc(AVFrame
> *frame,
> > > > DNNData *output, uint32_t bbox
> > > >  }
> > > >
> > > >  sd = av_frame_get_side_data(frame,
> > > > AV_FRAME_DATA_DETECTION_BBOXES);
> > > > +if (!sd) {
> > > > +av_log(filter_ctx, AV_LOG_ERROR, "Cannot get side data in
> > > > dnn_classify_post_proc\n");
> > > > +return -1;
> > > > +}
> > >
> > > The check happens in the backend,
> > > see
> >
> https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/dnn/dnn_back
> > end_openvino.c#L536,
> > > this function will not be invoked if sd is NULL.
> > Do you mean need contain_valid_detection_bbox before
> > av_frame_get_side_data here?
> 
> that function is used within the dnn backend, I think your code in this patch
> looks good to me.
> 
pushed
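
The resulting pattern in dnn_classify_post_proc() is then simply: fetch the side data, error out if it is missing, and only afterwards touch sd->data (simplified sketch):

    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
    if (!sd) {
        av_log(filter_ctx, AV_LOG_ERROR, "Cannot get side data in dnn_classify_post_proc\n");
        return -1;
    }
    header = (AVDetectionBBoxHeader *)sd->data;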


Re: [FFmpeg-devel] [PATCH V2 4/4] dnn/vf_dnn_detect: add tensorflow output parse support

2021-05-10 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Guo,
> Yejun
> Sent: May 10, 2021 14:14
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH V2 4/4] dnn/vf_dnn_detect: add
> tensorflow output parse support
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of Ting
> > Fu
> > Sent: May 6, 2021 16:46
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: [FFmpeg-devel] [PATCH V2 4/4] dnn/vf_dnn_detect: add
> tensorflow
> > output parse support
> >
> > Testing model is tensorflow official model in github repo, please refer
> >
> https://github.com/tensorflow/models/blob/master/research/object_detecti
> > on/g3doc/tf1_detection_zoo.md
> > to download the detect model as you need.
> > For example, local testing was carried out with
> > 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz', and
> > used one image of dog in
> >
> https://github.com/tensorflow/models/blob/master/research/object_detecti
> > on/test_images/image1.jpg
> >
> > Testing command is:
> > ./ffmpeg -i image1.jpg -vf
> > dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
> >
> "num_detections_scores_classes_boxes":m
> > odel=ssd_mobilenet_v2_coco.pb,\
> > showinfo -f null -
> >
> > We will see the result similar as below:
> > [Parsed_showinfo_1 @ 0x33e65f0]   side data - detection bounding boxes:
> > [Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
> > [Parsed_showinfo_1 @ 0x33e65f0] index: 0,   region: (382, 60) ->
> > (1005, 593), label: 18, confidence: 9834/1.
> > [Parsed_showinfo_1 @ 0x33e65f0] index: 1,   region: (12, 8) -> (328,
> > 549), label: 18, confidence: 8555/1.
> > [Parsed_showinfo_1 @ 0x33e65f0] index: 2,   region: (293, 7) -> (682,
> > 458), label: 1, confidence: 8033/1.
> > [Parsed_showinfo_1 @ 0x33e65f0] index: 3,   region: (342, 0) -> (690,
> > 325), label: 1, confidence: 5878/1.
> >
> > There are two boxes of dog with scores 94.05% & 93.45% and two boxes of
> > person with scores 80.33% & 58.78%.
> >
> > Signed-off-by: Ting Fu 
> > ---
> >  libavfilter/vf_dnn_detect.c | 95
> > -
> >  1 file changed, 94 insertions(+), 1 deletion(-)
> >
> > diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
> > index 7d39acb653..818b53a052 100644
> > --- a/libavfilter/vf_dnn_detect.c
> > +++ b/libavfilter/vf_dnn_detect.c
> > @@ -48,6 +48,9 @@ typedef struct DnnDetectContext {
> >  #define FLAGS AV_OPT_FLAG_FILTERING_PARAM |
> > AV_OPT_FLAG_VIDEO_PARAM
> >  static const AVOption dnn_detect_options[] = {
> >  { "dnn_backend", "DNN backend",
> > OFFSET(backend_type), AV_OPT_TYPE_INT,   { .i64 = 2 },
> > INT_MIN, INT_MAX, FLAGS, "backend" },
> > +#if (CONFIG_LIBTENSORFLOW == 1)
> > +{ "tensorflow",  "tensorflow backend flag",0,
> > AV_OPT_TYPE_CONST, { .i64 = 1 },0, 0, FLAGS, "backend" },
> > +#endif
> >  #if (CONFIG_LIBOPENVINO == 1)
> >  { "openvino","openvino backend flag",  0,
> > AV_OPT_TYPE_CONST, { .i64 = 2 },0, 0, FLAGS, "backend" },
> >  #endif
> > @@ -59,7 +62,7 @@ static const AVOption dnn_detect_options[] = {
> >
> >  AVFILTER_DEFINE_CLASS(dnn_detect);
> >
> > -static int dnn_detect_post_proc(AVFrame *frame, DNNData *output,
> > uint32_t nb, AVFilterContext *filter_ctx)
> > +static int dnn_detect_post_proc_ov(AVFrame *frame, DNNData *output,
> > AVFilterContext *filter_ctx)
> >  {
> >  DnnDetectContext *ctx = filter_ctx->priv;
> >  float conf_threshold = ctx->confidence;
> > @@ -136,6 +139,96 @@ static int dnn_detect_post_proc(AVFrame *frame,
> > DNNData *output, uint32_t nb, AV
> >  return 0;
> >  }
> >
> > +static int dnn_detect_post_proc_tf(AVFrame *frame, DNNData *output,
> > AVFilterContext *filter_ctx)
> > +{
> > +DnnDetectContext *ctx = filter_ctx->priv;
> > +int proposal_count;
> > +float conf_threshold = ctx->confidence;
> > +float *conf, *position, *label_id, x0, y0, x1, y1;
> > +int nb_bboxes = 0;
> > +AVFrameSideData *sd;
> > +AVDetectionBBox *bbox;
> > +AVDetectionBBoxHeader *header;
> > +
> > +proposal_count = *(float *)(output[0].data);
> > +conf   = output[1].data;
> > +position   = output[3].data;
>

Re: [FFmpeg-devel] [PATCH V2 4/4] dnn/vf_dnn_detect: add tensorflow output parse support

2021-05-10 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of Ting
> Fu
> Sent: May 6, 2021 16:46
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH V2 4/4] dnn/vf_dnn_detect: add tensorflow
> output parse support
> 
> Testing model is tensorflow official model in github repo, please refer
> https://github.com/tensorflow/models/blob/master/research/object_detecti
> on/g3doc/tf1_detection_zoo.md
> to download the detect model as you need.
> For example, local testing was carried out with
> 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz', and
> used one image of dog in
> https://github.com/tensorflow/models/blob/master/research/object_detecti
> on/test_images/image1.jpg
> 
> Testing command is:
> ./ffmpeg -i image1.jpg -vf
> dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
> "num_detections_scores_classes_boxes":m
> odel=ssd_mobilenet_v2_coco.pb,\
> showinfo -f null -
> 
> We will see the result similar as below:
> [Parsed_showinfo_1 @ 0x33e65f0]   side data - detection bounding boxes:
> [Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
> [Parsed_showinfo_1 @ 0x33e65f0] index: 0,   region: (382, 60) ->
> (1005, 593), label: 18, confidence: 9834/10000.
> [Parsed_showinfo_1 @ 0x33e65f0] index: 1,   region: (12, 8) -> (328,
> 549), label: 18, confidence: 8555/10000.
> [Parsed_showinfo_1 @ 0x33e65f0] index: 2,   region: (293, 7) -> (682,
> 458), label: 1, confidence: 8033/10000.
> [Parsed_showinfo_1 @ 0x33e65f0] index: 3,   region: (342, 0) -> (690,
> 325), label: 1, confidence: 5878/10000.
> 
> There are two boxes of dog with scores 94.05% & 93.45% and two boxes of
> person with scores 80.33% & 58.78%.
> 
> Signed-off-by: Ting Fu 
> ---
>  libavfilter/vf_dnn_detect.c | 95
> -
>  1 file changed, 94 insertions(+), 1 deletion(-)
> 
> diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
> index 7d39acb653..818b53a052 100644
> --- a/libavfilter/vf_dnn_detect.c
> +++ b/libavfilter/vf_dnn_detect.c
> @@ -48,6 +48,9 @@ typedef struct DnnDetectContext {
>  #define FLAGS AV_OPT_FLAG_FILTERING_PARAM |
> AV_OPT_FLAG_VIDEO_PARAM
>  static const AVOption dnn_detect_options[] = {
>  { "dnn_backend", "DNN backend",
> OFFSET(backend_type), AV_OPT_TYPE_INT,   { .i64 = 2 },
> INT_MIN, INT_MAX, FLAGS, "backend" },
> +#if (CONFIG_LIBTENSORFLOW == 1)
> +{ "tensorflow",  "tensorflow backend flag",0,
> AV_OPT_TYPE_CONST, { .i64 = 1 },0, 0, FLAGS, "backend" },
> +#endif
>  #if (CONFIG_LIBOPENVINO == 1)
>  { "openvino","openvino backend flag",  0,
> AV_OPT_TYPE_CONST, { .i64 = 2 },0, 0, FLAGS, "backend" },
>  #endif
> @@ -59,7 +62,7 @@ static const AVOption dnn_detect_options[] = {
> 
>  AVFILTER_DEFINE_CLASS(dnn_detect);
> 
> -static int dnn_detect_post_proc(AVFrame *frame, DNNData *output,
> uint32_t nb, AVFilterContext *filter_ctx)
> +static int dnn_detect_post_proc_ov(AVFrame *frame, DNNData *output,
> AVFilterContext *filter_ctx)
>  {
>  DnnDetectContext *ctx = filter_ctx->priv;
>  float conf_threshold = ctx->confidence;
> @@ -136,6 +139,96 @@ static int dnn_detect_post_proc(AVFrame *frame,
> DNNData *output, uint32_t nb, AV
>  return 0;
>  }
> 
> +static int dnn_detect_post_proc_tf(AVFrame *frame, DNNData *output,
> AVFilterContext *filter_ctx)
> +{
> +DnnDetectContext *ctx = filter_ctx->priv;
> +int proposal_count;
> +float conf_threshold = ctx->confidence;
> +float *conf, *position, *label_id, x0, y0, x1, y1;
> +int nb_bboxes = 0;
> +AVFrameSideData *sd;
> +AVDetectionBBox *bbox;
> +AVDetectionBBoxHeader *header;
> +
> +proposal_count = *(float *)(output[0].data);
> +conf   = output[1].data;
> +position   = output[3].data;
> +label_id   = output[2].data;
> +
> +sd = av_frame_get_side_data(frame,
> AV_FRAME_DATA_DETECTION_BBOXES);
> +if (sd) {
> +av_log(filter_ctx, AV_LOG_ERROR, "already have dnn bounding
> boxes in side data.\n");
> +return -1;
> +}
> +
> +for (int i = 0; i < proposal_count; ++i) {
> +if (conf[i] < conf_threshold)
> +continue;
> +nb_bboxes++;
> +}
> +
> +if (nb_bboxes == 0) {
> +av_log(filter_ctx, AV_LOG_VERBOSE, "nothing detected in this
> frame.\n");
> +return 0;
> +}
> +
> +header = av_detection_bbox_create_side_data(frame, nb_bboxes);
> +if (!header) {
> +av_log(filter_ctx, AV_LOG_ERROR, "failed to create side data
> with %d bounding boxes\n", nb_bboxes);
> +return -1;
> +}
> +
> +av_strlcpy(header->source, ctx->dnnctx.model_filename,
> sizeof(header->source));
> +
> +for (int i = 0; i < proposal_count; ++i) {
> +y0 = position[i * 4];
> +x0 = position[i * 4 + 1];
> +y1 = position[i * 4 + 2];
> +x1 = position[i * 4 + 3];
> +
> +bbox = av_get_detection_bbox(header, i);
> +
> +if (conf[i] < conf_threshold) {
> + 

Re: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf: fix cross library usage

2021-05-08 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年5月8日 9:18
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf: fix
> cross library usage
> 
> From: Limin Wang 
> 
> duplicate ff_hex_to_data() function from avformat and rename it to
> hex_to_data() as static function.
> 
> Signed-off-by: Limin Wang 
> ---
>  libavfilter/dnn/dnn_backend_tf.c | 41
> +---
>  1 file changed, 38 insertions(+), 3 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index 03fe310..5980919 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -28,8 +28,8 @@
>  #include "dnn_backend_native_layer_conv2d.h"
>  #include "dnn_backend_native_layer_depth2space.h"
>  #include "libavformat/avio.h"
> -#include "libavformat/internal.h"
>  #include "libavutil/avassert.h"
> +#include "libavutil/avstring.h"
>  #include "../internal.h"
>  #include "dnn_backend_native_layer_pad.h"
>  #include "dnn_backend_native_layer_maximum.h"
> @@ -195,6 +195,38 @@ static DNNReturnType get_output_tf(void *model,
> const char *input_name, int inpu
>  return ret;
>  }
> 
> +#define SPACE_CHARS " \t\r\n"
> +static int hex_to_data(uint8_t *data, int data_size, const char *p)
> +{
> +int c, len, v;
> +
> +len = 0;
> +v   = 1;
> +for (;;) {
> +p += strspn(p, SPACE_CHARS);
> +if (*p == '\0')
> +break;
> +c = av_toupper((unsigned char) *p++);
> +if (c >= '0' && c <= '9')
> +c = c - '0';
> +else if (c >= 'A' && c <= 'F')
> +c = c - 'A' + 10;
> +else
> +break;
> +v = (v << 4) | c;
> +if (v & 0x100) {
> +if (data) {
> +if (len >= data_size)
> +return AVERROR(ERANGE);
> +data[len] = v;
> +}
> +len++;
> +v = 1;
> +}
> +}
> +return len;
> +}
> +
>  static DNNReturnType load_tf_model(TFModel *tf_model, const char
> *model_filename)
>  {
>  TFContext *ctx = _model->ctx;
> @@ -219,14 +251,17 @@ static DNNReturnType load_tf_model(TFModel
> *tf_model, const char *model_filename
>  return DNN_ERROR;
>  }
>  config = tf_model->ctx.options.sess_config + 2;
> -sess_config_length = ff_hex_to_data(NULL, config);
> +sess_config_length = hex_to_data(NULL, 0, config);
> 
>  sess_config = av_mallocz(sess_config_length +
> AV_INPUT_BUFFER_PADDING_SIZE);
>  if (!sess_config) {
>  av_log(ctx, AV_LOG_ERROR, "failed to allocate memory\n");
>  return DNN_ERROR;
>  }
> -ff_hex_to_data(sess_config, config);
> +if (hex_to_data(sess_config, sess_config_length, config) < 0) {
> +av_log(ctx, AV_LOG_ERROR, "failed to convert hex to data\n");
> +return DNN_ERROR;
> +}
>  }
> 
>  graph_def = read_graph(model_filename);
> --
LGTM, thanks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result check for av_frame_get_side_data

2021-05-07 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Steven Liu
> Sent: 2021年5月7日 16:30
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result
> check for av_frame_get_side_data
> 
> Guo, Yejun  于2021年5月7日周五 下午4:11写道:
> >
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > Steven Liu
> > > Sent: 2021年5月7日 14:43
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Steven Liu 
> > > Subject: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result check
> for
> > > av_frame_get_side_data
> > >
> > > CID: 1482090
> >
> > thanks for the patch, what does CID mean?
> It means Coverity ID :D
> Have already sent an invite to you.

waiting for the invitation, I even checked the Junk folder, still nothing 
received.

> >
> > > there can return null from av_frame_get_side_data, and will use sd->data
> > > after av_frame_get_side_data, so should check null return value.
> > >
> > > Signed-off-by: Steven Liu 
> > > ---
> > >  libavfilter/vf_dnn_classify.c | 4 
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c
> > > index 18fcd452d0..7900255cfe 100644
> > > --- a/libavfilter/vf_dnn_classify.c
> > > +++ b/libavfilter/vf_dnn_classify.c
> > > @@ -77,6 +77,10 @@ static int dnn_classify_post_proc(AVFrame *frame,
> > > DNNData *output, uint32_t bbox
> > >  }
> > >
> > >  sd = av_frame_get_side_data(frame,
> > > AV_FRAME_DATA_DETECTION_BBOXES);
> > > +if (!sd) {
> > > +av_log(filter_ctx, AV_LOG_ERROR, "Cannot get side data in
> > > dnn_classify_post_proc\n");
> > > +return -1;
> > > +}
> >
> > The check happens in the backend,
> > see
> https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/dnn/dnn_back
> end_openvino.c#L536,
> > this function will not be invoked if sd is NULL.
> Do you mean need contain_valid_detection_bbox before
> av_frame_get_side_data here?

that function is used within the dnn backend, I think your code in this patch
looks good to me.

> 
> 
> >
> > anyway, I think it's nice to have another check here.
> >
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >
> > To unsubscribe, visit link above, or email
> > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result check for av_frame_get_side_data

2021-05-07 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Steven Liu
> Sent: 2021年5月7日 14:43
> To: ffmpeg-devel@ffmpeg.org
> Cc: Steven Liu 
> Subject: [FFmpeg-devel] [PATCH] avfilter/vf_dnn_classify: add result check for
> av_frame_get_side_data
> 
> CID: 1482090

thanks for the patch, what does CID mean?

> there can return null from av_frame_get_side_data, and will use sd->data
> after av_frame_get_side_data, so should check null return value.
> 
> Signed-off-by: Steven Liu 
> ---
>  libavfilter/vf_dnn_classify.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c
> index 18fcd452d0..7900255cfe 100644
> --- a/libavfilter/vf_dnn_classify.c
> +++ b/libavfilter/vf_dnn_classify.c
> @@ -77,6 +77,10 @@ static int dnn_classify_post_proc(AVFrame *frame,
> DNNData *output, uint32_t bbox
>  }
> 
>  sd = av_frame_get_side_data(frame,
> AV_FRAME_DATA_DETECTION_BBOXES);
> +if (!sd) {
> +av_log(filter_ctx, AV_LOG_ERROR, "Cannot get side data in
> dnn_classify_post_proc\n");
> +return -1;
> +}

The check happens in the backend,
see 
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/dnn/dnn_backend_openvino.c#L536,
 
this function will not be invoked if sd is NULL.

anyway, I think it's nice to have another check here.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 2/2] configure: dnn needs avformat

2021-05-05 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Matthias C. M. Troffaes
> Sent: 2021年5月4日 20:27
> To: ffmpeg-devel@ffmpeg.org
> Cc: Matthias C. M. Troffaes 
> Subject: [FFmpeg-devel] [PATCH 2/2] configure: dnn needs avformat
> 
> The source file "libavfilter/dnn/dnn_backend_native.h" includes
> "libavformat/avio.h", so avformat needs to be declared as a dependency.
> ---
>  configure | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/configure b/configure
> index 9e45c0822c..8725a94f8a 100755
> --- a/configure
> +++ b/configure
> @@ -2660,7 +2660,7 @@ cbs_vp9_select="cbs"
>  dct_select="rdft"
>  dirac_parse_select="golomb"
>  dnn_suggest="libtensorflow libopenvino"
> -dnn_deps="swscale"
> +dnn_deps="avformat swscale"
>  error_resilience_select="me_cmp"
>  faandct_deps="faan"
>  faandct_select="fdctdsp"
> --
> 2.25.1
> 

thanks for the catch, the native backend uses AVIOContext to load model
from file. Will push this patch soon.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_native_layer_avgpool.c: Correct Spelling of Pixel

2021-05-05 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Shubhanshu Saxena
> Sent: 2021年5月1日 23:17
> To: ffmpeg-devel@ffmpeg.org
> Cc: Shubhanshu Saxena 
> Subject: [FFmpeg-devel] [PATCH] lavfi/dnn_backend_native_layer_avgpool.c:
> Correct Spelling of Pixel
> 
> Correct spelling of word `pixel` from `pxiels`
> 
> Signed-off-by: Shubhanshu Saxena 
> ---
>  libavfilter/dnn/dnn_backend_native_layer_avgpool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_native_layer_avgpool.c
> b/libavfilter/dnn/dnn_backend_native_layer_avgpool.c
> index dcfb8c816f..89f1787523 100644
> --- a/libavfilter/dnn/dnn_backend_native_layer_avgpool.c
> +++ b/libavfilter/dnn/dnn_backend_native_layer_avgpool.c
> @@ -73,7 +73,7 @@ int ff_dnn_execute_layer_avg_pool(DnnOperand
> *operands, const int32_t *input_ope
>  DnnOperand *output_operand = [output_operand_index];
> 
>  /**
> - * When padding_method = SAME, the tensorflow will only padding the
> hald number of 0 pxiels
> + * When padding_method = SAME, the tensorflow will only padding
> the hald number of 0 pixels
>   * except the remainders.
>   * Eg: assuming the input height = 1080, the strides = 11, so the
> remainders = 1080 % 11 = 2
>   * and if ksize = 5: it will fill (5 - 2) >> 1 = 1 line before the 
> first line
> of input image,
> --
LGTM, will push soon, thanks.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH V2 6/6] lavfi/dnn_classify: add filter dnn_classify for classification based on detection bounding boxes

2021-05-04 Thread Guo, Yejun


> -Original Message-
> From: Guo, Yejun 
> Sent: 2021年4月29日 21:37
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun 
> Subject: [PATCH V2 6/6] lavfi/dnn_classify: add filter dnn_classify for
> classification based on detection bounding boxes
> 
> classification is done on every detection bounding box in frame's side data,
> which are the results of object detection (filter dnn_detect).
> 
> Please refer to commit log of dnn_detect for the material for detection,
> and see below for classification.
> 
> - download material for classification:
> wget
> https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/202
> 1.1/emotions-recognition-retail-0003.bin
> wget
> https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/202
> 1.1/emotions-recognition-retail-0003.xml
> wget
> https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/202
> 1.1/emotions-recognition-retail-0003.label
> 
> - run command as:
> ./ffmpeg -i cici.jpg -vf
> dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:in
> put=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0
> 001.label,dnn_classify=dnn_backend=openvino:model=emotions-recognition-
> retail-0003.xml:input=data:output=prob_emotion:confidence=0.3:labels=em
> otions-recognition-retail-0003.label:target=face,showinfo -f null -
> 
> We'll see the detect result as below:
> [Parsed_showinfo_2 @ 0x55b7d25e77c0]   side data - detection bounding
> boxes:
> [Parsed_showinfo_2 @ 0x55b7d25e77c0] source:
> face-detection-adas-0001.xml, emotions-recognition-retail-0003.xml
> [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0,  region: (1005, 813) ->
> (1086, 905), label: face, confidence: 10000/10000.
> [Parsed_showinfo_2 @ 0x55b7d25e77c0]classify:  label:
> happy, confidence: 6757/10000.
> [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1,  region: (888, 839) ->
> (967, 926), label: face, confidence: 6917/10000.
> [Parsed_showinfo_2 @ 0x55b7d25e77c0]classify:  label: anger,
> confidence: 4320/10000.
> 
> Signed-off-by: Guo, Yejun 
> ---
> the main change of V2 in this patch set is to rebase with latest code
> by resolving the conflicts.
> 
>  configure |   1 +
>  doc/filters.texi  |  39 
>  libavfilter/Makefile  |   1 +
>  libavfilter/allfilters.c  |   1 +
>  libavfilter/vf_dnn_classify.c | 330
> ++
>  5 files changed, 372 insertions(+)
>  create mode 100644 libavfilter/vf_dnn_classify.c
> 
will push tomorrow if there's no objection, thanks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH V2 6/6] lavfi/dnn_classify: add filter dnn_classify for classification based on detection bounding boxes

2021-04-29 Thread Guo, Yejun
classification is done on every detection bounding box in frame's side data,
which are the results of object detection (filter dnn_detect).

Please refer to commit log of dnn_detect for the material for detection,
and see below for classification.

- download material for classification:
wget 
https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.bin
wget 
https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.xml
wget 
https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.label

- run command as:
./ffmpeg -i cici.jpg -vf 
dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,dnn_classify=dnn_backend=openvino:model=emotions-recognition-retail-0003.xml:input=data:output=prob_emotion:confidence=0.3:labels=emotions-recognition-retail-0003.label:target=face,showinfo
 -f null -

We'll see the detect result as below:
[Parsed_showinfo_2 @ 0x55b7d25e77c0]   side data - detection bounding boxes:
[Parsed_showinfo_2 @ 0x55b7d25e77c0] source: face-detection-adas-0001.xml, 
emotions-recognition-retail-0003.xml
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0,  region: (1005, 813) -> (1086, 
905), label: face, confidence: 10000/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0]classify:  label: happy, 
confidence: 6757/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1,  region: (888, 839) -> (967, 
926), label: face, confidence: 6917/10000.
[Parsed_showinfo_2 @ 0x55b7d25e77c0]classify:  label: anger, 
confidence: 4320/10000.
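
For illustration, a minimal sketch (not from the patch) of how a downstream
application could walk the side data that dnn_detect/dnn_classify populate;
it only touches the header and bounding-box fields already used elsewhere in
this series (the region and classify_count), and the helper name and log
format are illustrative:

#include <stdint.h>
#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/detection_bbox.h>

/* Walk the detection bounding boxes attached to a frame and report how many
 * classifications each box has received. Returns the number of boxes, or 0
 * if the frame carries no detection side data. */
static int dump_detection_bboxes(const AVFrame *frame)
{
    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
    AVDetectionBBoxHeader *header;

    if (!sd || !sd->size)
        return 0;

    header = (AVDetectionBBoxHeader *)sd->data;
    for (uint32_t i = 0; i < header->nb_bboxes; i++) {
        const AVDetectionBBox *bbox = av_get_detection_bbox(header, i);
        printf("box %u: region (%d, %d) -> (%d, %d), %u classification(s)\n",
               i, bbox->x, bbox->y, bbox->x + bbox->w, bbox->y + bbox->h,
               bbox->classify_count);
    }
    return header->nb_bboxes;
}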

Signed-off-by: Guo, Yejun 
---
the main change of V2 in this patch set is to rebase with latest code
by resolving the conflicts.

 configure |   1 +
 doc/filters.texi  |  39 
 libavfilter/Makefile  |   1 +
 libavfilter/allfilters.c  |   1 +
 libavfilter/vf_dnn_classify.c | 330 ++
 5 files changed, 372 insertions(+)
 create mode 100644 libavfilter/vf_dnn_classify.c

diff --git a/configure b/configure
index 820f719a32..9f2dfaf2d4 100755
--- a/configure
+++ b/configure
@@ -3550,6 +3550,7 @@ derain_filter_select="dnn"
 deshake_filter_select="pixelutils"
 deshake_opencl_filter_deps="opencl"
 dilation_opencl_filter_deps="opencl"
+dnn_classify_filter_select="dnn"
 dnn_detect_filter_select="dnn"
 dnn_processing_filter_select="dnn"
 drawtext_filter_deps="libfreetype"
diff --git a/doc/filters.texi b/doc/filters.texi
index 36e35a175b..b405cc5dfb 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -10127,6 +10127,45 @@ ffmpeg -i INPUT -f lavfi -i 
nullsrc=hd720,geq='r=128+80*(sin(sqrt((X-W/2)*(X-W/2
 @end example
 @end itemize
 
+@section dnn_classify
+
+Do classification with deep neural networks based on bounding boxes.
+
+The filter accepts the following options:
+
+@table @option
+@item dnn_backend
+Specify which DNN backend to use for model loading and execution. This option 
accepts
+only openvino now, tensorflow backends will be added.
+
+@item model
+Set path to model file specifying network architecture and its parameters.
+Note that different backends use different file formats.
+
+@item input
+Set the input name of the dnn network.
+
+@item output
+Set the output name of the dnn network.
+
+@item confidence
+Set the confidence threshold (default: 0.5).
+
+@item labels
+Set path to label file specifying the mapping between label id and name.
+Each label name is written in one line, tailing spaces and empty lines are 
skipped.
+The first line is the name of label id 0,
+and the second line is the name of label id 1, etc.
+The label id is considered as name if the label file is not provided.
+
+@item backend_configs
+Set the configs to be passed into backend
+
+For tensorflow backend, you can set its configs with @option{sess_config} 
options,
+please use tools/python/tf_sess_config.py to get the configs for your system.
+
+@end table
+
 @section dnn_detect
 
 Do object detection with deep neural networks.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 5a287364b0..6c22d0404e 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -243,6 +243,7 @@ OBJS-$(CONFIG_DILATION_FILTER)   += 
vf_neighbor.o
 OBJS-$(CONFIG_DILATION_OPENCL_FILTER)+= vf_neighbor_opencl.o opencl.o \
 opencl/neighbor.o
 OBJS-$(CONFIG_DISPLACE_FILTER)   += vf_displace.o framesync.o
+OBJS-$(CONFIG_DNN_CLASSIFY_FILTER)   += vf_dnn_classify.o
 OBJS-$(CONFIG_DNN_DETECT_FILTER) += vf_dnn_detect.o
 OBJS-$(CONFIG_DNN_PROCESSING_FILTER) += vf_dnn_processing.o
 OBJS-$(CONFIG_DOUBLEWEAVE_FILTER)+= vf_weave.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.

[FFmpeg-devel] [PATCH V2 5/6] lavfi/dnn: add classify support with openvino backend

2021-04-29 Thread Guo, Yejun
Signed-off-by: Guo, Yejun 
---
 libavfilter/dnn/dnn_backend_openvino.c | 143 +
 libavfilter/dnn/dnn_io_proc.c  |  60 +++
 libavfilter/dnn/dnn_io_proc.h  |   1 +
 libavfilter/dnn_filter_common.c|  21 
 libavfilter/dnn_filter_common.h|   2 +
 libavfilter/dnn_interface.h|  10 +-
 6 files changed, 218 insertions(+), 19 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index 4e58ff6d9c..1ff8a720b9 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -29,6 +29,7 @@
 #include "libavutil/avassert.h"
 #include "libavutil/opt.h"
 #include "libavutil/avstring.h"
+#include "libavutil/detection_bbox.h"
 #include "../internal.h"
 #include "queue.h"
 #include "safe_queue.h"
@@ -74,6 +75,7 @@ typedef struct TaskItem {
 // one task might have multiple inferences
 typedef struct InferenceItem {
 TaskItem *task;
+uint32_t bbox_index;
 } InferenceItem;
 
 // one request for one call to openvino
@@ -182,12 +184,23 @@ static DNNReturnType fill_model_input_ov(OVModel 
*ov_model, RequestItem *request
 request->inferences[i] = inference;
 request->inference_count = i + 1;
 task = inference->task;
-if (task->do_ioproc) {
-if (ov_model->model->frame_pre_proc != NULL) {
-ov_model->model->frame_pre_proc(task->in_frame, , 
ov_model->model->filter_ctx);
-} else {
-ff_proc_from_frame_to_dnn(task->in_frame, , 
ov_model->model->func_type, ctx);
+switch (task->ov_model->model->func_type) {
+case DFT_PROCESS_FRAME:
+case DFT_ANALYTICS_DETECT:
+if (task->do_ioproc) {
+if (ov_model->model->frame_pre_proc != NULL) {
+ov_model->model->frame_pre_proc(task->in_frame, , 
ov_model->model->filter_ctx);
+} else {
+ff_proc_from_frame_to_dnn(task->in_frame, , 
ov_model->model->func_type, ctx);
+}
 }
+break;
+case DFT_ANALYTICS_CLASSIFY:
+ff_frame_to_dnn_classify(task->in_frame, , 
inference->bbox_index, ctx);
+break;
+default:
+av_assert0(!"should not reach here");
+break;
 }
 input.data = (uint8_t *)input.data
  + input.width * input.height * input.channels * 
get_datatype_size(input.dt);
@@ -276,6 +289,13 @@ static void infer_completion_callback(void *args)
 }
 task->ov_model->model->detect_post_proc(task->out_frame, , 
1, task->ov_model->model->filter_ctx);
 break;
+case DFT_ANALYTICS_CLASSIFY:
+if (!task->ov_model->model->classify_post_proc) {
+av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide 
post proc\n");
+return;
+}
+task->ov_model->model->classify_post_proc(task->out_frame, 
, request->inferences[i]->bbox_index, task->ov_model->model->filter_ctx);
+break;
 default:
 av_assert0(!"should not reach here");
 break;
@@ -513,7 +533,44 @@ static DNNReturnType get_input_ov(void *model, DNNData 
*input, const char *input
 return DNN_ERROR;
 }
 
-static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, 
TaskItem *task, Queue *inference_queue)
+static int contain_valid_detection_bbox(AVFrame *frame)
+{
+AVFrameSideData *sd;
+const AVDetectionBBoxHeader *header;
+const AVDetectionBBox *bbox;
+
+sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
+if (!sd) { // this frame has nothing detected
+return 0;
+}
+
+if (!sd->size) {
+return 0;
+}
+
+header = (const AVDetectionBBoxHeader *)sd->data;
+if (!header->nb_bboxes) {
+return 0;
+}
+
+for (uint32_t i = 0; i < header->nb_bboxes; i++) {
+bbox = av_get_detection_bbox(header, i);
+if (bbox->x < 0 || bbox->w < 0 || bbox->x + bbox->w >= frame->width) {
+return 0;
+}
+if (bbox->y < 0 || bbox->h < 0 || bbox->y + bbox->h >= frame->width) {
+return 0;
+}
+
+if (bbox->classify_count == AV_NUM_DETECTION_BBOX_CLASSIFY) {
+return 0;
+}
+}
+
+return 1;
+}
+
+static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, 
TaskItem *task, Queue *inference_queue, DNNExecBaseParams *exec_params)
 {
 switch (func_type) {
 case DFT_PROCESS_FRAME:
@@ -532,6 +589,45 @@ static DNNReturnTyp

[FFmpeg-devel] [PATCH V2 4/6] lavfi/dnn: refine dnn interface to add DNNExecBaseParams

2021-04-29 Thread Guo, Yejun
Different function types of model require different parameters; for
example, object detection detects lots of objects (cat/dog/...) in
the frame, while classification needs to know which object (cat or dog)
it is going to classify.

The current interface would need a new function with more parameters
for each new requirement. With this change, we can just add a new
struct (for example DNNExecClassifyParams) based on DNNExecBaseParams,
and so continue to use the current interface execute_model with only
the params changed.
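
For illustration, a minimal sketch of the parameter-struct layering described
above; the field names are illustrative rather than the exact definition
introduced by this series, and the target member is an assumption borrowed
from the classify filter later in the set:

#include <stdint.h>
#include "libavutil/frame.h"

/* Common parameters shared by every DNN function type
 * (illustrative field names, not the exact definition in dnn_interface.h). */
typedef struct DNNExecBaseParams {
    const char *input_name;
    const char **output_names;
    uint32_t nb_output;
    AVFrame *in_frame;
    AVFrame *out_frame;
} DNNExecBaseParams;

/* A function type that needs extra information embeds the base struct as its
 * first member, so the same execute_model() entry point can keep taking a
 * DNNExecBaseParams pointer. */
typedef struct DNNExecClassifyParams {
    DNNExecBaseParams base;
    const char *target;   /* e.g. only classify bboxes whose label is "face" */
} DNNExecClassifyParams;

/* Only valid when the caller really passed a DNNExecClassifyParams; safe
 * because base is the first member. */
static const char *get_classify_target(DNNExecBaseParams *exec_params)
{
    return ((DNNExecClassifyParams *)exec_params)->target;
}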
---
 libavfilter/dnn/Makefile   |  1 +
 libavfilter/dnn/dnn_backend_common.c   | 51 ++
 libavfilter/dnn/dnn_backend_common.h   | 31 
 libavfilter/dnn/dnn_backend_native.c   | 15 +++-
 libavfilter/dnn/dnn_backend_native.h   |  3 +-
 libavfilter/dnn/dnn_backend_openvino.c | 50 -
 libavfilter/dnn/dnn_backend_openvino.h |  6 +--
 libavfilter/dnn/dnn_backend_tf.c   | 18 +++--
 libavfilter/dnn/dnn_backend_tf.h   |  3 +-
 libavfilter/dnn_filter_common.c| 20 --
 libavfilter/dnn_interface.h| 14 +--
 11 files changed, 139 insertions(+), 73 deletions(-)
 create mode 100644 libavfilter/dnn/dnn_backend_common.c
 create mode 100644 libavfilter/dnn/dnn_backend_common.h

diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
index d6d58f4b61..4cfbce0efc 100644
--- a/libavfilter/dnn/Makefile
+++ b/libavfilter/dnn/Makefile
@@ -2,6 +2,7 @@ OBJS-$(CONFIG_DNN)   += 
dnn/dnn_interface.o
 OBJS-$(CONFIG_DNN)   += dnn/dnn_io_proc.o
 OBJS-$(CONFIG_DNN)   += dnn/queue.o
 OBJS-$(CONFIG_DNN)   += dnn/safe_queue.o
+OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_common.o
 OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_native.o
 OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_native_layers.o
 OBJS-$(CONFIG_DNN)   += 
dnn/dnn_backend_native_layer_avgpool.o
diff --git a/libavfilter/dnn/dnn_backend_common.c 
b/libavfilter/dnn/dnn_backend_common.c
new file mode 100644
index 00..a522ab5650
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -0,0 +1,51 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * DNN common functions different backends.
+ */
+
+#include "dnn_backend_common.h"
+
+int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType 
func_type, DNNExecBaseParams *exec_params)
+{
+if (!exec_params) {
+av_log(ctx, AV_LOG_ERROR, "exec_params is null when execute model.\n");
+return AVERROR(EINVAL);
+}
+
+if (!exec_params->in_frame) {
+av_log(ctx, AV_LOG_ERROR, "in frame is NULL when execute model.\n");
+return AVERROR(EINVAL);
+}
+
+if (!exec_params->out_frame) {
+av_log(ctx, AV_LOG_ERROR, "out frame is NULL when execute model.\n");
+return AVERROR(EINVAL);
+}
+
+if (exec_params->nb_output != 1 && backend != DNN_TF) {
+// currently, the filter does not need multiple outputs,
+// so we just pending the support until we really need it.
+avpriv_report_missing_feature(ctx, "multiple outputs");
+return AVERROR(EINVAL);
+}
+
+return 0;
+}
diff --git a/libavfilter/dnn/dnn_backend_common.h 
b/libavfilter/dnn/dnn_backend_common.h
new file mode 100644
index 00..cd9c0f5339
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -0,0 +1,31 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * 

[FFmpeg-devel] [PATCH V2 3/6] lavfi/dnn_backend_openvino.c: move the logic for batch mode earlier

2021-04-29 Thread Guo, Yejun
---
 libavfilter/dnn/dnn_backend_openvino.c | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index a8a02d7589..9f3c696e0a 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -432,13 +432,6 @@ static DNNReturnType execute_model_ov(RequestItem 
*request, Queue *inferenceq)
 ctx = >ov_model->ctx;
 
 if (task->async) {
-if (ff_queue_size(inferenceq) < ctx->options.batch_size) {
-if (ff_safe_queue_push_front(task->ov_model->request_queue, 
request) < 0) {
-av_log(ctx, AV_LOG_ERROR, "Failed to push back 
request_queue.\n");
-return DNN_ERROR;
-}
-return DNN_SUCCESS;
-}
 ret = fill_model_input_ov(task->ov_model, request);
 if (ret != DNN_SUCCESS) {
 return ret;
@@ -793,6 +786,11 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel 
*model, const char *i
 return DNN_ERROR;
 }
 
+if (ff_queue_size(ov_model->inference_queue) < ctx->options.batch_size) {
+// not enough inference items queued for a batch
+return DNN_SUCCESS;
+}
+
 request = ff_safe_queue_pop_front(ov_model->request_queue);
 if (!request) {
 av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-- 
2.17.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH V2 2/6] lavfi/dnn_backend_openvino.c: add InferenceItem between TaskItem and RequestItem

2021-04-29 Thread Guo, Yejun
There's one task item for one function call from the dnn interface,
and one request item for one call to openvino. For classify,
one task might need multiple inferences (one classification for every
bounding box), so add InferenceItem.
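
For illustration, a small self-contained sketch (toy types, not the real
backend structs) of the relationship described above: one TaskItem per filter
call, one InferenceItem per bounding box, and each request towards the backend
carrying up to a batch of inferences; counting inference_done against
inference_todo is what tells the task it has finished:

#include <stdio.h>

#define BATCH_SIZE 4

typedef struct TaskItem {
    unsigned inference_todo;   /* one inference per detected bounding box */
    unsigned inference_done;
} TaskItem;

typedef struct InferenceItem {
    TaskItem *task;
    unsigned bbox_index;
} InferenceItem;

int main(void)
{
    TaskItem task = { .inference_todo = 10, .inference_done = 0 };
    InferenceItem inferences[10];
    unsigned sent = 0;

    /* like extract_inference_from_task(): one InferenceItem per bbox */
    for (unsigned i = 0; i < task.inference_todo; i++)
        inferences[i] = (InferenceItem){ .task = &task, .bbox_index = i };

    /* each request towards openvino carries up to BATCH_SIZE inferences */
    while (sent < task.inference_todo) {
        unsigned batch = 0;
        while (batch < BATCH_SIZE && sent < task.inference_todo) {
            /* infer_completion_callback() would account for each inference */
            inferences[sent].task->inference_done++;
            sent++;
            batch++;
        }
        printf("request carried %u inference(s)\n", batch);
    }

    printf("task complete: %u/%u\n", task.inference_done, task.inference_todo);
    return 0;
}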
---
 libavfilter/dnn/dnn_backend_openvino.c | 157 ++---
 1 file changed, 115 insertions(+), 42 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index 267c154c87..a8a02d7589 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -54,8 +54,10 @@ typedef struct OVModel{
 ie_executable_network_t *exe_network;
 SafeQueue *request_queue;   // holds RequestItem
 Queue *task_queue;  // holds TaskItem
+Queue *inference_queue; // holds InferenceItem
 } OVModel;
 
+// one task for one function call from dnn interface
 typedef struct TaskItem {
 OVModel *ov_model;
 const char *input_name;
@@ -64,13 +66,20 @@ typedef struct TaskItem {
 AVFrame *out_frame;
 int do_ioproc;
 int async;
-int done;
+uint32_t inference_todo;
+uint32_t inference_done;
 } TaskItem;
 
+// one task might have multiple inferences
+typedef struct InferenceItem {
+TaskItem *task;
+} InferenceItem;
+
+// one request for one call to openvino
 typedef struct RequestItem {
 ie_infer_request_t *infer_request;
-TaskItem **tasks;
-int task_count;
+InferenceItem **inferences;
+uint32_t inference_count;
 ie_complete_call_back_t callback;
 } RequestItem;
 
@@ -127,7 +136,12 @@ static DNNReturnType fill_model_input_ov(OVModel 
*ov_model, RequestItem *request
 IEStatusCode status;
 DNNData input;
 ie_blob_t *input_blob = NULL;
-TaskItem *task = request->tasks[0];
+InferenceItem *inference;
+TaskItem *task;
+
+inference = ff_queue_peek_front(ov_model->inference_queue);
+av_assert0(inference);
+task = inference->task;
 
 status = ie_infer_request_get_blob(request->infer_request, 
task->input_name, _blob);
 if (status != OK) {
@@ -159,9 +173,14 @@ static DNNReturnType fill_model_input_ov(OVModel 
*ov_model, RequestItem *request
 // change to be an option when necessary.
 input.order = DCO_BGR;
 
-av_assert0(request->task_count <= dims.dims[0]);
-for (int i = 0; i < request->task_count; ++i) {
-task = request->tasks[i];
+for (int i = 0; i < ctx->options.batch_size; ++i) {
+inference = ff_queue_pop_front(ov_model->inference_queue);
+if (!inference) {
+break;
+}
+request->inferences[i] = inference;
+request->inference_count = i + 1;
+task = inference->task;
 if (task->do_ioproc) {
 if (ov_model->model->frame_pre_proc != NULL) {
 ov_model->model->frame_pre_proc(task->in_frame, , 
ov_model->model->filter_ctx);
@@ -183,7 +202,8 @@ static void infer_completion_callback(void *args)
 precision_e precision;
 IEStatusCode status;
 RequestItem *request = args;
-TaskItem *task = request->tasks[0];
+InferenceItem *inference = request->inferences[0];
+TaskItem *task = inference->task;
 SafeQueue *requestq = task->ov_model->request_queue;
 ie_blob_t *output_blob = NULL;
 ie_blob_buffer_t blob_buffer;
@@ -229,10 +249,11 @@ static void infer_completion_callback(void *args)
 output.dt   = precision_to_datatype(precision);
 output.data = blob_buffer.buffer;
 
-av_assert0(request->task_count <= dims.dims[0]);
-av_assert0(request->task_count >= 1);
-for (int i = 0; i < request->task_count; ++i) {
-task = request->tasks[i];
+av_assert0(request->inference_count <= dims.dims[0]);
+av_assert0(request->inference_count >= 1);
+for (int i = 0; i < request->inference_count; ++i) {
+task = request->inferences[i]->task;
+task->inference_done++;
 
 switch (task->ov_model->model->func_type) {
 case DFT_PROCESS_FRAME:
@@ -259,13 +280,13 @@ static void infer_completion_callback(void *args)
 break;
 }
 
-task->done = 1;
+av_freep(>inferences[i]);
 output.data = (uint8_t *)output.data
   + output.width * output.height * output.channels * 
get_datatype_size(output.dt);
 }
 ie_blob_free(_blob);
 
-request->task_count = 0;
+request->inference_count = 0;
 if (ff_safe_queue_push_back(requestq, request) < 0) {
 av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
 return;
@@ -370,11 +391,11 @@ static DNNReturnType init_model_ov(OVModel *ov_model, 
const char *input_name, co
 goto err;
 }
 
-item->tasks = av_malloc_array(ctx->options.batch_size, 
sizeof(*item->tasks));
-if (!item->tasks) {
+item->inferences = av_malloc_array(ctx->options.batch_size, 
sizeof(*item->inferences));
+if (!item->inferences) {
 goto err;

[FFmpeg-devel] [PATCH V2 1/6] lavfi/dnn_backend_openvino.c: unify code for infer request for sync/async

2021-04-29 Thread Guo, Yejun
---
the main change of V2 in this patch set is to rebase with latest code
by resolving the conflicts

 libavfilter/dnn/dnn_backend_openvino.c | 49 +++---
 1 file changed, 21 insertions(+), 28 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c 
b/libavfilter/dnn/dnn_backend_openvino.c
index a8032fe56b..267c154c87 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -52,9 +52,6 @@ typedef struct OVModel{
 ie_core_t *core;
 ie_network_t *network;
 ie_executable_network_t *exe_network;
-ie_infer_request_t *infer_request;
-
-/* for async execution */
 SafeQueue *request_queue;   // holds RequestItem
 Queue *task_queue;  // holds TaskItem
 } OVModel;
@@ -269,12 +266,9 @@ static void infer_completion_callback(void *args)
 ie_blob_free(_blob);
 
 request->task_count = 0;
-
-if (task->async) {
-if (ff_safe_queue_push_back(requestq, request) < 0) {
-av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
-return;
-}
+if (ff_safe_queue_push_back(requestq, request) < 0) {
+av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
+return;
 }
 }
 
@@ -347,11 +341,6 @@ static DNNReturnType init_model_ov(OVModel *ov_model, 
const char *input_name, co
 goto err;
 }
 
-// create infer_request for sync execution
-status = ie_exec_network_create_infer_request(ov_model->exe_network, 
_model->infer_request);
-if (status != OK)
-goto err;
-
 // create infer_requests for async execution
 if (ctx->options.nireq <= 0) {
 // the default value is a rough estimation
@@ -502,10 +491,9 @@ static DNNReturnType get_output_ov(void *model, const char 
*input_name, int inpu
 OVModel *ov_model = model;
 OVContext *ctx = _model->ctx;
 TaskItem task;
-RequestItem request;
+RequestItem *request;
 AVFrame *in_frame = NULL;
 AVFrame *out_frame = NULL;
-TaskItem *ptask = 
 IEStatusCode status;
 input_shapes_t input_shapes;
 
@@ -557,11 +545,16 @@ static DNNReturnType get_output_ov(void *model, const 
char *input_name, int inpu
 task.out_frame = out_frame;
 task.ov_model = ov_model;
 
-request.infer_request = ov_model->infer_request;
-request.task_count = 1;
-request.tasks = 
+request = ff_safe_queue_pop_front(ov_model->request_queue);
+if (!request) {
+av_frame_free(_frame);
+av_frame_free(_frame);
+av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+return DNN_ERROR;
+}
+request->tasks[request->task_count++] = 
 
-ret = execute_model_ov();
+ret = execute_model_ov(request);
 *output_width = out_frame->width;
 *output_height = out_frame->height;
 
@@ -633,8 +626,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel 
*model, const char *input_n
 OVModel *ov_model = model->model;
 OVContext *ctx = _model->ctx;
 TaskItem task;
-RequestItem request;
-TaskItem *ptask = 
+RequestItem *request;
 
 if (!in_frame) {
 av_log(ctx, AV_LOG_ERROR, "in frame is NULL when execute model.\n");
@@ -674,11 +666,14 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel 
*model, const char *input_n
 task.out_frame = out_frame;
 task.ov_model = ov_model;
 
-request.infer_request = ov_model->infer_request;
-request.task_count = 1;
-request.tasks = 
+request = ff_safe_queue_pop_front(ov_model->request_queue);
+if (!request) {
+av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+return DNN_ERROR;
+}
+request->tasks[request->task_count++] = 
 
-return execute_model_ov();
+return execute_model_ov(request);
 }
 
 DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, const char 
*input_name, AVFrame *in_frame,
@@ -821,8 +816,6 @@ void ff_dnn_free_model_ov(DNNModel **model)
 }
 ff_queue_destroy(ov_model->task_queue);
 
-if (ov_model->infer_request)
-ie_infer_request_free(_model->infer_request);
 if (ov_model->exe_network)
 ie_exec_network_free(_model->exe_network);
 if (ov_model->network)
-- 
2.17.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v4] doc/filters: Documentation to add sess_config option for tensorflow backend

2021-04-29 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年4月29日 20:46
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH v4] doc/filters: Documentation to add
> sess_config option for tensorflow backend
> 
> From: Limin Wang 
> 
> Signed-off-by: Limin Wang 
> ---
>  doc/filters.texi | 11 +--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index e99d70a..4af5b72 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -10214,6 +10214,12 @@ Set the input name of the dnn network.
>  @item output
>  Set the output name of the dnn network.
> 
> +@item backend_configs
> +Set the configs to be passed into backend
> +
> +For tensorflow backend, you can set its configs with @option{sess_config}
> options,
> +please use tools/python/tf_sess_config.py to get the configs of TensorFlow
> backend for your system.
> +
>  @item async
>  use DNN async execution if set (default: set),
>  roll back to sync execution if the backend does not support async.
> @@ -10242,9 +10248,10 @@ Handle the Y channel with srcnn.pb (see
> @ref{sr} filter) for frame with yuv420p
>  @end example
> 
>  @item
> -Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes
> frame size, for format yuv420p (planar YUV formats supported):
> +Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes
> frame size, for format yuv420p (planar YUV formats supported),
> +please use tools/python/tf_sess_config.py to get the configs for your

will add the missing 'of TensorFlow backend' as discussed, and push soon.

> system.
>  @example
> -./ffmpeg -i 480p.jpg -vf
> format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:
> input=x:output=y -y tmp.espcn.jpg
> +./ffmpeg -i 480p.jpg -vf
> format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:
> input=x:output=y:backend_configs=sess_config=0x10022805320e09cdcc
> ec3f20012a01303801 -y tmp.espcn.jpg
>  @end example
> 
>  @end itemize
> --
> 1.8.3.1
> 
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf: simplify the code with ff_hex_to_data

2021-04-29 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年4月28日 21:17
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf:
> simplify the code with ff_hex_to_data
> 
> On Wed, Apr 28, 2021 at 12:26:54PM +, Guo, Yejun wrote:
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > lance.lmw...@gmail.com
> > > Sent: 2021年4月28日 18:47
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Limin Wang 
> > > Subject: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf:
> > > simplify the code with ff_hex_to_data
> > >
> > > From: Limin Wang 
> > >
> > > please use tools/python/tf_sess_config.py to get the sess_config after
> that.
> > > note the byte order of session config is in normal order.
> > > bump the MICRO version for the config change.
> > >
> > > Signed-off-by: Limin Wang 
> > > ---
> > >  libavfilter/dnn/dnn_backend_tf.c | 42 
> > > +++--
> > >  libavfilter/version.h|  2 +-
> > >  tools/python/tf_sess_config.py   | 45
> > > 
> > >  3 files changed, 54 insertions(+), 35 deletions(-)
> > >  create mode 100644 tools/python/tf_sess_config.py
> > >
> > > diff --git a/libavfilter/dnn/dnn_backend_tf.c
> > > b/libavfilter/dnn/dnn_backend_tf.c
> > > index fb799d2..076dd3d 100644
> > > --- a/libavfilter/dnn/dnn_backend_tf.c
> > > +++ b/libavfilter/dnn/dnn_backend_tf.c
> > > @@ -28,6 +28,7 @@
> > >  #include "dnn_backend_native_layer_conv2d.h"
> > >  #include "dnn_backend_native_layer_depth2space.h"
> > >  #include "libavformat/avio.h"
> > > +#include "libavformat/internal.h"
> > >  #include "libavutil/avassert.h"
> > >  #include "../internal.h"
> > >  #include "dnn_backend_native_layer_pad.h"
> > > @@ -206,53 +207,26 @@ static DNNReturnType load_tf_model(TFModel
> > > *tf_model, const char *model_filename
> > >
> > >  // prepare the sess config data
> > >  if (tf_model->ctx.options.sess_config != NULL) {
> > > +const char *config;
> > >  /*
> > >  tf_model->ctx.options.sess_config is hex to present the
> serialized
> > > proto
> > >  required by TF_SetConfig below, so we need to first generate
> the
> > > serialized
> > > -proto in a python script, the following is a script example to
> > > generate
> > > -serialized proto which specifies one GPU, we can change the
> script
> > > to add
> > > -more options.
> > > -
> > > -import tensorflow as tf
> > > -gpu_options = tf.GPUOptions(visible_device_list='0')
> > > -config = tf.ConfigProto(gpu_options=gpu_options)
> > > -s = config.SerializeToString()
> > > -b = ''.join("%02x" % int(ord(b)) for b in s[::-1])
> > > -print('0x%s' % b)
> > > -
> > > -the script output looks like: 0xab...cd, and then pass 0xab...cd 
> > > to
> > > sess_config.
> > > +proto in a python script, tools/python/tf_sess_config.py is a
> script
> > > example
> > > +to generate the configs of sess_config.
> > >  */
> > > -char tmp[3];
> > > -tmp[2] = '\0';
> > > -
> > >  if (strncmp(tf_model->ctx.options.sess_config, "0x", 2) != 0) {
> > >  av_log(ctx, AV_LOG_ERROR, "sess_config should start with
> > > '0x'\n");
> > >  return DNN_ERROR;
> > >  }
> > > +config = tf_model->ctx.options.sess_config + 2;
> > > +sess_config_length = ff_hex_to_data(NULL, config);
> > >
> > > -sess_config_length = strlen(tf_model->ctx.options.sess_config);
> > > -if (sess_config_length % 2 != 0) {
> > > -av_log(ctx, AV_LOG_ERROR, "the length of sess_config is
> not
> > > even (%s), "
> > > -  "please re-generate the
> > > config.\n",
> > > -
> > > tf_model->ctx.options.sess_config);
> > > -return DNN_ERROR;
> > > -}
> > > -
> > > -sess_config_length -= 2; //ignore the first '0x'
> > > -sess_config_length /= 2; //get the data length in byte
> > > -
> > > -sess_config = av_malloc(sess_config_length);
> > > +sess_config = av_mallocz(sess_config_length +
> > > AV_INPUT_BUFFER_PADDING_SIZE);
> >
> > just get a concern, why we need to add PADDING_SIZE here.
> > Will there be potential issue if not add?
> 
> I just want to make sure it's safe even if the sess_config_length is zero.
> 
ok, will push soon.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v3 2/2] doc/filters: Documentation to add sess_config option for tensorflow backend

2021-04-28 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年4月28日 18:47
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH v3 2/2] doc/filters: Documentation to add
> sess_config option for tensorflow backend
> 
> From: Limin Wang 
> 
> Signed-off-by: Limin Wang 
> ---
>  doc/filters.texi | 14 --
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index e99d70a..24f2335 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -10161,6 +10161,9 @@ The label id is considered as name if the label
> file is not provided.
>  @item backend_configs
>  Set the configs to be passed into backend
> 
> +For tensorflow backend, you can set its configs with @option{sess_config}
> options,
> +please use tools/python/tf_sess_config.py to get the configs
> +
>  @item async
>  use DNN async execution if set (default: set),
>  roll back to sync execution if the backend does not support async.
> @@ -10214,6 +10217,9 @@ Set the input name of the dnn network.
>  @item output
>  Set the output name of the dnn network.
> 
> +For tensorflow backend, you can set its configs with @option{sess_config}
> options,
> +please use tools/python/tf_sess_config.py to get the configs
> +
>  @item async
>  use DNN async execution if set (default: set),
>  roll back to sync execution if the backend does not support async.
> @@ -10242,9 +10248,10 @@ Handle the Y channel with srcnn.pb (see
> @ref{sr} filter) for frame with yuv420p
>  @end example
> 
>  @item
> -Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes
> frame size, for format yuv420p (planar YUV formats supported):
> +Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes
> frame size, for format yuv420p (planar YUV formats supported), please
> +use tools/python/tf_sess_config.py to get the configs for your system.

use tools/python/tf_sess_config.py to get the configs of TensorFlow backend for 
your system.

>  @example
> -./ffmpeg -i 480p.jpg -vf
> format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:
> input=x:output=y -y tmp.espcn.jpg
> +./ffmpeg -i 480p.jpg -vf
> format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:
> input=x:output=y:backend_configs=sess_config=0x10022805320e09cdcc
> ec3f20012a01303801 -y tmp.espcn.jpg
>  @end example
> 
>  @end itemize
> @@ -18905,6 +18912,9 @@ Note that different backends use different file
> formats. TensorFlow backend
>  can load files for both formats, while native backend can load files for only
>  its format.
> 
> +For tensorflow backend, you can set its configs with @option{sess_config}
> options,
> +please use tools/python/tf_sess_config.py to get the configs.
> +

this change can be removed since it is under vf.

>  @item scale_factor
>  Set scale factor for SRCNN model. Allowed values are @code{2}, @code{3}
> and @code{4}.
>  Default value is @code{2}. Scale factor is necessary for SRCNN model,
> because it accepts
> --
> 1.8.3.1
> 
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf: simplify the code with ff_hex_to_data

2021-04-28 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年4月28日 18:47
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH v3 1/2] avfilter/dnn/dnn_backend_tf:
> simplify the code with ff_hex_to_data
> 
> From: Limin Wang 
> 
> please use tools/python/tf_sess_config.py to get the sess_config after that.
> note the byte order of session config is in normal order.
> bump the MICRO version for the config change.
> 
> Signed-off-by: Limin Wang 
> ---
>  libavfilter/dnn/dnn_backend_tf.c | 42 +++--
>  libavfilter/version.h|  2 +-
>  tools/python/tf_sess_config.py   | 45
> 
>  3 files changed, 54 insertions(+), 35 deletions(-)
>  create mode 100644 tools/python/tf_sess_config.py
> 
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index fb799d2..076dd3d 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -28,6 +28,7 @@
>  #include "dnn_backend_native_layer_conv2d.h"
>  #include "dnn_backend_native_layer_depth2space.h"
>  #include "libavformat/avio.h"
> +#include "libavformat/internal.h"
>  #include "libavutil/avassert.h"
>  #include "../internal.h"
>  #include "dnn_backend_native_layer_pad.h"
> @@ -206,53 +207,26 @@ static DNNReturnType load_tf_model(TFModel
> *tf_model, const char *model_filename
> 
>  // prepare the sess config data
>  if (tf_model->ctx.options.sess_config != NULL) {
> +const char *config;
>  /*
>  tf_model->ctx.options.sess_config is hex to present the serialized
> proto
>  required by TF_SetConfig below, so we need to first generate the
> serialized
> -proto in a python script, the following is a script example to
> generate
> -serialized proto which specifies one GPU, we can change the script
> to add
> -more options.
> -
> -import tensorflow as tf
> -gpu_options = tf.GPUOptions(visible_device_list='0')
> -config = tf.ConfigProto(gpu_options=gpu_options)
> -s = config.SerializeToString()
> -b = ''.join("%02x" % int(ord(b)) for b in s[::-1])
> -print('0x%s' % b)
> -
> -the script output looks like: 0xab...cd, and then pass 0xab...cd to
> sess_config.
> +proto in a python script, tools/python/tf_sess_config.py is a script
> example
> +to generate the configs of sess_config.
>  */
> -char tmp[3];
> -tmp[2] = '\0';
> -
>  if (strncmp(tf_model->ctx.options.sess_config, "0x", 2) != 0) {
>  av_log(ctx, AV_LOG_ERROR, "sess_config should start with
> '0x'\n");
>  return DNN_ERROR;
>  }
> +config = tf_model->ctx.options.sess_config + 2;
> +sess_config_length = ff_hex_to_data(NULL, config);
> 
> -sess_config_length = strlen(tf_model->ctx.options.sess_config);
> -if (sess_config_length % 2 != 0) {
> -av_log(ctx, AV_LOG_ERROR, "the length of sess_config is not
> even (%s), "
> -  "please re-generate the
> config.\n",
> -
> tf_model->ctx.options.sess_config);
> -return DNN_ERROR;
> -}
> -
> -sess_config_length -= 2; //ignore the first '0x'
> -sess_config_length /= 2; //get the data length in byte
> -
> -sess_config = av_malloc(sess_config_length);
> +sess_config = av_mallocz(sess_config_length +
> AV_INPUT_BUFFER_PADDING_SIZE);

just one concern: why do we need to add PADDING_SIZE here?
Will there be a potential issue if we don't add it?

>  if (!sess_config) {
>  av_log(ctx, AV_LOG_ERROR, "failed to allocate memory\n");
>  return DNN_ERROR;
>  }
> -
> -for (int i = 0; i < sess_config_length; i++) {
> -int index = 2 + (sess_config_length - 1 - i) * 2;
> -tmp[0] = tf_model->ctx.options.sess_config[index];
> -tmp[1] = tf_model->ctx.options.sess_config[index + 1];
> -sess_config[i] = strtol(tmp, NULL, 16);
> -}
> +ff_hex_to_data(sess_config, config);
>  }
> 

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get sess_config

2021-04-27 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: 2021年4月28日 9:19
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get
> sess_config
> 
> On Wed, Apr 28, 2021 at 12:44:06AM +, Guo, Yejun wrote:
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > lance.lmw...@gmail.com
> > > Sent: 2021年4月27日 17:26
> > > To: ffmpeg-devel@ffmpeg.org
> > > Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to
> > > get sess_config
> > >
> > > On Tue, Apr 27, 2021 at 06:41:11AM +, Guo, Yejun wrote:
> > > >
> > > >
> > > > > -Original Message-
> > > > > From: ffmpeg-devel  On Behalf
> Of
> > > > > lance.lmw...@gmail.com
> > > > > Sent: 2021年4月27日 14:29
> > > > > To: ffmpeg-devel@ffmpeg.org
> > > > > Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script
> > > to
> > > > > get sess_config
> > > > >
> > > > > On Tue, Apr 27, 2021 at 04:25:55AM +, Guo, Yejun wrote:
> > > > > >
> > > > > >
> > > > > > > -Original Message-
> > > > > > > From: Guo, Yejun
> > > > > > > Sent: April 27, 2021 12:11
> > > > > > > To: FFmpeg development discussions and patches
> > > > > > > 
> > > > > > > Subject: RE: [FFmpeg-devel] [PATCH 4/6] tools/python: add help
> > > script
> > > > > to
> > > > > > > get sess_config
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -Original Message-
> > > > > > > > From: ffmpeg-devel  On
> > > Behalf
> > > > > Of
> > > > > > > > lance.lmw...@gmail.com
> > > > > > > > Sent: April 26, 2021 18:49
> > > > > > > > To: ffmpeg-devel@ffmpeg.org
> > > > > > > > Cc: Limin Wang 
> > > > > > > > Subject: [FFmpeg-devel] [PATCH 4/6] tools/python: add help
> script
> > > to
> > > > > get
> > > > > > > > sess_config
> > > > > > > >
> > > > > > > > From: Limin Wang 
> > > > > > > >
> > > > > > > > Please note the byte order of the hex data is in normal order.
> > > > > > > >
> > > > > > > > Signed-off-by: Limin Wang 
> > > > > > > > ---
> > > > > > > >  tools/python/tf_sess_config.py | 44
> > > > > > > > ++
> > > > > > > >  1 file changed, 44 insertions(+)
> > > > > > > >  create mode 100644 tools/python/tf_sess_config.py
> > > > > > > >
> > > > > > > > diff --git a/tools/python/tf_sess_config.py
> > > > > > > b/tools/python/tf_sess_config.py
> > > > > > > > new file mode 100644
> > > > > > > > index 000..e4e38bd
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tools/python/tf_sess_config.py
> > > > > > > > @@ -0,0 +1,44 @@
> > > > > > >
> > > > > > > this patch changes the order in current implementation, we'd
> better
> > > > > > > merge patch 4 and patch 5 in a single patch, to adjust the order 
> > > > > > > in
> > > one
> > > > > > > patch.
> > > > >
> > > > > I'm OK with that. I think few people have used the option yet.
> > > >
> > > > yes, but we still need to keep the patch modular. There will be
> > > misleading
> > > > if people bisect the code with patch 4, without patch 5.
> > >
> > > OK, will update and merge the two patch.
> > >
> > > >
> > > > >
> > > > > >
> > > > > > and, we may remove '0x' at the beginning of value, since it is no
> > > longer
> > > > > > a hex value in math. It is the byte order.
> > > > >
> > > > > For hex string, it's preferable to prefix with '0x'.
> > > >
> > > > for example, '0x123' is not the value of 3+2*16+1*256, I'm afraid it is
> > > confusing.
> > >
> > > No, your exmple is invalid and will report error for it expects hex 
> > > stringand
> > > 2 hex digits
> > > can represent 1 byte. That's why the current code will check whether
> > > sess_config_length
> > > is even.
> >
> > my example is to show it is not a valid hex value in math.
> >
> > >
> > >
> > > >
> > > > anyway, '0x' as prefix is also an option, we can output a message to
> > > explain it
> > > > in python script.
> >
> > please output a message in python script for an explanation of the order,
> thanks.
> >
> 
> I'm not clear for what's message for explanation, what's message you expect?

like the message in the commit log.

> 
> TF_SetConfig wants user to provide the config as a serialized protobuf string.
> If you turn on the print by json in the python script, that's 
> print(list(map(hex,
> s)))
> ['0x10', '0x2', '0x28', '0x5', '0x32', '0xe', '0x9', '0x9a', '0x99', '0x99', 
> '0x99',
> '0x99', '0x99', '0xb9', '0x3f', '0x20', '0x1', '0x2a', '0x1', '0x30', '0x38', 
> '0x1']
> 
> So we use the hex string format like below so that we can use them in config
> easily.
> 0x10022805320e099a99b93f20012a01303801
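
For reference, a script that produces a string in this format could look
roughly like the sketch below (assuming the TF1-style tf.compat.v1 config API;
the GPU option is only an example and may not match what
tools/python/tf_sess_config.py actually exposes):

import tensorflow as tf

# build a session config; visible_device_list='0' is just an example option
gpu_options = tf.compat.v1.GPUOptions(visible_device_list='0')
config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)

s = config.SerializeToString()                    # serialized protobuf bytes
hex_str = '0x' + ''.join('%02x' % b for b in s)   # normal byte order, '0x' prefix
print(hex_str)                                    # pass this value to sess_config
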
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get sess_config

2021-04-27 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: April 27, 2021 17:26
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to
> get sess_config
> 
> On Tue, Apr 27, 2021 at 06:41:11AM +, Guo, Yejun wrote:
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> > > lance.lmw...@gmail.com
> > > Sent: April 27, 2021 14:29
> > > To: ffmpeg-devel@ffmpeg.org
> > > Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script
> to
> > > get sess_config
> > >
> > > On Tue, Apr 27, 2021 at 04:25:55AM +, Guo, Yejun wrote:
> > > >
> > > >
> > > > > -Original Message-
> > > > > From: Guo, Yejun
> > > > > Sent: April 27, 2021 12:11
> > > > > To: FFmpeg development discussions and patches
> > > > > 
> > > > > Subject: RE: [FFmpeg-devel] [PATCH 4/6] tools/python: add help
> script
> > > to
> > > > > get sess_config
> > > > >
> > > > >
> > > > >
> > > > > > -Original Message-
> > > > > > From: ffmpeg-devel  On
> Behalf
> > > Of
> > > > > > lance.lmw...@gmail.com
> > > > > > Sent: April 26, 2021 18:49
> > > > > > To: ffmpeg-devel@ffmpeg.org
> > > > > > Cc: Limin Wang 
> > > > > > Subject: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script
> to
> > > get
> > > > > > sess_config
> > > > > >
> > > > > > From: Limin Wang 
> > > > > >
> > > > > > Please note the byte order of the hex data is in normal order.
> > > > > >
> > > > > > Signed-off-by: Limin Wang 
> > > > > > ---
> > > > > >  tools/python/tf_sess_config.py | 44
> > > > > > ++
> > > > > >  1 file changed, 44 insertions(+)
> > > > > >  create mode 100644 tools/python/tf_sess_config.py
> > > > > >
> > > > > > diff --git a/tools/python/tf_sess_config.py
> > > > > b/tools/python/tf_sess_config.py
> > > > > > new file mode 100644
> > > > > > index 000..e4e38bd
> > > > > > --- /dev/null
> > > > > > +++ b/tools/python/tf_sess_config.py
> > > > > > @@ -0,0 +1,44 @@
> > > > >
> > > > > this patch changes the order in current implementation, we'd better
> > > > > merge patch 4 and patch 5 in a single patch, to adjust the order in
> one
> > > > > patch.
> > >
> > > I'm OK with that. I think few people have used the option yet.
> >
> > yes, but we still need to keep the patch modular. There will be
> misleading
> > if people bisect the code with patch 4, without patch 5.
> 
> OK, will update and merge the two patch.
> 
> >
> > >
> > > >
> > > > and, we may remove '0x' at the beginning of value, since it is no
> longer
> > > > a hex value in math. It is the byte order.
> > >
> > > For hex string, it's preferable to prefix with '0x'.
> >
> > for example, '0x123' is not the value of 3+2*16+1*256, I'm afraid it is
> confusing.
> 
> No, your exmple is invalid and will report error for it expects hex stringand
> 2 hex digits
> can represent 1 byte. That's why the current code will check whether
> sess_config_length
> is even.

My example was meant to show that it is not a valid hex value in the mathematical sense.

> 
> 
> >
> > anyway, '0x' as prefix is also an option, we can output a message to
> explain it
> > in python script.

Please have the python script output a message explaining the byte order,
thanks.


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get sess_config

2021-04-27 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: April 27, 2021 14:29
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to
> get sess_config
> 
> On Tue, Apr 27, 2021 at 04:25:55AM +, Guo, Yejun wrote:
> >
> >
> > > -Original Message-
> > > From: Guo, Yejun
> > > Sent: April 27, 2021 12:11
> > > To: FFmpeg development discussions and patches
> > > 
> > > Subject: RE: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script
> to
> > > get sess_config
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: ffmpeg-devel  On Behalf
> Of
> > > > lance.lmw...@gmail.com
> > > > Sent: April 26, 2021 18:49
> > > > To: ffmpeg-devel@ffmpeg.org
> > > > Cc: Limin Wang 
> > > > Subject: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to
> get
> > > > sess_config
> > > >
> > > > From: Limin Wang 
> > > >
> > > > Please note the byte order of the hex data is in normal order.
> > > >
> > > > Signed-off-by: Limin Wang 
> > > > ---
> > > >  tools/python/tf_sess_config.py | 44
> > > > ++
> > > >  1 file changed, 44 insertions(+)
> > > >  create mode 100644 tools/python/tf_sess_config.py
> > > >
> > > > diff --git a/tools/python/tf_sess_config.py
> > > b/tools/python/tf_sess_config.py
> > > > new file mode 100644
> > > > index 000..e4e38bd
> > > > --- /dev/null
> > > > +++ b/tools/python/tf_sess_config.py
> > > > @@ -0,0 +1,44 @@
> > >
> > > this patch changes the order in current implementation, we'd better
> > > merge patch 4 and patch 5 in a single patch, to adjust the order in one
> > > patch.
> 
> I'm OK with that. I think few people have used the option yet.

Yes, but we still need to keep the patches modular. It would be misleading
if people bisect the code with patch 4 applied but without patch 5.

> 
> >
> > and, we may remove '0x' at the beginning of value, since it is no longer
> > a hex value in math. It is the byte order.
> 
> For hex string, it's preferable to prefix with '0x'.

For example, '0x123' here would not mean the value 3+2*16+1*256, so I'm afraid
the prefix is confusing.

Anyway, keeping the '0x' prefix is also an option; we can have the python script
output a message to explain it.

> 
> >
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >
> > To unsubscribe, visit link above, or email
> > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
> 
> --
> Thanks,
> Limin Wang
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get sess_config

2021-04-26 Thread Guo, Yejun


> -Original Message-
> From: Guo, Yejun
> Sent: April 27, 2021 12:11
> To: FFmpeg development discussions and patches
> 
> Subject: RE: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to
> get sess_config
> 
> 
> 
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > lance.lmw...@gmail.com
> > Sent: April 26, 2021 18:49
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Limin Wang 
> > Subject: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get
> > sess_config
> >
> > From: Limin Wang 
> >
> > Please note the byte order of the hex data is in normal order.
> >
> > Signed-off-by: Limin Wang 
> > ---
> >  tools/python/tf_sess_config.py | 44
> > ++
> >  1 file changed, 44 insertions(+)
> >  create mode 100644 tools/python/tf_sess_config.py
> >
> > diff --git a/tools/python/tf_sess_config.py
> b/tools/python/tf_sess_config.py
> > new file mode 100644
> > index 000..e4e38bd
> > --- /dev/null
> > +++ b/tools/python/tf_sess_config.py
> > @@ -0,0 +1,44 @@
> 
> this patch changes the order in current implementation, we'd better
> merge patch 4 and patch 5 in a single patch, to adjust the order in one
> patch.

Also, we may remove the '0x' at the beginning of the value, since it is no longer
a hex value in the mathematical sense; it is just a sequence of bytes in order.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get sess_config

2021-04-26 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: April 26, 2021 18:49
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH 4/6] tools/python: add help script to get
> sess_config
> 
> From: Limin Wang 
> 
> Please note the byte order of the hex data is in normal order.
> 
> Signed-off-by: Limin Wang 
> ---
>  tools/python/tf_sess_config.py | 44
> ++
>  1 file changed, 44 insertions(+)
>  create mode 100644 tools/python/tf_sess_config.py
> 
> diff --git a/tools/python/tf_sess_config.py b/tools/python/tf_sess_config.py
> new file mode 100644
> index 000..e4e38bd
> --- /dev/null
> +++ b/tools/python/tf_sess_config.py
> @@ -0,0 +1,44 @@

This patch changes the byte order used by the current implementation; we'd better
merge patch 4 and patch 5 into a single patch, so the order is adjusted in one patch.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 5/6] avfilter/dnn/dnn_backend_tf: simplify the code with ff_hex_to_data

2021-04-26 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> lance.lmw...@gmail.com
> Sent: April 26, 2021 18:49
> To: ffmpeg-devel@ffmpeg.org
> Cc: Limin Wang 
> Subject: [FFmpeg-devel] [PATCH 5/6] avfilter/dnn/dnn_backend_tf: simplify
> the code with ff_hex_to_data
> 
> From: Limin Wang 
> 
> please use tools/python/tf_sess_config.py to get the sess_config after that.
> note the byte order of session config is the normal order.
> 
> Signed-off-by: Limin Wang 
> ---
>  libavfilter/dnn/dnn_backend_tf.c | 34 ++
>  1 file changed, 6 insertions(+), 28 deletions(-)
> 
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index fb799d2..0084157 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -28,6 +28,7 @@
>  #include "dnn_backend_native_layer_conv2d.h"
>  #include "dnn_backend_native_layer_depth2space.h"
>  #include "libavformat/avio.h"
> +#include "libavformat/internal.h"
>  #include "libavutil/avassert.h"
>  #include "../internal.h"
>  #include "dnn_backend_native_layer_pad.h"
> @@ -202,35 +203,21 @@ static DNNReturnType load_tf_model(TFModel
> *tf_model, const char *model_filename
>  TF_SessionOptions *sess_opts;
>  const TF_Operation *init_op;
>  uint8_t *sess_config = NULL;
> -int sess_config_length = 0;
> +int sess_config_length = ff_hex_to_data(NULL,
> tf_model->ctx.options.sess_config + 2);
> 
>  // prepare the sess config data
>  if (tf_model->ctx.options.sess_config != NULL) {
>  /*
>  tf_model->ctx.options.sess_config is hex to present the
> serialized proto
>  required by TF_SetConfig below, so we need to first generate
> the serialized
> -proto in a python script, the following is a script example to
> generate
> -serialized proto which specifies one GPU, we can change the
> script to add
> -more options.
> -
> -import tensorflow as tf
> -gpu_options = tf.GPUOptions(visible_device_list='0')
> -config = tf.ConfigProto(gpu_options=gpu_options)
> -s = config.SerializeToString()
> -b = ''.join("%02x" % int(ord(b)) for b in s[::-1])
> -print('0x%s' % b)
> -
> -the script output looks like: 0xab...cd, and then pass 0xab...cd to
> sess_config.
> +proto in a python script, tools/python/tf_sess_config.py is a
> script example
> +to generate the configs of sess_config.
>  */
> -char tmp[3];
> -tmp[2] = '\0';
> -
>  if (strncmp(tf_model->ctx.options.sess_config, "0x", 2) != 0) {
>  av_log(ctx, AV_LOG_ERROR, "sess_config should start with
> '0x'\n");
>  return DNN_ERROR;
>  }

There are two '+2's to skip the "0x" in the code; we'd better unify them here,
after the "0x" check, like:

// skip "0x"
const char *config = tf_model->ctx.options.sess_config + 2;
sess_config_length = ff_hex_to_data(NULL, config);
...

> 
> -sess_config_length = strlen(tf_model->ctx.options.sess_config);
>  if (sess_config_length % 2 != 0) {
>  av_log(ctx, AV_LOG_ERROR, "the length of sess_config is
> not even (%s), "
>"please re-generate the
> config.\n",
> @@ -238,21 +225,12 @@ static DNNReturnType load_tf_model(TFModel
> *tf_model, const char *model_filename
>  return DNN_ERROR;
>  }
> 
> -sess_config_length -= 2; //ignore the first '0x'
> -sess_config_length /= 2; //get the data length in byte
> -
> -sess_config = av_malloc(sess_config_length);
> +sess_config = av_mallocz(sess_config_length +
> AV_INPUT_BUFFER_PADDING_SIZE);
>  if (!sess_config) {
>  av_log(ctx, AV_LOG_ERROR, "failed to allocate memory\n");
>  return DNN_ERROR;
>  }
> -
> -for (int i = 0; i < sess_config_length; i++) {
> -int index = 2 + (sess_config_length - 1 - i) * 2;
> -tmp[0] = tf_model->ctx.options.sess_config[index];
> -tmp[1] = tf_model->ctx.options.sess_config[index + 1];
> -sess_config[i] = strtol(tmp, NULL, 16);
> -}
> +ff_hex_to_data(sess_config, tf_model->ctx.options.sess_config +
> 2);
>  }
> 
>  graph_def = read_graph(model_filename);
> --
> 1.8.3.1
> 
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

