Re: [FFmpeg-devel] [RFC]separation of multiple outputs' encoding

2020-05-18 Thread Tao Zhang
If there are no more comments, I will try to code something: a pseudo
encoder that runs the actual encoding in a separate thread. Thanks

Tao Zhang wrote on Fri, May 15, 2020 at 10:14 AM:
>
> Marton Balint wrote on Fri, May 15, 2020 at 2:33 AM:
> >
> >
> >
> > On Thu, 14 May 2020, Tao Zhang wrote:
> >
> > > Hi,
> > > FFmpeg supports multiple outputs created out of the same input in the
> > > same process like
> > > ffmpeg -i input -filter_complex '[0:v]yadif,split=3[out1][out2][out3]' \
> > >-map '[out1]' -s 1280x720 -acodec … -vcodec … output1 \
> > >-map '[out2]' -s 640x480  -acodec … -vcodec … output2 \
> > >-map '[out3]' -s 320x240  -acodec … -vcodec … output3
> > > In ffmpeg.c, multiple outputs are processed sequentially like
> > > for (i = 0; i < nb_output_streams; i++)
> > >encoding one frame;
> > >
> > > As the wiki page below notes, the slowest encoder will slow down the whole
> > > process. Some encoders (like libx264) perform their encoding "threaded
> > > and in the background", but x264_encoder_encode still costs some time,
> > > and it is noticeable when multiple x264_encoder_encode calls run in the
> > > same thread.
> > > https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
> > >
> > > For API users, they can run each encoder in a separate thread in
> > > their own code. But is there any way to help command-line users?
> >
> > I am not sure I understand what you want. Processing will still be limited
> > to the slowest encoder, because input processing will still be driven by
> > the slowest encoder, even if the encoders run in separate threads.
> >
> > Buffering encoder input frames is not an option, because input frames are
> > uncompressed, therefore huge. So if you want the faster x264 encoder to
> > finish faster, you have to drive it from a different input, ultimately you
> > should run 3 separate encode processes and accept that decoding and yadif
> > processing happens in all 3 cases separately causing some overhead.
> Currently FFmpeg works as follows:
> main thread:
> encoding frame 1 for output 1; encoding frame 1 for output 2; encoding
> frame 1 for output 3; encoding frame 2 for output 1; encoding frame 2
> for output 2; encoding frame 2 for output 3;...
>
> What I want to do is
> thread 1:
> encoding frame 1 for output 1; encoding frame 2 for output 1;...
> thread 2:
> encoding frame 1 for output 2; encoding frame 2 for output 2;...
> thread 3:
> encoding frame 1 for output 3; encoding frame 2 for output 3;...
>
> Since input frames are reference-counted, the buffering memory should
> not be huge. Also, we can set a limit on the maximum number of
> buffered frames.
>
> Forgive me for not using multiple separate processes: most inputs are
> network streams where bandwidth is expensive, and a few inputs are
> capture cards working as exclusive devices.
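The per-output threading proposed above needs a bounded queue of (ref-counted) frames between the main thread and each encoder worker. A minimal standalone sketch with plain pthreads follows; the names (FrameQueue, fq_push, MAX_BUFFERED_FRAMES) are illustrative, not FFmpeg API:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_BUFFERED_FRAMES 8  /* the proposed "max buffering frame limit" */

/* Bounded FIFO of frame pointers; in ffmpeg.c these would be
 * ref-counted AVFrame*, so the buffered memory stays small. */
typedef struct FrameQueue {
    void *frames[MAX_BUFFERED_FRAMES];
    int head, count;
    bool eof;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} FrameQueue;

static void fq_init(FrameQueue *q)
{
    q->head = q->count = 0;
    q->eof = false;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Main thread: blocks while this output's queue is full. */
static void fq_push(FrameQueue *q, void *frame)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == MAX_BUFFERED_FRAMES)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->frames[(q->head + q->count++) % MAX_BUFFERED_FRAMES] = frame;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Encoder worker thread: returns NULL once EOF is signaled and drained. */
static void *fq_pop(FrameQueue *q)
{
    void *frame = NULL;
    pthread_mutex_lock(&q->lock);
    while (q->count == 0 && !q->eof)
        pthread_cond_wait(&q->not_empty, &q->lock);
    if (q->count > 0) {
        frame = q->frames[q->head];
        q->head = (q->head + 1) % MAX_BUFFERED_FRAMES;
        q->count--;
        pthread_cond_signal(&q->not_full);
    }
    pthread_mutex_unlock(&q->lock);
    return frame;
}

static void fq_set_eof(FrameQueue *q)
{
    pthread_mutex_lock(&q->lock);
    q->eof = true;
    pthread_cond_broadcast(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}
```

Note that when every queue is full, fq_push() blocks the main thread, so input is still ultimately throttled by the slowest encoder; the gain is that fast encoders are no longer serialized behind slow ones within each frame interval.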
> >
> > Regards,
> > Marton
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [RFC]separation of multiple outputs' encoding

2020-05-14 Thread Tao Zhang
Marton Balint wrote on Fri, May 15, 2020 at 2:33 AM:
>
>
>
> On Thu, 14 May 2020, Tao Zhang wrote:
>
> > Hi,
> > FFmpeg supports multiple outputs created out of the same input in the
> > same process like
> > ffmpeg -i input -filter_complex '[0:v]yadif,split=3[out1][out2][out3]' \
> >-map '[out1]' -s 1280x720 -acodec … -vcodec … output1 \
> >-map '[out2]' -s 640x480  -acodec … -vcodec … output2 \
> >-map '[out3]' -s 320x240  -acodec … -vcodec … output3
> > In ffmpeg.c, multiple outputs are processed sequentially like
> > for (i = 0; i < nb_output_streams; i++)
> >encoding one frame;
> >
> > As the wiki page below notes, the slowest encoder will slow down the whole
> > process. Some encoders (like libx264) perform their encoding "threaded
> > and in the background", but x264_encoder_encode still costs some time,
> > and it is noticeable when multiple x264_encoder_encode calls run in the
> > same thread.
> > https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
> >
> > For API users, they can run each encoder in a separate thread in
> > their own code. But is there any way to help command-line users?
>
> I am not sure I understand what you want. Processing will still be limited
> to the slowest encoder, because input processing will still be driven by
> the slowest encoder, even if the encoders run in separate threads.
>
> Buffering encoder input frames is not an option, because input frames are
> uncompressed, therefore huge. So if you want the faster x264 encoder to
> finish faster, you have to drive it from a different input, ultimately you
> should run 3 separate encode processes and accept that decoding and yadif
> processing happens in all 3 cases separately causing some overhead.
Currently FFmpeg works as follows:
main thread:
encoding frame 1 for output 1; encoding frame 1 for output 2; encoding
frame 1 for output 3; encoding frame 2 for output 1; encoding frame 2
for output 2; encoding frame 2 for output 3;...

What I want to do is
thread 1:
encoding frame 1 for output 1; encoding frame 2 for output 1;...
thread 2:
encoding frame 1 for output 2; encoding frame 2 for output 2;...
thread 3:
encoding frame 1 for output 3; encoding frame 2 for output 3;...

Since input frames are reference-counted, the buffering memory should
not be huge. Also, we can set a limit on the maximum number of
buffered frames.

Forgive me for not using multiple separate processes: most inputs are
network streams where bandwidth is expensive, and a few inputs are
capture cards working as exclusive devices.
>
> Regards,
> Marton

[FFmpeg-devel] [RFC]separation of multiple outputs' encoding

2020-05-14 Thread Tao Zhang
Hi,
FFmpeg supports multiple outputs created out of the same input in the
same process like
ffmpeg -i input -filter_complex '[0:v]yadif,split=3[out1][out2][out3]' \
-map '[out1]' -s 1280x720 -acodec … -vcodec … output1 \
-map '[out2]' -s 640x480  -acodec … -vcodec … output2 \
-map '[out3]' -s 320x240  -acodec … -vcodec … output3
In ffmpeg.c, multiple outputs are processed sequentially like
for (i = 0; i < nb_output_streams; i++)
encoding one frame;

As the wiki page below notes, the slowest encoder will slow down the whole
process. Some encoders (like libx264) perform their encoding "threaded
and in the background", but x264_encoder_encode still costs some time,
and it is noticeable when multiple x264_encoder_encode calls run in the same thread.
https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs

For API users, they can run each encoder in a separate thread in
their own code. But is there any way to help command-line users?

I would like to request your comments on separating the encoding of
multiple outputs in ffmpeg. Refactoring ffmpeg.c is not a tiny task. Is
it acceptable to create a pseudo encoder that runs the actual encoding
in a separate thread?

Best Regards

Re: [FFmpeg-devel] [RFC][PATCH] avformat/fifo: add timeshift option to delay output

2020-05-08 Thread Tao Zhang
I have tested with the commands below; it works fine. Thanks Marton.
ffmpeg -i input_rtmp_addr -map 0:v -map 0:a -c copy  -f fifo
-timeshift 20 -queue_size 600 -fifo_format flv output_rtmp_addr
ffmpeg -stream_loop -1 -re -i input_file -map 0:v -map 0:a -c copy  -f
fifo -timeshift 20 -queue_size 600 -fifo_format flv
output_rtmp_addr

Marton Balint wrote on Fri, May 8, 2020 at 4:28 AM:
>
> Signed-off-by: Marton Balint 
> ---
>  libavformat/fifo.c | 59 +-
>  1 file changed, 58 insertions(+), 1 deletion(-)
>
> diff --git a/libavformat/fifo.c b/libavformat/fifo.c
> index d11dc6626c..17748e94ce 100644
> --- a/libavformat/fifo.c
> +++ b/libavformat/fifo.c
> @@ -19,6 +19,8 @@
>   * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>   */
>
> +#include <stdatomic.h>
> +
>  #include "libavutil/avassert.h"
>  #include "libavutil/opt.h"
>  #include "libavutil/time.h"
> @@ -77,6 +79,9 @@ typedef struct FifoContext {
>  /* Value > 0 signals queue overflow */
>  volatile uint8_t overflow_flag;
>
> +atomic_int_least64_t queue_duration;
> +int64_t last_sent_dts;
> +int64_t timeshift;
>  } FifoContext;
>
>  typedef struct FifoThreadContext {
> @@ -98,9 +103,12 @@ typedef struct FifoThreadContext {
>   * so finalization by calling write_trailer and ff_io_close must be done
>   * before exiting / reinitialization of underlying muxer */
>  uint8_t header_written;
> +
> +int64_t last_received_dts;
>  } FifoThreadContext;
>
>  typedef enum FifoMessageType {
> +FIFO_NOOP,
>  FIFO_WRITE_HEADER,
>  FIFO_WRITE_PACKET,
>  FIFO_FLUSH_OUTPUT
> @@ -159,6 +167,15 @@ static int fifo_thread_flush_output(FifoThreadContext *ctx)
>  return av_write_frame(avf2, NULL);
>  }
>
> +static int64_t next_duration(AVFormatContext *avf, AVPacket *pkt, int64_t *last_dts)
> +{
> +AVStream *st = avf->streams[pkt->stream_index];
> +int64_t dts = av_rescale_q(pkt->dts, st->time_base, AV_TIME_BASE_Q);
> +int64_t duration = (*last_dts == AV_NOPTS_VALUE ? 0 : dts - *last_dts);
> +*last_dts = dts;
> +return duration;
> +}
> +
>  static int fifo_thread_write_packet(FifoThreadContext *ctx, AVPacket *pkt)
>  {
>  AVFormatContext *avf = ctx->avf;
> @@ -167,6 +184,9 @@ static int fifo_thread_write_packet(FifoThreadContext *ctx, AVPacket *pkt)
>  AVRational src_tb, dst_tb;
>  int ret, s_idx;
>
> +if (fifo->timeshift && pkt->dts != AV_NOPTS_VALUE)
> +atomic_fetch_sub_explicit(&fifo->queue_duration, next_duration(avf, pkt, &ctx->last_received_dts), memory_order_relaxed);
> +
>  if (ctx->drop_until_keyframe) {
>  if (pkt->flags & AV_PKT_FLAG_KEY) {
>  ctx->drop_until_keyframe = 0;
> @@ -209,6 +229,9 @@ static int fifo_thread_dispatch_message(FifoThreadContext *ctx, FifoMessage *msg)
>  {
>  int ret = AVERROR(EINVAL);
>
> +if (msg->type == FIFO_NOOP)
> +return 0;
> +
>  if (!ctx->header_written) {
>  ret = fifo_thread_write_header(ctx);
>  if (ret < 0)
> @@ -390,12 +413,13 @@ static void *fifo_consumer_thread(void *data)
>  AVFormatContext *avf = data;
>  FifoContext *fifo = avf->priv_data;
>  AVThreadMessageQueue *queue = fifo->queue;
> -FifoMessage msg = {FIFO_WRITE_HEADER, {0}};
> +FifoMessage msg = {fifo->timeshift ? FIFO_NOOP : FIFO_WRITE_HEADER, {0}};
>  int ret;
>
>  FifoThreadContext fifo_thread_ctx;
>  memset(&fifo_thread_ctx, 0, sizeof(FifoThreadContext));
>  fifo_thread_ctx.avf = avf;
> +fifo_thread_ctx.last_received_dts = AV_NOPTS_VALUE;
>
>  while (1) {
>  uint8_t just_flushed = 0;
> @@ -429,6 +453,10 @@ static void *fifo_consumer_thread(void *data)
>  if (just_flushed)
>  av_log(avf, AV_LOG_INFO, "FIFO queue flushed\n");
>
> +if (fifo->timeshift)
> +while (atomic_load_explicit(&fifo->queue_duration, memory_order_relaxed) < fifo->timeshift)
> +av_usleep(1);
> +
>  ret = av_thread_message_queue_recv(queue, &msg, 0);
>  if (ret < 0) {
>  av_thread_message_queue_set_err_send(queue, ret);
> @@ -488,6 +516,8 @@ static int fifo_init(AVFormatContext *avf)
> " only when drop_pkts_on_overflow is also turned on\n");
>  return AVERROR(EINVAL);
>  }
> +atomic_init(&fifo->queue_duration, 0);
> +fifo->last_sent_dts = AV_NOPTS_VALUE;
>
>  oformat = av_guess_format(fifo->format, avf->url, NULL);
>  if (!oformat) {
> @@ -563,6 +593,9 @@ static int fifo_write_packet(AVFormatContext *avf, AVPacket *pkt)
>  goto fail;
>  }
>
> +if (fifo->timeshift && pkt->dts != AV_NOPTS_VALUE)
> +atomic_fetch_add_explicit(&fifo->queue_duration, next_duration(avf, pkt, &fifo->last_sent_dts), memory_order_relaxed);
> +
>  return ret;
>  fail:
>  if (pkt)
> @@ -576,6 +609,27 @@ static int fifo_write_trailer(AVFormatContext *avf)
>  int ret;
>
>  av_thread_message_queue_set_err_recv(fifo->queue, 
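The accounting the patch uses can be condensed into a standalone model: the sender side adds each packet's DTS delta to an atomic queue_duration, the consumer thread subtracts it again and only drains the queue while at least `timeshift` microseconds are buffered. This sketch mirrors the patch's names but replaces FFmpeg types with plain C11 atomics, so it is a simplified model, not the patch itself:

```c
#include <stdatomic.h>
#include <stdint.h>

#define NOPTS INT64_MIN  /* stands in for AV_NOPTS_VALUE */

static atomic_int_least64_t queue_duration;
static int64_t sender_last = NOPTS, receiver_last = NOPTS;

/* Equivalent of next_duration(): DTS delta (already in microseconds
 * here) since the previous packet seen on this side; 0 for the first. */
static int64_t next_duration(int64_t dts_us, int64_t *last_dts)
{
    int64_t d = (*last_dts == NOPTS) ? 0 : dts_us - *last_dts;
    *last_dts = dts_us;
    return d;
}

/* fifo_write_packet() side: packet enters the queue. */
static void on_send(int64_t dts_us)
{
    atomic_fetch_add_explicit(&queue_duration,
                              next_duration(dts_us, &sender_last),
                              memory_order_relaxed);
}

/* Consumer-thread side: packet leaves the queue. */
static void on_receive(int64_t dts_us)
{
    atomic_fetch_sub_explicit(&queue_duration,
                              next_duration(dts_us, &receiver_last),
                              memory_order_relaxed);
}

/* The consumer's gate: it sleeps while this returns 0, which is the
 * inverse of the patch's "while (queue_duration < timeshift)" loop. */
static int may_dequeue(int64_t timeshift_us)
{
    return atomic_load_explicit(&queue_duration,
                                memory_order_relaxed) >= timeshift_us;
}
```

Because only deltas are tracked, the value converges to "duration currently buffered" regardless of the absolute DTS origin.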

Re: [FFmpeg-devel] [PATCH v2 1/3] avformat/fifo: add options to slow down writing packets to match real time approximately

2020-05-07 Thread Tao Zhang
Marton Balint wrote on Thu, May 7, 2020 at 6:23 PM:
>
>
>
> On Thu, 7 May 2020, leozhang wrote:
>
> > Suggested-by: Nicolas George 
> > Reviewed-by:  Nicolas George 
> > Reviewed-by:  Marton Balint 
> > Reviewed-by:  Andreas Rheinhardt 
>
> You seem to misunderstand the use of this tag. You should only add these
> if you received an explicit LGTM for your patches. This has not happened
> here, you only got suggestions, and those suggestions were concerning your
> earlier patch versions.
Yes, I misunderstood; please ignore the Reviewed-by tags above.
>
> Also, what happened to the suggestion of using a buffer based approach and
> using realtime only for flushing? I will code something, and see how it
> goes, and will post a result as an RFC patch.
I didn't try a buffer-based approach in fifo.c. If you're willing to
post such an RFC patch, that's fine with me. Thanks
>
> Regards,
> Marton
>
>
> > Signed-off-by: leozhang 
> > ---
> > doc/muxers.texi    | 21 +
> > libavformat/fifo.c | 46 ++
> > 2 files changed, 67 insertions(+)
> >
> > diff --git a/doc/muxers.texi b/doc/muxers.texi
> > index 536433b..14528f1 100644
> > --- a/doc/muxers.texi
> > +++ b/doc/muxers.texi
> > @@ -2274,6 +2274,17 @@ certain (usually permanent) errors the recovery is not attempted even when
> > Specify whether to wait for the keyframe after recovering from
> > queue overflow or failure. This option is set to 0 (false) by default.
> >
> > +@item realtime @var{bool}
> > +If set to 1 (true), slow down writing packets to match real time approximately.
> > +This is similar to @ref{the realtime or arealtime filters,,the "realtime_002c-arealtime" section in the ffmpeg-filters manual,ffmpeg-filters}.
> > +Please note that in some cases without filtering, such as stream copy, you can also use it.
> > +
> > +@item realtime_speed
> > +It is the same as the speed option to realtime or arealtime filters.
> > +
> > +@item realtime_limit @var{duration}
> > +It is the same as the limit option to realtime or arealtime filters.
> > +
> > @end table
> >
> > @subsection Examples
> > @@ -2291,6 +2302,16 @@ ffmpeg -re -i ... -c:v libx264 -c:a aac -f fifo -fifo_format flv -map 0:v -map 0
> >
> > @end itemize
> >
> > +@itemize
> > +
> > +@item
> > +Stream something to rtmp server, instead of using -re option.
> > +@example
> > +ffmpeg -i your_input_file -c copy -map 0:v -map 0:a -f fifo -fifo_format flv -realtime 1 rtmp://example.com/live/stream_name
> > +@end example
> > +
> > +@end itemize
> > +
> > @anchor{tee}
> > @section tee
> >
> > diff --git a/libavformat/fifo.c b/libavformat/fifo.c
> > index d11dc66..7acc420 100644
> > --- a/libavformat/fifo.c
> > +++ b/libavformat/fifo.c
> > @@ -26,6 +26,7 @@
> > #include "libavutil/threadmessage.h"
> > #include "avformat.h"
> > #include "internal.h"
> > +#include 
> >
> > #define FIFO_DEFAULT_QUEUE_SIZE  60
> > #define FIFO_DEFAULT_MAX_RECOVERY_ATTEMPTS   0
> > @@ -77,6 +78,17 @@ typedef struct FifoContext {
> > /* Value > 0 signals queue overflow */
> > volatile uint8_t overflow_flag;
> >
> > +/* Slow down writing packets to match real time approximately */
> > +int realtime;
> > +
> > +/* Speed factor for the processing when realtime */
> > +double realtime_speed;
> > +
> > +/* Time limit for the pauses when realtime */
> > +int64_t realtime_limit;
> > +
> > +int64_t delta;
> > +unsigned inited;
> > } FifoContext;
> >
> > typedef struct FifoThreadContext {
> > @@ -183,6 +195,31 @@ static int fifo_thread_write_packet(FifoThreadContext *ctx, AVPacket *pkt)
> > dst_tb = avf2->streams[s_idx]->time_base;
> > av_packet_rescale_ts(pkt, src_tb, dst_tb);
> >
> > +if (fifo->realtime) {
> > +int64_t pts = av_rescale_q(pkt->dts, dst_tb, AV_TIME_BASE_Q) / fifo->realtime_speed;
> > +int64_t now = av_gettime_relative();
> > +int64_t sleep = pts - now + fifo->delta;
> > +
> > +if (!fifo->inited) {
> > +sleep = 0;
> > +fifo->delta = now - pts;
> > +fifo->inited = 1;
> > +}
> > +
> > +if (FFABS(sleep) > fifo->realtime_limit / fifo->realtime_speed) {
> > +av_log(avf, AV_LOG_WARNING, "time discontinuity detected: %"PRIi64" us, resetting\n", sleep);
> > +sleep = 0;
> > +fifo->delta = now - pts;
> > +}
> > +
> > +if (sleep > 0) {
> > +av_log(avf, AV_LOG_DEBUG, "sleeping %"PRIi64" us\n", sleep);
> > +for (; sleep > 6; sleep -= 6)
> > +av_usleep(6);
> > +av_usleep(sleep);
> > +}
> > +}
> > +
> > ret = av_write_frame(avf2, pkt);
> > if (ret >= 0)
> > av_packet_unref(pkt);
> > @@ -630,6 +667,15 @@ static const AVOption options[] = {
> > {"recover_any_error", "Attempt recovery regardless of type of the error", OFFSET(recover_any_error),
> >  

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-05-04 Thread Tao Zhang
Marton Balint wrote on Tue, May 5, 2020 at 3:48 AM:
>
>
>
> On Sat, 2 May 2020, Tao Zhang wrote:
>
> > Marton Balint wrote on Sat, May 2, 2020 at 7:05 PM:
>
> [...]
>
> >> I see. But you could add an option to the fifo muxer to only write header
> >> when the first packet arrives. This way you will be able to use a
> >> bitstream filter to buffer packets and the fifo muxer will only write
> >> header when the first packet passes through the bitstream filter. Does
> >> this seem OK?
It seems OK. If nobody objects, I'm glad to add a
write_header_on_first_packet option to the fifo muxer and create a new
bitstream filter that buffers a fixed duration of packets.
>
> Great. I suggest a shorter name for the option of the fifo muxer,
> delay_write_header I think is enough, you can explain the details in the
> docs.
Great suggestion. I'll follow it.
>
> Also it should be pretty straightforward to add this feature to fifo.c,
> from a quick look at the code one approach is to add a NOOP message type,
> and if the option is set, then use that message type for the first
> message, if it is not set, use the WRITE_HEADER message type, as it is
> used now.
>
> Also we should find a good name for the bitstream filter, maybe "buffer",
> or "fifo", or if you want to better refer to the use case, then
> "timeshift" can also work.
Maybe "caching" or "cache"?
>
> Another thing you should think about is what should happen at the end of
> the stream, when flushing the bitstream filter:
> - drop pending packets
> - flush pending packets as fast as possible
> - flush pending packets in real time
> Maybe it should be selectable between the 3 options? I can imagine use
> cases for all three possibilities.
Although I cannot yet imagine use cases for the first two, I'm glad to
implement all three.
>
> Thanks,
> Marton
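The three end-of-stream flush behaviours listed by Marton could surface as an enum-valued option on the proposed buffering bitstream filter. A sketch with invented names, purely for illustration:

```c
/* End-of-stream handling for packets still held in the delay buffer. */
typedef enum FlushMode {
    FLUSH_DROP,     /* drop pending packets */
    FLUSH_FAST,     /* flush pending packets as fast as possible */
    FLUSH_REALTIME, /* flush pending packets paced in real time */
} FlushMode;

/* How many of `pending` buffered packets get emitted at EOF. */
static int packets_to_emit(FlushMode mode, int pending)
{
    return mode == FLUSH_DROP ? 0 : pending;
}

/* Whether emission at EOF should still be paced in real time. */
static int flush_is_paced(FlushMode mode)
{
    return mode == FLUSH_REALTIME;
}
```

FLUSH_FAST and FLUSH_REALTIME emit the same packets; they differ only in whether the pacing logic stays engaged while draining.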

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-05-02 Thread Tao Zhang
Marton Balint wrote on Sat, May 2, 2020 at 7:05 PM:
>
>
>
> On Sat, 2 May 2020, Tao Zhang wrote:
>
> > Marton Balint wrote on Fri, May 1, 2020 at 9:35 PM:
> >>
> >>
> >>
> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >>
> >> > Andreas Rheinhardt wrote on Thu, Apr 30, 2020 at 4:23 PM:
> >> >>
> >> >> Tao Zhang:
> >> >> > Marton Balint wrote on Thu, Apr 30, 2020 at 3:26 PM:
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >> >> >>
> >> >> >>> Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
> >> >> >>>>
> >> >> >>>>
> >> >> >>>>
> >> >> >>>> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >> >> >>>>
> >> >> >>>>> Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >> >> >>>>>>
> >> >> >>>>>>
> >> >> >>>>>>
> >> >> >>>>>> On Wed, 29 Apr 2020, leozhang wrote:
> >> >> >>>>>>
> >> >> >>>>>>> In some applications, it is required to add delay to live 
> >> >> >>>>>>> streaming.
> >> >> >>>>>>
> >> >> >>>>>> In what applications? And if you do this, why not run
> >> >> >>>>>>
> >> >> >>>>>> sleep 20; ffmpeg 
> >> >> >>>>> In live streaming applications, someone may not want to broadcast
> >> >> >>>>> what's coming next immediately.
> >> >> >>>>> "sleep 20; ffmpeg" is not OK, because the stream still starts
> >> >> >>>>> broadcasting immediately, and 20 seconds of signal are lost.
> >> >> >>>>
> >> >> >>>> So you want to buffer 20 seconds of input, and then start the 
> >> >> >>>> output?
> >> >> >>> yes
> >> >> >>
> >> >> >> Then your timing based approach is not the best way to do that. What 
> >> >> >> you
> >> >> >> want is a buffer fullness based approach. E.g. somewhere in the 
> >> >> >> chain of
> >> >> >> ffmpeg components you want to have a fixed buffer size of 20 seconds 
> >> >> >> of
> >> >> >> data, which is always kept filled.
> >> >> > I don't think a bitstream filter can have a buffer that is always
> >> >> > filled with 20 seconds of data, because bitstream filters don't
> >> >> > handle timestamps or time bases.
> >> >> > Feel free to point out if I have a wrong understanding of bitstream
> >> >> > filters.
> >> >>
> >> >> Indeed you have. Your understanding seems to be based on the old
> >> >> bitstream filter API, the one before
> >> >> 33d18982fa03feb061c8f744a4f0a9175c1f63ab (from November 2013).
> >> > Learned it.
> >> > One question I ran into is that the actual muxer's write_header function
> >> > (not the fifo proxy muxer's) should be called after the delay; it seems
> >> > I cannot achieve that with a bitstream filter in a clean way.
> >>
> >> Yes, ffmpeg.c does not delay writing the header until the first
> >> packet. Ideally ffmpeg.c code should be unlcuttered to be able to delay
> >> writing the header at least until the first packet arrives, but it
> >> seems to me that would require quite a bit of nontrivial ffmpeg.c
> >> refactoring.
> >>
> >> Is it a big issue if writing the header is not delayed? Also the fifo code
> >> has the ability to retry if the RTMP stream times out or whatever, can't
> >> that be used to work around this?
> > Establishing the connection too early, but not pushing the data will
> > cause the end user's player buffer to freeze.
>
> I see. But you could add an option to the fifo muxer to only write header
> when the first packet arrives. This way you will be able to use a
> bitstream filter to buffer packets and the fifo muxer will only write
> header when the first packet passes through the bitstream filter. Does
> this seem OK?
It seems OK. If nobody objects, I'm glad to add a
write_header_on_first_packet option to the fifo muxer and create a new
bitstream filter that buffers a fixed duration of packets.
>
> Thanks,
> Marton

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-05-02 Thread Tao Zhang
Marton Balint wrote on Fri, May 1, 2020 at 9:35 PM:
>
>
>
> On Thu, 30 Apr 2020, Tao Zhang wrote:
>
> > Andreas Rheinhardt wrote on Thu, Apr 30, 2020 at 4:23 PM:
> >>
> >> Tao Zhang:
> >> > Marton Balint wrote on Thu, Apr 30, 2020 at 3:26 PM:
> >> >>
> >> >>
> >> >>
> >> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >> >>
> >> >>> Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >> >>>>
> >> >>>>> Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >> >>>>>>
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Wed, 29 Apr 2020, leozhang wrote:
> >> >>>>>>
> >> >>>>>>> In some applications, it is required to add delay to live 
> >> >>>>>>> streaming.
> >> >>>>>>
> >> >>>>>> In what applications? And if you do this, why not run
> >> >>>>>>
> >> >>>>>> sleep 20; ffmpeg 
> >> >>>>> In live streaming applications, someone may not want to broadcast
> >> >>>>> what's coming next immediately.
> >> >>>>> "sleep 20; ffmpeg" is not OK, because the stream still starts
> >> >>>>> broadcasting immediately, and 20 seconds of signal are lost.
> >> >>>>
> >> >>>> So you want to buffer 20 seconds of input, and then start the output?
> >> >>> yes
> >> >>
> >> >> Then your timing based approach is not the best way to do that. What you
> >> >> want is a buffer fullness based approach. E.g. somewhere in the chain of
> >> >> ffmpeg components you want to have a fixed buffer size of 20 seconds of
> >> >> data, which is always kept filled.
> >> > I don't think a bitstream filter can have a buffer that is always
> >> > filled with 20 seconds of data, because bitstream filters don't handle
> >> > timestamps or time bases.
> >> > Feel free to point out if I have a wrong understanding of bitstream
> >> > filters.
> >>
> >> Indeed you have. Your understanding seems to be based on the old
> >> bitstream filter API, the one before
> >> 33d18982fa03feb061c8f744a4f0a9175c1f63ab (from November 2013).
> > Learned it.
> > One question I ran into is that the actual muxer's write_header function
> > (not the fifo proxy muxer's) should be called after the delay; it seems
> > I cannot achieve that with a bitstream filter in a clean way.
>
> Yes, ffmpeg.c does not delay writing the header until the first
> packet. Ideally ffmpeg.c code should be uncluttered to be able to delay
> writing the header at least until the first packet arrives, but it
> seems to me that would require quite a bit of nontrivial ffmpeg.c
> refactoring.
>
> Is it a big issue if writing the header is not delayed? Also the fifo code
> has the ability to retry if the RTMP stream times out or whatever, can't
> that be used to work around this?
Establishing the connection too early, but not pushing the data will
cause the end user's player buffer to freeze.
>
> Regards,
> Marton

Re: [FFmpeg-devel] [PATCH 2/3] avformat/fifo: add option to write packets in paced way

2020-04-30 Thread Tao Zhang
Nicolas George wrote on Wed, Apr 29, 2020 at 9:51 PM:
>
> leozhang (2020-04-29):
> > Signed-off-by: leozhang 
> > ---
> >  doc/muxers.texi    |  3 +++
> >  libavformat/fifo.c | 19 +++
> >  2 files changed, 22 insertions(+)
> >
> > diff --git a/doc/muxers.texi b/doc/muxers.texi
> > index a74cbc4..5140c00 100644
> > --- a/doc/muxers.texi
> > +++ b/doc/muxers.texi
> > @@ -2274,6 +2274,9 @@ queue overflow or failure. This option is set to 0 (false) by default.
> >  @item output_delay
> >  Time to delay output, in microseconds. Default value is 0.
> >
>
> > +@item paced
> > +If set to 1 (true), write packets in paced way. Default value is 0 (false).
>
> This does not explain to somebody who does not already know. Please have
> a look at the doc for the filter "realtime" and use a similar wording,
> preferably with cross-references.
I'm reading the realtime filter documentation and have learned a lot.
I will write a clearer description in v2.
>
> > +
> >  @end table
> >
> >  @subsection Examples
> > diff --git a/libavformat/fifo.c b/libavformat/fifo.c
> > index bdecf2d..81b7e9e 100644
> > --- a/libavformat/fifo.c
> > +++ b/libavformat/fifo.c
> > @@ -79,6 +79,12 @@ typedef struct FifoContext {
> >
> >  /* Time to delay output, in microseconds */
> >  uint64_t output_delay;
> > +
> > +/* If set to 1, write packets in paced way */
> > +int paced;
> > +
> > +/* Time to start */
>
> > +uint64_t start_time;
>
> The return value of av_gettime_relative() is signed.
Will fix it in v2.
>
> >  } FifoContext;
> >
> >  typedef struct FifoThreadContext {
> > @@ -184,6 +190,14 @@ static int fifo_thread_write_packet(FifoThreadContext *ctx, AVPacket *pkt)
> >  src_tb = avf->streams[s_idx]->time_base;
> >  dst_tb = avf2->streams[s_idx]->time_base;
> >  av_packet_rescale_ts(pkt, src_tb, dst_tb);
> > +if (fifo->paced) {
> > +uint64_t pts = av_rescale_q(pkt->dts, dst_tb, AV_TIME_BASE_Q);
> > +uint64_t now = av_gettime_relative() - fifo->start_time;
>
> > +av_assert0(now >= fifo->output_delay);
>
> av_gettime_relative() is not guaranteed to be monotonic, an assert is
> not acceptable here.
Will fix it in v2.
>
> > +if (pts > now - fifo->output_delay) {
>
> You probably need to make provisions for when the process was paused and
> then resumed, and for timestamps discontinuities.
Will fix it in v2.
>
> > +av_usleep(pts - (now - fifo->output_delay));
>
> The argument to av_usleep() is unsigned, you are passing uint64_t, it
> can overflow.
Will fix it in v2.
>
> > +}
>
> > +}
> >
> >  ret = av_write_frame(avf2, pkt);
> >  if (ret >= 0)
> > @@ -515,6 +529,8 @@ static int fifo_init(AVFormatContext *avf)
> >  return AVERROR(ret);
> >  fifo->overflow_flag_lock_initialized = 1;
> >
> > +fifo->start_time = av_gettime_relative();
> > +
> >  return 0;
> >  }
> >
> > @@ -637,6 +653,9 @@ static const AVOption options[] = {
> > {"output_delay", "Time to delay output, in microseconds", OFFSET(output_delay),
> >  AV_OPT_TYPE_INT, {.i64 = 0}, 0, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM},
> >
> > +{"paced", "Write packets in paced way", OFFSET(paced),
> > + AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, AV_OPT_FLAG_ENCODING_PARAM},
> > +
> >  {NULL},
> >  };
> >
>
> Regards,
>
> --
>   Nicolas George
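The pacing arithmetic under discussion in this thread (capture a clock-to-stream delta on the first packet, sleep `pts - now + delta`, resync on discontinuities) can be modeled without FFmpeg. This sketch also reflects the review points above: everything is signed int64_t so a negative "sleep" (we are late) is representable, and a large jump resets the delta instead of stalling or asserting. All names are illustrative:

```c
#include <stdint.h>

typedef struct Pacer {
    int64_t delta;   /* clock-to-stream offset, microseconds */
    int64_t limit;   /* discontinuity threshold, microseconds */
    int     inited;
} Pacer;

/* Returns how many microseconds to sleep before emitting a packet with
 * timestamp pts_us, given a relative clock reading now_us. */
static int64_t pace(Pacer *p, int64_t pts_us, int64_t now_us)
{
    int64_t sleep;

    if (!p->inited) {               /* first packet: emit immediately */
        p->inited = 1;
        p->delta = now_us - pts_us;
        return 0;
    }
    sleep = pts_us - now_us + p->delta;
    if (sleep > p->limit || sleep < -p->limit) {
        /* timestamp discontinuity or process pause: resync, don't stall */
        p->delta = now_us - pts_us;
        return 0;
    }
    return sleep > 0 ? sleep : 0;   /* never pass a negative sleep on */
}
```

The caller would clamp the returned value to the range of its sleep primitive (av_usleep takes an unsigned int) and loop for longer waits, which addresses the overflow concern raised in review.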

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-04-30 Thread Tao Zhang
Andreas Rheinhardt wrote on Thu, Apr 30, 2020 at 4:23 PM:
>
> Tao Zhang:
> > Marton Balint wrote on Thu, Apr 30, 2020 at 3:26 PM:
> >>
> >>
> >>
> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >>
> >>> Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
> >>>>
> >>>>
> >>>>
> >>>> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >>>>
> >>>>> Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Wed, 29 Apr 2020, leozhang wrote:
> >>>>>>
> >>>>>>> In some applications, it is required to add delay to live streaming.
> >>>>>>
> >>>>>> In what applications? And if you do this, why not run
> >>>>>>
> >>>>>> sleep 20; ffmpeg 
> >>>>> In live streaming applications, someone may not want to broadcast
> >>>>> what's coming next immediately.
> >>>>> "sleep 20; ffmpeg" is not OK, because the stream still starts
> >>>>> broadcasting immediately, and 20 seconds of signal are lost.
> >>>>
> >>>> So you want to buffer 20 seconds of input, and then start the output?
> >>> yes
> >>
> >> Then your timing based approach is not the best way to do that. What you
> >> want is a buffer fullness based approach. E.g. somewhere in the chain of
> >> ffmpeg components you want to have a fixed buffer size of 20 seconds of
> >> data, which is always kept filled.
> > I don't think a bitstream filter can keep a buffer that is always
> > filled with 20 seconds of data, because bitstream filters don't handle
> > timestamps or time bases.
> > Feel free to point out if I have a wrong understanding of bitstream filters.
>
> Indeed you have. Your understanding seems to be based on the old
> bitstream filter API, the one before
> 33d18982fa03feb061c8f744a4f0a9175c1f63ab (from November 2013).
Good to know.
One problem I ran into is that the actual muxer's (not the fifo proxy
muxer's) write_header function should be called after the delay, and it
seems I cannot achieve that cleanly with a bitstream filter.
>
> - Andreas

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-04-30 Thread Tao Zhang
Marton Balint wrote on Thu, Apr 30, 2020 at 3:26 PM:
>
>
>
> On Thu, 30 Apr 2020, Tao Zhang wrote:
>
> > Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
> >>
> >>
> >>
> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >>
> >> > Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >> >>
> >> >>
> >> >>
> >> >> On Wed, 29 Apr 2020, leozhang wrote:
> >> >>
> >> >> > In some applications, it is required to add delay to live streaming.
> >> >>
> >> >> In what applications? And if you do this, why not run
> >> >>
> >> >> sleep 20; ffmpeg 
> >> > In live streaming applications, someone may not want to broadcast what's
> >> > coming next immediately.
> >> > Sleeping 20 seconds and then running ffmpeg is not OK, because the stream
> >> > is still broadcast with no delay, and 20 seconds of signal are lost.
> >>
> >> So you want to buffer 20 seconds of input, and then start the output?
> > yes
>
> Then your timing based approach is not the best way to do that. What you
> want is a buffer fullness based approach. E.g. somewhere in the chain of
> ffmpeg components you want to have a fixed buffer size of 20 seconds of
> data, which is always kept filled.
I don't think a bitstream filter can keep a buffer that is always
filled with 20 seconds of data, because bitstream filters don't handle
timestamps or time bases.
Feel free to point out if I have a wrong understanding of bitstream filters.
>
> >>
> >> >>
> >> >> I don't see how this is useful at all.
> >> >>
> >> >> And what is -paced? What is it used for? Isn't it the same as using
> >> >> ffmpeg -re? You really should explain your use case better.
> >> > -re paces reading the input; -paced paces writing the output.
> >>
> >> But why do you want to delay every output packet?
> > By default, ffmpeg outputs packets as fast as possible.
> > So I delay every output packet to the native frame rate to simulate a live stream.
>
> So you want realtime output. But since I guess your input is already
> realtime, it is suboptimal to limit processing to realtime in two places,
> you want to simply FIFO buffer 20 seconds of data in an ffmpeg component.
>
> The best place for that may not be the fifo muxer. I'd say maybe a
> separate bitstream filter is the most clean solution to achieve that. But
> others may have something else in mind.
ditto
>
> Regards,
> Marton

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-04-30 Thread Tao Zhang
Andreas Rheinhardt wrote on Thu, Apr 30, 2020 at 1:45 PM:
>
> Tao Zhang:
> > Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
> >>
> >>
> >>
> >> On Thu, 30 Apr 2020, Tao Zhang wrote:
> >>
> >>> Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >>>>
> >>>>
> >>>>
> >>>> On Wed, 29 Apr 2020, leozhang wrote:
> >>>>
> >>>>> In some applications, it is required to add delay to live streaming.
> >>>>
> >>>> In what applications? And if you do this, why not run
> >>>>
> >>>> sleep 20; ffmpeg 
> >>> In live streaming applications, someone may not want to broadcast what's
> >>> coming next immediately.
> >>> Sleeping 20 seconds and then running ffmpeg is not OK, because the stream
> >>> is still broadcast with no delay, and 20 seconds of signal are lost.
> >>
> >> So you want to buffer 20 seconds of input, and then start the output?
> > yes
> >>
> >>>>
> >>>> I don't see how this is useful at all.
> >>>>
> >>>> And what is -paced? What is it used for? Isn't it the same as using
> >>>> ffmpeg -re? You really should explain your use case better.
> >>> -re paces reading the input; -paced paces writing the output.
> >>
> >> But why do you want to delay every output packet?
> > By default, ffmpeg outputs packets as fast as possible.
> > So I delay every output packet to the native frame rate to simulate a live stream.
>
> What would be the benefit of your patch for API users (i.e. users that
> directly use libavformat and not ffmpeg.c)? Right now your problem seems
> to be in ffmpeg.c and not in libavformat.
API users can use the fifo pseudo-muxer directly, without implementing
similar functionality themselves.
> >>
> >>>>
> >>>> Regards,
> >>>> Marton
> >>>>
> >>>>> For example, you can add a 20-second delay to an rtmp stream with the command below:
> >>>>> ffmpeg -i your_input_stream_address -c copy -map 0:a -map 0:v -f fifo 
> >>>>> -paced 1 -queue_size 600
> >>>>> -output_delay 2000 -fifo_format flv 
> >>>>> rtmp://example.com/live/delayed_stream_name
> >>>>>
> >>>>> leozhang (3):
> >>>>>  avformat/fifo: add option to delay output
> >>>>>  avformat/fifo: add option to write packets in paced way
> >>>>>  doc/muxers: add command example how to delay output live stream
> >>>>>
> >>>>> doc/muxers.texi| 17 +
> >>>>> libavformat/fifo.c | 26 ++
> >>>>> 2 files changed, 43 insertions(+)
> >>>>>
> >>>>> --
> >>>>> 1.8.3.1
> >>>>>
>

Re: [FFmpeg-devel] [PATCH 1/3] avformat/fifo: add option to delay output

2020-04-29 Thread Tao Zhang
Nicolas George wrote on Wed, Apr 29, 2020 at 9:31 PM:
>
> leozhang (2020-04-29):
> > Signed-off-by: leozhang 
> > ---
> >  doc/muxers.texi| 3 +++
> >  libavformat/fifo.c | 7 +++
> >  2 files changed, 10 insertions(+)
> >
> > diff --git a/doc/muxers.texi b/doc/muxers.texi
> > index cb2bb42..a74cbc4 100644
> > --- a/doc/muxers.texi
> > +++ b/doc/muxers.texi
> > @@ -2271,6 +2271,9 @@ certain (usually permanent) errors the recovery is 
> > not attempted even when
> >  Specify whether to wait for the keyframe after recovering from
> >  queue overflow or failure. This option is set to 0 (false) by default.
> >
>
> > +@item output_delay
> > +Time to delay output, in microseconds. Default value is 0.
>
> This is not accurate enough. This will block every output packet for the
> extra specified duration.
>
> Not sure if it is very useful, compared with the other patch.
In live streaming applications, if the user doesn't want to broadcast
what's coming next immediately, they can set the desired output delay.
>
> > +
> >  @end table
> >
> >  @subsection Examples
> > diff --git a/libavformat/fifo.c b/libavformat/fifo.c
> > index d11dc66..bdecf2d 100644
> > --- a/libavformat/fifo.c
> > +++ b/libavformat/fifo.c
> > @@ -77,6 +77,8 @@ typedef struct FifoContext {
> >  /* Value > 0 signals queue overflow */
> >  volatile uint8_t overflow_flag;
> >
> > +/* Time to delay output, in microseconds */
> > +uint64_t output_delay;
> >  } FifoContext;
> >
> >  typedef struct FifoThreadContext {
> > @@ -397,6 +399,8 @@ static void *fifo_consumer_thread(void *data)
> >  memset(_thread_ctx, 0, sizeof(FifoThreadContext));
> >  fifo_thread_ctx.avf = avf;
> >
> > +av_usleep(fifo->output_delay);
> > +
> >  while (1) {
> >  uint8_t just_flushed = 0;
> >
> > @@ -630,6 +634,9 @@ static const AVOption options[] = {
> >  {"recover_any_error", "Attempt recovery regardless of type of the 
> > error", OFFSET(recover_any_error),
> >   AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, AV_OPT_FLAG_ENCODING_PARAM},
> >
>
> > +{"output_delay", "Time to delay output, in microseconds", 
> > OFFSET(output_delay),
> > + AV_OPT_TYPE_INT, {.i64 = 0}, 0, INT_MAX, 
> > AV_OPT_FLAG_ENCODING_PARAM},
>
> AV_OPT_TYPE_DURATION and adapt the description and documentation.
Will fix it in v2. Thanks.
>
> > +
> >  {NULL},
> >  };
> >
>
> Regards,
>
> --
>   Nicolas George

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-04-29 Thread Tao Zhang
Marton Balint wrote on Thu, Apr 30, 2020 at 4:55 AM:
>
>
>
> On Thu, 30 Apr 2020, Tao Zhang wrote:
>
> > Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
> >>
> >>
> >>
> >> On Wed, 29 Apr 2020, leozhang wrote:
> >>
> >> > In some applications, it is required to add delay to live streaming.
> >>
> >> In what applications? And if you do this, why not run
> >>
> >> sleep 20; ffmpeg 
> > In live streaming applications, someone may not want to broadcast what's
> > coming next immediately.
> > Sleeping 20 seconds and then running ffmpeg is not OK, because the stream
> > is still broadcast with no delay, and 20 seconds of signal are lost.
>
> So you want to buffer 20 seconds of input, and then start the output?
yes
>
> >>
> >> I don't see how this is useful at all.
> >>
> >> And what is -paced? What is it used for? Isn't it the same as using ffmpeg
> >> -re? You really should explain your use case better.
> > -re paces reading the input; -paced paces writing the output.
>
> But why do you want to delay every output packet?
By default, ffmpeg outputs packets as fast as possible.
So I delay every output packet to the native frame rate to simulate a live stream.
>
> >>
> >> Regards,
> >> Marton
> >>
> >> > For example, you can add a 20-second delay to an rtmp stream with the command below:
> >> > ffmpeg -i your_input_stream_address -c copy -map 0:a -map 0:v -f fifo 
> >> > -paced 1 -queue_size 600
> >> > -output_delay 2000 -fifo_format flv 
> >> > rtmp://example.com/live/delayed_stream_name
> >> >
> >> > leozhang (3):
> >> >  avformat/fifo: add option to delay output
> >> >  avformat/fifo: add option to write packets in paced way
> >> >  doc/muxers: add command example how to delay output live stream
> >> >
> >> > doc/muxers.texi| 17 +
> >> > libavformat/fifo.c | 26 ++
> >> > 2 files changed, 43 insertions(+)
> >> >
> >> > --
> >> > 1.8.3.1
> >> >

Re: [FFmpeg-devel] [PATCH 0/3] Patch set to delay output live stream

2020-04-29 Thread Tao Zhang
Marton Balint wrote on Thu, Apr 30, 2020 at 12:03 AM:
>
>
>
> On Wed, 29 Apr 2020, leozhang wrote:
>
> > In some applications, it is required to add delay to live streaming.
>
> In what applications? And if you do this, why not run
>
> sleep 20; ffmpeg 
In live streaming applications, someone may not want to broadcast what's
coming next immediately.
Sleeping 20 seconds and then running ffmpeg is not OK, because the stream
is still broadcast with no delay, and 20 seconds of signal are lost.
>
> I don't see how this is useful at all.
>
> And what is -paced? What is it used for? Isn't it the same as using ffmpeg
> -re? You really should explain your use case better.
-re paces reading the input; -paced paces writing the output.
>
> Regards,
> Marton
>
> > For example, you can add a 20-second delay to an rtmp stream with the command below:
> > ffmpeg -i your_input_stream_address -c copy -map 0:a -map 0:v -f fifo 
> > -paced 1 -queue_size 600
> > -output_delay 2000 -fifo_format flv 
> > rtmp://example.com/live/delayed_stream_name
> >
> > leozhang (3):
> >  avformat/fifo: add option to delay output
> >  avformat/fifo: add option to write packets in paced way
> >  doc/muxers: add command example how to delay output live stream
> >
> > doc/muxers.texi| 17 +
> > libavformat/fifo.c | 26 ++
> > 2 files changed, 43 insertions(+)
> >
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [PATCH] avfilter/vf_yaepblur: add yaepblur filter

2019-12-07 Thread Tao Zhang
Paul B Mahol wrote on Sat, Dec 7, 2019 at 11:21 PM:
>
> On 12/7/19, Tao Zhang  wrote:
> > I'm sorry for the late reply.
> >
> > Paul B Mahol wrote on Sat, Dec 7, 2019 at 2:48 AM:
> >>
> >> On 12/5/19, Paul B Mahol  wrote:
> >> > On 12/5/19, Tao Zhang  wrote:
> >> >> Hello everyone,
> >> >> Can I assume this patch is OK if there are no comments or objections?
> >> >
> >> > Yes, give it some time to be applied.
> >> > I'm busy with other stuff right now. So this patch LGTM (note to
> >> > committer: bump the minor version of libavfilter upon pushing).
> >>
> >> I do not get anywhere near the output of this filter with
> >> avgblur, gblur, or boxblur.
> >>
> >> How should the output of this filter look?
> > Assuming planes and radius are kept unchanged:
> > 1) the output is close to avgblur/boxblur when sigma is close to INT_MAX;
> > 2) the output is close to the raw input when sigma is close to one (why
> > one and not zero? just to keep the code simple, I need to guarantee the
> > denominator is larger than zero);
> > 3) the output is locally adaptive, meaning it blurs more strongly on
> > flat areas and more weakly on edges.
>
> Is there a plan to make it faster with SIMD?
Yes, I will work on this.

Re: [FFmpeg-devel] [PATCH] avfilter/vf_yaepblur: add yaepblur filter

2019-12-07 Thread Tao Zhang
I'm sorry for the late reply.

Paul B Mahol wrote on Sat, Dec 7, 2019 at 2:48 AM:
>
> On 12/5/19, Paul B Mahol  wrote:
> > On 12/5/19, Tao Zhang  wrote:
> >> Hello everyone,
> >> Can I assume this patch is OK if there are no comments or objections?
> >
> > Yes, give it some time to be applied.
> > I'm busy with other stuff right now. So this patch LGTM (note to
> > committer: bump the minor version of libavfilter upon pushing).
>
> I do not get anywhere near the output of this filter with
> avgblur, gblur, or boxblur.
>
> How should the output of this filter look?
Assuming planes and radius are kept unchanged:
1) the output is close to avgblur/boxblur when sigma is close to INT_MAX;
2) the output is close to the raw input when sigma is close to one (why
one and not zero? just to keep the code simple, I need to guarantee the
denominator is larger than zero);
3) the output is locally adaptive, meaning it blurs more strongly on flat
areas and more weakly on edges.
>
> >
> >>
> >> Tao Zhang wrote on Tue, Dec 3, 2019 at 5:26 PM:
> >>>
> >>> ping:)
> >>>
> >>> leozhang wrote on Mon, Nov 25, 2019 at 5:53 PM:
> >>> >
> >>> > Signed-off-by: leozhang 
> >>> > ---
> >>> > This filter blurs the input while preserving edges, with a
> >>> > slice-threaded speed-up.
> >>> > My test speed is about 100 fps on 1080p video with 16 threads, on a
> >>> > test machine whose CPU is an E5-2660 v4 @ 2.0 GHz.
> >>> > I guess that an i7-9700K @ 3.6 GHz can run even faster.
> >>> > The test command is
> >>> > ffmpeg -s 1920x1080 -r 30 -i your_test_file.yuv -filter_threads 16 -vf
> >>> > yaepblur -f null -
> >>> >
> >>> >  doc/filters.texi  |  22 +++
> >>> >  libavfilter/Makefile  |   1 +
> >>> >  libavfilter/allfilters.c  |   1 +
> >>> >  libavfilter/vf_yaepblur.c | 349
> >>> > ++
> >>> >  4 files changed, 373 insertions(+)
> >>> >  create mode 100644 libavfilter/vf_yaepblur.c
> >>> >
> >>> > diff --git a/doc/filters.texi b/doc/filters.texi
> >>> > index c04421b..61e93d5 100644
> >>> > --- a/doc/filters.texi
> >>> > +++ b/doc/filters.texi
> >>> > @@ -19775,6 +19775,28 @@ Only deinterlace frames marked as interlaced.
> >>> >  The default value is @code{all}.
> >>> >  @end table
> >>> >
> >>> > +@section yaepblur
> >>> > +
> >>> > +Apply blur filter while preserving edges ("yaepblur" means "yet
> >>> > another
> >>> > edge preserving blur filter").
> >>> > +The algorithm is described in
> >>> > +"J. S. Lee, Digital image enhancement and noise filtering by use of
> >>> > local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2,
> >>> > 1980."
> >>> > +
> >>> > +It accepts the following parameters:
> >>> > +
> >>> > +@table @option
> >>> > +@item radius, r
> >>> > +Set the window radius. Default value is 3.
> >>> > +
> >>> > +@item planes, p
> >>> > +Set which planes to filter. Default is only the first plane.
> >>> > +
> >>> > +@item sigma, s
> >>> > +Set blur strength. Default value is 128.
> >>> > +@end table
> >>> > +
> >>> > +@subsection Commands
> >>> > +This filter supports same @ref{commands} as options.
> >>> > +
> >>> >  @section zoompan
> >>> >
> >>> >  Apply Zoom & Pan effect.
> >>> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> >>> > index 6838d5c..b490a44 100644
> >>> > --- a/libavfilter/Makefile
> >>> > +++ b/libavfilter/Makefile
> >>> > @@ -442,6 +442,7 @@ OBJS-$(CONFIG_XSTACK_FILTER) +=
> >>> > vf_stack.o framesync.o
> >>> >  OBJS-$(CONFIG_YADIF_FILTER)  += vf_yadif.o
> >>> > yadif_common.o
> >>> >  OBJS-$(CONFIG_YADIF_CUDA_FILTER) += vf_yadif_cuda.o
> >>> > vf_yadif_cuda.ptx.o \
> >>> >  yadif_common.o
> >>> > +OBJS-$(CONFIG_YAEPBLUR_FILTER)   += vf_yaepblur.o
> >>> >  OBJS-$(CONFIG_ZMQ_FILTER)  

Re: [FFmpeg-devel] [PATCH] avfilter/vf_yaepblur: add yaepblur filter

2019-12-05 Thread Tao Zhang
Hello everyone,
Can I assume this patch is OK if there are no comments or objections?

Tao Zhang wrote on Tue, Dec 3, 2019 at 5:26 PM:
>
> ping:)
>
> leozhang wrote on Mon, Nov 25, 2019 at 5:53 PM:
> >
> > Signed-off-by: leozhang 
> > ---
> > This filter blurs the input while preserving edges, with a slice-threaded
> > speed-up.
> > My test speed is about 100 fps on 1080p video with 16 threads, on a test
> > machine whose CPU is an E5-2660 v4 @ 2.0 GHz.
> > I guess that an i7-9700K @ 3.6 GHz can run even faster.
> > The test command is
> > ffmpeg -s 1920x1080 -r 30 -i your_test_file.yuv -filter_threads 16 -vf 
> > yaepblur -f null -
> >
> >  doc/filters.texi  |  22 +++
> >  libavfilter/Makefile  |   1 +
> >  libavfilter/allfilters.c  |   1 +
> >  libavfilter/vf_yaepblur.c | 349 
> > ++
> >  4 files changed, 373 insertions(+)
> >  create mode 100644 libavfilter/vf_yaepblur.c
> >
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index c04421b..61e93d5 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -19775,6 +19775,28 @@ Only deinterlace frames marked as interlaced.
> >  The default value is @code{all}.
> >  @end table
> >
> > +@section yaepblur
> > +
> > +Apply blur filter while preserving edges ("yaepblur" means "yet another 
> > edge preserving blur filter").
> > +The algorithm is described in
> > +"J. S. Lee, Digital image enhancement and noise filtering by use of local 
> > statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
> > +
> > +It accepts the following parameters:
> > +
> > +@table @option
> > +@item radius, r
> > +Set the window radius. Default value is 3.
> > +
> > +@item planes, p
> > +Set which planes to filter. Default is only the first plane.
> > +
> > +@item sigma, s
> > +Set blur strength. Default value is 128.
> > +@end table
> > +
> > +@subsection Commands
> > +This filter supports same @ref{commands} as options.
> > +
> >  @section zoompan
> >
> >  Apply Zoom & Pan effect.
> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > index 6838d5c..b490a44 100644
> > --- a/libavfilter/Makefile
> > +++ b/libavfilter/Makefile
> > @@ -442,6 +442,7 @@ OBJS-$(CONFIG_XSTACK_FILTER) += 
> > vf_stack.o framesync.o
> >  OBJS-$(CONFIG_YADIF_FILTER)  += vf_yadif.o yadif_common.o
> >  OBJS-$(CONFIG_YADIF_CUDA_FILTER) += vf_yadif_cuda.o 
> > vf_yadif_cuda.ptx.o \
> >  yadif_common.o
> > +OBJS-$(CONFIG_YAEPBLUR_FILTER)   += vf_yaepblur.o
> >  OBJS-$(CONFIG_ZMQ_FILTER)+= f_zmq.o
> >  OBJS-$(CONFIG_ZOOMPAN_FILTER)+= vf_zoompan.o
> >  OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o
> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> > index 7c1e19e..8f41186 100644
> > --- a/libavfilter/allfilters.c
> > +++ b/libavfilter/allfilters.c
> > @@ -420,6 +420,7 @@ extern AVFilter ff_vf_xmedian;
> >  extern AVFilter ff_vf_xstack;
> >  extern AVFilter ff_vf_yadif;
> >  extern AVFilter ff_vf_yadif_cuda;
> > +extern AVFilter ff_vf_yaepblur;
> >  extern AVFilter ff_vf_zmq;
> >  extern AVFilter ff_vf_zoompan;
> >  extern AVFilter ff_vf_zscale;
> > diff --git a/libavfilter/vf_yaepblur.c b/libavfilter/vf_yaepblur.c
> > new file mode 100644
> > index 000..ef6fbc9
> > --- /dev/null
> > +++ b/libavfilter/vf_yaepblur.c
> > @@ -0,0 +1,349 @@
> > +/*
> > + * Copyright (C) 2019 Leo Zhang 
> > +
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with FFmpeg; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 
> > 02110-13

Re: [FFmpeg-devel] [PATCH] avfilter/vf_yaepblur: add yaepblur filter

2019-12-03 Thread Tao Zhang
ping:)

leozhang wrote on Mon, Nov 25, 2019 at 5:53 PM:
>
> Signed-off-by: leozhang 
> ---
> This filter blurs the input while preserving edges, with a slice-threaded
> speed-up.
> My test speed is about 100 fps on 1080p video with 16 threads, on a test
> machine whose CPU is an E5-2660 v4 @ 2.0 GHz.
> I guess that an i7-9700K @ 3.6 GHz can run even faster.
> The test command is
> ffmpeg -s 1920x1080 -r 30 -i your_test_file.yuv -filter_threads 16 -vf 
> yaepblur -f null -
>
>  doc/filters.texi  |  22 +++
>  libavfilter/Makefile  |   1 +
>  libavfilter/allfilters.c  |   1 +
>  libavfilter/vf_yaepblur.c | 349 
> ++
>  4 files changed, 373 insertions(+)
>  create mode 100644 libavfilter/vf_yaepblur.c
>
> diff --git a/doc/filters.texi b/doc/filters.texi
> index c04421b..61e93d5 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -19775,6 +19775,28 @@ Only deinterlace frames marked as interlaced.
>  The default value is @code{all}.
>  @end table
>
> +@section yaepblur
> +
> +Apply blur filter while preserving edges ("yaepblur" means "yet another edge 
> preserving blur filter").
> +The algorithm is described in
> +"J. S. Lee, Digital image enhancement and noise filtering by use of local 
> statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
> +
> +It accepts the following parameters:
> +
> +@table @option
> +@item radius, r
> +Set the window radius. Default value is 3.
> +
> +@item planes, p
> +Set which planes to filter. Default is only the first plane.
> +
> +@item sigma, s
> +Set blur strength. Default value is 128.
> +@end table
> +
> +@subsection Commands
> +This filter supports same @ref{commands} as options.
> +
>  @section zoompan
>
>  Apply Zoom & Pan effect.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 6838d5c..b490a44 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -442,6 +442,7 @@ OBJS-$(CONFIG_XSTACK_FILTER) += 
> vf_stack.o framesync.o
>  OBJS-$(CONFIG_YADIF_FILTER)  += vf_yadif.o yadif_common.o
>  OBJS-$(CONFIG_YADIF_CUDA_FILTER) += vf_yadif_cuda.o 
> vf_yadif_cuda.ptx.o \
>  yadif_common.o
> +OBJS-$(CONFIG_YAEPBLUR_FILTER)   += vf_yaepblur.o
>  OBJS-$(CONFIG_ZMQ_FILTER)+= f_zmq.o
>  OBJS-$(CONFIG_ZOOMPAN_FILTER)+= vf_zoompan.o
>  OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 7c1e19e..8f41186 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -420,6 +420,7 @@ extern AVFilter ff_vf_xmedian;
>  extern AVFilter ff_vf_xstack;
>  extern AVFilter ff_vf_yadif;
>  extern AVFilter ff_vf_yadif_cuda;
> +extern AVFilter ff_vf_yaepblur;
>  extern AVFilter ff_vf_zmq;
>  extern AVFilter ff_vf_zoompan;
>  extern AVFilter ff_vf_zscale;
> diff --git a/libavfilter/vf_yaepblur.c b/libavfilter/vf_yaepblur.c
> new file mode 100644
> index 000..ef6fbc9
> --- /dev/null
> +++ b/libavfilter/vf_yaepblur.c
> @@ -0,0 +1,349 @@
> +/*
> + * Copyright (C) 2019 Leo Zhang 
> +
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
> + */
> +
> +/**
> + * @file
> + * yaep(yet another edge preserving) blur filter
> + *
> + * This implementation is based on an algorithm described in
> + * "J. S. Lee, Digital image enhancement and noise filtering by use of local 
> statistics, IEEE Trans. Pattern
> + * Anal. Mach. Intell. PAMI-2, 1980."
> + */
> +
> +#include "libavutil/opt.h"
> +#include "libavutil/imgutils.h"
> +#include "avfilter.h"
> +#include "internal.h"
> +
> +typedef struct YAEPContext {
> +const AVClass *class;
> +
> +int planes;
> +int radius;
> +int sigma;
> +
> +int nb_planes;
> +int planewidth[4];
> +int planeheight[4];
> +int depth;
> +
> +uint64_t *sat;///< summed area table
> +uint64_t *square_sat; ///< square summed area table
> +int sat_linesize;
> +
> +int (*pre_calculate_row)(AVFilterContext *ctx, void *arg, int jobnr, int 
> nb_jobs);
> +int (*filter_slice )(AVFilterContext *ctx, void *arg, int jobnr, int 
> nb_jobs);
> +} YAEPContext;
> +
> 

Re: [FFmpeg-devel] [PATCH V2 2/2] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-11-18 Thread Tao Zhang
Limin Wang wrote on Fri, Nov 15, 2019 at 11:24 PM:
>
> On Fri, Nov 15, 2019 at 04:35:49PM +0800, leozhang wrote:
> > Reviewed-by: Paul B Mahol 
> > Reviewed-by: Jun Zhao 
> > Signed-off-by: leozhang 
> > ---
> >
> >  doc/filters.texi   |  4 
> >  libavfilter/vf_bilateral.c | 23 ++-
> >  2 files changed, 26 insertions(+), 1 deletion(-)
> >
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index e48f9c9..6b1a5cb 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -6355,6 +6355,10 @@ Allowed range is 0 to 1. Default is 0.1.
> >  Set planes to filter. Default is first only.
> >  @end table
> >
> > +@subsection Commands
> > +
> > +This filter supports the all above options as @ref{commands}.
> > +
> >  @section bitplanenoise
> >
> >  Show and measure bit plane noise.
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 36e53d2..ba3631c 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -29,6 +29,8 @@
> >  #include "internal.h"
> >  #include "video.h"
> >
> > +#include 
> > +
> >  typedef struct BilateralContext {
> >  const AVClass *class;
> >
> > @@ -54,7 +56,7 @@ typedef struct BilateralContext {
> >  } BilateralContext;
> >
> >  #define OFFSET(x) offsetof(BilateralContext, x)
> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > +#define FLAGS 
> > AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >
> >  static const AVOption bilateral_options[] = {
> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT, 
> > {.dbl=0.1}, 0.0,  10, FLAGS },
> > @@ -333,6 +335,24 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
> > *in)
> >  return ff_filter_frame(outlink, out);
> >  }
> >
> > +static int process_command(AVFilterContext *ctx, const char *cmd, const 
> > char *args,
> > +   char *res, int res_len, int flags)
> > +{
> > +BilateralContext *s = ctx->priv;
> > +int ret;
> > +float old_sigmaR = s->sigmaR;
> > +
> > +if ((ret = ff_filter_process_command(ctx, cmd, args, res, res_len, 
> > flags)) < 0) {
> > +return ret;
> > +}
> > +
> > +if (fabs(old_sigmaR - s->sigmaR) > FLT_EPSILON && (ret = init_lut(s)) 
> > < 0) {
>
> fabsf is for float
Right, thanks for the catch; I'll change it.
>
> > +s->sigmaR = old_sigmaR;
> > +}
> > +
> > +return ret;
> > +}
> > +
> >  static av_cold void uninit(AVFilterContext *ctx)
> >  {
> >  BilateralContext *s = ctx->priv;
> > @@ -375,4 +395,5 @@ AVFilter ff_vf_bilateral = {
> >  .inputs= bilateral_inputs,
> >  .outputs   = bilateral_outputs,
> >  .flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.process_command = process_command,
> >  };
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [PATCH V4] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-11-15 Thread Tao Zhang
The new patchsets are
https://patchwork.ffmpeg.org/patch/16279/
https://patchwork.ffmpeg.org/patch/16280/
Please review them, thanks

Tao Zhang  wrote on Fri, Nov 15, 2019 at 4:38 PM:
>
> I submitted a new patchset, separating functional and non-functional
> changes. I also updated the doc, inspired by Paul's commit
> https://github.com/FFmpeg/FFmpeg/commit/45f03cdd20c3f9a83d4955fa32ffdc287229ccfd
>
> Tao Zhang  wrote on Mon, Nov 11, 2019 at 7:55 PM:
> >
> > ping, at the beginning of the new week:)
> >
> > leozhang  wrote on Sun, Nov 3, 2019 at 6:50 PM:
> >
> > >
> > > Reviewed-by: Paul B Mahol 
> > > Reviewed-by: Jun Zhao 
> > > Signed-off-by: leozhang 
> > > ---
> > >  libavfilter/vf_bilateral.c | 39 ++-
> > >  1 file changed, 34 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > > index 3c9d800..ba3631c 100644
> > > --- a/libavfilter/vf_bilateral.c
> > > +++ b/libavfilter/vf_bilateral.c
> > > @@ -29,6 +29,8 @@
> > >  #include "internal.h"
> > >  #include "video.h"
> > >
> > > +#include <float.h>
> > > +
> > >  typedef struct BilateralContext {
> > >  const AVClass *class;
> > >
> > > @@ -54,7 +56,7 @@ typedef struct BilateralContext {
> > >  } BilateralContext;
> > >
> > >  #define OFFSET(x) offsetof(BilateralContext, x)
> > > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > > +#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> > >
> > >  static const AVOption bilateral_options[] = {
> > >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT, {.dbl=0.1}, 0.0,  10, FLAGS },
> > > @@ -91,19 +93,27 @@ static int query_formats(AVFilterContext *ctx)
> > >  return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> > >  }
> > >
> > > -static int config_input(AVFilterLink *inlink)
> > > +static int init_lut(BilateralContext *s)
> > >  {
> > > -BilateralContext *s = inlink->dst->priv;
> > > -const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> > >  float inv_sigma_range;
> > >
> > > -s->depth = desc->comp[0].depth;
> > >  inv_sigma_range = 1.0f / (s->sigmaR * ((1 << s->depth) - 1));
> > >
> > >  //compute a lookup table
> > >  for (int i = 0; i < (1 << s->depth); i++)
> > >  s->range_table[i] = expf(-i * inv_sigma_range);
> > >
> > > +return 0;
> > > +}
> > > +
> > > +static int config_input(AVFilterLink *inlink)
> > > +{
> > > +BilateralContext *s = inlink->dst->priv;
> > > +const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> > > +
> > > +s->depth = desc->comp[0].depth;
> > > +init_lut(s);
> > > +
> > >  s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
> > >  s->planewidth[0] = s->planewidth[3] = inlink->w;
> > >  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
> > > @@ -325,6 +335,24 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> > >  return ff_filter_frame(outlink, out);
> > >  }
> > >
> > > +static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
> > > +   char *res, int res_len, int flags)
> > > +{
> > > +BilateralContext *s = ctx->priv;
> > > +int ret;
> > > +float old_sigmaR = s->sigmaR;
> > > +
> > > +if ((ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags)) < 0) {
> > > +return ret;
> > > +}
> > > +
> > > +if (fabs(old_sigmaR - s->sigmaR) > FLT_EPSILON && (ret = init_lut(s)) < 0) {
> > > +s->sigmaR = old_sigmaR;
> > > +}
> > > +
> > > +return ret;
> > > +}
> > > +
> > >  static av_cold void uninit(AVFilterContext *ctx)
> > >  {
> > >  BilateralContext *s = ctx->priv;
> > > @@ -367,4 +395,5 @@ AVFilter ff_vf_bilateral = {
> > >  .inputs= bilateral_inputs,
> > >  .outputs   = bilateral_outputs,
> > >  .flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > > +.process_command = process_command,
> > >  };
> > > --
> > > 1.8.3.1
> > >

Re: [FFmpeg-devel] [PATCH V3] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-11-02 Thread Tao Zhang
Paul B Mahol  wrote on Sat, Nov 2, 2019 at 6:34 PM:
>
> You mixed functional and non-functional changes in single patch.
Lesson learned. I'll submit a new patch. Thanks a lot. Today is a good day.
> This is big no.
>
> On 11/2/19, Tao Zhang  wrote:
> > Good weekend. Is it ok or any more suggestions?
> >
> > Tao Zhang  wrote on Mon, Oct 28, 2019 at 2:53 PM:
> >>
> >> ping
> >>
> >> leozhang  wrote on Thu, Oct 24, 2019 at 5:18 PM:
> >> >
> >> > Reviewed-by: Paul B Mahol 
> >> > Reviewed-by: Jun Zhao 
> >> > Signed-off-by: leozhang 
> >> > ---
> >> >  libavfilter/vf_bilateral.c | 57 ++
> >> >  1 file changed, 43 insertions(+), 14 deletions(-)
> >> >
> >> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> >> > index 3c9d800..4d7bf68 100644
> >> > --- a/libavfilter/vf_bilateral.c
> >> > +++ b/libavfilter/vf_bilateral.c
> >> > @@ -29,6 +29,8 @@
> >> >  #include "internal.h"
> >> >  #include "video.h"
> >> >
> >> > +#include <float.h>
> >> > +
> >> >  typedef struct BilateralContext {
> >> >  const AVClass *class;
> >> >
> >> > @@ -54,7 +56,7 @@ typedef struct BilateralContext {
> >> >  } BilateralContext;
> >> >
> >> >  #define OFFSET(x) offsetof(BilateralContext, x)
> >> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> >> > +#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >> >
> >> >  static const AVOption bilateral_options[] = {
> >> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT, {.dbl=0.1}, 0.0,  10, FLAGS },
> >> > @@ -91,19 +93,27 @@ static int query_formats(AVFilterContext *ctx)
> >> >  return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> >> >  }
> >> >
> >> > -static int config_input(AVFilterLink *inlink)
> >> > +static int init_lut(BilateralContext *s)
> >> >  {
> >> > -BilateralContext *s = inlink->dst->priv;
> >> > -const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> >> >  float inv_sigma_range;
> >> >
> >> > -s->depth = desc->comp[0].depth;
> >> >  inv_sigma_range = 1.0f / (s->sigmaR * ((1 << s->depth) - 1));
> >> >
> >> >  //compute a lookup table
> >> >  for (int i = 0; i < (1 << s->depth); i++)
> >> >  s->range_table[i] = expf(-i * inv_sigma_range);
> >> >
> >> > +return 0;
> >> > +}
> >> > +
> >> > +static int config_input(AVFilterLink *inlink)
> >> > +{
> >> > +BilateralContext *s = inlink->dst->priv;
> >> > +const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> >> > +
> >> > +s->depth = desc->comp[0].depth;
> >> > +init_lut(s);
> >> > +
> >> >  s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
> >> >  s->planewidth[0] = s->planewidth[3] = inlink->w;
> >> >  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
> >> > @@ -325,6 +335,24 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> >> >  return ff_filter_frame(outlink, out);
> >> >  }
> >> >
> >> > +static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
> >> > +   char *res, int res_len, int flags)
> >> > +{
> >> > +BilateralContext *s = ctx->priv;
> >> > +int ret;
> >> > +float old_sigmaR = s->sigmaR;
> >> > +
> >> > +if ((ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags)) < 0) {
> >> > +return ret;
> >> > +}
> >> > +
> >> > +if (fabs(old_sigmaR - s->sigmaR) > FLT_EPSILON && (ret = init_lut(s)) < 0) {
> >> > +s->si

Re: [FFmpeg-devel] [PATCH V3] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-11-02 Thread Tao Zhang
Good weekend. Is it ok or any more suggestions?

Tao Zhang  wrote on Mon, Oct 28, 2019 at 2:53 PM:
>
> ping
>
leozhang  wrote on Thu, Oct 24, 2019 at 5:18 PM:
> >
> > Reviewed-by: Paul B Mahol 
> > Reviewed-by: Jun Zhao 
> > Signed-off-by: leozhang 
> > ---
> >  libavfilter/vf_bilateral.c | 57 
> > ++
> >  1 file changed, 43 insertions(+), 14 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..4d7bf68 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -29,6 +29,8 @@
> >  #include "internal.h"
> >  #include "video.h"
> >
> > +#include 
> > +
> >  typedef struct BilateralContext {
> >  const AVClass *class;
> >
> > @@ -54,7 +56,7 @@ typedef struct BilateralContext {
> >  } BilateralContext;
> >
> >  #define OFFSET(x) offsetof(BilateralContext, x)
> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > +#define FLAGS 
> > AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >
> >  static const AVOption bilateral_options[] = {
> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT, 
> > {.dbl=0.1}, 0.0,  10, FLAGS },
> > @@ -91,19 +93,27 @@ static int query_formats(AVFilterContext *ctx)
> >  return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> >  }
> >
> > -static int config_input(AVFilterLink *inlink)
> > +static int init_lut(BilateralContext *s)
> >  {
> > -BilateralContext *s = inlink->dst->priv;
> > -const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> >  float inv_sigma_range;
> >
> > -s->depth = desc->comp[0].depth;
> >  inv_sigma_range = 1.0f / (s->sigmaR * ((1 << s->depth) - 1));
> >
> >  //compute a lookup table
> >  for (int i = 0; i < (1 << s->depth); i++)
> >  s->range_table[i] = expf(-i * inv_sigma_range);
> >
> > +return 0;
> > +}
> > +
> > +static int config_input(AVFilterLink *inlink)
> > +{
> > +BilateralContext *s = inlink->dst->priv;
> > +const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> > +
> > +s->depth = desc->comp[0].depth;
> > +init_lut(s);
> > +
> >  s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, 
> > desc->log2_chroma_w);
> >  s->planewidth[0] = s->planewidth[3] = inlink->w;
> >  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, 
> > desc->log2_chroma_h);
> > @@ -325,6 +335,24 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
> > *in)
> >  return ff_filter_frame(outlink, out);
> >  }
> >
> > +static int process_command(AVFilterContext *ctx, const char *cmd, const 
> > char *args,
> > +   char *res, int res_len, int flags)
> > +{
> > +BilateralContext *s = ctx->priv;
> > +int ret;
> > +float old_sigmaR = s->sigmaR;
> > +
> > +if ((ret = ff_filter_process_command(ctx, cmd, args, res, res_len, 
> > flags)) < 0) {
> > +return ret;
> > +}
> > +
> > +if (fabs(old_sigmaR - s->sigmaR) > FLT_EPSILON && (ret = init_lut(s)) 
> > < 0) {
> > +s->sigmaR = old_sigmaR;
> > +}
> > +
> > +return ret;
> > +}
> > +
> >  static av_cold void uninit(AVFilterContext *ctx)
> >  {
> >  BilateralContext *s = ctx->priv;
> > @@ -358,13 +386,14 @@ static const AVFilterPad bilateral_outputs[] = {
> >  };
> >
> >  AVFilter ff_vf_bilateral = {
> > -.name  = "bilateral",
> > -.description   = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > -.priv_size = sizeof(BilateralContext),
> > -.priv_class= _class,
> > -.uninit= uninit,
> > -.query_formats = query_formats,
> > -.inputs= bilateral_inputs,
> > -.outputs   = bilateral_outputs,
> > -.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.name= "bilateral",
> > +.description = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > +.priv_size   = sizeof(BilateralContext),
> > +.priv_class  = _class,
> > +.uninit  = uninit,
> > +.query_formats   = query_formats,
> > +.inputs  = bilateral_inputs,
> > +.outputs = bilateral_outputs,
> > +.flags   = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.process_command = process_command,
> >  };
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [PATCH] avfilter/vf_bilateral: remove useless memcpy

2019-10-31 Thread Tao Zhang
Paul B Mahol  wrote on Wed, Oct 30, 2019 at 4:35 PM:
>
> Why do you think it is useless?
>
> Have you checked that the checksums match before and after?
Hi Paul,
I added a FATE test for the bilateral filter, which is
https://patchwork.ffmpeg.org/patch/16042/. The checksums matched
before and after. Thanks
>
> On 10/30/19, leozhang  wrote:
> > Signed-off-by: leozhang 
> > ---
> >  libavfilter/vf_bilateral.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..ba3c6e1 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -277,8 +277,8 @@ static void bilateral_##name(BilateralContext *s, const uint8_t *ssrc, uint8_t *
> >  factor_++;  \
> >  }   \
> >  \
> > -memcpy(ypy, ycy, sizeof(float) * width);\
> > -memcpy(ypf, ycf, sizeof(float) * width);\
> > +ypy = ycy;  \
> > +ypf = ycf;  \
> >  }   \
> >  \
> >  for (int i = 0; i < height; i++)\
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [PATCH] avfilter/vf_bilateral: remove useless memcpy

2019-10-30 Thread Tao Zhang
Paul B Mahol  wrote on Wed, Oct 30, 2019 at 4:35 PM:
>
> Why do you think it is useless?
>
> Have you checked that the checksums match before and after?
I compared the md5sums and they were the same. Please point it out if I have misunderstood.
>
> On 10/30/19, leozhang  wrote:
> > Signed-off-by: leozhang 
> > ---
> >  libavfilter/vf_bilateral.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..ba3c6e1 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -277,8 +277,8 @@ static void bilateral_##name(BilateralContext *s, const uint8_t *ssrc, uint8_t *
> >  factor_++;  \
> >  }   \
> >  \
> > -memcpy(ypy, ycy, sizeof(float) * width);\
> > -memcpy(ypf, ycf, sizeof(float) * width);\
> > +ypy = ycy;  \
> > +ypf = ycf;  \
> >  }   \
> >  \
> >  for (int i = 0; i < height; i++)\
> > --
> > 1.8.3.1
> >


Re: [FFmpeg-devel] [PATCH V2] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-10-24 Thread Tao Zhang
Paul B Mahol  wrote on Thu, Oct 24, 2019 at 3:42 PM:
>
> Still not OK; you are using the old process_command handling, which is pointless now.
Right. I will submit a new version. Thanks a lot.
>
> On 10/24/19, leozhang  wrote:
> > Reviewed-by: Paul B Mahol 
> > Reviewed-by: Jun Zhao 
> > Signed-off-by: leozhang 
> > ---
> >  libavfilter/vf_bilateral.c | 60
> > +++---
> >  1 file changed, 46 insertions(+), 14 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..87843eb 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -29,6 +29,8 @@
> >  #include "internal.h"
> >  #include "video.h"
> >
> > +#include 
> > +
> >  typedef struct BilateralContext {
> >  const AVClass *class;
> >
> > @@ -54,7 +56,7 @@ typedef struct BilateralContext {
> >  } BilateralContext;
> >
> >  #define OFFSET(x) offsetof(BilateralContext, x)
> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > +#define FLAGS
> > AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >
> >  static const AVOption bilateral_options[] = {
> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT,
> > {.dbl=0.1}, 0.0,  10, FLAGS },
> > @@ -91,19 +93,27 @@ static int query_formats(AVFilterContext *ctx)
> >  return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> >  }
> >
> > -static int config_input(AVFilterLink *inlink)
> > +static int init_lut(BilateralContext *s)
> >  {
> > -BilateralContext *s = inlink->dst->priv;
> > -const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> >  float inv_sigma_range;
> >
> > -s->depth = desc->comp[0].depth;
> >  inv_sigma_range = 1.0f / (s->sigmaR * ((1 << s->depth) - 1));
> >
> >  //compute a lookup table
> >  for (int i = 0; i < (1 << s->depth); i++)
> >  s->range_table[i] = expf(-i * inv_sigma_range);
> >
> > +return 0;
> > +}
> > +
> > +static int config_input(AVFilterLink *inlink)
> > +{
> > +BilateralContext *s = inlink->dst->priv;
> > +const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> > +
> > +s->depth = desc->comp[0].depth;
> > +init_lut(s);
> > +
> >  s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w,
> > desc->log2_chroma_w);
> >  s->planewidth[0] = s->planewidth[3] = inlink->w;
> >  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h,
> > desc->log2_chroma_h);
> > @@ -325,6 +335,27 @@ static int filter_frame(AVFilterLink *inlink, AVFrame
> > *in)
> >  return ff_filter_frame(outlink, out);
> >  }
> >
> > +static int process_command(AVFilterContext *ctx, const char *cmd, const
> > char *args,
> > +   char *res, int res_len, int flags)
> > +{
> > +BilateralContext *s = ctx->priv;
> > +int ret = 0;
> > +
> > +if (   !strcmp(cmd, "sigmaS") || !(strcmp(cmd, "sigmaR"))
> > +|| !strcmp(cmd, "planes")) {
> > +float old_sigmaR = s->sigmaR;
> > +
> > +ret = av_opt_set(s, cmd, args, 0);
> > +if (!ret && fabs(old_sigmaR - s->sigmaR) > FLT_EPSILON && (ret =
> > init_lut(s)) < 0) {
> > +s->sigmaR = old_sigmaR;
> > +}
> > +} else {
> > +ret = AVERROR(ENOSYS);
> > +}
> > +
> > +return ret;
> > +}
> > +
> >  static av_cold void uninit(AVFilterContext *ctx)
> >  {
> >  BilateralContext *s = ctx->priv;
> > @@ -358,13 +389,14 @@ static const AVFilterPad bilateral_outputs[] = {
> >  };
> >
> >  AVFilter ff_vf_bilateral = {
> > -.name  = "bilateral",
> > -.description   = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > -.priv_size = sizeof(BilateralContext),
> > -.priv_class= &bilateral_class,
> > -.uninit= uninit,
> > -.query_formats = query_formats,
> > -.inputs= bilateral_inputs,
> > -.outputs   = bilateral_outputs,
> > -.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.name= "bilateral",
> > +.description = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > +.priv_size   = sizeof(BilateralContext),
> > +.priv_class  = &bilateral_class,
> > +.uninit  = uninit,
> > +.query_formats   = query_formats,
> > +.inputs  = bilateral_inputs,
> > +.outputs = bilateral_outputs,
> > +.flags   = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.process_command = process_command,
> >  };
> > --
> > 1.8.3.1
> >
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >
> > To unsubscribe, visit link above, or email
> > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-10-23 Thread Tao Zhang
Paul B Mahol wrote on Wed, Oct 23, 2019 at 9:47 PM:
>
> Not ok, range sigma is used to change array values you never update here.
Right, I will submit a new version which tries to call config_props()
when the user sets range sigma at runtime. Thanks a lot.
>
> On 10/23/19, leozhang  wrote:
> > ---
> >  libavfilter/vf_bilateral.c | 21 +++--
> >  1 file changed, 11 insertions(+), 10 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..a06f434 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -54,7 +54,7 @@ typedef struct BilateralContext {
> >  } BilateralContext;
> >
> >  #define OFFSET(x) offsetof(BilateralContext, x)
> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > +#define FLAGS
> > AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >
> >  static const AVOption bilateral_options[] = {
> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT,
> > {.dbl=0.1}, 0.0,  10, FLAGS },
> > @@ -358,13 +358,14 @@ static const AVFilterPad bilateral_outputs[] = {
> >  };
> >
> >  AVFilter ff_vf_bilateral = {
> > -.name  = "bilateral",
> > -.description   = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > -.priv_size = sizeof(BilateralContext),
> > -.priv_class= &bilateral_class,
> > -.uninit= uninit,
> > -.query_formats = query_formats,
> > -.inputs= bilateral_inputs,
> > -.outputs   = bilateral_outputs,
> > -.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.name= "bilateral",
> > +.description = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > +.priv_size   = sizeof(BilateralContext),
> > +.priv_class  = &bilateral_class,
> > +.uninit  = uninit,
> > +.query_formats   = query_formats,
> > +.inputs  = bilateral_inputs,
> > +.outputs = bilateral_outputs,
> > +.flags   = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.process_command = ff_filter_process_command,
> >  };
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [PATCH] avfilter/vf_bilateral: process command to set the parameter at runtime

2019-10-23 Thread Tao Zhang
myp...@gmail.com wrote on Wed, Oct 23, 2019 at 8:50 PM:
>
> On Wed, Oct 23, 2019 at 8:34 PM leozhang  wrote:
> >
> > ---
> >  libavfilter/vf_bilateral.c | 21 +++--
> >  1 file changed, 11 insertions(+), 10 deletions(-)
> >
> > diff --git a/libavfilter/vf_bilateral.c b/libavfilter/vf_bilateral.c
> > index 3c9d800..a06f434 100644
> > --- a/libavfilter/vf_bilateral.c
> > +++ b/libavfilter/vf_bilateral.c
> > @@ -54,7 +54,7 @@ typedef struct BilateralContext {
> >  } BilateralContext;
> >
> >  #define OFFSET(x) offsetof(BilateralContext, x)
> > -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> > +#define FLAGS 
> > AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >
> >  static const AVOption bilateral_options[] = {
> >  { "sigmaS", "set spatial sigma",OFFSET(sigmaS), AV_OPT_TYPE_FLOAT, 
> > {.dbl=0.1}, 0.0,  10, FLAGS },
> > @@ -358,13 +358,14 @@ static const AVFilterPad bilateral_outputs[] = {
> >  };
> >
> >  AVFilter ff_vf_bilateral = {
> > -.name  = "bilateral",
> > -.description   = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > -.priv_size = sizeof(BilateralContext),
> > -.priv_class= &bilateral_class,
> > -.uninit= uninit,
> > -.query_formats = query_formats,
> > -.inputs= bilateral_inputs,
> > -.outputs   = bilateral_outputs,
> > -.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> I can't find a special reason to change above part
It keeps the equal signs aligned. Thanks a lot.
> > +.name= "bilateral",
> > +.description = NULL_IF_CONFIG_SMALL("Apply Bilateral filter."),
> > +.priv_size   = sizeof(BilateralContext),
> > +.priv_class  = &bilateral_class,
> > +.uninit  = uninit,
> > +.query_formats   = query_formats,
> > +.inputs  = bilateral_inputs,
> > +.outputs = bilateral_outputs,
> > +.flags   = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> > +.process_command = ff_filter_process_command,
> >  };
> > --
> > 1.8.3.1
> >

Re: [FFmpeg-devel] [v3] avformat/flvdec: delete unused code

2019-08-22 Thread Tao Zhang
Michael Niedermayer wrote on Sun, Aug 18, 2019 at 5:49 PM:
>
> On Wed, Aug 14, 2019 at 11:07:18AM +0800, leozhang wrote:
> > Reviewed-by: Carl Eugen Hoyos 
> > Signed-off-by: leozhang 
> > ---
> >  libavformat/flvdec.c | 17 -
> >  1 file changed, 17 deletions(-)
>
> probably ok
Hi Michael, sorry to bother you while you're busy; is this ready to be
pushed?
>
> thx
> [...]
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> it is not once nor twice but times without number that the same ideas make
> their appearance in the world. -- Aristotle

Re: [FFmpeg-devel] [v3] avformat/flvdec: delete unused code

2019-08-16 Thread Tao Zhang
ping.

leozhang wrote on Wed, Aug 14, 2019 at 11:07 AM:
>
> Reviewed-by: Carl Eugen Hoyos 
> Signed-off-by: leozhang 
> ---
>  libavformat/flvdec.c | 17 -
>  1 file changed, 17 deletions(-)
>
> diff --git a/libavformat/flvdec.c b/libavformat/flvdec.c
> index b531a39..6bfe624 100644
> --- a/libavformat/flvdec.c
> +++ b/libavformat/flvdec.c
> @@ -32,7 +32,6 @@
>  #include "libavutil/mathematics.h"
>  #include "libavutil/time_internal.h"
>  #include "libavcodec/bytestream.h"
> -#include "libavcodec/mpeg4audio.h"
>  #include "avformat.h"
>  #include "internal.h"
>  #include "avio_internal.h"
> @@ -1265,22 +1264,6 @@ retry_duration:
>  if (st->codecpar->codec_id == AV_CODEC_ID_AAC && t && 
> !strcmp(t->value, "Omnia A/XE"))
>  st->codecpar->extradata_size = 2;
>
> -if (st->codecpar->codec_id == AV_CODEC_ID_AAC && 0) {
> -MPEG4AudioConfig cfg;
> -
> -if (avpriv_mpeg4audio_get_config(&cfg, 
> st->codecpar->extradata,
> - 
> st->codecpar->extradata_size * 8, 1) >= 0) {
> -st->codecpar->channels   = cfg.channels;
> -st->codecpar->channel_layout = 0;
> -if (cfg.ext_sample_rate)
> -st->codecpar->sample_rate = cfg.ext_sample_rate;
> -else
> -st->codecpar->sample_rate = cfg.sample_rate;
> -av_log(s, AV_LOG_TRACE, "mp4a config channels %d sample rate 
> %d\n",
> -st->codecpar->channels, st->codecpar->sample_rate);
> -}
> -}
> -
>  ret = FFERROR_REDO;
>  goto leave;
>  }
> --
> 1.8.3.1
>

Re: [FFmpeg-devel] avformat/flvdec: delete unused code

2019-08-13 Thread Tao Zhang
Carl Eugen Hoyos wrote on Tue, Aug 13, 2019 at 5:37 PM:
>
> Am Di., 13. Aug. 2019 um 09:06 Uhr schrieb leozhang :
> >
> > Signed-off-by: leozhang 
> > ---
> >  libavformat/flvdec.c | 16 
> >  1 file changed, 16 deletions(-)
> >
> > diff --git a/libavformat/flvdec.c b/libavformat/flvdec.c
> > index b531a39..4e8faed 100644
> > --- a/libavformat/flvdec.c
> > +++ b/libavformat/flvdec.c
> > @@ -1265,22 +1265,6 @@ retry_duration:
> >  if (st->codecpar->codec_id == AV_CODEC_ID_AAC && t && 
> > !strcmp(t->value, "Omnia A/XE"))
> >  st->codecpar->extradata_size = 2;
> >
> > -if (st->codecpar->codec_id == AV_CODEC_ID_AAC && 0) {
> > -MPEG4AudioConfig cfg;
> > -
> > -if (avpriv_mpeg4audio_get_config(&cfg, 
> > st->codecpar->extradata,
> > - 
> > st->codecpar->extradata_size * 8, 1) >= 0) {
> > -st->codecpar->channels   = cfg.channels;
> > -st->codecpar->channel_layout = 0;
> > -if (cfg.ext_sample_rate)
> > -st->codecpar->sample_rate = cfg.ext_sample_rate;
> > -else
> > -st->codecpar->sample_rate = cfg.sample_rate;
> > -av_log(s, AV_LOG_TRACE, "mp4a config channels %d sample 
> > rate %d\n",
> > -st->codecpar->channels, st->codecpar->sample_rate);
> > -}
> > -}
> > -
> >  ret = FFERROR_REDO;
> >  goto leave;
> >  }
>
> You forgot to remove '#include "libavcodec/mpeg4audio.h"'.
Thanks. Will send a new patch which removes this header.
>
> In case anybody is interested, here is the original thread:
> https://ffmpeg.org/pipermail/ffmpeg-devel/2009-February/065799.html
>
> Carl Eugen

Re: [FFmpeg-devel] dash: add descriptor which is useful to the scheme defined by ISO/IEC 23009-1:2014/Amd.2:2015.

2019-07-16 Thread Tao Zhang
Jeyapal, Karthick wrote on Wed, Jul 17, 2019 at 10:46 AM:
>
>
> On 7/15/19 8:41 AM, leozhang wrote:
> > change history:
> > 1. remove unnecessary cast.
> > 2. add some braces.
> >
> > Please comment, Thanks
> Thanks for sending the patch. Please find some of my comments inlined below.
Thanks for your comments. I made some changes below. Please review it, thanks.
> >
> > Signed-off-by: leozhang 
> > ---
> >  doc/muxers.texi   |  3 +++
> >  libavformat/dashenc.c | 35 ---
> >  2 files changed, 35 insertions(+), 3 deletions(-)
> >
> > diff --git a/doc/muxers.texi b/doc/muxers.texi
> > index b109297..ac06ad2 100644
> > --- a/doc/muxers.texi
> > +++ b/doc/muxers.texi
> > @@ -275,6 +275,9 @@ of the adaptation sets and a,b,c,d and e are the 
> > indices of the mapped streams.
> >  To map all video (or audio) streams to an AdaptationSet, "v" (or "a") can 
> > be used as stream identifier instead of IDs.
> >
> >  When no assignment is defined, this defaults to an AdaptationSet for each 
> > stream.
> > +
> > +Optional syntax is "id=x,descriptor=descriptor_str,streams=a,b,c 
> > id=y,streams=d,e" and so on, descriptor is useful to the scheme defined by 
> > ISO/IEC 23009-1:2014/Amd.2:2015.
> > +And descriptor_str must be a properly formatted XML element, which is 
> > encoded by base64.
> Two comments:
> 1. Please provide an example here. So that it is easier for people to 
> understand
For an example of using the descriptor in a VR tiled video application,
this short video
https://www.hhi.fraunhofer.de/en/departments/vca/research-groups/multimedia-communications/research-topics/mpeg-omaf.html
is more intuitive than a textual description.
Then, how does DASH pack media data in tile-based streaming?
Referring to ISO/IEC 23009-1:2014/Amd.2:2015, for example, the descriptor
string <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="0,0,0,1,1,2,2"/>
indicates that the AdaptationSet is the top-left tile of the full video
divided into 2x2 tiles.
Finally, how do we use the FFmpeg DASH muxer to generate tile-based streaming?
We split the video into NxN tiles and insert the descriptor syntax into
the corresponding AdaptationSet in the MPD.
For example, the pseudo ffmpeg command {-adaptation_sets
"id=0,descriptor=PFN1cHBsZW1lbnRhbFByb3BlcnR5IHNjaGVtZUlkVXJpPSJ1cm46bXBlZzpkYXNoOnNyZDoyMDE0IiB2YWx1ZT0iMCwwLDAsMSwxLDIsMiIvPg==,streams=v"}
will
insert a descriptor string like the one below into the MPD:

<AdaptationSet ...>
  <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="0,0,0,1,1,2,2"/>
  ...
</AdaptationSet>
In addition to VR applications, a zoomed part of the video can also be
indicated by a descriptor.
> 2. Why do we need this to be base64 encoded? What is the use-case where a 
> normal string doesn't work?
The parser code uses comma and space as separators. An unencoded
descriptor string like <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="0,0,0,1,1,2,2"/>
contains commas and spaces, which would disturb the normal parse result.
> >  @item timeout @var{timeout}
> >  Set timeout for socket I/O operations. Applicable only for HTTP output.
> >  @item index_correction @var{index_correction}
> > diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
> > index b25afb4..a48031c 100644
> > --- a/libavformat/dashenc.c
> > +++ b/libavformat/dashenc.c
> > @@ -34,6 +34,7 @@
> >  #include "libavutil/rational.h"
> >  #include "libavutil/time.h"
> >  #include "libavutil/time_internal.h"
> > +#include "libavutil/base64.h"
> >
> >  #include "avc.h"
> >  #include "avformat.h"
> > @@ -68,6 +69,7 @@ typedef struct Segment {
> >
> >  typedef struct AdaptationSet {
> >  char id[10];
> > +char descriptor[1024];
> Please change this char * and allocate it dynamically. I understand there are 
> some legacy code in dashenc using this 1024 length.
> But at least new code should follow dynamic allocation.
Agree, will fix it
> >  enum AVMediaType media_type;
> >  AVDictionary *metadata;
> >  AVRational min_frame_rate, max_frame_rate;
> > @@ -748,7 +750,8 @@ static int write_adaptation_set(AVFormatContext *s, 
> > AVIOContext *out, int as_ind
> >  role = av_dict_get(as->metadata, "role", NULL, 0);
> >  if (role)
> >  avio_printf(out, "\t\t\t<Role schemeIdUri=\"urn:mpeg:dash:role:2011\" value=\"%s\"/>\n", role->value);
> > -
> > +if (strlen(as->descriptor))
> > +avio_printf(out, "\t\t\t%s\n", as->descriptor);
> >  for (i = 0; i < s->nb_streams; i++) {
> >  OutputStream *os = &s->streams[i];
> >  char bandwidth_str[64] = {'\0'};
> > @@ -820,7 +823,7 @@ static int parse_adaptation_sets(AVFormatContext *s)
> >  {
> >  DASHContext *c = s->priv_data;
> >  const char *p = c->adaptation_sets;
> > -enum { new_set, parse_id, parsing_streams } state;
> > +enum { new_set, parse_id, parsing_streams, parse_descriptor } state;
> >  AdaptationSet *as;
> >  int i, n, ret;
> >
> > @@ -837,6 +840,9 @@ static int parse_adaptation_sets(AVFormatContext *s)
> >  }
> >
> >  // syntax id=0,streams=0,1,2 id=1,streams=3,4 and so on
> > +// option id=0,descriptor=descriptor_str,streams=0,1,2 and so on
> > +// descriptor is useful to the scheme defined by ISO/IEC 
> > 23009-1:2014/Amd.2:2015
> > +// descriptor_str must be a properly 

Re: [FFmpeg-devel] dash: add descriptor which is useful to the scheme defined by ISO/IEC 23009-1:2014/Amd.2:2015.

2019-07-16 Thread Tao Zhang
Let me add that the descriptor provides extensible syntax and semantics
for describing Adaptation Set properties.
In my scenario, I implemented a VR tiled video system using the descriptor.

leozhang wrote on Mon, Jul 15, 2019 at 11:11 AM:
>
> change history:
> 1. remove unnecessary cast.
> 2. add some braces.
>
> Please comment, Thanks
>
> Signed-off-by: leozhang 
> ---
>  doc/muxers.texi   |  3 +++
>  libavformat/dashenc.c | 35 ---
>  2 files changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/doc/muxers.texi b/doc/muxers.texi
> index b109297..ac06ad2 100644
> --- a/doc/muxers.texi
> +++ b/doc/muxers.texi
> @@ -275,6 +275,9 @@ of the adaptation sets and a,b,c,d and e are the indices 
> of the mapped streams.
>  To map all video (or audio) streams to an AdaptationSet, "v" (or "a") can be 
> used as stream identifier instead of IDs.
>
>  When no assignment is defined, this defaults to an AdaptationSet for each 
> stream.
> +
> +Optional syntax is "id=x,descriptor=descriptor_str,streams=a,b,c 
> id=y,streams=d,e" and so on, descriptor is useful to the scheme defined by 
> ISO/IEC 23009-1:2014/Amd.2:2015.
> +And descriptor_str must be a properly formatted XML element, which is 
> encoded by base64.
>  @item timeout @var{timeout}
>  Set timeout for socket I/O operations. Applicable only for HTTP output.
>  @item index_correction @var{index_correction}
> diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
> index b25afb4..a48031c 100644
> --- a/libavformat/dashenc.c
> +++ b/libavformat/dashenc.c
> @@ -34,6 +34,7 @@
>  #include "libavutil/rational.h"
>  #include "libavutil/time.h"
>  #include "libavutil/time_internal.h"
> +#include "libavutil/base64.h"
>
>  #include "avc.h"
>  #include "avformat.h"
> @@ -68,6 +69,7 @@ typedef struct Segment {
>
>  typedef struct AdaptationSet {
>  char id[10];
> +char descriptor[1024];
>  enum AVMediaType media_type;
>  AVDictionary *metadata;
>  AVRational min_frame_rate, max_frame_rate;
> @@ -748,7 +750,8 @@ static int write_adaptation_set(AVFormatContext *s, 
> AVIOContext *out, int as_ind
>  role = av_dict_get(as->metadata, "role", NULL, 0);
>  if (role)
>  avio_printf(out, "\t\t\t<Role schemeIdUri=\"urn:mpeg:dash:role:2011\" value=\"%s\"/>\n", role->value);
> -
> +if (strlen(as->descriptor))
> +avio_printf(out, "\t\t\t%s\n", as->descriptor);
>  for (i = 0; i < s->nb_streams; i++) {
>  OutputStream *os = &s->streams[i];
>  char bandwidth_str[64] = {'\0'};
> @@ -820,7 +823,7 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  {
>  DASHContext *c = s->priv_data;
>  const char *p = c->adaptation_sets;
> -enum { new_set, parse_id, parsing_streams } state;
> +enum { new_set, parse_id, parsing_streams, parse_descriptor } state;
>  AdaptationSet *as;
>  int i, n, ret;
>
> @@ -837,6 +840,9 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  }
>
>  // syntax id=0,streams=0,1,2 id=1,streams=3,4 and so on
> +// option id=0,descriptor=descriptor_str,streams=0,1,2 and so on
> +// descriptor is useful to the scheme defined by ISO/IEC 
> 23009-1:2014/Amd.2:2015
> +// descriptor_str must be a properly formatted XML element, encoded by 
> base64.
>  state = new_set;
>  while (*p) {
>  if (*p == ' ') {
> @@ -854,7 +860,30 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  if (*p)
>  p++;
>  state = parse_id;
> -} else if (state == parse_id && av_strstart(p, "streams=", &p)) {
> +} else if (state == parse_id && av_strstart(p, "descriptor=", &p)) {
> +char *encode_str, *decode_str;
> +int decode_size, ret;
> +
> +n = strcspn(p, ",");
> +encode_str = av_strndup(p, n);
> +decode_size = AV_BASE64_DECODE_SIZE(n);
> +decode_str = av_mallocz(decode_size);
> +if (decode_str) {
> +ret = av_base64_decode(decode_str, encode_str, decode_size);
> +if (ret >= 0)
> +snprintf(as->descriptor, sizeof(as->descriptor), "%.*s", 
> decode_size, decode_str);
> +else
> +av_log(s, AV_LOG_WARNING, "descriptor string is invalid 
> base64 encode\n");
> +} else {
> +av_log(s, AV_LOG_WARNING, "av_mallocz failed, will not parse 
> descriptor\n");
> +}
> +p += n;
> +if (*p)
> +p++;
> +state = parse_descriptor;
> +av_freep(&encode_str);
> +av_freep(&decode_str);
> +} else if ((state == parse_id || state == parse_descriptor) && 
> av_strstart(p, "streams=", &p)) { //descriptor is optional
>  state = parsing_streams;
>  } else if (state == parsing_streams) {
>  AdaptationSet *as = &c->as[c->nb_as - 1];
> --
> 1.8.3.1
>

Re: [FFmpeg-devel] dash: add descriptor which is useful to the scheme defined by ISO/IEC 23009-1:2014/Amd.2:2015.

2019-07-14 Thread Tao Zhang
ping?

leozhang wrote on Fri, Jul 12, 2019 at 4:31 PM:
>
>  Reference ISO/IEC 23009-1:2014/Amd.2:2015, a spatial relationship descriptor 
> is defined as a spatial part of a content component (e.g. a region of 
> interest, or a tile)
>  and represented by either an Adaptation Set or a Sub-Representation.
>
> Signed-off-by: leozhang 
> ---
>  doc/muxers.texi   |  3 +++
>  libavformat/dashenc.c | 36 +---
>  2 files changed, 36 insertions(+), 3 deletions(-)
>
> diff --git a/doc/muxers.texi b/doc/muxers.texi
> index b109297..ac06ad2 100644
> --- a/doc/muxers.texi
> +++ b/doc/muxers.texi
> @@ -275,6 +275,9 @@ of the adaptation sets and a,b,c,d and e are the indices 
> of the mapped streams.
>  To map all video (or audio) streams to an AdaptationSet, "v" (or "a") can be 
> used as stream identifier instead of IDs.
>
>  When no assignment is defined, this defaults to an AdaptationSet for each 
> stream.
> +
> +Optional syntax is "id=x,descriptor=descriptor_str,streams=a,b,c 
> id=y,streams=d,e" and so on, descriptor is useful to the scheme defined by 
> ISO/IEC 23009-1:2014/Amd.2:2015.
> +And descriptor_str must be a properly formatted XML element, which is 
> encoded by base64.
>  @item timeout @var{timeout}
>  Set timeout for socket I/O operations. Applicable only for HTTP output.
>  @item index_correction @var{index_correction}
> diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
> index b25afb4..f7ebb1f 100644
> --- a/libavformat/dashenc.c
> +++ b/libavformat/dashenc.c
> @@ -34,6 +34,7 @@
>  #include "libavutil/rational.h"
>  #include "libavutil/time.h"
>  #include "libavutil/time_internal.h"
> +#include "libavutil/base64.h"
>
>  #include "avc.h"
>  #include "avformat.h"
> @@ -68,6 +69,7 @@ typedef struct Segment {
>
>  typedef struct AdaptationSet {
>  char id[10];
> +char descriptor[1024];
>  enum AVMediaType media_type;
>  AVDictionary *metadata;
>  AVRational min_frame_rate, max_frame_rate;
> @@ -748,7 +750,8 @@ static int write_adaptation_set(AVFormatContext *s, 
> AVIOContext *out, int as_ind
>  role = av_dict_get(as->metadata, "role", NULL, 0);
>  if (role)
>  avio_printf(out, "\t\t\t<Role schemeIdUri=\"urn:mpeg:dash:role:2011\" value=\"%s\"/>\n", role->value);
> -
> +if (strlen(as->descriptor))
> +avio_printf(out, "\t\t\t%s\n", as->descriptor);
>  for (i = 0; i < s->nb_streams; i++) {
>  OutputStream *os = &s->streams[i];
>  char bandwidth_str[64] = {'\0'};
> @@ -820,7 +823,7 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  {
>  DASHContext *c = s->priv_data;
>  const char *p = c->adaptation_sets;
> -enum { new_set, parse_id, parsing_streams } state;
> +enum { new_set, parse_id, parsing_streams, parse_descriptor } state;
>  AdaptationSet *as;
>  int i, n, ret;
>
> @@ -837,6 +840,9 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  }
>
>  // syntax id=0,streams=0,1,2 id=1,streams=3,4 and so on
> +// option id=0,descriptor=descriptor_str,streams=0,1,2 and so on
> +// descriptor is useful to the scheme defined by ISO/IEC 
> 23009-1:2014/Amd.2:2015
> +// descriptor_str must be a properly formatted XML element, encoded by 
> base64.
>  state = new_set;
>  while (*p) {
>  if (*p == ' ') {
> @@ -854,7 +860,31 @@ static int parse_adaptation_sets(AVFormatContext *s)
>  if (*p)
>  p++;
>  state = parse_id;
> -} else if (state == parse_id && av_strstart(p, "streams=", &p)) {
> +} else if (state == parse_id && av_strstart(p, "descriptor=", &p)) {
> +char *encode_str;
> +uint8_t *decode_str;
> +int decode_size, ret;
> +
> +n = strcspn(p, ",");
> +encode_str = av_strndup(p, n);
> +decode_size = AV_BASE64_DECODE_SIZE(n);
> +decode_str = (uint8_t *)av_mallocz(decode_size);
> +if (decode_str) {
> +ret = av_base64_decode(decode_str, encode_str, decode_size);
> +if (ret >= 0)
> +snprintf(as->descriptor, sizeof(as->descriptor), "%.*s", 
> decode_size, decode_str);
> +else
> +av_log(s, AV_LOG_WARNING, "descriptor string is invalid 
> base64 encode\n");
> +} else
> +av_log(s, AV_LOG_WARNING, "av_mallocz failed, will not parse 
> descriptor\n");
> +
> +p += n;
> +if (*p)
> +p++;
> +state = parse_descriptor;
> +av_freep(&encode_str);
> +av_freep(&decode_str);
> +} else if ((state == parse_id || state == parse_descriptor) && 
> av_strstart(p, "streams=", &p)) { //descriptor is optional
>  state = parsing_streams;
>  } else if (state == parsing_streams) {
>  AdaptationSet *as = >as[c->nb_as - 1];
> --
> 1.8.3.1
>

Re: [FFmpeg-devel] [PATCH][FFmpeg-devel v2] Add GPU accelerated video crop filter

2019-03-25 Thread Tao Zhang
Timo Rothenpieler wrote on Mon, Mar 25, 2019 at 6:31 PM:
>
> On 25/03/2019 09:27, Tao Zhang wrote:
> >>> Hi,
> >>>
> >>> Timo and Mark and I have been discussing this, and we think the right
> >>> thing to do is add support to vf_scale_cuda to respect the crop
> >>> properties on an input AVFrame. Mark posted a patch to vf_crop to
> >>> ensure that the properties are set, and then the scale filter should
> >>> respect those properties if they are set. You can look at
> >>> vf_scale_vaapi for how the properties are read, but they will require
> >>> explicit handling to adjust the src dimensions passed to the scale
> >>> filter.
> > Maybe a little unintuitive to users.
> >>>
> >>> This will be a more efficient way of handling crops, in terms of total
> >>> lines of code and also allowing crop/scale with one less copy.
> >>>
> >>> I know this is quite different from the approach you've taken here, and
> >>> we appreciate the work you've done, but it should be better overall to
> >>> implement this integrated method.
> >> Hi Philip,
> >>
> >> Glad to hear you guys had discussion on this. As I am also considering the 
> >> problem, I have some questions about your idea.
> >> So, what if user did not insert a scale_cuda after crop filter? Do you 
> >> plan to automatically insert scale_cuda or just ignore the crop?
> >> What if user want to do crop,transpose_cuda,scale_cuda? So we also need to 
> >> handle crop inside transpose_cuda filter?
>  >
> > I have the same question.
> Ideally, scale_cuda should be auto-inserted at the required places once
> it works that way.
> Otherwise it seems pointless to me if the user still has to manually
> insert it after the generic filters setting metadata.
>
> For that reason it should also still support getting its parameters
> passed directly as a fallback, and potentially even expose multiple
> filter names, so crop_cuda and transpose_cuda are still visible, but
> ultimately point to the same filter code.
>
> We have a transpose_npp, right now, but with libnpp slowly being on its
> way out, transpose_cuda is needed, and ultimately even a format_cuda
> filter, since right now scale_npp is the only filter that can convert
> pixel formats on the hardware.
> I'd also like to see scale_cuda to support a few more interpolation
> algorithms, but that's not very important for now.
>
> All this functionality can be in the same filter, which is scale_cuda.
> The point of that is that it avoids needless expensive frame copies as
> much as possible.
I think a frame copy is very low cost on CUDA. Maybe the better way is
to create common functions for the repeated CUDA-related code, but keep
the per-filter processing separate.
For example, if someone wants to write a bilateral filter with CUDA,
they should only need to focus on the filtering itself. True, scale_cuda
can crop, but it would be hard for it to do bilateral filtering.
Thank you all for the discussion. If I'm wrong, feel free to point it out.
>

Re: [FFmpeg-devel] [PATCH][FFmpeg-devel v2] Add GPU accelerated video crop filter

2019-03-25 Thread Tao Zhang
Song, Ruiling wrote on Mon, Mar 25, 2019 at 3:26 PM:
>
>
>
> > -Original Message-
> > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> > Philip Langdale via ffmpeg-devel
> > Sent: Monday, March 25, 2019 12:57 PM
> > To: FFmpeg development discussions and patches 
> > Cc: Philip Langdale 
> > Subject: Re: [FFmpeg-devel] [PATCH][FFmpeg-devel v2] Add GPU accelerated
> > video crop filter
> >
> > On Sat, 23 Mar 2019 23:51:10 +0800
> > UsingtcNower  wrote:
> >
> > > Signed-off-by: UsingtcNower 
> > > ---
> > >  Changelog   |   1 +
> > >  configure   |   1 +
> > >  doc/filters.texi|  31 +++
> > >  libavfilter/Makefile|   1 +
> > >  libavfilter/allfilters.c|   1 +
> > >  libavfilter/version.h   |   2 +-
> > >  libavfilter/vf_crop_cuda.c  | 638
> > > 
> > > libavfilter/vf_crop_cuda.cu | 109  8 files changed, 783
> > > insertions(+), 1 deletion(-) create mode 100644
> > > libavfilter/vf_crop_cuda.c create mode 100644
> > > libavfilter/vf_crop_cuda.cu
> > >
> > > diff --git a/Changelog b/Changelog
> > > index ad7e82f..f224fc8 100644
> > > --- a/Changelog
> > > +++ b/Changelog
> > > @@ -20,6 +20,7 @@ version :
> > >  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
> > >  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
> > >  - removed libndi-newtek
> > > +- crop_cuda GPU accelerated video crop filter
> >
> > Hi,
> >
> > Timo and Mark and I have been discussing this, and we think the right
> > thing to do is add support to vf_scale_cuda to respect the crop
> > properties on an input AVFrame. Mark posted a patch to vf_crop to
> > ensure that the properties are set, and then the scale filter should
> > respect those properties if they are set. You can look at
> > vf_scale_vaapi for how the properties are read, but they will require
> > explicit handling to adjust the src dimensions passed to the scale
> > filter.
Maybe a little unintuitive to users.
> >
> > This will be a more efficient way of handling crops, in terms of total
> > lines of code and also allowing crop/scale with one less copy.
> >
> > I know this is quite different from the approach you've taken here, and
> > we appreciate the work you've done, but it should be better overall to
> > implement this integrated method.
> Hi Philip,
>
> Glad to hear you guys had discussion on this. As I am also considering the 
> problem, I have some questions about your idea.
> So, what if user did not insert a scale_cuda after crop filter? Do you plan 
> to automatically insert scale_cuda or just ignore the crop?
> What if user want to do crop,transpose_cuda,scale_cuda? So we also need to 
> handle crop inside transpose_cuda filter?
I have the same question.
> (looks like we do not have transpose_cuda right now, but this filter seems 
> needed if user want to do transpose job using cuda.)
>
> Thanks!
> Ruiling
> >
> > Thanks,
> >
> > >
> > >  version 4.1:
> > > diff --git a/configure b/configure
> > > index 331393f..3f3ac2f 100755
> > > --- a/configure
> > > +++ b/configure
> > > @@ -2973,6 +2973,7 @@ qsvvpp_select="qsv"
> > >  vaapi_encode_deps="vaapi"
> > >  v4l2_m2m_deps="linux_videodev2_h sem_timedwait"
> > >
> > > +crop_cuda_filter_deps="ffnvcodec cuda_nvcc"
> > >  hwupload_cuda_filter_deps="ffnvcodec"
> > >  scale_npp_filter_deps="ffnvcodec libnpp"
> > >  scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
> > > diff --git a/doc/filters.texi b/doc/filters.texi
> > > index 4ffb392..ee16a2d 100644
> > > --- a/doc/filters.texi
> > > +++ b/doc/filters.texi
> > > @@ -7415,6 +7415,37 @@ If the specified expression is not valid, it
> > > is kept at its current value.
> > >  @end table
> > >
> > > +@section crop_cuda
> > > +
> > > +Crop the input video to given dimensions, implemented in CUDA.
> > > +
> > > +It accepts the following parameters:
> > > +
> > > +@table @option
> > > +
> > > +@item w
> > > +The width of the output video. It defaults to @code{iw}.
> > > +This expression is evaluated only once during the filter
> > > +configuration.
> > > +
> > > +@item h
> > > +The height of the output video. It defaults to @code{ih}.
> > > +This expression is evaluated only once during the filter
> > > +configuration.
> > > +
> > > +@item x
> > > +The horizontal position, in the input video, of the left edge of the
> > > output +video. It defaults to @code{(in_w-out_w)/2}.
> > > +This expression is evaluated only once during the filter
> > > +configuration.
> > > +
> > > +@item y
> > > +The vertical position, in the input video, of the top edge of the
> > > output video. +It defaults to @code{(in_h-out_h)/2}.
> > > +This expression is evaluated only once during the filter
> > > +configuration.
> > > +@end table
> > > +
> > >  @section cropdetect
> > >
> > >  Auto-detect the crop size.
> > > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > > index fef6ec5..84df037 100644
> > > --- a/libavfilter/Makefile
> > > 

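For context, the crop_cuda filter documented in the patch above would be invoked along these lines. This is a hypothetical command, assuming the patch is applied and FFmpeg was built with --enable-cuda-nvcc on a machine with an NVIDIA GPU:

```shell
# Decode on the GPU, crop a 1280x720 window starting at (320,180)
# without leaving video memory, then encode with NVENC.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
       -vf "crop_cuda=w=1280:h=720:x=320:y=180" \
       -c:v h264_nvenc -c:a copy output.mp4
```

Omitting x and y would center the crop window, per the (in_w-out_w)/2 and (in_h-out_h)/2 defaults.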
Re: [FFmpeg-devel] [PATCH][FFmpeg-devel v2] Add GPU accelerated video crop filter

2019-03-23 Thread Tao Zhang
Got it. Thanks Steven

Steven Liu  wrote on Sun, Mar 24, 2019 at 9:00 AM:

>
>
> > On Mar 24, 2019, at 07:26, Tao Zhang  wrote:
> >
> > The corrected version. If there are no other comments or objections,
> could
> > this be pushed?
> Of course, but maybe this needs to wait for other reviewers; waiting about
> 24 hours may be better than pushing it now,
> because other reviewers may be busy or offline over the weekend.
>
> >
> > UsingtcNower  wrote on Sat, Mar 23, 2019 at 11:51 PM:
> >
> >> Signed-off-by: UsingtcNower 
> >> ---
> >> Changelog   |   1 +
> >> configure   |   1 +
> >> doc/filters.texi|  31 +++
> >> libavfilter/Makefile|   1 +
> >> libavfilter/allfilters.c|   1 +
> >> libavfilter/version.h   |   2 +-
> >> libavfilter/vf_crop_cuda.c  | 638
> >> 
> >> libavfilter/vf_crop_cuda.cu | 109 
> >> 8 files changed, 783 insertions(+), 1 deletion(-)
> >> create mode 100644 libavfilter/vf_crop_cuda.c
> >> create mode 100644 libavfilter/vf_crop_cuda.cu
> >>
> >> diff --git a/Changelog b/Changelog
> >> index ad7e82f..f224fc8 100644
> >> --- a/Changelog
> >> +++ b/Changelog
> >> @@ -20,6 +20,7 @@ version :
> >> - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
> >> - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
> >> - removed libndi-newtek
> >> +- crop_cuda GPU accelerated video crop filter
> >>
> >>
> >> version 4.1:
> >> diff --git a/configure b/configure
> >> index 331393f..3f3ac2f 100755
> >> --- a/configure
> >> +++ b/configure
> >> @@ -2973,6 +2973,7 @@ qsvvpp_select="qsv"
> >> vaapi_encode_deps="vaapi"
> >> v4l2_m2m_deps="linux_videodev2_h sem_timedwait"
> >>
> >> +crop_cuda_filter_deps="ffnvcodec cuda_nvcc"
> >> hwupload_cuda_filter_deps="ffnvcodec"
> >> scale_npp_filter_deps="ffnvcodec libnpp"
> >> scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
> >> diff --git a/doc/filters.texi b/doc/filters.texi
> >> index 4ffb392..ee16a2d 100644
> >> --- a/doc/filters.texi
> >> +++ b/doc/filters.texi
> >> @@ -7415,6 +7415,37 @@ If the specified expression is not valid, it is
> >> kept at its current
> >> value.
> >> @end table
> >>
> >> +@section crop_cuda
> >> +
> >> +Crop the input video to given dimensions, implemented in CUDA.
> >> +
> >> +It accepts the following parameters:
> >> +
> >> +@table @option
> >> +
> >> +@item w
> >> +The width of the output video. It defaults to @code{iw}.
> >> +This expression is evaluated only once during the filter
> >> +configuration.
> >> +
> >> +@item h
> >> +The height of the output video. It defaults to @code{ih}.
> >> +This expression is evaluated only once during the filter
> >> +configuration.
> >> +
> >> +@item x
> >> +The horizontal position, in the input video, of the left edge of the
> >> output
> >> +video. It defaults to @code{(in_w-out_w)/2}.
> >> +This expression is evaluated only once during the filter
> >> +configuration.
> >> +
> >> +@item y
> >> +The vertical position, in the input video, of the top edge of the
> output
> >> video.
> >> +It defaults to @code{(in_h-out_h)/2}.
> >> +This expression is evaluated only once during the filter
> >> +configuration.
> >> +@end table
> >> +
> >> @section cropdetect
> >>
> >> Auto-detect the crop size.
> >> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> >> index fef6ec5..84df037 100644
> >> --- a/libavfilter/Makefile
> >> +++ b/libavfilter/Makefile
> >> @@ -187,6 +187,7 @@ OBJS-$(CONFIG_COPY_FILTER)   +=
> >> vf_copy.o
> >> OBJS-$(CONFIG_COREIMAGE_FILTER)  += vf_coreimage.o
> >> OBJS-$(CONFIG_COVER_RECT_FILTER) += vf_cover_rect.o
> >> lavfutils.o
> >> OBJS-$(CONFIG_CROP_FILTER)   += vf_crop.o
> >> +OBJS-$(CONFIG_CROP_CUDA_FILTER)  += vf_crop_cuda.o
> >> vf_crop_cuda.ptx.o
> >> OBJS-$(CONFIG_CROPDETECT_FILTER) += vf_cropdetect.o
> >> OBJS-$(CONFIG_CUE_FILTER)+= f_cue.o
> >> OBJS-$(CONFIG_CURVES_FILTER) += 

Re: [FFmpeg-devel] [PATCH][FFmpeg-devel v2] Add GPU accelerated video crop filter

2019-03-23 Thread Tao Zhang
The corrected version. If there are no other comments or objections, could
this be pushed?

UsingtcNower  wrote on Sat, Mar 23, 2019 at 11:51 PM:

> Signed-off-by: UsingtcNower 
> ---
>  Changelog   |   1 +
>  configure   |   1 +
>  doc/filters.texi|  31 +++
>  libavfilter/Makefile|   1 +
>  libavfilter/allfilters.c|   1 +
>  libavfilter/version.h   |   2 +-
>  libavfilter/vf_crop_cuda.c  | 638
> 
>  libavfilter/vf_crop_cuda.cu | 109 
>  8 files changed, 783 insertions(+), 1 deletion(-)
>  create mode 100644 libavfilter/vf_crop_cuda.c
>  create mode 100644 libavfilter/vf_crop_cuda.cu
>
> diff --git a/Changelog b/Changelog
> index ad7e82f..f224fc8 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -20,6 +20,7 @@ version :
>  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
>  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
>  - removed libndi-newtek
> +- crop_cuda GPU accelerated video crop filter
>
>
>  version 4.1:
> diff --git a/configure b/configure
> index 331393f..3f3ac2f 100755
> --- a/configure
> +++ b/configure
> @@ -2973,6 +2973,7 @@ qsvvpp_select="qsv"
>  vaapi_encode_deps="vaapi"
>  v4l2_m2m_deps="linux_videodev2_h sem_timedwait"
>
> +crop_cuda_filter_deps="ffnvcodec cuda_nvcc"
>  hwupload_cuda_filter_deps="ffnvcodec"
>  scale_npp_filter_deps="ffnvcodec libnpp"
>  scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 4ffb392..ee16a2d 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -7415,6 +7415,37 @@ If the specified expression is not valid, it is
> kept at its current
>  value.
>  @end table
>
> +@section crop_cuda
> +
> +Crop the input video to given dimensions, implemented in CUDA.
> +
> +It accepts the following parameters:
> +
> +@table @option
> +
> +@item w
> +The width of the output video. It defaults to @code{iw}.
> +This expression is evaluated only once during the filter
> +configuration.
> +
> +@item h
> +The height of the output video. It defaults to @code{ih}.
> +This expression is evaluated only once during the filter
> +configuration.
> +
> +@item x
> +The horizontal position, in the input video, of the left edge of the
> output
> +video. It defaults to @code{(in_w-out_w)/2}.
> +This expression is evaluated only once during the filter
> +configuration.
> +
> +@item y
> +The vertical position, in the input video, of the top edge of the output
> video.
> +It defaults to @code{(in_h-out_h)/2}.
> +This expression is evaluated only once during the filter
> +configuration.
> +@end table
> +
>  @section cropdetect
>
>  Auto-detect the crop size.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index fef6ec5..84df037 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -187,6 +187,7 @@ OBJS-$(CONFIG_COPY_FILTER)   +=
> vf_copy.o
>  OBJS-$(CONFIG_COREIMAGE_FILTER)  += vf_coreimage.o
>  OBJS-$(CONFIG_COVER_RECT_FILTER) += vf_cover_rect.o
> lavfutils.o
>  OBJS-$(CONFIG_CROP_FILTER)   += vf_crop.o
> +OBJS-$(CONFIG_CROP_CUDA_FILTER)  += vf_crop_cuda.o
> vf_crop_cuda.ptx.o
>  OBJS-$(CONFIG_CROPDETECT_FILTER) += vf_cropdetect.o
>  OBJS-$(CONFIG_CUE_FILTER)+= f_cue.o
>  OBJS-$(CONFIG_CURVES_FILTER) += vf_curves.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index c51ae0f..550e545 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -175,6 +175,7 @@ extern AVFilter ff_vf_copy;
>  extern AVFilter ff_vf_coreimage;
>  extern AVFilter ff_vf_cover_rect;
>  extern AVFilter ff_vf_crop;
> +extern AVFilter ff_vf_crop_cuda;
>  extern AVFilter ff_vf_cropdetect;
>  extern AVFilter ff_vf_cue;
>  extern AVFilter ff_vf_curves;
> diff --git a/libavfilter/version.h b/libavfilter/version.h
> index c71282c..5aa95f4 100644
> --- a/libavfilter/version.h
> +++ b/libavfilter/version.h
> @@ -31,7 +31,7 @@
>
>  #define LIBAVFILTER_VERSION_MAJOR   7
>  #define LIBAVFILTER_VERSION_MINOR  48
> -#define LIBAVFILTER_VERSION_MICRO 100
> +#define LIBAVFILTER_VERSION_MICRO 101
>
>  #define LIBAVFILTER_VERSION_INT AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR,
> \
> LIBAVFILTER_VERSION_MINOR,
> \
> diff --git a/libavfilter/vf_crop_cuda.c b/libavfilter/vf_crop_cuda.c
> new file mode 100644
> index 000..fc6a2a6
> --- /dev/null
> +++ b/libavfilter/vf_crop_cuda.c
> @@ -0,0 +1,638 @@
> +/*
> +* Copyright (c) 2019, iQIYI CORPORATION. All rights reserved.
> +*
> +* Permission is hereby granted, free of charge, to any person obtaining a
> +* copy of this software and associated documentation files (the
> "Software"),
> +* to deal in the Software without restriction, including without
> limitation
> +* the rights to use, copy, modify, merge, publish, distribute, sublicense,
> +* and/or sell copies of the Software, 

Re: [FFmpeg-devel] [PATCH] Add GPU accelerated video crop filter

2019-03-23 Thread Tao Zhang
Done it. Thanks Steven and Timo

Timo Rothenpieler  wrote on Sat, Mar 23, 2019 at 10:52 PM:

> On 23.03.2019 14:46, Steven Liu wrote:
> > Documentation of the crop_cuda should be submitted together.
> >
> >
> > Thanks
> > Steven
> >
>
> True, forgot about that. Should be almost identical to that of the
> regular crop filter.
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] Add GPU accelerated video crop filter

2019-03-23 Thread Tao Zhang
Got it and corrected. Thanks Timo

Timo Rothenpieler  wrote on Sat, Mar 23, 2019 at 7:53 PM:

> On 23.03.2019 12:31, UsingtcNower wrote:
>  > diff --git a/configure b/configure
>  > index 331393f..88f1e91 100755
>  > --- a/configure
>  > +++ b/configure
>  > @@ -2978,6 +2978,7 @@ scale_npp_filter_deps="ffnvcodec libnpp"
>  >   scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
>  >   thumbnail_cuda_filter_deps="ffnvcodec cuda_nvcc"
>  >   transpose_npp_filter_deps="ffnvcodec libnpp"
>  > +crop_cuda_filter_deps="ffnvcodec cuda_nvcc"
>
> These are generally kept in alphabetical order.
>
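In other words, the new dependency line would slot in alphabetically among the existing CUDA/NPP entries, roughly like this (sketch based on the lines visible in the diff, not the full configure file):

```shell
crop_cuda_filter_deps="ffnvcodec cuda_nvcc"
hwupload_cuda_filter_deps="ffnvcodec"
scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
scale_npp_filter_deps="ffnvcodec libnpp"
thumbnail_cuda_filter_deps="ffnvcodec cuda_nvcc"
transpose_npp_filter_deps="ffnvcodec libnpp"
```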
> > +static av_cold int init_processing_chain(AVFilterContext *ctx, int
> in_width, int in_height,
> > + int out_width, int out_height,
> > + int left, int top)
> > +{
> > +CUDACropContext *s = ctx->priv;
> > +
> > +AVHWFramesContext *in_frames_ctx;
> > +
> > +enum AVPixelFormat in_format;
> > +enum AVPixelFormat out_format;
> > +int ret;
> > +
> > +/* check that we have a hw context */
> > +if (!ctx->inputs[0]->hw_frames_ctx) {
> > +av_log(ctx, AV_LOG_ERROR, "No hw context provided on input\n");
> > +return AVERROR(EINVAL);
> > +}
> > +in_frames_ctx =
> (AVHWFramesContext*)ctx->inputs[0]->hw_frames_ctx->data;
> > +in_format = in_frames_ctx->sw_format;
> > +out_format= (s->format == AV_PIX_FMT_NONE) ? in_format :
> s->format;
> > +
> > +if (!format_is_supported(in_format)) {
> > +av_log(ctx, AV_LOG_ERROR, "Unsupported input format: %s\n",
> > +   av_get_pix_fmt_name(in_format));
> > +return AVERROR(ENOSYS);
> > +}
> > +if (!format_is_supported(out_format)) {
> > +av_log(ctx, AV_LOG_ERROR, "Unsupported output format: %s\n",
> > +   av_get_pix_fmt_name(out_format));
> > +return AVERROR(ENOSYS);
> > +}
> > +
> > +if (in_width == out_width && in_height == out_height)
> > +s->passthrough = 1;
> > +
> > +s->in_fmt = in_format;
> > +s->out_fmt = out_format;
> > +
> > +s->planes_in[0].width   = in_width;
> > +s->planes_in[0].height  = in_height;
> > +s->planes_out[0].width  = out_width;
> > +s->planes_out[0].height = out_height;
> > +s->planes_in[0].left = left;
> > +s->planes_in[0].top = top;
> > +s->planes_out[0].left = 0;
> > +s->planes_out[0].top = 0;
>
> This is a nit, but why not align all of them?
>
> Also missing a version bump. I'd say bumping the lavfi micro version is enough.
>
>
> Otherwise this looks good to me. Will give it a test later, and I don't
> really see any reason not to merge this.
>