Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Priebe, Jason
On 9/23/16, Paul B Mahol <one...@gmail.com> wrote:

> On 9/28/16, Priebe, Jason <jpri...@cbcnewmedia.com> wrote:
>
> > If there's a better way to decode these still images without using
> > an intermediate temp file, please point me to it, and I'll make the
> > change.
> 
> Using avformat/avcodec calls to decode input from named pipe into AVFrame.

OK.  I was able to synthesize the code from ff_load_image() and the code 
from this example: 

http://www.ffmpeg.org/doxygen/trunk/doc_2examples_2avio_reading_8c-example.html

It took 130 lines of code to read an image from a memory buffer, and
about 40 of those lines are essentially duplicated from ff_load_image().

It seems like a function like this belongs in lavfutils rather than being
buried in a single filter.  And maybe those 40 lines could be shared between
this new function and the ff_load_image() function?
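For reference, the pattern synthesized from ff_load_image() and the avio_reading
example looks roughly like this.  This is a condensed sketch only: `struct membuf`,
`mem_read()`, and `decode_image_from_memory()` are illustrative names, not existing
FFmpeg API, and the decode tail is elided where it duplicates ff_load_image():

```c
/* Sketch: decode one encoded image (PNG/JPEG/...) straight from a memory
 * buffer into an AVFrame via a custom AVIOContext, instead of a temp file.
 * Assumes the FFmpeg development libraries; error handling abbreviated. */
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

struct membuf { const uint8_t *data; size_t size, pos; };

static int mem_read(void *opaque, uint8_t *buf, int buf_size)
{
    struct membuf *m = opaque;
    size_t left = m->size - m->pos;
    if (!left)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = left;
    memcpy(buf, m->data + m->pos, buf_size);
    m->pos += buf_size;
    return buf_size;
}

/* Decode enc/enc_size into *out. Returns 0 on success, <0 on error. */
int decode_image_from_memory(const uint8_t *enc, size_t enc_size, AVFrame **out)
{
    struct membuf m = { enc, enc_size, 0 };
    uint8_t *iobuf = av_malloc(4096);
    AVIOContext *avio = avio_alloc_context(iobuf, 4096, 0 /* read-only */,
                                           &m, mem_read, NULL, NULL);
    AVFormatContext *fmt = avformat_alloc_context();
    int ret;

    fmt->pb = avio;                      /* hand the probe our custom I/O */
    if ((ret = avformat_open_input(&fmt, NULL, NULL, NULL)) < 0)
        return ret;
    /* ...from here on this mirrors ff_load_image(): find the best stream,
     * open its decoder, read one packet, decode one frame into *out... */
    avformat_close_input(&fmt);
    av_freep(&avio->buffer);
    av_freep(&avio);
    return 0;
}
```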

Jason Priebe
CBC New Media
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Priebe, Jason
On 9/23/16, Paul B Mahol <one...@gmail.com> wrote:

> On 9/27/16, Priebe, Jason <jpri...@cbcnewmedia.com> wrote:
> > On 9/23/16, Paul B Mahol <one...@gmail.com> wrote:
> >
> > - it uses a slightly inelegant technique to read the images; it writes
> >   the image data to a temp file so it can call ff_load_image().  I didn't
> >   see a function that can load an image directly from an in-memory byte
> > array.
> 
> AVFrame stores all the decoded data from an image. Using temp files is ridiculous.

I do store the *decoded* image in an AVFrame.  I only use the temp file
when I finish reading the *encoded* image from the named pipe -- I
write to a temp file, hand that off to ff_load_image(), and once the
image has been decoded, I destroy the temp file.

Like I said, I don't see any way to decode an in-memory encoded
image (PNG, JPG, etc.) with the existing function calls.  I only
see ff_load_image(), which takes a filename.

I think that trying to decode a PNG out of an in-memory buffer would
require refactoring inside of files like libavformat/utils.c, which
would require a deeper understanding of the internals than I have.

If there's a better way to decode these still images without using
an intermediate temp file, please point me to it, and I'll make the
change.

> > - Portability - I'm worried this is the big one.  mkfifo isn't readily
> >   available on Windows without compatibility libraries, and even then,
> >   I'm not sure whether they would work the same way they do under *nix.
> >   Generally speaking, how does the ffmpeg team tackle cross-platform
> >   issues like this?
> 
> IIRC named pipes are available for Windows.

I think you are right -- the *concept* of named pipes exists in
Windows, but the mkfifo() call doesn't create them.  You have
to use calls like CreateNamedPipe() and ConnectNamedPipe().

It looks like there is some windows compatibility code for handling
standard file open() calls in libavutil/file_open.c, so I suppose
that something like that could be built for named pipes (maybe one
call that creates and opens the named pipe, with conditional calls
for *nix and Windows?)

This is way outside my wheelhouse -- I don't even have a Windows
build environment.  But if it's a show-stopper, then I can slog
my way through it. 

Jason Priebe
CBC New Media 


[FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-27 Thread Priebe, Jason
On 9/23/16, Paul B Mahol  wrote:

> The named pipe approach would implement a video source which would read images
> from the named pipe. It would read from the named pipe until it decodes a single
> frame and then would use that frame as input to the next filter, for example the
> overlay filter.
>
> If it encounters EOF in the named pipe it would not abort but would instead keep
> sending the last frame it got, for example a completely transparent frame.
>
> If it suddenly gets more data from the pipe it would update its internal
> frame and output it as input to the next filter in the chain.
>
> So the command would look like this:
>
> imagepipe=named_pipe:rate=30[o],[0:v][o]overlay=x=0:y=0 ...
>
> And then in another terminal, you would use commands like this:
>
> cat random_image.format > named_pipe

Paul: this is a really good idea (when you first mentioned pipes, I thought
you meant to use pipes as a standard ffmpeg input, which doesn't really work
in the way I'm trying to make it work here).  But a purpose-built filter that
reads from a pipe is another story.

I built an imagepipe filter that I'd like to submit as a patch, but 
I have some questions before I do that:

- it outputs YUVA420P.  Does it need to output other pixel formats to
  be accepted?

- it uses a slightly inelegant technique to read the images; it writes
  the image data to a temp file so it can call ff_load_image().  I didn't
  see a function that can load an image directly from an in-memory byte array.

- I'm not 100% sure how to write the test.  I added a block=1 option to
  the filter so that it will block on each frame to read in an image from
  the pipe; this is designed for testing only (normally, you want non-blocking
  reads).  But I don't know how to leverage FATE to build a test that runs
  ffmpeg and also, in another process, writes files to the pipe.  I think I
  can do it if I add a new function to fate-run.sh, but I don't know if that
  is discouraged.

- Portability - I'm worried this is the big one.  mkfifo isn't readily
  available on Windows without compatibility libraries, and even then,
  I'm not sure whether they would work the same way they do under *nix.
  Generally speaking, how does the ffmpeg team tackle cross-platform
  issues like this?

Thanks for any guidance!

Jason Priebe
CBC New Media


Re: [FFmpeg-devel] [PATCH] avfilter: add dynoverlay filter.

2016-09-22 Thread Priebe, Jason
Thanks for the detailed review of the code, Paul.  You caught some dumb
mistakes that I should have picked up on before I sent the submission.

> dynoverlay sounds very powerful, but this filter's functionality is very limited.

True, it is limited, but it does something that no other filter does (I am
interested in your named pipe idea, though -- see note at the end).

> What's up with code identation?

I don't know -- I didn't realize there was a problem.  I'm just using vi; maybe
it formatted stuff in a funny way.  Or maybe you don't like my braces on new
lines?

> > +If the named PNG file does not exist, the filter will do nothing.
> 
> I do not like this design. Supporting only PNG with RGBA, working only
> in YUV420P.

I built the filter to satisfy a specific need; I thought that maybe others
who needed other formats could add in the support for those formats later.
I thought I saw some filters in ffmpeg that only support specific formats,
so I didn't realize that was a problem.

> > +@item check_interval
> > +(optional) The interval (in ms) between checks for updates to the overlay file. For
> > +efficiency, the filter does not check the filesystem on every frame. You can make
> > +it check more frequently (less efficient, but more responsive to changes in the
> > +overlay PNG) by specifying a lower number. Or you can make it check less frequently
> > +(more efficient, but less responsive to changes in the overlay PNG) by specifying
> > +a higher number.
> > +
> > +Default value is @code{250}.
> > +@end table
> 
> This approach is bad to provide such functionality.

Why?  Yes, it's a lot of system calls, but it's still performant, so is 
it fundamentally wrong?

> > + -vf format=pix_fmts=yuv420p \
> > + -vf dynoverlay=overlayfile=/var/tmp/overlay.png:check_interval=100 \
> 
> You cannot give more than one -vf; only one will ever be used.

My mistake - I was trying to take a much more complicated example and distill it
down and anonymize it. 

> > +#include 
> > ... many headers omitted ...
> > +#include "libavutil/lfg.h"
> 
> Are all those headers really needed?

Probably not; I could try to pare it down.

> > + if (ctx->overlay_frame)
> > + {
> > + // TODO - make sure we have freed everything
> > + av_freep(&ctx->overlay_frame->data[0]);
> > + }
> > + ctx->overlay_frame = NULL;
> 
> This is wrong.

Thanks for catching this; I should use av_frame_free(), right?

> > + if (ctx->overlay_frame)
> > + {
> 
> Check is not needed.
> 
> > + av_frame_free(&ctx->overlay_frame);
> > + }
> > + ctx->overlay_frame = NULL;
> 
> Not needed.

Does this mean that it is safe to call av_frame_free() with NULL ?

> I think much better approach is to use named pipes, probably as
> separate video filter source.

I would love to see an example of this technique.  Based on what I read online
(http://ffmpeg.gusari.org/viewtopic.php?f=11&t=2774 for example), I didn't
think this technique worked.  And even if it could work, wouldn't I need a
process that is constantly generating frame data from PNGs to feed to ffmpeg,
which then has to read it 30 times a second (when it only changes every 2-3
*minutes*)?

That seems extremely inefficient for overlays that are not changing frequently.

This is why I was trying to read the PNG once and hold it in memory until it
changes or is removed.  Maybe it would be more elegant if I wasn't polling the
filesystem; I could send signals to ffmpeg instead.  But I don't see a clear
advantage to that technique, and there are some disadvantages.  It's extremely
simple for an external application to write to a specified file.  But a model
based on signalling would require that the process also know the PID of the
ffmpeg process.

I guess my question for you is whether this filter has any value to the larger
community.  If not, I'll just maintain it myself as a patch that I can apply to
my own builds.

Jason Priebe
CBC New Media


[FFmpeg-devel] [PATCH] avfilter: add dynoverlay filter.

2016-09-22 Thread Priebe, Jason
This patch adds a new filter that allows you to drive dynamic graphic overlays
on a live encoding by creating/updating/deleting a specified 32-bit PNG.
This is very different from the overlay filter, because it lets you change
the overlay in real time during a live stream.  It doesn't allow you to overlay
video on top of video, but you can overlay still images over video, which is
useful for things like lower-thirds and fullscreen graphics.

It is efficient in its handling of PNG files, as it only decodes the PNG data
when it changes.  It is not optimized in its handling of the compositing, since
it composites the entire image on every frame, even if the majority of the
overlay is fully transparent.  Even with that caveat, it only takes about
2% of overall CPU while compositing 1920x1080 images on HD video on a 
Core i7-6700K.

I'm pretty sure that I'm allocating my frames/buffers correctly and that I'm
freeing them as expected.  But if I've missed something, please let me know.

I did my best with the FATE test.  I understand the concept of "generate
video, perform the filter, calculate the MD5 of the output, compare to the
expected MD5".  But I didn't really see how I was supposed to visually inspect
the output video before committing the test.

I modified the fate-run.sh script to output the ffmpeg command it was running
so I could capture the output video and make sure it contained what I expected.
So I think my test is good.  

It's a very basic test -- it just makes sure the filter can read the PNG and
overlay it.  I don't do anything fancy like removing or updating the PNG during
the encoding, although that would be a more complete test of what the filter
is designed to do.

The test included requires a PNG overlay image, dynoverlay.png.  It can be
downloaded from http://imgur.com/a/6PIkT



---
Changelog | 1 +
doc/filters.texi | 54 ++
libavfilter/Makefile | 1 +
libavfilter/allfilters.c | 1 +
libavfilter/version.h | 2 +-
libavfilter/vf_dynoverlay.c | 439 
tests/fate/filter-video.mak | 3 +
7 files changed, 500 insertions(+), 1 deletion(-)
create mode 100644 libavfilter/vf_dynoverlay.c

diff --git a/Changelog b/Changelog
index 2d0a449..5b620b4 100644
--- a/Changelog
+++ b/Changelog
@@ -31,6 +31,7 @@ version <next>:
- MediaCodec HEVC decoding
- TrueHD encoder
- Meridian Lossless Packing (MLP) encoder
+- dynoverlay filter
version 3.1:
diff --git a/doc/filters.texi b/doc/filters.texi
index 070e57d..e67e29a 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -7080,6 +7080,60 @@ For more information about fontconfig, check:
For more information about libfribidi, check:
@url{http://fribidi.org/}.
+@section dynoverlay
+
+Uses a PNG with alpha to dynamically add, update, and remove overlays
+during live streams.
+
+If the named PNG file does not exist, the filter will do nothing.
+
+When the filter first detects the presence of the PNG, it will load it
+into a memory and overlay it on all frames until the PNG is either
+updated or removed. If the PNG is updated, the filter will read it into
+memory again. If the PNG is removed, the filter will clear the memory
+and stop overlaying the image.
+
+Note that this filter only works with YUV420P video.
+
+The filter accepts the following options:
+
+@table @option
+@item overlayfile
+(required) The name of the PNG that will contain the overlays. Note that the file
+may or may not exist when ffmpeg is launched. It can be created, updated,
+and removed from the filesystem, and the filter will respond accordingly.
+
+@item check_interval
+(optional) The interval (in ms) between checks for updates to the overlay file. For
+efficiency, the filter does not check the filesystem on every frame. You can make
+it check more frequently (less efficient, but more responsive to changes in the
+overlay PNG) by specifying a lower number. Or you can make it check less frequently
+(more efficient, but less responsive to changes in the overlay PNG) by specifying
+a higher number.
+
+Default value is @code{250}.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Add an overlay to video captured from a DeckLink Mini card; check for updates to
+the overlay PNG every 100ms
+@example
+ffmpeg -probesize 1k -r 3/1001 \
+ -f decklink -i 'DeckLink Mini Recorder (1)@@11' -y \
+ -vf format=pix_fmts=yuv420p \
+ -vf dynoverlay=overlayfile=/var/tmp/overlay.png:check_interval=100 \
+ -pix_fmt yuv420p \
+ -s 960x540 \
+ -c:v libx264 -profile:v baseline \
+ -b:v 1024k \
+ -c:a aac -ac 2 -b:a 192k -ar 44100 \
+ -f flv -flags +global_header 'rtmp://streaming.example.com/appname/streamname'
+@end example
+@end itemize
+
@section edgedetect
Detect and draw edges. The filter uses the Canny Edge Detection algorithm.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 5cd10fa..80a485c 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -164,6 +164,7 @@ OBJS-$(CONFIG_DRAWBOX_FILTER) += vf_drawbox.o
OBJS-$(CONFIG_DRAWGRAPH_FILTER) +=