Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Priebe, Jason
On 9/23/16, Paul B Mahol  wrote:

> On 9/28/16, Priebe, Jason  wrote:
>
> > If there's a better way to decode these still images without using
> > an intermediate temp file, please point me to it, and I'll make the
> > change.
> 
> Using avformat/avcodec calls to decode input from named pipe into AVFrame.

OK.  I was able to synthesize the code from ff_load_image() and the code 
from this example: 

http://www.ffmpeg.org/doxygen/trunk/doc_2examples_2avio_reading_8c-example.html

It took 130 lines of code to read an image from a memory buffer, and
about 40 of those lines are essentially duplicated from ff_load_image().

It seems like a function like this belongs in lavfutils rather than
being buried in a random filter.  And maybe those 40 lines could be
shared between this new function and the ff_load_image() function?

Jason Priebe
CBC New Media
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Moritz Barsnick
On Wed, Sep 28, 2016 at 12:40:24 +, Priebe, Jason wrote:
> Like I said, I don't see any way to decode an in-memory encoded
> image (PNG, JPG, etc.) with the existing function calls.  I only
> see ff_load_image(), which takes a filename.

The image2pipe demuxer already handles a "stream" of still image files:

$ (cat 01.png; sleep 3; cat 02.png) | ffmpeg -f image2pipe -i - [...]
(This Unix "|" pipe could be replaced by a named pipe.)

Unfortunately, it does not set the timestamp of each incoming image
to the wallclock time (or to a realtime offset); instead, it assumes
a constant frame rate.

If it did set the timestamps accordingly, you could duplicate its input
frames via the fps filter before overlaying the image. This also
assumes the fps filter could duplicate images as long as it doesn't get
new ones, to fulfill the constant output rate. I guess it doesn't do
that currently.

> I think you are right -- the *concept* of named pipes exists in
> Windows, but the mkfifo() call doesn't create them.  You have
> to use calls like CreateNamedPipe() and ConnectNamedPipe().

Yes, you'd need a helper program in order to be able to achieve
something as easy as Unix's "cat 03.png > /my/fifo".
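Such a helper could be tiny; a sketch (the function name is made up,
and I haven't tried the Windows branch on an actual Windows box):

```c
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <fcntl.h>
#include <unistd.h>
#endif

/* Send the contents of one file to a (named) pipe path --
 * the Windows-side stand-in for `cat 03.png > /my/fifo`. */
static int send_file_to_pipe(const char *image_path, const char *pipe_path)
{
    FILE *in = fopen(image_path, "rb");
    if (!in)
        return -1;
#ifdef _WIN32
    HANDLE h = CreateFileA(pipe_path, GENERIC_WRITE, 0, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { fclose(in); return -1; }
#else
    int fd = open(pipe_path, O_WRONLY);
    if (fd < 0) { fclose(in); return -1; }
#endif
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
#ifdef _WIN32
        DWORD written;
        WriteFile(h, buf, (DWORD)n, &written, NULL);
#else
        write(fd, buf, n);
#endif
    }
#ifdef _WIN32
    CloseHandle(h);
#else
    close(fd);
#endif
    fclose(in);
    return 0;
}
```

On Windows the pipe path would be something like \\.\pipe\imagepipe
rather than /my/fifo.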

Just thinking out loud,
Moritz


Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Paul B Mahol
On 9/28/16, Priebe, Jason  wrote:
> On 9/23/16, Paul B Mahol  wrote:
>
>> On 9/27/16, Priebe, Jason  wrote:
>> > On 9/23/16, Paul B Mahol  wrote:
>> >
>> > - it uses a slightly inelegant technique to read the images; it writes
>> >   the image data to a temp file so it can call ff_load_image().  I
>> > didn't
>> >   see a function that can load an image directly from an in-memory byte
>> > array.
>>
>> AVFrame stores all decoded data from an image. Using temp files is
>> ridiculous.
>
> I do store the *decoded* image in an AVFrame.  I only use the temp file
> when I finish reading the *encoded* image from the named pipe -- I
> write to a temp file, hand that off to ff_load_image(), and once the
> image has been decoded, I destroy the temp file.
>
> Like I said, I don't see any way to decode an in-memory encoded
> image (PNG, JPG, etc.) with the existing function calls.  I only
> see ff_load_image(), which takes a filename.
>
> I think that trying to decode a PNG out of an in-memory buffer would
> require refactoring inside of files like libavformat/utils.c, which
> would require a deeper understanding of the internals than I have.
>
> If there's a better way to decode these still images without using
> an intermediate temp file, please point me to it, and I'll make the
> change.

Using avformat/avcodec calls to decode input from named pipe into AVFrame.


Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-28 Thread Priebe, Jason
On 9/23/16, Paul B Mahol  wrote:

> On 9/27/16, Priebe, Jason  wrote:
> > On 9/23/16, Paul B Mahol  wrote:
> >
> > - it uses a slightly inelegant technique to read the images; it writes
> >   the image data to a temp file so it can call ff_load_image().  I didn't
> >   see a function that can load an image directly from an in-memory byte
> > array.
> 
> AVFrame stores all decoded data from an image. Using temp files is ridiculous.

I do store the *decoded* image in an AVFrame.  I only use the temp file
when I finish reading the *encoded* image from the named pipe -- I
write to a temp file, hand that off to ff_load_image(), and once the
image has been decoded, I destroy the temp file.

Like I said, I don't see any way to decode an in-memory encoded
image (PNG, JPG, etc.) with the existing function calls.  I only
see ff_load_image(), which takes a filename.

I think that trying to decode a PNG out of an in-memory buffer would
require refactoring inside of files like libavformat/utils.c, which
would require a deeper understanding of the internals than I have.

If there's a better way to decode these still images without using
an intermediate temp file, please point me to it, and I'll make the
change.

> > - Portability - I'm worried this is the big one.  mkfifo isn't readily
> >   available on Windows without compatibility libraries, and even then,
> >   I'm not sure whether they would work the same way they do under *nix.
> >   Generally speaking, how does the ffmpeg team tackle cross-platform
> >   issues like this?
> 
> IIRC named pipes are available for Windows.

I think you are right -- the *concept* of named pipes exists in
Windows, but the mkfifo() call doesn't create them.  You have
to use calls like CreateNamedPipe() and ConnectNamedPipe().

It looks like there is some windows compatibility code for handling
standard file open() calls in libavutil/file_open.c, so I suppose
that something like that could be built for named pipes (maybe one
call that creates and opens the named pipe, with conditional calls
for *nix and Windows?)
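Something like this is what I had in mind (a sketch; ff_mkfifo()
doesn't exist, and I can't test the Windows branch):

```c
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/types.h>
#include <sys/stat.h>
#endif

/* Hypothetical cross-platform wrapper in the spirit of
 * libavutil/file_open.c -- the name ff_mkfifo() is made up. */
static int ff_mkfifo(const char *path)
{
#ifdef _WIN32
    /* Windows pipe names must live under \\.\pipe\ */
    HANDLE h = CreateNamedPipeA(path, PIPE_ACCESS_INBOUND,
                                PIPE_TYPE_BYTE | PIPE_WAIT,
                                1, 4096, 4096, 0, NULL);
    return (h == INVALID_HANDLE_VALUE) ? -1 : 0;
#else
    return mkfifo(path, 0666);
#endif
}
```

One wrinkle: CreateNamedPipe() returns a handle that the creating
process has to keep open and serve, whereas mkfifo() just creates a
filesystem object, so a real wrapper would probably need to hand back
the handle/fd rather than a plain status.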

This is way outside my wheelhouse -- I don't even have a Windows
build environment.  But if it's a show-stopper, then I can slog
my way through it. 

Jason Priebe
CBC New Media 


Re: [FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

2016-09-27 Thread Paul B Mahol
On 9/27/16, Priebe, Jason  wrote:
> On 9/23/16, Paul B Mahol  wrote:
>
>> Named pipe approach would implement video source which would read images
>> from named pipe. It would read from named pipe until it decodes single
>> frame
>> and then would use that frame as input to next filter, for example
>> overlay filter.
>>
>> If it encounters EOF in named pipe it would not abort but would instead
>> keep
>> sending last frame it got, for example completely transparent frame.
>>
>> If it suddenly get more data from pipe it would update its internal
>> frame and output it as input to next filter in chain.
>>
>> So command would look like this:
>>
>> imagepipe=named_pipe:rate=30[o],[0:v][o]overlay=x=0:y=0 ...
>>
>> And then, in another terminal, you would use commands like this:
>>
>> cat random_image.format > named_pipe
>
> Paul:  this is a really good idea (when you first mentioned pipes, I
> thought you meant to use pipes as a standard ffmpeg input, which doesn't
> really work in the way I'm trying to make it work here).  But a purpose-
> built filter that reads from a pipe is another story.
>
> I built an imagepipe filter that I'd like to submit as a patch, but
> I have some questions before I do that:
>
> - it outputs YUVA420P.  Does it need to output other pixel formats to
>   be accepted?

Not necessary if adding other formats is easy.

>
> - it uses a slightly inelegant technique to read the images; it writes
>   the image data to a temp file so it can call ff_load_image().  I didn't
>   see a function that can load an image directly from an in-memory byte
> array.

AVFrame stores all decoded data from an image. Using temp files is ridiculous.

>
> - I'm not 100% sure how to write the test.  I added a block=1 option to
>   the filter so that it will block on each frame to read in an image from
>   the pipe; this is designed for testing only (normally, you want
> non-blocking
>   reads).  But I don't know how to leverage FATE to build a test that
>   runs ffmpeg and also in another process, writes files to the pipe.  I
>   think I can do it if I add a new function to fate-run.sh, but I don't know
>   if that is discouraged.

Test can be added later.

>
> - Portability - I'm worried this is the big one.  mkfifo isn't readily
>   available on Windows without compatibility libraries, and even then,
>   I'm not sure whether they would work the same way they do under *nix.
>   Generally speaking, how does the ffmpeg team tackle cross-platform
>   issues like this?

IIRC named pipes are available for Windows.