Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-24 Thread Clément Bœsch
On Fri, Nov 13, 2015 at 03:57:22PM +0100, Nicolas George wrote:
> On primidi 21 Brumaire an CCXXIV, Clement Boesch wrote:
> > You construct something like this:
> > 
> >   AVAsyncContext(
> > AVAsyncReader(
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   ...
> > ),
> > AVAsyncReader(
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   ...
> > ),
> > 
> >   )
> 
> Is there a specific reason you use a different type for decoders and
> readers? If I understand correctly, readers would be demuxers, and
> basically, if you look from high enough, demuxers and decoders are basically
> the same.
> 

Not sure I understand what you mean. It's important to have a threading
separation between the two, for the sole reason of performance. Yes,
readers are demuxers in this simple model: one reader for N decoders.
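
A rough sketch of that ownership model (plain C, all names here are illustrative, not the proposed AVAsync API): one reader owns N decoders, and each decoder has its own packet queue so a slow decoder cannot stall its siblings.

```c
/* Rough sketch of the ownership model: one reader (demuxer) thread
 * feeding N decoders, each with its own packet queue so a slow
 * decoder cannot stall its siblings. Names are illustrative only. */
#include <stddef.h>

typedef struct PacketQueue {      /* bounded FIFO, one per decoder   */
    void  **items;
    size_t  size, head, count;
} PacketQueue;

typedef struct AsyncDecoder {
    PacketQueue in;               /* packets broadcast by the reader */
    PacketQueue out;              /* decoded frames                  */
} AsyncDecoder;

typedef struct AsyncReader {
    AsyncDecoder *decoders;       /* N decoders fed by this reader   */
    size_t        nb_decoders;    /* one reader thread per file      */
} AsyncReader;

/* The reader thread reads one packet at a time and routes it to the
 * queue of the decoder that owns the packet's stream. */
static AsyncDecoder *route_packet(AsyncReader *r, size_t stream_index)
{
    return stream_index < r->nb_decoders ? &r->decoders[stream_index] : NULL;
}
```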

[...]
> > - first, in threadmessage it seems I can't yet flush the fifo properly
> >   (for example by calling av_frame_free() on each entry of the frame
> >   queue): should I extend the API? Nicolas, maybe you have a suggestion?
> 
> Not really. IIUC, you need a function for the sender to tell "all queued
> messages are useless, discard them"? Or maybe more generic out-of-band /
> prioritized messages?

No idea what would be appropriate; I just need a way to drain the demuxer
and decoder queues, which contain AVPackets and AVFrames that need to be
freed properly.
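
For illustration, the flush semantics being asked for could look like this toy sketch. This is a minimal mutex-protected queue in plain C, not the actual threadmessage API; the caller-supplied free function stands in for av_frame_free()/av_packet_free().

```c
/* Toy illustration of the missing flush: drain every queued entry and
 * run a caller-supplied free function on it (think av_frame_free() /
 * av_packet_free()). This is NOT the threadmessage API, just the
 * semantics being asked for. */
#include <pthread.h>
#include <stdlib.h>

typedef struct MsgQueue {
    pthread_mutex_t lock;
    void  **items;
    size_t  count, size;
} MsgQueue;

static int mq_init(MsgQueue *q, size_t size)
{
    q->items = calloc(size, sizeof(*q->items));
    if (!q->items)
        return -1;
    q->size  = size;
    q->count = 0;
    return pthread_mutex_init(&q->lock, NULL);
}

static int mq_send(MsgQueue *q, void *msg)
{
    int ret = -1;
    pthread_mutex_lock(&q->lock);
    if (q->count < q->size) {
        q->items[q->count++] = msg;
        ret = 0;
    }
    pthread_mutex_unlock(&q->lock);
    return ret;
}

/* The requested extension: discard all pending messages, freeing each
 * one properly so no AVFrame/AVPacket leaks. Returns how many entries
 * were dropped. */
static size_t mq_flush(MsgQueue *q, void (*free_msg)(void *msg))
{
    size_t n;
    pthread_mutex_lock(&q->lock);
    n = q->count;
    for (size_t i = 0; i < q->count; i++)
        free_msg(q->items[i]);
    q->count = 0;
    pthread_mutex_unlock(&q->lock);
    return n;
}
```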

-- 
Clément B.


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-13 Thread Nicolas George
On tridi 23 Brumaire an CCXXIV, Paul B Mahol wrote:
> This "monolithic" claim is incorrect: it is easy to add support for loading
> filters, it just never got committed; there is even a patch for this.

I do not think you are right. Filters need to use internal APIs to process
frames; dynamic loading would not work. But this is just the same for codecs
and demuxers, so making this a point against lavfi seems dishonest.

Regards,

-- 
  Nicolas George




Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-13 Thread Paul B Mahol
On 11/13/15, wm4  wrote:
> On Fri, 13 Nov 2015 15:57:22 +0100
> Nicolas George  wrote:
>
>> On primidi 21 Brumaire an CCXXIV, Clement Boesch wrote:
>> > You construct something like this:
>> >
>> >   AVAsyncContext(
>> > AVAsyncReader(
>> >   AVAsyncDecoder(...),
>> >   AVAsyncDecoder(...),
>> >   AVAsyncDecoder(...),
>> >   ...
>> > ),
>> > AVAsyncReader(
>> >   AVAsyncDecoder(...),
>> >   AVAsyncDecoder(...),
>> >   ...
>> > ),
>> > 
>> >   )
>>
>> Is there a specific reason you use a different type for decoders and
>> readers? If I understand correctly, readers would be demuxers, and
>> basically, if you look from high enough, demuxers and decoders are
>> basically
>> the same.
>>
>> Actually, I think we already have the framework for a high-level
>> optionally-asynchronous API: libavfilter. Of course it is far from ready
>> for
>> it, but the basic principles are sound.
>
> libavfilter is too inflexible, brittle, monolithic. There's nothing
> asynchronous about it either. I also want to see how you'd do global
> format negotiation when you add subtitles, demuxers, muxers etc.

This "monolithic" claim is incorrect: it is easy to add support for loading
filters, it just never got committed; there is even a patch for this.

>
> The basic idea you have is right (and the same thing more general media
> frameworks do, like dshow, gstreamer, etc.), but I suggest you design a
> new sub-library from scratch for this.


Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-13 Thread Nicolas George
On tridi 23 Brumaire an CCXXIV, wm4 wrote:
> libavfilter is too inflexible, brittle, monolithic.

Care to elaborate?

Regards,

-- 
  Nicolas George




Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-13 Thread wm4
On Fri, 13 Nov 2015 15:57:22 +0100
Nicolas George  wrote:

> On primidi 21 Brumaire an CCXXIV, Clement Boesch wrote:
> > You construct something like this:
> > 
> >   AVAsyncContext(
> > AVAsyncReader(
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   ...
> > ),
> > AVAsyncReader(
> >   AVAsyncDecoder(...),
> >   AVAsyncDecoder(...),
> >   ...
> > ),
> > 
> >   )  
> 
> Is there a specific reason you use a different type for decoders and
> readers? If I understand correctly, readers would be demuxers, and
> basically, if you look from high enough, demuxers and decoders are basically
> the same.
> 
> Actually, I think we already have the framework for a high-level
> optionally-asynchronous API: libavfilter. Of course it is far from ready for
> it, but the basic principles are sound.

libavfilter is too inflexible, brittle, monolithic. There's nothing
asynchronous about it either. I also want to see how you'd do global
format negotiation when you add subtitles, demuxers, muxers etc.

The basic idea you have is right (and the same thing more general media
frameworks do, like dshow, gstreamer, etc.), but I suggest you design a
new sub-library from scratch for this.


Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-13 Thread Nicolas George
On primidi 21 Brumaire an CCXXIV, Clement Boesch wrote:
> You construct something like this:
> 
>   AVAsyncContext(
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> 
>   )

Is there a specific reason you use a different type for decoders and
readers? If I understand correctly, readers would be demuxers, and
basically, if you look from high enough, demuxers and decoders are basically
the same.

Actually, I think we already have the framework for a high-level
optionally-asynchronous API: libavfilter. Of course it is far from ready for
it, but the basic principles are sound.

The big step it would require is support for AVMEDIA_TYPE_DATA frames. There
was some talk on libav-devel of merging AVPacket into AVFrame, that would be
a big first step. With this, decoders are just DATA->VIDEO or DATA->AUDIO
filters, demuxers are sources with multiple DATA streams (or filters with
one DATA input, with protocols the sources), and of course the symmetric for
encoders and muxers.

Libavfilter already supports n:m schemes (if I understand correctly what you
mean by that: n input frames produce m output frames?), and allows automatic
conversion to supported formats. My current work on de-recursiving intends,
amongst other things, to make parallelism possible, and I have a later
project of adding a callback to the sinks to get the frames immediately
instead of polling.

Also, not related to libavfilter but relevant to the discussion: amongst the
projects that I am slowly maturing is the implementation of an I/O event loop
API that would make it possible to clean up all the inconsistent uses of
poll() with subprotocols, and also to get rid of pthread_cancel() for the UDP
thread.

> - first, in threadmessage it seems I can't yet flush the fifo properly
>   (for example by calling av_frame_free() on each entry of the frame
>   queue): should I extend the API? Nicolas, maybe you have a suggestion?

Not really. IIUC, you need a function for the sender to tell "all queued
messages are useless, discard them"? Or maybe more generic out-of-band /
prioritized messages?

Regards,

-- 
  Nicolas George




Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-12 Thread Clément Bœsch
On Thu, Nov 12, 2015 at 10:15:16PM +0100, wm4 wrote:
> On Wed, 11 Nov 2015 08:45:52 +0100
> Hendrik Leppkes  wrote:
> 
> > On Wed, Nov 11, 2015 at 1:27 AM, Clément Bœsch  wrote:
> > > From: Clément Bœsch 
> 
> > > I hope this is going in a direction most developers like. If it's not
> > > the case, I'm of course open to any reworking but please be specific.  
> > 
> > I'm quite confused by this approach, it doesn't really seem to
> > correspond at all to what we talked about earlier.
> > You can't really put a high-level async API on our low-level sync API,
> > we should start re-thinking the low level API first.
> > 
> > The first goal should be to get to a new low level API that implements
> > decoupled m:n output, then you could build a high-level API on top of
> > that, if you wanted something like that.
> > Such a low level API would easily integrate into other frameworks,
> > while this async API seems to be rather rigid and may be rather
> > troublesome to integrate into existing workflows.
> > 
> 
> I have to agree. It's just not what I expected. While I think having a
> higher level API in FFmpeg would be very good, the async m:n API is
> what we really need.
> 
> Besides that, the presented API looks pretty rigid and inflexible. This
> is basically the wrong way to start. If you start with a high level
> API, all effort should be put into making that API _good_, and not
> making its implementation a playground for low level API improvements.

Alright. Well, I was kind of expecting such comments, so sure, OK. I guess
I was trying to solve several issues at once. Patch withdrawn for now
(but I'm probably going to continue to work a bit on it locally).

[...]

-- 
Clément B.




Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-12 Thread wm4
On Wed, 11 Nov 2015 08:45:52 +0100
Hendrik Leppkes  wrote:

> On Wed, Nov 11, 2015 at 1:27 AM, Clément Bœsch  wrote:
> > From: Clément Bœsch 

> > I hope this is going in a direction most developers like. If it's not
> > the case, I'm of course open to any reworking but please be specific.  
> 
> I'm quite confused by this approach, it doesn't really seem to
> correspond at all to what we talked about earlier.
> You can't really put a high-level async API on our low-level sync API,
> we should start re-thinking the low level API first.
> 
> The first goal should be to get to a new low level API that implements
> decoupled m:n output, then you could build a high-level API on top of
> that, if you wanted something like that.
> Such a low level API would easily integrate into other frameworks,
> while this async API seems to be rather rigid and may be rather
> troublesome to integrate into existing workflows.
> 

I have to agree. It's just not what I expected. While I think having a
higher level API in FFmpeg would be very good, the async m:n API is
what we really need.

Besides that, the presented API looks pretty rigid and inflexible. This
is basically the wrong way to start. If you start with a high level
API, all effort should be put into making that API _good_, and not
making its implementation a playground for low level API improvements.
If the goal is fixing VideoToolbox, ffplay.c could probably be used for
experiments. (It runs a separate decoder thread, so it might be ideal
for trying such async things.)
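
As an illustration of that pattern (hypothetical names; a stand-in instead of a real decoder, this is not ffplay's actual code): a dedicated thread blocks on a packet queue and publishes results, which is roughly what ffplay's decoder thread does.

```c
/* Minimal sketch of the ffplay-style pattern: a dedicated decoder
 * thread blocks on a packet queue and publishes results. The "decode"
 * step is a stand-in (it just negates the value); names are
 * hypothetical, this is not ffplay's actual code. */
#include <pthread.h>

#define QSIZE 16

typedef struct DecThread {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int       pkts[QSIZE];       /* pending "packets" (FIFO ring)   */
    int       head, count, eof;
    int       out[QSIZE];        /* "decoded" results, in order     */
    int       nb_out;
    pthread_t thread;
} DecThread;

static void *dec_worker(void *arg)
{
    DecThread *d = arg;
    for (;;) {
        pthread_mutex_lock(&d->lock);
        while (!d->count && !d->eof)
            pthread_cond_wait(&d->cond, &d->lock);
        if (!d->count) {                  /* eof and queue drained */
            pthread_mutex_unlock(&d->lock);
            return NULL;
        }
        int pkt = d->pkts[d->head];
        d->head = (d->head + 1) % QSIZE;
        d->count--;
        d->out[d->nb_out++] = -pkt;       /* stand-in for decoding */
        pthread_mutex_unlock(&d->lock);
    }
}

static void dec_start(DecThread *d)
{
    pthread_mutex_init(&d->lock, NULL);
    pthread_cond_init(&d->cond, NULL);
    d->head = d->count = d->eof = d->nb_out = 0;
    pthread_create(&d->thread, NULL, dec_worker, d);
}

/* Demuxer side: enqueue one packet (overflow handling elided). */
static void dec_send(DecThread *d, int pkt)
{
    pthread_mutex_lock(&d->lock);
    d->pkts[(d->head + d->count) % QSIZE] = pkt;
    d->count++;
    pthread_cond_signal(&d->cond);
    pthread_mutex_unlock(&d->lock);
}

/* Signal end of stream and wait for the worker to drain the queue. */
static void dec_finish(DecThread *d)
{
    pthread_mutex_lock(&d->lock);
    d->eof = 1;
    pthread_cond_signal(&d->cond);
    pthread_mutex_unlock(&d->lock);
    pthread_join(d->thread, NULL);
}
```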


Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-10 Thread Hendrik Leppkes
On Wed, Nov 11, 2015 at 1:27 AM, Clément Bœsch  wrote:
> From: Clément Bœsch 
>
> ---
>
> So here is a first prototype of a higher level API following an
> asynchronous model to (for now) simply use the current synchronous API.
>
> I suggest to look at libavutil/async.h (sorry no doxy yet) and
> doc/examples/async_demuxing_decoding.c for an example of usage.
>
> Basically what the current draft proposes for the user is to instantiate
> an asynchronous context, in which you register Readers (the user will
> generally do a demuxing in the associated callback to get a packet), in
> which you register Decoders (basically just a wrapper for an
> AVCodecContext) to which you associate a push frame callback.
>
> You construct something like this:
>
>   AVAsyncContext(
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> 
>   )
>
> You will generally have a Reader per file, but that can be anything you
> want as long as you can pull packets off it.
>
> Now implementation wise, the interesting part is that every level is
> asynchronous:
>
> - The context will spawn a thread for each reader
> - Then in each reader, you will have a thread for each decoder
> - Then each decoder will have its own queue of packets, and its own
>   queue of output frames
> - Then along with each decoder, you will have a Watcher in its dedicated
>   thread: the watcher is monitoring the frame queue of its decoder, and
>   will call the user push frame callback anytime a new one appears in
>   the queue.
>
> The interesting part about the watcher is that it will allow the user to
> transparently do any operation on the frame without blocking the
> decoding or the demuxing: basically, blocking in push_frame for the user
> will just block any subsequent call to the callback, but the packet
> and frame queues of the decoder will continue to fill up, ready to be
> popped.
>
> Note: the size of the packet and frame queue for each decoder is
> configurable
>
> Note2: the packet queue is not set in the reader so that a decoder that
> isn't pulling its packets and decoding fast enough will not block the
> others. As a result, the Reader just reads one packet and is then
> responsible for broadcasting it to the appropriate decoder (by just
> adding it to its internal queue).
>
> Now all of this doesn't require any modification to the current API so
> it doesn't really solve the problem of writing asynchronous
> decoders/hwaccel. The goal for the next iteration will be to adjust the
> VideoToolbox hwaccel to make use of the asynchronous model, and
> basically just make it call the user callback by itself. It will be a
> good candidate for such a thing.
>
> Another concerning limitation in the current code is the inability to
> seek: basically the user can not lock the codec and reader context to
> execute a seek, nor flush the queues. And I'd like suggestion of
> direction in that regard:
>
> - first, in threadmessage it seems I can't yet flush the fifo properly
>   (for example by calling av_frame_free() on each entry of the frame
>   queue): should I extend the API? Nicolas, maybe you have a suggestion?
>
> - secondly, how should I allow the user to lock/unlock the reader (and
>   as a result the likely AVFormatContext user side) and decoder? I added
>   a few prototypes in async.h but I'm pretty sure that won't fit many
>   cases.  Comments very welcome.
>
> I hope this is going in a direction most developers like. If it's not
> the case, I'm of course open to any reworking but please be specific.

I'm quite confused by this approach, it doesn't really seem to
correspond at all to what we talked about earlier.
You can't really put a high-level async API on our low-level sync API,
we should start re-thinking the low level API first.

The first goal should be to get to a new low level API that implements
decoupled m:n output, then you could build a high-level API on top of
that, if you wanted something like that.
Such a low level API would easily integrate into other frameworks,
while this async API seems to be rather rigid and may be rather
troublesome to integrate into existing workflows.

>
> Also, I'm really sorry I'm not sending this to libav-devel even though I
> apparently looked forward to a collaboration in a previous thread. I was
> planning to, but unfortunately the threadmessage API is not there yet and
> it would have required way too much work for an initial prototype. When
> we come up with something that satisfies everyone and this code becomes
> something more than just a draft, I will maybe do the necessary.
> Unfortunately, porting the VideoToolbox accel, threadmessage and probably a
> bunch of other ffmpeg-specific features (I can think of
> av_dynarray2_add and maybe more) is going to require a bit too much code
> adjustment, so I might just share the async.h and redirect them to a
> working implementation here

Re: [FFmpeg-devel] [PATCH] avutil/WIP: add AVAsync API

2015-11-10 Thread Zhang Rui
2015-11-11 8:27 GMT+08:00 Clément Bœsch :
> From: Clément Bœsch 
>
> ---
>
> So here is a first prototype of a higher level API following an
> asynchronous model to (for now) simply use the current synchronous API.
>
> I suggest to look at libavutil/async.h (sorry no doxy yet) and
> doc/examples/async_demuxing_decoding.c for an example of usage.
>
> Basically what the current draft proposes for the user is to instantiate
> an asynchronous context, in which you register Readers (the user will
> generally do a demuxing in the associated callback to get a packet), in
> which you register Decoders (basically just a wrapper for an
> AVCodecContext) to which you associate a push frame callback.
>
> You construct something like this:
>
>   AVAsyncContext(
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> AVAsyncReader(
>   AVAsyncDecoder(...),
>   AVAsyncDecoder(...),
>   ...
> ),
> 
>   )
>
> You will generally have a Reader per file, but that can be anything you
> want as long as you can pull packets off it.
>
> Now implementation wise, the interesting part is that every level is
> asynchronous:
>
> - The context will spawn a thread for each reader
> - Then in each reader, you will have a thread for each decoder
> - Then each decoder will have its own queue of packets, and its own
>   queue of output frames
> - Then along with each decoder, you will have a Watcher in its dedicated
>   thread: the watcher is monitoring the frame queue of its decoder, and
>   will call the user push frame callback anytime a new one appears in
>   the queue.
>
> The interesting part about the watcher is that it will allow the user to
> transparently do any operation on the frame without blocking the
> decoding or the demuxing: basically, blocking in push_frame for the user
> will just block any subsequent call to the callback, but the packet
> and frame queues of the decoder will continue to fill up, ready to be
> popped.
>
> Note: the size of the packet and frame queue for each decoder is
> configurable
>
> Note2: the packet queue is not set in the reader so that a decoder that
> isn't pulling its packets and decoding fast enough will not block the
> others. As a result, the Reader just reads one packet and is then
> responsible for broadcasting it to the appropriate decoder (by just
> adding it to its internal queue).
>
> Now all of this doesn't require any modification to the current API so
> it doesn't really solve the problem of writing asynchronous
> decoders/hwaccel. The goal for the next iteration will be to adjust the
> VideoToolbox hwaccel to make use of the asynchronous model, and
> basically just make it call the user callback by itself. It will be a
> good candidate for such a thing.
>
> Another concerning limitation in the current code is the inability to
> seek: basically the user can not lock the codec and reader context to
> execute a seek, nor flush the queues. And I'd like suggestion of
> direction in that regard:

If we want a fat reader, seek should be handled inside the reader: let the
user call AVAsyncReader.seek_async(...) to request that the reader handle
the seek at the next round, before av_read_frame().

If we want a thin reader, just let the user push packets into the
reader's internal packet queue.

Anyhow, I don't think it's a good idea to put av_read_frame()
and avformat_seek_file() on different threads.
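
A minimal sketch of the fat-reader idea (hypothetical names, plain C): seek_async() merely records the request, and the reader thread consumes it at the top of its loop, so the demuxer is only ever touched from one thread.

```c
/* Sketch of the "fat reader" idea: seek_async() only records a request;
 * the reader thread itself applies it at the top of its loop, so the
 * demuxer is never touched from two threads. Names are hypothetical. */
#include <pthread.h>
#include <stdint.h>

typedef struct AsyncReaderCtl {
    pthread_mutex_t lock;
    int     seek_requested;
    int64_t seek_pos;
} AsyncReaderCtl;

/* Called from any thread: request a seek, do not perform it. */
static void reader_seek_async(AsyncReaderCtl *r, int64_t pos)
{
    pthread_mutex_lock(&r->lock);
    r->seek_requested = 1;
    r->seek_pos       = pos;
    pthread_mutex_unlock(&r->lock);
}

/* Called by the reader thread before each av_read_frame() round:
 * returns 1 and fills *pos if a seek is pending. */
static int reader_take_seek(AsyncReaderCtl *r, int64_t *pos)
{
    int pending;
    pthread_mutex_lock(&r->lock);
    pending = r->seek_requested;
    if (pending)
        *pos = r->seek_pos;
    r->seek_requested = 0;
    pthread_mutex_unlock(&r->lock);
    return pending;
}
```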

> - first, in threadmessage it seems I can't yet flush the fifo properly
>   (for example by calling av_frame_free() on each entry of the frame
>   queue): should I extend the API? Nicolas, maybe you have a suggestion?

Maybe a serial number in the frame queue, as in ffplay's packet_queue?
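
A sketch of that serial scheme (illustrative only, with an int standing in for an AVFrame): a seek bumps the queue serial, and the consumer silently drops entries stamped with an older serial, which avoids the need for an explicit flush.

```c
/* Sketch of the ffplay-style serial scheme: every entry is stamped
 * with the queue serial at enqueue time; a seek bumps the serial, and
 * consumers simply drop entries from an older serial instead of
 * needing an explicit flush. */
typedef struct SerialEntry {
    int frame_id;   /* stand-in for an AVFrame */
    int serial;
} SerialEntry;

typedef struct SerialQueue {
    SerialEntry entries[64];
    int count;
    int serial;     /* bumped on every seek */
} SerialQueue;

static void sq_push(SerialQueue *q, int frame_id)
{
    q->entries[q->count].frame_id = frame_id;
    q->entries[q->count].serial   = q->serial;
    q->count++;
}

static void sq_seek(SerialQueue *q)
{
    q->serial++;    /* everything queued so far becomes stale */
}

/* Pop the next up-to-date frame, silently skipping stale ones.
 * Returns 0 on success, -1 if the queue ran empty. */
static int sq_pop_fresh(SerialQueue *q, int *frame_id)
{
    int i = 0;
    while (i < q->count && q->entries[i].serial != q->serial)
        i++;                       /* drop pre-seek entries */
    if (i == q->count) {
        q->count = 0;
        return -1;
    }
    *frame_id = q->entries[i].frame_id;
    /* shift the remainder down (fine for a sketch) */
    for (int j = i + 1; j < q->count; j++)
        q->entries[j - 1 - i] = q->entries[j];
    q->count -= i + 1;
    return 0;
}
```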

> - secondly, how should I allow the user to lock/unlock the reader (and
>   as a result the likely AVFormatContext user side) and decoder? I added
>   a few prototypes in async.h but I'm pretty sure that won't fit many
>   cases.  Comments very welcome.

IMO, we don't need to let the user lock/unlock the reader/decoder;
we should just provide a thread-safe reader/decoder API.

> I hope this is going in a direction most developers like. If it's not
> the case, I'm of course open to any reworking but please be specific.
>
> Also, I'm really sorry I'm not sending this to libav-devel even though I
> apparently looked forward to a collaboration in a previous thread. I was
> planning to, but unfortunately the threadmessage API is not there yet and
> it would have required way too much work for an initial prototype. When
> we come up with something that satisfies everyone and this code becomes
> something more than just a draft, I will maybe do the necessary.
> Unfortunately, porting the VideoToolbox accel, threadmessage and probably a
> bunch of other ffmpeg-specific features (I can think of
> av_dynarray2_add and maybe more) is going to require a bit too much code
> adjustment, so I might just share the async.h and redirect them to a
> working implementation here
>
>