Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-07 Thread Michael Niedermayer
On Thu, Jun 07, 2018 at 11:23:40AM +0200, Paul B Mahol wrote:
> On 6/6/18, Nicolas George  wrote:
> > Michael Niedermayer (2018-06-04):
> >> If noone, who has time to reply knows the awnser then you probably have
> >> to
> >> find it out from the code and any unfinished patchsets
> >>
> >> sending nicolas a private mail may also be more vissible to him than the
> >> ML
> >> in case he is busy
> >
> > In terms of design, the big thing missing from lavfi is a clean API to
> > run a filter graph. Right now, it relies on requests on outputs, with a
> > fragile heuristic to find the "oldest" one. A clean API would allow to
> > run it as a whole and react to frames on output or requests on inputs,
> > possibly with callbacks.
> >
> > This must come before threading, because it is what allows to control
> > threading: threading is efficient when the system can start several
> > threads and let them run, doing their work. If it is constantly stopping
> > and re-starting because the calling API makes too small steps, much time
> > is wasted.
> >

> > But more than that, it requires somebody working on it. Speaking for
> > myself, the toxic ambiance in the project since a few months has
> > destroyed my motivation for doing anything ambitious on it. And to be
> > completely forthright, I feel that Paul is partly responsible for that
> > toxic ambiance; see his interventions on the thread about enforcing the
> > code of conduct for example.
> 
> Your contributions will be missed.
> 
> Good bye.

Id like to see both you and nicolas work together on libavfilter. Thats a
"WIN" for the community and FFmpeg. You two are the top 2 libavfilter
develoepers currently. Its really _REALLY_ stupid if you two fight like this.
Because no matter who "wins" this, everyone looses.

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Concerning the gods, I have no means of knowing whether they exist or not
or of what sort they may be, because of the obscurity of the subject, and
the brevity of human life -- Protagoras


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-07 Thread Paul B Mahol
On 6/6/18, Nicolas George  wrote:
> Michael Niedermayer (2018-06-04):
>> If noone, who has time to reply knows the awnser then you probably have
>> to
>> find it out from the code and any unfinished patchsets
>>
>> sending nicolas a private mail may also be more vissible to him than the
>> ML
>> in case he is busy
>
> In terms of design, the big thing missing from lavfi is a clean API to
> run a filter graph. Right now, it relies on requests on outputs, with a
> fragile heuristic to find the "oldest" one. A clean API would allow to
> run it as a whole and react to frames on output or requests on inputs,
> possibly with callbacks.
>
> This must come before threading, because it is what allows to control
> threading: threading is efficient when the system can start several
> threads and let them run, doing their work. If it is constantly stopping
> and re-starting because the calling API makes too small steps, much time
> is wasted.
>
> But more than that, it requires somebody working on it. Speaking for
> myself, the toxic ambiance in the project since a few months has
> destroyed my motivation for doing anything ambitious on it. And to be
> completely forthright, I feel that Paul is partly responsible for that
> toxic ambiance; see his interventions on the thread about enforcing the
> code of conduct for example.

Your contributions will be missed.

Good bye.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-06 Thread Nicolas George
Michael Niedermayer (2018-06-04):
> If noone, who has time to reply knows the awnser then you probably have to
> find it out from the code and any unfinished patchsets
> 
> sending nicolas a private mail may also be more vissible to him than the ML
> in case he is busy

In terms of design, the big thing missing from lavfi is a clean API to
run a filter graph. Right now, it relies on requests on outputs, with a
fragile heuristic to find the "oldest" one. A clean API would allow to
run it as a whole and react to frames on output or requests on inputs,
possibly with callbacks.

This must come before threading, because it is what allows to control
threading: threading is efficient when the system can start several
threads and let them run, doing their work. If it is constantly stopping
and re-starting because the calling API makes too small steps, much time
is wasted.

But more than that, it requires somebody working on it. Speaking for
myself, the toxic ambiance in the project since a few months has
destroyed my motivation for doing anything ambitious on it. And to be
completely forthright, I feel that Paul is partly responsible for that
toxic ambiance; see his interventions on the thread about enforcing the
code of conduct for example.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-03 Thread Michael Niedermayer
On Sun, Jun 03, 2018 at 07:43:24PM +0200, Paul B Mahol wrote:
> On 6/2/18, Paul B Mahol  wrote:
> > On 5/2/18, Paul B Mahol  wrote:
> >> On 9/11/16, Paul B Mahol  wrote:
> >>> On 9/10/16, Nicolas George  wrote:
>  Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a ecrit :
> > So everybody agrees, we should proceed.
> 
>  I am proceeding, but as you can see in the patch, there is still a fair
>  amount of work to be done. Still, people can help if they want to speed
>  things up, especially since a significant part of the work is design
>  decisions that I can not do alone and will need to be discussed.
> 
>  What needs to be done (using this mail as a notepad, but including the
>  tasks
>  where help is required):
> 
>  - Finish documenting the scheduling and make sure the implementation
>  matches
>    the documentation.
> 
>  - Discuss if "private_fields.h" is acceptable or decide another
>  solution.
> 
>  - Clearly identify and isolate the parts of the scheduling that are
>  needed
>    only for request_frame()/request_frame() compatibility.
> 
>  - Decide exactly what parts of the scheduling are the responsibility of
>    filters (possibly in the compatibility activate function) and what
>  parts
>    are handled by the framework.
> 
>  - Think ahead about threading and use wrapper to access fields that
>  will
>    require locking or synchronization.
> 
>  - Think about features whose need I realized while trying to get it
>  working:
>    distinguish productive / processing activation, synchronize several
>  filter
>    graphs.
> 
>  Please feel free to ask details about any of these points: not only
>  would
>  getting interest help me stay motivated, but discussing implementation
>  details and explaining the design would help me having a clear idea of
>  the
>  whole system.
> >>>
> >>> For start removal of recursiveness is mostly I'm interested in.
> >>> What needs to be done for that, can I help somehow?
> >>>
> >>
> >> Hi,
> >>
> >> So what's remain to be done to have frame threading in lavfi?
> >>
> >
> > Ping
> >
> 
> Ping

If noone, who has time to reply knows the awnser then you probably have to
find it out from the code and any unfinished patchsets

sending nicolas a private mail may also be more vissible to him than the ML
in case he is busy

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

He who knows, does not speak. He who speaks, does not know.

Lao Tzu


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-03 Thread Paul B Mahol
On 6/2/18, Paul B Mahol  wrote:
> On 5/2/18, Paul B Mahol  wrote:
>> On 9/11/16, Paul B Mahol  wrote:
>>> On 9/10/16, Nicolas George  wrote:
 Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a ecrit :
> So everybody agrees, we should proceed.

 I am proceeding, but as you can see in the patch, there is still a fair
 amount of work to be done. Still, people can help if they want to speed
 things up, especially since a significant part of the work is design
 decisions that I can not do alone and will need to be discussed.

 What needs to be done (using this mail as a notepad, but including the
 tasks
 where help is required):

 - Finish documenting the scheduling and make sure the implementation
 matches
   the documentation.

 - Discuss if "private_fields.h" is acceptable or decide another
 solution.

 - Clearly identify and isolate the parts of the scheduling that are
 needed
   only for request_frame()/request_frame() compatibility.

 - Decide exactly what parts of the scheduling are the responsibility of
   filters (possibly in the compatibility activate function) and what
 parts
   are handled by the framework.

 - Think ahead about threading and use wrapper to access fields that
 will
   require locking or synchronization.

 - Think about features whose need I realized while trying to get it
 working:
   distinguish productive / processing activation, synchronize several
 filter
   graphs.

 Please feel free to ask details about any of these points: not only
 would
 getting interest help me stay motivated, but discussing implementation
 details and explaining the design would help me having a clear idea of
 the
 whole system.
>>>
>>> For start removal of recursiveness is mostly I'm interested in.
>>> What needs to be done for that, can I help somehow?
>>>
>>
>> Hi,
>>
>> So what's remain to be done to have frame threading in lavfi?
>>
>
> Ping
>

Ping
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-06-02 Thread Paul B Mahol
On 5/2/18, Paul B Mahol  wrote:
> On 9/11/16, Paul B Mahol  wrote:
>> On 9/10/16, Nicolas George  wrote:
>>> Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a ecrit :
 So everybody agrees, we should proceed.
>>>
>>> I am proceeding, but as you can see in the patch, there is still a fair
>>> amount of work to be done. Still, people can help if they want to speed
>>> things up, especially since a significant part of the work is design
>>> decisions that I can not do alone and will need to be discussed.
>>>
>>> What needs to be done (using this mail as a notepad, but including the
>>> tasks
>>> where help is required):
>>>
>>> - Finish documenting the scheduling and make sure the implementation
>>> matches
>>>   the documentation.
>>>
>>> - Discuss if "private_fields.h" is acceptable or decide another
>>> solution.
>>>
>>> - Clearly identify and isolate the parts of the scheduling that are
>>> needed
>>>   only for request_frame()/request_frame() compatibility.
>>>
>>> - Decide exactly what parts of the scheduling are the responsibility of
>>>   filters (possibly in the compatibility activate function) and what
>>> parts
>>>   are handled by the framework.
>>>
>>> - Think ahead about threading and use wrapper to access fields that will
>>>   require locking or synchronization.
>>>
>>> - Think about features whose need I realized while trying to get it
>>> working:
>>>   distinguish productive / processing activation, synchronize several
>>> filter
>>>   graphs.
>>>
>>> Please feel free to ask details about any of these points: not only
>>> would
>>> getting interest help me stay motivated, but discussing implementation
>>> details and explaining the design would help me having a clear idea of
>>> the
>>> whole system.
>>
>> For start removal of recursiveness is mostly I'm interested in.
>> What needs to be done for that, can I help somehow?
>>
>
> Hi,
>
> So what's remain to be done to have frame threading in lavfi?
>

Ping
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2018-05-02 Thread Paul B Mahol
On 9/11/16, Paul B Mahol  wrote:
> On 9/10/16, Nicolas George  wrote:
>> Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>>> So everybody agrees, we should proceed.
>>
>> I am proceeding, but as you can see in the patch, there is still a fair
>> amount of work to be done. Still, people can help if they want to speed
>> things up, especially since a significant part of the work is design
>> decisions that I can not do alone and will need to be discussed.
>>
>> What needs to be done (using this mail as a notepad, but including the
>> tasks
>> where help is required):
>>
>> - Finish documenting the scheduling and make sure the implementation
>> matches
>>   the documentation.
>>
>> - Discuss if "private_fields.h" is acceptable or decide another solution.
>>
>> - Clearly identify and isolate the parts of the scheduling that are
>> needed
>>   only for request_frame()/request_frame() compatibility.
>>
>> - Decide exactly what parts of the scheduling are the responsibility of
>>   filters (possibly in the compatibility activate function) and what
>> parts
>>   are handled by the framework.
>>
>> - Think ahead about threading and use wrapper to access fields that will
>>   require locking or synchronization.
>>
>> - Think about features whose need I realized while trying to get it
>> working:
>>   distinguish productive / processing activation, synchronize several
>> filter
>>   graphs.
>>
>> Please feel free to ask details about any of these points: not only would
>> getting interest help me stay motivated, but discussing implementation
>> details and explaining the design would help me having a clear idea of
>> the
>> whole system.
>
> For start removal of recursiveness is mostly I'm interested in.
> What needs to be done for that, can I help somehow?
>

Hi,

So what's remain to be done to have frame threading in lavfi?
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-11 Thread Paul B Mahol
On 9/10/16, Nicolas George  wrote:
> Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>> So everybody agrees, we should proceed.
>
> I am proceeding, but as you can see in the patch, there is still a fair
> amount of work to be done. Still, people can help if they want to speed
> things up, especially since a significant part of the work is design
> decisions that I can not do alone and will need to be discussed.
>
> What needs to be done (using this mail as a notepad, but including the
> tasks
> where help is required):
>
> - Finish documenting the scheduling and make sure the implementation
> matches
>   the documentation.
>
> - Discuss if "private_fields.h" is acceptable or decide another solution.
>
> - Clearly identify and isolate the parts of the scheduling that are needed
>   only for request_frame()/request_frame() compatibility.
>
> - Decide exactly what parts of the scheduling are the responsibility of
>   filters (possibly in the compatibility activate function) and what parts
>   are handled by the framework.
>
> - Think ahead about threading and use wrapper to access fields that will
>   require locking or synchronization.
>
> - Think about features whose need I realized while trying to get it
> working:
>   distinguish productive / processing activation, synchronize several
> filter
>   graphs.
>
> Please feel free to ask details about any of these points: not only would
> getting interest help me stay motivated, but discussing implementation
> details and explaining the design would help me having a clear idea of the
> whole system.

For start removal of recursiveness is mostly I'm interested in.
What needs to be done for that, can I help somehow?
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-10 Thread Nicolas George
Le quartidi 24 fructidor, an CCXXIV, Paul B Mahol a écrit :
> So everybody agrees, we should proceed.

I am proceeding, but as you can see in the patch, there is still a fair
amount of work to be done. Still, people can help if they want to speed
things up, especially since a significant part of the work is design
decisions that I can not do alone and will need to be discussed.

What needs to be done (using this mail as a notepad, but including the tasks
where help is required):

- Finish documenting the scheduling and make sure the implementation matches
  the documentation.

- Discuss if "private_fields.h" is acceptable or decide another solution.

- Clearly identify and isolate the parts of the scheduling that are needed
  only for request_frame()/request_frame() compatibility.

- Decide exactly what parts of the scheduling are the responsibility of
  filters (possibly in the compatibility activate function) and what parts
  are handled by the framework.

- Think ahead about threading and use wrapper to access fields that will
  require locking or synchronization.

- Think about features whose need I realized while trying to get it working:
  distinguish productive / processing activation, synchronize several filter
  graphs.

Please feel free to ask details about any of these points: not only would
getting interest help me stay motivated, but discussing implementation
details and explaining the design would help me having a clear idea of the
whole system.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-09 Thread Paul B Mahol
On 9/4/16, Michael Niedermayer  wrote:
> On Sun, Sep 04, 2016 at 10:16:57PM +0200, Nicolas George wrote:
>> Le nonidi 19 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>> > And what would that cleaner implementation do?
>>
>> There is a rather simple implementation of format change in lavfi: have
>> on
>> each input a boolean flag "can_deal_with_format_change". If a frame with
>> a
>> different format arrives on a filter that does not have the flag, just
>> insert the scale/aresample filter on the link to force the format to be
>> the
>> same.
>>
>> It is not entirely optimal, but it is quite simple and does the work.
>>
>> And it could probably be implemented independently of my changes. But I
>> do
>> not want to spend time on it before finishing this, or it will never end.
>>
>> (Actually, I think we had something like that for buffersrc but it
>> disappeared at some time. avconv also has some very strange code to
>> handle
>> format changes.)
>
> we have a few filters that should work fine with format changes
> i would have assumed that still works
>
>
>>
>> > At current rate your lavfi changes will never get in, which sucks.
>>
>> Which is why I would like to be authorized to ignore this kind of
>> hiccups.
>> Format change does not currently work, this particular case used to work
>> only by chance. Can I break it and repair it later?
>
> I think there are 2 things
> First, filters simply supporting format changes without any magic or
> reinitialization,
> they just work if formats change. This should continue to be so
> but it also shouldnt really add any complication or am i missing
> something ?
> This was the type i was interrested in previously ... (until patch
> reviews made me loose interrest)
>
> Second, graph reinitialization, this is hard to get right and if its
> done right it still doesnt work for many usecases due to destroyed
> state.
> I dont think temporary worsening graph reinitialization is a problem
> but thats just my oppinion
>

So everybody agrees, we should proceed.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-04 Thread Michael Niedermayer
On Sun, Sep 04, 2016 at 10:16:57PM +0200, Nicolas George wrote:
> Le nonidi 19 fructidor, an CCXXIV, Paul B Mahol a écrit :
> > And what would that cleaner implementation do?
> 
> There is a rather simple implementation of format change in lavfi: have on
> each input a boolean flag "can_deal_with_format_change". If a frame with a
> different format arrives on a filter that does not have the flag, just
> insert the scale/aresample filter on the link to force the format to be the
> same.
> 
> It is not entirely optimal, but it is quite simple and does the work.
> 
> And it could probably be implemented independently of my changes. But I do
> not want to spend time on it before finishing this, or it will never end.
> 
> (Actually, I think we had something like that for buffersrc but it
> disappeared at some time. avconv also has some very strange code to handle
> format changes.)

we have a few filters that should work fine with format changes
i would have assumed that still works


> 
> > At current rate your lavfi changes will never get in, which sucks.
> 
> Which is why I would like to be authorized to ignore this kind of hiccups.
> Format change does not currently work, this particular case used to work
> only by chance. Can I break it and repair it later?

I think there are 2 things
First, filters simply supporting format changes without any magic or
reinitialization,
they just work if formats change. This should continue to be so
but it also shouldnt really add any complication or am i missing
something ?
This was the type i was interrested in previously ... (until patch
reviews made me loose interrest)

Second, graph reinitialization, this is hard to get right and if its
done right it still doesnt work for many usecases due to destroyed
state.
I dont think temporary worsening graph reinitialization is a problem
but thats just my oppinion


[...]


-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Breaking DRM is a little like attempting to break through a door even
though the window is wide open and the only thing in the house is a bunch
of things you dont want and which you would get tomorrow for free anyway


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-04 Thread Paul B Mahol
On 9/4/16, Nicolas George  wrote:
> Which is why I would like to be authorized to ignore this kind of hiccups.
> Format change does not currently work, this particular case used to work
> only by chance. Can I break it and repair it later?

I agree, format change in filtergraph, when works it works by pure luck.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-04 Thread Nicolas George
Le nonidi 19 fructidor, an CCXXIV, Paul B Mahol a écrit :
> And what would that cleaner implementation do?

There is a rather simple implementation of format change in lavfi: have on
each input a boolean flag "can_deal_with_format_change". If a frame with a
different format arrives on a filter that does not have the flag, just
insert the scale/aresample filter on the link to force the format to be the
same.

It is not entirely optimal, but it is quite simple and does the work.

And it could probably be implemented independently of my changes. But I do
not want to spend time on it before finishing this, or it will never end.

(Actually, I think we had something like that for buffersrc but it
disappeared at some time. avconv also has some very strange code to handle
format changes.)

> At current rate your lavfi changes will never get in, which sucks.

Which is why I would like to be authorized to ignore this kind of hiccups.
Format change does not currently work, this particular case used to work
only by chance. Can I break it and repair it later?

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-04 Thread Paul B Mahol
On 9/4/16, Nicolas George  wrote:
> Le quintidi 15 fructidor, an CCXXIV, Michael Niedermayer a ecrit :
>> ./ffmpeg -i tickets//679/oversized_pgs_subtitles.mkv -filter_complex
>> '[0:s:1]scale=848x480,[0:v]overlay=shortest=1' test.avi
>> fails assertion:
>> Assertion progress failed at libavfilter/avfilter.c:1391
>>
>> https://trac.ffmpeg.org/attachment/ticket/679/oversized_pgs_subtitles.mkv
>
> This one was an easy fix.
>
>> ffmpeg -v 0 -i tickets/3539/crash.swf -map 0 -t  1  -f framecrc -
>> output changes, not sure this is a bug but reporting it anyway as i
>> noticed
>
> This one is much more tricky. When a format change is detected, ffmpeg
> resets the graphs but does not drain them first. With this big change,
> frames inserted do not reach as far as previously: less immediate
> processing, more possibility to work in a thread.
>
> But the issue also happens with the current code:
>
> ffmpeg -i a%02d.png -vf fps=50 -f framecrc -
>
> with two input PNGs of different size, will only output the second one,
> because the first one is buffered by vf_fps and discarded when the graph is
> reconfigured.
>
> This is very tricky, because flushing the graph on format change is not the
> same thing as flushing it at EOF. Imagine the input format for an overlay
> change: flushing would mean finishing the background video with the last
> image before the format change, this is not at all what expected.
>
> I do not see a clean way of getting this particular example and all similar
> ones working. I can see an ugly solution, but I would rather avoid it and
> wait for a cleaner implementation of handling format changes.

And what would that cleaner implementation do?

At current rate your lavfi changes will never get in, which sucks.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-09-04 Thread Nicolas George
Le quintidi 15 fructidor, an CCXXIV, Michael Niedermayer a écrit :
> ./ffmpeg -i tickets//679/oversized_pgs_subtitles.mkv -filter_complex 
> '[0:s:1]scale=848x480,[0:v]overlay=shortest=1' test.avi
> fails assertion:
> Assertion progress failed at libavfilter/avfilter.c:1391
> 
> https://trac.ffmpeg.org/attachment/ticket/679/oversized_pgs_subtitles.mkv

This one was an easy fix.

> ffmpeg -v 0 -i tickets/3539/crash.swf -map 0 -t  1  -f framecrc -
> output changes, not sure this is a bug but reporting it anyway as i
> noticed

This one is much more tricky. When a format change is detected, ffmpeg
resets the graphs but does not drain them first. With this big change,
frames inserted do not reach as far as previously: less immediate
processing, more possibility to work in a thread.

But the issue also happens with the current code:

ffmpeg -i a%02d.png -vf fps=50 -f framecrc -

with two input PNGs of different size, will only output the second one,
because the first one is buffered by vf_fps and discarded when the graph is
reconfigured.

This is very tricky, because flushing the graph on format change is not the
same thing as flushing it at EOF. Imagine the input format for an overlay
change: flushing would mean finishing the background video with the last
image before the format change, this is not at all what expected.

I do not see a clean way of getting this particular example and all similar
ones working. I can see an ugly solution, but I would rather avoid it and
wait for a cleaner implementation of handling format changes.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Michael Niedermayer
On Wed, Aug 31, 2016 at 02:20:27PM +0200, Paul B Mahol wrote:
> On 8/31/16, Michael Niedermayer  wrote:
> > On Wed, Aug 31, 2016 at 10:18:31AM +0200, Paul B Mahol wrote:
> >> On 8/30/16, Nicolas George  wrote:
> >> > Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a ecrit :
> >> >> the filter frame multithreading would just internally, in filter
> >> >> context
> >> >> cache frames, once enough frames are in cache - call workers and be
> >> >> done,
> >> >> repeat. At eof call workers on remaining frames in cache.
> >> >
> >> > I have no idea how much thought you have already given to it, but I am
> >> > pretty sure it is not as simple as that with the current architecture.
> >> > By
> >> > far.
> >>
> >> Yes, it is very simple. You just need to allocate own buffers that
> >> would be needed
> >> by filter. Than you just call ctx->internal->execute()...
> >
> > I would have thought that
> > filter_frame() would insert frames into a input fifo and remove &
> > output frames from a output fifo and pass them on to the next filters
> > ff_filter_frame()
> > if the output fifo is empty and the input full it would block
> > (implicitly gving its CPU time to the workers)
> >
> > and there would be backgound threads continuosly running that pick
> > frames out of the input fifo and process them into an output fifo
> > and wake up the main thread if the fifo state changes
> >
> > using execute() would IIUC (please correct me if i misunderstand)
> > only execute when ff_filter_frame() is executed, this would restrict
> > what can execute at the same time, also execute() needs to
> > wait for all threads to finish before it can return, this too limits
> > paralelism compared to continuously running workers that more
> > independantly pick tasks and work on them
> > but maybe i misuderstood how you intend to use execute()
> 
> How this one would work if for example first frame needs 10 seconds to
> generate and all other needs 1 second? How would you know from your output
> fifo that you got first frame, so you can pass first frame to next filter(s)?

one possibility:

frame 1 comes in

first worker adds a output entry to the end of the output linked list
and  starts working on frame 1

frame 2 comes in

second worker adds a output entry to the end of the output linked list
and starts working on frame 2

second worker finishes and replaces/adds its output to its output entry

frame 3 comes in, the next output entry (from worker 1) is not finished
so nothing can be output yet

second worker adds a output entry to the end of the output linked list
and starts working on frame 3

first worker finishes and replaces/adds its output to its output entry

frame 4 comes in, the next output entry (from worker 1) is ready and
is sent to the next filter



> 
> How do you know that your solution is always optimal? (Not saying that
> mine is anything better)

i dont know if its always optimal, in fact it likely is not always
optimal. It seemed simple and reasonably good.


> How do you limit number of threads that will specifically work for
> this filter instance?

i had assumed that a fixed number of worker threads would be used
for each filter, some of these may be idle and consume no CPU if there
is nothing to do
I assumed a fixed number for simplicity, nothing in the design
should depend on the number being fixed


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The real ebay dictionary, page 3
"Rare item" - "Common item with rare defect or maybe just a lie"
"Professional" - "'Toy' made in china, not functional except as doorstop"
"Experts will know" - "The seller hopes you are not an expert"


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Paul B Mahol
On 8/31/16, Nicolas George  wrote:
> Le quintidi 15 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>> Yes, it is very simple. You just need to allocate own buffers that
>> would be needed
>> by filter. Than you just call ctx->internal->execute()...
>
> I am sorry, but with the current API, to have something that really works
> with threads, not just pseudo-threading that is so full of blocking that it
> is mostly sequential, it is much more complicated than that. I think.
> Please prove me wrong.

How would one use already available workers from lavfi/pthread.c ?
If somebody wrote better implementation than me and its faster I will drop mine.

>
>> Does this fixes buffer queue overflow for specific scenarios?
>
> Not by itself, but then rewriting framesync to use the link's fifo becomes
> very easy and fix all the cases that can possibly work.
>
> Regards,
>
> --
>   Nicolas George
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Nicolas George
Le quintidi 15 fructidor, an CCXXIV, Paul B Mahol a écrit :
> Yes, it is very simple. You just need to allocate own buffers that
> would be needed
> by filter. Than you just call ctx->internal->execute()...

I am sorry, but with the current API, to have something that really works
with threads, not just pseudo-threading that is so full of blocking that it
is mostly sequential, it is much more complicated than that. I think. Please
prove me wrong.

> Does this fixes buffer queue overflow for specific scenarios?

Not by itself, but then rewriting framesync to use the link's fifo becomes
very easy and fix all the cases that can possibly work.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Nicolas George
Le quintidi 15 fructidor, an CCXXIV, Michael Niedermayer a écrit :
> ./ffmpeg -i tickets//679/oversized_pgs_subtitles.mkv -filter_complex 
> '[0:s:1]scale=848x480,[0:v]overlay=shortest=1' test.avi
> fails assertion:
> Assertion progress failed at libavfilter/avfilter.c:1391
> 
> https://trac.ffmpeg.org/attachment/ticket/679/oversized_pgs_subtitles.mkv
> 
> ffmpeg -v 0 -i tickets/3539/crash.swf -map 0 -t  1  -f framecrc -
> output changes, not sure this is a bug but reporting it anyway as i
> noticed
> 
> http://samples.ffmpeg.org/ffmpeg-bugs/trac/ticket3539/
> 
> doc/examples/filtering_video matrixbench_mpeg2.mpg
> also breaks

Thanks for the testing. I reproduced the first one (does not fail with "-f
null -") and will try to fix all these.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Paul B Mahol
On 8/31/16, Michael Niedermayer  wrote:
> On Wed, Aug 31, 2016 at 10:18:31AM +0200, Paul B Mahol wrote:
>> On 8/30/16, Nicolas George  wrote:
>> > Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>> >> the filter frame multithreading would just internally, in filter
>> >> context
>> >> cache frames, once enough frames are in cache - call workers and be
>> >> done,
>> >> repeat. At eof call workers on remaining frames in cache.
>> >
>> > I have no idea how much thought you have already given to it, but I am
>> > pretty sure it is not as simple as that with the current architecture.
>> > By
>> > far.
>>
>> Yes, it is very simple. You just need to allocate own buffers that
>> would be needed
>> by filter. Than you just call ctx->internal->execute()...
>
> I would have thought that
> filter_frame() would insert frames into a input fifo and remove &
> output frames from a output fifo and pass them on to the next filters
> ff_filter_frame()
> if the output fifo is empty and the input full it would block
> (implicitly gving its CPU time to the workers)
>
> and there would be backgound threads continuosly running that pick
> frames out of the input fifo and process them into an output fifo
> and wake up the main thread if the fifo state changes
>
> using execute() would IIUC (please correct me if i misunderstand)
> only execute when ff_filter_frame() is executed, this would restrict
> what can execute at the same time, also execute() needs to
> wait for all threads to finish before it can return, this too limits
> paralelism compared to continuously running workers that more
> independantly pick tasks and work on them
> but maybe i misuderstood how you intend to use execute()

How this one would work if for example first frame needs 10 seconds to
generate and all other needs 1 second? How would you know from your output
fifo that you got first frame, so you can pass first frame to next filter(s)?

How do you know that your solution is always optimal? (Not saying that
mine is anything better)
How do you limit number of threads that will specifically work for
this filter instance?
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Michael Niedermayer
On Wed, Aug 31, 2016 at 10:18:31AM +0200, Paul B Mahol wrote:
> On 8/30/16, Nicolas George  wrote:
> > Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a ecrit :
> >> the filter frame multithreading would just internally, in filter context
> >> cache frames, once enough frames are in cache - call workers and be done,
> >> repeat. At eof call workers on remaining frames in cache.
> >
> > I have no idea how much thought you have already given to it, but I am
> > pretty sure it is not as simple as that with the current architecture. By
> > far.
> 
> Yes, it is very simple. You just need to allocate own buffers that
> would be needed
> by filter. Than you just call ctx->internal->execute()...

I would have thought that
filter_frame() would insert frames into a input fifo and remove &
output frames from a output fifo and pass them on to the next filters
ff_filter_frame()
if the output fifo is empty and the input full it would block
(implicitly gving its CPU time to the workers)

and there would be backgound threads continuosly running that pick
frames out of the input fifo and process them into an output fifo
and wake up the main thread if the fifo state changes

using execute() would IIUC (please correct me if i misunderstand)
only execute when ff_filter_frame() is executed, this would restrict
what can execute at the same time, also execute() needs to
wait for all threads to finish before it can return, this too limits
paralelism compared to continuously running workers that more
independantly pick tasks and work on them
but maybe i misuderstood how you intend to use execute()


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

You can kill me, but you cannot change the truth.


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-31 Thread Paul B Mahol
On 8/30/16, Nicolas George  wrote:
> Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a ecrit :
>> the filter frame multithreading would just internally, in filter context
>> cache frames, once enough frames are in cache - call workers and be done,
>> repeat. At eof call workers on remaining frames in cache.
>
> I have no idea how much thought you have already given to it, but I am
> pretty sure it is not as simple as that with the current architecture. By
> far.

Yes, it is very simple. You just need to allocate own buffers that
would be needed
by filter. Than you just call ctx->internal->execute()...

> In the meantime, I finally got the non-recursive version passing FATE. Here
> are the raw patch, so that people can have an idea what this is all about.
> There is still a lot of cleanup and documentation to do, as you can see.

Does this fixes buffer queue overflow for specific scenarios?
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-30 Thread Michael Niedermayer
On Tue, Aug 30, 2016 at 09:08:18PM +0200, Nicolas George wrote:
> Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a écrit :
> > the filter frame multithreading would just internally, in filter context
> > cache frames, once enough frames are in cache - call workers and be done,
> > repeat. At eof call workers on remaining frames in cache.
> 
> I have no idea how much thought you have already given to it, but I am
> pretty sure it is not as simple as that with the current architecture. By
> far.
> 
> In the meantime, I finally got the non-recursive version passing FATE. Here
> are the raw patch, so that people can have an idea what this is all about.
> There is still a lot of cleanup and documentation to do, as you can see.
> 
> Regards,
> 
> -- 
>   Nicolas George

./ffmpeg -i tickets//679/oversized_pgs_subtitles.mkv -filter_complex 
'[0:s:1]scale=848x480,[0:v]overlay=shortest=1' test.avi
fails assertion:
Assertion progress failed at libavfilter/avfilter.c:1391

https://trac.ffmpeg.org/attachment/ticket/679/oversized_pgs_subtitles.mkv

ffmpeg -v 0 -i tickets/3539/crash.swf -map 0 -t  1  -f framecrc -
output changes, not sure this is a bug but reporting it anyway as i
noticed

http://samples.ffmpeg.org/ffmpeg-bugs/trac/ticket3539/

doc/examples/filtering_video matrixbench_mpeg2.mpg
also breaks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Why not whip the teacher when the pupil misbehaves? -- Diogenes of Sinope


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-30 Thread Nicolas George
Le quartidi 14 fructidor, an CCXXIV, Paul B Mahol a écrit :
> the filter frame multithreading would just internally, in filter context
> cache frames, once enough frames are in cache - call workers and be done,
> repeat. At eof call workers on remaining frames in cache.

I have no idea how much thought you have already given to it, but I am
pretty sure it is not as simple as that with the current architecture. By
far.

In the meantime, I finally got the non-recursive version passing FATE. Here
are the raw patch, so that people can have an idea what this is all about.
There is still a lot of cleanup and documentation to do, as you can see.

Regards,

-- 
  Nicolas George
From b73206d61b94f5b3c2cd854d901c2a59c423bcde Mon Sep 17 00:00:00 2001
From: Nicolas George 
Date: Tue, 30 Aug 2016 20:12:20 +0200
Subject: [PATCH 1/4] fate/colorkey: disable audio stream.

The test is not supposed to cover audio.
Also, using -vframes along with an audio stream depends on
the exact order the frames are processed by filters, it is
too much constraint to guarantee.

Signed-off-by: Nicolas George 
---
 tests/fate/ffmpeg.mak | 2 +-
 tests/ref/fate/ffmpeg-filter_colorkey | 9 -
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/tests/fate/ffmpeg.mak b/tests/fate/ffmpeg.mak
index 3b91c12..60f1303 100644
--- a/tests/fate/ffmpeg.mak
+++ b/tests/fate/ffmpeg.mak
@@ -20,7 +20,7 @@ fate-ffmpeg-filter_complex: CMD = framecrc -filter_complex color=d=1:r=5 -fflags
 
 FATE_SAMPLES_FFMPEG-$(CONFIG_COLORKEY_FILTER) += fate-ffmpeg-filter_colorkey
 fate-ffmpeg-filter_colorkey: tests/data/filtergraphs/colorkey
-fate-ffmpeg-filter_colorkey: CMD = framecrc -idct simple -fflags +bitexact -flags +bitexact  -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/cavs/cavs.mpg -fflags +bitexact -flags +bitexact -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/lena.pnm -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/colorkey -sws_flags +accurate_rnd+bitexact -fflags +bitexact -flags +bitexact -qscale 2 -vframes 10
+fate-ffmpeg-filter_colorkey: CMD = framecrc -idct simple -fflags +bitexact -flags +bitexact  -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/cavs/cavs.mpg -fflags +bitexact -flags +bitexact -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/lena.pnm -an -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/colorkey -sws_flags +accurate_rnd+bitexact -fflags +bitexact -flags +bitexact -qscale 2 -vframes 10
 
 FATE_FFMPEG-$(CONFIG_COLOR_FILTER) += fate-ffmpeg-lavfi
 fate-ffmpeg-lavfi: CMD = framecrc -lavfi color=d=1:r=5 -fflags +bitexact
diff --git a/tests/ref/fate/ffmpeg-filter_colorkey b/tests/ref/fate/ffmpeg-filter_colorkey
index 9fbdfeb..effc13b 100644
--- a/tests/ref/fate/ffmpeg-filter_colorkey
+++ b/tests/ref/fate/ffmpeg-filter_colorkey
@@ -3,17 +3,8 @@
 #codec_id 0: rawvideo
 #dimensions 0: 720x576
 #sar 0: 0/1
-#tb 1: 1/48000
-#media_type 1: audio
-#codec_id 1: pcm_s16le
-#sample_rate 1: 48000
-#channel_layout 1: 3
 0,  0,  0,1,   622080, 0x4e30accb
-1,  0,  0, 1152, 4608, 0x
-1,   1152,   1152, 1152, 4608, 0xbca29063
 0,  1,  1,1,   622080, 0x7d941c14
-1,   2304,   2304, 1152, 4608, 0x6e70df10
-1,   3456,   3456, 1152, 4608, 0x95e6a535
 0,  2,  2,1,   622080, 0xf7451c5b
 0,  3,  3,1,   622080, 0xb2c74319
 0,  4,  4,1,   622080, 0xc9b80b79
-- 
2.9.3

From b55d3b23665663ef61435c19b4a722740e048284 Mon Sep 17 00:00:00 2001
From: Nicolas George 
Date: Tue, 30 Aug 2016 15:28:41 +0200
Subject: [PATCH 2/4] lavfi: split frame_count between input and output.

AVFilterLink.frame_count is supposed to count the number of frames
that were passed on the link, but with min_samples, that number is
not always the same for the source and destination filters.
With the addition of a FIFO on the link, the difference will become
more significant.

Split the variable in two: frame_count_in counts the number of
frames that entered the link, frame_count_out counts the number
of frames that were sent to the destination filter.

Signed-off-by: Nicolas George 
---
 libavfilter/af_ashowinfo.c   |  2 +-
 libavfilter/af_volume.c  |  2 +-
 libavfilter/asrc_sine.c  |  2 +-
 libavfilter/avf_showfreqs.c  |  4 ++--
 libavfilter/avfilter.c   |  5 +++--
 libavfilter/avfilter.h   |  2 +-
 libavfilter/f_loop.c |  2 +-
 libavfilter/f_metadata.c |  4 ++--
 libavfilter/f_select.c   |  2 +-
 libavfilter/f_streamselect.c |  2 +-
 libavfilter/vf_bbox.c|  2 +-
 libavfilter/vf_blackdetect.c |  2 +-
 libavfilter/vf_blend.c   |  2 +-
 libavfilter/vf_crop.c|  2 +-
 libavfilter/vf_decimate.c|  2 +-
 libavfilter/vf_detelecine.c  |  2 +-
 libavfilter/vf_drawtext.c|  4 ++--
 libavfilter/vf_eq.c 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-30 Thread Paul B Mahol
On Tuesday, August 30, 2016, Nicolas George  wrote:

> Le duodi 12 fructidor, an CCXXIV, Paul B Mahol a écrit :
> > Nicolas, what is status of this?
> >
> > I'm currently interested in frame multithreading in lavfi.
>
> I am currently locked on a patch series to replace the recursive calls to
> filter_frames() with a FIFO on each link.
>
> I think this is an absolute necessity before considering inter-filter
> multithreading.
>
> Unfortunately, this is very tricky business because the filters'
> implementation are not ready for it. Filters must be activated, by actually
> calling their filter_frame() methods, when a frame is available, but not
> repeatedly so when they can not perform any work, but they often provide no
> visible test for that. Plus, the global API for activating a filtergraph
> was
> not designed with that kind of working in mind.
>
> I have made significant progress in the last weeks. I think I have got the
> propagating of frames working, and the propagating of EOF conditions on
> 1-to-1 filters too, but there is still an issue with filters with multiple
> inputs.
>
> I could post the whole patch as is at any time, but I do not think it would
> do much good as is.
>
>
the filter frame multithreading would just internally, in filter context
cache frames, once enough frames are in cache - call workers and be done,
repeat. At eof call workers on remaining frames in cache.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-30 Thread Nicolas George
Le duodi 12 fructidor, an CCXXIV, Paul B Mahol a écrit :
> Nicolas, what is status of this?
> 
> I'm currently interested in frame multithreading in lavfi.

I am currently locked on a patch series to replace the recursive calls to
filter_frames() with a FIFO on each link.

I think this is an absolute necessity before considering inter-filter
multithreading.

Unfortunately, this is very tricky business because the filters'
implementation are not ready for it. Filters must be activated, by actually
calling their filter_frame() methods, when a frame is available, but not
repeatedly so when they can not perform any work, but they often provide no
visible test for that. Plus, the global API for activating a filtergraph was
not designed with that kind of working in mind.

I have made significant progress in the last weeks. I think I have got the
propagating of frames working, and the propagating of EOF conditions on
1-to-1 filters too, but there is still an issue with filters with multiple
inputs.

I could post the whole patch as is at any time, but I do not think it would
do much good as is.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2016-08-28 Thread Paul B Mahol
Hi,

On Thu, Oct 30, 2014 at 1:59 PM, Michael Niedermayer 
wrote:

> On Thu, Oct 30, 2014 at 11:50:46AM +0100, Stefano Sabatini wrote:
> > Sorry for the slow reply.
> >
> > On date Wednesday 2014-10-22 23:45:42 +0200, Nicolas George encoded:
> > > [ CCing Anton, as most that is written here also apply to libav too,
> and
> > > this would be a good occasion to try a cross-fork cooperation; if that
> is
> > > not wanted, please let us know so we can drop the cc. ]
> > >
> > > 1. Problems with the current design
> > >
> > >   1.1. Mixed input-/output-driven model
> > >
> > > Currently, lavfi is designed to work in a mixed input-driven and
> > > output-driven model. That means the application needs sometimes to
> add
> > > input to buffersources and sometimes request output to
> buffersinks. This
> > > is a bit of a nuisance, because it requires the application to do
> it
> > > properly: adding input on the wrong input or requesting a frame on
> the
> > > wrong output will cause extra memory consumption or latency.
> > >
> > > With the libav API, it can not work at all since there is no
> mechanism
> > > to determine which input needs a frame in order to proceed.
> > >
> > > The libav API is clearly designed for a more output-driven
> > > implementation, with FIFOs anywhere to prevent input-driven frames
> to
> > > reach unready filters. Unfortunately, since it is impossible from
> the
> > > outside to guess what output will get a frame next, that can cause
> > > frames to accumulate anywhere in the filter graph, eating a lot of
> > > memory unnecessarily.
> > >
> > > FFmpeg's API has eliminated FIFOs in favour of queues in filters
> that
> > > need it, but these queues can not be controlled for unusual filter
> > > graphs with extreme needs. Also, there still is an implicit FIFO
> inside
> > > buffersink.
> > >
> > >   1.2. Recursive implementation
> > >
> > > All work in a filter graph is triggered by recursive invocations
> of the
> > > filters' methods. It makes debugging harder. It also can lead to
> large
> > > stack usage and makes frame- and filter-level multithreading
> harder to
> > > implement. It also prevents some diagnosis from working reliably.
> > >
> > >   1.3. EOF handling
> > >
> > > Currently, EOF is propagated only through the return value of the
> > > request_frame() method. That means it only works in an
> output-driven
> > > scheme. It also means that it has no timestamp attached to it;
> this is
> > > an issue for filters where the duration of the last frame is
> relevant,
> > > like vf_fps.
> > >
> > >   1.4. Latency
> > >
> > > Some filters need to know the timestamp of the next frame in order
> to
> > > know when the current frame will stop and be able to process it:
> > > overlay, fps are two examples. These filters will introduce a
> latency of
> > > one input frame that could otherwise be avoided.
> > >
> > >   1.5. Timestamps
> > >
> > > Some filters do not care about timestamps at all. Some check and
> have a
> > > proper handling of NOPTS values. Some filters just assume the
> frames
> > > will have timestamps, and possibly make extra assumptions on that:
> > > monotony, consistency, etc. That is an inconsistent mess.
> > >
> > >   1.6. Sparse streams
> > >
> > > There is a more severe instance of the latency issue when the input
> > > comes from an interleaved sparse stream: in that case, waiting for
> the
> > > next frame in order to find the end of the current one may require
> > > demuxing a large chunk of input, in turn provoking a lot of
> activity on
> > > other inputs of the graph.
> >
> > Other issues.
> >
>
> > S1. the filtergraph can't properly readapt to mid-stream
> > changes involving assumed invariants (aspect ratio, size, timebase,
> > pixel format, sample_rate). Indeed the framework was designed as
> > though some of these properties (the ones defined by query_formats)
> > were not allowed to change.
>
> no, no and no :)
> the filtergraph should be able adapt to some changes like aspect,
> resolution and pixel / sample format. These are not invariants, some
> of this definitly worked when i tried it long ago
> i posted a (incomplete) patchset that fixes bugs in this relation
> long ago There are filters that can perfectly well handle changes in
> resolution, aspect, formats, ...
> and there are filters which are buggy but could when they are fixed
> also equally well support such changes
> and there are filters which fundamentally do not support some changes,
> these need to either be reinited and loose state/history or a
> scale/aresample be inserted before them when changes on their input
> occur
> for some filters reinit is not appropriate, examples are things that
> are intended to collect global statistics
> Also scale / aresample filters can serve as
> "parameter change 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-30 Thread Stefano Sabatini
Sorry for the slow reply.

On date Wednesday 2014-10-22 23:45:42 +0200, Nicolas George encoded: 
 [ CCing Anton, as most that is written here also apply to libav too, and
 this would be a good occasion to try a cross-fork cooperation; if that is
 not wanted, please let us know so we can drop the cc. ]
 
 1. Problems with the current design
 
   1.1. Mixed input-/output-driven model
 
 Currently, lavfi is designed to work in a mixed input-driven and
 output-driven model. That means the application needs sometimes to add
 input to buffersources and sometimes request output to buffersinks. This
 is a bit of a nuisance, because it requires the application to do it
 properly: adding input on the wrong input or requesting a frame on the
 wrong output will cause extra memory consumption or latency.
 
 With the libav API, it can not work at all since there is no mechanism
 to determine which input needs a frame in order to proceed.
 
 The libav API is clearly designed for a more output-driven
 implementation, with FIFOs anywhere to prevent input-driven frames to
 reach unready filters. Unfortunately, since it is impossible from the
 outside to guess what output will get a frame next, that can cause
 frames to accumulate anywhere in the filter graph, eating a lot of
 memory unnecessarily.
 
 FFmpeg's API has eliminated FIFOs in favour of queues in filters that
 need it, but these queues can not be controlled for unusual filter
 graphs with extreme needs. Also, there still is an implicit FIFO inside
 buffersink.
 
   1.2. Recursive implementation
 
 All work in a filter graph is triggered by recursive invocations of the
 filters' methods. It makes debugging harder. It also can lead to large
 stack usage and makes frame- and filter-level multithreading harder to
 implement. It also prevents some diagnosis from working reliably.
 
   1.3. EOF handling
 
 Currently, EOF is propagated only through the return value of the
 request_frame() method. That means it only works in an output-driven
 scheme. It also means that it has no timestamp attached to it; this is
 an issue for filters where the duration of the last frame is relevant,
 like vf_fps.
 
   1.4. Latency
 
 Some filters need to know the timestamp of the next frame in order to
 know when the current frame will stop and be able to process it:
 overlay, fps are two examples. These filters will introduce a latency of
 one input frame that could otherwise be avoided.
 
   1.5. Timestamps
 
 Some filters do not care about timestamps at all. Some check and have a
 proper handling of NOPTS values. Some filters just assume the frames
 will have timestamps, and possibly make extra assumptions on that:
 monotony, consistency, etc. That is an inconsistent mess.
 
   1.6. Sparse streams
 
 There is a more severe instance of the latency issue when the input
 comes from an interleaved sparse stream: in that case, waiting for the
 next frame in order to find the end of the current one may require
 demuxing a large chunk of input, in turn provoking a lot of activity on
 other inputs of the graph.

Other issues.

S1. the filtergraph can't properly readapt to mid-stream
changes involving assumed invariants (aspect ratio, size, timebase,
pixel format, sample_rate). Indeed the framework was designed as
though some of these properties (the ones defined by query_formats)
were not allowed to change.

S2. Another problem is that we initialize the filter before the
filtergraph, so for example the single filter can't readapt to the
filtergraph topology. For example it would be useful to have the split
filter to change the number of outputs depending on the number of
outputs specified, but this can't be easily achieved. (That's in my
opinion a minor problem though).

S3. It is not possible to direct commands towards a specific
filter. For this we can add an ID to each filter instance. We could
have something has:
color:left_color=c=red   [left]
color:right_color=c=blue [right]

then you can send commands (e.g. with zmq) with:
echo left_color c yellow | tools/zmqsend

S4. We should support output encoding movie. We got stuck designing
the interface for that.

...

About fifos and queues, we could add some options to control fifo
filters to limit their size.

For example we could specify the maximum number of allowed queued
frames, or the total allowed size, and the dropping policy (drop last,
drop first, drop random frame in the midst).

 2. Proposed API changes
 
   To fix/enhance all these issues, I believe a complete rethink of the
   scheduling design of the library is necessary. I propose the following
   changes.
 
   Note: some of these changes are not 100% related to the issues I raised,
   but looked like a good idea while thinking on an API rework.
 
   2.1. AVFrame.duration
 
 Add a duration field to AVFrame; if set, 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-30 Thread Michael Niedermayer
On Thu, Oct 30, 2014 at 11:50:46AM +0100, Stefano Sabatini wrote:
 Sorry for the slow reply.
 
 On date Wednesday 2014-10-22 23:45:42 +0200, Nicolas George encoded: 
  [ CCing Anton, as most that is written here also apply to libav too, and
  this would be a good occasion to try a cross-fork cooperation; if that is
  not wanted, please let us know so we can drop the cc. ]
  
  1. Problems with the current design
  
1.1. Mixed input-/output-driven model
  
  Currently, lavfi is designed to work in a mixed input-driven and
  output-driven model. That means the application needs sometimes to add
  input to buffersources and sometimes request output to buffersinks. This
  is a bit of a nuisance, because it requires the application to do it
  properly: adding input on the wrong input or requesting a frame on the
  wrong output will cause extra memory consumption or latency.
  
  With the libav API, it can not work at all since there is no mechanism
  to determine which input needs a frame in order to proceed.
  
  The libav API is clearly designed for a more output-driven
  implementation, with FIFOs anywhere to prevent input-driven frames to
  reach unready filters. Unfortunately, since it is impossible from the
  outside to guess what output will get a frame next, that can cause
  frames to accumulate anywhere in the filter graph, eating a lot of
  memory unnecessarily.
  
  FFmpeg's API has eliminated FIFOs in favour of queues in filters that
  need it, but these queues can not be controlled for unusual filter
  graphs with extreme needs. Also, there still is an implicit FIFO inside
  buffersink.
  
1.2. Recursive implementation
  
  All work in a filter graph is triggered by recursive invocations of the
  filters' methods. It makes debugging harder. It also can lead to large
  stack usage and makes frame- and filter-level multithreading harder to
  implement. It also prevents some diagnosis from working reliably.
  
1.3. EOF handling
  
  Currently, EOF is propagated only through the return value of the
  request_frame() method. That means it only works in an output-driven
  scheme. It also means that it has no timestamp attached to it; this is
  an issue for filters where the duration of the last frame is relevant,
  like vf_fps.
  
1.4. Latency
  
  Some filters need to know the timestamp of the next frame in order to
  know when the current frame will stop and be able to process it:
  overlay, fps are two examples. These filters will introduce a latency of
  one input frame that could otherwise be avoided.
  
1.5. Timestamps
  
  Some filters do not care about timestamps at all. Some check and have a
  proper handling of NOPTS values. Some filters just assume the frames
  will have timestamps, and possibly make extra assumptions on that:
  monotony, consistency, etc. That is an inconsistent mess.
  
1.6. Sparse streams
  
  There is a more severe instance of the latency issue when the input
  comes from an interleaved sparse stream: in that case, waiting for the
  next frame in order to find the end of the current one may require
  demuxing a large chunk of input, in turn provoking a lot of activity on
  other inputs of the graph.
 
 Other issues.
 

 S1. the filtergraph can't properly readapt to mid-stream
 changes involving assumed invariants (aspect ratio, size, timebase,
 pixel format, sample_rate). Indeed the framework was designed as
 though some of these properties (the ones defined by query_formats)
 were not allowed to change.

no, no and no :)
the filtergraph should be able adapt to some changes like aspect,
resolution and pixel / sample format. These are not invariants, some
of this definitly worked when i tried it long ago
i posted a (incomplete) patchset that fixes bugs in this relation
long ago There are filters that can perfectly well handle changes in
resolution, aspect, formats, ...
and there are filters which are buggy but could when they are fixed
also equally well support such changes
and there are filters which fundamentally do not support some changes,
these need to either be reinited and loose state/history or a
scale/aresample be inserted before them when changes on their input
occur
for some filters reinit is not appropriate, examples are things that
are intended to collect global statistics
Also scale / aresample filters can serve as
parameter change barriers, filters afterwards do not need to deal
with such changes


 
 S2. Another problem is that we initialize the filter before the
 filtergraph, so for example the single filter can't readapt to the
 filtergraph topology. For example it would be useful to have the split
 filter to change the number of outputs depending on the number of
 outputs specified, but this can't be easily achieved. (That's in my
 opinion a minor problem though).
 

 S3. 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-24 Thread Nicolas George
Le duodi 2 brumaire, an CCXXIII, Clement Boesch a écrit :
 I'd be curious to hear about how VapourSynth  friends handle that
 problem, because AFAIK it's only one way. It's likely they don't have to
 deal with the same problems we have though (the usage is more limited, no
 audio typically); typically because they don't seem stream but file based
 (so easy to index and exact seek etc.).

I am not sure what you mean here. This was about recursive design. It is not
very difficult to turn the recursive design into an iterative one: just
handle the call stack manually. In this particular case, most of the filters
use only tail-recursion (return ff_filter_frame(outlink, frame);), so
there is even no need to handle a stack: just replace the function calls by
message passing.

People here seem impressed by VapourSynth's nice python syntax to build
scripts, but it does not look extraordinary to me. I am convinced this is
just a syntax to build the graph. For example, where lavfi requires to
write [v][s]overlay=x=42:y=12[vs], VapourSynth would require something
like vs = v.overlay(s, 42, 12). Just a syntactic matter. The actual work
certainly happens when the graph is connected to its output.

I believe it would be rather easy to have the same kind of interface for
lavfi; all the filters are already introspectable, and that is the most
major point. Unfortunately, two issues would make it of little use. First,
lavfi handles only frames, not packets, so all the encoding and muxing would
have to be handled separately. Second, lavfi is designed to make a lot of
format decisions when the graph is complete, so the scripting capabilities
would not have a lot of relevant information to work on. I wonder how
VapourSynth handles that, but I am afraid to learn that it just do automatic
conversions on the spot without globally optimizing them.

Regards,

-- 
  Nicolas George


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-24 Thread Nicolas George
Le duodi 2 brumaire, an CCXXIII, Clement Boesch a écrit :
 More still standing problems while we are at it:
 
1.7. Metadata
 
  Metadata are not available at graph level, or at least filter
  level, only at frame level. We also need to define how they can be
  injected and fetched from the users (think rotate metadata).

That is an interesting issue. At graph level, that is easy, but that would
be mostly useless (the rotate filter changes the rotate metadata).

At filter level, that is harder because it requires all filters to forward
the metadata at init time, so extra code in a lot of filters. Furthermore,
since our graphs are not constructed in order, and can even theoretically
contain cycles, it requires another walk-over to ensure stabilization. The
whole query_formats() / config_props() is already too complex IMHO.

Actually, I believe I can propose a simple solution: inject the stream
metadata as frame metadata on dummy frames. Filters that need them are
changed to examine the dummy frames, filters that do not need them just
ignore them and let the framework forward it.

(Of course, the whole metadata system can never work perfectly: the scale
filter does not update any dpi metadata; the crop filter would need to
update the aperture metadata for photos, and if the crop is not centered I
am not even sure this makes sense, etc. If someone adds xmin, xmax,
xscl (no, not this one, bad work habit), ymin, ymax to the frames
produced by vsrc_mandelbrot, or the geographic equivalent to satellite
images, how is the rotate filter supposed to handle that? The best answer
would probably be we do not care much.)

1.8. Seeking
 
  Way more troublesome: being able to request an exact frame in the past.
  This currently limits a lot the scope of the filters.
 
  thumbnail filter is a good example of this problem: the filter
  doesn't need to keep all the frames it analyzes in memory, it just
  needs statistics about them, and then fetches the best in the batch.
  Currently, it needs to keep them all because we are in a forward
  stream based logic. This model is kind of common and quite a pain to
  implement currently.
 
  I don't think the compression you propose at the end would really
  solve that.

You raise an interesting point. Unlimited FIFOs (with or without external
storage or compression: they are just means of handling larger FIFOs with
smaller hardware) can be of some help in that case, but not much.

In the particular example you indicate, I can imagine a solution with two
filters: thumbnail-detect outputs just pseudo-frame metadata with the
timestamp of the selected thumbnails images, and thumbnail-select use that
metadata from one input, reading the actual frames from its second input
connected to a large FIFO. But that is outright ugly.

For actual seeking, I suppose we would need a mechanism to send messages
backward on the graph.

As for the actual implementation, I suppose that a filter that supports
seeking would be required to advertise so on its output: I can seek back to
pts=42, and a filter that requires seeking from its input would give
forewarning: I may need to seek back to pts=12, so that the framework can
buffer all frames from 12 to 42.

That requires thinking.

1.9. Automatic I/O count
 
  ... [a] split [b][c] ... should guess there is 2 outputs.
  ... [a][b][c] concat [d] ... as well

I believe this one to be pretty easy, design-wise, in fact: just decide on a
standard name for the options that give the number of input and outputs,
maybe just nb_inputs and nb_outputs, and then it is only a matter of
tweaking the graph parser to set them if possible and necessary.

 Did you already started some development? Do you need help?
 
 I'm asking because it looks like it could be split into small relatively
 easy tasks on the Trac and helps introducing new comers (and also track
 the progress if some people assign themselves to these tickets).

I have not started writing code: for large re-design, I would not risk
someone telling me this is stupid, you can do the same thing ten times
simpler like that.

You are right, some of the points I raise are mostly stand-alone tasks.

  AVFilterLink.pts: current timestamp of the link, i.e. end timestamp of
  the last forwarede frame, assuming the duration was correct. This is
  somewhat redundant with the fields in AVFrame, but can carry the
  information even when there is no actual frame.
 The timeline system seems to be able to workaround this. How is this going
 to help?

I do not see how this is related. When the timeline system is invoked, there
is a frame, with a timestamp. The timestamp may be NOPTS, but that is just a
matter for the enable expression to handle correctly.

The issue I am trying to address is the one raised in this example: suppose
overlay detects EOF on its secondary input; the last secondary frames were
at PTS 40, 41, 42, and now here 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-24 Thread wm4
On Fri, 24 Oct 2014 13:07:22 +0200
Nicolas George geo...@nsup.org wrote:

 Le duodi 2 brumaire, an CCXXIII, Clement Boesch a écrit :
  I'd be curious to hear about how VapourSynth  friends handle that
  problem, because AFAIK it's only one way. It's likely they don't have to
  deal with the same problems we have though (the usage is more limited, no
  audio typically); typically because they don't seem stream but file based
  (so easy to index and exact seek etc.).
 
 I am not sure what you mean here. This was about recursive design. It is not
 very difficult to turn the recursive design into an iterative one: just
 handle the call stack manually. In this particular case, most of the filters
 use only tail-recursion (return ff_filter_frame(outlink, frame);), so
 there is even no need to handle a stack: just replace the function calls by
 message passing.
 
 People here seem impressed by VapourSynth's nice python syntax to build
 scripts, but it does not look extraordinary to me. I am convinced this is
 just a syntax to build the graph. For example, where lavfi requires to
 write [v][s]overlay=x=42:y=12[vs], VapourSynth would require something
 like vs = v.overlay(s, 42, 12). Just a syntactic matter. The actual work
 certainly happens when the graph is connected to its output.

But guess which syntax is nicer. And this is not only about syntax.
VapourSynth is truly more flexible, and there's a damn good reason why
avisynth is still alive (despite being a pile of crap code-wise), and
why VapourSynth bases its design on it (hopefully only the good and
sane parts).

Also note that Python is not required for VapourSynth. I could hack up
a Lua scripting frontend in 2 hours or so.

Last but not least, the nicest part about VapourSynth is that you can
use external filters. Nobody who writes a very sophisticated video
filter wants to deal with FFmpeg development practices and the
monolithic repo (except FFmpeg devs).

 I believe it would be rather easy to have the same kind of interface for
 lavfi; all the filters are already introspectable, and that is the most
 major point. Unfortunately, two issues would make it of little use. First,
 lavfi handles only frames, not packets, so all the encoding and muxing would
 have to be handled separately.

That has nothing to do with VapourSynth, though.

 Second, lavfi is designed to make a lot of
 format decisions when the graph is complete, so the scripting capabilities
 would not have a lot of relevant information to work on. I wonder how
 VapourSynth handles that, but I am afraid to learn that it just do automatic
 conversions on the spot without globally optimizing them.

Yes, lavfi refuses to work unless you configure the graph, which
makes the whole thing very inflexible.

As far as I know, VapourSynth has no automatic conversions. If it really
matters, the script can manually insert conversion filters where it
matters. Lavfi's format negotiation is a monster: how often did even
FFmpeg devs wonder why the hell lavfi is converting to a specific
format at a certain point? After all, the complexity tradeoff might not
be worth the return of the global optimization (which often makes bad
decisions too).

VapourSynth's approach in this case is making all filters support a
small set of sane formats. So you don't have to care about conversions,
and in fact you don't need to convert at all, unless input or output
uses fringe pixel formats.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-24 Thread Michael Niedermayer
On Fri, Oct 24, 2014 at 04:22:23PM +0200, Nicolas George wrote:
 Le duodi 2 brumaire, an CCXXIII, Clement Boesch a écrit :
[...]

 As for the actual implementation, I suppose that a filter that supports
 seeking would be required to advertise so on its output: I can seek back to
 pts=42, and a filter that requires seeking from its input would give
 forewarning: I may need to seek back to pts=12, so that the framework can
 buffer all frames from 12 to 42.
 
 That requires thinking.

its easier for the input to seek to pts 2 in that case and discard data
till pts 12. I assume pts42 is not the first point it can seek to
files can generally be reopened to restart from the first frame

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

You can kill me, but you cannot change the truth.


signature.asc
Description: Digital signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-23 Thread Clément Bœsch
On Wed, Oct 22, 2014 at 11:45:42PM +0200, Nicolas George wrote:
 
 [ CCing Anton, as most that is written here also apply to libav too, and
 this would be a good occasion to try a cross-fork cooperation; if that is
 not wanted, please let us know so we can drop the cc. ]
 
 1. Problems with the current design
 
   1.1. Mixed input-/output-driven model
 
 Currently, lavfi is designed to work in a mixed input-driven and
 output-driven model. That means the application needs sometimes to add
 input to buffersources and sometimes request output to buffersinks. This
 is a bit of a nuisance, because it requires the application to do it
 properly: adding input on the wrong input or requesting a frame on the
 wrong output will cause extra memory consumption or latency.
 
 With the libav API, it can not work at all since there is no mechanism
 to determine which input needs a frame in order to proceed.
 
 The libav API is clearly designed for a more output-driven
 implementation, with FIFOs anywhere to prevent input-driven frames to
 reach unready filters. Unfortunately, since it is impossible from the
 outside to guess what output will get a frame next, that can cause
 frames to accumulate anywhere in the filter graph, eating a lot of
 memory unnecessarily.
 
 FFmpeg's API has eliminated FIFOs in favour of queues in filters that
 need it, but these queues can not be controlled for unusual filter
 graphs with extreme needs. Also, there still is an implicit FIFO inside
 buffersink.
 

   1.2. Recursive implementation
 
 All work in a filter graph is triggered by recursive invocations of the
 filters' methods. It makes debugging harder. It also can lead to large
 stack usage and makes frame- and filter-level multithreading harder to
 implement. It also prevents some diagnosis from working reliably.
 

This is definitely a huge hindrance and related to 1.1

I'd be curious to hear about how VapourSynth  friends handle that
problem, because AFAIK it's only one way. It's likely they don't have to
deal with the same problems we have though (the usage is more limited, no
audio typically); typically because they don't seem stream but file based
(so easy to index and exact seek etc.).

   1.3. EOF handling
 
 Currently, EOF is propagated only through the return value of the
 request_frame() method. That means it only works in an output-driven
 scheme. It also means that it has no timestamp attached to it; this is
 an issue for filters where the duration of the last frame is relevant,
 like vf_fps.
 
   1.4. Latency
 
 Some filters need to know the timestamp of the next frame in order to
 know when the current frame will stop and be able to process it:
 overlay, fps are two examples. These filters will introduce a latency of
 one input frame that could otherwise be avoided.
 
   1.5. Timestamps
 
 Some filters do not care about timestamps at all. Some check and have a
 proper handling of NOPTS values. Some filters just assume the frames
 will have timestamps, and possibly make extra assumptions on that:
 monotony, consistency, etc. That is an inconsistent mess.
 
   1.6. Sparse streams
 
 There is a more severe instance of the latency issue when the input
 comes from an interleaved sparse stream: in that case, waiting for the
 next frame in order to find the end of the current one may require
 demuxing a large chunk of input, in turn provoking a lot of activity on
 other inputs of the graph.
 

More still standing problems while we are at it:

   1.7. Metadata

 Metadata are not available at graph level, or at least filter
 level, only at frame level. We also need to define how they can be
 injected and fetched from the users (think rotate metadata).

   1.8. Seeking

 Way more troublesome: being able to request an exact frame in the past.
 This currently limits a lot the scope of the filters.

 thumbnail filter is a good example of this problem: the filter
 doesn't need to keep all the frames it analyzes in memory, it just
 needs statistics about them, and then fetches the best in the batch.
 Currently, it needs to keep them all because we are in a forward
 stream based logic. This model is kind of common and quite a pain to
 implement currently.

 I don't think the compression you propose at the end would really
 solve that.

   1.9. Automatic I/O count

 ... [a] split [b][c] ... should guess there is 2 outputs.
 ... [a][b][c] concat [d] ... as well


 2. Proposed API changes
 
   To fix/enhance all these issues, I believe a complete rethink of the
   scheduling design of the library is necessary. I propose the following
   changes.
 

Did you already started some development? Do you need help?

I'm asking because it looks like it could be split into small relatively
easy tasks on the Trac and helps 

Re: [FFmpeg-devel] Evolution of lavfi's design and API

2014-10-23 Thread wm4
On Wed, 22 Oct 2014 23:45:42 +0200
Nicolas George geo...@nsup.org wrote:

 
 [ CCing Anton, as most that is written here also apply to libav too, and
 this would be a good occasion to try a cross-fork cooperation; if that is
 not wanted, please let us know so we can drop the cc. ]
 
 1. Problems with the current design
 
   1.1. Mixed input-/output-driven model
 
 Currently, lavfi is designed to work in a mixed input-driven and
 output-driven model. That means the application needs sometimes to add
 input to buffersources and sometimes request output to buffersinks. This
 is a bit of a nuisance, because it requires the application to do it
 properly: adding input on the wrong input or requesting a frame on the
 wrong output will cause extra memory consumption or latency.
 
 With the libav API, it can not work at all since there is no mechanism
 to determine which input needs a frame in order to proceed.
 
 The libav API is clearly designed for a more output-driven
 implementation, with FIFOs anywhere to prevent input-driven frames to
 reach unready filters. Unfortunately, since it is impossible from the
 outside to guess what output will get a frame next, that can cause
 frames to accumulate anywhere in the filter graph, eating a lot of
 memory unnecessarily.
 
 FFmpeg's API has eliminated FIFOs in favour of queues in filters that
 need it, but these queues can not be controlled for unusual filter
 graphs with extreme needs. Also, there still is an implicit FIFO inside
 buffersink.
 
   1.2. Recursive implementation
 
 All work in a filter graph is triggered by recursive invocations of the
 filters' methods. It makes debugging harder. It also can lead to large
 stack usage and makes frame- and filter-level multithreading harder to
 implement. It also prevents some diagnosis from working reliably.
 
   1.3. EOF handling
 
 Currently, EOF is propagated only through the return value of the
 request_frame() method. That means it only works in an output-driven
 scheme. It also means that it has no timestamp attached to it; this is
 an issue for filters where the duration of the last frame is relevant,
 like vf_fps.
 
   1.4. Latency
 
 Some filters need to know the timestamp of the next frame in order to
 know when the current frame will stop and be able to process it:
 overlay, fps are two examples. These filters will introduce a latency of
 one input frame that could otherwise be avoided.
 
   1.5. Timestamps
 
 Some filters do not care about timestamps at all. Some check and have a
 proper handling of NOPTS values. Some filters just assume the frames
 will have timestamps, and possibly make extra assumptions on that:
 monotony, consistency, etc. That is an inconsistent mess.
 
   1.6. Sparse streams
 
 There is a more severe instance of the latency issue when the input
 comes from an interleaved sparse stream: in that case, waiting for the
 next frame in order to find the end of the current one may require
 demuxing a large chunk of input, in turn provoking a lot of activity on
 other inputs of the graph.
 
 2. Proposed API changes
 
   To fix/enhance all these issues, I believe a complete rethink of the
   scheduling design of the library is necessary. I propose the following
   changes.
 
   Note: some of these changes are not 100% related to the issues I raised,
   but looked like a good idea while thinking on an API rework.
 
   2.1. AVFrame.duration
 
 Add a duration field to AVFrame; if set, it indicates the duration of
 the frame. Thus, it becomes unnecessary to wait for the next frame to
 know when the current frame stops, reducing the latency.
 
 Another solution would be to add a dedicated function on buffersrc to
 inject a timestamp for end or activity on a link. That would avoid the
 need of adding a field to AVFrame.
 
   2.2. Add some fields to AVFilterLink
 
 AVFilterLink.pts: current timestamp of the link, i.e. end timestamp of
 the last forwarede frame, assuming the duration was correct. This is
 somewhat redundant with the fields in AVFrame, but can carry the
 information even when there is no actual frame.
 
 AVFilterLink.status: if not 0, gives the return status of trying to pass
 a frame on this link. The typical use would be EOF.
 
   2.3. AVFilterLink.need_ts
 
 Add a field to AVFilterLink to specify that the output filter requires
 reliable timestamps on its input. More precisely, specify how reliable
 the timestamps need to be: is the duration necessary? do the timestamps
 need to be monotonic? continuous?
 
 For audio streams, consistency between timestamps and the number of
 samples may also be tested. For video streams, constant frame rate may
 be enforced, but I am not sure about this one.
 
 A fixpts filter should be provided to allow the