On 25.11.2014, at 10:01, Daniel Oberhoff <danieloberh...@gmail.com> wrote:

>> On 24.11.2014 at 17:16, Reimar Döffinger <reimar.doeffin...@gmx.de> wrote:
>>
>>> On Mon, Nov 24, 2014 at 12:35:58PM +0100, Daniel Oberhoff wrote:
>>> input -> filter1 -> filter2 -> output
>>>
>>> some threads processing frame n in the output (i.e. encoding), other
>>> threads processing frame n+1 in filter2, others processing frame n+2 in
>>> filter1, and yet others processing frame n+3 decoding. This way
>>> non-parallel filters can be sped up, and diminishing returns from too
>>> much striping can be avoided. With modern CPUs scaling easily up to 24
>>> hardware threads I see this as necessary to fully utilize the hardware.
>>
>> Keep in mind two things:
>> 1) It only works for cases where many filters are used, which is not
>> necessarily a common case.
>
> Also, not quite. Even just decode/encode has a pipeline depth of 2 (the
> decoder can decode frame n+1 while the encoder encodes frame n). Every
> filter deepens this more...
If you run encode and decode with multithreading, they already run in
different threads. So if you have only one filter, you should not have any
gains at all from per-frame filter multithreading.
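
For anyone who wants to picture the scheme Daniel describes above, here is a
rough standalone sketch of that kind of pipelining: one thread per stage,
connected by small bounded queues, so that decode, filter1, filter2 and
encode can each hold a different frame in flight at the same time. This is
plain pthreads toy code, not FFmpeg code; the stage names, queue size and
frame count are made up for illustration, and a "frame" is just an int with
printf standing in for the real per-stage work.

/* Toy pipeline: decode -> filter1 -> filter2 -> encode, one thread per
 * stage, connected by bounded queues. Not FFmpeg code, just a sketch. */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_SIZE 4
#define NUM_FRAMES 16
#define EOS        (-1)        /* end-of-stream marker */

typedef struct {
    int buf[QUEUE_SIZE];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} queue;

#define QUEUE_INIT { {0}, 0, 0, 0, PTHREAD_MUTEX_INITIALIZER, \
                     PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER }

static queue q1 = QUEUE_INIT, q2 = QUEUE_INIT, q3 = QUEUE_INIT;

/* blocking push into a fixed-size ring buffer */
static void queue_push(queue *q, int frame)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_SIZE)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->buf[q->tail] = frame;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* blocking pop */
static int queue_pop(queue *q)
{
    int frame;
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    frame = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return frame;
}

typedef struct {
    const char *name;
    queue *in, *out;           /* in == NULL: source, out == NULL: sink */
} stage;

static void *stage_thread(void *arg)
{
    stage *s = arg;
    int frame = 0;

    for (;;) {
        if (s->in) {
            frame = queue_pop(s->in);
            if (frame == EOS)
                break;
        } else if (frame == NUM_FRAMES) {   /* source ran out of input */
            break;
        }
        printf("%-8s frame %d\n", s->name, frame);  /* stands in for real work */
        if (s->out)
            queue_push(s->out, frame);
        if (!s->in)
            frame++;
    }
    if (s->out)
        queue_push(s->out, EOS);            /* propagate end of stream */
    return NULL;
}

int main(void)
{
    stage stages[4] = {
        { "decode",  NULL, &q1 },
        { "filter1", &q1,  &q2 },
        { "filter2", &q2,  &q3 },
        { "encode",  &q3,  NULL },
    };
    pthread_t threads[4];
    int i;

    for (i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, stage_thread, &stages[i]);
    for (i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

The bounded queues are what keep the pipeline depth (and therefore buffered
frames/memory) in check: a stage that runs ahead simply blocks in queue_push
until the downstream stage catches up.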