Re: [FFmpeg-user] ffmpeg architecture question #3

2020-04-29 Thread Carl Eugen Hoyos
On Wed., Apr. 29, 2020 at 13:04, Mark Filipak wrote:

> When ffmpeg decodes a soft telecined video, does the
> decoder output 24 FPS progressive?

I don't think so, I would have expected 24000/1001 fps

Carl Eugen
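
(A quick way to check what a given file reports - a sketch, assuming a
soft-telecined input.vob; the exact values depend on the source:

ffprobe -select_streams v:0 -show_entries stream=time_base,r_frame_rate,avg_frame_rate input.vob

For a typical soft-telecined stream one would expect a nominal stream rate
near 30000/1001 but an average rate near 24000/1001, matching Carl Eugen's
point.)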
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-25 Thread Carl Eugen Hoyos
On Sat., Apr. 25, 2020 at 04:20, Mark Filipak wrote:
>
> On 04/24/2020 01:22 PM, Carl Eugen Hoyos wrote:
> >
> >
> >> On 24.04.2020 at 11:10, Mark Filipak wrote:
> >>
> >> I've been told that, for soft telecined video the decoder is fully 
> >> compliant
> >> and therefore outputs 30fps
> >
> > (“fps” is highly ambiguous in this sentence.)

The decoder is "compliant" in the sense that the stream it outputs for
soft-telecined input has a time base of 3/1001; the output has 24000/1001
"frames per second", though.
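
(To see what the decoder actually emits, frame by frame - a sketch, input
name assumed:

ffmpeg -i input.vob -vf showinfo -f null -

showinfo logs each decoded frame's pts and pts_time; for soft-telecined
input the frames should arrive at roughly 1001/24000 s intervals, i.e.
24000/1001 frames per second, whatever the nominal time base says.)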

> > This is not correct.
> > I believe I told you some time ago that this is not how the decoder behaves.
>
> I beg your pardon, Carl Eugen. I thought you said that the decoders are
> fully compliant and therefore produce interlaced fields.

I am quite sure I wrote the opposite several times as replies to your mails.
Note that FFmpeg cannot produce "fields" because it cannot deal with devices
that know what a "field" is.
(Just as there are "interlacing" filters that you would call differently if it
were your decision, we also decided to name some of the filters that
deal with frames "field"-filters because this allows understanding the filters'
purpose for everybody except broadcast and video engineers like you.)

> > I believe such a behaviour would not make sense for FFmpeg (because
> > you cannot connect FFmpeg’s output to an NTSC CRT). The telecine filter
> > would not work at all if above were the case.
> > Or in other words: FFmpeg outputs approximately 24 frames per second for
> > typical soft-telecined program streams.

> > The only thing FFmpeg does to be “compliant” is to forward the correct time 
> > base.
>
> By "correct time base" you mean 24/1.001, correct?

lol, no

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Mark Filipak

Sorry, the p24 "source" *is* soft telecine.

On 04/24/2020 11:06 PM, pdr0 wrote:

Mark Filipak wrote




If you take a soft telecine input, encode it directly to rawvideo or
lossless output, you can confirm this.
The output is 29.97 (interlaced content).


When I do 'telecine=pattern=5', I wind up with this

|<--1/6s-->|
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

I have confirmed it by single-frame stepping through test videos.


No.


The above timing is for an MKV of the 55-telecine transcode, not for the
decoder's output.


That's telecine=pattern=5 on a 23.976p native progressive source

I thought this thread was about using a soft telecine source, and how
ffmpeg handles that

because you were making assumptions "So, if the 'i30, TFF' from the decoder
is correct, the following must be the full picture: "

Obviously i30 does not refer to a 23.976p native progressive source...




Pattern looks correct, but unless you are doing something differently, your
timescale is not correct

When input is vob, mpeg2-ps or mpeg-es using soft telecine in my test,
using
telecine=pattern=5 the output frame rate is 74.925 as expected  (2.5 *
29.97
= 74.925).


Not for me. I've seen 74.925 FPS just one time. Since I considered it a
failure, I didn't save the
video and its log, so I don't know how I got it.


This means RF flags are used, 29.97i output from decoder. Since
it's 74.925fps, the scale in your diagram for 1/6s is wrong for
telecine=pattern=5


For this command line:

ffmpeg -report -i "1.018.m2ts" -filter_complex
"telecine=pattern=5,split=5[A][B][C][D][E],[A]select='eq(mod(n+1\,5)\,1)'[F],[B]select='eq(mod(n+1\,5)\,2)'[G],[C]select='eq(mod(n+1\,5)\,3)'[H],[D]select='eq(mod(n+1\,5)\,4)'[I],[E]select='eq(mod(n+1\,5)\,0)'[J],[F][G][H][I][J]interleave=nb_inputs=5"
-map 0 -c:v libx264 -crf 20 -codec:a copy -codec:s copy
"C:\AVOut\1.018.4.MKV"

MPV playback of '1.018.4.MKV' says "FPS: 59.940 (estimated)" (not
74.925fps).


Is that m2ts from a soft telecine BD? This thread was about soft telecine...


I see your misunderstanding. Here's my original diagram:


|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__] source
[A/a___][B/b___][B/c___][C/d___][D/d___] hard telecine
[A/-_][-/a_][B/-_][-/b_][B/-_][-/c_][C/-_][-/d_][D/-_][-/d_] i30-TFF
[A/a___][B/b___][B/c___][C/d___][D/d___] deinterlace
[A/a__][B/b__][C/c__][D/d__] detelecine
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

So, you see, the source is p24. "i30-TFF" is what I thought came out of the decoder -- that is based 
on the latest info (and it is what took me by surprise as I'd always thought that ffmpeg decoders 
always output frames).


Soft telecine is nowhere in that diagram. Sorry for the confusion.

CORRECTION: The p24 "source" *is* soft telecine. I'm working on BDs and DVDs in parallel and 
momentarily got my wires crossed.


Of course, it is soft telecined. Otherwise, i30-TFF wouldn't be there at all. The p24 source would 
go directly to 55-telecine.



Most film BD's are native progressive 23.976


Yes, that is the "source" in the diagram.


Both ffplay and mpv look like they ignore the repeat field flags; the
preview is progressive 23.976p


I use MPV. I'm unsure what you mean by "preview". ...and "preview" of
what? The decoder output or
the MKV output video?


The "preview" of the video is what you see when the ffplay window opens or mpv
opens. It's an RGB-converted representation of what you are using as input to
mpv or ffplay.


Oh, I didn't know there was a distinction. I thought it was just the playback.


So I'm referring to a soft telecine source, because that's
what you were talking about


I hope that confusion is resolved.



Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Mark Filipak wrote
> 
>> 
>> If you take a soft telecine input, encode it directly to rawvideo or
>> lossless output, you can confirm this.
>> The output is 29.97 (interlaced content) .
>> 
>>> When I do 'telecine=pattern=5', I wind up with this
>>>
>>> |<--1/6s-->|
>>> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
>>>
>>> I have confirmed it by single-frame stepping through test videos.
>> 
>> No.
> 
> The above timing is for an MKV of the 55-telecine transcode, not for the
> decoder's output.

That's telecine=pattern=5 on a 23.976p native progressive source

I thought this thread was about using a soft telecine source, and how
ffmpeg handles that

because you were making assumptions "So, if the 'i30, TFF' from the decoder
is correct, the following must be the full picture: "

Obviously i30 does not refer to a 23.976p native progressive source...



>> Pattern looks correct, but unless you are doing something differently, your
>> timescale is not correct
>> 
>> When input is vob, mpeg2-ps or mpeg-es using soft telecine in my test,
>> using
>> telecine=pattern=5 the output frame rate is 74.925 as expected  (2.5 *
>> 29.97
>> = 74.925).
> 
> Not for me. I've seen 74.925 FPS just one time. Since I considered it a
> failure, I didn't save the 
> video and its log, so I don't know how I got it.
> 
>> This means RF flags are used, 29.97i output from decoder. Since
>> it's 74.925fps, the scale in your diagram for 1/6s is wrong for
>> telecine=pattern=5
> 
> For this command line:
> 
> ffmpeg -report -i "1.018.m2ts" -filter_complex 
> "telecine=pattern=5,split=5[A][B][C][D][E],[A]select='eq(mod(n+1\,5)\,1)'[F],[B]select='eq(mod(n+1\,5)\,2)'[G],[C]select='eq(mod(n+1\,5)\,3)'[H],[D]select='eq(mod(n+1\,5)\,4)'[I],[E]select='eq(mod(n+1\,5)\,0)'[J],[F][G][H][I][J]interleave=nb_inputs=5"
>  
> -map 0 -c:v libx264 -crf 20 -codec:a copy -codec:s copy
> "C:\AVOut\1.018.4.MKV"
> 
> MPV playback of '1.018.4.MKV' says "FPS: 59.940 (estimated)" (not
> 74.925fps).

Is that m2ts from a soft telecine BD? This thread was about soft telecine...

Most film BD's are native progressive 23.976




>> Both ffplay and mpv look like they ignore the repeat field flags, the
>> preview is progressive 23.976p
> 
> I use MPV. I'm unsure what you mean by "preview". ...and "preview" of
> what? The decoder output or 
> the MKV output video?

The "preview" of the video is what you see when the ffplay window opens or mpv
opens. It's an RGB-converted representation of what you are using as input to
mpv or ffplay. So I'm referring to a soft telecine source, because that's
what you were talking about






Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Mark Filipak

On 04/24/2020 01:28 PM, pdr0 wrote:

Mark Filipak wrote

I've been told that, for soft telecined video
  the decoder is fully compliant and therefore outputs 30fps
I've also been told that the 30fps is interlaced (which I found
surprising)
Is this correct so far?


Yes

If you take a soft telecine input, encode it directly to rawvideo or
lossless output, you can confirm this.
The output is 29.97 (interlaced content).


When I do 'telecine=pattern=5', I wind up with this

|<--1/6s-->|
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

I have confirmed it by single-frame stepping through test videos.


No.


The above timing is for an MKV of the 55-telecine transcode, not for the 
decoder's output.


Pattern looks correct, but unless you are doing something differently, your
timescale is not correct

When input is vob, mpeg2-ps or mpeg-es using soft telecine in my test, using
telecine=pattern=5 the output frame rate is 74.925 as expected  (2.5 * 29.97
= 74.925).


Not for me. I've seen 74.925 FPS just one time. Since I considered it a failure, I didn't save the 
video and its log, so I don't know how I got it.



This means RF flags are used, 29.97i output from decoder. Since
it's 74.925fps, the scale in your diagram for 1/6s is wrong for
telecine=pattern=5


For this command line:

ffmpeg -report -i "1.018.m2ts" -filter_complex 
"telecine=pattern=5,split=5[A][B][C][D][E],[A]select='eq(mod(n+1\,5)\,1)'[F],[B]select='eq(mod(n+1\,5)\,2)'[G],[C]select='eq(mod(n+1\,5)\,3)'[H],[D]select='eq(mod(n+1\,5)\,4)'[I],[E]select='eq(mod(n+1\,5)\,0)'[J],[F][G][H][I][J]interleave=nb_inputs=5" 
-map 0 -c:v libx264 -crf 20 -codec:a copy -codec:s copy "C:\AVOut\1.018.4.MKV"


MPV playback of '1.018.4.MKV' says "FPS: 59.940 (estimated)" (not 
74.925fps).


Both ffplay and mpv look like they ignore the repeat field flags; the
preview is progressive 23.976p


I use MPV. I'm unsure what you mean by "preview". ...and "preview" of what? The decoder output or 
the MKV output video?



Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Mark Filipak

On 04/24/2020 01:22 PM, Carl Eugen Hoyos wrote:




On 24.04.2020 at 11:10, Mark Filipak wrote:

I've been told that, for soft telecined video the decoder is fully compliant 
and therefore outputs 30fps


(“fps” is highly ambiguous in this sentence.)

This is not correct.
I believe I told you some time ago that this is not how the decoder behaves.


I beg your pardon, Carl Eugen. I thought you said that the decoders are fully compliant and 
therefore produce interlaced fields.



I believe such a behaviour would not make sense for FFmpeg (because you cannot 
connect FFmpeg’s output to an NTSC CRT). The telecine filter would not work at 
all if above were the case.
Or in other words: FFmpeg outputs approximately 24 frames per second for 
typical soft-telecined program streams.

The only thing FFmpeg does to be “compliant” is to forward the correct time 
base.


By "correct time base" you mean 24/1.001, correct?

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Mark Filipak

On 04/24/2020 11:30 AM, Edward Park wrote:

Hi,

I don't know if the decoder outputs 30fps as is from 24fps soft telecine, but if it does, 
it must include the flags that you need to reconstruct the original 24 format or set it 
as metadata because frame stepping in ffplay (using the "s" key on the 
keyboard) goes over 1/24 s progressive frames, even though the stream info says 29.97fps.


I've seen the same behavior when frame-stepping via MPV. That fact squares with assertions by the 
HandBrake folks that (at least from their perspective) the ffmpeg libraries (decoders?) work solely 
on frames -- at least, that's my interpretation of what they said.



Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> e.g
>> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
> 
> (Consider testing with other output formats.)

What did you have in mind?
e.g.

ffmpeg -i input.mpeg -c:v utvideo -an output.avi

The output is 29.97 according to ffmpeg, double-checked using the official
utvideo VFW decoder: duplicate frames. Missing 3 frames if duplicates abide
by RF flags

e.g.

ffmpeg -i input.mpeg -c:v utvideo -an output.mkv

Output is 29.97, but no duplicate frames. Missing 1 frame

e.g.

ffmpeg -i input.mpeg -c:v libx264 -crf 18 -an output.mp4

Output is 29.97 with duplicates. Elementary stream analysis confirms the
finding. But missing 3 frames if duplicates abide by RF flags


ffmpeg -i input.mpeg -c:v libx264 -crf 18 -an output.mkv

Output is 23.976, no duplicates. Missing 1 frame



e.g.

ffmpeg -i input.mpeg -c:v ffv1 -an output_ffv1.mkv

Output is 29.97, no duplicates. Missing 1 frame


Looks like some container differences too. 
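
(One way to check where the duplicates and the repeat-field flags sit - a
sketch, reusing the assumed input.mpeg from the commands above:

ffprobe -select_streams v -show_entries frame=pts_time,interlaced_frame,repeat_pict -of csv input.mpeg

repeat_pict is nonzero on frames whose flags request a repeated field, which
is exactly what soft telecine relies on.)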







Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Carl Eugen Hoyos


> On 24.04.2020 at 19:34, pdr0 wrote:
> 
> Carl Eugen Hoyos-2 wrote
>>> On 24.04.2020 at 11:10, Mark Filipak wrote:
>>> 
>>> I've been told that, for soft telecined video the decoder is fully
>>> compliant and therefore outputs 30fps
>> 
>> (“fps” is highly ambiguous in this sentence.)
>> 
>> This is not correct.
>> I believe I told you some time ago that this is not how the decoder
>> behaves. I believe such a behaviour would not make sense for FFmpeg
>> (because you cannot connect FFmpeg’s output to an NTSC CRT). The telecine
>> filter would not work at all if above were the case.
>> Or in other words: FFmpeg outputs approximately 24 frames per second for
>> typical soft-telecined program streams.
>> 
>> The only thing FFmpeg does to be “compliant” is to forward the correct
>> time base.
> 
> 
> If you use direct encode, no filters, no switches, the output from soft
> telecine input video is 29.97p, where every 5th frame is a duplicate

No

> e.g
> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv

(Consider testing with other output formats.)

> But you can "force" it to output 23.976p by using -vf fps
> 
> Is this what you mean by "forward the correct time base" ?

No.

Carl Eugen 

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Edward Park
Hi,

> Output is actually 29.97p with 5th frame duplicates . The repeat field flags
> are not taken into account.

> If you use direct encode, no filters, no switches, the output from soft
> telecine input video is 29.97p, where every 5th frame is a duplicate
> 
> e.g
> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
> 
> But you can "force" it to output 23.976p by using -vf fps
> 
> Is this what you mean by "forward the correct time base" ?

I think "every 5th frame duplicated" is only accurate for shorter durations; I think
you will see if you look at the timestamps of each frame over a longer period.
They advance by 2 60fps 'ticks', then 3 ticks, etc., as if the duration was determined
using RF and TFF flags.

Regards,
Ted Park
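
(Ted's tick pattern can be checked directly - a sketch, input name assumed;
exact field names vary a little across ffprobe versions:

ffprobe -select_streams v -show_entries frame=pts_time -of csv input.mpeg

If the RF/TFF flags drive the durations, the deltas between successive
pts_time values alternate between two-field and three-field steps instead of
staying constant.)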


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
pdr0 wrote
> If you take a soft telecine input, encode it directly to rawvideo or
> lossless output, you can confirm this. 
> The output is 29.97 (interlaced content) . 


So my earlier post is incorrect.

Output is actually 29.97p with 5th-frame duplicates. The repeat field flags
are not taken into account.






Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> On 24.04.2020 at 11:10, Mark Filipak wrote:
>> 
>> I've been told that, for soft telecined video the decoder is fully
>> compliant and therefore outputs 30fps
> 
> (“fps” is highly ambiguous in this sentence.)
> 
> This is not correct.
> I believe I told you some time ago that this is not how the decoder
> behaves. I believe such a behaviour would not make sense for FFmpeg
> (because you cannot connect FFmpeg’s output to an NTSC CRT). The telecine
> filter would not work at all if above were the case.
> Or in other words: FFmpeg outputs approximately 24 frames per second for
> typical soft-telecined program streams.
> 
> The only thing FFmpeg does to be “compliant” is to forward the correct
> time base.


If you use direct encode, no filters, no switches, the output from soft
telecine input video is 29.97p, where every 5th frame is a duplicate

e.g
ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv

But you can "force" it to output 23.976p by using -vf fps

Is this what you mean by "forward the correct time base" ?
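
(For reference, the forcing mentioned above would look like this - a sketch
with the same assumed input name:

ffmpeg -i input.mpeg -vf fps=24000/1001 -c:v rawvideo -an output.yuv

The fps filter drops or duplicates frames to reach a constant 24000/1001; it
does no detelecining, so presumably it only gives a clean result here
because the telecine duplicates are whole repeated frames.)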




Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Mark Filipak wrote
>> I've been told that, for soft telecined video
>>  the decoder is fully compliant and therefore outputs 30fps
>> I've also been told that the 30fps is interlaced (which I found
>> surprising)
>> Is this correct so far?

Yes

If you take a soft telecine input, encode it directly to rawvideo or
lossless output, you can confirm this. 
The output is 29.97 (interlaced content).



> When I do 'telecine=pattern=5', I wind up with this
> 
> |<--1/6s-->|
> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
> 
> I have confirmed it by single-frame stepping through test videos.

No. 

Pattern looks correct, but unless you are doing something differently, your
timescale is not correct.

When input is vob, mpeg2-ps or mpeg-es using soft telecine in my test, using
telecine=pattern=5 the output frame rate is 74.925 as expected (2.5 * 29.97
= 74.925). This means RF flags are used, 29.97i output from decoder. Since
it's 74.925fps, the scale in your diagram for 1/6s is wrong for
telecine=pattern=5


Both ffplay and mpv look like they ignore the repeat field flags; the
preview is progressive 23.976p




Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Carl Eugen Hoyos


> On 24.04.2020 at 11:10, Mark Filipak wrote:
> 
> I've been told that, for soft telecined video the decoder is fully compliant 
> and therefore outputs 30fps

(“fps” is highly ambiguous in this sentence.)

This is not correct.
I believe I told you some time ago that this is not how the decoder behaves. I 
believe such a behaviour would not make sense for FFmpeg (because you cannot 
connect FFmpeg’s output to an NTSC CRT). The telecine filter would not work at 
all if above were the case.
Or in other words: FFmpeg outputs approximately 24 frames per second for 
typical soft-telecined program streams.

The only thing FFmpeg does to be “compliant” is to forward the correct time 
base.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Edward Park
Hi,

I don't know if the decoder outputs 30fps as is from 24fps soft telecine, but 
if it does, it must include the flags that you need to reconstruct the original 
24 format or set it as metadata because frame stepping in ffplay (using the "s" 
key on the keyboard) goes over 1/24 s progressive frames, even though the 
stream info says 29.97fps.

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Mark Filipak

On 04/24/2020 05:10 AM, Mark Filipak wrote:

Hello,

I've been told that, for soft telecined video

|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__] source

 the decoder is fully compliant and therefore outputs 30fps

|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__] source
[A/a___][B/b___][B/c___][C/d___][D/d___] hard telecine

I've also been told that the 30fps is interlaced (which I found surprising)

|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__] source
[A/a___][B/b___][B/c___][C/d___][D/d___] hard telecine
[A/-_][-/a_][B/-_][-/b_][B/-_][-/c_][C/-_][-/d_][D/-_][-/d_] i30-TFF

Is this correct so far?


(No response, so continuing.)

When I do 'telecine=pattern=5', I wind up with this

|<--1/6s-->|
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

I have confirmed it by single-frame stepping through test videos.

So, if the 'i30, TFF' from the decoder is correct, the following must be the 
full picture:

|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__] source
[A/a___][B/b___][B/c___][C/d___][D/d___] hard telecine
[A/-_][-/a_][B/-_][-/b_][B/-_][-/c_][C/-_][-/d_][D/-_][-/d_] i30, TFF
[A/a___][B/b___][B/c___][C/d___][D/d___] deinterlace
[A/a__][B/b__][C/c__][D/d__] detelecine
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

Now, I'm not telling ffmpeg to do the deinterlace or the detelecine. If it indeed is doing 
deinterlace & detelecine -- I don't know how to get from i30-TFF to 55-telecine any other way -- it 
must be doing it on its own.


Is this correct?
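
(A sketch of making those implicit steps explicit, assuming a soft-telecined
input.vob - the filters are real, but whether the explicit detelecine step
is needed at all depends on whether the decoder really hands over i30, which
is the open question of this thread:

ffmpeg -i input.vob -vf "detelecine,telecine=pattern=5" -c:v libx264 -crf 20 out.mkv

detelecine reconstructs the progressive 24000/1001 pictures; telecine=pattern=5
then produces the 55 pattern at 60000/1001.)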


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Nicolas George
Michael Koch (2020-04-21):
> If the blend filter gets two input frames with different timestamps, then
> what's the timestamp of the output frame?

To understand how it works, you need to think of a frame not as a punctual
instant in time but as an interval going from the frame's PTS to the
next frame's PTS.

When a filter combines two or more streams of frames, you have to
imagine several graduations on the same time axis: a new frame could be
generated for any graduation. If two graduations fall at the same time,
that means two parts of the output image change simultaneously;
otherwise, a part of the image stays the same and a part changes.

All this is grouped together in the framesync system:
http://ffmpeg.org/ffmpeg-filters.html#Options-for-filters-with-several-inputs-_0028framesync_0029

It lets you control which stream triggers a new output frame. For example,
if you want to overlay a clock on a 24000/1001 fps video, you could have
a frame at t =~ 0.918, 0.959, 1.001, but we do not want a frame at t = 1
for the change of the clock.

Unfortunately, this is not yet exposed to users as options.
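
(For concreteness, assuming a frame period of exactly 1001/24000 s: frame n
has PTS n * 1001/24000, so frames 22, 23 and 24 land at roughly 0.918 s,
0.959 s and 1.001 s - which is where the graduations in the example above
come from.)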

Regards,

-- 
  Nicolas George



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Paul B Mahol
On 4/21/20, Michael Koch  wrote:
> Am 21.04.2020 um 11:07 schrieb Paul B Mahol:
>> On 4/21/20, Michael Koch  wrote:
> I now appreciate that 'blend' has a "preferred" input similar to
> 'overlay',
> but that behavior is not
> documented. In the case of 'overlay', the name "main" doesn't convey
> that
> meaning, and in the case
> of 'blend', that behavior is not documented at all. Both documentations
> should explain how
> timestamps control output and that the 1st filter-input's timestamp
> determines the filter-output's
> timestamp.
The blend filter has not had a preferred input for a long time.
>>> If the blend filter gets two input frames with different timestamps,
>>> then what's the timestamp of the output frame?
>>> I can think of at least 5 possible scenarios:
>>> -- timestamp is copied from the first input
>>> -- copied from the second input
>>> -- the smaller of the two timestamps
>>> -- the larger of the two timestamps
>>> -- the arithmetic mean of the two timestamps
>>>
>> It is discouraged to use blend in such case.
>
> How would you solve the problem?
> Given is a sequence of frames 1, 2, 3, 4, ...
> Wanted is a sequence 1, 1, 1+2, 2, 2, 3, 3, 3+4, 4, 4, ...
> where "+" stands for a mix of two frames.
>

Sometimes the best solution is no solution.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Michael Koch

On 21.04.2020 at 11:07, Paul B Mahol wrote:

On 4/21/20, Michael Koch  wrote:

I now appreciate that 'blend' has a "preferred" input similar to
'overlay',
but that behavior is not
documented. In the case of 'overlay', the name "main" doesn't convey that
meaning, and in the case
of 'blend', that behavior is not documented at all. Both documentations
should explain how
timestamps control output and that the 1st filter-input's timestamp
determines the filter-output's
timestamp.

The blend filter has not had a preferred input for a long time.

If the blend filter gets two input frames with different timestamps,
then what's the timestamp of the output frame?
I can think of at least 5 possible scenarios:
-- timestamp is copied from the first input
-- copied from the second input
-- the smaller of the two timestamps
-- the larger of the two timestamps
-- the arithmetic mean of the two timestamps


It is discouraged to use blend in such case.


How would you solve the problem?
Given is a sequence of frames 1, 2, 3, 4, ...
Wanted is a sequence 1, 1, 1+2, 2, 2, 3, 3, 3+4, 4, 4, ...
where "+" stands for a mix of two frames.

Michael
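
(For what it's worth, a sketch that approximates this 1, 1, 1+2, 2, 2, ...
cadence on an assumed 23.976p input, adapted from the working command pdr0
posts elsewhere in this digest, with the datascope debugging stage dropped:

ffmpeg -i in.mp4 -filter_complex "telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend=all_mode=average[D],[C][D]interleave[out]" -map "[out]" -c:v libx264 -crf 20 out.mkv

55-telecine yields a 1 1 X 2 2 pattern whose middle frame X is combed;
selecting X out, averaging it against its clean neighbour and interleaving
the result back by timestamp smooths the comb. Note X is the telecined comb
averaged with frame 1, not a true 50/50 blend of source frames 1 and 2.)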


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Paul B Mahol
On 4/21/20, Michael Koch  wrote:
>
>>> I now appreciate that 'blend' has a "preferred" input similar to
>>> 'overlay',
>>> but that behavior is not
>>> documented. In the case of 'overlay', the name "main" doesn't convey that
>>> meaning, and in the case
>>> of 'blend', that behavior is not documented at all. Both documentations
>>> should explain how
>>> timestamps control output and that the 1st filter-input's timestamp
>>> determines the filter-output's
>>> timestamp.
>> The blend filter has not had a preferred input for a long time.
>
> If the blend filter gets two input frames with different timestamps,
> then what's the timestamp of the output frame?
> I can think of at least 5 possible scenarios:
> -- timestamp is copied from the first input
> -- copied from the second input
> -- the smaller of the two timestamps
> -- the larger of the two timestamps
> -- the arithmetic mean of the two timestamps
>

It is discouraged to use blend in such a case.
Inputs must have the same timestamps anyway if you wish to do anything useful.
Also, the timebase is picked from framesync.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Michael Koch



I now appreciate that 'blend' has a "preferred" input similar to 'overlay',
but that behavior is not
documented. In the case of 'overlay', the name "main" doesn't convey that
meaning, and in the case
of 'blend', that behavior is not documented at all. Both documentations
should explain how
timestamps control output and that the 1st filter-input's timestamp
determines the filter-output's
timestamp.

The blend filter has not had a preferred input for a long time.


If the blend filter gets two input frames with different timestamps, 
then what's the timestamp of the output frame?

I can think of at least 5 possible scenarios:
-- timestamp is copied from the first input
-- copied from the second input
-- the smaller of the two timestamps
-- the larger of the two timestamps
-- the arithmetic mean of the two timestamps

Michael


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-21 Thread Paul B Mahol
On 4/20/20, Mark Filipak  wrote:
> Hi, Ted,
>
> On 04/20/2020 06:20 AM, Edward Park wrote:
>> Hey,
>>
 I don't understand what you mean by "recursively".
>>>
>>> Haven't you heard? There's no recursion. There's no problem. The 'blend'
>>> filter just has some fun undocumented features. Hours and hours, days and
>>> days of fun. So much fun I just can't stand it. Too much fun.
>>
>> There's no recursion because a filtergraph is typically supposed to be a
>> directed acyclic graph, there is no hierarchy to traverse.
>
> Thank you. Yes, I see that now. I thought that filtergraphs recursed (and
> failed in this case)
> because when I placed 'datagraph' filters into the filtergraph, I saw only
> the frames that succeeded
> (i.e., made their way to the output), not all frames -- 'datagraph' doesn't
> work like an
> oscilloscope -- so the behavior appeared to be failure to recurse. I did not
> try splitting out a
> separate stream and 'map'ping it to a 2nd output video as you suggested --
> thank you -- but I trust
> that technique works and I will use it in the future.
>
>> Blend not specifying which of the two input frames it takes the timestamps
>> from is true enough, except the only reason it poses a problem is because
>> it leads to another filter getting two frames with the exact same
>> timestamp, as they were split earlier on in the digraph. And it's not
>> obvious by any means, but you can sort of deduce that blend will take the
>> timestamps from the first input stream, blend having a "top" and "bottom"
>> stream (I mean on the z-axis, lest this cause any more confusion) kind of
>> implies similar operation to the overlay filter applied on the two inputs
>> that each go through some other filter, with an added alpha channel, and
>> the description for the overlay filter says the first input is the "main"
>> that the second "overlay" is composited on.
>
> I now appreciate that 'blend' has a "preferred" input similar to 'overlay',
> but that behavior is not
> documented. In the case of 'overlay', the name "main" doesn't convey that
> meaning, and in the case
> of 'blend', that behavior is not documented at all. Both documentations
> should explain how
> timestamps control output and that the 1st filter-input's timestamp
> determines the filter-output's
> timestamp.

The blend filter has not had a preferred input for a long time.

If I received a coin for every piece of misinformation you and others posted in
this thread, I would already be a very rich person.
Sometimes it is simply best to leave it be.

>
>> On a different note, in the interest of making the flow of frames within
>> the filtergraph something simple enough to picture using my rather simple
>> brain ...
>
> You are far too modest, Ted.  ;-)
>
>>... this is my attempt at simplifying a filtergraph you posted a while ago,
>> I'm not sure if it's accurate, but I can't tell if I'm reproducing the
>> same result even when frame stepping (because to compare frame by frame, I
>> had to compare it to another telecine, and the only one I'd seen is the
>> 3-2 pulldown. And I really cannot tell the difference when playing at
>> speed, I can tell them apart if I step frame by frame, but not identify
>> which is which, had to draw a label on them)
>
> I single step through frames to see the effect of the filter (which is
> targeted to filter solely
> (n+1)%5==3 frames, so is easy to distinguish by simply counting: 0... (step)
> 1... (step) 2... ). MPV
> permits such single-stepping. I don't know whether ffplay does.
>
>> Could you see if it actually does do the same thing?
>> telecine=pattern=5,select='n=2:e=ifnot(mod(mod(n,5)+1,3),1,2)'[C],split[AB_DE],select='not(mod(n+3,4))'[B],[C][B]blend[B/C],[AB_DE][B/C]interleave
>
> "do the same thing"? Do you mean: Do the same thing as 23 pull-down?
> 23 pull-down: A B B+C C+D D E F F+G G+H H ...   30fps
> 55 pull-down: A A A+B B   B C C C+D D   D ...   60fps
>
> You see, it's for situations like this that I portray frames like this:
>
> |<--1/6s-->|
> [A/a__][B/b__][C/c__][D/d__]
> [A/a___][B/b___][B/c___][C/d___][D/d___] 23-telecine
> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
>
> I find such timing diagrams to be simple to understand and unambiguous. They
> clearly show what
> happens to top/bottom half-pictures.
>
>> The pads are labeled according to an ABCDE pattern at the telecine, I
>> don't know if that makes sense or is correct at all.
>
> I believe that the names of pads are arbitrary. I use [A][B][C]... because
> they are easy to see and
> because they are compact.
>
>> It does make it possible to 4up 1920x1080 streams with different filters
>> and compare them in real time without falling below ~60fps. I think the
>> fact that "split" actually copies a stream, while "select" splits a stream
>> is kind of confusing now. "Select" also adds another stream of video but I
>> think splitting then using select with boolean expressions to discard the not
>> selected frames has to be wasteful.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-20 Thread Mark Filipak

Hi, Ted,

On 04/20/2020 06:20 AM, Edward Park wrote:

Hey,


I don't understand what you mean by "recursively".


Haven't you heard? There's no recursion. There's no problem. The 'blend' filter 
just has some fun undocumented features. Hours and hours, days and days of fun. 
So much fun I just can't stand it. Too much fun.


There's no recursion because a filtergraph is typically supposed to be a 
directed acyclic graph, there is no hierarchy to traverse.


Thank you. Yes, I see that now. I thought that filtergraphs recursed (and failed in this case) 
because when I placed 'datagraph' filters into the filtergraph, I saw only the frames that succeeded 
(i.e., made their way to the output), not all frames -- 'datagraph' doesn't work like an 
oscilloscope -- so the behavior appeared to be failure to recurse. I did not try splitting out a 
separate stream and 'map'ping it to a 2nd output video as you suggested -- thank you -- but I trust 
that technique works and I will use it in the future.



Blend not specifying which of the two input frames it takes the timestamps from is true enough, except the only reason 
it poses a problem is because it leads to another filter getting two frames with the exact same timestamp, as they were 
split earlier on in the digraph. And it's not obvious by any means, but you can sort of deduce that blend will take the 
timestamps from the first input stream, blend having a "top" and "bottom" stream (I mean on the 
z-axis, lest this cause any more confusion) kind of implies similar operation to the overlay filter applied on the two 
inputs that each go through some other filter, with an added alpha channel, and the description for the overlay filter 
says the first input is the "main" that the second "overlay" is composited on.


I now appreciate that 'blend' has a "preferred" input similar to 'overlay', but that behavior is not 
documented. In the case of 'overlay', the name "main" doesn't convey that meaning, and in the case 
of 'blend', that behavior is not documented at all. Both documentations should explain how 
timestamps control output and that the 1st filter-input's timestamp determines the filter-output's 
timestamp.



On a different note, in the interest of making the flow of frames within the 
filtergraph something simple enough to picture using my rather simple brain ...


You are far too modest, Ted.  ;-)


... this is my attempt at simplifying a filtergraph you posted a while ago, I'm 
not sure if it's accurate, but I can't tell if I'm reproducing the same result 
even when frame stepping (because to compare frame by frame, I had to compare 
it to another telecine, and the only one I'd seen is the 3-2 pulldown. And I 
really cannot tell the difference when playing at speed, I can tell them apart 
if I step frame by frame, but not identify which is which, had to draw a label 
on them)


I single step through frames to see the effect of the filter (which is targeted to filter solely 
(n+1)%5==3 frames, so is easy to distinguish by simply counting: 0... (step) 1... (step) 2... ). MPV 
permits such single-stepping. I don't know whether ffplay does.



Could you see if it actually does do the same thing?
telecine=pattern=5,select='n=2:e=ifnot(mod(mod(n,5)+1,3),1,2)'[C],split[AB_DE],select='not(mod(n+3,4))'[B],[C][B]blend[B/C],[AB_DE][B/C]interleave


"do the same thing"? Do you mean: Do the same thing as 23 pull-down?
23 pull-down: A B B+C C+D D E F F+G G+H H ...   30fps
55 pull-down: A A A+B B   B C C C+D D   D ...   60fps

You see, it's for situations like this that I portray frames like this:

|<--1/6s-->|
[A/a__][B/b__][C/c__][D/d__]
[A/a___][B/b___][B/c___][C/d___][D/d___] 23-telecine
[A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine

I find such timing diagrams to be simple to understand and unambiguous. They clearly show what 
happens to top/bottom half-pictures.



The pads are labeled according to an ABCDE pattern at the telecine, I don't 
know if that makes sense or is correct at all.


I believe that the names of pads are arbitrary. I use [A][B][C]... because they are easy to see and 
because they are compact.



It does make it possible to 4up 1920x1080 streams with different filters and compare them in real time 
without falling below ~60fps. I think the fact that "split" actually copies a stream, while 
"select" splits a stream is kind of confusing now. "Select" also adds another stream of 
video but I think splitting then using select with boolean expressions to discard the not selected frames has 
to be wasteful.


Is there any alternative? I think not. I seek to filter solely frames 2 7 12 17 ...etc. so I use 
(n+1)%5==3 (i.e., select='eq(mod(n+1\,5)\,3)').


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-20 Thread Edward Park
Hey,

>> I don't understand what you mean by "recursively".
> 
> Haven't you heard? There's no recursion. There's no problem. The 'blend' 
> filter just has some fun undocumented features. Hours and hours, days and 
> days of fun. So much fun I just can't stand it. Too much fun.

There's no recursion because a filtergraph is typically supposed to be a 
directed acyclic graph, there is no hierarchy to traverse. Blend not specifying 
which of the two input frames it takes the timestamps from is true enough, 
except the only reason it poses a problem is because it leads to another filter 
getting two frames with the exact same timestamp, as they were split earlier on 
in the digraph. And it's not obvious by any means, but you can sort of deduce 
that blend will take the timestamps from the first input stream, blend having a 
"top" and "bottom" stream (I mean on the z-axis, lest this cause any more 
confusion) kind of implies similar operation to the overlay filter applied on 
the two inputs that each go through some other filter, with an added alpha 
channel, and the description for the overlay filter says the first input is the 
"main" that the second "overlay" is composited on.

On a different note, in the interest of making the flow of frames within the 
filtergraph something simple enough to picture using my rather simple brain, 
this is my attempt at simplifying a filtergraph you posted a while ago, I'm not 
sure if it's accurate, but I can't tell if I'm reproducing the same result even 
when frame stepping (because to compare frame by frame, I had to compare it to 
another telecine, and the only one I'd seen is the 3-2 pulldown. And I really 
cannot tell the difference when playing at speed, I can tell them apart if I 
step frame by frame, but not identify which is which, had to draw a label on 
them)

Could you see if it actually does do the same thing? 
telecine=pattern=5,select='n=2:e=ifnot(mod(mod(n,5)+1,3),1,2)'[C],split[AB_DE],select='not(mod(n+3,4))'[B],[C][B]blend[B/C],[AB_DE][B/C]interleave

The pads are labeled according to an ABCDE pattern at the telecine, I don't 
know if that makes sense or is correct at all.
It does make it possible to 4up 1920x1080 streams with different filters and 
compare them in real time without falling below ~60fps. I think the fact that 
"split" actually copies a stream, while "select" splits a stream is kind of 
confusing now. "Select" also adds another stream of video but I think splitting 
then using select with boolean expressions to discard the not selected frames 
has to be wasteful.

Regards,
Ted Park

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Mark Filipak

Hi Michael,

On 04/19/2020 03:31 PM, Michael Koch wrote:

On 19.04.2020 at 20:44, Mark Filipak wrote:

I'm hooking into this to reply in order to get the message below into the 
thread.

But first, I'd like to say that I had no idea this would be controversial. I asked whether ffmpeg 
traversed filter complexes recursively because that was not happening. Apparently it does recurse, 
but only if you connect certain pads to certain other pads. 


I don't understand what you mean by "recursively".


Haven't you heard? There's no recursion. There's no problem. The 'blend' filter just has some fun 
undocumented features. Hours and hours, days and days of fun. So much fun I just can't stand it. Too 
much fun.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
pdr0 wrote
> As Paul pointed out, interleave works using timestamps , not "frames". If
> you took 2 separate video files, with the same fps, same timestamps, they
> won't interleave correctly in ffmpeg. The example in the documentation
> actually does not work if they had the same timestamps. You would have to
> offset the PTS of one relative to the other for interleave to work
> correctly. 
> 
> If you check the timestamps of each output node, you will see why
> "interleave" is not
> working here, and why it works properly in some other cases .  To get it
> working in this example, you would need [D] to assume [H]'s timestamps,
> because those are where the "gaps" or "holes" are in [C] . It might be
> possible using an expression using setpts

 
Here's your "proof", and why Paul's succinct "perfect" answer was indeed
correct.

The blend filter in your example combined (n-1) with (n). This messed up
the timestamps if you -map the output node of blend suggested earlier, such
as [D2] - they don't match the "holes", i.e. the missing ones, in [C]. That is,
they are not "complementary".

I was looking for a -vf setpts expression to fix and offset the timestamps,
or somehow "assume" the [H] branch timestamps, because those are the ones
that are complementary and "fit".

But it turns out that it's much easier - blend takes its output timestamps
from its first input. Originally it was [G][H]; [H][G] makes blend
take [H]'s timestamps
 
(Sorry about the long line, I have a problem with "^" split in windows with
-filter_complex)
 
 ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend=all_mode=average,split[D][D2],
[C][D]interleave[out]" -map [out] -c:v libx264 -crf 20 testout2.mkv  -map
[D2] -c:v libx264 -crf 20 testD2pts.mkv  -y 
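
(To verify the fix, a sketch: inspect the timestamps of the result, e.g.

ffprobe -select_streams v -show_entries frame=pts_time -of csv testout2.mkv

and check that pts_time increases strictly and without gaps; alternatively,
insert showinfo just before [out] and read the pts values from the report.)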





Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Michael Koch

On 19.04.2020 at 20:44, Mark Filipak wrote:
I'm hooking into this to reply in order to get the message below into 
the thread.


But first, I'd like to say that I had no idea this would be 
controversial. I asked whether ffmpeg traversed filter complexes 
recursively because that was not happening. Apparently it does 
recurse, but only if you connect certain pads to certain other pads. 


I don't understand what you mean by "recursively".

Specifically, it depends on how you 'wire up' a 'blend' filter. I 
learned that from Michael Koch after he suggested that I reverse the 
connections to the 'blend' filter. It worked and instead of getting 
80% of the input frames in the output, I got 100% of the frames (minus 
a couple of frames, but that doesn't matter).


I think that's a timestamp problem. The output of the blend filter might 
have the same timestamp as one of its inputs. And then the interleave 
filter gets two frames with the same timestamp and the output becomes 
unpredictable.


Michael
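
(A minimal workaround sketch for that failure mode, assuming two branches
that really must carry colliding timestamps: shift one branch by a single
time-base tick before interleave, e.g.

[D]setpts=PTS+1[D1]

so that interleave sees distinct timestamps. This is a hack rather than a
fix - the clean solution is to route the branch whose timestamps already
fill the gaps, as in pdr0's [H][G] reordering above.)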


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Mark Filipak

I'm hooking into this to reply in order to get the message below into the 
thread.

But first, I'd like to say that I had no idea this would be controversial. I asked whether ffmpeg 
traversed filter complexes recursively because that was not happening. Apparently it does recurse, 
but only if you connect certain pads to certain other pads. Specifically, it depends on how you 
'wire up' a 'blend' filter. I learned that from Michael Koch after he suggested that I reverse the 
connections to the 'blend' filter. It worked and instead of getting 80% of the input frames in the 
output, I got 100% of the frames (minus a couple of frames, but that doesn't matter). I will be 
conducting experiments to determine what are the 'magic' pads. In the meantime, I spoke with an 
engineer...


I just got off the phone with a video production engineer who's very familiar with ffmpeg and who 
graciously requested the call to attempt to straighten me out. I didn't ask him for permission to 
mention his name, so I don't.


Regarding soft telecine:
According to the engineer -- and I have no reason to doubt him; I take what he says as gospel -- 
ffmpeg decoders are compliant and therefore always output 30fps, interlaced (per the metadata flags) 
even though you and I know the content is actually progressive.


Now I begin to understand why pdr0 is so focused on detelecine. I am not explicitly detelecining, 
but apparently, ffmpeg is detelecining as a first step. I'm thinking about that and trying to reason 
out how that affects what I'm doing, or indeed, whether it affects what I'm doing at all.


Regarding 55-telecine:
From the above, and from 'telecine=pattern=5', it's clear to me now that ffmpeg 
is doing the following:

Step 1: Unpackage the p24 packets, decode the pictures, and output i30 via 
23-telecine.

Step 2: Detelecine the i30 back to p24. Okay, that reconstructs the 'pictures' (as 'pictures' are 
defined in the MPEG spec) -- Caveat: Provided that the metadata read by the decoder is correct 
(which is not always true, and is the reason for the '-analyzeduration' and '-probesize' directives 
(options, if you prefer) -- which I always understood and have used, e.g., "-analyzeduration 
50 -probesize 50").

Sanity check: Am I correct so far?

Step 3: Apply 'telecine=pattern=5' to obtain 60fps with frames = A A A+B B B.

...and I'm right back where I started, to wit: Frames that are duplicated and, for the center frame, 
combed. So what have I learned? I'm not sure, maybe nothing.


The result looks beautiful on a 60Hz TV. The samples I've made so far have PTS failures at about 
2:30 running time, but I can't see how those failures are related to 55-telecine. And I don't 
understand why my explorations are worthless and a waste of time.


It seems to me that whether a decoder outputs p24 or i30 is moot. It does come as a surprise that 
the decoder honors the metadata flags and therefore outputs i30. Given that, I understand how bogus 
metadata would result in failure -- I have 2 DVDs from a fellow in South Korea via ebay that have 
bad metadata. They're soft telecined but the metadata is wrong:


frames.frame.0.interlaced_frame=1 <- should be '0'
frames.frame.0.top_field_first=0
frames.frame.0.repeat_pict=1
frames.frame.1.interlaced_frame=0
frames.frame.1.top_field_first=1
frames.frame.1.repeat_pict=0
frames.frame.2.interlaced_frame=1 <- should be '0'
frames.frame.2.top_field_first=1
frames.frame.2.repeat_pict=1
frames.frame.3.interlaced_frame=0
frames.frame.3.top_field_first=0
frames.frame.3.repeat_pict=0

but I'm not transcoding from those discs.
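
(That listing has the shape of ffprobe's flat output; a guess at the sort of
invocation that produces it:

ffprobe -select_streams v -show_entries frame=interlaced_frame,top_field_first,repeat_pict -of flat -read_intervals "%+#4" input.vob

-of flat prints the frames.frame.N.key=value form shown above.)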

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
> On Sun., Apr. 19, 2020 at 18:46, Mark Filipak wrote:
>>
>> On 04/19/2020 12:31 PM, Carl Eugen Hoyos wrote:
>> > On Sun., Apr. 19, 2020 at 18:11, pdr0 wrote:
>> >
>> >> In his specific situation, he has a single combed frame. What he
>> >> chooses for yadif (or any deinterlacer) results in a different result
>> >> - both wrong - for his case.
>> >
>> >> If he selects "top" he gets an "A" duplicate frame. If he selects
>> >> "bottom" he gets a "B" duplicate frame.
> 
> To clarify: Above does not describe in a useful way how yadif
> operates and how its options can be used. I do understand
> that you can create a command line that makes it appear as if
> this would be the way yadif operates, but to assume that this
> is the normal behaviour that needs some kind of description
> for posterity is completely absurd.
> 
> Or in other words: Induction is not a useful way of showing
> or proving technical properties.


Nobody is saying this is "normal" behaviour for general use. I EXPLICITLY
wrote this is for application in a very specific scenario. 

That's what happens when you cut and edit out the context of a clear
message, or choose to read selectively. 





>> > No.
>>
>> No?
>>
>> But I can see the judder. Please, clarify.
>>
>> 55-telecine outputs frames A A A+B B B   ...no judder, 1/24th second comb
>> in 3rd frame.
>> Yadif top outputs judder and no comb ...so I assume that the stream
>> is A A A B B.
>> Yadif bottom outputs judder and no comb  ...so I assume that the stream
>> is A A B B B.
>>
>> My assumptions are based on what I see on the TV during playback and what
>> top &
>> bottom mean. Is any of that wrong? If so, how is it wrong?
>>
>> I apologize for being ignorant. I endeavor to become less ignorant.
> 
> Just a few thoughts:
> 
> There is no "yadif top" and "yadif bottom", rtfm.


"Top" is top field first, "Bottom" is bottom field first.  I placed mine in
"quotes". But it's clear what he is trying to communication



> No (useful) de-interlacer in FFmpeg duplicates a frame in
> normal operation, the thought that it might do this is
> completely ridiculous.

It does when you use it on progressive content. This is not a "normal"
operation - I explicitly said that.  This is progressive content with a
combed frame.  It demonstrates your lack of understanding of what is going
on, or you didn't bother to read the background information before replying. 



> yadif uses simplified motion compensation, you cannot combine
> it with select the way you can combine a linear interpolation filter
> (that does no motion compensation) with the select filter. Please
> avoid reporting it as a bug that this is not documented: We cannot
> document every single theoretical use case (your use case is 100%
> theoretical), we instead want to keep the documentation readable.

Nobody is saying this is a bug. This is the expected behaviour in this
specific situation.




Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Carl Eugen Hoyos
On Sun, Apr 19, 2020 at 18:46, Mark Filipak wrote:
>
> On 04/19/2020 12:31 PM, Carl Eugen Hoyos wrote:
> > On Sun, Apr 19, 2020 at 18:11, pdr0 wrote:
> >
> >> In his specific situation, he has a single combed frame. What he
> >> chooses for yadif (or any deinterlacer) results in a different result
> >> - both wrong - for his case.
> >
> >> If he selects "top" he gets an "A" duplicate frame. If he selects
> >> "bottom" he gets a "B" duplicate frame.

To clarify: Above does not describe in a useful way how yadif
operates and how its options can be used. I do understand
that you can create a command line that makes it appear as if
this would be the way yadif operates, but to assume that this
is the normal behaviour that needs some kind of description
for posterity is completely absurd.

Or in other words: Induction is not a useful way of showing
or proving technical properties.

> > No.
>
> No?
>
> But I can see the judder. Please, clarify.
>
> 55-telecine outputs frames A A A+B B B   ...no judder, 1/24th second comb in 
> 3rd frame.
> Yadif top outputs judder and no comb ...so I assume that the stream is A 
> A A B B.
> Yadif bottom outputs judder and no comb  ...so I assume that the stream is A 
> A B B B.
>
> My assumptions are based on what I see on the TV during playback and what top 
> &
> bottom mean. Is any of that wrong? If so, how is it wrong?
>
> I apologize for being ignorant. I endeavor to become less ignorant.

Just a few thoughts:

There is no "yadif top" and "yadif bottom", rtfm.

You did not post the command line including the complete,
uncut console output. (please don't do it now)

No (useful) de-interlacer in FFmpeg duplicates a frame in
normal operation, the thought that it might do this is
completely ridiculous.

yadif uses simplified motion compensation, you cannot combine
it with select the way you can combine a linear interpolation filter
(that does no motion compensation) with the select filter. Please
avoid reporting it as a bug that this is not documented: We cannot
document every single theoretical use case (your use case is 100%
theoretical), we instead want to keep the documentation readable.

I am still quite sure I posted a command line that you can test. You
don't have to report back.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Mark Filipak

On 04/19/2020 12:31 PM, Carl Eugen Hoyos wrote:

On Sun, Apr 19, 2020 at 18:11, pdr0 wrote:


In his specific situation, he has a single combed frame. What he chooses for
yadif (or any deinterlacer) results in a different result - both wrong - for
his case.



If he selects "top" he gets an "A" duplicate frame. If he selects
"bottom" he gets a "B" duplicate frame.


No.


No?

But I can see the judder. Please, clarify.

55-telecine outputs frames A A A+B B B   ...no judder, 1/24th second comb in 
3rd frame.
Yadif top outputs judder and no comb ...so I assume that the stream is A A 
A B B.
Yadif bottom outputs judder and no comb  ...so I assume that the stream is A A 
B B B.

My assumptions are based on what I see on the TV during playback and what top & bottom mean. Is any 
of that wrong? If so, how is it wrong?


I apologize for being ignorant. I endeavor to become less ignorant.

Thanks,
Mark.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Carl Eugen Hoyos
On Sun, Apr 19, 2020 at 18:11, pdr0 wrote:

> In his specific situation, he has a single combed frame. What he chooses for
> yadif (or any deinterlacer) results in a different result - both wrong - for
> his case.

> If he selects "top" he gets an "A" duplicate frame. If he selects
> "bottom" he gets a "B" duplicate frame.

No.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
> On Sun, Apr 19, 2020 at 16:31, pdr0 <pdr0@...> wrote:
>>
>> Carl Eugen Hoyos-2 wrote
>> > On Apr 19, 2020 at 08:08, pdr0 wrote:
>> >> Other types of typical single rate deinterlacing (such as yadif) will
>> >> force you to choose the top or bottom field
>> >
>> > As already explained: This is not true.
>>
>> How so?  Assuming you're actually applying it (all frames), or deint=1
>> and the frame is marked as interlaced in ffmpeg parlance:
>>
>> If you use mode=0  (single rate), either it auto selects the parity, or
>> you explicitly set it top or bottom
> 
> Yes, but this does not imply (in any way) "choose the top or bottom
> field".
> 

I agree - It's a poor choice of words if you take it out of context, and cut
out all the background information

Already explained

> It's a very specific scenario - He needs to keep that combed frame, as a
> single frame to retain the pattern. Single rate deinterlacing by any
> method
> will cause you to choose either the top field or bottom field, resulting
> in
> a duplicate frame or the prior or next frame - and it's counterproductive
> for what he wanted (blend deinterlacing to keep both fields as a single
> frame) 


In his specific situation, he has a single combed frame. Whatever he chooses for
yadif (or any deinterlacer) yields a different result - both wrong - for
his case. If he selects "top" he gets an "A" duplicate frame. If he selects
"bottom" he gets a "B" duplicate frame.

Do you need further explanation?








Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Carl Eugen Hoyos
On Sun, Apr 19, 2020 at 16:31, pdr0 wrote:
>
> Carl Eugen Hoyos-2 wrote
> > On Apr 19, 2020 at 08:08, pdr0 wrote:

> >> Other types of typical single rate deinterlacing (such as yadif) will
> >> force you to choose the top or bottom field
> >
> > As already explained: This is not true.
>
> How so?  Assuming you're actually applying it (all frames), or deint=1
> and the frame is marked as interlaced in ffmpeg parlance:
>
> If you use mode=0  (single rate), either it auto selects the parity, or
> you explicitly set it top or bottom

Yes, but this does not imply (in any way) "choose the top or bottom field".

> If you use mode =1 (double rate) , it's a non issue because both
> fields are retained

The (visually horrible) issue of wrong parity is especially noticeable
for double rate.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> On Apr 19, 2020 at 08:08, pdr0 <pdr0@...> wrote:
>> 
>> Other types of typical single rate deinterlacing (such as yadif) will
>> force
>> you to choose the top or bottom field
> 
> As already explained: This is not true.

How so? Assuming you're actually applying it (all frames), or deint=1 and
the frame is marked as interlaced in ffmpeg parlance:

If you use mode=0  (single rate), either it auto selects the parity, or you
explicitly set it top or bottom

If you use mode =1 (double rate) , it's a non issue because both fields are
retained




Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Carl Eugen Hoyos


> On Apr 19, 2020 at 08:08, pdr0 wrote:
> 
> Other types of typical single rate deinterlacing (such as yadif) will force
> you to choose the top or bottom field

As already explained: This is not true.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread Mark Filipak



On 04/19/2020 02:08 AM, pdr0 wrote:

Mark Filipak wrote

My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.

"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by
filtering all lines with a
(1 2 1) filter."

I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1"
refers. pdr0 recommended it
and I found that it works better than any of the other deinterlace
filters. Without pdr0's help, I
would never have tried it.



[1,2,1] refers to a vertical convolution kernel in image processing. It
refers to a "block" of 1 horizontal, 3 vertical pixels. The numbers are the
weights. The center "2" refers to the current pixel. Pixel values are
multiplied by the numbers, and the sum is taken; that's the output value.
Sometimes a normalization calculation is applied in some implementations;
you'd have to look at the actual code to check. But the net effect is that each
line is blended with its neighbor above and below.


Thank you, pdr0. That makes perfect sense.


In general, it's frowned upon, because you get "ghosting" or double image .


Yes, that's what I see. But I don't care. Why don't I care? 1, ghosting is plainly evident for 
planes such as building sides during panning shots, not for patterned surfaces such as landscape 
shots or people's faces, and 2, it greatly reduces the (apparent, but not real) judder that would 
otherwise show up for the original, "combed" frame. When you also consider that the ghost is there 
for only 1/60th second, it becomes nearly invisible. Of course, that's not true for 23-telecine. In 
23-telecine (i.e., 30fps), there are 2 combed frames, they abut each other, and each of them is 
1/30th second. That means that the ghosting is 1/15th second -- clearly awful.



1 frame now is a mix of 2
different times, instead of distinct frames. But you need this property in
this specific case to retain the pattern in terms of reducing the judder .
Other types of typical single rate deinterlacing (such as yadif) will force
you to choose the top or bottom field, and you will get a duplicate frame of
the one before or after, ruining your pattern. Double rate deinterlacing will
introduce 2 frames in that spot, ruining your pattern . Those work against
your reasons for doing this - anti-judder


BINGO! You are a pro, pdr0. You have really taken the time to understand what 
I'm doing. Thank you!

By the way, my first successful movie, 55-telecine transcode has finished. (The one I did earlier 
today has PTS errors.) Hold on while I watch it ("Patton" Blu-ray).


Oh, dear. There were no errors, yet the video freezes at about 3:20. If I back up 5 seconds and 
resume, the video plays again, through where it froze, but the audio is gone (silence). If I 
continue letting it play, the audio eventually resumes, but at the wrong place (i.e., audio from a 
scene several minutes further along in the stream). If I continue letting it play, the video freezes 
again. The total running time (which should be a constant) is not constant. It stays just ahead of 
the actual running time by about 2x seconds, until the video freezes, at which time the total 
running time also freezes at the value it had prior to the freeze.


I think that this 55-telecine is stressing ffmpeg in ways it's not been stressed before and is 
exposing flaws that I fear the ffmpeg principals will not accept because they think what I'm doing 
is a load of crap.



To me, deinterlace just means weaving the odd & even lines. To me, a frame
that is already woven
doesn't need deinterlacing. I know that the deinterlace filters do
additional processing, but none
of them go into sufficient detail for me to know, in advance, what they
do.


Weave means both intact fields are combined into a frame . Basically do
nothing. That's how video is commonly stored.


Yes, I know. I thought I invented the word "weave" -- I wanted to avoid the word "interlace" -- for 
lines that have been combined in a frame: "Woven lines" instead of "interlaced lines". I would have 
used "interleave", but someone fairly significant in the video world is already using "interleave" 
to replace "interlace" when referring to fields: "Interleaved fields" instead of "interlaced fields".



If it's progressive video, yes it doesn't need deinterlacing. By definition,
progressive means both fields are from the same time, belong to the same
frame


There ya go. You have read the MPEG spec.


Deinterlace means separating the fields and resizing them to full dimension
frames by whatever algorithm +/- additional processing

Single rate deinterlace means half the fields are discarded (29.97i =>
29.97p). Either odd or even fields are kept.


Oh, now I understand you. You see, I don't want to do that. What I want is to not throw away either 
odd or even. I want to blend them. If you throw away the combed frame's odd field, then 55-telecine 
(A A A+B B B) 

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Mark Filipak wrote
> My experience is that regarding "decombing" frames 2 7 12 17 ..., 
> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.
> 
> "lb/linblenddeint
> "Linear blend deinterlacing filter that deinterlaces the given block by
> filtering all lines with a 
> (1 2 1) filter."
> 
> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1"
> refers. pdr0 recommended it 
> and I found that it works better than any of the other deinterlace
> filters. Without pdr0's help, I 
> would never have tried it.


[1,2,1] refers to a vertical convolution kernel in image processing. It
refers to a "block" of 1 horizontal, 3 vertical pixels. The numbers are the
weights. The center "2" refers to the current pixel. Pixel values are
multiplied by the numbers, and the sum is taken; that's the output value.
Sometimes a normalization calculation is applied in some implementations;
you'd have to look at the actual code to check. But the net effect is that each
line is blended with its neighbor above and below.
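
A quick worked example (pixel values invented for illustration): for three vertically
adjacent pixel values 80 (above), 60 (current), 100 (below), the filtered center value is
(1*80 + 2*60 + 1*100) / 4 = 300 / 4 = 75, the divisor 4 being the sum of the weights 1+2+1.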

In general, it's frowned upon, because you get "ghosting" or double image .
1 frame now is a mix of 2 
different times, instead of distinct frames. But you need this property in
this specific case to retain the pattern in terms of reducing the judder .
Other types of typical single rate deinterlacing (such as yadif) will force
you to choose the top or bottom field, and you will get a duplicate frame of
the one before or after, ruining your pattern. Double rate deinterlacing will
introduce 2 frames in that spot, ruining your pattern . Those work against
your reasons for doing this - anti-judder



> To me, deinterlace just means weaving the odd & even lines. To me, a frame
> that is already woven 
> doesn't need deinterlacing. I know that the deinterlace filters do
> additional processing, but none 
> of them go into sufficient detail for me to know, in advance, what they
> do.


Weave means both intact fields are combined into a frame . Basically do
nothing. That's how video is commonly stored. 

If it's progressive video, yes it doesn't need deinterlacing. By definition,
progressive means both fields are from the same time, belong to the same
frame

Deinterlace means separating the fields and resizing them to full dimension
frames by whatever algorithm +/- additional processing

Single rate deinterlace means half the fields are discarded (29.97i =>
29.97p). Either odd or even fields are kept. Double rate deinterlacing
means all fields are kept (29.97i => 59.94p). The "rate" refers to the
output rate
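
As a concrete sketch (untested; file names are placeholders), the two modes look like this
on a 29.97i source:

ffmpeg -i 29.97i_in.mkv -vf yadif=mode=0 -c:v libx264 out_29.97p.mkv
ffmpeg -i 29.97i_in.mkv -vf yadif=mode=1 -c:v libx264 out_59.94p.mkv

mode=0 (send_frame) emits one frame per input frame; mode=1 (send_field) emits one frame
per field, doubling the rate.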






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 10:36 PM, Carl Eugen Hoyos wrote:

On Sun, Apr 19, 2020 at 04:12, Mark Filipak wrote:


On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

On Sun, Apr 19, 2020 at 03:43, Mark Filipak wrote:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace"
solely to them as single frames. For that application, 'pp=linblenddeint'
appears to do a better job (visually) than does 'yadif'.


Did you ever test my suggestion?


I've reviewed all your posts to this and related threads, Carl Eugen, and I've not found your 
suggestion. Do you mind suggesting it again?


Thanks.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
On Sun, Apr 19, 2020 at 04:12, Mark Filipak wrote:
>
> On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:
> > On Sun, Apr 19, 2020 at 03:43, Mark Filipak wrote:
> >
> >> My experience is that regarding "decombing" frames 2 7 12 17 ...,
> >> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.
> >
> > (Funny that while I always strongly disagreed some people also
> > said this when yadif was new - this doesn't make it more "true"
> > in any useful sense though.)
>
> I am splitting out solely frames (n+1)%5=3 and applying "deinterlace"
> solely to them as single frames. For that application, 'pp=linblenddeint'
> appears to do a better job (visually) than does 'yadif'.

Did you ever test my suggestion?

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

Oops. "(n+1)%5=3" should have been "(n+1)%5==3".

"On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

On Sun, Apr 19, 2020 at 03:43, Mark Filipak wrote:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace" solely to them as single 
frames. For that application, 'pp=linblenddeint' appears to do a better job (visually) than does 
'yadif'.



"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by
filtering all lines with a (1 2 1) filter."

I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers.


To the fact that no other filter uses as little information to deinterlace.


To me, deinterlace just means weaving the odd & even lines. To me, a frame that is already woven 
doesn't need deinterlacing. I know that the deinterlace filters do additional processing, but none 
of them go into sufficient detail for me to know, in advance, what they do.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

On Sun, Apr 19, 2020 at 03:43, Mark Filipak wrote:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace" solely to them as single 
frames. For that application, 'pp=linblenddeint' appears to do a better job (visually) than does 
'yadif'.



"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by
filtering all lines with a (1 2 1) filter."

I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers.


To the fact that no other filter uses as little information to deinterlace.


To me, deinterlace just means weaving the odd & even lines. To me, a frame that is already woven 
doesn't need deinterlacing. I know that the deinterlace filters do additional processing, but none 
of them go into sufficient detail for me to know, in advance, what they do.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
On Sun, Apr 19, 2020 at 03:43, Mark Filipak wrote:

> My experience is that regarding "decombing" frames 2 7 12 17 ...,
> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.

(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)

> "lb/linblenddeint
> "Linear blend deinterlacing filter that deinterlaces the given block by
> filtering all lines with a (1 2 1) filter."
>
> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" 
> refers.

To the fact that no other filter uses as little information to deinterlace.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 08:44 PM, Carl Eugen Hoyos wrote:

On Sat, Apr 18, 2020 at 21:32, Mark Filipak wrote:


Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace.


pp=linblenddeint is a (very simple) deinterlacer, once upon a
time it was the preferred deinterlacer for some users, possibly
because of its low performance requirements.
telecine=pattern=5 produces one interlaced frame out of five
(assuming non-static input).


Well, you see, I don't call frames 2 7 12 17 ... of the 55 telecine "interlaced". I have been 
calling them "combed", but I do so simply because, 1, according to the MPEG spec they are not 
interlaced, and 2, there's no established term for them. Perhaps there is a better term than 
"combed", but I don't know it. I do know that according to the MPEG spec, "interlaced" is not the 
correct term. Applying "interlace" to both 1/framerate & 1/24 second temporal line differences 
confuses novices regarding what interlace actually is. That confusion leads to a cascade of 
confusion regarding many, many other processes (in ffmpeg and elsewhere) in which the terms 
"interlace" and "deinterlace" are used indiscriminately.



Carl Eugen

PS:
Note that you have a different definition of "interlaced" than
FFmpeg due to the fact that you only think of analogue video
transmission which FFmpeg does not support. FFmpeg can
only deal with digital video frames, so "interlace" within
FFmpeg is not a process but a property of (some) frames. I
believe you call this property "combing".
Or in other words: FFmpeg does not offer any explicit
"deinterlacing" capabilities, only different filters for decombing
that we call deinterlacers (like linblenddeint, bwdif and yadif).


Carl Eugen, you hit the nail on the head!

The MPEG spec defines interlace as 1/fieldrate. To the best of my knowledge, the temporal difference 
between odd/even lines in the "combed" frame(s) of a telecine (any telecine) is 1/24 sec (not 
1/fieldrate). I know that most folks call that "interlace", but trust me, applying the same term to 
two quite different phenomena is one of the things that confuses novices.


To clarify: I don't think of analog video transmission. To the best of my knowledge, when a vintage 
TV program is mastered to DVD (or presumably to BD though I've not encountered one), the analog 
tapes are digitized and packaged as 1/framerate interlaced fields. That's where I enter the picture. 
Though it is true that I date from the time of analog TV, and though it's true that I'm an engineer 
who designed for analog TV, I did so strictly in the digital domain, designing an integrated fsync, 
vsync, dot clock (and, for NTSC, color burst) sequencer in order to make Atari game systems more 
compliant with  NTSC & PAL timing standards.



PPS:
I know very well that even inside FFmpeg there are several
definitions of "interlaced frames". But since we discuss filters
in an FFmpeg filter chain, neither decoding field-encoded
mpeg2video or paff streams nor mbaff or ildct encoding are
relevant; only the actual content of single frames is, which can
be progressive or interlaced (for you: "not combed" or
"combed") which is - in theory and to a very large degree in
practice - independent of the encoding method.


Thanks for your insight on this. My experience is that regarding "decombing" frames 2 7 12 17 ..., 
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by filtering all lines with a 
(1 2 1) filter."


I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers. pdr0 recommended it 
and I found that it works better than any of the other deinterlace filters. Without pdr0's help, I 
would never have tried it.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
On Sat, Apr 18, 2020 at 21:32, Mark Filipak wrote:

> Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace.

pp=linblenddeint is a (very simple) deinterlacer, once upon a
time it was the preferred deinterlacer for some users, possibly
because of its low performance requirements.
telecine=pattern=5 produces one interlaced frame out of five
(assuming non-static input).

Carl Eugen

PS:
Note that you have a different definition of "interlaced" than
FFmpeg due to the fact that you only think of analogue video
transmission which FFmpeg does not support. FFmpeg can
only deal with digital video frames, so "interlace" within
FFmpeg is not a process but a property of (some) frames. I
believe you call this property "combing".
Or in other words: FFmpeg does not offer any explicit
"deinterlacing" capabilities, only different filters for decombing
that we call deinterlacers (like linblenddeint, bwdif and yadif).

PPS:
I know very well that even inside FFmpeg there are several
definitions of "interlaced frames". But since we discuss filters
in an FFmpeg filter chain, neither decoding field-encoded
mpeg2video or paff streams nor mbaff or ildct encoding are
relevant; only the actual content of single frames is, which can
be progressive or interlaced (for you: "not combed" or
"combed") which is - in theory and to a very large degree in
practice - independent of the encoding method.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 01:01 PM, Carl Eugen Hoyos wrote:

On Sat, Apr 18, 2020 at 00:53, Mark Filipak wrote:


I'm not using the 46 telecine anymore because you introduced me to 
'pp=linblenddeint'
-- thanks again! -- which allowed me to decomb via the 55 telecine.


Why do you think that pp is a better de-interlacer than yadif?
(On hardware that's not more than ten years old.)

Carl Eugen


The subjects of prior threads are getting mixed in with this thread, "ffmpeg 
architecture question".

The architecture question is about recursion/non-recursion of filter complexes.

The prior threads were about how to decomb a telecine in general and a 55-telecine in particular. 
Oh, well. It's my fault. I shouldn't have cranked one Jack-in-the-box before closing the previous 
Jack-in-the-box.


Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace. The transcode source is 
progressive video (p24), not interlaced video (i30-telecast or 125-telecast).


I'm performing p24-to-p60 transcode via 55 pull-down telecine. The result has 1 combed frame in 
every set of 5 frames (P P C P P). I'm trying to decomb those combed frames.


'pp' seems to do a better job of decombing because it has a procedure, 'pp=linblenddeint', that 
seems to do a better job of mixing the combed fields. 'yadif' seems to be optimized solely for 
deinterlacing.
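
For the record, the shape of what I'm doing is roughly this (a sketch; file names and codec 
settings are placeholders, and the interleave timestamp caveat discussed elsewhere in these 
threads still applies):

ffmpeg -i p24_source.mkv -filter_complex "telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]select='eq(mod(n+1\,5)\,3)',pp=linblenddeint[D],[C][D]interleave" -c:v libx264 out_p60.mkv

That is: pass frames 1 2 4 5 of each group of 5 untouched, and run only the combed 3rd frame 
through pp=linblenddeint before merging the two branches back together.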


To be clear: I will never be processing telecast sources and will never be 
deinterlacing.

Thank you all for being patient.

Regards,
Mark.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Jim DeLaHunt

On 2020-04-18 02:08, Paul B Mahol wrote:

[Mark Filipak] is just a genuine troll, and does not know better; I 
propose you just ignore his troll attempts. 


I disagree. What I see from Mark's messages is that he is genuinely 
using ffmpeg for reasonable purposes. He runs into limitations of the 
inadequate documentation, which is not up to the task of explaining this 
complex, capable, but inconsistent piece of software. He asks questions. 
He persists in trying to get answers, even in the face of rudeness and 
dismissal.


I think that adds up to a legitimate contributor to the list, not to a 
troll.

    —Jim DeLaHunt, software engineer, Vancouver, Canada


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

Sorry, the previous post got sent accidentally by my email program. Kindly 
ignore it.

On 04/18/2020 01:01 PM, Carl Eugen Hoyos wrote:

On Sat, Apr 18, 2020 at 00:53, Mark Filipak wrote:


I'm not using the 46 telecine anymore because you introduced me to 
'pp=linblenddeint'
-- thanks again! -- which allowed me to decomb via the 55 telecine.


Why do you think that pp is a better de-interlacer than yadif?
(On hardware that's not more than ten years old.)

Carl Eugen


This thread, "ffmepg architecture question", is getting mixed in

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> On Sat, Apr 18, 2020 at 19:27, pdr0 <pdr0@...> wrote:
>>
>> Carl Eugen Hoyos-2 wrote
>> > On Sat, Apr 18, 2020 at 00:53, Mark Filipak <markfilipak.windows+ffmpeg@...> wrote:
>> >
>> >> I'm not using the 46 telecine anymore because you introduced me to
>> >> 'pp=linblenddeint'
>> >> -- thanks again! -- which allowed me to decomb via the 55 telecine.
>> >
>> > Why do you think that pp is a better de-interlacer than yadif?
>> > (On hardware that's not more than ten years old.)
>>
>> It's not a question of "better" in his case.
>>
>> It's a very specific scenario - He needs to keep that combed frame, as a
>> single frame to retain the pattern.
> 
> I know, while I agree with all other developers that this is useless,
> I have explained how it can be done.

I dislike it too, but that's just an opinion. He's asking a technical
question - that deserves a technical answer.


>> Single rate deinterlacing by any method
>> will cause you to choose either the top field or bottom field, resulting
>> in a duplicate of the prior or next frame - and it's counterproductive
>> for what he wanted (blend deinterlacing to keep both fields as a single
>> frame)
> 
> (To the best of my knowledge, this is technically simply not true.)
> 
> yadif by default does not change the number of frames.
> (Or in other words: It works just like the pp algorithms, only better)

most deinterlacers have 2 modes, single and double rate. For example, yadif
has mode=0 or mode=1, e.g., if you started with a 29.97 interlaced source,
you will get 29.97p in single rate, 59.94p in double rate. Double rate is
more "proper" for interlaced content. Single rate discards half the temporal
information

In general, blend deinterlacing is terrible, the worst type of
deinterlacing, but he "needs" it for his specific scenario.  The "quality"
of yadif is quite low ,deinterlacing and aliasing artifacts. bwdif is
slightly better, and there are more complex deinterlacers not offered by
ffmpeg 






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
On Sat, Apr 18, 2020 at 19:27, pdr0 wrote:
>
> Carl Eugen Hoyos-2 wrote
> > On Sat, Apr 18, 2020 at 00:53, Mark Filipak <markfilipak.windows+ffmpeg@...> wrote:
> >
> >> I'm not using the 46 telecine anymore because you introduced me to
> >> 'pp=linblenddeint'
> >> -- thanks again! -- which allowed me to decomb via the 55 telecine.
> >
> > Why do you think that pp is a better de-interlacer than yadif?
> > (On hardware that's not more than ten years old.)
>
> It's not a question of "better" in his case.
>
> It's a very specific scenario - He needs to keep that combed frame, as a
> single frame to retain the pattern.

I know, while I agree with all other developers that this is useless,
I have explained how it can be done.

> Single rate deinterlacing by any method
> will cause you to choose either the top field or bottom field, resulting in
> a duplicate of the prior or next frame - and it's counterproductive
> for what he wanted (blend deinterlacing to keep both fields as a single
> frame)

(To the best of my knowledge, this is technically simply not true.)

yadif by default does not change the number of frames.
(Or in other words: It works just like the pp algorithms, only better)

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> On Sat, Apr 18, 2020 at 00:53, Mark Filipak <markfilipak.windows+ffmpeg@...> wrote:
> 
>> I'm not using the 46 telecine anymore because you introduced me to
>> 'pp=linblenddeint'
>> -- thanks again! -- which allowed me to decomb via the 55 telecine.
> 
> Why do you think that pp is a better de-interlacer than yadif?
> (On hardware that's not more than ten years old.)

It's not a question of "better" in his case. 

It's a very specific scenario - He needs to keep that combed frame, as a
single frame to retain the pattern. Single rate deinterlacing by any method
will cause you to choose either the top field or bottom field, resulting in
a duplicate of the prior or next frame - and it's counterproductive
for what he wanted (blend deinterlacing to keep both fields as a single
frame)







Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Paul B Mahol wrote
> On 4/18/20, pdr0 <pdr0@...> wrote:
>> Mark Filipak wrote
>>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>>> working because it is working
>>> for me.
>>
>>
>> Interleave works correctly in terms of timestamps
>>
>> Unless I'm misunderstanding the point of this thread, your "recursion
>> issue"
>> can be explained from how  interleave works
>>
>>
> 
> He is just a genuine troll, and does not know better; I propose you just
> ignore his troll attempts.


I do not believe so. He is truly interested in how ffmpeg works. 

Your prior comment about interleave and timestamps was succinct and perfect
- but I can see why it would be "cryptic" for many users. If someone
claims that comment is "irrelevant", then they are not "seeing" what you
see. It deserves to be expanded upon; if not for him, then do it for other
people who search for information. 

There are different types of people, different learning styles, and
different ways of seeing things.  Teach other people what you know to be
true.  Explain in different words if they don't get it. A bit of tolerance
now, especially in today's crappy world goes a long way. 






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
On Sat, Apr 18, 2020 at 00:53, Mark Filipak wrote:

> I'm not using the 46 telecine anymore because you introduced me to 
> 'pp=linblenddeint'
> -- thanks again! -- which allowed me to decomb via the 55 telecine.

Why do you think that pp is a better de-interlacer than yadif?
(On hardware that's not more than ten years old.)

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Zwanzig

On 4/18/2020 2:08 AM, Paul B Mahol wrote:

On 4/18/20, pdr0  wrote:

Mark Filipak wrote

Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
working because it is working for me.



Interleave works correctly in terms of timestamps

Unless I'm misunderstanding the point of this thread, your "recursion issue"
can be explained from how  interleave works



He is just a genuine troll, and does not know better; I propose you just
ignore his troll attempts.


Which "he" are you referring to? pdr0 or Mark? (Paul?) Or someone else?

z!

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Paul B Mahol
On 4/18/20, pdr0  wrote:
> Mark Filipak wrote
>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>> working because it is working
>> for me.
>
>
> Interleave works correctly in terms of timestamps
>
> Unless I'm misunderstanding the point of this thread, your "recursion issue"
> can be explained from how  interleave works
>
>

He is just a genuine troll, and does not know better; I propose you just
ignore his troll attempts.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread pdr0
Mark Filipak wrote
> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
> working because it is working 
> for me.


Interleave works correctly in terms of timestamps

Unless I'm misunderstanding the point of this thread, your "recursion issue"
can be explained from how  interleave works





Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 06:19 PM, pdr0 wrote:

Paul B Mahol wrote

The interleave filter uses frame pts/timestamps for picking frames.



I think Paul is correct.

@Mark -
Everything in filter chain works as expected, except interleave in this case


The only problem I have with 'interleave' is that it doesn't halt at end-of-stream, but that's no 
big deal.



You can test and verify the output of each node in a filter graph,
individually, by splitting and using -map >
D2 below demonstrates that the output of blend is working properly , and
this also implies that G,H were correct, but you could split and -map them
too to double check

  ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend=all_mode=average,split[D][D2],[C][D]interleave[out]"
-map [out] -c:v libx264 -crf 20 testout.mkv  -map [D2] -c:v libx264 -crf 20
testD2.mkv  -y


Ah! I see what you're doing with [D2]. I'll try that in a few minutes (as soon as I figure out how I 
can pass-through LPCM 5.1 without error).


I'm not using the 46 telecine anymore because you introduced me to 'pp=linblenddeint' -- thanks 
again! -- which allowed me to decomb via the 55 telecine.



As Paul pointed out, interleave works using timestamps, not "frames". If
you took 2 separate video files, with the same fps, same timestamps, they
won't interleave correctly in ffmpeg. The example in the documentation
actually does not work if they had the same timestamps. You would have to
offset the PTS of one relative to the other for interleave to work
correctly.


That sounds like a PITA. Good thing I'm not merging 2 streams. I don't anticipate ever having to do 
that.



If you check the timestamps of each output node, you will see why it's not
working here, and why it works properly in some other cases .  To get it
working in this example, you would need [D] to assume [H]'s timestamps,
because those are where the "gaps" or "holes" are in [C] . It might be
possible using an expression using setpts


Gee, pdr0, I'm sorry you took the time to write about 'interleave' not working because it is working 
for me.


Oops! Got to run. I just received an email from Farook Farshad  of Saudi Aramco Oil 
informing me of a proposal and that I need to "revert back".  ;-)


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread pdr0
Paul B Mahol wrote
> The interleave filter uses frame pts/timestamps for picking frames.


I think Paul is correct. 

@Mark -
Everything in filter chain works as expected, except interleave in this case

You can test and verify the output of each node in a filter graph,
individually, by splitting and using -map.

D2 below demonstrates that the output of blend is working properly , and
this also implies that G,H were correct, but you could split and -map them
too to double check

 ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend=all_mode=average,split[D][D2],[C][D]interleave[out]"
-map [out] -c:v libx264 -crf 20 testout.mkv  -map [D2] -c:v libx264 -crf 20
testD2.mkv  -y

As Paul pointed out, interleave works using timestamps, not "frames". If
you took 2 separate video files, with the same fps, same timestamps, they
won't interleave correctly in ffmpeg. The example in the documentation
actually does not work if they had the same timestamps. You would have to
offset the PTS of one relative to the other for interleave to work
correctly. 

If you check the timestamps of each output node, you will see why it's not
working here, and why it works properly in some other cases .  To get it
working in this example, you would need [D] to assume [H]'s timestamps,
because those are where the "gaps" or "holes" are in [C] . It might be
possible using an expression using setpts
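
As an illustration of the timestamp dependence (an untested sketch using synthetic sources):
offsetting one stream by half a frame period makes interleave alternate the two streams:

ffmpeg -f lavfi -i testsrc2=rate=30 -f lavfi -i testsrc=rate=30 -lavfi "[1:v]setpts=PTS+1/60/TB[b];[0:v][b]interleave" -t 1 -f null -

The 1/60/TB term adds half of the 1/30 s frame period to the second stream's timestamps, so
its frames land in the gaps between the first stream's frames.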






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak



On 04/17/2020 02:35 PM, Ted Park wrote:

Hi,


"split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,null[D],interleave"

Though the [B][D] branch passes every frame that is presented at [B], datascope 
does not appear for frames 2 7 12 17 etc.

That reveals that traversal of ffmpeg filter complexes is not recursive.


I'm pretty sure filter_complex came from "filtergraph that is not simple 
(complex)" rather than a complex of filters.


I think I remember reading that a filter complex is called a "filter complex" because it contains 
pads that support multiple (complex) filters to be constructed with multiple paths.



There's no nesting of filters; in what context are you referring to recursion?


Good question, Ted. It would be very much the same as code recursion but using video frames instead 
of execution state.


With recursion, a frame that succeeds to the output (or to the input queue of a multiple input 
'collector', e.g., interleave) would be taken back to the input and tested again with other paths in 
order to (possibly) generate more successes. Essentially, with recursion, a frame could be cloned 
and 'appear' simultaneously in differing parts of the filter complex.


Without recursion, a frame that succeeds becomes essentially 'frozen' and is 
not 'tested' further.

It appears that frames passing through ffmpeg filter complexes 'freeze' on 
success and do not recurse.


I've been trying to understand the 55 telecine filter script you came up 
with ...


The 55 telecine filter complex has only one split. The upper split, [A], passes all frames unaltered 
except 2 7 12 17 etc. It passes them unaltered because they're not combed. In contrast, the lower 
split, [B], passes frames 2 7 12 17 etc. through a 'pp=linblenddeint' filter in order to reduce 
their combing.



... and eliminate as many splits as possible, do you mean how the datascope 
wouldn't appear for the frames selected? ...


In the process of designing the filter complex, I discovered a most important property of how ffmpeg 
operates. I'm attempting to get a developer (Paul?) to confirm whether what I've seen is 
non-recursion or whether I'm mistaken.



... Same timestamps might be the issue again, maybe setpts=PTS+1 would make 
them show up? ...


Again, you have zoomed in on a very important point: When (and where in the filter complex) does 
ffmpeg assign frame numbers? I would guess that ffmpeg assigns frame numbers (0 1 2 3...) at the 
input, based on the PTSs of the arriving frames. The alternative would be that ffmpeg defers 
assigning the 'next' frame number (n+1) to a frame only when the frame succeeds to the output (or a 
queue), but I don't think that's what ffmpeg does. Again, this is a very important architectural 
detail, and I seek confirmation from a developer (Paul?). The reason I think that ffmpeg *does not* 
defer is that deferring would turn fixed modulo 'n' frame calculations into variables (that would be 
insanely difficult to determine in advance).



... Or does interleave take identical dts input and order them according to 
some rule?


Yes, there are rules regarding PTSs & DTSs regarding whether the frame is a B-frame or not, and, I 
think, whether PTS & DTS are within certain timing margins (I recall 100 micro-seconds but that may 
be wrong ...it's not really important in any event).


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 02:46 PM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

Another cogent point:

Suppose I put 'datascope' before a filter that would pass the original frame
(say, based on a
color), but that the filter won't pass the 'scope' image (because it doesn't
contain that color). I
haven't tried it, but I'll bet that the 'scope' image doesn't appear at all
and that the frame is
dropped.

(Note that if I moved the 'datascope' to after the filter, it would work as
expected.)

Supposing that I'm correct, and considering the prior experiments that I did
conduct, the
non-recursive nature of ffmpeg filter complexes is an important
architectural feature that's not
documented.

Understand that I'm not criticizing ffmpeg. ffmpeg works how it works and
that's fine. It just
doesn't work like an oscilloscope. It could have been designed to work like
an oscilloscope, but it
wasn't. That's okay, but it really should be documented.

Do you agree, Paul, or am I mistaken?


Tried this simple command?

ffmpeg -f lavfi -i testsrc2 -f lavfi -i testsrc -lavfi
"[0:v]select='mod(n,2)'[a];[1:v]select='1-mod(n,2)'[b];[a][b]interleave"
-f null -

The interleave filter uses frame pts/timestamps for picking frames.


The interleave filter is not the issue, Paul. Your response is off-topic. Nonetheless, I'll try to 
respond constructively.


Regarding your command line, I read https://ffmpeg.org/ffmpeg-devices.html#lavfi. I didn't 
understand it. That 'said', your command line appears to alternate frames: frame 0 from testsrc, 
frame 1 from testscr2, etc. But to directly answer *your* question: No, I have not "Tried this 
simple command".


Since you have not responded to my "Do you agree" questions, and have instead presented me with some 
sort of puzzle (or test), well, I don't know what to think.


If there's one thing I've learned about you, Paul, it's that you don't answer simple questions with 
simple answers.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Paul B Mahol
On 4/17/20, Mark Filipak  wrote:
> Another cogent point:
>
> Suppose I put 'datascope' before a filter that would pass the original frame
> (say, based on a
> color), but that the filter won't pass the 'scope' image (because it doesn't
> contain that color). I
> haven't tried it, but I'll bet that the 'scope' image doesn't appear at all
> and that the frame is
> dropped.
>
> (Note that if I moved the 'datascope' to after the filter, it would work as
> expected.)
>
> Supposing that I'm correct, and considering the prior experiments that I did
> conduct, the
> non-recursive nature of ffmpeg filter complexes is an important
> architectural feature that's not
> documented.
>
> Understand that I'm not criticizing ffmpeg. ffmpeg works how it works and
> that's fine. It just
> doesn't work like an oscilloscope. It could have been designed to work like
> an oscilloscope, but it
> wasn't. That's okay, but it really should be documented.
>
> Do you agree, Paul, or am I mistaken?

Tried this simple command?

ffmpeg -f lavfi -i testsrc2 -f lavfi -i testsrc -lavfi
"[0:v]select='mod(n,2)'[a];[1:v]select='1-mod(n,2)'[b];[a][b]interleave"
-f null -

The interleave filter uses frame pts/timestamps for picking frames.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Ted Park
Hi,

> "split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,null[D],interleave"
> 
> Though the [B][D] branch passes every frame that is presented at [B], 
> datascope does not appear for frames 2 7 12 17 etc.
> 
> That reveals that traversal of ffmpeg filter complexes is not recursive.

I'm pretty sure filter_complex came from "filtergraph that is not simple 
(complex)" rather than a complex of filters. There's no nesting of filters, so 
in what context are you referring to recursion? I've been trying to understand 
the 55-telecine filter script you came up with, and to eliminate as many 
splits as possible; do you mean how the datascope wouldn't appear for the 
frames selected? The same timestamps might be the issue again; maybe setpts=PTS+1 
would make them show up? Or does interleave take identical-dts input and order 
the frames according to some rule?
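
For what it's worth, an untested sketch of that idea against the graph above
(in.mkv is a placeholder; the +1 is one tick of the stream time base),
nudging the datascope branch so the two inputs to interleave no longer carry
identical timestamps:

ffmpeg -i in.mkv -filter_complex "split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,setpts=PTS+1[D],[C][D]interleave" -f null -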

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

Another cogent point:

Suppose I put 'datascope' before a filter that would pass the original frame (say, based on a 
color), but that the filter won't pass the 'scope' image (because it doesn't contain that color). I 
haven't tried it, but I'll bet that the 'scope' image doesn't appear at all and that the frame is 
dropped.


(Note that if I moved the 'datascope' to after the filter, it would work as 
expected.)

Supposing that I'm correct, and considering the prior experiments that I did conduct, the 
non-recursive nature of ffmpeg filter complexes is an important architectural feature that's not 
documented.


Understand that I'm not criticizing ffmpeg. ffmpeg works how it works and that's fine. It just 
doesn't work like an oscilloscope. It could have been designed to work like an oscilloscope, but it 
wasn't. That's okay, but it really should be documented.


Do you agree, Paul, or am I mistaken?


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

Another revealing example:

"split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,null[D],interleave"

Though the [B][D] branch passes every frame that is presented at [B], datascope does not appear for 
frames 2 7 12 17 etc.


That reveals that traversal of ffmpeg filter complexes is not recursive.

Do you agree, Paul, or am I mistaken?


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Carl Zwanzig

Sigh.

On 4/17/2020 5:39 AM, Mark Filipak wrote:

On 04/17/2020 06:34 AM, Monex wrote:


Please do not use complicated phrases or words like "germane" - many 
ffmpeg-users are not native English speakers and you are causing 
confusion; it is not necessary on a technical list.


Actually, they -are- necessary on a technical list when supposedly simple 
words aren't correct. (Ever read an academic or technical paper?)


[...] I'm always careful to use the best words and punctuation and 
sentence construction. I really don't know how to make you happy. You know, 
if someone doesn't understand what I write, they can always ask for 
clarification.


It's usually best to use the correct word for a concept in the language 
you're currently using, even if that word is borrowed from another language. 
I could easily argue that some other writers here are not skilled in English 
idioms, so -their- messages are unclear. A common example: in English, "Feel 
free to provide" suggests that the writer is offering permission to the other 
person, while the way it's often used here is to say "you need to give us...".



You are too verbose (noisy) in your posts. Verbosity is only useful in 
commands and console outputs, or when telling someone to stfu.


"too verbose" == "not cryptic"?

Your complaints are ridiculous.


I tend to agree. I'm also not going to discuss this further in at least the 
next few days.


z!

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

For example, in the select branch that contains

datascope=size=1920x1080:x=45:y=340:mode=color2,select='not(eq(mod((n+1)\,5)\,3))'

datascope appears for frames 0 1 3 4 5 6 8 9 10 11 13 14 15 16 18 19 etc.
whereas in the select branch that contains

datascope=size=1920x1080:x=45:y=340:mode=color2,select='eq(mod((n+1)\,5)\,3)'

datascope appears solely for frames 2 7 12 17 etc.

That reveals that, though datascope is placed before the modulo filter and is therefore exposed to 
the entire stream, only those frames that pass completely through the branch are 'scoped'. 
Logically, datascope's 'scope' must be replacing the contents of the frames that pass through, not 
the frames that don't pass through.
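
A minimal, untested way to check that reading (testsrc2 stands in for any
source): with datascope ahead of the select, the "frame=" count in ffmpeg's
summary should come out at roughly one fifth of the input frames, i.e.
select drops frames after datascope has already repainted them:

ffmpeg -f lavfi -i testsrc2=duration=5 -vf "datascope=mode=color2,select='eq(mod(n+1\,5)\,3)'" -f null -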


Do you agree, or am I mistaken?


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 11:23 AM, Paul B Mahol wrote:
-snip-

datascope appears if you switch order of interleave pads, or use
hstack/vstack filter instead of interleave filter.


Thank you, but I've had no difficulty using datascope. It does not appear in the output video if no 
frames flow through it. The fact that it does not appear means that no frames flowed through the 
branch of the filter complex into which I put it. I use that as a means to discover how ffmpeg works.
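
If the goal is only to see whether frames reach a given branch, an untested,
lighter-weight probe is metadata=mode=print, which logs each passing frame's
pts without repainting the picture the way datascope does (in.mkv and the
select expression are placeholders):

ffmpeg -i in.mkv -vf "select='eq(mod(n+1\,5)\,3)',metadata=mode=print" -f null -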


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Paul B Mahol
On 4/17/20, Mark Filipak  wrote:
> I reran the tests with these command lines:
>
> SET FFREPORT=file=FOO-GH.LOG:level=32
>
> ffmpeg -i %1 -filter_complex
> "telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend[D],[C][D]interleave"
> -map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-GH.MKV"
>
> SET FFREPORT=file=FOO-HG.LOG:level=32
>
> ffmpeg -i %1 -filter_complex
> "telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend[D],[C][D]interleave"
> -map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-HG.MKV"
>
> The datascope doesn't appear in either output, so frame 1 (zero-based) is
> not traversing [E][H]
> (upper command) or [E][G] (lower command).
>
> Therefore, I'm pretty confident that once frame 1 gets enqueued at [C], the
> filter chain is not
> recursed.
>

datascope appears if you switch order of interleave pads, or use
hstack/vstack filter instead of interleave filter.

> Another interesting thing is the behavior of 'blend'.
>
> If blend gets a hit to its 2nd input (but not the 1st) the total frames
> output = 479.
>
> If blend gets a hit to its 1st input (but not the 2nd) the total frames
> output = 594.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

I reran the tests with these command lines:

SET FFREPORT=file=FOO-GH.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-GH.MKV"


SET FFREPORT=file=FOO-HG.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-HG.MKV"


The datascope doesn't appear in either output, so frame 1 (zero-based) is not traversing [E][H] 
(upper command) or [E][G] (lower command).


Therefore, I'm pretty confident that once frame 1 gets enqueued at [C], the filter chain is not 
recursed.


Another interesting thing is the behavior of 'blend'.

If blend gets a hit to its 2nd input (but not the 1st) the total frames output 
= 479.

If blend gets a hit to its 1st input (but not the 2nd) the total frames output 
= 594.
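
For reference, an untested sketch of Paul's hstack suggestion against the
same material: hstack pairs frames by arrival order rather than by
timestamp, so the datascope branch shows up side by side even where
interleave would have suppressed it (in.mkv is a placeholder):

ffmpeg -i in.mkv -filter_complex "telecine=pattern=46,split[A][B],[A]select='eq(mod(n+1\,5)\,2)'[G],[B]select='eq(mod(n+1\,5)\,3)',datascope=size=1920x1080:x=45:y=340:mode=color2[H],[G][H]hstack" -f null -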

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 06:34 AM, Monex wrote:

On 17/04/2020 11:52, Mark Filipak wrote:

On 04/17/2020 05:48 AM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

On 04/17/2020 05:38 AM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-

That is not filter graph. It is your wrong interpretation of it.
I can not dechiper it at all, because your removed crucial info like ','


My filtergraph is slightly abbreviated to keep within a 70 character line
limit.

The important thing is to portray the logic of the filter chain.

I think you may be the only person who "can not dec_ip[h]er it".


Haha, you are the kind of person who never admits his own mistakes.


You apparently aren't monitoring other threads. Just a couple of minutes ago
I acknowledged that I
was wrong about DTS.

The difference between us, Paul, is that you engage in character
assassination. If you stopped
trying to assign motives and looked purely at behavior, I think you'd find
more joy.


So when will you admit a mistake in this thread?


When it's pointed out to me. By the way, the filter graph sketch I posted is 
not even complete. I left some stuff out that is not germane (again, to stay 
within 70 characters).


It is often requested on this list NOT to cut your command lines and/or console 
output.


Well then, I'm good. I've never cut my command lines or console output.


Most technical people are capable of following a line that is wrapped after 70 
characters. Escape your long lines (backslash) if necessary.


I've never seen that prior to this list. What does it do? When I've encountered escaped eols here, 
they didn't seem to do anything in my email program (Thunderbird). The lines were broken. I figured 
the backslashes must be put in by the poster's OS. Can you inform me or give me a link?



Please do not use complicated phrases or words like "germane" - many 
ffmpeg-users are not native English speakers and you are causing confusion; it is not 
necessary on a technical list.


Supposing that readers have no English dictionary, or access to an on-line dictionary, or access to 
Google translate, that leaves me responsible for monitoring my vocabulary. Are you serious? Do I 
have to somehow gauge my word usage? I'm always careful to use the best words and punctuation and 
sentence construction. I really don't know how to make you happy. You know, if someone doesn't 
understand what I write, they can always ask for clarification.



You are too verbose (noisy) in your posts. Verbosity is only useful in commands 
and console outputs, or when telling someone to stfu.


"too verbose" == "not cryptic"?

Your complaints are ridiculous.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Monex
On 17/04/2020 11:52, Mark Filipak wrote:
> On 04/17/2020 05:48 AM, Paul B Mahol wrote:
>> On 4/17/20, Mark Filipak  wrote:
>>> On 04/17/2020 05:38 AM, Paul B Mahol wrote:
 On 4/17/20, Mark Filipak  wrote:
> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
> -snip-
>> That is not filter graph. It is your wrong interpretation of it.
>> I can not dechiper it at all, because your removed crucial info like ','
>
> My filtergraph is slightly abbreviated to keep within a 70 character line
> limit.
>
> The important thing is to portray the logic of the filter chain.
>
> I think you may be the only person who "can not dec_ip[h]er it".

 Haha, you are kind of person that never admits own mistakes.
>>>
>>> You apparently aren't monitoring other threads. Just a couple of minutes ago
>>> I acknowledged that I
>>> was wrong about DTS.
>>>
>>> The difference between us, Paul, is that you engage in character
>>> assassination. If you stopped
>>> trying to assign motives and looked purely at behavior, I think you'd find
>>> more joy.
>>>
>>
>> So when you will admit mistake in this thread?
> 
> When it's pointed out to me. By the way, the filter graph sketch I posted is 
> not even complete. I left some stuff out that is not germane (again, to stay 
> within 70 characters).
>
>
It is often requested on this list NOT to cut your command lines and/or console 
output. Most technical people are capable of following a line that is wrapped 
after 70 characters. Escape your long lines (backslash) if necessary.

Please do not use complicated phrases or words like "germane" - many 
ffmpeg-users are not native English speakers and you are causing confusion; it 
is not necessary on a technical list.

You are too verbose (noisy) in your posts. Verbosity is only useful in commands 
and console outputs, or when telling someone to stfu.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak
Argh! I keep making mistakes. I'm working too quickly. You see, I had 2 differing versions: one 
using 'telecine=pattern=5' and one using 'telecine=pattern=46'. They tried to do the same thing, but 
by differing methods (differing filter graphs). It's really easy to get them mixed up.


Here is a sketch of the original graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[E]select='eq(mod(n+1\,5)\,2)'[G]blend[D]
             [F]select='eq(mod(n+1\,5)\,3)'[H]


Here is a sketch of the graph with 'blend' inputs reversed:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[E]select='eq(mod(n+1\,5)\,2)'[H]blend[D]
             [F]select='eq(mod(n+1\,5)\,3)'[G]

Here are the command lines (as a unified script):

: Run original filter chain
SET FFREPORT=file=FOO-GH.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-GH.MKV"


: Run reversed 'blend' inputs
SET FFREPORT=file=FOO-HG.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-HG.MKV"


Here are the logs in the same order:

ffmpeg started on 2020-04-17 at 06:35:28
Report written to "FOO-GH.LOG"
Log level: 32
Command line:
ffmpeg -i "M:\\Test Videos\\23.976p.mkv" -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\\,5)\\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\\,5)\\,2)'[G],[F]select='eq(mod(n+1\\,5)\\,3)'[H],[G][H]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\\AVOut\\FOO-GH.MKV"

ffmpeg version git-2020-04-03-52523b6 Copyright (c) 2000-2020 the FFmpeg 
developers
  built with gcc 9.3.1 (GCC) 20200328
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls 
--enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype 
--enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg 
--enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt 
--enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib 
--enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc 
--enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx 
--enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec 
--enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf

  libavutil  56. 42.102 / 56. 42.102
  libavcodec 58. 77.101 / 58. 77.101
  libavformat58. 42.100 / 58. 42.100
  libavdevice58.  9.103 / 58.  9.103
  libavfilter 7. 77.101 /  7. 77.101
  libswscale  5.  6.101 /  5.  6.101
  libswresample   3.  6.100 /  3.  6.100
  libpostproc55.  6.100 / 55.  6.100
Input #0, matroska,webm, from 'M:\Test Videos\23.976p.mkv':
  Metadata:
encoder : libebml v1.3.9 + libmatroska v1.5.2
creation_time   : 2020-04-04T03:44:24.00Z
  Duration: 00:00:10.01, start: 0.00, bitrate: 544 kb/s
Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 
23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)

Metadata:
  BPS-eng : 538378
  DURATION-eng: 00:00:10.01000
  NUMBER_OF_FRAMES-eng: 240
  NUMBER_OF_BYTES-eng: 673646
  _STATISTICS_WRITING_APP-eng: mkvmerge v41.0.0 ('Smarra') 64-bit
  _STATISTICS_WRITING_DATE_UTC-eng: 2020-04-04 03:44:24
  _STATISTICS_TAGS-eng: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
[Parsed_telecine_0 @ 01aa1b2384c0] Telecine pattern 46 yields up to 3 frames per frame, pts 
advance factor: 4/10

Stream mapping:
  Stream #0:0 (h264) -> telecine
  interleave -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[Parsed_telecine_0 @ 01aa18f0e700] Telecine pattern 46 yields up to 3 frames per frame, pts 
advance factor: 4/10

[libx264 @ 01aa18ee2d40] using SAR=1/1
[libx264 @ 01aa18ee2d40] MB rate (816000) > level limit (16711680)
[libx264 @ 01aa18ee2d40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
AVX FMA3 BMI2 AVX2
[libx264 @ 01aa18ee2d40] profile High, level 6.2, 4:2:0, 8-bit
[libx264 @ 01aa18ee2d40] 264 - core 159 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - 
http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex 
subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 
deadzone=21,11 fast_pskip=1 

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 03:56 AM, Michael Koch wrote:

Am 17.04.2020 um 09:44 schrieb Mark Filipak:

On 04/17/2020 02:41 AM, Michael Koch wrote:

Am 17.04.2020 um 08:02 schrieb Mark Filipak:

Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60 
transcode has concluded.

But remaining is an ffmpeg behavior that seems (to me) to be key to understanding ffmpeg 
architecture, to wit: The characteristics of frame traversal through a filter chain.


From a prior topic:
-
Filter graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

What I expected/hoped:

split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
     [B]split[D] _ 1 _ _ _ [F]blend[D] |
             [E] _ _ 2 _ _ [G]    blend of 1+2

What appears to be happening:

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ _ _ _ [F]blend[D]
             [E] _ _ 2 _ _ [G]

The behavior is as though because frame 1 (see Note) can take the [A][C] path, it does take it & 
that leaves nothing left to also take the [B][D][F] path, so blend never outputs.


Just an untested idea: what happens when you change the order of the inputs of the blend filter, 
first [G] and then [F]?


This would be an important topic for someone writing a book, eh?

I assume you mean this, Michael:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ 2 _ _ [F]blend[D]
             [E] _ _ _ _ _ [G]



no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave everything else 
as it is.

Michael


I found my old command lines in a log file.
=
: Run original filter chain
SET FFREPORT=file=FOO-HG.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-HG.MKV"


: Run reversed 'blend' inputs
SET FFREPORT=file=FOO-GH.LOG:level=32

ffmpeg -i %1 -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\AVOut\FOO-GH.MKV"

=
There is a difference. Here are the logs:

ffmpeg started on 2020-04-17 at 06:36:11
Report written to "FOO-HG.LOG"
Log level: 32
Command line:
ffmpeg -i "M:\\Test Videos\\23.976p.mkv" -filter_complex 
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\\,5)\\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\\,5)\\,2)'[G],[F]select='eq(mod(n+1\\,5)\\,3)'[H],[H][G]blend[D],[C][D]interleave" 
-map 0 -c:v libx264 -crf 20 -an -sn "C:\\AVOut\\FOO-HG.MKV"

ffmpeg version git-2020-04-03-52523b6 Copyright (c) 2000-2020 the FFmpeg 
developers
  built with gcc 9.3.1 (GCC) 20200328
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls 
--enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype 
--enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg 
--enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt 
--enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib 
--enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc 
--enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx 
--enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec 
--enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf

  libavutil  56. 42.102 / 56. 42.102
  libavcodec 58. 77.101 / 58. 77.101
  libavformat58. 42.100 / 58. 42.100
  libavdevice58.  9.103 / 58.  9.103
  libavfilter 7. 77.101 /  7. 77.101
  libswscale  5.  6.101 /  5.  6.101
  libswresample   3.  6.100 /  3.  6.100
  libpostproc55.  6.100 / 55.  6.100
Input #0, matroska,webm, from 'M:\Test Videos\23.976p.mkv':
  Metadata:
encoder : libebml v1.3.9 + libmatroska v1.5.2
creation_time   : 2020-04-04T03:44:24.00Z
  Duration: 00:00:10.01, start: 0.00, bitrate: 544 kb/s
Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 
23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)

Metadata:
  BPS-eng : 538378
  DURATION-eng: 00:00:10.01000
  NUMBER_OF_FRAMES-eng: 240
  NUMBER_OF_BYTES-eng: 673646
  

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak



On 04/17/2020 03:56 AM, Michael Koch wrote:

Am 17.04.2020 um 09:44 schrieb Mark Filipak:

On 04/17/2020 02:41 AM, Michael Koch wrote:

Am 17.04.2020 um 08:02 schrieb Mark Filipak:

Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60 
transcode has concluded.

But remaining is an ffmpeg behavior that seems (to me) to be key to understanding ffmpeg 
architecture, to wit: The characteristics of frame traversal through a filter chain.


From a prior topic:
-
Filter graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

What I expected/hoped:

split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
     [B]split[D] _ 1 _ _ _ [F]blend[D] |
             [E] _ _ 2 _ _ [G]    blend of 1+2

What appears to be happening:

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ _ _ _ [F]blend[D]
             [E] _ _ 2 _ _ [G]

The behavior is as though because frame 1 (see Note) can take the [A][C] path, it does take it & 
that leaves nothing left to also take the [B][D][F] path, so blend never outputs.


Just an untested idea: what happens when you change the order of the inputs of the blend filter, 
first [G] and then [F]?


This would be an important topic for someone writing a book, eh?

I assume you mean this, Michael:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ 2 _ _ [F]blend[D]
             [E] _ _ _ _ _ [G]



no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave everything else 
as it is.

Michael


That will have to be an experiment for you to try because I'm no longer using 'blend' and I've found 
a filter chain that's not so complex (i.e., not 2 levels of 'select'). For a book writer, reworking 
my original filter chain (posted in other topics) and making the change you suggest would be a 
worthy experiment. I mean, not recursing the filter chain for alternative paths when an existing path 
has succeeded, if that is factual, is a rather important ffmpeg architectural feature.
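
One untested way to test that proposition: put the same select expression on
both legs of the split and map each leg to its own null output; if the two
per-output "frame=" counts in the summary match, the same frames traversed
both branches (in.mkv is a placeholder):

ffmpeg -i in.mkv -filter_complex "telecine=pattern=46,split[A][B],[A]select='eq(mod(n+1\,5)\,2)'[G],[B]select='eq(mod(n+1\,5)\,2)'[H]" -map "[G]" -f null - -map "[H]" -f null -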


Let me know if you need me to run experiments. I'll try to reconstruct my 
original filter chain.

Regards,
Mark.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 05:48 AM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

On 04/17/2020 05:38 AM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-

That is not filter graph. It is your wrong interpretation of it.
I can not dechiper it at all, because your removed crucial info like ','


My filtergraph is slightly abbreviated to keep within a 70 character line
limit.

The important thing is to portray the logic of the filter chain.

I think you may be the only person who "can not dec_ip[h]er it".


Haha, you are the kind of person who never admits his own mistakes.


You apparently aren't monitoring other threads. Just a couple of minutes ago
I acknowledged that I
was wrong about DTS.

The difference between us, Paul, is that you engage in character
assassination. If you stopped
trying to assign motives and looked purely at behavior, I think you'd find
more joy.



So when will you admit a mistake in this thread?


When it's pointed out to me. By the way, the filter graph sketch I posted is not even complete. I 
left some stuff out that is not germane (again, to stay within 70 characters).


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Paul B Mahol
On 4/17/20, Mark Filipak  wrote:
> On 04/17/2020 05:38 AM, Paul B Mahol wrote:
>> On 4/17/20, Mark Filipak  wrote:
>>> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
>>> -snip-
 That is not filter graph. It is your wrong interpretation of it.
 I can not dechiper it at all, because your removed crucial info like ','
>>>
>>> My filtergraph is slightly abbreviated to keep within a 70 character line
>>> limit.
>>>
>>> The important thing is to portray the logic of the filter chain.
>>>
>>> I think you may be the only person who "can not dec_ip[h]er it".
>>
>> Haha, you are the kind of person who never admits his own mistakes.
>
> You apparently aren't monitoring other threads. Just a couple of minutes ago
> I acknowledged that I
> was wrong about DTS.
>
> The difference between us, Paul, is that you engage in character
> assassination. If you stopped
> trying to assign motives and looked purely at behavior, I think you'd find
> more joy.
>

So when will you admit a mistake in this thread?

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 05:38 AM, Paul B Mahol wrote:

On 4/17/20, Mark Filipak  wrote:

On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-

That is not filter graph. It is your wrong interpretation of it.
I can not dechiper it at all, because your removed crucial info like ','


My filtergraph is slightly abbreviated to keep within a 70 character line
limit.

The important thing is to portray the logic of the filter chain.

I think you may be the only person who "can not dec_ip[h]er it".


Haha, you are the kind of person who never admits his own mistakes.


You apparently aren't monitoring other threads. Just a couple of minutes ago I acknowledged that I 
was wrong about DTS.


The difference between us, Paul, is that you engage in character assassination. If you stopped 
trying to assign motives and looked purely at behavior, I think you'd find more joy.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Paul B Mahol
On 4/17/20, Mark Filipak  wrote:
> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
> -snip-
>> That is not filter graph. It is your wrong interpretation of it.
>> I can not dechiper it at all, because your removed crucial info like ','
>
> My filtergraph is slightly abbreviated to keep within a 70 character line
> limit.
>
> The important thing is to portray the logic of the filter chain.
>
> I think you may be the only person who "can not dec_ip[h]er it".

Haha, you are the kind of person who never admits his own mistakes.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-

That is not filter graph. It is your wrong interpretation of it.
I can not dechiper it at all, because your removed crucial info like ','


My filtergraph is slightly abbreviated to keep within a 70 character line limit.

The important thing is to portray the logic of the filter chain.

I think you may be the only person who "can not dec_ip[h]er it".

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Paul B Mahol
On 4/17/20, Mark Filipak  wrote:
> Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
> transcode has concluded.
>
> But remaining is an ffmpeg behavior that seems (to me) to be key to
> understanding ffmpeg
> architecture, to wit: The characteristics of frame traversal through a
> filter chain.
>
>  From a prior topic:
> -
> Filter graph:

That is not filter graph. It is your wrong interpretation of it.
I can not dechiper it at all, because your removed crucial info like ','

>
> split[A]select='not(eq(mod(n+1\,5)\,3))'   [C]interleave
>   [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
>   [E]select='eq(mod(n+1\,5)\,3)'[G]
>
> What I expected/hoped:
>
> split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
>   [B]split[D] _ 1 _ _ _ [F]blend[D]   |
>   [E] _ _ 2 _ _ [G]   blend of 1+2
>
> What appears to be happening:
>
> split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
>   [B]split[D] _ _ _ _ _ [F]blend[D]
>   [E] _ _ 2 _ _ [G]
>
> The behavior is as though because frame 1 (see Note) can take the [A][C]
> path, it does take it &
> that leaves nothing left to also take the [B][D][F] path, so blend never
> outputs.
> -
> (Note: I originally wrote "frame n+1==1" but that was an error.)
>
> I assume that frame numbers are assigned at the input of the filter chain as
> frames are encountered,
> and that the following actions occur.
> Frame 0: Traverses [A][C] and is enqueued at [C].
> Frame 1: Traverses [A][C] and is enqueued at [C] (see Proposition).
> Frame 2: Traverses [B][E][G] and is enqueued at [G].
> Frame 3: Traverses [A][C] and is enqueued at [C].
> Frame 4: Traverses [A][C] and is enqueued at [C].
>
> Proposition: Frame 1 could also traverse [B][D][F] and be enqueued at [F]
> but since it's already
> enqueued at [C], it does not do so.
>
> Specifically, it appears that ffmpeg does not recurse the filter chain for
> frames that are already
> enqueued, thus Frame 1 is not enqueued at [F], thus 'blend' doesn't activate
> when Frame 2 arrives at
> [G], thus Frame 2 is never enqueued at [D] and never appears in the output
> of 'interleave'.
>
> Is what I've written correct? Authoritative confirmation or correction of
> this architectural detail
> is desired.
>
> Regards,
> Mark.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Ted Park
Hi,

> no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave everything 
> else as it is.


I thought the latter was the intended order (or maybe it's just the order my 
brain read it in). The other one results in a ton of duplicate timestamp errors, 
and the correction cancels something out; it looks closer to the original 
24/1.001.
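
As an untested follow-up on those duplicate-timestamp errors (in.mkv is a
placeholder): appending fps after interleave re-times the stream to a
constant 60000/1001, dropping or duplicating frames according to their
timestamps, so the collisions show up as concrete dup/drop counts instead of
warnings:

ffmpeg -i in.mkv -filter_complex "telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)'[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend[D],[C][D]interleave,fps=60000/1001" -f null -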

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Michael Koch

Am 17.04.2020 um 09:44 schrieb Mark Filipak:

On 04/17/2020 02:41 AM, Michael Koch wrote:

Am 17.04.2020 um 08:02 schrieb Mark Filipak:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) 
p24-to-p60 transcode has concluded.


But remaining is an ffmpeg behavior that seems (to me) to be key to 
understanding ffmpeg architecture, to wit: The characteristics of 
frame traversal through a filter chain.


From a prior topic:
-
Filter graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

What I expected/hoped:

split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
     [B]split[D] _ 1 _ _ _ [F]blend[D] |
             [E] _ _ 2 _ _ [G]    blend of 1+2

What appears to be happening:

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ _ _ _ [F]blend[D]
             [E] _ _ 2 _ _ [G]

The behavior is as though because frame 1 (see Note) can take the 
[A][C] path, it does take it & that leaves nothing left to also take 
the [B][D][F] path, so blend never outputs.


Just an untested idea: what happens when you change the order of the 
inputs of the blend filter, first [G] and then [F]?


This would be an important topic for someone writing a book, eh?

I assume you mean this, Michael:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ 2 _ _ [F]blend[D]
             [E] _ _ _ _ _ [G]



no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave 
everything else as it is.


Michael


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Mark Filipak

On 04/17/2020 02:41 AM, Michael Koch wrote:

Am 17.04.2020 um 08:02 schrieb Mark Filipak:

Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60 
transcode has concluded.

But remaining is an ffmpeg behavior that seems (to me) to be key to understanding ffmpeg 
architecture, to wit: The characteristics of frame traversal through a filter chain.


From a prior topic:
-
Filter graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

What I expected/hoped:

split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
     [B]split[D] _ 1 _ _ _ [F]blend[D] |
             [E] _ _ 2 _ _ [G]    blend of 1+2

What appears to be happening:

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ _ _ _ [F]blend[D]
             [E] _ _ 2 _ _ [G]

The behavior is as though because frame 1 (see Note) can take the [A][C] path, it does take it & 
that leaves nothing left to also take the [B][D][F] path, so blend never outputs.


Just an untested idea: what happens when you change the order of the inputs of the blend filter, 
first [G] and then [F]?


This would be an important topic for someone writing a book, eh?

I assume you mean this, Michael:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ 2 _ _ [F]blend[D]
             [E] _ _ _ _ _ [G]

Yes, I've done that, not because I thought it would make a difference, but because it just happens 
to have been my first configuration.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Michael Koch

Am 17.04.2020 um 08:02 schrieb Mark Filipak:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) 
p24-to-p60 transcode has concluded.


But remaining is an ffmpeg behavior that seems (to me) to be key to 
understanding ffmpeg architecture, to wit: The characteristics of 
frame traversal through a filter chain.


From a prior topic:
-
Filter graph:

split[A]select='not(eq(mod(n+1\,5)\,3))'[C]interleave
     [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
             [E]select='eq(mod(n+1\,5)\,3)'[G]

What I expected/hoped:

split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
     [B]split[D] _ 1 _ _ _ [F]blend[D] |
             [E] _ _ 2 _ _ [G]    blend of 1+2

What appears to be happening:

split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
     [B]split[D] _ _ _ _ _ [F]blend[D]
             [E] _ _ 2 _ _ [G]

The behavior is as though because frame 1 (see Note) can take the 
[A][C] path, it does take it & that leaves nothing left to also take 
the [B][D][F] path, so blend never outputs.


Just an untested idea: what happens when you change the order of the 
inputs of the blend filter, first [G] and then [F]?


Michael
