Re: [FFmpeg-user] Overlay images to frames in video

2021-04-07 Thread pdr0
Rainer M. Krug-2 wrote
> Hi
> 
> First post, so apologies for any forgotten info.
> 
> 
> I have a video with the following metadata:
> 
> ```
> $ ffprobe ./1.pre-processed.data/bemovi/20210208_00097.avi
> ffprobe version 4.3.2 Copyright (c) 2007-2021 the FFmpeg developers
>   built with Apple clang version 12.0.0 (clang-1200.0.32.29)
>   configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.2_4 --enable-shared
> --enable-pthreads --enable-version3 --enable-avresample --cc=clang
> --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls
> --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d
> --enable-libmp3lame --enable-libopus --enable-librav1e
> --enable-librubberband --enable-libsnappy --enable-libsrt
> --enable-libtesseract --enable-libtheora --enable-libvidstab
> --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264
> --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma
> --enable-libfontconfig --enable-libfreetype --enable-frei0r
> --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb
> --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq
> --enable-libzimg --disable-libjack --disable-indev=jack
> --enable-videotoolbox
>   libavutil      56. 51.100 / 56. 51.100
>   libavcodec     58. 91.100 / 58. 91.100
>   libavformat    58. 45.100 / 58. 45.100
>   libavdevice    58. 10.100 / 58. 10.100
>   libavfilter     7. 85.100 /  7. 85.100
>   libavresample   4.  0.  0 /  4.  0.  0
>   libswscale      5.  7.100 /  5.  7.100
>   libswresample   3.  7.100 /  3.  7.100
>   libpostproc    55.  7.100 / 55.  7.100
> Input #0, avi, from './1.pre-processed.data/bemovi/20210208_00097.avi':
>   Metadata:
> encoder : Lavf58.45.100
>   Duration: 00:00:12.50, start: 0.000000, bitrate: 91831 kb/s
> Stream #0:0: Video: png (PNG  / 0x20474E50), pal8(pc), 2048x2048,
> 92565 kb/s, 10 fps, 10 tbr, 10 tbn, 10 tbc
> Metadata:
>   title   : FileAVI write  
> ```
> 
> In addition, I have 125 images (jpg, but I can as easily create them as
> png) which contain some labelling of the individual frames. The particles
> are moving, the images are different.
> 
> Now I want to overlay the images over the corresponding frames.
> 
> What is the easiest way to do this? I could convert them to a movie, then
> overlay these two, but I have the feeling I could do this in one step?
> 
> Any suggestions?
> 
> Thanks,
> 
> Rainer


Overlay them where? What x,y position? What are the dimensions of your jpg
sequence?

jpg does not support an alpha channel (transparency information). If you
overlay the jpg sequence over the base video layer, and assuming they are
the same dimensions, you will "cover up" the video layer entirely. You will
not see the video, only the jpg sequence.

When you say you can "create them as PNG", does the original source have an
alpha channel?
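For reference, a one-step overlay of a numbered image sequence can be done
without converting it to a movie first. A sketch only - the label filename
pattern, the 10 fps input rate, and the 0:0 position are assumptions, not
details from the original post:

```
# Sketch: overlay a numbered PNG sequence (with alpha) frame-by-frame.
# label_%03d.png, the 10 fps rate, and the 0:0 position are assumed.
# 125 images at 10 fps = 12.5 s, matching the video duration.
ffmpeg -i 20210208_00097.avi -framerate 10 -i label_%03d.png \
       -filter_complex "[0:v][1:v]overlay=0:0" -c:v png out.avi
```

If the labels are the same 2048x2048 size as the video, only the transparent
areas of the PNGs will let the video show through.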






--
Sent from: http://ffmpeg-users.933282.n4.nabble.com/
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] TGA File color looks incorrect when converted to H.264

2021-04-06 Thread pdr0
Craig L. wrote
> Hi, I am running the following command to convert this red TGA image 
> into a video.
> 
> However, the finished video color does not match the TGA images color.
> 
> Is there something I can do to ensure that the color comes out right?
> 
> I plan on also adding another video stream to this FYI.
> 
> 
> Here is my command:
> 
> /usr/local/bin/ffmpeg -loop 1  -i  FULL.tga  -aspect 1:1    -c:v 
> libx264    -profile:v high -pix_fmt yuv420p -level 5.1  -vsync 1 -async 
> 1  -crf 14   -t 5.000   -r 30    -y  finished.mp4
> 
> 
> Here is a link to download the original image: (Not sure if i could 
> attach it to this email)
> 
> https://drive.google.com/file/d/1V3mZNR7srcUPiNq8dvb4q44oUqhEej6O/view?usp=sharing
> 
> Here is a link to the finished result from the above command.
> 
> https://drive.google.com/file/d/1f3qFLgDKg5kLen1FJzp-C-Xb3yGeHWhc/view?usp=sharing
> 
> 
> 
> Here is the ffmpeg response:
> 
> /usr/local/bin/ffmpeg -loop 1  -i  FULL.tga  -aspect 1:1    -c:v 
> libx264    -profile:v high -pix_fmt yuv420p -level 5.1  -vsync 1 -async 
> 1  -crf 14   -t 5.000   -r 30    -y  finished.mp4
> ffmpeg version 4.3.1-static https://johnvansickle.com/ffmpeg/ Copyright 
> (c) 2000-2020 the FFmpeg developers
>    built with gcc 8 (Debian 8.3.0-6)
>    configuration: --enable-gpl --enable-version3 --enable-static 
> --disable-debug --disable-ffplay --disable-indev=sndio 
> --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r 
> --enable-gnutls --enable-gmp --enable-libgme --enable-gray 
> --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf 
> --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb 
> --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband 
> --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis 
> --enable-libopus --enable-libtheora --enable-libvidstab 
> --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp 
> --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d 
> --enable-libxvid --enable-libzvbi --enable-libzimg
>    libavutil  56. 51.100 / 56. 51.100
>    libavcodec 58. 91.100 / 58. 91.100
>    libavformat    58. 45.100 / 58. 45.100
>    libavdevice    58. 10.100 / 58. 10.100
>    libavfilter 7. 85.100 /  7. 85.100
>    libswscale  5.  7.100 /  5.  7.100
>    libswresample   3.  7.100 /  3.  7.100
>    libpostproc    55.  7.100 / 55.  7.100
> Input #0, image2, from 'FULL.tga':
>    Duration: 00:00:00.04, start: 0.000000, bitrate: 699843 kb/s
>      Stream #0:0: Video: targa, bgr24, 1080x1080, 25 tbr, 25 tbn, 25 tbc
> Stream mapping:
>    Stream #0:0 -> #0:0 (targa (native) -> h264 (libx264))
> Press [q] to stop, [?] for help
> [libx264 @ 0x72b9f00] using SAR=1/1
> [libx264 @ 0x72b9f00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
> AVX FMA3 BMI2 AVX2 AVX512
> [libx264 @ 0x72b9f00] profile High, level 5.1, 4:2:0, 8-bit
> [libx264 @ 0x72b9f00] 264 - core 161 r3018 db0d417 - H.264/MPEG-4 AVC 
> codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - 
> options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 
> psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 
> 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 
> threads=34 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 
> interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 
> b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 
> keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf 
> mbtree=1 crf=14.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 
> aq=1:1.00
> Output #0, mp4, to 'finished.mp4':
>    Metadata:
>      encoder : Lavf58.45.100
>      Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 
> 1080x1080 [SAR 1:1 DAR 1:1], q=-1--1, 30 fps, 15360 tbn, 30 tbc
>      Metadata:
>    encoder : Lavc58.91.100 libx264
>      Side data:
>    cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
> frame=   84 fps=0.0 q=20.0 size=   0kB time=00:00:00.10 bitrate=   
> 3.8kbits/frame=  150 fps=0.0 q=-1.0 Lsize=  10kB time=00:00:04.90 
> bitrate=  17.3kbits/s dup=25 drop=0 speed=6.24x
> video:8kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB 
> muxing overhead: 33.530972%
> [libx264 @ 0x72b9f00] frame I:1 Avg QP: 1.00  size:   273
> [libx264 @ 0x72b9f00] frame P:38    Avg QP: 1.21  size:    53
> [libx264 @ 0x72b9f00] frame B:111   Avg QP: 4.67  size:    45
> [libx264 @ 0x72b9f00] consecutive B-frames:  1.3%  0.0%  0.0% 98.7%
> [libx264 @ 0x72b9f00] mb I  I16..4: 100.0%  0.0%  0.0%
> [libx264 @ 0x72b9f00] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4: 0.0%  
> 0.0%  0.0%  0.0%  0.0%    skip:100.0%
> [libx264 @ 0x72b9f00] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 0.0%  
> 0.0%  0.0%  direct: 0.0%  skip:100.0%
> [libx264 @ 0x72b9f00] 8x8 transform intra:0.0%
> [libx264 @ 0x72b9f00] coded y,uvDC,uvAC intra: 0.0% 0.0% 0.0% inter: 
> 0.0% 0.0% 

Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 13:40, pdr0 wrote:
>> 
>> This zip file example has the original 24000/1001, weighted frame
>> blending
>> to 120000/1001, and decimation to 60000/1001 - is this something close to
>> what you had in mind ?
>> https://www.mediafire.com/file/qj819m3vctx4o4q/blends_example.zip/file
> 
> Thanks for that (...I wish I knew how you are making those...).
> convertfps_119.88(blends).mp4 actually looks to be the better choice for
> my 60Hz TV -- the TV is 
> interpolating well -- but I think the weighting could be tweaked (which is
> something I planned to do 
> once my filter complex was working properly).

I'm using avisynth, because it has preset functions for everything. e.g.
ConvertFPS does blended framerate conversions - it's 1 line; you don't have
to break it down into select, merge, interleave. But I'll post a
vapoursynth template for you to reproduce it, and you can experiment with
different weights - since you've used vapoursynth before, and you won't be
bothered by PTS issues or frames out of order.

On a 60Hz display, those two files should look identical because of the way
they were made. The 60Hz display only displays every 2nd frame of the
120000/1001 fps video.




Mark Filipak (ffmpeg) wrote
> On 2021-04-01 13:40, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> What I'm trying to do is make a 120000/1001fps cfr in which each frame
>>> is
>>> a proportionally weighted
>>> pixel mix of the 24 picture-per-second original:
>>> A B AAABB AABBB A.
>>> I'm sure it would be way better than standard telecine -- zero judder --
>>> and I'm pretty sure it
>>> would be so close to motion vector interpolation that any difference
>>> would
>>> be imperceptible. I'm
>>> also sure that it would be a much faster process than mvinterpolate. The
>>> only question would be
>>> resulting file size (though I think they would be very close).
>> 
>> Is this 120000/1001 CFR intended for a 60Hz display? ...
> 
> Yes, or a 120Hz display.
> 
> Please, correct me if I'm wrong.
> 
> The 120fps frames are conveying 24 pictures-per-second, i.e. 5 discrete
> frames per picture with the 
> 1st of each set of 5 being an identical duplicate of the original (e.g.
> A), so down converting 
> to 60fps is not a simple case of dropping alternate frames (i.e. 5/2 is
> not an integer).
> The lines below are: 24FPS / 120FPS / 60FPS (BY ALTERNATE FRAME DROP).
> A.B.C.D...
> A.B.AAABB.AABBB.A.B.C.BBBCC.BBCCC.B.C.D.CCCDD.CCDDD.C.D...
> A.AAABB.A.C.BBCCC.C.CCCDD.C.E.DDEEE.E.EEEFF.E.G.FFGGG.G...
> 
> Note how, in the 60FPS stream, there is no B or D or F frames.
> That continues and any 
> loss of 'pure' frames would probably be noticeable. ...UPDATE: The loss is
> noticeable (as flicker).

Looks correct - that's the same thing going on with the
59.94(blends,decimation) file - every 2nd frame is selected (selecteven)

That's also what's going on when you watch the 120000/1001 file on most 60Hz
displays.



> My cheap 60Hz TV accepts 120fps and it apparently interpolates during down
> conversion to 60fps. I 
> assume that's true of all 60Hz TVs because, given that the frames are sent
> to the TV as raw frames 
> via HDMI, doing pixel interpolation in real time within the TV is a snap,
> so all 60Hz TVs probably 
> do it fine.

In what way does it "interpolate"?

If it's doing anything other than dropping frames, there must be additional
processing in your TV set - and "additional processing" is definitely not
standard for "cheap" sets.  The majority of 60Hz displays will only display
every 2nd frame, no interpolation. 



>  
> 
>> "proportionally weighted pixel mix" it sounds like a standard frame
>> blended
>> conversion. eg. You drop a 23.976p asset on a 120000/1001fps timeline in
>> a
>> NLE and enable frame blending. Or in avisynth it would be
>> ConvertFPS(120000,1001)
> 
> Well, I don't know what "NLE" means, so you'll need to enlighten me. 

NLE is a "non-linear editor", used to edit videos. Blend conversions are
one standardized form of conversion. For example, many TV stations used to
perform this sort of blend conversion to/from different framerates (some
still do). It's frowned upon in many circles. There are pros and cons.



> First, let me redefine A.B.AAABB.AABBB.A.B as
> 5A0B.4A1B.3A2B.2A3B.1A4B.0A5B.

i.e. 5-frame cycles, linear interpolation of weights. This is the same
pattern that was posted i

Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0
Mark Filipak (ffmpeg) wrote
> 
> Is this another documentation problem?
> 
> https://ffmpeg.org/ffmpeg-filters.html#fps-1
> "11.88 fps
> Convert the video to specified constant frame rate by duplicating or
> dropping frames as necessary."
> 
> I want to duplicate (specifically, double and only double) all frames. And
> I want to avoid any 
> dropping. I guess the key is: What does 'as neccessary' mean?
> 
> Like so much of the documentation, it's vague.
> 
> That 'said', I've seen fps drop frames that had slightly 'late' PTSs.

It achieves the desired framerate by adding or dropping frames. If your
timestamps are "off", the expected results will be "off" as well.

If you have buggy input timestamps, another option might be the setts
bitstream filter that was committed recently:

https://ffmpeg.org/ffmpeg-bitstream-filters.html#setts
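As a rough illustration of what a setts invocation can look like (filenames
are placeholders; check the linked documentation for the exact option names
and expression variables supported by your build):

```
# Sketch: rewrite video PTS while stream-copying, e.g. re-zeroing timestamps
ffmpeg -i input.mkv -c copy -bsf:v setts=pts=PTS-STARTPTS output.mkv
```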






Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0
Mark Filipak (ffmpeg) wrote
> What I'm trying to do is make a 120000/1001fps cfr in which each frame is
> a proportionally weighted 
> pixel mix of the 24 picture-per-second original:
> A B AAABB AABBB A.
> I'm sure it would be way better than standard telecine -- zero judder --
> and I'm pretty sure it 
> would be so close to motion vector interpolation that any difference would
> be imperceptible. I'm 
> also sure that it would be a much faster process than mvinterpolate. The
> only question would be 
> resulting file size (though I think they would be very close).

Is this 120000/1001 CFR intended for a 60Hz display? Do you decimate by 2 to
make it 60000/1001 after?

"proportionally weighted pixel mix" sounds like a standard frame blended
conversion. eg. You drop a 23.976p asset on a 120000/1001fps timeline in an
NLE and enable frame blending. Or in avisynth it would be
ConvertFPS(120000,1001)

This zip file example has the original 24000/1001, weighted frame blending
to 120000/1001, and decimation to 60000/1001 - is this something close to
what you had in mind ?
https://www.mediafire.com/file/qj819m3vctx4o4q/blends_example.zip/file

The "textbook" fps conversion categories are 1) duplicates 2) blends 3)
optical flow (such as minterpolate). Each has various pros/cons
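Those three textbook categories map onto ffmpeg filters roughly as follows
(a sketch; the in/out filenames and the 119.88 fps target are placeholders):

```
# 1) duplicates: fps    2) blends: framerate    3) optical flow: minterpolate
ffmpeg -i in.mkv -vf "fps=120000/1001"              dup.mkv
ffmpeg -i in.mkv -vf "framerate=fps=120000/1001"    blend.mkv
ffmpeg -i in.mkv -vf "minterpolate=fps=120000/1001" flow.mkv
```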

For what you are considering (blends), the frames will be "evenly spaced" in
time - technically less judder - but there will be "strobing" and a blurry
loss of quality (from the frame blending). Every 5th frame is an "original"
frame, at original quality, that "snaps" into view. To reduce that jarring
effect, some people offset the timing (resample all frames instead of
keeping every 5th original); the same applies to optical flow - resample all
frames instead of keeping T=0.0 (shift to T=0.1)

The "best" is to get a 120Hz/300Hz judder-free display :) And if you're in
that group that likes motion interpolation, a motion interpolation display






Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 11:41, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>>>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>>>> less than 0.1%.
>>>>
>>>> The filter complex is thinned down to just this:
>>>> settb=1/720000,showinfo
>>>>
>>>> Here is selected lines from the showinfo report (with   ...comments):
>>>>
>>>> [Parsed_showinfo_1 @ 0247d719ef00] config in time_base: 1/720000,
>>>> frame_rate: 24000/1001
>>>>      ...So, deltaPTS (calculated: 1/TB/FR) should be 30030.
>>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   1 pts:  30240   ...should
>>>> be
>>>> 30030
>>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   2 pts:  59760   ...should
>>>> be
>>>> 60060
>>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   3 pts:  90000   ...should
>>>> be
>>>> 90090
>>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   4 pts: 120240   ...should
>>>> be
>>>> 120120
>>>>
>>>> The PTS variance is 0.7%.
>>>>
>>>> Why are PTS values different from what's expected?
>>>>
>>>> Note: If I force deltaPTS via setpts=N*30030, then of course I get
>>>> what's
>>>> expected.
>>>>
>>>> Thanks. This is critical and your explanation is greatly appreciated!
>>>> Mark.
>>>
>>> UPDATE
>>>
>>> If I change the filter complex to this:
>>>
>>> settb=1/720000,setpts=N*30030,fps=fps=48000/1001,showinfo
>>>
>>> all my follow-on processing goes straight into the toilet.
>>>
>>> Explanation of the factors in the filter complex:
>>> settb=1/720000   ...mandate 1.3[8..] µs time resolution
>>> setpts=N*30030   ...force the input to exactly 24000/1001fps cfr
>>> fps=fps=48000/1001   ...frame double
>>>
>>> However, fps=fps=48000/1001 does more than just frame double. It resets
>>> TB
>>> to 20.8541[6..] ms time
>>> resolution. Look:
>>>
>>> [Parsed_showinfo_3 @ 01413bf0ef00] config in time_base: 1001/48000,
>>> frame_rate: 48000/1001
>>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   0 pts:  0
>>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   1 pts:  1
>>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   2 pts:  2
>>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   3 pts:  3
>>>
>>> Gee, I wish the fps filter documentation said that it changes TB and sets
>>> deltaPTS to '1'.
>>>
>>> My follow-on frame processing can't tolerate 20.8541[6..] ms time
>>> resolution -- that explains why my
>>> mechanical frame gymnastics have been failing!
>>>
>>> Explanation: My follow-on processing does fractional frame adjustment
>>> that
>>> requires at least
>>> 8.341[6..] ms resolution.
>>>
>>> Workaround: I can frame double by another method that's somewhat ugly
>>> but
>>> that I know works and
>>> doesn't trash time resolution.
>> 
>> Did you try changing the order? ie. -vf fps first ?
> 
> Before the 'settb=1/720000,setpts=N*30030'? That wouldn't be appropriate
> because I need to guarantee 
> that the input is forced to 24000/1001fps cfr, first. Only then will
> fps=fps=48000/1001 actually 
> double each frame without dropping any -- without such assurance, if any
> particular frame happens to 
> have a PTS that's 'faster' than 24000/1001fps, then the shift to
> 48000/1001fps would drop it because 
> the fps filter works solely at the frame level.




That's what -vf fps=24000/1001 does. It forces 24000/1001 CFR. Use it first.

I'm sure it was mentioned in one of your other threads.

normal MKV
pts: 42 pts_time:0.042
pts: 83 pts_time:0.083
pts:125 pts_time:0.125


-vf fps=24000/1001
pts:  1 pts_time:0.0417083
pts:  2 pts_time:0.0834167
pts:  3 pts_time:0.125125


-vf "fps=24000/1001,settb=1/720000,setpts=N*30030"
pts:  30030 pts_time:0.0417083
pts:  60060 pts_time:0.0834167
pts:  90090 pts_time:0.125125 
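The pts_time values in these listings are just n*1001/24000, which can be
checked with plain arithmetic, no ffmpeg needed:

```shell
# pts_time for frames 1-3 at 24000/1001 fps CFR; matches the listings above
awk 'BEGIN { for (n = 1; n <= 3; n++) printf "%.7f\n", n * 1001 / 24000 }'
```

(0.1251250 printed to seven places is the same 0.125125 shown above.)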





Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0

On 2021-04-01 11:41, pdr0 wrote:
> Mark Filipak (ffmpeg) wrote
>> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>>> less than 0.1%.
>>>
>>> The filter complex is thinned down to just this: settb=1/720000,showinfo
>>>
>>> Here is selected lines from the showinfo report (with   ...comments):
>>>
>>> [Parsed_showinfo_1 @ 0247d719ef00] config in time_base: 1/720000,
>>> frame_rate: 24000/1001
>>>      ...So, deltaPTS (calculated: 1/TB/FR) should be 30030.
>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   1 pts:  30240   ...should be
>>> 30030
>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   2 pts:  59760   ...should be
>>> 60060
>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   3 pts:  90000   ...should be
>>> 90090
>>> [Parsed_showinfo_1 @ 0247d719ef00] n:   4 pts: 120240   ...should be
>>> 120120
>>>
>>> The PTS variance is 0.7%.
>>>
>>> Why are PTS values different from what's expected?
>>>
>>> Note: If I force deltaPTS via setpts=N*30030, then of course I get
>>> what's
>>> expected.
>>>
>>> Thanks. This is critical and your explanation is greatly appreciated!
>>> Mark.
>>
>> UPDATE
>>
>> If I change the filter complex to this:
>>
>> settb=1/720000,setpts=N*30030,fps=fps=48000/1001,showinfo
>>
>> all my follow-on processing goes straight into the toilet.
>>
>> Explanation of the factors in the filter complex:
>> settb=1/720000   ...mandate 1.3[8..] µs time resolution
>> setpts=N*30030   ...force the input to exactly 24000/1001fps cfr
>> fps=fps=48000/1001   ...frame double
>>
>> However, fps=fps=48000/1001 does more than just frame double. It resets
>> TB
>> to 20.8541[6..] ms time
>> resolution. Look:
>>
>> [Parsed_showinfo_3 @ 01413bf0ef00] config in time_base: 1001/48000,
>> frame_rate: 48000/1001
>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   0 pts:  0
>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   1 pts:  1
>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   2 pts:  2
>> [Parsed_showinfo_3 @ 01413bf0ef00] n:   3 pts:  3
>>
>> Gee, I wish the fps filter documentation said that it changes TB and sets
>> deltaPTS to '1'.
>>
>> My follow-on frame processing can't tolerate 20.8541[6..] ms time
>> resolution -- that explains why my
>> mechanical frame gymnastics have been failing!
>>
>> Explanation: My follow-on processing does fractional frame adjustment
>> that
>> requires at least
>> 8.341[6..] ms resolution.
>>
>> Workaround: I can frame double by another method that's somewhat ugly but
>> that I know works and
>> doesn't trash time resolution.
> 
> Did you try changing the order? ie. -vf fps first ?

Before the 'settb=1/720000,setpts=N*30030'? That wouldn't be appropriate
because I need to guarantee 
that the input is forced to 24000/1001fps cfr, first.
>

That's what -vf fps=24000/1001 does. It forces 24000/1001 CFR. Use it first.

I'm sure it was mentioned in one of your other threads.

-vf fps=24000/1001
pts_time:0.0417083
pts_time:0.0834167
pts_time:0.125125

normal MKV
pts_time:0.042
pts_time:0.083
pts_time:0.125 








Re: [FFmpeg-user] Why are PTS values different from what's expected?

2021-04-01 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>> less than 0.1%.
>> 
>> The filter complex is thinned down to just this: settb=1/720000,showinfo
>> 
>> Here is selected lines from the showinfo report (with   ...comments):
>> 
>> [Parsed_showinfo_1 @ 0247d719ef00] config in time_base: 1/720000,
>> frame_rate: 24000/1001
>>     ...So, deltaPTS (calculated: 1/TB/FR) should be 30030.
>> [Parsed_showinfo_1 @ 0247d719ef00] n:   1 pts:  30240   ...should be
>> 30030
>> [Parsed_showinfo_1 @ 0247d719ef00] n:   2 pts:  59760   ...should be
>> 60060
>> [Parsed_showinfo_1 @ 0247d719ef00] n:   3 pts:  90000   ...should be
>> 90090
>> [Parsed_showinfo_1 @ 0247d719ef00] n:   4 pts: 120240   ...should be
>> 120120
>> 
>> The PTS variance is 0.7%.
>> 
>> Why are PTS values different from what's expected?
>> 
>> Note: If I force deltaPTS via setpts=N*30030, then of course I get what's
>> expected.
>> 
>> Thanks. This is critical and your explanation is greatly appreciated!
>> Mark.
> 
> UPDATE
> 
> If I change the filter complex to this:
> 
> settb=1/720000,setpts=N*30030,fps=fps=48000/1001,showinfo
> 
> all my follow-on processing goes straight into the toilet.
> 
> Explanation of the factors in the filter complex:
> settb=1/720000   ...mandate 1.3[8..] µs time resolution
> setpts=N*30030   ...force the input to exactly 24000/1001fps cfr
> fps=fps=48000/1001   ...frame double
> 
> However, fps=fps=48000/1001 does more than just frame double. It resets TB
> to 20.8541[6..] ms time 
> resolution. Look:
> 
> [Parsed_showinfo_3 @ 01413bf0ef00] config in time_base: 1001/48000,
> frame_rate: 48000/1001
> [Parsed_showinfo_3 @ 01413bf0ef00] n:   0 pts:  0
> [Parsed_showinfo_3 @ 01413bf0ef00] n:   1 pts:  1
> [Parsed_showinfo_3 @ 01413bf0ef00] n:   2 pts:  2
> [Parsed_showinfo_3 @ 01413bf0ef00] n:   3 pts:  3
> 
> Gee, I wish the fps filter documentation said that it changes TB and sets
> deltaPTS to '1'.
> 
> My follow-on frame processing can't tolerate 20.8541[6..] ms time
> resolution -- that explains why my 
> mechanical frame gymnastics have been failing!
> 
> Explanation: My follow-on processing does fractional frame adjustment that
> requires at least 
> 8.341[6..] ms resolution.
> 
> Workaround: I can frame double by another method that's somewhat ugly but
> that I know works and 
> doesn't trash time resolution.

Did you try changing the order? ie. -vf fps first ?
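As a sanity check on the quoted derivation (deltaPTS = 1/TB/FR): taking the
30030 delta and the 24000/1001 frame rate stated above, the time base
denominator they imply falls out with integer arithmetic:

```shell
# deltaPTS = 1/(TB*FR)  =>  tb_den = deltaPTS * fr_num / fr_den
fr_num=24000; fr_den=1001; delta_pts=30030
tb_den=$(( delta_pts * fr_num / fr_den ))
echo "$tb_den"
```

This prints 720000, i.e. a 30030 PTS step at 24000/1001 fps corresponds to a
1/720000 time base.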





Re: [FFmpeg-user] filter script file

2021-03-21 Thread pdr0
Mark Filipak (ffmpeg) wrote
> Is the format of a filter script file documented anywhere? I can't find
> any.
> 
> Working command is:
> 
> ffmpeg -i source.mkv -filter_script:v test.filter_script -map 0 -codec:v
> libx265 -codec:a copy 
> -codec:s copy -dn test.mkv
> 
> If the test.filter_script file contains this:
> 
> settb=expr=1/720000,setpts=N*24024,fieldmatch,yadif=deint=interlaced,telecine=pattern=4
> 
> it works, but if the test.filter_script file contains this:
> 
> settb=expr=1/720000,setpts=N*24024,fieldmatch, \
> yadif=deint=interlaced,telecine=pattern=4
> 
> the transcode fails.
> 
> [AVFilterGraph @ 027205e29d80] No such filter: '
> yadif'
> 
> Obviously, the ' \' isn't working.


On Windows, in a text file you don't need a line-continuation character -
just hit enter for a new line (remove the "\")
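A minimal illustration (the filtergraph content is just an example): write
the script with plain line breaks and no trailing backslash, and the graph
parser treats the newlines as ordinary whitespace:

```shell
# A filter script split across plain lines - no "\" continuation needed
cat > test.filter_script <<'EOF'
fieldmatch,
yadif=deint=interlaced,
telecine=pattern=4
EOF
# Joined back together, it is the same one-line graph:
tr -d '\n' < test.filter_script
```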






Re: [FFmpeg-user] fieldmatch "marked as interlaced" -- doesn't work

2021-03-18 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-03-18 01:55, pdr0 wrote:
> 
>> https://www.mediafire.com/file/m46kc4p1uvt7ae3/cadence_tests.zip/file
> 
> Thanks again. I haven't tested my filters on cadence.mp4 yet to see if
> they work as expected.
> 
> How did you make cadence.mp4? Did you use ffmpeg to make it? Or did you
> use something else?
> 
> If you somehow used ffmpeg, would you share the command line?


The original animation (I'm sure you've seen variations of this video before
for testing) was done in After Effects. (It is very simple and could be
produced in any number of free, open source programs such as natron,
blender, shotcut etc...)

The re-organization into fields and cadence patterns was done in avisynth.

ffmpeg libx264 was used for the encoding.







Re: [FFmpeg-user] fieldmatch "marked as interlaced" -- doesn't work

2021-03-17 Thread pdr0
Mark Filipak (ffmpeg) wrote
> I hoped that "marked as interlaced" [1] meant that
> 
> 'select=expr=not(eq(interlace_type\,TOPFIRST)+eq(interlace_type\,BOTTOMFIRST))'
> [2]
> 
> would work. However, the 'select' doesn't work. I'm counting on the
> 'select' working -- not working 
> is a complete show stopper.
> 
> Is there some other species of "marked as interlaced" that will make the
> 'select' work?
> 
> Thanks,
> Mark.
> 
> [1] From https://ffmpeg.org/ffmpeg-filters.html#fieldmatch
> "The separation of the field matching and the decimation is notably
> motivated by the possibility of inserting a de-interlacing filter
> fallback between the two. If the source has mixed telecined and real
> interlaced content, fieldmatch will not be able to match fields for
> the interlaced parts. But these remaining combed frames will be
> *marked as interlaced*, and thus can be de-interlaced by a later
> filter such as yadif before decimation."
> 
> [2] From https://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect
> "interlace_type (video only)
> "The frame interlace type. It can assume one of the following values:
> "PROGRESSIVE
> "The frame is progressive (not interlaced).
> "TOPFIRST
> "The frame is top-field-first.
> "BOTTOMFIRST
> "The frame is bottom-field-first."



Try using combmatch=full for fieldmatch

In this zip file is a sample test video "cadence.mp4". It has 23.976p
content, 29.97i (59.94 fields/sec interlaced) content, and 29.97p content,
all in a 29.97i stream. (There are many other cadences, but those are the 3
most common.)
https://www.mediafire.com/file/m46kc4p1uvt7ae3/cadence_tests.zip/file

In this example for -vf select after fieldmatch, the 2 branches are
"progressive" and "not progressive". You can experiment with split and
various processing with interleave in the filter chain

progressive frames after fieldmatch
ffmpeg -i cadence.mp4 -filter_complex "fieldmatch=combmatch=full,
select='eq(interlace_type\,PROGRESSIVE)'" -c:v libx264 -crf 18 -an
fieldmatch_combmatchfull_prog.mkv

not progressive frames after fieldmatch
ffmpeg -i cadence.mp4 -filter_complex "fieldmatch=combmatch=full,
select='not(eq(interlace_type\,PROGRESSIVE))'" -c:v libx264 -crf 18 -an
fieldmatch_combmatchfull_notprog.mkv -y
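The two select branches above are for inspecting which frames fieldmatch
leaves combed. For an actual IVTC of the telecined sections, the usual
single-chain form - with the yadif fallback between matching and decimation,
as the fieldmatch documentation describes - looks like this (filenames as
above; decimate assumes a 3:2 cadence in those sections):

```
ffmpeg -i cadence.mp4 -vf "fieldmatch=combmatch=full,yadif=deint=interlaced,decimate" \
       -c:v libx264 -crf 18 -an ivtc.mkv
```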










Re: [FFmpeg-user] Question regarding .gif thumbnailing

2021-03-17 Thread pdr0
FFmpeg-users mailing list wrote
> You mean the master version without checking out the n4.3.2 release?
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday 17 March 2021 16:40, pdr0 <pdr0@> wrote:
> 
>> FFmpeg-users mailing list wrote
>>
>> > Trying your command in my console gives the exact same "buggy" gif
>> > thumbnail I sent in my first email. Here's a screenshot of my console's
>> > process: https://files.catbox.moe/m2xldm.png
>> > ‐‐‐ Original Message ‐‐‐
>> > On Wednesday 17 March 2021 16:07, pdr0 <pdr0@> wrote:
>> >
>> > > FFmpeg-users mailing list wrote
>> > >
>> > > > I'm using 4.3.2 built from source.
>> > >
>> > > maybe there is a regression ? post your console output
>> > > https://i.postimg.cc/htBxJ81m/output.gif
>> > > ffmpeg -i frrsev.gif -filter_complex "scale=250:250, split[a][b]; [a]
>> > >
>> palettegen=reserve_transparent=on:transparency_color=ff[p],[b][p]paletteuse"
>> > > output.gif
>>
>> I'm on windows but I can reproduce your issue with the 4.3.2 release
>> version. You need to use the git version

Yes, use the "git master" branch, which has all the bug fixes. The 4.3.2
"release" stable branch lags behind by about 6 months and still has that old
gif transparency disposal bug.





Re: [FFmpeg-user] Question regarding .gif thumbnailing

2021-03-17 Thread pdr0
FFmpeg-users mailing list wrote
> Trying your command in my console gives the exact same "buggy" gif
> thumbnail I sent in my first email. Here's a screenshot of my console's
> process: https://files.catbox.moe/m2xldm.png
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday 17 March 2021 16:07, pdr0 

> pdr0@

>  wrote:
> 
>> FFmpeg-users mailing list wrote
>>
>> > I'm using 4.3.2 built from source.
>>
>> maybe there is a regression ? post your console output
>>
>> https://i.postimg.cc/htBxJ81m/output.gif
>>
>> ffmpeg -i frrsev.gif -filter_complex "scale=250:250, split[a][b]; [a]
>> palettegen=reserve_transparent=on:transparency_color=ff[p],[b][p]paletteuse"
>> output.gif
>>
>>
>> 

I'm on Windows, but I can reproduce your issue with the 4.3.2 release
version. You need to use the git version.





Re: [FFmpeg-user] Question regarding .gif thumbnailing

2021-03-17 Thread pdr0
FFmpeg-users mailing list wrote
> I'm using 4.3.2 built from source.


Maybe there is a regression? Post your console output.

https://i.postimg.cc/htBxJ81m/output.gif

ffmpeg -i frrsev.gif -filter_complex "scale=250:250, split[a][b]; [a]
palettegen=reserve_transparent=on:transparency_color=ff[p],[b][p]paletteuse"
output.gif




Re: [FFmpeg-user] Question regarding .gif thumbnailing

2021-03-17 Thread pdr0
FFmpeg-users mailing list wrote
> Hello. I'm using ffmpeg to generate thumbnails for my JavaScript web
> project. Now I'm having a little problem with .gif thumbnails. Original
> gif: https://files.catbox.moe/frrsev.gif my software's thumbnail:
> https://files.catbox.moe/3rtv3o.gif
> 
> As you can see, the thumbnail is not perfect and kind of buggy. This only
> happens with transparent .gifs. The filter i'm using is: [0:v]
> scale=${thumbSizeFilter},split [a][b]; [a]
> palettegen=reserve_transparent=on:transparency_color=ff [p]; [b][p]
> paletteuse
> 
> Any ideas on how to improve it? Thanks.

This works OK for me; it was fixed a while back. Make sure you use a recent
git version.




Re: [FFmpeg-user] Recorded Frame Timestamps are Inconsistent! How to Fix it?

2021-03-15 Thread pdr0
Hassan wrote
> Hello,
> 
> I am using ffmpeg on a Windows 10 machine and I want to record the desktop
> at a high frame rate while appending accurate timestamps to each frame.
> I am recording my desktop using the following command:
> 
> ffmpeg -f gdigrab -framerate 60 -i desktop -vf "settb=AVTB,
> setpts='trunc(PTS/1K)*1K+st(1,trunc(RTCTIME/1K))-1K*trunc(ld(1)/1K)',
> drawtext=fontfile=ArialBold.ttf:fontsize=40:fontcolor=white:text='%{localtime}.%{eif\:1M*t-1K*trunc(t*1K)\:d}:box=1:boxborderw=20:boxcolor=black@1.0:x=10:y=10'"
> -c:v libx264rgb -crf 0 -preset ultrafast output.mkv
> 
> The long text next to -vf flag is used to append timestamp (date and
> current time in milliseconds) on the top left corner of the frame with
> black background.
> 
> The issue is that, ideally, when I am recording at 60 FPS, each subsequent
> frame should have a timestamp with an increment of 16.66 msec. However,
> the
> timestamp is not incremented as such. Instead, it stays the same on a lot
> of frames and then changes.
> 
> For example, when I break the video into frames, the frame titled
> "img0428.png" has the timestamp 18:44:16.828 (hh:mm:ss.millisec)
> [image: image.png].
> Then, for the next 40 frames, it stays the same. On file "img0469.png", the
> timestamp changes and becomes 18:44:17.510.
> [image: image.png]
> So, the timestamp changed after 41 frames and the time difference is 682
> milliseconds. Ideally, each of the 40 frames between these two frames
> should carry an incremental timestamp by a step size of 16.66 msec but
> this
> is not happening.
> 
> Therefore, my questions are as follows:
> 1. Am I using the right method to append timestamps to the recorded
> frames?
> 2. What is the reason that the timestamping on the frames is not correct?
> 3. How can I fix this issue?
> 4. What are the alternate methods to append accurate epoch timestamps to
> each of the recorded frames?
> 
> Please guide me. Thanks.
> 
> -- 
> Regards
> Hassan Iqbal


gdigrab is not very optimized.

Your hardware might not be fast enough to capture the desktop at 60 fps,
resulting in dropped frames and thus wrong times. Look at the fps readout in
the console output for a rough idea of the processing speed on your system.
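One way to sanity-check this, sketched in plain Python with made-up values (not tied to ffmpeg's internals): at a true 60 fps the delta between drawn timestamps should be roughly 16.67 ms, so runs of identical stamps and large jumps, like the 682 ms gap described, indicate duplicated or dropped frames upstream of the drawtext filter:

```python
# Toy check: detect repeated/dropped capture timestamps.
# ts_ms is a hypothetical list of per-frame wall-clock stamps (ms).
ts_ms = [0, 17, 17, 17, 683, 700, 716]

deltas = [b - a for a, b in zip(ts_ms, ts_ms[1:])]
expected = 1000 / 60                     # ~16.67 ms per frame at 60 fps
stalls = [d for d in deltas if d > 2 * expected]  # long gaps = drops
dupes = deltas.count(0)                  # identical stamps = duplicates
print(deltas, stalls, dupes)
```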






Re: [FFmpeg-user] Wanted: Fields-to-frames filter that does not add cosmetics

2021-03-05 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-03-05 11:13, James Darnley wrote:
>> On 05/03/2021, Mark Filipak (ffmpeg) 

> markfilipak@

>  wrote:
>>> I seek a fields-to-frames filter that does not add cosmetics. In my
>>> pursuit,
>>> I look for such a
>>> filter every time I peruse the filter docs for anything. I've yet to
>>> find
>>> such a filter.
>>>
>>> Do you know of a fields-to-frames filter that does not add cosmetics?
>> 
>> separatefields - "splits each frame into its components fields,
>> producing a new half height clip with twice the frame rate and twice
>> the frame count"
> 
> Yes, I could do (and have done) that, followed by 'shuffleframes=00',
> followed by 'tinterlace' [1]. 
> But that seems like a lengthy (slow) way to do what should be a simple
> (faster) thing [2].
> 
> [1] [A+b] ==> [A][b] ==> [A][A] ==> [A+A]
> [2] [A+b] ==> [A+A]
> 
> 
> If you're curious about what I'm doing, look:
> [A+a][B+c][C+d][D+d][D+d]   ...SOURCE is a (consistent) mess [3]
> [A+A][a+a][B+B][c+c][C+C][d+d][D+D][d+d][D+D][d+d]  
> ...yadif=mode=send_field
> [A+A][a+a][B+B][B+B][C+C][c+c][D+D][d+d]   ...shuffleframes=0 1 2 2 4 3 6
> 7 -1 -1
> [A+a][B+B][C+c][D+d]   ...tinterlace=mode=interleave_bottom to make TARGET
> [4]
> 
> [3] Telecined (=30fps) ==> frame 1 discard (=24fps) ==> frame 3 repeat
> (=30fps).
> [4] The TARGET is beautiful. No cosmetic filtering needed (or possible).
> 
> 
> [A+a][B+c][C+d][D+d][D+d]   ...SOURCE
> [A][a][B][c][C][d][D][d][D][d]   ...separatefields
> [A][a][B][B][C][c][D][d]   ...shuffleframes=0 1 2 2 4 3 6 7 -1 -1
> [A+a][B+B][C+c][D+d]   ...weave=first_field=top to make TARGET
> 
> Hmmm... That appears to work. I'll try it.
> 
> I guess I got stuck on using tinterlace as the last step and couldn't see
> that separatefields & 
> weave would be simpler (and faster) than yadif and without yadif's
> cosmetics.
> 
> Thanks!


Yes, yadif is not the right filter for what you're doing, because of the
spatial interpolation. Yadif is a deinterlacer, and as a general rule you
don't deinterlace progressive content (content with matching field pairs),
or you'll degrade it.
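The separatefields -> shuffleframes -> weave chain quoted above can be checked at the label level with a small Python simulation. The shuffleframes semantics assumed here (indices map each group of N input frames; -1 discards) are a simplification of the filter's documented behavior:

```python
# Label-level simulation of: separatefields -> shuffleframes -> weave.
def shuffleframes(frames, mask):
    out = []
    for g in range(0, len(frames), len(mask)):
        group = frames[g:g + len(mask)]
        for idx in mask:
            if idx != -1 and idx < len(group):
                out.append(group[idx])
    return out

fields = list("AaBcCdDdDd")                    # separatefields on the SOURCE
mask = [0, 1, 2, 2, 4, 3, 6, 7, -1, -1]
shuffled = shuffleframes(fields, mask)
print(shuffled)                                # A a B B C c D d
woven = [shuffled[i] + "+" + shuffled[i + 1]   # weave pairs back into frames
         for i in range(0, len(shuffled), 2)]
print(woven)
```

The woven output reproduces the [A+a][B+B][C+c][D+d] TARGET from the quoted example.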


Mark Filipak (ffmpeg) wrote
> [4] The TARGET is beautiful. No cosmetic filtering needed (or possible).

A cosmetic filter is wanted, and possible, on B+B.

B is an orphaned field, missing its partner "b". B+B is going to be full
of aliasing/stairstepping. The field interpolation algorithm used to
generate the pseudo "b" makes a difference. For example, -vf nnedi=field=t
applied selectively to that B+B frame will look substantially better, almost
like a fake B+b. A temporally filtered B+b (using data from A+a and C+c),
such as with QTGMC in VapourSynth or AviSynth, will look better than either.
If you want demos or more info, let me know.



Re: [FFmpeg-user] Wanted: Fields-to-frames filter that does not add cosmetics

2021-03-05 Thread pdr0
Mark Filipak (ffmpeg) wrote
> 'yadif=mode=send_field' is one way to convert fields to frames at the same
> frame size and twice the 
> FR. It does it by repeating fields, but it also adds cosmetics -- it is,
> after all, a motion 
> interpolation filter.
> 
> I seek a fields-to-frames filter that does not add cosmetics. In my
> pursuit, I look for such a 
> filter every time I peruse the filter docs for anything. I've yet to find
> such a filter.
> 
> Do you know of a fields-to-frames filter that does not add cosmetics?

Yadif is not a motion interpolation filter.

Motion interpolation implies resampling new data points in time - such as
optical flow (e.g. minterpolate or svpflow), where new "in-between" frames
are inserted using motion vectors. In contrast, yadif's interpolation is 1)
spatial, not temporal, and 2) existing fields are not resampled in time
when converted to frames; i.e. the motion characteristics are the same as
the input. E.g. a 59.94 fields/s interlaced source still has 59.94 samples/s
progressive in the output, not some other retimed number, and 23.976
samples/s in 3:2 pulldown still yields 23.976 samples/s progressive in the
output, with triplicates and duplicates. It's the same motion
characteristics at the same temporal positions - there is no "interpolation"
of motion.

It sounds like you want a bob filter with simple spatial interpolation, such
as line doubling (e.g. nearest neighbor)? When you have missing scan lines,
there are always some "cosmetics". You can't get something from nothing:
some type of spatial interpolation +/- temporal interpolation (using data
from adjacent fields) is always involved to fill in the missing scan lines.
Clarify how you want this to be done.
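To make the "you can't get something from nothing" point concrete, here is a toy Python sketch (made-up luma values, not any filter's actual code) of the two simplest spatial fills for the missing scan lines of a field:

```python
# Toy bob: a field keeps every other scan line; the missing lines must
# be invented. Two simple spatial fills: repeat (nearest neighbor) and
# average of the neighboring kept lines.
field = [[10, 10], [40, 40], [90, 90]]    # 3 kept scan lines (fake luma)

doubled = []
for row in field:                         # nearest neighbor: repeat each line
    doubled.extend([row, row])

averaged = []
for i, row in enumerate(field):           # average fill between kept lines
    averaged.append(row)
    nxt = field[i + 1] if i + 1 < len(field) else row
    averaged.append([(a + b) // 2 for a, b in zip(row, nxt)])

print(len(doubled), averaged[1])          # both give 6 lines; fills differ
```

Either way the invented lines are guesses, which is the "cosmetics" being discussed; better algorithms (nnedi, QTGMC) just guess more intelligently.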

Or, did you want field matching with decimation instead ?  eg. reconstruct
the original progressive frames that were organized in fields with pulldown
pattern , and decimating the duplicates. AKA "inverse telecine" ?

What kind of "fields to frames" did you want ?






Re: [FFmpeg-user] SSIM filter showing (small) differences in identical(?) streams

2021-02-26 Thread pdr0
Ian Pilcher wrote
> I am trying to understand how the SSIM and VMAF filters work, with an
> eye to finding the "best" compression settings for a video which will
> be composed from a series of TIFF images.  Unfortunately, I'm stuck at
> the beginning, as I can't get the SSIM filter to behave as expected.
> 
> ./source contains the original sequence of images.
> 
>   $ tiffinfo source/00.tif
>   TIFF Directory at offset 0x473108 (4665608)
> Image Width: 1440 Image Length: 1080
> Bits/Sample: 8
> Sample Format: unsigned integer
> Compression Scheme: None
> Photometric Interpretation: RGB color
> Samples/Pixel: 3
> Rows/Strip: 1
> Planar Configuration: single image plane
> 
> I attempt to create a lossless video of the first minute.
> 
>   $ ffmpeg -start_number 0 -framerate 6/1001 -i source/%06d.tif -t 
> 00:01:00 -c:v huffyuv lossless.mkv
> 
> The result appears reasonable.
> 
>   $ mediainfo lossless.mkv
>   General
>   Unique ID: 
> 235140899628261703308032414639716345340
> (0xB0E67D6EF6B78362D9BCF9EA3080A5FC)
>   Complete name: lossless.mkv
>   Format   : Matroska
>   Format version   : Version 4
>   File size: 8.54 GiB
>   Duration : 59 s 994 ms
>   Overall bit rate : 1 223 Mb/s
>   Writing application  : Lavf58.45.100
>   Writing library  : Lavf58.45.100
>   ErrorDetectionType   : Per level 1
> 
>   Video
>   ID   : 1
>   Format   : HuffYUV
>   Format version   : Version 2
>   Codec ID : V_MS/VFW/FOURCC / HFYU
>   Duration : 59 s 994 ms
>   Bit rate : 1 199 Mb/s
>   Width: 1 440 pixels
>   Height   : 1 080 pixels
>   Display aspect ratio : 4:3
>   Frame rate mode  : Constant
>   Frame rate   : 59.940 FPS
>   Color space  : RGB
>   Bit depth: 8 bits
>   Scan type: Progressive
>   Bits/(Pixel*Frame)   : 12.860
>   Stream size  : 8.37 GiB (98%)
>   Writing library  : Lavc58.91.100 huffyuv
>   Default  : Yes
>   Forced   : No
> 
> Now let's see what the SSIM filter says.
> 
>   $ ffmpeg -i lossless.mkv -start_number 0 -framerate 6/1001 -i 
> source/%06d.tif -t 00:01:00 -filter_complex ssim -f null -
>   ...
>   Input #0, matroska,webm, from 'lossless.mkv':
> Metadata:
>   ENCODER : Lavf58.45.100
> Duration: 00:00:59.99, start: 0.00, bitrate: 1223104 kb/s
>   Stream #0:0: Video: huffyuv (HFYU / 0x55594648), bgr0, 1440x1080, 
> 59.94 fps, 59.94 tbr, 1k tbn, 1k tbc (default)
>   Metadata:
> ENCODER : Lavc58.91.100 huffyuv
> DURATION: 00:00:59.99400
>   Input #1, image2, from 'source/%06d.tif':
> Duration: 01:47:16.00, start: 0.00, bitrate: N/A
>   Stream #1:0: Video: tiff, rgb24, 1440x1080, 59.94 tbr, 59.94 tbn, 
> 59.94 tbc
>   ...
>   [Parsed_ssim_0 @ 0x55fb09c738c0] not matching timebases found between 
> first input: 1/1000 and second input 1001/6, results may be incorrect!
>   ...
>   [Parsed_ssim_0 @ 0x55fb09c738c0] SSIM R:0.833774 (7.793009) G:0.835401 
> (7.835723) B:0.831058 (7.722615) All:0.833411 (7.783532)
> 
> That's not what I expected.  My understanding is that the R, G, B, and
> All values should all be "1.00 (inf)".
> 
> The "not matching timebases" warning is the obvious thing to look at.
> After much searching, I came upon the -video_track_timescale option, but
> it seems to only take an integer, and 60 is not the same as 59.94, so it
> seems that I simply can't directly compare a video stream with a non-
> integer framerate to an image sequence.
> 
> As a workaround, I tried extracting the lossless video frames as a
> separate image sequence.
> 
>   $ ffmpeg -i lossless.mkv -start_number 0 lossless/%06d.png
> 
> This created the expected sequence of image files (00.png -
> 003595.png).  Since I have both the "source" and the "lossless" streams
> as images sequences, I can use ImageMagick to compare them.
> 
>   $ for I in `seq -w 0 3595` ; do compare -metric AE source/00${I}.tif 
> lossless/00${I}.png /tmp/diff.png 2>/dev/null || echo $I ; done
> 
> This produces no output, indicating that ImageMagick thinks that the
> TIFF files in ./source and the PNG files in ./lossless contain
> completely identical image data.  What does the SSIM filter say?

Re: [FFmpeg-user] How can I force a 360kHz time base?

2021-02-26 Thread pdr0
Mark Filipak (ffmpeg) wrote
> 
> Currently, the ffmpeg internal time base appears to be 1kHz. 

No. Again, there is no ffmpeg "inherent" 1 ms time base. I answered this at
doom9 already. I also posted in your other thread demonstrating the
timestamp results with the same video in vob vs. mkv. I suggest you prove it
to yourself - because your posting of the same misinformation does not help.
http://ffmpeg.org/pipermail/ffmpeg-user/2021-February/052118.html

Again, the problem is your input video's MKV container timebase: 1/1000 s.

Try it with vob, mpeg-ts, or mpeg-ps: 1/90000 s.



> It doesn't have to be constant. It doesn't have to be 360kHz. What I'm
> suggesting is that if it *is* 360kHz, that would make a single time base
> that works for everything. I'm not suggesting that it be non-modifiable.

You're suggesting to the wrong people. tbn is a function of the container
timebase. You'd have to re-write the MPEG2-TS, MPEG-PS, MKV, MOV, MP4, etc.
ISO specs. ffmpeg is just following the specs.

If the intermediate filter calculations could use another, finer timebase,
OK, that might be useful for some situations - but you'd have to
recalculate the original timestamps. You could not use the original data,
which was already limited by the existing container timebase.

> I hoped that setting 'settb=expr=1/360000' would produce what I want, but
> it didn't because ffmpeg's inherent 1 millisecond time base resolution
> superseded it. No matter what 'settb' is specified, the PTS resolution is
> going to be 1 millisecond.

The precision is already lost when you start with MKV. Setting the tb
afterwards does not change the original values or add back precision that
was already lost.

E.g. if you start with an actual timestamp of 0.001, how can you "guess"
that the real value should have been 0.00127846336564? You can't, directly.
You can infer values, or recalculate values using a given framerate, but
those are not the actual timestamps in the actual file.

In order to salvage an MKV starting point, you'd have to use -vf fps to
reissue the timestamps. Or don't start with MKV.
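The irreversibility is simple arithmetic, sketched here in Python with exact fractions (an illustration of the quantization, not ffmpeg code):

```python
from fractions import Fraction

# Quantize exact 30000/1001 fps presentation times to a 1/1000 s
# (MKV-style) timebase, then compare: the sub-millisecond part is gone
# and nothing in the quantized values lets you reconstruct it.
fps = Fraction(30000, 1001)
exact = [n / fps for n in range(4)]                      # true times
mkv = [Fraction(round(t * 1000), 1000) for t in exact]   # 1 ms ticks
print([float(t) for t in exact])   # 0.0, ~0.03337, ~0.06673, 0.1001
print([float(t) for t in mkv])     # 0.0, 0.033, 0.067, 0.1
```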



> The source video is a 5 second MKV clip from a commercial DVD.
> 
>>> [Parsed_showinfo_1 @ 0211128f2340] n:   1 pts:  11880 pts_time:0.033  
>>> pos:10052 
>>> fmt:yuv420p sar:32/27 s:240x236 i:P iskey:0 type:B checksum:3CF10BFE
>>> plane_checksum:[64208370 

Interesting - s:240x236 would be a bizarre resolution for a DVD.





Re: [FFmpeg-user] PTS resolution

2021-02-23 Thread pdr0
Mark Filipak (ffmpeg) wrote
> In contrast, my best information so far is that, at least out of the
> encoder, ffmpeg encodes frames with PTS resolution = 1ms.

Not true.

Check the timestamps at each step: decoding, prefilter, postfilter after
each filter, postencode. If you need to check timestamps in between filters,
use -vf showinfo.

If you export using a container timebase of 1/1000 s, such as MKV, then yes,
you are limited because of the container timebase. That has nothing to do
with ffmpeg. If you've "ripped" a DVD using MakeMKV for input into ffmpeg,
then you start with a timebase of 1/1000 s. That's not ffmpeg's fault
either.

The container timebase for an MPEG2-PS is 90 kHz (i.e. 1/90000 s); with
ffmpeg -i input.vob, tbn is 90k:

tb:1/90000 fr:30000/1001 sar:8/9
pts_time:0
pts_time:0.0333556
pts_time:0.0667222
pts_time:0.100089

(it's still wrong - it should have been:)
tb:1/90000 fr:30000/1001 sar:8/9
pts_time:0
pts_time:0.0333667
pts_time:0.0667333
pts_time:0.1001

But the point is that the "resolution" is finer than 1 ms from decoding.

If you use DVD-in-MKV input instead, the container timebase reduces that
finer resolution to 1 ms:
tb:1/1000 fr:19001/317 sar:8/9
pts_time:0
pts_time:0.033
pts_time:0.067
pts_time:0.1
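The "should have been" values above fall out as exact whole ticks at the 90 kHz timebase, which this small Python check illustrates (an arithmetic illustration, not ffmpeg code):

```python
from fractions import Fraction

# At the MPEG 90 kHz timebase, one 30000/1001 fps frame lasts exactly
# 90000 * 1001 / 30000 = 3003 ticks, so the ideal pts_time values need
# no rounding at all in a 1/90000 s container.
tb = Fraction(1, 90000)
ticks_per_frame = Fraction(1001, 30000) / tb      # exactly 3003
pts_time = [float(n * ticks_per_frame * tb) for n in range(4)]
print(int(ticks_per_frame), pts_time)
```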








Re: [FFmpeg-user] PTS resolution

2021-02-23 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-02-23 00:41, Carl Zwanzig wrote:
> -snip-
>> If you're starting with mpeg-ps or -ts, ...
> 
> There's no such thing as PTS in mpeg-ts. The transport stream sets the SCR
> (System Clock Reference) 
> (aka TB) but the PTSs are in the presentation stream, stored as integer
> ticks of the SCR.

There is no such thing as /external/ timestamps, or container timestamps,
for MPEG-TS that govern the timing. Nor are there external timestamps in
any CFR container format such as AVI. For AVI, MPEG-TS, and MPEG-PS, the
content can be VFR (using field or frame repeats), but the container is CFR
only. Coded frames cannot have variable display times within a GOP.




> I've been told (at doom9.org) that MKV (which is a TS) stores PTSs but I
> find that hard to believe.

Then read the MKV documentation, section 16, "External Timestamp Files".

MKV is a container format. "TS" is usually reserved to denote MPEG2-TS
("TS" for Transport Stream, such as .ts, .m2ts, .mts), not used loosely for
any container stream.

FFmpeg and modern video container formats have to be able to handle VFR
without coding field or frame repeats - external timestamps are how PTS are
controlled. This is what video players use to control playback. When you
extract PTS from MKV using mkvextract or ffprobe, that's what you're
getting: an external timestamps file. When you multiplex a timestamps file
in with mkvmerge, that's what you're using: an external timestamps file.
MP4, MOV, and FLV also use this method.
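For concreteness, here is a Python sketch that generates an mkvmerge-style external timestamps file. The layout (a "format v2" header line followed by one presentation time in milliseconds per frame) is assumed from mkvmerge's documentation; older tool versions call the header "timecode format v2", so check your tool before relying on this:

```python
from fractions import Fraction

# Sketch of an mkvmerge-style "timestamp format v2" file: a header line,
# then one presentation time in ms per frame. Assumed layout; verify
# against your mkvmerge version's documentation.
def timestamps_v2(num_frames, fps):
    lines = ["# timestamp format v2"]
    for n in range(num_frames):
        ms = Fraction(n, 1) / fps * 1000     # exact per-frame time in ms
        lines.append(f"{float(ms):.6f}")
    return "\n".join(lines)

out = timestamps_v2(3, Fraction(24000, 1001))   # 23.976 fps example
print(out)
```

A VFR file would simply list unevenly spaced values; that list, not any per-frame repeat flags, is what governs playback timing in MKV.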



Re: [FFmpeg-user] shuffleframes -- unexpected results

2021-02-21 Thread pdr0
pdr0 wrote
> Mark Filipak (ffmpeg) wrote
>> On 2021-02-21 21:39, pdr0 wrote:
>>> Mark Filipak (ffmpeg) wrote
>>>> I've run some test cases for the shuffleframes filter. I've documented
>>>> shuffleframes via my
>>>> preferred documentation format (below). In the course of my testing, I
>>>> found 2 cases (marked "*
>>>> Expected", below) that produced unexpected results, to wit: If the 1st
>>>> frame is discarded, the
>>>> entire input is discarded, even if the 1st discard is followed by
>>>> frames
>>>> that are supposed to be
>>>> retained.
>>>>
>>>> -1 1...Blocks the pipeline (discards all frames).
>>>> *
>>>> Expected 1 3 5 7 ..
>>>> -1 1 2   ...Blocks the pipeline (discards all frames).
>>>> * Expected 1 2 4 5 7 8 ..
>>> 
>>> These 2 cases produce the expected result for me
>>> 
>>> If the "entire input is discarded" - do you mean you get no output file
>>> at
>>> all ?
>> 
>> No output file at all. The transcodes complete (they don't hang awaiting
>> end-of-stream) but since 
>> the pipeline contains no frames, the encoder makes nothing and ffmpeg
>> makes no files.
>> 
>>> Post more info, including the console log
>> 
>> First, for context, the scripts:
>> ffmpeg -i 0.mkv -vf shuffleframes="-1" -dn "\-1.mkv"
>> ffmpeg -i 0.mkv -vf shuffleframes="-1 1" -dn "\-1 1.mkv"
>> ffmpeg -i 0.mkv -vf shuffleframes="-1 1 2" -dn "\-1 1 2.mkv"
>> 
>> I had to escape the '-' in the filenames in order to avoid this
>> complaint:
>> "Unrecognized option '1 1 2.mkv'.
>> "Error splitting the argument list: Option not found"
>> 
>> Perhaps it would be expeditious if you showed me your command line that
>> works, eh?
> 
> I didn't need to escape 
> 
> ffmpeg -i input.avs -vf shuffleframes="-1 1" -c:v libx264 -crf 20 -an
> out1.mkv
> ffmpeg -i input.avs -vf shuffleframes="-1 1 2" -c:v libx264 -crf 20 -an
> out2.mkv
> 
> input.avs is a 24.0fps blankclip with showframenumber() to overlay the
> framenumbers so I could examine the output later. 
> 
> (I noticed mp4 container does not work properly for this, there are
> duplicated frames, but mkv output is ok)
> 
> 
> 
> 
> Mark Filipak (ffmpeg) wrote
>> Oh dear. Is there a escaping row between Windows and ffmpeg? Here's the
>> logfile:
>> ffmpeg started on 2021-02-21 at 11:50:29
>> Report written to "ffmpeg-20210221-115029.log"
>> Log level: 32
>> Command line:
>> ffmpeg -i 0.mkv -vf "shuffleframes=-1 1 2" -dn "\\-1 1 2.mkv"
>> ffmpeg version N-100851-g9f38fac053 Copyright (c) 2000-2021 the FFmpeg
>> developers
>>built with gcc 9.3-win32 (GCC) 20200320
>>configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static
>> --pkg-config=pkg-config 
>> --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32
>> --enable-gpl --enable-version3 
>> --disable-debug --disable-w32threads --enable-pthreads --enable-iconv
>> --enable-zlib --enable-libxml2 
>> --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma
>> --enable-fontconfig 
>> --enable-opencl --enable-libvmaf --enable-vulkan --enable-libvorbis
>> --enable-amf --enable-libaom 
>> --enable-avisynth --enable-libdav1d --enable-libdavs2 --enable-ffnvcodec
>> --enable-cuda-llvm 
>> --enable-libglslang --enable-libass --enable-libbluray
>> --enable-libmp3lame
>> --enable-libopus 
>> --enable-libtheora --enable-libvpx --enable-libwebp --enable-libmfx
>> --enable-libopencore-amrnb 
>> --enable-libopencore-amrwb --enable-libopenjpeg --enable-librav1e
>> --enable-librubberband 
>> --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt
>> --enable-libsvtav1 
>> --enable-libtwolame --enable-libuavs3d --enable-libvidstab
>> --enable-libx264 --enable-libx26 
>> libavutil  56. 64.100 / 56. 64.100
>>libavcodec 58.119.100 / 58.119.100
>>libavformat58. 65.101 / 58. 65.101
>>libavdevice58. 11.103 / 58. 11.103
>>libavfilter 7.100.100 /  7.100.100
>>libswscale  5.  8.100 /  5.  8.100
>>libswresample   3.  8.100 /  3.  8.100
>>libpostproc55.  8.100 / 55.  8.100
>> Input #0, matroska,webm, from '0.mkv':
>>Metadata:
>>  ENCODER : Lavf58.65.101
>

Re: [FFmpeg-user] shuffleframes -- unexpected results

2021-02-21 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 2021-02-21 21:39, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> I've run some test cases for the shuffleframes filter. I've documented
>>> shuffleframes via my
>>> preferred documentation format (below). In the course of my testing, I
>>> found 2 cases (marked "*
>>> Expected", below) that produced unexpected results, to wit: If the 1st
>>> frame is discarded, the
>>> entire input is discarded, even if the 1st discard is followed by frames
>>> that are supposed to be
>>> retained.
>>>
>>> -1 1...Blocks the pipeline (discards all frames).
>>> *
>>> Expected 1 3 5 7 ..
>>> -1 1 2   ...Blocks the pipeline (discards all frames).
>>> * Expected 1 2 4 5 7 8 ..
>> 
>> These 2 cases produce the expected result for me
>> 
>> If the "entire input is discarded" - do you mean you get no output file
>> at
>> all ?
> 
> No output file at all. The transcodes complete (they don't hang awaiting
> end-of-stream) but since 
> the pipeline contains no frames, the encoder makes nothing and ffmpeg
> makes no files.
> 
>> Post more info, including the console log
> 
> First, for context, the scripts:
> ffmpeg -i 0.mkv -vf shuffleframes="-1" -dn "\-1.mkv"
> ffmpeg -i 0.mkv -vf shuffleframes="-1 1" -dn "\-1 1.mkv"
> ffmpeg -i 0.mkv -vf shuffleframes="-1 1 2" -dn "\-1 1 2.mkv"
> 
> I had to escape the '-' in the filenames in order to avoid this complaint:
> "Unrecognized option '1 1 2.mkv'.
> "Error splitting the argument list: Option not found"
> 
> Perhaps it would be expeditious if you showed me your command line that
> works, eh?

I didn't need to escape 

ffmpeg -i input.avs -vf shuffleframes="-1 1" -c:v libx264 -crf 20 -an
out1.mkv
ffmpeg -i input.avs -vf shuffleframes="-1 1 2" -c:v libx264 -crf 20 -an
out2.mkv

input.avs is a 24.0fps blankclip with showframenumber() to overlay the
framenumbers so I could examine the output later. 

(I noticed mp4 container does not work properly for this, there are
duplicated frames, but mkv output is ok)




Mark Filipak (ffmpeg) wrote
> Oh dear. Is there a escaping row between Windows and ffmpeg? Here's the
> logfile:
> ffmpeg started on 2021-02-21 at 11:50:29
> Report written to "ffmpeg-20210221-115029.log"
> Log level: 32
> Command line:
> ffmpeg -i 0.mkv -vf "shuffleframes=-1 1 2" -dn "\\-1 1 2.mkv"
> ffmpeg version N-100851-g9f38fac053 Copyright (c) 2000-2021 the FFmpeg
> developers
>built with gcc 9.3-win32 (GCC) 20200320
>configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static
> --pkg-config=pkg-config 
> --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32
> --enable-gpl --enable-version3 
> --disable-debug --disable-w32threads --enable-pthreads --enable-iconv
> --enable-zlib --enable-libxml2 
> --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma
> --enable-fontconfig 
> --enable-opencl --enable-libvmaf --enable-vulkan --enable-libvorbis
> --enable-amf --enable-libaom 
> --enable-avisynth --enable-libdav1d --enable-libdavs2 --enable-ffnvcodec
> --enable-cuda-llvm 
> --enable-libglslang --enable-libass --enable-libbluray --enable-libmp3lame
> --enable-libopus 
> --enable-libtheora --enable-libvpx --enable-libwebp --enable-libmfx
> --enable-libopencore-amrnb 
> --enable-libopencore-amrwb --enable-libopenjpeg --enable-librav1e
> --enable-librubberband 
> --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt
> --enable-libsvtav1 
> --enable-libtwolame --enable-libuavs3d --enable-libvidstab
> --enable-libx264 --enable-libx26 
> libavutil  56. 64.100 / 56. 64.100
>libavcodec 58.119.100 / 58.119.100
>libavformat58. 65.101 / 58. 65.101
>libavdevice58. 11.103 / 58. 11.103
>libavfilter 7.100.100 /  7.100.100
>libswscale  5.  8.100 /  5.  8.100
>libswresample   3.  8.100 /  3.  8.100
>libpostproc55.  8.100 / 55.  8.100
> Input #0, matroska,webm, from '0.mkv':
>Metadata:
>  ENCODER : Lavf58.65.101
>Duration: 00:00:05.76, start: 0.00, bitrate: 295 kb/s
>  Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m, progressive),
> 240x236 [SAR 32:27 DAR 
> 640:531], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
>  Metadata:
>ENCODER : Lavc58.119.100 libx264
>DURATION: 00:00:05.75600
>  Stream #0:1: Audio: vorbis, 48000 Hz, stereo, fltp (default)
>  Metadata:
>_STATISTICS_TAGS-

Re: [FFmpeg-user] shuffleframes -- unexpected results

2021-02-21 Thread pdr0
Mark Filipak (ffmpeg) wrote
> I've run some test cases for the shuffleframes filter. I've documented
> shuffleframes via my 
> preferred documentation format (below). In the course of my testing, I
> found 2 cases (marked "* 
> Expected", below) that produced unexpected results, to wit: If the 1st
> frame is discarded, the 
> entire input is discarded, even if the 1st discard is followed by frames
> that are supposed to be 
> retained.
> 
>-1 1...Blocks the pipeline (discards all frames). *
> Expected 1 3 5 7 ..
>-1 1 2   ...Blocks the pipeline (discards all frames).
> * Expected 1 2 4 5 7 8 ..

These 2 cases produce the expected result for me.

If the "entire input is discarded", do you mean you get no output file at
all?

Post more info, including the console log.
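For reference, the expected outputs quoted above can be checked against a label-level Python simulation. The semantics assumed here (the mask applies to each successive group of len(mask) frames; -1 discards) are a simplification of the filter's documented behavior:

```python
# Simulate the expected shuffleframes behavior on frame numbers.
def shuffleframes(frames, mask):
    out = []
    for g in range(0, len(frames), len(mask)):
        group = frames[g:g + len(mask)]
        for idx in mask:
            if idx != -1 and idx < len(group):
                out.append(group[idx])
    return out

kept_a = shuffleframes(list(range(8)), [-1, 1])     # mask "-1 1"
kept_b = shuffleframes(list(range(9)), [-1, 1, 2])  # mask "-1 1 2"
print(kept_a)   # [1, 3, 5, 7]
print(kept_b)   # [1, 2, 4, 5, 7, 8]
```

Under this model the two masks keep frames 1 3 5 7... and 1 2 4 5 7 8..., matching the "Expected" results in the report rather than an empty output.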






Re: [FFmpeg-user] Is there something about inputting raw frames

2021-02-12 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 10:34 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
> 
> -snip-
> 
>> "72fps" or "144fps" equivalent in a cinema is not the same thing - the
>> analogy would be the cinema is repeating frames, vs interpolating new
>> in-between frames on a motion flow TV. ...
> 
> To some transcodes, repeating frames does apply. To interpolation, some
> other considerations apply. 
> Also, I contend that, due to its real-time schedule, a motion flow TV
> can't faithfully reproduce 24 
> pictures per second converted to 60 pictures per second via motion vector
> interperpolation.

Motion flow in TVs is the hardware (realtime) implementation of software
optical flow. There are different trade names from different manufacturers,
but they all use motion interpolation and optical flow. For a 24pN native
source, it's a real-time conversion to 120fps, then evenly divided by 2 for
60fps. Each frame is "evenly spaced in time". It produces similar results
to mvtools2, Interframe, Twixtor, svpflow, Kronos, DAIN - all the ones I
mentioned in your other thread. The TV version of optical flow even produces
similar artifacts. This has been around for many years, probably 10-15,
and those TV optical flow sets almost as long. It might be new to you, but
this is all old news.



If you're still interested, I tested the avs script and it produces the
proper results you want for the checkerblend with InterleaveEvery. The short
version is it's probably not worth it if you're using libx265. Minterpolate
was your bottleneck; Interframe or any interpolation script is unlikely to
be a bottleneck unless you use very slow settings.




Observations. This was tested on a 5-year-old laptop.

If we take Interframe @ 6/1001 (Preset="medium", Tuning="smooth",
InputType="2D", NewNum=6, NewDen=1001, GPU=True), I used cores=8,
prefetch=8 for the "baseline" avs version.

Interframe @6/1001 ffmpeg pipe speed 38-39fps
Interframe @48000/1001 ffmpeg pipe speed 46-47fps
Interleaveevery checker ffmpeg pipe speed  49-50fps (interesting that this
was faster than the Interframe 48000/1001 run alone)

Actual encoding speeds
interframe @6/1001 libx265 8-8.5fps
interleaveevery checker libx265 7-7.5fps (interesting that the pipe speed
was faster, but the actual encoding speed slower than just using interframe)
interframe @6/1001 libx264 21-22fps
interleaveevery checker libx264 22-23fps




You can measure pipe speed vs. actual encoding speed to help determine where
some bottlenecks are:

ffmpeg -i input.avs -an -f null NUL

or

vspipe --y4m input.vpy - | ffmpeg -f yuv4mpegpipe -i - -an -f null NUL

There are other tools like vsedit, AvsPmod, and AVSMeter that can help you
optimize scripts by benchmarking speed, so you can adjust settings and
preview results. If you're not using these, they are helpful tools that
complement ffmpeg workflows.



When you were using minterpolate, that was probably the bottleneck. So any
filter chain speed optimization will make a significant difference in
realized encoding speed when using minterpolate, and could be worth
pursuing if speed optimization vs. quality was your goal.

But Interframe pipe speed into ffmpeg is much faster, and generally not the
bottleneck. (You could use faster Interframe settings if you needed to.)
It's better to address bottlenecks (more "bang for your buck"). On this
hardware, when using the checker blend approach, there was a marginal speed
improvement with additional artifacts and jerkiness, and it actually was
slower than just Interframe when using libx265 - that latter observation was
unexpected because the pipe speed was faster than Interframe @6/1001.
Might be some threading issues with the avs version, or some LUTs not
optimized, e.g. mt_lutspa for the checker mask.


On my setup, libx265 (using your same settings) is the bottleneck. So
adjusting the script or filters will make little difference (though a
faster script means more resources for libx265 to use, so it'll be
marginally faster). YMMV between hardware. In all cases, the input pipe
speed is significantly faster than the actual libx265 encoding speed. Just
switching to libx264 or using faster libx265 settings more than doubled the
speed for me.

Check your bottlenecks; if you need more info on scripts, procedure, or
samples, let me know. I don't think I can easily translate it to
vapoursynth - my python is weak. The InterleaveEvery function and the
frame-based nature of avisynth are what make it easy to do.





Re: [FFmpeg-user] Is there something about inputting raw frames

2021-02-12 Thread pdr0
Mark Filipak (ffmpeg) wrote
>> Either way, cadence wise that's going to be worse in terms of smoothness
>> then an optical flow retimed 6/1001 . (Some people would argue it's
>> worse period, you're retiming it and making it look like a soap opera...)
> 
> You know, I think that "soap opera" opinion is entirely bogus. In a movie
> theater, projectors are 
> triple shuttered. That essentially brings the frame rate up to 72fps with
> picture rate of 24pps (or, 
> if you include the times when the shutter is closed, 144fps). When some
> people see on a TV what they 
> would see in a cinema, they say it looks like a soap opera. It's not what
> they're used to seeing on 
> a TV. I think 60fps on a 60Hz TV looks much better, that it looks like
> reality. If you've been 
> following what I've been doing, you'll know that's been my objective all
> along. I'd hoped that 
> minterpolate would do it, but minterpolate makes too many errors. svpflow
> does a much better job and 
> it does it via GPU (so transcoding goes from 4 days to 14 hours). I'm
> pretty confident that going to 
> 48fps (instead of 60fps) and then adding a modulo-4 frame will speed up
> the transcode by about 40% 
> (to 8-1/2 hours -- an overnight job!).

It's ok to "like" one thing vs. another; that's why there are motion flow
TVs that use optical flow/motion interpolation on the fly, and judderless
TVs that have different refresh rates.

"72fps" or "144fps" equivalent in a cinema is not the same thing - the
analogy would be the cinema repeating frames, vs. interpolating new
in-between frames on a motion flow TV. Actual film samples in the cinema are
still at 24. A judderless TV looks like a theatre because it has the
equivalent of repeating frames at 120Hz, 144Hz, 300Hz, etc. The motion
characteristics are the same for the judderless display and the cinema.


Optical flow motion interpolation is generating new motion samples. It's
completely different. Soap operas are shot at 59.94fps, not 24fps. New
motion samples and (synthesized or real) high-frame-rate recording
completely change the look of the material. The other difference is the
shutter speed of the acquisition camera. The higher the acquisition frame
rate, usually the higher the shutter speed and the less motion blur.
Synthesized interpolation does not remove the motion blur of 24p
acquisition; in fact, it adds more blur. Native 59.94p acquisition is
"sharper", with less motion blur.

The live look of soap operas or reality TV is ok for sports, reality TV,
and news, but it changes the look of a theatrical movie shot at 24p.
Motion-interpolated 59.94p looks completely different from the cinema, and
that's the issue many people have with it.





>> Are you actually interested in workarounds and getting the job done, or
>> just
>> how to do this in ffmpeg?
> 
> Well, I guess I just want to get the job done. The linchpin is the added
> frame. What I want to do is 
> create a modulo-4, 1/60th second gap in the frames coming out of
> VapourSynth and filling it with a 
> checkerboard blend of the frames on either side of the gap -- essentially
> a blended decombing -- 
> with PTSs set to give 60/1.001fps. I realize that will produce a slight
> judder (1 frame in every 5 
> frames, but based on my experiments with minterplolate, that judder is
> *almost* imperceptible. If 
> shuffleframes proves to be the problem, I'll do a 3322telecine and
> checkerboard blend the combed 
> frame. I'll get where I want to go eventually. All the guesswork &
> discovery regarding how ffmpeg 
> filters work is just awfully tedious.
> 
>> If you just want it done, this is easier in avisynth because of the
>> InterleaveEvery function;
>> http://avisynth.nl/index.php/ApplyEvery#InterleaveEvery
> 
> Oh, my. Avisynth? 'InterleaveEvery', eh? That doesn't sound like what I
> want, but I'll check it out. 
> Thanks.

I don't know why frames are being dropped with those ffmpeg filters, but
I'd like to figure out why.

But in the meantime, if your ultimate goal was smooth interpolation to
59.94, why not just use Interframe to generate 59.94? Earlier, you
mentioned speed - is that the reason?

The "checkerboard blend from the gap": if the 48000/1001 interpolated
stream is A,B,C,D,E, you want a frame inserted between D and E that is
comprised of a "checkerboard blend" of D and E, for a resulting 6/1001?
It seems like a poor tradeoff for a small gain in speed. But InterleaveEvery
is one way of doing it.
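The checkerboard blend discussed here interleaves pixels from the two neighbouring frames on a checkerboard pattern, like the tblend all_expr in the earlier thread. A minimal pure-Python sketch on toy 4x4 "frames" (the pixel values are stand-ins):

```python
def checkerboard_blend(frame_d, frame_e):
    """Blend two equal-sized frames: positions where (x + y) is even
    take pixels from frame_d, odd positions take pixels from frame_e."""
    return [
        [frame_d[y][x] if (x + y) % 2 == 0 else frame_e[y][x]
         for x in range(len(frame_d[0]))]
        for y in range(len(frame_d))
    ]

# Toy frames: D is all 100s, E is all 200s.
D = [[100] * 4 for _ in range(4)]
E = [[200] * 4 for _ in range(4)]

blended = checkerboard_blend(D, E)
for row in blended:
    print(row)
```

Each output row alternates pixels from the two source frames, which is the spatial "decomb-like" mix described above.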






Re: [FFmpeg-user] Is there something about inputting raw frames

2021-02-11 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 02:28 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> On 02/12/2021 01:27 AM, pdr0 wrote:
>>>> Mark Filipak (ffmpeg) wrote
>>>>> Is there something about inputting raw frames that I don't know?
>>>>>
>>>>> I'm using 'vspipe' to pipe raw frames to 'ffmpeg -i pipe:'.
>>>>> The vapoursynth script, 'Mark's.vpy', is known good.
>>>>> The output of vapoursynth is known good.
>>>>> I've tried to be careful to retain valid PTSs, but apparently have
>>>>> failed.
>>>>> The output should be around 1200 frames, but 364 frames are dropped.
>>>>> I've frame stepped through the target, 'Mark's_script_6.mkv', and the
>>>>> frames that are there are in
>>>>> the correct order.
>>>>> The only thing I can guess is that ffmpeg handles 48/1.001fps raw
>>>>> video
>>>>> frames in such a way that
>>>>> PTS is not valid or can't be changed with 'setpts=fps=6/1001'.
>>>>> Can anyone see an error. Or, lacking an error, does anyone know of a
>>>>> workaround?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> Mark's_script_6.cmd
>>>>> =
>>>>> ECHO from vapoursynth import core>Mark's.vpy
>>>>> ECHO video =
>>>>> core.ffms2.Source(source='Mark\'s_source.mkv')>>Mark's.vpy
>>>>> ECHO import havsfunc as havsfunc>>Mark's.vpy
>>>>> ECHO video = havsfunc.InterFrame(video, Preset="medium",
>>>>> Tuning="smooth",
>>>>> InputType="2D",
>>>>> NewNum=48000, NewDen=1001, GPU=True)>>Mark's.vpy
>>>>> ECHO video.set_output()>>Mark's.vpy
>>>>> vspipe --y4m Mark's.vpy - | ffmpeg -thread_queue_size 2048 -i pipe:
>>>>> -filter_complex
>>>>> "setpts=N*1001/6/TB, split[1][2], [1]shuffleframes=0 1 2 3 3,
>>>>> select=not(eq(mod(n\,5)\,4))[3],
>>>>> [2]tblend=all_expr='if(eq(mod(X,2),mod(Y,2)),TOP,BOTTOM)',
>>>>> shuffleframes=0
>>>>> 1 2 2 3,
>>>>> select=eq(mod(n\,5)\,4)[4], [3][4]interleave" -i Mark's_source.mkv
>>>>> -map
>>>>> 0:v -map 1:a -codec:v
>>>>> libx265 -x265-params "crf=16:qcomp=0.60" -codec:a copy -codec:s copy
>>>>> Mark's_script_6.mkv -y
>>>>
>>>>
>>>> Are you trying to keep the same frames from vapoursynth output node,
>>>> but
>>>> assign 6/1001 fps and timestamps instead of 48000/1001 ?
>>>> (effectively
>>>> making it a speed up)
>>>>
>>>> If so, the workaround is : add after the Interframe line
>>>>
>>>> video = core.std.AssumeFPS(video, fpsnum=6, fpsden=1001)
>>>
>>> After your suggested addition to the python script, Mark's.vpy,
>>> With '-filter_complex "setpts=N*1001/6/TB, split[1][2]...' there are
>>> 335 drops.
>>> With '-filter_complex "split[1][2]...' there are 190 drops.
>> 
>> The workaround is correct for the PTS
>> 
>>   AssumeFPS is used to change the frame rate (and timestamps) without
>> changing the frame count. It just assigns a framerate (and their PTS). So
>> you would use that instead of setpts
>> 
>> There are no frame drops or additions from vapoursynth. The framecount,
>> framerate and PTS are correct at that point. You can verify this by
>> encoding
>> the vpy script directly without other filters.
>> 
>> So this implies the some drops are from setpts, and some other drops are
>> from some of your other filters
> 
> Well, I previously removed the 'setpts' directives with no change. Also, I
> previously tested with no 
> ffmpeg filters at all and got the expected sped up video. I honestly can't
> see anything else to 
> discover. But I'll start stripping filters one by one and hack a solution
> (or make a discovery). Of 
> course, the 'shuffleframes' directives are most suspect, but I've used
> 'shuffleframes' in the past 
> and succeeded.
> 
> Thanks for your help. It was instrumental. The rest is up to me.




I realize this is the ffmpeg-user board, but why not do some of the video
processing in vapoursynth? You're already using it for part of it. But I'm
not sure what you're trying to do?

It looks like you're taking the 48000/1001 interpolated frame doubled output
and adding a "BC~C.bc~c" frame to ever

Re: [FFmpeg-user] Is there something about inputting raw frames

2021-02-11 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 01:27 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> Is there something about inputting raw frames that I don't know?
>>>
>>> I'm using 'vspipe' to pipe raw frames to 'ffmpeg -i pipe:'.
>>> The vapoursynth script, 'Mark's.vpy', is known good.
>>> The output of vapoursynth is known good.
>>> I've tried to be careful to retain valid PTSs, but apparently have
>>> failed.
>>> The output should be around 1200 frames, but 364 frames are dropped.
>>> I've frame stepped through the target, 'Mark's_script_6.mkv', and the
>>> frames that are there are in
>>> the correct order.
>>> The only thing I can guess is that ffmpeg handles 48/1.001fps raw video
>>> frames in such a way that
>>> PTS is not valid or can't be changed with 'setpts=fps=6/1001'.
>>> Can anyone see an error. Or, lacking an error, does anyone know of a
>>> workaround?
>>>
>>> Thanks.
>>>
>>> Mark's_script_6.cmd
>>> =
>>> ECHO from vapoursynth import core>Mark's.vpy
>>> ECHO video = core.ffms2.Source(source='Mark\'s_source.mkv')>>Mark's.vpy
>>> ECHO import havsfunc as havsfunc>>Mark's.vpy
>>> ECHO video = havsfunc.InterFrame(video, Preset="medium",
>>> Tuning="smooth",
>>> InputType="2D",
>>> NewNum=48000, NewDen=1001, GPU=True)>>Mark's.vpy
>>> ECHO video.set_output()>>Mark's.vpy
>>> vspipe --y4m Mark's.vpy - | ffmpeg -thread_queue_size 2048 -i pipe:
>>> -filter_complex
>>> "setpts=N*1001/6/TB, split[1][2], [1]shuffleframes=0 1 2 3 3,
>>> select=not(eq(mod(n\,5)\,4))[3],
>>> [2]tblend=all_expr='if(eq(mod(X,2),mod(Y,2)),TOP,BOTTOM)',
>>> shuffleframes=0
>>> 1 2 2 3,
>>> select=eq(mod(n\,5)\,4)[4], [3][4]interleave" -i Mark's_source.mkv -map
>>> 0:v -map 1:a -codec:v
>>> libx265 -x265-params "crf=16:qcomp=0.60" -codec:a copy -codec:s copy
>>> Mark's_script_6.mkv -y
>> 
>> 
>> Are you trying to keep the same frames from vapoursynth output node, but
>> assign 6/1001 fps and timestamps instead of 48000/1001 ? (effectively
>> making it a speed up)
>> 
>> If so, the workaround is : add after the Interframe line
>> 
>> video = core.std.AssumeFPS(video, fpsnum=6, fpsden=1001)
> 
> After your suggested addition to the python script, Mark's.vpy,
> With '-filter_complex "setpts=N*1001/6/TB, split[1][2]...' there are
> 335 drops.
> With '-filter_complex "split[1][2]...' there are 190 drops.

The workaround is correct for the PTS.

AssumeFPS is used to change the frame rate (and timestamps) without
changing the frame count. It just assigns a framerate (and the
corresponding PTS). So you would use that instead of setpts.

There are no frame drops or additions from vapoursynth. The framecount,
framerate, and PTS are correct at that point. You can verify this by
encoding the vpy script directly without other filters.

So this implies that some drops are from setpts, and some other drops are
from some of your other filters.







Re: [FFmpeg-user] Is there something about inputting raw frames

2021-02-11 Thread pdr0
Mark Filipak (ffmpeg) wrote
> Is there something about inputting raw frames that I don't know?
> 
> I'm using 'vspipe' to pipe raw frames to 'ffmpeg -i pipe:'.
> The vapoursynth script, 'Mark's.vpy', is known good.
> The output of vapoursynth is known good.
> I've tried to be careful to retain valid PTSs, but apparently have failed.
> The output should be around 1200 frames, but 364 frames are dropped.
> I've frame stepped through the target, 'Mark's_script_6.mkv', and the
> frames that are there are in 
> the correct order.
> The only thing I can guess is that ffmpeg handles 48/1.001fps raw video
> frames in such a way that 
> PTS is not valid or can't be changed with 'setpts=fps=6/1001'.
> Can anyone see an error. Or, lacking an error, does anyone know of a
> workaround?
> 
> Thanks.
> 
> Mark's_script_6.cmd
> =
> ECHO from vapoursynth import core>Mark's.vpy
> ECHO video = core.ffms2.Source(source='Mark\'s_source.mkv')>>Mark's.vpy
> ECHO import havsfunc as havsfunc>>Mark's.vpy
> ECHO video = havsfunc.InterFrame(video, Preset="medium", Tuning="smooth",
> InputType="2D", 
> NewNum=48000, NewDen=1001, GPU=True)>>Mark's.vpy
> ECHO video.set_output()>>Mark's.vpy
> vspipe --y4m Mark's.vpy - | ffmpeg -thread_queue_size 2048 -i pipe:
> -filter_complex 
> "setpts=N*1001/6/TB, split[1][2], [1]shuffleframes=0 1 2 3 3,
> select=not(eq(mod(n\,5)\,4))[3], 
> [2]tblend=all_expr='if(eq(mod(X,2),mod(Y,2)),TOP,BOTTOM)', shuffleframes=0
> 1 2 2 3, 
> select=eq(mod(n\,5)\,4)[4], [3][4]interleave" -i Mark's_source.mkv -map
> 0:v -map 1:a -codec:v 
> libx265 -x265-params "crf=16:qcomp=0.60" -codec:a copy -codec:s copy
> Mark's_script_6.mkv -y





Are you trying to keep the same frames from vapoursynth output node, but
assign 6/1001 fps and timestamps instead of 48000/1001 ? (effectively
making it a speed up)

If so, the workaround is : add after the Interframe line

video = core.std.AssumeFPS(video, fpsnum=6, fpsden=1001)
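Assembled as a single script (rather than the ECHO'd lines), with the AssumeFPS workaround inserted after the InterFrame call. This requires the VapourSynth, ffms2, and havsfunc modules; the fps numbers are reproduced exactly as they appear in the thread.

```python
# Mark's.vpy - assembled form of the script above, with the
# AssumeFPS workaround added after the InterFrame line.
from vapoursynth import core
import havsfunc as havsfunc

video = core.ffms2.Source(source="Mark's_source.mkv")
video = havsfunc.InterFrame(video, Preset="medium", Tuning="smooth",
                            InputType="2D", NewNum=48000, NewDen=1001,
                            GPU=True)
video = core.std.AssumeFPS(video, fpsnum=6, fpsden=1001)
video.set_output()
```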





Re: [FFmpeg-user] facing a error/issue in compression efficiency under h.265

2021-02-10 Thread pdr0
USMAN AAMER wrote
> Hi,
> 
> I am comparing the compression efficiency of H.264 and H.265 codecs.
> I have studied many research papers showing that the compression
> efficiency
> of H.265 is much better than H.264.
> 
> But I am not able to get the same results.
> I am trying to compress 664 YUV 4:2:0 video sequence with ffmpeg under
> H.264 and H.265 codecs and got the resultant videos of following sizes:
> H.264: 5.58 MB
> H.265: 6.66 MB
> 
> I am using the following commands for compression:
> H.264:
> ffmpeg -i video.y4m -c:v libx264 -an -strict experimental -preset slow
> -CRF
> 30 - b:v 800k -f mp4 output.mp4
> 
> and for H.265:
>   ffmpeg -i video.y4m -c:v libx265 -an -strict experimental -preset slow
> -CRF 30 - b:v 800k -f mp4 output1.mp4
> 
> Kindly tell me what is the issue. Waiting for the kind response.
> 
> Thank you.



For libx264 and libx265, CRF and bitrate encoding (-b:v) are mutually
exclusive methods of rate control, yet you have specified both.

"Compression efficiency" implies some measure of quality at a given
bitrate. Higher compression efficiency implies higher quality at a given
bitrate. But you have no measure of quality, and you have different
bitrates.
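A sketch of one way to separate the two rate-control modes (file names are placeholders; -crf is lowercase, and -b:v is dropped when CRF is used):

```shell
# CRF-only (quality-targeted) encodes:
ffmpeg -i video.y4m -an -c:v libx264 -preset slow -crf 30 out_x264.mp4
ffmpeg -i video.y4m -an -c:v libx265 -preset slow -crf 30 out_x265.mp4

# To compare compression efficiency, match the bitrates instead...
ffmpeg -i video.y4m -an -c:v libx264 -preset slow -b:v 800k out_x264_800k.mp4
ffmpeg -i video.y4m -an -c:v libx265 -preset slow -b:v 800k out_x265_800k.mp4

# ...then measure distortion of each encode against the source:
ffmpeg -i out_x264_800k.mp4 -i video.y4m -lavfi ssim -f null -
ffmpeg -i out_x265_800k.mp4 -i video.y4m -lavfi ssim -f null -
```

Note that the same CRF number does not mean the same quality in libx264 and libx265, which is another reason file sizes alone prove little.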








Re: [FFmpeg-user] Questions about "Cannot allocate memory"

2021-01-31 Thread pdr0
Mark Filipak (ffmpeg) wrote
> And in lines like this:
> "[matroska @ 01f9e7236480] Starting new cluster due to timestamp"
>Why are new clusters started? What causes it?
>In this context, what is a cluster?
>What does "due to timestamp" really mean? Are there timestamp errors?

Not sure about the other questions, but clusters are the "units" of the mkv
container specification. There is a variable, but a maximum, limit to the
size (in bytes or duration) of a cluster. Once you pass a certain duration,
a new cluster is started; it's normal. Look at the mkvmerge documentation
for more information.
ffmpeg-specific mkv muxer options are listed in the full help.
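For example (option names as listed by ffmpeg's muxer help; the values here are arbitrary):

```shell
# List the matroska muxer's private options, including the cluster limits:
ffmpeg -h muxer=matroska

# The cluster duration/size caps can be set explicitly if needed
# (milliseconds and bytes respectively):
ffmpeg -i input.mkv -c copy -cluster_time_limit 5000 -cluster_size_limit 2097152 output.mkv
```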




Re: [FFmpeg-user] minterpolate performance & alternative

2021-01-29 Thread pdr0
Mark Filipak (ffmpeg) wrote
> On 01/28/2021 07:42 PM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> But perhaps by "process in parallel" you mean something else, eh?
>>> ...something I'm unaware of. Can
>>> you expand on that?
>> 
>> 
>> I mean "divide and conquer" to use all resources. If you're at 20% CPU
>> usage, you can run 4-5 processes
>> 
>> eg. Split video in to 4-5 segments. Process each simultaneously, each to
>> a
>> lossless intermediate, so you're at 100% CPU usage. Then reassemble and
>> encode to your final format
> 
> I don't think that will work very well, even if I carefully cut on key
> frames. The reason is that 
> the minterpolate filter drops 10 frames and that means that at the join of
> each section there'll be 
> a 1/6 second jump (or maybe worse).


This has nothing to do with keyframes; I'm not talking about stream copy.

eg. -vf trim splits in the uncompressed domain (data is decoded to
uncompressed frames). If you need to, you can split stages into lossless
intermediates.

You cut on cadence boundaries of 24-frame cycles. 24p is evenly divisible
into 120p (5x). 60p takes every 2nd frame from that 120p result; this is
what you are doing with optical flow retiming, so every frame is evenly
spaced in time.
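A sketch of a frame-exact split into lossless intermediates, assuming hypothetical chunk boundaries that fall on multiples of the 24-frame cycle:

```shell
# Decode, cut on exact frame numbers (multiples of 24), reset timestamps,
# and write each chunk losslessly (FFV1 here) for later parallel processing:
ffmpeg -i input.mkv -vf "trim=start_frame=0:end_frame=4800,setpts=PTS-STARTPTS" \
       -c:v ffv1 part0.mkv
ffmpeg -i input.mkv -vf "trim=start_frame=4800:end_frame=9600,setpts=PTS-STARTPTS" \
       -c:v ffv1 part1.mkv
```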





> Note: Whether the minterpolate filter drops 10 frames or 5 frames is the
> subject of [FFmpeg-user] 
> minterpolate PTS v frame count weirdness.

Not sure what this is referring to? Any more details?

If you're getting PTS / frame-count weirdness, split it out as lossless
intermediates.




Re: [FFmpeg-user] minterpolate ...for Paul

2021-01-29 Thread pdr0
pdr0 wrote
> More settings would help too - maybe you can improve the filter. I'll post
> an example later similar to one posted by Mark, where it's "solvable"
> using
> other methods, but not using minterpolate. Minterpolate maxes out at a
> block
> size of 16, and that causes problems in that and similar examples, nor
> does
> it have internal pad options to improve motion vectors. 

Here are some test videos - 
https://www.mediafire.com/file/9inkxdvi8iuo5hi/interpolation_test_videos.zip/file

I made a source video "interp_test_src.mp4" @23.976p which has simulated
camera pan movement similar to Mark's example. 


"minterpolate_default.mp4" uses the default settings. Similar artifacts
along the top and bottom of the frame near the letterbox edge. Cropping
and padding (both external to the filter) did not help much. The central
portion along the windows has some bad areas in some frames too. Typical
motion interpolation artifacts.

Test1_mvtools2 uses typical settings (default blksize of 16). Similar
artifacts.

test2_mvtools2_nocrop_nopad has a blksize of 32, but no crop or pad
internally. It's better in the central, top, and bottom areas, but still
has some "edge dragging" artifacts.

Test2_mvtools2 has a blksize of 32 and is cropped and padded internally to
improve motion vectors (this makes a difference along the frame borders at
the letterbox bars), then the letterbox bars are added back. This is much
cleaner, with only minor artifacting. This one would actually be usable by
most people.



On this sequence, a larger blocksize of 32 helps with the central
artifacts, and internal padding helps with the frame edges for mvtools2.

I suspect that if minterpolate had options for larger blocksizes and
internal padding, it would improve too.





Re: [FFmpeg-user] minterpolate ...for Paul

2021-01-29 Thread pdr0
Paul B Mahol wrote
>> The problem is ffmpeg minterpolate is s slow, and you have no usable
>> preview. Some of the other methods mentioned earlier do have previews - 
>> so
>> you can tweak settings, preview, readjust etc
>>
>> 
> 
> Why you ignore fact that libavfilter also allows usable preview and
> readjust of parameters.

You can technically, but minterpolate is not very user-friendly - it's too
slow for real work and feedback, and you cannot keyframe the settings for
different scenes very easily. It's barely usable unless you program your
own GUI around libavfilter.

More settings would help too - maybe you can improve the filter. I'll post
an example later similar to one posted by Mark, where it's "solvable" using
other methods, but not using minterpolate. Minterpolate maxes out at a block
size of 16, and that causes problems in that and similar examples, nor does
it have internal pad options to improve motion vectors. 




Re: [FFmpeg-user] minterpolate ...for Paul

2021-01-29 Thread pdr0





Re: [FFmpeg-user] minterpolate ...for Paul

2021-01-28 Thread pdr0
Mark Filipak (ffmpeg) wrote
> I've never heard of "optical flow errors". What could they be? (Got any
> links to 
> explanations?)

The artifacts in your video are optical flow errors :)

If you've ever used it, you'd recognize these artifacts. They are very
common.



There are about a dozen prototypical "fail" categories or common errors
that plague all types of optical flow.

These are errors of motion vectors, of object flow (object boundaries or
"masks"), or occlusion errors.

The internet is full of examples and explanations. The topic is rather
large; just search google, there is lots of info. If you have a specific
question, then ask.

Sometimes you get clean interpolated frame results, but sometimes there are
massive, distracting errors. It varies by situation and source.

Your example has one of the common categories of "fail" involving repeating
patterns and textures. It falls under the "picket fence" fail. A
prototypical tracking or dolly shot past a picket fence or brick wall will
come up with interpolation errors.

The peripheral edge errors are common because there is less data beyond the
periphery of the frame for n-1 and n+1, and the motion vectors are less
accurate compared to the center of the frame.

Another common one is when objects pass over one another. The flow masks
aren't perfect, and you end up with blobby edge artifacts around objects.







>>...For artifacts around frame edges, letterbox edges usually some form
>> of padding is used. I don't think ffmpeg minterpolate has those.
> 
> I've done that. The result was just okay. The slight riffling on the frame
> boundaries during camera 
> panning isn't all that objectionable to me. It occurs to me that
> minterpolute could queue frames and 
> look 'forward' to later frames in order to resolve boundary macroblock
> artifacts -- afterall, it has 
> the motion vectors, eh?

Some algorithms can use N-3, N-2, N-1, N, N+1, N+2, N+3; I don't think
minterpolate can. More is not always better. Often you get more
contamination with a larger "window".

Sometimes just changing the blocksize can produce better (or worse)
results. The problem is that ffmpeg minterpolate is so slow, and you have
no usable preview. Some of the other methods mentioned earlier do have
previews, so you can tweak settings, preview, readjust, etc.










Re: [FFmpeg-user] minterpolate performance & alternative

2021-01-28 Thread pdr0
Mark Filipak (ffmpeg) wrote
> But perhaps by "process in parallel" you mean something else, eh?
> ...something I'm unaware of. Can 
> you expand on that?


I mean "divide and conquer" to use all resources. If you're at 20% CPU
usage, you can run 4-5 processes.

eg. Split the video into 4-5 segments. Process each simultaneously, each to
a lossless intermediate, so you're at 100% CPU usage. Then reassemble and
encode to your final format.
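After the parallel runs finish, the lossless intermediates can be reassembled with ffmpeg's concat demuxer before the final encode; a sketch with hypothetical file names:

```shell
# List the intermediates in order, then concatenate without re-encoding
# the intermediate step, doing the final (expensive) encode in one pass:
printf "file 'part0.mkv'\nfile 'part1.mkv'\nfile 'part2.mkv'\n" > parts.txt
ffmpeg -f concat -safe 0 -i parts.txt -c:v libx265 -x265-params crf=16 final.mkv
```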




Re: [FFmpeg-user] minterpolate performance & alternative

2021-01-28 Thread pdr0
Mark Filipak (ffmpeg) wrote
> Suppose I explain like this: Take any of the various edge-detecting,
> deinterlacing filters and, for 
> each line-pair (y & y+1), align both output lines (y & y+1) to the mean of
> the input's 
> line(y).Y-edge & line(y+1).Y-edge. To do that, only single line-pairs are
> processed (not between 
> line-pairs), and no motion vector interpolation is needed.

There is no decomb or deinterlacing filter that does this.

The way you would do it is deinterlace plus optical flow. These are
separate operations.

Motion vectors are required - otherwise, how else would you determine where
an object moves? How else would you get the correct "mean" and position? A
Sobel operator looks at the current frame (or field); it's a spatial
operation. There is no relationship to the previous or next frame. Or, if
you like, even scanlines are independent from odd scanlines. The
relationship is the motion vector.

Y and Y+1 are even and odd scanlines. They are combed in the current frame
because they come from different points in time: T=0, T=1.

In this case, your "spatial mean" is also a "temporal mean". eg. An object
or pixel moves from position x=1 at T=0 to x=5 at T=1 in the next field.
The spatial mean is x=3 at T=0.5. Instead of time T=0 or T=1, you want
T=0.5, assuming linear interpolation. This is optical flow: the in-between
point in time, and its resulting data.
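The T=0.5 case above is just linear interpolation of position along the motion vector; a toy sketch:

```python
def interpolate_position(x0, x1, t):
    """Linearly interpolate an object's position between two motion
    samples taken at T=0 (x0) and T=1 (x1), for 0 <= t <= 1."""
    return x0 + (x1 - x0) * t

# The example from the text: x=1 at T=0, x=5 at T=1.
midpoint = interpolate_position(1, 5, 0.5)  # in-between sample at T=0.5
print(midpoint)  # → 3.0
```

The motion vector is what supplies the (x0, x1) pair per block or pixel; without it there is no way to know which positions to average.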




Re: [FFmpeg-user] minterpolate performance & alternative

2021-01-28 Thread pdr0
Paul B Mahol wrote
> On Thu, Jan 28, 2021 at 10:23 PM Mark Filipak (ffmpeg) 

> markfilipak@

> 
> wrote:
> 
>> Synopsis:
>>
>> I seek to use minterpolate to take advantage of its superior output. I
>> present some performance
>> issues followed by an alternative filter_complex. So, this presentation
>> necessarily addresses 2
>> subjects.
>>
>> Problem:
>>
>> I'm currently transcoding a 2:43:05 1920x1080, 24FPS progressive video to
>> 60FPS via minterpolate
>> filter. Apparently, the transcode will take a little more than 3 days.
>>
>> Hardware:
>>
>> There are 4 CPU cores (with 2 threads, each) that run at 3.6 GHz. There
>> is
>> also an NVIDIA GTX 980M
>> GPU having 1536 CUDA cores with a driver that implements the Optimus,
>> CUDA-as-coprocessors architecture.
>>
>> Performance:
>>
>> During the transcode, ffmpeg is consuming only between 10% & 20% of the
>> CPU. It appears to be
>> single-threaded, and it appears to not be using Optimus at all.
>>
>> Is there a way to coax minterpolate to expand its hardware usage?
>>
>> Alternative filter_complex:
>>
>> minterpolate converts 24FPS to 60FPS by interpolating every frame via
>> motion vectors to produce a 60
>> picture/second stream in a 60FPS transport. It does a truly amazing job,
>> but without expanded
>> hardware usage, it takes too long to do it.
>>
>> A viable alternative is to 55 telecine the source (which simply
>> duplicates
>> the n%5!=2 frames) while
>> interpolating solely the n%5==2 frames. That should take much less time
>> and would produce a 24
>> picture/second stream in a 60FPS transport -- totally acceptable.
>>
>> The problem is that motion vector interpolation requires that
>> minterpolate
>> be 'split' out and run in
>> parallel with the main path in the filter_complex so that the
>> interpolated
>> frames can be plucked out
>> (n%5==2) and interleaved at the end of the filter_complex. That doesn't
>> make much sense because it
>> doesn't decrease processing (or processing time) and, if the fully
>> motion-interpolated stream is
>> produced anyway, then output it directly instead of interleaving. What's
>> needed is an interpolation
>> alternative to minterpolate.
>>
>> Alternative Interpolation:
>>
>> 55 telecine with no interpolation or smoothing works well even though the
>> n%5==2 frames are combed
>> but decombing is desired. The problem with that is: I can't find a
>> deinterlace filter that does
>> pixel interpolation without reintroducing some telecine judder. The issue
>> involves spacial alignment
>> of the odd & even lines in the existing filters.
>>
>> Some existing filters align the decombed lines with the input's top
>> field,
>> some align the decombed
>> lines with the input's bottom field. What's desired is a filter that
>> aligns the decombed lines with
>> the spacial mean. I suggest that the Sobel might be appropriate for the
>> decombing (or at least, that
>> the Sobel can be employed to visualize what's desired).
>>
>> Sobel of line y:   __/\_/\_ (edges)
>> Sobel of line y+1: __/\_/\_
>> Desired output:
>>   line y:   /\_/\___ (aligned to mean)
>>   line y+1: /\_/\___ (aligned to mean)
>> I could find this:
>>   line y:   __/\_/\_
>>   line y+1: __/\_/\_ (aligned to top line
>> edges)
>> and I could find this:
>>   line y:   __/\_/\_ (aligned to bottom
>> line edges)
>>   line y+1: __/\_/\_
>>
>>
> Sorry, but I can not decipher above stuff. Does anybody else can?


He wants an "in-between" scanline (and perhaps resampled to a full frame) for
the target frame. Y=0 and Y=1 represent scanlines from two different times
(fields from two different times). He wants something in the middle, such as
Y=0.5, i.e. a retimed in-between frame, perhaps using optical flow.

The other option he had been using was blend deinterlacing, a vertical blur
between Y=0 and Y=1, which combines both times but has obvious problems with
blurring and ghosting.

(The best option, hands down, is a judderless display, so you don't have
all these artifacts or blurring. Very inexpensive nowadays.)


Re: [FFmpeg-user] minterpolate performance & alternative

2021-01-28 Thread pdr0
Mark Filipak (ffmpeg) wrote
> Is there a way to coax minterpolate to expand its hardware usage?

Not directly.

One way might be to split along cadence boundaries and process in parallel
(e.g. 4x), then reassemble.

(There are other optical flow solutions in other software that use the GPU,
and some are much faster. All of them are prone to the typical artifacts,
but some are slightly better than others.)


Re: [FFmpeg-user] minterpolate ...for Paul

2021-01-28 Thread pdr0
Mark Filipak (ffmpeg) wrote
> 
> In the video,
> 
> Look at the behavior of the dots on the gate behind the police here:
> 0:5.422 to 0:10.127.
> 
> Look especially at the top of roof of the building here: 0:12.012 to
> 0:12.179, for apparent 
> macroblock errors.
> 
> Here's the video:
> 
> https://www.dropbox.com/t/8sKE0jEguUxQgPjD
> 
> Here's the command line:
> 
> ffmpeg -i "source=24FPS.mkv" -map 0 -filter_complex 
> "minterpolate=fps=6/1001:mi_mode=mci:mc_mode=obmc:scd=fdiff:scd_threshold=10:vsbmc=1:search_param=20"
>  
> -codec:v libx265 -x265-params "crf=20:qcomp=0.60" -codec:a copy -codec:s
> copy minterpolate.mkv


Those are not macroblock errors; they are typical optical flow errors.
Motion interpolation is never perfect: there are always some types of
artifacts, occlusions, edge morphing.

There are several other methods and algorithms you can use outside of
FFmpeg, some GPU accelerated, e.g. svpflow, mvtools2, DAIN, Twixtor,
Resolve. For artifacts around frame edges and letterbox edges, some form of
padding is usually used; I don't think ffmpeg's minterpolate has that.


Re: [FFmpeg-user] What is the usage for -cgop/+cgop and -g options.

2021-01-26 Thread pdr0
Hongyi Zhao wrote
> I noticed there are -cgop/+cgop and -g options used by ffmpeg. But I
> really can't figure out the usage of them. Any hints will be highly
> appreciated.

-g is the maximum GOP length (the maximum keyframe interval). Some
codecs/settings have adaptive GOP and I-frame placement, but the interval
will never exceed that value.

cgop is for specifying a closed GOP: '-flags +cgop' forces closed GOPs,
'-flags -cgop' allows open GOPs.
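A hedged example (the filenames and the 25 fps assumption are placeholders): cap the keyframe interval at 2 seconds of 25 fps material with -g 50 and request closed GOPs with +cgop. The command is kept in a variable and echoed so the flags are easy to inspect; drop the echo and quotes to actually run it.

```shell
# -g 50: at most one keyframe every 50 frames (2 s at 25 fps)
# -flags +cgop: closed GOPs (no references across GOP boundaries)
cmd='ffmpeg -i input.mp4 -c:v libx264 -g 50 -flags +cgop -c:a copy output.mp4'
echo "$cmd"
```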


Re: [FFmpeg-user] minterpolate problem

2021-01-26 Thread pdr0
Jim DeLaHunt-2 wrote
> Perhaps the character between 'mci' and 'mc_mode' should be ':' instead 
> of '='?

That works for me

-vf
minterpolate=fps=6/1001:mi_mode=mci:mc_mode=obmc:scd=fdiff:scd_threshold=10

Each one is a separate option and argument

https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_minterpolate.c


Re: [FFmpeg-user] Slide show with vfr

2021-01-24 Thread pdr0
Wolfgang Hugemann wrote
> I did one step backward and tried to construct a vfr video from the
> scratch using slides a an input:
> 
> ffmpeg -y -f concat -i input.txt colors.mkv
> 
> with input.txt as:
> 
> ffconcat version 1.0
> file 'red.png'
> duration 250ms
> file 'yellow.png'
> duration 500ms
> file 'green.png'
> duration 500ms
> file 'cyan.png'
> duration 250ms
> file 'blue.png'
> duration 500ms
> file 'black.png'
> duration 500ms
> 
> This resulted in cfr for mp4 an vfr for mkv or webm (according to
> MediaInfo, a Windows application). However, there seems to be something
> wrong with the result colors.mkv, as no player, including ffplay uses
> the specified durations.

Express the durations in seconds.

Repeat the last image, as described at
https://trac.ffmpeg.org/wiki/Slideshow#Concatdemuxer

file 'red.png'
duration 0.25
file 'yellow.png'
duration 0.5
file 'green.png'
duration 0.5
file 'cyan.png'
duration 0.25
file 'blue.png'
duration 0.5
file 'black.png'
duration 0.5 
file 'black.png'
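The same list can be generated into input.txt with a small script (assuming the same placeholder PNG names); note the final repeated 'black.png' entry with no duration line:

```shell
# Write the ffconcat list; red and cyan get 0.25 s, the rest 0.5 s.
{
  echo "ffconcat version 1.0"
  for f in red yellow green cyan blue black; do
    case $f in red|cyan) d=0.25 ;; *) d=0.5 ;; esac
    echo "file '$f.png'"
    echo "duration $d"
  done
  # repeat the last image so its duration is honored
  echo "file 'black.png'"
} > input.txt
cat input.txt
```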


Re: [FFmpeg-user] tinterlace broken - SAR & DAR wrong

2021-01-04 Thread pdr0
Mark Filipak (ffmpeg) wrote
>  
>> 
>> You can't interleave images with different dimensions
>> 
>> Aout has separated fields, to 720x240 , but Bout is 720x480
> 
> [Bout] is 720x240: I'm using 'mode=send_field' in 'bwdif=mode=send_field',
> and the following 
> 'decimate' doesn't change that. The problem is that 'tinterleave' is
> marking the output as "SAR = 
> 16:9" instead of "8:9" (which is what it should be and should never be
> changed).

No, [Bout] is 720x480.

bwdif in send_field mode means double-rate deinterlacing: 720x480
interlaced input at 59.94 fields per second becomes 720x480 progressive
images at 59.94 frames per second.

Test it out yourself on the [Bout] branch:
ffmpeg -i "INPUT.VOB" -filter_complex "separatefields, shuffleframes=0 1 2 4
3 6 5 7 8 9,
select=eq(mod(n\,10)\,2)+eq(mod(n\,10)\,3)+eq(mod(n\,10)\,6)+eq(mod(n\,10)\,7),
tinterlace, setsar=sar=8/9, bwdif=mode=send_field:deint=all,
decimate=cycle=2" -an -c:v libx264 -crf 18 -t 00:00:10 bout.mkv -y


Re: [FFmpeg-user] tinterlace broken - SAR & DAR wrong

2021-01-04 Thread pdr0
Mark Filipak (ffmpeg) wrote
> 
> You wrote: "What's wrong with using setsar filter after tinterlace?"
> 
> I tried that from the git-go. I just reran it.
> 
> ffmpeg -report -i "source 720x480 [SAR 8x9 DAR 4x3] 29.97 fps.VOB"
> -filter_complex "separatefields, 
> shuffleframes=0 1 2 4 3 6 5 7 8 9, split[Ain][Bin], 
> [Ain]select=not(eq(mod(n\,10)\,2)+eq(mod(n\,10)\,3)+eq(mod(n\,10)\,6)+eq(mod(n\,10)\,7))[Aout],
>  
> [Bin]select=eq(mod(n\,10)\,2)+eq(mod(n\,10)\,3)+eq(mod(n\,10)\,6)+eq(mod(n\,10)\,7),
> tinterlace, 
> setsar=sar=8/9, bwdif=mode=send_field:deint=all, decimate=cycle=2[Bout],
> [Aout][Bout]interleave, 
> tinterlace, setsar=sar=8/9, bwdif=mode=send_frame:deint=interlaced"
> -codec:a copy -codec:s copy -dn 
> output.mkv
> 
> Putting setsar after tinterlace doesn't work. I did investigate that but I
> can't recall why -- I 
> think it's because DAR is then wrong. The report is below.


You can't interleave images with different dimensions.

[Aout] has separated fields, so it is 720x240, but [Bout] is 720x480.

Possibly you want to apply separatefields again before [Bout].


Re: [FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-27 Thread pdr0

Not sure about other OSes. On Windows, you don't need to escape commas
inside the text field, but the font paths need to be escaped.

These work OK on Windows:

one comma
ffmpeg -f lavfi -r 24 -i color=c=green:s=640x480 -vf
drawtext="fontfile='C\:\\Windows\\Fonts\\Arial.ttf':text='Room Temp Water,
No Heat Mat':fontcolor=white:fontsize=24:
box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)*0.05:y=(h-text_h)*0.05"
-crf 20 -an -frames:v 120 test1.mp4 -y

five commas
ffmpeg -f lavfi -r 24 -i color=c=green:s=640x480 -vf
drawtext="fontfile='C\:\\Windows\\Fonts\\Arial.ttf':text='Room Temp Water, ,
, , , No Heat Mat':fontcolor=white:fontsize=24:
box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)*0.05:y=(h-text_h)*0.05"
-crf 20 -an -frames:v 120 test2.mp4 -y


Re: [FFmpeg-user] Decimation with ppsrc=1 malfunctions

2020-12-27 Thread pdr0
pdr0 wrote
> I don't know if it's the full explanation...
> 
> The way it should work is ppsrc should disable everything else , given if
> input src ,and ppsrc have the exact same timecodes
> 
> 
> In theory, you should get same as -vf decimate on preprocessed.mkv
> 
> ffmpeg -i preprocessed.mkv -vf decimate -c:v libx264 -crf 18 -an
> preprodec.mkv
> 
> gives drops at - 1019,1024,1029...
> 
> 
> 
> The interval is different and there are 2 more frames , 138 vs. 136

ffmpeg -i preprocessed.mkv -vf decimate=scthresh=0 -c:v libx264 -crf 18 -an
preprodec2.mkv -y

gives 138 frames when run alone, but 136 frames when run with 2 inputs and
ppsrc=1.

It's still unexpected behaviour.


Re: [FFmpeg-user] Decimation with ppsrc=1 malfunctions

2020-12-27 Thread pdr0
I don't know if it's the full explanation...

The way it should work is that ppsrc should disable everything else, given
that the input src and the ppsrc input have the exact same timecodes.

In theory, you should get the same result as -vf decimate on
preprocessed.mkv:

ffmpeg -i preprocessed.mkv -vf decimate -c:v libx264 -crf 18 -an
preprodec.mkv

gives drops at 1019, 1024, 1029...

The interval is different and there are 2 more frames: 138 vs. 136.


Re: [FFmpeg-user] Decimation with ppsrc=1 malfunctions

2020-12-27 Thread pdr0
One issue is that scene-change detection is still active. When you crop to a
region of interest, a small change is effectively a larger percentage
change, e.g. the delta between frames 1020 and 1021 is large when the head
is going up. If you disable scene-change detection, you get set intervals;
decimation is no longer adaptive to scene changes.

-filter_complex "[0:0][1:0]decimate='ppsrc=1':scthresh=0"

Drops become regular: 1012, 1017, 1022, 1027, 1032
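The cadence above, reproduced arithmetically (the start frame 1012 is taken from the observed output; the cycle of 5 is decimate's default):

```shell
# fixed-interval drops once scene-change detection is disabled
# prints 1012 1017 1022 1027 1032 (one per line)
for n in 0 1 2 3 4; do echo $((1012 + 5 * n)); done
```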



Other differences: preprocessed.mkv is a lossy version and was encoded
progressively instead of interlaced or MBAFF. There are motion vectors that
cross fields that shouldn't, so you get motion where you shouldn't when you
examine individual fields, i.e. the fields are "muddied" in
preprocessed.mkv.

Also, the chroma encoding is different when you encode progressively; you
can see that the colored edges differ on some fields. You can set chroma=0
to disable the chroma check, or use a properly preprocessed version
(lossless and encoded with MBAFF).

These latter two aren't the explanation in this specific case, but you
should test with the original source or lossless versions.


Re: [FFmpeg-user] PNGs with transparent pixels to GIF

2020-12-21 Thread pdr0
MediaMouth wrote
> Frame disposal makes sense as the culprit
> Turns out I'm on version 4.3.1, which I think is most recent.

The fix in git might not have made it into the release version yet.

This is the old ticket and fix:

https://trac.ffmpeg.org/ticket/7902


Re: [FFmpeg-user] Couple of questions about GIFs

2020-12-20 Thread pdr0
But for web-supported video in an MP4 container, there is no native alpha
channel support (there are workarounds using an HTML5 background canvas and
two videos).

For video, VP9 in WebM does have alpha channel support, 8 or 10 bits per
channel, and is supported by modern browsers with no licensing issues.

But some users have settings that disable embedded videos in the browser,
and some websites restrict the allowed file types (e.g. some forums).

An alternative modern animated image format with fairly wide support is
animated WebP (not limited to 256 colors, supports alpha, has an RGBA
variant, and is supported by modern browsers). But GIF is still the most
widely supported animated image format.


Re: [FFmpeg-user] PNGs with transparent pixels to GIF

2020-12-20 Thread pdr0
MediaMouth wrote
> 
> In this case the "artifact" I was referring to was a piece of the opaque
> image itself that remains on all frames of the GIF even though it does not
> appear in the source PNGs
> 
> I posted the ZIP file of the source PNGs and resulting GIF here
> https://drive.google.com/file/d/1Eu1Qy8LWHrcanwtn84Klwh-k4IZtM4Na/view?usp=sharing;
> if that helps.
> Here's the reference CLI again, used to convert the PNGS to GIF:
>> ffmpeg -i PngSeq/Frame_%05d.png -framerate 12 -filter_complex
>> "palettegen[PG],[0:v][PG]paletteuse" PngSeq.gif

This has to do with GIF frame disposal, and it was fixed a few months ago.
I cannot reproduce it locally. You need to update your ffmpeg.


Re: [FFmpeg-user] PNGs with transparent pixels to GIF

2020-12-20 Thread pdr0
MediaMouth wrote
>> eg.
>> ffmpeg -r 12 -i toile4-4-%d.png -filter_complex
>> "palettegen[PG],[0:v][PG]paletteuse" toile4.gif
>> 
> 
> 
> Following up on the documentation links provided, I wasn't able to work
> out what the details of your "-filter_complex" entries were about

https://ffmpeg.org/ffmpeg.html#Complex-filtergraphs

filter_complex "connects" multiple inputs and/or outputs. The names in
brackets are named input or output "pads". They are similar to variable
names, e.g. [PG] could have been called [label1]. [0:v] means the first
file (numbering starts at zero), video stream; similarly, [0:a] would mean
the first file, audio stream. There is reserved syntax for video and audio
streams; see the documentation on stream specifiers.

The first input pad is omitted because there is only one input into the
filter graph. The output pad is also omitted, because there is only one
choice of output. Written out in full, this means:

-filter_complex "[0:v]palettegen[PG],[0:v][PG]paletteuse[out]" -map "[out]"


> I ran it against a test png seq it did work  -- I got a gif that preserved
> the PNG transparency -- but I got some artifacts.
> 
> Attaching both the PNG seq and the resulting GIF in PngSeq.zip (below)
> (hopefully it posts)
> The command I used was based on your example:
>> ffmpeg -i PngSeq/Frame_%05d.png -framerate 12 -filter_complex
>> "palettegen[PG],[0:v][PG]paletteuse" PngSeq.gif
> 
> Questions:
>   - What caused / how to avoid the artifacts?
>   - What is PG in your example?

I don't see the attachment, but two common artifacts for GIF in general are
quantization artifacts (GIF has at most 256 colors) and an alpha channel
that can look very poor and binarized. Or dithering artifacts; there are
different algorithms for dithering.

"PG" is explained above.


Re: [FFmpeg-user] PNGs with transparent pixels to GIF

2020-12-20 Thread pdr0
FFmpeg-users mailing list wrote
> Hello,
> 
> I'm trying to create a GIF from an image sequence of PNGs with transparent
> pixels, but these transparent pixels convert to black in the resulting
> GIF. I'm using the following command :
> 
> $ ffmpeg -i toile4-4-%d.png -framerate 12 toile4.webm

Did you mean GIF or WebM? The extension in the command line is webm.

For GIF, use palettegen and paletteuse to reserve one color for
transparency:

https://ffmpeg.org/ffmpeg-filters.html#palettegen-1
https://ffmpeg.org/ffmpeg-filters.html#paletteuse

You can combine the two by using -filter_complex, instead of creating an
intermediate palette PNG, e.g.:
ffmpeg -r 12 -i toile4-4-%d.png -filter_complex
"palettegen[PG],[0:v][PG]paletteuse" toile4.gif


Re: [FFmpeg-user] Compression is a lot smaller

2020-08-19 Thread pdr0
Cecil Westerhof-3 wrote
> yuv420p(top first)

Maybe partially related: your camera files are encoded interlaced TFF (the
content might not be), but your command line specifies progressive
encoding. This has implications for the way other programs/players handle
the file if the content is interlaced.

Add:
 -flags +ildct+ilme -x264opts tff=1
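In context, the full command might look like the following (hedged: the filenames and CRF are placeholders; only the interlaced-encoding flags come from the suggestion above). It is echoed rather than run:

```shell
# interlaced TFF encoding with libx264; drop the echo/quotes to run for real
cmd='ffmpeg -i camera.mp4 -c:v libx264 -crf 18 -flags +ildct+ilme -x264opts tff=1 -c:a copy out.mp4'
echo "$cmd"
```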


Re: [FFmpeg-user] Issue with pix_fmt yuvj444p and libx265

2020-07-04 Thread pdr0
Samik Some wrote
> Somewhat related question. Does sws_flags have any effect when 
> converting to yuvj444p color space using scale? (since no actual 
> resizing is needed)

Yes. The sws flags are used to control the RGB-to-YUV conversion: in this
case, full range and BT.709 for the matrix.

When you specify -pix_fmt yuvj444p, a full-range conversion is performed
(because of the "j"), but with a 601 matrix by default (wrong colors for
"HD"). You can try it yourself by omitting -vf scale and just using
-pix_fmt yuvj444p.

The -x265-params colormatrix=bt709 is just a VUI flag; no conversion is
done there. It's just a "label" that other programs and players might read
as "709".


Re: [FFmpeg-user] Issue with pix_fmt yuvj444p and libx265

2020-07-03 Thread pdr0
You can use -x265-params to pass x265 settings, in this case to specify
input-csp i444.

Personally, I prefer to explicitly control the RGB=>YUV conversion with -vf
scale or zscale,

e.g.
ffmpeg -r 24 -loop 1 -i iybW.png -vf
scale=out_color_matrix=bt709:out_range=full,format=yuvj444p -c:v libx265
-crf 18 -x265-params input-csp=i444:colormatrix=bt709 -an -t 00:00:01
yuvj444p.mkv


Re: [FFmpeg-user] libaom - first frame not lossless when > 7 frames in source

2020-06-07 Thread pdr0
Intra-only compression, using -g 1, makes it lossless. Maybe there's a clue
there.

Not sure why there is a discrepancy between aomenc and ffmpeg's libaom-av1.


Re: [FFmpeg-user] libaom - first frame not lossless when > 7 frames in source

2020-06-07 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Could it be the number of cores in your system / is the issue reproducible
> with -threads 1 ?

The issue is still present with -threads 1.

It does not appear to be related to pixel format (e.g. it affects a yuv420p
source test as well).


Re: [FFmpeg-user] libaom - first frame not lossless when > 7 frames in source

2020-06-07 Thread pdr0
Kieran O Leary wrote
> Any idea what's happening? Will I get the libx264-style answer: 'this is
> googles issue, 

I can replicate the ffmpeg issue (with other sources too), but I don't know
what the problem is.

It's not a "Google" issue, because AOM's aomenc.exe works and produces
lossless output correctly.


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Mark Filipak wrote
> 
>> 
>> If you take a soft telecine input, encode it directly to rawvideo or
>> lossless output, you can confirm this.
>> The output is 29.97 (interlaced content) .
>> 
>>> When I do 'telecine=pattern=5', I wind up with this
>>>
>>> |<--1/6s-->|
>>> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
>>>
>>> I have confirmed it by single-frame stepping through test videos.
>> 
>> No.
> 
> The above timing is for an MKV of the 55-telecine transcode, not for the
> decoder's output.

That's telecine=pattern=5 on a 23.976p native progressive source.

I thought this thread was about using a soft telecine source, and how
ffmpeg handles that, because you were making assumptions: "So, if the 'i30,
TFF' from the decoder is correct, the following must be the full picture."

Obviously i30 does not refer to a 23.976p native progressive source...



>> Pattern looks correct, but unless you are doing something differently ,
>> your
>> timescale is not correct
>> 
>> When input is vob, mpeg2-ps or mpeg-es using soft telecine in my test,
>> using
>> telecine=pattern=5 the output frame rate is 74.925 as expected  (2.5 *
>> 29.97
>> = 74.925).
> 
> Not for me. I've seen 74.925 FPS just one time. Since I considered it a
> failure, I didn't save the 
> video and its log, so I don't know how I got it.
> 
>> This mean RF flags are used, 29.97i output from decoder. Since
>> its 74.925fps, the scale in your diagram for 1/6s is wrong for
>> telecine=pattern=5
> 
> For this command line:
> 
> ffmpeg -report -i "1.018.m2ts" -filter_complex 
> "telecine=pattern=5,split=5[A][B][C][D][E],[A]select='eq(mod(n+1\,5)\,1)'[F],[B]select='eq(mod(n+1\,5)\,2)'[G],[C]select='eq(mod(n+1\,5)\,3)'[H],[D]select='eq(mod(n+1\,5)\,4)'[I],[E]select='eq(mod(n+1\,5)\,0)'[J],[F][G][H][I][J]interleave=nb_inputs=5"
>  
> -map 0 -c:v libx264 -crf 20 -codec:a copy -codec:s copy
> "C:\AVOut\1.018.4.MKV"
> 
> MPV playback of '1.018.4.MKV' says "FPS: 59.940 (estimated)" (not
> 74.925fps).

Is that m2ts from a soft-telecine BD? This thread was about soft telecine...

Most film BDs are native progressive 23.976.



>> Both ffplay and mpv look like they ignore the repeat field flags, the
>> preview is progressive 23.976p
> 
> I use MPV. I'm unsure what you mean by "preview". ...and "preview" of
> what? The decoder output or 
> the MKV output video?

The "preview" of the video is what you see when ffplay window opens or mpv
opens. It's a RGB converted representation what you are using as input to
mpv or ffplay . So I'm referring to a soft telecine source, because that's
what you were talking about


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> e.g
>> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
> 
> (Consider to test with other output formats.)

What did you have in mind? E.g.:

ffmpeg -i input.mpeg -c:v utvideo -an output.avi

The output is 29.97, according to ffmpeg and double-checked using the
official Ut Video VFW decoder, with duplicate frames. 3 frames are missing
if the duplicates abide by the RF flags.

e.g.

ffmpeg -i input.mpeg -c:v utvideo -an output.mkv

Output is 29.97, but no duplicate frames. Missing 1 frame.

e.g.

ffmpeg -i input.mpeg -c:v libx264 -crf 18 -an output.mp4

Output is 29.97 with duplicates. Elementary stream analysis confirms the
finding, but 3 frames are missing if the duplicates abide by the RF flags.


ffmpeg -i input.mpeg -c:v libx264 -crf 18 -an output.mkv

Output is 23.976, no duplicates. Missing 1 frame.



e.g.

ffmpeg -i input.mpeg -c:v ffv1 -an output_ffv1.mkv

Output is 29.97, no duplicates. Missing 1 frame.


Looks like there are some container differences too.


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
pdr0 wrote
> If you take a soft telecine input, encode it directly to rawvideo or
> lossless output, you can confirm this. 
> The output is 29.97 (interlaced content) . 


So my earlier post is incorrect.

The output is actually 29.97p with every 5th frame duplicated. The
repeat-field flags are not taken into account.


Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> Am 24.04.2020 um 11:10 schrieb Mark Filipak 

> markfilipak.windows+ffmpeg@

> :
>> 
>> I've been told that, for soft telecined video the decoder is fully
>> compliant and therefore outputs 30fps
> 
> (“fps” is highly ambiguous in this sentence.)
> 
> This is not correct.
> I believe I told you some time ago that this is not how the decoder
> behaves. I believe such a behaviour would not make sense for FFmpeg
> (because you cannot connect FFmpeg’s output to an NTSC CRT). The telecine
> filter would not work at all if above were the case.
> Or in other words: FFmpeg outputs approximately 24 frames per second for
> typical soft-telecined program streams.
> 
> The only thing FFmpeg does to be “compliant” is to forward the correct
> time base.


If you do a direct encode, with no filters and no switches, the output from
soft-telecine input video is 29.97p, where every 5th frame is a duplicate.

e.g.
ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
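A frame-level sketch of why that happens (my own simplification, not ffmpeg's actual code): honoring the repeat-field flags expands every 4 input frames at 23.976p into 5 output frames at 29.97p, so every 5th output frame is a duplicate.

```python
# Sketch (hypothetical, frame-level): soft-telecine repeat-field flags
# expand 4 progressive frames into 5, duplicating every 4th input frame.
def apply_pulldown(frames):
    out = []
    for i, f in enumerate(frames):
        out.append(f)
        if i % 4 == 3:        # every 4th input frame carries a repeat flag
            out.append(f)     # the repeated field yields a duplicate frame
    return out

print(apply_pulldown(["A", "B", "C", "D"]))  # ['A', 'B', 'C', 'D', 'D']
```

The 5/4 expansion is exactly the 23.976 -> 29.97 rate ratio (24000/1001 * 5/4 = 30000/1001).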

But you can "force" it to output 23.976p by using -vf fps.

Is this what you mean by "forward the correct time base" ?




Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread pdr0
Mark Filipak wrote
>> I've been told that, for soft telecined video
>>  the decoder is fully compliant and therefore outputs 30fps
>> I've also been told that the 30fps is interlaced (which I found
>> surprising)
>> Is this correct so far?

Yes

If you take a soft-telecine input and encode it directly to rawvideo or a
lossless output, you can confirm this.
The output is 29.97 (interlaced content).



> When I do 'telecine=pattern=5', I wind up with this
> 
> |<--1/6s-->|
> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
> 
> I have confirmed it by single-frame stepping through test videos.

No.

The pattern looks correct, but unless you are doing something differently,
your timescale is not correct.

When the input is a VOB, MPEG2-PS, or MPEG-ES using soft telecine, in my test
telecine=pattern=5 gives an output frame rate of 74.925, as expected (2.5 *
29.97 = 74.925). This means the RF flags are used, with 29.97i output from the
decoder. Since it's 74.925 fps, the 1/6 s scale in your diagram is wrong for
telecine=pattern=5.
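The arithmetic generalizes. Each digit of the telecine pattern is the number of fields emitted for one input frame, so (assuming that reading of the pattern option) the output frame rate can be sketched as:

```python
# Sketch of the telecine frame-rate arithmetic (my own illustration):
# each pattern digit = fields emitted per input frame, two fields per
# output frame, so:  out_fps = in_fps * sum(digits) / (2 * len(digits))
def telecine_fps(in_fps, pattern):
    digits = [int(d) for d in pattern]
    return in_fps * sum(digits) / (2 * len(digits))

print(round(telecine_fps(29.97, "5"), 3))    # 74.925  (2.5x, as observed)
print(round(telecine_fps(23.976, "23"), 3))  # 29.97   (classic 3:2 pulldown)
```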


Both ffplay and mpv appear to ignore the repeat-field flags; the preview is
progressive 23.976p.




Re: [FFmpeg-user] Codec error when adding color parameter to fade

2020-04-21 Thread pdr0


When appending videos, you usually need to match the specs of the videos,
including dimensions, frame rate, and pixel format.

filtered.mp4 is 640x352, 30fps

intro.mp4 is 1080x608, 59.94fps

My guess is that this is part of the reason. The specs can change midstream in
a transport stream, but some players and decoders might have problems with it.
The MP4 container does not support midstream changes as well as transport
streams do.

The aspect ratio is slightly different between them: 1.818 vs. 1.776. You'd
have to letterbox the filtered version if converting it up; or, if you don't
care about the AR error, just resize it.






Re: [FFmpeg-user] Codec error when adding color parameter to fade

2020-04-21 Thread pdr0
Cemal Direk wrote
>  but  other problem:  iphone is not supporting  to  filter effect on phone
> when im joining(merging) video...
> 
>  ffmpeg -i video.mp4 -filter:v "fade=in:color=white:st=5:d=1,
> fade=out:color=white:st=44:d=1,format=yuv420p"  filtered.mp4
> 
> ffmpeg -i intro.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts temp1.ts
> ffmpeg -i filtered .mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts temp2.ts
> 
> ffmpeg -i "concat:temp1.ts|temp2.ts" -c copy -bsf:a aac_adtstoasc
> output.mp4
> 
> when i am sending  output.mp4  video via  whatsapp. then  android users
> can
> see all part of video but iphone user  cant see filtered affected part.
> iphone user is hearing only audio of filtered video.
> 
> now whats problem?



Do the specs match for all video and audio streams? In the first post you
used -c:a aac and -c:v libx264, but you omitted those arguments this time.

Post the full console output as Carl suggested.






Re: [FFmpeg-user] Codec error when adding color parameter to fade

2020-04-21 Thread pdr0
Cemal Direk wrote
> Hi, im using this code
> 
> "ffmpeg -i video.mp4 -filter:v "fade=t=in:color=white:st=0.5:d=1"
> -filter:a
> "afade=in:st=0:d=1, afade=out:st=44:d=1" -c:v libx264 -c:a aac output.mp4"
> 
> then output is wrong codec. i can not open output.mp4 at any player.
> but i dont give to color=white parameter to command i can open
> output.mp4...
> whats the problem when i adding color parameter to command?(:color=white)
> 
> how can i solve this problem?

I'm assuming you are using "regular" yuv420p input.

The default color for fade is black. When you specify white, the output
pixel format becomes yuv444p for some reason, and libx264 encodes as yuv444p.

As a workaround, you can add format=yuv420p to the filter chain:
-filter:v "fade=t=in:color=white:st=0.5:d=1,format=yuv420p"

I don't know if that is intended behavior, but it's not in the documentation.






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
pdr0 wrote
> As Paul pointed out, interleave works using timestamps , not "frames". If
> you took 2 separate video files, with the same fps, same timestamps, they
> won't interleave correctly in ffmpeg. The example in the documentation
> actually does not work if they had the same timestamps. You would have to
> offset the PTS of one relative to the other for interleave to work
> correctly. 
> 
> If you check the timestamps of each output node, you will see why
> "interleave" is not
> working here, and why it works properly in some other cases .  To get it
> working in this example, you would need [D] to assume [H]'s timestamps,
> because those are where the "gaps" or "holes" are in [C] . It might be
> possible using an expression using setpts

 
Here's your "proof", and why Paul's succinct "perfect" answer was indeed
correct.

The blend filter in your example combined (n-1) with (n). This messed up the
timestamps: if you -map the output node of blend suggested earlier, such as
[D2], they don't match the "holes" (the missing frames) in [C]; i.e., they
are not complementary.
 
I was looking for a -vf setpts expression to fix and offset the timestamps,
or somehow "assume" the [H] branch timestamps, because those are the ones
that are complementary and "fit".

But it turns out to be much easier: the blend filter takes its timestamps
from its first input node. Originally it was [G][H]; [H][G] makes blend take
[H]'s timestamps.
 
(Sorry about the long line; I have a problem with "^" line splitting on
Windows with -filter_complex.)
 
 ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[H][G]blend=all_mode=average,split[D][D2],
[C][D]interleave[out]" -map [out] -c:v libx264 -crf 20 testout2.mkv  -map
[D2] -c:v libx264 -crf 20 testD2pts.mkv  -y 





Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am So., 19. Apr. 2020 um 18:46 Uhr schrieb Mark Filipak
> 

> markfilipak.windows+ffmpeg@

> :
>>
>> On 04/19/2020 12:31 PM, Carl Eugen Hoyos wrote:
>> > Am So., 19. Apr. 2020 um 18:11 Uhr schrieb pdr0 

> pdr0@

> :
>> >
>> >> In his specific situation, he has a single combed frame. What he
>> >> chooses for yadif (or any deinterlacer) results in a different result
>> >> - both wrong - for his case.
>> >
>> >> If the selects "top" he gets an "A" duplicate frame. If he selects
>> >> "bottom" he gets a "B" duplicate frame .
> 
> To clarify: Above does not describe in a useful way how yadif
> operates and how its options can be used. I do understand
> that you can create a command line that makes it appear as if
> this would be the way yadif operates, but to assume that this
> is the normal behaviour that needs some kind of description
> for posterity is completely absurd.
> 
> Or in other words: Induction is not a useful way of showing
> or proving technical properties.


Nobody is saying this is "normal" behaviour for general use. I EXPLICITLY
wrote that this is for application in a very specific scenario.

That's what happens when you cut and edit out the context of a clear
message, or choose to read selectively.





>> > No.
>>
>> No?
>>
>> But I can see the judder. Please, clarify.
>>
>> 55-telecine outputs frames A A A+B B B   ...no judder, 1/24th second comb
>> in 3rd frame.
>> Yadif top outputs judder and no comb ...so I assume that the stream
>> is A A A B B.
>> Yadif bottom outputs judder and no comb  ...so I assume that the stream
>> is A A B B B.
>>
>> My assumptions are based on what I see on the TV during playback and what
>> top &
>> bottom mean. Is any of that wrong? If so, how is it wrong?
>>
>> I apologize for being ignorant. I endeavor to become less ignorant.
> 
> Just a few thoughts:
> 
> There is no "yadif top" and "yadif bottom", rtfm.


"Top" is top field first; "Bottom" is bottom field first. I placed mine in
"quotes". But it's clear what he is trying to communicate.



> No (useful) de-interlacer in FFmpeg duplicates a frame in
> normal operation, the thought that it might do this is
> completely ridiculous.

It does when you use it on progressive content. This is not a "normal"
operation - I explicitly said that. This is progressive content with a
combed frame. It demonstrates your lack of understanding of what is going
on, or that you didn't bother to read the background information before replying.



> yadif uses simplified motion compensation, you cannot combine
> it with select the way you can combine a linear interpolation filter
> (that does no motion compensation) with the select filter. Please
> avoid reporting it as a bug that this is not documented: We cannot
> document every single theoretical use case (your use case is 100%
> theoretical), we instead want to keep the documentation readable.

Nobody is saying this is a bug. This is the expected behaviour in this
specific situation.




Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am So., 19. Apr. 2020 um 16:31 Uhr schrieb pdr0 

> pdr0@

> :
>>
>> Carl Eugen Hoyos-2 wrote
>> > Am 19.04.2020 um 08:08 schrieb pdr0
> 
>> >> Other types of typical single rate deinterlacing (such as yadif) will
>> >> force you to choose the top or bottom field
>> >
>> > As already explained: This is not true.
>>
>> How so?  Assuming you're actually applying it (all frames), or deint=1
>> and the frame is marked as interlaced in ffmpeg parlance:
>>
>> If you use mode=0  (single rate), either it auto selects the parity, or
>> you explicitly set it top or bottom
> 
> Yes, but this does not imply (in any way) "choose the top or bottom
> field".
> 

I agree - it's a poor choice of words if you take it out of context and cut
out all the background information.

Already explained

> It's a very specific scenario - He needs to keep that combed frame, as a
> single frame to retain the pattern. Single rate deinterlacing by any
> method
> will cause you to choose either the top field or bottom field, resulting
> in
> a duplicate frame or the prior or next frame - and it's counterproductive
> for what he wanted (blend deinterlacing to keep both fields as a single
> frame) 


In his specific situation, he has a single combed frame. Whatever he chooses
for yadif (or any deinterlacer) gives a different result - both wrong - for
his case. If he selects "top" he gets an "A" duplicate frame; if he selects
"bottom" he gets a "B" duplicate frame.

Do you need further explanation?








Re: [FFmpeg-user] decomb versus deinterlace

2020-04-19 Thread pdr0
Mark Filipak wrote
>> The result of telecine is progressive content (you started with
>> progressive
>> content) , but the output signal is interlaced. 
> 
> According to the Motion Pictures Experts Group, it's not interlaced
> because the odd/even lines are
> not separated by 1/fieldrate seconds; they are separated by 1/24 sec. 

1/field-rate seconds in the case of hard telecine. That's the reason for
telecine. If you take a broadcast on a 1080i station, everything is
transmitted like that; native "23.976p" is not supported. If you watch a
23.976p movie on a 1080i29.97 station, it's hard telecined. That's why it's
called 29.97i (not p): it's the transmission format. It's "23.976p in
29.97i".

In the case of soft telecine, it's a unique exception: it only applies to DVD,
and it depends on the hardware. The flags produce a 29.97i signal. If you have
a flag-reading player, it's 29.97i; that's the signal sent to the TV, and your
TV IVTCs it. If you have a cadence-reading player, it treats it as a 23.976p
signal sent over HDMI.



>> That refers to interlaced content. This does not refer to your specific
>> case. You don't have interlaced content. Do you see the distinction ?
> 
> Well, of course, but why are we (you) discussing interlaced content? 

You were talking about decomb vs. deinterlacing; that's what the thread title
says. Logically it follows that you should talk about interlaced content.
In general, you only deinterlace interlaced content... how can you NOT talk
about it? Notice everything in my prior posts in this particular thread was
in the general sense, with other examples like 16mm and camcorders; no
reference was even made to your specific case. Or, if you assume your specific
case, why are you talking about deinterlacing? It needs to be referred to in
order to contrast against IVTC and weave.

Anyway, carry on...





>> The point is, visually, when looking at individual
>> frames, combing looks the same, the mechanism is the same.
> 
> (see above).
> 
>> The underlying
>> video can be different in terms of content (it might be interlaced, it
>> might
>> be progressive) , but you can't determine that on a single frame
> 
> Sure I can. If I put 1/60th second of comb next to 1/24th second of comb,
> I think anyone would see
> that the 1/24th second frame looks worse. 

Yes, in terms of degree of combing

But in terms of what the underlying video content actually is, there is no
way you can tell from 1 frame (2 fields) whether it was originally 23.976p
video or anything else. You have to examine the adjacent fields next to
that frame as well. If the prior and next frames each have 2 different
fields, for 4 different times represented, it's interlaced content. That's
also why it should be called "combing", not an "interlaced frame" or
"interlaced content frame".





>> What kind of motion compensation did you have in mind?
> ... [show rest of quote]
> 
> Hahahaha... ANY motion compensation. ...Is there more than one?

I want to understand what you mean by "motion compensation", in general
terms. What do you expect to happen, or what do you want it to look like? If
there is no motion, then what? If there is large motion, then what?
There are programs other than ffmpeg that can do this; I'm trying to see
if it's possible in ffmpeg.




>> "recurse" works if your timestamps are correct.
> 
> Carl Eugen said that, too. How could the timestamps be wrong? I was using
> the test video ("MOVE"
> "TEXT") that you gave me. I proved that, for that video (which I assumed
> was perfect), ffmpeg
> traversal of the filter complex does not occur. 

Did you look at the D2 output in my earlier post in that thread ("testD2.mkv")?
If you were correct, there should be no blending in D2. There is blending in
D2...

The blend filter changed the timestamps. You blended it with n-1
(select='eq(mod(n+1\,5)\,2)').

Split each intermediate node and check the output and timestamps, e.g. C2 and
H2 as well. C2 and H2 are the "working" proper-timestamp branches (without
the n-1); their timestamps are complementary. D (or D2) is off and does not
fit, so interleave does not work as expected.
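To sketch the point (my own illustration, not ffmpeg internals): interleave merges its inputs by timestamp, so it only reconstructs the full stream when the branches' timestamps are complementary - one branch fills the other's holes.

```python
# Sketch: merging branches by PTS, the way interleave conceptually works.
# Each branch is a sorted list of (pts, frame) tuples.
import heapq

def interleave(*branches):
    return [frame for _pts, frame in heapq.merge(*branches)]

c = [(0, "A"), (1, "A"), (3, "B"), (4, "B")]   # branch with a hole at pts 2
h = [(2, "A+B")]                               # complementary branch
print(interleave(c, h))  # ['A', 'A', 'A+B', 'B', 'B']
```

If the blended branch carried pts 1 instead of pts 2 (as after blending with n-1), the merged result would have a clash at pts 1 and still a hole at pts 2, which is the failure described above.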










Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Carl Eugen Hoyos-2 wrote
>> Am 19.04.2020 um 08:08 schrieb pdr0 

> pdr0@

> :
>> 
>> Other types of typical single rate deinterlacing (such as yadif) will
>> force
>> you to choose the top or bottom field
> 
> As already explained: This is not true.

How so? Assuming you're actually applying it (to all frames), or deint=1 and
the frame is marked as interlaced in ffmpeg parlance:

If you use mode=0 (single rate), either it auto-selects the parity, or you
explicitly set it to top or bottom.

If you use mode=1 (double rate), it's a non-issue because both fields are
retained.




Re: [FFmpeg-user] decomb versus deinterlace

2020-04-19 Thread pdr0
Mark Filipak wrote
> 
> I would love to use motion compensation but I can't, at least not with
> ffmpeg. Now, if there was 
> such a thing as smart telecine...
> 
> A A A+B B B -- input
> A A B B -- pass 4 frames directly to output
>A A+B B   -- pass 3 frames to filter
>   X  -- motion compensated filter to output
> 
> Unfortunately, ffmpeg can't do that because ffmpeg does not recurse the
> filter complex. That is, 
> ffmpeg can pass the 2nd & 4th frames to the output, or to the filter, but
> not both.

What kind of motion compensation did you have in mind?

"Recurse" works if your timestamps are correct. Interleave works with
timestamps: if "X" has the timestamp of position 3 (A+B, the original
combed frame), interleave will slot it into that position.


>> Double rate deinterlacing keeps all the temporal information. Recall what
>> "interlace content" really means. It's 59.94 distinct moments in time
>> captured per second . In motion you have 59.94 different images.
> 
> That may be true in general, but the result of 55-telecine is
> A A A+B B B ... repeat to end-of-stream
> So there aren't 59.94 distinct moments. There're only 24 distinct moments
> (the same as the input).

Exactly!

That refers to interlaced content. It does not refer to your specific case;
you don't have interlaced content. Do you see the distinction? You have
23.976 distinct moments/sec, just "packaged" differently with repeat fields.
That's progressive content. Interlaced content means 59.94 distinct
moments/sec.


>> Single rate deinterlacing drops 1/2 the temporal information (either
>> even,
>> or odd fields are retained)
>> 
>> single rate deinterlace: 29.97i interlaced content => 29.97p output
>> double rate deinterlace: 29.97i interlaced content => 59.94p output
> 
> There is no 29.97i interlaced content. There's p24 content (the source)
> and p60 content that combs 
> frames 2 7 12 17 etc. (the transcode).

I thought it was obvious, but those comments do not refer to your specific
side case. You don't have 29.97i interlaced content.

You apply deinterlacing to interlaced content. You don't deinterlace
progressive content (in general).

29.97i interlaced content is things like soap operas, sports, some types of
home video, and documentaries.







> Once again, this is a terminology problem. You are one person who
> acknowledges that terminology 
> problems exist, and that I find heartening. Here's how I have resolved it:
> I call a 30fps stream that's 23-telecined from p24, "t30" -- "t" for
> "telecine".
> I call a 30fps stream that's 1/fieldrate interlaced, "s30" -- "s" for
> "scan" (see Note).
> I call a 60fps stream that's 23-telecined, then frame doubled, "t30x2".
> I call a 60fps stream that's 55-telecined from p24, "t60".
> Note: I would have called this "i30", but "i30" is already taken.
> 
> Now, the reason I write "p24" instead of "24p" -- I'm not the only person
> who does this -- is so it 
> fits an overall scheme that's compact, but that an ordinary human being
> can pretty much understand:
> 16:9-480p24 -- this is soft telecine
> 4:3-480t30 -- this is hard telecine
> 16:9-1080p24
> I'm not listing all the possible combinations of aspect ratio & line count
> & frame rate, but you 
> probably get the idea.

It's descriptive, but the problem is that not very many people use this
terminology. NLEs, professional programs, broadcast stations, and
post-production houses do not use this notation. E.g. "480p24" anywhere else
would mean native progressive, such as web video. Some places use pN to denote
native progressive. So you're going to have communication problems... I
would write out the full sentence.



> Erm... I'm not analyzing a mystery video. I'm transcoding from a known
> source. I know what the 
> content actually is.

I thought it was obvious: those comments refer to "in general", not your
specific case. More communication issues...


>> There are dozens of processing algorithms (not just talking about
>> ffmpeg).
>> There are many ways to "decomb"  something . The one you ended up using
>> is
>> categorized as  a blend deinterlacer because the top and bottom field are
>> blended with each other. If you examine the separated fields , the fields
>> are co-mingled, no longer distinct. You needed to retain both fields for
>> your purpose
> 
> No, I don't. I don't want to retain both fields. I want to blend them.
> That's what 
> 'pp=linblenddeint' does, and that's why I'm happy with it.

Yes, that should have said retain both fields, blended. The alternative is
dropping a field, like a standard deinterlacer.



>> There is no distinction in terms of distribution of application for this
>> type of filter.  You put the distinction on filtering specific frames by
>> using select.  You could apply blend deinterlace to every frame too (for
>> interlaced content) - how is that any different visually in terms of any
>> single frame there vs. your every 5th frame ?
> 
> I honestly don't know. What I do know is if I pass
> 

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-19 Thread pdr0
Mark Filipak wrote
> My experience is that regarding "decombing" frames 2 7 12 17 ..., 
> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.
> 
> "lb/linblenddeint
> "Linear blend deinterlacing filter that deinterlaces the given block by
> filtering all lines with a 
> (1 2 1) filter."
> 
> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1"
> refers. pdr0 recommended it 
> and I found that it works better than any of the other deinterlace
> filters. Without pdr0's help, I 
> would never have tried it.


[1,2,1] refers to a vertical convolution kernel in image processing. It
operates on a "block" of 1 horizontal by 3 vertical pixels. The numbers are
the weights; the center "2" refers to the current pixel. Pixel values are
multiplied by the weights and summed, and that sum is the output value.
Some implementations also apply a normalization step; you'd have to look at
the actual code to check. But the net effect is that each line is blended
with its neighbors above and below.
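A minimal sketch of that kernel (my own illustration, assuming /4 normalization; the actual postproc code may differ in edge handling):

```python
# Sketch of the [1,2,1] vertical blend: each output line is
# (above + 2*current + below) / 4, which mixes the two fields.
def linblend(lines):
    out = []
    for y in range(len(lines)):
        above = lines[max(y - 1, 0)]               # clamp at top edge
        below = lines[min(y + 1, len(lines) - 1)]  # clamp at bottom edge
        out.append((above + 2 * lines[y] + below) // 4)
    return out

# Two interleaved fields: even lines = 100 (field 1), odd lines = 200 (field 2)
print(linblend([100, 200, 100, 200]))  # [125, 150, 150, 175]
```

Note how the interior output lines land between the two field values: that is the field mixing (and the "ghosting") described below.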

In general, it's frowned upon, because you get "ghosting" or a double image:
one frame is now a mix of 2 different times, instead of distinct frames. But
you need this property in this specific case to retain the pattern, in terms
of reducing the judder. Other types of typical single-rate deinterlacing
(such as yadif) will force you to choose the top or bottom field, and you
will get a duplicate of the frame before or after, ruining your pattern.
Double-rate deinterlacing will introduce 2 frames in that spot, ruining your
pattern. Those work against your reason for doing this - anti-judder.



> To me, deinterlace just means weaving the odd & even lines. To me, a frame
> that is already woven 
> doesn't need deinterlacing. I know that the deinterlace filters do
> additional processing, but none 
> of them go into sufficient detail for me to know, in advance, what they
> do.


Weave means both intact fields are combined into a frame - basically, do
nothing. That's how video is commonly stored.

If it's progressive video, yes, it doesn't need deinterlacing. By definition,
progressive means both fields are from the same time and belong to the same
frame.

Deinterlace means separating the fields and resizing them to full-dimension
frames by whatever algorithm, +/- additional processing.

Single-rate deinterlace means half the fields are discarded (29.97i =>
29.97p); either odd or even fields are kept. Double-rate deinterlacing
means all fields are kept (29.97i => 59.94p). The "rate" refers to the
output rate.
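A toy illustration of weave and field separation (lines as list items; my own sketch, not ffmpeg code):

```python
# Sketch: a stored ("woven") frame holds two fields as alternating lines.
# Separating them is the first step of any deinterlacer; single-rate keeps
# one field per frame, double-rate keeps both.
def separate_fields(frame):
    top = frame[0::2]      # even lines = top field
    bottom = frame[1::2]   # odd lines = bottom field
    return top, bottom

frame = ["t0", "b0", "t1", "b1"]   # weave of top (t) and bottom (b) fields
top, bottom = separate_fields(frame)
print(top, bottom)  # ['t0', 't1'] ['b0', 'b1']
```

Each half-height field would then be resized back to a full frame, which is the "resizing" step discussed in the next posts.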






Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread pdr0
Mark Filipak wrote
>> Deinterlacing does not necessarily have to be used in the context of
>> "telecast".  e.g. a consumer camcorder recording home video interlaced
>> content is technically not "telecast".  Telecast implies "broadcast on
>> television"
> 
> You are right of course. I use "telecast" (e.g., i30-telecast) simply to
> distinguish the origin of 
> scans from hard telecines. Can you suggest a better term? Perhaps
> "i30-camera" versus "i30"? Or 
> maybe the better approach would be to distinguish hard telecine: "i30"
> versus "i30-progressive"? Or 
> maybe distinguish both of them: "i30-camera" versus "i30-progressive"?


Some home video cameras can shoot native progressive modes too - 24p,
23.976p. Some DV cameras shoot 24p with advanced or standard pulldown.

So why not use a descriptive term for what it actually is in terms of content,
and how it's arranged or stored? (see below)



>> The simplest operational definition is double rate deinterlacing
>> separates
>> and resizes each field to a frame +/- other processing. Single rate
>> deinterlacing does the same as double, but discards either even or odd
>> frames (or fields if they are discarded before the resize)
> 
> I think I understand your reference to "resize": line-doubling of
> half-height images to full-height 
> images, right?

"Resizing" a field in this context is any method of taking a field and
enlarging it to a full-sized frame. There are dozens of different
algorithms. Line doubling is one method, but that is essentially a "nearest
neighbor" resize without any interpolation - the simplest type. Some
complex deinterlacers use information from other fields to fill in the
missing information with adaptive motion compensation.


> But I don't understand how "double rate" fits in. Seems to me that fields
> have to be converted 
> (resized) to frames no matter what the "rate" is. I also don't understand
> why either rate or 
> double-rate would discard anything.

The "rate" describes the output frame rate. 

Double-rate deinterlacing keeps all the temporal information. Recall what
"interlaced content" really means: it's 59.94 distinct moments in time
captured per second. In motion, you have 59.94 different images.

Single rate deinterlacing drops 1/2 the temporal information (either even,
or odd fields are retained)

single rate deinterlace: 29.97i interlaced content => 29.97p output
double rate deinterlace: 29.97i interlaced content => 59.94p output


>> I know you meant telecine up conversion of 23.976p to 29.97i (not "p").
>> But
>> other framerates can be telecined eg. An 8mm 16fps telecine to 29.97i.
> 
> Well, when I've telecined, the result is p30, not i30. Due to the presence
> of ffmpeg police, I 
> hesitate to write that ffmpeg outputs only frames -- that is certainly
> true of HandBrake, though. 
> When I refer to 24fps and 30fps (and 60fps, too) I include 24/1.001 and
> 30/1.001 (and 60/1.001) 
> without explicitly writing it. Most ordinary people (and most BD & DVD
> packaging) don't mention or 
> know about "/1.001".


The result of telecine is progressive content (you started with progressive
content), but the output signal is interlaced. That's the reason for
telecine in the first place: that 29.97i signal is required for equipment
compatibility. So it's commonly denoted 29.97i. That can be confusing
because interlaced content is also 29.97i; that's why /content/ is used to
describe everything.

When I'm lazy I use the 23.976p notation (but it really means 24000/1001),
because 24.0p is something else - for example, there are both 24.0p and
23.976p Blu-rays and they are different frame rates. Similarly, I use
"29.97" (but it really means 30000/1001), because "30.0" is something else.
You can have cameras or web video at 30.0p. Both exist, are different, and
should be differentiated; otherwise you get timing and sync issues.


>> "Combing" is just a generic, non-specific visual description. There can
>> be
>> other causes for "combing". eg. A warped film scan that causes spatial
>> field
>> misalignment can look like "combing". Interlaced content in motion , when
>> viewed on a progressive display without processing is also described as
>> "combing" - it's the same underlying mechanism of upper and lower field
>> taken at different points in time
> 
> Again, good points. May I suggest that when I use "combing" I mean the
> frame content that results 
> from a 1/24th second temporal difference between the odd lines of a
> progressive image and the even 
> line of the same progressive image that results from telecine? If there's
> a better term, I'll use 
> that better term. Do you know of a better term?

I know what you're trying to say, but the term "combing", its appearance,
and the underlying mechanism are the same. This is how the term "combing" is
currently used by both the general public and industry professionals. If you
specifically mean combing on frames from telecine, then you should say so,

Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread pdr0
Mark Filipak wrote
> Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30
> (or p25) and, optionally, 
> smoothing the resulting p30 (or p25) frames.

That is the description of single-rate deinterlacing. But that is not what
a flat-panel TV does with interlaced content or a "telecast" - it double-rate
deinterlaces to 50p (50 Hz regions) or 59.94p (60 Hz regions). The distinction
is important to mention; one method discards half the temporal information
and motion is not as smooth.

Deinterlacing does not necessarily have to be used in the context of
"telecast". E.g. a consumer camcorder recording interlaced home video
content is technically not "telecast". Telecast implies "broadcast on
television".

The simplest operational definition: double rate deinterlacing separates
and resizes each field to a frame, +/- other processing. Single rate
deinterlacing does the same as double, but discards either the even or the
odd frames (or fields, if they are discarded before the resize).


> Combing is fields that are temporally offset by 1/24th second (or 1/25th
> second) resulting from 
> telecine up-conversion of p24 to p30 (or p25).

I know you meant telecine up-conversion of 23.976p to 29.97i (not "p"). But
other frame rates can be telecined, e.g. an 8mm 16fps telecine to 29.97i.

"Combing" is just a generic, non-specific visual description. There can be
other causes for "combing", e.g. a warped film scan that causes spatial field
misalignment can look like "combing". Interlaced content in motion, when
viewed on a progressive display without processing, is also described as
"combing" - it's the same underlying mechanism of upper and lower fields
taken at different points in time.


> Decombing is smoothing combed frames. 

Yes, but this is an ambiguous term. "Decombing" can imply anything from
various methods of deinterlacing to inverse telecine / removing pulldown.




> It seems to me that some people call combing that results from telecine,
> interlace. Though they are 
> superficially similar, they're different.

Yes, it's more appropriately called "combing".

When writing your book, I suggest mentioning field matching and decimation
(inverse telecine, removing pulldown) in contrast to deinterlacing.

I recommend describing the content. That's the key distinguishing factor
that determines what you have, in terms of interlaced content vs. progressive
content that has been telecined.









--
Sent from: http://www.ffmpeg-archive.org/
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 19:27 Uhr schrieb pdr0 

> pdr0@

> :
>>
>> Carl Eugen Hoyos-2 wrote
>> > Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
>> > 
>>
>> > markfilipak.windows+ffmpeg@
>>
>> > :
>> >
>> >> I'm not using the 46 telecine anymore because you introduced me to
>> >> 'pp=linblenddeint'
>> >> -- thanks again! -- which allowed me to decomb via the 55 telecine.
>> >
>> > Why do you think that pp is a better de-interlacer than yadif?
>> > (On hardware younger that's not more than ten years old.)
>>
>> It's not a question of "better" in his case.
>>
>> It's a very specific scenario - He needs to keep that combed frame, as a
>> single frame to retain the pattern.
> 
> I know, while I agree with all other developers that this is useless,
> I have explained how it can be done.

I dislike it too, but that's just an opinion. He's asking a technical
question - that deserves a technical answer.


>> Single rate deinterlacing by any method
>> will cause you to choose either the top field or bottom field, resulting
>> in
>> a duplicate frame or the prior or next frame - and it's counterproductive
>> for what he wanted (blend deinterlacing to keep both fields as a single
>> frame)
> 
> (To the best of my knowledge, this is technically simply not true.)
> 
> yadif by default does not change the number of frames.
> (Or in other words: It works just like the pp algorithms, only better)

Most deinterlacers have 2 modes, single and double rate. For example, yadif
has mode=0 or mode=1. E.g. if you started with a 29.97 interlaced source,
you will get 29.97p in single rate, 59.94p in double rate. Double rate is
more "proper" for interlaced content. Single rate discards half the temporal
information.

In general, blend deinterlacing is terrible, the worst type of
deinterlacing, but he "needs" it for his specific scenario. The "quality"
of yadif is quite low, with deinterlacing and aliasing artifacts. bwdif is
slightly better, and there are more complex deinterlacers not offered by
ffmpeg.






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
> 

> markfilipak.windows+ffmpeg@

> :
> 
>> I'm not using the 46 telecine anymore because you introduced me to
>> 'pp=linblenddeint'
>> -- thanks again! -- which allowed me to decomb via the 55 telecine.
> 
> Why do you think that pp is a better de-interlacer than yadif?
> (On hardware younger that's not more than ten years old.)

It's not a question of "better" in his case. 

It's a very specific scenario - He needs to keep that combed frame, as a
single frame to retain the pattern. Single rate deinterlacing by any method
will cause you to choose either the top field or bottom field, resulting in
a duplicate frame or the prior or next frame - and it's counterproductive
for what he wanted (blend deinterlacing to keep both fields as a single
frame)







Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Paul B Mahol wrote
> On 4/18/20, pdr0 

> pdr0@

>  wrote:
>> Mark Filipak wrote
>>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>>> working because it is working
>>> for me.
>>
>>
>> Interleave works correctly in terms of timestamps
>>
>> Unless I'm misunderstanding the point of this thread, your "recursion
>> issue"
>> can be explained from how  interleave works
>>
>>
> 
> He is just genuine troller, and do not know better, I propose you just
> ignore his troll attempts.


I do not believe so. He is truly interested in how ffmpeg works. 

Your prior comment about interleave and timestamps was succinct and perfect
- but I can see why it would be "cryptic" for many users. If someone
claims that comment is "irrelevant", then they are not "seeing" what you
see. It deserves to be expanded upon; if not for him, then do it for other
people who search for information.

There are different types of people, different learning styles, and
different ways of seeing things. Teach other people what you know to be
true. Explain in different words if they don't get it. A bit of tolerance
now, especially in today's crappy world, goes a long way.






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread pdr0
Mark Filipak wrote
> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
> working because it is working 
> for me.


Interleave works correctly in terms of timestamps

Unless I'm misunderstanding the point of this thread, your "recursion issue"
can be explained from how  interleave works





Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread pdr0
Paul B Mahol wrote
> Interleave filter use frame pts/timestamps for picking frames.


I think Paul is correct. 

@Mark -
Everything in the filter chain works as expected, except interleave in this
case.

You can test and verify the output of each node in a filter graph
individually, by splitting and using -map.

D2 below demonstrates that the output of blend is working properly, and
this also implies that [G],[H] were correct, but you could split and -map
them too to double-check.

 ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=46,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]split[E][F],[E]select='eq(mod(n+1\,5)\,2)',datascope=size=1920x1080:x=45:y=340:mode=color2[G],[F]select='eq(mod(n+1\,5)\,3)'[H],[G][H]blend=all_mode=average,split[D][D2],[C][D]interleave[out]"
-map [out] -c:v libx264 -crf 20 testout.mkv  -map [D2] -c:v libx264 -crf 20
testD2.mkv  -y

As Paul pointed out, interleave works using timestamps, not "frames". If
you took 2 separate video files with the same fps and the same timestamps,
they won't interleave correctly in ffmpeg. The example in the documentation
actually would not work if the inputs had the same timestamps. You would
have to offset the PTS of one relative to the other for interleave to work
correctly.

If you check the timestamps of each output node, you will see why it's not
working here, and why it works properly in some other cases. To get it
working in this example, you would need [D] to assume [H]'s timestamps,
because those are where the "gaps" or "holes" are in [C]. It might be
possible using an expression with setpts.
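The timestamp-driven picking can be sketched with a toy model (an
assumption-level illustration of the behavior described above, not ffmpeg
source code):

```python
# Model: interleave emits frames in ascending timestamp order across
# all of its inputs. Frames are (pts, label) tuples.
def interleave(*streams):
    return sorted((f for s in streams for f in s), key=lambda f: f[0])

# [C] has "holes" at the timestamps of the frames routed to [D]; if [D]
# carries those timestamps, the merge fills the gaps:
c = [(0, "C0"), (1, "C1"), (3, "C3"), (4, "C4")]
d = [(2, "D2")]
print(interleave(c, d))   # D2 lands at pts 2, between C1 and C3

# Two streams with identical timestamps do not alternate usefully -
# one needs a PTS offset (e.g. via setpts) first:
e = [(0, "E0"), (1, "E1")]
f = [(0, "F0"), (1, "F1")]
print(interleave(e, f))   # collisions at pts 0 and pts 1
```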






Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-16 Thread pdr0
Mark Filipak wrote
> By the way, 'interleave' not recognizing end-of-stream (or 'select' not
> generating end-of-stream, 
> whichever the cause) isn't a big deal as I'll be queuing up transcodes --
> as many as I can -- to run 
> overnight.


But it would be nice to find some way to terminate, especially if you were
batching. The "hang" limits what you can do if you were processing
sequentially overnight.

If you have a known frame count, you can enter "-frames:v whatever" to send
the end signal. In that example, -frames:v 600. But it hangs prior to the
end, at 598. One way might be to duplicate the last frame, or add one with
-vf tpad, then cut it off with -frames:v.

If you add tpad=stop=1 to the end of each selection, the end result is that
the last 2 frames are duplicates, but at least it terminates:
ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))',tpad=stop=1[C],[B]select='eq(mod(n+1\,5)\,3)',pp=lb,tpad=stop=1[D],[C][D]interleave"
-c:v libx264 -crf 18 -frames:v 600 -an out_tpad.mkv

But another issue is "automatically" getting the proper frame count and
entering that telecine=5 output frame count into the -frames:v field (it
should be 2.5x the original frame count). It should be possible to use
ffprobe to parse the frame count and plug that variable into ffmpeg, but
it's beyond my batch scripting knowledge.






Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-16 Thread pdr0

To overcome a problem, I'm trying to understand the propagation of frames in
a filter complex.



The behavior is as though because frame n+1==1 can take the [A][C] path, it
does take it & that 
leaves nothing left to also take the [B][D][F] path, so blend never outputs.

I've used 'datascope' in various parts of the filter graph in an attempt to
confirm this on my own. 
It's difficult because my test video doesn't display frame #s.

If that indeed is the behavior, then ...

I need a way to duplicate a frame, # n+1%5==1 in this case, so that the
'blend' operates.




This doesn't answer the question in this thread directly, but was the idea
still to selectively blend-deinterlace the combed frame from telecine?

If the problem is getting the blend to work on the target, another way is to
use -vf pp=lb with your selection sets.

ffmpeg -i 23.976p_framenumber.mp4 -filter_complex
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]select='eq(mod(n+1\,5)\,3)',pp=lb[D],[C][D]interleave"
-c:v libx264 -crf 18 -an out_pp_lb.mkv
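The two select expressions in that command partition the telecine=5 output
by position in the 5-frame cycle; which indices go where can be sketched
(a model of the expressions, not ffmpeg code):

```python
# Frame n is routed to [D] (the pp=lb branch) when (n+1) % 5 == 3 -
# the combed frame in this pattern - and to [C] otherwise.
def split_selection(total_frames):
    c = [n for n in range(total_frames) if (n + 1) % 5 != 3]  # untouched
    d = [n for n in range(total_frames) if (n + 1) % 5 == 3]  # combed
    return c, d

c, d = split_selection(10)   # the first two 5-frame cycles
print(d)        # [2, 7] - the 3rd frame of each cycle gets pp=lb
print(len(c))   # 8 frames pass through unmodified
```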

Here are 2 frame-number labelled versions for testing.
"23.976p_framenumber.mp4" has the original frame numbers, 240 frames.
"telecine5_framenumbers.mp4" is the output of telecine=5 only, with new
frame numbers put on afterwards (beware: this version has 600 frames,
because no select was used with interleave).
https://www.mediafire.com/file/qzwds9bog77wqke/23.976p_testing.zip/file






Re: [FFmpeg-user] Decombing via screening - 'tblend' bug (?)

2020-04-15 Thread pdr0




This:

ffmpeg -i IN -filter_complex 
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]select='eq(mod(n+1\,5)\,3)',split[E][F],[E][F]blend[D],[C][D]interleave"
 
OUT

outputs 598 frames. 'blend' outputs as expected.

This:

ffmpeg -i IN -filter_complex 
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]select='eq(mod(n+1\,5)\,3)',separatefields,scale=height=2*in_h:sws_flags=neighbor,setsar=1,tblend[D],[C][D]interleave"
 
OUT

outputs 716 frames. 'tblend' (documented in the same article) outputs extra
frames.

Now, does that look consistent to you?

But of course, since I can't read, and I'm always wrong...

___


You would expect a 719 frame output if you started with a 240 frame,
23.976p clip.

tblend changes the frame count by -1. Default mode doesn't appear to do
anything except drop the original frame zero. When you use all_mode=average,
it blends adjacent frames: the new frame zero becomes a 50/50 mix of old
frames 0 and 1, new frame one becomes a 50/50 mix of old frames 1 and 2.
You can test the filter by itself to verify this.

When you separate fields, you have 2 times the number of original frames.
If you resize them to full height and treat them as frames, you still have
2 times the number of frames in that selection set.

If you started with a 240 frame clip, you should end up with a 600 frame
clip after the telecine filter alone with those settings.

If you take every 5th frame from the telecine output: 600/5 = 120.
Separating fields gives you 120*2 = 240. Applying tblend=all_mode=average
after that gives you 240-1 = 239 frames. This is [D].

[C] is the other frame selection set: 600-120 = 480.

Interleaving selection [C] with [D] should give you (600-120) + (240-1) =
719.
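The frame accounting, written out (assumes the same 240 frame source):

```python
src = 240
telecined = src * 5 // 2            # 600 after telecine=pattern=5
d_selected = telecined // 5         # 120: every 5th frame goes to [D]
d_fields = d_selected * 2           # 240 after separatefields + scale
d_final = d_fields - 1              # 239: tblend drops one frame
c_final = telecined - d_selected    # 480: the other selection set [C]
print(c_final + d_final)            # 719 expected, vs the 716 observed
```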

The 716 might be from interleave dropping frames at the end when it hangs.
Expect it to hang, because there is no end-of-stream signal when used with
select.

https://ffmpeg.org/ffmpeg-filters.html#interleave_002c-ainterleave





Re: [FFmpeg-user] How does ffmpeg handle chroma when transcoding video?

2020-04-07 Thread pdr0
jake9wi wrote
> When I transcode a video (say from x264 to vp9 without scaling) does
> ffmpeg up-scale the chroma to 4:4:4 for processing or does it leave it at
> 4:2:0?


It does not change the chroma unless you tell it to change it, or some
filter or output format requires a change.


