This:
ffmpeg -i IN -filter_complex
"telecine=pattern=5,split[A][B],[A]select='not(eq(mod(n+1\,5)\,3))'[C],[B]select='eq(mod(n+1\,5)\,3)',split[E][F],[E][F]blend[D],[C][D]interleave"
OUT
outputs 598 frames. 'blend' outputs as expected.
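The frame routing in that graph can be modeled outside ffmpeg. This is a toy sketch (not ffmpeg itself, and `branch` is a hypothetical helper) of which telecined frame numbers n the two select branches pick up:

```python
# Toy model of the two select expressions in the filtergraph above.
# select='eq(mod(n+1,5),3)' sends a frame to the blend branch when
# (n + 1) % 5 == 3; the other branch takes the complement.
def branch(n):
    return "blend" if (n + 1) % 5 == 3 else "pass"

# over the first 10 telecined frames, n = 2 and n = 7 go to blend
routes = [branch(n) for n in range(10)]
```

Under this model, 1 frame in every 5 reaches the blend branch, which matches the 598-frame output being roughly 4/5 pass + 1/5 blend.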
This:
ffmpeg -i IN -filter_complex
To overcome a problem, I'm trying to understand the propagation of frames in
a filter complex.
The behavior is as though, because frame n+1==1 can take the [A][C] path, it
does take it, and that leaves nothing left to also take the [B][D][F] path,
so blend never outputs.
I've used 'datascope' in
Paul B Mahol wrote
> Interleave filter use frame pts/timestamps for picking frames.
I think Paul is correct.
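A timestamp-driven merge can be sketched in a few lines. This is a simplified model of the idea (not the actual interleave filter, which also has its own EOF and queueing behavior): it always emits whichever input has the smallest PTS next, so alternation depends entirely on the timestamps, not on frame counts:

```python
# Toy model of timestamp-based interleaving: pop the input whose next
# frame has the smallest PTS. Ties go to input A here; the real filter's
# tie-breaking and EOF handling differ.
def interleave(a_pts, b_pts):
    out = []
    a, b = list(a_pts), list(b_pts)
    while a and b:
        if a[0] <= b[0]:
            out.append(("A", a.pop(0)))
        else:
            out.append(("B", b.pop(0)))
    # drain whichever input still has frames
    return out + [("A", t) for t in a] + [("B", t) for t in b]
```

With strictly alternating timestamps you get A/B/A/B; with identical timestamps on both inputs the order collapses to whatever the tie-break picks, which is why two streams with the same PTS values don't interleave the way frame counting would suggest.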
@Mark -
Everything in filter chain works as expected, except interleave in this case
You can test and verify the output of each node in a filter graph,
individually, by splitting and
Mark Filipak wrote
> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
> working because it is working
> for me.
Interleave works correctly in terms of timestamps
Unless I'm misunderstanding the point of this thread, your "recursion issue"
can be
Mark Filipak wrote
> By the way, 'interleave' not recognizing end-of-stream (or 'select' not
> generating end-of-stream,
> whichever the cause) isn't a big deal as I'll be queuing up transcodes --
> as many as I can -- to run
> overnight.
But it would be nice to find some way to terminate,
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
>
> markfilipak.windows+ffmpeg@
> :
>
>> I'm not using the 46 telecine anymore because you introduced me to
>> 'pp=linblenddeint'
>> -- thanks again! -- which allowed me to decomb via the 55 telecine.
>
> Why
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 19:27 Uhr schrieb pdr0
> pdr0@
> :
>>
>> Carl Eugen Hoyos-2 wrote
>> > Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
>> >
>>
>> > markfilipak.windows+ffmpeg@
>>
>>
Paul B Mahol wrote
> On 4/18/20, pdr0
> pdr0@
> wrote:
>> Mark Filipak wrote
>>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>>> working because it is working
>>> for me.
>>
>>
>> Interleave works correc
jake9wi wrote
> When I transcode a video (say from x264 to vp9 without scaling) does
> ffmpeg up-scale the chroma to 4:4:4 for processing or does it leave it at
> 4:2:0?
It does not change the chroma unless you tell it to change it, or some
filter or output format requires the change.
; filtering all lines with a
> (1 2 1) filter."
>
> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1"
> refers. pdr0 recommended it
> and I found that it works better than any of the other deinterlace
> filters. Without pdr0's help,
Mark Filipak wrote
>
> I would love to use motion compensation but I can't, at least not with
> ffmpeg. Now, if there was
> such a thing as smart telecine...
>
> A A A+B B B -- input
> A A B B -- pass 4 frames directly to output
>A A+B B -- pass 3 frames to filter
> X --
Cemal Direk wrote
> Hi, im using this code
>
> "ffmpeg -i video.mp4 -filter:v "fade=t=in:color=white:st=0.5:d=1"
> -filter:a
> "afade=in:st=0:d=1, afade=out:st=44:d=1" -c:v libx264 -c:a aac output.mp4"
>
> then output is wrong codec. i can not open output.mp4 at any player.
> but i dont give to
pdr0 wrote
> As Paul pointed out, interleave works using timestamps , not "frames". If
> you took 2 separate video files, with the same fps, same timestamps, they
> won't interleave correctly in ffmpeg. The example in the documentation
> actually does not work if they had th
Carl Eugen Hoyos-2 wrote
> Am So., 19. Apr. 2020 um 18:46 Uhr schrieb Mark Filipak
>
> markfilipak.windows+ffmpeg@
> :
>>
>> On 04/19/2020 12:31 PM, Carl Eugen Hoyos wrote:
>> > Am So., 19. Apr. 2020 um 18:11 Uhr schrieb pdr0
> pdr0@
> :
>> >
Carl Eugen Hoyos-2 wrote
>> Am 19.04.2020 um 08:08 schrieb pdr0
> pdr0@
> :
>>
>> Other types of typical single rate deinterlacing (such as yadif) will
>> force
>> you to choose the top or bottom field
>
> As already explained: This is not true.
H
Mark Filipak wrote
>> The result of telecine is progressive content (you started with
>> progressive
>> content) , but the output signal is interlaced.
>
> According to the Motion Pictures Experts Group, it's not interlaced
> because the odd/even lines are
> not separated by 1/fieldrate seconds;
Carl Eugen Hoyos-2 wrote
> Am So., 19. Apr. 2020 um 16:31 Uhr schrieb pdr0
> pdr0@
> :
>>
>> Carl Eugen Hoyos-2 wrote
>> > Am 19.04.2020 um 08:08 schrieb pdr0
>
>> >> Other types of typical single rate deinterlacing (such as yadif) will
>
Mark Filipak wrote
> Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30
> (or p25) and, optionally,
> smoothing the resulting p30 (or p25) frames.
That is the description of single-rate deinterlacing. But that is not what
a flat-panel TV does with interlaced content or
Mark Filipak wrote
>> Deinterlacing does not necessarily have to be used in the context of
>> "telecast". e.g. a consumer camcorder recording home video interlaced
>> content is technically not "telecast". Telecast implies "broadcast on
>> television"
>
> You are right of course. I use
pdr0 wrote
> If you take a soft telecine input, encode it directly to rawvideo or
> lossless output, you can confirm this.
> The output is 29.97 (interlaced content) .
So my earlier post is incorrect.
The output is actually 29.97p with 5th-frame duplicates. The repeat-field flags
are
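The field arithmetic behind that correction can be written out. This is a sketch under the standard 3:2 soft-telecine assumption (repeat_first_field set on 2 of every 4 stored frames); `displayed_frames` is a hypothetical helper, not an ffmpeg API:

```python
# Sketch: how repeat-field flags turn 23.976 stored frames into 29.97
# displayed frames. In a 3:2 pattern, 2 of every 4 stored frames carry
# repeat_first_field, so 4 stored frames yield 10 fields = 5 frames.
def displayed_frames(stored_frames):
    rff_per_cycle = 2                    # flags on 2 of every 4 frames
    fields = stored_frames * 2 + (stored_frames // 4) * rff_per_cycle
    return fields // 2

# 24 stored frames -> 60 fields -> 30 displayed frames
# (i.e. 24000/1001 stored becomes 30000/1001 displayed)
```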
Carl Eugen Hoyos-2 wrote
>> Am 24.04.2020 um 11:10 schrieb Mark Filipak
> markfilipak.windows+ffmpeg@
> :
>>
>> I've been told that, for soft telecined video the decoder is fully
>> compliant and therefore outputs 30fps
>
> (“fps” is highly ambiguous in this sentence.)
>
> This is not
Mark Filipak wrote
>> I've been told that, for soft telecined video
>> the decoder is fully compliant and therefore outputs 30fps
>> I've also been told that the 30fps is interlaced (which I found
>> surprising)
>> Is this correct so far?
Yes
If you take a soft telecine input, encode it
When appending videos, you usually need to match the specs for the video,
including dimensions, framerate, and pixel format:
filtered.mp4 is 640x352, 30fps
intro.mp4 is 1080x608, 59.94fps
My guess is that is part of the reason. The specs can change midstream for
a transport stream, but some
Carl Eugen Hoyos-2 wrote
>> e.g
>> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
>
> (Consider to test with other output formats.)
What did you have in mind?
e.g.
ffmpeg -i input.mpeg -c:v utvideo -an output.avi
The output is 29.97, according to ffmpeg, and double-checked using official
Mark Filipak wrote
>
>>
>> If you take a soft telecine input, encode it directly to rawvideo or
>> lossless output, you can confirm this.
>> The output is 29.97 (interlaced content) .
>>
>>> When I do 'telecine=pattern=5', I wind up with this
>>>
>>>
Cemal Direk wrote
> but other problem: iphone is not supporting to filter effect on phone
> when im joining(merging) video...
>
> ffmpeg -i video.mp4 -filter:v "fade=in:color=white:st=5:d=1,
> fade=out:color=white:st=44:d=1,format=yuv420p" filtered.mp4
>
> ffmpeg -i intro.mp4 -c copy
Cecil Westerhof-3 wrote
> yuv420p(top first)
Maybe partially related - your camera files are encoded interlaced TFF (the
content might not be), but your command line specifies progressive encoding.
This has other implications in the way other programs/players handle the
file if the content is
Samik Some wrote
> Somewhat related question. Does sws_flags have any effect when
> converting to yuvj444p color space using scale? (since no actual
> resizing is needed)
Yes. The sws flags are used to control the RGB to YUV conversion - in this
case, full range, and BT.709 for the matrix.
You can use -x265-params to pass x265 settings - in this case, to specify
input-csp i444.
Personally, I prefer to explicitly control the RGB=>YUV conversion with -vf
scale or zscale,
e.g.
ffmpeg -r 24 -loop 1 -i iybW.png -vf
scale=out_color_matrix=bt709:out_range=full,format=yuvj444p -c:v libx265
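The luma half of that conversion is easy to check by hand. This is a rough sketch using the published BT.709 luma coefficients at full range, 8-bit (it is not what libswscale actually executes, and ignores chroma, dithering, and rounding modes):

```python
# Sketch of the R'G'B' -> Y' step the out_color_matrix/out_range flags
# control, using BT.709 luma coefficients at full range (0-255), 8-bit.
KR, KB = 0.2126, 0.0722          # BT.709 red/blue luma weights
KG = 1.0 - KR - KB               # green weight = 0.7152

def rgb_to_y_full(r, g, b):
    return round(KR * r + KG * g + KB * b)

# full range: pure white maps to 255, not the limited-range 235
```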
Carl Eugen Hoyos-2 wrote
> Could it be the number of cores in your system / is the issue reproducible
> with -threads 1 ?
The issue is still present with -threads 1.
It does not appear to be related to pixel format (e.g. it affects a yuv420p
source test as well).
Kieran O Leary wrote
> Any idea what's happening? Will I get the libx264-style answer: 'this is
> googles issue,
I can replicate the ffmpeg issue (also with other sources), but I don't know
what the problem is.
It's not a "google" issue, because AOM's aomenc.exe works and produces
lossless output.
Intra-only compression, using -g 1, makes it lossless; maybe a clue there.
Not sure why there is a discrepancy between aomenc and ffmpeg's libaom-av1.
--
Sent from: http://www.ffmpeg-archive.org/
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
I don't know if it's the full explanation...
The way it should work is that ppsrc should disable everything else,
provided the input source and ppsrc have the exact same timecodes.
In theory, you should get the same result as -vf decimate on preprocessed.mkv:
ffmpeg -i preprocessed.mkv -vf decimate -c:v libx264 -crf
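The core of what decimate is asked to do here can be sketched as duplicate removal. This toy version (a hypothetical `decimate` function, not the real filter, which works on a fixed cycle with a similarity threshold rather than exact equality) drops frames identical to the previous one:

```python
# Toy model of decimation: drop exact duplicates of the previous frame,
# e.g. a 3:2-pulldown-derived stream with every 5th frame duplicated
# comes out with 4 unique frames per 5.
def decimate(frames):
    out = [frames[0]]
    for f in frames[1:]:
        if f != out[-1]:
            out.append(f)
    return out
```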
Not sure about other OSes. On Windows, you don't need to escape commas
inside the text field, but the paths for fonts need to be escaped.
These work OK in Windows:
one comma
ffmpeg -f lavfi -r 24 -i color=c=green:s=640x480 -vf
drawtext="fontfile='C\:\\Windows\\Fonts\\Arial.ttf':text='Room
One issue is that scene-change detection is still active. When you crop to a
region of interest, a small change is effectively a larger % change, e.g.
the delta between 1020 and 1021 is large when the head is going up. If you
disable scene change you get fixed intervals, but it's no longer adaptive
to scene changes.
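The percentage effect is just the ratio of changed pixels to region size. A minimal sketch (hypothetical helper, not the detector's real metric, which compares per-pixel differences against a threshold):

```python
# Sketch of why cropping inflates scene-change scores: the same absolute
# pixel delta is a larger fraction of a smaller region of interest.
def change_fraction(changed_pixels, width, height):
    return changed_pixels / (width * height)

full = change_fraction(20_000, 1920, 1080)   # under 1% of the full frame
roi = change_fraction(20_000, 320, 240)      # ~26% of a cropped region
```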
pdr0 wrote
> I don't know if it's the full explanation...
>
> The way it should work is ppsrc should disable everything else , given if
> input src ,and ppsrc have the exact same timecodes
>
>
> In theory, you should get same as -vf decimate on preprocessed.mkv
>
>
MediaMouth wrote
> Frame disposal makes sense as the culprit
> Turns out I'm on version 4.3.1, which I think is most recent.
The git version might not have made it into the release version.
This is the old ticket and fix:
https://trac.ffmpeg.org/ticket/7902
FFmpeg-users mailing list wrote
> Hello,
>
> I'm trying to create a GIF from an image sequence of PNGs with transparent
> pixels, but these transparent pixels convert to black in the resulting
> GIF. I'm using the following command :
>
> $ ffmpeg -i toile4-4-%d.png -framerate 12 toile4.webm
Did
MediaMouth wrote
>> eg.
>> ffmpeg -r 12 -i toile4-4-%d.png -filter_complex
>> "palettegen[PG],[0:v][PG]paletteuse" toile4.gif
>>
>
>
> Following up on the documentation links provided, I wasn't able to work
> out what the details of your "-filter_complex" entries were about
MediaMouth wrote
>
> In this case the "artifact" I was referring to was a piece of the opaque
> image itself that remains on all frames of the GIF even though it does not
> appear in the source PNGs
>
> I posted the ZIP file of the source PNGs and resulting GIF here
>
But for web-supported video in an MP4 container, there is no native alpha
channel support (there are workarounds using an HTML5 background canvas and
2 videos).
And for video - VP9 in WebM does have alpha channel support, 8 or 10 bits
per channel, and is supported by modern browsers,
Mark Filipak (ffmpeg) wrote
>
> You wrote: "What's wrong with using setsar filter after tinterlace?"
>
> I tried that from the git-go. I just reran it.
>
> ffmpeg -report -i "source 720x480 [SAR 8x9 DAR 4x3] 29.97 fps.VOB"
> -filter_complex "separatefields,
> shuffleframes=0 1 2 4 3 6 5 7 8 9,
Mark Filipak (ffmpeg) wrote
>
>>
>> You can't interleave images with different dimensions
>>
>> Aout has separated fields, to 720x240 , but Bout is 720x480
>
> [Bout] is 720x240: I'm using 'mode=send_field' in 'bwdif=mode=send_field',
> and the following
> 'decimate' doesn't change that. The
Paul B Mahol wrote
> On Thu, Jan 28, 2021 at 10:23 PM Mark Filipak (ffmpeg)
> markfilipak@
>
> wrote:
>
>> Synopsis:
>>
>> I seek to use minterpolate to take advantage of its superior output. I
>> present some performance
>> issues followed by an alternative filter_complex. So, this
Mark Filipak (ffmpeg) wrote
> Suppose I explain like this: Take any of the various edge-detecting,
> deinterlacing filters and, for
> each line-pair (y & y+1), align both output lines (y & y+1) to the mean of
> the input's
> line(y).Y-edge & line(y+1).Y-edge. To do that, only single line-pairs
Mark Filipak (ffmpeg) wrote
>
> In the video,
>
> Look at the behavior of the dots on the gate behind the police here:
> 0:5.422 to 0:10.127.
>
> Look especially at the top of roof of the building here: 0:12.012 to
> 0:12.179, for apparent
> macroblock errors.
>
> Here's the video:
>
>
Mark Filipak (ffmpeg) wrote
> Is there a way to coax minterpolate to expand its hardware usage?
Not directly.
One way might be to split along cadence boundaries and process in parallel
(e.g. 4x), then reassemble.
(There are other optical-flow solutions that use the GPU in other software,
and some
Mark Filipak (ffmpeg) wrote
> And in lines like this:
> "[matroska @ 01f9e7236480] Starting new cluster due to timestamp"
>Why are new clusters started? What causes it?
>In this context, what is a cluster?
>What does "due to timestamp" really mean? Are there timestamp errors?
Not
Mark Filipak (ffmpeg) wrote
> I've never heard of "optical flow errors". What could they be? (Got any
> links to
> explanations?)
The artifacts in your video are optical flow errors :)
If you've ever used it, you'd recognize these artifacts; they are very
common.
There are about a dozen
Mark Filipak (ffmpeg) wrote
> But perhaps by "process in parallel" you mean something else, eh?
> ...something I'm unaware of. Can
> you expand on that?
I mean "divide and conquer" to use all resources. If you're at 20% CPU
usage, you can run 4-5 processes, e.g. split the video into 4-5 segments.
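The segmenting step is plain arithmetic. This sketch (a hypothetical `segments` helper; the actual ffmpeg seek/trim/concat invocations are left out) cuts a duration into equal spans that separate processes could each encode:

```python
# Sketch of the "divide and conquer" split: cut the timeline into n equal
# (start, end) spans in seconds, one per parallel encode process.
def segments(duration, n):
    step = duration / n
    return [(round(i * step, 3), round((i + 1) * step, 3)) for i in range(n)]
```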
Jim DeLaHunt-2 wrote
> Perhaps the character between 'mci' and 'mc_mode' should be ':' instead
> of '='?
That works for me
-vf
minterpolate=fps=6/1001:mi_mode=mci:mc_mode=obmc:scd=fdiff:scd_threshold=10
Each one is a separate option and argument, separated by ':'.
Hongyi Zhao wrote
> I noticed there are -cgop/+cgop and -g options used by ffmpeg. But I
> really can't figure out the usage of them. Any hints will be highly
> appreciated.
-g is the maximum GOP length (the maximum keyframe interval). Some
codecs/settings have adaptive GOP and I-frame
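With a fixed GOP (no adaptive I-frame placement or scene-cut insertion), keyframes land at regular multiples of the interval. A minimal sketch of that ideal case (hypothetical helper; real encoders may place extra I-frames earlier than -g allows):

```python
# Sketch: with a strictly fixed GOP of length g, keyframes fall every
# g frames; -g only caps the interval when adaptive placement is active.
def keyframe_positions(total_frames, g):
    return list(range(0, total_frames, g))
```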
USMAN AAMER wrote
> Hi,
>
> I am comparing the compression efficiency of H.264 and H.265 codecs.
> I have studied many research papers showing that the compression
> efficiency
> of H.265 is much better than H.264.
>
> But I am not able to get the same results.
> I am trying to compress 664 YUV
pdr0 wrote
> More settings would help too - maybe you can improve the filter. I'll post
> an example later similar to one posted by Mark, where it's "solvable"
> using
> other methods, but not using minterpolate. Minterpolate maxes out at a
> block
> size of
Paul B Mahol wrote
>> The problem is ffmpeg minterpolate is s slow, and you have no usable
>> preview. Some of the other methods mentioned earlier do have previews -
>> so
>> you can tweak settings, preview, readjust etc
>>
>>
>
> Why you ignore fact that libavfilter also allows usable
Mark Filipak (ffmpeg) wrote
> On 01/28/2021 07:42 PM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> But perhaps by "process in parallel" you mean something else, eh?
>>> ...something I'm unaware of. Can
>>> you expand on that?
>>
>>
>
Wolfgang Hugemann wrote
> I did one step backward and tried to construct a vfr video from the
> scratch using slides a an input:
>
> ffmpeg -y -f concat -i input.txt colors.mkv
>
> with input.txt as:
>
> ffconcat version 1.0
> file 'red.png'
> duration 250ms
> file 'yellow.png'
> duration 500ms
Mark Filipak (ffmpeg) wrote
> Is there something about inputting raw frames that I don't know?
>
> I'm using 'vspipe' to pipe raw frames to 'ffmpeg -i pipe:'.
> The vapoursynth script, 'Mark's.vpy', is known good.
> The output of vapoursynth is known good.
> I've tried to be careful to retain
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 01:27 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> Is there something about inputting raw frames that I don't know?
>>>
>>> I'm using 'vspipe' to pipe raw frames to 'ffmpeg -i pipe:'.
>>> The va
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 02:28 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> On 02/12/2021 01:27 AM, pdr0 wrote:
>>>> Mark Filipak (ffmpeg) wrote
>>>>> Is there something about inputting raw frames that I don't know?
>
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>> less than 0.1%.
>>
>> The filter complex is thinned down to just this: settb=1/72,showinfo
>>
>> Here is selected lines from the
On 2021-04-01 11:41, pdr0 wrote:
> Mark Filipak (ffmpeg) wrote
>> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>>> less than 0.1%.
>>>
>>> The filter complex is thinn
Mark Filipak (ffmpeg) wrote
>
> Is this another documentation problem?
>
> https://ffmpeg.org/ffmpeg-filters.html#fps-1
> "11.88 fps
> Convert the video to specified constant frame rate by duplicating or
> dropping frames as necessary."
>
> I want to duplicate (specifically, double and only
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 11:41, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> On 2021-04-01 07:13, Mark Filipak (ffmpeg) wrote:
>>>> The source is MKV. MKV has a 1/1000 TB, so any PTS variance should be
>>>> less than 0.1%.
>>&g
Mark Filipak (ffmpeg) wrote
> What I'm trying to do is make a 12/1001fps cfr in which each frame is
> a proportionally weighted
> pixel mix of the 24 picture-per-second original:
> A B AAABB AABBB A.
> I'm sure it would be way better than standard telecine -- zero judder --
> and
Mark Filipak (ffmpeg) wrote
> On 2021-04-01 13:40, pdr0 wrote:
>>
>> This zip file example has the original 24000/1001, weighted frame
>> blending
>> to 12/1001, and decimation to 6/1001 - is this something close to
>> what you had in mind ?
Craig L. wrote
> Hi, I am running the following command to convert this red TGA image
> into a video.
>
> However, the finished video color does not match the TGA images color.
>
> Is there something I can do to ensure that the color comes out right?
>
> I plan on also adding another video
FFmpeg-users mailing list wrote
> Hello. I'm using ffmpeg to generate thumbnails for my JavaScript web
> project. Now I'm having a little problem with .gif thumbnails. Original
> gif: https://files.catbox.moe/frrsev.gif my software's thumbnail:
> https://files.catbox.moe/3rtv3o.gif
>
> As you can
FFmpeg-users mailing list wrote
> I'm using 4.3.2 built from source.
Maybe there is a regression? Post your console output.
https://i.postimg.cc/htBxJ81m/output.gif
ffmpeg -i frrsev.gif -filter_complex "scale=250:250, split[a][b]; [a]
Mark Filipak (ffmpeg) wrote
> On 2021-03-18 01:55, pdr0 wrote:
>
>> https://www.mediafire.com/file/m46kc4p1uvt7ae3/cadence_tests.zip/file
>
> Thanks again. I haven't tested my filters on cadence.mp4 yet to see if
> they work as expected.
>
> How did you make cade
FFmpeg-users mailing list wrote
> You mean the master version without checking out the n4.3.2 release?
>
> ‐‐‐ Original Message ‐‐‐
> On Wednesday 17 March 2021 16:40, pdr0
> pdr0@
> wrote:
>
>> FFmpeg-users mailing list wrote
>>
>> > Trying
On Wednesday 17 March 2021 16:07, pdr0
> pdr0@
> wrote:
>
>> FFmpeg-users mailing list wrote
>>
>> > I'm using 4.3.2 built from source.
>>
>> maybe there is a regression ? post your console output
>>
>> https://i.postimg.cc/htBxJ81
Mark Filipak (ffmpeg) wrote
> Is the format of a filter script file documented anywhere? I can't find
> any.
>
> Working command is:
>
> ffmpeg -i source.mkv -filter_script:v test.filter_script -map 0 -codec:v
> libx265 -codec:a copy
> -codec:s copy -dn test.mkv
>
> If the test.filter_script
Hassan wrote
> Hello,
>
> I am using ffmpeg on a Windows 10 machine and I want to record the desktop
> at a high frame rate while appending accurate timestamps to each frame.
> I am recording my desktop using the following command:
>
> ffmpeg -f gdigrab -framerate 60 -i desktop -vf "settb=AVTB,
Mark Filipak (ffmpeg) wrote
> I hoped that "marked as interlaced" [1] meant that
>
> 'select=expr=not(eq(interlace_type\,TOPFIRST)+eq(interlace_type\,BOTTOMFIRST))'
> [2]
>
> would work. However, the 'select' doesn't work. I'm counting on the
> 'select' working -- not working
> is a complete
Mark Filipak (ffmpeg) wrote
>
> Currently, the ffmpeg internal time base appears to be 1kHz.
No. Again, there is no ffmpeg "inherent" 1 ms time base. I answered this at
doom9 already. I also posted in your other thread demonstrating the timestamp
results with the same video in VOB vs. MKV. I
Ian Pilcher wrote
> I am trying to understand how the SSIM and VMAF filters work, with an
> eye to finding the "best" compression settings for a video which will
> be composed from a series of TIFF images. Unfortunately, I'm stuck at
> the beginning, as I can't get the SSIM filter to behave as
Mark Filipak (ffmpeg) wrote
> In contrast, my best information so far is that, at least out of the
> encoder, ffmpeg encodes frames with PTS resolution = 1ms.
Not true.
Check the timestamps at each step: decoding, prefilter, postfilter after
each filter, post-encode. If you need to check
Mark Filipak (ffmpeg) wrote
> On 2021-02-23 00:41, Carl Zwanzig wrote:
> -snip-
>> If you're starting with mpeg-ps or -ts, ...
>
> There's no such thing as PTS in mpeg-ts. The transport stream sets the SCR
> (System Clock Reference)
> (aka TB) but the PTSs are in the presentation stream, stored
Mark Filipak (ffmpeg) wrote
> 'yadif=mode=send_field' is one way to convert fields to frames at the same
> frame size and twice the
> FR. It does it by repeating fields, but it also adds cosmetics -- it is,
> after all, a motion
> interpolation filter.
>
> I seek a fields-to-frames filter that
Mark Filipak (ffmpeg) wrote
> On 2021-03-05 11:13, James Darnley wrote:
>> On 05/03/2021, Mark Filipak (ffmpeg)
> markfilipak@
> wrote:
>>> I seek a fields-to-frames filter that does not add cosmetics. In my
>>> pursuit,
>>> I look for such a
>>> filter every time I peruse the filter docs for
Rainer M. Krug-2 wrote
> Hi
>
> First poster, o apologies for any forgotten info.
>
>
> I have a video with the following metadata:
>
> ```
> $ ffprobe ./1.pre-processed.data/bemovi/20210208_00097.avi
> ffprobe version 4.3.2 Copyright (c) 2007-2021 the FFmpeg developers
> built with Apple
Mark Filipak (ffmpeg) wrote
>> Either way, cadence wise that's going to be worse in terms of smoothness
>> then an optical flow retimed 6/1001 . (Some people would argue it's
>> worse period, you're retiming it and making it look like a soap opera...)
>
> You know, I think that "soap opera"
Mark Filipak (ffmpeg) wrote
> On 02/12/2021 10:34 AM, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>
> -snip-
>
>> "72fps" or "144fps" equivalent in a cinema is not the same thing - the
>> analogy would be the cinema is repeating frames, vs inter
Mark Filipak (ffmpeg) wrote
> I've run some test cases for the shuffleframes filter. I've documented
> shuffleframes via my
> preferred documentation format (below). In the course of my testing, I
> found 2 cases (marked "*
> Expected", below) that produced unexpected results, to wit: If the
Mark Filipak (ffmpeg) wrote
> On 2021-02-21 21:39, pdr0 wrote:
>> Mark Filipak (ffmpeg) wrote
>>> I've run some test cases for the shuffleframes filter. I've documented
>>> shuffleframes via my
>>> preferred documentation format (below). In the course of my
pdr0 wrote
> Mark Filipak (ffmpeg) wrote
>> On 2021-02-21 21:39, pdr0 wrote:
>>> Mark Filipak (ffmpeg) wrote
>>>> I've run some test cases for the shuffleframes filter. I've documented
>>>> shuffleframes via my
>>>> preferred documentation for