[FFmpeg-user] FFmpeg doesn't stop after it has written the correct output file
Hello all,

I just created a special effect which shows several optical sound tracks, as in cine film. The output video is perfectly OK and has the correct length of 20 s. But FFmpeg doesn't stop after it has written the output file; I have to terminate it with Ctrl-C, and I don't understand why. The problem can be reproduced with some audio input files, but not with all. This input file can be used for reproducing it:

ffmpeg -f lavfi -i "sine=1k:b=2,channelmap=0|0" -t 20 -y sine.mp3

This is the command line for the special effect:

ffmpeg -i sine.mp3 -lavfi "asplit=4[a0][a1][a2][a3];[a0]asplit[b0][c0];[a1]adelay=0.05:all=1,volume='gt(t,5)':eval=frame,asplit[b1][c1];[a2]adelay=0.1:all=1,volume='gt(t,10)':eval=frame,asplit[b2][c2];[a3]adelay=0.15:all=1,volume='gt(t,15)':eval=frame,asplit[b3][c3];[b0]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v0];[b1]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v1];[b2]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v2];[b3]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v3];[v0][v1][v2][v3]vstack=4,transpose;[c0][c1][c2][c3]amix=4" -y out.mp4

The console outputs are below.
Michael

C:\Users\astro\Desktop>ffmpeg -f lavfi -i "sine=1k:b=2,channelmap=0|0" -t 20 -y sine.mp3
ffmpeg version 2021-03-09-git-c35e456f54-essentials_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10.2.0 (Rev6, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil      56. 67.100 / 56. 67.100
  libavcodec     58.129.100 / 58.129.100
  libavformat    58. 71.100 / 58. 71.100
  libavdevice    58. 12.100 / 58. 12.100
  libavfilter     7.109.100 /  7.109.100
  libswscale      5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
Input #0, lavfi, from 'sine=1k:b=2,channelmap=0|0':
  Duration: N/A, start: 0.00, bitrate: 1411 kb/s
  Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, mp3, to 'sine.mp3':
  Metadata:
    TSSE            : Lavf58.71.100
  Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p
    Metadata:
      encoder         : Lavc58.129.100 libmp3lame
size=     313kB time=00:00:19.98 bitrate= 128.4kbits/s speed= 120x
video:0kB audio:313kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.078921%

C:\Users\astro\Desktop>ffmpeg -i sine.mp3 -lavfi "asplit=4[a0][a1][a2][a3];[a0]asplit[b0][c0];[a1]adelay=0.05:all=1,volume='gt(t,5)':eval=frame,asplit[b1][c1];[a2]adelay=0.1:all=1,volume='gt(t,10)':eval=frame,asplit[b2][c2];[a3]adelay=0.15:all=1,volume='gt(t,15)':eval=frame,asplit[b3][c3];[b0]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v0];[b1]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v1];[b2]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v2];[b3]showwaves=mode=cline:split_channels=true:s=1080x480:colors=white[v3];[v0][v1][v2][v3]vstack=4,transpose;[c0][c1][c2][c3]amix=4" -y out.mp4
ffmpeg version 2021-03-09-git-c35e456f54-essentials_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10.2.0 (Rev6, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheor
Re: [FFmpeg-user] hstack with one video offset in time (and keep audio synced)?
On 17.03.2021 at 19:31, Steven Kan wrote:

On Mar 6, 2021, at 11:22 AM, Steven Kan wrote:

On Mar 5, 2021, at 2:00 PM, Michael Koch wrote:

On 05.03.2021 at 20:33, Steven Kan wrote:

I'd like to assemble these videos, side-by-side, but synced in time, which means the TrailDown video needs to start 50 seconds after the TrailUp video.

Try this command line:

ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex "[0]tpad=start_duration=50[a];[a][1]hstack" out.mp4

Thank you! That worked perfectly, and now that I understand the syntax, this works as well (to pad the end of the second track by 15 sec):

ffmpeg -i Input1.mp4 -i Input2 -filter_complex "[0]tpad=start_duration=50[a];[1]tpad=stop_duration=15[b];[a][b]hstack" Out.mp4

More on this! tpad=start_duration works to delay the start of one video, but now I need to fix the audio sync. For example, this command:

ffmpeg -i TrailDown.mp4 -i TrailUp.mp4 -filter_complex "[0]tpad=start_duration=2.5[a];[a][1]hstack" -vcodec libx264 Coyote2Up.mp4

results in this video: https://www.youtube.com/watch?v=_PDPONEU3YA#t=35s

Only the left half (TrailDown) camera has a microphone, and it's apparent that the audio sync is off by the same 2.5 seconds that I've delayed its video. You can hear the female coyote (with the stumpy tail) scratching the ground 2.5 seconds before she actually does it. What flag should I add to also delay its audio? Thanks!

Add the "adelay" filter to the filter chain: adelay=2500

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
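Spelled out, a full command along these lines might look like the sketch below. This is untested and the stream mapping and the all=1 option are my additions; adelay works in milliseconds, so 2.5 s becomes 2500. The script only assembles and prints the command.

```shell
# Sketch: delay TrailDown's video with tpad and its audio with adelay (2500 ms),
# then hstack both video streams. File names are taken from the thread above.
CMD='ffmpeg -i TrailDown.mp4 -i TrailUp.mp4 -filter_complex "[0:v]tpad=start_duration=2.5[v0];[0:a]adelay=2500:all=1[a0];[v0][1:v]hstack[v]" -map "[v]" -map "[a0]" -vcodec libx264 Coyote2Up.mp4'
echo "$CMD"
```

Note that adelay=2500 alone delays only the first channel; all=1 applies the same delay to every channel.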
Re: [FFmpeg-user] 'mix' filter questions
On 20.03.2021 at 04:31, Mark Filipak (ffmpeg) wrote:

Specify scale, if it is set it will be multiplied with sum of each weight multiplied with pixel values to give final destination pixel value. By default scale is auto scaled to sum of weights.

Correct would be "By default scale is set to (1 / sum_of_weights)". The same error is also in the documentation of the "tmix" filter.

Is there one scale for all inputs or one scale per input?

One for all inputs.

Criticism (re "Specify weight"): "If number of weights is smaller than number of frames" -- Huh?

"number of inputs". The same error is also in the documentation of the "tmix" filter.

Michael
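As a quick numeric check of that default (my own illustration, not from the docs): with weights w_i and pixel values p_i, the output pixel is scale * sum(w_i * p_i), and the default scale of 1/sum(w_i) leaves identical inputs unchanged.

```shell
# Weights 1,2,3 and all three input pixels equal to 100:
# default scale = 1/(1+2+3), so the mixed pixel is again 100.
RES=$(awk 'BEGIN { w1=1; w2=2; w3=3; p=100;
                   scale = 1/(w1+w2+w3);
                   printf "%.0f", scale*(w1*p + w2*p + w3*p) }')
echo "$RES"
```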
Re: [FFmpeg-user] Cutting out part of a video does not work
On 26.03.2021 at 09:55, Cecil Westerhof via ffmpeg-user wrote:

I want to publish a speech I gave during a Zoom meeting, but cutting it out does not work. When I use:

ffmpeg -y -i 2021-03-25ToastmastersClubAvond.mp4 -ss 1190 -to 1631 -acodec copy -vcodec copy -async 1 speech.mp4

the video starts just a bit too late. But when I use:

ffmpeg -y -i 2021-03-25ToastmastersClubAvond.mp4 -ss 1185 -to 1631 -acodec copy -vcodec copy -async 1 speech.mp4

which is five seconds earlier, I get the same output. Am I doing something wrong?

Without re-encoding you can only cut at keyframes. If you want to cut at exact times, you must remove -acodec copy and -vcodec copy.

Michael
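For reference, a frame-accurate variant of the command above could look like this sketch (the encoder choices are my assumptions; re-encoding is slower, but the cut is no longer restricted to keyframes). The script only assembles and prints the command.

```shell
# Sketch: same cut as above, but re-encoded instead of stream-copied,
# so the cut lands exactly at 1190 s rather than at the previous keyframe.
CMD='ffmpeg -y -i 2021-03-25ToastmastersClubAvond.mp4 -ss 1190 -to 1631 -c:v libx264 -c:a aac speech.mp4'
echo "$CMD"
```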
Re: [FFmpeg-user] Cut part of a video, crop it and blackout parts of it
On 05.04.2021 at 01:48, Cecil Westerhof via ffmpeg-user wrote:

I have to cut out a part of a video, crop it and black out two parts. I do this with:

ffmpeg -y \
  -ss 00:19:49 \
  -i 2021-03-25ToastmastersClubAvond.mp4 \
  -to 442 \
  -vf "crop=1440:1080:240:0, drawbox=enable='between(t, 0, 2)':color=black:w=in_w:h=in_h:thickness=fill, drawbox=enable='between(t, 339, 342)':color=black:w=in_w:h=in_h:thickness=fill" \
  -acodec copy \
  -vcodec libx264 \
  -crf 23 \
  speechClean2.mp4

The reason I use -to on the output side is that this way the metadata concerning the video length is less out of whack. I was wondering whether this is a good way, or whether it could be done better? Also, apart from the timestamps the parameters for drawbox are the same. Is there a way that I do not have to repeat them?

drawbox=enable='bitor(between(t, 0, 2),between(t, 339, 342))'

Michael
Re: [FFmpeg-user] Cut part of a video, crop it and blackout parts of it
On 05.04.2021 at 11:58, Cecil Westerhof via ffmpeg-user wrote:

Michael Koch writes:

On 05.04.2021 at 01:48, Cecil Westerhof via ffmpeg-user wrote:

I have to cut out a part of a video, crop it and black out two parts. [...] Also, apart from the timestamps the parameters for drawbox are the same. Is there a way that I do not have to repeat them?

drawbox=enable='bitor(between(t, 0, 2),between(t, 339, 342))'

Works like a charm. The strange thing is that it seems to be much faster, by about 20%. Too much to be caused by a different load, I think. But I am not complaining. ;-)

If you want to cover the whole frame size, you could also use this (for 50% gray):

eq=enable='bitor(between(t, 0, 2),between(t, 339, 342))':contrast=0

or add :brightness=-0.5 if you want black. It might be faster than drawbox.

Michael
Re: [FFmpeg-user] Overlay images to frames in video
On 08.04.2021 at 09:27, Rainer M Krug wrote:

ffmpeg -i 'background_movie.avi' -i 'overlay.avi' -filter_complex 'overlay=0x0' 'final_movie.avi'

I think overlay=0x0 is not doing what you expect. It sets the x option to hexadecimal 0, and it doesn't specify the y option. In your case that doesn't matter, because the default values are 0. You could write 'overlay=x=0:y=0', or you could simplify it to just 'overlay' without any options.

Michael
Re: [FFmpeg-user] Plotting Circles and labels on individual frames of a movie
On 08.04.2021 at 09:46, Rainer M Krug wrote:

Hi,

I have a series of videos of moving particles (multiple particles per frame / movie), and would like to add a circle around each particle and add a label. At the moment I am using a script in R to plot, for each frame, these circles and labels into a PNG with alpha channel, combine the PNGs to a movie, and overlay this movie onto the original movie (see the thread 'Overlay images to frames in video' for the background). Now I realised that the actual plotting of the labels takes up nearly 40% of the time of the R script, and I would like to make this process faster. I have found "drawtext" (https://ffmpeg.org/ffmpeg-filters.html#drawtext), but I have no idea how I could do this. We can assume that I have a text file with the following columns:

FRAME: the frame on which the labels and circle should be plotted
X: the x-coordinate
Y: the y-coordinate
LABEL: the label for the point

Also possibly important, there are multiple (many) particles for which circles and labels need to be plotted on each frame. I also found "sendcmd" (https://ffmpeg.org/ffmpeg-filters.html#sendcmd_002c-asendcmd), but I cannot get my head around how I can combine these two.

It's possible to do this with FFmpeg if the number of particles is known in advance and constant in all frames. Use several drawtext commands, one for each particle. But with a changing number of particles I have no idea how to do it. There are a few examples for sendcmd in chapter 2.86 of my book: http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael
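To give an idea of how the two filters could be combined for a fixed particle count, here is a rough sketch (file names, label names and coordinates are all hypothetical): one drawtext instance per particle, addressed via an instance name such as drawtext@p1, and a sendcmd file that updates its text and position at given times using drawtext's reinit command. The script only writes the command file and prints the matching ffmpeg call.

```shell
# Hypothetical sendcmd file: move/relabel particle p1 at t=0.0 and t=0.5.
cat > labels.cmd <<'EOF'
0.0 drawtext@p1 reinit 'text=P1:x=100:y=200';
0.5 drawtext@p1 reinit 'text=P1:x=120:y=210';
EOF

# Matching command: sendcmd feeds the file above to the named drawtext filter.
CMD='ffmpeg -i input.mp4 -vf "sendcmd=f=labels.cmd,drawtext@p1=text=P1:x=0:y=0:fontcolor=white" out.mp4'
echo "$CMD"
```

For several particles you would repeat the drawtext filter with names p2, p3, ... and add matching lines to the command file.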
Re: [FFmpeg-user] Plotting Circles and labels on individual frames of a movie
On 08.04.2021 at 11:10, Rainer M Krug wrote:

Hi Nicolas,

On 8 Apr 2021, at 10:58, Nicolas George wrote:

Rainer M Krug (12021-04-08):

I have found "drawtext" (https://ffmpeg.org/ffmpeg-filters.html#drawtext), but I have no idea how I could do this. We can assume that I have a text file with the following columns: FRAME: the frame on which the labels and circle should be plotted; X: the x-coordinate; Y: the y-coordinate; LABEL: the label for the point. Also possibly important, there are multiple (many) particles for which circles and labels need to be plotted on each frame.

This looks like something ASS subtitles can do easily.

Good idea! Interesting - could you give me some more pointers? Formats?

Chapter 2.139 in my book.

Michael
Re: [FFmpeg-user] Guide to denoise VHS videos
On 12.04.2021 at 00:39, Ulf Zibis wrote:

My question is not about head-switching effects; it is mainly about temporal noise. E.g., there is the hqdn3d filter, which can do both spatial and temporal denoising. But I'm missing some guidance about using the parameters, and a comparison to other denoise filters.

See here: http://ffmpeg.org/pipermail/ffmpeg-user/2019-October/045838.html

Michael
Re: [FFmpeg-user] Change in video length and loss of audio sync
On 13.04.2021 at 21:53, John Harlow via ffmpeg-user wrote:

RES1=`nice -20 cpulimit -l 400 /usr/bin/ffmpeg -y -hide_banner -loglevel verbose -r 29.97 -i "$TMP" \

I'm not sure, but doesn't the -r option before the input override the framerate of the input?

Michael
Re: [FFmpeg-user] How to filter time dependent
On 22.04.2021 at 12:50, Ulf Zibis wrote:

Hi, I want to filter a video from pts 0 to 1999 with filter A, then from 2000 to 2199 with filter B, from 2200 with filter A, and finally the whole stream with filter C. Could someone please give me an example of a working command line?

Untested:

ffmpeg -i input.mp4 -lavfi split[a][b];[a]filter_A[c];[b]filter_B[d];[c][d]select=bitor(between(t,0,1999),between(t,2200,1)),filter_C out.mp4

Michael
Re: [FFmpeg-user] How to filter time dependent
On 22.04.2021 at 23:27, Ulf Zibis wrote:

On 22.04.21 at 13:01, Michael Koch wrote:

On 22.04.2021 at 12:50, Ulf Zibis wrote:

Hi, I want to filter a video from pts 0 to 1999 with filter A, then from 2000 to 2199 with filter B, from 2200 with filter A, and finally the whole stream with filter C. Could someone please give me an example of a working command line?

Untested:

ffmpeg -i input.mp4 -lavfi split[a][b];[a]filter_A[c];[b]filter_B[d];[c][d]select=bitor(between(t,0,1999),between(t,2200,1)),filter_C out.mp4

I found out that the 'between(t,a,b)' should be surrounded by colons. But anyway, the reality is more complex. I have to use 3 different filters. So I tried:

ffmpeg -i input.mp4 -filter_complex "split[a][b][c];[a]filter_A[d];[b]filter_B[e];[c]filter_C[f];[d]select='between(t,0,1999)';[e]select='between(t,2000,2199)';[f]select='between(t,2200,1)'",filter_D out.mp4

With this I got a weird file with 3 video streams.

I think in this example it's not correct how you are using the select filter. The select filter needs two or more inputs, and it selects one of them and passes this stream to the output. The "between" function is inclusive at both ends of the interval.

Michael
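The inclusiveness can be checked directly: in FFmpeg expressions, between(t,a,b) returns 1 for a <= t <= b. A small shell model of it (my own illustration, integer times only):

```shell
# Model of FFmpeg's between(): prints 1 when min <= t <= max, else 0.
between() {
  if [ "$1" -ge "$2" ] && [ "$1" -le "$3" ]; then echo 1; else echo 0; fi
}
between 2000 2000 2199   # lower endpoint is included
between 2199 2000 2199   # upper endpoint is included
between 2200 2000 2199   # outside the interval
```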
Re: [FFmpeg-user] How to filter time dependent
On 23.04.2021 at 08:31, Gyan Doshi wrote:

You may be thinking of the interleave filter. select uses exactly one input.

You are right; what I wrote was wrong.

Michael
Re: [FFmpeg-user] How to filter time dependent
On 22.04.2021 at 23:27, Ulf Zibis wrote:

On 22.04.21 at 13:01, Michael Koch wrote:

On 22.04.2021 at 12:50, Ulf Zibis wrote:

Hi, I want to filter a video from pts 0 to 1999 with filter A, then from 2000 to 2199 with filter B, from 2200 with filter A, and finally the whole stream with filter C. Could someone please give me an example of a working command line?

Untested:

ffmpeg -i input.mp4 -lavfi split[a][b];[a]filter_A[c];[b]filter_B[d];[c][d]select=bitor(between(t,0,1999),between(t,2200,1)),filter_C out.mp4

I found out that the 'between(t,a,b)' should be surrounded by colons. But anyway, the reality is more complex. I have to use 3 different filters. So I tried:

ffmpeg -i input.mp4 -filter_complex "split[a][b][c];[a]filter_A[d];[b]filter_B[e];[c]filter_C[f];[d]select='between(t,0,1999)';[e]select='between(t,2000,2199)';[f]select='between(t,2200,1)'",filter_D out.mp4

Sorry that my example was wrong. Please have a look at the "streamselect" filter instead of "select". However, the problem might be easier to solve if your filters A, B and C have timeline support with the "enable" option. Use this command to find out if your filters have timeline support:

ffmpeg -filters

Michael
Re: [FFmpeg-user] How to filter time dependent
On 23.04.2021 at 13:35, Nicolas George wrote:

Ulf Zibis (12021-04-23):

I already had checked that. Unfortunately, crop has no timeline support:

How would that work? Enabling or disabling crop changes the output resolution, which is not supported by libavfilter.

The crop filter could be used to change the x,y coordinates, with constant output resolution.

Michael
Re: [FFmpeg-user] How to filter time dependent
On 23.04.2021 at 14:05, Nicolas George wrote:

Michael Koch (12021-04-23):

The crop filter could be used to change the x,y coordinates, with constant output resolution.

With timeline? Please elaborate?

Please forget what I wrote. It's wrong.

Michael
Re: [FFmpeg-user] How to filter time dependent
On 23.04.2021 at 15:58, Ulf Zibis wrote:

On 23.04.21 at 08:07, Michael Koch wrote:

The "between" function is inclusive at both ends of the interval.

Thanks for the info. Oh, what a pity ... imagine I set the upper border of the lower range to 10:23.59, the lower border of the upper range to 10:23.60, and there is a frame at 10:23.595. Then this frame would not be caught by either range.

Untested command line, second try with the streamselect filter:

ffmpeg -i input.mp4 -lavfi split=3[a][b][c];[a]filter_A[d];[b]filter_B[e];[c]filter_C[f];[d][e][f]streamselect=inputs=3:map='gt(t,1000)+gt(t,2000)',filter_D out.mp4

The expression for "map" is 0 for t<=1000, 1 for t>1000 and 2 for t>2000. If you want a "greater or equal" expression, use "gte" instead of "gt".

Michael
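The map expression can be modelled in a few lines to see which input gets chosen (my own illustration, integer seconds only):

```shell
# Model of map='gt(t,1000)+gt(t,2000)': each gt() contributes 0 or 1,
# so the sum is the index of the selected streamselect input (0, 1 or 2).
map_index() {
  t=$1; idx=0
  if [ "$t" -gt 1000 ]; then idx=$((idx + 1)); fi
  if [ "$t" -gt 2000 ]; then idx=$((idx + 1)); fi
  echo "$idx"
}
map_index 500    # input 0 (filter_A)
map_index 1500   # input 1 (filter_B)
map_index 2500   # input 2 (filter_C)
```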
Re: [FFmpeg-user] How to filter time dependent
On 23.04.2021 at 23:19, Ulf Zibis wrote:

On 23.04.21 at 08:31, Gyan Doshi wrote:

On this topic generally, see https://superuser.com/q/1632998/

Many thanks for this hint. I was successful with this approach:

ffmpeg -i 'Stille Tage in Clichy.mpg' -vf "split=3[a][b][c];[a]trim=0:2888.36,setpts=PTS-STARTPTS,crop=352:496:0:36[a];[b]trim=2888.36:3293.96,setpts=PTS-STARTPTS,crop=352:496:0:42[b];[c]trim=3293.96,setpts=PTS-STARTPTS,crop=352:496:0:38[c];[a][b][c]concat=n=3:v=1:a=0",scale=iw:ih/2,hqdn3d=2:1.5:12 -preset slow -movflags +faststart 'Stille Tage in Clichy_c_3_scale_1_2_hqdn3d_2_1_12_slow.mp4'

You can simplify this by using sendcmd:

ffmpeg -i input.mp4 -vf "sendcmd=f=test.cmd,crop@1=352:496:0:36,scale=iw:ih/2,hqdn3d=2:1.5:12 out.mp4

The file "test.cmd" contains these two lines:

2888.36 crop@1 y 42;
3293.96 crop@1 y 38;

The above example is untested. There is a similar example at the end of chapter 2.86 in my book: http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael
Re: [FFmpeg-user] How to filter time dependent
On 24.04.2021 at 19:24, Michael Koch wrote:

On 23.04.2021 at 23:19, Ulf Zibis wrote:

On 23.04.21 at 08:31, Gyan Doshi wrote:

On this topic generally, see https://superuser.com/q/1632998/

Many thanks for this hint. I was successful with this approach:

ffmpeg -i 'Stille Tage in Clichy.mpg' -vf "split=3[a][b][c];[a]trim=0:2888.36,setpts=PTS-STARTPTS,crop=352:496:0:36[a];[b]trim=2888.36:3293.96,setpts=PTS-STARTPTS,crop=352:496:0:42[b];[c]trim=3293.96,setpts=PTS-STARTPTS,crop=352:496:0:38[c];[a][b][c]concat=n=3:v=1:a=0",scale=iw:ih/2,hqdn3d=2:1.5:12 -preset slow -movflags +faststart 'Stille Tage in Clichy_c_3_scale_1_2_hqdn3d_2_1_12_slow.mp4'

You can simplify this by using sendcmd:

ffmpeg -i input.mp4 -vf "sendcmd=f=test.cmd,crop@1=352:496:0:36,scale=iw:ih/2,hqdn3d=2:1.5:12 out.mp4

I forgot the quotation mark " at the end of the filter chain:

ffmpeg -i input.mp4 -vf "sendcmd=f=test.cmd,crop@1=352:496:0:36,scale=iw:ih/2,hqdn3d=2:1.5:12" out.mp4

"@1" can be omitted if there is only one crop filter in the filter chain.

Michael
Re: [FFmpeg-user] Possible to change pitch of audio in a downloaded mp4 file?
On 28.04.2021 at 08:51, Bo Berglund wrote:

Sometimes when I download a video it turns out to have some issue that has raised the audio pitch of the video, making it not so enjoyable to watch/hear. So I wonder if there is an ffmpeg command that can modify the pitch of the audio without changing the playback speed or lipsync?

Yes, this is possible with a combination of the asetrate, atempo and aresample filters. See chapter 3.4 in my book: http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael
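For an MP4 with video, a complete command following that recipe might look like the sketch below (untested; the 44100 Hz source rate and the 0.9 pitch factor are my assumptions). asetrate=39690 (i.e. 44100*0.9) lowers the pitch but also slows the audio down, atempo=1.1111 (about 1/0.9) restores the original speed, and aresample=44100 restores the original sample rate. The script only assembles and prints the command.

```shell
# Sketch: lower the audio pitch by a factor of 0.9 while keeping the
# playback speed, length and sample rate; the video stream is copied.
CMD='ffmpeg -i input.mp4 -af "asetrate=39690,atempo=1.1111,aresample=44100" -c:v copy output.mp4'
echo "$CMD"
```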
Re: [FFmpeg-user] Possible to change pitch of audio in a downloaded mp4 file?
On 28.04.2021 at 10:53, Bo Berglund wrote:

On Wed, 28 Apr 2021 09:03:26 +0200, Michael Koch wrote:

On 28.04.2021 at 08:51, Bo Berglund wrote:

Sometimes when I download a video it turns out to have some issue that has raised the audio pitch of the video, making it not so enjoyable to watch/hear. So I wonder if there is an ffmpeg command that can modify the pitch of the audio without changing the playback speed or lipsync?

Yes, this is possible with a combination of the asetrate, atempo and aresample filters. See chapter 3.4 in my book: http://www.astro-electronic.de/FFmpeg_Book.pdf

Thanks for the book! It has many useful items. But chapter 3.4 seems to deal only with modifying an audio file, whereas I am talking about an mp4 video with both audio and video content. I had tested this (which I found by googling) before I posted:

ffmpeg -i input20.mp4 -filter:a "atempo=1.25" -vn output20.mp4

There were no errors displayed, but the resulting file *ONLY* contains the audio part. I need both, and the audio change must not change the length or lipsync of the file.

Sure, that's because -vn means "no video output". Just remove this option and then your output file will contain audio and video.

I already have a script to change lipsync using ffmpeg this way (shifting audio 350 ms here):

ffmpeg -hide_banner -i input.mp4 -itsoffset -0.35 -i input.mp4 -map 1:v -map 0:a -c copy output.mp4

Is this basically what one has to do, specifying the input twice? I am not really understanding how all the ffmpeg arguments work...

Alternatively, you could also use the adelay filter.

Michael
Re: [FFmpeg-user] Possible to change pitch of audio in a downloaded mp4 file?
On 28.04.2021 at 23:34, Bo Berglund wrote:

On Wed, 28 Apr 2021 11:35:47 +0200, Michael Koch wrote:

I had tested this (which I found by googling) before I posted:

ffmpeg -i input20.mp4 -filter:a "atempo=1.25" -vn output20.mp4

There were no errors displayed, but the resulting file *ONLY* contains the audio part. I need both, and the audio change must not change the length or lipsync of the file.

Sure, that's because -vn means "no video output". Just remove this option and then your output file will contain audio and video.

OK, that included the video... But now there is no lip-sync at all. It seems to drift further apart as the video plays. Audio and video are running at different speeds.

The atempo filter changes the length of the audio track, while keeping the pitch and the sample rate constant. That's shown in the 5th command line in chapter 3.4 of my book. If you want to change only the audio pitch (and keep the sample rate and length constant), you must use a combination of the asetrate, atempo and aresample filters. That's the 2nd command line in that chapter.

Michael
Re: [FFmpeg-user] Convert flv to mp4. Bad video quality
On 28.05.2021 at 11:25, Flavio Sartoretto wrote:

I use ffmpeg to convert an fname.flv video to mp4:

ffmpeg -i fname.flv -c:v mpeg4 -copyts -loglevel verbose fname.mp4

The video quality of my output is bad. How can I improve it?

Add -q:v 1 to your command line. The number specifies the compression ratio: 0 is best quality and 9 is highest compression.

Michael
Re: [FFmpeg-user] Convert flv to mp4. Bad video quality
On 28.05.2021 at 17:18, Carl Eugen Hoyos wrote:

This is (nearly) completely wrong: 9 is still high quality; highest compression happens at a much higher value. Old MEncoder documentation recommends not using a value lower than 2; sane values start between 5 and 10.

This is important information and should be added to the documentation. The -q option is missing in chapter 16.16.1 in ffmpeg-all.html.

Michael
Re: [FFmpeg-user] FFmpeg 360 video Spherical metadata header injection
On 14.06.2021 at 18:23, Yann Cainjo wrote:

Hi FFmpeg users, when using FFmpeg to transcode 360 video, does anyone know if it is possible to inject spherical metadata (https://github.com/google/spatial-media/blob/master/docs/spherical-video-rfc.md) into the video header using only FFmpeg, without using the Google Spatial Metadata Injector (https://github.com/google/spatial-media/releases)?

As far as I know, this isn't yet possible with FFmpeg.

Michael
Re: [FFmpeg-user] How to improve result of transitions
However, now that I understand that for a 4-second transition you need 4 seconds on each video, I don't understand why you need two parameters, duration and offset. Why not simply define the duration, and have ffmpeg calculate the offset as videoDuration minus transitionDuration?

I think that's because when the transition begins, FFmpeg doesn't yet know the length of the input stream. It could also be a live stream with unknown length.

Michael
Re: [FFmpeg-user] Audio out of sync after video concatenation
On 22.06.2021 at 12:14, ibur...@compuscience.com wrote:

I concatenate 3 videos as below:

Video 1 - 13 seconds, with audio
Video 2 - 4 seconds, without audio (I generated this video using an xfade transition)
Video 3 - 4 seconds, with audio

If I concatenate videos 1 and 3, everything works fine; audio synchronization is correct from start to end. However, when I concatenate all 3 of them, the audio from video 3 starts playing a little bit before the end of video 2. As video 3 is a person talking, you clearly see that the audio doesn't match the mouth movements.

When you use the concat demuxer, all input streams must have the same size (width x height), the same video codec, the same framerate, the same audio codec, the same number of audio tracks and the same audio sample rate. There are two possible solutions:

1. You can add a silent audio track when you generate the second video. Just add a second input to your command line, for example: -f lavfi -i anullsrc=cl=stereo -shortest

2. Or use the "acrossfade" filter when you generate the second video.

Michael
Re: [FFmpeg-user] Audio out of sync after video concatenation
Am 22.06.2021 um 20:16 schrieb ibur...@compuscience.com: Have you tried adding (silent) audio to video 2 and trying your join operation again? You could use `anullsrc': This is what I am trying to do, but I wasn't able to accomplish the first step, which is adding silent audio to my video using anullsrc. I came up with the following command after doing some searching and also based on Michael's command sample: ffmpeg -i video.mp4 -f lavfi -i anullsrc=r=48000:cl=2 -c:v copy -c:a aac -shortest output.mp4 At the end of the console output I get the following: Press [q] to stop, [?] for help [aac @ 023281941140] Unsupported channel layout "1 channels (FR)" Try replacing cl=2 with cl=stereo. Michael
Re: [FFmpeg-user] Compressed file is larger than original...
Am 25.06.2021 um 20:40 schrieb ibur...@compuscience.com: I'm trying to reduce the size of an mp4 file to adjust it for streaming. The video is about one minute long and I used the sample command shown in https://trac.ffmpeg.org/wiki/Encode/YouTube. The original file size is ~18MB while the compressed file ended up at 25MB. Why would the compressed file be larger? Below is the ffmpeg output: ffmpeg -i HBR_TT747.mp4 -c:v libx264 -preset slow -crf 18 -c:a copy -pix_fmt yuv420p HBR_TT747_Compressed.mp4 Try a larger value for -crf. Michael
Re: [FFmpeg-user] Compressed file is larger than original...
--- Youtube: r_frame_rate=30/1 avg_frame_rate=30/1 bit_rate=1011745 nb_frames=1847 --- What seems significant to me here is that YouTube reduced the frame rate, and consequently the number of frames, to half. The quality of the video still seems reasonable for what I need it for, so is there a command in ffmpeg where I can reduce the frame rate and accomplish the same as YouTube? Add -r 30 to your command line. Michael
Re: [FFmpeg-user] Compressed file is larger than original...
Am 25.06.2021 um 22:52 schrieb ibur...@compuscience.com: add -r 30 to your command line I did that; my new command looks like: -i HBR_TT747.mp4 -c:v libx264 -preset slow -crf 27 -r 30 -c:a copy -pix_fmt yuv420p HBR_TT747_Compressed.mp4 And it took it into account; now I have: r_frame_rate=30/1 nb_frames=1849 BUT... the resulting file size is still 11.2MB. Isn't it strange that we now have half of the frames and the file size only moved from 11.5 to 11.2 MB? You have half the frames, but each frame has a better quality than before. Now you can increase the crf value even more. Michael
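To follow the advice of raising CRF further, a small sweep makes the size/quality trade-off visible (the CRF range here is a guess, not from the thread):

```shell
# encode the same clip at several CRF values and compare the file sizes;
# a higher CRF gives a smaller file at lower quality
for crf in 23 27 31; do
  ffmpeg -i HBR_TT747.mp4 -c:v libx264 -preset slow -crf "$crf" -r 30 \
         -c:a copy -pix_fmt yuv420p "HBR_TT747_crf${crf}.mp4"
done
```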
Re: [FFmpeg-user] Problem with changing options at runtime with a command
Hi Gyan, As the docs state, the acceptable commands are w, h, x, y, so the syntax is c crop -1 w 100 Is this documented somewhere? I mean typing "c" in the console while FFmpeg is running. I know that you mentioned it on stackoverflow some time ago, but I never found it in the FFmpeg docs. https://stackoverflow.com/questions/56058909/ffmpeg-drawtext-and-live-coordinates-with-sendcmd-zmq I think it should be added to chapter 34 in FFmpeg-all.html, "Changing options at runtime with a command". Michael
Re: [FFmpeg-user] Problem with changing options at runtime with a command
Am 03.07.2021 um 15:06 schrieb Michael Koch: [...] I think it should be added to chapter 34 in FFmpeg-all.html, "Changing options at runtime with a command". P.S. In the same chapter it could also be mentioned that two other methods exist: sendcmd and zmq. Michael
Re: [FFmpeg-user] Problem with changing options at runtime with a command
Am 03.07.2021 um 15:57 schrieb Alex Christoffer Rasmussen: thank you for the quick answer. When trying this out I notice 3 things. 1: the original size is kept. If the starting crop is crop=h=100, then using c crop -1 h 150 does nothing, and the other way around. What you are trying to do might be impossible. You can send a command to the crop filter and change the output height, but the next filter might not accept that you dynamically change the size. In some cases it works, for example "scale" immediately followed by "overlay". But in many other cases it doesn't work. Michael
[FFmpeg-user] looping an animated gif
Hello, I want to overlay an animated gif over a video. The gif must be looped because it's much shorter than the video. I did try the "-loop 1" option, but I get the error message "Option loop not found". However, the loop option works fine for jpg images. How can an animated gif be looped? ffmpeg -i background.MOV -loop 1 -i thumbsUp.gif -lavfi [0][1]overlay -t 10 out.mp4 Michael C:\Users\astro\Desktop>ffmpeg -i background.MOV -loop 1 -i thumbsUp.gif -lavfi [0][1]overlay -t 10 out.mp4 ffmpeg version 2021-07-04-git-301d275301-essentials_build-www.gyan.dev Copyright (c) 2000-2021 the FFmpeg developers built with gcc 10.3.0 (Rev2, Built by MSYS2 project) configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband libavutil 57. 0.100 / 57. 0.100 libavcodec 59. 3.100 / 59. 3.100 libavformat 59. 4.100 / 59. 4.100 libavdevice 59. 0.100 / 59. 0.100 libavfilter 8. 0.103 / 8. 0.103 libswscale 6. 0.100 / 6. 0.100 libswresample 4. 0.100 / 4. 0.100 libpostproc 56. 0.100 / 56. 
0.100 [mov,mp4,m4a,3gp,3g2,mj2 @ 01b82af0e140] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000 [mov,mp4,m4a,3gp,3g2,mj2 @ 01b82af0e140] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000. Guessed Channel Layout for Input Stream #0.1 : stereo Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'background.MOV': Metadata: major_brand : qt minor_version : 537986816 compatible_brands: qt pana creation_time : 2021-01-27T12:28:58.00Z com.panasonic.Semi-Pro.metadata.xml: [Panasonic DC-GH5S clip-metadata XML, garbled in the archive] Duration: 00:02:49.00, start: 0.00, bitrate: 26146 kb/s Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 24596 kb/s, 24 fps, 24 tbr, 24k tbn (default) Metadata: creation_time : 2021-01-27T12:28:58.00Z vendor_id : [0][0][0][0] timecode : 17:30:54:21 Stream #0:1(und): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, stereo, s16, 1536 kb/s (default) Metadata: creation_time : 2021-01-27T12:28:58.00Z vendor_id : pana timecode : 17:30:54:21 Stream #0:2(und): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
Re: [FFmpeg-user] looping an animated gif
Am 09.07.2021 um 08:49 schrieb Gyan Doshi: On 2021-07-09 12:07, Michael Koch wrote: [...] How can an animated gif be looped? GIF has a dedicated demuxer. See ffmpeg -h demuxer=gif There's a boolean option called -ignore_loop. You can also apply -stream_loop. -ignore_loop 0 did solve the problem. Thank you! Michael
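Applied to the original command from this thread, the fix would look like this (-ignore_loop 0 makes the GIF demuxer honor the file's own loop count instead of playing it once):

```shell
# loop the animated GIF so it covers the whole 10 s of the overlay
ffmpeg -i background.MOV -ignore_loop 0 -i thumbsUp.gif -lavfi "[0][1]overlay" -t 10 out.mp4
```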
Re: [FFmpeg-user] Pattern_Type Glob *.JPG wildcard input file support for Windows 10?
Am 13.07.2021 um 10:32 schrieb yaofahua--- via ffmpeg-user: This command may be helpful. cat *.jpg | ffmpeg -framerate 1/2 -pattern_type -f image2pipe -i - -filter_complex "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1:color=black,format=yuv420p" -r 30 -movflags +faststart output-1280-720.mp4 This looks like a UNIX command line. The question was for Windows 10. By the way, -f image2pipe is missing in the documentation ffmpeg-all.html Michael
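On Windows, where glob patterns are unsupported, a numbered image sequence achieves the same result with the image2 demuxer (the filenames img001.jpg, img002.jpg, ... are hypothetical; the files would have to be renamed or copied into such a sequence first):

```shell
# same filter chain as the piped UNIX variant, but reading a numbered
# sequence directly instead of cat-ing files through a pipe
ffmpeg -framerate 1/2 -i img%03d.jpg \
       -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1:color=black,format=yuv420p" \
       -r 30 -movflags +faststart output-1280-720.mp4
```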
Re: [FFmpeg-user] Question regarding Sample Rates and Formats
Am 26.07.2021 um 14:27 schrieb tim.russ...@med-associates.com: Does FFMPEG support sample rates (using the '-ar' parameter) of 192k? I don't see it listed in the documentation. Yes, that works; here is an example: ffmpeg -f lavfi -i sine=1000:d=10 -ar 192000 sine.wav Michael
Re: [FFmpeg-user] -pix_fmt + pixel format conversions
Am 29.07.2021 um 09:12 schrieb Green Koopa: My input file is yuvj420p(pc, bt709). My target output is yuv420p(tv, bt709). I would like to use "-pix_fmt +yuv420p" to specify the output format, and to force me to be explicit in my conversions in the filtergraph. How do I achieve explicit conversions? The format filter appears to trigger implicit conversions. "crop=2704:1520:0:0,format=yuv420p,eq=saturation=1.2" causes error The filters 'Parsed_crop_0' and 'Parsed_format_1' do not have a common format and automatic conversion is disabled. If I go back to relying on implicit conversions, how do I output these automatically added filters? Add "-v verbose" to your command line and then search for the green lines in the console listing. Michael
Re: [FFmpeg-user] Why FFMPEG?
Am 16.08.2021 um 15:28 schrieb Reindl Harald: with your idiotic phone 50% of my day would be wasted by switching apps and saying "ok, all fine there" Can you please continue this discussion off-list. Michael
[FFmpeg-user] show the audio waveform of a file
Hi, is it possible to convert the waveform of an entire audio file (or a segment of it) to a picture? The "showwavespic" filter has the problem that for each sample it draws a line from +level to -level. For example, if the input is a +0.5 DC signal, I can't decide from the output picture if the level was +0.5 or -0.5. I need a filter that draws only a pixel for each sample, like an oscilloscope. Michael
[FFmpeg-user] Is there an image viewer with FileSystemWatcher?
Hello, sometimes it's an iterative process to find the best parameters for a filter, especially when the filter changes brightness, contrast or colors: 1) Edit the filter parameters in the FFmpeg command line 2) Run FFmpeg 3) Open the output image in a viewer (normally I'm using IrfanView) 4) Realize that the output can be improved, close the image viewer and go back to step 1) Is there any image viewer that automatically detects when the image is overwritten and then shows the new image? I mean without closing and re-starting the viewer. Thanks, Michael
Re: [FFmpeg-user] Is there an image viewer with FileSystemWatcher?
Am 01.09.2021 um 09:30 schrieb Paul B Mahol: On Wed, Sep 1, 2021 at 9:26 AM Michael Koch wrote: [...] Is there any image viewer that automatically detects when the image is overwritten and then shows the new image? I mean without closing and re-starting the viewer. Use mpv, it supports all ffmpeg filters, and thus you need not use closed-source viewers at all. Thank you, I got mpv working with a complex filterchain. If anyone wants to try it, here is an example: rem FFmpeg: ffmpeg -i 7Z7A2027.jpg -filter_complex "split[1][2];[1]hue=h=0:s=1:b=-2[3];[2][3]hstack" -y out.jpg rem This is the same thing with MPV: mpv 7Z7A2027.jpg --keep-open=yes --lavfi-complex="[vid1]split[1][2];[1]hue=h=0:s=1:b=-2[3];[2][3]hstack,scale=iw/2:ih/2[vo]" Notes: 1) Don't use "-i" before the input file. 2) "--keep-open=yes" means that mpv doesn't close shortly after showing the output image. 3) "-filter_complex" must be replaced by "--lavfi-complex=". 4) The input pad in the filter chain must be called [vid1]. You can't omit it as in FFmpeg. 5) The output pad in the filter chain must be called [vo]. You can't omit it as in FFmpeg. Michael
[FFmpeg-user] toggle between two streams
I have two video streams and want to toggle between them in 1-second intervals. I did try this command: ffmpeg -i in1.mp4 -i in2.mp4 -lavfi streamselect=map='mod(t,2)' -t 30 out.mp4 This doesn't work because streamselect doesn't accept expressions. Who has an idea for a workaround? Thanks, Michael
Re: [FFmpeg-user] toggle between two streams
Am 03.09.2021 um 14:21 schrieb Michael Koch: I have two video streams and want to toggle between them in 1 second intervals. I did try this command: ffmpeg -i in1.mp4 -i in2.mp4 -lavfi streamselect=map='mod(t,2)' -t 30 out.mp4 This doesn't work because streamselect doesn't accept expressions. Who has an idea for a workaround? Found a workaround myself: blend=all_expr='if(lt(mod(T,2),1),A,B)' Michael
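Put into a complete command, the workaround looks like this (it assumes both inputs have the same size, framerate and pixel format, which the blend filter requires):

```shell
# show in1 during even seconds and in2 during odd seconds
ffmpeg -i in1.mp4 -i in2.mp4 -lavfi "blend=all_expr='if(lt(mod(T,2),1),A,B)'" -t 30 out.mp4
```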
Re: [FFmpeg-user] toggle between two streams
Am 03.09.2021 um 15:09 schrieb Paul B Mahol: [...] Found a workaround myself: blend=all_expr='if(lt(mod(T,2),1),A,B)' Slow; doesn't it support commands? In this case I didn't want to use commands because I wanted it to toggle for a very long (or infinite) time. It would be nice if streamselect accepted expressions. Michael
Re: [FFmpeg-user] toggle between two streams
Am 03.09.2021 um 15:30 schrieb Paul B Mahol: sendcmd supports expressions. Yes, but it's complicated and doesn't always work. rem make two images for testing ffmpeg -f lavfi -i color=yellow -vf drawtext='text="1":fontcolor=red:fontsize=100:x=140:y=80' -frames 1 -y 1.png ffmpeg -f lavfi -i color=yellow -vf drawtext='text="2":fontcolor=red:fontsize=100:x=140:y=80' -frames 1 -y 2.png ffmpeg -loop 1 -i 1.png -loop 1 -i 2.png -lavfi sendcmd=f=cmd.txt,streamselect=map=0 -t 10 -y out.mp4 The file cmd.txt contains this line: 0 [expr] streamselect map 'gte(mod(T,2),1)'; The above example does work. But if I write it all in the FFmpeg command line, then it doesn't work: ffmpeg -loop 1 -i 1.png -loop 1 -i 2.png -lavfi sendcmd=c="0 [expr] streamselect map 'gte(mod(T,2),1)'",streamselect=map=0 -t 10 -y out.mp4 Unable to parse graph description... Michael
Re: [FFmpeg-user] toggle between two streams
Am 03.09.2021 um 20:01 schrieb Reino Wijnsma: On 2021-09-03T16:25:50+0200, Michael Koch wrote: But if I write all in the FFmpeg command line, then it doesn't work: ffmpeg -loop 1 -i 1.png -loop 1 -i 2.png -lavfi sendcmd=c="0 [expr] streamselect map 'gte(mod(T,2),1)'",streamselect=map=0 -t 10 -y out.mp4 Unable to parse graph description... ffmpeg -loop 1 -i 1.png -loop 1 -i 2.png -lavfi "sendcmd=c='0 [expr] streamselect map '\''gte(mod(T\,2)\,1)'\''',streamselect=map=0" -t 10 -y out.mp4 Thank you, this command line is working fine. But it's difficult to understand. A simple task (toggle between two streams) should have a simple solution. Michael
[FFmpeg-user] Invitation: FFmpeg workshop
Hello, this week, from Thursday 2021-09-09 to Sunday 2021-09-12, the 12th "Sankt Andreasberger Teleskoptreffen" will take place; that's an event for amateur astronomers in the Harz Mountains in Germany. You are invited to observe the night sky and attend our lectures. You can come for one day, or you can stay overnight in your tent or camper. Within this event, an "FFmpeg Workshop" will take place on Saturday at 10 o'clock. I'll give a short overview of what can be done with FFmpeg, and the participants are invited to bring their own examples and questions. We'll try to find solutions together. The workshop will be held in German. There is a small participation fee, 7 EUR per day. All lectures and workshops are free for participants. If you want to attend, you must register in advance on our website: https://www.sternwarte-sankt-andreasberg.de/termine/ The usual "GGG" Covid rule applies: you must be fully vaccinated, recovered, or tested within the last 24 hours. See you soon, Michael
Re: [FFmpeg-user] High audio latency (although low with ffplay!)
Am 08.09.2021 um 11:44 schrieb Arif Driessen: about 3 seconds latency. I have also experimented with these flags: -thread_queue_size, -fflags nobuffer, -flags low_delay, -strict experimental, -re, -deadline realtime Any ideas? I had a similar problem with the "dshow" input device and solved it by setting -audio_buffer_size to a small value. However, I don't know if this is applicable to your pulse or alsa devices. Michael
Re: [FFmpeg-user] Quality Reduced when Burning Subtitles
Am 13.09.2021 um 10:42 schrieb Veronica & Stephen McGuckin: Thank you. Please can you explain a little bit more about what option I should set. Regards -Original Message- From: ffmpeg-user On Behalf Of Paul B Mahol Sent: 11 September 2021 16:54 To: FFmpeg user questions Subject: Re: [FFmpeg-user] Quality Reduced when Burning Subtitles Next time set an option for the overlay filter. On Sat, Sep 11, 2021 at 5:47 PM Veronica & Stephen McGuckin < vsmcguc...@outlook.com> wrote: Hello, I am trying to overlay pgs subtitles onto a 4K video. I have extracted 10 minutes to make sure I have the correct ffmpeg coding. The input file is 5.6 gigabytes with a bit rate of 78.6 Mb/s. I am using the following ffmpeg commands. ffmpeg -i c:\video\input.mkv -filter_complex "[0:v][0:s]overlay[v]" -c:v libx265 -pix_fmt yuv420p10le -profile:v high -x265-params "-crf=10 -film -hdr10+=1 -preset slow" -map "[v]" -map 0:a:0 -c:a copy -hdr10+ c:\video\output.mkv Use overlay=format=yuv420p10 (or whatever the pixel format of the input streams is). If you don't specify the format, the default would be yuv420 (which is 8-bit). Michael
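A sketch of the corrected filtergraph follows; everything outside the overlay option is taken from the original command, so treat it as an illustration rather than a verified HDR pipeline:

```shell
# force the overlay to be composited in a 10-bit format so burning the
# PGS subtitles doesn't silently convert the 10-bit video to 8-bit
ffmpeg -i input.mkv -filter_complex "[0:v][0:s]overlay=format=yuv420p10[v]" \
       -map "[v]" -map 0:a:0 -c:v libx265 -pix_fmt yuv420p10le -c:a copy output.mkv
```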
Re: [FFmpeg-user] Weird results with drawtext
Am 18.09.2021 um 18:07 schrieb Cecil Westerhof via ffmpeg-user: I have several of the following (simplified): drawtext= enable = 'between(t, 105, 115)': text = 'speaker': y = main_h - (text_h * 4), drawtext= enable = 'between(t, 105, 115)': text = 'subject': y = main_h - (text_h * 2.2), I have been carefully tweaking to get them in the correct place. But when I was satisfied and generated all eight places where I wanted to have them, the placement of the speaker and subject text is on different heights. Sometimes one or the other is different, sometimes both and sometimes they are the same. Is this a bug, or am I doing something wrong? The content of the variable "text_h" depends on which characters you are printing. If you don't use "text_h", then the characters "a" and "q" are printed at the same height: ffmpeg -f lavfi -i color=yellow -lavfi drawtext=text='a':x=20:y=50,drawtext=text='q':x=40:y=50 -frames 1 -y out1.png But if you use "text_h", then they are printed at different heights: ffmpeg -f lavfi -i color=yellow -lavfi drawtext=text='a':x=20:y=50-text_h,drawtext=text='q':x=40:y=50-text_h -frames 1 -y out2.png Michael
Re: [FFmpeg-user] Weird results with drawtext
Am 18.09.2021 um 18:22 schrieb Michael Koch: [...] It gets even worse when you try to print apostrophes. It's impossible to print all characters at the same height. It doesn't work without text_h: ffmpeg -f lavfi -i color=yellow -lavfi drawtext=text='a_':x=20:y=50,drawtext=text='_gG':x=40:y=50,drawtext=text='``':x=60:y=50 -frames 1 -y out1.png And it also doesn't work with text_h subtracted: ffmpeg -f lavfi -i color=yellow -lavfi drawtext=text='a_':x=20:y=50-text_h,drawtext=text='_gG':x=40:y=50-text_h,drawtext=text='``':x=60:y=50-text_h -frames 1 -y out2.png Michael
Re: [FFmpeg-user] Weird results with drawtext
Am 18.09.2021 um 18:51 schrieb Michael Koch: [...] It gets even worse when you try to print apostrophes. It's impossible to print all characters at the same height. [...] Ticket 9427. Michael
Re: [FFmpeg-user] Weird results with drawtext
Am 19.09.2021 um 01:08 schrieb Cecil Westerhof via ffmpeg-user: [...] As Michael Koch said: The content of the variable "text_h" depends on which characters you are printing. So I changed: drawtext= enable = 'between(t, 105, 115)': text = 'speaker': y = main_h - (text_h * 4), drawtext= enable = 'between(t, 105, 115)': text = 'subject': y = main_h - (text_h * 2.2), to: drawtext= enable = 'between(t, 105, 115)': text = 'speaker': y = main_h - 200, drawtext= enable = 'between(t, 105, 115)': text = 'subject': y = main_h - 140, That gives satisfactory results. The only problem is when the font size changes. Then I have to remember that I need to change these two values also. But I do not expect that to happen often, so I can live with that. In your case with the words "speaker" and "subject" it works, but generally it fails. For example, if you remove the character "k" from "speaker", then the vertical alignment will be wrong. Michael
Re: [FFmpeg-user] Weird results with drawtext
Am 19.09.2021 um 01:08 schrieb Cecil Westerhof via ffmpeg-user: [...] That gives satisfactory results. The only problem is when the font size changes. Then I have to remember that I need to change these two values also. As a workaround you could define the strings in a subtitle file and burn the subtitles into the frames. See chapters 2.164 and 2.165 in my book: http://www.astro-electronic.de/FFmpeg_Book.pdf Michael
Re: [FFmpeg-user] Weird results with drawtext
Am 19.09.2021 um 12:42 schrieb Cecil Westerhof via ffmpeg-user: Michael Koch writes: As Michael Koch said: The content of the variable "text_h" depends on which characters you are printing. So I changed: drawtext= enable = 'between(t, 105, 115)': text = 'speaker': y = main_h - (text_h * 4), drawtext= enable = 'between(t, 105, 115)': text = 'subject': y = main_h - (text_h * 2.2), to: drawtext= enable = 'between(t, 105, 115)': text = 'speaker': y = main_h - 200, drawtext= enable = 'between(t, 105, 115)': text = 'subject': y = main_h - 140, That gives satisfactory results. The only problem is when the font size changes. Then I have to remember I need to change these two values also. But I do not expect that to happen often, so I can live with that. In your case with the words "speaker" and "subject" it works, but generally it fails. For example if you remove the character "k" from "speaker", then the vertical alignment will be wrong. Speaker and subject were only templates; in reality there is a speaker and a subject. I made a video with eight sets of speakers and subjects and all look good (enough). So I think (hope) that it works. But who knows, maybe I need to go back to the drawing-board next time. (But I certainly hope not.) I found a simple solution. Replace y=100-text_h with y=100-ascent. This seems to work for all characters. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] -framerate, -r and -itsoffset confusion
Am 20.09.2021 um 10:45 schrieb Duncan Robertson: What is the difference between -framerate and -r? -framerate is the framerate used for reading in the images. -r is the output framerate, it's 25 by default if you don't use this option. If -framerate is larger than -r, then some frames are dropped. If -framerate is smaller than -r, then some frames are duplicated. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
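The difference between the two options can be tried with a small sketch (all file names are illustrative): ten frames read in at 5 fps give 2 seconds of input, and forcing the output to 25 fps duplicates each source frame.

```shell
# Generate 10 numbered test frames as illustrative input.
ffmpeg -v error -f lavfi -i testsrc2=s=160x120 -frames 10 -y img%03d.png

# -framerate is an input option of the image demuxer: read the 10 images
# at 5 fps, i.e. 2 seconds of input. -r 25 is an output option: the output
# runs at 25 fps, so each source frame is duplicated to fill the gaps.
ffmpeg -v error -framerate 5 -i img%03d.png -r 25 -c:v mpeg4 -y out.mp4
```

With -framerate 50 instead of 5 the input would be faster than the output rate and frames would be dropped instead of duplicated.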
Re: [FFmpeg-user] scaling algorithms
Am 20.09.2021 um 21:51 schrieb Paul B Mahol: On Sat, Oct 17, 2015 at 3:58 PM Michael Koch wrote: > That looks like the job for the filter, see inflate, deflate, erosion and dilation filters. I just tested the dilation filter. It's a nice workaround if the downscaling factor is 3:1, as in my example. But if the downscaling factor is smaller, for example from 5472x3648 to 900x600, which is 6:1, then it fails. The star would be visible in 50% of the frames and invisible in the other 50%. What I need is a scaling algorithm which automatically chooses the brightest pixel out of the corresponding pixels from the input file. If the downscaling factor is 6:1, then the neighborhood is 6x6 pixels. See new morpho filter, that gonna be in master soon. It is much faster than current filters and allow custom definition of structure/mask (rectangle/circle/etc) by using 2nd stream. That sounds interesting. Is the size of the neighborhood over which the filter is working defined by the size of the 2nd stream? Which pixel format must be used for the 2nd stream? What's the meaning of the pixel values in the 2nd stream? White = pixel is used, black = pixel is not used? What happens if a pixel is gray? Is the original pixel in the center of the structure or is it the top left pixel? What's the meaning of "open" and "close"? Please add some examples for the structure in the 2nd stream. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] scaling algorithms
Am 21.09.2021 um 08:57 schrieb Paul B Mahol: See new morpho filter, that gonna be in master soon. It is much faster than current filters and allow custom definition of structure/mask (rectangle/circle/etc) by using 2nd stream. I'd like to test it. Please push it. What's the meaning of "open" and "close"? Please add some examples for the structure in the 2nd stream. Use google. and learn something about morphological transforms. FFmpeg can not also educate an uneducated. In such cases it would be useful to find in the filter's documentation a link to a site where it's explained, for example: https://en.wikipedia.org/wiki/Mathematical_morphology Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-user] blend filter
I like the sample images for the blend filter that Paul has added here: https://trac.ffmpeg.org/wiki/Blend Can you please also share the command line for making these images? Thanks, Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] blend filter
Am 30.09.2021 um 19:19 schrieb Paul B Mahol: On Thu, Sep 30, 2021 at 7:06 PM Michael Koch wrote: I like the sample images for the blend filter that Paul has added here: https://trac.ffmpeg.org/wiki/Blend Can you please also share the command line for making these images? mpv av://lavfi:color=s=256x256,format=gbrp,geq=r=H-1-Y:g=H-1-Y:b=H-1-Y -vf "lavfi=[split[a][b];[b]transpose[b];[a][b]blend=all_mode=harmonic,pseudocolor=preset=turbo]" ffmpeg -f lavfi -i color=s=256x256,format=gbrp,geq=r=H-1-Y:g=H-1-Y:b=H-1-Y -vf "split[a][b];[b]transpose[b];[a][b]blend=all_mode=harmonic,pseudocolor=preset=turbo" harmonic.png thank you! Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] blend filter
Am 30.09.2021 um 19:05 schrieb Michael Koch: I like the sample images for the blend filter that Paul has added here: https://trac.ffmpeg.org/wiki/Blend The documentation of the "opacity" options of the blend filter says: "Only used in combination with pixel component blend modes." What is a "pixel component blend mode"? It seems that the opacity options are used for many (or all?) predefined modes that are set with all_mode=..., but they aren't used if a user-defined mode is set with all_expr=... Is this correct? Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 07:27 schrieb Shailesh kumar Dangi via ffmpeg-user: Hi Team, This is Shailesh Kumar Dangi, Working on Maventus Group Inc. I am using FFMPEG for video compression for bitrate reduction and I want to be the same aspect ratio as the original video aspect ratio. I am facing a problem with the video. Which is having some black space from the left and right sides. In this type of video aspect ratio getting changes. Can you please help the keep same aspect ratio the same as the original video aspect ratio which is not compressed? Please show your command line and the complete console output. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 08:03 schrieb Shailesh kumar Dangi via ffmpeg-user: Thanks, Michael. Here is the command I am using to reduce the bitrate: ffmpeg -y -i original.MOV -b:v 4.5M -map_metadata 0:s:0 -c:v libx264 -preset superfast -tune film -ac 2 -c:a aac -maxrate 4M -bufsize 3M -strict -2 -vf scale=iw:ih -f mp4 output1.mp4 Please show also the complete console output. I think -vf scale=iw:ih can be omitted. It's useless because it means "leave the width and height as they are". Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 08:33 schrieb Shailesh kumar Dangi via ffmpeg-user: Hi, Please find below as an attachment. In the attachment input command and output, the console is there. Actual Input what i am using ffmpeg -y -i original.mov -b:v 4.5M -map_metadata 0:s:0 -c:v libx264 -preset superfast -tune film -ac 2 -c:a aac -maxrate 4M -bufsize 3M -strict -2 -f mp4 client.mp4 2<&1 It seems the input is 1920x1080 but the output is 1080x1920. I don't yet understand why. Does the original video look ok if you play it in VLC or FFplay? Does it work if you add -vf scale=1920:1080 ? Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 09:07 schrieb Shailesh kumar Dangi via ffmpeg-user: Yes, the input video contains some black space around the video, as you can see in the attachment below. If we pass the scale "-vf scale=1920:1080" then the video will be stretched. If we do not pass the scale, the dimensions will swap, I mean 1080:1920. [image: Screenshot 2021-10-05 at 12.33.20 PM.png] Do we have any solutions for this? I don't think that the input video contains black spaces. The input video is 1920x1080 and it has "rotate=90" set in the metadata. That's why FFmpeg rotates the video, the output is 1080x1920, and there are no black spaces in the output. If you don't like this output, what do you want instead? Do you want to cut off the top and bottom part? Then you must use the crop filter. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 09:54 schrieb Shailesh kumar Dangi via ffmpeg-user: Hi, Our final goal is to reduce the bitrate is below 5MB and the Aspect ratio, height, and width need to be the same. The bitrate is already below 5Mb/s, look at the end of the console output: bitrate=3772.4kbits/s Aspect ratio, width and height are also the same: Input: 1920x1080 with 90° rotation in metadata Output: 1080x1920 It seems I don't understand what you want to do. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 10:18 schrieb Shailesh kumar Dangi via ffmpeg-user: Do we have anything to stop rotation? Try to add the option -noautorotate Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
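The effect of the rotate metadata (and of -noautorotate) can be reproduced without the original recording; the following is a sketch with illustrative file names, and whether the rotate tag round-trips through the mp4 muxer can depend on the FFmpeg version:

```shell
# Create a small landscape clip and tag it with rotate=90 metadata,
# as a stand-in for a phone recording.
ffmpeg -v error -f lavfi -i testsrc2=s=320x180:d=1 -metadata:s:v rotate=90 -c:v mpeg4 -y tagged.mp4

# Default behaviour: ffmpeg applies the rotation tag while decoding,
# so the re-encoded frames come out transposed (portrait).
ffmpeg -v error -i tagged.mp4 -c:v mpeg4 -y rotated.mp4

# With -noautorotate the frames are left exactly as stored (landscape),
# and the rotation tag is not applied.
ffmpeg -v error -noautorotate -i tagged.mp4 -c:v mpeg4 -y unrotated.mp4
```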
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 12:39 schrieb Shailesh kumar Dangi via ffmpeg-user: Hi, When I use to stop rotation. -noautorotate. it is working fine in my local system, But when I upload the code on the Linux server after compression video itself gets to rotate. Original Video- https://maventus-us-east.s3.amazonaws.com/videos/6155981eac83d208.mp4 Compressed Video - https://maventus-us-east.s3.amazonaws.com/videos/615c22997dd6a208.mp4 I don't want to be rotated after compression and it should look the same as the original one. You should always show the command line and the console output when asking here. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] Regarding video compression
Am 05.10.2021 um 12:55 schrieb Shailesh kumar Dangi via ffmpeg-user: Please find an attachment as a command with the console. ffmpeg version 2.8.17-0ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609 Your FFmpeg version is much too old. You should update it. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-user] shuffleplanes
Hello, I have a question about the shuffleplanes filter. In this example the pixel format is RGB24. So the first plane is red and the second plane is green. I want to swap the red and green planes. ffmpeg -f lavfi -i testsrc2 -lavfi format=rgb24,split[a][b];[b]shuffleplanes=1:0:2[b];[a][b]vstack -frames 1 -y out.png But in the output green is swapped with blue. I don't understand why. I'm using the latest Windows build, just a few days old. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] shuffleplanes
Am 11.10.2021 um 11:26 schrieb Paul B Mahol: On Mon, Oct 11, 2021 at 11:15 AM Michael Koch wrote: Hello, I have a question about the shuffleplanes filter. In this example the pixel format is RGB24. So the first plane is red and the second plane is green. I want to swap the red and green planes. ffmpeg -f lavfi -i testsrc2 -lavfi format=rgb24,split[a][b];[b]shuffleplanes=1:0:2[b];[a][b]vstack -frames 1 -y out.png But in the output green is swapped with blue. I don't understand why. I'm using the latest Windows build, just a few days old. As usual you add additional premise that does not makes sense and is incorrect. rgb24 is not planar but packed. gbrp is used internally and works as expected, as filter does not care about R/G/B/A order. It just shuffles planes. thanks, that explains the unexpected result. It would be helpful to add a note to the documentation: "The input pixel format must be planar, for example gbrp. If a non-planar pixel format is used (for example rgb24), a format conversion is inserted automatically and you might get unexpected results." Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
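Following Paul's explanation, a red/green swap works if a planar format is forced first. In gbrp the plane order is G, B, R, so exchanging red and green means output plane 0 takes input plane 2 and vice versa (a sketch; the output file name is illustrative):

```shell
# Force planar RGB (gbrp: plane 0 = G, plane 1 = B, plane 2 = R),
# then swap R and G: output G <- input plane 2 (R), B stays,
# output R <- input plane 0 (G).
ffmpeg -v error -f lavfi -i testsrc2=s=320x180 -vf "format=gbrp,shuffleplanes=2:1:0" -frames 1 -y swapped.png
```

With rgb24 input and no explicit format, the automatically inserted conversion to gbrp is what made the original 1:0:2 mapping swap green and blue instead.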
[FFmpeg-user] fftfilt
I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. If I set the weight_U and weight_V options to 1, then the greenish tint disappears: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)':weight_U=1:weight_V=1 -y out2.png Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 12.10.2021 um 18:52 schrieb Paul B Mahol: On Tue, Oct 12, 2021 at 10:55 AM Michael Koch wrote: I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. Expressions by default for U and V are copied from Y if are unset. filter works only in YUV or gray space thus in above combination one gets green tint. ok, that explains the greenish tint. I think in most cases it doesn't make sense to copy the weight_Y expression to the U and V weights. It would be better to set them to 1 by default. With the current behaviour, all four examples in the documentation are incomplete and :weight_U=1:weight_V=1 must be added. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 12.10.2021 um 19:29 schrieb Michael Koch: Am 12.10.2021 um 18:52 schrieb Paul B Mahol: On Tue, Oct 12, 2021 at 10:55 AM Michael Koch wrote: I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. Expressions by default for U and V are copied from Y if are unset. filter works only in YUV or gray space thus in above combination one gets green tint. I'm trying to make the filter's cutoff frequency independent of the image size. But that's not so easy because I don't know the size of the FFT array. It's calculated in vf_fftfilt.c lines 185 and 297. This calculation is difficult (and slow) to replicate in an expression, because either a loop or a logarithm is required. Would it be possible to add two new variables so that the FFT array size can be used in an expression? ARRAY_H = 1 << rdft_hbits ARRAY_V = 1 << rdft_vbits If the array size is known, things would become much easier. Thanks, Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 13.10.2021 um 22:57 schrieb Paul B Mahol: On Wed, Oct 13, 2021 at 10:51 PM Michael Koch wrote: Am 12.10.2021 um 19:29 schrieb Michael Koch: Am 12.10.2021 um 18:52 schrieb Paul B Mahol: On Tue, Oct 12, 2021 at 10:55 AM Michael Koch wrote: I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. Expressions by default for U and V are copied from Y if are unset. filter works only in YUV or gray space thus in above combination one gets green tint. I'm trying to make the filter's cutoff frequency independant of the image size. But that's not so easy because I don't know the size of the FFT array. It's calculated in vf_fftfilt.c lines 185 and 297. This calculation is difficult (and slow) to replicate in an expression, because either a loop or a logarithm is required. Would it be possible to add two new variables so that the FFT array size can be used in an expression? ARRAY_H = 1 << rdft_hbits ARRAY_V = 1 << rdft_vbits It the array size is known, things would become much easier. What are you attempting to do? In the current state the filter's cutoff frequency is a function of image size. The cutoff frequency jumps by a factor 2 when the image size increases from 230 to 232. 
That's because of the factor 10/9 in line 185. I want to make the cutoff frequency independent of image size, and therefore I need the array size. It will work for all types of filters: lowpass, highpass, bandpass and notch. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
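The array size in question can also be computed outside FFmpeg. A sketch of the rounding that vf_fftfilt.c performs, per the 10/9 factor mentioned above (the function name is illustrative): the FFT dimension is the smallest power of two not below ceil(size*10/9).

```shell
# fft_size W prints the FFT array size fftfilt would use for dimension W:
# the smallest power of two >= ceil(W*10/9).
fft_size() {
  awk -v w="$1" 'BEGIN {
    t = int(w * 10 / 9); if (t * 9 < w * 10) t++;   # ceil(w*10/9)
    n = 1; while (n < t) n *= 2;                    # next power of two
    print n
  }'
}

fft_size 230   # -> 256
fft_size 232   # -> 512  (the factor-2 jump between sizes 230 and 232)
```

For size 230, ceil(230*10/9) = 256 is already a power of two; for 232, ceil(232*10/9) = 258 rounds up to 512, which is exactly the jump described above.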
Re: [FFmpeg-user] fftfilt
Am 13.10.2021 um 23:09 schrieb Michael Koch: Am 13.10.2021 um 22:57 schrieb Paul B Mahol: On Wed, Oct 13, 2021 at 10:51 PM Michael Koch wrote: Am 12.10.2021 um 19:29 schrieb Michael Koch: Am 12.10.2021 um 18:52 schrieb Paul B Mahol: On Tue, Oct 12, 2021 at 10:55 AM Michael Koch wrote: I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. Expressions by default for U and V are copied from Y if are unset. filter works only in YUV or gray space thus in above combination one gets green tint. I'm trying to make the filter's cutoff frequency independant of the image size. But that's not so easy because I don't know the size of the FFT array. It's calculated in vf_fftfilt.c lines 185 and 297. This calculation is difficult (and slow) to replicate in an expression, because either a loop or a logarithm is required. Would it be possible to add two new variables so that the FFT array size can be used in an expression? ARRAY_H = 1 << rdft_hbits ARRAY_V = 1 << rdft_vbits It the array size is known, things would become much easier. What are you attempting to do? In the current state the filter's cutoff frequency is a function of image size. 
The cutoff frequency jumps by a factor 2 when the image size increases from 230 to 232. That's because of the factor 10/9 in line 185. I want to make the cutoff frequency independent of image size, and therefore I need the array size. It will work for all types of filters: lowpass, highpass, bandpass and notch. You can reproduce that with these commands: set "P=8" rem create 230x230 test image ffmpeg -f lavfi -i color=black:s=230x230 -lavfi geq='r=127.5+127.5*cos((X-W/2)*PI/(pow(2,(1+2*Y/H))))',colorchannelmixer=1:0:0:0:1:0:0:0:1:0:0:0,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png rem bandpass for wavelength 8 pixels ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='between(hypot(Y/H,X/W),1.9/%P%,2.1/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y out230.png rem create 232x232 test image ffmpeg -f lavfi -i color=black:s=232x232 -lavfi geq='r=127.5+127.5*cos((X-W/2)*PI/(pow(2,(1+2*Y/H))))',colorchannelmixer=1:0:0:0:1:0:0:0:1:0:0:0,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png rem bandpass for wavelength 8 pixels ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='between(hypot(Y/H,X/W),1.9/%P%,2.1/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y out232.png pause I already tried to make the cutoff frequency independent of image size by using X/W and Y/H. That works fine if the image size is halved or doubled. But it doesn't work if the image size is increased by a smaller factor. I think the correct way is to use X/ARRAY_H and Y/ARRAY_V. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 13.10.2021 um 23:52 schrieb Michael Koch: Am 13.10.2021 um 23:09 schrieb Michael Koch: Am 13.10.2021 um 22:57 schrieb Paul B Mahol: On Wed, Oct 13, 2021 at 10:51 PM Michael Koch wrote: Am 12.10.2021 um 19:29 schrieb Michael Koch: Am 12.10.2021 um 18:52 schrieb Paul B Mahol: On Tue, Oct 12, 2021 at 10:55 AM Michael Koch wrote: I have a question about the "fftfilt" filter. What's the default value of the weight_U and weight_V options? I'm asking because I get an unexpected result. This command line creates my input image for testing: ffmpeg -f lavfi -i color=black:s=300x50 -lavfi drawgrid=c=white:y=-1:w=2:h=51,split[a][b];[b]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[b][c];[c]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[c][d];[d]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[d][e];[e]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor,split[e][f];[f]crop=iw/2:x=0,scale=2*iw:ih:flags=neighbor[f];[a][b][c][d][e][f]vstack=6,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png This is the fftfilt lowpass example from the official documentation: ffmpeg -i test.png -vf fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)' -y out1.png Problem: The output has a greenish tint. Expressions by default for U and V are copied from Y if are unset. filter works only in YUV or gray space thus in above combination one gets green tint. I'm trying to make the filter's cutoff frequency independant of the image size. But that's not so easy because I don't know the size of the FFT array. It's calculated in vf_fftfilt.c lines 185 and 297. This calculation is difficult (and slow) to replicate in an expression, because either a loop or a logarithm is required. Would it be possible to add two new variables so that the FFT array size can be used in an expression? ARRAY_H = 1 << rdft_hbits ARRAY_V = 1 << rdft_vbits It the array size is known, things would become much easier. What are you attempting to do? 
Below is a tested example for FFT filtering where the filter frequency (or wavelength) isn't a function of image size. The array size is calculated by complicated macros. That's why I suggest making the array size available as variables. Michael set "P=8" :: filter wavelength = pixels per linepair set "SIZE_H=460" :: horizontal size of test image set "SIZE_V=230" :: vertical size of test image set "ARRAY_H=pow(2,ceil(log(ceil(%SIZE_H%*10/9))/log(2)))" :: macro for horizontal fft array size set "ARRAY_V=pow(2,ceil(log(ceil(%SIZE_V%*10/9))/log(2)))" :: macro for vertical fft array size :: create test image, wavelength varies continuously from 4 to 8 (in the center) to 16 pixels per linepair: ffmpeg -f lavfi -i color=black:s=%SIZE_V%x%SIZE_V% -lavfi geq='r=127.5+127.5*cos((X-W/2)*PI/(pow(2,(1+2*Y/H))))',colorchannelmixer=1:0:0:0:1:0:0:0:1:0:0:0,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png :: lowpass, highpass, bandpass and notch filtering: ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='lte(hypot(X/%ARRAY_H%,Y/%ARRAY_V%),2.0/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y lowpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='gte(hypot(X/%ARRAY_H%,Y/%ARRAY_V%),2.0/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y highpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='between(hypot(X/%ARRAY_H%,Y/%ARRAY_V%),1.8/%P%,2.2/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y bandpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='1-between(hypot(X/%ARRAY_H%,Y/%ARRAY_V%),1.8/%P%,2.2/%P%)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y notch.png pause ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 12:07 schrieb Paul B Mahol: Sorry but I'm not on windows, so I can not use your script. Then try the below (slightly improved) version. It would become much simpler with variables: ARRAY_H = pow(2,ceil(log(ceil(W*10/9))/log(2))) ARRAY_V = pow(2,ceil(log(ceil(H*10/9))/log(2))) The test image contains wavelengths from 4 to 8 (in the center) to 16 pixels per linepair. The filter wavelength is independent of input size. You can change the size in the first command (but it must be 1:1 aspect ratio, otherwise hstack would fail) Michael ffmpeg -f lavfi -i color=black:s=230x230 -lavfi geq='r=127.5+127.5*cos((X-W/2)*PI/(pow(2,(1+2*Y/H))))',colorchannelmixer=1:0:0:0:1:0:0:0:1:0:0:0,split[h][v];[v]transpose[v];[v][h]hstack -frames 1 -y test.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='lte(hypot(X/pow(2,ceil(log(ceil(W*10/9))/log(2))),Y/pow(2,ceil(log(ceil(H*10/9))/log(2)))),1.0/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y lowpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='gte(hypot(X/pow(2,ceil(log(ceil(W*10/9))/log(2))),Y/pow(2,ceil(log(ceil(H*10/9))/log(2)))),1.0/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y highpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=128:dc_U=1:dc_V=1:weight_Y='between(hypot(X/pow(2,ceil(log(ceil(W*10/9))/log(2))),Y/pow(2,ceil(log(ceil(H*10/9))/log(2)))),0.8/8,1.2/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y bandpass.png ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='1-between(hypot(X/pow(2,ceil(log(ceil(W*10/9))/log(2))),Y/pow(2,ceil(log(ceil(H*10/9))/log(2)))),0.8/8,1.2/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y notch.png ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 12:53 schrieb Paul B Mahol: On Thu, Oct 14, 2021 at 12:29 PM Michael Koch wrote: Am 14.10.2021 um 12:07 schrieb Paul B Mahol: Sorry but I'm not on windows, so I can not use your script. Then try the below (slightly improved) version. It would become much simpler with variables: ARRAY_H = pow(2,ceil(log(ceil(W*10/9))/log(2))) ARRAY_V = pow(2,ceil(log(ceil(H*10/9))/log(2))) The test image contains wavelengths from 4 to 8 (in the center) to 16 pixels per linepair. The filter wavelength is independent of input size. You can change the size in the first command (but it must be 1:1 aspect ratio, otherwise hstack would fail) Since when hstack fails because of different aspect ratio? Because I first make the left half of the test image, then transpose it and then hstack them together. The fftfilt examples do of course work with any aspect ratio. Michael ___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 13:01 schrieb Paul B Mahol:
> On Thu, Oct 14, 2021 at 12:59 PM Michael Koch wrote:
>> [...] Because I first make the left half of the test image, then
>> transpose it and then hstack them together. The fftfilt examples do of
>> course work with any aspect ratio.
> I think you do not understand what aspect ratio really means.

width / height
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 13:11 schrieb Paul B Mahol:
> [...] Doesn't all this introduce aliasing if not using power of 2 width
> and height square dimensions?

I'm not sure if I understand what you mean. Do you see a significant difference if you make the input size 250x250 or 256x256?

Michael
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 13:23 schrieb Paul B Mahol:
> On Thu, Oct 14, 2021 at 1:18 PM Michael Koch wrote:
>> [...] Do you see a significant difference if you make the input size
>> 250x250 or 256x256?
> It did for the first test.png you sent.

The first version was with square waves. It's better to test with the last version, which uses sine waves and a wavelength that changes continuously.

Michael
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 13:29 schrieb Michael Koch:
> Am 14.10.2021 um 13:23 schrieb Paul B Mahol:
>> [...] It did for the first test.png you sent.

This is the command line for the lowpass filter, where the filter frequency isn't a function of the input size. The constant "8" is the filter wavelength in pixels per line pair.

ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='lte(hypot(X/pow(2,ceil(log(ceil(W*10/9))/log(2))),Y/pow(2,ceil(log(ceil(H*10/9))/log(2)))),1.0/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y lowpass.png

With two new variables for the FFT array size

ARRAY_H = pow(2,ceil(log(ceil(W*10/9))/log(2)))
ARRAY_V = pow(2,ceil(log(ceil(H*10/9))/log(2)))

the command line could be simplified to:

ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='lte(hypot(X/ARRAY_H,Y/ARRAY_V),1.0/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y lowpass.png

It's possible to simplify even more by defining

X_REL = X / ARRAY_H
Y_REL = Y / ARRAY_V

Then the command line is

ffmpeg -i test.png -vf scale=2*iw:2*ih,fftfilt=dc_Y=0:dc_U=0:dc_V=0:weight_Y='lte(hypot(X_REL,Y_REL),1.0/8)':weight_U=1:weight_V=1,scale=iw/2:ih/2 -y lowpass.png

By the way, fftfilt also has another problem. If the image contains the highest possible frequency (pixels are black, white, black, white and so on), it can't be filtered out with a lowpass filter. I think that's because of the YUV subsampling. As a workaround I scale the image up before filtering and scale it down after filtering.

Michael
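[Editor's note] The 1.0/8 threshold can be read as a cutoff bin: hypot(X/ARRAY_H, Y/ARRAY_V) <= 1/8 keeps every bin whose radius is at most ARRAY/8, i.e. spatial frequencies of one cycle per 8 pixels or lower. A shell sketch with assumed array sizes:

```shell
# For a given FFT array size and filter wavelength (pixels per line pair),
# the lowpass keeps bins with X/ARRAY_H <= 1/wavelength, i.e. X up to
# ARRAY_H/wavelength. Example values are assumed, not from the thread.
cutoff_bin() {
  echo $(( $1 / $2 ))
}
cutoff_bin 512 8    # prints 64: bins 0..64 are kept
cutoff_bin 256 8    # prints 32
```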
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 16:13 schrieb Michael Koch:
> This is the command line for the lowpass filter, where the filter
> frequency isn't a function of the input size. The constant "8" is the
> filter wavelength in pixels per line pair.
> [...] With two new variables for the FFT array size
> ARRAY_H = pow(2,ceil(log(ceil(W*10/9))/log(2)))
> ARRAY_V = pow(2,ceil(log(ceil(H*10/9))/log(2)))

Thanks for the patch. I will test it in a few days.

Michael
Re: [FFmpeg-user] fftfilt
Am 14.10.2021 um 23:34 schrieb Michael Koch:
> Am 14.10.2021 um 16:13 schrieb Michael Koch:
>> [...] With two new variables for the FFT array size
>> ARRAY_H = pow(2,ceil(log(ceil(W*10/9))/log(2)))
>> ARRAY_V = pow(2,ceil(log(ceil(H*10/9))/log(2)))
> Thanks for the patch. I will test it in a few days.

Tested and working fine. Thanks,

Michael
Re: [FFmpeg-user] Alternative to Dynamic Text
Am 01.11.2021 um 16:21 schrieb LianCheng Tan:
> Hi, Beside using textfile (reload=1) in drawtext for dynamic text, is
> there any other method to add dynamic texts onto the video stream?

You could use subtitles. It's described in the wiki:
https://trac.ffmpeg.org/wiki/WikiStart

There are also some examples for subtitles in my book, see chapters 2.176 and 1.177:
http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael
Re: [FFmpeg-user] How to contribute to Wiki
Am 01.11.2021 um 15:15 schrieb PPRJ01:
> Hello All, I have been using ffmpeg for two years and I would be
> pleased to contribute to the ffmpeg wiki. I don't know how to do it.
> Can you help me please?

First you need an account and you must log in. Then you go to the wiki page that you want to edit. At the bottom of the page is a button "Edit this page".

Michael
Re: [FFmpeg-user] Alternative to Dynamic Text
Am 01.11.2021 um 19:58 schrieb Michael Koch:
> [...] There are also some examples for subtitles in my book, see
> chapters 2.176 and 1.177:
> http://www.astro-electronic.de/FFmpeg_Book.pdf

Sorry for the typo, I did mean 2.176 and 2.177.

Michael
Re: [FFmpeg-user] Alternative to Dynamic Text
Am 02.11.2021 um 00:09 schrieb LianCheng Tan:
> Thank you Michael. For subtitles, what if the content of the .srt or
> .ass file is being updated every second? Will ffmpeg crash because the
> file is being 'locked' by another process that is updating it?

I don't know if that works. I thought the texts are known in advance and already written in the file.

Michael
Re: [FFmpeg-user] How to contribute to Wiki
Am 02.11.2021 um 18:38 schrieb PPRJ01:
> Thank you Michael, I tried to register a new account at
> https://trac.ffmpeg.org/register
> There is an antispam check on this page. I don't know what to answer to
> the first question about the project name.

I think it begins with "f" and has 6 characters.

Michael
Re: [FFmpeg-user] How to contribute to Wiki
Am 02.11.2021 um 19:43 schrieb PPRJ01:
> Thank you again Michael for your kind help. Now I have an account to
> edit existing wiki pages. My question is how can I add a new page with
> my contribution before linking it to an existing page?

I have never done that in the FFmpeg wiki. In other wikis you first create a link on an already existing page, and then you can click on the link to edit the new page.

Michael
Re: [FFmpeg-user] Cropping video
Am 11.12.2021 um 22:24 schrieb Cecil Westerhof via ffmpeg-user:
> I gave a speech on Zoom. It was recorded, but sadly together with
> others. So I want to crop my part out of it. At the moment this works:
> ffmpeg -y -i input.mp4 -filter:v "crop=480:360:80:180" output.mp4
> Was just wondering if there was a better way.

I think you have already found the best way.

Michael
Re: [FFmpeg-user] Concatenating video cuts - audio gets out of sync...
Am 15.12.2021 um 13:30 schrieb Bo Berglund:
> I am using this script on Ubuntu 20.04.3 server which I wrote to paste
> together clips from a video into one single video. It takes the output
> video file as the first argument and all following are the clip files.
> The script has error checking for user input, but I removed it here for
> clarity (the original "[ $NUMARGS > 1 ]" is also corrected to -gt,
> since ">" inside [ ] is a redirection, not a comparison):
>
> #!/bin/bash
> JOINFILE="joinfile.txt"
> NUMARGS=$#
> TARGETFILE="$1"
> if [ $NUMARGS -gt 1 ]
> then
>   shift
>   echo "file $1" > "$JOINFILE"
>   shift
>   while [ "$1" != "" ]; do
>     echo "file $1" >> "$JOINFILE"
>     shift
>   done
> fi
> COMMAND="ffmpeg -f concat -safe 0 -i $JOINFILE -c copy $TARGETFILE"
> eval "$COMMAND"
> eval "rm $JOINFILE"
> exit 0
>
> This has worked mostly well, but now I have a problem when pasting two
> clips from a video where the audio gets badly out of sync at the paste
> point. Is there some way to modify the ffmpeg command such that the
> audio is not affected like this?

You could remove -c copy, but that makes the process much slower. Are you sure that all input videos have the same properties? Same size, video codec, framerate, audio codec, number of audio tracks, audio sample rate?

Michael
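[Editor's note] One detail worth checking in such a script: the concat demuxer reads each line as file 'name', and quoting the name protects paths that contain spaces. A minimal sketch with hypothetical clip names:

```shell
# Sketch (hypothetical clip names): build the concat demuxer list with
# quoted paths, so filenames containing spaces survive.
JOINFILE=$(mktemp)
for clip in "clip 1.mp4" "clip 2.mp4"; do
  printf "file '%s'\n" "$clip" >> "$JOINFILE"
done
cat "$JOINFILE"
# file 'clip 1.mp4'
# file 'clip 2.mp4'
```

The list would then be consumed as in the original script: ffmpeg -f concat -safe 0 -i "$JOINFILE" -c copy out.mp4.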
Re: [FFmpeg-user] Concatenating video cuts - audio gets out of sync...
Am 15.12.2021 um 14:14 schrieb Bo Berglund:
> On Wed, 15 Dec 2021 13:45:13 +0100, Michael Koch wrote:
>> You could remove -c copy, but that makes the process much slower. Are
>> you sure that all input videos have the same properties? Same size,
>> video codec, framerate, audio codec, number of audio tracks, audio
>> sample rate?
> They are downloaded using the same ffmpeg download command which sets
> the video size to 480p, like this (variables are set appropriately in
> the script):
> ffmpeg -hide_banner -referer \"${VIDEOURL}\" -i \"${VIDEOSTR}\" -vf scale=w=-4:h=480 -c:v libx264 -preset fast -crf 26 -c:a copy -t ${CAPTURETIME} ${TARGETFILE}

That means the audio codec, the number of audio tracks and the audio sample rate could still be different, because -c:a copy keeps whatever audio the source has. You should check those videos that don't fit together with ffprobe.

Michael
Re: [FFmpeg-user] New post
Am 30.12.2021 um 07:09 schrieb Jim DeLaHunt:
> On 2021-12-29 21:00, paul king wrote:
>> In the ffmpeg documentation there is this command for "Select the pass
>> number (1 or 2)":
>> ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
>> ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
> Where do you see this command? Could you provide a link to the specific
> section, please? Just to make sure that we all are looking at the same
> documentation.

That's in the official documentation where the -pass option is explained.

Michael
Re: [FFmpeg-user] Split Video into RGB and Alpha
Am 04.01.2022 um 03:19 schrieb Hanna Frangiyyeh:
> Hi Guys; I have a video file that has an embedded alpha in it. I would
> like to have it output two files, one RGB and the other Alpha. Any idea
> how to accomplish this?

Making the RGB video is easy, just convert the pixel format to RGB. For extracting the A channel have a look at the "extractplanes" filter.

Michael
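[Editor's note] For illustration, a minimal sketch of the two commands the reply describes. Filenames are hypothetical, and the commands are printed rather than executed so the sketch needs no input file:

```shell
# Hypothetical filenames; printed, not executed, so no input.mov is needed.
# format=rgb24 converts to RGB; extractplanes=a emits the alpha plane as a
# grayscale video.
CMDS=$(cat <<'EOF'
ffmpeg -i input.mov -vf format=rgb24 rgb.mov
ffmpeg -i input.mov -vf extractplanes=a alpha.mov
EOF
)
echo "$CMDS"
```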
Re: [FFmpeg-user] Filter Question
Am 04.01.2022 um 21:12 schrieb Hanna Frangiyyeh:
> Hi Guys; I'm new to ffmpeg and I'm trying to figure out what is wrong
> with the below filter. It keeps erroring out with the following error:
> Unable to find a suitable output format for '[rgb_in][alpha_in][rgb_in]'
> ffmpeg -i "D:\TEMP\Source Files\720E\MLB21_INT_PLAYER_720.mov" -i "D:\TEMP\Source Files\Audio\file_example_WAV_1MG.wav" -vf scale=1920:1080 -filter_complex 'split [rgb_in][alpha_in][rgb_in] lutrgb=a=minval [rgb_out][alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay [rgb_out][T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2][alpha_out]' -map '[rgb_out]' "D:\TEMP\Source Files\Done\MLB21_INT_PLAYER_720_ProRes422-1080_ST_F.MOV" -map '[alpha_out]' "D:\TEMP\Source Files\Done\MLB21_INT_PLAYER_720_ProRes422-1080_ST_M.MOV" -acodec copy -map 0:0 -map 1:a:0 -y

[rgb_in][alpha_in] are the two output labels of the split filter. After that you must insert a semicolon, because a labeled output ends the filter chain. The next [rgb_in] is the input label of the lutrgb=a=minval filter. The lutrgb=a=minval filter has [rgb_out] as its output label; after that you must also insert a semicolon. There may be more errors, but that's what I saw immediately.

Michael