[FFmpeg-user] Real-time, synchronized capture of multiple V4L sources
Hello, what I am trying to achieve is the following: I have multiple V4L sources with possibly different time bases (different internal start times, different fps) and I would like to capture them live and in sync to separate files. In my particular case those devices are two HDMI grabbers at 60 fps and two webcams capturing at ~30 fps. Ideally, I would also like to preview the sources at the same time, e.g. in an SDL or OpenGL window showing the inputs via overlay or {v,h}stack. If necessary, frames should be dropped or duplicated in order to maintain real time even when capturing for a long period (say, 2-5 hours). The resulting videos should be constant frame rate (30 and 60 fps, respectively, or all 60 fps if that should be necessary).

The main problem I have is that the different video streams are not in sync right from the beginning and I cannot find a way to synchronize them. As I will explain, I think the underlying problem (or solution) is quite simple; however, to give you an idea of what a minimal, naive approach could look like, consider the following example:

ffmpeg -y \
  -video_size .. -input_format .. -framerate 60 -i /dev/video0 \
  -video_size .. -input_format .. -framerate 60 -i /dev/video1 \
  -video_size .. -input_format .. -framerate 30 -i /dev/video2 \
  -video_size .. -input_format .. -framerate 30 -i /dev/video3 \
  -filter_complex "
    [0:v] format=abgr, vflip, split [hdmi0a][hdmi0b];
    [1:v] format=abgr, vflip, split [hdmi1a][hdmi1b];
    [2:v] format=abgr, split [cam0a][cam0b];
    [3:v] format=abgr, split [cam1a][cam1b];
    [hdmi0a] scale=.. [tmp0], [hdmi1a] scale=.., [tmp0] hstack [hdmistack];
    [cam0a] scale=.. [tmp1], [cam1a] scale=.., [tmp1] hstack [camstack];
    [hdmistack][camstack] vstack [preview]
  " \
  -map "[preview]" -f opengl - \
  -map "[hdmi0b]" -c:v h264_nvenc -qp 23 /tmp/hdmi0.mkv \
  -map "[hdmi1b]" -c:v h264_nvenc -qp 23 /tmp/hdmi1.mkv \
  -map "[cam0b]" -c:v h264 -qp 23 /tmp/cam0.mkv \
  -map "[cam1b]" -c:v h264 -qp 23 /tmp/cam1.mkv

When the V4L devices are initialized, the start timestamps of the input streams are all a bit different (I suppose due to the fact that the devices are initialized in a particular order), and some start times may even be zero, supposedly due to a bug in Magewell HDMI capture boxes:

ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 7.2.1 (GCC) 20171224
  configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avisynth --enable-avresample --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libass --enable-libbluray --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-shared --enable-version3 --enable-opengl --enable-opencl
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
[video4linux2,v4l2 @ 0x5606648ea320] Dequeued v4l2 buffer contains corrupted data (0 bytes).
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 0.00, bitrate: 2985984 kb/s
    Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1920x1080, 2985984 kb/s, 60 fps, 60 tbr, 1000k tbn, 1000k tbc
Input #1, video4linux2,v4l2, from '/dev/video1':
  Duration: N/A, start: 122510.232758, bitrate: 2985984 kb/s
    Stream #1:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1920x1080, 2985984 kb/s, 60 fps, 60 tbr, 1000k tbn, 1000k tbc
Input #2, video4linux2,v4l2, from '/dev/video2':
  Duration: N/A, start: 122510.619448, bitrate: N/A
    Stream #2:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x360, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Input #3, video4linux2,v4l2, from '/dev/video3':
  Duration: N/A, start: 122510.997742, bitrate: N/A
    Stream #3:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x360, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
  Stream #0:0 (rawvideo) -> format
  Stream #1:0 (rawvideo) -> format
  Stream #2:0 (mjpeg) -> format
  Stream #3:0 (mjpeg) -> format
  vstack -> Stream #0:0 (rawvideo)
  split:output1 -> Stream #1:0 (h264_nvenc)
  split:output1 -> Stream #2:0 (h264_nvenc)
  split:output1 -> Stream #3:0 (libx264)
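[Editorial note] A common starting point for the synchronization question above (a sketch under assumptions, not a verified solution) is to normalize each input's timestamps with setpts=PTS-STARTPTS and force a constant frame rate per stream with the fps filter, which drops or duplicates frames as needed over long captures. setpts and fps are real FFmpeg filters; the device paths, rates and encoder settings below are placeholders taken from the naive example and are untested against the actual hardware:

```shell
# Sketch: zero each input's start time, then force CFR per stream so the
# fps filter drops/duplicates frames to hold real time over long captures.
# Two inputs shown for brevity; sizes/formats omitted as in the original.
ffmpeg -y \
  -framerate 60 -i /dev/video0 \
  -framerate 30 -i /dev/video2 \
  -filter_complex "
    [0:v] setpts=PTS-STARTPTS, fps=60 [hdmi0];
    [1:v] setpts=PTS-STARTPTS, fps=30 [cam0]" \
  -map "[hdmi0]" -c:v h264_nvenc -qp 23 /tmp/hdmi0.mkv \
  -map "[cam0]" -c:v libx264 -qp 23 /tmp/cam0.mkv
```

Caveat: setpts=PTS-STARTPTS only makes every stream begin at t=0; it does not compensate for the real wall-clock offsets between devices that were opened at different times, so the inter-device skew reported in the log above still has to be measured and accounted for separately.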
Re: [FFmpeg-user] Alpha Channel QT with SubTitles
Hope the output log helped. Any advice would be great!

> On Jan 11, 2018, at 22:06, Gandharv Bhagat wrote:
>
> Moritz,
>
> thanks for your reply, here is the output log
>
> Last login: Wed Jan 10 12:27:34 on ttys001
> MacBook-Pro-2:~ Gandharv.Bhagat.PFT$ cd Des
> -bash: cd: Des: No such file or directory
> MacBook-Pro-2:~ Gandharv.Bhagat.PFT$ cd Desktop/
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ks
> -bash: ks: command not found
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ls
> AETN_Workflow
> Alpha_1920x1080.png
> AudioScript
> Automator_Tools
> Clear Development
> IMF_Certificate-Gandharv.pdf
> Marketing
> Netflix
> Notes
> PFT OTT Solution for A STANFORD PRODUCTION V 0 3.docx
> PFT_Documents
> SF_Giants
> Spotify
> SubsQT.txt
> Subtitle.srt
> Vacation Request Form_Gandharv.pdf
> alex-honnold-taft-point-yosemite-california.adapt.1900.1.jpg
> iTunes
> ~$ster QC Qualification- Eric Que.docx
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ffmpeg -r 23.976 -loop 1 -i ./Alpha_1920x1080.png -c:v qtrle -t 00:00:30.000 -vf subtitles=“./Subtitle.srt:force_style='Fontsize=18,BorderStyle=1,Outline=0,Shadow=1,MarginV=25,FontName=Arial'" Subs2.mov
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ffmpeg -r 23.976 -loop 1 -i /Users/Gandharv.Bhagat.PFT/Desktop/Alpha_1920x1080.png -c:v qtrle -t 00:00:30.000 -vf subtitles=“/Users/Gandharv.Bhagat.PFT/Desktop/Subtitle.srt:force_style='Fontsize=18,BorderStyle=1,Outline=0,Shadow=1,MarginV=25,FontName=Arial'" Subs2.mov
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ffmpeg -r 23.976 -loop 1 -i /Users/Gandharv.Bhagat.PFT/Desktop/Alpha_1920x1080.png -c:v qtrle -t 00:00:30.000 -vf subtitles="/Users/Gandharv.Bhagat.PFT/Desktop/SVPROJ438_1_ENRIQUE_COV_010818_V14_LOCKED.srt:force_style='Fontsize=18,BorderStyle=1,Outline=0,Shadow=1,MarginV=25,FontName=Circular Spotify Head'" Subs2.mov
> ffmpeg version 3.3.3-tessus Copyright (c) 2000-2017 the FFmpeg developers
>   built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
>   configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzmq --enable-libzvbi --enable-version3 --disable-ffplay --disable-indev=qtkit
>   libavutil      55. 58.100 / 55. 58.100
>   libavcodec     57. 89.100 / 57. 89.100
>   libavformat    57. 71.100 / 57. 71.100
>   libavdevice    57.  6.100 / 57.  6.100
>   libavfilter     6. 82.100 /  6. 82.100
>   libswscale      4.  6.100 /  4.  6.100
>   libswresample   2.  7.100 /  2.  7.100
>   libpostproc    54.  5.100 / 54.  5.100
> [png_pipe @ 0x7fd19b000400] Stream #0: not enough frames to estimate rate; consider increasing probesize
> Input #0, png_pipe, from '/Users/Gandharv.Bhagat.PFT/Desktop/Alpha_1920x1080.png':
>   Duration: N/A, bitrate: N/A
>     Stream #0:0: Video: png, rgba(pc), 1920x1080 [SAR 2835:2835 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
> Stream mapping:
>   Stream #0:0 -> #0:0 (png (native) -> qtrle (native))
> Press [q] to stop, [?] for help
> [Parsed_subtitles_0 @ 0x7fd19ae00520] Shaper: FriBidi 0.19.2 (SIMPLE)
> [Parsed_subtitles_0 @ 0x7fd19ae00520] Unable to open /Users/Gandharv.Bhagat.PFT/Desktop/SVPROJ438_1_ENRIQUE_COV_010818_V14_LOCKED.srt
> [AVFilterGraph @ 0x7fd19ad02f00] Error initializing filter 'subtitles' with args '/Users/Gandharv.Bhagat.PFT/Desktop/SVPROJ438_1_ENRIQUE_COV_010818_V14_LOCKED.srt:force_style=Fontsize=18,BorderStyle=1,Outline=0,Shadow=1,MarginV=25,FontName=Circular Spotify Head'
> Error reinitializing filters!
> Failed to inject frame into filter network: No such file or directory
> Error while processing the decoded data for stream #0:0
> Conversion failed!
> MacBook-Pro-2:Desktop Gandharv.Bhagat.PFT$ ffmpeg -r 23.976 -loop 1 -i /Users/Gandharv.Bhagat.PFT/Desktop/Alpha_1920x1080.png -c:v qtrle -t 00:00:30.000 -vf
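[Editorial note] Two things stand out in the log above: the failing run reports "Unable to open .../SVPROJ438_1_ENRIQUE_COV_010818_V14_LOCKED.srt" (the filter simply cannot find that file), and the earlier commands open the -vf argument with a typographic quote character (“, as pasted from a rich-text editor), which the shell does not treat as a quote, so the subtitles filter can receive a mangled path. A sketch of the command quoted with plain ASCII characters, using paths exactly as they appear in the log (whether the .srt exists at that location still has to be verified, e.g. with ls):

```shell
# Same command as in the log, but with plain ASCII quoting throughout.
# The input PNG and .srt paths are copied from the log and assumed to exist.
ffmpeg -r 23.976 -loop 1 -i /Users/Gandharv.Bhagat.PFT/Desktop/Alpha_1920x1080.png \
  -c:v qtrle -t 00:00:30.000 \
  -vf "subtitles=/Users/Gandharv.Bhagat.PFT/Desktop/Subtitle.srt:force_style='Fontsize=18,BorderStyle=1,Outline=0,Shadow=1,MarginV=25,FontName=Arial'" \
  Subs2.mov
```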
Re: [FFmpeg-user] FFmpeg mosaic
Were you able to fix any of your issues?

--
Sent from: http://www.ffmpeg-archive.org/

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-user] Mosaic without working inputs. Is it possible?
Hi All,

I'm using a 2x2 mosaic with four live UDP input streams and I was wondering how to make the mosaic work even if one of the input channels isn't working. Is it possible? FFmpeg simply stalls waiting for the input stream, and the UDP timeout raises an error.

ffmpeg -i udp://239.192.11.111:1234 -i udp://239.192.11.112:1234 \
       -i udp://239.192.11.113:1234 -i udp://239.192.11.114:1234 \
       -filter_complex \
  "nullsrc=size=640x480 [base];
   [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];
   [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];
   [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];
   [3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];
   [base][upperleft] overlay=shortest=1 [tmp1];
   [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
   [tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];
   [tmp3][lowerright] overlay=shortest=1:x=320:y=240" \
  -c:v libx264 udp://239.192.111.1:1234

Regards,
Nuno
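[Editorial note] A single ffmpeg process blocks as soon as one of its UDP demuxers stops delivering data, so one workaround people use is to decouple each input into its own relay process that can fail and restart independently, and point the mosaic at the relays. This is a sketch only: the local loopback ports are made up for illustration and the approach is untested here:

```shell
# Sketch: one self-restarting relay per source channel. A dead channel then
# only stalls its own relay, not the processes handling the other three.
# Addresses follow the mosaic command above; loopback ports are invented.
for i in 1 2 3 4; do
  (
    while true; do
      ffmpeg -i "udp://239.192.11.11$i:1234" -c copy \
             -f mpegts "udp://127.0.0.1:$((5000 + i))"
      sleep 1   # brief pause, then restart the relay if the input drops
    done
  ) &
done
# The mosaic ffmpeg would then read udp://127.0.0.1:5001 .. :5004 instead.
```

Caveat: the mosaic process can still pause while one relayed tile delivers nothing; a fully robust "slate on dead input" generally needs the mosaic process restarted as well, or an external compositor that substitutes a placeholder source.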
Re: [FFmpeg-user] Bitrate range when streaming to udp
2018-01-15 13:26 GMT+01:00 Alex Alex:
> I have a *.ts file. I want to stream it via udp. No transcoding
> or other similar things, just one stream of MPEG TS.
> And I'd like to have CBR of the stream, as constant as possible.

Do you know what CBR exactly means for a conforming transport stream? The reason I ask is that the answer is not trivial: it has nothing to do with constant frame size, and you need a stream analyzer to decide whether a transport stream is CBR.

Apart from that: if you don't want to transcode, then the tool sending via udp cannot influence whether what is sent is CBR; that is a property of the input file.

Stop top-posting here!

Carl Eugen
Re: [FFmpeg-user] Bitrate range when streaming to udp
I'm sorry, my explanation may be really unclear. I'll make one more attempt :)

I have a *.ts file. I want to stream it via udp. No transcoding or other similar things, just one stream of MPEG TS. And I'd like to have CBR of the stream, as constant as possible.

When trying to stream the file (with ffmpeg, vlc, tsplay etc.), I get a terrible bitrate dispersion, as shown in the picture (see the link in the first message of this topic). I was very unhappy with this, and I was surprised that all the tools I have used give me this terrible bitrate dispersion.

Could the problem be not in the tools but in the source file? Is it possible to prepare the file better in order to get a smoother output bitrate? Or is the resulting stream bitrate independent of the source file?

WBR
Alex

12.01.2018 16:20, Carl Eugen Hoyos wrote:
> 2018-01-12 12:56 GMT+01:00 Alex Alex:
>
>> Does multicast bitrate depend on a source file features? Is it
>> possible to prepare the file better for getting smoother output
>> bitrate?
>
> Are you asking about adaptive bitrate? That either needs files
> encoded to the desired bitrates in advance or real-time encoding.
> Sorry, I am not sure I understand the question.
>
> Please do not top-post here, Carl Eugen
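[Editorial note] One relevant detail for this thread: while a plain stream copy cannot change the bitrate profile of the elementary streams, FFmpeg's mpegts muxer does have a muxrate option that pads the remuxed transport stream to a constant transport rate by inserting null packets. A sketch with assumed values; the chosen muxrate must exceed the stream's peak bitrate, and input.ts stands in for the actual file:

```shell
# Sketch: remux (no transcoding of audio/video) while asking the mpegts
# muxer to pad the output to a constant 8 Mbit/s transport rate with null
# packets; -re throttles sending to real time, pkt_size=1316 fits 7 TS
# packets per UDP datagram. Address reused from earlier in this digest.
ffmpeg -re -i input.ts -c copy -muxrate 8000k -f mpegts \
  "udp://239.192.11.111:1234?pkt_size=1316"
```

Whether the result is CBR in the strict conformance sense still needs a stream analyzer, as Carl Eugen notes, but the padded mux rate is what removes most of the bitrate dispersion seen when sending a VBR-muxed file as-is.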
[FFmpeg-user] Raw file generation takes more time with FFmpeg libraries
Hello all,

I used the following link to install the ffmpeg 3.3 version on Windows:

https://trac.ffmpeg.org/wiki/CompilationGuide/MSVC

I am able to generate a raw file using the command below, and it takes less than a second on Windows:

ffmpeg.exe -ss -i -t -vcodec rawvideo -pix_fmt rgb24 -f rawvideo -an

I just renamed the main function in ffmpeg.c with a user-defined name and called that function from another user-defined function (from my own code). After linking the ffmpeg libraries with my own code, I am able to generate the raw file, but it takes more time. I am a little confused here. Why does it take more time? Am I missing anything while linking the libraries?