Re: [FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-05-02 Thread Steven Kan

> On Apr 25, 2024, at 1:33 PM, Steven Kan  wrote:
> 
>>> Hmmm.
>>> 
>>> This:
>>> 
>>> https://youtu.be/-NB1JzR5aCQ
>>> 
>>> is the result of:
>>> 
>>> ffmpeg -pattern_type glob -i '*.jpg' -vf "split [tmp][main]; [tmp]
>>> crop=iw:ih*0.05:0:ih*0.95, drawtext=text='%{metadata\\:DateTimeOriginal}':
>>> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=72:
>>> x=(w-text_w)*0.01: y=(h-text_h)*0.01 [text]; [main]
>>> tmix=frames=10:weights='1' [blend]; [blend][text] overlay=0:H*0.95" -y
>>> CombLapseWithTimeStampAndTmixSplit.mp4
>>> 
>>> which approximates the result I'm trying to achieve, but it's cheating,
>>> because it's making a separate stream of the bottom 5% of the video with
>>> the drawtext overlay, and then overlaying that on top of the blended
>>> frames. It appears to work only because there's nothing much happening in
>>> the bottom half of the frames.  If there were any significant bee activity
>>> in the bottom 5% it would be apparent that that section is not getting
>>> tmixed.

Here’s the latest result from the above technique, with the timestamp overlay 
now limited to just the area that will contain the text:

ffmpeg -thread_queue_size 4096 -pattern_type glob -i '*.jpg' -vf "split 
[tmp][main]; [tmp] crop=iw*.25:ih*0.05:0:ih*0.95, 
drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=72: 
x=(w-text_w)*0.01: y=(h-text_h)*0.01 [text]; [main] tmix=frames=50:weights='1' 
[blend]; [blend][text] overlay=0:H*0.95" -y 
CombLapseWithTimeStampAndTmixSplit.mp4

which produces:

 https://www.youtube.com/watch?v=N031e2g551A

It will become more interesting as the bees complete more comb; as they get 
more space to work, more comb will be revealed.

You can see the background flickering behind the timestamp, due to some weird 
exposure variations that I’ll attempt to address in another thread.


[FFmpeg-user] Exposure Compensation within ffmpeg?

2024-05-02 Thread Steven Kan
Here are the results of my time-lapse footage to date of my bees building comb 
in their hive box:

 https://www.youtube.com/watch?v=9rlji3udrPc

Even though I’m using a dSLR with fixed shutter speed and aperture, there’s 
still a lot of exposure variation throughout the sequence.

Is there a filter within ffmpeg to equalize/normalize the exposure values 
throughout the video? It would be even better if that exposure value could be 
equalized for some region-of-interest, e.g. the static parts of the images that 
have no bees or comb.

Thanks!
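
One candidate worth trying is ffmpeg’s deflicker filter, which smooths 
luminance variations over a sliding window of frames. An untested sketch (and 
as far as I can tell it operates on the whole frame, with no region-of-interest 
option):

ffmpeg -pattern_type glob -i '*.jpg' -vf "deflicker=size=25:mode=pm" -y 
CombLapseDeflickered.mp4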


Re: [FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-04-25 Thread Steven Kan

> On Apr 24, 2024, at 11:54 PM, Paul B Mahol  wrote:
> 
> On Thu, Apr 25, 2024 at 6:21 AM Steven Kan  wrote:
> 
>> 
>>> Thanks! This works, and I agree that it’s better than using the filename:
>>> 
>>> ffmpeg -pattern_type glob -i '*.jpg' -vf
>> "drawtext=text='%{metadata\\:DateTimeOriginal}':
>> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48:
>> x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithTimeStamp.mp4
>>> 
>>> I can now add tmix after drawtext:
>>> 
>>> ffmpeg -pattern_type glob -i '*.jpg' -vf
>> "drawtext=text='%{metadata\\:DateTimeOriginal}':
>> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48:
>> x=(w-text_w)*0.01: y=(h-text_h)*0.98, tmix=frames=10:weights='1'" -y
>> CombLapseWithTimeStampAndTmix.mp4
>>> 
>>> And it renders, but the timestamps get blended.
>>> 
>>> Can I use “split” to make one stream of images from which I can extract
>> the timestamp, and then another stream for tmix, and then overlay the
>> timestamp after tmix? How would I sync up the two streams, since tmix would
>> be N frames shorter than the original?
>> 
>> Hmmm.
>> 
>> This:
>> 
>> https://youtu.be/-NB1JzR5aCQ
>> 
>> is the result of:
>> 
>> ffmpeg -pattern_type glob -i '*.jpg' -vf "split [tmp][main]; [tmp]
>> crop=iw:ih*0.05:0:ih*0.95, drawtext=text='%{metadata\\:DateTimeOriginal}':
>> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=72:
>> x=(w-text_w)*0.01: y=(h-text_h)*0.01 [text]; [main]
>> tmix=frames=10:weights='1' [blend]; [blend][text] overlay=0:H*0.95" -y
>> CombLapseWithTimeStampAndTmixSplit.mp4
>> 
>> which approximates the result I'm trying to achieve, but it's cheating,
>> because it's making a separate stream of the bottom 5% of the video with
>> the drawtext overlay, and then overlaying that on top of the blended
>> frames. It appears to work only because there's nothing much happening in
>> the bottom half of the frames.  If there were any significant bee activity
>> in the bottom 5% it would be apparent that that section is not getting
>> tmixed.
>> 
>> Can anyone help me construct a filter chain that will overlay the internal
>> timestamp in an arbitrary position on top of the blended frames, after tmix
>> has been applied?
>> 
> 
> Not anyone, but what about adding drawtext *after* tmix ?

I actually tried that first, e.g.:

ffmpeg -pattern_type glob -i '*.jpg' -vf "tmix=frames=10:weights='1', 
drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithTimeStampAndTmix.mp4

and it renders no drawtext. If I change '%{metadata\\:DateTimeOriginal}' to 
'%{metadata\:lavf.image2dec.source_basename\:NA}' then the drawtext renders as 
“NA”, presumably because the tmixed images no longer have a filename or a 
timestamp.



Re: [FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-04-24 Thread Steven Kan

> Thanks! This works, and I agree that it’s better than using the filename:
> 
> ffmpeg -pattern_type glob -i '*.jpg' -vf 
> "drawtext=text='%{metadata\\:DateTimeOriginal}': 
> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
> x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithTimeStamp.mp4
> 
> I can now add tmix after drawtext:
> 
> ffmpeg -pattern_type glob -i '*.jpg' -vf 
> "drawtext=text='%{metadata\\:DateTimeOriginal}': 
> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
> x=(w-text_w)*0.01: y=(h-text_h)*0.98, tmix=frames=10:weights='1'" -y 
> CombLapseWithTimeStampAndTmix.mp4
> 
> And it renders, but the timestamps get blended.
> 
> Can I use “split” to make one stream of images from which I can extract the 
> timestamp, and then another stream for tmix, and then overlay the timestamp 
> after tmix? How would I sync up the two streams, since tmix would be N frames 
> shorter than the original?

Hmmm. 

This:

https://youtu.be/-NB1JzR5aCQ

is the result of:

ffmpeg -pattern_type glob -i '*.jpg' -vf "split [tmp][main]; [tmp] 
crop=iw:ih*0.05:0:ih*0.95, drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=72: 
x=(w-text_w)*0.01: y=(h-text_h)*0.01 [text]; [main] tmix=frames=10:weights='1' 
[blend]; [blend][text] overlay=0:H*0.95" -y 
CombLapseWithTimeStampAndTmixSplit.mp4

which approximates the result I'm trying to achieve, but it's cheating, because 
it's making a separate stream of the bottom 5% of the video with the drawtext 
overlay, and then overlaying that on top of the blended frames. It appears to 
work only because there's nothing much happening in the bottom half of the 
frames.  If there were any significant bee activity in the bottom 5% it would 
be apparent that that section is not getting tmixed.

Can anyone help me construct a filter chain that will overlay the internal 
timestamp in an arbitrary position on top of the blended frames, after tmix has 
been applied?
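
One untested idea, sketched here for reference: make the split copy fully 
transparent before drawtext, so that only the rendered glyphs survive the 
overlay. The copy still carries each frame's DateTimeOriginal metadata, and 
overlay honors the overlaid input's alpha channel, so the blended frames should 
show through everywhere except the text itself:

ffmpeg -pattern_type glob -i '*.jpg' -vf "split [tmp][main]; [tmp] format=rgba, 
colorchannelmixer=aa=0, drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=72: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98 [text]; [main] tmix=frames=10:weights='1' 
[blend]; [blend][text] overlay=0:0" -y CombLapseWithTimeStampAlpha.mp4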


Re: [FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-04-24 Thread Steven Kan
> On Apr 24, 2024, at 11:24 AM, William C Bonner  wrote:
> 
> On Sun, Apr 21, 2024 at 12:08 PM Steven Kan  wrote:
> 
>>> On Mar 6, 2024, at 11:49 AM, Steven Kan  wrote:
>>> 
>>> Planning ahead to the successor to my honeycomb time-lapse video from
>> 2022, processed using tmix per advice from this group:
>>> 
>>> https://ffmpeg.org//pipermail/ffmpeg-user/2022-April/054742.html
>>> 
>>> https://www.youtube.com/watch?v=2dUGbGcGE2c
>>> 
>>> This time I’m capturing the photos with a dSLR and gphoto on a Raspberry
>> Pi, and the dSLR does not burn in a timestamp.
>>> 
>>> This is actually a good thing, because I don’t like how my timestamps
>> got tmixed away last time. I’d like to apply them after tmix.
>>> 
>>> I’m saving the photos with the timestamp as the sortable filename, e.g.
>>> 
>>> 2024-03-06-11-40-11.jpg
>>> 
>>> This time I’d like to tmix 50 frames, read the filename of the 50th
>> frame, re-arrange the text of the filename to U.S. style, e.g. "03/06/24,
>> 11:40:11 AM", and then drawtext it onto the output.
>>> 
>>> Can this be done in one pass? Or would I need to do a first pass to
>> create the text fields in some companion files, e.g.
>> 2024-03-06-11-40-11.txt, or even multiple passes to do the tmix first and
>> then the drawtext? I’d like to avoid multiple passes of video processing to
>> avoid generation loss, if possible.
>> 
>> Partial answer to my own question, from here:
>> 
>> 
>> https://superuser.com/questions/717103/burn-filenames-of-single-images-into-ffmpeg-output-video
>> 
>> So this command works:
>> 
>> ffmpeg -f image2  -export_path_metadata 1 -pattern_type glob -i '*.jpg'
>> -vf "drawtext=text='%{metadata\:lavf.image2dec.source_basename\:NA}':
>> fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48:
>> x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithFilenames.mp4
>> 
>> and creates:
>> 
>> https://www.kan.org/download/CombLapseWithFilenames.mp4
>> 
>> But the drawtext is the original filenames, e.g. 2024-04-21-08-20-11.jpg,
>> which format I chose in my photo-taking script so that they’d sort properly.
>> 
>> Can I reformat that in U.S.-style, e.g. "04/21/24, 08:20:11” and strip the
>> .jpg extension, and do this all in one pass?
>> 
> 
> I'd recommend using the metadata for your timelapse if it's available
> instead of the filename. This is what I'm using in my windows project that
> calls ffmpeg to create movies from gopro time lapse photos.
> 
> drawtext=fontfile=C\\:/Windows/Fonts/consola.ttf:fontcolor=white:fontsize=main_h/16:y=main_h-text_h-50:x=50:text=%{metadata\\:DateTimeOriginal}

Thanks! This works, and I agree that it’s better than using the filename:

ffmpeg -pattern_type glob -i '*.jpg' -vf 
"drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithTimeStamp.mp4

I can now add tmix after drawtext:

ffmpeg -pattern_type glob -i '*.jpg' -vf 
"drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98, tmix=frames=10:weights='1'" -y 
CombLapseWithTimeStampAndTmix.mp4

And it renders, but the timestamps get blended.

Can I use “split” to make one stream of images from which I can extract the 
timestamp, and then another stream for tmix, and then overlay the timestamp 
after tmix? How would I sync up the two streams, since tmix would be N frames 
shorter than the original?

Thank you!!

ffmpeg -pattern_type glob -i '*.jpg' -vf 
"drawtext=text='%{metadata\\:DateTimeOriginal}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98, tmix=frames=10:weights='1'" -y 
CombLapseWithTimeStampAndTmix.mp4
ffmpeg version N-109776-g7e1d474021-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --ena

Re: [FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-04-21 Thread Steven Kan
> On Mar 6, 2024, at 11:49 AM, Steven Kan  wrote:
> 
> Planning ahead to the successor to my honeycomb time-lapse video from 2022, 
> processed using tmix per advice from this group:
> 
> https://ffmpeg.org//pipermail/ffmpeg-user/2022-April/054742.html
> 
> https://www.youtube.com/watch?v=2dUGbGcGE2c
> 
> This time I’m capturing the photos with a dSLR and gphoto on a Raspberry Pi, 
> and the dSLR does not burn in a timestamp. 
> 
> This is actually a good thing, because I don’t like how my timestamps got 
> tmixed away last time. I’d like to apply them after tmix.
> 
> I’m saving the photos with the timestamp as the sortable filename, e.g. 
> 
> 2024-03-06-11-40-11.jpg
> 
> This time I’d like to tmix 50 frames, read the filename of the 50th frame, 
> re-arrange the text of the filename to U.S. style, e.g. "03/06/24, 11:40:11 
> AM", and then drawtext it onto the output. 
> 
> Can this be done in one pass? Or would I need to do a first pass to create 
> the text fields in some companion files, e.g. 2024-03-06-11-40-11.txt, or 
> even multiple passes to do the tmix first and then the drawtext? I’d like to 
> avoid multiple passes of video processing to avoid generation loss, if 
> possible.

Partial answer to my own question, from here:

https://superuser.com/questions/717103/burn-filenames-of-single-images-into-ffmpeg-output-video

So this command works:

ffmpeg -f image2  -export_path_metadata 1 -pattern_type glob -i '*.jpg' -vf 
"drawtext=text='%{metadata\:lavf.image2dec.source_basename\:NA}': 
fontfile=/System/Library/Fonts/Helvetica.ttc:fontcolor=white: fontsize=48: 
x=(w-text_w)*0.01: y=(h-text_h)*0.98" -y CombLapseWithFilenames.mp4

and creates:

https://www.kan.org/download/CombLapseWithFilenames.mp4

But the drawtext is the original filenames, e.g. 2024-04-21-08-20-11.jpg, which 
format I chose in my photo-taking script so that they’d sort properly.

Can I reformat that in U.S.-style, e.g. "04/21/24, 08:20:11” and strip the .jpg 
extension, and do this all in one pass? 

Thanks!
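
If one pass turns out to be impossible, the companion-file idea could be 
scripted ahead of time; a rough, untested sketch, assuming macOS's BSD date and 
my own filename scheme:

#!/bin/bash
# write a U.S.-style timestamp next to each jpg, e.g.
# 2024-04-21-08-20-11.jpg -> 2024-04-21-08-20-11.txt ("04/21/24, 08:20:11 AM")
for f in *.jpg; do
    stamp="${f%.jpg}"
    date -j -f "%Y-%m-%d-%H-%M-%S" "$stamp" "+%m/%d/%y, %I:%M:%S %p" > "$stamp.txt"
done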

console output:

ffmpeg version N-109776-g7e1d474021-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57. 44.100 / 57. 44.100
  libavcodec 59. 63.100 / 59. 63.100
  libavformat59. 38.100 / 59. 38.100
  libavdevice59.  8.101 / 59.  8.101
  libavfilter 8. 56.100 /  8. 56.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc56.  7.100 / 56.  7.100
Input #0, image2, from '*.jpg':
  Duration: 00:00:00.40, start: 0.00, bitrate: N/A
  Stream #0:0: Video: mjpeg (Baseline), yuvj422p(pc, bt470bg/unknown/unknown), 
3008x2000, 25 fps, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7feb17f05980] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 0x7feb17f05980] profile High 4:2:2, level 5.1, 4:2:2, 8-bit
[libx264 @ 0x7feb17f05980] 264 - core 164 r3106 eaa68fa - H.264/MPEG-4 AVC 
codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: 
cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 
psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 
deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 
sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 
constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 
open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 
rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 
ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'CombLapse.mp4':
  Metadata:
encoder : Lavf59.38.100
  Stream #0:0: Video: h264 (avc1 / 0x31637661), yuvj422p(pc, 
bt470bg/unknown/unknown, progressive), 3008x2000, q=2-31, 25 fps, 12800 tbn
Metadata:
  encoder : Lavc59.63.100 libx264
Side data:
  cpb: bitrate max

[FFmpeg-user] Use processed filename as draw text after tmix, in one pass?

2024-03-06 Thread Steven Kan
Planning ahead to the successor to my honeycomb time-lapse video from 2022, 
processed using tmix per advice from this group:

 https://ffmpeg.org//pipermail/ffmpeg-user/2022-April/054742.html

 https://www.youtube.com/watch?v=2dUGbGcGE2c

This time I’m capturing the photos with a dSLR and gphoto on a Raspberry Pi, 
and the dSLR does not burn in a timestamp. 

This is actually a good thing, because I don’t like how my timestamps got 
tmixed away last time. I’d like to apply them after tmix.

I’m saving the photos with the timestamp as the sortable filename, e.g. 

2024-03-06-11-40-11.jpg

This time I’d like to tmix 50 frames, read the filename of the 50th frame, 
re-arrange the text of the filename to U.S. style, e.g. "03/06/24, 11:40:11 
AM", and then drawtext it onto the output. 

Can this be done in one pass? Or would I need to do a first pass to create the 
text fields in some companion files, e.g. 2024-03-06-11-40-11.txt, or even 
multiple passes to do the tmix first and then the drawtext? I’d like to avoid 
multiple passes of video processing to avoid generation loss, if possible.

Thanks!


Re: [FFmpeg-user] Is there any good reason why FFmpeg doesn't do "-vcodec copy" automatically.

2023-09-30 Thread Steven Kan
> On Sep 30, 2023, at 12:07 PM, Reindl Harald  wrote:
> 
> 
> Am 30.09.23 um 20:46 schrieb Stéphane Archer:
>> Is there any good reason why FFmpeg which sees that the video file input
>> and output match every single characteristic doesn't copy the stream to
>> avoid useless reencoding?
>> Basically doing "-vcodec copy" automatically.
> 
> is there a good reason to throw a video file through ffmpeg when you don't 
> want to touch it?
> 
> yeah, there may be a few reasons, and for the really *few* reasons you are 
> supposed to state that

Perhaps the user has an automated workflow that accepts X, Y, Z, and W, but 
always outputs W. If any particular incoming file happens to be W, it would be 
more efficient to not re-encode that particular file. 
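
A workflow like that could probe first and copy conditionally; a minimal 
sketch, assuming the target format W is H.264 video in an MP4 and ignoring 
audio and other checks:

codec=$(ffprobe -v error -select_streams v:0 -show_entries stream=codec_name \
    -of default=noprint_wrappers=1:nokey=1 input.mp4)
if [ "$codec" = "h264" ]; then
    ffmpeg -i input.mp4 -c copy output.mp4        # already W: remux only
else
    ffmpeg -i input.mp4 -c:v libx264 output.mp4   # otherwise re-encode
fi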


Re: [FFmpeg-user] "Leftover" ffmpeg instances eating RAM. Solutions?

2023-02-16 Thread Steven Kan
> On Feb 16, 2023, at 3:13 PM, Steven Kan  wrote:
> 
> Hi all,
> 
> I have 3 LaunchDaemons running on macOS nearly 24/7, each of which calls a 
> script to handle instances of ffmpeg (binaries from 
> https://evermeet.cx/ffmpeg) to pull 1 or 2 RTSP streams from IP cameras, 
> optionally hstack them, and then push them to 3 different YouTube channels:

Ugh. Sorry for the atrocious formatting in my original post. I was attempting 
to use outline format, but apparently that doesn’t survive through the mailing 
list. Here it is again with the outlining baked in:
1. I have 3 LaunchDaemons running on macOS nearly 24/7, each of which calls a 
script to handle instances of ffmpeg (binaries from https://evermeet.cx/ffmpeg) 
to pull 1 or 2 RSTP streams from IP cameras, optionally hstack them, and then 
push them to 3 different YouTube channels:

a. https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live 

b. https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live 

c. https://www.youtube.com/channel/UCcIZVSZfzrxS6ynL8DQeC-Q/live

2. Each script runs ffmpeg in a loop: some cameras have no audio, but YouTube 
streaming requires it, so each pass muxes in a playlist of mp3 files that runs 
for 1:47:02, then restarts. Scripts are of the general form 
StreamToYouTube.sh:

a. #!/bin/bash

b. cd /usr/local/bin/

c. while true

d. do

i. ./ffmpeg -thread_queue_size 2048 -i 
"rtsp://anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt 
-vcodec copy -acodec copy -t 01:47:02 -f flv 
"rtmp://a.rtmp.youtube.com/live2/"

ii. echo Stream1Complete

iii. sleep 5

e. done

3. The LaunchDaemon restarts a script if it dies for some reason, which it 
sometimes does for reasons I’m still searching for.

a. I also pause the scripts for a few minutes at 07:00 and 19:00 so 
that YouTube can create an archive of the footage and start a new video 
instance.

i. If I don’t do that, and a video gets longer than 12 hours, 
it will often get stuck in YouTube’s “processing purgatory” and never become 
viewable.

b. Finally, I have a watchdog timer that periodically checks if each 
stream is Live, and if it isn’t, it runs through a procedure to reset the 
YouTube stream and start the ffmpeg script again.

4. Most of this is working reliably, now, but I have the following problem: 
when I check the machine running all of this, I sometimes see 6 or more 
instances of ffmpeg (and their associated VTDecoderXPC processes) instead of 
the 3 that I should see.

a. Only the 3 desired instances are using any appreciable CPU time, but 
they all consume significant RAM.

b. As a result, sometimes this machine locks up or things start 
shutting down due to RAM starvation.

5. One possible solution I thought of is to copy the ffmpeg executable to 
ffmpeg1, ffmpeg2, ffmpeg3, etc., and use a different copy for each of my 
streaming services/scripts/daemons.

a. This would allow me to “killall ffmpegX” during my daily 
pause/cleanup routines

b. I don’t want to just “killall ffmpeg” because sometimes I want to 
reset only 1 of the 3 channels that are running.

c. Would a copy of ffmpeg as ffmpeg1 or ffmpeg2 work properly? Or do I 
have to build it somehow?

d. Or is there another way to identify which instance(s) of ffmpeg were 
created by each service, so that I can kill them by PID? (See the sketch after 
this list.)

6. Or am I doing something wrong with my services that’s causing them to leave 
“ghost” ffmpeg instances behind?

a. Services are initiated via “sudo launchctl load 
StreamToYouTubeX.plist”

b. Services are terminated via “sudo launchctl unload 
StreamToYouTubeX.plist”

c. The plists contain just a simple call to, for example:

i. <key>ProgramArguments</key>

ii. <array>

iii. <string>/usr/local/bin/StreamToYouTube2.sh</string>

iv. </array>

Thanks!
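
Here is the sketch mentioned in 5d: rather than renamed binaries, each wrapper 
script could background ffmpeg, record its PID, and wait on it, so that a 
cleanup routine can kill exactly one stream. Untested, reusing the command from 
item 2:

#!/bin/bash
cd /usr/local/bin/
while true
do
    # start ffmpeg in the background so we can capture its PID
    ./ffmpeg -thread_queue_size 2048 -i "rtsp://anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt -vcodec copy -acodec copy -t 01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/" &
    echo $! > /tmp/StreamToYouTube1.pid   # PID of this ffmpeg instance only
    wait $!
    echo Stream1Complete
    sleep 5
done
# elsewhere: kill "$(cat /tmp/StreamToYouTube1.pid)" resets just this stream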

Console dump follows:

ffmpeg version N-109676-g40512dbd96-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--ena

[FFmpeg-user] "Leftover" ffmpeg instances eating RAM. Solutions?

2023-02-16 Thread Steven Kan
Hi all,

I have 3 LaunchDaemons running on macOS nearly 24/7, each of which calls a 
script to handle instances of ffmpeg (binaries from https://evermeet.cx/ffmpeg) 
to pull 1 or 2 RTSP streams from IP cameras, optionally hstack them, and then 
push them to 3 different YouTube channels:
https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live
https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live
https://www.youtube.com/channel/UCcIZVSZfzrxS6ynL8DQeC-Q/live
Each script runs ffmpeg in a loop: some cameras have no audio, but YouTube 
streaming requires it, so each pass muxes in a playlist of mp3 files that runs 
for 1:47:02, then restarts. Scripts are of the general form 
StreamToYouTube.sh:
#!/bin/bash
cd /usr/local/bin/
while true
do
./ffmpeg -thread_queue_size 2048 -i 
"rtsp://anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt 
-vcodec copy -acodec copy -t 01:47:02 -f flv 
"rtmp://a.rtmp.youtube.com/live2/"
echo Stream1Complete
sleep 5
done
The LaunchDaemon restarts a script if it dies for some reason, which it 
sometimes does for reasons I’m still searching for. 
I also pause the scripts for a few minutes at 07:00 and 19:00 so that YouTube 
can create an archive of the footage and start a new video instance. 
If I don’t do that, and a video gets longer than 12 hours, it will often get 
stuck in YouTube’s “processing purgatory” and never become viewable.
Finally, I have a watchdog timer that periodically checks if each stream is 
Live, and if it isn’t, it runs through a procedure to reset the YouTube stream 
and start the ffmpeg script again.
Most of this is working reliably, now, but I have the following problem: when I 
check the machine running all of this, I sometimes see 6 or more instances of 
ffmpeg (and their associated VTDecoderXPC processes) instead of the 3 that I 
should see.
Only the 3 desired instances are using any appreciable CPU time, but they all 
consume significant RAM.
As a result, sometimes this machine locks up or things start shutting down due 
to RAM starvation.
One possible solution I thought of is to copy the ffmpeg executable to ffmpeg1, 
ffmpeg2, ffmpeg3, etc., and use a different copy for each of my streaming 
services/scripts/daemons.
This would allow me to “killall ffmpegX” during my daily pause/cleanup 
routines.
I don’t want to just “killall ffmpeg” because sometimes I want to reset only 1 
of the 3 channels that are running.
Would a copy of ffmpeg as ffmpeg1 or ffmpeg2 work properly? Or do I have to 
build it somehow?
Or is there another way to identify which instance(s) of ffmpeg were created by 
each service, so that I can kill them by PID?
Or am I doing something wrong with my services that’s causing them to leave 
“ghost” ffmpeg instances behind?
Services are initiated via “sudo launchctl load StreamToYouTubeX.plist”
Services are terminated via “sudo launchctl unload StreamToYouTubeX.plist”
The plists contain just a simple call to, for example:
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/StreamToYouTube2.sh</string>
</array>


Thanks!

Console dump follows:

ffmpeg version N-109676-g40512dbd96-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57. 44.100 / 57. 44.100
  libavcodec 59. 57.100 / 59. 57.100
  libavformat59. 36.100 / 59. 36.100
  libavdevice59.  8.101 / 59.  8.101
  libavfilter 8. 54.100 /  8. 54.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc56.  7.100 / 56.  7.100
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.11:554':
  Metadata:
title   : Session streamed by "preview"
comment : 
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(progressive), 2304x1296, 120 tbr, 
90k tbn
  Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp
[mp3 @ 0x7fea1f007140] Estimating duration from bitrate, this may be inaccurate
Input #1, concat, from 'playlist.txt':
  Duration: N/A, start: 0.00, bitrate: 320 kb/s
  Stream #1:0: Audio: mp3, 

Re: [FFmpeg-user] Mal-formed MP4s from Blue Iris won't play on Apple Silicon Macs. Diagnose ffprobe output?

2023-02-09 Thread Steven Kan
> On Feb 9, 2023, at 8:09 AM, Anatoly  wrote:
> 
> Hello,
> On Wed, 8 Feb 2023 16:11:39 -0800
> Steven Kan  wrote:
> 
>>> On Feb 7, 2023, at 10:56 PM, Ferdi Scholten 
>>> wrote:
>>> 
>>> I have a Windows PC running Blue Iris security camera software, and
>>> it has a "Direct-to-disc" option whereby it records video straight
>>> from each camera's h.264 RTSP stream without transcoding, to reduce
>>> CPU and HDD utilization. There is some re-packaging involved, and I
>>> think that's where the problem may lie.
> But are you sure that there is re-packaging?
> What if capture camera stream with ffmpeg itself?  
> ...snip...

Good question. “Repackaging” may be the wrong word, but BI must do something to 
take an RTSP stream and turn it into a file on a disk. There is no transcoding 
occurring with BI’s “direct-to-disc” feature, because the reason for that 
feature is to reduce CPU utilization, and Task Manager on the BI computer shows 
that it’s very, very low. But BI must at least write some sort of file name, 
file header, EOF marker, etc., right?
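
Following that suggestion, a direct capture for comparison might look like this 
(untested; it stream-copies 12 seconds straight from the camera, with no 
re-encode):

./ffmpeg -rtsp_transport tcp -i 'rtsp://anonymous:password1@192.168.1.14:554' -c copy -t 00:00:12 CaptureTest.mp4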

I tried ffprobe with the streams directly from these same 3 camera that 
produced the clips posted previously, and they all show no errors:

Amcrest camera:

./ffprobe -i 'rtsp://anonymous:password1@192.168.1.14:554' 
ffprobe version N-109776-g7e1d474021-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2007-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57. 44.100 / 57. 44.100
  libavcodec 59. 63.100 / 59. 63.100
  libavformat59. 38.100 / 59. 38.100
  libavdevice59.  8.101 / 59.  8.101
  libavfilter 8. 56.100 /  8. 56.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc56.  7.100 / 56.  7.100
Input #0, rtsp, from 'rtsp://anonymous:password1@192.168.1.14:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 2048x1536, 100 tbr, 
90k tbn

Reolink camera:

./ffprobe -i 'rtsp://anonymous:password1@192.168.1.11:554'
ffprobe version N-109776-g7e1d474021-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2007-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57. 44.100 / 57. 44.100
  libavcodec 59. 63.100 / 59. 63.100
  libavformat59. 38.100 / 59. 38.100
  libavdevice59.  8.101 / 59.  8.101
  libavfilter 8. 56.100 /  8. 56.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc56.  7.100 / 56.  7.100
Input #0, rtsp, from 'rtsp://anonymous:password1@192.168.1.11:554':
  Metadata:
title   : Session streamed by "preview"
comment : 
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(progressive), 2304x1296, 29.97 tbr, 
90k tbn
  Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp

Wyze cam with RTSP firmware:

./ffprobe -i rtsp://anonymous:password@192.168.1.49:554/live
ffprobe version N-109776-g7e1d474021-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2007-2023 the FFmpeg developers

Re: [FFmpeg-user] Mal-formed MP4s from Blue Iris won't play on Apple Silicon Macs. Diagnose ffprobe output?

2023-02-08 Thread Steven Kan
> On Feb 7, 2023, at 10:56 PM, Ferdi Scholten  wrote:
> 
> I have a Windows PC running Blue Iris security camera software, and it has a 
> “Direct-to-disc” option whereby it records video straight from each camera’s 
> h.264 RTSP stream without transcoding, to reduce CPU and HDD utilization. 
> There is some re-packaging involved, and I think that’s where the problem may 
> lie.
>> When I browse through recorded footage, either on the BI PC or on another 
>> device (e.g. my Mac), I have the option to download clips either with or 
>> without re-encoding to h.264. I obviously prefer to download without 
>> re-encoding. I’ll call these the “original” downloads. Once downloaded they 
>> behaved like normal MP4s on my Intel-based Mac. I could read them in 
>> QuickTime Player or in any other app that uses the QuickTime libraries, like 
>> DaVinci Resolve.
>> 
>> But last week I replaced my Intel Mac with an Apple Silicon Mac, and now 
>> these MP4 files are broken. They will not play back in QuickTime player, nor 
>> in the apps that are based on QuickTime. Curiously, VLC Player on my AS Mac 
>> will play the “bad” files back, but only after a few frames of what looks 
>> like gray snow. If I turn off the Hardware acceleration in VLC Player, then 
>> it will play back the bad files correctly from frame 1.
>> 
>> If I choose the option in Blue Iris or in the client viewer to re-encode the 
>> video to H.264 before download, then the resulting files behave properly on 
>> any computer I’ve tried, but I don’t want to have to wait for a re-encode 
>> every time I download a clip, and the file bloat is 10x.
>> 
>> I ran the original file and the re-encoded file through ffprobe. Can anyone 
>> decode what the error is, and what the Blue Iris developer can do to fix it?
>> 
>> Bad (original) file:
>> 
>> https://www.kan.org/download/BlueIris/TrailDown.20230203_042912-042924.495.mp4
>> 
>> Good (re-encoded) file:
>> 
>> https://www.kan.org/download/BlueIris/TrailDown.20230203_042912-042924.494.mp4
>> 
>> Console output from Bad (original) file:
>> 
>> ffprobe /Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4
>> ffprobe version N-109745-g7d49fef8b4-tessus  https://evermeet.cx/ffmpeg/  
>> Copyright (c) 2007-2023 the FFmpeg developers
>>   built with Apple clang version 11.0.0 (clang-1100.0.33.17)
>>   configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
>> --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
>> --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
>> --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
>> --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
>> --enable-libopenh264 --enable-libopenjpeg --enable-libopus 
>> --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
>> --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
>> --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
>> --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
>> --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
>> --enable-version3 --pkg-config-flags=--static --disable-ffplay
>>   libavutil  57. 44.100 / 57. 44.100
>>   libavcodec 59. 61.100 / 59. 61.100
>>   libavformat59. 37.100 / 59. 37.100
>>   libavdevice59.  8.101 / 59.  8.101
>>   libavfilter 8. 56.100 /  8. 56.100
>>   libswscale  6.  8.112 /  6.  8.112
>>   libswresample   4.  9.100 /  4.  9.100
>>   libpostproc56.  7.100 / 56.  7.100
>> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
>> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in I frame
>> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -6
>> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
>> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
>> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
>> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
>> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
>> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
>> '/Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4':
>>   Metadata:
>> major_brand : isom
>> minor_version   : 512
>> compatible_brands: isomiso2avc1mp41
>> encoder : Lavf58.45.100
>>   Duration: 00:00:11.30, start: 0.00, bitrate: 6417 kb/s
>>   Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), 
>> yuvj420p(pc, bt709, progressive), 2592x1944, 6388 kb/s, 19.91 fps, 20.67 
>> tbr, 90k tbn (default)
>> Metadata:
>>   handler_name: VideoHandler
>>   vendor_id   : [0][0][0][0]
>>   Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, 
>> fltp, 26 kb/s (default)
>> Metadata:
>>   handler_name: SoundHandler
>>   vendor_id   : [0][0][0][0]
>> 
>> 

Re: [FFmpeg-user] Mal-formed MP4s from Blue Iris won't play on Apple Silicon Macs. Diagnose ffprobe output?

2023-02-07 Thread Steven Kan
> On Feb 7, 2023, at 3:36 PM, Steven Kan  wrote:
> 
> 
> Console output from Bad (original) file:
> 
> ffprobe /Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4 
> ffprobe version N-109745-g7d49fef8b4-tessus  https://evermeet.cx/ffmpeg/  
> Copyright (c) 2007-2023 the FFmpeg developers
>  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
>  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
> --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
> --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
> --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
> --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
> --enable-libopenh264 --enable-libopenjpeg --enable-libopus 
> --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
> --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
> --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
> --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
> --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
> --enable-version3 --pkg-config-flags=--static --disable-ffplay
>  libavutil  57. 44.100 / 57. 44.100
>  libavcodec 59. 61.100 / 59. 61.100
>  libavformat59. 37.100 / 59. 37.100
>  libavdevice59.  8.101 / 59.  8.101
>  libavfilter 8. 56.100 /  8. 56.100
>  libswscale  6.  8.112 /  6.  8.112
>  libswresample   4.  9.100 /  4.  9.100
>  libpostproc56.  7.100 / 56.  7.100
> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in I frame
> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -6
> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
> [h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
> [h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
> '/Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4':
>   

One more note: the playback error is specific to Apple-Silicon-based Macs. 
Both files play properly on Windows.

But the ffprobe output is the same whether I run ffprobe on Windows or on my 
Mac, so there appears to be an issue with the file itself that Windows can 
handle, that my Intel Mac could handle, but that my AS Mac cannot.



[FFmpeg-user] Mal-formed MP4s from Blue Iris won't play on Apple Silicon Macs. Diagnose ffprobe output?

2023-02-07 Thread Steven Kan
I have a Windows PC running Blue Iris security camera software, and it has a 
“Direct-to-disc” option whereby it records video straight from each camera’s 
h.264 RTSP stream without transcoding, to reduce CPU and HDD utilization. There 
is some re-packaging involved, and I think that’s where the problem may lie.

When I browse through recorded footage, either on the BI PC or on another 
device (e.g. my Mac), I have the option to download clips either with or 
without re-encoding to h.264. I obviously prefer to download without 
re-encoding. I’ll call these the “original” downloads. Once downloaded they 
behaved like normal MP4s on my Intel-based Mac. I could read them in QuickTime 
Player or in any other app that uses the QuickTime libraries, like DaVinci 
Resolve.

But last week I replaced my Intel Mac with an Apple Silicon Mac, and now these 
MP4 files are broken. They will not play back in QuickTime player, nor in the 
apps that are based on QuickTime. Curiously, VLC Player on my AS Mac will play 
the “bad” files back, but only after a few frames of what looks like gray snow. 
If I turn off the Hardware acceleration in VLC Player, then it will play back 
the bad files correctly from frame 1. 

If I choose the option in Blue Iris or in the client viewer to re-encode the 
video to H.264 before download, then the resulting files behave properly on any 
computer I’ve tried, but I don’t want to have to wait for a re-encode every 
time I download a clip, and the file bloat is 10x.

I ran the original file and the re-encoded file through ffprobe. Can anyone 
decode what the error is, and what the Blue Iris developer can do to fix it?

Bad (original) file:

https://www.kan.org/download/BlueIris/TrailDown.20230203_042912-042924.495.mp4

Good (re-encoded) file:

https://www.kan.org/download/BlueIris/TrailDown.20230203_042912-042924.494.mp4

Console output from Bad (original) file:

ffprobe /Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4 
ffprobe version N-109745-g7d49fef8b4-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2007-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57. 44.100 / 57. 44.100
  libavcodec 59. 61.100 / 59. 61.100
  libavformat59. 37.100 / 59. 37.100
  libavdevice59.  8.101 / 59.  8.101
  libavfilter 8. 56.100 /  8. 56.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc56.  7.100 / 56.  7.100
[h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
[h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in I frame
[h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -6
[h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
[h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
[h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
[h264 @ 0x7fd082805200] error while decoding MB 161 121, bytestream -5
[h264 @ 0x7fd082805200] concealing 50 DC, 50 AC, 50 MV errors in P frame
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/TrailDown.20230203_042912-042924.495.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.45.100
  Duration: 00:00:11.30, start: 0.00, bitrate: 6417 kb/s
  Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, 
bt709, progressive), 2592x1944, 6388 kb/s, 19.91 fps, 20.67 tbr, 90k tbn 
(default)
Metadata:
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, 
fltp, 26 kb/s (default)
Metadata:
  handler_name: SoundHandler
  vendor_id   : [0][0][0][0]

Console output from Good (re-encoded) file:

/Applications/ffmpeg/ffprobe 
/Users/steven/Downloads/TrailDown.20230203_042912-042924.494.mp4 
ffprobe version N-109745-g7d49fef8b4-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2007-2023 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  

Re: [FFmpeg-user] FFmpeg 5.1 streaming to Youtube - aborts after some time

2022-08-11 Thread Steven Kan
> On Aug 11, 2022, at 2:55 PM, Christian  wrote:
> 
> Hello,
> 
> I successfully compiled FFmpeg 5.1 on a Raspberry Pi 4B with newest
> Raspberry Pi OS 64 bit following this guidance
> https://pimylifeup.com/compiling-ffmpeg-raspberry-pi/ (I substituted 5.0
> with 5.1 in the last step).
> 
> I test this setup for continuous Youtube-Streaming and use this command:
> 
> ffmpeg -stream_loop -1 -re -i natur.mp4 -c:v copy -c:a aac -b:a 192k
> -flvflags +no_sequence_end+no_metadata+no_duration_filesize -f flv
> rtmp://a.rtmp.youtube.com/live2/
> 
> natur.mp4 is a 1:20min h.264 encoded video with 1280x720 resolution and
> 25fps. ffprobe natur.mp4:
> 
> ffprobe natur.mp4
> ffprobe version aba74d7 Copyright (c) 2007-2022 the FFmpeg developers
>   built with gcc 10 (Debian 10.2.1-6)
>   configuration: --extra-cflags=-I/usr/local/include
> --extra-ldflags=-L/usr/local/lib --extra-libs='-lpthread -lm -latomic'
> --arch=arm64 --enable-gmp --enable-gpl --enable-libaom --enable-libass
> --enable-libdav1d --enable-libdrm --enable-libfdk-aac
> --enable-libfreetype --enable-libkvazaar --enable-libmp3lame
> --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus
> --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libssh
> --enable-libvorbis --enable-libvpx --enable-libzimg --enable-libwebp
> --enable-libx264 --enable-libx265 --enable-libxml2 --enable-nonfree
> --enable-version3 --target-os=linux --enable-pthreads --enable-openssl
> --enable-hardcoded-tables
>   libavutil  57. 28.100 / 57. 28.100
>   libavcodec 59. 37.100 / 59. 37.100
>   libavformat59. 27.100 / 59. 27.100
>   libavdevice59.  7.100 / 59.  7.100
>   libavfilter 8. 44.100 /  8. 44.100
>   libswscale  6.  7.100 /  6.  7.100
>   libswresample   4.  7.100 /  4.  7.100
>   libpostproc56.  6.100 / 56.  6.100
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'natur.mp4':
>   Metadata:
> major_brand : isom
> minor_version   : 512
> compatible_brands: isomiso2avc1mp41
> title   : P1050005
> album_artist: Julien Lengelé
> encoder : Lavf56.4.101
> description : Cette vidéo traite de P1050005
>   Duration: 00:01:19.77, start: 0.00, bitrate: 7269 kb/s
>   Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661),
> yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 7268 kb/s, 25 fps, 25
> tbr, 12800 tbn (default)
> Metadata:
>   handler_name: VideoHandler
>   vendor_id   : [0][0][0][0]
>   Stream #0:1[0x2](fra): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz,
> stereo, fltp, 2 kb/s (default)
> Metadata:
>   handler_name: SoundHandler
>   vendor_id   : [0][0][0][0]
> 
> 
> At a random point of hours/days it stops always with following output:
> 
> frame=113674 fps= 25 q=-1.0 size= 4042995kB time=01:15:51.70
> bitrate=7276.4kbit
> frame=113687 fps= 25 q=-1.0 size= 4043565kB time=01:15:52.22
> bitrate=7276.6kbit
> frame=113699 fps= 25 q=-1.0 size= 4044189kB time=01:15:52.70
> bitrate=7277.0kbit
> frame=113712 fps= 25 q=-1.0 size= 4044697kB time=01:15:53.22
> bitrate=7277.1kbit
> frame=113724 fps= 25 q=-1.0 size= 4045234kB time=01:15:53.70
> bitrate=7277.3kbit
> frame=113737 fps= 25 q=-1.0 size= 4045828kB time=01:15:54.22
> bitrate=7277.5kbit
> frame=113750 fps= 25 q=-1.0 size= 4046335kB time=01:15:54.74
> bitrate=7277.6kbit
> WriteN, RTMP send error 104 (24 bytes)
> WriteN, RTMP send error 32 (59 bytes)
> WriteN, RTMP send error 9 (42 bytes)
> av_interleaved_write_frame(): Operation not permitted
> Last message repeated 2 times
> Error writing trailer of rtmp://a.rtmp.youtube.com/live2/: Operation not
> permitted
> frame=113753 fps= 25 q=-1.0 Lsize= 4046481kB time=01:15:54.90
> bitrate=7277.6kbits/s speed=0.997x
> video:4039503kB audio:1265kB subtitle:0kB other streams:0kB global
> headers:0kB muxing overhead: 0.141384%
> Error closing file rtmp://a.rtmp.youtube.com/live2/: Operation not
> permitted
> [aac @ 0x55b0ab82b0] Qavg: 65536.000
> Conversion failed!
> 

I have been streaming from RPi/ffmpeg to YT for a few years now, and the 
streams stop from time to time for reasons I never quite figured out. The best 
I could do was put a watchdog timer on it and restart the stream when it 
failed.

Then Google changed the APIs and I could no longer restart the stream ingestion 
programmatically, even though I could restart the stream from the ffmpeg side. 

So I am very curious to see what you can figure out.


Re: [FFmpeg-user] Average a "rolling" N frames?

2022-04-21 Thread Steven Kan
> On Apr 21, 2022, at 12:03 AM, Paul B Mahol  wrote:
> 
> On Thu, Apr 21, 2022 at 7:03 AM Steven Kan  wrote:
> 
>> I’m putting together a time-lapse video of bees building comb in the hive.
>> I have 5,000+ jpgs (and growing!) in a directory that I process with:
>> 
>> ffmpeg -hwaccel videotoolbox -framerate 60 -pattern_type glob -i '*.jpg'
>> -c:v h264_videotoolbox -b:v 100M CombLapse.mp4
>> 
>> which results in:
>> 
>> https://www.youtube.com/watch?v=CvGAHWVcbwY
>> 
>> Someone suggested that I try to “remove the bees” and get video of just
>> the comb. Which got me thinking, what if I could do a rolling average of,
>> say, 100 frames? So frame 1 of my output would be the average of frames 1 -
>> 100, and frame 2 of my output would be the average of frames 2 - 101, etc.
>> 
>> I’ve used -vf tmix=frames=10:weights=“1” to take 10 frames of input and
>> output 1 frame, but what syntax could I use to do a rolling average?
> 
> 
> tmix by default does rolling average if you use no other filters.

LOL; thanks! I should have RTFM before posting. Anyway, the results are 
amazing. For your esteemed review, here is the result of tmix:

1) The original with 1 frame per frame:

https://www.youtube.com/watch?v=CvGAHWVcbwY

2) The result of a 10-frame rolling average, tmix=frames=10:weights="1":

https://www.youtube.com/watch?v=2dUGbGcGE2c

3) The result of a 50-frame rolling average, tmix=frames=50:weights="1":

https://www.youtube.com/watch?v=aiBw7rAVC7k

Besides the surreal visuals, the other benefit is that it’s not really affected 
by YT's “compression crush,” since the bees are reduced to ghosts anyway, and 
the comb is pretty much static.

Which is the most visually compelling?


[FFmpeg-user] Average a "rolling" N frames?

2022-04-20 Thread Steven Kan
I’m putting together a time-lapse video of bees building comb in the hive. I 
have 5,000+ jpgs (and growing!) in a directory that I process with:

ffmpeg -hwaccel videotoolbox -framerate 60 -pattern_type glob -i '*.jpg' -c:v 
h264_videotoolbox -b:v 100M CombLapse.mp4

which results in:

https://www.youtube.com/watch?v=CvGAHWVcbwY

Someone suggested that I try to “remove the bees” and get video of just the 
comb. Which got me thinking, what if I could do a rolling average of, say, 100 
frames? So frame 1 of my output would be the average of frames 1 - 100, and 
frame 2 of my output would be the average of frames 2 - 101, etc.

I’ve used -vf tmix=frames=10:weights=“1” to take 10 frames of input and output 
1 frame, but what syntax could I use to do a rolling average? 

Or would I have to loop through the jpgs twice? Once to get 4,900 averaged 
stills and then another run to combine those into my video?
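
As the reply above shows, no second pass is needed: with no other filters 
involved, tmix emits one rolling-average frame per input frame. A sketch of the 
100-frame window, reusing my command from above:

ffmpeg -hwaccel videotoolbox -framerate 60 -pattern_type glob -i '*.jpg' -vf 
"tmix=frames=100:weights='1'" -c:v h264_videotoolbox -b:v 100M CombLapse.mp4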



Re: [FFmpeg-user] FFMPEG RTSP stream problem

2022-04-07 Thread Steven Kan
> 2022. 04. 07. 16:25 keltezéssel, Asbóth Bence írta:
>> Hi!
>> 
>> Maybe authentication takes so long? Try to keep the stream open, eg with
>> VLC.
>> 
>> 
> On Apr 7, 2022, at 9:03 AM, Balogh László  wrote:
> 
> Hi!
> 
> I tried, did not help. :(
> 
> On VLC authenticating takes only a few sec. I need to wait just until the 
> next image is created. I don't know why ffmpeg is waiting 2 minutes.
> 
> I attached a full screenlog of the script while running: log01.txt. You can 
> see that it takes about 2 minutes to take each picture, even though the 
> stream contains a new frame about every 16 secs (checked).
> 
> I also attached a log file which was created by ffmpeg (option -report added 
> to command): ffmpeg-20220407-174123.log. I see no problems or errors.

I don’t know what the problem is with your setup, but I use a very similar 
command with a Wyze Cam v3/RTSP firmware and an M1 Mac Mini, and it works in 
about 4 seconds per loop. I don’t know why it takes that long; I would have 
expected something closer to 1 second, but it’s not taking a minute like yours 
is. What camera and computer are you using?

cat ./WyzeSnapshotMultiFrameTest.sh

#!/bin/bash
# Grab one JPG snapshot per loop iteration from the camera's RTSP stream.
# Usage: ./WyzeSnapshotMultiFrameTest.sh <camera-ip> <num-frames> <output-path-prefix>

IPAddress=$1
NumFrames=$2
OutPath=$3

cd /usr/local/bin
# Note: <= runs NumFrames+1 times, producing files numbered 0..NumFrames.
for ((i=0; i<=$NumFrames; i++)); do
    outfile=$OutPath
    outfile+=$i
    outfile+=.jpg
    echo $outfile
    ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@$IPAddress/live -frames:v 1 $outfile
done
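
Invoked, for example, as (the camera IP and output prefix are taken from the 
run below):

./WyzeSnapshotMultiFrameTest.sh 192.168.1.49 10 /Users/steven/FinchLapse/test/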

which results in:

ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 12.0.0 (clang-1200.0.32.27)
  configuration: --prefix=/Volumes/tempdisk/sw --extra-cflags=-fno-stack-check 
--arch=arm64 --cc=/usr/bin/clang --enable-gpl --enable-videotoolbox 
--enable-libopenjpeg --enable-libopus --enable-libmp3lame --enable-libx264 
--enable-libx265 --enable-libvpx --enable-libwebp --enable-libass 
--enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libsnappy 
--enable-libaom --enable-libvidstab --enable-libzimg --enable-libsvtav1 
--enable-version3 --pkg-config-flags=--static --disable-ffplay 
--enable-postproc --enable-nonfree --enable-neon --enable-runtime-cpudetect 
--disable-indev=qtkit --disable-indev=x11grab_xcb
  libavutil  56. 70.100 / 56. 70.100
  libavcodec 58.134.100 / 58.134.100
  libavformat58. 76.100 / 58. 76.100
  libavdevice58. 13.100 / 58. 13.100
  libavfilter 7.110.100 /  7.110.100
  libswscale  5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc55.  9.100 / 55.  9.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.49/live':
  Metadata:
title   : Session streamed by the WYZE Media Server
comment : live
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #0:0: Video: h264 (Main), yuv420p(tv, bt709, progressive), 1920x1080, 
20 fps, 20 tbr, 90k tbn, 40 tbc
  Stream #0:1: Audio: pcm_alaw, 16000 Hz, mono, s16, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x1100a8000] deprecated pixel format used, make sure you did set 
range correctly
Output #0, image2, to '/Users/steven/FinchLapse/test/10.jpg':
  Metadata:
title   : Session streamed by the WYZE Media Server
comment : live
encoder : Lavf58.76.100
  Stream #0:0: Video: mjpeg, yuvj420p(pc, bt709, progressive), 1920x1080, 
q=2-31, 200 kb/s, 20 fps, 20 tbn
Metadata:
  encoder : Lavc58.134.100 mjpeg
Side data:
  cpb: bitrate max/min/avg: 0/0/20 buffer size: 0 vbv_delay: N/A
frame=1 fps=0.0 q=7.7 Lsize=N/A time=00:00:00.05 bitrate=N/A dup=1 drop=1 
speed=0.621x
video:107kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing 
overhead: unknown



___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] drawtext reload=N > 1?

2022-03-21 Thread Steven Kan

> On Mar 1, 2022, at 11:29 PM, Gyan Doshi  wrote:
> 
> On 2022-02-28 12:28 pm, Gyan Doshi wrote:
>> 
>> 
>> On 2022-02-28 10:37 am, Steven Kan wrote:
>>> I am overlaying real-time weather on streaming video:
>>> 
>>> https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live
>>> 
>>> I have a script reading from openweather.org every 10 minutes and writing 
>>> to weather.txt*, and then drawtext reads weather.txt and applies it via:
>>> 
>>> ./ffmpeg -thread_queue_size 2048 -hwaccel videotoolbox -i 
>>> 'rtsp://anonymous:password1@192.168.1.13:554' -hwaccel videotoolbox -i 
>>> 'rtsp://anonymous:password1@192.168.1.45:554' -vcodec h264_videotoolbox 
>>> -b:v 5000k -acodec copy -t 2:00:00 -filter_complex 
>>> "hstack=inputs=2,fps=20[stacked];[stacked]drawtext='fontfile=/System/Library/Fonts/Helvetica.ttc:
>>>  textfile=/tmp/weather.txt: fontcolor=white: fontsize=48: 
>>> x=(w-text_w)*0.01: y=(h-text_h)*0.01:reload=600'" -f flv 
>>> "rtmp://a.rtmp.youtube.com/live2/”
>>> 
>>> It’s working, but it seems very inefficient to read weather.txt every 1 
>>> frame when it gets updated only every 12,000 frames.
>>> 
>>> According to the documentation, reload is a Boolean to read every frame or 
>>> not, and attempting reload=2 or reload=600 results in:
>>> 
>>> [drawtext @ 0x7fa696010600] Unable to parse option value "2" as boolean
>>> 
>>> Would it be a worthy feature request to allow drawtext to accept integer 
>>> values N > 1, and then reload the text file every Nth frame? It seems like 
>>> a win for CPU and I/O loading, with the benefit of being fully backward 
>>> compatible with existing scripts that read in every 1 frame (e.g. reload=1).
>> 
>> This has come up before. I'll patch it to make it an interval.
> 
> Patched in git master.

Wow! That was a very elegant, efficient patch! One line of code, other than the 
modified variable declaration:

-if (s->reload) {
+if (s->reload && !(inlink->frame_count_out % s->reload)) {
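
So with a current build, reload now takes a frame interval. At my 20 fps 
streams, something like this (a sketch) re-reads the file once every 600 
frames, i.e. every 30 seconds, instead of every frame:

drawtext='fontfile=/System/Library/Fonts/Helvetica.ttc: 
textfile=/tmp/weather.txt: fontcolor=white: fontsize=48: x=(w-text_w)*0.01: 
y=(h-text_h)*0.01: reload=600'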
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Streaming to youtube-live / stream copying from surveillance cam to youtube-live not recognized

2022-03-17 Thread Steven Kan

> On Mar 17, 2022, at 1:46 PM, Carl Zwanzig  wrote:
> 
> 
>> On 3/17/2022 12:25 PM, Steven Kan wrote:
>> The playlist is because the Reolink doesn’t have an audio track, which
>> YouTube Stream Now requires.
> You could also add a silent audio source with something like "-f lavfi -i 
> anullsrc"  (check the options for anullsrc, you may need to tweek them).
> 
> (It's always a good idea to post the -complete- command output when there's 
> an error or question.).

The playlist is now a feature, not a bug :D

https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live
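
(For anyone who wants the anullsrc route instead of a playlist, a sketch along 
Carl's suggestion — the anullsrc parameters and the stream key are assumptions:

./ffmpeg -rtsp_transport tcp -i "rtsp://anonymous:password@192.168.1.11:554" 
-f lavfi -i anullsrc=r=44100:cl=mono -vcodec copy -acodec aac -shortest -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-other-youtube-key"

The second, lavfi input supplies an endless silent track, encoded to AAC for 
the FLV container; -shortest ends the output if the RTSP feed drops.)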
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Streaming to youtube-live / stream copying from surveillance cam to youtube-live not recognized

2022-03-17 Thread Steven Kan
> 
> On Mar 17, 2022, at 12:16 PM, Christian Pickhardt  
> wrote:
> 
> Mail to ffmpeg-user@ffmpeg.org
> 
> Hello ffmpeg-user-community,
> since some weeks I am trying to copy a stream from an rtsp-stream to a 
> rtmp-stream (targeting youtube) without success.
> 
> I have tested recent ffmpeg builds on Windows, and on my raspberry pi which 
> is running ffmpeg version 4.4.
> 
> What works:
> Version independed i can view the source stream by:
> ffplay -i rtsp://<user>:<password>@<ip>/<path>
> 
> It is also possible to write the stream to disk an view it using different 
> viewers: VLC/Firefox/Windowsplayer and so on.
> I can easyly stream data from file(created with a GoPro-Camera) using rtmp to 
> youtube-live. Almost instantly the stream is recognized
> and shows up after some seconds.
> 
> What doesn't work:
> When I try to stream the data (stream or file) from my WebCams (D-Link or 
> Trendnet) to youtube-live no stream is detected.
> 
> ---This works without problems---
> C:\Temp\..\ffmpeg-master-latest-win64-gpl\bin>ffmpeg -i 
> "c:\Temp\GOPR1903.MP4" -r 25 -f flv 
> rtmp://a.rtmp.youtube.com/live2/5zq1-uztf-kuz2-ab21-ccda (<-just a sample 
> key...)
> 
> 
> ---The following seems to work, but w/o result, meaning no data stream is 
> detected on youtube-live:---
> (all IPs and keys are randomly chosen...)
> 
> C:\Temp\Software\MPlayer\ffmpeg-master-latest-win64-gpl\bin> ffmpeg -loglevel 
> debug -i rtsp://:ufwoierwe1321232@10.12.0.5/live1.sdp 
> -rtsp_transport  tcp -codec copy -bufsize 512k -g 50 -threads 2 -pix_fmt 
> yuvj420p -f h264 "rtmp://a.rtmp.youtube.com/live2/5zq1-uztf-kuz2-ab21-ccda”

Try changing -f h264 to -f flv; YouTube’s RTMP ingest expects the FLV container. 
Here’s what I use from a Wyze Cam v3 with RTSP firmware:

./ffmpeg -thread_queue_size 2048 -i 
'rtsp://anonymous:password@192.168.1.49/live' -vcodec copy -acodec copy -t 
2:00:00 -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

and it works just fine:

https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live 


and from a Reolink RLC-423:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt 
-vcodec copy -acodec copy -t 01:47:02 -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-other-youtube-key”

The playlist is because the Reolink doesn’t have an audio track, which YouTube 
Stream Now requires.
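
For the record, playlist.txt is just concat-demuxer "file" directives, 
presumably pointing at a short silent audio clip repeated to cover the stream 
duration — the filename here is hypothetical:

file 'silence.aac'
file 'silence.aac'
file 'silence.aac'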
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Average N frames of RTSP video for better snapshot quality?

2022-03-08 Thread Steven Kan

> On Mar 8, 2022, at 11:02 AM, Steven Kan  wrote:
> 
> 
>> On Mar 8, 2022, at 10:32 AM, Michael Koch  
>> wrote:
>> 
>> On 2022-03-08 at 19:09, Steven Kan wrote:
>>> After 7.5 years of waiting, my banana plant is finally flowering! I want to 
>>> do a time-lapse capture of the flowering and fruiting process. Due to its 
>>> location, the easiest way for me to get a camera out there is to use a 
>>> little WyzeCam v3 with the RTSP firmware and the Wyze lamp socket. 
>>> Unfortunately the WyzeCam doesn’t (yet) have a externally accessible JPG 
>>> snapshot feature, so I have a cron job set up to:
>>> 
>>> ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@$IPAddress/live 
>>> -frames:v 1 $outfile
>>> 
>>> every hour. The results are OK, but not fantastic:
>>> 
>>> https://www.kan.org/pictures/BananaTimeLapseFirstImage.jpg 
>>> <https://www.kan.org/pictures/BananaTimeLapseFirstImage.jpg>
>>> 
>>> Is there a way to tell ffmpeg to collect N frames of video and output one 
>>> single averaged image to improve the SNR? Even if there’s some wind, the 
>>> flower stalk shouldn’t be moving much.
>>> 
>>> I tried:
>>> 
>>> ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live 
>>> -frames:v 10 ~/BananaLapse/MultiFrame%03d.jpg
>>> 
>>> and that results in N JPGs. I suppose I could have a second ffmpeg command 
>>> that averages those 10 JPGs, but can this all be done in one pass? Thanks!
>> 
>> You can use the "tmix" filter before you extract the images from the video.
>> 
>> Michael
> 
> Thanks! Can I get a little help on the syntax? Right now it’s still expecting 
> to output multiple images:
> 
> ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live 
> -frames:v 10 -vf tmix=frames=10:weights="1" ~/BananaLapse/MultiFrame.jpg

Ah, I think I figured it out. This works:

./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live -vf 
tmix=frames=10:weights="1" -frames:v 1 ~/BananaLapse/MultiFrame.jpg

I now have the -vf first, which averages 10 frames into 1, and then -frames:v 
expects only 1, correct? The output appears to be what I expect, with various 
values of N:

https://www.kan.org/pictures/MultiFrame1.jpg 
<https://www.kan.org/pictures/MultiFrame1.jpg>

https://www.kan.org/pictures/MultiFrame10.jpg 
<https://www.kan.org/pictures/MultiFrame10.jpg>

https://www.kan.org/pictures/MultiFrame128.jpg 
<https://www.kan.org/pictures/MultiFrame128.jpg>

And after all that I’m not sure it improves the image that much. I’ll check 
again at night, when the SNR will get worse.

Thanks for the help!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Average N frames of RTSP video for better snapshot quality?

2022-03-08 Thread Steven Kan

> On Mar 8, 2022, at 10:32 AM, Michael Koch  wrote:
> 
> On 2022-03-08 at 19:09, Steven Kan wrote:
>> After 7.5 years of waiting, my banana plant is finally flowering! I want to 
>> do a time-lapse capture of the flowering and fruiting process. Due to its 
>> location, the easiest way for me to get a camera out there is to use a 
>> little WyzeCam v3 with the RTSP firmware and the Wyze lamp socket. 
>> Unfortunately the WyzeCam doesn’t (yet) have an externally accessible JPG 
>> snapshot feature, so I have a cron job set up to:
>> 
>> ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@$IPAddress/live 
>> -frames:v 1 $outfile
>> 
>> every hour. The results are OK, but not fantastic:
>> 
>> https://www.kan.org/pictures/BananaTimeLapseFirstImage.jpg 
>> <https://www.kan.org/pictures/BananaTimeLapseFirstImage.jpg>
>> 
>> Is there a way to tell ffmpeg to collect N frames of video and output one 
>> single averaged image to improve the SNR? Even if there’s some wind, the 
>> flower stalk shouldn’t be moving much.
>> 
>> I tried:
>> 
>> ./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live 
>> -frames:v 10 ~/BananaLapse/MultiFrame%03d.jpg
>> 
>> and that results in N JPGs. I suppose I could have a second ffmpeg command 
>> that averages those 10 JPGs, but can this all be done in one pass? Thanks!
> 
> You can use the "tmix" filter before you extract the images from the video.
> 
> Michael

Thanks! Can I get a little help on the syntax? Right now it’s still expecting 
to output multiple images:

./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live 
-frames:v 10 -vf tmix=frames=10:weights="1" ~/BananaLapse/MultiFrame.jpg
ffmpeg version N-102535-g6ff2aba088-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  57.  0.100 / 57.  0.100
  libavcodec 59.  1.100 / 59.  1.100
  libavformat59.  2.100 / 59.  2.100
  libavdevice59.  0.100 / 59.  0.100
  libavfilter 8.  0.101 /  8.  0.101
  libswscale  6.  0.100 /  6.  0.100
  libswresample   4.  0.100 /  4.  0.100
  libpostproc56.  0.100 / 56.  0.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.39/live':
  Metadata:
title   : Session streamed by the WYZE Media Server
comment : live
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #0:0: Video: h264 (Main), yuv420p(tv, bt709, progressive), 1920x1080, 
20 fps, 20 tbr, 90k tbn
  Stream #0:1: Audio: pcm_alaw, 16000 Hz, mono, s16, 128 kb/s
File '/Users/steven/BananaLapse/MultiFrame.jpg' already exists. Overwrite? 
[y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x10e418000] deprecated pixel format used, make sure you did set 
range correctly
Output #0, image2, to '/Users/steven/BananaLapse/MultiFrame.jpg':
  Metadata:
title   : Session streamed by the WYZE Media Server
comment : live
encoder : Lavf59.2.100
  Stream #0:0: Video: mjpeg, yuvj420p(pc, progressive), 1920x1080, q=2-31, 200 
kb/s, 20 fps, 20 tbn
Metadata:
  encoder : Lavc59.1.100 mjpeg
Side data:
  cpb: bitrate max/min/avg: 0/0/20 buffer size: 0 vbv_delay: N/A
[image2 @ 0x7fa809c0adc0] Could not get frame filename number 2 from pattern 
'/Users/steven/BananaLapse/MultiFrame.jpg'. Use '-frames:v 1' for a single 
image, or '-update' option, or use a pattern such as %03d within the filename.
av_interleaved_write_frame(): Invalid argument
[image2 @ 0x7fa809c0adc0] Could not get frame filename number 2 from pattern 
'/Users/steven/BananaLapse/MultiFrame.jpg'. Use '-frames:v 1' for a single 
image, or '-update' option, or use a pattern such as %03d within the filename.
av_interleaved_write_frame(): Invalid argument
[image2 @

[FFmpeg-user] Average N frames of RTSP video for better snapshot quality?

2022-03-08 Thread Steven Kan
After 7.5 years of waiting, my banana plant is finally flowering! I want to do 
a time-lapse capture of the flowering and fruiting process. Due to its 
location, the easiest way for me to get a camera out there is to use a little 
WyzeCam v3 with the RTSP firmware and the Wyze lamp socket. Unfortunately the 
WyzeCam doesn’t (yet) have an externally accessible JPG snapshot feature, so I 
have a cron job set up to:

./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@$IPAddress/live 
-frames:v 1 $outfile

every hour. The results are OK, but not fantastic:

https://www.kan.org/pictures/BananaTimeLapseFirstImage.jpg 


Is there a way to tell ffmpeg to collect N frames of video and output one 
single averaged image to improve the SNR? Even if there’s some wind, the flower 
stalk shouldn’t be moving much. 

I tried:

./ffmpeg -rtsp_transport tcp -i rtsp://anonymous:password@192.168.1.39/live 
-frames:v 10 ~/BananaLapse/MultiFrame%03d.jpg

and that results in N JPGs. I suppose I could have a second ffmpeg command that 
averages those 10 JPGs, but can this all be done in one pass? Thanks!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] drawtext reload=N > 1?

2022-02-28 Thread Steven Kan


> On Feb 27, 2022, at 10:49 PM, Adam Nielsen via ffmpeg-user 
>  wrote:
> 
>> Would it be a worthy feature request to allow drawtext to accept
>> integer values N > 1, and then reload the text file every Nth frame?
>> It seems like a win for CPU and I/O loading, with the benefit of
>> being fully backward compatible with existing scripts that read in
>> every 1 frame (e.g. reload=1).
>> 
>> Or are there downsides to this that I’m not seeing?
> 
> I thought about something similar as I'm also overlaying infrequently
> changed data (temperature) onto video.
> 
> However I ended up putting the file on a tmpfs partition so there's no
> actual disk IO, the whole thing sits in memory anyway.
> 
> Have you tried benchmarking to see how much benefit you'd get from this
> optimisation?  You could check CPU usage and I/O load with reload=1 and
> again with reload=0 and see what the difference is.  Let us know what
> you find, as I haven't actually tried this myself so it would be
> interesting to know what the impact is of reading the file on every
> frame.
> 
> * and yes, I’m writing to weather.tmp and cp-ing to weather.txt to 
>> prevent a file I/O collision.
> 
> Do you mean "mv" instead of "cp"?  I don't think "cp" is atomic but
> "mv" is.  Using "cp" won't break anything but you might get a frame
> here or there with incomplete data.

Thanks for the tip about mv vs cp. I originally had mv, per the documentation, 
but then I changed it to cp during troubleshooting because I couldn’t figure 
out why my temp file kept disappearing, LOL. I’ve changed it back to mv.

Regarding I/O load, I know it’s probably negligible, but it just offends my 
sensibilities to read something 11,999 times for no reason. And if this feature 
gets implemented (thank you, Gyan!!!) then I won’t have to worry about where I 
put the tmp file. 
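
For reference, the update side now looks like this (the fetch itself is 
simplified; $WEATHER_URL is a placeholder for the actual openweather query):

curl -s "$WEATHER_URL" > /tmp/weather.tmp
mv /tmp/weather.tmp /tmp/weather.txt   # rename is atomic on the same filesystem; cp is not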
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-user] drawtext reload=N > 1?

2022-02-27 Thread Steven Kan
I am overlaying real-time weather on streaming video:

https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live

I have a script reading from openweather.org every 10 minutes and writing to 
weather.txt*, and then drawtext reads weather.txt and applies it via:

./ffmpeg -thread_queue_size 2048 -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.13:554' -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.45:554' -vcodec h264_videotoolbox -b:v 
5000k -acodec copy -t 2:00:00 -filter_complex 
"hstack=inputs=2,fps=20[stacked];[stacked]drawtext='fontfile=/System/Library/Fonts/Helvetica.ttc:
 textfile=/tmp/weather.txt: fontcolor=white: fontsize=48: x=(w-text_w)*0.01: 
y=(h-text_h)*0.01:reload=600'" -f flv 
"rtmp://a.rtmp.youtube.com/live2/”

It’s working, but it seems very inefficient to read weather.txt every 1 frame 
when it gets updated only every 12,000 frames.

According to the documentation, reload is a Boolean to read every frame or not, 
and attempting reload=2 or reload=600 results in:

[drawtext @ 0x7fa696010600] Unable to parse option value "2" as boolean

Would it be a worthy feature request to allow drawtext to accept integer values N 
> 1, and then reload the text file every Nth frame? It seems like a win for CPU 
and I/O loading, with the benefit of being fully backward compatible with 
existing scripts that read in every 1 frame (e.g. reload=1).

Or are there downsides to this that I’m not seeing?

* and yes, I’m writing to weather.tmp and cp-ing to weather.txt to prevent a 
file I/O collision.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-user] M1/Apple Silicon Max Res for -hwaccel videotoolbox?

2022-02-18 Thread Steven Kan
I am assembling RTSP feeds from two cameras into one YouTube stream with ffmpeg 
on my M1 Mac Mini:

https://www.youtube.com/channel/UCIVY11504PcY2sy2qpRhiMg/live

If my cameras are set to output 1920x1080 each, then this works, with CPU 
utilization of about 25%:

./ffmpeg -thread_queue_size 2048 -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.13:554' -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.45:554' -vcodec h264_videotoolbox -b:v 
5000k -acodec copy -t 02:00:00 -filter_complex "hstack=inputs=2,fps=20" -f flv 
"rtmp://a.rtmp.youtube.com/live2/"

I am simultaneously recording the original raw streams on a PC on my LAN via 
Blue Iris. With my current setup, my local copies of the raw video are only 
1920 x 1080, and the cameras are capable of 2592x1944.

YT accepts a maximum horizontal resolution of 3840, so I tried setting the 
cameras for 2592x1944 and scaling down via:

./ffmpeg -thread_queue_size 2048 -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.13:554' -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.45:554' -vcodec h264_videotoolbox -b:v 
5000k -acodec copy -t 02:00:00 -filter_complex 
"[0:v]scale=1920:-1[left];[1:v]scale=1920:-1[right];[left][right]hstack"  -f 
flv "rtmp://a.rtmp.youtube.com/live2/”

but that results in a stream of errors (full console dump at the bottom):

[h264 @ 0x12100d200] hardware accelerator failed to decode picture
[h264 @ 0x12100d800] hardware accelerator failed to decode picture
[h264 @ 0x12100de00] hardware accelerator failed to decode picture
[rtsp @ 0x12000ca00] max delay reached. need to consume packet
[rtsp @ 0x12000ca00] RTP: missed 141 packets
[rtsp @ 0x12000ca00] max delay reached. need to consume packet
[rtsp @ 0x12000ca00] RTP: missed 19 packets
[rtsp @ 0x12000ca00] max delay reached. need to consume packet
[rtsp @ 0x12000ca00] RTP: missed 289 packets
[rtsp @ 0x12000ca00] max delay reached. need to consume packet
[rtsp @ 0x12000ca00] RTP: missed 531 packets
[h264 @ 0x121026200] hardware accelerator failed to decode picture
[h264 @ 0x121026800] hardware accelerator failed to decode picture
[h264 @ 0x121026e00] hardware accelerator failed to decode picture

and, of course, the YT stream doesn’t work. 

If I remove  -hwaccel videotoolbox then it defaults to libx264, and it will 
stream, but CPU utilization on my Mac Mini goes from ~25% to 75%. 

What I don’t understand is that, if ffmpeg scales each 2592x1944 stream down to 
1920x1440 before hstack, how is that different from combining two original 
1920x1080 streams via hstack, other than the additional vertical pixels? Or 
does the scaling actually happen after hstack? Or is the limitation in the Y 
direction? Or am I doing this wrong? Or is this a question for Apple?
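
(One variant I haven't tried at this resolution: pinning both scaled streams to 
a common pixel format before hstack, along the lines Michael Koch suggested for 
the earlier "Conversion failed" problem — a sketch; no idea whether it helps 
the hardware-decode errors:

-filter_complex "[0:v]scale=1920:-2,format=yuv420p[left];[1:v]scale=1920:-2,format=yuv420p[right];[left][right]hstack"

The -2 keeps the scaled height even, which some encoders require.)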

Full dump:

./ffmpeg -thread_queue_size 2048 -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.13:554' -hwaccel videotoolbox -i 
'rtsp://anonymous:password1@192.168.1.45:554' -vcodec h264_videotoolbox -b:v 
5000k -acodec copy -t 02:00:00 -filter_complex 
"[0:v]scale=1920:-1[left];[1:v]scale=1920:-1[right];[left][right]hstack"  -f 
flv "rtmp://a.rtmp.youtube.com/live2/"
ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 12.0.0 (clang-1200.0.32.27)
  configuration: --prefix=/Volumes/tempdisk/sw --extra-cflags=-fno-stack-check 
--arch=arm64 --cc=/usr/bin/clang --enable-gpl --enable-videotoolbox 
--enable-libopenjpeg --enable-libopus --enable-libmp3lame --enable-libx264 
--enable-libx265 --enable-libvpx --enable-libwebp --enable-libass 
--enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libsnappy 
--enable-libaom --enable-libvidstab --enable-libzimg --enable-libsvtav1 
--enable-version3 --pkg-config-flags=--static --disable-ffplay 
--enable-postproc --enable-nonfree --enable-neon --enable-runtime-cpudetect 
--disable-indev=qtkit --disable-indev=x11grab_xcb
  libavutil  56. 70.100 / 56. 70.100
  libavcodec 58.134.100 / 58.134.100
  libavformat58. 76.100 / 58. 76.100
  libavdevice58. 13.100 / 58. 13.100
  libavfilter 7.110.100 /  7.110.100
  libswscale  5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc55.  9.100 / 55.  9.100
Input #0, rtsp, from 'rtsp://anonymous:password1@192.168.1.13:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 2592x1944, 20 fps, 20 
tbr, 90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://anonymous:password1@192.168.1.45:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.128000, bitrate: N/A
  Stream #1:0: Video: h264 (Main), yuv420p(progressive), 2592x1944, 20 fps, 20 
tbr, 90k tbn, 180k tbc
  Stream #1:1: Audio: aac (LC), 8000 Hz, mono, fltp
Stream mapping:
  Stream #0:0 (h264) -> scale
  Stream #1:0 (h264) -> scale
  hstack -> Stream #0:0 (h264_videotoolbox)
  Stream #1:1 -> #0:1 (copy)
Press [q] 

Re: [FFmpeg-user] Video Notation, A Video Lingua Franca

2022-02-08 Thread Steven Kan
Mark,

You can do it in the HTML or you can do it in a CSS, but I’m just going to take 
whatever you give me and drop it in a folder!

> On Feb 8, 2022, at 1:51 PM, Mark Filipak  
> wrote:
> 
> On 2022-02-08 16:42, Steven Kan wrote:
>> Of course!
>> If you want, I can host them here:
>> https://www.kan.org/VideoNotation/Video%20Notation.01.Preface.html
> 
> That's awesome, Steven!
> 
> Yes! Indeed!
> 
> One thing, the backgrounds are gray. I don't specify a <BODY> background. I 
> think the default background is white. Should I add 
> 'BODY{background-color:white}' to 

Re: [FFmpeg-user] Video Notation, A Video Lingua Franca

2022-02-08 Thread Steven Kan
Of course! 

If you want, I can host them here:

https://www.kan.org/VideoNotation/Video%20Notation.01.Preface.html

I don’t have an unlimited hosting account, but I can’t imagine these will get 
that much traffic.

Of course if you don’t want them there, let me know and I’ll delete them.

> On Feb 8, 2022, at 12:49 PM, Mark Filipak  
> wrote:
> 
> On 2022-02-08 12:12, Steven Kan wrote:
>> Hi,
>> Can you fax them to me?
> 
> You're making a joke, right?
> 
> -- Mark.
> 

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Video Notation, A Video Lingua Franca

2022-02-08 Thread Steven Kan
Hi,

Can you fax them to me? 

> On Feb 7, 2022, at 10:57 PM, Mark Filipak  
> wrote:
> 
> This message is apparently the best I can do. The ffmpeg-user system 
> apparently won't allow HTML file attachments. I'm sorry.
> 
> ...I'm amazed by the number of people who can't handle ZIP.
> 
> Simply open the ZIP and extract the 11 enclosed HTML files to a directory and 
> then click any one of them.
> 
> If you don't know how to open a ZIP, perhaps your email client does -- just 
> double-click on the attachment and follow your email client's cues.
> 
> Regards,
> Mark Filipak.
> 
> ___
> ffmpeg-user mailing list
> ffmpeg-user@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
> 
> To unsubscribe, visit link above, or email
> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Removing parts of a video using the select filter

2022-01-09 Thread Steven Kan


> On Jan 9, 2022, at 9:28 AM, Michael Koch  wrote:
> 
> On 2022-01-09 at 18:14, Steven Kan wrote:
>> I’m a little late to this party, and it’s not strictly an ffmpeg solution, 
>> but have a look at Lossless Cut:
>> 
>> https://mifi.no/losslesscut/
>> 
>> It does exactly what you want, graphically, with the option to cut at key 
>> frames (or not),
> 
> Are you sure? I saw this note at the bottom:
> "This app is not for exact cutting. Start cut time will be "rounded" to the 
> nearest previous keyframe, which may be a fraction of a second before your 
> desired cut point, or up to several seconds, depending on the encoding."
> 
> Michael

When you actually click Export, it pops up another dialog box that allows you 
to specify “keyframe cut” or “normal cut.”

I haven’t done any comparison testing, since keyframe is almost always good 
enough for my limited needs. 

But give it a try! It’s free!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Removing parts of a video using the select filter

2022-01-09 Thread Steven Kan
I’m a little late to this party, and it’s not strictly an ffmpeg solution, but 
have a look at Lossless Cut:

https://mifi.no/losslesscut/

It does exactly what you want, graphically, with the option to cut at key 
frames (or not), includes audio, can concatenate, is free, and uses ffmpeg 
under the hood, thus making it on-topic for this list. 
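
(For a pure-ffmpeg answer to Simon's multi-cut question quoted below: the 
select/aselect filters can drop all the unwanted ranges in one pass, at the 
cost of a re-encode. A sketch, with his timestamps converted to seconds:

ffmpeg -i input.mp4 \
-vf "select='not(between(t,10,15)+between(t,75,250)+gte(t,600))',setpts=N/FRAME_RATE/TB" \
-af "aselect='not(between(t,10,15)+between(t,75,250)+gte(t,600))',asetpts=N/SR/TB" \
output.mp4

The setpts/asetpts steps regenerate timestamps so the kept frames play back 
without gaps.)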
--
Sent from my iPhone
Steven Kan
+1-818-620-3062 (m)

> On Jan 9, 2022, at 3:48 AM, Simon van Bernem via ffmpeg-user 
>  wrote:
> 
> 
> 
> My specific use case is removing multiple parts of a video. So e.g. I want to 
> "remove the part from 0:10 to 0:15 and the part from 1:15 to 4:10 and the 
> part from 10:00 to the end".
> 
> How would that look with this approach? Can you have multiple -ss and -to 
> pairs? Can you express discarding a section, rather than keeping it?
> 
> Regards,
> Simon
> 
>> 
>> On 09.01.2022 at 12:39, Stephen Liu via ffmpeg-user wrote:
>> 
>> 
>> Hi, 
>> I run the following command in Terminal to trim a section of the video (for VCD 
>> video):- 
>> $ ffmpeg -i input.VOB -target pal-vcd -ss 00:02:10 -to 00:03:18 -c:v copy 
>> -c:a copy output.VOB 
>> 
>> -ss starting time (hrs:min:sec) 
>> -to stop time (hrs:min:sec) 
>> 
>> This command line works for me seamlessly. 
>> 
>> Regards 
>> 
>> On Sunday, January 9, 2022, 06:43:39 PM GMT+8, MacFH - C E Macfarlane - News 
>> wrote: 
>> 
>>> On 08/01/2022 23:43, amindfv--- via ffmpeg-user wrote: 
>>> 
>>> On Sat, Jan 08, 2022 at 08:20:46PM +, MacFH - C E Macfarlane - News 
>>> wrote: 
>>>> 
>>>> To select parts of a video, I use ... 
>>>> 
>>>> FFMPEG -ss <start> -i <input> -codec copy -to <stop> <output>
>>>> 
>>>> ... however it's tedious, because with this method the video will only 
>>>> break 
>>>> at certain points between compression units (can't remember the proper 
>>>> terminology), and it can take some experimentation to find the precise 
>>>> timing of these to get the audio right as well, and often, because of the 
>>>> way the compression works, the first extracted frame is often an unwanted 
>>>> last frame of a previous scene. 
>>>> 
>>>> I have long lamented that FFMPEG doesn't make this both easier and to 
>>>> allow 
>>>> greater resolution, if necessary recreating the start and end compression 
>>>> units to get the exact timing wanted. 
>>> 
>>> If you get rid of the "-codec copy" you can have any precision you'd like. 
>> 
>> But the entire video will be re-encoded, which is undesirable because, 
>> as the codecs use lossy compression, there will be further degradation 
>> of the entire video clip just gain some precision at each end. 
>> ___ 
>> ffmpeg-user mailing list 
>> ffmpeg-user@ffmpeg.org 
>> https://ffmpeg.org/mailman/listinfo/ffmpeg-user 
>> 
>> To unsubscribe, visit link above, or email 
>> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe". 
>> 
>> ___ 
>> ffmpeg-user mailing list 
>> ffmpeg-user@ffmpeg.org 
>> https://ffmpeg.org/mailman/listinfo/ffmpeg-user 
>> 
>> To unsubscribe, visit link above, or email 
>> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe". 
>> 
> 
> ___
> ffmpeg-user mailing list
> ffmpeg-user@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
> 
> To unsubscribe, visit link above, or email
> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] HW Acceleration 101? 2-Up Streaming from RTSP-->ffmpeg-->YouTube

2021-11-08 Thread Steven Kan
Hmm. I hadn’t considered that. What I’m actually doing is combining two 1920 x 
1080 streams into a 3840 x 1080 stream. Would an NVidia 1030 be able to do 
that, twice?

Back on my Mac mini, I’m still learning how to manage services in macOS vs. 
Raspbian, but I have the stream running right now:

https://www.youtube.com/channel/UCcIZVSZfzrxS6ynL8DQeC-Q/live

My largest display is only 1920 x 1200. Can anyone with a 4K display tell me if 
YT is presenting this as a 3840-wide stream?

Thanks!

p.s. my previous misgivings about a reduction to “only 30% CPU” were totally 
misguided. That’s 30% of one core, and the Mini has 8 cores, so the system is 
very lightly loaded. GUI-level Activity Monitor shows Idle at ~92%. I should be 
able to run many instances of ffmpeg on here, if I can get them all running 
properly. 

> On Nov 8, 2021, at 9:43 AM, andrei ka  wrote:
> 
> you could simply plug a recent low profile nvidia (e.g. 1030) into pcie
> slot of your hpe micro and nvenc would do 2 fhd h264 encodes like a charm
> 
> 
> On Sun, Nov 7, 2021 at 11:40 PM Steven Kan  wrote:
> 
>>> On Jan 18, 2021, at 10:42 PM, Carl Eugen Hoyos 
>> wrote:
>>> 
>>> On Mon, Jan 18, 2021 at 23:34, Steven Kan wrote:
>>>> 
>>>>> On Jan 18, 2021, at 12:50 PM, Michael Koch <
>> astroelectro...@t-online.de> wrote:
>>> 
>>>>>> C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024
>> -i rtsp://anonymous:password@192.168.1.47:554 -i rtsp://
>> anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy -t
>> 01:47:02 -filter_complex hstack=inputs=2 -f flv out.flv
>>>>>> 
>>>>>> [snip
>>>> 
>>>>>> Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.47:554':
>>>>>> Metadata:
>>>>>>  title   : Media Server
>>>>>> Duration: N/A, start: 0.08, bitrate: N/A
>>>>>>  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive),
>> 1920x1080, 25 fps, 25 tbr, 90k tbn, 180k tbc
>>>>>> Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
>>>>>> Metadata:
>>>>>>  title   : Media Server
>>>>>> Duration: N/A, start: 0.10, bitrate: N/A
>>>>>>  Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1920x1080,
>> 100 tbr, 90k tbn, 180k tbc
>>>>> 
>>>>> I see that the two streams have different pixel formats yuvj420p and
>> yuv420p. You could try to bring them to the same pixel format before using
>> hstack.
>>>>> [0]format=yuv420p[a];[a][1]hstack
>>>>> 
>>>>> It's only a wild guess, I'm not sure.
>>>> 
>>>> Do I put this into the filter_complex argument, e.g. -filter_complex
>> "[0]format=yuv420p[a];[a][1] hstack=inputs=2”
>>>> 
>>>> That still results in the "Conversion failed!” error.
>>> 
>>> There is a "scale" missing behind format iirc but for performance
>> reasons you
>>> want to overwrite the pix_fmt instead, not sure if this is possible.
>> 
>> Some progress, here. I’m now attempting this on an M1-powered Mac mini:
>> 
>> ./ffmpeg -thread_queue_size 1024 -hwaccel videotoolbox -i rtsp://
>> anonymous:password@192.168.1.47:554 -hwaccel videotoolbox -i rtsp://
>> anonymous:password@192.168.1.50:554 -vcodec h264_videotoolbox -acodec
>> copy -t 01:00:00 -filter_complex hstack=inputs=2 -f flv "rtmp://
>> a.rtmp.youtube.com/live2/..."
>> 
>> which results in:
>> 
>> ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
>>  built with Apple clang version 12.0.0 (clang-1200.0.32.27)
>>  configuration: --prefix=/Volumes/tempdisk/sw
>> --extra-cflags=-fno-stack-check --arch=arm64 --cc=/usr/bin/clang
>> --enable-gpl --enable-videotoolbox --enable-libopenjpeg --enable-libopus
>> --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libvpx
>> --enable-libwebp --enable-libass --enable-libfreetype --enable-libtheora
>> --enable-libvorbis --enable-libsnappy --enable-libaom --enable-libvidstab
>> --enable-libzimg --enable-libsvtav1 --enable-version3
>> --pkg-config-flags=--static --disable-ffplay --enable-postproc
>> --enable-nonfree --enable-neon --enable-runtime-cpudetect
>> --disable-indev=qtkit --disable-indev=x11grab_xcb
>>  libavutil  56. 70.100 / 56. 70.100
>>  libavcodec 58.134.100 / 58.134.100
>>  libavformat58. 76.100 / 58. 76.100
>>  libavdevice58. 13.100 / 58. 13.100
>>  libavfilter 7.110.100 /  7.1

Re: [FFmpeg-user] HW Acceleration 101? 2-Up Streaming from RTSP-->ffmpeg-->YouTube

2021-11-07 Thread Steven Kan
> On Jan 18, 2021, at 10:42 PM, Carl Eugen Hoyos  wrote:
> 
> On Mon, Jan 18, 2021 at 23:34, Steven Kan wrote:
>> 
>>> On Jan 18, 2021, at 12:50 PM, Michael Koch  
>>> wrote:
> 
>>>> C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
>>>> rtsp://anonymous:password@192.168.1.47:554 -i 
>>>> rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy 
>>>> -t 01:47:02 -filter_complex hstack=inputs=2 -f flv out.flv
>>>> 
>>>> [snip
>> 
>>>> Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.47:554':
>>>> Metadata:
>>>>   title   : Media Server
>>>> Duration: N/A, start: 0.08, bitrate: N/A
>>>>   Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 
>>>> 1920x1080, 25 fps, 25 tbr, 90k tbn, 180k tbc
>>>> Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
>>>> Metadata:
>>>>   title   : Media Server
>>>> Duration: N/A, start: 0.10, bitrate: N/A
>>>>   Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 
>>>> tbr, 90k tbn, 180k tbc
>>> 
>>> I see that the two streams have different pixel formats yuvj420p and 
>>> yuv420p. You could try to bring them to the same pixel format before using 
>>> hstack.
>>> [0]format=yuv420p[a];[a][1]hstack
>>> 
>>> It's only a wild guess, I'm not sure.
>> 
>> Do I put this into the filter_complex argument, e.g. -filter_complex 
>> "[0]format=yuv420p[a];[a][1] hstack=inputs=2”
>> 
>> That still results in the "Conversion failed!” error.
> 
> There is a "scale" missing behind format iirc but for performance reasons you
> want to overwrite the pix_fmt instead, not sure if this is possible.

Some progress, here. I’m now attempting this on an M1-powered Mac mini:

./ffmpeg -thread_queue_size 1024 -hwaccel videotoolbox -i 
rtsp://anonymous:password@192.168.1.47:554 -hwaccel videotoolbox -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_videotoolbox -acodec 
copy -t 01:00:00 -filter_complex hstack=inputs=2 -f flv 
"rtmp://a.rtmp.youtube.com/live2/ hstack:input0
  Stream #1:0 (h264) -> hstack:input1
  hstack -> Stream #0:0 (h264_videotoolbox)
  Stream #1:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[rtsp @ 0x14a00ca00] max delay reached. need to consume packet
[rtsp @ 0x14a00ca00] RTP: missed 146 packets
[rtsp @ 0x139013a00] Thread message queue blocking; consider raising the 
thread_queue_size option (current value: 8)
[rtsp @ 0x14a00ca00] max delay reached. need to consume packet
[rtsp @ 0x14a00ca00] RTP: missed 80 packets
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/q9rg-sqaq-f0mg-yj1c-e42f':
  Metadata:
title   : Media Server
encoder : Lavf58.76.100
  Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), nv12(progressive), 
3840x1080, q=2-31, 200 kb/s, 1k tbn (default)
Metadata:
  encoder : Lavc58.134.100 h264_videotoolbox
  Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 8000 Hz, mono, fltp
[h264_videotoolbox @ 0x14b01b800] Color range not set for nv12. Using MPEG 
range.
[rtsp @ 0x139013a00] max delay reached. need to consume 
packettrate=6235.2kbits/s dup=0 drop=1 speed=0.794x
[rtsp @ 0x139013a00] RTP: missed 29 packets
[rtsp @ 0x139013a00] max delay reached. need to consume packet
[rtsp @ 0x139013a00] RTP: missed 78 packets
[rtsp @ 0x139013a00] max delay reached. need to consume packet
[rtsp @ 0x139013a00] RTP: missed 132 packets
[rtsp @ 0x139013a00] max delay reached. need to consume packet
[rtsp @ 0x139013a00] RTP: missed 49 packets
[rtsp @ 0x139013a00] max delay reached. need to consume packet
[rtsp @ 0x139013a00] RTP: missed 12 packets
[h264 @ 0x139074600] hardware accelerator failed to decode picture
[h264 @ 0x139074c00] hardware accelerator failed to decode picture
[h264 @ 0x139075200] hardware accelerator failed to decode picture
[h264 @ 0x139070a00] hardware accelerator failed to decode picture
[h264 @ 0x139072800] hardware accelerator failed to decode picture
[h264 @ 0x139072e00] hardware accelerator failed to decode picture
[h264 @ 0x139073400] hardware accelerator failed to decode picture
[h264 @ 0x139073a00] hardware accelerator failed to decode picture
Error while decoding stream #1:0: Unknown error occurred
[h264 @ 0x139074000] hardware accelerator failed to decode picture
Error while decoding stream #1:0: Unknown error occurred

After a few seconds those errors go away, and then the status line just reads:

frame=20249 fps= 55 q=-0.0 size=   67873kB time=00:06:12.99 
bitrate=1490.7kbits/s dup=0 drop=2 speed=   1x

I don’t know why the errors occur, nor why they resolve themselves.
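
(One hint in the log above, though: the "Thread message queue blocking ... 
(current value: 8)" warning. -thread_queue_size is a per-input option, and the 
command sets it only for the first -i, so a variant like this sets it for both 
inputs — a sketch, with the stream key elided:

./ffmpeg -thread_queue_size 1024 -hwaccel videotoolbox -i 
rtsp://anonymous:password@192.168.1.47:554 -thread_queue_size 1024 -hwaccel 
videotoolbox -i rtsp://anonymous:password@192.168.1.50:554 -vcodec 
h264_videotoolbox -acodec copy -t 01:00:00 -filter_complex hstack=inputs=2 -f 
flv "rtmp://a.rtmp.youtube.com/live2/...")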

Re: [FFmpeg-user] IP Camera Stream "Conversion failed!" from newer ffmpeg instance, but not from older?

2021-11-07 Thread Steven Kan
Scroll way down . . . .

> On Nov 7, 2021, at 11:01 AM, Leo Butler  wrote:
> 
> Steven Kan  writes:
> 
>> I normally stream my IP cameras to YouTube from my RPi. I just got my spiffy 
>> new (used) M1-based Mac mini, so I’m trying to move my streams over there so 
>> I can use the extra horsepower to stream 2-up from 2 cameras into one 
>> stream. 
>> 
>> But first I want to just duplicate what I’m doing on the RPi before I break 
>> it ;-)
>> 
>> I copied the exact same ffmpeg command:
>> 
>> ./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
>> "rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
>> 01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"
>> 
>> over from my RPi to my Mini, and it fails with "Conversion failed!”. 
>> Ordinarily I’d be inclined to update my software, but the version that works 
>> on my RPi is _older_ than the one on the M1 Mini that fails.
>> 
>> Do I need to add a missing library?
>> 
>> Thanks!
>> 
>> Working command from RPi:
>> 
>> ./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
>> "rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
>> 01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"
>> ffmpeg version N-89882-g4dbae00bac Copyright (c) 2000-2018 the FFmpeg 
>> developers
>>  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1) 20170516
>>  configuration: 
>>  libavutil  56.  7.100 / 56.  7.100
>>  libavcodec 58.  9.100 / 58.  9.100
>>  libavformat58.  5.101 / 58.  5.101
>>  libavdevice58.  0.101 / 58.  0.101
>>  libavfilter 7. 11.101 /  7. 11.101
>>  libswscale  5.  0.101 /  5.  0.101
>>  libswresample   3.  0.101 /  3.  0.101
>> Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
>>  Metadata:
>>title   : Media Server
>>  Duration: N/A, start: 0.10, bitrate: N/A
>>Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 
>> tbr, 90k tbn, 180k tbc
>>Stream #0:1: Audio: aac (LC), 8000 Hz, mono, fltp
>> Output #0, flv, to 
>> 'rtmp://a.rtmp.youtube.com/live2/':
>>  Metadata:
>>title   : Media Server
>>encoder : Lavf58.5.101
>>Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), 
>> yuv420p(progressive), 1920x1080, q=2-31, 100 tbr, 1k tbn, 90k tbc
>>Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 8000 Hz, mono, fltp
>> Stream mapping:
>>  Stream #0:0 -> #0:0 (copy)
>>  Stream #0:1 -> #0:1 (copy)
>> Press [q] to stop, [?] for help
>> [flv @ 0x1a0ac60] Timestamps are unset in a packet for stream 0. This is 
>> deprecated and will stop working in the future. Fix your code to set the 
>> timestamps properly
>> [flv @ 0x1a0ac60] Failed to update header with correct 
>> duration.ate=4149.8kbits/s speed=1.36x
>> [flv @ 0x1a0ac60] Failed to update header with correct filesize.
>> frame=  152 fps= 41 q=-1.0 Lsize=2546kB time=00:00:05.03 
>> bitrate=4140.5kbits/s speed=1.36x
>> video:2532kB audio:10kB subtitle:0kB other streams:0kB global headers:0kB 
>> muxing overhead: 0.158674%
>> 
>> Failing command from M1 mini:
>> 
>> ./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
>> "rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
>> 01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"
>> ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
>>  built with Apple clang version 12.0.0 (clang-1200.0.32.27)
>>  configuration: --prefix=/Volumes/tempdisk/sw 
>> --extra-cflags=-fno-stack-check --arch=arm64 --cc=/usr/bin/clang 
>> --enable-gpl --enable-videotoolbox --enable-libopenjpeg --enable-libopus 
>> --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libvpx 
>> --enable-libwebp --enable-libass --enable-libfreetype --enable-libtheora 
>> --enable-libvorbis --enable-libsnappy --enable-libaom --enable-libvidstab 
>> --enable-libzimg --enable-libsvtav1 --enable-version3 
>> --pkg-config-flags=--static --disable-ffplay --enable-postproc 
>> --enable-nonfree --enable-neon --enable-runtime-cpudetect 
>> --disable-indev=qtkit --disable-indev=x11grab_xcb
>>  libavutil  56. 70.100 / 56. 70.100
>>  libavcodec 58.134.100 / 58.134.100
>>  libavformat58. 76.100 / 58. 76.100
>>  libavdevice58. 13.100 / 58. 13.100
>>  libavfilter 7.110.100 /  7.110.100
>>  libswscale  5.  9.100 /  5.  9.100
>>  libswresample   3.  9.100 /  3. 

[FFmpeg-user] IP Camera Stream "Conversion failed!" from newer ffmpeg instance, but not from older?

2021-11-05 Thread Steven Kan
I normally stream my IP cameras to YouTube from my RPi. I just got my spiffy 
new (used) M1-based Mac mini, so I’m trying to move my streams over there so I 
can use the extra horsepower to stream 2-up from 2 cameras into one stream. 

But first I want to just duplicate what I’m doing on the RPi before I break it 
;-)

I copied the exact same ffmpeg command:

./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"

over from my RPi to my Mini, and it fails with "Conversion failed!”. Ordinarily 
I’d be inclined to update my software, but the version that works on my RPi is 
_older_ than the one on the M1 Mini that fails.

Do I need to add a missing library?

Thanks!

Working command from RPi:

./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"
ffmpeg version N-89882-g4dbae00bac Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1) 20170516
  configuration: 
  libavutil  56.  7.100 / 56.  7.100
  libavcodec 58.  9.100 / 58.  9.100
  libavformat58.  5.101 / 58.  5.101
  libavdevice58.  0.101 / 58.  0.101
  libavfilter 7. 11.101 /  7. 11.101
  libswscale  5.  0.101 /  5.  0.101
  libswresample   3.  0.101 /  3.  0.101
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 tbr, 
90k tbn, 180k tbc
Stream #0:1: Audio: aac (LC), 8000 Hz, mono, fltp
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/':
  Metadata:
title   : Media Server
encoder : Lavf58.5.101
Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), 
yuv420p(progressive), 1920x1080, q=2-31, 100 tbr, 1k tbn, 90k tbc
Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 8000 Hz, mono, fltp
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[flv @ 0x1a0ac60] Timestamps are unset in a packet for stream 0. This is 
deprecated and will stop working in the future. Fix your code to set the 
timestamps properly
[flv @ 0x1a0ac60] Failed to update header with correct 
duration.ate=4149.8kbits/s speed=1.36x
[flv @ 0x1a0ac60] Failed to update header with correct filesize.
frame=  152 fps= 41 q=-1.0 Lsize=2546kB time=00:00:05.03 
bitrate=4140.5kbits/s speed=1.36x
video:2532kB audio:10kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: 0.158674%

Failing command from M1 mini:

./ffmpeg -thread_queue_size 1024 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.50:554" -vcodec copy -acodec copy -t 
01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/"
ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 12.0.0 (clang-1200.0.32.27)
  configuration: --prefix=/Volumes/tempdisk/sw --extra-cflags=-fno-stack-check 
--arch=arm64 --cc=/usr/bin/clang --enable-gpl --enable-videotoolbox 
--enable-libopenjpeg --enable-libopus --enable-libmp3lame --enable-libx264 
--enable-libx265 --enable-libvpx --enable-libwebp --enable-libass 
--enable-libfreetype --enable-libtheora --enable-libvorbis --enable-libsnappy 
--enable-libaom --enable-libvidstab --enable-libzimg --enable-libsvtav1 
--enable-version3 --pkg-config-flags=--static --disable-ffplay 
--enable-postproc --enable-nonfree --enable-neon --enable-runtime-cpudetect 
--disable-indev=qtkit --disable-indev=x11grab_xcb
  libavutil  56. 70.100 / 56. 70.100
  libavcodec 58.134.100 / 58.134.100
  libavformat58. 76.100 / 58. 76.100
  libavdevice58. 13.100 / 58. 13.100
  libavfilter 7.110.100 /  7.110.100
  libswscale  5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc55.  9.100 / 55.  9.100
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.07, bitrate: N/A
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 tbr, 
90k tbn, 180k tbc
  Stream #0:1: Audio: aac (LC), 8000 Hz, mono, fltp
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/':
  Metadata:
title   : Media Server
encoder : Lavf58.76.100
  Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), 
yuv420p(progressive), 1920x1080, q=2-31, 100 tbr, 1k tbn, 90k tbc
  Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 8000 Hz, mono, fltp
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[flv @ 0x13187d800] Timestamps are unset in a packet for stream 0. This is 
deprecated and will stop working in the future. Fix your code to set the 
timestamps properly
[flv @ 

Re: [FFmpeg-user] FFmpeg on Apple Silicon (Preliminary Results)

2021-10-29 Thread Steven Kan
vendor_id   : [0][0][0][0]
File '/Users/steven/Desktop/StairClimb20x20x.mp4' already exists. Overwrite? 
[y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> hevc (hevc_videotoolbox))
Press [q] to stop, [?] for help
[h264 @ 0x11d008800] cabac decode of qscale diff failed at 159 89
[h264 @ 0x11d008800] error while decoding MB 159 89, bytestream -2
[h264 @ 0x11d008800] concealing 50 DC, 50 AC, 50 MV errors in P frame
Output #0, mp4, to '/Users/steven/Desktop/StairClimb20x20x.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.76.100
  Stream #0:0(und): Video: hevc (hev1 / 0x31766568), yuv420p(progressive), 
2560x1440, q=2-31, 200 kb/s, 14.58 fps, 11200 tbn (default)
Metadata:
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
  encoder : Lavc58.134.100 hevc_videotoolbox
[hevc_videotoolbox @ 0x12d81d400] Color range not set for yuv420p. Using MPEG 
range.
/Users/steven/Desktop/StairClimb20x.mp4: corrupt decoded frame in stream 0
[h264 @ 0x11d008200] cabac decode of qscale diff failed at 155 89
[h264 @ 0x11d008200] error while decoding MB 155 89, bytestream -1
[h264 @ 0x11d008200] concealing 54 DC, 54 AC, 54 MV errors in P frame
/Users/steven/Desktop/StairClimb20x.mp4: corrupt decoded frame in stream 0
[h264 @ 0x11d008200] error while decoding MB 156 89, bytestream -6
[h264 @ 0x11d008200] concealing 53 DC, 53 AC, 53 MV errors in P frame
[h264 @ 0x11d008800] error while decoding MB 159 89, bytestream -5
/Users/steven/Desktop/StairClimb20x.mp4: corrupt decoded frame in stream 0

frame= 2734 fps=108 q=-0.0 Lsize=   46337kB time=00:03:07.26 
bitrate=2027.0kbits/s dup=0 drop=2879 speed=7.42x
video:46303kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: 0.073762%



> On Oct 25, 2021, at 9:14 PM, Steven Kan  wrote:
> 
> Hi Aleksid,
> 
> Any new news re: ffmpeg on Apple Silicon?
> 
> For everyone else, does ffmpeg en/decode performance generally scale with CPU 
> cores, GPU cores, both or neither? Or does it use the “media engine” 
> block(s)? 
> 
> Is there any way to tell which parts of the chip are being used? Or is this 
> all hidden/obfuscated by the video_toolbox API?
> 
> I’m very curious to see how the M1 Max would run certain types of tasks. 
> 
>> On Aug 7, 2020, at 1:02 PM, Aleksid  wrote:
>> 
>> Hi,
>> 
>> Just for information. Today I successfully compiled FFmpeg 4.3.1 for Apple
>> Silicon (macOS 11 Beta Big Sur) on Apple Developer Transition Kit.
>> 
>> Also I was able to include FFmpeg shared libraries for my test app.
>> 
>> I used basic configure options to make sure that FFmpeg works on arm64:
>> 
>> ./configure --prefix=/usr/local/Cellar/ffmpeg/4.3.1 --enable-shared
>> --extra-cflags="-fno-stack-check" --enable-gpl  --enable-version3
>> --enable-hardcoded-tables --enable-pthreads --enable-nonfree
>> 
>> I discovered only one issue (probably my mistake). dylibs work only when I
>> put them to /usr/local/Cellar/ffmpeg/4.3.1/lib folder
>> If I put dylibs into my APP bundle /Contents/Frameworks these dylibs fail
>> to load, despite the fact that I load dylib using absolute file path from
>> my folder.
>> ___
>> ffmpeg-user mailing list
>> ffmpeg-user@ffmpeg.org
>> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
>> 
>> To unsubscribe, visit link above, or email
>> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
> 
> ___
> ffmpeg-user mailing list
> ffmpeg-user@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
> 
> To unsubscribe, visit link above, or email
> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-user] Multiple input streams, one continuous output stream?

2021-10-27 Thread Steven Kan
Hi! I currently stream my IP cameras to YouTube with a command of the form:

ffmpeg -i "rtsp://anonymous:password@192.168.1.11:554" -vcodec copy -acodec 
copy -t 01:00:00 -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key”

It works well, for a single camera.

What options would I have if I want to cycle through N cameras, each with its 
own RTSP URI, switching cameras every M seconds? Can this be done from an 
ffmpeg command? Or is this where ffserver gets involved?

Thanks!
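
PS: One idea I may try is a plain shell loop that restarts ffmpeg per camera 
(untested sketch; the camera URLs and the stream key are placeholders, and each 
switch tears down the RTMP connection for a moment, so whether YouTube keeps the 
broadcast alive across reconnects is something to test; a truly gapless switch 
would need re-encoding or an external switcher):

#!/bin/sh
# Untested: cycle through the cameras, M seconds each, same stream key.
CAMS="rtsp://anonymous:password@192.168.1.11:554 rtsp://anonymous:password@192.168.1.12:554"
M=60
while true; do
  for cam in $CAMS; do
    ffmpeg -rtsp_transport tcp -i "$cam" -vcodec copy -acodec copy \
      -t "$M" -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"
  done
done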
-- 
Steven "Rocket Man" Kan  #```
mailto:ste...@kan.org#  ```
http://www.kan.org   #```
aim://steven...@me.com   #  ```
 #```
~   ~  . \_@_/  ```
^_@ o. V  ```
  `-'  - \_@_   ~  .   ##
   V \   ~   . ##
~  .   #H2O##
~. #POLO#
Blood, sweat, and chlorine  ~~ ##



Re: [FFmpeg-user] FFmpeg on Apple Silicon (Success)

2021-10-25 Thread Steven Kan
Hi Aleksid,

Any new news re: ffmpeg on Apple Silicon?

For everyone else, does ffmpeg en/decode performance generally scale with CPU 
cores, GPU cores, both or neither? Or does it use the “media engine” block(s)? 

Is there any way to tell which parts of the chip are being used? Or is this all 
hidden/obfuscated by the video_toolbox API?

I’m very curious to see how the M1 Max would run certain types of tasks. 
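
In the meantime, a couple of quick checks (the ffmpeg flags are standard; the 
powermetrics sampler name is my assumption and may vary by macOS version):

ffmpeg -hide_banner -hwaccels
ffmpeg -hide_banner -encoders | grep videotoolbox

# macOS, needs sudo; samples GPU activity while an encode is running
sudo powermetrics --samplers gpu_power -i 1000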

> On Aug 7, 2020, at 1:02 PM, Aleksid  wrote:
> 
> Hi,
> 
> Just for information. Today I successfully compiled FFmpeg 4.3.1 for Apple
> Silicon (macOS 11 Beta Big Sur) on Apple Developer Transition Kit.
> 
> Also I was able to include FFmpeg shared libraries for my test app.
> 
> I used basic configure options to make sure that FFmpeg works on arm64:
> 
> ./configure --prefix=/usr/local/Cellar/ffmpeg/4.3.1 --enable-shared
> --extra-cflags="-fno-stack-check" --enable-gpl  --enable-version3
> --enable-hardcoded-tables --enable-pthreads --enable-nonfree
> 
> I discovered only one issue (probably my mistake). dylibs work only when I
> put them to /usr/local/Cellar/ffmpeg/4.3.1/lib folder
> If I put dylibs into my APP bundle /Contents/Frameworks these dylibs fail
> to load, despite the fact that I load dylib using absolute file path from
> my folder.



Re: [FFmpeg-user] MP4-->JPG changes brightness?

2021-05-15 Thread Steven Kan
> On May 15, 2021, at 11:06 AM, Steven Kan  wrote:
> 
>> On May 15, 2021, at 8:53 AM, Reino Wijnsma  wrote:
>> 
>> On 2021-05-15T08:15:16+0200, Steven Kan  wrote:
>>> Is there a flag I need to set in ffmpeg to get a jpg that looks like what 
>>> Preview produces? Is this error related to my issue?
>> 
>> Your 'TrailDownStillffmpeg.jpg' looks exactly like what I see when I open 
>> your 'TrailDown.mp4', so if you ask me, there's no "error" at all.
>> 
>> -- 
>> Reino
> 
> Hi Reino,
> 
> Thanks for checking. They do look very, very similar, but if you put the two 
> windows on top of each other and toggle the windows back and forth, you can 
> see a very slight difference in the brightness. It’s not that obvious if you 
> look at them side by side, but if you sequence them in a video editor, there 
> is a noticeable visual discontinuity, which is what I’m trying to avoid. 

Here’s an animated GIF showing the difference as I toggle back and forth:

https://www.kan.org/pictures/StillsFromFFmpegVsQTPreview.gif



Re: [FFmpeg-user] MP4-->JPG changes brightness?

2021-05-15 Thread Steven Kan
> On May 15, 2021, at 8:53 AM, Reino Wijnsma  wrote:
> 
> On 2021-05-15T08:15:16+0200, Steven Kan  wrote:
>> Is there a flag I need to set in ffmpeg to get a jpg that looks like what 
>> Preview produces? Is this error related to my issue?
> 
> Your 'TrailDownStillffmpeg.jpg' looks exactly like what I see when I open 
> your 'TrailDown.mp4', so if you ask me, there's no "error" at all.
> 
> -- 
> Reino

Hi Reino,

Thanks for checking. They do look very, very similar, but if you put the two 
windows on top of each other and toggle the windows back and forth, you can see 
a very slight difference in the brightness. It’s not that obvious if you look 
at them side by side, but if you sequence them in a video editor, there is a 
noticeable visual discontinuity, which is what I’m trying to avoid. 



[FFmpeg-user] MP4-->JPG changes brightness?

2021-05-15 Thread Steven Kan
I have 2 trail cameras pointing up and down my backyard trail, and they 
frequently capture awesome footage like this:

https://www.youtube.com/watch?v=TYQj1fcSQJE 


You can actually hear her bonk him on the head!!

Assembling these clips in Da Vinci Resolve allows me to visually align the 
clips exactly, but before I do that I need to generate still images from the 
first frames of TrailUp.mp4 and TrailDown.mp4, e.g. TrailUpStill.jpg and 
TrailDownStill.jpg, to fill in the timeline gap when one video needs to start 
earlier than the other. See the above video where the "downhill" timestamp 
doesn't start changing until a few seconds in.

I’d been creating these stills on my Mac by opening the MP4s in QuickTime 
Player, copying the first frame, pasting into a new document in Preview, and 
saving as a JPG, which is all tedious manual labor.

I thought maybe I could semi-automate this by doing this in ffmpeg:

ffmpeg -i TrailDown.mp4 -vframes 1 TrailDownStill.jpg

This works, but the resulting jpg looks slightly different than the jpg 
produced by Preview. The brightness is different. I also took a screen capture 
of QuickTime Player and exported that as a jpg, and it looks almost exactly like 
the copy/paste into Preview, and slightly unlike the one produced by ffmpeg. 

When I sequence these files into Resolve, the jpg produced via Preview looks 
exactly like I’ve paused the MP4 (which is the effect I want), whereas the jpg 
produced by ffmpeg has a slightly different brightness than the video, so 
there’s a discontinuity when it’s sequenced:

https://www.kan.org/pictures/TrailDownStillQTPreview.jpg
https://www.kan.org/pictures/TrailDown.mp4
https://www.kan.org/pictures/TrailDownStillffmpeg.jpg
https://www.kan.org/pictures/TrailDownStillQTScreenCap.jpg


Is there a flag I need to set in ffmpeg to get a jpg that looks like what 
Preview produces? Is this error related to my issue? 

"[swscaler @ 0x112d97000] deprecated pixel format used, make sure you did set 
range correctly”
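
One thing I plan to try, on the guess that the shift is a TV-vs-full-range 
mismatch rather than anything in the JPEG encoder itself (untested):

ffmpeg -i TrailDown.mp4 -vframes 1 -vf scale=in_range=tv:out_range=pc TrailDownStill.jpg

If that guess is wrong, swapping the two range values would be the next experiment.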

console output follows. Thanks!

ffmpeg -i TrailDown.mp4 -vframes 1 TrailDownStill.jpg
ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'TrailDown.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-05-11T17:08:07.00Z
encoder : Lavf58.45.100
  Duration: 00:00:08.06, start: 0.00, bitrate: 6072 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6140 kb/s, 20 fps, 20 tbr, 16k tbn, 32k tbc (default)
Metadata:
  creation_time   : 2021-05-11T17:08:07.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 
15 kb/s (default)
Metadata:
  creation_time   : 2021-05-11T17:08:07.00Z
  handler_name: SoundHandler
  vendor_id   : [0][0][0][0]
File 'TrailDownStill.jpg' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x112d97000] deprecated pixel format used, make sure you did set 
range correctly
Output #0, image2, to 'TrailDownStill.jpg':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.65.100

Re: [FFmpeg-user] hstack with one video offset in time (and keep audio synced)?

2021-03-18 Thread Steven Kan
> On Mar 17, 2021, at 11:57 AM, Michael Koch  
> wrote:
> 
> Am 17.03.2021 um 19:31 schrieb Steven Kan:
>>> On Mar 6, 2021, at 11:22 AM, Steven Kan  wrote:
>>> 
>>>> On Mar 5, 2021, at 2:00 PM, Michael Koch  
>>>> wrote:
>>>> 
>>>> Am 05.03.2021 um 20:33 schrieb Steven Kan:
>>>>>>>> I’d like to assemble these videos, side-by-side, but synced in time, 
>>>>>>>> which
>>>>>>>> means the TrailDown video needs to start 50 seconds after the TrailUp
>>>>>>>> video.
>>>> try this command line:
>>>> 
>>>> ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex 
>>>> "[0]tpad=start_duration=50[a];[a][1]hstack” out.mp4
>>> Thank you! That worked perfectly, and now that I understand the syntax, 
>>> this works as well (to pad the end of the second track by 15 sec):
>>> 
>>> ffmpeg -i Input1.mp4 -i Input2 -filter_complex 
>>> "[0]tpad=start_duration=50[a];[1]tpad=stop_duration=15[b];[a][b]hstack” 
>>> Out.mp4
>> More on this!
>> 
>> tpad=start_duration works to delay the start of one video, but now I need to 
>> fix the audio sync. For example this command:
>> 
>> ffmpeg -i TrailDown.mp4 -i TrailUp.mp4 -filter_complex 
>> "[0]tpad=start_duration=2.5[a];[a][1]hstack" -vcodec libx264 Coyote2Up.mp4
>> 
>> results in this video:
>> 
>> https://www.youtube.com/watch?v=_PDPONEU3YA#t=35s
>> 
>> Only the left half (TrailDown) camera has a microphone, and it’s apparent 
>> that the audio sync is off by the same 2.5 seconds that I’ve delayed its 
>> video. You can hear the female coyote (with the stumpy tail) scratching the 
>> ground 2.5 seconds before she actually does it.
>> 
>> What flag should I add to also delay its audio? Thanks!
> 
> Add the "adelay" filter to the filter chain:
> adelay=2500

Fixed!!!

https://youtu.be/89HAnwZuO9g

Thank you,
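
PS: For anyone finding this thread later, the whole thing in one pass should look 
something like this (my untested reconstruction; note that tpad takes seconds 
while adelay takes milliseconds, and the filter outputs are mapped explicitly):

ffmpeg -i TrailDown.mp4 -i TrailUp.mp4 -filter_complex 
"[0:v]tpad=start_duration=2.5[a];[a][1:v]hstack[v];[0:a]adelay=2500[snd]" 
-map "[v]" -map "[snd]" -vcodec libx264 Coyote2Up.mp4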

Re: [FFmpeg-user] hstack with one video offset in time (and keep audio synced)?

2021-03-17 Thread Steven Kan
> On Mar 6, 2021, at 11:22 AM, Steven Kan  wrote:
> 
>> On Mar 5, 2021, at 2:00 PM, Michael Koch  wrote:
>> 
>> Am 05.03.2021 um 20:33 schrieb Steven Kan:
>>>>>> I’d like to assemble these videos, side-by-side, but synced in time, 
>>>>>> which
>>>>>> means the TrailDown video needs to start 50 seconds after the TrailUp
>>>>>> video. 
>> 
>> try this command line:
>> 
>> ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex 
>> "[0]tpad=start_duration=50[a];[a][1]hstack” out.mp4
> 
> Thank you! That worked perfectly, and now that I understand the syntax, this 
> works as well (to pad the end of the second track by 15 sec):
> 
> ffmpeg -i Input1.mp4 -i Input2 -filter_complex 
> "[0]tpad=start_duration=50[a];[1]tpad=stop_duration=15[b];[a][b]hstack” 
> Out.mp4

More on this! 

tpad=start_duration works to delay the start of one video, but now I need to 
fix the audio sync. For example this command:

ffmpeg -i TrailDown.mp4 -i TrailUp.mp4 -filter_complex 
"[0]tpad=start_duration=2.5[a];[a][1]hstack" -vcodec libx264 Coyote2Up.mp4

results in this video:

https://www.youtube.com/watch?v=_PDPONEU3YA#t=35s

Only the left half (TrailDown) camera has a microphone, and it’s apparent that 
the audio sync is off by the same 2.5 seconds that I’ve delayed its video. You 
can hear the female coyote (with the stumpy tail) scratching the ground 2.5 
seconds before she actually does it.

What flag should I add to also delay its audio? Thanks!


ffmpeg -i /Users/steven/Downloads/Record/DownLoad/TrailDown.mp4 -i 
/Users/steven/Downloads/Record/DownLoad/TrailUp.mp4 -filter_complex 
"[0]tpad=start_duration=2.5[a];[a][1]hstack" -vcodec libx264 Coyote2Up.mp4
ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailDown.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-17T17:23:18.00Z
  Duration: 00:00:46.15, start: 0.00, bitrate: 6339 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6321 kb/s, 20 fps, 20 tbr, 1k tbn, 2k tbc (default)
Metadata:
  creation_time   : 2021-03-17T17:23:18.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 
15 kb/s (default)
Metadata:
  creation_time   : 2021-03-17T17:23:18.00Z
  handler_name: SoundHandler
  vendor_id   : [0][0][0][0]
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailUp.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-17T17:22:48.00Z
encoder : Lavf58.45.100
  Duration: 00:00:50.20, start: 0.00, bitrate: 6292 kb/s
Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6291 kb/s, 20 fps, 20 tbr, 16k tbn, 32k tbc (default)
Metadata:
  creation_time   : 2021-03-17T17:22:48.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
File 'Coyote2Up.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 (h264) -> tpad (graph 0)
  Stream #1:0 (h264) -> hstack:input1 (graph 0)
  hstack (graph 0) -> Stream #0:0 (libx264)
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0x7fe8b501b600] Too many bits 8832.

Re: [FFmpeg-user] hstack with one video offset in time?

2021-03-06 Thread Steven Kan
> On Mar 5, 2021, at 2:00 PM, Michael Koch  wrote:
> 
> Am 05.03.2021 um 20:33 schrieb Steven Kan:
>>>>> I’d like to assemble these videos, side-by-side, but synced in time, which
>>>>> means the TrailDown video needs to start 50 seconds after the TrailUp
>>>>> video. The TrailDown side can be black/blank, or it can be stuck on the
>>>>> first frame of its video while the right side plays for the first 50
>>>>> seconds; it doesn’t matter to me.
>>>>> 
>>>>> I’ve tried all of the following:
>>>>> 
>>>>> -itsoffset 50 -i TrailDown.mp4 -i TrailUp.mp4
>>>>> -itsoffset 50 -i TrailDown.mp4 -itsoffset 0 -i TrailUp.mp4
>>>>> -i TrailDown.mp4 -itsoffset -50 -i TrailUp.mp4
>>>>> 
>>>> see tpad filter, need recent version.
>>> Thanks! I have figured out the syntax to pad a single video with tpad, e.g.:
>>> 
>>> ffmpeg -i 
>>> /Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
>>>  -filter_complex "tpad=start_duration=50" tPadOut.mp4
>>> 
>>> but I’m having trouble with the syntax to delay only one of two videos in 
>>> an hstack filter:
>>> 
>>> ffmpeg -i 
>>> /Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
>>>  -i 
>>> /Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4
>>>  -filter_complex "tpad=start_duration=50[v0];hstack=inputs=2” Coyote2Up.mp4
> 
> try this command line:
> 
> ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex 
> "[0]tpad=start_duration=50[a];[a][1]hstack” out.mp4

Thank you! That worked perfectly, and now that I understand the syntax, this 
works as well (to pad the end of the second track by 15 sec):

ffmpeg -i Input1.mp4 -i Input2 -filter_complex 
"[0]tpad=start_duration=50[a];[1]tpad=stop_duration=15[b];[a][b]hstack" Out.mp4


Re: [FFmpeg-user] hstack with one video offset in time?

2021-03-05 Thread Steven Kan

>>> 
>>> I’d like to assemble these videos, side-by-side, but synced in time, which
>>> means the TrailDown video needs to start 50 seconds after the TrailUp
>>> video. The TrailDown side can be black/blank, or it can be stuck on the
>>> first frame of its video while the right side plays for the first 50
>>> seconds; it doesn’t matter to me.
>>> 
>>> I’ve tried all of the following:
>>> 
>>> -itsoffset 50 -i TrailDown.mp4 -i TrailUp.mp4
>>> -itsoffset 50 -i TrailDown.mp4 -itsoffset 0 -i TrailUp.mp4
>>> -i TrailDown.mp4 -itsoffset -50 -i TrailUp.mp4
>>> 
>> 
>> see tpad filter, need recent version.
> 
> Thanks! I have figured out the syntax to pad a single video with tpad, e.g.:
> 
> ffmpeg -i 
> /Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
>  -filter_complex "tpad=start_duration=50" tPadOut.mp4
> 
> but I’m having trouble with the syntax to delay only one of two videos in an 
> hstack filter:
> 
> ffmpeg -i 
> /Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
>  -i 
> /Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4
>  -filter_complex "tpad=start_duration=50[v0];hstack=inputs=2” Coyote2Up.mp4


I brute-forced it by using tpad to pad the first video by itself, and then I 
used hstack to glue them together. Not the most efficient way of doing things, 
and I think that induces 2 generations of transcoding, so ideally I’d like to 
know how to do it properly in the future. But here’s the end result:

https://www.youtube.com/watch?v=ByUHKRtA6Zo=PLm4gRKwvTste1lnrHLIHWMZ1WpJSA1vb9=2


Re: [FFmpeg-user] hstack with one video offset in time?

2021-03-04 Thread Steven Kan
> On Mar 4, 2021, at 1:57 PM, Paul B Mahol  wrote:
> 
> On Thu, Mar 4, 2021 at 9:36 PM Steven Kan  wrote:
> 
>> I have captured some nice footage of 3 coyotes traipsing through my yard,
>> from two IP cameras facing opposite directions. Recording was initiated by
>> in-camera motion triggers, so the recordings start about 50 seconds apart,
>> as you can tell from the burned-in timestamps at the start of each video at
>> 01:10:17 and 01:11:07, respectively:
>> 
>> https://www.youtube.com/watch?v=lP5Kpg_vTEE
>> https://www.youtube.com/watch?v=jvXoUhKuC5c
>> 
>> I’d like to assemble these videos, side-by-side, but synced in time, which
>> means the TrailDown video needs to start 50 seconds after the TrailUp
>> video. The TrailDown side can be black/blank, or it can be stuck on the
>> first frame of its video while the right side plays for the first 50
>> seconds; it doesn’t matter to me.
>> 
>> I’ve tried all of the following:
>> 
>> -itsoffset 50 -i TrailDown.mp4 -i TrailUp.mp4
>> -itsoffset 50 -i TrailDown.mp4 -itsoffset 0 -i TrailUp.mp4
>> -i TrailDown.mp4 -itsoffset -50 -i TrailUp.mp4
>> 
> 
> see tpad filter, need recent version.

Thanks! I have figured out the syntax to pad a single video with tpad, e.g.:

ffmpeg -i 
/Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
 -filter_complex "tpad=start_duration=50" tPadOut.mp4

but I’m having trouble with the syntax to delay only one of two videos in an 
hstack filter:

ffmpeg -i 
/Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
 -i 
/Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4
 -filter_complex "tpad=start_duration=50[v0];hstack=inputs=2" Coyote2Up.mp4
ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-04T18:50:57.00Z
  Duration: 00:01:09.05, start: 0.00, bitrate: 6353 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6351 kb/s, 20 fps, 20 tbr, 1k tbn, 2k tbc (default)
Metadata:
  creation_time   : 2021-03-04T18:50:57.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-04T18:52:03.00Z
  Duration: 00:01:43.90, start: 0.00, bitrate: 6319 kb/s
Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6317 kb/s, 20 fps, 20 tbr, 1k tbn, 2k tbc (default)
Metadata:
  creation_time   : 2021-03-04T18:52:03.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Cannot find a matching stream for unlabeled input pad 1 on filter 
Parsed_hstack_1


[FFmpeg-user] hstack with one video offset in time?

2021-03-04 Thread Steven Kan
I have captured some nice footage of 3 coyotes traipsing through my yard, from 
two IP cameras facing opposite directions. Recording was initiated by in-camera 
motion triggers, so the recordings start about 50 seconds apart, as you can 
tell from the burned-in timestamps at the start of each video at 01:10:17 and 
01:11:07, respectively:

https://www.youtube.com/watch?v=lP5Kpg_vTEE
https://www.youtube.com/watch?v=jvXoUhKuC5c

I’d like to assemble these videos, side-by-side, but synced in time, which 
means the TrailDown video needs to start 50 seconds after the TrailUp video. 
The TrailDown side can be black/blank, or it can be stuck on the first frame of 
its video while the right side plays for the first 50 seconds; it doesn’t 
matter to me.

I’ve tried all of the following:

-itsoffset 50 -i TrailDown.mp4 -i TrailUp.mp4
-itsoffset 50 -i TrailDown.mp4 -itsoffset 0 -i TrailUp.mp4
-i TrailDown.mp4 -itsoffset -50 -i TrailUp.mp4

and they all result in both halves of the video doing nothing until they both 
start together, at a burned-in timestamp of 01:11:07. So I apparently have the 
syntax slightly wrong, or I’m using the wrong flag. Thanks!

ffmpeg -i 
/Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4
 -itsoffset -50 -i 
/Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4
 -filter_complex hstack=inputs=2 Coyote2Up.mp4
ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailDown_ch1_20210304010952_20210304011838.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-04T18:50:57.00Z
  Duration: 00:01:09.05, start: 0.00, bitrate: 6353 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6351 kb/s, 20 fps, 20 tbr, 1k tbn, 2k tbc (default)
Metadata:
  creation_time   : 2021-03-04T18:50:57.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Downloads/Record/DownLoad/TrailUp_ch1_20210304010838_20210304011506.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
creation_time   : 2021-03-04T18:52:03.00Z
  Duration: 00:01:43.90, start: 0.00, bitrate: 6319 kb/s
Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 
2592x1944, 6317 kb/s, 20 fps, 20 tbr, 1k tbn, 2k tbc (default)
Metadata:
  creation_time   : 2021-03-04T18:52:03.00Z
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
File 'Coyote2Up.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 (h264) -> hstack:input0
  Stream #1:0 (h264) -> hstack:input1
  hstack -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[libx264 @ 0x7fcf14806400] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcf14806400] profile High, level 6.0, 4:2:0, 8-bit
[libx264 @ 0x7fcf14806400] 264 - core 161 r3027 4121277 - H.264/MPEG-4 AVC 
codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: 
cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 
psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 
deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 
sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 
constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 

Re: [FFmpeg-user] Muxing Video from Cam1 with Audio from Cam2?

2021-02-20 Thread Steven Kan
Thank you Carl and DEF! Works perfectly!

I’m using out.mp4 just for syntax testing. In the actual application I’ll be 
outputting to:

-f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key”

But it’s good to know about container issues!
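
For the archive: with current builds the same mapping can be written with colon 
stream specifiers, something like this sketch (CAM1/CAM2 stand for the RTSP URLs 
from my original post; -map 0:v takes only the video of input 0, -map 1:a only 
the audio of input 1, and -c copy avoids transcoding either):

ffmpeg -rtsp_transport tcp -i "rtsp://CAM1-WITH-VIDEO" -rtsp_transport tcp 
-i "rtsp://CAM2-WITH-AUDIO" -map 0:v -map 1:a -c copy -t 00:00:10 out.mp4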

> On Feb 19, 2021, at 7:14 PM, Carl Zwanzig  wrote:
> 
> On 2/19/2021 6:23 PM, Steven Kan wrote:
>> What’s the correct syntax to mux the video from new Cam1 with the audio
>> from older Cam2, while completely discarding the unwanted video from
>> Cam2? Preferably without transcoding anything, since I want to minimize
>> CPU usage. 
> 
> It's all about the mapping (and this is answered at 
> https://stackoverflow.com/questions/12938581/ffmpeg-mux-video-and-audio-from-another-video-mapping-issue/12943003)
> 
> but the command will look something like
> 
> ffmpeg -i VIDEOSOURCE -i AUDIOSOURCE -map 0.0 -map 1.1 -t 00:00:10 out.mp4
> (assume that the video is stream 0 in both and audio is stream 1)
> 
> Also look up the options -shortest and -apad to see if they may apply 
> (https://ffmpeg.org/documentation.html).
> 
> I'd consider whether .mkv is better output container than .mp4, which can 
> become corrupt if the file isn't closed properly.
> 
> Later,
> 
> z!


[FFmpeg-user] Muxing Video from Cam1 with Audio from Cam2?

2021-02-19 Thread Steven Kan
After buying 6 new IP cameras recently, I have suddenly discovered that none of 
them has a microphone :facepalm:

I can’t return them because I’ve done some surgery on them to bypass their 12 V 
power input diodes, but now I want to stream video from them with audio. I do 
have a crummy older IP camera laying around that does have audio, but the video 
quality and features are not what I want.

What’s the correct syntax to mux the video from new Cam1 with the audio from 
older Cam2, while completely discarding the unwanted video from Cam2? 
Preferably without transcoding anything, since I want to minimize CPU usage. I 
ran the following command, but it just stalls at time=00:00:09.58 and never 
returns, and the resulting out.mp4 is an invalid movie with a size of 44 bytes. 
How do I apply “-vn” to Input #1 only? Thanks!

ffmpeg -re -thread_queue_size 1024 -rtsp_transport tcp -i 
"rtsp://anonymous:password1@192.168.1.13:554/cam/realmonitor?channel=1=0"
 -i "rtsp://anonymous:password@192.168.1.39:554" -vn -acodec copy -t 00:00:10 
out.mp4

ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, rtsp, from 
'rtsp://anonymous:password1@192.168.1.13:554/cam/realmonitor?channel=1=0':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 tbr, 
90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.39:554':
  Metadata:
title   : Session streamed by "preview"
comment : 
  Duration: N/A, start: 0.00, bitrate: N/A
Stream #1:0: Video: h264 (High), yuv420p(progressive), 2560x1440, 29.97 
tbr, 90k tbn, 180k tbc
Stream #1:1: Audio: aac (LC), 16000 Hz, mono, fltp
Output #0, mp4, to 'out.mp4':
  Metadata:
title   : Media Server
encoder : Lavf58.65.100
Stream #0:0: Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp
Stream mapping:
  Stream #1:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
[rtsp @ 0x7fa887008000] Thread message queue blocking; consider raising the 
thread_queue_size option (current value: 8)
[rtsp @ 0x7fa887008000] max delay reached. need to consume packet
[rtsp @ 0x7fa887008000] RTP: missed 1 packets
size=   0kB time=00:00:09.58 bitrate=   0.0kbits/s speed=1.11x


[FFmpeg-user] Script to time all encoders on a given machine?

2021-02-07 Thread Steven Kan
I know that ffmpeg -encoders will return all the encoders compiled into a given 
build, but has anyone written a script to actually test each of them and time 
the results?

Something like:

for each $thisencoder in “ffmpeg -encoders”
time ffmpeg -i testfile.mp4 -vcodec $thisencoder out.mp4 (or whatever 
extension is suitable) > results.txt
next $encoder

with error checking, etc. 
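
Expanded into something runnable, it might look like this (untested bash sketch; 
the awk guard skips the legend line " V..... = Video" in the -encoders output, 
and .mkv is used because not every codec is allowed in .mp4):

#!/bin/bash
for enc in $(ffmpeg -hide_banner -encoders | awk '/^ V/ && $2 != "=" {print $2}'); do
  echo "=== $enc ===" >> results.txt
  { time ffmpeg -hide_banner -y -i testfile.mp4 -t 5 -c:v "$enc" "out_$enc.mkv"; } \
    2>> results.txt || echo "$enc failed" >> results.txt
done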

Or would this not be useful because y’all already know what works on your 
machine?

Re: [FFmpeg-user] FFmpeg on Apple Silicon (Success)

2021-02-06 Thread Steven Kan
> On Aug 7, 2020, at 1:02 PM, Aleksid  wrote:
> 
> Hi,
> 
> Just for information. Today I successfully compiled FFmpeg 4.3.1 for Apple
> Silicon (macOS 11 Beta Big Sur) on Apple Developer Transition Kit.

Hi!

Have you moved up to an M1-based Mac? How is the performance of ffmpeg on 
released Apple Silicon?

I am using ffmpeg to pull RTSP streams from my IP cameras and push them to my 
YouTube channels. Previously my commands were of the form:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.11:554" -vcodec copy -acodec copy -t 
01:00:00 -f flv "rtmp://a.rtmp.youtube.com/my-youtube-streaming-key”

and this took almost no CPU because there’s no transcoding. I can run 3 
instances of ffmpeg on Raspberry Pi 3 without using more than 50% CPU, total.

Now I want to stack 2 streams horizontally using something like:

./ffmpeg -re -thread_queue_size 1024 -i 
rtsp://anonymous:password@192.168.1.47:554 -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec <encoder> -acodec 
copy -t 01:00:00 -filter_complex "nullsrc=size=3840x1080 [base]; [0:v] 
setpts=PTS-STARTPTS, scale=1920x1080 [upperleft]; [1:v] setpts=PTS-STARTPTS, 
scale=1920x1080 [upperright]; [base][upperleft] overlay=shortest=1 [tmp1]; 
[tmp1][upperright] overlay=shortest=1:x=1920" -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key”

or

./ffmpeg -re -thread_queue_size 1024 -i 
rtsp://anonymous:password@192.168.1.47:554 -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec <encoder> -acodec 
copy -t 01:00:00 -filter_complex hstack=inputs=2 -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

I have a few old Windows computers lying around, and they struggle with this, 
running upwards of 60-80% CPU, even with -vcodec h264_amf on my AMD box pushing 
much of the load to the integrated GPU, which runs at 55%. I can’t get h264_qsv 
to work on my Intel box, and I can’t get hstack to work if the two cameras have 
two different color spaces.

I can run this on my 2014 MacBook Pro with Nvidia GPU, using h264_videotoolbox:

./ffmpeg -re -thread_queue_size 1024 -i 
rtsp://anonymous:password@192.168.1.47:554 -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_videotoolbox -acodec 
copy -t 01:00:00 -filter_complex "nullsrc=size=3840x1080 [base]; [0:v] 
setpts=PTS-STARTPTS, scale=1920x1080 [upperleft]; [1:v] setpts=PTS-STARTPTS, 
scale=1920x1080 [upperright]; [base][upperleft] overlay=shortest=1 [tmp1]; 
[tmp1][upperright] overlay=shortest=1:x=1920" -f flv 
"rtmp://a.rtmp.youtube.com/live2/rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

and it runs at ~66% CPU, but both Intel Iris and nVIDIA GPUs report 0% in 
Activity Monitor. Furthermore this is my main work computer, so I can’t have it 
running ffmpeg all day long.

I actually want to run two such channels, so two instances of ffmpeg, and I 
don’t want to heat my house or have a loud fan blowing constantly.

Does ffmpeg on AS use the integrated GPU? Do you think an AS Mini would be able 
to run two instances of either form of those commands listed above, e.g. two 
different, 2-up, side-by-side videos, each 3840 x 1080?

The mini is a nice, quiet box that I could hide in one of my closets. At $700 
it’s not cheap, but even a refurbed Core i7 that could do what I want would 
cost in the same neighborhood, I think. Unless I’m just doing it wrong, which 
also is possible. Thanks!

Re: [FFmpeg-user] HW Acceleration 101? 2-Up Streaming from RTSP-->ffmpeg-->YouTube

2021-01-18 Thread Steven Kan

> On Jan 18, 2021, at 12:50 PM, Michael Koch  
> wrote:
> 
> Am 18.01.2021 um 21:20 schrieb Steven Kan:
>>> On Jan 18, 2021, at 10:58 AM, Michael Koch  
>>> wrote:
>>> 
>>> Am 18.01.2021 um 19:18 schrieb Steven Kan:
>>>> But now I want to do a “2-Up” live stream of two different cameras, 
>>>> side-by-side. Here’s an archive from last night (waiting for a mating pair 
>>>> of Barn Owls to move in):
>>>> 
>>>> https://www.youtube.com/watch?v=GDN2MjPwn0Q=youtu.be
>>>> 
>>>> The cameras are each outputting 1920 x 1080 @ 25 fps.
>>>> 
>>>> Now that I’m actually encoding, I need a lot more CPU/GPU. I’m running 
>>>> this in Win10 Pro/64 on an HP Microserver with an AMD Opteron X3418 
>>>> Quad-Core, and the CPU runs at about ~65-80% while the integrated GPU runs 
>>>> at about ~55%.
>>>> 
>>>> C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
>>>> rtsp://anonymous:password@192.168.1.47:554 -i 
>>>> rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy 
>>>> -t 01:47:02 -filter_complex "nullsrc=size=3840x1080 [base]; [0:v] 
>>>> setpts=PTS-STARTPTS, scale=1920x1080 [upperleft]; [1:v] 
>>>> setpts=PTS-STARTPTS, scale=1920x1080 [upperright]; [base][upperleft] 
>>>> overlay=shortest=1 [tmp1]; [tmp1][upperright] overlay=shortest=1:x=1920" 
>>>> "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"
>>> Wouldn't it be easier to use something like
>>> [upperleft][upperright] hstack
>> It might be. I didn’t know hstack existed :D. When I googled ‘ffmpeg 2-up” 
>> the nullsrc/overlay examples were the first ones I found :D
>> 
>> I tried hstack, and I’m not getting at all the results I expect. On my 
>> Windows machine I get "Conversion failed!” error, whereas the same command 
>> with '-filter_complex “nullsrc . . . .’ does not fail:
>> 
>> C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
>> rtsp://anonymous:password@192.168.1.47:554 -i 
>> rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy -t 
>> 01:47:02 -filter_complex hstack=inputs=2 -f flv out.flv
>> 
>> [snip]

>> Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.47:554':
>>  Metadata:
>>title   : Media Server
>>  Duration: N/A, start: 0.08, bitrate: N/A
>>Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 
>> 1920x1080, 25 fps, 25 tbr, 90k tbn, 180k tbc
>> Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
>>  Metadata:
>>title   : Media Server
>>  Duration: N/A, start: 0.10, bitrate: N/A
>>Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 
>> tbr, 90k tbn, 180k tbc
> 
> I see that the two streams have different pixel formats yuvj420p and yuv420p. 
> You could try to bring them to the same pixel format before using hstack.
> [0]format=yuv420p[a];[a][1]hstack
> 
> It's only a wild guess, I'm not sure.

Do I put this into the filter_complex argument, e.g. -filter_complex 
"[0]format=yuv420p[a];[a][1] hstack=inputs=2” 

That still results in the "Conversion failed!” error.

Curiouser and curiouser. 

If I change -vcodec h264_amf to -vcodec libx264 with hstack, then i don’t get a 
“Conversion failed!” error. But the CPU goes to 100% and the speed never 
exceeds 0.5x, so it’s not a useful solution. But it does show that the -vcodec 
argument might be a part of the problem.

But -vcodec h264_amf worked just fine with the  -filter_complex “nullsrc=. . . 
. method, and it fails with hstack.
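
If anyone else lands here: the next thing I'll try is forcing both inputs to the 
same size and pixel format before stacking (an untested guess on my part; it may 
also turn out that AMF's "error 36" just means the encoder rejects a 3840x1080 
frame, which scaling the stacked output down would confirm):

.\ffmpeg.exe -re -thread_queue_size 1024 -i rtsp://anonymous:password@192.168.1.47:554 
-i rtsp://anonymous:password@192.168.1.50:554 -filter_complex 
"[0:v]scale=1920:1080,format=yuv420p[l];[1:v]scale=1920:1080,format=yuv420p[r];[l][r]hstack" 
-vcodec h264_amf -acodec copy -t 01:47:02 -f flv out.flv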



Re: [FFmpeg-user] HW Acceleration 101? 2-Up Streaming from RTSP-->ffmpeg-->YouTube

2021-01-18 Thread Steven Kan
> On Jan 18, 2021, at 10:58 AM, Michael Koch  
> wrote:
> 
> Am 18.01.2021 um 19:18 schrieb Steven Kan:
>> But now I want to do a “2-Up” live stream of two different cameras, 
>> side-by-side. Here’s an archive from last night (waiting for a mating pair 
>> of Barn Owls to move in):
>> 
>> https://www.youtube.com/watch?v=GDN2MjPwn0Q=youtu.be
>> 
>> The cameras are each outputting 1920 x 1080 @ 25 fps.
>> 
>> Now that I’m actually encoding, I need a lot more CPU/GPU. I’m running this 
>> in Win10 Pro/64 on an HP Microserver with an AMD Opteron X3418 Quad-Core, 
>> and the CPU runs at about ~65-80% while the integrated GPU runs at about 
>> ~55%.
>> 
>> C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
>> rtsp://anonymous:password@192.168.1.47:554 -i 
>> rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy 
>> -t 01:47:02 -filter_complex "nullsrc=size=3840x1080 [base]; [0:v] 
>> setpts=PTS-STARTPTS, scale=1920x1080 [upperleft]; [1:v] setpts=PTS-STARTPTS, 
>> scale=1920x1080 [upperright]; [base][upperleft] overlay=shortest=1 [tmp1]; 
>> [tmp1][upperright] overlay=shortest=1:x=1920" -f flv 
>> "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key 
>> ”
> 
> Wouldn't it be easier to use something like
> [upperleft][upperright] hstack

It might be. I didn’t know hstack existed :D. When I googled ‘ffmpeg 2-up” the 
nullsrc/overlay examples were the first ones I found :D 

I tried hstack, and I’m not getting at all the results I expect. On my Windows 
machine I get "Conversion failed!” error, whereas the same command with 
'-filter_complex “nullsrc . . . .’ does not fail:

C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
rtsp://anonymous:password@192.168.1.47:554 -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy -t 
01:47:02 -filter_complex hstack=inputs=2 -f flv out.flv   

ffmpeg version 2020-12-09-git-e5119a-essentials_build-www.gyan.dev 
Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 10.2.0 (Rev5, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv 
--enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib 
--enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid 
--enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass 
--enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora 
--enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 92.100 /  7. 92.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.47:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.08, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 
1920x1080, 25 fps, 25 tbr, 90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554':
  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 tbr, 
90k tbn, 180k tbc
Stream #1:1: Audio: aac (LC), 8000 Hz, mono, fltp
Stream mapping:
  Stream #0:0 (h264) -> hstack:input0
  Stream #1:0 (h264) -> hstack:input1
  hstack -> Stream #0:0 (h264_amf)
  Stream #1:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[rtsp @ 02a54cba0fc0] Thread message queue blocking; consider raising the 
thread_queue_size option (current value: 8)
[rtsp @ 02a54cb2dd80] max delay reached. need to consume packet
[rtsp @ 02a54cb2dd80] RTP: missed 135 packets
[swscaler @ 02a550941000] deprecated pixel format used, make sure you did 
set range correctly
[swscaler @ 02a54ef7d380] deprecated pixel format used, make sure you did 
set range correctly
[h264_amf @ 02a54d76dc80] encoder->Init() failed with error 36
Error initializing output stream 0:0 -- Error while opening encoder for output 

[FFmpeg-user] HW Acceleration 101? 2-Up Streaming from RTSP-->ffmpeg-->YouTube

2021-01-18 Thread Steven Kan
I’ve been using ffmpeg as a “relay station” for a few years to pull RTSP 
streams from several IP cameras and push them to YouTube, such as my 24/7 
BeeCam:

https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live 


Command is of the form:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.50:554 
" -vcodec copy -acodec copy -t 
01:00:00 -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

Because I’m deliberately just relaying the packets and _not_ doing any 
transcoding, the CPU utilization is remarkably low, and independent of camera 
resolution. I can run 3 instances of ffmpeg on Raspbian/Raspberry Pi 3B+ and 
each uses only about 10% of the CPU, despite each pushing a 5 MP camera stream.

This works very well; no problems here!

But now I want to do a “2-Up” live stream of two different cameras, 
side-by-side. Here’s an archive from last night (waiting for a mating pair of 
Barn Owls to move in):

https://www.youtube.com/watch?v=GDN2MjPwn0Q=youtu.be 


The cameras are each outputting 1920 x 1080 @ 25 fps.

Now that I’m actually encoding, I need a lot more CPU/GPU. I’m running this in 
Win10 Pro/64 on an HP Microserver with an AMD Opteron X3418 Quad-Core, and the 
CPU runs at about ~65-80% while the integrated GPU runs at about ~55%. 

C:\Program Files\ffmpeg\bin> .\ffmpeg.exe -re -thread_queue_size 1024 -i 
rtsp://anonymous:password@192.168.1.47:554 -i 
rtsp://anonymous:password@192.168.1.50:554 -vcodec h264_amf -acodec copy -t 
01:47:02 -filter_complex "nullsrc=size=3840x1080 [base]; [0:v] 
setpts=PTS-STARTPTS, scale=1920x1080 [upperleft]; [1:v] setpts=PTS-STARTPTS, 
scale=1920x1080 [upperright]; [base][upperleft] overlay=shortest=1 [tmp1]; 
[tmp1][upperright] overlay=shortest=1:x=1920" -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

ffmpeg version 2020-12-09-git-e5119a-essentials_build-www.gyan.dev 
 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 10.2.0 (Rev5, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv 
--enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-zlib 
--enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid 
--enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass 
--enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora 
--enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 92.100 /  7. 92.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.47:554': 

  Metadata:
title   : Media Server
  Duration: N/A, start: 0.08, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 
1920x1080, 25 fps, 25 tbr, 90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://anonymous:password@192.168.1.50:554': 

  Metadata:
title   : Media Server
  Duration: N/A, start: 0.10, bitrate: N/A
Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 100 tbr, 
90k tbn, 180k tbc
Stream #1:1: Audio: aac (LC), 8000 Hz, mono, fltp
Stream mapping:
  Stream #0:0 (h264) -> setpts
  Stream #1:0 (h264) -> setpts
  overlay -> Stream #0:0 (h264_amf)
  Stream #1:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[rtsp @ 0199e58e14c0] Thread message queue blocking; consider raising the 
thread_queue_size option (current value: 8)
[rtsp @ 0199e586d9c0] max delay reached. need to consume packet
[rtsp @ 0199e586d9c0] RTP: missed 157 packets
[swscaler @ 0199e5cc5b80] deprecated pixel format used, make sure you did 
set range correctly
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key': 

  Metadata:
title   : Media Server
encoder : Lavf58.65.100
Stream #0:0: Video: h264 (h264_amf) ([7][0][0][0] / 0x0007), 
yuv420p(progressive), 3840x1080 [SAR 1:1 DAR 32:9], q=-1--1, 2000 kb/s, 25 fps, 
1k tbn, 25 tbc (default)
Metadata:
  encoder : Lavc58.115.102 h264_amf
Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 8000 Hz, mono, 

Re: [FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-29 Thread Steven Kan
> On Dec 28, 2020, at 9:55 PM, Steven Kan  wrote:
> 
>> On Dec 28, 2020, at 9:33 PM, Carl Zwanzig  wrote:
>> 
>> On 12/28/2020 5:44 PM, Carl Zwanzig wrote:
>>> (If I have a chance later tonight, I might look into the filter's source 
>>> code.)
>> 
>> I took an admittedly quick look at vf_drawtext.c and the helper routines. It 
>> looks like the only string quoting needed would be for '\' and '%', so not 
>> for a comma. I did not dive further into character processing. It does look 
>> like there's a substitution made for characters that can't be found in the 
>> font so it could be that the font is faulty or that the comma isn't really a 
>> comma (see prev. email).
> 
> Hi Carl,
> 
> Thanks for the research. I think the culprit is the font 🤦‍♂️ (If that last 
> glyph doesn’t render properly in the mailing list, it’s a facepalm)
> 
> I switched from Keyboard.ttf to Helvetica.ttc, and now the commas render 
> properly. 
> 
> So sorry to everyone for the colossal waste of time and bandwidth. I’ll post 
> the final results of my experiment tomorrow as atonement.

So here’s what happens once I use the correct font:

https://www.youtube.com/watch?v=IpKW5Gzrayc

Also, while I was typing out the new text for the drawtext filter, I used a 
naked colon and received the following error:

Error initializing filter 'drawtext' with args 
'fontfile=/System/Library/Fonts/Helvetica.ttc: text=1:2:2, Room Temp Water, No 
Heat Mat: fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: 
boxborderw=5: x=w*0.02: y=h*0.02’

So in the future I’ll know if my errors are the results of unescaped characters 
or some other issue.
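
For anyone hitting the same error: wrapping the text value in single quotes and 
backslash-escaping the literal colons should satisfy the filter parser. An 
untested variant of the failing filter:

drawtext="fontfile=/System/Library/Fonts/Helvetica.ttc: text='1\:2\:2, Room Temp 
Water, No Heat Mat': fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: 
boxborderw=5: x=w*0.02: y=h*0.02"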

Re: [FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-28 Thread Steven Kan
> On Dec 28, 2020, at 9:33 PM, Carl Zwanzig  wrote:
> 
> On 12/28/2020 5:44 PM, Carl Zwanzig wrote:
>> (If I have a chance later tonight, I might look into the filter's source 
>> code.)
> 
> I took an admittedly quick look at vf_drawtext.c and the helper routines. It 
> looks like the only string quoting needed would be for '\' and '%', so not 
> for a comma. I did not dive further into character processing. It does look 
> like there's a substitution made for characters that can't be found in the 
> font so it could be that the font is faulty or that the comma isn't really a 
> comma (see prev. email).

Hi Carl,

Thanks for the research. I think the culprit is the font 🤦‍♂️ (If that last 
glyph doesn’t render properly in the mailing list, it’s a facepalm)

I switched from Keyboard.ttf to Helvetica.ttc, and now the commas render 
properly. 

So sorry to everyone for the colossal waste of time and bandwidth. I’ll post 
the final results of my experiment tomorrow as atonement.

Re: [FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-28 Thread Steven Kan
> On Dec 28, 2020, at 1:02 PM, Carl Zwanzig  wrote:
> 
> On 12/27/2020 11:01 AM, Steven Kan wrote:
>> If I use that exact string, the comma renders as a white box. According
>> to the drawtext documentation, I should escape the comma, but depending
>> on the context I may have to escape the escape character before the
>> comma, etc.
> 
> I'm not on a mac, but it could be the shell mutating the comma or it could be 
> the drawtext filter, don't know. You might try assembling the filter string 
> into a shell variable and dropping that into the command line. Also, try 
> echo-ing the command line into a file to see if the commas are correct.
> 
> You might also try the 'textfile' option instead of 'text' to avoid any 
> munging the shell might do.

Hmmm. I just tried:

./ffmpeg -i /Users/steven/Documents/Baking/4Up.mp4 -t 00:00:02 -vf 
drawtext="fontfile=/System/Library/Fonts/Keyboard.ttf: fontcolor=white: 
textfile=/Users/steven/Documents/Baking/UpperLeft.txt" -c:v libx264 
/Users/steven/Documents/Baking/TextwOverlayTest.mp4

where the file UpperLeft.txt contains "Room Temp Water, No Heat Mat," 
and all the commas render as boxes.

Text encoding for UpperLeft.txt is UTF-8.

Does this indicate that this is happening within drawtext itself? I’m using the 
latest snapshot:

./ffmpeg -i /Users/steven/Documents/Baking/4Up.mp4 -t 00:00:02 -vf 
drawtext="fontfile=/System/Library/Fonts/Keyboard.ttf: fontcolor=white: 
textfile=/Users/steven/Documents/Baking/UpperLeft.txt" -c:v libx264 
/Users/steven/Documents/Baking/TextwOverlayTest.mp4
ffmpeg version N-100466-g29cef1bcd6-tessus  https://evermeet.cx/ffmpeg/  
Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg 
--extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl 
--enable-libaom --enable-libass --enable-libbluray --enable-libdav1d 
--enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame 
--enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenh264 --enable-libopenjpeg --enable-libopus 
--enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
--enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab 
--enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs 
--enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi 
--enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil  56. 62.100 / 56. 62.100
  libavcodec 58.115.102 / 58.115.102
  libavformat58. 65.100 / 58. 65.100
  libavdevice58. 11.103 / 58. 11.103
  libavfilter 7. 94.100 /  7. 94.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc55.  8.100 / 55.  8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 
'/Users/steven/Documents/Baking/4Up.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.62.100
  Duration: 00:00:11.00, start: 0.00, bitrate: 4114 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 
1920x1080 [SAR 1:1 DAR 16:9], 4111 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc 
(default)
Metadata:
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7f8e5e005000] using SAR=1/1
[libx264 @ 0x7f8e5e005000] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f8e5e005000] profile High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x7f8e5e005000] 264 - core 161 r3027 4121277 - H.264/MPEG-4 AVC 
codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: 
cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 
psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 
deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 
sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 
constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 
open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 
rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 
ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '/Users/steven/Documents/Baking/TextwOverlayTest.mp4':
  Metadata:
major_brand : isom
minor_version   : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.65.100
Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p(progressive), 
1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 25 fps, 12800 tbn (default)
Metadata:
  handler_name: VideoHandler
  vendor_id   : [0][0][0][0]
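
One way to rule out the font itself, assuming the fontTools Python package is installed (pip3 install fonttools), is to ask whether Keyboard.ttf maps any glyph to the comma codepoint; this one-liner is just a sketch:

python3 -c "from fontTools.ttLib import TTFont; f = TTFont('/System/Library/Fonts/Keyboard.ttf'); print(any(ord(',') in t.cmap for t in f['cmap'].tables))"

If it prints False, drawtext never had a comma glyph to draw, and the white box is just the font's missing-glyph fallback.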

Re: [FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-28 Thread Steven Kan
Thanks for the feedback. I should have specified that I’m using macOS 10.14.6.

Does anyone else have experience putting literal commas in their drawtext on 
macOS?

> On Dec 27, 2020, at 12:34 PM, pdr0  wrote:
> 
> 
> Not sure about other OS's - For Windows, you don't need to escape commas
> inside the text field;  but the paths for fonts needs to be escaped
> 
> These work ok in windows
> 
> one comma
> ffmpeg -f lavfi -r 24 -i color=c=green:s=640x480 -vf
> drawtext="fontfile='C\:\\Windows\\Fonts\\Arial.ttf':text='Room Temp Water,
> No Heat Mat':fontcolor=white:fontsize=24:
> box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)*0.05:y=(h-text_h)*0.05"
> -crf 20 -an -frames:v 120 test1.mp4 -y
> 
> five commas
> ffmpeg -f lavfi -r 24 -i color=c=green:s=640x480 -vf
> drawtext="fontfile='C\:\\Windows\\Fonts\\Arial.ttf':text='Room Temp Water, ,
> , , , No Heat Mat':fontcolor=white:fontsize=24:
> box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)*0.05:y=(h-text_h)*0.05"
> -crf 20 -an -frames:v 120 test2.mp4 -y
> 
> 
> 
> --
> Sent from: http://www.ffmpeg-archive.org/
> ___
> ffmpeg-user mailing list
> ffmpeg-user@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
> 
> To unsubscribe, visit link above, or email
> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] How to properly escape a literal comma ", " in drawtext filter text?

2020-12-27 Thread Steven Kan
I am attempting to overlay the text, “Room Temp Water, No Heat Mat” onto some 
video.

If I use that exact string, the comma renders as a white box. According to the 
drawtext documentation, I should escape the comma, but depending on the context 
I may have to escape the escape character before the comma, etc.

Anywhere from 0 to 7 escaping backslashes does not work; each variant results in 
a white box where I want my comma. This command produces 8 white boxes:

./ffmpeg -i /Users/steven/Documents/Baking/4Up.mp4 -vf 
"drawtext=fontfile=/System/Library/Fonts/Keyboard.ttf: text='Room Temp 
Water,\,\\,\\\,\\\\,\\\\\,\\\\\\,\\\\\\\, No Heat Mat': fontcolor=white: 
fontsize=24: box=1: boxcolor=black@0.5: boxborderw=5: x=(w-text_w)*0.05: 
y=(h-text_h)*0.05" -c:v libx264 /Users/steven/Documents/Baking/TextwOverlay.mp4

What is the proper way to insert a literal comma into the text argument of 
drawtext?

Or should I continue adding slashes until it works? ;-) 

Thanks!
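
For comparison, per the quoting and escaping documentation, the comma sits inside a single-quoted value and in theory needs no escaping at all, so this zero-backslash form should be the correct one (same paths as above):

./ffmpeg -i /Users/steven/Documents/Baking/4Up.mp4 -vf "drawtext=fontfile=/System/Library/Fonts/Keyboard.ttf: text='Room Temp Water, No Heat Mat': fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: boxborderw=5: x=(w-text_w)*0.05: y=(h-text_h)*0.05" -c:v libx264 /Users/steven/Documents/Baking/TextwOverlay.mp4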
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] pcm_alaw audio from Wyze Cam to YouTube?

2020-04-19 Thread Steven Kan
I’m attempting to adapt my BeeCam stack to run an OwlCam, but this requires 
using a very low-power, WiFi-based camera because it will all need to be solar 
powered and battery-backed. So I’m testing a $25 Wyze camera, using their 
unsupported RTSP firmware, pulling it from the camera and pushing it out to 
YouTube via ffmpeg running on a Raspberry Pi 3.

If I use a playlist of MP3s, stored on the Pi, it works fine:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.22/live" -f concat -safe 0 -i playlist.txt 
-vcodec copy -acodec copy -t 01:47:02 -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"
ffmpeg version N-89882-g4dbae00bac Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1) 20170516
  configuration: 
  libavutil  56.  7.100 / 56.  7.100
  libavcodec 58.  9.100 / 58.  9.100
  libavformat58.  5.101 / 58.  5.101
  libavdevice58.  0.101 / 58.  0.101
  libavfilter 7. 11.101 /  7. 11.101
  libswscale  5.  0.101 /  5.  0.101
  libswresample   3.  0.101 /  3.  0.101
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.22/live':
  Metadata:
title   : Session streamed by "wyze"
comment : live
  Duration: N/A, start: 0.00, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 15 fps, 
15 tbr, 90k tbn, 30 tbc
Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
[mp3 @ 0x22af430] Estimating duration from bitrate, this may be inaccurate
Input #1, concat, from 'playlist.txt':
  Duration: N/A, start: 0.00, bitrate: 320 kb/s
Stream #1:0: Audio: mp3, 44100 Hz, stereo, s16p, 320 kb/s
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key':
  Metadata:
title   : Session streamed by "wyze"
comment : live
encoder : Lavf58.5.101
Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), 
yuv420p(progressive), 1920x1080, q=2-31, 15 fps, 15 tbr, 1k tbn, 90k tbc
Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, s16p, 
320 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 1020 fps= 13 q=-1.0 size=   15007kB time=00:01:07.94 
bitrate=1809.4kbits/s speed=0.886x




If I delete the playlist and leave -acodec at “copy”, ffmpeg happily reports 
an input stream in pcm_alaw format:


./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.22/live" -vcodec copy -acodec copy -t 
01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"
ffmpeg version N-89882-g4dbae00bac Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1) 20170516
  configuration: 
  libavutil  56.  7.100 / 56.  7.100
  libavcodec 58.  9.100 / 58.  9.100
  libavformat58.  5.101 / 58.  5.101
  libavdevice58.  0.101 / 58.  0.101
  libavfilter 7. 11.101 /  7. 11.101
  libswscale  5.  0.101 /  5.  0.101
  libswresample   3.  0.101 /  3.  0.101
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://anonymous:password@192.168.1.22/live':
  Metadata:
title   : Session streamed by "wyze"
comment : live
  Duration: N/A, start: 0.00, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 15 fps, 
15 tbr, 90k tbn, 30 tbc
Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key':
  Metadata:
title   : Session streamed by "wyze"
comment : live
encoder : Lavf58.5.101
Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), 
yuv420p(progressive), 1920x1080, q=2-31, 15 fps, 15 tbr, 1k tbn, 90k tbc
Stream #0:1: Audio: pcm_alaw ([7][0][0][0] / 0x0007), 8000 Hz, mono, s16, 
64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=   50 fps=8.2 q=-1.0 size= 567kB time=00:00:08.67 bitrate= 
535.7kbits/s speed=1.43x   


but YouTube complains that the incoming audio format is unsupported, and that I 
should use aac or mp3. Since ffmpeg doesn’t include an mp3 encoder by default, 
I tried aac, but now ffmpeg complains of "Non-monotonous DTS in output stream 
. . ." and "Queue input is backward in time", and YouTube doesn’t present it.

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.22/live" -acodec aac -vcodec copy -t 
01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"
ffmpeg version N-89882-g4dbae00bac Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1) 20170516
  configuration: 
  libavutil  56.  7.100 / 56.  7.100
  libavcodec 58.  9.100 / 58.  9.100
  libavformat
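
What I plan to try next, on a guess that the warnings come from the camera's 8 kHz a-law timestamps: resample the audio to 44.1 kHz AAC and let aresample add or drop samples to keep the timestamps monotonic. This is a sketch, not a verified fix; async=1 is my assumption:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i "rtsp://anonymous:password@192.168.1.22/live" -vcodec copy -acodec aac -ar 44100 -af aresample=async=1 -t 01:47:02 -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"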

Re: [FFmpeg-user] Add soft subtitles to YouTube Mouse Stream?

2019-11-21 Thread Steven Kan
Thank you for finding that! And, a few minutes later, in he goes:

https://youtu.be/gh7bUtECd0Y?t=29460 <https://youtu.be/gh7bUtECd0Y?t=29460>

Just before he pokes his nose in, you can see a guard bee at the far right of 
the entrance, not doing her job.

Back on-topic, I also have had an issue with these Reolink cameras, in that 
sometimes they get into a weird state that ffmpeg doesn’t like, and ffmpeg 
starts dumping out:

[rtsp @ 0x302c2f0] RTP: PT=60: bad cseq e680 expected=0b49
[rtsp @ 0x302c2f0] RTP: PT=60: bad cseq 93ab expected=0b49
[rtsp @ 0x302c2f0] RTP: PT=60: bad cseq 93ac expected=0b49
[rtsp @ 0x302c2f0] RTP: PT=60: bad cseq e682 expected=0b49

forever.

Has anyone seen this error, and/or know what it means? Stopping and restarting 
ffmpeg doesn’t fix this problem; I need to reboot the camera.

But I’m trying to grok how ffmpeg could put a camera into a bad state, just by 
connecting via rtsp.



> On Nov 21, 2019, at 1:58 AM, Moritz Barsnick  wrote:
> 
> On Wed, Nov 20, 2019 at 14:30:27 -0600, Steven Kan wrote:
>> Danke für deine Hilfe!
> 
> Gerne!
> 
>> Yes, I have mice in my yard :-(
>> 
>> They can actually invade a weak hive and wreak havoc! Do you remember
>> approximately when you saw it? YT automatically archives the footage
>> twice a day, so I’d like to review it and see the mouse!
> 
> I wasn't aware of that. Basically, about five minutes before I wrote my
> email, so around 14:05h CET. Have you considered downloading the
> archived YouTube stream and doing motion detection on it?
> 
> Actually, I found it:
> https://youtu.be/gh7bUtECd0Y?t=29118
> 

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Add soft subtitles to YouTube Stream Now?

2019-11-20 Thread Steven Kan
The good news is that all the required pieces appear to exist; the bad news is 
that no one has yet (to my knowledge) glued them together to make a functional 
solution. Here’s someone who has gotten very close:

https://github.com/szatmary/libcaption/issues/55#issuecomment-525688953

"I would like to share some of my findings regarding the 'broken' output from 
flv+srt when you use pipe eg '-' instead of file.

As a TS my goal is also to inject CC in the livestream. I only use Gstreamer 
instead of ffmpeg.
I was able to generate input data with gstreamer (h264 video wrapped in flv), pass 
it to flv+srt, and save it to a file successfully.
gst-launch-1.0 videotestsrc ! ...etc ... ! flvmux ! fdsink | flv+srt - 
mysrtfile.srt output_with_cc.flv

The problem was when I was trying to pass the output from flv+srt further to the 
next process (ffmpeg or gstreamer) for re-sending the result data to a streaming 
server. I was able to capture this (broken) output to a file and compare it 
with the working output created by flv+srt, e.g. 'output_with_cc.flv', and there were 
a bunch of added lines with 'Matches: 2 Start pts: 4.271000' etc.
Those are produced by vtt.c (lines 164 and 168); uncommenting these lines has 
helped to resolve this issue for now.

I assume these lines should not be printed when using a pipe, and that it's a bug."
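
If that patch holds up, the glue might look something like the pipeline below, with one ffmpeg repacking the camera's RTSP into FLV on stdout and a second one pushing the captioned result to YouTube. This is untested on my end, and it assumes the patched flv+srt accepts '-' for both input and output, as the comment suggests (captions.srt and the camera URL are placeholders):

./ffmpeg -rtsp_transport tcp -i "rtsp://camera/live" -c copy -f flv - | flv+srt - captions.srt - | ./ffmpeg -i - -c copy -f flv "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"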



> On Nov 20, 2019, at 2:16 PM, Verachten Bruno  wrote:
> 
> That's a very interesting subject (to me at least).
> I would like to embed automatic (or human generated, depending on the
> budget) subtitles to help hearing-impaired people grab most of the
> talk in our conference.
> I am producing H.264 and sending it (for the time being) to YouTube,
> so your request is not far from my needs.
> I will follow this subject with great attention.
> Thanks.
> 
> On Wed, Nov 20, 2019 at 5:20 PM Michael Shaffer  wrote:
>> 
>> I noticed your Youtube streams only last a day or so. I have a Python
>> script I made that keeps the ffmpeg process sending to Youtube. I have 5 IP
>> cameras going to youtube and they have been going about 9 months without
>> the stream ending. Anyways, if you want I could show you how the script
>> works. You would just have to change the stream keys and the bitrate that
>> each camera uses, so it knows when to restart the stream.
>> 
>> Michael
>> 
>> On Wed, Nov 20, 2019 at 12:17 AM Steven Kan  wrote:
>> 
>>> First time poster, so please be kind if I ask anything stupid!
>>> 
>>> I have a live BeeCam feed on YouTube:
>>> 
>>> https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live <
>>> https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live>
>>> 
>>> using YouTube’s “Stream Now” feature, which is distinct from a streaming
>>> “event” because I don’t have to schedule it. Whenever I’m pushing video to
>>> YouTube, the channel goes live.
>>> 
>>> The stream is supplied by a Raspberry Pi running as an ffmpeg “relay
>>> server,” e.g. it’s not doing any transcoding; it’s just repacking an RTSP
>>> stream from an off-the-shelf camera:
>>> 
>>> ./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i "rtsp://
>>> anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt
>>> -vcodec copy -acodec copy -t 01:47:02 -f flv "rtmp://
>>> a.rtmp.youtube.com/live2/my-youtube-streaming-key"
>>> 
>>> The -t and playlist.txt are because my camera lacks an audio feed, and YT
>>> requires an audio stream, so I have a collection of royalty-free mp3s in
>>> the playlist, and I’m wrapping this command in a loop.
>>> 
>>> When I run this on my RPi 2, CPU utilization for ffmpeg is <10%, which
>>> is what I want, because I will have up to 3 instances of ffmpeg pushing 3
>>> camera streams to 3 YT channels during honey bee swarm season in Spring.
>>> 
>>> What I want to do is add some captions to the video as soft subtitles,
>>> e.g. my location, the present temperature, and the weather forecast. I
>>> don’t have enough CPU on the Pi to burn these into the video stream.
>>> 
>>> Is this possible in ffmpeg and with YouTube’s “stream now” feature?
>>> 
>>> I can get ffmpeg to put a soft subtitle into a local .mkv file:
>>> 
>>> ./ffmpeg -i video.mp4 -i SubtitleTest.srt -acodec copy -scodec copy
>>> out.mkv
>>> 
>>> but changing the output to .m4v, .mp4, or .flv results in errors such as:
>>> 
>>> Subtitle codec 'ass' for stream 2 is not compatible with FLV
>>> 
>>> and pushing mkv to YouTube via:
>>> 
>>> ./ffmpeg -i video.mp

Re: [FFmpeg-user] Add soft subtitles to YouTube Stream Now?

2019-11-20 Thread Steven Kan
Hi Michael,

Thanks! I actually have a cron job to stop, pause, and restart the streams 
every 12 hours, because otherwise the YT archived videos sometimes fail to 
upload.

Somewhere in my processing chain something usually craps out every few days, 
but even then I would sometimes have streams that would go on for 3-4 days, and 
those would get stuck in “Processing. . .” forever, and never be available to 
watch afterward. This is critical for me, because one of my primary motivations 
for having this/these streams is to capture when a swarm moves into my swarm 
trap. By observing scouting behavior in early Spring I can usually tell that 
I’m within 3-4 days of a swarm moving in, but there’s no predicting exactly 
when. So I need to ensure that all of my archives get saved successfully, like 
this one:

https://www.youtube.com/watch?v=OtjpylAEP8I&feature=youtu.be&t=1800 
<https://www.youtube.com/watch?v=OtjpylAEP8I&feature=youtu.be&t=1800>

(that was post-processed to make a 2-up, but each half was taken by a camera 
and pushed to YT by ffmpeg)

So I actually have a watchdog timer running for each of my streams, plus the 
cron job to interrupt everything, twice a day.
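
For anyone curious, the twice-daily interruption is just a pair of crontab entries; the script names here are placeholders for my real ones:

# stop the streams at 5:00 AM/PM, restart them five minutes later
0 5,17 * * * /home/pi/stop_streams.sh
5 5,17 * * * /home/pi/start_streams.sh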

But thanks for the offer!

> On Nov 20, 2019, at 10:11 AM, Michael Shaffer  wrote:
> 
> I noticed your Youtube streams only last a day or so. I have a Python
> script I made that keeps the ffmpeg process sending to Youtube. I have 5 IP
> cameras going to youtube and they have been going about 9 months without
> the stream ending. Anyways, if you want I could show you how the script
> works. You would just have to change the stream keys and the bitrate that
> each camera uses, so it knows when to restart the stream.
> 
> Michael
> 
> On Wed, Nov 20, 2019 at 12:17 AM Steven Kan  wrote:
> 
>> First time poster, so please be kind if I ask anything stupid!
>> 
>> I have a live BeeCam feed on YouTube:
>> 
>> https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live <
>> https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live>
>> 
>> using YouTube’s “Stream Now” feature, which is distinct from a streaming
>> “event” because I don’t have to schedule it. Whenever I’m pushing video to
>> YouTube, the channel goes live.
>> 
>> The stream is supplied by a Raspberry Pi running as an ffmpeg “relay
>> server,” e.g. it’s not doing any transcoding; it’s just repacking an RTSP
>> stream from an off-the-shelf camera:
>> 
>> ./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i "rtsp://
>> anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt
>> -vcodec copy -acodec copy -t 01:47:02 -f flv "rtmp://
>> a.rtmp.youtube.com/live2/my-youtube-streaming-key"
>> 
>> The -t and playlist.txt are because my camera lacks an audio feed, and YT
>> requires an audio stream, so I have a collection of royalty-free mp3s in
>> the playlist, and I’m wrapping this command in a loop.
>> 
>> When I run this on my RPi 2, CPU utilization for ffmpeg is <10%, which
>> is what I want, because I will have up to 3 instances of ffmpeg pushing 3
>> camera streams to 3 YT channels during honey bee swarm season in Spring.
>> 
>> What I want to do is add some captions to the video as soft subtitles,
>> e.g. my location, the present temperature, and the weather forecast. I
>> don’t have enough CPU on the Pi to burn these into the video stream.
>> 
>> Is this possible in ffmpeg and with YouTube’s “stream now” feature?
>> 
>> I can get ffmpeg to put a soft subtitle into a local .mkv file:
>> 
>> ./ffmpeg -i video.mp4 -i SubtitleTest.srt -acodec copy -scodec copy
>> out.mkv
>> 
>> but changing the output to .m4v, .mp4, or .flv results in errors such as:
>> 
>> Subtitle codec 'ass' for stream 2 is not compatible with FLV
>> 
>> and pushing mkv to YouTube via:
>> 
>> ./ffmpeg -i video.mp4 -acodec copy -f mkv "rtmp://
>> a.rtmp.youtube.com/live2/my-youtube-streaming-key"
>> 
>> returns:
>> 
>> Requested output format 'mkv' is not a suitable output format
>> rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key
>> 
>> Am I doing this fundamentally wrong? Or is this just not possible? If it’s
>> possible I will continue reading documentation until I get it working!!!
>> 
>> Thanks!
>> 
>> ___
>> ffmpeg-user mailing list
>> ffmpeg-user@ffmpeg.org
>> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
>> 
>> To unsubscribe, visit link above, or email
>> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Add soft subtitles to YouTube Mouse Stream?

2019-11-20 Thread Steven Kan
Danke für deine Hilfe!

Yes, I have mice in my yard :-(

They can actually invade a weak hive and wreak havoc! Do you remember 
approximately when you saw it? YT automatically archives the footage twice a 
day, so I’d like to review it and see the mouse!

Thanks again!

> On Nov 20, 2019, at 7:11 AM, Moritz Barsnick  <mailto:barsn...@gmx.net>> wrote:
> 
> On Tue, Nov 19, 2019 at 22:45:44 -0600, Steven Kan wrote:
>> I have a live BeeCam feed on YouTube:
>> https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live 
>> <https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live> 
>> <https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live 
>> <https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live>>
> 
> Cute mice, if I may say so. (You're actually trying to show bees,
> right? I caught a glimpse of a mouse though.)

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Add soft subtitles to YouTube Stream Now?

2019-11-20 Thread Steven Kan
> On Nov 20, 2019, at 7:11 AM, Moritz Barsnick  wrote:
> 
> On Tue, Nov 19, 2019 at 22:45:44 -0600, Steven Kan wrote:
>> What I want to do is add some captions to the video as soft
>> subtitles, e.g. my location, the present temperature, and the weather
>> forecast. I don’t have enough CPU on the Pi to burn these into the
>> video stream.
>> 
>> Is this possible in ffmpeg and with YouTube’s “stream now” feature?
> 
> Check this page:
> https://support.google.com/youtube/answer/3068031?hl=en
> 
> YouTube currently only accepts FLV streams, and subtitles only as EIA
> 608/708 captions. As far as I can tell, ffmpeg is currently not capable
> of creating or embedding these captions.
> 
> There's an enhancement request for muxing support:
> https://trac.ffmpeg.org/ticket/1778#comment:10 
> <https://trac.ffmpeg.org/ticket/1778#comment:10>

If I’m demuxing your answer correctly, the answer is “no,” at least as of now. 
Thanks for your reply; at least now I won’t spend hours trying to find a 
solution that doesn’t exist! That enhancement request has had no updates for 2 
years :-(

EIA 608/708 caption generation appears to be non-trivial!

https://en.wikipedia.org/wiki/CEA-708 <https://en.wikipedia.org/wiki/CEA-708> 

I did find this libcaption project:

https://github.com/szatmary/libcaption <https://github.com/szatmary/libcaption>

and this page saying that the captions can be pushed via RTMP:

https://ghuntley.com/notes/closed-captioning/ 
<https://ghuntley.com/notes/closed-captioning/>

I know this is somewhat OT for ffmpeg, but can I push two RTMP streams to 
YouTube, one for the a/v, and one for the captions?

Back on-topic, given that libcaption exists, how difficult a task would it be 
to integrate support into ffmpeg? Is it just a matter of re-muxing the output 
from libcaption into the ffmpeg output stream? I’m not a programmer, 
unfortunately, but I’m just trying to get a sense for the scope of the problem.

Thanks!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Add soft subtitles to YouTube Stream Now?

2019-11-19 Thread Steven Kan
First time poster, so please be kind if I ask anything stupid!

I have a live BeeCam feed on YouTube:

https://www.youtube.com/channel/UCE0jx2Z6qbc5Co8x8Kyisag/live 


using YouTube’s “Stream Now” feature, which is distinct from a streaming 
“event” because I don’t have to schedule it. Whenever I’m pushing video to 
YouTube, the channel goes live.

The stream is supplied by a Raspberry Pi running as an ffmpeg “relay server,” 
e.g. it’s not doing any transcoding; it’s just repacking an RTSP stream from an 
off-the-shelf camera:

./ffmpeg -re -thread_queue_size 512 -rtsp_transport tcp -i 
"rtsp://anonymous:password@192.168.1.11:554" -f concat -safe 0 -i playlist.txt 
-vcodec copy -acodec copy -t 01:47:02 -f flv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key”

The -t and playlist.txt are because my camera lacks an audio feed, and YT 
requires an audio stream, so I have a collection of royalty-free mp3s in the 
playlist, and I’m wrapping this command in a loop.
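
playlist.txt is in the concat demuxer's format, one file directive per line (-safe 0 above is what allows paths outside the working directory; these filenames are placeholders for my real ones):

file 'track01.mp3'
file 'track02.mp3'
file 'track03.mp3'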

When I run this on my RPi 2, CPU utilization for ffmpeg is <10%, which is 
what I want, because I will have up to 3 instances of ffmpeg pushing 3 camera 
streams to 3 YT channels during honey bee swarm season in Spring.

What I want to do is add some captions to the video as soft subtitles, e.g. my 
location, the present temperature, and the weather forecast. I don’t have 
enough CPU on the Pi to burn these into the video stream.

Is this possible in ffmpeg and with YouTube’s “stream now” feature?

I can get ffmpeg to put a soft subtitle into a local .mkv file:

./ffmpeg -i video.mp4 -i SubtitleTest.srt -acodec copy -scodec copy out.mkv

but changing the output to .m4v, .mp4, or .flv results in errors such as:

Subtitle codec 'ass' for stream 2 is not compatible with FLV

and pushing mkv to YouTube via:

./ffmpeg -i video.mp4 -acodec copy -f mkv 
"rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"

returns:

Requested output format 'mkv' is not a suitable output format
rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key 


Am I doing this fundamentally wrong? Or is this just not possible? If it’s 
possible I will continue reading documentation until I get it working!!!

Thanks!
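
One footnote on the second error: ffmpeg's Matroska muxer is registered as matroska, not mkv, so the line below at least gets past the "not a suitable output format" lookup. Whether YouTube's RTMP ingest would accept Matroska is another matter, and I suspect not, since they document FLV only:

./ffmpeg -i video.mp4 -acodec copy -f matroska "rtmp://a.rtmp.youtube.com/live2/my-youtube-streaming-key"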

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".