[FFmpeg-user] ffmpeg in jailshell encoding with libx264

2019-09-28 Thread Fred

Hi Guys,

I am running ffmpeg (statically linked) in a jailshell on a hosted 
server to encode webcam pictures into a movie using the libx264 codec. Until 
recently everything was working fine, but after a cPanel upgrade (which also 
seems to have affected the jailshell) it suddenly stopped working. I can still 
encode to e.g. MPEG-4 or Theora, but when using libx264 I always get 
this error.


#
[libx264 @ 0x70a0400] using SAR=1/1
[libx264 @ 0x70a0400] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 
AVX FMA3 BMI2 AVX2 AVX512
Error initializing output stream 0:0 -- Error while opening encoder for 
output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, 
width or height

#

Command Line used:
#
./ffmpeg -f image2 -framerate 12 -pattern_type glob -i 
"./2019-09-27/*.jpg" -filter:v scale=960:-1 -c:v libx264 -crf 30 
-preset slow day_2019-09-27.mkv

#

FFMPEG version used:
#
ffmpeg version 4.2.1-static https://johnvansickle.com/ffmpeg/ Copyright 
(c) 2000-2019 the FFmpeg developers

  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-debug --disable-ffplay --disable-indev=sndio 
--disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r 
--enable-gnutls --enable-gmp --enable-libgme --enable-gray 
--enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf 
--enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb 
--enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband 
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis 
--enable-libopus --enable-libtheora --enable-libvidstab 
--enable-libvo-amrwbenc --enable-libvpx --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d 
--enable-libxvid --enable-libzvbi --enable-libzimg

  libavutil  56. 31.100 / 56. 31.100
  libavcodec 58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter 7. 57.100 /  7. 57.100
  libswscale  5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
#

Does anyone have an idea what limitation in the jailshell might impact the 
encoding capability of certain encoders?
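
(A blind guess for anyone hitting the same thing: compared to the MPEG-4 or 
Theora encoders, x264 spins up worker threads and allocates noticeably more 
memory, so the jail's resource limits are one plausible culprit. Comparing the 
limits and retrying with threading turned down might narrow it down - purely a 
diagnostic sketch, not a fix:)

ulimit -a
./ffmpeg -v verbose -f image2 -framerate 12 -pattern_type glob -i "./2019-09-27/*.jpg" \
  -filter:v scale=960:-1 -c:v libx264 -threads 1 -crf 30 -preset slow test.mkv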


Cheers

Fred
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg x265 on multi-cpus

2017-04-10 Thread fred fred
thank you Moritz !
anyway, even with -threads 0 
x265 [info]: Thread pool created using 24 threads
x265 [info]: Slices                              : 1
x265 [info]: frame threads / pool features       : 5 / wpp(5 rows)
I see with the top command between 400% - 500% CPU on ffmpeg ... I was 
expecting something like 2400%, or 24 ffmpeg workers at 100% each.
Where am I wrong again??
As for my last email, I wrote it with the Yahoo email web interface; I guess 
it's not fully compatible with the list management software. Sorry.
Regards
Fred

On Monday, 10 April 2017 at 12:20, Moritz Barsnick <barsn...@gmx.net> wrote:
 

 On Mon, Apr 10, 2017 at 09:58:59 +, fred fred wrote:
> I have tried all possible ways to use all my 16 cpus (x2 threads) to encode 
> with libx265, but each time it fails...
> -pools, --pools, -p,  --numa-* ... and so on are not recognise at all

Those are x265 options (of the command line tool), not ffmpeg+libx265
options.

First, please try ffmpeg's own "-threads 16" option. That should do the
right thing for you.


If you need to directly use those pool options of libx265, use a special
ffmpeg option:
  "-x265-params pools=8" (or something similar)

Also observe the info ffmpeg's libx265 encoder outputs when beginning
to encode, e.g.:
> x265 [info]: Thread pool created using 4 threads
> x265 [info]: frame threads / pool features      : 2 / wpp(4 rows)

BTW, your email's formatting is terribly broken:
http://ffmpeg.org/pipermail/ffmpeg-user/2017-April/035824.html

Cheers,
Moritz
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg x265 on multi-cpus

2017-04-10 Thread fred fred
Erik, can you teach me how to reach maximum load? At least, what is the command 
line you use for that?


On Monday, 10 April 2017 at 15:03, Erik Slagter wrote:
 

 >>> how did you come to the conclusion that parallelization scales magically
>>> and with no limits? if that would be realistic just throwing enough CPU
>>> cores on whatever problem would be the solution - but that is not how
>>> computers are working
>>
>> On the other hand, both libx264 and libx265 do scale quite well by
>> smartly dividing the work over all available threads. Yes, you can
>> really have up to 800% cpu usage on a 4x2 cpu (which is quite
>> impressive), with both of them
> 
> but you can't expect that that scales up to every core count

But it does scale to at least more than eight cores (see the documentation); 
libx264 is (as yet) a bit more scalable, but I guess libx265 will get better at 
it as well. On a 2x2x12 core machine I had to start two concurrent encodings 
to keep all of the CPUs busy all of the time (libx265), but I must also say it 
didn't really add that much to the total encoding frame rate.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg x265 on multi-cpus

2017-04-10 Thread fred fred
Got it, but then how do I reach 800% for one task at least? As for the girls, 
well, I strongly believe in practical experiments ;)
 

On Monday, 10 April 2017 at 15:06, Reindl Harald <h.rei...@thelounge.net> 
wrote:
 

 

On 10.04.2017 at 14:59, fred fred wrote:
> hello Harald, it is not a conclusion, it's an expectation! And a question: 
> for one file, from some format to x265, what is the most efficient way to use 
> my 24 cores? Can you contribute?
> Best
> Fred

parallelization depends on a lot of things and you always have shared 
resources like memory, IO and thread synchronisation - for one video task 
there is probably just no way to get 24 cores to 100% CPU usage

with 3 different parallel tasks, each using 8 cores, it will probably work 
better because each one has its own thread synchronisation and so on

you just can't expect that throwing enough CPU cores at a problem will 
solve it faster, the same way 9 girls can't make a child in one month :-)
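
(As a rough sketch of the "3 parallel tasks" idea - untested, file names are 
placeholders, and pools=8 is just one way of capping each job:)

for f in in1.mkv in2.mkv in3.mkv; do
  # one libx265 encode per file, each limited to an 8-thread pool, run in parallel
  ffmpeg -i "$f" -c:v libx265 -x265-params pools=8 -crf 28 "${f%.mkv}_x265.mkv" &
done
wait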

>      On Monday, 10 April 2017 at 14:54, Reindl Harald <h.rei...@thelounge.net> 
> wrote:
> On 10.04.2017 at 14:43, fred fred wrote:
>> thank you Moritz !
>> anyway, even with -threads 0
>> x265 [info]: Thread pool created using 24 threads
>> I see with the top command between 400% - 500% CPU on ffmpeg ...
>> I was expecting something like 2400%, or 24 ffmpeg workers at 100% each
> 
> how did you come to the conclusion that parallelization scales magically
> and with no limits? if that would be realistic just throwing enough CPU
> cores on whatever problem would be the solution - but that is not how
> computers are working

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] ffmpeg x265 on multi-cpus

2017-04-10 Thread fred fred
Dear list,
I am currently on this equipment:
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz
Stepping:              2
CPU MHz:               1244.585
BogoMIPS:              6803.58
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23
and
$ free -h
              total        used        free      shared  buff/cache   available
Mem:            62G        908M         54G        969M        7.2G         60G
Swap:           67G          0B         67G

and tried to use the latest ffmpeg compilation, according to the ffmpeg 
installation page:
$ ffmpeg
ffmpeg version N-85424-gadf9f04 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-11)
  configuration: --prefix=/ffmpeg_build --extra-cflags=-I/ffmpeg_build/include --extra-ldflags='-L/ffmpeg_build/lib -ldl' --bindir=/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
  libavutil      55. 60.100 / 55. 60.100
  libavcodec     57. 92.100 / 57. 92.100
  libavformat    57. 72.100 / 57. 72.100
  libavdevice    57.  7.100 / 57.  7.100
  libavfilter     6. 84.101 /  6. 84.101
  libswscale      4.  7.100 /  4.  7.100
  libswresample   2.  8.100 /  2.  8.100
  libpostproc    54.  6.100 / 54.  6.100
Hyper fast Audio and Video encoder

I have tried all possible ways to use all my 16 cpus (x2 threads) to encode 
with libx265, but each time it fails...
-pools, --pools, -p, --numa-* and so on are not recognised at all:
$ ffmpeg -p 16 -i Ice.avi -c:v libx265 -preset veryslow -crf 28 -c:a aac -b:a 128k Ice_x265.mp4
Unrecognized option 'p'.
Error splitting the argument list: Option not found
The Internet and my friend Google didn't provide me with any solution...
so how do I proceed?
Thank you!
Regards
Fred

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] ffmpeg

2014-12-17 Thread FRED COLLEY
Hi, every time I try to open my videos, Windows Explorer closes again. Now I 
have a message come up saying that ffmpeg is causing the problem and telling me 
to contact you for advice. I'm running Vista... thank you. Regards, Fred.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user


Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Fred Perie
2016-12-02 17:38 GMT+01:00 Joshua Grauman <j...@grauman.com>:
>
> Thanks guys, I'll look into these ideas!
>
> Josh
>
>
> On Fri, 2 Dec 2016, Paul B Mahol wrote:
>
>> On 12/2/16, Joshua Grauman <j...@grauman.com> wrote:
>>>
>>> Hello all,
>>>
>>> I am using the following command successfully to generate a screencast.
>>> The video comes from my program 'gen-vid' which outputs the raw frames
>>> with alpha channel. The resulting .avi has alpha channel as well, which is
>>> my goal. It all works great except that my computer can't handle doing it
>>> real-time. So I am wondering if there is a different vcodec I could use to
>>> achieve the same result that was less demanding on the cpu? I am willing to
>>> sacrifice some compression for more speed, but would prefer not to have to
>>> store all the raw frames without any compression. Storing the alpha
>>> channel is also a must. It is preferable if the compression is lossless.
>>> Does anyone have any other suggestions for compression other than png that
>>> may be faster? Thanks!
>>
>>
>> utvideo, huffyuv, ffvhuff
>>
>>>
>>> ./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
>>> -framerate 30 -i - -vcodec png over.avi
>>>
>>> Josh

Hi Joshua,
I regularly use ffv1 with pix_fmt bgra.
If speed is really a problem when reading/decoding, you can also consider 
having two files: one containing the alpha channel (codec ffv1, pix_fmt gray8) 
and one containing the colour using a codec like H.264.
I don't know how to do this with the ffmpeg command line; I use the ffmpeg 
libraries and a specific program.
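But as a rough, untested guess based on your original pipeline, something like 
the following might be a starting point (file names are placeholders):

# single lossless file, alpha kept in ffv1:
./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 \
  -framerate 30 -i - -c:v ffv1 -pix_fmt bgra over_ffv1.avi

# two files: colour as lossless H.264, alpha extracted to a grey ffv1 stream:
./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 \
  -framerate 30 -i - -filter_complex "[0:v]split[c][a];[a]alphaextract[alpha]" \
  -map "[c]" -c:v libx264 -preset ultrafast -qp 0 colour.mkv \
  -map "[alpha]" -c:v ffv1 alpha.mkv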

Fred



--
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] How can I stream to a Mac Mini USB output?

2019-04-13 Thread Fred Rotbart

Hi,

I have a MiniDSP U-DIO8 
<https://www.minidsp.com/products/usb-audio-interface/u-dio8> connected 
to the USB port of a 2018 Mac Mini running Kodi 18.1-RC1 on Mojave 
10.14.4. The U-DIO8 provides 8 output channels as 4xSPDIF connections 
and similarly 8 channels of input as 4xSPDIF connections.


I can capture audio input from the input channels but I do not know how 
I can stream ffmpeg output to the U-DIO8 output channels.


If I list devices this is what I get:

~#ffmpeg -f avfoundation -list_devices true -i ""
ffmpeg version 4.0 Copyright (c) 2000-2018 the FFmpeg developers
  built with clang version 4.0.1 (tags/RELEASE_401/final)
  configuration: --prefix=/Users/fred/anaconda 
--cc=x86_64-apple-darwin13.4.0-clang --disable-doc --enable-shared 
--enable-static --enable-zlib --enable-pic --enable-gpl 
--enable-version3 --disable-nonfree --enable-hardcoded-tables 
--enable-avresample --enable-libfreetype --disable-openssl 
--disable-gnutls --enable-libvpx --enable-pthreads --enable-libopus 
--enable-postproc --disable-libx264

  libavutil  56. 14.100 / 56. 14.100
  libavcodec 58. 18.100 / 58. 18.100
  libavformat    58. 12.100 / 58. 12.100
  libavdevice    58.  3.100 / 58.  3.100
  libavfilter 7. 16.100 /  7. 16.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale  5.  1.100 /  5.  1.100
  libswresample   3.  1.100 /  3.  1.100
  libpostproc    55.  1.100 / 55.  1.100
[AVFoundation input device @ 0x7f8ee6c00540] AVFoundation video devices:
[AVFoundation input device @ 0x7f8ee6c00540] [0] FaceTime HD Camera
[AVFoundation input device @ 0x7f8ee6c00540] [1] Capture screen 0
[AVFoundation input device @ 0x7f8ee6c00540] AVFoundation audio devices:
[AVFoundation input device @ 0x7f8ee6c00540] [0] USBStreamer
[AVFoundation input device @ 0x7f8ee6c00540] [1] Built-in Microphone

The USBStreamer is the U-DIO8 device.

What do I need to add to the ffmpeg command line to cause ffmpeg to 
output to USBStreamer?
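
(Untested guess, in case it helps anyone searching later: ffmpeg has an 
audiotoolbox output device on macOS, so sending decoded audio to the U-DIO8 
might look something like the line below. The index has to match the 
USBStreamer's position among the system's audio output devices, and input.ac3 
is just a placeholder source:)

ffmpeg -i input.ac3 -f audiotoolbox -audio_device_index 1 -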


Thanks,
- Fred
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Panasonic GH4 4K live stream using ffmpeg

2019-07-15 Thread Fred Perie
On Mon, Jul 15, 2019 at 1:46 AM Michael Shaffer  wrote:
>
> Can anyone recommend a good HDMI input card that will work with ffmpeg in
> Ubuntu Linux?
>
> Thanks,
> Michael Shaffer

Hello Michael,

I use a Blackmagic Intensity Pro 4K under Mint, which supports UltraHD.
You have to compile ffmpeg with the relevant options:
./configure --enable-decklink --extra-cflags="-I/usr/include/blackmagic"

This works fine for me.
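
(Once built, capture goes through the decklink input device. A rough, untested 
sketch - the exact device name will differ on your system, so list it first:)

# list the DeckLink devices ffmpeg can see
ffmpeg -f decklink -list_devices 1 -i dummy
# capture from the card and encode, assuming the name reported above
ffmpeg -f decklink -i 'Intensity Pro 4K' -c:v libx264 -preset fast -c:a aac capture.mkv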

Fred
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Compilation Video

2022-02-17 Thread Fred Kemp
I am trying to automatically create a compilation video made up of 
security cam motion videos of relatively short duration (5 to 10 
seconds) - kind of like a time-lapse only with videos instead of images.


Each video snippet would append to the same compilation video file.  The 
compilation video would be limited to X minutes long and newer videos 
would knock off the oldest videos from the compilation video once the X 
is reached.


Can this be done using FFmpeg?

Thank you in advance for your consideration.

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-22 Thread Fred Kemp
Thanks about the USB tip.  I'm trying to concatenate automatically, however.  
We have many Arlo cameras where we CAN connect to the internet.  Otherwise, 
you're right, we could just use a trail cam, but the time someone would need to 
spend going through and assembling the videos would not be worth it.

Any programmers I can contact?

Sent from my iPhone

> On Feb 22, 2022, at 7:46 PM, Adam Nielsen via ffmpeg-user 
>  wrote:
> 
> 
>> 
>> I need help in trying to develop a security camera for a remote 
>> area of a farm.  There is no internet in some places there and some of 
>> the motion videos may be long, e.g., 20 to 30 minutes.
>> 
>> So, I would like to be able to record these longer motion videos on 
>> a Raspberry Pi locally, concatenate them and then be able to somehow 
>> quickly review the compilation/concatenated video on a video player and 
>> then download the snippet(s) of video to a smart phone.
> 
> You're going to have to do a fair bit of programming/scripting to get
> this I suspect, as I don't think there's anything around that can do
> this out of the box on a Pi.
> 
> However, since you won't want to use an SD card for this (as writing
> all the video will kill the SD card very quickly) you'll probably need
> to use a USB external hard drive.  In this case you could just buy two,
> and swap them over when you visit the camera.  Then back on another
> computer you can flick through the video on the USB hard drive.
> 
>> 1. Recording motion using Motion or MotionEyes to a particular
>>directory for the day,
>> 2. Then using FFMpeg to possibly automatically concatenate the
>>videos in that directory into one bigger file, and
>> 3. Then using a video player to scroll through the video and
>>download a particular segment to my iPhone.
> 
> Have you considered using a game camera instead of a Raspberry Pi?  They
> have motion sensors built in, they'll capture video of the motion, and
> save each event as a different video file.  Then you can visit it, swap
> over the memory card, and watch all the videos on any device you can
> plug the card into (even a smartphone if you have a card reader for
> it).  They run off batteries and include infrared lights to capture
> video at night, so they are well suited for remote areas where you
> don't need a live video feed.
> 
> The only real benefit of using the Pi would be that you get
> Ethernet/WiFi on it for remote access/live video, but if you won't be
> using that because it's too far away from a WiFi network and you don't
> want to use WiFi extenders or dig a cable, using a game camera will
> probably save you a huge amount of effort.
> 
> Cheers,
> Adam.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-24 Thread Fred Kemp
Thanks again for the information!  Sorry for the late reply . . . for 
some reason, I only just got this . . .


I would guess that there is a way to automate that concatenation via 
FFMpeg, and I have a nephew who codes in Linux for a full-time job.  I 
wanted to ask the Raspberry Pi group what might be the best 
method(s) and then talk with him . . .


We are most interested in being able to group videos by a certain time 
period, e.g., the motion tripped within a particular time period.  The 
ideal setup would be looking at the concatenated video with two sliders 
- one for the beginning and one for the end - and downloading that range 
to the phone, while moving the sliders to see images/time stamps of the 
video showing where to start and stop.  The time stamps could be from the 
original videos.  Is there a video player that might be able to do that?


Or, even having one slider and being able to look at a time period after 
that.  For example, the 20 minutes of video following a certain 
identified point of the concatenated video . . .


Thanks again for your help, Adam!!  Greatly appreciated!!


On 2/22/2022 11:03 PM, Adam Nielsen via ffmpeg-user wrote:

Thanks about the USB tip.  I’m trying to  concatenate automatically,
however.   We have many Arlo cameras where we CAN connect to the
internet.  Otherwise, you’re right, we could just use a trail cam but
the time someone would need to be spending going through assembling
videos would not be worth it.

Concatenating the videos into one would be fairly straightforward, if
somewhat inconvenient (if the video is of leaves blowing you'd have to
sit through it in full instead of just skipping to the next video).
But if you wanted to do this you could just copy the files off the trail
camera and run a short ffmpeg command to join them all together into
one video.

The hard part of what you ask is using the video player to scroll
through the videos and downloading a segment to your phone.

Also, how remote is this camera?  If you already have
Internet-connected cameras that do what you want, have you considered a
long range wireless link?  Mikrotik is one of the lower priced vendors,
with some of their longer range devices apparently being able to
maintain a line-of-sight link for 40 km (25 mi) on 2.4 GHz:

   https://mikrotik.com/products/group/wireless-systems

I haven't used any of these products so they are just examples of
what's available, not a recommendation.

Cheers,
Adam.
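
(For reference, the "short ffmpeg command to join them all together" mentioned 
above would presumably be the concat demuxer; an untested sketch, with 
placeholder file names:)

# list.txt names the clips to join, one per line:
#   file 'clip-0001.mp4'
#   file 'clip-0002.mp4'
ffmpeg -f concat -safe 0 -i list.txt -c copy compilation.mp4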
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-24 Thread Fred Kemp

I will look into Zoneminder, Thanks Anatoly!


On 2/23/2022 3:49 AM, Anatoly wrote:

On Tue, 22 Feb 2022 14:14:29 -0600
Fred Kemp  wrote:


      I need help in trying to develop a security camera for a remote
area of a farm.  There is no internet in some places there and some
of the motion videos may be long, e.g., 20 to 30 minutes.

      So, I would like to be able to record these longer motion videos
on a Raspberry Pi locally, concatenate them and then be able to
somehow quickly review the compilation/concatenated video on a video
player and then download the snippet(s) of video to a smart phone.

      I'm looking for direction as far as software goes.  I am not
tech savvy but I know enough to see this project possibly going down
a number of dead ends as I learn about the limitations of various
software packages.  Any help would be GREATLY appreciated!

      I was thinking of possibly:

  1.     Recording motion using Motion or MotionEyes to a particular
 directory for the day,
  2.     Then using FFMpeg to possibly automatically concatenate the
 videos in that directory into one bigger file, and
  3.     Then using a video player to scroll through the video and
 download a particular segment to my iPhone.

      Is this possible?

     Thank you in advance for your time!

What about Zoneminder?
I'm not using it, but as far as I know it has functionality that is
close to what you are asking for. But I may be wrong.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-22 Thread Fred Kemp
    I need help in trying to develop a security camera for a remote 
area of a farm.  There is no internet in some places there and some of 
the motion videos may be long, e.g., 20 to 30 minutes.


    So, I would like to be able to record these longer motion videos on 
a Raspberry Pi locally, concatenate them and then be able to somehow 
quickly review the compilation/concatenated video on a video player and 
then download the snippet(s) of video to a smart phone.


    I'm looking for direction as far as software goes.  I am not tech 
savvy but I know enough to see this project possibly going down a number 
of dead ends as I learn about the limitations of various software 
packages.  Any help would be GREATLY appreciated!


    I was thinking of possibly:

1.     Recording motion using Motion or MotionEyes to a particular
   directory for the day,
2.     Then using FFMpeg to possibly automatically concatenate the
   videos in that directory into one bigger file, and
3.     Then using a video player to scroll through the video and
   download a particular segment to my iPhone.

    Is this possible?

   Thank you in advance for your time!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Decode ac3 from multichannel USB input

2023-11-03 Thread Fred Rotbart
Finally, I managed to extract ac3 from the Digiface USB spdif input but 
not in a way that helped me.
It took a while but eventually I learned that the spdif signal is not 
pure ac3 but ac3 wrapped as SMPTE ST 337. VLC does not recognise this 
but ffmpeg and MediaInfo do.


So if I first capture the first two Digiface channels into a file using:
`ffmpeg -f avfoundation -i :2 -filter_complex "[0:a]pan=2C|c0=c0|c1=c1" 
-y capture.wav`


and pipe this file into the VLC audio capture device via Blackhole2ch or 
any other audio device:
`ffmpeg -y -i capture_2c.wav -f spdif -f audiotoolbox 
-audio_device_index 2 -`


it works since ffmpeg will extract the ac3 and if necessary, decode 
appropriately.


However, if I try to use VLC as the capture device directly:
`ffmpeg -f avfoundation -i :2 -filter_complex "[0:a]pan=2C|c0=c0|c1=c1" 
-f spdif -f audiotoolbox -audio_device_index 2 -`


ffmpeg does not extract the ac3 but passes on the raw signal as PCM!

I tried bypassing the problem by using a pipe, such as:
`ffmpeg -f avfoundation -i :2 -filter_complex "[0:a]pan=2C|c0=c0|c1=c1" 
-f s16le - | ffmpeg -i - -f audiotoolbox -audio_device_index 2 -`


Here after the pipe, ffmpeg recognised the format but this only worked 
for about 15 seconds and then no more data was passed.


Nothing I tried solved this problem, so unless I get another bright idea 
or some help, I guess I am giving up on this for now.


On 31/10/2023 17:39, Fred Rotbart wrote:
Okay! After many hours I am making some progress (if anyone is 
interested).


This extracts the channels, recognises the ac3 and decodes it but 
there are errors thrown from time to time and the rate is not constant.


ffmpeg -f avfoundation -capture_raw_data true -i :2 -filter_complex "\
[0:a]pan=1C|c0=c0[a0];\
[0:a]pan=1C|c0=c1[a1];\
[a0][a1]amerge=inputs=2[a3]" -map '[a3]' \
-f s16le - \
| ffmpeg -loglevel debug \
-acodec ac3 -i - \
-af 'pan=5.1|c0=FL|c1=FR|c4=FC|c5=LFE|c2=SL|c3=SR' \
-ar 48000 -y output.wav

How can I clean this up and make it more stable?

Thanks
-Fred

On 28/10/2023 15:36, Fred Rotbart wrote:
Here is one of my many other attempts. It should be clear that I am a 
beginner with ffmpeg.


No matter what I try, ffmpeg seems to merge all the 32 USB channels 
into 6.


For example:

ffmpeg -ac 2 -c ac3 -loglevel debug -f avfoundation -i :2 -af 
'pan=5.1' output.wav


Part of the output:

Splitting the commandline.
Reading option '-ac' ... matched as option 'ac' (set number of audio 
channels) with argument '2'.
Reading option '-c' ... matched as option 'c' (codec name) with 
argument 'ac3'.
Reading option '-loglevel' ... matched as option 'loglevel' (set 
logging level) with argument 'debug'.
Reading option '-f' ... matched as option 'f' (force format) with 
argument 'avfoundation'.

Reading option '-i' ... matched as input url with argument ':2'.
Reading option '-af' ... matched as option 'af' (set audio filters) 
with argument 'pan=5.1'.

Reading option 'output.wav' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url :2.
Applying option ac (set number of audio channels) with argument 2.
Applying option c (codec name) with argument ac3.
Applying option f (force format) with argument avfoundation.
Successfully parsed a group of options.
Opening an input file: :2.
[avfoundation @ 0x7f96400041c0] audio device 'Digiface USB 
(24162724)' opened
For transform of length 128, inverse, mdct_float, flags: [aligned, 
out_of_place], found 3 matches:
    1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], 
factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 
544
    2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: 
[2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
    3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], 
factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], 
prio: -130976
For transform of length 64, inverse, fft_float, flags: [aligned, 
inplace, preshuf, asm_call], found 3 matches:
    1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, 
asm_call], prio: 480
    2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, 
asm_call], prio: 448
    3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, 
asm_call], prio: 416

Transform tree:
    mdct_inv_float_avx2 - type: mdct_float, len: 128, factors[2]: [2, 
any], flags: [aligned, out_of_place, inv_only]
    fft_sr_asm_float_avx2 - type: fft_float, len: 64, factor: 2, 
flags: [aligned, inplace, out_of_place, preshuf, asm_call]
For transform of length 256, inverse, mdct_float, flags: [aligned

Re: [FFmpeg-user] Decode ac3 from multichannel USB input

2023-10-31 Thread Fred Rotbart

Okay! After many hours I am making some progress (if anyone is interested).

This extracts the channels, recognises the ac3 and decodes it but there 
are errors thrown from time to time and the rate is not constant.


ffmpeg -f avfoundation -capture_raw_data true -i :2 -filter_complex "\
[0:a]pan=1C|c0=c0[a0];\
[0:a]pan=1C|c0=c1[a1];\
[a0][a1]amerge=inputs=2[a3]" -map '[a3]' \
-f s16le - \
| ffmpeg -loglevel debug \
-acodec ac3 -i - \
-af 'pan=5.1|c0=FL|c1=FR|c4=FC|c5=LFE|c2=SL|c3=SR' \
-ar 48000 -y output.wav

How can I clean this up and make it more stable?

Thanks
-Fred

On 28/10/2023 15:36, Fred Rotbart wrote:
Here is one of my many other attempts. It should be clear that I am a 
beginner with ffmpeg.


No matter what I try, ffmpeg seems to merge all the 32 USB channels 
into 6.


For example:

ffmpeg -ac 2 -c ac3 -loglevel debug -f avfoundation -i :2 -af 
'pan=5.1' output.wav


Part of the output:

Splitting the commandline.
Reading option '-ac' ... matched as option 'ac' (set number of audio 
channels) with argument '2'.
Reading option '-c' ... matched as option 'c' (codec name) with 
argument 'ac3'.
Reading option '-loglevel' ... matched as option 'loglevel' (set 
logging level) with argument 'debug'.
Reading option '-f' ... matched as option 'f' (force format) with 
argument 'avfoundation'.

Reading option '-i' ... matched as input url with argument ':2'.
Reading option '-af' ... matched as option 'af' (set audio filters) 
with argument 'pan=5.1'.

Reading option 'output.wav' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url :2.
Applying option ac (set number of audio channels) with argument 2.
Applying option c (codec name) with argument ac3.
Applying option f (force format) with argument avfoundation.
Successfully parsed a group of options.
Opening an input file: :2.
[avfoundation @ 0x7f96400041c0] audio device 'Digiface USB (24162724)' 
opened
For transform of length 128, inverse, mdct_float, flags: [aligned, 
out_of_place], found 3 matches:
    1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], 
factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
    2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: 
[2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
    3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], 
factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], 
prio: -130976
For transform of length 64, inverse, fft_float, flags: [aligned, 
inplace, preshuf, asm_call], found 3 matches:
    1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 480
    2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 448
    3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 416

Transform tree:
    mdct_inv_float_avx2 - type: mdct_float, len: 128, factors[2]: [2, 
any], flags: [aligned, out_of_place, inv_only]
    fft_sr_asm_float_avx2 - type: fft_float, len: 64, factor: 2, 
flags: [aligned, inplace, out_of_place, preshuf, asm_call]
For transform of length 256, inverse, mdct_float, flags: [aligned, 
out_of_place], found 3 matches:
    1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], 
factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
    2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: 
[2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
    3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], 
factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], 
prio: -130976
For transform of length 128, inverse, fft_float, flags: [aligned, 
inplace, preshuf, asm_call], found 3 matches:
    1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 480
    2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 448
    3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], 
factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], 
prio: 416

Transform tree:
    mdct_inv_float_avx2 - type: mdct_float, len: 256, factors[2]: [2, 
any], flags: [aligned, out_of_place, inv_only]
    fft_sr_asm_float_avx2 - type: fft_float, len: 128, factor: 2, 
flags: [aligned, inplace, out_of_place, preshuf, asm_call]

[avfoundation @ 0x7f96400041c0] All info found
Input #0, avfoundation, from ':2':
  Duration: N/A, start: 1307454.032041, bitrate: N/A
  Stream #0:0, 1, 1/100: Audio: ac3, 44100 Hz, 32 channels, fltp
Successfully opene

[FFmpeg-user] Decode ac3 from multichannel USB input

2023-10-25 Thread Fred Rotbart

Hi,

I have an RME Digiface USB as input to my Mac. This has 32 SPDIF input 
channels, of which the first two have an ac3 signal and the others are 
not used.
I have been trying to isolate and decode one of the ac3 channels to its 
six separate PCM channels but without success.


For example, as one of the many attempts, I tried:

ffmpeg -acodec ac3 -f avfoundation -capture_raw_data true -i :1 \
-map_channel 0.0.0, \
-af 'pan=5.1|c0=c0|c1=-|c2=FC|c3=LFE|c4=BL|c5=BR' \
output.wav

Can someone help me with this?

- Fred

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Decode ac3 from multichannel USB input

2023-10-28 Thread Fred Rotbart
i1 + 0 i2 + 0 i3 + 0 i4 + 
0 i5 + 0 i6 + 0 i7 + 0 i8 + 0 i9 + 0 i10 + 0 i11 + 0 i12 + 0 i13 + 0 i14 
+ 0 i15 + 0 i16 + 0 i17 + 0 i18 + 0 i19 + 0 i20 + 0 i21 + 0 i22 + 0 i23 
+ 0 i24 + 0 i25 + 0 i26 + 0 i27 + 0 i28 + 0 i29 + 0 i30 + 0 i31

[Parsed_pan_0 @ 0x7fe03c40e180] Pure channel mapping detected: M M M M M M
[auto_aresample_0 @ 0x7fe03c40d280] [SWR @ 0x7fe03c72d000] Using fltp 
internally between filters
[auto_aresample_0 @ 0x7fe03c40d280] ch:6 chl:5.1 fmt:fltp r:44100Hz -> 
ch:6 chl:5.1 fmt:s16 r:48000Hz

Output #0, wav, to 'output.wav':
  Metadata:
    ISFT    : Lavf60.3.100
  Stream #0:0, 0, 1/48000: Audio: pcm_s16le ([1][0][0][0] / 
0x0001), 48000 Hz, 5.1, s16, 4233 kb/s

    Metadata:
  encoder : Lavc60.3.100 pcm_s16le
[out#0/wav @ 0x7fe03b704840] All streams finished
[out#0/wav @ 0x7fe03b704840] Terminating muxer thread

I've read all the documentation and searched all the forums but could 
not find anything that would give me a clue.

Please help.

- Fred

On 25/10/2023 14:00, Fred Rotbart wrote:

Hi,

I have a RME Digiface USB as input to my Mac. This has 32 SPDIF input 
channels, of which the first two have an ac3 signal and the others are 
not used.
I have been trying to isolate and decode one of the ac3 channels to 
its six separate PCM channels but without success.


For example, as one of the many attempts, I tried:

ffmpeg -acodec ac3 -f avfoundation -capture_raw_data true -i :1 \
-map_channel 0.0.0, \
-af 'pan=5.1|c0=c0|c1=-|c2=FC|c3=LFE|c4=BL|c5=BR' \
output.wav

Can someone help me with this?

- Fred


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-user] Limiting frame rate on encoding videos

2016-11-05 Thread Fred M. Sloniker
I encode videos for Livestream pretty routinely, and I'm looking to 
automate the process a bit more. Specifically, classic Livestream (which 
is what I use) has a 25 FPS maximum if it is to use FLVs directly and not 
re-encode them. I can use an FFmpeg filter to set my output FPS to 25, 
but that's not what I want, because if my source video already has a 
framerate less than 25, there's no sense increasing it for no reason.


So. Is there a way to tell FFmpeg to only /decrease/ to the desired 
framerate, similar to the 'force_original_aspect_ratio=decrease' 
setting? Or a way to say, for instance, 'set FPS to the minimum of 25 
and the current frame rate'? This is the line I'm currently using in my 
Windows batch file; I believe setting the FPS /to/ 25 would be as simple 
as adding ',fps=25' inside the quoted parameters for -vf, but I haven't 
tested it.


"D:\ffmpeg\ffmpeg" -y -i "%~1" -vf 
"scale=w=480:h=360:force_original_aspect_ratio=decrease" -b:v 400k -b:a 
64k -ar 22050 -f flv "%~n1.flv"
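
(A hedged sketch, untested: recent ffmpeg builds document that the fps filter 
accepts expressions with a source_fps constant, so "the minimum of 25 and the 
current frame rate" can be written straight into the filter chain. This may not 
work on older builds; everything else below is your original line unchanged:)

"D:\ffmpeg\ffmpeg" -y -i "%~1" -vf "scale=w=480:h=360:force_original_aspect_ratio=decrease,fps='min(source_fps,25)'" -b:v 400k -b:a 64k -ar 22050 -f flv "%~n1.flv"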


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Two-pass versus CRF: a curiosity question

2016-11-11 Thread Fred M. Sloniker
I'm given to understand that, under the hood, two-pass and CRF are the 
same thing, just that two-pass calculates the necessary CRF to get the 
desired filesize. Is this correct? And if so, is it possible to see what 
CRF was selected in the two-pass encoding? I'm experimenting with 
encoding two versions of a video, one at a higher resolution than the 
other, and I'm curious if I could encode the high-res one with two-pass, 
then somehow encode the low-res one with the same CRF. (I already 
figured out that naively dividing the filesize by some amount for a 
second two-pass encode isn't going to work.)


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] workaround for wrong timings when combining still images?

2022-01-06 Thread fred konkin via ffmpeg-user
I'm trying to combine several png images into a gif or mp4 animation, but with 
frame durations set individually for each frame.  Using concat, with a command 
line such as:

ffmpeg -f concat -i concat_list.txt anim.gif

the durations specified for each png in concat_list.txt are rounded to the 
nearest 0.04s (ie 1/25 s) in anim.gif, as pointed out in this bug report:

https://trac.ffmpeg.org/ticket/9210

Can anyone suggest a workaround to specify frame timestamps more precisely than 
this 1/25 s timebase?

As an alternative I also tried to set the modification times for the png files 
to the desired intervals using touch -d, and then ran

ffmpeg -f image2 -pattern_type glob -i '*.png' -ts_from_file 2 anim.gif

but again the intervals are rounded to 0.04s in anim.gif, according to

ffprobe -show_frames anim.gif

This suggests the problem isn't just with concat.

Perhaps as a workaround a second pass over anim.gif could somehow fix the 
timings?

Thanks!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".