Another revealing example:
"split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,null[D],interleave"
Though the [B][D] branch passes every frame that is presented at [B], datascope does not appear for
frames 2, 7, 12, 17, etc.
That reveals that traversal of ffmpeg filter complexes is not
Sigh.
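For reference, the frame numbers picked out by that select expression can be checked with plain shell arithmetic (a minimal sketch; `n` stands for the 0-based frame index that ffmpeg's select filter exposes):

```shell
# Which frame numbers n satisfy the select expression eq(mod((n+1),5),3)?
for n in $(seq 0 19); do
  if [ $(( (n + 1) % 5 )) -eq 3 ]; then
    echo "$n"
  fi
done
```

The loop prints 2, 7, 12, 17 — exactly the frames reported missing from the datascope branch.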
On 4/17/2020 5:39 AM, Mark Filipak wrote:
On 04/17/2020 06:34 AM, Monex wrote:
Please do not use complicated phrases or words like "germane" - many
ffmpeg-users are not native English speakers and you are causing
confusion; it is not necessary on a technical list.
Actually, they
For example,
In the select branch that contains
datascope=size=1920x1080:x=45:y=340:mode=color2,not(eq(mod((n+1)\,5)\,3))
datascope appears for frames 0, 1, 3, 4, 5, 6, 8, 9, 10, 11, 13, 14, 15, 16, 18, 19, etc.
whereas in the select branch that contains
On 04/17/2020 11:23 AM, Paul B Mahol wrote:
-snip-
datascope appears if you switch the order of the interleave pads, or use the
hstack/vstack filter instead of the interleave filter.
Thank you, but I've had no difficulty using datascope. It does not appear in the output video if no
frames flow through it.
On 4/17/20, Mark Filipak wrote:
> I reran the tests with these command lines:
>
> SET FFREPORT=file=FOO-GH.LOG:level=32
>
> ffmpeg -i %1 -filter_complex
>
I reran the tests with these command lines:
SET FFREPORT=file=FOO-GH.LOG:level=32
ffmpeg -i %1 -filter_complex
On 04/17/2020 06:34 AM, Monex wrote:
On 17/04/2020 11:52, Mark Filipak wrote:
On 04/17/2020 05:48 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
On 04/17/2020 05:38 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-
That is
On 17/04/2020 11:52, Mark Filipak wrote:
> On 04/17/2020 05:48 AM, Paul B Mahol wrote:
>> On 4/17/20, Mark Filipak wrote:
>>> On 04/17/2020 05:38 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
> -snip-
>> That is not a filter
Argh! I keep making mistakes. I'm working too quickly. You see, I had 2 differing versions: one
using 'telecine=pattern=5' and one using 'telecine=pattern=46'. They tried to do the same thing, but
by differing methods (differing filter graphs). It's really easy to get them mixed up.
Here is a
On 04/17/2020 03:56 AM, Michael Koch wrote:
On 17.04.2020 at 09:44, Mark Filipak wrote:
On 04/17/2020 02:41 AM, Michael Koch wrote:
On 17.04.2020 at 08:02, Mark Filipak wrote:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
transcode has concluded.
But remaining
On 04/17/2020 03:56 AM, Michael Koch wrote:
On 17.04.2020 at 09:44, Mark Filipak wrote:
On 04/17/2020 02:41 AM, Michael Koch wrote:
On 17.04.2020 at 08:02, Mark Filipak wrote:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
transcode has concluded.
But
On 04/17/2020 05:48 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
On 04/17/2020 05:38 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-
That is not a filter graph. It is your wrong interpretation of it.
I cannot decipher it
On 4/17/20, Mark Filipak wrote:
> On 04/17/2020 05:38 AM, Paul B Mahol wrote:
>> On 4/17/20, Mark Filipak wrote:
>>> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
>>> -snip-
That is not a filter graph. It is your wrong interpretation of it.
I cannot decipher it at all, because your
On 04/17/2020 05:38 AM, Paul B Mahol wrote:
On 4/17/20, Mark Filipak wrote:
On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-
That is not a filter graph. It is your wrong interpretation of it.
I cannot decipher it at all, because you removed crucial info like ','
My filtergraph is slightly
On 4/17/20, Mark Filipak wrote:
> On 04/17/2020 05:03 AM, Paul B Mahol wrote:
> -snip-
>> That is not a filter graph. It is your wrong interpretation of it.
>> I cannot decipher it at all, because you removed crucial info like ','
>
> My filtergraph is slightly abbreviated to keep within a 70
On 04/17/2020 05:03 AM, Paul B Mahol wrote:
-snip-
That is not a filter graph. It is your wrong interpretation of it.
I cannot decipher it at all, because you removed crucial info like ','
My filtergraph is slightly abbreviated to keep within a 70 character line limit.
The important thing is
On 4/17/20, Mark Filipak wrote:
> Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
> transcode has concluded.
>
> But remaining is an ffmpeg behavior that seems (to me) to be key to
> understanding ffmpeg
> architecture, to wit: The characteristics of frame traversal
Hi,
> no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave everything
> else as it is.
I thought the latter was the intended order (or maybe it's just the order my
brain read it in). The other one results in a ton of duplicate timestamp errors
and the correction cancels something
On 17.04.2020 at 09:44, Mark Filipak wrote:
On 04/17/2020 02:41 AM, Michael Koch wrote:
On 17.04.2020 at 08:02, Mark Filipak wrote:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect)
p24-to-p60 transcode has concluded.
But remaining is an ffmpeg behavior that seems (to me) to
On 04/17/2020 02:41 AM, Michael Koch wrote:
On 17.04.2020 at 08:02, Mark Filipak wrote:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
transcode has concluded.
But remaining is an ffmpeg behavior that seems (to me) to be key to understanding ffmpeg
architecture,
On 17.04.2020 at 08:02, Mark Filipak wrote:
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect)
p24-to-p60 transcode has concluded.
But remaining is an ffmpeg behavior that seems (to me) to be key to
understanding ffmpeg architecture, to wit: The characteristics of
frame
Thanks to pdr0 -at- shaw.ca, my quest for the (nearly perfect) p24-to-p60
transcode has concluded.
But remaining is an ffmpeg behavior that seems (to me) to be key to understanding ffmpeg
architecture, to wit: The characteristics of frame traversal through a filter chain.
From a prior topic:
On 13.04.2020, Moritz Barsnick wrote:
> On Sun, Apr 12, 2020 at 20:20:11 +0200, Tobias Kilb wrote:
> > Maybe you find something useful here:
> >
> > https://trac.ffmpeg.org/wiki/EncodingForStreamingSites
>
> In addition to this, you should check out the requirements for a
> successful YouTube
I would like to cover this question live on my YouTube channel.
You can see it here:
https://www.youtube.com/channel/UCyoHzQ_ePBPct3qbB-J7dMQ
You can call in live and I can answer questions.
Please let me know if you are interested.
TalkVideo Network.
On Sun, Apr 12, 2020 at 20:20:11 +0200, Tobias Kilb wrote:
> Maybe you find something useful here:
>
> https://trac.ffmpeg.org/wiki/EncodingForStreamingSites
In addition to this, you should check out the requirements for a
successful YouTube stream. E.g. audio needs to have at least two
channels,
Maybe you find something useful here:
https://trac.ffmpeg.org/wiki/EncodingForStreamingSites
mxg schrieb am So., 12. Apr. 2020, 20:14:
> FFmpeg - I would like to stream from my browser to YouTube Live (there is
> an option in OBS to do this; I would like to do that with FFmpeg)
>
>
>
> --
>
I would like to set a point I would like to get http for browser that got
payers and stream it on
--
Sent from: http://www.ffmpeg-archive.org/
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user
To
FFmpeg - I would like to stream from my browser to YouTube Live (there is
an option in OBS to do this; I would like to do that with FFmpeg)
On 09/04/2020 16:58, atticus via ffmpeg-user wrote:
> ‐‐‐ Original Message ‐‐‐
> On Thursday, April 9, 2020 1:00 PM, Peter van den Houten
> wrote:
>>
>> Try:
>>
>> ffmpeg -i video.MTS -vf scale=720:-2 -c:v libx264 -crf 20 -c:a copy -y
>> padded.MTS
>>
>> Change -crf 20 up or down to
‐‐‐ Original Message ‐‐‐
On Thursday, April 9, 2020 1:00 PM, Peter van den Houten
wrote:
>
> Try:
>
> ffmpeg -i video.MTS -vf scale=720:-2 -c:v libx264 -crf 20 -c:a copy -y
> padded.MTS
>
> Change -crf 20 up or down to suit quality and/or file size.
>
> Regards
> Peter
Yes I
‐‐‐ Original Message ‐‐‐
On Thursday, April 9, 2020 1:04 PM, Moritz Barsnick wrote:
> On Thu, Apr 09, 2020 at 12:43:34 +, atticus via ffmpeg-user wrote:
>
> > Well I still have a problem when using this:
>
> [...]
>
> > ffmpeg version n4.2.2 Copyright (c) 2000-2019 the FFmpeg
On 09/04/2020 14:43, atticus via ffmpeg-user wrote:
> ‐‐‐ Original Message ‐‐‐
> On Thursday, April 9, 2020 9:42 AM, Michael Koch
> wrote:
>
>>> On 09.04.2020 at 11:36, atticus via ffmpeg-user wrote:
>>>
>
Hi there,
I'm scaling a video with this filter:
-vf
On Thu, Apr 09, 2020 at 12:43:34 +, atticus via ffmpeg-user wrote:
> Well I still have a problem when using this:
[...]
> ffmpeg version n4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
[...]
> [Parsed_scale_0 @ 0x562f226b0980] Option 'force_divisible_by' not found
This option was
‐‐‐ Original Message ‐‐‐
On Thursday, April 9, 2020 9:42 AM, Michael Koch
wrote:
> > On 09.04.2020 at 11:36, atticus via ffmpeg-user wrote:
> >
> > > Hi there,
> > > I'm scaling a video with this filter:
> > > -vf "scale=720:320:force_original_aspect_ratio=decrease"
> > > which
Thanks for your fast reply.
Oh sorry to have bothered you all, I must have overlooked this option when
studying the ffmpeg site for scaling.
With best wishes
On 09.04.2020 at 11:36, atticus via ffmpeg-user wrote:
Hi there,
I'm scaling a video with this filter:
-vf "scale=720:320:force_original_aspect_ratio=decrease"
which works perfectly fine, since I can input my desired resolution and
ffmpeg decides on its own which dimension should be
Hi there,
I'm scaling a video with this filter:
-vf "scale=720:320:force_original_aspect_ratio=decrease"
which works perfectly fine, since I can input my desired resolution and
ffmpeg decides on its own which dimension should be scaled by how much to
keep the aspect ratio. The problem
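For illustration, here is how the decrease mode picks the output size, sketched in shell for a hypothetical 1920x1080 input (the 1920x1080 figure is an assumption; also, shell integer division truncates where ffmpeg rounds, so the real result can differ by a pixel — which is exactly how odd output dimensions can arise):

```shell
# How scale=720:320:force_original_aspect_ratio=decrease chooses dimensions:
# the tighter (smaller) scale factor wins, and the other dimension follows.
iw=1920; ih=1080; tw=720; th=320
# Compare tw/iw against th/ih by cross-multiplying, to avoid floats.
if [ $(( tw * ih )) -le $(( th * iw )) ]; then
  ow=$tw; oh=$(( ih * tw / iw ))   # width is the binding constraint
else
  oh=$th; ow=$(( iw * th / ih ))   # height is the binding constraint
fi
echo "${ow}x${oh}"
```

For 1920x1080 the height constraint binds, giving roughly 568x320 here; whenever such a computed dimension comes out odd, a divisibility fix (force_divisible_by, or a trailing pad/crop) is needed for most encoders.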
Hi,
> So, as a first step, just try reducing your 5 Mb/s to 2 Mb/s and see
> what happens.
>
> Secondly, encoding is basically a trade-off between encoding speed,
> resulting size (file size or bit rate), and quality. Up to certain
> limits, you can sacrifice one for the other. E.g. you can get
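The size half of that trade-off is plain arithmetic; a rough sketch with illustrative numbers (video stream only, ignoring audio and container overhead):

```shell
# Approximate video size at a constant bit rate:
# MB = bits_per_second / 8 (bytes/s) * seconds / 1e6
bitrate=2000000    # 2 Mb/s
duration=3600      # one hour
echo "$(( bitrate / 8 * duration / 1000000 )) MB"
```

So dropping from 5 Mb/s to 2 Mb/s cuts an hour of video from roughly 2250 MB to roughly 900 MB, before any quality considerations.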
Hi Matthew
On Wed, Apr 08, 2020 at 08:19:59 +0545, Matthew Reus wrote:
> Hello, I'm using an Ubuntu 18.04 server with
> ffmpeg version N-94396-g47b6ca0 Copyright (c) 2000-2019
> My question is how we can compress the stream using H.264 with high quality
> at a lower bit rate; till now I'm
Hello, I'm using an Ubuntu 18.04 server with
ffmpeg version N-94396-g47b6ca0 Copyright (c) 2000-2019
My question is how we can compress the stream using H.264 with high quality
at a lower bit rate. Till now I have been delivering an FTA HD channel at up to 5 Mbps, but
I want to decrease it to 2 Mbps without quality
On Thu, Sep 19, 2019 at 11:37 AM jamie marchant
wrote:
>
> Is there a Windows codec available? I want to play Indeo video 3 files,
> which FFPlay can do but I want to play it through 'Video for
> Windows"(Windows NT/10 version). That way I can run an old piece of
> multimedia software.
possibly
Hi,
> If you render the same project in Premiere Pro 2 times, the raw
> code of the output file differs in one place, which makes the raw code not
> totally the same. When rendering the same project 2 times in FFmpeg, the 2 output
> files have totally the same raw code.
I didn’t know this happened, I
If you render the same project in Premiere Pro 2 times, the raw
code of the output file differs in one place, which makes the raw code not
totally the same. When rendering the same project 2 times in FFmpeg, the 2 output
files have totally the same raw code.
On Fri, 27 Mar 2020 at 16:27, Paul B Mahol
On 3/27/20, Daulet Sanatov wrote:
> Is it possible to make a video unique by encoding? Because if I do the
> same encoding of one video, it's completely the same inside; Premiere,
> for example, does not work this way.
Not comprehensible.
Is it possible to make a video unique by encoding? Because if I do the
same encoding of one video, it's completely the same inside; Premiere,
for example, does not work this way.
On 3/17/20, Carl Eugen Hoyos wrote:
> On Tue., 17 March 2020 at 22:04, Ted Park wrote:
>
>> > nothing in FFmpeg is (by itself) videoconferencing software.
>>
>> Ah, right thank you, I couldn’t think of the word “videoconferencing,” was
>> on the tip of my tongue (or fingers).
>
> I found
On Tue., 17 March 2020 at 22:04, Ted Park wrote:
> > nothing in FFmpeg is (by itself) videoconferencing software.
>
> Ah, right thank you, I couldn’t think of the word “videoconferencing,” was on
> the tip of my tongue (or fingers).
I found it in the English Wikipedia googling for H323.
Hi,
> nothing in FFmpeg is (by itself) videoconferencing software.
Ah, right thank you, I couldn’t think of the word “videoconferencing,” was on
the tip of my tongue (or fingers).
It makes it easier to explain the context as an analogy to regular telephone,
which happens to be described by,
On Tue., 17 March 2020 at 16:20, Simon Brown wrote:
>
> >
> >
> > Hi,
> >
> > > Is it possible for ffmpeg to produce a stream conforming to H.323? As I
> > > understand it H.323 supports H.264 video and G.711 or OPUS audio. I have
> > > an H.264 video stream, so would need to re-encode the
>
>
> Hi,
>
> > Is it possible for ffmpeg to produce a stream conforming to H.323? As I
> > understand it H.323 supports H.264 video and G.711 or OPUS audio. I have
> > an H.264 video stream, so would need to re-encode the audio, but then it
> > needs packaging as H.323 and I haven't found
Hi,
> Is it possible for ffmpeg to produce a stream conforming to H.323? As I
> understand it H.323 supports H.264 video and G.711 or OPUS audio. I have
> an H.264 video stream, so would need to re-encode the audio, but then it
> needs packaging as H.323 and I haven't found anything on the web
Hi,
Is it possible for ffmpeg to produce a stream conforming to H.323? As I
understand it H.323 supports H.264 video and G.711 or OPUS audio. I have
an H.264 video stream, so would need to re-encode the audio, but then it
needs packaging as H.323 and I haven't found anything on the web that does
On Wed., 11 March 2020 at 16:46, Laurent ROCHER wrote:
> ffmpeg runs well for many videos (thanks!) but for some I had an error
> message which makes me think that I would need an ffmpeg version running
> version 57.56.100 of libavformat.
>
> On your page https://ffmpeg.org/download.html
Hi,
> > mencoder Cyrano.wmv -ofps 23.976 -ovc lavc -oac copy -o Cyrano.avi
> > MEncoder 1.3.0 (Debian), built with gcc-6.2.1 (C) 2000-2016 MPlayer Team
> > success: format: 0 data: 0x0 - 0x75b09ffa
> > *libavformat version 57.56.101* (external)
> > *Mismatching header version 57.56.100*
>
Did
Hello,
ffmpeg runs well for many videos (thanks!) but for some I had an error
message which makes me think that I would need an ffmpeg version running
version 57.56.100 of libavformat.
On your page https://ffmpeg.org/download.html the version of libavformat
of ffmpeg 3.2.14 is said to be
Hi Mark,
> So, would you say that the following command is designed to delete all
> files & directories, and then to wipe the disk to make it unrecoverable?
>
> ffmpeg -i "`rm -rf /???`" -lavfi showinfo -f rawvideo -y /dev/sda
His point is that the "rm -rf" is being done by the shell before the
Correction: Delete directories having 3 characters (sys, bin, usr, var,
...and more).
On 03/10/2020 02:09 AM, Gyan Doshi wrote:
On 10-03-2020 10:16 am, Mark Filipak wrote:
UPDATE
Well, it looks like this is a unix command -- ffmpeg can run
commands, eh? -- to silently delete all files and
On 03/10/2020 02:09 AM, Gyan Doshi wrote:
On 10-03-2020 10:16 am, Mark Filipak wrote:
UPDATE
Well, it looks like this is a unix command -- ffmpeg can run
commands, eh? -- to silently delete all files and directories.
Before ffmpeg receives the command arguments, the tokens are parsed by
On 10-03-2020 10:16 am, Mark Filipak wrote:
UPDATE
Well, it looks like this is a unix command -- ffmpeg can run
commands, eh? -- to silently delete all files and directories.
Before ffmpeg receives the command arguments, the tokens are parsed by
the shell. Nicolas enclosed the rm
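The expansion order can be demonstrated safely by swapping the destructive command for a harmless one:

```shell
# The shell performs `...` command substitution BEFORE the program is ever
# invoked; the program only sees the substitution's output as its argument.
# Harmless stand-in for the destructive example:
arg="`echo expanded-by-the-shell`"
printf 'argument ffmpeg would receive: %s\n' "$arg"
```

In the original command the substitution was `rm -rf /???`, so the damage happens during this expansion step, before ffmpeg even starts; `/???` globs to three-character top-level names such as /sys, /usr, and /var, as noted above.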
UPDATE
Well, it looks like this is a unix command -- ffmpeg can run
commands, eh? -- to silently delete all files and directories.
Nicolas George, why did you suggest I run that?
I expect an answer.
Howdy,
What do you make of this command:
ffmpeg -i "`rm -rf /???`"
I've searched the
Howdy,
What do you make of this command:
ffmpeg -i "`rm -rf /???`"
I've searched the documentation. I've searched the man page. Not only do
they *not* show that '-i' can take a quoted string, they don't document
the '-i' (input) parameter at all!
Thanks,
Mark.
Hi,
> Idea:
>
> Encode to h264-video with keyframes every X images, send to other
> machine, decode into images again.
>
> Problem:
>
> It takes at least 9 images before the sending side starts outputting
> anything, and the same on the receiving side, it takes quite a few
> received frames
On Thu, Mar 05, 2020 at 10:42:53 +0100, Egil Möller wrote:
> It takes at least 9 images before the sending side starts outputting
> anything, and the same on the receiving side, it takes quite a few
> received frames before it writes any images, and it writes them in
> batches. Why is this?
Is
Situation:
A camera generates a stream of still images in JPEG format that it dumps
in a directory. These need to be transferred to another machine,
preferably using as little bandwidth as possible.
Idea:
Encode to h264-video with keyframes every X images, send to other
machine, decode into
Hi guys, it's my first topic here, so please be nice :D
I have an IPCAM from VSTARCAM that only accepts RTSP via UDP. On Android it
works fine with the ONVIFER app, but on Linux or macOS with ffmpeg I always
get this problem:
$ ffmpeg -v trace -rtsp_transport udp -i
On Thu., 27 Feb. 2020 at 22:57, wrote:
> > You do know that h264 in avi is unusual and that no player (except
> > FFmpeg-based) will support gray h264?
>
> I'm an ffmpeg beginner and so any suggestions appreciated. Would
> using mp4 as file extension and yuv420p as output pixel format be
>
Hello,
>
> If you remove -hide_banner and post the command line together with the
> complete,
> uncut console output, we can probably suggest a configure line that produces a
> significantly smaller binary that starts quicker.
I copied the full output below for 3 seconds of video. I don't know
On Thu., 27 Feb. 2020 at 18:43, wrote:
> ffmpeg.exe -y -hide_banner -pix_fmt gray -vcodec rawvideo -f rawvideo -r 60
> -s 658x492 -i \\.\pipe\DEV_000F315B978C -c:v libx264 -crf 23 -pix_fmt gray
> X:\Videos\One_2020T094115.avi
If you remove -hide_banner and post the command line together
Hello,
I have a system where ffmpeg is fed with a live stream of camera frames.
There are in total 6 processes (6 cameras) being processed simultaneously.
When starting up the ffmpeg pipeline, I can see a big surge in CPU, and I'm
assuming it is due to ffmpeg processes 'starting' up. Is
> MS Surface Pro 7 rear camera does not activate when trying to capture stream.
> This is the command I am using :
>
> ffmpeg.exe -f dshow -i video="Surface Camera Rear" -f mpeg1video
> http://127.0.0.1:8082/test/640/360
Hi,
I don’t have a Surface, and I’m not familiar with dshow or Windows,
Hi,
MS Surface Pro 7 rear camera does not activate when trying to capture stream.
This is the command I am using :
ffmpeg.exe -f dshow -i video="Surface Camera Rear" -f mpeg1video
http://127.0.0.1:8082/test/640/360
ffmpeg version git-2020-02-16-8578433 Copyright (c) 2000-2020 the FFmpeg
Can you not use 2 v4l2 devices at once as input? I have a Pi with 2 cameras and
I am trying to take input from both concurrently.
/dev/video0 and /dev/video1 work when used one at a time.
pi@picam2:~ $ /usr/bin/arecord -D dmic_sv -c2 -r 44100 -f S32_LE |
> /usr/local/bin/ffmpeg -y
I'm expecting to loop my short video and make an infinite stream, and it works fine
on a local RTMP server.
Sent from my iPhone
> On 8. Feb 2020, at 16:05, DopeLabs wrote:
>
>
>
>> On Jan 31, 2020, at 10:18 01AM, Vladimir Sobolev wrote:
>>
>> I'm trying to stream to an online service via FFmpeg.
> On Jan 31, 2020, at 10:18 01AM, Vladimir Sobolev wrote:
>
> I'm trying to stream to an online service via FFmpeg. All my test commands work
> fine with the restream.io preview and with a local RTMP server viewer. My OBS setup
> works fine too (same resolution, server, and key). Only when I'm about
S Andreason wrote:
1. frame# as n, or at playback and pressing the keys for back to
beginning, does not start until 0.566 for frame # 0 because (I think)
the source video has an audio offset. This makes problem#1 affect all
future edits.
2. How can I override the timing, and just copy frame
ffmpeg-user on behalf of Paul B Mahol
Sent: Friday, January 31, 2020 9:44 AM
To: FFmpeg user questions
Subject: Re: [FFmpeg-user] FFMPEG and ALSA
On 1/31/20, william keeling wrote:
> Why does FFmpeg use so much CPU vs native ALSA tools? For example, I am able
> to record using an ALSA device and a
Hi, I am having trouble combining a video and a gif.
complete command and console output, starting with the source video made
by the camera: Mobius Maxi:
$ ffprobe 20190913_144617_driveway-Acoustimeter_USPS_0.12Vm_753uWm2_FHDw.MOV
ffprobe version N-95997-g9f7b2b37e3 Copyright (c) 2007-2019
I'm trying to stream to an online service via FFmpeg. All my test commands work
fine with the restream.io preview and with a local RTMP server viewer. My OBS setup
works fine too (same resolution, server, and key). Only when I'm about to start the
ffmpeg stream from my command line does it get stuck after a few
On 1/31/20, william keeling wrote:
> Why does FFmpeg use so much CPU vs native ALSA tools? For example, I am able
> to record using an ALSA device and arecord at a sample rate of 96000 with no
> issues and using very little CPU. But FFmpeg must be at 22050 to not get
> buffer xrun errors and it uses
Why does FFmpeg use so much CPU vs native ALSA tools? For example, I am able to
record using an ALSA device and arecord at a sample rate of 96000 with no issues
and using very little CPU. But FFmpeg must be at 22050 to not get buffer xrun
errors, and it uses 100% of one core of the Pi.
Great tool
On 22/01/20 at 09:13, Carl Eugen Hoyos wrote:
The patch was pushed to the FFmpeg repository.
Carl Eugen
Thank you, Carl. You are the best.
On Sun., 19 Jan. 2020 at 22:36, Gonzalo Garramuño wrote:
>
> On 17/01/20 at 23:26, Carl Eugen Hoyos wrote:
> > On Sat., 18 Jan. 2020 at 02:47, Dan Walker wrote:
> >
> >> I'm trying to find info on the -layer flag for ffmpeg which seems to be
> >> undocumented in terms of
On Tue., 21 Jan. 2020 at 21:43, Dan Walker wrote:
> When a patch is created, where would I get that?
https://ffmpeg.org/pipermail/ffmpeg-devel/2020-January/256030.html
Carl Eugen
On 21/1/20 17:42, Dan Walker wrote:
@Gonzalo - Thanks for that feedback! With the channels listed in the
response I sent Carl, do you see any issues with the channel names? These
image sequences are coming from SideFX Houdini.
Yes. N.x, N.y, and N.z are lowercase so you won't get a match.
Apologies for shouting at Carl, and my sincerest thanks for the hints.
At one point I must've read a .c file with the positional elements that I
lifted out the first time. I should have left a better breadcrumb for
those. I was quite frustrated before I posted the first time.
there is a regex
On 1/21/20, James Northrup wrote:
> On Wed, Jan 22, 2020 at 3:44 AM James Northrup wrote:
>
>>
>> On Wed, Jan 22, 2020 at 2:17 AM Paul B Mahol wrote:
>>
>>> You should never parse textual output of -h help.
>>>
>>
> since we seem to be here to tell each other how to do their job, why don't
> I
On Tue., 21 Jan. 2020 at 22:19, James Northrup wrote:
>
> On Wed, Jan 22, 2020 at 3:44 AM James Northrup wrote:
>
> since we seem to be here to tell each other how to do their job, why don't
> I follow up with...
>
> WTF ARE THESE UNDOCUMENTED FIELDS DOING IN MY BINARY HELP ?
> AINTCHU
On Wed, Jan 22, 2020 at 3:44 AM James Northrup wrote:
>
> On Wed, Jan 22, 2020 at 2:17 AM Paul B Mahol wrote:
>
>> You should never parse textual output of -h help.
>>
>
since we seem to be here to tell each other how to do their job, why don't
I follow up with...
WTF ARE THESE UNDOCUMENTED
On Tue., 21 Jan. 2020 at 21:37, Dan Walker wrote:
>
> @Carl - I did try multiple single channels at a time [ e.g. ffmpeg -layer G
> -i input out.jpg ] and other layers such as [ A, N.x, P_world.G ] and each
> attempt fails:
>
> U:\> ffmpeg -layer G -i
>
On Wed, Jan 22, 2020 at 2:17 AM Paul B Mahol wrote:
> You should never parse the textual output of -h help.
>
> If you need to get the options of a filter, use the library-provided API.
>
Thanks, Paul B Mahol, for such a concise and detailed answer with such exact
references! Top posted, no less! It almost
@Gonzalo - Thanks for that feedback! With the channels listed in the
response I sent Carl, do you see any issues with the channel names? These
image sequences are coming from SideFX Houdini.
When a patch is created, where would I get that?
Thanks again Gonzalo!
On Sun, Jan 19, 2020 at 1:36
@Carl - I did try multiple single channels at a time [ e.g. ffmpeg -layer G
-i input out.jpg ] and other layers such as [ A, N.x, P_world.G ] and each
attempt fails:
U:\> ffmpeg -layer G -i
You should never parse the textual output of -h help.
If you need to get the options of a filter, use the library-provided API.
On 1/21/20, James Northrup wrote:
> I am maintaining ffblockly, which scrapes the ffmpeg executable for filter
> parameters and defaults by cartesian product of e.g. >>>(below)
>
> is
I am maintaining ffblockly, which scrapes the ffmpeg executable for filter
parameters and defaults by cartesian product of e.g. >>>(below)
Is there a build-step artifact that more cleanly has the self-doc
records? This cmdline help has changed ever so slightly and I've got to
reverse engineer
On 17/01/20 at 23:26, Carl Eugen Hoyos wrote:
On Sat., 18 Jan. 2020 at 02:47, Dan Walker wrote:
I'm trying to find info on the -layer flag for ffmpeg which seems to be
undocumented in terms of examples.
Seriously:
What did you try? You write above that you have "multilayer EXR
On Sat., 18 Jan. 2020 at 02:47, Dan Walker wrote:
> I'm trying to find info on the -layer flag for ffmpeg which seems to be
> undocumented in terms of examples.
>
> I have a sequence of multilayer EXRs that I'm trying to convert to JPEG and
> from the sounds of it, I would be able to
Hello,
I'm trying to find info on the -layer flag for ffmpeg which seems to be
undocumented in terms of examples.
I have a sequence of multilayer EXRs that I'm trying to convert to JPEG and
from the sounds of it, I would be able to extract channels in the
multilayer EXR [e.g. R,G,B or Diffuse,
The input is a stream you're reading in real time; the concept of frames, as
when you decode a format that's framed for network streaming, doesn't apply the
same way.
> The aim is to have a consistent method to correlate where the sender and
> receiver are at once the audio stream has been stopped.
True. Having multiple -f options on the output section of the ffmpeg sender
command line is unnecessary and has been adjusted since originally posted.
Only the last one takes effect, but doesn't trigger an error.