>While this method is unorthodox and makes helping you needlessly
>difficult,
Sorry, I'm still learning the ropes... I found //streams.videolan.org/upload/
and placed the phasebtop1ntscdvd.VOB file there. Is this proper? I did
not have a Trac ticket number to fill in.
>the bigger issue is that
2018-08-08 0:21 GMT+02:00, Bob DeCarlo:
>>I sent out a Google Drive link. Let me know if this is OK.
>
> Just wondering if the video files were received or if I need to use
> a different method to submit them?
While this method is unorthodox and makes helping you needlessly
difficult, the
>All,
>I sent out a Google Drive link. Let me know if this is OK.
Just wondering if the video files were received or if I need to use a different
method to submit them?
Thanks,
Bob
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
2018-08-06 3:41 GMT+02:00, Elliott Balsley:
> I discovered an option in the picamera Python module to include the framerate in
> the SPS headers, so now I will use that instead of raspivid. Plus it’s more
> flexible, allowing for more advanced text overlays.
> Now ffplay detects it correctly as 30
> This is just duplicating the latency.
Ok, understood, so I will avoid piping.
>
> As said, the first thing to check is the actual framerate:
> If it really is 30fps, this has to be fixed first.
I did change the stream, so now ffmpeg detects it correctly at 30 tbr. Did you
see my last
2018-08-05 3:50 GMT+02:00, Elliott Balsley:
> $ nc 192.168.3.172 8080 | /usr/local/bin/ffmpeg -r 5 -i pipe:0
> -analyzeduration 0 -probesize 32 -f h264 -an -c:v copy pipe:1 |
> /usr/local/bin/ffplay -i pipe:0
This is just duplicating the latency.
As said, the first thing to check is the actual
>
> You are missing the question.
>
> I am not asking for help to diagnose that simple experiment. I am asking
> if others have had experience with the top-level goal: capturing hours
> of 4k video, reliably and with low loss, using ffmpeg.
>
> I can use whatever capture card, operating system,
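For the top-level goal of capturing hours of video with low loss, one common pattern is FFmpeg's segment muxer, which splits the recording into independent files so a crash costs at most one segment. A minimal sketch, assuming a hypothetical V4L2 device at /dev/video0 delivering 4K at 30 fps (the device, encoder, and settings depend entirely on the actual capture card):

```shell
# Record 4K from a hypothetical V4L2 device into 10-minute segments.
# -reset_timestamps 1 makes each segment start at t=0 so it plays standalone.
ffmpeg -f v4l2 -framerate 30 -video_size 3840x2160 -i /dev/video0 \
       -c:v libx264 -preset ultrafast -crf 23 \
       -f segment -segment_time 600 -reset_timestamps 1 \
       capture_%04d.mkv
```

In practice a hardware encoder (h264_nvenc, h264_videotoolbox, etc.) is usually needed to keep up with 4K in real time.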
On Mon, 6 Aug 2018 at 23:52, Joel Roth wrote:
> Morten W. Petersen wrote:
> >
> > I think this is interesting, and it shouldn't be difficult to write a
> > frontend to it.
> >
> > If you post the code on GitHub, it would make it easier for someone to
> > write a GUI.
>
> Okay, here it is,
https://trac.ffmpeg.org/wiki/StreamingGuide#Latency
may be useful to you...
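Applying that guide's advice, a direct low-latency read of the camera's stream (no nc pipe) might look like the following sketch; the host and port are taken from the earlier command and it assumes raspivid/picamera is serving raw H.264 over TCP:

```shell
# Read the raw H.264 stream directly; nobuffer/low_delay and a tiny
# probe window cut startup and playback latency.
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 \
       -f h264 -framerate 30 tcp://192.168.3.172:8080
```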
On Sun, Aug 5, 2018 at 7:41 PM, Elliott Balsley wrote:
> I discovered an option in the picamera Python module to include the framerate in
> the SPS headers, so now I will use that instead of raspivid. Plus it’s more
>
On Wed, Jul 25, 2018 at 2:34 PM, Jim DeLaHunt wrote:
>
> On 2018-07-23 10:25, Rafael Lima wrote:
>>>
>>> They set up a 4k camera on top of a building (have electricity, but
>>> limited internet),
>>
>>
>> 4K on limited internet? Maybe it's just me, but those two things don't
>> fit together.
>>
Hello,
I want to concatenate 3 videos with a fade between the last two videos.
However, the output gets stuck after X frames.
Here is the command used:
ffmpeg -i "firstClip.mp4" -i "secondClip.mp4" -i "thirdClip.mp4" -y
-filter_complex
"[1:v]trim=start=0:end=3,setpts=PTS-STARTPTS[1_clip_1];
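For comparison, here is one complete graph that fades the second clip out and the third in, then concatenates all three. It is only a sketch: it assumes all clips share resolution, frame rate, and pixel format, that secondClip.mp4 is 10 seconds long (the fade-out must start at a known offset), and it drops audio for brevity:

```shell
ffmpeg -i "firstClip.mp4" -i "secondClip.mp4" -i "thirdClip.mp4" -y \
-filter_complex \
"[1:v]fade=t=out:st=9:d=1[v1]; \
 [2:v]fade=t=in:st=0:d=1[v2]; \
 [0:v][v1][v2]concat=n=3:v=1:a=0[v]" \
-map "[v]" output.mp4
```

Note this is a fade through black; a true crossfade (overlapping frames) needs the clips overlaid with alpha, e.g. fade=alpha=1 plus the overlay filter, which is more involved.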
On 07-08-2018 01:24 PM, Michael Koch wrote:
That's correct, at the beginning there is only one image, temp.jpg.
ffmpeg's image sequence demuxer, when not reading from a pipe, will
probe for sequence size at initialization, so if an image doesn't exist
at that stage, it won't be
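One way around that at-open probe is to feed the images over a pipe with the image2pipe demuxer, which reads frames as they arrive instead of scanning the whole sequence up front. A sketch, assuming JPEG inputs:

```shell
# image2pipe reads frame by frame from stdin, so files created after
# startup are not a problem.
cat temp*.jpg | ffmpeg -f image2pipe -framerate 25 -i - -c:v libx264 out.mp4
```

Here cat stands in for whatever producer writes JPEGs to stdout as they are generated.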
On 2018-08-07 at 09:54, Michael Koch wrote:
F:\xxx>c:/ffmpeg/ffmpeg -i temp%4d.jpg -vf lutyuv="y=0.9*val"
-frames 10 -q:v 1 -start_number 1 temp%4d.jpg
FFmpeg does not edit files in-place. Input and output have to be
different.
Let's assume I have a complex filter with several inputs.
F:\xxx>c:/ffmpeg/ffmpeg -i temp%4d.jpg -vf lutyuv="y=0.9*val"
-frames 10 -q:v 1 -start_number 1 temp%4d.jpg
FFmpeg does not edit files in-place. Input and output have to be
different.
Input and output are different. Input starts with number 0, and output
starts with number 1.
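Offsetting the start numbers does not keep the files distinct for long: with input starting at temp0000.jpg and output at temp0001.jpg, output frame 1 overwrites temp0001.jpg before the demuxer has read it as input frame 2. Writing to a separate pattern avoids the collision entirely (a sketch reusing the filter from above; the out%4d.jpg name and the rename step are illustrative):

```shell
# Write to a distinct pattern so no input file is ever overwritten.
ffmpeg -i temp%4d.jpg -vf "lutyuv=y=0.9*val" -frames 10 -q:v 1 out%4d.jpg
# Then, if the originals must be replaced:
# for f in out*.jpg; do mv "$f" "temp${f#out}"; done
```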