Re: [FFmpeg-user] ffmpeg on windows seems to ignore -to

2017-10-24 Thread Roger Pack
Works in linux? What versions of ffmpeg?
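In the meantime, one variant worth trying (a sketch, not verified against this
file) is input seeking, i.e. putting -ss before -i so the demuxer seeks instead
of reading up to the cut point. Note that with -ss before -i, the -to value may
be interpreted relative to the seek point depending on the version, so -t (a
duration) is the safer companion:

```shell
# Seek in the demuxer (fast with -c copy), then copy 22 seconds.
ffmpeg -ss 00:18:22 -i input.mov -t 22 -c copy clip10.mov
```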

On 10/20/17, Kevin Duffey  wrote:
> Hi all,
> So I am using this simple command on Windows 10 (64-bit) to cut out
> a small clip:
> ffmpeg -i input.mov -ss 00:18:22.0 -to 00:18:44.0 -c copy clip10.mov
>
> This *should* give me a 22 second clip.
> Instead, it seems to delay for a couple of minutes, and when I use the
> debug option, I see a ton of:
> cur_dts is invalid (this is harmless if it occurs once at the start per
> stream)  0x    Last message repeated 341 times
>
> I have tried various combos, including
> ffmpeg -i input.mov -ss 00:18:22.000 -to 00:18:44.000 -c copy clip10.mov
> ffmpeg -i input.mov -ss 00:18:22 -to 00:18:44 -c copy clip10.mov
> ffmpeg -i input.mov -ss 00:18:22 -t 00:18:44 -c copy clip10.mov
> as well as
> ffmpeg -ss 00:18:00 -i input.mov -ss 00:18:22 -to 00:18:44 -c copy
> clip10.mov
>
> and other variances with the .000, .0, and so forth.
> From some recent Stack Overflow posts and other bits online, I don't see that
> I am doing anything wrong.
> The -t option typically specifies the duration, whereas -to specifies the
> end point at which to stop the clip.
> This is with ffmpeg latest (3.4).
> Any help would be appreciated.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Take webcam picture with minimal delay

2017-10-24 Thread Roger Pack
Maybe overwrite a jpg file "over and over" and just grab a copy of it
on demand when you need it?
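
A sketch of that approach (the device, size, and one-per-second rate are
assumptions) using the image2 muxer's -update option, which overwrites a
single output file instead of writing a numbered sequence:

```shell
# Keep the webcam open and overwrite latest.jpg roughly once per second;
# -update 1 makes the image2 muxer write each frame to the same file.
ffmpeg -f video4linux2 -video_size 1920x1080 -input_format yuyv422 \
       -i /dev/video0 -vf fps=1 -update 1 -y latest.jpg
```

The HTTP server can then read latest.jpg on demand without paying the
camera-initialization delay.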

On 9/20/17, m...@stefan-kleeschulte.de  wrote:
> Hi everyone!
>
> I want to take a picture from a webcam (to send it in a http server's
> response). I already found a command to take a picture and write it to
> stdout (from where I can grab it in the http server):
>
> ffmpeg -f video4linux2 -video_size 1920x1080 -input_format yuyv422 -i
> /dev/video0 -f image2 -frames:v 1 -qscale:v 1 pipe:1
>
> The only drawback is that it takes about 3 seconds to get the picture.
> The delay comes from ffmpeg (not from the server/network), probably
> because it needs to wait for the webcam to initialize.
>
> Now my idea is to somehow keep the webcam active and grab the current
> video frame whenever a http request comes in. But I do not know how to
> do this with ffmpeg (on linux). Can I use one instance of ffmpeg to
> capture the webcam video and stream it *somewhere*, and then use a
> second instance of ffmpeg to extract the current frame of that stream?
> Or is there a better way to do it?
>
> Thanks!
> Stefan Kleeschulte

Re: [FFmpeg-user] SMPTE when converting to JPEGs

2017-10-24 Thread Carl Eugen Hoyos
2017-10-24 13:00 GMT+02:00 Wolfgang Hugemann :
>> You didn't answer Carl Eugen's very important question: Did you use
>> ffmpeg to convert the camera's AVI to JPEG frames? What command did you
>> use?
>
> Well, of course I used ffmpeg; otherwise I wouldn't ask it in this forum.

And in this forum you are expected to always post your command
line together with the complete, uncut console output:
Your mail is a good example of why. ;-)

>> It's possible though to
>> instruct ffmpeg to create one output image for every input frame
>> (regardless of its timestamp or frame rate)
>
> This is what I am looking for.

As said: -vsync

>> Please show us your full command line and the complete, uncut console
>> output.
>
> This is my Windows CMD line:
>
> C:\Programme\ffmpeg\bin\ffmpeg -ss 00:01:30.00 -t 00:00:04.00
> -i [CH01]14_10_43.avi -vf
> "drawtext=fontfile=/Windows/Fonts/arial.ttf:fontcolor=yellow:fontsize=350:timecode='09\:57\:00\:00':r=30:x=15:y=50"
> -q:v 3 _frames\%%02d.jpg

The console output indicates that your input has variable frame-rate,
the image2 muxer (requested by the output filename you chose)
defaults to constant frame rate, leading to frame duplication.

> And this is the output:
>
> ffmpeg version N-79630-g9ac154d Copyright (c) 2000-2016 the FFmpeg

This is old and unsupported, even if it should make no difference
for your example.

Carl Eugen

Re: [FFmpeg-user] nvenc burn subtitles while transcoding

2017-10-24 Thread James Girotti
>
> Shouldn't I be using hwupload_cuda to upload frames to the CUDA engine,
> then apply the overlay filter, and after that download it back? At least that
> is how I understood it. Or are you suggesting to download it from CUDA,
> run the filter on the CPU, and after that upload it back?
>

No, the overlay filter is software only, so it runs on the CPU in main memory.
The hw-decoded frames have to be downloaded from GPU memory to main memory,
then the CPU applies the overlay filter in main memory, and finally the frame
is uploaded from main memory back to GPU memory for hw-encoding.

> I am trying something like this, but I don't exactly know where to
> upload it:
> ffmpeg -hwaccel cuvid -c:v h264_cuvid  -i udp://239.255.6.2:1234
> -filter_complex "[i:0x4b1]scale_npp=w=1920:h=1080,hwdownload,format=nv12
> [base]; [base][i:0x4ba]overlay[v]" -map [v] -map i:0x4b2 -c:a libfdk_aac
> -profile:a aac_he -ac 2 -b:a 64k -ar 48000 -c:v h264_nvenc -preset llhq
> -rc vbr_hq -qmin:v 19 -qmax:v 21 -b:v 2500k -maxrate:v 5000k -profile:v
> high -f flv rtmp://127.0.0.1/live/test
>
> It works, but I don't see any acceleration probably because it is not in
> CUDA anymore and the manual is not really helpful here. Anybody with
> more experience?
>

I'm not sure what you mean by "I don't see any acceleration". The syntax
looks okay for hw-decoding/encoding, so that should be happening on your
GPU. You can monitor your GPU using "nvidia-smi dmon" during transcoding.
It will show you how much of your GPU is being used for decoding/encoding
(to verify that your GPU is "doing work").
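
For reference, a minimal monitoring invocation (the column layout varies by
driver version):

```shell
# -s u selects utilization metrics; the enc/dec columns show the
# percentage load on the NVENC/NVDEC engines while the transcode runs.
nvidia-smi dmon -s u
```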

Do you mean that transcoding is slow? That should be expected. Doing a
similar overlay runs at ~4X on my Phenom X2/GTX1050Ti, where pure
hw-transcoding (no overlay) can do ~23X (with resizing to 1080p); at 480p
without resizing, the figures are ~10X and ~110X. As above, the overlay
filter is software-based. I don't know the exact overlay filter internals,
but based on performance I'm guessing it's single-threaded, so that could
cause a major slow-down, as the encoder is waiting for each single frame
from the overlay filter to be done by your "slow" single-core CPU.

Also, just a side note: it looks like you're scaling your input video with
scale_npp before overlaying. Not sure if you're aware, but the h264_cuvid
decoder has resizing and cropping built in, so you don't need to use
scale_npp to do the resizing. You could do something like:

ffmpeg -hwaccel cuvid -c:v h264_cuvid -resize 1920x1080 -i udp://239.255.6.2:1234
-filter_complex "[i:0x4b1]hwdownload,format=nv12[base];
[base][i:0x4ba]overlay,hwupload_cuda[v]" -map "[v]"  ...

Interestingly, if you wanted to overlay, then resize, you could use
scale_npp for GPU/CUDA resizing after the overlay. The hw-decoder couldn't be
used to resize at that point. That would be something like:

ffmpeg -hwaccel cuvid -c:v h264_cuvid -i udp://239.255.6.2:1234
-filter_complex "[i:0x4b1]hwdownload,format=nv12[base];
[base][i:0x4ba]overlay,hwupload_cuda,scale_npp=w=1920:h=1080:format=nv12[v]"
-map "[v]" ..

Or maybe you could resize the subtitles before overlay (you could try
something similar with scale_npp instead of scale):

ffmpeg -hwaccel cuvid -c:v h264_cuvid -resize 1920x1080 -i udp://239.255.6.2:1234
-filter_complex "[i:0x4b1]hwdownload,format=nv12[base];
[i:0x4ba]scale=1920:1080[subtitle]; [base][subtitle]overlay,hwupload_cuda[v]"
-map "[v]" .

If you're still reading: scale_npp uses CUDA, whereas h264_cuvid resizing
uses the GPU Video Engine. You (or anyone really) might take that into
consideration when planning resizing, as those options put different kinds
of load on your GPU.

Hope This Helps (or at least points you in the right direction)!
-J

Re: [FFmpeg-user] Canon C200 CRAW or .CRM

2017-10-24 Thread Gonzalo Garramuño



On 21/10/17 at 14:42, Aaron Star wrote:

Here is a sample of C200 Raw :  http://tbf.me/a/cPz6X

I understand that FFmpeg does not encode EXR, but I am not sure why, since
it is open source. FFmpeg does support the other formats, and I am trying to
stay away from the likes of DNxHR or ProRes.

What I am trying to get to is simple batch processing of RAW footage to EXR, so
the material can be edited in the ACES color space. Compression would be handled
by the EXR format.

I am trying to stay out of integer intermediate formats and to keep all
information from the source by dumping it into a linear floating-point format.

If I knew how to code this I would.

Thank you for your quick response.
FFmpeg is not the appropriate tool for this. You want to have a look at
OpenImageIO (OIIO), which has EXR savers and readers as well as CRAW readers.
It comes with a utility to convert formats.
It was developed by Larry Gritz while at Sony Imageworks, and it is an
open-source library.


Re: [FFmpeg-user] SMPTE when converting to JPEGs

2017-10-24 Thread Phil Rhodes
> Sorry for using the wrong words; I meant "burned in", i.e. written onto
> each frame / JPEG. (As you can tell from my name, I am not a native
> speaker.)

Oh, it's a very obscure bit of terminology!
People call it "stuck down" if it's a graphic.
A million different bits of language for this sort of thing.

Re: [FFmpeg-user] SMPTE when converting to JPEGs

2017-10-24 Thread Moritz Barsnick
On Tue, Oct 24, 2017 at 13:00:53 +0200, Wolfgang Hugemann wrote:
> Well, of course I used ffmpeg; otherwise I wouldn't ask it in this forum.

Silly me.

> > It's possible though to
> > instruct ffmpeg to create one output image for every input frame
> > (regardless of its timestamp or frame rate)
> 
> This is what I am looking for.

The option
-vsync passthrough
should do the trick. Let us know if it does (and if it doesn't of
course).
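
Applied to your command, that would look something like this (a sketch with
the drawtext filter omitted for brevity; input name assumed):

```shell
# One JPEG per input frame, no duplication or dropping, regardless of
# the frame rate the AVI claims to have.
ffmpeg -ss 00:01:30 -t 4 -i input.avi -vsync passthrough -q:v 3 _frames/%02d.jpg
```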

> C:\Programme\ffmpeg\bin\ffmpeg -ss 00:01:30.00 -t 00:00:04.00 -i 
> [CH01]14_10_43.avi -vf 
> "drawtext=fontfile=/Windows/Fonts/arial.ttf:fontcolor=yellow:fontsize=350:timecode='09\:57\:00\:00':r=30:x=15:y=50"
>  
> -q:v 3 _frames\%%02d.jpg

I don't know how drawtext behaves if the input isn't actually 30 fps.
Probably, the r= option only decides whether to count to 25 or to 30.

> ffmpeg version N-79630-g9ac154d Copyright (c) 2000-2016 the FFmpeg developers

This is a bit old, BTW, but it doesn't matter for this particular
issue.

> Output #0, image2, to '_frames\%02d.jpg':
>Metadata:
>  encoder : Lavf57.34.103
>  Stream #0:0: Video: mjpeg, yuvj420p(pc), 1920x1080, q=2-31, 200 kb/s, 30 
> fps, 30 tbn

ffmpeg assumes 30 fps for the output. I believe it copied the value
from what the input claims to have.

> frame=  119 fps= 72 q=3.0 Lsize=N/A time=00:00:03.96 bitrate=N/A dup=23 
> drop=0 speed=2.39x

And indeed, just to confirm, 23 frames were duplicated ("dup=23") in
order to achieve the constant output framerate.

You can also use ffprobe to display each and every frame's timestamps,
and see whether there are irregularities in them. (That's just if
you're interested in the internals, or in debugging.) I don't know of a
parser which does that for you, I'd be happy to know if there is one.
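
A sketch of such an ffprobe call (output field names differ slightly between
versions; pkt_pts_time is the one in this vintage):

```shell
# Print each video frame's presentation timestamp as plain CSV values;
# irregular gaps between consecutive values indicate variable frame rate.
ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time \
        -of csv=p=0 input.avi
```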

Best regards,
Moritz

Re: [FFmpeg-user] SMPTE when converting to JPEGs

2017-10-24 Thread Moritz Barsnick
On Tue, Oct 24, 2017 at 10:22:20 +0200, Wolfgang Hugemann wrote:
> > Is there a formalised way to embed timecode in JPEGs?
> > Does the OP mean "burned in?"
> 
> Sorry for using the wrong words; I meant "burned in", i.e. written onto 
> each frame / JPEG. (As you can tell from my name, I am not a native 
> speaker.)

You didn't answer Carl Eugen's very important question: Did you use
ffmpeg to convert the camera's AVI to JPEG frames? What command did you
use?

If the AVI says 30 fps, and it is as you assume that the frame rate is
different (or variable), and you used ffmpeg's default options, then
ffmpeg will have created a constant frame rate sequence of images,
duplicating or dropping frames as needed. It's possible though to
instruct ffmpeg to create one output image for every input frame
(regardless of its timestamp or frame rate), but I want to understand
exactly what you did first. ;-)

Please show us your full command line and the complete, uncut console
output.

> My question is however where exactly ffmpeg's timecode stems from. It 
> doesn't seem to be a simple frame count (?).

Which timecode do you mean? What did you do with ffmpeg? ffmpeg doesn't
insert timecodes into JPEGs it creates (AFAIK).

> The Exif timestamp's accuracy is only integer seconds.

ffmpeg doesn't write EXIF. Unless I'm totally mistaken, the timestamps
are lost when creating image sequences.

> codec library (Lavc57.38.100 in my case) into the JPEG comment. I have 
> no idea whether you can change this behaviour.

From looking at the code, I thought that this was hardcoded. A simple
test shows, though, that I can copy a JPEG comment from an input file
to an output file. *Sigh* I'm too stupid to understand that. Once we
figure it out, we can add it to the documentation. (But you won't be
able to put the timestamp into the comment.)

Cheers,
Moritz

Re: [FFmpeg-user] SMPTE when converting to JPEGs

2017-10-24 Thread Wolfgang Hugemann

> Is there a formalised way to embed timecode in JPEGs?
> Does the OP mean "burned in"?


Sorry for using the wrong words; I meant "burned in", i.e. written onto 
each frame / JPEG. (As you can tell from my name, I am not a native 
speaker.)


My question, however, is where exactly ffmpeg's timecode stems from. It 
doesn't seem to be a simple frame count (?).


Wolfgang Hugemann

P.S.:
The Exif timestamp's accuracy is only integer seconds. A more accurate 
timestamp could only be written into the JPEG comment or into the XP 
comment Exif tag, I guess. By default ffmpeg writes the name of the 
codec library (Lavc57.38.100 in my case) into the JPEG comment. I have 
no idea whether you can change this behaviour.


Re: [FFmpeg-user] nvenc burn subtitles while transcoding

2017-10-24 Thread Mitja Pirih
On 21. 10. 2017 00:35, James Girotti wrote:
>> When I add hw acceleration for decoding, it stops. Any ideas what can I try?
>>
>>
> Not exactly sure how it works with filter_complex, but with regular filter
> you would do something like:
>
> "hwdownload,format=nv12,YOUR_FILTERS_HERE,format=yuv420p,hwupload_cuda"
>
> This will grab the decoded frames from GPU memory, convert to nv12 (from
> the CUDA format), then apply whatever filters you need, change the format to
> something reasonable (my example uses yuv420p), then upload frames back to
> GPU memory.
>
> Hope that helps,
> -J

Shouldn't I be using hwupload_cuda to upload frames to the CUDA engine,
then apply the overlay filter, and after that download it back? At least that
is how I understood it. Or are you suggesting to download it from CUDA,
run the filter on the CPU, and after that upload it back?

I am trying something like this, but I don't exactly know where to
upload it:
ffmpeg -hwaccel cuvid -c:v h264_cuvid  -i udp://239.255.6.2:1234
-filter_complex "[i:0x4b1]scale_npp=w=1920:h=1080,hwdownload,format=nv12
[base]; [base][i:0x4ba]overlay[v]" -map [v] -map i:0x4b2 -c:a libfdk_aac
-profile:a aac_he -ac 2 -b:a 64k -ar 48000 -c:v h264_nvenc -preset llhq
-rc vbr_hq -qmin:v 19 -qmax:v 21 -b:v 2500k -maxrate:v 5000k -profile:v
high -f flv rtmp://127.0.0.1/live/test

It works, but I don't see any acceleration, probably because it is not in
CUDA anymore, and the manual is not really helpful here. Anybody with
more experience?

ffmpeg version N-87875-gf685bbc Copyright (c) 2000-2017 the FFmpeg
developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.5) 20160609
  configuration: --prefix=/home/mitja/ffmpeg_build
--pkg-config-flags=--static
--extra-cflags=-I/home/mitja/ffmpeg_build/include
--extra-ldflags=-L/home/mitja/ffmpeg_build/lib --extra-libs=-lpthread
--bindir=/home/mitja/bin --enable-cuda --enable-cuvid --enable-libnpp
--extra-cflags=-I/usr/local/cuda/include
--extra-ldflags=-L/usr/local/cuda/lib64 --enable-gpl --enable-libass
--enable-libfdk-aac --enable-libfreetype --enable-libmp3lame
--enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx
--enable-libx264 --enable-libx265 --enable-libspeex --enable-nonfree
--enable-nvenc
  libavutil  55. 79.100 / 55. 79.100
  libavcodec 57.108.100 / 57.108.100
  libavformat57. 84.100 / 57. 84.100
  libavdevice57. 11.100 / 57. 11.100
  libavfilter 6.108.100 /  6.108.100
  libswscale  4.  9.100 /  4.  9.100
  libswresample   2. 10.100 /  2. 10.100
  libpostproc54.  8.100 / 54.  8.100
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] decode_slice_header error
[h264 @ 0x2c76d80] no frame!
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] decode_slice_header error
[h264 @ 0x2c76d80] no frame!
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] SPS unavailable in decode_picture_timing
[h264 @ 0x2c76d80] non-existing PPS 0 referenced
[h264 @ 0x2c76d80] decode_slice_header error
[h264 @ 0x2c76d80] no frame!
Input #0, mpegts, from 'udp://239.255.6.2:1234':
  Duration: N/A, start: 4.615722, bitrate: N/A
  Program 13002
Stream #0:0[0x4b1]: Video: h264 (High) ([27][0][0][0] / 0x001B),
yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k
tbn, 50 tbc
Stream #0:1[0x4b2](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000
Hz, stereo, s16p, 192 kb/s
Stream #0:2[0x4ba](slv): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream mapping:
  Stream #0:0 (h264_cuvid) -> scale_npp (graph 0)
  Stream #0:2 (dvbsub) -> overlay:overlay (graph 0)
  overlay (graph 0) -> Stream #0:0 (h264_nvenc)
  Stream #0:1 -> #0:1 (mp2 (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
[mpegts @ 0x2c51e00] sub2video: using 1920x1080 canvas
Output #0, flv, to 'rtmp://127.0.0.1/live/test':
  Metadata:
encoder : Lavf57.84.100
Stream #0:0: Video: h264 (h264_nvenc) (High) ([7][0][0][0] /
0x0007), nv12, 1920x1080 [SAR 1:1 DAR 16:9], q=19-21, 2500 kb/s, 25 fps,
1k tbn, 25 tbc (default)
Metadata:
  encoder : Lavc57.108.100 h264_nvenc
Side data:
  cpb: bitrate max/min/avg: 500/0/250 buffer size: 500
vbv_delay: -1
Stream #0:1(eng): Audio: aac (libfdk_aac) (HE-AAC) ([10][0][0][0] /
0x000A), 48000 Hz, stereo, s16, 64 kb/s
Metadata:
  encoder : Lavc57.108.100 libfdk_aac
[graph 0 input from stream 0:2 @ 0x2ce5c80] Changing frame properties on
the fly is not supported by all filters.
Last message repeated 11 times
[graph 0 input from stream 0:2 @ 0x2ce5c80] Changing frame properties on
the fly is not