Re: [FFmpeg-user] MotionJpeg2000/ MXF

2016-12-02 Thread Christoph Gerstbauer

Hi Joseph,

Do you want to use MJPEG2000 (lossy) or MJPEG2000 mathematically LOSSLESS?

Best Regards
Christoph

On 30.11.16 at 02:08, Crystal Mastering wrote:

Hi Guys

I need to convert some video files to MotionJpeg2000, and wrap them in an MXF 
container.

I’m using iFFmpeg as an interface but it won’t allow me to do this. This is
strange because I know lots of people are able to combine MJPEG2000 and MXF.

I’d be more than happy to pay someone to write a script to do this for me.
iFFmpeg has direct terminal access, so I assume I can easily run some code
via this.

Many thanks in advance

Joseph
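For reference, a command along these lines (an untested sketch; the filenames are placeholders, and JPEG 2000 support in the MXF muxer varies by ffmpeg build) is the usual starting point on the command line:

```shell
# Encode Motion JPEG 2000 into an MXF container
# (filenames and pixel format are placeholders; adjust to the source material)
ffmpeg -i input.mov -c:v jpeg2000 -pix_fmt yuv422p -c:a pcm_s24le output.mxf
```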

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".



[FFmpeg-user] How do I add an audio track to an xdcam video?

2016-12-02 Thread Marian Montagnino
How do I add an audio track to an xdcam video using FFMPEG via command line?
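A hedged sketch of one way to do this (filenames are placeholders; XDCAM MXF typically expects PCM audio, so the video stream is copied untouched and only the audio is encoded):

```shell
# Mux an audio file into an existing XDCAM video, copying the video stream as-is
ffmpeg -i video.mxf -i audio.wav -map 0:v -map 1:a -c:v copy -c:a pcm_s16le output.mxf
```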

[FFmpeg-user] testing rtmp input

2016-12-02 Thread Ricardo Kleemann
Hi,

What would be the best way to test whether an rtmp stream is present?

I'm reading an rtmp stream as input and creating an mp4 as output.

But the stream isn't always present. I was thinking of doing a quick test
for stream presence, maybe reading 5 seconds of input as a test prior to
attempting the output.

thanks
Ricardo
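One way to implement the 5-second probe described above (a sketch; the URL is a placeholder) is to try reading a short slice of the stream and check the exit status before starting the real job:

```shell
# Try to read 5 seconds from the RTMP input and discard the output.
# Exit status 0 means the stream was readable (URL is a placeholder).
if ffmpeg -v error -i rtmp://example.com/live/stream -t 5 -f null - ; then
    echo "stream present"
else
    echo "stream absent"
fi
```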

Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Joshua Grauman

I'll keep that in mind if I really need extra speed. Thanks!

Josh

On Fri, 2 Dec 2016, Fred Perie wrote:


2016-12-02 17:38 GMT+01:00 Joshua Grauman :


Thanks guys, I'll look into these ideas!

Josh


On Fri, 2 Dec 2016, Paul B Mahol wrote:


On 12/2/16, Joshua Grauman  wrote:


Hello all,

I am using the following command successfully to generate a screencast.
The video comes from my program 'gen-vid' which outputs the raw frames
with alpha channel. The resulting .avi has alpha channel as well, which is
my goal. It all works great except that my computer can't handle doing it
in real time. So I am wondering if there is a different vcodec I could use
to achieve the same result that is less demanding on the cpu? I am willing
to sacrifice some compression for more speed, but would prefer not to have
to store all the raw frames without any compression. Storing the alpha
channel is also a must. It is preferable if the compression is lossless.
Does anyone have any other suggestions for compression other than png that
may be faster? Thanks!



utvideo, huffyuv, ffvhuff



./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
-framerate 30 -i - -vcodec png over.avi

Josh






Hi Joshua
I regularly use ffv1 with pix_fmt bgra
If speed is really a problem when reading/decoding, you can also consider
having two files: one containing the alpha channel (codec ffv1, pix_fmt
gray8) and one containing the color using a codec like H.264.
I don't know how to do this with the ffmpeg command. I use the ffmpeg
libraries and a specific program.

Fred





Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Fred Perie
2016-12-02 17:38 GMT+01:00 Joshua Grauman :
>
> Thanks guys, I'll look into these ideas!
>
> Josh
>
>
> On Fri, 2 Dec 2016, Paul B Mahol wrote:
>
>> On 12/2/16, Joshua Grauman  wrote:
>>>
>>> Hello all,
>>>
>>> I am using the following command successfully to generate a screencast.
>>> The video comes from my program 'gen-vid' which outputs the raw frames
>>> with alpha channel. The resulting .avi has alpha channel as well, which is
>>> my goal. It all works great except that my computer can't handle doing it
>>> in real time. So I am wondering if there is a different vcodec I could use
>>> to achieve the same result that is less demanding on the cpu? I am willing
>>> to sacrifice some compression for more speed, but would prefer not to have
>>> to store all the raw frames without any compression. Storing the alpha
>>> channel is also a must. It is preferable if the compression is lossless.
>>> Does anyone have any other suggestions for compression other than png that
>>> may be faster? Thanks!
>>
>>
>> utvideo, huffyuv, ffvhuff
>>
>>>
>>> ./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
>>> -framerate 30 -i - -vcodec png over.avi
>>>
>>> Josh

Hi Joshua
I regularly use ffv1 with pix_fmt bgra
If speed is really a problem when reading/decoding, you can also consider
having two files: one containing the alpha channel (codec ffv1, pix_fmt
gray8) and one containing the color using a codec like H.264.
I don't know how to do this with the ffmpeg command. I use the ffmpeg
libraries and a specific program.

Fred
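A single-command sketch of this split (untested; the alphaextract filter is assumed to be available in your build, and the filenames are placeholders):

```shell
# Split one RGBA source into a gray8 ffv1 alpha file and an H.264 color file.
# alphaextract turns the alpha plane into a grayscale stream; the direct
# 0:v mapping re-encodes the color with libx264 (alpha is dropped there).
ffmpeg -i input.avi \
  -filter_complex "[0:v]alphaextract[a]" \
  -map "[a]" -c:v ffv1 alpha.mkv \
  -map 0:v -pix_fmt yuv420p -c:v libx264 color.mp4
```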




Re: [FFmpeg-user] fault-tolerant streaming and local saving at the same time

2016-12-02 Thread Daniel Tisch
>
> > 1) Choose some (preferably lossless) codec to use in mpegts
> > instead of raw data.
>
> FFmpeg contains its own lossless codec, ffv1.
>


mpegts does not support ffv1 either. ffmpeg can write ffv1 to mpegts
without warning (just like for yuv), but cannot read it back.

user@host:~$ tools/ffmpeg/ffmpeg -i ~/Videos/Moments_of_Everyday_Life.mp4
-an -vcodec ffv1 -f mpegts Videos/test.ts
ffmpeg version N-82721-g89092fa Copyright (c) 2000-2016 the FFmpeg
developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --enable-gpl --enable-version3 --enable-nonfree
--enable-openssl --enable-avresample --enable-avisynth --enable-libmp3lame
--enable-libpulse --disable-librtmp --enable-libtheora --enable-libvorbis
--enable-libx264
  libavutil  55. 41.101 / 55. 41.101
  libavcodec 57. 66.108 / 57. 66.108
  libavformat57. 58.101 / 57. 58.101
  libavdevice57.  2.100 / 57.  2.100
  libavfilter 6. 68.100 /  6. 68.100
  libavresample   3.  2.  0 /  3.  2.  0
  libswscale  4.  3.101 /  4.  3.101
  libswresample   2.  4.100 /  2.  4.100
  libpostproc54.  2.100 / 54.  2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from
'/home/dani/Videos/Moments_of_Everyday_Life.mp4':
  Metadata:
major_brand : mp42
minor_version   : 0
compatible_brands: mp42mp41
creation_time   : 2012-03-13T03:50:30.00Z
  Duration: 00:01:07.94, start: 0.00, bitrate: 7994 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv,
bt709), 1280x720 [SAR 1:1 DAR 16:9], 7804 kb/s, 23.98 fps, 23.98 tbr, 23976
tbn, 47.95 tbc (default)
Metadata:
  creation_time   : 2012-03-13T03:50:30.00Z
  handler_name: Mainconcept MP4 Video Media Handler
  encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz,
stereo, fltp, 189 kb/s (default)
Metadata:
  creation_time   : 2012-03-13T03:50:30.00Z
  handler_name: Mainconcept MP4 Sound Media Handler
Output #0, mpegts, to 'Videos/test.ts':
  Metadata:
major_brand : mp42
minor_version   : 0
compatible_brands: mp42mp41
encoder : Lavf57.58.101
Stream #0:0(eng): Video: ffv1, yuv420p, 1280x720 [SAR 1:1 DAR 16:9],
q=2-31, 200 kb/s, 23.98 fps, 90k tbn, 23.98 tbc (default)
Metadata:
  creation_time   : 2012-03-13T03:50:30.00Z
  handler_name: Mainconcept MP4 Video Media Handler
  encoder : Lavc57.66.108 ffv1
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> ffv1 (native))
Press [q] to stop, [?] for help
frame= 1627 fps= 95 q=-0.0 Lsize=  424439kB time=00:01:07.81
bitrate=51269.8kbits/s speed=3.97x
video:393553kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB
muxing overhead: 7.848062%


user@host:~$ tools/ffmpeg/ffmpeg -i Videos/test.ts -vcodec mpeg4 -y
Videos/output.flv
ffmpeg version N-82721-g89092fa Copyright (c) 2000-2016 the FFmpeg
developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --enable-gpl --enable-version3 --enable-nonfree
--enable-openssl --enable-avresample --enable-avisynth --enable-libmp3lame
--enable-libpulse --disable-librtmp --enable-libtheora --enable-libvorbis
--enable-libx264
  libavutil  55. 41.101 / 55. 41.101
  libavcodec 57. 66.108 / 57. 66.108
  libavformat57. 58.101 / 57. 58.101
  libavdevice57.  2.100 / 57.  2.100
  libavfilter 6. 68.100 /  6. 68.100
  libavresample   3.  2.  0 /  3.  2.  0
  libswscale  4.  3.101 /  4.  3.101
  libswresample   2.  4.100 /  2.  4.100
  libpostproc54.  2.100 / 54.  2.100
Input #0, mpegts, from 'Videos/test.ts':
  Duration: 00:01:07.82, start: 1.40, bitrate: 51269 kb/s
  Program 1
Metadata:
  service_name: Service01
  service_provider: FFmpeg
Stream #0:0[0x100]: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, flv, to 'Videos/output.flv':
Output file #0 does not contain any stream


Adding "-vcodec ffv1" before -i produces the same output.



> > 2) Choose some other muxer instead of mpegts.
>
> mkv does have a live option but it is not really suitable
> for rawvideo either.
>


I cannot find any Matroska-related live option (or similar).



After all, I tried to go for mpegts with lossless H.264 for the video and
high-bitrate AAC for the audio, as there seems to be no lossless audio
alternative that is compatible with mpegts.
However, now I get the following errors on the receiving side. I am not
sure, but I am guessing that this might come from lost UDP packets?
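For the sending side, that combination might look like this (a sketch; the input filename and URL are placeholders, and -qp 0 selects lossless libx264 encoding):

```shell
# Lossless H.264 video plus high-bitrate AAC audio, muxed into MPEG-TS over UDP
ffmpeg -i input.mp4 -c:v libx264 -qp 0 -c:a aac -b:a 512k \
  -f mpegts udp://127.0.0.1:12345
```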

user@host:~$ tools/ffmpeg/ffmpeg -f mpegts -i udp://127.0.0.1:2 -acodec
mp3 -vcodec flv -y Videos/output.flv
ffmpeg version N-82721-g89092fa Copyright (c) 2000-2016 the FFmpeg
developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
  configuration: --enable-gpl --enable-version3 --enable-nonfree
--enable-openssl --enable-avresample --enable-avisynth --enable-libmp3lame
--enable-libpulse --disable-librtmp --enable-libtheora 

Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Joshua Grauman

Thanks guys, I'll look into these ideas!

Josh

On Fri, 2 Dec 2016, Paul B Mahol wrote:


On 12/2/16, Joshua Grauman  wrote:

Hello all,

I am using the following command successfully to generate a screencast.
The video comes from my program 'gen-vid' which outputs the raw frames
with alpha channel. The resulting .avi has alpha channel as well which is
my goal. It all works great except that my computer can't handle doing it
real-time. So I am wondering if there is a different vcodec I could use to
achieve the same result that is less demanding on the cpu? I am willing to
sacrifice some compression for more speed, but would prefer not to have to
store all the raw frames without any compression. Storing the alpha
channel is also a must. It is preferable if the compression is lossless.
Does anyone have any other suggestions for compression other than png that
may be faster? Thanks!


utvideo, huffyuv, ffvhuff



./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
-framerate 30 -i - -vcodec png over.avi

Josh

Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Paul B Mahol
On 12/2/16, Joshua Grauman  wrote:
> Hello all,
>
> I am using the following command successfully to generate a screencast.
> The video comes from my program 'gen-vid' which outputs the raw frames
> with alpha channel. The resulting .avi has alpha channel as well which is
> my goal. It all works great except that my computer can't handle doing it
> real-time. So I am wondering if there is a different vcodec I could use to
> achieve the same result that is less demanding on the cpu? I am willing to
> sacrifice some compression for more speed, but would prefer not to have to
> store all the raw frames without any compression. Storing the alpha
> channel is also a must. It is preferable if the compression is lossless.
> Does anyone have any other suggestions for compression other than png that
> may be faster? Thanks!

utvideo, huffyuv, ffvhuff
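A quick way to try the first of these suggestions, based on the command quoted in this thread (a sketch; in ffmpeg, BGRA input to utvideo is stored as planar RGB with alpha, so the alpha plane is kept):

```shell
# Same pipeline as before, but with the much faster utvideo encoder instead of png
./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 \
  -framerate 30 -i - -vcodec utvideo over.avi
```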

>
> ./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
> -framerate 30 -i - -vcodec png over.avi
>
> Josh

Re: [FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Carl Eugen Hoyos
2016-12-02 16:51 GMT+01:00 Joshua Grauman :
> Hello all,
>
> I am using the following command successfully to generate a screencast. The
> video comes from my program 'gen-vid' which outputs the raw frames with
> alpha channel. The resulting .avi has alpha channel as well which is my
> goal. It all works great except that my computer can't handle doing it
> real-time. So I am wondering if there is a different vcodec I could use to
> achieve the same result that is less demanding on the cpu? I am willing to
> sacrifice some compression for more speed, but would prefer not to have to
> store all the raw frames without any compression. Storing the alpha channel
> is also a must. It is preferable if the compression is lossless. Does anyone
> have any other suggestions for compression other than png that may be
> faster? Thanks!
>
> ./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080
> -framerate 30 -i - -vcodec png over.avi

utvideo is faster than png. Consider changing gen-vid so it outputs rgba to
slightly increase speed; this will also slightly improve performance with png.

Carl Eugen

[FFmpeg-user] Faster codec with alpha

2016-12-02 Thread Joshua Grauman

Hello all,

I am using the following command successfully to generate a screencast. 
The video comes from my program 'gen-vid' which outputs the raw frames 
with alpha channel. The resulting .avi has alpha channel as well which is 
my goal. It all works great except that my computer can't handle doing it 
real-time. So I am wondering if there is a different vcodec I could use to 
achieve the same result that is less demanding on the cpu? I am willing to 
sacrifice some compression for more speed, but would prefer not to have to 
store all the raw frames without any compression. Storing the alpha 
channel is also a must. It is preferable if the compression is lossless. 
Does anyone have any other suggestions for compression other than png that 
may be faster? Thanks!


./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1920x1080 
-framerate 30 -i - -vcodec png over.avi

Josh

Re: [FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Mark Thompson
On 02/12/16 14:05, Victor dMdB wrote:
> Thanks for the response Mark!
> 
> I might be misreading the source code, but for decoding, doesn't
> vaapi_decode_init
> do all the heavy lifting? 
> It seems to already call the functions and sets the variables you
> mentioned.
> 
> So might the code be just:
> 
> avcodec_open2(codecCtx, codec,NULL);
> vaapi_decode_init(codecCtx); 

vaapi_decode_init() is a function inside the ffmpeg utility, not any of the 
libraries.  You need to implement what it does in your own application.

You can copy/adapt the relevant code directly from the ffmpeg utility into your 
application if you want (assuming you comply with the licence terms).

- Mark


> On Fri, 2 Dec 2016, at 09:03 PM, Mark Thompson wrote:
>> On 02/12/16 10:57, Victor dMdB wrote:
>>> I was wondering if there were any examples of implementations with
>>> avformatcontext?
>>>
>>> I've looked at the source of ffmpeg vaapi implementation:
>>> https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
>>>
>>> and there is a reference to the cli values here
>>> https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
>>>
>>> But I'm not really sure how one actually implements it within either
>>> the decoding or encoding pipeline?
>>
>> Start by making an hwdevice.  This can be done with
>> av_hwdevice_ctx_create(), or if you already have a VADisplay (for
>> example, to do stuff in X via DRI[23]) you can use
>> av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().
>>
>> For a decoder:
>>
>> Make the decoder as you normally would for software.  You must set an
>> AVCodecContext.get_format callback.
>> Start feeding data to the decoder.
>> Once there is enough data to determine the output format, the
>> get_format callback will be called (this will always happen before any
>> output is generated).
>> The callback has a set of possible formats to use, this will contain
>> AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams
>> are supported for a given decoder - for H.264 the hwaccel only supports
>> YUV 4:2:0 in 8-bit depth).
>> Make an hwframe context for the output frames and a struct vaapi_context
>> containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and
>> its callees for this part.
>> Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode
>> context (AVCodecContext.hwaccel_context) to the decoder.
>> Once you return from the callback, decoding continues and will give you
>> AV_PIX_FMT_VAAPI frames.
>> If you need the output frames in normal memory rather than GPU memory,
>> you can copy them back with av_hwframe_transfer_data().
>>
>> For an encoder:
>>
>> Find an hwframe context to use as the encoder input.  For a transcode
>> case this can be the one from the decoder above, or it could be output
>> from a filter like scale_vaapi.  If you only have frames in normal memory,
>> you need to make a new one here.
>> Make the encoder as you normally would (you'll need to get the codec by
>> name (like "h264_vaapi"), because it will not choose it by default with
>> just the ID), and set AVCodecContext.hw_frames_ctx with your hwframe
>> context.
>> Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe
>> context.
>> If you only have input frames in normal memory, you will need to upload
>> them to GPU memory in the hwframe context with av_hwframe_transfer_data()
>> before giving them to the encoder.
>>
>>
>> - Mark
>>
>>
>> * It is intended that struct vaapi_context will be deprecated completely
>> soon, and this part will not be required (lavc will handle that context
>> creation).

Re: [FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Victor dMdB
Thanks for the response Mark!

I might be misreading the source code, but for decoding, doesn't
vaapi_decode_init
do all the heavy lifting? 
It seems to already call the functions and sets the variables you
mentioned.

So might the code be just:

avcodec_open2(codecCtx, codec,NULL);
vaapi_decode_init(codecCtx); 

On Fri, 2 Dec 2016, at 09:03 PM, Mark Thompson wrote:
> On 02/12/16 10:57, Victor dMdB wrote:
> > I was wondering if there were any examples of implementations with
> > avformatcontext?
> > 
> > I've looked at the source of ffmpeg vaapi implementation:
> > https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
> > 
> > and there is a reference to the cli values here
> > https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
> > 
> > But I'm not really sure how one actually implements it within either
> > the decoding or encoding pipeline?
> 
> Start by making an hwdevice.  This can be done with
> av_hwdevice_ctx_create(), or if you already have a VADisplay (for
> example, to do stuff in X via DRI[23]) you can use
> av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().
> 
> For a decoder:
> 
> Make the decoder as you normally would for software.  You must set an
> AVCodecContext.get_format callback.
> Start feeding data to the decoder.
> Once there is enough data to determine the output format, the
> get_format callback will be called (this will always happen before any
> output is generated).
> The callback has a set of possible formats to use, this will contain
> AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams
> are supported for a given decoder - for H.264 the hwaccel only supports
> YUV 4:2:0 in 8-bit depth).
> Make an hwframe context for the output frames and a struct vaapi_context
> containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and
> its callees for this part.
> Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode
> context (AVCodecContext.hwaccel_context) to the decoder.
> Once you return from the callback, decoding continues and will give you
> AV_PIX_FMT_VAAPI frames.
> If you need the output frames in normal memory rather than GPU memory,
> you can copy them back with av_hwframe_transfer_data().
> 
> For an encoder:
> 
> Find an hwframe context to use as the encoder input.  For a transcode
> case this can be the one from the decoder above, or it could be output
> from a filter like scale_vaapi.  If you only have frames in normal memory,
> you need to make a new one here.
> Make the encoder as you normally would (you'll need to get the codec by
> name (like "h264_vaapi"), because it will not choose it by default with
> just the ID), and set AVCodecContext.hw_frames_ctx with your hwframe
> context.
> Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe
> context.
> If you only have input frames in normal memory, you will need to upload
> them to GPU memory in the hwframe context with av_hwframe_transfer_data()
> before giving them to the encoder.
> 
> 
> - Mark
> 
> 
> * It is intended that struct vaapi_context will be deprecated completely
> soon, and this part will not be required (lavc will handle that context
> creation).
> 

Re: [FFmpeg-user] fault-tolerant streaming and local saving at the same time

2016-12-02 Thread Carl Eugen Hoyos
2016-12-02 14:13 GMT+01:00 Daniel Tisch :
>>
>> > ffmpeg -i some_source -acodec pcm_u16le -vcodec yuv4 -f mpegts
>>
>> This command line cannot work:
>> You cannot put random data into mpegts.
>>
> Ok, now I see that mpegts container does not support these raw codecs

> (however ffmpeg does not print a warning about that).

Yes, but you are generally responsible for your command line
(it requests both the codec and the format).

> Then I have 2 options to achieve my goal, right?
> 1) Choose some (preferably lossless) codec to use in mpegts
> instead of raw data.

FFmpeg contains its own lossless codec, ffv1.

> 2) Choose some other muxer instead of mpegts.

mkv does have a live option but it is not really suitable
for rawvideo either.

Carl Eugen
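Put together, that recommendation might look like the following (a sketch; the matroska muxer's live option and the output URL are assumptions here, and the source is a placeholder):

```shell
# ffv1 video and PCM audio in a live-flagged Matroska stream over UDP
ffmpeg -i some_source -c:v ffv1 -c:a pcm_s16le -f matroska -live 1 \
  udp://127.0.0.1:12345
```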

Re: [FFmpeg-user] fault-tolerant streaming and local saving at the same time

2016-12-02 Thread Daniel Tisch
>
> > ffmpeg -i some_source -acodec pcm_u16le -vcodec yuv4 -f mpegts
>
> This command line cannot work:
> You cannot put random data into mpegts.
>


Ok, now I see that mpegts container does not support these raw codecs
(however ffmpeg does not print a warning about that).
Then I have 2 options to achieve my goal, right?
1) Choose some (preferably lossless) codec to use in mpegts instead of raw
data.
2) Choose some other muxer instead of mpegts.

Do you have a recommendation for the codecs/muxer?



>
> > ffmpeg version 2.8.8-0ubuntu0.16.04.1 Copyright (c) 2000-2016 the FFmpeg
>
> Please understand that only current FFmpeg git head is supported
> here.
>


Thanks for noting this, now I am using that.

Daniel

Re: [FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Mark Thompson
On 02/12/16 10:57, Victor dMdB wrote:
> I was wondering if there were any examples of implementations with
> avformatcontext?
> 
> I've looked at the source of ffmpeg vaapi implementation:
> https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
> 
> and there is a reference to the cli values here
> https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
> 
> But I'm not really sure how one actually implements it within either
> the decoding or encoding pipeline?

Start by making an hwdevice.  This can be done with av_hwdevice_ctx_create(), 
or if you already have a VADisplay (for example, to do stuff in X via DRI[23]) 
you can use av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().

For a decoder:

Make the decoder as you normally would for software.  You must set an 
AVCodecContext.get_format callback.
Start feeding data to the decoder.
Once there is enough data to determine the output format, the get_format 
callback will be called (this will always happen before any output is 
generated).
The callback has a set of possible formats to use, this will contain 
AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams are 
supported for a given decoder - for H.264 the hwaccel only supports YUV 4:2:0 
in 8-bit depth).
Make an hwframe context for the output frames and a struct vaapi_context 
containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and its 
callees for this part.
Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode 
context (AVCodecContext.hwaccel_context) to the decoder.
Once you return from the callback, decoding continues and will give you 
AV_PIX_FMT_VAAPI frames.
If you need the output frames in normal memory rather than GPU memory, you can 
copy them back with av_hwframe_transfer_data().

For an encoder:

Find an hwframe context to use as the encoder input.  For a transcode case this 
can be the one from the decoder above, or it could be output from a filter like 
scale_vaapi.  If you only have frames in normal memory, you need to make a new one 
here.
Make the encoder as you normally would (you'll need to get the codec by name 
(like "h264_vaapi"), because it will not choose it by default with just the 
ID), and set AVCodecContext.hw_frames_ctx with your hwframe context.
Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe context.
If you only have input frames in normal memory, you will need to upload them to 
GPU memory in the hwframe context with av_hwframe_transfer_data() before giving 
them to the encoder.


- Mark


* It is intended that struct vaapi_context will be deprecated completely soon, 
and this part will not be required (lavc will handle that context creation).
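For orientation, the whole decode-in-GPU, encode-in-GPU pipeline described above corresponds roughly to this ffmpeg command-line invocation (a sketch; the device path is an assumption for a typical Linux system, and option names may differ by version):

```shell
# Full VAAPI hardware transcode: decode on the GPU, keep frames in GPU memory
# (-hwaccel_output_format vaapi), then encode with the h264_vaapi encoder
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -hwaccel_output_format vaapi \
  -i input.mp4 -c:v h264_vaapi output.mp4
```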


Re: [FFmpeg-user] No pixel format specified - meaning of yuv420p?

2016-12-02 Thread Andy Furniss

Andy Furniss wrote:


Possibly, or you didn't restrict to baseline.

You really should know what format the source video is in to do
things properly.

If, say, it's interlaced and stored as 422 or 411 then the default
conversion to 420 will be wrong. You would need to add interl=1 to
the scale filter (even then it's not truly correct, but the
difference is hard to see). If you want to keep as interlaced you
would also need to encode as MBAFF with libx264 and be sure to check
field dominance is correctly flagged in stream and container.

Of course if the source is interlaced and you just want something
"disposable" for the web rather than an archive, you could just
de-interlace it. Choices still involved = framerate/fieldrate, but
may (depending on source format) be able to avoid source chroma
format issues.


Do you really need baseline? It's less efficient than other profiles,
and it won't do MBAFF.
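For the disposable-web case described above, a deinterlacing sketch might look like this (filter and profile choices are assumptions, and the filenames are placeholders):

```shell
# Deinterlace with yadif, force the widely supported yuv420p pixel format,
# and encode H.264 without restricting to the baseline profile
ffmpeg -i input.avi -vf "yadif,format=yuv420p" -c:v libx264 -profile:v main output.mp4
```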


Re: [FFmpeg-user] No pixel format specified - meaning of yuv420p?

2016-12-02 Thread Andy Furniss

MRob wrote:

Thank you for the very fast response, it's appreciated.

On 2016-12-01 16:23, Lou wrote:

On Thu, Dec 1, 2016, at 03:00 PM, MRob wrote:

I'm exporting a video from an older Adobe Elements (Windows)
with intention to put it on the web (both H.264 and VP8). I
exported using Adobe's "DV AVI" which appears to be the most
unmolested output format


DV is not a good choice: it's lossy and will mess up your width,
height, aspect ratio, etc. Install UT video. UT video is a free and
open compressed lossless format that works well as an intermediate
format:

http://umezawa.dyndns.info/archive/utvideo/?C=M;O=D

Then restart Elements and export using that. Make sure Elements
doesn't change the width, height, frame rate, etc (I recall Adobe
Media Encoder doing that often). Finally, re-encode the intermediate
file with ffmpeg.


Oh, thank you for that information. Unfortunately, it looks like I'm
working with Premiere Elements, and after installing UT video, I
don't see any facilities to export using it. Is this a limitation of
Premiere? Or am I looking in the wrong place? Thanks for the off-topic
help with that.


[...]

But from reading that mailing list post and the error message
text, it sounds like adding "-pix_fmt yuv420p" affects the
output. I do not need to retain compatibility with terribly old
devices (though I am using baseline level 3.0), so I wanted to
ask if there is a better way to handle conversion in this case.


You'll need yuv420p. Most non-FFmpeg based players and various
devices don't support anything else.


I see, so the reason I hadn't seen that before was because any other
videos I'd encoded likely had the yuv420p pixel format in the video
stream already?


Possibly, or you didn't restrict to baseline.

You really should know what format the source video is in order to do things
properly.

If, say, it's interlaced and stored as 422 or 411, then the default
conversion to 420 will be wrong. You would need to add interl=1 to the
scale filter (even then it's not truly correct, but the difference is
hard to see). If you want to keep it interlaced you would also need to
encode as MBAFF with libx264 and be sure to check that field dominance is
correctly flagged in stream and container.

Of course if the source is interlaced and you just want something
"disposable" for the web rather than an archive, you could just
de-interlace it. Choices still involved = framerate/fieldrate, but may
(depending on source format) be able to avoid source chroma format issues.
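Putting that advice together, a command along the following lines combines the interlaced-aware scaler with interlaced (MBAFF) libx264 encoding. This is a sketch with assumed filenames, not a drop-in recipe; check field dominance flags on the actual source first:

```shell
# Sketch: interlaced 4:2:2 source -> 4:2:0 H.264, kept interlaced.
# scale=interl=1 makes the chroma downsampling field-aware;
# -flags +ildct+ilme requests interlaced (MBAFF) encoding from libx264.
ffmpeg -i in_interlaced_422.mov \
  -vf "scale=interl=1" -pix_fmt yuv420p \
  -c:v libx264 -flags +ildct+ilme \
  out.mp4
```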




Re: [FFmpeg-user] fault-tolerant streaming and local saving at the same time

2016-12-02 Thread Carl Eugen Hoyos
2016-12-02 12:30 GMT+01:00 Daniel Tisch :

> ffmpeg -i some_source -acodec pcm_u16le -vcodec yuv4 -f mpegts

This command line cannot work:
You cannot put random data into mpegts.

> ffmpeg version 2.8.8-0ubuntu0.16.04.1 Copyright (c) 2000-2016 the FFmpeg

Please understand that only current FFmpeg git head is supported
here.

Carl Eugen

[FFmpeg-user] fault-tolerant streaming and local saving at the same time

2016-12-02 Thread Daniel Tisch
Hi Folks,

I am trying to achieve what the subject says, and I would like to get some
feedback on my setup and a bit of help.

The problem: if I use a single ffmpeg command line to read the raw frames
from my Blackmagic Decklink capture card, encode it and A) stream over RTMP
to a remote server and B) save a local copy, then in case of either network
error or file write error (e.g. disk full), the whole process stops, so not
only does the live stream fail, but I also get no backup.

My intended solution: use 3 processes instead of one: 1) for capturing from
the decklink card and passing the media to 2 outputs, 2) for streaming and
3) for saving to file. But as long as I connect them through pipes, the
problem remains. Now I think the best option would be to connect the
processes with udp://127.0.0.1: style targets, because it is reliable
enough on the localhost, but the sender will not get stuck if any of the
receiver processes get stuck or die. For this to work, I need to use a
muxer in 1) that ensures audio-video sync and that enables 2) and 3) to
demux the content even after they restart, i.e. when they do not have the
full output and just start listening somewhere in the middle. I am
trying to use mpegts for that.

My first question: is this the way to go, or am I missing something that
would enable a lot simpler setup?

My second question: how to open the input in 2) and 3), if 1) looks
something like this:
ffmpeg -i some_source -acodec pcm_u16le -vcodec yuv4 -f mpegts udp://127.0.0.1:2
I have tried more things like this:
ffmpeg -f mpegts -f rawvideo -s 1280x720 -pix_fmt yuv420p -f u16le -ar 44100 -ac 2 -i udp://127.0.0.1:2 some_output
but I always get "Option video_size not found."
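For what it's worth, the "Option video_size not found" error likely comes from combining -f mpegts with rawvideo options: MPEG-TS is self-describing, so the receivers should not need any size/rate options at all. A rough sketch of the three-process layout follows; the ports, codecs, and filenames are assumptions (and the raw yuv4/pcm codecs are replaced with MPEG-TS-legal ones, since raw data cannot be carried in mpegts):

```shell
# Sketch only (ports/codecs/filenames assumed, not tested against a real card):
# 1) capture once, encode to codecs legal in MPEG-TS, fan out over local UDP
ffmpeg -i some_source \
  -c:v mpeg2video -c:a mp2 -f mpegts "udp://127.0.0.1:20001" \
  -c:v mpeg2video -c:a mp2 -f mpegts "udp://127.0.0.1:20002"

# 2) streamer: MPEG-TS is self-describing, so no rawvideo/-s options needed
ffmpeg -f mpegts -i "udp://127.0.0.1:20001" -c copy -f flv rtmp://example.com/live/key

# 3) local backup; if this process dies, 1) and 2) keep running
ffmpeg -f mpegts -i "udp://127.0.0.1:20002" -c copy backup.ts
```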

Thank you very much for your insights! The full ffmpeg commands and outputs
are below.

Thanks,
Daniel


sender:

user@host:~$ ffmpeg -i ~/Videos/Moments_of_Everyday_Life.mp4 -acodec pcm_u16le -vcodec yuv4 -f mpegts udp://127.0.0.1:2
ffmpeg version 2.8.8-0ubuntu0.16.04.1 Copyright (c) 2000-2016 the FFmpeg
developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.2) 20160609
  configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1
--build-suffix=-ffmpeg --toolchain=hardened
--libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
--cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping
--disable-decoder=libopenjpeg --disable-decoder=libschroedinger
--enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa
--enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca
--enable-libcdio --enable-libflite --enable-libfontconfig
--enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm
--enable-libmodplug --enable-libmp3lame --enable-libopenjpeg
--enable-libopus --enable-libpulse --enable-librtmp
--enable-libschroedinger --enable-libshine --enable-libsnappy
--enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora
--enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack
--enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi
--enable-openal --enable-opengl --enable-x11grab --enable-libdc1394
--enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264
--enable-libopencv
  libavutil  54. 31.100 / 54. 31.100
  libavcodec 56. 60.100 / 56. 60.100
  libavformat56. 40.101 / 56. 40.101
  libavdevice56.  4.100 / 56.  4.100
  libavfilter 5. 40.101 /  5. 40.101
  libavresample   2.  1.  0 /  2.  1.  0
  libswscale  3.  1.101 /  3.  1.101
  libswresample   1.  2.101 /  1.  2.101
  libpostproc53.  3.100 / 53.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from
'/home/dani/Videos/Moments_of_Everyday_Life.mp4':
  Metadata:
major_brand : mp42
minor_version   : 0
compatible_brands: mp42mp41
creation_time   : 2012-03-13 03:50:30
  Duration: 00:01:07.94, start: 0.00, bitrate: 7994 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv,
bt709), 1280x720 [SAR 1:1 DAR 16:9], 7804 kb/s, 23.98 fps, 23.98 tbr, 23976
tbn, 47.95 tbc (default)
Metadata:
  creation_time   : 2012-03-13 03:50:30
  handler_name: Mainconcept MP4 Video Media Handler
  encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz,
stereo, fltp, 189 kb/s (default)
Metadata:
  creation_time   : 2012-03-13 03:50:30
  handler_name: Mainconcept MP4 Sound Media Handler
Output #0, mpegts, to 'udp://127.0.0.1:2':
  Metadata:
major_brand : mp42
minor_version   : 0
compatible_brands: mp42mp41
encoder : Lavf56.40.101
Stream #0:0(eng): Video: yuv4, yuv420p, 1280x720 [SAR 1:1 DAR 16:9],
q=2-31, 200 kb/s, 23.98 fps, 90k tbn, 23.98 tbc (default)
Metadata:
  creation_time   : 2012-03-13 03:50:30
  handler_name: Mainconcept MP4 Video Media Handler
  encoder : Lavc56.60.100 yuv4
Stream #0:1(eng): Audio: pcm_u16le, 44100 Hz, stereo, s16, 1411 kb/s

Re: [FFmpeg-user] unable to convert png to movie, error: Invalid PNG signature ,

2016-12-02 Thread Carl Eugen Hoyos
2016-12-02 11:22 GMT+01:00 Puneet Singh :

> I am trying to convert a set of images to a video file. I have tried
> various ffmpeg command options but they all have failed.

> ffmpeg -r 1 -i image%02d.png

Since your files are actually jpeg, not png, you have to force the
right decoder:
$ ffmpeg -r 1 -vcodec mjpeg -i image%02d.png
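The "Invalid PNG signature" error refers to the fixed 8-byte header that every real PNG file starts with. A quick way to check what the files actually are, independent of their extension (this helper is illustrative, not part of FFmpeg):

```python
# Every PNG begins with this fixed 8-byte signature; JPEGs begin with 0xFFD8.
PNG_SIG = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Return True if the file starts with the PNG signature."""
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIG
```

A JPEG renamed to .png fails this check, which is exactly the situation the error message points at.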

> --extra-cflags=-fPIC

Why do you think that this makes sense?

Please remember that on this mailing list, only current
FFmpeg git head is supported.

Carl Eugen

Re: [FFmpeg-user] continuous live stream from video files

2016-12-02 Thread MRob

On 2016-12-01 23:22, Anthony Ettinger wrote:

I found the concatenate option with ffmpeg as described here:

https://trac.ffmpeg.org/wiki/Concatenate#protocol

I can concatenate static files if I know the list ahead of time and
produce an m3u8 stream


Would you kindly describe what tools/setup you use to provide the stream 
after creating it with ffmpeg?




as output with this:

ffmpeg -i 'concat:intermediate1.ts|intermediate2.ts' -c copy
test.m3u8

The problem I'm trying to solve is I want to be able to add files
dynamically to the list and I also want the stream to run 24/7
continuously, looping when it reaches the last file in the list.

Is this doable with ffmpeg and some bash?


[FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Victor dMdB
I was wondering if there were any examples of implementations with
AVFormatContext?

I've looked at the source of ffmpeg vaapi implementation:
https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html

and there is a reference to the cli values here
https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html

But I'm not really sure how one actually implements it within either the
decoding or the encoding pipeline?

[FFmpeg-user] unable to convert png to movie, error: Invalid PNG signature ,

2016-12-02 Thread Puneet Singh
Hi all,
I am trying to convert a set of images to a video file. I have tried
various ffmpeg command options but they all have failed. (command & error
logs attached: ffmpeg_errors.txt)


The following seems to work for me:
ffmpeg -f concat -safe 0 -i <(cat 

Re: [FFmpeg-user] continuous live stream from video files

2016-12-02 Thread Paul B Mahol
On 12/2/16, Anthony Ettinger  wrote:
> I found the concatenate option with ffmpeg as described here:
>
> https://trac.ffmpeg.org/wiki/Concatenate#protocol
>
> I can concatenate static files if I know the list ahead of time and produce
> an m3u8 stream as output with this:
>
> ffmpeg -i 'concat:intermediate1.ts|intermediate2.ts' -c copy
> test.m3u8
>
> The problem I'm trying to solve is I want to be able to add files
> dynamically to the list and I also want the stream to run 24/7 continuously
> looping when it reaches last file in the list.
>
> Is this doable with ffmpeg and some bash?

I doubt it, but such a feature could be added to the concat demuxer.
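One trick that may already cover both the dynamic list and the 24/7 loop (a sketch with assumed clip names, and behaviour worth verifying against a current build): the concat demuxer treats an entry that is itself an ffconcat playlist as a sub-list, so a playlist whose last entry is the playlist itself loops indefinitely, and the file can be rewritten between passes to add clips.

```shell
# Sketch: a self-referencing ffconcat playlist. When the demuxer reaches the
# last entry, it re-opens the playlist and starts over from the top.
cat > playlist.ffconcat <<'EOF'
ffconcat version 1.0
file 'clip1.ts'
file 'clip2.ts'
file 'playlist.ffconcat'
EOF
# Then stream it (not run here):
#   ffmpeg -re -f concat -safe 0 -i playlist.ffconcat -c copy -f hls live.m3u8
```

Edits to playlist.ffconcat take effect the next time the demuxer loops back, which is what makes the list dynamic.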