Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Anatol
I don't think that 'keep source keyframes' would impose any a/v sync issues
beyond those of any other encoding flow.
AFAIK, 'force_key_frames' acts on the output/encoding side; it is not aware of
the decoding process. For that matter, scene cuts are evaluated on the
post-decoding/raw frames.
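
A minimal sketch of that output-side mechanism (the 2-second cadence, file names
and the -sc_threshold setting are illustrative and not from the thread; how
-sc_threshold is honoured can vary between encoder wrappers and versions):

# Force an IDR frame every 2 seconds on the encoded output and disable
# scene-cut keyframes so nothing breaks the cadence.
ffmpeg -i input.mp4 \
  -c:v libx264 -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 \
  -c:a copy output.mp4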

On Sat, May 2, 2015 at 12:31 PM, Haris Zukanovic 
haris.zukanovi...@gmail.com wrote:

 My simple idea was that, instead of deducing the positions from a formula like
 -force_key_frames 'expr:gte(t,n_forced*5)'
 force_key_frames somehow took this kind of info directly from the input
 stream and passed it on to all output streams.

 This could be called something like

 -force_key_frames 'from_source'


 Do you think it is possible to do?





 On 5/2/15 10:41 AM, Anatol wrote:

 It does not matter what type the incoming protocol is.
 Slight misalignment is tolerated by the CDN providers and the Apple HLS
 validation tools.
 Therefore the source live stream can be used in an adaptive-bitrate set,
 IF the other streams match its key frames.
 By the way, Wowza has this option (keep source key frames) in its
 Transcoder add-on.
 But Wowza also has other problems ...


 On Sat, May 2, 2015 at 11:06 AM, Henk D. Schoneveld belca...@zonnet.nl
 wrote:

  On 02 May 2015, at 06:27, Anatol anatol2...@gmail.com wrote:

 The idea is to gain the option to use the H264 source stream along with
 live transcoded streams in adaptive bitrate delivery modes (HLS, HDS,
 etc.) that require aligned keyframes on ALL participating streams.

 Could it be ALL except the livestream itself ?
 Or is the livestream also available via HLS ?
 If yes, then you'll have to encode this one as well.
 This can only be achieved if the source stream is encoded with the same
 version of the encoder you're using yourself. I can imagine that different
 encoders could behave slightly differently.
 BTW what is your source ? Is it predictable, like DVB-S from the same
 broadcaster ?
 Off the top of my head, BBC broadcasts have an I-frame every 12 or 13
 frames.
 Another potential issue could be the delay between the video and audio
 stream, which would force you to also encode the source stream.
 Hope it helps.

 On Sat, May 2, 2015 at 2:48 AM, Henk D. Schoneveld belca...@zonnet.nl
 wrote:

 On 01 May 2015, at 20:43, Haris Zukanovic haris.zukanovi...@gmail.com wrote:

 Indeed, forcing keyframes at certain positions is meant to keep multiple
 output encodings keyframe aligned. The input stream is already h264 in our
 case.
 Moreover, if one could copy all I-frame positions, and possibly also all
 other frame types, from the input stream, there would not be any need for
 scene detection if that was already done in the input stream. I am not sure
 how much of the heavy lookahead and other heavy calculations could also be
 skipped?

 What are you trying to achieve ? A performance boost ? I don't think that
 you'll achieve a worthwhile improvement, if anything at all. The working of
 the encoder would need to be totally rewritten to make something like this
 happen at all. Encoding of a frame depends on former and following frames;
 whether it ends up an I, P or B frame is the result. Your source is h264
 already, so I think you'll rescale and re-encode to achieve that. The
 calculation has to be done. Knowing in advance that it will be an I, P or B
 frame won't make any difference in the amount of calculations, in my opinion.

 On May 1, 2015 7:42 PM, Henk D. Schoneveld belca...@zonnet.nl wrote:

 On 01 May 2015, at 13:06, Haris Zukanovic haris.zukanovi...@gmail.com wrote:

 Is the decision about exactly which frame to make an IDR frame made in
 x264 or ffmpeg?

 In general I-frames are placed at scene changes, and this can happen at
 random. Additionally, you can force an I-frame every arbitrary number of
 frames by specifying the GOP size. The function of an I-frame is to hold the
 complete frame info; P and B frames build on that I-frame. It doesn't make
 sense from an encoding viewpoint to skip an I-frame at a scene change; it's
 just impossible.
 Adding more than 'a minimum amount' of I-frames only makes sense for seeking
 purposes, at the cost of less compression / higher than necessary bitrate.

 Any pointer or advice on where to look for this in the code?


 On 4/29/15 8:54 PM, Anatol wrote:

 No responses on that one?
 It is a very important issue.

 On Mon, Apr 27, 2015 at 11:47 PM, Haris Zukanovic 
 haris.zukanovi...@gmail.com wrote:

 Hello,
 Can I use force_key_frames in some way to produce keyframes (IDR, not
 I-frames) at exactly the same PTS in the output streams as they are found in
 the live input stream? Both input and output are h264 and live streaming.

 Something analogous to using 2-pass encoding for VOD, where in the second
 pass keyframes are inserted exactly where they were recorded in the first
 pass... Is something like that even theoretically doable for live streaming?

 thanx


Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Haris Zukanovic



On 5/2/15 10:06 AM, Henk D. Schoneveld wrote:

Another potential issue could be the delay between the video and audio stream, 
which would force you to also encode the source stream.
I had not thought about the audio at all... but audio is normally not a
problem, even if it has to be re-encoded to get it synced.




--
--
Haris Zukanovic



Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Anatol
It does not matter what type the incoming protocol is.
Slight misalignment is tolerated by the CDN providers and the Apple HLS
validation tools.
Therefore the source live stream can be used in an adaptive-bitrate set,
IF the other streams match its key frames.
By the way, Wowza has this option (keep source key frames) in its
Transcoder add-on.
But Wowza also has other problems ...



Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Haris Zukanovic

My simple idea was that, instead of deducing the positions from a formula like
-force_key_frames 'expr:gte(t,n_forced*5)'
force_key_frames somehow took this kind of info directly from the input
stream and passed it on to all output streams.


This could be called something like

-force_key_frames 'from_source'


Do you think it is possible to do?






Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Haris Zukanovic
I was just trying to imagine optimisations when encoding multiple quality
outputs... for example, doing scene detection only once.

In reality, I am trying to get the input and output keyframes aligned so
that I can use the input with -c:v copy as one of the variants in the
resulting multiple-quality HLS delivery. This way I would not have to
re-encode the input.
The only crucial requirement is to get the keyframes (IDR) aligned. Other
I-frames, like those created from scene changes, do not need to be
aligned.
Keyframe-aligned means that all key frames carry the same PTS in all
output streams.
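
A minimal sketch of that layout, assuming a single ffmpeg process with one
stream-copied variant and one encoded variant (URLs, bitrates and the 6-second
segment length are invented; as discussed in this thread, the forced keyframe
grid of the encoded variant does not automatically coincide with the IDR
positions already present in the copied source):

ffmpeg -i rtmp://source/live/stream \
  -map 0 -c copy \
      -f hls -hls_time 6 -hls_list_size 6 /var/www/hls/src/index.m3u8 \
  -map 0:v -map 0:a -c:v libx264 -b:v 800k \
      -force_key_frames "expr:gte(t,n_forced*6)" -sc_threshold 0 -c:a copy \
      -f hls -hls_time 6 -hls_list_size 6 /var/www/hls/low/index.m3u8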


[FFmpeg-user] flatten f4v append with different resolution

2015-05-02 Thread Madovsky

Hi,

I'm trying to convert a long F4V recorded on FMS
with multiple appends from a livestream. Some
segments contain different resolutions than the one that started;
I started from 462x260, then 1920x1200, etc...
From the Flash player the stream works correctly and shows every segment.
From a third-party video player, the high resolution is not shown, or only
part of it, with big pixel artifacts.
I tried to flatten the file with f4vpp, which fixes the width and height
of the smaller segment, but the high resolution is cropped.
I tried to find a way to resize with ffmpeg but had no success.
Is it possible to let ffmpeg resize at a certain part of the movie?

thanks


Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Henk D. Schoneveld

 On 02 May 2015, at 06:27, Anatol anatol2...@gmail.com wrote:
 
  The idea is to gain the option to use the H264 source stream along with
  live transcoded streams in adaptive bitrate delivery modes (HLS, HDS,
  etc.) that require aligned keyframes on ALL participating streams.
Could it be ALL except the livestream itself ?
Or is the livestream also available via HLS ?
If yes, then you'll have to encode this one as well.
This can only be achieved if the source stream is encoded with the same version
of the encoder you're using yourself. I can imagine that different encoders
could behave slightly differently.
BTW what is your source ? Is it predictable, like DVB-S from the same
broadcaster ?
Off the top of my head, BBC broadcasts have an I-frame every 12 or 13 frames.
Another potential issue could be the delay between the video and audio stream,
which would force you to also encode the source stream.
Hope it helps.
 


Re: [FFmpeg-user] How can I set in a D10 MXF (IMX50) file the flags for output color_range, -space, -transfer and primaries?

2015-05-02 Thread Christoph Gerstbauer



On 01.05.15 at 11:21, tim nicholson wrote:

On 30/04/15 22:03, Marton Balint wrote:

On Wed, 29 Apr 2015, Christoph Gerstbauer wrote:


I found out that a IMX50 mxf file encoded with FFmbc and the IMX50 mxf
file encoded with actual ffmpeg builds are different in these mxf
metadata flags (by reading out via ffprobe - show_streams)

Are you sure you mean specifically mxf metadata flags?
See my comments below.

Hello Tim, I wrongly mixed up different issues.
The metadata flags color_range=tv, color_space=smpte170m,
color_transfer=bt709 and color_primaries=bt470bg
are NOT MXF metadata flags. For MXF there exist separate color range and signal
standard metadata flags (which can be read out with mxf2raw (BBC), for example).





FFMBC IMX FILE stream 0:0:
color_range=tv
color_space=smpte170m
color_transfer=bt709
color_primaries=bt470bg


FFMPEG IMX FILE stream 0:0:
color_range=tv
color_space=unknown
color_transfer=unknown
color_primaries=unknown

I want to set these metadata flags to the same values, but with FFmpeg.
How can I produce the same output as ffmbc for these 4 metadata flags?
I didn't find an answer to that in the ffmpeg documentation :/

You can manually specify these settings to force them being set with
these parameters:

-color_primaries 5
-color_trc 1
-colorspace 6

What does not work as far as I know in ffmpeg is to actually get these
settings from the source video, even if ffprobe detects them.


And what does not work as far as I know in ffmpeg is these values
actually being written by the mxf muxer. At least I cannot find the
relevant UL's listed in the Generic Picture Essence Descriptor section
of mxfenc.c, which is where I would expect to see them. Or anywhere else,
for that matter.

Nor are they in ffmbc for that matter, so I suspect ffprobe is picking
them up from the essence rather than the specific mxf UL's. (ffmbc sets
the parameters Marton lists as part of the IMX target, which ffmpeg does
not have)

Given that MXF encoders should encode Transfer Characteristic whenever
possible (SMPTE S377-1), this is clearly an omission and I am surprised
the IRT analyser doesn't spot it.

Christoph do you actually require the mxf metadata setting (as it really
ought to be, and what I thought you were after) or are you content with
it in the essence, in which case, in the absence of a target preset, you
will have to set the flags yourself.

As Marton Balint showed me the syntax, the setting of these 4 values
worked, but they don't affect the metadata in the MXF container.

The metadata flags
Color Siting and Signal Standard weren't changed by using that syntax.
Before this test I thought I could change these 2 params with the syntax
described above.

But now I know that this is not working.
Furthermore, I looked for a way to PASS the test with the IRT Analyzer.

Regarding the ticket "How to set 3 specific metadata flags
(ITU601/display offset) in FFmpeg's IMX50 MXF-OP1a encoding":
Yes, I am still looking for an encoding option for the MXF encoding of
ffmpeg to set MANUALLY (of course) the flags for Signal Standard and
Color Siting.
FFmpeg should never set them automatically to any values if the source
has empty flags.
So if I know which source the content is coming from, I want to be able
to set the 2 MXF metadata flags, and also the 4 additional flags for the
essence (color_range, color_space, color_transfer and color_primaries).
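
For reference, a sketch of where those per-stream options go on the command
line (file names are placeholders and this is not a complete IMX50 recipe; the
numeric values are the ones Marton quoted above, i.e. 5 = bt470bg primaries,
1 = bt709 transfer, 6 = smpte170m matrix):

ffmpeg -i input.mkv \
  -c:v mpeg2video -pix_fmt yuv422p \
  -color_primaries 5 -color_trc 1 -colorspace 6 \
  -c:a copy output.mxf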


My motivation for this whole issue is:
I want to generate an IMX50 file from an FFV1 or FFVHUFF file where the
IMX50 target file has as little transcoding loss as possible.
Most transcoders I use make an internal conversion from YUV422 to YUV422
formats (e.g. uncompressed 4:2:2 to uncompressed 4:2:2) with an ADDITIONAL
loss. I guess it is an internal RGB conversion: YUV422 source to RGB
(internal) to YUV422 target format. (Anyway, these transcoders are black
boxes and I cannot know if this is the cause.)

So when I transcode with these transcoders I have 2 LOSSY steps:
1.) Avoidable loss if the source is YUV422: additional chroma subsampling
(RGB-YUV422)

2.) MPEG-2 encoding loss

FFmpeg does not do that: it keeps the native yuv422 format and just
encodes it to MPEG-2, and that leads to better-quality IMX files.
Therefore I want to switch from professional transcoders to ffmpeg for
generating IMX50. But I will still need these metadata flags to do it
perfectly.


Best Regards
Christoph Gerstbauer


Regards,
Marton
[..]




Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Haris Zukanovic

My case is live streaming.
I have tried it, and the keyframes are definitely not aligned between the input
and output streams.
For all encoded output streams it is very simple to obtain alignment
by setting a fixed GOP size. But the PTS and keyframes of the input are
never aligned with that.


Sometimes the PTS of the output streams are not aligned with each other, and it
seems that encoding does not start at the same time, as if there were a
different startup delay for the different output streams. I have seen this very
often using the tee output.






Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Henk D. Schoneveld

 On 02 May 2015, at 11:38, Anatol anatol2...@gmail.com wrote:
 
 I don't think that 'keep source keyframes' would impose any a/v sync issues
 beyond those of any other encoding flow.
 AFAIK, 'force_key_frames' acts on the output/encoding side; it is not aware of
 the decoding process. For that matter, scene cuts are evaluated on the
 post-decoding/raw frames.
Did you try it, without thinking about alignment, to see what happens ?
Take a source file of several minutes, do your encoding for several bitrates
with HLS, and then test whether problems really show up. Meaning: maybe you are
trying to solve a potential problem, looking at requirements, that in reality
doesn't exist ?
 

Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Henk D. Schoneveld

 On 02 May 2015, at 18:00, Haris Zukanovic haris.zukanovi...@gmail.com wrote:
 
  My case is live streaming.
  I have tried it and the keyframes are definitely not aligned between the input
  and output streams.
  For all encoded output streams it is very simple to obtain alignment by
  setting a fixed GOP size. But the PTS and keyframes of the input are never
  aligned with that.
Different approach: adaptive means being able to switch to another 'quality' if
circumstances change. So if the corresponding fragments contain the same
content, everything is OK, I think ?
Don't start the encode from the 'whole' incoming stream, but from the parts
created by HLS from that same stream.
Maybe a workable solution ?
 

Re: [FFmpeg-user] flatten f4v append with different resolution

2015-05-02 Thread Carl Eugen Hoyos
Madovsky infos at madovsky.org writes:

 I tried to find a way to resize with ffmpeg but no success.

Command line and complete, uncut console output missing.

 is it possible to let ffmpeg resize at a certain part of 
 the movie?

FFmpeg does not support different output size in one video 
stream. You have to choose one resolution for the whole 
output stream. All parts of the input stream that have a 
different resolution will be scaled.
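
A sketch of what that looks like in practice (the 1920x1200 target and the file
names are only examples; the force_original_aspect_ratio option needs a
reasonably recent ffmpeg):

# Scale every part of the input to one fixed size, padding with black
# where the aspect ratio does not match.
ffmpeg -i input.f4v \
  -vf "scale=1920:1200:force_original_aspect_ratio=decrease,pad=1920:1200:(ow-iw)/2:(oh-ih)/2" \
  -c:a copy output.mp4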

Carl Eugen



Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Reuben Martin
On Saturday, May 02, 2015 06:00:32 PM Haris Zukanovic wrote:
 My case is live streaming.
  I have tried it and the keyframes are definitely not aligned between the input
  and output streams.
  For all encoded output streams it is very simple to obtain alignment
  by setting a fixed GOP size. But the PTS and keyframes of the input are
  never aligned with that.
 

I’ve had success with it as long as I encode all the derivative streams at 
once from the same encoder process. This of course can consume significant CPU 
resources.
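
A sketch of that single-process approach (source/destination URLs, sizes,
bitrates and codecs are made up; the point is that every rendition comes from
one ffmpeg run with the same forced-keyframe expression; older builds may need
-strict -2 for the native AAC encoder):

ffmpeg -i rtmp://source/live/stream \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1280:720[v720];[v2]scale=640:360[v360]" \
  -map "[v720]" -map 0:a -c:v libx264 -b:v 2500k \
      -force_key_frames "expr:gte(t,n_forced*2)" -c:a aac -f flv rtmp://dest/live/720p \
  -map "[v360]" -map 0:a -c:v libx264 -b:v 800k \
      -force_key_frames "expr:gte(t,n_forced*2)" -c:a aac -f flv rtmp://dest/live/360p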


Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Anatol
Henk,
It's a real problem: if the streams are un-aligned, the playback gets into
a jump-forward-backward mode.
It's not a problem to get a source file's key frames and to have them encoded
into the rest of the files.
The problem is with live streaming, because it is not possible to query
it for the keyframe locations.

Reuben,
The whole idea is to avoid intense CPU consumption for LIVE streaming.
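
For the file case mentioned above, a rough sketch of that workflow (file names
are placeholders; the exact ffprobe entry name, pkt_pts_time vs pts_time,
depends on the ffprobe version):

# 1) Collect the PTS of the source keyframes from the first video stream.
ffprobe -v error -select_streams v:0 -skip_frame nokey \
  -show_entries frame=pkt_pts_time -of csv=p=0 source.mp4 > keyframes.txt

# 2) Re-encode a rendition, forcing keyframes at exactly those timestamps.
ffmpeg -i source.mp4 -c:v libx264 -b:v 800k \
  -force_key_frames "$(paste -sd, keyframes.txt)" \
  -c:a copy rendition_800k.mp4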




Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Henk D. Schoneveld

 On 02 May 2015, at 21:11, Anatol anatol2...@gmail.com wrote:
 
  Henk,
  It's a real problem: if the streams are un-aligned, the playback gets into
  a jump-forward-backward mode.
If you create a 5-minute source file and split it into five 1-minute chunks with
the help of the hls function of ffmpeg,
then create 3 other quality streams from these chunks,
this results in a total of 4 HLS-compatible streams, where switching between
each of them should work without any problem, I think.
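
A rough sketch of that two-step idea (segment length, names and the bitrate are
invented for illustration; with -c copy the segment boundaries will fall on the
keyframes already present in the source):

# 1) Cut the source into roughly one-minute segments without re-encoding.
ffmpeg -i source.mp4 -c copy -f hls -hls_time 60 -hls_list_size 0 \
  -hls_segment_filename 'src_%03d.ts' src.m3u8

# 2) Re-encode each segment into a lower-quality rendition.
for seg in src_*.ts; do
  ffmpeg -i "$seg" -c:v libx264 -b:v 800k -c:a copy "low_$seg"
done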


Re: [FFmpeg-user] Cropdetect Broken since 2.6

2015-05-02 Thread Jeremy
Sorry, I misread the original request. The command being used is:

./ffmpeg -i
/Users/jeremylk/Dev/Ruby/workspace/video_encoder_app/Breathing_5s.mov -vf
cropdetect=24:16:0 dummy.mov


Re: [FFmpeg-user] Cropdetect Broken since 2.6

2015-05-02 Thread Jeremy
Sure thing. Below is the output from 2.5.6 (the last working version)
followed by the output for the same command / same file using 2.6, when the
behavior for cropdetect changed. Notice how the values for the 2.6 output
show no crop detected. This is consistent in everything post-2.6.
Everything pre-2.6 performs as intended.

I've tested with 2.5.4, 2.5.5, 2.5.6, 2.6, and 2.6.2. Multiple files with
different crop factors. Results are the same.

*BEGIN 2.5.6 : *

ffmpeg version 2.5.6 Copyright (c) 2000-2015 the FFmpeg developers
  built on May  2 2015 13:10:59 with Apple LLVM version 6.1.0
(clang-602.0.49) (based on LLVM 3.6.0svn)
  configuration: --prefix=/usr/local --enable-gpl --enable-nonfree
--enable-libass --enable-libfdk-aac --enable-libfreetype
--enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis
--enable-libvpx --enable-libx264 --enable-libxvid
  libavutil  54. 15.100 / 54. 15.100
  libavcodec 56. 13.100 / 56. 13.100
  libavformat56. 15.102 / 56. 15.102
  libavdevice56.  3.100 / 56.  3.100
  libavfilter 5.  2.103 /  5.  2.103
  libswscale  3.  1.101 /  3.  1.101
  libswresample   1.  1.100 /  1.  1.100
  libpostproc53.  3.100 / 53.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from
'/Users/jeremylk/Dev/Ruby/workspace/video_encoder_app/Breathing_5s.mov':
  Metadata:
major_brand : qt
minor_version   : 537199360
compatible_brands: qt
creation_time   : 2015-05-02 22:19:01
xmp : ?xpacket begin= id=W5M0MpCehiHzreSzNTczkc9d?
: x:xmpmeta xmlns:x=adobe:ns:meta/ x:xmptk=Adobe
XMP Core 5.6-c011 79.156380, 2014/05/06-23:40:11
:  rdf:RDF xmlns:rdf=
http://www.w3.org/1999/02/22-rdf-syntax-ns#;
:   rdf:Description rdf:about=
: xmlns:xmp=http://ns.adobe.com/xap/1.0/;
: xmlns:xmpDM=
http://ns.adobe.com/xmp/1.0/DynamicMedia/;
: xmlns:stDim=
http://ns.adobe.com/xap/1.0/sType/Dimensions#;
: xmlns:xmpMM=http://ns.adobe.com/xap/1.0/mm/;
: xmlns:stEvt=
http://ns.adobe.com/xap/1.0/sType/ResourceEvent#;
: xmlns:stRef=
http://ns.adobe.com/xap/1.0/sType/ResourceRef#;
: xmlns:creatorAtom=
http://ns.adobe.com/creatorAtom/1.0/;
: xmlns:dc=http://purl.org/dc/elements/1.1/;
:xmp:CreateDate=2015-05-02T15:19:01-07:00
:xmp:ModifyDate=2015-05-02T15:19:04-07:00
:xmp:MetadataDate=2015-05-02T15:19:04-07:00
:xmp:CreatorTool=Adobe Premiere Pro CC 2014
(Macintosh)
:xmpDM:startTimeScale=24000
:xmpDM:startTimeSampleSize=1001
:xmpDM:videoFrameRate=23.976024
:xmpDM:videoFieldOrder=Progressive
:xmpDM:videoPixelAspectRatio=1/1
:xmpDM:audioSampleRate=48000
:xmpDM:audioSampleType=16Int
:xmpDM:audioChannelType=Stereo
:
 xmpMM:InstanceID=xmp.iid:c306f9f8-3a2a-44aa-b6fd-ce97cd454332
:
 xmpMM:DocumentID=c9ddc4e3-4fd0-0aba-ce95-9a6b004b
:
 xmpMM:OriginalDocumentID=xmp.did:eea8539b-4d73-44d1-86f0-a05a53d032b2
:dc:format=QuickTime
:xmpDM:duration
: xmpDM:value=241241
: xmpDM:scale=1/24000/
:xmpDM:altTimecode
: xmpDM:timeValue=00:00:00:00
: xmpDM:timeFormat=23976Timecode/
:xmpDM:projectRef
: xmpDM:type=movie/
:xmpDM:videoFrameSize
: stDim:w=1920
: stDim:h=1080
: stDim:unit=pixel/
:xmpDM:startTimecode
: xmpDM:timeFormat=23976Timecode
: xmpDM:timeValue=00:00:00:00/
:xmpMM:History
: rdf:Seq
:  rdf:li
:   stEvt:action=saved
:
stEvt:instanceID=48139c48-3d3a-57f3-4788-ee180078
:   stEvt:when=2015-05-02T15:19:04-07:00
:   stEvt:softwareAgent=Adobe Premiere Pro CC 2014
(Macintosh)
:   stEvt:changed=//
:  rdf:li
:   stEvt:action=created
:
stEvt:instanceID=xmp.iid:ba7ed7a1-efa4-4868-85db-f3ae28fe1fa2
:   stEvt:when=2015-05-02T15:19:01-07:00
:   stEvt:softwareAgent=Adobe Premiere Pro CC 2014
(Macintosh)/
:  rdf:li
:   stEvt:action=saved
:

[FFmpeg-user] No luck with live stream from ffmpeg to ffserver

2015-05-02 Thread En Figureo Canal
I haven't had much luck deploying ffserver; nothing I try for streaming
live from ffmpeg to ffserver works. I've run into different problems, I
don't know whether my configuration is correct, and I can't get ffserver
to do what I need, which is getting frustrating.

The last error message I'm getting is "av_interleaved_write_frame(): Unknown
error", along with "Past duration too large". I've read somewhere that it
might be caused by incompatible versions of ffmpeg, which struck me as odd;
I believe ffmpeg should work with any current version.

First, I'm trying to do a live stream using a capture card I've installed
and/or using VidBlaster, but I haven't been able to. Audio works easily,
but not video.

Here's my ffserver conf:


<Feed channel2.ffm>
 File /root/channel2.ffm
 FileMaxSize 64M
</Feed>

 <Stream channel2.sdp>
 Feed channel2.ffm

 Format rtp

 VideoCodec libx264
 #   VideoFrameRate 30
 #   VideoSize 640x360
 VideoBitRate 1000

 # Audio settings
 AudioCodec libmp3lame #libfdk_aac
 AudioSampleRate 41000
 AudioBitRate 96
 AudioChannels 2 #this is creating problem
  #  AVOptionAudio flags +global_header

 MaxTime 0
 AVOptionVideo me_range 16
 AVOptionVideo qdiff 4
 AVOptionVideo qmin 4
 AVOptionVideo qmax 40
 #AVOptionVideo good
 #   AVOptionVideo flags +global_header

 # Streaming settings
 PreRoll 10
 StartSendOnKey

 NoDefaults

 </Stream>


When I send the feed to the server I get the previous mentioned error. What
exactly am I doing wrong? I’ve tried different combinations to send the
feed but nothing works, this is the last conf to send to the server:



ffmpeg -re -rtbufsize 1500M -f dshow -i video="input":audio="input" -acodec
libmp3lame -ar 44100 -ab 96k -vcodec libx264 -f flv
http://ip:8090/channel2.ffm


I've even tried feeding a video file from my PC to the server, and still no luck.

Please point me in the right direction to get this working. Thanks.
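
For comparison, a push command of the shape an ffserver feed normally expects
would look roughly like the following; this is only a sketch, the dshow device
names are placeholders, and it assumes the channel2.ffm feed defined in the
conf above:

ffmpeg -re -rtbufsize 1500M -f dshow -i video="Capture Device":audio="Capture Device" \
  -vcodec libx264 -acodec libmp3lame -ar 44100 -ab 96k \
  -f ffm http://ip:8090/channel2.ffm

Two things worth checking: the feed URL normally takes the FFM format rather
than FLV, and the audio sample rate in the command (-ar 44100) does not match
the AudioSampleRate 41000 in the conf; if 41000 is a typo for 44100, aligning
the two may help.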
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user


Re: [FFmpeg-user] Cropdetect Broken since 2.6

2015-05-02 Thread Carl Eugen Hoyos
Jeremy genericinbox at gmail.com writes:

 I can provide output data to support this

Please only provide your failing command line 
including complete, uncut console output if 
you want the issue fixed.

Carl Eugen

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user


Re: [FFmpeg-user] Use force_key_frames to obtain keyframes at exactly the same positions as in the input stream?

2015-05-02 Thread Anatol
Henk,
Live streaming, not files.

On Sat, May 2, 2015 at 10:32 PM, Henk D. Schoneveld belca...@zonnet.nl
wrote:


  On 02 May 2015, at 21:11, Anatol anatol2...@gmail.com wrote:
 
  Henk,
  It's a real problem: if the streams are un-aligned, the playback gets into
  a jump-forward-backward mode.
 If you create a 5-minute source file and split it into five 1-minute chunks
 with the help of the hls muxer of ffmpeg, then create 3 other quality streams
 from these chunks, this results in a total of 4 HLS-compatible streams where
 switching between them should work without any problem, I think.
  It's not a problem to get the source file's key frames and to have them
  encoded into the rest of the files.
  The problem is with live streaming, because it is not possible to query
  it for the keyframe locations.
 
  Reuben,
  The whole idea is to avoid intense CPU consumption for LIVE streaming.
 
 
  On Sat, May 2, 2015 at 8:31 PM, Reuben Martin reube...@gmail.com
 wrote:
 
  On Saturday, May 02, 2015 06:00:32 PM Haris Zukanovic wrote:
  My case is live streaming.
  I have tried it and definitely keyframes are not aligned between input
  and output streams.
  For all encoded output streams it is very simple to obtain alignment
  by setting a fixed GOP size. But the PTS and keyframes of the input are
  never aligned with that.
 
 
  I’ve had success with it as long as I encode all the derivative streams
 at
  once from the same encoder process. This of course can consume
 significant
  CPU
  resources.
  ___
  ffmpeg-user mailing list
  ffmpeg-user@ffmpeg.org
  http://ffmpeg.org/mailman/listinfo/ffmpeg-user
 
  ___
  ffmpeg-user mailing list
  ffmpeg-user@ffmpeg.org
  http://ffmpeg.org/mailman/listinfo/ffmpeg-user

 ___
 ffmpeg-user mailing list
 ffmpeg-user@ffmpeg.org
 http://ffmpeg.org/mailman/listinfo/ffmpeg-user

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user
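
For reference, the fixed-GOP approach mentioned above, with all renditions
produced by one encoder process, would look roughly like this; it is only a
sketch, and the input URL, output targets, bitrates and the 48-frame GOP
(about 2 seconds at 24 fps) are placeholder assumptions:

ffmpeg -i rtmp://source/live/stream \
  -map 0:v -map 0:a -c:v libx264 -b:v 2500k -g 48 -keyint_min 48 -sc_threshold 0 \
    -c:a copy -f flv rtmp://dest/live/high \
  -map 0:v -map 0:a -c:v libx264 -b:v 800k -g 48 -keyint_min 48 -sc_threshold 0 \
    -c:a copy -f flv rtmp://dest/live/low

Because both renditions come from the same process with scene-cut keyframes
disabled, their keyframes land on the same frames; the source stream itself,
as noted in the thread, still keeps whatever keyframe cadence the upstream
encoder chose.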


[FFmpeg-user] Cropdetect Broken since 2.6

2015-05-02 Thread Jeremy
Hi all,

Cropdetect seems to be broken since 2.6. I noticed this the other day with
a new build for my encoding app, which pulled in 2.6.2. Cropdetect no longer
worked with the 24:16:0 parameters. Pushing the threshold parameter up to 65
produced different values, but they were negative and incorrect.
2.5.6 seems to be the last time cropdetect worked properly.

I can provide output data to support this, if you want. Let me know if I
can help; I'd definitely love to see this fixed.
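
For reference, the kind of command in question looks roughly like this; the
input path is a placeholder, and 24:16:0 are the filter's limit, round and
reset parameters:

ffmpeg -i input.mov -vf cropdetect=24:16:0 -f null -

A working build logs a crop=W:H:X:Y suggestion for each analysed frame; per
the report above, 2.6 and later no longer detect any crop for the same files.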
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user


Re: [FFmpeg-user] Multithreaded multioutput problem

2015-05-02 Thread Deron

On 4/21/15 2:51 AM, Marton Balint wrote:



On Mon, 20 Apr 2015, Deron wrote:


On 4/20/15 4:22 PM, Marton Balint wrote:


On Mon, 20 Apr 2015, Deron wrote:


On 4/20/15 1:48 PM, Marton Balint wrote:

On Mon, 20 Apr 2015, Deron wrote:

Another user has contacted me with the exact same problem, hoping 
that I had stumbled on a solution for the problem below. I did not, 
so I am adding some more extensive log output in hopes that 
someone might recognize the source of the problem.




I have also experienced this problem. Depending on your use case 
you can face a similar situation even with only one input, one 
output and a few cores. There is no buffering between the 
various stages of the ffmpeg encoder/filter/decoder pipeline, so 
even if the stages themselves are multithreaded, you won't be 
able to scale up, because passing data between the stages is done 
in a single thread.


Or at least that is what I think is going on. So as far as I know 
you can only scale up properly by running multiple ffmpeg 
instances, e.g. create a multicast and encode that.


Regards,
Marton



What would be the best way to solve this then? I'm not sure I 
understand what you mean by multicast. Having multiple ffmpeg 
instances decoding the same original mpegts input would be pretty slow.


If that overhead is unacceptable, then I don't see any other way. 
You wrote you had plenty of free CPU :)


Speaking of CPU, and having been away from the problem for many weeks, 
I forgot about the important part. The encoding process for the data 
runs at better than real time (nearly 2x) when pulling from a file 
instead of the /dvb device. So I don't think this is thread 
bound for me; it's something else.


I'm still missing a piece of the puzzle. Something else is causing me 
grief.




In that case, have you tried using the threaded input read support of 
ffmpeg? You have to specify more than one input (add an extra dummy 
input source) and play with the -thread_queue_size option.


Regards, 
Marton



I've resolved this problem and would like to report my solution. The 
error evidently was coming from some part of the azap/dvb device. When I 
increased the buffer size as per http://panteltje.com/panteltje/dvd/ the 
problem went away. I can now capture and encode 6 channels 
simultaneously, outputting 2 video streams, 1 audio stream, and images 
for each on this computer. I presume the problem was simply that setup 
time in ffmpeg increased enough that it (temporarily) exceeded real-time 
encoding and overflowed the buffer between azap and ffmpeg.


I've been considering writing an azap-like input module for ffmpeg. I'm sure 
this is generally frowned on, but in this case I see a couple of 
advantages. One is obviously this problem with too small a buffer. The 
other is that when interference degrades the stream, ffmpeg will 
start consuming huge amounts of CPU until it actually freezes up the 
computer. Since azap knows when the signal tanks, it would appear to me 
that it could deal with the encoding issue if it had access into the 
encoder itself. Maybe some way exists already to deal with it, or ffmpeg 
should simply be made more robust against this problem?


Anyway, thanks for the help!

Deron

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user
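
For reference, the -thread_queue_size option mentioned by Marton is a
per-input option placed before the -i it applies to; a sketch, with a
placeholder DVB device path:

ffmpeg -thread_queue_size 8192 -i /dev/dvb/adapter0/dvr0 -c:v libx264 -c:a copy output.ts

As noted in the thread, the threaded input path of that era only kicks in
when more than one input is specified, hence the suggestion to add an extra
dummy input alongside the larger queue; a bigger queue gives ffmpeg more
slack when setup or encoding momentarily falls behind real time, which
matches the buffer-overflow explanation above.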


Re: [FFmpeg-user] libfaac error not found

2015-05-02 Thread Eng . Hany Ahmed

You can get it with this command line:

sudo apt-get install libfaac-dev

If that does not work, you can try compiling from source:

## GET FAAC
cd $BUILD
wget http://downloads.sourceforge.net/faac/faac-1.28.tar.bz2
tar -xjf faac-1.28.tar.bz2
cd faac-1.28
./configure --prefix=$PREFIX --enable-shared
sudo make
sudo make install
sudo make clean
sudo make distclean
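
Note that the library alone is not enough; ffmpeg itself also has to be
configured with libfaac enabled, which at the time looked roughly like this
(the prefix and any other flags are assumptions):

./configure --prefix=$PREFIX --enable-nonfree --enable-libfaac
make
sudo make install

libfaac is non-free, so --enable-nonfree is required as well; alternatively
libfdk_aac or the built-in AAC encoder can be used instead.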


Best Regards,
Web Creatives
Eng. Hany Ahmed
+201000871230 / +201096660598
http://NSlinux.com
http://Topx1.com
Skype: micro2me



Date: Sat, 2 May 2015 23:05:20 +0500
From: mtanz...@gmail.com
To: ffmpeg-user@ffmpeg.org
Subject: [FFmpeg-user] libfaac error not found

Please help me solve this problem, as I'm really tired of it. config.log
is attached.
 
-- 
Best Regards
Muhammad Tanzeem
0321-4200393

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user  
  
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user