Re: [FFmpeg-devel] [PATCH 2/2] configure: instruct MSVC 2015 to properly process UTF-8 string literals

2017-02-03 Thread Matt Oliver
On 4 February 2017 at 02:32, Hendrik Leppkes  wrote:

> On Fri, Feb 3, 2017 at 3:05 PM, James Almer  wrote:
> > On 2/3/2017 5:41 AM, Hendrik Leppkes wrote:
> >> Without the /UTF-8 switch, the MSVC compiler treats all files as in the
> >> system codepage, instead of in UTF-8, which causes UTF-8 string literals
> >> to be interpreted wrong.
> >>
> >> This switch was only introduced in VS2015 Update 2, and any earlier
> >> versions do not have an equivalent solution.
> >>
> >> Fixes fate-sub-scc on MSVC 2015+
> >> ---
> >>  configure | 3 +++
> >>  1 file changed, 3 insertions(+)
> >>
> >> diff --git a/configure b/configure
> >> index d3d652f0f4..231cc3eca7 100755
> >> --- a/configure
> >> +++ b/configure
> >> @@ -6327,6 +6327,9 @@ EOF
> >>  # Issue has been fixed in MSVC v19.00.24218.
> >>  check_cpp_condition windows.h "_MSC_FULL_VER >= 190024218" ||
> >>  check_cflags -d2SSAOptimizer-
> >> +# enable utf-8 source processing on VS2015 U2 and newer
> >> +check_cpp_condition windows.h "_MSC_FULL_VER >= 190023918" &&
> >> +add_cflags -utf-8
> >
> > Probably better use check_cflags, just in case.
> >
>
> check_cflags doesn't work, since most wrong options just cause it to
> emit a warning but not error out (although confusingly some do error
> out, like the d2 option above, since the d2 prefix directly targets
> the c2 compiler stage, and unknown options there error instead of
> warn).
> That's the whole reason I added a version check in the first place
> instead of solely using check_cflags with it.
>
> I mean, no real harm in using check_cflags together with the version
> check, but since it doesn't do anything, I figured I would save the
> extra check and a few forks.


Is there any possibility of also finding a fix for this on older MSVC
versions? If you direct me to the specific lines of code that are
mis-compiling/executing, I'll have a look.
In the meantime this patch looks good to me.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avcodec/flacdsp: Avoid undefined operations in non debug builds

2017-02-03 Thread Michael Niedermayer
On Thu, Dec 15, 2016 at 01:32:18AM +0100, Michael Niedermayer wrote:
> This fixes ubsan warnings in non debug builds by using unsigned operations
> 
> in debug builds the correct signed operations are retained so that overflows
> (which should not occur in valid files and may indicate problems in the DSP
> code or decoder) can be detected.
> 
> Alternatively they can be changed to unsigned unconditionally, but then it's
> not possible to easily detect overflows if someone wants to test
> the DSP code for overflows.
> 
> The 2nd alternative would be to leave the code as it is and accept that
> there are undefined operations in the DSP code and that ubsan output is
> full of them in some cases.
> 
> Similar changes would be needed in some other DSP routines
> 
> Suggested-by: Matt Wolenetz 
> Signed-off-by: Michael Niedermayer 
> ---
>  libavcodec/flacdsp.c | 14 +++---
>  1 file changed, 11 insertions(+), 3 deletions(-)

applied

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If you drop bombs on a foreign country and kill a hundred thousand
innocent people, expect your government to call the consequence
"unprovoked inhuman terrorist attacks" and use it to justify dropping
more bombs and killing more people. The technology changed, the idea is old.


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] ffplay: fix borderless mode on Windows

2017-02-03 Thread Marton Balint
Signed-off-by: Marton Balint 
---
 ffplay.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ffplay.c b/ffplay.c
index 6325e6f..1c9db73 100644
--- a/ffplay.c
+++ b/ffplay.c
@@ -1261,13 +1261,15 @@ static int video_open(VideoState *is)
 }
 
 if (!window) {
-int flags = SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE;
+int flags = SDL_WINDOW_SHOWN;
 if (!window_title)
 window_title = input_filename;
 if (is_full_screen)
 flags |= SDL_WINDOW_FULLSCREEN_DESKTOP;
 if (borderless)
 flags |= SDL_WINDOW_BORDERLESS;
+else
+flags |= SDL_WINDOW_RESIZABLE;
 window = SDL_CreateWindow(window_title, SDL_WINDOWPOS_UNDEFINED, 
SDL_WINDOWPOS_UNDEFINED, w, h, flags);
 SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
 if (window) {
-- 
2.10.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avcodec/flacdec: Check for invalid vlcs

2017-02-03 Thread Michael Niedermayer
On Fri, Dec 09, 2016 at 04:29:35PM +0100, Michael Niedermayer wrote:
> Signed-off-by: Michael Niedermayer 
> ---
>  libavcodec/flacdec.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)

applied


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Those who are best at talking, realize last or never when they are wrong.


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] mp3 encoder audio quality

2017-02-03 Thread Lou Logan
On Fri, Feb 3, 2017, at 02:20 PM, Lina Sharifi wrote:
> Hi all,
> I am trying to build an mp3 encoder (ffmpeg integrated with lame) in C++.
> For some reason I am not getting good quality output
[...]

ffmpeg-devel mailing list is only for submitting patches.

libav-user is the correct mailing list for help using the FFmpeg
libraries.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] lavf/matroskadec: fix is_keyframe for early Blocks

2017-02-03 Thread Vignesh Venkatasubramanian
On Fri, Feb 3, 2017 at 2:42 PM, Chris Cunningham
 wrote:
> Blocks are marked as key frames whenever the "reference" field is
> zero. This breaks for non-keyframe Blocks with a reference timestamp
> of zero.
>
> The likelihood of reference timestamp being zero is increased by a
> longstanding bug in muxing that encodes reference timestamp as the
> absolute time of the referenced frame (rather than relative to the
> current Block timestamp, as described in MKV spec).
>
> Now using INT64_MIN to denote "no reference".
>
> Reported to chromium at http://crbug.com/497889 (contains sample)
> ---
>  libavformat/matroskadec.c | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/libavformat/matroskadec.c b/libavformat/matroskadec.c
> index e6737a70b2..7223e94b55 100644
> --- a/libavformat/matroskadec.c
> +++ b/libavformat/matroskadec.c
> @@ -89,6 +89,7 @@ typedef const struct EbmlSyntax {
>  int list_elem_size;
>  int data_offset;
>  union {
> +int64_t i;
>  uint64_tu;
>  double  f;
>  const char *s;
> @@ -696,7 +697,7 @@ static const EbmlSyntax matroska_blockgroup[] = {
>  { MATROSKA_ID_SIMPLEBLOCK,EBML_BIN,  0, offsetof(MatroskaBlock, bin) 
> },
>  { MATROSKA_ID_BLOCKDURATION,  EBML_UINT, 0, offsetof(MatroskaBlock, 
> duration) },
>  { MATROSKA_ID_DISCARDPADDING, EBML_SINT, 0, offsetof(MatroskaBlock, 
> discard_padding) },
> -{ MATROSKA_ID_BLOCKREFERENCE, EBML_SINT, 0, offsetof(MatroskaBlock, 
> reference) },
> +{ MATROSKA_ID_BLOCKREFERENCE, EBML_SINT, 0, offsetof(MatroskaBlock, 
> reference), { .i = INT64_MIN } },
>  { MATROSKA_ID_CODECSTATE, EBML_NONE },
>  {  1, EBML_UINT, 0, offsetof(MatroskaBlock, 
> non_simple), { .u = 1 } },
>  { 0 }
> @@ -1071,6 +1072,9 @@ static int ebml_parse_nest(MatroskaDemuxContext 
> *matroska, EbmlSyntax *syntax,
>
>  for (i = 0; syntax[i].id; i++)
>  switch (syntax[i].type) {
> +case EBML_SINT:
> +*(int64_t *) ((char *) data + syntax[i].data_offset) = 
> syntax[i].def.i;
> +break;
>  case EBML_UINT:
>  *(uint64_t *) ((char *) data + syntax[i].data_offset) = 
> syntax[i].def.u;
>  break;
> @@ -3361,7 +3365,7 @@ static int 
> matroska_parse_cluster_incremental(MatroskaDemuxContext *matroska)
>  matroska->current_cluster_num_blocks = blocks_list->nb_elem;
>  i= blocks_list->nb_elem - 1;
>  if (blocks[i].bin.size > 0 && blocks[i].bin.data) {
> -int is_keyframe = blocks[i].non_simple ? !blocks[i].reference : 
> -1;
> +int is_keyframe = blocks[i].non_simple ? blocks[i].reference == 
> INT64_MIN : -1;
>  uint8_t* additional = blocks[i].additional.size > 0 ?
>  blocks[i].additional.data : NULL;
>  if (!blocks[i].non_simple)
> @@ -3399,7 +3403,7 @@ static int matroska_parse_cluster(MatroskaDemuxContext 
> *matroska)
>  blocks  = blocks_list->elem;
>  for (i = 0; i < blocks_list->nb_elem; i++)
>  if (blocks[i].bin.size > 0 && blocks[i].bin.data) {
> -int is_keyframe = blocks[i].non_simple ? !blocks[i].reference : 
> -1;
> +int is_keyframe = blocks[i].non_simple ? blocks[i].reference == 
> INT64_MIN : -1;
>  res = matroska_parse_block(matroska, blocks[i].bin.data,
> blocks[i].bin.size, blocks[i].bin.pos,
> cluster.timecode, blocks[i].duration,

lgtm.

> --
> 2.11.0.483.g087da7b7c-goog
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel



-- 
Vignesh
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] mp3 encoder audio quality

2017-02-03 Thread Lina Sharifi
Hi all,
I am trying to build an mp3 encoder (ffmpeg integrated with lame) in C++.
For some reason I am not getting good quality output. Here are the output
samples:

FFMPEG result:
https://drive.google.com/file/d/0B9DbYNPuSyiRYTFzRmliNWxLcnM/view?usp=sharing
Reference Sample:
https://drive.google.com/file/d/0B9DbYNPuSyiRMnlYYUtlTjEzNFU/view?usp=sharing

I am using the avcodec_encode_audio2 API as shown in the encoding/decoding
example.
I also wanted to add that I see some delay in the encoder: when I do ret
= avcodec_encode_audio2(m_pCodecCtxOut, , pInputFrame,
_output); I don't get any output, but after flushing with NULL I get some
output for 2-3 loops.

Any suggestion or idea is appreciated.

Thanks,
Lina
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH v3] avfilter/scale: refactor common code for scaling height/width expressions

2017-02-03 Thread Michael Niedermayer
On Wed, Feb 01, 2017 at 04:30:18PM -0800, Aman Gupta wrote:
> From: Aman Gupta 
> 
> Implements support for height/width expressions in vf_scale_vaapi,
> by refactoring common code into a new libavfilter/scale.c
> ---
>  libavfilter/Makefile |   8 +--
>  libavfilter/scale.c  | 152 
> +++
>  libavfilter/scale.h  |  28 
>  libavfilter/vf_scale.c   | 109 +++
>  libavfilter/vf_scale_npp.c   |  93 +++---
>  libavfilter/vf_scale_vaapi.c |  19 --
>  6 files changed, 216 insertions(+), 193 deletions(-)
>  create mode 100644 libavfilter/scale.c
>  create mode 100644 libavfilter/scale.h
> 
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 68a94be..3231f08 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -257,10 +257,10 @@ OBJS-$(CONFIG_REPEATFIELDS_FILTER)   += 
> vf_repeatfields.o
>  OBJS-$(CONFIG_REVERSE_FILTER)+= f_reverse.o
>  OBJS-$(CONFIG_ROTATE_FILTER) += vf_rotate.o
>  OBJS-$(CONFIG_SAB_FILTER)+= vf_sab.o
> -OBJS-$(CONFIG_SCALE_FILTER)  += vf_scale.o
> -OBJS-$(CONFIG_SCALE_NPP_FILTER)  += vf_scale_npp.o
> -OBJS-$(CONFIG_SCALE_VAAPI_FILTER)+= vf_scale_vaapi.o
> -OBJS-$(CONFIG_SCALE2REF_FILTER)  += vf_scale.o
> +OBJS-$(CONFIG_SCALE_FILTER)  += vf_scale.o scale.o
> +OBJS-$(CONFIG_SCALE_NPP_FILTER)  += vf_scale_npp.o scale.o
> +OBJS-$(CONFIG_SCALE_VAAPI_FILTER)+= vf_scale_vaapi.o scale.o
> +OBJS-$(CONFIG_SCALE2REF_FILTER)  += vf_scale.o scale.o
>  OBJS-$(CONFIG_SELECT_FILTER) += f_select.o
>  OBJS-$(CONFIG_SELECTIVECOLOR_FILTER) += vf_selectivecolor.o
>  OBJS-$(CONFIG_SENDCMD_FILTER)+= f_sendcmd.o
> diff --git a/libavfilter/scale.c b/libavfilter/scale.c
> new file mode 100644
> index 000..50cd442
> --- /dev/null
> +++ b/libavfilter/scale.c
> @@ -0,0 +1,152 @@
> +/*
> + * Copyright (c) 2007 Bobby Bingham
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
> + */
> +
> +#include 
> +#include "scale.h"
> +#include "libavutil/eval.h"
> +#include "libavutil/mathematics.h"
> +#include "libavutil/pixdesc.h"
> +
> +static const char *const var_names[] = {
> +"PI",
> +"PHI",
> +"E",
> +"in_w",   "iw",
> +"in_h",   "ih",
> +"out_w",  "ow",
> +"out_h",  "oh",
> +"a",
> +"sar",
> +"dar",
> +"hsub",
> +"vsub",
> +"ohsub",
> +"ovsub",
> +NULL
> +};
> +
> +enum var_name {
> +VAR_PI,
> +VAR_PHI,
> +VAR_E,
> +VAR_IN_W,   VAR_IW,
> +VAR_IN_H,   VAR_IH,
> +VAR_OUT_W,  VAR_OW,
> +VAR_OUT_H,  VAR_OH,
> +VAR_A,
> +VAR_SAR,
> +VAR_DAR,
> +VAR_HSUB,
> +VAR_VSUB,
> +VAR_OHSUB,
> +VAR_OVSUB,
> +VARS_NB
> +};
> +
> +int ff_scale_eval_dimensions(void *log_ctx,
> +const char *w_expr, const char *h_expr,
> +AVFilterLink *inlink, AVFilterLink *outlink,
> +int *ret_w, int *ret_h)
> +{
> +const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> +const AVPixFmtDescriptor *out_desc = 
> av_pix_fmt_desc_get(outlink->format);
> +const char *expr;
> +int w, h;
> +int factor_w, factor_h;
> +int eval_w, eval_h;
> +int ret;
> +double var_values[VARS_NB], res;
> +
> +var_values[VAR_PI]= M_PI;
> +var_values[VAR_PHI]   = M_PHI;
> +var_values[VAR_E] = M_E;
> +var_values[VAR_IN_W]  = var_values[VAR_IW] = inlink->w;
> +var_values[VAR_IN_H]  = var_values[VAR_IH] = inlink->h;
> +var_values[VAR_OUT_W] = var_values[VAR_OW] = NAN;
> +var_values[VAR_OUT_H] = var_values[VAR_OH] = NAN;
> +var_values[VAR_A] = (double) inlink->w / inlink->h;
> +var_values[VAR_SAR]   = inlink->sample_aspect_ratio.num ?
> +(double) inlink->sample_aspect_ratio.num / 
> inlink->sample_aspect_ratio.den : 1;
> +var_values[VAR_DAR]   = var_values[VAR_A] * var_values[VAR_SAR];
> +var_values[VAR_HSUB]  = 1 << desc->log2_chroma_w;
> +var_values[VAR_VSUB]  = 1 << desc->log2_chroma_h;
> +var_values[VAR_OHSUB] = 1 << 

[FFmpeg-devel] fixed vs floating point mp3 encoder

2017-02-03 Thread Lina Sharifi
Hi,
I am using ffmpeg integrated with libmp3lame for encoding
(AV_CODEC_ID_MP3). Is there an option to enable a fixed-point (integer)
encoder? Perhaps a codec flag?

Thanks,
Lina
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/hwcontext_vaapi: fix SEGV in vaTerminate when vaInitialize fails

2017-02-03 Thread Mark Thompson


On 03/02/17 22:44, Aman Gupta wrote:
> On Fri, Feb 3, 2017 at 12:19 PM, Mark Thompson  wrote:
> 
>> On 03/02/17 05:45, wm4 wrote:
>>> On Thu,  2 Feb 2017 09:29:13 -0800
>>> Aman Gupta  wrote:
>>>
 From: Aman Gupta 

 Program terminated with signal SIGSEGV, Segmentation fault.
 opts=opts@entry=0x0, flags=flags@entry=0) at
>> libavutil/hwcontext.c:494
 ---
  libavutil/hwcontext_vaapi.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

 diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
 index 6176bdc..0051acb 100644
 --- a/libavutil/hwcontext_vaapi.c
 +++ b/libavutil/hwcontext_vaapi.c
 @@ -961,14 +961,13 @@ static int vaapi_device_create(AVHWDeviceContext
>> *ctx, const char *device,
  return AVERROR(EINVAL);
  }

 -hwctx->display = display;
 -
  vas = vaInitialize(display, , );
  if (vas != VA_STATUS_SUCCESS) {
  av_log(ctx, AV_LOG_ERROR, "Failed to initialise VAAPI "
 "connection: %d (%s).\n", vas, vaErrorStr(vas));
  return AVERROR(EIO);
  }
 +hwctx->display = display;
  av_log(ctx, AV_LOG_VERBOSE, "Initialised VAAPI connection: "
 "version %d.%d\n", major, minor);

>>>
>>> Would that mean it doesn't free the display that was created with
>>> vaGetDisplay? Is that right?
>>>
>>> In my experiments, calling vaTerminate right after vaGetDisplay works
>>> just fine.
>>
>> Right, looking more carefully at libva that is exactly what you are meant
>> to do, and the code there is careful to make it all work.  The segfault
>> case I was thinking of here isn't exactly the same (and used the Intel
>> proprietary driver, which should probably be considered dubious), so
>> applying it was premature.
>>
>> Aman, can you explain more about the case you saw this in?
>>
> 
> I saw this when I was using libva master. vaInitialize() was failing in my
> environment (see https://github.com/01org/libva/issues/20) and after the
> failure ffmpeg crashed.
> 
> Here was the output from ffmpeg:
> 
> libva info: VA-API version 0.40.0
> libva info: va_getDriverName() returns 1
> libva error: va_getDriverName() failed with operation
> failed,driver_name=i965
> [AVHWDeviceContext @ 0x1b03d80] Failed to initialise VAAPI connection: 1
> (operation failed).
> Segmentation fault
> 
> And the backtrace:
> 
>   #0  0x00aff8a4 in vaTerminate ()
>   #1  0x00ae50ce in vaapi_device_free (ctx=) at
> libavutil/hwcontext_vaapi.c:882
>   #2  0x00ae1f9e in hwdevice_ctx_free (opaque=,
> data=) at libavutil/hwcontext.c:66
>   #3  0x00ad856f in buffer_replace (src=0x0, dst=0x7fffa26ef1b8) at
> libavutil/buffer.c:119
>   #4  av_buffer_unref (buf=buf@entry=0x7fffa26ef1f8) at
> libavutil/buffer.c:129
>   #5  0x00ae299f in av_hwdevice_ctx_create (pdevice_ref=0x170ac50
> , type=type@entry=AV_HWDEVICE_TYPE_VAAPI, device= out>,
>   opts=opts@entry=0x0, flags=flags@entry=0) at libavutil/hwcontext.c:494
>   #6  0x00400968 in vaapi_device_init (device=) at
> ffmpeg_vaapi.c:223
> 
> Definitely possible that this is a bug in libva instead, and that failure
> midway through vaInitialize() is not dealt with appropriately during
> vaTerminate().
> 
> Feel free to revert the commit.

Can you build libva with debug enabled and clarify exactly how and where it's 
failing there?  From your description on github I'm inclined to think it is 
some bad interaction in libva with running as root, but it would be good to be 
sure.  (And we should revert the change here.)

Thanks,

- Mark
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] lavf/matroskadec: fix is_keyframe for early Blocks

2017-02-03 Thread Chris Cunningham
Blocks are marked as key frames whenever the "reference" field is
zero. This breaks for non-keyframe Blocks with a reference timestamp
of zero.

The likelihood of reference timestamp being zero is increased by a
longstanding bug in muxing that encodes reference timestamp as the
absolute time of the referenced frame (rather than relative to the
current Block timestamp, as described in MKV spec).

Now using INT64_MIN to denote "no reference".

Reported to chromium at http://crbug.com/497889 (contains sample)
---
 libavformat/matroskadec.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/libavformat/matroskadec.c b/libavformat/matroskadec.c
index e6737a70b2..7223e94b55 100644
--- a/libavformat/matroskadec.c
+++ b/libavformat/matroskadec.c
@@ -89,6 +89,7 @@ typedef const struct EbmlSyntax {
 int list_elem_size;
 int data_offset;
 union {
+int64_t i;
 uint64_tu;
 double  f;
 const char *s;
@@ -696,7 +697,7 @@ static const EbmlSyntax matroska_blockgroup[] = {
 { MATROSKA_ID_SIMPLEBLOCK,EBML_BIN,  0, offsetof(MatroskaBlock, bin) },
 { MATROSKA_ID_BLOCKDURATION,  EBML_UINT, 0, offsetof(MatroskaBlock, 
duration) },
 { MATROSKA_ID_DISCARDPADDING, EBML_SINT, 0, offsetof(MatroskaBlock, 
discard_padding) },
-{ MATROSKA_ID_BLOCKREFERENCE, EBML_SINT, 0, offsetof(MatroskaBlock, 
reference) },
+{ MATROSKA_ID_BLOCKREFERENCE, EBML_SINT, 0, offsetof(MatroskaBlock, 
reference), { .i = INT64_MIN } },
 { MATROSKA_ID_CODECSTATE, EBML_NONE },
 {  1, EBML_UINT, 0, offsetof(MatroskaBlock, 
non_simple), { .u = 1 } },
 { 0 }
@@ -1071,6 +1072,9 @@ static int ebml_parse_nest(MatroskaDemuxContext 
*matroska, EbmlSyntax *syntax,
 
 for (i = 0; syntax[i].id; i++)
 switch (syntax[i].type) {
+case EBML_SINT:
+*(int64_t *) ((char *) data + syntax[i].data_offset) = 
syntax[i].def.i;
+break;
 case EBML_UINT:
 *(uint64_t *) ((char *) data + syntax[i].data_offset) = 
syntax[i].def.u;
 break;
@@ -3361,7 +3365,7 @@ static int 
matroska_parse_cluster_incremental(MatroskaDemuxContext *matroska)
 matroska->current_cluster_num_blocks = blocks_list->nb_elem;
 i= blocks_list->nb_elem - 1;
 if (blocks[i].bin.size > 0 && blocks[i].bin.data) {
-int is_keyframe = blocks[i].non_simple ? !blocks[i].reference : -1;
+int is_keyframe = blocks[i].non_simple ? blocks[i].reference == 
INT64_MIN : -1;
 uint8_t* additional = blocks[i].additional.size > 0 ?
 blocks[i].additional.data : NULL;
 if (!blocks[i].non_simple)
@@ -3399,7 +3403,7 @@ static int matroska_parse_cluster(MatroskaDemuxContext 
*matroska)
 blocks  = blocks_list->elem;
 for (i = 0; i < blocks_list->nb_elem; i++)
 if (blocks[i].bin.size > 0 && blocks[i].bin.data) {
-int is_keyframe = blocks[i].non_simple ? !blocks[i].reference : -1;
+int is_keyframe = blocks[i].non_simple ? blocks[i].reference == 
INT64_MIN : -1;
 res = matroska_parse_block(matroska, blocks[i].bin.data,
blocks[i].bin.size, blocks[i].bin.pos,
cluster.timecode, blocks[i].duration,
-- 
2.11.0.483.g087da7b7c-goog

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/hwcontext_vaapi: fix SEGV in vaTerminate when vaInitialize fails

2017-02-03 Thread Aman Gupta
On Fri, Feb 3, 2017 at 12:19 PM, Mark Thompson  wrote:

> On 03/02/17 05:45, wm4 wrote:
> > On Thu,  2 Feb 2017 09:29:13 -0800
> > Aman Gupta  wrote:
> >
> >> From: Aman Gupta 
> >>
> >> Program terminated with signal SIGSEGV, Segmentation fault.
> >> opts=opts@entry=0x0, flags=flags@entry=0) at
> libavutil/hwcontext.c:494
> >> ---
> >>  libavutil/hwcontext_vaapi.c | 3 +--
> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
> >> index 6176bdc..0051acb 100644
> >> --- a/libavutil/hwcontext_vaapi.c
> >> +++ b/libavutil/hwcontext_vaapi.c
> >> @@ -961,14 +961,13 @@ static int vaapi_device_create(AVHWDeviceContext
> *ctx, const char *device,
> >>  return AVERROR(EINVAL);
> >>  }
> >>
> >> -hwctx->display = display;
> >> -
> >>  vas = vaInitialize(display, , );
> >>  if (vas != VA_STATUS_SUCCESS) {
> >>  av_log(ctx, AV_LOG_ERROR, "Failed to initialise VAAPI "
> >> "connection: %d (%s).\n", vas, vaErrorStr(vas));
> >>  return AVERROR(EIO);
> >>  }
> >> +hwctx->display = display;
> >>  av_log(ctx, AV_LOG_VERBOSE, "Initialised VAAPI connection: "
> >> "version %d.%d\n", major, minor);
> >>
> >
> > Would that mean it doesn't free the display that was created with
> > vaGetDisplay? Is that right?
> >
> > In my experiments, calling vaTerminate right after vaGetDisplay works
> > just fine.
>
> Right, looking more carefully at libva that is exactly what you are meant
> to do, and the code there is careful to make it all work.  The segfault
> case I was thinking of here isn't exactly the same (and used the Intel
> proprietary driver, which should probably be considered dubious), so
> applying it was premature.
>
> Aman, can you explain more about the case you saw this in?
>

I saw this when I was using libva master. vaInitialize() was failing in my
environment (see https://github.com/01org/libva/issues/20) and after the
failure ffmpeg crashed.

Here was the output from ffmpeg:

libva info: VA-API version 0.40.0
libva info: va_getDriverName() returns 1
libva error: va_getDriverName() failed with operation
failed,driver_name=i965
[AVHWDeviceContext @ 0x1b03d80] Failed to initialise VAAPI connection: 1
(operation failed).
Segmentation fault

And the backtrace:

  #0  0x00aff8a4 in vaTerminate ()
  #1  0x00ae50ce in vaapi_device_free (ctx=) at
libavutil/hwcontext_vaapi.c:882
  #2  0x00ae1f9e in hwdevice_ctx_free (opaque=,
data=) at libavutil/hwcontext.c:66
  #3  0x00ad856f in buffer_replace (src=0x0, dst=0x7fffa26ef1b8) at
libavutil/buffer.c:119
  #4  av_buffer_unref (buf=buf@entry=0x7fffa26ef1f8) at
libavutil/buffer.c:129
  #5  0x00ae299f in av_hwdevice_ctx_create (pdevice_ref=0x170ac50
, type=type@entry=AV_HWDEVICE_TYPE_VAAPI, device=,
  opts=opts@entry=0x0, flags=flags@entry=0) at libavutil/hwcontext.c:494
  #6  0x00400968 in vaapi_device_init (device=) at
ffmpeg_vaapi.c:223

Definitely possible that this is a bug in libva instead, and that failure
midway through vaInitialize() is not dealt with appropriately during
vaTerminate().

Feel free to revert the commit.

Aman


> - Mark
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
2017-02-03 23:18 GMT+02:00 Compn :

> On Fri, 3 Feb 2017 15:46:20 +0200, Ivo Andonov 
> wrote:
>
> > I successfully used a modified Pinetron library on Windows to use my own
> > software for decoding the stream. While fiddling with the modification I
> > saw they are using the statically linked FFmpeg API.
>
> the dvr company ships ffmpeg? they must ship ffmpeg source as well, the
> modified ffmpeg source may contain the patch needed to play such
> dvr files.
>
> where can we see this pinetron library ?
>
> -compn
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>

I thought the same as well, but I'm not too familiar with the licensing terms...
I tried in vain to find any sources.
They do not ship ffmpeg directly (as a separate library). They modify the
source and use it statically linked in their projects.
This is a link to the IE ActiveX for playing the stream (displaying DVR
cams): http://www.dvrstation.com/pdvratl.php?vendor=0#version=1,0,1,26 This
is also the library I modded in order to use the decoder on Windows
platforms, before I decided to spend some time researching the differences
from the MPEG4 standard so I could use the stream in a Linux environment.

This is a link to a 64-bit Linux app:
http://pinetron.ru/files/software/cms-lite-linux.zip Never actually tried
it. The libpapi-shared.so.* files are clearly based on the ffmpeg source.

This is the android app: http://www.apkmonk.com/app/com.pinetron.TouchCMS/
One library in there for Arm, also clearly based on ffmpeg.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Compn
On Fri, 3 Feb 2017 15:46:20 +0200, Ivo Andonov 
wrote:

> I successfully used a modified Pinetron library on Windows to use my own
> software for decoding the stream. While fiddling with the modification I
> saw they are using the statically linked FFmpeg API.

the dvr company ships ffmpeg? they must ship ffmpeg source as well, the
modified ffmpeg source may contain the patch needed to play such
dvr files.

where can we see this pinetron library ?

-compn
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/hwcontext_vaapi: fix SEGV in vaTerminate when vaInitialize fails

2017-02-03 Thread Mark Thompson
On 03/02/17 05:45, wm4 wrote:
> On Thu,  2 Feb 2017 09:29:13 -0800
> Aman Gupta  wrote:
> 
>> From: Aman Gupta 
>>
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> opts=opts@entry=0x0, flags=flags@entry=0) at libavutil/hwcontext.c:494
>> ---
>>  libavutil/hwcontext_vaapi.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
>> index 6176bdc..0051acb 100644
>> --- a/libavutil/hwcontext_vaapi.c
>> +++ b/libavutil/hwcontext_vaapi.c
>> @@ -961,14 +961,13 @@ static int vaapi_device_create(AVHWDeviceContext *ctx, 
>> const char *device,
>>  return AVERROR(EINVAL);
>>  }
>>  
>> -hwctx->display = display;
>> -
>>  vas = vaInitialize(display, , );
>>  if (vas != VA_STATUS_SUCCESS) {
>>  av_log(ctx, AV_LOG_ERROR, "Failed to initialise VAAPI "
>> "connection: %d (%s).\n", vas, vaErrorStr(vas));
>>  return AVERROR(EIO);
>>  }
>> +hwctx->display = display;
>>  av_log(ctx, AV_LOG_VERBOSE, "Initialised VAAPI connection: "
>> "version %d.%d\n", major, minor);
>>  
> 
> Would that mean it doesn't free the display that was created with
> vaGetDisplay? Is that right?
> 
> In my experiments, calling vaTerminate right after vaGetDisplay works
> just fine.

Right, looking more carefully at libva that is exactly what you are meant to 
do, and the code there is careful to make it all work.  The segfault case I was 
thinking of here isn't exactly the same (and used the Intel proprietary driver, 
which should probably be considered dubious), so applying it was premature.

Aman, can you explain more about the case you saw this in?

- Mark
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] lavf/matroskadec: fix is_keyframe for early Blocks

2017-02-03 Thread wm4
On Fri, 3 Feb 2017 10:04:34 -0800
Vignesh Venkatasubramanian  wrote:

> On Thu, Feb 2, 2017 at 10:16 PM, wm4  wrote:
> > On Thu, 2 Feb 2017 10:47:52 -0800
> > Vignesh Venkatasubramanian  wrote:
> >  
> >> On Tue, Jan 31, 2017 at 10:18 PM, wm4  wrote:  
> >> >
> >> > On Tue, 31 Jan 2017 12:02:01 -0800
> >> > Chris Cunningham  wrote:
> >> >  
> >> > > Thanks for taking a look.
> >> > >
> >> > > Definitely missing a "break;" - will fix in subsequent patch.
> >> > >
> >> > > Agree timestamps should be relative (didn't realize this). Vignesh 
> >> > > points
> >> > > out that "0" in the test file is due to a bug in ffmpeg (and probably 
> >> > > other
> >> > > muxers) where this value is not written as a relative timestamp, but
> >> > > instead as the timestamp of the previous frame. https://github.com/FFmp
> >> > > eg/FFmpeg/blob/master/libavformat/matroskaenc.c#L2053
> >> > > 
> >> > >   
> >> >
> >> > Just a few lines below this reads
> >> >
> >> >mkv->last_track_timestamp[track_number - 1] = ts - mkv->cluster_pts;
> >> >
> >> > which looks like it intends to write a relative value. Though "ts" can
> >> > be a DTS, while the other value is always a PTS.  
> >>
> >> Just to clarify: This line makes the timestamp relative to the
> >> cluster's timestamp. Not relative to the block its referencing (which
> >> is what the spec says if i understand it correctly).  
> >
> > Yeah, the current spec just says "Timestamp of another frame used as a
> > reference (ie: B or P frame). The timestamp is relative to the  block
> > it's attached to."
> >
> > Is this a bug? Did FFmpeg always mux this incorrectly?
> >  
> 
> Technically it is a bug. Yes ffmpeg has always muxed it this way. But
> see the reply below.
> 
> > Is there even an implementation that uses the value written to block
> > reference elements? (And not in a trivial way like FFmpeg.)  
> 
> AFAIK, there is no practical use for the value written into
> BlockReference. All the WebM codecs (vp8, vp9, vorbis, opus) have the
> reference frame information in their bitstream and do not care about
> what is specified in the container. In fact, in case of VP9 there can
> be multiple reference frames and i'm not sure if there's even a way to
> specify that in the container. So far, the only use of this element
> has been to determine whether or not the Block is a keyframe.

Yes, that seems to be pretty much everyone's opinion about this
(including my own).

Still feels weird that FFmpeg is writing essentially arbitrary values.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread wm4
On Fri, 3 Feb 2017 18:37:52 +0100
u-9...@aetey.se wrote:

> On Fri, Feb 03, 2017 at 05:51:19PM +0100, Hendrik Leppkes wrote:
> > On Fri, Feb 3, 2017 at 5:36 PM,   wrote:  
> > > So get_format() is not a solution, no matter how good or misleading
> > > its documentation is.  
> > 
> > "The application" can implement the get_format callback anyway it
> > wants, ask the user by carrier pigeon for all we care - but such user
> > interaction does simply not belong into the avcodec library - hence we
> > have the get_format callback for that, so that the decision can be
> > moved outside of the codec library and into the calling user-code.
> > Clearly whatever application you are working with should implement
> > these choices, and you should not try to shoe-horn this into
> > libavcodec, where it clearly does not belong.  
> 
> You suggest I should shoe-horn this into every application.
> Very helpful, thank you :)
> 
> As for "clearly", it is your personal feeling, not an argument.
> Seriously.
> 
> > We do not want hacks around the established systems just because it
> > doesn't fit your use-case or workflow, sorry.  
> 
> You should listen more to those who actually live in their workflows.
> That is where your code is being useful. Or not.
> 
> I happen to be in a suitable position (using the stuff and arranging it
> for others) to estimate what is useful.
> 
> Based on this I am trying to help ffmpeg.
> You are certainly free to refuse the help, for any reason :)

He didn't refuse help, he just explained how it is. We helped you
plenty by trying to explain the mechanisms of this library to you.

With your special use-case (special as in does not fit into the API
conventions of libavcodec), you might be better off with creating your
own standalone cinepak decoder. That's not a bad thing; there's plenty
of multimedia software that does not use libavcodec. Part of the reason
is that one library can't make everyone happy and can't fit all
use-cases.

Back to being "constructive" - the only way your code could get
accepted is by implementing the get_format callback. There's a
precedent, 8bps.c, which would also show you how it's done. But maybe
it should be at most only 1 other output format that handles the most
efficient case, or so. There's really no reason to add them all.


Re: [FFmpeg-devel] [PATCH] lavf/matroskadec: fix is_keyframe for early Blocks

2017-02-03 Thread Vignesh Venkatasubramanian
On Thu, Feb 2, 2017 at 10:16 PM, wm4  wrote:
> On Thu, 2 Feb 2017 10:47:52 -0800
> Vignesh Venkatasubramanian  wrote:
>
>> On Tue, Jan 31, 2017 at 10:18 PM, wm4  wrote:
>> >
>> > On Tue, 31 Jan 2017 12:02:01 -0800
>> > Chris Cunningham  wrote:
>> >
>> > > Thanks for taking a look.
>> > >
>> > > Definitely missing a "break;" - will fix in subsequent patch.
>> > >
>> > > Agree timestamps should be relative (didn't realize this). Vignesh points
>> > > out that "0" in the test file is due to a bug in ffmpeg (and probably 
>> > > other
>> > > muxers) where this value is not written as a relative timestamp, but
>> > > instead as the timestamp of the previous frame. https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/matroskaenc.c#L2053
>> > > 
>> >
>> > Just a few lines below this reads
>> >
>> >mkv->last_track_timestamp[track_number - 1] = ts - mkv->cluster_pts;
>> >
>> > which looks like it intends to write a relative value. Though "ts" can
>> > be a DTS, while the other value is always a PTS.
>>
>> Just to clarify: This line makes the timestamp relative to the
>> cluster's timestamp. Not relative to the block its referencing (which
>> is what the spec says if i understand it correctly).
>
> Yeah, the current spec just says "Timestamp of another frame used as a
> reference (ie: B or P frame). The timestamp is relative to the  block
> it's attached to."
>
> Is this a bug? Did FFmpeg always mux this incorrectly?
>

Technically it is a bug. Yes ffmpeg has always muxed it this way. But
see the reply below.

> Is there even an implementation that uses the value written to block
> reference elements? (And not in a trivial way like FFmpeg.)

AFAIK, there is no practical use for the value written into
BlockReference. All the WebM codecs (vp8, vp9, vorbis, opus) have the
reference frame information in their bitstream and do not care about
what is specified in the container. In fact, in case of VP9 there can
be multiple reference frames and i'm not sure if there's even a way to
specify that in the container. So far, the only use of this element
has been to determine whether or not the Block is a keyframe.




-- 
Vignesh


Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread u-9iep
On Fri, Feb 03, 2017 at 05:51:19PM +0100, Hendrik Leppkes wrote:
> On Fri, Feb 3, 2017 at 5:36 PM,   wrote:
> > So get_format() is not a solution, no matter how good or misleading
> > its documentation is.
> 
> "The application" can implement the get_format callback anyway it
> wants, ask the user by carrier pigeon for all we care - but such user
> interaction does simply not belong into the avcodec library - hence we
> have the get_format callback for that, so that the decision can be
> moved outside of the codec library and into the calling user-code.
> Clearly whatever application you are working with should implement
> these choices, and you should not try to shoe-horn this into
> libavcodec, where it clearly does not belong.

You suggest I should shoe-horn this into every application.
Very helpful, thank you :)

As for "clearly", it is your personal feeling, not an argument.
Seriously.

> We do not want hacks around the established systems just because it
> doesn't fit your use-case or workflow, sorry.

You should listen more to those who actually live in their workflows.
That is where your code is being useful. Or not.

I happen to be in a suitable position (using the stuff and arranging it
for others) to estimate what is useful.

Based on this I am trying to help ffmpeg.
You are certainly free to refuse the help, for any reason :)

Rune



Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
2017-02-03 18:16 GMT+02:00 Carl Eugen Hoyos :

> 2017-02-03 16:54 GMT+01:00 Ivo Andonov :
> > 2017-02-03 16:59 GMT+02:00 Carl Eugen Hoyos :
>
> >> The encoder sets no user data, so identification is not as simple as
> >> hoped: How do subsequent frames start? Do you know what the
> >> first bytes mean?
> >>
> > Yes, no user data is set and as such automatic identification is not
> > possible. The raw stream coming from the DVR is prefixed by a proprietary
> > frame header (that I filter) which is not MPEG stream compliant and is
> > meant for the viewer/recorder (timestamp, keyframe flag, frame data size
> > etc).
>
> How would another owner of this DVR get the streams?
>

Now that is a separate question. If it has to be integrated in the ffmpeg
suite, I can provide a PHP script that connects to the DVR and fetches the
stream. This can be used by someone to write a libav protocol, I suppose. I
just do not feel experienced enough to program it in C and integrate
it into ffmpeg. Let me know if I should open a new thread for this.


>
> > So far I'm attempting to implement the decoding by explicitly specifying
> to
> > ffmpeg the video codec to use (-vcodec xvid_alogics).
>
> An easier solution is to define a new "bug" in libavcodec/options_table.h.
>

Carl, thanks for opening my eyes on this! Of course that's a much easier
and neater solution. I just plunged into the source code, and a lot of
the overall workings are still unknown to me.
Works as expected!

Excuse my ignorance on this, but how are these options specified via the
libavcodec api when opening a decoder context?



> Carl Eugen


Re: [FFmpeg-devel] [PATCH] matroskaenc: Add support for writing video projection.

2017-02-03 Thread Aaron Colwell
On Fri, Feb 3, 2017 at 3:28 AM Vittorio Giovara 
wrote:

> On Thu, Feb 2, 2017 at 9:34 PM, James Almer  wrote:
> > On 1/31/2017 12:40 PM, Aaron Colwell wrote:
> >>
> >>
> >> On Tue, Jan 31, 2017 at 2:12 AM Vittorio Giovara <
> vittorio.giov...@gmail.com > wrote:
> >>
> >> On Sat, Jan 28, 2017 at 4:13 AM, James Almer  > wrote:
> >>> On 1/27/2017 11:21 PM, Aaron Colwell wrote:
>  On Fri, Jan 27, 2017 at 5:46 PM James Almer  > wrote:
> 
>  yeah. I'm a little confused why the demuxing code didn't implement
> this to
>  begin with.
> >>>
> >>> Vittorio (The one that implemented AVSphericalMapping) tried to add
> this at
> >>> first, but then removed it because he wasn't sure if he was doing it
> right.
> >>
> >> Hi,
> >> yes this was included initially but then we found out what those
> >> fields were really for, and I didn't want to make the users get as
> >> confused as we were. As a matter of fact Aaron I mentioned this to you
> >> when I proposed that we probably should have separated the classic
> >> equi projection from the tiled one in the specification, in order to
> >> simplify the implementation.
> >>
> >>
> >> Like I said before, it is not a different projection. It is still
> equirectangular and those parameters just crop the projection. It is very
> simple to just verify that the fields are zero if you don't want to support
> the cropped version.
>
> Hello,
> I'm sorry but I heavily disagree. The tiled equirectangular projection
> is something that cannot be used standalone; you have to do additional
> mathematics and take into account different files or streams to
> generate a "normal" or full-frame equirectangular projection. Having a
> separate type allows including extensions such as the bounds fields,
> which can be safely ignored by every user that does not need a tiled
> projection.
>

I still think you don't understand what these fields do given what you say
here. Yes there is a little more math. At the end of the day all these
fields do is specify the min & max for the latitude & longitude. This
essentially translates to adding scale factors and offsets in your shader
or something similar in your 3D geometry creation logic. I get it if
implementations don't want to do this small bit of extra work, but saying
this is a different type seems strange because you wouldn't do this when
talking about cropped 2D images.


>
> It is too late to change the spec, but I do believe that the usecase
> is different enough to add a new type, in order to not overcomplicate
> the implementation.
>

It feels like you are just moving the problem to the demuxers and muxers
here. Adding a new type means all demuxers will have to contain logic to
generate these different types and all muxers will have to contain logic to
collapse these types back down to a single value.

I don't really want to keep arguing about this. If folks really want
different types then I'll do it just because I want to get reading and
writing of metadata working end-to-end. I'd like to make a small request to
use the term "cropped equirectangular" instead of "tiled equirectangular"
but I don't feel too strongly about that.


>
> > I know you're the one behind the spec in question, but wouldn't it
> be a
> > better idea to wait until AVSphericalMapping gets a way to propagate
> this
> > kind of information before adding support for muxing video projection
> > elements? Or maybe try to implement it right now...
> >
> 
>  I'm happy to implement support for the projection specific info. What
> would
>  be the best way to proceed. It seems like I could just place a union
> with
>  projection specific structs in AVSphericalMapping. I'd also like some
> >>>
> >>> I'm CCing Vittorio so he can chim in. I personally have no real
> preference.
> >>
> >> The best way in my opinion is to add a third type, such as
> >> AV_SPHERICAL_TILED_EQUI, and add the relevant fields in
> >> AVSphericalMapping, with a clear description about the actual use case
> >> for them, mentioning that they are used only in format. By the way,
> >> why do you mention adding a union? I think four plain fields should
> >> do.
> >>
> >>
> >> I don't think it is worth having the extra enum value for this. All the
> cropped fields do is control how you generate the spherical mesh or control
> the shader used to render the projection. If players don't want to support
> it they can just check to see that all the fields are zero and error out if
> not.
>
> Why would you have them check these fields every time, when this can
> be implicitly determined by the type semantics? I'm pretty sure API
> users prefer this scenario
>
> * check projection type
> -> if normal_equi -> project it
> -> if tiled_equi -> read additional data -> project it
>
> rather than
>
> 

Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread Hendrik Leppkes
On Fri, Feb 3, 2017 at 5:36 PM,   wrote:
>
>> So, what I don't
>> understand then, is why below you're claiming that get_format() doesn't do
>> this. This is exactly what get_format() does. Why do you believe
>> get_format() isn't capable of helping you accomplish this?
>
> get_format() apparently returns to the caller a suitable format
> from the supplied list. I read the documentation as carefully as
> I can and my interpretation is that it is the application who
> is to define the function and that it is the ffmpeg framework which
> supplies the list of formats supported by the codec. I would appreciate it
> if somebody corrects me and/or improves the documentation.
>
> But actually this does not matter in the particular situation.
> None of the parties (the decoder, the framework, the application)
> has all the necessary knowledge to be able to make the optimal choice.
> It is the human operator/administrator who may know. The choice depends
> among others e.g. on how fast swscaler is on the particular hardware
> (use it or not?), how much is the contents sensitive to color
> depths and so on.
> How can get_format() help with answering these questions?
>
> So get_format() is not a solution, no matter how good or misleading
> its documentation is.

"The application" can implement the get_format callback anyway it
wants, ask the user by carrier pigeon for all we care - but such user
interaction does simply not belong into the avcodec library - hence we
have the get_format callback for that, so that the decision can be
moved outside of the codec library and into the calling user-code.
Clearly whatever application you are working with should implement
these choices, and you should not try to shoe-horn this into
libavcodec, where it clearly does not belong.

We do not want hacks around the established systems just because it
doesn't fit your use-case or workflow, sorry.

- Hendrik


Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread u-9iep
Hello Ronald,

On Fri, Feb 03, 2017 at 08:52:53AM -0500, Ronald S. Bultje wrote:
> > I thought about generating the bodies of the functions from something
> > like a template but it did not feel like this would make the code more
> > understandable aka maintainable. So I wonder if there is any point in
> > doing this, given the same binary result.
> 
> wm4 has tried to make this point several times, it's about maintainability.

I really do understand.

> Let me explain, otherwise we'll keep going back and forth.

Thanks for the constructive discussion, hopefully I can make my point
clear here, and help you see the different perspective:

> Let's assume you have a function that does a, b and c, where b depends on
> the pix fmt. You're currently writing it as such:
> 
> function_565() {
> a
> b_565
> c
> }
> 
> function_24() {
> a
> b_24
> c
> }
> 
> function_32() {
> a
> b_32
> c
> }

Rather (or even more complicated) :

function_565() {
a; x; b; y; c; z; d;
}

function_24() {
a; p; b; q; c; r; d;
}

function_32() {
a; i; b; j; c; k; d;
}

Now, a small change in any of "a", "b", "c" would not necessarily have
the same consequence for all the functions, so templating would have
made it _harder_ to make safe changes, if we'd use something like

TEMPLATE(){ a; X; b; Y; c; Z; d; }

according to your suggestion.

Do you follow me? The "common" code happens to be the same _now_ but may
have to be different due to a change in any of a,b,c,d,x,y,z,p,q,r,i,j,k.

It would be also harder to follow the code flow.

> It should be pretty obvious that a and c are triplicated in this example.

Only in this static situation, which is the opposite of maintainability.

> Now compare it with this:

[...]

> It might look larger, but that's merely because we're not writing out a and

It is not the size but the readability and change safety.

> Conclusion: better to maintain, identical performance. Only advantages, no
> disadvantages. Should be easy to accept, right?

Hope you see from the above: this is exactly why the code is structured
the way it does.

> So, what I don't
> understand then, is why below you're claiming that get_format() doesn't do
> this. This is exactly what get_format() does. Why do you believe
> get_format() isn't capable of helping you accomplish this?

get_format() apparently returns to the caller a suitable format
from the supplied list. I read the documentation as carefully as
I can and my interpretation is that it is the application who
is to define the function and that it is the ffmpeg framework which
supplies the list of formats supported by the codec. I would appreciate it
if somebody corrects me and/or improves the documentation.

But actually this does not matter in the particular situation.
None of the parties (the decoder, the framework, the application)
has all the necessary knowledge to be able to make the optimal choice.
It is the human operator/administrator who may know. The choice depends
among others e.g. on how fast swscaler is on the particular hardware
(use it or not?), how much is the contents sensitive to color
depths and so on.
How can get_format() help with answering these questions?

So get_format() is not a solution, no matter how good or misleading
its documentation is.

> > I did my best to look for a better way but it does not seem to be existing.

> Look into private options, for one. But really, get_format() solves this. I

We must have been thinking about different problems? The problem of
choosing the optimal format to decode to is not solvable with get_format()
(unless get_format() asks the human or e.g. checks an environment
variable or uses another out-of-band channel).

> can elaborate more if you really want me to, as above, but I feel you
> haven't really looked into it. I know this feeling, sometimes you've got
> something that works for you and you just want to commit it and be done
> with it.

:) I know this too, but I am quite confident in having done my homework
properly.

You try to explain your point and educate "the newcomer", which is very
helpful and appreciated.

Hope you are also prepared to see cases where your assumptions
do not hold.

> But that's not going to happen. The env variable will never be committed.
> Never. I guarantee it with 100% certainty. If you'd like this general

Then it is a pity for ffmpeg, rejecting a useful feature without having any
comparable functionality.

Why not reserve a namespace in envvars, like
 FFMPEG_VIDEODEC_PIXFMT__
?
This would be (of course) much more general and useful than sticking
with the name I picked. Then some other codec could easily and coherently
take advantage of the same method as well.

There are possibly other (not necessarily pixel format related) scenarios
where corresponding namespaced envvars would help as out-of-band channels.

> feature to be picked up in the main codebase, you'll need to change at the
> very, very least how it is exposed as a selectable option. Env opts are not
> acceptable, they have 

Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Carl Eugen Hoyos
2017-02-03 16:54 GMT+01:00 Ivo Andonov :
> 2017-02-03 16:59 GMT+02:00 Carl Eugen Hoyos :

>> The encoder sets no user data, so identification is not as simple as
>> hoped: How do subsequent frames start? Do you know what the
>> first bytes mean?
>>
> Yes, no user data is set and as such automatic identification is not
> possible. The raw stream coming from the DVR is prefixed by a proprietary
> frame header (that I filter) which is not MPEG stream compliant and is
> meant for the viewer/recorder (timestamp, keyframe flag, frame data size
> etc).

How would another owner of this DVR get the streams?

> So far I'm attempting to implement the decoding by explicitly specifying to
> ffmpeg the video codec to use (-vcodec xvid_alogics).

An easier solution is to define a new "bug" in libavcodec/options_table.h.

Carl Eugen


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
2017-02-03 17:54 GMT+02:00 Ivo Andonov :

> 2017-02-03 16:59 GMT+02:00 Carl Eugen Hoyos :
>
>> 2017-02-03 15:14 GMT+01:00 Ivo Andonov :
>> > 2017-02-03 15:58 GMT+02:00 Carl Eugen Hoyos :
>>
>> >> > I've got some old Pinetron DVRs that are supposed to produce a MPEG4
>> >> > bitstream. Indeed they are but no non-Pinetron software can decode
>> it.
>> >>
>> >> Please provide a sample!
>> >>
>> >> > I do not know what is the best approach for registering this as a new
>> >> > decoder
>> >>
>> >> It should be possible to identify the codec while reading the
>> bitstream.
>>
>> The encoder sets no user data, so identification is not as simple as
>> hoped: How do subsequent frames start? Do you know what the
>> first bytes mean?
>>
>>
> Yes, no user data is set and as such automatic identification is not
> possible. The raw stream coming from the DVR is prefixed by a proprietary
> frame header (that I filter) which is not MPEG stream compliant and is
> meant for the viewer/recorder (timestamp, keyframe flag, frame data size
> etc).
>
> So far I'm attempting to implement the decoding by explicitly specifying to
> ffmpeg the video codec to use (-vcodec xvid_alogics). xvid_alogics AVCodec
> is initialized the same way the mpeg4 one, just a different id. In the
> decode_init I change the avctx->codec_id to AV_CODEC_ID_MPEG4 and set a
> flag in workaround_bugs that I later use in the initially submitted block
> condition code.
> However it seems somewhere else in the code avctx->codec_id reverts back
> to avctx->codec->id and as a result I end up with a "Bad picture start
> code" due to ff_h263_decode_frame calling ff_h263_decode_picture_header
> instead of ff_mpeg4_decode_picture_header.
> I forgot to mention that my changes are against ffmpeg 2.6. (libavcodec
> 56.26.100)
>
>

I do not know if this is the best approach but I ended up with:

mpeg4videodec.c:

static av_cold int alogics_decode_init(AVCodecContext *avctx) {
    avctx->workaround_bugs |= FF_BUG_ALOGICS;
    return decode_init(avctx);
}

and

AVCodec ff_xvid_alogics_decoder = {
.name  = "xvid_alogics",
.long_name = NULL_IF_CONFIG_SMALL("XVID Alogics"),
.type  = AVMEDIA_TYPE_VIDEO,
.id= AV_CODEC_ID_MPEG4,
.priv_data_size= sizeof(Mpeg4DecContext),
.init  = alogics_decode_init,
.close = ff_h263_decode_end,
.decode= ff_h263_decode_frame,
.capabilities  = CODEC_CAP_DRAW_HORIZ_BAND | CODEC_CAP_DR1 |
 CODEC_CAP_TRUNCATED | CODEC_CAP_DELAY |
 CODEC_CAP_FRAME_THREADS,
.flush = ff_mpeg_flush,
.max_lowres= 3,
.pix_fmts  = ff_h263_hwaccel_pixfmt_list_420,
.profiles  = NULL_IF_CONFIG_SMALL(mpeg4_video_profiles),
.update_thread_context =
ONLY_IF_THREADS_ENABLED(mpeg4_update_thread_context),
.priv_class = _class,
};

Then the block of code in mpeg4video.h before pred = FASTDIV is enclosed in
an if-else of:

if (s->workaround_bugs & FF_BUG_ALOGICS) {
  //new code
} else {
  //original code
}

Ivo


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
2017-02-03 16:59 GMT+02:00 Carl Eugen Hoyos :

> 2017-02-03 15:14 GMT+01:00 Ivo Andonov :
> > 2017-02-03 15:58 GMT+02:00 Carl Eugen Hoyos :
>
> >> > I've got some old Pinetron DVRs that are supposed to produce a MPEG4
> >> > bitstream. Indeed they are but no non-Pinetron software can decode it.
> >>
> >> Please provide a sample!
> >>
> >> > I do not know what is the best approach for registering this as a new
> >> > decoder
> >>
> >> It should be possible to identify the codec while reading the bitstream.
>
> The encoder sets no user data, so identification is not as simple as
> hoped: How do subsequent frames start? Do you know what the
> first bytes mean?
>
>
Yes, no user data is set and as such automatic identification is not
possible. The raw stream coming from the DVR is prefixed by a proprietary
frame header (that I filter) which is not MPEG stream compliant and is
meant for the viewer/recorder (timestamp, keyframe flag, frame data size
etc).

So far I'm attempting to implement the decoding by explicitly specifying to
ffmpeg the video codec to use (-vcodec xvid_alogics). xvid_alogics AVCodec
is initialized the same way the mpeg4 one, just a different id. In the
decode_init I change the avctx->codec_id to AV_CODEC_ID_MPEG4 and set a
flag in workaround_bugs that I later use in the initially submitted block
condition code.
However it seems somewhere else in the code avctx->codec_id reverts back to
avctx->codec->id and as a result I end up with a "Bad picture start code"
due to ff_h263_decode_frame calling ff_h263_decode_picture_header instead
of ff_mpeg4_decode_picture_header.
I forgot to mention that my changes are against ffmpeg 2.6. (libavcodec
56.26.100)


> > Not sure if attachments are fine here... Attempting one.
>
> I can confirm that your suggested change (above "pred = FASTDIV")
> works as expected.
>
> Carl Eugen


Re: [FFmpeg-devel] [PATCH 2/2] configure: instruct MSVC 2015 to properly process UTF-8 string literals

2017-02-03 Thread Hendrik Leppkes
On Fri, Feb 3, 2017 at 3:05 PM, James Almer  wrote:
> On 2/3/2017 5:41 AM, Hendrik Leppkes wrote:
>> Without the /UTF-8 switch, the MSVC compiler treats all files as in the
>> system codepage, instead of in UTF-8, which causes UTF-8 string literals
>> to be interpreted wrong.
>>
>> This switch was only introduced in VS2015 Update 2, and any earlier
>> versions do not have an equivalent solution.
>>
>> Fixes fate-sub-scc on MSVC 2015+
>> ---
>>  configure | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/configure b/configure
>> index d3d652f0f4..231cc3eca7 100755
>> --- a/configure
>> +++ b/configure
>> @@ -6327,6 +6327,9 @@ EOF
>>  # Issue has been fixed in MSVC v19.00.24218.
>>  check_cpp_condition windows.h "_MSC_FULL_VER >= 190024218" ||
>>  check_cflags -d2SSAOptimizer-
>> +# enable utf-8 source processing on VS2015 U2 and newer
>> +check_cpp_condition windows.h "_MSC_FULL_VER >= 190023918" &&
>> +add_cflags -utf-8
>
> Probably better use check_cflags, just in case.
>

check_cflags doesn't work, since most wrong options just cause it to
emit a warning but not error out (although confusingly some do error
out, like the d2 option above, since the d2 prefix directly targets
the c2 compiler stage, and unknown options there error instead of
warn).
That's the whole reason I added a version check in the first place
instead of solely using check_cflags with it.

I mean, there is no real harm in using check_cflags together with the version
check, but since it wouldn't add anything, I figured I would save the
extra check and a few forks.

- Hendrik
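The gating logic boils down to something like this (a standalone sketch; configure's real check_cpp_condition does a test compile against windows.h rather than reading a variable, and the version number is taken from the patch above):

```shell
# Add -utf-8 only when the detected MSVC version is new enough.
msc_full_ver=190023918      # assume this value came from a test compile
cflags=""
if [ "$msc_full_ver" -ge 190023918 ]; then
    cflags="$cflags -utf-8" # VS2015 Update 2 (19.00.23918) or newer
fi
echo "cflags:$cflags"
```

Since the version check already guarantees the option exists, add_cflags suffices and the extra check_cflags probe (which would not error out anyway) can be skipped.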


Re: [FFmpeg-devel] [PATCH v2] avformat/hlsenc: add hls_flag option to write segments to temporary file until complete

2017-02-03 Thread Steven Liu
2017-02-02 8:28 GMT+08:00 Aman Gupta :

> From: Aman Gupta 
>
> Adds a `-hls_flags +temp_file` which will write segment data to
> filename.tmp, and then rename to filename when the segment is complete.
>
> This patch is similar in spirit to one used in Plex's ffmpeg fork, and
> allows a transcoding webserver to ensure incomplete segment files are
> never served up accidentally.
> ---
>  libavformat/hlsenc.c | 25 +
>  1 file changed, 25 insertions(+)
>
> diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
> index bd1e684..17d4fe4 100644
> --- a/libavformat/hlsenc.c
> +++ b/libavformat/hlsenc.c
> @@ -76,6 +76,7 @@ typedef enum HLSFlags {
>  HLS_SECOND_LEVEL_SEGMENT_INDEX = (1 << 8), // include segment index
> in segment filenames when use_localtime  e.g.: %%03d
>  HLS_SECOND_LEVEL_SEGMENT_DURATION = (1 << 9), // include segment
> duration (microsec) in segment filenames when use_localtime  e.g.: %%09t
>  HLS_SECOND_LEVEL_SEGMENT_SIZE = (1 << 10), // include segment size
> (bytes) in segment filenames when use_localtime  e.g.: %%014s
> +HLS_TEMP_FILE = (1 << 11),
>  } HLSFlags;
>
>  typedef enum {
> @@ -416,6 +417,7 @@ static int hls_mux_init(AVFormatContext *s)
>  return ret;
>  oc = hls->avf;
>
> +oc->filename[0]= '\0';
>  oc->oformat= hls->oformat;
>  oc->interrupt_callback = s->interrupt_callback;
>  oc->max_delay  = s->max_delay;
> @@ -815,6 +817,15 @@ static int hls_start(AVFormatContext *s)
>  char *filename, iv_string[KEYSIZE*2 + 1];
>  int err = 0;
>
> +if ((c->flags & HLS_TEMP_FILE) && oc->filename[0] != 0) {
> +size_t len = strlen(oc->filename);
> +char final_filename[sizeof(oc->filename)];
> +av_strlcpy(final_filename, oc->filename, len);
> +final_filename[len-4] = '\0';
> +ff_rename(oc->filename, final_filename, s);
> +oc->filename[len-4] = '\0';
> +}
> +
>  if (c->flags & HLS_SINGLE_FILE) {
>  av_strlcpy(oc->filename, c->basename,
> sizeof(oc->filename));
> @@ -915,6 +926,10 @@ static int hls_start(AVFormatContext *s)
>
>  set_http_options(, c);
>
> +if (c->flags & HLS_TEMP_FILE) {
> +av_strlcat(oc->filename, ".tmp", sizeof(oc->filename));
> +}
> +
>  if (c->key_info_file) {
>  if ((err = hls_encryption_start(s)) < 0)
>  goto fail;
> @@ -1364,6 +1379,15 @@ static int hls_write_trailer(struct AVFormatContext
> *s)
>   ff_rename(old_filename, hls->avf->filename, hls);
>  }
>
> +if ((hls->flags & HLS_TEMP_FILE) && oc->filename[0] != 0) {
> +size_t len = strlen(oc->filename);
> +char final_filename[sizeof(oc->filename)];
> +av_strlcpy(final_filename, oc->filename, len);
> +final_filename[len-4] = '\0';
> +ff_rename(oc->filename, final_filename, s);
> +oc->filename[len-4] = '\0';
> +}
> +
>  if (vtt_oc) {
>  if (vtt_oc->pb)
>  av_write_trailer(vtt_oc);
> @@ -1406,6 +1430,7 @@ static const AVOption options[] = {
>  {"hls_subtitle_path", "set path of hls subtitles",
> OFFSET(subtitle_filename), AV_OPT_TYPE_STRING, {.str = NULL},  0, 0,E},
>  {"hls_flags", "set flags affecting HLS playlist and media file
> generation", OFFSET(flags), AV_OPT_TYPE_FLAGS, {.i64 = 0 }, 0, UINT_MAX, E,
> "flags"},
>  {"single_file",   "generate a single media file indexed with byte
> ranges", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SINGLE_FILE }, 0, UINT_MAX,   E,
> "flags"},
> +{"temp_file", "write segment to temporary file and rename when
> complete", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_TEMP_FILE }, 0, UINT_MAX,   E,
> "flags"},
>  {"delete_segments", "delete segment files that are no longer part of
> the playlist", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_DELETE_SEGMENTS }, 0,
> UINT_MAX,   E, "flags"},
>  {"round_durations", "round durations in m3u8 to whole numbers", 0,
> AV_OPT_TYPE_CONST, {.i64 = HLS_ROUND_DURATIONS }, 0, UINT_MAX,   E,
> "flags"},
>  {"discont_start", "start the playlist with a discontinuity tag", 0,
> AV_OPT_TYPE_CONST, {.i64 = HLS_DISCONT_START }, 0, UINT_MAX,   E, "flags"},
> --
> 2.10.1 (Apple Git-78)
>


LGTM!
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH v2] avformat/hlsenc: add hls_flag option to write segments to temporary file until complete

2017-02-03 Thread Steven Liu
2017-02-03 23:15 GMT+08:00 Steven Liu :

>
>
> 2017-02-02 8:28 GMT+08:00 Aman Gupta :
>
>> From: Aman Gupta 
>>
>> Adds a `-hls_flags +temp_file` which will write segment data to
>> filename.tmp, and then rename to filename when the segment is complete.
>>
>> This patch is similar in spirit to one used in Plex's ffmpeg fork, and
>> allows a transcoding webserver to ensure incomplete segment files are
>> never served up accidentally.
>> ---
>>  libavformat/hlsenc.c | 25 +
>>  1 file changed, 25 insertions(+)
>>
>> diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
>> index bd1e684..17d4fe4 100644
>> --- a/libavformat/hlsenc.c
>> +++ b/libavformat/hlsenc.c
>> @@ -76,6 +76,7 @@ typedef enum HLSFlags {
>>  HLS_SECOND_LEVEL_SEGMENT_INDEX = (1 << 8), // include segment index
>> in segment filenames when use_localtime  e.g.: %%03d
>>  HLS_SECOND_LEVEL_SEGMENT_DURATION = (1 << 9), // include segment
>> duration (microsec) in segment filenames when use_localtime  e.g.: %%09t
>>  HLS_SECOND_LEVEL_SEGMENT_SIZE = (1 << 10), // include segment size
>> (bytes) in segment filenames when use_localtime  e.g.: %%014s
>> +HLS_TEMP_FILE = (1 << 11),
>>  } HLSFlags;
>>
>>  typedef enum {
>> @@ -416,6 +417,7 @@ static int hls_mux_init(AVFormatContext *s)
>>  return ret;
>>  oc = hls->avf;
>>
>> +oc->filename[0]= '\0';
>>  oc->oformat= hls->oformat;
>>  oc->interrupt_callback = s->interrupt_callback;
>>  oc->max_delay  = s->max_delay;
>> @@ -815,6 +817,15 @@ static int hls_start(AVFormatContext *s)
>>  char *filename, iv_string[KEYSIZE*2 + 1];
>>  int err = 0;
>>
>> +if ((c->flags & HLS_TEMP_FILE) && oc->filename[0] != 0) {
>> +size_t len = strlen(oc->filename);
>> +char final_filename[sizeof(oc->filename)];
>> +av_strlcpy(final_filename, oc->filename, len);
>> +final_filename[len-4] = '\0';
>> +ff_rename(oc->filename, final_filename, s);
>> +oc->filename[len-4] = '\0';
>> +}
>> +
>>  if (c->flags & HLS_SINGLE_FILE) {
>>  av_strlcpy(oc->filename, c->basename,
>> sizeof(oc->filename));
>> @@ -915,6 +926,10 @@ static int hls_start(AVFormatContext *s)
>>
>>  set_http_options(, c);
>>
>> +if (c->flags & HLS_TEMP_FILE) {
>> +av_strlcat(oc->filename, ".tmp", sizeof(oc->filename));
>> +}
>> +
>>  if (c->key_info_file) {
>>  if ((err = hls_encryption_start(s)) < 0)
>>  goto fail;
>> @@ -1364,6 +1379,15 @@ static int hls_write_trailer(struct
>> AVFormatContext *s)
>>   ff_rename(old_filename, hls->avf->filename, hls);
>>  }
>>
>> +if ((hls->flags & HLS_TEMP_FILE) && oc->filename[0] != 0) {
>> +size_t len = strlen(oc->filename);
>> +char final_filename[sizeof(oc->filename)];
>> +av_strlcpy(final_filename, oc->filename, len);
>> +final_filename[len-4] = '\0';
>> +ff_rename(oc->filename, final_filename, s);
>> +oc->filename[len-4] = '\0';
>> +}
>> +
>>  if (vtt_oc) {
>>  if (vtt_oc->pb)
>>  av_write_trailer(vtt_oc);
>> @@ -1406,6 +1430,7 @@ static const AVOption options[] = {
>>  {"hls_subtitle_path", "set path of hls subtitles",
>> OFFSET(subtitle_filename), AV_OPT_TYPE_STRING, {.str = NULL},  0, 0,E},
>>  {"hls_flags", "set flags affecting HLS playlist and media file
>> generation", OFFSET(flags), AV_OPT_TYPE_FLAGS, {.i64 = 0 }, 0, UINT_MAX, E,
>> "flags"},
>>  {"single_file",   "generate a single media file indexed with byte
>> ranges", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SINGLE_FILE }, 0, UINT_MAX,   E,
>> "flags"},
>> +{"temp_file", "write segment to temporary file and rename when
>> complete", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_TEMP_FILE }, 0, UINT_MAX,   E,
>> "flags"},
>>  {"delete_segments", "delete segment files that are no longer part of
>> the playlist", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_DELETE_SEGMENTS }, 0,
>> UINT_MAX,   E, "flags"},
>>  {"round_durations", "round durations in m3u8 to whole numbers", 0,
>> AV_OPT_TYPE_CONST, {.i64 = HLS_ROUND_DURATIONS }, 0, UINT_MAX,   E,
>> "flags"},
>>  {"discont_start", "start the playlist with a discontinuity tag", 0,
>> AV_OPT_TYPE_CONST, {.i64 = HLS_DISCONT_START }, 0, UINT_MAX,   E, "flags"},
>> --
>> 2.10.1 (Apple Git-78)
>>
>
>
> LGTM!
>

Documentation please!


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Carl Eugen Hoyos
2017-02-03 15:14 GMT+01:00 Ivo Andonov :
> 2017-02-03 15:58 GMT+02:00 Carl Eugen Hoyos :

>> > I've got some old Pinetron DVRs that are supposed to produce a MPEG4
>> > bitstream. Indeed they are but no non-Pinetron software can decode it.
>>
>> Please provide a sample!
>>
>> > I do not know what is the best approach for registering this as a new
>> > decoder
>>
>> It should be possible to identify the codec while reading the bitstream.

The encoder sets no user data, so identification is not as simple as
hoped: How do subsequent frames start? Do you know what the
first bytes mean?

> Not sure if attachments are fine here... Attempting one.

I can confirm that your suggested change (above "pred = FASTDIV")
works as expected.

Carl Eugen


Re: [FFmpeg-devel] [PATCH] avcodec/dpxenc: support colour metadata in DPX encoder, fixes ticket #6023

2017-02-03 Thread Carl Eugen Hoyos
2017-02-03 15:46 GMT+01:00 Vittorio Giovara :

> if (priv_trc_opt = "") {

Why is this useful?

>if (avctx->color_trc == AVCOL_BT709)
>   buf[801] = DPX_BT709
>else if (avctx->color_trc == AVCOL_BT601)
>   buf[801] = DPX_BT601

I would suggest a switch()

Carl Eugen


Re: [FFmpeg-devel] [PATCH] avcodec/dpxenc: support colour metadata in DPX encoder, fixes ticket #6023

2017-02-03 Thread Vittorio Giovara
On Fri, Feb 3, 2017 at 2:10 PM, Kieran O Leary  wrote:
> Hi Vittorio!
>
> thanks for getting back to me.
>
> On Fri, Feb 3, 2017 at 12:57 PM, Vittorio Giovara
>  wrote:
>>
>>
>>
>> Hey Kieran,
>> I think the code looks fine. I am just wondering if we should also
>> offer the possibility to set these flags from the standard context
>> options (-color_trc and others). I'm aware that not all values match
>> or are valid but maybe a small conversion table or extending the main
>> table could be a viable approach. Similarly this could be done for the
>> decoder so that color properties are not lost during a dpx->dpx
>> conversion maybe.
>
>
> That seems to be the general consensus from the replies from James Almer and
> Carl Eugen and it's what I should push towards.
> I added the new values locally to pixfmt.h. I'm thinking that these could be
> called in a similar way to the EXR decoder?
> https://github.com/FFmpeg/FFmpeg/blob/8a1759ad46f05375c957f33049b4592befbcb224/libavcodec/exr.c#L1840

Not sure which changes you mean; all the values listed by that
commit are already supported by the current
AVColorTransferCharacteristic implementation. Incidentally, I believe
that the codec you point to is a perfect example of something
libavcodec should not do, color conversion in a decoder: in my opinion
this is a task that should be reserved for something external such as
lavfi or ideally lsws.

> In terms of translation tables, could you point me to some similar code that
> could serve as a starting point for me? The nearest that made sense to me
> seems to be these values in vf_colorpsace.c
> https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_colorspace.c#L97
> ?

I didn't explain myself well enough: I was mainly suggesting that
rather than having a private option used to tag the final output, you
could *also* read the standard command line option in avctx->color_trc
and use it to tag the output. Since you can't reuse the value because
dpx seems to use a different table, you should translate the value from
AVColorTransferCharacteristic to whatever dpx accepts. In pseudocode

if (priv_trc_opt = "") {
   if (avctx->color_trc == AVCOL_BT709)
  buf[801] = DPX_BT709
   else if (avctx->color_trc == AVCOL_BT601)
  buf[801] = DPX_BT601
   ...
} else
   buf[801] = priv_trc_opt

-- 
Vittorio


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
2017-02-03 15:58 GMT+02:00 Carl Eugen Hoyos :

> 2017-02-03 14:46 GMT+01:00 Ivo Andonov :
> > This is my first post here. Actually my first post to a mailing list.
> > Excuse me if this is the wrong place to write or if my mailing-list
> culture
> > is not complete!
>
> Remember not to top-post here.
>
> > I've got some old Pinetron DVRs that are supposed to produce a MPEG4
> > bitstream. Indeed they are but no non-Pinetron software can decode it.
>
> Please provide a sample!
>
> > I do not know what is the best approach for registering this as a new
> > decoder
>
> It should be possible to identify the codec while reading the bitstream.
>
> Carl Eugen

Not sure if attachments are fine here... Attempting one.

Ivo


t.frm
Description: Binary data


Re: [FFmpeg-devel] [PATCH 2/2] configure: instruct MSVC 2015 to properly process UTF-8 string literals

2017-02-03 Thread James Almer
On 2/3/2017 5:41 AM, Hendrik Leppkes wrote:
> Without the /UTF-8 switch, the MSVC compiler treats all files as in the
> system codepage, instead of in UTF-8, which causes UTF-8 string literals
> to be interpreted wrong.
> 
> This switch was only introduced in VS2015 Update 2, and any earlier
> versions do not have an equivalent solution.
> 
> Fixes fate-sub-scc on MSVC 2015+
> ---
>  configure | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/configure b/configure
> index d3d652f0f4..231cc3eca7 100755
> --- a/configure
> +++ b/configure
> @@ -6327,6 +6327,9 @@ EOF
>  # Issue has been fixed in MSVC v19.00.24218.
>  check_cpp_condition windows.h "_MSC_FULL_VER >= 190024218" ||
>  check_cflags -d2SSAOptimizer-
> +# enable utf-8 source processing on VS2015 U2 and newer
> +check_cpp_condition windows.h "_MSC_FULL_VER >= 190023918" &&
> +add_cflags -utf-8

Probably better use check_cflags, just in case.

>  fi
>  
>  for pfx in "" host_; do
> 



Re: [FFmpeg-devel] [PATCH] doc: add a lexicon

2017-02-03 Thread Clément Bœsch
On Tue, Jan 31, 2017 at 09:47:12AM -0900, Lou Logan wrote:
> On Mon, 30 Jan 2017 15:58:12 +0100, Clément Bœsch wrote:
> 
> > ---
> >  doc/lexicon | 23 +++
> >  1 file changed, 23 insertions(+)
> >  create mode 100644 doc/lexicon
> > 
> > diff --git a/doc/lexicon b/doc/lexicon
> > new file mode 100644
> > index 00..36ff803fb5
> > --- /dev/null
> > +++ b/doc/lexicon
> > @@ -0,0 +1,23 @@
> > +Common abbreviations/shorthands we use that don't need a comment
> > +
> > +
> > +dct/idct: (inverse) discrete cosine tranform
> 
> transform
> 

fixed

> Otherwise, LGTM.
> 

applied, thanks

> A few others if you want them:
> 

> nal: network abstraction layer
> rc: rate control
> sei: supplemental enhancement information

added.

Also added fdct as suggested by Chloe

-- 
Clément B.




Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Carl Eugen Hoyos
2017-02-03 14:46 GMT+01:00 Ivo Andonov :
> This is my first post here. Actually my first post to a mailing list.
> Excuse me if this is the wrong place to write or if my mailing-list culture
> is not complete!

Remember not to top-post here.

> I've got some old Pinetron DVRs that are supposed to produce a MPEG4
> bitstream. Indeed they are but no non-Pinetron software can decode it.

Please provide a sample!

> I do not know what is the best approach for registering this as a new
> decoder

It should be possible to identify the codec while reading the bitstream.

Carl Eugen


Re: [FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Paul B Mahol
On 2/3/17, Ivo Andonov  wrote:
> Hi Everyone,
>
> This is my first post here. Actually my first post to a mailing list.
> Excuse me if this is the wrong place to write or if my mailing-list culture
> is not complete!
>
> I've got some old Pinetron DVRs that are supposed to produce a MPEG4
> bitstream. Indeed they are but no non-Pinetron software can decode it.
> While changing the realtime clock battery on one of these I saw they are
> using the AS-3024 SoC. Any information about it is quite rare and hard to
> find. What I found in a kind of a brochure is the "Modified ISO/IEC
> 14496/2" statement.
>
> I successfully used a modified Pinetron library on Windows to use my own
> software for decoding the stream. While fiddling with the modification I
> saw they are using the statically linked FFmpeg API.
>
> Out of curiosity and wanting to have this codec implementation on the Linux
> platform to automate some tasks, I dug deeper in order to understand the
> decoder differences with respect to the standards. The codec name they use is
> xvid_alogics.
>
> Finally there's the result, and I wanted to share it with the community in
> case someone is interested or wants to use it for their own project.
>
> In short, what I did is modify the ff_mpeg4_pred_dc function in
> libavcodec/mpeg4video.h, adding the lines of code below right after the
> a, b, c vars are initialized, in place of the block of code that sets the
> pred var.
>
>   if (n == 0 || n == 4 || n == 5) pred = 1024; else
>   if (n == 1) pred = a; else
>   if (n == 2) pred = c; else
>   if (n == 3) {
> if (abs(a - b) < abs(b - c)) {
>   pred = c;
> } else {
>   pred = a;
> }
>   }
>
> I do not know what is the best approach for registering this as a new
> decoder and then having the above block within a condition. What I came up
> with is adding a new AVCodec to the mpeg4videodec.c file and modifying the
> decode_init to set a flag in AVCodecContext.workaround_bugs field that can
> later be used to condition the above block.

Why not share your code modifications and samples?


[FFmpeg-devel] DVR MPEG4 variant (AS-3024)

2017-02-03 Thread Ivo Andonov
Hi Everyone,

This is my first post here. Actually my first post to a mailing list.
Excuse me if this is the wrong place to write or if my mailing-list culture
is not complete!

I've got some old Pinetron DVRs that are supposed to produce a MPEG4
bitstream. Indeed they are but no non-Pinetron software can decode it.
While changing the realtime clock battery on one of these I saw they are
using the AS-3024 SoC. Any information about it is quite rare and hard to
find. What I found in a kind of a brochure is the "Modified ISO/IEC
14496/2" statement.

I successfully used a modified Pinetron library on Windows to use my own
software for decoding the stream. While fiddling with the modification I
saw they are using the statically linked FFmpeg API.

Out of curiosity and wanting to have this codec implementation on the Linux
platform to automate some tasks, I dug deeper in order to understand the
decoder differences with respect to the standards. The codec name they use is
xvid_alogics.

Finally there's the result, and I wanted to share it with the community in
case someone is interested or wants to use it for their own project.

In short, what I did is modify the ff_mpeg4_pred_dc function in
libavcodec/mpeg4video.h, adding the lines of code below right after the
a, b, c vars are initialized, in place of the block of code that sets the
pred var.

  if (n == 0 || n == 4 || n == 5) pred = 1024; else
  if (n == 1) pred = a; else
  if (n == 2) pred = c; else
  if (n == 3) {
if (abs(a - b) < abs(b - c)) {
  pred = c;
} else {
  pred = a;
}
  }

I do not know what is the best approach for registering this as a new
decoder and then having the above block within a condition. What I came up
with is adding a new AVCodec to the mpeg4videodec.c file and modifying the
decode_init to set a flag in AVCodecContext.workaround_bugs field that can
later be used to condition the above block.

Ivo


Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread Ronald S. Bultje
Hi Rune,

On Fri, Feb 3, 2017 at 4:08 AM,  wrote:

> On Thu, Feb 02, 2017 at 11:16:35AM -0500, Ronald S. Bultje wrote:
> > On Thu, Feb 2, 2017 at 10:59 AM,  wrote:
> > > It is the irregular differences between them which are the reason
> > > for splitting. I would not call this "duplication". If you feel
> > > it is straightforward and important to make this more compact,
> > > with the same performance, just go ahead.
>
> > So, typically, we wouldn't duplicate the code, we'd template it. There's
> > some examples in h264 how to do it. You'd have a single
> > (av_always_inline)
> > decode_codebook function, which takes "format" as an argument, and then
> > have three av_noinline callers to it (using fmt=rgb565, fmt=rgb24 or
> > fmt=rgb32).
> >
> > That way performance works as you want it, without the source code
> > duplication.
>
> (Thanks for the pointer. I'll look at how it is done in h264, but: )
>
> I thought about generating the bodies of the functions from something
> like a template but it did not feel like this would make the code more
> understandable aka maintainable. So I wonder if there is any point in
> doing this, given the same binary result.


wm4 has tried to make this point several times: it's about maintainability.
Let me explain, otherwise we'll keep going back and forth.

Let's assume you have a function that does a, b and c, where b depends on
the pix fmt. You're currently writing it as such:

function_565() {
a
b_565
c
}

function_24() {
a
b_24
c
}

function_32() {
a
b_32
c
}

It should be pretty obvious that a and c are triplicated in this example.
Now compare it with this:

av_always_inline function_generic(fmt) {
a
if (fmt == 565) {
b_565
} else if (fmt == 24) {
b_24
} else {
assert(fmt == 32);
b_32
}
c
}

function_565() {
funtion_generic(565);
}

function_24() {
function_generic(24);
}

function_32() {
function_generic(32);
}

It might look larger, but that's merely because we're not writing out a and
c. The key thing here is that we're no longer triplicating a and c. This is
a significant maintenance improvement. Also, because of the inline keywords
used, the performance will be identical.

Conclusion: better to maintain, identical performance. Only advantages, no
disadvantages. Should be easy to accept, right?

> > > > What we _especially_ don't do in FFmpeg is hardcoding colorspace
> > > > conversion coefficients in decoders, and doing colorspace conversion in
> > > > decoders.
> > >
> > > Have you got a suggestion how to avoid this in this case,
> > > without sacrificing the speed?
>
> > Ah, yes, the question. So, the code change is quite big and it does various
> > things, and each of these might have a better alternative or be good as-is.
> > Fundamentally, I don't really understand how _adding_ a colorspace
> > conversion does any good to speed. It fundamentally always makes things
> > slower. So can you explain why you need to _add_ a colorspace conversion?
>
> It moves the conversion from after the decoder, where the data to convert
> is whole frames, to inside the decoder, where the conversion applies only
> to the codebooks, which by the codec design are much smaller than the output.
>
> > Why not just always output the native format? (And then do conversion in
>
> The "native" Cinepak format is actually unknown to swscaler, and I
> seriously doubt it would make sense to add it there, which would
> just largely cut down the decoding efficiency.


I see, so the codebook contains indexed entities that are re-used to
reconstruct the actual frames. That makes some sense. So what I don't
understand, then, is why below you're claiming that get_format() doesn't do
this. This is exactly what get_format() does. Why do you believe
get_format() isn't capable of helping you accomplish this?

> > > +char *out_fmt_override = getenv("CINEPAK_OUTPUT_FORMAT_OVERRIDE");
> > >
> > > > Absolutely not acceptable.
> > >
> > > 1. Why?
> > >
> >
> > Libavcodec is a library. Being sensitive to environment in a library, or
> > worse yet, affecting the environment, is typically not what is expected.
> > There are almost always better ways to do the same thing.
>
> I did my best to look for a better way but it does not seem to exist.


Look into private options, for one. But really, get_format() solves this. I
can elaborate more if you really want me to, as above, but I feel you
haven't really looked into it. I know this feeling: sometimes you've got
something that works for you and you just want to commit it and be done
with it.

But that's not going to happen. The env variable will never be committed.
Never. I guarantee it with 100% certainty. If you'd like this general
feature to be picked up in the main codebase, you'll need to change at the
very, very least how it is exposed as a selectable option. Env opts are not
acceptable; they have never been and never will be. A private codec option
(av_opt_*()) might be acceptable depending on how special/non-generic it

Re: [FFmpeg-devel] [PATCH 1/2] configure: add nologo switch to invocation of lib.exe

2017-02-03 Thread Carl Eugen Hoyos
2017-02-03 9:41 GMT+01:00 Hendrik Leppkes :
> This suppresses the startup banner, which is consistent with all other calls
> to the Windows SDK binaries.

>  if check_cmd lib.exe -list; then
> -SLIB_EXTRA_CMD=-'sed -e "s/ @[^ ]*//" $$(@:$(SLIBSUF)=.orig.def) > $$(@:$(SLIBSUF)=.def); lib.exe /machine:$(LIBTARGET) /def:$$(@:$(SLIBSUF)=.def) /out:$(SUBDIR)$(SLIBNAME:$(SLIBSUF)=.lib)'
> +SLIB_EXTRA_CMD=-'sed -e "s/ @[^ ]*//" $$(@:$(SLIBSUF)=.orig.def) > $$(@:$(SLIBSUF)=.def); lib.exe /nologo /machine:$(LIBTARGET) /def:$$(@:$(SLIBSUF)=.def) /out:$(SUBDIR)$(SLIBNAME:$(SLIBSUF)=.lib)'

Please commit such patches directly.

Carl Eugen


Re: [FFmpeg-devel] [PATCH] avcodec/dpxenc: support colour metadata in DPX encoder, fixes ticket #6023

2017-02-03 Thread Reto Kromer
Vittorio Giovara wrote:

>I think the code looks fine. I am just wondering if we
>should also offer the possibility to set these flags from 
>the standard context options (-color_trc and others). I'm
>aware that not all values match or are valid but maybe a
>small conversion table or extending the main table could be
>a viable approach. Similarly this could be done for the
>decoder so that color properties are not lost during a
>dpx->dpx conversion maybe.

+1

In my opinion, this is important. I guess implementing an
additional conversion table would be the best solution.

Best regards, Reto



Re: [FFmpeg-devel] [PATCH] avcodec/dpxenc: support colour metadata in DPX encoder, fixes ticket #6023

2017-02-03 Thread Kieran O Leary
Hi Vittorio!

thanks for getting back to me.

On Fri, Feb 3, 2017 at 12:57 PM, Vittorio Giovara <
vittorio.giov...@gmail.com> wrote:

>
>
> Hey Kieran,
> I think the code looks fine. I am just wondering if we should also
> offer the possibility to set these flags from the standard context
> options (-color_trc and others). I'm aware that not all values match
> or are valid but maybe a small conversion table or extending the main
> table could be a viable approach. Similarly this could be done for the
> decoder so that color properties are not lost during a dpx->dpx
> conversion maybe.
>

That seems to be the general consensus from the replies from James Almer
and Carl Eugen and it's what I should push towards.
I added the new values locally to pixfmt.h. I'm thinking that these could
be called in a similar way to the EXR decoder?
https://github.com/FFmpeg/FFmpeg/blob/8a1759ad46f05375c957f33049b4592befbcb224/libavcodec/exr.c#L1840

In terms of translation tables, could you point me to some similar code that
could serve as a starting point for me? The nearest that made sense to me
seems to be these values in vf_colorpsace.c
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_colorspace.c#L97
?

-kieran


Re: [FFmpeg-devel] [PATCH] avcodec/dpxenc: support colour metadata in DPX encoder, fixes ticket #6023

2017-02-03 Thread Vittorio Giovara
On Wed, Feb 1, 2017 at 1:42 PM, Kieran O Leary  wrote:
> Hello,
>
> I'm cc'ing Vittorio as I don't think that he's subscribed to the list but
> he's contributed to dpxenc.c and recent colorspace filters. The same with
> Kate Murray from the Library of Congress who knows a lot more about DPX than
> me. Apologies if this is inappropriate.
>
> I mostly based this patch on other ffmpeg encoders, such as pncenc.c. I'm
> not really a C coder, I'm a moving image archivist who needs to be able to
> specify colour metadata in DPX for various workflows. Please excuse my
> ignorance/mistakes.
>
> This patch adds documentation and two command line options for the DPX
> encoder:
> -trc (Transfer Characteristics) and -clr (Colorimetric Specification), which
> set colour metadata values in a DPX file. Currently these are hardcoded to
> always be 2, aka Linear. Ticket #6023 is related to this, but there have
> also been many mailing list posts about this issue:
> https://ffmpeg.org/pipermail/ffmpeg-user/2015-March/025630.html
> https://ffmpeg.org/pipermail/ffmpeg-user/2015-December/029456.html
>
> I've kept the default values as 2 (Linear) as this is what was originally in
> dpxenc, but I'm not sure of the value of this really. I think that there's
> more value in a default of 0 (User-defined) which would just leave the
> values unspecified. Or perhaps no value at all! The initial default of 2 for
> colorimetric was potentially useless as 2 is listed as 'Not applicable' for
> colorimetric specification in SMPTE 268M-2003.
>
> The values for each of these options are the integers listed in the SMPTE
> standards doc:
> https://web.archive.org/web/20050706060025/http://www.smpte.org/smpte_store/standards/pdf/s268m.pdf
>
> Initially I just had one argument that set the Transfer Characteristic and
> Colorimetric Specification to the same value, but perhaps some use cases
> could require that these values  be different? I'm not sure if they ever
> would. I have never seen real world files that suggest this but I can edit
> this if it seems weird.
>
> Some of the values from 0-12 are listed as 'Not applicable' for the
> colorimetric specification, but I didn't know how to specify just those
> numbers (0-1, 4-10) in the patch. Perhaps it's OK to leave it as is,
> otherwise hopefully someone can point me to similar code that I can learn
> from. Again, apologies for my ignorance.
>

Hey Kieran,
I think the code looks fine. I am just wondering if we should also
offer the possibility to set these flags from the standard context
options (-color_trc and others). I'm aware that not all values match
or are valid but maybe a small conversion table or extending the main
table could be a viable approach. Similarly this could be done for the
decoder so that color properties are not lost during a dpx->dpx
conversion maybe.
-- 
Vittorio


Re: [FFmpeg-devel] [PATCH] matroskaenc: Add support for writing video projection.

2017-02-03 Thread Vittorio Giovara
On Thu, Feb 2, 2017 at 9:34 PM, James Almer  wrote:
> On 1/31/2017 12:40 PM, Aaron Colwell wrote:
>>
>>
>> On Tue, Jan 31, 2017 at 2:12 AM Vittorio Giovara wrote:
>>
>> On Sat, Jan 28, 2017 at 4:13 AM, James Almer wrote:
>>> On 1/27/2017 11:21 PM, Aaron Colwell wrote:
On Fri, Jan 27, 2017 at 5:46 PM James Almer wrote:

 yeah. I'm a little confused why the demuxing code didn't implement this to
 begin with.
>>>
>>> Vittorio (The one that implemented AVSphericalMapping) tried to add this at
>>> first, but then removed it because he wasn't sure if he was doing it right.
>>
>> Hi,
>> yes this was included initially but then we found out what those
>> fields were really for, and I didn't want to make the users get as
>> confused as we were. As a matter of fact, Aaron, I mentioned this to you
>> when I proposed that we probably should have separated the classic
>> equi projection from the tiled one in the specification, in order to
>> simplify the implementation.
>>
>>
>> Like I said before, it is not a different projection. It is still 
>> equirectangular and those parameters just crop the projection. It is very 
>> simple to just verify that the fields are zero if you don't want to support 
>> the cropped version.

Hello,
I'm sorry but I heavily disagree. The tiled equirectangular projection
is something that cannot be used standalone; you have to do additional
mathematics and take into account different files or streams to
generate a "normal" or full-frame equirectangular projection. Having a
separate type allows including extensions such as the bounds fields,
which can be safely ignored by every user that does not need a tiled
projection.

It is too late to change the spec, but I do believe that the use case
is different enough to add a new type, in order to not overcomplicate
the implementation.

> I know you're the one behind the spec in question, but wouldn't it be a
> better idea to wait until AVSphericalMapping gets a way to propagate this
> kind of information before adding support for muxing video projection
> elements? Or maybe try to implement it right now...
>

 I'm happy to implement support for the projection specific info. What would
 be the best way to proceed. It seems like I could just place a union with
 projection specific structs in AVSphericalMapping. I'd also like some
>>>
>>> I'm CCing Vittorio so he can chim in. I personally have no real preference.
>>
>> The best way in my opinion is to add a third type, such as
>> AV_SPHERICAL_TILED_EQUI, and add the relevant fields to
>> AVSphericalMapping, with a clear description of the actual use case
>> for them, mentioning that they are used only by this format. By the
>> way, why do you mention adding a union? I think four plain fields
>> should do.
>>
>>
>> I don't think it is worth having the extra enum value for this. All the 
>> cropped fields do is control how you generate the spherical mesh or control 
>> the shader used to render the projection. If players don't want to support 
>> it they can just check to see that all the fields are zero and error out if 
>> not.

Why would you have them check these fields every time, when this can be
implicitly determined from the type semantics? I'm pretty sure API
users prefer this scenario:

* check projection type
-> if normal_equi -> project it
-> if tiled_equi -> read additional data -> project it

rather than

* check projection type
-> if equi -> read additional data -> check if data needs additional
processing -> project it, or perform more operations before projecting

>> I was suggesting using a union because the projection bounds fields are for 
>> equirect, and there are different fields for the cubemap & mesh projections. 
>> I figured that the enum + union of structs might be a reasonable way to 
>> organize the projection specific fields.

This is a structure whose size does not depend on the ABI and which can
be extended as we like; in my opinion there is no need to separate new
fields in such a way, as long as they are properly documented.

Please keep me in CC.
-- 
Vittorio
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/internal: Document ugly looking windows specific file related function renaming

2017-02-03 Thread Michael Niedermayer
On Thu, Feb 02, 2017 at 09:07:25AM +0100, wm4 wrote:
> On Wed, 1 Feb 2017 22:24:14 +0100
> Michael Niedermayer  wrote:
> 
> > On Wed, Feb 01, 2017 at 03:26:45PM +0100, wm4 wrote:
> > > On Wed,  1 Feb 2017 14:35:50 +0100
> > > Michael Niedermayer  wrote:
> > >   
> > > > Found-by: ubitux
> > > > Signed-off-by: Michael Niedermayer 
> > > > ---
> > > >  libavutil/internal.h | 4 
> > > >  1 file changed, 4 insertions(+)
> > > > 
> > > > diff --git a/libavutil/internal.h b/libavutil/internal.h
> > > > index a19975d474..e97034887b 100644
> > > > --- a/libavutil/internal.h
> > > > +++ b/libavutil/internal.h
> > > > @@ -243,8 +243,12 @@ void avpriv_request_sample(void *avc,
> > > >  #pragma comment(linker, "/include:" EXTERN_PREFIX "avpriv_snprintf")
> > > >  #endif
> > > >  
> > > > +// Rename shared function (both use and implementation of them) so 
> > > > they are not shared
> > > > +// and each library carries its own copy of a implementation, this is 
> > > > needed as  
> > > 
> > > So why are they not appropriately named in the first place?
> > >   
> > > > +// the fd numbers are not transportable between libs on windows  
> > > 
> > > AFAIK they are, as long as they're linked to the same stdlib (which
> > > probably is the case for libav*).  
> > 
> > Here is the commit introducing the first function using this system.
> > I believe its commit message is quite good and explains the reasoning.
> > Beyond that, I am the wrong one to discuss this with; I am not a
> > Windows developer and I didn't design this, which is not to say I
> > like or dislike it. I just added one function IIRC, and I wanted to
> > document it as ubitux seemed to be confused for a moment when seeing
> > it for the first time.
> > 
> > commit e743e7ae6ee7e535c4394bec6fe6650d2b0dbf65
> > Author: Martin Storsjö 
> > Date:   Fri Aug 9 11:06:46 2013 +0300
> > 
> > libavutil: Make avpriv_open a library-internal function on msvcrt
> > 
> > Add one copy of the function into each of the libraries, similarly
> > to what we do for log2_tab. When using static libs, only one
> > copy of the file_open.o object file gets included, while when
> > using shared libraries, each of them get a copy of its own.
> > 
> > This fixes DLL builds with a statically linked C runtime, where
> > each DLL effectively has got its own instance of the C runtime,
> > where file descriptors can't be shared across runtimes.
> > 
> > On systems not using msvcrt, the function is not duplicated.
> > 
> > [...]
> 

> Maybe this would be better:
> 
> // If each libav* DLL is statically linked to the C runtime, FDs can
> // not be used across library boundaries. Duplicate these functions in
> // each DLL to avoid this problem.

LGTM

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The bravest are surely those who have the clearest vision
of what is before them, glory and danger alike, and yet
notwithstanding go out to meet it. -- Thucydides




Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread u-9iep
On Fri, Feb 03, 2017 at 11:14:28AM +0100, wm4 wrote:
> > On a 16-bit-per-pixel output with a CPU-based decoder you will not
> > find _any_ codec reaching over 25% of Cinepak's speed. Raw video
> > cannot compete either when input data delivery bandwidth is limited.
> > 
> > It also has an unused improvement margin in the encoder, while still
> > keeping compatibility with legacy decoders. The current encoder
> > already performs _way_ better than the proprietary one, so why leave
> > this nice tool unused where it can help?
> 
> I can't take this seriously, but I won't go and try finding a better
> codec just to make a point.

I take this as a statement that you believe something without having
the actual information.

[concerning get_format()]

> I don't know what you're thinking, but that's just wrong.

Thanks for making your point quite clear.

Have a nice day,
Rune



Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread wm4
On Fri, 3 Feb 2017 10:46:12 +0100
u-9...@aetey.se wrote:

> On Thu, Feb 02, 2017 at 05:52:29PM +0100, wm4 wrote:
> > On Thu, 2 Feb 2017 16:59:51 +0100
> > u-9...@aetey.se wrote:  

> > The heavy code duplication has other downsides. What if someone fixes
> > a bug, but only in the rgb32 version and ignores the rgb565 version?  
> 
> There is no guarantee that this is the same bug or that the same fix
> would apply, because the functions are subtly different.

Yes, duplicated code with subtle changes is harder to maintain.

> > > Have you got a suggestion how to avoid this in this case,
> > > without sacrificing the speed?  
> > 
> > Is there any value in this at all? It's a very old codec, that was
> > apparently used by some video games. What modern application of it
> > would there possibly be? And that in addition would require special
> > optimizations done by no other codec?  
> 
> Cinepak is not a "game codec" but a solid video codec that has been
> used very widely, for a long time, across a large range of applications.
> 
> For some niche applications it still provides the best result, being
> the clear, far-ahead leader in decoding speed.
> 
> On a 16-bit-per-pixel output with a CPU-based decoder you will not
> find _any_ codec reaching over 25% of Cinepak's speed. Raw video
> cannot compete either when input data delivery bandwidth is limited.
> 
> It also has an unused improvement margin in the encoder, while still
> keeping compatibility with legacy decoders. The current encoder
> already performs _way_ better than the proprietary one, so why leave
> this nice tool unused where it can help?

I can't take this seriously, but I won't go and try finding a better
codec just to make a point.

> > > Any suggestion which can replace this approach?  
> > 
> > get_format would be more appropriate.  
> 
> get_format() does not belong to the decoder, nor is it up to the task,
> which I explained in a recent message.

I don't know what you're thinking, but that's just wrong.


Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread u-9iep
On Thu, Feb 02, 2017 at 05:52:29PM +0100, wm4 wrote:
> On Thu, 2 Feb 2017 16:59:51 +0100
> u-9...@aetey.se wrote:
> > On Thu, Feb 02, 2017 at 04:25:05PM +0100, wm4 wrote:
> > > I can see how conversion in the decoder could speed it up somewhat

> > I would not call "twice" for "somewhat" :)
> 
> Well, your original mail mentioned only speedups up to 20%.

I have to recite from the original mail:
"
Avoiding frame pixel format conversion by generating rgb565 in the decoder
for a corresponding video buffer yields in our tests (on MMX-capable
i*86) more than twice [sic] the playback speed compared to decoding to rgb24.
"

> The heavy code duplication has other downsides. What if someone fixes
> a bug, but only in the rgb32 version and ignores the rgb565 version?

There is no guarantee that this is the same bug or that the same fix
would apply, because the functions are subtly different.

> > Have you got a suggestion how to avoid this in this case,
> > without sacrificing the speed?
> 
> Is there any value in this at all? It's a very old codec, that was
> apparently used by some video games. What modern application of it
> would there possibly be? And that in addition would require special
> optimizations done by no other codec?

Cinepak is not a "game codec" but a solid video codec that has been
used very widely, for a long time, across a large range of applications.

For some niche applications it still provides the best result, being
the clear, far-ahead leader in decoding speed.

On a 16-bit-per-pixel output with a CPU-based decoder you will not
find _any_ codec reaching over 25% of Cinepak's speed. Raw video
cannot compete either when input data delivery bandwidth is limited.

It also has an unused improvement margin in the encoder, while still
keeping compatibility with legacy decoders. The current encoder
already performs _way_ better than the proprietary one, so why leave
this nice tool unused where it can help?

> > Any suggestion which can replace this approach?
> 
> get_format would be more appropriate.

get_format() does not belong to the decoder, nor is it up to the task,
which I explained in a recent message.

> > This is very useful when you wish to check that you got it right
> > for a particular visual buffer / device, given that an application can
> > try to make its own (and possibly bad) choices. Not critical, I admit.
> 
> Add that to your application code. Or alternatively, make ffmpeg.c
> print the format (this would be useful for a few things).

I would be fine with commenting out these info-messages.

Regards,
Rune



Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

2017-02-03 Thread u-9iep
Hello Ronald,

On Thu, Feb 02, 2017 at 11:16:35AM -0500, Ronald S. Bultje wrote:
> On Thu, Feb 2, 2017 at 10:59 AM,  wrote:
> > It is the irregular differences between them which are the reason
> > for splitting. I would not call this "duplication". If you feel
> > it is straightforward and important to make this more compact,
> > with the same performance, just go ahead.

> So, typically, we wouldn't duplicate the code, we'd template it. There
> are some examples in h264 of how to do it. You'd have a single
> (av_always_inline) decode_codebook function, which takes "format" as an
> argument, and then have three av_noinline callers to it (using
> fmt=rgb565, fmt=rgb24 or fmt=rgb32).
> 
> That way performance works as you want it, without the source code
> duplication.

(Thanks for the pointer. I'll look at how it is done in h264, but:)

I thought about generating the bodies of the functions from something
like a template, but it did not feel like this would make the code more
understandable, i.e. maintainable. So I wonder if there is any point in
doing this, given the same binary result.

> > What we _especially_ don't do in FFmpeg is hardcoding colorspace
> > > conversion coefficients in decoders, and doing colorspace conversion in
> > > decoders.
> >
> > Have you got a suggestion how to avoid this in this case,
> > without sacrificing the speed?

> Ah, yes, the question. So, the code change is quite big and it does various
> things, and each of these might have a better alternative or be good as-is.
> fundamentally, I don't really understand how _adding_ a colorspace
> conversion does any good to speed. It fundamentally always makes things
> slower. So can you explain why you need to _add_ a colorspace conversion?

It moves the conversion from after the decoder, where the data to
convert is whole frames, to inside it, where the conversion applies
only to the codebooks, which by the codec's design are much smaller
than the output.

> Why not just always output the native format? (And then do conversion in

The "native" Cinepak format is actually unknown to the swscaler, and I
seriously doubt it would make sense to add it there, as that would
just largely cut down the decoding efficiency.

> GPU if really needed, that is always near-free.)

You assume one has a GPU.

The intention is to allow usable playback on as simple/slow devices
as possible.

> > > +char *out_fmt_override = getenv("CINEPAK_OUTPUT_FORMAT_OVERRIDE");
> >
> > > Absolutely not acceptable.
> >
> > 1. Why?
> >
> 
> Libavcodec is a library. Being sensitive to environment in a library, or
> worse yet, affecting the environment, is typically not what is expected.
> There are almost always better ways to do the same thing.

I did my best to look for a better way, but one does not seem to exist.

> For example, in this case:

> 2. I am open to a better solution of how to choose a format at run time   
> when the application lacks the knowledge for choosing the most suitable   
> format or does not even try to.   

> wm4 suggested earlier to implement a get_format() function. He meant this:
> 
> https://www.ffmpeg.org/doxygen/trunk/structAVCodecContext.html#ae85c5a0e81e9f97c063881148edc28b7

Unfortunately this is _not_ a solution.

I was of course aware of get_format(). Its functionality seems to be
intended for use by applications: "decoding: Set by user". Correct me
if I am wrong (or, even better, correct the documentation, which
apparently is confusing?).

In any case, the codec has no means to know which output format is
"best" in a particular case so all it can do is to list the formats it
offers. This is done and using get_format() should work as expected,
which unfortunately does not solve anything:

There is of course hardly any application which tries to tune the codecs
to the output. (MPlayer seems to try, but this does not seem to help,
possibly because its codec handling is generalized beyond ffmpeg codec
interfaces?)
Codecs efficiently supporting multiple formats are definitely a tiny
minority, so it is no surprise that applications are not prepared to
deal with this.

Applications also lack any reliable knowledge of which format from the
codec is or is not efficient. Only the deployment context can tell
what is best for the particular case, and that is known neither to the
codec nor to a general-purpose application.

To sum up, I do not expect that there is any better solution than using
an out-of-band channel like a dedicated environment variable, but I
would of course be happy to see one.

Regards,
Rune



[FFmpeg-devel] [PATCH 1/2] configure: add nologo switch to invocation of lib.exe

2017-02-03 Thread Hendrik Leppkes
This suppresses the startup banner, which is consistent with all other calls
to the Windows SDK binaries.
---
 configure | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/configure b/configure
index b22c8b3389..d3d652f0f4 100755
--- a/configure
+++ b/configure
@@ -4804,7 +4804,7 @@ case $target_os in
 SLIBNAME_WITH_MAJOR='$(SLIBPREF)$(FULLNAME)-$(LIBMAJOR)$(SLIBSUF)'
 dlltool="${cross_prefix}dlltool"
 if check_cmd lib.exe -list; then
-SLIB_EXTRA_CMD=-'sed -e "s/ @[^ ]*//" $$(@:$(SLIBSUF)=.orig.def) > $$(@:$(SLIBSUF)=.def); lib.exe /machine:$(LIBTARGET) /def:$$(@:$(SLIBSUF)=.def) /out:$(SUBDIR)$(SLIBNAME:$(SLIBSUF)=.lib)'
+SLIB_EXTRA_CMD=-'sed -e "s/ @[^ ]*//" $$(@:$(SLIBSUF)=.orig.def) > $$(@:$(SLIBSUF)=.def); lib.exe /nologo /machine:$(LIBTARGET) /def:$$(@:$(SLIBSUF)=.def) /out:$(SUBDIR)$(SLIBNAME:$(SLIBSUF)=.lib)'
 if enabled x86_64; then
 LIBTARGET=x64
 fi
-- 
2.11.0.windows.1



[FFmpeg-devel] [PATCH 2/2] configure: instruct MSVC 2015 to properly process UTF-8 string literals

2017-02-03 Thread Hendrik Leppkes
Without the /UTF-8 switch, the MSVC compiler treats all files as in the
system codepage, instead of in UTF-8, which causes UTF-8 string literals
to be interpreted wrong.

This switch was only introduced in VS2015 Update 2, and any earlier
versions do not have an equivalent solution.

Fixes fate-sub-scc on MSVC 2015+
---
 configure | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/configure b/configure
index d3d652f0f4..231cc3eca7 100755
--- a/configure
+++ b/configure
@@ -6327,6 +6327,9 @@ EOF
 # Issue has been fixed in MSVC v19.00.24218.
 check_cpp_condition windows.h "_MSC_FULL_VER >= 190024218" ||
 check_cflags -d2SSAOptimizer-
+# enable utf-8 source processing on VS2015 U2 and newer
+check_cpp_condition windows.h "_MSC_FULL_VER >= 190023918" &&
+add_cflags -utf-8
 fi
 
 for pfx in "" host_; do
-- 
2.11.0.windows.1
