Re: [FFmpeg-devel] [PATCH] avutil/frame: Simplify the video allocation

2018-09-10 Thread James Almer
On 9/10/2018 7:57 PM, Michael Niedermayer wrote:
> From: Luca Barbato 
> 
> Merged-by: James Almer 
> Padding-Remixed-by: Michael Niedermayer 
> Signed-off-by: Michael Niedermayer 
> ---
>  libavutil/frame.c | 32 
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/libavutil/frame.c b/libavutil/frame.c
> index deb9b6f334..6147c61259 100644
> --- a/libavutil/frame.c
> +++ b/libavutil/frame.c
> @@ -211,7 +211,8 @@ void av_frame_free(AVFrame **frame)
>  static int get_video_buffer(AVFrame *frame, int align)
>  {
>  const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
> -int ret, i;
> +int ret, i, padded_height;
> +int plane_padding = FFMAX(16 + 16/*STRIDE_ALIGN*/, align);

STRIDE_ALIGN can be 64 right now, depending on configure-time options
and environment. But there's no AVX-512 code in the tree just yet, so I
guess it's not important for now.
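
(For illustration only, a minimal sketch of how the padding choice could
track a larger stride alignment instead of the hardcoded 16 + 16; the
value below is an assumed stand-in, not libavcodec's STRIDE_ALIGN macro,
which is not visible from libavutil.)

#define ASSUMED_STRIDE_ALIGN 64               /* worst case mentioned above; an assumption */

#define FFMAX(a, b) ((a) > (b) ? (a) : (b))   /* local copy, to keep the sketch standalone */

/* Pick the per-plane padding as the larger of the assumed stride
 * alignment and the alignment requested by the caller. */
static int plane_padding_for(int align)
{
    return FFMAX(ASSUMED_STRIDE_ALIGN, align);
}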

>  
>  if (!desc)
>  return AVERROR(EINVAL);
> @@ -236,23 +237,22 @@ static int get_video_buffer(AVFrame *frame, int align)
>  frame->linesize[i] = FFALIGN(frame->linesize[i], align);
>  }
>  
> -for (i = 0; i < 4 && frame->linesize[i]; i++) {
> -int h = FFALIGN(frame->height, 32);
> -if (i == 1 || i == 2)
> -h = AV_CEIL_RSHIFT(h, desc->log2_chroma_h);
> +padded_height = FFALIGN(frame->height, 32);
> +if ((ret = av_image_fill_pointers(frame->data, frame->format, padded_height,
> +  NULL, frame->linesize)) < 0)
> +return ret;
>  
> -frame->buf[i] = av_buffer_alloc(frame->linesize[i] * h + 16 + 16/*STRIDE_ALIGN*/ - 1);
> -if (!frame->buf[i])
> -goto fail;
> +frame->buf[0] = av_buffer_alloc(ret + 4*plane_padding);
> +if (!frame->buf[0])
> +goto fail;
>  
> -frame->data[i] = frame->buf[i]->data;
> -}
> -if (desc->flags & AV_PIX_FMT_FLAG_PAL || desc->flags & FF_PSEUDOPAL) {
> -av_buffer_unref(&frame->buf[1]);
> -frame->buf[1] = av_buffer_alloc(AVPALETTE_SIZE);
> -if (!frame->buf[1])
> -goto fail;
> -frame->data[1] = frame->buf[1]->data;
> +if (av_image_fill_pointers(frame->data, frame->format, padded_height,
> +   frame->buf[0]->data, frame->linesize) < 0)
> +goto fail;
> +
> +for (i = 0; i < 4; i++) {
> +if (frame->data[i])
> +frame->data[i] += i*plane_padding;

I see now what you meant regarding the pointers post merge.

Thanks a lot. Will merge later with the suggested change in your reply
to this patch.

>  }
>  
>  frame->extended_data = frame->data;
> 



Re: [FFmpeg-devel] [PATCH 0/5] Support for Decklink output of EIA-708 and AFD

2018-09-10 Thread Marton Balint



On Sun, 9 Sep 2018, Devin Heitmueller wrote:


On Sun, Sep 9, 2018 at 4:59 PM, Marton Balint  wrote:


Thanks, I applied patches 1-4.


 decklink: Add support for output of Active Format Description (AFD)



Regarding this one, I noticed you always set the AFD in line 12. Are you
sure that it is OK to use line 12 for all resolutions?


12 should be fine at all resolutions, as it just needs to be at least
after the first line for switching (see ST 2016-3-2009 Sec 5).  I
already have a subsequent patch which makes the line configurable (as
well as for 708 and SCTE-104), but I am trying to avoid overloading
you with patches (which tends to result in *nothing* getting merged).


Also, I think for
interlaced formats you should set AFD for both fields, otherwise some
equipment might scale/crop the two fields of a picture differently...


I've never seen a piece of equipment do such an incorrect scale/crop,
but I guess it's possible.  Part of the issue is that there are a few
different conditions in which it can vary between the two fields and
the way the underlying side-data is managed needs to be overhauled in
order to properly handle that case (e.g. the SEI can be on a field
basis in H.264, and we don't presently handle providing both values as
side data for the frame).

I think this patch handles the 99% use case (especially as PAFF
becomes less and less common). Putting the same value on both lines
for interlaced formats is probably not a bad idea, although I suspect
that in practice you're unlikely to run into equipment that has a
problem with it only appearing once.


Yes, just put the same value on both line 12 (or whichever line the user
selects) and its pair line in the other field.
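
(A minimal sketch of that suggestion, for illustration; the line-writing
callback and the field-two line offset are assumed placeholders here, not
the actual Decklink output code, which goes through the
IDeckLinkVideoFrameAncillary interface.)

#include <stdint.h>

/* Signature of whatever actually writes one VANC line; assumed here. */
typedef int (*write_vanc_line_fn)(void *ctx, int line,
                                  const uint8_t *payload, int size);

/* Emit the same AFD payload on the user-selected line of field 1 and,
 * for interlaced formats, on the corresponding line of field 2. */
static int write_afd(write_vanc_line_fn write_line, void *ctx,
                     int user_line, int field2_offset, int interlaced,
                     const uint8_t *afd, int afd_size)
{
    int ret = write_line(ctx, user_line, afd, afd_size);
    if (ret < 0 || !interlaced)
        return ret;
    return write_line(ctx, user_line + field2_offset, afd, afd_size);
}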


I am not worried about set-top boxes or encoders, but about other
broadcast equipment (e.g. downconverters) working on uncompressed SDI.
We hit this issue with a product of a big vendor.


Regards,
Marton


Re: [FFmpeg-devel] swscale : add bitexact conv for grayf32 and gray16 to f32 conv

2018-09-10 Thread Michael Niedermayer
On Mon, Sep 10, 2018 at 07:57:42PM +0200, Martin Vignali wrote:
> >
> > does the LUT generation code produce different results on platforms ?
> > if so i would suggest to try to use double and to add a small offset if
> > needed
> >
> > a 8bit table has 256 entries, a 16bit table 65536
> > a difference would occur if a source value from 64bit floats gets rounded
> > differently to 32bit floats. If this occurs a small offset could be added
> > so that none of the 65536 cases end up close to being between 2 32bit
> > floats
> >
> > This would avoid the rather complex code if it works
> >
> >
> Hello,
> 
> > Can't test on platforms other than x86_32 and x86_64, so I can't
> > really answer this question.

> It's the reason why I wrote code which doesn't use float calculations.

you can just use something like (possibly with more or less 0 or a magnitude
related value)
#define assert_stable_int(x)   av_assert(llrintf(x+0.0001) == llrintf(x-0.0001))
#define assert_stable_float(x) av_assert((float)(x+0.0001) == (float)(x-0.0001))

and then place this where rounding happens, then you can see easily if
any test gets close to problematic values. No need for special HW
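
(For illustration, a standalone sketch of where such an assert could sit
in a gray16 -> float LUT build; it uses plain assert() instead of
av_assert0() to stay self-contained, and the 1/65535 normalization and
the margin of a few double ULPs are assumptions, not the swscale code.)

#include <assert.h>
#include <float.h>
#include <math.h>

/* Margin: a few double ULPs, since the concern is platforms computing the
 * double slightly differently, not the double itself being imprecise. */
#define STABLE_MARGIN(x)       (fabs(x) * DBL_EPSILON * 4)
#define ASSERT_STABLE_FLOAT(x) \
    assert((float)((x) + STABLE_MARGIN(x)) == (float)((x) - STABLE_MARGIN(x)))

static void build_gray16_to_f32_lut(float lut[65536])
{
    for (int i = 0; i < 65536; i++) {
        double v = i / 65535.0;
        ASSERT_STABLE_FLOAT(v);   /* trips if v sits close to a float rounding boundary */
        lut[i] = (float)v;
    }
}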


[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Let us carefully observe those good qualities wherein our enemies excel us
and endeavor to excel them, by avoiding what is faulty, and imitating what
is excellent in them. -- Plutarch




Re: [FFmpeg-devel] [PATCH] avutil/frame: Simplify the video allocation

2018-09-10 Thread Michael Niedermayer
On Tue, Sep 11, 2018 at 12:57:11AM +0200, Michael Niedermayer wrote:
> From: Luca Barbato 
> 
> Merged-by: James Almer 
> Padding-Remixed-by: Michael Niedermayer 
> Signed-off-by: Michael Niedermayer 
> ---
>  libavutil/frame.c | 32 
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/libavutil/frame.c b/libavutil/frame.c
> index deb9b6f334..6147c61259 100644
> --- a/libavutil/frame.c
> +++ b/libavutil/frame.c
> @@ -211,7 +211,8 @@ void av_frame_free(AVFrame **frame)
>  static int get_video_buffer(AVFrame *frame, int align)
>  {
>  const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
> -int ret, i;
> +int ret, i, padded_height;
> +int plane_padding = FFMAX(16 + 16/*STRIDE_ALIGN*/, align);
>  
>  if (!desc)
>  return AVERROR(EINVAL);
> @@ -236,23 +237,22 @@ static int get_video_buffer(AVFrame *frame, int align)
>  frame->linesize[i] = FFALIGN(frame->linesize[i], align);
>  }
>  
> -for (i = 0; i < 4 && frame->linesize[i]; i++) {
> -int h = FFALIGN(frame->height, 32);
> -if (i == 1 || i == 2)
> -h = AV_CEIL_RSHIFT(h, desc->log2_chroma_h);
> +padded_height = FFALIGN(frame->height, 32);
> +if ((ret = av_image_fill_pointers(frame->data, frame->format, padded_height,
> +  NULL, frame->linesize)) < 0)
> +return ret;
>  
> -frame->buf[i] = av_buffer_alloc(frame->linesize[i] * h + 16 + 16/*STRIDE_ALIGN*/ - 1);
> -if (!frame->buf[i])
> -goto fail;
> +frame->buf[0] = av_buffer_alloc(ret + 4*plane_padding);
> +if (!frame->buf[0])
> +goto fail;
>  
> -frame->data[i] = frame->buf[i]->data;
> -}
> -if (desc->flags & AV_PIX_FMT_FLAG_PAL || desc->flags & FF_PSEUDOPAL) {
> -av_buffer_unref(&frame->buf[1]);
> -frame->buf[1] = av_buffer_alloc(AVPALETTE_SIZE);
> -if (!frame->buf[1])
> -goto fail;
> -frame->data[1] = frame->buf[1]->data;
> +if (av_image_fill_pointers(frame->data, frame->format, padded_height,
> +   frame->buf[0]->data, frame->linesize) < 0)
> +goto fail;
> +

> +for (i = 0; i < 4; i++) {
^
this would be more ideal if it started at 1; saw it a moment after sending
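
(A sketch of the suggested tweak, for illustration: plane 0 keeps the
start of the buffer, so only planes 1..3 need an offset; i*plane_padding
is 0 for i == 0 anyway, starting at 1 just makes that explicit.)

#include <stdint.h>

static void offset_planes(uint8_t *data[4], int plane_padding)
{
    for (int i = 1; i < 4; i++)
        if (data[i])
            data[i] += i * plane_padding;
}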


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Does the universe only have a finite lifespan? No, its going to go on
forever, its just that you wont like living in it. -- Hiranya Peiri




[FFmpeg-devel] [PATCH] avutil/frame: Simplify the video allocation

2018-09-10 Thread Michael Niedermayer
From: Luca Barbato 

Merged-by: James Almer 
Padding-Remixed-by: Michael Niedermayer 
Signed-off-by: Michael Niedermayer 
---
 libavutil/frame.c | 32 
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/libavutil/frame.c b/libavutil/frame.c
index deb9b6f334..6147c61259 100644
--- a/libavutil/frame.c
+++ b/libavutil/frame.c
@@ -211,7 +211,8 @@ void av_frame_free(AVFrame **frame)
 static int get_video_buffer(AVFrame *frame, int align)
 {
 const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
-int ret, i;
+int ret, i, padded_height;
+int plane_padding = FFMAX(16 + 16/*STRIDE_ALIGN*/, align);
 
 if (!desc)
 return AVERROR(EINVAL);
@@ -236,23 +237,22 @@ static int get_video_buffer(AVFrame *frame, int align)
 frame->linesize[i] = FFALIGN(frame->linesize[i], align);
 }
 
-for (i = 0; i < 4 && frame->linesize[i]; i++) {
-int h = FFALIGN(frame->height, 32);
-if (i == 1 || i == 2)
-h = AV_CEIL_RSHIFT(h, desc->log2_chroma_h);
+padded_height = FFALIGN(frame->height, 32);
+if ((ret = av_image_fill_pointers(frame->data, frame->format, padded_height,
+  NULL, frame->linesize)) < 0)
+return ret;
 
-frame->buf[i] = av_buffer_alloc(frame->linesize[i] * h + 16 + 16/*STRIDE_ALIGN*/ - 1);
-if (!frame->buf[i])
-goto fail;
+frame->buf[0] = av_buffer_alloc(ret + 4*plane_padding);
+if (!frame->buf[0])
+goto fail;
 
-frame->data[i] = frame->buf[i]->data;
-}
-if (desc->flags & AV_PIX_FMT_FLAG_PAL || desc->flags & FF_PSEUDOPAL) {
-av_buffer_unref(&frame->buf[1]);
-frame->buf[1] = av_buffer_alloc(AVPALETTE_SIZE);
-if (!frame->buf[1])
-goto fail;
-frame->data[1] = frame->buf[1]->data;
+if (av_image_fill_pointers(frame->data, frame->format, padded_height,
+   frame->buf[0]->data, frame->linesize) < 0)
+goto fail;
+
+for (i = 0; i < 4; i++) {
+if (frame->data[i])
+frame->data[i] += i*plane_padding;
 }
 
 frame->extended_data = frame->data;
-- 
2.18.0



Re: [FFmpeg-devel] [PATCH v5 0/2] libavformat/mxfenc: add missing dnxhr mxfcontainer essence ULs

2018-09-10 Thread Carl Eugen Hoyos
2018-09-08 15:26 GMT+02:00, Jason Stevens :
> version 5 of this patch set properly sets DNxHR HQX/444 bit depth.
>
> Jason Stevens (2):
>   libavcodec/dnxhd: change ff_dnxhd_get_hr_frame_size to avpriv_
>   libavformat/mxfenc: add missing dnxhr mxfcontainer essence ULs

Patches applied.

Thank you, Carl Eugen


Re: [FFmpeg-devel] [PATCH v8 1/2] lavc, doc, configure: add libxavs2 video encoder wrapper

2018-09-10 Thread Mark Thompson
On 10/09/18 04:59, hwren wrote:
> Signed-off-by: hwren 
> ---
>  Changelog  |   1 +
>  configure  |   4 +
>  doc/encoders.texi  |  49 
>  doc/general.texi   |  14 +++
>  libavcodec/Makefile|   1 +
>  libavcodec/allcodecs.c |   1 +
>  libavcodec/libxavs2.c  | 300 
> +
>  libavcodec/version.h   |   4 +-
>  8 files changed, 372 insertions(+), 2 deletions(-)
>  create mode 100644 libavcodec/libxavs2.c
> 
> ...> diff --git a/libavcodec/libxavs2.c b/libavcodec/libxavs2.c
> new file mode 100644
> index 000..a834f6e
> --- /dev/null
> +++ b/libavcodec/libxavs2.c
> ...
> +
> +/* Rate control */
> +if (avctx->bit_rate > 0) {
> +xavs2_opt_set2("RateControl",   "%d", 1);
> +xavs2_opt_set2("initial_qp","%d", cae->qp);
> +xavs2_opt_set2("TargetBitRate", "%"PRId64"", avctx->bit_rate);
> +} else {
> +xavs2_opt_set2("initial_qp","%d", cae->initial_qp);
> +xavs2_opt_set2("max_qp","%d", cae->max_qp);
> +xavs2_opt_set2("min_qp","%d", cae->min_qp);
> +}

The QP settings are the wrong way around - initial_qp, max_qp and min_qp should 
go with the rate control case.
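
(For illustration, a hedged sketch of the arrangement being asked for:
with a target bitrate, rate control is enabled and initial/min/max QP act
as its constraints; otherwise the encoder runs at the fixed qp option.
xavs2_opt_set2() and the option names are taken from the patch context;
the exact final form is up to the author.)

    if (avctx->bit_rate > 0) {
        xavs2_opt_set2("RateControl",   "%d", 1);
        xavs2_opt_set2("TargetBitRate", "%"PRId64"", avctx->bit_rate);
        xavs2_opt_set2("initial_qp",    "%d", cae->initial_qp);
        xavs2_opt_set2("max_qp",        "%d", cae->max_qp);
        xavs2_opt_set2("min_qp",        "%d", cae->min_qp);
    } else {
        xavs2_opt_set2("initial_qp",    "%d", cae->qp);
    }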

> ...

Everything else LGTM now.

Does anyone else have any comments on this?  If not, I'll push it tomorrow with 
that fixed.

Thanks,

- Mark


[FFmpeg-devel] [PATCH] lavfi/silencedetect: fix spelling

2018-09-10 Thread Tristan Matthews
---
 libavfilter/af_silencedetect.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/libavfilter/af_silencedetect.c b/libavfilter/af_silencedetect.c
index 6e321a5d97..d9582aa589 100644
--- a/libavfilter/af_silencedetect.c
+++ b/libavfilter/af_silencedetect.c
@@ -38,7 +38,7 @@ typedef struct SilenceDetectContext {
 double duration;///< minimum duration of silence until 
notification
 int mono;   ///< mono mode : check each channel separately 
(default = check when ALL channels are silent)
 int channels;   ///< number of channels
-int independant_channels;   ///< number of entries in following arrays 
(always 1 in mono mode)
+int independent_channels;   ///< number of entries in following arrays 
(always 1 in mono mode)
 int64_t *nb_null_samples;   ///< (array) current number of continuous zero 
samples
 int64_t *start; ///< (array) if silence is detected, this 
value contains the time of the first zero sample (default/unset = INT64_MIN)
 int64_t frame_end;  ///< pts of the end of the current frame (used 
to compute duration of silence at EOS)
@@ -77,12 +77,12 @@ static av_always_inline void update(SilenceDetectContext 
*s, AVFrame *insamples,
 int is_silence, int current_sample, 
int64_t nb_samples_notify,
 AVRational time_base)
 {
-int channel = current_sample % s->independant_channels;
+int channel = current_sample % s->independent_channels;
 if (is_silence) {
 if (s->start[channel] == INT64_MIN) {
 s->nb_null_samples[channel]++;
 if (s->nb_null_samples[channel] >= nb_samples_notify) {
-s->start[channel] = insamples->pts + 
av_rescale_q(current_sample / s->channels + 1 - nb_samples_notify * 
s->independant_channels / s->channels,
+s->start[channel] = insamples->pts + 
av_rescale_q(current_sample / s->channels + 1 - nb_samples_notify * 
s->independent_channels / s->channels,
 (AVRational){ 1, s->last_sample_rate }, time_base);
 set_meta(insamples, s->mono ? channel + 1 : 0, "silence_start",
av_ts2timestr(s->start[channel], &time_base));
@@ -141,14 +141,14 @@ static int config_input(AVFilterLink *inlink)
 int c;
 
 s->channels = inlink->channels;
-s->independant_channels = s->mono ? s->channels : 1;
-s->nb_null_samples = av_mallocz_array(sizeof(*s->nb_null_samples), 
s->independant_channels);
+s->independent_channels = s->mono ? s->channels : 1;
+s->nb_null_samples = av_mallocz_array(sizeof(*s->nb_null_samples), 
s->independent_channels);
 if (!s->nb_null_samples)
 return AVERROR(ENOMEM);
-s->start = av_malloc_array(sizeof(*s->start), s->independant_channels);
+s->start = av_malloc_array(sizeof(*s->start), s->independent_channels);
 if (!s->start)
 return AVERROR(ENOMEM);
-for (c = 0; c < s->independant_channels; c++)
+for (c = 0; c < s->independent_channels; c++)
 s->start[c] = INT64_MIN;
 
 switch (inlink->format) {
@@ -178,7 +178,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
*insamples)
 
 // scale number of null samples to the new sample rate
 if (s->last_sample_rate && s->last_sample_rate != srate)
-for (c = 0; c < s->independant_channels; c++) {
+for (c = 0; c < s->independent_channels; c++) {
 s->nb_null_samples[c] = srate * s->nb_null_samples[c] / 
s->last_sample_rate;
 }
 s->last_sample_rate = srate;
@@ -231,7 +231,7 @@ static av_cold void uninit(AVFilterContext *ctx)
 SilenceDetectContext *s = ctx->priv;
 int c;
 
-for (c = 0; c < s->independant_channels; c++)
+for (c = 0; c < s->independent_channels; c++)
 if (s->start[c] > INT64_MIN)
 update(s, NULL, 0, c, 0, s->time_base);
av_freep(&s->nb_null_samples);
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH 3/3] lavc: Add AV1 metadata bitstream filter

2018-09-10 Thread Michael Niedermayer
On Sun, Sep 09, 2018 at 11:08:12PM +0100, Mark Thompson wrote:
> Can adjust colour and timing information.
> ---
> A simple start to the bsf - metadata support still todo.
> 
> 
>  configure  |   1 +
>  libavcodec/Makefile|   1 +
>  libavcodec/av1_metadata_bsf.c  | 267 +
>  libavcodec/bitstream_filters.c |   1 +
>  4 files changed, 270 insertions(+)
>  create mode 100644 libavcodec/av1_metadata_bsf.c

breaks build on mips:
CC  libavcodec/av1_metadata_bsf.o
In file included from src/libavcodec/av1_metadata_bsf.c:25:
src/libavcodec/cbs_av1.h:364: warning: declaration does not declare anything
src/libavcodec/cbs_av1.h:380: warning: declaration does not declare anything
src/libavcodec/av1_metadata_bsf.c: In function ‘av1_metadata_filter’:
src/libavcodec/av1_metadata_bsf.c:134: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
src/libavcodec/av1_metadata_bsf.c: In function ‘av1_metadata_init’:
src/libavcodec/av1_metadata_bsf.c:182: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
make: *** [libavcodec/av1_metadata_bsf.o] Error 1
CC  libavcodec/cbs_av1.o
In file included from src/libavcodec/cbs_av1.c:25:
src/libavcodec/cbs_av1.h:364: warning: declaration does not declare anything
src/libavcodec/cbs_av1.h:380: warning: declaration does not declare anything
In file included from src/libavcodec/cbs_av1.c:677:
src/libavcodec/cbs_av1_syntax_template.c: In function 
‘cbs_av1_read_metadata_obu’:
src/libavcodec/cbs_av1_syntax_template.c:1656: error: ‘AV1RawMetadata’ has no 
member named ‘hdr_cll’
src/libavcodec/cbs_av1_syntax_template.c:1659: error: ‘AV1RawMetadata’ has no 
member named ‘hdr_mdcv’
src/libavcodec/cbs_av1_syntax_template.c:1662: error: ‘AV1RawMetadata’ has no 
member named ‘scalability’
src/libavcodec/cbs_av1_syntax_template.c:1665: error: ‘AV1RawMetadata’ has no 
member named ‘itut_t35’
src/libavcodec/cbs_av1_syntax_template.c:1668: error: ‘AV1RawMetadata’ has no 
member named ‘timecode’
In file included from src/libavcodec/cbs_av1.c:753:
src/libavcodec/cbs_av1_syntax_template.c: In function 
‘cbs_av1_write_metadata_obu’:
src/libavcodec/cbs_av1_syntax_template.c:1656: error: ‘AV1RawMetadata’ has no 
member named ‘hdr_cll’
src/libavcodec/cbs_av1_syntax_template.c:1659: error: ‘AV1RawMetadata’ has no 
member named ‘hdr_mdcv’
src/libavcodec/cbs_av1_syntax_template.c:1662: error: ‘AV1RawMetadata’ has no 
member named ‘scalability’
src/libavcodec/cbs_av1_syntax_template.c:1665: error: ‘AV1RawMetadata’ has no 
member named ‘itut_t35’
src/libavcodec/cbs_av1_syntax_template.c:1668: error: ‘AV1RawMetadata’ has no 
member named ‘timecode’
src/libavcodec/cbs_av1.c: In function ‘cbs_av1_free_metadata’:
src/libavcodec/cbs_av1.c:841: error: ‘AV1RawMetadata’ has no member named 
‘itut_t35’
src/libavcodec/cbs_av1.c: In function ‘cbs_av1_free_obu’:
src/libavcodec/cbs_av1.c:852: error: ‘AV1RawOBU’ has no member named 
‘tile_group’
src/libavcodec/cbs_av1.c:855: error: ‘AV1RawOBU’ has no member named ‘frame’
src/libavcodec/cbs_av1.c:858: error: ‘AV1RawOBU’ has no member named ‘tile_list’
src/libavcodec/cbs_av1.c:861: error: ‘AV1RawOBU’ has no member named ‘metadata’
src/libavcodec/cbs_av1.c: In function ‘cbs_av1_read_unit’:
src/libavcodec/cbs_av1.c:957: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
src/libavcodec/cbs_av1.c:967: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
src/libavcodec/cbs_av1.c:980: error: ‘AV1RawOBU’ has no member named 
‘frame_header’
src/libavcodec/cbs_av1.c:987: error: ‘AV1RawOBU’ has no member named 
‘tile_group’
src/libavcodec/cbs_av1.c:991: error: ‘AV1RawOBU’ has no member named 
‘tile_group’
src/libavcodec/cbs_av1.c:998: error: ‘AV1RawOBU’ has no member named ‘frame’
src/libavcodec/cbs_av1.c:1002: error: ‘AV1RawOBU’ has no member named ‘frame’
src/libavcodec/cbs_av1.c:1009: error: ‘AV1RawOBU’ has no member named 
‘tile_list’
src/libavcodec/cbs_av1.c:1013: error: ‘AV1RawOBU’ has no member named 
‘tile_list’
src/libavcodec/cbs_av1.c:1020: error: ‘AV1RawOBU’ has no member named ‘metadata’
src/libavcodec/cbs_av1.c: In function ‘cbs_av1_write_obu’:
src/libavcodec/cbs_av1.c:1078: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
src/libavcodec/cbs_av1.c:1088: error: ‘AV1RawOBU’ has no member named 
‘sequence_header’
src/libavcodec/cbs_av1.c:1101: error: ‘AV1RawOBU’ has no member named 
‘frame_header’
src/libavcodec/cbs_av1.c:1108: error: ‘AV1RawOBU’ has no member named 
‘tile_group’
src/libavcodec/cbs_av1.c:1112: error: ‘AV1RawOBU’ has no member named 
‘tile_group’
src/libavcodec/cbs_av1.c:1117: error: ‘AV1RawOBU’ has no member named ‘frame’
src/libavcodec/cbs_av1.c:1121: error: ‘AV1RawOBU’ has no member named ‘frame’
src/libavcodec/cbs_av1.c:1126: error: ‘AV1RawOBU’ has no member named 
‘tile_list’
src/libavcodec/cbs_av1.c:1130: error: ‘AV1RawOBU’ has no member named 
‘tile_list’
src/libavcodec/cbs_av1.c:1135: error: ‘AV1RawOBU’ has no member named ‘metadata’
make: *** 
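
(A hedged guess at the cause: the GCC on that mips box looks like a
pre-C11 compiler, which rejects anonymous struct/union members with
exactly this "declaration does not declare anything" warning, after which
every access such as obu->sequence_header fails with "has no member
named". A minimal illustration of the pattern and the usual workaround,
not the actual cbs_av1.h contents:)

struct with_anonymous_union {    /* what a pre-C11 GCC rejects */
    int type;
    union {
        int   a;
        float b;
    };                           /* anonymous member, C11 only */
};

struct with_named_union {        /* portable to older compilers */
    int type;
    union {
        int   a;
        float b;
    } u;                         /* accesses become x.u.a instead of x.a */
};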

Re: [FFmpeg-devel] swscale : add bitexact conv for grayf32 and gray16 to f32 conv

2018-09-10 Thread Martin Vignali
>
> does the LUT generation code produce different results on platforms ?
> if so i would suggest to try to use double and to add a small offset if
> needed
>
> a 8bit table has 256 entries, a 16bit table 65536
> a difference would occur if a source value from 64bit floats gets rounded
> differently to 32bit floats. If this occurs a small offset could be added
> so that none of the 65536 cases end up close to being between 2 32bit
> floats
>
> This would avoid teh rather complex code if it works
>
>
Hello,

Can't test on platforms other than x86_32 and x86_64, so I can't really
answer this question.
It's the reason why I wrote code which doesn't use float calculations.

Martin


Re: [FFmpeg-devel] [PATCH] libavfilter: Removes stored DNN models. Adds support for native backend model file format in tf backend. Removes scaling and conversion with libswscale and replaces input fo

2018-09-10 Thread Pedro Arthur
2018-09-06 8:44 GMT-03:00 Sergey Lavrushkin :

> Here is the patch with the sws removal changes reverted. I didn't split it
> into two patches, because the code that supports the native model file
> format in the tf backend partially comes from the default model
> construction code, which is removed along with the default models and
> stored data.
>
Ok.



The scale_factor option is not necessary; it should be stored in the model
file. As the weights are trained for a specific factor, using anything
different from that will give bad results.
It seems the native depth-to-space conversion is buggy: a few lines at the
top of the output image are duplicated; the tf backend is OK. BTW, this bug
was not introduced by this patch.

Other than that LGTM; the above fixes can be done in separate patches.
I may push it by Friday.
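
(For reference, a hedged sketch of what a depth-to-space rearrangement is
expected to compute, assuming single-image NHWC float data and block size
r as used by the super-resolution models:
out[y][x][c] = in[y/r][x/r][((y%r)*r + x%r)*C_out + c]. This is an
illustration of the expected mapping, not the filter's actual code.)

static void depth_to_space_ref(const float *in, float *out,
                               int in_h, int in_w, int in_c, int r)
{
    int out_c = in_c / (r * r);                 /* output channel count */
    for (int y = 0; y < in_h * r; y++)
        for (int x = 0; x < in_w * r; x++)
            for (int c = 0; c < out_c; c++) {
                int ic = ((y % r) * r + (x % r)) * out_c + c;
                out[(y * in_w * r + x) * out_c + c] =
                    in[((y / r) * in_w + (x / r)) * in_c + ic];
            }
}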


Re: [FFmpeg-devel] [PATCH v5 2/2] libavformat/mxfenc: add missing dnxhr mxfcontainer essence ULs

2018-09-10 Thread Baptiste Coudurier
Hi Jason,

On Sat, Sep 8, 2018 at 6:27 AM Jason Stevens  wrote:

> Add missing dnxhr mxf container essence ULs to the mxf encoder.
>
> This fixes dnxhr mxf files being quarantined by Avid Media Composer.
>
> Signed-off-by: Jason Stevens 
> ---
>  libavformat/mxfenc.c | 53 +++-
>  1 file changed, 52 insertions(+), 1 deletion(-)
>
> diff --git a/libavformat/mxfenc.c b/libavformat/mxfenc.c
> index 7f629dbe53..66814ef6a1 100644
> --- a/libavformat/mxfenc.c
> +++ b/libavformat/mxfenc.c
> @@ -146,6 +146,11 @@ enum ULIndex {
>  INDEX_DNXHD_720p_8bit_HIGH,
>  INDEX_DNXHD_720p_8bit_MEDIUM,
>  INDEX_DNXHD_720p_8bit_LOW,
> +INDEX_DNXHR_LB,
> +INDEX_DNXHR_SQ,
> +INDEX_DNXHR_HQ,
> +INDEX_DNXHR_HQX,
> +INDEX_DNXHR_444,
>  INDEX_JPEG2000,
>  INDEX_H264,
>  };
> @@ -345,6 +350,31 @@ static const MXFContainerEssenceEntry
> mxf_essence_container_uls[] = {
>{
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
>{
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x13,0x00,0x00
> },
>mxf_write_cdci_desc },
> +// DNxHR LB - CID 1274
> +{ {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x11,0x01,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x28,0x00,0x00
> },
> +  mxf_write_cdci_desc },
> +// DNxHR SQ - CID 1273
> +{ {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x11,0x01,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x27,0x00,0x00
> },
> +  mxf_write_cdci_desc },
> +// DNxHR HQ - CID 1272
> +{ {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x11,0x01,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x26,0x00,0x00
> },
> +  mxf_write_cdci_desc },
> +// DNxHR HQX - CID 1271
> +{ {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x11,0x01,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x25,0x00,0x00
> },
> +  mxf_write_cdci_desc },
> +// DNxHR 444 - CID 1270
> +{ {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x11,0x01,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x05,0x00
> },
> +  {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0A,0x04,0x01,0x02,0x02,0x71,0x24,0x00,0x00
> },
> +  mxf_write_cdci_desc },
>  // JPEG2000
>  { {
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x07,0x0d,0x01,0x03,0x01,0x02,0x0c,0x01,0x00
> },
>{
> 0x06,0x0e,0x2b,0x34,0x01,0x02,0x01,0x01,0x0d,0x01,0x03,0x01,0x15,0x01,0x08,0x00
> },
> @@ -1959,7 +1989,11 @@ AVPacket *pkt)
>  header_cid = pkt->data + 0x28;
>  cid = header_cid[0] << 24 | header_cid[1] << 16 | header_cid[2] << 8
> | header_cid[3];
>
> -if ((frame_size = avpriv_dnxhd_get_frame_size(cid)) < 0)
> +if ((frame_size = avpriv_dnxhd_get_frame_size(cid)) ==
> DNXHD_VARIABLE) {
> +frame_size = avpriv_dnxhd_get_hr_frame_size(cid,
> st->codecpar->width, st->codecpar->height);
> +}
> +
> +if (frame_size < 0)
>  return -1;
>  if ((sc->interlaced = avpriv_dnxhd_get_interlaced(cid)) < 0)
>  return AVERROR_INVALIDDATA;
> @@ -1998,6 +2032,23 @@ AVPacket *pkt)
>  case 1253:
>  sc->index = INDEX_DNXHD_720p_8bit_LOW;
>  break;
> +case 1274:
> +sc->index = INDEX_DNXHR_LB;
> +break;
> +case 1273:
> +sc->index = INDEX_DNXHR_SQ;
> +break;
> +case 1272:
> +sc->index = INDEX_DNXHR_HQ;
> +break;
> +case 1271:
> +sc->index = INDEX_DNXHR_HQX;
> +sc->component_depth = st->codecpar->bits_per_raw_sample;
> +break;
> +case 1270:
> +sc->index = INDEX_DNXHR_444;
> +sc->component_depth = st->codecpar->bits_per_raw_sample;
> +break;
>  default:
>  return -1;
>  }
>

Looks good to me.

-- 
Baptiste


Re: [FFmpeg-devel] [PATCH] avcodec/h264dec: Fix init_context memleak on error path

2018-09-10 Thread Zhao Zhili

Please review, thanks!

On 2018-09-05 16:53, Zhao Zhili wrote:

---
  libavcodec/h264dec.c | 28 ++--
  1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/libavcodec/h264dec.c b/libavcodec/h264dec.c
index 8d115fa..b2447e9 100644
--- a/libavcodec/h264dec.c
+++ b/libavcodec/h264dec.c
@@ -303,6 +303,7 @@ fail:
  static int h264_init_context(AVCodecContext *avctx, H264Context *h)
  {
  int i;
+int ret;
  
  h->avctx = avctx;

  h->cur_chroma_format_idc = -1;
@@ -337,22 +338,37 @@ static int h264_init_context(AVCodecContext *avctx, 
H264Context *h)
  
  for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) {

  h->DPB[i].f = av_frame_alloc();
-if (!h->DPB[i].f)
-return AVERROR(ENOMEM);
+if (!h->DPB[i].f) {
+ret = AVERROR(ENOMEM);
+goto fail;
+}
  }
  
  h->cur_pic.f = av_frame_alloc();

-if (!h->cur_pic.f)
-return AVERROR(ENOMEM);
+if (!h->cur_pic.f) {
+ret = AVERROR(ENOMEM);
+goto fail;
+}
  
  h->last_pic_for_ec.f = av_frame_alloc();

-if (!h->last_pic_for_ec.f)
-return AVERROR(ENOMEM);
+if (!h->last_pic_for_ec.f) {
+ret = AVERROR(ENOMEM);
+goto fail;
+}
  
  for (i = 0; i < h->nb_slice_ctx; i++)

  h->slice_ctx[i].h264 = h;
  
  return 0;

+
+fail:
+h->nb_slice_ctx = 0;
+av_freep(&h->slice_ctx);
+for (i = 0; i < H264_MAX_PICTURE_COUNT; i++) {
+av_frame_free(&h->DPB[i].f);
+}
+av_frame_free(&h->cur_pic.f);
+return ret;
  }
  
  static av_cold int h264_decode_end(AVCodecContext *avctx)






Re: [FFmpeg-devel] [PATCH] avfilter: add nvidia NPP based transpose filter

2018-09-10 Thread Timo Rothenpieler

applied





[FFmpeg-devel] [PATCH] avcodec/loco: switch to planar rgb format

2018-09-10 Thread Paul B Mahol
Remove now unused step variable.

Signed-off-by: Paul B Mahol 
---
 libavcodec/loco.c   | 69 -
 tests/ref/fate/loco-rgb | 10 +++---
 2 files changed, 39 insertions(+), 40 deletions(-)

diff --git a/libavcodec/loco.c b/libavcodec/loco.c
index 9d0f144451..f91d8709b0 100644
--- a/libavcodec/loco.c
+++ b/libavcodec/loco.c
@@ -114,19 +114,19 @@ static inline int loco_get_rice(RICEContext *r)
 }
 
 /* LOCO main predictor - LOCO-I/JPEG-LS predictor */
-static inline int loco_predict(uint8_t* data, int stride, int step)
+static inline int loco_predict(uint8_t* data, int stride)
 {
 int a, b, c;
 
 a = data[-stride];
-b = data[-step];
-c = data[-stride - step];
+b = data[-1];
+c = data[-stride - 1];
 
 return mid_pred(a, a + b - c, b);
 }
 
 static int loco_decode_plane(LOCOContext *l, uint8_t *data, int width, int 
height,
- int stride, const uint8_t *buf, int buf_size, int 
step)
+ int stride, const uint8_t *buf, int buf_size)
 {
 RICEContext rc;
 int val;
@@ -153,7 +153,7 @@ static int loco_decode_plane(LOCOContext *l, uint8_t *data, 
int width, int heigh
 /* restore top line */
 for (i = 1; i < width; i++) {
 val = loco_get_rice(&rc);
-data[i * step] = data[i * step - step] + val;
+data[i] = data[i - 1] + val;
 }
 data += stride;
 for (j = 1; j < height; j++) {
@@ -163,7 +163,7 @@ static int loco_decode_plane(LOCOContext *l, uint8_t *data, 
int width, int heigh
 /* restore all other pixels */
 for (i = 1; i < width; i++) {
 val = loco_get_rice(&rc);
-data[i * step] = loco_predict(&data[i * step], stride, step) + val;
+data[i] = loco_predict(&data[i], stride) + val;
 }
 data += stride;
 }
@@ -171,19 +171,18 @@ static int loco_decode_plane(LOCOContext *l, uint8_t 
*data, int width, int heigh
 return (get_bits_count(&rc.gb) + 7) >> 3;
 }
 
-static void rotate_faulty_loco(uint8_t *data, int width, int height, int 
stride, int step)
+static void rotate_faulty_loco(uint8_t *data, int width, int height, int 
stride)
 {
 int y;
 
 for (y=1; y<height; y++) {
 if (width>=y) {
 memmove(data + y*stride,
-data + y*(stride + step),
-step*(width-y));
+data + y*(stride + 1),
+(width-y));
 if (y+1 < height)
-memmove(data + y*stride + step*(width-y),
-data + (y+1)*stride,
-step*y);
+memmove(data + y*stride + (width-y),
+data + (y+1)*stride, y);
 }
 }
 }
@@ -209,49 +208,49 @@ static int decode_frame(AVCodecContext *avctx,
 switch(l->mode) {
 case LOCO_CYUY2: case LOCO_YUY2: case LOCO_UYVY:
 decoded = loco_decode_plane(l, p->data[0], avctx->width, avctx->height,
-p->linesize[0], buf, buf_size, 1);
+p->linesize[0], buf, buf_size);
 ADVANCE_BY_DECODED;
 decoded = loco_decode_plane(l, p->data[1], avctx->width / 2, 
avctx->height,
-p->linesize[1], buf, buf_size, 1);
+p->linesize[1], buf, buf_size);
 ADVANCE_BY_DECODED;
 decoded = loco_decode_plane(l, p->data[2], avctx->width / 2, 
avctx->height,
-p->linesize[2], buf, buf_size, 1);
+p->linesize[2], buf, buf_size);
 break;
 case LOCO_CYV12: case LOCO_YV12:
 decoded = loco_decode_plane(l, p->data[0], avctx->width, avctx->height,
-p->linesize[0], buf, buf_size, 1);
+p->linesize[0], buf, buf_size);
 ADVANCE_BY_DECODED;
 decoded = loco_decode_plane(l, p->data[2], avctx->width / 2, 
avctx->height / 2,
-p->linesize[2], buf, buf_size, 1);
+p->linesize[2], buf, buf_size);
 ADVANCE_BY_DECODED;
 decoded = loco_decode_plane(l, p->data[1], avctx->width / 2, 
avctx->height / 2,
-p->linesize[1], buf, buf_size, 1);
+p->linesize[1], buf, buf_size);
 break;
 case LOCO_CRGB: case LOCO_RGB:
-decoded = loco_decode_plane(l, p->data[0] + 
p->linesize[0]*(avctx->height-1), avctx->width, avctx->height,
--p->linesize[0], buf, buf_size, 3);
+decoded = loco_decode_plane(l, p->data[1] + 
p->linesize[1]*(avctx->height-1), avctx->width, avctx->height,
+-p->linesize[1], buf, buf_size);
 ADVANCE_BY_DECODED;
-decoded = loco_decode_plane(l, p->data[0] + 
p->linesize[0]*(avctx->height-1) + 1, avctx->width, avctx->height,
-