[FFmpeg-devel] Create MISB Video

2019-02-20 Thread Francisco José Raga López
Hi

Some time ago I developed an open-source alternative for the analysis of MISB
videos:

https://github.com/All4Gis/QGISFMV

and, together with a colleague, a parser that extracts the telemetry from MISB
videos using ffmpeg in Python:

https://github.com/paretech/klvdata

Now I would like to develop a multiplexer for the open-source community.

Starting from a video and a CSV file of telemetry, I would like to create a
MISB video that I can use in my project, but I do not know where to start, and
I think ffmpeg can do it.

Any ideas on how to start developing this functionality? Is there anyone who
would like to help?
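One possible starting point is generating the KLV wrapping itself. The SMPTE 336M key-length-value coding with BER lengths, which MISB ST 0601 telemetry uses, is simple to produce. The sketch below packs one KLV triplet; the 16-byte key used in the test is a placeholder, not the real ST 0601 universal key, and muxing the packets into MPEG-TS is a separate step:

```c
/* Minimal sketch of SMPTE 336M key-length-value (KLV) packing with BER
 * lengths, as used by MISB ST 0601 telemetry. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Encode a BER length: short form (one byte) for len < 128, otherwise
 * long form (0x80 | number of length bytes, then the length big-endian).
 * Returns the number of bytes written. */
static size_t ber_encode_length(uint8_t *out, size_t len)
{
    if (len < 128) {
        out[0] = (uint8_t)len;
        return 1;
    }
    uint8_t tmp[sizeof(size_t)];
    size_t n = 0;
    while (len) {                    /* collect length bytes, little-endian */
        tmp[n++] = (uint8_t)(len & 0xff);
        len >>= 8;
    }
    out[0] = 0x80 | (uint8_t)n;
    for (size_t i = 0; i < n; i++)   /* emit them big-endian */
        out[1 + i] = tmp[n - 1 - i];
    return 1 + n;
}

/* Pack one KLV triplet: 16-byte key, BER length, then the payload.
 * Returns the total number of bytes written to out. */
static size_t klv_pack(uint8_t *out, const uint8_t key[16],
                       const uint8_t *value, size_t value_len)
{
    memcpy(out, key, 16);
    size_t n = 16 + ber_encode_length(out + 16, value_len);
    memcpy(out + n, value, value_len);
    return n + value_len;
}
```

A muxer would then emit such packets as a KLV data stream alongside the video in an MPEG-TS container.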

regards

Francisco Raga | Full-Stack Open Source GIS Developer



Mobile: (+34) 654275432 | e-mail: franka1...@gmail.com | Skype:
francisco_raga
Github: https://goo.gl/ydNTjY | Linkedin: https://goo.gl/TCfj8S | Site:
https://goo.gl/qiypDj


"Real life has no map." (Ivy Compton-Burnett)
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avcodec/mips: [loongson] mmi optimizations for VP9 put and avg functions

2019-02-20 Thread gxw

> On Feb 21, 2019, at 9:55 AM, Shiyou Yin wrote:
> 
>> -----Original Message-----
>> From: ffmpeg-devel-boun...@ffmpeg.org [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of gxw
>> Sent: Tuesday, February 19, 2019 11:02 AM
>> To: ffmpeg-devel@ffmpeg.org
>> Cc: gxw
>> Subject: [FFmpeg-devel] [PATCH] avcodec/mips: [loongson] mmi optimizations for VP9 put and avg
>> functions
>> 
>> VP9 decoding speed improved by about 109.3% (from 32 fps to 67 fps, tested on
>> Loongson 3A3000).
>> ---
>> libavcodec/mips/Makefile   |   1 +
>> libavcodec/mips/vp9_mc_mmi.c   | 680 +
>> libavcodec/mips/vp9dsp_init_mips.c |  42 +++
>> libavcodec/mips/vp9dsp_mips.h  |  50 +++
>> 4 files changed, 773 insertions(+)
>> create mode 100644 libavcodec/mips/vp9_mc_mmi.c
>> 
>> diff --git a/libavcodec/mips/Makefile b/libavcodec/mips/Makefile
>> index c827649..c5b54d5 100644
>> --- a/libavcodec/mips/Makefile
>> +++ b/libavcodec/mips/Makefile
>> @@ -88,3 +88,4 @@ MMI-OBJS-$(CONFIG_VC1_DECODER)+= mips/vc1dsp_mmi.o
>> MMI-OBJS-$(CONFIG_WMV2DSP)+= mips/wmv2dsp_mmi.o
>> MMI-OBJS-$(CONFIG_HEVC_DECODER)   += mips/hevcdsp_mmi.o
>> MMI-OBJS-$(CONFIG_VP3DSP) += mips/vp3dsp_idct_mmi.o
>> +MMI-OBJS-$(CONFIG_VP9_DECODER)+= mips/vp9_mc_mmi.o
>> diff --git a/libavcodec/mips/vp9_mc_mmi.c b/libavcodec/mips/vp9_mc_mmi.c
>> new file mode 100644
>> index 000..145bbff
>> --- /dev/null
>> +++ b/libavcodec/mips/vp9_mc_mmi.c
>> @@ -0,0 +1,680 @@
>> +/*
>> + * Copyright (c) 2019 gxw 
>> + *
>> + * This file is part of FFmpeg.
>> + *
>> + * FFmpeg is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU Lesser General Public
>> + * License as published by the Free Software Foundation; either
>> + * version 2.1 of the License, or (at your option) any later version.
>> + *
>> + * FFmpeg is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> + * Lesser General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU Lesser General Public
>> + * License along with FFmpeg; if not, write to the Free Software
>> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>> + */
>> +
>> +#include "libavcodec/vp9dsp.h"
>> +#include "libavutil/mips/mmiutils.h"
>> +#include "vp9dsp_mips.h"
>> +
>> +#define GET_DATA_H_MMI   \
>> +"pmaddhw%[ftmp4],%[ftmp4],   %[filter1]\n\t" \
>> +"pmaddhw%[ftmp5],%[ftmp5],   %[filter2]\n\t" \
>> +"paddw  %[ftmp4],%[ftmp4],   %[ftmp5]  \n\t" \
>> +"punpckhwd  %[ftmp5],%[ftmp4],   %[ftmp0]  \n\t" \
>> +"paddw  %[ftmp4],%[ftmp4],   %[ftmp5]  \n\t" \
>> +"pmaddhw%[ftmp6],%[ftmp6],   %[filter1]\n\t" \
>> +"pmaddhw%[ftmp7],%[ftmp7],   %[filter2]\n\t" \
>> +"paddw  %[ftmp6],%[ftmp6],   %[ftmp7]  \n\t" \
>> +"punpckhwd  %[ftmp7],%[ftmp6],   %[ftmp0]  \n\t" \
>> +"paddw  %[ftmp6],%[ftmp6],   %[ftmp7]  \n\t" \
>> +"punpcklwd  %[srcl], %[ftmp4],   %[ftmp6]  \n\t" \
>> +"pmaddhw%[ftmp8],%[ftmp8],   %[filter1]\n\t" \
>> +"pmaddhw%[ftmp9],%[ftmp9],   %[filter2]\n\t" \
>> +"paddw  %[ftmp8],%[ftmp8],   %[ftmp9]  \n\t" \
>> +"punpckhwd  %[ftmp9],%[ftmp8],   %[ftmp0]  \n\t" \
>> +"paddw  %[ftmp8],%[ftmp8],   %[ftmp9]  \n\t" \
>> +"pmaddhw%[ftmp10],   %[ftmp10],  %[filter1]\n\t" \
>> +"pmaddhw%[ftmp11],   %[ftmp11],  %[filter2]\n\t" \
>> +"paddw  %[ftmp10],   %[ftmp10],  %[ftmp11] \n\t" \
>> +"punpckhwd  %[ftmp11],   %[ftmp10],  %[ftmp0]  \n\t" \
>> +"paddw  %[ftmp10],   %[ftmp10],  %[ftmp11] \n\t" \
>> +"punpcklwd  %[srch], %[ftmp8],   %[ftmp10] \n\t"
>> +
>> +#define GET_DATA_V_MMI   \
>> +"punpcklhw  %[srcl], %[ftmp4],   %[ftmp5]  \n\t" \
>> +"pmaddhw%[srcl], %[srcl],%[filter10]   \n\t" \
>> +"punpcklhw  %[ftmp12],   %[ftmp6],   %[ftmp7]  \n\t" \
>> +"pmaddhw%[ftmp12],   %[ftmp12],  %[filter32]   \n\t" \
>> +"paddw  %[srcl], %[srcl],%[ftmp12] \n\t" \
>> +"punpcklhw  %[ftmp12],   %[ftmp8],   %[ftmp9]  \n\t" \
>> +"pmaddhw%[ftmp12],   %[ftmp12],  %[filter54]   \n\t" \
>> +"paddw  %[srcl], %[srcl],%[ftmp12] \n\t" \
>> +"punpcklhw  %[ftmp12],   %[ftmp10],  %[ftmp11] \n\t" \
>> +"pmaddhw%[ftmp12],   %[ftmp12],  %[filter76]   \n\t" \
>> +"paddw  %[srcl], %[srcl],%[ftmp12] \n\t" \
>> 

[FFmpeg-devel] [PATCH] avformat/avformat.h: update the comment from deprecated to new API

2019-02-20 Thread Steven Liu
Signed-off-by: Steven Liu 
---
 libavformat/avformat.h | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/libavformat/avformat.h b/libavformat/avformat.h
index fdaffa5bf4..12cc8387ed 100644
--- a/libavformat/avformat.h
+++ b/libavformat/avformat.h
@@ -36,15 +36,14 @@
  * into component streams, and the reverse process of muxing - writing supplied
  * data in a specified container format. It also has an @ref lavf_io
  * "I/O module" which supports a number of protocols for accessing the data 
(e.g.
- * file, tcp, http and others). Before using lavf, you need to call
- * av_register_all() to register all compiled muxers, demuxers and protocols.
+ * file, tcp, http and others).
  * Unless you are absolutely sure you won't use libavformat's network
  * capabilities, you should also call avformat_network_init().
  *
  * A supported input format is described by an AVInputFormat struct, conversely
  * an output format is described by AVOutputFormat. You can iterate over all
- * registered input/output formats using the av_iformat_next() /
- * av_oformat_next() functions. The protocols layer is not part of the public
+ * input/output formats using the av_demuxer_iterate() /
+ * av_muxer_iterate() functions. The protocols layer is not part of the public
  * API, so you can only get the names of supported protocols with the
  * avio_enum_protocols() function.
  *
-- 
2.15.2 (Apple Git-101.1)





[FFmpeg-devel] [PATCH v2 2/2] avformat/dashenc: Added comments

2019-02-20 Thread Karthick J
Added comments regarding usage of certain movflags in streaming mode.
---
 libavformat/dashenc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
index a0b44a0ec3..c5e882f4ae 100644
--- a/libavformat/dashenc.c
+++ b/libavformat/dashenc.c
@@ -1216,6 +1216,9 @@ static int dash_init(AVFormatContext *s)
 
 if (os->segment_type == SEGMENT_TYPE_MP4) {
 if (c->streaming)
+// frag_every_frame : Allows lower latency streaming
+// skip_sidx : Reduce bitrate overhead
+// skip_trailer : Avoids growing memory usage with time
av_dict_set(&opts, "movflags", "frag_every_frame+dash+delay_moov+skip_sidx+skip_trailer", 0);
 else
 av_dict_set(&opts, "movflags", "frag_custom+dash+delay_moov", 0);
-- 
2.17.1 (Apple Git-112)



Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Li, Zhong


> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Li, Zhong
> Sent: Thursday, February 21, 2019 2:01 PM
> To: Rogozhkin, Dmitry V ;
> ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder
> support
> 
> > > @@ -90,6 +90,17 @@ static av_cold int
> qsv_decode_init(AVCodecContext
> > > *avctx)
> > >  }
> > >  #endif
> > >
> > > +#if CONFIG_VP9_QSV_DECODER
> > > +if (avctx->codec_id == AV_CODEC_ID_VP9) {
> > > +static const char *uid_vp9dec_hw =
> > > "a922394d8d87452f878c51f2fc9b4131";
> >
> > Should not be actually needed (and I hope it will work:)). VP9 hw
> > plugin is actually a tiny compatibility stub which redirects
> > everything to the mediasdk library.  Considering that you just add VP9
> > decoding support you don't need to care about compatibility (I hope).
> > Hence, you can try to just initialize VP9 decoder directly from the mediasdk
> library as you are doing for AVC decoder.
> 
> Good point. But my question is: will it be broken for the case of "the
> latest ffmpeg + an old version of MSDK"?
> That means:
> 1. Starting from the version of MSDK that supports VP9 decoding, the hw
> plugin is not needed.

Sorry for the typo: "version" should be "first version", i.e. starting from
when VP9 decoding was first enabled.

> 2. Or we don't care about compatibility for "the latest ffmpeg + an old
> version of MSDK", and users should update MSDK.



Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Li, Zhong
> > @@ -90,6 +90,17 @@ static av_cold int qsv_decode_init(AVCodecContext
> > *avctx)
> >  }
> >  #endif
> >
> > +#if CONFIG_VP9_QSV_DECODER
> > +if (avctx->codec_id == AV_CODEC_ID_VP9) {
> > +static const char *uid_vp9dec_hw =
> > "a922394d8d87452f878c51f2fc9b4131";
> 
> Should not be actually needed (and I hope it will work:)). VP9 hw plugin is
> actually a tiny compatibility stub which redirects everything to the mediasdk
> library.  Considering that you just add VP9 decoding support you don't need
> to care about compatibility (I hope). Hence, you can try to just initialize 
> VP9
> decoder directly from the mediasdk library as you are doing for AVC decoder.

Good point. But my question is: will it be broken for the case of "the latest
ffmpeg + an old version of MSDK"?
That means:
1. Starting from the version of MSDK that supports VP9 decoding, the hw plugin
is not needed.
2. Or we don't care about compatibility for "the latest ffmpeg + an old
version of MSDK", and users should update MSDK.

If it is case 1, I am quite happy to remove the vp9 hw plugin code. If it is
case 2, I would say I can't agree.
What do you think?


Re: [FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()

2019-02-20 Thread Li, Zhong
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Mark Thompson
> Sent: Thursday, February 21, 2019 5:32 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current
> parser with MFXVideoDECODE_DecodeHeader()
> 
> On 20/02/2019 02:58, Zhong Li wrote:
> > Using MSDK parser can improve qsv decoder pass rate in some cases (E.g:
> > sps declares a wrong level_idc, smaller than it should be).
> > And it is necessary for adding new qsv decoders such as MJPEG and VP9
> > since current parser can't provide enough information.
>
> Can you explain the problem with level_idc?  Why would the libmfx parser
> determine a different answer?

The detailed discussion is here:
https://github.com/Intel-Media-SDK/MediaSDK/issues/582
"Some clips declare a wrong level_idc, smaller than it should be": for example,
a clip declares level_idc = 1.0, but other SPS/PPS parameters such as the
resolution or the number of reference frames are beyond that level's limits.
I believe this is a very common issue; many clips don't declare a correct
level, and they should still be decoded, with the decoder detecting the error.
Currently the internal parser just reads what level_idc is; there is no error
handling, which makes MSDK report a decoding error.
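To illustrate the kind of consistency check being discussed, here is a sketch that verifies whether a declared H.264 level can actually hold the coded resolution. The MaxFS values (in macroblocks) come from H.264 Annex A for a few levels only; a real implementation would cover the full table and further constraints such as the DPB size:

```c
/* Sketch: sanity-check a declared H.264 level against the coded resolution.
 * MaxFS values (macroblocks per frame) are from H.264 Annex A, table A-1,
 * for a small subset of levels. */
#include <assert.h>

struct level_limit { int level_idc; int max_fs; };

static const struct level_limit limits[] = {
    { 10,    99 },   /* level 1.0 */
    { 30,  1620 },   /* level 3.0 */
    { 31,  3600 },   /* level 3.1 */
    { 40,  8192 },   /* level 4.0 */
    { 51, 36864 },   /* level 5.1 */
};

/* Return 1 if the declared level is plausible for the given resolution,
 * 0 if the frame is too large for it. Unknown levels are not rejected. */
static int level_fits(int level_idc, int width, int height)
{
    int frame_mbs = ((width + 15) / 16) * ((height + 15) / 16);
    for (unsigned i = 0; i < sizeof(limits) / sizeof(limits[0]); i++)
        if (limits[i].level_idc == level_idc)
            return frame_mbs <= limits[i].max_fs;
    return 1;
}
```

A 1080p clip that declares level 1.0 would fail this check, which is exactly the "wrong level_idc" situation described above.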

> Given that you need the current parser anyway (see previous mail), it would
> likely be more useful to extend it to supply any information which is missing.
> 
> > Actually using MFXVideoDECODE_DecodeHeader() was disscussed at
> > https://ffmpeg.org/pipermail/ffmpeg-devel/2015-July/175734.html and
> > merged as commit 1acb19d, but was overwritten when merged libav
> patches (commit: 1f26a23) without any explain.
> 
> I'm not sure where the explanation for this went; maybe it was only
> discussed on IRC.
> 
> The reason for using the internal parsers is that you need the information
> before libmfx is initialized at all in the hw_frames_ctx case (i.e. before the
> get_format callback which will supply the hardware context information),
> and once you require that anyway there isn't much point in parsing things
> twice for the same information.

As I see it, only limited information is needed before initializing a libmfx
session (and we must initialize a libmfx session before calling
MFXVideoDECODE_DecodeHeader()): the resolution and the pix_fmt.
As you can see from my current implementation, I don't call the internal
parser before initializing the session. We assume a resolution (it may be
provided by libavformat, but that is not guaranteed, as Hendrik commented) and
a pix_fmt (we assume NV12), and correct them after
MFXVideoDECODE_DecodeHeader().
That probably means we need to initialize the session twice for the first
decoding call (e.g. for HEVC/VP9 10-bit clips, or when the resolution is not
provided by e.g. libavformat), but that only happens on the first call (if the
header information doesn't change afterwards) and only when the assumed
resolution/pix_fmt turns out to be wrong.
I think that is more efficient than parsing twice for every decoding call, and
it is not a workaround, since we still need to handle resolution/pix_fmt
changes anyway.

> It's probably fine to parse it twice if you want, but the two cases really
> should be returning the same information.


Re: [FFmpeg-devel] [PATCH v2 4/6] lavc/qsvdec: remove orignal parser code since not needed now

2019-02-20 Thread Li, Zhong
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Mark Thompson
> Sent: Thursday, February 21, 2019 5:23 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v2 4/6] lavc/qsvdec: remove orignal
> parser code since not needed now
> 
> On 20/02/2019 02:58, Zhong Li wrote:
> > Signed-off-by: Zhong Li 
> > ---
> >  configure   | 10 +-
> >  libavcodec/qsvdec.c | 16 +---  libavcodec/qsvdec.h |  2
> > --
> >  3 files changed, 6 insertions(+), 22 deletions(-)
> 
> You can't remove this, it's still needed - the stream properties must be
> determined before the get_format() callback.
> 
> Similarly, you will need to extend the VP9 parser to return the relevant
> information for the following patch so that it works in general rather than
> only in cases where the user can supply it externally.  It should be quite
> straightforward; see 182cf170a544bce069c8690c90b49381150a1f10.
> 
> - Mark

There is something different for VP9 compared to VP8.
The VP9 frame resolution can't always be obtained from the bitstream; it may
be inherited from reference frames, see:
https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/vp9.c#L565
If header parsing is separated from the decoding process, it becomes a problem
how to obtain the reference list information.
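A simplified, hypothetical illustration of the problem: when a VP9 inter frame does not code its own size, the size is inherited from a reference frame's saved state, so a standalone header parser that never tracks decoder state cannot always report the resolution. The struct layout below is invented for illustration, not VP9's actual syntax:

```c
/* Hypothetical sketch of VP9-style frame size resolution: a frame header
 * either codes its own size or inherits it from a reference slot. */
#include <assert.h>

typedef struct RefFrame { int width, height; } RefFrame;

typedef struct FrameHeader {
    int has_size;        /* size coded in this frame's header? */
    int width, height;   /* valid only if has_size is set */
    int ref_idx;         /* reference slot to inherit from otherwise */
} FrameHeader;

static void resolve_frame_size(const FrameHeader *hdr, const RefFrame refs[8],
                               int *w, int *h)
{
    if (hdr->has_size) {
        *w = hdr->width;
        *h = hdr->height;
    } else {
        /* The resolution lives in decoder state, not in this header:
         * a parser without the reference list cannot compute it. */
        *w = refs[hdr->ref_idx].width;
        *h = refs[hdr->ref_idx].height;
    }
}
```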


Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Li, Zhong
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Carl Eugen Hoyos
> Sent: Thursday, February 21, 2019 8:23 AM
> To: FFmpeg development discussions and patches
> 
> Subject: Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder
> support
> 
> 2019-02-20 3:58 GMT+01:00, Zhong Li :
> > VP9 decoder is supported on Intel kabyLake+ platforms with MSDK
> > Version 1.19+
> 
> > diff --git a/Changelog b/Changelog
> > index f289812bfc..141ffd9610 100644
> > --- a/Changelog
> > +++ b/Changelog
> > @@ -20,6 +20,7 @@ version :
> >  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
> >  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
> 
> >  - Intel QSV-accelerated MJPEG decoding
> > +- Intel QSV-accelerated VP9 decoding
> 
> Please merge these lines.
> 
> Carl Eugen

Ok, will update. Thanks. 


[FFmpeg-devel] [PATCH 0/5] Clean up CUDA SDK usage and remove non-free requirement

2019-02-20 Thread Philip Langdale
I've been thinking about this for a while, but I only recently made the
realisation that compiling cuda kernels to the ptx format does not
introduce any non-free dependencies - the ptx files are an intermediate
assembly code format that is actually compiled to binary form at
runtime. With that understood, we just need to switch the remaining
users of the CUDA SDK to ffnvcodec and we will remove the non-free
entanglements from cuda.

Philip Langdale (5):
  configure: Add an explicit check and option for nvcc
  avfilter/vf_yadif_cuda: Switch to using ffnvcodec
  avfilter/vf_scale_cuda: Switch to using ffnvcodec
  avfilter/vf_thumbnail_cuda: Switch to using ffnvcodec
  configure: Remove cuda_sdk dependency option

 configure|  36 ++-
 libavfilter/vf_scale_cuda.c  | 168 +--
 libavfilter/vf_scale_cuda.cu |  73 +++---
 libavfilter/vf_thumbnail_cuda.c  | 147 +++
 libavfilter/vf_thumbnail_cuda.cu |  25 +++--
 libavfilter/vf_yadif_cuda.c  |  58 ++-
 6 files changed, 281 insertions(+), 226 deletions(-)

-- 
2.19.1



[FFmpeg-devel] [PATCH 2/5] avfilter/vf_yadif_cuda: Switch to using ffnvcodec

2019-02-20 Thread Philip Langdale
This change switches the vf_yadif_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.

Signed-off-by: Philip Langdale 
---
 configure   |  2 +-
 libavfilter/vf_yadif_cuda.c | 58 -
 2 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/configure b/configure
index 2219eb1515..a2890dc171 100755
--- a/configure
+++ b/configure
@@ -3526,7 +3526,7 @@ zscale_filter_deps="libzimg const_nan"
 scale_vaapi_filter_deps="vaapi"
 vpp_qsv_filter_deps="libmfx"
 vpp_qsv_filter_select="qsvvpp"
-yadif_cuda_filter_deps="cuda_sdk"
+yadif_cuda_filter_deps="ffnvcodec cuda_nvcc"
 
 # examples
 avio_dir_cmd_deps="avformat avutil"
diff --git a/libavfilter/vf_yadif_cuda.c b/libavfilter/vf_yadif_cuda.c
index 85e1aac5eb..141dcb17f7 100644
--- a/libavfilter/vf_yadif_cuda.c
+++ b/libavfilter/vf_yadif_cuda.c
@@ -18,9 +18,8 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
-#include 
 #include "libavutil/avassert.h"
-#include "libavutil/hwcontext_cuda.h"
+#include "libavutil/hwcontext_cuda_internal.h"
 #include "libavutil/cuda_check.h"
 #include "internal.h"
 #include "yadif.h"
@@ -49,7 +48,7 @@ typedef struct DeintCUDAContext {
 #define BLOCKX 32
 #define BLOCKY 16
 
-#define CHECK_CU(x) FF_CUDA_CHECK(ctx, x)
+#define CHECK_CU(x) FF_CUDA_CHECK_DL(ctx, s->hwctx->internal->cuda_dl, x)
 
 static CUresult call_kernel(AVFilterContext *ctx, CUfunction func,
 CUdeviceptr prev, CUdeviceptr cur, CUdeviceptr next,
@@ -64,6 +63,7 @@ static CUresult call_kernel(AVFilterContext *ctx, CUfunction func,
 int parity, int tff)
 {
 DeintCUDAContext *s = ctx->priv;
+CudaFunctions *cu = s->hwctx->internal->cuda_dl;
 CUtexObject tex_prev = 0, tex_cur = 0, tex_next = 0;
 int ret;
 int skip_spatial_check = s->yadif.mode&2;
@@ -88,32 +88,32 @@ static CUresult call_kernel(AVFilterContext *ctx, CUfunction func,
 };
 
 res_desc.res.pitch2D.devPtr = (CUdeviceptr)prev;
-ret = CHECK_CU(cuTexObjectCreate(&tex_prev, &res_desc, &tex_desc, NULL));
+ret = CHECK_CU(cu->cuTexObjectCreate(&tex_prev, &res_desc, &tex_desc, NULL));
 if (ret < 0)
 goto exit;
 
 res_desc.res.pitch2D.devPtr = (CUdeviceptr)cur;
-ret = CHECK_CU(cuTexObjectCreate(&tex_cur, &res_desc, &tex_desc, NULL));
+ret = CHECK_CU(cu->cuTexObjectCreate(&tex_cur, &res_desc, &tex_desc, NULL));
 if (ret < 0)
 goto exit;
 
 res_desc.res.pitch2D.devPtr = (CUdeviceptr)next;
-ret = CHECK_CU(cuTexObjectCreate(&tex_next, &res_desc, &tex_desc, NULL));
+ret = CHECK_CU(cu->cuTexObjectCreate(&tex_next, &res_desc, &tex_desc, NULL));
 if (ret < 0)
 goto exit;
 
-ret = CHECK_CU(cuLaunchKernel(func,
-  DIV_UP(dst_width, BLOCKX), DIV_UP(dst_height, BLOCKY), 1,
-  BLOCKX, BLOCKY, 1,
-  0, s->stream, args, NULL));
+ret = CHECK_CU(cu->cuLaunchKernel(func,
+  DIV_UP(dst_width, BLOCKX), DIV_UP(dst_height, BLOCKY), 1,
+  BLOCKX, BLOCKY, 1,
+  0, s->stream, args, NULL));
 
 exit:
 if (tex_prev)
-CHECK_CU(cuTexObjectDestroy(tex_prev));
+CHECK_CU(cu->cuTexObjectDestroy(tex_prev));
 if (tex_cur)
-CHECK_CU(cuTexObjectDestroy(tex_cur));
+CHECK_CU(cu->cuTexObjectDestroy(tex_cur));
 if (tex_next)
-CHECK_CU(cuTexObjectDestroy(tex_next));
+CHECK_CU(cu->cuTexObjectDestroy(tex_next));
 
 return ret;
 }
@@ -123,10 +123,11 @@ static void filter(AVFilterContext *ctx, AVFrame *dst,
 {
 DeintCUDAContext *s = ctx->priv;
YADIFContext *y = &s->yadif;
+CudaFunctions *cu = s->hwctx->internal->cuda_dl;
 CUcontext dummy;
 int i, ret;
 
-ret = CHECK_CU(cuCtxPushCurrent(s->cu_ctx));
+ret = CHECK_CU(cu->cuCtxPushCurrent(s->cu_ctx));
 if (ret < 0)
 return;
 
@@ -179,10 +180,10 @@ static void filter(AVFilterContext *ctx, AVFrame *dst,
 parity, tff);
 }
 
-CHECK_CU(cuStreamSynchronize(s->stream));
+CHECK_CU(cu->cuStreamSynchronize(s->stream));
 
 exit:
-CHECK_CU(cuCtxPopCurrent(&dummy));
+CHECK_CU(cu->cuCtxPopCurrent(&dummy));
 return;
 }
 
@@ -192,10 +193,11 @@ static av_cold void deint_cuda_uninit(AVFilterContext *ctx)
 DeintCUDAContext *s = ctx->priv;
YADIFContext *y = &s->yadif;
 
-if (s->cu_module) {
-CHECK_CU(cuCtxPushCurrent(s->cu_ctx));
-CHECK_CU(cuModuleUnload(s->cu_module));
-CHECK_CU(cuCtxPopCurrent(&dummy));
+if (s->hwctx && s->cu_module) {
+CudaFunctions *cu = s->hwctx->internal->cuda_dl;
+CHECK_CU(cu->cuCtxPushCurrent(s->cu_ctx));
+CHECK_CU(cu->cuModuleUnload(s->cu_module));
+CHECK_CU(cu->cuCtxPopCurrent(&dummy));
 }
 
 av_frame_free(&y->prev);
@@ -253,6 +255,7 @@ static int config_output(AVFilterLink *link)
 

[FFmpeg-devel] [PATCH 5/5] configure: Remove cuda_sdk dependency option

2019-02-20 Thread Philip Langdale
With all of our existing users of cuda_sdk switched over to ffnvcodec, we can
remove cuda_sdk completely. Going forward, we should no longer add code that
requires the full sdk; such code should instead use only ffnvcodec, avoiding
any non-free complications.

As discussed previously, the use of nvcc from the sdk is still supported with
a distinct option.

Signed-off-by: Philip Langdale 
---
 configure | 2 --
 1 file changed, 2 deletions(-)

diff --git a/configure b/configure
index 31576350bd..78fcc2e1eb 100755
--- a/configure
+++ b/configure
@@ -1819,7 +1819,6 @@ EXTRALIBS_LIST="
 "
 
 HWACCEL_LIBRARY_NONFREE_LIST="
-cuda_sdk
 libnpp
 "
 
@@ -6101,7 +6100,6 @@ for func in $COMPLEX_FUNCS; do
 done
 
 # these are off by default, so fail if requested and not available
-enabled cuda_sdk  && require cuda_sdk cuda.h cuCtxCreate -lcuda
 enabled chromaprint   && require chromaprint chromaprint.h chromaprint_get_version -lchromaprint
 enabled decklink  && { require_headers DeckLinkAPI.h && { test_cpp_condition DeckLinkAPIVersion.h "BLACKMAGIC_DECKLINK_API_VERSION >= 0x0a090500" || die "ERROR: Decklink API version must be >= 10.9.5."; } }
-- 
2.19.1



[FFmpeg-devel] [PATCH 3/5] avfilter/vf_scale_cuda: Switch to using ffnvcodec

2019-02-20 Thread Philip Langdale
This change switches the vf_scale_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.

Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.

Signed-off-by: Philip Langdale 
---
 configure|   2 +-
 libavfilter/vf_scale_cuda.c  | 168 +++
 libavfilter/vf_scale_cuda.cu |  73 ---
 3 files changed, 128 insertions(+), 115 deletions(-)

diff --git a/configure b/configure
index a2890dc171..57098149f9 100755
--- a/configure
+++ b/configure
@@ -2966,7 +2966,7 @@ v4l2_m2m_deps="linux_videodev2_h sem_timedwait"
 
 hwupload_cuda_filter_deps="ffnvcodec"
 scale_npp_filter_deps="ffnvcodec libnpp"
-scale_cuda_filter_deps="cuda_sdk"
+scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
 thumbnail_cuda_filter_deps="cuda_sdk"
 transpose_npp_filter_deps="ffnvcodec libnpp"
 
diff --git a/libavfilter/vf_scale_cuda.c b/libavfilter/vf_scale_cuda.c
index 53b7aa9531..c97a802ddc 100644
--- a/libavfilter/vf_scale_cuda.c
+++ b/libavfilter/vf_scale_cuda.c
@@ -20,14 +20,13 @@
 * DEALINGS IN THE SOFTWARE.
 */
 
-#include 
 #include 
 #include 
 
 #include "libavutil/avstring.h"
 #include "libavutil/common.h"
 #include "libavutil/hwcontext.h"
-#include "libavutil/hwcontext_cuda.h"
+#include "libavutil/hwcontext_cuda_internal.h"
 #include "libavutil/cuda_check.h"
 #include "libavutil/internal.h"
 #include "libavutil/opt.h"
@@ -53,10 +52,13 @@ static const enum AVPixelFormat supported_formats[] = {
 #define BLOCKX 32
 #define BLOCKY 16
 
-#define CHECK_CU(x) FF_CUDA_CHECK(ctx, x)
+#define CHECK_CU(x) FF_CUDA_CHECK_DL(ctx, s->hwctx->internal->cuda_dl, x)
 
 typedef struct CUDAScaleContext {
 const AVClass *class;
+
+AVCUDADeviceContext *hwctx;
+
 enum AVPixelFormat in_fmt;
 enum AVPixelFormat out_fmt;
 
@@ -80,7 +82,6 @@ typedef struct CUDAScaleContext {
 char *h_expr;   ///< height expression string
 
 CUcontext   cu_ctx;
-CUevent cu_event;
 CUmodulecu_module;
 CUfunction  cu_func_uchar;
 CUfunction  cu_func_uchar2;
@@ -88,12 +89,7 @@ typedef struct CUDAScaleContext {
 CUfunction  cu_func_ushort;
 CUfunction  cu_func_ushort2;
 CUfunction  cu_func_ushort4;
-CUtexrefcu_tex_uchar;
-CUtexrefcu_tex_uchar2;
-CUtexrefcu_tex_uchar4;
-CUtexrefcu_tex_ushort;
-CUtexrefcu_tex_ushort2;
-CUtexrefcu_tex_ushort4;
+CUstreamcu_stream;
 
 CUdeviceptr srcBuffer;
 CUdeviceptr dstBuffer;
@@ -258,48 +254,49 @@ static av_cold int cudascale_config_props(AVFilterLink *outlink)
 AVHWFramesContext *frames_ctx = (AVHWFramesContext*)inlink->hw_frames_ctx->data;
 AVCUDADeviceContext *device_hwctx = frames_ctx->device_ctx->hwctx;
 CUcontext dummy, cuda_ctx = device_hwctx->cuda_ctx;
+CudaFunctions *cu = device_hwctx->internal->cuda_dl;
 int w, h;
 int ret;
 
 extern char vf_scale_cuda_ptx[];
 
-ret = CHECK_CU(cuCtxPushCurrent(cuda_ctx));
+s->hwctx = device_hwctx;
+s->cu_stream = s->hwctx->stream;
+
+ret = CHECK_CU(cu->cuCtxPushCurrent(cuda_ctx));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleLoadData(&s->cu_module, vf_scale_cuda_ptx));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_uchar, s->cu_module, "Subsample_Bilinear_uchar"));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_uchar2, s->cu_module, "Subsample_Bilinear_uchar2"));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_uchar4, s->cu_module, "Subsample_Bilinear_uchar4"));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_ushort, s->cu_module, "Subsample_Bilinear_ushort"));
 if (ret < 0)
 goto fail;
 
-ret = CHECK_CU(cuModuleLoadData(&s->cu_module, vf_scale_cuda_ptx));
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_ushort2, s->cu_module, "Subsample_Bilinear_ushort2"));
+if (ret < 0)
+goto fail;
+
+ret = CHECK_CU(cu->cuModuleGetFunction(&s->cu_func_ushort4, s->cu_module, "Subsample_Bilinear_ushort4"));
 if (ret < 0)
 goto fail;
 
-CHECK_CU(cuModuleGetFunction(&s->cu_func_uchar, s->cu_module, "Subsample_Bilinear_uchar"));
-CHECK_CU(cuModuleGetFunction(&s->cu_func_uchar2, s->cu_module, "Subsample_Bilinear_uchar2"));
-CHECK_CU(cuModuleGetFunction(&s->cu_func_uchar4, s->cu_module, "Subsample_Bilinear_uchar4"));
-CHECK_CU(cuModuleGetFunction(&s->cu_func_ushort, s->cu_module, "Subsample_Bilinear_ushort"));
-CHECK_CU(cuModuleGetFunction(&s->cu_func_ushort2, s->cu_module, "Subsample_Bilinear_ushort2"));
-CHECK_CU(cuModuleGetFunction(&s->cu_func_ushort4, s->cu_module, "Subsample_Bilinear_ushort4"));
-
-CHECK_CU(cuModuleGetTexRef(&s->cu_tex_uchar, 

[FFmpeg-devel] [PATCH 4/5] avfilter/vf_thumbnail_cuda: Switch to using ffnvcodec

2019-02-20 Thread Philip Langdale
This change switches the vf_thumbnail_cuda filter from using the
full cuda sdk to using the ffnvcodec headers and loader.

Most of the change is a direct mapping, but I also switched from
using texture references to using texture objects. This is supposed
to be the preferred way of using textures, and the texture object API
is the one I added to ffnvcodec.

Signed-off-by: Philip Langdale 
---
 configure|   2 +-
 libavfilter/vf_thumbnail_cuda.c  | 147 +--
 libavfilter/vf_thumbnail_cuda.cu |  25 +++---
 3 files changed, 93 insertions(+), 81 deletions(-)

diff --git a/configure b/configure
index 57098149f9..31576350bd 100755
--- a/configure
+++ b/configure
@@ -2967,7 +2967,7 @@ v4l2_m2m_deps="linux_videodev2_h sem_timedwait"
 hwupload_cuda_filter_deps="ffnvcodec"
 scale_npp_filter_deps="ffnvcodec libnpp"
 scale_cuda_filter_deps="ffnvcodec cuda_nvcc"
-thumbnail_cuda_filter_deps="cuda_sdk"
+thumbnail_cuda_filter_deps="ffnvcodec cuda_nvcc"
 transpose_npp_filter_deps="ffnvcodec libnpp"
 
 amf_deps_any="libdl LoadLibrary"
diff --git a/libavfilter/vf_thumbnail_cuda.c b/libavfilter/vf_thumbnail_cuda.c
index 22691e156f..0c06815643 100644
--- a/libavfilter/vf_thumbnail_cuda.c
+++ b/libavfilter/vf_thumbnail_cuda.c
@@ -20,10 +20,8 @@
 * DEALINGS IN THE SOFTWARE.
 */
 
-#include 
-
 #include "libavutil/hwcontext.h"
-#include "libavutil/hwcontext_cuda.h"
+#include "libavutil/hwcontext_cuda_internal.h"
 #include "libavutil/cuda_check.h"
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
@@ -31,7 +29,7 @@
 #include "avfilter.h"
 #include "internal.h"
 
-#define CHECK_CU(x) FF_CUDA_CHECK(ctx, x)
+#define CHECK_CU(x) FF_CUDA_CHECK_DL(ctx, s->hwctx->internal->cuda_dl, x)
 
 #define HIST_SIZE (3*256)
 #define DIV_UP(a, b) ( ((a) + (b) - 1) / (b) )
@@ -60,6 +58,7 @@ typedef struct ThumbnailCudaContext {
 AVRational tb;  ///< copy of the input timebase to ease access
 
 AVBufferRef *hw_frames_ctx;
+AVCUDADeviceContext *hwctx;
 
 CUmodulecu_module;
 
@@ -67,12 +66,10 @@ typedef struct ThumbnailCudaContext {
 CUfunction  cu_func_uchar2;
 CUfunction  cu_func_ushort;
 CUfunction  cu_func_ushort2;
-CUtexrefcu_tex_uchar;
-CUtexrefcu_tex_uchar2;
-CUtexrefcu_tex_ushort;
-CUtexrefcu_tex_ushort2;
+CUstreamcu_stream;
 
 CUdeviceptr data;
+
 } ThumbnailCudaContext;
 
 #define OFFSET(x) offsetof(ThumbnailCudaContext, x)
@@ -157,29 +154,44 @@ static AVFrame *get_best_frame(AVFilterContext *ctx)
 return picref;
 }
 
-static int thumbnail_kernel(ThumbnailCudaContext *ctx, CUfunction func, CUtexref tex, int channels,
+static int thumbnail_kernel(AVFilterContext *ctx, CUfunction func, int channels,
 int *histogram, uint8_t *src_dptr, int src_width, int src_height, int src_pitch, int pixel_size)
 {
-CUdeviceptr src_devptr = (CUdeviceptr)src_dptr;
-void *args[] = { &histogram, &src_width, &src_height };
-CUDA_ARRAY_DESCRIPTOR desc;
-
-desc.Width = src_width;
-desc.Height = src_height;
-desc.NumChannels = channels;
-if (pixel_size == 1) {
-desc.Format = CU_AD_FORMAT_UNSIGNED_INT8;
-}
-else {
-desc.Format = CU_AD_FORMAT_UNSIGNED_INT16;
-}
+int ret;
+ThumbnailCudaContext *s = ctx->priv;
+CudaFunctions *cu = s->hwctx->internal->cuda_dl;
+CUtexObject tex = 0;
+void *args[] = { &tex, &histogram, &src_width, &src_height };
 
-CHECK_CU(cuTexRefSetAddress2D_v3(tex, &desc, src_devptr, src_pitch));
-CHECK_CU(cuLaunchKernel(func,
-DIV_UP(src_width, BLOCKX), DIV_UP(src_height, BLOCKY), 1,
-BLOCKX, BLOCKY, 1, 0, 0, args, NULL));
+CUDA_TEXTURE_DESC tex_desc = {
+.filterMode = CU_TR_FILTER_MODE_LINEAR,
+.flags = CU_TRSF_READ_AS_INTEGER,
+};
 
-return 0;
+CUDA_RESOURCE_DESC res_desc = {
+.resType = CU_RESOURCE_TYPE_PITCH2D,
+.res.pitch2D.format = pixel_size == 1 ?
+  CU_AD_FORMAT_UNSIGNED_INT8 :
+  CU_AD_FORMAT_UNSIGNED_INT16,
+.res.pitch2D.numChannels = channels,
+.res.pitch2D.width = src_width,
+.res.pitch2D.height = src_height,
+.res.pitch2D.pitchInBytes = src_pitch,
+.res.pitch2D.devPtr = (CUdeviceptr)src_dptr,
+};
+
+ret = CHECK_CU(cu->cuTexObjectCreate(&tex, &res_desc, &tex_desc, NULL));
+if (ret < 0)
+goto exit;
+
+ret = CHECK_CU(cu->cuLaunchKernel(func,
+  DIV_UP(src_width, BLOCKX), DIV_UP(src_height, BLOCKY), 1,
+  BLOCKX, BLOCKY, 1, 0, s->cu_stream, args, NULL));
+exit:
+if (tex)
+CHECK_CU(cu->cuTexObjectDestroy(tex));
+
+return ret;
 }
 
 static int thumbnail(AVFilterContext *ctx, int *histogram, AVFrame *in)
@@ -189,40 +201,40 @@ static int thumbnail(AVFilterContext *ctx, int *histogram, AVFrame *in)
 
 switch (in_frames_ctx->sw_format) {
 case 

[FFmpeg-devel] [PATCH 1/5] configure: Add an explicit check and option for nvcc

2019-02-20 Thread Philip Langdale
The use of nvcc to compile cuda kernels is distinct from the use of
cuda sdk libraries and linking against those libraries. We have
previously not bothered to distinguish these two cases because all
the filters that used cuda kernels also used the sdk. In the following
changes, I'm going to remove the sdk dependency from those filters,
but we need a way to ensure that nvcc is present and functioning, and
also a way to explicitly disable its use so that the filters are not
built.

Note that, unlike the cuda_sdk dependency, using nvcc to compile
a kernel does not cause a build to become non-free. Although nvcc
is distributed with the cuda sdk, and is EULA encumbered, the
compilation process we use does not introduce any EULA covered
code or libraries into the build. In this sense, using nvcc is just
like using any other proprietary compiler like msvc - compiling free
code doesn't suddenly make it non-free.

There was previously some confusion on this topic, but the important
distinction is that we use nvcc to generate ptx files - these are
not compiled GPU binaries, but rather an intermediate assembly
representation that is JIT compiled (and I think linked with certain
nvidia library code) when you actually try to run the kernel. Nvidia
uses this technique to relax machine code compatibility between
hardware generations.

From here, we can make two observations:
* The ptx files that we include in libavfilter are aggregated rather
  than linked, from the perspective of the (L)GPL
* No proprietary code is included with the ptx files. That code is
  only linked in at the final compilation step at runtime.

Signed-off-by: Philip Langdale 
---
 configure | 28 
 1 file changed, 28 insertions(+)

diff --git a/configure b/configure
index bf40c1dcb9..2219eb1515 100755
--- a/configure
+++ b/configure
@@ -322,6 +322,7 @@ External library support:
   --disable-amf            disable AMF video encoding code [autodetect]
   --disable-audiotoolbox   disable Apple AudioToolbox code [autodetect]
   --enable-cuda-sdk        enable CUDA features that require the CUDA SDK [no]
+  --disable-cuda-nvcc      disable Nvidia CUDA compiler [autodetect]
   --disable-cuvid          disable Nvidia CUVID support [autodetect]
   --disable-d3d11va        disable Microsoft Direct3D 11 video acceleration code [autodetect]
   --disable-dxva2          disable Microsoft DirectX 9 video acceleration code [autodetect]
@@ -1001,6 +1002,10 @@ hostcc_o(){
 eval printf '%s\\n' $HOSTCC_O
 }
 
+nvcc_o(){
+eval printf '%s\\n' $NVCC_O
+}
+
 test_cc(){
 log test_cc "$@"
 cat > $TMPC
@@ -1022,6 +1027,13 @@ test_objcc(){
 test_cmd $objcc -Werror=missing-prototypes $CPPFLAGS $CFLAGS $OBJCFLAGS 
"$@" $OBJCC_C $(cc_o $TMPO) $TMPM
 }
 
+test_nvcc(){
+log test_nvcc "$@"
+cat > $TMPCU
+log_file $TMPCU
+test_cmd $nvcc -ptx $NVCCFLAGS "$@" $NVCC_C $(nvcc_o $TMPO) $TMPCU
+}
+
 test_cpp(){
 log test_cpp "$@"
 cat > $TMPC
@@ -1786,6 +1798,7 @@ HWACCEL_AUTODETECT_LIBRARY_LIST="
 audiotoolbox
 crystalhd
 cuda
+cuda_nvcc
 cuvid
 d3d11va
 dxva2
@@ -4238,6 +4251,7 @@ tmpfile TMPCPP .cpp
 tmpfile TMPE   $EXESUF
 tmpfile TMPH   .h
 tmpfile TMPM   .m
+tmpfile TMPCU  .cu
 tmpfile TMPO   .o
 tmpfile TMPS   .S
 tmpfile TMPSH  .sh
@@ -6641,6 +6655,20 @@ else
 nvccflags="$nvccflags -m32"
 fi
 
+check_nvcc() {
+log check_nvcc "$@"
+disable cuda_nvcc
+test_nvcc 

Re: [FFmpeg-devel] [PATCH] avformat/mov: fix hang while seek on a kind of fragmented mp4.

2019-02-20 Thread C.H.Liu
There was a quick solution at aa25198f1b925a464bdfa83a98476f08d26c9209,
which luckily works for #7572. To reproduce the issue, you can switch to the
commit just before it.

As we discussed, #7572 has a deeper reason. We missed the last ‘sidx’ and
‘moof’ boxes.

This patch tries to fix AV_NOPTS_VALUE in the fragmented info structure.
It may be caused by missing boxes, or by fragmented structures that are
not suitable for separate audio and video ‘moof’ boxes.


Thanks and Regards,

Charles Liu



Carl Eugen Hoyos wrote on Thursday, Feb 21, 2019 at 7:33 AM:

> 2019-02-20 16:54 GMT+01:00, Charles Liu :
> > 1. organize fragmented information according to the tracks.
> > 2. do NOT skip the last boxes of fragmented info.
> >
> > ticket #7572
>
> How can I reproduce the hang with current FFmpeg git head?
>
> Carl Eugen
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/mem: Mark DECLARE_ASM_ALIGNED as visibility("hidden") for __GNUC__

2019-02-20 Thread Fāng-ruì Sòng
> Why is it a good idea to remove them from the linker command line?

In short, it improves portability. I'm not suggesting removing
-Bsymbolic or --version-script from the ffmpeg build system. I mean
users will no longer have to specify the two options to link ffmpeg
object files into their own shared objects. (AFAIK this patch addresses
all C-side issues; there are some other problems in *.asm code, though.
I also saw BROKEN_RELOCATIONS in the code, thinking that it was
probably added when developers noticed it could bring problems. I
didn't dig up the history to learn more.)

When a different build system is used and the relevant SHFLAGS options
(-Bsymbolic and --version-script) aren't specified, there won't be
mysterious linker errors like "relocation R_X86_64_PC32 cannot be used
against ...". If the linker options aren't mandatory, ffmpeg can be
more easily integrated into other projects or build systems.

The same applies when linking libavcodec/*.o with other object files
from the main program to form another shared object (not
libavcodec/libavcodec.so.58). The current limitation (these global
constants having default visibility) means every shared object linking
in libavcodec/cabac.o (with the default-visibility
ff_h264_cabac_tables) and libavcodec/x86/constants.o has to use
either -Bsymbolic or construct its own version script.

* -Bsymbolic this option applies to all exported symbols on the linker
command line, not just ffmpeg object files. This makes symbols
non-preemptive, and in particular, breaks C++ [dcl.inline], which says
"A static local variable in an inline function with external linkage
always refers to the same object." If this option is used, two
function-local static objects may have different addresses.
* --version-script  libavcodec/libavcodec.ver specifies `global: av_*;
local: *;` Again, the problem is that it applies to all exported
symbols (may affect program code). If the program code doesn't want
all its symbols to be marked local, it'll have to define its own
version script. This is a hindrance that can be avoided.


On Thu, Feb 21, 2019 at 9:37 AM Fāng-ruì Sòng  wrote:
>
> Sorry if this doesn't attach to the correct thread as I didn't
> subscribe to this list and don't know the Message-ID of the thread.
>
> > The word "also" indicates here that this should be an independent patch.
>
> I added `#if defined(__GNUC__) && !(defined(_WIN32) ||
> defined(__CYGWIN__))`, not `#if (defined(__GNUC__) ||
> defined(__clang__)) && !(defined(_WIN32) || defined(__CYGWIN__))`. For
> consistency I removed the defined(__clang__) below. If that change
> should be an independent one, here is the amended version without the
> removal of defined(__clang__)
>
>
> Inline asm code assumes these DECLARE_ASM_ALIGNED declared global
> constants are non-preemptive, e.g.
>
> libavcodec/x86/cabac.h
> "lea"MANGLE(ff_h264_cabac_tables)", %0  \n\t"
>
> On ELF platforms, if -Wl,-Bsymbolic
> -Wl,--version-script,libavcodec/libavcodec.ver are removed from the
> linker command line, the symbol will be considered preemptive and fail
> to link to a DSO:
>
> ld.lld: error: relocation R_X86_64_PC32 cannot be used against
> symbol ff_h264_cabac_tables; recompile with -fPIC
>
> It is better to express the intention explicitly and mark such global
> constants hidden (non-preemptive). It also improves portability as no
> linker magic is required.
>
> DECLARE_ASM_CONST uses the "static" specifier to indicate internal
> linkage. The visibility annotation is unnecessary.
>
> Signed-off-by: Fangrui Song 
> ---
>  libavutil/mem.h | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/libavutil/mem.h b/libavutil/mem.h
> index 5fb1a02dd9..9afeed0b43 100644
> --- a/libavutil/mem.h
> +++ b/libavutil/mem.h
> @@ -100,6 +100,12 @@
>   * @param v Name of the variable
>   */
>
> +#if defined(__GNUC__) && !(defined(_WIN32) || defined(__CYGWIN__))
> +#define DECLARE_HIDDEN __attribute__ ((visibility ("hidden")))
> +#else
> +#define DECLARE_HIDDEN
> +#endif
> +
>  #if defined(__INTEL_COMPILER) && __INTEL_COMPILER < 1110 || defined(__SUNPRO_C)
>  #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
>  #define DECLARE_ASM_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
> @@ -110,7 +116,7 @@
>  #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v
>  #elif defined(__GNUC__) || defined(__clang__)
>  #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
> -#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) v
> +#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) DECLARE_HIDDEN v
>  #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (n))) v
>  #elif defined(_MSC_VER)
>  #define DECLARE_ALIGNED(n,t,v)  __declspec(align(n)) t v



-- 
宋方睿
___
ffmpeg-devel 

Re: [FFmpeg-devel] [PATCH] avcodec/mips: [loongson] mmi optimizations for VP9 put and avg functions

2019-02-20 Thread Shiyou Yin
>-Original Message-
>From: ffmpeg-devel-boun...@ffmpeg.org [mailto:ffmpeg-devel-boun...@ffmpeg.org] 
>On Behalf Of gxw
>Sent: Tuesday, February 19, 2019 11:02 AM
>To: ffmpeg-devel@ffmpeg.org
>Cc: gxw
>Subject: [FFmpeg-devel] [PATCH] avcodec/mips: [loongson] mmi optimizations for VP9 put and avg functions
>
>VP9 decoding speed improved about 109.3% (from 32fps to 67fps, tested on
>Loongson 3A3000).
>---
> libavcodec/mips/Makefile   |   1 +
> libavcodec/mips/vp9_mc_mmi.c   | 680 +
> libavcodec/mips/vp9dsp_init_mips.c |  42 +++
> libavcodec/mips/vp9dsp_mips.h  |  50 +++
> 4 files changed, 773 insertions(+)
> create mode 100644 libavcodec/mips/vp9_mc_mmi.c
>
>diff --git a/libavcodec/mips/Makefile b/libavcodec/mips/Makefile
>index c827649..c5b54d5 100644
>--- a/libavcodec/mips/Makefile
>+++ b/libavcodec/mips/Makefile
>@@ -88,3 +88,4 @@ MMI-OBJS-$(CONFIG_VC1_DECODER)        += mips/vc1dsp_mmi.o
> MMI-OBJS-$(CONFIG_WMV2DSP)            += mips/wmv2dsp_mmi.o
> MMI-OBJS-$(CONFIG_HEVC_DECODER)       += mips/hevcdsp_mmi.o
> MMI-OBJS-$(CONFIG_VP3DSP)             += mips/vp3dsp_idct_mmi.o
>+MMI-OBJS-$(CONFIG_VP9_DECODER)        += mips/vp9_mc_mmi.o
>diff --git a/libavcodec/mips/vp9_mc_mmi.c b/libavcodec/mips/vp9_mc_mmi.c
>new file mode 100644
>index 000..145bbff
>--- /dev/null
>+++ b/libavcodec/mips/vp9_mc_mmi.c
>@@ -0,0 +1,680 @@
>+/*
>+ * Copyright (c) 2019 gxw 
>+ *
>+ * This file is part of FFmpeg.
>+ *
>+ * FFmpeg is free software; you can redistribute it and/or
>+ * modify it under the terms of the GNU Lesser General Public
>+ * License as published by the Free Software Foundation; either
>+ * version 2.1 of the License, or (at your option) any later version.
>+ *
>+ * FFmpeg is distributed in the hope that it will be useful,
>+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
>+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>+ * Lesser General Public License for more details.
>+ *
>+ * You should have received a copy of the GNU Lesser General Public
>+ * License along with FFmpeg; if not, write to the Free Software
>+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>+ */
>+
>+#include "libavcodec/vp9dsp.h"
>+#include "libavutil/mips/mmiutils.h"
>+#include "vp9dsp_mips.h"
>+
>+#define GET_DATA_H_MMI                                           \
>+    "pmaddhw    %[ftmp4],    %[ftmp4],    %[filter1]     \n\t"   \
>+    "pmaddhw    %[ftmp5],    %[ftmp5],    %[filter2]     \n\t"   \
>+    "paddw      %[ftmp4],    %[ftmp4],    %[ftmp5]       \n\t"   \
>+    "punpckhwd  %[ftmp5],    %[ftmp4],    %[ftmp0]       \n\t"   \
>+    "paddw      %[ftmp4],    %[ftmp4],    %[ftmp5]       \n\t"   \
>+    "pmaddhw    %[ftmp6],    %[ftmp6],    %[filter1]     \n\t"   \
>+    "pmaddhw    %[ftmp7],    %[ftmp7],    %[filter2]     \n\t"   \
>+    "paddw      %[ftmp6],    %[ftmp6],    %[ftmp7]       \n\t"   \
>+    "punpckhwd  %[ftmp7],    %[ftmp6],    %[ftmp0]       \n\t"   \
>+    "paddw      %[ftmp6],    %[ftmp6],    %[ftmp7]       \n\t"   \
>+    "punpcklwd  %[srcl],     %[ftmp4],    %[ftmp6]       \n\t"   \
>+    "pmaddhw    %[ftmp8],    %[ftmp8],    %[filter1]     \n\t"   \
>+    "pmaddhw    %[ftmp9],    %[ftmp9],    %[filter2]     \n\t"   \
>+    "paddw      %[ftmp8],    %[ftmp8],    %[ftmp9]       \n\t"   \
>+    "punpckhwd  %[ftmp9],    %[ftmp8],    %[ftmp0]       \n\t"   \
>+    "paddw      %[ftmp8],    %[ftmp8],    %[ftmp9]       \n\t"   \
>+    "pmaddhw    %[ftmp10],   %[ftmp10],   %[filter1]     \n\t"   \
>+    "pmaddhw    %[ftmp11],   %[ftmp11],   %[filter2]     \n\t"   \
>+    "paddw      %[ftmp10],   %[ftmp10],   %[ftmp11]      \n\t"   \
>+    "punpckhwd  %[ftmp11],   %[ftmp10],   %[ftmp0]       \n\t"   \
>+    "paddw      %[ftmp10],   %[ftmp10],   %[ftmp11]      \n\t"   \
>+    "punpcklwd  %[srch],     %[ftmp8],    %[ftmp10]      \n\t"
>+
>+#define GET_DATA_V_MMI                                           \
>+    "punpcklhw  %[srcl],     %[ftmp4],    %[ftmp5]       \n\t"   \
>+    "pmaddhw    %[srcl],     %[srcl],     %[filter10]    \n\t"   \
>+    "punpcklhw  %[ftmp12],   %[ftmp6],    %[ftmp7]       \n\t"   \
>+    "pmaddhw    %[ftmp12],   %[ftmp12],   %[filter32]    \n\t"   \
>+    "paddw      %[srcl],     %[srcl],     %[ftmp12]      \n\t"   \
>+    "punpcklhw  %[ftmp12],   %[ftmp8],    %[ftmp9]       \n\t"   \
>+    "pmaddhw    %[ftmp12],   %[ftmp12],   %[filter54]    \n\t"   \
>+    "paddw      %[srcl],     %[srcl],     %[ftmp12]      \n\t"   \
>+    "punpcklhw  %[ftmp12],   %[ftmp10],   %[ftmp11]      \n\t"   \
>+    "pmaddhw    %[ftmp12],   %[ftmp12],   %[filter76]    \n\t"   \
>+    "paddw      %[srcl],     %[srcl],     %[ftmp12]      \n\t"   \
>+    "punpckhhw  %[srch],     %[ftmp4],    %[ftmp5]       \n\t"   \
>+    "pmaddhw    %[srch],     %[srch],     %[filter10]    \n\t"   \
>+    "punpckhhw  %[ftmp12],   %[ftmp6],    %[ftmp7]       \n\t"   \
>+    "pmaddhw    %[ftmp12],   %[ftmp12],   %[filter32]    \n\t"   \
>+    "paddw      %[srch],     %[srch],     %[ftmp12]      \n\t"   \
>+"punpckhhw  %[ftmp12],   

Re: [FFmpeg-devel] [PATCH] avutil/mem: Mark DECLARE_ASM_ALIGNED as visibility("hidden") for __GNUC__

2019-02-20 Thread Fāng-ruì Sòng
Sorry if this doesn't attach to the correct thread as I didn't
subscribe to this list and don't know the Message-ID of the thread.

> The word "also" indicates here that this should be an independent patch.

I added `#if defined(__GNUC__) && !(defined(_WIN32) ||
defined(__CYGWIN__))`, not `#if (defined(__GNUC__) ||
defined(__clang__)) && !(defined(_WIN32) || defined(__CYGWIN__))`. For
consistency I removed the defined(__clang__) below. If that change
should be an independent one, here is the amended version without the
removal of defined(__clang__)


Inline asm code assumes these DECLARE_ASM_ALIGNED declared global
constants are non-preemptive, e.g.

libavcodec/x86/cabac.h
"lea"MANGLE(ff_h264_cabac_tables)", %0  \n\t"

On ELF platforms, if -Wl,-Bsymbolic
-Wl,--version-script,libavcodec/libavcodec.ver are removed from the
linker command line, the symbol will be considered preemptive and fail
to link to a DSO:

ld.lld: error: relocation R_X86_64_PC32 cannot be used against
symbol ff_h264_cabac_tables; recompile with -fPIC

It is better to express the intention explicitly and mark such global
constants hidden (non-preemptive). It also improves portability as no
linker magic is required.

DECLARE_ASM_CONST uses the "static" specifier to indicate internal
linkage. The visibility annotation is unnecessary.

Signed-off-by: Fangrui Song 
---
 libavutil/mem.h | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/libavutil/mem.h b/libavutil/mem.h
index 5fb1a02dd9..9afeed0b43 100644
--- a/libavutil/mem.h
+++ b/libavutil/mem.h
@@ -100,6 +100,12 @@
  * @param v Name of the variable
  */

+#if defined(__GNUC__) && !(defined(_WIN32) || defined(__CYGWIN__))
+#define DECLARE_HIDDEN __attribute__ ((visibility ("hidden")))
+#else
+#define DECLARE_HIDDEN
+#endif
+
 #if defined(__INTEL_COMPILER) && __INTEL_COMPILER < 1110 || defined(__SUNPRO_C)
 #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
 #define DECLARE_ASM_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
@@ -110,7 +116,7 @@
 #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v
 #elif defined(__GNUC__) || defined(__clang__)
 #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
-#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) v
+#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) DECLARE_HIDDEN v
 #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (n))) v
 #elif defined(_MSC_VER)
 #define DECLARE_ALIGNED(n,t,v)  __declspec(align(n)) t v
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] dav1d support

2019-02-20 Thread Lou Logan
On Wed, Feb 20, 2019, at 3:39 PM, Patel, Dhaval R wrote:
>
> Is anyone aware of target dates for release 4.2?

No, but if it follows the trend of the last 5 releases it will be about 6 
months from the last release, so my guess is in May or June.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] GSoC mentored project: derain filter

2019-02-20 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of Liu Steven
> Sent: Wednesday, February 20, 2019 7:18 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Cc: Thilo Borgmann ; Liu Steven
> 
> Subject: Re: [FFmpeg-devel] GSoC mentored project: derain filter
> 
> 
> 
> > On Feb 20, 2019, at 6:35 PM, 孟学苇 (Iris Meng) wrote:
> >
> > Hi Dev-Community,
> >
> >
> >
> >
> > I am Iris Meng from China. I'm a PhD student at the Institute of Digital
> Media, Peking University. I wish to contribute as a GSoC applicant this year.
> >
> > I am interested in Deep Learning. I want to add a derain filter in ffmpeg. 
> > If
> you have any suggestion or question, we can contact by email. My
> motivation and plans are as follows.
> >
> >
> >
> >
> >   Motivation
> >
> > Rain and fog are very common weather conditions in real life. However, they can
> reduce visibility. Especially in heavy rain, rain streaks from various directions
> accumulate and make the background scene misty, which will seriously
> influence the accuracy of many computer vision systems, including video
> surveillance, object detection and tracking in autonomous driving, etc.
> Therefore, it is an important task to remove the rain and fog, and recover the
> background from rain images. It can be used for image and video processing
> to make them clearer and it can be a preprocessing method for many
> computer vision systems.
> >
> >
> >
> >
> >   Proposed Idea
> >
> > We propose to implement this technology in ffmpeg. For video [1][2], we
> can utilize the relationship between frames to remove rain and fog. For
> single image [3], we can use traditional methods, such as discriminative
> sparse coding, low rank representation and the Gaussian mixture model. We
> can also use some deep learning methods. We should investigate these
> methods, and ultimately consider the effect of rain/fog removal and the
> complexity of the algorithm, and choose the optimal scheme.
> >
> >
> >
> >
> >   Practical application
> >
> > The derain and dehaze method can improve the subjective quality of
> videos and images.
> >
> >
> >
> >
> >  Development plan
> >
> > I would like to start working on my qualification task and try to solve my
> problems. Overall, I will follow the following steps to complete the project.
> >
> > (1)Literature and algorithm investigation
> >
> > (2)Data sets preparation
> >
> > (3)Coding: Implement network, training code, inference code and so on
> >
> > (4) Select the best method and port it into FFmpeg
> >
> >
> >
> >
> >  Reference
> >
> >  [1] Zhang X, Li H, Qi Y, et al. Rain removal in video by combining 
> > temporal
> and chromatic properties[C]//2006 IEEE International Conference on
> Multimedia and Expo. IEEE, 2006: 461-464.
> >
> >  [2] Tripathi A K, Mukhopadhyay S. Removal of rain from videos: a
> review[J]. Signal, Image and Video Processing, 2014, 8(8): 1421-1430.
> >
> >  [3] Li X, Wu J, Lin Z, et al. Recurrent squeeze-and-excitation context
> aggregation net for single image deraining[C]//Proceedings of the European
> Conference on Computer Vision (ECCV). 2018: 254-269.
> >
> >
> >
> I think this can reference libavfilter/sr.c for the implementation; maybe you
> can try two ways to implement it, one native and the other model-based.
> 

And currently, only the TensorFlow model is supported, via the TensorFlow C API;
you can easily save the model file in Python with the functions
tf.graph_util.convert_variables_to_constants and tf.train.write_graph.

For the native mode (executed with CPU), two operations (CONV and 
DEPTH_TO_SPACE) are supported now, you might add more.

> 
> Thanks
> Steven
> 
> >
> >
> >
> >
> > Thanks,
> >
> > Regards,
> >
> > Iris Meng
> > ___
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> 
> 
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] dav1d support

2019-02-20 Thread Patel, Dhaval R
Thanks for your reply James.

Is anyone aware of target dates for release 4.2?


Thanks,
Dhaval

-Original Message-
From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of James 
Almer
Sent: Wednesday, February 20, 2019 2:58 PM
To: ffmpeg-devel@ffmpeg.org
Subject: Re: [FFmpeg-devel] dav1d support

On 2/20/2019 7:53 PM, Patel, Dhaval R wrote:
> Hi all,
> 
> We are trying to get dav1d into our software stack; I see the ffmpeg plugin
> for dav1d is in the master branch, but not yet in any stable release.
> 
> Is there a plan to include it in the next stable release? If yes, what date
> is the release targeted for?
> 
> 
> Thanks,
> Dhaval

Yes, it will be available in ffmpeg 4.2, whenever it's released. There's no 
defined date for it, but it should be in the coming months.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 3:58 GMT+01:00, Zhong Li :

> And it is necessary for adding new qsv decoders such as
> MJPEG and VP9 since current parser can't provide
> enough information.

Just curious: What information is missing?

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 3:58 GMT+01:00, Zhong Li :
> VP9 decoder is supported on Intel kabyLake+ platforms with MSDK Version
> 1.19+

> diff --git a/Changelog b/Changelog
> index f289812bfc..141ffd9610 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -20,6 +20,7 @@ version :
>  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
>  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec

>  - Intel QSV-accelerated MJPEG decoding
> +- Intel QSV-accelerated VP9 decoding

Please merge these lines.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avutil/mem: Mark DECLARE_ASM_ALIGNED as visibility("hidden") for __GNUC__

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 10:13 GMT+01:00, Fāng-ruì Sòng :
> Inline asm code assumes these DECLARE_ASM_ALIGNED declared global
> constants are non-preemptive, e.g.
>
> libavcodec/x86/cabac.h
> "lea"MANGLE(ff_h264_cabac_tables)", %0  \n\t"
>
> On ELF platforms, if -Wl,-Bsymbolic
> -Wl,--version-script,libavcodec/libavcodec.ver are removed from the
> linker command line, the symbol will be considered preemptive and fail

Why is it a good idea to remove them from the linker command line?

> to link to a DSO:
>
> ld.lld: error: relocation R_X86_64_PC32 cannot be used against
> symbol ff_h264_cabac_tables; recompile with -fPIC
>
> It is better to express the intention explicitly and mark such global
> constants hidden (non-preemptive). It also improves portability as no
> linker magic is required.
>
> DECLARE_ASM_CONST uses the "static" specifier to indicate internal
> linkage. The visibility annotation is unnecessary.
>
> Also remove __clang__ as clang pretends to be gcc 4.2 and defines __GNUC__

The word "also" indicates here that this should be an
independent patch.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avformat/mov: fix hang while seek on a kind of fragmented mp4.

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 16:54 GMT+01:00, Charles Liu :
> 1. organize fragmented information according to the tracks.
> 2. do NOT skip the last boxes of fragmented info.
>
> ticket #7572

How can I reproduce the hang with current FFmpeg git head?

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] dav1d support

2019-02-20 Thread James Almer
On 2/20/2019 7:53 PM, Patel, Dhaval R wrote:
> Hi all,
> 
> We are trying to get dav1d into our software stack; I see the ffmpeg plugin
> for dav1d is in the master branch, but not yet in any stable release.
> 
> Is there a plan to include it in the next stable release? If yes, what date
> is the release targeted for?
> 
> 
> Thanks,
> Dhaval

Yes, it will be available in ffmpeg 4.2, whenever it's released. There's
no defined date for it, but it should be in the coming months.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [FFmpeg-cvslog] Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 23:47 GMT+01:00, James Almer :
> On 2/20/2019 7:28 PM, Carl Eugen Hoyos wrote:
>> 2019-02-20 19:46 GMT+01:00, James Almer :
>>> ffmpeg | branch: master | James Almer  | Wed Feb 20
>>> 15:42:01 2019 -0300| [e4e04dce1fab81bcdef82e60184d50c73d212c6a] |
>>> committer:
>>> James Almer
>>>
>>> Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'
>>>
>>> * commit '28a8b5413b64b831dfb8650208bccd8b78360484':
>>>   h264/aarch64: add intra loop filter neon asm
>>
>> This breaks fate on Linux, does it work on any platform?
>>
>> Carl Eugen
>
> I fixed the prototypes that i forgot to update, so maybe that helps.

This was unrelated afaict.

Hopefully fixed, thank you, Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] dav1d support

2019-02-20 Thread Patel, Dhaval R
Hi all,

We are trying to get dav1d into our software stack; I see the ffmpeg plugin for
dav1d is in the master branch, but not yet in any stable release.

Is there a plan to include it in the next stable release? If yes, what date is
the release targeted for?


Thanks,
Dhaval

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [FFmpeg-cvslog] Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'

2019-02-20 Thread James Almer
On 2/20/2019 7:28 PM, Carl Eugen Hoyos wrote:
> 2019-02-20 19:46 GMT+01:00, James Almer :
>> ffmpeg | branch: master | James Almer  | Wed Feb 20
>> 15:42:01 2019 -0300| [e4e04dce1fab81bcdef82e60184d50c73d212c6a] | committer:
>> James Almer
>>
>> Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'
>>
>> * commit '28a8b5413b64b831dfb8650208bccd8b78360484':
>>   h264/aarch64: add intra loop filter neon asm
> 
> This breaks fate on Linux, does it work on any platform?
> 
> Carl Eugen

I fixed the prototypes that i forgot to update, so maybe that helps.
If not, then I'm not sure. Checkasm seems to pass, so I don't know why
h264-conformance-frext-hi422fr10_sony_b doesn't.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Proposal: Homebrew tap for FFmpeg

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 20:56 GMT+01:00, Lou Logan :
> On Wed, Feb 6, 2019, at 11:48 AM, Werner Robitza wrote:
>>
>> I propose that FFmpeg maintains its own ffmpeg formula under its
>> GitHub organization at github.com/ffmpeg/homebrew-ffmpeg (or similar).
>> This will ensure that there's one formula users will discover when
>> they look for an alternative tap, thus improving discoverability and
>> avoiding fragmentation. We could use the above link as a starting
>> point.
>
> The alternative tap originally proposed by Werner went ahead independently
> and has been implemented at:
> https://github.com/varenc/homebrew-ffmpeg

Great!
Should we add this link to the download page?

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 1/2] libavcodec/zmbvenc: block scoring improvements/bug fixes

2019-02-20 Thread Carl Eugen Hoyos
2019-02-10 16:42 GMT+01:00, Tomas Härdin :
> Sat 2019-02-09 13:10 +, Matthew Fearnley wrote:
>> - Improve block choices by counting 0-bytes in the entropy score
>> - Make histogram use uint16_t type, to allow byte counts from 16*16
>> (current block size) up to 255*255 (maximum allowed 8bpp block size)
>> - Make sure score table is big enough for a full block's worth of bytes
>> - Calculate *xored without using code in inner loop
>> ---
>>  libavcodec/zmbvenc.c | 22 --
>>  1 file changed, 16 insertions(+), 6 deletions(-)
>
> Passes FATE, looks good to me

I believe you asked on IRC about FATE tests:
Since such a test would depend on the zlib version, you cannot test
exact output, you could only test a round-trip (assuming the codec
really is lossless).

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [FFmpeg-cvslog] Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 19:46 GMT+01:00, James Almer :
> ffmpeg | branch: master | James Almer  | Wed Feb 20
> 15:42:01 2019 -0300| [e4e04dce1fab81bcdef82e60184d50c73d212c6a] | committer:
> James Almer
>
> Merge commit '28a8b5413b64b831dfb8650208bccd8b78360484'
>
> * commit '28a8b5413b64b831dfb8650208bccd8b78360484':
>   h264/aarch64: add intra loop filter neon asm

This breaks FATE on Linux; does it work on any platform?

Carl Eugen


Re: [FFmpeg-devel] [PATCH] avcodec/h264_direct: Fix overflow in POC comparission

2019-02-20 Thread Michael Niedermayer
On Thu, Feb 14, 2019 at 12:06:04AM +0100, Michael Niedermayer wrote:
> Fixes: runtime error: signed integer overflow: 2147421862 - -33624063 cannot 
> be represented in type 'int'
> Fixes: 
> 12885/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5733516975800320
> 
> Found-by: continuous fuzzing process 
> https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> Signed-off-by: Michael Niedermayer 
> ---
>  libavcodec/h264_direct.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

will apply

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If a bugfix only changes things apparently unrelated to the bug with no
further explanation, that is a good sign that the bugfix is wrong.




Re: [FFmpeg-devel] [FFmpeg-cvslog] Merge commit '90adbf4abf336f8042aecdf1e18fdf76a96304b1'

2019-02-20 Thread Carl Eugen Hoyos
2019-02-20 18:49 GMT+01:00, James Almer :
> ffmpeg | branch: master | James Almer  | Wed Feb 20
> 14:47:13 2019 -0300| [0c126431f9b290f5651ec62f45627632d94c51ea] | committer:
> James Almer
>
> Merge commit '90adbf4abf336f8042aecdf1e18fdf76a96304b1'
>
> * commit '90adbf4abf336f8042aecdf1e18fdf76a96304b1':
>   cook: Use the correct table for 6-bit stereo coupling

How can I test this?

Carl Eugen


Re: [FFmpeg-devel] [PATCH v2] ffmpeg: Add option to force a specific decode format

2019-02-20 Thread Mark Thompson
On 18/02/2019 05:05, Fu, Linjie wrote:
>> -Original Message-
>> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
>> Of Fu, Linjie
>> Sent: Friday, November 16, 2018 16:37
>> To: FFmpeg development discussions and patches > de...@ffmpeg.org>
>> Subject: Re: [FFmpeg-devel] [PATCH v2] ffmpeg: Add option to force a
>> specific decode format
>>
>>> -Original Message-
>>> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On
>> Behalf
>>> Of Mark Thompson
>>> Sent: Thursday, November 15, 2018 05:48
>>> To: ffmpeg-devel@ffmpeg.org
>>> Subject: Re: [FFmpeg-devel] [PATCH v2] ffmpeg: Add option to force a
>>> specific decode format
>>>
>>> On 14/11/18 01:35, Fu, Linjie wrote:
> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On
>>> Behalf
> Of Mark Thompson
> Sent: Wednesday, November 14, 2018 09:11
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v2] ffmpeg: Add option to force a
> specific decode format
>
> On 14/11/18 00:50, Fu, Linjie wrote:
>>> -Original Message-
>>> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On
> Behalf
>>> Of Mark Thompson
>>> Sent: Wednesday, November 14, 2018 07:44
>>> To: ffmpeg-devel@ffmpeg.org
>>> Subject: [FFmpeg-devel] [PATCH v2] ffmpeg: Add option to force a
> specific
>>> decode format
>>>
>>> Fixes #7519.
>>> ---
>>>  doc/ffmpeg.texi  | 13 
>>>  fftools/ffmpeg.c | 10 ++
>>>  fftools/ffmpeg.h |  4 
>>>  fftools/ffmpeg_opt.c | 47
>>> 
>>>  4 files changed, 74 insertions(+)
>>>
>>> diff --git a/doc/ffmpeg.texi b/doc/ffmpeg.texi
>>> index 3717f22d42..d127bc0f0d 100644
>>> --- a/doc/ffmpeg.texi
>>> +++ b/doc/ffmpeg.texi
>>> @@ -920,6 +920,19 @@ would be more efficient.
>>>  When doing stream copy, copy also non-key frames found at the
>>>  beginning.
>>>
>>> +@item -decode_format[:@var{stream_specifier}] @var{pixfmt}[,@var{pixfmt}...] (@emph{input,per-stream})
>>> +Set the possible output formats to be used by the decoder for this stream.
>>> +If the decoder does not natively support any format in the given list for
>>> +the input stream then decoding will fail rather than continuing with a
>>> +different format.
>>> +
>>> +In general this should not be set - the decoder will select an output
>>> +format based on the input stream parameters and available components, and
>>> +that will be automatically converted to whatever the output requires. It
>>> +may be useful to force a hardware decoder supporting output in multiple
>>> +different memory types to pick a desired one, or to ensure that a hardware
>>> +decoder is used when software fallback is also available.
>>> +
>>>  @item -init_hw_device
>>> @var{type}[=@var{name}][:@var{device}[,@var{key=value}...]]
>>>  Initialise a new hardware device of type @var{type} called
>>> @var{name},
>>> using the
>>>  given device parameters.
>>> diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
>>> index 38c21e944a..c651c8d3a8 100644
>>> --- a/fftools/ffmpeg.c
>>> +++ b/fftools/ffmpeg.c
>>> @@ -598,6 +598,7 @@ static void ffmpeg_cleanup(int ret)
>>>  avsubtitle_free(&ist->prev_sub.subtitle);
>>>  av_frame_free(&ist->sub2video.frame);
>>>  av_freep(&ist->filters);
>>> +av_freep(&ist->decode_formats);
>>>  av_freep(&ist->hwaccel_device);
>>>  av_freep(&ist->dts_buffer);
>>>
>>> @@ -2800,6 +2801,15 @@ static enum AVPixelFormat
>>> get_format(AVCodecContext *s, const enum AVPixelFormat
>>>  const AVCodecHWConfig  *config = NULL;
>>>  int i;
>>>
>>> +if (ist->decode_formats) {
>>> +for (i = 0; ist->decode_formats[i] != AV_PIX_FMT_NONE; i++) {
>>> +if (ist->decode_formats[i] == *p)
>>> +break;
>>> +}
>>> +if (ist->decode_formats[i] != *p)
>>> +continue;
>>> +}
>>> +
>>>  if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL))
>>>  break;
>>>
>>> diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
>>> index eb1eaf6363..b06fd18b1c 100644
>>> --- a/fftools/ffmpeg.h
>>> +++ b/fftools/ffmpeg.h
>>> @@ -125,6 +125,8 @@ typedef struct OptionsContext {
>>>  intnb_ts_scale;
>>>  SpecifierOpt *dump_attachment;
>>>  intnb_dump_attachment;
>>> +SpecifierOpt *decode_formats;
>>> +intnb_decode_formats;
>>>  SpecifierOpt *hwaccels;
>>>  intnb_hwaccels;
>>>  SpecifierOpt *hwaccel_devices;
>>> @@ -334,6 +336,8 @@ typedef struct 

Re: [FFmpeg-devel] [PATCH v5] Improved the performance of 1 decode + N filter graphs and adaptive bitrate.

2019-02-20 Thread Mark Thompson
On 20/02/2019 10:17, Wang, Shaofei wrote:
>> -Original Message-
>> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
>> Mark Thompson
>> Sent: Saturday, February 16, 2019 8:12 PM
>> To: ffmpeg-devel@ffmpeg.org
>> Subject: Re: [FFmpeg-devel] [PATCH v5] Improved the performance of 1
>> decode + N filter graphs and adaptive bitrate.
>> On 15/02/2019 21:54, Shaofei Wang wrote:
>>> It enabled multiple filter graph concurrency, which brings about a
>>> 4%~20% improvement in some 1:N scenarios with CPU or GPU acceleration
>>>
>>> Below are some test cases and comparison as reference.
>>> (Hardware platform: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz)
>>> (Software: Intel iHD driver - 16.9.00100, CentOS 7)
>>>
>>> For 1:N transcode by GPU acceleration with vaapi:
>>> ./ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
>>> -hwaccel_output_format vaapi \
>>> -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
>>> -vf "scale_vaapi=1280:720" -c:v h264_vaapi -f null /dev/null \
>>> -vf "scale_vaapi=720:480" -c:v h264_vaapi -f null /dev/null
>>>
>>> test results:
>>> 2 encoders 5 encoders 10 encoders
>>> Improved   6.1%6.9%   5.5%
>>>
>>> For 1:N transcode by GPU acceleration with QSV:
>>> ./ffmpeg -hwaccel qsv -c:v h264_qsv \
>>> -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
>>> -vf "scale_qsv=1280:720:format=nv12" -c:v h264_qsv -f null /dev/null
>> \
>>> -vf "scale_qsv=720:480:format=nv12" -c:v h264_qsv -f null
>>> /dev/null
>>>
>>> test results:
>>> 2 encoders  5 encoders 10 encoders
>>> Improved   6%   4% 15%
>>>
>>> For Intel GPU acceleration case, 1 decode to N scaling, by QSV:
>>> ./ffmpeg -hwaccel qsv -c:v h264_qsv \
>>> -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
>>> -vf "scale_qsv=1280:720:format=nv12,hwdownload" -pix_fmt nv12 -f
>> null /dev/null \
>>> -vf "scale_qsv=720:480:format=nv12,hwdownload" -pix_fmt nv12 -f
>>> null /dev/null
>>>
>>> test results:
>>> 2 scale  5 scale   10 scale
>>> Improved   12% 21%21%
>>>
>>> For CPU only 1 decode to N scaling:
>>> ./ffmpeg -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
>>> -vf "scale=1280:720" -pix_fmt nv12 -f null /dev/null \
>>> -vf "scale=720:480" -pix_fmt nv12 -f null /dev/null
>>>
>>> test results:
>>> 2 scale  5 scale   10 scale
>>> Improved   25%107%   148%
>>>
>>> Signed-off-by: Wang, Shaofei 
>>> Reviewed-by: Zhao, Jun 
>>> ---
>>>  fftools/ffmpeg.c| 121
>> 
>>>  fftools/ffmpeg.h|  14 ++
>>>  fftools/ffmpeg_filter.c |   1 +
>>>  3 files changed, 128 insertions(+), 8 deletions(-)
>>
>> On a bit more review, I don't think this patch works at all.
>>
> It has been tested and verified with a lot of cases. More FATE cases need
> to be covered now.
> 
>> The existing code is all written to be run serially.  This simplistic 
>> approach to
>> parallelising it falls down because many of those functions use variables
>> written in what were previously other functions called at different times but
>> have now become other threads, introducing undefined behaviour due to
>> data races.
>>
> Actually, this is not a patch to parallelize everything in ffmpeg. It just
> threads the input filter of each filter graph (intended for simple filter
> graphs), which is a simple way to improve N filter graph performance without
> introducing huge modifications. There are still a lot of serial function
> calls; the differences are that each filter graph needs to init its own
> output stream instead of initing them all together, and each filter graph
> reaps filters for its own filter chain.

Indeed the existing encapsulation tries to keep things mostly separate, but in 
various places it accesses shared state which works fine in the serial case but 
fails when those parts are run in parallel.

Data races are undefined behaviour in C; introducing them is not acceptable.

>> To consider a single example (not the only one), the function
>> check_init_output_file() does not work at all after this change.  The test 
>> for
>> OutputStream initialisation (so that you run exactly once after all of the
>> output streams are ready) races with other threads setting those variables.
>> Since that's undefined behaviour you may get lucky sometimes and have the
>> output file initialisation run exactly once, but in general it will fail in 
>> unknown
>> ways.
>>
> 
> check_init_output_file() should be responsible for the output file
> associated with the specified output stream, which is managed by each
> thread chain; that means even if it is called from different threads, the
> data being set is different. Let me double check.

Each output file can contain multiple streams - try a transcode with both audio 
and video filters.

(Incidentally, the patch as-is also crashes in this case if the transcode 
completes 

Re: [FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()

2019-02-20 Thread Mark Thompson
On 20/02/2019 02:58, Zhong Li wrote:
> Using MSDK parser can improve qsv decoder pass rate in some cases (E.g:
> sps declares a wrong level_idc, smaller than it should be).
> And it is necessary for adding new qsv decoders such as MJPEG and VP9
> since current parser can't provide enough information.

Can you explain the problem with level_idc?  Why would the libmfx parser 
determine a different answer?

Given that you need the current parser anyway (see previous mail), it would 
likely be more useful to extend it to supply any information which is missing.

> Actually using MFXVideoDECODE_DecodeHeader() was discussed at
> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-July/175734.html and merged
> as commit 1acb19d, but was overwritten when the libav patches were merged
> (commit: 1f26a23) without any explanation.

I'm not sure where the explanation for this went; maybe it was only discussed 
on IRC.

The reason for using the internal parsers is that you need the information 
before libmfx is initialised at all in the hw_frames_ctx case (i.e. before the 
get_format callback which will supply the hardware context information), and 
once you require that anyway there isn't much point in parsing things twice for 
the same information.

It's probably fine to parse it twice if you want, but the two cases really 
should be returning the same information.

> v2: split decode header from decode_init, and call it for every frame to
> detect format/resolution changes. It can fix some regression issues such
> as HEVC 10-bit decoding.
> 
> Signed-off-by: Zhong Li 
> ---
>  libavcodec/qsvdec.c | 172 ++--
>  libavcodec/qsvdec.h |   2 +
>  2 files changed, 90 insertions(+), 84 deletions(-)

- Mark


Re: [FFmpeg-devel] [PATCH v2 1/1] avcodec/vaapi_encode: add frame-skip func

2019-02-20 Thread Mark Thompson
On 20/02/2019 10:33, Jing SUN wrote:
> This implements app controlled frame skipping
> in vaapi encoding. To make a frame skipped,
> allocate its frame side data of the newly
> added AV_FRAME_DATA_SKIP_FRAME type and set
> its value to 1.
> 
> Signed-off-by: Jing SUN 
> ---
>  libavcodec/vaapi_encode.c | 112 
> --
>  libavcodec/vaapi_encode.h |   5 +++
>  libavutil/frame.c |   1 +
>  libavutil/frame.h |   5 +++
>  4 files changed, 119 insertions(+), 4 deletions(-)

Have a look at 
, 
which tries to implement this feature in a more general way without adding any 
ad-hoc API.

- Mark


Re: [FFmpeg-devel] [PATCH v2 4/6] lavc/qsvdec: remove orignal parser code since not needed now

2019-02-20 Thread Mark Thompson
On 20/02/2019 02:58, Zhong Li wrote:
> Signed-off-by: Zhong Li 
> ---
>  configure   | 10 +-
>  libavcodec/qsvdec.c | 16 +---
>  libavcodec/qsvdec.h |  2 --
>  3 files changed, 6 insertions(+), 22 deletions(-)

You can't remove this, it's still needed - the stream properties must be 
determined before the get_format() callback.

Similarly, you will need to extend the VP9 parser to return the relevant 
information for the following patch so that it works in general rather than 
only in cases where the user can supply it externally.  It should be quite 
straightforward; see 182cf170a544bce069c8690c90b49381150a1f10.

- Mark


Re: [FFmpeg-devel] [PATCH 1/2] libavcodec/zmbvenc: block scoring improvements/bug fixes

2019-02-20 Thread Michael Niedermayer
On Sat, Feb 09, 2019 at 01:10:20PM +, Matthew Fearnley wrote:
> - Improve block choices by counting 0-bytes in the entropy score
> - Make histogram use uint16_t type, to allow byte counts from 16*16
> (current block size) up to 255*255 (maximum allowed 8bpp block size)
> - Make sure score table is big enough for a full block's worth of bytes
> - Calculate *xored without using code in inner loop

This should have been split into multiple changes.

Compression seems to become slightly worse with this change:

./ffmpeg -i matrixbench_mpeg2.mpg -vframes 30 -vcodec zmbv -an -y test-old.avi
./ffmpeg -i matrixbench_mpeg2.mpg -vframes 30 -vcodec zmbv -an -y test-new.avi

-rw-r- 1 michael michael 1175466 Feb 20 22:06 test-new.avi
-rw-r- 1 michael michael 1174832 Feb 20 22:07 test-old.avi



[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

"You are 36 times more likely to die in a bathtub than at the hands of a
terrorist. Also, you are 2.5 times more likely to become a president and
2 times more likely to become an astronaut, than to die in a terrorist
attack." -- Thoughty2





Re: [FFmpeg-devel] [PATCH v2] lavf/jacosubdec: compute subtitle duration correctly

2019-02-20 Thread Michael Niedermayer
On Tue, Feb 19, 2019 at 10:29:40AM +0100, Paul B Mahol wrote:
> On 2/19/19, Adam Sampson  wrote:
> > When a JACOsub subtitle has two timestamps, they represent its start and
> > end times (http://unicorn.us.com/jacosub/jscripts.html#l_times); the
> > duration is the difference between the two, not the sum of the two.
> >
> > The subtitle end times in the FATE test for this were wrong as a result;
> > fix them too. (This test is based on JACOsub's demo.txt, and the end
> > time computed for the last line using @ now matches what the comments
> > there say it should be.)
> >
> > Also tested in practice using MPV, a LaserDisc, and some authentic 1993
> > JACOsub files.
> >
> > Signed-off-by: Adam Sampson 
> > ---
> > v2: update the test data too (thanks, Carl Eugen!)
> >
> >  libavformat/jacosubdec.c   |  2 +-
> >  tests/ref/fate/sub-jacosub | 22 +++---
> >  2 files changed, 12 insertions(+), 12 deletions(-)
> >
> 
> LGTM

will apply

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If you drop bombs on a foreign country and kill a hundred thousand
innocent people, expect your government to call the consequence
"unprovoked inhuman terrorist attacks" and use it to justify dropping
more bombs and killing more people. The technology changed, the idea is old.




Re: [FFmpeg-devel] Proposal: Homebrew tap for FFmpeg

2019-02-20 Thread Lou Logan
On Wed, Feb 6, 2019, at 11:48 AM, Werner Robitza wrote:
>
> I propose that FFmpeg maintains its own ffmpeg formula under its
> GitHub organization at github.com/ffmpeg/homebrew-ffmpeg (or similar).
> This will ensure that there's one formula users will discover when
> they look for an alternative tap, thus improving discoverability and
> avoiding fragmentation. We could use the above link as a starting
> point.

The alternative tap originally proposed by Werner went ahead independently and 
has been implemented at:
https://github.com/varenc/homebrew-ffmpeg

So if you're a Homebrew user looking for a formula with additional options not
present in the core Homebrew ffmpeg formula, you can use this tap.

More info is available in the community wiki:
https://trac.ffmpeg.org/wiki/CompilationGuide/macOS#Additionaloptions

Note that this tap is not maintained by or associated with FFmpeg.


Re: [FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Rogozhkin, Dmitry V
On Wed, 2019-02-20 at 10:58 +0800, Zhong Li wrote:
> VP9 decoder is supported on Intel kabyLake+ platforms with MSDK
> Version 1.19+
> 
> Signed-off-by: Zhong Li 
> ---
>  Changelog |  1 +
>  configure |  1 +
>  libavcodec/allcodecs.c|  1 +
>  libavcodec/qsv.c  |  5 +
>  libavcodec/qsvdec_other.c | 46 -
> --
>  5 files changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/Changelog b/Changelog
> index f289812bfc..141ffd9610 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -20,6 +20,7 @@ version :
>  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
>  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
>  - Intel QSV-accelerated MJPEG decoding
> +- Intel QSV-accelerated VP9 decoding
>  
>  
>  version 4.1:
> diff --git a/configure b/configure
> index de994673a0..84fbe49bcc 100755
> --- a/configure
> +++ b/configure
>  vp8_v4l2m2m_decoder_deps="v4l2_m2m vp8_v4l2_m2m"
>  vp8_v4l2m2m_encoder_deps="v4l2_m2m vp8_v4l2_m2m"
>  vp9_cuvid_decoder_deps="cuvid"
>  vp9_mediacodec_decoder_deps="mediacodec"
> +vp9_qsv_decoder_select="qsvdec"
>  vp9_rkmpp_decoder_deps="rkmpp"
>  vp9_vaapi_encoder_deps="VAEncPictureParameterBufferVP9"
>  vp9_vaapi_encoder_select="vaapi_encode"
> diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
> index 391619c38c..248b8f15b8 100644
> --- a/libavcodec/allcodecs.c
> +++ b/libavcodec/allcodecs.c
> @@ -776,6 +776,7 @@ extern AVCodec ff_vp8_v4l2m2m_encoder;
>  extern AVCodec ff_vp8_vaapi_encoder;
>  extern AVCodec ff_vp9_cuvid_decoder;
>  extern AVCodec ff_vp9_mediacodec_decoder;
> +extern AVCodec ff_vp9_qsv_decoder;
>  extern AVCodec ff_vp9_vaapi_encoder;
>  
> // The iterate API is not usable with ossfuzz due to the excessive size of binaries created
> diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
> index 711fd3df1e..7dcfb04316 100644
> --- a/libavcodec/qsv.c
> +++ b/libavcodec/qsv.c
> @@ -60,6 +60,11 @@ int ff_qsv_codec_id_to_mfx(enum AVCodecID codec_id)
>  #endif
>  case AV_CODEC_ID_MJPEG:
>  return MFX_CODEC_JPEG;
> +#if QSV_VERSION_ATLEAST(1, 19)
> +case AV_CODEC_ID_VP9:
> +return MFX_CODEC_VP9;
> +#endif
> +
>  default:
>  break;
>  }
> diff --git a/libavcodec/qsvdec_other.c b/libavcodec/qsvdec_other.c
> index 8c9c1e6b13..50bfc818b0 100644
> --- a/libavcodec/qsvdec_other.c
> +++ b/libavcodec/qsvdec_other.c
> @@ -1,5 +1,5 @@
>  /*
> - * Intel MediaSDK QSV based MPEG-2, VC-1, VP8 and MJPEG decoders
> + * Intel MediaSDK QSV based MPEG-2, VC-1, VP8, MJPEG and VP9 decoders
>   *
>   * copyright (c) 2015 Anton Khirnov
>   *
> @@ -60,8 +60,8 @@ static av_cold int qsv_decode_close(AVCodecContext *avctx)
>  {
>  QSVOtherContext *s = avctx->priv_data;
>  
> -#if CONFIG_VP8_QSV_DECODER
> -if (avctx->codec_id == AV_CODEC_ID_VP8)
> +#if CONFIG_VP8_QSV_DECODER || CONFIG_VP9_QSV_DECODER

Seems to be wrong since AV_CODEC_ID_VP8 is covered by
QSV_VERSION_ATLEAST(1, 12) and AV_CODEC_ID_VP9 by
QSV_VERSION_ATLEAST(1, 19). Thus, you may step into a situation where one
of the AV_CODEC_ID_* cases won't be declared...

> +if (avctx->codec_id == AV_CODEC_ID_VP8 || avctx->codec_id == AV_CODEC_ID_VP9)
>  av_freep(&s->qsv.load_plugins);
>  #endif
>  
> @@ -90,6 +90,17 @@ static av_cold int qsv_decode_init(AVCodecContext *avctx)
>  }
>  #endif
>  
> +#if CONFIG_VP9_QSV_DECODER
> +if (avctx->codec_id == AV_CODEC_ID_VP9) {
> +static const char *uid_vp9dec_hw = "a922394d8d87452f878c51f2fc9b4131";

Should not actually be needed (and I hope it will work :)). The VP9 hw
plugin is actually a tiny compatibility stub which redirects everything
to the mediasdk library. Considering that you are just adding VP9 decoding
support, you don't need to care about compatibility (I hope). Hence, you
can try to just initialize the VP9 decoder directly from the mediasdk
library, as you are doing for the AVC decoder.

> +
> +av_freep(&s->qsv.load_plugins);
> +s->qsv.load_plugins = av_strdup(uid_vp9dec_hw);
> +if (!s->qsv.load_plugins)
> +return AVERROR(ENOMEM);
> +}
> +#endif
> +
>  s->packet_fifo = av_fifo_alloc(sizeof(AVPacket));
>  if (!s->packet_fifo) {
>  ret = AVERROR(ENOMEM);
> @@ -281,3 +292,32 @@ AVCodec ff_mjpeg_qsv_decoder = {
>  AV_PIX_FMT_NONE
> },
>  };
>  #endif
> +
> +#if CONFIG_VP9_QSV_DECODER
> +static const AVClass vp9_qsv_class = {
> +.class_name = "vp9_qsv",
> +.item_name  = av_default_item_name,
> +.option = options,
> +.version= LIBAVUTIL_VERSION_INT,
> +};
> +
> +AVCodec ff_vp9_qsv_decoder = {
> +.name   = "vp9_qsv",
> +.long_name  = NULL_IF_CONFIG_SMALL("VP9 video (Intel Quick Sync Video acceleration)"),
> +.priv_data_size = sizeof(QSVOtherContext),
> +.type   = AVMEDIA_TYPE_VIDEO,
> +.id = AV_CODEC_ID_VP9,
> +.init  

Re: [FFmpeg-devel] [PATCH] MAINTAINERS: add myself for tonemap_opencl

2019-02-20 Thread Michael Niedermayer
On Tue, Feb 19, 2019 at 12:20:07AM +, Song, Ruiling wrote:
> > -Original Message-
> > From: Song, Ruiling
> > Sent: Wednesday, February 13, 2019 9:29 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Song, Ruiling 
> > Subject: [PATCH] MAINTAINERS: add myself for tonemap_opencl
> > 
> > Signed-off-by: Ruiling Song 
> > ---
> >  MAINTAINERS | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 7ac2d22..412a739 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -362,6 +362,7 @@ Filters:
> >vf_ssim.c Paul B Mahol
> >vf_stereo3d.c Paul B Mahol
> >vf_telecine.c Paul B Mahol
> > +  vf_tonemap_opencl.c   Ruiling Song
> >vf_yadif.cMichael Niedermayer
> >vf_zoompan.c  Paul B Mahol
> Ping?

will apply

thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Everything should be made as simple as possible, but not simpler.
-- Albert Einstein




Re: [FFmpeg-devel] [PATCH v2 5/6] lavc/qsvdec: Add mjpeg decoder support

2019-02-20 Thread Rogozhkin, Dmitry V
On Wed, 2019-02-20 at 10:58 +0800, Zhong Li wrote:
> Signed-off-by: Zhong Li 
> ---
>  Changelog |  1 +
>  configure |  1 +
>  libavcodec/Makefile   |  1 +
>  libavcodec/allcodecs.c|  1 +
>  libavcodec/qsvdec_other.c | 28 +++-
>  5 files changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/Changelog b/Changelog
> index 4d80e5b54f..f289812bfc 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -19,6 +19,7 @@ version :
>  - ARBC decoder
>  - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
>  - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
> +- Intel QSV-accelerated MJPEG decoding
>  
>  
>  version 4.1:
> diff --git a/configure b/configure
> index eaa56c07cf..de994673a0 100755
> --- a/configure
> +++ b/configure
> @@ -2997,6 +2997,7 @@ hevc_v4l2m2m_decoder_deps="v4l2_m2m hevc_v4l2_m2m"
>  hevc_v4l2m2m_decoder_select="hevc_mp4toannexb_bsf"
>  hevc_v4l2m2m_encoder_deps="v4l2_m2m hevc_v4l2_m2m"
>  mjpeg_cuvid_decoder_deps="cuvid"
> +mjpeg_qsv_decoder_select="qsvdec"
>  mjpeg_qsv_encoder_deps="libmfx"
>  mjpeg_qsv_encoder_select="qsvenc"
>  mjpeg_vaapi_encoder_deps="VAEncPictureParameterBufferJPEG"
> diff --git a/libavcodec/Makefile b/libavcodec/Makefile
> index 15c43a8a6a..fed4a13fe5 100644
> --- a/libavcodec/Makefile
> +++ b/libavcodec/Makefile
> @@ -423,6 +423,7 @@ OBJS-$(CONFIG_METASOUND_DECODER)   += metasound.o metasound_data.o \
>  OBJS-$(CONFIG_MICRODVD_DECODER)+= microdvddec.o ass.o
>  OBJS-$(CONFIG_MIMIC_DECODER)   += mimic.o
>  OBJS-$(CONFIG_MJPEG_DECODER)   += mjpegdec.o
> +OBJS-$(CONFIG_MJPEG_QSV_DECODER)   += qsvdec_other.o
>  OBJS-$(CONFIG_MJPEG_ENCODER)   += mjpegenc.o mjpegenc_common.o \
>    mjpegenc_huffman.o
>  OBJS-$(CONFIG_MJPEGB_DECODER)  += mjpegbdec.o
> diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
> index b26aeca239..391619c38c 100644
> --- a/libavcodec/allcodecs.c
> +++ b/libavcodec/allcodecs.c
> @@ -759,6 +759,7 @@ extern AVCodec ff_hevc_videotoolbox_encoder;
>  extern AVCodec ff_libkvazaar_encoder;
>  extern AVCodec ff_mjpeg_cuvid_decoder;
>  extern AVCodec ff_mjpeg_qsv_encoder;
> +extern AVCodec ff_mjpeg_qsv_decoder;
>  extern AVCodec ff_mjpeg_vaapi_encoder;
>  extern AVCodec ff_mpeg1_cuvid_decoder;
>  extern AVCodec ff_mpeg2_cuvid_decoder;
> diff --git a/libavcodec/qsvdec_other.c b/libavcodec/qsvdec_other.c
> index 03251d2c85..8c9c1e6b13 100644
> --- a/libavcodec/qsvdec_other.c
> +++ b/libavcodec/qsvdec_other.c
> @@ -1,5 +1,5 @@
>  /*
> - * Intel MediaSDK QSV based MPEG-2, VC-1 and VP8 decoders
> + * Intel MediaSDK QSV based MPEG-2, VC-1, VP8 and MJPEG decoders
>   *
>   * copyright (c) 2015 Anton Khirnov
>   *
> @@ -255,3 +255,29 @@ AVCodec ff_vp8_qsv_decoder = {
>  .wrapper_name   = "qsv",
>  };
>  #endif
> +
> +#if CONFIG_MJPEG_QSV_DECODER
> +static const AVClass mjpeg_qsv_class = {
> +.class_name = "mjpeg_qsv",
> +.item_name  = av_default_item_name,
> +.option = options,
> +.version= LIBAVUTIL_VERSION_INT,
> +};
> +
> +AVCodec ff_mjpeg_qsv_decoder = {
> +.name   = "mjpeg_qsv",
> +.long_name  = NULL_IF_CONFIG_SMALL("MJPEG video (Intel Quick Sync Video acceleration)"),
> +.priv_data_size = sizeof(QSVOtherContext),
> +.type   = AVMEDIA_TYPE_VIDEO,
> +.id = AV_CODEC_ID_MJPEG,
> +.init   = qsv_decode_init,
> +.decode = qsv_decode_frame,
> +.flush  = qsv_decode_flush,
> +.close  = qsv_decode_close,
> +.capabilities   = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_DR1 | AV_CODEC_CAP_AVOID_PROBING,
> +.priv_class = &mjpeg_qsv_class,
> +.pix_fmts   = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12,
> +AV_PIX_FMT_QSV,
> +AV_PIX_FMT_NONE 

Hm. If each codec declares its list of formats here, why do we have
some pix_fmts array declaration in one of the decoding functions? See
the discussion for patch #3 in this series.

> },
> +};
> +#endif


Re: [FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()

2019-02-20 Thread Rogozhkin, Dmitry V
On Wed, 2019-02-20 at 10:58 +0800, Zhong Li wrote:
> Using MSDK parser can improve qsv decoder pass rate in some cases
> (E.g:
> sps declares a wrong level_idc, smaller than it should be).
> And it is necessary for adding new qsv decoders such as MJPEG and VP9
> since current parser can't provide enough information.
> Actually using MFXVideoDECODE_DecodeHeader() was disscussed at
> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-July/175734.html and
> merged as commit 1acb19d,
> but was overwritten when merged libav patches (commit: 1f26a23)
> without any explain.
> 
> v2: split decode header from decode_init, and call it for everyframe
> to
> detect format/resoultion change. It can fix some regression issues
> such
> as hevc 10bits decoding.
> 
> Signed-off-by: Zhong Li 
> ---
>  libavcodec/qsvdec.c | 172 ++--
> 
>  libavcodec/qsvdec.h |   2 +
>  2 files changed, 90 insertions(+), 84 deletions(-)
> 
> diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
> index 4a0be811fb..efe054f5c5 100644
> --- a/libavcodec/qsvdec.c
> +++ b/libavcodec/qsvdec.c
> @@ -120,19 +120,18 @@ static inline unsigned int qsv_fifo_size(const AVFifoBuffer* fifo)
>  return av_fifo_size(fifo) / qsv_fifo_item_size();
>  }
>  
> -static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
> +static int qsv_decode_preinit(AVCodecContext *avctx, QSVContext *q, enum AVPixelFormat *pix_fmts, mfxVideoParam *param)
>  {
> -const AVPixFmtDescriptor *desc;
>  mfxSession session = NULL;
>  int iopattern = 0;
> -mfxVideoParam param = { 0 };
> -int frame_width  = avctx->coded_width;
> -int frame_height = avctx->coded_height;
>  int ret;
>  
> -desc = av_pix_fmt_desc_get(avctx->sw_pix_fmt);
> -if (!desc)
> -return AVERROR_BUG;
> +ret = ff_get_format(avctx, pix_fmts);
> +if (ret < 0) {
> +q->orig_pix_fmt = avctx->pix_fmt = AV_PIX_FMT_NONE;
> +return ret;
> +} else
> +q->orig_pix_fmt = pix_fmts[1];

You rely on the way pix_fmts is declared in some other place.
This makes the code harder for beginners to understand and hence
harder to maintain. Can we somehow show what is actually going
on here? For example, declare pix_fmts something like:
enum qsv_formats {
  QSV,
  QSV_NV12,
  QSV_MAX
};

enum AVPixelFormat pix_fmts[QSV_MAX+1] = {
  [QSV] = AV_PIX_FMT_QSV,
  [QSV_NV12] = AV_PIX_FMT_NV12,
  [QSV_MAX] = AV_PIX_FMT_NONE
};

After that we could address it like:
q->orig_pix_fmt = pix_fmts[QSV_NV12];
both highlighting what we actually want to achieve and making sure that
we will make fewer mistakes maintaining the list of formats should we
reorder something.

>  
>  if (!q->async_fifo) {
>  q->async_fifo = av_fifo_alloc(q->async_depth * qsv_fifo_item_size());
> @@ -170,48 +169,72 @@ static int qsv_decode_init(AVCodecContext
> *avctx, QSVContext *q)
>  return ret;
>  }
>  
> -ret = ff_qsv_codec_id_to_mfx(avctx->codec_id);
> -if (ret < 0)
> -return ret;
> +param->IOPattern   = q->iopattern;
> +param->AsyncDepth  = q->async_depth;
> +param->ExtParam= q->ext_buffers;
> +param->NumExtParam = q->nb_ext_buffers;
>  
> -param.mfx.CodecId  = ret;
> -param.mfx.CodecProfile = ff_qsv_profile_to_mfx(avctx->codec_id, avctx->profile);
> -param.mfx.CodecLevel   = avctx->level == FF_LEVEL_UNKNOWN ? MFX_LEVEL_UNKNOWN : avctx->level;
> -
> -param.mfx.FrameInfo.BitDepthLuma   = desc->comp[0].depth;
> -param.mfx.FrameInfo.BitDepthChroma = desc->comp[0].depth;
> -param.mfx.FrameInfo.Shift  = desc->comp[0].depth > 8;
> -param.mfx.FrameInfo.FourCC = q->fourcc;
> -param.mfx.FrameInfo.Width  = frame_width;
> -param.mfx.FrameInfo.Height = frame_height;
> -param.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420;
> -
> -switch (avctx->field_order) {
> -case AV_FIELD_PROGRESSIVE:
> -param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
> -break;
> -case AV_FIELD_TT:
> -param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_TFF;
> -break;
> -case AV_FIELD_BB:
> -param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_BFF;
> -break;
> -default:
> -param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_UNKNOWN;
> -break;
> -}
> +return 0;
> + }
> +
> +static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q,
> mfxVideoParam *param)
> +{
> +int ret;
>  
> -param.IOPattern   = q->iopattern;
> -param.AsyncDepth  = q->async_depth;
> -param.ExtParam= q->ext_buffers;
> -param.NumExtParam = q->nb_ext_buffers;
> +avctx->width= param->mfx.FrameInfo.CropW;
> +avctx->height   = param->mfx.FrameInfo.CropH;
> +avctx->coded_width  = param->mfx.FrameInfo.Width;
> +avctx->coded_height = param->mfx.FrameInfo.Height;
> +avctx->level= param->mfx.CodecLevel;
> +   

[FFmpeg-devel] [PATCH] avutil/mem: Mark DECLARE_ASM_ALIGNED as visibility("hidden") for __GNUC__

2019-02-20 Thread Fāng-ruì Sòng
Inline asm code assumes these DECLARE_ASM_ALIGNED declared global
constants are non-preemptive, e.g.

libavcodec/x86/cabac.h
"lea"MANGLE(ff_h264_cabac_tables)", %0  \n\t"

On ELF platforms, if -Wl,-Bsymbolic
-Wl,--version-script,libavcodec/libavcodec.ver are removed from the
linker command line, the symbol will be considered preemptive and fail
to link to a DSO:

ld.lld: error: relocation R_X86_64_PC32 cannot be used against
symbol ff_h264_cabac_tables; recompile with -fPIC

It is better to express the intention explicitly and mark such global
constants hidden (non-preemptive). It also improves portability as no
linker magic is required.

DECLARE_ASM_CONST uses the "static" specifier to indicate internal
linkage. The visibility annotation is unnecessary.

Also remove __clang__, since clang pretends to be GCC 4.2 and defines __GNUC__.

Signed-off-by: Fangrui Song 
---
 libavutil/mem.h | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/libavutil/mem.h b/libavutil/mem.h
index 5fb1a02dd9..47abe2c8e9 100644
--- a/libavutil/mem.h
+++ b/libavutil/mem.h
@@ -100,6 +100,12 @@
  * @param v Name of the variable
  */

+#if defined(__GNUC__) && !(defined(_WIN32) || defined(__CYGWIN__))
+#define DECLARE_HIDDEN __attribute__ ((visibility ("hidden")))
+#else
+#define DECLARE_HIDDEN
+#endif
+
 #if defined(__INTEL_COMPILER) && __INTEL_COMPILER < 1110 || defined(__SUNPRO_C)
 #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
 #define DECLARE_ASM_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
@@ -108,9 +114,9 @@
 #define DECLARE_ALIGNED(n,t,v)      t __attribute__ ((aligned (FFMIN(n, 16)))) v
 #define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v
 #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (FFMIN(n, 16)))) v
-#elif defined(__GNUC__) || defined(__clang__)
+#elif defined(__GNUC__)
 #define DECLARE_ALIGNED(n,t,v)  t __attribute__ ((aligned (n))) v
-#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) v
+#define DECLARE_ASM_ALIGNED(n,t,v)  t av_used __attribute__ ((aligned (n))) DECLARE_HIDDEN v
 #define DECLARE_ASM_CONST(n,t,v)    static const t av_used __attribute__ ((aligned (n))) v
 #elif defined(_MSC_VER)
 #define DECLARE_ALIGNED(n,t,v)  __declspec(align(n)) t v
-- 
2.20.1


-- 
宋方睿
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] avformat/mov: fix hang while seek on a kind of fragmented mp4.

2019-02-20 Thread Charles Liu
1. Organize fragment information according to the tracks.
2. Do NOT skip the last boxes of fragment info.

ticket #7572

Signed-off-by: Charles Liu 
---
 libavformat/isom.h |  10 +-
 libavformat/mov.c  | 375 ++---
 2 files changed, 185 insertions(+), 200 deletions(-)

diff --git a/libavformat/isom.h b/libavformat/isom.h
index 69452cae8e..2b06f5fa58 100644
--- a/libavformat/isom.h
+++ b/libavformat/isom.h
@@ -125,7 +125,7 @@ typedef struct MOVEncryptionIndex {
 } MOVEncryptionIndex;
 
 typedef struct MOVFragmentStreamInfo {
-int id;
+unsigned stsd_id;
 int64_t sidx_pts;
 int64_t first_tfra_pts;
 int64_t tfdt_dts;
@@ -136,14 +136,13 @@ typedef struct MOVFragmentStreamInfo {
 typedef struct MOVFragmentIndexItem {
 int64_t moof_offset;
 int headers_read;
-int current;
-int nb_stream_info;
-MOVFragmentStreamInfo * stream_info;
+MOVFragmentStreamInfo stream_info;
 } MOVFragmentIndexItem;
 
 typedef struct MOVFragmentIndex {
 int allocated_size;
 int complete;
+int id;  // track id
 int current;
 int nb_items;
 MOVFragmentIndexItem * item;
@@ -274,7 +273,8 @@ typedef struct MOVContext {
 int moov_retry;
 int use_mfra_for;
 int has_looked_for_mfra;
-MOVFragmentIndex frag_index;
+unsigned nb_frag_indices;
+MOVFragmentIndex **frag_indices;
 int atom_depth;
 unsigned int aax_mode;  ///< 'aax' file has been detected
 uint8_t file_key[20];
diff --git a/libavformat/mov.c b/libavformat/mov.c
index bbd588c705..cd37d08299 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -1153,59 +1153,29 @@ static int mov_read_moov(MOVContext *c, AVIOContext 
*pb, MOVAtom atom)
 return 0; /* now go for mdat */
 }
 
-static MOVFragmentStreamInfo * get_frag_stream_info(
-MOVFragmentIndex *frag_index,
-int index,
-int id)
+static MOVFragmentIndex *mov_find_frag_index(MOVFragmentIndex **frag_indices, 
int nb_frag_indices, int track_id)   // name?
 {
-int i;
-MOVFragmentIndexItem * item;
+unsigned i;
+MOVFragmentIndex *frag_index = NULL;
 
-if (index < 0 || index >= frag_index->nb_items)
-return NULL;
-item = &frag_index->item[index];
-for (i = 0; i < item->nb_stream_info; i++)
-if (item->stream_info[i].id == id)
-return &item->stream_info[i];
+for (i = 0; i < nb_frag_indices; i++)
+if (frag_indices[i]->id == track_id)
+frag_index = frag_indices[i];
 
-// This shouldn't happen
-return NULL;
+return frag_index;
 }
 
-static void set_frag_stream(MOVFragmentIndex *frag_index, int id)
+static MOVFragmentStreamInfo *get_current_frag_stream_info(MOVContext *c, int 
id)
 {
-int i;
-MOVFragmentIndexItem * item;
+MOVFragmentIndex *frag_index = NULL;
 
-if (frag_index->current < 0 ||
-frag_index->current >= frag_index->nb_items)
-return;
-
-item = &frag_index->item[frag_index->current];
-for (i = 0; i < item->nb_stream_info; i++)
-if (item->stream_info[i].id == id) {
-item->current = i;
-return;
-}
-
-// id not found.  This shouldn't happen.
-item->current = -1;
-}
-
-static MOVFragmentStreamInfo * get_current_frag_stream_info(
-MOVFragmentIndex *frag_index)
-{
-MOVFragmentIndexItem *item;
-if (frag_index->current < 0 ||
+frag_index = mov_find_frag_index(c->frag_indices, c->nb_frag_indices, id);
+if (!frag_index ||
+frag_index->current < 0 ||
 frag_index->current >= frag_index->nb_items)
 return NULL;
 
-item = &frag_index->item[frag_index->current];
-if (item->current >= 0 && item->current < item->nb_stream_info)
-return &item->stream_info[item->current];
-
-// This shouldn't happen
-return NULL;
+return &frag_index->item[frag_index->current].stream_info;
 }
 
 static int search_frag_moof_offset(MOVFragmentIndex *frag_index, int64_t 
offset)
@@ -1232,9 +1202,10 @@ static int search_frag_moof_offset(MOVFragmentIndex 
*frag_index, int64_t offset)
 return b;
 }
 
-static int64_t get_stream_info_time(MOVFragmentStreamInfo * frag_stream_info)
+static int64_t get_frag_time(MOVFragmentIndex *frag_index,
+ int index, int track_id)
 {
-av_assert0(frag_stream_info);
+MOVFragmentStreamInfo *frag_stream_info = &frag_index->item[index].stream_info;
 if (frag_stream_info->sidx_pts != AV_NOPTS_VALUE)
 return frag_stream_info->sidx_pts;
 if (frag_stream_info->first_tfra_pts != AV_NOPTS_VALUE)
@@ -1242,31 +1213,10 @@ static int64_t 
get_stream_info_time(MOVFragmentStreamInfo * frag_stream_info)
 return frag_stream_info->tfdt_dts;
 }
 
-static int64_t get_frag_time(MOVFragmentIndex *frag_index,
- int index, int track_id)
-{
-MOVFragmentStreamInfo * frag_stream_info;
-int64_t timestamp;
-int i;
-
-if (track_id >= 0) {
-frag_stream_info = get_frag_stream_info(frag_index, index, 

[FFmpeg-devel] [PATCH] http: Do not try to make a new request when seeking past the end of the file

2019-02-20 Thread Vittorio Giovara
From: Justin Ruggles 

This avoids making invalid HTTP Range requests for a byte range past the
known end of the file during a seek. Those requests generally return an HTTP
416 Range Not Satisfiable response, which causes an error.

Reference: https://tools.ietf.org/html/rfc7233

Signed-off-by: Vittorio Giovara 
---
 libavformat/http.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/libavformat/http.c b/libavformat/http.c
index a0a0636cf2..1e40268599 100644
--- a/libavformat/http.c
+++ b/libavformat/http.c
@@ -1691,6 +1691,13 @@ static int64_t http_seek_internal(URLContext *h, int64_t off, int whence, int force_reconnect)
 if (s->off && h->is_streamed)
 return AVERROR(ENOSYS);
 
+/* do not try to make a new connection if seeking past the end of the file */
+if (s->end_off || s->filesize != UINT64_MAX) {
+uint64_t end_pos = s->end_off ? s->end_off : s->filesize;
+if (s->off >= end_pos)
+return s->off;
+}
+
 /* we save the old context in case the seek fails */
 old_buf_size = s->buf_end - s->buf_ptr;
 memcpy(old_buf, s->buf_ptr, old_buf_size);
-- 
2.20.1



Re: [FFmpeg-devel] GSoC mentored project: derain filter

2019-02-20 Thread Liu Steven


> 在 2019年2月20日,下午6:35,孟学苇  写道:
> 
> Hi Dev-Community,
> 
> 
> 
> 
> I am Iris Meng from China. I’m a PhD student at the Institute of Digital Media, 
> Peking University. I wish to contribute as a GSoC applicant this year.
> 
> I am interested in Deep Learning. I want to add a derain filter in ffmpeg. If 
> you have any suggestion or question, we can contact by email. My motivation 
> and plans are as follows.
> 
> 
> 
> 
>   Motivation
> 
> Rain and fog are very common weather conditions in real life; however, they can 
> reduce visibility. Especially in heavy rain, rain streaks from various 
> directions accumulate and make the background scene misty, which will 
> seriously influence the accuracy of many computer vision systems, including 
> video surveillance, object detection and tracking in autonomous driving, etc. 
> Therefore, it is an important task to remove the rain and fog, and recover 
> the background from rain images. It can be used for image and video 
> processing to make them clearer and it can be a preprocessing method for many 
> computer vision systems.
> 
> 
> 
> 
>   Proposed Idea
> 
> We propose to implement this technology in ffmpeg. For video [1][2], we can 
> utilize the relationship between frames to remove rain and fog. For single 
> image [3], we can use traditional methods, such as discriminative sparse 
> coding, low rank representation and the Gaussian mixture model. We can also 
> use some deep learning methods. We should investigate these methods, and 
> ultimately consider the effect of rain/fog removal and the complexity of the 
> algorithm, and choose the optimal scheme.
> 
> 
> 
> 
>   Practical application
> 
> The derain and dehaze method can improve the subjective quality of videos and 
> images.
> 
> 
> 
> 
>  Development plan
> 
> I would like to start working on my qualification task and try to solve my 
> problems. Overall, I will follow the following steps to complete the project.
> 
> (1)Literature and algorithm investigation
> 
> (2)Data sets preparation
> 
> (3)Coding: Implement network, training code, inference code and so on
> 
> (4) Select the best method and transplant it into ffmpeg
> 
> 
> 
> 
>  Reference
> 
>  [1] Zhang X, Li H, Qi Y, et al. Rain removal in video by combining 
> temporal and chromatic properties[C]//2006 IEEE International Conference on 
> Multimedia and Expo. IEEE, 2006: 461-464.
> 
>  [2] Tripathi A K, Mukhopadhyay S. Removal of rain from videos: a 
> review[J]. Signal, Image and Video Processing, 2014, 8(8): 1421-1430.
> 
>  [3] Li X, Wu J, Lin Z, et al. Recurrent squeeze-and-excitation context 
> aggregation net for single image deraining[C]//Proceedings of the European 
> Conference on Computer Vision (ECCV). 2018: 254-269.
> 
> 
> 
I think this can reference libavfilter/sr.c for the implementation; maybe you can 
try two ways to implement it, one native and the other model-based.


Thanks
Steven

> 
> 
> 
> 
> Thanks,
> 
> Regards,
> 
> Iris Meng





[FFmpeg-devel] GSoC mentored project: derain filter

2019-02-20 Thread 孟学苇
Hi Dev-Community,




I am Iris Meng from China. I’m a PhD student at the Institute of Digital Media, 
Peking University. I wish to contribute as a GSoC applicant this year.

I am interested in Deep Learning. I want to add a derain filter in ffmpeg. If 
you have any suggestion or question, we can contact by email. My motivation and 
plans are as follows.




   Motivation

Rain and fog are very common weather conditions in real life; however, they can 
reduce visibility. Especially in heavy rain, rain streaks from various directions 
accumulate and make the background scene misty, which will seriously influence 
the accuracy of many computer vision systems, including video surveillance, 
object detection and tracking in autonomous driving, etc. Therefore, it is an 
important task to remove the rain and fog, and recover the background from rain 
images. It can be used for image and video processing to make them clearer and 
it can be a preprocessing method for many computer vision systems.




   Proposed Idea

We propose to implement this technology in ffmpeg. For video [1][2], we can 
utilize the relationship between frames to remove rain and fog. For single 
image [3], we can use traditional methods, such as discriminative sparse 
coding, low rank representation and the Gaussian mixture model. We can also use 
some deep learning methods. We should investigate these methods, and ultimately 
consider the effect of rain/fog removal and the complexity of the algorithm, 
and choose the optimal scheme.




   Practical application

The derain and dehaze method can improve the subjective quality of videos and 
images.




  Development plan

I would like to start working on my qualification task and try to solve my 
problems. Overall, I will follow the following steps to complete the project.

(1)Literature and algorithm investigation

(2)Data sets preparation

(3)Coding: Implement network, training code, inference code and so on

(4) Select the best method and transplant it into ffmpeg




  Reference

  [1] Zhang X, Li H, Qi Y, et al. Rain removal in video by combining 
temporal and chromatic properties[C]//2006 IEEE International Conference on 
Multimedia and Expo. IEEE, 2006: 461-464.

  [2] Tripathi A K, Mukhopadhyay S. Removal of rain from videos: a 
review[J]. Signal, Image and Video Processing, 2014, 8(8): 1421-1430.

  [3] Li X, Wu J, Lin Z, et al. Recurrent squeeze-and-excitation context 
aggregation net for single image deraining[C]//Proceedings of the European 
Conference on Computer Vision (ECCV). 2018: 254-269.







Thanks,

Regards,

Iris Meng


Re: [FFmpeg-devel] [PATCH v5] Improved the performance of 1 decode + N filter graphs and adaptive bitrate.

2019-02-20 Thread Wang, Shaofei
> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> Mark Thompson
> Sent: Saturday, February 16, 2019 8:12 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v5] Improved the performance of 1
> decode + N filter graphs and adaptive bitrate.
> On 15/02/2019 21:54, Shaofei Wang wrote:
> > It enabled multiple filter graph concurrency, which bring above about
> > 4%~20% improvement in some 1:N scenarios by CPU or GPU acceleration
> >
> > Below are some test cases and comparison as reference.
> > (Hardware platform: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz)
> > (Software: Intel iHD driver - 16.9.00100, CentOS 7)
> >
> > For 1:N transcode by GPU acceleration with vaapi:
> > ./ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
> > -hwaccel_output_format vaapi \
> > -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
> > -vf "scale_vaapi=1280:720" -c:v h264_vaapi -f null /dev/null \
> > -vf "scale_vaapi=720:480" -c:v h264_vaapi -f null /dev/null
> >
> > test results:
> > 2 encoders 5 encoders 10 encoders
> > Improved   6.1%6.9%   5.5%
> >
> > For 1:N transcode by GPU acceleration with QSV:
> > ./ffmpeg -hwaccel qsv -c:v h264_qsv \
> > -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
> > -vf "scale_qsv=1280:720:format=nv12" -c:v h264_qsv -f null /dev/null
> \
> > -vf "scale_qsv=720:480:format=nv12" -c:v h264_qsv -f null
> > /dev/null
> >
> > test results:
> > 2 encoders  5 encoders 10 encoders
> > Improved   6%   4% 15%
> >
> > For Intel GPU acceleration case, 1 decode to N scaling, by QSV:
> > ./ffmpeg -hwaccel qsv -c:v h264_qsv \
> > -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
> > -vf "scale_qsv=1280:720:format=nv12,hwdownload" -pix_fmt nv12 -f
> null /dev/null \
> > -vf "scale_qsv=720:480:format=nv12,hwdownload" -pix_fmt nv12 -f
> > null /dev/null
> >
> > test results:
> > 2 scale  5 scale   10 scale
> > Improved   12% 21%21%
> >
> > For CPU only 1 decode to N scaling:
> > ./ffmpeg -i ~/Videos/1920x1080p_30.00_x264_qp28.h264 \
> > -vf "scale=1280:720" -pix_fmt nv12 -f null /dev/null \
> > -vf "scale=720:480" -pix_fmt nv12 -f null /dev/null
> >
> > test results:
> > 2 scale  5 scale   10 scale
> > Improved   25%107%   148%
> >
> > Signed-off-by: Wang, Shaofei 
> > Reviewed-by: Zhao, Jun 
> > ---
> >  fftools/ffmpeg.c| 121
> 
> >  fftools/ffmpeg.h|  14 ++
> >  fftools/ffmpeg_filter.c |   1 +
> >  3 files changed, 128 insertions(+), 8 deletions(-)
> 
> On a bit more review, I don't think this patch works at all.
> 
It has been tested and verified with many cases; more FATE cases still need to 
be covered.

> The existing code is all written to be run serially.  This simplistic 
> approach to
> parallelising it falls down because many of those functions use variables
> written in what were previously other functions called at different times but
> have now become other threads, introducing undefined behaviour due to
> data races.
>
Actually, this patch does not try to parallelize everything in ffmpeg. It only 
threads the input filter of each filter graph (it is intended for simple filter 
graphs), which is a simple way to improve 1:N filter graph performance without 
introducing huge modifications. There are still many serial function calls; the 
differences are that each filter graph initializes its own output streams 
instead of all of them being initialized together, and each filter graph reaps 
filters for its own filter chain.

> To consider a single example (not the only one), the function
> check_init_output_file() does not work at all after this change.  The test for
> OutputStream initialisation (so that you run exactly once after all of the
> output streams are ready) races with other threads setting those variables.
> Since that's undefined behaviour you may get lucky sometimes and have the
> output file initialisation run exactly once, but in general it will fail in 
> unknown
> ways.
> 

check_init_output_file() should only be responsible for the output file 
associated with the specific output stream managed by each thread chain; that 
means that even when it is called from different threads, the data being set is 
different. Let me double check.

> If you want to resubmit this patch, you will need to refactor a lot of the 
> other
> code in ffmpeg.c to rule out these undefined cases.
> 
OK. This patch should only affect SIMPLE filter graphs.

> - Mark


[FFmpeg-devel] [PATCH v2 5/6] lavc/qsvdec: Add mjpeg decoder support

2019-02-20 Thread Zhong Li
Signed-off-by: Zhong Li 
---
 Changelog |  1 +
 configure |  1 +
 libavcodec/Makefile   |  1 +
 libavcodec/allcodecs.c|  1 +
 libavcodec/qsvdec_other.c | 28 +++-
 5 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/Changelog b/Changelog
index 4d80e5b54f..f289812bfc 100644
--- a/Changelog
+++ b/Changelog
@@ -19,6 +19,7 @@ version <next>:
 - ARBC decoder
 - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
 - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
+- Intel QSV-accelerated MJPEG decoding
 
 
 version 4.1:
diff --git a/configure b/configure
index eaa56c07cf..de994673a0 100755
--- a/configure
+++ b/configure
@@ -2997,6 +2997,7 @@ hevc_v4l2m2m_decoder_deps="v4l2_m2m hevc_v4l2_m2m"
 hevc_v4l2m2m_decoder_select="hevc_mp4toannexb_bsf"
 hevc_v4l2m2m_encoder_deps="v4l2_m2m hevc_v4l2_m2m"
 mjpeg_cuvid_decoder_deps="cuvid"
+mjpeg_qsv_decoder_select="qsvdec"
 mjpeg_qsv_encoder_deps="libmfx"
 mjpeg_qsv_encoder_select="qsvenc"
 mjpeg_vaapi_encoder_deps="VAEncPictureParameterBufferJPEG"
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 15c43a8a6a..fed4a13fe5 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -423,6 +423,7 @@ OBJS-$(CONFIG_METASOUND_DECODER)   += metasound.o 
metasound_data.o \
 OBJS-$(CONFIG_MICRODVD_DECODER)+= microdvddec.o ass.o
 OBJS-$(CONFIG_MIMIC_DECODER)   += mimic.o
 OBJS-$(CONFIG_MJPEG_DECODER)   += mjpegdec.o
+OBJS-$(CONFIG_MJPEG_QSV_DECODER)   += qsvdec_other.o
 OBJS-$(CONFIG_MJPEG_ENCODER)   += mjpegenc.o mjpegenc_common.o \
   mjpegenc_huffman.o
 OBJS-$(CONFIG_MJPEGB_DECODER)  += mjpegbdec.o
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index b26aeca239..391619c38c 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -759,6 +759,7 @@ extern AVCodec ff_hevc_videotoolbox_encoder;
 extern AVCodec ff_libkvazaar_encoder;
 extern AVCodec ff_mjpeg_cuvid_decoder;
 extern AVCodec ff_mjpeg_qsv_encoder;
+extern AVCodec ff_mjpeg_qsv_decoder;
 extern AVCodec ff_mjpeg_vaapi_encoder;
 extern AVCodec ff_mpeg1_cuvid_decoder;
 extern AVCodec ff_mpeg2_cuvid_decoder;
diff --git a/libavcodec/qsvdec_other.c b/libavcodec/qsvdec_other.c
index 03251d2c85..8c9c1e6b13 100644
--- a/libavcodec/qsvdec_other.c
+++ b/libavcodec/qsvdec_other.c
@@ -1,5 +1,5 @@
 /*
- * Intel MediaSDK QSV based MPEG-2, VC-1 and VP8 decoders
+ * Intel MediaSDK QSV based MPEG-2, VC-1, VP8 and MJPEG decoders
  *
  * copyright (c) 2015 Anton Khirnov
  *
@@ -255,3 +255,29 @@ AVCodec ff_vp8_qsv_decoder = {
 .wrapper_name   = "qsv",
 };
 #endif
+
+#if CONFIG_MJPEG_QSV_DECODER
+static const AVClass mjpeg_qsv_class = {
+.class_name = "mjpeg_qsv",
+.item_name  = av_default_item_name,
+.option = options,
+.version= LIBAVUTIL_VERSION_INT,
+};
+
+AVCodec ff_mjpeg_qsv_decoder = {
+.name   = "mjpeg_qsv",
+.long_name  = NULL_IF_CONFIG_SMALL("MJPEG video (Intel Quick Sync 
Video acceleration)"),
+.priv_data_size = sizeof(QSVOtherContext),
+.type   = AVMEDIA_TYPE_VIDEO,
+.id = AV_CODEC_ID_MJPEG,
+.init   = qsv_decode_init,
+.decode = qsv_decode_frame,
+.flush  = qsv_decode_flush,
+.close  = qsv_decode_close,
+.capabilities   = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_DR1 | 
AV_CODEC_CAP_AVOID_PROBING,
+.priv_class = &mjpeg_qsv_class,
+.pix_fmts   = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12,
+AV_PIX_FMT_QSV,
+AV_PIX_FMT_NONE },
+};
+#endif
-- 
2.17.1



[FFmpeg-devel] [PATCH v2 4/6] lavc/qsvdec: remove orignal parser code since not needed now

2019-02-20 Thread Zhong Li
Signed-off-by: Zhong Li 
---
 configure   | 10 +-
 libavcodec/qsvdec.c | 16 +---
 libavcodec/qsvdec.h |  2 --
 3 files changed, 6 insertions(+), 22 deletions(-)

diff --git a/configure b/configure
index bf40c1dcb9..eaa56c07cf 100755
--- a/configure
+++ b/configure
@@ -2973,7 +2973,7 @@ h264_mediacodec_decoder_select="h264_mp4toannexb_bsf 
h264_parser"
 h264_mmal_decoder_deps="mmal"
 h264_nvenc_encoder_deps="nvenc"
 h264_omx_encoder_deps="omx"
-h264_qsv_decoder_select="h264_mp4toannexb_bsf h264_parser qsvdec"
+h264_qsv_decoder_select="h264_mp4toannexb_bsf qsvdec"
 h264_qsv_encoder_select="qsvenc"
 h264_rkmpp_decoder_deps="rkmpp"
 h264_rkmpp_decoder_select="h264_mp4toannexb_bsf"
@@ -2987,7 +2987,7 @@ hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf"
 hevc_mediacodec_decoder_deps="mediacodec"
 hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser"
 hevc_nvenc_encoder_deps="nvenc"
-hevc_qsv_decoder_select="hevc_mp4toannexb_bsf hevc_parser qsvdec"
+hevc_qsv_decoder_select="hevc_mp4toannexb_bsf qsvdec"
 hevc_qsv_encoder_select="hevcparse qsvenc"
 hevc_rkmpp_decoder_deps="rkmpp"
 hevc_rkmpp_decoder_select="hevc_mp4toannexb_bsf"
@@ -3007,7 +3007,7 @@ mpeg2_crystalhd_decoder_select="crystalhd"
 mpeg2_cuvid_decoder_deps="cuvid"
 mpeg2_mmal_decoder_deps="mmal"
 mpeg2_mediacodec_decoder_deps="mediacodec"
-mpeg2_qsv_decoder_select="qsvdec mpegvideo_parser"
+mpeg2_qsv_decoder_select="qsvdec"
 mpeg2_qsv_encoder_select="qsvenc"
 mpeg2_vaapi_encoder_select="cbs_mpeg2 vaapi_encode"
 mpeg2_v4l2m2m_decoder_deps="v4l2_m2m mpeg2_v4l2_m2m"
@@ -3024,11 +3024,11 @@ nvenc_hevc_encoder_select="hevc_nvenc_encoder"
 vc1_crystalhd_decoder_select="crystalhd"
 vc1_cuvid_decoder_deps="cuvid"
 vc1_mmal_decoder_deps="mmal"
-vc1_qsv_decoder_select="qsvdec vc1_parser"
+vc1_qsv_decoder_select="qsvdec"
 vc1_v4l2m2m_decoder_deps="v4l2_m2m vc1_v4l2_m2m"
 vp8_cuvid_decoder_deps="cuvid"
 vp8_mediacodec_decoder_deps="mediacodec"
-vp8_qsv_decoder_select="qsvdec vp8_parser"
+vp8_qsv_decoder_select="qsvdec"
 vp8_rkmpp_decoder_deps="rkmpp"
 vp8_vaapi_encoder_deps="VAEncPictureParameterBufferVP8"
 vp8_vaapi_encoder_select="vaapi_encode"
diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
index efe054f5c5..d776bd87ff 100644
--- a/libavcodec/qsvdec.c
+++ b/libavcodec/qsvdec.c
@@ -499,7 +499,6 @@ int ff_qsv_decode_close(QSVContext *q)
 av_fifo_free(q->async_fifo);
 q->async_fifo = NULL;
 
-av_parser_close(q->parser);
 avcodec_free_context(&q->avctx_internal);
 
 if (q->internal_session)
@@ -529,25 +528,12 @@ int ff_qsv_process_data(AVCodecContext *avctx, QSVContext 
*q,
 return AVERROR(ENOMEM);
 
 q->avctx_internal->codec_id = avctx->codec_id;
-
-q->parser = av_parser_init(avctx->codec_id);
-if (!q->parser)
-return AVERROR(ENOMEM);
-
-q->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
 q->orig_pix_fmt   = AV_PIX_FMT_NONE;
 }
 
 if (!pkt->size)
 return qsv_decode(avctx, q, frame, got_frame, pkt);
 
-/* we assume the packets are already split properly and want
- * just the codec parameters here */
-av_parser_parse2(q->parser, q->avctx_internal,
- &dummy_data, &dummy_size,
- pkt->data, pkt->size, pkt->pts, pkt->dts,
- pkt->pos);
-
 /* TODO: flush delayed frames on reinit */
 
 ret = qsv_decode_header(avctx, q, pkt, pix_fmts, &param);
@@ -585,7 +571,7 @@ int ff_qsv_process_data(AVCodecContext *avctx, QSVContext 
*q,
 return qsv_decode(avctx, q, frame, got_frame, pkt);
 
 reinit_fail:
-q->orig_pix_fmt = q->parser->format = avctx->pix_fmt = AV_PIX_FMT_NONE;
+q->orig_pix_fmt = avctx->pix_fmt = AV_PIX_FMT_NONE;
 return ret;
 }
 
diff --git a/libavcodec/qsvdec.h b/libavcodec/qsvdec.h
index 4812fb2a6b..8e64839ca6 100644
--- a/libavcodec/qsvdec.h
+++ b/libavcodec/qsvdec.h
@@ -56,8 +56,6 @@ typedef struct QSVContext {
 int buffered_count;
 int reinit_flag;
 
-// the internal parser and codec context for parsing the data
-AVCodecParserContext *parser;
 AVCodecContext *avctx_internal;
 enum AVPixelFormat orig_pix_fmt;
 uint32_t fourcc;
-- 
2.17.1



[FFmpeg-devel] [PATCH v2 6/6] lavc/qsvdec: Add VP9 decoder support

2019-02-20 Thread Zhong Li
The VP9 decoder is supported on Intel Kaby Lake+ platforms with MSDK version 1.19+.

Signed-off-by: Zhong Li 
---
 Changelog |  1 +
 configure |  1 +
 libavcodec/allcodecs.c|  1 +
 libavcodec/qsv.c  |  5 +
 libavcodec/qsvdec_other.c | 46 ---
 5 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/Changelog b/Changelog
index f289812bfc..141ffd9610 100644
--- a/Changelog
+++ b/Changelog
@@ -20,6 +20,7 @@ version <next>:
 - libaribb24 based ARIB STD-B24 caption support (profiles A and C)
 - Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
 - Intel QSV-accelerated MJPEG decoding
+- Intel QSV-accelerated VP9 decoding
 
 
 version 4.1:
diff --git a/configure b/configure
index de994673a0..84fbe49bcc 100755
--- a/configure
+++ b/configure
@@ -3037,6 +3037,7 @@ vp8_v4l2m2m_decoder_deps="v4l2_m2m vp8_v4l2_m2m"
 vp8_v4l2m2m_encoder_deps="v4l2_m2m vp8_v4l2_m2m"
 vp9_cuvid_decoder_deps="cuvid"
 vp9_mediacodec_decoder_deps="mediacodec"
+vp9_qsv_decoder_select="qsvdec"
 vp9_rkmpp_decoder_deps="rkmpp"
 vp9_vaapi_encoder_deps="VAEncPictureParameterBufferVP9"
 vp9_vaapi_encoder_select="vaapi_encode"
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index 391619c38c..248b8f15b8 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -776,6 +776,7 @@ extern AVCodec ff_vp8_v4l2m2m_encoder;
 extern AVCodec ff_vp8_vaapi_encoder;
 extern AVCodec ff_vp9_cuvid_decoder;
 extern AVCodec ff_vp9_mediacodec_decoder;
+extern AVCodec ff_vp9_qsv_decoder;
 extern AVCodec ff_vp9_vaapi_encoder;
 
 // The iterate API is not usable with ossfuzz due to the excessive size of 
binaries created
diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index 711fd3df1e..7dcfb04316 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -60,6 +60,11 @@ int ff_qsv_codec_id_to_mfx(enum AVCodecID codec_id)
 #endif
 case AV_CODEC_ID_MJPEG:
 return MFX_CODEC_JPEG;
+#if QSV_VERSION_ATLEAST(1, 19)
+case AV_CODEC_ID_VP9:
+return MFX_CODEC_VP9;
+#endif
+
 default:
 break;
 }
diff --git a/libavcodec/qsvdec_other.c b/libavcodec/qsvdec_other.c
index 8c9c1e6b13..50bfc818b0 100644
--- a/libavcodec/qsvdec_other.c
+++ b/libavcodec/qsvdec_other.c
@@ -1,5 +1,5 @@
 /*
- * Intel MediaSDK QSV based MPEG-2, VC-1, VP8 and MJPEG decoders
+ * Intel MediaSDK QSV based MPEG-2, VC-1, VP8, MJPEG and VP9 decoders
  *
  * copyright (c) 2015 Anton Khirnov
  *
@@ -60,8 +60,8 @@ static av_cold int qsv_decode_close(AVCodecContext *avctx)
 {
 QSVOtherContext *s = avctx->priv_data;
 
-#if CONFIG_VP8_QSV_DECODER
-if (avctx->codec_id == AV_CODEC_ID_VP8)
+#if CONFIG_VP8_QSV_DECODER || CONFIG_VP9_QSV_DECODER
+if (avctx->codec_id == AV_CODEC_ID_VP8 || avctx->codec_id == 
AV_CODEC_ID_VP9)
 av_freep(&s->qsv.load_plugins);
 #endif
 
@@ -90,6 +90,17 @@ static av_cold int qsv_decode_init(AVCodecContext *avctx)
 }
 #endif
 
+#if CONFIG_VP9_QSV_DECODER
+if (avctx->codec_id == AV_CODEC_ID_VP9) {
+static const char *uid_vp9dec_hw = "a922394d8d87452f878c51f2fc9b4131";
+
+av_freep(&s->qsv.load_plugins);
+s->qsv.load_plugins = av_strdup(uid_vp9dec_hw);
+if (!s->qsv.load_plugins)
+return AVERROR(ENOMEM);
+}
+#endif
+
 s->packet_fifo = av_fifo_alloc(sizeof(AVPacket));
 if (!s->packet_fifo) {
 ret = AVERROR(ENOMEM);
@@ -281,3 +292,32 @@ AVCodec ff_mjpeg_qsv_decoder = {
 AV_PIX_FMT_NONE },
 };
 #endif
+
+#if CONFIG_VP9_QSV_DECODER
+static const AVClass vp9_qsv_class = {
+.class_name = "vp9_qsv",
+.item_name  = av_default_item_name,
+.option = options,
+.version= LIBAVUTIL_VERSION_INT,
+};
+
+AVCodec ff_vp9_qsv_decoder = {
+.name   = "vp9_qsv",
+.long_name  = NULL_IF_CONFIG_SMALL("VP9 video (Intel Quick Sync Video 
acceleration)"),
+.priv_data_size = sizeof(QSVOtherContext),
+.type   = AVMEDIA_TYPE_VIDEO,
+.id = AV_CODEC_ID_VP9,
+.init   = qsv_decode_init,
+.decode = qsv_decode_frame,
+.flush  = qsv_decode_flush,
+.close  = qsv_decode_close,
+.capabilities   = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_DR1 | AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_HYBRID,
+.priv_class = &vp9_qsv_class,
+.pix_fmts   = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12,
+AV_PIX_FMT_P010,
+AV_PIX_FMT_QSV,
+AV_PIX_FMT_NONE },
+.hw_configs = ff_qsv_hw_configs,
+.wrapper_name   = "qsv",
+};
+#endif
-- 
2.17.1

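The patch above extends ff_qsv_codec_id_to_mfx() with a version-guarded VP9 case. The shape of that mapping can be illustrated with a standalone sketch; the enum and FOURCC values below are placeholders inlined so it compiles without the Media SDK headers, not the real libavcodec/mfx definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for AVCodecID and MFX_CODEC_* values. */
enum sketch_codec_id { SKETCH_AV_CODEC_ID_MJPEG, SKETCH_AV_CODEC_ID_VP9 };
#define SKETCH_MFX_CODEC_JPEG 0x47504A4DU
#define SKETCH_MFX_CODEC_VP9  0x20395056U
/* Assumed: the runtime MSDK API is at least 1.19, so VP9 is reachable. */
#define SKETCH_QSV_VERSION_ATLEAST_1_19 1

/* Mirrors the switch shape of ff_qsv_codec_id_to_mfx() after the patch:
 * VP9 is only compiled in when the API version guard is satisfied. */
static int64_t sketch_codec_id_to_mfx(enum sketch_codec_id id)
{
    switch (id) {
    case SKETCH_AV_CODEC_ID_MJPEG:
        return SKETCH_MFX_CODEC_JPEG;
#if SKETCH_QSV_VERSION_ATLEAST_1_19
    case SKETCH_AV_CODEC_ID_VP9:
        return SKETCH_MFX_CODEC_VP9;
#endif
    default:
        return -1; /* the real code returns AVERROR(ENOSYS) here */
    }
}
```

The real function additionally maps H.264, HEVC, MPEG-2, VC-1 and VP8; only the structure is shown here.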


[FFmpeg-devel] [PATCH v2 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()

2019-02-20 Thread Zhong Li
Using the MSDK parser can improve the qsv decoder pass rate in some cases (e.g.
an sps that declares a wrong level_idc, smaller than it should be).
It is also necessary for adding new qsv decoders such as MJPEG and VP9,
since the current parser can't provide enough information.
Using MFXVideoDECODE_DecodeHeader() was discussed at
https://ffmpeg.org/pipermail/ffmpeg-devel/2015-July/175734.html and merged as
commit 1acb19d, but was overwritten when the libav patches were merged
(commit: 1f26a23) without any explanation.

v2: split the header decoding out of decode_init, and call it for every frame
to detect format/resolution changes. This fixes some regression issues such
as hevc 10bit decoding.

Signed-off-by: Zhong Li 
---
 libavcodec/qsvdec.c | 172 ++--
 libavcodec/qsvdec.h |   2 +
 2 files changed, 90 insertions(+), 84 deletions(-)

diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
index 4a0be811fb..efe054f5c5 100644
--- a/libavcodec/qsvdec.c
+++ b/libavcodec/qsvdec.c
@@ -120,19 +120,18 @@ static inline unsigned int qsv_fifo_size(const AVFifoBuffer* fifo)
 return av_fifo_size(fifo) / qsv_fifo_item_size();
 }
 
-static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
+static int qsv_decode_preinit(AVCodecContext *avctx, QSVContext *q, enum 
AVPixelFormat *pix_fmts, mfxVideoParam *param)
 {
-const AVPixFmtDescriptor *desc;
 mfxSession session = NULL;
 int iopattern = 0;
-mfxVideoParam param = { 0 };
-int frame_width  = avctx->coded_width;
-int frame_height = avctx->coded_height;
 int ret;
 
-desc = av_pix_fmt_desc_get(avctx->sw_pix_fmt);
-if (!desc)
-return AVERROR_BUG;
+ret = ff_get_format(avctx, pix_fmts);
+if (ret < 0) {
+q->orig_pix_fmt = avctx->pix_fmt = AV_PIX_FMT_NONE;
+return ret;
+} else
+q->orig_pix_fmt = pix_fmts[1];
 
 if (!q->async_fifo) {
 q->async_fifo = av_fifo_alloc(q->async_depth * qsv_fifo_item_size());
@@ -170,48 +169,72 @@ static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
 return ret;
 }
 
-ret = ff_qsv_codec_id_to_mfx(avctx->codec_id);
-if (ret < 0)
-return ret;
+param->IOPattern   = q->iopattern;
+param->AsyncDepth  = q->async_depth;
+param->ExtParam= q->ext_buffers;
+param->NumExtParam = q->nb_ext_buffers;
 
-param.mfx.CodecId  = ret;
-param.mfx.CodecProfile = ff_qsv_profile_to_mfx(avctx->codec_id, avctx->profile);
-param.mfx.CodecLevel   = avctx->level == FF_LEVEL_UNKNOWN ? MFX_LEVEL_UNKNOWN : avctx->level;
-
-param.mfx.FrameInfo.BitDepthLuma   = desc->comp[0].depth;
-param.mfx.FrameInfo.BitDepthChroma = desc->comp[0].depth;
-param.mfx.FrameInfo.Shift  = desc->comp[0].depth > 8;
-param.mfx.FrameInfo.FourCC = q->fourcc;
-param.mfx.FrameInfo.Width  = frame_width;
-param.mfx.FrameInfo.Height = frame_height;
-param.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420;
-
-switch (avctx->field_order) {
-case AV_FIELD_PROGRESSIVE:
-param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
-break;
-case AV_FIELD_TT:
-param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_TFF;
-break;
-case AV_FIELD_BB:
-param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_BFF;
-break;
-default:
-param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_UNKNOWN;
-break;
-}
+return 0;
+ }
+
+static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q, mfxVideoParam *param)
+{
+int ret;
 
-param.IOPattern   = q->iopattern;
-param.AsyncDepth  = q->async_depth;
-param.ExtParam= q->ext_buffers;
-param.NumExtParam = q->nb_ext_buffers;
+avctx->width= param->mfx.FrameInfo.CropW;
+avctx->height   = param->mfx.FrameInfo.CropH;
+avctx->coded_width  = param->mfx.FrameInfo.Width;
+avctx->coded_height = param->mfx.FrameInfo.Height;
+avctx->level= param->mfx.CodecLevel;
+avctx->profile  = param->mfx.CodecProfile;
+avctx->field_order  = ff_qsv_map_picstruct(param->mfx.FrameInfo.PicStruct);
+avctx->pix_fmt  = ff_qsv_map_fourcc(param->mfx.FrameInfo.FourCC);
 
-ret = MFXVideoDECODE_Init(q->session, &param);
+ret = MFXVideoDECODE_Init(q->session, param);
 if (ret < 0)
 return ff_qsv_print_error(avctx, ret,
   "Error initializing the MFX video decoder");
 
-q->frame_info = param.mfx.FrameInfo;
+q->frame_info = param->mfx.FrameInfo;
+
+return 0;
+}
+
+static int qsv_decode_header(AVCodecContext *avctx, QSVContext *q, AVPacket *avpkt, enum AVPixelFormat *pix_fmts, mfxVideoParam *param)
+{
+int ret;
+
+mfxBitstream bs = { { { 0 } } };
+
+if (avpkt->size) {
+bs.Data   = avpkt->data;
+bs.DataLength = avpkt->size;
+bs.MaxLength  = bs.DataLength;
+bs.TimeStamp  = avpkt->pts;
+if (avctx->field_order 
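[The patch email is truncated here in the archive.] The hunk above ends inside qsv_decode_header(), which wraps the incoming AVPacket in an mfxBitstream before handing it to MFXVideoDECODE_DecodeHeader(). A minimal sketch of that wrapping, with stub struct definitions standing in for the real AVPacket/mfxBitstream types (the field names follow the patch, but these are not the real libavcodec/mfx definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for AVPacket and mfxBitstream. */
typedef struct { uint8_t *data; int size; int64_t pts; } SketchPacket;
typedef struct {
    uint8_t *Data;
    uint32_t DataLength;
    uint32_t MaxLength;
    int64_t  TimeStamp;
} SketchBitstream;

/* Mirrors how the patch populates the bitstream for DecodeHeader():
 * the bitstream borrows the packet's buffer, it does not copy it. */
static int sketch_fill_bitstream(SketchBitstream *bs, const SketchPacket *pkt)
{
    if (!pkt->size)
        return -1; /* nothing to parse yet */
    bs->Data       = pkt->data;
    bs->DataLength = (uint32_t)pkt->size;
    bs->MaxLength  = bs->DataLength;
    bs->TimeStamp  = pkt->pts;
    return 0;
}
```

Because the buffer is borrowed, the packet must stay alive until the header probe returns; the real code calls DecodeHeader() on every frame to catch format/resolution changes.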

[FFmpeg-devel] [PATCH v2 2/6] lavc/qsv: make function qsv_map_fourcc() callable externally

2019-02-20 Thread Zhong Li
Signed-off-by: Zhong Li 
---
 libavcodec/qsv.c  | 4 ++--
 libavcodec/qsv_internal.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index 224bc00ce4..711fd3df1e 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -158,7 +158,7 @@ int ff_qsv_print_warning(void *log_ctx, mfxStatus err,
 return ret;
 }
 
-static enum AVPixelFormat qsv_map_fourcc(uint32_t fourcc)
+enum AVPixelFormat ff_qsv_map_fourcc(uint32_t fourcc)
 {
 switch (fourcc) {
 case MFX_FOURCC_NV12: return AV_PIX_FMT_NV12;
@@ -469,7 +469,7 @@ static mfxStatus qsv_frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
 frames_hwctx = frames_ctx->hwctx;
 
 frames_ctx->format= AV_PIX_FMT_QSV;
-frames_ctx->sw_format = qsv_map_fourcc(i->FourCC);
+frames_ctx->sw_format = ff_qsv_map_fourcc(i->FourCC);
 frames_ctx->width = i->Width;
 frames_ctx->height= i->Height;
 frames_ctx->initial_pool_size = req->NumFrameSuggested;
diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
index 51c23d5c7b..c432ac8a3f 100644
--- a/libavcodec/qsv_internal.h
+++ b/libavcodec/qsv_internal.h
@@ -91,6 +91,8 @@ int ff_qsv_print_warning(void *log_ctx, mfxStatus err,
 int ff_qsv_codec_id_to_mfx(enum AVCodecID codec_id);
 int ff_qsv_profile_to_mfx(enum AVCodecID codec_id, int profile);
 
+enum AVPixelFormat ff_qsv_map_fourcc(uint32_t fourcc);
+
 int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t *fourcc);
 enum AVPictureType ff_qsv_map_pictype(int mfx_pic_type);
 
-- 
2.17.1

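The renamed ff_qsv_map_fourcc() translates MFX surface FOURCCs to AVPixelFormat values. A standalone sketch of that mapping, with the little-endian FOURCC packing written out (the pixel-format enum values here are arbitrary placeholders, not the real AV_PIX_FMT_* constants):

```c
#include <assert.h>
#include <stdint.h>

/* Little-endian FOURCC packing, as MFX_MAKEFOURCC defines it. */
#define SKETCH_FOURCC(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Placeholder pixel-format ids. */
enum sketch_pix_fmt {
    SKETCH_PIX_FMT_NONE = -1,
    SKETCH_PIX_FMT_NV12,
    SKETCH_PIX_FMT_P010
};

/* Same switch shape as ff_qsv_map_fourcc() after the rename. */
static enum sketch_pix_fmt sketch_map_fourcc(uint32_t fourcc)
{
    switch (fourcc) {
    case SKETCH_FOURCC('N','V','1','2'): return SKETCH_PIX_FMT_NV12;
    case SKETCH_FOURCC('P','0','1','0'): return SKETCH_PIX_FMT_P010;
    default:                             return SKETCH_PIX_FMT_NONE;
    }
}
```

Exporting the function lets the new DecodeHeader()-based path set avctx->pix_fmt directly from the FourCC that the MSDK parser reports.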


[FFmpeg-devel] [PATCH v2 0/6] Refact qsv decoder parser and add new decoders

2019-02-20 Thread Zhong Li
Replace current parser with MFXVideoDECODE_DecodeHeader(),
and add MJPEG/VP9 decoders.

Zhong Li (6):
  lavc/qsv: add function ff_qsv_map_picstruct()
  lavc/qsv: make function qsv_map_fourcc() callable externally
  lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()
  lavc/qsvdec: remove original parser code since it is no longer needed
  lavc/qsvdec: Add mjpeg decoder support
  lavc/qsvdec: Add VP9 decoder support

 Changelog |   2 +
 configure |  12 ++-
 libavcodec/Makefile   |   1 +
 libavcodec/allcodecs.c|   2 +
 libavcodec/qsv.c  |  27 +-
 libavcodec/qsv_internal.h |   4 +
 libavcodec/qsvdec.c   | 188 ++
 libavcodec/qsvdec.h   |   4 +-
 libavcodec/qsvdec_other.c |  72 ++-
 9 files changed, 201 insertions(+), 111 deletions(-)

-- 
2.17.1



[FFmpeg-devel] [PATCH v2 1/6] lavc/qsv: add function ff_qsv_map_picstruct()

2019-02-20 Thread Zhong Li
Signed-off-by: Zhong Li 
---
 libavcodec/qsv.c  | 18 ++
 libavcodec/qsv_internal.h |  2 ++
 2 files changed, 20 insertions(+)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index bb0d79588c..224bc00ce4 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -196,6 +196,24 @@ int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame)
 return AVERROR_BUG;
 }
 
+enum AVFieldOrder ff_qsv_map_picstruct(int mfx_pic_struct)
+{
+enum AVFieldOrder field = AV_FIELD_UNKNOWN;
+switch (mfx_pic_struct & 0xF) {
+case MFX_PICSTRUCT_PROGRESSIVE:
+field = AV_FIELD_PROGRESSIVE;
+break;
+case MFX_PICSTRUCT_FIELD_TFF:
+field = AV_FIELD_TT;
+break;
+case MFX_PICSTRUCT_FIELD_BFF:
+field = AV_FIELD_BB;
+break;
+}
+
+return field;
+}
+
 enum AVPictureType ff_qsv_map_pictype(int mfx_pic_type)
 {
 enum AVPictureType type;
diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
index 394c558883..51c23d5c7b 100644
--- a/libavcodec/qsv_internal.h
+++ b/libavcodec/qsv_internal.h
@@ -94,6 +94,8 @@ int ff_qsv_profile_to_mfx(enum AVCodecID codec_id, int profile);
 int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t *fourcc);
 enum AVPictureType ff_qsv_map_pictype(int mfx_pic_type);
 
+enum AVFieldOrder ff_qsv_map_picstruct(int mfx_pic_struct);
+
 int ff_qsv_init_internal_session(AVCodecContext *avctx, mfxSession *session,
  const char *load_plugins);
 
-- 
2.17.1

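A notable detail of ff_qsv_map_picstruct() above is the `& 0xF` mask: MFX picture-structure flags such as frame doubling live in the higher bits, so only the low nibble carries the base structure. A standalone sketch (the constant values are placeholders for the real MFX_PICSTRUCT_* / AV_FIELD_* definitions, though the real base values are also 1, 2 and 4):

```c
#include <assert.h>

/* Placeholder MFX picture-structure flags. */
#define SK_PICSTRUCT_PROGRESSIVE    0x01
#define SK_PICSTRUCT_FIELD_TFF      0x02
#define SK_PICSTRUCT_FIELD_BFF      0x04
#define SK_PICSTRUCT_FRAME_DOUBLING 0x10 /* example high-nibble flag */

enum sk_field_order {
    SK_FIELD_UNKNOWN,
    SK_FIELD_PROGRESSIVE,
    SK_FIELD_TT,
    SK_FIELD_BB
};

/* As in ff_qsv_map_picstruct(): mask off the decoration flags first,
 * then map the base structure to an AVFieldOrder-like value. */
static enum sk_field_order sk_map_picstruct(int mfx_pic_struct)
{
    switch (mfx_pic_struct & 0xF) {
    case SK_PICSTRUCT_PROGRESSIVE: return SK_FIELD_PROGRESSIVE;
    case SK_PICSTRUCT_FIELD_TFF:   return SK_FIELD_TT;
    case SK_PICSTRUCT_FIELD_BFF:   return SK_FIELD_BB;
    default:                       return SK_FIELD_UNKNOWN;
    }
}
```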


[FFmpeg-devel] [PATCH v2 1/1] avcodec/vaapi_encode: add frame-skip func

2019-02-20 Thread Jing SUN
This implements app-controlled frame skipping
in vaapi encoding. To have a frame skipped,
allocate frame side data of the newly added
AV_FRAME_DATA_SKIP_FRAME type on it and set
its value to 1.

Signed-off-by: Jing SUN 
---
 libavcodec/vaapi_encode.c | 112 --
 libavcodec/vaapi_encode.h |   5 +++
 libavutil/frame.c |   1 +
 libavutil/frame.h |   5 +++
 4 files changed, 119 insertions(+), 4 deletions(-)
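The skip request travels as a single byte of frame side data, which the encoder reads with AV_RL8() in vaapi_encode_check_if_skip() below. A standalone sketch of that read, with a plain byte buffer standing in for the AVFrameSideData payload:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the AV_RL8() read of the AV_FRAME_DATA_SKIP_FRAME payload:
 * a missing or zero byte means "encode normally", non-zero means "skip".
 * (AV_RL8 on a one-byte little-endian value is just the byte itself.) */
static int sketch_read_skip_flag(const uint8_t *side_data, int size)
{
    if (!side_data || size < 1)
        return 0; /* no side data attached -> do not skip */
    return side_data[0] != 0;
}
```

Note the patch also refuses to skip IDR/I pictures and pictures that later frames reference, so setting the flag is a request, not a guarantee.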

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index b4e9fad..debcfa6 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -23,6 +23,7 @@
 #include "libavutil/common.h"
 #include "libavutil/log.h"
 #include "libavutil/pixdesc.h"
+#include "libavutil/intreadwrite.h"
 
 #include "vaapi_encode.h"
 #include "avcodec.h"
@@ -103,6 +104,41 @@ static int vaapi_encode_make_param_buffer(AVCodecContext *avctx,
 return 0;
 }
 
+static int vaapi_encode_check_if_skip(AVCodecContext *avctx,
+  VAAPIEncodePicture *pic)
+{
+AVFrameSideData *fside = NULL;
+VAAPIEncodeContext *ctx = avctx->priv_data;
+VAAPIEncodePicture *cur = NULL;
+int i = 0;
+if (!pic || !pic->input_image)
+return AVERROR(EINVAL);
+fside = av_frame_get_side_data(pic->input_image, AV_FRAME_DATA_SKIP_FRAME);
+if (fside)
+pic->skipped_flag = AV_RL8(fside->data);
+else
+pic->skipped_flag = 0;
+if (0 == pic->skipped_flag)
+return 0;
+if ((pic->type == PICTURE_TYPE_IDR) || (pic->type == PICTURE_TYPE_I)) {
+av_log(avctx, AV_LOG_INFO, "Can't skip IDR/I pic %"PRId64"/%"PRId64".\n",
+   pic->display_order, pic->encode_order);
+pic->skipped_flag = 0;
+return 0;
+}
+for (cur = ctx->pic_start; cur; cur = cur->next) {
+for (i=0; i < cur->nb_refs; ++i) {
+if (cur->refs[i] == pic) {
+av_log(avctx, AV_LOG_INFO, "Can't skip ref pic %"PRId64"/%"PRId64".\n",
+   pic->display_order, pic->encode_order);
+pic->skipped_flag = 0;
+return 0;
+}
+}
+}
+return 0;
+}
+
 static int vaapi_encode_wait(AVCodecContext *avctx,
  VAAPIEncodePicture *pic)
 {
@@ -412,6 +448,50 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
 }
 }
 
+err = vaapi_encode_check_if_skip(avctx, pic);
+if (err != 0)
+av_log(avctx, AV_LOG_ERROR, "Failed to check whether to skip.\n");
+
+#if VA_CHECK_VERSION(0,38,1)
+if (pic->skipped_flag) {
+av_log(avctx, AV_LOG_INFO, "Skip pic %"PRId64"/%"PRId64" as requested.\n",
+   pic->display_order, pic->encode_order);
+++ctx->skipped_pic_count;
+pic->encode_issued = 1;
+return 0;
+} else if (ctx->skipped_pic_count > 0) {
+VAEncMiscParameterBuffer *misc_param = NULL;
+VAEncMiscParameterSkipFrame *skip_param = NULL;
+
+misc_param = av_malloc(sizeof(VAEncMiscParameterBuffer) + sizeof(VAEncMiscParameterSkipFrame));
+misc_param->type = (VAEncMiscParameterType)VAEncMiscParameterTypeSkipFrame;
+skip_param = (VAEncMiscParameterSkipFrame *)misc_param->data;
+
+skip_param->skip_frame_flag = 1;
+skip_param->num_skip_frames = ctx->skipped_pic_count;
+skip_param->size_skip_frames = 0;
+
+err = vaapi_encode_make_param_buffer(avctx, pic,
+  VAEncMiscParameterBufferType, (void *)misc_param,
+  (sizeof(VAEncMiscParameterBuffer) +
+  sizeof(VAEncMiscParameterSkipFrame)));
+
+av_free(misc_param);
+
+if (err < 0)
+goto fail;
+
+ctx->skipped_pic_count = 0;
+}
+#else
+if (pic->skipped_flag) {
+av_log(avctx, AV_LOG_INFO, "Skip-frame isn't supported and pic %"PRId64"/%"PRId64" isn't skipped.\n",
+   pic->display_order, pic->encode_order);
+pic->skipped_flag = 0;
+ctx->skipped_pic_count = 0;
+}
+#endif
+
 vas = vaBeginPicture(ctx->hwctx->display, ctx->va_context,
  pic->input_surface);
 if (vas != VA_STATUS_SUCCESS) {
@@ -491,9 +571,23 @@ static int vaapi_encode_output(AVCodecContext *avctx,
 VAStatus vas;
 int err;
 
-err = vaapi_encode_wait(avctx, pic);
-if (err < 0)
-return err;
+if (!pic->skipped_flag) {
+err = vaapi_encode_wait(avctx, pic);
+if (err < 0)
+return err;
+} else {
+av_frame_free(&pic->input_image);
+pic->encode_complete = 1;
+err = av_new_packet(pkt, 0);
+if (err < 0)
+goto fail;
+pkt->pts = pic->pts;
+av_buffer_unref(&pic->output_buffer_ref);
+pic->output_buffer = VA_INVALID_ID;
+av_log(avctx, AV_LOG_DEBUG, "Output 0 bytes for pic %"PRId64"/%"PRId64".\n",
+   pic->display_order, pic->encode_order);
+return 0;
+}
 
 buf_list = NULL;